Like AMD in H1 2000. :)
The 670 would almost be as fast as the current Extreme Edition
That isn't saying much.
In Q2 Extreme Edition will be Smithfield-based
And that part will be much slower on most applications.
Any thoughts on why the 670 is delayed to next quarter? (That's the 3.8GHz 2MB P4.)
From the looks of that one Xbit comparison, power usage may actually be lower clock-for-clock with the 6xx parts (vs. 5xx). That's a little surprising, so I'm not sure it is completely correct.
But if it IS correct, then what is the problem with producing a 670 part, if not power?
Or is it that 570 parts are actually quite rare as well, and it simply takes a while to fab enough 3.8GHz parts to launch?
I suppose their measurements were done in 32-bit mode, so when running 64-bit (AMD64/EM64T) code the relative power usage could change, or something.
Here's the best bit:
We see the same situation as the one we have just discussed for PovRay 3.6. The 64-bit version of this benchmark works slower on the Pentium 4 processor with EM64T support than the 32-bit version. In the case of the Athlon 64 processor, the situation is the opposite: the use of 64-bit extensions improves computational performance by a good 29%.
This behavior of the Pentium 4 processor with EM64T technology is understandable. Athlon 64 processors were designed from the start as 64-bit solutions, which means the Athlon 64 simply doesn't use some of its potential when running 32-bit code. As a result, the Athlon 64 processes 32-bit and 64-bit code almost equally fast, and since additional and wider registers become available in 64-bit mode, performance can improve significantly in some cases.
It is a different story with Pentium 4 processors supporting EM64T technology. When Intel engineers developed the NetBurst architecture, they did not plan for extending it to 64-bit modes, so they had to revise NetBurst somewhat for the Prescott core when they decided to introduce EM64T support. (This, by the way, is exactly why Prescott-based Pentium 4 processors sometimes turn out slower than their Northwood-based counterparts in 32-bit applications.) And that is not all: some instructions, such as integer multiplication and shifts, run much slower in 64-bit mode than their 32-bit analogs because of NetBurst architectural peculiarities. Therefore, porting programs heavy in integer arithmetic to EM64T may sometimes slow them down, even though more general-purpose registers are available and the registers are wider.
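The 64-bit multiply claim is easy to sanity-check. A minimal sketch, assuming a Linux box with both 32-bit and 64-bit toolchains (the loop, the constant, and the iteration count are arbitrary picks of mine; on Linux, unsigned long is 32 bits under -m32 and 64 bits under -m64):

/* mulbench.c -- tiny illustrative sketch, not a rigorous benchmark.
 * Build the same source twice and compare times on one machine:
 *   gcc -O2 -m32 mulbench.c -o mul32
 *   gcc -O2 -m64 mulbench.c -o mul64
 * The hot loop then uses 32-bit vs. 64-bit multiplies. If the article is
 * right, Prescott should lose noticeably in the -m64 build, while a K8
 * should stay roughly even or improve. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    volatile unsigned long acc = 1;   /* volatile keeps the loop from being folded away */
    unsigned long i, iters = 200000000UL;
    clock_t t0, t1;

    t0 = clock();
    for (i = 1; i <= iters; i++)
        acc = acc * 2654435761UL + i; /* one multiply + add per iteration */
    t1 = clock();

    printf("acc=%lu  time=%.2f s\n", acc, (double)(t1 - t0) / CLOCKS_PER_SEC);
    return 0;
}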
LOL!!!
12V line power comparison.
A64 Winchester and Newcastle
Prescott 5xx and 6xx
Notice any differences? LOL.
Actually, the EKO 1.4 pathscale stuff seems to be giving them their best FP score at the moment.
Read again, more slowly.
We're talking about (or we were) comparing PROCESSOR performance.
You may legitimately say that both optimized and unoptimized code performance are important.
The point is that Spec "base" entries are not always showing you unoptimized code performance, especially with the Intel compiler.
It isn't that the P4 runs great on unoptimized "base" code output. (In fact the Opteron is much better, relatively, at unoptimized code.)
It is that the Intel compiler, with "base" flags, actually generates highly P4-optimized code. That's why there is no difference (or hardly any) in the "peak" and "base" scores for the Intel parts in many Spec entries.
Because of these problems, only PEAK spec scores are reliable.
If you want to gauge performance on unoptimized code, you'll need to use a different compiler, like gcc, where you know what is going on.
BASE Spec scores do not compare processor performance on unoptimized code, even if that is what they were once intended to do.
The Intel compiler guys (and perhaps others) put an end to that.
Base is BOGUS for comparing processor performance. The only thing worth looking at is Peak, and allowing each processor to use the best compiler and settings for it.
And Intel "cheats" with their compiler, and so it optimizes (for P4), even when entered as a "base" score.
For example:
http://www.spec.org/osg/cpu2000/results/res2004q4/cpu2000-20041115-03596.html
So you see, base is a game. It's quite easy to change a compiler to optimize at a higher level by default, and even to recognize Spec source and apply particular optimizations to each module automatically. Suddenly the same part's "base performance" shoots up.
The Intel compiler, in particular, produces "base" results that are almost always the same as "peak", and most Intel-based spec entries exhibit this effect.
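One way to see how much of a "base" number is compiler rather than processor: time an unchanged toy kernel with gcc at -O0 and -O2, then with the vendor compiler using exactly the flag set a published "base" entry lists, and watch how far the result moves on the same chip. The kernel, array size, and repeat count below are arbitrary picks of mine, just a sketch of the method:

/* kernelbench.c -- sketch for comparing compilers on identical source.
 *   gcc -O0 kernelbench.c -o k_O0
 *   gcc -O2 kernelbench.c -o k_O2
 *   (then the vendor compiler with its published "base" flags)
 * The source never changes; only the compiler and flags do. */
#include <stdio.h>
#include <time.h>

#define N    1000000
#define REPS 200

static double a[N], b[N];

int main(void)
{
    long i, rep;
    double sum = 0.0;
    clock_t t0, t1;

    for (i = 0; i < N; i++) {               /* simple runtime init */
        a[i] = 0.5 + 0.000001 * i;
        b[i] = 1.5 - 0.000001 * i;
    }

    t0 = clock();
    for (rep = 0; rep < REPS; rep++)        /* repeated dot product */
        for (i = 0; i < N; i++)
            sum += a[i] * b[i];
    t1 = clock();

    printf("sum=%g  time=%.2f s\n", sum, (double)(t1 - t0) / CLOCKS_PER_SEC);
    return 0;
}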
No, it merely proves that you were wrong.
Nope, the 2.6GHz Opteron gets 2036 in SpecFP, and the FX should match or better that with the same compiler (EKO 1.4).
In SpecInt the FX-55 is at 1854.
So the K8 leads in both.
EDIT: I see that the 3.73EE value is claimed to be "base". We'll need to see if base=peak for that score. "Base" scores are bogus, because you never know what the compiler is doing. (For example the Intel compiler usually optimizes anyway, resulting in almost no advantage for peak over base.)
AND I see they used the Intel C++ compiler (!!!) for both parts. No wonder they got good "base" scores for the P4 part.
Two duds. Keith's '3 or more' speedgrades of performance increase at the same clock are nowhere to be seen.
meaning the demonstrators didn't display the contents.
Gee, then how were photos taken of the contents in the AMD case?
Magic?
http://www.amdzone.com/modules.php?op=modload&name=Sections&file=index&req=viewarticle&a...
Let us know when you sober up.
The 16/128 way superdome box is an Itanic design.
The core needs to become more complex to accommodate the missing features like AMD64, better FP, etc. That'll probably impact scaling to some extent.
That's easily equivalent to AMD's closed-box demonstration in August.
Sorry to bring up reality, but it was Intel with a closed-box demo of *something*, and AMD with an open-box (DL 585 in fact) demo last August.
the DIY buyer buys from the channel, which is going to negotiate OEM-like prices from AMD
No, I don't think so. That was part of my thinking. I believe that large OEMs (HP, say) get significantly better pricing than the channel distributors get. And I suspect PIB pricing to the channel results in more $ to AMD, even factoring in the warranty issue.
The real question is about the size of the effect.
Radeon Xpress 200M is an integrated chipset. You linked to a laptop using a discrete Radeon X700 part. (?)
No, the very recent Yonah demo is something like what AMD did last August. So 9 months from now is the very end of 2005. Throw in a little extra because of the process transition, and Q106 sounds just about right. Intel should know, after all, and they made the slides.
Ah. Well, in any event, the DL 380 is limited to 12GB, so you'll have to live with that.
No, it isn't due to extra memory (they have the same amount).
It's that they gave the Xeon system more disk, a different disk controller, and an extra array controller.
http://www.tpc.org/results/individual_results/HP/HP_DL385G1_2.6GHz_16GB_2P_Win2003_ES.pdf
http://www.tpc.org/results/individual_results/hp/HP_ML370_3.6GHz_2MB_16GB_es.pdf
Exactly. Opteron forces Intel to make Xeon good enough to doom Itanium.
And AMD forced Intel to adopt AMD64, which means that by the end of this year, AMD64 systems will outnumber IPF systems by a factor of a few hundred.
What do you want? AMD outperformed Intel over the few weeks before this one.
6xx series only 2-5% faster than 5xx equivalent.
3.73 EE loses to 570 (3.8GHz 1MB) most of the time.
http://www.theinquirer.net/?article=21329
Keith, you told us the 640 would be 2 or more speedgrades better than the 550. (i.e. 3 or more better than the 540).
http://www.siliconinvestor.com/readmsg.aspx?msgid=20578940
What happened?
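Rough arithmetic, assuming the usual 200MHz step between 5xx model numbers (540=3.2GHz, 550=3.4, 560=3.6, 570=3.8): "3 or more speedgrades at the same clock" would require the extra 1MB of cache to be worth nearly 19% per clock, against the 2-5% actually seen. A back-of-envelope sketch:

/* speedgrades.c -- back-of-envelope only; the 200MHz-per-model-number
 * step is my assumption for the illustration. */
#include <stdio.h>

int main(void)
{
    double ghz_540 = 3.2, ghz_570 = 3.8;  /* three speedgrades apart */

    /* "3 or more speedgrades at the same clock" implies roughly this much
     * same-clock gain from the 2MB cache alone: */
    double needed = (ghz_570 / ghz_540 - 1.0) * 100.0;   /* ~18.8% */

    printf("Implied same-clock gain needed: ~%.0f%%\n", needed);
    printf("Observed 6xx over 5xx gain    : 2-5%%\n");
    return 0;
}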
Comparing Q1 to Q4, I expect the % of CPG sales going to DIYers to rise (it was depressed in Q4) because of the nForce4 timing. That should be a positive factor when it comes to ASPs, assuming that big OEMs pay less than channel distributors for a given part.
And did you recognize the significance of AMD having a dual core server product running multiple threads last August? Or is your wonder reserved for Intel demos?
BTW, it was an Intel guy who just said that Yonah required further work, despite that sample demo.
I didn't mean to touch your sensitive spot.
You said a Horus system needs a serverworks-like component. That's what I asked "why?" about.
Except when it doesn't.
Yep. I think the 8-socket Sun Galaxy, due this summer (June?), will be the first place we see it.
Where did you get the idea that 1GHz HT was cHT only? 939 A64s have had 1GHz non-coherent HT since launch. The 2200 nVidia chipset designed for opteron workstations supports 1GHz HT. AMD specifies the HT for E4 parts as 1000MHz, not "800MHz/1000MHz".
Demonstrating an early sample that needs further work? Amazing!
Who would've guessed you'd be so impressed with it?
a Horus system is going to need something like whatever Serverworks is doing...
Why?
Unless Serverworks has really branched out, they aren't working on anything that competes with Horus.
Look here:
http://www.theinquirer.net/?article=15465
Now AMD Zone has filed a little more information about what's going on. The site claims that we'll see eight way Opteron chipsets, and in the future Sun may collaborate with Serverworks and make chipsets for 16- and 32-way systems.
Sun's strategy is erratic and inconsistent. They want to use Opteron to stick it to Intel, but they want UltraSparc to survive, too, with massively multithreaded designs.
Intel's strategy is erratic and inconsistent. They want to use Xeon to keep up with Opteron, but they want Itanium to survive, too, with its massively expensive designs and non-standard ISA. Their divergent approach won't work very well. In another three years, I see Itanium being EOLed.
:)
I think you are going to be surprised. Intel converting all P4s to 64-bit (including Celeron) in early Q205 is going to have quite an effect. By the time Yonah launches in Q106, 32-bit-only systems will likely be seen as dated.
It's difficult to get the HT link to bottleneck a 1P system (you need massive I/O requirements), but it is quite easy to imagine bottlenecking a 2P system with HT.
And even easier (but you might say "not fair") to imagine it if you have all the memory hanging off one of the processors, as some mobos do.
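To put rough numbers on that (a 16-bit, 1GHz HT link and dual-channel DDR400 are my assumed configuration): the link tops out around 4GB/s per direction, local memory at about 6.4GB/s, so a CPU whose DIMMs all sit on the other socket is link-limited before it is memory-limited, and that link also carries I/O and coherence traffic. A trivial back-of-envelope sketch:

/* ht_vs_mem.c -- back-of-envelope only; link width (16 bits) and
 * dual-channel DDR400 are assumptions, not measured values. */
#include <stdio.h>

int main(void)
{
    /* 1GHz HyperTransport, double data rate, 16 bits wide, per direction */
    double ht_gbs  = 1000e6 * 2 * (16 / 8.0) / 1e9;   /* = 4.0 GB/s */

    /* Dual-channel DDR400: 400 MT/s * 8 bytes per channel * 2 channels */
    double mem_gbs = 400e6 * 8 * 2 / 1e9;             /* = 6.4 GB/s */

    printf("HT link, each direction  : %.1f GB/s\n", ht_gbs);
    printf("Local dual-channel DDR400: %.1f GB/s\n", mem_gbs);
    return 0;
}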
I'd guess probably not, but I really don't know enough details about what Serverworks has cooked up for them.
Yonah does support SSE3, but Yonah's problem is the lack of EM64T.
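FWIW, both of those are easy to check on any given sample with CPUID; a minimal sketch using GCC's <cpuid.h> helper (SSE3 is leaf 1, ECX bit 0; 64-bit long mode, i.e. AMD64/EM64T, is leaf 80000001h, EDX bit 29):

/* cpufeat.c -- minimal feature check via CPUID (GCC/Clang <cpuid.h>). */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        printf("SSE3              : %s\n", (ecx & 1u) ? "yes" : "no");

    if (__get_cpuid(0x80000001u, &eax, &ebx, &ecx, &edx))
        printf("64-bit long mode  : %s\n", (edx & (1u << 29)) ? "yes" : "no");

    return 0;
}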
Sun is working with Serverworks, and that chipset is almost certainly the one powering their to-be-launched 8-socket Galaxy server.