Keith,
Conroe is expected to trail K8 in floating point performance clock-for-clock, but lead it in integer performance (which I consider more relevant for desktop apps and many if not most commercial server apps).
Agreed on integer being more important than floating point. Dothan is very good here, despite the higher latency memory.
In the meantime, on AMD side, with the core staying the same, the expected improvements will be linear, with clock speed, possibly some changes with caches (a long shot) and memory controller.
Unless Intel screws up somehow, Conroe may be the first Intel desktop CPU worth buying since the Coppermine PIII...
Joe
alan,
Probably some confusion...
I think we were talking about 2 different subjects at the same time. On floating point, I agree that compiler tricks will bring out the best in Conroe. I was kind of doubting that compiler tricks can do much to take advantage of the 4th integer pipeline. The CPU is on its own there, IMO.
Joe
alan,
I think the issue is floating point,
And in particular linpack. I think the fused macro-ops will allow four floating point operations per cycle, bringing floating point performance above the P4... when code is compiled to take advantage of the "virtual FMAC".
I thought these were 4 integer pipes...
Yeah, on floating point, you can have any performance you want. It is just a question of how much resources you throw at it. But I doubt Intel is going to make Conroe unbalanced in favor of floating point.
On Integer, it is a lot more tricky to extract parallelism, which is why additional pipes get diminishing returns. But Intel may have found some new tricks...
Joe
mas,
TPC-C HP ProLiant DL585: 236,054 tpmC at US $2.02 per tpmC
I don't expect Dell to submit any results with their 4 donkey buggy...
Joe
avatar,
But we were told (back when P4 was released) that new apps would be compiled with P4 optimized compilers. Even existing apps would be recompiled. wbmw told us.
The funniest one was Elmer. When presented with evidence that Netburst cores came in behind various K7 core generations, except in some benchmarks hand-picked by Intel (or produced by Intel-influenced suites), Elmer introduced this distinction of "New apps" vs. "Old apps". "New apps" were the ones where Netburst could win.
And now, in 2005, software is still being sold using just generic code generation...
Joe
Keith,
Supply of high-density NOR flash said to be tight
Gee, it doesn't get any better than this for Spansion. If there are issues in testing and packaging, Spansion should just allocate these resources to high density end of the market, and sell fewer chips at higher prices...
Maybe this Q will see another leap by Spansion in direction of profitability. I wonder if it would not be better to postpone the IPO until mid January...
Joe
Alan,
I am fairly confident both the X2 and the Conroe will be clocking somewhere between 2.8 GHz and 3.6 GHz
My expectation is 2.8 to 3.2 GHz, with AMD ahead in raw clock speed.
In terms of relative performance, I'll predict that on existing applications it will be very similar per clock, but on newly compiled applications (AKA spec) Conroe will be ahead. I think the "fourth pipe" and the new fused macro-ops will require new code to really shine.
It is possible Conroe will be ahead. There may be ways to get further advantages by compiler tricks, but I suspect that where it is possible, the advantage will be there in the existing apps that were not re-compiled. I am assuming that Intel is enhancing the algorithms for scheduling of instructions, and there will probably be more instructions in-flight.
My expectation is that, unlike the odd-ball Netburst processors, which were very picky and unfriendly to old code, Conroe (like K7 and K8) will run old code just fine. Not just fine, great.
Joe
morrowinder,
Civilization IV is basically unplayable on huge maps with 1 GB. Battle for the Middle Earth is on the edge. For Battlefield 2 (which I have not played), you need 2 GB to be a first class citizen. I read these comments on Newegg in reviews of 1 GB memory sticks. People were buying them (x 2) to be able to play Battlefield better.
Joe
Tenchu,
Or what must-have features will Vista have that will only be supported by a 64-bit version?
On the consumer end, I think games will be the first to make 32 bit users second class citizens.
I know of 3 games that need > 1 GB to be a first class citizen. That number will go up, and from the other side, graphics cards with 512MB of RAM will eat up the address space...
Joe
pfosse,
You should have gotten a 64 bit laptop if you want to keep it that long!
But tecate is using it as a fashion accessory (when she goes to local Starbucks).
Joe
Keith,
And your hypothetical undervolted ~200mm^2 desktop parts don't help AMD's market position.
If it targets the higher end, say $200 and over, it would be viable, once Fab36 capacity comes online.
Joe
wbmw,
Rather than putting a lot of people on highly integrated dual core Netburst with tons of frequency tuning, they put the engineers on Merom/Conroe/Woodcrest and were able to commit to an H2 2006 launch date. Assuming they can still hit that date, then sacrificing the competitiveness of their short term roadmap will have been a great decision.
Further assuming that Intel hits the desired level of competitiveness. There is a danger that some of the newly assigned engineers, namely those from the Itanium division will infect the team with the "day late and dollar short" disease...
Joe
Alan,
I don't think they really know what Intel's strategy is... comparing dual core parts between AMD and INTC does not speak at all to strategy.
The strategy is, AFAIK, to stop pushing the clock speed, and instead, to put more cores on a die. The first fruits of this new strategy of Intel (Smithfield, Paxville) were somewhere between disappointing and laughable.
Joe
wbmw,
First of all, you're wrong here. The biggest component would be #2, since it has the most number of transistors that go towards address translation, internal queues, DRAM arbitration and timings, any correctable or detectable error schemes, protocol translation, and any associated tasks.
Maybe you should take that up with chipguy, who thinks it is less than .5W. But I think he is ignoring the crossbar and System Request Queue.
The only thing that needs to be in the DDR voltage domain are the actual transistors in the physical layer, and there are orders of magnitude fewer needed in #1 than for all the transistors needed in #2. Therefore, even with the higher power nature of the transistors in #1, the sheer numbers of transistors in #2 makes it a far larger contributor.
It is not just the number of transistors, but their function as well. They have to push current to the DRAM chips at higher voltage. AMD specifies 7.685W as the max.
So you are telling me that you know more about AMD chips than AMD's own specs?
Further, HT max power consumption is 1.89W. The sum of #1 (DRAM interface) and #3 (HT) is 9.575W. This is completely independent of the clock speed and voltage the CPU core operates under.
When you take Opteron at 90nm and low voltage, all the transistors contributing to memory controller logic that are not repeated between cores is still most likely in the range of 5W or less.
Yeah, that's the ticket. You claim that the sum of 3 components (2 of which are known) is 5W. So let's calculate that 3rd component, which is supposedly low in low power chips:
#1 + #2 + #3 = 5W
7.685W + #2 + 1.89W = 5W
#2 = 5W - 9.575W
#2 = -4.575W
That's the ticket. AMD has achieved such low power in its memory controller that not only does it not use any power, it supplies 4.575W to the rest of the CPU.
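For anyone following along, the reductio above is easy to verify with a few lines of Python. The 7.685W and 1.89W figures are the AMD datasheet maxima quoted in this thread; the 5W total is wbmw's claim:

```python
# Sanity check of the power arithmetic above.
# dram_if and ht are the datasheet maxima quoted in the thread;
# claimed_total is wbmw's estimate for all three components combined.
dram_if = 7.685       # W, #1: DRAM interface (datasheet max)
ht = 1.89             # W, #3: HyperTransport interface (datasheet max)
claimed_total = 5.0   # W, wbmw's claim for #1 + #2 + #3

mem_ctrl = claimed_total - dram_if - ht  # solve for #2
print(f"#2 (memory controller) = {mem_ctrl:.3f} W")

# A negative result shows the claimed 5 W total is inconsistent
# with the two known components.
```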
Joe
I_banker,
AMD's server market share gains are a remarkable achievement.
Agreed. It is possible that before Intel gets its $hit together in the server market (still nearly a year away), AMD may have a higher market share in servers than in its traditional area of strength - the desktops.
It would be nice to get better traction in notebooks...
Joe
chipguy,
IPF has about 16% share of its target market segment
compared to about 11% for Opteron's share of x86 servers
and its less than 2% share of x86 workstations.
That must have been a Q2 figure, based on the following:
For example, Gartner pegged AMD's global third-quarter share of x86 server revenue at 15.7%, up more than 4% sequentially and nearly 10% year over year.
http://biz.yahoo.com/fool/051128/113320884524.html?.v=1
Joe
wbmw,
They are, and it's why Intel chomped on Sun in the late '90s and early this decade, chewed them up, and spit them out as a shadow of their former selves.
True, but the timing is more complicated. Just as Xeon started to be a credible alternative, the Y2K frenzy kept sales high. It was the dot.bomb that brought some sanity to the buyers, and Sun hardware started to be evaluated more rationally vs. Xeon (and now Opteron) using price/performance metrics.
Joe
tecate,
Who said xeons are cheap replacements for sparc?
Marketplace, IDC, Gartner.
Joe
wbmw,
And I'll repeat once more that this was a rating for the max power of the 940 pin Opteron socket and can't be applied to a low voltage version of the chip running at 68W.
I don't think you have a clear picture of the 3 I/O components in K8. K8 has:
1. DRAM interface - interfacing external DRAM chips
2. memory controller (and the associated non-core components: System Request Queue, crossbar)
3. HT interface
The biggest component is #1 and it is unaffected by the voltage and clock speed of the CPU. It runs on the clock speed of DRAM, and on the voltage of DRAM, independent of the CPU.
You can have low power CPU, but it is still accessing standard DDR RAM.
Joe
chipguy,
That doesn't matter unless AMD implemented it in a completely
brain dead manner. Any memory controller is basically a state
machine that transitions at the control state frequency of the
memory it connects to. In the case of DDR400 that is 200 MHz.
It does not need to change states at the CPU frequency.
I am not sure which component you are talking about here. I was responding to your post about the memory controller, which is shorthand for a collection of shared components (in K8 terms):
- System Request Queue
- Cross-bar switch
- memory controller itself
These are the components that you need to subtract from the total to get the per core power consumption.
Joe
chipguy,
That is worst case peak power. Worst case sustained power
will be less.
AFAIK, peak is what AMD has used for their TDP.
It is very hard to see how the digital
logic portion of an IMC for DDR in a 90 nm device consumes
more than half a Watt running flat out.
I have not seen any data on the breakdown. One thing to keep in mind is that it can run at the full speed of the CPU (1.8 to 2.8 GHz), which is a lot different from the 200 to 400 MHz that discrete north bridges apparently run at.
Joe
Mike,
It looks like A64s are flying off the shelves. Good orders for DC parts.
Joe
wbmw,
Great, so while quad core Opteron gets the press and mind share, dual core Woodcrest will get the volumes and market share. Sounds good to me.
Yes. Well, Woodcrest and DC Opteron as well.
This applies to the full power Opterons, not the low voltage HE parts. They have memory controller and HT TDPs far under that of the full power parts.
Power consumption of the DRAM interface and HT interface does not change from the standard to the HE Opteron. Only the memory controller portion would go down, and that portion is less than half of my 10W estimate.
How do you get 26W per core from a 68W Opteron?
I corrected it to 29W in the follow-up post. (68 - 10) / 2
These parts will be at lower voltage, and the memory controller portion will be much lower than 10W. More like 5W. That would make each core ~31W, which would put a quad core closer to the 130W envelope, once the memory controller and HT are added back in.
Chipguy just posted 7.7W for 1 out of 3 components. 10W may even be conservative. We are talking TDP here, not some average.
http://www.investorshub.com/boards/read_msg.asp?message_id=8641780
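The per-core estimate being defended here can be sketched in a couple of lines. Note the 10W shared-logic figure is Joe's own rough TDP estimate for DRAM interface + memory controller + HT, not an AMD spec:

```python
# Rough per-core TDP estimate for a dual-core 68 W Opteron HE,
# following the method above: subtract the shared (uncore) budget,
# then split the remainder between the two cores.
# The 10 W shared figure is a thread estimate, not an AMD spec.
tdp_total = 68.0   # W, 68 W Opteron HE package TDP
shared = 10.0      # W, DRAM interface + memory controller + HT (estimate)
cores = 2

per_core = (tdp_total - shared) / cores
print(f"per core: {per_core:.0f} W")  # 29 W, matching the corrected figure
```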
Joe
chipguy,
IPF has about 16% share of its target market segment
compared to about 11% for Opteron's share of x86 servers
and its less than 2% share of x86 workstations.
There is an overlap in the "target market" already. QC Opteron would extend the overlap further.
Also, QC Opteron would need a minimum of additional investment in hardware and software infrastructure over the existing volume market, while Itanium needs a great deal of hardware and software investment.
Joe
Too late to edit. That should have been 29W per core.
Joe
chipdesigner,
So again, 100W @ 2GHz x 4 seems reasonable.
I think so too. I came up with 2.2 GHz at 110W in another post, so we are roughly in the same ballpark.
Joe
wbmw,
Volume: The 2GHz HE x70 is a low volume, boutique part that barely any OEMs are using, so AMD can stealth launch this to satisfy a few small orders and be ok with it. But when you need to satisfy the needs of big customers like HP, there's no way.
Generally, one can replace any entire paragraph of your posts containing the word "volume" with the word "FUD", and move on. Why would an early quad core (as chipdesigner is suggesting) be anything other than boutique, initially? BTW, "boutique" status has not stopped HP from selling Itanium. Why should it stop HP from selling a boutique version of Opteron? A boutique version of Opteron would fit in generally the same infrastructure, with a somewhat higher power spec.
You are assuming 20W for the memory controller, which is absurd for a piece of logic integrated in the CPU on a 90nm manufacturing process. My guess is that it will be less than 5W, or possibly as low as 1-2W. This means the power envelope for a 90nm quad core will be no less than 100W, and most likely in the 120-130W range, even for a low volume 1.8GHz sku.
It is not just the memory controller, but the DRAM interface and HT interface that are shared. That likely accounts for about 10W of the TDP. As far as power consumption, I think it would be above the standard rating, except maybe for a very low clock speed version. But AMD established this 110W "SE" rating, and I think it is possible that a quad core would fit in that envelope. AMD is currently delivering "volume" of 68 Watt Opterons into the blade market up to 2.2 GHz, AFAIK. That would be about 26 Watts per core, roughly in the 110W target. This is assuming Q4 2005 specs. Things will change a bit in 1 year, with some power delivery optimization in Rev F, Socket F.
Performance: 90nm quad core Opteron will not come with very many performance improvements over the current generation.
The improvements currently on schedule will improve the infrastructure by somewhere between 1.5x to 2.0x with DDR2 and faster (possibly more numerous) HT links.
Joe
chipdesigner,
And here's another @ 22W for the Intel controller:
It may be true for Intel, but the figure is lower for AMD. The Intel figure probably includes FSB interface power consumption on both sides, north bridge and CPU. For AMD, it is an internal connection, using very little power.
Joe
chipguy,
Look at the Opteron datasheet. The memory interface draws a
maximum of 2.9A at 2.65V for DDR400. That is 7.7W absolute
peak power. Realistic worst case thermal (i.e. sustained)
power is probably on the order of 5 or 6 W.
I think you need to count the HT interface and memory controller (not just the DRAM interface). The total is probably in the 10W range.
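The figures being traded here are easy to check with a quick back-of-the-envelope calculation. The 2.9 A / 2.65 V values are from the Opteron datasheet as chipguy quotes them; the 1.89 W HT maximum was quoted earlier in the thread:

```python
# Peak power of the Opteron DDR interface from the datasheet
# figures quoted above: 2.9 A at 2.65 V.
current = 2.9    # A, max draw on the DDR supply (datasheet)
voltage = 2.65   # V, DDR400 supply voltage
dram_if_peak = current * voltage
print(f"DRAM interface peak: {dram_if_peak:.3f} W")  # ~7.685 W

# Adding the HT interface maximum (quoted upthread) gets most of
# the way to the ~10 W shared-logic estimate, before counting the
# memory controller logic itself.
ht_max = 1.89    # W, HT interface max from the AMD spec
print(f"DRAM + HT: {dram_if_peak + ht_max:.3f} W")   # ~9.575 W
```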
Joe
highlandpk,
I wonder how it will compare with the top Opteron available at that time - 3ghz possibly? I also wonder how it will compare at 4 way config?
At the time of Dempsey's introduction, Opteron may possibly be at 3 GHz (but only in single core). Dual core Opteron will be a more relevant comparison, and there, Opteron will probably be at 2.6 GHz at the time of the Dempsey launch.
Another variable that will help Opteron (but may not be available in time for Dempsey launch) is Opteron Socket F with support of DDR2.
Joe
wbmw,
Basically, it's true that the Dell XPS is not the best gaming system out there, but I think it's false to claim that people who buy one will be unhappy with it.
It is just a sub-optimal choice, and it is not Dell who is at fault, but Intel. Intel just does not currently have CPUs that are competitive in gaming performance.
Usually, the ones who claim this are the same ones with polarized viewpoints who think that the people are stupid or have low IQs if they buy a Dell computer.
I thought we already settled this subject. Only people who are clueless about gaming performance of computers buy Dell computers for gaming.
Joe
morrowinder,
Anything substantive to say? Or just more cheapshots:)
I have to consider the audience. Anything of substance would be completely lost on a "Dell gamer".
Joe
morrowinder,
The vast majority of gamers DO NOT have extreme editons or Athlon FXs or even X2s. They are incredibly expensive and not worth the money. At the midrange though AMD may still claim benchmark superiority, the differences are more dependent on what video card you choose than the processor.
Game performance does depend on both the processor and the video card. But it is clear that you can choose any price point for a processor, and Intel will always clearly be a bad choice.
I am a gamer and I bought a midrange dell.
Well there you confirm it. I think many posters and readers of this BBS had questions about your ability to think clearly...
Joe
wbmw,
If you were to separate the people in the world who have a "clue" about the gaming performance on computers, and those who don't, you'd see the vast share of those people fall into the second camp. Sorry, but this is a very polarized and unrealistic point of view.
What is polarized and unrealistic? That's exactly what I said, that there are enough people without a clue for Dell to have a successful "gaming" line.
What you seem to have a hard time conceding is that the more knowledge people have about gaming and gaming hardware, the more likely they are to go with AMD. The more clueless they are, the more likely they are to go with Intel, and the most clueless go with a Dell XPS.
Joe
tecate,
We all should be driving Prius's right? All these suburbans in Texas hmm these people are obviously uneducated in cars riiiight?
I think you may be on the verge of getting the hint. If you want mileage, you get a Prius. If you want a roomy, rugged vehicle, a Suburban may be the better choice.
Getting a Dell XPS for gaming is the equivalent of getting a Suburban for the gas mileage.
Joe
wbmw,
That's true. Some people will do this, but some people will not. All I'm arguing is your claim of equating Intel purchases with "stupid" people.
being un-educated about a specialized subject does not equal being stupid in general.
I am clueless about some things I buy. A lot of my buying choices are made that way. I just don't care enough about some things to spend time on research to possibly make a better choice.
This is how people end up buying Dell XPSs. Upon any research, even casual, it becomes absolutely clear that the Dell XPS is a mistake. Probably the worst choice a gamer can make.
Joe
mmoy,
I haven't tried the VS 2005 Express Editions (64-bit support may not even be available in those editions), but the default install does not automatically install the x64 or IA64 tools. You have to select them in a custom install.
BTW, this only applies to C++. For managed code, you run under .NET 2.0, which has support for 64 bit.
Joe
Chris,
However, you do end up paying delivery (S&H) charges plus sales tax on the non-rebated price, plus any upgrades the sales guy pushes you into.
Don't forget the "extended warranty".
Joe
wbmw,
Just pointing out that your view is completely polarized, and plenty of educated people who have a fully sized brain can come to the conclusion that they want a Dell XPS with Pentium Extreme Edition.
People who are educated about computers, and performance of computers running computer games? Don't be ridiculous.
I think Intel does appreciate your pro-forma defense, but I know that you know enough about computers to know that the Extreme Edition and the Dell XPS are a joke, and only clueless people buy them.
Let me state it clearly:
Prerequisite of buying Dell XPS is that one is clueless about gaming performance of computers.
There are a number of people who satisfy this prerequisite, so it may be a viable business to sell "Extreme Edition" Pentium.
Joe
Keith,
HP's policy is to use the best of both worlds — AMD or Intel, and not favor one over the other.
http://www.serverwatch.com/hreviews/article.php/3565826
Sounds promising...
Joe