Wbmw:
Then why is a 65W TDP 2.4GHz EE A64 X2 4600+ using less power than a supposed 65W TDP 2.4GHz Core 2 E6600? Or a 65W TDP 2.66GHz Core 2 E6700?
As to the power readings, you do know that is measuring VRM input, not output? Xbit Labs themselves say that VRM efficiencies are 70-80%, and lower than that at the extremes. 47W * 0.70 = 32.9W.
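To make the correction explicit, here is a trivial sketch of that arithmetic (the 70-80% range is the Xbit Labs figure; the function name is my own):

```python
# Convert a power reading taken at the VRM input to the power actually
# delivered to the CPU, given the regulator's efficiency.
def cpu_power_from_vrm_input(measured_watts, vrm_efficiency):
    return measured_watts * vrm_efficiency

# Xbit Labs quote VRM efficiencies of 70-80%:
for eff in (0.70, 0.75, 0.80):
    print(f"{eff:.0%}: {cpu_power_from_vrm_input(47.0, eff):.1f} W")
# At 70%, a 47 W reading at the VRM input is only 32.9 W at the CPU.
```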
I already debunked the S&M test as not optimized to pull max power from C2Ds. Wait six months for them to do that. You do know it hasn't been six months yet? 8/1/06 to 9/29/06 is less than two months. Xbit Labs used S&M version 1.7.3, which is not optimized for C2D. Even S&M 1.8.1 is not optimized for C2D. See: http://www.testmem.nm.ru/soft.htm
It's only optimized for K8, Core Duo and P4 (and older). As it's freeware from Russia, expect to wait a while for a version that recognizes C2D. When it does, though, watch C2D go well above the well-optimized K8s.
Fantasy: You can compare Intel and AMD TDPs.
Reality: It's been an extremely long time since you had your last bath. You're stinking up the joint.
Pete
Wbmw:
And how much is used in the NB to connect to memory for the Conroes? And then there are the different standards used in computing the TDP of each family.
Hint: Intel uses a lower typical standard while AMD uses a hard upper bound of maximum possible TDP. Interesting that by test, a 65W A64 X2 90nm gets below 35W and a 35W A64 X2 gets below 25W. If Intel had to use the same standard for TDP as AMD, that 65W Conroe would need over 80W and the 75W one would need around 100W. And that's before the inclusion of the memory controllers and FSB links in the NB and CPU.
Do make note of the differing standards before placing your figurative foot in your figurative mouth.
Pete
PS: Who knows, if Intel's market share keeps shrinking, they'll have to conform with AMD on TDPs. They have been forced to follow AMD in the last few decisions: SDR and DDR instead of RDRAM, DDR2 instead of FB-DIMMs, AMD64 instead of IA64 and now an HT lookalike (CSI) instead of FSBs. Intel's days of dictating to the market are over.
Wbmw:
Puh-lease! Do you know about retailing and this little thing called markups? If you don't know about that, you shouldn't complain.
Dell or anyone else is not going to sell you an item for $10 that costs them $10 to buy. They will charge $11, $15 or $20 depending on how much they think their typical customer will pay for it. So the difference is not $400, but $400/(1+X), where X is the average markup on all items. And it varies on each item. So if 2 or more different items are used to convert one configuration into the other, then looking at any one item's markup may overstate or understate the overall markup.
So let's take the example from the Inquirer: some of the underlying price difference is in the MB. Conroe MBs are less available and tend to be pricey wrt AMD AM2 MBs. $50-100 is not out of line here since nVidia 61xx class mATX MBs are quite cheap versus G965s (used since both have internal GPUs). Add $50 for the CPU cost difference. A SATA to PATA converter adds about $30 more to the Intel box. This totals somewhere between $130 and $180 in actual cost differences. It does show up in motherboard combos on Pricewatch, for example ($241 vs $362 for just AM2 X2 3800+ & HSF & nVidia 6100 MB vs C2D 6300 & HSF & G965 MB).
Now you add in markups. Parts that are more pricey and hard to get, get higher markups. Markups to make up for slights (loss of preferential treatment, for example) tend to be exaggerated. And that markup applies to the whole cost, not just the difference. So a $10 part with a 100% markup costs $20, but a $12 part with a 200% markup costs $36; the difference has an apparent markup of 700% on the $2 difference even though no underlying markup difference was higher than 100%.
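The apparent-markup effect is easy to verify with a couple of lines (using the hypothetical $10 and $12 parts from the example above):

```python
# Selling price given cost and fractional markup (1.0 = 100% markup).
def price(cost, markup):
    return cost * (1.0 + markup)

p1 = price(10.0, 1.0)        # $10 part, 100% markup -> $20
p2 = price(12.0, 2.0)        # $12 part, 200% markup -> $36
cost_diff = 12.0 - 10.0      # $2 underlying cost difference
price_diff = p2 - p1         # $16 difference as the buyer sees it
apparent_markup = (price_diff - cost_diff) / cost_diff
print(p1, p2, apparent_markup)   # 20.0 36.0 7.0 -> a 700% apparent markup
```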
Given that, it's easy to see a mere $130-180 difference balloon to a $400 price difference or more. And then there is the tendency for minimal configurations to be loss leaders. Add-ons tend to have higher markups than fixed packages, so adding to minimal configurations can exacerbate the differences. It's also why, by starting from different packages, you can have the exact same configuration carry two different prices. Many times I have configured a minimum system up to match a packaged, more expensive one and had it cost more than the packaged one.
In essence, those that highlight the differences as reflecting the underlying reality tend to overstate that reality. There is a lot of truth to the idea that you shouldn't buy add-ons from OEMs, but should get them from reputable third parties instead. Using minimum configurations from OEMs and upgrading the components to your liking usually saves you quite a bit of money. The above is rather more difficult with Dell, because of their use of non-standard components. That is why I buy only from OEMs that use standard components and interfaces as much as possible. You save far more money when you either upgrade or need service than the slight reduction in initial cost.
I will agree that if you deal with a screwdriver-type shop which gets a flat rate to build your PC, the charges for upgrades tend to reflect their underlying costs, as you have either paid their markup in the above flat rate or the markup is the same on all products you add into your box. And these markups tend to be low because of all the competition. Thus the price differences are just a little above the real costs. Here you can get a feel for the underlying price differences from the prices charged. We do this all of the time on this and the Intel boards. Most know of the assumptions used in this kind of proof.
Pete
Wbmw:
You forget the most likely reason for "How about a 3800+ for half an E6300?": that the Conroe MB is much more expensive than the one used for the X2. After all, Intel is supposedly having a chipset shortage and a Conroe shortage. Why shouldn't Dell get more for those? They need to make up for the lost preferential pricing, too. They also need to restrict demand closer to supply for those Conroe systems.
Transition is never as fast as Intel supporters would desire. It takes a long time to turn a Titanic.
Pete
Wbmw:
Sorry, you are inaccurate. A64 X2 3800+ EE 65W is in stock at:
http://www.ncix.com/products/index.php?sku=19785&vpn=ADO3800CUBOX&manufacture=AMD
Thus it can't be MIA.
Here is a A64 X2 4600+ EE in stock:
http://www.ncix.com/products/index.php?sku=19784&vpn=ADO4600CUBOX&manufacture=AMD
So it can't be MIA either.
Many of the others you can special order. But I can see how not being in stock could be construed as MIA. I have seen an A64 X2 3800+ EE 35W at the local screwdriver shop, so they do exist. It was special ordered.
Pete
PS: I know that these things do change quickly. You can do a long table like Mike's reseller list and have the availability change just as you "hang up" the web browser. Nice try though.
Wbmw:
You are the one who can't read the documentation. Then blame others for your own incompetence.
Pete
Wbmw:
Have we seen SiGe from AMD? No we have not.
Extending the addressing in the FSB beyond 36 bits requires changes not only in the CPU, but in the chipsets and likely the socket, as some idiot will put a 36-bit-address CPU in a 40-bit-address MB and wonder why he can't see anything over 64GB. And vice versa.
As to Woodcrest being in 128GB systems, that would require either 32 FB-DIMMs or 8GB FB-DIMMs. And still the CPUs won't see it, because the virtual memory, page tables and TLBs would need to be changed from the EM64T specifications. Plus all of the OS software, to boot. The latter two would be seen long before the CPUs and chipsets would support it. And that would mean at least a 12 month lag. And there are no changes noted in those documents in support of 40 bit addressing.
There were some mentions, but those came from Intel copying the AMD64 manuals and missing all of the relevant changes.
Pete
PS: Clusters are different because the memory addresses are orthogonal between nodes.
Phud:
Did you take a gander at the date of that datasheet? Server people do not like changes every few weeks. They tend to shy away from such systems, and Intel cannot afford to have them shy away.
Second, changing the addressing bits requires new chipsets and, knowing Intel, will typically mean a socket change as well, else some idiot would put a 36 bit CPU in the 40 bit socket and complain to Intel about why he wasn't getting 40 bits of addressing. This on a CPU just barely launched last month. It isn't going to happen. And even when it happens, it will likely be at the transition to CSI, which is years away for x86, if ever.
The nice thing is that cHT goes to 64 bits physical and virtual and will likely not need changing anytime soon. There is talk of the next major Opteron revision (beyond K8L) upping the physical to 49 bits and the virtual to 58 bits (the easiest being going permanently to 2MB or 1GB pages from 4KB or 2MB pages respectively) as 1TB is possible for a glueless system of 32 sockets with 32GB each.
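For scale, the address-space arithmetic behind those numbers is easy to check (using only the figures quoted above; 49 physical bits and 32 sockets x 32GB are the post's own projections):

```python
# Address-space sizes implied by the bit widths discussed above.
TB = 2 ** 40                      # one terabyte in bytes

# 32 glueless sockets with 32 GB each is exactly 1 TB:
assert 32 * 32 * 2 ** 30 == TB

# 49 physical address bits cover far more than that:
print(2 ** 49 // TB)              # 512 TB of physical address space
```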
Pete
Phud:
Check the datasheet on page 69 of 112. Look at the top box.
"define a 2^36 byte physical memory address"
Intel just blew any such arguments right out of the water. So you think that Intel is so utterly clueless? No! The one who is utterly clueless is you.
Everyone else who bothered to read the relevant sections of Intel's "Dual Core Xeon Processor 5100 Series Datasheet" has agreed with mas that Woodcrest, Conroe and Merom have a 36 bit physical address. Even the Processor Description from a Xeon 5150 Linux server box stated that addressing was "36 bit physical, 48 bit virtual". How much more evidence do you need?
Pete
Wbmw:
Straight from Intel's Dual Core Xeon Processor 5100 Series datasheet page 25 of 112:
A[35:3]#
That means only 33 address lines (bits 2:0 select which bytes of the 8-byte-wide FSB data bus D[63:0] are used). That means 36 bits of addressing over the FSB connecting Woodcrest to the chipset. That is 64GB, period! Even the Land Listing shows no address pin above A35 (page 47 of 112).
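The pin arithmetic can be spelled out in a few lines (a trivial check, using the A[35:3]# and D[63:0] widths from the datasheet quote above):

```python
# A[35:3]# gives 33 physical address pins; bits 2:0 are implied by
# byte selection within the 8-byte-wide FSB data bus D[63:0].
address_lines = 35 - 3 + 1                  # 33 pins
byte_select_bits = 3                        # log2(8 bytes)
physical_bits = address_lines + byte_select_bits
max_memory_gb = 2 ** physical_bits // 2 ** 30
print(physical_bits, max_memory_gb)         # 36 bits -> 64 GB
```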
The datasheet is available at: http://www.intel.com/design/xeon/datashts/313355.htm
That is the definitive word. No single Woodcrest, Conroe or Merom will see more than 64GB of physical memory. The limitation is the FSB itself. Mas was right and you all were wrong. Live with it.
Pete
Morrowinder:
PS: You do remember the Hood, don't you? You know, that ill-conceived battleship that blew up after one shot from the Bismarck? Something about bad design, placing the ammo dump in a vulnerable area between the stacks. BOOM! LOL
I think the Hood here refers to Mount Hood in Oregon. IIRC the Bismarck got killed because one little plane's torpedo caused the Bismarck's rudder to stick to one side, forcing the Bismarck to steam in circles. A bunch of planes then sank it. It was one of the signals that the battleship era was over.
As far as SGI's wins in the supercomputer arena, it helps when Intel just gives Itaniums away. At least Cray pays for Opterons in their supercomputers. And NERSC can replace any Opteron with a Clearspeed FPGA at their discretion. NASA or SGI can't replace any Itanium with an FPGA, whether it's a Clearspeed or any other. NERSC can also just drop QC K8Ls into the Hood. I doubt that SGI could do the same with QC Itaniums in NASA's.
If NERSC needed to speed up scientific visualization or rendering, I'm sure they could place some AMD GPUs in socket Fs. Again, SGI can't do that either. SeaStar doesn't care what is in the F socket as long as it uses cHT and makes the attached memory available. Not so with SGI and its Itanium socket.
Pete
Wbmw:
That should be "measured in ChipguyWatts, WbmwWatts or IntelWatts" all from hand waving. AMD's is in real worst case Watts.
Doing it using AMD's standard, you are measuring a 141W TDPmax 3GHz Woodcrest versus a 120W TDPmax 2.8GHz Opteron 8220 SE. Most individual 8220 and 2220 SEs are lower.
Pete
Chipguy:
HPC benchmarks I have seen show Woodcrest walloping Opteron badly.
I see Cray winning those HPC benchmarks, and Woodcrest is nowhere to be found. Perhaps every HPC benchmark you look at where Opteron beats Woodcrest gets immediately flushed from your memory. Didn't Cray just take the XT3E (Opteron based) and beat all comers in the benchmarks for sustained real-world output on codes the supercomputer purchaser actually uses?
Or else you use that typical Intel response to benchmarks where it doesn't come out on top: change the benchmark until you get the result you wanted. BAPCo comes immediately to mind. So it's not "noticed" until it looks the way you wanted it to.
That's really putting the blinders on.
What HPC benchmarks were you seeing anyway?
Try reading the fine print in recent AMD and Intel datasheets. Your characterization is utterly false.
I read both datasheets, including all of the notes. AMD's is the maximum possible power consumption at nominal voltage, max current on all supplies and max temperature with no thermal monitoring. Intel's is some hand-waved number from an unknown procedure which can't be figured out except by looking at how it was defended when Intel started using it, so it would take too many assumptions to compute Opteron's TDP using that procedure. If it is set the same as previous ones, my characterization is quite valid.
Each Opteron having an A for TDP (variable) has an individual worst-case TDP. My A64 3500+ also has an A for TDP, and although its family has a TDP of 67W, that particular CPU has a 50W TDP. So Woodcrest does not show maximum power consumption under worst-case conditions, which is what TDP used to be defined as. It shows an amount lower than that. So they are not using the same standard. AMD is using the classic worst-case TDP and Intel is using its own flavor that is quite a bit lower.
Does Intel really need that series resistance in the power supply? 90A across 1.3 milliohms of resistance dissipates 10.5W, a significant amount relative to the published TDPs. AMD doesn't take that into account; Intel requires it to be subtracted from its TDP.
Here it is straight from the datasheet in question:
4. The processor must not be subjected to any static VCC level that exceeds the VCC_MAX associated with any particular current. Failure to adhere to this specification can shorten processor lifetime.
5. ICC_MAX specification is based on maximum VCC loadline. Refer to Figure 2-4 for details. The processor is capable of drawing ICC_MAX for up to 10 ms. Refer to Figure 2-1 for further details on the average processor current draw over various time durations.
The minimum allowed series resistance for the 3GHz Woodcrest is 1.3 milliohms. Max current is 90A, so 10.5W of the dissipation is missing for Woodcrest while being in that of AMD.
AMD uses a rather simple calculation for its maximum TDP. It sums the products of maximum voltages (series resistance of 0) and currents of all the supplies to the CPU and rounds up. Doing the same for Woodcrest, in order to allow true comparisons to be made, gets us to (90A * 1.5V + 4.6A * 1.26V + 0.13A * 1.605V =) 141W. That is higher than the maligned Opteron 8220 SE (130W). And that doesn't include an on-die 2 channel FB-DIMM controller that would be the Woodcrest equivalent of the Opteron's on-die controller's 2 DDR2 channels. If you allow Intel that series resistance (Vcc becomes 1.383V from 1.5V), you still get 131W TDP, which is still higher than the Opteron x2xx SE.
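A sketch of that worst-case sum, using only the Icc_max/Vcc_max figures quoted in this post (treat the rail values as the post's assumptions, not independently verified):

```python
# AMD-style worst-case TDP: sum of Icc_max * Vcc_max over all supply rails,
# with zero series resistance. Rail figures as quoted for the 3 GHz Woodcrest.
rails = [(90.0, 1.5), (4.6, 1.26), (0.13, 1.605)]    # (amps, volts)
tdp_max = sum(i * v for i, v in rails)
print(round(tdp_max))                                 # 141 W

# Allowing Intel its 1.3 milliohm minimum loadline on the core rail:
vcc_loaded = 1.5 - 90.0 * 0.0013                      # 1.383 V at the die
tdp_loadline = 90.0 * vcc_loaded + 4.6 * 1.26 + 0.13 * 1.605
print(round(tdp_loadline, 1))                         # 130.5 W, rounded up to 131
```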
Do you see this note chipguy taken straight from the 5100 series Xeon datasheet:
2. Thermal Design Power (TDP) should be used for processor thermal solution design targets. TDP is not the maximum power that the processor can dissipate. TDP is measured at maximum TCASE.
Right there it shows that the value cannot be matched to a value at worst-case power dissipation, the classic TDP of old. Intel's published value is not even an upper bound to the worst-case TDP of Conroe or Woodcrest. AMD's is a hard upper bound, period.
Perhaps it is you who is mistaken about the suitability of comparisons between values published to two vastly different standards. When comparing the two using AMD's much more stringent, albeit classic, standard, Woodcrest dissipates more power than even the Opteron x2xx SE.
Pete
PS for those who want to see the 5100 series Xeon datasheet, you can get it here: http://www.intel.com/design/xeon/datashts/313355.htm
Chipguy:
Easy. AMD's is for maximum possible power usage and Intel's is for typical power usage. At the same standard, they are closer together. Then you add in the NB portion of the chipset, the 4 FB-DIMM controllers and a minimum of 4 more AMBs and you'll likely add over 80 more maximum possible watts. Then 80+80+(>80) > 120+120. Push it with 12 more AMBs and Woodcrest spins the kW meter a whole lot faster.
Then, while 8 Opteron SEs sit gluelessly in a server, you need 4 complete 2-way Woodcrests to match. That's more FB-DIMM controllers, more NBs, more interconnect, lots more AMBs, more controllers, more drives, etc. And 4 2-way Woodcrests don't work as well as 1 8-way Opteron. Woodcrest clusters don't scale as well either. And that is before K8L, before cHT 3.0, before chassis interconnect, before coprocessing and other things that are coming down the pike.
A simple 1207 Clearspeed FPGA and boom, Woodcrest loses at program-specific integer processing and program-specific FP processing. An Opteron 2220 SE and Clearspeed 1207 leave a 2-way Woodcrest in the dust when a suitable FPGA configuration accelerating the inner loop of the target program is used. The Opteron plus Clearspeed benches in 1 unit of time and the dual Woodcrest takes 45 units. Using that application, isn't the performance/power of the former far higher than the latter? This is from Clearspeed's claims.
HPC benchmarks also show that Opteron has far higher interconnect BW and far lower latency than Woodcrest. It also scales much better.
And to answer the claim that AMD announces "an upgrade path" in the headline because the Rev F product itself is so underwhelming:
What is clear is that Woodcrest itself is so underwhelming for the 4-way and up market. Intel had to pre-announce a QC. They even had to release yet another Xeon MP based on the hot and slow P4 because Woodcrest is so underwhelming. And Intel has absolutely nothing for the 8-way AMD64 market. Intel knows how bad the overall performance will be when they stick four NGA cores on a single FSB. Can you say BOTTLENECK?
Pete
TDOU:
You were very wrong wrt Japanese Law. You were wrong that the Judge wouldn't allow AMD to use information from that case in this lawsuit. So you have been wrong in the past. You continue to be wrong as you are blinded by your internal biases. Remember this is why lawyers should not have a conflict of interest as it makes their reasoning and judgements inherently flawed.
As an Intel supporter in this lawsuit, you must be very worried that, if it is proven, Intel will suffer very badly indeed. What is the largest judgement Intel could handle before going effectively bankrupt? If found guilty, how large could the judgement be? IMHO, the answer to the latter is larger than the answer to the former.
Granted though, we haven't reached the point where due diligence will require Intel to settle out of court.
Pete
Dear Gordon H:
I am not aware of any x86 competitor that has cHT ports for use in the interconnect. And Woodcrest only goes to 2 sockets, not ##K CPUs. And the interconnect is faster than anything bankrupt SGI has either. Perhaps you should look into that, given that many studies have interconnect as the key differentiator between HPC MPPs. Also, I am not aware of any Intel box that has HTX ports or cHT ports for connecting FPGA accelerators to said supercomputer.
Intel had better get its infrastructure in order for its OEMs to win such contracts. It's still stuck with FSBs. That's so old hat.
SO OUR TAX DOLLARS ARE NOT WASTED AGAIN on a bunch of over priced, FSB bottlenecked and under performing Intel Xeons or touchy Itaniums.
Pete
PS, FPGAs can accelerate >100x in some HPC code. And Clearspeed has a Socket 940 and 1207 FPGA accelerator on the market today.
Wbmw:
Conroe does not do more FP processing than an A64 X2 in 3DMark06. It is mostly integer processing anyway, and Conroe just isn't hit hard by S&M. Which sequence did S&M pick, P4 or P3? Now that the Russians have a Conroe, just watch the power used during their benchmark rise, and then rise some more. It took them over a year to get the K8 over BurnK7, IIRC.
So the rest of your comments in that area were flawed, as BurnK6 likely didn't push any of the K8s over their TDP either. And did they turn off the HW thermal limitations of Conroe? You want it to "burn". They turned off C&Q for the K8s. I mean, fair is fair.
Did they also try another well known "burner", Prime95?
Do remember that S&M advertises that no other "burn" program gets higher power usage than theirs. They have failed in that pledge with Conroe. Ergo, it hasn't been optimized to recognize Conroe. And that likely means it's got the wrong power-using code as well. First they have to recognize that it's a Conroe, and that will select the highest-using routines it currently has. That's the first rise. The second comes from them trying different code sequences until they find the highest one. That will be the second rise.
The third will take longer as they play with it. You will get the highest-using attempt so far. They certainly will end up well above any other "burn" program, because they can take any of the code from those and tweak it further to get above it.
And then your arguments just went up in flames, pun intended.
Pete
Dear Tenchu:
Your logic or reading comprehension needs work.
Fact, you stated that customer service "experts" at stores just pushed BS onto customers, " they try and make up for their cluelessness with bulls--t".
Fact, you are a customer service expert on the computers you build.
Straight conclusion from above facts, you try to make up for your cluelessness with bulls--t!
Talk about shooting off your own foot with that argument. And that is a symptom of being clueless. Hoist by your very own petard. And those without a clue shouldn't point at others for being the same.
Pete
Note to all you system builders out there: You are not clueless. You must have clues or you wouldn't get these systems to work. Systems integration is not easy; those intermittent bugs are hard to get rid of. You do have a well-known bias toward what has worked in the past. That's why your customers ask.
Tenchu:
If you ever took what customer "experts" at stores said about what to buy, then you are the one who's clueless. Do you take what the service techs at a car dealer say as what to buy? Doesn't it seem strange that what the dealer sells is exactly what they recommend?
Should we say that, because you put computers together, you are one of those shady customer service techs you disparage? You blew your glass house to shards with that one. Then the only customer tech you will get advice from is one who doesn't recommend what his company sells. Since you build only Intel computers, perhaps only when you recommend AMD ones should we listen to you. When you recommend Intel ones, you are being shady. By your own logic, you are a shady customer service "expert!" You just shot your own foot off! Why should we listen to one who does that?
Pete
Wbmw:
S&M hasn't been optimized to burn a lot on Conroe. 3DMark06 burning 91% of CPU Burn? I'd suspect that the S&M burn software was not pushing Conroe at all. The X2 3800+ EE uses only 66% in the same comparison. Pushing the E6300 the same amount would make it burn 61W. You saw that BurnK6 used 25% more power than S&M. When S&M makes Conroe do packed SSE2, 4-wide issue and work over the FSB all of the time, you would see it go well above its rating. Anandtech should turn off Intel CPU thermal management for the tests. Having it slow down when part of it gets hot doesn't show burn.
And of course, Anandtech didn't measure NB usage, since the X2s include the NB and memory controller. That's >10W right there, pre VRM losses.
Pete
Wbmw:
Don't you read the sites you post links to? Supermicro states that only 533MHz DIMMs can be used when 4GB per slot is used.
SmartM is shipping PC2-6400RE memory in 4GB modules. http://www.smartm.com/product/product.cfm?productID=56
And here is a socket F, 1207 dual opteron board with 16 DDR2 DIMM slots: http://www.appro.com/product/server_1Uxserver_opteron.asp
The problem for Intel is that 1207 Opterons can go to 8S with 64 DIMMs. http://www.appro.com/product/server_3Uxserver_opteron_2b.asp for a 32 DIMM quad SF. That's well above any MCW.
Pete
Wbmw:
Show a link for a 16 FB-DIMM MB. I haven't seen a single one. The only ones are 8 FB-DIMM slot ones, and they only have 2 FB-DIMMs per channel and are typically configured with only 4, as 8 are evidently too power hungry.
On the other hand, I have seen 16-DIMM-slot 2S and 4S Opteron MBs. HP's DL585 has 32 DIMM slots on their 4S server and so does Sun on their X4600 8S server, IIRC. 8 DDR2 DIMM slot 2SF MBs are typical as well, although some have 16 DDR2 DIMM slots. 4SF MBs have 16 DDR2 DIMM slots. So AMD with DDR2 will have larger memory capacity, and lower power at the same max memory capacity, than Woodcrest servers.
Pete
Wbmw:
As I stated, 1440x1080 is not HDTV. It shows 16x9 HDTV at 1440x810. My 6-year-old CRT has a 1600x1200 resolution, and HDTV videos show on it at 1600x900 with black bars on top and bottom. 4:3 aspect ratio productions at 1920x1080i show at 1600x1200x75p. All this scaling (zoom) is done by nVidia for no CPU usage. It also does DCT and some other assists in HD decoding. So CPU usage is strictly dependent on platform architecture and software.
Playing back a DVD can be done on some laptops without any CPU usage at all, although most of the time the OSD is run by the CPU. There are some Turions that can play DVDs longer than Core Duos can. That doesn't mean they are good for doing real work, just that they can substitute for a DVD player.
Lastly, these battery life tests are flawed because there exist cheats by some laptop makers to reduce power (and with it performance) when these "tests" are run, all to get better scores. This was discovered when someone had to do a clean install of Win XP and the battery life dropped by more than 1/3, IIRC. And many other things can affect run times: mostly screen backlight intensity, LCD efficiency, GPU power and chipset power.
Only when these so-called "battery" tests check performance at the same time as they check run times (they must be done simultaneously, otherwise cheating will happen) and test many simultaneous applications (like the old Winstone did, as needed for multiple cores), will the scores be useful.
Pete
Dear Mas:
That was in Q1/06, not Q4/05. Like I said, Wbmw's speculations were bad. All the giving him a break doesn't change the fact that AMD did better than he expected from one 200mm fab. And before you go back to the die size issue, remember that Intel also was making only 256KB Celerons and 512KB P4s too. Most of what they shipped in Q4 was 256KB Celerons.
No more of the "he thought" stuff. He thought wrong. K8 didn't need the big caches to do well. Intel tried the big cache route and it didn't work. They had to rework their designs, use big caches and go down a process size to catch up. And that may be short lived.
He said Fab 30 couldn't make 15-20% unit share. It not only did better than that, it did it in a larger overall market, stunning Intel. And they are still stunning Intel, given last Q's numbers.
Pete
Dear Mas:
AMD only exceeded 20% in Q4 by overflowing kit into Fab36 which will need moving back once that ramps.
There is not one shred of evidence of this anywhere I can see. Before you continue, show a link to such a report.
AMD's Q4 CC stated it all came from inventory and Fab 30. This was also reiterated in the Q1 CC. And where did the inventory come from? Not Fab 36. Besides, AMD made DC Opterons, FXs and X2s with 2x1MB L2 in that quarter, much bigger dies than 1MB SC K8s. Even the 2x512KB L2 K8s are bigger than SC 1MB L2 K8s. Thus Wbmw lost on both CPU unit market share and CPU revenue share in a larger overall market. What was the size of the market when he made his speculation?
Pete
Dear Mas:
Did I give you grief? I agreed with you. Kudos for predicting that accurately.
Pete
Wbmw:
You were below me in the AMD contest the first time you entered. 25th out of 27 is very bad. I sometimes got close to that, though, from being overly optimistic.
My early-quarter Q2/06 Intel forecast was 2nd out of 11. You would have done worse, given your statements at the time.
Pete
Wbmw:
Wrong again, Wbmw! The ATSC standard does not recognize that size. When showing a 4:3 aspect ratio, the screen zooms to that size. So 1920 pixels get displayed, each being scaled down to 3/4ths of a square pixel's size. On a display of that size, generally what happens is that a 1280x720p image is displayed either on a pixel-by-pixel basis using the center of the display, or by zooming the image both horizontally and vertically until the screen limit is hit, which makes for a 1440x810 image from any ATSC HDTV 16x9 format.
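A minimal sketch of that zoom-until-the-screen-limit arithmetic (my own illustration, not any player's actual code):

```python
# Fit a 16:9 source onto a fixed-resolution panel: scale to the panel
# width, letterboxing vertically (fall back to height if the panel is
# wider than the source aspect).
def letterbox(panel_w, panel_h, src_aspect=16 / 9):
    w, h = panel_w, round(panel_w / src_aspect)
    if h > panel_h:                      # panel is wider than the source
        w, h = round(panel_h * src_aspect), panel_h
    return w, h

print(letterbox(1600, 1200))   # (1600, 900) - the 1600x900 CRT case
print(letterbox(1440, 1080))   # (1440, 810) - the 1440x810 case
```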
Even the latter doesn't take much computing power for current CPUs; my AXP 2400 (K7 2GHz) did it two years ago, scaling 1920x1080i HDTV movies to 1600x900 with CPU usage under 20% on Linux. An nVidia card I bought later zoomed that under Linux for no additional CPU cost. Under Windows 98SE and later XP, it was called PureVideo. Of course, under WinXP64 on my A64 3500+, it takes much less (the current card is an nVidia GeForce 6600/256MB AGP).
BTW, I like playing movies and TV shows using mplayer under Linux. And I do have an ATSC HDTV tuner for my TV. It can play 1920x1080i movies on a normal analog TV by scaling. Soon I will get one for my desktop. It works under Linux, XP and XP64, which will net me a cheap non-DRM HDTV recorder.
Pete
Dear Mas:
First, Wbmw was wrong about many things. He said market share, which, if you look at the context, was unit share. AMD was above 15% in unit share for all of 2005; by Q4/05 they were well above 20%, making 2x1MB Opterons, FXs and A64 X2s, 2x512KB X2s, 1MB Opterons, FXs, A64s and Turions, 512KB A64s and Turions, and 128/256KB Semprons, both desktop and mobile. That takes much more wafer area than 256KB K7s and AXPs. And all from Fab 30.
So Wbmw was wrong on all counts there. Do not make the mistake of thinking the percentages quoted were for revenue share, even though AMD passed 15% there too in Q4. They got over 18% last quarter (Q2/06). Granted, Fab 36 is ramping and Chartered is helping, building inventory for the back-to-school period and the Socket F Opteron launch.
So of course Wbmw wants to say now that revenue share was meant, when very few talked of revenue share in 2003. And so the estimate was breached in early 2005, long before Fab 36 was ready.
And Wbmw would have put anyone in a verbal straitjacket for saying AMD would gain revenue share from Intel for 5 straight quarters, take away most of the 4-socket-and-up x86 server market, and that Intel would have to slash CPU prices by one third to one half just to lose only 1 or 2% more revenue share in a quarter. And he would be livid at the disastrous Q2 Intel just had wrt AMD.
Pete
Dear Snowrider2:
OK, I messed up.
I thought she was talking about Merom. Obviously with Yonah, they can't do 64-bit. Still, the missing 3D helps power usage. In fact, I do not know of a single AMD Turion notebook with only a 2D core. Most have integrated 3D/2D cores from ATI. I happen to like the nVidia GPUs, though. Their Linux support is much better.
Pete
Wbmw:
Wrong again! You were in three EPS contests:
http://epscontest.com/02q3eps/leader_board.htm
http://epscontest.com/02q2eps/leader_board.htm
and
http://epscontest.com/02q1eps/leader_board.htm
When you entered your first contest, you lost in both Intel (5th out of 13) and, really badly, in AMD (25th out of 27).
Much worse than you remembered, which is par for the course. Three times versus once: that means you remember only a third of what you did.
Because of your lack of success, you stopped trying. I, OTOH, continue, and do better estimating what Intel does than AMD. Yes, I tend to be optimistic where they are concerned, but that should not be a surprise. At least I try, while you are a quitter.
Pete
Wbmw:
And the Core Duo T2400 is available now? Oh, that's right, another wait until the T2400 is available a few months from now. You think AMD can't be faster and/or lower power by then? What does Intel have now? Yonah-based parts that can't do 64-bit at all. And there are now signs that the MCW cores aren't so good in 64-bit mode. Some EM64T code sequences can run 6 times longer.
And I do complain about the missing 3D graphics. It takes power to do 3D, and doing it well takes quite a bit of power. As for doing HD video, 1440x1080 is not one of the HD video sizes; you do either 1280x720p, 1920x1080i, or 1920x1080p with a double-speed channel. And many low-power (<3W) set-top boxes have no trouble with 1920x1080i. Any current 3D GPU can do most of the required HD decode and scaling without any significant CPU involvement.
Let's see about your other speculations when Merom based notebooks arrive in decent quantity. That may not be for a while. Then we can do a head to head comparison with the same components as much as possible. And that does not mean a brand new notebook to one that is a few months old.
Pete
Chipguy:
Where are the Itanium sales numbers for the CPUs only? Where are the CPU unit sales numbers? Not one has been stated publicly. Without those, you can't tell if Itanium is profitable for Intel even on a current basis.
Just look at this: http://www.investorshub.com/boards/read_msg.asp?message_id=12054632
Itanium is nowhere near either Opteron or Xeon in either revenue or profits. If you add in the expenses to date, Opteron and Xeon are profitable; Itanium is a huge money sink.
Since at no time has Intel claimed profitability of the Itanium line, it is extremely probable that Itanium has never made money in any period to date.
That is the real bottom line in the success of a CPU.
Pete
Wbmw:
How about your frequent posts implying that AMD is roadkill? None of them has panned out. Or how your EPS contest entries have been wrong? Or Chipguy's posts on Itanium sales, launches and taking over the market, whose bar keeps getting lowered? You have a tendency to forget your mistakes. By far your most frequent posts are to cut down others. But when you make a mistake, you never ever do the classy thing and admit it.
Let's look at your old posts. This one stands out: http://www.siliconinvestor.com/readmsg.aspx?msgid=18458053
AMD got more than 15% of the CPU market making only K8s (mostly 512KB L2 ones I might add) from only Fab 30 in Q4, 2005. Oops, busted!
Or how about this one: http://www.investorshub.com/boards/read_msg.asp?message_id=12054632
Lots of speculation here. But the entire non-EE Prescott line has lower performance/power, the fabled new Intel mantra, than an A64 3500+ selling in that range. And they often lose in pure performance and in performance/price as well. By the time you add the infrastructure to the price, it gets even worse. Likely the only big inventory buildup is in the lower-clocked P4s and Celerons. Those will likely need to be written off in yet another hit to margins.
Pete
Snowrider:
Those were speculations with limited data. It's like entries in the EPS Contest; if you made one, you would see how hard that is. Wbmw and Chipguy make bad ones there. Try to guess the whole picture from the limited information we have and you'll see you are wrong far more often than you are right.
Chipguy has been so wrong in his Itanium hype that he should hang his head in shame long before I would have to. And Wbmw has often claimed that Intel was going to turn AMD into roadkill, and you know how that hasn't even come close to happening. And both of them were confident that was fact instead of speculation. So unless they are willing to apologize to all here for their great mistakes, why should I for the supposed crime of speculating with limited information?
Besides, the top post was for QC K8(L). No one, including Chipguy and Wbmw, knows what the actual power draw will be. However, with the A64 X2 3800+ EE at 35W TDP worst case being sold, a QC K8 at 2GHz would be less than 65W TDP worst case, and that is before 65nm and SiGe. Another data point would be the Turion TL-52 at 1.075V Vcc, 1.6GHz, 2x512KB: 3.75 hours on a 57Whr battery. That's about 15.2W for everything, including the X2 CPU, 2x256MB PC2-4200 memory, ATI GPU, 10/100/1000 ethernet NIC, 8x DVD+-RW DL, 60GB HD, fan and display. This is from the Inquirer review posted a short time ago.
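For anyone who wants to check that whole-system figure: the only inputs are the battery capacity and runtime quoted from the Inquirer review, and the simple Wh-over-hours division gives the average draw of everything in the notebook combined. The component breakdown is unknown; this is a rough average, not a measurement.

```python
# Average whole-system power from battery capacity and reported runtime.
# Both inputs are the figures quoted from the Inquirer review; this ignores
# any residual charge at shutdown, so treat the result as a rough average.
battery_whr = 57.0   # battery capacity in watt-hours
runtime_hr = 3.75    # reported battery life in hours

avg_watts = battery_whr / runtime_hr
print(f"Average whole-system draw: {avg_watts:.1f} W")  # about 15.2 W
```

That 15.2W covers the CPU, memory, GPU, drives, fan and display together, which is why it bounds the CPU's own draw well below its TDP rating.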
Therefore, as the above speculation was not far off from CPUs available today, it might well be within range of a CPU arriving in 6 months.
But saying that post was wrong is par for the course for them, especially when their "facts" have proven to be so wrong so many times.
Pete
Chipguy:
Your observations are noted and then thrown in the pig sty where they belong!
You have been so wrong in your previous observations that use of a Saturn V couldn't get you out of the deep hole you are in.
Look at now, even with Woodcrest launched, Intel is still in PANIC mode. You'll see just how bad it is in a couple of days.
Pete
Dear Snowrider2:
When Intel transitioned from P3 Xeon to P4 Xeon, it took more than 12 months. This is more like that transition, as they are two different internal architectures. That means the things you assume are the same between two similar cores are violated between different ones. Second, the MBs that were not tested during Dempsey are tested for the first time with Woodcrest. Woodcrest uses different VRM codes than P4 Xeons, meaning the MBs are new too. Then there are FB-DIMMs, not used in the older servers. For these and other reasons, the full testing needs to be done. That's typically about 12 months.
Another reason is that a 4S solution is missing, just like in the P3->P4 Xeon transition.
Lastly, how quickly people forget. AMD had the Athlon MP, a 2S part that gained a good following; it sold well even after the Opteron launch. Intel's Woodcrest is like that one: dual P2P FSBs with memory attached to the NB. So AMD had a presence in the server market, and one solid enough for people to try Opteron.
Pete
Chipguy:
"Too bad some lunatic fringe AMD camp followers never learned the lesson of the Nocona ramp."
ROTFLMAO!
Too bad some Intel apologists forget the lesson of Prescott or the very long P3 Xeon to P4 Xeon transition. Not to mention the long Itanium delays. Bankrupted a few companies waiting for them.
Nocona was going from one P4 Xeon to another, and even that wasn't as fast as the DC Opteron switchover, which was plug in and go.
Pete
Dear Alan81:
Late last year until now is only 6-7 months. If everything is OK, and various reports say otherwise, they will still need another 5-6 months to vet Woodcrest MBs. Given the problems, the clock resets to now, and 12 months out is Q3, 2007.
As for Conroe production: 20-25% of desktop exiting Q4, with Mike's reseller lists having it start mid-August, means the beginning of Q4 will be between 6% and 8%. Averaging that out over Q4 comes to between 13% and 16%. Given that ramps are historically slower than predicted, 10-15% of desktop is reasonable. The question is how many mainstream desktop CPUs Intel will sell in Q4; figure these to be mostly 85-90% P4. Figure mainstream desktop is 50% of the total CPU market, minus AMD's production: that is about 25 million, minus 10 million from AMD, leaving 15 million for Intel. That's about 1.5 to 2.25 million Conroes. Add 100K Woodcrests, needed for expanded testing, and 50-100K Meroms due to the longer delays on mobiles, and you get 1.65-2.45 million NGAs in Q4, 2006.
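The chain of estimates above can be laid out as simple arithmetic. Every input here is my own guess from the reasoning above (market size, AMD output, ramp share), not reported data, so shift any of them and the range shifts with it:

```python
# Back-of-the-envelope Q4 2006 NGA (Conroe/Woodcrest/Merom) volume estimate.
# All inputs are guesses from the post above, not reported figures.
desktop_market = 25_000_000            # mainstream desktop CPUs in Q4 (guess)
amd_units = 10_000_000                 # AMD's Q4 production (guess)
intel_desktop = desktop_market - amd_units   # 15M Intel mainstream desktop CPUs

share_low, share_high = 0.10, 0.15     # Conroe share of Intel desktop, Q4 average
conroe_low = intel_desktop * share_low     # ~1.5M
conroe_high = intel_desktop * share_high   # ~2.25M

woodcrest = 100_000                    # server parts, held back by extended testing
merom_low, merom_high = 50_000, 100_000    # mobile parts, delayed ramp

nga_low = conroe_low + woodcrest + merom_low
nga_high = conroe_high + woodcrest + merom_high
print(f"Q4'06 NGA estimate: {nga_low/1e6:.2f}M to {nga_high/1e6:.2f}M")
```

Laying it out this way makes clear the range is dominated by the Conroe ramp share assumption; the Woodcrest and Merom terms are small corrections on top.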
I happen to think they will miss that target on the low side.
Pete
Chipguy:
So when the various Q3 CPU unit market share reports come in, you'll apologize to all for the low numbers of Woodcrests sold. And never spread any of your FUD here again.
Well I don't believe you to have that kind of class. You'll just say, "Wait until ...!" The Intel apologist's theme phrase.
Pete