Wbmw:
This from one who has no relevance?
Who can't even figure out that there is no exclusive content?
Who doesn't realize that vaporware is something that currently doesn't exist?
VIIV is a brand with nothing to show.
Pete
Wbmw:
The only joke is that VIIV doesn't do anything right now. Live does. It runs games faster. It's the do-it-your-way approach rather than Big Brother's way. And VIIV is from a company that failed at web hosting, failed in consumer TV, failed with RDRAM, and is failing on a lot of other fronts. VIIV is far from a sure thing.
And you forget the other "Live" product that succeeded beyond any VIIV aspirations: "Live" TV. And there is the other "Live" product, the SB Live sound cards. Besides, VIIV sounds like it stands for Various Incredible Idiotic Vaporware.
Pete
Dear Smooth2o:
What would happen to VIIV if an OEM made the following AMD system or notebook:
1) Turion64, Athlon64, X2 or FX CPU
2) 1GB DDR minimum
3) DirectX 9.0c-capable GPU like a GeForce 7300 or ATI X1300 minimum
4) HDTV/ATSC/NTSC/FM tuner
5) 7.1 24bit/96KHz sound
6) 10/100/1000 NIC
7) 120GB SATA HD or better
8) Widescreen monitor with at least 1280x768 (1920x1080 a plus)
9) Vista Pro 64
10) All of the top software for TV watching, video and music editing, creation, decoding and encoding.
11) Optional VMware.
12) Optional 802.11n or better.
13) Demonstration programs that can't run on base VIIVs.
VIIV gets beaten on wireless speed, is missing a tuner, is slower at games, and runs slower and crashes doing those demonstration programs.
AMD Slogan: VIIV can't keep up with Live.
Pete
Wbmw:
Were you ever in a R&D lab in a semiconductor company? In any R&D lab at any company? Given your responses, the answer must be a resounding NO!
Prototypes are not made on production machines. They are done by hand. First they had to take a bunch of Woodcrests and pick the few that worked. That's a tiny fraction of all the dice on those 20 wafers. Then of those, they had to find the ones that would run with three loads on the FSB. Of those, they had to find the ones that ran within a given power budget. Then they sorted those by clock speed at that power level. By then you are down to just 20-40 Woodcrest dies.
Then you make a custom run of substrates, usually by hand; there is no production line, as we are far from that stage. Then we laboriously test each substrate to make sure that it is good. After you get twenty good ones, you place the components onto those substrates by hand. Then you test each MCM to see if it works. A twenty percent success ratio isn't that bad when you hand-build an MCM. Now we have four working MCMs that we can use for prototyping the MBs. We pick the MCMs that work and show them to our bosses.
It's a statement of how desperate Intel is that they demonstrated a prototype when only 4 are known to be somewhat working. The demonstration can gloss over the numerous bugs by various means. It's normal for a dozen prototypes to be working before one is shown privately to corporate insiders. Typically hundreds are made before it is shown privately to outsiders. A sample production line is running by the time it is shown publicly. At least that is how Intel used to do it long ago.
All this demo did was show how deeply in trouble Intel must think it is. Contrast that with AMD and Socket AM2 CPUs. There are hundreds of engineering samples at AMD's customers and partners, and not one has been publicly demonstrated. That will change in about a month, and products will likely show up a quarter after that.
Remember that many here state that it takes a year for server testing of samples, and QC NGA isn't even at the sampling stage yet. This just shows that they are at least 15 months away from a real launch, probably longer. If you are not beside yourself with worry, Intel just told you that you should be.
Pete
Duke:
I am not the one who is in contempt. You are the one with contempt: contempt for Japanese law and for the experts in it, of which you are definitely not one; contempt for a law university, which disagrees with you; contempt for other jurisdictions, their laws, and their courts.
Perhaps you should not step into a Japanese courtroom and defend Intel in this. You may have to save face with them. The ancient punishment was committing suicide in order to save the face of your employer. That's a good punishment for lawyers with contempt.
Pete
PS: Given your views, it would be ok for the only grocery store in 100 miles advertising a "buy one, get one free" deal to see "The Duke of URL" coming and force you to buy 50 of them to get 50 more free while letting Doug get just one for his free one. The fact that you only needed 55 doesn't matter. And, of course, you had to sign an agreement to not resell any of them. That according to you is not an abuse of monopoly power. Fun world you live in.
You seem to forget that the experts in Japanese antitrust law have stated that Intel was guilty of violating Japanese antitrust law. That is now considered fact as far as Japanese law is concerned. Intel could refute that this caused any harm and/or that no remedy is needed, but their lawyers told them that they couldn't win either way. By accepting the proposed remedy, they could save face. They accepted the remedy and got to save face.
Japanese law in this area goes after what one does, not what one says. If they break the terms of the remedy, it goes to criminal court, not civil court. Face saving isn't possible then.
Many of the facts as found by those experts are also violations of US law. Those facts even became a case study on violations of US antitrust law at a law university here in the US. After seeing the documents, redacted of course, they determined it violated US laws.
Yet here you are trying to state that Intel didn't violate the law, when everyone with more knowledge of the details who is not working for the defendants in the case claims that AMD has a strong case or would win. They do acknowledge that Intel will likely delay the case as much as possible, and that has been borne out. One thing is sure: justice is not swift. Intel's only real hopes are that AMD will go bankrupt, that AMD will not pursue the case, or that the laws change to make what they did legal or carry minimal damages. The first is very unlikely now, the third has no chance, and the second would likely involve a lot of money, measured in billions of dollars.
Pete
Smooth2o:
But you missed the 6 month delay from now till late Q3, most likely Q4 after the holiday build season.
Pete
Smooth2o:
Current MBs cannot be upgraded, as their VRMs plus the BIOS preclude the higher-precision Vcc requests. Hint: the VRMs are soldered onto the board (especially in mobiles, as height is a major consideration). Thus no current MB will accept Conroe, Merom or Woodcrest. Could they have started Napa MBs for Merom? Yes they could, but that doesn't happen normally. What MB OEMs likely did is wait for Santa Rosa to make the VRM/BIOS changes. Why change twice and double the design, testing and Q/C? Change once and save money. Santa Rosa would not have been much later than the new VRMs on the old roadmap.
Now Santa Rosa slipped by a quarter or two. TTM may make two changes necessary to maintain or increase market share as most customers won't wait 1 to 2 more quarters. Now they start the VRM/BIOS changes for Napa MBs. If everything goes exactly to plan, they may make the holiday build season. But any problems and they will miss and have to release in late Q4. They still will have Merom capable Napa MBs at least one quarter before Santa Rosa MBs, but will it bring in enough money to pay for the extra design, testing and Q/C? Will Santa Rosa slip more making it easier to justify the costs? Only they could know that.
In either case, Merom MBs will be delayed a quarter and maybe more. You might get a few early, but those will likely be grabbed up for prototype testing, with the usual caveats, at ridiculously high prices. None of the big notebook OEMs will likely use the few Merom-capable MBs left to do a launch.
One kicker for those who take the plunge, is that Intel could shaft them by delaying Merom itself making all of their hard work go down the drain. Given Intel's recent history, that is not a small risk anymore.
Pete
Wbmw:
FBD is going to be another Rambust: hot (read: power consumption is too high), slow (read: the latency is too high and the speed too low) and of restricted availability (read: the yields are too low, so it will be expensive). The result: FBD will be Rambust 2, the sequel. When will Intel learn?
Pete
Wbmw:
Since Conroe needs a new VRM, which precludes it from existing 775 MBs even if the chipset can support it, the same is likely true with Merom. The Napa chipset may support it, but the current VRMs will not. Thus Merom will not be a "drop in" replacement. More OEM engineering work will be required, along with the subsequent testing and Q/C time. Thus Merom is delayed even with current MBs.
It must be tough for Intel supporters when Intel tears up recent roadmaps and slips key stuff. Now the latest slip is FBDs being slow, hot and unavailable. That delays all the server and high-end desktop stuff. Is FBD the next Rambust, i820 mess or FDIV disaster?
Pete
You should acquaint yourself with SPEC rules.
I suggest you review them. Patches are allowed in software tools; how else would changes and upgrades get allowed? Patches applied to the operating system are also allowed. And you can run a post-processor tool as long as the source code is available and the tool does the same thing for all code. Replacing all cases of "GenuineIntel" with "AuthenticAMD" is just such an allowed post-process. As it's applied equally to all code generated, it's even allowed for base scores.
You know, even a modern loader in any OS replaces all addresses in a program with relocated ones, depending on where it's loaded, when you run any program. If that's not allowed, no one could run any programs at all. So just on the face of it, your objection is pure nonsense. Some web sites even include the patcher's source code and runtime program when they write about this typical patch. So it's not like the software is unavailable.
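For what it's worth, such a patch is mechanically trivial precisely because the two CPUID vendor strings are the same length, so no file offsets shift. A minimal sketch in Python (the vendor strings are the real 12-byte CPUID IDs; the sample blob is a made-up stand-in for a compiled binary):

```python
def patch_vendor(data: bytes) -> bytes:
    """Replace every "GenuineIntel" vendor string with "AuthenticAMD".

    Both strings are exactly 12 bytes, so the patched file keeps its
    size and all code/data offsets intact -- the property a post-link
    patch of a compiled benchmark binary relies on.
    """
    old, new = b"GenuineIntel", b"AuthenticAMD"
    assert len(old) == len(new)  # must not shift anything in the binary
    return data.replace(old, new)

# Hypothetical fragment of a compiled binary containing two vendor checks:
blob = b"\x55\x89GenuineIntel\x90\x90GenuineIntel\xc3"
patched = patch_vendor(blob)
assert len(patched) == len(blob) and b"GenuineIntel" not in patched
```

Because every occurrence is rewritten identically regardless of what code surrounds it, this is the "applied equally to all code" property mentioned above.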
Pete
Dear Chipdesigner:
I notice that neither of them has shown one link demonstrating that any x86 CPU can't do what its capability flags say it can do. What is the documented case (link, please) where CPUID says this CPU does SSE2, for example, and the underlying CPU can't do SSE2? AFAIK, no CPU puts into that capabilities register what it can't do. Given that, there is no justification for Intel to check for GenuineIntel. Every ethical programmer knows to just check the capabilities register for the appropriate flags, for portability, and proceed if they are present.
And even if there were such a CPU, having it blow up in various programs is the right thing to do. It doesn't do what it is specified to do. That's a broken CPU, and it should be returned for a refund or replacement. Intel has had those; that's why a lot of code still has an FDIV bug check. Intel added in the two instructions not present in their earlier AMD64 implementation (AKA EM64T).
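The flag-based check argued for above can be sketched as follows. The bit positions are the documented CPUID leaf-1 EDX positions, but note the sample EDX value here is hypothetical rather than read from real hardware:

```python
# CPUID leaf 1 reports feature flags in EDX; per the x86 CPUID
# specification, bit 25 is SSE and bit 26 is SSE2.
SSE_BIT, SSE2_BIT = 25, 26

def has_feature(edx: int, bit: int) -> bool:
    """True if the given CPUID feature bit is set."""
    return bool((edx >> bit) & 1)

# Hypothetical EDX value advertising both SSE and SSE2:
edx = (1 << SSE_BIT) | (1 << SSE2_BIT)

# Portable dispatch: consult the flag, never the vendor string.
use_sse2 = has_feature(edx, SSE2_BIT)
```

A dispatcher written this way runs the fast path on any CPU that advertises the feature, which is exactly the portability argument being made.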
Pete
Chipguy:
So what compiler is AMD still using for its own SPECint submissions?
http://www.spec.org/cpu2000/results/res2005q4/cpu2000-20051212-05262.html
"Intel C++ 9.0 build 20050912Z for IA32"
It may be patched to replace "GenuineIntel" with "AuthenticAMD". Then the test is bypassed. Many are doing this, and it's justifiable given Intel's actions.
Yet here is the same hardware on a different compiler, with better results both base and peak:
www.spec.org/cpu2000/results/res2005q4/cpu2000-20051212-05264.html
It gets 1743/1945 vs 1708/1940 for your link.
Intel C++ 9.0 is no longer the best at integer on AMD. It has been far exceeded in FP; Sun's Studio 11 in particular blows it away in SPECfp_2000.
http://www.spec.org/cpu2000/results/res2005q4/cpu2000-20050906-04678.html
2256/2518 vs ?/?.
No one has wanted to use the Intel compilers by themselves in an Opteron x54 SPECfp_2000 submission in the last two quarters. Quite a few are adding the PGI C and Fortran compilers under Windows XP or 2003.
Pete
bmw:
You must know that Intel's TDP is measured using some low-power programs; Prime95 easily beats Intel's TDP rating on their CPUs. For dual and later quad core, you need to run 2 to 4 copies. Using the same method AMD uses to rate their CPUs' maximum power draw, Sum(Imax*Vmax) over all supplies, typically yields a figure 33% higher than Intel's TDP rating. Then of course you need to add in the power used by the NB for the FSB switch and memory interfaces. I didn't include that used by the FB-DIMM buffers over the underlying DRAM.
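The AMD-style rating described above is just a sum over the supply rails. A quick sketch (the rail figures below are hypothetical, for illustration only, not taken from any datasheet):

```python
def max_power(rails):
    """Worst-case power rating: sum of Imax * Vmax over every supply
    rail, the Sum(Imax*Vmax) method described above."""
    return sum(i_max * v_max for i_max, v_max in rails)

# Hypothetical rails as (Imax amps, Vmax volts): core supply, I/O supply.
rails = [(57.4, 1.40), (2.0, 2.50)]
worst_case_w = max_power(rails)   # a ceiling, unlike a "typical-load" TDP
```

The point of the method is that the result is an upper bound by construction, which is why it comes out well above a TDP measured with a typical workload.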
And you forget to add in the additional latencies when adding FB-DIMMs to each channel, plus the cost of switching in the NBs and MCHs. Adding one FB-DIMM to a channel increases latency more than one HT hop. Then there is the fact that 4 cores compete for one FSB, whereas in the Opteron that traffic goes through the far higher bandwidth of the on-die crossbars (XBAR and SRQ), clocked at CPU speed versus chipset speed (2.8GHz versus 333MHz). Lastly, because Opterons use exclusive caches and can use the remote copies, the effective cache size is more like 9MB for the quad 90nm and 17MB for the quad 65nm in a dual socket. Because Intel uses inclusive caches, they need that at every socket.
As to remote accesses, those are mitigated by intelligent use of processor/memory affinity. The occasional need for remote memory typically arises in synchronization and coordination, and that's a small percentage of overall memory requests; if it weren't, caches would be completely ineffective. Latency is king in most applications, even server-based ones. Here the advantage is all AMD.
At the typical quad-socket level, each FB-DIMM channel (of 4) would have to carry 4 FB-DIMMs, compared to 2 DDR2 DIMMs in each of AMD's 8 channels (AMD can have 4 DDR2 DIMMs per channel). That many FB-DIMMs will make latencies increase faster for Woodcrest than for Opteron, widening the present latency gulf.
So solutions needing high BW go to AMD, solutions needing low latency go to AMD, and solutions needing higher IPC also go to AMD. The only thing still going for Intel is complacency (and corruption).
Pete
Re:
Parameter   QC NGA     QC Opti
Freq        2.33GHz    2.4GHz
Cache       8MB        4MB
Die size    2x135?     400
Memory BW   8.5GB/s    10.6 to 12.8GB/s
Power       80W?       140W?
Not quite, Alan. Woodcrest QC will have two FSB connections and quad FBD channels, so the combined memory bandwidth is 17GB/s, which is greater than QC Opteron's, so it should scale better. I would also expect Intel to do a 2x Conroe TDP for their MCM to achieve higher frequencies, so you can expect Woodcrest QC to clock at least 2.66GHz @ 130W. Though there may be an 80W "LV" option for blades as well.
Here is my version of the chart.
Parameter   QC NGA      QC Opti @ 90nm      QC Opti @ 65nm
Freq        2.66+ GHz   2.0+ GHz            2.6+ GHz
Cache       8MB         4MB                 8MB
Die size    2x135?      400                 350
Memory BW   17.0GB/s    10.6 to 12.8GB/s    same
Power       130W?       140W?               same
Your chart forgets that the chipset covers two sockets and puts one FSB to each socket. There aren't enough pins for Intel to put two FSBs into one Xeon socket. So Alan81 is correct.
If you want to show the dual socket version of the table with TDPs set to same standard as well:
Parameter   QC NGA      QC Opti @ 90nm      QC Opti @ 65nm
Freq        2.66+ GHz   2.4+ GHz            2.8+ GHz
Cache       8MB         4MB                 8MB
Die size    2x135?      400                 350
Memory BW   17.0GB/s    21.3 to 25.6GB/s    21.3 to 33.6GB/s
Power       170W?       Up to 140W?         same
                        NB incl. (DDR2/1066 support)
ROFL. Intel could give away three for every one sold and IPF MPU ASP would probably still be an order of magnitude higher than AMD's MPU ASP.
Not when the target is some HPC research outfit; then Intel gives them away. That's an ASP of $0, for the reading-challenged out there.
When will Intel go to the promotion, "Please give this orphaned Itanium CPU a home. He's free!"?
Pete
And if you offer two for the price of one, as Intel is doing with the Madison Itaniums, the brand goes further down the toilet.
Also, that French site had better check their CPUs; they might be HP-built mx2s instead of the hyped Montecitos, i.e., two Itanium Madisons in each MCM à la Paxville, Dempsey and Smithfield. They couldn't tell by the clock now, could they? I wouldn't put it past Intel.
Pete
Chipguy:
More reading comprehension problems.
"As IBM continued to show price/performance leadership in the Unix server space with its dual-core Power4 processors, and HP and Sun Microsystems readied their own dual-core RISC processors, Intel had a conversion of sorts, it killed off Chivano and pushed out the "Montecito" kickers to the Madisons to 2005, but decided to make them a dual-core chip, thereby getting dual-core processors to market almost a year earlier than planned."
http://www.itjungle.com/tug/tug050604-story02.html
From further down the article, "In late 2002, when Chivano was still on the roadmaps and Intel was not understanding the situation, HP decided to engineer what amounted to a dual-core Madison for its own server line. Of course, by changing the Itanium roadmap, Intel has shortened the useful life of the mx2 modules made by HP. That statement is only true if you believe that Intel can get Montecito out the door on time, of course."
So the design was in the works in 2002. Tape-out in late 2002/early 2003 plus 12-18 months of testing gets HP to put out the "dual core Madison" in 2004, just as the Geek article stated. Montecito was the successor to Madison. Let's trace the original design plan from inception: Intel designed Merced, HP designed McKinley, Intel designed Madison, and thus HP designed Montecito. Result: Montecito was dual core from the start. Only later did Intel take over the design from HP, which didn't want to design CPUs anymore. Intel Marketing's clout got these writers to put some face-saving spin on not making the tape-out/sampling dates. And of course you bought the revisionist history hook, line and sinker.
Pete
Chipguy:
With your apparent reading problems, you better hope you are not fired.
You claimed Montecito was not originally dual core. The Geek article (dated 2003) says you are wrong (as usual). It first slipped from mid-2004 to mid-2005; Madison was given more cache and a small bump to 1.6GHz. With the typical feature creep that seems to dog Intel, features were added to Montecito, things changed, etc. A good deal of those add-ons turned out to be problematic, so it slipped to late 2005. Then more problems happened and it slipped into 2006 with features either turned off or removed. The clock was not as high as planned either: first it was above 2GHz, then at 2GHz, then 1.8GHz, 1.7GHz and finally just 1.6GHz.
Just because Intel wasn't ready for tapeout in early 2003 for shipping in mid 2004 as per plan, you want to give Intel a pass. So design and tapeout got delayed at least one year. Then testing showed problems and missing the frequency and power targets which slipped it altogether about 2 years. Remember, it was to be released mid 2004 after a 12-18 month testing period.
Face it, every Itanium CPU has been later than planned and due to that and some clock speed reductions, slower than competing CPUs. Montecito is just another disappointment.
Pete
Chipguy:
Your allegation that the IPF design currently known as Montecito slipped from 2004 is pure and utter nonsense.
http://www.xbitlabs.com/news/cpu/display/20040619180753.html
This after acknowledging that Montecito slipped. It then slipped some more. Volume wasn't available in 2005.
In addition, Intel, of Santa Clara, Calif., is planning to introduce dual-core processing in its 64-bit Itanium architecture in mid-2005 with the introduction of Montecito.
http://www.eweek.com/article2/0,1895,1612324,00.asp
But this is only a warm-up for the next version of the Itanium, code-named Montecito, which began sampling in 2004.
By far the most ambitious server-processor design in terms of transistor count is Intel's Montecito, using 1.72 billion transistors (Figure 5) in 90nm to create a dual-core processor.
http://www.hp.com/products1/servers/integrity/2004_Server_Processor_of_the_Year_article.pdf
Of course this site's a little biased.
Of course this just shows how deep in a revisionist fantasy you are:
Intel has confirmed that the dual-core Itanium is delayed until 2005. The dual-core "Montecito" chip was originally supposed to be released next year (2004), but Intel will instead launch a faster than 1.5GHz Madison core with 9 MB of on-chip cache.
http://www.geek.com/news/geeknews/2003Jan/bch20030117018221.htm
Of course Intel apologists always like to change history to spout Intel Marketing's current line. Too bad for you that this article came from the original delay announcement. Of course they changed the design, as 90nm would not have been mature enough in 2004. Montecito was always dual core. Slipping it by a year allows one to add more stuff in.
Pete
PS: As with Prescott, it would have been better if they had just taken Madison, mirrored it and done a dumb shrink. Prescott turned out badly at 90nm too.
Chipguy:
Here is link for you: http://news.com.com/2100-1001-941924.html
Look down at Successors. Montecito (2004). Slipped two years into 2006. From 7/8/2002.
Not the little slip you made it out to be. SGI has a valid complaint against Intel for being late and slow to boot.
Pete
Chipguy:
Montecito has slipped more than once. It's always funny that Intel apologists forget previous slips and call the latest date the original target. Montecito was due to come out last summer, as per the 2003 announcement that slipped Madison 9M to 2004. Summer 2005 came and went, and no Montecito. The apologists said, well, not until fall 2005. Then it was end of 2005. Now it's sometime in 2006.
Shipping "in volume" is an oxymoron wrt Itanium anyway. Itanium has yet to ship in real volumes (typically meaning 100Ks per quarter), so one can declare "volume" at any given point, which makes the statement nearly worthless. What is your definition of "volume"? Enough for 1000 servers? 100? 10? 1? Over what time, 1 week? Intel has not sold enough CPUs for 1K servers, covering all Itanium models, in an average week.
No, Montecito is late. If it takes much more time to come out, it will be late squared (DOA).
Pete
Dear Smooth2o:
Hhmmm! Sonoma doesn't have the problem. Sempron's ATI chipset doesn't have the problem (the higher power usage is because the discrete GPU in the Sempron notebook uses more power, and has much higher performance, than the slow integrated graphics in the Intel notebooks, and the Sempron notebook has a half-capacity battery). The only chipset with the problem is the one specially made for Yonah.
If it were a simple fix, it would have been done in a jiffy 6 months ago. Because nothing happened in those six months, the fix likely requires a major amount of design work and a respin or two. Because they were not going to use this chipset for more than a few quarters, they thought the problem would be hard to find and they could get away with it. Smells like the FDIV-bug thinking again. When will they learn?
Pete
Do you bother to check which messages I am responding to? I don't read the entire thread at once; I go a post at a time. So non-Apple Yonah is in the field. Yonah problems are multiplying, as the errata (not bugs) sheet is getting longer by the minute. Just another buggy Intel CPU.
Going by SPECint and SPECfp, the 3.73GHz Dempsey is not going to compete even against Opteron x80s, much less x85s and x90s. Dempsey will not catch up to current Opterons. Yet another "leap ahead" that falls woefully short.
Pete
Dear j3pFlynn:
But the news did state that there are no 285 SEs, just 285s at the normal 95W TDP. Sun could call them anything they like. I suppose they would like to have 290s in hand when 285s are available for everyone else.
The problem is that Keith will think that they are unavailable even though customers would have 290s in their hands.
Of course to be fair, then he should state that Yonah isn't in the field either because only Apple has them. But he doesn't say that.
No, the only criterion should be: do customers have them? And the answer to that is yes; customers have Opteron 285s and 885s in hand. The fact that customers have 185s is not disputed; they are simply called FX-60s.
Pete
No, Keith, it's your strawman to say they need to be launched. All they need to be is available for sale. Thus AMD needs only to ship one grade more, not two as you state.
As I stated, launching is irrelevant. Given that they are in the field, launching it is a formality, not a requirement.
Which would you rather have: x90s in hand and available for sale, or x90s launched? Having a 3.73GHz Dempsey on Bensley launched doesn't mean a thing if they are not available. I remember when Intel didn't have any units of a flagship SKU available for 5 months after launch.
To reiterate, 95W TDP Opteron 285s are in the field. In fact, those in the know stated that AMD didn't give Sun 285 SEs at 110W, but simply started shipping 285s at the standard 95W TDP, with Sun allowed to sell them before others, à la Apple with Intel's Yonah.
Pete
Dear Keith:
Then why are these in the field: http://cgi.ebay.com/2-Opteron-285-DUAL-CORE-OSA285FAA6CB-INTROUVABLE_W0QQitemZ6841724109QQcategoryZ1....
That sort of says that they are shipping now, as they were produced in week 45 of 2005. Also notice that they are variable TDP: 49C equals 35W and 67C equals 95W. That's right, the OPN states that they are 95W TDP maximum: OSA285FAA6CB, if you can't tell from the picture.
I expect apologies from you as the launch is irrelevant once they begin to ship and are present in the field.
Pete
PS: IMHO, it is far better to have a SKU shipping than launched with none in the field. Intel has been doing a lot of the latter lately.
Dear Keith:
It is you who are mistaken. AMD is shipping 285s at or below the 95W TDP level. That they are not launched yet is irrelevant to this discussion. AMD has a history of shipping parts that are being sold before the parts are launched. It typically gets buried due to their incremental improvements, and shows up in the variable TDP ratings burned onto their CPUs.
The 2.8GHz Paxville is not as fast as the 2.4GHz Opteron 280; that would take a 3.7GHz Paxville. A 3.73GHz Dempsey might get to 2.6GHz Opteron 285 performance with the increased FSB speed and count, but that will be superseded by 2.8GHz Socket F Rev F Opterons. Intel would have to ship 1333MHz dual-FSB 4+GHz Dempseys to match. That is not scheduled to happen unless Intel tries to force servers with water-block or phase-change cooling on OEMs.
And then you have to deal with the bugs in the current 65nm P4s' thermal management. By the time they have that fixed, it may be H2 2006, when 65nm SiGe Opterons start to show up and Intel's transition to NGA starts.
I'm sure that Intel will find some esoteric benchmark, or some bastardized version of a current one, that shows they have caught up, but the general ones will still show Opteron to be on top in 1S, 2S, 4S and 8S.
Pete
Dear Keith:
AMD is already shipping standard-power 285s (2.6GHz DC), so it's just one bin now. X90s will likely come with Rev F, if not higher. SiGe is supposed to be a 20% kick in frequency, and 2.6*1.2 is 3.12GHz, making an x95 at 3GHz probable, with 200/300/900 in the wings at 3.2GHz before the jump to 65nm, if needed.
Pete
Chipguy:
The only one in la-la land is you; Q4 shows just how far into that land you are. Don't compare future Intel with what AMD has now; compare them at the same time. Socket F will be here probably before Montecito and definitely before NGA. Revision F will be as well, along with 90nm SiGe. Given how late Montecito already is, it might even slip further, to beyond NGA. And NGA will likely end up competing with 65nm SiGe-based AMD Opterons with cHT3.0, which AMD has stated includes 32-socket capability.
As for the benchmark, it probably doesn't use the three things that 8 sockets brings over 4: 8 IMCs vs 4, 64 DIMMs over 32 (or 32 over 16), and more GHz per core. Network-bound benchmarks tend to be that way. Those that excel with higher memory BW, higher memory capacity, higher MIPS, higher MFLOPS and/or lower interprocess communication do better with 8 sockets of single core than 4 sockets of dual core. Benchmarks that like low latency also favor the 4-socket over the 8-socket systems. SPECrate tends to favor the SC 8-socket systems over the DC 4-socket ones, for example.
You just like to dream AMD retreats or even stands still while Intel catches up. Not going to happen!
Pete
2) What makes you think HP would use Horus? Or do you think HP could whip up an 8+ way chipset in less than a year?
Did you forget about Socket F? It could double the number of sockets in easy glueless systems. It will allow a hypercube arrangement straight up: that's 16 sockets with a 4-hop maximum and roughly a 2-hop average. Instant 32-way without needing a chipset. AMD states that this will get to 32 sockets glueless. With already-available dual core, that's 64-way. With quad core, that's 128-way, and that pushes IPF into the very small 256-way-and-up market.
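The hop counts claimed above are easy to check: in a hypercube, node IDs differ by one bit per link, so the hop count between two sockets is the Hamming distance of their IDs. A quick sketch for the 16-socket (4-dimensional) case, not tied to any particular AMD topology:

```python
from itertools import combinations

def hops(a: int, b: int) -> int:
    """Hop count between two hypercube nodes = Hamming distance of IDs."""
    return bin(a ^ b).count("1")

n_dim = 4                                    # 2**4 = 16 sockets
pairs = combinations(range(2 ** n_dim), 2)   # all distinct socket pairs
dists = [hops(a, b) for a, b in pairs]

worst = max(dists)                  # 4 hops, the worst case stated above
average = sum(dists) / len(dists)   # about 2.13 hops over distinct pairs
```

So "4-hop maximum, roughly 2-hop average" holds up; the exact mean over distinct pairs is 32/15, just over 2.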
HyperTransport reduces cost for glueless systems; otherwise the chipset has to work around AMD's broadcast cache-coherency system. And the IMC makes a high level of RAS very difficult? You seem not to realize that features that help in the entry-level server market can hurt you in the mid-range segment.
And you fail to take into account that the lower-level segment gets additional features that then take over in the higher level because of favorable performance/cost and feature/cost ratios. And you fail to say which RAS features, as they are typically used, cannot be done with an IMC. Examples and links, please.
This power and cost argument is absurd for an 8+ socket server, x86 or otherwise. Also the power knock against FBDIMM is overblown. Much of the power the buffer chip uses is power not expended elsewhere in the system.
Sounds like the same argument that was used for Presslers and P4s: that power wasn't a factor. Yet we see that it is a factor; it killed the P4-style uarch going forward. One draw may not be significant, but multiple copies are. A few watts may not be much in comparison to the typical server, but an 8+ way server must have dozens, if not hundreds, of them, times the dozens or hundreds of servers in the computer room. Thus the few watts becomes quite a few kilowatts and a significant fraction of the power supplied.
Another way to look at it: there are between 4 and 8 DIMMs per CPU. That's 32-64W, about as much as a server CPU. The P4 was killed for using about that much more per CPU. And one of the reasons used to promote FB-DIMMs is the higher count per server (and CPU), which makes them less likely to be used. It's not the first time that a new option kills itself by screwing up the reason for its own existence.
Better scalability than NGA? Now you are on to matters of faith not technology or business and I don't ascribe to your religion.
NGA's scalability is not known. But since it uses the current infrastructure, its platform-derived scalability problems are well known. IPF has many of the same ones. And that scalability is lower than the platform scalability of Opteron. Your faith is what took a hit this last quarter.
Pete
smooth2o:
Intel has not had 4 years of QoQ growth. They had a decline from Q4 to Q1 in both 2004 and 2005. It's readily apparent on Intel's own conference call slides. AMD has had more consecutive QoQ growth than Intel.
Pete
Dear Jhalada:
I already have three HD titles in avi format. They are roughly 9GB in size each (stored on 3 DVD-Rs each). People already have HD DVRs that record the 2.4MB/s ts stream. I guess Intel needed it for 2005 when I played mine on AMD.
Typical Intel, 3 years late.
Pete
Dear Alan81:
That is based on the thermal profile that assumes an ambient of 42C and a 0.48C/W HSF: 55W*0.48C/W+42C = 68.4C, rounded up to 69C. Here is a HSF with a rating of 0.33C/W: http://www.svc.com/dk8-8id2a-ol.html
Using the same ambient and TDP, you get 55W*0.33C/W+42C = 60.15C, or about 60C. As you can see, you don't need a big HSF; a cheap one will do. A 42C ambient is 42C*1.8F/C+32F = 107.6F. Not your typical inside-the-case temperature unless you live in an unairconditioned room during August in Phoenix, AZ. The above el cheapo aluminum HSF ($7.25) would allow for a 140F ambient and still properly cool the Opteron 275HE. A Pressler 950 using the same HSF, on the other hand, would need to be in a refrigerator (6C or 43F) to stay under 60C.
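The steady-state die-temperature arithmetic above, made explicit (all numbers are the ones in the post: 55W TDP, 42C ambient, HSF thermal resistance in C/W):

```python
# Steady-state die temperature: T_die = P * theta + T_ambient,
# where theta is the heatsink/fan thermal resistance in C/W.

def die_temp_c(tdp_w, theta_c_per_w, ambient_c):
    """Die temperature for a given TDP, HSF rating, and ambient."""
    return tdp_w * theta_c_per_w + ambient_c

def c_to_f(c):
    """Celsius to Fahrenheit."""
    return c * 1.8 + 32

stock = die_temp_c(55, 0.48, 42)   # 68.4 C, rounded up to 69 C
cheap = die_temp_c(55, 0.33, 42)   # 60.15 C, about 60 C
ambient_f = c_to_f(42)             # 107.6 F
```

The linear model shows why a lower C/W rating buys so much headroom: each 0.01C/W shaved off the HSF drops the die a little over half a degree at 55W.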
Pete
Wbmw:
It should in this case, since there is no OS that supports audio/video streaming, as well as transparent home networking as well as WinMCE.
Baloney! My brother's company has a box that can record and play back more than 100MB/s of streaming video/audio, and it runs Linux, which has transparent networking better than anything Microsoft has. And it's robust and secure.
WinMCE being robust and secure is an oxymoron! ROTFLMAO!
Pete
Wbmw:
I'd say your single core Athlon 64 notebook is going to look pretty outdated compared to a Centrino Duo notebook with half the weight, twice the battery life, and twice the performance. I guess it won't stitch together 50k x 50k pixel images very well, but it will do almost anything else far better.
Only in selected narrow apps. There are plenty of apps where SC Turions are faster. First, as many here know, on any 64 bit app or OS a Turion SC runs rings around Yonah, hereafter referred to as Yonot. Why didn't Intel allow third parties to benchmark Yonot against Turions? They must do poorly in games, especially the Centrino Duo ones, as their integrated video is piss poor compared to the typical Turion chipset. Even when paired with discrete video, Yonot must not do well against Turion in games.
And oh, if we get to select some benchmarks for Turion, we can do encryption and decryption, where the 64 bit versions run 500% faster (6x) than the 32 bit flavors on the same CPU. Thus Yonot would run no faster than 16.7% per core of an equally clocked Turion. Even granting a perfect 2x scaling of two cores versus one, a 2.16GHz Yonot would only run as fast as a 720MHz Turion. That's bested by a MT-28 at 1.6GHz. Even at its 800MHz idle clock, a Turion MT-28 would best the fastest Yonot.
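The back-of-envelope above can be written out. The 6x speedup for 64-bit crypto is the post's claim, not a benchmark result, and perfect 2x dual-core scaling is deliberately generous to Yonah:

```python
# The post's crypto scenario: if 64-bit code runs 6x the speed of
# 32-bit code on the same CPU (the post's "500% faster" claim), what
# 64-bit Turion clock does a 32-bit-only dual core match?

def equivalent_turion_ghz(yonah_ghz, cores=2, speedup_64bit=6.0):
    """64-bit-equivalent clock of a 32-bit-only chip, assuming
    perfect multi-core scaling (generous to the dual core)."""
    return yonah_ghz * cores / speedup_64bit

eq = equivalent_turion_ghz(2.16)   # 0.72 GHz equivalent
```

On those assumptions a 2.16GHz dual core lands below even an 800MHz idle clock, which is the comparison the post is making.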
Not very good for Intel. Yonot faster than a MT-28 Turion. Yonot able to run 64 bit anything. Yonot able to run faster in games. Yonot going to get good graphics performance. Yonot going to get good future-proofing. Yonot going to be able to defeat rootkits like Sony's DRM security holes. If you buy one, Yonot going to live it down. The only one in the deal who laughs is Intel. They know they sold one to a sucker. Yonot laughing!
Pete
Wbmw:
You are confusing address space with physical memory. Linux, Windows, and various other OSes use the HD to expand physical memory via virtual memory. All modern x86 CPUs do the virtual-to-physical translation using page tables, with TLBs as caches to speed it up.
On Yonah, you can only allocate 4GB of address space across the various applications, kernel, drivers, and devices. It is not uncommon for 128MB of graphics memory to be allocated 256-384MB of address space. The kernel takes about another 64-256MB depending on what it needs to do and the various loaded drivers.
When you run applications, each is allocated address space for code, data, stack, and heap (temporary data). It is not uncommon, on machines doing many tasks at once, to allocate more address space than the machine has physical memory; that is exactly the kind of environment where dual cores are desired. The OS then uses an area of disk, known in Windows as the swap file and in Linux as the swap partition, as additional virtual memory. On my home box I have 4GB of swap and 1GB of physical memory (plus 256MB in the video card). It doesn't use much of that swap (mostly due to the efficiency of Linux and its applications; on Windows it uses up to 2GB) but, to take advantage of all of it, I would need to run a 64 bit OS on an AMD64 system.
So you don't need 2GB of physical memory to need the larger 64 bit address space; everything you run simultaneously just has to allocate no more than 4GB. Unless, that is, you want to swap entire applications (the running code and all associated data of the current program) to disk. It's slow, but you can run. SCs run just fine that way, though slowly, and the system can appear to go away for a while as programs swap in and out. For DCs the problem is that the programs running on both cores have to be in memory at the same time, else you get no benefit from the second core. So a DC machine needs more memory, not less.
AMD64 machines can allocate 48 bits of virtual address space (256TB) even when there is only 1GB of physical memory. The OS simply swaps in the pages (4KB is typical on PCs) needed by the running programs and kernel. That's a far smaller subset of all the virtual memory, and pages can be swapped in and out much faster than the GBs of whole programs in the other method. That reduces any apparent going-away periods.
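The translation mechanism described above can be sketched in a few lines. This is a toy model with a flat dict standing in for the multi-level page tables that real x86 hardware walks:

```python
# Toy virtual-to-physical translation with 4 KB pages: the low 12
# bits of an address are the offset within a page, the upper bits
# select a page-table entry, and a TLB caches recent translations.

PAGE_SHIFT = 12                    # 4 KB pages
PAGE_MASK = (1 << PAGE_SHIFT) - 1

page_table = {0x12345: 0x00042}    # virtual page -> physical frame
tlb = {}                           # tiny translation cache

def translate(vaddr):
    vpn, offset = vaddr >> PAGE_SHIFT, vaddr & PAGE_MASK
    if vpn in tlb:                 # TLB hit: skip the table walk
        frame = tlb[vpn]
    else:                          # TLB miss: walk the page table
        frame = page_table[vpn]    # KeyError here = page fault,
        tlb[vpn] = frame           # i.e. the OS swaps the page in
    return (frame << PAGE_SHIFT) | offset

paddr = translate(0x12345ABC)      # -> 0x42ABC
```

The page-granular mapping is why demand paging beats whole-program swapping: a fault brings in one 4KB page, not gigabytes of program image.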
The 2GB limit in most PCs is mostly due to Windows XP giving applications only 2GB, because of the way XP partitions address space: the other 2GB are reserved for the kernel, drivers, and devices. The partition point can be changed with a boot parameter to 3GB for apps and 1GB for the rest, but only XP Pro and Win2000 allow it (XP Home and all the other MS OSes can't, except XP64 and later Vista, of course). In Linux it is a kernel compile-time config option. To efficiently use more than 2GB, you need AMD64. And installable memory sizes are ratcheting up as we speak.
Talk to Linus Torvalds: he has said for the last 5-6 years that x86 needed the AMD64 extension to a 64 bit address space. It was getting more and more difficult to keep allocations efficient within a 4GB footprint. The 48 bit virtual address space of K8 was a godsend, and kernel developers can (and do) use it to better optimize performance and increase security. Most x86 OS designers and maintainers agree with him on this, including those at Microsoft.
Pete
Dear Michael:
But where AMD started, in servers, these things were worked out long ago. If something was SMP-unfriendly, it was repaired or replaced by something that was friendly (only in-house software could escape that). Most people who need DC on the desktop were already doing such things, or came from the server/workstation world. Either the apps got fixed, or users went looking elsewhere for better apps.
By the time DC is required (or sold by default), these teething problems will be gone. Laptops are the last area that will go DC; those that get it first will be the DTR ones, of course. By the time the typical user gets it by default, all the software will have been fixed on the desktop, where it originates. Joe Sixpack will look at the DCs but opt for the cheaper SCs; he'll take a little extra weight to save on price. 64 bit OSes, apps, and games, however, will trickle down to the laptop world much faster than DC desires. The fact that 64 bits stopped the Sony DRM hijacking, and security features like it, is a big hot button for most people and a much easier sell, especially when they get it free or, even easier, for less money.
Pete
Wbmw:
You make the argument that 32-bit processors won't be able to access the 64-bit features of Vista, but so far no one has listed which features these will be. There will be a 32-bit version of Vista, and what if it has 95% of the features in the 64-bit version? It would make the 32-bit argument silly.
Hmmm. You can run 64 bit and 32 bit applications simultaneously; you can't do that with a 32 bit only CPU. In 64 bit mode, programs can't hijack system calls the way they do in 32 bit XP. You can load larger images with 64 bit applications and separate multiple 32 bit applications into their own memory (no sharing needed). Video editing can hold entire movies in RAM, with the multiple copies needed to undo edits and video filtering.
Heck, a standard ts video stream is 2.4MB/s. Two hours of that is 17.28GB, and that's for one copy; you would need at least twice that for before-and-after copies of each change. There are just some applications that run far better with more than 4GB of total memory. Oracle is one off the top of my head. I'm sure that VLSI design and circuit simulation are far beyond being shoehorned into 4GB (actually 2-3GB in XP and Win2000). And it only takes one 64 bit application, or a bunch of memory-hungry 32 bit apps, to push one into needing a 64 bit OS. Virtualization also begs for a 64 bit underlying OS.
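The stream-size arithmetic above, written out (using the post's decimal GB, i.e. 1000MB = 1GB):

```python
# Size of a transport-stream recording: rate (MB/s) * seconds,
# times the number of copies held for before/after editing.

def movie_gb(mb_per_s, hours, copies=1):
    """Recording size in decimal GB (1 GB = 1000 MB)."""
    return mb_per_s * 3600 * hours * copies / 1000

one_copy = movie_gb(2.4, 2)             # 17.28 GB
with_undo = movie_gb(2.4, 2, copies=2)  # 34.56 GB
```

Even a single copy plus one undo buffer is far past what a 32-bit process can address, which is the point being made.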
Games now recommend 1GB (some talk of 2GB to work better), GPUs are now up to 512MB (where you need 2-3 times that in address-space allocations to support them), and there can be two of them in SLI setups. Whoops, we are already over 4GB in allocations even with a single 32 bit game.
You say that 32 bits is good enough for laptop use. That's the same tired old argument, repeated through the ages, spoken every time the current limits were exceeded by something new. You could just as well say you don't really need 32 bits for laptop use, that the 16 bit stuff was good enough: it was more efficient than today's bloatware, and a 16 bit 286 or 68K is a real sipper compared to a Dothan or even a Geode.
Sorry, but all it takes is the hint of something real soon now and the jump to the next level is on, especially if the old stuff runs as fast or faster.
So Yonah can run the old, ancient apps. It might, in some situations, be faster. But it will not be able to do anything with 64 bits. Turion can do everything Yonah can (sequential multitasking is typical and has a very long trouble-free history) and so much more. So Yonah can run games and encode simultaneously at 170% of Turion's speed on a 32 bit OS. But Yonah runs XP, Solaris, and Linux under VMware at 1-5% of Turion's speed (if it doesn't crash). Swapping is such a morass for performance ("like hitting a wall" is one quote that comes to mind).
Where Turion can simply demand-page the active memory areas, because they are separate in the memory space, Yonah has to swap entire program spaces to disk because they share the same space. The latter works, but it is ungodly slow compared to demand paging. That is why demand paging won out when the two were in competition (Intel and IBM took the program-swap option, while Motorola, Sun, and DEC took the demand-paging option).
Moreover, single core processors are never going to be able to get the benefit of dual core processing as more dual core applications become available,
This is total BS! All single-core CPUs can run all threads of all applications using round-robin (or some other) scheduling, switching between active threads. So there will never be a case where a SC can't run a dual- (or more) threaded application. DC just allows two threads to run at the same time. When, as in most code, only one thread is active at any given time, the SC runs at full speed while the DC has one core idling, doing nothing. And when the DC cores are clocked slower than the SC core, the ratio doesn't need to be very high for the SC to be faster overall than the DC.
In fact, IMHO, for laptops the SC/DC clock ratio needed for the same overall performance won't get much above 110-115%. For desktops it is more like 115-120%. Only in server work does the typical ratio get above 180%.
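The clock-ratio argument above can be modeled Amdahl-style. The parallel fractions below are illustrative guesses picked to match the post's ranges, not measurements:

```python
# Amdahl-style model of the SC-vs-DC argument: given the fraction
# of time a workload actually has two runnable threads, how much
# faster must a single core be clocked to match a dual core?
# The example fractions are illustrative, not measured.

def dc_speedup(parallel_fraction):
    """Throughput of 2 cores vs 1 at equal clock: serial work
    runs on one core, parallel work is split across two."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / 2)

# The SC clock ratio needed to match DC equals the DC speedup:
laptop = dc_speedup(0.2)    # ~1.11x -- light multitasking
desktop = dc_speedup(0.35)  # ~1.21x -- heavier multitasking
server = dc_speedup(0.9)    # ~1.82x -- mostly parallel load
```

Parallel fractions of roughly 0.2, 0.35, and 0.9 reproduce the post's 110-115%, 115-120%, and 180%+ figures for laptop, desktop, and server workloads.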
In fact, those very users who would pay the premium for DC on their laptops are the same ones who will want 64 bits. And AMD has the only options: A64 X2 and DC Opteron 1xx. Smithfield and Pressler are just too power hungry for that work.
Pete
Tecate:
You always forget about Linux, Unix, AIX, Tru64 UNIX, COS, etc. These were 64 bit before AMD specified AMD64. Linux was running AMD64 when Opteron was released. For you, if it doesn't say Microsoft or Apple, it doesn't exist. Well, I was running UNIX Version 7 on a PDP-11 before there was any Microsoft OS (DOS 1.0). IIRC, SCO Xenix was running multiuser and multitasking on an Altos 586 (8086) before MS released DOS 1.0. I was running 32 bit Unix on 386s before Microsoft (or Apple, IIRC) had a 32 bit OS. I ran 64 bit UNIX (called OSF/1 at the time) on Alpha before Itanium was even publicly specified.
So you don't know what you are talking about. Like always.
Pete