Wbmw:
That Extreme Edition CPU did not include the memory or chipset, so you aren't comparing them on an equal basis; the GPU card includes both. In addition, the 4870X2 is a 2.4 TFLOPS processor. It would take roughly 100 EE CPUs to match that, which puts their combined power usage well over 20 kW.
Besides, on a performance-per-watt basis, the 4870X2 beats the GTX 280, and on a performance-per-dollar basis as well. The EE CPU is nowhere near as high in graphics performance on any basis: absolute, per watt, or per dollar.
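A back-of-the-envelope sketch of that comparison (the 2.4 TFLOPS figure is from the post; the per-CPU GFLOPS and power numbers below are illustrative assumptions, not measured values):

# Rough check of the 4870X2 vs. Extreme Edition claim.
# Only the 2.4 TFLOPS number comes from the post; the rest are
# assumptions made purely for illustration.
GPU_TFLOPS = 2.4      # HD 4870X2 peak single precision (from the post)
CPU_GFLOPS = 24.0     # assumed peak for one EE CPU
SYSTEM_W = 200.0      # assumed power per EE CPU plus memory and chipset
CARD_W = 286.0        # assumed board power for the 4870X2

cpus_needed = GPU_TFLOPS * 1000 / CPU_GFLOPS
cluster_kw = cpus_needed * SYSTEM_W / 1000

print(f"EE CPUs needed to match 2.4 TFLOPS: ~{cpus_needed:.0f}")
print(f"Combined power of those systems: ~{cluster_kw:.0f} kW "
      f"vs ~{CARD_W / 1000:.2f} kW for the single card")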
Pete
Dear Mas:
That should be 2GB, not 2MB. That would make it the largest gaming GPU, memory-wise. Of course, the nVidia supporters will point out that it is 2x1GB. Still, it's possible AMD/ATI added features allowing each GPU's local 256-bit channel to be used by the other GPU. That would effectively extend the memory beyond anything nVidia has (a good deal closer to 2GB than 1GB). AMD has a lot of know-how in that area.
Pete
Smooth2o:
No, the error in your logic is the assumption that 3D isn't supportable under those power constraints. Otherwise the 690G or the 780G (or nVidia's mobile IGPs) wouldn't be able to do 3D at the same power levels.
And if you are talking about the G45, it has a hardware fault in the video decoding logic. OEMs seem set to use a Broadcom part to do that job for it (more logic, cost, and power). And until Intel's drivers prove that the G45 can do DirectX 10.1, the 3D part won't be believed either. The previous part, the G35, either had a non-fixable 3D hardware fault or a continual series of crappy 3D drivers (depending on which story you listen to); either way, it can't do DirectX 10.0 as it was advertised to. The 965G also had problems, and the 915G couldn't even do minimal Aero. 2D is the easy part, as people have been doing that since the '70s.
As I said, Intel's track record is terrible in this area. With that history, it's a wonder anyone believes Intel on graphics. "Real Soon Now" will be met with "Yeah, and pigs will fly then too!"
Pete
Smooth2o:
They only "captured" that market when 2D GPUs are included. This from a company that built the i915 which could not run Aero on Vista. And only if you don't include consumer markets with GPUs like LCD TVs, LCD monitors and STBs. Intel has no presence there. If you go by discrete GPUs, Intel has practically no share at all.
Its all depends on the definition of "the market". If I said the car market. The leaders wouldn't be GM, Ford or Toyota. The leader would be Mattel. They make over 100 million Hot Wheels cars a year. Far larger than the 15-17 million cars and trucks produced each year world wide. Just because the definition would include toy cars doesn't mean that Mattel is a great builder of cars.
Pete
Wbmw:
Yet if you look at latency, two evolutionary designs over two years do quite a bit better than one bigger design every two years. If you make one error, like making a pipeline too long or not providing enough resources, you are screwed for a year. With the longer design cycle, it is easier to fall into the trap of not looking far enough ahead, so that when your next gen comes out it isn't enough better to be competitive. And then there is the trap of looking too far ahead and missing a new, different need of the market.
You see, the problem isn't that the process gives you twice the transistors every two years; it's the design. GPUs can change their ISA and still be very effective, because the only ones who have to know are the driver writers and, to a smaller extent, the game optimizers. A CPU has to grandfather in its ISA or lose a lot of its customers. Look at what happened with x86 and IA64: AMD offering a third option, x86-64, was so well received that even a company as huge as Intel had to backtrack or be destroyed.
So if a new demand appears, they can add a function unit for it and extend the ISA to cover it. If it isn't well received, they can have their driver writers not use it. If it is well received, they can change other things to make it better or more effective. If over time it becomes so good that the old way never needs to be used, the old portion of the ISA is simply tossed. This incremental improvement works extremely well in most cases. The only time a big all-at-once change works is when the target is relatively fixed and well known and the method to reach it is well understood.
That hasn't been the case on the GPU side so far. As more transistors become available, more stuff is found for them to do. Some methods that couldn't be used before for performance or cost reasons are now possible and even desired. Other seemingly hot-button stuff was ignored and later dropped.
Of course, that isn't Intel's real problem. Its problem is that it doesn't have a real idea of how to do today's stuff at today's performance levels. So it tries approaches that, on paper, look good with today's stuff, and the driver writers can't get them to perform well enough to be a starting basis for that incremental improvement process. The year or two between attempts doesn't help much either, because they can't take a failure, change what was broken, and get feedback quickly enough to stumble onto the tricks needed to get that performance.
Essentially, Intel is too big to get a decent product out given the constraints. Going to the quicker design cycle that a small company would use until it got close is not something they are used to. When they buy talent in this area, it is quickly smothered by the mismatch between the speed at which the talent knows things need to be done and the slowness with which Intel works. AMD does not have that problem. Its lack of resources forces it down the incremental-improvement route, and it understands that. And when it buys talent in an area, it doesn't smother it; it works with it.
Tying the design cycle to the process cycle is one of the reasons Intel fails at graphics. A better design shouldn't have to wait for the process to shrink.
Pete
Chipguy:
This from a person who claimed, without any links, that the typical Itanium server cost $90K +/- $5K and had 10 CPUs in it, when even a little basic research into 16P IPF servers (the smallest configuration that could hold 10 IPF CPUs) shows that to be just plain wrong. If that is the quality of your thinking, perhaps you need to rethink what you do.
BTW, Intel is considered by most a nearly complete failure in the graphics business, even after many attempts. As was stated before, competence in one area doesn't imply it in all others. I would add that one's competence isn't even guaranteed in the area one works in.
About the only area you seem to have any competence in is the quick quip.
Pete
Wbmw:
Here is the official word on Intel's Tick-Tock strategy from the company's own website:
http://www.intel.com/technology/tick-tock/index.htm
There, a CPU takes two years for a full Tick-Tock cycle. GPUs go through that every year: a new generation and a shrink/update. The 65nm R600 launched about ten months ago, the 55nm RV670 about four months ago, and the 55nm R700 (RV770) is two months out. So I was correct that in GPUs, "Tick-Tock" runs twice as fast.
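A quick sketch of that cadence, using only the month offsets stated above (mapping them onto actual calendar dates would be an assumption; only the spacing matters here):

# Cadence check using the month offsets stated above (relative to "now").
gpu_events = {"R600 (65nm)": -10, "RV670 (55nm)": -4, "RV770 (55nm)": 2}

offsets = sorted(gpu_events.values())
steps = [later - earlier for earlier, later in zip(offsets, offsets[1:])]

print("Months between GPU steps:", steps)                  # [6, 6]
print("Full GPU 'tick-tock' cycle:", sum(steps), "months")  # 12
print("Intel CPU tick-tock cycle: 24 months (per Intel's own page)")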
That is why you didn't have a clue, trying to tell someone who did. That's why you are losing it, big time.
Pete
Wbmw:
They don't do it with C2D because the C2D is all tied up just displaying the Blu-ray video. And we don't know how good the G45's quality will be compared with the 780G's UVD, which by all accounts is really good. Intel has been known to miss graphics targets before. The G45 will have other faults too, as it will be too slow for many 3D games, whereas the 780G will be perfectly adequate. The G45 will have to add a $50 discrete GPU just to be adequate, and the 780G with a $50 Radeon HD 34xx card will outrun it again.
So the upgrade to G45 will still leave Intel behind at the platform level. And if the G45 gets delayed any more, it will be up against the 880G, which offers even more performance than the 780G (rumors say 2-3 times faster). Intel forgets that in the GPU business, tick-tocks run twice as fast (every 6 months instead of every 12 months). Get too far behind and you can't ever catch up. And Intel is way, way behind.
Pete
No, you don't understand that the Power choice locks in their customers. Going Itanium would do to IBM what dropping PA-RISC did to HP. Itanium lost HP revenue compared to the PA-RISC days; many customers switched to other vendors, including IBM, rather than move to Itanium.
Half of the Power revenue and profits is more than the cost of keeping Power on top. The same is not true of Itanium: the cost of keeping Itanium up with Power is more than the revenue and profits it brings in for Intel. The only real reason for its existence was that Intel wanted to go proprietary and away from x86, where they must live with competitors. AMD64 blew that right out of the water.
The arguments you use for IBM dropping Power are more applicable to HP dropping Itanium. HP could switch those customers to x86 and live off the service and support revenue. They make money with x86 servers and have their very profitable printer business to fall back on.
Of course, what I said about Power locking in IBM's customers applies to HP and Itanium too (which they co-own with Intel and have to help with its upkeep). They would lose their hold on their customers if they ported their software to run on x86 (even with their glue); many more could potentially switch at any time after that.
In either case, the arguments for dropping their respective CPU lines don't hold much water. It's far more costly for either to drop their own CPU line.
Sheriffbakaay:
It's also obvious that if they used Itanium, they would quickly lose the bulk of their sales and service as a "me too" company. They sold Itanium servers and didn't make money on them. Besides, they already have a higher-performing CPU line that makes them scads of money.
Intel's Itanium, OTOH, doesn't make them scads of money. What little they break out shows that they lose money on that part of the business overall.
Pete
Ephud:
You definitely think that what you say is true, even when it's obvious it isn't.
Pete
Ephud:
Intel has that huge fixed overhead to service. They lose a lot more on the first parts they sell, and if they lose the volume, they start hurting badly. And going by the numbers, it seems it costs Intel more in variable costs to make a CPU than AMD; it's just the higher ASPs they charge that give them higher gross margins than AMD.
BTW, haven't you heard? AMD makes money in the CPU division. It's the GPU side that is losing money, and given what has happened lately in that part of the business, they will likely turn that around soon.
Lastly, your statement that "they lose money on every CPU sell" falls flat on its face when one considers that every time AMD sells more of them in a quarter, their gross profits rise and their losses (in the quarters they have them) go down. That is the classic case of making money on every additional unit sold.
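A minimal sketch of that marginal-profit argument (every dollar figure below is an invented assumption; only the logic mirrors the point above):

# If ASP exceeds variable cost, every extra unit sold shrinks the
# quarterly loss, even while a large fixed overhead keeps the total in
# the red. All figures are made up for illustration.
FIXED_OVERHEAD = 1_200_000_000   # assumed quarterly fixed costs ($)
ASP = 70.0                       # assumed average selling price per CPU ($)
VARIABLE_COST = 45.0             # assumed variable cost per CPU ($)

def quarterly_result(units_sold: int) -> float:
    contribution = (ASP - VARIABLE_COST) * units_sold
    return contribution - FIXED_OVERHEAD

for units in (30_000_000, 40_000_000, 50_000_000):
    print(f"{units:>12,} units -> net {quarterly_result(units) / 1e6:+8.1f} M$")
# More units sold => smaller loss, i.e. money is made on each extra unit.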
Pete
Chipguy:
That goes many times more for Itanium than Power. On the basis of revenue and profits, Power related business makes far more for IBM than Itanium related business does for Intel. So on the basis of your arguments, Intel should get out of the Itanium business long before IBM gets out of the Power business.
So you get REAL!
Pete
Chipguy:
Do those numbers include business software, maintenance and other services? IBM gets a major portion of that while Intel gets very little, if any. So on the basis of revenue for the respective companies, IBM's Power-related revenue far exceeds Intel's Itanium-related revenue, and the growth in absolute dollars is far higher for the former than the latter.
Since none of the numbers break out IPF CPU revenue, it's hard to compare against other lines. From what can be estimated, IBM gets far more profit from Power-related business than Intel gets from Itanium-related business. So it should be Intel getting out of the Itanium business, since it delivers so much less profit and margin than their cash cow, x86 CPUs, rather than IBM getting out of the Power business, which is the bulk of theirs.
Pete
Chipguy:
Your assumption also flies in the face of historical precedent. How many companies give up on their cash cow while it's still making them lots of money? Name one. IBM makes a lot of money on their mainframes. To keep them desirable, they have to make them faster than all comers. To keep them fast, they need a process that's on the cutting edge. To get that, they need to invest in the R&D to stay at the cutting edge. No foundry will have a cutting-edge process until progress slows significantly. So they must either pay a foundry to get a cutting-edge process and keep it there, or do the research and fab the chips themselves. They do the latter.
Not because it's cheap, but because it enables them to make billions of dollars. It doesn't matter what value they assign to the products that come out of that fab. The output only goes into their own products, so it can be valued at any number, even zero. That won't affect their bottom line of billions in profits.
It's like elevator companies. They can give the elevators away; the money is made on the required service contracts. So it costs them a few billion in losses to make elevators, and they even hand out big discounts on the installations, but they charge big bucks on the multiyear service contracts and make tens of billions on those. You claim that the business only gains a few billion dollars, much less than others who make ten billion, but fail to see that they are making fifty billion every year even after the losses are added in. Sure, they might have gotten out of the stairs business and the electric motor business, but they won't drop the elevator design and construction business or the elevator safety R&D. If they fall behind on those, they'll lose a lot of the business to competitors who kept up.
Your precedent argument could just as easily be used to say that Intel got out of the DRAM business, has spun off the flash business, and has shut down the display business, so now they must be getting out of the CPU business.
It's your dream, but not a good argument once one thinks it through.
Pete
Dear Mas:
Even if IBM leaves the R&D portion, there are still the other members, Chartered, Infineon and Samsung, who would help keep it up. Even if IBM left, AMD would likely pick up their researchers to keep going. More likely, though, IBM would let AMD take over the R&D and pay them to keep fabbing Power and doing the R&D to stay on top.
That said, I agree with you that it isn't at all likely they will get out of litho R&D altogether. It keeps their CPUs among the fastest around, and they need that to keep their cash-cow mainframes around. It is chipguy's dream for IBM to get out of the CPU business; IBM's Power is doing too well against his favorite, Itanium.
Pete
VBG:
Prove same.
Even if true, that still doesn't make my post any less true.
Just because someone works in a field doesn't mean they have any credibility. There are so many examples of people who should know better making some blanket statement and being proven completely wrong.
Case in point: Bill Gates' famous (or infamous) 1981 line, "640K is more RAM than anyone will ever need." Or the later one, that the internet was "just a passing fad".
More:
"I think there is a world market for maybe five computers."
Thomas Watson, chairman of IBM, 1943
"I have traveled the length and breadth of this country and talked with the best people, and I can assure you that data processing is a fad that won't last out the year."
The editor in charge of business books for Prentice Hall, 1957
"Heavier-than-air flying machines are impossible."
Lord Kelvin, president, Royal Society, 1895.
"This 'telephone' has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us."
Western Union internal memo, 1876.
"Drill for oil? You mean drill into the ground to try and find oil? You're crazy."
Drillers who Edwin L. Drake tried to enlist to his project to drill for oil in 1859.
"The wireless music box has no imaginable commercial value. Who would pay for a message sent to nobody in particular?"
David Sarnoff's associates in response to his urgings for investment in the radio in the 1920s.
"Everything that can be invented has been invented."
Charles H. Duell, Commissioner, U.S. Office of Patents, 1899.
More at:
http://wilk4.com/humor/humore10.htm
Being in a field doesn't make you competent. The last case proves it.
Pete
Wbmw:
What full-frame EUV exposure tool does Intel have? NOTHING! NADA! ZILCH! The Nikon EUV1 tool is still sitting at Nikon's factory; it was supposed to ship to Intel last year, but Intel pushed it out indefinitely. Canon doesn't have one.
ASML, the other big litho tool maker, has two ADT EUV scanners in the field. One is at IMEC in Leuven, Belgium and the other at Albany NanoTech (College of Nanoscale Science and Engineering, CNSE) in New York. They currently do 5 WPH, and IMEC's is being upgraded to 10 WPH with CNSE's to follow.
Being first to try something means little if someone else actually does it later. The Russians had a great space program: first to orbit, first with a multi-person crew, first to spacewalk. But they didn't rendezvous first, didn't take the earthrise picture over the Moon first, and didn't land on the Moon first; the US did that. Another example is all those who tried to build an airplane. Lots of big wigs of their day tried and failed, some with lots of money. Two bicycle makers tried and succeeded.
Intel did early EUV work, but lately seems to have slowed its research. IBM did early EUV work as well, but you seem to forget that. They and AMD are closer to the goal of a fully fabbed, full-size die using EUV. ASML says they will have a pre-production tool (pre-production because it will only do 60 WPH) by the end of 2009 (as of 11/30/2007). Five are on order as of February 2008, per ASML. One or more of those must be for AMD's Fab 4X, since it will be started well before then (shells don't take that long to build, going by experience with Fab 30) and should be ready for the first tools. ASML estimates that it will be part of an operating production line by mid-2011.
Isn't that when Intel says they will be at 22nm? AMD, having been burned too many times, is more conservative and says 2012. Previous experience from many others, including Intel, says that is the better bet.
As for the rest of your TOU violating post, it is being referred to the appropriate authorities.
Pete
VBG:
It is when Intel has failed to do a full-frame EUV shot to pattern a layer. Intel is behind in EUV development. You do know that IBM was doing development in EUV, X-ray and e-beam litho before Intel? That AMD and IBM are collaborating at IBM's research facility on Albany NanoTech's campus?
Oh, I forgot, Intel has to be ahead for Intel boosters' fantasies to be real.
Intel failed to get ArF immersion to work; IBM and AMD did get it to work. So Intel had to skip to the complex Double Patterning Technology, DPT, in order to get to 45nm, else they would be behind in process node, even given the higher costs of using it in both fab space and yield. That's why it takes them four 45nm fabs to equal the output of four 65nm fabs. When they need to get ArFi tools, they have to wait in line behind all of the other adopters who already ordered them. ASML had only made about 60 of them by the end of 2007; Nikon and Canon have made somewhat fewer. Most of ASML's are going to AMD, IBM, Toshiba, Micron, Chartered, Infineon and Samsung, and Nikon's are going to the other NAND and DRAM producers. That doesn't leave much for Intel to fill out what's needed for a 32nm process line.
So it's you who has a "credibility gap".
Intel has fallen behind before, and with AMD in particular. Remember AMD64? They even had to copy, verbatim, the AMD64 manuals, including the old, since-changed information (which introduced even more errata for P4s at the time). It looks like they will have to again.
And that hurts your flawed view of reality. TOUGH!
Pete
Wbmw:
You still missed the fact that the MET only shoots a 0.6mm by 0.6mm field. No die sold by Intel was ever that small (even around 2004). It can't even shoot on the 200mm or 300mm wafers used by all leading-edge fabs today (and in 2004 Intel had 300mm fabs for CPUs; AMD got there in 2005 with Fab 36, alongside the 200mm wafers Fab 30 was churning out). BTW, Exitech isn't even in existence today.
AMD and IBM use the CNSE EUV ADT tool near IBM's research facility in Albany, on that campus. Since Intel doesn't have a full-frame tool, Intel is behind AMD/IBM. Too bad for Intel.
And IBM was in EUV (and even X-ray litho) before Intel:
http://www.sematech.org/meetings/archives/litho/ngl/20010829/Poster%20Presentations/12-Photronics%20NGL%20WS%20poster.pdf
So Intel may have been ahead, but they have been passed by others, including AMD/IBM. Poor Wbmw. Didn't read the paper his link pointed to. Just saw something and assumed it was something it was not.
"The date on the paper was SEPTEMBER 2004. You couldn't even read my response, let alone the link I gave you."
Oh, I did, including all of the supporting material. Did you? Or did you gloss over the parts that contradicted your comments? The latter, given the above.
"prior to 2016"
Do you have a clue as to what "prior to" means? According to the ASML document I linked to, the ASML PPT EUV tool will ship in 2009. It will support 60 300mm wafers per hour. There are 5 on order; one or more would likely go into Fab 4X near CNSE. Given the typical time for setup and process verification, they should be running in 2011 with saleable CPUs out in 2012. BTW, the ITRS uses logic (ASIC, DSP) for determining when a half-pitch node is needed (at foundries). See the chart on slide 9:
http://www.secinfo.com/d139r2.u1Mm.b.htm
DRAM reaches 22nm at about the same time as logic, and NAND is about 4 years ahead of logic at 22nm (mid-2012). Of course, leading-edge CPUs will need 22nm a few months before that, and Intel needs it sooner still to stay ahead of AMD (they don't normally do well when AMD is at the same node as they are).
Look at slide 25 (with repeats later on) in the same 11/30/07 document.
While ArFi with double patterning could get down to 32nm, getting to 22nm will need multiple (beyond double) patterning with 1.35NA ArFi, or something still beyond double patterning (just not as high a multiple) with 1.55NA ArFi. 0.25NA EUV does it easily with a single exposure and a single pattern.
Notice that if you fill in double patterning (0.22) at the 45nm node for 0.93NA ArF dry, the technique Intel uses in its 45nm fabs, and (0.15) for the 32nm node, they would have to go to multiple (>2) patterns with multiple exposures to do 32nm using dry litho. They would have to do something similar with 1.35NA ArFi (immersion) to get to 22nm. Since those tools go for 45 million dollars a pop, and many would still be needed to do the job of one EUV tool, EUV using PPT tools, even at 100 million a pop (really 65 million), would be cheaper with less risk (single exposure, single pattern versus many patterns, each with double exposure). That is three high-end 135WPH ArFi tools versus one EUV PPT tool to get the same 60 WPH for the same layers. And EUV would likely also be able to do single exposure at 16nm, beyond what a 1.55NA ArFi tool could reach even with lots of patterns and exposures. That is a lot of equipment that would be relegated to upper-metal-layer processing (a short equipment lifetime) in a couple of years, versus the 4 years such tools are normally relevant. Intel might have the cash to do that, but it will cost them more later in many ways.
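A rough sketch of the tool-count arithmetic behind that comparison (the prices and raw throughputs are the figures cited above from the ASML slides; the passes-per-layer count, 3 patterns each double exposed, is an assumption used only for illustration):

import math

ARFI_PRICE_M = 45        # 1.35NA ArFi (XT1900i-class) tool, $M, per the post
ARFI_RAW_WPH = 135       # raw wafers per hour
PASSES_PER_LAYER = 6     # assumed: 3 patterns x 2 exposures for a 22nm layer

EUV_PRICE_M = 65         # EUV PPT tool, $M, per the post
EUV_WPH = 60             # single exposure, single pattern

arfi_layer_wph = ARFI_RAW_WPH / PASSES_PER_LAYER
arfi_tools_needed = math.ceil(EUV_WPH / arfi_layer_wph)

print(f"Effective ArFi throughput per tool: {arfi_layer_wph:.1f} WPH per layer")
print(f"ArFi tools to match one EUV PPT tool: {arfi_tools_needed}")
print(f"Capital cost: ${arfi_tools_needed * ARFI_PRICE_M}M in ArFi "
      f"vs ${EUV_PRICE_M}M for EUV")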
Look at slide 54 to see what Intel is doing to get to 45nm with 0.93NA ArF dry, and how much more easily AMD can do 45nm using 1.35NA ArF immersion, with 31nm possible. It will likely take Intel longer to get immersion tools and get them working than it will take AMD to go to double patterning for 32nm.
No, Intel is letting AMD/IBM go past while hoping for a breakthrough elsewhere, just like they couldn't solve immersion before AMD/IBM. And since that shreds your "Intel is great" fantasy, you ignore it.
See the CNSE's ASML ADT EUV tool on slide 58.
EUV throughput as a function of source power is on slide 60, along with tool introduction timing (PPT, PT1, PT2).
Estimated tool prices (in euros) are on slide 65; multiply by 1.5 to get USD. It looks like tool prices are being inflated by FUDsters, given that an EUV PPT (60WPH) will be about $65 million and a 1.35NA ArFi (XT1900i at 135WPH) is about $45 million.
Estimated costs per minimum feature size are on slide 66. Notice how double patterning costs about the same as the previous tool did, while new tools are cheaper at the same node. This is what happened to Intel with the use of DPT instead of the ArFi that AMD uses. Given that it will take Intel longer to go immersion than it will take AMD to go DPT, it's possible that AMD will get to 32nm around the same time as Intel.
Slide 73 shows ASML's future roadmap.
Pete
Chipguy:
Intel has no wafer-processing EUV tools. Sure, they can whip something up in a lab to throw a little EUV light on a small area, but it would not make any usable test chip (by that I mean a production design with millions of transistors, not some small number of gates) on a production-sized wafer.
There are currently only three EUV tools that can shoot a full frame (33mm by 22mm) on a real production 300mm wafer: two ASML ADTs and one Nikon EUV1 (not counting whatever may be in the process of being built at the two factories). The ASML ADTs can now process 5 WPH (yes, five wafers per hour); I'm not sure of the rate of the Nikon tool. 80 WPH would be a production-level 300mm wafer tool, IIRC.
Here is an ASML presentation given to the SEC as an addendum to a 6-K filing:
http://www.secinfo.com/d139r2.u1Mm.b.htm
It goes into the need for EUV (13.5nm light) at the 22nm node, and what 193nm ArF immersion does for 45nm and 32nm.
Pete
Alan81:
Not a single wafer is processed by that "EUV pilot line". Its EUV tool only has a field size of 0.6mm square. Name one production die sold by Intel that was 0.6mm square or smaller. You can't.
If this is what you mean by having an EUV line "up and running", then your definition is way too broad. They may get some test shots done on small chips, but no wafers would be processed. In fact, they think so little of this "line" that they pushed out the acquisition of a real EUV tool from Nikon in order to proceed with ILT, double patterning, immersion, or some combination to bypass EUV. It is not going to use nanoimprint.
http://www.eetindia.co.in/ART_8800469569_1800000_NT_81d34204.HTM
Pete
Wbmw:
Failure to read the documents you link to is a common problem of yours. From the link on page 3:
"but a very small field size of 600 x 600 microns (a full field size is 26 mm x 32 mm)."
Name me the last production die Intel made that was only 0.6mm square. The last logic die. The last CPU die. AFAIK, Intel never made CPU dies that small. Nor logic ones. Even test chips, used for the validation of a production line, were never that small.
Oops, you took it to mean that any tool is good enough to count as "up and running". Also, does it even process a wafer, or just a small chip? I guess you could open the large port for a 300mm wafer, but isn't that a lot of screwing around just to change a wafer for processing? I see that you missed the part about needing to be "running test WAFERS through the tool" to be "up and running".
AMD and IBM succeeded in producing working test chips on the wafer they sent through the Albany NanoTech EUV tool. As the article below states, all they need to do now is run all of the EUV-required processing steps on a wafer (some of the later metal layers do not need 22nm resolution) and produce a working die to satisfy my definition of "up and running". Although any layer processed by EUV on a working die is technically enough to satisfy that definition, I want all layers that need EUV (22nm) to be done on the tool.
http://www.digit-life.com/news.html?10/10/24
Boy would Intel be shitting bricks, if AMD produced a working 22nm CPU through EUV tools. Or a Fusion CPU with many CPU, GPU and APU cores. Or a system on a die. Just add memory and other desired components.
Pete
Koog:
It's hard to believe him when Intel has no EUV tool. They were supposed to get one from Nikon by the end of 2007, but it never shipped; it is still at Nikon's facility.
It's hard to have EUV "up and running" without the equipment to do so. It's also hard to believe when AMD and IBM did the first successful full-field EUV test chip on the first metal layer. It was on a 300mm wafer, processed by Fab 36 both before that metal layer and after.
It's easier to believe that IMEC (Leuven, Belgium) has EUV "up and running", with at least one EUV scanner tool (ADT), the tool that exposes the resist. Albany NanoTech (CNSE) also has one of those tools; it is likely the one that did the full-field EUV test for IBM and AMD. Intel's RP1 facility in Hillsboro, Oregon, doesn't have any EUV tools from either ASML or Nikon.
But usually "up and running" means that the EUV process is fully functional. I set the bar for "up and running" higher than just having a tool and pushing test wafers through it; I put it at having usable chips being made. So far, none of the EUV tool sites satisfy that level of operation. Those two sites are at least at the level of pushing wafers through the tool, or at least they were.
The IMEC tool is said to be in the process of being upgraded at this time, so CNSE is running tests for IMEC on its EUV tool, with the understanding that when CNSE's tool is upgraded, IMEC will reciprocate and run CNSE test wafers through its EUV tool. So the current "up and running" label could only be applied to one site (not counting the ASML and Nikon factories where, at the least, the Nikon prototype sits).
http://www.semiconductor.net/article/CA6524322.html
http://www.azonano.com/news.asp?newsID=2938
http://www.eetimes.com/news/semi/showArticle.jhtml?articleID=206103303
Thus, Alan81 is incorrect about Intel having EUV "up and running" at RP1. Either they ship wafers overseas through IMEC in Belgium, or they ship them to CNSE in Albany. They don't have the Nikon EUV1 scanner at RP1.
Pete
Dear Alan81:
Intel has no EUV tool to work with; Nikon hasn't shipped it to Intel yet. ASML has shipped a prototype EUV tool to CNSE in Albany, near the Luther Forest site where AMD plans to build Fab 4X. IBM may have another one at its East Fishkill fab, where AMD does its research. Intel does belong to the IMEC consortium, which has an EUV tool, but not at any Intel site; it is also an ASML prototype.
So you are wrong that Intel has in house EUV "up and running".
Pete
Smooth2o:
Yes, here is the patent, awarded to WARF in 1998, long before C2D was sold:
http://www.patentstorm.us/patents/5781752.html
And yes, it is in the C2D documentation and advertising, so Intel is infringing on the patent. AMD's K7/K8/K10 don't speculate on load-store ordering and fix it up afterward the way C2D does, so they won't be affected by this patent.
As for what it will cost Intel, that depends on the settlement. The last time Intel was sued for patent infringement, it cost them $550 million, and that was for something buried in the design. For something that is advertised and documented, it will likely be higher. Intel can't afford to redesign C2D to remove it without hurting IPC and performance, and they certainly can't afford to be unable to sell them (even just for the time it takes to redesign, test, and validate).
Pete
Smooth2o:
And what are you saying? Nothing relevant! Intel has committed more unlawful acts (patent infringement), and they got sued for it.
As for WARF, this is their website:
http://www.warf.org/
The news release with more details:
http://warf.org/news/news.jsp?news_id=221
Read it and weep!
Pete
Smooth2o:
On the contrary, Intel used the ideas that were patented in every Core 2 CPU made. They even advertised those ideas as a valuable feature of their new CPU. Thus Intel thought that those ideas were valuable. Else why point them out?
No, it's Intel that keeps doing things they aren't supposed to. Look up patent infringement before shooting your mouth off.
OTOH, go ahead and shoot your mouth off; we will be blessed with the emptiness of your ideas.
Pete
Smooth2o:
Don't have much reading comprehension, do you? WARF isn't a semiconductor company; it is the University of Wisconsin's alumni research foundation, which patents the university's inventions and uses the proceeds to fund research and education. Instead of paying them for a license, Intel stole the ideas and used them in their Core 2 CPU designs.
It would be like someone coming up with a producible anti-gravity generator, getting a patent on it, and then, while he was talking to GM about licensing it, having GM take his ideas and build flying Cadillacs and Suburbans. By your argument, that would be covered by a cross-licensing agreement he has with GM. As if an individual would even have a cross-licensing agreement with GM, or would be satisfied with that in lieu of the billions of dollars the invention would be worth.
Get real! That argument doesn't hold any water.
Pete
Smooth2o:
WARF is the Wisconsin Alumni Research Foundation at the University of Wisconsin. They hold the key embryonic stem cell patents, among many others. The foundation helps the university's researchers, who do the basic research and development, patent their discoveries, and it uses the money to further research and education. Thus they are not patent trolls; they represent the original inventors. So they are not scumbags. It's the Intel management who are scumbags; they prove it every time they open their mouths.
Pete
Wbmw:
Same old story. When Intel is caught red-handed, you ignore it. Intel has a history of paying out for patent infringement; the last one was for $550 million, IIRC.
Thus Wbmw's rose-colored world ignores more Intel misdeeds. The fantasy is yours.
Pete
Wbmw:
Perhaps you should get a C2D or C2Q before Intel has to stop selling them.
Intel infringed on a WARF patent, and WARF is nasty about companies that do that.
http://www.jsonline.com/story/index.aspx?id=715549
A few billion dollars for the license should send the right message to infringers of WARF patents. Else Intel will have a major revenue stream stopped cold.
Pete
Subzero:
The "basic" laws of physics has changed obver the years. Clasical theories defined an ether and going against that grain labeled one a crackpot. At that time the "basic" laws were thought to be all known. Little did they know that in a few decades, it would all unravel. Einstein's Photon Effect, the ultraviolet explosion and Michaelson-Morley cracked those "basic" laws and made a mockery of them. Now if you take the classical view, you are thought to be a simpleton.
Most do becuase it can come close to the physical laws as long as certain assumptions are made. However, as we get smaller, the effects begin to be large enough not to ignore. Yet we don't take them into account and try to mitigate their effects. So we can still use the classical methods of design that we are good at. And to do this requires higher currents than would be necessary, if one considers quantum effects. The designer could also use quantum effects to remove some parameters needed by classical devices and thus, the limitations they cause.
Once such circuits are known, they can be replicated like any other design block. Thus quantum effect circuits could be used in places along the critical paths and normal slower ones in the non critical paths. This can also be true of other alternative circuit types as well.
Do I have to enumerate such circuits? No, they are being researched by many and they are being implemented in silicon as we speak. Current scaling is thought to be good until 22nm. It wasn't too long ago (<10 years) that 90nm was thought to be the end of scaling. We are well past that.
Pete
Chipguy:
Yes, there is one set of real physics, but many theories about it. It was once thought, during the classical era, that all of physics was known, but that was dashed by the Michelson-Morley experiment. Classical circuits have their limitations, but there are other circuits, like quantum-effect ones, that bypass many classical limitations. Now we have quantum chromodynamics and theories like string theory. Most polysilicon devices look only at classical effects and try to ignore the quantum ones.
Yes, it requires different thinking and different circuits to implement, but it can get far higher performance at the cost of large design changes. There is also the possibility of using dynamic circuits, or circuits that don't need large on/off differences to compute without error. Sure, they may leak, but the leakage is kept in check by allowing considerably lower on-currents, so overall current usage is less while performance increases.
You forget that all else has to be equal for your statement to be true, and unfortunately for you, something else can be different. Your unvoiced assumptions can be false, making your statement just a load of FUD.
Pete
Chipguy:
There is physics that can allow smaller polysilicon transistors that don't leak much yet have very high performance; it does require a different approach to circuits, though. But then, AMD/IBM et al. haven't used standard polysilicon processes anyway; they have used strain, SiGe and many other performance enhancers. I have heard this problem raised time and time again, and the doomsayers have been proven wrong just as often. What seems to scare you is that AMD will be at the same process generation as Intel.
Intel has a big throughput loser in using double patterning instead of waiting for immersion to get to 45nm. That is what they paid for a 9-month time-to-market reduction. It will bite them in the ass, and early reports bear that out. They can build enough fabs to get enough output, but it will hit them in the cost of goods.
Pete
Tecate:
The AMD TLB bug is tiny compared to the FDIV bug, because the latter corrupts data. C2D has far more errata, some of which cause uncontrolled execution (jump to a random address and start executing what's there), data corruption (altered data with no way to get back to the original values without a backup) and system crashes (the least problematic error of the three). The fourth type is those which don't affect anything. FDIV is of the second type; the AMD TLB bug is of the third type. C2D has many, still unfixed, errata of that type.
At least AMD fixes their errata. Intel just ignores theirs (until it bites them in the ass). FDIV was a complete fiasco for Intel. They knew about it for 6 months prior, according to them. Knowingly shipping a CPU with a type-two erratum is very bad; not even trying to fix it was just as bad. They only fixed it because it became public knowledge. The P3-1.13 was less of a fiasco (a type-three erratum), but it showed conclusively how slowly Intel's top bin sold (fewer than 100 in a whole month) and how poor their notion of "obscure software" was.
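A small sketch of the severity ranking described above (the type numbers and their ordering follow the post; the example entries are hypothetical):

# Severity ranking of errata as described above: type 1 (uncontrolled
# execution) is the worst, then type 2 (data corruption), type 3 (system
# crash), type 4 (no observable effect). Example names are hypothetical.
from enum import IntEnum

class ErrataType(IntEnum):          # lower value = more severe
    UNCONTROLLED_EXECUTION = 1      # jumps to a random address and runs it
    DATA_CORRUPTION = 2             # silently alters data (FDIV class)
    SYSTEM_CRASH = 3                # hangs or crashes (TLB-bug class)
    NO_EFFECT = 4                   # documented but harmless

errata = {
    "FDIV-style divide error": ErrataType.DATA_CORRUPTION,
    "TLB-style hang":          ErrataType.SYSTEM_CRASH,
    "harmless spec deviation": ErrataType.NO_EFFECT,
}

for name, kind in sorted(errata.items(), key=lambda e: e[1]):
    print(f"type {kind.value}: {name} ({kind.name.lower()})")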
Pete
Elmer:
I have been on major server installs at Fortune 500 companies like Ford, DuPont, Cargill and others. But Elmer, who doesn't know what they do when they intend to buy a new server platform, doesn't realize how long it takes to get through all of the testing they do. He must think that just because it's another Intel CPU, they test for a month and assume a whole bunch of stuff still works like it used to.
IT people are risk-averse. They would rather test and make sure than assume and get into trouble. If a mission-critical server goes down, it can lose the company tens of millions of dollars every hour. They could not earn back that money even if they worked their entire lives for the company. They would be fired on the spot and never work at that level again at another such company. Against that risk, would anyone not test thoroughly? Especially since they have to test not just the hardware, but the software, procedures and people too.
Given that mindset, they don't trust their OEMs. They will get a new server in, still load it with their own software and test it, then test it further by running it in parallel with the old box, and only once they are convinced it works do they transfer over to it. Even turnkey systems (those provided by VARs for a single purpose) get installed, go through about a month of testing by the customer on site with some of their people, take another week to train the rest of their people, and run a month or two in parallel before being put into service. And those are tested by the VARs for two to three months before they are even installed (customizations to the software/hardware, assuming a known-good platform and base code). If the VAR looks at a new platform (usually by customer request), they do another month or two of testing with the ported base code, and the port itself can take a week to six months depending on how different the platform is.
So whether by the VAR route or directly from the OEM, a year between purchase and install is typical. And it takes a while just to get them to sign: two to three months of testing various options. Sure, a few may be bought for that testing (but heavily discounted) on large buys, but typically less than a few percent. So the first year of a new server platform is mostly samples to the customers with a few pioneers in the mix. It's the second and subsequent years where the real volume purchases and profits are made for a new platform.
The C2D Xeon MP platform is only a few months old and won't make a dent in the market until Q4/08. But the 2nd-gen (Rev F) Opteron platform is over a year old now and is coming into its money years. Being plug-in upgrades, Barcelona parts bypass much of the platform testing and go through about 3-6 months of testing as a new CPU, which the x347s and x350s are doing now. So the server market will be ready to do volume buys in Q1 or Q2/08. Socket 1207+ will go through that year's worth of testing, however. It's easier for IT when they need to change only one thing; that is why they like upgrades and grandfathering.
The trouble with Penryn and its siblings is that they need a new platform. That's the problem when each new Intel CPU requires a new chipset to work: a whole year of testing before it's bought in volume rather than 3-6 months. And Nehalem is going to do it again, so even if it's out in Q4/08, it won't be bought in volume until 2010. Even if Shanghai launches as late as Q1/09, it will be in volume by Q3/09 because it's an upgrade and not a platform change.
Many people looking at the server market fail to remember that the lags on the customer side have to be figured in. Getting a CPU launched faster does not mean faster uptake, no matter how good it is. Requiring a chipset change as well as a CPU change leads to 2-3 more quarters of delay. So a later CPU on an existing platform will do better sooner than an earlier CPU on a new platform.
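A rough timeline sketch of the qualification lag described above (the phase durations, in months, are the ballpark figures from this post; treat them as illustrative, not measured data):

# New-platform qualification vs. a drop-in CPU upgrade, using the
# post's ballpark durations in months.
NEW_PLATFORM_PHASES = {
    "option testing / signing": 3,            # "two to three months"
    "platform + software qualification": 9,   # bulk of the "year" of testing
    "parallel running before cut-over": 2,
}
DROP_IN_UPGRADE_PHASES = {
    "new-CPU testing on a known platform": 5,  # "about 3-6 months"
}

def total_months(phases: dict[str, int]) -> int:
    return sum(phases.values())

print("New platform (new CPU + new chipset):",
      total_months(NEW_PLATFORM_PHASES), "months to volume buys")
print("Drop-in CPU upgrade (same platform): ",
      total_months(DROP_IN_UPGRADE_PHASES), "months to volume buys")
# The gap (roughly 2-3 quarters) is why a later CPU on an existing
# platform can ramp sooner than an earlier CPU needing a new platform.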
Pete
Jokerman:
You are right, I meant 2007 for both. It happened last month.
Pete
Correction, December 2007 and April of 2007.
Pete
Elmer:
You didn't perform computer forensics and determine the cause? Then you have no say. Without that, one can't say whether it was even the CPU, that particular CPU, or any CPU in the family.
Besides, you still don't get it. A simple crash is less problematic than one that destroys data, and worse still is one that silently corrupts data. The Conroe crash was of the worst type.
Pete