...Except that Apple has the touch operating system technology in house. It just has to graft the touch operating system modules to the Mac OS. That way Apple is way ahead of Microsoft.
That would be useless. Not only iOS core UI elements, but also all current iOS applications, are designed for touch input.
"Grafting" touch on MacOS is not going to make Adobe PS or MS Office for Mac act as touch applications: one still would have to port software from iOS or adapt existing software.
I suspect just about everything about Hans's initial claims for Bobcat are rather wrong - die size, core size, floorplan, and transistor count.
We have already known for some time that Hans' die size estimate is correct.
A better micrograph of Ontario die is now available here (thanks to NVO for the link):
http://www.flickr.com/photos/amd_unprocessed/5368539206/
It confirms Hans' interpretation of the chip floorplan and the core size.
http://img443.imageshack.us/img443/7412/amdontariobobcatvsintel.jpg
As far as I know, transistor count is still an open question.
Hans seems to think TSMC processes can walk on water.
Having done custom circuit design and layout using several
TSMC ASIC processes in the past I'd rather disagree.
Do you have specific comments about the observation that Bobcat ended up being approximately half the size you estimated (related to TSMC process or Bobcat design)?
It remains to be seen when AMD was made aware of this and if they are now prepared. Reportedly, Samsung was behind the push for gate last, and they have been open about it for some time now.
Regarding Ontario die size.
I just [measured] it carefully, on a 3x magnified image in PS, and got 75 mm2 for Ontario die.
... which turned out to be the correct figure: http://www.techreport.com/articles.x/19937
Now, it remains to be seen whether the analyses published by Hans and others were also correct regarding core size and chip floorplan. I think they were.
I just did it carefully, on a 3x magnified image in PS, and got 75 mm2 for the Ontario die. I estimate the error on this measurement to be +/- 3 mm2. The die projects a shadow on the right, and the right side of the coin as well as the top of the die are badly defined because of JPEG compression.
I obtained a similar result using another reference image. As far as I can tell, Hans' estimate is likely spot on.
The nature of the picture Anand used makes it difficult to estimate the die size with precision, because you have to correct the picture perspective (as you said). His methodology is poor and he does not try very hard, hence the erroneous result.
Really, the fact that the die is not square, while the estimate is 10x10 mm, should ring alarm bells for a critical reader.
This is a better picture.
http://www.computerbase.de/bildstrecke/30669/6/
The diameter of the Euro coin is 23.25 mm
It is much easier to estimate it that way: you can do it in Powerpoint in the office. I expect you will come to the conclusion that Hans is right.
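To illustrate the method, here is a minimal Python sketch of the scaling arithmetic. The pixel values below are made-up placeholders, not actual measurements from the photo; substitute your own:

    coin_mm = 23.25                    # 1-euro coin diameter, the reference scale
    coin_px = 465.0                    # placeholder: coin diameter measured in the image
    die_w_px, die_h_px = 185.0, 162.0  # placeholder: die edges measured in the image

    mm_per_px = coin_mm / coin_px
    die_area_mm2 = (die_w_px * mm_per_px) * (die_h_px * mm_per_px)
    print(round(die_area_mm2, 1))      # ~74.9 mm2 with these placeholder numbers

The only real requirement is that the coin and the die lie in roughly the same plane, so a single scale factor applies to both.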
LOL. Addition of 25mm2 changes the perspective completely.
Funny that you mention perspective. I think that Anand's die size estimate is erroneous, and that Hans was much closer to the truth on this topic.
I also suspect that Anand may have ulterior motives in publishing this estimate: he would probably appreciate a bit more information about the part. AMD, are you listening?
I don't quite know what you mean by DC yield
I meant the proportion of working parts, discounting the fact that they may not meet the thermal or clock specifications of commercial products. What is the proper word?
Thanks for the clarification about cost and price and your other clues: one could go back and revise the estimate of the cost of Ontario accordingly, using USD 7000 as a fully processed wafer price (this leads to a USD 15.5 cost to AMD for Ontario).
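(Check, reusing the 800 dice per wafer and 70% final yield from my earlier estimate: 7000 / (800 x 0.70) + 3 for packaging and testing = 12.5 + 3 = USD 15.5.)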
I have not properly factored in all the handling and stock management costs that would be needed to build a proper target ASP estimate. Still, I remember reading about cost estimates of USD 6-8 for the initial, small-die Atom. If they are correct, that places, by analogy, a low upper bound on such costs lumped together with packaging and testing, and gives some credence to the final figure.
http://forums.vr-zone.com/news-around-the-web/345098-tsmc-next-generation-28nm-process-2010-a.html
(See: "Gartner estimates...").
While trying to dig deeper, I have also seen several articles mentioning Samsung working to gain market share through aggressive pricing, maybe back down to a USD 4000-5000 price for processed 40 nm wafers. This figure could provide a lower bound for Ontario cost estimates.
Suddenly, I am unsure about something. Are you familiar with the concept of back-of-the-envelope estimates? You spend much effort arguing about one of the parameters when, to my eye, all of the other ones I provided, which are equally important, are also equally iffy, even though I tried my best to make them reasonable guesses. To me, the end result looks like an adequate ballpark estimate, and it is not so far from the result we obtained from the bottom-up method.
Your point seems to revolve around the idea that Sapphire, one of the market leaders, likely the largest ATI/AMD partner, and not even the cheapest at Amazon on this product, sells their card for no profit at all, based on a hypothesis about an xx.78 price at NewEgg.
I observe that this card is also among the cheapest, or the cheapest one, in various countries in Europe. It is also the best distributed there, or among the best, with a very high proportion of retailers carrying it. The chart of the price evolution shows a gradual decline from introduction, like other 5450-based cards, and no steep drop indicating a sudden fire sale.
So the question to you is what the margin structure would look like, assuming a card that sells for $60, like this one
I do not know: why don't you tell me? Still, I would be far more at ease with the idea that this specific card is priced higher because supply is constrained for this specific ODM (a common practice) than with the idea that the market leader is selling its product at no profit. Do you think they are selling heaps of those USD 60 items, when basically anybody can buy the same card at USD 40 at NewEgg or Amazon from a reputable brand? Which ones do you think represent volume sales, and which one should really be ignored? Anyways, who knows. At least I do not have the arrogance to think that my guesses are anything more than what they are.
If you personally are more at ease with an estimate based on a different, higher starting price, for whatever reason, that is fine by me. You are, of course, also free to tweak the other parameters. Considering the wiggle room in all of those, it would simply be an illusion to imagine that one has found the truth.
By the way, do you still consider it laughable that Sapphire may be getting better prices from ATI/AMD than smaller OEMs? I am interested in your opinion.
The processed wafer cost estimate (USD 5000) for TSMC has been floating around for some time. Sometimes, it is used as a cost estimate, which I did as I wanted to be conservative, and sometimes as a normal price to the foundry customer.
Looking further now, I have found estimates of "leading-edge" wafer costs close to USD 7000, with indications that high-K adds USD 2000 to the cost of a wafer because of the need for atomic layer deposition tools that are expensive and slow.
It is impossible for me to judge how realistic the ballpark estimate I used really is. I hope to ask friends about it when I get the opportunity to do so.
About the yield estimate, I thought that an 80-85% DC yield for a sub-100 mm2 part would be OK, and figured that I had to subtract a small amount for the hot and slow parts and other losses. While several articles have been published about intra-die and die-to-die fluctuations, as well as packaging issues, I still lack reasonable estimates for current commercial processes.
Thanks again for your insights, in this message and the other one. I gather that Intel likely enjoys a significant cost advantage.
I was rather calling you out for claiming $30.78 as the lowest price, which took into account a rebate, and then following that by claiming that backing off to $40 was a fairer baseline.
I am sure you are contributing in good faith here, but actually, I never used USD 30.78 as a reference, as this specific card is part of a bundle. You did. I do not know why you attribute those claims to me: they are not mine. You can look back if you want. The lowest price I considered was USD 35, and I used USD 40 as a reference, which seems to be a price at which one can find ATI 5450 SKUs at various retailers.
As for the rest, I remark that you think I misunderstand you and feel invited to repeat, while I merely disagree with your opinion.
Why would AMD give them a bigger discount than their other partners, especially the ones who can shift their volumes to more nVidia product?
What makes you think that Sapphire could not negotiate a deal with nVidia? And why do you think that they, as the biggest AMD/ATI OEM, has decided to stay AMD-only for such a long time? Could it be because it made economical sense, as they got deeper rebates and other advantages compared to competitors?
You know, when you disagree with something and you don't have any more personal data to add, it's usually more productive just to state that you disagree, rather than restating your thesis in a "I don't care about your facts; I think you're wrong, and I think I'm right for these same reasons..." kind of way.
From my perspective, you tried to advance an argument of yours from numerical estimates based on non sequiturs and empty generalizations - and I am not obliged to watch silently while you do so in replies to my posts. My advice in this case would simply be to admit that you are guessing. I already shared that my own guesses would not be much different, for various reasons.
Still, it was not my intent to annoy - sorry about that.
You are right, thank you for the correction and sorry for the mistake. I miscalculated the effect of the hypothetical gross margins, not only here, but also in some of my previous estimates. So, according to this set of numbers, the reference price would be close to USD 33.5, assuming 50% margins.
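(To spell the correction out: treating gross margin as a fraction of the selling price rather than as a markup on cost, a part cost of roughly USD 16.75 gives 16.75 / (1 - 0.50) = USD 33.5, where my earlier USD 25 figure had effectively computed cost x 1.45 instead.)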
I would argue that even with a weighted average price of $33.50, Ontario won't be doing much good for AMD.
AMD's presence in netbooks and CULV MPUs is currently negligible, so they do not have to fear cannibalization. All the pie they can eat will be revenue growth, and each sale will come together with a Hudson chipset.
Since I am no AMD investor, I am more interested in the effect of all this on Intel. If the expectations of Ontario outperforming Atom turn out to be true, I expect AMD to initially price it higher than Atom, and to come down in price when (and if) they can deliver very high volume. I get the feeling that AMD is a little late to the party in netbooks. But there is a place for usable cheap notebooks with good battery life (not limited to 12.1-inch screens), cheap desktops, and other small form factor PCs.
I do not think we know enough about the performance of Ontario to imagine how Intel will react. But I do not expect Atom to be the only product line affected.
Unless we are looking at different data, the lowest price on Newegg is $40.78, and you chose $40.
Wow, a whole 2% difference! I have a feeling that my back-of-the-envelope estimate is ruined. Anyways, we already went through this: the lowest price after rebate is USD 35 (the USD 30.8 priced SKU you refer to is actually part of a bundle), and there are two more cards below USD 40 after rebate at NewEgg. I am willing to believe that the margin on the lowest-priced part is lower than 9%. The SKU I selected as a basis for the cost estimate, at USD 40, is 14% more expensive, and I used a 9% gross margin for it. This means that, according to me, the lowest-priced part is indeed sold at a loss when the rebate is cashed in, while the USD 40 SKUs are not (and neither is the USD 40.8 Sapphire card). It seems reasonable to me.
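(For the record, that is where the 14% comes from: 40 / 35 ≈ 1.14.)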
By the way, it is not like you cannot find such cards at that price elsewhere (here, the Powercolor card at USD 40 before rebate):
http://www.amazon.com/dp/B0038JREUY/ref=asc_df_B0038JREUY1240847?smid=ATVPDKIKX0DER&tag=pg-469-01-20&linkCode=asn&creative=395093&creativeASIN=B0038JREUY
You may be aware that retail at NewEgg does not even represent the lowest-margin business those ATI/AMD OEMs are in, since they also ship bulk video boards to large computer OEMs. I doubt that they pay more than consumers do for retail boxes.
Both Ontario and Cedar are AMD products, and they should receive the same AMD discount.
Are you speaking from personal experience, or would you quote a source?
That makes no business sense whatsoever.
Giving higher rebates to your higher-volume OEM, Sapphire in the case of ATI/AMD, makes no business sense? You cannot imagine such a thing taking place? You know, like Intel used to do with Dell?
I am not sure where you hope to go with such claims, but I am not following you there.
You're arguing with an irrefutable fact.
Maybe I have more realistic expectations than you do about the management of production lines, and I realize that any decrease in cost has to be fought for rather than wished for. It is anything but a given. Sometimes, unit costs even automatically increase with increased volume - for instance when going over production line or fab design capacity, when one has to pay for extra hours and deal with higher incident rates. What you called an "irrefutable fact" is merely a fallacy or, to be nicer, a potential trend that has to be checked against specific cases. It cannot be quantified without specific information.
You may also be aware that reducing costs often involves additional investments that may not be desirable for various reasons, for instance because the expected return does not warrant it.
I add that there is nothing personal in observing that, beyond a bit of handwaving, you have no point here. How could I disagree with your "20% cost disadvantage at TSMC offsets Ontario's smaller die" opinion, when you never even started to explain how you came to that number - save for observations about TSMC production volume that have no solid logical connection to the detailed issue at hand? If anything, I show respect for you in choosing to reply. See also Elmer's contribution for insight into the (non-)evolution of yields as Intel processes mature.
Well, the starting price I have chosen is actually 14% over the price of the lowest priced 5450 one can buy at NewEgg - I think that it is fair enough, because I do not believe for a second that the three least expensive cards (sold below USD 40) are sold at a 15% loss to the manufacturer. It seems completely unreasonable to me. As for the rest, I would say that in my opinion, AMD will get higher volume discounts from TSMC on Ontario than on Cedar, hence my +30% price differential estimate compared to Cedar.
Finally, I realize that I did not understand your point about AMD breakeven gross margins. Would you please share a link to the original statement?
I highly doubt that AMD gave Sapphire a significant discount, because Sapphire doesn't ship any cards with nVidia GPUs, and so there's no fear of losing their businesses, and no need to sweeten the pricing for any one vendor.
My business sense is quite different from yours. I see a point in AMD/ATI giving discounts to their largest graphics parts OEM, which has been faithful even in the periods when ATI was trailing nVidia in various performance metrics. This actually plays into your hand.
By the way, how about this:
The cost of a processed 300 mm TSMC 40 nm wafer is estimated to be USD 5000. There are 800 Ontario dice per wafer. Assuming 70% final yield (not DC yield, but fully functional dice within the power and clock envelope of shipping parts), a USD 3 cost for the puny BGA package and testing, and adding a 35% gross margin for TSMC, I get a part price to AMD of USD 16, and a profitable reference price of USD 24.
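For anyone who wants to play with the parameters, here is the same back-of-the-envelope estimate as a small Python sketch. Every input is just the guess stated above, and both margins are applied as markups on cost (see my correction elsewhere in the thread for the margin-as-fraction-of-price variant):

    wafer_cost  = 5000.0  # USD per processed 300 mm TSMC 40 nm wafer (estimate)
    dice        = 800     # Ontario candidates per wafer
    final_yield = 0.70    # fully functional dice within the power/clock envelope
    pkg_test    = 3.0     # USD per part for BGA packaging and testing (guess)

    part_cost    = wafer_cost / (dice * final_yield) + pkg_test  # ~11.9
    price_to_amd = part_cost * 1.35     # +35% TSMC margin as a markup -> ~16.1
    reference    = price_to_amd * 1.50  # implied ~50% markup reproduces the USD 24 figure

    print(round(price_to_amd, 1), round(reference, 1))  # 16.1 24.2

Swap in USD 7000 for the wafer cost and the part cost lands at the USD 15.5 mentioned earlier.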
They hit the ground running at mature yields.
Thank you for your insight.
With increasing volumes come decreasing costs. The two go hand in hand.
That is not so simple. This stops being significant at some point, which depends on the specifics of the product being discussed and on the context. At this level of hand-waving, it is not even possible to disagree with you: you basically have no point as far as I am concerned.
It's impossible, because things will always get better as volumes increase.
Always? You mean that company-average defect density can never go up when a new line is opened? Or a new fab put online?
I don't need to be an insider to look at their public data, and history of public data, and draw my own extrapolation curves. You can do the same thing [...]
I would like to, but alas, no, I cannot. And judging by your reply, I think you cannot either. TSMC publishes almost no data to the public, and, as far as I am aware, certainly no info about defect density. And other sources show that the picture is indeed complex, and can vary quantitatively and even qualitatively between processes.
Either we have considerably different standards in what is a reasonable estimate, which I judge unlikely because you are no stranger to nitpicks yourself, or you had no solid ground to stand on. Maybe it is better to leave it at that.
I would still be interested in expert opinion about this, or any solid clue to the relative costs of manufacturing from foundries and Intel.
Still, if I had to guess, I would, like you, suppose that Intel's cost is indeed lower: better process research (judging from the indirect clues of its high patent count and R&D spending), more leverage over equipment manufacturers thanks to Intel's size and collaboration in development, maybe better internal procedures to manage live production-line yield, the absence of anecdotal evidence of recent mishaps at Intel similar to TSMC's initial 40 nm problem, and indeed the longer experience running the 45 nm process. But putting numbers on the actual cost-per-die advantage at the 40/45 nm node is completely beyond me. Is it significant for small-die products? In the context of this discussion, would it be enough to offset Ontario's smaller die? How would I know?
You rightly point out that I did not fully work the costs and margins back to a Cedar die price estimate in my post - which does not make the example good or bad in itself: the development I shared with you is just incomplete.
The incentive to share such analyses on this board is actually not very high. Anyways, starting from a price of USD 40, subtracting Newegg's margin (11%, from the IPO prospectus), subtracting the card manufacturer's margin (9%, my estimate), subtracting a reasonable estimate of the other costs (USD 12, my estimate, covering RAM and all other components mounted on the PCB except the GPU, as well as testing and shipping to Newegg), and backing out AMD's target gross margin for graphics chips (35% - it used to be ATI's target, and there were no competing DX11 parts against Cedar), I get a chip full cost close to USD 13 - that is, for a good die, tested, packaged, and shipped to AMD.
I would estimate a BGA-packaged Ontario to cost 30% more, expecting AMD to get even higher volume discounts from TSMC for Ontario than what they got for Cedar. There are some yield issues for the CPU part of the APU, which I consider dealt with if AMD can sell the few parts with only one good core at a discount. Anyways, the CPU cores are really small. Please note how simple the package is; it looks even simpler than Intel's micro-FCBGA8 559: http://www.xtremesystems.org/forums/showthread.php?t=258499
Add the 45% gross margin target you mentioned (I use the CPU business one, even though only 20% of the die surface consists of CPU cores) and I get a "reference price" of USD 25. Actually, given the rather optimistic nature of some of the estimates above, I consider this close to a floor for profitable pricing, and regardless of the cost base, I do not expect AMD to race to the bottom. Still, they have almost no presence in low-power laptops, and any sale will translate into increased market share.
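Putting the whole chain in one place as a Python sketch (every number is the estimate stated above; the markup-vs-margin caveat from my correction applies to the last two lines):

    retail        = 40.0
    to_newegg     = retail * (1 - 0.11)     # less Newegg margin (IPO prospectus)
    to_card_maker = to_newegg * (1 - 0.09)  # less card maker margin (my estimate)
    gpu_price     = to_card_maker - 12.0    # less RAM, PCB, testing, shipping (estimate)
    cedar_cost    = gpu_price * (1 - 0.35)  # back out AMD's 35% graphics margin -> ~13.3

    ontario_cost  = cedar_cost * 1.30          # BGA Ontario assumed 30% dearer -> ~17.2
    reference     = ontario_cost * 1.45        # 45% applied as a markup -> ~25.0
    corrected     = ontario_cost / (1 - 0.50)  # margin as a fraction of price -> ~34.5

The last line is roughly consistent with the USD 33.5 correction posted earlier in the thread.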
Feel free to play with these illustrative numbers to come up with a figure you can believe in for your own purposes.
Vendors will not pay $80 for all Ontario skus
I think there is no need for that - and I doubt that AMD even expects to sell any part at this price point. Both Intel and AMD can make a profit at much, much lower prices. Basically, much of the pricing aspect will depend on Intel, and how they price their own parts in reaction to the introduction of Ontario. If AMD design is indeed better performing, it may lead to a cruel pricing dilemma.
Another question is, what proportion of the market could AMD address with this, and how does the 20 W version perform? Will it find any significant uses in desktops?
Sorry, I am not sure I follow your reasoning. You claim that TSMC 40 nm output will still increase in H2 2010 and beyond, and this I agree with. TSMC plans volume ramp taking into account several factors, and in particular customer demand.
On the other hand, I think you have brought no significant support to the idea that yields, or defect density, for TSMC 40 nm process will not reach your arbitrarily defined 90% end-of-life cutoff point (or any other cutoff point one may imagine) when Ontario enters production in Q4 2010.
I understand you think it will not be there and you draw some cost conclusion based on this opinion. I just see no way for me to understand how you came to that conclusion or why.
In Q4 2010, TSMC 40 nm process will have been in volume production for 12 months. From what I gather from reading publications from Intel and other sources, the time evolution of the defect density of specific processes has been shown to vary very significantly from one process to another, both before and after the start of volume production. I think there is no way for an outsider to know how far or close TSMC will be to the asymptote at this point in time.
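For what it is worth, the standard first-order (Poisson) yield model shows why the defect density trajectory matters so much. A minimal Python sketch, with made-up D0 values for illustration (TSMC publishes no such numbers):

    import math

    def poisson_yield(die_area_cm2, d0_per_cm2):
        # First-order Poisson yield model: Y = exp(-A * D0)
        return math.exp(-die_area_cm2 * d0_per_cm2)

    # A ~75 mm2 die (0.75 cm2) at a few illustrative defect densities:
    for d0 in (0.25, 0.5, 1.0):
        print(d0, round(poisson_yield(0.75, d0), 2))
    # -> 0.83, 0.69, 0.47; a D0 near 0.5/cm2 would be consistent with
    #    the ~70% final yield guess used in the cost estimates in this thread.

Where TSMC sits on that D0 curve in Q4 2010, and how fast it is still moving, is precisely what outsiders cannot know.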
I do not even disagree with you: I simply do not know. Pointers to papers, or expert opinion from process engineers, would be appreciated.
Chipguy published estimates quite some time ago, if I am not mistaken. You can dig them up if you are interested.
Reading your post, I think I agree with the conclusions, which I consider the main points. On the other hand, I can't see the cause of your reluctance to call TSMC 40 nm process mature. This is probably not very important, but I will go on as I feel it worth discussing for a bit.
AMD has been selling designs based on TSMC's 40 nm process in consumer products since May 2009. By the time they introduce Ontario, Cypress, an AMD design which is 4.4 times as large as Ontario, will have been available for more than a year. AMD will likely release designs based on an even larger die in the coming months. The same process is used to manufacture low-end products which end up sold to end-users at prices as low as USD 35-40, in spite of the fact that they include the added costs of other components on top of the Radeon 5450 chip (PCB, RAM, voltage regulation), the assembly, and the added margins of the video board manufacturer.
Those are mere observations, which indicate that AMD has considerable in-house experience with designs targeting this process, that it has been used in production for a long time (20+ months by the point where Ontario is introduced), and that yields are sufficient to produce very large-die designs and cost-efficient smaller dies. Honestly, I fail to understand your point about how TSMC's process mix in H1 2010 somehow reflects the immaturity of their 40 nm process, especially in Q4 2010 when Ontario is supposed to be released. Where is your fact base?
I remember that there were rumors of yield problems with this process until March 2010. Now, they seem to have died out.
Cedar is actually a poor example.
Cedar is manufactured on the same process Ontario will be fabricated on, and the die size is similar, although not identical. Cedar is an AMD design, like Ontario, and chances are high that a major proportion of Ontario will be made of structures very close or identical to Cedar's. It is actually a very good example. That said, if you think you can find a better one, I am all ears.
I'm not sure where you're getting the data point for the pricing floor (you said USD 35-40)
That is strange. The very part I linked to in the post you replied to, "PowerColor Go! Green AX5450", currently sells for USD 35 after rebate (there are two other cards below USD 40 after rebate at NewEgg), and the lowest-priced part, discounting any rebate, currently sells for USD 40.8. Maybe I am missing something: thank you for your help if you can show me how.
Until then, a price floor of USD 35-40 seems fair to me. And that is of course much higher than the price of the AMD 5450 chip to Powercolor. Which in turn is higher than the cost of the part to AMD.
the first one (USD 40.8) is probably a loss leader
"Probably"? Says who, based on what? It has to be a loss leader because loss leaders are not unheard of in retail? That does not sound very convincing at all. And it is not even the lowest-priced one.
The point about the nVidia 8400 makes you sound like you are confusing cost and price. What makes you think that the relative margins nVidia used to get on the 8400 are the same as AMD's current margins on the 5450, which would be necessary to make your comparison valid?
The difference with Intel is that they can take Atom even lower in power, if they wanted to.
I agree that a potentially large market (in number of parts sold per year) will be out of reach of Bobcat at the time of introduction, and for quite a long time after that: mainly smartphones and tablets. Currently, Intel sells next to nothing there, if I am not mistaken, and ARM implementations, not AMD x86 designs, are the competitors. Atom's potential in that space, while interesting, sounds like another story to me.
So Ontario may be the only big bang AMD has in the next 6-9 months, and the only big sales driver they will have for 2011.
I am unsure about the potential of BD in the server space, and the release date. But in the client space, I would say it is likely.
Maybe a comparison with currently shipping parts built from TSMC 40 nm process would be useful. AMD Cedar die is 59 mm2. Video boards based on it currently sell for as low as USD 35-40.
http://www.newegg.com/Product/Product.aspx?Item=N82E16814131339&cm_re=5450-_-14-131-339-_-Product
The price includes manufacturing the GPU die, packaging, testing, mounting the BGA package on the PCB together with the other board components, testing the assembled board, as well as the cost of whatever end-user box it comes in (with manual and DVD). Of course, it also factors in margins for TSMC, AMD/ATI and finally PowerColor.
Now, Ontario, while probably higher-volume than Cedar, has a slightly larger die, and the yields may also be lower because of the design of the part. The packaging may be a little more complex, too. But this indicates that TSMC's 40 nm process is definitely mature enough to play in Atom's arena. Dual-core Atoms with integrated graphics currently ship for USD 86 in 1k quantities. That leaves a very wide gap for AMD to prosper in, and I estimate they could still thrive at a much lower price point. It is likely that they will be competitive on cost.
I personally expect to see Bobcat derivatives not only at the 10- and 20 W power level, but eventually at 5 W, too.
Before anybody misinterprets, I should note that the pic is merely meant as an illustration to the die size in the context of my reply to WBMW, as opposed to a "proof" of anything.
Some more info here:
http://www.hardware-infos.com/news.php?news=3681
By the way, I found a clue about volume allocation at TSMC:
http://www.digitimes.com/tag/tsmc/001264.html
Search for "fusion" on the page.
http://www.chip-architect.com/news/ontario_vs_atom.jpg
Sorry for the late reply.
Wow, that thread is incredible.
That thread is everything but incredible. It is merely yet another iteration of a tired conflict among trolls, both camps trying to bend slim clues their way, at a point in time where no definite answer is to be found to the question of performance. Likely, even AMD do not know how high the final parts will clock, which has first-order implications on single thread performance.
Same old, same old.
On the other hand, clues to the part performance have been published for Bobcat. I think it has a lot of potential at the low end. If die size estimates turn out to be correct, it will outclass Atom variants in situations where very low power is not paramount, at similar manufacturing costs. That means low-cost desktops for office work and other desktops for emerging markets, low-cost laptops and netbooks. I think it will lead to competitive SOCs from AMD. I wonder how much capacity is reserved for Bobcat-based products at TSMC.
Have to disagree with your metrics
I am not surprised, since this is obviously a qualitative matter. All I can say is that I was by no means attempting to set up an equation or to define an all-encompassing metric, but merely to illustrate a point about relieved competitive pressure. Number of cores relates to die size, which in turn relates to manufacturing cost, which relates to margins.
As a matter of fact, there is no 32 nm quad-core desktop Intel MPU yet. I think this indicates that such a part is not needed. As you say, Nehalem is pretty competitive with AMD on a 2-core-with-HT vs. 4-AMD-core basis: it is a matter of competitive positioning.
The core revolution was nice but nehalem has been a significant improvement especially as a platform.
Core 2 was a night-and-day shift in power efficiency. That is far above nice in my book. On the other hand, I do not disagree that there has been significant progress in Intel's offerings since then. I merely think that the improvements are not as clear-cut as the jump to Core 2, and therefore harder for the end-user to recognize as something significant, especially while the software often does not even tax previous-generation systems.
And frankly the avg. consumer doesn't even know how many cores something has anymore. It is simply about perceived performance.
I hope this is not true, but I suspect it may be. On the other hand, I am unsure about how much perceived performance gain is to be expected from the latest Intel platform refresh: in many cases, I suppose it will not amount to much (unlike the gains from adding, for instance, an SSD to the system). To me, perceived performance currently looks like a much harder sell than explaining that one gets more cores at the same price.
I indeed think that Intel clients have experienced nothing like the Core 2 revolution with the latest MPU updates. Whether this qualifies as baby steps or not is a matter of perspective, and I think I have overstated it, but the point remains that these technicalities matter so little to Apple customers that the Mac went through a record quarter in spite of the technically obsolete offerings. It just did not matter. I am willing to believe that end-users would have been more sensitive to Apple being a generation late at the Pentium 4 / Core 2 transition.
Note that this is not a judgment about the micro-architecture, but rather about the products and price points that matter to the end-user: I bought a 4-core consumer MPU years ago, and it replaced a dual-core processor at the same price point. There is simply no 8-core, or even 6-core, MPU at a comparable price point now. The decreased competitive pressure is a boon to Intel, but it also explains how the Mac line can succeed without timely upgrades, far better than the paranoid scenarios of the article I commented on do. In my view, there is simply no clash between Apple and Intel.
As for AMD, they simply are not competitive, but it is as much a matter of starting point as a matter of progress: indeed, they have been trailing Intel since the Core 2 release.
I do not see the point of quoting the whole article.
I don't see much of a point in the article itself either: I remember that Apple jumped to Intel for desktops and laptops, and it went very well. Wouldn't they do the same for tablets and phones, if the advantages became overwhelming down the line? I think they would. But considering that Apple is a major player in smartphones right now, they are bound to use what currently represents the best hardware solution for their needs, and that cannot possibly be Intel.
On the other hand, I am inclined to think that Apple did not care for updating most of their Mac lines to the latest Intel hardware because Intel has been taking baby steps recently, mostly from reduced competitive pressure. There just is not much point in upgrading. And it seems that Apple clients do not care much anyways, since Apple personal computers sold very well in spite of not being showcases of the latest Intel technology. The Mac line is everything but in "such sorry shape": it is doing very well thank you. And runs on Intel hardware.
WebOS will be ported to x86, if it hasn't already. Not a major task.
What makes you so sure about this? I mean both, the idea that porting will certainly occur and the hypothesis that it is a piece of cake.
Supporting both architectures would make the distribution of applications more difficult - if the vendor wants to go the route of a single app store.
Makes sense for HP to offer a tablet that will boot either Win7 or WebOS - and that can only be done on an atom.
Sorry, how does that make sense?
The most successful tablet ever cannot boot WebOS or Win7. It offers no option other than booting who-cares-about-its-name iPad OS.
I can see the point of having the option of booting different OS on a desktop or laptop computers (even when most actually only boot the OS they shipped with), but it is much less clear for a tablet.
Actually, my comment was based on what I believed to be the original quote, or something closer to it, and I forgot about the link that had been published here and the differences in wording between the sources. Sorry about that.
Here is what I had in mind, for reference:
http://online.wsj.com/article/SB10001424052748704197104575051592877745472.html
Chipguy, that really sounds like a misstatement.
I am sure that there is a way to charge various IBM costs to the Power7 project, be they directly related to Power7 MPU development in the usual sense of the word or not, up to the point where one gets to this incredibly high figure. After all, IBM is vertically integrated and does semiconductors, servers, storage and system software in-house - both design and manufacturing.
Considering who made the number public (Rod Adkins, senior VP of IBM Systems and Technology), it is meant to be interpreted as "Wow, those guys are really committed to the architecture, unlike these other guys at Sun/Oracle". Of course, it can just as well be interpreted differently depending on where one is coming from.
The question is, why should Intel compete against a company where chip margins are relatively unimportant, compared to the system and solutions margins, of which Intel does not have a similar play?
I think that the idea is to hold the high-end customers captive, and progressively bleed them dry from the point when any significant competition has been suppressed.
It is true that the systems and solution margins are very high. I am sure that this would gradually change if there was a single supplier for the most critical part, the MPU.
The upside is potentially high if they can make their market segmentation work. It is x86 all over again, with higher barriers to entry, and without AMD to annoy them.
Even in the current situation, I wonder about the contracts with HP. There are likely long-term commitments on both sides, and special provisions for pricing going forward.
The simple fact is that for the scale-up and mission critical server market IPF targets, Tukwila is the best performing of the 65 nm generation of processors.
Unlike your previous claim, this prediction of yours may well prove true. As I wrote, I do not expect other benchmarks to show Tukwila under-perform in the same way as it does in SPEC rate. This is (part of) the actual good news.
However, Tukwila's competition, once servers are finally released three months from now, will not be the 65 nm generation of processors. That is a missed opportunity for Intel to leapfrog IBM, since POWER6 is not extremely strong in an absolute sense. Not having held the performance crown, even for a short time, is no good for the image of competitiveness of the Itanium product line, which has lately been all about inflated promises.
The interesting implications for the competitive situation that will likely occur at the upcoming cross architectural new product introduction convergence at 32 nm [will be even more clear].
I see that you inserted a "likely" in your sentence, and I can appreciate that. Wait and see. Based on Intel's track record on Itanium, I prefer to adopt an even more prudent attitude.
Tukwila is not the highest performing 65 nm chip in SPEC rate, you were wrong. But since the title has no merit, it does not matter one bit. Other architectures, at least when the option existed, have moved on to other process nodes instead of trying to push the reticle limits.
Nevertheless, I am not sure that Tukwila's disappointing performance so far matters a lot. Other benchmarks will be less abysmal, and luckily for this product line, the dynamics of this market are such that per-socket performance is not the be-all and end-all. This is the good news. Not some contrived performance comparison against senior citizens among server chips that Tukwila does not even manage to win.
LOL, Niagara is hardly a general purpose server processor.
Nice backpedalling. Let's review your claim, shall we?
The good news is Tukwila is the fastest ever 65 nm server processor.
... and you went on to "prove" it using SPEC rate figures. It seems that then, you were claiming this and now you are retreating to something else, trying to redefine the meaning of "server processor". You lost any disguise of intellectual honesty in the process.
Those 2s SPECrate scores are achieved with 127 threads.
What can I say? You have chosen both the benchmark (SPEC rate) and the terms of comparison (65 nm server processors). You have been confronted with the opinion that the comparison did not make much sense, and the fact that your conclusion was wrong. Since then, you have been complaining that the benchmark is ill suited to Tukwila, and that "65 nm server processors" may have been too wide a category after all. Why? Because Tukwila does not even appear to dominate this tailor-made and irrelevant category, in spite of being the monster that eats the most silicon to get where it is. One of the rare 65 nm server processors released within 12 months of Tukwila (but barely so - when Tukwila systems are finally available, Fujitsu SPARC Enterprise T5240 will have been on the market for 11 months) outclasses it in your benchmark of choice.
You really have guts to now try and call the comparison unfair, after you insisted that comparing a 700 mm2 chip with very fast caches, gobbling heaps of memory bandwidth, and benefiting from a significant advantage from more recent compiler technology, against puny vintage 2007 x86 server processors derived from commodity parts was perfectly fair and informative and a sign of Itanium supremacy. The comparison is not any more unfair now than it was when you imagined that Tukwila was the winner, or when you unsuccessfully tried to fit Tukwila into a 300 mm2 die.
Surely in the interest of trolling this forum with your IPF bashing garbage you wouldn't stoop to the level of pretending to have never heard of Amdahl's law?
Excuse me? I am not trolling this forum in any way, just politely pulling you back to reality. You know that your very comparison, exposed in another forum, was greeted with much less patience than what I displayed here with you, but the points exchanged were the same. Overwhelmingly, experienced contributors voiced the opinion that the comparison did not make sense - and that the wrong winner has been crowned. The reason why I am disagreeing with you is because I, too, think you are wrong.
On the other hand, your constant name-calling and accusations are tiresome. Please consider stopping it.
Here's the cluebat genius. When Sun brought out its first idiot-convention-on-a-chip in 2005 IPF sales were about 1/4 of SPARC sales. [...]
I am amazed at the quantity of extraneous items you throw into this conversation. You surely understand that server sales have nothing to do with the point at hand. It is true that HP has managed to sell many Itanium servers, mainly, as you know, to existing HP-PA customers. But that does not make you more right one iota.
A client IPF chip would get similar attention.
What does a hypothetical client IPF chip have to do with the topic at hand? I was right: there is no way one is going to make your laundry list of features fit in less than 300 mm2 without designing a different MPU (by the way, the recipe you gave is not sufficient at all - more things would need to go to get down to Barcelona's size). And then, who knows how it would perform? And how would it compare against other hypothetical 65 nm MPUs implementing different architectures? This is just completely irrelevant.
me: You fail to recognize that you elected to compare those very different processors, not me. This "highest performing 65 nm MPU ever" story was devoid of any profound meaning or practical reach.
you: No, it is just a painful truth that makes IPF bashers like yourself upset.
So, according to you, this title of "highest-performing 65 nm MPU in SPEC rate" has practical reach. EduardoS just reminded us which MPU currently holds that crown.
http://www.spec.org/cpu2006/results/res2009q3/cpu2006-20090721-08256.html
http://www.spec.org/cpu2006/results/res2009q3/cpu2006-20090721-08255.html
Yes: UltraSPARC T2 Plus / Victoria Falls
Talk about a painful truth. And Victoria Falls does it with a die half the size of Tukwila (the actual one, not the hypothetical sub-300 mm2 fantasy chip).
By the way, please refrain from using this "IPF bashers" terminology to talk about the many people who express disagreement with you on this topic. I think it is needlessly derogatory.
Tukwila is an excellent replacement for Montvale
Sure. Tukwila is late, does not clock as high as planned, and is completely outclassed by competing designs. That defines "excellent". Merely "good" would mean being outperformed by 2007 vintage commodity x86 chips while emitting deadly radiation and spoiling the milk, I suppose.
and will help grow IPF sales to record levels as IT spending recovers.
It could well be. Depending on the workload, it seems that Tukwila under-performs POWER7 by a factor of 10, per socket. A clever strategy to populate more sockets with expensive Intel products? But then, isn't Tukwila's performance too high?
Actually Tulsa is 435 mm2 genius. Any more lies you want to spread today?
In a previous post from this exchange, I made clear which x86 MPUs I was comparing Tukwila die size to (Barcelona, less than 300 mm2; Tigerton, 2x150 mm2). Tukwila die is indeed approximately 2.5x as large. I could do without the ad-homs, thank you.
As you point out, it is also much larger than Tulsa, although less overwhelmingly so.
Barcelona is using a much more ideal platform for 2s SPEC CPU rate - direct connect DIMM with 1 single rank DIMM per channel.
Wait: you insisted on comparing those products to Tukwila on this specific benchmark, to Tukwila's advantage. I know what markets those products address, and I understand the major trade-offs in their designs. They differ widely along several dimensions - therefore any comparison must be qualified, as I pointed out.
Indeed, Barcelona's SPECint rate performance is helped, compared to Tukwila, by its lower-latency access to main memory. Poor Barcelona: the puny chip needs all the help it can get from this, considering that Tukwila benefits from approximately 7x as much on-die cache and a significant advantage in memory bandwidth. Barcelona 2-socket systems reach 17.4 GB/s in STREAM TRIAD, while I suspect this Tukwila system may reach 25 GB/s.
Most of the extra area doesn't help generate more 2s SPEC CPU rate
Some of the extra area does, and some does not. But scrapping all big-iron areas from Tukwila would still not make it slender. See below.
If Intel wanted a 65 nm quad core IPF under 300 mm2 for 2s max systems it could throw out the directory cache, cross bar router, half the L3 cache, 2/3 of the QPI links and directly connect to 3 DDR3 DIMMs like Nehalem-EP
Tukwila's core logic alone is 276 mm2 - approximately the size of the whole Barcelona die. Core logic, with no uncore of any sort: no L3, much less 12 MB of it, no QPI links, no DDR PHY. And of course, no directory cache. Poof, there goes the "under 300 mm2" budget. For reference, the 12 MB L3 that you imagined would fit measures nearly 100 mm2. Isn't this alone more than the size of a pair of K10 cores, including 2x 512 KB of L2?
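To make the arithmetic explicit: 276 mm2 of core logic plus roughly 100 mm2 for the 12 MB L3 already comes to about 376 mm2, a quarter over the 300 mm2 budget, before adding a single QPI link, the DDR PHY, or any other uncore.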
It is of course perfectly conceivable that Intel could have designed a 65 nm IPF MPU in less than a 300 mm2 budget, but it would have been so different from Tukwila that it makes little sense to try and guess how it would have performed.
The only travesty is your irrelevant obsession with die size comparisons with x86 chips that can't scale in system size or memory capacity worth crap.
You fail to recognize that you elected to compare those very different processors, not me. This "highest performing 65 nm MPU ever" story was devoid of any profound meaning or practical reach. Other MPU lines have moved on from this process node, while Tukwila belatedly tries to squeeze as much juice as it possibly can from it. HP is left with an under-performing part.
Care to guess what happens when IPF catches up in process?
I have no idea. Actually, I will believe it can happen when I see it ship in server products. There has been much guessing around the next iteration in the Itanium product line, and so far, it has often proven to be too optimistic.
How competitive would Power7 and current x86 server MPUs be vs Tukwila if they were based on shrunk and tweaked Power4 and P4 cores?
To me, this rhetorical question has about as much relevance as the initial comparison at 65 nm. Whatever the reason, Intel only made the Itanium core evolve incrementally from Itanium 2. Of course, had Intel delivered along the lines of the hopes of the Itanium enthusiasts, while at the same time the products competing with Itanium had failed to evolve, the situation would be very different now. But this is not what happened, and Itanium is taking a public beating in SPEC rate.
Tukwila outperforms vintage 65 nm x86 server chips primarily because it has the benefits of being a much larger chip (much more than twice as large as the x86 devices in your comparison - which is in part made possible by the current process maturity), running on a better hardware platform, using binaries compiled with a more recent version of Intel compilers. Any comparison of SPEC rate performance at 65 nm that includes those x86 MPUs and fails to recognize this, is a travesty.
The good news is Tukwila is the fastest ever 65 nm server processor.
I fail to see how this is, in any way, relevant. Tukwila has a die size of approximately 700 mm2; Barcelona, less than 300 mm2; Tigerton, 2x 150 mm2. Shrink Tukwila to the next process node, and it would still be larger than the other two contenders. Moreover, Tukwila benefits from a very modern infrastructure (hardware: interconnects and memory subsystem; software: compilers), which puts it at an advantage over the older designs you chose to compare it to. One would in fact expect it to show an even larger lead taking all of this into account. You make it sound like there exists some kind of level playing field, summarized in a satisfactory way solely by the manufacturing process: this is simply not true.
Even discounting the self-inflicted handicap of using an obsolete process, Tukwila's performance in SPEC rate is not impressive.
Larger, more powerful 65 nm x86 server processors were not developed, not because it was technically impossible or even particularly difficult, but because it made more economical sense to move on to a more advanced manufacturing technology with new designs.
This means that not only POWER7, but also x86 server processors from Intel and AMD, have moved on and completely outclass Tukwila in SPEC rate. And I am speaking about commercially available products, which Tukwila-based servers are not.