Wouter Tinus
Also, Merom has a longer pipeline, and there is a very long list of other architectural differences.
Please don't just cackle, lay eggs: post this very long list. And a link to an Intel statement about a longer pipeline for Merom as well.
K.
Wbmw
Halfway. It is not that features are deliberately "enabled" for Merom (or "locked" for the Yonah product); rather, both products are different feature-yields from the same silicon. See my recent post to Joe Halada on the stage at which dies are sorted and split into products, some packaged and marketed immediately, others accumulated in die inventories to be marketed later, and take the process learning curve into account.
Having said this, the short answer is yes.
K.
wbmw
Well, I was _not_ suggesting Intel said anything wrong. Merom can rightly be called a new µarch, as could Hyperthreading and EM64T, if you want. What is false is the intuitive perception that a new µarch is necessarily new silicon as well. Intel never said it is. :)
wbmw See my reply to your other post. K. eom
wbmw
Are you trying to suggest that the 4th issue port for Merom may be fused out on some parts as a DFM feature?
In a simplistic view of things, Merom is kind of a hyperthreaded version of Yonah, implemented through DFM in a similar way.
Here's a link to a paper outlining the broader context, which might explain why I chose the above analogy.
http://www.ece.rochester.edu/~albonesi/wced03/papers/ekman.pdf
K.
Joe
What do you mean by feature yields? Do you mean that the feature is there, but it is disabled?
Not quite so, Joe. The circuitry of the feature is on the die, but not every die passes all tests for that feature at the wafer-sort stage. Current design-for-manufacturability practices take this into account and allow fuse-outs for different feature-sets.
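The feature-yield idea above can be sketched as a toy wafer-sort simulation. The feature names and pass rates below are hypothetical illustration, not Intel's actual bins: every die carries the full circuitry, each feature either passes or fails its test, and failing features are fused out so the die can still be sold as a lower SKU.

```python
import random

random.seed(42)  # deterministic for the example

FEATURES = ["64bit", "vt", "full_cache"]                      # hypothetical feature set
PASS_RATE = {"64bit": 0.95, "vt": 0.90, "full_cache": 0.85}   # hypothetical test yields

def sort_wafer(n_dies):
    """Bin dies by which feature tests they pass at wafer sort."""
    bins = {}
    for _ in range(n_dies):
        # A feature survives on this die only if its test passes;
        # failed features get fused out.
        working = tuple(f for f in FEATURES if random.random() < PASS_RATE[f])
        bins[working] = bins.get(working, 0) + 1
    return bins

bins = sort_wafer(10_000)
# The very same silicon yields several marketable feature-sets:
for featureset, count in sorted(bins.items(), key=lambda kv: -kv[1]):
    print(featureset or ("baseline",), count)
```

The point of the sketch is only that one mask set produces a distribution of feature-sets, which marketing then turns into distinct products.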
K.
wbmw
In short, many products are packaged from the very same die. Prescott is a recent example, unless you think Prescott EM64T is different silicon from 32-bit Prescott, or, more generally, that every feature Intel adds requires a new design spin. This applies to the current node and coming nodes, and goes back many nodes as well.
K.
wbmw
Yawn. All true. But all of these are examples of introducing feature-yields from the very same silicon once you have it. Now, after watching this for ten years, it's about time to get a grip on it, isn't it?
K.
wbmw
I don't know what planet your book is based on, but over here when someone aims for 5.0 and delivers less than 4.0, that's 4/5 = 0.8, or at least a 20% miss.
Well, on my planet, if you come from 3.4, aim for 5.0 and end up at 3.8, that is actually a 300% miss.
Yet you incorrectly attributed this to a process problem, rather than a micro-architecture problem. Do you have any other reasons to object to my previous statement about process shrinks offering improved performance and power, other than the Northwood to Prescott transition?
Well, we recently exchanged our thoughts about what the Prescott problem was, µ-arch versus process, already.
And yes, I have reasons to challenge the claim that shrinks will keep offering improved performance and power going forward. Although AMD is doing surprisingly well on the 90nm node; I'd call that fabtastic.
K.
chipquy
They are very different designs. Had Prescott been implemented in 130 nm like Northwood it would have been bigger, slower and hotter than it was. That is the benefit of the process shrink.
Speculative and irrelevant. Neither a plain shrink nor its supposed benefits ever happened.
K.
Joe
Just the headline ones - AMD64 support and 4 issue ALU make up basically a new core, rather than just a revision of the existing one.
64-bit is feature-yields, same as with Prescott. The issue-width story is, plain and simple, brainwash-blurb. Pardon my French.
K.
wbmw
It's not the same silicon. Yonah is Pentium M based, while Merom is the new micro-architecture.
C'mon, mate, you should know better. When did Intel ever introduce a new µ-arch in the middle of a node? Nobody in their sane mind would ever do this.
K.
wbmw
Northwood was a 20-stage pipeline CPU, and Prescott is a 31-stage pipeline with a large amount of additional logic aimed at topping 5GHz clock speeds, but due to the power wall, it didn't even hit 4GHz. This was a 20% miss to expectations
Strange math, mate. Northwood was at 3.4 GHz already. Prescott was indeed supposed to reach 5 GHz; it topped out at 3.8 GHz.
In my book your portrayal of the miss is off by about a dimension, mate: it's not a 20% miss, but a 200% miss.
But that's only half of the story. You know a deeper pipeline does help to increase frequency, but it only pays off if you can indeed scale. Consequently, a 3.8GHz Prescott offers about the performance of a 3.4GHz Northwood; well, maybe a tad more for the 2MB L2-cache product, but I left out the Northwood EE 3.46GHz anyway.
In essence: Prescott offers no performance increase at all, yet carries a power penalty over its predecessor nevertheless.
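The "no performance increase" claim can be sanity-checked with a toy performance ≈ frequency × IPC model. The 10% IPC penalty below is a hypothetical figure chosen for illustration, not a measured one: a deeper pipeline buys frequency headroom at an IPC cost, and if frequency does not scale far enough, the net is roughly flat.

```python
# Toy model: performance ~ frequency (GHz) * relative IPC.
# IPC_PENALTY is a hypothetical illustration, not a measured Prescott figure.
IPC_PENALTY = 0.90                       # deeper pipeline retires less per clock

northwood_perf = 3.4 * 1.0               # 3.4 GHz at baseline IPC
prescott_perf = 3.8 * IPC_PENALTY        # 3.8 GHz at reduced IPC

print(f"Northwood: {northwood_perf:.2f}")
print(f"Prescott:  {prescott_perf:.2f}")   # essentially a wash
```

Under these assumed numbers the 400 MHz frequency advantage is almost exactly cancelled by the IPC loss, which is the shape of the argument above.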
K.
wbmw
process shrinks *undeniably* offer improvements in performance and/or power.
Umm. Prescott vs. Northwood?
K.
drjohn
Nothing really new but this link makes it sound like Merom will be compatible with the Yonah platforms, a little hard for me to believe.
It's the same silicon... so it would be surprising if it needed a platform change.
K.
Keith
Thanks. Talking about pricing, I heard rumours that AMD will be doing something with Opteron pricing this month. However, it could well be below the line, to counter Intel's rebates-in-kind scheme currently in place.
K.
Keith
Yeah, usually you talk about crossover in unit shipments or revenues. The term "performance" in this context is unusual.
Unusual wordings always make me listen closely. :)
K.
Keith
Oh I don't mind entertaining claims from Dell at all. :)
Btw did you notice kny's recent posting on w:o, saying you can currently get a second Xeon for free when buying a server from Dell if you mention Opteron?
K.
Keith
Dell predicts dual-core Xeons to top AMD in 2006
Well, we all know already how long a nanosecond in Dell terms is.
How long a year is in Dell terms is to be seen. :)
K.
Rink
Never mind. No personal offense taken. :)
It seems that what you're saying is actually quite close to what we said
Well, halfway so; actually there are two considerations I intended to add when shifting the focus away from units in the crossover:
1. Taking shifts in Intel's inventories into account
2. Suggesting there is no performance increase to expect from 65nm for quite a while for Pentium 4 and Xeon. (What Intel recently disclosed about a "second generation 65nm process", well, a different process number at least, supports this.)
Maybe this was not evident at first glance in the synopsis my internal translator produced for the Intel statement you cited. :)
K.
What Buggi confirmed was in line with what I heard, so it is rather likely to be the truth (65nm crossover for the performance segment Q3 at the earliest).
Just to set the record straight: this was the underlying statement which I suggested an interpretation for. I have no doubts about either your listening skills or buggi's.
You don't know what you're talking about.
Possibly. That would make it two then. :)
K.
Rink
I also heard something like this on the call but wasn't entirely sure of how to interpret it. What I heard was 'crossover for performance' in 'Q3'.
While I have not yet listened to the call, what you cite is not necessarily in the unit dimension at all. Actually, my internal interpreter of semi-talk to plain English suggests a possible translation for this particular vocal snippet, taken out of context, as
"We hope the sweetspot of our 65nm production will have superior performance over products from current node in about a year".
K.
Congrats to the Board for the Team-win!
Many thanks to jj for hosting the contest.
K.
wbmw
This predicts an x86 processor market that is nearly twice the size it is today. Anyone else see a disconnect here, or are there huge growth predictions going on?
Think along the lines of marketing currently unused low-feature-yield dies in embedded markets. Both Intel and AMD are exploring this and taking it into consideration for current and upcoming DFM.
To what extent this will be feasible, and when, if at all, is unclear at best, and in my book questionable as well. Hector is a strong believer in the strategy; he bought a MIPS company in '02 (MIPS to die off, customers to migrate to X86) and took over the X86 unit from National to prepare for it to materialize.
While the idea is tempting (basically it's the turning-lead-to-gold thing, or its recent iteration, producing energy from waste), I am not sure AMD will ever make money with it, let alone earn back what Alchemy [sic!] and Geode lost until the eventual breakeven of this unit. It could well turn out to be the very same error of strong belief in a vision, like waiting for the NROM miracle to materialize.
I would put more faith in the strategy if X86 would not be a power-hog architecture - or if ARM would not exist.
K.
jj,
indeed. The traditional list-price round at the beginning of October did not take place this year, and no price moves are seen in retail below the line. While I believe there are price moves in OEM space, these will only show up next year, in the Q4 reports. I further believe this is very beneficial for AMD, in terms of ASP improvements and its die-inventory structure.
K.
Yeah,
actually, what Chris Tom posts is just an excerpt of an MS troubleshooting advisory. Reading it, things are even less clear...
K
avatar
Could you elaborate a bit further on where and why you are reading this into the issue report? I'm struggling to do so, currently.
K.
OT Phil
Well, what you describe seems to be one of the (zillions of) approaches of building an architecture just because you can, and only after you have done it starting to think about what it could be useful for. ;)
Nothing really wrong with that: it's fun to do, does not require much tiring out-of-the-box thinking, and is guaranteed to impress humble minds even if it is basically useless except for producing mips, flops or whatever numbers. The best thing about it is that there is always a chance of an occasional fallout, where you accidentally find a useful application for it. :)
K.
Phil
and could/should IMO start with using Bio-molecules or bigger structures as the "non-binary" basis for storing and processing information.
This goes way farther back than I suggested. :) I would be all for it, if we had a foundation for how biological information processing works. However, afaik, we don't have that yet.
What I suggested could eventually be done in one or two human generations, which is the scope of a journey to Mars as well. What you suggest might be in the scope of a journey to a neighbouring galaxy. :)
K.
Phil
Don't worry, no offense taken. :)
Actually, I woke up from the dream of the holy grail of || computing some ten years ago. :)
I'm not saying we should not work in this direction; it's just that || code does not work for sequential problems (yes, I know you can do a lot of speculative things in hard- and software). And we have a lot of serial problems to deal with.
In terms of hardware, well, yes, maybe we should go back to before the von Neumann architecture and start from scratch on a foundation of understanding the nature of problems in terms of sequentiality, then try to find a migration path for the codebases out there and design an appropriate architecture.
But well, flying to Mars looks like a piece of cake in comparison. :)
K.
Phil
Apologies, I cannot add any value inside the Code-box. I believe comp.arch is the appropriate place to offer your thoughts. There has been a long (and winding) thread there recently, "Not enough parallelism in programming?", which gives an overview of where things are (better said, where people are :)) in this respect.
And no, I did not mean this, but my remark with respect to X64 device-driver compatibility.
K.
upndown
Try posts #266 to #272 here for recent anecdotal remarks, and a wild speculation about what could still come.
http://groups.google.de/group/comp.arch/browse_frm/thread/ffab56ec3cde863b/7277dbd76d0c1ac7?tvc=1&am....
K.
wbmw
Thanks for sharing your thoughts. K. eom
mas
Remember it was based on an old 2001 AMD64 spec.
While I can confirm the insurance-policy idea from what I heard back then, I also heard Intel had its own implementation of X64, much less sophisticated than AMD's instruction set in order to protect IPF, and tried to convince MS to "adopt the least common denominator."
MS did not follow this but said it would write code for AMD64, and not for less than that. This came as a surprise to Intel, not the challenges of the 90nm node.
K.
wbmw
Do you mean "perception"? Otherwise, I don't understand.
Make that "idea".
As for it not coming as a surprise: there were dozens of conferences and hundreds of publications about the underlying physics of the leakage dimensions to come. Intel knew the challenges as well as everybody else. That's why they got Dothan right.
The 65nm Netburst processors are obviously a transitionary product. The real next generation chips are the ones announced at IDF, called Merom, Conroe, and Woodcrest.
How long do you believe what you call an obviously transitionary product will be manufactured?
K.
Possibly, Kate. Without any doubt, there are people who know much better what really happened than me. Are you one of these?
K.
wbmw
Turion uses specific low power transistors and an entirely different back-end design that is not comparable with the Athlon 64
Interesting conception.
Obviously, your confidence in Intel is based on the ideal that they knew the impact of the power wall before it hit them.
Definitely. It did not come as a surprise.
It only took them a year to cancel Tejas and redo their entire roadmap for power efficient micro-architectures
I'm not sure which roadmap you are looking at. If you mean Intel's: how long is that year? Netburst is still on Intel's 65nm roadmap.
K.
wbmw
Well, now that is really apples to oranges. If you compare SV Banias to LV Dothan, do the same on the AMD side and compare Newcastle to Turion.
Wrt the Prescott design: frankly, what you are suggesting is that the chaps at Intel knowingly designed something that would suck power like hell and not scale anyway in the first place, increasing the pipeline length, which helps scale frequency but is useful only if it indeed scales. Interestingly, I have more confidence in the sanity of Intel's minds than you seem to have.
All Intel had to do was make the op-codes compatible with AMD64 by request of Microsoft (which needs no more than a micro-code change).
Umm. Actually, two AMD64 instructions have only been supported by EM64T for days or weeks, as you might know. Maybe you could have done it much better than Intel could, though.
K.
chipguy
Are you seriously suggesting that the decision to add EM64T to Prescott was only made in 2H03?
Certainly not. Intel's own X86-64 implementation must have been in the original design. Microsoft's decision in autumn '03 forced a respin to ensure compatibility with AMD64, which likely upset the balance of the design wrt power, limited FSB speeds e.g., and most probably did a lot more to the design that I will never learn about.
K.
p.s.: I'm struggling to remember when the MS decision actually was, '02 or '03. I wouldn't bet on either. The only thing I believe to remember is that it was autumn when a mail from MS to an ISV was forwarded to me, leaving no doubt where X64 would go, which was unclear up to that date. Irrelevant, history anyway.
wbmw
Except, in the case of Northwood->Prescott, there was a significant change in the design
I agree. Even if the design changes from Northwood to Prescott do not look really significant at first glance. From how I understand the design history of Prescott, a redesign at a very late design stage (autumn '03) was necessary to meet the needs of the AMD64 instructions, which likely added fuel to the fire Netburst was fighting in leakage terms anyway.
AMD did very well transitioning from Newcastle to Venice, and Intel did very well transitioning from Banias to Dothan.
Well, AMD did at least 100% better comparing these in terms of relative power reductions. But then, Banias was far superior to AMD's 130nm parts. So it's all apples and oranges.. :)
I think it will be interesting to compare Yonah as well.
Compare to what?
K.