Duke - Senior execs are always booked well in advance for major events. To go to the event management folks and say 'please substitute exec x for exec y' would telegraph a possible departure... I can think of several senior MS execs whose departure was planned for months, but who still appeared at venues as the keynote speaker days before the official announcement of their exit. I don't know if the decision on Pat was 'sudden' or not (I tend to think not), but the data point this writer brings up is not material IMO.
I have nearly 30 years of experience dealing with Intel - both on their side of the table and the other side. There are strict rules of conduct, frequently revised, that were designed to avoid exactly this kind of situation - but there was also a culture of pushing the competitive boundaries as far as possible within those guidelines. And, like any big organization, the further from the central office, the more likely people were to step over the line.
Once that happened, my guess is that Intel senior management would back their guys unless there was an obvious immediate problem. Over time, that creates a web of potential liability, and now that is coming back to haunt Intel.
My personal opinion is that Intel is doing itself more harm than good by the position it is taking with the EU. I think they would be better off to work it very low key, and concentrate on the differences between the EU and US systems to be sure they don't end up with a repeat here.
I had some experience with Elias and was not impressed - politically savvy but otherwise not CEO material. Doubt if the CFO is really all that likely. I would see the real contest as between Pat and Paul Maritz.
Tenchu -
Power management is the next frontier IMO - powering down unused memory and cores is a start, going after all the other power drains in chipsets and other board components including the VRMs is the next step.
For Intel, the ability not only to save power but to use the thermal headroom created by dynamic power management for higher performance when the load demands it should be a big advantage - if done right, we should see a steady progression of increased performance in lower thermal envelopes for the kind of things most users do.
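In governor terms, the idea is pretty simple. Here's a toy sketch of the kind of policy I mean - all the thresholds and numbers are invented for illustration, not anything Intel has published:

```python
# Toy sketch of a thermal-headroom governor: clock down when idle to save
# power, and spend the accumulated headroom on frequency when load demands.
# All thresholds and numbers here are invented for illustration.

TEMP_LIMIT_C = 95.0   # hypothetical thermal ceiling

def next_frequency(cur_ghz, load, die_temp_c, fmin=1.6, fmax=3.2, step=0.2):
    if load < 0.2:
        return max(fmin, cur_ghz - step)   # lightly loaded: clock down
    if load > 0.8 and die_temp_c < TEMP_LIMIT_C - 10:
        return min(fmax, cur_ghz + step)   # headroom available: boost
    return cur_ghz                         # steady state

print("%.1f GHz" % next_frequency(2.4, load=0.9, die_temp_c=70))  # 2.6 GHz
```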
Intel has a great roadmap to drive this in most areas - the only area that looks to me like it needs more focus is graphics. I like the foundation work they are doing to enable broader access to special purpose hardware, but execution either on a competitive discrete product or an IGP that is a leader rather than a 'good enough' checkbox doesn't seem likely any time soon. Admittedly, the whole of the graphics market is not financially significant to Intel, so maybe it's just not on anyone's radar. But at some point in the near future, the ability to include graphics in the dynamic power management scheme will become important, since it could be the dominant load in an otherwise well optimized system. That's similar to the very low power mobile space - the display itself draws more power than all the stuff driving it.
Duke -
I have been testing a wide variety of clients and peripherals on Win7 64 bit and so far have not found anything that doesn't load up automatically, including an Olympus P400 dye printer that never had any support under Vista. One of the tricks that MS pulled in Win7 - if there is no Win7 or Vista 64 driver, they will try to load an older driver and run it in XP X86 emulation mode (that's what the P400 did). All of the features from the older drivers are available - previews, resource monitoring, etc.
I think this is a great idea - provides a great out of box experience. The same thinking applies to the XP emulation mode - got an old program that hasn't been upgraded since 2002? Run it in emulation. But that requires a CPU that can run virtual machines - the printer drivers load on anything.
I think he meant Nvidia - Apple has pretty much cut Nvidia out of their planning. ATI is filling the gap for new designs at present.
re: And because AMD don't have an Atom competitor, they out of nowhere have all of a sudden become ARM fanboys.
I don't see any evidence of AMD pushing ARM, do you have a link? Or are you talking about the posters?
Intel clearly has to appeal, if only to establish the playing field for any future legal actions, but I think their best course is to have public awareness of the issue disappear in the haze of news about GM bankruptcy and such, and put it behind them publicly.
Larrabee is a big departure from current dedicated GPU design, and an important initiative for Intel in many ways. I hope that hype about potential performance does not get out of hand, as that could obscure the real value of the innovation.
re: Intel is a "financially struggling chipmaker"
This is very clearly a sarcastic reference, like referring to Microsoft as 'a not-for-profit software company'... The exact opposite is implied.
re: the idea that Vista would osborne a whole cycle of retail machines
yea, they were trying to land a 200 lb fish with 10 lb line - the comparison with WinME is apt. Most of the work on Win7 is to deliver what Vista promised, and I think it does a good job. But the ecosystem has evolved a lot in the last few years. More people care about transcoding, HD video, and other tasks where big compute power makes a difference. The base experience is better on old hardware - Win7 runs acceptably on a P4 mobile. The user experience is more intuitive and less confusing. Most of the changes are invisible - and that's a good thing.
But just before the Vista launch, most of the OEMs were expecting a tsunami of replacement, and Vista was just not compelling enough to drive customers in that direction.
I agree with all your points except these:
For them not to be in step with intel and their chipset lineup was a foolish mistake on their part.
They were deeply engaged with Intel around roadmap alignment with Vista. The devil was in the details - a combination of an early push on logo requirements and a 90 day delay in availability of the 915 replacement created the problem.
Vista required more memory and processor than xp. This also was a problem.
It turned out to be a problem, but it was a design goal.
Intel (and the industry) were enthusiastic supporters of the increased hardware demands of Vista - they were looking for a compelling requirement to drive next generation hardware. XP had not done that - it ran fine on anything that ran Win2K, so many buyers felt that what they had was 'good enough'. Product mix was not where anyone wanted it to be, and Vista was seen as the engine that would pull significant upgrades.
No question that there were many more serious errors in Vista than the graphics issue, and even that was to some extent a MS created problem. But the solution Intel requested made many of the user interface problems more visible, especially on hardware that was not DX10 capable.
re: Apparently you must be referring to the Vista Capable lawsuit emails which have nothing to do with lowering standards for graphic performance unless you are privy to other sources?
I was working with Intel under contract during that period and was deposed in the trial, so I guess you could say I'm privy to other sources. The decisions around changes to logo requirements were a central part of the investigation.
I happen to think the lawsuit was complete BS - but if the logo requirements had not been lowered, it would never have happened.
If you did a little more background reading on this, you would do yourself a favor. And, as morrowinder said, there was plenty of blame to go around - this was just one act in a large circus.
And as to enabling competitors, most of the higher margin Intel systems, both desktop and mobile, used Nvidia or ATI graphics - and even today, the Intel platform is still by far the largest market for ATI products. In the run-up to the Vista launch, Intel provided me with 4 different evaluation units, all of which used ATI graphics and all of which had great Vista performance (aside from the irritations morrowinder mentions, which are not hardware issues). During that time ATI was not a competitor to Intel in embedded graphics.
So Intel DID support DX10 - but for a variety of reasons (primarily that the switch away from 915 AGP was later than expected, and dropped into the 'Vista Ready' period), they initially wanted an exception to allow 915 systems to be marked 'Vista Capable' during the period when OEMs would be ramping for holiday sales, and for ongoing commercial sales, which were the bulk of the 915 use. They had a good justification for that - referencing Vista deployment on the many 915 based commercial systems already in the field. Later, they successfully lobbied to have that exception made permanent. By the way, according to 'a well placed source', they have also arranged for that same exemption for netbooks under 10.2 inch screen size for Win7.
I was working under contract with Intel on several MS areas at the time, so I understood what they were doing, and largely agreed with it. What I didn't understand was how dependent Vista was on good graphics performance - I didn't look at performance on a 915 based system until long after the launch. I also had little insight into what this did to OEM SKU positioning - basically, it eliminated 'logo placement' to justify a price differential, which collapsed margins for notebooks with discrete graphics.
My point is not that Intel was an evil manipulator - although some might take it that way, there is nothing in my post to that effect. I've talked since then with the Intel folks involved in that effort, who now allow as how it might have been better to look for a less disruptive solution to the 915 issue, given the heat they took from their OEM partners - but what's done is done.
I was responding to the herb will post claiming that the announcement about graphics integration value in Vista was an example of AMD's lack of vision. The Allchin quote was part of the announcement of intent to merge AMD and ATI, and discussion of the enhanced visual experience was one of the few 'neutral' things he could talk about in support of that announcement. Subsequent choices about lowering the bar for the Vista logo were not the only thing MS did wrong, but they had a lot to do with disappointment about what 'Vista Ready' meant. I don't think any of the parties (Intel included) could have foreseen the eventual impact. Read through the Vista Capable lawsuit detail and you will see what I mean.
RE: Vista user experience -
That's a pretty interesting example - Intel lobbied Ballmer and Will Poole to lower the standards for graphics performance, which MS did only a month before launch. Allchin was described as 'berserk' over the decision, since it would, in his opinion, mean that the whole visual experience for Vista would be poor for many users. Incidentally, it meant that both ATI and Nvidia had invested hundreds of millions in DX10 silicon which now gave them no market edge.
This is a perfect example of Intel working in its business interest (it had no DX10 capable parts at the time) but in this case harming not only competitors, but also MS, a key partner. And of course Allchin was right - the Vista user experience was substandard on less capable DX9 hardware. That experience resulted in lawsuits in several jurisdictions and surely contributed to the poor opinion of Vista in the marketplace. Those suits were not from ATI or Nvidia - they were from consumers.
This is, in my view, exactly the kind of behavior that puts Intel at risk. They may have had no intent other than driving business, but it clearly was a bad deal for everyone else, including consumers.
re: it is pretty ridiculous to expect a manufacturer to offer options for every company that makes addon software or hardware
MS already supports exactly that model for thousands of component vendors through WHQL - that includes user interface and software - so obviously it is fine when it seems in their interest.
I had a netbook running Win7 for a few weeks recently - it was perfectly adequate, much better than XP. Rumor is that 'starter edition' Win7 will be priced the same as or lower than the XP starter edition, which would make it competitive with Linux although not free.
I wouldn't call foul on Intel - they ought to have the ability to influence how specialty products like Atom are used, to assure some level of user experience, but they don't restrict anyone. And I know for a fact that there are no restrictions on what OEM/ODMs can make - several have models over the 10.2 inch size. Intel works hard to make vendors successful with their products, with reference designs and engineering support. Vendors who accept that guidance are likely to have a better performing and more reliable product, IMO.
Interestingly, MS does not seem to have a problem in restricting the sale of their 'Starter' windows to 10.2 screen or smaller.
I talked to a MS exec who works with the hardware qualification team (WHQL). He was familiar with the Palit board, and said the issue for them was that it crashed with WHQL drivers that run on all the other ATI reference designs. He also noted that the Palit board is 'about the same' performance as the 'special' 4850s from Gigabyte, HIS, PowerColor, and Sapphire - and those are not as fast as a 4870 but within spitting distance on performance.
The other thing I discovered with a little web searching is that there is not much difference between 4850 and 4870 on price - maybe $20 for similar configurations - and $20 at retail is not much to the chip vendor. The 4850 has the same features, in the same number, as the 4870. The main difference is that it draws less power and produces less heat, at a small performance penalty. So users get a high end experience without listening to a jet engine when they do something graphics intensive.
The geek.com discussion - Another case of people making assumptions without knowing the facts.
Another possibility - one that seems more likely - is at http://www.theinquirer.net/inquirer/news/1137514/gainward-palit-ati-spat-rate-hatchet-job
Palit wanted to make board changes to reduce cost, not to please the high-end overclocking set as has been intimated. Those new changes did not pass ATI reliability validation, for whatever technically arcane reason boards fail these things.
Palit pushed on with the changes against ATI's wishes, so ATI cut it off.
The reason I think that more likely is that ATI has always encouraged the board vendors to push the limits, and the chip spec appears to allow everything from GDDR2 to GDDR5. Palit is playing to the peanut gallery.
I agree with charlie - they should patch it up. But the idea that Palit was trying to improve the card and ATI objected just doesn't fit with everything else ATI does.
re:it will take years to resolve the appeal(s). For a benchmark, consider the Microsoft case
Microsoft paid the original fine amount into an escrow account about 4 months after the original decision. They applied to have the sanctions suspended pending appeal; the application was denied. They then lost the appeal.
And, they got hammered with another $1.39bn (899m euro) fine last year for not complying with the original ruling. They are appealing that. They are also in dispute over additional restrictions on IE.
Seems like once the EU gets on a tear, there's no letting up. And it also looks like you have to act like they won, while appealing - i.e. escrow the fine and alter behavior. And by the way, you lose the appeal.
Duke -
this is not migration or dual boot - it is the ability to run both Win7 and XP at the same time (even allowing XP apps to be on the Win7 desktop). I was at the private launch event last week (where I got my copies of the RC, so avoided the download frenzy) and they described the new mode. It requires desktop hardware that supports virtualization, and runs a virtual XP machine on the desktop for corporate or other users who have some incompatible apps but want the other benefits of Win7. They showed both Nehalem and AMD clients running the package. Intel folks at the event said that 'almost all' existing desktop CPUs will be able to run the VM either without change or with a simple upgrade.
The XP VM has to be licensed separately (or use existing XP if the Win7 is an upgrade to an existing machine), and maintained separately. It is actually quite innovative, and should go a long way to getting a larger share of enterprise users to be early adopters, since much of the certification (or upgrading) of XP apps can be delayed.
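For anyone wondering whether their existing box qualifies, the gating item is the hardware virtualization flag. A minimal sketch of a check, assuming a Linux system where you can read /proc/cpuinfo (on Windows you'd use a vendor detection utility instead):

```python
# Minimal check for hardware virtualization support by scanning
# /proc/cpuinfo for the vmx (Intel VT-x) or svm (AMD-V) flag.
# Note: the flag shows silicon support; the BIOS may still have it disabled.

def has_hw_virtualization(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

print("Hardware VM support:", has_hw_virtualization())
```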
I'm way out of my depth on what parts do what inside the chip. But it seems like the per-core power would be less if the 'overhead' for non-core power is more, no?
beamer -
Re: I measured direct dissipation via instrumented heatsink
Instrumented heatsinks are a well established method of determining the actual dissipation of a part. You set up a known heat source with characteristics similar to the load (I use high power thin film resistors with a surface area similar to the CPU heat interface, and otherwise standard mountings). You load the sink with power below, at, and above the expected load power, measuring the heat rise over ambient through thermistors in the base and at the outlet of the cooling fan. The heat rise of the sink driven by the real load is then plotted against that curve. Load power measurement can be accurate to within a percent or so.
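To make the calibrate-then-interpolate step concrete, here's a rough sketch - the calibration points are made-up numbers, not my actual data:

```python
# Instrumented-heatsink method in miniature: calibrate heat rise against
# known power using a resistive dummy load, then interpolate the CPU's
# dissipation from its measured rise. Calibration points below are made up.

# (known dissipation in watts, measured heat rise over ambient in deg C)
calibration = [(40.0, 11.2), (60.0, 16.5), (80.0, 21.9), (100.0, 27.4)]

def power_from_rise(rise_c):
    """Linearly interpolate load power from the calibration curve."""
    pts = sorted(calibration, key=lambda p: p[1])
    for (p0, r0), (p1, r1) in zip(pts, pts[1:]):
        if r0 <= rise_c <= r1:
            return p0 + (p1 - p0) * (rise_c - r0) / (r1 - r0)
    raise ValueError("heat rise outside calibrated range")

# e.g. a 23.2 C rise measured under the real CPU load:
print("Estimated dissipation: %.1f W" % power_from_rise(23.2))  # ~84.7 W
```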
re: Are you talking about Shanghai here, or an Istanbul sample?
I have not seen any Istanbul parts - I was testing quad Shanghai 2.7G. Based on chipguy's guidance, it seems reasonable that if a quad at 2.7 uses 21W per core, then a 2.6G part might use 20W. Even if it is 21W, six of them are about 125W total, assuming power scales linearly from 4 to 6 cores.
Thanks for a very enlightening post, even if much of it went over my head. Are there any simple process enhancements to that base AMD design that could make a dramatic improvement in the power profile? It seems like getting to a much lower power part will be important to maintain any profile in mobile.
I'm pretty sure chipguy was assuming that the istanbul frequency would be lower - see his 79050 post.
http://investorshub.advfn.com/boards/read_msg.aspx?message_id=36953820
He assumed 25W per core on a 2.9, which looks like a good estimate against the 21 watts per core I saw on 2.7, and that Istanbul would be at 2.5G. I measured direct dissipation via an instrumented heatsink and saw 85W after an hour of CPU intensive work on the 2.7 parts. So a 2.6G six-core part could fit inside the published 125W TDP if those cores are around 20 watts each. Chipguy's frequency estimate was a little low but everything else was right on the money.
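Spelling out the arithmetic, with my measured 85W on the quad as the starting point:

```python
# Back-of-envelope check of the per-core numbers discussed above.
measured_quad_power = 85.0               # W, 2.7G quad Shanghai, sustained load
per_core = measured_quad_power / 4       # ~21.3 W per core
six_core_linear = per_core * 6           # naive linear 4 -> 6 core scaling
print("%.1f W/core -> %.1f W for six cores" % (per_core, six_core_linear))
# ~21.3 W/core gives ~127.5 W for six; at ~20 W/core for a slightly slower
# 2.6G part, six cores land around 120 W - inside the published 125 W TDP.
```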
chipguy -
X64 was pretty much a new architecture, and although it was a lot later to market than AMD intended, it came out the door in fairly good shape. My impression is that AMD was in weak shape during that development.
Is there something about the 'bulldozer' that makes it more difficult?
We use the clients' apps for most tests. Most are SSE3 optimized, or at least the subset that is common to AMD and Intel. Since much current work is around virtualization and migration strategies, keeping to a 'lowest common denominator' is a focus. Some loads are optimized specifically for Intel architecture.
Congrats to chipguy on the 30% performance increase estimate for Istanbul - doesn't get any closer than that.
There's no chance that Istanbul will compete with the X5570 head to head. But X5570 servers sell at a premium price. The current HP Shanghai DL385 is priced closer to their E5540 DL380 G6. At the processor level, the 2.9G Opteron 2389 is priced like the E5550.
On most of the tests I run, and the benchmarks I have seen published, E5540 is about 85% of the performance of X5570. 2389 is just a few percent faster than 2386 but uses a lot less power - E5540 and 2389 are very close on both processor and total system power for the DL300 configurations.
Of course, we don't know where AMD will price Istanbul. If they stay near E5500 pricing, Opteron will be closer - still behind, but within 15-20% (except SPECint, where it trails by about 30%), and close to even on SPECjbb. That is certainly a better position than today, and if they get the product out in June, they get maybe 6 months before the next wave from Xeon.
re:Istanbul I predict will be generally out-performed by Nehalem, just less out-performed than Shanghai
I have a couple of questions for the chip experts here
1 - how is the power on Istanbul likely to compare to Shanghai? I have seen comments that it is in the same power envelope - does that imply a slower clock or what?
2 - does anyone have an idea on the cache snooping feature in Istanbul?
On workloads that respond well to more cores (either real or HT), Istanbul should be substantially better than Shanghai. Although Nehalem does a lot better than Shanghai on many workloads, Istanbul could be competitive on a broader range of workloads - certainly anywhere the current differential is 30% or less and multi-threading is well supported. On that basis alone, I tend to agree with beamer.
Another question is pricing. Nehalem at the high end seems to have set a new band above the Shanghai and Harpertown levels, with lower performance models that match previous generation pricing. I would expect AMD to price Istanbul based around some price / performance metric, which would put it well above Shanghai but probably below the top Nehalem parts.
Since Istanbul will presumably be offered in all the current products (i.e. up to 8P), doesn't that put them in better shape on large systems, and restore the edge in 4P and 8P servers, at least until Nehalem 4P is available?
re:measurements should be in energy consumed per task, e.g. joules/job.
What you want to measure depends entirely on the use scenario. In addition to system and CPU draw, we measure PSU efficiency, impact of different disk management, and many other factors, both short and long term.
For the particular use case I was testing, the goal was to characterize power draw and heat loading for a group of servers running consolidated virtual workloads, with a target of 70% average utilization. 100% utilization is one goal post, sleep states the other. When running a farm, if overall utilization drops below 60%, the lightest-loaded server has its workstreams migrated to another box and gets put to sleep. Any server heading to 90% sheds load, and servers are brought back up to accept load if the overall loading looks like it will violate the SLA or peak load response.
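In code terms, the policy boils down to something like this - thresholds as described above, with the names and data shapes invented for the sketch:

```python
# Sketch of the consolidation policy described above, written as a planner
# that returns a decision for the farm controller. Thresholds are the ones
# described in the text; names and data shapes are invented for the sketch.

LOW_FARM, HIGH_SERVER = 0.60, 0.90

def plan_action(servers):
    """servers: list of (name, utilization) for the currently awake boxes."""
    avg = sum(u for _, u in servers) / len(servers)
    hot = [name for name, u in servers if u > HIGH_SERVER]
    if hot:
        return ("shed_load", hot)   # migrate workstreams off hot servers
    if avg < LOW_FARM and len(servers) > 1:
        lightest = min(servers, key=lambda s: s[1])[0]
        return ("drain_and_sleep", [lightest])   # consolidate and power down
    return ("steady", [])

print(plan_action([("srv1", 0.35), ("srv2", 0.50), ("srv3", 0.40)]))
# -> ('drain_and_sleep', ['srv1'])
```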
Clearly, if a majority of those loads are ones where Nehalem shows better than a 50% advantage, the user is money ahead to go that route. Also, if peak response requires more performance than Opteron can provide, Nehalem may win on that basis.
On some loads, which I have discussed before, Nehalem does not show a significant advantage over Opteron, and in those cases, the lower price and power draw favors Opteron. In other loads, Nehalem showed as much as a 70% advantage, and I have seen well-documented results showing 100% or more.
Existing architecture may have a big impact even when the newer products are different, say Harpertown to Nehalem, because many subsystems may be common. That's not true for HP's DL series, but Opteron has no equivalent to the ML350 or ML380, which happen to be the best selling servers HP has.
I just offered the numbers as a footnote to the power draw discussion. It is not the top criterion for most deployment decisions.
I measured both system power and processor power (the latter derived from calibrated heat sink rise) on two nearly identical configurations - HP DL380 G6 (Nehalem) and DL385 G5p (Opteron), both with the same disk subsystem and other components. The Nehalem was the X5560 2.8G, the Opteron the 2384 2.7G. The Intel system had 12GB RAM, the Opteron 16GB. I didn't pay for the systems, but looking at HP's site today, the Intel box is just over $6,500 and the Opteron is around $4,600.
Running a variety of CPU intensive loads, the Intel system draws between 320W and 360W. The Opteron draws between 280W and 320W. Nehalem wins at idle - 100W versus 180W for the Opteron. Calculated CPU load power was between 80W and 95W for Nehalem, and between 60W and 80W for Opteron. That matches the system draw difference pretty closely - it appears that the difference in total system draw is almost entirely due to processor draw, as you would expect in otherwise nearly identical systems.
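A quick sanity check on that claim, assuming the CPU figures are per socket in these 2P boxes:

```python
# Rough check that the CPUs explain the system-level draw difference.
# Midpoints of the measured ranges; both boxes are 2-socket systems, and
# the per-CPU figures are assumed to be per socket.
nehalem_sys, opteron_sys = (320 + 360) / 2, (280 + 320) / 2   # W under load
nehalem_cpu, opteron_cpu = (80 + 95) / 2, (60 + 80) / 2       # W per socket

sys_delta = nehalem_sys - opteron_sys        # ~40 W at the wall
cpu_delta = 2 * (nehalem_cpu - opteron_cpu)  # ~35 W across two sockets
print("system delta %.0f W vs CPU delta %.0f W" % (sys_delta, cpu_delta))
```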
This looks to me more like a point attack on one of the few nehalem weak spots - the 'fork lift upgrade' requirement. Unfortunately for AMD, few datacenters upgrade CPUs, but perhaps in today's tough times, more will consider it. In any event it's low hanging fruit for AMD - the server was already sold, so any processor sales should be incremental revenue.
EP -
Thanks for the link. It actually did seem witty and charming, in stark contrast to the kind of overblown or strident press I have seen elsewhere from AMD. Still a bit self-serving, but what can they say in the face of the tsunami of good Nehalem reports?
Is this guy new to AMD? Personally, I think AMD could stand a good dose of realism in their positioning. I hope this is a trend and not an anomaly.
You could start with Richard Dracott or Stephen Wheat. They would probably say that while the Larrabee implementation is not suitable because of the lack of ECC, and the short FP, that a version with a heavier FP, ECC, and some enhanced connectivity would be the right building block. But maybe they don't know what they are talking about either.
Since you bring them up: according to IDC, the HPC market has been over $9 billion for each of the last three years. Processor content might be 10% of that - if so, HPC processor content is about the same as AMD's total server chip sales. What part of AMD's server chip sales went into HPC? My guess is that Intel had 80% of that business already.
As far as knowing what I'm talking about, if you were as plugged in as you claim, you would be talking about Larrabee type architectures in HPC, not Nehalem.
They sell single digit units a year, I think - just a flag on the mountain play.
Virtualization is one of the few hot trends - here's a comment from an IDC analyst
According to Matthew Eastwood, Group Vice President of Enterprise Platform Research, 2009 is a big year for virtualization. If current trends hold, this year will mark a big crossover with more virtual servers than physical servers worldwide at year's end.
http://blogs.computerworld.com/2009_virtualization_crossover_year
With Nehalem's strong performance on VMWare, Intel should be well positioned to ride the wave. AMD has had the upper hand in that space, and should get some momentum from their previous performance, but they will need to pull a rabbit out of their hat to avoid losing share.
I'm not sure how you get AMD HPC death out of a workstation article. The CRAY CX1 (the Intel collaboration) is for smallish (16 socket max) parallel computers - the giant XT3 machines are still Opteron based, and will be at least through 2010 according to Cray.
Nehalem is a much better architecture for large scale HPC than previous Xeon, and the performance will make it attractive. But the rest of your post is the kind of hyperbole usually reserved for AMD fanbois.
The last rev of Nehalem engineering samples cut into that access time penalty, and they are continuing to look at adjustments to make additional progress. I have not upgraded to production silicon but expect that to happen soon. Increasing performance on that particular workload apparently impacts other, more common workloads - or so I have been told.
Your point about AMD pricing is interesting - I hadn't thought about it, but it makes sense. I guess the question is whether additional profit in that space would offset a less competitive mainstream position, and that discussion is pretty far out of my area of expertise ;=)
I am not suggesting that customers with legacy Intel servers would go to AMD - I stated the opposite, which is that there are substantial cost and implementation barriers that will keep them in a processor family for a given server pool. I contributed test methodologies to Intel IT around configurations for a POC they did on FlexMigration support - it was published in January. http://download.intel.com/it/pdf/Testing_Live_Migration_with_FlexMigration.pdf
That gives you an idea of some of the issues. What Intel supports is essentially limiting CPUID reporting for both privileged and non-privileged code, and then verifying that all VMs can run in the reported common subset. They are working on an advanced capability for Nehalem which will, if successful, reduce barriers for live migration, but that will probably not be ready for prime time until next year.
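The masking scheme is conceptually just a bitwise intersection of the feature sets in the pool. A toy illustration - the feature bits here are invented for the example, not the real CPUID layout:

```python
# Toy illustration of CPUID feature masking for a live-migration pool:
# the pool advertises only the intersection of its members' feature bits,
# and a VM is admitted only if everything it needs fits that subset.
# Bit positions below are invented for the example, not real CPUID bits.

SSE3, SSSE3, SSE4_1, SSE4_2 = 1 << 0, 1 << 1, 1 << 2, 1 << 3

pool = {
    "harpertown_box": SSE3 | SSSE3 | SSE4_1,
    "nehalem_box":    SSE3 | SSSE3 | SSE4_1 | SSE4_2,
}

common = ~0
for features in pool.values():
    common &= features   # lowest common denominator across the pool

def vm_can_live_migrate(required):
    return required & ~common == 0

print(vm_can_live_migrate(SSE3 | SSSE3))   # True: inside the common subset
print(vm_can_live_migrate(SSE4_2))         # False: only the Nehalem box has it
```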
We are not talking about yesterday's virtualization - we are talking about the most advanced virtualization in use today. Here's a description from VMWare on current state and constraints - http://www.vmware.com/files/pdf/vmotion_info_guide.pdf
Finally, in today's IT world, a 2 year old server is not 'legacy' - shops are looking for ways to extend tech refresh to 48 months or more. The numbers I gave were to draw the analogy - real numbers might be 1000 servers or more in a pool. If the goal is to maintain some QOS or SLA level, and that requires adding more servers to the pool, Dunnington might be the right choice, or some other Core Xeon might be right. At the current state of the art, Nehalem is not a good choice - although some will try it.
In several areas of virtualization, in particular virtual database support, which is increasingly important for 'cloud' services, Nehalem performance is not better across the board when compared to Shanghai. There are a number of reasons for that. I am confident that Intel will address those issues, but we also have new products from AMD which will raise the bar. My educated guess is that by late summer, performance in that space will be 50% better than it is today, for both platforms. It seems likely that Intel will pull ahead in that particular race in early 2010.
re: I've seen you move from "AMD has a bright future ahead in virtualization" to "AMD at least can serve its customers who already have AMD servers."
No movement - I think both statements can be true at the same time. AMD can improve their position with some customers who have the specialized needs which AMD's architecture does well, and I also believe they will hold on to customers who bought into AMD architecture for reasons other than price. Intel Nehalem will take share in many areas where customers are just buying bang for the buck. Intel already took much of that business from AMD with core, so they are really consolidating a winning position and denying AMD re-entry.