Looks like AMD stock is finally picking up some upward momentum. I think this run may gain some legs as there has been a lot of positive news since the last run.
I'm predicting a quick return to $18, with a few days of bouncing on that, and hopefully enough volume to push us through that to $20.
This may be wishful thinking, but it just feels ripe for a run.
I'll probably unload the shares I was assigned last month (Feb $15 puts) around $17, if I can.
Which is still the vast majority of the baggage of x86.
I'm not so sure about that. Getting rid of legacy mode segmentation is huge.
As you correctly pointed out, x86 will continue to shed baggage. I'd be very surprised if K9 has Legacy Mode; many of the x86 historic disadvantages will be cut loose.
Well, it hasn't shed any baggage yet, and I think it is going to take much longer to do than the K9 timeframe. AMD won't drop legacy modes until there is absolutely no doubt that doing so won't cause problems. That is at least 5 years out, IMO. They don't want to be flagged as incompatible in any way, even if it only affects a small percentage of applications.
First, all the cost that is important to the real customer is the system cost, not one of the components. In the server market that IPF currently resides in, IPF processors go up against CPUs like POWER4+ and USIII and US-IV, which require big off-CPU caches.
I wasn't arguing against IPF chip costs in the existing markets it plays in. It is doing fine there. I was arguing as to whether or not it could be a mainstream part (25M units/quarter), or rather when it would be feasible to be a mainstream part assuming there was a market for it.
Thanks for your 2 posts, and same to wbmw. I think I see a point sometime in the future where IPF could be more competitive cost-wise in a mainstream desktop system application. I'm not sure it will ever happen because of the resistance against it in the x86 world, but I see how the large cache disadvantage of today diminishes over time to the point of being no disadvantage at all.
Keep in mind x86 may shed some baggage as time goes by, although some things can never die. I doubt x86-64 is the last of the extensions or evolution.
and they want workstations capable of using the exact same applications.
What happens to the apps that don't run on IPF? I guess they just keep their legacy machines around until everything is running on IPF. I'll buy that. This is how I've seen companies transition from Solaris to Linux. I guess this is no different, unless IT is looking for direct replacements for existing x86 workstations without adding more machines to the support list. IPF requires a phase-out period with multiple machines covering all the jobs (with the added benefit of more CPU cycles to use), whereas x86-64 could be a straight machine swap.
Process shrinks will eventually allow larger caches to take up less die area.
I can see that, but x86 will also be moving to larger caches and multicores using this same rule, and probably be adding more execution units to each core, a new way of doing floating point, etc..
I'm wondering when an IPF part would perform better than a similarly sized x86 part in all aspects. Is there a crossover point where more cache for an x86 part doesn't make much difference, but it still does for an IPF part?
Today there is a large disparity in performance per unit of die size with current IPF implementations. You double the die size, but only gain 30% in fp performance, and possibly lose in integer performance when going from x86 to IPF. You could design a multicore x86 with a shared cache and still be under the Madison (6MB) die size, and it would likely outperform it in every benchmark, and in some cases kill it.
I think the metric for mainstream parts is performance per unit of die size. IPF has to at least be somewhat competitive in this metric for broad adoption, don't you think?
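The performance-per-area argument above can be put in numbers. This is a back-of-envelope sketch using the illustrative figures from my earlier post (double the die size for ~30% more fp performance); these are hypothetical ratios, not measured data.

```python
# Back-of-envelope performance-per-die-area comparison.
# Numbers are the illustrative ratios from the post, not measurements.

def perf_per_area(performance: float, die_size: float) -> float:
    """Performance delivered per unit of die area."""
    return performance / die_size

# Normalize the x86 part to 1.0 performance at 1.0 (relative) die size.
x86_perf, x86_area = 1.0, 1.0

# Per the post: the IPF part doubles the die size for ~30% more fp performance.
ipf_perf, ipf_area = 1.3, 2.0

ratio = perf_per_area(ipf_perf, ipf_area) / perf_per_area(x86_perf, x86_area)
print(f"IPF delivers {ratio:.2f}x the fp performance per unit of die area")
```

With those inputs, IPF lands at 0.65x the fp performance per unit of die area, which is the gap it would need to close for mainstream economics to work.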
IPF is still the best platform for the future, as soon as infrastructure can support it.
Can an IPF part be die-size competitive with x86? In other words, does Itanium just require huge caches by nature to perform well? If we took an x86-64 part (either Intel or AMD) and put enough cache on it to be roughly the same size as an Itanium today, how would performance stack up between the two? Or the opposite: if IPF were suddenly targeted at desktop machines with a die size of, say, 150 square millimeters, how would it stack up? Is there some point in the future where there is a crossover in performance per unit of die size for IPF parts?
I know it is not all about die size, but for IPF to be the future platform as you suggest, it will need a lower cost mainstream solution that doesn't have poor performance.
What am I missing?
HailMary, that isn't the way most developers work. If, some day, effective IPF graphics workstations are successfully targeted at CAD and similar markets, then those workers will have an IPF machine to run their specialized programs and an AMD64 PC (or laptop) to run everything else.
IT departments are trying hard to consolidate the number of architectures and systems they support.
A couple of years back, I used to have access to Solaris, HP-UX, Alpha, and x86 (Solaris and Windows), but now everything is Linux, Linux, Linux on x86 with a few exceptions. IPF could be one of those few exception workstations where you run the specialized program. My point was that IPF won't be the broad-based workstation until all the primary apps needed can run on it.
Why would IT buy an IPF machine to get a 25% benefit for 1 or 2 applications and another Linux x86 machine for everything else, when they could buy one Opteron machine to cover everything with general great performance and save money and resources at the same time?
I'm not so sure that will be the case a year from now, however.
You may be right. I know a lot of these EDA vendors are porting to both IPF and Opteron with equal priority.
There are other workstation markets where you'll find similar problems. The movie and music industry uses a lot of x86 apps running on Xeon workstations, and I don't think IPF has done much of anything there. This is a place where 64 bit is needed, just like EDA. It is a natural progression to x86-64, and now Intel can play there too.
You forget about the IA-32 EL.
I didn't forget. My post mentioned caring about performance of legacy apps.
This problem even exists in the EDA world, where IPF has some backing.
Let's say a company uses all Linux-based EDA tools running on x86 today. They use logic verification tools from one company, design capture from another, and routing/physical verification from yet another. As I have seen to date, only 1 or 2 apps in this chain run on IPF, and there is no way the others could be run under IA-32 EL due to performance concerns. This company has little choice but to stick with an x86 solution until all the apps can run on IPF, or to keep specialized machines for different jobs. The performance benefit from a specialized IPF machine is just not worth the extra cost and hassle of supporting multiple platforms.
x86-64 fits better here than IPF until IPF hits the critical mass of apps that the target consumer wants to run. In some areas, IPF already has done this, in others, there is still lots of work to be done.
So I'm not saying there are more x86-64 apps than IPF apps. I'm saying there are a ton more x86-32 apps than IPF apps that are still critical and performance sensitive, where a solution like Opteron works best. If you have enough of these situations, it will be hard for IPF to ever gain traction in the workstation space.
If these programs are used now,
and are available in EPIC format,
and these programs behave like SPECfp, performance-wise,
and the last 10% of performance really do matter,
and costs for hardware and software do not matter much,
then this would be my decision as well.
You forgot:
and other legacy application performance or even run-ability is not important to me.
A lot of workstations run a whole bunch of different apps, and until all of them run on IPF without sacrifice, willingness to switch over from another architecture will be low. This is where AMD64, and now IA-32e, have a distinct advantage. This advantage does dwindle over time as more applications get ported to IPF, but I don't think IPF has hit critical mass yet. Until then, IPF will be useful as a dedicated workstation or server running specialized applications, but not as a broad-based workstation.
"AMD earns $0.75 per share on revenue of $1.5B for Q3," or some such.
Yes, except I would apply your numbers to Q4, so we might be waiting a while. I guess that means I just declare more long term capital gains instead of short term. Works for me.
I have to admit marketing has improved, moderately.
It is this myopia that creates the investment opportunity that AMD appears to offer.
Welcome aboard! Sooner or later the street will get it.
I really think we need AMD's management to put on a show for the analysts so they will get it. Jerry Sanders was very good at this even though he had trouble delivering on some of the promises. I think current management and investor relations need to find ways to get some excitement flowing. I miss this aspect of Jerry. I think overall the company is better run now, but it has lost some pizazz.
As I've said on earlier posts, it sometimes is hard to believe how slow the market uptake is on good news on the AMD side. Bad news is quickly dealt with, but good news usually seems to just sit there for a week or two before being reflected, even partially, by a stock price rise.
This is spot on. Bad news is often overblown on the way down, and good news takes forever to build momentum.
My gut feel is there is a very solid base of investors in AMD now. All it will take is some general market cooperation, and AMD will be in the $20s and beyond quickly. Things have never looked better.
Time for some patience I guess. I would buy more here, but with my latest assignment, I'm overloaded, on margin, although not dangerously so.
Wafer scale integration has been tried before but has never been proven commercially feasible.
LOL. That was actually pretty funny.
I doubt we'll see 2GB L2 from AMD anytime soon. The biggest things coming for Opteron are faster HT and lower power and higher clock speeds on 90nm.
This is *great* news.
I think the HP news is great, but I expect the full realization by the stock market for AMD will not occur until later this year. I expect a slow run up, with an occasional burst around earnings times.
I think the best bet is a buy and hold strategy here. Put writing is perfect here. Very little downside, and a good time to accumulate. I think short term we may see some weakness even with the HP announcement, but this will subside soon enough and give way to positive momentum once again once it all settles in.
I would not be surprised to see >$30 by the end of the year. Too many good things going AMD's way. It is almost unbelievable. I think profits will be way up by Q4. The seeds are planted except for 90nm. That one still has me a bit worried, but less so than before.
Now is no time to panic!
Kevin McGrath talks Hammer.
Great presentation. Maybe too good. I think he gave away some helpful hints to competitors.
One of the things I liked hearing (there were many) was that once in 64-bit long mode, the switch to the compatibility sub-mode costs about the same as a far call, or about 80 cycles. This is very good. As long as the app isn't making a zillion calls to the O/S, a 32-bit app should run well. The mode-switch cost may even be offset because the O/S code being called now runs in 64-bit mode with the extra registers, so it may work out to a net benefit for most 32-bit applications. Some short O/S calls may end up slower, though, since every 32-bit-app-to-64-bit-O/S call has the mode switch plus the WOW64 Windows layer to deal with.
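To see why the ~80-cycle switch shouldn't matter much in practice, here is a rough overhead estimate. Only the 80-cycle figure comes from the presentation; the clock speed and syscall rate are assumptions I picked to represent a fairly syscall-heavy 32-bit app.

```python
# Rough estimate of mode-switch overhead for a 32-bit app on a 64-bit O/S.
# Only SWITCH_CYCLES comes from the talk; the rest are assumed for illustration.

SWITCH_CYCLES = 80          # compatibility <-> 64-bit mode switch (per the talk)
CPU_HZ = 2_000_000_000      # assumed: a 2 GHz part
SYSCALLS_PER_SEC = 100_000  # assumed: a fairly syscall-heavy 32-bit app

# Each O/S call pays for two switches: into 64-bit mode and back out.
overhead_cycles = 2 * SWITCH_CYCLES * SYSCALLS_PER_SEC
overhead_fraction = overhead_cycles / CPU_HZ
print(f"Mode-switch overhead: {overhead_fraction:.3%} of CPU time")
```

Even at 100,000 syscalls per second, the switches eat well under 1% of the cycles, so the extra-register benefit on the O/S side could plausibly swamp it.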
Also interesting (although I already knew this) is seeing how the FP execution unit carries around the worst x86 baggage in the form of a few extra pipe stages due to the old stack architecture. FP performance could probably be improved a bit if this could be dropped. I fear it will take many, many years before they start dropping legacy support, simply because they don't want to abandon the installed codebase. Until then, advantage IA-64. Fortunately it doesn't appear to matter much at the moment.
Hectorisms
"Hail Mary"
My favorite!
I expect that it won't be long before we see a version of Opteron that leaves off Legacy Mode.
I'm sure everyone is looking forward to that. AMD/Intel will probably even save significant development time of new x86 cores as they won't have to verify all the different ugly legacy modes.
from the multi-ton millstone around the neck of the poor souls who have to design x86 chips to support *every* layer of x86 accretion, all the way down to 16-bit segmented madness and x87 stack stupidity. In the x86 world, baggage is forever.
I'm not sure I agree here. I think we'll see compilers fading out support of old modes as chips do the same. If you need to compile a 16-bit app, use an old compiler.
You are trying to reconcile the money that has gone into IPF in total with the revenue generated in one year.
Yes I was doing that. Ongoing profitability is more important going forward. On that level, I'm sure they are getting incremental dollars out of it.
In terms of revenue, I think you are being conservative about ASP. I think it's closer to $1750.
I was saying a profit of $1000, not ASP. As they try to grow the market, the ASP will come down significantly.
The bottom line is Intel should keep IPF alive as it is making an incremental profit, but nobody will ever try a new architecture like this again, as recouping the original investment is unlikely.
IMO Intel and HP have invested around $3B in IPF to date.
Another thought. It will be a long time before another company attempts a new architecture given the price of entry. So I think it is IPF, x86, or bust for anything but embedded. Maybe over time x86 will be able to drop most of the legacy ugliness and become more useful. It is already going in the right direction. I'm sure we haven't seen the last of the extensions and evolution.
IMO Intel and HP have invested around $3B in IPF to date.
Thanks for the numbers. They are helpful. I'm really just trying to determine whether or not this ultra-high-end market is viable, and it appears it is, assuming a company can take most of the marketshare in this niche. The gradual death of high-end PA-RISC, SPARC, Alpha, and MIPS over the years has made me skeptical. Most of those died at the hand of low-cost Xeon and now Linux. Are we sure IA-32e or AMD64 isn't going to displace some of this IPF marketshare and make the market even smaller? You cite SPARC as a target, yet Sun now has AMD64 to move their existing customers to. IPF has to be good enough to create a large enough market to make itself sustaining. I think it is going to be a challenge.
From a purely CPU sales perspective, I think Intel is making enough money to pay for the CPU R&D, so in one respect, I already consider it profitable.
Thanks for the straightforward response. I wasn't trying to bait you. I see numbers and statements thrown around by people who don't know enough about the subject, and you seem closer to the subject, so I wanted your opinion. I find the above statement hard to believe though. 100000 Itaniums at $1000 profit each is only $100M. The research on IPF has to be in the billions by now, isn't it? Or is that another overblown number that gets tossed around?
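The arithmetic behind that skepticism is simple enough to write down. These are the figures quoted in the thread (100K units, $1000 profit each, ~$3B Intel+HP investment), not audited numbers.

```python
# Payback math for IPF using the thread's figures (quoted, not audited).

units_per_year = 100_000
profit_per_unit = 1_000        # dollars of profit per CPU, per the post
sunk_investment = 3_000_000_000  # ~$3B Intel+HP investment, per the thread

yearly_profit = units_per_year * profit_per_unit
years_to_recoup = sunk_investment / yearly_profit

print(f"Yearly CPU profit: ${yearly_profit:,}")
print(f"Years to recoup the sunk investment at that rate: {years_to_recoup:.0f}")
```

$100M a year against a ~$3B sink is a 30-year payback before counting ongoing R&D, which is why "incrementally profitable" and "recouping the investment" are two very different claims.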
Also - what would happen to Itanium if an x86 based solution was around that could challenge its strengths at less cost? It's possible AMD's next generation might pose this challenge even if Intel intentionally keeps their products out of that space.
IPF is not behind, it's ahead.
Do you see IPF as being a profitable venture at this point? Most of the money is made in Xeons. I just don't see how IPF can ever make up for the money sunk into it now that it won't be hitting mainstream computing which was an original goal of this architecture. It could have made up for all the investment had it been properly pushed into the mainstream earlier. AMD would have been put out to pasture if that happened as they have no cross license for it. Intel can keep it alive, and even grow the market niche, but I just don't believe it can be self sustaining.
Any longs actually pleased about IA-32e?
Yes. I am overall. It removes the concern of an incompatible instruction set. Intel had to make this move to IA-32e sooner or later as it was clear AMD64 was starting to build momentum. It shows Microsoft has some control over Intel, and they're allowing AMD to compete, keeping the market as large as possible.
Sure it would have been better if Intel did this in another year, but that wasn't very realistic.
AMD just needs to stay ahead in performance and play in all market segments, and it will be a fine investment.
I agree they still face some of the same problems of the past with penetrating the corporate space. AMD64 was their ticket in, and Intel has just removed that advantage. I think that makes it harder for them to make inroads, so that is certainly a negative.
The number of instructions for the 64 bit extension to the IA32 instructions set is trivial in number.
While the number of new instructions is trivial, the details are not. The new 64-bit long mode with its compatibility sub-mode is pretty significant. It is what will allow 64-bit Windows to run all existing 32-bit Windows apps.
Everyone is focused too much on the instruction set itself, and not the more important stuff like new modes, extra general purpose registers, and protection features. Nothing revolutionary, but you don't need to or want to reinvent the wheel, especially when compatibility is the most important concern.
I wouldn't call it an extension, but I wouldn't call it a new architecture either. It is an evolution. If Intel markets it as IA-32E, I think it will only hurt them not to identify it clearly as a 64-bit-capable processor. Maybe not today, but within a year or so. Perhaps by then they'll have a better marketing name for it.
Yeah, but Prescott already had all of this.
Prescott already had a long mode with both native and compatibility sub-modes implemented exactly like AMD64? It already had the same number of GPRs? They added the new protection checks?
I find it hard to believe they didn't have to make some hardware changes, or that they didn't use AMD64 as a guide from the start and just named some instructions differently to be incompatible. There were many little details related to hardware that could have been done differently and would have been if they took their own path.
If you think that it takes a complete redesign of the core, then you would be mistaken.
It is also more complicated than you let on. It isn't a complete redesign, but there are significant hardware changes needed outside of the ALU and microcode. Read some of the AMD64 system manual. You'll realize many things need hardware changes (new registers, tables, control lines, feedback lines) to work properly. The ALU and microcode changes are relatively easy. It is all the other fine details that are complicated.
If you have a 64-bit ALU, you can change the microcode to make the instruction format whatever you want.
There is a lot more to a CPU than a simple ALU. There are lots of special registers, tables, control lines, etc. that need to be changed to support a 64-bit architecture, and especially a hybrid that can support many modes. Now there are some things you can do in microcode to make your processor look more like another architecture, but performance is going to be terrible if you don't have some optimized hardware to back it up. Addressing and mode changes are not isolated to just microcode changes.
Nope. No hardware support.
Read page 14 - long mode:
Long mode consists of two submodes: 64-bit mode and compatibility mode...Compatibility mode provides binary compatibility with existing 16-bit and 32-bit applications when running on 64-bit system software.
At the fundamental technical side of things, IA32E is an instruction extension - a microcode edit.
It is a lot more than that! There are a significant number of hardware changes needed to support IA32E (assuming it is fully compatible with AMD64). Take a look at the AMD64 system programming manual to get an idea of the changes and how that would affect the hardware implementation. The microcode edit is a very small part of the overall changes required.
I think the new long mode is a pretty big deal. Being able to have the O/S run in 64-bit mode while apps can run in 32-bit is the most important point. This is what will make both AMD64 and IA32E hugely successful. This doesn't rule out IA64. There is still a place for it, but it is rather small. I'm not convinced that niche segment can be profitable.
I don't have a beta, but doesn't Windows install from an AMD64 directory, and the name is used throughout Windows documentation?
The delay in the Windows XP launch must have been so Microsoft could remove all references to AMD64.
That was a joke. This board needs to lighten up (not targeted at you Joe)!
So if he doesn't sell by then, he pays tax? Is that how it works?
If he doesn't exercise by then, he loses the options. He could hold the shares after exercise instead of selling them immediately though.
Also, don't forget that just because Intel says it's coming doesn't necessarily mean it'll be on time or right. The situation appears to potentially be not quite as wonderful as before, but I still think AMD has a pretty strong hand to play.
We know nothing about the timeframe (other than 2004) and performance relative to AMD's implementation.
Also IA32E is a pretty poor marketing name. It will end up creating confusion for the consumers. I guess for Xeon it doesn't matter, but it will for desktop. If Intel wants to push this technology, they'll need to market it properly, and not try to hide it from competing with IA64. Doing something with only half conviction is asking for trouble. They need to use their marketing muscle as this is one of Intel's major advantages over competitors.
My understanding is that the main reason for it in the Opteron is to physically protect the expensive processor from getting chiped or crushed, not to help dissipate heat.
Yes. You are correct. AMD had a big issue with fractured die chips being returned. For Athlons they had to start issuing a large instruction poster in the retail kit, as well as adding corner pads to the chip, to try to cut back on the huge number of returns and complaints. They moved to the IHS on Opteron to resolve this.
AMD CPU/NB -> HT -> HT/PCI Express Tunnel -> PCI Express
INTC CPU -> CPU FSB -> NB -> PCI Express
By the way - in the above scenario, it is possible the advantage goes to AMD depending on what INTC uses for the FSB and the performance of their NB. The HT/PCI Express Tunnel will be a very lightweight device, and the AMD on chip NB should easily perform better than an off chip NB.
I think I convinced myself there is not going to be any problems for AMD with PCI Express, which is what they have even stated themselves.
I don't think Intel's Northbridge talks to the CPU in "PCI Express". Just as the signal on PCI-E needs to be converted to the type of signal used in HT the signal on PCI-E needs to be converted to whatever method or protocol Intel uses to communicate from the northbridge to the CPU.
The difference is PCI Express video cards on Intel's platform won't need to go through the CPU bus to get to system memory. On AMD's platform a video card would have to go through HT to get to memory because the memory lives on the other side of the northbridge built into the CPU.
I still argue the small latency penalty for translation from PCI Express to HT is going to make almost no difference in the real world. Video cards don't need low latency access to main memory. They need high bandwidth.
Now that I have stated this in 3 posts, I think I will let it settle out. :)
edit: I see he was claiming something else not related to system memory, with the processor sending a request to a PCI express device. In that case, consider this:
AMD CPU -> On chip connect -> NB -> HT -> HT/PCI Express Tunnel -> PCI Express
INTC CPU -> CPU FSB -> NB -> PCI Express
One less 'hop' for INTC, although AMD's connection between the CPU and NB is on-die, and the NB running at processor speed may completely negate the extra hop. You could almost treat the AMD CPU+NB as one device:
AMD CPU/NB -> HT -> HT/PCI Express Tunnel -> PCI Express
INTC CPU -> CPU FSB -> NB -> PCI Express
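The hop-count argument can be sketched as a toy latency model. Every number below is made up purely to illustrate the shape of the argument: AMD has one extra hop, but the on-die hop to the NB should be far cheaper than an off-chip FSB hop, so the totals can end up close to a wash.

```python
# Toy per-hop latency model for a CPU -> PCI Express device request.
# All latencies are assumed, illustrative values in nanoseconds.

amd_path = {
    "on-chip connect to NB": 5,   # assumed cheap: on-die, runs at core clock
    "NB -> HT link": 20,
    "HT/PCIe tunnel": 20,
    "PCIe to device": 50,
}
intc_path = {
    "CPU FSB to NB": 40,          # assumed pricier: off-chip front-side bus
    "NB PCIe to device": 50,
}

amd_total = sum(amd_path.values())
intc_total = sum(intc_path.values())
print(f"AMD:  {amd_total} ns over {len(amd_path)} hops")
print(f"INTC: {intc_total} ns over {len(intc_path)} hops")
```

With these made-up numbers, AMD's four hops total about the same as Intel's two, which is the point: hop count alone doesn't decide the latency question.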
HailMary, so you are saying that a future Intel North Bridge will have PCI-Express built directly in.
Yep, that is what I am saying. Just like AGP ports exist on northbridges today.
I think AMD actually has the overall advantage, as the CPU benefits the most from low-latency memory access. Video cards mostly use main memory as off-card texture storage, which gets loaded into the local video card memory in bursts. Latency is not important for that. Raw bandwidth is.
Now if for some reason video card makers decide they no longer need as much local memory on the card, as they have a fast PCI Express route to memory, then it may become more of an issue having extra latency. I still don't see this happening, as they will pay a latency penalty just by having to hop through PCI Express and across the north bridge instead of using a local memory controller. The local memory controller on the video card will still have an advantage, even with PCI Express, so I think we'll continue to see this trend of video cards having local memory.
As relatively low as the throughput rates are on those devices on the peripheral busses compared to memory or other processors on HT, that's probably still going to be the primary issue, not the latency involved with those devices, wouldn't you think?
Yes I do think you are right here. The latency increase for having to hop through a PCI Express to HT tunnel is going to be insignificant, and may be made up for somewhat by having a much faster on chip northbridge. Bandwidth is the key here anyway. Any PCI Express device that needs low latency memory is going to have its own local memory specialized for that (like a video card). System memory will only be needed for the high bandwidth stuff, and HT is at no disadvantage here.