Right after the end of the quarter, in typical AMD fashion
Sounds right to me.
They now have capacity that lets them easily produce 250K candidate dice per quarter. So what if they have to throw away 200K, and 40K have at least one bad bank of cache - that still leaves more product than the market can absorb.
This looks to be wrong by about 2 orders of magnitude. Are you mixing up wafers and dice?
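To see why the figure looks off by roughly two orders of magnitude, here is a back-of-envelope sketch. All the numbers are my own assumptions for illustration (200 mm wafers, a K7-class die of ~100 mm², no edge loss), not AMD data:

```python
import math

# Assumed figures, for illustration only (not AMD's actual numbers).
wafer_diameter_mm = 200   # AMD's Fab 30 runs 200 mm wafers
die_area_mm2 = 100        # rough area of a K7-class die

wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2

# Crude gross-die estimate, ignoring edge loss and scribe lines.
dice_per_wafer = int(wafer_area_mm2 // die_area_mm2)

# If "250K per quarter" meant dice, the implied wafer count is tiny:
dice_per_quarter = 250_000
wafers_needed = dice_per_quarter / dice_per_wafer

print(dice_per_wafer)   # ~314 gross dice per wafer
print(wafers_needed)    # only ~800 wafers/quarter
```

A big fab runs on the order of tens of thousands of wafer starts per quarter, so ~800 wafers is a rounding error; 250K makes far more sense as a wafer count than a die count, which is the two-orders-of-magnitude confusion.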
AMD may not have a choice
Ah, but IBM does, and if most of the value (which lies in potential successes down the road) disappears on an acquisition, then IBM aren't going to find such an acquisition very attractive.
it's IBM's process. No other choice.
As far as we know (unless the decision is already made) IBM is willing to share process knowledge etc. with AMD for cash rather than for a stake in the company.
What about if AMD sells some convertible bonds to IBM? If AMD does well and the stock gets above a trigger point, IBM get a boatload of valuable stock. In this scenario, AMD stays independent, but the common stock gets watered down. AMD gets a cash injection and IBM gets an incentive to help AMD for a while (until they finish unloading the stock). The loan to AMD isn't as risky for IBM as it would be for others, since IBM can influence AMD's business prospects in both ends (production and sales).
Anyway I'm not sure the cash crunch is that imminent at AMD. They just did a large bond offering and it was fully subscribed so I assume they have the cash they need for a while again. The bonds are convertible at just over $7, so 'underwater' right now.
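For what it's worth, here is a sketch of the convertible-bond mechanics described above. Only the ~$7 conversion price comes from the post; the offering size and current stock price are made-up figures for illustration:

```python
# Hypothetical figures, except the ~$7 conversion price mentioned above.
face_value = 400_000_000   # assumed size of the bond offering, in dollars
conversion_price = 7.00    # roughly the trigger point mentioned above
stock_price = 6.00         # assumed current price

# Shares the bondholder would receive if it converted the whole issue:
shares_on_conversion = face_value / conversion_price

# Conversion only pays once the stock trades above the conversion
# price; below it the bond is 'underwater' and the holder keeps it.
in_the_money = stock_price > conversion_price

print(int(shares_on_conversion))  # ~57 million new (dilutive) shares
print(in_the_money)               # False: underwater right now
```

That dilution is the "watered down" common stock in the IBM scenario: cash now for AMD, a big block of cheap stock for the lender if AMD does well.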
As in the case of SIMD, the Intel standard for 64 bit computing will dominate the Industry.
The question is, did Intel make x86 big, or did x86 make Intel big. I think (gross oversimplification coming up) it was the latter, which makes Intel's decision to abandon x86 look like a big mistake.
Of course, if it was the former, then they can make IA64 just as big as they made x86.
Re IBM buying AMD
I don't see it. AMD loses all chance of getting HPQ, Dell or any other large/largish player to use their CPUs if they are part of IBM.
Perhaps (to throw out a wild idea) BRCM wants to get into the CPU business and have their own fabs. BRCM has half the revenue and three times the capitalisation, so it seems the market has more faith in BRCM's leadership.
Also, Serverworks chipsets for Opteron would be nice.
Why don't you run the numbers for us?
I don't think there is enough public info to do that.
Also, I don't think statements like the one I quoted can be taken at face value. If there is spare flash capacity but no spare CPU capacity, then could they convert a plant? Probably not instantly of course. Are there other dormant resources that could be activated? To some extent I'm sure they can regulate things by varying prices to steer consumers in the direction of larger and smaller chips too.
I lean more towards the idea (presented by someone else here) that the statement was intended as a signal to AMD.
>> AMD clearly has different strategy to make 100%
>> compatible chips from $40 to $2,000.
>
> A winning strategy for AMD? Someone should tell you
> that AMD has lost huge amounts of money for many quarters
They weren't using that strategy at the time (no $2000 chips).
Intel fabs maxed out.
Intel Corp. Chief Financial Officer Andy Bryant said Tuesday that capacity in its microprocessor and chipset business is currently matching demand.
http://biz.yahoo.com/djus/030610/1027000846_1.html
Does this mean that Intel yields are very very bad? After all, demand isn't up since they were making 0.18um chips with the same fabs.
chipguy, luckily Intel are too busy pushing their high-end-only 64 bit offering to pose a threat to AMD's mass market 64 bit instruction set. If I'm wrong about that (ie if Yamhill exists) it will be a big blow to AMD. Also if Intel makes an IA64 product for the sub-$1000 segment that would be a blow to AMD64. I don't see any evidence that they plan to do that. There are Ace's rumours that Tejas has some sort of IA64 compatibility, and if that turns out to be true and they are performance-comparable with contemporary Hammer chips that too would be an interesting development.
x86-64 is an altogether more interesting instruction set architecture than all those multimedia instruction sets could ever hope to be. Even now I can install the newest Red Hat on my Intel-based server with no multimedia instruction set support at all (PPro, no MMX, 3DNow!, SSE, SSE2, SSE3), and it all runs (a little slowly).
Surely an increase in the money supply will go quite some way to curing the problem of excess debt. The resulting price increases will mean that those that have fixed interest debt (bonds) will see the value of their debt evaporate. Those who have variable interest debt will be bankrupted, which will also wipe out their debt. After some years of 1981-style stagflation, very high unemployment and bankruptcies a lot of the debt will be gone and we can start over.
No?
I think it helps that Japan, USA and Europe seem to be pulling the same (inflationary) way. In Europe it is against the will of the central bank, but if France and Germany agree to it then the bank and the rest of the EU will be dragged along.
...AMD is forcing Microsoft...
You really meant to write that? Seems rather unlikely on the face of it.
The fact that Microsoft is hosting a web-cast that is going to be presented by an AMD guy sounds like a good endorsement.
On the other hand, the possibility that software from MS will be late and/or buggy doesn't surprise me. Lucky that Opteron runs 32 bit software so well, while giving purchasers that warm fuzzy feeling that comes from knowing they will be able to run 64 bit software if it becomes the wave of the future.
James Cramer wrote...
Should we really care what Cramer is writing?
Here was his view of the world on February 29, 2000:
"724 Solutions (SVNX:Nasdaq - news), Ariba (ARBA:Nasdaq - news), Digital Island (ISLD:Nasdaq - news), Exodus (EXDS:Nasdaq - news), InfoSpace.com (INSP:Nasdaq - news), Inktomi (INKT:Nasdaq - news), Mercury Interactive (MERQ:Nasdaq - news), Sonera (SNRA:Nasdaq - news), VeriSign (VRSN:Nasdaq - news) and Veritas Software (VRTS:Nasdaq - news).
"We are buying some of every one of these this morning as I give this speech. We buy them every day, particularly if they are down, which, no surprise given what they do, is very rare. And we will keep doing so until this period is over -- and it is very far from ending. Heck, people are just learning these stories on Wall Street, and the more they come to learn, the more they love and own! Most of these companies don't even have earnings per share, so we won't have to be constrained by that methodology for quarters to come."
http://www.thestreet.com/_tscs/funds/smarter/891820.html
Yes, thanks for the correction eom.
The fact that there is no Centrino for desktop is actually very interesting.
It means that Intel thinks that the GHz race in the notebook space is over.
I don't see how that follows at all. It merely means that Intel thinks the power/performance requirements of the desktop are so different that Banias isn't interesting on the desktop.
I'm sure there's nothing to stop OEMs making a desktop Banias or even Centrino machine, in fact I think there is probably a niche for it. I have a 100% silent machine on my desk and it's lovely - you could make machines like that with Centrino (and they'd be a lot faster than my C3-based cheapo machine).
A company named Amitech here in Denmark has done good business with desktop machines based on laptop components. They are expensive, but very quiet and very sleek/compact.
The P4 will end its life somewhere around 7 or 8 GHz in a few years. But Centrino will bump into a ceiling at 2 GHz, and I think that's it.
Again, I don't know how you reach such a conclusion from such sparse data. They will continue to move Banias to smaller and faster processes as long as they can.
Banias has the advantage vs. Hammer that it was totally redesigned with power consumption in mind. Hammer on the other hand is one core that has to be good in server, desktop and power-critical apps. Quite a range to span, and of course it also has to do 64 bits.
On the other hand Hammer has the advantage of being a rather good design on an SOI process. It should be interesting, but I am pretty sure there will be a low power niche where Hammer has no chance, while Banias will do OK. Astro will be playing in that niche too. It's hard to say how big the niche is, but according to the Inq the sales of tablet PCs have been disappointing. I don't think AMD has the resources to Baniasise the K7 design, though they could well put it on SOI.
This is related to the fact that AMD is moving its chip design team to IBM headquarters, right?
Hardly. The AMD-IBM decision came a lot later than the Apple decision to move to HyperTransport must have come.
I think it's related to the fact that HyperTransport is a very good solution that has won very widespread support, and the fact that Apple need to strive to use PC technology wherever they can in order to keep costs down.
No, KT600 isn't held back by the FSB since the FSB has enough bandwidth to cope with the memory bandwidth of the KT600's memory interface.
VIA KT600 is very close to the nForce2's dual channel performance
nForce2 is held back by the FSB bottleneck.
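The arithmetic behind this, using the standard figures for FSB400 Athlon XP platforms of the time (the framing is mine):

```python
# Athlon XP front-side bus: 64 bits wide at 400 MT/s ("FSB400").
fsb_bytes_wide = 8
fsb_mt_per_s = 400_000_000
fsb_bw = fsb_bytes_wide * fsb_mt_per_s          # 3.2 GB/s

# One DDR400 channel is also 64 bits at 400 MT/s: 3.2 GB/s.
ddr400_channel_bw = 8 * 400_000_000

kt600_mem_bw = 1 * ddr400_channel_bw     # single channel
nforce2_mem_bw = 2 * ddr400_channel_bw   # dual channel: 6.4 GB/s

# KT600: memory and FSB bandwidth match, so the FSB isn't the limiter.
print(fsb_bw == kt600_mem_bw)    # True
# nForce2: memory can supply double what the FSB can move to the CPU.
print(nforce2_mem_bw > fsb_bw)   # True
```

Which is why KT600's single channel gets so close to nForce2's dual channel: the CPU can't consume the second channel's bandwidth through that FSB anyway.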
Fyodor, do you really think the more aggressive memory controller of the Athlon 64 can compensate for having only one memory channel? I find that hard to believe.
borsa: re: X-Architecture
There's a PDF off this page: http://www.pc.ibm.com/us/eserver/xseries/xarchitecture/enterprise/ (not sure whether it has all the details). Google is your friend.
What's important is the ability to support 1600x1200 resolution (a DAC of at least 300 MHz, better 400 MHz) and dual-head support.
Any machine with PCI should be able to do that
http://www.matrox.com/mga/media_center/press_rel/2001/g450_pci.cfm
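A rough pixel-clock calculation shows why those DAC speeds give comfortable headroom at 1600x1200. The ~1.32 blanking overhead is my own assumption in the spirit of VESA timing formulas, not an exact GTF computation:

```python
h, v = 1600, 1200
blanking_overhead = 1.32   # assumed padding for horizontal/vertical blanking

def pixel_clock_mhz(refresh_hz):
    # Active pixels per frame, times refresh rate, padded for blanking.
    return h * v * refresh_hz * blanking_overhead / 1e6

print(round(pixel_clock_mhz(85)))   # ~215 MHz at 85 Hz
print(round(pixel_clock_mhz(100)))  # ~253 MHz at 100 Hz
```

So a 300-400 MHz RAMDAC leaves room for high refresh rates at 1600x1200 with a sharp signal, rather than running the DAC at its limit.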
CV = Curriculum Vitae eom
One of the architects of the P6. He is rather well known on comp.arch because he always has interesting ideas and is reasonably prolific there. He wasn't the lead architect.
Responsible for the sysenter instruction and the performance counters AFAIR, plus probably lots of stuff that is invisible to outsiders.
His CV was accidentally posted on the net just before he quit Intel, and picked up by the Inq and others. He mentions 'yet another' cancelled 64 bit x86 project, but it isn't Yamhill.
It is obvious that cheap 2-HT processors may not be used to build 4-way and up servers.
It's obvious, but is it true? IBM's X-Architecture chipsets can make many-way machines out of 2-way Xeons. They connect the 2-way Xeon nodes together using not the GTL+ Xeon FSB, but their own connections. The benefits are threefold. Firstly, they get to use cheaper 2-way chips from Intel. Secondly, they avoid overloading the FSB the way putting 4 CPUs on it would. Thirdly, perhaps they can run a faster FSB than they could if they had 4 CPUs on it.
You could do something similar with Opteron. Either like X-Architecture with a proprietary interconnect or with a Hypertransport hub.
As a developer, I must share my many years of experience, which tell me that the value of developing on a cheap personal workstation and deploying on an expensive server is the key to platform choice. If Intel wants Itanium to take off, they need $1,500 Deerfield computers with good graphics, i.e. the Deerfield chip must cost under $400. If not, I think they will never get over 1% of the market.
I agree, though I can't see why good graphics are necessary. If the machine is being used to develop server apps then the graphics card is only stressed when the 3d screensaver kicks in.
Andy Glew is at AMD now?
http://groups.google.com/groups?selm=9fa03643.0207221334.7c89d4ab%40posting.google.com
Personally I think it was partly dissatisfaction with Intel's 64 bit strategy, but he isn't saying and it could just as easily be something much more personal (relationship-with-boss, promotion, salary, conditions etc.)
Here's his opinion of Itanium:
http://groups.google.com/groups?selm=nkUQ9.5822%24Ck4.257275001%40newssvr14.news.prodigy.com
Note the opening: "I don't know whether Yamhill exists"
You are just bitter that the best x86 compiler development group in the world will never
use any of the 64 bit bells and whistles that AMD has spent years cramming into Hammer.
And worse yet, they still can make it run faster than all the second class compilers that do.
Leaving aside the question of who is more bitter about what I am reminded of a quote from Intel chip designer (now at AMD) Andy Glew that "for many of us, gcc is the only compiler we care about. (By the way, that includes many people inside Intel.)". Almost the entire Linux world feels this way, and almost the entire Windows world feels that way about MSVC.
http://groups.google.com/groups?selm=x42ra.534%24Ho6.85733637%40newssvr15.news.prodigy.com
Outside of benchmarks and commercial Unixes, Intel's highly optimised compiler isn't all that relevant. They are the best compiler team for x86, but in terms of market penetration they have some catching up to do.
How can narrowing a range of uncertainty by bringing both sides closer together by moving
each inwards equally be considered changing one's mind?
Now if they had brought only one of the sides closer together, that would have been changing their minds. Or if they had brought them closer by moving them outwards, that would also have been more newsworthy...
From the article:
And, yes, for these initial tests we are running a 32-bit version of Windows as it was all that AMD would allow us to publish numbers from; they are against me using an Alpha version of the AMD64 version of Windows that has recently become available.
So MS still haven't got the bugs or performance regressions out of AMD64 Windows...
Intel fails to warn
http://biz.yahoo.com/bw/030605/55510_1.html
That means that a 16-bit offset can contain up to 2^16 / 16 = 4096 entries. With each entry addressing a 32-bit address space, that gives this scheme the ability to address an astounding 17,592,186,044,416 bytes!
You can change the entries on context switch, so you are not limited to 4096 entries. In fact Linux for x86 only uses one single entry, which it updates in software on context switch. This means they avoid a 4096-process limitation, and apparently the penalty is minimal. Hammer has this clever page-table snooping stuff, which is intended to speed this stuff up further (avoids unnecessary TLB flushing on context switch).
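Just to check the quoted passage's arithmetic as stated (I'm reproducing its scheme, not endorsing the /16 divisor):

```python
# The quoted scheme: a 16-bit offset, 16 bytes per table entry,
# each entry mapping a full 32-bit (4 GiB) address space.
entries = 2**16 // 16      # 4096 entries
per_entry = 2**32          # 4 GiB addressed by each entry
total_bytes = entries * per_entry

print(entries)      # 4096
print(total_bytes)  # 17592186044416, i.e. 2**44 bytes (16 TiB)
```

The headline number does check out as 2^44 bytes; the real point, as above, is that swapping entries on context switch removes the apparent 4096 cap entirely.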
The Linux kernel resides in the top 4Gbytes of virtual memory (or was that top 2Gbytes), which avoids 64 bit jumps/branches within the kernel.
(Of course, early Opteron and AMD64 processors will not have 64 address lines, so the amount of system memory will be somewhat more modest.)
There is also a limit on the virtual memory, and it's less than 64 bits, though more than the number of physical address lines. They will raise the limit in future generations so that it is always out there where no one notices it.
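Concretely, if I remember the published specs right, the first K8 parts implement 40 physical and 48 virtual address bits:

```python
# First-generation Opteron limits (40-bit physical, 48-bit virtual,
# as I recall from the published specs; later parts can raise both).
physical_bits = 40
virtual_bits = 48

print(2**physical_bits)  # 1099511627776 bytes = 1 TiB physical
print(2**virtual_bits)   # 281474976710656 bytes = 256 TiB virtual
```

Both limits are far beyond what any buyer will bump into for years, which is exactly the point of keeping them "out there where no one notices".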
Interesting freebie w/ Win32 apps on Win64/AMD64
Not clear, but it seems more likely to me that he was talking about Linux.
I wonder when the "real" 4-way systems come out.
Within 2 months according to http://www.spec.org/cpu2000/results/res2003q2/cpu2000-20030421-02115.html
Who knows if they won't be early like the workstations are.
Centrino is the latest Intel product to have a problem.
Windows in blue-screen-of-death shocker! Computer industry shaken!
Seek help!
Haddock, it's just about common knowledge. Haven't you read the multiple links that were reposted on Ace's and elsewhere over the past couple months?
No, though I do try to keep up. Ace's doesn't have a search engine for the BBS.
>>>Thorton is a fused part that disables half the cache in a Barton CPU
>>
>> How do you know?
>
> Why are you asking this question now?
>
> You've stood by silently while every imaginable rumor
> has passed before your eyes and yet now you decide to
> challenge an unsubstantiated claim. Why were you silent
> before?
Well, if it's a totally unsubstantiated rumour, I guess I can live with that; it's just that WBMW and you seemed so sure that I thought perhaps you had an interesting link I had overlooked.
OK, I Googled a little and found this:
http://www.digit-life.com/archive.shtml?20030319
which seems to indicate that Thorton has a smaller core than Barton and this:
http://www.xbitlabs.com/news/cpu/display/20030425073021.html
which on the one hand says that Thorton is a new core (ie not Barton with cache fused) but on the other hand says that "AMD also confirmed its ability to disable a half of Barton's 512KB L2 cache." which makes it sound like it is the same core with half the cache disabled.
Given the expense, I don't think it likely that AMD would build Thorton by just fusing off half the cache in a Barton. If they want to improve yields, then there are better ways to add redundancy that give more protection for a given increase in cache size.
Thorton is a fused part that disables half the cache in a Barton CPU
How do you know?
Cool. Complete with AGP. I wonder how many they have.
While we are getting down to the nitty gritty, the proceedings of the GCC summit have been published, and the x86-64 article is on page 79ff.
http://www.gccsummit.org/2003/
Anybody with half a brain would realize that a 512K Barton is larger than a 256K Barton would need to be.
I think Thorton is the Barton core (excluding cache) with 256K of cache, ie it is not the same size as Barton. If you have any information to the contrary, please share it.