Hi Threejack,
So far Gartner has been more or less on track in their prediction that the semi IP market would grow to some $2 billion in time. (I have that article somewhere)
I also assume you know our (semireview) report on semiconductor IP companies in general.
Cheers
Cor
No, I wouldn't call it good news.
I agree.
Good news would have been negotiating skills outside the courtroom. Not saying the end will not be OK, but considerable delays are going to weigh on the stock price.
Maybe there will be a sudden turnaround and Samsung and Rambus come to an agreement. (like what happened between Samsung and Tessera, but that was after 1.5 years of fierce litigating)
Working the pickax isn't that much fun anymore.
Ha, I am from Holland, a country with half its surface below sea level and consisting mostly of clay. Very few stones. Now I live in Spain and for the first time in my life had to work the pickax just to plant a simple small palm tree:( I admire those poor farmers who used to hack their livelihood out of the mountains like this.
Cheers
Cor
Samsung’s 256Mb GDDR3 is also fixed at 4. I guess that must be the JEDEC spec. 256 Mb fixed and 512Mb programmable to 4 or 8.
That sounds like a likely explanation. I don't have that JEDEC spec; I guess one has to pay for it.
I still find it pretty useless, but I remember that AMD very much wanted the two values (but in DDR2, I believe). If one needs a burst of 12, what do you think the programmer does? Program one burst of 8 and one of 4? I don't think so; he will program three of 4. So we don't need the longer bursts, it does not add anything useful (that I can see).
Cheers
Cor
Naturally I checked it out.
Hmm, like I always do
The one you chose is not an existing chip btw. Availability 05qtr4
The one I took is an available one, the 256 Mbit one:
http://www.infineon.com/cmc_upload/documents/012/3555/HYB18T256324F_Rev.1.11.pdf
And this one says (p. 10)
"The burst length is fixed to 4 and the two least significant bits of the burst address are ’Don’t Care’ and internally set to LOW."
This sort of confirms one of my "conspiracy" theories (posted here, but now too lazy to find it) that IFX was eliminating Rambus IP features from their chips (I bet they changed the PLL/DLL stuff as well to non-infringement mode, which would only leave the most powerful claim, the latency in a register).
Now that IFX has a fine deal, there is no further need for this, so future products can have whatever they like inside, as far as Rambus is concerned.
Cheers
Cor
Guys, guys, guys ... forget those programmable burst lengths anyway. There is no technical reason to keep them now that we have prefetches. When the prefetch was 1 (= none) there may have been a point, in that it was difficult to stream read commands. Now it is absolutely no problem (I checked both the DDR2+ data sheets and the XDR one) to issue additional reads so that one gets any burst multiple of the prefetch.
Infineon already (sensibly) has stopped having programmable burst length on GDDR3.
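Just to make the counting concrete, a little Python sketch (my own toy example, not from any datasheet):

# With a fixed burst equal to the prefetch, any longer transfer
# is just a stream of back-to-back read commands.
def reads_needed(transfer_beats, prefetch):
    # each read command returns 'prefetch' beats of data
    assert transfer_beats % prefetch == 0, "pad up to a multiple of the prefetch"
    return transfer_beats // prefetch

print(reads_needed(12, 4))  # 3 commands (the burst-of-12 case from earlier)
print(reads_needed(8, 4))   # 2 commands, replacing a programmed burst of 8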
Cheers
Cor
Wonder if they didn't mean 512MB of XDR memory per CELL chip
I see about 10 memory chips per Cell on the left. So I think you are right there. After all, even an old geezer like me has 1 GByte in his one-year-old computer (AMD64, just had to do that, 3jack).
Isn't that board a bit oversized for a blade?
Cheers
Cor
Test of fast DDR2 modules (chips from Micron):
Last page:
http://www.legitreviews.com/article.php?aid=204&pid=6
snip:
Conclusions:
Nathan Kirsch's Thoughts:
Low latency DDR2 memory has been something enthusiasts, gamers, and overclockers have been waiting for since DDR2 first came out without luck. Luckily enthusiast memory giant Corsair stepped up to the plate and hit a home run with their PC-5400UL memory modules. These modules turned out to be dual purpose in our opinion as they can run tight timings (CL3) up to around 750MHz and then run all the way up to a 1GHz at CL5. For the first time ever we are seeing numerous test samples hit over 1GHz and do it with stability! If you are looking to plug some modules in and run tight timings these will work perfectly. If you are that die hard overclocker looking to hit over 1000MHz on your memory look no further!
In terms of raw performance the modules don't shine like they should till the front side bus is raised. When keeping the FSB at 266MHz only marginal bandwidth gains were noted, but when the FSB was raised above 300MHz (QDR 1200MHz) the scores vastly improved. With new chipsets like the NVIDIA nForce 4 SLI Intel Edition and the Intel i955X Express now on the market Corsair has selected the perfect time to launch their 5400UL modules. I also find it ironic that these brand new modules already push the new chipsets to their limits in terms of memory bandwidth. We were able to hit 1066MHz with some stability, but overclockers like "The Stilt" have already reached 1170MHz on volt modded motherboards with Corsair's 5400UL memory.
At the 2005 Consumer Electronics Show (CES) memory companies were talking about hitting 700MHz or 800MHz in their test labs and how DDR2 sales are slowly picking up. Like many others in the industry DDR2 didn't look that good to us till Corsair brought out their PC2-5400UL part. If you were waiting on upgrading to a DDR2 platform now might not be a bad time to take that plunge. Intel and nVIDIA have released their new dual core chipsets, PCI-Express video cards are out in mass, and DDR2 memory has finally passed up DDR1 in terms of performance. What are you waiting for? Jump on the DDR2 bandwagon! I'm sure our AMD fans will migrate over later when those next generation processors and boards come out.
A one gig kit of PC2-5400UL currently runs $240 shipped to your door, which is a heck of a deal for memory that can overclock over 1GHz if you are lucky and have the right hardware! Right now Corsair only offers the modules in 512MB capacities, but most people run 2 x 512MB anyway. If you are looking for some DDR2 memory we have found that Corsair's PC2-5400UL is the best that we have seen to date. Add this one to the top of your memory shopping list!
Those latencies look good. Athlon-64s would love them imo.
Another GHz-er ;(
Cheers
Cor
Yeah those cards could burn the power traces off the mobo, LOL
I am not trying to say that the speeds are different from what you say, just that they should be more careful with the units in the specs, i.e. write MHz when it is MHz (clock) and write Mbps when it is Mbps (data rate per pin).
Anyway, judging from those two cards clocked at 5x0 MHz, I think that the 500+ MHz (clock speed, I made sure of that) which I quoted is quite fast.
Are these the fastest graphics cards on the market?
Cheers
Cor
AT Suit update from msaba on Yahoo...
Hey Will,
should that not read Hynix suit instead of AT suit?
confused,
Cor
Why would anyone assume that GDDR3 is given as clock speed?
Memory
256MB XDR Main RAM @3.2GHz
256MB GDDR3 VRAM @700MHz
Because in your example it is shown in MHz which is the correct unit for clock speed and not the correct unit for data rate, which should be in Mbps.
So they may be making editing mistakes, but are not making it obvious to the reader which is which.
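The two conventions side by side (a quick Python sketch; the only assumption is the usual doubling for DDR-type parts):

# For double-data-rate memory, data moves on both clock edges,
# so data rate per pin (Mbps) = 2 x clock (MHz).
def mbps_from_clock(clock_mhz, edges_per_clock=2):
    return clock_mhz * edges_per_clock

print(mbps_from_clock(700))  # 700 MHz clock -> 1400 Mbps per pin
print(mbps_from_clock(350))  # 350 MHz clock ->  700 Mbps per pin

So "GDDR3 VRAM @700MHz" could mean either a 1.4 Gbps part or a 700 Mbps part, and the spec as written does not tell you which.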
Cheers
Cor
Why would 500 MHz clock be considered in the high range?
Hi Elixe
I said 500+ range. This is clock, not data rate. As we have discussed, Cell and Xbox 360 have a clock of 350 MHz for GDDR3; this is as fast as is used now, I think.
Are you implying that GDDR3-1.6GHz data rate is in the same price range as 1.0GHz data rate?
(2-2.5 times DDR2).
I don't know that, Elixe. My informant was very reluctant to give details, in fact at first did not want to give ANY, but I persuaded him by a Q-and-A play to choose one range of the few which I presented. For competitive reasons they are not publishing price lists of this stuff. I guess if there are only two customers (ATI and NVidia), they have to be careful.
Be assured that I am always searching for more info on such issues and sharing it when I get it. If you can get an actual quote out of one of those who make it, that would be great.
(I have even asked an Israeli woman who keeps bombarding me with requests for chip quotes through chipstocks.net to ask for a quote for me, but she did not do it:(
Cheers
Cor
So we don't know the premium.
Hi Cal and others
I have just today obtained a pricing premium for GDDR3 at the high range (500 MHz clock+) from a manufacturer of whom I will not release the name.
This type of memory sells for 2-2.5 times DDR2 (which is approx. $5.90 street price at the moment per 512 Mbit chip).
That translates into approx. $12.00 to $15.00 per chip. The reported price for XDR is still higher ($25-30), but we don't know at what quantity that is. (the GDDR figure is quantity pricing)
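(Checking my own arithmetic in Python; the $5.90 street price and the 2-2.5x multiplier are the figures quoted above:)

ddr2_street = 5.90                        # per 512 Mbit chip, approx.
low, high = 2.0 * ddr2_street, 2.5 * ddr2_street
print(low, high)                          # 11.8 14.75, i.e. roughly $12 to $15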
Cheers
Cor
Hi Paul: about that darn granularity.
I feel we should be using a different word for it, representing the minimum piece of data which can be extracted in one read access from a given memory system.
The term granularity is already used for the minimum capacity increase for a given memory choice.
see:
http://www.electronicdesign.com/Articles/Index.cfm?AD=1&ArticleID=10093
Memory Granularity
Dave Bursky
ED Online ID #10093
April 14, 2005
Memory-subsystem and module granularity (the minimum size increase created by adding another row of memory chips or a memory DIMM to a system) is a key factor when selecting the memory during the design process. Depending on the application, the memory chip's data-bus width can significantly affect cost and expandability.
In typical computer memory systems, DRAMs are used in 4-, 8-, and 16-bit-wide configurations. When aggregated to provide a 64-bit datapath, 16 DRAMs would be needed if a x4 organization is used. Similarly, just eight DRAMs are needed when using x8 devices, and only four for x16 chips (not including parity or error checking and correction considerations).
Based on today's mainstream density of 512 Mbits per chip, that translates to a memory increment of 1 Gbyte when using x4 DRAMs, 512 Mbytes when using x8 devices, and 256 Mbytes for x16 devices. Of course, you can double those numbers if 1-Gbit DRAMs take the place of 512-Mbit chips. On those DIMMs with only eight or four DRAMs, a second row of chips is often mounted on the reverse side of the module, providing a second rank and doubling the DIMM's capacity.
For most server and high-end workstation applications, the larger the increment, the better. Consequently, x4 or x8 memory chips deliver the best density options on commodity DIMMs or custom memory modules. For the high-end PCs that don't require a maximum storage capacity increment, modules based on x8 DRAMs offer a more economical alternative. For commodity PCs and low-end office computers, x8 and x16 organizations make the most economic sense: fewer memory chips means a lower module cost and, therefore, a modest upgrade cost to add 256 or 512 Mbytes to a system.
Deciding on the correct granularity also involves the system's expansion limit. Due to bus loading and board space, most PCs typically limit their memory expansion capability to between two and four DIMM sockets. That, in turn, limits how much memory can be installed.
In contrast, high-end workstations and servers often include several memory banks, each containing from four to eight registered DIMMs. But, due to bus loading, even the approach used in most servers will restrict the amount of memory. To get around that problem, server designs are moving from registered DIMMs to the new fully buffered DIMM architecture developed by Intel and the Memory Implementers Forum (www.memforum.org).
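To make the article's arithmetic concrete, a small Python sketch (chip density and organizations as in the article; a 64-bit datapath assumed):

# Minimum capacity increment of one rank of DRAMs on a 64-bit bus.
def rank_increment_mbytes(chip_mbits, chip_width, bus_width=64):
    chips_per_rank = bus_width // chip_width
    return chips_per_rank * chip_mbits // 8   # Mbits -> MBytes

for width in (4, 8, 16):
    print(f"x{width}: {64 // width} chips, {rank_increment_mbytes(512, width)} MBytes per rank")
# x4: 16 chips, 1024 MBytes; x8: 8 chips, 512 MBytes; x16: 4 chips, 256 MBytes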
I also think, as stated in another post, that programmable burst length will go away as the controller is quite capable of giving streaming commands for the next burst (equal to the prefetch), especially now that minimum bursts are 4+.
Cheers
Cor
(enough grains for today)
Hey Elixe
I am sticking to my 8 bytes (i.e. 64 bits) granularity on this.
Don't understand the problem.
We need a 64 bit wide bus to get a total bandwidth of 800MBps.
(your spec)
Granularity by definition is the smallest unit of data which can be extracted from the memory, so to get that I have to use a burst of 1. Prefetch is also 1 for SDRAM (i.e. it does not exist).
So at a burst of 1 I get 64 bits, which is the granularity. Longer bursts cause one to get twice, four times etc the granularity.
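Spelling my arithmetic out in Python (using your spec numbers from above):

core_mhz = 100                      # core speed, your spec
bw_mbytes = 800                     # total bandwidth in MBytes/s, your spec
bus_bytes = bw_mbytes / core_mhz    # bytes delivered per core cycle
print(bus_bytes)                    # 8.0 -> an 8-byte (64-bit) bus

# granularity scales with the burst length:
for burst in (1, 2, 4, 8):
    print(burst, int(burst * bus_bytes))   # 8, 16, 32, 64 bytes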
Btw, quite another point: now that the prefetch is often 4 bits and up on various memory chips, programmed bursts lose their usefulness. One simply has to issue subsequent reads (one NOP inserted) to keep streaming the data. This does not affect the bandwidth, as on DDRx the read is asserted on a pin and on XDR it goes on the command channel.
So one of the four main infringement aspects of DDRx goes away. Dual edged clocking was not accepted in the Hynix case (correctly, because it is a different type of clocking than that in the patents). The main thing is still the latency register.
Cheers
Cor
Samsung doesn’t make GDDR3 with 350 MHz clock?
I never said that, Elixe. Even if they list from 500 MHz up (I did not check this time), I can well imagine they will run OK at 350 MHz. I have the feeling though that much of the advantage of GDDR3 is getting lost if you are running at speeds like DDR2.
But maybe Samsung is selling their rejects for this application
Obviously for 22.4 GBps only 700Mbps data rate was required.
Assuming that bandwidth is a magic number they needed and nothing more... Maybe they had a well working design for 38.4 GBps at 256 bits and just decided to derate it for Xbox. (el cheapo solution now and upward mobility later)
Depending on the application the burst length can be programmed to 1, 2, 4 or 8.
In my application the core speed is 100 MHz and the Bandwidth is 800 MBps.
Hmm, Saturday night trick questions? I would think one would need 8 times 8 bits, i.e. 64 bits (for bursts of 1).
Anyway, at the end of the day maybe they like to have that upside bandwidth room for the Xbox ??
Cheers
Cor
Another reference to 1.4 Gbps.
http://www.theinquirer.net/?article=23392
G70 Card named after 3Dmark 05 figure
Hardware Roundup 7800 is the keynumber
By Désiré Athow: Friday 20 May 2005, 16:46
WE HAD a hard time trying to decrypt those news from hardspell, but it seems that the G70 might perform. This is the closest I can get to the real thing. Apparently the Geforce 7800 will come with 512MB DDR3 memory running at 1.4GHz and have achieved 7800 on 3DMark 05. The 6800 Ultra scores around 4400 marks normally so effectively, a G70 should beat a 6800 Ultra SLI. Chinese website.
...
Struck me that this source also talks about 1.4 Gbps. But in this case 256 bits wide?
Cheers
Cor
Nice pictures, Cal, did not know you had a knack for reading Japanese
It's more like what I would have expected, but I am a bit surprised to see unannounced x16 GDDR parts on the Xenon; maybe MSFT knows more than we do on the chip schedules.
I for one cannot see the point of underclocking GDDR. Would be nice to get some confirmation from Sony or Msft.
Cheers
Cor
Getting a little confused about Xbox 360 and PS3.
Is Anand's take for the Xbox or for the PS3? PS3 only has 256 MB of GDDR3, so the arguments change slightly. Only 4 chips needed. So do we have a 256 bit bus for Xbox and a 128 bit bus for PS3?
Cheers
Cor
Also have an e-mail into Anand.
Great, save me the trouble; I was also going to do that.
Cheers
cor
Elixe hi I know that cannot be done.
But I thought we were talking about the PS3 which only has 256 MB GDDR3.
Cheers
Cor
(have to shoot off for a bbq:)
Probably to minimize risk. They've been there, done that with GDDR3.
Cal, given that they probably had to design it maybe 18 months ago, that is probably the explanation.
I also believe that Anand made a mistake. Who would underclock GDDR3 to that extent?
The confusion is caused by the fact that everybody is used to DDR400 being 200 MHz clock:(
Cheers
Cor
On a 128 bit bus you’d need DDR2-1.4 Gbps.
Oops, I was ahead of the times. Guess that 700 MHz DDR2 does not [yet] exist. Well, maybe by the time PS3 ships?
Cheers
Cor
Higher speeds aren’t the issue. Nvidia could have used four 512 Mb at 1.4 Gbps. GDDR3-700 Mbps is really underclocked. The design can easily increase bandwidth with existing GDDR3
This is an advantage, of course. Without changing anything other than the chips they could double the bandwidth. (might get into phase problems w/o FlexPhase though)
The problem with DDR2 is it’s only 16 bits wide. To get 22.4 GBps bandwidth you need 16 DDR2-700. Guess what, the bus width is still 256 bits but you have twice as many chips.
But wouldn't DDR2 support two chips "behind each other" ?
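Elixe's chip counts in numbers (a quick sketch; the x16 DDR2 and x32 GDDR3 organizations are as stated above):

# 22.4 GBytes/s at 700 Mbps per pin needs a 256-bit bus either way:
bus_bits = round(22.4 * 8 / 0.7)
print(bus_bits)             # 256
print(bus_bits // 16)       # x16 DDR2 parts  -> 16 chips
print(bus_bits // 32)       # x32 GDDR3 parts ->  8 chips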
Cheers
Cor
Deep luck mode? Hahaha. Perfect for Rambus. Thanks : )
I thought that was the typo of the week, almost Freudian
Cheers
Cor
Samsung specifies GDDR3 burst lengths of 4 and 8.
How on earth could access granularity = bandwidth / core speed for both?
Get your drift, Elixe. And I use your formula because it's directly in bits all the time. But we are talking the minimum burst all the time, I assume, because that minimum burst is caused by the inability of the core to keep up with the interface.
I think it’s pretty clear now that the bus is 256 bits wide and they’re spec’ing data rate not clock..
I am still puzzled by this. Why do you think they would have chosen such an uncomfortably wide 256 bit bus and underclocked the dram? Hell, they could have taken DDR2 at that speed and have a prefetch of 2....
Also by the time PS3 comes to market higher speed ranges are probably available than those here now. (maybe also 1 Gbit parts)
Cheers
Cor
Well, I did look at the datasheets before posting and my impression was indeed that when they are talking about 700 MHz, they mean a clock speed of 700 MHz, which simply means 1.4 Gbps (per data bit).
The confusion also seems to have taken Anandtech by surprise, because they also talk about 256 bits wide. I never saw it like that; I see 128 bits wide (by Cal's formula and by Elixe's formula, LOL).
Btw not all GDDR chips are 32 bits wide, 16 bits wide also exists.
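The two readings in numbers (a quick Python sketch; pick your interpretation of "700 MHz"):

# If 700 MHz is the clock, the per-pin rate is 1.4 Gbps:
print(round(22.4 * 8 / 1.4))   # 128 -> a 128-bit bus
# If 700 is already the data rate in Mbps:
print(round(22.4 * 8 / 0.7))   # 256 -> a 256-bit bus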
Cheers
Cor
PS Can't somebody steal a box and look inside?
PPS Cal, why do you think NVidia chose GDDR over XDR here?
Thanks for the link back, Elixe.
So the double precision stuff is not "hard wired", which means one has to build it from several SP instructions, which is painful and will put the performance rather low compared with its peers (simplified synopsis:)
For now I would be happy to see that development station out in the wild
Cheers
Cor
Sorry Elixe I missed that fp discussion then.
Keep in mind the CELL development work stations probably don’t need full IEEE compliance.
This is exactly what I was driving at. The Cell dev. ws is a relatively specialized market. I had hoped for a more general workstation market with the backing of IBM. General workstations would need serious fp stuff (and I do not pretend to understand the ins and outs of that compliance the way RJ does).
Cheers
Cor
Something about Cell processors outside graphics.
This may interest some here. I ran into the following text in:
http://www.research.ibm.com/cell/
Single precision floating point computation is geared for throughput of media and 3D graphics objects. In this vein, the decision to support only a subset of IEEE floating point arithmetic and sacrifice full IEEE compliance was driven by the target applications. Thus, multiple rounding modes and IEEE-compliant exceptions are typically unimportant for these workloads, and are not supported. This design decision is based on the real-time nature of game workloads and other media applications: most often, saturation is mathematically the right solution. Also, occasional small display glitches caused by saturation in a display frame are tolerable. On the other hand, incomplete rendering of a display frame, missing objects or tearing video due to long exception handling is objectionable.
and asked at TMF at:
http://boards.fool.com/Message.asp?mid=22502649
Can somebody here place this in context? What does this mean? Does it mean the SPEs are useless for workstation use, because of limitations which do not matter in gaming?
A reply from RMHJ (a programmer with much processor knowledge) was:(in part)
Pretty much.
Implementing the full IEEE FP standard is pretty onerous, and really isn't ideal for real-time graphics. The standard includes multiple formats (single and double precisions, and optional "extended" precisions), a variety of "special quantities", including NaNs (Not-a-Numbers), things like plus and minus infinities, "signaling" and quiet NaNs, denormalized numbers, a variety of rounding modes (towards +- infinity, towards 0, ...). Even on servers/workstations intended for serious FP work, many of these are emulated (e.g., denorms on Alpha).
Most of these are geared towards getting useful and/or maximally accurate answers in the face of accumulating error, and estimating/limiting those errors. ("Using floating point numbers is a lot like moving piles of sand. Every time you do something, you pick up a little dirt and lose a little sand.")
The considerations for video are fairly different. Double precision (with 52-bit mantissas) is significant overkill for display, and requires very large summing trees for fast performance. IEEE single-precision has only a 24-bit mantissa, and is rarely adequate for workstation apps, but is generally adequate for video.
Video usually requires "saturating" arithmetic, i.e., if you add or multiply two things and the result overflows (underflows), the result returned is the largest (smallest) representable value. In IEEE you would get a NaN (or possibly a denorm), and possibly an exception. If you look at the SSE/SSE2/SSE3 instructions for x86, many of these instructions perform saturating arithmetic on 8/16/32 bit quantities. E.g., once you've gone to full-white or full-black on the screen, any brighter or any darker are useless.
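A toy illustration of the difference (my own Python example with 8-bit pixel values, not from the Cell docs):

import math

def sat_add_u8(a, b):
    # saturating add: clamp to the representable range instead of wrapping
    return min(a + b, 255)

print(sat_add_u8(200, 100))   # 255: full white stays full white
print((200 + 100) % 256)      # 44: wrap-around would be a visible glitch

# IEEE float overflow runs off to infinity instead of clamping:
big = 1e308
print(big * 10)               # inf
print(math.isinf(big * 10))   # True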
Maybe for those who have access better to read the whole thread. The conclusion in a later post from RMHJ was:
I don't know that the SPE's don't support double-precision arithmetic, but from the excerpt clipped, that would be the way to bet.
IMO, there's little/no chance for Cell to displace the x86 architecture. (And this has been my opinion for some time.)
=============================================================
I think my initial shock reaction on reading the IBM piece may have been correct in showing that this processor architecture is not very suitable for scientific calculations, i.e. workstations.
Same would then hold for supercomputers.
Of course, as one remarked on that thread, it would be possible to create different Cell processors with different instruction sets for different purposes.
Cheers
Cor
The Cell workstations?
http://www.theregister.co.uk/2004/05/12/sony_ibm_cell/
Sony, IBM to offer Cell workstations for Xmas
By Tony Smith
Published Wednesday 12th May 2004 09:57 GMT
Sony and IBM today said they will ship workstations based on the pair's upcoming Cell parallel processing chip in December.
The machines will be geared toward digital content creation, so we're essentially talking PlayStation 3 software development machines here.
The companies' announcement doesn't mention the next-generation console, of course, but since the workstations are being produced with the co-operation of Sony's Computer Entertainment division, it's hard to imagine that Sony and IBM have much else in mind.
Well, beyond the movie business, that is. Both are also hoping to push the workstations toward film-makers and special effects houses, with their increasing demand for greater and greater computational power to not only render scenes that are more visually complex but to do so as quickly as possible.
Cell has received much criticism - particularly from rival chip makers - for forcing a whole new programming model at developers. So it's crucial IBM and Sony get development kit to programmers as early as they can, either to give coders time to get their heads around the new architecture, or to demonstrate that writing games for Cell isn't going to be as tricky as has been made out.
Cell was announced in March 2001 at the start of what was described as a five-year project, putting its release in the 2006 timeframe. If Sony, IBM and fellow Cell developer Toshiba can get chips out by the end of 2004 - almost certainly only in sample quantities - it would seem that their project is progressing rather better than anticipated.
Cheers
Cor
OOPS, that was an old story, sorry.
(picked it up on Yahoo)
Well they are not here yet, but maybe soon
Yes, it would be nice to see one of those Cell workstations in the wild.
Here is the link to the Anandtech story about it:
http://www.anandtech.com/tradeshows/showdoc.aspx?i=2417&p=1
Interesting point:
Contrary to the rumors we've heard, it looks like the PS3 will implement the Cell processor that we all were introduced to a few months ago - featuring a single PPE and 8 SPEs. There is one caveat however; the Cell processor in the PS3 will only feature 7 working SPEs, one will remain disabled in order to improve yields.
(yield problems?)
I wonder if I could use one of those products as my media center in the living room? No need for me to play games. But it would need a DVD burner maybe to replace the VCR.... I wonder what is best, one of these or a stripped down PC?
Cheers
Cor
I think PS3 has a graphics edge because PS3 does not use unified memory like XBOX 360.
OK, but otoh there is that 10 MB of 256 GB/s eDRAM in the ATI GPU. Sort of like a cache memory, I guess.
I agree that performance will be more than enough in both cases so now it's up to the software.
Seems to me the Xbox software will be easier to develop because it's simply 6 threads of PowerPC stuff rather than 2 threads of PowerPC plus the 8 SPEs in PS3. And then of course the GPU stuff. Do the game developers have to know much about the GPU stuff, or are they simply sending macro-type commands to the GPU, like give me a polygon with these parameters ...
Price and content will be decisive.
Agreed. Are prices still totally unknown for both?
cheers
Cor
Elixe, have you seen a side by side graphics comparison for the PS3 and the Xbox 360?
As the details are now coming out, can we make a comparison there?
You probably have far more links than I do. I found some stuff here (and at other places):
http://www.tomshardware.com/hardnews/20050512_232450.html
Xbox 360:
ATI delivers the graphic system with a 500 MHz, R520-based, graphics processor that integrates 10 Mbyte of DRAM and is combined with 512 MByte of GDDR3-700 memory. The chip can draw up to 500 million polygons per second and features a pixel fill rate of 16 G samples per second.
Do we have the comparable numbers for PS3?
There is also considerable confusion about the teraflops, the ones of the graphics engines and the ones in the processor. I read for PS3 for example:
(also Tom's)
Nvidia delivers the graphics engine, codenamed "RSX". The chip is based on a 550 MHz GPU that offers 1.8 TFlops performance. RSX will support full HD support at 1080p in two channels and integrate 256 MByte GDDR3 memory at 700 MHz.
Somewhere else I read 1.8 Tflops for the GPU and 0.2 Tflops for the CPU, which seems to lead to a meaningless 2.0 Tflops total.
Cheers
Cor
Guess nVidia didn't want to modify their GPU for XDR.
Elixe hi,
Could it be that the internal structure of the GPU is such, that it favors a parallel arrival of the data?
The GPU clock seems to be at a similar rate as the GDDR rate...
Cheers
Cor
Threejack, on the memory chips;
If PS3 will use one Cell chip with 256MB XDR, where is the 512MB XDR going to be used?
If you use four 512 Mbit XDR chips you get 256 Mbytes of memory; that is probably what will happen.
And on the one Cell chip (not four), it was always clear there would be one. The Cell chip has a cost of about $100 (at 90 nm), so four would cost $400. There is a lot of other hardware in that box; it would make it prohibitively expensive.
I can see Sony loss-leadering to some extent on the box if they can profitably sell games for it, but not to the extent of hundreds of dollars per box.
jmo
Cheers
Cor
PS3 in stores by June?
They must have the year wrong, expect this story to disappear:
http://www.bloomberg.com/apps/news?pid=10000101&sid=abUbYgyfhgBg&refer=japan
Sony Plans to Begin Selling PlayStation 3 by June (Update1)
May 16 (Bloomberg) -- Sony Corp., looking to protect its No. 1 position in video-game machines from Microsoft Corp.'s new Xbox 360, disclosed details of the new PlayStation 3 and said the machine will be available in stores by June.
Executives including Ken Kutaragi, president of Sony Computer Entertainment, demonstrated the graphics capabilities of the new machine today in Los Angeles, showing explosions, falling leaves and a simulation of a bathtub full of plastic ducks.
Sony is revealing details of the new game console days after Microsoft unveiled its Xbox 360. Microsoft hopes the Xbox 360 will topple Sony from the No. 1 spot in game machines, a $6.16 billion market. Sony's PlayStation 2 outsells Microsoft four to one globally and has a 57 percent share of the U.S. market.
Customers with older PlayStation games will be able to use them on the new machine, Sony said. The PlayStation 3 will have wireless-fidelity, or Wi-Fi, access for Internet connections.
The PlayStation 3 will be compatible with the Blu-ray format for next-generation digital video discs, Sony said. The device will include ports that let users connect digital cameras and music players and will let users manipulate photographs.
The new machine will support up to seven controllers, connected using Bluetooth wireless technology. Owners will also be able to connect two televisions to the device.
The machine will include the "Cell" chip developed by Sony, International Business Machines Corp. and Toshiba Corp. and a graphics chip made by Nvidia Inc.
The 2005 launch of the Xbox 360, coming before the new PlayStation, could cut Sony's 57 percent share of the North American market, Piper Jaffray & Co. analyst Anthony Gikas estimates. Microsoft has spent $12 billion and incurred losses of more than $2.4 billion developing and selling Xbox.
Kutaragi spoke at a press conference in Los Angeles ahead of this week's Electronic Entertainment Expo video-game conference.
=============================================================
In edit: here is a Reuters link saying 2006 again:
http://www.reuters.com/newsArticle.jhtml?type=technologyNews&storyID=8508552
Sony to launch PlayStation 3 in 2006
Mon May 16, 2005 08:24 PM ET
LOS ANGELES (Reuters) - Sony Corp. will launch its next video game console, the PlayStation 3, in 2006, with the promise of high-definition graphics and broadband connectivity, the company said on Monday.
The electronics and entertainment conglomerate said the new game console will feature a next-generation DVD player that supports high-capacity Blu-ray technology and will be powered by the "Cell" chip, which pledges to be significantly more powerful than Intel Corp.'s Pentium 4.
Sony boasted that the PS3 would have twice the processing speed of the Xbox 360, the new console from Microsoft Corp. that will be released later this year. Sony developed the "Cell" microprocessor with International Business Machines Corp. and Toshiba Corp. ...
...
=============================================================
Cheers
Cor
Note that the nVidia GDDR3 bandwidth is almost the same as the XDR bandwidth. (22.4GBps vs 25.6 GBps).
Ah, but what is the granularity?
So yes I failed to mention those other 256 MB GDDR in my post. I guess you estimated the total 512 MB including that graphics memory.
Guess nVidia didn't want to modify their GPU for XDR.
Will we find out why? Too expensive? ... not available soon enough to validate? What?
Also the XDR is running at the slowest XDR speed of 3.2 Gb/s.
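Both figures fall straight out of the pin counts (a quick sketch; the 64-bit XDR bus is my inference from the 25.6 number):

# GDDR3 side: 256 data pins at 700 Mbps each
print(256 * 0.7 / 8)   # ~22.4 GBytes/s
# XDR side: 64 data pins at 3.2 Gbps each (the slowest XDR grade)
print(64 * 3.2 / 8)    # ~25.6 GBytes/s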
The Rambus PR which says nothing new is here:
http://rambus.com/news/newsroom/pressrelease.cfm?id=189
Cheers
Cor
Only 256 MB XDR in PS3?
That is a disappointment. (bolding mine)
see:
http://www.gamespot.com/news/2005/05/16/news_6124681.html
"PlayStation 3 announced for 2006
[UPDATE 2] Sony confirms the name and release window of its next-generation console in Los Angeles, will use Blu-ray disc format.
LOS ANGELES--Today saw the second of the big three console makers announce their next-generation platform. At its pre-E3 press conference, Sony Computer Entertainment gave the world its first look at the PlayStation 3, as it now is officially called.
...
Sony also laid out the technical specs of the device. The PlayStation 3 will feature the much-vaunted Cell processor, which will run at 3.2 GHz and feature 2.18 teraflops of performance. It will sport 256MB XDR main RAM at 3.2 GHz and have 256MB of GDDR VRAM at 700 MHz.
The PlayStation 3 will also sport some hefty multimedia features, such as video chat, internet access, digital photo viewing, digital audio and video. Sony Computer Entertainment head Ken Kutaragi introduced it as a "Super computer for computer entertainment."
Cheers
Cor
For Elixe: IBM Link
(sounds like für Elise)
http://www-128.ibm.com/developerworks/power/library/pa-nl9-tip.html
Power Architecture Community Newsletter, 16 May 2005: PowerPC processor tips
The IBM® Microelectronics Technical Library has added two new content categories -- one for Blue Gene®/L, and one for the Cell processors.
Porting applications to the IBM eServer Blue Gene/L system solution
....
A 4.8GHz fully pipelined embedded SRAM in the streaming processor of a Cell processor
This ISSCC 2005 session article on static memory, A 4.8GHz Fully Pipelined Embedded SRAM in the Streaming Processor of a Cell Processor, describes the embedded SRAM in the streaming processor of a Cell chip. This three-page article (one page of description and two of diagrams) details how the synergistic processing element's local store unit handles decoding. (The synergistic processing element is the processor designed to accelerate media and streaming workloads. The local store unit is local memory comprised of several macros that perform load/stores, transactions for DMA, and instruction fetches into an instruction-line buffer.)
The article also addresses how area, power, and yield are as important as performance since the LS occupies one-third of the SPE floor plan (488 Kb, PDF format, presented February 9, 2005).
Thought you might be interested, if you hadn't seen it yet yourself.
Here is the last paper:
http://www-306.ibm.com/chips/techlib/techlib.nsf/techdocs/372E2BE9229AC34987256FC00074A13A/$file/ISS...
Cheers
Cor
Hey Threejack, on XDR in cellphones:
Given Mr. Gates remarks this week about music and email delivery to cell phones, and the announced cross-licensing between Toshiba and Microsoft, what do you think the chances XDR finds its way into these devices in a few years?
As this application does not require extraordinary memory bandwidth, there could be three reasons left to use XDR:
1) small granularity of memory size, one memory chip can give you 64 MB of memory.
2) low power
3) low price
If 2 or 3 happened versus other solutions, preferably both, I would give it a chance. Granularity they seem to cope with OK in up-market phones now, even with memories smaller than 64 MB.
In the smaller phones they have tended to use MCPs (Multi Chip Packages) with flash memory and sram stacked, later with flash, sram and dram stacked. There was a preference for flash and sram stacking as the interface is very similar and they can share many connections, such as address lines. That advantage would not exist for XDR as the control bus would transport the addresses.
I would say the likelihood of XDR getting into cellphones is even lower than it getting into mainstream PCs.
Cheers
Cor