Tim, I love your posts regarding your extensive experience in tech, so allow me to give you some of mine.
I've been working on memory controllers for the better part of my career, most of which has been at Intel. I was part of the design verification teams that made FBD, then later "FBD2" and "son of FBD2" a reality. In all cases, the memory controller was moved off the CPU and onto the memory module itself, which is the same concept that Hybrid Memory Cube is implementing.
There are differences, to be sure, but the basic idea is the same. You rely on mezzanine bridges between the CPU and the memory itself. This allows you to tackle the physics problem and drastically improve memory bandwidth by putting the controller right next to the memory devices themselves. Then you can use high-speed serial interfaces to connect between the CPU and the memory controller. (Rambus didn't work because, among many other things, the memory devices themselves had to drive the interface signals at the high speeds, and that got expensive.)
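To put rough numbers on the pin-count argument, here's a toy calculation. The figures are mine, for illustration only, not actual FBD or DDR specs: a wide parallel multi-drop bus burns lots of pins at a modest per-pin rate, while narrow serial lanes driven by a buffer next to the DRAM can run each pin much faster.

```python
# Toy bandwidth arithmetic (illustrative numbers, not real FBD/DDR specs).
# A wide parallel multi-drop bus is pin-hungry and caps out at lower clocks;
# narrow high-speed serial lanes trade pin count for per-pin data rate.

def bandwidth_gbps(pins, gbits_per_pin):
    """Aggregate bandwidth in Gb/s = pin count * per-pin data rate."""
    return pins * gbits_per_pin

# Hypothetical parallel DDR-style bus: 64 data pins at 1.6 Gb/s per pin
parallel = bandwidth_gbps(64, 1.6)

# Hypothetical serial link: 16 lanes at 6.4 Gb/s per lane --
# the same aggregate bandwidth with a quarter of the data pins
serial = bandwidth_gbps(16, 6.4)

print(parallel, serial)
```

Same bandwidth, a quarter of the pins; that's the whole appeal of putting the controller next to the DRAM and talking to the CPU over serial.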
The really interesting thing is that FBD was kind of ahead of its time. I talked with the guy who co-invented the interface, and he acknowledged that Intel underestimated how far JEDEC could extend the lifespan of DDR SDRAM, which is why FBD never really caught on. But that extension comes with a LOT of complexities like elaborate interface training, which IMO is more esoteric and uglier than the training of high-speed serial interfaces like PCIe or QuickPath. (We're up to DDR4 now, and boy, is there a lot we need to do just to squeeze the last hundred MHz out of these babies.)
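To give a flavor of what "interface training" means, here's a stripped-down sketch of one step: sweep a delay tap across the data eye, record which taps sample correctly, and center on the passing window. Real DDR4 training is far more involved (per-bit, per-rank, across voltage and temperature corners); this is just the core idea, with made-up numbers.

```python
# Toy sketch of one piece of interface training: sweep a delay tap across
# the data eye, record pass/fail at each tap, then center on the widest
# contiguous passing window ("eye centering").

def find_eye_center(pass_fail):
    """Given a list of booleans (True = tap sampled data correctly),
    return the center tap of the widest contiguous passing window,
    or None if no tap passed."""
    best_start, best_len = 0, 0
    start = None
    for i, ok in enumerate(pass_fail + [False]):  # sentinel flushes last run
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            if i - start > best_len:
                best_start, best_len = start, i - start
            start = None
    if best_len == 0:
        return None  # no passing window: training failed
    return best_start + best_len // 2

# Hypothetical sweep of 16 delay taps: the eye is open on taps 5..11
sweep = [False] * 5 + [True] * 7 + [False] * 4
print(find_eye_center(sweep))  # 8
```

Multiply that by every data bit, every rank, and every speed bin, and you see why squeezing out the last hundred MHz gets ugly.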
Intel of course wasn't the only company working on mezzanine memory bridges. There was also a company in Ventura named Inphi, and they were working on LRDIMM technologies. The idea was to multiplex multiple DRAM devices stacked on a DIMM and present them to memory controllers that did not have native support for these extra devices (or ranks, as we say). This way, you could increase memory capacity by doubling the number of devices rather than going to higher-density devices, which were often more than twice as expensive and in very short supply.
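The rank-multiplication trick is easy to sketch. The buffer takes the controller's logical rank select plus a borrowed high-order address bit and fans it out to twice as many physical ranks. The names and encoding below are my own illustration, not Inphi's actual scheme:

```python
# Toy model of rank multiplication: the buffer combines the controller's
# logical rank select with a borrowed high-order address bit to select
# among twice as many physical ranks behind the buffer.

def physical_rank(logical_rank, high_addr_bit, multiplier=2):
    """Map a logical rank + one address bit to one of
    (logical_ranks * multiplier) physical ranks."""
    return logical_rank * multiplier + high_addr_bit

# A controller that natively supports 4 ranks now reaches 8:
reachable = {physical_rank(lr, bit) for lr in range(4) for bit in (0, 1)}
print(sorted(reachable))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

From the controller's side nothing changed: it still thinks it's driving four ranks, while the DIMM quietly doubled its device count.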
I think with the HMC concept, Intel and Micron are finally uniting the two goals of increased memory bandwidth and increased memory capacity. That's the good news. The bad news, however, is that this technology probably won't be useful in anything other than high-performance computing and mission-critical servers that require a few terabytes of DRAM per system. I don't see this technology making its way into the consumer market or even in rack-mounted servers because of power and cooling issues.
By the way, I disagree with the article you linked to when it stated that Intel "made the mistake" of not moving the memory controller onto the CPU. It was a mistake, for sure, but I don't think it has anything to do with the need for HMC because Intel found ways around the memory bottleneck before finally integrating the memory controller onto the CPU. (But then again, the article also says that hyperthreading was introduced in 2005. It was actually introduced in 2002 with the 130nm Pentium 4, a.k.a. Northwood. So the author of the article seems to have gotten a few facts wrong.)
Tenchu
Micron's Hybrid Memory Cube looks similar to Intel's old FBD technology, except that Micron combines the on-package memory controller with 3D stacking of the memory chips.
It's kind of funny how the big players in memory, after rejecting high-speed custom memory interfaces like FBD and that pariah Rambus, are now reinventing that concept. I guess they've squeezed all they can out of DDR technology, and now they have to leave the old world of wide parallel multi-drop buses.
Not Invented Here ...
Tenchu
Unkwn, I never liked the heterogeneous core concept. Software guys really have to program specifically for said architectures. Dynamically scheduling lightweight threads to use the little cores in order to free up the big cores doesn't provide much benefit, in my mind.
The idea, at least from this hardware designer's POV, is to save power. But there is little power cost in just scheduling the lightweight threads on big cores. Clock and power-gating ensures that the big cores don't run longer or harder than they need to. They are flexible enough to handle both light loads efficiently and heavy loads effectively.
IMO, the advantage of heterogeneous cores is on the order of milliwatts. That's actually significant in the mobile world, to be sure, but the cost and difficulty of programming for heterogeneous cores isn't worth it.
Tenchu
Ron and Saturn, I see that as recently as last month, Goldman Sucks reiterated their Sell rating on INTC, but they raised their price target to $20.
Where is INTC now? Above $34.
At this point, GS is so far into permabear status that they can't get out without causing a major ripple in the market.
Truth is relative, it seems.
Tenchu
FPG,
Yes on both questions.
Chipguy, nothing you said contradicts what I insinuated.
Merced was going to be the first P7 the same way Pentium Pro was the first P6. Future versions of P7 would then migrate down to the desktop as the software issues were worked out.
Willamette, AFAIK, was going to be the stop-gap measure because it was obvious even then that P7 wouldn't migrate to the desktop as easily as P6 did. But those migration challenges turned out to be much, much greater than anticipated, and Willamette ended up inheriting the Pentium throne.
In any case, I don't think the fact that Merced was targeted at servers and workstations proved that Intel didn't have plans to migrate IA-64 into all product lines.
Tenchu
Elmer, didn't Prince Cuomo drop the lawsuit after Intel agreed to be part of a 450mm wafer consortium based in Albany, NY?
Tenchu
Congratulations, FPG. Neelie the Terrible and Prince Cuomo succeeded in using antitrust law to extort billions out of Intel.
They didn't do a damn thing to open up competition, but hey, you never really cared about that, did you?
Ends justify the means ...