
Re: Tim May post# 135922

Tuesday, August 26, 2014 4:31:31 PM
Tim, I love your posts about your extensive experience in tech, so allow me to share some of mine.

I've been working on memory controllers for the better part of my career, most of it at Intel. I was part of the design verification teams that made FBD, then later "FBD2" and "son of FBD2," a reality. In all cases, the memory controller was moved off the CPU and onto the memory module itself, which is the same basic concept the Hybrid Memory Cube (HMC) implements.

There are differences, to be sure, but the basic idea is the same. You rely on mezzanine bridges between the CPU and the memory itself. This lets you tackle the signal-integrity physics problem and drastically improve memory bandwidth by putting the controller right next to the memory devices themselves. Then you can use high-speed serial interfaces to connect the CPU to that memory controller. (Rambus didn't work because, among many other things, the memory devices themselves had to drive the interface signals at the high speeds, and that got expensive.)
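To give a rough sense of why the serial approach wins, here's a back-of-the-envelope comparison of a parallel DDR2 channel against an FBD channel. These are my own ballpark figures for illustration, not official specs:

```python
# Back-of-the-envelope bandwidth-per-pin comparison (ballpark figures,
# for illustration only -- not official specs).

GB = 1e9

# Parallel DDR2-667 channel: a 64-bit data bus at 667 MT/s, plus a wide
# address/command/control bus (a 240-pin DIMM connector overall).
ddr2_bw = 64 / 8 * 667e6                # ~5.3 GB/s per channel
ddr2_pins = 240

# FBD channel: narrow differential serial links running at 6x the DRAM
# data rate (4.0 Gb/s per lane for DDR2-667 parts). 14 northbound (read)
# lanes, 10 southbound (write) lanes, roughly 69 signal pins total.
fbd_read_bw = 14 * 4.0e9 / 8            # ~7.0 GB/s raw northbound
fbd_write_bw = 10 * 4.0e9 / 8           # ~5.0 GB/s raw southbound
fbd_pins = 69

print(f"DDR2-667: {ddr2_bw / GB:.1f} GB/s over ~{ddr2_pins} pins")
print(f"FBD:      {fbd_read_bw / GB:.1f} read + {fbd_write_bw / GB:.1f} "
      f"write GB/s over ~{fbd_pins} pins")

# The payoff: the pin budget that feeds one parallel channel can feed
# several serial channels, so aggregate bandwidth per socket goes up.
```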

The really interesting thing is that FBD was kind of ahead of its time. I talked with the guy who co-invented the interface, and he acknowledged that Intel underestimated how far JEDEC could extend the lifespan of DDR SDRAM, which is why FBD never really caught on. But that extension comes with a LOT of complexity, like elaborate interface training, which IMO is more esoteric and uglier than the training of high-speed serial interfaces like PCIe or QuickPath. (We're up to DDR4 now, and boy is there a lot we need to do just to squeeze the last hundred MHz out of these babies.)
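To give a flavor of what "interface training" means: the controller sweeps a programmable delay line, finds the window of settings where a known test pattern reads back clean, and parks the delay in the middle for maximum margin. A toy sketch (made-up tap counts and eye width, nothing like real silicon):

```python
# Toy sketch of delay-centering, the core idea behind DDR read/write
# training. Real training (write leveling, read gate training, per-bit
# deskew, Vref sweeps...) is far more involved; this is illustrative only.

def sample_ok(delay_tap):
    """Stand-in for hardware: True if a known test pattern reads back
    correctly at this delay setting."""
    return 37 <= delay_tap <= 52        # pretend the data eye spans taps 37..52

def train_delay(num_taps=64):
    # Sweep every delay tap and record which settings pass.
    passing = [tap for tap in range(num_taps) if sample_ok(tap)]
    if not passing:
        raise RuntimeError("no passing window found -- channel untrainable")
    # Center the delay in the passing window for maximum timing margin.
    return (passing[0] + passing[-1]) // 2

print(f"trained delay tap: {train_delay()}")    # -> 44
```

Now multiply that sweep by every byte lane, every rank, both read and write directions, and several voltage/temperature corners, and you get a sense of why DDR4 bring-up is such a slog.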

Intel of course wasn't the only company working on mezzanine memory bridges. There was also a company in Ventura named Inphi, and they were working on LRDIMM technology. The idea was to multiplex multiple DRAM devices stacked on a DIMM and present them to memory controllers that didn't have native support for those extra devices (or ranks, as we say). This way, you could increase memory capacity by doubling the number of devices rather than going to higher-density devices, which were often more than twice as expensive and in very short supply.
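Roughly how that works (my own simplified sketch of rank multiplication, not Inphi's actual design): the buffer sits between the controller's chip selects and the DRAMs, and borrows an address bit to fan each logical rank out to two physical ranks:

```python
# Simplified sketch of LRDIMM-style rank multiplication (illustrative;
# not Inphi's actual scheme). The buffer presents 2 logical ranks to the
# controller while driving 4 physical ranks behind it, borrowing one
# row-address bit to pick the sub-rank.

RANK_MULT_BIT = 16                       # hypothetical borrowed address bit

def decode_rank(logical_cs, row_addr):
    """Map the controller's (chip select, row address) onto twice as
    many physical ranks behind the buffer."""
    sub_rank = (row_addr >> RANK_MULT_BIT) & 1
    physical_cs = (logical_cs << 1) | sub_rank
    masked_row = row_addr & ~(1 << RANK_MULT_BIT)   # bit is consumed here
    return physical_cs, masked_row

# The controller thinks it is addressing logical rank 1; the buffer
# steers the access to physical rank 2 or 3 based on the borrowed bit.
print(decode_rank(logical_cs=1, row_addr=0x0A3B0))  # -> (2, 0x0A3B0)
print(decode_rank(logical_cs=1, row_addr=0x1A3B0))  # -> (3, 0x0A3B0)
```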

I think that with the HMC concept, Intel and Micron are finally uniting the two goals of increased memory bandwidth and increased memory capacity. That's the good news. The bad news is that this technology probably won't be useful in anything other than high-performance computing and mission-critical servers that need a few terabytes of DRAM per system. I don't see it making its way into the consumer market, or even into rack-mounted servers, because of power and cooling issues.

By the way, I disagree with the article you linked to when it stated that Intel "made the mistake" of not moving the memory controller onto the CPU. It was a mistake, for sure, but I don't think it has anything to do with the need for HMC because Intel found ways around the memory bottleneck before finally integrating the memory controller onto the CPU. (But then again, the article also says that hyperthreading was introduced in 2005. It was actually introduced in 2002 w/ the introduction of the 130nm Pentium 4, a.k.a. Northwood. So the author of the article seems to have gotten a few facts wrong.)

Tenchu