
Re: wbmw post# 65236

Tuesday, 11/08/2005 9:56:21 AM

Wbmw:

My memory was a little faulty about how many extra bits are kept. ECC uses 1 bit per byte, but the data in the L1I cache also carries 3 predecode bits per byte, and according to AMD those predecode bits are retained when an L1I line is swapped to L2. Thus each 64-byte line stores 64 × (8 + 1 + 3) = 768 bits, not the 640 I remembered (I had thought it was one bit for ECC and one for predecode, which would give 64 × 10 = 640). This may be one of the reasons why AMD caches use more die area per byte than Intel's.
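The arithmetic above is easy to sanity-check. A minimal sketch (the function name and parameters are mine, just for illustration):

```python
LINE_BYTES = 64  # 64-byte cache line, as in the post

def bits_per_line(ecc_bits_per_byte, predecode_bits_per_byte):
    """Total stored bits for one line: 8 data bits per byte plus
    the per-byte ECC and predecode overhead."""
    return LINE_BYTES * (8 + ecc_bits_per_byte + predecode_bits_per_byte)

# 1 ECC bit + 3 predecode bits per byte -> 768 bits per 64-byte line
print(bits_per_line(1, 3))  # 768
# The misremembered figure: 1 ECC bit + 1 predecode bit -> 640
print(bits_per_line(1, 1))  # 640
```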

The linked list is how I code the data caches in my own programs. Even in the old days, memory latency was much lower than disk latency, so keeping the list in memory paid off. Some data-comm implementations use this in their hardware too: they took their software algorithms and built the hardware the same way (why change what works). Singly linked lists are the more common choice in hardware, as they have many more uses.
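To make the idea concrete, here is a minimal sketch (not the poster's actual code) of a software data cache kept as a singly linked list in move-to-front order, so the list order itself records recency and the tail is the LRU victim. The class and parameter names are assumptions for illustration:

```python
class ListCache:
    """Software data cache as a singly linked list, move-to-front order."""

    class Node:
        __slots__ = ("key", "value", "next")
        def __init__(self, key, value, next=None):
            self.key, self.value, self.next = key, value, next

    def __init__(self, capacity, backing_fetch):
        self.capacity = capacity    # max cached entries
        self.fetch = backing_fetch  # slow lookup on a miss (e.g. disk)
        self.head = None
        self.size = 0

    def get(self, key):
        prev, node = None, self.head
        while node is not None:
            if node.key == key:            # hit: unlink, move to front
                if prev is not None:
                    prev.next = node.next
                    node.next = self.head
                    self.head = node
                return node.value
            prev, node = node, node.next
        value = self.fetch(key)            # miss: fetch, insert at front
        self.head = self.Node(key, value, self.head)
        self.size += 1
        if self.size > self.capacity:      # evict the LRU entry (tail)
            node = self.head
            while node.next.next is not None:
                node = node.next
            node.next = None
            self.size -= 1
        return value
```

A hardware version would keep the links in a small RAM and chase them with a state machine, which is part of why singly linked (one pointer per entry) is the cheaper choice there.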

As to why FA caches aren't implemented more, I think it's plain inertia. You use old solutions that worked for you until something forces you away. Otherwise we would long ago have moved away from x86 and followed each fad as it came along, with every new system chock full of bugs. By the time any system had most of its bugs eradicated, the fad would change and everyone would switch again. There would be no big, long-sustained companies in this business; no IBMs, Intels, or Microsofts. Fads don't give one time to get huge.

Caches started out direct-mapped and moved to more ways as time went on. As caches are shared among more and more cores and get wider as well, taking the jump to FA becomes more likely.
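That progression can be sketched as a single knob. In a direct-mapped cache an address has exactly one slot it can occupy; each added way multiplies the candidate slots; fully associative means a line can sit anywhere, so every tag must be compared on lookup. The cache size and line size below are example parameters, not a specific CPU:

```python
# Example geometry: 32 KB cache, 64-byte lines -> 512 lines total.
CACHE_BYTES, LINE_BYTES = 32 * 1024, 64
NUM_LINES = CACHE_BYTES // LINE_BYTES

def candidate_slots(addr, ways):
    """Slots a given address may occupy for a given associativity.
    ways=1 is direct-mapped; ways=NUM_LINES is fully associative."""
    num_sets = NUM_LINES // ways
    line = addr // LINE_BYTES
    set_index = line % num_sets
    return [set_index * ways + w for w in range(ways)]

print(len(candidate_slots(0x12345, 1)))          # direct-mapped: 1
print(len(candidate_slots(0x12345, 8)))          # 8-way: 8
print(len(candidate_slots(0x12345, NUM_LINES)))  # fully associative: 512
```

The FA case is the expensive one in hardware: 512 candidate slots means 512 parallel tag comparators (or a CAM), versus 8 for the 8-way cache, which is the cost the inertia argument above is weighing against.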

Pete

