
Re: pgerassi post# 65128

Saturday, 11/05/2005 12:07:28 PM


It may be slow the way you'd do it. But it can compare all addresses simultaneously in a single cycle. Yes, it's a bunch of power, but on the same order as an address decode into a memory array. It's the LRU linked-list reorder that takes a couple of cycles. Parallelization can shrink cycles in exchange for more power.

1. You have no idea what you are talking about in terms of
implementation and design trade-offs. For example, fully
associative integrated caches generally have very good
low-power characteristics (read up on early ARM designs),
but that is about their only good point. Your LRU comment
is completely irrelevant, as the replacement policy and its
implementation are a completely orthogonal design issue.
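The orthogonality point can be sketched in code. In this toy set-associative cache model (a made-up illustration, not any real hardware), associativity is just a sizing parameter and the replacement policy is a pluggable victim-selection rule — swap LRU for random and nothing about the lookup path changes:

```python
# Toy set-associative cache model (hypothetical, for illustration only).
# Associativity (ways) and replacement policy are independent knobs.
import random
from collections import OrderedDict

class SetAssocCache:
    def __init__(self, num_sets, ways, policy="lru"):
        self.num_sets = num_sets
        self.ways = ways
        self.policy = policy          # "lru" or "random"
        # Each set is an OrderedDict of tags, kept oldest-first for LRU.
        self.sets = [OrderedDict() for _ in range(num_sets)]
        self.hits = self.misses = 0

    def access(self, addr):
        index = addr % self.num_sets  # set selection: same for any policy
        tag = addr // self.num_sets
        s = self.sets[index]
        if tag in s:
            self.hits += 1
            if self.policy == "lru":
                s.move_to_end(tag)    # mark as most recently used
            return True
        self.misses += 1
        if len(s) >= self.ways:       # set full: replacement policy picks victim
            if self.policy == "lru":
                s.popitem(last=False) # evict least recently used
            else:
                s.pop(random.choice(list(s)))  # evict a random line
        s[tag] = None
        return False
```

Only the two branches marked "policy" differ between LRU and random; the compare/lookup structure the posters are arguing about is untouched by the replacement choice.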

2. As CJ has already said, once you get above about 8-way
set associativity, increasing associativity further has
very little hit-rate benefit and is typically a net performance
loss once you take into account all the negative design
trade-offs of going fully associative.
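A toy experiment (made-up parameters, not a real workload) makes the conflict-miss side of this concrete: hold the total number of cache lines fixed, vary the associativity, and replay an LRU trace of four addresses engineered to collide in the same set. Conflict misses disappear as soon as the set is wide enough to hold the colliding working set, after which extra ways buy nothing:

```python
# Hypothetical hit-rate experiment: fixed capacity, varying associativity,
# LRU replacement. Illustrates conflict misses vanishing as ways grow.
from collections import OrderedDict

def hit_rate(trace, total_lines, ways):
    num_sets = total_lines // ways
    sets = [OrderedDict() for _ in range(num_sets)]
    hits = 0
    for addr in trace:
        s = sets[addr % num_sets]
        tag = addr // num_sets
        if tag in s:
            hits += 1
            s.move_to_end(tag)        # LRU: refresh on hit
        else:
            if len(s) >= ways:
                s.popitem(last=False) # LRU: evict oldest
            s[tag] = None
    return hits / len(trace)

# Four addresses that all map to the same set in every configuration below.
trace = [0, 8, 16, 24] * 10
for ways in (1, 2, 4, 8):
    print(f"{ways}-way: hit rate {hit_rate(trace, 8, ways):.2f}")
# → 1-way: 0.00, 2-way: 0.00, 4-way: 0.90, 8-way (fully assoc.): 0.90
```

With 1 or 2 ways the cyclic trace thrashes under LRU; at 4 ways every access after the first pass hits, and going fully associative (8-way here) adds exactly nothing — the toy analogue of the diminishing returns above ~8 ways.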


