
Tenchu

04/25/03 12:58 PM

#3002 RE: Elmer Phud #2998

EP, Some others can address this better than I, but I think it does all of those operations simultaneously. If it's in L1, it cancels the L2 and main memory requests. If it's in L2, it cancels the memory request. No need to wait while it checks each level in sequence.
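To illustrate the idea in that post, here is a minimal sketch of a parallel lookup with cancel-on-hit. All names here (`parallel_lookup`, the `requests` flags) are made up for illustration; real hardware does this with concurrent tag checks and bus logic, not sequential code.

```python
def parallel_lookup(addr, l1, l2, memory):
    """Probe L1, L2, and main memory at once; cancel the slower
    requests as soon as a faster level hits (illustrative model)."""
    # Pretend all three requests launch in the same cycle.
    requests = {"L2": True, "memory": True}

    if addr in l1:                      # L1 hit
        requests["L2"] = False          # cancel the L2 probe
        requests["memory"] = False      # cancel the memory request
        return l1[addr], requests
    if addr in l2:                      # L2 hit
        requests["memory"] = False      # cancel only the memory request
        return l2[addr], requests
    return memory[addr], requests       # full miss: memory serves it
```

The point of the model: the outstanding requests are only cancelled, never re-issued, so a hit at any level costs no extra latency waiting on the levels below it.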

I doubt the memory request is speculatively issued on every access to L2. That would create way too many bus requests that would later need to be cancelled, especially when you're talking about L2 hit rates of 95%.
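The objection above is easy to quantify: if a memory request is speculatively issued on every L2 access, the fraction later cancelled equals the L2 hit rate. A back-of-the-envelope calculation (access count is a hypothetical figure; the 95% hit rate is from the post):

```python
l2_hit_rate = 0.95           # hit rate cited in the post
accesses = 1_000_000         # hypothetical number of L2 accesses
speculative = accesses       # one memory request per L2 access
wasted = int(speculative * l2_hit_rate)   # cancelled after an L2 hit
useful = speculative - wasted             # actually needed (L2 misses)
print(wasted, useful)        # 950000 wasted vs. 50000 useful
```

That is 19 cancelled bus requests for every useful one, which is the "way too many" in the argument.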

Tenchu

sgolds

04/25/03 1:02 PM

#3005 RE: Elmer Phud #2998

Paul, Elmer, true. For an efficient cache implementation, a processor wants as much concurrent access as it can afford. My explanation wasn't meant to be definitive in its details, but rather a simplified, sequential explanation of the differences between the approaches.