What is the Cost of an L1 Cache Miss?

Edit: For reference purposes (if anyone stumbles across this question), Igor Ostrovsky wrote a great post about cache misses. It discusses several different issues and shows example numbers. End Edit

I did some testing <long story goes here> and am wondering if a performance difference is due to memory cache misses. The following code demonstrates the issue and boils it down to the critical timing portion: it has a couple of loops that visit memory in random order and then in ascending address order.

I ran it on an XP machine (compiled with VS2005: cl /O2) and on a Linux box (gcc -Os). Both produced similar times. These times are in milliseconds. I believe all loops are running and are not optimized out (otherwise it would run “instantly”).

*** Testing 20000 nodes
Total Ordered Time: 888.822899
Total Random Time: 2155.846268

Do these numbers make sense? Is the difference primarily due to L1 cache misses, or is something else going on as well? There are 20,000^2 memory accesses, and if every one were a cache miss, that is about 3.2 nanoseconds per miss. The XP (P4) machine I tested on is 3.2GHz, and I suspect (but don’t know) that it has a 32KB L1 cache and 512KB L2. With 20,000 entries (80KB), I assume there is not a significant number of L2 misses. So this would be (3.2*10^9 cycles/second) * (3.2*10^-9 seconds/miss) = 10.1 cycles/miss. That seems high to me. Maybe it’s not, or maybe my math is bad. I tried measuring cache misses with VTune, but I got a BSOD. And now I can’t get it to connect to the license server (grrrr).
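For what it’s worth, that arithmetic can be written out as a tiny standalone program (a sketch of my own, separate from the test program below; the constants are the measured values above):

#include <stdio.h>

int main( void )
{
   double dOrderedMS = 888.822899;         /* measured ordered time (ms) */
   double dRandomMS  = 2155.846268;        /* measured random time (ms)  */
   double dAccesses  = 20000.0 * 20000.0;  /* 20,000^2 memory writes     */
   double dClockGHz  = 3.2;                /* P4 clock rate              */

   /* extra time per access attributed to misses, in nanoseconds */
   double dNSPerMiss = ( dRandomMS - dOrderedMS ) * 1.0e6 / dAccesses;

   printf( "%.2f ns/miss = %.1f cycles/miss\n",
           dNSPerMiss, dNSPerMiss * dClockGHz );
   return 0;
}

which reproduces the ~10 cycles/miss figure.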

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <assert.h>

#if defined( WIN32 )
#include <windows.h>           // LONGLONG, QueryPerformanceCounter
#else
typedef long long LONGLONG;    // windows.h provides this type on WIN32
#endif

typedef struct stItem
{
   long     lData;
   //char     acPad[20];
} LIST_NODE;



#if defined( WIN32 )
void StartTimer( LONGLONG *pt1 )
{
   QueryPerformanceCounter( (LARGE_INTEGER*)pt1 );
}

void StopTimer( LONGLONG t1, double *pdMS )
{
   LONGLONG t2, llFreq;

   QueryPerformanceCounter( (LARGE_INTEGER*)&t2 );
   QueryPerformanceFrequency( (LARGE_INTEGER*)&llFreq );
   *pdMS = ((double)( t2 - t1 ) / (double)llFreq) * 1000.0;
}
#else
// doesn't need 64-bit integer in this case
void StartTimer( LONGLONG *pt1 )
{
   // Just use clock(), this test doesn't need higher resolution
   *pt1 = clock();
}

void StopTimer( LONGLONG t1, double *pdMS )
{
   LONGLONG t2 = clock();
   // multiply before dividing to avoid integer truncation of CLOCKS_PER_SEC/1000
   *pdMS = (double)( t2 - t1 ) * 1000.0 / (double)CLOCKS_PER_SEC;
}
#endif



long longrand()
{
   #if defined( WIN32 )
   // Stupid cheesy way to make sure it is not just a 16-bit rand value
   return ( rand() << 16 ) | rand();
   #else
   return rand();
   #endif
}

// get random value in the given range
int randint( int m, int n )
{
   int ret = longrand() % ( n - m + 1 );
   return ret + m;
}

// I think I got this out of Programming Pearls (Bentley).
void ShuffleArray
(
   long *plShuffle,  // (O) return array of "randomly" ordered integers
   long lNumItems    // (I) length of array
)
{
   long i;
   long j;
   long t;

   for ( i = 0; i < lNumItems; i++ )
      plShuffle[i] = i;

   for ( i = 0; i < lNumItems; i++ )
      {
      j = randint( i, lNumItems - 1 );

      t = plShuffle[i];
      plShuffle[i] = plShuffle[j];
      plShuffle[j] = t;
      }
}



int main( int argc, char* argv[] )
{
   long          *plDataValues;
   LIST_NODE     *pstNodes;
   long          lNumItems = 20000;
   long          i, j;
   LONGLONG      t1;  // for timing
   double dms;

   if ( argc > 1 && atoi(argv[1]) > 0 )
      lNumItems = atoi( argv[1] );

   printf( "/n/n*** Testing %u nodes/n", lNumItems );

   srand( (unsigned int)time( 0 ));

   // allocate the nodes as one single chunk of memory
   pstNodes = (LIST_NODE*)malloc( lNumItems * sizeof( LIST_NODE ));
   assert( pstNodes != NULL );

   // Create an array that gives the access order for the nodes
   plDataValues = (long*)malloc( lNumItems * sizeof( long ));
   assert( plDataValues != NULL );

   // Access the data in order
   for ( i = 0; i < lNumItems; i++ )
      plDataValues[i] = i;

   StartTimer( &t1 );

   // Loop through and access the memory a bunch of times
   for ( j = 0; j < lNumItems; j++ )
      {
      for ( i = 0; i < lNumItems; i++ )
         {
         pstNodes[plDataValues[i]].lData = i * j;
         }
      }

   StopTimer( t1, &dms );
   printf( "Total Ordered Time: %f/n", dms );

   // now access the array positions in a "random" order
   ShuffleArray( plDataValues, lNumItems );

   StartTimer( &t1 );

   for ( j = 0; j < lNumItems; j++ )
      {
      for ( i = 0; i < lNumItems; i++ )
         {
         pstNodes[plDataValues[i]].lData = i * j;
         }
      }

   StopTimer( t1, &dms );
   printf( "Total Random Time: %f/n", dms );

   free( plDataValues );
   free( pstNodes );

   return 0;
}
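For reference, building and running the test looks something like this (cachetest.c is a placeholder file name; the optional argument overrides the node count):

cl /O2 cachetest.c                    (Windows, VS2005)
gcc -Os -o cachetest cachetest.c      (Linux)
./cachetest 20000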

Answers

Well yeah, that does look like it will mainly be L1 cache misses.

10 cycles for an L1 cache miss does sound about reasonable, probably a little on the low side.

A read from RAM is going to take on the order of 100s, or maybe even 1000s, of cycles (I’m too tired to attempt the maths right now ;)), so it’s still a huge win over that.

It’s difficult to say anything for sure without a lot more testing, but in my experience that scale of difference definitely can be attributed to the CPU L1 and/or L2 cache, especially in a scenario with randomized access. You could probably make it even worse by ensuring that each access is at least some minimum distance from the last.
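One hypothetical way to do that (my own sketch, not from the original test): instead of a random shuffle, build the access order by stepping through the array with a large stride that is coprime to the element count, so consecutive accesses are always far apart yet every element is still visited exactly once per pass.

#define STRIDE 7919   // prime, hence coprime to 20000

// Build an access order where consecutive indices are always at least
// min(STRIDE, lNumItems - STRIDE) elements apart.
void BuildStridedOrder( long *plOrder, long lNumItems )
{
   long i;
   long idx = 0;

   for ( i = 0; i < lNumItems; i++ )
      {
      plOrder[i] = idx;
      idx = ( idx + STRIDE ) % lNumItems;
      }
}

One caveat: a constant stride can be picked up by hardware prefetchers on newer CPUs, so a shuffle that simply rejects too-close neighbors would be a more faithful version of the suggestion.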

You should take a read of “What every programmer should know about memory” by Ulrich Drepper – it goes deep into the timing of memory access, and access-pattern and cache interactions.

The easiest thing to do is to take a scaled photograph of the target CPU and physically measure the distance between the core and the level-1 cache. Then divide that distance by the speed at which a signal propagates in copper, and figure out how many clock cycles fit in that same time. That’s the minimum number of CPU cycles you’ll waste on an L1 cache miss.

You can also work out the minimum cost of fetching data from RAM in terms of the number of CPU cycles wasted in the same way. You might be amazed.
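As a rough sketch of that thought experiment (the distances here are invented for illustration):

#include <stdio.h>

int main( void )
{
   double dSignalMPS = 2.0e8;   // ~2/3 the speed of light, roughly how fast
                                // signals propagate in copper
   double dClockHz   = 3.2e9;   // 3.2GHz P4
   double dL1Meters  = 0.005;   // assume ~5mm core-to-L1 (round trip 2x)
   double dRAMMeters = 0.10;    // assume ~10cm CPU-to-DIMM (round trip 2x)

   printf( "L1 flight time:  %.2f cycles\n",
           2.0 * dL1Meters  / dSignalMPS * dClockHz );
   printf( "RAM flight time: %.2f cycles\n",
           2.0 * dRAMMeters / dSignalMPS * dClockHz );
   return 0;
}

Even with generous distances the pure flight time is small, which suggests the hundreds of cycles of a real DRAM access are spent in the memory system’s logic rather than on the wire.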

Notice that what you’re seeing here definitely has something to do with cache misses (be it L1, or both L1 and L2), because normally the cache will pull in an entire cache line once you access anything on that line, requiring fewer trips to RAM.

However, what you’re probably also seeing is the fact that RAM (even though it’s called Random Access Memory) still prefers linear memory access.

3.2ns for an L1 cache miss is entirely plausible. For comparison, on one particular modern multicore PowerPC CPU, an L1 miss is about 40 cycles — a little longer for some cores than others, depending on how far they are from the L2 cache (yes really). An L2 miss is at least 600 cycles.

Cache is everything in performance; CPUs are so much faster than memory now that you’re really almost optimizing for the memory bus instead of the core.

While I can’t offer an answer to whether or not the numbers make sense (I’m not well versed in cache latencies, but for the record a ~10 cycle L1 cache miss sounds about right), I can offer you Cachegrind as a tool to help you actually see the differences in cache performance between your two tests.

Cachegrind is a Valgrind tool (the framework that powers the always-lovely memcheck) which profiles cache and branch hits/misses. It will give you an idea of how many cache hits/misses you are actually getting in your program.
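A typical run looks something like this (the binary name is a placeholder; the --D1 override of size/associativity/line size is optional, with values shown that are plausible for a P4):

valgrind --tool=cachegrind --D1=8192,4,64 ./cachetest
cg_annotate cachegrind.out.<pid>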

Some numbers for a 3.4GHz P4 from a Lavalys Everest run:

  • the L1 dcache is 8K (cacheline 64 bytes)
  • L2 is 512K
  • L1 fetch latency is 2 cycles
  • L2 fetch latency is about double what you are seeing: 20 cycles

More here: http://www.freeweb.hu/instlatx64/GenuineIntel0000F25_P4_Gallatin_MemLatX86.txt

(for the latencies look at the bottom of the page)

If you plan on using Cachegrind, please note that it is a cache hit/miss simulator only. It won’t always be accurate. For example: if you access some memory location, say 0x1234, in a loop 1000 times, Cachegrind will always show you that there was only one cache miss (the first access), even if you have something like

clflush 0x1234

in your loop. On x86, this will cause all 1000 accesses to be cache misses.
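To make that concrete, here is a minimal sketch of such a loop (names are mine) using the SSE2 intrinsic for clflush; real hardware misses on every iteration, while Cachegrind’s simulation reports a single miss:

#include <emmintrin.h>   // _mm_clflush (SSE2)

volatile long glValue;

void FlushLoop( void )
{
   int i;

   for ( i = 0; i < 1000; i++ )
      {
      glValue = i;                            // touch the location
      _mm_clflush( (const void*)&glValue );   // evict its cache line
      }
}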