Get better results when you design your cache to match your applications and system




      
nscd configuration:

   0  server debug level
"/dev/null"  is server log file

passwd cache:

       Yes  cache is enabled
       507  cache hits on positive entries
         0  cache hits on negative entries
        55  cache misses on positive entries
         2  cache misses on negative entries
        89% cache hit rate
         0  queries deferred
        16  total entries
       211  suggested size
       600  seconds time to live for positive entries
         5  seconds time to live for negative entries
        20  most active entries to be kept valid
       Yes  check /etc/{passwd,group,hosts} file for changes
        No  use possibly stale data rather than waiting for refresh

group cache:

       Yes  cache is enabled
        27  cache hits on positive entries
         0  cache hits on negative entries
        11  cache misses on positive entries
         0  cache misses on negative entries
        71% cache hit rate
         0  queries deferred
         5  total entries
       211  suggested size
      3600  seconds time to live for positive entries
         5  seconds time to live for negative entries
        20  most active entries to be kept valid
       Yes  check /etc/{passwd,group,hosts} file for changes
        No  use possibly stale data rather than waiting for refresh

hosts cache:

       Yes  cache is enabled
        22  cache hits on positive entries
         3  cache hits on negative entries
         7  cache misses on positive entries
         3  cache misses on negative entries
        71% cache hit rate
         0  queries deferred
         4  total entries
       211  suggested size
      3600  seconds time to live for positive entries
         5  seconds time to live for negative entries
        20  most active entries to be kept valid
       Yes  check /etc/{passwd,group,hosts} file for changes
        No  use possibly stale data rather than waiting for refresh
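
The hit rate reported in each block is simply the hits divided by the
total number of lookups. The short Python sketch below reproduces the
passwd cache figure, on the assumption that positive and negative hits
both count toward the rate:

# Hit-rate arithmetic for the passwd cache counters shown above.
pos_hits, neg_hits = 507, 0      # cache hits on positive / negative entries
pos_misses, neg_misses = 55, 2   # cache misses on positive / negative entries

lookups = pos_hits + neg_hits + pos_misses + neg_misses
hit_rate = 100.0 * (pos_hits + neg_hits) / lookups
print(f"{lookups} lookups, {hit_rate:.1f}% hit rate")   # 564 lookups, 89.9% hit rate

The group and hosts figures work out the same way: 27 hits out of 38
lookups and 25 out of 35, both reported as 71%.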


What happens when the cache is full

So far we have assumed that there is always some spare room in the
cache. In practice, the cache fills up, and at some point existing
entries have to be reused. Replacement policies vary, but the general
principle is to get rid of the entries you will not need again soon.
Unfortunately, a "won't need soon" policy is hard to implement unless
you have a very structured and predictable workload, or you can
predict the future!
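
If you really could see the future, the ideal choice is simple to
state: evict the entry whose next use is farthest away. Here is a
small, purely illustrative sketch of that rule (often credited to
Belady) in Python; it only works offline, against a recorded access
trace, which is exactly why real caches cannot use it.

def optimal_victim(cached_keys, future_accesses):
    """Pick the entry to evict, given perfect knowledge of future accesses:
    the one whose next use is farthest away (or that is never used again)."""
    def next_use(key):
        try:
            return future_accesses.index(key)
        except ValueError:
            return float("inf")   # never needed again: the perfect victim
    return max(cached_keys, key=next_use)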


Instead, you will often find caches that use "least recently used"
(LRU) or "not recently used" (NRU) policies, in the hope that your
accesses have good temporal locality. CPU caches tend to use "direct
mapping," where there is no choice: the address in memory determines
exactly which location in the cache is used.
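
As a concrete sketch of the LRU idea, here is a small fixed-size cache
in Python that evicts the least recently used entry when it overflows,
together with the slot calculation a direct-mapped cache uses, where
the address alone decides where the data goes. The names here are my
own illustration, not part of any particular system.

from collections import OrderedDict

class LRUCache:
    """A fixed-size cache that evicts the least recently used entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()          # oldest first, newest last

    def get(self, key):
        if key not in self.entries:
            return None                       # a miss; the caller fetches and calls put()
        self.entries.move_to_end(key)         # touched: now the most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # throw out the least recently used entry

def direct_mapped_slot(address, line_size=32, num_lines=1024):
    """A direct-mapped cache has no replacement policy to apply:
    the memory address alone fixes which cache line is used."""
    return (address // line_size) % num_lines

An NRU policy is a cheaper approximation of the same idea: rather than
keeping entries in exact order of use, it simply marks entries that
have been touched recently and evicts one that has not.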
