Increase system performance by maximizing your cache

Unix Insider |  Networking

Q: I know that files are cached in memory and there is also a cache filesystem option. How can I tell if the caches are working well and how big they should be? Also, how can I tune applications together with the caches?

--Tasha in Cashmere (again)

A: Computer system hardware and software are built by using many types of
cache. The system designers optimize these caches to work well with
typical workload mixes and tune them using in-house and industry
standard benchmarks. If you are writing an application or deciding how
to deploy an existing suite of applications on a network of systems,
you need to know what caches exist and how to work with them to get
good performance.

Cache principles revisited

Here's a recap of the principles of caching we covered in last month's
article. Caches work on two basic principles that should be quite
familiar to you from everyday life experiences. The first is that if
you spend a long time getting something that you think you may
need again soon, you keep it nearby. The contents of your cache make
up your working set. The second principle is that when you get
something, you can save time by also getting the extra items you suspect you'll need in the near future.

The first principle is called "temporal locality" and involves
reusing the same things over time. The second principle is called
"spatial locality" and depends on the simultaneous use of things that are
located near each other. Caches only work well if there is good
locality in what you are doing. Some sequences of behavior work very
efficiently with a cache, and others make little or no use of the
cache. In some cases, cache-busting behavior can be fixed by changing
the system to provide support for special operations. In most cases,
avoiding cache-busting behavior in the workload's access pattern will
lead to a dramatic improvement in performance.
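The difference between cache-friendly and cache-busting access patterns is easy to show in code. The sketch below (names and the matrix size are illustrative, not from the article) sums the same matrix two ways: row by row, which touches adjacent memory addresses and exploits spatial locality, and column by column, which strides through memory and defeats the cache. Both produce the same answer; only the access pattern, and therefore the performance, differs.

```c
#include <stddef.h>

#define N 1024

/* A statically allocated N x N matrix (zero-initialized). */
static double grid[N][N];

/* Row-major traversal: consecutive iterations touch consecutive
   addresses, so each cache line fetched is fully used. */
double sum_row_major(void)
{
    double total = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            total += grid[i][j];
    return total;
}

/* Column-major traversal: consecutive iterations are N * 8 bytes
   apart, so almost every access misses the cache. */
double sum_col_major(void)
{
    double total = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            total += grid[i][j];
    return total;
}
```

On a large enough matrix the row-major version can run several times faster, even though the two functions do exactly the same arithmetic.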

A cache works well if there are a lot more reads than writes, and
if the reads or writes of the same or nearby data occur close together in
time. An efficient cache has a low reference rate (it doesn't make
unnecessary lookups), a very short cache hit time, a high hit ratio,
the minimum possible cache miss time, and an efficient way of handling
writes and purges.
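These metrics combine into a simple figure of merit. Using the standard cache arithmetic (the formula is illustrative, not quoted from the article), the average time per reference through a single-level cache is the hit time plus the miss ratio times the miss penalty:

```c
/* Average time to satisfy one reference through a single-level cache:
       t_avg = t_hit + (1 - hit_ratio) * t_miss_penalty
   A high hit ratio and a small miss penalty both matter. */
double avg_access_time(double t_hit, double hit_ratio, double t_miss_penalty)
{
    return t_hit + (1.0 - hit_ratio) * t_miss_penalty;
}
```

For example, with a 1-microsecond hit time, a 98 percent hit ratio, and a 50-microsecond miss penalty, the average reference costs about 2 microseconds; drop the hit ratio to 90 percent and the average jumps to about 6.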

File access caching with local disks

We'll start by looking at the simplest configuration, the open,
fstat, read, write, and mmap operations on a
local disk with the default Unix File System (UFS).
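Those operations fit together as in the sketch below (assumptions: a POSIX system with a writable /tmp; the file name is illustrative). It creates a small file, then fetches the contents two ways: read(2) copies the data from the kernel's cache into a private buffer, while mmap(2) maps the cached pages directly into the process's address space, avoiding the copy.

```c
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Create a temporary file, then read it back via both read(2)
   and mmap(2).  Returns 1 if both paths saw identical bytes,
   -1 on any failure. */
int read_vs_mmap_demo(void)
{
    char path[] = "/tmp/cache_demo_XXXXXX";
    int fd = mkstemp(path);
    if (fd == -1)
        return -1;

    const char msg[] = "cached file data";
    if (write(fd, msg, sizeof msg) != (ssize_t)sizeof msg)
        goto fail;

    struct stat st;             /* fstat: the size comes from the inode */
    if (fstat(fd, &st) == -1)
        goto fail;

    char buf[sizeof msg];       /* read path: data copied into our buffer */
    if (pread(fd, buf, (size_t)st.st_size, 0) != st.st_size)
        goto fail;

    /* mmap path: the cached pages appear directly in our address space */
    void *map = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (map == MAP_FAILED)
        goto fail;

    int same = memcmp(buf, map, (size_t)st.st_size) == 0;

    munmap(map, (size_t)st.st_size);
    close(fd);
    unlink(path);
    return same;

fail:
    close(fd);
    unlink(path);
    return -1;
}
```

Both paths go through the same underlying caches, which is why the caches described next affect every one of these operations.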

There are many interrelated caches involved, all of them system-wide and
shared by all users and all processes.
