Increase system performance by maximizing your cache

Unix Insider |  Networking



The buffer cache itself was left intact, but it was bypassed for all
data transfers, changing it from having a key role to being mostly
inconsequential. The sar -b command still reports on its
activity, but I can't remember the buffer cache itself being a
performance bottleneck in many years. This cache now holds only UFS
metadata: disk blocks full of inodes (a disk block is 8 kilobytes; an
inode is about 300 bytes), indirect blocks (used as inode extensions to
keep track of large files), and cylinder group information (which
records how the disk space is divided between inodes and data). The
buffer cache sizes itself dynamically; hits are quick, and misses
involve a disk access.



  • In-memory page cache

    When we talk about memory usage and demand on a system, it is actually
    the behavior of this cache that is the issue. It contains all data that
    is held in memory. That includes the files that make up executable code
    and normal data files, without making any distinction between them. A
    large proportion of the total memory in the system is used by this
    cache as it holds all the pages that make up the current working set of
    the system as a whole.



    All page-in and page-out operations occur between this cache and the
    underlying filesystems on disk (or over NFS). Individual pages in the
    cache might currently be unmapped (e.g. a data file) or mapped into
    the address space of many processes (e.g. the pages that make up the
    libc.so.1 shared library). Some pages do not
    correspond to a named file (e.g. the stack space of a process); these
    anonymous pages have swap space reserved for them so that they can be
    written to disk if required. The vmstat and sar -pg
    commands monitor the activity of this cache.



    The cache is made up of 4-kilobyte or 8-kilobyte page frames. Each
    page of data can be located on disk as a filesystem or swap-space
    data block, or in memory in a page frame. Page frames that are empty
    or ready for reuse are kept on the free list (reported as free by
    vmstat).



    A cache hit occurs when a needed page is already in memory. This can
    be recorded as an attach to an existing page or as a reclaim if the
    page was on the free list. A cache miss occurs when the page needs to
    be created from scratch (zero-fill fault), duplicated (copy-on-write),
    or read in from disk (page-in). Apart from the page-in, these are all
    quite quick operations, and all misses take a page frame from the free
    list and overwrite it.
