The right disk configurations for servers


A large number of
SBus SCSI interfaces is a better investment for this particular
workload.


If you need to run at sustained rates of more than 20 to 30
megabytes per second of sequential activity on a single file, you will
run into problems with the default UFS filesystem. The UFS indirect
block structure and data layout strategy work well for general-purpose
accesses such as home directories, but they cause too many random seeks
for high-speed sequential performance. The Veritas VxFS filesystem is
an extent-based structure, which avoids the indirect block problem. It
also allows individual files to be designated as "direct" for raw
unbuffered access. This bypasses the problems caused by UFS trying to
cache all files in RAM, which is inappropriate for large sequential
access files and stresses the pager. It is possible to get 100
megabytes per second or more with a carefully set up VxFS
configuration.
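
For a single large file, the application itself can ask for unbuffered
access. Below is a minimal sketch using Solaris's directio(3C) advisory
call, which UFS honours; VxFS provides the equivalent through its own
mount options and ioctls. The file name and transfer size here are
only illustrative.

/*
 * Sketch: advise the filesystem to bypass its cache, then read the file
 * sequentially in large chunks.  directio() is advisory, so if the
 * filesystem ignores it the program still runs through the page cache.
 */
#include <sys/types.h>
#include <sys/fcntl.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

#define CHUNK (1024 * 1024)             /* 1-megabyte sequential reads */

int
main(void)
{
    char    *buf = malloc(CHUNK);
    int      fd  = open("/bigfs/stream.dat", O_RDONLY);  /* hypothetical file */
    ssize_t  n;

    if (fd < 0 || buf == NULL) {
        perror("setup");
        return 1;
    }
    if (directio(fd, DIRECTIO_ON) < 0) /* request uncached access */
        perror("directio");            /* not fatal: falls back to cached I/O */
    while ((n = read(fd, buf, CHUNK)) > 0)
        ;                              /* process each chunk here */
    close(fd);
    free(buf);
    return 0;
}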


A log-based filesystem may slow down high-speed sequential operations
by limiting you to the throughput of the log. It should only log
synchronous updates, such as directory changes and file creation/deletion,
so measure your own workload with and without a log. If it
doesn't get in the way of the performance you need, use a log to keep
reboot times down.
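
A quick way to decide is to run the same sequential write test against
the filesystem mounted with logging and again without it, and compare
the throughput. A rough sketch follows; the path and sizes are
placeholders for your own test area and working set.

/*
 * Sketch: time a burst of large sequential writes and report megabytes
 * per second, so the same binary can be run on a logging and a
 * non-logging mount and the results compared.
 */
#include <sys/time.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

#define CHUNK  (1 << 20)                /* 1-megabyte writes      */
#define CHUNKS 256                      /* 256 megabytes in total */

int
main(void)
{
    char           *buf = calloc(1, CHUNK);
    int             fd  = open("/testfs/seq.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    struct timeval  t0, t1;
    double          secs;
    int             i;

    if (fd < 0 || buf == NULL) {
        perror("setup");
        return 1;
    }
    gettimeofday(&t0, NULL);
    for (i = 0; i < CHUNKS; i++)
        if (write(fd, buf, CHUNK) != CHUNK) {
            perror("write");
            return 1;
        }
    fsync(fd);                          /* flush before stopping the clock */
    gettimeofday(&t1, NULL);
    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%.1f megabytes per second\n", CHUNKS / secs);
    close(fd);
    free(buf);
    return 0;
}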


Database choice

Database workloads are very different again. Reads may be done in small
random blocks (when looking up indexes), or large sequential blocks
(when doing a full table scan). Writes are normally synchronous for
safe commits of new data. On a mixed workload system, running databases
through the filesystem can cause virtual memory "churning" due to the
high levels of paging and scanning associated with filesystem I/O. This
can affect other applications adversely, so where possible it is best
to use raw disks or direct unbuffered I/O to a filesystem that supports
it (such as VxFS).
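
From the application's side, the raw-disk route means opening the
character device and writing synchronously, so nothing passes through
the filesystem cache or the pager. A simplified sketch follows, with a
hypothetical device name (point it only at a scratch slice, since it
overwrites whatever is there); real databases do this inside their own
I/O layers.

/*
 * Sketch: a database-style synchronous write of one 2-kilobyte block to
 * a raw disk slice.  O_DSYNC makes the write a durable commit, and raw
 * device I/O should use sector-aligned buffers and offsets.
 */
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define DBBLOCK 2048                    /* 2-kilobyte database block */

int
main(void)
{
    char  *block = memalign(512, DBBLOCK);        /* sector-aligned buffer   */
    int    fd    = open("/dev/rdsk/c1t0d0s6", O_RDWR | O_DSYNC);
    off_t  where = 64 * DBBLOCK;                  /* some block in the slice */

    if (fd < 0 || block == NULL) {
        perror("setup");
        return 1;
    }
    memset(block, 0, DBBLOCK);                    /* new block contents      */
    if (pwrite(fd, block, DBBLOCK, where) != DBBLOCK)
        perror("pwrite");                         /* this call is the commit */
    close(fd);
    free(block);
    return 0;
}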


Both Oracle and Sybase default to a 2-kilobyte block size. A small
block size keeps the disk service time low for random lookups of
indexes and small amounts of data. When a full table scan occurs, the
database may read multiple blocks in one operation, causing larger I/O
sizes and sequential patterns.
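
The two cases translate into quite different read calls. The sketch
below assumes a 2-kilobyte block size and a multiblock read count of
32; the data file name and block numbers are made-up examples.

/*
 * Sketch: the two read patterns a 2-kilobyte-block database generates.
 * An index lookup fetches a single block from a random position; a full
 * table scan fetches runs of adjacent blocks in one larger request.
 */
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

#define DBBLOCK   2048                  /* 2-kilobyte block             */
#define MULTIREAD 32                    /* blocks fetched per scan read */

int
main(void)
{
    char *buf = malloc(DBBLOCK * MULTIREAD);
    int   fd  = open("/dbfs/table.dat", O_RDONLY);   /* hypothetical data file */

    if (fd < 0 || buf == NULL) {
        perror("setup");
        return 1;
    }
    /* Index lookup: one random 2-kilobyte block. */
    if (pread(fd, buf, DBBLOCK, (off_t)918 * DBBLOCK) < 0)
        perror("index read");
    /* Table scan: 32 adjacent blocks, 64 kilobytes, in a single call. */
    if (pread(fd, buf, DBBLOCK * MULTIREAD, (off_t)0) < 0)
        perror("scan read");
    close(fd);
    free(buf);
    return 0;
}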


Databases have two characteristics that are greatly assisted by an
array controller that contains non-volatile RAM. One is that a large
proportion of the writes are synchronous, and are on the critical path
for user response times. The service time for a 2-kilobyte write is
often reduced from about 10 to 15 milliseconds to 1 to 2 milliseconds.
The other is that synchronous sequential writes often occur as a stream
of small blocks, typically of only 2 kilobytes at a time. The array
controller can coalesce multiple adjacent writes into a
smaller number of much larger operations, which can be written to disk
far faster.
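
To put illustrative numbers on the second point: a stream of 2-kilobyte
synchronous writes at 12 milliseconds each tops out at roughly 80
writes, or about 160 kilobytes, per second. With the writes landing in
non-volatile RAM at 1 to 2 milliseconds each, and the controller
coalescing 64 adjacent 2-kilobyte blocks into single 128-kilobyte
back-end transfers, the disks see a few large writes per second instead
of hundreds of small ones.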
