FreeNAS: Flexible, fast storage, and the price is right

By David Newman, Network World |  Storage, FreeNAS, NAS

One thing we did not do was allow FreeNAS to use all 48GB of the RAM in the server supplied by iXsystems. Like any modern operating system, FreeBSD caches as much data as possible in RAM before committing it to disk. Serving data from RAM means much higher performance for relatively small reads and writes, but it's not representative of the performance users would see in production. This is especially true when many users are involved; then, reading and writing from disk becomes inevitable.

To ensure a balance of disk I/O and caching performance, we configured the FreeNAS server to use only 6GB of RAM, the minimum supported with ZFS, and then we read or wrote 64GB in each test - well in excess of the available RAM. We also configured both NFS client machines to use 6GB of RAM, even though both had 16GB available.
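The article doesn't say how the 6GB limit was imposed, but on FreeBSD there are two common ways to do it via loader tunables: capping the physical memory the kernel will use, or capping the ZFS ARC (the read cache that does most of the RAM consumption). A hedged sketch of what /boot/loader.conf might contain, with values chosen to match the test setup described above:

```
# /boot/loader.conf -- illustrative only; the article does not state
# which method iXsystems or the reviewers actually used.

# Option 1: tell the kernel to use only 6GB of physical memory.
hw.physmem="6G"

# Option 2: leave all RAM visible but cap the ZFS ARC instead
# (value in bytes; 6GB shown here).
vfs.zfs.arc_max="6442450944"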

Test results

FreeNAS performance is fast, especially with sequential reads and re-reads (see the figure, below). Storage performance tests usually measure I/O in bytes per second; when expressed in bits, FreeNAS read and re-read data at rates at or above 6Gbps.

That 6Gbps top speed reflects several limiting factors: the 6Gbps line rate of the SATA3 interface; the overhead added by the NFS protocol; contention among multiple TCP flows (16 threads were active during these tests); and the ratio of disk I/O to data served from RAM. The top speeds achieved here are about as fast as the hardware could possibly go under these test conditions.
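Since storage tools report throughput in bytes per second while link speeds are quoted in bits per second, it's worth making the conversion explicit. A minimal sketch (decimal units, the convention for link rates):

```python
# Convert the MB/s figures storage benchmarks report into the Gbps
# figures used for link speeds. 1 byte = 8 bits; decimal units are
# assumed here (1 MB = 10**6 bytes, 1 Gb = 10**9 bits).

def mbps_to_gbps(mb_per_s: float) -> float:
    """Megabytes per second (decimal) -> gigabits per second."""
    return mb_per_s * 8 / 1000

# A read rate of 750 MB/s is the same as the 6 Gbps cited above:
print(mbps_to_gbps(750))  # -> 6.0
```

In other words, hitting 6Gbps over NFS means moving roughly 750MB of payload per second, which is why the SATA3 line rate and protocol overhead together bound the result.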

Write and rewrite performance was slower than reads, as usual in I/O benchmarking. With sequential rewrites, FreeNAS moved traffic at rates of around 280MBps. Curiously, sequential rewrites went twice as fast with 4-kbyte records as with 64-kbyte ones. The most likely explanation is that the time involved in writing the larger amount of data to disk favored the smaller record size.

Sequential write and read tests are meaningful when writing or reading large amounts of data on a relatively empty disk. Once the disk fills up, or if the application involves reading from different parts of a database, then random read and write tests become more important.

Results are much slower for random read and write tests. That's not surprising considering that disk heads move around a lot more in a random test than they would with sequential operations. Here, the larger 64-kbyte records help, since more time is spent reading or writing relative to disk seek time. Still, both 4- and 64-kbyte random rates are just a fraction of the sequential ones.

In the worst case, writes of 4-kbyte records run at just 3MBps, compared with 276MBps for sequential writes. In fairness, though, any storage system would do far worse in random tests than in sequential ones. These results aren't a reflection on FreeNAS or ZFS.


Originally published on Network World.