Filesystems evolve

By Rawn Shah, Unix Insider |  Operating Systems

Filesystems have been around since the 1960s; you'd think everything you could possibly want would have been developed by now. Isn't it time for filesystems to stop changing?

Researchers say no, and I tend to agree. The shifting nature of computing makes this continuous change not only appropriate, but necessary. As new technologies and new operating systems become more popular, filesystems' basic requirements evolve. You can't expect something built a decade ago to serve the computers of today, just as you can't expect a truck built at the turn of the 20th century to still haul shipping containers cross-country.

Progress answers the demands it brings about. It's a circular development -- change fosters change. The perceived benefits outweigh the issues and difficulties, and filesystems become faster, smarter, and better.

Change happens

It's 1980. You are creating a very simple operating system, like DOS or CP/M. You don't really need a complex filesystem, because most people can only afford 360-KB, 5.25-inch floppy drives to store data. The drive has a known fixed size, and based on calculations and an understanding of how the operating system can use files, you create a filesystem like the File Allocation Table (FAT), which is used in DOS.
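The core idea behind a FAT-style design is simple enough to sketch: the allocation table maps each cluster on disk to the next cluster in a file's chain, ending with a sentinel value. The snippet below is a toy illustration under that assumption, with hypothetical cluster numbers and an in-memory table rather than the real on-disk FAT layout:

```python
EOC = -1  # end-of-chain sentinel (real FAT uses a reserved value such as 0xFFF)

# fat[cluster] -> next cluster; here, a file starting at cluster 2
# occupies clusters 2 -> 5 -> 6 (illustrative numbers only)
fat = {2: 5, 5: 6, 6: EOC}

def file_clusters(start):
    """Walk the allocation table to list every cluster a file occupies."""
    chain = []
    cluster = start
    while cluster != EOC:
        chain.append(cluster)
        cluster = fat[cluster]
    return chain

print(file_clusters(2))  # [2, 5, 6]
```

Because the table is a fixed-size array sized to the known capacity of the disk, this scheme suits the small, fixed-size media of the era.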

Jump ahead to 1988. Unix systems are becoming more common on PC-based machines. Hard drives are affordable and are up to 500 MB. You have plenty of space to store your applications, but due to the size, the need to find files quickly, and the OS's more advanced needs, you put the Unix File System (UFS) -- designed for large, expensive Unix systems -- on Integrated Drive Electronics (IDE) hardware. FAT is slower and less efficient for these larger drives, and does not meet the needs of the multiuser Unix environment.

Now it's 1996. 4-GB drives abound, and the future promises even larger ones. Linux has been available for a few years, and as part of the open source movement, a new filesystem called ext2 appears. Its inner workings are openly documented, while it preserves the traditional design of Unix filesystems.

By 2004, we can expect ordinary users to have access to single drives of several hundred gigabytes. At that size, new problems may emerge. While big by today's standards, that space could well be filled by future applications. However, you'll have stored so much that backing up the data will be a daily or weekly ritual, even for end users. Although more automated, the process may take many hours to complete, because tape technology and DVD-writable drives still can't write data as quickly as data can be read.
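Some back-of-the-envelope arithmetic shows why those backups stretch into hours. The figures below are purely illustrative assumptions -- a 300-GB drive that is half full, backed up to a tape drive sustaining 10 MB/s of writes -- not measurements of any particular product:

```python
# Assumed, illustrative figures: a 300-GB drive half full,
# backed up to a tape drive that sustains 10 MB/s of writes.
data_gb = 300 * 0.5             # 150 GB of data to back up
write_mb_per_s = 10             # assumed sustained tape write speed

seconds = (data_gb * 1024) / write_mb_per_s
hours = seconds / 3600
print(round(hours, 1))  # 4.3 -- several hours, even at this optimistic rate
```

Halve the write speed, or fill the drive, and the job crosses into an overnight run -- which is why backup windows become a real constraint as drives grow.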

On another front, networking is becoming more complex, as storage systems separate from the processor box become more commonplace.
