Automatic for the people

Network File System (NFS) configuration information is quite possibly the digital equivalent of a tribble (that's as in Star Trek, not the Sun Vice President who is distinctly analog). Client configuration data -- lists of server names, paths to filesystems on those servers, and the local directories on which they are mounted -- grows nearly constantly as you add new packages, libraries, and users. In your efforts to keep a consistent view of the world on all systems, you can spend quite a bit of time conjugating /etc/vfstab entries that fully describe the set of remote filesystem resources used by each machine.

It would be fairly easy to make up a prototype /etc/vfstab for NFS clients and blast it out to each desktop. Unfortunately, such an approach makes the NFS client configuration complex, and it carries a serious downside risk: increasing the number of NFS mounts increases the probability that a desktop machine is incapacitated when an NFS server crashes or becomes unreachable. Even if each of your NFS servers goes down only once a month, with 30 or 40 file servers accessed by every desktop you're looking at (on average) roughly one meltdown a day under the "mount everything" approach. What you need is a tool that minimizes the client-side configuration work while mounting only the filesystems actually needed at any given time.

Automation for the desktop people comes in the form of the automounter. A standard part of Solaris and most Unix operating systems, the automounter uses centralized configuration information to reduce the client-side administrative load, and it maintains a working set of mount points instead of completing a mount to every NFS server that could possibly be needed by the desktop. As such, the automounter obeys the two rules of clean system administration: it enforces consistency, and it does so simply. The automounter takes the simplicity theme one step further by unmounting filesystems that are not in use.

This month, join us on a guided tour of the automounter, starting with an overview of how it manages the client NFS mount environment. We'll discuss the different flavors of configuration files, called maps, and give copious examples of good and bad map construction. We'll dive into a bit of detail on the automounter's implementation and the invocation features that let you fine-tune your map usage. Finally, as has become habit in this space, we'll provide a list of typical pitfalls and problems, with suggested solutions. We'll assume you're running Solaris 2.3 or later, since the automounter went through its last major architectural change in that release, but we'll point out differences for older automounters where appropriate.

The name game

NFS clients build on the original Unix concept of a shared filesystem name space by adding access to remote filesystems. All filesystems, local or remote, live under the root directory (/) in a single tree structure; there aren't drive letters or volume names or mini disk packs to worry about. The /etc/vfstab file contains the list of every filesystem mounted to construct this name space, including the root filesystem and other local disks. For a local disk, the /etc/vfstab entry contains a disk device name and the path name on which it is to be mounted; for an NFS mount the /etc/vfstab line contains a server and remote path name pair:

<font face="Courier">/dev/dsk/c0t3d0s4    /dev/rdsk/c0t3d0s4	/usr      ufs	1	no	-
bigboy:/export/home  - 		        /home     nfs   -       no      -
</font>

Obviously, /etc/vfstab can become a major time sink when you need to maintain several hundred machines that access hundreds or thousands of network-accessible filesystems. NFS lets you export part of a filesystem to a client, so even if you have a few dozen servers, you may have to worry about several hundred client mount points. Remember rule #2: simplicity wins when you only show clients the minimal data set. The corollary to rule #2 is that your most vocal client will demand access to a filesystem subset that is not mounted when your work queue is the longest.

The automounter attempts to solve these problems by standardizing and streamlining the client-side mount process. The automounter consists of two parts: a user-level daemon that masquerades as a bona fide NFS server, and a kernel-resident module called autofs that actually completes the mount operations. Prior to Solaris 2.3, there was no kernel component, and the user daemon did all of the work, resulting in some pathname ugliness that made automounted filesystems a bit less pleasant to use. We'll revisit the automounter internals after taking a detour to look at the mechanics of mounting a filesystem and how the automounter inserts itself into that process.
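Before diving into the mechanics, it's worth knowing what to look for on a running client. The following is a minimal sketch, assuming a Solaris 2.3 or later system; exact output varies by release:

<font face="Courier"># the user-level half: the automountd daemon should be running
ps -e | grep automountd

# the kernel-resident half: the autofs module should be loaded
modinfo | grep autofs
</font>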

The key trick employed by the automounter is impersonating an NFS server. The user-level automounter process, <font face="Courier">automountd</font> (called automount before Solaris 2.3), looks like a full-featured NFS server that is running on the local machine. Local NFS client code sends NFS RPC requests to this pseudo-server, where they are analyzed to determine what filesystem is needed to satisfy them. There isn't any real NFS server code in the automounter process; it knows enough about the NFS protocol to intercept the first reference to a filesystem. Once the automounter has mounted the necessary filesystem on the local machine, all future references go directly to the NFS server and bypass the automounter.

If an automounted filesystem is not used for five minutes, the automounter tries to unmount it. The unmount request isn't always successful: even when there is no on-going activity, a client process may hold a file open on the mounted filesystem or be using it as a current working directory. In most cases, though, the unmount removes an idle filesystem, typically after the client is done using the library or tools on that volume. Pruning idle mounts reduces the probability that a client gets hung when one of those servers goes down. Using the automounter encourages mount point proliferation, but you want as few of them active as possible at any one time. Consider the following scenario: a user accesses a copy of Frame for an hour on Monday, mounting the Frame distribution on her local machine. On Thursday, the publication tools NFS server containing Frame crashes, taking your user's desktop into a zombie existence filled with "NFS server not responding" messages. Your user is livid because she hasn't touched that server in three days. With the automounter, the Frame distribution would likely have been unmounted sometime late on Monday, making the crash of the NFS server immaterial to the user's desktop.
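The five-minute figure is just the automounter's default idle timeout. If it doesn't suit your environment, the automount command accepts a duration argument; here's a hedged example (check automount(1M) on your release, since the default and the exact option have shifted over time):

<font face="Courier"># try to unmount idle filesystems after 10 minutes (600 seconds)
# instead of the default
automount -t 600
</font>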

To see how the automounter knows when to get involved, we need to delve deeper into dynamic mount information maintained by the kernel.

Sermon on the mount

NFS uses an opaque pointer to a file called a file handle, so one of the first references to a file system has to be an NFS request that discovers a file handle or asks for statistics on a known handle such as the root of the mount point. Put another way, the first NFS request made on a newly mounted filesystem is going to be a getattr, lookup, or readdir. The automounter intercepts these requests, completes the mount, and then passes the request on to the remote server. All future requests reference file handles and information derived from the remote server, so they no longer need to bother with the automounter. If you think the hand-off between client NFS code, user-level automounter daemon, mount completion, and request forwarding sounds painfully slow, it is. One of the main reasons the automounter's performance is tolerable is that it only gets involved in first-time requests and mounts, not every NFS request.
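If you want to watch this hand-off yourself, Solaris's snoop utility can capture the NFS RPC traffic; the initial getattr/lookup/readdir on a fresh mount point, followed by requests flowing straight to the real server, is the automounter at work. A rough sketch (run as root; /tmp/nfs.cap is just an example capture file name):

<font face="Courier"># capture NFS RPC traffic to a file, then browse it at leisure
snoop -o /tmp/nfs.cap rpc nfs
snoop -i /tmp/nfs.cap | more
</font>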

When any filesystem is mounted, an entry is created in /etc/mnttab to indicate the mapping between a physical resource, such as a disk or a remote server specification, and the local filesystem mount point:

<font face="Courier">bigboy:/export/tools		/tools	nfs	rw,intr,nosuid,dev=2200025	827184583
auto_home			/home	autofs	ignore,indirect,rw,intr,nosuid,dev=2180002	826900214
</font>

The last entry in the example was put into /etc/mnttab by the automounter. The automounter advertises the parts of the name space it's managing by making mount table entries that resemble real-life NFS servers, with a few minor differences:

  • The entry is of type "autofs". In earlier versions of Solaris, you'd see the type listed as "nfs" with an explicit port number given ("port=743"). By default, NFS requests go to port 2049 on the server, which would be the local machine's NFS RPC server. The number shown in the mount table entry is the port on which the automounter is listening for requests. Entries of type "autofs" don't need a port number listed, since the kernel filesystem handler takes care of communication with the automounter daemon.

  • Instead of a server:path combination, there's an automounter map file named in the "raw device" area, telling you what configuration file is associated with that line in the mount table. Maps, discussed in detail below, are the automounter's guide to resources that it manages and the name space in which they are presented. Having the map name in the /etc/mnttab file is a useful debugging clue (see the quick check after this list), in case you find filesystems appearing in places you had not intended.

  • The map type, either indirect or direct, tells you whether the automounter-managed entry is a single directory on which a remote filesystem is mounted (direct), or a directory of mount points (indirect).
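Given those conventions, a quick way to see which parts of the name space the automounter is managing at any moment, and which maps are behind them, is simply to look for the autofs entries:

<font face="Courier"># which maps and mount points is the automounter managing right now?
grep autofs /etc/mnttab
</font>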

Let's walk through a typical NFS request, and then one in which the automounter does its thing. From the first sample /etc/mnttab line, we know that /tools is already mounted from server bigboy via NFS, so an attempt to open /tools/frame/bin/maker begins with the local client looking up frame in /tools. Seeing that /tools is a mount point, the kernel bundles up the lookup operation in an NFS request that is sent to server bigboy. If the directory exists, a file handle is returned, and the lookup process continues with the next pathname component.

Now look at the second /etc/mnttab entry, for the automounted /home directory. This is an indirect map, meaning that the mount points occur in the /home directory as opposed to on it. When the first reference is made to any directory in /home, such as /home/stern/pubs, the kernel again starts with a lookup of the first component. In this case, it's finding stern in /home. Since /home is a mount point, the request is sent to the server specification in /etc/mnttab, namely, the autofs filesystem. Again, in earlier versions of Solaris, the request would be sent directly to the automounter daemon, using the port number listed in the /etc/mnttab file.

Whether it's via autofs or a direct call, the user-level automounter daemon leaps into action at this point, deciphering the NFS lookup RPC. To find the filesystem to be mounted on /home/stern, the automounter consults the map file for /home, matches the appropriate configuration line, and completes the mount of sunrise:/export/stern on /home/stern. The NFS lookup RPC is passed on to sunrise, and the new mount is reflected in /etc/mnttab so that future requests and references go directly to the NFS server. Note that other first-time references to subdirectories of /home will again tickle the automounter; if the user on that machine wants to look at /home/sue/archives, then the automounter will intercept the lookup of sue in /home and complete another NFS mount. If the user leaves /home/sue inactive for five minutes, the automounter unmounts it, removing the entry from /etc/mnttab.
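For reference, the indirect map consulted for /home in this example might look something like the sketch below. The stern entry matches the sunrise:/export/stern mount described above; the entry for sue, and the server name behind it, are assumptions added purely for illustration.

<font face="Courier"># auto_home -- indirect map: keys are subdirectories of /home
# (sue's server, "sunup", is hypothetical; stern's entry matches the text)
stern     -rw,intr     sunrise:/export/stern
sue       -rw,intr     sunup:/export/sue
</font>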

It's easy to see how the automounter enforces a "working set" methodology on NFS mounts -- when a filesystem is needed, it gets mounted, and when it's quiescent, it's unmounted. It sounds simple only because this description overlooks the difficult system administration task involved in getting the automounter running well -- creating the map entries that describe your filesystem name space.

Network cartography

Maps come in two basic flavors: direct and indirect. A direct map describes a set of unrelated mounts such as /usr/share/man and /var/mail. The automounter manages the directories on which the individual mounts are placed, making a direct map the simple equivalent of a series of /etc/vfstab lines. A direct map has the following format:

<font face="Courier">/var/mail       mailhub:/var/mail
/usr/share/man  -ro shareserver:/usr/share/man
/usr/local/bin  -ro shareserver:/usr/local/bin
</font>

Note that each line starts with an absolute pathname, and that any necessary NFS mount options are included mid-line, as they would be in /etc/vfstab. Direct maps are useful for the handful of mounts that you want on every desktop, but have no regularity in their naming or location.
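A direct map does nothing until the automounter is told about it, which normally happens through the master map (on Solaris, /etc/auto_master). The sketch below assumes the direct map above was saved as /etc/auto_direct; that file name is our choice for illustration, not a requirement:

<font face="Courier"># /etc/auto_master -- the /- key designates a direct map
/-        /etc/auto_direct
/home     auto_home
</font>

Also note that after editing a direct map you generally need to re-run the automount command so the new mount points are installed; running it with -v (verbose) reports what was mounted or unmounted.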
