A third in fourths: An NFS potpourri

Unix Insider

We've gently eased into the summer months (or winter doldrums for our Southern readers) with a two-month discussion of the automounter and its configuration. By now we hope that you can build maps, install and customize them to your heart's content, and perhaps you've justified the need to change every single client configuration to reach administrative harmonic convergence. Then again, if you're like the 99 percent of our readers who inhabit the real world (we're convinced there are some 'bots who digest each month's offering), you've run into some last little detail separating automounted bliss from true nirvana.

To equip you with the powers to perform the more spiritual acts of system management -- those that actually elicit kind words from impressed users -- we'll take this third pass through the automounter and NFS. We'll start by getting stopped, looking at the various ways the automounter can get stuck, and what you can do to repair the situations. From automounters that don't work we'll move on to those that work too well, giving root-enabled users access to home directories and mail spools of unsuspecting users. We'll introduce NFS authentication to get you started on the path to a solution and take a peek at WebNFS, Sun's filesystem for the Internet. Our final section deals with auto-sharing and automounting removable media like CD-ROMs. Armed with a four-corner offensive arsenal of expert tricks, you should be prepared for anything the NFS-ready masses throw your way.

Tracing the mystery: Debugging mount attempts

Any system with hidden moving parts is bound to break. Create some complex automounter maps and you're likely to find unexpected behavior or get unintended filesystem mounts showing up in less than desirable places. The automounter has two debugging flags, -T and -v, enabling request tracing and verbose logging of mount requests, respectively. When the -v flag is supplied on the automountd command line, each request is sent to syslog. With a single -T on the command line, you'll get significant information about the inner workings of the automounter:

<font face="Courier">	MOUNT REQUEST: name=/var/mail map=auto.direct
	opts=ro,intr,nosuid path=/var/mail
  		PUSH /etc/auto.direct
  		POP /etc/auto.direct
  		mapname = auto.direct, key = /var/mail
     			(nfs,nfs) / 	-rw,intr,nosuid,actimeo=0	sunrise:/var/mail 
  		nfsmount: sunrise:/var/mail /var/mail rw,intr,nosuid,actimeo=0
  		ping sunrise OK
  		mount sunrise:/var/mail /var/mail (rw,intr,nosuid,actimeo=0)
  		mount sunrise:/var/mail OK
	MOUNT REPLY    : status=0
</font>

Supply a double trace flag (-T -T) and you get detail targeted at those with source code.
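For the record, a full restart with debugging enabled looks something like this. Treat it as a sketch only: the daemon's path and the exact flag spellings vary across SunOS and Solaris releases, so check the automountd man page on your system, and the pid shown is illustrative.

	# ps -ef | grep automountd          # find the running daemon's pid
	# kill 203                          # stop it (risky with active mounts)
	# /usr/lib/autofs/automountd -v -T  # restart with verbose logging and simple tracing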

Killing the automounter and restarting it with a plethora of flags isn't conducive to rapid problem solving or user satisfaction. If you have a variety of volumes mounted, it may be hard to kill the automounter gracefully and have it start up again without side effects. Fortunately, there's a back door into setting the debug options. When the automounter sees a lookup request for a file name starting with an equal sign (=), it parses the name to see if it matches one of the following settings:

<font face="Courier">	=v	Toggle -v on/off
	=0	Turn off all tracing
	=1	Set -T (simple trace)
	=2	Set -T -T (advanced trace)
</font>

This trick only works for indirect mount points, where the automounter would match the file name component to a map key. For example, turn on verbose mount request logging and simple tracing by performing two "ls" operations on the appropriate files in /net (a default indirect map):

<font face="Courier">	# ls /net/=v
	# ls /net/=1
</font>

Only the superuser can toggle the debugging flags. If you're using /home as an indirect automounter mount point, you could just as easily have done:

<font face="Courier">	# ls /home/=v /home/=1
</font>

What kind of problems are you likely to uncover? One of the more typical misbehaviors stems from the * wildcard key being substituted into the server specification, producing an illegal server name or an invalid server path name. For example, say you use a default entry in the auto.home map like:

<font face="Courier">	*	&:/export/home
</font>

If a user accidentally uses /home/fred instead of /home/bigboy/fred, an attempt is made to mount fred:/export/home. You may have a machine with that name, producing unexpected mount results, or the automounter may fail trying to resolve the host name.

If you see messages complaining "Can't remount," check for overlapping map entries. You may be mounting one filesystem on top of part of another, making it impossible to unmount the underlying directory after it has reached quiescence. Check hierarchical maps and those with variables or wildcards carefully to make sure the name spaces of each map remain independent. This error is also common with subdirectory mounts where the actual mount point changes depending upon the order in which the subdirectories are accessed.
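As an illustration of the kind of overlap to look for (the server and path names here are invented), consider a direct map in which one entry mounts inside another entry's tree:

	/usr/local		toolbox:/export/local
	/usr/local/bin	toolbox:/export/bin

Once /usr/local/bin is mounted, /usr/local is busy and can't be unmounted, so the automounter's attempt to reclaim and remount the outer mount point fails.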

Auto pileups: Unsticking the automounter

By far the most common problem occurs when the automounter gets stuck waiting for a server that has crashed, or when the daemon is unable to resolve a mount point due to name server, network loading, or map syntax problems. The automounter emulates an NFS server, so when it gets stuck or stops processing requests, the user-visible symptoms are the same as for a crashed NFS server: "server not responding" messages in the console window and a general lockup of processes accessing the failed server. Instead of seeing a server name, however, you'll see the process id of the automounter daemon:

<font face="Courier">	pid203@/home not responding, still trying
</font>

The automounter is single-threaded, so it processes only one mount request at a time. Subsequent requests for the automounter's services are routinely ignored until the current mount operation completes. If the mount in progress stalls, other processes talking to the automounter have their requests dropped, just as an overloaded server drops incoming NFS RPC packets.

From the point of view of the process that tickled the automounter, an NFS server (the one running on the local machine) has crashed, so the standard NFS warning is displayed. The automounter may be having trouble talking to a server that crashed, or it may be encountering NIS/DNS problems that lead to high latency in host name resolution, routing problems that prevent mount requests from reaching the server, or other variations of lossy networks.

You can immediately tell which map is causing the problem. But if you want to see the key in the map that is leading to the hang, you'll need to turn on debugging before you coax the automounter into a compromising position. It's possible that the automounter has gotten wedged because it can't get enough CPU to process the map files and send off an NFS mount RPC. Remember that the automounter daemon is a user-level process, although it spends most of its life making system calls. Be sure your system isn't suffering from runaway real-time processes or sources of high-level interrupts, like serial line input, that could prevent a user-level process from completing a series of system calls. In most cases, however, automounter pileups are caused by an unresponsive server or name service problems that prevent the daemon from converting map entries into valid server IP address and path name specifications.
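A quick sanity check, assuming Solaris and taking the pid from the "pidNNN@/home" console message, is to confirm that the daemon is getting CPU and see where it's blocked:

	# ps -p 203 -o pid,s,time,comm      # is automountd runnable or asleep in a wait?
	# truss -p 203                      # attach and watch its system calls (Solaris)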

Early versions of the automounter could be lulled to sleep faster than some infant children. In response to bugs, and the need for non-Sun automounter functionality, Jan-Simon Pendry wrote the publicly available amd. A code archive is available via FTP from ftp.acl.lanl.gov:/pub/amd, and an archive of amd notes, questions, and bug fixes is available at the University of Maryland. One of the nicer features of amd is its keep-alive mount point: the automounter pings the currently mounted server from a replicated set until it finds that it no longer gets a response. When the server has ostensibly died, amd renames the old mount point so that future references cause a new mount (of another server) to be completed. amd tries its best never to block a user-level process.

Does this make amd better than the Solaris automounter? The keep-alive feature helps when you need to start a new series of processes that rely on a dead mount point. The new requests cause new mounts to be completed on the new staging area, while the old mount point (and processes waiting on it) spin quietly in the background. Neither amd nor the automounter helps you if you need to recover processes waiting on a crashed server. Those processes go into disk wait and can't be cleaned up easily. To get an inventory of processes using a particular mount point, use the lsof utility to locate processes with open files on the automounted directory hierarchy.
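For example (the paths are illustrative, and lsof is a third-party tool you may need to install):

	# lsof /home/bigboy           # every process with a file open on that filesystem
	# lsof +D /home/bigboy/fred   # or walk just one subtree (slower)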

Averting the evil eyes: Using stronger authentication

Sometimes the automounter works too well. A user has root access on her own machine, so she creates another login (for you) on her machine with a trivial password. A quick "su" to root, and then an "su fred", and voila! -- she's now browsing your home directory, reading your mail and generally enjoying the privileges of membership in your e-life. How do you prevent users from accessing others' files when they can basically do what they want as root? Gain a stronger measure of control using Secure NFS, Sun's layering of strong authentication on top of the NFS V2 and V3 protocols.

Secure NFS uses Secure RPC, the same mechanism used by NIS+ to authenticate client requests. Secure RPC attaches a verifier to the credentials for each RPC request. The verifier includes the user's credentials and a timestamp to prevent request replays. The verifier is encrypted with a session key unique to each user-host pair. How do the NFS server and the user's client determine the session key? A large random value is chosen as the key and is passed from the client to the server using public key encryption. The server and user keys are maintained in an NIS or NIS+ table. The public key is left in plaintext, and the private key is encrypted using the user's (or root's, in the case of a server) login password.
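You can inspect a user's published key pair directly; under NIS it lives in the publickey.byname map, keyed by a netname of the form unix.UID@domain (the uid and domain below are invented):

	% ypmatch unix.1001@eng.example.com publickey.byname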

To complete the key exchange, the user needs to decrypt her private key using her login password. If the correct login password isn't provided, the private key can't be decrypted, and the user cannot be authenticated to the server. Nosy users who create duplicate, fake logins can't gain access to the spoofed users' NFS-mounted files because they can't decrypt the user's private key. See the man pages for publickey, newkey, and chkey for more information on generating the public key pairs, building the NIS and NIS+ tables, and setting up server keys.
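A plausible setup sequence under NIS, with an invented user name, runs along these lines; the details depend on your name service, so treat this as a sketch and consult the man pages above:

	# newkey -u fred        # root, on the NIS master: create and publish fred's key pair
	% chkey                 # fred: re-encrypt the private key after a password change
	% keylogin              # fred: decrypt the private key and hand it to the keyserver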

Turn on Secure NFS using the secure mount option in /etc/vfstab or in the automounter map and by sharing (or exporting) the filesystem with the secure option as well. On the client side, the secure flag tells the kernel RPC code that it needs to supply encrypted verifiers and credentials with each request. On the server side, the option ensures that credentials are verified before allowing access to the file. User credentials that have no verifier or can't be decrypted properly are treated as null credentials, giving the user the same permissions as nobody.
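Sketched out, with invented paths and map contents, the two halves might look like this:

	# Server side: require Secure RPC for this filesystem
	# (an /etc/dfs/dfstab entry on Solaris):
	share -F nfs -o secure /export/home

	# Client side: request secure mounts in the auto.home map:
	*	-rw,secure	&:/export/home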

Here are some other warnings and advisories to help you ward off evil prowling eyes on your network:
