■■ Hide filesystems beneath— This option corresponds to the hide option listed in Table 12-1.
■■ Export only if mounted— This option corresponds to the mp[=path] option listed in Table 12-1. Selecting this option is equivalent to specifying the mp mount option without the optional path mount point.
■■ Optional mount point— This option corresponds to the path portion of the mp[=path] option listed in Table 12-1. You can type the mount point you want to specify in the text box or use the Browse button to select the mount point graphically.
■■ Set explicit Filesystem ID— This option corresponds to the fsid=n option listed in Table 12-1. Enter the actual FSID value in the text box.

Figure 12-4 shows the General Options tab. We have disabled subtree checking for /home and left the required sync option (Sync write operations on request) enabled.
The User Access tab, shown in Figure 12-5, implements the UID/GID remapping and root-squashing options described earlier in this chapter. Select the Treat remote root user as local root user check box if you want the equivalent of no_root_squash. To remap all UIDs and GIDs to the UID and GID of the anonymous user (the all_squash option from Table 12-1), select the Treat all client users as anonymous users check box. As you might guess, if you want to specify the anonymous UID or GID, click the corresponding check boxes to enable these options and then type the desired value in the matching text boxes. In Figure 12-5, all clients will be remapped to the anonymous user. Figure 12-5 shows the User Access tab as it appears in Fedora Core; it looks slightly different in RHEL.
Figure 12-4 The General Options tab.
Figure 12-5 The User Access tab.
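Under the hood, the choices made in the tool correspond to options in /etc/exports. As a rough, hypothetical sketch only (the directory, client specification, and anonymous UID/GID values are assumptions, not necessarily what the tool writes verbatim), settings like those in Figure 12-5 would map to an entry along these lines:

/home 192.168.0.0/255.255.255.0(rw,sync,all_squash,anonuid=65534,anongid=65534)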
When you have finished configuring your new NFS export, click the OK button to close the Add NFS Share dialog box. After a short pause, the new NFS share appears in the list of NFS exports, as shown in Figure 12-6. If you want to change the characteristics of an NFS share, select the share you want to modify and click the Properties button on the toolbar. This opens the Edit NFS Share dialog box, which has the same interface as the Add NFS Share dialog box.

Similarly, if you want to remove an NFS share, select the export you want to cancel and click the Delete button. To close the NFS Server Configuration tool, type Ctrl+Q or click File ➪ Quit on the menu bar.
Figure 12-6 Adding an NFS share.
Configuring an NFS Client
Configuring client systems to mount NFS exports is simpler than configuring the NFS server itself. This section of the chapter provides a brief overview of client configuration, identifies the key files and commands involved in configuring and mounting NFS exported file systems, and shows you how to configure a client to access the NFS exports configured in the previous section. Configuring a client system to use NFS involves making sure that the portmapper and the NFS file locking daemons statd and lockd are available, adding entries to the client's /etc/fstab for the NFS exports, and mounting the exports using the mount command.
As explained at the beginning of the chapter, a mounted NFS exported file system is functionally equivalent to a local file system. Thus, as you might expect, you can use the mount command at the command line to mount NFS exports manually, just as you might mount a local file system. Similarly, to mount NFS exports at boot time, you just add entries to the file system mount table, /etc/fstab. As you will see in the section titled "Using Automount Services" at the end of this chapter, you can even mount NFS file systems automatically when they are first used, without having to mount them manually. The service that provides this feature is called, yup, you guessed it, the automounter. More on the automounter in a moment.

As a networked file system, NFS is sensitive to network conditions, so the NFS client daemons accept a few options, passed via the mount command, that address NFS's sensitivities and peculiarities. Table 12-4 lists the major NFS-specific options that mount accepts. For a complete list and discussion of all NFS-specific options, see the NFS manual page (man nfs).
Table 12-4 NFS-Specific Mount Options
OPTION          DESCRIPTION

bg              Enables mount attempts to run in the background if the first mount attempt times out (disable with nobg).

fg              Causes mount attempts to run in the foreground if the first mount attempt times out, the default behavior (disable with nofg).

hard            Enables failed NFS file operations to continue retrying after reporting "server not responding" on the system, the default behavior (disable with nohard).

intr            Allows signals (such as Ctrl+C) to interrupt a failed NFS file operation if the file system is mounted with the hard option (disable with nointr). Has no effect unless the hard option is also specified or if soft or nohard is specified.

lock            Enables NFS locking and starts the statd and lockd daemons (disable with nolock).

mounthost=name  Sets the name of the server running mountd to name.

mountport=n     Sets the mountd server port to connect to n (no default).

nfsvers=n       Specifies the NFS protocol version to use, where n is 1, 2, 3, or 4.

port=n          Sets the NFS server port to which to connect to n (the default is 2049).

posix           Mounts the export using POSIX semantics so that the POSIX pathconf command will work properly.

retry=n         Sets the time to retry a mount operation before giving up to n minutes (the default is 10,000).

rsize=n         Sets the NFS read buffer size to n bytes (the default is 1024); for NFSv4, the default value is 8192.

soft            Allows an NFS file operation to fail and terminate (disable with nosoft).

tcp             Mounts the NFS file system using the TCP protocol (disable with notcp).

timeo=n         Sets the RPC transmission timeout to n tenths of a second (the default is 7). Especially useful with the soft mount option.

udp             Mounts the NFS file system using the UDP protocol, the default behavior (disable with noudp).

wsize=n         Sets the NFS write buffer size to n bytes (the default is 1024); for NFSv4, the default value is 8192.
The options you are most likely to use are rsize, wsize, hard, intr, and nolock. Increasing the default size of the NFS read and write buffers improves NFS's performance. The suggested value is 8192 bytes, that is, rsize=8192 and wsize=8192, but you might find that you get better performance with larger or smaller values. The nolock option can also improve performance because it eliminates the overhead of file locking calls, but not all servers support file locking over NFS. If an NFS file operation fails, you can use a keyboard interrupt, usually Ctrl+C, to interrupt the operation if the exported file system was mounted with both the intr and hard options. This prevents NFS clients from hanging.
Like an NFS server, an NFS client needs the portmapper running in order to process and route RPC calls and returns from the server to the appropriate port and programs. Accordingly, make sure that the portmapper is running on the client system using the portmap initialization script:

# service portmap status

If the output says portmap is stopped (it shouldn't be), start the portmapper:

# service portmap start
To use NFS file locking, both an NFS server and any NFS clients need to run statd and lockd. As explained in the section on configuring an NFS server, the simplest way to accomplish this is to use the initialization script, /etc/rc.d/init.d/nfslock. Presumably, you have already started nfslock on the server, so all that remains is to start it on the client system:

# service nfslock start
Once you have configured the mount table and started the requisite daemons, all you need to do is mount the file systems. You learned about the mount command used to mount file systems in a previous chapter, so this section shows only the mount invocations needed to mount NFS file systems. During initial configuration and testing, it is easiest to mount and unmount NFS exports at the command line. For example, to mount /home from the server configured at the end of the previous section, execute the following command as root:

# mount -t nfs bubba:/home /home

You can, if you wish, specify client mount options using mount's -o argument, as shown in the following example:

# mount -t nfs bubba:/home /home -o rsize=8192,wsize=8192,hard,intr,nolock
After satisfying yourself that the configuration works properly, you probably want to mount the exports at boot time. Fortunately, Fedora Core and RHEL make this easy because the initialization script /etc/rc.d/init.d/netfs, which runs at boot time, automatically mounts all networked file systems not configured with the noauto option, including NFS file systems. It does this by parsing /etc/fstab looking for file systems of type nfs, nfs4 (described in the next section), smbfs (Samba), cifs (Common Internet File System), or ncpfs (NetWare) and mounting those file systems.
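As a quick sketch of what netfs looks for, the /etc/fstab entries below reuse the bubba:/home export from the earlier examples; the second entry and its noauto option are hypothetical, included only to show an export that netfs would skip at boot time:

bubba:/home       /home       nfs  rsize=8192,wsize=8192,hard,intr  0 0
bubba:/usr/local  /usr/local  nfs  noauto,rsize=8192,wsize=8192     0 0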
TIP  If you are connecting an NFSv4 client to an NFSv2 server, you must use the mount option nfsvers=2 or the mount attempt will fail. Use nfsvers=1 if you are connecting to an NFSv1 server. We learned this the hard way while trying to mount an export from an ancient server running Red Hat Linux 6.2 (we told you it was ancient). We kept getting an error indicating the server was down when we knew it wasn't. Finally, we logged into the server, discovered it was running a very old distribution, and were able to mount the export. While we're somewhat embarrassed to be running such an old version of Red Hat, we're also quite pleased to report that it has been running so well for so long that we forgot just how old it was.
Configuring an NFSv4 Client
The introduction of NFSv4 into the kernel added some NFSv4-specific behavior of which you need to be aware and changed some of the mount options. This section covers NFSv4-specific features and begins with the mount options that have changed in terms of their meaning or behavior. Table 12-5 lists the new or changed mount options.

The two new options listed in Table 12-5 are clientaddr and proto. Version 3 of NFS introduced NFS over TCP, which improved NFS's reliability over the older UDP-based implementation. Under NFSv3, you would use the mount option tcp or udp to specify to the client whether you wanted it to use TCP or UDP to communicate with the server. NFSv4 replaces tcp and udp with a single option, proto=, that accepts two arguments: tcp or udp. In case it isn't clear, the NFSv3 option tcp is equivalent to the NFSv4 option proto=tcp. Figuring out the udp option is left as an exercise for the reader.

Table 12-5 NFSv4-Specific Mount Options
OPTION        DESCRIPTION

clientaddr=n  Causes a client on a multi-homed system to use the IP address specified by n to communicate with an NFSv4 server.

proto=type    Tells the client to use the network protocol specified by type, which can be tcp or udp (the default is udp); this option replaces the tcp and udp options from earlier versions of NFS.

rsize=n       Sets the read buffer size to n bytes (the default for NFSv4 is 8192); the maximum value is 32768.

sec=mode      Sets the security model to mode, which can be sys, krb5, krb5i, or krb5p.

wsize=n       Sets the write buffer size to n bytes (the default for NFSv4 is 8192); the maximum value is 32768.
The semantics for the rsize and wsize options have changed with NFSv4. The default buffer size for NFSv4 is 8192 bytes, but it can grow to as large as 32,768 bytes, which should result in a noticeable performance improvement, especially when you are transferring large files. The buffer setting is only a suggestion, however, because the client and server negotiate the buffer size to select an optimal value according to network conditions.

Strictly speaking, the sec option for selecting the security model NFS uses isn't new with NFSv4. It existed in NFSv3, but now that NFSv4 has added strong encryption to the core NFS protocol, using this option is worthwhile. As shown in Table 12-5, legal values for the sec option are sys, krb5, krb5i, and krb5p. sys, the default security model, uses standard Linux UIDs and GIDs to authenticate NFS transactions. krb5 uses Kerberos 5 to authenticate users but takes no special measures to validate NFS transactions; krb5i (Kerberos 5 with integrity checking) uses Kerberos 5 to authenticate users and checksums to enforce data integrity on NFS transactions; krb5p (Kerberos 5 with privacy checking) uses Kerberos 5 to authenticate users and encryption to protect NFS transactions against packet sniffing. You can use the various Kerberos-enabled security models only if the NFS server supports both NFSv4 and the requested security model.
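To make these options concrete, here is a hedged example of an NFSv4 mount; the server name bubba and the use of the server's pseudo-root (/) are assumptions carried over from the earlier examples, and sec=krb5 works only if the server has been set up for Kerberos:

# mount -t nfs4 bubba:/ /mnt/nfs4 -o proto=tcp,rsize=32768,wsize=32768,sec=krb5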
1. Add entries for the exports to the client's /etc/fstab. For example, to mount /home from the server bubba using the options suggested earlier, the entry looks like this:

   bubba:/home  /home  nfs  rsize=8192,wsize=8192,hard,intr,nolock  0 0

   The hostname used on the left side of the colon, bubba, must resolve to an IP address either using DNS or an entry in the /etc/hosts file. We don't recommend using an IP address because, in a well-run system, IP addresses can change, whereas a hostname won't. If DNS is properly configured and maintained, the hostname will always point to the proper system regardless of what that system's IP address is at any given time.
2 If it isn’t already running, start the portmapper using the followingcommand:
# service portmap start Starting portmapper: [ OK ]
3. Mount the exports using one of the following commands:

   # mount -a -t nfs

   or

   # mount /home /usr/local

   or

   # service netfs start

   The first command mounts all (-a) file systems of type nfs (-t nfs). The second command mounts only the file systems /home and /usr/local (for this command to work, the file systems you want to mount must be listed in /etc/fstab). The third command uses the service command to mount all network file systems by invoking the netfs service. Verify that the mounts completed successfully by attempting to access files on each file system. If everything works as designed, you are ready to go.
If all the preceding seems unnecessarily tedious, it only seems that way because it is more involved to explain how to set up an NFS client than it is actually to do it. Once you've done it a couple of times, you'll be able to dazzle your friends and impress your coworkers with your wizardly mastery of NFS. You can really wow them after reading the next section, which shows you how to avoid the tedium by using the automounter to mount file systems automatically the first time you use them.

Using Automount Services
The easiest way for client systems to mount NFS exports is to use autofs, which automatically mounts file systems not already mounted when the file system is first accessed. autofs uses the automount daemon to mount and unmount file systems that automount has been configured to control. Although slightly more involved to configure than the other methods for mounting NFS file systems, autofs setup has to be done only once. In the next chapter, you'll even learn how to distribute automounter configuration files from a central server, obviating the need to touch client systems manually at all.
autofs uses a set of map files to control automounting. A master map file, /etc/auto.master, associates mount points with secondary map files. The secondary map files, in turn, control the file systems mounted under the corresponding mount points. For example, consider the following /etc/auto.master autofs configuration file:

/home  /etc/auto.home
/var   /etc/auto.var   timeout 600

This file associates the secondary map file /etc/auto.home with the mount point /home and the map file /etc/auto.var with the /var mount point. Thus, /etc/auto.home defines the file systems mounted under /home, and /etc/auto.var defines the file systems mounted under /var.
Each entry in /etc/auto.master, what we'll refer to as the master map file, consists of at least two and possibly three fields. The first field is the mount point. The second field identifies the full path to the secondary map file that controls the mount point. The third field, which is optional, consists of options that control the behavior of the automount daemon.

In the example master map file, the automount option for the /var mount point is timeout 600, which means that after 600 seconds (10 minutes) of inactivity, the /var mount point will be unmounted automatically. If a timeout value is not specified, it defaults to 300 seconds (5 minutes).
The secondary map file defines the mount options that apply to file systems mounted under the corresponding directory. Each line in a secondary map file has the general form:

localdir [-[options]] remotefs

localdir refers to the directory beneath the mount point where the NFS mount will be mounted. remotefs specifies the host and pathname of the NFS mount. remotefs is specified using the host:/path/name format described in the previous section. options, if specified, is a comma-separated list of mount options. These options are the same options you would use with the mount command.
Given the entry /home /etc/auto.home in the master map file, consider the following entries in /etc/auto.home:

kurt   -rw,soft,intr,rsize=8192,wsize=8192   luther:/home/kurt
terry  luther:/home/terry
In the first line, localdir is kurt, options is -rw,soft,intr,rsize=8192,wsize=8192, and remotefs is luther:/home/kurt. This means that the NFS export /home/kurt on the system named luther will be mounted at /home/kurt in read-write mode, as a soft mount, with read and write buffer sizes of 8192 bytes. A key point to keep in mind is that if /home/kurt exists on the local system, its contents will be temporarily replaced by the contents of the NFS mount /home/kurt. In fact, it is probably best if the directory specified by localdir does not exist because autofs dynamically creates it when it is first accessed.
The second line of the example auto.home file specifies localdir as terry, no options, and remotefs as the NFS exported directory /home/terry exported from the system named luther. In this case, then, /home/terry on luther will be mounted as /home/terry on the NFS client using the default NFS mount options. Again, /home/terry should not exist on the local system, but the base directory, /home, should exist.
Suppose that you want to use autofs to mount a shared projects directory named /proj on client systems on the /projects mount point. On the NFS server (named diskbeast in this case), you would export /proj as described in the section "Configuring an NFS Server." On each client that will mount this export, create an /etc/auto.master file that resembles the following:

/projects  /etc/auto.projects  timeout 1800
This entry tells the automount daemon to consult the secondary map file /etc/auto.projects for all mounts located under /projects. After 1800 seconds without file system activity in /projects, autofs will automatically unmount it.
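For reference, the matching export on diskbeast might look something like the following sketch; the client network is an assumption, so substitute the hosts or network you actually serve:

/proj  192.168.0.0/255.255.255.0(rw,sync,root_squash)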
NOTE  If the autofs RPM is installed, Fedora Core and RHEL systems provide a default /etc/auto.master map file. All of the entries are commented out using the # sign, so you can edit the existing file if you wish.
Next, create the following /etc/auto.projects file on each client that will use diskbeast's export:

code  -rw,soft,rsize=8192,wsize=8192  diskbeast:/proj

This entry mounts /proj from diskbeast as /projects/code on the client system. The mount options indicate that the directory will be read/write, that it will be a soft mount, and that the read and write block sizes are 8192 bytes. Recall from Table 12-4 that a soft mount means that the kernel can time out the mount operation after a period of time specified by the timeo=n option, where n is defined in tenths of a second.
Finally, as the root user, start the autofs service:
# /sbin/service autofs start
Starting automount: [ OK ]
After starting the autofs service, you can use the status option to verify that the automount daemon is working:

# /sbin/service autofs status
Configured Mount Points:
------------------------
/usr/sbin/automount timeout 600 /projects file /etc/auto.projects

Active Mount Points:
--------------------
/usr/sbin/automount timeout 600 /projects file /etc/auto.projects

As you can see under the heading Active Mount Points, the /projects mount point is active. You can verify this by changing to the /projects/code directory and executing an ls command:
# cd /projects/code
# ls
3c501.c   atp.c          fmv18x.c      net_init.c   smc9194.c
3c503.c   au1000_eth.c   gmac.c        ni5010.c     smc-mca.c
3c505.c   auto_irq.c     gt96100eth.c  ni52.c       smc-ultra32.c
3c507.c   bagetlance.c   hamachi.c     ni65.c       smc-ultra.c
3c509.c   bmac.c         hp100.c       ns83820.c    sonic.c
You can also see the automount daemon at work by using the mount command, which lists the automounted file system along with the options used to mount it.

To stop the automounter, use the service script's stop argument:

# /sbin/service autofs stop

If you change the map files while the automounter is running, use the reload argument to make it reread them:

# /sbin/service autofs reload
Examining NFS Security

As explained at the beginning of the chapter, NFS protocol versions 3 and older have some inherent security problems that make it unsuitable for use across the Internet and potentially unsafe for use even in a trusted network. This section identifies key security issues of NFS in general and the security risks specific to an NFS server and to NFS clients and suggests remedies that minimize your network's exposure to these security risks. Be forewarned, however, that no list of security tips, however comprehensive, makes your site completely secure. Nor will plugging possible NFS security holes address other potential exploits.
General NFS Security Issues
One NFS weakness, in general terms, is the /etc/exports file. If a cracker is able to spoof or take over a trusted address, an address listed in /etc/exports, your exported NFS mounts are accessible. Another NFS weak spot is the normal Linux file system access controls that take over once a client has mounted an NFS export: Once an NFS export has been mounted, normal user and group permissions on the files take over access control.

The first line of defense against these two weaknesses is to use host access control as described earlier in the chapter to limit access to services on your system, particularly the portmapper, which has long been a target of exploit attempts. Similarly, you should add entries in /etc/hosts.deny for lockd, statd, mountd, and rquotad.
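A minimal sketch of the corresponding TCP wrappers entries follows; the trusted network in /etc/hosts.allow is an assumption, so substitute the hosts you actually trust:

/etc/hosts.deny:
portmap: ALL
lockd: ALL
statd: ALL
mountd: ALL
rquotad: ALL

/etc/hosts.allow:
portmap: 192.168.0.0/255.255.255.0
lockd: 192.168.0.0/255.255.255.0
statd: 192.168.0.0/255.255.255.0
mountd: 192.168.0.0/255.255.255.0
rquotad: 192.168.0.0/255.255.255.0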
More generally, judicious use of IP packet firewalls, using netfilter, dramatically increases NFS server security. netfilter is stronger than NFS daemon-level security or even TCP wrappers because it restricts access to your server at the packet level. Although netfilter is described in detail in Chapter 34, this section gives you a few tips on how to configure a netfilter firewall that plays nicely with NFS.

First, you need to know the ports and services NFS uses so that you know where to apply the packet filters. Table 12-6 lists the ports and protocols each NFS daemon (on both the client and server side) uses.
Table 12-6 NFS Ports and Network Protocols

SERVICE    PORT       PROTOCOLS
portmap    111        TCP, UDP
nfsd       2049       TCP, UDP
mountd     variable   TCP, UDP
lockd      variable   TCP, UDP
statd      variable   TCP, UDP
rquotad    variable   TCP, UDP
NOTE  Before NFSv4, NFS over TCP was experimental on the server side, so most administrators used UDP on the server. However, TCP is quite stable on NFS clients. Nevertheless, using packet filters for both protocols on both the client and the server does no harm. NFSv4's server-side TCP code is much more stable than NFSv3's, so it is safe for deployment in a production environment.
Note that mountd, lockd, statd, and rquotad do not bind to any specific port; that is, they use a port number assigned randomly by the portmapper (which is one of the portmapper's purposes in the first place). The best way to address this variability is to assign each daemon a specific port using the daemon's -p option and then to apply the packet filter to that port. Regardless of how you configure your firewall, you must have the following rule:

iptables -A INPUT -f -j ACCEPT

This rule accepts all packet fragments except the first one (which is treated as a normal packet) because NFS does not work correctly unless you let fragmented packets through the firewall. Be sure to read Chapter 34 carefully to configure your NFS server's firewall properly.
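As a hedged illustration of that approach, the following commands pin mountd to port 32767 and open the NFS-related ports to a trusted network; the port number and the 192.168.0.0/24 network are arbitrary assumptions, and statd and rquotad would need the same treatment:

# rpc.mountd -p 32767
# iptables -A INPUT -s 192.168.0.0/24 -p tcp --dport 111 -j ACCEPT
# iptables -A INPUT -s 192.168.0.0/24 -p udp --dport 111 -j ACCEPT
# iptables -A INPUT -s 192.168.0.0/24 -p tcp --dport 2049 -j ACCEPT
# iptables -A INPUT -s 192.168.0.0/24 -p udp --dport 2049 -j ACCEPT
# iptables -A INPUT -s 192.168.0.0/24 -p tcp --dport 32767 -j ACCEPT
# iptables -A INPUT -s 192.168.0.0/24 -p udp --dport 32767 -j ACCEPT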
Server Security Considerations
On the server, always use the root_squash option in /etc/exports. NFS helps you in this regard because root squashing is the default, so you should not disable it (with no_root_squash) unless you have an extremely compelling reason to do so, such as needing to provide boot files to diskless clients. With root squashing in place, the server substitutes the UID of the anonymous user for root's UID/GID (0), meaning that a client's root account cannot change files that only the server's root account can change.

The implication of root squashing might be unclear, so we'll make it explicit: all critical binaries and files should be owned by root, not bin, wheel, adm, or another non-root account. The only account that an NFS client's root user cannot access is the server's root account, so critical files owned by root are much less exposed than if they are owned by other accounts.
It gets better, though. Consider the situation in which a user has root access on a system. In this case, exporting file systems using the all_squash option might be worth considering. A user with root access on a client can usually su to any user, and that UID will be used over NFS. Without all_squash, a compromised client can at least view and, if the file system is mounted read-write, update files owned by any user besides root if root_squash is enabled. This security hole is closed if the all_squash option is used.
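To make the distinction concrete, here is a hedged sketch of two /etc/exports entries; the directories and the client network are illustrative assumptions:

/home  192.168.0.0/255.255.255.0(rw,sync,root_squash)
/pub   192.168.0.0/255.255.255.0(ro,sync,all_squash)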
NFS also helps you maintain a secure server through the secure mount option; this mount option is one of the default options mountd applies to all exports unless it is explicitly disabled using the insecure option. Ports 1–1024 are reserved for root's use; merely mortal user accounts cannot bind these ports. Thus, ports 1–1024 are sometimes referred to as privileged or secure ports. The secure option prevents a malevolent nonroot user from initiating a spoofed NFS dialog on an unprivileged port and using it as a launch point for exploit attempts.
Client Security Considerations
On the client, disable SUID (set UID) root programs on NFS mounts using the nosuid option. The nosuid mount option prevents a server's root account from creating an SUID root program on an exported file system, logging in to the client as a normal user, and then using the SUID root program to become root on the client. In some cases, you might also disable binaries on mounted file systems using the noexec option, but this effort almost always proves to be impractical or even counterproductive because one of the benefits of NFS is sharing file systems, such as /usr or /usr/local, that contain scripts or programs that need to be executed.
TIP  You might not want to use nosuid if you are sharing system binary directories, because many things in /bin and /usr/bin will break if they are not SUID.
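As a small, hedged illustration, an /etc/fstab entry that mounts the bubba:/home export from earlier in the chapter could add nosuid to the option list (home directories rarely need SUID binaries):

bubba:/home  /home  nfs  rsize=8192,wsize=8192,hard,intr,nosuid  0 0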
NFS versions 3 and 4 support NFS file locking. Accordingly, NFS clients must run statd and lockd in order for NFS file locks to function correctly. statd and lockd, in turn, depend on the portmapper, so consider applying the same precautions for portmap, statd, and lockd on NFS clients that were suggested for the NFS server.

In summary, using TCP wrappers, the secure, root_squash, and nosuid options, and sturdy packet filters can increase the overall security of your NFS setup. However, NFS is a complex, nontrivial subsystem, so it is entirely conceivable that new bugs and exploits will be discovered.
In this chapter, you learned to configure NFS, the Network File System. First, you found a general overview of NFS, its typical uses, and its advantages and disadvantages. Next, you found out how to configure an NFS server, you identified key files and commands to use, and you saw the process with a typical real-world example. With the server configured and functioning, you then learned how to configure a client system to access NFS exported file systems, again using key configuration files and commands and simulating the procedure with a representative example. You also learned how to address NFS performance problems and how to troubleshoot some common NFS errors. The chapter's final section identified potential security problems with NFS and suggested ways to mitigate the threat.