Configuring AutoFS
Objectives
The AutoFS file system provides a mechanism for automatically mounting NFS file systems on demand and for automatically unmounting these file systems after a predetermined period of inactivity. The mount points are specified using local or distributed automount maps.
Upon completion of this module, you should be able to:
● Describe the fundamentals of the AutoFS file system
● Use automount maps
The following course map shows how this module fits into the current instructional goal.
Figure 7-1 Course Map
The course map groups Managing Swap Configuration, Managing Crash Dumps and Core Files, Configuring NFS, and Configuring AutoFS under the instructional goal Managing Virtual File Systems and Core Dumps.
Introducing the Fundamentals of AutoFS
AutoFS is a file system mechanism that provides automatic mounting using the NFS protocol. AutoFS is a client-side service. The AutoFS file system is initialized by the /etc/rc2.d/S74autofs script, which runs automatically when a system is booted. This script runs the automount command, which reads the AutoFS configuration files, and also starts the automount daemon, automountd. The automountd daemon runs continuously, mounting and unmounting remote directories on an as-needed basis.

Whenever a user on a client computer running the automountd daemon tries to access a remote file or directory, the daemon mounts the remote file system to which that file or directory belongs. This remote file system remains mounted for as long as it is needed. If the remote file system is not accessed for a defined period of time, the automountd daemon automatically unmounts the file system.

The AutoFS service mounts and unmounts file systems as required without any user intervention. The user does not need to use the mount and umount commands and does not need to know the superuser password.
The AutoFS file system enables you to do the following:
● Mount file systems on demand
● Unmount file systems automatically
● Centralize the administration of AutoFS mounts through the use of a name service, which can dramatically reduce administration overhead time
● Create multiple mount resources for read/write or read-only file systems
The automount facility contains three components:
● The AutoFS file system
● The automountd daemon
● The automount command
Figure 7-2 The AutoFS Features
AutoFS File System
An AutoFS file system’s mount points are defined in the automount maps
on the client system. After the AutoFS mount points are set up, activity under the mount points can trigger file systems to be mounted under the mount points. If the automount maps are configured, the AutoFS kernel module monitors mount requests made on the client. If a mount request is made for an AutoFS resource not currently mounted, the AutoFS service calls the automountd daemon, which mounts the requested resource.
The automountd Daemon
The /etc/rc2.d/S74autofs script starts the automountd daemon at boot time. The automountd daemon mounts file systems on demand and unmounts idle mount points.

Note – The automountd daemon is completely independent from the automount command. Because of this separation, you can add, delete, or change map information without having to stop and start the automountd daemon process.
The automount Command
The automount command, called at system startup time, reads the master map to create the initial set of AutoFS mounts. These AutoFS mounts are not automatically mounted at startup time; they are the points under which file systems are mounted on demand.
Using Automount Maps
The file system resources for automatic mounting are defined in automount maps. Figure 7-3 shows maps defined in the /etc directory.
Figure 7-3 Configuring AutoFS Mount Points
The AutoFS map types are:
● Master map – Lists the other maps used for establishing the AutoFS file system. The automount command reads this map at boot time.
● Direct map – Lists the mount points as absolute path names. This map explicitly indicates the mount point on the client.
● Indirect map – Lists the mount points as relative path names. This map uses a relative path to establish the mount point on the client.
● Special – Provides access to NFS servers by using their host names.
Figure 7-3 depicts an NFS client whose /etc/auto_master file contains the /net -hosts, /home auto_home, and /- auto_direct entries; its auto_direct map mounts /opt/moreapps from pluto:/export/opt/apps, and its auto_home map mounts home directories such as ernie and mary from mars:/export/home.
The automount maps can be obtained from ASCII data files, NIS maps, NIS+ tables, or from an LDAP database. Together, these maps describe information similar to the information specified in the /etc/vfstab file for remote file resources.
The source for automount maps is determined by the automount entry in the /etc/nsswitch.conf file. For example, the entry:

automount: files

tells the automount command that it should look in the /etc directory for its configuration information. Using nis instead of files tells automount to check the NIS maps for its configuration information.
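For illustration, a combined entry that checks the local files first and then the NIS maps would look like the following (a sketch; it is not necessarily what your /etc/nsswitch.conf file contains):

automount: files nis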
Configuring the Master Map
The auto_master map associates a directory, also called a mount point, with a map. The auto_master map is a master list specifying all the maps that the AutoFS service should check. Names of direct and indirect maps listed in this map refer to files in the /etc directory or to name service databases.
Associating a Mount Point With a Map
The following example shows an /etc/auto_master file.
# cat /etc/auto_master
# Master map for automounter
#
+auto_master
/net -hosts -nosuid,nobrowse
/home auto_home -nobrowse
The general syntax for each entry in the auto_master map is:

mount point    map name    mount options

where:

mount point     The full path name of a directory. If the directory does not exist, the AutoFS service creates one, if possible.

map name        The name of a direct or indirect map. These maps provide mounting information. A relative path name in this field requires AutoFS to consult the /etc/nsswitch.conf file for the location of the map.

mount options   The general options for the map. The mount options are similar to those used for standard NFS mounts. However, the nobrowse option is an AutoFS-specific mount option.

Note – The plus (+) symbol at the beginning of the +auto_master line in this file directs the automountd daemon to look at the NIS, NIS+, or LDAP databases before it reads the rest of the map. If this line is commented out, only the local files are searched unless the /etc/nsswitch.conf file specifies that NIS, NIS+, or LDAP should be searched.
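As an illustration of this syntax, a hypothetical entry (the /apps mount point and the auto_apps map name are examples, not part of the default file) might look like this:

/apps    auto_apps    -ro,nobrowse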
Identifying Mount Points for Special Maps
There are two mount points for special maps listed in the default /etc/auto_master file:
/net -hosts -nosuid,nobrowse
/home auto_home -nobrowse
The two mount points for special maps are:
The -hosts map   Provides access to all resources shared by NFS servers. The resources being shared by a server are mounted below the /net/hostname directory, or, if only the server's IP address is known, below the /net/IPaddress directory. The server does not have to be listed in the hosts database for this mechanism to work.

The -xfn map     Provides access to resources available through the Federated Naming Service (FNS). Resources associated with FNS mount below the /xfn directory.

Note – The -xfn map provides access to legacy FNS resources. Support for FNS is scheduled to cease with this release of the Solaris OE.
Using the /net Directory

Shared resources associated with the hosts map entry are mounted below the /net/hostname directory. For example, a shared resource named /documentation on host sys42 is mounted by the command:

# cd /net/sys42/documentation

Using the cd command to trigger the automounting of sys42's resources eliminates the need to log in to the system. Any user can mount the resource by executing the command to change to the directory that contains the shared resource. The resource remains mounted until a predetermined time period of inactivity has occurred.
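To confirm that the automounter performed the mount, you might list the mounted file systems and look for the server name (a sketch; sys42 is the host from the example above):

# mount | grep sys42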
Adding Direct Map Entries
The /- entry in the example master map defines a mount point for direct maps.
# cat /etc/auto_master
# Master map for automounter
#
+auto_master
/net -hosts -nosuid,nobrowse
/home auto_home -nobrowse
/- auto_direct -ro
The /- mount point is a pointer that informs the automount facility that the full path names are defined in the file specified by map_name (the /etc/auto_direct file in this example).

Note – The /- entry is not an entry in the default master map. This entry has been added here as an example. The other entries in this example already exist in the auto_master file.

Even though the map_name entry is specified as auto_direct, the automount facility automatically searches for all map-related files in the /etc directory; therefore, based upon the automount entry in the /etc/nsswitch.conf file, the auto_direct file is the /etc/auto_direct file.
Note – An NIS or NIS+ master map can have only one direct map entry. A master map that is a local file can have any number of entries.
Creating a Direct Map
Direct maps specify the absolute path name of the mount point, the specific options for this mount, and the shared resource to mount. For example:
# cat /etc/auto_direct
# Superuser-created direct map for automounter
#
/apps/frame -ro,soft server1:/export/framemaker,v5.5.6
/opt/local -ro,soft server2:/export/unbundled
/usr/share/man -ro,soft server3,server4,server5:/usr/share/man
The syntax for direct maps is:

key    [ mount-options ]    location

where:

key             The full path name of the mount point for the direct maps.

mount-options   The specific options for a given entry.

location        The location of the file resource, specified in server:pathname notation.
The following direct map entry specifies that the client mounts the /usr/share/man directory as read-only from the servers server3, server4, or server5, as available:

/usr/share/man -ro server3,server4,server5:/usr/share/man

This entry uses a special notation, a comma-separated list of servers, to specify a powerful automount feature: multiple locations for a file resource. The automountd daemon automatically mounts the /usr/share/man directory as needed, from server server3, server4, or server5, with server proximity and administrator-defined weights determining server selection. If the nearest server fails to respond within the specified time-out period, the next server with the nearest proximity is selected.
Note – Selection criteria for multiple servers, such as server proximity and administrator-defined weights, are defined in the “Replicated File Systems” section of the automount man page.
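For illustration, administrator-defined weighting factors can be appended in parentheses to individual server names (a sketch; the weight values here are arbitrary, and servers with lower weights are preferred when proximity is equal):

/usr/share/man -ro server3,server4(1),server5(2):/usr/share/man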
Adding Indirect Map Entries
The /home entries define mount points for indirect maps. The map auto_home lists relative path names only. Indirect maps obtain the initial path of the mount point from the master map.
# cat /etc/auto_master
# Master map for automounter
#
+auto_master
/net -hosts -nosuid,nobrowse
/home auto_home -nobrowse
Creating an Indirect Map
Use the auto_home indirect map to list the location of home directories across the network. For example:
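A sketch of what the example file might contain (the server name mars is taken from Figure 7-3, and the user names match the mount points listed below):

# cat /etc/auto_home
stevenu    mars:/export/home/stevenu
johnnyd    mars:/export/home/johnnyd
wkd        mars:/export/home/wkd
mary       mars:/export/home/mary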
The following describes the syntax for indirect maps:

key    [ mount-options ]    location

where:

key             Specifies the path name of the mount point relative to the beginning of the path name specified in the master map.

mount-options   Specifies the options for a given entry.

location        Specifies the location of the file resource, specified in server:pathname notation.
The example /etc/auto_home file implies the following mount points: /home/stevenu, /home/johnnyd, /home/wkd, and /home/mary.

Figure 7-4 shows the /home/mary mount point.
Figure 7-4 The Mount Points
Reducing the auto_home Map to a Single Line
The following entry reduces the auto_home file to a single line. The use of substitution characters specifies that for every login ID, the client remotely mounts the /export/home/loginID directory from the NFS server server1 onto the local mount point /home/loginID:

*    server1:/export/home/&

Figure 7-5 Mounting a Directory on a Local Mount Point

Figure 7-5 shows that this entry uses the wildcard character (*) to match any key. The substitution character (&) at the end of the location is replaced with the matched key field. Using wildcard and substitution characters works only when all home directories are on a single server (in this example, server1).
Updating the Automount Maps
When making changes to the master map or creating a direct map, run the automount command to make the changes effective.
Running the automount Command
The syntax of the command is:
automount [-t duration] [-v]
where:

-t duration    Specifies a time, in seconds, that the file system remains mounted when not in use. The default is 600 seconds (10 minutes).

-v             Specifies verbose mode, which displays output as the automount command executes.

You can modify the master map entries or add entries for new maps. However, you must run the automount command to make these changes effective.

You do not have to stop and restart the automountd daemon after making changes to existing entries in a direct map, because the daemon is stateless. You can modify existing entries in the direct map at any time. The new information is used when the automountd daemon next accesses the map entry to perform a mount.

Any modifications to indirect maps are automatically used by the automountd daemon.

A modification is a change to options or resources. A change to the key (the mount point) or a completely new line is an added entry, a deleted entry, or both.
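For example, to rescan the maps in verbose mode and shorten the idle time-out to five minutes, you might run (a sketch; the time-out value is arbitrary):

# automount -t 300 -v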
Use Table 7-1 to determine whether you should run (or rerun) the automount command.

Table 7-1 When to Run the automount Command

Automount Map    Run if the Entry is Added or Deleted    Run if the Entry is Modified
auto_master      Yes                                     Yes
direct map       Yes                                     No
indirect map     No                                      No

Note – You can run the automount command at any time to rescan the maps, even if running the command is not required.

Verifying AutoFS Entries in the /etc/mnttab File

The /etc/mnttab file is a file system that provides read-only access to the table of mounted file systems for the current host. Mounting a file system adds an entry to this table. Unmounting a file system removes the entry from this table. Each entry in the table is a line of fields separated by spaces in the form of:

special mount_point fstype options time

where:

special        The name of the resource to be mounted.

mount_point    The path name of the directory on which the file system is mounted.

fstype         The type of file system.

options        The mount options.

time           The time at which the file system was mounted.
You can display the /etc/mnttab file to obtain a snapshot of the mounted file systems, including those mounted as an AutoFS file system type.
# cat /etc/mnttab
/dev/dsk/c0t0d0s0 / ufs rw,intr,largefiles,onerror=panic,suid,dev=2200000 1008255791
/proc /proc proc dev=4080000 1008255790
mnttab /etc/mnttab mntfs dev=4140000 1008255790
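To see only the AutoFS trigger points, you could filter the table for the autofs file system type (a sketch):

# grep autofs /etc/mnttab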
Stopping and Starting the Automount System
The /etc/init.d/autofs script executes automatically as the system transitions between run levels, or you can run the script manually from the command line.
Stopping the Automount System
When the autofs script runs with the stop argument, it performs a forced unmount of all AutoFS file systems, and it then kills the automountd daemon.

The autofs script runs with the stop argument automatically when transitioning to:

● Run level S, using the /etc/rcS.d/K41autofs script
● Run level 1, using the /etc/rc1.d/K41autofs script
● Run level 0, using the /etc/rc0.d/K41autofs script
To run the script on demand, become superuser, and kill the automountd daemon by typing the following command:
# /etc/init.d/autofs stop
Starting the Automount System
When the autofs script is run with the start argument, the script starts the automountd daemon, and then it runs the automount utility as a background task.

The script runs with the start argument automatically when transitioning to run level 2, using the /etc/rc2.d/S74autofs script.

To run the script on demand, become superuser, and start the automountd daemon by running the command:
# /etc/init.d/autofs start
Performing the Exercises

You have the option to complete any one of three versions of a lab. To decide which option to choose, consult the following descriptions of the levels:

● Level 1 – This version of the lab provides the least amount of guidance. Each bulleted paragraph provides a task description, but you must determine your own way of accomplishing each task.
● Level 2 – This version of the lab provides more guidance. Although each step describes what you should do, you must determine which commands (and options) to input.
● Level 3 – This version of the lab is the easiest to accomplish because each step provides exactly what you should input to the system. This level also includes the task solutions for all three levels.
Exercise: Using the Automount Facility (Level 1)

In this exercise, you use the automount facility to automatically mount man pages and to mount a user's home directory.
Preparation
Choose a partner for this lab, and determine which system will be configured as the NFS server and which will serve as the NFS client. Verify that entries for both systems exist in the /etc/hosts file of each system. Refer to the lecture notes as necessary to perform the steps listed.

Tasks
Perform the following tasks:
● On the server, perform the steps required to share the /usr/share/man directory.
● On the client, rename the /usr/share/man directory to /usr/share/man.orig, and create a new mount point for the /usr/share/man directory. Edit the master map so that it calls a direct map. Create the direct map to mount the /usr/share/man directory from the server. Use the automount command to update the automountd daemon. Test that the man pages work, and verify the mount that occurs.
● Create a new, identical user on both the server and client that uses /export/home/username for the user's home directory. On both systems, make the changes required in the /etc/passwd file to set the home directory for this new user to the /home/username directory.
● On the server, perform the steps required to share the /export/home directory.
● On both systems, make the changes required in the /etc/auto_home file to allow both systems to automatically mount the /export/home/username directory when the new user calls for the /home/username directory. Test the new user login on both systems, and verify that the mounts take place. Log in as root when finished.
● On the server, unshare the /export/home and /usr/share/man directories, and remove entries for these directories from the /etc/dfs/dfstab file. Stop the NFS server daemons.
● On the client, remove the direct map entry from the /etc/auto_master file, and update the automountd daemon with the change. Return the /usr/share/man directory to its original configuration.
Exercise: Using the Automount Facility (Level 2)

In this exercise, you use the automount facility to automatically mount man pages and to mount a user's home directory.
Preparation
Choose a partner for this lab, and determine which system will be configured as the NFS server and which will serve as the NFS client. Verify that entries for both systems exist in the /etc/hosts file of each system. Refer to the lecture notes as necessary to perform the steps listed.

Task Summary
Perform the following tasks:
● On the server, perform the steps required to share the /usr/share/man directory.
● On the client, rename the /usr/share/man directory to /usr/share/man.orig, and create a new mount point for the /usr/share/man directory. Edit the master map so that it calls a direct map. Create the direct map to mount the /usr/share/man directory from the server. Use the automount command to update the automountd daemon. Test that the man pages work, and verify the mount that occurs.
● Create a new, identical user on both the server and client that uses /export/home/username for the user's home directory. On both systems, make the changes required in the /etc/passwd file to set the home directory for this new user to the /home/username directory.
● On the server, perform the steps required to share the /export/home directory.
● On both systems, make the changes required in the /etc/auto_home file to allow both systems to automatically mount the /export/home/username directory when the new user calls for the /home/username directory. Test the new user login on both systems, and verify that the mounts take place. Log in as root when finished.
● On the server, unshare the /export/home and /usr/share/man directories, and remove entries for these directories from the /etc/dfs/dfstab file. Stop the NFS server daemons.
● On the client, remove the direct map entry from the /etc/auto_master file, and update the automountd daemon with the change. Return the /usr/share/man directory to its original configuration.
Complete the following tasks.
Task 1 – On the Server Host
Complete the following steps:
1 Edit the /etc/dfs/dfstab file, and add a line to share the man pages.
2 Use the pgrep command to check if the mountd daemon is running.
● If the mountd daemon is not running, start it.
● If the mountd daemon is running, share the new directory.
Task 2 – On the Client Host
Complete the following steps:
1 Rename the /usr/share/man directory so that you cannot view the man pages installed on the client system.
2 Edit the master map so that it calls a direct map.
3 Create the direct map to mount the /usr/share/man directory from the server.
4 Run the automount command to update the list of directories managed by the automountd daemon.
_
5 Test the configuration, and verify that a mount for the /usr/share/man directory exists after accessing the man pages.
_
What did you observe to indicate that the automount operation was successful?
_
Task 3 – On the Server Host
Complete the following steps:
1 Verify that the /export/home directory exists. If it does not exist, create it.
2 Add a user account with the following characteristics:
● User ID: 3001
● Primary group: 10
● Home directory: /export/home/usera
● Login shell: /bin/ksh
● User name: usera
3 Remove the lock string from the new user's /etc/shadow file entry.
Task 4 – On the Client Host
Complete the following steps:
1 Verify that the /export/home directory exists. If it does not exist, create it.
2 Add a user account with the following characteristics:
● User ID: 3001
● Primary group: 10
● Home directory: /export/home/usera
● Login shell: /bin/ksh
● User name: usera
3 Remove the lock string from the new user's /etc/shadow file entry.
Task 5 – On Both Systems

Complete the following steps:

1 Edit the /etc/passwd file, and change the home directory for the new user from the /export/home/username directory to /home/username, where username is the name of your new user.
2 Edit the /etc/auto_home file. Add the following line, and replace username with the name of your new user:

username server:/export/home/username
Task 6 – On the Server Host
Complete the following steps:
1 Edit the /etc/dfs/dfstab file, and add a line to share the /export/home directory.
2 Use the pgrep command to check if the mountd daemon is running.
● If the mountd daemon is not running, start it.
● If the mountd daemon is running, share the new directory.
Task 7 – On Both Systems
Complete the following step:
Log in as the new user
Do both systems automatically mount the new user's home directory?
_
Which directory is mounted, and what is the mount point:
● On the server?
● On the client?
Task 8 – On the Client Host
Complete the following steps:
1 On the client, log off as usera.
_
2 Remove the entry for usera from the /etc/auto_home map.
Task 9 – On the Server Host
Complete the following steps:
1 Log off as usera.
_
2 After the client reboots as described in Step 4 of the "Task 8 – On the Client Host" section, remove the entry for usera from the /etc/auto_home map.
Exercise: Using the Automount Facility (Level 3)

In this exercise, you use the automount facility to automatically mount man pages and to mount a user's home directory.

Preparation

Choose a partner for this lab, and determine which system will be configured as the NFS server and which will serve as the NFS client. Verify that entries for both systems exist in the /etc/hosts file of each system. Refer to the lecture notes as necessary to perform the steps listed.
Task Summary
Perform the following tasks:
● On the server, perform the steps required to share the /usr/share/man directory.
● On the client, rename the /usr/share/man directory to /usr/share/man.orig, and create a new mount point for the /usr/share/man directory. Edit the master map so that it calls a direct map. Create the direct map to mount the /usr/share/man directory from the server. Use the automount command to update the automountd daemon. Test that the man pages work, and verify the mount that occurs.
● Create a new, identical user on both the server and client that uses /export/home/username for the user's home directory. On both systems, make the changes required in the /etc/passwd file to set the home directory for this new user to the /home/username directory.
● On the server, perform the steps required to share the /export/home directory.
● On both systems, make the changes required in the /etc/auto_home file to allow both systems to automatically mount the /export/home/username directory when the new user calls for the /home/username directory. Test the new user login on both systems, and verify that the mounts take place. Log in as root when finished.
● On the server, unshare the /export/home and /usr/share/man directories, and remove entries for these directories from the /etc/dfs/dfstab file. Stop the NFS server daemons.
● On the client, remove the direct map entry from the /etc/auto_master file, and update the automountd daemon with the change. Return the /usr/share/man directory to its original configuration.
Tasks and Solutions
The following section provides the tasks with their solutions.
Task 1 – On the Server Host
Complete the following steps:
1 Edit the /etc/dfs/dfstab file, and add a line to share the man pages.
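A possible dfstab entry and the commands to activate the share (a sketch; whether you start the NFS server daemons or simply share the new resource depends on whether mountd is already running):

# vi /etc/dfs/dfstab
(add the following line)
share -F nfs -o ro /usr/share/man

# pgrep -x mountd
# /etc/init.d/nfs.server start    (if mountd is not running)
# shareall                        (if mountd is already running)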
Task 2 – On the Client Host
Complete the following steps:
1 Rename the /usr/share/man directory so that you cannot view the man pages installed on the client system.
# cd /usr/share/
# mv man man.orig
2 Edit the /etc/auto_master file to add an entry for a direct map.
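The remaining edits and the map refresh might look like this (a sketch; the map name auto_direct is an assumption based on the examples in this module, and sys44 matches the server shown in the sample mount output below):

# vi /etc/auto_master
(add the following line)
/-    auto_direct    -ro

# vi /etc/auto_direct
(create the file with the following entry)
/usr/share/man    -ro,soft    sys44:/usr/share/man

# automount -v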
5 Test the configuration, and verify that a mount for the /usr/share/man directory exists after accessing the man pages.
# man ls
< output from man command >
# mount | grep man
/usr/share/man on sys44:/usr/share/man remote/read/write/setuid/dev=42c0003 on Thu Dec 13 08:07:26 2001
What did you observe to indicate that the automount operation was successful?
This operation should automatically mount the directory in which the manuals are stored. In other words, the man command should work.
Task 3 – On the Server Host
Complete the following steps:
1 Verify that the /export/home directory exists. If it does not exist, create it.
# ls /export/home
Note – Perform the next command if the /export/home directory does not exist.
# mkdir /export/home
2 Add a user account with the following characteristics:
● User ID: 3001
● Primary group: 10
● Home directory: /export/home/usera
● Login shell: /bin/ksh
● User name: usera
# useradd -u 3001 -g 10 -m -d /export/home/usera -s /bin/ksh usera
3 Remove the lock string from the new user's /etc/shadow file entry.
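One possible way to do this (a sketch; assigning a password with the passwd command also replaces the lock string):

# vi /etc/shadow
(locate the usera entry and delete the *LK* string from the password field)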
Task 4 – On the Client Host
Complete the following steps:
1 Verify that the /export/home directory exists. If it does not exist, create it.
2 Add a user account with the following characteristics:
● User ID: 3001
● Primary group: 10
● Home directory: /export/home/usera
● Login shell: /bin/ksh
● User name: usera

# useradd -u 3001 -g 10 -m -d /export/home/usera -s /bin/ksh usera

3 Remove the lock string from the new user's /etc/shadow file entry.
Task 5 – On Both Systems
Complete the following steps:
1 Edit the /etc/passwd file, and change the home directory for the new user from the /export/home/username directory to /home/username, where username is the name of your new user.
# vi /etc/passwd
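For step 2, the entry added to the /etc/auto_home file might look like this (a sketch; sys44 is assumed to be the NFS server, and usera is the user created in the previous tasks):

# vi /etc/auto_home
(add the following line)
usera    sys44:/export/home/usera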
Task 6 – On the Server Host
Complete the following steps:
1 Edit the /etc/dfs/dfstab file, and add a line to share the /export/home directory.
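A possible dfstab entry and the commands to activate the share (a sketch; whether you start the NFS server daemons or simply share the new resource depends on whether mountd is already running):

# vi /etc/dfs/dfstab
(add the following line)
share -F nfs /export/home

# pgrep -x mountd
# /etc/init.d/nfs.server start    (if mountd is not running)
# shareall                        (if mountd is already running)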
Task 7 – On Both Systems
Complete the following step:
Log in as the new user
# su - usera
Do both systems automatically mount the new user's home directory?
Yes, this should work.
Which directory is mounted, and what is the mount point:
Task 8 – On the Client Host
Complete the following steps:
1 On the client, log off as usera.
2 Remove the entry for usera from the /etc/auto_home map.
3 Remove the entry for the auto_home map from the /etc/auto_master file.
4 Reboot the client system.
Task 9 – On the Server Host
Complete the following steps:
1 Log off as usera.
2 After the client reboots as described in Step 4 of the "Task 8 – On the Client Host" section, remove the entry for usera from the /etc/auto_home map.
3 Remove the entries from the /etc/dfs/dfstab file.
4 Unshare the shared directories:
# unshareall
Exercise Summary
Discussion – Take a few minutes to discuss what experiences, issues, or discoveries you had during the lab exercise:
● Experiences
● Interpretations
● Conclusions
● Applications
Describing RAID and the Solaris™ Volume Manager Software
Objectives
A redundant array of independent disks (RAID) configuration enables you to expand the characteristics of a storage volume beyond the physical limitations of a single disk. You can use a RAID configuration to increase disk capacity as well as to improve disk performance and fault tolerance. The Solaris™ Volume Manager software provides a graphical user interface (GUI) tool to simplify system administration tasks on storage devices.

Upon completion of this module, you should be able to:
● Describe RAID
● Describe Solaris Volume Manager software concepts
The following course map shows how this module fits into the current instructional goal.
Figure 8-1 Course Map
The course map shows this module, Describing RAID and Solaris Volume Manager Software, together with Managing Storage Volumes and Configuring Solaris Volume Manager Software.
Introducing RAID

RAID is a classification of methods to back up and to store data on multiple disk drives. There are six levels of RAID, as well as a non-redundant array of independent disks (RAID 0). The Solaris Volume Manager software uses metadevices, which are product-specific definitions of logical storage volumes, to implement RAID 0, RAID 1, and RAID 5:
● RAID 0: Non-redundant disk array (concatenation and striping)
● RAID 1: Mirrored disk array
● RAID 5: Block-interleaved distributed-parity
RAID 0
RAID-0 volumes, including both stripes and concatenations, are composed of slices and let you expand disk storage capacity. You can either use RAID-0 volumes directly or use the volumes as the building blocks for RAID-1 volumes (mirrors). There are two types of RAID-0 volumes:

● Concatenated volumes (or concatenations)

A concatenated volume writes data to the first available slice. When the first slice is full, the volume writes data to the next available slice.

● Striped volumes (or stripes)

A stripe distributes data equally across all slices in the stripe.

RAID-0 volumes allow you to expand disk storage capacity efficiently. These volumes do not provide data redundancy. If a single slice fails on a RAID-0 volume, there is a loss of data.
Concatenated Volumes

Figure 8-2 shows that in a concatenated RAID-0 volume, data is organized serially and adjacently across disk slices, forming one logical storage unit.
Figure 8-2 RAID-0 Concatenation
A concatenation combines the capacities of several slices to get a larger storage capacity. You can add more slices to the concatenation as the demand for storage increases. You can add slices at any time, even if other slices are currently active.
Note – An interlace is a grouped segment of blocks on a particular slice.
The default behavior of concatenated RAID-0 volumes is to fill a physical component within the volume before beginning to store data on subsequent components within the concatenated volume. However, the default behavior of UFS file systems within the Solaris OE is to distribute the load across devices assigned to the volume containing a file system. This anomaly can make it seem that concatenated RAID-0 volumes distribute data across the components of the volume in a round-robin method. The data distribution is a function of the UFS file system that is mounted in the concatenated volume and is not a function of the concatenated volume itself.

You can also use a concatenation to expand any active and mounted UFS file system without having to bring down the system. Usually, the capacity of a concatenation is the total size of all the slices in the concatenation.
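For illustration only, creating a two-slice concatenation from the command line might look like this (a sketch; the volume name d10 and the slice names are assumptions, and configuring volumes is covered in a later module):

# metainit d10 2 1 c1t1d0s0 1 c1t2d0s0
(a concatenation of two stripes, each made up of one slice)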
Striped Volumes

Figure 8-3 shows the arrangement of a RAID-0 volume. A RAID-0 volume configured as a stripe arranges data across two or more slices. Striping alternates equally sized segments of data across two or more slices, forming one logical storage unit. These segments are interleaved round-robin, so that the combined space is created alternately from each slice.

Figure 8-3 RAID-0 Stripe

Striping enables parallel data access because multiple controllers can access the data at the same time. Parallel access increases input/output (I/O) throughput because multiple disks in the volume are busy servicing I/O requests simultaneously.
You cannot convert an existing file system directly to a stripe. You must first back up the file system, create the stripe, and then restore the file system to the stripe.

For sequential I/O operations on a stripe, the Solaris Volume Manager software reads all the blocks in an interlace. An interlace is a grouped segment of blocks on a particular slice. The Solaris Volume Manager software then reads all the blocks in the interlace on the second slice, and so on.

An interlace is the size, in Kbytes, Mbytes, or blocks, of the logical data chunks on a stripe. Depending on the application, different interlace values can increase performance for your configuration. The performance increase comes from several disk head-arm assemblies (HDAs) concurrently executing I/O operations. When the I/O request is larger than the interlace size, you might get better performance.

When you create a stripe, you can set the interlace value or use the Solaris Volume Manager software's default interlace value of 16 Kbytes. After you create the stripe, you cannot change the interlace value (although you could back up the data on it, delete the stripe, create a new stripe with a new interlace value, and then restore the data).
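For illustration only, creating a three-slice stripe with a 32-Kbyte interlace might look like this (a sketch; the volume name d20, the slice names, and the interlace value are assumptions):

# metainit d20 1 3 c1t1d0s0 c1t2d0s0 c1t3d0s0 -i 32k
(one stripe of three slices with a 32-Kbyte interlace)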
RAID 1

RAID-1 volumes, also known as mirror volumes, are typically composed of RAID-0 volumes and provide the advantage of data redundancy. The disadvantage is the higher cost incurred by requiring two RAID-0 devices wherever a single RAID-0 device is mirrored. Typical topics to be considered when configuring mirrors are:
● Trade-offs when using mirrors
● Uses of multiple submirrors
● RAID 0+1
● RAID 1+0
● Mirror read, write, and synchronization options
● Mirror configuration guidelines
Trade-Offs When Using Mirrors
A RAID-1 (mirror) volume maintains identical copies of the data in RAID-0 volumes. Mirroring requires more disks. You need at least twice as much disk space as the amount of data to be mirrored.

After configuring a mirror, you can use it as if it were a physical slice. With multiple copies of data available, data access time is reduced if the mirror read and write policies are properly configured. You then use read and write policies to distribute the access to the submirrors evenly across the mirror. The mirror read and write policies are described in detail later in this module.

You can mirror any file system, including existing file systems. You can also use a mirror for any application, such as a database.
Using Multiple Submirrors
A mirror is made of two or more RAID-0 volumes configured as either stripes or concatenations. The mirrored RAID-0 volumes are called submirrors. A mirror consisting of two submirrors is known as a two-way mirror, while a mirror consisting of three submirrors is known as a three-way mirror.
Creating a two-way mirror is usually sufficient for data redundancy. A three-way mirror allows you to take one submirror offline, for example to perform a backup, while the remaining two submirrors continue to provide data redundancy.

When a submirror is offline, it is in a read-only mode. The Solaris Volume Manager software tracks all the changes written to the online submirror. When the submirror is brought back online, only the newly written portions are resynchronized. Other reasons for taking the submirror offline include troubleshooting and repair.
You can attach or detach a submirror from a mirror at any time, though at least one submirror must remain attached to the mirror at all times. Usually, you begin the creation of a mirror with only a single submirror, after which you can attach additional submirrors.
Figure 8-4 RAID-1 Mirror
The Solaris Volume Manager software makes duplicate copies of the data located on multiple physical disks. The Solaris Volume Manager software presents one virtual disk to the application. All disk writes are duplicated, and disk reads come from one of the underlying submirrors. If the submirrors are not of equal size, the total capacity of the mirror is limited by the size of the smallest submirror.
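For illustration only, a two-way mirror built from two single-slice submirrors might be created like this (a sketch; the volume names and slice names are assumptions, and the procedure is covered in the configuration module):

# metainit d21 1 1 c1t1d0s0
# metainit d22 1 1 c2t1d0s0
# metainit d20 -m d21
(create the mirror d20 with d21 as its first submirror)
# metattach d20 d22
(attach d22 as the second submirror; it is then synchronized)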