
Lecture Operating systems: A concept-based approach (2/e): Chapter 12 - Dhananjay M. Dhamdhere


Chapter 12 - Implementation of file operations. This chapter discusses the physical organization used in file systems. It starts with an overview of I/O devices and their characteristics, and discusses different RAID organizations that provide high reliability, fast access, and high data transfer rates.


No part of this publication may be reproduced or distributed in any form or by any means, without the prior written permission of the publisher, or used beyond the limited distribution to teachers and educators permitted by McGraw-Hill.


Input Output Control System (IOCS)

• The IOCS consists of two layers that provide efficient file processing and efficient device performance

– Access Methods layer

* Each access method provides efficient processing of files with a specific file organization, e.g., sequential file organization and direct file organization

– Physical IOCS layer

* Performs I/O operations on devices

* Ensures efficient device performance


Physical organizations in Access methods and Physical IOCS

•  The physical IOCS reads data from disk into buffers or a disk cache/file cache maintained in memory (or writes data), ensuring high device throughput

•  The access method moves the data between buffers or caches and the address space of the process, ensuring efficient file processing


Policies and Mechanisms


Layers of File system and IOCS

•  M: Mechanism module, P: Policy module

•  A policy module invokes mechanism modules of the same layer, which may invoke policy and mechanism modules of the lower layer


Policies and mechanisms in file system and IOCS layers


Model of a computer system

•  The I/O subsystem has an independent data path to memory

•  Devices are connected to device controllers, which are connected to the DMA; a device is identified by the pair (controller id, device id)

•  The DMA, a device controller, and a device implement an I/O operation


Access and data transfer time in an I/O operation

The total time required to perform an I/O operation = ta + tx, where ta is the access time and tx is the data transfer time


Error detection approaches

•  Parity bits

•  Cyclic redundancy checksum (CRC)
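To make the parity-bit idea concrete, here is a minimal C sketch (illustration only; real controllers compute this in hardware, and CRC generalizes the same principle to polynomial division over the data):

```c
#include <stdint.h>

/* Compute the even-parity bit for one byte: the bit that makes the
   total number of 1-bits (data + parity) even.  A receiver recomputes
   it and flags an error if it disagrees with the stored parity bit. */
static uint8_t even_parity(uint8_t byte) {
    uint8_t parity = 0;
    while (byte) {
        parity ^= (byte & 1);   /* toggle for every 1-bit */
        byte >>= 1;
    }
    return parity;              /* 0 or 1 */
}
```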


Disk data organization

• Data should be organized such that it can be accessed efficiently

– Notion of a Cylinder

* Consists of identically positioned tracks on all platters of a disk

· All of its tracks can be accessed for the same position of the disk heads

· Its use reduces disk head movement

· Put adjoining data of a file on tracks in the same cylinder (an address-mapping sketch follows this list)

– Data staggering techniques

* A disk rotates a bit while the disk heads are being readied or moved to access a new track. Make sure that the data to be accessed passes under the head only when the head is ready to access it
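As a rough illustration of the cylinder idea (the geometry constants and names below are hypothetical, not from the slides), consecutive logical blocks can be laid out so that they fill every track of a cylinder before the heads have to move:

```c
/* Hypothetical disk geometry, chosen only for illustration. */
#define SECTORS_PER_TRACK 63
#define HEADS_PER_CYL     8    /* one track per platter surface */

struct chs { unsigned cylinder, head, sector; };

/* Map a logical block number so that consecutive blocks fill all tracks
   of one cylinder before the heads move to the next cylinder. */
static struct chs block_to_chs(unsigned block) {
    struct chs a;
    a.sector   = block % SECTORS_PER_TRACK;
    a.head     = (block / SECTORS_PER_TRACK) % HEADS_PER_CYL;
    a.cylinder = block / (SECTORS_PER_TRACK * HEADS_PER_CYL);
    return a;
}
```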


A cylinder in a disk with several platters

Tracks in a cylinder can be accessed by electronically switching to the disk head positioned on the desired track; it does not involve movement of the heads


Data staggering techniques

• Sector interleaving

– Background: Older disks used to contain a buffer to store data read from the disk or data to be written on it

* In a read operation, the data was first read from a disk sector into the buffer

* Data from the buffer was then transferred to memory

* Now the disk was ready for a new operation

* But the next sector had already passed under the head!

– Technique: A few sectors are skipped while recording adjoining file records or file data

* The number of sectors skipped is called the interleaving factor (inf)
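A minimal sketch of the resulting placement rule (my own formulation; the function name and arguments are assumptions, not the book's):

```c
/* Physical sector slot of the i-th logical record on a track of
   track_size sectors, with interleaving factor inf (inf sectors are
   skipped between consecutive records).  Assumes gcd(inf + 1,
   track_size) == 1 so that the mapping visits every slot exactly once;
   otherwise an extra skew must be added on each wrap-around. */
static unsigned interleaved_slot(unsigned i, unsigned inf, unsigned track_size) {
    return (i * (inf + 1)) % track_size;
}
```

For example, with inf = 2 on an 8-sector track, logical records 0, 1, 2, ... land in physical slots 0, 3, 6, 1, 4, 7, 2, 5.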


Sector interleaving

(a) No interleaving; adjacent records in a file occupy adjoining disk sectors

(b) Interleaving factor = 2; two sectors exist between adjacent file records


Variation of throughput with interleaving factor

Inf  = 2 provides the best peak throughput


Data staggering techniques

– The disk requires some time to switch from reading data of one track to data of another track in a cylinder (head switch time)

* Some sectors/records/blocks pass under the head during this time

* Head skewing: Stagger the data on tracks

· The first sector of a new track should pass under the head only after its disk head is ready to read


Redundant Array of Inexpensive Disks (RAID)

• An array of cheap disks is used instead of a single disk

• Different arrangements are used to provide three benefits

– Reliability

* Store data redundantly (remember stable storage?)

* Read / write redundant data records in parallel

– Fast data transfer rates

* Store file data on several disks in the RAID

* Read / write file data in parallel

– Fast access

* Store two or more copies of data

* To read data, access the copy that is accessible most efficiently


Disk striping

A disk strip contains data (it is like a sector or disk block)

A disk stripe is a collection of identically positioned strips on different disks in the RAID

– Data written on strips in a disk stripe can be read in parallel

* This arrangement provides high data transfer rates
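A minimal sketch of one possible round-robin strip layout (the struct and function names are mine, introduced only for illustration):

```c
/* Round-robin striping: strip k of a file goes to disk (k mod n_disks)
   and lands in stripe (k / n_disks) on that disk.  All strips of one
   stripe can then be read or written in parallel. */
struct strip_addr { unsigned disk, stripe; };

static struct strip_addr strip_location(unsigned k, unsigned n_disks) {
    struct strip_addr a = { k % n_disks, k / n_disks };
    return a;
}
```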


RAID levels

• Different RAID organizations (called RAID levels) provide different benefits

– RAID 0: Disk striping

* High data transfer rates

– RAID 1: Disk mirroring

* Identical data is written on two disks

* To read, the copy accessible faster is accessed

– RAID 0+1

* Disk striping as in RAID 0; each stripe is mirrored

– RAID 1+0

* Disks are first mirrored, then striped

* Provides better reliability than RAID 0+1


RAID levels

– Level 2: Error correction codes

* Data and redundant bits are recorded on different disks

* For example, (12,8) Hamming code

– Level 3: Bit interleaved parity

* Similar to level 2, but uses a single parity disk

* The device controller detects the error; the parity bit is used to correct it (see the parity sketch after this list)

– Level 4: Block interleaved parity

* Strip contains consecutive bytes; parity strip contains parity bits

– Level 5: Block interleaved distributed parity

* Like level 4, but parity strips are spread on several disks

– Level 6: P+Q redundancy

* Like level 5, but with two independent redundant strips per stripe, so data can be recovered even if two disks fail
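To make the parity used in levels 3, 4 and 5 concrete: the parity strip is the bytewise XOR of the data strips in a stripe, so any single lost strip can be rebuilt from the survivors. A minimal sketch (assumed layout and names, not the book's code):

```c
#include <stddef.h>
#include <stdint.h>

/* Rebuild the missing strip of a stripe as the XOR of all surviving
   strips (data strips plus the parity strip).  strips[failed] is
   ignored on input and overwritten with the reconstructed data. */
static void rebuild_strip(uint8_t *strips[], size_t n_strips,
                          size_t strip_len, size_t failed) {
    for (size_t b = 0; b < strip_len; b++) {
        uint8_t x = 0;
        for (size_t d = 0; d < n_strips; d++)
            if (d != failed)
                x ^= strips[d][b];
        strips[failed][b] = x;   /* parity property: XOR of all strips is 0 */
    }
}
```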


Physical IOCS and Device Drivers

• The Physical IOCS

– Supports device-level I/O operations

* Provides functionalities for starting an I/O operation, checking its status, etc. (Note: a device is identified by the pair (cu, d))

– Performs device-level performance optimization

* Schedules I/O operations on a device in such a manner that the device throughput is maximized

* Reduces disk head movement in a disk

• The Physical IOCS provides this support for all classes of I/O devices

– A device driver in a modern OS provides such support for a specific class of I/O devices; e.g., a disk driver


Overview of device-level I/O

• I/O instructions and I/O commands

– I/O instruction

* An I/O instruction initiates an I/O operation; it involves the CPU, DMA, device controller and device

* An I/O operation may consist of several distinct tasks

· e.g., reading from a disk involves positioning disk heads, locating a sector and performing data transfer

* Individual tasks in I/O are performed through ‘I/O commands’

* An I/O instruction indicates the (cu, d) pair identifying a device and specifies the commands that describe the tasks to be performed

– I/O command

* Performs a specific task in an I/O operation


Computer features supporting device-level I/O

• The following features in a computer are used for I/O

– Initiating an I/O operation

* Instruction ‘I/O-init (cu, d), command address’ initiates an I/O operation

* I/O commands are assumed to occupy consecutive memory locations

– Checking device status

* Instruction ‘I/O-status (cu, d)’ provides status information regarding the device (cu, d)

– Performing I/O operations

* Device-specific commands may be used to perform I/O

– Handling interrupts

* Interrupt hardware performs switching of PSWs


Device-level I/O without OS support

• We consider a program that is executing in a raw machine, i.e., without OS support

– When the program initiates an I/O operation, it should execute the next instruction only after the I/O operation completes

* CPU and the I/O channel operate in parallel

* Hence the CPU should do nothing until the I/O operation completes

It is achieved as follows:

· An IO_FLAG is associated with the I/O operation

· It is set by the interrupt servicing routine when the I/O operation completes

· The CPU loops until IO_FLAG is set to the value ‘I/O complete’


Device level I/O

•  Before initiating an I/O operation, the IO_FLAG is set to ‘I/O in progress’

•  The IO_init instruction sets a condition code regarding its outcome

•  If the I/O device is busy, the IO_init instruction is attempted repeatedly

•  The CPU loops at PROCEED until the I/O operation completes
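A minimal C-flavoured sketch of this busy-wait scheme (io_init(), the flag constants and the self-invoked handler are stand-ins I introduce for illustration; the bool return models the condition code, and this is not the book's listing):

```c
#include <stdbool.h>
#include <stdio.h>

enum { IO_IN_PROGRESS, IO_COMPLETE };

/* Set by the interrupt-servicing routine when the I/O operation completes. */
static volatile int IO_FLAG = IO_COMPLETE;

static void io_interrupt_handler(void) { IO_FLAG = IO_COMPLETE; }

/* Stand-in for the 'I/O-init (cu, d), command address' instruction.
   Returns false if the device is busy.  For this sketch it pretends the
   operation finishes at once; real hardware would raise the completion
   interrupt asynchronously, which would invoke io_interrupt_handler(). */
static bool io_init(int cu, int d, const void *commands) {
    (void)cu; (void)d; (void)commands;
    io_interrupt_handler();
    return true;
}

int main(void) {
    IO_FLAG = IO_IN_PROGRESS;            /* before initiating the I/O   */
    while (!io_init(1, 3, NULL))
        ;                                /* device busy: retry I/O-init */
    /* PROCEED: */
    while (IO_FLAG != IO_COMPLETE)
        ;                                /* busy-wait until the interrupt */
    puts("I/O complete");
    return 0;
}
```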


Physical IOCS

• The Physical IOCS provides functionalities for the following:

– I/O initiation, completion and error recovery

* It provides an interface for device-level I/O

– Awaiting completion of an I/O operation

* It provides a system call through which a program that initiated an I/O operation can block itself until the I/O completes

– Optimization of I/O device performance

* It performs I/O scheduling to perform I/O operations in an order that optimizes performance of an I/O device


Actions of the physical IOCS layer

•  A process makes read/write requests to the physical IOCS, which invokes functionalities in the kernel to implement the I/O operation

•  When a process makes an ‘Await I/O completion’ call, the physical IOCS obtains device status information and decides whether to block the process


Requests and responses across the physical IOCS interfaces


Logical I/O devices

• A logical I/O device is a virtual device; it is just a name

– A process uses a logical I/O device; the kernel assigns a physical device to it

* Physical IOCS performs I/O on the physical device

– Its use provides flexibility

* Background: I/O initiation requires specification of I/O device address, i.e., a (cu, d) pair

* A process may not be able to provide this address

· It may not know the address, e.g., a printer’s address

· Change of device would require recompilation of the program!

· Use of a logical device overcomes this problem


Performing I/O operations

• I/O operations are performed as follows:

– A process uses a logical device name while initiating an I/O operation

* Actually, a higher level language program contains read or write operations on a file

* The compiler associates a logical device name with the file

* It translates a read or write operation into a system call for I/O initiation using the logical device name

* During execution, the system call activates the Physical IOCS

– The Physical IOCS notes all I/O operations directed at a device

– It performs I/O scheduling and decides which of the pending I/O operations should be performed next


I/O operation using Physical IOCS facilities—system call to initiate an I/O operation


How a process initiates an I/O operation

•  The code of a process contains a read / write operation

•  The compiler converts it into a call on a file system module

•  The file system module invokes a module of the Physical IOCS library

•  This module makes the system call ‘Initiate I/O operation’


Physical IOCS data structures

•  The LDT (logical device table) contains information about logical devices; its entries point to PDT entries

•  The PDT (physical device table) contains information about physical devices

•  A PDT entry contains a pointer to an I/O queue

•  The I/O queue is a list of IOCBs (I/O control blocks)


Summary of PIOCS functionalities

•  Device address translation refers to obtaining the address of a physical I/O device from a logical device name (it is done through the LDT)

•  I/O scheduler selects one IOCB from the I/O queue of a device


Use of device drivers


Disk Scheduling

• A disk scheduling policy performs I/O operations in an order that optimizes disk throughput. We shall discuss four policies (a selection sketch for SSTF follows this list):

– FIFO

– Shortest Seek Time First (SSTF)

* Seek time: Time spent in movement of disk heads

– SCAN / Look

* Heads are moved from one end of platter to another, servicing I/O requests (Look moves them only up to the last request in a direction)

* Direction of head movement is reversed; another SCAN is started

– CSCAN / C-look (‘C’ stands for ‘circular’)
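To make SSTF concrete, here is a minimal selection sketch (array-based and with names of my choosing; a real IOCS would scan the device's IOCB queue instead):

```c
#include <stdlib.h>

/* Shortest Seek Time First: among the pending requests, pick the one
   whose track is closest to the current head position.  Returns the
   index of the chosen request, or -1 if there are none. */
static int sstf_select(const int track[], int n_pending, int head_pos) {
    int best = -1, best_dist = 0;
    for (int i = 0; i < n_pending; i++) {
        int dist = abs(track[i] - head_pos);
        if (best == -1 || dist < best_dist) {
            best = i;
            best_dist = dist;
        }
    }
    return best;   /* note: a far-away request can starve under SSTF */
}
```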


I/O requests for Disk scheduling

•  The disk heads are currently moving toward higher numbered tracks

•  The requests are made at different times


Disk scheduling details


Performance of disk scheduling algorithms

Q:  Does any of these policies suffer from starvation?

* Errata: Read ‘Look’ instead of ‘SCAN’ and ‘C-look’ instead of ‘CSCAN’


Access Methods

• An access method provides efficient file processing for a class of files. Access methods use two techniques:

– Buffering of records

* A buffer is a memory area that holds some file data temporarily

* Buffering provides overlap of CPU and I/O activities in a program

· For a file being read-processed, records are read into buffers before the process requests access to the records

· For a file being write-processed, data to be written into a record is copied to the buffer, but it is written on the disk later

– Blocking of records

* Many records are written together in a single I/O operation


Unbuffered and buffered processing of file F


Buffering of records

• Notation used:

tp : Processing time per record

tio : I/O time per record

tc : Time required to copy a record between a buffer and the address space of a process

In the example, we assume tp = 50 msec, tio = 75 msec, and tc = 5 msec


I/O, copying and processing in Unbuf_P (UP), Single_buf_P (SP) and Multi_buf_P (MP)

• SP processes the record copied into Rec_area while the next record is read into the buffer

• A record is copied into Rec_area and processed by MP, while the next record is read

• No parallelism between I/O and processing of a record by UP

(In the original figure, brackets mark which of these activities run in parallel and which run sequentially)


Timing diagrams of operation using buffering

(I: I/O activity, C: copying, P: processing)

(a) UP: No overlap between I/O of records and their processing

(b) SP: I/O and processing are performed in parallel, but not copying


Use of buffers in Multi_buf_P


File processing with a single buffer

Operation of Single_buf_P proceeds as follows

– After processing one record, it waits until I/O on the buffer is complete, copies the record into Rec_area and processes it

tw : Wait time per record

tee : Effective elapsed time per record
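With the sample values quoted earlier (tp = 50 msec, tio = 75 msec, tc = 5 msec), and assuming the I/O of the next record is started as soon as the current record has been copied out of the buffer, a back-of-envelope estimate of tw and tee is sketched below (my derivation, not the book's timing chart):

```c
#include <stdio.h>

int main(void) {
    double tp = 50, tio = 75, tc = 5;            /* msec, from the example   */
    /* The next record's I/O runs while the current record is processed,   */
    /* so the process waits only for the part of tio not hidden behind tp. */
    double tw  = tio > tp ? tio - tp : 0;        /* 25 msec here             */
    double tee = tc + tp + tw;                   /* 80 msec per record       */
    printf("tw = %.0f msec, tee = %.0f msec\n", tw, tee);
    return 0;
}
```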


File processing with multiple buffers

Operation of Multi_buf_P proceeds as follows:

– Copying of a record in a buffer into Rec_area and its processing together overlap with I/O on another buffer

Hence, tee = max (tio, tc + tp)

Total elapsed time = tio + (number of records – 1) x tee + (tc + tp)
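Plugging the sample values into these formulas for, say, a hypothetical file of 100 records (the record count is mine, chosen only to illustrate):

```c
#include <stdio.h>

int main(void) {
    double tio = 75, tc = 5, tp = 50;                 /* msec               */
    int    n   = 100;                                 /* assumed file size  */
    double tee = tio > tc + tp ? tio : tc + tp;       /* max(tio, tc + tp)  */
    double total = tio + (n - 1) * tee + (tc + tp);   /* formula above      */
    printf("tee = %.0f msec, total = %.0f msec\n", tee, total);  /* 75, 7555 */
    return 0;
}
```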

Q: Why and when to use a large number of buffers?

— Helps in servicing peak requirement for records


Processing of a file with blocked records


Transfer rate of I/O device = 800 K bytes/sec

Record size = 200 bytes, ta = 10 msec


Variation of (tio)lr with blocking factor

•  (tio)lr decreases as the blocking factor is increased. This fact can be used to reduce or eliminate tw through buffering

Transfer rate of I/O device = 800K bytes/sec
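Reading the figure as (tio)lr = (ta + m*tx) / m for blocking factor m, where tx is the transfer time of one record (this formula is my reading of the curve, using the parameters above: ta = 10 msec and tx = 200 bytes / 800 Kbytes per sec = 0.25 msec):

```c
#include <stdio.h>

int main(void) {
    double ta = 10.0;                   /* access time, msec                */
    double tx = 200.0 / 800.0;          /* 200 bytes at 800 bytes per msec  */
    for (int m = 1; m <= 16; m *= 2) {  /* blocking factor                  */
        double tio_lr = (ta + m * tx) / m;
        printf("m = %2d: (tio)lr = %.3f msec\n", m, tio_lr);
    }
    return 0;   /* prints 10.25, 5.25, 2.75, 1.5, 0.875 msec */
}
```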


Combination of buffering and blocking

• A combination of buffering and blocking can be used to minimize the effective elapsed time of a process

– Blocking reduces the effective I/O time per record

* See previous slide

– Buffering provides overlap between I/O and processing of records

* See slides on operation of Single_buf_P and Multi_buf_P

* Overlap is maximum when effective I/O time per record < processing time per record

– Use an appropriate blocking factor such that

* (tio)lr < processing time per record


Buffered processing of blocked records using blocking factor = 4 and two buffers

•  The process waits until I/O on Buf1 is complete; it occurs at 11 msec

•  The four records in Buf1 are processed during 11-23 msec

•  During this time, I/O on Buf2 is in progress; it completes at 22 msec

•  The records in Buf2 can be processed straightaway
