Operating System Concepts, 8th Edition — Chapter 5: CPU Scheduling


Page 1

CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system can make the computer more productive. In this chapter, we introduce basic CPU-scheduling concepts and present several CPU-scheduling algorithms. We also consider the problem of selecting an algorithm for a particular system.

In Chapter 4, we introduced threads to the process model. On operating systems that support them, it is kernel-level threads, not processes, that are in fact being scheduled by the operating system. However, the terms process scheduling and thread scheduling are often used interchangeably. In this chapter, we use process scheduling when discussing general scheduling concepts and thread scheduling to refer to thread-specific ideas.

CHAPTER OBJECTIVES

• To introduce CPU scheduling, which is the basis for multiprogrammed operating systems.
• To describe various CPU-scheduling algorithms.
• To discuss evaluation criteria for selecting a CPU-scheduling algorithm for a particular system.

Page 2

Figure 5.1 Alternating sequence of CPU and I/O bursts

This pattern continues: every time one process has to wait, another process can take over use of the CPU. Scheduling of this kind is a fundamental operating-system function. Almost all computer resources are scheduled before use. The CPU is, of course, one of the primary computer resources. Thus, its scheduling is central to operating-system design.

The success of CPU scheduling depends on an observed property of processes: process execution consists of a cycle of CPU execution and I/O wait. Processes alternate between these two states, and execution begins with a CPU burst (Figure 5.1).

Page 3

Figure 5.2 Histogram of CPU-burst durations (x-axis: burst duration in milliseconds)

An I/O-bound program typically has many short CPU bursts. A CPU-bound program might have a few long CPU bursts. This distribution can be important in the selection of an appropriate CPU-scheduling algorithm.

Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection process is carried out by the short-term scheduler (or CPU scheduler). The scheduler selects a process from the processes in memory that are ready to execute and allocates the CPU to that process.

Note that the ready queue is not necessarily a first-in, first-out (FIFO) queue. As we shall see when we consider the various scheduling algorithms, a ready queue can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list. Conceptually, however, all the processes in the ready queue are lined up waiting for a chance to run on the CPU. The records in the queues are generally process control blocks (PCBs) of the processes.
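As a small illustration of the simplest of these options, here is a minimal sketch (not from the book; the struct and field names are mine) of a ready queue kept as a FIFO linked list of PCB-like records:

#include <stdio.h>
#include <stdlib.h>

/* Minimal stand-in for a process control block (PCB). */
struct pcb {
    int pid;
    int burst_ms;        /* length of the next CPU burst */
    struct pcb *next;
};

/* FIFO ready queue: dequeue from the head, enqueue at the tail. */
struct ready_queue {
    struct pcb *head;
    struct pcb *tail;
};

static void enqueue(struct ready_queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

static struct pcb *dequeue(struct ready_queue *q) {
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (q->head == NULL) q->tail = NULL;
    }
    return p;
}

int main(void) {
    struct ready_queue q = { NULL, NULL };
    for (int pid = 1; pid <= 3; pid++) {
        struct pcb *p = malloc(sizeof *p);
        p->pid = pid;
        p->burst_ms = pid * 10;
        enqueue(&q, p);
    }
    /* The "scheduler" simply picks the process at the head of the queue. */
    struct pcb *p;
    while ((p = dequeue(&q)) != NULL) {
        printf("dispatch pid %d (burst %d ms)\n", p->pid, p->burst_ms);
        free(p);
    }
    return 0;
}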

Page 4

CPU-scheduling decisions may take place when a process switches from the running state to the waiting state, when it switches from running to ready, when it switches from waiting to ready, or when it terminates. When scheduling takes place only in the first and last of these circumstances, we say that the scheduling scheme is nonpreemptive or cooperative; otherwise, it is preemptive. Under nonpreemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state. This scheduling method was used by Microsoft Windows 3.x; Windows 95 introduced preemptive scheduling, and all subsequent versions of Windows operating systems have used preemptive scheduling. The Mac OS X operating system for the Macintosh also uses preemptive scheduling; previous versions of the Macintosh operating system relied on cooperative scheduling. Cooperative scheduling is the only method that can be used on certain hardware platforms, because it does not require the special hardware (for example, a timer) needed for preemptive scheduling.

Unfortunately, preemptive scheduling incurs a cost associated with access to shared data. Consider the case of two processes that share data. While one is updating the data, it is preempted so that the second process can run. The second process then tries to read the data, which are in an inconsistent state. In such situations, we need new mechanisms to coordinate access to shared data.

Preemption also affects the design of the operating-system kernel. During the processing of a system call, the kernel may be busy with an activity on behalf of a process. Such activities may involve changing important kernel data (for instance, I/O queues). What happens if the process is preempted in the middle of these changes and the kernel (or the device driver) needs to read or modify the same structure? Chaos ensues. Certain operating systems deal with this problem by waiting either for a system call to complete or for an I/O block to take place before doing a context switch.

Page 5

Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves the following:

• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that program

The dispatcher should be as fast as possible, since it is invoked during every process switch. The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.

Scheduling Criteria

Different CPU-scheduling algorithms have different properties, and the choice of a particular algorithm may favor one class of processes over another. In choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms.

Many criteria have been suggested for comparing CPU-scheduling algorithms. Which characteristics are used for comparison can make a substantial difference in which algorithm is judged to be best. The criteria include the following:

• CPU utilization. We want to keep the CPU as busy as possible. Conceptually, CPU utilization can range from 0 to 100 percent. In a real system, it should range from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used system).

• Throughput. If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput. For long processes, this rate may be one process per hour; for short transactions, it may be ten processes per second.

• Turnaround time. From the point of view of a particular process, the important criterion is how long it takes to execute that process.

Page 6

The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.

• Waiting time. The CPU-scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.

• Response time. In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the time it takes to start responding, not the time it takes to output the response. The turnaround time is generally limited by the speed of the output device.

It is desirable to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time. In most cases, we optimize the average measure. However, under some circumstances, it is desirable to optimize the minimum or maximum values rather than the average. For example, to guarantee that all users get good service, we may want to minimize the maximum response time.

Investigators have suggested that, for interactive systems (such as time-sharing systems), it is more important to minimize the variance in the response time than to minimize the average response time. A system with reasonable and predictable response time may be considered more desirable than a system that is faster on the average but is highly variable. However, little work has been done on CPU-scheduling algorithms that minimize variance.

As we discuss the various scheduling algorithms, we illustrate their operation. An accurate illustration should involve many processes, each a sequence of several hundred CPU bursts and I/O bursts. For simplicity of illustration, though, we consider only one CPU burst (in milliseconds) per process in our examples.

Scheduling Algorithms

CPU scheduling deals with the problem of deciding which of the processes in the ready queue is to be allocated the CPU. There are many different CPU-scheduling algorithms. In this section, we describe several of them.

First-Come, First-Served Scheduling

By far the simplest CPU-scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue.

Page 7

The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds. If the processes arrive in the order P2, P3, P1, however, the results will be as shown in the following Gantt chart: P2 runs from 0 to 3, P3 from 3 to 6, and P1 from 6 to 30.

The average waiting time is now (6 + 0 + 3)/3 = 3 milliseconds. This reduction is substantial. Thus, the average waiting time under an FCFS policy is generally not minimal and may vary substantially if the processes' CPU burst times vary greatly.
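To make the FCFS arithmetic concrete, here is a small sketch (mine, not the book's code) that recomputes the waiting times above for both arrival orders; the burst values 24, 3, and 3 come from the example:

#include <stdio.h>

/* Average waiting time under FCFS: each process waits for the
   sum of the bursts of all processes ahead of it in the queue. */
static double fcfs_avg_wait(const int burst[], int n) {
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;   /* waiting time of process i */
        elapsed += burst[i];
    }
    return (double)total_wait / n;
}

int main(void) {
    int order1[] = { 24, 3, 3 };   /* arrival order P1, P2, P3 */
    int order2[] = { 3, 3, 24 };   /* arrival order P2, P3, P1 */
    printf("P1,P2,P3: %.1f ms\n", fcfs_avg_wait(order1, 3));   /* prints 17.0 */
    printf("P2,P3,P1: %.1f ms\n", fcfs_avg_wait(order2, 3));   /* prints 3.0 */
    return 0;
}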

In addition, consider the performance of FCFS scheduling in a dynamic situation. Assume we have one CPU-bound process and many I/O-bound processes. As the processes flow around the system, the following scenario may result. The CPU-bound process will get and hold the CPU. During this time, all the other processes will finish their I/O and will move into the ready queue, waiting for the CPU. While the processes wait in the ready queue, the I/O devices are idle. Eventually, the CPU-bound process finishes its CPU burst and moves to an I/O device. All the I/O-bound processes, which have short CPU bursts, execute quickly and move back to the I/O queues. At this point, the CPU sits idle. The CPU-bound process will then move back to the ready queue and be allocated the CPU, and again the I/O-bound processes end up waiting behind it. There is a convoy effect as all the other processes wait for the one big process to get off the CPU. This effect results in lower CPU and device utilization than might be possible if the shorter processes were allowed to go first.

Page 8

Shortest-Job-First Scheduling

A different approach to CPU scheduling is the shortest-job-first (SJF) algorithm, which associates with each process the length of its next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst; FCFS scheduling is used to break a tie. A more appropriate term for this method would be the shortest-next-CPU-burst algorithm, because scheduling depends on the length of the next CPU burst of a process, rather than its total length. We use the term SJF because most people and textbooks refer to this type of scheduling by that name.

As an example, consider four processes with CPU bursts of 6, 8, 7, and 3 milliseconds (P1 through P4). Under SJF, the waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for process P3, and 0 milliseconds for process P4. Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds. By comparison, if we were using the FCFS scheduling scheme, the average waiting time would be 10.25 milliseconds.

The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting time for a given set of processes. Moving a short process before a long one decreases the waiting time of the short process more than it increases the waiting time of the long process. Consequently, the average waiting time decreases.

The real difficulty with the SJF algorithm is knowing the length of the next CPU request. For long-term (job) scheduling in a batch system, we can use as the length the process time limit that a user specifies when he submits the job. (Too low a value will cause a time-limit-exceeded error and require resubmission.)
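For the all-arrive-at-time-zero case, nonpreemptive SJF amounts to sorting by burst length and then doing FCFS arithmetic; the sketch below (my own, using the burst values from the example) reproduces the 7-millisecond average:

#include <stdio.h>
#include <stdlib.h>

/* Comparator for qsort: ascending burst length. */
static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int burst[] = { 6, 8, 7, 3 };          /* P1..P4 from the example */
    int n = sizeof burst / sizeof burst[0];
    qsort(burst, n, sizeof burst[0], cmp_int);

    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;             /* shorter bursts run first */
        elapsed += burst[i];
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);  /* 7.00 */
    return 0;
}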

Page 9

The next CPU burst is generally predicted as an exponential average of the measured lengths of previous CPU bursts. Let t_n be the length of the nth CPU burst, and let τ_{n+1} be our predicted value for the next CPU burst. Then, for α with 0 ≤ α ≤ 1, define

τ_{n+1} = α t_n + (1 − α) τ_n

The value of t_n contains our most recent information; τ_n stores the past history. The parameter α controls the relative weight of recent and past history in our prediction. If α = 0, then τ_{n+1} = τ_n, and recent history has no effect (current conditions are assumed to be transient). If α = 1, then τ_{n+1} = t_n, and only the most recent CPU burst matters (history is assumed to be old and irrelevant). More commonly, α = 1/2, so recent history and past history are equally weighted. The initial τ_0 can be defined as a constant or as an overall system average. Figure 5.3 shows an exponential average with α = 1/2 and τ_0 = 10.

To understand the behavior of the exponential average, we can expand the formula for τ_{n+1} by substituting for τ_n, to find

τ_{n+1} = α t_n + (1 − α) α t_{n−1} + ... + (1 − α)^j α t_{n−j} + ... + (1 − α)^{n+1} τ_0

Since both α and (1 − α) are less than or equal to 1, each successive term has less weight than its predecessor.
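A quick sketch of the predictor (my code and variable names, with a burst sequence patterned after Figure 5.3); with α = 1/2 the new prediction is simply the mean of the last measured burst and the previous prediction:

#include <stdio.h>

/* tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n */
static double predict_next(double alpha, double t_n, double tau_n) {
    return alpha * t_n + (1.0 - alpha) * tau_n;
}

int main(void) {
    double alpha = 0.5;
    double tau = 10.0;                            /* initial guess tau_0 */
    double burst[] = { 6, 4, 6, 4, 13, 13, 13 };  /* measured CPU bursts (ms) */
    int n = sizeof burst / sizeof burst[0];

    for (int i = 0; i < n; i++) {
        tau = predict_next(alpha, burst[i], tau);
        printf("burst %2.0f ms -> next prediction %5.2f ms\n", burst[i], tau);
    }
    return 0;
}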

The SJF algorithm can be either preemptive or nonpreemptive. The choice arises when a new process arrives at the ready queue while a previous process is still executing.

Page 10

A preemptive SJF algorithm will preempt the currently executing process, whereas a nonpreemptive SJF algorithm will allow the currently running process to finish its CPU burst. Preemptive SJF scheduling is sometimes called shortest-remaining-time-first scheduling. As an example, consider four processes: P1 arrives at time 0 with an 8-millisecond burst, P2 at time 1 with 4 milliseconds, P3 at time 2 with 9 milliseconds, and P4 at time 3 with 5 milliseconds. Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives at time 1. The remaining time for process P1 (7 milliseconds) is larger than the time required by process P2 (4 milliseconds), so process P1 is preempted, and process P2 is scheduled. The average waiting time for this example is [(10 − 1) + (1 − 1) + (17 − 2) + (5 − 3)]/4 = 26/4 = 6.5 milliseconds. Nonpreemptive SJF scheduling would result in an average waiting time of 7.75 milliseconds.

Priority Scheduling

The SJF algorithm is a special case of the general priority scheduling algorithm. A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order. An SJF algorithm is simply a priority algorithm where the priority (p) is the inverse of the (predicted) next CPU burst: the larger the CPU burst, the lower the priority, and vice versa.

Page 11

Using priority scheduling, we would schedule these processes according to the following Gantt chart. The average waiting time is 8.2 milliseconds.

Priorities can be defined either internally or externally. Internally defined priorities use some measurable quantity or quantities to compute the priority of a process. For example, time limits, memory requirements, the number of open files, and the ratio of average I/O burst to average CPU burst have been used in computing priorities. External priorities are set by criteria outside the operating system, such as the importance of the process, the type and amount of funds being paid for computer use, the department sponsoring the work, and other, often political, factors.

Priority scheduling can be either preemptive or nonpreemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process. A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. A nonpreemptive priority scheduling algorithm will simply put the new process at the head of the ready queue.

A major problem with priority scheduling algorithms is indefinite blocking, or starvation. A process that is ready to run but waiting for the CPU can be considered blocked. A priority scheduling algorithm can leave some low-priority processes waiting indefinitely; in a heavily loaded system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU. A solution to this problem is aging: gradually increasing the priority of processes that wait in the system for a long time.
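Aging is easy to sketch. In the toy example below (my own; burst times and priority values are made up, and, as in the chapter, lower numbers mean higher priority), every process left waiting at a scheduling decision has its priority value lowered by one, so a low-priority process cannot starve forever:

#include <stdio.h>

#define NPROC 4

struct proc {
    int pid;
    int remaining;   /* remaining CPU time (ms) */
    int priority;    /* lower value = higher priority */
};

int main(void) {
    struct proc p[NPROC] = {
        { 1, 3, 5 }, { 2, 2, 1 }, { 3, 4, 4 }, { 4, 1, 2 }
    };
    int done = 0, clock = 0;

    while (done < NPROC) {
        /* pick the ready process with the best (lowest) priority value */
        int best = -1;
        for (int i = 0; i < NPROC; i++)
            if (p[i].remaining > 0 && (best < 0 || p[i].priority < p[best].priority))
                best = i;

        /* run it for one 1-ms slice */
        p[best].remaining--;
        clock++;
        if (p[best].remaining == 0) done++;

        /* aging: every other waiting process gets a slightly better priority */
        for (int i = 0; i < NPROC; i++)
            if (i != best && p[i].remaining > 0 && p[i].priority > 0)
                p[i].priority--;

        printf("t=%2d ran P%d\n", clock, p[best].pid);
    }
    return 0;
}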

Page 12

Round-Robin Scheduling

The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to enable the system to switch between processes. A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100 milliseconds in length. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.

To implement RR scheduling, we keep the ready queue as a FIFO queue of processes. New processes are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum, and dispatches the process.

One of two things will then happen. The process may have a CPU burst of less than 1 time quantum. In this case, the process itself will release the CPU voluntarily, and the scheduler will then proceed to the next process in the ready queue. Otherwise, if the CPU burst of the currently running process is longer than 1 time quantum, the timer will go off and will cause an interrupt to the operating system. A context switch will be executed, and the process will be put at the tail of the ready queue. The CPU scheduler will then select the next process in the ready queue.

The average waiting time under the RR policy is often long. Consider again the three processes that arrive at time 0 with CPU bursts of 24, 3, and 3 milliseconds. If we use a time quantum of 4 milliseconds, process P1 gets the first 4 milliseconds. Since it requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is given to the next process in the queue, process P2. Process P2 does not need 4 milliseconds, so it quits before its time quantum expires. The CPU is then given to the next process, process P3. Once each process has received 1 time quantum, the CPU is returned to process P1 for an additional time quantum.
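The sketch below (mine; the burst lengths 24, 3, and 3 and the 4-millisecond quantum come from the example above) simulates RR and reports each process's finish and waiting times:

#include <stdio.h>

#define N 3

int main(void) {
    int burst[N]  = { 24, 3, 3 };    /* P1, P2, P3, all arriving at time 0 */
    int left[N]   = { 24, 3, 3 };
    int finish[N] = { 0 };
    int quantum = 4, clock = 0, remaining = N;

    while (remaining > 0) {
        /* circular scan over the array; with all arrivals at time 0 this
           visits processes in the same order as FIFO requeue-at-the-tail */
        for (int i = 0; i < N; i++) {
            if (left[i] == 0) continue;
            int slice = left[i] < quantum ? left[i] : quantum;
            clock += slice;
            left[i] -= slice;
            if (left[i] == 0) { finish[i] = clock; remaining--; }
        }
    }

    int total_wait = 0;
    for (int i = 0; i < N; i++) {
        int wait = finish[i] - burst[i];      /* arrival time is 0 */
        total_wait += wait;
        printf("P%d: finish %2d, wait %2d\n", i + 1, finish[i], wait);
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / N);
    return 0;
}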

Page 13

If a process's CPU burst exceeds 1 time quantum, that process is preempted and is put back in the ready queue; the RR algorithm is thus preemptive. If there are n processes in the ready queue and the time quantum is q, each process must wait no longer than (n − 1) × q time units until its next time quantum. For example, with five processes and a time quantum of 20 milliseconds, each process will get up to 20 milliseconds every 100 milliseconds.

The performance of the RR algorithm depends heavily on the size of the time quantum. At one extreme, if the time quantum is extremely large, the RR policy is the same as the FCFS policy. In contrast, if the time quantum is extremely small (say, 1 millisecond), the RR approach is called processor sharing and (in theory) creates the appearance that each of n processes has its own processor running at 1/n the speed of the real processor. This approach was used in Control Data Corporation (CDC) hardware to implement ten peripheral processors with only one set of hardware and ten sets of registers. The hardware executes one instruction for one set of registers, then goes on to the next. This cycle continues, resulting in ten slow processors rather than one fast one. (Actually, since the processor was much faster than memory and each instruction referenced memory, the processors were not much slower than ten real processors would have been.)

In software, we need also to consider the effect of context switching on the performance of RR scheduling. Assume, for example, that we have only one process of 10 time units. If the quantum is 12 time units, the process finishes in less than 1 time quantum, with no overhead. If the quantum is 6 time units, however, the process requires 2 quanta, resulting in a context switch. If the time quantum is 1 time unit, then nine context switches will occur, slowing the execution of the process accordingly (Figure 5.4).

Thus, we want the time quantum to be large with respect to the context-switch time. If the context-switch time is approximately 10 percent of the time quantum, then about 10 percent of the CPU time will be spent in context switching. In practice, most modern systems have time quanta ranging from 10 to 100 milliseconds.
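The arithmetic behind Figure 5.4 can be checked with a few lines (a sketch, not the book's code): for a single CPU burst, the number of context switches is one less than the number of quanta needed.

#include <stdio.h>

int main(void) {
    int burst = 10;                           /* process needs 10 time units */
    int quanta[] = { 12, 6, 1 };

    for (int i = 0; i < 3; i++) {
        int q = quanta[i];
        int slices = (burst + q - 1) / q;     /* ceil(burst / q) */
        int switches = slices - 1;            /* one switch between slices */
        printf("quantum %2d -> %d context switches\n", q, switches);
    }
    return 0;
}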

Page 14

Figure 5.5 How turnaround time varies with the time quantum

Turnaround time also depends on the size of the time quantum. As we can see from Figure 5.5, the average turnaround time of a set of processes does not necessarily improve as the time-quantum size increases. In general, the average turnaround time can be improved if most processes finish their next CPU burst in a single time quantum. For example, given three processes of 10 time units each and a quantum of 1 time unit, the average turnaround time is 29. If the time quantum is 10, however, the average turnaround time drops to 20. If context-switch time is added in, the average turnaround time increases even more for a smaller time quantum, since more context switches are required.

Although the time quantum should be large compared with the context-switch time, it should not be too large. If the time quantum is too large, RR scheduling degenerates to an FCFS policy. A rule of thumb is that 80 percent of the CPU bursts should be shorter than the time quantum.

Page 15

Figure 5.6 Multilevel queue scheduling (queues range from highest priority, system processes, down to lowest priority, student processes)

Multilevel Queue Scheduling

A multilevel queue scheduling algorithm partitions the ready queue into several separate queues (Figure 5.6). The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type. Each queue has its own scheduling algorithm. For example, separate queues might be used for foreground and background processes. The foreground queue might be scheduled by an RR algorithm, while the background queue is scheduled by an FCFS algorithm.

In addition, there must be scheduling among the queues, which is commonly implemented as fixed-priority preemptive scheduling. For example, the foreground queue may have absolute priority over the background queue.

Let's look at an example of a multilevel queue scheduling algorithm with five queues, listed below in order of priority:

1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes

Page 16

This setup has the advantage of low scheduling overhead, but it is inflexible. The multilevel feedback queue scheduling algorithm, in contrast, allows a process to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time, it will be moved to a lower-priority queue. This scheme leaves I/O-bound and interactive processes in the higher-priority queues. In addition, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue; this form of aging prevents starvation.

For example, consider a multilevel feedback queue scheduler with three queues, numbered from 0 to 2. A process entering the ready queue is put in queue 0. A process in queue 0 is given a time quantum of 8 milliseconds. If it does not finish within this time, it is moved to the tail of queue 1. If queue 0 is empty, the process at the head of queue 1 is given a quantum of 16 milliseconds. If it does not complete, it is preempted and is put into queue 2. Processes in queue 2 are run on an FCFS basis but only when queues 0 and 1 are empty.

This scheduling algorithm gives highest priority to any process with a CPU burst of 8 milliseconds or less. Such a process will quickly get the CPU, finish its CPU burst, and go off to its next I/O burst. Processes that need more than 8 but less than 24 milliseconds are also served quickly, although with lower priority than shorter processes. Long processes automatically sink to queue 2 and are served in FCFS order with any CPU cycles left over from queues 0 and 1.

Page 17

In general, a multilevel feedback queue scheduler is defined by the following parameters:

• The number of queues
• The scheduling algorithm for each queue
• The method used to determine when to upgrade a process to a higher-priority queue
• The method used to determine when to demote a process to a lower-priority queue
• The method used to determine which queue a process will enter when that process needs service

The definition of a multilevel feedback queue scheduler makes it the most general CPU-scheduling algorithm. It can be configured to match a specific system under design. Unfortunately, it is also the most complex algorithm, since defining the best scheduler requires some means by which to select values for all the parameters.
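As one concrete reading of these parameters, here is a simplified sketch (my own; task values and field names are invented, new arrivals and aging are omitted, and order within a queue is reduced to array order) of the three-queue example: quanta of 8 and 16 milliseconds for queues 0 and 1, FCFS for queue 2, and demotion whenever a full quantum is used:

#include <stdio.h>

#define NQUEUE 3
#define NPROC  4

struct task { int pid, remaining, queue; };

int main(void) {
    /* quantum per queue: 8 ms, 16 ms, 0 = FCFS (run to completion) */
    int quantum[NQUEUE] = { 8, 16, 0 };
    struct task t[NPROC] = {
        { 1, 5, 0 }, { 2, 30, 0 }, { 3, 12, 0 }, { 4, 3, 0 }
    };
    int done = 0, clock = 0;

    while (done < NPROC) {
        /* pick the first unfinished task in the highest-priority nonempty queue */
        int pick = -1;
        for (int q = 0; q < NQUEUE && pick < 0; q++)
            for (int i = 0; i < NPROC; i++)
                if (t[i].remaining > 0 && t[i].queue == q) { pick = i; break; }

        struct task *p = &t[pick];
        int q = quantum[p->queue];
        int slice = (q == 0 || p->remaining < q) ? p->remaining : q;

        clock += slice;
        p->remaining -= slice;
        printf("t=%3d ran P%d from queue %d for %d ms\n",
               clock, p->pid, p->queue, slice);

        if (p->remaining == 0)
            done++;
        else if (p->queue < NQUEUE - 1)
            p->queue++;                 /* used its full quantum: demote */
    }
    return 0;
}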

Thread Scheduling

In Chapter 4, we introduced threads to the process model, distinguishing between user-level and kernel-level threads. On operating systems that support them, it is kernel-level threads, not processes, that are being scheduled by the operating system. User-level threads are managed by a thread library, and the kernel is unaware of them. To run on a CPU, a user-level thread must ultimately be mapped to an associated kernel-level thread, although the mapping may be indirect and may use a lightweight process (LWP). In this section, we explore scheduling issues involving user-level and kernel-level threads and offer specific examples of scheduling for Pthreads.

Page 18

Typically, PCS is done according to priority: the scheduler selects the runnable thread with the highest priority to run. User-level thread priorities are set by the programmer and are not adjusted by the thread library, although some thread libraries may allow the programmer to change the priority of a thread. It is important to note that PCS will typically preempt the thread currently running in favor of a higher-priority thread.

The POSIX Pthread API allows specifying either PCS or SCS during thread creation. Pthreads identifies the following contention scope values:

• PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling.
• PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling.

On systems implementing the many-to-many model, the PTHREAD_SCOPE_PROCESS policy schedules user-level threads onto available LWPs.

The Pthread IPC provides two functions for getting and setting the contention scope policy:

• pthread_attr_setscope(pthread_attr_t *attr, int scope)
• pthread_attr_getscope(pthread_attr_t *attr, int *scope)

The first parameter for both functions contains a pointer to the attribute set for the thread. The second parameter for the pthread_attr_setscope() function is passed either the PTHREAD_SCOPE_SYSTEM or the PTHREAD_SCOPE_PROCESS value, indicating how the contention scope is to be set. In the case of pthread_attr_getscope(), this second parameter contains a pointer to an int value that is set to the current value of the contention scope. If an error occurs, each of these functions returns a non-zero value.

The Pthread program in Figure 5.8 first determines the existing contention scope and sets it to PTHREAD_SCOPE_SYSTEM. It then creates five separate threads that will run using the SCS scheduling policy. Note that on some systems, only certain contention scope values are allowed. For example, Linux and Mac OS X systems allow only PTHREAD_SCOPE_SYSTEM.
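Figure 5.8 itself is not reproduced in this scan. The following is a minimal sketch in its spirit (my reconstruction, not the book's exact listing): it reports the default contention scope, requests PTHREAD_SCOPE_SYSTEM, and creates five threads.

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 5

/* Each thread just does some trivial work. */
static void *runner(void *param) {
    (void)param;
    printf("thread %lu running\n", (unsigned long)pthread_self());
    pthread_exit(NULL);
}

int main(void) {
    pthread_t tid[NUM_THREADS];
    pthread_attr_t attr;
    int scope;

    pthread_attr_init(&attr);

    /* report the default contention scope */
    if (pthread_attr_getscope(&attr, &scope) != 0)
        fprintf(stderr, "unable to get scheduling scope\n");
    else
        printf("default scope: %s\n",
               scope == PTHREAD_SCOPE_PROCESS ? "PROCESS" : "SYSTEM");

    /* request system-contention scope (the only choice on Linux and Mac OS X) */
    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], &attr, runner, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);

    pthread_attr_destroy(&attr);
    return 0;
}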

Page 20

Multiple-Processor Scheduling

One approach to CPU scheduling in a multiprocessor system has all scheduling decisions, I/O processing, and other system activities handled by a single processor, the master server. The other processors execute only user code. This asymmetric multiprocessing is simple because only one processor accesses the system data structures, reducing the need for data sharing.

A second approach uses symmetric multiprocessing (SMP), where each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes. Regardless, scheduling proceeds by having the scheduler for each processor examine the ready queue and select a process to execute. As we shall see in Chapter 6, if we have multiple processors trying to access and update a common data structure, the scheduler must be programmed carefully. We must ensure that two processors do not choose the same process and that processes are not lost from the queue. Virtually all modern operating systems support SMP, including Windows XP, Windows 2000, Solaris, Linux, and Mac OS X. In the remainder of this section, we discuss issues concerning SMP systems.

Processor Affinity

Consider what happens to cache memory when a process has been running on a specific processor. The data most recently accessed by the process populate the cache for that processor, and as a result, successive memory accesses by the process are often satisfied in cache memory. Now consider what happens if the process migrates to another processor. The contents of cache memory must be invalidated for the first processor, and the cache for the second processor must be repopulated. Because of the high cost of invalidating and repopulating caches, most SMP systems try to avoid migration of processes from one processor to another and instead attempt to keep a process running on the same processor. This is known as processor affinity: a process has an affinity for the processor on which it is currently running.

Processor affinity takes several forms. When an operating system has a policy of attempting to keep a process running on the same processor, but does not guarantee that it will do so, we have a situation known as soft affinity. Some systems, such as Linux, also provide system calls that support hard affinity, allowing a process to specify that it is not to migrate to other processors.
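On Linux, hard affinity is exposed through the sched_setaffinity() system call. The sketch below (mine; Linux-specific and assuming glibc's _GNU_SOURCE extensions) pins the calling process to CPU 0 so that its cache-warm data stays on one processor:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);               /* allow this process to run only on CPU 0 */

    if (sched_setaffinity(getpid(), sizeof mask, &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0\n");
    return 0;
}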
