Lecture Operating systems: A concept-based approach (2/e): Chapter 4 - Dhananjay M. Dhamdhere


Chapter 4 - Scheduling. Scheduling is the act of selecting the next process to be serviced by a CPU. This chapter discusses how a scheduler uses the fundamental techniques of priority-based scheduling, reordering of requests, and variation of time slice to achieve a suitable combination of user service, efficient use of resources, and system performance. It describes different scheduling policies and their properties.

Page 1

in any form or by any means, without the prior written permission of the publisher, or used beyond the limited distribution to teachers and educators permitted by McGraw-Hill for their individual course preparation. If you are a student using this PowerPoint slide, you are using it without permission.

Page 2

Scheduling determines the order in which requests should be taken up for servicing

– A request is a unit of computational work

* It could be a job, a process, or a subrequest made to a process

Page 3

Scheduling related concepts and terms

Page 5

Performance related concepts and terms

Page 6

Performance related concepts and terms

– Terms related to average service

* Mean turn-around time

– Terms related to scheduling performance

Page 7

Fundamental techniques of scheduling

– Priority-based scheduling

* As seen in the context of multiprogramming

– Reordering of requests: it may be used to

* Enhance system throughput, e.g., as in multiprogramming

* Enhance user service, e.g., as in time sharing

– Variation of time slice

* Small time slice yields better response times

* Large time slice may reduce scheduling overhead

Page 8

More on priority

– Priorities may be static or dynamic

* A static priority is assigned to a request before it is admitted

* A dynamic priority is one that is varied during servicing of a request

– How to handle processes having same priority?

* Round-robin scheduling is performed within a priority level

– Starvation of a low priority request may occur

Q: How to avoid starvation?

Page 9

* Kernel may preempt a process and schedule another one

* A set of processes are serviced in an overlapped manner

Page 10

Processes for scheduling

Process        P1   P2   P3   P4   P5
Arrival time    0    2    3    5    9
Service time    3    3    2    5    3
Deadline        4   14    6   11   12

Page 12

Non-preemptive scheduling policies

– First come, first served (FCFS)

– Shortest request next (SRN)

– Highest response ratio next (HRN)

* Response ratio = (time since arrival + service time) / service time

Q: Which policy will provide better performance under what conditions? Consider throughput, average turn-around time, manipulation by users, starvation possibility …

Page 13

Performance of FCFS and SRN scheduling

• Table shows the sequence of decisions made by the scheduling policies

• SRN schedules P3 ahead of P2 because it is shorter

• Mean turn-around time is shorter under SRN than under FCFS scheduling

Qs: Schedule lengths? Should we always use SRN?

      Arrives   Service time
P1       0           3
P2       2           3
P3       3           2
P4       5           5
P5       9           3
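The table above can be fed into a small simulation to compare the two non-preemptive policies. This is an illustrative sketch (function names are mine, not the book's):

```python
def fcfs(procs):
    # procs: list of (name, arrival, service); service in arrival order
    t, turnaround = 0, {}
    for name, arr, svc in sorted(procs, key=lambda p: p[1]):
        t = max(t, arr) + svc          # wait for arrival if CPU is idle
        turnaround[name] = t - arr
    return turnaround

def srn(procs):
    # Non-preemptive: at each decision point pick the shortest arrived request
    pending, t, turnaround = list(procs), 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:
            t = min(p[1] for p in pending)   # idle until next arrival
            continue
        name, arr, svc = min(ready, key=lambda p: p[2])
        t += svc
        turnaround[name] = t - arr
        pending.remove((name, arr, svc))
    return turnaround

procs = [("P1", 0, 3), ("P2", 2, 3), ("P3", 3, 2), ("P4", 5, 5), ("P5", 9, 3)]
ta_fcfs = fcfs(procs)
ta_srn = srn(procs)
print(sum(ta_fcfs.values()) / 5)   # 5.4
print(sum(ta_srn.values()) / 5)    # 5.2
```

SRN services P3 before P2 and ends with a smaller mean turn-around time (5.2 s vs 5.4 s), matching the slide's claim.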

Page 14

Highest response ratio next (HRN) policy

• P14 has a shorter service time than P13

• It has a higher response ratio at 8 seconds, so it is scheduled ahead of P13

Qs:  Advantages / disadvantages over FCFS and SRN?
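HRN's selection rule can be sketched as follows; applied to the chapter's sample process table it services the processes in arrival order, because P2's accumulated wait outweighs P3's shorter service time (a sketch, not the book's code):

```python
def hrn(procs):
    # procs: (name, arrival, service); non-preemptive scheduling
    pending, t, order = list(procs), 0, []
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:
            t = min(p[1] for p in pending)
            continue
        # response ratio = (time since arrival + service time) / service time
        pick = max(ready, key=lambda p: (t - p[1] + p[2]) / p[2])
        t += pick[2]
        order.append(pick[0])
        pending.remove(pick)
    return order

procs = [("P1", 0, 3), ("P2", 2, 3), ("P3", 3, 2), ("P4", 5, 5), ("P5", 9, 3)]
print(hrn(procs))   # ['P1', 'P2', 'P3', 'P4', 'P5']
```

This also answers the starvation question for long requests: a waiting request's response ratio grows without bound, so it eventually overtakes every newly arrived short request.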

Page 15

Non-preemptive scheduling policies

– Short requests

* SRN provides them a favoured treatment; FCFS does not

Q: Does HRN provide them a favoured treatment?

– Long requests

* May starve if SRN scheduling is used

* They do not starve if HRN scheduling is used

Q: Why?

– Service times of processes are not known

– Service time information provided by users is not reliable

Page 16

Preemptive scheduling policies

– Preempt a process to permit other processes to operate

* Aimed at improving throughput and response times

– Round-robin with time slicing (RR)

– Least completed next (LCN)

* Select the process that has received least amount of CPU time

– Shortest time to go (STG)

* Select the process with least remaining service time

Page 17

Scheduling using preemptive scheduling policies

      Arrives   Service time
P1       0           3
P2       2           3
P3       3           2
P4       5           5
P5       9           3
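Round-robin with a 1-second time slice can be simulated for the same table. Tie-breaking conventions matter: here a process that arrives while another runs joins the ready list ahead of the preempted process (an assumption, so the exact numbers depend on the convention chosen):

```python
from collections import deque

def round_robin(procs, slice_=1):
    # procs: (name, arrival, service)
    procs = sorted(procs, key=lambda p: p[1])
    arrival = {name: arr for name, arr, _ in procs}
    remaining = {name: svc for name, _, svc in procs}
    ready, turnaround, t, i = deque(), {}, 0, 0
    while len(turnaround) < len(procs):
        while i < len(procs) and procs[i][1] <= t:   # admit arrivals
            ready.append(procs[i][0]); i += 1
        if not ready:
            t = procs[i][1]                          # idle until next arrival
            continue
        name = ready.popleft()
        run = min(slice_, remaining[name])
        t += run
        remaining[name] -= run
        while i < len(procs) and procs[i][1] <= t:   # arrivals during the slice
            ready.append(procs[i][0]); i += 1
        if remaining[name] == 0:
            turnaround[name] = t - arrival[name]
        else:
            ready.append(name)                       # preempted: back of the list
    return turnaround

procs = [("P1", 0, 3), ("P2", 2, 3), ("P3", 3, 2), ("P4", 5, 5), ("P5", 9, 3)]
ta = round_robin(procs)
print(sum(ta.values()) / len(ta))   # 6.6
```

RR's mean turn-around time (6.6 s under this convention) is worse than SRN's, but every process makes steady progress, which is what time-sharing systems want.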

Page 18

Operation of preemptive scheduling policies

Page 19

Performance of preemptive scheduling policies

• Column C shows the completion time of a process

• ta is the turn-around time and w is the weighted turn-around

Page 20

Variation of average response time with time slice

• Response time = n × (time slice + scheduling overhead)

• The response time is larger for a time slice of 5 msec than for 10 msec because of the scheduling overhead
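A back-of-the-envelope version of the slide's formula shows why the smaller slice fares worse here. With n processes, one round of servicing costs n × (time slice + overhead), and a request needing more CPU time than one slice must wait through additional rounds (the concrete numbers below are assumptions for illustration):

```python
import math

def response_time(service_ms, n, slice_ms, overhead_ms):
    # A request needing service_ms of CPU completes after
    # ceil(service_ms / slice_ms) rounds; each round through the n
    # processes costs n * (slice_ms + overhead_ms) of elapsed time.
    rounds = math.ceil(service_ms / slice_ms)
    return rounds * n * (slice_ms + overhead_ms)

# 10 ms request, 5 processes, 1 ms scheduling overhead per switch
print(response_time(10, 5, 5, 1))    # 2 rounds * 5 * 6  = 60 ms
print(response_time(10, 5, 10, 1))   # 1 round  * 5 * 11 = 55 ms
```

The 5 ms slice forces a second round, and the extra per-switch overhead makes the total response time larger than with the 10 ms slice.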

Page 21

Preemptive scheduling policies

Page 22

Long, medium, and short-term scheduling

No single scheduler can provide a suitable combination of performance and user service, so an OS uses three schedulers

– Long-term scheduler

* Decides when to admit an arrived process

 Uses the nature of a process and availability of resources to decide

Page 23

Event handling and scheduling

• An event handler passes control to the long- or medium-term scheduler

• These schedulers pass control to the short-term scheduler

Page 24

Scheduling in a time sharing system

The medium-term scheduler swaps blocked and ready processes and changes their states accordingly

Page 25

Scheduling mechanisms and policy modules

The policy module invokes the mechanisms when necessary; the mechanisms access scheduling data

Page 26

Simple priority-based scheduling

– The list is organized in decreasing order of priority

– PCBs are added or deleted when processes are created or terminated

– Scheduler scans the PCB list and selects the first ready process

– If no process is ready, schedule a dummy process that does nothing

* Put the CPU in a sleep mode (conserves power!)

 OS uses several sleep modes and puts the CPU successively into deeper sleep modes if it continues to be idle

 It takes longer to wake up from a deeper sleep mode

Page 27

Round-robin scheduling

– The kernel maintains several lists of processes—a list of ready processes, a list of blocked processes, etc.

* Scheduler selects the first process in the ready list for operation

* If the process exhausts its time slice, it is put at the end of the ready list

* If the process initiates I/O, it is added at the end of the ready list when its I/O completes

* If a ready process is swapped out, it is removed from the ready list; it is added at the end of the ready list when it is swapped in

Page 28

Ready lists in a time sharing system

(a)   Initial state of the system

(b)   P3  is scheduled

(c)   P3  is preempted

Page 29

Practical scheduling policies

– Provide a good balance of response time and overhead

* Vary the time slice!

* Vary the priority

– Provide fair service to processes by providing a specified share of CPU time to each group of processes

* In round-robin scheduling, all processes receive approximately equal service

* If one application has 5 processes while other applications have only 1 process each, it would receive favoured treatment

Page 30

Multilevel scheduling

– Scheduler uses many lists of ready processes

– Each list has a pair (time slice, priority) associated with it

* The time slice is inversely proportional to the priority

– Simple priority-based scheduling between priority levels

– Round-robin scheduling within each priority level

Q: How to organize the lists for minimum scheduling overhead?

Page 31

Ready queues in a multilevel scheduler

• Each queue header has two pointers—to the first process in the queue and to the next queue header

•  Queue headers are linked in order of reducing priority

•  Provides ‘constant time’ scheduling performance

Page 32

Multilevel adaptive scheduling

Aims at providing a good combination of service and performance

– Adapt treatment of each process to its behaviour

* The priority of a process is varied depending on its recent behaviour

* If the process uses up its time slice, it must be (more) CPU bound than assumed, so provide a larger time slice at a lower priority

– If a process is starved of CPU attention for some time, increase its priority

* Improves response time and turn-around time

Page 33

Example of multilevel adaptive scheduling

(a) Initial state: P3, P2 at the highest priority level and P8, P7 at the lowest

(b) P3 is demoted to the lower priority level and P7 is promoted

Page 34

Fair share scheduling

An application receives less CPU attention when its CPU usage exceeds its fair share

– Lottery scheduling

* Each application is given ‘shares’ to represent its share of CPU time

* The shares may be distributed among its processes

* The scheduler holds a lottery among shares of ready processes to decide the winning share

 Process holding the winning share is scheduled

– Open issues

* If an application is dormant for some time, should it be given more CPU time to ‘make up’ when it is activated?
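The lottery itself is a weighted random draw over the shares held by ready processes. A minimal sketch (the share counts below are made up for illustration):

```python
import random

def hold_lottery(shares, rng):
    # shares: {process: number of shares held}; draw one winning share
    total = sum(shares.values())
    draw = rng.randrange(total)
    for proc, n in shares.items():
        if draw < n:
            return proc          # process holding the winning share runs next
        draw -= n

# Application A's 20 shares are split between its two processes;
# application B's single process holds 5 shares.
shares = {"A1": 10, "A2": 10, "B1": 5}
rng = random.Random(0)
wins = {p: 0 for p in shares}
for _ in range(10_000):
    wins[hold_lottery(shares, rng)] += 1
# Over many lotteries each process wins in proportion to its shares,
# so application A as a whole receives about 80% of the CPU
```

The probabilistic guarantee is only long-run: any single draw can pick any ready process, which is what makes the policy starvation-free.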

Page 35

Real time scheduling

Processes of a real time application must complete before a specified deadline

– Finding the deadline of a process

* Consider precedence between processes of an application, i.e., which process should complete before which other process starts

* Find the deadline of each process of an application from the application’s deadline

 Two kinds of deadlines—starting and completion deadlines

 We assume completion deadlines

– Meeting the deadline

* Use a scheduling policy that ensures that deadlines are met

* In a soft real time system, it suffices to try to meet the deadlines, without meeting them in every instance

Page 36

Real time scheduling

– Static scheduling

* A schedule for servicing the processes of an application is prepared before the application is initiated

 The schedule incorporates precedence and deadlines

 Scheduler uses the schedule during operation of the application

– Priority-based scheduling

* Priorities incorporate criticality, etc

* Simple priority-based scheduling is used

– Dynamic scheduling

* A new process is initiated only if its response requirement can be met

Page 37

Process precedence graph (PPG) for a simple real time system

•  Numbers in circles denote service times of processes

• Edges indicate precedence requirements; e.g., servicing of P1 must complete before servicing of P2 or P3 can begin

Page 38

Deadlines of processes

– Total service time = 25 seconds

– If the deadline for the application as a whole is 35 seconds, and processes do not perform I/O operations

* Process P6 has a deadline of 35 seconds

* Processes P4 and P5 have deadlines of 30 seconds

Q: What are the deadlines of other processes?

Completion deadline of a process Pi = Completion deadline of application − ∑ service times of processes reachable from Pi in the PPG
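This rule can be computed mechanically from the PPG. The graph edges and service times below are assumptions chosen only to be consistent with the slide's stated facts (total service 25 s, P6's deadline 35 s, P4 and P5 at 30 s); the book's actual figure may differ:

```python
def completion_deadlines(service, edges, app_deadline):
    # deadline(Pi) = application deadline - sum of service times of
    # all processes reachable from Pi in the PPG
    succ = {p: set() for p in service}
    for a, b in edges:
        succ[a].add(b)

    def reachable(p, seen=None):
        seen = set() if seen is None else seen
        for q in succ[p]:
            if q not in seen:
                seen.add(q)
                reachable(q, seen)
        return seen

    return {p: app_deadline - sum(service[q] for q in reachable(p))
            for p in service}

# Hypothetical PPG: P1 -> {P2, P3}, P2 -> P4, P3 -> P5, {P4, P5} -> P6
service = {"P1": 2, "P2": 4, "P3": 5, "P4": 4, "P5": 5, "P6": 5}
edges = [("P1", "P2"), ("P1", "P3"), ("P2", "P4"), ("P3", "P5"),
         ("P4", "P6"), ("P5", "P6")]
d = completion_deadlines(service, edges, 35)
print(d["P6"], d["P4"], d["P5"])   # 35 30 30
```

Nothing follows P6, so it keeps the application deadline; every earlier process must leave enough time for all of its successors.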

Page 39

Feasible schedule

– An application has a feasible schedule if there exists at least one sequence in which its processes can be scheduled such that the deadlines of all processes are met

– Schedule the process with the earliest deadline

“If a feasible schedule exists for an application, then earliest deadline first (EDF) scheduling can meet all deadlines.”
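Non-preemptive EDF applied to the sample table (arrivals, service times, and deadlines from the earlier slide) looks like this. Note that for that table no feasible schedule exists in the first place, since the total service time of 16 s already exceeds the latest deadline of 14 s, so even EDF must miss some deadlines (a sketch, not the book's code):

```python
def edf(procs):
    # procs: (name, arrival, service, deadline); non-preemptive:
    # among arrived processes, run the one with the earliest deadline
    pending, t, completion = list(procs), 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:
            t = min(p[1] for p in pending)
            continue
        p = min(ready, key=lambda q: q[3])
        t += p[2]
        completion[p[0]] = t
        pending.remove(p)
    return completion

procs = [("P1", 0, 3, 4), ("P2", 2, 3, 14), ("P3", 3, 2, 6),
         ("P4", 5, 5, 11), ("P5", 9, 3, 12)]
c = edf(procs)
met = [name for name, _, _, d in procs if c[name] <= d]
print(met)   # ['P1', 'P3', 'P4']
```

EDF still meets three of the five deadlines here; the quoted result only promises that EDF meets *all* deadlines when a feasible schedule exists.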

Page 40

Operation of Earliest Deadline First (EDF) scheduling

The notation Pi : n is used to show that Pi has the deadline n

Page 41

Rate Monotonic Scheduling (RMS)

– Rate of a process is the number of times it operates per second

* A process is assigned a priority proportional to its rate

* Priority-based scheduling is now used

– Existence of a feasible schedule

* If a process does not perform I/O

 Share of CPU time used by it = service time / time period

* A feasible schedule exists if ∑ (service time / time period) over all processes ≤ 1

Q: Does RMS guarantee that deadlines will be met if a feasible schedule exists?

Page 42

Rate Monotonic Scheduling (RMS)

                    P1   P2   P3
Time period         10   15   30
CPU time required    3    5    9

(CPU time required = service time if no I/O is performed)

– What are their priorities?

– Does a feasible schedule exist?

* A feasible schedule exists if ∑ (CPU time required / time period) ≤ 1

– Schedule prepared by RMS is

* P1, P2, P3 (2 seconds), P1, P3 (2 seconds), P2, P1, P3 (5 seconds)
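A second-by-second simulation reproduces this behaviour: priorities follow rates (shorter period = higher priority) and scheduling is preemptive. This is a sketch; tie-break details are my assumptions:

```python
def rms_simulate(tasks, horizon):
    # tasks: {name: (period, cpu_time)}; shorter period = higher priority
    order = sorted(tasks, key=lambda n: tasks[n][0])
    remaining = {n: 0 for n in tasks}
    release = {n: 0 for n in tasks}
    finish = {}                               # (task, release time) -> completion
    for t in range(horizon):
        for n in tasks:
            if t % tasks[n][0] == 0:          # new instance released
                remaining[n] = tasks[n][1]
                release[n] = t
        for n in order:                       # run highest-priority ready task
            if remaining[n] > 0:
                remaining[n] -= 1
                if remaining[n] == 0:
                    finish[(n, release[n])] = t + 1
                break
    return finish

tasks = {"P1": (10, 3), "P2": (15, 5), "P3": (30, 9)}
util = sum(c / p for p, c in tasks.values())      # about 0.933 <= 1: feasible
fin = rms_simulate(tasks, 30)
# P3's instance released at t = 0 completes at t = 28, inside its 30 s period.
# Shrink P3's period to 27 s: the identical schedule overruns the deadline,
# so ("P3", 0) never appears in the result of a 27-second run.
fin27 = rms_simulate({"P1": (10, 3), "P2": (15, 5), "P3": (27, 9)}, 27)
```

This is the next slide's point: the utilization test alone does not guarantee that RMS meets every deadline.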

Page 43

Rate Monotonic Scheduling (RMS)

– Not guaranteed (consider P3 to have a time period of 27 sec)

                    P1   P2   P3
Time period         10   15   27
CPU time required    3    5    9

– RMS will prepare an identical schedule (see previous slide)

* P3 will miss its deadline by 1 second

Page 44

Performance analysis

– Performance of a scheduling policy depends on the workload directed at a server, so it must be analyzed in the environment where the server is to be used

– Three methods of performance analysis

* Implement a scheduler, study its performance for real requests

* Simulate the functioning of the scheduler and determine completion times, throughput, etc

* Perform mathematical modeling using

 Model of the server

 Model of the workload

to obtain expressions for service times, etc

Page 45

Mathematical modeling

The workload is characterized by performance characteristics such as arrival times and service times of requests

– Queuing theory is employed

* To provide arrival and service patterns

* Exponential distributions are used because of their memoryless property

 Arrival times: F(t) = 1 − e^(−αt), where α is the mean arrival rate

 Service times: S(t) = 1 − e^(−ωt), where ω is the mean execution rate

* Mean queue length is given by Little’s formula

 L = α × W, where L is the mean queue length and W is the mean wait time for a request
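For a single server with exponential interarrival and service times (an M/M/1 queue), these distributions give closed forms, and Little's formula ties them together. The rates below are made-up example values:

```python
def mm1_metrics(arrival_rate, service_rate):
    # M/M/1 queue: exponential interarrival and service times
    rho = arrival_rate / service_rate         # server utilization
    assert rho < 1, "queue grows without bound"
    W = 1 / (service_rate - arrival_rate)     # mean time a request spends in system
    L = arrival_rate * W                      # Little's formula: L = alpha * W
    return rho, W, L

# 8 requests/s arriving at a server that completes 10 requests/s
rho, W, L = mm1_metrics(8.0, 10.0)
print(rho, W, L)   # 0.8 0.5 4.0
```

Note how steeply L grows as ρ approaches 1: at 80% utilization there are already 4 requests in the system on average.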

Page 46

Summary of performance analysis

ρ = α/ω is the utilization factor of the server

Page 47

Scheduling in Unix

– A larger value implies a lower priority

– Priority of a process is varied based on its CPU usage

 Process priority = Base priority + f(CPU time used by the process)

– Effect of CPU time decays with time, so only recent CPU usage affects priority

Page 48

Operation of a Unix-like scheduling policy when processes perform I/O

•  At time 1 second, P1’s T field contains 60 because it consumed 60 ticks

•  Hence P1’s priority is computed as 60 (base priority) + 30 = 90

Q: How does this policy differ from a round-robin time slicing policy?

Page 49

Fair-share scheduling using a Unix-like policy

– Each group of processes is entitled to a specified share of CPU time

– Priority of a process depends on the CPU time used by all processes in its group

Priority of a process

= Base priority + nice value

+ f(CPU time used by the process)

+ f(CPU time used by all processes in the same group)
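The formula can be sketched directly. Following the earlier slide, where 60 ticks of usage contributed 30 to the priority value, f is taken to halve the recorded CPU usage; that choice, and the process names, are assumptions (the decay function varies across Unix versions):

```python
def priority(base, nice, proc_cpu, group_cpu):
    # Larger value = lower priority (Unix convention).
    f = lambda ticks: ticks // 2   # matches 60 ticks -> +30 on the earlier slide
    return base + nice + f(proc_cpu) + f(group_cpu)

def decay(t_field):
    # Applied periodically to every CPU-usage field, so only
    # recent usage affects priority.
    return t_field // 2

# P1 in group A has just used 60 ticks; P4 in group B is idle.
p1 = priority(60, 0, proc_cpu=60, group_cpu=60)   # 60 + 30 + 30 = 120
p4 = priority(60, 0, proc_cpu=0, group_cpu=0)     # 60
# The group term penalizes P1's whole group: another process in
# group A that used no CPU itself still gets a worse value than P4.
p2 = priority(60, 0, proc_cpu=0, group_cpu=60)    # 60 + 0 + 30 = 90
```

The group term is what converts the plain Unix policy into fair-share scheduling: CPU time consumed anywhere in a group lowers the priority of every process in that group.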

Page 50

Operation of a fair share scheduling policy

• P1, P2, P3 and P5 form one group, P4 forms the other group

• At 2 seconds, P2's effective priority is low because P1 consumed some time

Page 51

Scheduling in Linux

– Real time processes

* Have static priorities between 0 (highest) and 100 (lowest)

* Can be scheduled in FIFO or round-robin manner within each level

– Non-real time process

* Priorities: –20 to +19

* Priority is recomputed periodically according to nature of activity, and also to counter starvation

* Each process receives a quantum (the term for time slice in Linux). A high priority process has a larger quantum

Page 52

Scheduling in Linux

The scheduler maintains two lists of ready processes: an active list and an exhausted list

– The scheduler considers only processes in the active list

– A process from the active list is moved to the exhausted list when its time slice is exhausted

– When the active list is empty, all processes are moved from the exhausted list to the active list

– O(1) scheduling

* Priority of a process is recomputed when its time slice expires, so overheads are spread uniformly
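The two-list design can be sketched in a few lines: the scheduler never scans the exhausted list, and swapping the two lists is a constant-time pointer exchange, which together with per-expiry priority recomputation keeps each decision O(1). The class and method names are mine, not kernel code:

```python
from collections import deque

class O1Scheduler:
    # Sketch of the active/exhausted two-list design
    def __init__(self, processes):
        self.active = deque(processes)
        self.exhausted = deque()

    def next_process(self):
        if not self.active:
            # everyone used their slice: swap the lists (O(1))
            self.active, self.exhausted = self.exhausted, self.active
        return self.active.popleft() if self.active else None

    def slice_expired(self, proc):
        # the process's priority and time slice would be recomputed
        # here, spreading the overhead across slice expiries
        self.exhausted.append(proc)

sched = O1Scheduler(["P1", "P2"])
p = sched.next_process()     # "P1"
sched.slice_expired(p)
p = sched.next_process()     # "P2"
sched.slice_expired(p)
p = sched.next_process()     # lists swapped: "P1" again
```

Because no pass over all processes is ever needed, the cost per scheduling decision does not grow with the number of ready processes.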

Page 53

Scheduling in Windows

– Threads have priorities 1-15, computed from the base priority of the process, the base priority of the thread, and a dynamic component

– Priority is reduced by 1 if the process uses up the time slice

– When a thread blocked on an event wakes up, its priority is incremented depending on the event; it is incremented by 6 when it receives keyboard input

– Priority is increased if a ready thread does not receive the CPU for 3 seconds; it is also given twice the normal burst

Posted: 30/01/2020, 04:58
