
Lecture Operating systems: A concept-based approach (2/e): Chapter 3 - Dhananjay M. Dhamdhere


Document information: 61 pages, 1.13 MB

Chapter 3 - Processes and threads. This chapter begins by discussing how an application creates processes through system calls and how the presence of many processes achieves concurrency and parallelism within the application. It then describes how the operating system manages a process - how it uses the notion of process state to keep track of what a process is doing and how it reflects the effect of an event on states of affected processes. The chapter also introduces the notion of threads, describes their benefits, and illustrates their features.

Page 1

No part of this material may be reproduced or distributed in any form or by any means, without the prior written permission of the publisher, or used beyond the limited distribution to teachers and educators permitted by McGraw-Hill

Page 2

What is a process?

– A process is an execution of a program (note the emphasis on 'an')

– A programmer uses the notion of a process to achieve concurrency within an application program

– An OS uses the notion of a process to control execution of a program; it lets the OS handle sequential and concurrent programs in a uniform manner

Page 3

Example of processes in an application

– Specification: a real-time data logging application must process each data sample before the next sample arrives

– Four processes are created for the application (see next slide) by making system calls

Page 4

Tasks in a real-time application for data logging

• Process 1 copies the sample into a buffer in memory

• Process 2 copies the sample from the buffer into the file

• Process 3 performs housekeeping and statistical analysis

Page 5

Process tree for the real-time application

• The OS creates the primary process when the application is initiated; it is called ‘main’ in this diagram

• The primary process creates the other three processes through system calls; they are its child processes
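The creation of child processes through system calls can be sketched with Python's `os.fork` wrapper over the fork system call. This is an illustrative sketch, not the application from the slides; the function name `create_children` and the child behavior are assumptions.

```python
import os

def create_children(n):
    """Primary process creates n child processes through fork()
    system calls, then waits for each of them to terminate."""
    pids = []
    for _ in range(n):
        pid = os.fork()          # system call: duplicates the calling process
        if pid == 0:
            os._exit(0)          # child process: terminate immediately
        pids.append(pid)         # parent process: record the child's id
    for pid in pids:
        os.waitpid(pid, 0)       # parent reaps each terminated child
    return pids

if __name__ == "__main__":
    print(len(create_children(3)))   # 3
```

In a real data-logging application each child would run its own task (copying samples, writing the file, housekeeping) instead of exiting immediately.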

Page 6

Benefits of child processes

– Computation speed-up: the OS can interleave or overlap operation of the child processes of an application; it speeds up operation of the application

– Priority for critical functions: a child process that performs a critical function can be assigned a high priority to satisfy its time constraints

– Protecting a parent process from errors: an error in a child process may disrupt its operation; however, the parent process is not affected

Page 7

Concurrency and Parallelism

– Processes of an application can operate independently of one another

Q: Is this concurrency or parallelism?

Page 8

Concurrency and Parallelism

– Processes of an application can operate independently of one another. Is this concurrency or parallelism?

– Parallelism: operation at the same time. Parallelism is not possible unless the processes can be scheduled simultaneously

– Concurrency: operation in a manner that gives the impression of parallelism, but actually only one process can operate at any time

Page 9

Process interaction

– Processes of an application may interact among themselves in four different ways

Page 10

OS view of a process

– The OS uses the notion of a process to control execution of programs

– The OS views a process as an execution of a program

– The OS performs scheduling to organize operation of processes

Page 11

OS view of a process

– The OS views a process as the tuple (id, code, data, stack, resources, CPU state)

– The process id is used by the OS to uniquely identify the process

– Code, data, and the stack form the address space of the process

– Resources are allocated to the process by the OS

– The CPU state comprises the values in the CPU registers and in the fields of the PSW

Page 12

OS view of a process

• The process environment consists of the process address space and

information concerning various resources allocated to the process

• The PCB contains execution state of the process, e.g., its CPU state

Page 13

Process environment

– The process environment consists of the information needed for accessing and controlling resources allocated to a process (it is also called the process context):

– Address space of the process, i.e., code, data, and stack

– Memory allocation information

– Status of file processing activities, e.g., file pointers

– Process interaction information

– Resource information

– Miscellaneous information

Page 14

The process control block (PCB)

– The PCB contains the information that the kernel needs to control operation of the process. Its fields are:

– Process id

– Ids of parent and child processes

– Priority

– Process state (defined later)

– CPU state, which comprises the PSW and CPU registers

– Event information

– Signal information

– PCB pointer (used for forming a list of PCBs)
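The PCB fields listed above can be sketched as a data structure. The field names follow the slide's list, but the layout is illustrative, not any real kernel's.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PCB:
    """Illustrative process control block; fields follow the slide's list."""
    pid: int                                        # process id
    parent_pid: Optional[int] = None                # id of the parent process
    child_pids: List[int] = field(default_factory=list)   # ids of child processes
    priority: int = 0
    state: str = "ready"                            # process state (defined later)
    cpu_state: dict = field(default_factory=dict)   # PSW and CPU register values
    event_info: Optional[str] = None                # event the process awaits
    pending_signals: List[int] = field(default_factory=list)
    next_pcb: Optional["PCB"] = None                # for forming a list of PCBs
```

The `next_pcb` pointer is what lets the kernel chain PCBs into lists, e.g., a ready list or a free list.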

Page 15

Fundamental functions for controlling processes

• Occurrence of an event causes an interrupt

• The context save function saves the state of the process that was in operation

• Scheduling selects a process; dispatching switches the CPU to its execution

Page 16

Context save

– The CPU state indicates what the process in operation is doing at any moment (see Chapter 2)

– Every resource allocated to a process has a state; every activity also has a state. For example, the state of a file processing activity indicates which record was accessed last

– The context save function saves the CPU state of the process, and the states of its resource access activities

– The saved states of the CPU and resources are used for resuming the process at a later time

Page 17

Event handling

– Occurrence of an event is notified to the kernel through an interrupt. The kernel now performs the following tasks:

– It saves the context of, and preempts, the process in operation, and then processes the event

Page 18

Scheduling and dispatching

– These functions switch the CPU to servicing of a process

– The scheduling function identifies a process for servicing

– The dispatching function loads the context, i.e., the process environment, of the identified process so that it starts or resumes its operation; it loads this information from the process environment

Page 19

Context switch

– The kernel may decide to switch the CPU to a new process

– This action is called a context switch

– It is implemented through the following actions: a context save for the process in operation, followed by dispatching of the new process

Page 20

Process states

– Some sample process states are running, ready, blocked, and terminated

– The kernel keeps track of the state of a process and changes it as its activity changes

– Different operating systems may use different process states

Page 21

Fundamental process states

– Running

– Ready

– Blocked

– Terminated

Page 22

State transitions

– A process has a state

– The state of a process changes when the nature of its activity changes

– Several state transitions can occur before the process terminates

Q: What are the events that cause transitions between the fundamental process states?

Page 23

Fundamental state transitions for a process

• The transition ready → running occurs when the process is dispatched

• running → blocked occurs when it starts I/O or makes a request

• blocked → ready occurs when its I/O completes or its request is granted
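These fundamental transitions can be sketched as a small table keyed by (state, event). The event names paraphrase the slide; the dictionary form is an assumption for illustration, not how a kernel stores them.

```python
# Fundamental state transitions, keyed by (current state, event).
TRANSITIONS = {
    ("ready", "dispatched"): "running",
    ("running", "starts I/O or makes a request"): "blocked",
    ("running", "preempted"): "ready",
    ("blocked", "I/O completes or request granted"): "ready",
    ("running", "completes"): "terminated",
}

def next_state(state, event):
    """Return the new state, or raise if the transition is illegal."""
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"no transition from {state!r} on {event!r}")
    return TRANSITIONS[(state, event)]
```

Note that blocked → running is absent: a process whose I/O completes must pass through the ready state and be dispatched again before it runs.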

Page 24

Swapping and process states

– The OS implements swapping through swap-in and swap-out actions

– It uses some new states to implement swapping; they distinguish between processes that have been swapped out and those that have not been

– Swap-out and swap-in actions make a process enter and exit these new states, e.g., ready → ready swapped when a process is swapped out, and ready swapped → ready when it is swapped in

– Analogous transitions exist between the blocked and blocked swapped states

Page 25

Process states and state transitions in swapping

Two new states:

– Ready swapped

– Blocked swapped

Page 26

Event handling actions of an OS

– Accept an interprocess message

– Deliver an interprocess message

Page 27

Event handling actions of an OS

Q: Are any arrows missing?

Page 28

Event handling

– The OS has to change states of affected processes

– The OS uses an event control block (ECB) to find the process affected by an event

Page 29

Event Control Block (ECB)

• An event control block is formed for an event when the OS knows that the event will occur

• For example, the OS forms an ECB for the end of a process's I/O operation when it initiates an I/O operation for it

Page 30

PCB-ECB interrelationship

• The OS forms an ECB for the 'end of I/O operation' event and records in it the id of the process awaiting the event

Page 31

Threads

– A thread is a program execution that uses the environment of a process

– Threads are created within a process

– Switching between threads of a process causes less overhead than switching between processes (Q: Why?)

Page 32

Thread switching overhead

– A process context switch involves:

1. Saving the context of the process in operation

2. Saving its CPU state

3. Loading the context of the new process

4. Loading its CPU state

– A thread is a program execution within the context of a process (i.e., it uses the resources of a process); many threads can be created within the same process

– Only the CPU state needs to be saved and loaded when switching between threads of the same process

Page 33

– If two processes share the same address space and the same resources, they have identical contexts

– Switching between such processes involves saving and loading of identical contexts; this overhead is redundant

– In such situations, it is better to use threads

Page 34

Threads in Process Pi: (a) Concept, (b) Implementation

• Each thread has a stack of its own

• Execution of a thread is managed by creating a thread control block (TCB) for it
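The two points above can be illustrated with Python's threading module: every thread sees the same data of the process, while each thread's local variables live on its own stack. The worker function and values are made up for illustration.

```python
import threading

shared = []                     # process data: visible to every thread
lock = threading.Lock()

def worker(tid):
    local = tid * 10            # local variable: lives on this thread's own stack
    with lock:
        shared.append(local)    # all threads update the same shared list

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))           # [0, 10, 20, 30]
```

The `threading.Thread` object here plays the role of a TCB: it is the per-thread record through which the thread's execution is managed.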

Page 35

Benefits of threads

– Low switching overhead

– Computation speed-up (see next slide)

– Efficient communication

Page 36

Kinds of threads

– Kernel-level threads: the kernel creates them, knows of their existence, and schedules them

– User-level threads: they are managed by a thread library whose routines are linked to become a part of the process code. The kernel is oblivious of user-level threads

– Hybrid threads: a combination of kernel-level and user-level threads

Q: Why have three kinds of threads?

A: They have different properties concerning switching overhead, concurrency and parallelism

Page 37

Kernel-level and user-level threads

Kernel-level threads:

– Switching is performed by the kernel

User-level threads:

– Switching is performed by the thread library

– A blocking call by a thread blocks the process

– Low parallelism

Page 38

Scheduling of kernel-level threads

• At a 'create thread' call, the kernel creates a thread and a thread control block (TCB) for it

• Scheduler examines all TCBs and selects one of them

• Dispatcher dispatches the thread corresponding to the selected TCB

Page 39

Scheduling of user-level threads

• At a 'create thread' request, the thread library creates a thread and a TCB

• Thread library performs 'scheduling' of threads within a process; we call it 'mapping' of a TCB into the PCB of a process

Page 40

Actions of the thread library (N, R, and B indicate running, ready and blocked states)

Page 41

Hybrid thread models

– A hybrid model uses both kernel-level and user-level threads

– Each user-level thread has a thread control block (TCB)

– Each kernel-level thread has a kernel thread control block (KTCB)

– There are three models of associating user-level and kernel-level threads

Page 42

Associations in hybrid thread models

(a) Many-to-one association: scheduling is done by thread library

(b) One-to-one association: scheduling is done by kernel

(c) Many-to-many association: scheduling by thread library and kernel

Page 43

– Processes should be able to send signals to other processes and specify signal handling actions to be performed when signals are sent to them

– The OS performs message passing and provides facilities for the other three modes of interaction

Page 44

An example of data sharing: airline reservations

• Agents answer queries and perform bookings

• They share the reservations data

Page 45

Race conditions in data sharing

– Results of data sharing can be wrong if race conditions exist

– Let fi(ds) and fj(ds) represent the value of shared data ds after operations Oi and Oj, respectively

– If processes Pi and Pj perform operations Oi and Oj on ds:

* If Oi is performed before Oj, the resulting value of ds is fj(fi(ds))

* If Oj is performed before Oi, the resulting value of ds is fi(fj(ds))

Q: Why do race conditions arise?

Page 46

Data sharing by processes of a reservation system

• Process Pi performs actions S1, S2.1

• Process Pj performs actions S1, S2.1, S2.2

• The same seat is allocated to both the processes
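The interleaving on this slide can be replayed deterministically: both processes read nextseatno before either writes it back. The class and function names and the starting seat number are illustrative.

```python
class SharedData:
    """Shared reservations data ds; the starting seat number is made up."""
    def __init__(self):
        self.nextseatno = 10

def interleaved_booking(ds):
    """Replay the racy interleaving: both processes read before either writes."""
    seat_i = ds.nextseatno      # Pi performs S1: read the shared counter
    seat_j = ds.nextseatno      # Pj performs S1 before Pi's update: race!
    ds.nextseatno = seat_i + 1  # Pi performs S2.1: write back
    ds.nextseatno = seat_j + 1  # Pj performs S2.1: overwrites Pi's update
    return seat_i, seat_j

ds = SharedData()
print(interleaved_booking(ds))  # (10, 10): the same seat, twice
print(ds.nextseatno)            # 11, not 12: one update was lost
```

With real concurrent processes the interleaving depends on scheduling, so the error appears only sometimes; that is exactly what makes race conditions hard to find.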

Page 47

Race condition in the airline reservation system

– A race condition can have two consequences:

– nextseatno may not be updated properly

– The same seat number may be allocated to two passengers

Page 48

Race conditions in the airline reservations system

Three possible executions are shown. Q: Which of them has a race condition?

Page 49

Control synchronization between processes

(a) Initiation of Pj should be delayed until Pi performs si

(b) After performing Sj-1, Pj should be delayed until Pi performs si

Page 50

Interprocess messages

– A process makes a system call to send a message to another process, and a system call to indicate that it wishes to receive a message

Page 51

Benefits of message passing

– Data sharing is not necessary

– Processes may belong to different applications

– Messages cannot be tampered with by a process

– Kernel guarantees correctness of message delivery, e.g., it knows whether undelivered messages exist for a process
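Message passing can be sketched with Python's multiprocessing queues, which let two processes communicate without sharing data. The message text and function names are made up for illustration.

```python
from multiprocessing import Process, Queue

def sender(q):
    q.put("sample logged")       # send a message: no shared data is needed

def send_and_receive():
    """Pass one message from a child process to its parent."""
    q = Queue()
    child = Process(target=sender, args=(q,))
    child.start()
    msg = q.get()                # receive: blocks until a message is delivered
    child.join()
    return msg

if __name__ == "__main__":
    print(send_and_receive())    # sample logged
```

The queue is managed outside the two user processes, which is why neither process can tamper with a message in transit.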

Page 52

Signals

– Signals are used to convey the occurrence of exceptional situations

– A process must anticipate signals from other processes and must provide a signal handler for each signal

– The kernel activates the signal handler when a signal is sent to the process

– Schematic of signals on the next slide
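A minimal sketch of this scheme, using Python's signal module on a Unix system: the process registers a handler in advance, and the handler is activated when the signal is delivered. SIGUSR1 is chosen arbitrarily.

```python
import os
import signal

received = []

def handler(signum, frame):
    # activated when the kernel delivers the signal to this process
    received.append(signum)

# the process anticipates the signal and provides a handler for it
signal.signal(signal.SIGUSR1, handler)

# send the signal to this very process; the handler runs on delivery
os.kill(os.getpid(), signal.SIGUSR1)
```

After `os.kill` returns, `received` holds the delivered signal number, showing that control passed through the handler.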

Page 53

Signal handling

Firm arrows: pointers in data structures. Dashed arrows: execution-time actions

Page 54

Process states in Unix

– A Unix process runs in user mode and in kernel mode; accordingly, there are two running states: user running and kernel running

– A process enters the user running state when it is scheduled, and may make a system call to enter the kernel running state

– A process can enter kernel mode even if other processes are blocked in that mode

Page 55

Process state transitions in Unix

• I/O request: User running → Kernel running → Blocked

• End of time slice: User running → Kernel running → Ready

Page 56

Threads in Solaris

– User threads: created and managed by the thread library

– Light weight processes (LWP): the thread library maps user threads into LWPs. Several LWPs may be created within a process

– Kernel threads: each LWP has a kernel thread; the kernel also creates some kernel threads for its own use, e.g., a thread to handle disk I/O

– This arrangement provides both concurrency and parallelism (see the hybrid models' schematic)

Page 57

Threads in Solaris

• Mapping between user threads and LWPs is performed by thread library

• Each LWP has a KTCB; scheduling is performed by the kernel’s scheduler

Page 58

Processes and threads in Linux

– Linux supports kernel-level threads

– Threads and processes are treated alike except at creation: a thread shares the address space, current directory, open files and signal handlers of its parent process; a process does not share any information of its parent

– A thread or process contains information about its state

Page 59

Processes and threads in Linux

– Task_running: scheduled or waiting to be scheduled

– Task_interruptible: sleeping on an event, but may receive a signal

– Task_uninterruptible: sleeping and may not receive a signal

– Task_stopped: operation has been stopped by a signal

– Task_zombie: operation completed, but its parent has not issued a system call to check whether it has terminated

Page 60

Processes and threads in Windows

– A thread is the unit for concurrency; hence each process must have at least one thread in it

– Standby: thread has been selected to run on a CPU

– Transition: kernel stack has been swapped out

Page 61

Thread state transitions in Windows 2000

Posted: 30/01/2020, 01:03