Chapter 3 - Processes and Threads. This chapter begins by discussing how an application creates processes through system calls and how the presence of many processes achieves concurrency and parallelism within the application. It then describes how the operating system manages a process: how it uses the notion of process state to keep track of what a process is doing, and how it reflects the effect of an event on the states of affected processes. The chapter also introduces the notion of threads, describes their benefits, and illustrates their features.
Slide 2: What is a process?
– A process is an execution of a program (note the emphasis on 'an': a program may have several executions)
– A programmer uses the notion of a process to achieve concurrency within an application program
– An OS uses the notion of a process to control execution of a program
– It also lets the OS handle sequential and concurrent programs in a uniform manner
Slide 3: Example of processes in an application
– Specification: each data sample must be copied into a file before the next sample arrives
– Four processes are created for the application by making system calls (see next slide)
Slide 4: Tasks in a real-time application for data logging
• Process 1 copies the sample into a buffer in memory
• Process 2 copies the sample from the buffer into the file
• Process 3 performs housekeeping and statistical analysis
Slide 5: Process tree for the real-time application
• The OS creates the primary process when the application is initiated; it is called ‘main’ in this diagram
• The primary process creates the other three processes through system calls; they are its child processes
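The creation of such a process tree can be sketched with the POSIX fork() and wait() system calls. This is a minimal illustration under assumed POSIX semantics, not the book's code; the actual work of each child (copying samples, housekeeping) is elided.

/* Minimal sketch (assumed POSIX environment): a primary process creates
 * three child processes, mirroring the process tree above. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static void child_work(int n) {
    printf("child %d: pid=%d parent=%d\n", n, (int)getpid(), (int)getppid());
    _exit(0);                            /* child terminates after its task */
}

int main(void) {
    for (int i = 1; i <= 3; i++) {
        pid_t pid = fork();              /* system call that creates a child process */
        if (pid < 0) { perror("fork"); exit(1); }
        if (pid == 0)
            child_work(i);               /* executed only in the child */
    }
    while (wait(NULL) > 0)               /* parent waits for all its children */
        ;
    return 0;
}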
Slide 6: Benefits of child processes
– Computation speed-up: child processes can operate concurrently with other processes of an application; it speeds up operation of the application
– Priority for critical functions: a child process that performs a critical function can be assigned a high priority to satisfy its time constraints
– Protecting a parent process from errors: an error in a child process affects only its own operation; however, the parent process is not affected
Slide 7: Concurrency and Parallelism
– Processes of an application can operate independent of one another
Q: Is this concurrency or parallelism?
Slide 8: Concurrency and Parallelism
– Processes of an application can operate independent of one another. Is this concurrency or parallelism?
– Parallelism: operation at the same time. Parallelism is not possible unless the system has enough CPUs on which the processes can be scheduled simultaneously
– Concurrency: operation in a manner that gives the impression of parallelism, but actually only one process can operate at any time
Slide 9: Process interaction
– Processes of an application may interact among themselves in four different ways: data sharing, message passing, control synchronization, and signals
Slide 10: OS view of a process
– The OS uses processes to control the execution of programs
– The OS views a process as an execution of a program
– The OS performs scheduling to organize operation of processes
Slide 11: OS view of a process
– The OS views a process as the tuple (id, code, data, stack, resources, CPU state)
– The process id is used by the OS to uniquely identify the
process
– Code, data and the stack form the address space of the process
– Resources are allocated to the process by the OS
– The CPU state is comprised of the values in the CPU registers and in the fields of the PSW
Slide 12: OS view of a process
• The process environment consists of the process address space and
information concerning various resources allocated to the process
• The PCB contains execution state of the process, e.g., its CPU state
Slide 13: Process environment
– The process environment consists of the information needed for accessing and controlling the resources allocated to a process (it is also called the process context):
– Address space of the process, i.e., code, data, and stack
– Memory allocation information
– Status of file processing activities, e.g., file pointers
– Process interaction information
– Resource information
– Miscellaneous information
Slide 14: The process control block (PCB)
– The PCB contains the information needed by the kernel to control the operation of the process. Its fields are:
– Process id
– Ids of parent and child processes
– Priority
– Process state (defined later)
– CPU state, which is comprised of the PSW and CPU registers
– Event information
– Signal information
– PCB pointer (used for forming a list of PCBs)
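As an illustration, the fields listed above could be gathered into a C structure along the lines of the sketch below. The field names and types are assumptions made for the example; real kernels define their PCBs differently.

/* Hypothetical sketch of a PCB as a C structure; field names and types are
 * illustrative, not taken from any real kernel. */
typedef enum { READY, RUNNING, BLOCKED, TERMINATED } proc_state_t;

typedef struct cpu_state {
    unsigned long psw;                 /* program status word */
    unsigned long regs[16];            /* general-purpose registers */
} cpu_state_t;

typedef struct pcb {
    int            pid;                /* process id */
    int            parent_pid;         /* id of parent process */
    int            child_pids[8];      /* ids of child processes */
    int            priority;
    proc_state_t   state;              /* process state */
    cpu_state_t    cpu;                /* CPU state: PSW and registers */
    int            awaited_event;      /* event information */
    unsigned long  pending_signals;    /* signal information */
    struct pcb    *next;               /* PCB pointer, for forming a list of PCBs */
} pcb_t;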
Slide 15: Fundamental functions for controlling processes
• Occurrence of an event causes an interrupt
• The context save function saves the state of the process that was in operation
• Scheduling selects a process; dispatching switches the CPU to its execution
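A rough, simplified simulation of this cycle in C is sketched below; the structures and function names are illustrative assumptions, since real context saving and dispatching are done in architecture-specific kernel code.

#include <stdio.h>

enum pstate { READY, RUNNING, BLOCKED };

struct pcb { int pid; int priority; enum pstate state; };

static struct pcb procs[3] = {
    { 1, 5, RUNNING }, { 2, 8, BLOCKED }, { 3, 3, READY }
};
static struct pcb *running_proc = &procs[0];

static void save_context(struct pcb *p) {      /* context save function */
    printf("saving CPU state and resource states of process %d\n", p->pid);
    p->state = READY;                          /* the process is preempted */
}

static void handle_event(void) {               /* event handling */
    printf("end of I/O for process 2: changing it to ready\n");
    procs[1].state = READY;
}

static struct pcb *schedule(void) {            /* scheduling: pick a ready process */
    struct pcb *next = NULL;
    for (int i = 0; i < 3; i++)
        if (procs[i].state == READY &&
            (next == NULL || procs[i].priority > next->priority))
            next = &procs[i];
    return next;
}

static void dispatch(struct pcb *p) {          /* dispatching: switch the CPU to p */
    p->state = RUNNING;
    running_proc = p;
    printf("loading context and CPU state of process %d\n", p->pid);
}

int main(void) {                               /* one pass of the cycle, triggered by an interrupt */
    save_context(running_proc);
    handle_event();
    dispatch(schedule());
    return 0;
}

Running it prints the steps in order: the state of the interrupted process is saved, the event unblocks a process, and the highest-priority ready process is dispatched.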
Slide 16: Context save
– The CPU state indicates what the process in operation is doing
at any moment (see Chapter 2)
– Every resource allocated to a process has a state; every activity also has a state. For example, the state of a file processing activity indicates which record was accessed last
– The context save function saves the CPU state of the process, and the states of its resource access activities
– The saved states of the CPU and resources are used for
resuming the process at a later time
Slide 17: Event handling
– Occurrence of an event is notified to the kernel through an interrupt. The kernel now performs the following tasks:
• Perform a context save and preempt the process in operation
Slide 18: Scheduling and dispatching
– Scheduling and dispatching together lead to servicing of a process
– The scheduling function identifies a process for servicing
– The dispatching function loads the context, i.e., process environment, of the identified process so that it starts or resumes its operation
– During its operation, the process accesses its code, data, and resource information from the process environment
Slide 19: Context switch
– Following an event, the kernel may decide to switch the CPU to a new process
– This action is called a context switch
– It is implemented through the following actions: saving the context of the process that was in operation, and loading the context of the new process
Slide 20: Process states
– Some sample process states: running, ready, blocked, terminated
– The kernel keeps track of the state of a process and changes it
as its activity changes
– Different operating systems may use different process states
Slide 21: Fundamental process states
– Running
– Ready
– Blocked
– Terminated
Slide 22: State transitions
– A process has a state
– The state of a process changes when the nature of its activity changes
– Several state transitions can occur before the process
terminates
Q: What are the events that cause transitions between the
fundamental process states?
Slide 23: Fundamental state transitions for a process
• The transition ready → running occurs when the process is dispatched
• running → blocked occurs when it starts I/O or makes a request
• blocked → ready occurs when its I/O completes or its request is granted
• running → ready occurs when the process is preempted, e.g., at the end of its time slice
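These transitions can be summarized as a small state machine. The sketch below is illustrative only; the enum and event names are assumptions, not taken from any particular OS.

#include <stdio.h>

typedef enum { READY, RUNNING, BLOCKED, TERMINATED } pstate;
typedef enum { DISPATCHED, IO_REQUEST, IO_COMPLETE, PREEMPTED, EXITED } pevent;

static pstate transition(pstate s, pevent e) {
    if (s == READY   && e == DISPATCHED)  return RUNNING;     /* ready -> running   */
    if (s == RUNNING && e == IO_REQUEST)  return BLOCKED;     /* running -> blocked */
    if (s == BLOCKED && e == IO_COMPLETE) return READY;       /* blocked -> ready   */
    if (s == RUNNING && e == PREEMPTED)   return READY;       /* running -> ready   */
    if (s == RUNNING && e == EXITED)      return TERMINATED;
    return s;                                                 /* no transition      */
}

int main(void) {
    pstate s = READY;
    s = transition(s, DISPATCHED);    /* process is dispatched       */
    s = transition(s, IO_REQUEST);    /* it starts an I/O operation  */
    s = transition(s, IO_COMPLETE);   /* the I/O operation completes */
    printf("final state: %d (0 means ready)\n", (int)s);
    return 0;
}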
Slide 24: Swapping and process states
– The OS implements swapping through swap-in and swap-out actions
– It uses some new states to implement swapping
– These states distinguish between processes that have been swapped out and those that have not been
– Swap-out and swap-in actions cause a process to enter and exit these new states
– For example, a ready process makes the transition ready → ready swapped when it is swapped out, and ready swapped → ready when it is swapped in
– Analogous transitions connect the blocked and blocked swapped states
Slide 25: Process states and state transitions in swapping
Two new states:
– Ready swapped
– Blocked swapped
Slide 26: Event handling actions of an OS
– Accept an interprocess message
– Deliver an interprocess message
Slide 27: Event handling actions of an OS
Q: Are any arrows missing?
Slide 28: Event handling
– When an event occurs, the OS has to change the states of the processes affected by it
– The OS uses an event control block (ECB) to find the process affected by an event
Slide 29: Event Control Block (ECB)
• An event control block is formed for an event when the OS knows that the event will occur
• For example, the OS knows that an 'end of I/O operation' event will occur for a process when it initiates an I/O operation for it
Slide 30: PCB-ECB interrelationship
• The OS forms an ECB for the 'end of I/O operation' event and records in it the id of the process awaiting the event; when the event occurs, the ECB leads the OS to the PCB of the affected process
Slide 31: Threads
– A thread is a program execution within the context of a process
– Threads are created within a process
– Switching between threads of a process causes less overhead
than switching between processes (Q: Why?)
Slide 32: Thread switching overhead
– A process context switch involves:
1. Saving the context of the process in operation
2. Saving its CPU state
3. Loading the context of the new process
4. Loading its CPU state
– A thread is a program execution within the context of a process (i.e., it uses the resources of a process); many threads can be created within the same process
– Only the CPU state needs to be saved and loaded when the CPU is switched between threads of the same process, since they share the process context
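A minimal POSIX threads sketch of this idea is shown below (the thread count and shared variable are arbitrary assumptions): all threads run within one process, so they see the same data and resources, while each gets its own stack.

#include <pthread.h>
#include <stdio.h>

/* All threads run inside one process, so they share this variable and the
 * process's resources, but each thread gets its own stack. */
static const char *shared_msg = "data in the shared address space";

static void *worker(void *arg) {
    long id = (long)arg;               /* local variables live on this thread's stack */
    printf("thread %ld reads: %s\n", id, shared_msg);
    return NULL;
}

int main(void) {
    pthread_t t[3];
    for (long i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;
}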
Slide 33: Thread switching overhead (contd.)
– If two processes share the same address space and the same resources, they have identical contexts
– A context switch between them involves saving and loading of identical contexts. This overhead is redundant
– In such situations, it is better to use threads
Slide 34: Threads in Process Pi: (a) Concept, (b) Implementation
• Each thread has a stack of its own
• Execution of a thread is managed by creating a
thread control block (TCB) for it
Slide 35: Benefits of threads
– Low switching overhead
– Computation speed-up
(see next slide)
– Efficient communication
Slide 36: Kinds of threads
– Kernel-level threads: threads are created through system calls; the kernel knows of their existence, and schedules them
– User-level threads: threads are created and managed by a thread library, whose routines are linked to become a part of the process code. The kernel is oblivious of user-level threads
– Hybrid threads
Q: Why have three kinds of threads?
A: They have different properties concerning switching overhead,
concurrency and parallelism
Slide 37: Kernel-level and user-level threads
Kernel-level threads:
– Switching is performed by the kernel
User-level threads:
– Switching is performed by the thread library
– A blocking call by a thread blocks the process
– Low parallelism
Slide 38: Scheduling of kernel-level threads
• At a ‘create thread’ call, the kernel creates a thread and a
thread control block (TCB) for it
• Scheduler examines all TCBs and selects one of them
• Dispatcher dispatches the thread corresponding to the selected TCB
Slide 39: Scheduling of user-level threads
• At a ‘create thread’ request, thread library creates a thread and a TCB
• Thread library performs ‘scheduling’ of threads within a process; we call
it ‘mapping’ of a TCB into the PCB of a process
Slide 40: Actions of the thread library (N, R, and B indicate running, ready and blocked states)
Slide 41: Hybrid thread models
– A hybrid thread model uses both kernel-level and user-level threads
– Each user-level thread has a thread control block (TCB)
– Each kernel-level thread has a kernel thread control block (KTCB)
– Three models of associating user-level and kernel-level threads
Slide 42: Associations in hybrid thread models
(a) Many-to-one association: scheduling is done by thread library
(b) One-to-one association: scheduling is done by kernel
(c) Many-to-many association: scheduling by thread library and kernel
Slide 43: Process interaction
– Processes should be able to send signals to other processes and specify signal handling actions to be performed when signals are sent to them
– The OS performs message passing and provides facilities for the other three modes of interaction
Slide 44: An example of data sharing: airline reservations
• Agents answer queries and perform bookings
• They share the reservations data
Slide 45: Race conditions in data sharing
– Results of data sharing may be wrong if race conditions exist
– Suppose processes Pi and Pj perform operations Oi and Oj on shared data ds, and let fi(ds) and fj(ds) represent the value of ds after the respective operations
  * If Oi is performed before Oj, the resulting value of ds is fj(fi(ds))
  * If Oj is performed before Oi, the resulting value of ds is fi(fj(ds))
Q: Why do race conditions arise?
Slide 46: Data sharing by processes of a reservation system
• Process Pi performs actions S1, S2.1
• Process Pj performs actions S1, S2.1, S2.2
• The same seat is allocated to both the processes
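The interleaving can be reproduced with a small POSIX threads sketch in which two threads play the roles of Pi and Pj. The usleep() call merely widens the window in which the other thread can run; the variable name nextseatno is taken from the slide, everything else is an illustrative assumption.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int nextseatno = 1;                     /* shared data */

static void *book_seat(void *name) {
    int seat = nextseatno;                     /* read the shared value          */
    usleep(1000);                              /* window in which the other runs */
    nextseatno = seat + 1;                     /* write back the update          */
    printf("%s allocated seat %d\n", (const char *)name, seat);
    return NULL;
}

int main(void) {
    pthread_t pi, pj;
    pthread_create(&pi, NULL, book_seat, "Pi");
    pthread_create(&pj, NULL, book_seat, "Pj");
    pthread_join(pi, NULL);
    pthread_join(pj, NULL);
    /* both threads are likely to print the same seat number, and
     * nextseatno ends up as 2 instead of 3 */
    printf("final nextseatno = %d\n", nextseatno);
    return 0;
}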
Slide 47: Race condition in the airline reservation system
– The race condition on nextseatno can have two consequences:
– nextseatno may not be updated properly
– Same seat number may be allocated to two passengers
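One standard remedy, sketched here with a POSIX mutex (the slides do not prescribe this particular mechanism), is to make the read-and-update of nextseatno indivisible:

#include <pthread.h>

static int nextseatno = 1;
static pthread_mutex_t seat_lock = PTHREAD_MUTEX_INITIALIZER;

int allocate_seat(void) {
    pthread_mutex_lock(&seat_lock);     /* only one booking at a time          */
    int seat = nextseatno;              /* read the shared data                */
    nextseatno = seat + 1;              /* and update it indivisibly           */
    pthread_mutex_unlock(&seat_lock);
    return seat;                        /* no two callers obtain the same seat */
}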
Slide 48: Race conditions in the airline reservation system
Three possible executions are shown
Q: Which of them has a race condition?
Slide 49: Control synchronization between processes
(a) Initiation of Pj should be delayed until Pi performs si
(b) After performing Sj-1, Pj should be delayed until Pi performs si
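Case (a) can be sketched with a counting semaphore initialized to 0: Pj blocks on it until Pi signals that si has been performed. The sketch below uses POSIX semaphores and threads for brevity; the names Pi, Pj, and si follow the slide, the rest is assumed.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t si_done;                   /* value 0 until Pi performs si */

static void *Pi(void *arg) {
    (void)arg;
    printf("Pi: performing si\n");
    sem_post(&si_done);                 /* signal that si has been performed */
    return NULL;
}

static void *Pj(void *arg) {
    (void)arg;
    sem_wait(&si_done);                 /* Pj is delayed until Pi performs si */
    printf("Pj: proceeding after si\n");
    return NULL;
}

int main(void) {
    pthread_t a, b;
    sem_init(&si_done, 0, 0);
    pthread_create(&b, NULL, Pj, NULL);
    pthread_create(&a, NULL, Pi, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    sem_destroy(&si_done);
    return 0;
}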
Slide 50: Interprocess messages
– Processes exchange information through messages passed by the kernel; a process makes a 'receive' system call when it wishes to receive a message
Slide 51: Benefits of message passing
– Data sharing is not necessary
– Processes may belong to different applications
– Messages cannot be tampered with by a process
– Kernel guarantees correctness; e.g., it knows whether undelivered messages exist for a process
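As one concrete (assumed POSIX) illustration of kernel-mediated message passing, a parent and a child can exchange a message through a pipe; the kernel copies the data, so neither process touches the other's memory.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[64];
    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    if (fork() == 0) {                          /* child: the sender */
        const char *msg = "sample logged";
        write(fd[1], msg, strlen(msg) + 1);     /* 'send': the kernel buffers the message */
        _exit(0);
    }
    read(fd[0], buf, sizeof buf);               /* 'receive': blocks until a message arrives */
    printf("parent received: %s\n", buf);
    wait(NULL);
    return 0;
}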
Slide 52: Signals
– A signal is used to convey the occurrence of an exceptional situation to a process
– A process must anticipate signals from other processes and must provide a signal handler for each signal
– The kernel activates the signal handler when a signal is sent to the process
– Schematic of signals on the next slide
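A minimal sketch (assuming a POSIX system) of anticipating a signal and providing a handler for it:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int sig) {          /* the signal handler the process provides */
    got_signal = sig;
}

int main(void) {
    struct sigaction sa;
    sa.sa_handler = handler;            /* anticipate SIGUSR1 and register a handler */
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGUSR1, &sa, NULL);

    printf("pid %d waiting for SIGUSR1\n", (int)getpid());
    while (!got_signal)
        pause();                        /* kernel activates handler() when the signal arrives */
    printf("handled signal %d\n", (int)got_signal);
    return 0;
}

Sending SIGUSR1 to the process, e.g., with kill -USR1 <pid>, makes the kernel activate the handler.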
Slide 53: Signal handling
Firm arrows: pointers in data structures
Dashed arrows: execution-time actions
Slide 54: Process states in Unix
– A Unix process operates in two modes: user mode and kernel mode
– Accordingly, there are two running states: user running and kernel running
– A scheduled process is in the user running state; it enters the kernel running state when it makes a system call or when an interrupt occurs
– A process can enter the kernel mode even if other processes are blocked in that mode
Slide 55: Process state transitions in Unix
• I/O request: User running → Kernel running → Blocked
• End of time slice: User running → Kernel running → Ready
Slide 56: Threads in Solaris
– User threads: created and managed by the thread library within a process
– Light weight processes (LWP): the thread library maps user threads into LWPs. Several LWPs may be created within a process
– Kernel threads: the kernel associates a kernel thread with each LWP; it also creates some kernel threads for its own use, e.g., a thread to handle disk I/O
– This arrangement provides low switching overhead as well as concurrency and parallelism (see Hybrid models' schematic)
Slide 57: Threads in Solaris
• Mapping between user threads and LWPs is performed by thread library
• Each LWP has a KTCB; scheduling is performed by the kernel’s scheduler
Slide 58: Processes and threads in Linux
– Linux supports kernel-level threads
– Threads and processes are treated alike except at creation
– A thread shares the address space, current directory, open files and signal handlers of its parent process; a process does not share any information of its parent
– A thread or process contains information about its state; the states are described on the next slide
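On Linux both threads and processes are created by the clone() system call; the flags passed to it determine what the child shares with its parent. The sketch below is illustrative only: the flag combination and stack size are assumptions chosen to produce a thread-like child.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>

static int child_fn(void *arg) {
    printf("child sees shared variable = %d\n", *(int *)arg);
    return 0;
}

int main(void) {
    static int shared = 42;
    size_t stack_size = 64 * 1024;
    char *stack = malloc(stack_size);            /* the child needs its own stack */
    /* CLONE_VM, CLONE_FS, CLONE_FILES, CLONE_SIGHAND make the child share the
     * address space, current directory, open files and signal handlers of its
     * parent (thread-like); omitting them gives a process-like child */
    int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;
    int pid = clone(child_fn, stack + stack_size, flags, &shared);
    if (pid == -1) { perror("clone"); return 1; }
    waitpid(pid, NULL, 0);
    free(stack);
    return 0;
}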
Slide 59: Processes and threads in Linux
– Task_running: scheduled or waiting to be scheduled
– Task_interruptible: sleeping on an event, but may receive a
signal
– Task_uninterruptible: sleeping and may not receive a signal
– Task_stopped: operation has been stopped by a signal
– Task_zombie: operation completed, but its parent has not issued
a system call to check whether it has terminated
Slide 60: Processes and threads in Windows
– A thread is a unit for concurrency. Hence each process must have at least one thread in it
– Standby: Thread has been selected to run on a CPU
– Transition: Thread is ready to run but its kernel stack has been swapped out
Slide 61: Thread state transitions in Windows 2000