A cooperating process is one that can affect or be affected by other processes executing in the system. Cooperating processes can either directly share a logical address space (that is, both code and data) or be allowed to share data only through files or messages. The former case is achieved through the use of threads, discussed in Chapter 4. Concurrent access to shared data may result in data inconsistency, however. In this chapter, we discuss various mechanisms to ensure the orderly execution of cooperating processes that share a logical address space, so that data consistency is maintained.
Chapter 7: Process Synchronization
■ The shared-memory solution to the bounded-buffer problem (Chapter 4) allows at most n – 1 items in the buffer at the same time. A solution where all n buffers are used is not simple.
✦ Suppose that we modify the producer-consumer code by adding a variable counter, initialized to 0 and incremented each time a new item is added to the buffer.
Bounded-Buffer
■ Shared data
#define BUFFER_SIZE 10
typedef struct {
    …
} item;
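A sketch of the modified producer-consumer code described above; the circular-buffer variables buffer, in, out, and counter, and the local variables nextProduced and nextConsumed, are assumed here:

    item buffer[BUFFER_SIZE];
    int in = 0;                        /* next free slot */
    int out = 0;                       /* first full slot */
    int counter = 0;                   /* number of items currently in the buffer */

    /* producer */
    while (1) {
        /* produce an item in nextProduced */
        while (counter == BUFFER_SIZE)
            ;                          /* buffer full: do nothing */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }

    /* consumer */
    while (1) {
        while (counter == 0)
            ;                          /* buffer empty: do nothing */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
        /* consume the item in nextConsumed */
    }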
■ The statements counter++ and counter-- must be performed atomically.
■ An atomic operation is one that completes in its entirety without interruption.
Bounded Buffer
■ The statement "counter++" may be implemented in machine language as:
    register1 = counter
    register1 = register1 + 1
    counter = register1
■ The statement "counter--" may be implemented as:
    register2 = counter
    register2 = register2 – 1
    counter = register2
■ Interleaving depends upon how the producer and consumer processes are scheduled.
Bounded Buffer
■ Assume counter is initially 5. One interleaving of statements is:
    producer: register1 = counter           (register1 = 5)
    producer: register1 = register1 + 1     (register1 = 6)
    consumer: register2 = counter           (register2 = 5)
    consumer: register2 = register2 – 1     (register2 = 4)
    producer: counter = register1           (counter = 6)
    consumer: counter = register2           (counter = 4)
■ The value of counter may be either 4 or 6, where the correct result should be 5.
Race Condition
■ Race condition: the situation where several processes access and manipulate shared data concurrently. The final value of the shared data depends upon which process finishes last.
■ To prevent race conditions, concurrent processes must be synchronized.
The Critical-Section Problem
■ n processes all competing to use some shared data
■ Each process has a code segment, called critical section,
in which the shared data is accessed
■ Problem – ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
Solution to Critical-Section Problem
1. Mutual Exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
2. Progress. If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter the critical section next cannot be postponed indefinitely.
3. Bounded Waiting. A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
    ✦ Assume that each process executes at a nonzero speed.
    ✦ No assumption is made concerning the relative speed of the n processes.
Initial Attempts to Solve Problem
■ Only two processes, P0 and P1.
■ General structure of process Pi (the other process is Pj); see the sketch below.
■ Processes may share some common variables to synchronize their actions.
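A minimal sketch of that general structure; the entry and exit sections are the placeholders each algorithm below fills in:

    do {
        /* entry section: request permission to enter */
            /* critical section */
        /* exit section: release the critical section */
            /* remainder section */
    } while (1);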
Algorithm 2
■ Shared variables
    boolean flag[2];    // initially flag[0] = flag[1] = false
■ Process Pi sets flag[i] = true before entering; the full loop is sketched below.
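A sketch of the complete algorithm, assuming the standard flag-based protocol in which j = 1 - i denotes the other process:

    boolean flag[2] = {false, false};

    do {
        flag[i] = true;              /* announce intent to enter */
        while (flag[j])
            ;                        /* wait while the other process is interested */
        /* critical section */
        flag[i] = false;             /* exit section */
        /* remainder section */
    } while (1);

Mutual exclusion is satisfied, but progress is not: if both processes set their flags before either tests the other's, both loop forever.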
Bakery Algorithm
■ Critical section for n processes.
■ Before entering its critical section, a process receives a number. The holder of the smallest number enters the critical section.
■ If processes Pi and Pj receive the same number: if i < j, then Pi is served first; else Pj is served first.
■ The numbering scheme always generates numbers in increasing order of enumeration; i.e., 1, 2, 3, 3, 3, 3, 4, 5.
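A sketch of the bakery algorithm for process Pi, assuming shared arrays choosing[n] and number[n] (initially all false and 0) and the textbook's lexicographic comparison, where (a, b) < (c, d) means a < c, or a == c and b < d:

    boolean choosing[n];                   /* true while a process picks its number */
    int number[n];                         /* 0 means "not trying to enter" */

    do {
        choosing[i] = true;
        number[i] = 1 + max(number[0], …, number[n-1]);
        choosing[i] = false;
        for (j = 0; j < n; j++) {
            while (choosing[j])
                ;                          /* wait until Pj has its number */
            while ((number[j] != 0) && ((number[j], j) < (number[i], i)))
                ;                          /* wait for processes ahead of Pi */
        }
        /* critical section */
        number[i] = 0;
        /* remainder section */
    } while (1);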
Mutual Exclusion with Test-and-Set
■ Shared data:
boolean lock = false;
■ Process Pi spins with while (TestAndSet(lock)); on entry; the TestAndSet definition and the full loop are sketched below.
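A sketch of the TestAndSet primitive and the full loop built on it; the hardware is assumed to execute TestAndSet atomically:

    boolean TestAndSet(boolean &target) {
        boolean rv = target;          /* remember the old value of the lock */
        target = true;                /* set the lock unconditionally */
        return rv;                    /* old value false means we acquired it */
    }

    do {
        while (TestAndSet(lock))
            ;                         /* spin until the lock was previously free */
        /* critical section */
        lock = false;                 /* release the lock */
        /* remainder section */
    } while (1);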
■ Atomically swap two variables
void Swap(boolean &a, boolean &b) {
    boolean temp = a;
a = b;
b = temp;
}
Mutual Exclusion with Swap
■ Shared data (initialized to false):
boolean lock;
boolean waiting[n];
■ Process Pi sets key = true and repeatedly executes Swap(lock, key) while key remains true; the full loop is sketched below.
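A sketch of the complete loop; key is a boolean local to each process:

    do {
        key = true;
        while (key == true)
            Swap(lock, key);          /* loop ends when the swap returns lock == false */
        /* critical section */
        lock = false;                 /* release the lock */
        /* remainder section */
    } while (1);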
Semaphores
■ Synchronization tool that does not require busy waiting.
■ Semaphore S – integer variable.
■ Can only be accessed via two indivisible (atomic) operations, wait(S) and signal(S), sketched below.
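A sketch of the two operations in their classical (busy-waiting) form; a blocking implementation appears later with the semaphore record:

    wait(S) {
        while (S <= 0)
            ;          /* busy wait until the semaphore becomes positive */
        S--;
    }

    signal(S) {
        S++;
    }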
Critical Section of n Processes
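A sketch of the usual n-process protocol, assuming a shared semaphore mutex initialized to 1:

    semaphore mutex = 1;      /* shared by all n processes */

    /* process Pi */
    do {
        wait(mutex);
        /* critical section */
        signal(mutex);
        /* remainder section */
    } while (1);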
Semaphore Implementation
■ Define a semaphore as a record:
    typedef struct {
        int value;
        struct process *L;
    } semaphore;
■ Assume two simple operations:
    ✦ block suspends the process that invokes it.
    ✦ wakeup(P) resumes the execution of a blocked process P.
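A sketch of wait and signal over this record, built from block and wakeup; a negative value counts the processes waiting on the semaphore:

    void wait(semaphore S) {
        S.value--;
        if (S.value < 0) {
            /* add this process to S.L */
            block();
        }
    }

    void signal(semaphore S) {
        S.value++;
        if (S.value <= 0) {
            /* remove a process P from S.L */
            wakeup(P);
        }
    }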
Semaphore as a General Synchronization Tool
■ Execute statement B in Pj only after statement A has executed in Pi.
■ Use a semaphore flag initialized to 0, as sketched below.
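A minimal sketch of this ordering idiom:

    semaphore flag = 0;

    /* process Pi */
    A;
    signal(flag);

    /* process Pj */
    wait(flag);
    B;

Pj blocks at wait(flag) until Pi has executed A and signaled, so B always follows A.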
Deadlock and Starvation
■ Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
■ Let S and Q be two semaphores initialized to 1; a deadlocking interleaving is sketched below.
■ Starvation – indefinite blocking. A process may never be removed from the semaphore queue in which it is suspended.
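A sketch of the classic deadlocking interleaving: each process acquires the two semaphores in the opposite order, so each ends up waiting for a signal only the other can provide.

            P0                     P1
        wait(S);               wait(Q);
        wait(Q);               wait(S);
          …                      …
        signal(S);             signal(Q);
        signal(Q);             signal(S);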
Two Types of Semaphores
■ Counting semaphore – integer value can range over
an unrestricted domain
■ Binary semaphore – integer value can range only
between 0 and 1; can be simpler to implement
■ Can implement a counting semaphore S as a binary
semaphore
Implementing S as a Binary Semaphore
■ Data structures:
    binary-semaphore S1, S2;
    int C;
■ Initialization:
    S1 = 1
    S2 = 0
    C = initial value of semaphore S
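A sketch of the corresponding wait operation; the signal operation shown next is its counterpart:

■ wait operation
    wait(S1);
    C--;
    if (C < 0) {
        signal(S1);
        wait(S2);
    }
    signal(S1);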
■ signal operation
    wait(S1);
    C++;
    if (C <= 0)
        signal(S2);
    else
        signal(S1);
Classical Problems of Synchronization
■ Bounded-Buffer Problem
■ Readers-Writers Problem
■ Dining-Philosophers Problem
Bounded-Buffer Problem Producer Process
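The producer and consumer code below assumes the usual shared data for this problem; a sketch of those declarations, with the customary initial values:

    semaphore full = 0;        /* counts occupied buffer slots */
    semaphore empty = n;       /* counts free buffer slots */
    semaphore mutex = 1;       /* mutual exclusion on the buffer itself */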
do {
    …
    produce an item in nextp
    …
    wait(empty);
    wait(mutex);
    …
    add nextp to buffer
    …
    signal(mutex);
    signal(full);
} while (1);
Bounded-Buffer Problem Consumer Process
do {
    wait(full);
    wait(mutex);
    …
    remove an item from buffer to nextc
    …
    signal(mutex);
    signal(empty);
    …
    consume the item in nextc
    …
} while (1);
Readers-Writers Problem Reader Process
wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
…
reading is performed
…
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);
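The reader code above assumes the usual shared data; a sketch of those declarations and of the writer process:

    /* shared data */
    semaphore mutex = 1;       /* protects readcount */
    semaphore wrt = 1;         /* held by a writer, or by readers as a group */
    int readcount = 0;

    /* writer process */
    wait(wrt);
    …
    writing is performed
    …
    signal(wrt);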
Dining-Philosophers Problem
■ Shared data
semaphore chopstick[5];    /* initially all values are 1 */
Dining-Philosophers Problem
■ Philosopher i:
do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    …
    eat
    …
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    …
    think
    …
} while (1);
Critical Regions
■ High-level synchronization construct
■ A shared variable v of type T is declared as:
v: shared T
■ Variable v accessed only inside statement
region v when B do S
where B is a boolean expression.
■ While statement S is being executed, no other process can access variable v.
Critical Regions
■ Regions referring to the same shared variable exclude each other in time.
■ When a process tries to execute the region statement, the Boolean expression B is evaluated. If B is true, statement S is executed. If it is false, the process is delayed until B becomes true and no other process is in the region associated with v.
Example – Bounded Buffer
■ Shared data:
struct buffer {
    int pool[n];
    int count, in, out;
};
Bounded Buffer Producer Process
■ Producer process inserts nextp into the shared buffer
region buffer when (count < n) {
    pool[in] = nextp;
    in = (in + 1) % n;
    count++;
}
Bounded Buffer Consumer Process
■ Consumer process removes an item from the shared
buffer and puts it in nextc
region buffer when (count > 0) {
    nextc = pool[out];
    out = (out + 1) % n;
    count--;
}
Implementation of region x when B do S
■ Associate with the shared variable x, the following
variables:
semaphore mutex, first-delay, second-delay;
int first-count, second-count;
■ Mutually exclusive access to the critical section is
provided by mutex.
■ If a process cannot enter the critical section because the Boolean expression B is false, it initially waits on the first-delay semaphore; it is moved to the second-delay semaphore before it is allowed to reevaluate B.
Implementation
■ Keep track of the number of processes waiting on first-delay and second-delay, with first-count and second-count, respectively.
■ The algorithm assumes a FIFO ordering in the queuing of processes for a semaphore.
■ For an arbitrary queuing discipline, a more complicated implementation is required.
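A sketch of the resulting protocol for region x when B do S, following the scheme described above; mutex is assumed initialized to 1, first-delay and second-delay to 0, and both counts to 0:

    wait(mutex);
    while (!B) {
        first-count++;
        if (second-count > 0)
            signal(second-delay);     /* let a second-delay waiter retest B */
        else
            signal(mutex);
        wait(first-delay);
        first-count--;
        second-count++;
        if (first-count > 0)
            signal(first-delay);      /* pass control along the first-delay queue */
        else
            signal(second-delay);
        wait(second-delay);
        second-count--;
    }
    S;
    if (first-count > 0)
        signal(first-delay);
    else if (second-count > 0)
        signal(second-delay);
    else
        signal(mutex);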
Monitors
■ High-level synchronization construct that allows the safe sharing
of an abstract data type among concurrent processes
monitor monitor-name
{
    shared variable declarations
    procedure body P1 (…) { … }
    procedure body P2 (…) { … }
    procedure body Pn (…) { … }
    {
        initialization code
    }
}
Monitors
■ To allow a process to wait within the monitor, a
condition variable must be declared, as
condition x, y;
■ Condition variable can only be used with the
operations wait and signal.
Schematic View of a Monitor
Monitor With Condition Variables
Dining Philosophers Example
monitor dp
{
enum {thinking, hungry, eating} state[5];
condition self[5];
    void pickup(int i)     // following slides
    void putdown(int i)    // following slides
    void test(int i)       // following slides
    void init() {
        for (int i = 0; i < 5; i++)
            state[i] = thinking;
    }
}
void pickup(int i) {
    state[i] = hungry;
    test(i);
    if (state[i] != eating)
        self[i].wait();
}

void putdown(int i) {
    state[i] = thinking;
    // test left and right neighbors
    test((i + 4) % 5);
    test((i + 1) % 5);
}
Dining Philosophers
void test(int i) {
    if ((state[(i + 4) % 5] != eating) &&
        (state[i] == hungry) &&
        (state[(i + 1) % 5] != eating)) {
        state[i] = eating;
        self[i].signal();
    }
}
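Each philosopher i then uses the monitor as sketched here, with dp as the monitor instance:

    do {
        dp.pickup(i);       /* blocks until both chopsticks are available */
        /* eat */
        dp.putdown(i);
        /* think */
    } while (1);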
Monitor Implementation Using Semaphores
■ Variables
    semaphore mutex;       // (initially = 1)
    semaphore next;        // (initially = 0)
    int next-count = 0;
■ Each external procedure F will be replaced by:
    wait(mutex);
    …
    body of F
    …
    if (next-count > 0)
        signal(next);
    else
        signal(mutex);
■ Mutual exclusion within a monitor is ensured.
Monitor Implementation
■ For each condition variable x, we have:
    semaphore x-sem;       // (initially = 0)
    int x-count = 0;
■ The operation x.wait can be implemented as:
    x-count++;
    if (next-count > 0)
        signal(next);
    else
        signal(mutex);
    wait(x-sem);
    x-count--;
■ The operation x.signal can be implemented as:
    if (x-count > 0) {
        next-count++;
        signal(x-sem);
        wait(next);
        next-count--;
    }
Monitor Implementation
■ Conditional-wait construct: x.wait(c);
    ✦ c is an integer expression evaluated when the wait operation is executed.
    ✦ The value of c (a priority number) is stored with the name of the process that is suspended.
    ✦ When x.signal is executed, the process with the smallest associated priority number is resumed next.
■ Check two conditions to establish correctness of the system:
    ✦ User processes must always make their calls on the monitor in a correct sequence.
    ✦ Must ensure that an uncooperative process does not ignore the mutual-exclusion gateway provided by the monitor and try to access the shared resource directly, without using the access protocols.
Solaris 2 Synchronization
■ Implements a variety of locks to support multitasking,multithreading (including real-time threads), and
multiprocessing
■ Uses adaptive mutexes for efficiency when protecting
data from short code segments
■ Uses condition variables and readers-writers locks when
longer sections of code need access to data
■ Uses turnstiles to order the list of threads waiting to
acquire either an adaptive mutex or reader-writer lock
Windows 2000 Synchronization
■ Uses interrupt masks to protect access to global
resources on uniprocessor systems
■ Uses spinlocks on multiprocessor systems.
■ Also provides dispatcher objects, which may act as either mutexes or semaphores.
■ Dispatcher objects may also provide events. An event acts much like a condition variable.