/****************************************************/
/*            Application Definitions               */
/****************************************************/

/* Define what the initial system looks like */

void tx_application_define(void *first_unused_memory)
{

   /* Put system definitions here,
      e.g., thread and mutex creates */

   /* Create the Speedy_Thread */
   tx_thread_create(&Speedy_Thread, "Speedy_Thread",
                    Speedy_Thread_entry, 0,
                    stack_speedy, STACK_SIZE,
                    5, 5, TX_NO_TIME_SLICE, TX_AUTO_START);

   /* Create the Slow_Thread */
   tx_thread_create(&Slow_Thread, "Slow_Thread",
                    Slow_Thread_entry, 1,
                    stack_slow, STACK_SIZE,
                    15, 15, TX_NO_TIME_SLICE, TX_AUTO_START);

   /* Create the mutex used by both threads */
   tx_mutex_create(&my_mutex, "my_mutex", TX_NO_INHERIT);

}


/****************************************************/
/*              Function Definitions                */
/****************************************************/

/* Define the activities for the Speedy_Thread */

void Speedy_Thread_entry(ULONG thread_input)
{
   UINT status;
   ULONG current_time;

   while(1)
   {

      /* Activity 1: 2 timer-ticks */
      tx_thread_sleep(2);

      /* Activity 2: 5 timer-ticks *** critical section ***
         Get the mutex with suspension */

      status = tx_mutex_get(&my_mutex, TX_WAIT_FOREVER);
      if (status != TX_SUCCESS) break;   /* Check status */

      tx_thread_sleep(5);

      /* Release the mutex */
      status = tx_mutex_put(&my_mutex);
      if (status != TX_SUCCESS) break;   /* Check status */

      /* Activity 3: 4 timer-ticks */
      tx_thread_sleep(4);

      /* Activity 4: 3 timer-ticks *** critical section ***
         Get the mutex with suspension */

      status = tx_mutex_get(&my_mutex, TX_WAIT_FOREVER);
      if (status != TX_SUCCESS) break;   /* Check status */

      tx_thread_sleep(3);

      /* Release the mutex */
      status = tx_mutex_put(&my_mutex);
      if (status != TX_SUCCESS) break;   /* Check status */

      current_time = tx_time_get();
      printf("Current Time: %lu   Speedy_Thread finished cycle\n",
             current_time);

   }
}

/****************************************************/

/* Define the activities for the Slow_Thread */

void Slow_Thread_entry(ULONG thread_input)
{
   UINT status;
   ULONG current_time;

   while(1)
   {

      /* Activity 5: 12 timer-ticks *** critical section ***
         Get the mutex with suspension */

      status = tx_mutex_get(&my_mutex, TX_WAIT_FOREVER);
      if (status != TX_SUCCESS) break;   /* Check status */

      tx_thread_sleep(12);

      /* Release the mutex */
      status = tx_mutex_put(&my_mutex);
      if (status != TX_SUCCESS) break;   /* Check status */

      /* Activity 6: 8 timer-ticks */
      tx_thread_sleep(8);

      /* Activity 7: 11 timer-ticks *** critical section ***
         Get the mutex with suspension */

      status = tx_mutex_get(&my_mutex, TX_WAIT_FOREVER);
      if (status != TX_SUCCESS) break;   /* Check status */

      tx_thread_sleep(11);

      /* Release the mutex */
      status = tx_mutex_put(&my_mutex);
      if (status != TX_SUCCESS) break;   /* Check status */

      /* Activity 8: 9 timer-ticks */
      tx_thread_sleep(9);

      current_time = tx_time_get();
      printf("Current Time: %lu   Slow_Thread finished cycle\n",
             current_time);

   }
}
8.16 Mutex Internals
When the TX_MUTEX data type is used to declare a mutex, an MCB is created, and that MCB is added to a doubly linked circular list, as illustrated in Figure 8.24. The pointer named tx_mutex_created_ptr points to the first MCB in the list. See the fields in the MCB for mutex attributes, values, and other pointers.
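As a debugging aid only (not part of the sample system), the created-mutex list can be traversed through the tx_mutex_info_get service instead of touching MCB fields directly. The following sketch assumes a mutex named my_mutex already exists and simply follows the next-created pointer around the circular list.

TX_MUTEX *current = &my_mutex;
CHAR *name;
ULONG count, suspended_count;
TX_THREAD *owner, *first_suspended;
TX_MUTEX *next_mutex;

do
{
    if (tx_mutex_info_get(current, &name, &count, &owner,
                          &first_suspended, &suspended_count,
                          &next_mutex) != TX_SUCCESS)
        break;

    printf("mutex %s: ownership count %lu, %lu threads suspended\n",
           name, count, suspended_count);

    current = next_mutex;            /* next MCB in the created list */
} while (current != &my_mutex);      /* stop when the circle closes  */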
If the priority inheritance feature has been specified (i.e., the MCB field named tx_mutex_inherit has been set), the priority of the owning thread will be raised to match that of a higher-priority thread that suspends on this mutex. When the owning thread releases the mutex, its priority is restored to its original value, regardless of any intermediate priority changes. Consider Figure 8.25, which shows a sequence of operations for the thread named my_thread (priority 25) after it successfully obtains the mutex named my_mutex, a mutex with the priority inheritance feature enabled.
The thread called my_thread had an initial priority of 25, but it inherited a priority of 10 from the thread called big_thread. At that point, my_thread changed its own priority twice (perhaps unwisely, since it lowered its own priority!). When my_thread released the mutex, its priority reverted to its original value of 25, despite the intermediate priority changes. Note that if my_thread had previously specified a preemption threshold, then the preemption-threshold value would be changed to the new priority when a change-priority operation was executed. When my_thread released the mutex, the preemption threshold would be changed to the original priority value, rather than to the original preemption-threshold value.
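A minimal code sketch of the Figure 8.25 scenario appears below. The thread and mutex names follow the figure, but the two intermediate priority values and the surrounding calls are assumptions added for illustration only; the TX_THREAD objects my_thread and big_thread are assumed to exist.

/* Create the mutex with priority inheritance enabled. */
tx_mutex_create(&my_mutex, "my_mutex", TX_INHERIT);

/* Inside my_thread (created with priority 25): */
tx_mutex_get(&my_mutex, TX_WAIT_FOREVER);    /* my_thread now owns my_mutex */

/* big_thread (priority 10) now calls tx_mutex_get on my_mutex and suspends; */
/* ThreadX raises my_thread's priority from 25 to 10 by inheritance.         */

UINT old_priority;
tx_thread_priority_change(&my_thread, 15, &old_priority);  /* intermediate change (assumed value) */
tx_thread_priority_change(&my_thread, 18, &old_priority);  /* another change (assumed value)      */

tx_mutex_put(&my_mutex);    /* priority reverts to the original value of 25 */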
8.17 Overview
A mutex is a public resource that can be owned by at most one thread at any point in time. It has only one purpose: to provide exclusive access to a critical section or to shared resources. Declaring a mutex has the effect of creating an MCB, which is a structure used to store vital information about that mutex during execution.
There are eight services designed for a range of actions involving mutexes, including creating a mutex, deleting a mutex, prioritizing a suspension list, obtaining ownership of a mutex, retrieving mutex information (three information services), and relinquishing ownership of a mutex.
Developers can specify a priority inheritance option when defining a mutex, or during later execution. Using this option will diminish the problem of priority inversion.
Figure 8.24: Created mutex list
Figure 8.25: Example showing effect of priority inheritance on thread priority
Another problem associated with the use of mutexes is the deadly embrace; several tips for avoiding this problem were presented.
We developed a complete system that employs two threads and one mutex that protects the critical section of each thread. We presented and discussed a partial trace of the threads.
8.18 Key Terms and Phrases
creating a mutex, critical section, deadly embrace, deleting a mutex, exclusive access, multiple mutex ownership, mutex wait options, Mutex Control Block (MCB), mutual exclusion, ownership of mutex, prioritize mutex suspension list, priority inheritance, priority inversion, Ready Thread List, recovery from deadly embrace, shared resources, Suspend Thread List
8.19 Problems
1. Describe precisely what happens as a result of the following mutex declaration:

   TX_MUTEX mutex_1;

2. What is the difference between a mutex declaration and a mutex definition?

3. Suppose that a mutex is not owned, and a thread acquires that mutex with the tx_mutex_get service. What is the value of tx_mutex_suspended_count (a member of the MCB) immediately after that service has completed?

4. Suppose a thread with the lowest possible priority owns a certain mutex, and a ready thread with the highest possible priority needs that mutex. Will the high-priority thread be successful in taking that mutex from the low-priority thread?

5. Describe all the circumstances (discussed so far) that would cause an executing thread to be moved to the Suspend Thread List.
6. Suppose a mutex has the priority-inheritance option enabled, and a thread that attempted to acquire that mutex had its priority raised as a result. Exactly when will that thread have its priority restored to its original value?

7. Is it possible for the thread in the previous problem to have its priority changed while it is in the Suspend Thread List? If so, what are the possible problems that might arise? Are there any circumstances that might justify performing this action?

8. Suppose you were charged with the task of creating a watchdog thread that would try to detect and correct deadly embraces. Describe, in general terms, how you would accomplish this task.

9. Describe the purpose of the tx_mutex_prioritize service, and give an example.

10. Discuss two ways in which you can help avoid the priority inversion problem.

11. Discuss two ways in which you can help avoid the deadly embrace problem.

12. Consider Figure 8.23, which contains a partial activity trace of the sample system. Exactly when will the Speedy_Thread preempt the Slow_Thread?
Chapter 9
Memory Management: Byte Pools and Block Pools
9.1 Introduction
Recall that we used arrays for the thread stacks in the previous chapter. Although this approach has the advantage of simplicity, it is frequently undesirable and is quite inflexible. This chapter focuses on two ThreadX memory management resources that provide a good deal of flexibility: memory byte pools and memory block pools.
A memory byte pool is a contiguous block of bytes. Within such a pool, byte groups of any size (subject to the total size of the pool) may be used and reused. Memory byte pools are flexible and can be used for thread stacks and other resources that require memory. However, this flexibility leads to some problems, such as fragmentation of the memory byte pool as groups of bytes of varying sizes are used.
A memory block pool is also a contiguous block of bytes, but it is organized into a collection of fixed-size memory blocks. Thus, the amount of memory used or reused from a memory block pool is always the same: the size of one fixed-size memory block. There is no fragmentation problem, and allocating and releasing memory blocks is fast. In general, the use of memory block pools is preferred over memory byte pools.
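As a brief preview, and only as a sketch with assumed names and sizes (this is not one of the chapter's sample systems), using a memory block pool typically looks like this:

TX_BLOCK_POOL my_block_pool;
UCHAR block_pool_memory[4096];
VOID *block;

/* Create a pool of fixed-size 128-byte blocks from a 4096-byte area. */
tx_block_pool_create(&my_block_pool, "my_block_pool", 128,
                     block_pool_memory, sizeof(block_pool_memory));

/* Allocate one block (suspend if none is available), then return it. */
if (tx_block_allocate(&my_block_pool, &block, TX_WAIT_FOREVER) == TX_SUCCESS)
{
    /* ... use the 128-byte block ... */
    tx_block_release(block);
}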
We will study and compare both types of memory management resources in this chapter. We will consider the features, capabilities, pitfalls, and services for each type. We will also create illustrative sample systems using these resources.
9.2 Summary of Memory Byte Pools
A memory byte pool is similar to a standard C heap.¹ In contrast to the C heap, a ThreadX application may use multiple memory byte pools. In addition, threads can suspend on a memory byte pool until the requested memory becomes available.
Allocations from memory byte pools resemble traditional malloc calls, which include the amount of memory desired (in bytes). ThreadX allocates memory from the memory byte pool in a first-fit manner, i.e., it uses the first free memory block that is large enough to satisfy the request. ThreadX converts excess memory from this block into a new block and places it back in the free memory list. This process is called fragmentation.
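The calls involved are sketched below; the pool name, the 2048-byte backing array, and the 100-byte request are assumed values chosen only to illustrate the malloc-like allocation just described.

TX_BYTE_POOL my_byte_pool;
UCHAR byte_pool_memory[2048];
VOID *buffer;

/* Create the byte pool over a contiguous memory area. */
tx_byte_pool_create(&my_byte_pool, "my_byte_pool",
                    byte_pool_memory, sizeof(byte_pool_memory));

/* Allocate 100 bytes in a malloc-like fashion, waiting if necessary. */
if (tx_byte_allocate(&my_byte_pool, &buffer, 100, TX_WAIT_FOREVER) == TX_SUCCESS)
{
    /* ... use the buffer ... */
    tx_byte_release(buffer);    /* return the bytes to the pool */
}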
When ThreadX performs a subsequent allocation search for a large-enough block of free memory, it merges adjacent free memory blocks together. This process is called defragmentation.
Each memory byte pool is a public resource; ThreadX imposes no constraints on how memory byte pools may be used.² Applications may create memory byte pools either during initialization or during run-time. There are no explicit limits on the number of memory byte pools an application may use.
The number of allocatable bytes in a memory byte pool is slightly less than what was specified during creation. This is because management of the free memory area introduces some overhead. Each free memory block in the pool requires the equivalent of two C pointers of overhead. In addition, when the pool is created, ThreadX automatically divides it into two blocks: a large free block and a small, permanently allocated block at the end of the memory area. This allocated end block is used to improve performance of the allocation algorithm; it eliminates the need to continuously check for the end of the pool area during merging. During run-time, the amount of overhead in the pool typically increases. This is partly because, when an odd number of bytes is allocated, ThreadX pads out the block to ensure proper alignment of the next memory block. In addition, overhead increases as the pool becomes more fragmented.
¹ In C, a heap is an area of memory that a program can use to store data in variable amounts that will not be known until the program is running.
² However, memory byte pool services cannot be called from interrupt service routines. (This topic will be discussed in a later chapter.)
The memory area for a memory byte pool is specified during creation. Like other memory areas, it can be located anywhere in the target's address space. This is an important feature because of the considerable flexibility it gives the application. For example, if the target hardware has a high-speed memory area and a low-speed memory area, the user can manage memory allocation for both areas by creating a pool in each of them.
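A hedged sketch of that idea follows. The two arrays merely stand in for the high-speed and low-speed regions; in a real project they would be placed in those regions by the linker or toolchain rather than declared this way.

/* Two backing areas, assumed to be located by the toolchain in */
/* fast internal RAM and slower external RAM, respectively.     */
UCHAR fast_area[1024];
UCHAR slow_area[8192];

TX_BYTE_POOL fast_pool;
TX_BYTE_POOL slow_pool;

/* One byte pool per memory region, managed independently. */
tx_byte_pool_create(&fast_pool, "fast_pool", fast_area, sizeof(fast_area));
tx_byte_pool_create(&slow_pool, "slow_pool", slow_area, sizeof(slow_area));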
Application threads can suspend while waiting for memory bytes from a pool. When sufficient contiguous memory becomes available, the suspended threads receive their requested memory and are resumed. If multiple threads have suspended on the same memory byte pool, ThreadX gives them memory and resumes them in the order they occur on the Suspended Thread List (usually FIFO). However, an application can cause priority resumption of suspended threads by calling tx_byte_pool_prioritize prior to the byte release call that lifts thread suspension. The byte pool prioritize service places the highest-priority thread at the front of the suspension list, while leaving all other suspended threads in the same FIFO order.
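A minimal sketch of that pattern, using the hypothetical pool and buffer from the earlier examples, might look like this:

/* Move the highest-priority waiter to the front of the pool's   */
/* suspension list before releasing memory.                      */
tx_byte_pool_prioritize(&my_byte_pool);

/* If the released bytes are sufficient, the highest-priority    */
/* suspended thread receives its memory and is resumed first.    */
tx_byte_release(buffer);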
9.3 Memory Byte Pool Control Block
The characteristics of each memory byte pool are found in its Control Block.³ It contains useful information such as the number of available bytes in the pool. Memory Byte Pool Control Blocks can be located anywhere in memory, but it is most common to make the Control Block a global structure by defining it outside the scope of any function. Figure 9.1 contains many of the fields that comprise this Control Block.
In most cases, the developer can ignore the contents of the Memory Byte Pool Control Block. However, there are several fields that may be useful during debugging, such as the number of available bytes, the number of fragments, and the number of threads suspended on this memory byte pool.
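Those values can also be retrieved without reading Control Block fields directly, by using the tx_byte_pool_info_get service. A sketch, assuming the hypothetical pool named my_byte_pool from earlier, is shown below.

CHAR *name;
ULONG available_bytes, fragments, suspended_count;
TX_THREAD *first_suspended;
TX_BYTE_POOL *next_pool;

/* Query the pool's current state for debugging purposes. */
if (tx_byte_pool_info_get(&my_byte_pool, &name, &available_bytes,
                          &fragments, &first_suspended,
                          &suspended_count, &next_pool) == TX_SUCCESS)
{
    printf("%s: %lu bytes free in %lu fragments, %lu threads suspended\n",
           name, available_bytes, fragments, suspended_count);
}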
9.4 Pitfalls of Memory Byte Pools
Although memory byte pools provide the most flexible memory allocation, they also suffer from somewhat nondeterministic behavior. For example, a memory byte pool may have 2,000 bytes of memory available but not be able to satisfy an allocation request of
³ The structure of the Memory Byte Pool Control Block is defined in the tx_api.h file.