Title: Scheduling of Dependent Tasks
Authors: Francis Cottet, Joëlle Delacroix, Claude Kaiser, Zoubir Mammeri
Field: Real-Time Systems
Type: Textbook
Year: 2002
Pages: 28
Size: 560.03 KB



Scheduling of Dependent Tasks

In the previous chapter, we assumed that tasks were independent, i.e. with no relationships between them. But in many real-time systems, inter-task dependencies are necessary for realizing some control activities. In fact, this inter-task cooperation can be expressed in different ways: some tasks have to respect a processing order, exchange data, or use various resources, usually in exclusive mode. From a behavioural modelling point of view, there are two kinds of typical dependencies that can be specified on real-time tasks:

• precedence constraints that correspond to synchronization or communication among tasks;

• mutual exclusion constraints to protect shared resources. These critical resources may be data structures, memory areas, external devices, registers, etc.

3.1 Tasks with Precedence Relationships

The first type of constraint is the precedence relationship among real-time tasks. We define a precedence constraint between two tasks τi and τj, denoted by τi → τj, if the execution of task τi precedes that of task τj. In other words, task τj must await the completion of task τi before beginning its own execution.

As the precedence constraints are assumed to be implemented in a deterministic manner, these relationships can be described through a graph where the nodes represent tasks and the arrows express the precedence constraint between two nodes, as shown in Figure 3.1. This acyclic precedence graph represents a partial order on the task set. If task τi is connected by a path to task τj in the precedence graph, then τi → τj. A general problem concerns tasks related by complex precedence relationships, where n successive instances of a task can precede one instance of another task, or one instance of a task precedes m instances of another task. Figure 3.2 gives an example where the rates of the communicating tasks are not equal.
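The precedence graph and the partial order it induces can be sketched as follows. The adjacency-list encoding and the task names are illustrative, not taken from Figure 3.1.

```python
# Sketch: a precedence graph as an adjacency list; tau_i -> tau_j holds
# whenever tau_j is reachable from tau_i, giving the partial order described above.
from collections import deque

def precedes(graph, src, dst):
    """True if there is a non-empty path src -> ... -> dst in the graph."""
    seen = set()
    frontier = deque(graph.get(src, ()))
    while frontier:
        node = frontier.popleft()
        if node == dst:
            return True
        if node not in seen:
            seen.add(node)
            frontier.extend(graph.get(node, ()))
    return False

# A small diamond-shaped graph (illustrative):
graph = {"t1": ["t2", "t3"], "t2": ["t4"], "t3": ["t4"], "t4": []}
print(precedes(graph, "t1", "t4"))  # True: t1 -> t2 -> t4
print(precedes(graph, "t4", "t1"))  # False: the order is only partial
```

Because the relation is a partial order, some pairs (here t2 and t3) are simply unrelated, and a scheduler is free to order them.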

To facilitate the description of the precedence constraint problem, we only consider the case of simple precedence constraints, i.e. if a task τi has to communicate the result of its processing to another task τj, these tasks have to be scheduled in such a way that the execution of the kth instance of task τi precedes the execution of the kth instance of task τj. Therefore, these tasks have the same rate (i.e. Ti = Tj). So all tasks belonging to a connected component of the precedence graph must have the same period. In the graph represented in Figure 3.1, tasks τ1 to τ5 have the same period and tasks τ6 to τ9 also have the same period. If the periods of the tasks are different, these tasks will run

ISBN: 0-470-84766-2


[Figure 3.2: task τ2 — average temperature over four samples calculation task.]

In the rest of this chapter, we are interested in the validation context. This problem can be studied from two points of view: execution and validation. First, in the case of preemptive scheduling algorithms based on priority, the question is: which modification of the task parameters will lead to an execution that respects the precedence constraints? Second, is it possible to validate a priori the schedulability of a dependent task set?

An answer to the first question was given by Blazewicz (1977): if we have to get τi → τj, then the task parameters must be in accordance with the following rules:

• rj ≥ ri;

• Prioi ≥ Prioj in accordance with the scheduling algorithm.

3.1.1 Precedence constraints and fixed-priority algorithms (RM and DM)

The rate monotonic scheduling algorithm assigns priorities to tasks according to their periods. In other words, tasks with shorter periods get higher priorities. Respecting this rule, the goal is to modify the task parameters in order to take account of precedence constraints, i.e. to obtain an independent task set with modified parameters. The basic idea of these modifications is that a task cannot start before its predecessors and cannot preempt its successors. So if we have to get τi → τj, then the release time and the priority of the task parameters must be modified as follows:

• rj∗ ≥ Max(rj, ri∗), where ri∗ is the modified release time of task τi;

• Prioi ≥ Prioj in accordance with the RM algorithm.


Figure 3.3 Precedence graphs of a set of six tasks

Table 3.1 Example of priority mapping taking care of precedence constraints and using the RM scheduling algorithm

The deadline monotonic scheduling algorithm assigns priorities to tasks according to their relative deadline D (tasks with shorter relative deadlines get higher priorities). The modifications of task parameters are close to those applied for RM scheduling, except that the relative deadline is also changed in order to respect the priority assignment. So if τi → τj, then the release time, the relative deadline and the priority of the task parameters must be modified as follows:

• rj∗ ≥ Max(rj, ri∗), where ri∗ is the modified release time of task τi;

• Dj∗ ≥ Max(Dj, Di∗), where Di∗ is the modified relative deadline of task τi;

• Prioi ≥ Prioj in accordance with the DM scheduling algorithm.

This modification transparently enforces the precedence relationship between two tasks.
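The modification rules above can be sketched as one forward pass over the precedence graph. This is a minimal sketch assuming an acyclic graph; the Kahn-style traversal and the task data are illustrative, not from the book's tables.

```python
# Sketch: propagate modified release times and relative deadlines forward
# through an acyclic precedence graph, so that for every edge tau_i -> tau_j:
#   r*_j >= max(r_j, r*_i)   and   D*_j >= max(D_j, D*_i)
# (the DM rules above). Task data is illustrative.

def modify_for_dm(tasks, edges):
    """tasks: {name: (r, D)}; edges: iterable of (pred, succ) pairs."""
    preds = {t: [] for t in tasks}
    succs = {t: [] for t in tasks}
    indeg = {t: 0 for t in tasks}
    for i, j in edges:
        preds[j].append(i)
        succs[i].append(j)
        indeg[j] += 1
    ready = [t for t in tasks if indeg[t] == 0]
    modified = {}
    while ready:                      # Kahn's algorithm: topological order
        t = ready.pop()
        r, d = tasks[t]
        for p in preds[t]:
            r = max(r, modified[p][0])
            d = max(d, modified[p][1])
        modified[t] = (r, d)
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return modified

tasks = {"t1": (0, 5), "t2": (1, 3)}   # t2 released later but with a tighter deadline
mods = modify_for_dm(tasks, [("t1", "t2")])
print(mods["t2"])  # (1, 5): D*_2 raised to 5 so that Prio_1 >= Prio_2 under DM
```

Assigning DM priorities on the modified deadlines (ties broken in topological order) then respects the precedence constraint.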

3.1.2 Precedence constraints and the earliest deadline first algorithm

In the case of the earliest deadline first algorithm, the modification of task parameters relies on the deadline d. So the rules for modifying release times and deadlines of tasks are based on the following observations (Figure 3.4) (Blazewicz, 1977; Chetto et al., 1990).


Figure 3.4 Modifications of task parameters in the case of EDF scheduling

First, if we have to get τi → τj, the release time rj∗ of task τj must be greater than or equal to its initial value or to the new release times ri∗ of its immediate predecessors τi increased by their execution times Ci:

rj∗ ≥ Max((ri∗ + Ci), rj)

Then, if we have to get τi → τj, the deadline di∗ of task τi has to be replaced by the minimum of its initial value di and the new deadlines dj∗ of the immediate successors τj decreased by their execution times Cj:

di∗ = Min((dj∗ − Cj), di)

Procedures that modify the release times and the deadlines can be implemented in an easy way, as shown by Figure 3.4. They begin with the tasks that have no predecessors for modifying their release times, and with those that have no successors for changing their deadlines.
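The two passes can be sketched as follows. This is a minimal sketch of the Chetto et al. (1990) adjustment under the assumption of an acyclic graph; the task set is illustrative.

```python
# Sketch of the two passes described above: a forward pass over a topological
# order computes r*_j = max(r_j, max_i(r*_i + C_i)) over immediate predecessors,
# and a backward pass computes d*_i = min(d_i, min_j(d*_j - C_j)) over
# immediate successors. The task set is illustrative.

def adjust_for_edf(tasks, edges):
    """tasks: {name: (r, C, d)}; edges: (pred, succ) pairs; graph assumed acyclic."""
    preds = {t: [] for t in tasks}
    succs = {t: [] for t in tasks}
    indeg = {t: 0 for t in tasks}
    for i, j in edges:
        preds[j].append(i)
        succs[i].append(j)
        indeg[j] += 1
    order, ready = [], [t for t in tasks if indeg[t] == 0]
    while ready:                              # topological order
        t = ready.pop()
        order.append(t)
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    rstar, dstar = {}, {}
    for t in order:                           # start from tasks with no predecessors
        r, c, d = tasks[t]
        rstar[t] = max([r] + [rstar[p] + tasks[p][1] for p in preds[t]])
    for t in reversed(order):                 # start from tasks with no successors
        r, c, d = tasks[t]
        dstar[t] = min([d] + [dstar[s] - tasks[s][1] for s in succs[t]])
    return rstar, dstar

tasks = {"a": (0, 2, 10), "b": (0, 3, 8)}     # (r, C, d), illustrative
rstar, dstar = adjust_for_edf(tasks, [("a", "b")])
print(rstar["b"], dstar["a"])  # 2 5: b cannot start before a's work, a must end by 8-3
```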

3.1.3 Example

Let us consider a set of five tasks whose parameters (ri, Ci, di) are indicated in Table 3.2. Note that all the tasks are activated simultaneously except task τ2. Their precedence graph is depicted in Figure 3.5. As there is one precedence graph linking

Table 3.2 Set of five tasks and the modifications of parameters according to the precedence constraints (4 is the highest priority)

(Table 3.2 columns: initial task parameters; modifications to use RM; modifications to use EDF.)


Figure 3.5 Precedence graph linking five tasks

all the tasks of the application, we assume that all these tasks have the same rate. Table 3.2 also shows the modifications of task parameters in order to take account of the precedence constraints in both RM and EDF scheduling.

Let us note that, in the case of RM scheduling, only the release time parameters are changed and the precedence constraint is enforced by the priority assignment. Under EDF scheduling, both parameters (ri, di) must be modified.

3.2 Tasks Sharing Critical Resources

This section describes simple techniques that can handle shared resources for dynamic preemptive systems. When tasks are allowed to access shared resources, their accesses need to be controlled in order to maintain data consistency. Let us consider a critical resource, called R, shared by two tasks τ1 and τ2. We want to ensure that the sequences of statements of τ1 and τ2 which operate on R are executed under mutual exclusion. These pieces of code are called critical sections or critical regions. Specific mechanisms (such as semaphores, protected objects or monitors), provided by the real-time kernel, can be used to create critical sections in a task's code. It is important to note that, in a non-preemptive context, this problem does not arise because, by definition, a task cannot be preempted during a critical section. In this chapter, we consider a preemptive context in order to allow fast response times for high-priority tasks, which correspond to high-safety software.
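A critical section of this kind can be sketched as follows, with Python's threading.Lock standing in for the kernel-provided semaphore or monitor; the shared resource and the increment counts are illustrative.

```python
# A minimal sketch of a critical section: two tasks update a shared resource R
# (here a plain counter) under mutual exclusion.
import threading

R = {"value": 0}                 # the shared critical resource
mutex = threading.Lock()

def task(increments):
    for _ in range(increments):
        with mutex:              # enter critical section
            R["value"] += 1      # statements on R run in mutual exclusion
                                 # leaving the with-block releases the resource

t1 = threading.Thread(target=task, args=(100_000,))
t2 = threading.Thread(target=task, args=(100_000,))
t1.start(); t2.start()
t1.join(); t2.join()
print(R["value"])  # 200000: no lost updates
```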

Let us consider again the small example with two tasks τ1 and τ2 sharing one resource R. Let us assume that task τ1 is activated first and uses resource R, i.e. enters its critical section. Then the second task τ2, having a higher priority than τ1, asks for the processor. Since the priority of task τ2 is greater, preemption occurs: task τ1 is blocked and task τ2 starts its execution. However, when task τ2 wants access to the shared resource R, it is blocked due to the mutual exclusion mechanism. So task τ1 can resume its execution. When task τ1 finishes its critical section, the higher priority task τ2 can resume its execution and use resource R. This process can lead to an uncontrolled blocking time of task τ2. Yet, to meet hard real-time requirements, an application must be controlled by a scheduling algorithm that can always guarantee a predictable system response time. The question is how to ensure a predictable response time of real-time tasks in a preemptive scheduling mechanism with resource constraints.


3.2.1 Assessment of a task response time

In this section, we consider on-line preemptive scheduling where the priorities are fixed and assigned to tasks. We discuss the upper bound of the response time of a task τ0 which has a worst-case execution time C0. Let us assume now that the utilization factor of the processor is low enough to permit the task set, including τ0, to be schedulable whatever the blocking time due to the shared resources.

In the first step, we suppose that the tasks are independent, i.e. without any shared resource. If task τ0 has the highest priority, it is obvious that the response time TR0 of this task τ0 is equal to its execution time C0. On the other hand, when task τ0 has an intermediate priority, the upper bound of the response time can also be evaluated easily as a function of the tasks with a priority higher than that of task τ0, denoted τHPT:

• Where all tasks are periodic with the same period, or aperiodic, we obtain:

TR0 ≤ C0 + Σ Cj (sum over τj ∈ τHPT)  (3.2)

This bound holds if the overhead induced by the scheduler (context switching) is negligible. Of course, these overheads can be taken into account as an additional term of task execution times.

Now, in the context of a set with n + 1 tasks and m resources, let us calculate the upper bound of the response time of task τ0 (i) when it does and (ii) when it does not hold the highest priority. First, when task τ0 has the highest priority of the task set, its execution can be delayed only by the activated tasks which have a lower priority and use the same m0 shared resources. This situation has to be analysed for two cases:

• Case I: The m0 shared resources are held by at least m0 tasks, as shown in Figure 3.6, where task τj holds resource R1 requested by task τ0. It is important to notice that task τi, waiting for resource R1, is preempted by task τ0 due to the priority ordering management of queues. Let CRi,q denote the maximum time the task τi uses resource Rq, CRmax,q the maximum of CRi,q over all tasks τi, CRi,max the maximum of CRi,q over all resources Rq, and finally CRmax the maximum of CRi,q over all tasks and resources. As a consequence, the upper bound of the response time of task τ0 is given by:

TR0 ≤ C0 + Σ CRmax,q (sum over the m0 shared resources Rq)


In the worst case, for this set (n other tasks using the m resources, with n < m), the response time is at most:

TR0 ≤ C0 + n · CRmax

• Case II: The m0 shared resources are held by n1 tasks with n1 < m0, as shown in Figure 3.6, where tasks τk and τj hold resources R2, R3 and R4 requested by τ0. We can notice that at least one task holds two resources. If we assume that the critical sections of a task are properly nested, the maximum critical section duration of a task using several resources is given by its longest critical section. So the response time of task τ0 is upper-bounded by:

TR0 ≤ C0 + Σ CRk,max (sum over the n1 tasks holding resources)


Or more precisely, we get:

In the worst case, for this set (n other tasks and m resources, with n < m), the response time of task τ0 is at most:

TR0 ≤ C0 + n · CRmax

To sum up, an overall expression of the response time for the highest priority task in a real-time application composed of n + 1 tasks and m resources is given by the following inequality:

TR0 ≤ C0 + Min(n, m) · CRmax  (3.10)
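This kind of bound can be computed directly. The sketch below assumes the overall form TR0 ≤ C0 + Min(n, m) · CRmax discussed above; the numerical values are illustrative.

```python
# Sketch of the overall bound for the highest-priority task: with n
# lower-priority tasks and m shared resources, tau_0 can be delayed by at most
# min(n, m) critical sections, each bounded by CR_max. Values are illustrative.

def response_time_bound(c0, n, m, cr_max):
    """Upper bound TR0 <= C0 + min(n, m) * CR_max."""
    return c0 + min(n, m) * cr_max

print(response_time_bound(c0=5, n=3, m=2, cr_max=4))  # 13: only m=2 resources can block
```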

Let us consider now that task τ0 has an intermediate priority. The task set includes n1 tasks having a higher priority level (the HPT set) and n2 tasks which have a lower priority level and share m critical resources with task τ0. This case is depicted in Figure 3.7 with the following specific values: n1 = 1, n2 = 2 and m = 3. With the assumption that the n2 lower priority tasks have dependencies only with τ0, and not with the n1 higher priority tasks, it should be possible to calculate the upper bound of the response time of task τ0 by combining inequalities (3.2) and (3.10). The response time is:

TR0 ≤ C0 + Σ Cj (sum over the n1 higher priority tasks) + Min(n2, m) · CRmax

[Figure 3.7 legend: critical resource use; critical resource request; critical resource release.]


However, this computation of the upper bound for each task relies on respect for the assumptions concerning the scheduling rules. In particular, for a preemptive scheduling algorithm with fixed priority, there is an implicit condition of the specification that must be inviolable: at its activation time, a task τ0 must run as soon as all the higher priority tasks have finished their execution and all the lower priority tasks using critical resources requested by τ0 have released the corresponding critical sections. In fact, two scheduling problems can render this assumption false: the priority inversion phenomenon and deadlock.

3.2.2 Priority inversion phenomenon

In preemptive scheduling that is driven by fixed priority and where critical resources are protected by a mutual exclusion mechanism, the priority inversion phenomenon can occur (Kaiser, 1981; Rajkumar, 1991; Sha et al., 1990). In order to illustrate this problem, let us consider a task set composed of four tasks {τ1, τ2, τ3, τ4} having decreasing priorities. Tasks τ2 and τ4 share a critical resource R1, the access of which is mutually exclusive. Let us focus our attention on the response time of task τ2. The scheduling sequence is shown in Figure 3.8. The lowest priority task τ4 starts its execution first and after some time it enters a critical section using resource R1. While task τ4 is in its critical section, the higher priority task τ2 is released and preempts task τ4. During the execution of task τ2, task τ3 is released. Nevertheless, task τ3, having a lower priority than task τ2, must wait. When task τ2 needs to enter its critical section, associated with the critical resource R1 shared with task τ4, it finds that the corresponding resource R1 is held by task τ4. Thus it is blocked. The highest priority task able to execute is task τ3. So task τ3 gets the processor and runs.

During this execution, the highest priority task τ1 awakes. As a consequence, task τ3 is suspended and the processor is allocated to task τ1. At the end of execution of task τ1, task τ3 can resume its execution until it reaches the end of its code. Now, only the lowest priority task τ4, preempted in its critical section, can execute again. It resumes

Figure 3.8 Example of priority inversion phenomenon


its execution until it releases critical resource R1, required by the higher priority task τ2. Then, this task can resume its execution by holding critical resource R1, necessary for its activity.

It is of great importance to analyse this simple example precisely. The maximum blocking time that task τ2 may experience depends, on the one hand, on the duration of the critical sections of the lower priority tasks sharing a resource with it, such as task τ4, and, on the other hand, on the execution times of higher priority tasks, such as task τ1. These two kinds of increase of the response time of task τ2 are completely consistent with the scheduling rules. But another task, τ3, which has a lower priority and does not share any critical resource with task τ2, participates in the increase of its blocking time.

This situation, called priority inversion, contravenes the scheduling specification and can cause deadlines to be missed, as can be seen in the example given in Section 9.2. In this case the blocking time of each task cannot be bounded unless a specific protocol is used, and this can lead to uncontrolled response times.


3.2.3 Deadlock phenomenon

Let us now consider two tasks τ1 and τ2 (τ1 having the higher priority) that both use two resources R1 and R2. During the critical section of task τ2 using resource R1, task τ1 awakes and preempts task τ2 before it can lock the second resource R2. Task τ1 needs resource R2 first, which is free, and it locks it. Then task τ1 needs resource R1, which is held by task τ2. So task τ2 resumes and asks for resource R2, which is not free. The final result is that task τ2 is in possession of resource R1 but is waiting for resource R2, and task τ1 is in possession of resource R2 but is waiting for resource R1. Neither task τ1 nor task τ2 will release the resource it holds until its pending request is satisfied. This situation leads to a deadlock of both tasks. It can be extended to more than two tasks with a circular resource access order, and leads to chained blocking.

Deadlock is a serious problem for critical real-time applications. Solutions must be found in order to prevent deadlock situations, as classically done for operating systems (Bacon, 1997; Silberschatz and Galvin, 1998; Tanenbaum, 1994; Tanenbaum and Woodhull, 1997). One method is to impose a total ordering of the critical resource accesses (Havender, 1968). It is not always possible to apply this technique, because it is necessary to know all the resources that a task will need during its activity. This is why this method is called static prevention (Figure 3.9b). Another technique, which can be used on-line, is known as the banker's algorithm (Habermann, 1969), and requires that each task declares beforehand the maximum number of resources that it may hold simultaneously.

Other methods to cope with deadlocks are based on detection and recovery processes (for example, by using a watchdog timer). The use of a watchdog timer allows detection of inactive tasks: this may be a deadlock, or the tasks may be waiting for external signals. Then, the technique for handling the deadlock is to reset the tasks involved in the detected deadlock or, more simply, the whole task set. This method, used very often when the deadlock situation is known to occur infrequently, is not acceptable for highly critical systems.

3.2.4 Shared resource access protocols

Scheduling of tasks that share critical resources leads to some problems in all computer science applications:

• synchronization problems between tasks, and particularly the priority inversion situation when they share mutually exclusive resources;

• deadlock and chained blocking problems.

In real-time systems, a simple method to cope with these problems is the reservation and pre-holding of resources at the beginning of task execution. However, such a technique leads to a low utilization factor of resources, so resource access protocols have been designed to avoid such drawbacks and also to bound the maximum response time of tasks.

Different protocols have been developed for preventing priority inversion in the RM or EDF scheduling context. These protocols permit the upper bound of the blocking time due to critical resource access, denoted Bi, to be determined for each task τi. This maximum blocking duration is then integrated into the schedulability tests of classical scheduling algorithms like RM and EDF (see Chapter 2). This integration


is simply obtained by considering that a task τi has an execution time equal to Ci + Bi. Some of these resource access protocols also prevent the deadlock phenomenon (Rajkumar, 1991).

Priority inheritance protocol

The basic idea of the priority inheritance protocol is to dynamically change the priority of some tasks (Kaiser, 1981; Sha et al., 1990). A task τi which is using a critical resource inside a critical section gets the priority of any task τj waiting for this resource if the priority of task τj is higher than that of task τi. Consequently, task τi is scheduled at a higher level than its initial priority level. This leads to the critical resource being freed earlier and minimizes the waiting time of the higher priority task τj. The priority inheritance protocol does not prevent deadlock, which has to be avoided by using the techniques discussed above. However, the priority inheritance protocol has to be used with task code whose critical sections are correctly nested; in this case, the protocol is applied in a recursive manner. This priority inheritance protocol has been implemented in the real-time operating system DUNE-IX (Banino et al., 1993).

Figure 3.10 gives an example of this protocol for a task set composed of three tasks {τ1, τ2, τ3} having decreasing priorities and two critical resources {R1, R2}. Task τ1 uses resource R1, task τ2 resource R2, and task τ3 both resources R1 and R2. Task τ3 starts running first and takes successively resources R1 and R2. Later, task τ2 awakes and preempts task τ3 in its nested critical section. When task τ2 requires resource R2, it is blocked by task τ3; thus task τ3 gets the priority of task τ2. We say that task τ3 inherits the priority of task τ2. Then, in the same manner, task τ1 awakes and preempts task τ3 in its critical section. When task τ1 requests resource R1, it is blocked by task τ3; consequently task τ3 inherits the priority of task τ1. So task τ3 continues its execution with the highest priority of the task set. When τ3 releases resources R2 and then R1, it resumes its original priority. Immediately, the higher priority task τ1, waiting for a resource, preempts task τ3 and gets the processor. The end of the execution sequence follows the classical rules of scheduling.
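The inheritance rule itself can be sketched with a toy resource manager. This is not a real kernel: higher number means higher priority, nested sections are ignored, and the classes and scenario are illustrative.

```python
# Toy sketch of priority inheritance: when a task blocks on a busy resource,
# the holder inherits the waiter's priority if it is higher; releasing the
# resource restores the holder's base priority.

class Task:
    def __init__(self, name, priority):
        self.name = name
        self.base = priority          # initial (static) priority
        self.priority = priority      # current (possibly inherited) priority

class Resource:
    def __init__(self):
        self.holder = None

    def request(self, task):
        """Return True if granted; on blocking, apply priority inheritance."""
        if self.holder is None:
            self.holder = task
            return True
        if task.priority > self.holder.priority:
            self.holder.priority = task.priority   # holder inherits waiter's priority
        return False

    def release(self):
        self.holder.priority = self.holder.base    # resume original priority
        self.holder = None

t1, t2, t3 = Task("t1", 3), Task("t2", 2), Task("t3", 1)
r1 = Resource()
r1.request(t3)       # t3 enters its critical section
r1.request(t2)       # t2 blocks: t3 inherits priority 2
r1.request(t1)       # t1 blocks: t3 inherits priority 3
print(t3.priority)   # 3
r1.release()         # t3 leaves the section
print(t3.priority)   # 1: back to its base priority
```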

[Figure 3.10 legend: critical resource release; tasks using R1, R2, or both; points where τ3 inherits the priority of τ2 and of τ1.]

Figure 3.10 Example of application of priority inheritance protocol


When the priority inheritance protocol is used, it is possible to evaluate the upper bound of the blocking time of each task. Under this protocol, a task τi can be blocked at most by n critical sections of lower priority tasks, or by m critical sections corresponding to resources shared with lower priority tasks (Buttazzo, 1997; Klein et al., 1993; Rajkumar, 1991). That is:

Bi ≤ Min(n, m) · CRmax

As we can see in Figure 3.10, task τ2 is at most delayed by the longest critical section of task τ3 (recall that several critical sections used by a task must be correctly nested; in the example, R1 is released after R2).

Priority ceiling protocol

The basic idea of this protocol is to extend the preceding protocol in order to avoid deadlocks and chained blocking, by preventing a task from entering a critical section that would lead to blocking it (Chen and Lin, 1990; Sha et al., 1990). To do so, each resource is assigned a priority, called its priority ceiling, equal to the priority of the highest priority task that can use it. The priority ceiling is similar to a threshold. In the same way as in the priority inheritance protocol, a task τi which is using a critical resource inside a critical section gets the priority of any task τj waiting for this resource if the priority of task τj is higher than that of τi. Consequently, task τi is scheduled at a higher level than its initial priority level, and the waiting time of the higher priority task τj is minimized. Moreover, in order to prevent deadlocks, when a task requests a resource, the resource is allocated only if it is free and if the priority of this task is strictly greater than the highest priority ceiling of the resources used by other tasks. This rule provides early blocking of tasks that may cause deadlock and guarantees that future higher priority tasks get their resources.

Figure 3.11 gives an example of this protocol for a task set composed of three tasks {τ1, τ2, τ3} with decreasing priorities and two critical resources {R1, R2}. Task τ1 uses resource R1, task τ2 resource R2, and task τ3 both resources R1 and R2. Task

[Figure 3.11 legend: critical task request; elected task; tasks using R1, R2, or both; τ3 inherits the priority of τ2, then of τ1, then resumes its initial priority.]


τ3 starts running first and takes resource R1, which is free. The priority ceiling of resource R1 (respectively R2) is the priority of task τ1 (respectively τ2). Later, task τ2 awakes and preempts task τ3, given that its priority is greater than the current priority of task τ3. When task τ2 requests resource R2, it is blocked by the protocol because its priority is not strictly greater than the priority ceiling of the held resource R1. Since task τ2 is waiting, task τ3 inherits the priority of task τ2 and resumes its execution. In the same way, task τ1 awakes and preempts task τ3, given that its priority is greater than that of task τ3. When task τ1 requests resource R1, it is blocked by the protocol because its priority is not strictly greater than the priority ceiling of the used resource R1. And, since task τ1 is waiting, task τ3 inherits the priority of τ1 and resumes its execution. When task τ3 exits the critical sections of both resources R2 and then R1, it resumes its original priority and is immediately preempted by the waiting highest priority task, i.e. task τ1. The end of the execution sequence follows the classical rules of scheduling.

Initially designed for fixed-priority scheduling algorithms, such as rate monotonic, this protocol has been extended by Chen and Lin (1990) to variable-priority scheduling algorithms, such as earliest deadline first. In this context, the priority ceiling is evaluated at each modification of the ready task list caused by the activation or completion of tasks. This protocol has been implemented in the real-time operating system Mach at Carnegie Mellon University (Nakajima et al., 1993; Tokuda and Nakajima, 1991).

It is important to notice that this protocol needs to know a priori all the task priorities and all the resources used by each task in order to assign priority ceilings. Moreover, we can point out that the properties of this protocol hold only in a one-processor context. When the priority ceiling protocol is used, it is possible to evaluate the upper bound of the blocking time of each task. Under this protocol, a task τi can be blocked at most by the longest critical section of a lower priority task that is using a resource with a priority ceiling greater than or equal to the priority of task τi (Buttazzo, 1997; Klein et al., 1993; Rajkumar, 1991).

The priority ceiling protocol described so far is the so-called original priority ceiling protocol (Burns and Wellings, 2001). A slightly different version, called the immediate priority ceiling protocol (Burns and Wellings, 2001), takes a more straightforward approach and raises the priority of a process as soon as it locks a resource, rather than only when it is actually blocking a higher priority process. The worst-case behaviour of the two ceiling protocols is identical.

Stack resource policy

The stack resource policy extends the preceding protocol in two ways: it allows the use of multi-unit resources and it can be applied with a variable-priority scheduling algorithm like earliest deadline first (Baker, 1990). In addition to its classical priority, each task is assigned a new parameter π, called its level of preemption, which is related to the time available for its execution (i.e. π is inversely proportional to its relative deadline D). This level of preemption is such that a task τi cannot preempt a task τj unless π(τi) > π(τj). The current level of preemption of the system is determined as a function of the resource accesses. A task then cannot be elected if its level of preemption is lower than this global level of preemption. The application of this rule points out that the main difference between the priority ceiling protocol and the stack resource
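The preemption-level test above can be sketched as follows; taking π as 1/D is one illustrative encoding of "inversely proportional to the relative deadline", and the deadline values are made up.

```python
# Sketch of the stack resource policy preemption test: pi(tau) = 1/D (shorter
# relative deadline => higher level), and a ready task may preempt only if its
# level exceeds both the running task's level and the current system ceiling.

def preemption_level(D):
    return 1.0 / D

def may_preempt(candidate_D, running_D, system_ceiling):
    level = preemption_level(candidate_D)
    return level > preemption_level(running_D) and level > system_ceiling

print(may_preempt(candidate_D=5, running_D=10, system_ceiling=0.0))  # True
print(may_preempt(candidate_D=5, running_D=10, system_ceiling=0.5))  # False: held resources block it
```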
