
Document: Performance of computer information systems, Part 16


DOCUMENT INFORMATION

Basic information

Title: Performance of computer communication systems: a model-based approach
Author: Boudewijn R. Haverkort
Field: Computer Science
Document type: book chapter
Year of publication: 1998
Number of pages: 26
File size: 1.65 MB


Content



Chapter 16

the addressed applications include aspects that are very difficult to capture by other performance evaluation techniques. The aim of this chapter is not to introduce new theory, but to make the reader more familiar with the use of SPNs. In Section 16.1 we start with a model of a multiprogramming computer system that can exhibit thrashing phenomena. Although the SPN-based solution approach is more expensive than one based on queueing networks, this study shows how to model system aspects that cannot be coped with by traditional queueing models. Then, in Section 16.2, we discuss SPN-based polling models for the analysis of token ring systems and traffic multiplexers. These models include aspects that could not be addressed with the techniques presented in Chapter 9. Since some of the models become very large, i.e., the underlying CTMC becomes very large, we also comment on computational aspects for such models. In Section 16.3 we then present an SPN-based reliability model for which we will perform a transient analysis. We finally present an SPN model of a very general resource reservation system in Section 16.4.

We briefly recall the most important system aspects to be modelled in Section 16.1.1. We then present the SPN model and perform an invariant analysis in Section 16.1.2. We present some numerical results in Section 16.1.3.

Performance of Computer Communication Systems: A Model-Based Approach.

Boudewijn R. Haverkort. Copyright © 1998 John Wiley & Sons Ltd. ISBNs: 0-471-97228-2 (Hardback); 0-470-84192-3 (Electronic).


16.1.1 Multiprogramming computer systems

Users sit behind their terminals and issue requests after a negative exponentially distributed think time. At most J customers (the multiprogramming limit) are actively being processed. For its processing, the system uses its CPU and two disks, one of which is used exclusively for handling paging I/O. After having received a burst of CPU time, a customer either has to do a disk access (user I/O), or a new page has to be obtained from the paging device (page I/O), or a next CPU burst can be started, or an interaction with the terminal is necessary. In the latter case, the customer’s code will be completely swapped out of main memory. We assume that every customer receives an equal share of the physical memory.

16.1.2 The SPN model

The SPN depicted in Figure 16.1 models the above sketched system. Place terminal contains a number of tokens equal to the number of users thinking. The place swap models the swap-in queue. Only if there are free pages available, modelled by available tokens in free, is a customer swapped in, via the immediate transition getmem, to become active at the CPU. The place used is used to keep track of the number of customers actively being processed. After having been served at the CPU, a customer moves to place decide. There, it is decided what the next action the customer will undertake is: it might return to the terminals (via the immediate transition freemem), it might require an extra CPU burst (via the immediate transition reserve), or it might require user I/O or page I/O. The weight of the choice leading to page I/O is made dependent on the marking, i.e., on the number of tokens in place used, so as to model increased paging activity if there are more customers being processed simultaneously.
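To make the structure concrete, the following Python sketch encodes a marking of this SPN and two of its rules as plain data and functions. The place and transition names follow the text; the arc details of getmem, the use of J as the initial marking of free, and the linear form of the marking-dependent page-I/O weight are assumptions made only for illustration, not the book's exact model.

```python
# Sketch of the multiprogramming SPN of Figure 16.1 as plain data structures.
# Place names (terminal, swap, free, used, cpu, decide) and transition names
# (getmem, freemem, reserve, serve) follow the text; the arc details and the
# linear page-I/O weight below are illustrative assumptions.

K = 30   # number of terminals
J = 10   # tokens initially in place "free" (assumed to encode the limit J)

initial_marking = {
    "terminal": K, "swap": 0, "free": J,
    "used": 0, "cpu": 0, "decide": 0,
}

def getmem_enabled(m):
    # A customer is swapped in only if it waits in "swap" and a token is
    # available in "free".
    return m["swap"] > 0 and m["free"] > 0

def fire_getmem(m):
    # Assumed arcs: consume one token from "swap" and "free", produce one in
    # "used" (admitted customers) and one in "cpu" (customer becomes active).
    m = dict(m)
    m["swap"] -= 1
    m["free"] -= 1
    m["used"] += 1
    m["cpu"] += 1
    return m

def page_io_weight(m, base_weight=1.0):
    # The weight of the immediate transition at "decide" that routes a
    # customer to page I/O is made marking dependent: it grows with the
    # number of tokens in "used".  The linear form is an assumption.
    return base_weight * m["used"]
```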

We are interested in the throughput and (average) response time perceived at the terminals. Furthermore, to identify bottlenecks, we are interested in computing the utilisation of the servers and the expected number of customers active at the servers (note that we cannot directly talk about queue lengths here, although a place like cpu might be seen as a queue that holds the customer in service as well, and where serve is the corresponding server). The measures of interest can be defined and computed as follows:

• The utilisation of the servers, computed as the probability that the corresponding places are non-empty. As an example, for the CPU, we find:

\rho_{\mathrm{cpu}} = \sum_{m \in RS(M_0),\ \#\mathrm{cpu}(m) > 0} \Pr\{m\}. \qquad (16.1)

• The expected number of customers at the servers (as an example, we again consider the CPU):

E[N_{\mathrm{cpu}}] = \sum_{m \in RS(M_0)} \#\mathrm{cpu}(m)\,\Pr\{m\}. \qquad (16.2)

• The expected number of customers being processed or waiting, denoted as the number in the system (syst):

E[N_{\mathrm{syst}}] = \sum_{m \in RS(M_0)} \bigl(\#\mathrm{used}(m) + \#\mathrm{swap}(m)\bigr)\,\Pr\{m\}. \qquad (16.3)


• The throughput perceived at the terminals; every thinking customer completes its think period at rate 1/E[Z], so that

X_t = \frac{1}{E[Z]} \sum_{m \in RS(M_0)} \#\mathrm{terminal}(m)\,\Pr\{m\}. \qquad (16.4)

• The expected response time, obtained via Little's law:

E[R] = E[N_{\mathrm{syst}}]/X_t. \qquad (16.5)

Before we proceed to the actual performance evaluation, we can compute the place invariants. In many cases, we can directly obtain them from the graphical representation of the SPN, as is the case here. Some care has to be taken regarding places that will only contain tokens in vanishing markings (as decide in this case). We thus find the place invariants of the net.
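Once the steady-state probabilities of the tangible markings of the CTMC are available, measures (16.1)-(16.5) reduce to simple sums over the markings. The sketch below assumes markings are represented as dicts of place names to token counts; the three-marking distribution is a made-up toy example, not output of the actual model.

```python
# Measures (16.1)-(16.5) as sums over the tangible markings, given their
# steady-state probabilities.  Markings are dicts of place names to token
# counts; the three-marking distribution below is a made-up toy example.

E_Z = 5.0   # mean think time at the terminals

dist = [   # (marking, steady-state probability) -- illustrative values only
    ({"terminal": 29, "swap": 0, "used": 1, "cpu": 1}, 0.6),
    ({"terminal": 28, "swap": 1, "used": 1, "cpu": 1}, 0.3),
    ({"terminal": 28, "swap": 0, "used": 2, "cpu": 2}, 0.1),
]

# (16.1) utilisation of the CPU: probability that place "cpu" is non-empty
rho_cpu = sum(p for m, p in dist if m["cpu"] > 0)

# (16.2) expected number of customers at the CPU
E_N_cpu = sum(m["cpu"] * p for m, p in dist)

# (16.3) expected number in the system (being processed or waiting to swap in)
E_N_syst = sum((m["used"] + m["swap"]) * p for m, p in dist)

# (16.4) terminal throughput: every thinking customer finishes at rate 1/E[Z]
X_t = sum(m["terminal"] * p for m, p in dist) / E_Z

# (16.5) expected response time via Little's law
E_R = E_N_syst / X_t

print(rho_cpu, E_N_cpu, E_N_syst, X_t, E_R)
```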

16.1.3 Some numerical results

To evaluate the model, we have assumed the following numerical parameters: the number of terminals K = 30, the think time at the terminals E[Z] = 5, the service time at the CPU E[S_cpu] = 0.02, the service time at the user disk E[S_user-disk] = 0.1, and the service time at the paging device E[S_page-disk] = 0.0667. The weights of the immediate transitions are chosen such that the load on the paging device increases as the number of actually admitted customers grows; the five transitions that are enabled when decide contains a token form a random switch.

First of all, we study the throughput perceived at the terminals for increasing J in Figure 16.2. When paging is not taken into account, we see an increase of the throughput for increasing J, until a certain maximum has been reached. When paging is included in the model, we observe that after an initial increase in throughput, a dramatic decrease in throughput takes place. By allowing more customers, the paging device becomes more heavily loaded and the effective rate at which customers are completed decreases. Similar


Figure 16.2: The terminal throughput X_t as a function of the multiprogramming limit J, when paging effects are modelled and not modelled.

Figure 16.3: The expected response time E[R] as a function of the multiprogramming limit J, when paging effects are modelled and not modelled.


Figure 16.4: The mean number of customers in various components as a function of the multiprogramming limit J (paging not modelled).

Figure 16.5: The mean number of customers in various components as a function of the multiprogramming limit J (paging modelled).



observations can be made from Figure 16.3, where we compare, for the same scenarios, the expected response times. Allowing only a small number of customers leaves the system resources largely unused and only increases the expected number of tokens in place swap. Allowing too many customers, on the other hand, causes the extra paging activity which, in the end, overloads the paging device and causes thrashing to occur.

To obtain slightly more detailed insight into the system behaviour, we also show the expected number of customers in a number of queues, when paging is not taken into account (Figure 16.4) and when paging is taken into account (Figure 16.5). As can be observed, the monotonous behaviour of the expected place occupancies in the model without paging is lost when paging is included: beyond a certain multiprogramming limit, the number of customers queued at the CPU decreases, simply because more and more customers start to queue up at the paging device (the sharply increasing curve). The swap-in queue (place swap) does not decrease so fast in size any more when paging is modelled; it takes longer for customers to be completely served (including all their paging) before they return to the terminals; hence, the time they spend at the terminals becomes relatively smaller (in a complete cycle), so that they have to spend more time in the swap-in queue.

We finally comment on the size of the underlying reachability graph and CTMC of this SPN. We therefore show in Table 16.1 the number of tangible markings TM (which equals the number of states in the CTMC), the number of vanishing markings VM (which need to be removed during the CTMC construction process) and the number of nonzero entries (q) in the generator matrix of the CTMC, as a function of the multiprogramming limit J. Although the state space increases for increasing J, the models presented here can still be evaluated within reasonable time; for J = 30 the computation time remains below 60 seconds (Sun Sparc 20). Notice, however, that the human-readable reachability graph requires about 1.4 Mbyte of storage.
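For comparison with such storage figures, a rough back-of-the-envelope estimate of the memory needed to hold the generator matrix Q in sparse (CSR) form can be computed as below; the state and nonzero counts are placeholders, not the actual entries of Table 16.1.

```python
# Back-of-the-envelope estimate of the memory needed to hold the generator
# matrix Q in sparse CSR form.  TM (tangible markings) and q (nonzeros) are
# placeholders, not the actual entries of Table 16.1.

TM = 50_000     # number of CTMC states (assumed)
q = 400_000     # number of nonzero entries in Q (assumed)

bytes_csr = q * (8 + 4) + (TM + 1) * 4   # 8-byte values, 4-byte indices/pointers
print(f"Q in CSR form needs about {bytes_csr / 2**20:.1f} MiB")
```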

16.2 Polling models

In this section we discuss the use of SPNs to specify and solve polling models for the analysis of token ring systems. A class of cyclic polling models with count-based scheduling will be discussed in Section 16.2.1, whereas Section 16.2.2 is devoted to cyclic polling models with local time-based scheduling. We then comment on some computational aspects for large models in Section 16.2.3. We finally comment on the use of load-dependent visit-orderings in Section 16.2.4.


Table 16.1: The number of tangible and vanishing states and the number of nonzero entries in the generator matrix, as a function of the multiprogramming limit J.


Figure 16.6: SPN model of a station with exhaustive scheduling strategy



Using SPNs we can construct Markovian polling models of a wide variety. However, the choice for a numerical solution of a finite Markov chain implies that only finite-buffer (or finite-customer) systems can be modelled, and that all timing distributions are exponential or of phase-type. Both these restrictions do not imply fundamental problems; however, from a practical point of view, using phase-type distributions or large finite buffers results in large Markovian models which might be too costly to generate and solve. The recent developments in the use of so-called DSPNs [182, 183, 184] allow us to use deterministically timed transitions as well, albeit in a restricted fashion.

Ibe and Trivedi discuss a number of SPN-based cyclic server models [142]. A few of them will be discussed here. First consider the exhaustive service model, of which we depict a single station in Figure 16.6. The overall model consists of a cyclic composition of a number of station submodels. Tokens in place passive indicate potential customers; after an exponentially distributed time, they become active (they are moved to place active, where they wait until they are served). When the server arrives at the station, indicated by a token in place token, two situations can arise. If there are customers waiting to be served, the service process starts, thereby using the server and transferring customers from place active back to place passive; after each such service, the server token returns to place token. Transition serve models the customer service time. If there are no customers waiting to be served, the token is passed on directly to place token of the next station.

Using the SPN model, an underlying Markov chain can be constructed and solved. The effective arrival rate of customers to place active is (1 − \sigma_i)\lambda_i, where \sigma_i is the probability that place passive of station i is empty, since only when passive is non-empty is the arrival transition enabled. Using Little's law, the expected response time then equals

E[R_i] = E[N_{\mathrm{active},i}] / \bigl((1 - \sigma_i)\lambda_i\bigr). \qquad (16.6)
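A minimal numerical sketch of this procedure: solve πQ = 0 with Σπ = 1, then apply Little's law with the effective arrival rate (1 − σ_i)λ_i as in (16.6). The tiny three-state generator and the per-state token counts below are made-up placeholders; in a real study they are generated from the reachability graph of the polling SPN.

```python
import numpy as np

# Solve pi Q = 0, sum(pi) = 1 for a small CTMC and apply (16.6).  The 3-state
# generator and the per-state token counts of places "passive" and "active"
# of station i are made-up placeholders.

Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  4.0, -4.0]])
passive_i = np.array([2, 1, 0])   # tokens in "passive" of station i, per state
active_i  = np.array([0, 1, 2])   # tokens in "active" of station i, per state
lam_i = 2.0                       # rate of the arrival transition of station i

# Replace one balance equation by the normalisation condition and solve.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(len(Q))
b[-1] = 1.0
pi = np.linalg.solve(A, b)

sigma_i = pi[passive_i == 0].sum()      # Pr{place "passive" of station i empty}
lam_eff = (1.0 - sigma_i) * lam_i       # effective arrival rate into "active"
E_N_active = (active_i * pi).sum()      # expected number waiting or in service
E_R_i = E_N_active / lam_eff            # Little's law, cf. (16.6)
print(pi, E_R_i)
```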

In Figure 16.7 we depict a similar model for the case when the scheduling strategy is k-limited. When the token arrives at such a station, three situations can occur. Either there is nothing to send, so that the token is immediately passed on to the next station.



Figure 16.7: SPN model of a station with k-limited scheduling strategy

If there are customers waiting, at most k of them can be served (transition serve is inhibited as soon as count contains k tokens). Transition enough then fires, thus resetting place count (the weight of the marking-dependent arc involved is equal to the number of tokens in place count; this number can also be zero, meaning that the arc is effectively not there) and preparing the token for the switch-over to the next station. When there are less than k customers queued upon arrival of the token, these are all served, after which transition prepare fires. Then, as before, transition flush fires and removes all tokens from count.
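The enabling and firing rules of this k-limited fragment can be written out directly as predicates on a marking, as in the sketch below; the place and transition names follow the text, while the exact arc structure is an assumption for illustration.

```python
# Enabling rules of the k-limited station fragment of Figure 16.7, written as
# predicates on a marking.  Place/transition names (active, token, count,
# serve, enough, flush) follow the text; the exact arc structure is assumed.

k = 3

def serve_enabled(m):
    # serve needs the server token and a waiting customer, and is inhibited
    # as soon as place "count" already holds k tokens.
    return m["token"] > 0 and m["active"] > 0 and m["count"] < k

def fire_serve(m):
    m = dict(m)
    m["active"] -= 1   # one customer is served
    m["count"] += 1    # record the number of services during this visit
    return m

def enough_enabled(m):
    # After k services the visit ends and the switch-over is prepared.
    return m["token"] > 0 and m["count"] == k

def flush_arc_weight(m):
    # flush empties place "count" via a marking-dependent arc whose weight
    # equals the current number of tokens in "count" (possibly zero).
    return m["count"]
```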

As can be observed here, the SPN approach towards the modelling of polling systems is very flexible: different scheduling strategies are easily modelled and we can also combine them as we like. Since the underlying CTMC is solved numerically, dealing with asymmetric models does not change the solution procedure. Notice that we have used Poisson arrival processes in the polling models of Chapter 9. In the models presented here, place passive acts as a finite source and sink of customers; by making the initial number of tokens in this place larger, we approximate the Poisson process better (and make the state space larger). Of course, we can use other arrival processes as well, i.e., we can use any phase-type renewal process, or even non-renewal processes.




Figure 16.8: SPN-based station model with local, exponentially distributed THT

16.2.2 Local time-based, cyclic polling models

The SPN models presented so far all exhibit count-based scheduling. As we have seen before, time-based scheduling strategies are often closer to reality. We can easily model such time-based polling models using SPNs as well.

Consider the SPN as depicted in Figure 16.8. It represents a single station of a polling model. Once the token arrives at the station, i.e., a token is deposited in place token, two possibilities exist:

1. There are no customers buffered: the token is immediately released and passed to the next station in line, via the immediate transition direct;

2. There are customers buffered: these customers are served and, simultaneously, the token holding timer (THT) is started. The service process can end in one of two ways: either the THT expires, in which case the token is passed to the next downstream station and the serving of customers stops; or all buffered customers have been served before the THT expires: the token is simply forwarded to the next downstream station.

Instead of using a single exponential transition to model the token holding timer, one can also use a more deterministic, Erlang-J distributed token holding timer, as depicted in Figure 16.9. The number J of exponential phases making up the overall Erlang distribution determines how deterministic the timer is.



Figure 16.9: SPN-based station model with local, Erlang-J distributed THT

Erlang-J distributed timer that have been passed already. The operation of this SPN is similar to the one described before; the only difference is that transition timer now needs to fire J times before the THT has expired (and transition expire becomes enabled). In case all customers have been served but the timer has not yet expired, transition direct will fire and move the token to the next station. Notice that one of its input arcs is marking dependent.
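The effect of replacing the exponential THT by an Erlang-J THT is easy to check numerically: an Erlang-J timer with the same mean is the sum of J exponential phases and has squared coefficient of variation 1/J, so a larger J gives a more deterministic timer. A small sampling sketch (with made-up parameter values):

```python
import random

# An Erlang-J token holding timer with mean tht is the sum of J exponential
# phases, each with mean tht / J; its squared coefficient of variation is 1/J.

def sample_tht(tht_mean, J):
    """Draw one Erlang-J distributed token holding time with mean tht_mean."""
    return sum(random.expovariate(J / tht_mean) for _ in range(J))

# Quick check with made-up values: mean stays at 0.2, squared CV approaches 1/J.
samples = [sample_tht(0.2, J=4) for _ in range(100_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(mean, var / mean ** 2)   # roughly 0.2 and 0.25
```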

Consider a 3-station cyclic polling model as depicted in Figure 16.9 with J = 2. The system is fully symmetric but for the THT values per station: we let tht_1 vary whereas tht_2 = tht_3 = 0.2. The other system parameters are λ = 3, E[S] = 0.1 (exponentially distributed) and an expected switch-over time of 0.05 (exponentially distributed).

In Figure 16.10 we depict the average waiting times perceived at station 1 and at stations 2 and 3 (the latter two appear to be the same) when we vary tht_1 from 0.05 through 2.0 seconds. As can be observed, with increasing tht_1, the performance of station 1 improves.


Table 16.2: The influence of the variability of the THT

Consider a symmetric cyclic polling model consisting of N = 3 stations of the form as depicted in Figure 16.9. As system parameters we have λ = 2, E[S] = 0.1 (exponentially distributed). In Table 16.2 we show, for an increasingly deterministic (Erlang-J) THT, the number of non-zero entries q in the Markov chain generator matrix Q, which is a good measure for the required amount of computation, the expected waiting time, and the expected queue length. As the THT becomes less variable, the performance improves. This is due to the fact that variability is taken out of the model.

