
Queueing Networks and Markov Chains, Part 4


DOCUMENT INFORMATION

Title: Steady-State Aggregation/Disaggregation Methods
Authors: Gunter Bolch, Stefan Greiner, Hermann de Meer, Kishor S. Trivedi
Type: Chapter
Year of publication: 1998
Pages: 24
Size: 1.7 MB

Contents



Steady-State Aggregation/Disaggregation Methods

In this chapter we consider two main approximation methods: Courtois's decomposition method and Takahashi's iterative aggregation/disaggregation method.

In this section we introduce an efficient method for the steady-state analysis of Markov chains. Whereas direct and iterative techniques can be used for the exact analysis of Markov chains, as previously discussed, the method of Courtois [Cour75, Cour77] is mainly applied to approximate computations ν̂ ≈ ν of the desired state probability vector ν. Courtois's approach is based on decomposability properties of the models under consideration. Initially, substructures are identified that can be analyzed separately. Then, an aggregation procedure is performed that uses the independently computed subresults as constituent parts for composing the final results. The applicability of the method needs to be verified in each case. If the Markov chain has tightly coupled subsets of states, where the states within each subset are tightly coupled to each other and weakly coupled to states outside the subset, this provides a strong intuitive indication of the applicability of the approach. Such a subset of states might then be aggregated to form a macro state as a basis for further analysis. The macro state probabilities, together with the conditional micro state probabilities from within the subsets, can be composed to yield the micro state probabilities of the initial model. Details of the approach are clarified through the following example.


Gunter Bolch, Stefan Greiner, Hermann de Meer, Kishor S. Trivedi
Copyright © 1998 John Wiley & Sons, Inc. Print ISBN 0-471-19366-6; Online ISBN 0-471-20058-1


154 STEADY-STATE AGGREGATION/DISAGGREGATION METHODS

4.1.1 Decomposition

Since the method of Courtois is usually expressed in terms of a DTMC, whereas we are emphasizing the use of a CTMC in our discussion of methodologies, we would like to take advantage of this example and bridge the gap by choosing a CTMC as the starting point for our analysis. With the following model in mind, we can explain the CTMC depicted in Fig. 4.1. We assume a system in which two customers are circulating among three stations according to some stochastic regularities. Each arbitrary pattern of distribution of the customers among the stations is represented by a state. In general, if there are N such stations over which K customers are arbitrarily distributed, then from simple combinatorial reasoning we know that C(N+K−1, K) = C(N+K−1, N−1) such combinations exist. Hence, in our example with N = 3 and K = 2 we have

C(4, 2) = 6 states.
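This count is easy to check numerically; a minimal sketch (the helper name `num_states` is ours):

```python
from math import comb

def num_states(N: int, K: int) -> int:
    # Ways to distribute K indistinguishable customers over N stations:
    # C(N + K - 1, K) = C(N + K - 1, N - 1)
    return comb(N + K - 1, K)

print(num_states(3, 2))  # the example: N = 3 stations, K = 2 customers -> 6
```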

Fig. 4.1 CTMC subject to decomposition

In state 020, for example, two customers are in station two, while stations one and three are both empty. After a time period of exponentially distributed length, a customer travels from station two to station one or to station three. The transition behavior is governed by the transition rates μ21 or μ23, leading to states 110 or 011, respectively. The transition behavior between the other states can be explained similarly.

The analysis of such a simple model could easily be carried out using one of the standard direct or iterative methods. Indeed, we use an exact method to validate the accuracy of the decomposition/aggregation approach for our example. We use this simple example to illustrate Courtois's method. To this end, the model needs to be explored further as to whether it is nearly completely decomposable, that is, whether we can find state subsets that represent tightly coupled structures.

The application may suggest a state-set partitioning along the lines of the customers' circulation pattern among the visited stations. This would be a promising approach if the customers preferably stay within the


Fig. 4.2 Decomposition of the CTMC with regard to station three

Fig. 4.3 Decompositions of the CTMC with regard to stations two and one

bounds of a subset of two stations and only relatively rarely transfer to the third station, i.e., the most isolated one. All such possibilities are depicted in Figs. 4.2 and 4.3. In Fig. 4.2, station three is assumed to be the most isolated one, i.e., the one with the least interaction with the others. Solid arcs emphasize the tightly coupled states, whereas dotted arcs represent the "loose coupling." When customers are in the first station, they are much more likely to transit to the second station, and then return to the first station, before visiting the third station. Hence, we would come up with three subsets {020, 110, 200}, {011, 101}, and {002}, in each of which the number of customers in the third station is fixed at 0, 1, and 2, respectively. Alternatively, in Fig. 4.3, we have shown the scenarios where stations two and one are isolated.

Now we proceed to discuss Courtois's method on the basis of the set of parameters in Table 4.1. It suggests a decomposition according to Fig. 4.2. Clearly, the parameter values indicate strong interactions between stations one and two, whereas the third station seems to interact somewhat less with the others.


Table 4.1 Transition rates: μ12 = 4.50, μ21 = 2.40, μ31 = 0.20, …

The generator matrix Q of the CTMC is structured accordingly and is depicted symbolically in Table 4.2.

Table 4.2 Generator matrix Q of the CTMC, structured for decomposition

It is known from Eq. (3.5) that we can transform any CTMC into a DTMC by defining P = Q/q + I, with q > max_{i,j∈S} |q_ij|. Next we solve ν = νP instead of πQ = 0, and we can assert that ν = π. Of course, we have to fulfill the normalization condition ν · 1 = 1. For our example, the transformation results in the transition probability matrix shown in Table 4.3, where q is appropriately chosen.

Given the condition q > max_{i,j} |q_ij| = 7.35, we conveniently fix q = 10. Substituting the parameter values from Table 4.1 into Table 4.3 yields the transition probabilities.
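The randomization step P = Q/q + I is straightforward to implement; a small sketch, using a hypothetical 2 × 2 generator rather than the chapter's six-state example (whose full rate table is not reproduced above):

```python
import numpy as np

def ctmc_to_dtmc(Q: np.ndarray, q: float) -> np.ndarray:
    # Randomization/uniformization: P = Q/q + I is a stochastic matrix
    # whenever q > max_{i,j} |q_ij|, and it preserves the stationary vector.
    assert q > np.abs(Q).max()
    return Q / q + np.eye(Q.shape[0])

# Hypothetical generator for illustration only:
Q = np.array([[-3.0,  3.0],
              [ 2.0, -2.0]])
P = ctmc_to_dtmc(Q, q=10.0)
```

Any q strictly above the largest rate magnitude works; the chapter picks q = 10 since max |q_ij| = 7.35 there.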


Table 4.3 Transition-probability matrix P of the DTMC, structured for decomposition

The elements of the submatrices can be addressed by the schema just introduced (for example, an entry such as 0.24 is referenced by its block index pair and its position within the block). Since the indices I, 0 ≤ I ≤ M − 1, are used to uniquely refer to subsets as elements of a partition of the state space S, we conveniently denote the corresponding subset of states by S_I; for example, S1 = {101, 011}.

Of course, each diagonal submatrix could possibly be further partitioned into substructures according to this schema. The depth to which such a multilayer decomposition/aggregation technique is carried out depends on the number of stations N and on the degree of coupling between the stations. We further elaborate on criteria for decomposability in the following.


A convenient way to structure the transition probability matrix in our example is to number the states using radix K notation:

Σ_i k_i K^i, where Σ_i k_i = K,

where k_i denotes the number of customers in station i and N is the index of the least coupled station, i.e., the one to be isolated. For instance, in our example a value of 4 would be assigned to state 200 and a value of 12 to state 011. Note that the states in Tables 4.2 and 4.3 have been numbered according to this schema.
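The radix-K numbering can be sketched in a few lines; `state_index` is a hypothetical helper name:

```python
def state_index(state: tuple[int, ...], K: int) -> int:
    # state = (k_1, ..., k_N), where k_i is the number of customers in
    # station i. Value = sum_i k_i * K**i, so the isolated station N
    # contributes the most significant digit.
    return sum(k * K**i for i, k in enumerate(state, start=1))

print(state_index((2, 0, 0), 2))  # state 200 -> 4
print(state_index((0, 1, 1), 2))  # state 011 -> 12
```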

Each diagonal submatrix P_II can serve as a basic building block for defining a stochastic transition probability matrix P*_II. Such a matrix can be used to describe a system with K − 1 customers circulating among N − 1 stations, while one customer stays in station N all the time. The transitions from station N to all other stations would be temporarily ignored. In terms of our example, we would only consider transitions between states 101 and 011.

To derive stochastic submatrices, the transition probability matrix P is decomposed into two matrices A and B that add up to the original one. Matrix A comprises the diagonal submatrices P_II, and matrix B the complementary off-diagonal submatrices P_IJ, I ≠ J, so that P = A + B (Eq. (4.4)).


P* = A + X

There are multiple ways to define matrix X. Usually, the accuracy and computational complexity of the method depend on the way X is defined. It is most important, though, to ensure that the matrices P*_II, 0 ≤ I ≤ M − 1, are all ergodic, i.e., aperiodic and irreducible. Incorporating matrix X into matrix P, we get P = P* + εC (Eq. (4.9)).
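The A/B split and one possible choice of X (an assumption for illustration, not necessarily the book's exact definition: fold each row's off-block mass onto the diagonal) can be sketched on a hypothetical 3-state chain:

```python
import numpy as np

def split_blocks(P, partition):
    # A keeps the diagonal submatrices P_II; B = P - A holds the
    # off-diagonal submatrices P_IJ, I != J.
    A = np.zeros_like(P)
    for block in partition:
        A[np.ix_(block, block)] = P[np.ix_(block, block)]
    return A, P - A

def add_X(A, B):
    # One simple choice of X: add each row's off-block mass to the diagonal
    # entry, making every diagonal block of P* = A + X stochastic.
    # (Other definitions exist; accuracy depends on the choice.)
    Pstar = A.copy()
    Pstar[np.diag_indices_from(Pstar)] += B.sum(axis=1)
    return Pstar

# Hypothetical 3-state chain with partition {0,1} | {2}
P = np.array([[0.60, 0.35, 0.05],
              [0.30, 0.68, 0.02],
              [0.04, 0.06, 0.90]])
A, B = split_blocks(P, [[0, 1], [2]])
Pstar = add_X(A, B)
eps = B.sum(axis=1).max()  # degree of coupling between macro states
```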


4.1.2 Applicability

The applicability of Courtois's method needs to be checked in each case. In general, partitioning of the state space and subsequent aggregation of the resulting subsets into macro states can be performed exactly if the DTMC under consideration has a lumpable transition probability matrix. In such a case, the application of Courtois's method will not be an approximation. A transition probability matrix P = [p_ij] is lumpable with respect to a partition of S into subsets S_I, 0 ≤ I ≤ M − 1, if for each submatrix P_IJ, ∀I, J, 0 ≤ I ≠ J ≤ M − 1, real-valued numbers 0 < Γ_IJ ≤ 1 exist such that Eq. (4.7) holds [KeSn78]:

Σ_{j∈S_J} p_ij = Γ_IJ, for all i ∈ S_I.   (4.7)

Note that the diagonal submatrices P_II need not be completely decoupled from the rest of the system; rather, the matrix has to exhibit the regularities imposed by the lumpability condition in order to allow an exact aggregation. In fact, if the P_II are completely decoupled from the rest of the system, i.e., Γ_IJ = 0 for I ≠ J, then this can be regarded as a special case of lumpability of P. More details on, and an application example of, state lumping techniques are given in Section 4.2, particularly in Section 4.2.2.
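The lumpability condition (4.7) translates directly into a check on row sums; a sketch on hypothetical matrices (`is_lumpable` is our name):

```python
import numpy as np

def is_lumpable(P, partition, tol=1e-12):
    # P is lumpable w.r.t. the partition if, for every pair of distinct
    # subsets S_I, S_J, the row sums sum_{j in S_J} p_ij agree for all
    # i in S_I (Eq. 4.7).
    for I, SI in enumerate(partition):
        for J, SJ in enumerate(partition):
            if I == J:
                continue
            sums = [P[i, SJ].sum() for i in SI]
            if max(sums) - min(sums) > tol:
                return False
    return True

# Hypothetical 3-state chains, partition {0,1} | {2}
P_lump = np.array([[0.5, 0.4,  0.1],
                   [0.2, 0.7,  0.1],    # both rows send 0.1 into {2}
                   [0.3, 0.3,  0.4]])
P_not  = np.array([[0.5, 0.4,  0.1],
                   [0.2, 0.75, 0.05],   # 0.1 vs 0.05 into {2}
                   [0.3, 0.3,  0.4]])
```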

From our example in Eq. (4.1), P is not lumpable with respect to the chosen partition. Hence Courtois's method will be an approximation. A measure of accuracy can be derived, according to Courtois, from Eqs. (4.4) and (4.6). The degree ε of coupling between macro states can be computed from matrix B = [b_ij] in Eq. (4.4). If ε is sufficiently small, it can be shown that the error induced by Courtois's method is bounded by O(ε).

Let ε be defined as the maximum row sum of B; with the entries of B from our example:

ε = max{0.02 + 0.02 + 0.015, 0.07 + 0.02 + 0.03, 0.02 + 0.02, 0.02 + 0.02} = 0.12


To prove that P is nearly completely decomposable, it is sufficient to show that Relation (4.10) holds between ε and the maximum of the second-largest eigenvalues λ*_I(2) of the P*_II, over all I, 0 ≤ I ≤ M − 1 [Cour77].

To this end, the eigenvalues of the diagonal blocks need to be computed. The eigenvalues of P*_00 are the roots of its characteristic equation; their moduli are:

|λ*_0(1)| = 1, |λ*_0(2)| = 0.595, |λ*_0(3)| = 0.02

The eigenvalues of P*_11 are determined by the solution of:

det(P*_11 − λ* I) = (0.48 − λ*)(0.705 − λ*) − 0.52 · 0.295
                  = (λ* − 1)(λ* − 0.185)
                  = 0,


which, in turn, gives:

|λ*_1(1)| = 1 and |λ*_1(2)| = 0.185

The eigenvalue of the 1 × 1 block P*_22 immediately evaluates to λ*_2(1) = 1.

Because ε < 0.2025, Condition (4.10) holds, and the transition probability matrix P is nearly completely decomposable with respect to the chosen decomposition strategy depicted in Fig. 4.2.
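The eigenvalue computations are easy to reproduce with numpy; the sketch below uses P*_11 from the example and takes the threshold 0.2025 = (1 − 0.595)/2 from the text, while the value 0.12 for ε is an assumption consistent with ε < 0.2025:

```python
import numpy as np

# P*_11 from the example (states 101 and 011)
P11 = np.array([[0.480, 0.520],
                [0.295, 0.705]])
eigs = np.sort(np.abs(np.linalg.eigvals(P11)))[::-1]
second = eigs[1]                 # second-largest eigenvalue modulus: 0.185

lambda_max2 = 0.595              # max second eigenvalue over all blocks (from P*_00)
bound = (1.0 - lambda_max2) / 2  # 0.2025, the threshold quoted in the text
eps = 0.12                       # degree of coupling (assumed value)
print(second, eps < bound)
```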

4.1.3 Analysis of the Substructures

As a first step toward the computation of the approximate state probability vector ν̂ ≈ ν, we analyze each submatrix P*_II separately and compute the conditional state probability vector ν*_I, 0 ≤ I ≤ M − 1:

ν*_I P*_II = ν*_I, with ν*_I · 1 = 1.

Thus ν*_I is the left eigenvector of P*_II corresponding to the eigenvalue λ*_I(1) = 1. Substituting the parameters from P*_00 of our example, the solution yields the conditional steady-state probabilities of the micro states aggregated to the corresponding macro state 0:

ν*_020 = 0.164, ν*_110 = 0.295, ν*_200 = 0.541

Similarly, P*_11 is used in

(ν*_101, ν*_011) · ( (0.48 − 1)      0.52
                       0.295     (0.705 − 1) ) = 0,   ν*_1 · 1 = 1,

Trang 11

to obtain the conditional state probabilities of the micro states aggregated to the corresponding macro state 1:

ν*_101 = 0.362, ν*_011 = 0.638
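Computing ν*_1 amounts to extracting the left eigenvector of P*_11 for eigenvalue 1; a sketch:

```python
import numpy as np

def conditional_probs(P_block):
    # Left eigenvector of a stochastic block for eigenvalue 1 (computed as
    # a right eigenvector of the transpose), normalized to sum to one.
    w, V = np.linalg.eig(P_block.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return v / v.sum()

# P*_11 from the example (states 101 and 011)
P11 = np.array([[0.480, 0.520],
                [0.295, 0.705]])
nu1 = conditional_probs(P11)
print(nu1)  # approximately [0.362, 0.638]
```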

Finally, for macro state 2, ν*_002 = 1 follows trivially.

Nearly completely decomposable systems can be characterized with respect to "long-term" and "short-term" behavior:

• From a "short-term" perspective, systems described by their transition probability matrix P can be decomposed into M independent subsystems, the dynamics of each of which is governed by a stochastic process approximately described by the matrices P*_II, 0 ≤ I ≤ M − 1. As an outcome of the analyses of the M independent subsystems, the conditional micro-state probability vectors ν*_I result.

• In the "long run," the impact of the interdependencies between the subsystems cannot be neglected. The interdependencies between subsystems, or macro states, I and J, for all I, J, are described by the transition probabilities Γ_IJ that can be (approximately) derived from the transition probability matrix P and the conditional state probability vectors ν*_I. Solving the transition matrix Γ = [Γ_IJ] for the macro states yields the macro-state probability vector γ. The macro-state probability vector γ can, in turn, be used for unconditioning of the ν*_I, 0 ≤ I ≤ M − 1, to yield the final result, the approximate state probability vector ν̂ ≈ ν.
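The unconditioning step in the second bullet can be sketched as follows; the conditional vectors are those of the example, while the macro-state vector γ here is hypothetical (the example's values are not reproduced above):

```python
import numpy as np

# Conditional micro-state vectors from the example
# (0.541 completes the text's 0.54 so the vector sums to one):
nus = [np.array([0.164, 0.295, 0.541]),  # macro state 0: states 020, 110, 200
       np.array([0.362, 0.638]),         # macro state 1: states 101, 011
       np.array([1.0])]                  # macro state 2: state 002
# Hypothetical macro-state probabilities:
gamma = np.array([0.50, 0.35, 0.15])

# Unconditioning: nu_hat_i = gamma_I * nu*_{I,i} for each micro state i in S_I
nu_hat = np.concatenate([g * v for g, v in zip(gamma, nus)])
```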

4.1.4 Aggregation and Unconditioning

Having obtained the steady-state probability vector for each subset, we are now ready for the next step in Courtois's method. The transition probability matrix over the macro states, Γ = [Γ_IJ], is approximately computed as:

Γ_IJ = Σ_{i∈S_I} ν*_{I,i} Σ_{j∈S_J} p_ij.   (4.13)

Comparing Eq. (4.13) with the lumpability Condition (4.7), it is clear that the Γ_IJ can be determined exactly if the model under consideration is lumpable, that is, if Eq. (4.7) holds independently of ν*_I.


In our example, the macro-state transition probabilities Γ_IJ are derived according to Eq. (4.13):
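Eq. (4.13) can be implemented directly; the 3-state chain and conditional vectors below are hypothetical illustrations, not the chapter's six-state example:

```python
import numpy as np

def macro_transitions(P, partition, nus):
    # Gamma_IJ = sum_{i in S_I} nu*_{I,i} * sum_{j in S_J} p_ij   (Eq. 4.13)
    M = len(partition)
    G = np.zeros((M, M))
    for I, SI in enumerate(partition):
        for J, SJ in enumerate(partition):
            G[I, J] = sum(nus[I][a] * P[i, SJ].sum() for a, i in enumerate(SI))
    return G

# Hypothetical chain, partition {0,1} | {2}, and conditional vectors
P = np.array([[0.60, 0.35, 0.05],
              [0.30, 0.68, 0.02],
              [0.04, 0.06, 0.90]])
nus = [np.array([0.5, 0.5]), np.array([1.0])]
G = macro_transitions(P, [[0, 1], [2]], nus)
```

Since P is stochastic and each ν*_I sums to one, every row of Γ sums to one as well.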


Table 4.4 State probabilities computed using Courtois's method and the exact ones

For models with large state spaces, Courtois's method can be very efficient if the underlying model is nearly completely decomposable. If the transition probability matrix obeys certain regularity conditions, the results can be exact. The error induced by the method can, in principle, be bounded [CoSe84]. But since the computational complexity of this operation is considerable, a formal error bounding is often omitted. The efficiency of Courtois's method is due to the fact that instead of solving one linear system of equations of the size of the state space S, several much smaller linear systems are solved independently: one system for each subset S_J of the partitioned state space S, and one for the aggregated chain.

We conclude this section by summarizing the entire algorithm. For the sake of simplicity, only one level of decomposition is considered. Of course, the method can be applied iteratively on each diagonal submatrix P*_II.

4.1.5 The Algorithm

1. Create the state space and organize it appropriately according to the chosen strategy of decomposition.

2. Build the transition probability matrix P (by use of randomization, P = Q/q + I, if the starting point is a CTMC), and partition P appropriately into M × M submatrices P_IJ, 0 ≤ I, J ≤ M − 1.

3. Verify the nearly complete decomposability of P according to Relation (4.10) with the chosen value of ε.

4. Decompose P such that P = P* + εC according to Eq. (4.9). Matrix P* contains only stochastic diagonal submatrices P*_II, and ε is a measure of the accuracy of Courtois's method; it is defined as the maximum sum of the entries of the non-diagonal submatrices P_IJ, I ≠ J, of P.

5. For each I, 0 ≤ I ≤ M − 1, solve ν*_I P*_II = ν*_I with ν*_I · 1 = 1 to obtain the conditional state probability vectors ν*_I.
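The steps above can be combined into a one-level sketch of the whole procedure. Helper names are ours, and the choice of X (folding off-block mass onto the diagonal) is one assumed option among several; the chain below is hypothetical:

```python
import numpy as np

def stationary(M):
    # Left eigenvector for eigenvalue 1, normalized to sum to one
    w, V = np.linalg.eig(M.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return v / v.sum()

def courtois(P, partition):
    # One-level Courtois approximation; assumes each diagonal block,
    # once made stochastic, is ergodic.
    nus = []
    for SI in partition:
        block = P[np.ix_(SI, SI)].copy()
        # fold the off-block mass onto the diagonal (one choice of X)
        block[np.diag_indices_from(block)] += 1.0 - block.sum(axis=1)
        nus.append(stationary(block))
    # Macro-state matrix Gamma per Eq. (4.13) and its stationary vector gamma
    G = np.array([[sum(nus[I][a] * P[i, SJ].sum() for a, i in enumerate(SI))
                   for SJ in partition] for I, SI in enumerate(partition)])
    gamma = stationary(G)
    # Unconditioning: nu_hat_i = gamma_I * nu*_{I,i}
    nu_hat = np.zeros(P.shape[0])
    for I, SI in enumerate(partition):
        nu_hat[SI] = gamma[I] * nus[I]
    return nu_hat

# Hypothetical nearly completely decomposable 3-state chain
P = np.array([[0.60, 0.35, 0.05],
              [0.30, 0.68, 0.02],
              [0.04, 0.06, 0.90]])
nu_hat = courtois(P, [[0, 1], [2]])
```

Comparing `nu_hat` against the exact stationary vector of P illustrates the O(ε) accuracy: the blocks are solved independently, and only the small aggregated chain couples them.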
