
Document: Performance of Computer Information Systems, P15 (pptx)


DOCUMENT INFORMATION

Basic information

Title: Numerical solution of Markov chains
Author: Boudewijn R. Haverkort
Type: Book chapter
Year of publication: 1998
Format:
Number of pages: 27
Size: 1.69 MB


Contents


Page 1

Chapter 15

15.1 Computing steady-state probabilities

equations:

ISBNs: 0-471-97228-2 (Hardback); 0-470-84192-3 (Electronic)

Page 2

the steady-state probabilities; however, together with the normalisation equation a unique

elimination. The Gaussian elimination procedure consists of two phases: a reduction phase

We now vary i from 1 to N. The j-th equation, with j = i + 1, …, N, is now changed by

the a_{j,k} values as follows:

a_{j,k} := a_{j,k} - m_{j,i} a_{i,k},   j, k > i   (15.3)

Page 3

column i being reduced

procedure

side of the linear system of equations equals 0, we do not have to change anything there

Page 5

-4p_1 + p_2 = -6a,

the N-th equation with the normalisation equation Σ_i p_i = 1. In doing so, the last equation will directly

the reduction phase, most entries of the upper half of A will be non-zero. The non-zero el-
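The reduction and back-substitution phases just described can be sketched compactly. The following is a minimal illustration (not the book's code), assuming numpy and a hypothetical 2-state generator matrix Q: it forms A = Q^T, replaces the last equation by the normalisation equation Σ_i p_i = 1, applies the update (15.3), and back-substitutes. No pivoting is done, which suffices for this small example.

```python
import numpy as np

def steady_state_gauss(Q):
    """Solve p Q = 0 with sum(p) = 1 by Gaussian elimination on A = Q^T,
    with the last equation replaced by the normalisation equation."""
    N = Q.shape[0]
    A = Q.T.astype(float).copy()
    b = np.zeros(N)
    A[N - 1, :] = 1.0            # normalisation equation as N-th equation
    b[N - 1] = 1.0
    # Reduction phase: zero out column i below the diagonal (eq. 15.3)
    for i in range(N - 1):
        for j in range(i + 1, N):
            m = A[j, i] / A[i, i]        # multiplier m_{j,i}
            A[j, i:] -= m * A[i, i:]     # a_{j,k} := a_{j,k} - m_{j,i} a_{i,k}
            b[j] -= m * b[i]
    # Back-substitution phase
    p = np.zeros(N)
    for i in range(N - 1, -1, -1):
        p[i] = (b[i] - A[i, i + 1:] @ p[i + 1:]) / A[i, i]
    return p

# Hypothetical 2-state CTMC (not the example from the text)
Q = np.array([[-3.0, 3.0],
              [ 2.0, -2.0]])
print(np.round(steady_state_gauss(Q), 6))   # -> [0.4 0.6]
```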

Page 6

15.1.2 LU decomposition

equations have to be solved, all of the form Ax = b, for different values of b. The method


A is the product of these two matrices, we know that

Page 7

by increasing i from 1 until N is reached

Suppose we want to decompose

u_{1,2} = a_{1,2} = 2. From this, we find u_{2,2} = a_{2,2} - l_{2,1}u_{1,2} = 5. We then compute l_{3,1} = -5

We thus find:

(15.14)

Page 8

We now form the matrix A = Q^T and in addition directly include the normalisation

doing so, the vector b changes to b = (0, 0, 1) and after the solution of Lz = b, we found

the vector z always has this value, and so we do not really have to solve the system

find that the last row of U contains only 0's. The solution of Lz = 0 will then always yield
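The procedure just described can be sketched as a Doolittle-style LU decomposition without pivoting, followed by forward and backward substitution. The numpy code and the 3-state generator below are illustrative assumptions, not the book's example; as in the text, A = Q^T carries the normalisation equation as its last row, so the right-hand side becomes b = (0, 0, 1).

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU decomposition without pivoting: A = L U,
    with L unit lower triangular and U upper triangular."""
    N = A.shape[0]
    L = np.eye(N)
    U = np.zeros((N, N))
    for i in range(N):
        for k in range(i, N):                      # row i of U
            U[i, k] = A[i, k] - L[i, :i] @ U[:i, k]
        for j in range(i + 1, N):                  # column i of L
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

def solve_lu(L, U, b):
    """Solve L z = b (forward substitution), then U x = z (backward)."""
    N = len(b)
    z = np.zeros(N)
    for i in range(N):
        z[i] = b[i] - L[i, :i] @ z[:i]
    x = np.zeros(N)
    for i in range(N - 1, -1, -1):
        x[i] = (z[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Hypothetical 3-state CTMC generator (not the book's example)
Q = np.array([[-2.0, 1.0, 1.0],
              [ 1.0, -3.0, 2.0],
              [ 2.0, 1.0, -3.0]])
A = Q.T.copy()
A[-1, :] = 1.0                 # include the normalisation equation
b = np.array([0.0, 0.0, 1.0])  # right-hand side becomes (0, 0, 1)
L, U = lu_decompose(A)
p = solve_lu(L, U, b)
# steady-state probabilities, approximately (0.4375, 0.25, 0.3125)
```

Once L and U are available, solving for a different right-hand side b only costs the two substitution sweeps, which is the advantage the text points out.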

Page 9

only one data structure (typically an array). Initially, the matrix A is stored in it, but

15.1.3 Power, Jacobi, Gauss-Seidel and SOR iterative methods

the matrix P is a stochastic matrix and describes the evolution of the CTMC in time-steps

Page 10

The Power method solves p as the left eigenvector of P, corresponding to an eigenvalue

cussed the convergence of the Power method to compute the largest eigenvalue of a matrix). Since more efficient methods do exist, we do not discuss the Power method any further

linear system (15.2) into:

(15.17)

We clearly need a_{i,i} ≠ 0; when the linear system is used to solve for the steady-state

p_i^{(k+1)} = -(1/a_{i,i}) ( Σ_{j<i} p_j^{(k)} a_{i,j} + Σ_{j>i} p_j^{(k)} a_{i,j} )

Page 11

towards the solution is very slow. Therefore, it is good to check whether ||Δ^{(k)}|| < ε

d ∈ N+ (and d ≤ k)

p^{(k+1)} = D^{-1}(L + U) p^{(k)}   (15.19)

used as soon as they have been computed, we obtain the Gauss-Seidel scheme:

p_i^{(k+1)} = -(1/a_{i,i}) ( Σ_{j<i} p_j^{(k+1)} a_{i,j} + Σ_{j>i} p_j^{(k)} a_{i,j} )

(15.20)
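As a sketch, the Gauss-Seidel sweep (15.20) can be coded directly; numpy and the 2-state generator below are illustrative assumptions, and the renormalisation after each sweep (which keeps the homogeneous iteration away from the zero vector) is an addition of this sketch, not part of the formula.

```python
import numpy as np

def gauss_seidel_steady_state(Q, eps=1e-10, max_sweeps=10000):
    """Gauss-Seidel iteration for A p = 0 with A = Q^T (eq. 15.20).
    The iterate is renormalised after every sweep so that it converges
    to the steady-state vector rather than to zero."""
    A = Q.T.astype(float)
    N = A.shape[0]
    p = np.full(N, 1.0 / N)           # uniform initial guess
    for _ in range(max_sweeps):
        p_old = p.copy()
        for i in range(N):
            # entries p_j with j < i already hold the new values p_j^{(k+1)}
            s = A[i, :i] @ p[:i] + A[i, i + 1:] @ p[i + 1:]
            p[i] = -s / A[i, i]
        p /= p.sum()                   # impose the normalisation equation
        if np.linalg.norm(p - p_old, ord=np.inf) < eps:
            break
    return p

Q = np.array([[-3.0, 3.0], [2.0, -2.0]])   # hypothetical 2-state CTMC
print(np.round(gauss_seidel_steady_state(Q), 6))   # -> [0.4 0.6]
```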

Page 12

The SOR method

tension of the Gauss-Seidel method, in which the vector p^{(k+1)} is computed as the weighted

(15.23)

to find a better value for ω, we can use the method proposed by Hageman and Young [116]

We then have to compute an estimate for the second largest eigenvalue of the iteration

This new estimate then replaces the old value of ω, and should be used for another number

then ω should be reduced towards 1
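A minimal SOR variant of the same sweep, where the Gauss-Seidel value is weighted with ω (ω = 1 recovers Gauss-Seidel; ω = 1.1 below is an arbitrary illustrative choice, not a tuned value, and the 2-state generator is again a hypothetical example):

```python
import numpy as np

def sor_steady_state(Q, omega=1.1, eps=1e-10, max_sweeps=10000):
    """SOR iteration for A p = 0 with A = Q^T: each component is the
    weighted combination of the old value and the Gauss-Seidel value."""
    A = Q.T.astype(float)
    N = A.shape[0]
    p = np.full(N, 1.0 / N)
    for _ in range(max_sweeps):
        p_old = p.copy()
        for i in range(N):
            s = A[i, :i] @ p[:i] + A[i, i + 1:] @ p[i + 1:]
            gs = -s / A[i, i]                     # plain Gauss-Seidel value
            p[i] = (1.0 - omega) * p[i] + omega * gs
        p /= p.sum()                               # normalisation
        if np.linalg.norm(p - p_old, ord=np.inf) < eps:
            break
    return p

Q = np.array([[-3.0, 3.0], [2.0, -2.0]])   # hypothetical 2-state CTMC
print(np.round(sor_steady_state(Q), 6))    # -> [0.4 0.6]
```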

Page 13

From the discussion of the Power method in Chapter 8 (in the context of the computa-

eigenvalue always equals 1, and the speed of convergence of the discussed methods then

of ω one can

smaller

of nonzero elements per column in A is limited to a few dozen. For example, considering

Page 14

An important difference between the presented iterative methods is the number of

N_I = log ε / log(…)

The matrix A = Q^T can be decomposed as D - (L + U), so that we find:

Page 15

# Power Jacobi Gauss-Seidel

15.2 Transient behaviour

to solve for that purpose. We then continue with the discussion of a simple Runge-Kutta method in Section 15.2.2. In Section 15.2.3 we proceed with a very reliable method, known

15.2.1 Introduction

Page 16

• when the system life-time is so short that steady-state is not reached;

interest;

p'(t) = p(t)Q, given p(0)   (15.29)

associate a reward r_i with every state, the expected reward at time t can be computed as

E[X(t)] = Σ_{i=1}^{N} r_i p_i(t)   (15.30)

Page 17

We see that a similar differential equation can be used to obtain l(t) as to obtain p(t). If

Y(t) = Σ_{i=1}^{N} l_i(t)   (15.35)

state 1 both processors operate, we assign a reward 2μ to state 1, where μ is the effective

can now be computed:

Page 18

• Finally, the accumulated reward distribution F(y, t) at time t expresses what the

15.2.2 Runge-Kutta methods

compute π_{i+1}. The values π_0 through π_{i-1} are not used to compute π_{i+1}. Therefore,

for π_i are computed as follows:

with

(15.37)

Page 19

Since the RK4 method provides an explicit solution for π_i, it is called an explicit 4th-order method. Per iteration step of length h, it requires 4 matrix-vector multiplications, 7 vector-
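A sketch of such an explicit RK4 integrator for p'(t) = p(t)Q, under illustrative assumptions (numpy, a hypothetical 2-state Q, a fixed step size, and no error control):

```python
import numpy as np

def rk4_transient(Q, p0, t, h=0.001):
    """Classical 4th-order Runge-Kutta for p'(t) = p(t) Q (eq. 15.29)."""
    p = p0.astype(float).copy()
    steps = int(round(t / h))
    for _ in range(steps):
        k1 = p @ Q                        # 4 matrix-vector products per step
        k2 = (p + 0.5 * h * k1) @ Q
        k3 = (p + 0.5 * h * k2) @ Q
        k4 = (p + h * k3) @ Q
        p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return p

Q = np.array([[-3.0, 3.0], [2.0, -2.0]])   # hypothetical 2-state CTMC
p0 = np.array([1.0, 0.0])
print(np.round(rk4_transient(Q, p0, t=5.0), 6))   # -> [0.4 0.6], the steady state
```

For a stiff generator the fixed step size h would have to be impractically small, which is exactly the problem the text raises next.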

under study is stiff, meaning that the ratio of the largest and smallest rate appearing in

Q is very large, say of the order of 10^4 or higher

however, we will not do so. Instead, we will focus on a class of methods especially developed

15.2.3 Uniformisation

of vectors and matrices:

p(t) = p(0)e^{Qt}   (15.38)

due to the fact that Q contains positive as well as negative entries; and (iii) the matrices

most popular

Uniformisation is based on the more general concept of randomisation [147] and is also

the matrix

Q

Page 20


Figure 15.3: A small CTMC and the corresponding DTMC after uniformisation

If λ is chosen such that λ ≥ max_i{|q_{i,i}|}, then the entries in P are all between 0 and 1, while the rows of P sum to 1. In other words, P is a stochastic matrix and describes a DTMC. The value of λ, the so-called uniformisation rate, can be derived from Q by inspection

of the successor states is selected probabilistically. For the states in the CTMC that have total outgoing rate λ, the corresponding states in the DTMC will not have self-loops. For states in the CTMC having a state residence time distribution with a rate smaller than

λ (the states having on average a longer state residence time), one epoch in the DTMC

Page 21

might not be long enough; hence, in the next epoch these states might be revisited. This

Using the matrix P, we can write

p(t) = p(0)e^{Qt} = p(0)e^{λ(P-I)t} = p(0)e^{-λIt} e^{λPt} = p(0)e^{-λt} e^{λPt}   (15.42)

p(t) = p(0)e^{-λt} Σ_{n=0}^{∞} ((λt)^n / n!) P^n   (15.43)

where

Poisson process with rate λ. Of course, we still deal with a Taylor series approach here;

discuss below

p̂(t) = Σ_{n=0}^{k_ε} ψ(λt; n) π_n   (15.47)

Page 22

Table 15.2: The number of required steps k_ε as a function of ε and the product λt

k_ε = min{ k : Σ_{n=0}^{k} e^{-λt} (λt)^n / n! ≥ 1 - ε }   (15.48)

or, equivalently,

Σ_{n=0}^{k_ε} (λt)^n / n! ≥ (1 - ε) e^{λt}   (15.49)

We consider the transient solution of the CTMC given in Figure 15.3; we already performed

ε = 10^{-4}. We find:

Page 23

t 0.1 0.2 0.5 1 5 10 20 50 100

for larger values of t we require very many steps to be taken, the successive vectors π_n do

that end, denote with k' < k_ε the value after which π_n does not change any more. Instead


by only starting to add the weighted vectors π_n after the Poisson weighting factors become

ψ(λt; 0) = e^{-λt},  and  ψ(λt; n + 1) = ψ(λt; n) · λt/(n + 1),  n ∈ N   (15.51)
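Combining (15.43), the truncation at k_ε and the recursion (15.51) yields a short uniformisation routine. This is a sketch under the usual assumptions (numpy, an illustrative 2-state generator); note that for large λt the initial weight e^{-λt} underflows, so a production implementation would use the left truncation discussed above.

```python
import numpy as np

def uniformise(Q, p0, t, eps=1e-8):
    """Transient solution p(t) by uniformisation (eqs. 15.43, 15.47, 15.51).
    P = I + Q/lam is the uniformised DTMC; the Poisson weights psi(lam*t; n)
    are computed with the stable recursion (15.51), and the series is
    truncated once the accumulated weight reaches 1 - eps."""
    lam = max(abs(np.diag(Q)))             # uniformisation rate
    P = np.eye(Q.shape[0]) + Q / lam
    pi_n = p0.astype(float).copy()         # pi_0 = p(0)
    psi = np.exp(-lam * t)                 # psi(lam t; 0) = e^{-lam t}
    result = psi * pi_n
    weight = psi
    n = 0
    while weight < 1.0 - eps:
        n += 1
        pi_n = pi_n @ P                    # pi_n = pi_{n-1} P
        psi *= lam * t / n                 # recursion (15.51)
        result += psi * pi_n
        weight += psi
    return result

Q = np.array([[-3.0, 3.0], [2.0, -2.0]])   # hypothetical 2-state CTMC
p0 = np.array([1.0, 0.0])
print(np.round(uniformise(Q, p0, 1.0), 6))   # -> [0.404043 0.595957]
```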

Page 24

and

Y(t) = Σ_{i=1}^{N} l_i(t),   (15.52)

which expresses the total amount of reward gained over the period [0, t). Below, we will

place according to a Poisson process with rate λ equals t/(k + 1). The expected accumulated reward until time t, given k jumps, then equals

Page 25

weighting these possibilities accordingly, we obtain:

E[Y(t)] = Σ_{k=0}^{∞} ψ(λt; k) (t/(k + 1)) Σ_{m=0}^{k} Σ_{i=1}^{N} π_i^{(m)} r_i   (15.53)
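The conditioning argument behind (15.53) translates almost literally into code. A sketch under illustrative assumptions (numpy; the generator, initial distribution and reward vector are hypothetical, not the book's example):

```python
import numpy as np

def expected_accumulated_reward(Q, p0, r, t, eps=1e-8):
    """E[Y(t)] via uniformisation (eq. 15.53): condition on the number of
    Poisson jumps k in [0, t); given k jumps, the expected time spent in
    each of the k+1 epochs is t/(k+1), and the state distribution in
    epoch m is pi^{(m)}."""
    lam = max(abs(np.diag(Q)))
    P = np.eye(Q.shape[0]) + Q / lam
    psi = np.exp(-lam * t)                  # psi(lam t; 0)
    pi_m = p0.astype(float).copy()
    partial = pi_m @ r                      # running sum_{m=0}^{k} pi^{(m)} r
    ey = psi * t * partial                  # k = 0 term
    weight = psi
    k = 0
    while weight < 1.0 - eps:               # truncate as for p(t)
        k += 1
        pi_m = pi_m @ P
        partial += pi_m @ r
        psi *= lam * t / k                  # recursion (15.51)
        ey += psi * (t / (k + 1)) * partial
        weight += psi
    return ey

Q = np.array([[-3.0, 3.0], [2.0, -2.0]])    # hypothetical 2-state CTMC
p0 = np.array([1.0, 0.0])
r = np.array([2.0, 1.0])                    # hypothetical reward vector
print(round(float(expected_accumulated_reward(Q, p0, r, 1.0)), 6))
```

A quick sanity check of the formula: with r_i = 1 for all states, the inner sum equals k + 1, so E[Y(t)] collapses to t, as it must.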

tion over all states does not suffice any more. Instead, we have to sum the accumulated

15.3 Further reading

Page 26

Reibman et al. present comparisons in [239, 240, 238]. A procedure to handle the stiffness

15.4 Exercises

1. Gaussian elimination

3. The Gauss-Seidel method

Page 27

15.3 Computing transient probabilities

Posted: 21/01/2014, 20:20
