Networking Theory and Fundamentals - Lecture 3 ppt


DOCUMENT INFORMATION

Basic information

Title: Markov Chains
Instructor: Prof. Yannis A. Korilis
School: University of the Aegean
Subject: Networking Theory and Fundamentals
Type: lecture notes
Year: 2003
City: Mytilene
Pages: 36
Size: 343.54 KB


Page 1

TCOM 501:

Networking Theory & Fundamentals

Lecture 3, January 29, 2003
Prof. Yannis A. Korilis

Page 2

3-2 Topics

Markov Chains
Discrete-Time Markov Chains
Calculating the Stationary Distribution
Global Balance Equations
Detailed Balance Equations
Birth-Death Processes
Generalized Markov Chains
Continuous-Time Markov Chains

Page 3

3-3 Markov Chain

Stochastic process that takes values in a countable set

Example: {0,1,2,…,m}, or {0,1,2,…}

Elements represent possible “states”

Chain “jumps” from state to state

Memoryless (Markov) Property: Given the present state, future jumps of the chain are independent of past history

Markov Chains: discrete- or continuous-time

Page 4

3-4 Discrete-Time Markov Chain

Discrete-time stochastic process {X_n : n = 0, 1, 2, …}

Markov property: P{X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, …, X_0 = i_0} = P{X_{n+1} = j | X_n = i}

Transition probabilities P_ij = P{X_{n+1} = j | X_n = i}; transition probability matrix P = [P_ij]

Page 5

3-5 Chapman-Kolmogorov Equations

n-step transition probabilities: P^n_ij = P{X_{n+m} = j | X_m = i}, n, m ≥ 0

Chapman-Kolmogorov equations: P^{n+m}_ij = Σ_k P^n_ik P^m_kj

P^n_ij is element (i, j) in the matrix P^n, the nth power of the transition matrix P

Recursive computation of state probabilities
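The Chapman-Kolmogorov identity can be checked numerically. A minimal sketch in Python; the 3-state transition matrix is a hypothetical example, not one from the slides:

```python
# Verify Chapman-Kolmogorov: P^(n+m) = P^n * P^m, entry by entry.

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, n):
    """n-step transition matrix P^n (n >= 1)."""
    R = P
    for _ in range(n - 1):
        R = mat_mul(R, P)
    return R

# Hypothetical transition matrix; each row sums to 1.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

# Chapman-Kolmogorov with n = 2, m = 3: P^5 must equal P^2 * P^3.
lhs = mat_pow(P, 5)
rhs = mat_mul(mat_pow(P, 2), mat_pow(P, 3))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```

Each power P^n is itself a stochastic matrix, so its rows still sum to 1.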

Page 6

3-6 State Probabilities – Stationary Distribution

State probabilities (time-dependent): π^n_j = P{X_n = j}, with row vector π^n = (π^n_0, π^n_1, …)

In matrix form: π^n = π^{n-1} P = π^0 P^n

If the time-dependent distribution converges to a limit π = lim_{n→∞} π^n, then π satisfies π = πP

π is called the stationary distribution

Existence depends on the structure of the Markov chain
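The convergence π^n → π can be observed directly by iterating π ← πP. A sketch with a hypothetical 3-state chain:

```python
# Iterate the time-dependent distribution pi^(n+1) = pi^n P until it
# settles; the limit is the stationary distribution pi = pi P.

P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]

pi = [1.0, 0.0, 0.0]          # start the chain in state 0
for _ in range(200):          # pi <- pi P, repeated
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

# The limit satisfies pi = pi P and sums to 1.
check = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
assert all(abs(pi[j] - check[j]) < 1e-10 for j in range(3))
assert abs(sum(pi) - 1.0) < 1e-10
```

For this irreducible aperiodic example the limit is the same whatever initial distribution is used.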

Page 7

3-7 Classification of Markov Chains

Aperiodic:

State i is periodic if P^n_ii > 0 only for n a multiple of some integer d > 1

Aperiodic Markov chain: none of the states is periodic

Irreducible:

States i and j communicate if P^n_ij > 0 and P^m_ji > 0 for some n, m

Irreducible Markov chain: all states communicate

(Figure: example chain on states 0, 1, 2)

Page 8

3-8 Limit Theorems

Theorem 1: For an irreducible aperiodic Markov chain, for every state j the limit

π_j = lim_{k→∞} N_j(k) / k  (with probability 1)

exists and is independent of the initial state i

N_j(k): number of visits to state j up to time k

π_j: frequency with which the process visits state j

Page 9

3-9 Existence of Stationary Distribution

Theorem 2: For an irreducible aperiodic Markov chain, there are two possibilities for the scalars π_j:

1. π_j = 0 for all states j: no stationary distribution exists

2. π_j > 0 for all states j: π is the unique stationary distribution

Page 10

3-10 Ergodic Markov Chains

Markov chain with a stationary distribution

States are positive recurrent: The process returns to state j “infinitely often”

A positive recurrent and aperiodic Markov chain is called ergodic

Ergodic chains have a unique stationary distribution

Ergodicity ⇒ Time Averages = Stochastic Averages

Page 11

3-11 Calculation of Stationary Distribution

A. Finite number of states

Solve explicitly the system of equations π = πP, Σ_j π_j = 1

Or numerically: compute P^n, which converges to a matrix with all rows equal to π

Suitable for a small number of states

B. Infinite number of states

Cannot apply the previous methods to a problem of infinite dimension

Guess a solution to the recurrence and verify it
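Method A can be carried out directly: replace one (redundant) balance equation with the normalization Σ_j π_j = 1 and solve the resulting linear system. A sketch with a hypothetical 3-state matrix:

```python
# Solve pi = pi P with sum_j pi_j = 1 for a finite chain, via
# Gauss-Jordan elimination on a small dense system.

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (A square)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

P = [[0.5, 0.3, 0.2],     # hypothetical transition matrix
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]
n = 3

# Rows j < n-1: balance equations sum_i pi_i (P[i][j] - delta_ij) = 0;
# last row: normalization sum_i pi_i = 1.
A = [[P[i][j] - (1 if i == j else 0) for i in range(n)] for j in range(n - 1)]
A.append([1.0] * n)
b = [0.0] * (n - 1) + [1.0]
pi = solve(A, b)

assert abs(sum(pi) - 1.0) < 1e-10
for j in range(n):
    assert abs(pi[j] - sum(pi[i] * P[i][j] for i in range(n))) < 1e-10
```

Dropping one balance equation is safe because the n balance equations are linearly dependent; the normalization restores a unique solution.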

Page 12

3-12 Example: Finite Markov Chain

An absent-minded professor uses two umbrellas when commuting between home and office. If it rains and an umbrella is available at her location, she takes it. If it does not rain, she always forgets to take an umbrella. Let p be the probability of rain each time she commutes. What is the probability that she gets wet on any given day?

Markov chain formulation: state i is the number of umbrellas available at her current location

Transition matrix

Page 13

3-13 Example: Finite Markov Chain

Page 14

3-14 Example: Finite Markov Chain

Taking p = 0.1:

Numerically determine the limit of P^n

Effectiveness depends on the structure of P

P = [ 0    0    1
      0    0.9  0.1
      0.9  0.1  0 ]
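A runnable sketch of this example. The transition matrix is derived from the problem statement (state = umbrellas at the current location); she gets wet only when no umbrella is available and it rains, so P(wet) = π_0 · p:

```python
# Umbrella chain: states {0, 1, 2} = umbrellas at current location.
p = 0.1
P = [[0.0,   0.0,  1.0],   # 0 here -> the other location has both
     [0.0, 1 - p,    p],   # 1 here: taken only if it rains
     [1 - p,   p,  0.0]]   # 2 here: one carried over only if it rains

# Limit of P^n: iterate the state distribution until it converges.
pi = [1.0, 0.0, 0.0]
for _ in range(1000):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

# Closed form: pi_0 = (1 - p) / (3 - p), so P(wet) = p(1 - p)/(3 - p).
assert abs(pi[0] - (1 - p) / (3 - p)) < 1e-9
print("P(wet) =", pi[0] * p)   # about 0.031 for p = 0.1
```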

Page 15

3-15 Global Balance Equations

Markov chain with an infinite number of states

Global Balance Equations (GBE): π_j Σ_{i≠j} P_ji = Σ_{i≠j} π_i P_ij, j = 0, 1, …

π_j P_ji is the frequency of transitions from j to i

Intuition: j is visited infinitely often; for each transition out of j there must be a subsequent transition into j

Page 16

3-16 Global Balance Equations

Alternative form of the GBE: for any set S of states, Σ_{j∈S} Σ_{i∉S} π_j P_ji = Σ_{j∈S} Σ_{i∉S} π_i P_ij

If a probability distribution satisfies the GBE, then it is the unique stationary distribution of the Markov chain

Finding the stationary distribution:

Guess the distribution from properties of the system, then verify that it satisfies the GBE

☺ Special structure of the Markov chain simplifies the task
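The "guess and verify" step can be automated with a small checker. A sketch: the two-state chain and the guessed distribution below are hypothetical illustrations (for two states the stationary distribution is known in closed form):

```python
# Check whether a candidate distribution satisfies the GBE
#   pi_j * sum_i P[j][i] = sum_i pi_i * P[i][j]  for every j.
# (Including the i = j self-loop term adds pi_j P_jj to both sides,
# so the test is equivalent to the GBE as stated on the slide.)

def satisfies_gbe(pi, P, tol=1e-10):
    n = len(P)
    for j in range(n):
        out_rate = pi[j] * sum(P[j][i] for i in range(n))  # out of j
        in_rate = sum(pi[i] * P[i][j] for i in range(n))   # into j
        if abs(out_rate - in_rate) > tol:
            return False
    return True

# Two-state chain: 0 -> 1 w.p. a, 1 -> 0 w.p. b.
a, b = 0.3, 0.4
P = [[1 - a, a], [b, 1 - b]]
guess = [b / (a + b), a / (a + b)]   # guessed stationary distribution
wrong = [0.5, 0.5]                   # a wrong guess, for contrast

assert satisfies_gbe(guess, P)
assert not satisfies_gbe(wrong, P)
```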

Page 17

3-17 Global Balance Equations – Proof

Page 18

3-18 Birth-Death Process

One-dimensional Markov chain with transitions only between neighboring states: P_ij = 0 if |i − j| > 1

Detailed Balance Equations (DBE): π_n P_{n,n+1} = π_{n+1} P_{n+1,n}, n = 0, 1, …

Proof: the GBE with S = {0, 1, …, n} give Σ_{j∈S} Σ_{i∉S} π_j P_ji = Σ_{j∈S} Σ_{i∉S} π_i P_ij; since only neighboring states communicate, the only nonzero terms are the boundary ones, leaving π_n P_{n,n+1} = π_{n+1} P_{n+1,n}

Page 19

3-19 Example: Discrete-Time Queue

In a time-slot, one arrival with probability p or zero arrivals with probability 1-p

In a time-slot, the customer in service departs with probability q or stays with probability 1-q

Independent arrivals and service times

State: number of customers in the system

Page 20

3-20 Example: Discrete-Time Queue

With α = p(1 − q) / (q(1 − p)), the DBE give:

π_1 = π_0 p / (q(1 − p)),   π_{n+1} = α π_n for n ≥ 1   ⇒   π_n = π_0 α^n / (1 − q), n ≥ 1

Page 21

3-21 Example: Discrete-Time Queue

Have determined the distribution as a function of π0

How do we calculate the normalization constant π0? Probability conservation law:

π_0 (1 + Σ_{n≥1} α^n / (1 − q)) = 1   ⇒   π_0 (1 + α / ((1 − q)(1 − α))) = 1

Since α / ((1 − q)(1 − α)) = p / (q − p), this gives π_0 = 1 − p/q
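The normalization result π_0 = 1 − p/q can be checked numerically by summing the geometric tail of the DBE solution. A sketch with hypothetical rates p < q (required for stability):

```python
# Discrete-time queue: unnormalized weights from the DBE recursion
#   w_0 = 1,  w_1 = p / (q(1-p)),  w_{n+1} = alpha * w_n,
# with alpha = p(1-q) / (q(1-p)). Normalizing recovers pi_0 = 1 - p/q.

p, q = 0.3, 0.5
alpha = p * (1 - q) / (q * (1 - p))

w = [1.0, p / (q * (1 - p))]
for _ in range(2000):            # enough terms for the geometric tail
    w.append(w[-1] * alpha)

pi0 = 1.0 / sum(w)               # normalize so probabilities sum to 1
assert abs(pi0 - (1 - p / q)) < 1e-9
```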

Page 22

3-22 Detailed Balance Equations

General case: π_j P_ji = π_i P_ij,   i, j = 0, 1, …

Imply the GBE

Need not hold for a given Markov chain

Greatly simplify the calculation of the stationary distribution

Methodology:

Assume the DBE hold (their form has to be guessed) and solve the system defined by the DBE together with Σ_i π_i = 1

If the system is inconsistent, then the DBE do not hold

If the system has a solution {π_i : i = 0, 1, …}, then this is the unique stationary distribution

Page 23

3-23 Generalized Markov Chains

Markov chain on a set of states {0, 1, …} such that, whenever it enters state i:

The next state entered is j with probability P_ij

Given that the next state will be j, the time the chain spends at state i until the transition occurs is a random variable with distribution F_ij

{Z(t): t ≥ 0} describing the state the chain is in at time t:

It does not have the Markov property: future depends on

The present state, and The length of time the process has spent in this state

Page 24

3-24 Generalized Markov Chains

T_i: time the process spends at state i before making a transition (the holding time)

Probability distribution function of T_i: F_i(t) = Σ_j P_ij F_ij(t)

T_ii: time between successive transitions into state i

X_n is the nth state visited; {X_n : n = 0, 1, …} is a Markov chain (the embedded Markov chain) with transition probabilities P_ij

A semi-Markov process is irreducible if its embedded Markov chain is irreducible

Page 25

3-25 Limit Theorems

Theorem 3: For an irreducible semi-Markov process with E[T_ii] < ∞, for any state j the limit

p_j = lim_{t→∞} T_j(t) / t  (with probability 1)

exists and is independent of the initial state

T_j(t): time spent at state j up to time t

p_j is the proportion of time spent at state j, and p_j = E[T_j] / E[T_jj]

Page 26

3-26 Occupancy Distribution

Theorem 4: Irreducible semi-Markov process with E[T_ii] < ∞, whose embedded Markov chain is ergodic with stationary distribution π

Occupancy distribution of the semi-Markov process: p_j = π_j E[T_j] / Σ_i π_i E[T_i], j = 0, 1, …

π_j: proportion of transitions into state j

E[T_j]: mean time spent at j

The probability of being at j is proportional to π_j E[T_j]
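Theorem 4 is a one-line computation once π and the mean holding times are known. A sketch; the embedded-chain distribution and holding times below are hypothetical numbers:

```python
# Occupancy distribution of a semi-Markov process:
#   p_j = pi_j * E[T_j] / sum_i pi_i * E[T_i]

pi = [0.5, 0.3, 0.2]    # stationary distribution of the embedded chain
ET = [2.0, 1.0, 4.0]    # mean holding time E[T_j] in each state

norm = sum(pi_i * ET_i for pi_i, ET_i in zip(pi, ET))
p = [pi_j * ET_j / norm for pi_j, ET_j in zip(pi, ET)]

assert abs(sum(p) - 1.0) < 1e-12
# Long holding times raise occupancy: state 2 is entered less often
# than state 1 (0.2 < 0.3) but occupied more (0.2*4 > 0.3*1).
assert p[2] > p[1]
```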

Page 27

3-27 Continuous-Time Markov Chains

Continuous-time process {X(t): t ≥ 0} taking values in {0, 1, 2, …}. Whenever it enters state i:

The time it spends at state i is exponentially distributed with parameter ν_i

When it leaves state i, it enters state j with probability P_ij, where Σ_{j≠i} P_ij = 1

A continuous-time Markov chain is a semi-Markov process with F_ij(t) = 1 − e^{−ν_i t}, i, j = 0, 1, …

Exponential holding times: a continuous-time Markov chain has the Markov property

Page 28

3-28 Continuous-Time Markov Chains

When at state i, the process makes transitions to states j ≠ i with rate q_ij = ν_i P_ij

Total rate of transitions out of state i: ν_i = Σ_{j≠i} q_ij

Average time spent at state i before making a transition: 1/ν_i
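These three relations fit in a few lines. A sketch with hypothetical numbers for one state i:

```python
# Rate relations for a continuous-time Markov chain at state i:
#   q_ij = nu_i * P_ij,   nu_i = sum_{j != i} q_ij,   E[holding] = 1/nu_i

nu = 4.0                        # exponential parameter nu_i at state i
P_next = {1: 0.25, 2: 0.75}     # P_ij for the possible next states j

q = {j: nu * Pij for j, Pij in P_next.items()}   # transition rates q_ij

# The individual rates sum back to the total exit rate nu_i,
# because the P_ij sum to 1.
assert abs(sum(q.values()) - nu) < 1e-12

mean_holding = 1.0 / nu         # average time spent at state i
assert mean_holding == 0.25
```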

Page 29

3-29 Occupancy Probability

Irreducible and regular continuous-time Markov chain

Embedded Markov chain is irreducible Number of transitions in a finite time interval is finite with probability 1

From Theorem 3: for any state j, the limit p_j = lim_{t→∞} T_j(t) / t exists and is independent of the initial state

p_j is the steady-state occupancy probability of state j, equal to the proportion of time spent at state j

Page 30

3-30 Global Balance Equations

Two possibilities for the occupancy probabilities:

p_j = 0 for all j

p_j > 0 for all j, and Σ_j p_j = 1

Global Balance Equations: p_j Σ_{i≠j} q_ji = Σ_{i≠j} p_i q_ij, j = 0, 1, …

Rate of transitions out of j = rate of transitions into j

If a distribution {p_j : j = 0, 1, …} satisfies the GBE, then it is the unique occupancy distribution

Alternative form of the GBE: for any set S of states, Σ_{j∈S} Σ_{i∉S} p_j q_ji = Σ_{j∈S} Σ_{i∉S} p_i q_ij

Page 31

3-31 Detailed Balance Equations

Detailed Balance Equations: p_j q_ji = p_i q_ij,   i, j = 0, 1, …

☺ Simplify the calculation of the stationary distribution

Need not hold for any given Markov chain

Examples: birth-death processes and reversible Markov chains

Page 33

Use the DBE to determine the state probabilities as a function of p_0

Use the probability conservation law to find p_0

Using the DBE in problems:

Prove that the DBE hold, or justify their validity (e.g. for a reversible process), or assume they hold (their form has to be guessed) and solve the resulting system

Page 34

3-34 M/M/1 Queue

Arrival process: Poisson with rate λ

Service times: i.i.d. exponential with parameter µ

Service times and interarrival times: independent

Single server, infinite waiting room

N(t): number of customers in the system at time t (the state)

(Figure: birth-death transition diagram for the M/M/1 queue, with rate λ from state n to n+1 and rate µ from state n+1 to n)

Page 35

3-35 The M/M/1 Queue

Stationary distribution (from the birth-death DBE, with ρ = λ/µ < 1): p_n = (1 − ρ) ρ^n, n = 0, 1, …

Page 36

3-36 The M/M/1 Queue

Average number of customers in the system: N = Σ_{n≥0} n p_n = ρ / (1 − ρ) = λ / (µ − λ)

Applying Little's Theorem, the average time in the system is T = N / λ = 1 / (µ − λ)

Similarly, the average waiting time and the average number of customers in the queue are W = T − 1/µ = ρ / (µ − λ) and N_Q = λW = ρ² / (1 − ρ)
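The M/M/1 formulas above chain together through Little's Theorem. A sketch with hypothetical rates λ < µ (required for stability):

```python
# M/M/1 averages and the Little's-law consistency checks.

lam, mu = 3.0, 5.0
rho = lam / mu                 # utilization, must satisfy rho < 1

N = rho / (1 - rho)            # average number in system
T = N / lam                    # Little: time in system = N / lam
W = T - 1 / mu                 # waiting time excludes the service time
NQ = lam * W                   # Little applied to the queue alone

assert abs(N - lam / (mu - lam)) < 1e-12    # N = lam/(mu - lam)
assert abs(T - 1 / (mu - lam)) < 1e-12      # T = 1/(mu - lam)
assert abs(W - rho / (mu - lam)) < 1e-12    # W = rho/(mu - lam)
assert abs(NQ - rho * rho / (1 - rho)) < 1e-12
```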
