Basic Queueing Models
2.2 LITTLE’S FORMULA AND ITS GENERALIZATION
2.2-1 Little’s formula
(a) False: Little's formula holds for arbitrary arrival processes.
(b) False: for the same reason as part (a).
(c) True: Little's formula holds for arbitrary work-conserving disciplines.
(d) True: for the same reason as part (c).
2.2-2 Little's formula for multiple type jobs. Little's law can be generalized to

Q_r = λ_r E[W_r], r = 1, · · ·, R,

where Q_r is the mean number of type-r jobs in queue and E[W_r] is the mean waiting time for type-r jobs in queue. The average queue size for a type-r job is given by (2.1) for the FCFS queue discipline or any other work-conserving discipline.
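As an illustrative cross-check (not part of the original solution), the per-type identity Q_r = λ_r E[W_r] can be verified numerically for an FCFS single-server queue with two job types. The parameters, seed, and function names below are arbitrary choices for the sketch; the time-average queue length is computed by integrating the queue-length sample path and compared against the observed arrival rate times the mean wait for each type.

```python
import random

def simulate_fcfs_two_types(num_jobs=5000, lam=0.8, mu=1.0, p_type0=0.4, seed=7):
    """Simulate an FCFS single-server queue with Poisson arrivals, exponential
    service, and each job independently labeled type 0 or type 1."""
    rng = random.Random(seed)
    t = 0.0
    jobs = []  # (arrival_time, type)
    for _ in range(num_jobs):
        t += rng.expovariate(lam)
        jobs.append((t, 0 if rng.random() < p_type0 else 1))
    # FCFS recursion: service starts at max(arrival, previous departure)
    prev_dep, starts = 0.0, []
    for a, _ in jobs:
        start = max(a, prev_dep)
        starts.append(start)
        prev_dep = start + rng.expovariate(mu)
    return jobs, starts, prev_dep  # run until the last job departs

def time_average_queue(arrivals, starts, T):
    """Time-average number waiting, from the queue-length sample path."""
    events = sorted([(a, +1) for a in arrivals] + [(s, -1) for s in starts])
    area, q, prev = 0.0, 0, 0.0
    for time, delta in events:
        area += q * (time - prev)
        q += delta
        prev = time
    return area / T

jobs, starts, T = simulate_fcfs_two_types()
results = {}
for r in (0, 1):
    arr = [a for (a, typ), s in zip(jobs, starts) if typ == r]
    st = [s for (a, typ), s in zip(jobs, starts) if typ == r]
    waits = [s - a for a, s in zip(arr, st)]
    Q_r = time_average_queue(arr, st, T)   # mean number of type-r jobs waiting
    lam_r = len(arr) / T                   # observed type-r arrival rate
    EW_r = sum(waits) / len(waits)         # mean type-r waiting time
    results[r] = (Q_r, lam_r * EW_r)
```

Because the simulation is run until the system empties, the sample-path identity is exact up to floating-point error, not merely approximate.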
2.2-3 Distributions seen by arrivals and departures. In the interval [0, T], for each arrival that causes Q(t) to increase from n to n + 1 (n = 0, 1, · · ·), there must be a corresponding departure that causes Q(t) to decrease from n + 1 to n (since Q(0) = Q(T) = 0). This implies that the average queue size seen by an arrival is the same as that seen by a departure in the interval.
2.3-1 Superposition of Poisson streams

(a) The cdf of Y is

F_Y(y) = 1 − e^{−λy}, y ≥ 0,

so Y is exponentially distributed with parameter λ.
(b) Let X_j denote the inter-arrival time of the jth arrival stream. Since the arrival streams are independent, so too are the random variables X_j, j = 1, · · ·, m. Furthermore, since the jth arrival stream is Poisson, X_j is an exponentially distributed random variable with parameter λ_j. The inter-arrival time of the aggregate stream is given by

Y = min{X_1, · · ·, X_m}.

From the result of part (a), we can conclude that Y is exponentially distributed with parameter λ = Σ_{i=1}^m λ_i. Therefore, the aggregate stream is a Poisson process with rate λ.
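A quick numerical sketch of this result (illustrative, with arbitrarily chosen rates): sampling the minimum of independent exponentials and checking that its mean and tail probability match an exponential with the summed rate.

```python
import math
import random

rng = random.Random(42)
rates = [0.5, 1.2, 2.3]        # rates of the m independent streams (arbitrary)
lam = sum(rates)               # predicted rate of the aggregate stream
n = 200_000

# Y = min{X_1, ..., X_m}, one sample per replication
samples = [min(rng.expovariate(r) for r in rates) for _ in range(n)]

mean_y = sum(samples) / n
tail_y = sum(1 for y in samples if y > 0.3) / n
# Exponential(lam) predictions: mean 1/lam, P[Y > y] = e^{-lam*y}
```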
2.3-2 Consistency check of the Poisson process
(a) Let I_h denote a small interval of length h. From (2.3-2) we have

e^{−λh} = 1 − λh + o(h), i.e., e^{−λh} − 1 + λh = o(h).
(b) Let X_1 denote the time of the first arrival after the time origin (say t = 0) and X_2 denote the inter-arrival time between the first arrival and the second arrival. The RVs X_1 and X_2 are both exponentially distributed with parameter λ and have a common cdf:

F_X(x) = 1 − e^{−λx}, x ≥ 0.

The event of no arrival in the interval I_h is equivalent to the event {X_1 > h}. Therefore,

P[no arrival in I_h] = P[X_1 > h] = e^{−λh} = 1 − λh + o(h).

The event of two or more arrivals in the interval I_h is equivalent to the event {X_1 + X_2 ≤ h}. Let Y = X_1 + X_2. Then,

P[2 or more arrivals in I_h] = P[Y ≤ h] = F_Y(h). (2.2)
One can further show that, in general, the cdf of Y is given by

F_Y(y) = F_{X_1}(y) ∗ f_{X_2}(y) = f_{X_1}(y) ∗ F_{X_2}(y). (2.3)

Therefore,

F_Y(y) = F_X(y) ∗ f_X(y) = ∫_0^y (1 − e^{−λx}) λe^{−λ(y−x)} dx
       = λe^{−λy} ∫_0^y (e^{λx} − 1) dx = 1 − e^{−λy} − λye^{−λy}.
Returning to (2.2), we obtain

P[2 or more arrivals in I_h] = F_Y(h) = 1 − e^{−λh} − λhe^{−λh}
  = 1 − (1 − λh + o(h)) − (λh + o(h)) = o(h).
Finally,

P[one arrival in I_h] = 1 − P[no arrival in I_h] − P[2 or more arrivals in I_h]
  = 1 − (1 − λh + o(h)) − o(h) = λh + o(h).
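The o(h) behavior of these probabilities can be checked numerically (an illustrative sketch; λ is chosen arbitrarily): as h shrinks, both P[2 or more arrivals]/h and (P[one arrival] − λh)/h should vanish.

```python
import math

lam = 2.0

def probs(h):
    p0 = math.exp(-lam * h)              # P[no arrival in I_h]
    p1 = lam * h * math.exp(-lam * h)    # P[one arrival in I_h]
    return p0, p1, 1.0 - p0 - p1         # P[two or more arrivals in I_h]

ratios = []
for h in (1e-2, 1e-3, 1e-4):
    p0, p1, p2plus = probs(h)
    # both ratios should tend to 0 as h -> 0
    ratios.append((abs(p1 - lam * h) / h, p2plus / h))
```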
To prove (2.3), note that

F_Y(y) = ∫_0^y f_{X_1} ∗ f_{X_2}(x) dx = ∫_0^y ∫_0^x f_{X_1}(x − t) f_{X_2}(t) dt dx
       = ∫_0^y ∫_t^y f_{X_1}(x − t) f_{X_2}(t) dx dt = ∫_0^y [∫_0^{y−t} f_{X_1}(α) dα] f_{X_2}(t) dt
       = ∫_0^y F_{X_1}(y − t) f_{X_2}(t) dt = F_{X_1}(y) ∗ f_{X_2}(y).
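The closed form obtained for F_Y can be cross-checked against a direct numerical evaluation of the convolution integral ∫_0^y F_X(y − t) f_X(t) dt (an illustrative sketch; λ and y are arbitrary).

```python
import math

lam, y = 1.5, 2.0
F = lambda x: 1.0 - math.exp(-lam * x)     # exponential cdf
f = lambda x: lam * math.exp(-lam * x)     # exponential pdf

# midpoint-rule evaluation of the convolution integral over [0, y]
N = 100_000
h = y / N
conv = sum(F(y - (i + 0.5) * h) * f((i + 0.5) * h) for i in range(N)) * h

closed_form = 1.0 - math.exp(-lam * y) - lam * y * math.exp(-lam * y)
```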
2.3-3 Decomposition of a Poisson process
(a) We are given that {X_j} is a sequence of i.i.d. random variables, exponentially distributed with parameter λ. Then, for fixed n, S_n = X_1 + · · · + X_n has an Erlang-n distribution. The cdf of S_n is given by

F_{S_n}(x) = P[S_n ≤ x] = 1 − P[fewer than n arrivals in an interval of length x] = 1 − Σ_{j=0}^{n−1} ((λx)^j / j!) e^{−λx}.

With N geometrically distributed, P[N = n] = (1 − r)^{n−1} r, n = 1, 2, · · ·, the tail of S_N is

P[S_N > x] = Σ_{n=1}^∞ r(1 − r)^{n−1} Σ_{j=0}^{n−1} ((λx)^j / j!) e^{−λx}
           = e^{−λx} Σ_{j=0}^∞ ((λx)^j / j!) Σ_{n=j+1}^∞ r(1 − r)^{n−1}
           = e^{−λx} Σ_{j=0}^∞ [λ(1 − r)x]^j / j!
           = e^{−λx} · e^{λ(1−r)x} = e^{−λrx},

which shows that S_N has an exponential distribution with parameter λr.
(b) In decomposing the Poisson stream into m substreams, each arrival is assigned independently to the kth substream with probability r_k, where Σ_{k=1}^m r_k = 1. Consider an arrival that is assigned to the kth substream. The number of subsequent arrivals of the original Poisson stream up to and including the next arrival that is assigned to the kth substream is a random variable N_k with distribution

P[N_k = n] = (1 − r_k)^{n−1} r_k, n = 1, 2, · · ·.

Therefore, the inter-arrival time between arrivals assigned to the kth substream is a random variable

S_{N_k} = X_1 + · · · + X_{N_k},

where the X_i are inter-arrival times of the original Poisson process. Hence, the X_i are i.i.d. and exponentially distributed with parameter λ. By the result of part (a), S_{N_k} is exponentially distributed with parameter r_k λ. Therefore, the kth substream is Poisson with rate r_k λ.
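This thinning result can be sketched numerically (parameters below are illustrative): assign each arrival of a rate-λ Poisson stream to a substream with probability r and check that the substream's inter-arrival times behave like an exponential with rate rλ.

```python
import math
import random

rng = random.Random(3)
lam, r = 2.0, 0.3          # original rate and thinning probability (arbitrary)
n = 300_000

gaps, current = [], 0.0
for _ in range(n):
    current += rng.expovariate(lam)   # next arrival of the original stream
    if rng.random() < r:              # arrival assigned to the substream
        gaps.append(current)
        current = 0.0                 # restart the clock for the next gap

mean_gap = sum(gaps) / len(gaps)
tail_gap = sum(1 for g in gaps if g > 2.0) / len(gaps)
# Exponential(r*lam) predictions: mean 1/(r*lam), P[gap > 2] = e^{-2*r*lam}
```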
2.3-4 Alternate decomposition of a Poisson stream. Let X_i represent the inter-arrival time between the ith and the (i + 1)-st arrival. For substream 1, the time between the first and the second arrival is given by

Y = X_1 + X_2 + · · · + X_m.
The event {Y ≤ y} is equivalent to the event that there are fewer than m arrivals of the original Poisson stream in an interval of length y, i.e.,

F_Y(y) ≜ P[Y ≤ y] = 1 − P[fewer than m arrivals in an interval of length y] = 1 − Σ_{n=0}^{m−1} ((λy)^n / n!) e^{−λy},

which is an Erlang-m distribution with mean m/λ.
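As a numerical sketch (m and λ are arbitrary choices), taking every m-th arrival of a Poisson stream and checking that the resulting spacing has the Erlang-m mean m/λ and variance m/λ²:

```python
import random

rng = random.Random(11)
lam, m = 2.0, 3
n_groups = 50_000

# spacing between consecutive "every m-th" arrivals = sum of m interarrival times
spacings = [sum(rng.expovariate(lam) for _ in range(m)) for _ in range(n_groups)]

mean_s = sum(spacings) / n_groups
var_s = sum((s - mean_s) ** 2 for s in spacings) / n_groups
# Erlang-m predictions: mean m/lam, variance m/lam^2
```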
2.3-5 Derivation of the Poisson distribution
(a) Equation (2.3-9) for n = 0 can be written as:

(d/dt) ln P_0(t) = −λ,

which is a simple, separable first-order differential equation. Integrating both sides and solving for P_0(t) yields P_0(t) = Ke^{−λt}, where the constant K is determined by the initial condition P_0(0) = 1. Hence, K = 1 and

P_0(t) = e^{−λt}. (2.4)
Substituting (2.4) into (2.3-9) for n = 1, we obtain

P′_1(t) + λP_1(t) = λe^{−λt}. (2.5)
Equation (2.5) is a first-order differential equation that can be reduced to a separable form by multiplying both sides by an integrating factor φ(t). Comparing

(d/dt)[φ(t)P_1(t)] = φ(t)P′_1(t) + φ′(t)P_1(t)

with φ(t) times the left-hand side of (2.5), we see that they can be made equal by choosing φ(t) such that φ′(t) = λφ(t), which gives φ(t) = e^{λt},
which is the integrating factor we seek. After multiplying (2.5) by the integrating factor φ(t) = e^{λt}, we obtain:

(d/dt)[e^{λt} P_1(t)] = λ.

Integrating both sides and applying the initial condition P_1(0) = 0 yields P_1(t) = λt e^{−λt}. Repeating this procedure for successive n, we obtain

P_{n+1}(t) = ((λt)^{n+1} / (n + 1)!) e^{−λt}.

Thus, by induction, we have shown the validity of (2.14) for all n ≥ 0.
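The induction result can be checked by integrating the differential-difference equations P′_0 = −λP_0, P′_n = −λP_n + λP_{n−1} directly (a forward-Euler sketch with illustrative parameters and truncation level) and comparing against the Poisson probabilities (λt)^n e^{−λt}/n!:

```python
import math

lam, t_end, steps, N = 1.5, 2.0, 20_000, 40
dt = t_end / steps

P = [0.0] * (N + 1)
P[0] = 1.0                                 # empty system at t = 0
for _ in range(steps):
    # forward-Euler step of the truncated system of ODEs
    dP = [-lam * P[0]] + [-lam * P[n] + lam * P[n - 1] for n in range(1, N + 1)]
    P = [p + dt * d for p, d in zip(P, dP)]

poisson = [(lam * t_end) ** n / math.factorial(n) * math.exp(-lam * t_end)
           for n in range(N + 1)]
max_err = max(abs(a - b) for a, b in zip(P, poisson))
```

The truncation at N = 40 is harmless here because the Poisson(λt = 3) tail beyond n = 40 is negligible.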
(b) Taking the Laplace transform of the system of differential equations (4.61) and (4.62), solving, and inverting term by term, we again obtain P_n(t) = ((λt)^n / n!) e^{−λt}, where we have used the following Laplace transform properties:

L^{−1}{f*(s + a)} = f(t)e^{−at}, (2.18)
L^{−1}{1/s^{n+1}} = t^n / n!.
2.3-6 Uniformity of Poisson arrivals
(a) Suppose there are n arrivals in the interval (0, T]. The joint probability that there are i arrivals in a subinterval (0, t], one arrival in (t, t + h], and n − i − 1 arrivals in (t + h, T] is the product of three factors obtained from the Poisson distribution:

((λt)^i / i!) e^{−λt} · λh e^{−λh} · ([λ(T − t − h)]^{n−i−1} / (n − i − 1)!) e^{−λ(T−t−h)}
= (λ^n h e^{−λT} / (i! (n − i − 1)!)) t^i (T − t − h)^{n−i−1}.

Summing the above expression over the possible values of i, we find that the joint probability that there are n arrivals in (0, T] with one arrival in (t, t + h] is

(λ^n h e^{−λT} / (n − 1)!) Σ_{i=0}^{n−1} C(n − 1, i) t^i (T − t − h)^{n−i−1} = ([λ(T − h)]^{n−1} / (n − 1)!) λh e^{−λT}, (2.20)

where we used the binomial formula

Σ_{i=0}^k C(k, i) x^i y^{k−i} = (x + y)^k.

Since h is an infinitesimal interval, we rewrite (2.20) as

P[n arrivals in (0, T], 1 arrival in (t, t + h]] = ((λT)^{n−1} / (n − 1)!) λh e^{−λT} + o(h).

Dividing by P[n arrivals in (0, T]] = ((λT)^n / n!) e^{−λT} yields

P[1 arrival in (t, t + h] | n arrivals in (0, T]] = nh/T + o(h).
(b) Since the n arrivals are independent, any one of them will fall into the interval (t, t + h] with equal chance. Thus, the conditional probability that a given call arrives in (t, t + h], given that it is one of n arrivals in (0, T], is h/T. This final expression is independent of n; hence the conditional probability is unconditional. Thus, we have proved (2.3-34).
2.3-7 Pure birth process. When λ(n) = λ and µ(n) = 0 for all n ≥ 0, the differential-difference equations of the B-D process become:

p′_n(t) = −λp_n(t) + λp_{n−1}(t), n = 1, 2, · · ·, (2.21)
p′_0(t) = −λp_0(t). (2.22)

Using the same procedure as in Exercise 2.3-5, these equations can be solved to obtain the Poisson distribution:

p_n(t) = ((λt)^n / n!) e^{−λt}.
2.3-8 Pure birth process with state-dependent rates. After multiplying by the integrating factor e^{λ(n)t}, integrating both sides from 0 to t, and re-arranging, we obtain:

p_n(t) = e^{−λ(n)t} [λ(n − 1) ∫_0^t p_{n−1}(x) e^{λ(n)x} dx + K],

where K is a constant determined by the initial condition p_n(0) = K.
2.3-9 Pure death process. When λ(n) = 0 and µ(n) = µ for all n ≥ 0, the differential-difference equations of the B-D process become:

p′_{N_0}(t) = −µp_{N_0}(t),
p′_n(t) = −µp_n(t) + µp_{n+1}(t), n = 1, · · ·, N_0 − 1,
p′_0(t) = µp_1(t).

With the initial condition p_{N_0}(0) = 1, the first equation gives p_{N_0}(t) = e^{−µt}, and applying the integrating factor e^{µt} to the second yields

p_n(t) = µ e^{−µt} ∫_0^t e^{µx} p_{n+1}(x) dx, n = 1, · · ·, N_0 − 1. (2.31)

Applying (2.31) successively for n = N_0 − 1, N_0 − 2, · · ·, 1, we obtain:

p_n(t) = ((µt)^{N_0−n} / (N_0 − n)!) e^{−µt}, n = 1, · · ·, N_0,

and

p_0(t) = 1 − Σ_{n=1}^{N_0} p_n(t) = 1 − Σ_{n=0}^{N_0−1} ((µt)^n / n!) e^{−µt}.
2.3-10 The time-dependent PGF. Multiply both sides of (2.21) and (2.22) by z^n and sum from n = 0 to ∞ to obtain:

∂G(z, t)/∂t = −λ(1 − z)G(z, t),

which, with G(z, 0) = 1, gives G(z, t) = e^{−λ(1−z)t} = e^{−λt} Σ_{n=0}^∞ (λtz)^n / n!. Hence,

p_n(t) = e^{−λt} (λt)^n / n!.
2.3-11 Time-dependent solution for a certain BD process. When λ_n = λ and µ_n = nµ for all n, equations (2.3-46) become

p′_n(t) = −(λ + nµ)p_n(t) + λp_{n−1}(t) + (n + 1)µp_{n+1}(t), n = 1, 2, 3, · · ·, (2.36)
p′_0(t) = −λp_0(t) + µp_1(t). (2.37)

Multiply both sides of (2.36) by z^n, sum from n = 1 to ∞, and then add (2.37) to obtain the partial differential equation

∂G(z, t)/∂t + µ(z − 1) ∂G(z, t)/∂z = λ(z − 1)G(z, t). (2.39)

Subject to the initial condition G(z, 0) = 1,

G(z, t) = exp[(λ/µ)(z − 1)(1 − e^{−µt})]

is the unique solution to (2.39). Since p_n(t) is the coefficient of z^n in the power series expansion of G(z, t), we obtain

p_n(t) = ((m(t))^n / n!) e^{−m(t)}, where m(t) ≜ (λ/µ)(1 − e^{−µt}).
2.4 BIRTH-AND-DEATH QUEUEING MODELS
2.4-1 Splitting a Poisson stream
2.4-2 Erlangian distribution
(a) The LT of the pdf of an exponential random variable with parameter µ is given by

f*(s) = µ / (s + µ).

Therefore, the LT of Y_i is

f*_Y(s) = nλ / (s + nλ).

Since X = Y_1 + · · · + Y_n,

f*_X(s) = (nλ / (s + nλ))^n. (2.43)
(b) The pdf of X can be obtained by inverting the LT given in (2.43). Using properties of the LT, we have

f_X(x) = (nλ)^n L^{−1}{1/(s + nλ)^n} = (nλ)^n e^{−nλx} L^{−1}{1/s^n}
       = (nλ)^n e^{−nλx} · x^{n−1}/(n − 1)! = ((nλx)^n / (x(n − 1)!)) e^{−nλx}.
Var[X] = n Var[Y_i] = n · 1/(nλ)^2 = 1/(nλ^2).
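A numerical sketch of the moments (n and λ are arbitrary choices), integrating the pdf just derived by a midpoint rule:

```python
import math

n, lam = 4, 2.0
rate = n * lam

def f_X(x):
    """Erlang-n pdf with phase rate n*lam."""
    return rate ** n * x ** (n - 1) * math.exp(-rate * x) / math.factorial(n - 1)

x_max, N = 5.0, 200_000      # truncation; the tail beyond x_max is negligible here
h = x_max / N
xs = [(i + 0.5) * h for i in range(N)]
m1 = sum(x * f_X(x) for x in xs) * h        # E[X]
m2 = sum(x * x * f_X(x) for x in xs) * h    # E[X^2]
var = m2 - m1 ** 2
# predictions: E[X] = 1/lam, Var[X] = 1/(n*lam^2)
```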
2.4-3 Erlangian distribution (continued). The service completions of customers 1 through n may be considered as arrivals of a Poisson process, since the service times are exponentially distributed and i.i.d. The event that the
2.4-4 Balance equation of M/M/1
(a) The detailed balance equation for the M/M/1 queue equates the probability flow rate from state n − 1 to state n with that from state n to state n − 1. The flow rates must be equal if the queue reaches a stable equilibrium. Therefore, the balance equations are given by

λp_{n−1} = µp_n, i.e., p_n = ρp_{n−1}, n = 1, 2, · · ·, where ρ = λ/µ.

Iterating gives p_n = ρ^n p_0, and normalization (Σ_n p_n = 1) gives p_0 = 1 − ρ. Hence, we see that p_n = (1 − ρ)ρ^n for n = 0, 1, · · ·.
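A numerical sketch (ρ < 1 chosen arbitrarily) confirming that p_n = (1 − ρ)ρ^n satisfies the detailed balance equations, normalizes, and yields the familiar mean ρ/(1 − ρ):

```python
lam, mu = 0.7, 1.0
rho = lam / mu
N = 400                      # truncation level for the infinite sums

p = [(1 - rho) * rho ** n for n in range(N + 1)]

# detailed balance: flow n-1 -> n equals flow n -> n-1
for n in range(1, N + 1):
    assert abs(lam * p[n - 1] - mu * p[n]) < 1e-15

total = sum(p)
mean_n = sum(n * p[n] for n in range(N + 1))
```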
2.4-5 PASTA and related properties in the M/M/1 queue
(a) Due to the uniformity property of the Poisson process, the proportion of arriving calls in the interval (0, T) that find n in the system can be expressed as the following ratio:

a_n = (expected number of arrivals in (0, T) that find n in the system) / (expected number of arriving calls in (0, T)).

The expected number of calls during (0, T) that find exactly n calls in the system is λp_nT. Thus,

a_n = λp_nT / (Σ_{i=0}^∞ λp_iT) = λp_nT / (λT) = p_n.
(b) From Problem 2.2-1, we know that a_n = d_n holds for any work-conserving queueing discipline. Hence, in this case p_n = d_n, i.e., the probability distribution of the number in system seen by departing customers is {p_n}.
(c) If the arrival process is state-dependent, i.e., the arrival rate λ(n) depends on the state of the system, then

a_n = λ(n)p_n / Σ_{i=0}^∞ λ(i)p_i,

which in general differs from p_n, so the PASTA property no longer holds.
2.4-6 Derivation of the waiting time distribution. From (2.4-25) we have

F_W(x) = P[W ≤ x] = 1 − ρe^{−µ(1−ρ)x}, x ≥ 0.
2.4-7 Laplace transform method. The waiting time experienced by a call that arrives with n ≥ 1 calls ahead of it in the system is given by:

W = R_1 + S_2 + · · · + S_n, (2.48)

where R_1, S_2, · · ·, S_n are i.i.d. according to an exponential distribution with parameter µ. If there are 0 calls ahead of the arriving call, its waiting time will be 0. That is, the conditional pdf of W given N = 0 is f_W(t|0) = δ(t), while the conditional LT of W given N = n is (µ/(s + µ))^n, n ≥ 1. Unconditioning over the distribution of N leads to

F_W(t) = 1 − ρe^{−µ(1−ρ)t}, t ≥ 0.
The time in system, T, is related to the waiting time W by T = W + S, where S is the service time, which is exponentially distributed with parameter µ and independent of W. Hence, the Laplace transform of the pdf of T is given by:

f*_T(s) = f*_W(s) · µ/(s + µ) = µ(1 − ρ) / (s + µ(1 − ρ)),

which implies that T is exponentially distributed with parameter µ(1 − ρ).
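The transform algebra can be double-checked numerically (a sketch with arbitrary λ and µ): here f*_W(s) = (1 − ρ) + ρ · µ(1 − ρ)/(s + µ(1 − ρ)) is the M/M/1 waiting-time LT obtained by unconditioning, and multiplying it by the service LT µ/(s + µ) should collapse to µ(1 − ρ)/(s + µ(1 − ρ)).

```python
lam, mu = 0.6, 1.0
rho = lam / mu

def W_star(s):
    """Waiting-time LT: atom of size (1-rho) at 0 plus exponential part."""
    return (1 - rho) + rho * mu * (1 - rho) / (s + mu * (1 - rho))

def T_star(s):
    """Claimed time-in-system LT: exponential with parameter mu*(1-rho)."""
    return mu * (1 - rho) / (s + mu * (1 - rho))

for s in (0.1, 0.5, 1.0, 3.7, 10.0):
    lhs = W_star(s) * mu / (s + mu)   # LT of W + S (independent sum)
    assert abs(lhs - T_star(s)) < 1e-12
```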
e^{−(a_1+a_2)} (2.52)
= Q(k; a_1 + a_2) = RHS,

where the variable substitution l = i + k − j is made in (2.52).
Using integration by parts, we can write

Q(k − 1; a) = ∫_a^∞ (e^{−y} y^{k−1} / (k − 1)!) dy.

To establish the identity

(a + k + 1)Q(k; a) = aQ(k − 1; a) + (k + 1)Q(k + 1; a), (2.53)

write the left-hand side as

LHS = aQ(k; a) + (k + 1)Q(k; a). (2.54)
We can also write the following relations:

Q(k; a) = Q(k − 1; a) + P(k; a), (2.55)
Q(k; a) = Q(k + 1; a) − P(k + 1; a). (2.56)

Substituting (2.55) and (2.56) into (2.54), we obtain

LHS = a[Q(k − 1; a) + P(k; a)] + (k + 1)[Q(k + 1; a) − P(k + 1; a)]
    = [aP(k; a) − (k + 1)P(k + 1; a)] + [aQ(k − 1; a) + (k + 1)Q(k + 1; a)]. (2.57)

It is easy to verify that the first term in square brackets in (2.57) is zero. Hence, (2.53) is established.
(d) The result in part (c) can be rewritten in the form (2.53). Substituting k for k + 1 in (2.53), we have

(k + a)Q(k − 1; a) = aQ(k − 2; a) + kQ(k; a). (2.58)

Rearranging terms, we can write

kQ(k; a) − aQ(k − 1; a) = kQ(k − 1; a) − aQ(k − 2; a). (2.59)

Using the relation

kQ(k − 1; a) = Q(k − 1; a) + (k − 1)Q(k − 1; a) (2.60)

in (2.59), we obtain

kQ(k; a) − aQ(k − 1; a) = Q(k − 1; a) + [(k − 1)Q(k − 1; a) − aQ(k − 2; a)]. (2.62)

Applying the above recursive relation repeatedly, we obtain

kQ(k; a) − aQ(k − 1; a)
= Q(k − 1; a) + Q(k − 2; a) + [(k − 2)Q(k − 2; a) − aQ(k − 3; a)]
= Q(k − 1; a) + Q(k − 2; a) + Q(k − 3; a) + · · · + Q(1; a) + [Q(1; a) − aQ(0; a)]
= Σ_{j=0}^{k−1} Q(j; a),

since Q(1; a) − aQ(0; a) = (1 + a)e^{−a} − ae^{−a} = e^{−a} = Q(0; a).
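With Q(k; a) read as the cumulative Poisson probability Σ_{j=0}^k a^j e^{−a}/j! (the reading consistent with relations (2.55) and (2.56)), the final identity can be checked numerically for a few values (an illustrative sketch):

```python
import math

def P(k, a):
    """Poisson probability a^k e^{-a} / k!."""
    return a ** k * math.exp(-a) / math.factorial(k)

def Q(k, a):
    """Cumulative Poisson probability, so that Q(k;a) = Q(k-1;a) + P(k;a)."""
    return sum(P(j, a) for j in range(k + 1))

for a in (0.5, 2.3, 7.0):
    for k in range(1, 11):
        lhs = k * Q(k, a) - a * Q(k - 1, a)
        rhs = sum(Q(j, a) for j in range(k))
        assert abs(lhs - rhs) < 1e-12
```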
2.4-10 Queue with finite waiting room M/M/1(B)

(a) Let {π_n} denote the equilibrium-state distribution of the M/M/1(B) queue. The BD balance equations (2.3-52) reduce to (cf. (2.4-5))

π_n = ρπ_{n−1} = ρ^n π_0, n = 0, 1, · · ·, B,

where ρ = λ/µ. The normalization requirement Σ_{n=0}^B π_n = 1 implies that

π_0 = (1 − ρ) / (1 − ρ^{B+1}), so that π_n = (1 − ρ)ρ^n / (1 − ρ^{B+1}), n = 0, 1, · · ·, B.

(b) The conditional waiting time distribution for the M/M/1(B) is the same as for the M/M/1, as given by (2.4-18) and (2.4-24).
2.4-11 Transient analysis of the M/M/1 queue

(b) Taking the Laplace transform of (2.4-31) yields (2.4-33) directly.
(c) Multiplying both sides of (2.4-33) by z^n, summing from n = 1 to ∞, and letting G*(z, s) ≜ Σ_{n=0}^∞ z^n P*_n(s), we obtain

s(G*(z, s) − P*_0(s)) = −(λ + µ)(G*(z, s) − P*_0(s)) + λz G*(z, s) + µz^{−1}(G*(z, s) − zP*_1(s) − P*_0(s)). (2.63)

Taking the Laplace transform of (2.4-32) yields

sP*_0(s) − P_0(0) = −λP*_0(s) + µP*_1(s).

Rearranging terms and using the fact that P_0(0) = 1, we obtain

P*_1(s) = µ^{−1}[(s + λ)P*_0(s) − 1]. (2.64)

Substituting (2.64) into (2.63) and rearranging terms, we obtain

[(λ + µ + s)z − µ − λz^2]G*(z, s) = z − µ(1 − z)P*_0(s),

from which (2.4-36) follows.
(d) (2.4-37) and (2.4-38) follow directly from applying the quadratic formula.

(e) · · · and it then follows that LHS = RHS.
(f) Using (2.4-41) and (2.4-42), we can compute the limits (2.66) and (2.67) as s → ∞. From (2.66), we conclude that z_1^{−1}(s) remains bounded as s → ∞; from (2.67), we conclude that z_2^{−1}(s) tends to infinity as s → ∞.
(g) Equating (2.64) and (2.68), and solving for P*_0(s), yields (2.4-43).
(h) We can rewrite (2.4-41) as follows:

z_1^{−n}(s) = (λ/µ)^{n/2} [((λ + µ + s) − √((λ + µ + s)^2 − 4λµ)) / (2√(λµ))]^n.

We make use of the following property of the inverse Laplace transform:

L^{−1}{G*(a(s + b))} = (1/a) g(t/a) e^{−bt}.

Applying this property, we obtain

L^{−1}{z_1^{−n}(s)} = (λ/µ)^{n/2} e^{−(λ+µ)t} · (n/t) I_n(2√(λµ) t), (2.71)

from which (2.4-46) follows.
(i) Applying the inverse Laplace transform to (2.4-43), we obtain

P_0(t) = L^{−1}{P*_0(s)} = L^{−1}{1/s − · · ·} (2.72)
       = u(t) − ∫_0^t · · ·,

from which (2.4-47) follows.
(j) Substitution of (2.4-49) into (2.4-46) yields (2.4-50) after some algebra.

(k)

2.4-12 Mean queue length and mean waiting time
ρ p_m (d/dρ) [Σ_{k=0}^∞ ρ^k] = ρ p_m (d/dρ) [1/(1 − ρ)] = ρ p_m / (1 − ρ)^2.