M/G/∞ INPUT PROCESSES

ARMAND M. MAKOWSKI AND MINOTHI PARULEKAR

Institute for Systems Research, and Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742
9.1 INTRODUCTION
Several recent measurement studies have concluded that classical Poisson-like traffic models do not account for time dependencies observed at multiple time scales in a wide range of networking applications, for example, Ethernet LANs [13, 20, 33], variable bit rate (VBR) traffic [3, 15], Web traffic [6], and WAN traffic [31]. As the resulting temporal correlations are expected to have a significant impact on buffer engineering practices, this ``failure of Poisson modeling'' has generated an increased interest in a number of alternative traffic models that capture observed (long-range) dependencies [14, 24]. Proposed models include fractional Brownian motion [25] and its discrete-time analog, fractional Gaussian noise [1]. Already both have exposed clearly the limitations of traditional traffic models in predicting storage requirements and devising congestion controls. A discussion of these issues in the case of fractional Brownian motion is summarized in Chapter 4.
In this chapter we focus instead on the class of M/G/∞ input processes as potential traffic models. An M/G/∞ input process is understood as the busy server process of a discrete-time infinite server system fed by a discrete-time Poisson process of rate λ (customers/slot) and with generic service time s distributed according to G. As argued in Parulekar [27] and Parulekar and Makowski [29], these M/G/∞ input processes constitute a viable alternative to existing traffic models; reasons range from flexibility to tractability.
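To make this definition concrete, the following sketch simulates the busy server process of a discrete-time M/G/∞ system starting from an empty system. The function names and the Knuth-style Poisson sampler are our own illustrative choices, not from the chapter.

```python
import math
import random

def sample_poisson(lam, rng):
    """Draw one Poisson(lam) variate by Knuth's multiplication method."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def busy_servers(lam, sample_service, T, rng):
    """Return [b_0, ..., b_{T-1}] for a discrete-time M/G/inf system.

    lam            -- Poisson arrival rate (customers/slot)
    sample_service -- callable returning a service time in {1, 2, ...}
    The system starts empty (no initial customers).
    """
    residual = []                     # remaining slots of work per customer
    b = []
    for _ in range(T):
        b.append(len(residual))       # busy servers at the start of the slot
        residual = [r - 1 for r in residual if r > 1]   # one slot of service
        # customers arriving during this slot begin service next slot
        n = sample_poisson(lam, rng)
        residual += [sample_service() for _ in range(n)]
    return b
```

In steady state b_t is Poisson with mean λE[s] (Proposition 9.2.1 below), which gives a quick sanity check on any such simulation.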
Self-Similar Network Traffic and Performance Evaluation, Edited by Kihong Park and Walter Willinger.
Copyright © 2000 by John Wiley & Sons, Inc. Print ISBN 0-471-31974-0; Electronic ISBN 0-471-20644-X.
First, the relevance of the M/G/∞ input model to network traffic modeling is perhaps best explained through its connection to an attractive model for aggregate packet streams proposed by Likhanov et al. [21]. They show that the combined traffic generated by several independent, identically distributed (i.i.d.) on/off sources with Pareto distributed activity periods behaves in the limit, as the number of sources increases, like the M/G/∞ input stream with a Pareto distributed s. This provides a rationale for the view that M/G/∞ input processes could provide a natural alternative to existing traffic models, at least for certain multiplexed applications. Second, the class of M/G/∞ input processes is stable under multiplexing; that is, the superposition of several M/G/∞ processes can be represented by an M/G/∞ input process.
Third, the M/G/∞ model displays great flexibility in capturing positive dependencies over a wide range of time scales; this is achieved very simply through the tail behavior of s (Proposition 9.4.1). The degree of positive correlation can further be characterized by the sum of the autocovariances, or index of dispersion of counts (IDC), with the process being short-range dependent (SRD) (i.e., IDC finite) if and only if E[s²] is finite (Proposition 9.5.1).
Insights into how temporal correlations of M/G/∞ input processes will affect queueing performance can be gained by analyzing the behavior of a multiplexer fed by an M/G/∞ input process. For simplicity, we model the multiplexer as a discrete-time single server system consisting of an infinite size buffer and a server with a constant release rate c (cells/slot). The number of customers in the input buffer at time t is denoted by q_t. Our performance index is the steady-state buffer tail probability P[q_∞ > b], as this quantity is indicative of the buffer overflow probability in a corresponding finite buffer system with b positions.
Computing these tail probabilities, either analytically or numerically, represents a challenging problem in the absence of any underlying Markov property for M/G/∞ inputs. Instead, we focus on the simpler task of determining the asymptotic tail behavior of the queue-length distribution for large buffer size. More precisely, we seek results of the form
lim_{b→∞} (1/h(b)) ln P[q_∞ > b] = −γ  (9.1)

for some positive constant γ and mapping h: R_+ → R_+; these quantities are characterized by λ, G, and c, and should be easily computable. Limits such as Eq. (9.1) suggest approximations of the form

P[q_∞ > b] ≃ e^{−h(b)γ}, b → ∞.  (9.2)

Needless to say, such estimates should be approached with care [5]. Nonetheless, Eq. (9.1) already provides some qualitative insights into the queueing behavior at the multiplexer and could, in principle, be used to produce guidelines for sizing up its buffers.
In this chapter we provide an overview of some recent work on this issue. Drastically different behaviors emerge depending on whether v*_t = O(t) or v*_t = o(t) (with t → ∞), where v*_t = −ln P[ŝ > t], t = 1, 2, …, and ŝ is the forward recurrence time (9.9) associated with s. The case v*_t = O(t) is associated with the service time s having exponential tails, while the case v*_t = o(t) corresponds to heavy or subexponential tails for s.
Our focus here is primarily (but not exclusively) on large deviations techniques in order to obtain Eq. (9.1). This approach has already been adopted by a number of authors [10, 16, 19]. Applying results by Duffield and O'Connell [10] (and some recent extensions thereof [11, 30]), we are able to compute h(b) and γ under reasonably general conditions. In fact, for a large class of distributions, we can select h(b) = v*_⌈b⌉, and the asymptotics (9.1) and (9.2) then take the compact form

P[q_∞ > b] ≃ P[ŝ > b]^γ, b → ∞.  (9.3)

Hence, in many cases, including Weibull, lognormal, and Pareto service times, q_∞ and ŝ (thus s) belong to the same distributional class as characterized by tail behavior.
In many cases of interest, in lieu of Eq. (9.1), these large deviations techniques yield only the weaker asymptotic bounds

−γ_* ≤ lim inf_{b→∞} (1/h(b)) ln P[q_∞ > b]  (9.4)

and

lim sup_{b→∞} (1/h(b)) ln P[q_∞ > b] ≤ −γ*  (9.5)

with γ* ≤ γ_*. This situation typically occurs when s is heavy tailed (more generally, subexponential) with either finite or infinite E[s²], in which case large deviations excursions are only one of several causes for buffer exceedances [19]. While Eqs. (9.4) and (9.5) are still useful in providing bounds on decay rates, they will not be tight in the heavy-tail case, and other approaches are needed. Of particular relevance are the approaches of Liu et al. [22] (summarized in Section 9.11) and of Likhanov (discussed in Chapter 8). Liu et al. [22] derive bounds through direct arguments that rely on the asymptotics of Pakes [26] for the GI/GI/1 queue under subexponential assumptions [12]. While Likhanov presents lower and upper bounds only when s is Pareto, these bounds are asymptotically tight. Results for the continuous-time model can be found in Jelenković and Lazar [17], and in Chapters 7 and 10.
Comparison of Eq. (9.3) with results from Norros [25] and Parulekar and Makowski [28] points already to the complex and subtle impact of (long-range) dependencies on the tail probability P[q_∞ > b]. Indeed, in Norros [25] the input stream to the multiplexer was modeled as a fractional Gaussian noise process (or rather its continuous-time analog) exhibiting long-range dependence (in fact, self-similarity), and the buffer asymptotics displayed Weibull-like characteristics. On the other hand, by the results described above, an M/G/∞ input process with a Weibull service time also yields Weibull-like buffer asymptotics although the input process is now short-range dependent. Hence, the same asymptotic buffer behavior can be induced by two vastly different input streams, one long-range dependent and the other short-range dependent! To make matters worse, if the pmf G were Pareto instead of Weibull, the input process would now be long-range dependent, in fact asymptotically self-similar [28], but the buffer distribution would now exhibit Pareto-like asymptotics, in sharp contrast with the results of Norros [25].
To reiterate the main conclusion of Parulekar and Makowski [28], the value of the Hurst parameter as the sole indicator of long-range dependence (via asymptotic self-similarity) is at best questionable, as it does not characterize buffer asymptotics by itself. Furthermore, buffer sizing cannot be determined adequately by appealing solely to the short- versus long-range dependence characterization of the input model used, be it of the M/G/∞ type or otherwise. Of course, this is not too surprising since long-range dependence (and its close cousin, second-order self-similarity) is determined by second-order properties of the input process, while asymptotics of the form (9.1) invoke much finer probabilistic properties, which are embedded here in the sequence {v*_t, t = 1, 2, …}. The finiteness of E[s²] (which characterizes the SRD nature of the M/G/∞ input process) is obviously a poor marker for predicting the behavior of this sequence.
To close, we note that the diverse queueing behavior demonstrated here is tied to the tail behavior of s, which determines the correlation structure of M/G/∞ inputs. This clearly illustrates the tremendous impact that the correlation structure of an input stream can have on the corresponding queueing performance, given that the M/G/∞ inputs all have Poisson marginals! One more data point for the need of a cautious approach in modeling network traffic when time dependencies are either observed or suspected.
9.2 THE M/G/∞ INPUT PROCESS
We summarize various facts concerning the busy server process of a discrete-time M/G/∞ system; details are available in Parulekar [27].
9.2.1 The Model
Consider a system with infinitely many servers. During time slot [t, t+1), β_{t+1} new customers enter the system. Customer i (i = 1, …, β_{t+1}) is presented to its own server and begins service by the start of slot [t+1, t+2); its service time has duration s_{t+1,i} (expressed in number of slots). Let b_t denote the number of busy servers or, equivalently, of customers still present in the system, at the beginning of slot [t, t+1). We assume that b servers are initially present in the system at t = 0 (i.e., at the beginning of slot [0, 1)), with customer i (i = 1, …, b) requiring an amount of work of duration s_{0,i} from its own server. The busy server process {b_t, t = 0, 1, …} is what we refer to as the M/G/∞ input process.
The following assumptions are enforced on the N-valued random variables (rvs) b, {β_{t+1}, t = 0, 1, …} and {s_{t,i}, t = 0, 1, …; i = 1, 2, …}: (1) the rvs are mutually independent; (2) the rvs {β_{t+1}, t = 0, 1, …} are i.i.d. Poisson rvs with parameter λ > 0; (3) the rvs {s_{t,i}, t = 1, 2, …; i = 1, 2, …} are i.i.d. with common pmf G on {1, 2, …}. We denote by s a generic rv distributed according to the pmf G. Throughout we assume this pmf G to have a finite first moment or, equivalently, E[s] < ∞. At this point, no additional assumptions are made on the rvs {s_{0,i}, i = 1, 2, …}.
For each t = 0, 1, …, we note the decomposition

b_t = b_t^{(0)} + b_t^{(a)},  (9.6)

where the rvs b_t^{(0)} and b_t^{(a)} describe the contributions to the number of customers in the system at the beginning of slot [t, t+1) from those initially present (at t = 0) and from the new arrivals in the interval [0, t], respectively. Under the enforced operational assumptions, we readily check that

b_t^{(0)} = Σ_{i=1}^{b} 1[s_{0,i} > t] and b_t^{(a)} = Σ_{u=1}^{t} Σ_{i=1}^{β_u} 1[s_{u,i} > t − u], t = 0, 1, ….  (9.7)
9.2.2 The Stationary Version
Although the busy server process {b_t, t = 0, 1, …} is in general not a (strictly) stationary process, it does admit a stationary and ergodic version {b*_t, t = 0, 1, …}. This stationary version satisfies the decomposition (9.6), with the portion in (9.7) due to the initial condition replaced by

b_t^{(0)} = Σ_{n=1}^{b} 1[ŝ_n > t], t = 0, 1, …,  (9.8)

where (1) the rvs b and {ŝ_n, n = 1, 2, …} are independent of the rvs {β_{t+1}, t = 0, 1, …} and {s_{t,i}, t = 1, 2, …; i = 1, 2, …}; (2) the rvs {ŝ_n, n = 1, 2, …} are independent of the rv b, which is Poisson distributed with parameter λE[s]; and (3) the rvs {ŝ_n, n = 1, 2, …} are i.i.d. rvs distributed according to the forward recurrence time ŝ associated with s; the corresponding equilibrium pmf Ĝ of ŝ is given by

P[ŝ = r] = P[s ≥ r]/E[s], r = 1, 2, ….  (9.9)
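For a finitely supported service pmf, Eq. (9.9) is straightforward to evaluate; the helper below (our naming) returns the equilibrium pmf of ŝ:

```python
def forward_recurrence_pmf(pmf):
    """Equilibrium pmf of Eq. (9.9): P[s_hat = r] = P[s >= r] / E[s].

    pmf -- dict mapping r in {1, 2, ...} to P[s = r], finitely supported
    """
    mean = sum(r * p for r, p in pmf.items())
    ghat, tail = {}, 0.0
    for r in range(max(pmf), 0, -1):    # build P[s >= r] from the top down
        tail += pmf.get(r, 0.0)
        ghat[r] = tail / mean
    return ghat
```

For instance, s uniform on {1, 3} has E[s] = 2, and ŝ then puts mass 1/2, 1/4, 1/4 on {1, 2, 3}.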
The following properties of {b*_t, t = 0, 1, …} follow readily from this representation [7; 18, Theorem 3.11, p. 79; 27].

Proposition 9.2.1 The stationary and ergodic version {b*_t, t = 0, 1, …} of the busy server process has the following properties:

(i) For each t = 0, 1, …, the rv b*_t is a Poisson rv with parameter λE[s].
(ii) The process is reversible in that

(b*_0, b*_1, …, b*_t) =_st (b*_t, b*_{t−1}, …, b*_0), t = 0, 1, ….  (9.10)
9.3 THE BUFFER SIZING PROBLEM
As we shall see shortly in Sections 9.4 and 9.5, M/G/∞ input processes display an extremely rich correlation structure. We expect these temporal correlations to have a significant impact on queueing performance when such processes are offered to a multiplexer. To gain some insight into this basic issue we map a multiplexer into a discrete-time single server queue with infinite capacity and constant release rate of c cells/slot under the first-come-first-served discipline. The cell stream is modeled by an M/G/∞ input process as defined above, with b_{t+1} representing the number of new cells that arrive at the start of slot [t, t+1). Let q^b_t denote the number of cells remaining in the buffer by the end of slot [t−1, t), so that q^b_t + b_{t+1} cells are ready for transmission during slot [t, t+1). If the multiplexer output link can transmit c cells/slot, then the buffer content sequence {q^b_t, t = 0, 1, …} evolves according to the Lindley recursion

q^b_0 = q, q^b_{t+1} = max(q^b_t + b_{t+1} − c, 0), t = 0, 1, …,  (9.11)

for some initial condition q.
Conditions under which the queueing system (9.11) admits a steady-state regime are well known and are given next.

Proposition 9.3.1 If λE[s] < c, then there exists an N-valued rv q^b_∞ such that q^b_t ⇒_t q^b_∞.

Stationary M/G/∞ processes being time-reversible, we have the representation

q^b_∞ =_st sup{S^b_t − ct, t = 0, 1, …}  (9.12)

for the steady-state buffer content q^b_∞, with

S^b_0 = 0, S^b_t = b*_1 + ⋯ + b*_t, t = 1, 2, ….  (9.13)

Hereafter, by an M/G/∞ input process we mean its stationary version {b*_t, t = 0, 1, …}, which is fully characterized by the pair (λ, G). Moreover, from now on, we always assume the stability condition

λE[s] < c.  (9.14)
9.4 SECOND-ORDER CORRELATIONS
Before discussing the asymptotics associated with buffer overflow induced by M/G/∞ input processes, we make a slight detour to explore the correlation structure of such input processes.
9.4.1 Correlation Properties
In view of Proposition 9.2.1, the stationary version {b*_t, t = 0, 1, …} has a well-defined (auto)covariance function Γ: N → R, say,

Γ(h) = Cov[b*_t, b*_{t+h}], t, h = 0, 1, ….  (9.15)

Proposition 9.4.1 We have

Γ(h) = λE[(s − h)^+] = λE[s] P[ŝ > h], h = 0, 1, ….  (9.16)

The first equality in Eq. (9.16) is established in Cox and Isham [7], and the second equality follows readily from the definition (9.9). From Eq. (9.16) we find the autocorrelation function g: N → R of the M/G/∞ process (λ, s) to be given by

g(h) = Γ(h)/Γ(0) = P[ŝ > h], h = 0, 1, ….  (9.17)

Note that g(0) = 1, as we recall that P[s > 0] = 1.
9.4.2 Inverting g
Proposition 9.4.1 shows that the correlation structure of the stationary M/G/∞ input process (λ, s) is completely determined by the pmf of ŝ (thus of s). It turns out that the converse is true as well. Indeed, Eqs. (9.9) and (9.17) together imply

g(h) − g(h+1) = P[ŝ > h] − P[ŝ > h+1] = (E[s])^{−1} P[s > h], h = 0, 1, …,  (9.18)

so that the mapping h → g(h) is necessarily decreasing and integer-convex. Taking into account the facts g(0) = 1 and P[s > 0] = 1, we conclude from Eq. (9.18) (with h = 0) that

E[s] = (1 − g(1))^{−1}  (9.19)

and that the pmf G of s is recovered through

P[s = r] = (g(r−1) − 2g(r) + g(r+1))/(1 − g(1)), r = 1, 2, ….  (9.20)

Moreover,

E[s] = Σ_{h=0}^∞ P[s > h] = (1 − lim_{h→∞} g(h))/(1 − g(1)),  (9.21)

and Eq. (9.19) imposes lim_{h→∞} g(h) = 0. A moment of reflection readily leads to the following invertibility result.

Proposition 9.4.2 An R-valued sequence {g(h), h = 0, 1, …} is the autocorrelation function of the M/G/∞ process (λ, s) with integrable s if and only if the corresponding mapping h → g(h) is decreasing and integer-convex with g(0) = 1 > g(1) and lim_{h→∞} g(h) = 0, in which case the pmf G of s is given by Eq. (9.20).
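Proposition 9.4.2 is constructive: from a valid autocorrelation sequence, Eqs. (9.19) and (9.20) recover the service pmf. A round-trip sketch (our naming):

```python
def pmf_from_autocorrelation(g, rmax):
    """Recover the service pmf from an M/G/inf autocorrelation function via
    Eqs. (9.19)-(9.20): E[s] = 1/(1 - g(1)) and
    P[s = r] = (g(r-1) - 2 g(r) + g(r+1)) / (1 - g(1))."""
    scale = 1.0 / (1.0 - g(1))          # this is E[s], Eq. (9.19)
    return {r: scale * (g(r - 1) - 2 * g(r) + g(r + 1))
            for r in range(1, rmax + 1)}
```

Starting from any service pmf, computing g(h) = P[ŝ > h] and feeding it back reproduces the pmf exactly, which is a convenient unit test for either direction.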
9.5 LONG-RANGE DEPENDENCE
The existence of positive correlations in the sequence {b*_t, t = 0, 1, …} is clearly apparent from Eq. (9.16). The strength of such positive correlations can be formalized in several ways, which we now describe; additional material is available in Cox [8], and we refer the reader to Tsybakov and Georganas [32] for a discussion of alternative definitions.
The sequence {b*_t, t = 0, 1, …} is said to be short-range dependent (SRD) if

Σ_{h=0}^∞ Γ(h) < ∞.  (9.22)

Otherwise, the sequence {b*_t, t = 0, 1, …} is said to be long-range dependent (LRD). Easy calculations using Eq. (9.16) readily lead to the following simple characterization.

Proposition 9.5.1 We have

Σ_{h=0}^∞ Γ(h) = (λ/2) E[s(s+1)],  (9.23)

so that the process is SRD if and only if E[s²] is finite.
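For a finitely supported service pmf, both sides of Eq. (9.23) can be computed directly (helper names ours):

```python
def sum_of_covariances(pmf, lam):
    """Left side of Eq. (9.23): sum over h of Gamma(h) = lam * E[(s-h)^+].

    Gamma(h) vanishes for h >= max support, so a finite sum suffices.
    """
    return sum(lam * sum(p * max(r - h, 0) for r, p in pmf.items())
               for h in range(max(pmf)))

def half_second_factorial_form(pmf, lam):
    """Right side of Eq. (9.23): (lam / 2) * E[s (s + 1)]."""
    return (lam / 2.0) * sum(p * r * (r + 1) for r, p in pmf.items())
```

With a Pareto-type tail P[s > r] ~ r^{−α}, 1 < α < 2, the left side diverges as the support cutoff grows, which is exactly the LRD regime characterized by Proposition 9.5.1.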
Interesting subclasses of LRD processes can further be identified through the notion of second-order self-similarity. To do so, we introduce the rvs

b_t^{(m)} = (1/m) Σ_{k=0}^{m−1} b*_{mt+k}, m = 1, 2, …; t = 0, 1, ….  (9.24)

For each m = 1, 2, …, the rvs {b_t^{(m)}, t = 0, 1, …} form a (wide-sense) stationary sequence with correlation structure defined by

Γ^{(m)}(h) = Cov[b_t^{(m)}, b_{t+h}^{(m)}] and g^{(m)}(h) = Γ^{(m)}(h)/Γ^{(m)}(0), h = 0, 1, ….  (9.25)

For each H > 0 consider the mapping g_H: N → R given by

g_H(h) = ½ (|h−1|^{2H} − 2|h|^{2H} + |h+1|^{2H}), h = 0, 1, ….  (9.26)

We say that the sequence {b*_t, t = 0, 1, …} is exactly (second-order) self-similar if

Var[b_t^{(m)}] = σ² m^{−β}, m = 1, 2, …,  (9.27)

for some constants σ² > 0 and 0 < β < 1, a requirement equivalent to

Γ(h) = σ² g_H(h), h = 0, 1, …,  (9.28)

where H = 1 − β/2 is known as the Hurst parameter of the process. The parameter H being in the range (0.5, 1), the mapping g_H is strictly decreasing and integer-convex, with g_H(0) = 1, and behaves asymptotically as

g_H(h) ~ H(2H−1) h^{2H−2}, h → ∞.  (9.29)
By Proposition 9.4.2 we can interpret g_H as the autocorrelation function of the M/G/∞ input process (λ, s_H) with

P[s_H > r] = (|r−1|^{2H} − 3|r|^{2H} + 3|r+1|^{2H} − |r+2|^{2H}) / (4(1 − 2^{2H−2})), r = 1, 2, …,

so that the M/G/∞ input process (λ, s_H) is exactly second-order self-similar with Hurst parameter H.
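The displayed tail of s_H follows from Eqs. (9.19) and (9.20) applied to g = g_H; equivalently, it satisfies the identity P[s_H > r] = (g_H(r) − g_H(r+1))/(1 − g_H(1)), which is easy to verify numerically (function names ours):

```python
def g_H(h, H):
    """Exactly self-similar autocorrelation, Eq. (9.26)."""
    return 0.5 * (abs(h - 1) ** (2 * H) - 2 * abs(h) ** (2 * H)
                  + abs(h + 1) ** (2 * H))

def tail_s_H(r, H):
    """P[s_H > r] from the displayed formula, r = 1, 2, ..."""
    num = (abs(r - 1) ** (2 * H) - 3 * r ** (2 * H)
           + 3 * (r + 1) ** (2 * H) - (r + 2) ** (2 * H))
    return num / (4 * (1 - 2 ** (2 * H - 2)))
```

For H in (0.5, 1) this tail decays like r^{2H−3} = r^{−α} with α = 3 − 2H in (1, 2), consistent with Eq. (9.31) below and with H = (3 − α)/2.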
In applications, the notion of exact self-similarity is often too restrictive and is weakened as follows. The sequence {b*_t, t = 0, 1, …} is said to be asymptotically (second-order) self-similar if

lim_{m→∞} g^{(m)}(h) = g_H(h), h = 1, 2, ….  (9.30)

This will happen for the M/G/∞ input process (λ, s) if

P[s > r] ~ r^{−α} L(r), r → ∞,  (9.31)

with 1 < α < 2, for some slowly varying function L: R_+ → R_+, in which case H = (3 − α)/2.
9.6 GENERAL BUFFER ASYMPTOTICS
Several authors [10, 16, 19] have derived asymptotics such as (9.1) by means oflarge deviations estimates associated with the sequence ft 1 Sb
t ct; t 0; 1; g.These results (and their necessary extensions) are summarized below as they apply tothe present context
9.6.1 A General Setup
With a given R-valued sequence {x_{t+1}, t = 0, 1, …}, we associate the R-valued rv q_∞ given by

q_∞ = sup{S_t, t = 0, 1, …},  (9.32)

where

S_0 = 0, S_t = x_1 + ⋯ + x_t, t = 1, 2, ….  (9.33)

If the sequence {x_{t+1}, t = 0, 1, …} is assumed stationary and ergodic with E[x_1] < 0, then q_∞ is a.s. finite. We are interested in characterizing the asymptotic behavior of the tail probability P[q_∞ > b] for large b.
To fix the terminology, a scaling sequence is any monotone increasing R_+-valued sequence {v_t, t = 0, 1, …} such that lim_{t→∞} v_t = ∞. The sequence {t^{−1}S_t, t = 1, 2, …} is said to satisfy the large deviations principle under scaling v_t if there exists a lower-semicontinuous function I: R → [0, ∞] such that for every open set G,

lim inf_{t→∞} (1/v_t) ln P[t^{−1}S_t ∈ G] ≥ −inf_{x∈G} I(x),  (9.34)

and for every closed set F,

lim sup_{t→∞} (1/v_t) ln P[t^{−1}S_t ∈ F] ≤ −inf_{x∈F} I(x).  (9.35)
of the lower bound. As the best lower bound is the largest, we can immediately sharpen (9.40) into the lower bound (9.4) with γ_* given by (9.37). ∎

The existence of (9.34) (and for that matter, of (9.35)) is typically validated through the Gärtner–Ellis theorem [9, Theorem 2.3.6, p. 45]. In that context, for each θ in R, one assumes that the limit

Λ(θ) := lim_{t→∞} (1/v_t) ln E[e^{θ(v_t/t)S_t}]  (9.42)

exists (possibly as an extended real number). Under broad conditions, the process {t^{−1}S_t, t = 1, 2, …} then satisfies the large deviations principle under scaling v_t with good rate function Λ*: R → [0, ∞], where Λ* is the Legendre–Fenchel transform of the mapping Λ: R → [−∞, ∞] defined through Eq. (9.42), namely,

Λ*(z) = sup_{θ∈R} (θz − Λ(θ)), z ∈ R.  (9.43)
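In practice the transform (9.43) can be approximated by a grid search, which is often all one needs to obtain numerical decay rates (a sketch with our naming; a real implementation would exploit convexity of Λ):

```python
def legendre_fenchel(Lam, z, thetas):
    """Approximate Lambda*(z) = sup_theta (theta*z - Lambda(theta)),
    Eq. (9.43), over a finite grid of theta values."""
    return max(th * z - Lam(th) for th in thetas)
```

For Λ(θ) = θ²/2 the transform is Λ*(z) = z²/2, a convenient correctness check for the grid search.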
Expression (9.37) simplifies when the large deviations principle for the process {t^{−1}S_t, t = 1, 2, …} holds with a good rate function I: R → [0, ∞] which is convex. Indeed, the relation

inf_{x∈R} I(x) = I(E[x_1]) = 0  (9.44)

follows readily from the goodness of I and the fact that lim_{t→∞} t^{−1}S_t = E[x_1] < 0 a.s. under the ergodic assumption. However, by convexity, we have I increasing (resp. decreasing) on [E[x_1], ∞) (resp. on (−∞, E[x_1]]), and the conclusion inf_{x>y} I(x) = I(y) holds for all y > 0. The interior of the effective domain of I is an interval of the form (y_*, y*) with −∞ ≤ y_* ≤ E[x_1] ≤ y* ≤ ∞, and Eq. (9.37) becomes

γ_* = inf_{y>0} g(y)I(y) = inf_{0<y<y*} g(y)I(y).  (9.45)

The nondegeneracy condition y* > 0 holds in most applications.
9.6.3 An Upper Bound

In Duffield and O'Connell [10], the companion upper bound (9.5) was derived under a set of conditions that, unfortunately, do not cover some instances of the M/G/∞ process considered here. Upon refining the arguments of Duffield and O'Connell [10], we have established the following asymptotic upper bound; details are available in Section 9.14 and in Parulekar [27]. An alternative approach was given by Duffield [11], but more explicit expressions are given here for the upper bound.
Proposition 9.6.2 Assume the following conditions:

(i) For each θ in R, the limit (9.42) exists (possibly as an extended real number) with

inf_{θ>0} Λ(θ) ∈ R.  (9.46)

(ii) For some finite K ≥ 0, conditions (9.47)–(9.50) hold; in particular, they define the mappings a and b appearing below.

The case K = 0 is equivalent to Hypothesis 2.2(iv) in Duffield and O'Connell [10]. Moreover, the upper bound is trivial in cases where Λ*(0) ≤ K. As the least upper bound is the sharpest, under the assumptions of Proposition 9.6.2 we immediately get Eq. (9.5) with

γ* = sup_{y>0} min(a(y), b(y)).  (9.51)
9.7 EVALUATION OF Λ(θ), θ ∈ R
An important step in applying the results of the previous section consists in finding a scaling sequence {v_t, t = 0, 1, …} such that for each θ in R, the limit (9.42) exists (possibly as an extended real number). In Parulekar and Makowski [28–30], we show that the selection of this scaling is governed by the behavior of the sequence {v*_t, t = 0, 1, …} given by

v*_t = −ln P[ŝ > t], t = 0, 1, ….  (9.52)

This is done under the assumption that the limit

R := lim_{t→∞} v*_t/t  (9.53)

exists, with either R = ∞ (Case I), 0 < R < ∞ (Case II), or R = 0 (Case III).
To state the results more conveniently, we set

Λ_{b,t}(θ) := (1/v_t) ln E[e^{θ(v_t/t)S^b_t}], t = 1, 2, …,  (9.54)

so that if the limit

Λ_b(θ) = lim_{t→∞} Λ_{b,t}(θ)  (9.55)

exists (possibly infinite), so does (9.42) with

Λ(θ) = Λ_b(θ) − cθ,  (9.56)

and it suffices to concentrate on finding the limit (9.55). The main facts along these lines are developed in the next two theorems; proofs are available in Parulekar [27] and Parulekar and Makowski [30]. Cases I and II are covered first.
Theorem 9.7.1 Assume R > 0, possibly infinite, and take the linear scaling

v_t = t, t = 1, 2, ….  (9.57)

Then, for each θ in R, the limit Λ_b(θ) = lim_{t→∞} Λ_{b,t}(θ) exists and is given by

Λ_b(θ) = λ(E[e^{θs}] − 1),  (9.58)

with E[e^{θs}] finite (resp. infinite) if θ < R (resp. R < θ).
We now turn to Case III.

Theorem 9.7.2 Assume R = 0 with {v*_t/t, t = 1, 2, …} eventually monotone decreasing. Then, with the scaling

v_t = v*_t, t = 1, 2, …,  (9.59)

the limit Λ_b(θ) = lim_{t→∞} Λ_{b,t}(θ) exists and is given by (9.60), provided there exists a mapping G: N → N such that (i) G(t) < t for large t = 1, 2, …; (ii) lim_{t→∞} v*_{t−G(t)}/v*_t = 1; and (iii) lim_{t→∞} t/(G(t)v*_t) = 0.

The assumptions of Theorem 9.7.2 are satisfied in all cases known to the authors and are easy to check for broad classes of distributions. If v*_t = t^β (0 < β < 1), we can take G(t) = t^γ with 1 − β < γ < 1. If v*_t = (ln t)^β (β > 0), then the choice G(t) = t(ln t)^{−γ} with 0 < γ < β will do.
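These example choices are easy to sanity check numerically; the sketch below evaluates the limiting ratios appearing in conditions (ii) and (iii) as stated above, for v*_t = t^{1/2} and G(t) = t^{0.7} (helper names ours):

```python
def condition_ratios(vstar, G, t):
    """Return the quantities in conditions (ii) and (iii) of Theorem 9.7.2:
    r2 = v*_{t - G(t)} / v*_t   (should tend to 1)
    r3 = t / (G(t) * v*_t)      (should tend to 0)
    """
    g = G(t)
    assert g < t                  # condition (i)
    return vstar(t - g) / vstar(t), t / (g * vstar(t))
```

With v*_t = t^β and G(t) = t^γ one gets r3 = t^{1−γ−β}, so r3 tends to 0 exactly when γ > 1 − β, matching the range quoted above.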
9.8 EXPONENTIAL TAILS
By exponential tails we refer to the situation where

E[e^{θs}] < ∞ for some θ > 0.  (9.61)

It is easy to see [30, Section 6] that for each θ in R, the quantities E[e^{θs}] and E[e^{θŝ}] are simultaneously finite (resp. infinite). Moreover, with R > 0, possibly infinite, these quantities are finite (resp. infinite) whenever θ < R (resp. θ > R). Thus, under limit (9.53), exponential tails correspond to Cases I and II.
9.8.1 The Asymptotics Under Exponential Tails

When R > 0, Theorem 9.7.1 suggests v_t = t, so that h(b) = b, g(y) = y^{−1}, and K = 0. The Gärtner–Ellis theorem [9, Theorem 2.3.6, p. 45] then guarantees the full large deviations principle (under scaling t) for {t^{−1}(S^b_t − ct), t = 1, 2, …} with rate function Λ*. From Eq. (9.45) we readily conclude
Proposition 9.8.1 Assume R > 0. Then, the asymptotics

lim_{b→∞} (1/b) ln P[q^b_∞ > b] = −γ  (9.66)

holds with γ = γ_* = γ*.

Proposition 9.8.1 covers the situations where the tail of G decays at least exponentially fast, for example, the Rayleigh, Gamma, and geometric cases. This result is of course not new; it has been obtained earlier by several authors [10, 16, 19] and paves the way to the notion of effective bandwidth. However, the arguments leading to it, which are detailed in Parulekar [27], already show that the upper bound of Proposition 9.6.2 is good enough to recover this ``classical'' case.
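In the effective-bandwidth spirit, the decay rate γ of Eq. (9.66) can be computed as the largest θ with Λ(θ) ≤ 0. The sketch below assumes the compound-Poisson form Λ(θ) = λ(E[e^{θs}] − 1) − cθ, matching the instantaneous-input computation (9.71) below; treat this form as an assumption of the sketch, with Theorem 9.7.1 giving the precise limit:

```python
import math

def decay_rate(lam, c, pmf, theta_hi=5.0, iters=80):
    """Largest theta in [0, theta_hi] with Lambda(theta) <= 0, by bisection,
    where Lambda(theta) = lam*(E[exp(theta*s)] - 1) - c*theta (assumed form).
    Requires Lambda(theta_hi) > 0 and the stability condition lam*E[s] < c.
    """
    def Lam(th):
        return lam * (sum(p * math.exp(th * r) for r, p in pmf.items()) - 1.0) - c * th
    assert Lam(theta_hi) > 0
    lo, hi = 0.0, theta_hi
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if Lam(mid) <= 0:
            lo = mid          # still nonpositive: the root is to the right
        else:
            hi = mid
    return lo
```

For deterministic s = 1 with λ = 0.5 and c = 1, the positive root of 0.5(e^θ − 1) = θ is about 1.256, suggesting P[q^b_∞ > b] ≈ e^{−1.256 b} for large b.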
9.8.2 Comparison with Instantaneous Inputs
Another input process closely related to the M/G/∞ input process (λ, s) is the input process according to which the work associated with a session is offered instantaneously to the buffer, rather than gradually as was the case for M/G/∞ input processes.
Such an instantaneous input process, say, {a_{t+1}, t = 0, 1, …}, is composed of i.i.d. N-valued rvs given by

a_{t+1} := Σ_{i=1}^{β_{t+1}} s_{t+1,i}, t = 0, 1, …,  (9.67)

where the two families of i.i.d. rvs {β_{t+1}, t = 0, 1, …} and {s_{t+1,i}, t = 0, 1, …; i = 1, 2, …} are as in Section 9.2.1. Let a denote a generic rv of the i.i.d. sequence {a_{t+1}, t = 0, 1, …}. When offered to the multiplexer described by Eq. (9.11), the instantaneous arrival process generates a sequence of buffer contents {q^a_t, t = 0, 1, …}, now that of a D/GI/1 queue. Hence, the system will be stable if E[a] < c, a condition equivalent to (9.14), in which case q^a_t ⇒_t q^a_∞ for some N-valued rv q^a_∞. Here as well, we are interested in the tail probabilities P[q^a_∞ > b] for b large, and in comparing these asymptotics with those of P[q^b_∞ > b].
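Sampling the instantaneous input (9.67) is a two-step compound-Poisson draw; the sketch below (our naming, using Knuth's Poisson method) makes the construction explicit:

```python
import math
import random

def sample_instantaneous_input(lam, sample_service, rng):
    """Draw one a_{t+1} as in Eq. (9.67): a Poisson(lam) number of sessions,
    each contributing its whole service time at once."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:                  # Knuth's Poisson sampler
        k += 1
        p *= rng.random()
    n = k - 1
    return sum(sample_service() for _ in range(n))
```

By Wald's identity E[a] = λE[s]; with λ = 2 and s uniform on {1, 3} this gives E[a] = 4, which a Monte Carlo average reproduces.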
To apply Propositions 9.6.1 and 9.6.2, it is natural to consider the partial sum sequence {S^a_t, t = 0, 1, …} associated with the instantaneous input process {a_{t+1}, t = 0, 1, …}, namely,

S^a_0 = 0, S^a_t = a_1 + ⋯ + a_t, t = 1, 2, …,  (9.69)

so that

q^a_∞ =_st sup{S^a_t − ct, t = 0, 1, …}.  (9.70)

Under the enforced independence assumptions, for each t = 1, 2, … we have

E[e^{θS^a_t}] = Π_{u=1}^{t} E[(E[e^{θs}])^{β_u}] = e^{−λt(1 − E[e^{θs}])}, θ ∈ R,  (9.71)

so that