A study of hitting times for random walks on finite, undirected graphs


A STUDY OF HITTING TIMES FOR RANDOM WALKS ON FINITE, UNDIRECTED GRAPHS

by

ARI BINDER

STEVEN J. MILLER, ADVISOR

A thesis submitted in partial fulfillment

of the requirements for the Degree of Bachelor of Arts with Honors

in Mathematics

WILLIAMS COLLEGE

Williamstown, Massachusetts

April 27, 2011

Trang 4

ABSTRACT

This thesis applies algebraic graph theory to random walks. Using the concept of a graph's fundamental matrix and the method of spectral decomposition, we derive a formula that calculates expected hitting times for discrete-time random walks on finite, undirected, strongly connected graphs. We arrive at this formula independently of existing literature, and do so in a clearer and more explicit manner than previous works. Additionally, we apply primitive roots of unity to the calculation of expected hitting times for random walks on circulant graphs. The thesis ends by discussing the difficulty of generalizing these results to higher moments of hitting time distributions, and by using a different approach that makes use of the Catalan numbers to investigate hitting time probabilities for random walks on the integer number line.

Trang 5

I would like to thank Professor Steven J. Miller for being a truly exceptional advisor and mentor. His energetic support and sharp insights have contributed immeasurably both to this project and to my mathematical education over the past three years. I also thank the second reader, Professor Mihai Stoiciu, for providing helpful comments on earlier drafts.

I began this project in the summer of 2008 at the NSF Research Experience for Undergraduates at Canisius College; I thank Professor Terrence Bisson for his supervision and guidance throughout this program. I would also like to thank Professor Frank Morgan, who facilitated the continuation of my REU work by directing me to Professor Miller. Additionally, I thank Professor Thomas Garrity for lending me his copy of Hardy's Divergent Series. Finally, I would like to thank the Williams Mathematics Department for providing me with this research opportunity, and my friends and family, whose support and camaraderie helped me tremendously throughout this endeavor.


3.1 The Fundamental Matrix and its Role in Determining Expected Hitting Times

5 Using Catalan Numbers to Quantify Hitting Time Probabilities


1 INTRODUCTION

Random walk theory is a widely studied field of mathematics with many applications, some of which include household consumption [Ha], electrical networks [Pa-1], and congestion models [Ka]. Examining random walks on graphs in discrete time, we quantify the expected time it takes for these walks to move from one specified vertex of the graph to another, and attempt to determine the distribution of these so-called hitting times (see Definitions 1.9 below) for random walks on the integer number line.

We start by recording basic definitions from graph theory that are relevant to our study, and providing a basic example. The informed reader can skip this subsection, and the reader seeking a more rigorous introduction to algebraic graph theory should consult [Bi].

Definitions 1.1 A graph G = (V, E) is a vertex set V combined with an edge set E, where members of E connect members of V. We say G is an n-vertex graph if |V| = n. By convention we disallow multiple edges connecting the same two vertices and self-loops; that is, there is at most one element of E connecting i to j when i ≠ j, and no edges connecting i to j when i = j, where i, j ∈ V. Furthermore, we call G undirected when an edge connects i to j if and only if an edge connects j to i.

Unless otherwise specified, we assume G is undirected throughout this paper, and thus define an edge as a two-way path from i to j, where i, j ∈ V and i ≠ j.

Definition 1.2 We say that a graph G is strongly connected if for each vertex i, there exists at least one path from i to any other vertex j of G.

Definition 1.3 When an undirected edge connects i and j, we say that i and j are adjacent, or that i and j are neighbors. We denote adjacency as i ∼ j.

Definition 1.4 We call an n-vertex graph k-regular if there are exactly k edges leaving each vertex, where 1 ≤ k < n and n > 1.

Definition 1.5 The n-vertex graph G is vertex-transitive if its group of automorphisms acts transitively on its vertex set V. This simply means that all vertices look the same locally, i.e. we cannot uniquely identify any vertex based on the edges and vertices around it.

Definition 1.6 An n-vertex graph G is bipartite if we can divide V into two disjoint sets U and W such that no edge in E connects two vertices from U or two vertices from W. Equivalently, all edges connect a vertex in U to a vertex in W.


The undirected 6-cycle, two representations of which are given below in Figure 1, provides an easy way to visualize the above definitions, and is the example we will be using throughout the thesis.

FIGURE 1. Two representations of the undirected 6-cycle

From Definitions 1.2 and 1.4, we can immediately see that the undirected 6-cycle is strongly connected and 2-regular. Furthermore, Definition 1.5 tells us that the undirected 6-cycle is vertex-transitive, because the labeling of the vertices does not matter. Whatever labeling we choose, each vertex will still be connected to two other vertices by undirected edges, and thus labeling is the only way we can uniquely identify each vertex. In fact, if we assume undirectedness, no multiple edges, and no self-loops, then k-regularity is equivalent to vertex-transitivity. Finally, from Definition 1.6 and from the graph on the right in Figure 1, we can see that the undirected 6-cycle is bipartite, with disjoint vertex sets U = {1, 3, 5} and W = {2, 0, 4}.

Definition 1.7 A random walk on an n-vertex graph G is the following process:

• Choose a starting vertex i of G.

• Let A = {j ∈ V : i ∼ j}. Choose an element j of A uniformly at random.

• If not at the walk's stopping time (see Definition 2.4 below), move to j, and repeat the second step. Otherwise end at j.
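The three steps above can be sketched in a few lines of Python. This is our illustrative sketch, not part of the thesis; we use the undirected 6-cycle of Figure 1 and a fixed stopping time of 8 steps.

```python
import random

# Adjacency lists for the undirected 6-cycle on vertices 0..5 (Figure 1):
# each vertex is adjacent to its two cyclic neighbors.
neighbors = {v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}

rng = random.Random(0)
path = [0]                     # choose a starting vertex i
for _ in range(8):             # a fixed stopping time of 8 steps
    # choose an element of A = {j : i ~ j} uniformly at random and move there
    path.append(rng.choice(neighbors[path[-1]]))
print(path)
```

Each successive vertex in `path` is one of the two neighbors of its predecessor, exactly as the second step of the definition prescribes.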

Definition 1.8 Consider a random walk on an n-vertex graph G, where G is isomorphic to a bipartite graph. We refer to this phenomenon by saying either that the walk or G is periodic. If G is not isomorphic to a bipartite graph, then the walk and G are aperiodic. Thus when referring to G, we use "bipartite" and "periodic" interchangeably. This is a slight abuse of the terminology: processes are periodic, not graphs. However, we are comfortable describing a graph as periodic in this paper because we only discuss graphs in the context of random walks.

Note that our definition of a random walk is a discrete time definition. All of the results in this paper apply to discrete time random walks, but are convertible to continuous time, according to [Al-F1].

Definitions 1.9 The hitting time for a random walk from vertex i to a destination is the number of steps it takes the walk to reach the destination starting from i. Let T_j denote the first time the walk visits vertex j; then we denote the walk's expected hitting time as E_i[T_j], and the variance of the walk's hitting time distribution as Var_i[T_j].

Just as with any other expected value, we can intuitively define E_i[T_j] as a weighted average. Thus

E_i[T_j] = Σ_{n=0}^∞ n · P(walk first reaches j starting from i in n steps).

We can define Var_i[T_j] in the same probabilistic manner. Because of the infinite number of possible random walks we must take into account, calculating E_i[T_j] and Var_i[T_j] values from the probabilistic definition appears to be a very difficult task to carry out. In this thesis we develop a more tractable method for quantifying hitting time distributions. Our main result is the use of spectral decomposition of the transition matrix (see Definitions 2.2 below) to produce a natural formula for the calculation of expected hitting times on n-vertex, undirected, strongly connected graphs. The rest of the thesis is organized as follows: Section 2 constructs a theoretical conceptualization of hitting times for random walks on graphs, Section 3 uses spectral decomposition to develop a hitting time formula and applies primitive roots to the construction of spectra of transition matrices, Section 4 applies our methodology in sample calculations, Section 5 attempts to quantify hitting time distributions on the number line, and Section 6 concludes.
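The weighted-average definition can at least be estimated by direct simulation. The sketch below is ours, not the thesis's: it estimates E_0[T_3] on the undirected 6-cycle. A classical gambler's-ruin argument gives the exact value d(n − d) for vertices at distance d on the n-cycle, here 3 · 3 = 9, and the empirical mean comes out close to that.

```python
import random

neighbors = {v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}

def hitting_time(start, target, rng):
    """One sample of T_target for a walk of Definition 1.7 started at `start`."""
    v, steps = start, 0
    while v != target:
        v = rng.choice(neighbors[v])  # uniform choice among neighbors
        steps += 1
    return steps

rng = random.Random(1)
trials = 20000
est = sum(hitting_time(0, 3, rng) for _ in range(trials)) / trials
print(round(est, 2))  # empirical estimate of E_0[T_3]; the exact value is 9
```

Even this tiny example shows why a closed-form method is preferable: the estimate converges only at rate 1/√trials.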

2 A THEORETICAL FOUNDATION FOR RANDOM WALKS AND HITTING TIMES

In the following subsections, we construct a rigorous framework that both supports natural observations about random walks on graphs and allows us to move toward our goal of calculating expected hitting times for random walks on finite graphs. The reader is instructed to see the literature for proofs of any unproved assertions below.

2.1 Understanding Random Walks on Graphs in the Context of Markov Chains. We appeal to [No] for the following association of random walks on graphs to discrete time Markov chains. The following definitions and results are summaries of relevant results.


Consider a random variable X, and let ρ_i = P(X = i); then X has distribution ρ, and takes the value i with probability ρ_i. Let X_n refer to the state at time n; introducing time-dependence allows us to use matrices to conceptualize the process of changing states.

Definitions 2.2 We say that a matrix P = {p_ij : i, j ∈ I} is stochastic if every row is a distribution, which is to say that each entry is non-negative and each row sums to 1. Furthermore, P is a transition matrix if P is stochastic such that p_ij = P(X_{n+1} = j | X_n = i), where p_ij is independent of X_0, ..., X_{n−1}. We express P(X_n = j | X_0 = i) as p^(n)_ij.

Definition 2.3 (See Theorem [1.1.1] of [No]) A Markov chain is a set {X_n} such that X_0 has distribution ρ, and the distribution of X_{n+1} is given by the ith row of the transition matrix P for n ≥ 0, given that X_n = i.

Definition 2.4 A random variable S, where 0 ≤ S ≤ ∞, is called a stopping time if the events {S = n}, where n ≥ 0, depend only on X_1, ..., X_n.

These definitions allow us to introduce a property that is necessary for our conception of hitting times for random walks on graphs.

Theorem 2.5 (Norris's strong Markov property; see Theorem [1.4.2] of [No]) Let {X_n} be a Markov chain with initial distribution ρ and transition matrix P, and let S be a stopping time of {X_n}. Furthermore, let X_S = i. Then {X_{S+n}} is a Markov chain with initial distribution δ and transition matrix P, where δ_j = 1 if j = i, and 0 otherwise.

We now apply random walks on graphs to this Markov chain framework.

Proposition 2.6 Consider a random walk starting at vertex i of an n-vertex graph G. Let S be a stopping time such that X_S = i. Then the random walk, together with S, exhibits Norris's strong Markov property.


Proof. First of all, define X_t as the position of the walk at time t, t ≥ 0. Trivially, we see that X_0 = i. Let ρ = {ρ_j : 1 ≤ j ≤ n}, where ρ_j = 1 if j = i and 0 otherwise. Then X_0 has distribution ρ. Furthermore, define P as {p_ij : 1 ≤ i, j ≤ n}, where p_ij is the number of edges going from vertex i to j divided by the total number of edges leaving i. From the definition of a random walk given above, we see that p_ij = P(X_{t+1} = j | X_t = i). Furthermore, we calculate p_ij from the structure of the graph; the positions of the walk have no bearing on the transition probabilities. Thus p_ij is independent of X_0, ..., X_{t−1}. Then {X_t} is a Markov chain with initial distribution ρ and transition matrix P.

We now consider S. The event set {S = n} refers to the set of occurrences in which after n steps, the walk is again at vertex i, where n ≥ 0. Clearly these events depend only on the first n steps of the walk; hence S is a stopping time. Thus according to Theorem 2.5, {X_{S+t}} is a Markov chain with initial distribution δ = ρ and transition matrix P. □

This result is simply saying that the Markov process "begins afresh" after each stopping time is reached, and is intuitively clear. In the next subsection, we use this framework to develop a theoretical conception of hitting times for random walks on graphs.
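The transition matrix defined in this proof is easy to build mechanically from adjacency lists. The helper below is our sketch (exact rational arithmetic via the standard library), again using the 6-cycle.

```python
from fractions import Fraction

def transition_matrix(neighbors):
    """p_ij = (# edges from i to j) / (# edges leaving i), as in the proof of
    Proposition 2.6; here there is at most one edge between any two vertices."""
    n = len(neighbors)
    P = [[Fraction(0)] * n for _ in range(n)]
    for i, nbrs in neighbors.items():
        for j in nbrs:
            P[i][j] = Fraction(1, len(nbrs))
    return P

# The undirected 6-cycle on vertices 0..5.
neighbors = {v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}
P = transition_matrix(neighbors)

# P is stochastic in the sense of Definitions 2.2: every row is a distribution.
assert all(sum(row) == 1 and min(row) >= 0 for row in P)
```

For the 6-cycle each row has exactly two entries equal to 1/2, in the columns of the two neighboring vertices.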

2.2 Interpreting Hitting Times as a Renewal-Reward Process. In accordance with [Al-F1], we use renewal-reward theory as a way to think about hitting times for random walks on graphs. Doing so is a prerequisite for all the results we obtain regarding hitting times. We appeal to [Co] for the following outline of a renewal-reward process.

Definitions 2.7 Let S_1, S_2, S_3, S_4, ... be a sequence of positive i.i.d.r.v.s such that 0 < E[S_i] < ∞. Denote J_n = Σ_{i=1}^n S_i as the nth jump time and [J_{n−1}, J_n] as the nth renewal interval. Then the random variable (R_t)_{t≥0} such that R_t = sup{n : J_n ≤ t} is called a renewal process. R_t refers to the number of renewals that have occurred by time t. Now let W_1, W_2, W_3, W_4, ... be a sequence of i.i.d.r.v.s such that −∞ < E[W_i] < ∞. Then Y_t = Σ_{i=1}^{R_t} W_i is a renewal-reward process.

We can see a natural relation between Definitions 2.7 and Markov chains, which we prove as the following proposition.

Proposition 2.8 Consider a random walk on an n-vertex graph G starting from vertex i. Consider a sequence of stopping times S_1, S_2, S_3, ... such that X_{S_n} = i for all n ≥ 1. Let Y_t refer to the total number of visits to an arbitrary vertex j that have occurred by the most recent stopping time. Then Y_t is a renewal-reward process.

Proof. Let S_n refer to the number of steps the walk takes between the (n − 1)st and nth stopping times, so that X_{S_1 + S_2 + ··· + S_{n−1}} = X_{S_1 + S_2 + ··· + S_n} = i. Note that by Proposition 2.6, the random walk on G together with the stopping time S_n satisfies Theorem 2.5, which is to say that the portion of the walk between the (n − 1)st and nth stopping times is independent of all previous portions of the walk. Furthermore, the starting and ending positions of each portion of the walk between stopping times are the same, and are governed by the same transition matrix P, also in accordance with the strong Markov property. Thus S_1, S_2, S_3, S_4, ... is a sequence of i.i.d.r.v.s such that 0 < E[S_i] < ∞.

Now, let J_n = Σ_{i=1}^n S_i refer to the number of steps the walk has taken by the time the nth stopping time occurs. Furthermore let R_t = sup{n : J_n ≤ t} be the number of stopping times that have been reached by time t. Then according to Definitions 2.7, R_t is a renewal process. Now let V_n be the number of visits to vertex j between the (n − 1)st and nth stopping times. Once again, because the portion of the walk between the (n − 1)st and nth stopping times is independent of all previous portions of the walk; for all n, the portions of the walk between the (n − 1)st and nth stopping times share the same start and endpoints; and the entire walk is governed by transition matrix P, we conclude that V_1, V_2, V_3, V_4, ... is a sequence of i.i.d.r.v.s such that −∞ < E[V_i] < ∞. Therefore Y_t = Σ_{i=1}^{R_t} V_i is a renewal-reward process. □

Theorem 2.9 (The Asymptotic Property of Renewal-Reward Processes) Consider a renewal-reward process Y_t = Σ_{j=1}^{R_t} W_j as described in Definitions 2.7. Assume the process begins in state i. Then lim_{t→∞} Y_t / t = E_i[W_1] / E_i[S_1].

Definitions 2.10 (Irreducibility of the Transition Matrix) If an n-vertex graph G is strongly connected, then we say its transition matrix P is irreducible. Irreducibility implies the existence of a unique probability distribution π on the n vertices of G such that

π_j = Σ_{i=1}^n π_i p_ij for 1 ≤ j ≤ n.

We refer to π as the graph's stable distribution.

Theorem 2.11 (The Ergodic Theorem; see Theorem 1 of [Al-F1]) Let N_j(t) be the number of visits to vertex j during times 1, 2, ..., t. Then for any initial distribution, t^{−1} N_j(t) → π_j as t → ∞, where π = (π_1, ..., π_j, ..., π_n) is the stable distribution.


With these insights, we arrive at a crucial proposition.

Proposition 2.12 (See Proposition 3 of [Al-F1]) Consider a random walk starting at vertex i on an n-vertex, strongly connected graph G. Furthermore, let S be a stopping time such that X_S = i and 0 < E_i[S] < ∞. Then E_i[number of visits to j before time S] = π_j E_i[S] for each vertex j.

Proof. We bring the previous results to bear. Consider a sequence of stopping times S_1, S_2, S_3, ..., and assume that these stopping times, together with a stopping time S, are all governed by the same distribution. Let Y_t refer to the total number of visits to an arbitrary vertex j that have occurred by the most recent stopping time. By Proposition 2.8, Y_t is a renewal-reward process. Then according to the asymptotic property of renewal-reward processes, lim_{t→∞} Y_t / t = E_i[W] / E_i[S]. By the Ergodic Theorem,

lim_{t→∞} N_j(t) / t = π_j,

where N_j(t) is the total number of visits to vertex j by the tth step of the walk. Note that N_j(t) ≠ Y_t; rather, Y_t = N_j(S_most recent). However, relative to t, N_j(t) − Y_t becomes arbitrarily small as t → ∞. Hence the two limits are equal, and thus we have

E_i[W] / E_i[S] = π_j,

or E_i[W] = π_j E_i[S]. Noting that W refers to the number of visits to j before stopping time S, we arrive at our result. □

2.3 Two Basic Results. Proposition 2.12 is very useful because as long as S is a stopping time such that X_S = i, we can choose S to be whatever we want. By smart choices of S and clever manipulation, [Al-F1] uses this proposition to prove many useful properties, some of them regarding expected hitting times. We prove one of these properties below.


Result 2.13 (See Lemma 5 of [Al-F1]) Consider a random walk starting from vertex i on an n-vertex, strongly connected graph G. Define T_i^+ as the first return time to i. Then

E_i[T_i^+] = 1 / π_i.

Proof. Substituting i for j and T_i^+ for S in Proposition 2.12, we get

E_i[number of visits to i before time T_i^+] = π_i E_i[T_i^+].

We follow [Al-F1]'s convention, and include time 0 and exclude time t in accounting for the vertices a t-step walk visits. Note that we visit vertex i exactly once by the first return time: the visit occurs exactly at time 0, since we exclude time T_i^+. Substituting 1 in for E_i[number of visits to i before time T_i^+], we arrive at our result. □

Applying Result 2.13 to k-regular graphs, we can quantify specific hitting times, as shown below.

Proposition 2.14 Consider an n-vertex, k-regular, undirected, strongly connected graph G. Let π represent the stable distribution on G's vertices. Then π is the uniform distribution.

Proof. First of all, because G is strongly connected, the transition matrix P is irreducible, which implies the existence of a stable distribution π. To show π is uniform, note that since G is k-regular, k edges leave each vertex. Furthermore, because G is undirected, we know that the number of edges leaving vertex i is equal to the number of edges leading to i for each vertex i of G. Thus P(X_{t+1} = j | X_t = i) = P(X_{t+1} = i | X_t = j). This implies that p_ij = p_ji for 1 ≤ i, j ≤ n, which means that P is symmetric. Furthermore, if i is connected to j, then p_ij = p_ji = 1/k, and p_ij = p_ji = 0 otherwise. Because each vertex is connected to k vertices, we see that each row of P, and hence each column as well by symmetry, contains k non-zero terms, each equalling 1/k. Now, let π_i = 1/n for 1 ≤ i ≤ n. We see that

π_j = 1/n = (1/n) Σ_{i=1}^n p_ij = Σ_{i=1}^n π_i p_ij,

which is the irreducibility condition. Thus the stable distribution is uniform, and is unique. □

From Proposition 2.14 and Result 2.13, we can immediately see that the expected first return time for undirected, k-regular, strongly connected graphs is simply equal to n, the number of vertices on the graph:

E_i[T_i^+] = 1 / π_i = n. (2.14a)

From (2.14a) we can infer another result regarding expected hitting times for random walks on this class of graphs.
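Both facts are easy to check numerically on the 6-cycle (n = 6, k = 2). The sketch below is ours rather than the thesis's: it first verifies the stationarity condition of Definitions 2.10 for the uniform distribution exactly, then estimates the expected first return time by simulation; Result 2.13 with π_0 = 1/n predicts the value n = 6.

```python
import random
from fractions import Fraction

n = 6
neighbors = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
P = [[Fraction(1, 2) if j in neighbors[i] else Fraction(0) for j in range(n)]
     for i in range(n)]

# Stationarity (Definitions 2.10): pi_j = sum_i pi_i p_ij for uniform pi.
pi = [Fraction(1, n)] * n
assert all(sum(pi[i] * P[i][j] for i in range(n)) == pi[j] for j in range(n))

def return_time(rng):
    """One sample of the first return time T_0^+ to vertex 0."""
    v, steps = rng.choice(neighbors[0]), 1   # take the first step away
    while v != 0:
        v = rng.choice(neighbors[v])
        steps += 1
    return steps

rng = random.Random(2)
trials = 20000
est = sum(return_time(rng) for _ in range(trials)) / trials
print(round(est, 2))  # empirical mean return time; 1/pi_0 = n = 6 exactly
```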

Result 2.15 Assume vertices i and j are adjacent. Consider a random walk starting at vertex j on an n-vertex, k-regular, undirected graph G. Then E_j[T_i] = n − 1.

Proof. Consider a random walk on G starting at vertex i. After one step, the walk is at j with probability 1/k for all j such that j ∼ i. Thus we see the following:

E_i[T_i^+] = n = 1 + (1/k) Σ_{j∼i} E_j[T_i].

Because each neighbor j of i contributes the same value E_j[T_i] to this sum, we get n = 1 + E_j[T_i], and the result follows. □
We employ spectral decomposition of G's transition matrix to achieve such a formula. This application of the spectral decomposition of P to hitting times is well-known; see [Al, Al-F2, Br, Ta], for instance. However, this literature assumes the background we have just provided, it often gives results in continuous time, and it does not explicitly state a formula for determining E_i[T_j] values. Thus while the following methodology is by no means novel, by fully deriving and stating the formula that it permits, and by remaining in discrete time, we present hitting time results in what we believe to be a clearer, more straightforward, and more intuitive manner than the existing literature.


3 A METHODOLOGY FOR CALCULATING EXPECTED HITTING TIMES

3.1 The Fundamental Matrix and its Role in Determining Expected Hitting Times. With Proposition 2.12 and a clever choice of S, [Al-F1] defines the fundamental matrix, from which we can calculate expected hitting times for random walks on any graph with stable distribution π. We paraphrase their ingenuity in this subsection.

Definition 3.1 Consider a graph G with irreducible transition matrix P and stable distribution π. The graph's fundamental matrix Z is defined as the matrix such that

Z_ij = Σ_{t=0}^∞ (p^(t)_ij − π_j),

where the sum is interpreted as a Cesàro sum whenever the series does not converge.

Definition 3.2 Consider a random walk on an n-vertex graph G in which for each vertex j, P(walk starts at j) = π_j. Then we refer to the expected number of steps it takes the walk to reach vertex i as E_π[T_i].

Theorem 3.3 (The Convergence Theorem; See Theorem 2 of [Al-F1]) Consider a Markov chain {X_n}. Then P(X_t = j) → π_j as t → ∞ for all states j, provided the chain is aperiodic.

These definitions set us up to prove our first general result about expected hitting times.

Proposition 3.4 (See Lemma 11 of [Al-F1]) For an n-vertex graph G with fundamental matrix Z and stable distribution π,

E_π[T_i] = Z_ii / π_i.

Proof. We get at the notion of Z in the following manner. Consider a random walk on G starting at vertex i. Let t_0 ≥ 1 be some arbitrary stopping time and define S as the time taken by the following process:

• wait time t_0,

• and then wait, if necessary, until the walk next visits vertex i.


Note that after time t_0, the walk has probability distribution δ, where P(X_{t_0} = j) = δ_j for 1 ≤ j ≤ n. Hence E[S] = t_0 + E_δ[T_i]. Furthermore, note that the number of visits to i by time t_0 is geometrically distributed, because we can treat X_n = i as a Bernoulli Trial for all n from 1 to t_0 − 1. Also, it is easy to see that the walk makes no visits to i after time t_0 − 1 (remember that we do not include time S in our accounting). Therefore we can express E_i[N_i(t_0)] as the sum of the i to i transition probabilities:

E_i[N_i(t_0)] = Σ_{n=0}^{t_0−1} p^(n)_ii,

because the walk starts at i.

Invoking Proposition 2.12 and setting j = i, we achieve the following expression:

Σ_{n=0}^{t_0−1} (p^(n)_ii − π_i) = π_i E_δ[T_i].

Now let G be periodic. Then by definition X_{t_0} oscillates between disjoint vertex subsets U and W as t_0 → ∞. Without loss of generality assume that when n is odd, P(X_n ∈ U) = 0 and when n is even, P(X_n ∈ W) = 0. Thus i ∈ U, meaning that for odd n, p^(n)_ii − π_i = −π_i, a constant. Hence the infinite series does not converge. However, if we scale each partial sum by 1/n, these terms become arbitrarily small as n → ∞. Denoting the mth partial sum of the series as s_m, we see that lim_{t_0→∞} (1/t_0) Σ_{m=0}^{t_0−1} s_m exists. Thus the series Σ_{n=0}^{t_0−1} (p^(n)_ii − π_i) is Cesàro summable (see [Har], for instance), and by definition of Z, the Cesàro sum is equal to Z_ii. Considering the right hand side, note that by the convergence theorem, for odd n, as n → ∞, the distribution of X_n converges to a stable distribution on the elements of W; we call this distribution π_W. Furthermore, for even n, as n → ∞, the distribution of X_n converges to a stable distribution on the elements of U; call this distribution π_U. Define δ_n as the distribution of X_n on the vertices of G; then the odd terms of the sequence {δ_n} approach π_W and the even terms approach π_U. Consider

ρ_n = (1/n) Σ_{i=0}^n δ_i;

then for large n,

ρ_n ≈ (1/2)(π_W + π_U).

Hence the Cesàro average of this sequence exists.

Now, consider a random walk on G that starts from vertex i, where i ∈ U with probability 1/2, and i ∈ W with probability 1/2. Then from this starting distribution δ_0, {δ_n} converges to π = (1/2)(π_W + π_U), as indicated above. Using Cesàro summation as above, we again see that

Σ_{n=0}^{t_0−1} (p̄^(n)_ii − π_i) → Z_ii as t_0 → ∞,

where p̄_ii refers to the average of the two i to i transition probabilities. Therefore Z_ii = π_i E_π[T_i], and dividing by π_i gives the result. □

Proposition 3.4 allows us to calculate expected hitting times for a random walk to a specific vertex when the starting vertex is determined by the stable distribution. We seek to extend this result so that we can determine expected hitting times for explicit walks from vertex i to vertex j. We use the following lemma to do so.

Lemma 3.5 (See Lemma 7 of [Al-F1]) Consider a random walk on a graph G with stable distribution π. Then, for j ≠ i,

E_j[number of visits to j before time T_i] = π_j (E_j[T_i] + E_i[T_j]).

Proof. Assume the walk starts at j. Define S as "the first return to j after the first visit to i." Then we have E_j[S] = E_j[T_i] + E_i[T_j]. Because there are no visits to i before T_i and no visits to j after T_i by our accounting convention, substitution into Proposition 2.12 yields

E_j[number of visits to j before time T_i] = π_j E_j[S] = π_j (E_j[T_i] + E_i[T_j]). □

We are now ready to associate Z with explicit hitting times, and do so below.

Proposition 3.6 (See Lemma 12 of [Al-F1]) For an n-vertex graph G with fundamental matrix Z and stable distribution π,

E_i[T_j] = (Z_jj − Z_ij) / π_j.


Proof. We proceed in the same manner as above. Consider a random walk on G starting at vertex j. Define S as the time taken by the following process:

• wait until the walk first reaches vertex i,

• then wait a further time t_0,

• and finally wait, if necessary, until the walk returns to j.

Define δ as the probability distribution such that δ_k = P(X_{T_i + t_0} = k). Then E_j[S] = E_j[T_i] + t_0 + E_δ[T_j]. Furthermore, note that between times T_i and T_i + t_0, the expected number of visits to j is Σ_{n=0}^{t_0−1} p^(n)_ij. Applying Proposition 2.12 with stopping time S therefore yields

π_j E_j[S] = E_j[number of visits to j before time T_i] + Σ_{n=0}^{t_0−1} p^(n)_ij.

Substituting Lemma 3.5 on the right-hand side, letting t_0 → ∞, and arguing as in the proof of Proposition 3.4, we obtain E_i[T_j] = (Z_jj − Z_ij) / π_j. □

As Aldous and Fill observe in [Al-F1], because hitting time results are transferable to continuous time, where the periodicity issue does not arise, it is easier to switch to continuous time to evaluate random walks that are periodic in the discrete case. However, we believe that remaining in discrete time is fruitful because it gives us a more tangible sense of the oscillatory nature of discrete time random walks on bipartite graphs.


3.2 Making Sense of the Fundamental Matrix. As we can see from the above section and ultimately from Proposition 3.6, the fundamental matrix is a powerful tool for our purposes, so long as we can determine it. However, for most graphs it is essentially impossible to calculate the sum of this infinite series, just as it seems difficult to calculate an expected hitting time from its probabilistic definition. We therefore seek to equate this definition of Z with a more useable one. To do so, we make use of the following lemma. While the results of the next two sections are known, we arrive at them independently of the existing literature.

Lemma 3.7 Let graph G have irreducible transition matrix P. Define P∞ as the n by n matrix such that P∞_ij = π_j for all i. Then P P∞ = P∞ = P∞ P. Furthermore, P∞ P∞ = P∞.

Proof. Define J_n as the n-dimensional column vector of all ones, so that P∞ = J_n π with π viewed as a row vector. Now, by stochasticity of P, P J_n = J_n, so P P∞ = J_n π = P∞. Similarly, P∞ P = J_n π P. Since P is irreducible, π P = π. So P∞ P = J_n π = P∞. To show P∞ P∞ = P∞, we write the following:

P∞ P∞ = J_n π J_n π = J_n (π J_n) π = J_n π = P∞,

because π J_n = Σ_{i=1}^n π_i = 1. □

This lemma allows us to express Z in terms of matrices we can easily calculate.
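Lemma 3.7 is easy to confirm with exact arithmetic. The following sketch is ours (the 6-cycle with uniform π is assumed); it multiplies the matrices out over the rationals.

```python
from fractions import Fraction

n = 6
# Transition matrix of the 6-cycle, and P_inf: every row equal to pi = 1/n.
P = [[Fraction(1, 2) if (i - j) % n in (1, n - 1) else Fraction(0)
      for j in range(n)] for i in range(n)]
Pinf = [[Fraction(1, n)] * n for _ in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Lemma 3.7: P Pinf = Pinf = Pinf P, and Pinf Pinf = Pinf.
assert matmul(P, Pinf) == Pinf
assert matmul(Pinf, P) == Pinf
assert matmul(Pinf, Pinf) == Pinf
print("Lemma 3.7 identities verified on the 6-cycle")
```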

Proposition 3.8 Consider an n-vertex graph G with irreducible transition matrix P and fundamental matrix Z. Then

Z = (I − (P − P∞))^{−1} − P∞.

Proof. Note first that Σ_{i=0}^t (t choose i) (−1)^i = (1 − 1)^t = 0 for t ≥ 1.


This means that for t ≥ 1, if we negate every other term of the expansion of (P − P∞)^t, the coefficients of the terms will sum to 0. Since P and P∞ commute, this yields, for t ≥ 1,

(P − P∞)^t = Σ_{i=0}^t (t choose i) (−1)^i P^{t−i} P∞^i = P^t + (Σ_{i=1}^t (t choose i) (−1)^i) P∞ = P^t − P∞

by the binomial theorem and Lemma 3.7. We get from the second expression to the third by knowing that the coefficients of the terms sum to 0, and the first term, P^t, has coefficient 1. So we now have

Σ_{t=0}^∞ (P − P∞)^t = I + Σ_{t=1}^∞ (P^t − P∞) = P∞ + Σ_{t=0}^∞ (P^t − P∞) = P∞ + Z,

provided the series converge. Now λ^P_1 = λ^{P∞}_1 = 1, and λ^{P−P∞}_1 = 1 − 1 = 0. Furthermore, because λ^{P∞}_i = 0 for 2 ≤ i ≤ n and, in the aperiodic case, |λ^P_i| < 1 for 2 ≤ i ≤ n, we conclude that the eigenvalues of A = P − P∞ are strictly less than 1 in absolute value. Therefore, from linear algebra, the series converges to a sum satisfying

(I − A)(I + A + A² + ···) = I.

Because A has eigenvalues all less than 1 in absolute value, (I − A) has all nonzero eigenvalues, and hence is invertible. Thus the sum of the series is (I − A)^{−1} = (I − (P − P∞))^{−1}, and therefore Z = (I − (P − P∞))^{−1} − P∞ in the aperiodic case.


If G is periodic, the series Σ_{t=0}^∞ (P − P∞)^t oscillates in the limit, which we would expect based on the proof of Proposition 3.4 for the periodic case. We appeal to Abel summation (see [Har]) to evaluate this sum. To do so, let 0 < α < 1. Write

Σ_{t=0}^∞ (P − P∞)^t α^t = (I − α(P − P∞))^{−1},

so that, as above, Σ_{t=0}^∞ (P^t − P∞) α^t converges to (I − α(P − P∞))^{−1} − P∞. Making use of this equality and noting that the Abel sum of Σ_{t=0}^∞ (P − P∞)^t is simply the limit of Σ_{t=0}^∞ (P^t − P∞) α^t as α approaches 1 from below, we find that the Abel sum is (I − (P − P∞))^{−1} − P∞.

Note that from the proof of Proposition 3.6, the series Σ_{t=0}^∞ (p^(t)_ij − P∞_ij) is Cesàro summable for each i and j, with Cesàro sum equal to Z_ij. Hence Σ_{t=0}^∞ (P^t − P∞) is Cesàro summable, with Cesàro sum equal to Z. Because the Abel sum exists and equals the Cesàro sum whenever the latter exists, we see that the Abel sum of Σ_{t=0}^∞ (P^t − P∞) is equal to Z, and therefore Z = (I − (P − P∞))^{−1} − P∞ in the periodic case as well. □
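Propositions 3.8 and 3.6 together make Z, and hence every expected hitting time, computable from P alone. The sketch below is ours, with a small Gauss-Jordan helper over the rationals; it carries the computation out for the 6-cycle, where the answers can be compared against the classical cycle formula E_i[T_j] = d(n − d) at distance d.

```python
from fractions import Fraction

n = 6
P = [[Fraction(1, 2) if (i - j) % n in (1, n - 1) else Fraction(0)
      for j in range(n)] for i in range(n)]
Pinf = [[Fraction(1, n)] * n for _ in range(n)]

# M = I - (P - Pinf); Proposition 3.8 gives Z = M^{-1} - Pinf.
M = [[(1 if i == j else 0) - P[i][j] + Pinf[i][j] for j in range(n)]
     for i in range(n)]

def inverse(A):
    """Exact Gauss-Jordan inversion over the rationals."""
    A = [row[:] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)  # pivot row
        A[c], A[p] = A[p], A[c]
        A[c] = [x / A[c][c] for x in A[c]]                # normalize pivot row
        for r in range(n):
            if r != c and A[r][c] != 0:                   # eliminate column c
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

Z = [[inv - Pinf[i][j] for j, inv in enumerate(row)]
     for i, row in enumerate(inverse(M))]

# Proposition 3.6: E_i[T_j] = (Z_jj - Z_ij) / pi_j, and pi_j = 1/n here.
E = [[(Z[j][j] - Z[i][j]) * n for j in range(n)] for i in range(n)]
print([int(x) for x in E[0]])  # hitting times from vertex 0
```

The printed row should come out to [0, 5, 8, 9, 8, 5], matching d(6 − d) for distances d = 0, 1, 2, 3, 2, 1; note that the formula succeeds even though the 6-cycle is periodic.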


3.3 Toward an Explicit Hitting Time Formula. Using Proposition 3.8 and Proposition 3.6, we can quantify hitting times on any finite, strongly connected graph. However, the fundamental matrix is an abstract concept that is hard to visualize. It is hard to tell exactly where the actual values for the hitting times are coming from. Thus it would be nice if we could determine hitting times straight from the transition matrix, because we obtain the transition matrix directly from the graph itself. We work toward quantifying first E_π[T_j] values and then E_i[T_j] values using only the transition matrix and its spectrum in this subsection. Doing so requires associating the spectrum of the fundamental matrix with that of the transition matrix, which we accomplish with the following lemma.

Lemma 3.9 For an n-vertex graph G with irreducible transition matrix P and fundamental matrix Z, λ^Z_1 = 0, and for 2 ≤ m ≤ n,

λ^Z_m = (1 − λ^P_m)^{−1},

where λ^P_m refers to the mth largest eigenvalue of P.

Proof. From Proposition 3.8, we write Z = (I − (P − P∞))^{−1} − P∞. From linear algebra, we know that if two matrices commute with each other and are symmetric, then they are simultaneously diagonalizable. To show Z is symmetric, note that P is symmetric by undirectedness of G, P∞ is symmetric by uniformity of π, and I is symmetric by definition. Because differences of symmetric matrices are symmetric and inverses of symmetric matrices are symmetric, Z = (I − (P − P∞))^{−1} − P∞ is symmetric.

We now must show that P and Z commute. By definition, P commutes with I and with itself, and by Lemma 3.7, P commutes with P∞. Thus we have

P Z = P ((I − (P − P∞))^{−1} − P∞)  from Proposition 3.8
    = P (I − (P − P∞))^{−1} − P P∞  by distributivity of matrix multiplication
    = P (I − (P − P∞))^{−1} − P∞ P  by Lemma 3.7
    = (I − (P − P∞))^{−1} P − P∞ P  because P commutes with I, P∞, and itself
    = ((I − (P − P∞))^{−1} − P∞) P  by distributivity of matrix multiplication
    = Z P.

Thus P and Z are simultaneously diagonalizable, so every eigenvector of P is an eigenvector of Z. For the eigenvector J_n of P, with eigenvalue 1, we have Z J_n = (I − (P − P∞))^{−1} J_n − P∞ J_n = J_n − J_n = 0, so λ^Z_1 = 0. For an eigenvector v_m of P with eigenvalue λ^P_m, 2 ≤ m ≤ n, we have P∞ v_m = 0, so (I − (P − P∞)) v_m = (1 − λ^P_m) v_m, and hence Z v_m = (1 − λ^P_m)^{−1} v_m. □

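The eigenvalue relation can be checked without inverting anything: if P v = λv with λ ≠ 1, then P∞ v = 0 (v sums to zero), so (I − (P − P∞)) v = (1 − λ)v, and applying the inverse gives Z v = (1 − λ)^{−1} v. The sketch below (ours) verifies this on the 6-cycle for the eigenvalue λ = −1 coming from bipartiteness.

```python
from fractions import Fraction

n = 6
P = [[Fraction(1, 2) if (i - j) % n in (1, n - 1) else Fraction(0)
      for j in range(n)] for i in range(n)]
Pinf = [[Fraction(1, n)] * n for _ in range(n)]

def apply(A, v):
    return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]

# The alternating vector is a P-eigenvector for lambda = -1 (bipartiteness).
v = [Fraction((-1) ** j) for j in range(n)]
lam = Fraction(-1)
assert apply(P, v) == [lam * x for x in v]
assert apply(Pinf, v) == [Fraction(0)] * n    # v is orthogonal to pi

# Hence (I - (P - Pinf)) v = (1 - lam) v = 2v, so Z v = v / 2,
# i.e. lambda^Z = (1 - lambda^P)^{-1} = 1/2, as Lemma 3.9 predicts.
M = [[(1 if i == j else 0) - P[i][j] + Pinf[i][j] for j in range(n)]
     for i in range(n)]
assert apply(M, v) == [(1 - lam) * x for x in v]
print("Lemma 3.9 eigenvalue relation verified for lambda = -1")
```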

Having related the eigenvalues of Z to those of P, we are in position to explicitly quantify expected hitting times. Recall that E_π[T_j] refers to the expected number of steps it takes a random walk to reach vertex j in which the starting vertex is determined by the stationary distribution. Thus it makes intuitive sense to think of these values as an average of hitting times with an explicit starting vertex. We prove this a priori intuition below.

Proposition 3.10 For a random walk on an n-vertex graph G with irreducible transition matrix P and uniform stable distribution π, we have

E_π[T_j] = (1/n) Σ_{i=1}^n E_i[T_j].
