The basics of stochastic population dynamics
In this and the next chapter, we turn to questions that require the use of all of our tools: differential equations, probability, computation, and a good deal of hard thinking about the biological implications of the analysis. Do not be dissuaded: the material is accessible. However, accessing this material requires new kinds of thinking, because funny things happen when we enter the realm of dynamical systems with random components. These are generally called stochastic processes. Time can be measured either discretely or continuously, and the state of the system can be measured either continuously or discretely. We will encounter all combinations, but will mainly focus on continuous time models.

Much of the groundwork for what we will do was laid by physicists in the twentieth century and adopted in part or wholly by biologists as we moved into the twenty-first century (see, for example, May (1974), Ludwig (1975), Voronka and Keller (1975), Costantino and Desharnais (1991), Lande et al. (2003)). Thus, as you read the text you may begin to think that I have physics envy; I don't, but I do believe that we should acknowledge the source of great ideas. Both in the text and in Connections, I will point towards biological applications, and the next chapter is all about them.
Thinking along sample paths
To begin, we need to learn to think about dynamic biological systems in a different way. The reason is this: when the dynamics are stochastic, even the simplest dynamics can have more than one possible outcome. (This has profound ‘‘real world’’ applications. For example, it means that in a management context, we might do everything right and still not succeed in the goal.)
To illustrate this point, let us reconsider exponential population
growth in discrete time:
X(t + 1) = (1 + λ)X(t)   (7.1)

which we know has the solution X(t) = (1 + λ)^t X(0). Now suppose that
we wanted to make these dynamics stochastic One possibility would be
to assume that at each time the new population size is determined by the
deterministic component given in Eq (7.1) and a random, stochastic
term Z(t) representing elements of the population that come from
‘‘somewhere else.’’ Instead of Eq (7.1), we would write
X(t + 1) = (1 + λ)X(t) + Z(t)   (7.2)
In order to iterate this equation forward in time, we need assumptions about the properties of Z(t). One assumption is that Z(t), the process uncertainty, is normally distributed with mean 0 and variance σ². In that case, there are an infinite number of possibilities for the sequence {Z(0), Z(1), Z(2), ...}, and in order to understand the dynamics we should investigate the properties of a variety of the trajectories, or sample paths, that this equation generates. In Figure 7.1, I show ten such trajectories and the deterministic trajectory.
Note that in this particular case, the deterministic trajectory is predicted to be the same as the average of the stochastic trajectories. If we take the expectation of Eq. (7.2), we have
E{X(t + 1)} = E{(1 + λ)X(t)} + E{Z(t)} = (1 + λ)E{X(t)}   (7.3)
which is the same as Eq. (7.1), so that the deterministic dynamics characterize what the population does ‘‘on average.’’ This identification of the average of the stochastic trajectories with the deterministic trajectory only holds, however, because the underlying dynamics are linear. Were they nonlinear, so that instead of (1 + λ)X(t) we had a term g(X(t)) on the right hand side of Eq. (7.2), then the averaging as in Eq. (7.3) would not work, since in general E{g(X)} ≠ g(E{X}).

The deterministic trajectory shown in Figure 7.1 accords with our experience with exponential growth. Since the growth parameter is small, the trajectory grows exponentially in time, but at a slow rate. How about the stochastic trajectories? Well, some of them are close to the deterministic one, but others deviate considerably from it, in both directions. Note that the largest value of X(t) in the simulated trajectories is about 23 and that the smallest value is about −10. If this were a model of a population, for example, we might say that the population is extinct if it falls below zero, in which case one of the ten trajectories leads to extinction. Note that the trajectories are just a little bit bumpy, because of the relatively small value of the variance (try this out for yourself by simulating your own version of Eq. (7.2) with different choices of λ and σ²).
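The simulation just suggested takes only a few lines of code. Here is a minimal sketch; the parameter values (λ = 0.05, σ = 0.5, X(0) = 10) are illustrative choices, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_additive(x0=10.0, lam=0.05, sigma=0.5, tmax=50, npaths=10):
    """Iterate Eq. (7.2): X(t+1) = (1 + lam) X(t) + Z(t), Z(t) ~ Normal(0, sigma^2)."""
    paths = np.empty((npaths, tmax + 1))
    paths[:, 0] = x0
    for t in range(tmax):
        z = rng.normal(0.0, sigma, size=npaths)  # process uncertainty Z(t)
        paths[:, t + 1] = (1.0 + lam) * paths[:, t] + z
    return paths

paths = simulate_additive()
deterministic = 10.0 * 1.05 ** np.arange(51)  # solution of Eq. (7.1)
```

Averaging many such paths reproduces the deterministic trajectory, exactly as Eq. (7.3) predicts.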
The transition from Eq. (7.1) to Eq. (7.2), in which we made the dynamics stochastic rather than deterministic, is a key piece of the art of modeling. We might have done it in a different manner. For example, suppose that we assume that the growth rate is composed of a deterministic term and a random term, so that we write X(t + 1) = (1 + λ(t))X(t), where λ(t) = λ + Z(t), and understand λ to be the mean growth rate and Z(t) to be the perturbation in time of that growth rate. Now, instead of Eq. (7.2), our stochastic dynamics will be

X(t + 1) = (1 + λ)X(t) + Z(t)X(t)   (7.4)

Note the difference between Eq. (7.4) and Eq. (7.2). In Eq. (7.4), the stochastic perturbation is proportional to population size. This slight modification, however, qualitatively changes the sample paths (Figure 7.2). We can now have very large changes in the trajectory, because the stochastic component, Z(t), is amplified by the current value of the state, X(t).
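A sketch of the multiplicative version, Eq. (7.4), differs from the additive one in a single line (again with illustrative parameter values):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_multiplicative(x0=10.0, lam=0.05, sigma=0.1, tmax=50, npaths=10):
    """Iterate Eq. (7.4): X(t+1) = (1 + lam + Z(t)) X(t), so the noise scales with X."""
    paths = np.empty((npaths, tmax + 1))
    paths[:, 0] = x0
    for t in range(tmax):
        z = rng.normal(0.0, sigma, size=npaths)  # perturbation of the growth rate
        paths[:, t + 1] = (1.0 + lam + z) * paths[:, t]
    return paths

paths = simulate_multiplicative()
```

Because E{Z(t)} = 0, the mean path is still (1 + λ)^t X(0), but the spread around it now grows with X(t).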
Which is the ‘‘right’’ way to convert from deterministic to stochastic dynamics – Eq. (7.2) or Eq. (7.4)? The answer is ‘‘it depends.’’ It depends upon your understanding of the biology and on how the random factors enter into the biological dynamics. That is, this is a question of the art of modeling, at which you are becoming more expert, and which (the development of models) is a life-long pursuit. We will mostly put this question aside until the next chapter, when it returns with a vengeance and the new tools obtained in this chapter are used.
Brownian motion
In 1828 (Brown 1828), Robert Brown, a Scottish botanist, observed that a grain of pollen in water dispersed into a number of much smaller particles, each of which moved continuously and randomly (as if with a ‘‘vital force’’). This motion is now called Brownian motion; it was investigated by a variety of scientists between 1828 and 1905, when Einstein – in his miraculous year – published an explanation of Brownian motion (Einstein 1956), using the atomic theory of matter as a guide. It is perhaps hard for us to believe today but, at the turn of the last century, the atomic theory of matter was still just that – considered to be an unproven theory. Fuerth (1956) gives a history of the study of Brownian motion between its report and Einstein's publication. Beginning in the 1930s, pure mathematicians got hold of the subject, and took it away from its biological and physical origins; they tend to call Brownian motion a Wiener process, after the brilliant Norbert Wiener who began to mathematize the subject.
In compromise, we will use W(t) to denote ‘‘standard Brownian motion,’’ which is defined by the following four conditions:
(1) W(0) = 0;
(2) W(t) is continuous;
(3) W(t) is normally distributed with mean 0 and variance t;
(4) if {t₁, t₂, t₃, t₄} represent four different, ordered times with t₁ < t₂ < t₃ < t₄ (Figure 7.3), then W(t₂) − W(t₁) and W(t₄) − W(t₃) are independent random variables, no matter how close t₃ is to t₂.

The last property is said to be the property of independent increments (see Connections for more details) and is a key assumption.
In Figure 7.4, I show five sample trajectories, which in the business are described as ‘‘realizations of the stochastic process.’’ They all start at 0 because of property (1). The trajectories are continuous, forced by property (2). Notice, however, that although the trajectories are continuous, they are very wiggly (we will come back to that momentarily).

For much of what follows, we will work with the ‘‘increment of Brownian motion’’ (we are going to convert regular differential equations of the sort that we encountered in previous chapters into stochastic differential equations using this increment), which is defined as

dW = W(t + dt) − W(t)
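Realizations like those in Figure 7.4 can be generated by summing independent normal increments; a sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

def brownian_paths(tmax=1.0, nsteps=1000, npaths=5):
    """Build W(t) from W(0) = 0 by accumulating increments dW ~ Normal(0, dt)."""
    dt = tmax / nsteps
    dW = rng.normal(0.0, np.sqrt(dt), size=(npaths, nsteps))
    W = np.concatenate([np.zeros((npaths, 1)), np.cumsum(dW, axis=1)], axis=1)
    return W

W = brownian_paths()
```

Each row starts at 0 (property (1)), W(tmax) has mean 0 and variance tmax (property (3)), and increments over disjoint intervals are independent by construction (property (4)).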
Now, although Brownian motion and its increment seem very natural to us (perhaps because we have spent so much time working with normal random variables), a variety of surprising and non-intuitive results emerge. To begin, let's ask about the derivative dW/dt. Since W(t) is a random variable, its derivative will be one too. Using the definition of the derivative,

E{dW/dt} = lim_{dt→0} E{[W(t + dt) − W(t)]/dt} = 0
Figure 7.3 A key assumption of the process of Brownian motion is that W(t₂) − W(t₁) and W(t₄) − W(t₃) are independent random variables, no matter how close t₃ is to t₂.
and we conclude that the average value of dW/dt is 0. But look what happens with the variance:

E{([W(t + dt) − W(t)]/dt)²} = E{dW²}/dt² = dt/dt² = 1/dt   (7.8)
but we had better stop right here, because we know what is going to happen with the limit – it does not exist. In other words, although the sample paths of Brownian motion are continuous, they are not differentiable, at least in the sense that the variance of the derivative exists. Later in this chapter, in the section on white noise (see p. 261), we will make sense of the derivative of Brownian motion. For now, I want to introduce one more strange property associated with Brownian motion and then spend some time using it.

Suppose that we have a function f(t, W) which is known and well understood and can be differentiated to our hearts' content, and for which we want to find f(t + dt, w + dW) when dt (and thus E{dW²})
Figure 7.4 Five realizations of standard Brownian motion.
is small and t and W(t) = w are specified. We Taylor expand in the usual manner, using a subscript to denote a derivative:

f(t + dt, w + dW) = f(t, w) + f_t dt + f_w dW + (1/2)(f_tt dt² + 2f_tw dt dW + f_ww dW²) + o(dt)

The strange property is that, because E{dW²} = dt, the second order term (1/2)f_ww dW² contributes at order dt, so that on average E{f(t + dt, w + dW)} = f(t, w) + (f_t + (1/2)f_ww)dt + o(dt).
The gamble r’s ruin in a fair game
Many – perhaps all – books on stochastic processes or probability include a section on gambling because, let's face it, what is the point of studying probability and stochastic processes if you can't become a better gambler (see also Dubins and Savage (1976))? The gambling problem also allows us to introduce some ideas that will flow through the rest of this chapter and the next chapter.

Imagine that you are playing a fair game in a casino (we will discuss real casinos, which always have the edge, in the next section) and that your current holdings are X(t) dollars. You are out of the game when X(t) falls to 0 and you break the bank when your holdings X(t) reach the casino holdings C. If you think that this is a purely mathematical problem and are impatient for biology, make the following analogy: X(t) is the size at time t of the population descended from a propagule of size x that reached an island at time t = 0; X(t) = 0 corresponds to extinction of the population and X(t) = C corresponds to successful colonization of the island by the descendants of the propagule. With this interpretation, we have one of the models for island biogeography of MacArthur and Wilson (1967), which will be discussed in the next chapter.
Since the game is fair, we may assume that the change in your holdings is determined by a standard Brownian motion; that is, your holdings at time t and time t + dt are related by

X(t + dt) = X(t) + dW   (7.11)
There are many questions that we could ask about your game, but I want to focus here on a single question: given your initial stake X(0) = x, what is the chance that you break the casino before you go broke?

One way to answer this question would be through simulation of trajectories satisfying Eq. (7.11). We would then follow the trajectories until X(t) crosses 0 or crosses C, and the probability of breaking the casino would be the fraction of trajectories that cross C before they cross 0. The trajectories that we simulate would look like those in Figure 7.4 with a starting value of x rather than 0. This method, while effective, would be hard pressed to give us general intuition and might require considerable computer time in order for us to obtain accurate answers. So, we will seek another method by thinking along sample paths.
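For concreteness, here is a sketch of the simulation approach just described, discretizing Eq. (7.11) with a small time step dt; the step size, stake, and casino limit are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

def prob_break_bank(x, C, dt=0.01, npaths=2000):
    """Fraction of paths X(t + dt) = X(t) + dW that reach C before 0, from X(0) = x."""
    holdings = np.full(npaths, float(x))
    alive = np.ones(npaths, dtype=bool)   # paths still strictly between 0 and C
    won = np.zeros(npaths, dtype=bool)    # paths that reached C first
    while alive.any():
        holdings[alive] += rng.normal(0.0, np.sqrt(dt), size=alive.sum())
        won |= alive & (holdings >= C)
        alive &= (holdings > 0) & (holdings < C)
    return won.mean()

u_hat = prob_break_bank(x=2.0, C=10.0)
```

With x = 2 and C = 10, the estimate comes out near 0.2 – a number worth keeping in mind as we now work out the answer analytically.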
In Figure 7.5, I show the (t, x) plane and the initial value of your holdings X(0) = x. At a time dt later, your holdings will change to x + dW, where dW is normally distributed with mean 0 and variance dt. Suppose that, as in the figure, they have changed to x + w, where we can calculate the probability of dW falling around w from the normal distribution. What happens when you start at this new value of holdings? Either you break the bank or you go broke; that is, things start over exactly as before except with a new level of holdings. But what happens between 0 and dt and after dt are independent of each other because of the properties of Brownian motion. Thus, whatever happens after dt is determined solely by your holdings at dt. And those holdings are normally distributed.
To be more formal about this, let us set

u(x) = Pr{X(t) hits C before it hits 0 | X(0) = x}   (7.12)
(which could also be recognized as a colonization probability, using the metaphor of island biogeography) and recognize that the argument of the previous paragraph can be summarized as

u(x) = E_dW{u(x + dW)}   (7.13)

where E_dW means to average over dW. Now let us Taylor expand the right hand side of Eq. (7.13) around x:

u(x) = E_dW{u(x) + dW u_x + (1/2)dW² u_xx + o(dt)}   (7.14a)
Figure 7.5 Starting at X(0) = x, at a time dt later the holdings have changed to x + w, where w has a normal distribution with mean 0 and variance dt. We can thus relate u(x) at this time to the average of u(x + dW) at a slightly later time (later by dt).
Trang 9and take the average over dW, remembering that it is normally uted with mean 0 and variance dt:
distrib-uðxÞ ¼ distrib-uðxÞ þ1
2uxxdtþ oðdtÞ (7:14b)
The last two equations share the same number because I want to emphasize their equivalence. To finish the derivation, we subtract u(x) from both sides, divide by dt and let dt → 0 to obtain the especially simple differential equation

(1/2)u_xx = 0   (7.15)

which we now solve by inspection. The second derivative is 0, so the first derivative of u(x) is a constant, u_x = k₁, and thus u(x) is a linear function of x:

u(x) = k₂ + k₁x   (7.16)
We will find these constants of integration by thinking about the boundary conditions that u(x) must satisfy. From Eq. (7.12), we conclude that u(0) must be 0 and u(C) must be 1, since if you start with x = 0 you have hit 0 before C, and if you start with C you have hit C before 0. Since u(0) = 0, from Eq. (7.16) we conclude that k₂ = 0, and to make u(C) = 1 we must have k₁ = 1/C, so that u(x) is

u(x) = x/C   (7.17)

What is the typical relationship between your initial holdings and those of a casino? In general C ≫ x, so that u(x) ≈ 0 – you are almost always guaranteed to go broke before hitting the casino limit.
But, of course, most of us gamble not to break the bank, but to have some fun (and perhaps win a little bit). So we might ask how long it will be before the game ends (i.e., your holdings are either 0 or C). To answer this question, set

T(x) = average amount of time in the game, given X(0) = x   (7.18)

We derive an equation for T(x) using logic similar to that which took us to Eq. (7.15). Starting at X(0) = x, after dt the holdings will be x + dW and you will have been in the game for dt time units. Thus we conclude

T(x) = dt + E_dW{T(x + dW)}   (7.19)

and we would now proceed as before, Taylor expanding, averaging, dividing by dt and letting dt approach 0. This question is better left as an exercise.
Exercise 7.2 (M)
Show that T(x) satisfies the equation −1 = (1/2)T_xx and that the general solution of this equation is T(x) = −x² + k₁x + k₂. Then explain why the boundary conditions for the equation are T(0) = T(C) = 0 and use them to evaluate the two constants. Plot and interpret the final result for T(x).
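If you carry out Exercise 7.2, the boundary conditions give T(x) = x(C − x); a quick simulation sketch is consistent with this (the stake, limit, and step size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def mean_exit_time(x, C, dt=0.01, npaths=1000):
    """Average time for X(t + dt) = X(t) + dW to first leave (0, C), from X(0) = x."""
    holdings = np.full(npaths, float(x))
    time_in_game = np.zeros(npaths)
    alive = np.ones(npaths, dtype=bool)
    while alive.any():
        holdings[alive] += rng.normal(0.0, np.sqrt(dt), size=alive.sum())
        time_in_game[alive] += dt
        alive &= (holdings > 0) & (holdings < C)
    return time_in_game.mean()

T_hat = mean_exit_time(x=2.0, C=10.0)   # compare with x(C - x) = 16
```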
The gambler’s ruin in a biased game
Most casinos have a slight edge on the gamblers playing there. This means that on average your holdings will decrease (the casino's edge) at rate m, as well as change due to the random fluctuations of the game. To capture this idea, we replace Eq. (7.11) by

dX = X(t + dt) − X(t) = −m dt + dW   (7.20)
Exercise 7.3 (E/M)
Show that dX is normally distributed with mean −m dt and variance dt + o(dt) by evaluating E{dX} and E{dX²} using Eq. (7.20) and the results of Exercise 7.1.
As before, we compute u(x), the probability that X(t) hits C before 0, but now we recognize that the average must be over dX rather than dW, since the holdings change from x to x + dX due to deterministic (−m dt) and stochastic (dW) factors. The analog of Eq. (7.13) is then

u(x) = E_dX{u(x + dX)} = E_dX{u(x − m dt + dW)}   (7.21)

We now Taylor expand and combine higher powers of dt and dW into a term that is o(dt):

u(x) = E_dX{u(x) + (−m dt + dW)u_x + (1/2)(−m dt + dW)² u_xx + o(dt)}   (7.22)
We expand the squared term, recognizing that O(dW²) will be order dt, take the average over dX, divide by dt and let dt → 0 (you should write out all of these steps if any one of them is not clear to you) to obtain

(1/2)u_xx − m u_x = 0   (7.23)
which we need to solve with the same boundary conditions as before: u(0) = 0, u(C) = 1. There are at least two ways of solving Eq. (7.23). I will demonstrate one; the other uses the same method that we used in Chapter 2 to deal with the von Bertalanffy equation for growth.

Let us set w = u_x, so that Eq. (7.23) can be rewritten as w_x = 2mw, for which we immediately recognize the solution w(x) = k₁e^{2mx}, where k₁ is a constant. Since w(x) is the derivative of u(x), we integrate again to obtain

u(x) = k₂e^{2mx} + k₃   (7.24)
where k₂ and k₃ are constants and, to be certain that we are on the same page, try the next exercise.

Exercise 7.4 (E)
What is the relationship between k₁ and k₂?

When we apply the boundary condition that u(0) = 0, we conclude that k₃ = −k₂, and when we apply the boundary condition u(C) = 1, we conclude that k₂ = 1/(e^{2mC} − 1). We thus have the solution for the probability of reaching the limit of the casino in a biased game:

u(x) = (e^{2mx} − 1)/(e^{2mC} − 1)   (7.25)
Figure 7.6 When the game is biased, the chance of reaching the limit of the casino before going broke is vanishingly small. Here I show u(x) given by Eq. (7.25) for m = 0.1 and C = 100. Note that if you start with even 90% of the casino limit, the situation is not very good. Most of us would start with x ≪ C and should thus just enjoy the game (or develop a system to reduce the value of m, or even change its sign).
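Equation (7.25) can be checked against a direct simulation of the biased game, Eq. (7.20); a sketch, where the values m = 0.1, x = 5, C = 10 are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def u_biased(x, C, m):
    """Eq. (7.25): chance of reaching C before 0 when dX = -m dt + dW."""
    return (np.exp(2 * m * x) - 1) / (np.exp(2 * m * C) - 1)

def simulate_biased(x, C, m, dt=0.01, npaths=2000):
    """Monte Carlo estimate of the same probability from discretized paths."""
    holdings = np.full(npaths, float(x))
    alive = np.ones(npaths, dtype=bool)
    won = np.zeros(npaths, dtype=bool)
    while alive.any():
        holdings[alive] += -m * dt + rng.normal(0.0, np.sqrt(dt), size=alive.sum())
        won |= alive & (holdings >= C)
        alive &= (holdings > 0) & (holdings < C)
    return won.mean()

analytic = u_biased(x=5.0, C=10.0, m=0.1)
estimate = simulate_biased(x=5.0, C=10.0, m=0.1)
```

Even starting at half the casino limit, the downward drift cuts the chance of breaking the bank well below the fair-game value of 1/2.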
Exercise 7.5 (M/H)
Derive the equation for T(x), the mean time that you are in the game when dX is given by Eq. (7.20). Solve this equation for the boundary conditions T(0) = T(C) = 0.
Exercise 7.6 (E/M)
When m is very small, we expect that the solution of Eq. (7.25) should be close to Eq. (7.17), because then the biased game is almost like a fair one. Show that this is indeed the case by Taylor expansion of the exponentials in Eq. (7.25) for m → 0, and show that you obtain our previous result. If you have more energy after this, do the same for the solutions of T(x) from Exercises 7.5 and 7.2.
Before moving on, let us do one additional piece of analysis. In general, we expect the casino limit C to be very large, so that 2mC ≫ 1. Dividing the numerator and denominator of Eq. (7.25) by e^{2mC} gives

u(x) = (e^{−2m(C − x)} − e^{−2mC})/(1 − e^{−2mC}) ≈ e^{−2m(C − x)}   (7.26)

with the last approximation coming by assuming that e^{−2mC} ≪ 1. Now let us take the logarithm to the base 10 of this approximation to u(x), so that log₁₀(u(x)) = −2m(C − x)log₁₀e. I have plotted this function in Figure 7.7, for x = 10 and C = 50, 500, or 1000. Now, C = 1000, x = 10, and m = 0.01 probably under-represents the relationship of the bank of a casino to most of us, but note that, even in this case, the chance of reaching the casino limit before going broke when m = 0.01 is about 1 in
Figure 7.7 log₁₀(u(x)) based on Eq. (7.26) for x = 10 and C = 50, 500, or 1000, as a function of m.
a billion. So go to Vegas, but go for a good time. (In spring 1981, my first year at UC Davis, I went to a regional meeting of the American Mathematical Society, held in Reno, Nevada, to speak in a session on applied stochastic processes. Many famous colleagues were there, and although our session was Friday, they had been there since Tuesday doing, you guessed it, true work in applied probability. All, of course, claimed positive gains in their holdings.)
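The arithmetic behind ‘‘1 in a billion’’ is easy to check:

```python
import math

# Approximation from Eq. (7.26): u(x) ≈ e^{-2m(C - x)}
m, C, x = 0.01, 1000.0, 10.0
u_approx = math.exp(-2 * m * (C - x))            # ≈ 2.5e-9, about 1 in a billion
log10_u = -2 * m * (C - x) * math.log10(math.e)  # ≈ -8.6
```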
The transition density and covariance of Brownian motion
We now return to standard Brownian motion, to learn a little bit more about it. To do this, consider the interval [0, t] and some intermediate time s (Figure 7.8). Suppose we know that W(s) = y, for s < t. What can be said about W(t)? The increment W(t) − W(s) = W(t) − y will be normally distributed with mean 0 and variance t − s. Thus we conclude that

Pr{a ≤ W(t) ≤ b} = (1/√(2π(t − s))) ∫_a^b exp(−(x − y)²/(2(t − s))) dx   (7.27)
Note too that we can make this prediction knowing only W(s), and not having to know anything about the history between 0 and s. A stochastic process for which the future depends only upon the current value and not upon the past that led to the current value is called a Markov process, so we now know that Brownian motion is a Markov process. The integrand in Eq. (7.27) is an example of a transition density function, which tells us how the process moves from one time and value to another. It depends upon four values: s, y, t, and x, and we shall write it as

q(x, t, y, s)dx = Pr{x ≤ W(t) ≤ x + dx | W(s) = y} = (1/√(2π(t − s))) exp(−(x − y)²/(2(t − s))) dx   (7.28)
This equation should remind you of the diffusion equation encountered in Chapter 2, and the discussion that we had there about the strange properties of the right hand side as t decreases to s. In the next section, all of this will be clarified. But before that, a small exercise.
Figure 7.8 The time s divides the interval 0 to t into two pieces, one from 0 to just before s (s−) and one from just after s (s+) to t. The increments in Brownian motion, W(s−) − W(0) and W(t) − W(s+), are then independent random variables.
Exercise 7.7 (E/M)
Show that q(x, t, y, s) satisfies the differential equation q_t = (1/2)q_xx. What equation does q(x, t, y, s) satisfy in the variables s and y (think about the relationship between q_t and q_s and q_xx and q_yy before you start computing)?
Keeping with the ordering of time in Figure 7.8, let us compute the covariance E{W(t)W(s)} for s < t:

E{W(t)W(s)} = E{[W(s) + W(t) − W(s)]W(s)} = E{W(s)²} + E{[W(s) − W(0)][W(t) − W(s)]} = s   (7.29)

where the last line of Eq. (7.29) follows because W(s) − W(0) and W(t) − W(s) are independent random variables, with mean 0. Suppose that we had interchanged the order of t and s. Our conclusion would then be that E{W(t)W(s)} = t. In other words,

E{W(t)W(s)} = min(t, s)   (7.30)
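The covariance formula, Eq. (7.30), is easy to check empirically from simulated paths; a sketch with the illustrative choices t = 1.5 and s = 0.5:

```python
import numpy as np

rng = np.random.default_rng(6)

# Build many Brownian paths on a grid with step dt and estimate E{W(t)W(s)}.
dt, nsteps, npaths = 0.01, 200, 50000
dW = rng.normal(0.0, np.sqrt(dt), size=(npaths, nsteps))
W = np.cumsum(dW, axis=1)             # W at times dt, 2 dt, ..., nsteps dt

s_col = W[:, 49]                      # W(0.5): 50 increments of size 0.01
t_col = W[:, 149]                     # W(1.5): 150 increments
cov_hat = (t_col * s_col).mean()      # should be close to min(1.5, 0.5) = 0.5
```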
and we are now ready to think about the derivative of Brownian motion.
Gaussian ‘‘white’’ noise
The derivative of Brownian motion, which we shall denote by ξ(t) = dW/dt, is often called Gaussian white noise. It should already be clear where Gaussian comes from; the origin of white will be understood at the end of this section, and the use of noise comes from engineers, who see fluctuations as noise, not as the element of variation that may lead to selection; Jaynes (2003) has a particularly nice discussion of this point.
We have already shown that E{ξ(t)} = 0 and that problems arise when we try to compute E{ξ(t)²} in the usual way, because of the variance of Brownian motion (recall the discussion around Eq. (7.8)). So, we are going to sneak up on this derivative by computing the covariance

E{ξ(t)ξ(s)} = (∂²/∂t∂s)E{W(t)W(s)}   (7.31)
Note that I have exchanged the order of differentiation and integration in Eq. (7.31); we will do this once more in this chapter. In general, one needs to be careful about doing such exchanges; both are okay here (if you want to know more about this question, consult a good book on advanced analysis). We know that E{W(t)W(s)} = min(t, s). Let us think about this covariance as a function of t, when s is held fixed, as if it were just a parameter (Figure 7.9):

ρ(t, s) = min(t, s) = { t if t < s; s if t ≥ s }   (7.32)
Now the derivative of this function will be discontinuous; since the derivative is 1 if t < s, and is 0 if t > s, there is a jump at t = s (Figure 7.9). We are going to deal with this problem by using the approach of generalized functions described in Chapter 2 (and in the course of this, learn more about Gaussians).
We will replace the derivative (∂/∂t)ρ(t, s) by an approximation that is smooth but in the limit has the discontinuity. Define a family of functions by

(∂/∂t)ρ_n(t, s) = √(n/(2π)) ∫_{t−s}^{∞} exp(−nx²/2) dx,   with (∂/∂t)ρ(t, s) = lim_{n→∞} (∂/∂t)ρ_n(t, s)   (7.33)

When t = s, the lower limit of the integral is 0, so that the integral is 1/2. To understand what happens when t does not equal s, the following exercise is useful.
Figure 7.9 (a) The covariance function ρ(t, s) = E{W(t)W(s)} = min(t, s), thought of as a function of t with s as a parameter. (b) The derivative of the covariance function, which jumps from 1 to 0 at t = s, where it takes the value 1/2. (c), (d) The approximate derivative is the tail of the cumulative Gaussian from t = s.
Exercise 7.8 (E)
Show that the substitution y = √n x converts the integral in Eq. (7.33) to

(1/√(2π)) ∫_{√n(t−s)}^{∞} exp(−y²/2) dy   (7.34)

The form of the integral in expression (7.34) lets us understand what will happen when t ≠ s. If t < s, the lower limit is negative, so that as n → ∞ the integral will approach 1. If t > s, the lower limit is positive, so that as n increases the integral will approach 0. We have thus constructed an approximation to the derivative of the correlation function.
Equation (7.31) tells us what we need to do next. We have constructed an approximation to (∂/∂t)ρ(t, s), and so to find the covariance of Gaussian white noise, we now need to differentiate Eq. (7.33) with respect to s. Remembering how to take the derivative of an integral with respect to one of its arguments, we have

(∂²/∂t∂s)ρ(t, s) = lim_{n→∞} √(n/(2π)) exp(−n(t − s)²/2) = lim_{n→∞} φ_n(t − s)   (7.35)
Now, φ_n(t − s) is a Gaussian distribution centered not at 0 but at t = s, with variance 1/n. Its integral, over all values of t, is 1, but in the limit that n → ∞ it is 0 everywhere except at t = s, where it is infinite. In other words, the limit of φ_n(t − s) is the Dirac delta function that we first encountered in Chapter 2 (some φ_n(x) are shown in Figure 7.10).
This has been a tough slog, but worth it, because we have shown that

E{ξ(t)ξ(s)} = δ(t − s)   (7.36)
We are now in a position to understand the use of the word ‘‘white’’ in the description of this process. Historically, engineers have worked interchangeably between time and frequency domains (Kailath 1980), because in the frequency domain tools other than the ones that we consider are useful, especially for linear systems (which most biological systems are not). The connection between the time and frequency domains (Stratonovich 1963) is the spectrum S(ω), defined for the function f(t) by

S(ω) = ∫_{−∞}^{∞} f(t)e^{−iωt} dt

so that, for the covariance f(t) = δ(t) of Eq. (7.36),

S(ω) = ∫_{−∞}^{∞} δ(t)e^{−iωt} dt = 1

where the last equality follows because the delta function picks out t = 0, for which the exponential is 1. The spectrum of Eq. (7.36) is thus flat (Figure 7.11): all frequencies are equally represented in it. Well, that is the description of white light, and this is the reason that we call the derivative of Brownian motion white noise. In the natural world, the covariance does not drop off instantaneously and we obtain spectra with color (see Connections).
The Ornstein–Uhlenbeck process and stochastic integrals
In our analyses thus far, the dynamics of the stochastic process have been independent of the state, depending only upon Brownian motion. We will now begin to move beyond that limitation, but do it appropriately slowly. To begin, recall that if X(t) satisfies the dynamics
Figure 7.11 The spectrum S(ω) of the covariance function given by Eq. (7.36) is completely flat, so that all frequencies ω are equally represented. Hence the spectrum is ‘‘white.’’ In the natural world, however, the higher frequencies are less represented, leading to a fall-off of the spectrum.
dX/dt = f(X) and K is a stable steady state of this system, so that f(K) = 0, and we consider the behavior of deviations from the steady state Y(t) = X(t) − K, then, to first order, Y(t) satisfies the linear dynamics dY/dt = −|f′(K)|Y, where f′(K) is the derivative of f(X) evaluated at K. We can then define a relaxation parameter ν = |f′(K)| so that the dynamics of Y are given by

dY/dt = −νY   (7.39)

We call ν the relaxation parameter because it measures the rate at which fluctuations from the steady state return (relax) towards 0. Sometimes this parameter is called the dissipation parameter.
Exercise 7.9 (E)
What is the relaxation parameter if f(X) is the logistic rX(1 − (X/K))? If you have the time, find Levins (1966) and see what he has to say about your result.
We fully understand the dynamics of Eq. (7.39): it represents the return of deviations to the steady state: whichever way the deviation starts (above or below K), it becomes smaller. However, now let us ask what happens if, in addition to this deterministic attraction back to the steady state, there is stochastic fluctuation. That is, we imagine that in the next little bit of time, the deviation from the steady state declines because of the attraction back towards the steady state, but at the same time is perturbed by factors independent of this decline. Bjørnstad and Grenfell (2001) call this process ‘‘noisy clockwork;’’ Stenseth et al. (1999) apply the ideas we now develop to cod, and Dennis and Otten (2000) apply them to kit fox.
We formulate the dynamics in terms of the increment of Brownian motion, rather than white noise, by recognizing that in the limit dt → 0, Eq. (7.39) is the same as dY = −νY dt + o(dt), and so our stochastic version will become

dY = −νY dt + σdW   (7.40)

where σ is allowed to scale the intensity of the fluctuations. The stochastic process generated by Eq. (7.40) is called the Ornstein–Uhlenbeck process (see Connections) and contains both deterministic relaxation and stochastic fluctuations (Figure 7.12). Our goal is now to characterize the mixture of relaxation and fluctuation.
To do so, we write Eq. (7.40) as a differential by using the integrating factor e^{νt}, so that

d(e^{νt}Y) = σe^{νt}dW   (7.41)
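Equation (7.40) is straightforward to simulate with the Euler–Maruyama scheme (replace dt and dW by small finite steps); a sketch, with the illustrative values ν = 1 and σ = 0.3:

```python
import numpy as np

rng = np.random.default_rng(7)

def ornstein_uhlenbeck(y0=1.0, nu=1.0, sigma=0.3, dt=0.001, nsteps=5000, npaths=2000):
    """Euler-Maruyama discretization of Eq. (7.40): dY = -nu Y dt + sigma dW."""
    Y = np.empty((npaths, nsteps + 1))
    Y[:, 0] = y0
    for k in range(nsteps):
        dW = rng.normal(0.0, np.sqrt(dt), size=npaths)  # increment of Brownian motion
        Y[:, k + 1] = Y[:, k] - nu * Y[:, k] * dt + sigma * dW
    return Y

Y = ornstein_uhlenbeck()
```

Each path first relaxes from y0 toward 0 at rate ν and then fluctuates around it; at large t the ensemble variance settles near the standard stationary value σ²/(2ν), which is 0.045 for these parameters.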