A process consisting of randomly occurring points in the plane is said to constitute a two-dimensional Poisson process having rate λ, λ > 0, if
1. The number of points occurring in any given region of area A is Poisson distributed with mean λA.
2. The numbers of points occurring in disjoint regions are independent.
For a given fixed point 0 in the plane, we now show how to simulate points, according to a two-dimensional Poisson process with rate λ, that occur in a circular region of radius r centered at 0.
Let C(a) denote the circle of radius a centered at 0, and note that, from Condition 1, the number of points in C(a) is Poisson distributed with mean λπa^2. Let R_i, i ≥ 1, denote the distance from the origin 0 to its ith nearest point (Figure 5.4). Then
\[
P\{\pi R_1^2 > x\} = P\left\{R_1 > \sqrt{x/\pi}\right\}
= P\left\{\text{no points in } C\left(\sqrt{x/\pi}\right)\right\}
= e^{-\lambda x}
\]
5.6 Simulating a Two-Dimensional Poisson Process
Figure 5.4. Two-Dimensional Poisson Process: points at distances R_1, R_2, R_3, R_4 from the origin 0.
where the last equality uses the fact that the area of C(√(x/π)) is x. Also, with C(b) − C(a) denoting the region between C(b) and C(a), a < b, we have
\[
\begin{aligned}
P\{\pi R_2^2 - \pi R_1^2 > x \mid R_1 = a\}
&= P\left\{R_2 > \sqrt{(x + \pi a^2)/\pi} \,\middle|\, R_1 = a\right\} \\
&= P\left\{\text{no points in } C\left(\sqrt{(x + \pi a^2)/\pi}\right) - C(a) \,\middle|\, R_1 = a\right\} \\
&= P\left\{\text{no points in } C\left(\sqrt{(x + \pi a^2)/\pi}\right) - C(a)\right\} \quad \text{by Condition 2} \\
&= e^{-\lambda x}
\end{aligned}
\]
In fact, the same argument can be repeated continually to obtain the following proposition.
Proposition. With R_0 = 0, the quantities πR_i^2 − πR_{i−1}^2, i ≥ 1, are independent exponential random variables, each having rate λ.
In other words, the amount of area that need be traversed to encounter a Poisson point is exponential with rate λ. Since, by symmetry, the respective angles of the Poisson points are independent and uniformly distributed over (0, 2π), we thus have the following algorithm for simulating the Poisson process over a circular region of radius r about 0.
Step 1: Generate independent exponentials with rate λ, X_1, X_2, …, stopping at
\[
N = \min\{n : X_1 + \cdots + X_n > \pi r^2\}
\]
Step 2: If N = 1, stop; there are no points in C(r). Otherwise, for i = 1, …, N − 1, set
\[
R_i = \sqrt{(X_1 + \cdots + X_i)/\pi} \quad (\text{that is, } \pi R_i^2 = X_1 + \cdots + X_i)
\]
Step 3: Generate random numbers U_1, …, U_{N−1}.
Step 4: The polar coordinates of the N − 1 Poisson points are (R_i, 2πU_i), i = 1, …, N − 1.
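The four steps above translate directly into code. The following is a minimal sketch in Python (the function name and the use of Python's standard random module are my own choices):

```python
import math
import random

def poisson_circle(lam, r, rng=random):
    """Simulate a two-dimensional Poisson process with rate lam inside the
    circle of radius r centered at the origin.
    Returns a list of (radius, angle) polar coordinates."""
    points = []
    area = 0.0  # running sum X1 + ... + Xi, which equals pi * R_i^2
    while True:
        area += rng.expovariate(lam)      # Step 1: next exponential with rate lam
        if area > math.pi * r * r:        # stopping rule: point would fall outside C(r)
            return points
        radius = math.sqrt(area / math.pi)   # Step 2: pi R_i^2 = X1 + ... + Xi
        angle = 2 * math.pi * rng.random()   # Steps 3-4: uniform angle on (0, 2*pi)
        points.append((radius, angle))
```

Because the running area total is increasing, the returned radii come out sorted, matching the interpretation of R_i as the distance to the ith nearest point.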
The above algorithm can be considered as the fanning out from a circle centered at 0 with a radius that expands continuously from 0 to r. The successive radii at which points are encountered are simulated by using the result that the additional area necessary to explore until one encounters another point is always exponentially distributed with rate λ. This fanning-out technique can also be used to simulate the process over noncircular regions. For example, consider a nonnegative function f(x) and suppose that we are interested in simulating the Poisson process in the region between the x-axis and the function f (Figure 5.5), with x going from 0 to T. To do so, we can start at the left-hand edge and fan vertically to the right by considering the successive areas encountered. Specifically, if X_1 < X_2 < ··· denote the successive projections of the Poisson process points on the x-axis, it follows in exactly the same manner as before that (with X_0 = 0)
\[
\int_{X_{i-1}}^{X_i} f(x)\,dx, \quad i \ge 1,
\]
are independent exponential random variables with rate λ.
Hence, we can simulate the Poisson points by generating independent exponential random variables with rate λ, W_1, W_2, …, stopping at
\[
N = \min\left\{n : W_1 + \cdots + W_n > \int_0^T f(x)\,dx\right\}
\]
We now determine X_1, …, X_{N−1} by using the equations
\[
\int_0^{X_1} f(x)\,dx = W_1, \quad \int_{X_1}^{X_2} f(x)\,dx = W_2, \quad \ldots, \quad \int_{X_{N-2}}^{X_{N-1}} f(x)\,dx = W_{N-1}
\]
Figure 5.5. Graph of f.
Because the projection on the y-axis of the point whose x-coordinate is X_i is clearly uniformly distributed over (0, f(X_i)), it thus follows that if we now generate random numbers U_1, …, U_{N−1}, then the simulated Poisson points are, in rectangular coordinates, (X_i, U_i f(X_i)), i = 1, …, N − 1.
The above procedure is most useful when f is regular enough so that the above equations can be efficiently solved for the values of X_i. For example, if f(x) = c (and so the region is a rectangle), we can express X_i as
\[
X_i = \frac{W_1 + \cdots + W_i}{c}
\]
and the Poisson points are (X_i, cU_i), i = 1, …, N − 1.
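For the rectangular case just described, the fanning-out procedure can be sketched as follows (a minimal sketch; the function name and parameter names are my own):

```python
import random

def poisson_rectangle(lam, c, T, rng=random):
    """Simulate a two-dimensional Poisson process with rate lam in the
    rectangle 0 < x < T, 0 < y < c, by fanning vertically to the right.
    Returns a list of (x, y) points with increasing x-coordinates."""
    points = []
    total = 0.0  # W1 + ... + Wi, which equals the area c * X_i swept so far
    while True:
        total += rng.expovariate(lam)
        if total > c * T:          # swept past the total area of the rectangle
            return points
        x = total / c              # solve c * X_i = W1 + ... + Wi for X_i
        y = c * rng.random()       # height uniform on (0, c) = (0, f(X_i))
        points.append((x, y))
```

For a general f, only the line solving for x changes: one would invert the equation ∫ from X_{i−1} to X_i of f(x) dx = W_i, numerically if necessary.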
Exercises
1. Give a method for generating a random variable having density function f(x) = e^x/(e − 1), 0 ≤ x ≤ 1.
2. Give a method to generate a random variable having density function
\[
f(x) = \begin{cases} \dfrac{x-2}{2} & \text{if } 2 \le x \le 3 \\[4pt] \dfrac{2 - x/3}{2} & \text{if } 3 \le x \le 6 \end{cases}
\]
3. Use the inverse transform method to generate a random variable having distribution function
\[
F(x) = \frac{x^2 + x}{2}, \quad 0 \le x \le 1
\]
4. Give a method for generating a random variable having distribution function F(x) = 1 − exp(−αx^β), 0 < x < ∞.
A random variable having such a distribution is said to be a Weibull random variable.
5. Give a method for generating a random variable having density function
\[
f(x) = \begin{cases} e^{2x}, & -\infty < x < 0 \\ e^{-2x}, & 0 < x < \infty \end{cases}
\]
6. Let X be an exponential random variable with mean 1. Give an efficient algorithm for simulating a random variable whose distribution is the conditional distribution of X given that X < 0.05. That is, its density function is
\[
f(x) = \frac{e^{-x}}{1 - e^{-0.05}}, \quad 0 < x < 0.05
\]
Generate 1000 such variables and use them to estimate E[X | X < 0.05]. Then determine the exact value of E[X | X < 0.05].
7. (The Composition Method) Suppose it is relatively easy to generate random variables from any of the distributions F_i, i = 1, …, n. How could we generate a random variable having the distribution function
\[
F(x) = \sum_{i=1}^n p_i F_i(x)
\]
where p_i, i = 1, …, n, are nonnegative numbers whose sum is 1?
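The standard answer to Exercise 7 is to first choose an index i with probability p_i and then draw from F_i. A hedged Python sketch (function and variable names are my own; the samplers are any callables returning a draw from the corresponding F_i):

```python
import math
import random

def composition_sample(ps, samplers, rng=random):
    """Sample from the mixture F(x) = sum_i ps[i] * F_i(x): choose index i
    with probability ps[i], then draw from the corresponding sampler."""
    u = rng.random()
    cum = 0.0
    for p, draw in zip(ps, samplers):
        cum += p
        if u <= cum:
            return draw()
    return samplers[-1]()  # guard against floating-point round-off

# Example: F(x) = (x + x^2)/2 on [0, 1] is an equal mixture of U(0, 1)
# (inverse transform: U) and the CDF x^2 (inverse transform: sqrt(U)).
uniform_part = lambda: random.random()
square_part = lambda: math.sqrt(random.random())
sample = composition_sample([0.5, 0.5], [uniform_part, square_part])
```

The example mixture is the same F(x) = (x + x^2)/2 that reappears in Exercise 19.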
8. Using the result of Exercise 7, give algorithms for generating random variables from the following distributions.
(a) F(x) = (x + x^3 + x^5)/3, 0 ≤ x ≤ 1
(b)
\[
F(x) = \begin{cases} \dfrac{1 - e^{-2x} + 2x}{3} & \text{if } 0 < x < 1 \\[4pt] \dfrac{3 - e^{-2x}}{3} & \text{if } 1 < x < \infty \end{cases}
\]
(c) F(x) = Σ_{i=1}^n α_i x^i, 0 ≤ x ≤ 1, where α_i ≥ 0, Σ_{i=1}^n α_i = 1
9. Give a method to generate a random variable having distribution function
\[
F(x) = \int_0^\infty x^y e^{-y}\,dy, \quad 0 \le x \le 1
\]
[Hint: Think in terms of the composition method of Exercise 7. In particular, let F denote the distribution function of X, and suppose that the conditional distribution of X given that Y = y is
\[
P\{X \le x \mid Y = y\} = x^y, \quad 0 \le x \le 1.]
\]
10. A casualty insurance company has 1000 policyholders, each of whom will independently present a claim in the next month with probability .05.
Assuming that the amounts of the claims made are independent exponential random variables with mean $800, use simulation to estimate the probability that the sum of these claims exceeds $50,000.
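A minimal Monte Carlo sketch of the estimate described in Exercise 10 (the function name and default sample size are illustrative choices of mine):

```python
import random

def estimate_exceed_prob(n_runs=10_000, n_policies=1000, p=0.05,
                         mean_claim=800.0, threshold=50_000.0, rng=random):
    """Estimate P(total claims > threshold): each of n_policies policyholders
    independently claims with probability p, and claim amounts are
    independent exponentials with the given mean."""
    count = 0
    for _ in range(n_runs):
        n_claims = sum(1 for _ in range(n_policies) if rng.random() < p)
        # expovariate takes the rate, which is 1 / mean
        total = sum(rng.expovariate(1.0 / mean_claim) for _ in range(n_claims))
        if total > threshold:
            count += 1
    return count / n_runs
```

Since the expected total is 1000 × 0.05 × $800 = $40,000, the estimated probability of exceeding $50,000 should come out well below 1/2.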
11. Write an algorithm that can be used to generate exponential random variables in sets of 3. Compare the computational requirements of this method with the one presented after Example 5c, which generates them in pairs.
12. Suppose it is easy to generate a random variable from any of the distributions F_i, i = 1, …, n. How can we generate from the following distributions?
(a) F(x) = \prod_{i=1}^n F_i(x)
(b) F(x) = 1 − \prod_{i=1}^n [1 − F_i(x)]
[Hint: If X_i, i = 1, …, n, are independent random variables, with X_i having distribution F_i, what random variable has distribution function F?]
13. Using the rejection method and the results of Exercise 12, give two other methods, aside from the inverse transform method, that can be used to generate a random variable having distribution function
F(x) = x^n, 0 ≤ x ≤ 1
Discuss the efficiency of the three approaches to generating from F.
14. Let G be a distribution function with density g and suppose, for constants a < b, we want to generate a random variable from the distribution function
\[
F(x) = \frac{G(x) - G(a)}{G(b) - G(a)}, \quad a \le x \le b
\]
(a) If X has distribution G, then F is the conditional distribution of X given what information?
(b) Show that the rejection method reduces in this case to generating a random variable X having distribution G and then accepting it if it lies between a and b.
15. Give two methods for generating a random variable having density function f(x) = xe^{−x}, 0 ≤ x < ∞
and compare their efficiency.
16. Give two algorithms for generating a random variable having distribution function
F(x) = 1 − e^{−x} − e^{−2x} + e^{−3x}, x > 0
17. Give two algorithms for generating a random variable having density function
\[
f(x) = \frac{1}{4} + 2x^3 + \frac{5}{4}x^4, \quad 0 < x < 1
\]
18. Give an algorithm for generating a random variable having density function f(x) = 2xe^{−x^2}, x > 0
19. Show how to generate a random variable whose distribution function is
\[
F(x) = \frac{1}{2}(x + x^2), \quad 0 \le x \le 1
\]
using
(a) the inverse transform method;
(b) the rejection method;
(c) the composition method.
Which method do you think is best for this example? Briefly explain your answer.
20. Use the rejection method to find an efficient way to generate a random variable having density function
\[
f(x) = \frac{1}{2}(1 + x)e^{-x}, \quad 0 < x < \infty
\]
21. When generating a gamma random variable with parameters (α, 1), α < 1, that is conditioned to exceed c by using the rejection technique with an exponential conditioned to exceed c, what is the best exponential to use? Is it necessarily the one with mean α, the mean of the gamma (α, 1) random variable?
22. Give an algorithm that generates a random variable having density
f(x) = 30(x^2 − 2x^3 + x^4), 0 ≤ x ≤ 1
Discuss the efficiency of this approach.
23. Give an efficient method to generate a random variable X having density
\[
f(x) = \frac{1}{0.000336}\, x(1 - x)^3, \quad 0.8 < x < 1
\]
24. In Example 5f we simulated a normal random variable by using the rejection technique with an exponential distribution with rate 1. Show that among all exponential density functions g(x) = λe^{−λx} the number of iterations needed is minimized when λ = 1.
25. Write a program that generates normal random variables by the method of Example 5f.
26. Let (X, Y) be uniformly distributed in a circle of radius 1. Show that if R is the distance from the center of the circle to (X, Y), then R^2 is uniform on (0, 1).
27. Write a program that generates the first T time units of a Poisson process having rate λ.
28. To complete a job a worker must go through k stages in sequence. The time to complete stage i is an exponential random variable with rate λ_i, i = 1, …, k. However, after completing stage i the worker will only go to the next stage with probability α_i, i = 1, …, k − 1. That is, after completing stage i the worker will stop working with probability 1 − α_i. If we let X denote the amount of time that the worker spends on the job, then X is called a Coxian random variable. Write an algorithm for generating such a random variable.
29. Buses arrive at a sporting event according to a Poisson process with rate 5 per hour. Each bus is equally likely to contain either 20, 21, …, 40 fans, with the numbers in the different buses being independent. Write an algorithm to simulate the arrival of fans to the event by time t = 1.
30.
(a) Write a program that uses the thinning algorithm to generate the first 10 time units of a nonhomogeneous Poisson process with intensity function
\[
\lambda(t) = 3 + \frac{4}{t + 1}
\]
(b) Give a way to improve upon the thinning algorithm for this example.
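The thinning algorithm referenced in Exercise 30(a) can be sketched as follows. Since λ(t) = 3 + 4/(t + 1) is decreasing, it is bounded on (0, 10) by λ(0) = 7; the function name and interface below are my own choices:

```python
import random

def thinning_nhpp(rate_fn, lam_max, T, rng=random):
    """Thinning: generate a homogeneous Poisson process with rate lam_max on
    (0, T), then keep each event time t independently with probability
    rate_fn(t) / lam_max. Requires rate_fn(t) <= lam_max on (0, T)."""
    times = []
    t = 0.0
    while True:
        t += rng.expovariate(lam_max)   # next event of the rate-lam_max process
        if t > T:
            return times
        if rng.random() <= rate_fn(t) / lam_max:
            times.append(t)             # accepted: an event of the NHPP

# For Exercise 30(a): events = thinning_nhpp(lambda t: 3 + 4 / (t + 1), 7.0, 10.0)
```

One natural improvement for part (b) is to exploit the decreasing intensity by using a smaller bound on each subinterval rather than the single bound 7 throughout.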
31. Give an efficient algorithm to generate the first 10 time units of a nonhomogeneous Poisson process having intensity function
\[
\lambda(t) = \begin{cases} t/5, & 0 < t < 5 \\ 1 + 5(t - 5), & 5 < t < 10 \end{cases}
\]
32. Write a program to generate the points of a two-dimensional Poisson process within a circle of radius R, and run the program for λ = 1 and R = 5. Plot the points obtained.
Bibliography
Dagpunar, J., Principles of Random Variate Generation. Clarendon Press, Oxford, 1988.
Devroye, L., Nonuniform Random Variate Generation. Springer-Verlag, New York, 1986.
Fishman, G. S., Principles of Discrete Event Simulation. Wiley, New York, 1978.
Knuth, D., The Art of Computer Programming, Vol. 2: Seminumerical Algorithms, 2nd ed. Addison-Wesley, Reading, MA, 2000.
Law, A. M., and W. D. Kelton, Simulation Modelling and Analysis, 3rd ed. McGraw-Hill, New York, 1997.
Lewis, P. A. W., and G. S. Shedler, "Simulation of Nonhomogeneous Poisson Processes by Thinning," Nav. Res. Log. Quart., 26, 403–413, 1979.
Marsaglia, G., "Generating Discrete Random Variables in a Computer," Commun. Assoc. Comput. Mach., 6, 37–38, 1963.
Morgan, B. J. T., Elements of Simulation. Chapman and Hall, London, 1983.
Ripley, B. D., "Computer Generation of Random Variables: A Tutorial," Int. Statist. Rev., 51, 301–319, 1983.
Ripley, B. D., Stochastic Simulation. Wiley, New York, 1986.
Rubinstein, R. Y., Simulation and the Monte Carlo Method. Wiley, New York, 1981.
Schmeiser, B. W., "Random Variate Generation, a Survey," Proc. 1980 Winter Simulation Conf., Orlando, FL, pp. 79–104, 1980.
6 The Multivariate Normal Distribution and Copulas
Introduction
In this chapter we introduce the multivariate normal distribution and show how to generate random variables having this joint distribution. We also introduce copulas, which are useful when choosing joint distributions to model random variables whose marginal distributions are known.