
Random variables II

Probability Examples c-3

Leif Mejlbro

© 2009 Leif Mejlbro & Ventus Publishing ApS

ISBN 978-87-7681-518-9


This is the third book of examples from the Theory of Probability. This topic is not my favourite; however, thanks to my former colleague, Ole Jørsboe, I somehow managed to get an idea of what it is all about. The way I have treated the topic will often diverge from the more professional treatment. On the other hand, it will probably also be closer to the way of thinking which is more common among many readers, because I also had to start from scratch.

The topic itself, Random Variables, is so big that I have felt it necessary to divide it into three books, of which this is the second one. We shall here continue the study of frequencies and distribution functions in 1 and 2 dimensions, and consider the correlation coefficient. We consider in particular the Poisson distribution.

The prerequisites for the topics can e.g. be found in the Ventus: Calculus 2 series, so I shall refer the reader to these books, concerning e.g. plane integrals.

Unfortunately errors cannot be avoided in a first edition of a work of this type. However, the author has tried to keep them to a minimum, hoping that the reader will meet with sympathy the errors which do occur in the text.

Leif Mejlbro, 26th October 2009


The abstract (and precise) definition of a random variable X is that X is a real function on Ω, where the triple (Ω, F, P) is a probability field, such that

{ω ∈ Ω | X(ω) ≤ x} ∈ F for every x ∈ R.

This definition leads to the concept of a distribution function for the random variable X, which is the function F : R → R defined by

F(x) = P{X ≤ x} (= P{ω ∈ Ω | X(ω) ≤ x}),

where the latter expression is the mathematically precise definition which, however, for obvious reasons will everywhere in the following be replaced by the former expression.

A distribution function F for a random variable X has the following properties:

• 0 ≤ F(x) ≤ 1 for every x ∈ R.

• The function F is weakly increasing, i.e. F(x) ≤ F(y) for x ≤ y.

• lim_{x→−∞} F(x) = 0 and lim_{x→+∞} F(x) = 1.

• The function F is continuous from the right, i.e. lim_{h→0+} F(x + h) = F(x) for every x ∈ R.

One may in some cases be interested in giving a crude description of the behaviour of the distribution function. We define a median of a random variable X with the distribution function F(x) as a real number a ∈ R, for which

P{X ≤ a} ≥ 1/2  and  P{X ≥ a} ≥ 1/2.

Expressed by means of the distribution function it follows that a ∈ R is a median, if

F(a) ≥ 1/2  and  lim_{x→a−} F(x) ≤ 1/2.
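For a discrete distribution both defining inequalities are easy to check directly. The following minimal Python sketch (with an arbitrarily chosen illustrative distribution) scans the possible values and reports which of them are medians.

```python
# Values and probabilities of an illustrative discrete distribution.
values = [0, 1, 2, 3]
probs = [0.2, 0.25, 0.25, 0.3]

def prob(event):
    """P{X in event} for a predicate on the values."""
    return sum(p for v, p in zip(values, probs) if event(v))

def is_median(a):
    # a is a median iff P{X <= a} >= 1/2 and P{X >= a} >= 1/2.
    return prob(lambda v: v <= a) >= 0.5 and prob(lambda v: v >= a) >= 0.5

print([v for v in values if is_median(v)])  # -> [2]: F(1) = 0.45 < 1/2 <= F(2) = 0.7
```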

If the random variable X only has a finite or a countable number of values, x₁, x₂, …, we call it discrete, and we say that X has a discrete distribution.

A very special case occurs when X only has one value. In this case we say that X is causally distributed, or that X is constant.


The random variable X is called continuous, if its distribution function F(x) can be written as an integral of the form

F(x) = ∫_{−∞}^{x} f(u) du,  x ∈ R,

where f is a nonnegative integrable function. In this case we also say that X has a continuous distribution, and the integrand f : R → R is called a frequency of the random variable X.

Let again (Ω, F, P) be a given probability field. Let us consider two random variables X and Y, which are both defined on Ω. We may consider the pair (X, Y) as a 2-dimensional random variable, which implies that we then shall make precise the extensions of the previous concepts for a single random variable.

We say that the simultaneous distribution, or just the distribution, of (X, Y) is known, if we know P{(X, Y) ∈ A} for every Borel set A ⊆ R².

When the simultaneous distribution of (X, Y) is known, we define the marginal distributions of X and Y by

P_X(B) = P{X ∈ B} := P{(X, Y) ∈ B × R}, where B ⊆ R is a Borel set,

P_Y(B) = P{Y ∈ B} := P{(X, Y) ∈ R × B}, where B ⊆ R is a Borel set.

Notice that we can always find the marginal distributions from the simultaneous distribution, while it is far from always possible to find the simultaneous distribution from the marginal distributions. We now introduce the simultaneous distribution function

F(x, y) = P{X ≤ x ∧ Y ≤ y}, (x, y) ∈ R²,

which has the following properties:


• If x ∈ R is kept fixed, then F(x, y) is a weakly increasing function in y, which is continuous from the right and which satisfies the condition lim_{y→−∞} F(x, y) = 0.

• If y ∈ R is kept fixed, then F(x, y) is a weakly increasing function in x, which is continuous from the right and which satisfies the condition lim_{x→−∞} F(x, y) = 0.

• When both x and y tend towards infinity, then lim_{x,y→+∞} F(x, y) = 1.

• If x₁, x₂, y₁, y₂ ∈ R satisfy x₁ ≤ x₂ and y₁ ≤ y₂, then

F(x₂, y₂) − F(x₁, y₂) − F(x₂, y₁) + F(x₁, y₁) ≥ 0.

Given the simultaneous distribution function F(x, y) of (X, Y) we can find the distribution functions of X and Y by the formulæ

F_X(x) = F(x, +∞) = lim_{y→+∞} F(x, y), for x ∈ R,

F_Y(y) = F(+∞, y) = lim_{x→+∞} F(x, y), for y ∈ R.

The 2-dimensional random variable (X, Y) is called discrete, or we say that it has a discrete distribution, if both X and Y are discrete.

The 2-dimensional random variable (X, Y) is called continuous, or we say that it has a continuous distribution, if there exists a nonnegative integrable function (a frequency) f : R² → R, such that the distribution function F(x, y) can be written in the form

F(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f(u, v) dv du, (x, y) ∈ R².

It should now be obvious why one should know something about the theory of integration in more variables, cf. e.g. the Ventus: Calculus 2 series.

We note that if f(x, y) is a frequency of the continuous 2-dimensional random variable (X, Y), then X and Y are both continuous 1-dimensional random variables, and we get their (marginal) frequencies by

f_X(x) = ∫_{−∞}^{+∞} f(x, y) dy  and  f_Y(y) = ∫_{−∞}^{+∞} f(x, y) dx.


It was mentioned above that one far from always can find the simultaneous distribution function from the marginal distribution functions. It is, however, possible in the case when the two random variables X and Y are independent.

Let the two random variables X and Y be defined on the same probability field (Ω, F, P). We say that X and Y are independent, if for all pairs of Borel sets A, B ⊆ R,

P{X ∈ A ∧ Y ∈ B} = P{X ∈ A} · P{Y ∈ B},

which can also be put in the simpler form

F(x, y) = F_X(x) · F_Y(y) for every (x, y) ∈ R².

If X and Y are not independent, then we of course say that they are dependent.

In two special cases we can obtain more information about independent random variables:

If the 2-dimensional random variable (X, Y) is discrete, then X and Y are independent, if

h_{ij} = f_i · g_j for every i and j.

Here, f_i denotes the probabilities of X, g_j the probabilities of Y, and h_{ij} the probabilities of (X, Y).

If the 2-dimensional random variable (X, Y) is continuous, then X and Y are independent, if their frequencies satisfy

f(x, y) = f_X(x) · f_Y(y) almost everywhere.

The concept “almost everywhere” is rarely given a precise definition in books on applied mathematics. Roughly speaking it means that the relation above holds outside a set in R² of area zero, a so-called null set. The common examples of null sets are either finite or countable sets. There exist, however, also non-countable null sets. Simple examples are graphs of any (piecewise) C¹-curve.

Concerning maps of random variables we have the following very important results.

Theorem 1.1 Let X and Y be independent random variables. Let ϕ : R → R and ψ : R → R be given functions. Then ϕ(X) and ψ(Y) are again independent random variables.

If X is a continuous random variable of frequency f, then we have the following important theorem, where it should be pointed out that one always shall check all assumptions in order to be able to conclude that the result holds:


Theorem 1.2 Given a continuous random variable X of frequency f.

1) Let I be an open interval, such that P{X ∈ I} = 1.

2) Let τ : I → J be a bijective map of I onto an open interval J.

3) Furthermore, assume that τ is differentiable with a continuous derivative τ′, where τ′(x) ≠ 0 for every x ∈ I.

Then Y := τ(X) is a continuous random variable, and its frequency g is given by

g(y) = f(τ⁻¹(y)) · |(τ⁻¹)′(y)| for y ∈ J, and g(y) = 0 otherwise.

We note that if just one of the assumptions above is not fulfilled, then we shall instead find the distribution function G(y) of Y := τ(X) by the general formula

G(y) = P{τ(X) ∈ ]−∞, y]} = P(X ∈ τ°⁻¹(]−∞, y])),

where τ°⁻¹ = τ⁻¹ denotes the inverse set map.

Note also that if the assumptions of the theorem are all satisfied, then τ is necessarily monotone.
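As a quick numerical illustration of Theorem 1.2, consider the illustrative choice X uniform on I = ]0, 1[ and τ(x) = −ln x, which maps I bijectively onto J = ]0, +∞[; the theorem then predicts the frequency g(y) = e^{−y} of Y = τ(X). A minimal simulation sketch:

```python
import math
import random

random.seed(1)

# Illustrative choice: X uniform on ]0,1[ and tau(x) = -ln x, a bijective
# C^1 map onto ]0,+inf[ with tau'(x) = -1/x != 0.  Theorem 1.2 gives
#   g(y) = f(tau^{-1}(y)) * |(tau^{-1})'(y)| = 1 * e^{-y},  y > 0.
n = 200_000
ys = [-math.log(1.0 - random.random()) for _ in range(n)]  # 1 - random() avoids log(0)

for a in [0.0, 0.5, 1.0, 2.0]:
    b = a + 0.5
    emp = sum(a < y <= b for y in ys) / n
    exact = math.exp(-a) - math.exp(-b)   # integral of e^{-y} over ]a, b]
    print(f"P({a} < Y <= {b}): empirical {emp:.4f}, predicted {exact:.4f}")
```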

At a first glance it may seem strange that we at this early stage introduce 2-dimensional random variables. The reason is that by applying the simultaneous distribution for (X, Y) it is fairly easy to define the elementary operations of calculus between X and Y. Thus we have the following general result for a continuous 2-dimensional random variable.

Theorem 1.3 Let (X, Y) be a continuous random variable of frequency h(x, y).

The frequency of the sum X + Y is k₁(z) = ∫_{−∞}^{+∞} h(x, z − x) dx.

Notice that one must be very careful when computing the product and the quotient, because the corresponding integrals are improper.

If we furthermore assume that X and Y are independent, and f(x) is a frequency of X, and g(y) is a frequency of Y, then we get an even better result:

Theorem 1.4 Let X and Y be continuous and independent random variables with the frequencies f(x) and g(y), resp.

The frequency of the sum X + Y is the convolution k₁(z) = ∫_{−∞}^{+∞} f(x) g(z − x) dx.

Let X and Y be independent random variables with the distribution functions F_X and F_Y, resp. We introduce two random variables by

U := max{X, Y} and V := min{X, Y},

the distribution functions of which are denoted by F_U and F_V, resp. Then these are given by

F_U(u) = F_X(u) · F_Y(u) for u ∈ R,

and

F_V(v) = 1 − (1 − F_X(v)) · (1 − F_Y(v)) for v ∈ R.


If X and Y are continuous and independent, then the frequencies of U and V are given by

f_U(u) = F_X(u) · f_Y(u) + f_X(u) · F_Y(u), for u ∈ R,

and

f_V(v) = (1 − F_X(v)) · f_Y(v) + f_X(v) · (1 − F_Y(v)), for v ∈ R,

where we note that we shall apply both the frequencies and the distribution functions of X and Y.
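These formulæ are easy to verify by simulation. A minimal sketch, assuming two independent Exp(1)-distributed variables purely for illustration, compares an empirical density of U = max{X, Y} with f_U(u) = 2(1 − e^{−u})e^{−u}:

```python
import math
import random

random.seed(2)

# Illustrative choice: X, Y independent and Exp(1), so F(t) = 1 - e^{-t}
# and f(t) = e^{-t}.  The formula above predicts, for U = max{X, Y},
#   f_U(u) = F(u) f(u) + f(u) F(u) = 2 (1 - e^{-u}) e^{-u}.
n = 200_000
us = [max(random.expovariate(1.0), random.expovariate(1.0)) for _ in range(n)]

w = 0.25                                  # histogram cell width
for a in [0.25, 0.75, 1.5, 3.0]:
    emp = sum(a < u <= a + w for u in us) / n / w
    mid = a + w / 2
    pred = 2.0 * (1.0 - math.exp(-mid)) * math.exp(-mid)
    print(f"u = {mid:.3f}: empirical density {emp:.3f}, formula {pred:.3f}")
```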

The results above can also be extended to bijective maps ϕ = (ϕ₁, ϕ₂) : R² → R², or subsets of R². We shall need the Jacobian of ϕ, introduced in e.g. the Ventus: Calculus 2 series.

It is important here to define the notation and the variables in the most convenient way. We start by assuming that D is an open domain in the (x₁, x₂) plane, and that D̃ is an open domain in the (y₁, y₂) plane. Then let ϕ = (ϕ₁, ϕ₂) be a bijective map of D̃ onto D with the inverse τ = ϕ⁻¹, i.e. the opposite of what one probably would expect. The Jacobian of ϕ is

∂(x₁, x₂)/∂(y₁, y₂) = det [ ∂x₁/∂y₁  ∂x₁/∂y₂ ; ∂x₂/∂y₁  ∂x₂/∂y₂ ],

where the independent variables (y₁, y₂) are in the “denominators”. Then recall the theorem of transformation of plane integrals, cf. e.g. the Ventus: Calculus 2 series: If h : D → R is an integrable function, where D ⊆ R² is given as above, then for every (measurable) subset A ⊆ D,

∫∫_A h(x₁, x₂) dx₁ dx₂ = ∫∫_{τ(A)} h(x₁, x₂) |∂(x₁, x₂)/∂(y₁, y₂)| dy₁ dy₂.

Of course, this formula is not mathematically correct; but it shows intuitively what is going on: roughly speaking we “delete the y-s”. The correct mathematical formula is of course the well-known

∫∫_A h(x₁, x₂) dx₁ dx₂ = ∫∫_{τ(A)} h(ϕ₁(y₁, y₂), ϕ₂(y₁, y₂)) |∂(x₁, x₂)/∂(y₁, y₂)| dy₁ dy₂,

although experience shows that in practice it is more confusing than helping the reader.


Theorem 1.5 Let (X₁, X₂) be a continuous 2-dimensional random variable with the frequency h(x₁, x₂). Let D ⊆ R² be an open domain, such that

P{(X₁, X₂) ∈ D} = 1.

Let τ : D → D̃ be a bijective map of D onto another open domain D̃, and let ϕ = (ϕ₁, ϕ₂) = τ⁻¹, where we assume that ϕ₁ and ϕ₂ have continuous partial derivatives and that the corresponding Jacobian is different from 0 in all of D̃.

Then the 2-dimensional random variable (Y₁, Y₂) := τ(X₁, X₂) is continuous with the frequency

k(y₁, y₂) = h(ϕ₁(y₁, y₂), ϕ₂(y₁, y₂)) · |∂(x₁, x₂)/∂(y₁, y₂)| for (y₁, y₂) ∈ D̃, and k(y₁, y₂) = 0 otherwise.

We have previously introduced the concept of conditional probability. We shall now introduce a similar concept, namely the conditional distribution.

If X and Y are discrete, we define the conditional distribution of X for given Y = y_j by

P{X = x_i | Y = y_j} = P{X = x_i ∧ Y = y_j} / P{Y = y_j}, provided that P{Y = y_j} > 0.

It follows that for fixed j we have that P{X = x_i | Y = y_j} indeed is a distribution. We note in particular that we have the law of total probability,

P{X = x_i} = Σ_j P{X = x_i | Y = y_j} · P{Y = y_j}.

Analogously we define for two continuous random variables X and Y the conditional distribution function of X for given Y = y by

P{X ≤ x | Y = y} = ( ∫_{−∞}^{x} f(u, y) du ) / f_Y(y), provided that f_Y(y) > 0.

Note that the conditional distribution function is not defined at points in which f_Y(y) = 0.

The corresponding frequency is

f(x | y) = f(x, y) / f_Y(y), provided that f_Y(y) > 0.

We shall use the convention that “0 times undefined = 0”. Then we get the law of total probability,

f_X(x) = ∫_{−∞}^{+∞} f(x | y) f_Y(y) dy.
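A crude numerical check of this law of total probability, for one concrete joint frequency chosen purely as an illustration, may look as follows.

```python
# Illustrative joint frequency on the unit square: f(x, y) = x + y, which
# integrates to 1; then f_Y(y) = 1/2 + y and f(x | y) = (x + y) / (1/2 + y).
# The law of total probability predicts f_X(x) = x + 1/2.

def f(x, y):
    return x + y if 0 < x < 1 and 0 < y < 1 else 0.0

def f_Y(y):
    return 0.5 + y if 0 < y < 1 else 0.0

def f_cond(x, y):
    return f(x, y) / f_Y(y)          # only used where f_Y(y) > 0

m = 10_000                            # Riemann sum resolution in y
for x in [0.2, 0.5, 0.8]:
    s = sum(f_cond(x, (k + 0.5) / m) * f_Y((k + 0.5) / m) for k in range(m)) / m
    print(f"x = {x}: integral {s:.4f}, f_X(x) = {x + 0.5:.4f}")
```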


1) Let X be a discrete random variable with the possible values {x_i} and the corresponding probabilities p_i = P{X = x_i}. The mean, or expectation, of X is defined by

E{X} := Σ_i x_i p_i,

provided that the series is absolutely convergent. If this is not the case, the mean does not exist.

2) Let X be a continuous random variable with the frequency f(x). We define the mean, or expectation, of X by

E{X} := ∫_{−∞}^{+∞} x f(x) dx,

provided that the integral is absolutely convergent. If this is not the case, the mean does not exist.

If the random variable X only has nonnegative values, i.e. the image of X is contained in [0, +∞[, and the mean exists, then the mean is given by

E{X} = ∫_{0}^{+∞} P{X ≥ x} dx.

Concerning maps of random variables, means are transformed according to the theorem below, provided that the given expressions are absolutely convergent.

Theorem 1.6 Let the random variable Y = ϕ(X) be a function of X.

1) If X is a discrete random variable with the possible values {x_i} of corresponding probabilities p_i = P{X = x_i}, then the mean of Y = ϕ(X) is given by

E{ϕ(X)} = Σ_i ϕ(x_i) p_i,

provided that the series is absolutely convergent.

2) If X is a continuous random variable with the frequency f(x), then the mean of Y = ϕ(X) is given by

E{ϕ(X)} = ∫_{−∞}^{+∞} ϕ(x) f(x) dx,

provided that the integral is absolutely convergent.

Assume that X is a random variable of mean µ. We add the following concepts, where k ∈ N:

• the k-th moment, E{Xᵏ},

• the k-th absolute moment, E{|X|ᵏ},

• the k-th central moment, E{(X − µ)ᵏ},

• the k-th absolute central moment, E{|X − µ|ᵏ},

• the variance, i.e. the second central moment, V{X} = E{(X − µ)²},


provided that the defining series or integrals are absolutely convergent. In particular, the variance is very important. We mention

Theorem 1.7 Let X be a random variable of mean E{X} = µ and variance V{X}. Then

E{(X − c)²} = V{X} + (µ − c)² for every c ∈ R,

V{X} = E{X²} − (E{X})² (the case c = 0),

E{aX + b} = a E{X} + b for every a, b ∈ R,

V{aX + b} = a² V{X} for every a, b ∈ R.

It is not always an easy task to compute the distribution function of a random variable. We have the following result which gives an estimate of the probability that a random variable X differs more than some given a > 0 from the mean E{X}.

Theorem 1.8 (Čebyšev’s inequality) If the random variable X has the mean µ and the variance σ², then we have for every a > 0,

P{|X − µ| ≥ a} ≤ σ²/a².
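Čebyšev’s inequality is easily illustrated by simulation; the uniform distribution in the following sketch is an arbitrary test case.

```python
import random

random.seed(3)

# Illustrative case: X uniform on [0, 1], so mu = 1/2 and sigma^2 = 1/12.
# Cebysev: P{|X - mu| >= a} <= sigma^2 / a^2 (a crude but valid bound).
n = 200_000
xs = [random.random() for _ in range(n)]
mu, var = 0.5, 1.0 / 12.0

for a in [0.2, 0.3, 0.4]:
    p = sum(abs(x - mu) >= a for x in xs) / n
    print(f"a = {a}: P = {p:.4f}, bound = {var / a**2:.4f}")
```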


These concepts are then generalized to 2-dimensional random variables. Thus,

Theorem 1.9 Let Z = ϕ(X, Y) be a function of the 2-dimensional random variable (X, Y).

1) If (X, Y) is discrete, then the mean of Z = ϕ(X, Y) is given by

E{ϕ(X, Y)} = Σ_{i,j} ϕ(x_i, y_j) · P{X = x_i ∧ Y = y_j},

provided that the series is absolutely convergent.

2) If (X, Y) is continuous, then the mean of Z = ϕ(X, Y) is given by

E{ϕ(X, Y)} = ∫∫_{R²} ϕ(x, y) f(x, y) dx dy,

provided that the integral is absolutely convergent.

It is easily proved that if (X, Y) is a 2-dimensional random variable, and ϕ(x, y) = ϕ₁(x) + ϕ₂(y), then

E{ϕ₁(X) + ϕ₂(Y)} = E{ϕ₁(X)} + E{ϕ₂(Y)},

provided that E{ϕ₁(X)} and E{ϕ₂(Y)} exist. In particular,

E{X + Y} = E{X} + E{Y}.

If we furthermore assume that X and Y are independent and choose ϕ(x, y) = ϕ₁(x) · ϕ₂(y), then also

E{ϕ₁(X) · ϕ₂(Y)} = E{ϕ₁(X)} · E{ϕ₂(Y)},

and in particular

E{(X − E{X}) · (Y − E{Y})} = 0.

These formulæ are easily generalized to n random variables. We have e.g.

E{X₁ + ⋯ + Xₙ} = E{X₁} + ⋯ + E{Xₙ},

provided that all means E{X_i} exist.

If two random variables X and Y are not independent, we shall find a measure of how much they “depend” on each other. This measure is described by the correlation, which we now introduce.

Consider a 2-dimensional random variable (X, Y), where

E{X} = µ_X, E{Y} = µ_Y, V{X} = σ_X² > 0, V{Y} = σ_Y² > 0

all exist. We define the covariance between X and Y, denoted by Cov(X, Y), as

Cov(X, Y) = E{(X − µ_X)(Y − µ_Y)},

and the correlation coefficient of X and Y as

ρ(X, Y) = Cov(X, Y) / (σ_X · σ_Y).

Then

Cov(X, Y) = 0, if X and Y are independent,

Cov(X, Y) = E{X · Y} − E{X} · E{Y},

|Cov(X, Y)| ≤ σ_X · σ_Y,

Cov(X, Y) = Cov(Y, X),

V{X + Y} = V{X} + V{Y} + 2 Cov(X, Y),

V{X + Y} = V{X} + V{Y}, if X and Y are independent,

ρ(X, Y) = 0, if X and Y are independent,

ρ(X, X) = 1, ρ(X, −X) = −1, |ρ(X, Y)| ≤ 1.

Let Z be another random variable, for which the mean and the variance both exist. Then

Cov(aX + bY, Z) = a Cov(X, Z) + b Cov(Y, Z), for every a, b ∈ R,

and if U = aX + b and V = cY + d, where a > 0 and c > 0, then

ρ(U, V) = ρ(aX + b, cY + d) = ρ(X, Y).

Two independent random variables are always non-correlated, while two non-correlated random variables are not necessarily independent.
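The last remark deserves a small numerical illustration: for X standard normal and Y = X² (an arbitrary example), Cov(X, Y) = E{X³} = 0, so X and Y are non-correlated, although Y is a function of X and hence highly dependent on X.

```python
import random

random.seed(4)

# X standard normal, Y = X^2: Cov(X, Y) = E{X^3} = 0, yet Y is a function
# of X -- non-correlated but certainly not independent.
n = 200_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [x * x for x in xs]

mx = sum(xs) / n
my = sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
print(f"empirical Cov(X, X^2) = {cov:.4f}")          # close to 0

# Dependence shows up immediately in probabilities:
p_joint = sum(x > 1 and y > 1 for x, y in zip(xs, ys)) / n
p_prod = (sum(x > 1 for x in xs) / n) * (sum(y > 1 for y in ys) / n)
print(f"P(X>1, Y>1) = {p_joint:.4f}  vs  P(X>1)P(Y>1) = {p_prod:.4f}")
```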

By the obvious generalization,

V{X₁ + ⋯ + Xₙ} = Σ_i V{X_i} + 2 Σ_{i<j} Cov(X_i, X_j).

Finally we mention the various types of convergence which are natural in connection with sequences of random variables. We consider a sequence Xₙ of random variables, defined on the same probability field (Ω, F, P).


1) We say that Xₙ converges in probability towards a random variable X on the probability field (Ω, F, P), if for every fixed ε > 0,

P{|Xₙ − X| ≥ ε} → 0 for n → +∞.

2) We say that Xₙ converges in probability towards a constant c, if for every fixed ε > 0,

P{|Xₙ − c| ≥ ε} → 0 for n → +∞.

3) If each Xₙ has the distribution function Fₙ, and X has the distribution function F, we say that the sequence Xₙ of random variables converges in distribution towards X, if at every point of continuity x of F(x),

lim_{n→+∞} Fₙ(x) = F(x).

Finally, we mention the following theorems which are connected with these concepts of convergence. The first one resembles Čebyšev’s inequality.

Theorem 1.11 (The weak law of large numbers) Let Xₙ be a sequence of independent random variables, all defined on (Ω, F, P), and assume that they all have the same mean and variance,

E{X_i} = µ and V{X_i} = σ².

Then for every fixed ε > 0,

P{ |(X₁ + ⋯ + Xₙ)/n − µ| ≥ ε } → 0 for n → +∞.

A slightly different version of the weak law of large numbers is the following.

Theorem 1.12 If Xₙ is a sequence of independent, identically distributed random variables, defined on (Ω, F, P), where E{X_i} = µ (notice that we do not assume the existence of the variance), then for every fixed ε > 0,

P{ |(X₁ + ⋯ + Xₙ)/n − µ| ≥ ε } → 0 for n → +∞.
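A short simulation makes the weak law concrete; the Exp(1)-distributed variables (µ = 1) and the sample sizes in the sketch are arbitrary illustrative choices.

```python
import random

random.seed(5)

# Illustrative case: X_i Exp(1)-distributed, mu = 1.  For fixed eps the
# probability P{|(X_1+...+X_n)/n - mu| >= eps} should tend to 0 as n grows.
eps = 0.05
trials = 500
for n in [10, 100, 1000]:
    bad = 0
    for _ in range(trials):
        mean = sum(random.expovariate(1.0) for _ in range(n)) / n
        bad += abs(mean - 1.0) >= eps
    print(f"n = {n:4d}: relative frequency of deviations {bad / trials:.3f}")
```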

Concerning convergence in distribution we have

Theorem 1.13 (Helly–Bray’s lemma) Assume that the sequence Xₙ of random variables converges in distribution towards the random variable X, and assume that there are real constants a and b, such that P{a ≤ Xₙ ≤ b} = 1 for every n. Then E{Xₙ} → E{X} for n → +∞.


Finally, the following theorem gives us the relationship between the two concepts of convergence:

Theorem 1.14 1) If Xₙ converges in probability towards X, then Xₙ also converges in distribution towards X.

2) If Xₙ converges in distribution towards a constant c, then Xₙ also converges in probability towards c.


Example 2.1 Given a countable number of boxes U₁, U₂, …, Uₙ, …. Let box number n contain n slips of paper with the numbers 1, 2, …, n. We choose at random with probability pₙ the box Uₙ, and from this box we choose randomly one of the slips of paper. Let X denote the random variable which indicates the number of the chosen box, and let Y denote the random variable which gives the number on the chosen slip of paper.

1) Find the distribution of the random variable Y.

2) Prove that the mean E{Y} exists if and only if the mean E{X} exists. When both these means exist, one shall express E{Y} by means of E{X}.

3) Assume that pₙ = p qⁿ⁻¹, where p > 0, q > 0 and p + q = 1. Find E{Y}.

If on the other hand E{X} exists, then we can reverse all computations above and conclude that E{Y} exists. In fact, every term is ≥ 0, so the summations can be interchanged, which gives

E{Y} = (1/2)(1 + E{X}).
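Under the geometric assumption pₙ = p qⁿ⁻¹ of (3), the relation above can be checked by simulation (p = 0.3 and the sample size are arbitrary choices):

```python
import random

random.seed(6)

# p_n = p q^{n-1}: the box number X is geometrically distributed with
# E{X} = 1/p, and (2) predicts E{Y} = (1 + E{X}) / 2.
p = 0.3
n = 200_000
xs, ys = [], []
for _ in range(n):
    x = 1
    while random.random() > p:   # repeat Bernoulli(p) until first success
        x += 1
    xs.append(x)
    ys.append(random.randint(1, x))   # slip of paper, uniform on 1..X

print(f"E(X) = {sum(xs)/n:.3f}  (exact 1/p = {1/p:.3f})")
print(f"E(Y) = {sum(ys)/n:.3f}  (formula (1 + 1/p)/2 = {(1 + 1/p)/2:.3f})")
```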


3) If pₙ = p qⁿ⁻¹, it follows from (1), since E{X} = 1/p for this geometric distribution, that

E{Y} = (1/2)(1 + 1/p).

Example 2.2 Throw an honest die once and let the random variable N denote the number shown by the die.

Then flip a coin N times, where N is the random variable above, and let X denote the number of heads in these throws.

1) Find P{X = 0 ∧ N = i} for i = 1, 2, 3, 4, 5, 6.

2) Find P{X = 0}.

3) Find the mean E{X}.

1) If N = i, then X = 0 means that we get tails i times, thus

P{X = 0 ∧ N = i} = (1/6)(1/2)ⁱ, i = 1, 2, 3, 4, 5, 6.

2) By the law of total probability,

P{X = 0} = Σ_{i=1}^{6} P{X = 0 ∧ N = i} = (1/6) Σ_{i=1}^{6} (1/2)ⁱ = (1/6)(1 − (1/2)⁶) = 21/128.

3) More generally, for 0 ≤ j ≤ i,

P{X = j ∧ N = i} = (1/6) · C(i, j) · (1/2)ʲ (1/2)ⁱ⁻ʲ = (1/6) · C(i, j) · (1/2)ⁱ,

where C(i, j) denotes the binomial coefficient. Using Σ_{j=0}^{i} j · C(i, j) = i · 2ⁱ⁻¹ we get

E{X} = Σ_{i=1}^{6} Σ_{j=0}^{i} j · (1/6) C(i, j) (1/2)ⁱ = (1/6) Σ_{i=1}^{6} i · 2ⁱ⁻¹ (1/2)ⁱ = (1/6) Σ_{i=1}^{6} (i/2) = (1/12) · 21 = 7/4.
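Both results are easy to confirm by a direct simulation of the compound experiment:

```python
import random

random.seed(7)

# Throw a fair die (N), then flip a fair coin N times (X = number of heads).
# Checks: P{X = 0} = 21/128 = 0.1641 and E{X} = 7/4.
n = 200_000
zero = 0
total_heads = 0
for _ in range(n):
    N = random.randint(1, 6)
    heads = sum(random.random() < 0.5 for _ in range(N))
    zero += (heads == 0)
    total_heads += heads

print(f"P(X = 0) = {zero / n:.4f}   (exact 21/128 = {21/128:.4f})")
print(f"E(X)     = {total_heads / n:.4f}   (exact 7/4 = 1.7500)")
```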


Example 2.3 A box contains N balls with the numbers 1, 2, …, N. Choose at random a ball from the box and note its number X, without returning it to the box. Then select another ball and note its number Y.

1) Find the distribution of the 2-dimensional random variable (X, Y).

2) Find the distribution of the random variable Z = |X − Y|.

1) Clearly, P{X = i ∧ Y = j} = 1/(N(N − 1)) for i ≠ j, and 0 for i = j.

2) For k = 1, …, N − 1 there are 2(N − k) ordered pairs (i, j) with |i − j| = k, hence

P{Z = k} = 2(N − k)/(N(N − 1)),

and as a check,

Σ_{k=1}^{N−1} 2(N − k)/(N(N − 1)) = (2/(N(N − 1))) · ((N − 1)N/2) = 1.
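The distribution of Z as reconstructed above can be checked by simulation; N = 10 is an arbitrary choice.

```python
import random

random.seed(8)

# Draw two different numbers from 1..N without replacement and check
# P{Z = k} = 2(N - k) / (N(N - 1)) for Z = |X - Y|.
N = 10
n = 200_000
counts = [0] * N
for _ in range(n):
    x, y = random.sample(range(1, N + 1), 2)
    counts[abs(x - y)] += 1

for k in [1, 5, 9]:
    exact = 2 * (N - k) / (N * (N - 1))
    print(f"k = {k}: empirical {counts[k] / n:.4f}, exact {exact:.4f}")
```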


3 Correlation coefficient and skewness

Example 3.1 A random variable X has its distribution given by

P{X = k} = 1/100, k = 1, 2, …, 100.

Put Y = 1 when X is even or divisible by 3 (and Y = 0 otherwise), and put Z = 1 when X is divisible by 3 (and Z = 0 otherwise). We want the correlation coefficient ρ(Y, Z), where

Cov(Y, Z) = E{Y Z} − E{Y} E{Z}, σ₁² = V{Y} and σ₂² = V{Z}.

The distributions of Y and Z are found by simply counting,

P{Y = 1} = P{X even} + P{X odd and X divisible by 3} = 50/100 + 17/100 = 67/100,

P{Z = 1} = P{X divisible by 3} = 33/100,

hence

σ₁² = V{Y} = E{Y²} − (E{Y})² = 67/100 − (67/100)² = (67/100)(33/100),

σ₂² = V{Z} = E{Z²} − (E{Z})² = 33/100 − (33/100)² = (33/100)(67/100).

Since Z = 1 implies Y = 1, we get E{Y Z} = P{Z = 1} = 33/100, so

Cov(Y, Z) = 33/100 − (67/100)(33/100) = (33/100)(33/100),

and therefore

ρ(Y, Z) = Cov(Y, Z)/(σ₁σ₂) = 33/67.

Example 3.2 Let X denote a random variable, for which E{X} = µ, V{X} = σ² and E{X³} all exist.

1) Prove the formula

E{(X − µ)³} = E{X³} − 3µσ² − µ³,

and define the skewness of the distribution by γ(X) = E{(X − µ)³}/σ³.

2) Find the skewness γ(X) of this distribution.

3) Find the values of p, for which γ(X) = 0.

4) Find γ(X) for p = 1/8.

1) The claim is proved in the continuous case; the proof in the discrete case is analogous. A straightforward computation gives

E{(X − µ)³} = E{X³} − 3µ E{X²} + 3µ² E{X} − µ³ = E{X³} − 3µ(σ² + µ²) + 3µ³ − µ³ = E{X³} − 3µσ² − µ³.


This implies that γ(X) = 0 precisely for p = 1/4 and p = 1/2. Finally, if p = 1/8, then

γ(X) = −128 · (1/8)(1/8 − 1/2)(1/8 − 1/4) / (7/4)^{3/2} = −(3/4)(4/7)^{3/2} = −(6/49)√7 ≈ −0.324.


Example 3.3 Let Xₙ be a random variable with the frequency

fₙ(x) = (aⁿ/(n − 1)!) xⁿ⁻¹ e^{−ax} for x > 0, and fₙ(x) = 0 otherwise,

where a is a positive constant. Compute the skewness γ(Xₙ), and show that γ(Xₙ) → 0 for n → ∞.

According to Example 3.2 the skewness γ(Xₙ) is defined by

γ(Xₙ) = E{(Xₙ − µₙ)³} / σₙ³.

We first compute the moments:

E{Xₙ} = (aⁿ/(n − 1)!) ∫₀^∞ xⁿ e^{−ax} dx = n!/(a (n − 1)!) = n/a,

E{Xₙ²} = (aⁿ/(n − 1)!) ∫₀^∞ xⁿ⁺¹ e^{−ax} dx = (n + 1)!/(a² (n − 1)!) = n(n + 1)/a²,

hence µₙ = n/a and σₙ² = n(n + 1)/a² − (n/a)² = n/a², and

E{Xₙ³} = (aⁿ/(n − 1)!) ∫₀^∞ xⁿ⁺² e^{−ax} dx = (n + 2)!/(a³ (n − 1)!) = n(n + 1)(n + 2)/a³,

whence, by the formula of Example 3.2,

E{(Xₙ − µₙ)³} = E{Xₙ³} − 3µₙσₙ² − µₙ³ = (n(n + 1)(n + 2) − 3n² − n³)/a³ = 2n/a³,

and therefore

γ(Xₙ) = (2n/a³) / (n^{3/2}/a³) = 2/√n → 0 for n → ∞.
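The limit γ(Xₙ) = 2/√n → 0 can also be observed numerically; in the following sketch, random.gammavariate(n, 1/a) samples exactly the frequency fₙ above (a = 2 and the shape values are arbitrary choices).

```python
import math
import random

random.seed(9)

# random.gammavariate(alpha, beta) has density
#   x^{alpha-1} e^{-x/beta} / (Gamma(alpha) beta^alpha),
# so alpha = n and beta = 1/a gives exactly f_n above.
a = 2.0
for n_shape in [1, 4, 16]:
    m = 100_000
    xs = [random.gammavariate(n_shape, 1.0 / a) for _ in range(m)]
    mu = sum(xs) / m
    var = sum((x - mu) ** 2 for x in xs) / m
    m3 = sum((x - mu) ** 3 for x in xs) / m
    skew = m3 / var ** 1.5
    print(f"n = {n_shape:2d}: sample skewness {skew:.3f}, 2/sqrt(n) = {2 / math.sqrt(n_shape):.3f}")
```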


Example 3.4 Assume that the 2-dimensional random variable (X, Y) has the frequency

h(x, y) = 2/A² for 0 < y < x < A, and h(x, y) = 0 otherwise,

where A > 0 is a constant.

1) Find the frequencies of X and Y.

2) Find the means of X and Y.

3) Find the variances of X and Y.

4) Compute the correlation coefficient ρ between X and Y, and prove that it does not depend on A.

1) The marginal frequencies are

f_X(x) = ∫₀^x (2/A²) dy = 2x/A² for 0 < x < A,  f_Y(y) = ∫_y^A (2/A²) dx = 2(A − y)/A² for 0 < y < A.

2) Hence

E{X} = ∫₀^A x · (2x/A²) dx = (2/3)A,

and

E{Y} = ∫₀^A y · (2(A − y)/A²) dy = (1/3)A.

3) From

E{X²} = ∫₀^A x² · (2x/A²) dx = A²/2  and  E{Y²} = ∫₀^A y² · (2(A − y)/A²) dy = A²/6

it follows that

V{X} = A²/2 − (2A/3)² = A²/18  and  V{Y} = A²/6 − (A/3)² = A²/18.

4) Furthermore,

E{XY} = (2/A²) ∫₀^A x { ∫₀^x y dy } dx = (1/A²) ∫₀^A x³ dx = A²/4,

hence

Cov(X, Y) = E{XY} − E{X}E{Y} = A²/4 − (2A/3)(A/3) = A²/36.


Finally,

ρ(X, Y) = Cov(X, Y)/√(V{X}V{Y}) = (A²/36)/(A²/18) = 1/2,

which is independent of A.
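Assuming the triangular frequency reconstructed above, a uniform point of the triangle is obtained from two independent uniform numbers on ]0, A[ by taking their maximum and minimum, which gives a quick check of ρ = 1/2 (A = 3 is an arbitrary choice):

```python
import random

random.seed(10)

# Uniform point on the triangle 0 < y < x < A: take two independent
# uniforms on ]0, A[ and set X = max, Y = min.
A = 3.0
n = 200_000
pts = []
for _ in range(n):
    u, v = random.uniform(0, A), random.uniform(0, A)
    pts.append((max(u, v), min(u, v)))

mx = sum(x for x, _ in pts) / n
my = sum(y for _, y in pts) / n
cov = sum((x - mx) * (y - my) for x, y in pts) / n
vx = sum((x - mx) ** 2 for x, _ in pts) / n
vy = sum((y - my) ** 2 for _, y in pts) / n
print(f"E(X) = {mx:.3f} (2A/3 = {2*A/3:.3f}), E(Y) = {my:.3f} (A/3 = {A/3:.3f})")
print(f"rho(X, Y) = {cov / (vx * vy) ** 0.5:.3f}   (exact 1/2)")
```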

Example 3.5 Consider a 2-dimensional random variable (X, Y), which in the parallelogram with the vertices (0, 0), (1, 1), (1, 2) and (0, 1), i.e. for 0 < x < 1 and x < y < x + 1, has the frequency

h(x, y) = (2/3)(x + y),

while the frequency is equal to 0 anywhere else in the (x, y) plane.

1) Find the frequencies of the random variables X and Y.

2) Find the means of each of the random variables X and Y.

3) Find the covariance Cov(X, Y).


3) We first compute

E{XY} = (2/3) ∫₀¹ { ∫ₓ^{x+1} xy(x + y) dy } dx = (2/3) ∫₀¹ { ∫ₓ^{x+1} (x²y + xy²) dy } dx
      = (2/3) ∫₀¹ (2x³ + (3/2)x² + (1/3)x) dx = (2/3)(1/2 + 1/2 + 1/6) = 7/9.

From (1) and (2) we have E{X} = 11/18 and E{Y} = 7/6, hence

Cov(X, Y) = E{XY} − E{X}E{Y} = 7/9 − (11/18)(7/6) = 7/108.

Example 3.6 A 2-dimensional random variable (X, Y) has in the first quadrant x, y > 0 the frequency

h(x, y) = a/(1 + x + y)⁵,

while the frequency is equal to 0 anywhere else in the (x, y) plane.

1) Find the constant a.

2) Find the distribution function and the frequency of the random variable Z = X + Y.

3) Find the mean E{Z} and the variance V{Z}.

1) When we integrate over the first quadrant we obtain

1 = ∫₀^∞ ∫₀^∞ h(x, y) dx dy = a ∫₀^∞ { ∫₀^∞ (1 + x + y)⁻⁵ dx } dy = (a/4) ∫₀^∞ (1 + y)⁻⁴ dy = a/12,

from which we conclude that a = 12. Hence the frequency is

h(x, y) = 12/(1 + x + y)⁵ for x, y > 0, and h(x, y) = 0 otherwise.


2) For z > 0 the frequency of Z = X + Y is

f_Z(z) = ∫₀^z h(x, z − x) dx = ∫₀^z 12/(1 + z)⁵ dx = 12z/(1 + z)⁵,

and hence the distribution function is

F_Z(z) = 1 − 4(1 + z)⁻³ + 3(1 + z)⁻⁴ for z > 0,

while f_Z(z) = 0 and F_Z(z) = 0 for z ≤ 0.


3) The mean is

E{Z} = ∫₀^∞ z · 12z/(z + 1)⁵ dz = 12 ∫₀^∞ z²/(z + 1)⁵ dz = 12 · (1/12) = 1,

where z² = (z + 1)² − 2(z + 1) + 1 gives

∫₀^∞ z²/(z + 1)⁵ dz = ∫₀^∞ [(z + 1)⁻³ − 2(z + 1)⁻⁴ + (z + 1)⁻⁵] dz = 1/2 − 2/3 + 1/4 = 1/12.

Analogously, using z³ = (z³ + 3z² + 3z + 1) − (3z² + 6z + 3) + (3 + 3z) − 1,

E{Z²} = 12 ∫₀^∞ z³/(z + 1)⁵ dz = 12 · (1 − 3/2 + 1 − 1/4) = 3,

so

V{Z} = E{Z²} − (E{Z})² = 3 − 1 = 2.

Example 3.7 Consider a 2-dimensional random variable (X, Y), which for x, y > 0 has the frequency

h(x, y) = (1/2) x³ e^{−x(y+1)},

and the frequency 0 otherwise; we want the marginal frequencies, the means and variances, and the correlation coefficient ρ(X, Y).

The marginal frequency of Y is

f_Y(y) = ∫₀^∞ (1/2) x³ e^{−x(y+1)} dx = (1/2) · (1/(y + 1)⁴) ∫₀^∞ t³ e^{−t} dt = 3/(y + 1)⁴, y > 0,

hence, by summing up in the other variable, the marginal frequency of X is

f_X(x) = ∫₀^∞ (1/2) x³ e^{−x(y+1)} dy = (1/2) x³ e^{−x} · (1/x) = (1/2) x² e^{−x}, x > 0.


Hence

E{X} = (1/2) ∫₀^∞ x³ e^{−x} dx = 3!/2 = 3,

and

E{X²} = (1/2) ∫₀^∞ x⁴ e^{−x} dx = 4!/2 = 12,

hence

V{X} = E{X²} − (E{X})² = 12 − 3² = 3.

Analogously we obtain

E{Y} = 3 ∫₀^∞ y/(y + 1)⁴ dy = 3 ∫₀^∞ ((y + 1) − 1)/(y + 1)⁴ dy = 3 (1/2 − 1/3) = 1/2,

and

E{Y²} = 3 ∫₀^∞ y²/(y + 1)⁴ dy = 3 ∫₀^∞ [(y + 1)⁻² − 2(y + 1)⁻³ + (y + 1)⁻⁴] dy = 3 (1 − 1 + 1/3) = 1,

hence

V{Y} = E{Y²} − (E{Y})² = 1 − 1/4 = 3/4.

Finally,

E{XY} = ∫₀^∞ ∫₀^∞ xy · (1/2) x³ e^{−x(y+1)} dy dx = ∫₀^∞ (1/2) x⁴ e^{−x} { ∫₀^∞ y e^{−xy} dy } dx
      = ∫₀^∞ (1/2) x⁴ e^{−x} · (1/x²) dx = (1/2) ∫₀^∞ x² e^{−x} dx = 1,

hence

Cov(X, Y) = E{XY} − E{X}E{Y} = 1 − 3 · (1/2) = −1/2,

and

ρ(X, Y) = Cov(X, Y)/√(V{X} · V{Y}) = (−1/2)/√(3 · 3/4) = −1/3.
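The factorization h(x, y) = f_X(x) · f(y | x) with f(y | x) = x e^{−xy} suggests a hierarchical sampler, which can be used for a numerical check of ρ(X, Y) = −1/3:

```python
import random

random.seed(11)

# h(x, y) = (1/2) x^3 e^{-x(y+1)} factors as
#   f_X(x) = (1/2) x^2 e^{-x}  (Gamma with shape 3),
#   f(y|x) = x e^{-xy}         (exponential with rate x),
# so (X, Y) can be sampled hierarchically.
n = 200_000
xs, ys = [], []
for _ in range(n):
    x = random.gammavariate(3.0, 1.0)
    xs.append(x)
    ys.append(random.expovariate(x))

mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
vx = sum((x - mx) ** 2 for x in xs) / n
vy = sum((y - my) ** 2 for y in ys) / n
print(f"rho(X, Y) = {cov / (vx * vy) ** 0.5:.3f}   (exact -1/3)")
```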


Example 3.8 Let X₁ and X₂ be independent, identically distributed random variables of the frequency

f(x) = (1/√(2π)) e^{−x²/2}, x ∈ R.

1) Let f_Y(y) be the frequency of Y = X₁²/X₂². Prove that

f_Y(y) = 1/(π (y + 1) √y), y > 0,

and f_Y(y) = 0 otherwise.

2) Check if the mean E{Y} exists.

1) The frequency of X₁² (and of X₂²) is (1/√(2πy)) e^{−y/2} for y > 0, hence

f_Y(y) = ∫₀^∞ (1/√(2πyv)) e^{−yv/2} · (1/√(2πv)) e^{−v/2} · v dv = (1/(2π√y)) ∫₀^∞ e^{−(y+1)v/2} dv = 1/(π (y + 1) √y), y > 0.

2) Since f_Y(y) > 0 is equivalent to y > 0, the integrand y f_Y(y) is ≥ 0, hence the check of the existence is reduced to checking the convergence for A → ∞ of

∫₀^A y f_Y(y) dy = (1/π) ∫₀^A (y/(y + 1)) · (1/√y) dy = (1/π) ∫₀^A ((y + 1) − 1)/(y + 1) · (1/√y) dy
               = (1/π) ∫₀^A (1/√y) dy − (1/π) ∫₀^A dy/((y + 1)√y) = (2/π)√A − (1/π) ∫₀^A dy/((y + 1)√y).

The first term tends to +∞ and the second integral is convergent, so the mean E{Y} does not exist.


Example 3.9 A 2-dimensional random variable (X, Y) has for x, y > 0 the frequency

h(x, y) = (1/2)(x + y) e^{−(x+y)},

and the frequency 0 otherwise.

1) Find the frequencies of X and Y.

2) Find the frequency of Z = X + Y.

3) Find the mean and the variance of the random variable Z.

4) Find the correlation coefficient ρ(X, Y).


1) The marginal frequency of X (and, by symmetry, of Y) is

f_X(x) = ∫₀^∞ (1/2)(x + y) e^{−(x+y)} dy = (1/2)(x + 1) e^{−x}, x > 0.

2) For z > 0,

f_Z(z) = ∫₀^z h(x, z − x) dx = ∫₀^z (1/2) z e^{−z} dx = (1/2) z² e^{−z}.

3) Hence

E{Z} = (1/2) ∫₀^∞ z³ e^{−z} dz = 3!/2 = 3,  E{Z²} = (1/2) ∫₀^∞ z⁴ e^{−z} dz = 4!/2 = 12,

so V{Z} = 12 − 9 = 3. For the marginals,

E{X} = E{Y} = (1/2) ∫₀^∞ (t² + t) e^{−t} dt = (1/2)(2! + 1!) = 3/2,

and

E{X²} = E{Y²} = (1/2) ∫₀^∞ (t³ + t²) e^{−t} dt = (1/2)(3! + 2!) = 4,

hence V{X} = V{Y} = 4 − 9/4 = 7/4.

4) First,

E{XY} = (1/2) ∫₀^∞ ∫₀^∞ xy(x + y) e^{−(x+y)} dx dy
      = (1/2) ∫₀^∞ { y e^{−y} ∫₀^∞ x² e^{−x} dx + y² e^{−y} ∫₀^∞ x e^{−x} dx } dy
      = (1/2) ∫₀^∞ (2! · y e^{−y} + 1! · y² e^{−y}) dy = (1/2)(2 · 1! + 1 · 2!) = 2,

hence

Cov(X, Y) = E{XY} − E{X}E{Y} = 2 − (3/2)² = −1/4.


The correlation coefficient is therefore

ρ(X, Y) = Cov(X, Y)/√(V{X}V{Y}) = (−1/4)/(7/4) = −1/7.

Alternatively, it follows from

V{Z} = V{X} + V{Y} + 2 Cov(X, Y)

that 3 = 7/4 + 7/4 + 2 Cov(X, Y), i.e. Cov(X, Y) = −1/4, and hence again

ρ(X, Y) = Cov(X, Y)/√(V{X}V{Y}) = (−1/4)/(7/4) = −1/7.
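Since (1/2)(x + y)e^{−(x+y)} = (1/2)(x e^{−x})e^{−y} + (1/2)e^{−x}(y e^{−y}), the pair (X, Y) is an equal mixture of Gamma(2) ⊗ Exp(1) and Exp(1) ⊗ Gamma(2). This observation gives a direct sampler and a numerical check of ρ = −1/7:

```python
import random

random.seed(12)

# Equal mixture: with probability 1/2 take (Gamma(2), Exp(1)), otherwise
# (Exp(1), Gamma(2)); this reproduces h(x, y) = (1/2)(x + y) e^{-(x+y)}.
n = 200_000
xs, ys = [], []
for _ in range(n):
    if random.random() < 0.5:
        x, y = random.gammavariate(2.0, 1.0), random.expovariate(1.0)
    else:
        x, y = random.expovariate(1.0), random.gammavariate(2.0, 1.0)
    xs.append(x)
    ys.append(y)

mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
vx = sum((x - mx) ** 2 for x in xs) / n
vy = sum((y - my) ** 2 for y in ys) / n
print(f"Cov = {cov:.3f} (exact -1/4), rho = {cov / (vx * vy) ** 0.5:.3f} (exact -1/7)")
```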

Example 3.10 A compound experiment can be described by first choosing at random a real number X in the interval ]0, 1[, and then at random choosing a real number Y in the interval ]X, 1[. The frequency of the 2-dimensional random variable (X, Y) is denoted by h(x, y).

1) Prove that h(x, y) is 0 outside the triangle in the (x, y) plane with the vertices (0, 0), (0, 1) and (1, 1), and that inside this triangle,

h(x, y) = 1/(1 − x).

2) Find the frequencies f(x) and g(y) of the random variables X and Y.

3) Find the means and variances of the random variables X and Y.
