Handbook of Mathematics for Engineers and Scientists, Part 155



20.2.2-6 Characteristic functions. Semi-invariants.

1◦. The characteristic function of a random variable X is the expectation of the random variable $e^{itX}$, i.e.,
$$
f(t) = E\{e^{itX}\} = \int_{-\infty}^{+\infty} e^{itx}\, dF(x) =
\begin{cases}
\displaystyle\sum_j e^{itx_j} p_j & \text{in the discrete case},\\[2mm]
\displaystyle\int_{-\infty}^{+\infty} e^{itx} p(x)\, dx & \text{in the continuous case},
\end{cases}
\qquad (20.2.2.11)
$$
where t is a real variable ranging from $-\infty$ to $+\infty$ and i is the imaginary unit, $i^2 = -1$.

Properties of characteristic functions:

1. The cumulative distribution function is uniquely determined by the characteristic function.

2. The characteristic function is uniformly continuous on the entire real line.

3. $|f(t)| \le f(0) = 1$.

4. $f(-t) = \overline{f(t)}$.

5. $f(t)$ is a real function if and only if the random variable X is symmetric.

6. The characteristic function of the sum of two independent random variables is equal to the product of their characteristic functions.

7. If a random variable X has a kth absolute moment, then the characteristic function of X is k times differentiable and the relation $f^{(m)}(0) = i^m E\{X^m\}$ holds for $m \le k$.

8. If $x_1$ and $x_2$ are points of continuity of the cumulative distribution function F(x), then
$$
F(x_2) - F(x_1) = \frac{1}{2\pi} \lim_{T\to\infty} \int_{-T}^{T} \frac{e^{-itx_1} - e^{-itx_2}}{it}\, f(t)\, dt. \qquad (20.2.2.12)
$$

9. If $\int_{-\infty}^{+\infty} |f(t)|\, dt < \infty$, then the cumulative distribution function F(x) has a probability density function p(x), which is given by the formula
$$
p(x) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{-itx} f(t)\, dt. \qquad (20.2.2.13)
$$

If the probability distribution has a kth moment $\alpha_k$, then there exist semi-invariants (cumulants) $\tau_1, \dots, \tau_k$ determined by the relation
$$
\ln f(t) = \sum_{l=1}^{k} \tau_l\, \frac{(it)^l}{l!} + o(t^k).
$$
The semi-invariants $\tau_1, \dots, \tau_k$ can be calculated by the formulas
$$
\tau_l = i^{-l} \left.\frac{\partial^l \ln f(t)}{\partial t^l}\right|_{t=0}.
$$
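Property 6 is easy to check numerically directly from the defining sum in (20.2.2.11). The following Python sketch uses two small discrete random variables whose supports and probabilities are arbitrary illustrative assumptions (the helper name char_fun is likewise hypothetical, not part of the handbook), and compares the characteristic function of their independent sum with the product of the individual characteristic functions.

```python
import numpy as np

# Minimal numerical sketch of (20.2.2.11) and property 6 for two small discrete
# random variables X and Y; supports and probabilities are arbitrary choices.
x_vals, x_prob = np.array([0, 1, 2]), np.array([0.5, 0.3, 0.2])
y_vals, y_prob = np.array([0, 1]), np.array([0.4, 0.6])

def char_fun(values, probs, t):
    """f(t) = E{e^{itX}} = sum_j e^{i t x_j} p_j for a discrete random variable."""
    return np.sum(np.exp(1j * t * values) * probs)

# Distribution of the independent sum Z = X + Y (convolution of the two pmfs).
z_pmf = {}
for xv, xp in zip(x_vals, x_prob):
    for yv, yp in zip(y_vals, y_prob):
        z_pmf[xv + yv] = z_pmf.get(xv + yv, 0.0) + xp * yp
z_vals = np.array(sorted(z_pmf))
z_prob = np.array([z_pmf[v] for v in z_vals])

for t in (0.3, 1.0, 2.5):
    lhs = char_fun(z_vals, z_prob, t)                       # f_{X+Y}(t)
    rhs = char_fun(x_vals, x_prob, t) * char_fun(y_vals, y_prob, t)
    print(t, np.isclose(lhs, rhs))                          # True: property 6 holds
```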


20.2.2-7 Generating functions.

The generating function of a numerical sequence $a_0, a_1, \dots$ is defined as the power series
$$
\varphi(z) = \sum_{n=0}^{\infty} a_n z^n,
$$
where z is either a formal variable or a complex or real number. If X is a random variable whose absolute moments of any order are finite, then the series
$$
\sum_{n=0}^{\infty} E\{X^n\}\, z^n
$$
is called the moment-generating function of the random variable X.

If X is a nonnegative random variable taking integer values, then the formula
$$
\varphi_X(z) = E\{z^X\} = \sum_{n=0}^{\infty} P(X = n)\, z^n \qquad (20.2.2.17)
$$
defines the probability-generating function, or simply the generating function, of the random variable X. The generating function of a random variable X is related to its characteristic function f(t) by the formula
$$
f(t) = \varphi_X(e^{it}). \qquad (20.2.2.18)
$$
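Relation (20.2.2.18) can be verified numerically for any small integer-valued distribution. The sketch below is a minimal illustration; the pmf values and the helper names pgf and cf are hypothetical assumptions, not taken from the handbook. It evaluates the probability-generating function at $z = e^{it}$ and compares the result with the characteristic function computed directly from the probabilities.

```python
import numpy as np

# Sketch check of relation (20.2.2.18), f(t) = phi_X(e^{it}), for a small
# nonnegative integer-valued random variable with an arbitrary illustrative pmf.
values = np.arange(4)                      # X takes the values 0, 1, 2, 3
pmf = np.array([0.1, 0.4, 0.3, 0.2])       # P(X = n)

def pgf(z):
    """phi_X(z) = E{z^X} = sum_n P(X = n) z^n, cf. (20.2.2.17)."""
    return np.sum(pmf * z**values)

def cf(t):
    """f(t) = E{e^{itX}} computed directly from the pmf."""
    return np.sum(pmf * np.exp(1j * t * values))

for t in (0.0, 0.7, 2.0):
    print(np.isclose(pgf(np.exp(1j * t)), cf(t)))   # True each time
```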

20.2.3 Main Discrete Distributions

20.2.3-1 Binomial distribution

A random variable X has the binomial distribution with parameters (n, p) (see Fig. 20.2) if
$$
P(X = k) = C_n^k\, p^k (1-p)^{n-k}, \qquad k = 0, 1, \dots, n, \qquad (20.2.3.1)
$$
where $0 < p < 1$, $n \ge 1$.

Figure 20.2. Binomial distribution for p = 0.55, n = 6.


The cumulative distribution function, the probability-generating function, and the characteristic function have the form
$$
F(x) = \sum_{k=0}^{m} C_n^k\, p^k (1-p)^{n-k} \quad \text{for } m \le x < m+1 \ (m = 1, 2, \dots, n-1),
$$
$$
\varphi_X(z) = (1 - p + pz)^n, \qquad f(t) = (1 - p + p e^{it})^n, \qquad (20.2.3.2)
$$

and the numerical characteristics are given by the formulas
$$
E\{X\} = np, \qquad \operatorname{Var}\{X\} = np(1-p), \qquad
\gamma_1 = \frac{1 - 2p}{\sqrt{np(1-p)}}, \qquad
\gamma_2 = \frac{1 - 6p(1-p)}{np(1-p)}.
$$
The binomial distribution is a model of random experiments consisting of n independent identical Bernoulli trials. If $X_1, \dots, X_n$ are independent random variables, each of which can take only the two values 1 or 0 with probabilities p and q = 1 - p, respectively, then the random variable $X = \sum_{k=1}^{n} X_k$ has the binomial distribution with parameters (n, p).
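As a quick illustration of this construction, the following Monte Carlo sketch sums independent Bernoulli indicators and compares the empirical frequencies with the probabilities (20.2.3.1); the sample size, n, and p are arbitrary illustrative values.

```python
import numpy as np
from math import comb

# Monte Carlo sketch: the sum of n independent Bernoulli(p) variables follows
# the binomial pmf (20.2.3.1). Parameters and sample size are arbitrary choices.
rng = np.random.default_rng(0)
n, p, trials = 6, 0.55, 200_000

x = (rng.random((trials, n)) < p).sum(axis=1)        # X = X_1 + ... + X_n
for k in range(n + 1):
    empirical = np.mean(x == k)
    exact = comb(n, k) * p**k * (1 - p)**(n - k)     # C_n^k p^k (1-p)^(n-k)
    print(f"k={k}: empirical {empirical:.4f}  exact {exact:.4f}")
```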

The binomial distribution is asymptotically normal with parameters (np, np(1 – p)) as

n → ∞ (the de Moivre–Laplace limit theorem, which is a special case of the central limit

theorem, see Paragraph 20.3.2-2); specifically,

$$
P(X = k) = C_n^k\, p^k (1-p)^{n-k} \approx \frac{1}{\sqrt{np(1-p)}}\,\varphi\!\left(\frac{k - np}{\sqrt{np(1-p)}}\right)
\quad \text{as } \frac{(k - np)^3}{[np(1-p)]^2} \to 0,
$$
$$
P(k_1 \le X \le k_2) \approx \Phi\!\left(\frac{k_2 - np}{\sqrt{np(1-p)}}\right) - \Phi\!\left(\frac{k_1 - np}{\sqrt{np(1-p)}}\right)
\quad \text{as } \frac{(k_{1,2} - np)^3}{[np(1-p)]^2} \to 0,
$$
where $\varphi(x)$ and $\Phi(x)$ are the probability density function and the cumulative distribution function of the standard normal distribution (see Paragraph 20.2.4-3).
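The quality of the normal approximation can be inspected numerically. The sketch below compares exact binomial probabilities with the approximations above; n, p, and the chosen k, k1, k2 are arbitrary illustrative values, and phi and Phi are hypothetical helper names for the standard normal density and distribution function.

```python
from math import comb, erf, exp, pi, sqrt

# Sketch comparison of exact binomial probabilities with the de Moivre-Laplace
# normal approximation; n, p, k, k1, k2 are arbitrary illustrative values.
n, p = 100, 0.55
mu, sigma = n * p, sqrt(n * p * (1 - p))

def phi(x):
    """Standard normal probability density function."""
    return exp(-x * x / 2) / sqrt(2 * pi)

def Phi(x):
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

k = 55
exact_pmf = comb(n, k) * p**k * (1 - p)**(n - k)
approx_pmf = phi((k - mu) / sigma) / sigma
print(exact_pmf, approx_pmf)          # close for k near np

k1, k2 = 50, 60
exact_prob = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k1, k2 + 1))
approx_prob = Phi((k2 - mu) / sigma) - Phi((k1 - mu) / sigma)
print(exact_prob, approx_prob)
```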

20.2.3-2 Geometric distribution

A random variable X has a geometric distribution with parameter p (0 < p < 1) (see Fig. 20.3) if
$$
P(X = k) = p(1-p)^k, \qquad k = 0, 1, 2, \dots \qquad (20.2.3.3)
$$

Figure 20.3. Geometric distribution for p = 0.55.


The probability-generating function and the characteristic function have the form

$$
\varphi_X(z) = p\,[1 - (1-p)z]^{-1}, \qquad f(t) = p\,[1 - (1-p)e^{it}]^{-1},
$$
and the numerical characteristics can be calculated by the formulas
$$
E\{X\} = \frac{1-p}{p}, \quad \alpha_2 = \frac{(1-p)(2-p)}{p^2}, \quad
\operatorname{Var}\{X\} = \frac{1-p}{p^2}, \quad
\gamma_1 = \frac{2-p}{\sqrt{1-p}}, \quad
\gamma_2 = 6 + \frac{p^2}{1-p}.
$$
The geometric distribution describes a random variable X equal to the number of failures before the first success in a sequence of Bernoulli trials with probability p of success in each trial.

The geometric distribution is the only discrete distribution that is memoryless, i.e., satisfies the relation
$$
P(X > s + t \mid X > t) = P(X > s) \quad \text{for all } s, t > 0.
$$
This property permits one to view the geometric distribution as the discrete analog of the exponential distribution.
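A Monte Carlo sketch of this interpretation: counting failures before the first success reproduces the probabilities (20.2.3.3). NumPy's geometric sampler counts the trial on which the first success occurs, so one is subtracted to obtain the number of failures; p and the sample size are arbitrary illustrative values.

```python
import numpy as np

# Monte Carlo sketch: the number of failures before the first success in
# repeated Bernoulli(p) trials follows the geometric pmf p(1-p)^k of (20.2.3.3).
rng = np.random.default_rng(1)
p, samples = 0.55, 200_000

failures = rng.geometric(p, size=samples) - 1   # numpy counts trials; shift to failures
for k in range(5):
    empirical = np.mean(failures == k)
    exact = p * (1 - p) ** k
    print(f"k={k}: empirical {empirical:.4f}  exact {exact:.4f}")
```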

20.2.3-3 Hypergeometric distribution

A random variable X has the hypergeometric distribution with parameters (N, p, n) (see Fig. 20.4) if
$$
P(X = k) = \frac{C_{Np}^{k}\, C_{N(1-p)}^{n-k}}{C_N^n}, \qquad k = 0, 1, \dots, n, \qquad (20.2.3.4)
$$
where $0 < p < 1$, $0 \le n \le N$, $N > 0$.

Figure 20.4. Hypergeometric distribution for p = 0.5, N = 10, n = 4.

The numerical characteristics are given by the formulas
$$
E\{X\} = np, \qquad \operatorname{Var}\{X\} = \frac{N - n}{N - 1}\, np(1-p).
$$
A typical scheme in which the hypergeometric distribution arises is as follows: n elements are randomly drawn without replacement from a population of N elements containing exactly Np elements of type I and N(1 - p) elements of type II. The number of elements of type I in the sample is described by the hypergeometric distribution.

If $n \ll N$ (in practice, n < 0.1N), then
$$
\frac{C_{Np}^{k}\, C_{N(1-p)}^{n-k}}{C_N^n} \approx C_n^k\, p^k (1-p)^{n-k};
$$
i.e., the hypergeometric distribution tends to the binomial distribution.
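This binomial approximation is easy to tabulate. In the sketch below, N, p, and n (with n < 0.1N) are arbitrary illustrative values; the exact hypergeometric probabilities are compared with $C_n^k p^k (1-p)^{n-k}$.

```python
from math import comb

# Sketch of the binomial approximation to the hypergeometric distribution
# when the sample is small relative to the population (n << N).
N, p, n = 1000, 0.5, 10
Np = int(N * p)                      # number of type I elements in the population

for k in range(n + 1):
    hyper = comb(Np, k) * comb(N - Np, n - k) / comb(N, n)
    binom = comb(n, k) * p**k * (1 - p)**(n - k)
    print(f"k={k}: hypergeometric {hyper:.4f}  binomial {binom:.4f}")
```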


Figure 20.5. Poisson distribution for λ = 2.

20.2.3-4 Poisson distribution

A random variable X has the Poisson distribution with parameter λ (λ > 0) (see Fig. 20.5) if
$$
P(X = k) = \frac{\lambda^k}{k!}\, e^{-\lambda}, \qquad k = 0, 1, 2, \dots \qquad (20.2.3.5)
$$

The cumulative distribution function of the Poisson distribution at the points k = 0, 1, 2, ... is given by the formula
$$
F(k) = \frac{1}{k!}\int_{\lambda}^{\infty} y^{k} e^{-y}\, dy = 1 - S_{k+1}(\lambda),
$$
where $S_{k+1}(\lambda)$ is the value at the point λ of the cumulative distribution function of the gamma distribution with parameter k + 1. In particular, $P(X = k) = S_k(\lambda) - S_{k+1}(\lambda)$. The sum of independent random variables $X_1, \dots, X_n$ obeying the Poisson distributions with parameters $\lambda_1, \dots, \lambda_n$, respectively, has the Poisson distribution with parameter $\lambda_1 + \cdots + \lambda_n$.

The probability-generating function and the characteristic function have the form
$$
\varphi_X(z) = e^{\lambda(z-1)}, \qquad f(t) = e^{\lambda(e^{it}-1)},
$$
and the numerical characteristics are given by the expressions
$$
E\{X\} = \lambda, \quad \operatorname{Var}\{X\} = \lambda, \quad \alpha_2 = \lambda^2 + \lambda, \quad \alpha_3 = \lambda(\lambda^2 + 3\lambda + 1),
$$
$$
\alpha_4 = \lambda(\lambda^3 + 6\lambda^2 + 7\lambda + 1), \quad \mu_3 = \lambda, \quad \mu_4 = 3\lambda^2 + \lambda, \quad \gamma_1 = \lambda^{-1/2}, \quad \gamma_2 = \lambda^{-1}.
$$
The Poisson distribution is the limit distribution for many discrete distributions, such as the hypergeometric distribution, the binomial distribution, the negative binomial distribution, distributions arising in problems of arrangement of particles in cells, etc. The Poisson distribution is an acceptable model for describing the random number of occurrences of certain events in a given time interval or in a given domain in space.
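As an illustration of the Poisson limit of the binomial distribution, the sketch below fixes λ = np and lets n grow large, then compares the two probability mass functions; n and λ are arbitrary illustrative values.

```python
from math import comb, exp, factorial

# Sketch of the Poisson limit of the binomial distribution: for large n and
# small p with np = lambda fixed, C_n^k p^k (1-p)^(n-k) approaches
# (lambda^k / k!) e^(-lambda).
lam, n = 2.0, 10_000
p = lam / n

for k in range(6):
    binom = comb(n, k) * p**k * (1 - p)**(n - k)
    poisson = lam**k / factorial(k) * exp(-lam)
    print(f"k={k}: binomial {binom:.5f}  Poisson {poisson:.5f}")
```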

20.2.3-5 Negative binomial distribution

A random variable X has the negative binomial distribution with parameters (r, p) (see Fig. 20.6) if
$$
P(X = k) = C_{r+k-1}^{r-1}\, p^r (1-p)^k, \qquad k = 0, 1, 2, \dots, \qquad (20.2.3.6)
$$
where $0 < p < 1$, $r > 0$.


Figure 20.6. Negative binomial distribution for p = 0.8, r = 6.

The probability-generating function and the characteristic function have the form
$$
\varphi_X(z) = \left[\frac{p}{1 - (1-p)z}\right]^r, \qquad
f(t) = \left[\frac{p}{1 - (1-p)e^{it}}\right]^r,
$$
and the numerical characteristics can be calculated by the formulas
$$
E\{X\} = \frac{r(1-p)}{p}, \quad \operatorname{Var}\{X\} = \frac{r(1-p)}{p^2}, \quad
\gamma_1 = \frac{2-p}{\sqrt{r(1-p)}}, \quad
\gamma_2 = \frac{6}{r} + \frac{p^2}{r(1-p)}.
$$
The negative binomial distribution describes the number X of failures before the rth success in a Bernoulli process with probability p of success on each trial. For r = 1, the negative binomial distribution coincides with the geometric distribution.
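A Monte Carlo sketch of this description follows; r, p, and the sample size are arbitrary illustrative values. NumPy's negative binomial sampler already counts failures before the rth success, so its empirical frequencies can be compared directly with (20.2.3.6).

```python
import numpy as np
from math import comb

# Monte Carlo sketch: counting failures before the r-th success in Bernoulli(p)
# trials reproduces the negative binomial pmf (20.2.3.6).
rng = np.random.default_rng(2)
r, p, samples = 3, 0.6, 200_000

failures = rng.negative_binomial(r, p, size=samples)   # failures before r-th success
for k in range(5):
    empirical = np.mean(failures == k)
    exact = comb(r + k - 1, r - 1) * p**r * (1 - p)**k
    print(f"k={k}: empirical {empirical:.4f}  exact {exact:.4f}")
```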

20.2.4 Continuous Distributions

20.2.4-1 Uniform distribution

A random variable X is uniformly distributed on the interval [a, b] (Fig. 20.7a) if
$$
p(x) = \frac{1}{b-a} \quad \text{for } x \in [a, b]. \qquad (20.2.4.1)
$$

Figure 20.7. Probability density (a) and cumulative distribution (b) functions of the uniform distribution.


The cumulative distribution function (see Fig. 20.7b) and the characteristic function have the form
$$
F(x) =
\begin{cases}
0 & \text{for } x \le a,\\[1mm]
\dfrac{x-a}{b-a} & \text{for } a < x \le b,\\[1mm]
1 & \text{for } x > b,
\end{cases}
\qquad
f(t) = \frac{1}{it(b-a)}\left(e^{itb} - e^{ita}\right),
\qquad (20.2.4.2)
$$
and the numerical characteristics are given by the expressions
$$
E\{X\} = \frac{a+b}{2}, \quad \operatorname{Var}\{X\} = \frac{(b-a)^2}{12}, \quad
\gamma_1 = 0, \quad \gamma_2 = -1.2, \quad \operatorname{Med}\{X\} = \frac{a+b}{2}.
$$
The uniform distribution does not have a mode.
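A short Monte Carlo check of the mean and variance formulas; a, b, and the sample size are arbitrary illustrative values.

```python
import numpy as np

# Monte Carlo sketch of the uniform-distribution moments on [a, b].
rng = np.random.default_rng(3)
a, b, samples = 2.0, 5.0, 1_000_000

x = rng.uniform(a, b, size=samples)
print(x.mean(), (a + b) / 2)            # E{X} = (a + b)/2
print(x.var(), (b - a) ** 2 / 12)       # Var{X} = (b - a)^2 / 12
```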

20.2.4-2 Exponential distribution

A random variable X has the exponential distribution with parameter λ > 0 (Fig. 20.8a) if
$$
p(x) = \lambda e^{-\lambda x}, \qquad x > 0. \qquad (20.2.4.3)
$$

Figure 20.8. Probability density (a) and cumulative distribution (b) functions of the exponential distribution for λ = 2.

The cumulative distribution function (see Fig. 20.8b) and the characteristic function have the form
$$
F(x) =
\begin{cases}
1 - e^{-\lambda x} & \text{for } x > 0,\\
0 & \text{for } x \le 0,
\end{cases}
\qquad
f(t) = \left(1 - \frac{it}{\lambda}\right)^{-1},
\qquad (20.2.4.4)
$$
and the numerical characteristics are given by the formulas
$$
E\{X\} = \frac{1}{\lambda}, \quad \alpha_2 = \frac{2}{\lambda^2}, \quad
\operatorname{Med}\{X\} = \frac{\ln 2}{\lambda}, \quad
\operatorname{Var}\{X\} = \frac{1}{\lambda^2}, \quad \gamma_1 = 2, \quad \gamma_2 = 6.
$$
The exponential distribution is the continuous analog of the geometric distribution and is memoryless:
$$
P(X > t + s \mid X > s) = P(X > t).
$$
The exponential distribution is closely related to Poisson processes: if a flow of events is described by a Poisson process, then the time intervals between successive events are independent random variables obeying the exponential distribution. The exponential distribution is used in queuing theory and in reliability theory.
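The memoryless property can be checked by simulation: conditional on X > s, the residual lifetime X − s is again exponential with the same parameter. In the sketch below, λ, s, t, and the sample size are arbitrary illustrative values.

```python
import numpy as np

# Monte Carlo sketch of the memoryless property of the exponential distribution:
# P(X > t + s | X > s) should equal P(X > t) = e^(-lambda t).
rng = np.random.default_rng(4)
lam, s, t, samples = 2.0, 0.5, 0.7, 1_000_000

x = rng.exponential(scale=1 / lam, size=samples)
excess = x[x > s] - s                          # residual lifetime beyond s
print(np.mean(excess > t), np.exp(-lam * t))   # conditional tail vs e^(-lambda t)
```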
