


A Novel Estimation Method Based on Maximum Likelihood


Md Mobarak Hossain

University of Nevada, Reno



A NOVEL ESTIMATION METHOD BASED ON

MAXIMUM LIKELIHOOD

A thesis submitted in partial fulfillment of the

requirements for the degree of Master of Science in


Md Mobarak Hossain

ALL RIGHTS RESERVED


We recommend that the thesis prepared under our supervision by

Md Mobarak Hossain

entitled

A NOVEL ESTIMATION METHOD BASED ON

MAXIMUM LIKELIHOOD

be accepted in partial fulfillment of the

requirements for the degree of

MASTER OF SCIENCE

Tomasz Kozubowski, Ph.D., Advisor

Anna Panorska, Ph.D., Committee Member

Minggen Lu, Ph.D., Graduate School Representative

David Zeh, Ph.D., Dean, Graduate School

December, 2014


on each observation. It turns out that this method can be related to a Bayesian approach, where the prior distribution is data driven. In the case of estimating a location parameter of a unimodal density, the prior distribution is the empirical distribution of the sample, and converges to the true distribution that generated the data as the sample size increases.


We provide several examples illustrating the new method, and conduct simulation studies to assess the performance of the estimators. It turns out that this straightforward methodology produces consistent estimators, which seem to be comparable with those obtained by the ML method in the large sample setting, and may actually outperform the latter when the sample size is small.


At first I would like to thank my honorable thesis advisor, Professor Tomasz J. Kozubowski, who sparked my great interest in the field of statistics. Professor Kozubowski not only helped me to complete the thesis but also encouraged me, supported me, and guided me with great patience, which helped me to complete my graduate study in the Department of Mathematics and Statistics at the University of Nevada, Reno (UNR). I am really grateful to him. I am also very grateful to the graduate school representative, Dr. Minggen Lu, for his great support, and to Krzysztof

goes to the former Graduate Director, Professor Anna K. Panorska, who not only gave me the best suggestions but also carefully helped me academically and spiritually to complete my graduate study at UNR. I am really grateful for her excellent support for the graduate students. I also would like to acknowledge my family's support and encouragement. Moreover, I am very much thankful to all of the faculty, graduate students, and staff of the Department of Mathematics and Statistics at UNR.


TABLE OF CONTENTS

Abstract i

Acknowledgments iii

List of Tables vii

List of Figures viii

Chapter 1 Introduction 1

2 Point Estimation 5

2.1 Maximum Likelihood Estimation 5

2.2 The Method of Moments 8

2.3 Bayesian Approach 11

3 Maximum Likelihood Estimation for Cauchy Distribution 15

4 A New Approach to Point Estimation Based on Maximum Likelihood 20

4.1 Description of the new method 20

4.2 Bayesian interpretation 22

4.3 The case of uniform distribution 24

5 Examples and Simulation Studies 27

5.1 Exponential distribution 27

5.2 Cauchy distribution 33

5.3 Continuous Pareto distribution 43

Appendices


A R code which we used in this thesis 58

for the exponential distribution (Table 5.1 was developed using

exponential distribution (One part of the Figure 5.1 was developed

exponential distribution (One part of the Figure 5.2 was developed

for exponential distribution (One part of the Figure 5.2 was

method for Cauchy distribution (One part of the Figure 5.2

distribution (Table 5.2 was partially developed using the

A.8 Finding the estimated values of sigma, MSE and developing a boxplot for the Cauchy distribution (Table 5.2 and Figure 5.5 were

Cauchy distribution (Figure 5.4 was partially developed using

A.10 Finding the estimated values of beta, MSE and drawing a boxplot of estimated values of beta for the continuous Pareto distribution (Table 5.4 and Figure 5.7 were partially developed

A.11 Drawing the histogram for the estimated values of beta for the continuous Pareto distribution (Figure 5.6 was partially

A.12 Drawing the Q-Q plot for the estimated values of beta for the continuous Pareto distribution (Figure 5.8 was partially


LIST OF TABLES

Table

5.3 Estimated parameters for Cauchy distribution when both parameters

5.5 Estimated parameters for continuous Pareto distribution when both


LIST OF FIGURES

Figure

5.1 Q-Q plots of the estimated values, $\hat\theta$ 30

5.2 Boxplots of the estimated values, $\hat\theta$ 31

5.3 Histogram of the estimated values of $\hat\theta$ 32

5.4 Histogram and Q-Q plot of the estimated values, $\hat\sigma$ and $\hat\theta$ 40

5.5 Boxplots of the estimated values, $\hat\sigma$ and $\hat\theta$ 41

5.6 Histogram of the estimated values, $\hat\alpha$ and $\hat\sigma$ 52

5.7 Boxplots of the estimated values, $\hat\alpha$ and $\hat\sigma$ 53

5.8 Histogram and Q-Q plot of the estimated values, $\hat\sigma$ and $\hat\theta$ 54


CHAPTER 1

Introduction

The method of maximum likelihood is perhaps the most widely used approach to estimate unknown parameter(s) θ of a probability distribution (see [4, 5]). According to this method, if $x_1, x_2, \dots, x_n$ is a random sample from a probability distribution given by the probability density function (PDF) $f(x \mid \theta)$, then the MLE of θ maximizes the function

$$L(\theta) = \prod_{i=1}^{n} f(x_i \mid \theta), \tag{1.1}$$

called the likelihood function. Depending on the nature of the distribution, the process of maximizing the function $L(\theta)$ in (1.1) may or may not be simple. Take for example the exponential distribution with parameter θ, whose PDF is given by
$$f(x \mid \theta) = \theta e^{-\theta x}, \qquad x > 0.$$


In this case the likelihood function becomes
$$L(\theta) = \prod_{i=1}^{n} \theta e^{-\theta x_i} = \theta^{n} e^{-\theta \sum_{i=1}^{n} x_i},$$
which is maximized by $\hat\theta = 1/\bar{x}$, where $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ is the sample mean (see Chapter 2 for computational details). However, there are many instances where the maximum likelihood approach leads to challenging computational issues (see [8, 9]). One such example is the Cauchy location model, given by the PDF

$$f(x \mid \theta) = \frac{1}{\pi\left(1 + (x - \theta)^2\right)}, \qquad x \in \mathbb{R}.$$
In this case, maximization of the likelihood function
$$L(\theta) = \prod_{i=1}^{n} \frac{1}{\pi\left(1 + (x_i - \theta)^2\right)} \tag{1.7}$$


does not admit an explicit solution unless n ≤ 4 (see [5]), and the MLE of θ requires a numerical search. This is indeed quite challenging, due to the multimodality of the function L in (1.7) (see [2, 5]).
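This multimodality is easy to observe numerically. The following sketch (an illustration of the phenomenon, not code from the thesis; the sample values and grid are chosen for effect) evaluates the unit-scale Cauchy location log-likelihood on a grid of θ values and counts its interior local maxima:

```python
import numpy as np

def cauchy_loglik(grid, x):
    """Log-likelihood of a unit-scale Cauchy location model on a grid of θ values."""
    # Shape (n, m): one density term per observation x_i and grid point θ.
    return -np.log(np.pi * (1.0 + (x[:, None] - grid[None, :]) ** 2)).sum(axis=0)

# Widely separated observations tend to produce several local maxima.
x = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])
grid = np.linspace(-15.0, 15.0, 6001)
ll = cauchy_loglik(grid, x)

# Count strict interior local maxima of the log-likelihood.
peaks = int(np.sum((ll[1:-1] > ll[:-2]) & (ll[1:-1] > ll[2:])))
print("local maxima found:", peaks)
```

For a sample like this one the search surface has several peaks, which is exactly what makes a naive hill-climbing search for the MLE unreliable.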

However, although the maximum likelihood approach is in general quite challenging in the Cauchy case, it is rather straightforward when the sample size is n = 1. Indeed, if we only have one observation, the likelihood function is simply the Cauchy

The novel estimation approach we propose in this work is to estimate the parameter separately from each single observation, and then to combine these estimates by means of a weighted average to obtain the final estimator of θ. This approach is quite general, and will produce estimators of the form
$$\hat\theta = \sum_{i=1}^{n} w_i \hat\theta_i,$$
where $\hat\theta_i$ is the MLE of θ based on the single observation $X_i$ and the $w_i$ are suitably chosen weights. In the case of the Cauchy distribution mentioned above, we will have $\hat\theta_i = X_i$, so that numerical maximization of the likelihood (1.7) based on the entire sample is avoided altogether.

As we shall see in the sequel, the method we propose produces results quite similar to, or better than, those of the method of maximum likelihood, while computationally it is much simpler.

Our thesis is organized as follows. In Chapter 2 we review basic approaches to point estimation, including the method of maximum likelihood. In Chapter 3 we present the case of the Cauchy distribution, and the computational challenges that come with maximum likelihood estimation in this case. Then, in Chapter 4, we present the new method, and apply it to several cases, including the Cauchy model. Finally, in Chapter 5 we present numerical evidence in favor of the new method.


CHAPTER 2

Point Estimation

Point estimation involves the use of sample data to calculate a single value (known as a statistic) which serves as a "best guess" or "best estimate" of an unknown population parameter. In this chapter we discuss standard approaches to point estimation: maximum likelihood, the method of moments, and the Bayesian approach.

2.1 Maximum Likelihood Estimation

The method of maximum likelihood is a classical, widely used method for estimating unknown parameter(s) of the underlying probability mass function (PMF) or probability density function (PDF), introduced by R. A. Fisher in 1912. This approach can be described as follows.

Suppose $X_1, \dots, X_n$ is a random sample from a distribution with the PMF or the PDF $f(x \mid \theta)$, where $\theta \in \Theta$ can be a single real-valued parameter or a vector-valued parameter. When the joint PMF or PDF $f_n(\mathbf{x} \mid \theta)$ is regarded as a function of θ for a specific vector $\mathbf{x}$, it is called the likelihood function. For every possible vector


$\mathbf{x}$, let $\delta(\mathbf{x}) \in \Theta$ denote that value of $\theta \in \Theta$ for which the likelihood function $f_n(\mathbf{x} \mid \theta)$ is maximized; the estimator $\delta(\mathbf{X})$ is called the maximum likelihood estimator (MLE) of θ.

If the components of X are independent and identically distributed (IID) variables following a PDF $f(x \mid \theta)$, then the likelihood function is given by
$$L(\theta) = f_n(\mathbf{x} \mid \theta) = f_n(x_1, x_2, x_3, \dots, x_n \mid \theta) = \prod_{i=1}^{n} f(x_i \mid \theta).$$


Here is an example where the MLE can be attained in a simple closed form. Let $X_1, \dots, X_n$ be a random sample from the exponential distribution with parameter θ, given by the PDF
$$f(x \mid \theta) = \theta e^{-\theta x}, \qquad x > 0. \tag{2.3}$$
Then the likelihood function is
$$L(\theta) = \prod_{i=1}^{n} f(x_i \mid \theta) = \prod_{i=1}^{n} \theta e^{-\theta x_i} = \theta^{n} e^{-\theta \sum_{i=1}^{n} x_i}. \tag{2.4}$$


The value(s) of θ at which the likelihood function attains its extrema are those at which the derivative of $\log L(\theta)$ vanishes. Since
$$\frac{d}{d\theta} \log L(\theta) = \frac{n}{\theta} - \sum_{i=1}^{n} x_i = 0$$
has the unique solution $\theta = 1/\bar{x}$, at which the likelihood function (2.4) attains its maximum value, the maximum is unique. In conclusion, the MLE of θ is given by
$$\hat\theta = \frac{1}{\bar{X}}.$$

2.2 The Method of Moments

The method of moments is a method of point estimation which usually yields consistent estimators. For any random variable X and any positive integer k, the expectation $E(X^k)$, when it exists, is called the $k$-th moment of X. In this terminology, the mean of X is the first moment of X.

Suppose $X_1, X_2, \dots, X_n$ is a random sample from a distribution given by the PMF or the PDF $f(x \mid \theta)$, where $\theta \in \Theta$ can be a single real-valued parameter or a vector-valued parameter.

Next, let
$$m_k = \frac{1}{n} \sum_{i=1}^{n} X_i^k$$
denote the $k$-th sample moment, and let $\mu_k(\theta)$ denote the corresponding theoretical moment. Suppose the distribution has $k$ unknown parameters. Then, we set up $k$ equations as follows:

$$\frac{1}{n} \sum_{i=1}^{n} X_i^{1} = \mu_1(\theta_1, \theta_2, \dots, \theta_k)$$
$$\frac{1}{n} \sum_{i=1}^{n} X_i^{2} = \mu_2(\theta_1, \theta_2, \dots, \theta_k)$$
$$\vdots$$
$$\frac{1}{n} \sum_{i=1}^{n} X_i^{k} = \mu_k(\theta_1, \theta_2, \dots, \theta_k) \tag{2.10}$$

The system of equations (2.10) consists of k equations with k unknown values of $\theta = (\theta_1, \theta_2, \dots, \theta_k)$. The method of moments estimator (MME) of θ is a solution of this system of equations (see [7]),
$$\hat\theta = \left(\hat\theta_1, \hat\theta_2, \dots, \hat\theta_k\right) = \left(\delta_1(X_1, \dots, X_n),\ \delta_2(X_1, \dots, X_n),\ \dots,\ \delta_k(X_1, \dots, X_n)\right). \tag{2.11}$$
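For a one-parameter model the system (2.10) reduces to a single equation. The sketch below (a Python illustration, assuming the rate parameterization $f(x \mid \theta) = \theta e^{-\theta x}$ used for the exponential examples; the true value and sample size are arbitrary) matches the first sample moment to $E(X) = 1/\theta$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0                                   # true rate parameter
x = rng.exponential(scale=1.0 / theta, size=100_000)

# One unknown parameter, so one moment equation: x̄ = μ1(θ) = E(X) = 1/θ.
theta_mme = 1.0 / x.mean()
print(theta_mme)  # close to the true rate θ = 2
```

For this model the MME happens to coincide with the MLE, which is not true in general.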

Suppose now that $X_1, \dots, X_n$ are IID random variables from the exponential distribution given by the PDF (2.3). We then have $\mu_1(\theta) = E(X) = 1/\theta$, so the moment equation
$$\frac{1}{n} \sum_{i=1}^{n} X_i = \frac{1}{\theta}$$
yields the MME $\hat\theta = 1/\bar{X}$, which in this case coincides with the MLE.

2.3 Bayesian Approach

Let $\xi(\theta \mid \vec{X})$ be the posterior density of θ given $\vec{X} = (X_1, X_2, X_3, \dots, X_n)$, and let $L(\theta, \delta)$ be a loss function. For each $\vec{X} = (X_1, X_2, X_3, \dots, X_n)$, let $\delta^{*}(\vec{X})$ be the value of δ for which the expected loss


is minimized. The Bayes estimator is given by
$$\delta^{*}(\vec{X}) = \arg\min_{\delta} E\left[L(\theta, \delta) \mid \vec{X}\right].$$
In the special case $L(\theta, \delta) = (\theta - \delta)^2$, $\delta^{*}(\vec{X})$ becomes the mean of the conditional distribution of $\theta \mid \vec{X}$,
$$\delta^{*}(\vec{X}) = E\left(\theta \mid \vec{X}\right).$$
The following example illustrates this procedure.

Let $X_1, \dots, X_n$ be a random sample from the exponential distribution given by the PDF (2.3). Let the prior distribution of θ be a gamma distribution with parameters α, β > 0, given by the PDF
$$\xi(\theta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\, \theta^{\alpha - 1} e^{-\beta\theta}, \qquad \theta > 0.$$


The joint density of $\vec{X} = (X_1, \dots, X_n)$ and θ then becomes
$$h(\vec{x}, \theta) = \xi(\theta) \prod_{i=1}^{n} f(x_i \mid \theta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\, \theta^{\,n+\alpha-1} e^{-\theta\left(\beta + \sum_{i=1}^{n} x_i\right)},$$
so that the conditional distribution of θ given $\vec{X} = \vec{x}$, that is the posterior distribution, is given by the PDF
$$\xi(\theta \mid \vec{x}) = \frac{\left(\beta + \sum_{i=1}^{n} x_i\right)^{n+\alpha}}{\Gamma(n+\alpha)}\, \theta^{\,n+\alpha-1} e^{-\theta\left(\beta + \sum_{i=1}^{n} x_i\right)}, \qquad \theta > 0,$$
which is a gamma distribution with parameters $n + \alpha$ and $\beta + \sum_{i=1}^{n} X_i$.


If we now take L to be the squared error loss $L(\delta, \theta) = (\delta - \theta)^2$, the Bayes estimator of θ becomes the mean of the posterior distribution,
$$\delta^{*}(\vec{X}) = \frac{n + \alpha}{\beta + \sum_{i=1}^{n} X_i}.$$
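This conjugate closed form can be checked numerically. In the sketch below (a Python illustration; the prior parameters, data seed, and integration grid are my own choices), the posterior mean $(n+\alpha)/(\beta+\sum x_i)$ is compared against direct numerical integration of the gamma posterior:

```python
import numpy as np

# Conjugacy: a Gamma(α, β) prior for the exponential rate θ gives the
# posterior Gamma(n + α, β + Σ x_i); under squared error loss the Bayes
# estimator is the posterior mean (n + α) / (β + Σ x_i).
rng = np.random.default_rng(1)
alpha, beta = 2.0, 1.0                        # example prior parameters
x = rng.exponential(scale=0.5, size=50)       # data from rate θ = 2
n, s = len(x), x.sum()

bayes_est = (n + alpha) / (beta + s)

# Cross-check by Riemann integration of the unnormalized posterior
# ξ(θ|x) ∝ θ^{n+α-1} e^{-θ(β+s)} on a fine grid.
t = np.linspace(1e-8, 10.0, 400_001)
post = t ** (n + alpha - 1) * np.exp(-(beta + s) * t)
numeric_mean = np.sum(t * post) / np.sum(post)
print(bayes_est, numeric_mean)  # the two agree
```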


CHAPTER 3

Maximum Likelihood Estimation for Cauchy Distribution

In this chapter we discuss the case of the Cauchy distribution, for which maximum likelihood estimators (MLE) are generally not available in closed form. Instead, computationally intensive numerical procedures are required to find them.


The log-likelihood function for the Cauchy distribution with location θ and scale σ is given by
$$\log L(\theta, \sigma) = \sum_{i=1}^{n} \log \frac{1}{\pi\sigma} - \sum_{i=1}^{n} \log\left(1 + \left(\frac{x_i - \theta}{\sigma}\right)^2\right). \tag{3.3}$$
Differentiating with respect to θ and σ and setting the derivatives equal to zero leads to the likelihood equations
$$\sum_{i=1}^{n} \frac{x_i - \theta}{\sigma^2 + (x_i - \theta)^2} = 0, \tag{3.5}$$
$$\sum_{i=1}^{n} \frac{(x_i - \theta)^2}{\sigma^2 + (x_i - \theta)^2} = \frac{n}{2}. \tag{3.6}$$

Solving the system of equations (3.5) and (3.6) for the variables θ and σ is not easy for a general sample size. When σ is known, the likelihood function for θ is occasionally multimodal (see [8]). When σ is fixed and the sample size


is n = 1, the likelihood function (3.3) becomes
$$L(\theta, \sigma) = \frac{\sigma}{\pi\left(\sigma^2 + (x_1 - \theta)^2\right)}. \tag{3.7}$$
In this case, the likelihood function (3.7) is maximized when θ = x₁, so the MLE of θ is $\hat\theta = x_1$. Further, viewed as a function of σ with θ fixed, the derivative is
$$\frac{dL(\sigma)}{d\sigma} = \frac{(\theta - x_1)^2 - \sigma^2}{\pi\left(\sigma^2 + (\theta - x_1)^2\right)^2}.$$
For σ < |θ − x₁| we have $\frac{dL(\sigma)}{d\sigma} > 0$, and for σ > |θ − x₁| we have $\frac{dL(\sigma)}{d\sigma} < 0$. Therefore, the MLE of σ

is $\hat\sigma = \delta(x_1) = |\theta - x_1|$.

From the above discussion, we can guarantee that for sample size n = 1 the MLE


exists in closed form, and is unique.
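A quick numerical check of this closed form can be sketched as follows (a Python illustration with arbitrary values for θ and x₁): the likelihood $L(\sigma) = \sigma/(\pi(\sigma^2 + (\theta - x_1)^2))$ is evaluated on a grid and its maximizer is compared with $|\theta - x_1|$:

```python
import numpy as np

# One observation x1, location θ treated as fixed; maximize L over σ.
theta, x1 = 1.0, 3.0

sigma = np.linspace(0.01, 10.0, 100_000)
lik = sigma / (np.pi * (sigma ** 2 + (theta - x1) ** 2))
sigma_hat = sigma[np.argmax(lik)]
print(sigma_hat)  # ≈ |θ - x1| = 2
```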

For sample size n = 2, Haas et al. (see [1]) concluded that the MLE of (θ, σ) is


1, then our method should work very well for any sample size.


CHAPTER 4

A New Approach to Point Estimation Based on Maximum Likelihood

In this chapter we discuss a new approach to point estimation. We shall see that this approach, defined through maximum likelihood, admits a Bayesian interpretation as well.

4.1 Description of the new method

Suppose $X_1, \dots, X_n$ is a random sample from a distribution with the PDF $f(x \mid \theta)$, where θ is a single unknown parameter. Then the likelihood function takes on the form
$$L(\theta) = \prod_{i=1}^{n} f(x_i \mid \theta).$$


For each i, let $\hat\theta_i$ be the maximum likelihood estimator of θ based on the single observation $X_i$. The new estimation method consists of the following steps:

Step 1: Find $\hat\theta_i$ on the basis of $X_i$, $i = 1, 2, \dots, n$.

Step 2: Compute the final estimator as the weighted average
$$\hat\theta = \sum_{i=1}^{n} w_i \hat\theta_i, \qquad w_i = \frac{\prod_{j=1}^{n} f(x_j \mid \hat\theta_i)}{\sum_{k=1}^{n} \prod_{j=1}^{n} f(x_j \mid \hat\theta_k)}, \tag{4.4}$$
so that each single-observation estimate is weighted by the likelihood of the entire sample evaluated at that estimate.
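The two steps can be sketched as follows for the unit-scale Cauchy location model, where the single-observation MLE is $\hat\theta_i = X_i$. This is a Python illustration (the thesis's own code, in Appendix A, is in R), with weights proportional to the full-sample likelihood as in (4.4); the computation is done on the log scale to avoid numerical underflow of the likelihood products:

```python
import numpy as np

def cauchy_pdf(x, theta):
    """Unit-scale Cauchy location density."""
    return 1.0 / (np.pi * (1.0 + (x - theta) ** 2))

def new_estimator(x):
    """Two-step estimator of the Cauchy location parameter.

    Step 1: the single-observation MLE based on X_i is θ̂_i = X_i.
    Step 2: average the θ̂_i with weights proportional to the full-sample
            likelihood evaluated at θ̂_i, as in (4.4).
    """
    theta_i = x
    # Log scale: the product of n densities underflows for moderate n.
    loglik = np.array([np.log(cauchy_pdf(x, t)).sum() for t in theta_i])
    w = np.exp(loglik - loglik.max())
    w /= w.sum()
    return float(np.sum(w * theta_i))

rng = np.random.default_rng(2)
sample = rng.standard_cauchy(500) + 5.0   # location θ = 5, unit scale
print(new_estimator(sample))              # ≈ 5
```

No numerical search over θ is needed; the only work is evaluating the likelihood at n candidate points.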


4.2 Bayesian interpretation

Suppose, as before, that $X_1, \dots, X_n$ is a random sample from a distribution with the PDF $f(x \mid \theta)$, where θ is an unknown parameter. Consider a discrete prior distribution for θ that places probability 1/n on each of the values $a_i = \hat\theta_i$, $i = 1, \dots, n$. The joint density of $\vec{X}$ and θ is then
$$h(\vec{x}, \theta) = \frac{1}{n} \prod_{j=1}^{n} f(x_j \mid \theta) \qquad \text{for } \theta = a_i,\ i = 1, \dots, n. \tag{4.9}$$


The marginal density of $\vec{X}$ is given by
$$g_n(\vec{x}) = \sum_{\theta} h(\vec{x}, \theta) = \sum_{i=1}^{n} \frac{1}{n} \prod_{k=1}^{n} f(x_k \mid a_i),$$
so the posterior probability of $\theta = a_i$ is $h(\vec{x}, a_i)/g_n(\vec{x})$. The resulting Bayes estimator,

under the square error loss function, is given by

$$\hat\theta = \sum_{i=1}^{n} a_i\, \frac{\prod_{j=1}^{n} f(x_j \mid a_i)}{\sum_{k=1}^{n} \prod_{j=1}^{n} f(x_j \mid a_k)}. \tag{4.14}$$

In other words, the equations (4.14) and (4.4) are exactly the same.


Hence, our estimator is the Bayes estimator with a particular prior distribution, obtained on the basis of the random sample. An unusual aspect of this formulation is that the prior distribution is data-driven, rather than being set in advance. In fact, the random sample "points" towards a particular prior distribution, based on the data. Note that this prior distribution concentrates on the n values $g(X_i)$, $i = 1, \dots, n$, where $g$ maps an observation to the corresponding single-observation MLE, so that as the sample size increases the prior distribution converges to that of $g(X)$, where X is a random variable with PDF $f(x \mid \theta)$. Consequently, it is expected that the estimates obtained by this method will be consistent, since for a given, fixed prior the mean of the posterior distribution converges to the parameter.


4.3 The case of uniform distribution

Consider the uniform distribution on $(0, \theta)$. The likelihood function is
$$L(\theta) = \prod_{i=1}^{n} \frac{1}{\theta}\, \mathbf{1}\{0 \le x_i \le \theta\} = \theta^{-n}\, \mathbf{1}\{\theta \ge x_{(n)}\},$$
which is maximized at the MLE $\hat\theta = x_{(n)} = \max_i x_i$. Under the new method, the single-observation MLE is $\hat\theta_i = x_i$, and the likelihood weight $\prod_{j} f(x_j \mid x_i)$ vanishes unless $x_i \ge x_{(n)}$, so all of the weight falls on $x_{(n)}$.


From equation (4.17) and equation (4.21) it is clear that both estimators are the same. So, for the uniform case, the estimators are exactly the same under both methods.


CHAPTER 5

Examples and Simulation Studies

In this chapter we discuss the new method for the exponential, Cauchy, and continuous Pareto distributions. We compare the estimates derived by the new method with the MLEs for the case of the exponential distribution, based on numerical values, box plots, histograms, and mean square errors. We also compare the estimates obtained by the new method with the true values of the parameters, for the case of the Cauchy and continuous Pareto distributions.


5.1 Exponential distribution

For the exponential distribution given by the PDF (2.3), the likelihood function is
$$L(\theta) = \prod_{i=1}^{n} \theta e^{-\theta x_i},$$
and the MLE of θ based on the single observation $X_j$ is $\hat\theta_j = 1/x_j$. Weighting these single-observation estimates by the full-sample likelihood evaluated at $1/x_j$, as in (4.4), gives
$$\hat\theta = \frac{\sum_{j=1}^{n} \frac{1}{x_j} \left(\frac{1}{x_j}\right)^{n} e^{-\frac{1}{x_j} \sum_{i=1}^{n} x_i}}{\sum_{j=1}^{n} \left(\frac{1}{x_j}\right)^{n} e^{-\frac{1}{x_j} \sum_{i=1}^{n} x_i}}.$$
This is the new estimator of the parameter θ.

The following table compares the new method and the likelihood method, as well as the true value of the parameter, for different sample sizes.

Table 5.1: Estimated values of the parameter θ for exponential distribution


Table 5.1 shows the estimated value of θ for the new method as well as for the MLE. We can see that the estimated values are almost identical for both methods. Moreover, when the sample size is small the new method seems better than the MLE. The mean square error of the new method is similar to the mean square error of the MLE. In addition, both mean square errors decrease as the sample size increases.
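A small simulation in this spirit can be sketched as follows (a Python illustration, not the thesis's R code from Appendix A; the sample size, replication count, and seed are my own choices). It estimates the mean square errors of the MLE $1/\bar{X}$ and of the new weighted estimator over repeated exponential samples:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 1.0, 10, 2000   # true rate, sample size, replications

def new_est(x):
    """New estimator for the exponential rate: weighted average of θ̂_j = 1/x_j
    with weights (1/x_j)^n exp(-(1/x_j) Σ x_i), computed on the log scale."""
    t = 1.0 / x
    loglik = len(x) * np.log(t) - t * x.sum()
    w = np.exp(loglik - loglik.max())
    return np.sum(w * t) / w.sum()

mle_se, new_se = [], []
for _ in range(reps):
    x = rng.exponential(scale=1.0 / theta, size=n)
    mle_se.append((1.0 / x.mean() - theta) ** 2)
    new_se.append((new_est(x) - theta) ** 2)

print(np.mean(mle_se), np.mean(new_se))  # comparable mean square errors
```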


From Figure 5.1, we can see that the normal quantile-quantile plots of the estimated values of θ are almost straight lines with 45 degree inclination to the horizontal axis. Therefore, the estimated values are approximately normally distributed for both methods.

From Figure 5.2, we see that the boxplots of the estimated values of θ are


References

[1] G. Haas, L. Bain, and C. Antle. Inferences for the Cauchy distribution based on maximum likelihood estimators, Biometrika, 57, 403-408 (1970).
[2] J. A. Reeds. Asymptotic number of roots of Cauchy location likelihood equations, The Annals of Statistics, Vol. 13, No. 2, 775-784 (1985).
[3] P. McCullagh. Mobius transformation and Cauchy parameter estimation, Annals of Statistics, Vol. 24, No. 2, 787-808 (1996).
[4] M. H. DeGroot. Probability and Statistics, Second Edition, Addison-Wesley (1986).
[5] N. L. Johnson, S. Kotz and N. Balakrishnan. Continuous Univariate Distributions, Second Edition, Wiley and Sons (1994).
[6] T. S. Ferguson. Maximum likelihood estimates of the parameters of the Cauchy distribution for samples of size 3 and 4, Journal of the American Statistical Association, 73, 211-213 (1978).
[7] T. J. Kozubowski. Lecture Notes for Stat 754, Mathematical Statistics, University of Nevada, Reno (2014).
[8] V. D. Barnett. Evaluation of the maximum likelihood estimator when the like-
