
Given n observations of the severity value $y_i$ ($1 \le i \le n$), the estimate of the kth raw moment is denoted by $m_k$ and computed as

$$ m_k = \frac{1}{n} \sum_{i=1}^{n} y_i^k $$

The 100pth percentile is denoted by $\pi_p$ ($0 < p \le 1$). By definition, $\pi_p$ satisfies

$$ F(\pi_p^-) \le p \le F(\pi_p) $$

where $F(\pi_p^-) = \lim_{h \downarrow 0} F(\pi_p - h)$. PROC SEVERITY uses the following practical method of computing $\pi_p$. Let $\hat{F}(y)$ denote the empirical distribution function (EDF) estimate at a severity value y. This estimate is computed by PROC SEVERITY and supplied to the name_PARMINIT subroutine. Let $y_p^-$ and $y_p^+$ denote two consecutive values in the array of y values such that $\hat{F}(y_p^-) < p$ and $\hat{F}(y_p^+) \ge p$. Then, the estimate $\hat{\pi}_p$ is computed as

$$ \hat{\pi}_p = y_p^- + \frac{p - \hat{F}_p^-}{\hat{F}_p^+ - \hat{F}_p^-} \left( y_p^+ - y_p^- \right) $$

where $\hat{F}_p^+ = \hat{F}(y_p^+)$ and $\hat{F}_p^- = \hat{F}(y_p^-)$.
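To make these two estimators concrete, the following standalone Python sketch computes raw moments and interpolates a percentile from EDF estimates. It is only an illustration of the formulas above (not SAS/FCMP code); the helper names and the toy sample are hypothetical.

```python
import numpy as np

def raw_moment(y, k):
    """Estimate of the kth raw moment: m_k = (1/n) * sum(y_i**k)."""
    y = np.asarray(y, dtype=float)
    return np.mean(y ** k)

def edf_percentile(p, x, F):
    """Interpolate the 100p-th percentile from EDF estimates.

    x : distinct sample values sorted in increasing order
    F : EDF estimate at each x (nondecreasing)
    Finds consecutive values with F(x_p^-) < p <= F(x_p^+) and
    interpolates linearly, as in the formula above.
    """
    x = np.asarray(x, dtype=float)
    F = np.asarray(F, dtype=float)
    hi = np.searchsorted(F, p, side="left")   # first index with F[hi] >= p
    if hi == 0:
        return x[0]
    lo = hi - 1
    return x[lo] + (p - F[lo]) / (F[hi] - F[lo]) * (x[hi] - x[lo])

# Toy sample (hypothetical values)
y = np.array([1.2, 0.8, 3.5, 2.1, 5.0, 1.7])
m1, m2 = raw_moment(y, 1), raw_moment(y, 2)

x = np.sort(y)
F = np.arange(1, len(x) + 1) / len(x)         # simple EDF over distinct values
print(m1, m2, edf_percentile(0.25, x, F), edf_percentile(0.75, x, F))
```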

Let $\epsilon$ denote the smallest double-precision floating-point number such that $1 + \epsilon > 1$. This machine precision constant can be obtained by using the CONSTANT function in Base SAS software. The details of how parameters are initialized for each predefined distribution model are as follows:

BURR    The parameters are initialized by using the method of moments. The kth raw moment of the Burr distribution is:

$$ E[X^k] = \frac{\theta^k \, \Gamma(1 + k/\gamma) \, \Gamma(\alpha - k/\gamma)}{\Gamma(\alpha)}, \quad -\gamma < k < \alpha\gamma $$

Three moment equations $E[X^k] = m_k$ ($k = 1, 2, 3$) need to be solved for initializing the three parameters of the distribution. In order to get an approximate closed form solution, the second shape parameter is fixed at $\hat{\gamma} = 2$. If $2 m_3 - 3 m_1 m_2 > 0$, then simplifying and solving the moment equations yields the following feasible set of initial values:

$$ \hat{\theta} = \sqrt{\frac{m_2 m_3}{2 m_3 - 3 m_1 m_2}}, \quad \hat{\alpha} = 1 + \frac{m_3}{2 m_3 - 3 m_1 m_2}, \quad \hat{\gamma} = 2 $$

If $2 m_3 - 3 m_1 m_2 < \epsilon$, then the parameters are initialized as follows:

$$ \hat{\theta} = \sqrt{m_2}, \quad \hat{\alpha} = 2, \quad \hat{\gamma} = 2 $$
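As a quick illustration of the Burr rule above, here is a standalone Python sketch; burr_init is a hypothetical helper, and eps stands for the machine-precision constant $\epsilon$.

```python
import math
import sys

def burr_init(m1, m2, m3, eps=sys.float_info.epsilon):
    """Burr (theta, alpha, gamma) starting values from the first three raw moments."""
    denom = 2.0 * m3 - 3.0 * m1 * m2
    if denom < eps:
        return math.sqrt(m2), 2.0, 2.0     # fallback when the denominator is ~0
    theta = math.sqrt(m2 * m3 / denom)
    alpha = 1.0 + m3 / denom
    gamma = 2.0                            # second shape parameter fixed at 2
    return theta, alpha, gamma

print(burr_init(m1=2.0, m2=9.0, m3=90.0))
```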

EXP    The parameters are initialized by using the method of moments. The kth raw moment of the exponential distribution is:

$$ E[X^k] = \theta^k \, \Gamma(k + 1), \quad k > -1 $$

Solving $E[X] = m_1$ yields the initial value $\hat{\theta} = m_1$.


GAMMA    The parameter $\alpha$ is initialized by using its approximate maximum likelihood (ML) estimate. For a set of n iid observations $y_i$ ($1 \le i \le n$) drawn from a gamma distribution, the log likelihood $l$ is defined as follows:

$$ l = \sum_{i=1}^{n} \log\!\left( \frac{y_i^{\alpha-1} e^{-y_i/\theta}}{\theta^{\alpha} \Gamma(\alpha)} \right) = (\alpha - 1) \sum_{i=1}^{n} \log(y_i) - \frac{1}{\theta} \sum_{i=1}^{n} y_i - n \alpha \log(\theta) - n \log(\Gamma(\alpha)) $$

Using a shorter notation of $\sum$ to denote $\sum_{i=1}^{n}$ and solving the equation $\partial l / \partial \theta = 0$ yields the following ML estimate of $\theta$:

$$ \hat{\theta} = \frac{\sum y_i}{n \alpha} = \frac{m_1}{\alpha} $$

Substituting this estimate in the expression of $l$ and simplifying gives

$$ l = (\alpha - 1) \sum \log(y_i) - n \alpha - n \alpha \log(m_1) + n \alpha \log(\alpha) - n \log(\Gamma(\alpha)) $$

Let d be defined as follows:

$$ d = \log(m_1) - \frac{1}{n} \sum \log(y_i) $$

Solving the equation $\partial l / \partial \alpha = 0$ yields the following expression in terms of the digamma function $\psi(\alpha)$:

$$ \log(\alpha) - \psi(\alpha) = d $$

The digamma function can be approximated as follows:

$$ \hat{\psi}(\alpha) \approx \log(\alpha) - \frac{1}{\alpha} \left( 0.5 + \frac{1}{12\alpha + 2} \right) $$

This approximation is within 1.4% of the true value for all values of $\alpha > 0$ except when $\alpha$ is arbitrarily close to the positive root of the digamma function (which is approximately 1.461632). Even for values of $\alpha$ that are close to the positive root, the absolute error between the true and approximate values is still acceptable ($|\hat{\psi}(\alpha) - \psi(\alpha)| < 0.005$ for $\alpha > 1.07$). Solving the equation that arises from this approximation yields the following estimate of $\alpha$:

$$ \hat{\alpha} = \frac{3 - d + \sqrt{(d - 3)^2 + 24d}}{12d} $$

If this approximate ML estimate is infeasible, then the method of moments is used. The kth raw moment of the gamma distribution is:

$$ E[X^k] = \frac{\theta^k \, \Gamma(\alpha + k)}{\Gamma(\alpha)}, \quad k > -\alpha $$

Solving $E[X] = m_1$ and $E[X^2] = m_2$ yields the following initial value for $\alpha$:

$$ \hat{\alpha} = \frac{m_1^2}{m_2 - m_1^2} $$

If $m_2 - m_1^2 < \epsilon$ (almost zero sample variance), then $\alpha$ is initialized as follows: $\hat{\alpha} = 1$.

After computing the estimate of $\alpha$, the estimate of $\theta$ is computed as follows:

$$ \hat{\theta} = \frac{m_1}{\hat{\alpha}} $$

Both the maximum likelihood method and the method of moments arrive at the same relationship between $\hat{\alpha}$ and $\hat{\theta}$.
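A standalone Python sketch of this two-step gamma initialization follows (illustrative only; the feasibility check on the closed-form estimate is an assumption, since the text does not spell out the exact criterion, and eps stands for $\epsilon$):

```python
import math
import sys

def gamma_init(y, eps=sys.float_info.epsilon):
    """Initial (theta, alpha) for the gamma distribution, as sketched above."""
    n = len(y)
    m1 = sum(y) / n
    m2 = sum(v * v for v in y) / n
    d = math.log(m1) - sum(math.log(v) for v in y) / n

    alpha = None
    if d > eps:
        # closed-form solution of the approximate ML equation log(alpha) - psi(alpha) = d
        alpha = (3.0 - d + math.sqrt((d - 3.0) ** 2 + 24.0 * d)) / (12.0 * d)
    if alpha is None or not math.isfinite(alpha) or alpha <= 0.0:
        # method-of-moments fallback
        var = m2 - m1 * m1
        alpha = m1 * m1 / var if var >= eps else 1.0
    return m1 / alpha, alpha               # theta_hat = m1 / alpha_hat

print(gamma_init([0.7, 1.4, 2.3, 3.1, 0.9, 1.8]))
```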

GPD    The parameters are initialized by using the method of moments. Notice that for $\xi > 0$, the CDF of the generalized Pareto distribution (GPD) is:

$$ F(x) = 1 - \left( 1 + \frac{\xi x}{\theta} \right)^{-1/\xi} = 1 - \left( \frac{\theta/\xi}{x + \theta/\xi} \right)^{1/\xi} $$

This is equivalent to a Pareto distribution with scale parameter $\theta_1 = \theta/\xi$ and shape parameter $\alpha = 1/\xi$. Using this relationship, the parameter initialization method used for the PARETO distribution model is used to get the following initial values for the parameters of the GPD distribution model:

$$ \hat{\theta} = \frac{m_1 m_2}{2(m_2 - m_1^2)}, \quad \hat{\xi} = \frac{m_2 - 2 m_1^2}{2(m_2 - m_1^2)} $$

If $m_2 - m_1^2 < \epsilon$ (almost zero sample variance) or $m_2 - 2 m_1^2 < \epsilon$, then the parameters are initialized as follows:

$$ \hat{\theta} = \frac{m_1}{2}, \quad \hat{\xi} = \frac{1}{2} $$

IGAUSS    The parameters are initialized by using the method of moments. Note that the standard parameterization of the inverse Gaussian distribution (also known as the Wald distribution), in terms of the location parameter $\mu$ and shape parameter $\lambda$, is as follows (Klugman, Panjer, and Willmot 1998, p. 583):

$$ f(x) = \sqrt{\frac{\lambda}{2 \pi x^3}} \exp\!\left( -\frac{\lambda (x - \mu)^2}{2 \mu^2 x} \right) $$

$$ F(x) = \Phi\!\left( \sqrt{\frac{\lambda}{x}} \left( \frac{x}{\mu} - 1 \right) \right) + \Phi\!\left( -\sqrt{\frac{\lambda}{x}} \left( \frac{x}{\mu} + 1 \right) \right) \exp\!\left( \frac{2\lambda}{\mu} \right) $$

For this parameterization, it is known that the mean is $E[X] = \mu$ and the variance is $\mathrm{Var}[X] = \mu^3/\lambda$, which yields the second raw moment as $E[X^2] = \mu^2 (1 + \mu/\lambda)$ (computed by using $E[X^2] = \mathrm{Var}[X] + (E[X])^2$).

The predefined IGAUSS distribution model in PROC SEVERITY uses the following alternate parameterization to allow the distribution to have a scale parameter, $\theta$:

$$ f(x) = \sqrt{\frac{\alpha \theta}{2 \pi x^3}} \exp\!\left( -\frac{\alpha (x - \theta)^2}{2 \theta x} \right) $$

$$ F(x) = \Phi\!\left( \sqrt{\frac{\alpha \theta}{x}} \left( \frac{x}{\theta} - 1 \right) \right) + \Phi\!\left( -\sqrt{\frac{\alpha \theta}{x}} \left( \frac{x}{\theta} + 1 \right) \right) \exp(2\alpha) $$

The parameters $\theta$ (scale) and $\alpha$ (shape) of this alternate form are related to the parameters $\mu$ and $\lambda$ of the preceding form such that $\theta = \mu$ and $\alpha = \lambda/\mu$. Using this relationship, the first and second raw moments of the IGAUSS distribution are:

$$ E[X] = \theta, \quad E[X^2] = \theta^2 \left( 1 + \frac{1}{\alpha} \right) $$

Solving $E[X] = m_1$ and $E[X^2] = m_2$ yields the following initial values:

$$ \hat{\theta} = m_1, \quad \hat{\alpha} = \frac{m_1^2}{m_2 - m_1^2} $$

If $m_2 - m_1^2 < \epsilon$ (almost zero sample variance), then the parameters are initialized as follows:

$$ \hat{\theta} = m_1, \quad \hat{\alpha} = 1 $$

LOGN    The parameters are initialized by using the method of moments. The kth raw moment of the lognormal distribution is:

$$ E[X^k] = \exp\!\left( k \mu + \frac{k^2 \sigma^2}{2} \right) $$

Solving $E[X] = m_1$ and $E[X^2] = m_2$ yields the following initial values:

$$ \hat{\mu} = 2 \log(m_1) - \frac{\log(m_2)}{2}, \quad \hat{\sigma} = \sqrt{\log(m_2) - 2 \log(m_1)} $$

PARETO    The parameters are initialized by using the method of moments. The kth raw moment of the Pareto distribution is:

$$ E[X^k] = \frac{\theta^k \, \Gamma(k + 1) \, \Gamma(\alpha - k)}{\Gamma(\alpha)}, \quad -1 < k < \alpha $$

Solving $E[X] = m_1$ and $E[X^2] = m_2$ yields the following initial values:

$$ \hat{\theta} = \frac{m_1 m_2}{m_2 - 2 m_1^2}, \quad \hat{\alpha} = \frac{2(m_2 - m_1^2)}{m_2 - 2 m_1^2} $$

If $m_2 - m_1^2 < \epsilon$ (almost zero sample variance) or $m_2 - 2 m_1^2 < \epsilon$, then the parameters are initialized as follows:

$$ \hat{\theta} = m_1, \quad \hat{\alpha} = 2 $$

WEIBULL    The parameters are initialized by using the percentile matching method. Let q1 and q3 denote the estimates of the 25th and 75th percentiles, respectively. Using the formula for the CDF of the Weibull distribution, they can be written as

$$ 1 - \exp\!\left( -(q_1/\theta)^{\tau} \right) = 0.25 $$

$$ 1 - \exp\!\left( -(q_3/\theta)^{\tau} \right) = 0.75 $$

Simplifying and solving these two equations yields the following initial values:

$$ \hat{\theta} = \exp\!\left( \frac{r \log(q_1) - \log(q_3)}{r - 1} \right), \quad \hat{\tau} = \frac{\log(\log(4))}{\log(q_3) - \log(\hat{\theta})} $$

where $r = \log(\log(4)) / \log(\log(4/3))$. These initial values agree with those suggested in Klugman, Panjer, and Willmot (1998).
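The Weibull percentile-matching step can be checked with a small standalone Python sketch (hypothetical helper; the test values are constructed from a known Weibull so the starting values should recover its parameters):

```python
import math

def weibull_init(q1, q3):
    """Weibull (theta, tau) starting values by matching the 25th/75th percentiles."""
    r = math.log(math.log(4.0)) / math.log(math.log(4.0 / 3.0))
    theta = math.exp((r * math.log(q1) - math.log(q3)) / (r - 1.0))
    tau = math.log(math.log(4.0)) / (math.log(q3) - math.log(theta))
    return theta, tau

# quick check: for a true Weibull(theta=2, tau=1.5), the quartiles are
# q1 = 2 * log(4/3)**(1/1.5) and q3 = 2 * log(4)**(1/1.5)
q1 = 2.0 * math.log(4.0 / 3.0) ** (1.0 / 1.5)
q3 = 2.0 * math.log(4.0) ** (1.0 / 1.5)
print(weibull_init(q1, q3))   # ~ (2.0, 1.5)
```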

A summary of the initial values of all the parameters for all the predefined distributions is given in Table 22.4. The table also provides the names of the parameters to use in the INIT= option in the DIST statement if you want to provide a different initial value.

Table 22.4    Parameter Initialization for Predefined Distributions

Distribution   Parameter (name for INIT=)   Default Initial Value
BURR           theta                        sqrt(m2*m3 / (2*m3 - 3*m1*m2))
               alpha                        1 + m3 / (2*m3 - 3*m1*m2)
               gamma                        2
EXP            theta                        m1
GAMMA          theta                        m1 / alpha
               alpha                        (3 - d + sqrt((d - 3)^2 + 24*d)) / (12*d)
GPD            theta                        m1*m2 / (2*(m2 - m1^2))
               xi                           (m2 - 2*m1^2) / (2*(m2 - m1^2))
IGAUSS         theta                        m1
               alpha                        m1^2 / (m2 - m1^2)
LOGN           mu                           2*log(m1) - log(m2)/2
               sigma                        sqrt(log(m2) - 2*log(m1))
PARETO         theta                        m1*m2 / (m2 - 2*m1^2)
               alpha                        2*(m2 - m1^2) / (m2 - 2*m1^2)
WEIBULL        theta                        exp((r*log(q1) - log(q3)) / (r - 1))
               tau                          log(log(4)) / (log(q3) - log(theta))

Notes:

- m_k denotes the kth raw moment
- d = log(m1) - (Σ log(yi)) / n
- q1 and q3 denote the 25th and 75th percentiles, respectively
- r = log(log(4)) / log(log(4/3))


Predefined Utility Functions

The following predefined utility functions are provided with the SEVERITY procedure and are available in the SASHELP.SVRTDIST library:

SVRTUTIL_HILLCUTOFF:

This function computes an estimate of the value where the right tail of a distribution is expected to begin. The function implements the algorithm described in Danielsson et al. (2001). The description of the algorithm uses the following notation:

n                  number of observations in the original sample
B                  number of bootstrap samples to draw
m1                 size of the bootstrap sample in the first step of the algorithm (m1 < n)
$x^{(i)}_{j,m}$    ith order statistic of the jth bootstrap sample of size m ($1 \le i \le m$, $1 \le j \le B$)
$x^{(i)}$          ith order statistic of the original sample ($1 \le i \le n$)

Given the input sample x and values of B and m1, the steps of the algorithm are as follows:

1. Take B bootstrap samples of size m1 from the original sample.

2. Find the integer k1 that minimizes the bootstrap estimate of the mean squared error:

$$ k_1 = \arg\min_{1 \le k < m_1} Q(m_1, k) $$

3. Take B bootstrap samples of size $m_2 = m_1^2 / n$ from the original sample.

4. Find the integer k2 that minimizes the bootstrap estimate of the mean squared error:

$$ k_2 = \arg\min_{1 \le k < m_2} Q(m_2, k) $$

5. Compute the integer kopt, which is used for computing the cutoff point:

$$ k_{opt} = \frac{k_1^2}{k_2} \left( \frac{\log(k_1)}{2 \log(m_1) - \log(k_1)} \right)^{2 - 2\log(k_1)/\log(m_1)} $$

6. Set the cutoff point equal to $x^{(k_{opt}+1)}$.

The bootstrap estimate of the mean squared error is computed as

$$ Q(m, k) = \frac{1}{B} \sum_{j=1}^{B} \mathrm{MSE}_j(m, k) $$

The mean squared error of the jth bootstrap sample is computed as

$$ \mathrm{MSE}_j(m, k) = \left( M_j(m, k) - 2 \left( \gamma_j(m, k) \right)^2 \right)^2 $$

where $M_j(m, k)$ is a control variate proposed by Danielsson et al. (2001),

$$ M_j(m, k) = \frac{1}{k} \sum_{i=1}^{k} \left( \log\!\left( x^{(m-i+1)}_{j,m} \right) - \log\!\left( x^{(m-k)}_{j,m} \right) \right)^2 $$

and $\gamma_j(m, k)$ is the Hill's estimator of the tail index (Hill 1975),

$$ \gamma_j(m, k) = \frac{1}{k} \sum_{i=1}^{k} \left( \log\!\left( x^{(m-i+1)}_{j,m} \right) - \log\!\left( x^{(m-k)}_{j,m} \right) \right) $$

This algorithm has two tuning parameters, B and m1. The number of bootstrap samples B is chosen based on the availability of computational resources. The optimal value of m1 is chosen such that the following ratio, R(m1), is minimized:

$$ R(m_1) = \frac{\left( Q(m_1, k_1) \right)^2}{Q(m_2, k_2)} $$

The SVRTUTIL_HILLCUTOFF utility function implements the preceding algorithm. It uses the grid search method to compute the optimal value of m1.

- Type: Function

- Signature: SVRTUTIL_HILLCUTOFF(n, x{*}, b, s, status)

- Argument Description:

  n        Dimension of the array x.
  x{*}     Input numeric array of dimension n that contains the sample.
  b        Number of bootstrap samples used to estimate the mean squared error. If b is less than 10, then a default value of 50 is used.
  s        Approximate number of steps used to search the optimal value of m1 in the range $[n^{0.75}, n - 1]$. If s is less than or equal to 1, then a default value of 10 is used.
  status   Output argument that contains the status of the algorithm. If the algorithm succeeds in computing a valid cutoff point, then status is set to 0. If the algorithm fails, then status is set to 1.

- Return value: The cutoff value where the right tail is estimated to start. If the size of the input sample is inadequate ($n \le 5$), then a missing value is returned and status is set to a missing value. If the algorithm fails to estimate a valid cutoff value (status = 1), then the fifth largest value in the input sample is returned.
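The double-bootstrap algorithm above can be sketched in standalone Python as follows. This is only an illustration of the stated steps for a single candidate m1 (the grid search over m1, and the SAS/FCMP implementation details, are omitted); all function and variable names are hypothetical.

```python
import numpy as np

def hill_gamma(z, k):
    """Hill estimator gamma_j(m, k) from a sorted (increasing) sample z of size m."""
    m = len(z)
    return np.mean(np.log(z[m - k:]) - np.log(z[m - k - 1]))

def control_variate(z, k):
    """Control variate M_j(m, k): mean squared log-excess over x^{(m-k)}."""
    m = len(z)
    return np.mean((np.log(z[m - k:]) - np.log(z[m - k - 1])) ** 2)

def best_k(x, m, B, rng):
    """Steps 1-2 (or 3-4): draw B bootstrap samples of size m, find k minimizing Q(m, k)."""
    samples = [np.sort(rng.choice(x, size=m, replace=True)) for _ in range(B)]
    def Q(k):
        return np.mean([(control_variate(z, k) - 2.0 * hill_gamma(z, k) ** 2) ** 2
                        for z in samples])
    return min(range(1, m - 1), key=Q)

def hill_cutoff(x, m1, B=50, seed=1):
    """Cutoff estimate for one candidate m1 (the search over m1 is omitted)."""
    rng = np.random.default_rng(seed)
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    m2 = max(int(m1 * m1 / n), 3)
    k1 = best_k(x, m1, B, rng)
    k2 = best_k(x, m2, B, rng)
    expo = 2.0 - 2.0 * np.log(k1) / np.log(m1)
    k_opt = (k1 ** 2 / k2) * (np.log(k1) / (2.0 * np.log(m1) - np.log(k1))) ** expo
    k_opt = min(max(int(round(k_opt)), 1), n - 2)
    return x[k_opt]          # x^{(k_opt + 1)} in 1-based order-statistic notation

# Hypothetical heavy-tailed sample
rng = np.random.default_rng(0)
sample = rng.pareto(2.0, size=500) + 1.0
print(hill_cutoff(sample, m1=200))
```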

SVRTUTIL_PERCENTILE:

This function computes the specified percentile given the EDF estimates of a sample. Let F(x) denote the EDF estimate at x. Let $x_p^-$ and $x_p^+$ denote two consecutive values in the sample of x values such that $F(x_p^-) < p$ and $F(x_p^+) \ge p$. Then, the function computes the 100pth percentile $\pi_p$ as

$$ \pi_p = x_p^- + \frac{p - F_p^-}{F_p^+ - F_p^-} \left( x_p^+ - x_p^- \right) $$

where $F_p^+ = F(x_p^+)$ and $F_p^- = F(x_p^-)$.

- Type: Function

- Signature: SVRTUTIL_PERCENTILE(p, n, x{*}, F{*})


- Argument Description:

  p      Desired percentile. The value must be in the interval (0,1). The function returns the 100pth percentile.
  n      Dimension of the x and F input arrays.
  x{*}   Input numeric array of dimension n that contains distinct values of the random variable observed in the sample. These values must be sorted in increasing order.
  F{*}   Input numeric array of dimension n in which each F[i] contains the EDF estimate for x[i]. These values must be sorted in nondecreasing order.

- Return value: The 100pth percentile of the input sample.

SVRTUTIL_RAWMOMENTS:

This subroutine computes the raw moments of a sample.

- Type: Subroutine

- Signature: SVRTUTIL_RAWMOMENTS(n, x{*}, nx{*}, nRaw, raw{*})

- Argument Description:

  n        Dimension of the x and nx input arrays.
  x{*}     Input numeric array of dimension n that contains distinct values of the random variable that are observed in the sample.
  nx{*}    Input numeric array of dimension n in which each nx[i] contains the number of observations in the sample that have the value x[i].
  nRaw     Desired number of raw moments. The output array raw contains the first nRaw raw moments.
  raw{*}   Output array of raw moments. The kth element in the array (raw{k}) contains the kth raw moment, where $1 \le k \le$ nRaw.

- Return value: Numeric array raw that contains the first nRaw raw moments. The array contains missing values if the sample has no observations (that is, if all the values in the nx array add up to zero).
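For illustration, the same computation in standalone Python, mirroring the distinct-values-plus-counts calling convention described above (hypothetical helper, not the FCMP subroutine):

```python
def raw_moments(x, nx, n_raw):
    """First n_raw raw moments of a sample given as distinct values x with counts nx."""
    total = sum(nx)
    if total == 0:
        return [None] * n_raw                 # no observations: nothing to compute
    return [sum(c * v ** k for v, c in zip(x, nx)) / total
            for k in range(1, n_raw + 1)]

print(raw_moments([1.0, 2.0, 5.0], [3, 2, 1], 2))   # -> [2.0, 6.0]
```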

SVRTUTIL_SORT:

This subroutine sorts the given array of numeric values in an ascending or descending order.

- Type: Subroutine

- Signature: SVRTUTIL_SORT(n, x{*}, flag)

- Argument Description:

  n      Dimension of the input array x.
  x{*}   Numeric array that contains the values to be sorted at input. The subroutine uses the same array to return the sorted values.
  flag   A numeric value that controls the sort order. If flag is 0, then the values are sorted in an ascending order. If flag has any value other than 0, then the values are sorted in descending order.

- Return value: Numeric array x, which is sorted in place (that is, the sorted array is stored in the same storage area occupied by the input array x).


Censoring and Truncation

One of the key features of PROC SEVERITY is that it enables you to specify whether the severity event's magnitude is observable and, if it is observable, whether the exact value of the magnitude is known. If an event is unobservable when the magnitude is in certain intervals, then it is referred to as a truncation effect. If the exact magnitude of the event is not known, but it is known to have a value in a certain interval, then it is referred to as a censoring effect.

PROC SEVERITY allows a severity event to be left-truncated and right-censored. An event is said to be left-truncated if it is observed only when $Y > T$, where Y denotes the random variable for the magnitude and T denotes a random variable for the truncation threshold. An event is said to be right-censored if it is known that the magnitude is $Y > C$, but the exact value of Y is not known; C is a random variable for the censoring limit.

PROC SEVERITY assumes that the input data is given as a triplet $(y_i, t_i, \delta_i)$, $i = 1, \ldots, N$, where N is the number of observations (in a BY group), $y_i$ is the observed value (magnitude) of the response (event) variable, $t_i$ is the left-truncation threshold, and $\delta_i$ is a right-censoring indicator. If $\delta_i$ is equal to one of the values specified in the RIGHTCENSORED= option (or 0 if no indicator value is specified), then it indicates that $y_i$ is right-censored. In that case, the censoring limit $c_i$ is assumed to be equal to the recorded value $y_i$. If $\delta_i$ is not equal to one of the indicator values or has a missing value, then $y_i$ is assumed to be the exact event value; that is, the observation is uncensored.

If the global left-truncation threshold $T_g$ is specified by using the LEFTTRUNCATED= option, then $t_i = T_g$ for all i. If $y_i \le t_i$ for some i, then that observation is ignored and a warning is written to the SAS log. A missing value for $t_i$ indicates that the observation is not left-truncated.

If the global right-censoring limit $C_g$ is specified by using the RIGHTCENSORED= option, then $y_i$ is compared with $C_g$. If $y_i < C_g$, then $\delta_i = 1$ to indicate an exact (uncensored) observation; otherwise, $\delta_i = 0$ to indicate a right-censored observation. Note that the case of $y_i = C_g$ is considered as right-censored, because it is assumed that the actual event magnitude is greater than $C_g$; however, it gets recorded as $C_g$. If $y_i > C_g$ for some observation, then it is reduced to the limit ($y_i = C_g$) and a warning is written to the SAS log.

Specification of right-censoring and left-truncation affects the likelihood of the data (see the section "Likelihood Function" on page 1541) and how the empirical distribution function (EDF) is estimated (see the section "Empirical Distribution Function Estimation Methods" on page 1547).

Probability of Observability

For left-truncated data, PROC SEVERITY also enables you to provide additional information in the form of probability of observability by using the PROBOBSERVED= option. It is defined as the probability that the underlying severity event gets observed (and recorded) for the specified left-truncation threshold value. For example, if you specify a value of 0.75, then for every 75 observations recorded above a specified threshold, 25 more events have happened with a severity value less than or equal to the specified threshold. Although the exact severity value of those 25 events is not known, PROC SEVERITY can use the information about the number of those events.


In particular, for each left-truncated observation, PROC SEVERITY assumes a presence of $(1 - p)/p$ additional observations with $y_i = t_i$. These additional observations are then used for computing the likelihood (see the section "Probability of Observability and Likelihood" on page 1542) and an unconditional estimate of the empirical distribution function (see the section "EDF Estimates and Left-Truncation" on page 1549).

Parameter Estimation Method

PROC SEVERITY uses the maximum likelihood (ML) method to estimate the parameters of each model. A nonlinear optimization process is used to maximize the log of the likelihood function.

Likelihood Function

Let Y denote the random response variable, and let y denote its value recorded in an observation in the input data set. Let $\delta$ denote the censoring indicator: $\delta = 1$ indicates that the observation is uncensored (sometimes referred to as an event observation) and $\delta = 0$ indicates that the observation is right-censored. When $\delta = 0$, the recorded value of y is assumed to be the censoring limit, denoted by c. Let t denote the left-truncation threshold. Let $f_\Theta(y)$ and $F_\Theta(y)$ denote the PDF and CDF, respectively, evaluated at y for a set of parameter values $\Theta$. Then, the set of input observations can be categorized into the following four subsets within each BY group:

- E: the set of uncensored observations that are not left-truncated. The likelihood of an observation $i \in E$ is

$$ l_{i \in E} = \Pr(Y = y_i) = f_\Theta(y_i) $$

- $E_l$: the set of uncensored observations that are left-truncated. The likelihood of an observation $j \in E_l$ is

$$ l_{j \in E_l} = \Pr(Y = y_j \mid Y > t_j) = \frac{f_\Theta(y_j)}{1 - F_\Theta(t_j)} $$

- C: the set of right-censored observations that are not left-truncated. The likelihood of an observation $k \in C$ is

$$ l_{k \in C} = \Pr(Y > c_k) = 1 - F_\Theta(c_k) $$

- $C_l$: the set of right-censored observations that are left-truncated. The likelihood of an observation $m \in C_l$ is

$$ l_{m \in C_l} = \Pr(Y > c_m \mid Y > t_m) = \frac{1 - F_\Theta(c_m)}{1 - F_\Theta(t_m)} $$

Note that $(E \cup E_l) \cap (C \cup C_l) = \emptyset$. Also, the sets $E_l$ and $C_l$ are empty when left-truncation is not specified, and the sets C and $C_l$ are empty when right-censoring is not specified.
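As an illustration of how the four cases combine into one log likelihood, here is a standalone Python sketch. The exponential distribution stands in for a generic $f_\Theta$, $F_\Theta$ purely for concreteness; the data tuples and helper names are hypothetical, and this is not how PROC SEVERITY is implemented.

```python
import math

def exp_pdf(y, theta):
    return math.exp(-y / theta) / theta

def exp_cdf(y, theta):
    return 1.0 - math.exp(-y / theta)

def log_likelihood(obs, theta, pdf=exp_pdf, cdf=exp_cdf):
    """Sum the log likelihood over (y, t, delta) observations.

    t is None when the observation is not left-truncated; delta = 1 marks an
    exact value and delta = 0 a right-censored value (y is then the limit c).
    """
    ll = 0.0
    for y, t, delta in obs:
        if delta == 1:
            ll += math.log(pdf(y, theta))           # sets E and E_l
        else:
            ll += math.log(1.0 - cdf(y, theta))     # sets C and C_l
        if t is not None:
            ll -= math.log(1.0 - cdf(t, theta))     # left-truncation adjustment
    return ll

# Hypothetical data: exact, left-truncated exact, right-censored, censored + truncated
data = [(3.2, None, 1), (5.0, 1.0, 1), (10.0, None, 0), (12.0, 2.0, 0)]
print(log_likelihood(data, theta=4.0))
```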
