proc reg data=one outest=parm3;
model y = x1 x2;
by by;
run;
The 100 estimations of the coefficient on variable x1 are then summarized for each of the three error distributions by using PROC UNIVARIATE, as follows:
proc univariate data=parm1;
var x1;
run;
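If you prefer to keep the summary statistics in a data set rather than reading them from the listing, you can capture the Moments table with an ODS OUTPUT statement. This is a minimal sketch, not part of the original example; the data set name moments1 is arbitrary.

/* Sketch: capture the mean and standard deviation of the x1 estimates
   in a data set by using the Moments ODS table from PROC UNIVARIATE. */
ods output Moments=moments1;
proc univariate data=parm1;
   var x1;
run;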
The following table summarizes the results from the estimations. For each of the three error distributions, it reports the mean and standard deviation of the estimates. The true value for the coefficient on x1 is 1.0.

Method    Mean    Std Deviation    Mean    Std Deviation    Mean    Std Deviation
GME-NM    0.878   0.116            0.948   0.427            3.03    13.62
For normally distributed or nearly normally distributed data, moment-constrained maximum entropy is a good choice. For distributions not well described by a normal distribution, data-constrained maximum entropy is a good choice.
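As a rough sketch (not part of the original example), both estimators can be requested directly in the PROC ENTROPY statement. This assumes that the GME option requests the data-constrained estimator and the GMENM option requests the moment-constrained (GME-NM) estimator, applied here to the y = x1 x2 model for the data set one used above.

/* Sketch: data-constrained GME fit */
proc entropy data=one gme;
   model y = x1 x2;
run;

/* Sketch: moment-constrained GME-NM fit (assuming the GMENM option) */
proc entropy data=one gmenm;
   model y = x1 x2;
run;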
Example 12.2: Unreplicated Factorial Experiments
Factorial experiments are useful for studying the effects of various factors on a response. For the practitioner constrained to the use of OLS regression, there must be replication to estimate all of the possible main and interaction effects in a factorial experiment. Using OLS regression to analyze unreplicated experimental data results in zero degrees of freedom for error in the ANOVA table, since there are as many parameters as observations. This situation leaves the experimenter unable to compute confidence intervals or perform hypothesis testing on the parameter estimates.
Several options are available when replication is impossible. The higher-order interactions can be assumed to have negligible effects, and their degrees of freedom can be pooled to create the error degrees of freedom used to perform inference on the lower-order estimates. Or, if a preliminary experiment is being run, a normal probability plot of all effects can provide insight as to which effects are significant and should therefore be the focus of a later, more complete experiment.
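As a minimal sketch of the first alternative (not part of the original example), the three- and four-factor interaction terms can simply be dropped from the model so that their degrees of freedom are pooled into error; here the rate data set created below is used.

/* Sketch: pool higher-order interactions into error by fitting only the
   main effects and two-factor interactions of the rate data set. */
proc reg data=rate;
   model y = a b c d ab ac ad bc bd cd;
run;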
The following example illustrates the probability plot methodology and the alternative by using PROC ENTROPY. Consider a 2^4 factorial model with no replication. The data are taken from Myers and Montgomery (1995).
data rate;
do a=-1,1; do b=-1,1; do c=-1,1; do d=-1,1;
input y @@;
ab=a*b; ac=a*c; ad=a*d; bc=b*c; bd=b*d; cd=c*d;
abc=a*b*c; abd=a*b*d; acd=a*c*d; bcd=b*c*d;
abcd=a*b*c*d;
output;
end; end; end; end;
datalines;
45 71 48 65 68 60 80 65 43 100 45 104 75 86 70 96
;
run;
Analyze the data by using PROC REG, and then output the resulting estimates:
proc reg data=rate outest=regout;
model y=a b c d ab ac ad bc bd cd abc abd acd bcd abcd;
run;
proc transpose data=regout out=ploteff name=effect prefix=est;
var a b c d ab ac ad bc bd cd abc abd acd bcd abcd;
run;
Now the normal scores for the estimates can be computed with the RANK procedure as follows:
proc rank data=ploteff normal=blom out=qqplot;
var est1;
ranks normalq;
run;
To create the probability plot, simply plot the estimates versus their normal scores by using PROC SGPLOT as follows:
title "Unreplicated Factorial Experiments";
proc sgplot data=qqplot;
scatter x=est1 y=normalq / markerchar=effect
markercharattrs=(size=10pt);
xaxis label="Estimate";
yaxis label="Normal Quantile";
run;
The plot shown in Output 12.2.1 displays evidence that the a, b, d, ad, and bd estimates do not fit into the purely random normal model, which suggests that they may have some significant effect on the response variable. To verify this, fit a reduced model that contains only these effects:
proc reg data=rate;
model y=a b d ad bd;
run;
The estimates for the reduced model are shown in Output 12.2.2.
Output 12.2.2 Reduced Model OLS Estimates
Unreplicated Factorial Experiments
The REG Procedure
Model: MODEL1
Dependent Variable: y
Parameter Estimates
These results support the probability plot methodology.
PROC ENTROPY can directly estimate the full model without having to rely on the probability plot for insight into which effects might be significant. To illustrate this, PROC ENTROPY is run by using default parameter and error supports in the following statements:
proc entropy data=rate;
model y=a b c d ab ac ad bc bd cd abc abd acd bcd abcd;
run;
The resulting GME-NM estimates are shown in Output 12.2.3. Note that the parameter estimates associated with the a, b, d, ad, and bd effects are all significant.
Unreplicated Factorial Experiments
The ENTROPY Procedure
GME-NM Variable Estimates
Variable Estimate Std Err t Value Pr > |t|
Example 12.3: Censored Data Models in PROC ENTROPY
Data available to an analyst might sometimes be censored, where only part of the actual series is observed. Consider the case in which only observations greater than some lower bound are recorded, as defined by the following process:
y = max( Xβ + ε, lb )
Running ordinary least squares estimation on data generated by the preceding process is not optimal because the estimates are likely to be biased and inefficient. One alternative for estimating models with censored data is the tobit estimator. This model is supported in the QLIM procedure in SAS/ETS and in the LIFEREG procedure in SAS/STAT. PROC ENTROPY provides another alternative that can make it very easy to estimate such a model correctly.
The following DATA step generates censored data in which any negative values of the dependent variable, y, are set to a lower bound of 0:
data cens;
do t = 1 to 100;
x1 = 5 * ranuni(456);
x2 = 10 * ranuni(456);
y = 4.5*x1 + 2*x2 + 15 * rannor(456);
if( y<0 ) then y = 0;
output;
end;
run;
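As a sketch of the bias noted above (not part of the original example), ordinary least squares can first be fit to the censored data; because the censoring at 0 is ignored, the slope estimates for x1 and x2 are expected to be pulled away from the true values of 4.5 and 2.

/* Sketch: naive OLS fit that ignores the censoring at 0 */
proc reg data=cens;
   model y = x1 x2;
run;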
To illustrate the effect of the censored option in PROC ENTROPY, the model is initially estimated without accounting for censoring in the following statements:
title "Censored Data Estimation";
proc entropy data = cens gme primal;
priors intercept -32 32;
model y = x1 x2 /
esupports = (-25 1 25);
run;
Output 12.3.1 GME Estimates
Censored Data Estimation
The ENTROPY Procedure
GME Variable Estimates
Variable Estimate Std Err t Value Pr > |t|
intercept 5.478121 0.00188 2906.41 <.0001
The previous model is reestimated by using the CENSORED option in the following statements:
proc entropy data = cens gme primal;
priors intercept -32 32;
model y = x1 x2 /
esupports = (-25 1 25)
censored(lb = 0, esupports=(-15 1 15) );
run;
Censored Data Estimation
The ENTROPY Procedure
GME Variable Estimates
Variable Estimate Std Err t Value Pr > |t|
The second set of entropy estimates is much closer to the true parameter values of 4.5 and 2. Since another alternative for fitting a model of censored data is a tobit model, PROC QLIM is used in the following statements to fit a tobit model to the data:
proc qlim data=cens;
model y = x1 x2;
endogenous y ~ censored(lb=0);
run;
Output 12.3.3 QLIM Estimates
Censored Data Estimation
The QLIM Procedure
Parameter Estimates
For these data and this code, PROC ENTROPY produces estimates that are closer to the true parameter values than those computed by PROC QLIM.
Example 12.4: Use of the PDATA= Option
It is sometimes useful to specify priors and supports by using the PDATA= option. This example illustrates how to create a PDATA= data set that contains the priors and support points for use in a subsequent PROC ENTROPY step. In order to have a model to estimate in PROC ENTROPY, you must first have data to analyze. The following DATA step generates the data used in this analysis:
title "Using a PDATA= data set";
data a;
array x[4];
do t = 1 to 100;
ys = -5;
do k = 1 to 4;
x[k] = rannor( 55372 ) ;
ys = ys + x[k] * k;
end;
ys = ys + rannor( 55372 );
output;
end;
run;
Next you fit these data with some arbitrary parameter support points and priors by using the following PROC ENTROPY statements:
proc entropy data = a gme primal;
   priors x1 -10(2) 30(1)
          x2 -20(3) 30(2)
          x3 -15(4) 30(4)
          x4 -25(3) 30(2)
          intercept -13(4) 30(2);
   model ys = x1 x2 x3 x4 / esupports=(-25 0 25);
run;
These statements produce the output shown in Output 12.4.1.
Output 12.4.1 Output From PROC ENTROPY
Using a PDATA= data set
The ENTROPY Procedure
GME Variable Estimates
Variable Estimate Std Err t Value Pr > |t|
You can estimate the same model by first creating a PDATA= data set that includes the same information as the PRIORS statement in the preceding PROC ENTROPY step.
data test;
length Variable $ 12 Equation $ 12;
input Variable $ Equation $ Nsupport Support Prior ;
datalines;
Intercept  .  2  -13  0.66667
Intercept  .  2   30  0.33333
x1         .  2  -10  0.66667
x1         .  2   30  0.33333
x2         .  2  -20  0.60000
x2         .  2   30  0.40000
x3         .  2  -15  0.50000
x3         .  2   30  0.50000
x4         .  2  -25  0.60000
x4         .  2   30  0.40000
;
The following statements reestimate the model by using these support points:
proc entropy data=a gme primal pdata=test;
model ys = x1 x2 x3 x4 / esupports=(-25 0 25);
run;
These statements produce the output shown in Output 12.4.2.
Output 12.4.2 Output From PROC ENTROPY with PDATA= option
Using a PDATA= data set
The ENTROPY Procedure
GME Variable Estimates
Variable Estimate Std Err t Value Pr > |t|
These results are identical to the ones produced by the previous PROC ENTROPY step.
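If you want to check the equivalence programmatically rather than by inspecting the listings, the following sketch (not part of the original example) writes the estimates from both runs to data sets and compares them. It assumes that PROC ENTROPY accepts an OUTEST= option, as other SAS/ETS estimation procedures do; the data set names est_priors and est_pdata are arbitrary.

/* Sketch: compare the two sets of estimates, assuming PROC ENTROPY
   supports an OUTEST= data set. */
proc entropy data=a gme primal outest=est_priors;
   priors x1 -10(2) 30(1)  x2 -20(3) 30(2)  x3 -15(4) 30(4)
          x4 -25(3) 30(2)  intercept -13(4) 30(2);
   model ys = x1 x2 x3 x4 / esupports=(-25 0 25);
run;

proc entropy data=a gme primal pdata=test outest=est_pdata;
   model ys = x1 x2 x3 x4 / esupports=(-25 0 25);
run;

proc compare base=est_priors compare=est_pdata;
run;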
Example 12.5: Illustration of ODS Graphics
This example illustrates how to use ODS graphics in the ENTROPY procedure. This example is a continuation of the example in the section “Simple Regression Analysis” on page 662. Graphical displays are requested by specifying the ODS GRAPHICS statement. For information about the graphics available in the ENTROPY procedure, see the section “ODS Graphics” on page 710.
The following statements show how to generate ODS graphics plots with the ENTROPY procedure. The plots are displayed in Output 12.5.1.
ods graphics on;

proc entropy data=coleman;
model test_score = teach_sal prcnt_prof socio_stat
teach_score mom_ed;
run;
Output 12.5.1 Model Diagnostics Plots