17 Regression for Binary and Count Data
model{
for( i in 1 : N ) {
y[i] ~ dbin(p[i], n[i])
probit(p[i]) <- alpha.star + beta * (x[i] - mean(x[]))
yhat[i] <- n[i] * p[i]
}
alpha <- alpha.star - beta * mean(x[])
beta ~ dnorm(0.0, 0.001)
alpha.star ~ dnorm(0.0, 0.001)
}
DATA
list( x = c(1.6907, 1.7242, 1.7552, 1.7842,
1.8113, 1.8369, 1.8610, 1.8839),
n = c(59, 60, 62, 56, 63, 59, 62, 60),
y = c(6, 13, 18, 28, 52, 53, 61, 60), N = 8)
INITS
list(alpha.star=0, beta=0)
           mean    sd       MC error  val2.5pc  median  val97.5pc  start  sample
alpha      -35.03  2.652    0.01837   -40.35    -35.01  -29.98     1001   100000
alpha.star 0.4461  0.07724  5.435E-4  0.2938    0.4461  0.5973     1001   100000
beta       19.78   1.491    0.0104    16.94     19.77   22.78      1001   100000
yhat[1]    3.445   1.018    0.006083  1.757     3.336   5.725      1001   100000
yhat[2]    10.76   1.69     0.009674  7.643     10.7    14.26      1001   100000
yhat[3]    23.48   1.896    0.01095   19.77     23.47   27.2       1001   100000
yhat[4]    33.81   1.597    0.01072   30.62     33.83   36.85      1001   100000
yhat[5]    49.59   1.623    0.01208   46.28     49.63   52.64      1001   100000
yhat[6]    53.26   1.158    0.008777  50.8      53.33   55.33      1001   100000
yhat[7]    59.59   0.7477   0.00561   57.91     59.68   60.82      1001   100000
yhat[8]    59.17   0.3694   0.002721  58.28     59.23   59.71      1001   100000
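The fitted counts yhat[i] = n[i]*Phi(alpha.star + beta*(x[i] - mean(x))) can be checked outside the sampler by plugging the posterior point estimates into the probit link. The sketch below does this in Python with the standard normal CDF via math.erf; the plug-in fits land close to the posterior medians in the table (the posterior means differ slightly, since the mean of a nonlinear function is not the function of the mean):

```python
import math

# Bliss beetle data (from the DATA list above)
x = [1.6907, 1.7242, 1.7552, 1.7842, 1.8113, 1.8369, 1.8610, 1.8839]
n = [59, 60, 62, 56, 63, 59, 62, 60]
xbar = sum(x) / len(x)

# Posterior means from the MCMC output table
alpha_star, beta = 0.4461, 19.78

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

yhat = [ni * Phi(alpha_star + beta * (xi - xbar)) for xi, ni in zip(x, n)]
print(yhat)   # e.g. yhat[1] near 3.3 and yhat[8] near 59.2
```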
If, instead of the probit, the clog-log link is used, as cloglog(p[i]) <- alpha.star + beta * (x[i] - mean(x[])), then the coefficients are
       mean    sd     MC error  val2.5pc  median  val97.5pc  start  sample
alpha  -39.73  3.216  0.02195   -46.24    -39.66  -33.61     1001   100000
beta   22.13   1.786  0.01214   18.73     22.09   25.74      1001   100000
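Under the clog-log link, p = 1 - exp(-exp(eta)). A short Python sketch, plugging the posterior means of the uncentered alpha and beta from the table into this inverse link, shows the fitted counts track the observed ones (for example, about 5.5 expected deaths at the lowest dose against the 6 observed):

```python
import math

# Bliss beetle data (from the DATA list above)
x = [1.6907, 1.7242, 1.7552, 1.7842, 1.8113, 1.8369, 1.8610, 1.8839]
n = [59, 60, 62, 56, 63, 59, 62, 60]
y = [6, 13, 18, 28, 52, 53, 61, 60]

# Posterior means of the uncentered coefficients (table above)
alpha, beta = -39.73, 22.13

# Inverse clog-log link: p = 1 - exp(-exp(alpha + beta * x))
p = [1.0 - math.exp(-math.exp(alpha + beta * xi)) for xi in x]
fitted = [ni * pi for ni, pi in zip(n, p)]
print(fitted)
```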
For comparison, we take a look at the classical solution ( beetleBliss2.m).
Figure 17.7 shows three binary regressions (logit, probit and clog-log) fitting
the Bliss data.
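Since the classical fit in beetleBliss2.m is only excerpted here, a minimal Python sketch of the logit member of that comparison may help: it computes the maximum likelihood logistic fit by iteratively reweighted least squares (Fisher scoring), solving the 2x2 normal equations by hand rather than calling a library. The resulting estimates are the textbook MLEs for the Bliss data, roughly intercept -60.7 and slope 34.3:

```python
import math

# Bliss beetle data (from the DATA list above)
x = [1.6907, 1.7242, 1.7552, 1.7842, 1.8113, 1.8369, 1.8610, 1.8839]
n = [59, 60, 62, 56, 63, 59, 62, 60]
y = [6, 13, 18, 28, 52, 53, 61, 60]

def irls_logistic(x, y, n, iters=25):
    """Fit logit(p) = b0 + b1*x to binomial counts by IRLS."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        # Accumulate the weighted normal equations (X'WX) b = X'Wz
        s11 = s12 = s22 = t1 = t2 = 0.0
        for xi, yi, ni in zip(x, y, n):
            eta = b0 + b1 * xi
            p = 1.0 / (1.0 + math.exp(-eta))
            w = ni * p * (1.0 - p)           # IRLS weight
            z = eta + (yi - ni * p) / w      # working response
            s11 += w;      s12 += w * xi;      s22 += w * xi * xi
            t1 += w * z;   t2 += w * xi * z
        det = s11 * s22 - s12 * s12          # solve the 2x2 system
        b0 = (s22 * t1 - s12 * t2) / det
        b1 = (s11 * t2 - s12 * t1) / det
    return b0, b1

b0, b1 = irls_logistic(x, y, n)
print(b0, b1)   # roughly -60.7 and 34.3
```

The slope on the logit scale is larger than under the probit link, as expected, since the logistic distribution has a larger scale than the standard normal.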
disp('Logistic Regression 2: Bliss Beetle Data')
lw = 2.5;
set(0, 'DefaultAxesFontSize', 16);
fs = 15;
msize = 10;
beetle=[...
1.6907 6 59; 1.7242 13 60; 1.7552 18 62; 1.7842 28 56;...