best to combine their opinions. Strategies can be found in Press (2003) and
Garthwaite, Kadane, and O’Hagan (2005), with additional references therein.
For elicitation and informative priors for covariance matrices, we refer the
reader to Brown, Le, and Zidek (1994), Garthwaite and Al-Awadhi (2001),
and Daniels and Pourahmadi (2002); for correlation matrices, some recent
work can be found in Zhang et al. (2006).
Other recent work on elicitation can be found in Kadane and Wolfson
(1998), O’Hagan (1998), and Chen et al. (2003). Excellent reviews can be
found in Chaloner (1996), Press (2003), and Garthwaite et al. (2005).
Computing
Slice sampling is another approach to sampling from unknown full condi-
tional distributions (Damien, Wakefield, and Walker, 1999; Neal, 2003). The
approach involves augmenting the parameter space with non-negative latent
variables in a specific way. It is a special case of data augmentation. WinBUGS
uses this approach for parameters with bounded domains.
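The augmentation idea can be illustrated with a univariate sketch: introduce a latent height under the (unnormalized) density, then sample uniformly from the resulting horizontal "slice." The sketch below uses the stepping-out and shrinkage procedures of Neal (2003); the function names and the step-size parameter `w` are illustrative choices, not part of any particular implementation.

```python
import random

def slice_sample(log_f, x0, n, w=1.0, max_steps=50):
    """Univariate slice sampler (stepping-out and shrinkage; Neal, 2003).
    log_f: log of an unnormalized target density."""
    xs, x = [], x0
    for _ in range(n):
        # Augment with a latent height u ~ Uniform(0, f(x)),
        # equivalently log u = log f(x) - Exponential(1).
        log_u = log_f(x) - random.expovariate(1.0)
        # Step out to bracket the slice {x : f(x) > u}.
        lo = x - w * random.random()
        hi = lo + w
        steps = max_steps
        while steps > 0 and log_f(lo) > log_u:
            lo -= w
            steps -= 1
        steps = max_steps
        while steps > 0 and log_f(hi) > log_u:
            hi += w
            steps -= 1
        # Sample uniformly on the bracket, shrinking it on each rejection.
        while True:
            x1 = lo + (hi - lo) * random.random()
            if log_f(x1) > log_u:
                x = x1
                break
            if x1 < x:
                lo = x1
            else:
                hi = x1
        xs.append(x)
    return xs
```

Applied to a standard normal target (log_f = -x**2/2), the draws have mean near 0 and variance near 1; note that no normalizing constant is required, which is what makes the method attractive for nonstandard full conditionals.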
Another approach that has been used to sample from unknown full condi-
tionals is hybrid MC (Gustafson, 1997; Neal, 1996), which uses information
from the first derivative of the log full conditional; it has been implemented
successfully in longitudinal models by Ilk and Daniels (2007) in the context
of extensions of MTMs for multivariate longitudinal binary data. For
other candidate distributions for the Metropolis-Hastings algorithm, we refer
the reader to Gustafson et al. (2004), who propose and review approaches that
attempt to avoid the random walk behavior of certain Metropolis-Hastings
algorithms without necessarily doing an expensive numerical maximization at
each iteration; this includes hybrid MC as a special case.
For settings where the full conditional distribution is log concave, Gilks
and Wild (1992) have proposed easy-to-implement adaptive rejection sampling
algorithms.
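The key device in adaptive rejection sampling is that tangents to a concave log density form a piecewise-linear upper hull, so the envelope is piecewise exponential and easy to sample exactly. The sketch below is a simplified, non-adaptive version built from tangents at just two fixed points `zl` and `zr` (one on each side of the mode); Gilks and Wild (1992) additionally refine the envelope with each rejected point, which this sketch omits.

```python
import math
import random

def tangent_rejection(log_f, dlog_f, zl, zr, n):
    """Rejection sampling for a log-concave density using a two-tangent
    piecewise-exponential envelope (simplified sketch; not adaptive).
    Requires dlog_f(zl) > 0 and dlog_f(zr) < 0."""
    # Tangent lines t(x) = a*x + b at zl and zr bound log_f from above.
    al, bl = dlog_f(zl), log_f(zl) - dlog_f(zl) * zl
    ar, br = dlog_f(zr), log_f(zr) - dlog_f(zr) * zr
    xc = (br - bl) / (al - ar)  # intersection of the two tangents
    # Mass under each exponential piece exp(a*x + b) of the envelope.
    wl = math.exp(al * xc + bl) / al        # piece on (-inf, xc]
    wr = -math.exp(ar * xc + br) / ar       # piece on [xc, inf)
    out = []
    while len(out) < n:
        # Draw from the envelope by inverse CDF on the chosen piece.
        if random.random() < wl / (wl + wr):
            x = (math.log(random.random() * wl * al) - bl) / al
            env = al * x + bl
        else:
            x = xc + math.log(1.0 - random.random()) / ar
            env = ar * x + br
        # Accept with probability f(x) / envelope(x).
        if math.log(random.random()) < log_f(x) - env:
            out.append(x)
    return out
```

For a standard normal target with tangents at -1 and 1, roughly three quarters of the envelope draws are accepted; the adaptive refinement in the actual algorithm drives the acceptance rate toward one as tangent points accumulate.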
Parameter expansion algorithms for sampling correlation matrices have re-
cently been proposed (Liu, 2001; Liu and Daniels, 2006); these greatly simplify
sampling by providing a conditionally conjugate structure, sampling a co-
variance matrix from the appropriate distribution and then transforming it
back to a correlation matrix.
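The expand-then-reduce step can be sketched as follows: draw a covariance matrix from a conditionally conjugate (inverse-)Wishart distribution and rescale it to a correlation matrix. This is only the mechanical transformation, with an inverse-Wishart draw and illustrative degrees of freedom standing in for the appropriate distribution; the cited algorithms specify the expanded-parameter distributions and any acceptance steps carefully.

```python
import numpy as np

def sample_wishart(df, S, rng):
    """Draw W ~ Wishart(df, S) via the Bartlett decomposition."""
    p = S.shape[0]
    L = np.linalg.cholesky(S)
    A = np.zeros((p, p))
    for i in range(p):
        A[i, i] = np.sqrt(rng.chisquare(df - i))
        for j in range(i):
            A[i, j] = rng.standard_normal()
    LA = L @ A
    return LA @ LA.T

def px_correlation_draw(df, S, rng):
    """Expand-then-reduce: draw a covariance matrix from an
    inverse-Wishart(df, S), then rescale it to a correlation matrix."""
    Sigma = np.linalg.inv(sample_wishart(df, np.linalg.inv(S), rng))
    d = 1.0 / np.sqrt(np.diag(Sigma))
    return Sigma * np.outer(d, d)  # R = D^{-1/2} Sigma D^{-1/2}
```

The reduction preserves positive definiteness and yields unit diagonal by construction, which is what makes the expanded covariance parameterization so convenient relative to sampling the constrained correlations directly.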
The efficiency of posterior summaries can be increased by using a tech-
nique called Rao-Blackwellization (Gelfand and Smith, 1990; Liu, Wong, and
Kong, 1994). See Eberly and Casella (2003) for Rao-Blackwellization applied
to credible intervals.
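The idea can be made concrete with a Gibbs sampler for a bivariate normal with correlation rho, where the full conditional X | Y = y is N(rho*y, 1 - rho^2): instead of averaging the draws of X, average the conditional means E[X | Y = y_t], which are available in closed form. This toy target and the function name are illustrative only.

```python
import math
import random

def gibbs_bivariate_normal(rho, n, rng=random):
    """Gibbs sampler for standard bivariate normal with correlation rho,
    returning the naive and Rao-Blackwellized estimates of E[X]."""
    x, y = 0.0, 0.0
    xs, cond_means = [], []
    s = math.sqrt(1.0 - rho * rho)
    for _ in range(n):
        x = rng.gauss(rho * y, s)  # X | Y = y ~ N(rho*y, 1 - rho^2)
        y = rng.gauss(rho * x, s)  # Y | X = x ~ N(rho*x, 1 - rho^2)
        xs.append(x)
        # Rao-Blackwellized summary: record E[X | Y = y] = rho * y
        # rather than the noisy draw x itself.
        cond_means.append(rho * y)
    naive = sum(xs) / n
    rb = sum(cond_means) / n
    return naive, rb
```

Both estimators are consistent for E[X] = 0, but the Rao-Blackwellized average replaces each draw by its conditional expectation and so has smaller Monte Carlo variance, per Gelfand and Smith (1990) and Liu, Wong, and Kong (1994).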
Additional extensions of data augmentation, termed marginal, conditional,
and joint, that can further improve the efficiency and convergence of the
MCMC algorithm can be found in van Dyk and Meng (2001).