2. Drop the disturbance D_Ri from the model in Figure 10.7, which would convert the latent risk composite into a weighted combination of its cause indicators (see the equations following this list). Grace (2006)
argues that (a) error variance estimates for latent composites may have little theoreti-
cal significance in some contexts, and (b) the presence or absence of these error terms
should not by itself drive decisions about the inclusion of composites in the model.
3. Drop the risk composite from the model in Figure 10.7 and replace it with direct
effects from the three cause indicators to each of the two endogenous factors with effect
indicators.
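To make option 2 concrete, the two specifications of the risk composite can be sketched in equation form. The symbols are generic (cause indicators X_1, X_2, X_3 with weights \gamma_1, \gamma_2, \gamma_3) rather than the exact labels used in Figure 10.7:

    Risk = \gamma_1 X_1 + \gamma_2 X_2 + \gamma_3 X_3 + D_{Ri}   (disturbance retained: latent composite)
    Risk = \gamma_1 X_1 + \gamma_2 X_2 + \gamma_3 X_3            (disturbance dropped: weighted combination of the cause indicators)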
Each respecification option just described would identify the direct effect between
achievement and classroom adjustment in Figure 10.7. Whether any of these options makes theoretical sense is another matter, and in a particular study it is theory that would dictate which, if any, of these respecifications is plausible.
Grace (2006, chap. 6) and Grace and Bollen (2008) describe many examples of the
analysis of models with composites in the environmental sciences. Jarvis, MacKenzie,
and Podsakoff (2003) and others advise researchers in the consumer research area—
and the rest of us, too—not to automatically specify factors with effect indicators only, because doing so may result in specification error, perhaps due to lack of familiarity
with formative measurement models. On the other hand, the specification of formative
measurement is not a panacea. For example, because cause indicators are exogenous,
their variances and covariances are not explained by a formative measurement model.
This makes it more difficult to assess the validity of a set of cause indicators (Bollen,
1989). A related problem is that error variance in formative measurement is represented at the construct level rather than at the indicator level, as it is in reflective measurement. Howell, Breivik, and Wilcox (2007) note that formative measurement models are
more susceptible than reflective measurement models to interpretational confounding, in which values of indicator loadings are affected by changes in the structural model. The
absence of a nominal definition of a formative factor apart from the empirical values of
loadings of its indicators exacerbates this problem. For these and other reasons, Howell
et al. conclude that (1) formative measurement is not an equally attractive alternative
to reflective measurement and (2) researchers should try to include reflective indica-
tors whenever other indicators are specified as cause indicators of the same construct,
but see Bagozzi (2007) and Bollen (2007) for other views. See also the special issue on
formative measurement in the Journal of Business Research (Diamantopoulos, 2008) for more information.
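The contrast just described can be summarized in equation form. Using generic notation (a construct \eta, loadings \lambda, weights \gamma) rather than any particular model's labels, a reflective indicator and a formative construct can be written as

    Reflective:  X_i = \lambda_i \eta + \epsilon_i                      (measurement error \epsilon_i at the indicator level)
    Formative:   \eta = \gamma_1 X_1 + \cdots + \gamma_q X_q + \zeta    (the only error term, the disturbance \zeta, at the construct level)

so the formative specification carries its error term at the level of the construct rather than at the level of the individual indicators.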
An alternative to SEM for analyzing models with both measurement and structural
components is partial least squares path modeling, also known as latent variable
partial least squares. In this approach, constructs are estimated as linear combinations
of observed variables, or composites. Although SEM is better for testing strong hypoth-
eses about measurement, the partial least squares approach is well suited for situations
where (1) prediction is emphasized over theory testing and (2) it is difficult to meet the
requirements for large samples or identification in SEM. See Topic Box 10.1 for more
information.
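As a minimal illustration of the core idea that constructs in partial least squares path modeling are estimated as composites, the sketch below forms construct scores as a weighted linear combination of standardized indicators. The data and the weights are arbitrary, illustrative values; an actual partial least squares analysis would estimate the indicator weights iteratively from the data and from the structural relations among the composites.

    import numpy as np

    # Illustrative sketch only: a construct score as a weighted linear
    # combination (composite) of its observed indicators. The weights below
    # are arbitrary; PLS path modeling would estimate them from the data.
    rng = np.random.default_rng(seed=1)
    X = rng.normal(size=(200, 3))                 # 200 cases, 3 simulated indicators
    X_std = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each indicator

    weights = np.array([0.5, 0.3, 0.2])           # illustrative indicator weights
    scores = X_std @ weights                      # composite (construct) scores
    scores = (scores - scores.mean()) / scores.std()  # rescale to unit variance

In this sketch the construct is literally a linear combination of its observed variables, which is what distinguishes composite-based estimation from the latent-variable estimation used in SEM.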