
How to Fool Yourself with SEM 365
esis testing (e.g., whether an error variance differs statistically from zero). A third is
to forget that statistical tests of individual effects tend to result in rejection of the null
hypothesis too often when non-normal data are analyzed by methods that assume nor-
mality. See point 27 for related misuses of statistical tests in SEM.
45. Interpret the standardized solution in inappropriate ways. This is a relatively com-
mon mistake in multiple-sample SEM—specifically, to compare standardized estimates
across groups that differ in their variabilities. In general, standardized solutions are fine
for comparisons within each group (e.g., the relative magnitudes of direct effects on the
same endogenous variable), but only unstandardized solutions are usually appropriate
for cross-group comparisons. A related error is to interpret group differences in the stan-
dardized estimates of equality-constrained parameters: the unstandardized estimates of
such parameters are forced to be equal, but their standardized counterparts are typi-
cally unequal if the groups have different variabilities.
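The point above can be checked with simple arithmetic. The sketch below (in Python, with hypothetical parameter values chosen for illustration) holds the unstandardized slope constant across two groups and shows that the standardized coefficient still differs whenever the groups' predictor variances differ, because the standardized coefficient rescales the slope by sd(X)/sd(Y):

```python
import math

# Two groups share the same unstandardized slope b relating X to Y,
# but differ in the variance of X (hypothetical values for illustration).
b = 0.5        # common unstandardized path coefficient in both groups
var_e = 1.0    # error variance of Y, assumed equal across groups

def standardized(b, var_x, var_e):
    """Standardized coefficient: b * sd(X) / sd(Y),
    where the implied var(Y) = b^2 * var(X) + var(e)."""
    var_y = b**2 * var_x + var_e
    return b * math.sqrt(var_x) / math.sqrt(var_y)

beta_a = standardized(b, var_x=1.0, var_e=var_e)  # group A: var(X) = 1
beta_b = standardized(b, var_x=4.0, var_e=var_e)  # group B: var(X) = 4

print(round(beta_a, 3))  # 0.447
print(round(beta_b, 3))  # 0.707
```

The unstandardized slopes are identical by construction, yet the standardized estimates (.45 vs. .71) differ substantially, which is why cross-group comparisons should rest on the unstandardized solution.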
46. Fail to consider equivalent or near-equivalent models. Essentially all structural
equation models have equivalent versions that generate the same predicted correlations
or covariances. For latent variable models, there may be infinitely many equivalent mod-
els. There are probably also near-equivalent versions that generate almost the same cova-
riances as those in the data matrix. Researchers must offer reasons why their models are
to be preferred over some obvious equivalent or near-equivalent versions of them.
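Equivalence is easy to demonstrate in the simplest case. The sketch below (hypothetical parameter values) specifies a two-variable path model X → Y, then reverses the path to Y → X with parameters solved from the same covariance information; both models reproduce the identical covariance matrix, so the data cannot distinguish them:

```python
import numpy as np

# Model 1 (hypothetical values): X -> Y with slope b
b, var_x, var_e = 0.5, 1.0, 1.0
var_y = b**2 * var_x + var_e        # implied var(Y)
cov_xy = b * var_x                  # implied cov(X, Y)
sigma1 = np.array([[var_x, cov_xy],
                   [cov_xy, var_y]])

# Model 2: Y -> X, parameters chosen to match the same covariance facts
b2 = cov_xy / var_y                 # slope of the reversed path
var_e2 = var_x - b2**2 * var_y      # error variance of X under model 2
sigma2 = np.array([[b2**2 * var_y + var_e2, b2 * var_y],
                   [b2 * var_y, var_y]])

print(np.allclose(sigma1, sigma2))  # True: identical implied covariances
```

Because the two models imply exactly the same covariance matrix, they have identical fit to any data set, and only substantive argument (not fit statistics) can justify preferring one causal direction over the other.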
47. Fail to consider (nonequivalent) alternative models. When there are competing
theories about the same phenomenon, it may be possible to specify alternative models
that reflect them. Not all of these alternatives may be equivalent versions of one another.
If the overall fits of some of these alternative models are comparable, then the researcher
must explain why a particular model is to be preferred.
48. Reify the factors. Believe that constructs represented in your model must corre-
spond to things in the real world. Perhaps they do, but do not assume it.
49. Believe that naming a factor means that it is understood (i.e., commit the naming
fallacy). Factor names are conveniences, not explanations. For example, if a three-factor
model fits the data, this does not prove that the verbal labels assigned by the researcher
to the factors are correct. Alternative interpretations of the factors are possible in many,
if not most, factor analyses.
50. Believe that a strong analytical method like SEM can compensate for poor study
design or slipshod ideas. No statistical procedure can make up for inherent logical or
design flaws. For example, expressing poorly thought out hypotheses with a path dia-
gram does not give them more credibility. The specification of direct and indirect effects
in a structural model cannot be viewed as a replacement for an experimental or longi-
tudinal design. As mentioned earlier, the inclusion of a measurement error term for an
observed variable that is psychometrically deficient cannot somehow transform it into
a good measure. Applying SEM in the absence of good design, measures, and ideas is
like using a chain saw to cut butter: one may accomplish the task, but one is just as
likely to make a big mess.
51. As the researcher, fail to report enough information so that your readers can repro-
duce your results. There are still too many reports in the literature where SEM was used