degrees of freedom. This is because the effect of the correction for parsimony diminishes
as the sample size becomes increasingly large (Mulaik, 2009). See Mulaik (pp. 342–345)
for more information about other parsimony corrections in SEM.
The population parameter estimated by the RMSEA is often designated as ε (epsi-
lon). In computer output, the lower and upper bounds of the 90% confidence interval for
ε are often printed along with the sample value of the RMSEA, the point estimate of ε.
As expected, the width of this confidence interval is generally larger in smaller samples,
which indicates less precision. The bounds of the confidence interval for ε may not
be symmetrical around the sample value of the RMSEA, and, ideally, the lower bound
equals zero. Both the lower and upper bounds are estimated assuming noncentral chi-
square distributions. If these distributional assumptions do not hold, then the bounds
of the confidence interval for ε may be wrong.
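As an illustrative sketch (not part of the original text), the noncentral chi-square method for the RMSEA interval can be expressed in code. The sketch below assumes SciPy's `ncx2` distribution and `brentq` root finder; it inverts the noncentral chi-square cdf to find the noncentrality parameters that bracket the observed model chi-square, then converts them to bounds on ε.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2, ncx2


def rmsea_with_ci(chisq, df, n, conf=0.90):
    """Point estimate of epsilon (RMSEA) and a confidence interval,
    obtained by inverting the noncentral chi-square distribution.
    chisq: model chi-square; df: its degrees of freedom; n: sample size.
    Returns (point, lower, upper). Illustrative only."""
    denom = df * (n - 1)
    point = np.sqrt(max(0.0, (chisq - df) / denom))

    def ncp_for(p):
        # Find noncentrality lambda with P(X <= chisq | df, lambda) = p.
        # The cdf decreases in lambda, so if even lambda = 0 (the central
        # chi-square) puts chisq below the p-quantile, the bound is zero.
        if chi2.cdf(chisq, df) <= p:
            return 0.0
        f = lambda lam: ncx2.cdf(chisq, df, lam) - p
        return brentq(f, 1e-9, 10.0 * chisq + 100.0)

    upper_tail = (1.0 + conf) / 2.0   # .95 for a 90% interval
    lower_tail = (1.0 - conf) / 2.0   # .05 for a 90% interval
    lower = np.sqrt(ncp_for(upper_tail) / denom)
    upper = np.sqrt(ncp_for(lower_tail) / denom)
    return point, lower, upper
```

For example, `rmsea_with_ci(40.0, 20, 200)` returns a point estimate of about .07 with asymmetric bounds around it, consistent with the asymmetry described above.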
Some computer programs, such as LISREL and Mplus, calculate p values for the
test of the one-sided hypothesis H₀: ε₀ ≤ .05, or the close-fit hypothesis. This test is an
accept–support test where failure to reject this null hypothesis favors the researcher’s
model. The value .05 in the close-fit hypothesis originates from Browne and Cudeck
(1993), who suggested that RMSEA ≤ .05 may indicate “good fit.” But this threshold is
a rule of thumb that may not generalize across all studies, especially when distributional
assumptions are in doubt. When the lower limit of the confidence interval for ε is zero, the
model chi-square test will not reject the null hypothesis that ε₀ = 0 at α = .05. Otherwise,
a model could fail the more stringent model chi-square test but pass the less demand-
ing close-fit test. Hayduk, Pazderka-Robinson, Cummings, Levers, and Beres (2005)
describe such models as close-yet-failing models. Such models should be treated as
any other that fails the chi-square test. That is, passing the close-fit test does not justify
ignoring a failed exact-fit test.
If the upper bound of the confidence interval for ε exceeds a value that may indicate
“poor fit,” then the model warrants less confidence. For example, the test of the poor-
fit hypothesis H₀: ε₀ ≥ .10 is a reject–support test of whether the fit of the researcher’s
model is just as bad or even worse than that of a model with “poor fit.” The threshold
of .10 in the poor-fit hypothesis is also from Browne and Cudeck (1993), who suggested
that RMSEA ≥ .10 may indicate a serious problem. The test of the poor-fit hypothesis
can serve as a kind of reality check against the test of the close-fit hypothesis. (The
tougher exact-fit test serves this purpose, too.) Suppose that RMSEA = .045 with the
90% confidence interval .009–.155. Because the lower bound of this interval (.009) is
less than .05, the close-fit hypothesis is not rejected. The upper bound of the same con-
fidence interval (.155) exceeds .10, however, so we cannot reject the poor-fit hypothesis.
These two outcomes are not contradictory. Instead, we would conclude that the point-
estimate RMSEA = .045 is subject to a fair amount of sampling error because it is just as
consistent with the close-fit hypothesis as it is with the poor-fit hypothesis. This type of
“mixed” outcome is more likely to happen in smaller samples. A larger sample may be
required in order to obtain more precise results.
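The decision rules in this example can be summarized as a small helper (the function name is hypothetical; the .05 and .10 thresholds are the Browne and Cudeck, 1993, rules of thumb):

```python
def fit_tests(lower, upper, close=0.05, poor=0.10):
    """Apply the close-fit and poor-fit decision rules to the bounds of a
    90% confidence interval for epsilon. Thresholds follow Browne and
    Cudeck (1993); the function name is illustrative, not standard.
    Returns (close_fit_rejected, poor_fit_rejected)."""
    reject_close = lower > close   # reject H0: eps <= .05 only if lower bound > .05
    reject_poor = upper < poor     # reject H0: eps >= .10 only if upper bound < .10
    return reject_close, reject_poor


# The "mixed" outcome from the text: 90% CI of .009-.155
print(fit_tests(0.009, 0.155))   # neither hypothesis is rejected
```

With the interval .009–.155, neither rule fires, which reproduces the mixed outcome: the data are consistent with both close fit and poor fit.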
Some limitations of the RMSEA are as follows: