The LIML estimates are generally quite close to the FIML ones. This should usually be the case, as the LIML estimates are nearly as efficient as their FIML counterparts (Begg and Gray, 1984). One might wonder why we would bother with LIML if the FIML software is available, as it generally is. As Hosmer and Lemeshow (2000) point out, the LIML approach has some specific advantages. First, it allows the model for each log odds to be different if we so choose, that is, to contain different regressors or different functions of regressors, an approach not possible with FIML. Second, it allows one to take advantage of features that may be offered in binary logistic regression software but not multinomial logistic regression software. Examples are weighting by case weights or diagnosing influential observations. Third, means of assessing empirical consistency, such as the Hosmer–Lemeshow χ², are not yet well developed for the multinomial model (Hosmer and Lemeshow, 2000). However, using the LIML approach, empirical consistency can be assessed for each equation separately, as discussed in Chapter 7. In fact, for the model in Table 8.5, the Hosmer–Lemeshow χ² is 11.823 for the equation for “intense male violence” and 6.652 for the equation for “physical aggression.” Both values are nonsignificant, suggesting an acceptable model fit.
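To make the LIML idea concrete, the sketch below fits a separate binary logistic regression for each non-reference outcome category (each contrasted with the reference category) and computes a Hosmer–Lemeshow χ² for each equation. It is a minimal illustration in Python with statsmodels rather than the SAS procedures discussed in the text; the data frame, outcome coding, and predictor names (df, y, x1, x2) are hypothetical placeholders, and the hosmer_lemeshow helper is written here for the example rather than taken from a library.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow chi-squared: group cases into deciles of fitted risk and
    compare observed with expected event counts (df = groups - 2)."""
    order = np.argsort(p)
    y, p = np.asarray(y)[order], np.asarray(p)[order]
    chi2 = sum((y[b].sum() - p[b].sum()) ** 2 / (p[b].sum() * (1 - p[b].mean()))
               for b in np.array_split(np.arange(len(p)), groups))
    return chi2, stats.chi2.sf(chi2, groups - 2)

# Hypothetical data: outcome coded 0 = reference category,
# 1 = "physical aggression", 2 = "intense male violence".
rng = np.random.default_rng(0)
df = pd.DataFrame({"y": rng.integers(0, 3, 500),
                   "x1": rng.normal(size=500),
                   "x2": rng.normal(size=500)})
X = sm.add_constant(df[["x1", "x2"]])

# LIML: one binary logit per non-reference category versus the reference
# category; each equation's fit is then assessed separately.
for m in (1, 2):
    sub = df["y"].isin([0, m])
    fit = sm.Logit((df.loc[sub, "y"] == m).astype(int), X[sub]).fit(disp=0)
    hl, pval = hosmer_lemeshow(fit.model.endog, fit.predict())
    print(f"category {m}: Hosmer-Lemeshow chi2 = {hl:.3f}, p = {pval:.3f}")
```

Because each equation is an ordinary binary logit, any feature of binary logistic regression software, such as case weights or influence diagnostics, can be applied to it directly.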
Inferences. In multinomial logistic regression, there are several statistical tests of interest. First, as in binary logistic regression, there is a test statistic for whether the model as a whole exhibits any predictive efficacy. The null hypothesis is that all K(M − 1) of the regression coefficients (i.e., the betas) in equation group (8.1) equal zero. Once again, the test statistic is the model chi-squared, equal to −2 log(L₀/L₁), where L₀ is the likelihood function evaluated at the MLEs for a model containing only the intercepts and L₁ is the likelihood function evaluated at the MLEs for the hypothesized model. This test is not automatically output in CATMOD. However, as the program always prints out −2 log L for the current model, it can be readily computed by first estimating a model with no predictors and then recovering −2 log L₀ from the printout (it is the value of “−2 log likelihood” for the last iteration on the printout). The test can then be computed as −2 log L₀ − (−2 log L₁). For the model in Table 8.5, −2 log L₀ was 3895.2458, while −2 log L₁ was 3419.5082. The test statistic was therefore 3895.2458 − 3419.5082 = 475.7376, with 11(2) = 22 degrees of freedom, a highly significant result. (The LIML equations each have their own model χ², as shown in the table.)
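As an illustration of this computation, the sketch below fits the intercepts-only (null) and hypothesized multinomial models and forms the model chi-squared by subtraction. It again uses Python's statsmodels (MNLogit) in place of CATMOD, and it reuses the hypothetical df, y, x1, and x2 from the previous sketch.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Hypothesized model and intercepts-only (null) model.
X_full = sm.add_constant(df[["x1", "x2"]])
full = sm.MNLogit(df["y"], X_full).fit(disp=0)
null = sm.MNLogit(df["y"], np.ones((len(df), 1))).fit(disp=0)

# Model chi-squared: (-2 log L0) - (-2 log L1), with K(M - 1) degrees of freedom.
M = df["y"].nunique()                 # number of response categories
K = X_full.shape[1] - 1               # number of predictors, excluding the intercept
lr = (-2 * null.llf) - (-2 * full.llf)
p = stats.chi2.sf(lr, K * (M - 1))
print(f"model chi2 = {lr:.4f}, df = {K * (M - 1)}, p = {p:.4g}")
```

The same subtraction, applied to a model that omits a single predictor, gives the global test for that predictor discussed next.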
Second, the test statistic, using FIML, for the global effect on the response variable of a given predictor, say Xₖ, is not a single-degree-of-freedom test statistic as in the binary case. For multinomial models, there are (M − 1) βₖ’s representing the global effect of Xₖ, one for each of the log odds in equation group (8.1). Therefore, the test statistic is for the null hypothesis that all M − 1 of these βₖ’s equal zero. There are two ways to construct the test statistic. One is to run the model with and without Xₖ and note the value of −2 log L in each case. Then, if the null hypothesis is true, the difference in −2 log L for the models with and without Xₖ is asymptotically distributed as chi-squared with M − 1 degrees of freedom. This test requires running several different models, however, excluding one of the predictors on each run. Instead, most software packages, including SAS, provide an asymptotically