violence” by a factor of exp(.064) = 1.066. Or each unit increase in positive communication lowers the odds of “intense male violence” by exp(−.443) = .642, whereas it lowers the odds of “any violence” by exp(−.441) = .643, a virtually indistinguishable difference in effects. If the effects of predictors are invariant to the cutpoint, a more parsimonious specification of equation (8.3) is possible. This is
\log O_{\le j} = \beta_{0j} + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_K X_K. \qquad (8.4)
This is the ordered logit or proportional odds model (Agresti, 2002). In this model, the
effects of predictors are the same, regardless of the cutpoint for the odds. That is, each
unit increase in a given predictor, say X_k, multiplies the odds by a proportionality constant of exp(β_k), regardless of the cutpoint chosen. The results of estimating this model
(using procedure LOGISTIC in SAS) are shown in the last column of Table 8.7. Notice
that the intercept is allowed to depend on the cutpoint, so there are two intercepts in
the equation. (In fact, there are two different equations, but the coefficients are being
constrained to be the same in each.) Because the effects of the predictors are assumed to be invariant to the cutpoint, there is only one set of regression coefficients. Effects are interpreted just
as in binary logistic regression, except that the response is the log odds of “more
severe” versus “less severe” violence, rather than, as in Table 8.4, violence per se.
Thus, cohabiting is seen to raise the odds of “more severe” violence by exp(.909) = 2.482, or about 148%, whereas each unit increase in positive communication lowers
the odds of “more severe” violence by about 35%.
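To make the estimation concrete, a minimal SAS sketch of how such a model might be fit is shown below; the data set and variable names (couples, violence, cohab, poscomm) are hypothetical stand-ins rather than the actual variables behind Table 8.7.

    proc logistic data=couples descending;
      /* violence is assumed to be an ordinal severity code, e.g.,
         0 = no violence, 1 = any violence, 2 = intense male violence;
         DESCENDING makes the cumulative logits refer to the more
         severe categories. With more than two response levels,
         PROC LOGISTIC fits the cumulative-logit (proportional odds)
         model of equation (8.4) and prints the score test for the
         proportional odds assumption by default. */
      model violence = cohab poscomm;
    run;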
Test of Invariance. In the first two columns of Table 8.7, where effects are allowed
to depend on the cutpoint, some predictors appear to have different effects on the
odds of “intense male violence” compared to violence per se. For example, the effect
of cohabiting on the odds of “intense male violence” is exp(1.202) = 3.327, whereas its effect on the odds of “any violence” is only exp(.867) = 2.380. Moreover, other regressors, for example, female’s age at union or economic disadvantage, have significant effects on only one of the odds. Are these variations real differences
or just the result of sampling error? This can be tested using the score test for the
proportional odds assumption (provided automatically in SAS). The test statistic
tests the null hypothesis that regressor effects are the same across all J − 1 possible cutpoints. That is, H_0 is that for each of the K regressors in the model, β_kj = β_k for j = 1, 2, ..., J − 1. Under the null hypothesis, the score statistic is asymptotically
distributed as chi-squared with degrees of freedom equal to K(J − 2). This is the difference in the number of parameters required to estimate the model in equation (8.3) versus equation (8.4): K(J − 1) − K = K(J − 1 − 1) = K(J − 2). As shown in
Table 8.7 for the current example, its value is 12.457, which, with 11 degrees of free-
dom, is not significant. That is, there is insufficient evidence to reject the proportional odds assumption, and the more parsimonious proportional odds model appears reasonable for these data.
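For the current example, the degrees of freedom can be verified as follows, assuming (as the 11 degrees of freedom reported in Table 8.7 imply) that there are J = 3 ordered categories of violence and K = 11 regressors:

    \[
    \mathrm{df} = K(J - 1) - K = K(J - 2) = 11(3 - 2) = 11 .
    \]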
Estimating Probabilities with the Proportional Odds Model. In the event that estimates of P(Y = j) for j = 1, 2, ..., J, based on the proportional odds model, are of