student is currently taking), and number of previous math courses (the number of
previous college-level math courses taken by the student).
Recall that when discussing the authenticity of the first model in Chapter 2, I sug-
gested academic ability as the real reason for the association of math diagnostic per-
formance with exam scores. Should this be the case, we would expect that with
academic ability held constant, diagnostic scores would no longer have any impact on
exam performance. That is, if academic ability is Z in Figure 3.2 and diagnostic score
is X, this hypothesis suggests that there is no connection from X to Y, but rather, it is
the connection from X to Z and from Z to Y that causes Y to vary when X varies.
(Instead of a curved line connecting X with Z, we now imagine a directed arrow from
Z to X, since Z is considered to cause X as well as Y. The mathematics will be the same,
as shown below in the section on omitted-variable bias.) The measure of academic
ability I choose in this case is college GPA, since it reflects the student’s performance
across all classes taken prior to the current semester and is therefore a proxy for aca-
demic ability.
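As a compressed preview of that algebra, here is a minimal sketch in my own notation (not the book's): suppose Z causes both X and Y, with no direct path from X to Y.

```latex
% Sketch (notation mine): Z -> X and Z -> Y, no direct X -> Y path.
X = aZ + e_X, \qquad Y = cZ + e_Y, \qquad
\operatorname{Cov}(Z,e_X) = \operatorname{Cov}(Z,e_Y) = \operatorname{Cov}(e_X,e_Y) = 0 .
% X and Y then covary through Z alone:
\operatorname{Cov}(X,Y) = ac\operatorname{Var}(Z) \neq 0 ,
% yet with Cov(X,Z) = a Var(Z) and Cov(Z,Y) = c Var(Z), the coefficient
% of X in the regression of Y on both X and Z has numerator
% ac Var(Z)^2 - ac Var(Z)^2 = 0:
\beta_{X \mid Z}
  = \frac{\operatorname{Cov}(X,Y)\operatorname{Var}(Z)
          - \operatorname{Cov}(X,Z)\operatorname{Cov}(Z,Y)}
         {\operatorname{Var}(X)\operatorname{Var}(Z) - \operatorname{Cov}(X,Z)^{2}}
  = 0 .
```

So X predicts Y by itself yet contributes nothing once Z is held constant, which is exactly the pattern the hypothesis predicts for the diagnostic score once academic ability enters the model.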
The first model shows that exam performance is, on average, 2.749 points higher
for each point higher that a student scores on the diagnostic. Model 2, with college
GPA added, shows that this effect is reduced somewhat but is still significant. (Whether
this reduction itself is significant is assessed below.) Net of academic ability, exam per-
formance is still, on average, 2.275 points higher for each point higher a student scores
on the diagnostic. It appears that academic ability does not explain all of the associa-
tion of diagnostic scores with exam scores, contrary to the hypothesis. College GPA
also has a substantial effect on exam performance. Holding the diagnostic score con-
stant, students who are a unit higher in GPA are estimated to be, on average, about 13.5
points higher on the exam. The model with two predictors explains about 42% of the
variance in exam scores. Adding college GPA thus increases the proportion of
explained variation by .419 − .268 = .151. This increment to R² resulting from the
addition of college GPA is referred to as the squared semipartial correlation coefficient
between exam performance and college GPA, controlling for diagnostic score. The
semipartial correlation coefficient between college GPA and exam performance, con-
trolling for diagnostic score, is the square root of this quantity, or .389. Although the
increment to R² is a meaningful quantity, the semipartial correlation coefficient is not
particularly useful. A more useful correlation coefficient that takes into account other
model predictors is the partial correlation coefficient, explained below. Our estimate
of σ² for model 2 is MSE, which is 169.878.
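The model comparison just described can be reproduced in outline with standard regression software. Below is a minimal sketch (not the book's code) using Python and statsmodels on synthetic illustrative data; the column names exam, diagnostic, and gpa are hypothetical stand-ins for the book's variables, and the printed values will not match the book's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic illustrative data (not the book's): a latent "ability"
# drives both the diagnostic score and college GPA.
rng = np.random.default_rng(0)
n = 200
ability = rng.normal(0.0, 1.0, n)
diagnostic = 20 + 3 * ability + rng.normal(0.0, 2.0, n)
gpa = 3.0 + 0.3 * ability + rng.normal(0.0, 0.2, n)
exam = 30 + 2.3 * diagnostic + 13.5 * gpa + rng.normal(0.0, 13.0, n)
df = pd.DataFrame({"exam": exam, "diagnostic": diagnostic, "gpa": gpa})

m1 = smf.ols("exam ~ diagnostic", data=df).fit()        # model 1
m2 = smf.ols("exam ~ diagnostic + gpa", data=df).fit()  # model 2

# Increment to R-squared = squared semipartial correlation of exam
# with gpa, controlling for diagnostic (.419 - .268 = .151 in the text).
sr2 = m2.rsquared - m1.rsquared
sr = np.sqrt(sr2)  # semipartial correlation (.389 in the text)

# MSE from model 2 estimates sigma squared (169.878 in the text).
print(m1.params["diagnostic"], m2.params["diagnostic"])
print(sr2, sr, m2.mse_resid)
```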
The third model adds the last three predictors. This model explains 45% of the
variance in exam scores. Of the three added predictors, two are significant: attitude
toward statistics and class hours in the current semester. Each unit increase in attitude
is worth about a third of a point increase in exam performance, on average. Each addi-
tional hour of classes taken during the semester is worth about eight-tenths of an addi-
tional point on the exam, on average. This last finding is somewhat counterintuitive,
in that those with a greater class burden have less time to devote to any one class.
Perhaps these students are especially motivated to succeed, or perhaps they have few
other obligations, such as jobs or families, and so can devote more time to their
studies. The intercept in all three models is clearly uninterpretable, since it
corresponds to a student with a value of zero on every predictor, a combination well
outside the range of the data.
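Continuing the sketch above, model 3 can be added in the same way; here attitude, class_hours, and prev_math are hypothetical column names for the three new predictors, and anova_lm supplies the joint F-test for the added terms.

```python
from statsmodels.stats.anova import anova_lm

# Extend the synthetic data with stand-ins for the three new predictors
# (purely illustrative; unrelated to exam in this fake data).
df["attitude"] = rng.normal(50.0, 10.0, n)
df["class_hours"] = rng.integers(9, 19, n)
df["prev_math"] = rng.integers(0, 5, n)

m3 = smf.ols(
    "exam ~ diagnostic + gpa + attitude + class_hours + prev_math",
    data=df,
).fit()

print(m3.rsquared)       # the text reports about .45 for the real data
print(m3.params)         # text: ~1/3 point per attitude unit, ~0.8 per hour
print(anova_lm(m2, m3))  # joint F-test: do the three added terms improve fit?
```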