V. Lifschitz, L. Morgenstern, D. Plaisted 69
absence of any knowledge that would contradict this conclusion. This sort of default
reasoning would be nonmonotonic in the set of axioms: adding further information
(e.g., that Tweety is a penguin) could mean that one has to retract conclusions (that is,
that Tweety flies).
The need for nonmonotonic reasoning was noted, as well, by Minsky [185]. At the
time Minsky wrote his critique, early work on nonmonotonicity had already begun.
Several years later, most of the major formal approaches to nonmonotonic reasoning
had already been mapped out [173, 224, 181]. This validated both the logicist AI ap-
proach, since it demonstrated that formal systems could be used for default reasoning,
and the anti-logicists, who had from the first argued that first-order logic was too weak
for many reasoning tasks.
Nonmonotonicity and the anti-logicists
From the time they were first developed, nonmonotonic logics were seen as an essen-
tial logicist tool. It was expected that default reasoning would help deal with many KR
difficulties, such as the frame problem, the problem of efficiently determining which
things remain the same in a changing world. However, it turned out to be surprisingly
difficult to develop nonmonotonic theories that entailed the expected conclusions. To
solve the frame problem, for example, one needs to formalize the principle of in-
ertia—that properties tend to persist over time. However, a naive formalization of
this principle along the lines of [174] leads to the multiple extension problem, a phenomenon in which the theory supports several models, some of which are unintuitive.
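The principle of inertia is often stated, in the spirit of the circumscription-based proposal of [174], with an abnormality predicate that is then minimized. The following is an illustrative reconstruction, not the exact axiom of any of the cited papers:

\[
Holds(f, Result(a, s)) \;\leftarrow\; Holds(f, s) \wedge \neg Ab(f, a, s)
\]

Read: a fluent $f$ that holds in situation $s$ still holds after action $a$ is performed, unless the fluent–action pair is abnormal; minimizing $Ab$ is meant to make persistence the default. The multiple extension problem arises because there can be several distinct minimal choices of which $Ab$ facts hold.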
Hanks and McDermott [110] demonstrated a particular example of this, the Yale shooting problem. They wrote up a simple nonmonotonic theory containing some general
facts about actions (that loading a gun causes the gun to be loaded, and that shooting
a loaded gun at someone causes that individual to die), the principle of inertia, and a
particular narrative (that a gun is loaded at one time, and shot at an individual a short
time after). The expected conclusion, that the individual will die, did not hold. Instead,
Hanks and McDermott got multiple extensions: the expected extension, in which the
individual dies; and an unexpected extension, in which the individual survives, but the
gun mysteriously becomes unloaded. The difficulty is that the principle of inertia can
apply either to the gun remaining loaded or the individual remaining alive. Intuitively
we expect the principle to be applied to the gun remaining loaded; however, there was
nothing in Hanks and McDermott’s theory to enforce that.
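In outline, and simplifying Hanks and McDermott’s actual axioms, the theory can be sketched as follows (again an illustrative reconstruction, with $Alive$ and $Loaded$ as fluents and $Load$, $Wait$, and $Shoot$ as actions):

\begin{align*}
& Holds(Loaded, Result(Load, s))\\
& \neg Holds(Alive, Result(Shoot, s)) \leftarrow Holds(Loaded, s)\\
& Holds(f, Result(a, s)) \leftarrow Holds(f, s) \wedge \neg Ab(f, a, s)
\end{align*}

Starting from a situation in which $Alive$ holds, the narrative $Load$, $Wait$, $Shoot$ admits two minimal choices of $Ab$: one in which $Loaded$ persists through the waiting and the individual dies (so $Ab(Alive, Shoot, \cdot)$ holds), and one in which $Alive$ persists throughout because $Loaded$ abnormally fails to survive the waiting (so $Ab(Loaded, Wait, \cdot)$ holds). Each extension makes exactly one abnormality assumption, so minimizing abnormality alone cannot prefer the intended model.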
The Yale shooting problem was not hard to handle: solutions began appearing
shortly after the problem became known. (See [160, 161, 238] for some early so-
lutions.) Nonetheless, the fact that nonmonotonic logics could lead to unexpected
conclusions for such simple problems was evidence to anti-logicists of the infeasi-
bility of logicist AI. Indeed, it led McDermott to abandon logicist AI. Nonmonotonic
logic was essentially useless, McDermott argued [180], claiming that it required one
to know beforehand what conclusions one wanted to draw from a set of axioms, and
to build that conclusion into the premises.
In contrast, what logicist AI learned from the Yale shooting problem was the
importance of a good underlying representation. The difficulty with Hanks and Mc-
Dermott’s axiomatization was not that it was written in a nonmonotonic logic; it was
that it was devoid of a concept of causation. The Yale shooting problem does not arise
in an axiomatization based on a sound theory of causation [243, 187, 237].