independent of the particular model of computation. In a subsequent paper with Philip Lewis [115], they showed that space complexity (the number of tape cells used) can also be treated as a complexity measure in much the same way as time. Other pioneering work on computational complexity that appeared around the same time included papers of Cobham [30] and Edmonds [37], who are generally credited with the invention of the notion of polynomial time, and Hennie and Stearns [60]. Edmonds was apparently the first to conjecture that P ≠ NP. These papers were immediately recognized as a fundamental advance. Indeed, it was the original Hartmanis and Stearns paper [55] that gave the name computational complexity to the discipline.
The fundamental contribution of Hartmanis and Stearns was not so much the specific results regarding the complexity of Turing machine computations, but the assimilation of concrete notions of resource usage into a general theory of computational complexity. Although they worked primarily with multitape Turing machines, they argued rightly that the concepts were universal and that the same behavior would emerge in any reasonable model. The fundamental notion of complexity class laid down in their original paper still pervades the field. The theory has been further generalized by Manuel Blum [16] using an abstract notion of complexity measure, and many results generalize to this more abstract setting (see Supplementary Lecture J). Other resources besides time and space, from area in VLSI layout problems to randomness in probabilistic computation, have been successfully treated in this framework.
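To give a taste of what a complexity class is before the formal definitions appear in later lectures (the notation below is one standard choice, offered here only as a sketch), a time bound t(n) determines a class of languages:

    DTIME(t(n)) = { A : A is decided by some deterministic multitape TM in at most t(n) steps on inputs of length n }.

For example, the class P of polynomial-time decidable problems can be described as the union of DTIME(n^k) over all constants k ≥ 1.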
Today the field also includes a wide variety of notions such as probabilistic complexity classes, interactive proof systems, approximation and inapproximability results, circuit complexity, and many others. The primary goal of this field is to understand what makes computational problems complex, with the hope that by doing so, we might better understand how to make them simpler.
Turing Machines
A convenient starting point for our study is Turing machine (TM) complexity. Turing machines were invented in 1936 by Alan M. Turing [123] at around the same time as several other formalisms purporting to capture the notion of effective computability: Post systems [94, 95], µ-recursive functions [47], λ-calculus [28, 71], and combinatory logic [109, 35].
All these models are computationally equivalent in the sense that they can simulate one another. This led Alonzo Church to formulate Church’s thesis [29, 123], which states that these models exactly capture our intuitive notion of effective computability. But one aspect of computability that Church’s thesis does not address is the notion of complexity. For example,