Publisher: Springer, 1979. 251 pp.
Since the early part of this century, logicians have been interested in decision procedures for theories in the predicate calculus. By a theory, we mean the set of sentences true about a particular structure, or the set of sentences true about every structure in a particular class of structures. One of the first decision procedures discovered was Löwenheim's procedure for validity of sentences in the monadic predicate calculus. Another early example is Presburger's procedure for the first-order theory of the integers under addition.
Although some people felt that some theories might be undecidable, it wasn't until the thirties that it was possible to carefully state, let alone prove, such conjectures. It was first necessary to have formal notions of computation, as provided by the work of people such as Church, Kleene, Post and Turing. The first undecidability result was Gödel's famous theorem that the first-order theory of the integers under the operations of addition and multiplication is not decidable. At the present time an enormous number of decidability and undecidability results are known; [ELTT65] is a good survey of these results.
For a long time, it was felt that one could say nothing stronger about a theory than that it was decidable. Although the concept of comparing the difficulty of undecidable (or nonrecursive) sets has existed for a long time, it was only recently that people started investigating the relative decidability of recursive sets. It was in the sixties that the field of computational complexity theory began. The goal was to look at a recursive set and ask how much of a particular resource (usually time or storage) was needed to decide the membership problem for that set. It was necessary, of course, to have a precise model of computation to work with. Many different models have been defined today, but we shall use the Turing machine because of its elegance and simplicity; most of the following theorems also hold for most other existing models of computation. Time and space on a Turing machine will be defined in a straightforward way in Chapter one. The time or space used by a particular machine will be expressed as a function of the length of the input: a machine operates within time (or space) f(n) if on all inputs of length n which are accepted, the time (or space) used is at most f(n).
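The definition above can be made concrete with a small sketch (not from the book): instrument a toy decision procedure with a step counter and check that, on every accepted input of length n, the steps used stay within a bound f(n). The language of balanced parentheses and the bound f(n) = n² are illustrative assumptions, chosen only to show the shape of the definition.

```python
def is_balanced(s: str):
    """Decide the language of balanced parentheses, counting 'steps'.

    Returns (accepted, steps), where steps plays the role of the
    machine's running time on this input.
    """
    steps = 0
    depth = 0
    for ch in s:
        steps += 1
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:          # more ')' than '(' so far: reject
                return False, steps
    return depth == 0, steps

def within_time_bound(inputs, f):
    """Check the book's condition on a sample of inputs: on every
    ACCEPTED input of length n, the steps used are at most f(n)."""
    for s in inputs:
        accepted, steps = is_balanced(s)
        if accepted and steps > f(len(s)):
            return False
    return True

# The toy procedure runs in linear time, so it certainly operates
# within time f(n) = n*n on these samples.
print(within_time_bound(["()", "(())", "(()", "()()"], lambda n: n * n))
```

Note that, as in the definition quoted above, only accepted inputs are required to meet the bound; rejected inputs may take any amount of time.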
With this model, one can now look at previously existing algorithms (for instance decision procedures for logical theories) and ask how much time or space they use. One can use this model as a basis for comparing these algorithms, or for inventing new ones which are better in the very precise sense of using less time or space. One of the first of these upper bound results was Ferrante and Rackoff's theorem that the first-order theory of the real numbers under addition can be decided by an algorithm operating in exponential space.
However, this model also gives us a precise way of saying that a particular set or theory requires a certain amount of time or space. Here one is saying not only that a particular algorithm uses a lot of the resource, but that any algorithm for the membership problem must use a lot of the resource in question. These are called lower bound results. The techniques for showing that certain decidable sets require (for any algorithm) a lot of time or space to be decided parallel earlier results showing that certain sets cannot be decided at all (by any algorithm). Whereas undecidability is usually proven by "arithmetization" of Turing machines, lower bounds are usually proven by efficient "arithmetization" of time or space bounded Turing machines, an idea having its beginnings in a paper by Stephen Cook. The first lower bound result for logical theories was Albert Meyer's theorem, obtained in 1973, that the second-order theory of the integers under successor requires an enormous (non-elementary recursive) amount of time and space.
As with decidability and undecidability results, there exist today many different upper and lower complexity bounds for many different types of sets, and in particular for many different logical theories. The purpose of this work is to present certain of the theorems and proofs about complexity bounds for logical theories of the first-order predicate calculus. Although the subjects chosen are those that the authors have personally worked on, the proof methods are typical of those commonly used in this area.
Precise definitions of the concepts discussed here are presented in Chapter one. Chapters two through five contain upper bound results, and Chapters six through nine contain lower bound results.
The authors would like to thank Albert Meyer for his help and encouragement, both during their studies and since. Most of these results would not have been obtained without his ideas and suggestions. The authors are also grateful to Teresa Miao for the typing of this manuscript. While at MIT, the authors were supported by NSF grant GJ-34671. The first author would also like to thank the Tufts University Faculty Awards Committee. The second author thanks the Department of Computer Science of the University of Toronto, as well as the National Research Council of Canada.
Introduction and Background.
Ehrenfeucht Games and Decision Procedures.
Integer Addition - An Example of an Ehrenfeucht Game Decision Procedure.
Some Additional Upper Bounds.
Direct Products of Theories.
Lower Bound Preliminaries.
A Technique for Writing Short Formulas Defining Complicated Properties.
A Lower Bound on the Theories of Pairing Functions.
Some Additional Lower Bounds.