Published by CRC Press, 2004, 326 pp.
The theory of finite automata is the mathematical theory of a simple class of algorithms that are important in computer science. Algorithms are recipes that tell us how to solve problems; the rules we learn in school for adding, subtracting, multiplying, and dividing numbers are good examples. Although algorithms have always been important in mathematics, mathematicians did not spell out precisely what they were until the early decades of the twentieth century, when they started to ask questions about the nature of mathematical proof. Perhaps motivated in part by the mechanical devices used to help with arithmetical calculations, mathematicians wanted to know whether there were mechanical ways of proving theorems. By mechanical, they did not intend to build real machines that could prove theorems; rather, they wanted to know whether it was possible in principle to prove theorems mechanically. In mathematical terms, they wanted to know whether there was an algorithm for proving the theorems of mathematics.

The answer to this question obviously depended on what was meant by the word ‘algorithm.’ Numerous definitions were suggested by different mathematicians from various perspectives and, remarkably, they all proved to be equivalent. By the end of the 1930s, mathematicians had a precise definition of what an algorithm was, and using this definition they were able to show that there were problems that could not be solved by algorithmic means. It was perhaps no accident that, as mathematicians were laying the foundations of the theory of algorithms, engineers were constructing machines that could implement algorithms as programs. Algorithms and programs are just two sides of the same coin.

Mathematicians were initially interested in the dividing line between what was and what was not algorithmic. But simply knowing that a problem could be solved by means of an algorithm was not the end of the story: it implied you could write a program to solve the problem, but not that your program would run quickly or efficiently. For this reason, two approaches to classifying algorithms were developed. The first classifies them according to their running times and is known as complexity theory; the second classifies them according to the types of memory used in implementing them. Finite automata belong to this second approach: they are the algorithms that can be implemented with a fixed, finite amount of memory.
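To make the book's subject concrete, here is a minimal Python sketch, not taken from the book, of a deterministic finite automaton; the helper name make_dfa and the state names are invented for this illustration. The automaton below accepts exactly the binary strings containing an even number of 1s, using only a single current state as its memory.

    # Minimal sketch of a deterministic finite automaton (DFA).
    # make_dfa is an invented helper, not an API from the book.
    def make_dfa(transition, start, accepting):
        """Return a recogniser that runs the DFA on a word."""
        def accepts(word):
            state = start
            for symbol in word:
                state = transition[(state, symbol)]  # one transition per symbol
            return state in accepting
        return accepts

    # Example: strings over {0, 1} with an even number of 1s.
    even_ones = make_dfa(
        transition={
            ("even", "0"): "even", ("even", "1"): "odd",
            ("odd", "0"): "odd",   ("odd", "1"): "even",
        },
        start="even",
        accepting={"even"},
    )

    assert even_ones("1010")        # two 1s: accepted
    assert not even_ones("111")     # three 1s: rejected

However long the input string, the recogniser only ever remembers one of two states, which is what ‘finite memory’ means here.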
Introduction to finite automata
Recognisable languages
Non-deterministic automata
ε-automata
Kleene’s Theorem
Local languages
Minimal automata
The transition monoid
The syntactic monoid
Algebraic language theory
Star-free languages
Varieties of languages
Appendix A: Discrete mathematics