Springer, 2010. 299 pp.
Discrepancy theory is also called the theory of irregularities of distribution. Here are some typical questions: What is the most uniform way of distributing n points in the unit square? How big is the irregularity necessarily present in any such distribution? For a precise formulation of these questions, we must quantify the irregularity of a given distribution, and discrepancy is a numerical parameter of a point set serving this purpose.
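To make the notion of a "numerical parameter of a point set" concrete, here is a brute-force sketch of one standard such parameter, the star discrepancy in the unit square: the largest deviation, over all axis-parallel boxes anchored at the origin, between the fraction of points in the box and its area. (This is an illustrative sketch, not a definition taken from the book; the restriction to corner candidates at point coordinates is a well-known reduction.)

```python
from itertools import product

def star_discrepancy(points):
    """Brute-force star discrepancy of a finite point set in [0,1]^2:
    sup over boxes [0,x) x [0,y) of | (points in box)/n - x*y |.
    It suffices to test corners at point coordinates and at 1."""
    n = len(points)
    xs = sorted({p[0] for p in points} | {1.0})
    ys = sorted({p[1] for p in points} | {1.0})
    worst = 0.0
    for x, y in product(xs, ys):
        area = x * y
        # open and closed counts bracket the supremum over box boundaries
        inside_open = sum(px < x and py < y for px, py in points)
        inside_closed = sum(px <= x and py <= y for px, py in points)
        worst = max(worst,
                    abs(inside_open / n - area),
                    abs(inside_closed / n - area))
    return worst
```

For a single point at the center of the square, boxes just covering the point give count 1 but area only 1/4, so the discrepancy is 3/4; a "uniform" set is one for which this worst-case deviation is small.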
Such questions were first tackled in the thirties, with a motivation coming from number theory. A more or less satisfactory solution of the basic discrepancy problem in the plane was completed in the late sixties, and the analogous higher-dimensional problem is far from solved even today. In the meantime, discrepancy theory blossomed into a field of remarkable breadth and diversity. There are subfields closely connected to the original number-theoretic roots of discrepancy theory, areas related to Ramsey theory and to hypergraphs, and also results supporting eminently practical methods and algorithms for numerical integration and similar tasks. The applications include financial calculations, computer graphics, and computational physics, just to name a few.
This book is an introductory textbook on discrepancy theory. It should be accessible to early graduate students of mathematics or theoretical computer science. At the same time, about half of the book consists of material that up until now was only available in original research papers or in various surveys.
Some readers may be interested in discrepancy theory with a specific application in mind, or because they want to do research in it. But, in my opinion, discrepancy theory can also serve as an example of live mathematics for students completing the basic math courses. The problems in discrepancy are natural, easy to state, and relatively narrowly focused. The solutions, although some of them are quite deep and clever, can often be explained in several pages with all the details. Hence, the beginner need not feel overwhelmed by the volume of material or by a technical machinery towering above him like a Gothic cathedral. At the same time, many notions and theorems the student has to learn in the basic curriculum can be seen in action here (such as calculus, geometry of the Euclidean space, harmonic analysis, elementary number theory, probability theory and the probabilistic method in combinatorics, hypergraphs, counting and asymptotic estimates, linear algebra, finite fields, polynomial algebra, and algorithm design). The Fourier series is encountered not because the next item in the course outline is called the Fourier series, but because one needs it to answer a seemingly unrelated question about points in the unit square. In my opinion, such examples from the outside are very important and refreshing in learning a mathematical discipline, but the basic courses can seldom include them.
Based on the book, it is possible to teach a one-semester or two-semester special topic course (experiments in this direction have been kindly performed by Joram Lindenstrauss and by Nati Linial). For a general course on discrepancy, I suggest covering Section 1.1 (perhaps omitting Weyl’s criterion), the Van der Corput and Halton–Hammersley constructions (Sec. 2.1), maybe Beck’s upper bound for discs (Sec. 3.1), definitely Roth’s lower bound (Sec. 6.1), the notion of combinatorial discrepancy (Sec. 1.3), basic combinatorial upper bounds (Sec. 4.1), the lower bound using eigenvalues (Sec. 4.2), and the partial coloring method (Sec. 4.5). If time permits, the next recommendations are Halász’ lower bound proof (Sec. 6.2) and Alexander’s lower bound (Sec. 6.4 or 6.5). I leave further extension to the instructor’s judgment. For those wishing to pursue the subject of quasi-Monte Carlo methods, the main recommended parts are Section 1.4 and the whole of Chapter 2. Convinced combinatorialists are invited to read mainly Chapters 4 and 5. The latter discusses the Vapnik–Chervonenkis dimension, which is of considerable interest in statistics, computational learning theory, computational geometry, etc.
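The Van der Corput construction mentioned above is simple enough to sketch here: the n-th point is obtained by writing n in a base (classically base 2) and mirroring its digits about the radix point. (A hedged illustration of the classical sequence, not the book's presentation.)

```python
def van_der_corput(n, base=2):
    """n-th term of the Van der Corput sequence in the given base:
    reverse the base-b digits of n and read them after the radix point."""
    x, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        x += digit / denom
    return x

# The first few base-2 terms fill the unit interval very evenly:
# n = 1, 2, 3, 4 give 1/2, 1/4, 3/4, 1/8, ...
```

Pairing index/base combinations of this kind coordinate-wise (with pairwise coprime bases) yields the Halton–Hammersley constructions for higher dimensions.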
Sections usually consist of three parts: the main text (what I would talk about in a course), bibliographic references and remarks intended mainly for specialists, and exercises. The exercises are classified by difficulty as no-star, one-star, and two-star (but this classification is quite subjective). No-star exercises should be more or less routine, and two-star ones often contain a clever idea that had once been enough for a publication, although the difficulty may now be greatly reduced by a suggestive formulation. More difficult exercises are usually accompanied by hints given at the end of the book. Rather than seriously expecting anyone to solve a large part of the exercises, I used the exercise-hint combination as a way of packing lots of results into a much smaller space than would be required for writing them out according to the customary way of mathematical presentation. This, of course, greatly enlarges the danger of introducing errors and making false claims, so the reader who wants to use such information should check carefully if the hint really works.
The book contains two tables summarizing some important asymptotic bounds in discrepancy theory, an index, and a list of references with cross-references to the pages where they are cited. I consider this last provision convenient for the reader, but it has the unfortunate aspect that the authors mentioned in the references can immediately find where their work is cited and conclude that their results were misquoted and insufficiently appreciated. I apologize to them; my only excuse is that such shortcomings are not intentional and that I simply did not have as much time to devote to each of the referenced papers and books as it would have deserved.
Introduction.
Low-Discrepancy Sets for Axis-Parallel Boxes.
Upper Bounds in the Lebesgue-Measure Setting.
Combinatorial Discrepancy.
VC-Dimension and Discrepancy.
Lower Bounds.
More Lower Bounds and the Fourier Transform.
A. Tables of Selected Discrepancy Bounds.
B. News Scan 1999–2009.