in the Fourier transform. Thus, after rescaling $k \to \hat{k} R^{-1/2}$, the cumulant expansion reads
\[
\hat{p}_R\bigl(\hat{k}\bigr) = \exp\left[\,\sum_{n=1}^{\infty} \frac{c^{(n)} R^{1-n/2}}{n!}\, \bigl(i\hat{k}\bigr)^{n}\right] . \qquad (8.16)
\]
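Writing out the first few terms of the exponent in (8.16) makes this scaling explicit,
\[
c^{(1)} R^{1/2} \bigl(i\hat{k}\bigr) + \frac{c^{(2)}}{2!} \bigl(i\hat{k}\bigr)^{2} + \frac{c^{(3)}}{3!} R^{-1/2} \bigl(i\hat{k}\bigr)^{3} + \frac{c^{(4)}}{4!} R^{-1} \bigl(i\hat{k}\bigr)^{4} + \cdots ,
\]
so the term with $n=2$ carries no power of $R$, while every term with $n \geq 3$ is suppressed by the factor $R^{1-n/2}$.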
Apart from the first cumulant, we find that the second cumulant remains invariant while all higher cumulants approach zero as $R \to \infty$. Thus, only the first and the second cumulants survive for sufficiently large $R$, and the probability distribution function $p_R(\xi)$ approaches a Gaussian function. The result of this naive argument is the central limit theorem. The precise formulation of this important theorem is:
The sum of $R$ independent and identically distributed random states of zero mean and finite variance, normalized by $R^{-1/2}$, is a random
variable with a probability distribution function converging to the
Gaussian distribution with the same variance. The convergence is to
be understood in the sense of a limit in probability, i.e., the probability
that the normalized sum has a value within a given interval converges
to that calculated from the Gaussian distribution.
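As a purely numerical illustration of this convergence (not part of the original argument), the following sketch, with an arbitrarily chosen uniform elementary distribution and sample sizes picked only for the example, compares the low-order moments of the normalized sum with those of the Gaussian of equal variance:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

R = 500            # number of elementary events per sum (illustrative choice)
n_sums = 20_000    # number of independent normalized sums

# Elementary events: uniform on [-1/2, 1/2], zero mean, variance 1/12
xi = rng.uniform(-0.5, 0.5, size=(n_sums, R))

# Normalized sum R^{-1/2} * sum_j xi_j
s = xi.sum(axis=1) / np.sqrt(R)

print("sample mean     :", s.mean())                     # close to 0
print("sample variance :", s.var(), "(exact: 1/12)")
print("sample kurtosis :", ((s - s.mean())**4).mean() / s.var()**2,
      "(Gaussian value: 3)")
\end{verbatim}
The fourth standardized moment approaches the Gaussian value $3$ at a rate consistent with the $R^{-1}$ suppression of the fourth cumulant in (8.16).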
We will now give a more precise derivation of the central limit theorem. Formal proofs of the theorem may be found in probability textbooks such as Feller [18, 29, 30]. Here we follow a more physically motivated approach due to Sornette [31], using techniques of renormalization group theory.
This powerful method [32], introduced in field theory and in the study of critical phase transitions, is a very general mathematical tool. It allows one to decompose the problem of finding the collective behavior of a large number of elements on large spatial scales and for long times into a succession of simpler problems with a decreasing number of elements, whose effective properties vary with the scale of observation. In the context of the central limit theorem, these elements are the elementary $N$-component events $\xi_j$.
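A minimal numerical sketch of this decomposition, assuming (purely for illustration) scalar elementary events drawn from a centered exponential distribution, halves the number of elements by summing them pairwise, rescales the sums by $2^{-1/2}$, and compares the effective cumulants before and after:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def cumulants(x):
    # First four cumulants estimated from the central moments of a sample
    m = x.mean()
    d = x - m
    c2 = (d**2).mean()
    c3 = (d**3).mean()
    c4 = (d**4).mean() - 3.0 * c2**2
    return np.array([m, c2, c3, c4])

# Elementary events: centered exponential distribution (zero mean, skewed)
xi = rng.exponential(1.0, size=2_000_000) - 1.0

# Combine neighboring pairs (half as many elements), rescale by 2^{-1/2}
eta = xi.reshape(-1, 2).sum(axis=1) / np.sqrt(2.0)

# Summing doubles each cumulant and rescaling multiplies c^(n) by 2^{-n/2},
# so the effective cumulants transform as c^(n) -> 2^{1-n/2} c^(n):
# the variance stays fixed while the third and fourth cumulants shrink.
print("before:", cumulants(xi))
print("after :", cumulants(eta))
\end{verbatim}
Iterating this step drives all cumulants beyond the second toward zero; this is the numerical counterpart of the $R^{1-n/2}$ factor in (8.16).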
The renormalization group theory works best when the problem is dominated by one characteristic scale which diverges at the so-called critical point. The distance to this criticality is usually determined by a control parameter, which in our special case may be identified as $R^{-1}$. Close to the critical point, a universal behavior becomes observable, related to typical phenomena such as scale invariance or self-similarity. As we will see below, the form stability of the Gaussian probability distribution function is exactly such a self-similarity.
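As a concrete instance of this form stability, stated here only as an illustration: if two independent zero-mean Gaussian variables of variance $\sigma^2$ are summed and the sum is rescaled by $2^{-1/2}$, the result is again Gaussian with the same variance,
\[
\frac{1}{\sqrt{2}} \left(\xi_1 + \xi_2\right) \sim \mathcal{N}\!\left(0, \frac{\sigma^2 + \sigma^2}{2}\right) = \mathcal{N}\bigl(0, \sigma^2\bigr),
\]
so the Gaussian distribution is a fixed point of the decimation and rescaling transformation described below.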
The renormalization consists of an iterative application of decimation and rescaling steps. The first step reduces the number of elements in order to transform the problem into a simpler one. We use the fact that, under certain conditions, knowledge of all the cumulants is equivalent to knowledge of the probability density. So we can write
\[
p(\xi_j) = f\bigl(\xi_j, c^{(1)}, c^{(2)}, \ldots, c^{(m)}, \ldots\bigr), \qquad (8.17)
\]