Among them, the kurtosis κ is most practical because (i) in general, it is non-zero even for symmetric distributions, for which the skewness vanishes, and (ii) it gives less weight to the tails of the distribution, where the statistics may be poor, than even higher cumulants would. Distributions with κ > 0 are called leptokurtic.
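As a small illustration not contained in the original argument, the excess kurtosis can be estimated directly from the second and fourth central moments of a sample. The following minimal Python sketch (the function name, seed, and test distributions are illustrative choices) returns a value close to zero for Gaussian data and a clearly positive value for leptokurtic data.

import numpy as np

def excess_kurtosis(x):
    # Sample estimate of kappa = m4 / m2**2 - 3, which vanishes for a
    # Gaussian and is positive for leptokurtic distributions.
    d = np.asarray(x, dtype=float)
    d = d - d.mean()
    m2 = np.mean(d**2)
    m4 = np.mean(d**4)
    return m4 / m2**2 - 3.0

rng = np.random.default_rng(0)
print(excess_kurtosis(rng.normal(size=100_000)))            # close to 0
print(excess_kurtosis(rng.laplace(size=100_000)))           # close to 3
print(excess_kurtosis(rng.standard_t(df=5, size=100_000)))  # clearly > 0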
Gaussian distributions are ubiquitous in nature and arise in diffusion problems, in the tossing of a coin, and in many other situations. However, there are exceptions: turbulence, earthquakes, the rhythm of the heart, drops from a leaking faucet, and also the statistical properties of financial time series are not described by Gaussian distributions.
Central Limit Theorem
The ubiquity of the Gaussian distribution in nature is linked to the central limit theorem and to the maximization of entropy in thermal equilibrium. It is thus a consequence of fundamental principles both in mathematics and in physics (statistical mechanics).
Roughly speaking, the central limit theorem states that any random phenomenon which is the consequence of a large number of small, independent causes is described by a Gaussian distribution. At the same handwaving level, we can see the emergence of a Gaussian by assuming N IID variables (for simplicity; the assumption can be relaxed somewhat)
p(x, N) = [p(x)]^N = \exp[N \ln p(x)] \, .   (5.32)
Any normalizable distribution p(x) being peaked at some x₀, p(x, N) will have a very sharp peak at x₀ for large N. We can then expand p(x, N) to second order about x₀,
p(x, N) \approx \exp\!\left[ -\frac{(x - N x_0)^2}{2\sigma^2} \right] \quad \text{for } N \gg 1 \, ,   (5.33)

and obtain a Gaussian. Its variance will scale with N as σ² ∝ N.
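This scaling can be made plausible numerically. The following sketch (an illustration, not part of the original argument, using exponentially distributed summands, a fixed seed, and 10 000 repetitions for concreteness) shows that the sum of N IID variables is peaked near N times their mean, with a variance growing in proportion to N.

import numpy as np

rng = np.random.default_rng(1)

# Sum N IID exponential variables (mean 1, variance 1); the sum is peaked
# near N * 1 and its variance grows proportionally to N.
for N in (10, 100, 1000):
    sums = rng.exponential(scale=1.0, size=(10_000, N)).sum(axis=1)
    print(N, sums.mean() / N, sums.var() / N)   # both ratios stay near 1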
More precisely, the central limit theorem states that, for N IID variables with mean m₁ and finite variance σ², and two finite numbers u₁, u₂,

\lim_{N \to \infty} P\!\left( u_1 \leq \frac{x - m_1 N}{\sigma\sqrt{N}} \leq u_2 \right) = \int_{u_1}^{u_2} \frac{\mathrm{d}u}{\sqrt{2\pi}} \, \exp\!\left( -\frac{u^2}{2} \right) \, .   (5.34)
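The statement (5.34) can be checked numerically for large N. In the sketch below (assuming, purely for illustration, uniform summands, N = 200, and arbitrarily chosen bounds u₁ = −1, u₂ = 2), the empirical probability on the left-hand side is compared with the Gaussian integral on the right-hand side.

import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

# x: sums of N IID uniform variables; each has mean m1 = 1/2 and
# standard deviation sigma = 1/sqrt(12).
N, trials = 200, 50_000
m1, sigma = 0.5, 1.0 / sqrt(12.0)
x = rng.random((trials, N)).sum(axis=1)
u = (x - m1 * N) / (sigma * sqrt(N))        # normalized sum, as in (5.34)

u1, u2 = -1.0, 2.0                          # arbitrary finite bounds
Phi = lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0)))   # standard normal CDF
print(np.mean((u >= u1) & (u <= u2)))       # empirical probability
print(Phi(u2) - Phi(u1))                    # Gaussian prediction, ~0.819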
Notice that the theorem only makes a statement on the limit N → ∞, and not on the finite-N case. For finite N, the Gaussian obtains only in the center of the distribution, |x − m₁N| ≤ σ√N, but the form of the tails may deviate strongly from the tails of a Gaussian. The weight of the tails, however, is progressively reduced as more and more random variables are added up, and the Gaussian then emerges in the limit N → ∞. The Gaussian distribution is a fixed point, or an attractor, for sums of random variables with distributions of finite variance.
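The finite-N behaviour described above can also be illustrated numerically. In the sketch below (an illustration with exponential summands and N = 20, not taken from the text), the centre of the normalized sum is already close to Gaussian, while the weight of the right tail beyond a few standard deviations still deviates markedly from the Gaussian prediction.

import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)

# Sums of N = 20 exponential variables (mean N, variance N): the centre is
# already close to Gaussian, but the right tail is still much heavier.
N, trials = 20, 200_000
x = rng.exponential(scale=1.0, size=(trials, N)).sum(axis=1)
u = (x - N) / sqrt(N)                       # normalized sum

Phi = lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0)))
for k in (1.0, 2.0, 4.0):
    print(k, np.mean(u > k), 1.0 - Phi(k))  # deviation grows with k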