544 Random variables
Let us start with a slightly simplistic reasoning, where the amount of
chocolate can only take integral values $k$ between 1 and 100 with probability
$P(X_i = k) = 1/100$. There is a very small probability that I will eat only
2 g of chocolate, since this requires that I eat 1 g the first day and 1 g the
second, with a total probability equal to 1/10 000 (if we assume that the amount
eaten the first day is independent of the amount eaten the second). The
probability that I eat 5 g is larger, since this may happen if I eat 1 g and then
4 g, or 2 g and then 3 g, or 3 g and then 2 g, or (finally) 4 g and then 1 g;
this gives a total probability equal to 4/10 000.
The reader will now have no trouble showing that the probability that I
eat $n$ grams of chocolate is given by the formula
$$P(X_1 + X_2 = n) = P(X_1 = 1)\,P(X_2 = n-1) + P(X_1 = 2)\,P(X_2 = n-2) + \cdots + P(X_1 = n-1)\,P(X_2 = 1).$$
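This formula is a discrete convolution of the two mass functions. A minimal sketch in Python (not part of the original text), for $X_1$, $X_2$ independent and uniform on $\{1, \dots, 100\}$, which also recovers the probabilities 1/10 000 and 4/10 000 computed above:

```python
from fractions import Fraction

# P(X = k) = 1/100 for each k in {1, ..., 100}
P = {k: Fraction(1, 100) for k in range(1, 101)}

def prob_sum_equals(n):
    """P(X1 + X2 = n) = sum over k of P(X1 = k) P(X2 = n - k)."""
    return sum(P[k] * P.get(n - k, Fraction(0)) for k in P)

print(prob_sum_equals(2))   # 1/10000: only 1 g then 1 g
print(prob_sum_equals(5))   # 4/10000: (1,4), (2,3), (3,2), (4,1)
```

Exact rational arithmetic (`fractions.Fraction`) avoids any floating-point doubt, and summing `prob_sum_equals(n)` over all $n$ from 2 to 200 returns exactly 1, as a probability distribution must.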
With a finer subdivision of the interval [0, 100], a similar formula would hold,
and passing to the continuous limit, we expect to find that
$$f_{\text{total}}(x) = \int_0^{100} f(t)\, f(x - t)\, \mathrm{d}t = f * f(x),$$
where $f_{\text{total}}$ is the density of $X_1 + X_2$ and $f$ that of $X_1$ or $X_2$ (since the support
of $f$ is [0, 100]). In other words, the probability density of the sum of two
independent random variables is the convolution product of the probability
densities of the arguments.
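A numerical sketch of this limit (my own illustration, not from the text): take $f$ uniform on [0, 100] and approximate the convolution integral by a Riemann sum. The exact density of the sum is then triangular, equal to $x/10\,000$ for $0 \le x \le 100$.

```python
# Riemann-sum approximation of (f * f)(x) for f uniform on [0, 100].

def f(t):
    """Uniform density on [0, 100]."""
    return 1 / 100 if 0 <= t <= 100 else 0.0

def f_total(x, steps=10_000):
    """Approximate (f * f)(x) = integral of f(t) f(x - t) dt over [0, 100]."""
    dt = 100 / steps
    return sum(f(i * dt) * f(x - i * dt) for i in range(steps)) * dt

# Exact density of X1 + X2 is triangular: x / 10000 on [0, 100].
print(f_total(50))    # ≈ 50 / 10000 = 0.005
print(f_total(250))   # 0.0: outside the support [0, 200] of the sum
```

The approximation at $x = 50$ agrees with the exact triangular value to within the Riemann-sum discretization error.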
This result can be established rigorously in two different manners, using a
change of variable or the characteristic function. The reader may find either
of those two proofs more enlightening than the other.
(Using the characteristic function)
From Proposition 20.59 we know that the characteristic function of the sum of
two independent random variables is the product of the characteristic functions
of the summands. Using the inverse Fourier transform, we obtain:
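The product property from Proposition 20.59 can be checked numerically on a case where the characteristic function is known in closed form. The Gaussian example below is my own choice, not from the text: for $X \sim \mathcal{N}(\mu, \sigma^2)$, $\varphi_X(t) = \exp(i\mu t - \sigma^2 t^2/2)$, so the product of two such functions is again of the same form, with means and variances added.

```python
import cmath

def phi(mu, var):
    """Characteristic function of N(mu, var): exp(i*mu*t - var*t^2/2)."""
    return lambda t: cmath.exp(1j * mu * t - var * t * t / 2)

phi1, phi2 = phi(1.0, 4.0), phi(-2.0, 9.0)   # N(1, 4) and N(-2, 9)
phi_sum = phi(1.0 - 2.0, 4.0 + 9.0)          # N(-1, 13): their sum

# phi1 * phi2 coincides with the characteristic function of the sum.
for t in (-1.5, 0.0, 0.7, 2.0):
    assert abs(phi1(t) * phi2(t) - phi_sum(t)) < 1e-12
```

Inverting the product of characteristic functions then yields the convolution of densities, which is the content of the theorem that follows.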
THEOREM 20.62 (Sum of r.v.) Let $X_1$ and $X_2$ be two independent random variables,
with probability densities $f_1$ and $f_2$, respectively (in the sense of distributions).
Then the probability density of the sum $X_1 + X_2$ is given by
$$f_{X_1+X_2} = f_1 * f_2,$$
or equivalently
$$f_{X_1+X_2}(x) = \int_{-\infty}^{+\infty} f_1(t)\, f_2(x - t)\, \mathrm{d}t,$$
which is only correct if the random variables involved are continuous, the probability
densities then being integrable functions.
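When the variables are not continuous, the densities are distributions rather than integrable functions; for discrete variables they are sums of Dirac masses, and $f_1 * f_2$ reduces to the discrete convolution of the mass functions. A small sketch with two fair dice (a hypothetical example of mine, not from the text):

```python
from collections import defaultdict

# For discrete r.v., the "density" is a sum of Dirac masses and
# f1 * f2 becomes the discrete convolution of the mass functions.
die = {k: 1 / 6 for k in range(1, 7)}   # fair six-sided die

def convolve(p, q):
    """Discrete convolution: mass function of the sum of two
    independent discrete random variables with masses p and q."""
    out = defaultdict(float)
    for a, pa in p.items():
        for b, qb in q.items():
            out[a + b] += pa * qb
    return dict(out)

two_dice = convolve(die, die)
print(two_dice[7])   # 6/36: the most likely total
```

The resulting mass function is the familiar triangular distribution on $\{2, \dots, 12\}$, and its total mass is 1, so the theorem's statement in the sense of distributions covers this case even though no integrable densities exist.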