since $W(s)$ is $N(0,s)$ and $W(t) - W(s)$ is independent of $W(s)$.
HEURISTICS. Remember from Chapter 1 that the formal time-derivative
$$\dot{W}(t) = \frac{dW(t)}{dt} = \xi(t)$$
is "1-dimensional white noise". As we will see later, however, for a.e. $\omega$ the sample path $t \to W(t,\omega)$ is in fact differentiable for no time $t \ge 0$. Thus $\dot{W}(t) = \xi(t)$ does not really exist.
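To see numerically why the difference quotients refuse to converge, here is a small Python sketch (an illustration added here, not part of the notes; the sample size and the values of $h$ are arbitrary choices). Since $W(t+h) - W(t)$ is $N(0,h)$, the quotient $(W(t+h)-W(t))/h$ is $N(0, 1/h)$, and its spread blows up as $h \to 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000  # arbitrary Monte Carlo sample size

for h in [0.1, 0.01, 0.001]:
    # W(t + h) - W(t) is N(0, h), so the difference quotient
    # (W(t + h) - W(t)) / h is N(0, 1/h): its standard deviation
    # grows like 1/sqrt(h) as h shrinks.
    increments = rng.normal(0.0, np.sqrt(h), size=n_samples)
    quotients = increments / h
    print(f"h = {h:>6}: empirical std of (W(t+h)-W(t))/h ~ {quotients.std():.1f}, "
          f"theoretical 1/sqrt(h) = {1 / np.sqrt(h):.1f}")
```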
However, we do have the heuristic formula
$$(3) \qquad \text{``}E(\xi(t)\xi(s)) = \delta_0(s-t)\text{''},$$
where $\delta_0$ is the unit mass at $0$. A formal "proof" is this. Suppose $h > 0$, fix $t > 0$, and set
$$\begin{aligned}
\varphi_h(s) &:= E\left[\left(\frac{W(t+h)-W(t)}{h}\right)\left(\frac{W(s+h)-W(s)}{h}\right)\right] \\
&= \frac{1}{h^2}\bigl[E(W(t+h)W(s+h)) - E(W(t+h)W(s)) - E(W(t)W(s+h)) + E(W(t)W(s))\bigr] \\
&= \frac{1}{h^2}\bigl[((t+h)\wedge(s+h)) - ((t+h)\wedge s) - (t\wedge(s+h)) + (t\wedge s)\bigr].
\end{aligned}$$
[Figure: graph of $\varphi_h(s)$ over $s \in [t-h,\, t+h]$, with height $1/h$ at $s = t$.]
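To connect the last display with this picture, one can check the cases directly (a short verification not written out in the text): if $|s-t| \ge h$ the four minima cancel, while for $|s-t| < h$ they leave $h - |s-t|$. Hence
$$\varphi_h(s) = \frac{(h - |s-t|)^+}{h^2},$$
a tent of height $\varphi_h(t) = 1/h$ supported on $[t-h,\, t+h]$, whose area is $\tfrac{1}{2}\cdot 2h \cdot \tfrac{1}{h} = 1$.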
Then $\varphi_h(s) \to 0$ as $h \to 0$, for $t \ne s$. But $\int_{-\infty}^{\infty} \varphi_h(s)\,ds = 1$, and so presumably $\varphi_h(s) \to \delta_0(s-t)$ in some sense, as $h \to 0$. In addition, we expect that $\varphi_h(s) \to E(\xi(t)\xi(s))$. This gives the formula (3) above.
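As a sanity check, the following Python sketch (again an added illustration; the grid spacing, time horizon, and sample size are arbitrary choices) estimates $E\bigl[\tfrac{W(t+h)-W(t)}{h}\cdot\tfrac{W(s+h)-W(s)}{h}\bigr]$ by simulating Brownian paths on a grid, and compares it with the tent function $(h - |s-t|)^+/h^2$ of height $1/h$ pictured above:

```python
import numpy as np

# Monte Carlo check of phi_h(s) = E[ (W(t+h)-W(t))/h * (W(s+h)-W(s))/h ]
# against the tent function (h - |s-t|)^+ / h^2.
rng = np.random.default_rng(1)
dt, T = 0.005, 2.0          # grid spacing and horizon for simulated paths
n_steps = int(round(T / dt))
t, h = 1.0, 0.05            # fixed time t and increment length h
n_paths = 5_000

# Brownian paths on the grid: cumulative sums of independent N(0, dt) increments.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])  # W[:, k] ~ W(k * dt)

def quotient(a):
    """Difference quotient (W(a + h) - W(a)) / h along each simulated path."""
    i, j = int(round(a / dt)), int(round((a + h) / dt))
    return (W[:, j] - W[:, i]) / h

for s in [0.90, 0.98, 1.00, 1.02, 1.10]:
    empirical = np.mean(quotient(t) * quotient(s))
    tent = max(h - abs(s - t), 0.0) / h**2
    print(f"s = {s:.2f}:  empirical ~ {empirical:7.2f},  (h - |s-t|)^+/h^2 = {tent:7.2f}")
```

For fixed $h$ the empirical values trace out the tent; as $h \to 0$ the tent concentrates at $s = t$, which is the content of (3).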
Remark: Why $\dot{W}(\cdot) = \xi(\cdot)$ is called white noise. If $X(\cdot)$ is any real-valued stochastic process with $E(X^2(t)) < \infty$ for all $t \ge 0$, we define
$$r(t,s) := E(X(t)X(s)) \qquad (t, s \ge 0),$$
the autocorrelation function of $X(\cdot)$. If $r(t,s) = c(t-s)$ for some function $c : \mathbb{R} \to \mathbb{R}$ and if $E(X(t)) = E(X(s))$ for all $t, s \ge 0$, $X(\cdot)$ is called stationary in the wide sense. A white noise process $\xi(\cdot)$ is by definition Gaussian, wide sense stationary, with $c(\cdot) = \delta_0$.
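In this notation, the difference-quotient process $X(t) := \frac{W(t+h)-W(t)}{h}$ studied above is Gaussian with mean zero, and (under the tent-function evaluation sketched earlier, an observation added here rather than taken from the text) its autocorrelation is
$$r(t,s) = \frac{(h - |t-s|)^+}{h^2} =: c_h(t-s),$$
so $X(\cdot)$ is wide sense stationary with $c_h(\cdot) \to \delta_0$ as $h \to 0$, consistent with calling its formal limit $\xi(\cdot)$ white noise.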