10 Digital Image Processing and Analysis – PDEs and Variational Tools
of them is processed. Here, again simplifying, we shall deal with single-channel images only, which is equivalent to treating black-and-white images. Thus, for what follows, we assume to have a black-and-white image, represented by a real-valued intensity function $u_0$, defined pointwise almost everywhere on the image domain G.
In most practical applications, digital images are processed in spaces of functions of bounded variation; however, there have been serious recent objections to this claim. These objections are based on the prevailing idea that natural images have a very strong multi-scale character such that, generally, their total variation may become unbounded [1].
For most image processing tasks it is of paramount importance to analyse the principal features and structures of the image under consideration. It is intuitively clear that these features are independent of the high frequencies contained in the intensity function $u_0$. Thus, it seems natural to try to extract significant image information by smoothing the intensity function. In particular, think of the
problem of detecting edges in images. Edges may be thought of as those
curves in the image domain, where the (Euclidean norm of the) gradient of the
intensity function assumes maximal values, or – as used in many applications –
where the Laplacian of the intensity function (which is the trace of its Hessian
matrix) becomes 0. Thus, edge detection requires the computation of pointwise
derivatives of the intensity function, which cannot be done without smoothing
the piecewise constant intensity obtained from digital imaging. Moreover, the
gradient of the piecewise constant function $u_0$ is – trivially – singular at ALL pixel
edges (gradients of piecewise constant functions are singular measures concen-
trated on the partition edges)! Obviously, the ‘significant’ image specific edges
can only be distinguished from the ‘insignificant’ pixel edges by smoothing.
Thus, we deduce that image structure is, maybe somewhat counter-intuitively,
revealed by discarding detail in a coherent way. Also, currently available digital
imaging sensors are known to introduce noise into RGB images, which typically
gets worse when the nominal sensitivity (ISO value) is increased. Digital high-ISO noise is patchy and ugly, much worse than the grain we all got used to (and
even got to like) in analog images. Thus, efficient and non-destructive image
denoising is of utmost importance to the photographic community.
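The smoothing-before-differentiation argument above can be illustrated numerically. The following is a minimal sketch (not from the text): the synthetic test image, the choice of smoothing scale, and the use of SciPy are our own assumptions.

```python
import numpy as np
from scipy import ndimage

# Synthetic piecewise constant "image": a bright square on a dark background.
u0 = np.zeros((64, 64))
u0[16:48, 16:48] = 1.0

# Finite differences on the raw image would respond at every pixel edge;
# smoothing first reveals only the significant structure.
u = ndimage.gaussian_filter(u0, sigma=2.0)

# Gradient magnitude (Euclidean norm of the gradient): maximal along the
# boundary of the square after smoothing.
grad_y, grad_x = np.gradient(u)
grad_mag = np.hypot(grad_x, grad_y)

# Laplacian (trace of the Hessian): changes sign across the edge, so the
# edge can be located at its zero crossings.
lap = ndimage.laplace(u)

# Pixels with a strong gradient response form a band around the true edge;
# the flat interior and background produce no response at all.
edge_band = grad_mag > 0.5 * grad_mag.max()
```

Note how both edge criteria mentioned above appear: `grad_mag` peaks on the boundary of the square, while `lap` changes sign there.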
The most basic smoothing technique is the convolution of $u_0$ by a Gaussian function with mean value zero and a fixed variance $t > 0$:
$$
u(x, t) := (u_0 * G_t)(x)\,, \qquad (10.1)
$$
where the 2-dimensional Gaussian reads:
$$
G_t(x) := \frac{1}{2\pi t}\,\exp\left(-\frac{|x|^2}{2t}\right). \qquad (10.2)
$$
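As a concrete check of (10.1) and (10.2), one can discretize $G_t$ on a grid and convolve. The following sketch is our own illustration, not from the text; the grid size, kernel truncation radius, and test image are assumptions, and SciPy's separable Gaussian filter (with $\sigma = \sqrt{t}$, since $G_t$ has variance $t$) serves as a cross-check.

```python
import numpy as np
from scipy import ndimage, signal

def gaussian_kernel(t, half_width):
    """Discretize the 2-D Gaussian G_t of (10.2) on a (2*half_width+1)^2 grid."""
    ax = np.arange(-half_width, half_width + 1, dtype=float)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * t)) / (2.0 * np.pi * t)
    return g / g.sum()  # renormalize so the truncated discrete kernel sums to 1

t = 4.0  # variance of the Gaussian; corresponding standard deviation is sqrt(t)
u0 = np.zeros((64, 64))
u0[16:48, 16:48] = 1.0  # synthetic test image

# (10.1): u(., t) = u0 * G_t, extending u0 by 0 outside the image domain
# (the 'same'-mode convolution zero-pads implicitly).
kernel = gaussian_kernel(t, half_width=12)
u = signal.fftconvolve(u0, kernel, mode="same")

# Cross-check against SciPy's separable Gaussian filter with sigma = sqrt(t)
# and the same zero ('constant') extension.
u_ref = ndimage.gaussian_filter(u0, sigma=np.sqrt(t), mode="constant")
```

Since the normalized kernel has unit mass, the smoothing preserves the total intensity of the image as long as no mass leaks across the domain boundary.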
For carrying out the convolution in (10.1) the image has to be appropriately extended to all of $\mathbb{R}^2$, say, either by 0 outside G or periodically. Both approaches