Variations that are not visually apparent (because the human eye compensates
automatically for gradual changes in brightness) may be detected only when the
image is captured in the computer. Balancing lighting across the entire recorded
scene is difficult. Careful positioning of lights on a copy stand, use of ring lighting
for macro photography, and adjustment of the condenser lens in a microscope are all
procedures that help to achieve uniform lighting of the sample. Capturing an image of
a uniform grey card or blank slide and measuring the brightness variation is an
important tool for such adjustments.
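Such a measurement might be sketched as follows; the synthesized `grey_card` array stands in for an actual captured image of a uniform grey card (in practice it would come from the camera or scanner), with an illustrative left-to-right illumination falloff built in:

```python
import numpy as np

# Hypothetical flat-field check: `grey_card` stands in for an image
# captured of a uniform grey card; here it is synthesized with a
# gentle left-to-right brightness falloff for illustration.
grey_card = 200.0 - 20.0 * np.linspace(0.0, 1.0, 256)[np.newaxis, :]
grey_card = np.repeat(grey_card, 256, axis=0)

# Report the peak-to-peak brightness variation as a percentage of the
# mean; a well-adjusted setup keeps this figure small.
variation = (grey_card.max() - grey_card.min()) / grey_card.mean() * 100.0
print(f"brightness variation: {variation:.1f}% of mean")  # → brightness variation: 10.5% of mean
```

Repeating the measurement after each adjustment of the lights or condenser makes the effect of the adjustment immediately visible as a change in this figure.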
Some other problems are not normally correctable. Optics can cause vignetting
(darkening of the periphery of the image) because of light absorption in the glass.
Cameras may have fixed pattern noise that causes local brightness variations. Cor-
recting variations in the brightness of illumination may leave variations in the angle
or color of the illumination. And, of course, the sample itself may have local
variations in density, thickness, surface flatness, and so forth, which can cause
changes in brightness.
Many of the variations other than those which are a function of the sample itself
can be corrected by capturing an image that shows just the variation. Removing the
sample and recording an image of just the background, a grey card, or a blank slide
or specimen stub with the same illumination provides a measure of the variation.
This background image can then be subtracted from or divided into the image of
the sample to level the brightness. The example in Figure 2.36 shows particles of
cornstarch imaged in the light microscope with imperfect centering of the light
source. Measuring the particle size distribution depends upon leveling the contrast
so that particles can be thresholded everywhere in the image. Capturing a background
image with the same illumination conditions and subtracting it from the original
makes this possible.
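The leveling step can be sketched in a few lines; the arrays here are synthetic stand-ins (a shading ramp for the illumination and one bright square for a particle), since the actual cornstarch images are not reproduced in code:

```python
import numpy as np

# Sketch of background subtraction for leveling, assuming two images
# captured under identical illumination: `specimen` (sample present)
# and `background` (sample removed).  Both are synthesized here.
yy, xx = np.mgrid[0:128, 0:128]
background = 100.0 + 0.5 * xx            # uneven illumination ramp
specimen = background.copy()
specimen[40:60, 40:60] += 60.0           # one bright "particle"

# Subtract the background, then restore the mean level so the result
# stays in a familiar brightness range.
leveled = specimen - background + background.mean()

# With the shading removed, a single global threshold isolates the
# particle anywhere in the field.
mask = leveled > background.mean() + 30.0
print(mask.sum())                        # area of the detected particle, → 400
```

Without the subtraction, no single threshold would work: the bright end of the illumination ramp exceeds the particle brightness at the dark end.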
The choice of subtraction or division for the background depends on whether
the imaging device is linear or logarithmic. Scanners are inherently linear, so that
the measured pixel value is directly proportional to the light intensity. The output
from most scanning microscopes is also linear, unless nonlinear gamma adjustments
are made in the amplified signal. The detectors used in digital cameras are linear,
but in many cases the output is converted to logarithmic to mimic the behavior of
film. Photographic film responds logarithmically to light intensity, with equal incre-
ments of density corresponding to equal ratios of brightness. For linear recordings,
the background is divided into the image, while for logarithmic images it is sub-
tracted (since division of numbers corresponds to the subtraction of their logarithms).
In practice, when the response of the detector is unknown, the best advice is to try
both methods and use the one that produces the better result. In the examples that follow,
some backgrounds are subtracted and some are divided to produce a level result.
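The distinction can be illustrated numerically (this example is not from the text; the `shading` and `scene` arrays are invented): a linear detector records the product of scene and illumination, so division recovers the scene, while a logarithmic response turns that product into a sum, so subtraction does:

```python
import numpy as np

# Why linear data calls for division and logarithmic data for
# subtraction.  `shading` is a multiplicative illumination falloff;
# `scene` is a (uniform) true sample brightness.
shading = np.linspace(1.0, 0.5, 100)
scene = np.full(100, 0.8)

linear = scene * shading                   # linear detector: multiplicative
log_img = np.log(scene) + np.log(shading)  # logarithmic response: additive

# Divide the linear image by its background; subtract for the log image.
flat_linear = linear / shading
flat_log = log_img - np.log(shading)

print(np.allclose(flat_linear, 0.8), np.allclose(flat_log, np.log(0.8)))  # → True True
```

Applying the wrong operation (subtracting a background from linear data, for instance) leaves a residual shading proportional to the scene contrast, which is why trying both and comparing the results is a reasonable diagnostic.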
In situations where a satisfactory background image cannot be (or was not)
stored along with the image of the specimen, there are several ways to construct
one. In some cases one color channel may contain little detail but may still serve as
a measure of the variation in illumination. Another technique that is sometimes used
is to apply an extreme low pass filter (e.g., a Gaussian smooth with a large standard
deviation) to the image to remove the features, leaving just the background variation.
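A minimal sketch of this technique, using SciPy's `gaussian_filter` on a synthetic image (a slow illumination ramp plus one small feature; the sigma value of 40 is an illustrative choice, not a universal rule):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Estimate the background with an extreme low-pass filter when no
# separate background image is available.  The standard deviation must
# be large relative to the feature size for this to work.
yy, xx = np.mgrid[0:200, 0:200]
image = 0.3 * xx + 50.0                  # slowly varying illumination
image[90:110, 90:110] += 40.0            # one small feature

estimated_bg = gaussian_filter(image, sigma=40)  # features smoothed away
leveled = image - estimated_bg

# The feature now stands out against a nearly flat residual background.
print(f"leveled range: {leveled.min():.1f} to {leveled.max():.1f}")
```

Note that the smoothed image still contains a faint trace of the feature, so the leveled result is approximate; a true background image, when available, remains preferable.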
This method is based on the assumption that the features are small compared to
Copyright © 2005 CRC Press LLC