Edge intersection of large features can also be reduced by dropping the image
magnification so that the features are not as large. But if the image also contains
small features, they may become too small to cover enough pixels to provide an
accurate measurement. Features with widths smaller than about 20 pixels can be
counted, but their measurement has an inherent uncertainty: depending on how the
feature happens to lie on the pixel grid, the measured dimension can vary by
1 pixel (a 5% error for a feature 20 pixels wide). For features of complex shape,
even more pixels are needed to record the details with fidelity.
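To make the scaling concrete, the relative uncertainty from pixel placement is roughly one pixel divided by the feature width. A minimal sketch in Python (the widths below are illustrative, not taken from any particular image):

    # The measured width can vary by about +/-1 pixel depending on where the
    # feature falls on the pixel grid, so the relative uncertainty is ~1/width.
    for width_px in (10, 20, 50, 100):
        error_pct = 100.0 / width_px  # one pixel as a percentage of the width
        print(f"{width_px:4d} px wide -> ~{error_pct:.1f}% dimensional uncertainty")

At 20 pixels this reproduces the 5% figure quoted above.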
Figure 5.21 shows three images of milk samples. After homogenization, all of
the droplets of fat are reduced to a fairly uniform and quite small size, so selection
of an appropriate magnification to count them and measure their size variation is
straightforward. The ratio of maximum to minimum diameter is less than 5:1. But
before homogenization, depending on the length of time the milk is allowed to stand
while fat droplets merge and rise toward the top (and depending on where the sample
is taken), the fat is present as a mixture of some very large and many very small
droplets. In the coarsest sample (Figure 5.21c) the ratio of diameters of the largest
to the smallest droplets is more than 50:1.
It is for samples such as these, in which large size ranges of features are present,
that images with a very large number of pixels are most essential. As an example, an
image with a width of 500 pixels would realistically be able to include features up to
about 100 pixels in width (20% of the size of the field of view), and down to about 20
pixels (smaller ones cannot be accurately measured). That is a size range of 5:1. But
to accommodate a 50:1 range, if the minimum limit for the small sizes remains at 20
pixels and the field of view must be five times the size of the largest (1000-pixel)
features, an image dimension of 5000 pixels is required, corresponding to a camera of
roughly 20 million total pixels. Only a few very high-resolution cameras (or a desktop scanner)
can capture images of that size. It is the need to deal with both large and small features
in the same image that is the most important factor behind the drive to use cameras
with very high pixel counts for microstructural image analysis.
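The arithmetic behind this requirement is easy to reproduce. A minimal sketch in Python, assuming the rules of thumb stated above (a 20-pixel minimum measurable feature and a field of view five times the largest feature), plus an assumed 4:3 sensor shape for the total pixel count:

    MIN_FEATURE_PX = 20  # smallest feature that can still be measured accurately
    FIELD_FACTOR = 5     # field of view taken as ~5x the largest feature

    def required_image_width(size_range):
        """Image width in pixels needed to cover a given max:min size range."""
        largest_px = MIN_FEATURE_PX * size_range
        return FIELD_FACTOR * largest_px

    for size_range in (5, 50):
        width = required_image_width(size_range)
        total = width * (width * 3 // 4)  # assume a roughly 4:3 sensor
        print(f"{size_range:3d}:1 -> {width} px wide (~{total / 1e6:.1f} Mpixel)")

For the 50:1 case this gives a 5000-pixel width and about 19 million total pixels, consistent with the figure quoted above.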
Furthermore, a 50:1 size range is not all that great. Human vision, with its 150
million light sensors, can (by the same reasoning process) satisfactorily deal with
features that cover about a 1000:1 size range. In other words, we can see features
that are a millimeter in size and ones that are a meter in size at the same time. To see
smaller features, down to 100 µm for example, we must move our eyes closer to
the sample and lose the ability to see large, meter-size features. Conversely, to view
a 100-meter football field we look from far off and cannot see centimeter-size
features. So humans are conditioned to expect to see features that cover a much
larger range of sizes than digital cameras can handle.
There are a few practical solutions to the need to measure both large and small
features in a specimen. One is to capture images at different magnifications, measure
them to record information only on features within the appropriate size range, and then
combine the data from the different magnifications. If that is done, it is important to
weight the different data sets not according to the number of images taken, but
according to the area imaged at each magnification, as in the sketch below.
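A minimal sketch of that weighting in Python, with hypothetical counts and field areas: normalizing each data set by the total specimen area it covers yields number densities that can be combined directly, regardless of how many images were taken at each magnification.

    # Hypothetical data: each magnification contributes counts for the size
    # class it can measure, together with the specimen area actually imaged.
    datasets = [
        {"label": "low mag (large features)", "count": 40, "n_images": 10, "mm2_per_image": 1.00},
        {"label": "high mag (small features)", "count": 300, "n_images": 25, "mm2_per_image": 0.01},
    ]

    for d in datasets:
        total_area = d["n_images"] * d["mm2_per_image"]  # total area imaged, mm^2
        density = d["count"] / total_area  # features per mm^2 in this size class
        print(f"{d['label']}: {density:.1f} features/mm^2 over {total_area:.2f} mm^2")

Weighting by the number of images instead would over-represent the high-magnification fields, which cover far less specimen area per image.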
That method will record the information on individual features, but does not include
information on how the small features are spatially distributed with respect to the
large ones. For that purpose, it is necessary to capture