TEXTURE
Many structures are not distinguished from each other or from the surrounding
background by a difference in brightness or color, nor by a distinct boundary line
of separation. Yet visually they may be easily distinguished by their texture. This use
of the word “texture” may initially be confusing to food scientists who use it to
describe physical and sensory aspects of food; it is used in image analysis as a way
to describe the visual appearance of irregularities or variations in the image, which
may be related to structure. Texture does not have a simple or unique mathematical
definition, but refers in general terms to a characteristic variability in brightness (or
color) that may exist at very local scales, or vary in a predictable way with distance
or direction. Examples of a few visually different textures are shown in Figure 3.28.
These come from a book (P. Brodatz, Textures: A Photographic Album for Artists
and Designers, Dover Publications, New York, 1966) that presented a variety of
visual textures; these images have subsequently been widely distributed via the
Internet and are used in many image processing examples of texture recognition.
Just as there are many different ways that texture can occur, so there are different
image processing tools that respond to it. Once the human viewer has determined
that texture is the distinguishing characteristic corresponding to the structural dif-
ferences that are to be enhanced or measured, selecting the appropriate tool to apply
to the image is often a matter of experience or trial-and-error. The goal is typically
to convert the textural difference to a difference in brightness that can be thresholded.
Figure 3.29 illustrates one of the simplest texture filters. The original image is
a light micrograph of a thin section of fat in cheese, showing considerable variation
in brightness from side to side (the result of varying slice thickness). Visually, the
smooth areas (fat) are easily distinguished from the highly textured protein network
around them, but there is no unique brightness value associated with the fat regions,
and they cannot be thresholded to separate them from the background for measure-
ment. The range operator, which was used above as an edge detector, can also be
useful for detecting texture. Instead of using a very small neighborhood to localize
the edge, a large enough region must be used to encompass the scale of the texture
(so that both light and dark regions will be covered). In the example, a neighborhood
radius of 2.5 pixels or more is sufficient.
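A minimal sketch of such a range filter, using SciPy's maximum and minimum filters (the function name range_filter and the circular footprint construction are our own choices, not from the text):

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter


def range_filter(image, radius):
    """Local range: difference between the brightest and darkest
    pixels in a circular neighborhood of the given radius."""
    image = np.asarray(image, dtype=float)  # avoid wrap-around in integer types
    # Build a circular footprint large enough to span the texture scale.
    r = int(np.ceil(radius))
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    footprint = x**2 + y**2 <= radius**2
    return (maximum_filter(image, footprint=footprint)
            - minimum_filter(image, footprint=footprint))
```

Smooth regions (the fat) produce a small local range while textured regions (the protein network) produce a large one, so the filtered result can be thresholded even though the original brightness values overlap.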
Many of the texture-detecting filters were originally developed for the applica-
tion of identifying different fields of planted crops in aerial photographs or satellite
images, but they work equally well when applied to microscope images. In Figure
3.30 a calculation of the local entropy isolates the mitochondria in a TEM image of
liver tissue.
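The text does not specify how the local entropy is computed; a common definition is the Shannon entropy of the gray-level histogram within a neighborhood around each pixel. A sketch under that assumption (the function name, the 16-level quantization, and the default radius are our own illustrative choices):

```python
import numpy as np
from scipy.ndimage import generic_filter


def local_entropy(image, radius=4, bins=16):
    """Shannon entropy (bits) of the gray-level histogram in a
    circular neighborhood around each pixel."""
    image = np.asarray(image, dtype=float)
    r = int(np.ceil(radius))
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    footprint = x**2 + y**2 <= radius**2

    # Quantize to a modest number of gray levels so that small
    # neighborhoods still yield meaningful histograms.
    edges = np.linspace(image.min(), image.max(), bins)
    quantized = np.digitize(image, edges).astype(float)

    def entropy(values):
        counts = np.bincount(values.astype(int))
        p = counts[counts > 0] / values.size
        return -np.sum(p * np.log2(p))

    return generic_filter(quantized, entropy, footprint=footprint)
```

Uniform regions give an entropy near zero while busy, textured regions give a high value, again converting the textural difference into a brightness difference. (scikit-image offers a faster rank-based implementation, skimage.filters.rank.entropy, for larger images.)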
Probably the most calculation-intensive texture operator in widespread use deter-
mines the local fractal dimension of the image brightness. This requires, for every
pixel in the image, constructing a plot of the range (difference between brightest
and darkest pixels) as a function of the radius of the neighborhood, out to a radius
of about 7 pixels. Plotted on log-log axes, this often shows a linear increase in
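The per-pixel log-log fit just described can be sketched as follows. One common convention (sometimes called the Hurst approach) takes the fractal dimension as 3 minus the fitted slope; that convention, and the function name, are our assumptions rather than details given in the text:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter


def local_fractal_dimension(image, max_radius=7):
    """Per-pixel slope of log(range) vs. log(radius) over circular
    neighborhoods of radius 1..max_radius, converted to a fractal
    dimension via the convention D = 3 - slope."""
    image = np.asarray(image, dtype=float)
    radii = np.arange(1, max_radius + 1)

    log_ranges = []
    for radius in radii:
        y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
        fp = x**2 + y**2 <= radius**2
        rng = (maximum_filter(image, footprint=fp)
               - minimum_filter(image, footprint=fp))
        log_ranges.append(np.log(rng + 1.0))  # +1 avoids log(0) in flat areas
    log_ranges = np.array(log_ranges)          # shape (n_radii, H, W)

    # Least-squares line at every pixel simultaneously: polyfit accepts
    # a 2-D y with one column per pixel and returns the slopes in row 0.
    slopes = np.polyfit(np.log(radii),
                        log_ranges.reshape(len(radii), -1), 1)[0]
    return 3.0 - slopes.reshape(image.shape)
```

Smoothly varying regions yield a slope near 1 (dimension near 2), while noisy, rough textures yield a slope near 0 (dimension near 3), so the map of local dimension separates textures of different roughness.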
contrast with neighborhood size. Performing least-squares regression to determine
Copyright © 2005 CRC Press LLC