position observed by the left eye. Also, an object in front of another object seems to move relative to the other object
when seen from one eye and then from the other. The closer the objects, the greater the parallax. A practical stereo vision
system is not yet available, primarily because of the difficulty in matching the two different images that are formed by
two different views of the same object.
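Once a correct match has been established, however, the depth of the matched point follows from simple triangulation. The sketch below illustrates this relationship only; it assumes a rectified camera pair with a known focal length (expressed in pixels) and a known baseline, and the function and parameter names are illustrative rather than drawn from any particular system.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth of a point from its disparity between two rectified views.

    disparity_px     -- horizontal shift of the matched point (pixels)
    focal_length_px  -- camera focal length expressed in pixels
    baseline_m       -- separation between the two camera centers (meters)
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    # Triangulation for a rectified pair: nearer objects give larger disparity.
    return focal_length_px * baseline_m / disparity_px


# Example: a 20-pixel disparity with f = 800 px and a 0.10 m baseline
# places the point 4.0 m from the cameras.
print(depth_from_disparity(20.0, 800.0, 0.10))
```

The difficult step in practice is producing the disparity itself, that is, matching corresponding points in the two images; the conversion to depth is the easy part.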
Object orientation is important in manufacturing operations such as material handling or assembly to determine where
a robot may need to position itself relative to a part to grasp the part and then transfer it to another location. Among the
methods used for determining object orientation are the equivalent ellipse, the connecting of three points, light intensity
distribution, and structured light.
Equivalent Ellipse. For an image of an object in a two-dimensional plane, an ellipse can be calculated that has the
same area as the image. The major axis of the ellipse will define the orientation of the object. Another similar measure is
the axis that yields the minimum moment of inertia of the object.
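As a sketch of the calculation (assuming a binary image held in a NumPy array with object pixels set to 1; the function name is illustrative), the orientation of the equivalent ellipse, which is also the axis of minimum moment of inertia, follows from the second central moments of the object pixels:

```python
import numpy as np

def orientation_from_moments(binary_image):
    """Angle of the equivalent-ellipse major axis for a binary object image.

    The angle (radians, measured from the image x-axis) also identifies the
    axis of minimum moment of inertia of the pixel distribution.
    """
    ys, xs = np.nonzero(binary_image)          # coordinates of object pixels
    x_mean, y_mean = xs.mean(), ys.mean()      # centroid of the object
    dx, dy = xs - x_mean, ys - y_mean
    mu20 = np.sum(dx * dx)                     # second central moments
    mu02 = np.sum(dy * dy)
    mu11 = np.sum(dx * dy)
    # Major-axis direction of the ellipse with the same second moments.
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)


# Example: a thin diagonal bar should report an angle near 45 degrees.
img = np.eye(50, dtype=np.uint8)
print(np.degrees(orientation_from_moments(img)))
```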
Connecting of Three Points. If the relative positions of three noncollinear points on a surface are known, the orientation of the surface in space can be determined by measuring the apparent relative positions of the points in the image.
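A minimal sketch of the underlying geometry (assuming the three points are already expressed in a common coordinate frame; the names are illustrative): the normal of the plane through three noncollinear points is the cross product of two edge vectors, and its direction gives the orientation of the surface.

```python
import numpy as np

def surface_normal(p1, p2, p3):
    """Unit normal of the plane through three noncollinear 3-D points."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)   # normal via the cross product of two edges
    length = np.linalg.norm(n)
    if length == 0.0:
        raise ValueError("points are collinear; the plane is undefined")
    return n / length


# Example: three points in the x-y plane give a normal along the z-axis.
print(surface_normal([0, 0, 0], [1, 0, 0], [0, 1, 0]))
```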
Light Intensity Distribution. A surface will appear darker if it is oriented at an angle other than normal to the light
source. Determining orientation based on relative light intensity requires knowledge of the source of illumination as well
as the surface characteristics of the object.
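As a simple illustration of the idea (a sketch only, assuming an ideal matte, Lambertian surface of known reflectance and a point source of known strength; the names are hypothetical), the tilt of the surface away from the illumination direction can be recovered from the ratio of measured to full brightness:

```python
import math

def tilt_from_intensity(measured, source_intensity, reflectance):
    """Angle (radians) between the surface normal and the light direction,
    assuming an ideal Lambertian surface:
    measured = reflectance * source_intensity * cos(angle).
    """
    cos_angle = measured / (source_intensity * reflectance)
    if not 0.0 <= cos_angle <= 1.0:
        raise ValueError("measured intensity inconsistent with the assumed model")
    return math.acos(cos_angle)


# Example: a reading at half the full-brightness value implies a 60-degree tilt.
print(math.degrees(tilt_from_intensity(0.5, 1.0, 1.0)))
```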
Structured Light. This technique uses a light pattern rather than a diffuse light source. The workpiece is illuminated by the structured light, and the way in which the pattern is distorted by the part can be used to determine both the three-dimensional shape and the orientation of the part.
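The sketch below illustrates the principle for the simplest case of a single projected stripe. The assumed geometry (the projector offset laterally from a camera looking straight down, with the stripe projected at a known angle from the vertical) and every name and parameter are assumptions made for illustration: the lateral shift of the stripe on the part is converted into a height above the reference plane by triangulation.

```python
import math

def height_from_stripe_shift(shift_mm, projection_angle_deg):
    """Height of a surface point above the reference plane, for a single
    projected stripe observed from directly above.

    shift_mm             -- lateral displacement of the stripe on the part,
                            relative to its position on the reference plane
    projection_angle_deg -- angle of the projected sheet of light, measured
                            from the vertical camera axis
    """
    # Triangulation: the stripe shifts sideways by height * tan(angle).
    return shift_mm / math.tan(math.radians(projection_angle_deg))


# Example: a 5 mm stripe shift with a 45-degree projection angle
# indicates a feature 5 mm above the reference plane.
print(height_from_stripe_shift(5.0, 45.0))
```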
Object Position Defined by Relative Motion. Certain operations, such as tracking or part insertion, may require
the vision system to follow the motion of an object. This is a difficult task that requires a series of image frames to be
compared for relative changes in position during specified time intervals. Motion in one dimension, as in the case of a
moving conveyor of parts, is the least complicated motion to detect. In two dimensions, motion may consist of both a
rotational and a translational component. In three dimensions, a total of six motion components (three rotational axes and
three translational axes) may need to be defined.
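For the one-dimensional case, the displacement between two frames can be estimated by finding the shift that maximizes the correlation between them. The sketch below is a minimal illustration of that idea, not a method taken from the source; it assumes two successive intensity profiles taken along the conveyor direction and stored as NumPy arrays.

```python
import numpy as np

def displacement_1d(profile_a, profile_b, max_shift):
    """Integer pixel shift of profile_b relative to profile_a that gives
    the highest correlation, searched over -max_shift..+max_shift."""
    best_shift, best_score = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        shifted = np.roll(profile_b, -shift)       # undo a trial displacement
        score = float(np.dot(profile_a, shifted))  # correlation at this shift
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift


# Example: a bright feature that moves 7 pixels between frames.
a = np.zeros(100); a[40:45] = 1.0
b = np.roll(a, 7)
print(displacement_1d(a, b, max_shift=20))
```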
Feature Extraction. One of the useful approaches to image interpretation is analysis of the fundamental geometric
properties of two-dimensional images. Parts tend to have distinct shapes that can be recognized on the basis of elementary
features. These distinguishing features are often simple enough to allow identification independent of the orientation of
the part. For example, if surface area (number of pixels) is the only feature needed for differentiating the parts, then
orientation of the part is not important. For more complex three-dimensional objects, additional geometric properties may
need to be determined, including descriptions of various image segments. The process of defining these elementary
properties of the image is often referred to as feature extraction. The first step is to determine boundary locations and to
segment the image into distinct regions. Next, certain geometric properties of these regions are determined. Finally, these
image regions are organized in a structure describing their relationship.
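A minimal sketch of the first two steps follows (assuming a binary image held in a NumPy array and 4-connectivity between pixels; the flood-fill labeling shown is only one of many possible implementations). It segments the image into distinct regions and reports the area, in pixels, of each region.

```python
import numpy as np
from collections import deque

def label_regions(binary_image):
    """Label 4-connected regions of a binary image and return (labels, areas).

    labels -- array of the same shape, 0 for background, 1..N for regions
    areas  -- dict mapping each region label to its pixel count
    """
    labels = np.zeros(binary_image.shape, dtype=int)
    areas, next_label = {}, 0
    rows, cols = binary_image.shape
    for r in range(rows):
        for c in range(cols):
            if binary_image[r, c] and labels[r, c] == 0:
                next_label += 1                      # start a new region
                queue, count = deque([(r, c)]), 0
                labels[r, c] = next_label
                while queue:                         # flood fill the region
                    y, x = queue.popleft()
                    count += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary_image[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            queue.append((ny, nx))
                areas[next_label] = count            # area = number of pixels
    return labels, areas


# Example: two separate square parts of different sizes.
img = np.zeros((20, 20), dtype=np.uint8)
img[2:6, 2:6] = 1      # 16-pixel part
img[10:18, 10:18] = 1  # 64-pixel part
print(label_regions(img)[1])
```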
Light Intensity Variations. One of the most sophisticated and potentially useful approaches to machine vision is the
interpretation of an image based on differences in light intensity among regions of the image. Many of the features described
above are used in vision systems to create two-dimensional interpretations of images. However, analysis of subtle
changes in shadings over the image can add a great deal of information about the three-dimensional nature of the object.
The problem is that most machine vision techniques are not capable of dealing with the complex patterns formed by
varying conditions of illumination, surface texture and color, and surface orientation.
Another, more fundamental difficulty is that image intensities can change drastically with relatively modest variations in
illumination or surface condition. Systems that attempt to match the gray-level values of each pixel to a stored model can
easily suffer a deterioration in performance in real-world manufacturing environments. The use of geometric features
such as edges or boundaries is therefore likely to remain the preferred approach. Even better approaches are
likely to result from research being performed on various techniques for determining surface shapes from relative
intensity levels. One approach, for example, assumes that the light intensity at a given point on the surface of an object
can be precisely determined by an equation describing the nature and location of the light source, the orientation of the
surface at the point, and the reflectivity of the surface.
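For a matte (Lambertian) surface this relationship takes a particularly simple form. The sketch below is an illustration under that single assumption; the symbols and names are not from the source. It evaluates the predicted image intensity from the surface normal, the light direction, and the surface reflectance.

```python
import numpy as np

def predicted_intensity(surface_normal, light_direction, reflectance,
                        source_strength=1.0):
    """Image intensity predicted for a Lambertian surface patch:
    intensity = reflectance * source * cos(angle between normal and light)."""
    n = np.asarray(surface_normal, dtype=float)
    l = np.asarray(light_direction, dtype=float)
    n /= np.linalg.norm(n)
    l /= np.linalg.norm(l)
    cos_angle = max(0.0, float(np.dot(n, l)))   # patches facing away are dark
    return reflectance * source_strength * cos_angle


# Example: a patch tilted 60 degrees from the light returns half the intensity
# of one facing the source directly.
print(predicted_intensity([0, 0, 1], [0, 0, 1], 0.8))
print(predicted_intensity([0, np.sin(np.radians(60)), np.cos(np.radians(60))],
                          [0, 0, 1], 0.8))
```

Inverting such a model, that is, inferring surface orientation from observed intensities, is the substance of the research described above.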