These approaches were well suited to applications in medical imaging, whose raw data generally consisted of
progressive slices of 2-D imagery from sources such as computed tomography (CT) or nuclear magnetic
resonance (NMR) scans of the human body. This data formed a regular array of 3-D points, as ordered slices
of n-by-m 2-D point arrays. Gray scale points on each slice represented densities of features such as human
tissue and bone, and points on density boundaries could be used with curve-fitting techniques to generate
contour curves.
Methods published by researchers such as Christiansen and Sederberg [3] and Ganapathy and Dennehy [7]
attempted to fit 3-D polygons between points on adjacent contour curves on each slice to form the segments
of an isosurface of constant density along the region boundaries. The end result of this polygon fitting became
a 3-D image of the boundary of an anatomical feature such as a tumor or an organ.
A later approach developed by Artzy et al. [2] sought to fit surfaces across adjacent volume elements throughout
the full 3-D space. Here, a graph-theory representation was used to link adjacent 3-D volume elements. This paper
was one of the first to use the term voxel—now the common term for a basic unit of volume data, much the
same as a pixel represents the basic component of a raster computer graphics display.
These approaches required substantial computational logic, and not all of these algorithms were "foolproof";
situations such as diverging result paths might cause a pure surface-fitting algorithm to fail. More
importantly, most were built around regular, ordered arrays of volume elements, and could not be applied
directly to irregular 3-D volume elements such as finite elements. More recently, these approaches have
been replaced by techniques which are algorithmic rather than heuristic, and can generally guarantee a correct
rendering of a volume array so long as each volume element, or voxel, is considered within the algorithm.
These discrete or surface-fitting methods generally consider each volume element or data point’s contribution
to a surface of constant value, or isosurface, through the volume. These methods originated largely in the
medical area for detection of the exterior of known-density regions such as tumors and organs. In one of the
earliest approaches, Höhne and Bernstein [8] computed the gradient of the scalar value in adjacent volume
elements as a basis for the display of surfaces.
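The gradient idea can be sketched as follows: on a regular grid, the gradient of the scalar field at a point is approximated by central differences over the neighboring volume elements, and its direction and magnitude can then drive surface detection and shading. This is a minimal illustration, not the authors' implementation; the function name and the unit grid spacing are assumptions.

```python
def gradient_at(vol, i, j, k):
    """Approximate the scalar-field gradient at grid point (i, j, k)
    using central differences over the neighboring volume elements.
    `vol` is a 3-D nested list of scalar values (e.g. CT densities);
    a unit grid spacing is assumed for simplicity."""
    gx = (vol[i + 1][j][k] - vol[i - 1][j][k]) / 2.0
    gy = (vol[i][j + 1][k] - vol[i][j - 1][k]) / 2.0
    gz = (vol[i][j][k + 1] - vol[i][j][k - 1]) / 2.0
    return (gx, gy, gz)

# A tiny 3x3x3 field whose density increases along the x axis:
vol = [[[float(i) for k in range(3)] for j in range(3)] for i in range(3)]
print(gradient_at(vol, 1, 1, 1))  # -> (1.0, 0.0, 0.0), pointing along +x
```

A large gradient magnitude marks a density boundary, and the gradient direction serves as a surface normal for shading the displayed surface.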
Newer approaches include the popular Marching Cubes algorithm for generating isosurface polygons on a
voxel-by-voxel basis (Cline et al. [4]), an extension to Marching Cubes for fitting surfaces within each voxel
by use of a smoothing scheme (Gallagher and Nagtegaal [5]), and the Dividing Cubes approach of
subdividing threshold voxels into smaller cubes at the resolution of pixels (Cline et al. [4]), each of which is
described hereafter.
These techniques are easily extended for use with numerical analysis data, subject to certain constraints. In
their most basic form, they presume linear volume element topologies such as bricks, wedges or tetrahedra.
While extensions can be developed to account for issues such as quadratic and higher-order edge curvature or
mid-face vertices, in practice linear assumptions are generally used.
The Marching Cubes Algorithm
One of the key developments in volume visualization of scalar data was the Marching Cubes algorithm of
Lorensen and Cline [9], which was patented by the General Electric Company in 1987. This technique
examines each element of a volume, and determines, from the arrangement of vertex values above or below a
result threshold value, what the topology of an isosurface passing through this element would be. It is a
high-speed technique for generating 3-D isosurfaces whose surface computations are performed on single
volume elements, and it is almost completely failure-proof.
This technique was one of the first to reduce the generation of isosurfaces to a closed-form problem with a
simple solution. It processes one volume element at a time, generating its isosurface geometry immediately
before moving to the next volume element. Its basic approach involves taking an eight-vertex cube, examining
the scalar value at each vertex, and determining if and how an isosurface would pass through it.
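The per-voxel classification can be sketched as follows: each of the cube's eight vertices is tested against the threshold, and the above/below pattern forms an 8-bit index selecting one of 2^8 = 256 possible surface topologies. This sketch shows only the index computation; the corner ordering and function name are illustrative, and the subsequent lookup into a triangulation table is omitted.

```python
def cube_case_index(corner_values, threshold):
    """Classify the 8 corners of a voxel against the isosurface
    threshold. Each corner at or above the threshold sets one bit,
    giving one of 256 possible cases; in Marching Cubes this index
    selects the isosurface topology for the voxel from a precomputed
    table. Corner ordering here is illustrative."""
    index = 0
    for bit, value in enumerate(corner_values):
        if value >= threshold:
            index |= 1 << bit
    return index

# All corners below the threshold: the isosurface misses this voxel.
print(cube_case_index([0.1] * 8, 0.5))          # -> 0
# One corner above: the surface clips off that single corner.
print(cube_case_index([0.9] + [0.1] * 7, 0.5))  # -> 1
```

Cases 0 (all below) and 255 (all above) produce no geometry, so a voxel wholly inside or outside the isosurface is skipped at almost no cost, which is a key source of the algorithm's speed.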