
IMAGING & THERAPEUTIC TECHNOLOGY

Three-dimensional Visualization and Analysis Methodologies: A Current Perspective
Jayaram K. Udupa, PhD

Three-dimensional (3D) imaging was developed to provide both qualitative and quantitative information about an object or object system from images obtained with multiple modalities including digital radiography, computed tomography, magnetic resonance imaging, positron emission tomography, single photon emission computed tomography, and ultrasonography. Three-dimensional imaging operations may be classified under four basic headings: preprocessing, visualization, manipulation, and analysis. Preprocessing operations (volume of interest, filtering, interpolation, registration, segmentation) are aimed at extracting or improving the extraction of object information in given images. Visualization operations facilitate seeing and comprehending objects in their full dimensionality and may be either scene-based or object-based. Manipulation may be either rigid or deformable and allows alteration of object structures and of relationships between objects. Analysis operations, like visualization operations, may be either scene-based or object-based and deal with methods of quantifying object information. There are many challenges involving matters of precision, accuracy, and efficiency in 3D imaging. Nevertheless, 3D imaging is an exciting technology that promises to offer an expanding number and variety of applications.

INTRODUCTION
The main purpose of three-dimensional (3D) imaging is to provide both qualitative and quantitative information about an object or object system from images obtained with multiple modalities including digital radiography, computed tomography (CT), magnetic resonance (MR) imaging, positron emission tomography (PET), single photon emission computed tomography (SPECT), and ultrasonography (US). Objects that are studied may be rigid (eg, bones), deformable (eg, muscles), static (eg, skull), dynamic (eg, heart, joints), or conceptual (eg, activity regions in PET, SPECT, and functional MR imaging; isodose surfaces in radiation therapy).

At present, it is possible to acquire medical images in two, three, four, or even five dimensions. For example, two-dimensional (2D) images might include a digital radiograph or a tomographic section obtained with CT, MR imaging, PET, SPECT, or US; a 3D image might be used to demonstrate a volume of tomographic sections of a static object; a time sequence of 3D images of a dynamic object would be displayed in four dimensions; and an image of a dynamic object for a range of parameters (eg, MR spectroscopic images of a dynamic object) would be displayed in five dimensions.

Abbreviations: MIP = maximum intensity projection, PD = proton density, PET = positron emission tomography, 3D =
three-dimensional, 2D = two-dimensional

Index terms: Computed tomography (CT) • Computers • Computers, simulation • Images, analysis • Images, display • Images, processing • Magnetic resonance (MR) • Single-photon emission tomography (SPECT) • Ultrasound (US)

RadioGraphics 1999; 19:783–806


From the Department of Radiology, University of Pennsylvania, 423 Guardian Dr, Philadelphia, PA 19104-6021. Received April 21, 1998; revision requested May 21 and received December 14; accepted December 21. Address reprint requests to the author.
©RSNA, 1999

Figure 1. Schematic illustrates a typical 3D imaging system.

It is not currently feasible to acquire truly realistic-looking four- and five-dimensional images; consequently, approximations are made. In most applications, the object system being investigated consists of only a few static objects. For example, a 3D MR imaging study of the head may focus on white matter, gray matter, and cerebrospinal fluid.

A textbook with a systematic presentation of 3D imaging is not currently available. However, edited works may be helpful for readers unfamiliar with the subject (1–3). The reference list at the end of this article is representative but not exhaustive.

In this article, we provide an overview of the current status of the science of 3D imaging, identify the primary challenges now being encountered, and point out the opportunities available for advancing the science. We describe and illustrate the main 3D imaging operations currently being used. In addition, we delineate major concepts and attempt to clear up some common misconceptions. Our intended audience includes developers of 3D imaging methods and software as well as developers of 3D imaging applications and clinicians interested in these applications. We assume the reader has some familiarity with medical imaging modalities and a knowledge of the rudimentary concepts related to digital images.

CLASSIFICATION OF 3D IMAGING OPERATIONS
Three-dimensional imaging operations can be broadly classified into the following categories: (a) preprocessing (defining the object system to create a geometric model of the objects under investigation), (b) visualization (viewing and comprehending the object system), (c) manipulation (altering the objects [eg, virtual surgery]), and (d) analysis (quantifying information about the object system). These operations are highly interdependent. For example, some form of visualization is essential to facilitate the other three classes of operations. Similarly, object definition through an appropriate set of preprocessing operations is vital to the effective visualization, manipulation, and analysis of the object system. We use the phrase "3D imaging" to collectively refer to these four classes of operations.

A monoscopic or stereoscopic video display monitor of a computer workstation is the most commonly used viewing medium for images. However, other media such as holography and head-mounted displays are also available. Unlike the 2D computer monitor, holography offers a 3D medium for viewing. The head-mounted display consists of two small monitors positioned in front of the eyes as part of a helmetlike device worn by the user. This arrangement creates the sensation of being free from one's natural surroundings and immersed in an artificial environment. However, the computer monitor is by far the most commonly used viewing medium, mainly because of its superior flexibility, speed of interaction, and resolution compared with other media.

A generic 3D imaging system is represented in Figure 1. A workstation with appropriate software implementing 3D imaging operations forms the core of the system. A wide variety of input or output devices are used depending on the application. On the basis of the core of the system (ie, independent of input or output), 3D imaging systems may be categorized as those having (a) physician display consoles provided by imaging equipment vendors, (b) image processing and visualization workstations supplied by other independent vendors, (c) 3D imaging software supplied independent of the workstation, and (d) university-based 3D imaging software (often freely available via the Internet). Systems produced by scanner manufacturers and workstation vendors usually provide effective solutions but may cost $50,000–$150,000. For users with expertise in accessing, installing, and running the software, university-based 3D imaging software is available that can provide very effective, inexpensive solutions. For example, for under $5,000 it is possible to configure a complete system running on modern personal computers (eg, Pentium 300; Intel, Santa Clara, Calif) that performs as well as or even better than the costly systems described in the other three categories.



Terminology
Some frequently used terms in 3D imaging are defined in the Table and illustrated in Figure 2. The region of the image corresponding to the anatomic region of interest is divided into rectangular elements (Fig 2). These elements are usually referred to as pixels for 2D images and voxels for 3D images; in this article, however, we refer to them as voxels for all images. In 2D imaging, the voxels are usually squares, whereas in 3D imaging they are cuboids with a square cross section.

Frequently Used Terms in 3D Imaging

Scene: multidimensional image; a rectangular array of voxels with assigned values.
Scene domain: the anatomic region represented by the scene.
Scene intensity: the values assigned to the voxels in a scene.
Pixel size: the length of a side of the square cross section of a voxel.
Scanner coordinate system: origin and orthogonal axes affixed to the imaging device.
Scene coordinate system: origin and orthogonal axes affixed to the scene (the origin is usually assumed to be the upper left corner of the first section of the scene; the axes are the edges of the scene domain that converge at the origin).
Object coordinate system: origin and orthogonal axes affixed to the object or object system.
Display coordinate system: origin and orthogonal axes affixed to the display device.
Rendition: 2D image depicting the object information captured in a scene or object system.

Figure 2. Drawing provides graphic representation of the basic terminology used in 3D imaging. abc = scanner coordinate system, rst = display coordinate system, uvw = object coordinate system, xyz = scene coordinate system.

Object Characteristics in Images
There are two important object characteristics whose careful management is vital in all 3D imaging operations: graded composition and hanging-togetherness.

Graded Composition
Most objects in the body have a heterogeneous material composition. In addition, imaging devices introduce blurring into acquired images due to various limitations. As a result, regions corresponding to the same object display a gradation of scene intensities. On the knee CT scan shown in Figure 3, both the patella and the femur exhibit this property (ie, the region corresponding to bone in these anatomic locations has not just one CT value but a gradation of values).

Hanging-togetherness (Gestalt)
Despite the gradation described in the preceding paragraph, when one views an image, voxels seem to "hang together" (form a gestalt) to form an object. For example, the high-intensity voxels of the patella do not (and should not) hang together with similar voxels in the femur, although voxels with dissimilar intensities in the femur hang together (Fig 3).


PREPROCESSING
The aim of preprocessing operations is to take a set of scenes and output either computer object models or another set of scenes, derived from the given set, that facilitates the creation of computer object models. The most commonly used operations are volume of interest, filtering, interpolation, registration, and segmentation.

Volume of Interest
The volume of interest operation converts a given scene into another scene. Its purpose is to reduce the amount of data by specifying a region of interest and a range of intensity of interest.

A region of interest is specified by creating a rectangular box that delimits the scene domain in all dimensions (Fig 4a). A range of intensity of interest is specified by designating an intensity interval. Within this interval, scene intensities are transferred unaltered to the output.
Outside the interval, intensities are set to the lower or upper limit, whichever is nearer. The range of intensity of interest is indicated as an interval on a histogram of the scene (Fig 4b). The corresponding section in the output scene is shown in Figure 4c. This operation can often reduce storage requirements for scenes by a factor of 2–5. It is advisable to use the volume of interest operation first in any sequence of 3D imaging operations.

Figure 3. Graded composition and hanging-togetherness. CT scan of the knee illustrates graded composition of intensities and hanging-togetherness. Voxels within the same object (eg, the femur) are assigned considerably different values. Despite this gradation of values, however, it is not difficult to identify the voxels as belonging to the same object (hanging-togetherness).

The challenge in making use of the volume of interest operation is to completely automate it and to do so in an optimal fashion, which requires explicit delineation of objects at the outset.
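The operation itself reduces to a crop plus an intensity clamp. Below is a minimal sketch, assuming the scene is stored as a NumPy array indexed (z, y, x); the function name and argument layout are ours, not the article's.

```python
import numpy as np

def volume_of_interest(scene, lo_corner, hi_corner, lo_int, hi_int):
    """Crop a 3D scene to a rectangular region of interest and clamp
    its intensities to the designated range of intensity of interest."""
    z0, y0, x0 = lo_corner
    z1, y1, x1 = hi_corner
    voi = scene[z0:z1, y0:y1, x0:x1]
    # Intensities inside [lo_int, hi_int] pass through unaltered;
    # values outside are set to the nearer interval limit.
    return np.clip(voi, lo_int, hi_int)
```

Cropping alone accounts for most of the storage reduction mentioned above; restricting the intensity range can additionally reduce the number of bits needed per voxel.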
Filtering
Filtering converts a given scene into another scene. Its purpose is to enhance wanted (object) information and suppress unwanted (noise, background, other object) information in the output scene. Two kinds of filters are available: suppressing filters and enhancing filters. Ideally, unwanted information is suppressed without affecting wanted information, and wanted information is enhanced without affecting unwanted information.

The most commonly used suppressing filter is a smoothing operation used mainly for suppressing noise (Fig 5a, 5b). In this operation, a voxel v in the output scene is assigned an intensity that represents a weighted average of the intensities of voxels in the neighborhood of v in the input scene (4). Methods differ as to how neighborhoods are determined and how weight is assigned (5). Another commonly used method is median filtering. In this method, the voxel v in the output scene is assigned a value that simply represents the middle value (median) of the intensities of the voxels in the neighborhood of v in the input scene when the voxels are arranged in ascending order.

In another method (5), often used in processing MR images, a process of diffusion and flow is considered to govern the nature and extent of smoothing. The idea is that in regions of voxels with a low rate of change in intensity, voxel intensities diffuse and flow into neighboring regions. This process is prevented by voxels with a high rate of change in intensity. Certain parameters control the extent of diffusion that takes place and the limits of the magnitude of the rate of change in scene intensity that are considered "low" and "high." This method is quite effective in overcoming noise but sensitive enough not to suppress subtle details or blur edges.

The most commonly used enhancing filter is an edge enhancer (Fig 5c) (4). With this filter, the intensity of a voxel v in the output is the rate of change in the intensity of v in the input. If we think of the input scene as a function, then this rate of change is given by the magnitude of the gradient of the function. Because this function is not known in analytic form, various digital approximations are used for this operation. The gradient has a magnitude (rate of change) and a direction in which this change is maximal. For filtering, the direction is usually ignored, although it is used in operations used to create renditions. Methods differ as to how the digital approximation is determined, a question that has been extensively studied in computer vision (6).


Figure 4. Preprocessing with a volume of interest operation. (a) Head CT scan includes a specified region of interest (rectangle). (b) Histogram depicts the intensities of the scene designated in a and includes a specified intensity of interest. (c) Resulting image corresponds to the specified region of interest in a.

Figure 5. Preprocessing with suppressing and enhancing filters. (a) Head CT scan illustrates the appearance of an image prior to filtering. (b) Same image as in a after application of a smoothing filter. Note that noise is suppressed in regions of uniform intensity, but edges are also blurred. (c) Same image as in a after application of an edge-enhancing filter. Note that regions of uniform intensity are unenhanced because the gradient in these regions is small. However, the boundaries (especially of skin and bone) are enhanced.


Figure 6. Shape-based interpolation of a binary CT scene created by designating a threshold. (a) CT scene after shape-based interpolation at a coarse resolution and subsequent surface rendering. (b) The same scene after interpolation at a fine resolution and surface rendering.

Unfortunately, most existing suppressing filters often also suppress object information, and enhancing filters enhance unwanted information. Explicit incorporation of object knowledge into these operations is necessary to minimize these effects.
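The filters described above can be sketched as follows with NumPy and SciPy; the Gaussian weighting and the 3-voxel neighborhood are illustrative assumptions rather than the specific methods of references 4 and 5.

```python
import numpy as np
from scipy import ndimage

def smooth(scene, sigma=1.0):
    # Suppressing filter: each output voxel is a weighted average of
    # its neighborhood; here the weights form a Gaussian.
    return ndimage.gaussian_filter(scene.astype(float), sigma)

def median_smooth(scene, size=3):
    # Median filtering: each output voxel receives the middle value
    # of the intensities in its size x size x size neighborhood.
    return ndimage.median_filter(scene, size=size)

def edge_enhance(scene):
    # Enhancing filter: output intensity is the magnitude of a digital
    # approximation of the scene intensity gradient at each voxel.
    grads = np.gradient(scene.astype(float))
    return np.sqrt(sum(g * g for g in grads))
```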
Interpolation
Like filtering, interpolation converts a given scene into another scene. Its purpose is to change the level of discretization (sampling) of the input scene. Interpolation becomes necessary when the objective is (a) to change the nonisotropic discretization of the input scene to isotropic discretization or to a desired level of discretization, (b) to represent longitudinal scene acquisitions in a registered common coordinate system, (c) to represent multimodality scene acquisitions in a registered coordinate system, or (d) to re-section the given scene. Two types of interpolation are currently available: scene-based interpolation and object-based interpolation.

Scene-based Interpolation
The intensity of a voxel v in the output scene is determined on the basis of the intensity of voxels in the neighborhood of v in the input scene. Methods differ as to how the neighborhoods are determined and what form of the functions of the neighboring intensities is used to estimate the intensity of v (3,6,7). In 3D interpolation, the simplest solution is to estimate new sections between sections of the input scene, keeping the pixel size of the output scene the same as that of the input scene. This leads to a one-dimensional interpolation problem: estimating the scene intensity of any voxel v in the output scene from the intensities of voxels in the input scene on the two sides of v in the z direction (the direction orthogonal to the sections).

In nearest neighbor interpolation, v is assigned the value of the voxel that is closest to v in the input scene. In linear interpolation, two voxels v1 and v2 (one on either side of v) are considered, and the value of v is determined with the assumption that the input scene intensity changes linearly from the intensity at v1 to that at v2. In higher-order (eg, cubic) interpolations, more neighboring voxels are considered. When the size of v in the output scene differs in all dimensions from that of voxels in the input scene, the situation becomes more general, and intensities are assumed to vary linearly or as a higher-order polynomial in each of the three directions in the input scene.
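For the one-dimensional case just described, a minimal sketch (ours, with the z direction as array axis 0) might read:

```python
import numpy as np

def interpolate_sections(scene, factor=2):
    """Estimate new sections between the sections of a 3D scene by
    linear interpolation along z, leaving pixel size unchanged."""
    nz = scene.shape[0]
    z_out = np.linspace(0.0, nz - 1.0, (nz - 1) * factor + 1)
    out = np.empty((z_out.size,) + scene.shape[1:], dtype=float)
    for i, z in enumerate(z_out):
        z0 = int(np.floor(z))
        z1 = min(z0 + 1, nz - 1)
        t = z - z0
        # Intensity is assumed to change linearly between the two
        # input sections on either side of the output section.
        out[i] = (1.0 - t) * scene[z0] + t * scene[z1]
    return out
```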


Figure 7. Scene-based registration. (a) Three-dimensional scenes corresponding to proton-density (PD)-weighted MR images of the head obtained in a patient with multiple sclerosis demonstrate a typical preregistration appearance. The scenes were acquired at four different times. (b) Same scenes as in a after 3D registration. The progression of the disease (hyperintense lesions around the ventricles) is now readily apparent. At registration, the scenes were re-sectioned with a scene-based interpolation method to obtain sections at the same location.

Object-based Interpolation
Object information derived from scenes is used to guide the interpolation process. At one extreme (8), the given scene is first converted to a binary scene (ie, a scene with only two intensities: 0 and 1) with a segmentation operation (see "Segmentation"). The voxels with a value of 1 represent the object of interest, whereas the voxels with a value of 0 represent the rest of the scene domain. The shape of the region represented by the 1 voxels (the object) is then used to create an output binary scene with a similar shape (9,10) by way of interpolation. This is done by first converting the binary scene back into a (gray-valued) scene by assigning every voxel a value that represents the shortest distance between the voxel and the boundary between the 0 voxels and the 1 voxels; the 0 voxels are assigned a negative distance, whereas the 1 voxels are assigned a positive distance. This distance scene is then interpolated with a scene-based technique and is subsequently converted back to a binary scene by thresholding at 0. At the other extreme, the shape of the intensity profile of the input scene is itself considered an object to be used to guide interpolation, so that this shape is retained as faithfully as possible in the output scene (11). For example, in the interpolation of a 2D scene with this method, the scene is converted into a 3D surface of intensity profile wherein the height of the surface represents pixel intensities. This (binary) object is then interpolated with a shape-based method. Several methods exist between these two extremes (12,13). The shape-based methods have been shown to produce more accurate results (8–11) than most of the commonly used scene-based methods.

Figure 6 demonstrates binary shape-based interpolation of an image derived from CT data at coarse and fine levels of discretization. The original 3D scene was first assigned a threshold to create a binary scene. This binary scene was then interpolated at coarse (Fig 6a) and fine (Fig 6b) levels and surface rendered.

The challenge in interpolation is to identify specific object information and incorporate it into the process. With such information, the accuracy of interpolation can be improved.
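The first extreme described above can be sketched with a Euclidean distance transform standing in for the shortest-distance assignment; the choice of transform and the function names are our assumptions.

```python
import numpy as np
from scipy import ndimage

def signed_distance(binary_section):
    """Positive shortest distance to the boundary for 1 voxels,
    negative for 0 voxels."""
    inside = ndimage.distance_transform_edt(binary_section)
    outside = ndimage.distance_transform_edt(1 - binary_section)
    return inside - outside

def shape_based_interpolate(section_a, section_b, t=0.5):
    """Estimate a binary section a fraction t of the way between two
    given binary sections of the same object."""
    da = signed_distance(section_a)
    db = signed_distance(section_b)
    # Scene-based (linear) interpolation of the distance scenes,
    # then conversion back to binary by thresholding at 0.
    return ((1.0 - t) * da + t * db) >= 0
```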


Figure 8. Rigid object-based registration. Sequence of 3D MR imaging scenes of the foot allows kinematic analysis of the midtarsal joints. The motion (ie, translation and rotation) of the talus, calcaneus, and navicular and cuboid bones from one position to the other is determined by registering the bone surfaces in the two different positions.

Registration
Registration takes two scenes or objects as input and outputs a transformation that, when applied to the second scene or object, matches it as closely as possible to the first. Its purpose is to combine scene or object information from multiple modalities and protocols, to determine change, growth, motion, and displacement of objects, and to aid in object identification. Registration may be either scene-based or object-based.

Scene-based Registration
To match two scenes, a rigid transformation made up of translation and rotation (and often scaling) is calculated for one scene S2 such that the intensity pattern of the transformed scene (S2') matches that of the first scene (S1) as closely as possible (Fig 7) (14). Methods differ with respect to the matching criterion used and the means of determining which of the infinite number of possible translations and rotations are optimal (15). Scene-based registration methods are also available for cases in which objects undergo elastic (nonrigid) deformation (16).
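As a toy sketch of scene-based registration, not the actual methods of references 14–16, the following exhaustively scores small integer translations of S2 against S1 under a sum-of-squared-differences matching criterion (rotation and scaling omitted for brevity):

```python
import numpy as np
from scipy import ndimage

def match_score(s1, s2):
    # One possible matching criterion: negative sum of squared
    # intensity differences (higher is better).
    return -np.sum((s1.astype(float) - s2.astype(float)) ** 2)

def register_translation(s1, s2, search=3):
    """Find the integer translation of s2 (within +/- search voxels
    along each axis) whose intensity pattern best matches s1."""
    best, best_shift = -np.inf, (0, 0, 0)
    offsets = range(-search, search + 1)
    for dz in offsets:
        for dy in offsets:
            for dx in offsets:
                moved = ndimage.shift(s2, (dz, dy, dx), order=1)
                score = match_score(s1, moved)
                if score > best:
                    best, best_shift = score, (dz, dy, dx)
    return best_shift
```

Practical methods replace the exhaustive search with a numerical optimizer over continuous translation and rotation parameters.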
Object-based Registration
In object-based registration, two scenes are registered on the basis of object information extracted from the scenes. Ideally, the two objects should be as similar as possible. For example, to match 3D scenes of the head obtained with MR imaging and PET, one may use the outer skin surface of the head as computed from each scene and match the two surfaces (17). Alternatively (or in addition), landmarks such as points, curves, or planes that are observable in and computable from both scenes, as well as implanted objects, may be used (18–20). Optimal translation and rotation parameters for matching the two objects are determined by minimizing some measure of distance between the two (sets of) objects. Methods differ as to how distances are defined and optimal solutions are computed.

Rigid object-based registration is illustrated in Figure 8. In contrast, deformable matching operations can also be used on objects (21,22). These operations may be more appropriate than rigid matching for nonrigid soft-tissue structures. Typically, a global approximate rigid matching operation is performed, followed by local deformations for more precise matching. Deformable registration is also used to match computerized brain atlases to brain scene data obtained in a given patient (23). Initially, some object information has to be identified in the scene. This procedure has several potential applications in functional imaging, neurology, and neurosurgery as well as in object definition per se.

The challenge in registration is that scene-based methods require that the intensity patterns in the two scenes be similar, which is often not the case. Converting scenes into fuzzy (nonbinary) object descriptions that retain object gradation can potentially overcome this problem while still retaining the strength of scene-based methods. Deformable fuzzy object matching seems natural and appropriate in most situations but will require the development of fuzzy mechanics theory and algorithms.

Segmentation
From a given set of scenes, segmentation outputs computer models of the object information captured in the scenes. Its purpose is to identify and delineate objects. Segmentation consists of two related tasks: recognition and delineation.

Recognition
Recognition consists of roughly determining the whereabouts of an object in the scene. In Figure 3, for example, recognition involves determining that "this is the femur" and "this is the patella." This task does not involve the precise specification of the region occupied by the object.

Recognition may be accomplished either automatically or with human assistance. In automatic (knowledge- and atlas-based) recognition, artificial intelligence methods are used to represent knowledge about objects and their relationships (24–26). Preliminary delineation is usually needed in these methods to extract object components and to form and test hypotheses related to whole objects. A carefully created atlas consisting of a complete description of the geometry and interrelationships of objects is used (16,27,28). Some delineation of object components in the given scene is necessary. This information is used to determine the mapping necessary to transform voxels or other geometric elements from the scene space to the atlas. Conversely, the information is also used to deform the atlas so that it matches the delineated object components in the scene.


In human-assisted recognition, simple assistance is often sufficient to help solve a segmentation problem. This assistance may take several forms: for example, specification of several seed voxels inside the 3D region occupied by the object or on its boundary; creation of a box (or some other simple geometric shape) that just encloses the object and can be quickly specified; or a click of a mouse button to accept a real object (eg, a lesion) or reject a false object.

Delineation
Delineation involves determining the precise spatial extent and composition of an object, including gradation. In Figure 3, if bone is the object system of interest, then delineation consists of defining the spatial extent of the femur and patella separately and specifying an "objectness" value for each voxel in each object. Once the objects are defined separately, the femur and the patella can be visualized, manipulated, and analyzed individually.

Delineation may be accomplished with a variety of methods. Often, delineation is itself considered to be the entire segmentation problem, in which case related solutions are considered to be solutions to the segmentation problem. However, it is helpful to distinguish between recognition and delineation to understand and help solve the difficulties encountered in segmentation. Approaches to delineation can be broadly classified as boundary-based or region-based.

In boundary-based delineation, an object description is output in the form of a boundary surface that separates the object from the background. The boundary description may take the form of a hard set of primitives (eg, points, polygons, surface patches, voxels) or a fuzzy set of primitives such that each primitive has an assigned grade of "boundariness."

In region-based delineation, an object description is output in the form of the region occupied by the object. The description may take the form of a hard set of voxels or of a fuzzy set such that each voxel in the set has an assigned grade of "objectness." With the former method, each voxel in the set is considered to contain 100% object material; with the latter method, this value may be anywhere from 0% to 100%.

Object knowledge usually facilitates recognition and delineation of that object. Paradoxically, this implies that segmentation is required for effective object segmentation. As we have noted, segmentation is needed to perform most of the preprocessing operations in an optimal fashion. It will be seen later that segmentation is essential for most visualization, manipulation, and analysis tasks. Thus, segmentation is the most crucial among all 3D imaging operations and also the most challenging.

Knowledgeable human beings usually outperform computer algorithms in the high-level task of recognition. However, carefully designed computer algorithms outperform human beings in achieving precise, accurate, and efficient delineation. Clearly, human delineation cannot account for graded object composition. Most of the challenges in completely automating segmentation may be attributed to shortcomings in computerized recognition techniques and to the lack of delineation techniques that can handle graded composition and hanging-togetherness.

There are eight possible combinations of approaches to recognition and delineation (hard or fuzzy, boundary-based or region-based, automatic or human-assisted), resulting in eight different methods of segmentation.

With hard, boundary-based automatic segmentation, thresholding and isosurfacing are most commonly used (29–32). In these techniques, a scene intensity threshold is specified, and the surface that separates voxels with an intensity above the threshold from those with an intensity below the threshold is computed. Methods differ as to how the surface is represented and computed and whether surface connectedness is taken into account. The surface may be represented in terms of voxels, voxel faces, points, triangles, or other surface elements. If connectedness is not used, the surface obtained from a scene will combine discrete objects (eg, the femur and the patella in Fig 3); with connectedness, each of the objects can be represented as a separate surface (assuming they are separated in the 3D scene). In Figure 6, the isosurface is connected and is represented as a set of faces of voxels (29). In addition to the scene intensity threshold, the intensity gradient has also been used in defining boundaries (33).

Another method of segmentation is fuzzy, boundary-based automatic segmentation. Concepts related to fuzzy boundaries (eg, connectedness, closure, orientedness) that are well established for hard boundaries are difficult and as yet undeveloped for fuzzy boundaries. However, computational methods have been developed that identify only those voxels that are in the vicinity of the object boundary and that assign each voxel a grade of "boundariness" (34,35). These methods use scene intensity or intensity gradient to determine boundary gradation (Fig 9).


Figure 9. Fuzzy, boundary-based automatic segmentation. Rendition created with both intensity and gradient criteria shows the fuzzy boundaries of bone detected in the CT data from Figure 3.

Figure 10. Live-wire segmentation. Section created on the basis of data from an MR image of the foot shows a live-wire segment representing a portion of the boundary of interest, which in this case outlines the talus.

In hard, boundary-based, human-assisted segmentation, the degree of human assistance ranges from tracing the boundary entirely by hand (manual recognition and delineation) to specifying only a single point inside the object or on its boundary (manual recognition and automatic delineation) (36–41). In routine clinical applications, manual boundary tracing is perhaps the most commonly used method. On the other hand, boundary detection methods requiring simple user assistance based on intensity (36,37) and gradient criteria (38–41) have been developed. However, these methods cannot be guaranteed to always work correctly in large applications.

There are many user-assisted methods besides those just described that require different degrees of human assistance for segmentation of each scene (42–48). In view of the inadequacy of the minimally user-assisted methods mentioned earlier, much effort is currently being directed toward developing methods that take a largely manual approach to recognition and a more automatic approach to delineation. These methods go under various names: active contours or snakes (42–44), active surfaces (45,46), and live-wire (live-lane) (47,48).

In active contour and active surface methods, a boundary is first specified (eg, by creating a rectangle or a rectangular box close to the boundary of interest). The boundary is considered to have certain stiffness properties. In addition, the given scene is considered to exert forces on the boundary whose strength depends on the intensity gradients. For example, a voxel exerts a strong attractive force on the boundary if the rate of change in intensity of the voxel is high. Within this static mechanical system, the initial boundary deforms and eventually assumes a shape for which the combined potential energy is at a minimum. Unfortunately, the steady-state shape is usually impossible to compute. Furthermore, whatever shape is accepted as an alternative may not match the desired boundary, in which case further correction of the boundary is needed. In assessing the effectiveness of these segmentation methods, it is important to evaluate their precision (repeatability) and efficiency (defined in terms of the number of scenes that can be segmented per unit time). Such evaluations have not been performed for the methods described in the literature.

The principles underlying live-wire (live-lane) methods (47,48) are different from those for active boundary methods. In live-wire methods, every pixel edge is considered to represent two directed edges whose orientations are opposite each other. The "inside" of the boundary is considered to be to the left of the directed edge, and its "outside" to the right. Each directed edge is assigned a cost that is inversely related to the "boundariness" of the edge. Several local features are used to determine the cost, including the intensity to the left (inside) and right (outside) as well as the intensity gradient and its direction. In the 2D live-wire method, the user initially selects a point (pixel vertex) vo on the boundary of interest. The computer then shows a "live-wire" segment from vo to the current mouse cursor position v. This segment is an oriented path consisting of a connected sequence of directed pixel edges that represents the shortest possible (minimum-cost) path from vo to v. As the user changes v through mouse movement, the optimal path is computed and displayed in real time. If v is on or close to the boundary, the live wire snaps onto the boundary (Fig 10); v is then deposited and becomes the new starting point, and the process continues. Typically, two to five points are sufficient to segment a boundary (Fig 10).


Figure 11. Hard, region-based, automatic segmentation with use of thresholding. Once the desired scene is selected (a), an intensity interval is specified on a histogram (b). The segmented object is then depicted as a binary scene (c).

Figure 12. Clustering. (a) Sections from an MR imaging scene with T2 (top) and PD (bottom) values assigned to voxels. (b) Scatter plot of the sections in a. A cluster outline for cerebrospinal fluid is indicated. (c) Segmented binary section demonstrates cerebrospinal fluid.

This method and its derivatives have been shown to be two to three times faster and statistically significantly more repeatable than manual tracing (47). Its 3D version (48) is about 3–15 times faster than manual tracing. Note that, in this method, recognition is manual but delineation is automatic.
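At its core, the live-wire computation is a minimum-cost path search over the pixel grid. The sketch below is a simplification of our own: it places costs on pixels rather than on directed pixel edges, but it conveys how the optimal path from vo to the cursor position is found.

```python
import heapq

def live_wire_path(cost, start, goal):
    """Minimum-cost path between two grid points. cost[y][x] should be
    inversely related to local boundariness, so boundaries are cheap."""
    h, w = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        y, x = u
        for n in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= n[0] < h and 0 <= n[1] < w:
                nd = d + cost[n[0]][n[1]]
                if nd < dist.get(n, float("inf")):
                    dist[n] = nd
                    prev[n] = u
                    heapq.heappush(heap, (nd, n))
    # Walk back from goal to start to recover the optimal path.
    path, u = [goal], goal
    while u != start:
        u = prev[u]
        path.append(u)
    return path[::-1]
```

In an interactive setting, the distances from vo to all pixels can be computed once per deposited point, so the optimal path to any cursor position is available in real time.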
To our knowledge, no fuzzy, boundary-based, human-assisted methods have been described in the literature.

The most commonly used hard, region-based, automatic method of segmentation is thresholding (Fig 11). A voxel is considered to belong to the object region if its intensity is at an upper or lower threshold or between the two thresholds. If the object is the brightest in the scene (eg, bone in CT scans), then only the lower threshold needs to be specified. The threshold interval is specified with a scene intensity histogram in Figure 11b, and the segmented object is shown as a binary scene in Figure 11c.
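In code, thresholding is a one-line test per voxel; this sketch (ours) omits the upper threshold for the brightest-object case mentioned above.

```python
import numpy as np

def threshold_segment(scene, lower, upper=None):
    """Hard, region-based thresholding: a voxel belongs to the object
    if its intensity lies within [lower, upper]."""
    if upper is None:
        # Object is the brightest in the scene (eg, bone in CT):
        # only the lower threshold is needed.
        return scene >= lower
    return (scene >= lower) & (scene <= upper)
```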

Another commonly used method is clustering (Fig 12). If, for example, multiple values associated with each voxel are determined (eg, T2 and PD values), then a 2D histogram (also known as a scatter plot) represents a plot of the number of voxels in the given 3D scene for each possible value pair. The 2D histogram of all possible value pairs is usually referred to as a feature space. The idea in clustering is that feature values corresponding to the objects of interest cluster together in the feature space. Therefore, to segment an object, one need only identify and delineate this cluster. In other words, the problem of segmenting the scene becomes the problem of segmenting the 2D scene representing the 2D histogram. In addition to T2 and PD values, it is possible to use computed values such as the rate of change in T2 and PD for
every voxel. In this case, the feature space would be four-dimensional. There are several well-developed techniques in the area of pattern recognition (49) for automatically identifying clusters, and these techniques have been extensively applied to medical images (50–56). One of the popular cluster identification methods is the k-nearest neighbor (kNN) technique (49). Assume, for example, that the problem is segmenting the white matter (WM) region in a 3D MR imaging scene in which the T2 and PD values have been determined for each voxel. The first step would be to identify two sets, XWM and XNWM, of points in the 2D feature space that correspond to white matter and nonwhite matter regions. These sets of points will be used to determine whether a voxel in the given scene belongs to white matter. The sets are determined with use of a training set. Suppose that one or more scenes were previously segmented manually. Each voxel in the white matter and nonwhite matter regions in each scene contributes a point to set XWM or set XNWM. The next step would be to assign a value to k (eg, k = 7), which is a fixed parameter. The location P in the feature space is determined for each voxel v in the scene to be segmented. In this case, the seven points from sets XWM and XNWM that are closest to P are determined. If a majority (≥4) of these points are from XWM, then v is considered to be in white matter; otherwise, v does not belong to white matter. Note that the step of obtaining XWM and XNWM need not be repeated for every scene to be segmented.
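A direct, brute-force rendering of this kNN procedure might look as follows; the array layout and names are our illustrative choices.

```python
import numpy as np

def knn_segment(t2, pd, X_wm, X_nwm, k=7):
    """Label each voxel as white matter if a majority of its k nearest
    training points in the (T2, PD) feature space come from X_wm."""
    feats = np.stack([t2.ravel(), pd.ravel()], axis=1).astype(float)
    train = np.vstack([X_wm, X_nwm]).astype(float)
    labels = np.array([1] * len(X_wm) + [0] * len(X_nwm))
    out = np.empty(len(feats), dtype=bool)
    for i, p in enumerate(feats):
        d2 = np.sum((train - p) ** 2, axis=1)  # squared distances to P
        nearest = labels[np.argsort(d2)[:k]]   # labels of k nearest points
        out[i] = nearest.sum() * 2 > k         # majority vote
    return out.reshape(t2.shape)
```

Replacing the majority vote with m/k, where m counts the nearest points from XWM, gives the fuzzy generalization described later in this section.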
Note also that thresholding is essentially clustering in a one-dimensional feature space. All clustering methods have parameters whose values must be determined somehow. If these parameters are fixed in an application, the effectiveness of the method in routine processing cannot be guaranteed, and some user assistance usually becomes necessary eventually. Examples of other, nonclustering methods have been described by Kamber (57) and Wells (58).

Figure 13. Diagram illustrates fuzzy thresholding.

The simplest of the fuzzy, region-based, automatic methods of segmentation is fuzzy thresholding, which represents a generalization of the thresholding concept (Fig 13) (59). Fuzzy thresholding requires the specification of four intensity thresholds (t1–t4). If the intensity of a voxel v is less than t1 or greater than t4, the objectness of v is 0. If the intensity is between t2 and t3, its objectness is 1 (100%). For other intensity values, objectness lies between 0% and 100%. Other functional forms have also been used. Figure 14 shows a rendition of bone and soft tissue identified with fuzzy thresholding on the basis of the CT data from Figure 3.
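Assuming linear ramps between t1 and t2 and between t3 and t4 (one possible functional form; the article does not prescribe the ramp shape), fuzzy thresholding can be sketched as:

```python
import numpy as np

def fuzzy_threshold(scene, t1, t2, t3, t4):
    """Trapezoidal objectness: 0 below t1 and above t4, 1 between t2
    and t3, with linear ramps on [t1, t2] and [t3, t4]."""
    s = scene.astype(float)
    obj = np.zeros_like(s)
    rising = (s >= t1) & (s < t2)
    obj[rising] = (s[rising] - t1) / (t2 - t1)
    obj[(s >= t2) & (s <= t3)] = 1.0
    falling = (s > t3) & (s <= t4)
    obj[falling] = (t4 - s[falling]) / (t4 - t3)
    return obj
```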
manual boundary tracing.

794 Imaging & Therapeutic Technology Volume 19 Number 3


Figure 14. Fuzzy thresholding. Rendition of CT data from Figure 3 with fuzzy thresholding depicts bone and soft tissue.

In contrast to this completely manual recognition and delineation scheme, there are methods in which recognition is manual but delineation is automatic. Region growing is a popular technique in this group (62–64). At the outset, the user specifies a seed voxel within the object region with use of (for example) a mouse pointer on a section display. A set of criteria for inclusion of a voxel in the object is also specified; for example, (a) the scene intensity of the voxel should be within an interval t1 to t2, (b) the mean intensity of voxels included in the growing region at any time during the growing process should be within an interval t3 to t4, and (c) the intensity variance of voxels included in the growing region at any time during the growing process should be within an interval t5 to t6. Starting with the seed voxel, the algorithm examines its 3D neighbors (usually the closest six, 18, or 26 neighbors) for inclusion. Those that are included are marked so that they will not be reconsidered for inclusion later. The neighbors of the voxels selected for inclusion are in turn examined, and the process continues until no more voxels can be selected for inclusion.

If only criterion a in the preceding paragraph is used and t1 and t2 are fixed during the growing process, this method outputs essentially a connected component of voxels satisfying a hard threshold interval. Note also that for any combination of criteria a and b, or if t1 and t2 are not fixed, it is not possible to guarantee that the set of voxels (object) O(v1) obtained with a seed voxel v1 is the same as the object O(v2), where v2 ≠ v1 is a voxel in O(v1). This lack of robustness constitutes a problem with most region-based methods.
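A sketch of region growing under criterion (a) alone, using the closest six neighbors, follows (our own minimal version); as noted above, this outputs a connected component of voxels within the fixed threshold interval.

```python
from collections import deque
import numpy as np

def region_grow(scene, seed, t1, t2):
    """Grow a 6-connected region from a seed voxel, including a voxel
    if its scene intensity lies within the interval [t1, t2]."""
    grown = np.zeros(scene.shape, dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < scene.shape[i] for i in range(3))
                    and not grown[n] and t1 <= scene[n] <= t2):
                grown[n] = True  # mark so it is not reconsidered later
                queue.append(n)
    return grown
```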
In the sense that the fuzzy region-based methods of segmentation described earlier eventually entail human assistance, they fall into the category of fuzzy, region-based, human-assisted methods. A recent technique that was designed to make use of human assistance is the fuzzy connected technique (65). In this method, recognition is manual and involves pointing at an object in a section display. Delineation is automatic and takes both graded composition and hanging-togetherness into account. It has been effectively applied in several applications, including quantification of multiple sclerosis lesions (66–69), MR angiography (70), and soft-tissue display for planning of craniomaxillofacial surgery (71).

In the fuzzy connected technique, nearby voxels in the voxel array are thought of as having a fuzzy adjacency relation that indicates their spatial nearness. This relation, which varies in strength from 0 to 1, is independent of any scene intensity values and is a nonincreasing function of the intervening distance. Fuzzy adjacency roughly captures the blurring characteristic of imaging devices.

Similarly, nearby voxels in a scene are thought of as having a fuzzy affinity relation that indicates how they hang together locally in the same object. The strength of this relation (varying from 0 to 1) between any two voxels is a function of their fuzzy adjacency as well as their scene intensity values. For example, this function may be the product of the strength of their adjacency and (1 − |I(v1) − I(v2)|), where I(v1) and I(v2) are the intensity values of voxels v1 and v2 scaled in some appropriate way to the range between 0 and 1. Affinity expresses the degree to which voxels hang together to form a fuzzy object. Of course, the intent is that this is a local property; voxels that are far apart will have negligible affinity as defined in this function.

The real hanging-togetherness of voxels in a global sense is captured through a fuzzy relation called fuzzy connectedness. A strength of connectedness is assigned to each pair of voxels (v1, v2) as follows: There are numerous possible paths between two voxels v1 and v2, any one of which consists of a sequence of voxels starting from v1 and ending on v2, where successive voxels are nearby and have a certain degree of adjacency. The strength of a path is simply the smallest of the affinities associated with pairs of successive voxels along the path. The strength of connectedness between v1 and v2 is simply the largest of the strengths associated with all possible paths between v1 and v2. A fuzzy object is a pool of voxels together with a membership (between 0 and 1) assigned to each voxel that represents its objectness. The pool is such that the strength of connectedness between any two voxels in the pool is greater than a small threshold value (typically about 0.1), and the strength between any two voxels (only one of which is in the pool) is less than the threshold value.


Figure 15. Fuzzy connected segmentation. (a, b) Sections from an MR imaging scene with T2 (a) and PD (b) values assigned to voxels. (c–e) Sections created with 3D fuzzy connected segmentation demonstrate the union of white matter and gray matter objects (c), the cerebrospinal fluid object (d), and the union of multiple sclerosis lesions (e) detected from the scene in a and b.

Obviously, computing fuzzy objects even for this simple affinity function is computationally impractical if we proceed straight from the definitions. However, the theory allows us to simplify the complexity considerably for a wide variety of affinity relations, so that fuzzy object computation can be done in practical time (about 15–20 minutes for a 256 × 256 × 64 3D scene [16 bits per voxel] on a SPARCstation 20 workstation; Sun Microsystems, Mountain View, Calif). A wide spectrum of application-specific knowledge of image characteristics can be incorporated into the affinity relation.
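One such simplification treats the computation as a widest-path problem, which a Dijkstra-like propagation solves without enumerating paths. The sketch below is our own illustration of the idea (not the algorithm of reference 65), with an affinity of the product form given above; it computes the strength of connectedness of every voxel to a seed.

```python
import heapq
import numpy as np

def affinity(scene, v1, v2, scale=1.0):
    # Example affinity for 6-adjacent voxels: falls off as the
    # (scaled) intensity difference grows, as in the text.
    diff = abs(float(scene[v1]) - float(scene[v2])) / scale
    return max(0.0, 1.0 - diff)

def fuzzy_connectedness(scene, seed):
    """Strength of connectedness of each voxel to the seed: the best
    path strength, a path's strength being its weakest affinity."""
    conn = np.zeros(scene.shape, dtype=float)
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        neg_s, v = heapq.heappop(heap)
        s = -neg_s
        if s < conn[v]:
            continue  # stale entry
        z, y, x = v
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < scene.shape[i] for i in range(3)):
                # Strength of the best path to n that passes through v.
                cand = min(s, affinity(scene, v, n))
                if cand > conn[n]:
                    conn[n] = cand
                    heapq.heappush(heap, (-cand, n))
    return conn
```

Thresholding the result at a small value (eg, 0.1) yields the pool of voxels of the fuzzy object, with the connectedness values themselves supplying the memberships.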
Figure 15 shows an example of fuzzy connected segmentation (in 3D) of white matter, gray matter, cerebrospinal fluid, and multiple sclerosis lesions in a T2, PD scene pair. Figure 16a shows an MIP rendition of an MR angiography data set, whereas Figure 16b demonstrates a rendition of a 3D fuzzy connected vessel tree detected from a point specified on the vessel.

There are a number of challenges associated with segmentation, including (a) developing general segmentation methods that can be easily and quickly adapted to a given application, (b) keeping the human assistance required on a per-scene basis to a minimum, (c) developing fuzzy methods that can realistically handle uncertainties in data, and (d) assessing the efficacy of segmentation methods.

VISUALIZATION
Visualization operations create renditions of given scenes or object systems. Their purpose is to create, from a given set of scenes or objects, renditions that facilitate the visual perception of object information. Two approaches are available: scene-based visualization and object-based visualization.


Figure 16. Fuzzy connected segmentation. (a) Three-dimensional maximum-intensity-projection (MIP) rendition of an MR angiography scene. (b) MIP rendition of the 3D fuzzy connected vessels detected from the scene in a. Fuzzy connectedness has been used to remove the clutter that obscures the vasculature.

Figure 17. Montage display of a 3D CT scene of the head.

Scene-based Visualization
In scene-based visualization, renditions are created directly from given scenes. Within this approach, two further subclasses may be identified: section mode and volume mode.

Section Mode
Methods differ as to what constitutes a section and how this information is displayed. Natural sections may be axial, coronal, or sagittal; oblique or curved sections are also possible. Information is displayed as a montage, with use of roam-through (fly-through), and in gray scale or pseudocolor. Figure 17 shows a montage display of the natural sections of a CT scene.


Figures 18, 19. (18) Three-dimensional display-guided extraction of an oblique section from CT data obtained in a patient with a craniofacial disorder. A plane is selected interactively by means of the 3D display to indicate the orientation of the section plane (left). The section corresponding to the oblique plane is shown on the right. (19) Pseudocolor display. (a) Head MR imaging sections obtained at different times are displayed in green and red, respectively. Where there is a match, the composite image appears yellow. Green and red areas indicate regions of mismatch. (b) On the same composite image displayed after 3D scene-based registration, green and red areas indicate either a registration error or a change in an object (eg, a lesion) over the time interval between the two acquisitions.

Figure 18 demonstrates a 3D display-guided extraction of an oblique section from a CT scene of a pediatric patient's head. This re-sectioning operation illustrates how visualization is needed to perform visualization itself. Figure 19 illustrates pseudocolor display with two sections from a brain MR imaging study in a patient with multiple sclerosis. The two sections, which represent approximately the same location in the patient's head, were taken from 3D scenes that were obtained at different times and subsequently registered. The sections are assigned red and green hues. The display shows yellow (produced by a combination of red and green hues) where the sections match perfectly or where there has been no change (for example, in the lesions). At other places, either red or green is demonstrated.

Volume Mode
In volume mode visualization, information may be displayed as surfaces, interfaces, or intensity distributions with use of surface rendering, volume rendering, or MIP. A projection technique is always needed to move from the higher-dimensional scene to the 2D screen of the monitor. For scenes of four or more dimensions, 3D "cross sections" must first be determined, after which a projection technique can be applied to move from 3D to 2D.


Figure 20. Schematic illustrates projection techniques for volume mode visualization. Projections are created for rendition either by ray casting from the viewing plane to the scene or by projecting voxels from the scene to the viewing plane.

Two approaches may be used: ray casting (34), which consists of tracing a line perpendicular to the viewing plane from every pixel in the viewing plane into the scene domain, and voxel projection (72), which consists of directly projecting voxels encountered along the projection line from the scene onto the viewing plane (Fig 20). Voxel projection is generally considerably faster than ray casting; however, either of these projection methods may be used with any of the three rendering techniques (MIP, surface rendering, volume rendering).

In MIP, the intensity assigned to a pixel in the rendition is simply the maximum scene intensity encountered along the projection line (Fig 16a) (73,74). MIP is the simplest of all 3D rendering techniques. It is most effective when the objects of interest are the brightest in the scene and have a simple 3D morphology and a minimal gradation of intensity values. Contrast material-enhanced CT angiography and MR angiography are ideal applications for this method; consequently, MIP is commonly used in these applications (75,76). Its main advantage is that it requires no segmentation. However, the ideal conditions mentioned earlier frequently go unfulfilled, due (for example) to the presence of other bright objects such as clutter from surface coils in MR angiography, bone in CT angiography, or other obscuring vessels that may not be of interest. Consequently, some segmentation eventually becomes necessary.
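For an axis-aligned viewing direction, MIP reduces to a single reduction over the scene array (a sketch, ours):

```python
import numpy as np

def mip(scene, axis=0):
    """Maximum intensity projection: each pixel of the rendition
    receives the maximum scene intensity encountered along its
    projection line (here, a line parallel to the chosen axis)."""
    return np.asarray(scene).max(axis=axis)
```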
In surface rendering (77), object surfaces are portrayed in the rendition. A threshold interval must be specified to indicate the object of interest in the given scene. Clearly, speed is of the utmost importance in surface rendering, because the idea is that object renditions are created interactively directly from the scene as the threshold is changed. Instead of thresholding, any automatic, hard, boundary- or region-based method can be used. In such cases, however, the parameters of the method will have to be specified interactively, and the speed of segmentation and rendition must be sufficient to make this mode of visualization useful. Although rendering based on thresholding can presently be accomplished in about 0.03–0.25 seconds on a Pentium 300 with use of appropriate algorithms in software (61), more sophisticated segmentation methods (eg, kNN) may not offer interactive speed.

The actual rendering process consists of three basic steps: projection, hidden part removal, and shading. These steps are needed to impart a sense of three-dimensionality to the rendered image that is created. Additional cues for three-dimensionality may be provided by techniques such as stereoscopic display, motion parallax by rotation of the objects, shadowing, and texture mapping.

If ray casting is used as the method of projection, hidden part removal is performed by stopping at the first voxel encountered along each ray that satisfies the threshold criterion (78). The value (shading) assigned to the pixel in the viewing plane that corresponds to the ray is determined as described later. If voxel projection is used, hidden parts can be removed by projecting voxels from the farthest to the closest (with respect to the viewing plane) and always overwriting the shading value, which can be achieved in a number of computationally efficient ways (72,79–81).
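For rays parallel to an axis of the scene array, this stopping rule can be sketched in a few lines (our illustration; general viewing directions require stepping each ray through the volume):

```python
import numpy as np

def first_surface_depth(scene, threshold):
    """Hidden part removal by ray casting along axis 0: for each pixel,
    find the depth of the first voxel meeting the threshold criterion."""
    hit = scene >= threshold
    # argmax returns the index of the first True along each ray;
    # rays that hit nothing are flagged with -1.
    depth = hit.argmax(axis=0)
    depth[~hit.any(axis=0)] = -1
    return depth
```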


The shading value assigned to a pixel p in the viewing plane depends on the voxel v that is eventually projected onto p. The faithfulness with which this value reflects the shape of the surface around v largely depends on the surface normal vector estimated at v. Two classes of methods are available for this purpose: object-based methods and scene-based methods. In object-based methods (82,83), the vector is determined purely from the geometry of the shape of the surface in the vicinity of v. In scene-based methods (78), the vector is considered to be the gradient of the given scene at v; that is, the direction of the vector is the same as the direction in which scene intensity changes most rapidly at v. Given the normal vector N at v, the shading assigned to p is usually determined as [fd(v, N, L) + fs(v, N, L, V)] fD(v), where fd is the diffuse component of reflection, fs is the specular component, fD is a component that depends on the distance of v from the viewing plane, and L and V are unit vectors indicating the direction of the incident light and of the viewing rays. The diffuse component is independent of the viewing direction but depends solely on L (as a cosine of the angle between L and N). It captures the scattering property of the surface, whereas the specular component captures surface shininess. The specular component is highest in the direction of ideal reflection R, whose angle with N is equal to the angle between L and N. This reflection decreases as a cosine function on either side of R. By weighting the three components in different ways, different shading effects can be achieved.
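As an illustration, this shading model can be written out as follows. This is only a sketch: the weights kd and ks are assumptions, and a Phong-style cosine-power falloff stands in for the cosine falloff described above; all names are hypothetical.

    import numpy as np

    def shade(depth, N, L, V, kd=0.7, ks=0.2, n=10.0, d_max=256.0):
        # [fd(v, N, L) + fs(v, N, L, V)] fD(v) at one projected voxel;
        # N, L, V are unit vectors (surface normal, light, view).
        fd = kd * max(0.0, float(np.dot(N, L)))        # diffuse: cos(N, L)
        R = 2.0 * float(np.dot(N, L)) * N - L          # ideal reflection R
        fs = ks * max(0.0, float(np.dot(R, V))) ** n   # specular, peaks at R
        fD = max(0.0, 1.0 - depth / d_max)             # distance component
        return (fd + fs) * fD

In a scene-based method, N would simply be the normalized scene gradient at v.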
In scene-based surface rendering, a hard object is implicitly created and rendered on the fly from the given scene. In scene-based volume rendering, a fuzzy object is implicitly created and rendered on the fly from the given scene. Clearly, surface rendering becomes a special case of volume rendering. Furthermore, volume rendering in this mode is generally much slower than surface rendering, typically requiring 3-20 seconds even on specialized hardware rendering engines.

The basic idea in volume rendering is to assign an opacity from 0% to 100% to every voxel in the scene. The opacity value is determined on the basis of the objectness value at the voxel and of how prominently one wishes to portray this particular grade of objectness in the rendition. This opacity assignment is specified interactively by way of an opacity function (Fig 13), wherein the vertical axis indicates percentage of opacity. Every voxel is now considered to transmit, emit, and reflect light. The goal is to determine the amount of light reaching every pixel in the viewing plane. The amount of light transmitted depends on the opacity of the voxel. Light emission depends on objectness and hence on opacity: The greater the objectness, the greater the emission. Similarly, reflection depends on the strength of the surface that is present: The greater the strength, the greater the reflection.

Like surface rendering, volume rendering consists of three basic steps: projection, hidden part removal, and shading or compositing. The principles underlying projection are identical to those described for surface rendering. Hidden part removal is much more complicated for volume rendering than for surface rendering. In ray casting, a common method is to discard all voxels along the ray from the viewing plane beyond a point at which the cumulative opacity is above a high threshold (eg, 90%) (84). In voxel projection, a voxel can also be discarded if the voxels surrounding it in the direction of the viewing ray have high opacity (35).

The shading operation, which is more appropriately termed compositing, is also more complicated for volume rendering than for surface rendering. Compositing must take into account all three components: transmission, reflection, and emission. One may start from the voxel farthest from the viewing plane along each ray and work toward the front, calculating the output light for each voxel. The net light output by the voxel closest to the viewing plane is assigned to the pixel associated with the ray. Instead of using this back-to-front strategy, one may also make calculations from front to back, which has actually been shown to be faster (35).
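The front-to-back strategy can be sketched for a single ray as follows (illustrative names only; "emission" here stands for the light emitted and reflected by a voxel):

    def composite_front_to_back(opacities, emissions):
        # Voxels are ordered from the closest to the farthest along
        # the ray. Light is accumulated until the cumulative opacity
        # is high, after which deeper voxels are discarded (early
        # termination), which is why this order tends to be faster
        # than compositing back to front.
        light, alpha = 0.0, 0.0
        for a, e in zip(opacities, emissions):
            light += (1.0 - alpha) * a * e   # contribution of this voxel
            alpha += (1.0 - alpha) * a       # cumulative opacity so far
            if alpha > 0.9:                  # eg, the 90% level cited above
                break
        return light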

Figure 22. Object-based visualization of the skull in a child with agnathia. (a) Surface-rendered image. (b) Subsequent volume-rendered image was preceded by the acquisition of a fuzzy object representation with use of fuzzy thresholding (cf Fig 13).

In volume rendering (as in surface rendering), voxel projection is substantially faster than ray casting. Figure 21 shows the CT knee data set illustrated in Figure 3 as rendered with this method. Three types of tissue (bone, fat, and soft tissue) have been identified.

Figure 21. Scene-based volume rendering with voxel projection. Rendition of knee CT data from Figure 3 shows bone, fat, and soft tissue.

Object-based Visualization
In object-based visualization, objects are first explicitly defined and then rendered. In difficult segmentation situations, or when segmentation is time consuming or involves too many parameters, it is impractical to perform direct scene-based rendering. The intermediate step of completing object definition then becomes necessary.

Surface Rendering. Surface rendering methods take hard object descriptions as input and create renditions. The methods of projection, hidden-part removal, and shading are similar to those described for scene-based surface rendering, except that a variety of surface description methods have been investigated using voxels (72,79,81), points, voxel faces (29,80,85,86), triangles (30,37,87), and other surface patches. Therefore, projection methods that are appropriate for specific surface elements have been developed. Figure 22a shows a rendition, created with use of voxel faces on the basis of CT data, of the craniofacial skeleton in a patient with agnathia. Figure 8 shows renditions of the bones of the foot created by way of the same method on the basis of MR imaging data.

Volume Rendering. Volume rendering methods take as input fuzzy object descriptions, which are in the form of a set of voxels wherein values for objectness and a number of other parameters (eg, gradient magnitude) are associated with each voxel (35). Because the object description is more compact than the original scene and additional information for increasing computation speed can be stored as part of the object description, volume rendering based on fuzzy object description can be performed at interactive speeds even on personal computers such as the Pentium 300 entirely in software. In fact, the rendering speed (2-15 sec) is now comparable to that of scene-based volume rendering with specialized hardware engines. Figure 22b shows a fuzzy object rendition of the data set in Figure 22a. Figure 23a shows a rendition of craniofacial bone and soft tissue, both of which were defined separately with use of the fuzzy connected methods described earlier. Note that if one uses a direct scene-based volume rendering method with the opacity function illustrated in Figure 13, the skin becomes indistinguishable from other soft tissues and always obscures the rendition of muscles (Fig 23b).

Misconceptions in Visualization
Several inaccurate statements concerning visualization frequently appear in the literature. The following statements are seen most often:

Figure 23. Visualization with volume rendering. (a) Object-based volume-rendered image demonstrates bone and soft-tissue structures (muscles) that had been detected earlier as separate fuzzy connected objects in a 3D craniofacial CT scene. The skin is essentially peeled away because of its weak connectedness to muscles. (b) Scene-based volume-rendered version of the scene in a was acquired with use of the opacity function (cf Fig 13) separately for bone and soft tissue. The skin has become indistinguishable from muscles because they have similar CT numbers and hence obscures the rendition of the muscles.

"Surface rendering is the same as thresholding." Clearly, thresholding is only one (indeed, the simplest) of the many available hard region- and boundary-based segmentation methods, the output of any of which can be surface rendered.

"Volume rendering does not require segmentation." Although volume rendering is a general term and is used in different ways, the statement is false. The only useful volume rendering or visualization technique that requires no segmentation is MIP. The opacity assignment schemes illustrated in Figure 13 and described in the section entitled "Scene-based Visualization" are clearly fuzzy segmentation strategies and involve the same problems that are encountered with any segmentation method. It is untenable to hold that opacity functions such as the one shown in Figure 13 do not represent segmentation while maintaining that the manifestation that results when t1 = t2 and t3 = t4 (corresponding to thresholding) does represent segmentation.

"The term volume rendering may be used to refer to any scene-based rendering technique as well as object-based rendering techniques." The meaning of the term as used in the literature varies widely. In one sense, it can also apply to the section mode of visualization. It is better to use volume rendering to refer to fuzzy object rendering, whether performed with scene-based or object-based methods, but not to hard object rendering methods.
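The relationship between opacity functions and thresholding can be made concrete with a small sketch. The trapezoidal shape below is an assumption made for illustration (Figure 13 is not reproduced here); the parameter names t1-t4 follow the text:

    def opacity(x, t1, t2, t3, t4):
        # Trapezoidal opacity assignment: 0 outside [t1, t4], 1 within
        # [t2, t3], with linear ramps in between. When t1 == t2 and
        # t3 == t4, the ramps vanish and the function degenerates into
        # hard thresholding, ie, a binary (0 or 1) segmentation.
        if x < t1 or x > t4:
            return 0.0
        if x < t2:
            return (x - t1) / (t2 - t1)      # rising ramp
        if x <= t3:
            return 1.0
        return (t4 - x) / (t4 - t3)          # falling ramp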

There are many challenges associated with visualization. First, preprocessing operations (and, therefore, visualization operations) can be applied in many different sequences to achieve the desired result. For example, the filtering-interpolation-segmentation-rendering sequence may produce renditions that are significantly different from those produced by interpolation-segmentation-filtering-rendering. With the large number of different methods possible for each operation and the various parameters associated with each operation, there are myriad ways of achieving the desired results. Figure 24 shows five images derived from CT data that were created by performing different operations. Systematic study is needed to determine which combination of operations is optimal for a given application. Normally, the fixed combination provided by the 3D imaging system is assumed to be the best for that application. Second, objective comparison of visualization methods becomes an enormous task in view of the vast number of ways one may reach the desired goal. A third challenge is achieving realistic tissue display that includes color, texture, and surface properties.

MANIPULATION
Manipulation operations are used primarily to create a second object system from a given object system by changing objects or their relations. The main goal of these operations is to simulate surgical procedures on the basis of patient-specific scene data and to develop aids for interventional and therapy procedures. Compared with preprocessing and visualization, manipulation is in its infancy; consequently, it will not be discussed in as much detail. Two classes of operations are being developed: rigid manipulation and deformable manipulation.

Figure 24. Preprocessing and visualization operations. Renditions from CT data were created with use of five different preprocessing and visualization operations.

Figure 25. Rigid manipulation. Rendition created from CT data obtained in a child demonstrates rigid manipulation for use in surgical planning. This virtual surgery mimics an osteotomy procedure used in craniomaxillofacial surgery to advance the frontal bone.

Rigid Manipulation
Operations to cut, separate, add, subtract, move, and mirror objects and their components have been developed with use of both hard and fuzzy object definitions (81,88-91). In rigid manipulation, the user interacts directly with an object-based surface or volume rendition of the object system (Fig 25). Clearly, an operation must be executable at interactive speeds to be practical. This is possible with both hard and fuzzy object definitions (81,91) on a personal computer such as a Pentium 300.
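As a concrete illustration of one such operation, the sketch below mirrors a hard (binary) object across the mid-plane of the left-right axis, an operation of the kind used in craniofacial planning. The names and the fixed choice of mirror plane are assumptions made for illustration:

    import numpy as np

    def mirror_object(mask, lr_axis=2):
        # Mirror a binary object across the volume's mid-plane along
        # the left-right axis. In an interactive system, the mirror
        # plane would be positioned by the user rather than fixed.
        return np.flip(mask, axis=lr_axis)

    # Hypothetical use: replace one half of a defective structure
    # with the mirror image of the intact half.
    # left = np.zeros_like(mask, dtype=bool)
    # left[..., : mask.shape[2] // 2] = True
    # repaired = np.where(left, mirror_object(mask), mask)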
Deformable Manipulation
Operations to stretch, compress, bend, and so on are being developed. Mechanical modeling of soft-tissue structures including muscles, tendons, ligaments, and capsules is complicated because the forces that they generate and their behavior under external forces are difficult to determine, especially in a patient-specific fashion. Therefore, in past attempts at modeling, properties of soft tissues have been taken into consideration in a generic but not patient-specific fashion. Generic models based on local deformations are used as an aid in facial plastic surgery (92) quite independent of the underlying bone and muscle, treating only the skin surface. The use of multiple layers (skin, fatty tissue, muscle, and bone that does not move) has been explored in different combinations to model facial expression animation (93-95). Although attempts have been made to model soft tissue in this fashion and to duplicate its mechanical properties (96), no attempt seems to have been made to integrate hard-tissue (bone) changes with the soft-tissue modifications in a model. Reasons include the lack of adequate visualization tools and the lack of tools to simulate osteotomy procedures or integrate soft-tissue models with hard-tissue changes.

The area of deformable manipulation is open for further research. Because most of the tissues in the body are deformable and movable and object information in images is inherently fuzzy, basic fuzzy mechanics theories and algorithms need to be developed to carry out patient-specific object manipulation and analysis.

ANALYSIS
The main purpose of analysis operations is to generate a quantitative description of the morphology, architecture, and function of the object system from a given set of scenes or an object system.

The goal of many 3D imaging applications is analysis of an object system. Although visualization is used as an aid, it may not be an end in itself. As such, many of the current application-driven works are related to analysis. Like other operations, analysis operations may be classified into two groups: scene-based analysis and object-based analysis.

Scene-based Analysis
In scene-based analysis, quantitative descriptions are based directly on scene intensities and include region-of-interest statistics as well as measurements of density, activity, perfusion, and flow. Object structure information derived from another modality is often used to guide the selection of regions for these measurements.

Object-based Analysis
In object-based analysis, quantitative description is obtained from an object on the basis of morphology, architecture, change over time, relationships with other objects in the system, and changes in these relationships. Examples of measurements obtained in this manner include distance, length, curvature, area, volume, kinematics, kinetics, and mechanics.

Object information in images is fuzzy; therefore, the challenge is to develop fuzzy morphometry and mechanics theories and algorithms to enable realistic analysis of the object information.
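By way of illustration, the two groups can be contrasted in a few lines. This is a sketch; the array names and the voxel size parameter are hypothetical:

    import numpy as np

    def roi_statistics(scene, roi):
        # Scene-based analysis: intensity statistics within a binary
        # region of interest, eg, one guided by object structure
        # information derived from another (registered) modality.
        values = scene[roi]
        return {"mean": float(values.mean()), "sd": float(values.std())}

    def object_volume(mask, voxel_volume_mm3):
        # Object-based analysis: volume of a hard object as the voxel
        # count scaled by the voxel size; for a fuzzy object, the sum
        # of membership grades would be used instead of the count.
        return float(mask.sum()) * voxel_volume_mm3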
DIFFICULTIES IN 3D IMAGING
There are two main types of difficulties associated with 3D imaging: those related to object definition and those related to validation. The former have already been discussed in detail (see "Preprocessing"). Difficulties related to validation are discussed in this section.

Validation may be either qualitative or quantitative. The purpose of qualitative validation is to compare visualization methods for a given task. Observer studies and receiver operating characteristic analysis may be used (97). A major challenge is how to select a small number of methods for formal receiver operating characteristic analysis from among the numerous combinations of operations and methods and their parameter settings.

The purpose of quantitative validation is to assess the precision, accuracy, and efficiency of the measurement process.

Precision refers to the reliability of the method and is usually easy to establish by repeating the measurement process and assessing variation with use of the coefficient of variation, correlation coefficient, κ statistic, and analysis of variance. All steps that involve subjectivity must be repeated (including, for example, how the patient is positioned in the scanner).
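For example, the coefficient of variation over repeated trials can serve as a simple precision index (an illustrative sketch; the sample values are invented):

    import numpy as np

    def coefficient_of_variation(repeats):
        # Precision as variability relative to the mean: the sample
        # standard deviation of repeated measurements divided by
        # their mean; lower values indicate a more reliable method.
        x = np.asarray(repeats, dtype=float)
        return x.std(ddof=1) / x.mean()

    # Five hypothetical repeated volume measurements (mL) of one object:
    print(coefficient_of_variation([102.1, 99.8, 101.5, 100.7, 98.9]))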
Accuracy refers to how well the measurement agrees with truth. Establishing recognition accuracy requires histologic assessment of object presence or absence for small objects. For large (anatomic) objects, an expert reader can provide truth. Receiver operating characteristic analysis can then be applied.

Establishing delineation accuracy requires a point-by-point assessment of object grade. Truth is very difficult to establish. Typically, the following surrogates of truth are used: physical phantoms, mathematic phantoms, manual (expert) delineation, simulation of mathematical objects that are then embedded in actual images, and comparison with a process whose accuracy is known.

Efficiency refers to the practical viability of the method, for example, in terms of the number of studies that can be processed per hour. This variable has two components: computer time and operator time. Computer time is not crucial so long as it remains within the bounds of practicality. However, operator time is crucial in determining whether a method is practically viable regardless of its precision or accuracy. This aspect of validation is usually ignored, but it should be conducted and its statistical variation analyzed and reported.

CONCLUSIONS
Although 3D imaging poses numerous challenges for mathematicians, engineers, physicists, and physicians, it is an exciting technology that promises to make important contributions in a wide range of disciplines in years to come.

REFERENCES
1. Kaufman A. A tutorial on volume visualization. Los Alamitos, Calif: IEEE Computer Society, 1991.
2. Höhne K, Fuchs H, Pizer S. 3D imaging in medicine: algorithms, systems, applications. Berlin, Germany: Springer-Verlag, 1990.
3. Udupa J, Herman G. 3D imaging in medicine. Boca Raton, Fla: CRC, 1991.
4. Udupa JK, Goncalves R. Imaging transforms for visualizing surfaces and volumes. J Digit Imaging 1993; 6:213-236.
5. Gerig G, Kübler O, Kikinis R, Jolesz FA. Nonlinear anisotropic filtering of MRI data. IEEE Trans Med Imaging 1992; 11:221-232.
6. Pratt WK. Digital image processing. New York, NY: Wiley, 1991.
7. Stytz MR, Parrott RW. Using kriging for 3-D medical imaging. Comput Med Imaging Graph 1993; 17:421-442.
8. Raya SP, Udupa JK. Shape-based interpolation of multidimensional objects. IEEE Trans Med Imaging 1990; 9:32-42.
9. Higgins WE, Morice C, Ritman EL. Shape-based interpolation of tree-like structures in three-dimensional images. IEEE Trans Med Imaging 1993; 12:439-450.

10. Herman GT, Zheng J, Bucholtz CA. Shape-based interpolation. IEEE Comput Graph Appl 1992; 12:69-79.
11. Grevera GJ, Udupa JK. Shape-based interpolation of multidimensional grey-level images. IEEE Trans Med Imaging 1996; 15:882-892.
12. Puff DT, Eberly D, Pizer SM. Object-based interpolation via cores. Image Process 1994; 2167:143-150.
13. Goshtasby A, Turner DA, Ackerman LV. Matching tomographic slices for interpolation. IEEE Trans Med Imaging 1992; 11:507-516.
14. Woods RP, Mazziotta JC, Cherry SR. MRI-PET registration with automated algorithm. J Comput Assist Tomogr 1993; 17:536-546.
15. Studholme C, Hill DLG, Hawkes DJ. Automated 3D registration of MR and CT images of the head. Med Image Anal 1996; 1:163-175.
16. Miller MI, Christensen GE, Amit Y, Grenander U. Mathematical textbook of deformable neuroanatomies. Proc Natl Acad Sci U S A 1993; 90:11944-11948.
17. Pelizzari CA, Chen GTY, Spelbring DR, Weichselbaum RR, Chen CT. Accurate three-dimensional registration of CT, PET, and/or MR images of the brain. J Comput Assist Tomogr 1989; 13:20-26.
18. Toennies KD, Udupa JK, Herman GT, Wornom IL, Buchman SR. Registration of three-dimensional objects and surfaces. IEEE Comput Graph Appl 1990; 10:52-62.
19. Arun KS, Huang TS, Blostein SD. Least-squares fitting of two 3-D point sets. IEEE Trans Pattern Anal Machine Intell 1987; 9:698-700.
20. Maintz JBA, van den Elsen PA, Viergever MA. Evaluation of ridge seeking operators for multimodality medical image matching. IEEE Trans Pattern Anal Machine Intell 1996; 18:353-365.
21. Bajcsy R, Kovacic S. Multiresolution elastic matching. Comput Vision Graph Image Process 1989; 46:1-21.
22. Gee JC, Barillot C, Le Briquer L, Haynor DR, Bajcsy R. Matching structural images of the human brain using statistical and geometrical image features. SPIE Proc 1994; 2359:191-204.
23. Evans AC, Dai W, Collins L, Neelin P, Marrett S. Warping of a computerized 3-D atlas to match brain image volumes for quantitative neuroanatomical and functional analysis. SPIE Proc 1991; 1445:236-246.
24. Gong L, Kulikowski C. Comparison of image analysis processes through object-centered hierarchical planning. IEEE Trans Pattern Anal Machine Intell 1995; 17:997-1008.
25. Raya S. Low-level segmentation of 3-D magnetic resonance brain images: a rule-based system. IEEE Trans Med Imaging 1990; 9:327-337.
26. Collins D, Peters T. Model-based segmentation of individual brain structures from MRI data. SPIE Proc 1992; 1808:10-23.
27. Gee J, Reivich M, Bajcsy R. Elastically deforming 3D atlas to match anatomical brain images. J Comput Assist Tomogr 1993; 17:225-236.
28. Christensen GE, Rabbitt RD, Miller MI. 3-D brain mapping using a deformable neuroanatomy. Phys Med Biol 1994; 39:609-618.
29. Udupa J, Srihari S, Herman G. Boundary detection in multidimensions. IEEE Trans Pattern Anal Machine Intell 1982; 4:41-50.
30. Wyvill G, McPheeters C, Wyvill B. Data structures for soft objects. Visual Comput 1986; 2:227-234.
31. Lorensen W, Cline H. Marching cubes: a high resolution 3D surface construction algorithm. Comput Graph 1989; 23:185-194.
32. Doi A, Koide A. An efficient method of triangulating equi-valued surfaces by using tetrahedral cells. IEICE Trans Commun Electron Inf Syst 1991; 74:214-224.
33. Liu H. Two- and three-dimensional boundary detection. Comput Graph Image Process 1977; 6:123-134.
34. Levoy M. Display of surfaces from volume data. IEEE Comput Graph Appl 1988; 8:29-37.
35. Udupa J, Odhner D. Shell rendering. IEEE Comput Graph Appl 1993; 13:58-67.
36. Udupa J, Hung H, Chuang K. Surface and volume rendering in 3D imaging: a comparison. J Digit Imaging 1990; 4:159-168.
37. Kalvin A. Segmentation and surface-based modeling of objects in three-dimensional biomedical images. Thesis. New York University, New York, NY, 1991.
38. Herman GT, Liu HK. Dynamic boundary surface detection. Comput Graph Image Process 1978; 7:130-138.
39. Pope D, Parker D, Gustafson D, Clayton P. Dynamic search algorithms in left ventricular border recognition and analysis of coronary arteries. IEEE Proc Comput Cardiol 1984; 9:71-75.
40. Amini A, Weymouth T, Jain R. Using dynamic programming for solving variational problems in vision. IEEE Trans Pattern Anal Machine Intell 1990; 12:855-867.
41. Geiger D, Gupta A, Costa L, Vlontzos J. Dynamic programming for detecting, tracking and matching deformable contours. IEEE Trans Pattern Anal Machine Intell 1995; 17:294-302.
42. Kass M, Witkin A, Terzopoulos D. Snakes: active contour models. Int J Comput Vision 1987; 1:321-331.
43. Lobregt S, Viergever M. A discrete dynamic contour model. IEEE Trans Med Imaging 1995; 14:12-24.
44. Cohen L. On active contour models. Comput Vision Graph Image Process 1991; 53:211-218.
45. Cohen I, Cohen LD, Ayache N. Using deformable surfaces to segment 3-D images and infer differential structures. Comput Vision Graph Image Process 1992; 56:242-263.
46. McInerney T, Terzopoulos D. A dynamic finite element surface model for segmentation and tracking in multidimensional medical images with application to cardiac 4D image analysis. Comput Med Imaging Graph 1995; 19:69-93.
47. Falcao AX, Udupa JK, Samarasekera S, Hirsch BE. User-steered image boundary segmentation. SPIE Proc 1996; 2710:279-288.
48. Falcao A, Udupa JK. Segmentation of 3D objects using live wire. SPIE Proc 1997; 3034:228-235.
49. Duda RO, Hart PE. Pattern classification and scene analysis. New York, NY: Wiley, 1973.
50. Bezdek JC, Hall LO, Clarke LP. Review of MR image segmentation techniques using pattern recognition. Med Phys 1993; 20:1033-1048.
51. Clarke L, Velthuizen RP, Camacho MA, et al. MRI segmentation: methods and applications. JMRI 1995; 13:343-368.
52. Vannier M, Butterfield R, Jordan D, Murphy W. Multispectral analysis of magnetic resonance images. Radiology 1985; 154:221-224.
53. Cline H, Lorensen W, Kikinis R, Jolesz F. Three-dimensional segmentation of MR images of the head using probability and connectivity. J Comput Assist Tomogr 1990; 14:1037-1045.
54. Kikinis R, Shenton M, Jolesz F, et al. Quantitative analysis of brain and cerebrospinal fluid spaces with MR images. JMRI 1992; 2:619-629.
55. Mitchell JR, Karlick SJ, Lee DH, Fenster A. Computer-assisted identification and quantification of multiple sclerosis lesions in MR imaging volumes in the brain. JMRI 1994; 4:197-208.

56. Kohn M, Tanna N, Herman G, et al. Analysis of brain and cerebrospinal fluid volumes with MR imaging. I. Methods, reliability and validation. Radiology 1991; 178:115-122.
57. Kamber M, Shingal R, Collins DL, Francis GS, Evans AC. Model-based 3-D segmentation of multiple sclerosis lesions in magnetic resonance brain images. IEEE Trans Med Imaging 1995; 14:442-453.
58. Wells WM III, Grimson WEL, Kikinis R, Jolesz FA. Adaptive segmentation of MRI data. IEEE Trans Med Imaging 1996; 15:429-442.
59. Drebin R, Carpenter L, Hanrahan P. Volume rendering. Comput Graph 1988; 22:65-74.
60. Bezdek JC. Pattern recognition with fuzzy objective function algorithms. New York, NY: Plenum, 1981.
61. Udupa JK, Odhner D, Samarasekera S, et al. 3DVIEWNIX: an open, transportable, multidimensional, multimodality, multiparametric imaging software system. SPIE Proc 1994; 2164:58-73.
62. Burt PJ, Hong TH, Rosenfeld A. Segmentation and estimation of region properties through co-operative hierarchical computation. IEEE Trans Syst Man Cybernet 1981; 11:802-809.
63. Hong TH, Rosenfeld A. Compact region extraction using weighted pixel linking in a pyramid. IEEE Trans Pattern Anal Machine Intell 1984; 6:222-229.
64. Dellepiane S, Fontana F. Extraction of intensity connectedness for image processing. Pattern Recogn Lett 1995; 16:313-324.
65. Udupa J, Samarasekera S. Fuzzy connectedness and object definition: theory, algorithms and applications in image segmentation. Graph Models Image Process 1996; 58:246-261.
66. Udupa JK, Wei L, Samarasekera S, Miki Y, van Buchem MA, Grossman RI. Multiple sclerosis lesion quantification using fuzzy connectedness principles. IEEE Trans Med Imaging 1997; 16:598-609.
67. Samarasekera S, Udupa JK, Miki Y, Grossman RI. A new computer-assisted method for enhancing lesion quantification in multiple sclerosis. J Comput Assist Tomogr 1997; 21:145-151.
68. Miki Y, Grossman RI, Samarasekera S, et al. Clinical correlation of computer assisted enhancing lesion quantification in multiple sclerosis. AJNR 1997; 18:705-710.
69. van Buchem MA, Udupa JK, Heyning FH, et al. Quantitation of macroscopic and microscopic cerebral disease burden in multiple sclerosis. AJNR 1997; 18:1287-1290.
70. Udupa JK, Odhner D, Tian J, Holland G, Axel L. Automatic clutter-free volume rendering for MR angiography using fuzzy connectedness. SPIE Proc 1997; 3034:114-119.
71. Udupa JK, Tian J, Hemmy DC, Tessier P. A Pentium-based craniofacial 3D imaging and analysis system. J Craniofac Surg 1997; 8:333-339.
72. Frieder G, Gordon D, Reynolds R. Back-to-front display of voxel-based objects. IEEE Comput Graph Appl 1985; 5:52-60.
73. Brown DG, Riederer SJ. Contrast-to-noise ratios in maximum intensity projection images. Magn Reson Med 1992; 23:130-137.
74. Schreiner S, Paschal CB, Galloway RL. Comparison of projection algorithms used for the construction of maximum intensity projection images. J Comput Assist Tomogr 1996; 20:56-67.
75. Napel S, Marks MP, Rubin GD, et al. CT angiography with spiral CT and maximum intensity projection. Radiology 1992; 185:607-610.
76. Hertz SM, Baum RA, Owen RS, Holland GA, Logan DR, Carpenter JP. Comparison of magnetic resonance angiography and contrast arteriography in peripheral arterial stenosis. Am J Surg 1993; 166:112-116.
77. Goldwasser S, Reynolds R. Real-time display and manipulation of 3-D medical objects: the voxel machine architecture. Comput Vision Graph Image Process 1987; 39:1-27.
78. Höhne KH, Bernstein R. Shading 3D images from CT using gray-level gradients. IEEE Trans Med Imaging 1986; 5:45-47.
79. Reynolds R, Gordon D, Chen L. A dynamic screen technique for shaded graphics display of slice-represented objects. Comput Vision Graph Image Process 1987; 38:275-298.
80. Herman G, Liu L. Display of three-dimensional information in computed tomography. J Comput Assist Tomogr 1977; 1:155-160.
81. Udupa J, Odhner D. Fast visualization, manipulation, and analysis of binary volumetric objects. IEEE Comput Graph Appl 1991; 11:53-62.
82. Chen LS, Herman GT, Reynolds RA, et al. Surface rendering in the cuberille environment. IEEE Comput Graph Appl 1985; 5:33-43.
83. Gordon D, Reynolds RA. Image-space shading of three-dimensional objects. Comput Vision Graph Image Process 1985; 29:361-376.
84. Levoy M. Display of surfaces from volume data. ACM Trans Graph 1990; 9:245-271.
85. Udupa JK. Multidimensional digital boundaries. Graph Models Image Process 1994; 50:311-323.
86. Artzy E, Frieder G, Herman G. The theory, design, implementation and evaluation of a three-dimensional surface detection algorithm. Comput Graph Image Process 1981; 15:1-24.
87. Chuang JH, Lee WC. Efficient generation of isosurfaces in volume rendering. Comput Graph 1995; 19:805-812.
88. Cutting C, Grayson B, Bookstein F, Fellingham L, McCarthy J. Computer-aided planning and evaluation of facial and orthognathic surgery. Comput Plast Surg 1986; 13:449-461.
89. Trivedi S. Interactive manipulation of three-dimensional binary images. Visual Comput 1986; 2:209-218.
90. Patel VV, Vannier MW, Marsh JL, Lo LJ. Assessing craniofacial surgical simulation. IEEE Comput Graph Appl 1996; 16:46-54.
91. Odhner D, Udupa JK. Shell manipulation: interactive alteration of multiple-material fuzzy structures. SPIE Proc 1995; 2431:35-42.
92. Paouri A, Thalmann NM, Thalmann D. Creating realistic three-dimensional human shape characters for computer-generated films. In: Proceedings of Computer Animation '91, Tokyo. Berlin, Germany: Springer-Verlag, 1991; 89-100.
93. Waters K. A muscle model for animating three dimensional facial expression. Proceedings of SIGGRAPH '87, 1987; 21:17-24.
94. Platt S, Badler N. Animating facial expressions. Proceedings of SIGGRAPH '81, 1981; 245-252.
95. Terzopoulos D, Waters K. Techniques for realistic facial modeling and animation. In: Proceedings of Computer Animation '91, Tokyo. Berlin, Germany: Springer-Verlag, 1991; 59-74.
96. Jianhua S, Thalmann MN, Thalmann D. Muscle-based human body deformations. In: Proceedings of CAD/Graphics '93, Beijing, China, 1993; 95-100.
97. Vannier MW, Hildebolt CF, Marsh JL, et al. Craniosynostosis: diagnostic value of three-dimensional CT reconstruction. Radiology 1989; 173:669-673.
