To appear in the proceedings of I3D 2001
lation, surface normals are also crucial for performing other operations, such as modeling of aggregate shapes, shading, and shadow projection.

In projective geometry, each point has a dual—a line whose coefficients equal the point's coordinates. The dual to a point representing the normal to a 3D plane plays a special role. Suppose the vector (a, b, c) represents the normal to a 3D plane. Its dual is the projective 2D line ax + by + cw = 0. We can envision this line as an equatorial great circle whose pole is (a, b, c). Clearly, points on this circle represent all directions parallel to the 3D plane, representing ideal (rather than Euclidean) 3D points at infinity. Thus, the line dual to the plane's normal is the image of the line at infinity associated with the plane, of which we make frequent use. For example, we arrive at the direction of a 3D line that lies in a given 3D plane by computing the "vanishing point" at the intersection of the line's 2D image with the image of the plane's line at infinity.

3.4.1 Apparent translation

Since in a 2D projective setting we have no knowledge of the distance d from the viewpoint to the surface or the actual displacement δ of the plane, we deduce a new quantity α = δ/d. This yields a single-parameter family of homographies compatible with the translation of a 3D plane:

    T(α) ≃ I + α t nᵀ.    (2)

All the quantities on the right-hand side of Equation 2 are known except for the scalar parameter α, which can be inferred from a single pair of points (m, m′) giving the location of a point on the surface before and after translation. Such a pair can be specified using the pointing device, and must be constrained to lie on the selected trajectory, hence the single degree of freedom (see Figure 7). We determine α as follows:

    α = ‖m′ − m‖ / (n · m),    sign(α) = sign(t · (m′ − m)).    (3)
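As a concrete illustration of Equations 2 and 3 (our own sketch, not the paper's implementation; all names are invented), assuming homogeneous image points normalized to w = 1 and a unit trajectory t:

```python
import numpy as np

def apparent_translation(alpha, t, n):
    """Homography of Equation 2: T(alpha) = I + alpha * t * n^T."""
    return np.eye(3) + alpha * np.outer(t, n)

def infer_alpha(m, m_prime, t, n):
    """Equation 3: recover alpha from a point before and after translation."""
    alpha = np.linalg.norm(m_prime - m) / np.dot(n, m)
    return alpha * np.sign(np.dot(t, m_prime - m))

# Example (values are ours): translate a point along trajectory t.
n = np.array([0.0, 0.0, 1.0])   # plane normal
t = np.array([1.0, 0.0, 0.0])   # translation trajectory (a vanishing point)
m = np.array([0.2, 0.1, 1.0])   # image point, w = 1
T = apparent_translation(0.5, t, n)
m_prime = T @ m
m_prime = m_prime / m_prime[2]  # renormalize to w = 1
print(infer_alpha(m, m_prime, t, n))  # → 0.5
```

The sign test in `infer_alpha` mirrors Equation 3: the magnitude comes from the displacement along the trajectory, and the sign from whether that displacement points along or against t.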
Figure 8: The rotation angle is inferred from a pair of input points (m, m′) indicating the position of a point before and after rotation. The point rotates in a plane perpendicular to the rotation axis. By extending the lines mp and m′p to the rotation plane's line at infinity we get the directions t and t′, which completely specify the rotation angle.

Figure 10: An extrusion shape is created by making a copy of the base stroke and transforming it via a pseudo-3D translation along the extrusion direction. The normal of each facet is computed by intersecting the line joining mᵢ and mᵢ₊₁ with the base stroke's line at infinity in order to determine a vanishing point vᵢ, and then computing the normal as the cross product of this vanishing point with the extrusion trajectory.
1. In the first step, we rotate the object about the viewpoint (at the origin of the world) using the rotation axis and angle desired for the local rotation. All object points, including the pivot itself, move to an intermediate position:

    m″ = R(a, θ) m.

2. Next, we use apparent 3D translation (Equation 2), where t ≃ p − p″, to "move the object back" to the original pivot.

4.1 Extrusion

The user draws a freehand "base stroke" and selects the extrusion trajectory from the list of active vanishing points, then drags the pointing device to specify the magnitude of the extrusion. The system responds by making a copy of the base stroke, which we call the "extruded stroke," and applies apparent 3D translation to this copy using a variant of Equation 2:

    T(αₑ) ≃ I + αₑ e nᵀ,    (4)
where αₑ is inferred from the dragging action and e is the selected extrusion trajectory. Segments of the new extruded stroke are connected to corresponding ones in the base stroke, thereby forming the facets of the shape.

This approach assumes that the base stroke represents a planar curve in 3D space. By default, the user interface initially assigns the base stroke's normal as the extrusion direction. Normals to the facets are inferred from vanishing points (see Figure 10), allowing for shading and shadow projection. Later, the user may shift the extruded stroke in any direction, thereby simulating a skewed extrusion shape (see Figure 11). The extrusion direction is re-computed as the intersection of any two facet sides connecting the base and extruded stroke. Facet normals are also updated using this new extrusion direction.

Figure 13: An extrusion shape is rotated in several steps: First, both the base and extruded strokes are rotated about their first points. The first facet is also rotated in order to determine the final position of the extruded stroke (top). Then the extruded stroke is moved to the correct position using the first facet as a guide (bottom).
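A minimal sketch of the extrusion step (ours; names invented): the Equation-4 homography, built from the extrusion trajectory e and the base stroke's plane normal n, is applied to every point of the base stroke.

```python
import numpy as np

def extrude_stroke(base_stroke, alpha_e, e, n):
    """Apply T(alpha_e) = I + alpha_e * e * n^T to each base-stroke point
    and renormalize to w = 1, yielding the extruded stroke."""
    T = np.eye(3) + alpha_e * np.outer(e, n)
    extruded = []
    for m in base_stroke:
        m2 = T @ m
        extruded.append(m2 / m2[2])  # back to w = 1
    return extruded
```

The facets are then formed by connecting `base_stroke[i]`, `base_stroke[i+1]`, `extruded[i+1]`, and `extruded[i]`.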
    α′ = α / (1 + αₑ (e · n)).

Derivation: Suppose that the imaginary 3D curve represented by the base stroke lies in a 3D plane whose equation is n · P = d, and the extruded curve lies in a parallel plane: n · P′ = d′. From Figure 12 we can deduce that:

    d′ = d + δₑ (e · n),    (5)

where δₑ is the extrusion distance. From our definition of α in Section 3.4.1, we have:

    αₑ = δₑ / d,    α = δₜ / d,    α′ = δₜ / d′,    (6)

where δₜ is the translation distance. By substituting the value of d′ from Equation 5 into 6, we have:

    α′ = δₜ / (d + δₑ (e · n)).

Since δₑ = d αₑ (Equation 6), we have:

    α′ = δₜ / (d + d αₑ (e · n)) = (δₜ / d) / (1 + αₑ (e · n)) = α / (1 + αₑ (e · n)).

4.1.2 Apparent Rotation

Rotation of an extruded shape about a pivot is also possible. Any point could serve as the origin of the rotation. For simplicity, we choose the pivot to be the first point in the base stroke. We perform the rotation in a series of steps (see Figure 13):

1. Rotate the base stroke about the rotation pivot.

2. Rotate the extruded stroke about its first point. This results in an intermediate position for the extruded stroke.

3. Rotate the first facet of the shape about the rotation pivot in order to establish the correct positions for the first and second points in the extruded stroke.

4. Move the extruded stroke from its intermediate position to the correct position determined in step 3. For this operation, we use apparent 3D translation (Equation 2), where t ≃ (m₁ − m′₁) ≃ (m₂ − m′₂).

4.2 Silhouettes

Rather than drawing all facets of an extrusion, and in keeping with the hand-drawn look, we have developed techniques to highlight the boundaries and silhouettes of extrusion shapes.
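The α′ identity derived above can be spot-checked numerically (a sanity check we add here, not part of the paper; all values are arbitrary):

```python
import numpy as np

# Arbitrary test values (our choice, for checking the identity only).
d, delta_e, delta_t = 2.0, 0.5, 0.8
e = np.array([0.6, 0.0, 0.8])   # extrusion trajectory (unit)
n = np.array([0.0, 0.6, 0.8])   # base plane normal (unit)

d_prime = d + delta_e * np.dot(e, n)       # Equation 5
alpha_e, alpha = delta_e / d, delta_t / d  # Equation 6
alpha_prime = delta_t / d_prime            # definition of alpha'

# The closed form derived in the text:
closed_form = alpha / (1.0 + alpha_e * np.dot(e, n))
print(abs(alpha_prime - closed_form) < 1e-12)  # → True
```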
COMPUTE-STIPPLE
  StippleDirection ← n × s
  Convert-StippleDirection-To-Screen-Coordinates
  StippleDensity ← MaxDensity · (1 − min(0, n · s))
  NumStipples ← StippleDensity · Bounding-Box-Area
  for i ← 1 to NumStipples
    do BeginPoint ← Random-Point-Inside-Bounding-Box
       EndPoint ← BeginPoint + (Random-Length · StippleDirection)
       Clip-Stipple-to-Shape; Back-Project-Stipple; Add-to-StippleList
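A runnable rendition of the stippling loop above (our own Python sketch; the clip and back-projection steps are stubbed out, and the Random-Length range is an assumption of ours):

```python
import random
import numpy as np

def compute_stipples(n, s, bbox, max_density, rng=random.Random(0)):
    """Generate stipple segments for a facet with normal n and light
    direction s, following the COMPUTE-STIPPLE pseudo-code
    (screen-space steps only; assumes n is not parallel to s)."""
    direction = np.cross(n, s)                   # stroke direction
    direction = direction / np.linalg.norm(direction)
    density = max_density * (1.0 - min(0.0, np.dot(n, s)))
    (x0, y0), (x1, y1) = bbox
    area = (x1 - x0) * (y1 - y0)
    stipples = []
    for _ in range(int(density * area)):
        begin = np.array([rng.uniform(x0, x1), rng.uniform(y0, y1), 0.0])
        end = begin + rng.uniform(0.5, 1.5) * direction  # assumed length range
        stipples.append((begin, end))            # clip / back-project omitted
    return stipples
```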
Figure 15: Points on a plane make a greater angle with its normal
as they move farther away from the viewpoint. This observation is
used to draw an extrusion shape in a back-to-front order.
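Following the observation in the caption above, drawing back-to-front reduces to sorting facet base points by decreasing m · n (a sketch of ours; the points and normal are hypothetical):

```python
import numpy as np

def back_to_front(facets, n):
    """Painter's-algorithm order: a facet whose base point has the larger
    dot product with the plane normal is potentially occluded, so it is
    drawn first."""
    return sorted(range(len(facets)),
                  key=lambda i: np.dot(facets[i], n),
                  reverse=True)

# Hypothetical base points (homogeneous image points) and plane normal.
n = np.array([0.0, 0.0, 1.0])
facets = [np.array([0.1, 0.0, 1.0]),
          np.array([0.3, 0.0, 2.0]),
          np.array([0.2, 0.0, 1.5])]
print(back_to_front(facets, n))  # → [1, 2, 0]
```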
4.3 Visibility

Although inter-object visibility cannot be unambiguously resolved for 2D representations, intra-object visibility can be determined in some instances. For example, the facets of an extrusion shape can be drawn in a back-to-front order using the simple observation that points on the base plane make a greater angle with the plane's normal as they move farther away. For example, in Figure 15, we have mⱼ · n > mᵢ · n; therefore, a facet based at mⱼ is potentially occluded by another based at mᵢ (assuming the extrusion shape is not skew). Based on this dot-product criterion, we create a sorted list of facet indexes that we use for rendering. Re-sorting of this list is necessary if the object undergoes apparent translation or rotation.

5 Shading

Our drawing system provides shading capabilities by inferring surface normals (see above) and allowing the user to insert infinite light sources into the scene. The picture can then be rendered with flat-shaded solid color (using any local lighting model) or with artistic styles such as stippling and hatching.

We have implemented a basic stippling algorithm (used in Figure 24-a) that employs short strokes whose direction is determined by the surface normal and light direction (see Figure 16). The density of the strokes is determined by a Lambertian shading computation, and their position and length are randomized in order to emulate a hand-drawn look (see code in Figure 17).

Another shading style that we support is a simple hatching method (used in Figure 24-b). This method generates a look that is consistent with that of the manual illustrations in [3]. Instead of Lambertian shading, it generates four levels of grey according to the following rules: shadows are hatched with maximum density, objects facing away from the light are hatched with lighter density, and light stippling is applied to objects that are dimly lit (i.e., the angle between the normal and the light source is greater than 45 degrees).

Due to the computational overhead of artistic shading (about 2 seconds for a complex scene), we adopt the following strategy: shading strokes are computed in screen coordinates when there is no camera motion, then back-projected and stored on the unit sphere. As the user rotates the camera, the stored strokes are used to render the scene. Although the stored shading becomes somewhat inaccurate during camera motion, this strategy provides adequate feedback during scene navigation and avoids the flickering that would result from re-computing the strokes during camera motion.

6 Shadows

Shadows play an important role in scene presentation due to their effectiveness in conveying shape and relative position information. Following classical line construction techniques, we have implemented an automatic algorithm that computes the shape of an object's shadow as cast from an infinite (directional) light source like the sun. However, due to the lack of depth information, the shadow
PROJECT-SHADOW
  n₁ ← shadow-casting-object-normal
  n₂ ← shadow-receiving-object-normal
  m′₁ ← m₁                  ▷ Shadow attached to first point.
  k ← length[stroke]        ▷ Number of points in stroke.
  for i ← 2 to k
    do l₁ ← mᵢ × mᵢ₋₁       ▷ Shadow-casting stroke line.
       l₂ ← s × mᵢ          ▷ Shadow projector.
       v ← l₁ × n₁          ▷ Vanishing point.
       nₛ ← s × v           ▷ Shadow plane's normal.
       t ← nₛ × n₂          ▷ Intersection of 2 planes.
       l₃ ← m′ᵢ₋₁ × t       ▷ Shadow line.
       m′ᵢ ← l₂ × l₃        ▷ Shadow point.

REPROJECT-SHADOW-POINT
  n ← shadow-receiving-object-normal
  l₁ ← m × s′               ▷ Shadow projector.
  nₛ ← s × s′               ▷ Shadow plane's normal.
  t ← nₛ × n                ▷ Intersection of 2 planes.
  l₂ ← t × m′               ▷ Shadow line.
  m″ ← l₁ × l₂              ▷ New shadow point.

Figure 22: Pseudo-code for shadow re-projection.
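The projection loop of Figure 20 transcribes almost line-for-line into NumPy, with each join and meet realized as a 3-vector cross product (our own rendition; the inputs are hypothetical homogeneous coordinates, and no renormalization is performed):

```python
import numpy as np

def project_shadow(stroke, s, n1, n2):
    """March along a stroke, projecting each segment onto the
    shadow-receiving object (Figure 20). Returns unnormalized
    homogeneous shadow points."""
    shadow = [stroke[0]]                            # attached to first point
    for i in range(1, len(stroke)):
        l1 = np.cross(stroke[i], stroke[i - 1])     # shadow-casting stroke line
        l2 = np.cross(s, stroke[i])                 # shadow projector
        v = np.cross(l1, n1)                        # vanishing point
        ns = np.cross(s, v)                         # shadow plane's normal
        t = np.cross(ns, n2)                        # intersection of 2 planes
        l3 = np.cross(shadow[i - 1], t)             # shadow line
        shadow.append(np.cross(l2, l3))             # shadow point
    return shadow
```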
Figure 20: Pseudo-code for shadow projection.

is initially attached to the object casting the shadow, then the user may drag it to the desired position (see Figure 18). This dragging operation is achieved with our "apparent 3D translation" method by using the light's direction as the translation trajectory. Later, if the user re-positions the light source, the new position of the shadow is recomputed automatically, without any further user intervention. Note that, by using this shadow construction interface, it is possible to construct a scene with incomplete or even inconsistent shadows. It is the artist's responsibility to maintain the scene's integrity.

The information that is needed to compute the shadow is the surface normals for both the object casting the shadow and the one receiving it. The shadow of a stroke (or polygon) is determined by marching along the stroke and projecting its successive segments onto the shadow-receiving object. The first shadow point is attached to the corresponding point in the stroke. Thereafter, each shadow point is determined by intersecting a shadow line with a shadow projector—a line joining the light source and the shadow-casting point (see Figure 19). The trajectory of the shadow line is determined by intersecting an imaginary shadow plane with the shadow-receiving object. All these operations are performed in two dimensions using vector cross products as shown in the pseudo-code (see Figure 20).

Using similar techniques, shadows can be automatically re-projected as the light source moves (see Figure 21). An imaginary shadow plane is constructed encompassing the old and new shadow projectors—its surface normal inferred from the old and new light directions. The intersection of the shadow plane with the shadow-receiving object gives us the trajectory along which the new shadow point must lie. We intersect this trajectory with the new shadow projector to arrive at the new shadow point (see code in Figure 22).

7 Examples

We have created many drawings with our system, which demonstrate its usefulness and versatility. We used the system in conjunction with other media, such as paper sketches, paintings, and photographs. The following examples also demonstrate a variety of looks the system can generate, including freehand strokes, silhouette rendering, and fully-shaded scenes.

Paper Sketches. This example shows a panoramic sketch created entirely from a series of freehand sketches originally drawn on paper using a digital notepad. The panorama was assembled from sketches pointing in four different directions by estimating the fields of view visually (see Figure 23).

Shadows and Shading. Many of the features of our system were used in the construction of a perspective drawing of the Court of the Myrtles at Alhambra Palace, Spain (see Figure 24). For example, special vanishing points aided in the drawing of the roof tiles. Symmetrical and repeating architectural features, such as the colonnade, were copied and moved using the "apparent translation" operation. Shadows, including those cast by the colonnade and lattice onto the back wall, were projected semi-automatically. The shadow re-projection algorithm was then used to visualize the motion of the sun across the courtyard (see Figure 25).

Extrusion. This example shows the maze garden at Hampton Court Palace, which was generated by extruding the plan drawing of the maze (see Figure 26). Care was taken to maintain a proper depth order amongst the hedges. Since our system relies on a stacking order for conveying occlusion, it is not possible to have one shape wrapping around another. Such a shape must be broken up during modeling into smaller fragments—ones that are either exclusively behind or in front of other objects. This limitation, however, can be mitigated with a "grouping" tool, whereby visibility within a group is resolved on a facet-by-facet basis rather than by objects.

Projective Textures. In [12] we demonstrated the usefulness of integrating traditional drawing media and photographs with computer-based drawing. However, our previous techniques re-
Figure 25: This sequence, showing the motion of the shadow across
the back wall, was generated automatically from Figure 24.
9 Acknowledgments

We would like to thank Aparna Das and Stephen Duck for their contributions to our examples. This work was supported by NSF CAREER awards (CCR-9624172 and CCR-9875859) and an NSF CISE Research Infrastructure award (EIA-9802220).

Figure 27: This scene depicts a proposed building within its real context (top). In addition to the panorama of the site, the scene contains three perspective rectangles with projective textures and transparency channels (bottom). Two of these textures include proposed elements, while the third is an existing building (the building on the right) that would occlude the proposed one.
References

[1] Shenchang Eric Chen. QuickTime VR - An Image-Based Approach to Virtual Environment Navigation. In SIGGRAPH 95 Conference Proceedings, pages 29-38, August 1995.

[2] Jonathan M. Cohen, John F. Hughes, and Robert C. Zeleznik. Harold: A World Made of Drawings. In NPAR 2000: First International Symposium on Non-Photorealistic Animation and Rendering, pages 83-90, June 2000.

[3] Robert W. Gill. Creative Perspective. Thames and Hudson, London, 1975.

[4] Paul S. Heckbert. Fundamentals of Texture Mapping and Image Warping. Master's thesis, UCB/CSD 89/516, CS Division, EECS Dept, UC Berkeley, May 1989.

[5] S.C. Hsu, I.H.H. Lee, and N.E. Wiseman. Skeletal Strokes. In UIST 93: Proceedings of the ACM SIGGRAPH & SIGCHI Symposium on User Interface Software & Technology, November 1993.

[6] Takeo Igarashi, Satoshi Matsuoka, and Hidehiko Tanaka. Teddy: A Sketching Interface for 3D Freeform Design. In SIGGRAPH 99 Conference Proceedings, pages 409-416, August 1999.

[7] Kenichi Kanatani. Computational Projective Geometry. CVGIP: Image Understanding, 54(3): 333-348, 1991.

[8] John Lansdown and Simon Schofield. Expressive Rendering: A Review of Nonphotorealistic Techniques. IEEE Computer Graphics and Applications, 15(3): 29-37, May 1995.

[9] Ramesh Raskar and Michael Cohen. Image Precision Silhouette Edges. In 1999 ACM Symposium on Interactive 3D Graphics, pages 135-140, April 1999.

[10] Michael P. Salisbury, Sean E. Anderson, Ronen Barzel, and David H. Salesin. Interactive Pen-And-Ink Illustration. In SIGGRAPH 94 Conference Proceedings, pages 101-108, July 1994.

[11] Jorge Stolfi. Oriented Projective Geometry: A Framework for Geometric Computations. Academic Press, Boston, 1991.

[12] Osama Tolba, Julie Dorsey, and Leonard McMillan. Sketching with Projective 2D Strokes. In UIST 99: Proceedings of the 12th Annual ACM Symposium on User Interface Software & Technology, CHI Letters, 1(1): 149-157, November 1999.

[13] Robert C. Zeleznik, Kenneth P. Herndon, and John F. Hughes. SKETCH: An Interface for Sketching 3D Scenes. In SIGGRAPH 96 Conference Proceedings, pages 163-170, August 1996.