Wavelets for Computer Graphics: A Primer
Part 1†
Eric J. Stollnitz
Tony D. DeRose
David H. Salesin
University of Washington
1 Introduction
Wavelets are a mathematical tool for hierarchically decomposing
functions. They allow a function to be described in terms of a coarse
overall shape, plus details that range from broad to narrow. Regardless of whether the function of interest is an image, a curve, or a surface, wavelets offer an elegant technique for representing the levels
of detail present. This primer is intended to provide people working
in computer graphics with some intuition for what wavelets are, as
well as to present the mathematical foundations necessary for studying and using them. In Part 1, we discuss the simple case of Haar
wavelets in one and two dimensions, and show how they can be used
for image compression. In Part 2, we will present the mathematical
theory of multiresolution analysis, then develop spline wavelets and
describe their use in multiresolution curve and surface editing.
Although wavelets have their roots in approximation theory [5] and
signal processing [13], they have recently been applied to many
problems in computer graphics. These graphics applications include image editing [1], image compression [6], and image querying [10]; automatic level-of-detail control for editing and rendering curves and surfaces [7, 8, 12]; surface reconstruction from contours [14]; and fast methods for solving simulation problems in animation [11] and global illumination [3, 4, 9, 15]. For a discussion of
wavelets that goes beyond the scope of this primer, we refer readers
to our forthcoming monograph [16].
We set the stage here by first presenting the simplest form of
wavelets, the Haar basis. We cover one-dimensional wavelet transforms and basis functions, and show how these tools can be used to
compress the representation of a piecewise-constant function. Then
we discuss two-dimensional generalizations of the Haar basis, and
demonstrate how to apply these wavelets to image compression.
Because linear algebra is central to the mathematics of wavelets, we
briefly review important concepts in Appendix A.
2 Wavelets in one dimension

The Haar basis is the simplest wavelet basis. We will first discuss how a one-dimensional function can be decomposed using Haar wavelets, and then describe the actual basis functions. Finally, we show how the Haar wavelet decomposition leads to a straightforward technique for compressing a one-dimensional function.

2.1 One-dimensional Haar wavelet transform

To get a sense for how wavelets work, let's start with a simple example. Suppose we are given a one-dimensional "image" with a resolution of four pixels, having values

    [9 7 3 5]

We can represent this image in the Haar basis by computing a wavelet transform. To do this, we first average the pixels together, pairwise, to get a new lower-resolution image with pixel values

    [8 4]

Clearly, some information has been lost in this averaging process. To recover the original four pixel values from the two averaged values, we need to store some detail coefficients, which capture the missing information. In our example, we choose 1 for the first detail coefficient, since the average we computed is 1 less than 9 and 1 more than 7. This single number allows us to recover the first two pixels of our original four-pixel image. Similarly, the second detail coefficient is −1, since 4 + (−1) = 3 and 4 − (−1) = 5.

Thus, we have decomposed the original image into a lower-resolution (two-pixel) version and a pair of detail coefficients. Repeating this process recursively on the averages gives the full decomposition:

    Resolution   Averages     Detail coefficients
        4        [9 7 3 5]
        2        [8 4]        [1 −1]
        1        [6]          [2]

Finally, we define the wavelet transform (also called the wavelet decomposition) of the original four-pixel image to be the single coefficient representing the overall average of the original image, followed by the detail coefficients in order of increasing resolution. Thus, for the one-dimensional Haar basis, the wavelet transform of our original four-pixel image is given by

    [6 2 1 −1]

† Eric J. Stollnitz, Tony D. DeRose, and David H. Salesin. Wavelets for computer graphics: A primer, part 1. IEEE Computer Graphics and Applications, 15(3):76–84, May 1995.
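The recursive averaging and differencing just described is easy to express in code. Below is a small illustrative sketch of the unnormalized transform; the function name `haar_decompose` is ours, not the paper's:

```python
def haar_decompose(pixels):
    """Unnormalized Haar wavelet transform by recursive pairwise
    averaging and differencing. Returns the overall average followed
    by detail coefficients in order of increasing resolution."""
    coeffs = list(pixels)
    n = len(coeffs)
    details = []  # detail coefficients, coarsest first
    while n > 1:
        averages = [(coeffs[2 * i] + coeffs[2 * i + 1]) / 2 for i in range(n // 2)]
        diffs = [(coeffs[2 * i] - coeffs[2 * i + 1]) / 2 for i in range(n // 2)]
        details = diffs + details  # coarser details go in front
        coeffs = averages
        n //= 2
    return coeffs + details

print(haar_decompose([9, 7, 3, 5]))  # -> [6.0, 2.0, 1.0, -1.0]
```

Running it on the four-pixel example reproduces the transform [6 2 1 −1] computed by hand above.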
The way we computed the wavelet transform, by recursively averaging and differencing coefficients, is called a filter bank, a process we will generalize to other types of wavelets in Part 2 of our tutorial. Note that no information has been gained or lost by this process. The original image had four coefficients, and so does the transform. Also note that, given the transform, we can reconstruct the image to any resolution by recursively adding and subtracting the detail coefficients from the lower-resolution versions.

Storing the image's wavelet transform, rather than the image itself, has a number of advantages. One advantage of the wavelet transform is that often a large number of the detail coefficients turn out to be very small in magnitude, as in the example of Figure 1. Truncating, or removing, these small coefficients from the representation introduces only small errors in the reconstructed image, giving a form of "lossy" image compression. We will discuss this particular application of wavelets in Section 2.3, after we present the one-dimensional Haar basis functions.
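The reconstruction mentioned above, recursively adding and subtracting detail coefficients, can be sketched as follows (again with a function name of our own choosing):

```python
def haar_reconstruct(transform):
    """Invert the unnormalized Haar transform. The input holds the
    overall average followed by detail coefficients in order of
    increasing resolution."""
    coeffs = [transform[0]]  # coarsest approximation
    pos = 1
    while pos < len(transform):
        n = len(coeffs)
        details = transform[pos:pos + n]
        finer = []
        for avg, d in zip(coeffs, details):
            finer.extend([avg + d, avg - d])  # undo averaging/differencing
        coeffs = finer
        pos += n
    return coeffs

print(haar_reconstruct([6, 2, 1, -1]))  # -> [9, 7, 3, 5]
```

Stopping the loop early recovers the image at any intermediate resolution, which is exactly the reconstruction-to-any-resolution property noted above.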
The mathematical theory of multiresolution analysis requires this nested set of spaces V^j. We will consider this topic more thoroughly in Part 2.
Now we need to define a basis for each vector space V^j. The basis functions for the spaces V^j are called scaling functions, and are usually denoted by the symbol φ. A simple basis for V^j is given by the set of scaled and translated "box" functions:

    φ_i^j(x) := φ(2^j x − i),   i = 0, …, 2^j − 1,

where

    φ(x) := 1 for 0 ≤ x < 1, and 0 otherwise.

As an example, Figure 2 shows the four box functions forming a basis for V^2.
The next step is to choose an inner product defined on the vector spaces V^j. The "standard" inner product,

    ⟨f | g⟩ := ∫₀¹ f(x) g(x) dx,

for two elements f, g ∈ V^j will do quite well for our running example. We can now define a new vector space W^j as the orthogonal complement of V^j in V^{j+1}. In other words, we will let W^j be the space of all functions in V^{j+1} that are orthogonal to all functions in V^j under the chosen inner product. Informally, we can think of the wavelets in W^j as a means for representing the parts of a function in V^{j+1} that cannot be represented in V^j.
A collection of linearly independent functions ψ_i^j(x) spanning W^j are called wavelets. These basis functions have two important properties:

1. The basis functions ψ_i^j of W^j, together with the basis functions φ_i^j of V^j, form a basis for V^{j+1}.

2. Every basis function ψ_i^j of W^j is orthogonal to every basis function φ_i^j of V^j under the chosen inner product.¹

Thus, the "detail coefficients" of Section 2.1 are really coefficients of the wavelet basis functions.

Figure 1 A sequence of decreasing-resolution approximations to a function (left), along with the detail coefficients required to recapture the finest approximation (right). Note that in regions where the true function is close to being flat, a piecewise-constant approximation works well, so the corresponding detail coefficients are relatively small.

2.2 One-dimensional Haar wavelet basis functions
We have shown how one-dimensional images can be treated as sequences of coefficients. Alternatively, we can think of images as piecewise-constant functions on the half-open interval [0, 1). To do so, we will use the concept of a vector space from linear algebra. A one-pixel image is just a function that is constant over the entire interval [0, 1). We'll let V^0 be the vector space of all these functions. A two-pixel image has two constant pieces over the intervals [0, 1/2) and [1/2, 1). We'll call the space containing all these functions V^1. If we continue in this manner, the space V^j will include all piecewise-constant functions defined on the interval [0, 1) with constant pieces over each of 2^j equal subintervals.

We can now think of every one-dimensional image with 2^j pixels as an element, or vector, in V^j. Note that because these vectors are all functions defined on the unit interval, every vector in V^j is also contained in V^{j+1}. For example, we can always describe a piecewise-constant function with two intervals as a piecewise-constant function with four intervals, with each interval in the first function corresponding to a pair of intervals in the second. Thus, the spaces V^j are nested; that is,

    V^0 ⊂ V^1 ⊂ V^2 ⊂ ⋯

The wavelets corresponding to the box basis are known as the Haar wavelets, given by

    ψ_i^j(x) := ψ(2^j x − i),   i = 0, …, 2^j − 1,

where

    ψ(x) := 1 for 0 ≤ x < 1/2,  −1 for 1/2 ≤ x < 1,  and 0 otherwise.

Figure 3 shows the two Haar wavelets spanning W^1.

Before going on, let's run through our example from Section 2.1 again, but now applying these more sophisticated ideas. We begin by expressing our original image I(x) as a linear combination of the box basis functions in V^2:

    I(x) = c_0^2 φ_0^2(x) + c_1^2 φ_1^2(x) + c_2^2 φ_2^2(x) + c_3^2 φ_3^2(x).

¹ Some authors refer to functions with these properties as pre-wavelets, reserving the term "wavelet" for functions ψ_i^j that are also orthogonal to each other.
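As a quick numerical check (not part of the paper), the orthogonality of the level-j Haar wavelets against the level-j scaling functions can be verified by evaluating the standard inner product on a uniform grid; since every function involved is piecewise constant on dyadic intervals, midpoint sampling is exact. All names below (`phi`, `psi`, `scaled`, `inner`) are our own:

```python
def phi(x):
    """Unnormalized box scaling function on [0, 1)."""
    return 1.0 if 0 <= x < 1 else 0.0

def psi(x):
    """Unnormalized Haar wavelet on [0, 1)."""
    if 0 <= x < 0.5:
        return 1.0
    if 0.5 <= x < 1:
        return -1.0
    return 0.0

def scaled(f, j, i):
    """Return the scaled translate x -> f(2^j x - i)."""
    return lambda x: f(2**j * x - i)

def inner(f, g, n=1024):
    """<f | g> on [0, 1) by midpoint sampling; exact for functions
    piecewise constant on intervals aligned to the 1/n grid."""
    return sum(f((k + 0.5) / n) * g((k + 0.5) / n) for k in range(n)) / n

# every level-1 wavelet is orthogonal to every level-1 scaling function
for i in range(2):
    for k in range(2):
        assert abs(inner(scaled(psi, 1, i), scaled(phi, 1, k))) < 1e-12
print("orthogonal")
```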
Figure 2 The box basis for V^2. (The panels show φ_0^2 through φ_3^2, each equal to 1 on one quarter of [0, 1) and 0 elsewhere.)
Figure 3 The Haar wavelets for W^1. (The panels show ψ_0^1 and ψ_1^1.)
A more graphical representation is

    I(x) = 9·φ_0^2(x) + 7·φ_1^2(x) + 3·φ_2^2(x) + 5·φ_3^2(x).

Note that the coefficients c_0^2, …, c_3^2 are just the four original pixel values [9 7 3 5].

We can rewrite the expression for I(x) in terms of basis functions in V^1 and W^1, using pairwise averaging and differencing:

    I(x) = c_0^1 φ_0^1(x) + c_1^1 φ_1^1(x) + d_0^1 ψ_0^1(x) + d_1^1 ψ_1^1(x)
         = 8 φ_0^1(x) + 4 φ_1^1(x) + 1 ψ_0^1(x) + (−1) ψ_1^1(x).

These four coefficients should look familiar as well.

Finally, we'll rewrite I(x) as a sum of basis functions in V^0, W^0, and W^1:

    I(x) = c_0^0 φ_0^0(x) + d_0^0 ψ_0^0(x) + d_0^1 ψ_0^1(x) + d_1^1 ψ_1^1(x)
         = 6 φ_0^0(x) + 2 ψ_0^0(x) + 1 ψ_0^1(x) + (−1) ψ_1^1(x).

Once again, these four coefficients are the Haar wavelet transform of the original image. The four functions shown above constitute the Haar basis for V^2. Instead of using the usual four box functions, we can use φ_0^0, ψ_0^0, ψ_0^1, and ψ_1^1 to represent the overall average, the broad detail, and the two types of finer detail possible in a function in V^2. The Haar basis for V^j with j > 2 includes these functions as well as narrower translates of the wavelet ψ(x).

Orthogonality

The Haar basis possesses an important property known as orthogonality, which is not always shared by other wavelet bases. An orthogonal basis is one in which all of the basis functions, in this case φ_0^0, ψ_0^0, ψ_0^1, ψ_1^1, …, are orthogonal to one another. Note that orthogonality is stronger than the minimum requirement for wavelets that ψ_i^j be orthogonal to all scaling functions at the same resolution level j.

Normalization

Another property that is sometimes desirable is normalization. A basis function u(x) is normalized if ⟨u | u⟩ = 1. We can normalize the Haar basis by replacing our earlier definitions with

    φ_i^j(x) := 2^{j/2} φ(2^j x − i),
    ψ_i^j(x) := 2^{j/2} ψ(2^j x − i),

where the constant factor of 2^{j/2} is chosen to satisfy ⟨u | u⟩ = 1 for the standard inner product. With these modified definitions, the new normalized coefficients are obtained by multiplying each old coefficient with superscript j by 2^{−j/2}. Thus, in the example from the previous section, the unnormalized coefficients [6 2 1 −1] become the normalized coefficients

    [6  2  1/√2  −1/√2].

As an alternative to first computing the unnormalized coefficients and then normalizing them, we can include normalization in the decomposition algorithm. The following two pseudocode procedures accomplish this normalized decomposition:

procedure DecompositionStep (C: array [1..h] of reals)
    for i ← 1 to h/2 do
        C′[i] ← (C[2i−1] + C[2i]) / √2
        C′[h/2 + i] ← (C[2i−1] − C[2i]) / √2
    end for
    C ← C′
end procedure

procedure Decomposition (C: array [1..h] of reals)
    C ← C / √h    (normalize input coefficients)
    while h > 1 do
        DecompositionStep (C[1..h])
        h ← h/2
    end while
end procedure

Now we can work with an orthonormal basis, meaning one that is both orthogonal and normalized. Using an orthonormal basis turns out to be handy when compressing a function or an image, which we describe next.
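The two normalized-decomposition procedures can be transcribed almost line for line into Python. This sketch assumes the input length is a power of two:

```python
from math import sqrt

def decomposition_step(c):
    """One level of normalized pairwise averaging and differencing."""
    h = len(c)
    out = [0.0] * h
    for i in range(h // 2):
        out[i] = (c[2 * i] + c[2 * i + 1]) / sqrt(2)           # averages
        out[h // 2 + i] = (c[2 * i] - c[2 * i + 1]) / sqrt(2)  # details
    return out

def decomposition(c):
    """Full normalized Haar wavelet transform (len(c) a power of two)."""
    h = len(c)
    c = [x / sqrt(h) for x in c]  # normalize input coefficients
    while h > 1:
        c[:h] = decomposition_step(c[:h])
        h //= 2
    return c

print(decomposition([9, 7, 3, 5]))  # close to [6, 2, 1/sqrt(2), -1/sqrt(2)]
```

Up to floating-point round-off, the result matches the normalized coefficients [6 2 1/√2 −1/√2] derived above.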
2.3 Application I: Compression

The goal of compression is to express an initial set of data using some smaller set of data, either with or without loss of information. For instance, suppose we are given a function f(x) expressed as a weighted sum of basis functions u_1(x), …, u_m(x):

    f(x) = Σ_{i=1}^{m} c_i u_i(x).

The data set in this case consists of the coefficients c_1, …, c_m. We would like to find a function approximating f(x) but requiring fewer coefficients, perhaps by using a different basis. That is, given a user-specified error tolerance ε (for lossless compression, ε = 0), we are looking for

    f̃(x) = Σ_{i=1}^{m̃} c̃_i ũ_i(x)

such that m̃ < m and ‖f(x) − f̃(x)‖ ≤ ε for some norm. In general, you could attempt to construct a set of basis functions ũ_1, …, ũ_{m̃} that would provide a good approximation with few coefficients. We will focus instead on the simpler problem of finding a good approximation in a fixed basis.

One form of the compression problem is to order the coefficients c_1, …, c_m so that for every m̃ < m, the first m̃ elements of the sequence give the best approximation f̃(x) to f(x) as measured in the L² norm. As we show here, the solution to this problem is straightforward if the basis is orthonormal, as is the case with the normalized Haar basis.

Let σ be a permutation of 1, …, m, and let f̃(x) be a function that uses the coefficients corresponding to the first m̃ numbers of the permutation σ:

    f̃(x) = Σ_{i=1}^{m̃} c_{σ(i)} u_{σ(i)}(x).

The square of the L² error in this approximation is

    ‖f(x) − f̃(x)‖₂² = ⟨f(x) − f̃(x) | f(x) − f̃(x)⟩
                    = ⟨ Σ_{i=m̃+1}^{m} c_{σ(i)} u_{σ(i)} | Σ_{j=m̃+1}^{m} c_{σ(j)} u_{σ(j)} ⟩
                    = Σ_{i=m̃+1}^{m} Σ_{j=m̃+1}^{m} c_{σ(i)} c_{σ(j)} ⟨u_{σ(i)} | u_{σ(j)}⟩
                    = Σ_{i=m̃+1}^{m} (c_{σ(i)})².

The last step follows from the assumption that the basis is orthonormal, so ⟨u_i | u_j⟩ = δ_{ij}. We conclude that to minimize this error for any given m̃, the best choice for σ is the permutation that sorts the coefficients in order of decreasing magnitude; that is, σ satisfies |c_{σ(1)}| ≥ ⋯ ≥ |c_{σ(m)}|.

Figure 1 demonstrated how a one-dimensional function could be transformed into coefficients representing the function's overall average and various resolutions of detail. Now we repeat the process, this time using normalized Haar basis functions. We can apply L² compression to the resulting coefficients simply by removing or ignoring the coefficients with smallest magnitude. By varying the amount of compression, we obtain a sequence of approximations to the original function, as shown in Figure 4.

Figure 4 Coarse approximations to a function obtained using L² compression: detail coefficients are removed in order of increasing magnitude. (The panels show reconstructions from 16, 14, 12, 10, 8, 6, 4, and 2 of the 16 coefficients.)

3 Wavelets in two dimensions

In preparation for image compression, we need to generalize Haar wavelets to two dimensions. First, we will consider how to perform a wavelet decomposition of the pixel values in a two-dimensional image. We then describe the scaling functions and wavelets that form a two-dimensional wavelet basis.

3.1 Two-dimensional Haar wavelet transforms

There are two ways we can use wavelets to transform the pixel values within an image. Each is a generalization to two dimensions of the one-dimensional wavelet transform described in Section 2.1.

To obtain the standard decomposition [2] of an image, we first apply the one-dimensional wavelet transform to each row of pixel values. This operation gives us an average value along with detail coefficients for each row. Next, we treat these transformed rows as if they were themselves an image and apply the one-dimensional transform to each column. The resulting values are all detail coefficients except for a single overall average coefficient. The algorithm below computes the standard decomposition. Figure 5 illustrates each step of its operation.

procedure StandardDecomposition (C: array [1..h, 1..w] of reals)
    for row ← 1 to h do
        Decomposition (C[row, 1..w])
    end for
    for col ← 1 to w do
        Decomposition (C[1..h, col])
    end for
end procedure
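A Python transcription of the standard decomposition is equally direct (names are ours; the image dimensions are assumed to be powers of two):

```python
from math import sqrt

def decomposition(c):
    """Normalized 1D Haar wavelet transform (len(c) a power of two)."""
    h = len(c)
    c = [x / sqrt(h) for x in c]
    while h > 1:
        out = list(c)
        for i in range(h // 2):
            out[i] = (c[2 * i] + c[2 * i + 1]) / sqrt(2)
            out[h // 2 + i] = (c[2 * i] - c[2 * i + 1]) / sqrt(2)
        c = out
        h //= 2
    return c

def standard_decomposition(image):
    """Standard 2D decomposition: full 1D transform of every row,
    then of every column of the result."""
    rows = [decomposition(list(row)) for row in image]
    h, w = len(rows), len(rows[0])
    cols = [decomposition([rows[r][col] for r in range(h)]) for col in range(w)]
    # transpose back so the result is indexed result[row][col]
    return [[cols[col][r] for col in range(w)] for r in range(h)]

print(standard_decomposition([[9.0, 7.0, 3.0, 5.0]])[0])
# close to [6, 2, 1/sqrt(2), -1/sqrt(2)]
```

For a one-row image the column transforms are trivial, so the result reduces to the normalized one-dimensional transform of Section 2.2.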
Figure 5 Standard decomposition of an image.

Figure 6 Nonstandard decomposition of an image.

The second type of two-dimensional wavelet transform, called the nonstandard decomposition, alternates between operations on rows and columns. First, we perform one step of horizontal pairwise averaging and differencing on the pixel values in each row of the image. Next, we apply vertical pairwise averaging and differencing to each column of the result. To complete the transformation, we repeat this process recursively only on the quadrant containing averages in both directions. Figure 6 shows all the steps involved in the nonstandard decomposition procedure below.
procedure NonstandardDecomposition (C: array [1..h, 1..h] of reals)
    C ← C/h    (normalize input coefficients)
    while h > 1 do
        for row ← 1 to h do
            DecompositionStep (C[row, 1..h])
        end for
        for col ← 1 to h do
            DecompositionStep (C[1..h, col])
        end for
        h ← h/2
    end while
end procedure

3.2 Two-dimensional Haar basis functions

The two methods of decomposing a two-dimensional image yield coefficients that correspond to two different sets of basis functions. The standard decomposition of an image gives coefficients for a basis formed by the standard construction [2] of a two-dimensional basis. Similarly, the nonstandard decomposition gives coefficients for the nonstandard construction of basis functions.

The standard construction of a two-dimensional wavelet basis consists of all possible tensor products of one-dimensional basis functions. For example, when we start with the one-dimensional Haar basis for V^2, we get the two-dimensional basis for V^2 shown in Figure 7. Note that if we apply the standard construction to an orthonormal basis in one dimension, we get an orthonormal basis in two dimensions.

The nonstandard construction of a two-dimensional basis proceeds by first defining a two-dimensional scaling function,

    φφ(x, y) := φ(x) φ(y),

and three wavelet functions,

    φψ(x, y) := φ(x) ψ(y)
    ψφ(x, y) := ψ(x) φ(y)
    ψψ(x, y) := ψ(x) ψ(y).

We now denote levels of scaling with a superscript j (as we did in the one-dimensional case) and horizontal and vertical translations with a pair of subscripts k and ℓ. The nonstandard basis consists of a single coarse scaling function φφ^0_{0,0}(x, y) := φφ(x, y), along with scales and translates of the three wavelet functions φψ, ψφ, and ψψ:

    φψ^j_{kℓ}(x, y) := 2^j φψ(2^j x − k, 2^j y − ℓ)
    ψφ^j_{kℓ}(x, y) := 2^j ψφ(2^j x − k, 2^j y − ℓ)
    ψψ^j_{kℓ}(x, y) := 2^j ψψ(2^j x − k, 2^j y − ℓ).

The constant 2^j normalizes the wavelets to give an orthonormal basis. The nonstandard construction results in the basis for V^2 shown in Figure 8.

We have presented both the standard and nonstandard approaches to wavelet transforms and basis functions because both have advantages. The standard decomposition of an image is appealing because it simply requires performing one-dimensional transforms on all rows and then on all columns. On the other hand, it is slightly more efficient to compute the nonstandard decomposition. For an m × m image, the standard decomposition requires 4(m² − m) assignment operations, while the nonstandard decomposition requires only (8/3)(m² − 1) assignment operations.

Another consideration is the support of each basis function, meaning the portion of each function's domain where that function is nonzero. All nonstandard Haar basis functions have square supports, while some standard basis functions have nonsquare supports. Depending upon the application, one of these choices may be preferable to the other.
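The nonstandard decomposition can likewise be sketched in Python (our own names; a square image whose side length is a power of two is assumed):

```python
from math import sqrt

def decomposition_step(c):
    """One level of normalized pairwise averaging and differencing."""
    h = len(c)
    out = [0.0] * h
    for i in range(h // 2):
        out[i] = (c[2 * i] + c[2 * i + 1]) / sqrt(2)
        out[h // 2 + i] = (c[2 * i] - c[2 * i + 1]) / sqrt(2)
    return out

def nonstandard_decomposition(c):
    """Nonstandard 2D Haar transform: alternate one row step and one
    column step, recursing on the upper-left quadrant of averages."""
    h = len(c)
    c = [[x / h for x in row] for row in c]  # normalize input
    while h > 1:
        for r in range(h):                   # one step on each row
            c[r][:h] = decomposition_step(c[r][:h])
        for col in range(h):                 # one step on each column
            column = decomposition_step([c[r][col] for r in range(h)])
            for r in range(h):
                c[r][col] = column[r]
        h //= 2
    return c

result = nonstandard_decomposition([[9.0, 7.0], [3.0, 5.0]])
print(result[0][0])  # the overall average coefficient, close to 6.0
```

For a 2-by-2 image there is only one level of recursion, so the result agrees with the standard decomposition; the two transforms differ for larger images.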
Figure 7 Standard construction of a two-dimensional Haar wavelet basis for V^2. In the unnormalized case, functions are +1 where plus signs appear, −1 where minus signs appear, and 0 in gray regions.

Figure 8 Nonstandard construction of a two-dimensional Haar wavelet basis for V^2.
3.3 Application II: Image compression

We defined compression in Section 2.3 as the representation of a function using fewer basis function coefficients than were originally given. The method we discussed for one-dimensional functions applies equally well to images, which we treat as the coefficients corresponding to a two-dimensional piecewise-constant basis. The approach presented here is only introductory; for a more complete treatment of wavelet image compression, see the article by DeVore et al. [6].

We can summarize wavelet image compression using the L² norm in three steps:

1. Compute coefficients c_1, …, c_m representing an image in a normalized two-dimensional Haar basis.

2. Sort the coefficients in order of decreasing magnitude to produce the sequence c_{σ(1)}, …, c_{σ(m)}.

3. Starting with m̃ = m, find the smallest m̃ for which Σ_{i=m̃+1}^{m} (c_{σ(i)})² ≤ ε², where ε is the allowable L² error.

The first step is accomplished by applying either of the two-dimensional Haar wavelet transforms described in Section 3.1, being sure to use normalized basis functions. Any standard sorting technique will work for the second step. However, for large images sorting becomes exceedingly slow.

The pseudocode below outlines a more efficient method that uses a binary search strategy to find a threshold below which coefficient sizes are deemed negligible. The procedure takes as input a one-dimensional array of coefficients C (with each coefficient corresponding to a two-dimensional basis function) and an error tolerance ε. For each guess at a threshold τ, the algorithm computes the square of the L² error that would result from discarding coefficients smaller in magnitude than τ. This squared error s is compared to ε² at each iteration to decide whether the binary search should continue in the upper or lower half of the current interval. The algorithm halts when the current interval is so narrow that the number of coefficients to be discarded no longer changes.

procedure Compress (C: array [1..m] of reals; ε: real)
    τmin ← min_i |C[i]|
    τmax ← max_i |C[i]|
    do
        τ ← (τmin + τmax) / 2
        s ← 0
        for i ← 1 to m do
            if |C[i]| < τ then s ← s + (C[i])²
        end for
        if s < ε² then τmin ← τ
        else τmax ← τ
    until τmin ≈ τmax
    for i ← 1 to m do
        if |C[i]| < τ then C[i] ← 0
    end for
end procedure

This binary search algorithm was used to produce the images in Figure 9. These images demonstrate the high compression ratios wavelets offer, as well as some of the artifacts they introduce.

DeVore et al. [6] suggest that the L¹ norm is best suited to the task of image compression. Here is a pseudocode fragment for a "greedy" L¹ compression scheme:

    for each pixel (x, y) do
        Δ[x, y] ← 0
    end for
    for i ← 1 to m do
        Δ′ ← Δ + error from discarding C[i]
        if Σ_{x,y} Δ′[x, y] < ε then
            discard coefficient C[i]
            Δ ← Δ′
        end if
    end for

Note that this algorithm's results depend on the order in which coefficients are visited. Different images (and degrees of compression) may be obtained by varying this order, for example, by starting with the finest-scale coefficients rather than the smallest coefficients. You could also run a more sophisticated constrained optimization procedure to select the minimum number of coefficients subject to the error bound.
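A Python sketch of the binary-search thresholding just described (our own function name; a fixed iteration count stands in for the paper's convergence test on the interval width):

```python
def compress(coeffs, epsilon):
    """Binary-search a magnitude threshold tau such that zeroing all
    coefficients with |c| < tau keeps the squared L2 error below
    epsilon**2 (valid for an orthonormal basis)."""
    lo = min(abs(c) for c in coeffs)
    hi = max(abs(c) for c in coeffs)
    for _ in range(60):  # interval shrinks below float precision
        tau = (lo + hi) / 2
        s = sum(c * c for c in coeffs if abs(c) < tau)
        if s < epsilon ** 2:
            lo = tau  # can afford to discard more
        else:
            hi = tau  # discarding too much; lower the threshold
    # lo is the largest threshold known to respect the error budget
    return [0.0 if abs(c) < lo else c for c in coeffs]

print(compress([6.0, 2.0, 0.5, -0.25], 0.6))  # -> [6.0, 2.0, 0.0, 0.0]
```

In the example, discarding 0.5 and −0.25 costs a squared error of 0.3125, within the budget ε² = 0.36, while also discarding 2.0 would exceed it.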
Figure 9 L² wavelet image compression: The original image (a) can be represented using (b) 19% of its wavelet coefficients, with 5% relative L² error; (c) 3% of its coefficients, with 10% relative L² error; and (d) 1% of its coefficients, with 15% relative L² error.
4 Conclusion

We have described Haar wavelets in one and two dimensions as well as how to use them for compressing functions and images. Part 2 of this primer will continue this exposition by presenting the mathematical framework of multiresolution analysis. We will also develop a class of wavelets based on endpoint-interpolating B-splines, and describe how to use them for multiresolution curve and surface editing.

Acknowledgments

We wish to thank Ronen Barzel, Steven Gortler, Michael Shantzis, and the anonymous reviewers for many helpful comments. This work was supported by NSF Presidential and National Young Investigator awards (CCR-8957323 and CCR-9357790), by an NSF Graduate Research Fellowship, by the University of Washington Royalty Research Fund (65-9731), and by industrial gifts from Adobe, Aldus, Microsoft, and Xerox.

References

[1] Deborah Berman, Jason Bartell, and David Salesin. Multiresolution painting and compositing. In Proceedings of SIGGRAPH 94, pages 85–90. ACM, New York, 1994.

[2] G. Beylkin, R. Coifman, and V. Rokhlin. Fast wavelet transforms and numerical algorithms I. Communications on Pure and Applied Mathematics, 44(2):141–183, March 1991.

[3] Per H. Christensen, Dani Lischinski, Eric J. Stollnitz, and David H. Salesin. Clustering for glossy global illumination. ACM Transactions on Graphics, 1996 (to appear).

[4] Per H. Christensen, Eric J. Stollnitz, David H. Salesin, and Tony D. DeRose. Wavelet radiance. In G. Sakas, P. Shirley, and S. Müller, editors, Photorealistic Rendering Techniques, pages 295–309. Springer-Verlag, Berlin, 1995.

[5] Ingrid Daubechies. Orthonormal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 41(7):909–996, October 1988.

[6] R. DeVore, B. Jawerth, and B. Lucier. Image compression through wavelet transform coding. IEEE Transactions on Information Theory, 38(2):719–746, March 1992.

[7] Adam Finkelstein and David H. Salesin. Multiresolution curves. In Proceedings of SIGGRAPH 94, pages 261–268. ACM, New York, 1994.

[8] Steven J. Gortler and Michael F. Cohen. Hierarchical and variational geometric modeling with wavelets. In Proceedings of the 1995 Symposium on Interactive 3D Graphics, pages 35–42. ACM, New York, 1995.

[9] Steven J. Gortler, Peter Schröder, Michael F. Cohen, and Pat Hanrahan. Wavelet radiosity. In Proceedings of SIGGRAPH 93, pages 221–230. ACM, New York, 1993.

[10] Charles E. Jacobs, Adam Finkelstein, and David H. Salesin. Fast multiresolution image querying. In Proceedings of SIGGRAPH 95, pages 277–286. ACM, New York, 1995.

[11] Zicheng Liu, Steven J. Gortler, and Michael F. Cohen. Hierarchical spacetime control. In Proceedings of SIGGRAPH 94, pages 35–42. ACM, New York, 1994.

[12] Michael Lounsbery, Tony DeRose, and Joe Warren. Multiresolution surfaces of arbitrary topological type. ACM Transactions on Graphics, 1996 (to appear).

[13] Stephane Mallat. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7):674–693, July 1989.

[14] David Meyers. Multiresolution tiling. Computer Graphics Forum, 13(5):325–340, December 1994.

[15] Peter Schröder, Steven J. Gortler, Michael F. Cohen, and Pat Hanrahan. Wavelet projections for radiosity. Computer Graphics Forum, 13(2):141–151, June 1994.

[16] Eric J. Stollnitz, Tony D. DeRose, and David H. Salesin. Wavelets for Computer Graphics: Theory and Applications. Morgan Kaufmann, San Francisco, 1996 (to appear).
A Linear algebra review
A vector space together with an inner product is called, not surprisingly, an inner product space.
The mathematics of wavelets rely heavily on fundamental ideas
from linear algebra. This appendix reviews a few important ideas.
A.1
Example: It is straightforward to show that the dot product
on IR3 defined by
h(a1 , a2 , a3 ) j (b1 , b2 , b3 )i := a1 b1 + a2 b2 + a3 b3
Vector spaces
The starting point for linear algebra is the notion of a vector space.
A vector space (over the reals) can be loosely defined as a collection V of elements where
(1)
satisfies the requirements of an inner product.
2 IR and for all u, v 2 V, au + bv 2 V.
2. There exists a unique element 0 2 V such that
for all u 2 V, 0u = 0, and
for all u 2 V, 0 + u = u.
Example:
The following “standard” inner product on
C[0, 1] plays a central role in most formulations of multiresolution analysis:
3. Other axioms (omitted here) hold true, most of which are necessary to guarantee that multiplication and addition behave as expected.
The standard inner product can also be generalized to include
a positive weight function w(x):
1. For all a, b
1
:=
f (x) g(x) dx.
0
hf j gi
The elements of a vector space V are called vectors, and the element 0 is called the zero vector. The vectors may be geometric
vectors, or they may be functions, as is the case when discussing
wavelets and multiresolution analysis.
A.2
Z
hf j gi
Z
1
:=
w(x) f (x) g(x) dx.
0
One of the most important uses of the inner product is to formalize
the idea of orthogonality. Two vectors u, v in an inner product space
are said to be orthogonal if u v = 0. It is not difficult to show
that a collection u1 , u2 , : : : of mutually orthogonal vectors must be
linearly independent, suggesting that orthogonality is a strong form
of linear independence. An orthogonal basis is one consisting of
mutually orthogonal vectors.
h ji
Bases and dimension
A collection of vectors u_1, u_2, … in a vector space V are said to be linearly independent if

    c_1 u_1 + c_2 u_2 + ⋯ = 0 if and only if c_1 = c_2 = ⋯ = 0.

A collection u_1, u_2, … ∈ V of linearly independent vectors is a basis for V if every v ∈ V can be written as

    v = Σ_i c_i u_i

for some real numbers c_1, c_2, … . The vectors in a basis for V are said to span V. Intuitively speaking, linear independence means that the vectors are not redundant, and a basis consists of a minimal complete set of vectors.

If a basis for V has a finite number of elements u_1, …, u_m, then V is finite-dimensional and its dimension is m. Otherwise, V is said to be infinite-dimensional.

    Example: IR3 is a three-dimensional space, and e_1 = (1, 0, 0), e_2 = (0, 1, 0), e_3 = (0, 0, 1) is a basis for it.

    Example: The set of all functions continuous on [0, 1] is an infinite-dimensional vector space. We'll call this space C[0, 1].

A.3 Inner products and orthogonality

When dealing with geometric vectors from the vector space IR3, the "dot product" operation has a number of uses. The generalization of the dot product to arbitrary vector spaces is called an inner product. Formally, an inner product ⟨· | ·⟩ on a vector space V is any map from V × V to IR that is

1. symmetric: ⟨u | v⟩ = ⟨v | u⟩,
2. bilinear: ⟨au + bv | w⟩ = a⟨u | w⟩ + b⟨v | w⟩, and
3. positive definite: ⟨u | u⟩ > 0 for all u ≠ 0.

A.4 Norms and normalization

A norm is a function that measures the length of vectors. In a finite-dimensional vector space, we typically use the norm ‖u‖ := ⟨u | u⟩^{1/2}. If we are working with a function space such as C[0, 1], we ordinarily use one of the L^p norms, defined as

    ‖u‖_p := ( ∫_0^1 |u(x)|^p dx )^{1/p}.

In the limit as p tends to infinity, we get what is known as the max-norm:

    ‖u‖_∞ := max_{x ∈ [0,1]} |u(x)|.

Even more frequently used is the L^2 norm, which can also be written as ‖u‖_2 = ⟨u | u⟩^{1/2} if we are using the standard inner product.

A vector u with ‖u‖ = 1 is said to be normalized. If we have an orthogonal basis composed of vectors that are normalized in the L^2 norm, the basis is called orthonormal. Stated concisely, a basis u_1, u_2, … is orthonormal if

    ⟨u_i | u_j⟩ = δ_ij,

where δ_ij is called the Kronecker delta and is defined to be 1 if i = j, and 0 otherwise.

    Example: The vectors e_1 = (1, 0, 0), e_2 = (0, 1, 0), e_3 = (0, 0, 1) form an orthonormal basis for the inner product space IR3 endowed with the dot product of Equation (1).
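The orthonormality condition above is easy to check numerically for a concrete basis. The short sketch below (the function names are ours, not from the primer) samples the normalized Haar box functions on [0, 1] and verifies that their pairwise standard inner products form the identity matrix, i.e. ⟨φ_i | φ_j⟩ = δ_ij.

```python
# Numerically verify <phi_i | phi_j> = delta_ij for the normalized
# Haar scaling functions on [0, 1] at level j (2^j boxes).
import numpy as np

def haar_scaling_samples(j, n=1024):
    """Sample the 2^j normalized Haar boxes on a uniform grid of n midpoints."""
    x = (np.arange(n) + 0.5) / n
    funcs = []
    for i in range(2 ** j):
        phi = np.where((x >= i / 2**j) & (x < (i + 1) / 2**j), 1.0, 0.0)
        funcs.append(phi * np.sqrt(2.0 ** j))  # normalize: ||phi||_2 = 1
    return np.array(funcs)

def inner(u, v):
    """Standard inner product on [0, 1], approximated by a Riemann sum."""
    return np.dot(u, v) / len(u)

phis = haar_scaling_samples(2)
gram = np.array([[inner(u, v) for v in phis] for u in phis])
print(np.round(gram, 6))  # the 4 x 4 identity matrix
```

Because the boxes are piecewise constant and the grid is aligned with their breakpoints, the Riemann sum here is exact, so the Gram matrix comes out as the identity to machine precision.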
Wavelets for Computer Graphics: A Primer
Part 2†

Eric J. Stollnitz    Tony D. DeRose    David H. Salesin
University of Washington

† Eric J. Stollnitz, Tony D. DeRose, and David H. Salesin. Wavelets for computer graphics: A primer, part 2. IEEE Computer Graphics and Applications, 15(4):75–85, July 1995.

1 Introduction

Wavelets are a mathematical tool for hierarchically decomposing functions. They allow a function to be described in terms of a coarse overall shape, plus details that range from broad to narrow. Regardless of whether the function of interest is an image, a curve, or a surface, wavelets provide an elegant technique for representing the levels of detail present.

In Part 1 of this primer we discussed the simple case of Haar wavelets in one and two dimensions, and showed how they can be used for image compression. In Part 2, we present the mathematical theory of multiresolution analysis, then develop bounded-interval spline wavelets and describe their use in multiresolution curve and surface editing.

2 Multiresolution analysis

The Haar wavelets we discussed in Part 1 are just one of many bases that can be used to treat functions in a hierarchical fashion. In this section, we develop a mathematical framework known as multiresolution analysis for studying wavelets [2, 11]. Our examples will continue to focus on the Haar basis, but the more general mathematical notation used here will come in handy for discussing other wavelet bases in later sections.

Multiresolution analysis relies on many results from linear algebra. Some readers may wish to consult the appendix in Part 1 for a brief review.

As discussed in Part 1, the starting point for multiresolution analysis is a nested set of vector spaces

    V^0 ⊂ V^1 ⊂ V^2 ⊂ ⋯

As j increases, the resolution of functions in V^j increases. The basis functions for the space V^j are known as scaling functions.

The next step in multiresolution analysis is to define wavelet spaces. For each j, we define W^j as the orthogonal complement of V^j in V^{j+1}. This means that W^j includes all the functions in V^{j+1} that are orthogonal to all those in V^j under some chosen inner product. The functions we choose as a basis for W^j are called wavelets.

2.1 A matrix formulation for refinement

The rest of our discussion of multiresolution analysis will focus on wavelets defined on a bounded domain, although we will also refer to wavelets on the unbounded real line wherever appropriate. In the bounded case, each space V^j has a finite basis, allowing us to use matrix notation in much of what follows, as did Lounsbery et al. [10] and Quak and Weyrich [13].

It is often convenient to put the different scaling functions φ_i^j(x) for a given level j together into a single row matrix,

    Φ^j(x) := [φ_0^j(x) ⋯ φ_{m^j−1}^j(x)],

where m^j is the dimension of V^j. We can do the same for the wavelets:

    Ψ^j(x) := [ψ_0^j(x) ⋯ ψ_{n^j−1}^j(x)],

where n^j is the dimension of W^j. Because W^j is the orthogonal complement of V^j in V^{j+1}, the dimensions of these spaces satisfy m^{j+1} = m^j + n^j.

The condition that the subspaces V^j be nested is equivalent to requiring that the scaling functions be refinable. That is, for all j = 1, 2, … there must exist a matrix of constants P^j such that

    Φ^{j−1}(x) = Φ^j(x) P^j.    (1)

In other words, each scaling function at level j−1 must be expressible as a linear combination of "finer" scaling functions at level j. Note that since V^j and V^{j−1} have dimensions m^j and m^{j−1}, respectively, P^j is an m^j × m^{j−1} matrix (taller than it is wide).

Since the wavelet space W^{j−1} is by definition also a subspace of V^j, we can write the wavelets Ψ^{j−1}(x) as linear combinations of the scaling functions Φ^j(x). This means there is an m^j × n^{j−1} matrix of constants Q^j satisfying

    Ψ^{j−1}(x) = Φ^j(x) Q^j.    (2)
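The refinement relation (1) is concrete enough to test numerically. The sketch below (our own minimal code, using the Haar refinement matrix from V^1 to V^2) samples the box scaling functions at two consecutive levels and checks that each coarse box is exactly the sum of two finer boxes.

```python
# Check the refinement relation (1), Phi^{j-1}(x) = Phi^j(x) P^j,
# for the Haar basis: each coarse box is a sum of two finer boxes.
import numpy as np

def haar_boxes(j, n=1024):
    """Rows are the 2^j unnormalized Haar boxes on [0,1], sampled at n points."""
    x = (np.arange(n) + 0.5) / n
    return np.array([((x >= i / 2**j) & (x < (i + 1) / 2**j)).astype(float)
                     for i in range(2 ** j)])

P2 = np.array([[1, 0],
               [1, 0],
               [0, 1],
               [0, 1]], dtype=float)  # refinement matrix from V^1 into V^2

Phi1 = haar_boxes(1)                  # 2 x n samples of level-1 scaling functions
Phi2 = haar_boxes(2)                  # 4 x n samples of level-2 scaling functions

# Phi^j in the text is a row matrix of functions; with samples as columns,
# Phi^{j-1} = Phi^j P^j becomes Phi1.T = Phi2.T @ P2.
assert np.allclose(Phi1.T, Phi2.T @ P2)
print("refinement relation (1) verified for the Haar basis")
```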
    Example: In the Haar basis, at a particular level j there are m^j = 2^j scaling functions and n^j = 2^j wavelets. Thus, there must be refinement matrices describing how the two scaling functions in V^1 and the two wavelets in W^1 can be made from the four scaling functions in V^2:

        P^2 = [ 1  0 ]      Q^2 = [  1  0 ]
              [ 1  0 ]            [ −1  0 ]
              [ 0  1 ]            [  0  1 ]
              [ 0  1 ]            [  0 −1 ]

    Remark: In the case of wavelets constructed on the unbounded real line, the columns of P^j are shifted versions of one another, as are the columns of Q^j. One column therefore characterizes each matrix, so P^j and Q^j are completely determined by sequences (…, p_{−1}, p_0, p_1, …) and (…, q_{−1}, q_0, q_1, …), which also do not depend on j. Equations (1) and (2) therefore often appear in the literature as expressions of the form

        φ(x) = Σ_i p_i φ(2x − i)
        ψ(x) = Σ_i q_i φ(2x − i).

    These equations are referred to as two-scale relations for scaling functions and wavelets, respectively.

Note that Equations (1) and (2) can be expressed as a single equation using block-matrix notation:

    [Φ^{j−1} | Ψ^{j−1}] = Φ^j [P^j | Q^j].    (3)

    Example: Substituting the matrices from the previous example into Equation (3) along with the appropriate basis functions gives

        [φ_0^1 φ_1^1 ψ_0^1 ψ_1^1] = [φ_0^2 φ_1^2 φ_2^2 φ_3^2] [ 1  0   1  0 ]
                                                              [ 1  0  −1  0 ]
                                                              [ 0  1   0  1 ]
                                                              [ 0  1   0 −1 ]

It is important to realize that once we have chosen scaling functions and their refinement matrices P^j, the wavelet matrices Q^j are somewhat constrained (though not completely determined). In fact, since all functions in Φ^{j−1}(x) must be orthogonal to all functions in Ψ^{j−1}(x), we know ⟨φ_k^{j−1} | ψ_ℓ^{j−1}⟩ = 0 for all k and ℓ.

To deal with all these inner products simultaneously, let's define some new notation for a matrix of inner products. We will denote by [⟨Φ^{j−1} | Ψ^{j−1}⟩] the matrix whose (k, ℓ) entry is ⟨φ_k^{j−1} | ψ_ℓ^{j−1}⟩. Armed with this notation, we can rewrite the orthogonality condition on the wavelets as

    [⟨Φ^{j−1} | Ψ^{j−1}⟩] = 0.    (4)

Substituting Equation (2) into Equation (4) yields

    [⟨Φ^{j−1} | Φ^j⟩] Q^j = 0.    (5)

A matrix equation with a right-hand side of zero like this one is known as a homogeneous system of equations. The set of all possible solutions is called the null space of [⟨Φ^{j−1} | Φ^j⟩], and the columns of Q^j must form a basis for this space. There are a multitude of bases for the null space of a matrix, implying that there are many different wavelet bases for a given wavelet space W^j. Ordinarily, we uniquely determine the Q^j matrices by imposing further constraints in addition to the orthogonality requirement given above. For example, the Haar wavelet matrices can be found by requiring the least number of consecutive nonzero entries in each column.

The literature on wavelets includes various terminologies for orthogonality. Some authors refer to a collection of functions that are orthogonal to scaling functions but not to each other as pre-wavelets, reserving the term "wavelets" for functions that are orthogonal to each other as well. Another common approach is to differentiate between an orthogonal wavelet basis, in which all functions are mutually orthogonal, and a semi-orthogonal wavelet basis, in which the wavelets are orthogonal to the scaling functions but not to each other. The Haar basis is an example of an orthogonal wavelet basis, while the spline wavelets we will describe in Section 3 are examples of semi-orthogonal wavelet bases.

Finally, it is sometimes desirable to define wavelets that are not quite orthogonal to scaling functions in order to have wavelets with small supports. This last alternative might be termed a non-orthogonal wavelet basis, and we will mention an example when we describe multiresolution surfaces in Section 4.3.

2.2 The filter bank

The previous section showed how scaling functions and wavelets could be related by matrices. In this section, we show how matrix notation can also be used for the decomposition process outlined in Section 2.1 of Part 1.

Consider a function in some approximation space V^j. Let's assume we have the coefficients of this function in terms of some scaling function basis. We can write these coefficients as a column matrix of values C^j = [c_0^j ⋯ c_{m^j−1}^j]^T. The coefficients c_i^j could, for example, be thought of as pixel colors, or alternatively, as the x- or y-coordinates of a curve's control points in IR2.

Suppose we wish to create a low-resolution version C^{j−1} of C^j with a smaller number of coefficients m^{j−1}. The standard approach for creating the m^{j−1} values of C^{j−1} is to use some form of linear filtering and down-sampling on the m^j entries of C^j. This process can be expressed as a matrix equation

    C^{j−1} = A^j C^j    (6)

where A^j is an m^{j−1} × m^j matrix of constants (wider than it is tall).

Since C^{j−1} contains fewer entries than C^j, this filtering process clearly loses some amount of detail. For many choices of A^j, it is possible to capture the lost detail as another column matrix D^{j−1}, computed by

    D^{j−1} = B^j C^j    (7)

where B^j is an n^{j−1} × m^j matrix of constants related to A^j. The pair of matrices A^j and B^j are called analysis filters. The process of splitting the coefficients C^j into a low-resolution version C^{j−1} and detail D^{j−1} is called analysis or decomposition.

If A^j and B^j are chosen appropriately, then the original coefficients C^j can be recovered from C^{j−1} and D^{j−1} by using the matrices P^j and Q^j from the previous section:

    C^j = P^j C^{j−1} + Q^j D^{j−1}.    (8)

Recovering C^j from C^{j−1} and D^{j−1} is called synthesis or reconstruction. In this context, P^j and Q^j are called synthesis filters.

    Example: In the unnormalized Haar basis, the matrices A^2 and B^2 are given by:

        A^2 = 1/2 [ 1  1  0  0 ]      B^2 = 1/2 [ 1 −1  0  0 ]
                  [ 0  0  1  1 ]                [ 0  0  1 −1 ]

    These matrices represent the averaging and differencing operations described in Section 2.1 of Part 1.

    Remark: Once again, the matrices for wavelets constructed on the unbounded real line have a simple structure: The rows of A^j are shifted versions of each other, as are the rows of B^j. The analysis Equations (6) and (7) often appear in the literature as

        c_k^{j−1} = Σ_ℓ a_{ℓ−2k} c_ℓ^j
        d_k^{j−1} = Σ_ℓ b_{ℓ−2k} c_ℓ^j

    where the sequences (…, a_{−1}, a_0, a_1, …) and (…, b_{−1}, b_0, b_1, …) are the entries in a row of A^j and B^j, respectively. Similarly, Equation (8) for reconstruction often appears as

        c_k^j = Σ_ℓ ( p_{k−2ℓ} c_ℓ^{j−1} + q_{k−2ℓ} d_ℓ^{j−1} ).

Note that the procedure for splitting C^j into a low-resolution part C^{j−1} and a detail part D^{j−1} can be applied recursively to the low-resolution version C^{j−1}. Thus, the original coefficients can be expressed as a hierarchy of lower-resolution versions C^0, …, C^{j−1} and details D^0, …, D^{j−1}, as shown in Figure 1. This recursive process is known as a filter bank.

Figure 1 The filter bank.

Since the original coefficients C^j can be recovered from the sequence C^0, D^0, D^1, …, D^{j−1}, we can think of this sequence as a transform of the original coefficients, known as a wavelet transform. Note that the total size of the transform C^0, D^0, D^1, …, D^{j−1} is the same as that of the original version C^j, so no extra storage is required. (However, the wavelet coefficients may require more bits to retain the accuracy of the original values.)

In general, the analysis filters A^j and B^j are not necessarily transposed multiples of the synthesis filters, as was the case for the Haar basis. Rather, A^j and B^j are formed by the matrices satisfying the relation

    [Φ^{j−1} | Ψ^{j−1}] [ A^j ] = Φ^j,    (9)
                        [ B^j ]

where the left-hand factor stacks A^j above B^j. Note that [P^j | Q^j] and the stacked matrix [A^j / B^j] are both square matrices. Thus, combining Equations (3) and (9) gives

    [ A^j ] = [P^j | Q^j]^{−1}.    (10)
    [ B^j ]

Although we have not yet gotten specific about how to choose the matrices A^j, B^j, P^j, and Q^j, it should be clear from Equation (10) that the two matrices in that equation must at least be invertible.

2.3 Designing a multiresolution analysis

The multiresolution analysis framework presented above is very general. In practice you often have the freedom to design a multiresolution analysis specifically suited to a particular application. The steps involved are:

1. Select the scaling functions Φ^j(x) for each j = 0, 1, … . This choice determines the nested approximation spaces V^j, the synthesis filters P^j, and the smoothness (that is, the number of continuous derivatives) of the analysis.

2. Select an inner product defined on the functions in V^0, V^1, … . This choice determines the L^2 norm and the orthogonal complement spaces W^j. Although the standard inner product is the common choice, in general the inner product should be chosen to capture a measure of error that is meaningful in the context of the application.

3. Select a set of wavelets Ψ^j(x) that span W^j for each j = 0, 1, … . This choice determines the synthesis filters Q^j. Together, the synthesis filters P^j and Q^j determine the analysis filters A^j and B^j by Equation (10).

It is generally desirable to construct the wavelets to form an orthogonal basis for W^j and to have small support (the support of a function f(x) is the set of points x where f(x) ≠ 0). However, orthogonality often comes at the expense of increased supports, so a tradeoff must be made. In the case of the spline wavelets presented in the next section, the wavelets are constructed to have minimal support, but they are not orthogonal to one another (except for the piecewise-constant case). Wavelets that are both locally supported and mutually orthogonal (other than Haar wavelets) were thought to be impossible until Daubechies' ground-breaking work showing that certain families of spaces V^j actually do admit mutually orthogonal wavelets of small support [5].

3 Spline wavelets

Until now, the only specific wavelet basis we have considered is the Haar basis. Haar basis functions have a number of advantages, including

- simplicity,
- orthogonality,
- very small supports,
- nonoverlapping scaling functions (at a given level), and
- nonoverlapping wavelets (at a given level),

which make them useful in many applications. However, despite these advantages, the Haar basis is a poor choice for applications such as curve editing [8] and animation [9] because of its lack of continuity.

There are a variety of ways to construct wavelets with k continuous derivatives. One such class of wavelets can be constructed from piecewise-polynomial splines. These spline wavelets have been developed to a large extent by Chui and colleagues [3, 4]. The Haar basis is in fact the simplest instance of spline wavelets, resulting when the polynomial degree is set to zero.

In the following, we briefly sketch the ideas behind the construction of endpoint-interpolating B-spline wavelets. Finkelstein and Salesin [8] developed a collection of wavelets for the cubic case, and Chui and Quak [4] presented constructions for arbitrary degree. Although the derivations for arbitrary degree are too involved to present here, we give the synthesis filters for the piecewise-constant (Haar), linear, quadratic, and cubic cases in Appendix A. The next three sections parallel the three steps described in Section 2.3 for designing a multiresolution analysis.

3.1 B-spline scaling functions

Our first step is to define the scaling functions for a nested set of function spaces. We'll start with the general definition of B-splines, then specify how to make uniformly spaced, endpoint-interpolating B-splines from these. (More detailed derivations of these and other splines appear in a number of standard texts [1, 7].)

Given positive integers d and k, with k ≥ d, and a collection of non-decreasing values x_0, …, x_{k+d+1} called knots, the nonuniform B-spline basis functions of degree d are defined recursively as follows. For i = 0, …, k, and for r = 1, …, d, let

    N_i^0(x) := { 1  if x_i ≤ x < x_{i+1}
                { 0  otherwise

    N_i^r(x) := ((x − x_i) / (x_{i+r} − x_i)) N_i^{r−1}(x) + ((x_{i+r+1} − x) / (x_{i+r+1} − x_{i+1})) N_{i+1}^{r−1}(x).

(Note: The fractions in these equations are taken to be 0 when their denominators are 0.)

The endpoint-interpolating B-splines of degree d on [0, 1] result when the first and last d + 1 knots are set to 0 and 1, respectively. In this case, the functions N_0^d(x), …, N_k^d(x) form a basis for the space of piecewise-polynomials of degree d with d − 1 continuous derivatives and breakpoints at the interior knots x_{d+1} < x_{d+2} < ⋯ < x_k.

To make uniformly spaced B-splines, we choose k = 2^j + d − 1 and x_{d+1}, …, x_k to produce 2^j equally spaced interior intervals. This construction gives 2^j + d B-spline basis functions for a particular degree d and level j. We will use these functions as the endpoint-interpolating B-spline scaling functions. Figure 2 shows examples of these functions at level j = 1 (two interior intervals) for various degrees d. Note that the basis functions defined here are not normalized in the L^2 norm.

Figure 2 B-spline scaling functions for V^1(d) with degree d = 0, 1, 2, and 3.

If V^j(d) denotes the space spanned by the B-spline scaling functions of degree d with 2^j uniform intervals, it is not difficult to show that the spaces V^0(d), V^1(d), … are nested as required by multiresolution analysis.

The rich theory of B-splines can be used to develop expressions for the entries of the refinement matrix P^j (see Chui and Quak [4] or Quak and Weyrich [13] for details). The columns of P^j are sparse, reflecting the fact that the B-spline basis functions are locally supported. The first and last d columns of P^j are relatively complicated, but the remaining (interior) columns are shifted versions of column d + 1. Moreover, the entries of these interior columns are, up to a common factor of 1/2^d, given by binomial coefficients.

    Example: In the case of cubic splines (d = 3), the matrix P^j for j ≥ 3 has the form

              [  8                  ]
              [  4    4             ]
              [       6    2        ]
              [ 3/2  11/2  1        ]
    P^j = 1/8 [            4   4    ]
              [            1   6    ]
              [                4  ⋱ ]
              [                1    ]
              [                     ]
              [     (mirrored)      ]

    where blank entries are taken to be zero, the dots indicate that the previous column (1, 4, 6, 4, 1) is repeated, shifted down by two rows each time, and the last three columns mirror the first three.

3.2 Inner product

The second step of designing a multiresolution analysis is the choice of an inner product. We'll simply use the standard inner product here:

    ⟨f | g⟩ := ∫_0^1 f(x) g(x) dx.

3.3 B-spline wavelets

To complete our development of a multiresolution analysis based on B-splines, we need to find basis functions for the spaces W^j that are orthogonal complements to the spaces V^j. As shown in Section 2.1, the wavelets are determined by matrices Q^j satisfying Equation (5), which we repeat here for convenience:

    [⟨Φ^{j−1} | Φ^j⟩] Q^j = 0.    (11)

Since this is a homogeneous system of linear equations, there is not a unique solution. We must therefore impose additional conditions. To get wavelets with small supports, for example, we require each column of Q^j to have a minimal number of consecutive nonzeros. This constraint imposes a banded structure on Q^j similar to that of P^j. For each column q of Q^j, Equation (11) leads to a small homogeneous system that we solve for the nonzero entries in q. The matrices that result and the corresponding B-spline wavelets are shown in Appendix A.

Finkelstein and Salesin [8] took this approach to construct cubic B-spline wavelets. Chui and Quak [4] derived slightly different spline wavelets using derivative and interpolation properties of B-splines. Note that both approaches result in semi-orthogonal wavelet bases: The wavelets are orthogonal to scaling functions at the same level, but not to each other, except in the piecewise-constant case.

3.4 B-spline filter bank

At this point, we have completed the steps in designing a multiresolution analysis. However, to use spline wavelets, we must implement a filter bank procedure incorporating the analysis filters A^j and B^j. These matrices allow us to determine C^{j−1} and D^{j−1} from C^j using matrix multiplication as in Equations (6) and (7). As discussed in Section 2, the analysis filters are uniquely determined by the inverse relation in Equation (10):

    [ A^j ] = [P^j | Q^j]^{−1}.
    [ B^j ]

However, as Quak and Weyrich [13] point out, when implementing the filter bank procedure for spline wavelets, it is generally not a good idea to form the filters A^j and B^j explicitly. Although P^j and Q^j are sparse, having only O(d) entries per column, A^j and B^j are in general dense, so matrix–vector multiplication would require quadratic instead of linear time.

Fortunately, there is a better approach. The idea is to compute C^{j−1} and D^{j−1} from C^j by solving the sparse linear system

    [P^j | Q^j] [ C^{j−1} ] = C^j.
                [ D^{j−1} ]

In order to solve this system for C^{j−1} and D^{j−1}, we first make the matrix [P^j | Q^j] into a banded matrix simply by interspersing the columns of P^j and Q^j. The resulting banded system can then be solved in linear time using LU decomposition [12]. Thus we can compute the entire filter bank operation without ever forming and using A^j or B^j explicitly.
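The filter bank of Section 2.2 is easiest to see end to end in the Haar case, where the analysis filters reduce to pairwise averages and differences and the synthesis filters invert them exactly. The following minimal sketch (our own code; for spline wavelets the banded-solve approach above would replace the explicit filters) computes a full wavelet transform and verifies perfect reconstruction.

```python
# A minimal filter bank for the unnormalized Haar basis (Section 2.2):
# analysis splits C^j into averages C^{j-1} and details D^{j-1};
# synthesis inverts it via C^j = P^j C^{j-1} + Q^j D^{j-1}.
import numpy as np

def analyze(c):
    """One analysis step: c (length 2n) -> (coarse, detail), each length n."""
    coarse = (c[0::2] + c[1::2]) / 2.0   # rows of A^j: averaging
    detail = (c[0::2] - c[1::2]) / 2.0   # rows of B^j: differencing
    return coarse, detail

def synthesize(coarse, detail):
    """One synthesis step, the inverse of analyze."""
    c = np.empty(2 * len(coarse))
    c[0::2] = coarse + detail            # columns of P^j and Q^j
    c[1::2] = coarse - detail
    return c

def wavelet_transform(c):
    """Full filter bank: C^j -> (C^0, [D^0, D^1, ..., D^{j-1}])."""
    details = []
    while len(c) > 1:
        c, d = analyze(c)
        details.append(d)
    return c, details[::-1]

c = np.array([9.0, 7.0, 3.0, 5.0])       # the running example from Part 1
c0, details = wavelet_transform(c)
print(c0, details)                       # coarsest average 6, details 2 and (1, -1)
rec = c0
for d in details:
    rec = synthesize(rec, d)
assert np.allclose(rec, c)               # perfect reconstruction
```

Note that the transform (one coarse value plus three details) has the same total size as the input, exactly as the text observes for the general wavelet transform.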
4 Application III: Multiresolution curves and surfaces

We presented two applications of wavelets in Part 1: compression of one-dimensional signals and compression of two-dimensional images. Our third application of wavelets in computer graphics is curve design and editing, as described in detail by Finkelstein and Salesin [8]. Their multiresolution curves are built from a wavelet basis for endpoint-interpolating cubic B-splines, which we discussed in the previous section.

Multiresolution curves conveniently support a variety of operations:

- changing a curve's overall "sweep" while maintaining its fine details, or "character" (Figures 3 and 4);
- changing a curve's "character" without affecting its overall "sweep" (Figure 5);
- editing a curve at any continuous level of detail, allowing an arbitrary portion of the curve to be affected through direct manipulation;
- smoothing at continuous levels to remove undesirable features from a curve;
- approximating or "fitting" a curve within a guaranteed maximum error tolerance, for scan conversion and other applications.

Here we'll describe briefly just the first two of these operations, which fall out quite naturally from the multiresolution representation.

4.1 Editing the sweep of the curve

Editing the sweep of a curve at an integer level of the wavelet transform is simple. Let C^J be the control points of the original curve f^J(t), let C^j be a low-resolution version of C^J, and let Ĉ^j be an edited version of C^j, given by Ĉ^j = C^j + ΔC^j. The edited version of the highest-resolution curve Ĉ^J = C^J + ΔC^J can be computed through synthesis:

    Ĉ^J = C^J + P^J P^{J−1} ⋯ P^{j+1} ΔC^j.

Note that editing the sweep of the curve at lower levels of smoothing j affects larger portions of the high-resolution curve f^J(t). At the lowest level, when j = 0, the entire curve is affected. At the highest level, when j = J, only the narrow portion influenced by one original control point is affected. The kind of flexibility that this multiresolution editing allows is suggested in Figures 3 and 4.

Figure 3 Changing a curve's overall sweep without affecting its character. Given the original curve (a), the system extracts the overall sweep (b). If the user modifies the sweep (c), the system can reapply the detail (d).

Figure 4 The middle of the dark curve is pulled, using editing at various levels of smoothing j. A change in a control point in C^1 has a very broad effect, while a change in a control point in C^4 has a narrow effect.

4.2 Editing the character of the curve

Multiresolution curves also naturally support changes in the character of a curve, without affecting its overall sweep. Let C^J be the control points of a curve, and let C^0, D^0, …, D^{J−1} denote its wavelet transform. Editing the character of the curve is simply a matter of replacing the existing set of detail coefficients D^j, …, D^{J−1} with some new set D̂^j, …, D̂^{J−1}, and reconstructing. To avoid coordinate-system artifacts, all detail coefficients are expressed in terms of the curve's local tangent and normal, rather than the x and y directions.

Figure 5 demonstrates how the character of curves in an illustration can be modified with various detail styles. (The interactive illustration system used to create this figure was described by Salisbury et al. [14].)

Figure 5 Changing the character of a curve without affecting its sweep.

4.3 Multiresolution surfaces

Multiresolution editing can be extended to surfaces by using tensor products of B-spline scaling functions and wavelets. Either the standard construction or the nonstandard construction described in Part 1 for Haar basis functions can be used to form a two-dimensional basis from a one-dimensional B-spline basis. We can then edit surfaces using the same operations described for curves. For example, Figure 6 shows a bicubic tensor-product B-spline surface after altering its sweep at different levels of detail.

We can further generalize multiresolution analysis to surfaces of arbitrary topology by defining wavelets based on subdivision surfaces, as described by Lounsbery et al. [10]. Their nonorthogonal wavelet basis, in combination with the work of Eck et al. [6], allows any polyhedral object to be decomposed into scaling function and wavelet coefficients. Then a compression scheme similar to the one presented for images in Section 3.3 of Part 1 can be used to display the object at various levels of detail simply by leaving out small wavelet coefficients during reconstruction. An example of this technique is shown in Figure 7.
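The sweep-editing formula above is straightforward to prototype. In the sketch below we substitute the simple Haar synthesis matrices for the cubic B-spline filters that Finkelstein and Salesin actually use (an assumption made purely to keep the example short); the propagation of a coarse edit ΔC^j through P^J ⋯ P^{j+1} is the same in either case.

```python
# Sketch of multiresolution sweep editing (Section 4.1), using the Haar
# synthesis matrix P^j in place of the cubic B-spline filters for brevity:
# an edit dC^j at a coarse level is propagated by P^J ... P^{j+1}.
import numpy as np

def haar_P(j):
    """Haar synthesis matrix P^j (2^j x 2^{j-1}): columns (1, 1)^T, shifted."""
    P = np.zeros((2 ** j, 2 ** (j - 1)))
    for i in range(2 ** (j - 1)):
        P[2 * i, i] = P[2 * i + 1, i] = 1.0
    return P

def edit_sweep(C, j, dCj, J):
    """Return C^J + P^J ... P^{j+1} dC^j for a coarse-level edit dC^j."""
    delta = dCj
    for level in range(j + 1, J + 1):
        delta = haar_P(level) @ delta
    return C + delta

J = 3
C = np.arange(2.0 ** J)          # stand-in for a curve's control points
dC1 = np.array([0.0, 5.0])       # move one level-1 coarse control point
edited = edit_sweep(C, 1, dC1, J)
print(edited - C)                # broad effect: the whole second half moves by 5
```

Editing at a higher level j would confine the change to fewer fine control points, matching the behavior illustrated in Figure 4.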
Figure 6 Surface manipulation at different levels of detail: The original surface (a) is changed at a narrow scale (b), an intermediate scale (c), and a broad scale (d).

Figure 7 Surface approximation using subdivision surface wavelets: (a) the original surface, (b) an intermediate approximation, and (c) a coarse approximation.

5 Conclusion

Our primer has only touched on a few of the many uses for wavelets in computer graphics. We hope this introduction to the topic has explained enough of the fundamentals for interested readers to explore both the construction of wavelets and their application to problems in graphics and beyond. We present a more thorough discussion in a forthcoming monograph [15].
Acknowledgments

We wish to thank Adam Finkelstein, Michael Lounsbery, and Sean Anderson for help with several of the figures in this paper. Thanks also go to Ronen Barzel, Steven Gortler, Michael Shantzis, and the anonymous reviewers for their many helpful comments. This work was supported by NSF Presidential and National Young Investigator awards (CCR-8957323 and CCR-9357790), by NSF grant CDA-9123308, by an NSF Graduate Research Fellowship, by the University of Washington Royalty Research Fund (65-9731), and by industrial gifts from Adobe, Aldus, Microsoft, and Xerox.

References

[1] R. Bartels, J. Beatty, and B. Barsky. An Introduction to Splines for Use in Computer Graphics and Geometric Modeling. Morgan Kaufmann, San Francisco, 1987.
[2] Charles K. Chui. An overview of wavelets. In Charles K. Chui, editor, Approximation Theory and Functional Analysis, pages 47–71. Academic Press, Boston, 1991.
[3] Charles K. Chui. An Introduction to Wavelets. Academic Press, Boston, 1992.
[4] Charles K. Chui and Ewald Quak. Wavelets on a bounded interval. In D. Braess and L. L. Schumaker, editors, Numerical Methods in Approximation Theory, volume 9, pages 53–75. Birkhäuser Verlag, Basel, 1992.
[5] Ingrid Daubechies. Orthonormal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 41(7):909–996, October 1988.
[6] Matthias Eck, Tony DeRose, Tom Duchamp, Hugues Hoppe, Michael Lounsbery, and Werner Stuetzle. Multiresolution analysis of arbitrary meshes. In Proceedings of SIGGRAPH 95, pages 173–182. ACM, New York, 1995.
[7] Gerald Farin. Curves and Surfaces for Computer Aided Geometric Design. Academic Press, Boston, third edition, 1993.
[8] Adam Finkelstein and David H. Salesin. Multiresolution curves. In Proceedings of SIGGRAPH 94, pages 261–268. ACM, New York, 1994.
[9] Zicheng Liu, Steven J. Gortler, and Michael F. Cohen. Hierarchical spacetime control. In Proceedings of SIGGRAPH 94, pages 35–42. ACM, New York, 1994.
[10] Michael Lounsbery, Tony DeRose, and Joe Warren. Multiresolution surfaces of arbitrary topological type. ACM Transactions on Graphics, 1996 (to appear).
[11] Stephane Mallat. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(7):674–693, July 1989.
[12] William H. Press, Brian P. Flannery, Saul A. Teukolsky, and William T. Vetterling. Numerical Recipes. Cambridge University Press, New York, second edition, 1992.
[13] Ewald Quak and Norman Weyrich. Decomposition and reconstruction algorithms for spline wavelets on a bounded interval. Applied and Computational Harmonic Analysis, 1(3):217–231, June 1994.
[14] Michael P. Salisbury, Sean E. Anderson, Ronen Barzel, and David H. Salesin. Interactive pen and ink illustration. In Proceedings of SIGGRAPH 94, pages 101–108. ACM, New York, 1994.
[15] Eric J. Stollnitz, Tony D. DeRose, and David H. Salesin. Wavelets for Computer Graphics: Theory and Applications. Morgan Kaufmann, San Francisco, 1996 (to appear).
A Details on endpoint-interpolating B-spline wavelets
A.2 Endpoint-interpolating linear B-spline wavelets
This appendix presents the matrices required to apply endpointinterpolating B-spline wavelets of low degree. (The Matlab code
used to generate these matrices is available from the authors upon
request.) These concrete examples should serve to elucidate the
ideas presented in Section 3. To emphasize the sparse structure of
the matrices, zeros have been omitted. Diagonal dots indicate that
the previous column is to be repeated the appropriate number of
times, shifted down by two rows for each column. The P matrices
have entries relating the unnormalized scaling functions defined in
Section 3, while the Q matrices have entries defining normalized,
minimally supported wavelets. Columns of the Q matrices that are
not represented exactly with integers are given to six decimal places.
Figure 9 shows a few typical scaling functions and wavelets for
linear B-splines. The synthesis matrices Pj and Qj for endpointinterpolating linear B-spline wavelets are
h2 i
22
3
P1 = 12 1 1
1 1
2
6 21 1
7
6
7
2
6
7
j3
1 P
= 12 6
7
"2
#
6
1 7
1 1
4
5
2
1
2
2
P = 2
A.1
Q = p3
Haar wavelets
1
The B-spline wavelet basis of degree 0 is simply the Haar basis described in Section 2 of Part 1. Some examples of the Haar basis scaling functions and wavelets are depicted in Figure 8. The synthesis
matrices Pj and Qj are
21
3
2 1
3
Pj = 6
4
1
1
1
7
5
1
1
Qj =
p j6
2
2 4
1
1
1
1 1
2
1 1
2
h
1
1
1
i
Qj3
1
1
Figure 8 Piecewise-constant B-spline scaling functions and
wavelets for j = 3.
6
6
q 6
2j 6
=
72 6
6
6
4
11. 022704
10. 104145
5. 511352
0. 918559
"
q
Q =
2
7
5
2
12
11
6
1
3
64
#
1
6
11
12
3
1
6
10
6
1
1
6
10
6
1
1
6
10
6
1
7
7
7
7
7
7
7
0. 918559 5
5. 511352
10. 104145
11. 022704
Figure 9 Linear B-spline scaling functions and wavelets forj = 3.
7
A.3
Endpoint-interpolating quadratic B-spline wavelets
Figure 10 shows some quadratic B-spline scaling functions and wavelets. The synthesis matricesPj and Qj for the quadratic case are
P^1 = \frac{1}{2}
\begin{bmatrix}
2 \\
1 & 1 \\
 & 1 & 1 \\
 & & 2
\end{bmatrix}
\qquad
P^2 = \frac{1}{4}
\begin{bmatrix}
4 \\
2 & 2 \\
 & 3 & 1 \\
 & 1 & 3 \\
 & & 2 & 2 \\
 & & & 4
\end{bmatrix}
\qquad
P^j = \frac{1}{4}
\begin{bmatrix}
4 \\
2 & 2 \\
 & 3 & 1 \\
 & 1 & 3 \\
 & & 3 & 1 \\
 & & 1 & 3 \\
 & & & \ddots \\
 & & & & 2 & 2 \\
 & & & & & 4
\end{bmatrix}
\quad (j \ge 3)

Q^1 = \sqrt{\frac{5}{4}}
\begin{bmatrix}
2 \\
-3 \\
3 \\
-2
\end{bmatrix}
\qquad
Q^2 = \sqrt{\frac{3}{4936}}
\begin{bmatrix}
144 & -3 \\
-177 & 21 \\
109 & -53 \\
-53 & 109 \\
21 & -177 \\
-3 & 144
\end{bmatrix}

Q^3 = \sqrt{\frac{1}{713568}}
\begin{bmatrix}
4283.828550 & 780 \\
-5208.746077 & -1949 \\
3099.909150 & 3481 \\
-1300.002166 & -3362 & 11 \\
253.384964 & 1618 & -319 & -8.737413 \\
-8.737413 & -319 & 1618 & 253.384964 \\
 & 11 & -3362 & -1300.002166 \\
 & & 3481 & 3099.909150 \\
 & & -1949 & -5208.746077 \\
 & & 780 & 4283.828550
\end{bmatrix}

Q^j = \sqrt{\frac{3 \cdot 2^j}{136088}}
\begin{bmatrix}
381.872771 & 69.531439 \\
-464.322574 & -173.739454 \\
276.334798 & 310.306330 \\
-115.885924 & -299.698329 & 1 \\
22.587463 & 144.233164 & -29 \\
-0.778878 & -28.436576 & 147 \\
 & 0.980572 & -303 \\
 & & 303 & \ddots \\
 & & -147 \\
 & & 29 & & 0.980572 \\
 & & -1 & & -28.436576 & -0.778878 \\
 & & & & 144.233164 & 22.587463 \\
 & & & & -299.698329 & -115.885924 \\
 & & & & 310.306330 & 276.334798 \\
 & & & & -173.739454 & -464.322574 \\
 & & & & 69.531439 & 381.872771
\end{bmatrix}
\quad (j \ge 4)

Figure 10 Quadratic B-spline scaling functions and wavelets for j = 3.
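One quick sanity check on refinement matrices of this kind: because the B-spline scaling functions sum to one at every point, each row of P^j must sum to one. The NumPy sketch below is our own illustration (`quadratic_P` is a hypothetical helper implementing the banded column structure of the quadratic P^j for j ≥ 3):

```python
import numpy as np

def quadratic_P(j):
    # Sketch of P^j (j >= 3) for endpoint-interpolating quadratic
    # B-splines: boundary columns (4, 2) and (2, 3, 1), interior
    # columns (1, 3, 3, 1) shifted down two rows per column, and
    # mirrored columns at the right end; the whole matrix is
    # divided by 4.
    rows, cols = 2 ** j + 2, 2 ** (j - 1) + 2
    P = np.zeros((rows, cols))
    P[0:2, 0] = [4, 2]
    P[1:4, 1] = [2, 3, 1]
    for c in range(2, cols - 2):
        r = 2 * (c - 1)
        P[r:r + 4, c] = [1, 3, 3, 1]
    P[-4:-1, -2] = [1, 3, 2]
    P[-2:, -1] = [2, 4]
    return P / 4.0

P = quadratic_P(3)
# Partition of unity: every row of P^j sums to one.
print(np.allclose(P.sum(axis=1), 1.0))  # True
```

The same row-sum test applies to the linear and cubic refinement matrices, since all of these scaling functions form partitions of unity.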
A.4
Endpoint-interpolating cubic B-spline wavelets

Some examples of cubic B-spline scaling functions and wavelets are shown in Figure 11. The synthesis matrices P^j and Q^j for endpoint-interpolating cubic B-spline wavelets are
P^1 = \frac{1}{2}
\begin{bmatrix}
2 \\
1 & 1 \\
 & 1 & 1 \\
 & & 1 & 1 \\
 & & & 2
\end{bmatrix}
\qquad
P^2 = \frac{1}{16}
\begin{bmatrix}
16 \\
8 & 8 \\
 & 12 & 4 \\
 & 3 & 10 & 3 \\
 & & 4 & 12 \\
 & & & 8 & 8 \\
 & & & & 16
\end{bmatrix}
\qquad
P^j = \frac{1}{16}
\begin{bmatrix}
16 \\
8 & 8 \\
 & 12 & 4 \\
 & 3 & 11 & 2 \\
 & & 8 & 8 \\
 & & 2 & 12 & 2 \\
 & & & 8 & 8 \\
 & & & \ddots & \ddots & \ddots \\
 & & & 2 & 12 & 2 \\
 & & & & 8 & 8 \\
 & & & & 2 & 11 & 3 \\
 & & & & & 4 & 12 \\
 & & & & & 8 & 8 \\
 & & & & & & 16
\end{bmatrix}
\quad (j \ge 3)

Q^1 = \sqrt{7}\,
\begin{bmatrix}
1 \\
-2 \\
3 \\
-2 \\
1
\end{bmatrix}
\qquad
Q^2 = \sqrt{\frac{315}{31196288}}
\begin{bmatrix}
1368 & 3 \\
-2064 & -240 \\
1793 & 691 \\
-1053 & -1053 \\
691 & 1793 \\
-240 & -2064 \\
3 & 1368
\end{bmatrix}

The matrix Q^3 is given here up to a common positive normalizing factor:

Q^3 \propto
\begin{bmatrix}
6369.305453 & 385.797044 \\
-17429.266054 & -2086.545605 & -0.994989 \\
23004.252368 & 8349.373420 & 123.378671 & 0.931180 \\
-24848.487871 & -18743.473059 & -1668.615872 & -115.466347 \\
17678.884301 & 24291.795239 & 7866.732009 & 1561.868558 \\
-7394.685374 & -18420.997597 & -18420.997597 & -7394.685374 \\
1561.868558 & 7866.732009 & 24291.795239 & 17678.884301 \\
-115.466347 & -1668.615872 & -18743.473059 & -24848.487871 \\
0.931180 & 123.378671 & 8349.373420 & 23004.252368 \\
 & -0.994989 & -2086.545605 & -17429.266054 \\
 & & 385.797044 & 6369.305453
\end{bmatrix}

Q^j = \sqrt{\frac{5 \cdot 2^j}{675221664}}
\begin{bmatrix}
25931.200710 \\
-37755.271723 \\
30135.003012 & \cdots \\
-14439.869635 & & 1 \\
5223.125428 & & -124 \\
-1067.879425 & & 1677 \\
78.842887 & & -7904 \\
-0.635830 & & 18482 \\
 & & -24264 & \ddots \\
 & & 18482 \\
 & & -7904 & & -0.635830 \\
 & & 1677 & & 78.842887 \\
 & & -124 & & -1067.879425 \\
 & & 1 & & 5223.125428 \\
 & & & & -14439.869635 \\
 & & & \cdots & 30135.003012 \\
 & & & & -37755.271723 \\
 & & & & 25931.200710
\end{bmatrix}
\quad (j \ge 4)

Figure 11 Cubic B-spline scaling functions and wavelets for j = 3.
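A minimal synthesis example in the cubic case, using the small j = 1 matrices (a NumPy sketch of our own, not code from the paper): four coarse control points c together with a single detail coefficient d determine the five fine control points, and the relation inverts exactly.

```python
import numpy as np

# Cubic case at j = 1: P^1 is 5x4 and Q^1 is 5x1.
P1 = 0.5 * np.array([[2, 0, 0, 0],
                     [1, 1, 0, 0],
                     [0, 1, 1, 0],
                     [0, 0, 1, 1],
                     [0, 0, 0, 2]])
Q1 = np.sqrt(7.0) * np.array([[1], [-2], [3], [-2], [1]])

# Synthesis: fine control points from coarse points c plus detail d.
A = np.hstack([P1, Q1])  # 5x5 and invertible
c = np.array([0.0, 1.0, 1.0, 0.0])
d = np.array([0.25])
fine = A @ np.concatenate([c, d])

# Analysis inverts the same relation exactly.
cd = np.linalg.solve(A, fine)
print(np.allclose(cd, np.concatenate([c, d])))  # True
```

The same pattern — synthesis as multiplication by [P^j | Q^j], analysis as the solution of the corresponding linear system — applies at every level j and every spline degree in this appendix.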