Abstract

We consider a family of algorithms for approximate implicitization of rational parametric curves and surfaces. The main approximation tool in all of the approaches is the singular value decomposition (SVD), and they are therefore well suited to floating-point implementation in computer-aided geometric design (CAGD) systems. We unify the approaches under the names of commonly known polynomial basis functions and consider various theoretical and practical aspects of the algorithms. We offer new methods for a least squares approach to approximate implicitization using orthogonal polynomials, which tend to be faster and more numerically stable than some existing algorithms. We also prove several simple propositions relating the properties of the polynomial bases to their implicit approximation properties.

1. Introduction

Implicitization algorithms have been studied in both the CAGD and algebraic geometry communities for many years. Traditional approaches to implicitization have focused on exact methods such as Gröbner bases, resultants, and moving curves and surfaces, or syzygies [1]. Approximate methods that are particularly well suited to floating-point implementation have also emerged in the past 25 years [2–6]. These methods are closely related to the algorithms we present; however, those that fit most closely into the framework of this paper include [4, 7–9].

Implicitization is the conversion of parametrically defined curves and surfaces into curves and surfaces defined by the zero set of a single polynomial. Exact implicit representations of rational parametric manifolds often have very high polynomial degrees, which can cause numerical instabilities and slow floating-point calculations. In cases where the geometry of the manifold is not sufficiently complicated to justify this high degree, approximation is often desirable. Moreover, for CAGD systems based on floating-point arithmetic, exact implicitization methods are often infeasible due to performance issues. The methods we present attempt to find “best fit” implicit curves or surfaces of a given degree (the definition of “best fit” varies with the chosen method of approximation). One important property of all the algorithms is that they are guaranteed to give exact implicitizations for sufficiently high implicit degrees, up to numerical stability. In addition, some of the methods are also suitable for implementation in exact arithmetic, hence constituting alternative methods for exact implicitization.

For simplicity of notation, we proceed for the majority of the paper to describe the implicitization of curves. In Sections 2, 3, and 4, we introduce the notation and review existing methods. In Section 5, we present a new method for approximate implicitization using orthogonal polynomials and prove a theoretical relation to the previous methods. Implementations of the methods using different basis functions will be presented in Section 6 and a qualitative comparison and discussion given in Section 7. Finally, the generalization to both tensor-product and triangular surfaces will be covered in Section 8.

2. Preliminaries

A parametric curve in $\mathbb{R}^2$ is given by $\mathbf{p}(t) = (p_1(t), p_2(t))$, where $p_1$ and $p_2$ are functions in $t$ on some parameter domain $\Omega \subset \mathbb{R}$. Of particular importance both in CAGD and classical algebraic geometry are rational parametric curves (i.e., where $p_1$ and $p_2$ are rational functions). In the majority of this paper, we will thus restrict our attention to planar rational curves, where the domain of interest is $\Omega = [0,1]$. In order to use polynomial bases in our construction, we can use the representation of the curves in the projective plane $\mathbb{P}^2$. For a rational parametric curve $\mathbf{p}(t) = (g_1(t)/g_3(t),\, g_2(t)/g_3(t))$ in $\mathbb{R}^2$, where $g_1, g_2$, and $g_3$ are polynomials, we thus use the homogeneous description

$$\mathbf{g}(t) = (g_1(t), g_2(t), g_3(t)). \tag{2.1}$$

For rational curves, expressions of the form $q(\mathbf{p}(t))$ below are to be understood as the polynomial $q(\mathbf{g}(t))$ given by this homogeneous description.

All the methods to be described require a choice of degree $m$ and a choice of basis $(q_k)_{k=1}^M$ for the implicit polynomial. Here, $M$ is defined as the number of basis functions in a polynomial of total degree $m$. Thus, for a general bivariate polynomial, we have $M = (m+1)(m+2)/2$. Any polynomial $q$ can be written in terms of such a basis by choosing coefficients $\mathbf{b} = (b_k)_{k=1}^M$:

$$q(\mathbf{u}) = \sum_{k=1}^{M} b_k\, q_k(\mathbf{u}). \tag{2.2}$$

The choice of implicit basis is an important factor which has implications for both the stability of the algorithms and the quality of the approximations. However, most of the work in this paper is independent of the choice of implicit basis. In $\mathbb{R}^2$, a good choice is the Bernstein basis in a barycentric coordinate system defined over a triangle containing the parametric curve. For curves in $\mathbb{P}^2$, we use the homogeneous Bernstein basis given by

$$B^m_{\mathbf{i}}(\mathbf{u}) = \binom{m}{\mathbf{i}} \mathbf{u}^{\mathbf{i}} = \frac{m!}{i_1!\, i_2!\, i_3!}\, u_1^{i_1} u_2^{i_2} u_3^{i_3}, \quad |\mathbf{i}| = m,$$

where $\mathbf{u} = (u_1, u_2, u_3)$ denotes the homogeneous coordinates and $\mathbf{i} = (i_1, i_2, i_3)$ denotes a multi-index. We order the basis by letting $q_k$ correspond to $B^m_{\mathbf{i}_k}$ for $k = 1, \ldots, M$, where $\mathbf{i}_1 > \mathbf{i}_2 > \cdots > \mathbf{i}_M$ denotes lexicographical order on the indices $i_1$, $i_2$, and $i_3$. Unless otherwise stated, we will assume that the implicit basis is the Bernstein basis. In particular, it forms a partition of unity:

$$\sum_{|\mathbf{i}| = m} B^m_{\mathbf{i}}(\mathbf{u}) = 1,$$

for barycentric coordinates $\mathbf{u}$. The choice of degree $m$ determines the number of degrees of freedom the implicit curve will have. If the chosen degree is sufficiently high, all the algorithms will produce exact results, up to numerical stability. If the degree is lower than necessary, approximations are produced.

Since we are searching for implicit representations and want to avoid the trivial solution $\mathbf{b} = \mathbf{0}$, we add a normalization constraint to $\mathbf{b}$ in the approximation. How best to make this choice has been discussed by several authors (see [6] for an overview). However, as we use the singular value decomposition (SVD) as the means of approximation, our results are given with the quadratic normalization $\|\mathbf{b}\|_2 = 1$.

The techniques in this paper focus on minimization of the algebraic error $q(\mathbf{p}(t))$, in either the uniform or the least squares sense, over the space of polynomials $q$ of degree $m$, where $q$ is defined by (2.2) in a fixed implicit basis. Such a minimization, although not directly minimizing the Hausdorff distance between the implicit and parametric curves, is closely related to the geometric approximation problem. It has been shown that minimization of the algebraic error gives excellent results in geometric space away from singularities [5].

3. Approximate Implicitization: The Original Approach

In 1997, a class of techniques for approximate implicitization of rational parametric curves, surfaces, and hypersurfaces was introduced in [5]. The approximation quality of the techniques was substantiated by a general proof that the methods exhibit very high convergence rates, as shown in Tables 1 and 2. Extensions of this original approach, all of which inherit these high convergence rates, form the basis of this paper.

The guiding principle behind these methods is to find a polynomial $q$ which minimizes the maximal algebraic error $|q(\mathbf{p}(t))|$ in a given parameter domain $\Omega$. That is, in the given implicit basis $(q_k)_{k=1}^M$, to find the coefficients $\mathbf{b}$ which solve

$$\min_{\|\mathbf{b}\|_2 = 1}\, \max_{t \in \Omega}\, |q(\mathbf{p}(t))|. \tag{3.1}$$

The solution to this problem, which we call the minimax (or uniform) approximation, is not easy to find exactly. However, approximations to the minimax solution can be found directly, using linear algebra.

We notice that the expression $q(\mathbf{p}(t))$ is a univariate polynomial of degree $mn$ in $t$. We can thus approach the problem by first factorizing the error expression in a polynomial basis $\boldsymbol{\alpha}(t) = (\alpha_j(t))_{j=1}^{mn+1}$, as follows:

$$q(\mathbf{p}(t)) = (D\mathbf{b})^T \boldsymbol{\alpha}(t), \tag{3.2}$$

where $D$ is an $(mn+1) \times M$ matrix whose columns are the coefficients of $q_k(\mathbf{p}(t))$ expressed in the $\boldsymbol{\alpha}$-basis and $\mathbf{b}$ is the unknown vector of implicit coefficients. Now, we have

$$\min_{\|\mathbf{b}\|_2 = 1}\, \max_{t \in \Omega}\, |q(\mathbf{p}(t))| \;\leq\; \min_{\|\mathbf{b}\|_2 = 1} \|D\mathbf{b}\|_2\, \max_{t \in \Omega} \|\boldsymbol{\alpha}(t)\|_2 \;=\; \sigma_{\min}\, \max_{t \in \Omega} \|\boldsymbol{\alpha}(t)\|_2, \tag{3.3}$$

giving an upper bound on the maximal algebraic error, dependent on the choice of basis. Here, we have used the fact that $\min_{\|\mathbf{b}\|_2 = 1} \|D\mathbf{b}\|_2 = \sigma_{\min}$, where $\sigma_{\min}$ is the smallest singular value of $D$. We thus choose $\mathbf{b} = \mathbf{v}_{\min}$, the right singular vector corresponding to $\sigma_{\min}$, as an approximate solution to the problem. The other singular vectors corresponding to larger singular values also give candidates for approximation that generally decrease in quality as the singular values increase [8]. It is important to note that the value of $\sigma_{\min}$ is dependent on the choice of basis $\boldsymbol{\alpha}$.

In this paper, we use the factor $\max_{t\in\Omega}\|\boldsymbol{\alpha}(t)\|_2$ appearing in (3.3) as a “normalization” when comparing the approximation qualities of different polynomial bases. It should be noted that bases with different scaling coefficients on the individual basis functions will produce different results. For example, the standard Legendre basis, where each basis function has the normalization $P_j(1) = 1$, will produce quantitatively different results from the Legendre basis normalized with respect to the $L^2$-inner product. This choice of scaling is somewhat arbitrary, but experience shows that small differences in the scaling result in small differences in the approximation. Thus, for the bases we study, standard choices will be made.

Given a choice of basis functions $\boldsymbol{\alpha} = (\alpha_j)_{j=1}^{mn+1}$, we summarize the general approach of this section in the following algorithm.

Algorithm 3.1. Input: a rational parametric curve $\mathbf{g}(t) = (g_1(t), g_2(t), g_3(t))$ of degree $n$ on the interval $[0,1]$, and a degree $m$ for the implicit polynomial. (1) For each basis function $q_k$, $k = 1, \ldots, M$, compute the vector of coefficients $\mathbf{d}_k = (d_{j,k})_{j=1}^{mn+1}$ such that $q_k(\mathbf{g}(t)) = \sum_{j=1}^{mn+1} d_{j,k}\, \alpha_j(t)$. (2) Construct the matrix $D = (\mathbf{d}_1, \ldots, \mathbf{d}_M)$ from the column vectors $(\mathbf{d}_k)_{k=1}^M$. (3) Perform an SVD $D = U\Sigma V^T$, and select $\mathbf{b} = \mathbf{v}_{\min}$, the right singular vector corresponding to the smallest singular value $\sigma_{\min}$, as an approximate solution.
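As an illustration, the following Python sketch implements Algorithm 3.1 in the simplest possible setting. It is not the paper's implementation: for brevity it uses the power basis for $\boldsymbol{\alpha}$ and homogeneous monomials $u_1^{i_1} u_2^{i_2} u_3^{i_3}$ (rather than the Bernstein basis) for the implicit basis, and the function name and interface are hypothetical; the structure of the algorithm is unchanged.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def implicitize_original(g1, g2, g3, m):
    """Algorithm 3.1, sketched in the power basis (an assumption; the
    text uses Bernstein bases). g1, g2, g3: coefficient arrays (lowest
    degree first) of the homogeneous components; m: implicit degree."""
    n = max(len(g1), len(g2), len(g3)) - 1           # parametric degree
    cols, idx = [], []
    for i1 in range(m, -1, -1):                      # lexicographic order
        for i2 in range(m - i1, -1, -1):
            i3 = m - i1 - i2
            c = np.array([1.0])                      # coeffs of q_k(g(t))
            for g, p in ((g1, i1), (g2, i2), (g3, i3)):
                for _ in range(p):
                    c = P.polymul(c, np.asarray(g, dtype=float))
            col = np.zeros(m * n + 1)
            col[:len(c)] = c
            cols.append(col)
            idx.append((i1, i2, i3))
    D = np.column_stack(cols)                        # (mn+1) x M matrix
    _, S, Vt = np.linalg.svd(D)
    return Vt[-1], idx, S                            # b = v_min, singular values

# Example: the arc g(t) = (1-t^2, 2t, 1+t^2) with m = 2 recovers
# u1^2 + u2^2 - u3^2 = 0 (up to sign and scaling):
b, idx, S = implicitize_original([1, 0, -1], [0, 2], [1, 0, 1], 2)
```

Note that for an exact implicitization the smallest singular value vanishes (up to rounding), and the returned vector spans the kernel of $D$.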

This algorithm is known as the original method; however, for this paper, we will refer to it simply as the $\boldsymbol{\alpha}$-method for an arbitrary basis $\boldsymbol{\alpha}$.

4. Weak Approximate Implicitization

Two approaches to approximate implicitization by continuous least squares minimization of the objective function were introduced simultaneously in 2001 in [4, 10], and further developed in [8]. These methods perform minimization in the weighted $L^2$-inner product:

$$\min_{\|\mathbf{b}\|_2 = 1} \int_{\Omega} w(t)\, q(\mathbf{p}(t))^2\, dt, \tag{4.1}$$

where $w$ is some weight function defined on the domain of approximation $\Omega$.

After choosing a basis for the implicit representation, we obtain a linear algebra problem as before:

$$\min_{\|\mathbf{b}\|_2 = 1} \mathbf{b}^T \mathbf{M} \mathbf{b}, \tag{4.2}$$

where $\mathbf{M} = (m_{i,j})_{i,j=1}^{M}$ is given by

$$m_{i,j} = \int_{\Omega} w(t)\, q_i(\mathbf{p}(t))\, q_j(\mathbf{p}(t))\, dt. \tag{4.3}$$

This approach eliminates the need for a choice of basis $\boldsymbol{\alpha}$, but a choice of weight function $w$ is necessary. The standard approach in [4, 10] has been to take $w \equiv 1$. Later, we will discuss the benefits of choosing the Chebyshev weight function $w(t) = (1-t^2)^{-1/2}$ on $[-1,1]$, instead.

This problem, unlike the minimax problem, can be solved directly if the parametric components are integrable. We simply take $\mathbf{b} = \mathbf{v}_{\min}$, the eigenvector corresponding to the smallest eigenvalue $\lambda_{\min}$ of $\mathbf{M}$. Since the matrix $\mathbf{M}$ is symmetric, it has orthonormal eigenvectors. Similarly to the previous method, eigenvectors corresponding to larger eigenvalues give gradually degenerating approximations.

We summarize this algorithm, for a given weight function $w$, as follows.

Algorithm 4.1. Input: a parametric curve $\mathbf{p}(t)$ on the interval $[0,1]$, and a degree $m$ for the implicit polynomial. (1) Construct the matrix $\mathbf{M} = (m_{i,j})_{i,j=1}^{M}$ by performing the integrals (4.3) for $i, j = 1, \ldots, M$. (2) Compute the eigendecomposition $\mathbf{M} = V \Lambda V^T$. (3) Select $\mathbf{b} = \mathbf{v}_{\min}$, the eigenvector corresponding to the smallest eigenvalue $\lambda_{\min}$.
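A minimal sketch of Algorithm 4.1 with $w \equiv 1$ follows, assuming the same hypothetical monomial implicit basis as the earlier sketch and using Gauss-Legendre quadrature for the integrals in (4.3); the curve interface and quadrature degree are illustrative choices, not prescribed by the text.

```python
import numpy as np

def implicitize_weak(g, m, n, quad_deg=100):
    """Algorithm 4.1 with w = 1 on [0, 1] (a sketch). g: callable
    mapping a sample array t to the 3 x len(t) array of homogeneous
    components; n: parametric degree."""
    idx = [(i1, i2, m - i1 - i2)
           for i1 in range(m, -1, -1) for i2 in range(m - i1, -1, -1)]
    # the integrand has degree 2mn; quad_deg nodes are exact to 2*quad_deg - 1
    t, w = np.polynomial.legendre.leggauss(max(quad_deg, m * n + 1))
    t, w = 0.5 * (t + 1.0), 0.5 * w                  # map [-1, 1] -> [0, 1]
    G = g(t)
    Q = np.array([G[0]**i * G[1]**j * G[2]**k for (i, j, k) in idx])
    M = (Q * w) @ Q.T                                # Gram matrix (4.3)
    lam, V = np.linalg.eigh(M)                       # symmetric: eigh, ascending
    return V[:, 0], idx, lam                         # b = eigenvector of lambda_min

# e.g. implicitize_weak(lambda t: np.array([1 - t**2, 2*t, 1 + t**2]), 2, 2)
```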

Algorithms following the procedure above are known under different names in the literature: weak approximate implicitization in [8], numerical implicitization in [4], and the eigenvalue method in [11]. For the rest of this paper, we will call it the weak method.

As previously mentioned, the methods of this section are suitable for either exact or approximate implicitization. They can be performed using either symbolic or numerical integration; however, the former is generally only required when performing exact implicitizations in exact precision. For applications where floating-point precision is sufficient, numerical quadrature rules provide a much faster alternative. The methods also have wide generality since they can be applied to any parametric forms with integrable components, not only rational parametric forms. There are, however, some significant disadvantages in choosing this method in practice. Firstly, due to the high degree of the integrand, the integrals can take a relatively long time to evaluate, even when numerical quadrature rules are used. Secondly, and more importantly, the condition numbers of the matrices $\mathbf{M}$ can be significantly larger than the condition numbers of the matrices $D$ from the previous section, leading to issues with numerical stability.

Since $\mathbf{M}$ is a symmetric positive (semi)definite matrix, it has a decomposition $\mathbf{M} = D^T D$, where the singular values of $D$ are the square roots of the singular values of $\mathbf{M}$. This decomposition is not unique; however, in the next section, we show that it is possible to construct such a matrix $D$ directly, without first computing $\mathbf{M}$. The condition number of $D$ will be the square root of the condition number of $\mathbf{M}$; hence, we obtain the solution to the least squares problem in a more stable manner. Section 5.1 demonstrates how the lack of stability in the weak method compares to the new method described in the following section.

5. Approximate Implicitization Using Orthonormal Bases

To make the connection between the original method and the weak method in the previous section, we consider the factorization (3.2). We can then express $\mathbf{M}$ in terms of $D$ and a new matrix $A$ [8]. That is, we get

$$\mathbf{M} = D^T A D, \tag{5.1}$$

where $A = (a_{i,j})_{i,j=1}^{mn+1}$ is given by $a_{i,j} = \int_{\Omega} w(t)\, \alpha_i(t)\, \alpha_j(t)\, dt$. Note that $A$ is a Gramian matrix in the $\boldsymbol{\alpha}$-basis on the weighted $L^2$-inner product. This gives us a clue as to how to improve the weak method by the use of orthonormal bases. A polynomial basis $(\alpha_j)_{j=1}^{mn+1}$ is said to be orthonormal with respect to the weighted $L^2$-inner product $\langle \cdot, \cdot \rangle_w$ if

$$\langle \alpha_i, \alpha_j \rangle_w = \int_{\Omega} w(t)\, \alpha_i(t)\, \alpha_j(t)\, dt = \delta_{i,j}, \tag{5.2}$$

with $\delta_{i,j}$ denoting the Kronecker delta.

Theorem 5.1. Let $\boldsymbol{\alpha}$ be a polynomial basis, orthonormal with respect to the given inner product $\langle \cdot, \cdot \rangle_w$, and let $D$ and $\mathbf{M}$ be defined by (3.2) and (4.3), respectively. Then, $\mathbf{M} = D^T D$.

Proof. By (5.1) and the definition of $A$, we have $\mathbf{M} = D^T A D$ for any basis $\boldsymbol{\alpha}$. But since $\boldsymbol{\alpha}$ is orthonormal with respect to the inner product $\langle \cdot, \cdot \rangle_w$, we have $a_{i,j} = \langle \alpha_i, \alpha_j \rangle_w = \delta_{i,j}$ (i.e., $A = I$), and hence $\mathbf{M} = D^T D$.

We notice that the right singular vectors of $D$ are exactly the orthonormal eigenvectors of $D^T D$, and the singular values of $D$ are the nonnegative square roots of the eigenvalues of $D^T D$. Thus, with Theorem 5.1 in mind, we see that if the $\boldsymbol{\alpha}$-basis is orthonormal with respect to $\langle \cdot, \cdot \rangle_w$, then the results of the original method and the weak method are the same. (Note that we do not specify that the orthonormal basis must be a polynomial basis, only a basis for the relevant space of functions. For example, curves defined by trigonometric polynomials can be treated in a similar way, since that basis is orthogonal.) That is, $D$ is a candidate for the matrix in the decomposition $\mathbf{M} = D^T D$ from the previous section. Such a matrix, as defined by (3.2), can also be given elementwise by

$$d_{j,k} = \langle q_k \circ \mathbf{p},\, \alpha_j \rangle_w = \int_{\Omega} w(t)\, q_k(\mathbf{p}(t))\, \alpha_j(t)\, dt, \quad j = 1, \ldots, mn+1, \; k = 1, \ldots, M. \tag{5.3}$$

In practice, there often exist algorithms for computing coefficient expansions in orthogonal bases that are more efficient than using the elementwise definition. We will mention in Section 6.1 how the Chebyshev and Legendre methods can utilize algorithms based on the fast Fourier transform (FFT) to generate the matrices.
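As an illustration, the following hypothetical sketch builds $D$ elementwise via (5.3) for the $L^2$-orthonormalized Legendre basis on $[-1,1]$, evaluating the inner products by Gauss-Legendre quadrature; the FFT-based constructions mentioned above are faster but longer. The monomial implicit basis of the earlier sketches is again an assumption.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_D(g, m, n, quad_deg=None):
    """D built elementwise via (5.3) in the L2-orthonormalized Legendre
    basis on [-1, 1] (a sketch). g: callable returning the 3 x len(t)
    array of homogeneous components; n: parametric degree."""
    N = m * n + 1
    idx = [(i1, i2, m - i1 - i2)
           for i1 in range(m, -1, -1) for i2 in range(m - i1, -1, -1)]
    t, w = legendre.leggauss(quad_deg or N)          # exact for degree 2mn
    G = g(t)
    Q = np.array([G[0]**i * G[1]**j * G[2]**k for (i, j, k) in idx])
    A = np.array([legendre.legval(t, np.eye(N)[j]) * np.sqrt((2*j + 1) / 2)
                  for j in range(N)])                # orthonormal alpha_j(t)
    return (A * w) @ Q.T, idx                        # d_jk = <q_k(g), alpha_j>

# Theorem 5.1 numerically: D.T @ D agrees with the weak-method matrix M
# on [-1, 1], while the SVD of D works with the square root of M's
# condition number.
```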

We should note that the matrix $D$ is of dimension $(mn+1) \times M$, compared with the $M \times M$ matrix $\mathbf{M}$. For $n > (m+3)/2$, we have $mn + 1 > M$, so finding the singular vectors of $D$ may be more computationally expensive than computing the eigenvectors of $\mathbf{M}$. However, since the matrix construction is usually the dominant part of the algorithm, these differences do not affect the overall complexity of the algorithms. Moreover, the increase in accuracy justifies any small increase in computational complexity in part of the algorithm.

5.1. Example

In order to compare the numerical stability of the two approaches to least squares minimization of the objective function, we turn to a familiar example: exact implicitization of a rational parametric circular arc, which is defined in projective space by

$$\mathbf{g}(t) = (1 - t^2,\; 2t,\; 1 + t^2).$$

We perform the implicitization using the homogeneous Bernstein basis functions $B^2_{\mathbf{i}}$, for $|\mathbf{i}| = 2$, and degree $m = 2$. If performed using exact arithmetic, the weak and orthonormal basis methods give the same implicitization vector. However, performing the algorithms in double precision, the computed vectors differ. (The orthogonal basis method was implemented with Legendre polynomials, as described in Section 6.1.) Examining the respective relative errors (in the infinity norm), we see that the orthogonal basis method preserves the accuracy much better than the weak method, with the latter only preserving approximately 11 digits of accuracy. The orthogonal basis method preserves all but the last digit of the implicitization, up to double precision. It should be noted that this is an example of implicitization with low degree $m = 2$; as the degree is raised, such numerical errors can become more serious. The example in Section 7.2 shows how higher degree implicitizations using the weak method can give unpredictable results.
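The conditioning relation driving this example can be checked directly. The snippet below reuses the hypothetical `legendre_D` sketch from above (so it works with the monomial rather than the Bernstein implicit basis, and the numbers differ from the text's experiment); it verifies that the weak-method matrix has the square of the condition number of $D$, which is why working with $D$ preserves roughly twice as many digits.

```python
import numpy as np
# Illustration of Section 5.1 (assumes legendre_D from the earlier sketch):
# for the arc g(t) = (1-t^2, 2t, 1+t^2) on [-1, 1] with m = 1,
# cond(M) = cond(D)^2, so forming M explicitly squares the conditioning.
arc = lambda t: np.array([1 - t**2, 2 * t, 1 + t**2])
D, _ = legendre_D(arc, m=1, n=2)
print(np.linalg.cond(D) ** 2, np.linalg.cond(D.T @ D))  # the two agree
```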

6. Examples of the Original Approach with Different Bases

The previous sections justified why the original approach is preferable to the weak approach in cases where the error function $q(\mathbf{p}(t))$ can be expressed in terms of orthogonal functions. In this section, we will look at examples of specific implementations using orthogonal polynomial bases. We will also consider alternative implementations using nonorthogonal bases, which produce approximations to the least squares solution. We state some simple propositions which unify the methods and also consider computational aspects of the algorithms.

6.1. Jacobi Polynomial Bases

The most commonly used orthogonal polynomial bases in approximation theory are the Legendre basis $(P_j)_{j \geq 0}$ and the Chebyshev basis (of the first kind) $(T_j)_{j \geq 0}$. These are both special cases of Jacobi polynomials, which are orthogonal with respect to the weight functions defined by

$$w_{\alpha,\beta}(t) = (1-t)^{\alpha} (1+t)^{\beta} \tag{6.1}$$

on the interval $[-1,1]$, and are defined for $\alpha, \beta > -1$. The Legendre and Chebyshev cases are given by $\alpha = \beta = 0$ and $\alpha = \beta = -1/2$, respectively. It is well known that the Chebyshev expansion has an efficient construction via use of the discrete cosine transform (DCT-I), which can be implemented via the fast Fourier transform (FFT) [12]. The Legendre expansion can also be constructed efficiently by transforming from the Chebyshev coefficients [13]. The implementation of approximate implicitization exploiting the speed of the FFT will be the subject of another paper. Here, we will look at the properties of the matrices $D$ for the Jacobi bases and the properties of the approximations. The following proposition is an analogue of Theorem 4.3 in [7], for Jacobi polynomial bases.
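For concreteness, here is one way the DCT-I construction might look in Python. The node placement and scaling follow the classical Chebyshev interpolation formulas, but the function is an illustrative sketch rather than the implementation referred to above.

```python
import numpy as np
from scipy.fft import dct

def chebyshev_coeffs(f, N):
    """Chebyshev coefficients of a polynomial f of degree <= N on [-1, 1],
    by sampling at the N+1 Chebyshev points and applying a DCT-I (a
    sketch using scipy.fft.dct with type=1)."""
    t = np.cos(np.pi * np.arange(N + 1) / N)   # Chebyshev points, cf. (6.7)
    c = dct(f(t), type=1) / N                  # DCT-I of the samples
    c[0] /= 2; c[-1] /= 2                      # halve the end coefficients
    return c                                   # c[j] multiplies T_j(t)

# Each column of D in the Chebyshev basis is then
# chebyshev_coeffs(lambda t: q_k(g(t)), m * n).
```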

Proposition 6.1. Let $D = (d_{j,k})$ be the matrix defined by (3.2) in a Jacobi polynomial basis $(\alpha_j)_{j=1}^{mn+1}$, for any $\alpha, \beta > -1$, normalized so that $\alpha_j(1) = 1$ for $j = 1, \ldots, mn+1$. Then,

$$\sum_{k=1}^{M} d_{1,k} = 1, \qquad \sum_{k=1}^{M} d_{j,k} = 0, \quad j = 2, \ldots, mn+1. \tag{6.2}$$

Proof. Since an orthogonal polynomial basis is degree ordered, one of the functions must be identically a nonzero constant, which, by the normalization condition, is equal to 1. Consider the vector $\mathbf{b}^* = (1, \ldots, 1)^T$, which ensures that $q(\mathbf{p}(t)) = 1$ for all $t$, by the partition of unity of the implicit basis $(q_k)_{k=1}^M$. Clearly, only the basis function of degree zero can have a nonzero coefficient. By (3.2), we have the expansion

$$1 = q(\mathbf{p}(t)) = (D\mathbf{b}^*)^T \boldsymbol{\alpha}(t) = \sum_{j=1}^{mn+1} \left( \sum_{k=1}^{M} d_{j,k} \right) \alpha_j(t). \tag{6.3}$$

But this is equal to 1 if and only if $\sum_{k=1}^{M} d_{1,k} = 1$ and $\sum_{k=1}^{M} d_{j,k} = 0$ for $j = 2, \ldots, mn+1$.

One property of Chebyshev expansions of a continuous function is that the error introduced by truncating the expansion is dominated by the first term after the truncation, if the coefficients decay quickly enough [14]. For curves that we wish to approximate (with relatively simple forms), the coefficients do tend to decay quickly, so the coefficients in the lower rows of the matrix $D$ tend to be dominated by those above them.

The Chebyshev basis is well known for giving good approximations to minimax problems in approximation theory (see [14] for an overview). This also seems to be the case for approximate implicitization, with the resulting error normally being close to equioscillating. In fact, experiments show that in almost all test cases, the number of roots in the error function given by the Chebyshev method is greater than or equal to the convergence rate, for the given implicit degree (see Table 1). Thus the Chebyshev method appears to give a “near best” approximation in the sense that the error normally oscillates a maximum number of times.

Our experience in the choice between Legendre and Chebyshev polynomials is that the difference in approximation quality is minor. Chebyshev expansions are slightly quicker to compute and require less programming effort than their Legendre counterparts [13]. In addition, they tend to eliminate the spike in error at the ends of the interval that appears in the Legendre method. However, both algorithms provide efficient and numerically stable methods for (weighted) least squares approximation over the entire interval.

6.2. Discrete Approximate Implicitization: The Lagrange Basis

One of the simplest and fastest implementations of approximate implicitization is to perform discrete least squares approximation of points sampled on the parametric manifold, similar to the methods in [2, 6]. In our setting, this can be implemented as the original method but with the $\boldsymbol{\alpha}$-basis chosen to be the Lagrange basis at the given nodes. The matrix $D$ defined by (3.2) in the Lagrange basis of degree $mn$ can be given elementwise by

$$d_{j,k} = q_k(\mathbf{p}(t_j)), \quad j = 1, \ldots, mn+1, \; k = 1, \ldots, M, \tag{6.4}$$

where $(t_j)_{j=1}^{mn+1}$ are nodes in the parameter domain.
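A sketch of this discrete construction, again under the assumption of a monomial implicit basis and an illustrative interface, shows how little machinery is required:

```python
import numpy as np

def lagrange_D(g, m, nodes):
    """D in the Lagrange basis at the given nodes, built elementwise by
    (6.4): d_jk = q_k(g(t_j)). A sketch; g samples the homogeneous
    components as a 3 x len(nodes) array."""
    idx = [(i1, i2, m - i1 - i2)
           for i1 in range(m, -1, -1) for i2 in range(m - i1, -1, -1)]
    G = g(np.asarray(nodes, dtype=float))
    D = np.column_stack([G[0]**i * G[1]**j * G[2]**k for (i, j, k) in idx])
    return D, idx                       # rows correspond to nodes

# e.g. mn + 1 uniform nodes on [0, 1]:
# D, idx = lagrange_D(arc, 2, np.linspace(0.0, 1.0, 5))
```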

The result of approximate implicitization in the Lagrange basis depends both on the number of points sampled and on the density of the point distribution in the parameter domain. Since Lagrange polynomials are neither orthogonal nor degree ordered, they do not solve a least squares problem of type (4.1). However, we can form a direct relation between the discrete and continuous least squares problems, as follows.

Proposition 6.2. Let $\mathbf{p}(t)$ be a planar parametric curve with bounded, piecewise continuous components on the interval $[0,1]$, and let $D_N$ be the matrix defined by (6.4) at $N$ uniform nodes. Then, the $(i,j)$ element of $D_N^T D_N$ converges to a constant multiple of the $(i,j)$ element of $\mathbf{M}$, defined by (4.3) with $w \equiv 1$, as the number of samples $N$ is increased.

Proof. Since each of the parametric components of the curve is bounded and piecewise continuous on the interval $[0,1]$ and $q_i$ is a polynomial, we know that $q_i(\mathbf{p}(t))$ is bounded and piecewise continuous for each $i = 1, \ldots, M$. Let $f_{i,j} = (q_i \circ \mathbf{p})(q_j \circ \mathbf{p})$. Then, for uniform samples $t_l = (l-1)/(N-1)$, where $l = 1, \ldots, N$, in the parameter domain, we see that

$$\lim_{N \to \infty} \frac{1}{N} \left( D_N^T D_N \right)_{i,j} = \lim_{N \to \infty} \frac{1}{N} \sum_{l=1}^{N} f_{i,j}(t_l) = \int_0^1 f_{i,j}(t)\, dt = m_{i,j},$$

for $i, j = 1, \ldots, M$.

Sampling more points gives ever closer approximations of the true least squares approximation (for any given number of samples $N$, the constant $1/N$ is absorbed into the singular values of the matrix and does not affect the singular vectors). However, as we have seen, the Legendre method can solve the (unweighted) least squares problem exactly, without excessive sampling and in a way that best preserves the numerical precision. Although the point sampling method does tend to give good approximations when the number of samples is large enough, it is a relatively small increase in computational complexity and programming effort to use Legendre expansions instead. The main strength of the Lagrange method lies in its simplicity: it is easy to implement, computationally inexpensive, and highly parallelizable.
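The convergence in Proposition 6.2 is easy to observe numerically. The following self-contained snippet checks one entry of $\frac{1}{N} D_N^T D_N$ against the corresponding integral for a hypothetical test curve (the circular arc used earlier) with $m = 1$, where the implicit basis functions are simply $u_1$, $u_2$, $u_3$.

```python
import numpy as np
# Spot check of Proposition 6.2 for g(t) = (1-t^2, 2t, 1+t^2) and m = 1:
# the (1,1) entry of D^T D / N should approach
# int_0^1 (1-t^2)^2 dt = 8/15 = 0.5333... as the sampling is refined.
g = lambda t: np.stack([1 - t**2, 2 * t, 1 + t**2])
for N in (10, 100, 1000, 10000):
    D = g(np.linspace(0.0, 1.0, N)).T      # N x 3 Lagrange-basis matrix
    print((D.T @ D / N)[0, 0])             # -> 8/15
```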

Alternative choices of nodes are also interesting to investigate. Using the inequality (3.3), we can introduce the bound

$$\min_{\|\mathbf{b}\|_2 = 1}\, \max_{t \in \Omega}\, |q(\mathbf{p}(t))| \leq \sigma_{\min}\, \Lambda, \tag{6.5}$$

where $\Lambda$ is the Lebesgue constant from interpolation theory, defined by

$$\Lambda = \max_{t \in \Omega} \sum_{j=1}^{mn+1} |\alpha_j(t)|. \tag{6.6}$$

Thus, we may expect to obtain better results from point distributions with smaller Lebesgue constants. In particular, we will see in Section 6.3 that by using the Bernstein basis we achieve a Lebesgue constant of 1. However, it is possible that with smaller Lebesgue constants come larger singular values. Thus, it is important to balance between minimizing the Lebesgue constant and the singular value in order to obtain the best bound for the algebraic error.

A point distribution of particular interest is that of the Chebyshev points. On $[-1,1]$, these are defined as

$$t_j = \cos\left( \frac{(j-1)\pi}{mn} \right), \quad j = 1, \ldots, mn+1. \tag{6.7}$$

Conversion from the Lagrange basis at Chebyshev points to the Chebyshev basis can be performed by a DCT-I of the Lagrange coefficients [14]. Implementing the algorithm in this way gives a fast procedure for the Chebyshev method of Section 6.1. Proposition 6.2 can be extended to show that sampling at an increasing number of Chebyshev points causes the solution to converge to that of the weak method with the Chebyshev weight function (see Proposition 6.3). However, interpolation in these points is known from approximation theory to give very good approximation properties of its own [12]. One reason for this is the small Lebesgue constants associated with Chebyshev points.

Proposition 6.3. Let $\mathbf{p}(t)$ be a planar parametric curve with bounded, piecewise continuous components on the interval $[-1,1]$, and let $D_N$ be the matrix defined by (6.4) at $N$ Chebyshev points given by (6.7). Then, the $(i,j)$ element of $D_N^T D_N$ converges to a constant multiple of the $(i,j)$ element of $\mathbf{M}$, defined by (4.3) with the weight function $w(t) = (1-t^2)^{-1/2}$, as the number of samples $N$ is increased.

Proof. We use the change of variables $t = \cos\theta$ and note first that

$$\int_{-1}^{1} \frac{f(t)}{\sqrt{1-t^2}}\, dt = \int_0^{\pi} f(\cos\theta)\, d\theta.$$

As in Proposition 6.2, $q_i(\mathbf{p}(t))$ is bounded and piecewise continuous for each $i = 1, \ldots, M$. In order to simplify the notation, let $f_{i,j} = (q_i \circ \mathbf{p})(q_j \circ \mathbf{p})$ for each $i, j = 1, \ldots, M$, and let $g_{i,j}(\theta) = f_{i,j}(\cos\theta)$. Then, for uniform samples $\theta_l = (l-1)\pi/(N-1)$, where $l = 1, \ldots, N$, in the domain $[0, \pi]$ (which correspond to the Chebyshev points $t_l = \cos\theta_l$ in the parameter domain), we see that

$$\lim_{N \to \infty} \frac{\pi}{N} \left( D_N^T D_N \right)_{i,j} = \lim_{N \to \infty} \frac{\pi}{N} \sum_{l=1}^{N} g_{i,j}(\theta_l) = \int_0^{\pi} g_{i,j}(\theta)\, d\theta = \int_{-1}^{1} \frac{f_{i,j}(t)}{\sqrt{1-t^2}}\, dt = m_{i,j},$$

for $i, j = 1, \ldots, M$. Note that $\cos 0 = 1$ and $\cos \pi = -1$, so the integral is taken over the same domain even after the change of variables.

6.3. Bernstein Polynomial Basis

The approach in [5] was to choose $\boldsymbol{\alpha}$ to be a nonnegative partition-of-unity basis such as the Bernstein basis (i.e., $\sum_{j=1}^{mn+1} \alpha_j(t) = 1$ and $\alpha_j(t) \geq 0$ for all $t \in \Omega$). This ensures that $\|\boldsymbol{\alpha}(t)\|_2 \leq 1$ for all $t \in \Omega$, and so the smallest singular value gives an upper bound for the error:

$$\min_{\|\mathbf{b}\|_2 = 1}\, \max_{t \in \Omega}\, |q(\mathbf{p}(t))| \leq \sigma_{\min}.$$

Approximation in the Bernstein basis has the advantage that it is easily generalizable to both tensor-product and simplex domains in higher dimensions [15]. If the parametric curves are given in spline or Bézier form, it is natural to use the Bernstein coefficients, since there exist numerically stable algorithms for computing the compositions, without resorting to sampling [16]. Despite the slightly less favourable approximation qualities of the Bernstein basis (see Figure 1), this method performs sufficiently well to be integrated into CAGD systems that are based on Bézier or spline curves and surfaces. It also appears to be the most stable method if the degree is chosen high enough for an exact implicit representation (see Section 7).

The Bernstein method is closely related to both the Lagrange and Legendre methods seen previously. It is in fact easy to see that the matrix $D$ in the Bernstein basis of degree $N$ converges asymptotically to the matrix $D$ in the Lagrange basis at $N+1$ uniform nodes, as the degree $N$ is raised.

Proposition 6.4. Let $D_N$ be the matrix defined by (3.2) in the degree $N$ Bernstein polynomial basis. Then, the $(j,k)$ element of $D_N$ converges to the $(j,k)$ element of the matrix defined by (3.2) in the Lagrange polynomial basis at $N+1$ uniform nodes, as $N$ is increased.

Proof. It is well known that the Bernstein coefficients of a polynomial tend to the values of the polynomial at uniform nodes as the degree is raised [17]; if $(c_j^{(N)})_{j=0}^{N}$ denote the degree $N$ Bernstein coefficients of a polynomial $f$, then

$$c_j^{(N)} = f\!\left( \frac{j}{N} \right) + O\!\left( \frac{1}{N} \right), \quad j = 0, \ldots, N,$$

where the nodes $j/N$ are uniform in $[0,1]$. But the elements on the right-hand side are simply the elements of $D$ in the Lagrange basis at uniform nodes.
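The degree-elevation step underlying this argument is simple to state in code. The sketch below implements one standard elevation step; repeatedly applying it to the Bernstein coefficients of $q_k(\mathbf{g}(t))$ drives them towards the uniform samples $q_k(\mathbf{g}(j/N))$, as in the proof. The function is illustrative and not taken from the paper.

```python
import numpy as np

def elevate(c):
    """One Bernstein degree-elevation step (degree d -> d+1) of a
    coefficient vector c (a sketch)."""
    c = np.asarray(c, dtype=float)
    d = len(c) - 1
    i = np.arange(1, d + 1)
    out = np.empty(d + 2)
    out[0], out[-1] = c[0], c[-1]                  # endpoints are preserved
    out[1:-1] = (i / (d + 1)) * c[i - 1] + (1 - i / (d + 1)) * c[i]
    return out

# Repeated elevation makes the coefficients approach samples at uniform
# nodes, so the Bernstein D converges to the uniform-node Lagrange D.
```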

We can thus deduce the following convergence property of the Bernstein method as an immediate consequence of Propositions 6.2 and 6.4.

Corollary 6.5. Let $D_N$ be the matrix defined by (3.2) in the Bernstein polynomial basis of degree $N$. Then, the $(i,j)$ element of $D_N^T D_N$ converges to a constant multiple of the $(i,j)$ element of $\mathbf{M}$, defined by (4.3) with $w \equiv 1$, as the degree $N$ is raised.

6.4. Exact Implicitization Using Linear Algebra

As mentioned previously, when the degree is chosen high enough to give an exact implicit representation and the algorithms are implemented in exact precision, all the methods can give exact results. The choice of basis in the exact case is irrelevant to the resulting polynomial and affects only the implementation complexity and computational speed. For example, the elements of the matrix $D$ in the Lagrange method can be generated by choosing rational nodes represented in exact arithmetic. As with the floating-point implementation, the matrix can be built very quickly in parallel, but rather than using the SVD, we can exploit algorithms for finding the kernel of a matrix. A similar method for exact implicitization is described in [9]. However, the matrix there is expanded in the monomial basis, which leads to computationally expensive expansions for high degrees. It is noted that it is possible to reduce the complexity of that method in certain cases by exploiting the special structures of the algorithm and sparsity in the resulting matrix. In general, sparsity is not a feature of the Lagrange method; however, the matrices can be built more efficiently. The Bernstein method can also be implemented in exact arithmetic; however, similarly to the method in [9], it suffers from issues with computationally expensive expansions. Although it is possible to implement the Chebyshev and Legendre methods in exact precision using the elementwise definition (5.3), the evaluation of such integrals will be slow. The fast algorithms for generating the matrices using the FFT are not suitable for an exact implementation.
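A compact sketch of the exact Lagrange approach using sympy's rational arithmetic follows; the interface and the monomial implicit basis are assumptions for illustration, not the paper's prescription.

```python
from sympy import Matrix, Poly, Rational, symbols

def exact_implicitize(g1, g2, g3, m):
    """Exact implicitization over the rationals via the Lagrange method
    of Section 6.4 (a sketch): sample q_k(g(t)) at mn + 1 rational
    nodes and return the kernel of D. g1, g2, g3: sympy expressions in
    the symbol t; monomial implicit basis assumed."""
    t = symbols('t')
    n = max(Poly(gi, t).degree() for gi in (g1, g2, g3))
    idx = [(i1, i2, m - i1 - i2)
           for i1 in range(m, -1, -1) for i2 in range(m - i1, -1, -1)]
    nodes = [Rational(j, m * n) for j in range(m * n + 1)]  # unique nodes
    D = Matrix([[(g1**i * g2**j * g3**k).subs(t, tj) for (i, j, k) in idx]
                for tj in nodes])
    return D.nullspace(), idx   # 1-dimensional kernel: lowest degree found

# With t = symbols('t'), exact_implicitize(1 - t**2, 2*t, 1 + t**2, 2)
# returns a kernel spanned by the coefficients of u1^2 + u2^2 - u3^2.
```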

When using the original method for approximate implicitization, we represent the error function $q(\mathbf{g}(t))$ in a basis of degree $mn$. In the Lagrange basis, we thus choose the number of nodes to be $mn+1$, one more than the degree of the basis. In the following proposition, this is shown to be the smallest number of samples one can take in order to guarantee an exact implicitization method.

Proposition 6.6. Suppose one is given a nondegenerate rational parametric planar curve of degree $n$ (i.e., the degree of the exact algebraic representation is also $n$). Then, the number of unique samples required to guarantee an exact implicitization by the Lagrange method is given by $n^2 + 1$.

Proof. Since the implicitization is exact, we know that there exists a unique polynomial $q$ of degree $n$ with coefficients $\mathbf{b}$ such that $q(\mathbf{g}(t)) \equiv 0$. By the theory described in Section 3, we can write

$$q(\mathbf{g}(t)) = (D\mathbf{b})^T \boldsymbol{\alpha}(t) = 0,$$

where $\boldsymbol{\alpha}$ is a basis for polynomials of degree $n^2$, and $\mathbf{b} \neq \mathbf{0}$. Since the polynomials $\alpha_j$ are linearly independent, we have $(D\mathbf{b})_j = 0$ for $j = 1, \ldots, n^2 + 1$, and since $\mathbf{b} \neq \mathbf{0}$, the vector must lie in the null space of $D$. This shows that any basis of degree $n^2$ can be used for exact implicitization. In the univariate case, Lagrange polynomials defined by $n^2 + 1$ points form a basis for polynomials of degree up to $n^2$ if and only if all the points are unique. Thus, choosing $n^2 + 1$ unique points in the parameter domain is sufficient to guarantee an exact implicitization.
To see that choosing fewer than $n^2 + 1$ points is insufficient, we consider parameter values corresponding to double points on the curve. Let $n(n+3)/2$ denote the minimum number of points in general position on a curve of degree $n$ required to define the curve. Let the total number of possible double points on a rational curve of degree $n$ be given by $(n-1)(n-2)/2$ [18]. Then, up to $(n-1)(n-2)/2$ points on the curve can correspond to more than one parameter value. Thus, the minimum number of samples guaranteeing a unique exact implicitization is given by

$$\frac{n(n+3)}{2} + \frac{(n-1)(n-2)}{2} = n^2 + 1.$$

When searching for exact implicitizations, we generally want the implicit polynomial of lowest degree that contains the parametric curve. Since the normalized coefficient vector $\mathbf{b}$ given by an exact implicitization of lowest degree is unique (up to sign), the kernel of the matrix $D$ would be expected to be one-dimensional. A kernel of dimension higher than one indicates that the implicit polynomial defined by any vector in the kernel is reducible, and thus that the degree can be reduced.

7. Comparison of the Algorithms

So far, we have presented several approaches to exact and approximate implicitization using linear algebra. The approaches exhibit different qualities in terms of approximation, conditioning, and computational complexity. The intention of this section is to provide a comparison of the algorithms.

7.1. Algebraic Error Comparisons

Figure 1 plots the average uniform algebraic error, $\max_{t \in \Omega} |q(\mathbf{p}(t))|$, of the approximations of 100 Bézier curves of degree 10, for algebraic degrees $m = 1, \ldots, 10$. We use a barycentric coordinate system defined on a triangle containing the region of interest for the implicit representation and choose random control points within this region to define the Bézier curves. We compare the performance of the Lagrange basis (at uniform nodes), the Bernstein basis, and the Chebyshev basis. (Any reference to the Lagrange basis throughout this section will be assumed to be at uniformly spaced nodes.) As the results in the Chebyshev and Legendre bases are very similar, we include only the Chebyshev basis. The monomial basis, transformed to the interval $[0,1]$, was also considered but not included due to its vastly inferior approximation qualities. All the algorithms were performed in double precision.

For each degree up to the exact degree, $m = 10$, the Chebyshev basis gave the best uniform minimization of the objective function. The maximum error for a given degree in the Chebyshev basis was, in general, roughly comparable to the maximal error in the Lagrange basis one degree higher and in the Bernstein basis two degrees higher. For the Chebyshev basis, the maximal errors level out to roughly double-precision accuracy at degree eight, whereas, for the Bernstein basis, the required degree for machine precision was 10. Although the Lagrange basis performs better than the Bernstein basis for low degrees, the higher degree approximations are distorted to the extent that the exact implicitization, at degree 10, loses several orders of magnitude in accuracy. This appears to be due to the large deviation in the extrema of the Lagrange basis functions at uniform nodes, which is associated with large Lebesgue constants. An example of such an error distribution is pictured in Figure 4(c). The spike in error can be reduced by sampling more points, as the error converges to the weak approximation (cf. Proposition 6.2), or by choosing point distributions with smaller Lebesgue constants.

It should be noted that, for the Bernstein and Lagrange methods, the maximum of the algebraic error normally occurs at the end points of the interval and is normally much higher than the average error across the interval (see Figure 4). Moreover, the error away from the ends of the interval can sometimes be smaller in these bases than in the Chebyshev basis. Hence, in relation to the Chebyshev basis, they generally perform better than Figure 1 may suggest. The Chebyshev basis, however, tends to make the error roughly equioscillate throughout the interval. In addition, topological constraints imposed by approximating with lower degrees than necessary mean that, even when the algebraic error is small, the geometric error can be somewhat different, especially for curves with many singularities.

7.2. A Visual Comparison of the Methods

As discussed previously, minimizing the algebraic error does not necessarily minimize the geometric distance between the implicit and parametric curves. In order to visually compare how the methods perform in terms of geometric approximations, Figure 3 plots the implicit approximations of the parametric curve pictured in Figure 2. The curve was chosen as an example that represents the general approximation properties of each of the bases. However, it should be noted that different examples can give different results.

We see that, for the quartic approximation, the Lagrange and Chebyshev methods are already performing fairly well with only some detail lost close to the double point singularities. Despite exhibiting several intersections with the parametric curve, the Bernstein method gives little reproduction of the shape. The monomial approximation bears almost no resemblance to the original curve. For the quintic approximation, the Chebyshev and Lagrange bases again perform very similarly, giving excellent approximations that replicate the singularities well. These approximations would be sufficient for many applications. The Bernstein method performs similarly to the Chebyshev and Lagrange approximations of degree four, with only some loss of detail at each of the double points. Again the monomial basis gives almost no replication of the curve. It is also interesting to note the presence of extraneous branches visible in the Bernstein, Lagrange, and Chebyshev approximations at degree five. This is a feature which may occur with any of the methods. At degree six, the Bernstein, Lagrange, and Chebyshev methods all give excellent results over the entire interval. The monomial method is beginning to show good approximation at the centre of the interval; however, this deteriorates towards the ends. At degree seven, we expect exact results, up to numerical stability, for all of the algorithms. Visually, the implicitizations in all of the bases agree very closely.

For degree seven, we can also perform the Lagrange method in exact precision, as described in Section 6.4. Using this method, we obtain a null space of dimension one, which confirms that the correct algebraic degree is in fact seven. We may also use this exact implicitization to compare the relative errors (in the infinity norm) of the implicitizations given by the different bases. The numerical stability properties of the Bernstein basis are well documented in the mathematical literature (see, e.g., [19]). It appears that these properties are also preserved by the implicitization algorithm presented here, with the Bernstein basis outperforming the other bases by some orders of magnitude. In relation to the numerical stability of the methods, it is interesting to note the distribution of singular values given by the singular value decomposition of the matrices $D$ [20]. For this comparison, we normalize the singular values to all lie in $[0,1]$. The Bernstein basis has one singular value close to zero in double precision, with the second smallest being several orders of magnitude larger; this shows that the solution is close to unique. The case is similar in the monomial basis; however, the second smallest singular value is considerably closer to zero. For the Lagrange and Chebyshev bases, the second smallest singular values lie closer still to the smallest. Such close proximity between the smallest singular values leads to issues with numerical stability. However, it can also be useful since, as discussed previously, the singular vectors corresponding to the larger singular values are also candidates for approximation. Thus, we would expect the second best approximation in the Chebyshev and Lagrange bases to be much better than the second best approximation in the Bernstein basis.

It is also interesting to note that, when attempting to use the weak method for approximate implicitization as an exact method here, we obtain a completely different solution, with a large relative error. This seems to be due to the fact that the nine smallest eigenvalues (which are equal to the singular values, since $\mathbf{M}$ is symmetric positive semidefinite) are all below machine precision. Thus, the solution given by the weak method could be a combination of any of the nine corresponding eigenvectors. However, despite the large relative error, the weak method still returns a solution which gives a reasonable geometric approximation.

Typical algebraic error distributions obtained from the methods in this section are displayed in Figure 4. A more accurate approximation of the geometric error can be given by dividing the algebraic error by the norm of the gradient of the given implicit representation. However, since the methods we present do not control the gradient, we have not included such plots here.

7.3. Discussion

In the example of Section 7.2, it is striking how well the Lagrange basis performs in relation to the Chebyshev basis. Despite the spike in error near the ends of the interval, the geometric approximation appears to remain relatively good, in some cases even better than the Chebyshev basis. Thus, the comparisons in this section may lead to different conclusions as to which is the best algorithm to choose in general. It is clear that the monomial basis performs relatively badly, but the other bases all tend to perform well. The speed and simplicity of the Lagrange method is appealing, whereas the stability of exact implicitization in the Bernstein basis is desirable for many applications. The fact that the Legendre and Chebyshev methods are guaranteed to solve a least squares problem (see Theorem 5.1), and that there exist efficient algorithms for computing them, also justifies choosing them as general methods. As an overview of this qualitative comparison, we display various aspects of the implementations in Table 3. The table summarizes how the algorithms perform in terms of stability and generality, and what sort of problems they solve (i.e., least squares or uniform approximations).

One undesirable property of approximate implicitization is the possibility of introducing new singularities that are not present in the parametric curve. As the implicit polynomial representation is global, we cannot control what happens outside the interval of approximation. In particular, self-intersections of the curve can appear within the region of interest. This is an artifact that can appear using any of the methods described in this paper. However, such problems can be avoided by adding constraints to the algebraic approximation [21] or by using information about the gradient of the implicit curve in the approximation [22]. In general, adding such constraints will reduce the convergence rates of these methods [7].

The computation times for each of the methods vary. In all the current implementations of the methods, the matrix generation is the dominant part of the algorithm, and the SVD is generally fast. When constructing the matrices, the monomial and Bernstein methods suffer from computationally expensive expansions for high degrees, whereas the Chebyshev and Lagrange methods are based on point sampling and FFT, which can be implemented in parallel. Computational features of the methods will be the subject of further research, including exploiting the parallelism of GPUs to enhance the algorithms.

8. Approximate Implicitization of Surfaces

In this section, we will discuss how the methods presented for curves in the preceding sections generalize to surface implicitization. We will also provide a visual example of approximate implicitization of surfaces.

A parametric surface in $\mathbb{R}^3$ is given by $\mathbf{p}(s,t) = (p_1(s,t), p_2(s,t), p_3(s,t))$, where $p_1$, $p_2$, and $p_3$ are functions in the parameters $s$ and $t$ on some domain $\Omega \subset \mathbb{R}^2$. Similarly to curves, in $\mathbb{P}^3$, we use the homogeneous form of such a surface to write

$$\mathbf{g}(s,t) = (g_1(s,t), g_2(s,t), g_3(s,t), g_4(s,t)), \tag{8.1}$$

for bivariate polynomials $g_1$, $g_2$, $g_3$, and $g_4$.

Although we have the option of using tensor-product polynomials for the implicit representation, here we choose polynomials of total degree $m$. An implicit polynomial of total degree $m$ can be described in a basis $(q_k)_{k=1}^M$, where $M = \binom{m+3}{3}$, with coefficients $\mathbf{b} = (b_k)_{k=1}^M$. The choice of using the Bernstein basis for the implicit polynomial can be extended by considering a barycentric coordinate system defined over a tetrahedron containing the parametric surface. For surfaces in $\mathbb{P}^3$, the homogeneous Bernstein basis is given by

$$B^m_{\mathbf{i}}(\mathbf{u}) = \binom{m}{\mathbf{i}} \mathbf{u}^{\mathbf{i}} = \frac{m!}{i_1!\, i_2!\, i_3!\, i_4!}\, u_1^{i_1} u_2^{i_2} u_3^{i_3} u_4^{i_4}, \quad |\mathbf{i}| = m, \tag{8.2}$$

where $\mathbf{u} = (u_1, u_2, u_3, u_4)$ denotes the homogeneous coordinates and $\mathbf{i} = (i_1, i_2, i_3, i_4)$ denotes a multi-index. Again, this basis forms a partition of unity, and we order it by letting $q_k$ correspond to $B^m_{\mathbf{i}_k}$, for $k = 1, \ldots, M$, where the indices are taken in lexicographical order.

When applying the original algorithm for approximate implicitization, we observe that the expression $q(\mathbf{g}(s,t))$ is a bivariate polynomial in $s$ and $t$. As such, we can write the function in a basis $\boldsymbol{\alpha}(s,t) = (\alpha_j(s,t))_{j=1}^{N}$ as

$$q(\mathbf{g}(s,t)) = (D\mathbf{b})^T \boldsymbol{\alpha}(s,t). \tag{8.3}$$

The description of $\boldsymbol{\alpha}$ and the number of basis functions $N$ is dependent on the type of parametric surface. We thus make distinctions for the two most interesting cases, tensor-product surfaces and surfaces on triangular domains, in the following subsections.

The weak method presented in Section 4 can also be generalized to surfaces. The problem can be stated as

$$\min_{\|\mathbf{b}\|_2 = 1} \iint_{\Omega} w(s,t)\, q(\mathbf{g}(s,t))^2\, ds\, dt, \tag{8.4}$$

where $w$ is some weight function defined on the domain of approximation $\Omega$. In this way, we obtain a linear algebra problem as before, where the solution is given by the eigenvector corresponding to the smallest eigenvalue of a matrix $\mathbf{M} = (m_{i,j})_{i,j=1}^{M}$, defined by

$$m_{i,j} = \iint_{\Omega} w(s,t)\, q_i(\mathbf{g}(s,t))\, q_j(\mathbf{g}(s,t))\, ds\, dt. \tag{8.5}$$

8.1. Tensor-Product Parametric Surfaces

For rational tensor-product surfaces of bidegree $(n_1, n_2)$, we can write

$$\mathbf{g}(s,t) = \sum_{i=0}^{n_1} \sum_{j=0}^{n_2} \mathbf{c}_{i,j}\, \phi_i(s)\, \psi_j(t), \tag{8.6}$$

where $(\phi_i)_{i=0}^{n_1}$ and $(\psi_j)_{j=0}^{n_2}$ are bases for univariate polynomials of respective degrees $n_1$ and $n_2$, the domain $\Omega = \Omega_1 \times \Omega_2$ is the Cartesian product of two univariate intervals, and $\mathbf{c}_{i,j} \in \mathbb{R}^4$ for $i = 0, \ldots, n_1$ and $j = 0, \ldots, n_2$. In this case, (8.3) can be written in a tensor-product basis of bidegree $(mn_1, mn_2)$ as follows:

$$q(\mathbf{g}(s,t)) = \sum_{i=0}^{mn_1} \sum_{j=0}^{mn_2} \tilde{c}_{i,j}\, \alpha_i(s)\, \beta_j(t), \tag{8.7}$$

where $(\alpha_i)_{i=0}^{mn_1}$ and $(\beta_j)_{j=0}^{mn_2}$ are univariate bases of degrees $mn_1$ and $mn_2$. We must use an ordering for the pairs of indices $(i,j)$ in order to enter the coefficients in matrix form. Again, we choose the lexicographical ordering, so that

$$q(\mathbf{g}(s,t)) = (D\mathbf{b})^T \boldsymbol{\alpha}(s,t), \tag{8.8}$$

where $\boldsymbol{\alpha}(s,t)$ collects the products $\alpha_i(s)\beta_j(t)$ in lexicographical order and $D$ is the $(mn_1+1)(mn_2+1) \times M$ matrix of coefficients $\tilde{c}_{i,j}$. The algorithm then proceeds as for curves by using the singular value decomposition and selecting the vector corresponding to the smallest singular value.
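As an illustration of a tensor-product Chebyshev variant, the following hypothetical sketch samples $q_k(\mathbf{g}(s,t))$ on a Chebyshev grid and recovers its bidegree $(mn_1, mn_2)$ Chebyshev coefficients with a two-dimensional DCT-I; as before, the monomial implicit basis (now in four homogeneous components) and the interface are assumptions made for brevity.

```python
import numpy as np
from scipy.fft import dctn

def tp_chebyshev_D(g, m, n1, n2):
    """Tensor-product Chebyshev method (a sketch). g must accept
    broadcasting arrays s, t and return the homogeneous components with
    shape (4, m*n1 + 1, m*n2 + 1); m >= 1, n1, n2 >= 1 assumed."""
    N1, N2 = m * n1, m * n2
    s = np.cos(np.pi * np.arange(N1 + 1) / N1)[:, None]   # Chebyshev grid
    t = np.cos(np.pi * np.arange(N2 + 1) / N2)[None, :]
    G = g(s, t)
    idx = [(i, j, k, m - i - j - k) for i in range(m + 1)
           for j in range(m + 1 - i) for k in range(m + 1 - i - j)]
    cols = []
    for (i, j, k, l) in idx:
        C = dctn(G[0]**i * G[1]**j * G[2]**k * G[3]**l, type=1) / (N1 * N2)
        C[0, :] /= 2; C[-1, :] /= 2; C[:, 0] /= 2; C[:, -1] /= 2
        cols.append(C.ravel())              # lexicographic flattening
    return np.column_stack(cols), idx       # D as in (8.8)
```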

The univariate bases $(\alpha_i)$ and $(\beta_j)$ can be chosen arbitrarily, as in the case for curves. In this way, Theorem 5.1, Propositions 6.2, 6.3, 6.4, and 6.6, and Corollary 6.5 from the univariate case all carry over to the tensor-product case with minimal effort. (For a tensor-product extension of Proposition 6.6, the samples should be taken on a tensor-product grid, and the number of samples is given by $(mn_1+1)(mn_2+1)$.) This applies also to higher dimensional tensor-product hypersurfaces. As an example, consider Theorem 5.1. In the tensor-product case, we have the following, almost verbatim to the univariate case.

Theorem 8.1. Let $\boldsymbol{\alpha}$ be a tensor-product polynomial basis, orthonormal with respect to the given inner product $\langle \cdot, \cdot \rangle_w$, and let $D$ and $\mathbf{M}$ be defined by (8.8) and (8.5), respectively. Then, $\mathbf{M} = D^T D$.

Proof. Similar to Theorem 5.1, with the relevant inner product given by

$$\langle f, g \rangle_w = \iint_{\Omega} w(s,t)\, f(s,t)\, g(s,t)\, ds\, dt,$$

for real bivariate polynomials $f$ and $g$.

8.2. Triangular Surfaces

For rational surfaces of total degree $n$ on a triangular domain $\Omega$, we can write

$$\mathbf{g}(s,t) = \sum_{k=1}^{K} \mathbf{c}_k\, \phi_k(s,t), \tag{8.9}$$

where $(\phi_k)_{k=1}^{K}$ is a basis for bivariate polynomials of total degree $n$, with $K = \binom{n+2}{2}$ and $\mathbf{c}_k \in \mathbb{R}^4$ for $k = 1, \ldots, K$. In this case, (8.3) can be written in a bivariate basis $\boldsymbol{\alpha} = (\alpha_j)_{j=1}^{N}$ of total degree $mn$ as follows:

$$q(\mathbf{g}(s,t)) = (D\mathbf{b})^T \boldsymbol{\alpha}(s,t), \tag{8.10}$$

where $N = \binom{mn+2}{2}$. Lexicographical ordering on the respective degrees of $s$ and $t$ in the basis $\boldsymbol{\alpha}$ can be used to enter the coefficients in matrix form, $D$.

Surfaces on triangular domains may be considered a more fundamental generalization than tensor-product surfaces; however, they often exhibit several difficulties not present in the tensor-product case. For example, most practical applications of the weak method of Section 4 use numerical quadrature, which is difficult to implement on triangular domains for high degrees. Since the integrand in the weak implicitization of a surface of degree $n$ with implicit degree $m$ has total degree $2mn$, high degree quadrature rules are required. For example, exact implicitization of a general quintic parametric surface, with implicit degree $m = 25$, would need a quadrature rule accurate to order 250. Although it would be wise to use a lower degree approximation in this case, the degree of the integrand would still be too high for many quadrature rules on triangular domains. High degree quadrature rules can be constructed by first transforming the domain into a square and using two univariate quadrature rules. However, these rules are not symmetric in the triangle and require more samples than is mathematically necessary [23].

Certain methods for approximate implicitization are, however, easy to generalize. For example, the Bernstein basis has a natural representation on simplex domains using barycentric coordinates, and thus the use of approximate implicitization on triangular surfaces in this basis is simple [15]. The convergence result of Corollary 6.5 can be extended to this case, together with well-established degree elevation algorithms, in order to obtain better approximation results.

The Lagrange basis method from Section 6.2, which was based on sampling, can also be extended to triangular surfaces. However, when choosing samples, it is essential that the Lagrange polynomials defining the $\boldsymbol{\alpha}$-basis are linearly independent; that is, that they do in fact form a basis. For curves, choosing unique parameters for the sample points was sufficient (see Proposition 6.6). However, for surfaces, we must add the requirement that the samples must not all lie on a curve of degree $mn$ in the parameter domain, where the number of samples is given by $N = \binom{mn+2}{2}$.

Orthogonal polynomials on triangular domains also exist, and an extension of Theorem 5.1 holds in this case. However, fast algorithms for generating the coefficients appear to be difficult to replicate, thus limiting the applicability of this method in practice. In particular, there appears to be no direct analogue of the FFT method for generating Chebyshev coefficients. We refer the reader to [24] for a discussion of orthogonal polynomials on triangular domains.

8.3. An Example of Surface Implicitization

As an example of approximate implicitization of tensor-product surfaces, we will consider the problem of approximating the well-known Newell teapot model. It is stated in [1] that “the 32 bicubic patches defining Newell's teapot provide a surprisingly diverse set of tests for moving surface implicitization.” In that paper, properties of the moving surface implicitization algorithm for the different patches are discussed and exact implicit degrees for each patch are stated. We will consider the same 32 patches here but instead use approximate methods, where the degree of approximation has been chosen manually to give good visual results.

Each of the 32 bicubic parametric surfaces has been approximated using the tensor-product Chebyshev method and the degrees stated in Table 4. The resulting surface has been ray traced in Figure 5 using POV-Ray [25] and clearly gives an excellent implicit approximation to the parametric teapot model. Moreover, the degrees of the surface patches are greatly reduced from the exact degrees, which are also stated in Table 4. The highest degree patch in the approximated model is six, whereas, if exact methods are used, patches of degree up to 18 are required. Generally, it appears that for visual purposes, the degree can be reduced to roughly one-third of the exact degree.

This example shows one potential application of approximate implicitization; however, there are several factors that should be noted. Firstly, a significant amount of user input was required to generate the approximations of the teapot patches. This involved choosing degrees that were suitable for each patch and also choosing approximations without extra branches in the region of interest. This was done by considering approximations corresponding to singular values other than the smallest. For example, for the upper handle patches, we chose the approximation corresponding to the fourth smallest singular value. For each increased singular value, the convergence rate of the method is reduced by one [8]. However, even with the reduced convergence rates, the Chebyshev method continues to give excellent approximations. User input was also given to define the clipping region for the ray tracing of each patch. In this example, the clipping regions were boxes parallel to the coordinate planes. However, in more complicated examples, it is not always suitable to use box regions for this purpose.

Another feature of this example is that the continuity between the parametric patches has been approximated very well in the implicit model. This is mainly due to the high convergence rates, which give good approximations over the entire surface region. However, in this case, there is also symmetry in the model, meaning the edge curves where the patches meet can be approximated in a symmetric way. To achieve this, we have used the monomial basis for the implicit representation, since the model is symmetric around the $z$-axis. For more general models, such symmetry would not be possible.

9. Conclusions

We have presented and unified several new and existing methods for approximate implicitization of rational curves using linear algebra. Theoretical connections between the different methods have been made together with qualitative comparisons. Extensions of the methods to both tensor-product and triangular surfaces have been discussed. By considering various issues such as approximation quality and computational complexity, we regard the Chebyshev and Legendre methods as the algorithms of choice for approximation of most rational parametric curves. However, to obtain good numerical stability when using floating-point arithmetic for exact implicitization, the Bernstein basis is a more favourable choice. Future research could include how the methods can be improved, for example, by exploiting sparsity as in [11] or by adding continuity constraints to the approximations [26].

Acknowledgments

The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. PITN-GA-2008-214584 and from the Research Council of Norway through the IS-TOPP program.