ISRN Signal Processing
Volume 2013 (2013), Article ID 417492, 13 pages
http://dx.doi.org/10.1155/2013/417492
Review Article

A Review of Subspace Segmentation: Problem, Nonlinear Approximations, and Applications to Motion Segmentation

Akram Aldroubi

Department of Mathematics, Vanderbilt University, Nashville, TN 37212, USA

Received 4 November 2012; Accepted 20 December 2012

Academic Editors: M. A. Nappi and J.-G. Wang

Copyright © 2013 Akram Aldroubi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The subspace segmentation problem is fundamental in many applications. The goal is to cluster data drawn from an unknown union of subspaces. In this paper we state the problem and describe its connection to other areas of mathematics and engineering. We then review the mathematical and algorithmic methods created to solve this problem and some of its particular cases. We also describe the problem of motion tracking in videos and its connection to the subspace segmentation problem and compare the various techniques for solving it.

1. Introduction

The subspace clustering problem is fundamental in many engineering and mathematics applications [1–11]. It can be described as follows: let $\mathcal{U} = \bigcup_{i=1}^{M} S_i$ be the nonlinear set consisting of a union of subspaces $S_i$ of a Hilbert or Banach space $\mathcal{H}$. Let $W = \{w_1,\dots,w_m\}$ be a set of data points drawn from $\mathcal{U}$. The subspace segmentation (or clustering) problem is then to determine $\mathcal{U}$ (equivalently, to determine $S_i$ for $i = 1,\dots,M$) from the data $W$, that is, to

(1) determine the number of subspaces $M$;
(2) find an orthonormal basis for each subspace $S_i$, $i = 1,\dots,M$;
(3) group the data points belonging to the same subspace into the same cluster.

The data is often corrupted by noise; it may have outliers or some of the data vectors may have missing entries. Therefore, any technique for solving the subspace segmentation problem above must be robust and stable for the aforementioned nonideal cases.

Depending on the application, the space $\mathcal{H}$ can be finite or infinite dimensional. For example, the set of all two dimensional images of a given face $i$, obtained under different illuminations and facial positions, can be modeled as a set of vectors belonging to a low dimensional subspace $S_i$ living in a higher dimensional space [12–14]. For this case, a set of such images from $M$ different faces is a union $\mathcal{U} = \bigcup_{i=1}^{M} S_i$. Another application in which a union of subspaces provides a good model is the problem of motion tracking of rigid objects in videos. For this situation (further developed below), a subspace of dimension at most 4 is assigned to each moving object in the space $\mathbb{R}^{2F}$, where $F$ is the number of frames in the video. Examples where $\mathcal{H}$ is infinite dimensional arise in sampling theory and in learning theory [15–19]. For example, signals with finite rate of innovation are modeled by a union of subspaces that belongs to an infinite dimensional space such as $L^2(\mathbb{R}^d)$ [2, 3, 20, 21].

1.1. Known Number of Subspaces and Dimensions

In some subspace segmentation problems, the number of subspaces or the dimensions of the subspaces are known or can be estimated [1, 8, 22, 23]. In these cases, the subspace segmentation problem, for both the finite and infinite dimensional space cases, can be formulated as follows.

Let $\mathcal{H}$ be a Hilbert space, $W = \{w_1,\dots,w_m\}$ a finite set of vectors in $\mathcal{H}$, $\mathcal{C}$ a family of closed subspaces of $\mathcal{H}$, and $\mathcal{C}^M$ the set of all sequences of elements of $\mathcal{C}$ of length $M$ (i.e., $\mathcal{C}^M = \{\{V_1,\dots,V_M\} : V_i \in \mathcal{C}\}$). The subspace segmentation problem formulation as a minimization problem is as follows.

Problem 1 (optimization formulation of the subspace segmentation problem). Given a finite set $W \subset \mathcal{H}$, a number $p$ with $0 < p < \infty$, and a fixed integer $M \ge 1$, find the infimum of the expression
$$ e\bigl(W, \{V_1,\dots,V_M\}\bigr) = \sum_{w \in W} \min_{1 \le i \le M} d^{p}(w, V_i) \qquad (1) $$
over all $\{V_1,\dots,V_M\} \in \mathcal{C}^M$.
Find a sequence of $M$ subspaces $\{V_1^{o},\dots,V_M^{o}\} \in \mathcal{C}^M$ (if it exists) such that
$$ e\bigl(W, \{V_1^{o},\dots,V_M^{o}\}\bigr) = \inf\bigl\{ e\bigl(W, \{V_1,\dots,V_M\}\bigr) : \{V_1,\dots,V_M\} \in \mathcal{C}^M \bigr\}. $$

An example in finite dimensions is when $\mathcal{H} = \mathbb{R}^n$ and $\mathcal{C}$ is the family of all subspaces of $\mathbb{R}^n$ of dimension no greater than some fixed integer $d$. For this case, when $M = 1$ and $p = 2$, this is a well-known least squares problem that can be solved using the singular value decomposition technique [24]. An example in infinite dimensions is when $\mathcal{H} = L^2(\mathbb{R})$ and $\mathcal{C}$ is a family of closed, shift-invariant subspaces of $L^2(\mathbb{R})$ that are generated by at most $r$ generators [2]. A typical shift-invariant space with one generator is, for example, the space of bandlimited functions, generated by the integer shifts of the generator function $\operatorname{sinc}(x) = \sin(\pi x)/(\pi x)$. Other important shift-invariant spaces are the spline spaces generated by the B-spline functions of degree $n$ [25, 26]. In these cases the subspaces in $\mathcal{C}$ are also infinite dimensional subspaces of $L^2(\mathbb{R})$. Thus, even in the case where $M = 1$ and $p = 2$, this (least squares) problem is much more difficult than its finite dimensional counterpart. It should be noted that when $M \ge 2$, Problem 1 is neither linear nor convex for any $p$ [27, 28]. In the presence of outliers, it has been proven that the best value for $p$ is $p = 1$ [27, 28], and a good choice for light-tailed noise is $p = 2$. There are more general versions of Problem 1; for example, the Hilbert space $\mathcal{H}$ can be replaced by a Banach space $B$; moreover, the family $\mathcal{C}$ of subspaces can be replaced by a more general family of closed sets [22].
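For the finite dimensional case with $M = 1$ and $p = 2$, the minimizer can be computed directly from the SVD of the data matrix. The sketch below is illustrative only; the synthetic data, dimensions, and variable names are assumptions, not taken from the paper.

```python
# Minimal sketch of the M = 1, p = 2 case: the d-dimensional subspace nearest
# (in least squares) to a point cloud is spanned by the top-d left singular
# vectors of the data matrix (Eckart-Young). Synthetic data is illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, m, d = 10, 200, 3                      # ambient dim, number of points, subspace dim
basis = np.linalg.qr(rng.standard_normal((n, d)))[0]   # a "true" subspace
W = basis @ rng.standard_normal((d, m)) + 0.01 * rng.standard_normal((n, m))

U, s, _ = np.linalg.svd(W, full_matrices=False)
V_hat = U[:, :d]                          # orthonormal basis of the fitted subspace

# e(W, V) = sum of squared distances of the columns of W to the subspace V.
residual = W - V_hat @ (V_hat.T @ W)
print("e(W, V) =", np.sum(residual**2))
```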

1.2. Applications and Connection to Other Areas

The subspace segmentation problem has connections to several active areas of research, including learning theory, compressed sampling, and signal processing in general [2, 3, 17, 21, 29–32]. Moreover, it is relevant to several computer vision applications including motion tracking of rigid objects in videos and facial recognition [1, 4, 14, 33–38].

1.2.1. Connection to Compressed Sampling

In compressed sampling, the goal is to find an unknown vector $x \in \mathbb{R}^n$ from a small set of linear measurements $y_i = \langle x, a_i \rangle$, $i = 1,\dots,s$, where the $a_i$ are known sampling vectors. Clearly, this problem has a solution only if some extra information is known about $x$ and if the sampling vectors $a_i$ are well chosen. In compressed sampling, the assumption is that, in a suitable basis or frame, the unknown vector $x$ is $k$-sparse or nearly $k$-sparse (compressible), with $k \ll n$ [30, 39–44]. This means that in a suitable basis or frame the vector $x$ has at most $k$ nonzero components or, under the compressibility assumption, that $x$ has at most $k$ large components. This sparsity (or compressibility) assumption implies that the vector $x$ must belong (or must be close) to a union of subspaces of dimension at most $k$. Thus, finding the sparse model for a class of signals can be obtained by solving the subspace segmentation problem in the special case where $\mathcal{H} = \mathbb{R}^n$ and $\mathcal{C}$ is the class of subspaces of $\mathbb{R}^n$ of dimension at most $k$ [45, 46].

1.2.2. Connection to Learning Theory and Data Mining

In many learning theory problems, a class of data may form a complex structure embedded in a high dimensional space [47–53]. In the neighborhood of each data point, the structure may be modeled by a local tangent space, or a union of tangent spaces, whose dimensions are much smaller than the dimension of the ambient space [16]. The global shape of the data model can then be obtained from the observed data points by solving Problem 1.

1.2.3. Connection to Signal Processing

In signal processing, signals are often modeled by an infinite dimensional shift-invariant subspace of $L^2(\mathbb{R})$ [15, 54–61]. For example, the classical shift-invariant space is the space of bandlimited functions, also known as the Paley-Wiener space [62–67]. This is the space generated by the function $\operatorname{sinc}$ and its integer shifts. Multiresolution and wavelet spaces are also shift-invariant spaces that are often used in signal processing applications. Choosing the model for a class of signals can be cast in terms of finding the solution of Problem 1 from observed data. Unlike the compressed sampling or learning theory settings discussed earlier, in this situation the class $\mathcal{C}$ consists of infinite dimensional subspaces of $L^2(\mathbb{R})$ and is therefore more difficult to deal with, even for a single shift-invariant subspace model ($M = 1$) [68]. The case in which a signal model is not a single subspace but a union of several such subspaces arises naturally, as in the case of signals with finite rate of innovation [69–73].

1.2.4. Application to Motion Tracking in Video

The problem of tracking rigid moving objects in a video can be formulated as a subspace segmentation problem [33, 35, 74–77]. Let $(x_{fp}, y_{fp})$ be the Cartesian coordinates of a point $p$ of a moving object in frame $f$ of a video. By concatenating all the coordinates of $p$ into a single vector, we obtain the so-called trajectory vector of $p$, whose length is $2F$, where $F$ is the number of frames in the video. It can be shown that, for rigid bodies, the trajectory vectors of all points of an object belong to a subspace of $\mathbb{R}^{2F}$ of dimension no greater than 4. Thus, if $W$ is a set of trajectory vectors from a video containing $M$ moving objects (the background being one such object), then the set $W$ belongs to a union of $M$ subspaces of dimension at most 4. Solving the subspace segmentation problem in this situation consists of using the data to find the subspaces and then grouping together the trajectory vectors that belong to the same object according to the subspace they lie in. It can also be shown that human facial motion and other nonrigid motions can be approximated by linear subspaces [78].
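The rank bound for a single rigid object can be illustrated with a synthetic affine-camera model; the data, camera matrices, and names below are assumptions for illustration only.

```python
# Under an affine camera, the 2F-dimensional trajectory vectors of the points of
# one rigid object lie in a subspace of dimension at most 4. Synthetic example.
import numpy as np

rng = np.random.default_rng(2)
F, P = 30, 50                                   # frames, tracked points on one object
X = np.vstack([rng.standard_normal((3, P)), np.ones((1, P))])   # homogeneous 3D points

rows = []
for f in range(F):
    A = rng.standard_normal((2, 4))             # a 2x4 affine camera matrix for frame f
    rows.append(A @ X)                          # 2 x P image coordinates in frame f
W = np.vstack(rows)                             # 2F x P trajectory matrix

s = np.linalg.svd(W, compute_uv=False)
print("numerical rank:", int(np.sum(s > 1e-8 * s[0])))   # at most 4
```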

1.2.5. Application to Face Recognition

It has been shown that the set of all two-dimensional images of a given face $i$, obtained under different illuminations and facial positions, can be modeled as a set of vectors belonging to a low dimensional subspace $S_i$ living in a higher dimensional space [14]. A set of such images from $M$ different faces is then a union $\mathcal{U} = \bigcup_{i=1}^{M} S_i$, where each subspace $S_i$ is associated with a given face.

1.3. Dimensionality Reduction

Since the data may live in a very high dimensional space $\mathbb{R}^N$, while the union $\mathcal{U}$ consists of subspaces whose dimensions are much smaller than $N$, the subspace clustering problem can often be solved in a space of smaller dimension $n$, the effective dimension. Specifically, if the span of $\mathcal{U}$ has dimension $n \ll N$, then the data can be projected onto a space of dimension $n$, where the projection is not necessarily an orthogonal projection, but any “good” linear process that maps the data to another (low dimensional) space, for example, a random projection [79–82]. As a result of projecting $\mathcal{U}$ and the data $W$, we get a set $\mathcal{U}'$ and data $W'$. It is now possible to solve the subspace segmentation problem with the data $W'$ instead of $W$ and use the segmentation in the low dimensional space to solve the original problem. This dimensionality reduction technique can be very effective and is often used in conjunction with the subspace segmentation problem [12, 83].
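As an illustration, a Gaussian random projection can serve as the “good” linear map; the dimensions and names in the sketch below are illustrative assumptions.

```python
# Sketch of dimensionality reduction by random projection: data on a union of
# low dimensional subspaces in R^N is mapped to R^n with n << N, and the
# subspace structure is (with high probability) preserved. Illustrative only.
import numpy as np

rng = np.random.default_rng(3)
N, n, m, d = 1000, 20, 300, 4                   # ambient dim, reduced dim, points, subspace dim

# Data drawn from two d-dimensional subspaces of R^N.
W = np.hstack([np.linalg.qr(rng.standard_normal((N, d)))[0] @ rng.standard_normal((d, m // 2))
               for _ in range(2)])

P = rng.standard_normal((n, N)) / np.sqrt(n)    # random (non-orthogonal) projection
W_low = P @ W                                   # reduced data used for segmentation
print(W.shape, "->", W_low.shape)
```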

2. Algebraic Methods for Finite Dimensional Noise Free Case

There is not yet a fully satisfactory approach for solving the general subspace segmentation problem described in Section 1. In the ideal case where no noise is present, there are several algebraic methods that can solve this problem, as will be described below. However, in realistic situations when noise, outliers, and corrupted data are present, there are no completely satisfactory algorithms for finding the solution, even in the finite dimensional case when $\mathcal{H} = \mathbb{R}^n$. The difficulties are both theoretical and computational, as will be further described below.

In the ideal case, when $\mathcal{H} = \mathbb{R}^n$ and the data is drawn from a finite union of subspaces $\mathcal{U} = \bigcup_{i=1}^{M} S_i$, the general problem can be solved using algebraic methods. Obviously, there must be enough data points. In particular, it is necessary that for each subspace $S_i$ there is a subset of data points that forms a basis for $S_i$. However, this is not sufficient. Consider, for example, the very simple case in which the data is drawn from a union of two distinct lines $S_1 \cup S_2$ through the origin. If we are supplied with only two points, one from each line, we will not be able to decide whether the data is drawn from a single subspace (the plane spanned by the two points) or from the union $S_1 \cup S_2$. However, if we are supplied with enough points belonging to $S_1$ and enough belonging to $S_2$, the structure becomes apparent.

2.1. Reduced Row Echelon Form Method

One of the recent algebraic methods for solving the noise free subspace segmentation problem under the independent subspace restriction is the reduced row echelon form (RREF) method [22]. This method is a generalization of the method of Gear, who observed that, for four dimensional subspaces, the reduced echelon form can be used to segment motions in videos [84]. It turns out that in the noise free case the reduced row echelon form method can completely solve the subspace segmentation problem in almost its most general form.

The RREF method is based on the familiar Gaussian elimination techniques for solving linear systems of equations. However, for this method to work, certain assumptions on the data and the subspaces are needed. Specifically, there must be enough data to cover all the dimensions of the union of subspaces from which the data is drawn. Moreover, the subspaces must be independent. To make these assumptions precise, we make the following definitions.

Definition 1 (generic data). Let $S$ be a linear subspace of $\mathbb{R}^n$ with dimension $d$. A set of data $W = \{w_1,\dots,w_m\}$ drawn from $S$ is said to be generic if (i) $m \ge d$, and (ii) every $d$ vectors from $W$ form a basis for $S$.

Definition 2 (independent subspaces). Subspaces $S_1,\dots,S_M$ are called independent if $\dim(S_1 + \cdots + S_M) = \dim S_1 + \cdots + \dim S_M$.

Independent subspaces have the property that $S_i \cap S_j = \{0\}$ for $i \ne j$. The converse, however, is false; for example, three distinct lines $S_1$, $S_2$, $S_3$ in $\mathbb{R}^2$ with $S_i \cap S_j = \{0\}$ for $i \ne j$ can never be independent. More generally, if $S_1,\dots,S_M$ are independent, then $S_i \cap S_j = \{0\}$ and, in fact, $S_i \cap \bigl(\sum_{j \ne i} S_j\bigr) = \{0\}$ for every $i$.

If we knew the subspaces $S_1,\dots,S_M$, it would be easy to partition the data $W$ into the partition $\{W_1,\dots,W_M\}$ such that $W_i \subset S_i$. Conversely, if we knew a partition $\{W_1,\dots,W_M\}$ of the data such that each set $W_i$ comes from the same subspace $S_i$, then we would set $S_i = \operatorname{span}(W_i)$ and our subspace segmentation problem would be solved.

However, all we are given is the data $W$, and we do not know the partition $\{W_1,\dots,W_M\}$. Thus, solving the subspace segmentation problem amounts to finding this partition of $W$. To do this, we construct a matrix $\mathbf{W}$ whose columns are the data vectors $w_1,\dots,w_m$. The matrix $\mathbf{W}$ is an $n \times m$ matrix, where $m$ may be large, while the rank $r$ of $\mathbf{W}$ is often much smaller (noise free case). Using the three elementary row operations used in Gaussian elimination, we transform $\mathbf{W}$ to its reduced row echelon form $\operatorname{rref}(\mathbf{W})$, in which only the first $r$ rows are nonzero, where $r$ is the rank of $\mathbf{W}$. By setting to the value 1 all nonzero coefficients in $\operatorname{rref}(\mathbf{W})$, we obtain the so-called binary reduced row echelon form of $\mathbf{W}$, denoted by $B(\mathbf{W})$. The binary reduced row echelon form of $\mathbf{W}$ has a structure that allows us to easily find the partition and thereby solve the subspace segmentation problem, as Theorem 3 below suggests [22].

Theorem 3. Let $S_1,\dots,S_M$ be a set of nontrivial independent subspaces of $\mathbb{R}^n$. Let $\mathbf{W}$ be a matrix whose columns are drawn from $\bigcup_{i=1}^{M} S_i$. Assume that the data drawn from each subspace is generic, and let $B(\mathbf{W})$ be the binary reduced row echelon form of $\mathbf{W}$. Then

(1) the inner product of a pivot column and a nonpivot column in $B(\mathbf{W})$ is one if and only if the corresponding column vectors in $\mathbf{W}$ belong to the same subspace $S_i$ for some $i$;
(2) moreover, for two nonpivot columns $b_i$ and $b_j$ of $B(\mathbf{W})$, $\langle b_i, b_j \rangle = \|b_i\|_0$, where $\|b_i\|_0$ is the $\ell_0$-norm of $b_i$ (the number of its nonzero entries), if and only if the corresponding columns of $\mathbf{W}$ belong to the same subspace;
(3) finally, $\langle b_i, b_j \rangle = 0$ if and only if the corresponding columns of $\mathbf{W}$ belong to different subspaces or $b_i$ and $b_j$ are distinct pivot columns.

This theorem suggests a very simple yet effective approach to clustering the data points (Algorithm 1) and solving the subspace segmentation problem. This is done by finding a partition of the data $W$ into clusters $W_1,\dots,W_M$ such that $\operatorname{span}(W_i) = S_i$. The clusters can be formed as follows: pick a nonpivot column $b_j$ in $B(\mathbf{W})$ and group together all columns $b_k$ of $B(\mathbf{W})$ such that $\langle b_j, b_k \rangle \ne 0$. Repeat the process with a different, not yet clustered, nonpivot column until all columns are exhausted. This is detailed in Algorithm 1 below.

Algorithm 1: Subspace segmentation—row echelon form approach—no noise.

Note that we do not need to know the number of subspaces $M$, nor do we need to know the dimensions of the subspaces, in order to solve the subspace segmentation problem in this case; both $M$ and the dimensions are outputs of the algorithm. The only assumptions are that there are enough data points, that they are well distributed (i.e., generic), and that the subspaces are independent.
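A small, noise-free illustration of this procedure is sketched below. The data, subspace choices, and names are assumptions made for the example; exact (integer) arithmetic is used because the RREF is numerically fragile with floating point data.

```python
# A minimal noise-free illustration of the RREF clustering idea (Theorem 3 /
# Algorithm 1). Data, subspace dimensions, and variable names are illustrative.
import numpy as np
from sympy import Matrix

# Columns drawn from two independent subspaces of R^5:
# S1 = span{e1, e2}, S2 = span{e3, e4, e5} (integer data keeps the rref exact).
cols_S1 = [[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [1, 1, 0, 0, 0], [2, -1, 0, 0, 0]]
cols_S2 = [[0, 0, 1, 0, 0], [0, 0, 0, 1, 0], [0, 0, 0, 0, 1], [0, 0, 1, 1, 1]]
W = Matrix(np.array(cols_S1 + cols_S2).T.tolist())      # 5 x 8 data matrix

E, pivot_cols = W.rref()                                 # reduced row echelon form
B = E.applyfunc(lambda x: 1 if x != 0 else 0)            # binary RREF

# Group each nonpivot column with all columns it "connects" to (nonzero inner product).
m = W.shape[1]
labels = [-1] * m
cluster = 0
for j in range(m):
    if j in pivot_cols or labels[j] != -1:
        continue
    bj = B[:, j]
    members = [k for k in range(m) if (bj.T * B[:, k])[0] != 0]
    for k in members:
        labels[k] = cluster
    cluster += 1
print("cluster labels:", labels)   # first four columns vs. last four columns
```

On this toy example the loop recovers the two clusters without being told the number of subspaces or their dimensions.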

For noisy data, the reduced row echelon form method does not work, and a thresholding must be applied. However, the effect of the noise on the reduced echelon form method depends on the noise level and the relative positions of the subspaces. This dependence has been analyzed in [22].

2.2. The Generalized Principal Component Analysis (GPCA)

Another algebraic method for solving the subspace segmentation problem is the so-called generalized principal component analysis (GPCA) [12, 85]. Although the most general form of this method solves the subspace segmentation problem in its entire generality for finite dimensions, we will only describe the idea behind the GPCA method in the simplified case where the number of subspaces $M$ is known and the subspaces are hyperplanes in $\mathbb{R}^n$, that is, their dimension is $n - 1$. For this case, each subspace $S_i$ can be described by its normal vector $b_i$, and every data point $w \in S_i$ satisfies the linear equation $\langle b_i, w \rangle = 0$. Thus, a data point $w$ drawn from the union of the $M$ hyperplanes must satisfy the polynomial equation
$$ p(w) = \prod_{i=1}^{M} \langle b_i, w \rangle = 0. $$
The product $p(x) = \prod_{i=1}^{M} \langle b_i, x \rangle$ is in fact a homogeneous polynomial of degree $M$ in $x = (x_1,\dots,x_n)$, that is, $p(x) = \sum c_{k_1,\dots,k_n} x_1^{k_1} \cdots x_n^{k_n}$ with $k_1 + \cdots + k_n = M$ ($k_j$ nonnegative integers). Thus, if $w \in \bigcup_{i=1}^{M} S_i$, it must satisfy the equation $p(w) = 0$. Hence, in order to solve the subspace segmentation problem for this case, we must

(1) find the polynomial $p$ by finding the values of its coefficients $c_{k_1,\dots,k_n}$. This is done by creating a system of linear equations in the unknown coefficients by setting $p(w) = 0$ for each data point $w \in W$. If the set of data points is generic, then the solution of the system of equations determines the polynomial $p$;
(2) once the polynomial $p$ is determined, it must be factored into its product of linear forms $\prod_{i=1}^{M} \langle b_i, x \rangle$. The normal vectors $b_i$ can then be found by identification, and the subspaces $S_i$ in the union are thus determined.
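The sketch below illustrates the two steps for the special case of $M = 2$ hyperplanes in $\mathbb{R}^3$. The synthetic data and names are assumptions, and instead of factoring $p$ symbolically, the normals are recovered from the gradient of $p$ at the data points, a standard device in the GPCA literature; this is not the authors' code.

```python
# A small GPCA-style sketch for M = 2 hyperplanes in R^3. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
b1 = np.array([1.0, 0.0, 1.0]); b1 /= np.linalg.norm(b1)    # normal of hyperplane 1
b2 = np.array([0.0, 1.0, -1.0]); b2 /= np.linalg.norm(b2)   # normal of hyperplane 2

def sample_hyperplane(b, n):
    """Sample n points orthogonal to b."""
    X = rng.standard_normal((n, 3))
    return X - np.outer(X @ b, b)

W = np.vstack([sample_hyperplane(b1, 30), sample_hyperplane(b2, 30)])

# Step 1: fit the degree-2 homogeneous polynomial p(x) = (b1.x)(b2.x) = x^T A x.
# Each data point gives one linear equation in the 6 monomial coefficients.
def veronese2(w):
    x, y, z = w
    return [x*x, x*y, x*z, y*y, y*z, z*z]

V = np.array([veronese2(w) for w in W])
_, _, Vt = np.linalg.svd(V)
c = Vt[-1]                                    # null vector = polynomial coefficients
A = np.array([[c[0], c[1]/2, c[2]/2],
              [c[1]/2, c[3], c[4]/2],
              [c[2]/2, c[4]/2, c[5]]])        # symmetric matrix with x^T A x = p(x)

# Step 2: grad p(w) = 2 A w is parallel to the normal of the hyperplane
# containing w; cluster points by the direction of their gradient.
G = W @ A.T
G /= np.linalg.norm(G, axis=1, keepdims=True)
labels = (np.abs(G @ G[0]) > 0.9).astype(int)   # 1 = same hyperplane as point 0
print("recovered normal (up to sign and scale):", G[0])
print("cluster sizes:", np.bincount(labels))
```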

A modification of the GPCA method described above works for the general subspace segmentation problem in which neither the dimensions of the subspaces nor their number is known [12, 85]. However, as in the case of the RREF method, this method does not work directly when noise is present, and some modification is needed in the presence of noise and outliers, as described in [12, 85].

3. Optimization Methods and Subspace Segmentation in the Presence of Noise

The algebraic methods discussed in the previous section do not work without modification for the case in which the data is corrupted by noise or outliers. Even with some of the adjustments that take care of noisy environments, the algebraic algorithms do not perform well when the noise is not small. Algorithms are rated according to their simplicity, computational speed, and their performance in nonideal situations. Thus, algebraic methods or their modifications may be the algorithms of choice if the noise is small and computational speed is the main requirement. However, when noise is relatively large and accuracy is important, other methods are needed. In this section we discuss other methods that are robust to noise and other inaccuracies in the data.

One of the methods for the subspace segmentation problem when noise is present is typified by Problem 1. Minimizing the functional described in Problem 1 amounts to finding the union of subspaces that is nearest to the data. However, some a priori knowledge of the number of subspaces and the dimensions of the subspaces may be necessary. The cost function can be modified to incorporate a cost that depends on the number of subspaces and their dimensions, if these quantities are unknown. But before getting into algorithms for solving Problem 1, the existence of a minimizer is a theoretical question of interest. Thus, we start with some of the results pertaining to this issue.

3.1. Existence of a Minimizer to Problem 1

Given a family $\mathcal{C}$ of closed subspaces of $\mathcal{H}$, a solution to Problem 1 may not exist even in the linear case when $M = 1$. For example, assume that $\mathcal{H} = \mathbb{R}^2$ and $\mathcal{C}$ is the set of all lines through the origin except one particular line. For this case, a minimizer may exist for certain distributions of data points but not for others. The existence of a solution here means that a minimizer exists for any distribution of any finite number of data points. We will describe the existence results when $\mathcal{H}$ is a Hilbert space. The case when $\mathcal{H}$ is not a Hilbert space is very difficult, and only partial results are known.

It turns out that the existence of a minimizing sequence of subspaces that solves Problem 1 for general $M$ is equivalent to the existence of a solution to the same problem for $M = 1$ [2].

Theorem 4. Problem 1 has a minimizing set of subspaces for all finite sets of data and for any $M \ge 1$ if and only if it has a minimizing subspace for all finite sets of data and for $M = 1$.

Therefore, the following definition is useful.

Definition 5. A set $\mathcal{C}$ of closed subspaces of a separable Hilbert space $\mathcal{H}$ has the minimum subspace approximation property (MSAP) if for every finite subset $W \subset \mathcal{H}$ there exists an element $V \in \mathcal{C}$ that minimizes the expression
$$ e(W, V) = \sum_{w \in W} d^{p}(w, V). $$

Using this terminology, Problem 1 has a minimizing sequence of subspaces if and only if $\mathcal{C}$ satisfies the MSAP. If $\mathcal{H} = \mathbb{R}^n$ and $\mathcal{C}$ is the family of subspaces of dimension no greater than $d$, then $\mathcal{C}$ satisfies the MSAP. This fact is easy to prove directly and is in fact a consequence of the Eckart-Young theorem [24]. Another situation is when $\mathcal{H} = L^2(\mathbb{R})$ and $\mathcal{C}$ is the set of all shift-invariant spaces of length at most $r$ (i.e., generated by at most $r$ generators). For this last case, a result in [68] implies that $\mathcal{C}$ satisfies the MSAP.

In order to understand the general case, we identify each subspace $V$ with the orthogonal projector $Q_V$ whose kernel is exactly $V$ (i.e., $Q_V = I - P_V$, where $P_V$ is the orthogonal projector onto $V$). Now we can think of $\mathcal{C}$ as a set of projection operators and endow it with the induced weak operator topology. This setting allows us to give necessary and sufficient conditions for a class $\mathcal{C}$ to have the MSAP for the case when $p = 2$ in (1). Note that it is sufficient that $\mathcal{C}$ is closed in this topology in order for $\mathcal{C}$ to have the MSAP. However, this condition is too strong, as the following example shows: let $\mathcal{H} = \mathbb{R}^3$ and consider the set $\mathcal{C}$ which is the union of a plane $P$ and a sequence of lines $\{L_k\}$ converging to a line $L \subset P$ with $L \notin \mathcal{C}$. Then $\mathcal{C}$ (identified with a set of projectors as described earlier) is not closed (since $L \notin \mathcal{C}$). However, it is easy to show that this set satisfies the MSAP, since if the infimum in (1) is achieved by the missing line $L$, it is also achieved by the plane $P \supset L$.

For finite dimensions, the weak operator and strong operator topologies are the same, and the characterization of the MSAP can be obtained in terms of the convex hull of the family $\mathcal{C}^{+}$ consisting of $\mathcal{C}$ together with the positive semidefinite operators added to it. Recall that the convex hull $\operatorname{co}(A)$ of a set $A$ is the smallest convex set containing $A$, that is, $\operatorname{co}(A)$ is the intersection of all convex sets containing $A$. For finite dimensions, the following theorem gives the necessary and sufficient conditions for the MSAP and hence the necessary and sufficient conditions for the existence of a solution to Problem 1 when $p = 2$ in (1).

Theorem 6. Suppose $\mathcal{H}$ has finite dimension. Then the following are equivalent: (i) $\mathcal{C}$ satisfies the MSAP; (ii) $\mathcal{C}^{+}$ is closed; (iii) $\operatorname{co}(\mathcal{C}^{+})$ is closed.

The necessary and sufficient conditions in infinite dimensions for the existence of solutions when $p = 2$ can be found in [20], but they are much more complicated. However, no such results are known for the existence of a solution to Problem 1 when $p \ne 2$.

3.2. Search Algorithms for Problem 1

Searching for a solution to Problem 1 is easier when $M = 1$, since the problem is then a linear problem. Using an algorithm $\mathcal{A}$ for solving this simpler problem, the more difficult problem when $M > 1$ can be solved by applying $\mathcal{A}$ multiple times in an iterative algorithm as follows.

Let $\mathcal{P}$ be the set of all partitions of the data $W$ into $M$ sets, that is, $P = \{W_1,\dots,W_M\} \in \mathcal{P}$ is such that $W_i \cap W_j = \emptyset$ when $i \ne j$ and $W_1 \cup \cdots \cup W_M = W$.

(1) Let $P = \{W_1,\dots,W_M\}$ be a partition of the data $W$. For each $W_i$, use Algorithm $\mathcal{A}$ to find the subspace $V_i$ that is nearest to $W_i$ in the sense that it minimizes $\sum_{w \in W_i} d^{p}(w, V_i)$. We obtain a sequence of subspaces $\{V_1,\dots,V_M\}$.
(2) Construct a new partition by reassigning each data point to its nearest subspace from $\{V_1,\dots,V_M\}$ and by grouping together those points that are assigned to the same subspace.
(3) Iterate between the two steps as described in Algorithm 2.

Algorithm 2: Search for an optimal solution to Problem 1.

It can be shown that this algorithm always converges. However, the convergence may be to a local minimum instead of the global one. For this reason, a good initial partition is important. This initial partition can be supplied, for example, by some modified version of the algebraic methods described in the previous section.

There are many iterative algorithms for finding a solution to the subspace segmentation problem or some of its special cases (see, e.g., [86, 87]). Most of them iterate between partitioning the data and finding the union of subspaces that is consistent with the partition. The general algorithm described below solves the subspace segmentation problem by searching for the minimizer of Problem 1.

Note that the cost functions compared in the while loop of Algorithm 2 are those defined by (1) in Problem 1, evaluated for the successive partitions and their associated subspaces.

The subspace-fitting step in Algorithm 2 is problem dependent. For example, in the situation where $\mathcal{H} = \mathbb{R}^n$ and $\mathcal{C}$ is the set of subspaces of dimension no greater than $d$, this step can be solved using the singular value decomposition (SVD). A similar algorithm works in a much more general context, as described in [2].
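A bare-bones version of this iteration for the finite dimensional, $p = 2$ case (often called $k$-subspaces) is sketched below; the random initialization, stopping rule, and names are simplifying assumptions rather than the authors' algorithm.

```python
# Sketch of the alternating iteration: fit a d-dimensional subspace to each
# cluster by SVD, then reassign each point to its nearest subspace. Illustrative.
import numpy as np

def k_subspaces(W, M, d, n_iter=50, seed=0):
    """W: n x m data matrix; returns labels in {0,...,M-1} and the fitted bases."""
    rng = np.random.default_rng(seed)
    m = W.shape[1]
    labels = rng.integers(0, M, size=m)          # random initial partition
    for _ in range(n_iter):
        bases = []
        for i in range(M):
            Wi = W[:, labels == i]
            if Wi.shape[1] == 0:                 # empty cluster: re-seed with a random point
                Wi = W[:, [rng.integers(m)]]
            U, _, _ = np.linalg.svd(Wi, full_matrices=False)
            bases.append(U[:, :d])               # best subspace for cluster i (p = 2)
        # distance of every point to every subspace, then reassignment
        dists = np.stack([np.linalg.norm(W - B @ (B.T @ W), axis=0) for B in bases])
        new_labels = np.argmin(dists, axis=0)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels, bases
```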

4. Motion Segmentation

The problem of motion segmentation has been described in Section 1.2.4. This problem is a special case of subspace segmentation in which $\mathcal{H} = \mathbb{R}^{2F}$ and $\mathcal{C}$ is the family of subspaces of dimension no bigger than 4. There are many algorithms that have been developed to solve this problem, such as the methods based on sparsity [10, 88–90], the algebraic methods [1, 12, 91], the statistical methods [76, 92–95], and the iterative methods [22, 86]. The most successful methods, however, are all based on spectral clustering or some related method [22, 34, 36, 96, 97]. The main idea is that a similarity matrix is used to describe the “connection” between the points. Once this similarity matrix is obtained, a classical clustering technique (such as $k$-means) is applied to a projection of the similarity matrix onto a low dimensional space (here projection is used loosely and is not necessarily an orthogonal projection). These methods are often tested and compared to the state-of-the-art methods on the Hopkins 155 Dataset [8], which serves as a benchmark database to evaluate motion segmentation algorithms. It contains two-motion and three-motion sequences. Corner features that are extracted and tracked across the frames are provided along with the dataset. The ground truth segmentations are also provided for comparison. Figure 1 shows two samples from the dataset with the extracted features.

Figure 1: Samples from the Hopkins 155 Dataset.
4.1. Nearness to Local Subspace Algorithm

Since most spectral clustering algorithms use a similar overall structure, we describe the Nearness to Local Subspace (NLS) algorithm, which is the best performing of the spectral clustering type methods as applied to the Hopkins 155 Dataset. Other spectral clustering based algorithms are discussed in Section 4.2.

The NLS method works whenever the dimensions of the subspaces are equal and known. First, a local subspace is estimated for each data point (vector). Then, the distances between the local subspaces and the points are computed and a distance matrix is generated. This is followed by the construction of a binary similarity matrix, obtained by applying a data-driven threshold to the distance matrix. Finally, the segmentation problem is converted to a one-dimensional data clustering problem.

The algorithm for subspace segmentation is given in Algorithm 3. It assumes that the subspaces have a common known dimension $d$ (for motion segmentation, $d = 4$). The details of the various steps are as follows.

Algorithm 3: Subspace segmentation.

Dimensionality Reduction and Normalization. A dimensionality reduction step is typical in any algorithm, including those using spectral clustering. Let $\mathbf{W}$ be an $n \times m$ data matrix whose columns are drawn from a union of subspaces, where each subspace has dimension at most $d$. The data is possibly perturbed by noise and may have other imperfections. One way to reduce the dimensionality of the problem is to use the SVD. Specifically, compute the SVD of $\mathbf{W}$,
$$ \mathbf{W} = U \Sigma V^{T}, $$
where $U$ is an $n \times n$ matrix, $V$ is an $m \times m$ matrix, and $\Sigma$ is an $n \times m$ diagonal matrix with diagonal entries $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{\min(n,m)} \ge 0$.

If the rank of the data matrix is not known, one can use the model selection algorithm of [34] to estimate its rank as
$$ r = \operatorname*{arg\,min}_{j} \; \frac{\sigma_{j+1}^{2}}{\sum_{k=1}^{j} \sigma_{k}^{2}} + \kappa\, j, $$
where $\sigma_j$ is the $j$th singular value and $\kappa$ is a suitable constant. Another possible model selection algorithm can be found in [98]. The matrix $U_r \Sigma_r V_r^{T}$ is the best rank-$r$ approximation of $\mathbf{W}$, where $U_r$ refers to the matrix that has the first $r$ columns of $U$ as its columns, $V_r^{T}$ refers to the first $r$ rows of $V^{T}$, and $\Sigma_r$ is the corresponding $r \times r$ diagonal block. In the case of motion segmentation, if there are $M$ independent motions across the frames captured by a moving camera, the rank of $\mathbf{W}$ is at most $4M$.

To reduce the dimensionality of the data, replace the data matrix $\mathbf{W}$ with the $r \times m$ matrix $V_r^{T}$ that consists of the first $r$ rows of $V^{T}$. This step is justified by the following proposition from [22].

Proposition 7. Let $A$ and $B$ be $n \times r$ and $r \times m$ matrices, respectively, with $n \ge r$, and let $\mathbf{W} = AB$. (i) If a set of columns of $B$ is linearly dependent, then the corresponding columns of $\mathbf{W}$ are linearly dependent. (ii) If $A$ is full rank and a set of columns of $B$ is linearly independent, then the corresponding columns of $\mathbf{W}$ are linearly independent.

It should also be noted that this step reduces additive noise as well, especially in the case of light-tailed noise, for example, Gaussian noise. The number of subspaces $M$ corresponds to the number of moving objects. Dimensionality reduction corresponds to Steps 1, 2, and 3 in Algorithm 3.

Another type of data reduction is normalization. Specifically, the columns of the reduced data matrix are normalized to lie on the unit sphere $\mathbb{S}^{r-1}$. This is because, by projecting the subspaces onto the unit sphere, we effectively reduce the dimensionality of the data by one. Moreover, the normalization gives equal contribution of the data matrix columns to the description of the subspaces. Note that the normalization can be done using the $\ell_2$ norms of the columns. This normalization procedure corresponds to Steps 4 and 5 in Algorithm 3.
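A sketch of the reduction and normalization steps is given below (the rank-selection constant, the function name, and the use of the rows of $V^{T}$ are illustrative assumptions consistent with the description above).

```python
# Sketch of Steps 1-5 of Algorithm 3: estimate the rank r from the singular
# values, keep the first r rows of V^T, and normalize the columns to the unit
# sphere. The constant kappa and the function name are illustrative assumptions.
import numpy as np

def reduce_and_normalize(W, kappa=1e-3):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # model selection: r = argmin_j  s_{j+1}^2 / sum_{k<=j} s_k^2 + kappa * j
    costs = [s[j]**2 / np.sum(s[:j]**2) + kappa * j for j in range(1, len(s))]
    r = int(np.argmin(costs)) + 1
    X = Vt[:r, :]                                       # r x m reduced data
    X = X / np.linalg.norm(X, axis=0, keepdims=True)    # columns on the unit sphere
    return X, r
```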

Local Subspace Estimation. Data points (i.e., the column vectors of the reduced and normalized data matrix) that are close to each other are likely to belong to the same subspace. For this reason, a local subspace is estimated for each data point using its closest neighbors. This can be done by generating a distance matrix between all pairs of points and then sorting each column of the distance matrix to find the neighbors of each point $w_j$, which is the $j$th column of the data matrix.

Once the distance matrix between the points is generated, one can find, for each point $w_j$, a set $N_j$ of points consisting of $w_j$ and its $k$ closest neighbors. Then a $d$-dimensional subspace $S_j$ that is nearest (in the least squares sense) to the set $N_j$ is generated. This is accomplished using the SVD of the matrix whose columns are the points of $N_j$: let $U_d^{(j)}$ denote the matrix of the first $d$ left singular vectors associated with $N_j$. Then the column space $S_j$ of $U_d^{(j)}$ is the $d$-dimensional subspace nearest to $N_j$. Local subspace estimation corresponds to Steps 6 to 10 in Algorithm 3.
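The local subspace estimation step can be sketched as follows (the neighborhood size $k$ and all names are illustrative assumptions).

```python
# Sketch of Steps 6-10: for each column x_j of the reduced data X, take its k
# nearest neighbours and fit a d-dimensional local subspace by SVD. Illustrative.
import numpy as np

def local_subspaces(X, d, k):
    """X: r x m data (columns are points). Returns a list of r x d orthonormal bases."""
    m = X.shape[1]
    # pairwise Euclidean distances between columns
    D = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)
    bases = []
    for j in range(m):
        nbrs = np.argsort(D[:, j])[:k + 1]          # x_j and its k closest neighbours
        U, _, _ = np.linalg.svd(X[:, nbrs], full_matrices=False)
        bases.append(U[:, :d])                      # nearest d-dimensional subspace
    return bases
```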

Construction of Binary Similarity Matrix. So far, we have associated a local subspace $S_j$ to each point $w_j$. Ideally, the points, and only those points, that belong to the same subspace as $w_j$ should have zero distance from $S_j$. This suggests computing the distance of each point to each local subspace and forming a distance matrix $D$.

The distance matrix is generated as $D_{ij} = \operatorname{dist}^{q}(w_i, S_j)$ for some exponent $q > 0$. A convenient choice of $q$ is 2. Note that as $D_{ij}$ decreases, the probability of having $w_i$ on the same subspace as $w_j$ increases. Moreover, for $q = 1$, $D_{ij}$ is the Euclidean distance of $w_i$ to the subspace $S_j$ associated with $w_j$.

Since we are not in the ideal case, a point $w_i$ that belongs to the same subspace as $w_j$ may have nonzero distance to $S_j$. However, this distance is likely to be small compared to the distance between $w_i$ and $S_j$ if $w_i$ and $w_j$ do not belong to the same subspace. This suggests that we compute a threshold that will distinguish between these two cases and transform the distance matrix into a binary matrix in which a zero in the $(i, j)$ entry means that $w_i$ and $w_j$ are likely to belong to the same subspace, whereas an entry of one means that $w_i$ and $w_j$ are not likely to belong to the same subspace.

To do this, we convert the distance matrix $D$ into a binary similarity matrix $S$. This is done by applying a data-driven thresholding as follows.

(1) Create a vector $h$ that contains the sorted entries of $D$ from the smallest to the highest values. Scale $h$ so that its smallest value is zero and its largest value is one.
(2) Set the threshold to the value of the $j^{*}$th entry of the sorted vector $h$, where $j^{*}$ is such that the distance between $h$ and the characteristic (step) function $\chi_{\{j, j+1, \dots\}}$ of the discrete index set $\{j, j+1, \dots\}$ is minimized over $j$. If the number of points in each subspace were approximately equal, then we would expect roughly the same number of points in each subspace, and correspondingly many small entries (zero entries ideally). However, this may not be the case in general. For this reason, we compute the data-driven threshold that distinguishes the small entries from the large entries. The data-driven threshold is chosen according to the method described in [1].
(3) Create a similarity matrix $S$ from $D$ such that all entries of $D$ less than the threshold are set to 1 and the others are set to 0.
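The distance computation and a data-driven threshold of this flavor can be sketched as follows; the step-function fitting criterion used to pick the threshold is an assumption (one plausible reading of the description above), as are all names.

```python
# Sketch of Steps 11-17: distance of each point to each local subspace, then a
# data-driven threshold turning distances into a binary similarity matrix.
# The step-function fit used to pick the threshold is an illustrative choice.
import numpy as np

def binary_similarity(X, bases):
    m = X.shape[1]
    # D[i, j] = squared distance of point x_i to the local subspace of x_j
    D = np.empty((m, m))
    for j, B in enumerate(bases):
        R = X - B @ (B.T @ X)
        D[:, j] = np.sum(R**2, axis=0)
    # data-driven threshold: scale the sorted entries to [0, 1] and find the
    # index where they are best approximated by a 0/1 step function
    h = np.sort(D.ravel())
    hs = (h - h[0]) / (h[-1] - h[0] + 1e-12)
    left = np.concatenate([[0.0], np.cumsum(hs**2)])                     # zeros part
    right = np.concatenate([np.cumsum(((1.0 - hs)**2)[::-1])[::-1], [0.0]])  # ones part
    j_star = int(np.argmin(left + right))
    tau = h[min(j_star, len(h) - 1)]
    return (D <= tau).astype(int)                 # 1 = likely same subspace
```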

Segmentation. The last step is to use the similarity matrix $S$ to segment the data. To do this, we first normalize the rows of $S$ using the $\ell_1$-norm, that is, form $\tilde{S} = \Delta^{-1} S$, where $\Delta$ is the diagonal matrix with $\Delta_{ii} = \sum_{j} S_{ij}$. The matrix $\tilde{S}$ is related to the random walk Laplacian [66]. Other normalizations are possible for $S$; however, because of the geometry of the $\ell_p$ ball, $\ell_1$-normalization brings outliers closer to the cluster clouds (distances of outliers decrease monotonically as $p$ decreases to 1). Since the SVD (which will be used next) is associated with $\ell_2$ minimization, it is sensitive to outliers. Therefore, $\ell_1$ normalization works best when the SVD is used.

Observe that the initial data segmentation problem has now been converted to the segmentation of 1-dimensional subspaces spanned by the rows of $\tilde{S}$. This is because, in the ideal case, from the construction of $\tilde{S}$, if $w_i$ and $w_j$ are in the same subspace, the $i$th and $j$th rows of $\tilde{S}$ are equal. Since there are $M$ subspaces, there will be $M$ such 1-dimensional subspaces.

Now the problem is again a subspace segmentation problem, but this time the data matrix is $\tilde{S}$, with each row as a data point. Also, each subspace is 1-dimensional and there are $M$ subspaces. Therefore, we can apply the SVD again to obtain $\tilde{S} = \tilde{U} \tilde{\Sigma} \tilde{V}^{T}$. Using Proposition 7, it can be shown that $\tilde{U}_M \tilde{\Sigma}_M$ can replace $\tilde{S}$, and we cluster the rows of $\tilde{U}_M \tilde{\Sigma}_M$, which is the projection of $\tilde{S}$ onto the span of the first $M$ right singular vectors. Since the problem is only the segmentation of subspaces of dimension 1, we can use any traditional clustering algorithm, such as $k$-means, to cluster the data points. The segmentation corresponds to Steps 18 to 20 in Algorithm 3.
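The final segmentation step can be sketched as follows; the use of scikit-learn's KMeans and the function name are assumptions for illustration.

```python
# Sketch of Steps 18-20: row-normalize the binary similarity matrix, project it
# onto its top-M singular vectors, and cluster with k-means. Illustrative code.
import numpy as np
from sklearn.cluster import KMeans

def segment(S, M):
    """S: m x m binary similarity matrix; M: number of subspaces (motions)."""
    S_tilde = S / (S.sum(axis=1, keepdims=True) + 1e-12)   # l1 row normalization
    U, s, Vt = np.linalg.svd(S_tilde)
    Y = U[:, :M] * s[:M]                  # one M-dimensional feature row per data point
    return KMeans(n_clusters=M, n_init=10, random_state=0).fit_predict(Y)
```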

4.2. Other Spectral Clustering Methods

Other subspace clustering methods use essentially the same general algorithm as above; the main difference is the construction of the similarity matrix $S$. For example, Yan and Pollefeys' method [34] estimates a subspace for each point and then uses the chordal distance between the local subspaces to construct a similarity matrix. The algorithm of Elhamifar and Vidal [88, 89] uses a sparsity method to compute a similarity matrix based on sparse representations of the data. The sparse representations are found using the standard $\ell_1$ minimization techniques of compressed sampling. We have tested these algorithms using two different minimizations and found that both cases produce essentially the same results. Thus, it is our conclusion that it is the spectral clustering performed on the similarity matrix that is the main reason for the good performance of this and other related algorithms.

4.3. Comparison of Motion Segmentation Algorithms

Tables 1, 2, and 3 display some of the experimental results for the Hopkins 155 Dataset. Seven approaches to motion segmentation are compared: GPCA [12], RANSAC [99], local subspace affinity (LSA) [34], MLS [93], agglomerative lossy compression (ALC) [100], sparse subspace clustering (SSC) [88], and NLS. An evaluation of those algorithms is presented in [88], with a minor error in the tabulated results for the articulated three-motion analysis of SSC-N. SSC-B and SSC-N correspond to Bernoulli and normal random projections, respectively [88]. Table 1 displays the misclassification rates for the two-motion video sequences, Table 2 shows the misclassification rates for the three-motion sequences, and Table 3 presents the misclassification rates for all of the video sequences. It can be seen that the NLS algorithm outperforms all of the other algorithms.

Table 1: Percentage of classification errors for sequences with two motions.
Table 2: Percentage of classification errors for sequences with three motions.
Table 3: Percentage of classification errors for all sequences.

Acknowledgment

This research is supported in part by NSF Grant DMS-110863.

References

1. A. Aldroubi and A. Sekmen, “Nearness to local subspace algorithm for subspace and motion segmentation,” IEEE Signal Processing Letters, vol. 19, no. 10, Article ID 6275471, pp. 704–707, 2012.
2. A. Aldroubi, C. Cabrelli, and U. Molter, “Optimal non-linear models for sparsity and sampling,” Journal of Fourier Analysis and Applications, vol. 14, no. 5-6, pp. 793–812, 2008.
3. Y. Ma, A. Y. Yang, H. Derksen, and R. M. Fossum, “Estimation of subspace arrangements with applications in modeling and segmenting mixed data,” SIAM Review, vol. 50, no. 3, pp. 413–458, 2008.
4. H.-P. Kriegel, P. Kroeger, and A. Zimek, “Subspace clustering,” WIREs Data Mining and Knowledge Discovery, vol. 2, pp. 351–364, 2012.
5. S. Nitzan and A. Olevskii, “Revisiting Landau's density theorems for Paley-Wiener spaces,” Comptes Rendus Mathématique, vol. 350, no. 9-10, pp. 509–512, 2012.
6. R. Vidal, Y. Ma, and S. Sastry, “Generalized principal component analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 12, pp. 1–15, 2005.
7. G. Chen, A. V. Little, M. Maggioni, and L. Rosasco, “Some recent advances in multiscale geometric analysis of point clouds,” in Wavelets and Multiscale Analysis, J. Cohen and A. I. Zayed, Eds., pp. 199–225, Birkhäuser, Boston, Mass, USA, 2011.
8. R. Vidal, “Subspace clustering,” IEEE Signal Processing Magazine, vol. 28, no. 3, pp. 52–68, 2011.
9. Y. Lyubarskii and W. R. Madych, “Interpolation of functions from generalized Paley-Wiener spaces,” Journal of Approximation Theory, vol. 133, no. 2, pp. 251–268, 2005.
10. Y. Sugaya and K. Kanatani, “Improved multistage learning for multibody segmentation,” in Proceedings of the 5th International Conference on Computer Vision Theory and Applications (VISAPP '10), pp. 199–206, 2010.
11. R. Hu, L. Fan, and L. Liu, “Co-segmentation of 3d shapes via subspace clustering,” Computer Graphics Forum, vol. 31, pp. 1703–1713, 2012.
12. R. Vidal, Y. Ma, and S. Sastry, “Generalized principal component analysis (GPCA),” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, pp. 1945–1959, 2005.
13. R. Basri and D. W. Jacobs, “Lambertian reflectance and linear subspaces,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 2, pp. 218–233, 2003.
14. J. Ho, M. H. Yang, J. Lim, K. C. Lee, and D. Kriegman, “Clustering appearances of objects under varying illumination conditions,” in Proceedings of the Computer Society Conference on Computer Vision and Pattern Recognition, pp. I/11–I/18, June 2003.
15. A. Aldroubi and K. Gröchenig, “Nonuniform sampling and reconstruction in shift-invariant spaces,” SIAM Review, vol. 43, no. 4, pp. 585–620, 2001.
16. W. K. Allard, G. Chen, and M. Maggioni, “Multi-scale geometric methods for data sets II: geometric multi-resolution analysis,” Applied and Computational Harmonic Analysis, vol. 32, no. 3, pp. 435–462, 2012.
17. M. Soltanolkotabi and E. J. Candès, “A geometric analysis of subspace clustering with outliers,” Annals of Statistics, vol. 40, no. 4, pp. 2195–2238, 2012.
18. Y. Sugaya and K. Kanatani, “Multi-stage optimization for multi-body motion segmentation,” IEICE Transactions on Information and Systems, pp. 1935–1942, 2004.
19. A. D. Szlam, M. Maggioni, and R. R. Coifman, “Regularization on graphs with function-adapted diffusion processes,” Journal of Machine Learning Research, vol. 9, pp. 1711–1739, 2008.
20. A. Aldroubi and R. Tessera, “On the existence of optimal unions of subspaces for data modeling and clustering,” Foundations of Computational Mathematics, vol. 11, no. 3, pp. 363–379, 2011.
21. M. Petrik, “An analysis of Laplacian methods for value function approximation in MDPs,” in Proceedings of the 20th International Joint Conference on Artificial Intelligence, Morgan Kaufmann, 2007.
22. A. Aldroubi and A. Sekmen, “Reduced row echelon form and non-linear approximation for subspace segmentation and high-dimensional data clustering,” submitted to Applied and Computational Harmonic Analysis.
23. R. Tron and R. Vidal, “A benchmark for the comparison of 3-D motion segmentation algorithms,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), June 2007.
24. C. Eckart and G. Young, “The approximation of one matrix by another of lower rank,” Psychometrika, vol. 1, no. 3, pp. 211–218, 1936.
25. M. Unser, A. Aldroubi, and M. Eden, “B-spline signal processing. Part I. Theory,” IEEE Transactions on Signal Processing, vol. 41, no. 2, pp. 821–833, 1993.
26. M. Unser, A. Aldroubi, and M. Eden, “B-spline signal processing. Part II. Efficient design and applications,” IEEE Transactions on Signal Processing, vol. 41, no. 2, pp. 834–848, 1993.
27. G. Chen and G. Lerman, “Foundations of a multi-way spectral clustering framework for hybrid linear modeling,” Foundations of Computational Mathematics, vol. 9, no. 5, pp. 517–558, 2009.
28. G. Liu, Z. Lin, and Y. Yu, “Robust subspace segmentation by low-rank representation,” in Proceedings of the 27th International Conference on Machine Learning (ICML '10), pp. 663–670, June 2010.
29. F. De La Torre and M. J. Black, “A framework for robust subspace learning,” International Journal of Computer Vision, vol. 54, no. 1–3, pp. 117–142, 2003.
30. E. Candès and J. Romberg, “Sparsity and incoherence in compressive sampling,” Inverse Problems, vol. 23, no. 3, pp. 969–985, 2007.
31. G. Haro, G. Randall, and G. Sapiro, “Translated Poisson mixture model for stratification learning,” International Journal of Computer Vision, vol. 80, no. 3, pp. 358–374, 2008.
32. Y. C. Eldar and M. Mishali, “Robust recovery of signals from a structured union of subspaces,” IEEE Transactions on Information Theory, vol. 55, no. 11, pp. 5302–5316, 2009.
33. S. Kang and K. H. Kwon, “Recovery of missing samples for oversampling in shift invariant spaces,” Journal of Mathematical Analysis and Applications, vol. 391, no. 1, pp. 139–146, 2012.
34. J. Yan and M. Pollefeys, “A general framework for motion segmentation: independent, articulated, rigid, non-rigid, degenerate and nondegenerate,” in Proceedings of the 9th European Conference on Computer Vision, pp. 94–106, 2006.
35. S. R. Rao, A. Y. Yang, S. S. Sastry, and Y. Ma, “Robust algebraic segmentation of mixed rigid-body and planar motions from two views,” International Journal of Computer Vision, vol. 88, no. 3, pp. 425–446, 2010.
36. T. Lin and H. Zha, “Riemannian manifold learning,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, pp. 796–809, 2008.
37. C. Bregler, A. Hertzmann, and H. Biermann, “Recovering non-rigid 3D shape from image streams,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '00), pp. 690–696, June 2000.
38. M. Brand, “Morphable 3D models from video,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. II456–II463, December 2001.
39. E. J. Candès, “The restricted isometry property and its implications for compressed sensing,” Comptes Rendus Mathématique, vol. 346, no. 9-10, pp. 589–592, 2008.
40. E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
41. S. Foucart, “A note on guaranteed sparse recovery via ℓ1-minimization,” Applied and Computational Harmonic Analysis, vol. 29, no. 1, pp. 97–103, 2010.
42. P. Boufounos, G. Kutyniok, and H. Rauhut, “Sparse recovery from combined fusion frame measurements,” IEEE Transactions on Information Theory, vol. 57, no. 6, pp. 3864–3876, 2011.
43. A. Aldroubi, X. Chen, and A. M. Powell, “Perturbations of measurement matrices and dictionaries in compressed sensing,” Applied and Computational Harmonic Analysis, vol. 33, no. 2, pp. 282–291, 2012.
44. Q. Sun, “Recovery of sparsest signals via ℓq-minimization,” Applied and Computational Harmonic Analysis, vol. 32, no. 3, pp. 329–341, 2012.
45. M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311–4322, 2006.
46. M. Aharon, M. Elad, and A. M. Bruckstein, “On the uniqueness of overcomplete dictionaries, and a practical way to retrieve them,” Linear Algebra and Its Applications, vol. 416, no. 1, pp. 48–67, 2006.
47. G. Lerman and T. Zhang, “Robust recovery of multiple subspaces by geometric ℓp minimization,” The Annals of Statistics, vol. 39, no. 5, pp. 2686–2715, 2011.
48. C. Archambeau, N. Delannay, and M. Verleysen, “Mixtures of robust probabilistic principal component analyzers,” Neurocomputing, vol. 71, no. 7-9, pp. 1274–1282, 2008.
49. Y. M. Lu and M. N. Do, “A theory for sampling signals from a union of subspaces,” IEEE Transactions on Signal Processing, vol. 56, no. 6, pp. 2334–2345, 2008.
50. P. W. Jones, M. Maggioni, and R. Schul, “Manifold parametrizations by eigenfunctions of the Laplacian and heat kernels,” Proceedings of the National Academy of Sciences of the United States of America, vol. 105, no. 6, pp. 1803–1808, 2008.
51. Q. Wu, J. Guinney, M. Maggioni, and S. Mukherjee, “Learning gradients: predictive models that infer geometry and statistical dependence,” Journal of Machine Learning Research, vol. 11, pp. 2175–2198, 2010.
52. J. Zhang, H. Huang, and J. Wang, “Manifold learning for visualizing and analyzing high-dimensional data,” IEEE Intelligent Systems, vol. 25, no. 4, Article ID 5401149, pp. 54–61, 2010.
53. F. Lauer and C. Schnorr, “Spectral clustering of linear subspaces for motion segmentation,” in Proceedings of the IEEE International Conference on Computer Vision, 2009.
54. A. Aldroubi and M. Unser, “Sampling procedures in function spaces and asymptotic equivalence with Shannon's sampling theory,” Numerical Functional Analysis and Optimization, vol. 15, no. 1-2, pp. 1–21, 1994.
55. M. Unser, “Sampling—50 years after Shannon,” Proceedings of the IEEE, vol. 88, no. 4, pp. 569–587, 2000.
56. A. Aldroubi, “Non-uniform weighted average sampling and reconstruction in shift-invariant and wavelet spaces,” Applied and Computational Harmonic Analysis, vol. 13, no. 2, pp. 151–161, 2002.
57. M. Anastasio and C. Cabrelli, “Sampling in a union of frame generated subspaces,” Sampling Theory in Signal and Image Processing, vol. 8, no. 3, pp. 261–286, 2009.
58. J. Xian and W. Sun, “Local sampling and reconstruction in shift-invariant spaces and their applications in spline subspaces,” Numerical Functional Analysis and Optimization, vol. 31, no. 3, pp. 366–386, 2010.
59. A. Bhandari and A. I. Zayed, “Shift-invariant and sampling spaces associated with the fractional Fourier transform domain,” IEEE Transactions on Signal Processing, vol. 60, no. 4, pp. 1627–1637, 2012.
60. D. Kushnir, M. Galun, and A. Brandt, “Fast multiscale clustering and manifold identification,” Pattern Recognition, vol. 39, no. 10, pp. 1876–1891, 2006.
61. S. Ericsson, “Generalized sampling in shift invariant spaces with frames,” Acta Mathematica Sinica, vol. 28, no. 9, pp. 1823–1844, 2012.
62. I. Maravić and M. Vetterli, “Sampling and reconstruction of signals with finite rate of innovation in the presence of noise,” IEEE Transactions on Signal Processing, vol. 53, no. 8, pp. 2788–2805, 2005.
63. J. A. Hogan, “Frame-based nonuniform sampling in Paley-Wiener spaces,” Journal of Applied Functional Analysis, vol. 2, no. 4, pp. 361–400, 2007.
64. H. Boche and U. J. Mönich, “There exists no globally uniformly convergent reconstruction for the Paley-Wiener space PW1/π of bandlimited functions sampled at Nyquist rate,” IEEE Transactions on Signal Processing, vol. 56, no. 7, pp. 3170–3179, 2008.
65. H. Boche and U. J. Mönich, “Unboundedness of thresholding and quantization for bandlimited signals,” Signal Processing, vol. 92, no. 12, pp. 2821–2829, 2012.
66. S. Rao, R. Tron, R. Vidal, and Y. Ma, “Motion segmentation in the presence of outlying, incomplete, or corrupted trajectories,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 10, pp. 1832–1845, 2010.
67. B. A. Bailey, “Multivariate polynomial interpolation and sampling in Paley-Wiener spaces,” Journal of Approximation Theory, vol. 164, no. 4, pp. 460–487, 2012.
68. A. Aldroubi, C. Cabrelli, D. Hardin, and U. Molter, “Optimal shift invariant spaces and their Parseval frame generators,” Applied and Computational Harmonic Analysis, vol. 23, no. 2, pp. 273–283, 2007.
69. M. Vetterli, P. Marziliano, and T. Blu, “Sampling signals with finite rate of innovation,” IEEE Transactions on Signal Processing, vol. 50, no. 6, pp. 1417–1428, 2002.
70. P. L. Dragotti, M. Vetterli, and T. Blu, “Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang-Fix,” IEEE Transactions on Signal Processing, vol. 55, no. 5, pp. 1741–1757, 2007.
71. V. Y. F. Tan and V. K. Goyal, “Estimating signals with finite rate of innovation from noisy samples: a stochastic algorithm,” IEEE Transactions on Signal Processing, vol. 56, no. 10, pp. 5135–5146, 2008.
72. N. Bi, M. Z. Nashed, and Q. Sun, “Reconstructing signals with finite rate of innovation from noisy samples,” Acta Applicandae Mathematicae, vol. 107, no. 1–3, pp. 339–372, 2009.
73. J. Berent, P. L. Dragotti, and T. Blu, “Sampling piecewise sinusoidal signals with finite rate of innovation methods,” IEEE Transactions on Signal Processing, vol. 58, no. 2, pp. 613–625, 2010.
74. T. Kanade and D. D. Morris, “Factorization methods for structure from motion,” Philosophical Transactions of the Royal Society of London A, vol. 356, no. 1740, pp. 1153–1173, 1998.
75. J. P. Costeira and T. Kanade, “A multibody factorization method for independently moving objects,” International Journal of Computer Vision, vol. 29, no. 3, pp. 159–179, 1998.
76. P. H. S. Torr, “Geometric motion segmentation and model selection,” Philosophical Transactions of the Royal Society of London A, vol. 356, no. 1740, pp. 1321–1340, 1998.
77. A. Goh and R. Vidal, “Segmenting motions of different types by unsupervised manifold clustering,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), vol. 6, June 2007.
78. S. Smale and D. X. Zhou, “Shannon sampling II: connections to learning theory,” Applied and Computational Harmonic Analysis, vol. 19, no. 3, pp. 285–302, 2005.
79. W. B. Johnson and J. Lindenstrauss, “Extensions of Lipschitz mappings into a Hilbert space,” Contemporary Mathematics, vol. 26, pp. 189–206, 1984.
80. D. Achlioptas, “Database-friendly random projections: Johnson-Lindenstrauss with binary coins,” Journal of Computer and System Sciences, vol. 66, no. 4, pp. 671–687, 2003.
81. N. Silva and J. Costeira, “Subspace segmentation with outliers: a Grassmannian approach to the maximum consensus subspace,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2008.
82. R. I. Arriaga and S. Vempala, “An algorithmic theory of learning: robust concepts and random projection,” in Proceedings of the 40th Annual IEEE Symposium on Foundations of Computer Science, pp. 616–623, October 1999.
83. A. Aldroubi, M. Anastasio, C. Cabrelli, and U. Molter, “A dimension reduction scheme for the computation of optimal unions of subspaces,” Sampling Theory in Signal and Image Processing, vol. 10, no. 1-2, pp. 135–150, 2011.
84. C. W. Gear, “Multibody grouping from motion images,” International Journal of Computer Vision, vol. 29, no. 2, pp. 133–150, 1998.
85. R. Vidal, R. Tron, and R. Hartley, “Multiframe motion segmentation with missing data using PowerFactorization and GPCA,” International Journal of Computer Vision, vol. 79, no. 1, pp. 85–105, 2008.
86. P. Tseng, “Nearest q-flat to m points,” Journal of Optimization Theory and Applications, vol. 105, no. 1, pp. 249–252, 2000.
87. P. S. Bradley and O. L. Mangasarian, “k-plane clustering,” Journal of Global Optimization, vol. 16, no. 1, pp. 23–32, 2000.
88. E. Elhamifar and R. Vidal, “Sparse subspace clustering,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR '09), pp. 2790–2797, June 2009.
89. E. Elhamifar and R. Vidal, “Clustering disjoint subspaces via sparse representation,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '10), pp. 1926–1929, March 2010.
90. E. Elhamifar and R. Vidal, “Sparse subspace clustering: algorithm, theory, and applications,” http://arxiv.org/abs/1203.1005.
91. A. Sekmen, Subspace Segmentation, Vanderbilt University, 2012.
92. O. Faugeras, P. Torr, T. Kanade et al., “Geometric motion segmentation and model selection—discussion,” Philosophical Transactions of the Royal Society A, vol. 356, pp. 1338–1340, 1998.
93. A. Gruber and Y. Weiss, “Multibody factorization with uncertainty and missing data using the EM algorithm,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '04), pp. I707–I714, July 2004.
94. L. Candillier, I. Tellier, F. Torre, and O. Bousquet, “SSC: statistical subspace clustering,” in 5èmes Journées d'Extraction et Gestion des Connaissances (EGC '05), pp. 177–182, Paris, France, 2005.
95. S. Smale and D.-X. Zhou, “Learning theory estimates via integral operators and their approximations,” Constructive Approximation, vol. 26, no. 2, pp. 153–172, 2007.
96. G. Chen, S. Atev, and G. Lerman, “Kernel spectral curvature clustering (KSCC),” in Proceedings of the IEEE 12th International Conference on Computer Vision Workshops (ICCV '09), pp. 765–772, October 2009.
97. T. Zhang, A. Szlam, Y. Wang, and G. Lerman, “Randomized hybrid linear modeling by local best-fit flats,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '10), pp. 1927–1934, June 2010.
98. L. Zappella, X. Lladó, E. Provenzi, and J. Salvi, “Enhanced local subspace affinity for feature-based motion segmentation,” Pattern Recognition, vol. 44, no. 2, pp. 454–470, 2011.
99. M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
100. T. Sarlós, “Improved approximation algorithms for large matrices via random projections,” in Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS '06), pp. 143–152, October 2006.