Discrete Dynamics in Nature and Society
Volume 2014 (2014), Article ID 914963, 9 pages
http://dx.doi.org/10.1155/2014/914963
Research Article

Mixture Augmented Lagrange Multiplier Method for Tensor Recovery and Its Applications

1Department of Transportation Engineering, Beijing Institute of Technology, Beijing 100081, China
2Integrated Information System Research Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
3Marvell Semiconductor Inc., 5488 Marvell LN, Santa Clara, CA 95054, USA

Received 30 November 2013; Accepted 30 January 2014; Published 17 March 2014

Academic Editor: Huimin Niu

Copyright © 2014 Huachun Tan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The problem of data recovery in multiway arrays (i.e., tensors) arises in many fields such as computer vision, image processing, and traffic data analysis. In this paper, we propose a scalable and fast algorithm for recovering a low-n-rank tensor with an unknown fraction of its entries arbitrarily corrupted. In the new algorithm, the tensor recovery problem is formulated as a mixture convex multilinear Robust Principal Component Analysis (RPCA) optimization problem that minimizes a sum of the nuclear norm and the ℓ1-norm. The problem is well structured in both the objective function and the constraints, and we apply the augmented Lagrange multiplier method, which exploits this structure to solve the problem efficiently. In the experiments, the algorithm is compared with a state-of-the-art algorithm on both synthetic and real data, including traffic data, image data, and video data.

1. Introduction

A tensor is a multidimensional array. It is the higher-order generalization of vectors and matrices and has many applications in information sciences, computer vision, graph analysis [1], and traffic data analysis [2–4]. In the real world, the size and redundancy of data are increasing rapidly, and nearly all existing high-dimensional real-world data either have the natural form of a tensor (e.g., multichannel images) or can be grouped into tensor form (e.g., tensor faces [5], the traffic data tensor model [2–4], and videos), so challenges arise in many scientific areas when one confronts such high-dimensional data. In many tasks, one wants to capture the underlying low-dimensional structure of the tensor data or seeks to detect its irregular sparse patterns, for example in image compression [6], foreground segmentation [7], saliency detection [8], and traffic data completion [2, 3]. It is therefore desirable to develop algorithms that can capture the low-dimensional structure or the irregular sparse patterns in high-dimensional tensor data.

In the two-dimensional case, that is, the matrix case, "rank" and "sparsity" are the most useful tools for matrix-valued data analysis. Chandrasekaran et al. [9] proposed the concept of "rank-sparsity incoherence" to depict the fundamental identifiability of recovering the low-rank and sparse components. Wright et al. [10] and Candès et al. [11] demonstrated that if the irregular sparse matrix E is sufficiently sparse (relative to the rank of A), one can accomplish the sparse and low-rank recovery by solving the following convex optimization problem:

min_{A,E} ‖A‖_* + λ‖E‖₁  subject to  D = A + E, (1)

where D is the given matrix to be recovered; A is the low-rank component of D; E is the sparse component of D; ‖·‖_* denotes the nuclear norm, defined as the sum of all singular values; ‖·‖₁ denotes the sum of the absolute values of the matrix entries; and λ is a positive weighting parameter. This optimization method is called Robust Principal Component Analysis [10, 11] (RPCA) or Principal Component Pursuit (PCP) due to its ability to exactly recover the underlying low-rank matrix even when it is corrupted by large entries or outliers.

Although the low-rank matrix recovery problem has been well studied, there is not much work on tensors. Li et al. [12] derived a method for the optimal tensor decomposition model. Considering a real N-mode tensor 𝒳, the best rank-(r₁, …, r_N) approximation is to find a tensor 𝓛 with prespecified rank_n(𝓛) ≤ r_n that minimizes the least-squares cost function

min_𝓛 ‖𝒳 − 𝓛‖_F². (2)

The n-rank conditions imply that 𝓛 should have the Tucker decomposition [13] 𝓛 = 𝒢 ×₁ U₁ ×₂ U₂ ⋯ ×_N U_N. For the application, they applied the model to high-dimensional tensor-like visual data by dividing the observed tensor into a low-dimensional structure plus unbounded but sparse irregular patterns: 𝒳 = 𝓛 + 𝓢. Under the assumption that the n-ranks of 𝓛 are small and the corruption 𝓢 is sparse, the original objective is

min_{𝓛,𝓢} Σ_n rank(L_(n)) + λ‖𝓢‖₀  subject to  𝒳 = 𝓛 + 𝓢. (3)

In order to solve the problem, they made some conversions to (3) and extended the matrix robust PCA problem to the tensor case. A relaxation technique was used to separate the interdependent relationships, and the block coordinate descent (BCD) method was used to solve the low-n-rank tensor recovery problem; the result is their rank sparsity tensor decomposition (RSTD) algorithm. In fact, their algorithm can be seen as a basic version of the Lagrange multiplier method. Although simple and provably correct, RSTD requires a very large number of iterations to converge, and it is difficult to choose its parameters for speedup. Besides, due to the properties of the basic Lagrange multiplier method, the accuracy of its results needs to be improved.

In this paper, a new algorithm for low-n-rank tensor recovery, termed the Mixture Augmented Lagrange Multiplier Method for Tensor Recovery (MALM-TR), is proposed. In the new algorithm, analogous to RSTD [12], we convert the tensor recovery problem into a mixture convex optimization problem by adopting a relaxation strategy that eliminates the interdependent trace norm and ℓ1-norm constraints. The elements involved in the resulting problem are all matrices, so it can be treated as a multilinear extension of the RPCA problem and subsumes the matrix RPCA problem as a special case. Lin et al. [14] proved that the matrix RPCA problem can be solved by ALM with higher precision, lower storage/memory demands, and a pleasing Q-linear convergence speed. Inspired by these merits, we extend the augmented Lagrange multiplier (ALM) method to the multilinear RPCA problem and show that ALM is suitable not only for the matrix RPCA problem but also for the multilinear RPCA problem.

The algorithm is then applied to real-world data recovery tasks, including traffic data recovery, image restoration, and background modeling.

In the traffic data analysis area, detector and communication malfunctions often leave traffic data noisy, especially with outlier noise, which has a great impact on the performance of Intelligent Transportation Systems (ITS). Therefore, it is essential to deal with outlier data in order to fully exploit the data and realize ITS applications. In the application part of this paper, we introduce the tensor form to model traffic data, which can encode the multimode (e.g., week, day, record) correlations of the traffic data simultaneously and preserve its multiway nature. For example, assume a loop detector collects traffic volume data every 15 minutes, so it produces 96 records per day. If we have 20 weeks of traffic volume data, these data can be formed into a tensor of size 20 × 7 × 96. The proposed tensor-based method, which can mine the multimode correlations of traffic data mentioned above, is then used to remove outlier noise from the traffic data.

A multichannel image can be seen as a tensor with multiple dimensions. For example, an RGB image has three channels (Red, Green, and Blue), so it can be represented as a third-order tensor of size height × width × 3. For this application, the proposed method is used to remove noise from images. Though the underlying assumption may not be reasonable for some natural images, it fits many kinds of visual data such as structured images (e.g., the façade image), CT/MRI data, and multispectral images. Besides images, video data can also be grouped into tensor form. For example, a video with 300 gray-scale frames, each of size m × n, forms a tensor of size m × n × 300. For the video application, the proposed method is used for background modeling.

The rest of the paper is organized as follows. Section 2 presents some notations and states some basic properties of tensors. Section 3 discusses the detailed process of our proposed algorithm. Section 4 tests the algorithm on different settings, varying from simulated data to applications in computer vision, image processing, and traffic recovery. Finally, some concluding remarks are provided in Section 5.

2. Notation and Basics on Tensor Model

In this paper, the nomenclature and notation on tensors from [1, 12] are partially adopted. Scalars are denoted by lowercase letters (a, b, c, …), vectors by bold lowercase letters (a, b, c, …), and matrices by uppercase letters (A, B, C, …). Tensors are written as calligraphic letters (𝒜, ℬ, 𝒞, …). N-mode tensors are denoted as 𝒳 ∈ ℝ^{I₁×I₂×⋯×I_N}. The elements of an N-mode tensor are denoted as x_{i₁⋯i_N}, where 1 ≤ i_n ≤ I_n, 1 ≤ n ≤ N. The mode-n unfolding (also called matricization or flattening) of a tensor 𝒳 is defined as unfold(𝒳, n) = X_(n). The tensor element (i₁, i₂, …, i_N) is mapped to the matrix element (i_n, j), where

j = 1 + Σ_{k=1, k≠n}^{N} (i_k − 1) J_k,  with  J_k = Π_{m=1, m≠n}^{k−1} I_m. (4)

Therefore, X_(n) ∈ ℝ^{I_n × J}, where J = Π_{k≠n} I_k. Accordingly, the inverse operator fold can be defined as fold(X_(n), n) = 𝒳.
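Under the convention above, the unfold/fold pair can be sketched in NumPy as follows (function names and the exact column ordering of the unfolded matrix are illustrative choices; the pair is self-consistent, i.e., fold inverts unfold):

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding: move mode n to the front, then flatten the rest."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: reshape back and move mode n into place."""
    full = [shape[n]] + [s for k, s in enumerate(shape) if k != n]
    return np.moveaxis(M.reshape(full), 0, n)

X = np.arange(24).reshape(2, 3, 4)
X1 = unfold(X, 1)                               # shape (3, 8)
assert np.array_equal(fold(X1, 1, X.shape), X)  # round trip recovers X
```

Any fixed ordering of the remaining modes gives a valid unfolding for the algorithms in this paper, as long as unfold and fold use the same one.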

The n-rank of an N-dimensional tensor 𝒳, denoted by rank_n(𝒳), is the rank of the mode-n unfolding matrix X_(n):

rank_n(𝒳) = rank(X_(n)). (5)

If the n-ranks are small relative to the size of the tensor, we call it a low-n-rank tensor.

The inner product of two same-sized tensors 𝒳, 𝒴 ∈ ℝ^{I₁×⋯×I_N} is defined as the sum of the products of their entries; that is,

⟨𝒳, 𝒴⟩ = Σ_{i₁} ⋯ Σ_{i_N} x_{i₁⋯i_N} y_{i₁⋯i_N}. (6)

The corresponding Frobenius norm is ‖𝒳‖_F = √⟨𝒳, 𝒳⟩. Besides, the ℓ0 norm of a tensor 𝒳, denoted by ‖𝒳‖₀, is the number of nonzero elements in 𝒳, and the ℓ1 norm is defined as ‖𝒳‖₁ = Σ |x_{i₁⋯i_N}|. It is clear that ‖𝒳‖_F = ‖X_(n)‖_F, ‖𝒳‖₀ = ‖X_(n)‖₀, and ‖𝒳‖₁ = ‖X_(n)‖₁ for any n.
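These norm identities are easy to check numerically, since unfolding merely rearranges the entries. A small NumPy sketch (illustrative only, using the mode-1 unfolding):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4, 5))
X1 = np.moveaxis(X, 1, 0).reshape(4, -1)    # mode-1 unfolding, shape (4, 15)

assert np.isclose(np.linalg.norm(X), np.linalg.norm(X1))   # Frobenius norm
assert np.count_nonzero(X) == np.count_nonzero(X1)         # l0 norm
assert np.isclose(np.abs(X).sum(), np.abs(X1).sum())       # l1 norm
```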

The n-mode (matrix) product of a tensor 𝒳 ∈ ℝ^{I₁×⋯×I_N} with a matrix U ∈ ℝ^{J×I_n} is denoted by 𝒳 ×_n U and is of size I₁ × ⋯ × I_{n−1} × J × I_{n+1} × ⋯ × I_N. In terms of flattened matrices, the n-mode product can be expressed as

Y = 𝒳 ×_n U  ⇔  Y_(n) = U X_(n). (7)
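The identity Y_(n) = U X_(n) gives a direct way to compute the n-mode product: unfold, multiply, fold back. A minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def mode_product(X, U, n):
    """n-mode product X ×_n U, computed as fold_n(U @ X_(n))."""
    Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)   # mode-n unfolding
    Yn = U @ Xn
    new_shape = [U.shape[0]] + [s for k, s in enumerate(X.shape) if k != n]
    return np.moveaxis(Yn.reshape(new_shape), 0, n)

X = np.random.rand(2, 3, 4)
U = np.random.rand(5, 3)
Y = mode_product(X, U, 1)                   # size 2 × 5 × 4
assert np.allclose(Y, np.einsum('jb,abc->ajc', U, X))
```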

3. MALM-TR

This section is separated into two parts. In Section 3.1, we convert the low-n-rank tensor recovery problem into a multilinear RPCA problem. Section 3.2 briefly introduces the ALM approach, extends it to solve the multilinear RPCA problem, and presents the details of the proposed algorithm.

3.1. The Multilinear RPCA Problem

The derivation starts with the general version [10] of the matrix recovery problem:

min_{A,E} rank(A) + λ‖E‖₀  subject to  D = A + E, (8)

where D is the given matrix to be recovered; A is the low-rank component of D; E is the sparse component of D; rank(A) denotes the rank of A; ‖E‖₀ denotes the number of nonzero matrix entries; and λ is a positive weighting parameter. The higher-order tensor recovery problem can be generated from the matrix (i.e., 2nd-order tensor) case by utilizing the form of (8), leading to the following formulation:

min_{𝓛,𝓢} rank_CP(𝓛) + λ‖𝓢‖₀  subject to  𝒳 = 𝓛 + 𝓢, (9)

where 𝒳, 𝓛, 𝓢 are N-mode tensors with identical size in each mode: 𝒳 is the observed tensor data, 𝓛 and 𝓢 represent the corresponding structured part and irregular sparse part, respectively, and rank_CP(𝓛) is the minimum number of rank-1 tensors that generate 𝓛 as their sum [15, 16]. However, (9) is unsolvable because there is no straightforward algorithm to determine the CP-rank of a given tensor, and the ℓ0 norm is highly nonconvex. But when the given tensor is a low-n-rank tensor, we can use the n-ranks of the unfoldings of the tensor instead of its CP-rank to capture its global information. Therefore, we can minimize the n-ranks of the given tensor, respectively, instead of minimizing the CP-rank. Obviously, rank_n(𝓛) is equal to rank(L_(n)). As a result, a function which minimizes all the n-ranks of the given tensor is obtained to replace (9):

min_{𝓛,𝓢} Σ_{n=1}^{N} (rank(L_(n)) + λ‖S_(n)‖₀)  subject to  𝒳 = 𝓛 + 𝓢, (10)

where L_(n) and S_(n) are the mode-n unfoldings of 𝓛 and 𝓢. Equation (10) is still a highly nonconvex optimization problem, and no efficient solution is known due to the nonconvexity of the matrix rank and the ℓ0 norm. Fortunately, the nuclear norm and the ℓ1 norm are the tightest convex approximations of the rank and the ℓ0 norm [10, 11], respectively. By replacing the rank with the nuclear norm and the ℓ0-norm with the ℓ1-norm, a tractable convex optimization problem is obtained:

min_{𝓛,𝓢} Σ_{n=1}^{N} (‖L_(n)‖_* + λ‖S_(n)‖₁)  subject to  𝒳 = 𝓛 + 𝓢. (11)

In order to utilize the information of each mode as much as possible, the n-rank minimization problems of each mode are combined with weighting parameters α_n, λ_n, as defined in [17, 18]. Thus, the tensor recovery problem becomes

min_{𝓛,𝓢} Σ_{n=1}^{N} (α_n‖L_(n)‖_* + λ_n‖S_(n)‖₁)  subject to  𝒳 = 𝓛 + 𝓢. (12)

Problem (12) is still hard to solve due to the interdependent trace norm and ℓ1 norm constraints. In order to simplify the problem, we introduce additional auxiliary matrices M_n and N_n, n = 1, …, N. Then we relax the equality constraints to L_(n) = M_n and S_(n) = N_n, with the data-fitting constraint kept in a penalized (Frobenius-norm) form. It is easy to check that this relaxation corresponds to the stable Principal Component Pursuit (sPCP) in the matrix case [19]. Finally, we get the relaxed form of (12), which can be seen as a multilinear RPCA problem:

min_{𝓛,𝓢,{M_n},{N_n}} Σ_{n=1}^{N} (α_n‖M_n‖_* + λ_n‖N_n‖₁ + (β_n/2)‖X_(n) − M_n − N_n‖_F²)  subject to  L_(n) = M_n, S_(n) = N_n, n = 1, …, N. (13)

3.2. Optimization Process

In [20], the general method of augmented Lagrange multipliers is introduced for solving constrained optimization problems of the kind

min f(x)  subject to  h(x) = 0, (14)

where f: ℝᵈ → ℝ and h: ℝᵈ → ℝᵐ. The augmented Lagrange function is defined as

L(x, Y, μ) = f(x) + ⟨Y, h(x)⟩ + (μ/2)‖h(x)‖_F², (15)

where μ is a positive scalar and Y is the Lagrange multiplier; the optimization problem can then be solved via the method of augmented Lagrange multipliers (see [21] for more details).

It is observed that (13) is well structured: a separable structure emerges in both the objective function and the constraints. We convert (13) into its augmented Lagrange form with proper choices of x, h(x), and μ. The augmented Lagrangian of (13) is

L(𝓛, 𝓢, {M_n}, {N_n}, {Y_n}, {Z_n}) = Σ_{n=1}^{N} (α_n‖M_n‖_* + λ_n‖N_n‖₁ + (β_n/2)‖X_(n) − M_n − N_n‖_F² + ⟨Y_n, L_(n) − M_n⟩ + ⟨Z_n, S_(n) − N_n⟩ + (μ_n/2)(‖L_(n) − M_n‖_F² + ‖S_(n) − N_n‖_F²)), (16)

where Y_n and Z_n are the Lagrange multipliers.

Equation (16) can be simplified into an equivalent form by completing the square in each multiplier term, using the identity

⟨Y, h⟩ + (μ/2)‖h‖_F² = (μ/2)‖h + Y/μ‖_F² − (1/(2μ))‖Y‖_F². (17)

The core idea of solving the optimization problem in (17) is to optimize one group of variables while fixing the others. The variables in the optimization are 𝓛, 𝓢, M_n, N_n, n = 1, …, N, which can be divided into 2N + 2 groups. To achieve the optimal solution, the method estimates M_n, N_n, 𝓛, and 𝓢 sequentially, followed by certain refinements in each iteration.

Computing M_n. The optimal M_n with all other variables fixed is the solution to the following subproblem (the terms of (16) that involve M_n):

min_{M_n} α_n‖M_n‖_* + (β_n/2)‖X_(n) − M_n − N_n‖_F² + ⟨Y_n, L_(n) − M_n⟩ + (μ_n/2)‖L_(n) − M_n‖_F². (18)

As shown in [22], the optimal solution of (18) is given by

M_n = U S_{α_n/(β_n+μ_n)}[Σ] Vᵀ, (19)

where UΣVᵀ is the singular value decomposition given by

UΣVᵀ = (β_n(X_(n) − N_n) + μ_n L_(n) + Y_n)/(β_n + μ_n), (20)

and S_ε is the "shrinkage" operation. The "shrinkage" operator with threshold ε > 0 is defined as

S_ε[x] = sgn(x) · max(|x| − ε, 0). (21)

The operator can be extended to the matrix or tensor case by performing the shrinkage operator towards each element.
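Both operators are straightforward to implement. The following NumPy sketch shows the elementwise shrinkage operator and the singular value thresholding it induces (function names are illustrative):

```python
import numpy as np

def shrink(x, eps):
    """Elementwise shrinkage S_eps[x] = sgn(x) * max(|x| - eps, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - eps, 0.0)

def svt(M, tau):
    """Singular value thresholding: apply shrink to the singular values."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

assert shrink(np.array([-3.0, 0.5, 2.0]), 1.0).tolist() == [-2.0, 0.0, 1.0]
```

Applied to a matrix, `svt` shrinks every singular value toward zero, which both reduces the nuclear norm and truncates small singular values to exactly zero.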

Computing N_n. The optimal N_n with all other variables fixed is the solution to the following subproblem (the terms of (16) that involve N_n):

min_{N_n} λ_n‖N_n‖₁ + (β_n/2)‖X_(n) − M_n − N_n‖_F² + ⟨Z_n, S_(n) − N_n⟩ + (μ_n/2)‖S_(n) − N_n‖_F². (22)

By the well-known ℓ1-norm minimization [23], the optimal solution of (22) is given by elementwise shrinkage:

N_n = S_{λ_n/(β_n+μ_n)}[(β_n(X_(n) − M_n) + μ_n S_(n) + Z_n)/(β_n + μ_n)]. (23)

Computing 𝓛. The optimal 𝓛 with all other variables fixed is the solution to the following subproblem (the terms of (16) that involve 𝓛):

min_𝓛 Σ_{n=1}^{N} (⟨Y_n, L_(n) − M_n⟩ + (μ_n/2)‖L_(n) − M_n‖_F²). (24)

It is easy to show that the solution to (24) is a weighted average of the folded mode-wise estimates:

𝓛 = (Σ_{n=1}^{N} μ_n fold(M_n − Y_n/μ_n, n)) / (Σ_{n=1}^{N} μ_n). (25)

Computing 𝓢. The optimal 𝓢 with all other variables fixed is the solution to the following subproblem (the terms of (16) that involve 𝓢):

min_𝓢 Σ_{n=1}^{N} (⟨Z_n, S_(n) − N_n⟩ + (μ_n/2)‖S_(n) − N_n‖_F²). (26)

It is easy to show that the solution to (26) is given analogously by

𝓢 = (Σ_{n=1}^{N} μ_n fold(N_n − Z_n/μ_n, n)) / (Σ_{n=1}^{N} μ_n). (27)

The pseudo-code of the proposed MALM-TR algorithm is summarized in Algorithm 1.

alg1
Algorithm 1: MALM-TR: MALM for tensor recovery.

Under some rather general conditions, when {μ_k} is an increasing sequence and both the objective function and the constraints are continuously differentiable, it has been proven in [20] that the Lagrange multipliers produced by Algorithm 1 converge Q-linearly to the optimal solution when {μ_k} is bounded and super-Q-linearly when {μ_k} is unbounded. Another merit of MALM-TR is that the optimal step size for updating the multipliers is proven to be the chosen penalty parameter μ_k, making parameter tuning much easier. A third merit of MALM-TR is that the algorithm converges to the exact optimal solution even without requiring μ_k to approach infinity [20].
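The tensor algorithm itself is not reproduced here, but the ALM machinery it builds on can be illustrated on its matrix special case. The sketch below is the standard inexact ALM for matrix RPCA (alternating singular value thresholding and soft thresholding with a multiplier update); the parameter choices λ = 1/√max(m, n), the μ initialization, and the growth factor ρ follow common practice in [11, 14] and are assumptions, not the exact settings of MALM-TR:

```python
import numpy as np

def rpca_ialm(D, lam=None, mu=None, rho=1.5, tol=1e-7, max_iter=500):
    """Inexact ALM sketch for matrix RPCA: D = A (low rank) + E (sparse)."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 1.25 / np.linalg.norm(D, 2)
    Y = np.zeros_like(D)
    E = np.zeros_like(D)
    for _ in range(max_iter):
        # A-update: singular value thresholding
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # E-update: elementwise soft thresholding
        T = D - A + Y / mu
        E = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        # Multiplier and penalty updates
        R = D - A - E
        Y = Y + mu * R
        mu = rho * mu
        if np.linalg.norm(R) / np.linalg.norm(D) < tol:
            break
    return A, E
```

On a synthetic low-rank-plus-sparse matrix, this loop typically recovers both components to high accuracy within a few dozen iterations.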

4. Experiments

In this section, using both numerical simulations and real-world data, we evaluate the performance of our proposed algorithm and compare the results with RSTD on the low-n-rank tensor recovery problem.

In all the experiments, the Lanczos bidiagonalization algorithm with partial reorthogonalization [24] is adopted to obtain a few singular values and vectors in each iteration. A major challenge of our algorithm is the selection of parameters; we use one simple parameter setting for all experiments. Similarly, we choose λ as suggested in [11] and tune it as the sparse ratio changes. For comparison with RSTD [12], we also use the difference of 𝓛 and 𝓢 in successive iterations against a certain tolerance as the stopping criterion. All the experiments are conducted and timed on the same desktop with a Pentium(R) Dual-Core 2.50 GHz CPU and 4 GB memory, running Windows 7 and Matlab.

4.1. Numerical Simulations

A low-n-rank tensor is generated as follows. The N-way Tensor Toolbox [25] is used to generate a third-order tensor 𝒯 of size I₁ × I₂ × I₃ with relatively small n-ranks [r₁, r₂, r₃]. The generated tensor follows the Tucker model [13], 𝒯 = 𝒢 ×₁ U₁ ×₂ U₂ ×₃ U₃. To impose the rank conditions, 𝒢 is an r₁ × r₂ × r₃ core tensor with each entry sampled independently from a standard Gaussian distribution N(0, 1), and U₁, U₂, U₃ are I₁ × r₁, I₂ × r₂, I₃ × r₃ factor matrices with randomly drawn entries. Here, without loss of generality, we make the factor matrices orthogonal. One major difference from the matrix case is that the n-ranks may differ along each mode, while the column rank and row rank of a matrix are always equal. For simplicity, in this paper we set the mode-n ranks to the same value.

The entries of the sparse tensor 𝓢 are independently distributed, each taking the value 0 with probability 1 − spr and an impulsive value with probability spr. The observed tensor to be recovered is generated as 𝒳 = 𝒯 + 𝓢.
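The simulation protocol above can be sketched as follows (the sizes, n-ranks, noise magnitude, and variable names are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np

rng = np.random.default_rng(1)
I, r = (20, 20, 20), (3, 3, 3)          # illustrative sizes and n-ranks

def mode_product(X, U, n):
    """n-mode product via unfold, multiply, fold."""
    Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
    shape = [U.shape[0]] + [s for k, s in enumerate(X.shape) if k != n]
    return np.moveaxis((U @ Xn).reshape(shape), 0, n)

# Core tensor with i.i.d. standard Gaussian entries
G = rng.standard_normal(r)

# Orthogonal factor matrices (QR of Gaussian matrices), applied mode by mode
T = G
for n in range(3):
    Q, _ = np.linalg.qr(rng.standard_normal((I[n], r[n])))
    T = mode_product(T, Q, n)

# Sparse impulsive noise: nonzero with probability spr
spr = 0.1
S = np.where(rng.random(I) < spr, 10 * rng.standard_normal(I), 0.0)
X = T + S                                # observed tensor to be recovered
```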

The simulated tensors used in the experiments have a fixed size, with the n-rank and the sparse ratio spr varied. The parameters are adjusted according to the different n-ranks and spr. The quality of recovery is measured by the relative square error (RSE) with respect to 𝒯 and 𝓢, defined as RSE = ‖𝒯̂ − 𝒯‖_F / ‖𝒯‖_F (and analogously for 𝓢), where 𝒯̂ denotes the recovered low-n-rank part.

Tables 1 and 2 present the average results (across 10 instances) for different sparse ratios. The results demonstrate that our proposed algorithm MALM-TR outperforms RSTD in both efficiency and accuracy.

tab1
Table 1: , -rank = 5, 5, 5, spr = 5%, 15%, 25%, 35%.
tab2
Table 2: , -rank = 10, 10, 10, spr = 5%, 10%, 15%, 20%.
4.2. Image Restoration

One straightforward application of our algorithm is image restoration. As pointed out in [12], our algorithm also assumes the image to be well structured. Though the assumption may not be reasonable for some natural images, it fits many kinds of visual data such as structured images (e.g., the façade image), CT/MRI data, and multispectral images. In the experiments, we apply the algorithm to restoration of the façade image, which is also used in [12, 17]. We add different percentages of random impulsive noise to the image and compare MALM-TR with RSTD. The results produced by both algorithms are shown in Figure 1.

fig1
Figure 1: Comparisons in terms of visual effects. The rows (1), (2), and (3) correspond to the images before recovery, the obtained results by MALM-TR and RSTD [12], respectively. The columns (a), (b), and (c) correspond to the images corrupted by 15%, 25%, and 35% sparse impulsive noise, respectively. (d) is the original image.
4.3. Background Modeling

Another application of our algorithm is estimating a good model for the background variations in a scene (i.e., background modeling). In this situation, it is natural to model the background variation as approximately low rank. Foreground objects generally occupy only a fraction of the image pixels and hence can be treated as the sparse part.
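As an illustration of how video data are arranged for this task, the following sketch stacks frames into a third-order tensor (and, for comparison, into the frames-as-columns matrix used by matrix RPCA); all sizes are illustrative:

```python
import numpy as np

# Illustrative: 300 grayscale frames of size 48 × 64, stored frame-major.
frames = np.random.rand(300, 48, 64)

# Tensor arrangement: frames stacked along the third mode -> 48 × 64 × 300
video_tensor = np.moveaxis(frames, 0, -1)

# Matrix arrangement: each frame vectorized into a column -> 3072 × 300
video_matrix = frames.reshape(300, -1).T

assert video_tensor.shape == (48, 64, 300)
assert video_matrix.shape == (3072, 300)
```

The low-rank part of the decomposition then captures the (nearly static) background, while the sparse part isolates the moving foreground.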

We test our algorithm using an example from [26] and compare with RSTD [12]. The visual comparisons of the background modeling are shown in Figure 2. It is observed that our algorithm is effective in separating the background, even in a dynamic scene. The results are also comparable to RSTD.

fig2
Figure 2: Background modeling. Top: original video sequence of a scene. Middle: foreground object recovered by MALM-TR. Bottom: foreground object recovered by RSTD [12]. The results are highlighted for both RSTD and MALM-TR.
4.4. Traffic Data Recovery

In our previous work [3, 4], we proposed two tensor-based methods for traffic data applications. In [3], a tensor imputation method based on Tucker decomposition is developed to estimate missing values. In that setting, the exact coordinates and the number of missing entries in the tensor can be observed directly: a missing element simply has no value, so it is easy to recognize. In contrast, this paper recovers a low-n-rank tensor that is arbitrarily corrupted by a fraction of noise, based on trace norm and ℓ1-norm optimization; the number and the coordinates of the corrupted entries are unknown or hard to obtain. Corrupted entries do have values and can hardly be separated from the correct data, so the problems solved by the two papers are different. Paper [4] addresses traffic data recovery, the same problem considered in this section. The main difference between the two proposed methods is how the constraint 𝒳 = 𝓛 + 𝓢 is used. Reference [4] puts the constraint into the minimized function with only a single parameter, which leads the objective function to contain both tensors and matrices. However, as the size and structure of each mode of the given tensor are not always the same, the contribution of each mode to the final result may differ. In order to utilize the information in the constraint as much as possible, this paper unfolds the constraint along each mode and uses weighting parameters to obtain new matrix-form constraints, which are put into the minimized function using the augmented Lagrange multiplier strategy. With different objective functions, the optimization processes differ as well. More details can be found in [4].

In this fourth part of the experiment section, we apply the proposed algorithm to traffic data recovery. The data used in the experiment were collected by a fixed loop detector in Sacramento County and downloaded from http://pems.dot.ca.gov/. The data cover 77 days, from March 14 to May 29, 2011. The traffic volume data are recorded every 5 minutes, so a daily traffic volume series for a loop detector contains 288 records. To perform traffic data recovery with the proposed algorithm, the first step is to convert the raw traffic data into tensor form. In this part, we choose 8 weeks of complete traffic volume data from the 77 days. The 8-week data are then formed into a tensor model of size 8 × 7 × 288, as Figure 3 shows. In this model, "8" stands for the 8 weeks, "7" stands for the seven days in one week, and "288" stands for the 288 records in one day.
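Forming the 8 × 7 × 288 tensor model from the day-by-day series is a simple reshape; a sketch (the day-major storage layout of the raw data is an assumption):

```python
import numpy as np

# Illustrative: 56 consecutive days of daily volume profiles (288 records
# per day), assumed stored day-major as an array of shape (56, 288).
daily = np.random.rand(56, 288)

# 8 weeks × 7 days × 288 records-per-day tensor model
traffic = daily.reshape(8, 7, 288)

assert traffic[3, 2].shape == (288,)                  # one day's series
assert np.array_equal(traffic[3, 2], daily[3 * 7 + 2])
```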

914963.fig.003
Figure 3: Tensor model of size 8 × 7 × 288.

In our previous work [3], the similarity coefficient [27] had been used to analyze the high multimode correlation ("link" mode, "week" mode, "day" mode, and "hour" mode) of traffic data from the point of view of statistical characteristics. Because of these high multimode correlations, the tensor form of size 8 × 7 × 288 can be approximated by a low-n-rank tensor.

According to the above description, the traffic data are reasonably converted into a tensor form that can be approximated by a low-n-rank tensor. In the traffic data recovery experiment, it is assumed that a subset of the entries of the traffic data tensor is corrupted by impulsive noise at random. The noise ratios are set from 5% to 25% in steps of 5%. We then compare the proposed method with the RSTD algorithm using the RSE criterion, defined as RSE = ‖𝒳_rec − 𝒳₀‖_F / ‖𝒳₀‖_F, where 𝒳₀ is the original tensor and 𝒳_rec is the recovered one.

Table 3 tabulates the RSEs under sparse impulsive noise with different ratios on the traffic data. In particular, the "unrecovered" column presents the RSE between the corrupted data and the original data. From the table, it is observed that the RSEs obtained by MALM-TR and RSTD are much smaller than those of the unrecovered data, which means that both algorithms improve the quality of the corrupted data. Moreover, the RSEs of MALM-TR are smaller than those of RSTD. The curves in Figure 4 clearly show that our method performs better than RSTD.

tab3
Table 3: Comparison of RSE on traffic data.
914963.fig.004
Figure 4: Comparison of RSE curves on traffic data.

5. Conclusion

In this paper, we extend the matrix recovery problem to low-n-rank tensor recovery and propose an efficient algorithm based on the mixture augmented Lagrange multiplier method. The proposed algorithm can automatically separate the low-n-rank tensor part and the sparse part. Experiments show that the proposed algorithm is more stable and accurate in most cases and has an excellent convergence rate. Different application examples show the broad applicability of our algorithm in computer vision, image processing, and traffic data recovery.

In the future, we would like to investigate how to choose the parameters of our algorithm automatically and to develop more efficient methods for the tensor recovery problem. We will also explore more applications of our method.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The research was supported by NSFC (Grant nos. 61271376, 51308115, and 91120010), the National Basic Research Program of China (973 Program: no. 2012CB725405), and Beijing Natural Science Foundation (4122067). The authors would like to thank Professor Bin Ran from the University of Wisconsin-Madison and Yong Li from the University of Notre Dame for the suggestive discussions.

References

1. T. G. Kolda and B. W. Bader, “Tensor decompositions and applications,” SIAM Review, vol. 51, no. 3, pp. 455–500, 2009.
2. E. Acar, D. M. Dunlavy, T. G. Kolda, and M. Mørup, “Scalable tensor factorizations for incomplete data,” Chemometrics and Intelligent Laboratory Systems, vol. 106, no. 1, pp. 41–56, 2011.
3. H. Tan, G. Feng, J. Feng, W. Wang, Y. J. Zhang, and F. Li, “A tensor-based method for missing traffic data completion,” Transportation Research Part C, vol. 28, pp. 15–27, 2013.
4. H. Tan, G. Feng, J. Feng, W. Wang, and Y. J. Zhang, “Traffic volume data outlier recovery via tensor model,” Mathematical Problems in Engineering, vol. 2013, Article ID 164810, 8 pages, 2013.
5. M. A. O. Vasilescu and D. Terzopoulos, “Multilinear analysis of image ensembles: TensorFaces,” in Proceedings of the European Conference on Computer Vision (ECCV '02), pp. 447–460, 2002.
6. A. S. Lewis and G. Knowles, “Image compression using the 2-D wavelet transform,” IEEE Transactions on Image Processing, vol. 1, no. 2, pp. 244–250, 1992.
7. Y. Sheikh and M. Shah, “Bayesian modeling of dynamic scenes for object detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 11, pp. 1778–1792, 2005.
8. L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254–1259, 1998.
9. V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky, “Rank-sparsity incoherence for matrix decomposition,” http://arxiv.org/abs/0906.2220.
10. J. Wright, Y. Peng, Y. Ma, A. Ganesh, and S. Rao, “Robust principal component analysis: exact recovery of corrupted low-rank matrices by convex optimization,” in Proceedings of the 23rd Annual Conference on Neural Information Processing Systems (NIPS '09), pp. 2080–2088, December 2009.
11. E. J. Candès, X. Li, Y. Ma, and J. Wright, “Robust principal component analysis?” 2009, arXiv:0912.3599v1.
12. Y. Li, J. Yan, Y. Zhou, and J. Yang, “Optimum subspace learning and error correction for tensors,” in Proceedings of the 11th European Conference on Computer Vision (ECCV '10), Crete, Greece, 2010.
13. L. R. Tucker, “Some mathematical notes on three-mode factor analysis,” Psychometrika, vol. 31, no. 3, pp. 279–311, 1966.
14. Z. Lin, M. Chen, L. Wu, and Y. Ma, “The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices,” Mathematical Programming, 2009.
15. J. D. Carroll and J.-J. Chang, “Analysis of individual differences in multidimensional scaling via an n-way generalization of “Eckart-Young” decomposition,” Psychometrika, vol. 35, no. 3, pp. 283–319, 1970.
16. R. A. Harshman, “Foundations of the PARAFAC procedure: models and conditions for an “explanatory” multi-modal factor analysis,” UCLA Working Papers in Phonetics, vol. 16, pp. 1–84, 1970.
17. J. Liu, P. Musialski, P. Wonka, and J. Ye, “Tensor completion for estimating missing values in visual data,” in Proceedings of the International Conference on Computer Vision (ICCV '09), 2009.
18. J. Liu, P. Musialski, P. Wonka, and J. Ye, “Tensor completion for estimating missing values in visual data,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 208–220, 2013.
19. Z. Zhou, X. Li, J. Wright, E. Candès, and Y. Ma, “Stable principal component pursuit,” in Proceedings of the IEEE International Symposium on Information Theory (ISIT '10), pp. 1518–1522, June 2010.
20. D. P. Bertsekas and A. E. Ozdaglar, “Pseudonormality and a Lagrange multiplier theory for constrained optimization,” Journal of Optimization Theory and Applications, vol. 114, no. 2, pp. 287–343, 2002.
21. D. Bertsekas, Nonlinear Programming, Athena Scientific, 1999.
22. J.-F. Cai, E. J. Candès, and Z. Shen, “A singular value thresholding algorithm for matrix completion,” SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2010.
23. E. T. Hale, W. Yin, and Y. Zhang, “Fixed-point continuation for ℓ1-minimization: methodology and convergence,” SIAM Journal on Optimization, vol. 19, no. 3, pp. 1107–1130, 2008.
24. H. D. Simon, “The Lanczos algorithm with partial reorthogonalization,” Mathematics of Computation, vol. 42, pp. 115–142, 1984.
25. C. A. Andersson and R. Bro, “The N-way toolbox for MATLAB,” Chemometrics and Intelligent Laboratory Systems, vol. 52, no. 1, pp. 1–4, 2000.
26. J. Zhong and S. Sclaroff, “Segmenting foreground objects from a dynamic textured background via a robust Kalman filter,” in Proceedings of the 9th IEEE International Conference on Computer Vision, pp. 44–50, October 2003.
27. Y. Zhang and Y. Liu, “Missing traffic flow data prediction using least squares support vector machines in urban arterial streets,” in Proceedings of the IEEE Symposium on Computational Intelligence and Data Mining (CIDM '09), pp. 76–83, April 2009.