Abstract
The problem of data recovery in multiway arrays (i.e., tensors) arises in many fields such as computer vision, image processing, and traffic data analysis. In this paper, we propose a scalable and fast algorithm for recovering a low-rank tensor with an unknown fraction of its entries arbitrarily corrupted. In the new algorithm, the tensor recovery problem is formulated as a mixture convex multilinear Robust Principal Component Analysis (RPCA) optimization problem that minimizes a weighted sum of the nuclear norm and the l1 norm. The problem is well structured in both the objective function and the constraints, and we apply the augmented Lagrange multiplier (ALM) method, which exploits this structure to solve the problem efficiently. In the experiments, the algorithm is compared with a state-of-the-art algorithm on both synthetic data and real data, including traffic data, image data, and video data.
1. Introduction
A tensor is a multidimensional array. It is the higher-order generalization of vectors and matrices and has many applications in information sciences, computer vision, graph analysis [1], and traffic data analysis [2-4]. In the real world, the size and redundancy of data increase rapidly, and nearly all existing high-dimensional real-world data either have the natural form of a tensor (e.g., multichannel images) or can be grouped into the form of a tensor (e.g., tensor faces [5], traffic data tensor models [2-4], and videos); challenges therefore arise in many scientific areas when one confronts such high-dimensional data. In many tasks one wants to capture the underlying low-dimensional structure of the tensor data or seeks to detect irregular sparse patterns in it, for example, in image compression [6], foreground segmentation [7], saliency detection [8], and traffic data completion [2, 3]. As a consequence, it is desirable to develop algorithms that can capture the low-dimensional structure or the irregular sparse patterns in high-dimensional tensor data.
In the two-dimensional case, that is, the matrix case, "rank" and "sparsity" are the most useful tools for matrix-valued data analysis. Chandrasekaran et al. [9] proposed the concept of "rank-sparsity incoherence" to depict the fundamental identifiability of recovering the low-rank and sparse components. Wright et al. [10] and Candes et al. [11] demonstrated that if the irregular sparse matrix E is sufficiently sparse (relative to the rank of A), one can accomplish the sparse and low-rank recovery by solving the following convex optimization problem:

min_{A,E} ||A||_* + lambda ||E||_1   s.t.   D = A + E,   (1)

where D is the given matrix to be recovered; A is the low-rank component of D; E is the sparse component of D; ||.||_* denotes the nuclear norm, defined as the sum of all singular values; ||.||_1 denotes the sum of the absolute values of the matrix entries; lambda is a positive weighting parameter. This optimization method is called Robust Principal Component Analysis [10, 11] (RPCA) or Principal Component Pursuit (PCP) due to its ability to exactly recover the underlying low-rank matrix even in the presence of corruption by large entries or outliers.
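As a concrete illustration, the convex program above can be solved by the inexact ALM scheme of Lin et al. [14], alternating singular value thresholding on A with entrywise soft thresholding on E. The following NumPy sketch uses common default choices (lambda = 1/sqrt(max(m, n)), a geometrically increasing penalty with a cap); it is an illustrative sketch, not the code used in this paper.

```python
import numpy as np

def shrink(X, tau):
    """Entrywise soft-thresholding (shrinkage) operator."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: shrink the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca_ialm(D, lam=None, rho=1.2, n_iter=300):
    """Inexact ALM for min ||A||_* + lam*||E||_1  s.t.  D = A + E."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))      # default weight from RPCA
    mu = 1.25 / np.linalg.norm(D, 2)        # initial penalty parameter
    mu_max = mu * 1e7                       # cap mu to avoid blow-up
    A, E, Y = (np.zeros_like(D) for _ in range(3))
    for _ in range(n_iter):
        A = svt(D - E + Y / mu, 1.0 / mu)    # low-rank update
        E = shrink(D - A + Y / mu, lam / mu) # sparse update
        R = D - A - E                        # constraint residual
        Y = Y + mu * R                       # multiplier ascent, step mu
        if np.linalg.norm(R) < 1e-7 * np.linalg.norm(D):
            break
        mu = min(mu * rho, mu_max)
    return A, E
```

In a typical easy regime (low rank, a few percent of entries corrupted by large impulses), the iteration recovers the low-rank component to high accuracy.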
Although the low-rank matrix recovery problem has been well studied, there is not much work on tensors. Li et al. [12] derived a method from the optimal tensor decomposition model. Considering a real N-mode tensor D, the best rank-(r_1, ..., r_N) approximation is to find a tensor A with prespecified mode ranks rank_n(A) = r_n that minimizes the least-squares cost function

min_A ||D - A||_F^2   s.t.   rank_n(A) = r_n, n = 1, ..., N.   (2)

The rank conditions imply that A should have the Tucker decomposition [13] A = G x_1 U_1 x_2 U_2 ... x_N U_N. For the application, they applied the model to high-dimensional tensor-like visual data by dividing the observed tensor into a low-dimensional structure plus unbounded but sparse irregular patterns: D = A + E. Under the assumptions that the ranks of A are small and the corruption E is sparse, the original objective function is as follows:

min_{A,E} ||D - A - E||_F^2   s.t.   rank_n(A) <= r_n, ||E||_0 <= s.   (3)

In order to solve the problem, they made some conversions to (3) and extended the matrix robust PCA problem to the tensor case. A relaxation technique was used to separate the interdependent terms, and the block coordinate descent (BCD) method was used to solve the low-rank tensor recovery problem. They then proposed the rank sparsity tensor decomposition (RSTD) algorithm. In fact, their algorithm can be seen as a basic version of the Lagrange multiplier method. Although simple and provably correct, the RSTD algorithm requires a very large number of iterations to converge, and it is difficult to choose its parameters for speedup. Besides, due to the properties of the basic Lagrange multiplier method, the accuracy of its results needs to be improved.
In this paper, a new algorithm for low-rank tensor recovery, termed the Mixture Augmented Lagrange Multiplier Method for Tensor Recovery (MALM-TR), is proposed. In the new algorithm, analogously to RSTD [12], we convert the tensor recovery problem into a mixture convex optimization problem by adopting a relaxation strategy that eliminates the interdependent trace norm and l1 norm constraints. Actually, the elements involved in the resulting problem are all matrices. Thus, it can be treated as a multilinear extension of the RPCA problem and subsumes the matrix RPCA problem as a special case. Lin et al. [14] proved that the matrix RPCA problem can be solved by ALM with higher precision, lower storage/memory demands, and a pleasing Q-linear convergence speed. Inspired by these merits of ALM, we extend the augmented Lagrange multiplier method to the multilinear RPCA problem and show that ALM is suitable not only for the matrix RPCA problem but also for the multilinear RPCA problem.
The algorithm is applied to real-world data recovery, including traffic data recovery, image restoration, and background modeling.
In the traffic data analysis area, due to detector and communication malfunctions, traffic data often suffer from noise, especially outlier noise, which has a great impact on the performance of Intelligent Transportation Systems (ITS). Therefore, it is essential to address the issues caused by outlier data in order to fully exploit the data and realize ITS applications. In the application part of this paper, we introduce the tensor form to model traffic data, which can encode the multimode (e.g., week, day, record) correlations of the traffic data simultaneously and preserve the multiway nature of traffic data. For example, assume that a loop detector collects traffic volume data every 15 minutes, so it produces 96 records per day. If we have 20 weeks of traffic volume data, these data can be formed into a tensor of size 20 x 7 x 96. The proposed tensor-based method, which mines the multimode correlations of traffic data mentioned above, is then used to remove outlier noise from the traffic data.
It is observed that a multichannel image can be seen as a multidimensional tensor. For example, an RGB image has three channels: Red, Green, and Blue. Thus, it can be represented as a third-order tensor of size height x width x 3. For this application, the proposed method is used to remove noise from images. Though the method would not be reasonable for some natural images, it has many applications for visual data such as structured images (e.g., the facade image), CT/MRI data, and multispectral images. Besides images, video data can be grouped into the form of a tensor. For example, a video with 300 gray-scale frames, each of size m x n, forms a tensor of size m x n x 300. For the video application, the proposed method is used for background modeling.
The rest of the paper is organized as follows. Section 2 presents some notation and states some basic properties of tensors. Section 3 discusses the detailed process of our proposed algorithm. Section 4 tests the algorithm in different settings, varying from simulated data to applications in computer vision, image processing, and traffic data recovery. Finally, some concluding remarks are provided in Section 5.
2. Notation and Basics on Tensor Model
In this paper, the nomenclature and notation on tensors in [1, 12] are partially adopted. Scalars are denoted by lowercase letters (a, b, c, ...), vectors by bold lowercase letters (a, b, c, ...), and matrices by uppercase letters (A, B, C, ...). Tensors are written as calligraphic letters. An N-mode tensor is denoted as A in R^{I_1 x I_2 x ... x I_N}. The elements of an N-mode tensor A are denoted as a_{i_1 ... i_N}, where 1 <= i_n <= I_n, 1 <= n <= N. The mode-n unfolding (also called matricization or flattening) of a tensor A is defined as unfold(A, n) := A_(n). The tensor element (i_1, ..., i_N) is mapped to the matrix element (i_n, j), where

j = 1 + sum_{k=1, k != n}^{N} (i_k - 1) J_k,   with   J_k = prod_{m=1, m != n}^{k-1} I_m.   (4)
Therefore, A_(n) in R^{I_n x J}, where J = prod_{m != n} I_m. Accordingly, the inverse operator fold can be defined as fold(A_(n), n) := A.
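The unfolding map and its inverse can be written compactly in NumPy. The sketch below follows the index formula above (the non-n modes are flattened in column-major order, so the earliest remaining index varies fastest); the helper names are ours, for illustration.

```python
import numpy as np

def unfold(T, n):
    """Mode-n unfolding: move mode n to the front, then flatten the
    remaining modes in column-major (Fortran) order, matching
    j = 1 + sum_{k != n} (i_k - 1) * J_k."""
    return np.reshape(np.moveaxis(T, n, 0), (T.shape[n], -1), order="F")

def fold(M, n, shape):
    """Inverse of unfold: rebuild the tensor of the given shape."""
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(np.reshape(M, full, order="F"), 0, n)
```

Round-tripping fold(unfold(T, n), n, T.shape) returns T for every mode.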
The mode-n rank of an N-mode tensor A, denoted by rank_n(A), is the rank of the mode-n unfolding matrix A_(n):

rank_n(A) = rank(A_(n)).   (5)
If every mode-n rank is very small relative to the size of the tensor, we call it a low-rank tensor.
The inner product of two same-size tensors A, B in R^{I_1 x ... x I_N} is defined as the sum of the products of their entries, that is,

<A, B> = sum_{i_1} ... sum_{i_N} a_{i_1 ... i_N} b_{i_1 ... i_N}.   (6)
The corresponding Frobenius norm is ||A||_F = sqrt(<A, A>). Besides, the l0 norm of a tensor A, denoted by ||A||_0, is the number of nonzero elements in A, and the l1 norm is defined as ||A||_1 = sum |a_{i_1 ... i_N}|. It is clear that <A, B> = <A_(n), B_(n)>, ||A||_F = ||A_(n)||_F, and ||A||_1 = ||A_(n)||_1 for any n, since an unfolding merely rearranges the entries.
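These identities are easy to verify numerically: any unfolding only permutes the entries of the tensor, so entrywise norms and inner products are unchanged. A small NumPy check (using a one-line unfolding helper; illustrative only):

```python
import numpy as np

def unfold(T, n):
    # mode-n unfolding (column-major layout of the remaining modes)
    return np.reshape(np.moveaxis(T, n, 0), (T.shape[n], -1), order="F")

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4, 5))
B = rng.standard_normal((3, 4, 5))

inner = np.sum(A * B)          # <A, B>
fro = np.sqrt(np.sum(A * A))   # ||A||_F
l1 = np.sum(np.abs(A))         # ||A||_1

for n in range(3):
    An, Bn = unfold(A, n), unfold(B, n)
    assert np.isclose(np.sum(An * Bn), inner)           # inner product preserved
    assert np.isclose(np.linalg.norm(An, "fro"), fro)   # Frobenius preserved
    assert np.isclose(np.sum(np.abs(An)), l1)           # l1 preserved
```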
The mode-n (matrix) product of a tensor A in R^{I_1 x ... x I_N} with a matrix U in R^{J x I_n} is denoted by A x_n U and is of size I_1 x ... x I_{n-1} x J x I_{n+1} x ... x I_N. In terms of the flattened matrices, the mode-n product can be expressed as

(A x_n U)_(n) = U A_(n).   (7)
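The identity (A x_n U)_(n) = U A_(n) gives a direct way to compute the mode-n product: unfold, multiply, and fold back. A minimal NumPy sketch (the helper names are ours, for illustration):

```python
import numpy as np

def unfold(T, n):
    return np.reshape(np.moveaxis(T, n, 0), (T.shape[n], -1), order="F")

def fold(M, n, shape):
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(np.reshape(M, full, order="F"), 0, n)

def mode_product(T, U, n):
    """Mode-n product T x_n U via the identity (T x_n U)_(n) = U T_(n)."""
    new_shape = list(T.shape)
    new_shape[n] = U.shape[0]
    return fold(U @ unfold(T, n), n, tuple(new_shape))
```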
3. MALM-TR
This section is separated into two parts. Section 3.1 converts the low-rank tensor recovery problem into a multilinear RPCA problem. Section 3.2 briefly introduces the ALM approach, extends it to solve the multilinear RPCA problem, and presents the details of the proposed algorithm.
3.1. The Multilinear RPCA Problem
The derivation starts with the general version [10] of the matrix recovery problem:

min_{A,E} rank(A) + lambda ||E||_0   s.t.   D = A + E,   (8)

where D is the given matrix to be recovered; A is the low-rank component of D; E is the sparse component of D; rank(A) denotes the rank of A; ||E||_0 denotes the number of nonzero matrix entries; lambda is a positive weighting parameter. The higher-order tensor recovery problem can be generated from the matrix (i.e., 2nd-order tensor) case by utilizing the form of (8), leading to the following formulation:

min_{A,E} rank(A) + lambda ||E||_0   s.t.   D = A + E,   (9)

where D, A, E are N-mode tensors with identical size in each mode; D is the observed tensor data; A and E represent the corresponding structured part and irregular sparse part, respectively; and rank(A) is the minimum number of rank-1 tensors that generate A as their sum (the CP-rank) [15, 16]. However, (9) is unsolvable as stated, because there is no straightforward algorithm to determine the CP-rank of a specific given tensor, and the l0 norm is highly nonconvex. But when the given tensor is a low-rank tensor, we can use the ranks of the unfoldings of the tensor instead of its CP-rank to capture its global information. Therefore, we can minimize the mode-n ranks of the given tensor, respectively, instead of minimizing the CP-rank. Obviously, rank_n(A) is equal to rank(A_(n)). As a result, a formulation that minimizes all the mode-n ranks of the given tensor replaces (9) as follows:

min_{A,E} sum_{n=1}^{N} rank(A_(n)) + lambda ||E||_0   s.t.   D = A + E,   (10)

where A_(n) and E_(n) are the mode-n unfoldings of A and E. Equation (10) is still a highly nonconvex optimization problem, and no efficient solution is known due to the nonconvexity of the matrix rank and the l0 norm. Fortunately, the nuclear norm and the l1 norm are the tightest convex approximations of the rank and the l0 norm [10, 11], respectively. By replacing the rank with the nuclear norm and the l0 norm with the l1 norm, a tractable convex optimization problem is obtained:

min_{A,E} sum_{n=1}^{N} ||A_(n)||_* + lambda ||E||_1   s.t.   D = A + E.   (11)

In order to utilize the information of each mode as much as possible, the rank minimization terms of the modes are combined with weighted parameters alpha_n, as defined in [17, 18].
Thus, the tensor recovery problem becomes

min_{A,E} sum_{n=1}^{N} alpha_n ||A_(n)||_* + lambda ||E||_1   s.t.   D = A + E.   (12)
Problem (12) is still hard to solve due to the interdependent trace norm and l1 norm constraints. In order to simplify the problem, we introduce additional auxiliary matrices M_n and N_n, n = 1, ..., N. Then, we relax the equality constraints by M_n = A_(n) and N_n = E_(n). It is easy to check that the relaxed data-fit term corresponds to the stable Principal Component Pursuit (sPCP) in the matrix case [19]. Finally, we get the relaxed form of (12), which can be seen as a multilinear RPCA problem:

min sum_{n=1}^{N} (alpha_n ||M_n||_* + lambda_n ||N_n||_1)   s.t.   M_n = A_(n), N_n = E_(n), n = 1, ..., N,   D = A + E.   (13)
3.2. Optimization Process
In [20], the general method of augmented Lagrange multipliers is introduced for solving constrained optimization problems of the kind

min f(X)   s.t.   h(X) = 0,   (14)

where f: R^n -> R and h: R^n -> R^m. The augmented Lagrange function is defined as

L(X, Y, mu) = f(X) + <Y, h(X)> + (mu/2) ||h(X)||_F^2,   (15)

where mu is a positive scalar, and the optimization problem can then be solved via the method of augmented Lagrange multipliers (see [21] for more details).
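To fix ideas, consider applying this scheme to a toy equality-constrained problem, min (x1 - 1)^2 + (x2 - 2)^2 s.t. x1 + x2 = 1, whose solution is x = (0, 1). Each ALM iteration minimizes the augmented Lagrangian in x (a small linear solve here) and then updates the multiplier with step size mu. This is a generic illustration, not part of the paper's algorithm:

```python
import numpy as np

def alm_toy(mu=1.0, rho=1.5, n_iter=30):
    """ALM for min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 - 1 = 0."""
    y = 0.0                     # Lagrange multiplier
    x = np.zeros(2)
    c = np.array([1.0, 2.0])    # unconstrained minimizer of f
    a = np.array([1.0, 1.0])    # constraint normal
    for _ in range(n_iter):
        # stationarity of the augmented Lagrangian:
        # 2(x - c) + y*a + mu*(a.x - 1)*a = 0
        H = 2.0 * np.eye(2) + mu * np.outer(a, a)
        x = np.linalg.solve(H, 2.0 * c - y * a + mu * a)
        y += mu * (a @ x - 1.0)  # multiplier ascent with step mu
        mu *= rho                # increase the penalty
    return x, y

x, y = alm_toy()
```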
It is observed that (13) is well structured, and a separable structure emerges in both the objective function and the constraint conditions. We convert (13) into its augmented Lagrangian form with proper multipliers Y_0, Y_{1,n}, Y_{2,n} and penalty mu. The augmented Lagrangian of (13) is

L = sum_{n=1}^{N} [ alpha_n ||M_n||_* + <Y_{1,n}, A_(n) - M_n> + (mu/2)||A_(n) - M_n||_F^2 + lambda_n ||N_n||_1 + <Y_{2,n}, E_(n) - N_n> + (mu/2)||E_(n) - N_n||_F^2 ] + <Y_0, D - A - E> + (mu/2)||D - A - E||_F^2.   (16)
By completing the squares, (16) can be simplified into the equivalent form

L = sum_{n=1}^{N} [ alpha_n ||M_n||_* + (mu/2)||A_(n) - M_n + Y_{1,n}/mu||_F^2 + lambda_n ||N_n||_1 + (mu/2)||E_(n) - N_n + Y_{2,n}/mu||_F^2 ] + (mu/2)||D - A - E + Y_0/mu||_F^2 + C,   (17)

where C collects terms that do not depend on the optimization variables.
The core idea of solving the optimization problem in (17) is to optimize one group of variables while fixing the others. The variables in the optimization are M_1, ..., M_N, N_1, ..., N_N, A, E, which can be divided into 2N + 2 groups. To achieve the optimal solution, the method estimates M_n, N_n, A, and E sequentially, followed by a refinement of the multipliers in each iteration.
Computing M_n. The optimal M_n with all other variables fixed is the solution to the following subproblem:

min_{M_n} alpha_n ||M_n||_* + (mu/2) ||A_(n) - M_n + Y_{1,n}/mu||_F^2.   (18)
As shown in [22], the optimal solution of (18) is given by

M_n = U S_{alpha_n/mu}(Sigma) V^T,   (19)

where U Sigma V^T is the singular value decomposition

A_(n) + Y_{1,n}/mu = U Sigma V^T,   (20)

and S_tau is the "shrinkage" operation. The "shrinkage" operator with tau > 0 is defined as

S_tau(x) = sgn(x) max(|x| - tau, 0).   (21)
The operator can be extended to the matrix or tensor case by applying the shrinkage operator to each element.
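Subproblem (18) is the standard singular value thresholding step: apply the shrinkage operator to the singular values of the matrix being approximated. A minimal NumPy sketch (illustrative function names):

```python
import numpy as np

def shrink(x, tau):
    """Elementwise shrinkage S_tau(x) = sgn(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: the minimizer of
    tau*||M||_* + 0.5*||M - X||_F^2 over M."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt
```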
Computing N_n. The optimal N_n with all other variables fixed is the solution to the following subproblem:

min_{N_n} lambda_n ||N_n||_1 + (mu/2) ||E_(n) - N_n + Y_{2,n}/mu||_F^2.   (22)
By the well-known l1 norm minimization [23], the optimal solution of (22) is

N_n = S_{lambda_n/mu}(E_(n) + Y_{2,n}/mu).   (23)
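One can check numerically that the shrinkage operator indeed solves the scalar version of (22), min_x lambda|x| + (mu/2)(x - w)^2, whose minimizer is S_{lambda/mu}(w). A quick brute-force verification on a dense grid (illustrative):

```python
import numpy as np

def shrink(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

lam, mu = 0.7, 2.0
grid = np.linspace(-4, 4, 200001)  # dense grid for brute force
for w in (-2.5, -0.1, 0.0, 0.2, 3.0):
    obj = lam * np.abs(grid) + 0.5 * mu * (grid - w) ** 2
    x_star = grid[np.argmin(obj)]  # brute-force minimizer
    assert abs(x_star - shrink(w, lam / mu)) < 1e-3
```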
Computing A. The optimal A with all other variables fixed is the solution to the following subproblem:

min_A sum_{n=1}^{N} (mu/2)||A_(n) - M_n + Y_{1,n}/mu||_F^2 + (mu/2)||D - A - E + Y_0/mu||_F^2.   (24)
Since each term of (24) is quadratic in A and unfolding preserves the Frobenius norm, it is easy to show that the solution to (24) is given by

A = (1/(N + 1)) [ sum_{n=1}^{N} fold(M_n - Y_{1,n}/mu, n) + D - E + Y_0/mu ].   (25)
Computing E. The optimal E with all other variables fixed is the solution to the following subproblem:

min_E sum_{n=1}^{N} (mu/2)||E_(n) - N_n + Y_{2,n}/mu||_F^2 + (mu/2)||D - A - E + Y_0/mu||_F^2.   (26)
Analogously, the solution to (26) is given by

E = (1/(N + 1)) [ sum_{n=1}^{N} fold(N_n - Y_{2,n}/mu, n) + D - A + Y_0/mu ].   (27)
The pseudocode of the proposed MALM-TR algorithm is summarized in Algorithm 1.

Under some rather general conditions, when {mu_k} is an increasing sequence and both the objective function and the constraints are continuously differentiable, it has been proven in [20] that the Lagrange multipliers produced by Algorithm 1 converge Q-linearly to the optimal solution when {mu_k} is bounded and super-Q-linearly when {mu_k} is unbounded. Another merit of MALM-TR is that the optimal step size for updating the multipliers is proven to be the chosen penalty parameter mu_k, which makes parameter tuning much easier. A third merit of MALM-TR is that the algorithm converges to the exact optimal solution even without requiring mu_k to approach infinity [20].
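For concreteness, the alternating updates of Section 3.2 can be sketched in NumPy. The simplified variant below keeps a single sparse tensor E (one soft-thresholding step) instead of per-mode sparse auxiliaries, and uses generic parameter choices (alpha_n = 1, lambda = 1/sqrt(max mode size), a capped increasing penalty); it is an illustrative sketch of the mixture ALM iteration, not the authors' implementation.

```python
import numpy as np

def unfold(T, n):
    return np.reshape(np.moveaxis(T, n, 0), (T.shape[n], -1), order="F")

def fold(M, n, shape):
    full = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(np.reshape(M, full, order="F"), 0, n)

def shrink(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def tensor_rpca(D, lam=None, mu=1e-2, rho=1.1, n_iter=300):
    """Mixture-ALM sketch for min sum_n alpha_n*||M_n||_* + lam*||E||_1
    s.t. M_n = A_(n), D = A + E (simplified single-E variant)."""
    N = D.ndim
    alpha = [1.0] * N
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))   # heuristic sparse weight
    mu_max = 1e6                            # cap the penalty
    A = np.zeros_like(D); E = np.zeros_like(D)
    Y0 = np.zeros_like(D)                   # multiplier for D = A + E
    Yn = [np.zeros_like(unfold(D, n)) for n in range(N)]  # for M_n = A_(n)
    for _ in range(n_iter):
        # M_n update: singular value thresholding of A_(n) + Yn/mu
        Ms = [svt(unfold(A, n) + Yn[n] / mu, alpha[n] / mu) for n in range(N)]
        # A update: average of folded M_n targets and the data-fit target
        A = (sum(fold(Ms[n] - Yn[n] / mu, n, D.shape) for n in range(N))
             + D - E + Y0 / mu) / (N + 1)
        # E update: entrywise shrinkage
        E = shrink(D - A + Y0 / mu, lam / mu)
        # multiplier updates with step size mu
        for n in range(N):
            Yn[n] += mu * (unfold(A, n) - Ms[n])
        Y0 += mu * (D - A - E)
        mu = min(mu * rho, mu_max)
    return A, E
```

In an easy synthetic regime (small Tucker ranks, a few percent of entries hit by large impulses), this iteration typically recovers the low-rank part accurately.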
4. Experiments
In this section, using both numerical simulations and real-world data, we evaluate the performance of the proposed algorithm and compare the results with RSTD on the low-rank tensor recovery problem.
In all the experiments, the Lanczos bidiagonalization algorithm with partial reorthogonalization [24] is adopted to obtain only the leading singular values and vectors in each iteration. A major challenge of our algorithm is the selection of parameters. We use one fixed setting of the weights alpha_n for all experiments. Similarly, we choose lambda as suggested in [11] and tune it as mu changes. For comparison with RSTD [12], we also use the difference of A and E in successive iterations against a certain tolerance as the stopping criterion. All the experiments are conducted and timed on the same desktop with a Pentium(R) Dual-Core 2.50 GHz CPU and 4 GB memory, running Windows 7 and Matlab.
4.1. Numerical Simulations
A low-rank tensor is generated as follows. The Tensor Toolbox [25] is used to generate a third-order tensor of size n_1 x n_2 x n_3 with relatively small mode ranks [r_1, r_2, r_3]. The generated tensor follows the Tucker model [13], described as A = G x_1 U_1 x_2 U_2 x_3 U_3. To impose the rank conditions, G is an r_1 x r_2 x r_3 core tensor with each entry sampled independently from a standard Gaussian distribution N(0, 1), and U_1, U_2, U_3 are n_1 x r_1, n_2 x r_2, n_3 x r_3 factor matrices with randomly generated entries. Here, without loss of generality, we make the factor matrices orthogonal. One major difference from the matrix case is that the mode ranks of a tensor are in general different along each mode, while the column rank and row rank of a matrix are equal to each other. For simplicity, in this paper we set the mode ranks to the same value r.
The entries of the sparse tensor E are independently distributed, each taking value 0 with probability 1 - spr and an impulsive value with probability spr. The tensor to be recovered is then generated as D = A + E.
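The data generation just described can be reproduced in NumPy without the Tensor Toolbox. The following is an illustrative equivalent; the sizes, seed, and impulse magnitude are placeholders of our choosing.

```python
import numpy as np

def make_data(shape=(30, 30, 30), r=3, spr=0.05, impulse=10.0, seed=0):
    """Third-order low-rank Tucker tensor plus sparse impulsive noise:
    D = A + E."""
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((r, r, r))      # Gaussian core tensor
    # orthogonal factor matrices via QR of random Gaussian matrices
    U = [np.linalg.qr(rng.standard_normal((s, r)))[0] for s in shape]
    A = np.einsum("abc,ia,jb,kc->ijk", G, U[0], U[1], U[2])
    mask = rng.random(shape) < spr          # random sparse support
    E = np.zeros(shape)
    E[mask] = impulse * rng.choice([-1.0, 1.0], size=int(mask.sum()))
    return A + E, A, E

D, A, E = make_data()
```

By construction every mode-n unfolding of A has rank r, and about spr of the entries of E are nonzero.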
The simulated tensors used in the experiments vary in the rank r and the sparsity ratio spr. The parameters are adjusted according to the different r and spr. The quality of recovery is measured by the relative square error (RSE) with respect to A and E, which is defined as

RSE = ||A_hat - A||_F / ||A||_F,

where A_hat is the recovered low-rank tensor (and analogously for E).
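The RSE criterion is straightforward to compute; the following helper (ours, for illustration) implements the definition above, relying on NumPy's flattened 2-norm for arrays of any order:

```python
import numpy as np

def rse(est, truth):
    """Relative square error ||est - truth||_F / ||truth||_F."""
    return np.linalg.norm(est - truth) / np.linalg.norm(truth)
```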
Tables 1 and 2 present the average results (across 10 instances) for different sparsity ratios. The results demonstrate that the proposed MALM-TR algorithm outperforms RSTD in both efficiency and accuracy.
4.2. Image Restoration
One straightforward application of our algorithm is image restoration. As pointed out in [12], our algorithm also assumes the image to be well structured. Though this assumption would not be reasonable for some natural images, it holds for visual data such as structured images (e.g., the facade image), CT/MRI data, and multispectral images. In the experiments, we apply the algorithm to restoration of the facade image, which is also used in [12, 17]. We add different percentages of random impulsive noise to the image and compare MALM-TR with RSTD. The results produced by both algorithms are shown in Figure 1.
4.3. Background Modeling
Another application of our algorithm is to estimate a good model for the background variations in a scene (i.e., background modeling). In this situation, it is natural to model the background variations as approximately low-rank. Foreground objects generally occupy only a fraction of the image pixels and hence can be treated as the sparse part.
We test our algorithm on an example from [26] and compare with RSTD [12]. Visual comparisons of the background modeling are shown in Figure 2. It is observed that our algorithm is effective in separating the background even in a dynamic scene. The results are also comparable to RSTD.
4.4. Traffic Data Recovery
In our previous work [3, 4], we proposed two tensor-based methods for traffic data applications. In [3], a tensor imputation method based on Tucker decomposition was developed to estimate missing values. In that setting, the exact coordinates and the number of the missing entries in the tensor can be observed directly: if an element is missing, it simply has no value, so it is easily recognized. In contrast, this paper recovers a low-rank tensor that is arbitrarily corrupted by an unknown fraction of noise, based on trace norm and l1 norm optimization. The number and coordinates of the corrupted entries are unknown and hard to obtain; the corrupted data have values and can hardly be separated from the correct data. Hence the two papers solve two different problems. Paper [4] addresses traffic data recovery, the same problem considered in this section. The main difference between the two proposed methods lies in how the constraint condition D = A + E is used. Reference [4] puts the constraint condition into the minimized function with only one parameter, which leads to an objective function containing both tensors and matrices. However, as the size and structure of each mode of the given tensor are not always the same, the contribution of each mode to the final result may differ. In order to utilize the information of the constraint condition as much as possible, this paper unfolds the constraint condition along each mode and uses weighted parameters to obtain new constraint conditions in matrix form, which are put into the minimized function using the augmented Lagrange multiplier strategy. With different objective functions, the optimization processes differ as well. More details can be found in [4].
In this fourth part of the experiment section, we apply the proposed algorithm to traffic data recovery. The data used in the experiment were collected by a fixed loop detector in Sacramento County and downloaded from http://pems.dot.ca.gov/. The data cover 77 days, from March 14 to May 29, 2011. The traffic volume data are recorded every 5 minutes; therefore, a daily traffic volume series for a loop detector contains 288 records. To perform traffic data recovery with the proposed algorithm, the first step is to convert the mass traffic data into a tensor form. Here, we choose 8 weeks of complete traffic volume data from the 77 days. The 8-week data are then formed into a tensor of size 8 x 7 x 288, as Figure 3 shows. In this model, "8" stands for the 8 weeks, "7" for the seven days in one week, and "288" for the 288 records in one day.
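Converting the raw 5-minute volume series into the 8 x 7 x 288 tensor is a simple reshape, since the records are ordered by week, then day, then time of day. An illustrative NumPy sketch with a synthetic series standing in for the PeMS data:

```python
import numpy as np

WEEKS, DAYS, RECORDS = 8, 7, 288

# synthetic stand-in for the 8 weeks of 5-minute volume records
series = np.arange(WEEKS * DAYS * RECORDS, dtype=float)

# tensor[w, d, t] = record t of day d in week w
tensor = series.reshape(WEEKS, DAYS, RECORDS)
```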
In our previous work [3], the similarity coefficient [27] was used to analyze the high multimode correlation ("link" mode, "week" mode, "day" mode, and "hour" mode) of traffic data from the point of view of statistical characteristics. Because of these high multimode correlations, the tensor form of size 8 x 7 x 288 can be approximated by a low-rank tensor.
According to the above description, the traffic data are reasonably converted into a tensor form that can be approximated by a low-rank tensor. In the traffic data recovery experiment, it is assumed that a subset of entries of the traffic data tensor is corrupted by impulsive noise at random. The noise ratios are set from 5% to 25% in steps of 5%. We then compare the proposed method with the RSTD algorithm using the RSE as the criterion, defined as

RSE = ||D_hat - D||_F / ||D||_F,

where D is the original traffic tensor and D_hat the recovered one.
Table 3 tabulates the RSEs under sparse impulsive noise with different ratios on the traffic data. In particular, the "unrecovered" column presents the RSE between the corrupted data and the original data. From the table, it is observed that the RSEs obtained by MALM-TR and RSTD are much smaller than those of the unrecovered data, which means that both algorithms improve the quality of the corrupted data. Moreover, the RSEs of MALM-TR are smaller than those of RSTD. The curves in Figure 4 show that our method performs better than RSTD.
5. Conclusion
In this paper, we extend the matrix recovery problem to low-rank tensor recovery and propose an efficient algorithm based on the mixture augmented Lagrange multiplier method. The proposed algorithm can automatically separate the low-rank part and the sparse part of the tensor data. Experiments show that the proposed algorithm is more stable and accurate in most cases and has an excellent convergence rate. Different application examples show the broad applicability of the proposed algorithm in computer vision, image processing, and traffic data recovery.
In the future, we would like to investigate how to choose the parameters of our algorithm automatically and to develop more efficient methods for the tensor recovery problem. We will also explore more applications of our method.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The research was supported by NSFC (Grant nos. 61271376, 51308115, and 91120010), the National Basic Research Program of China (973 Program: no. 2012CB725405), and the Beijing Natural Science Foundation (4122067). The authors would like to thank Professor Bin Ran from the University of Wisconsin-Madison and Yong Li from the University of Notre Dame for helpful discussions.