Abstract

In this paper, a low-complexity tensor completion (LTC) scheme is proposed to improve the efficiency of tensor completion. On the one hand, a matrix factorization model is established for complexity reduction, which incorporates matrix factorization into the low-rank tensor completion model. On the other hand, we introduce smoothness via total variation regularization and framelet regularization to guarantee the completion performance. Accordingly, given the proposed smooth matrix factorization (SMF) model, an alternating direction method of multipliers (ADMM) based solution is further proposed to realize efficient and effective tensor completion. Additionally, we employ a novel tensor initialization approach to accelerate the convergence. Finally, simulation results are presented to confirm the gain of the proposed LTC scheme in both efficiency and effectiveness.

1. Introduction

As a high-order generalization of vectors and matrices, tensors can represent the complicated structures of high-order data more clearly. In the tensor form, we can understand the internal connections of the data from a higher-level perspective. Thus, tensors are widely applied in fields such as signal reconstruction [1], signal processing [2–5], image recovery [6], and video inpainting [7].

The purpose of low-rank tensor completion (LRTC) is to estimate the missing entries and recover the incomplete tensor data, taking advantage of the low-rank prior. The LRTC optimization problem can be formulated as follows:

$$\min_{\mathcal{X}} \ \operatorname{rank}(\mathcal{X}) \quad \text{s.t.} \quad \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T}). \qquad (1)$$

Here, $\mathcal{X}$ is the target tensor, $\mathcal{T}$ is the available data, and $\Omega$ is the index set of the observed entries; $\mathcal{P}_{\Omega}(\cdot)$ keeps the entries in $\Omega$ and zeros out the others. The constraint keeps the entries of $\mathcal{X}$ in $\Omega$ consistent with $\mathcal{T}$.
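To make the role of $\Omega$ concrete, the following minimal sketch (numpy, with illustrative names; not code from this paper) encodes the observed entries as a boolean mask and enforces the constraint in (1):

```python
import numpy as np

# A minimal sketch of the observation constraint in (1): Omega is encoded as
# a boolean mask, and the candidate X is kept consistent with T on Omega.
rng = np.random.default_rng(0)
T = rng.random((4, 5, 6))           # data tensor (fully known here, for illustration only)
mask = rng.random(T.shape) < 0.3    # Omega: roughly 30% of entries are observed

X = np.zeros_like(T)                # a candidate completion
X[mask] = T[mask]                   # enforce P_Omega(X) = P_Omega(T)
```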

It is obvious that minimizing the tensor rank is the fundamental problem of LRTC. However, there is no uniform definition of the tensor rank. The CP-rank [8] and the Tucker-rank [9] are two commonly used definitions, which arise from the corresponding decompositions: the CP decomposition [10] and the Tucker decomposition [11]. Moreover, the problem of calculating the CP-rank is NP-hard, and it has no tractable convex relaxation.

Although minimizing the Tucker-rank is still NP-hard, it can be relaxed by the sum of nuclear norms (SNN), since the nuclear norm is the tightest convex surrogate of the matrix rank. Based on this property of the Tucker-rank, Liu et al. [12] developed a theoretical framework for LRTC and defined the nuclear norm for tensors as follows:

$$\|\mathcal{X}\|_* = \sum_{n=1}^{N} \alpha_n \left\|\mathbf{X}_{(n)}\right\|_*. \qquad (2)$$

Here, $\mathbf{X}_{(n)}$ is the mode-$n$ unfolding matrix of $\mathcal{X}$, $\|\mathbf{X}_{(n)}\|_*$ denotes the nuclear norm of $\mathbf{X}_{(n)}$, and $\alpha_n$ ($\alpha_n \ge 0$, $\sum_{n=1}^{N}\alpha_n = 1$) denotes the corresponding weight of $\mathbf{X}_{(n)}$. Then, the LRTC model can be written as

$$\min_{\mathcal{X}} \ \sum_{n=1}^{N} \alpha_n \left\|\mathbf{X}_{(n)}\right\|_* \quad \text{s.t.} \quad \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T}). \qquad (3)$$

As shown in (3), the LRTC problem is divided into a series of matrix nuclear norm minimizations: by unfolding the target tensor, the matrices to be optimized are formed. To solve (3), SiLRTC, HaLRTC, and FaLRTC were proposed in [12]. Unfortunately, all of these algorithms need to compute the singular value decomposition (SVD) iteratively, which is computationally expensive. Considering this difficulty, Liu and Shang [13] applied matrix factorization to the SNN model to reduce the scale of the matrices requiring SVD (see details in Section 2):

$$\min_{\mathcal{X}, \mathbf{U}_n, \mathbf{V}_n} \ \sum_{n=1}^{N} \alpha_n \left\|\mathbf{V}_n\right\|_* \quad \text{s.t.} \quad \mathbf{X}_{(n)} = \mathbf{U}_n \mathbf{V}_n, \ \mathbf{U}_n \in \mathrm{St}(I_n, r_n), \ \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T}), \qquad (4)$$

where $\mathrm{St}(I_n, r_n) = \{\mathbf{U} \in \mathbb{R}^{I_n \times r_n} : \mathbf{U}^{\top}\mathbf{U} = \mathbf{I}\}$ denotes the Stiefel manifold and $r_n$ is a given upper bound of the rank. In this paper, we use SNN combined with matrix factorization to utilize the low-rankness of tensors and improve the efficiency of LRTC.

Additionally, considering only the low-rankness is not adequate for LRTC, which results in inevitable performance degradation as the sampling rate decreases [14, 15]. Apart from the low-rankness of a tensor, smoothness is also a significant attribute of real-world data. For instance, spectral signals [16] and natural images and videos [17] generally have this property. Total variation (TV) [18] is commonly applied as a constraint to ensure smoothness, and its application in image processing has been shown to effectively promote the smoothness prior [4, 19, 20].

For the purpose of improving the quality of tensor completion, Yokota and Hontani [21] proposed LRTV-PDS to minimize SNN and TV simultaneously:

$$\min_{\mathcal{X}} \ \sum_{n=1}^{N} \alpha_n \left\|\mathbf{X}_{(n)}\right\|_* + \lambda \, \mathrm{TV}(\mathcal{X}) \quad \text{s.t.} \quad \mathcal{X} \in [x_{\min}, x_{\max}]^{I_1 \times \cdots \times I_N}, \ \left\|\mathcal{P}_{\Omega}(\mathcal{X} - \mathcal{T})\right\|_F \le \delta, \qquad (5)$$

where $\lambda$ is the trade-off parameter connecting the SNN and TV terms and $\delta$ is the noise threshold parameter. The first constraint restricts the recovered data to the range $[x_{\min}, x_{\max}]$. The second constraint means that the output tensor and the original tensor should coincide at the observed entries; considering the effect of noise, a deviation within a certain range is allowed.

As another commonly used definition of the tensor rank, the CP-rank is also widely utilized in LRTC. On the basis of CP decomposition, Yokota et al. [22] considered TV as the smoothness constraint along with the CP-rank and proposed the smooth PARAFAC tensor completion (SPC) method, which achieves better completion results when the sampling rate is extremely low.

However, TV regularization usually leads to the staircase effect [23], which may cause a loss of information and geometric features in practice. For this reason, framelet regularization can be further applied to avoid this performance degradation. More precisely, as a generalization of the orthogonal basis, the framelet relaxes the restrictions of orthogonality and linear independence, so that valuable information and geometric features can be well preserved by the introduced redundancy [24]. In this paper, the framelet is applied to exploit the smoothness and preserve more details.

Considering SNN, matrix factorization, framelet, and TV simultaneously, we propose a novel model for tensor completion, named smooth matrix factorization (SMF). The SMF model is formulated as

$$\min_{\mathcal{X}, \mathbf{U}_n, \mathbf{V}_n} \ \sum_{n=1}^{3} \alpha_n \left\|\mathbf{V}_n\right\|_* + \lambda_1 \left\|\mathbf{W}\mathbf{X}_{(3)}^{\top}\right\|_1 + \lambda_2 \left\|\mathbf{D}\mathbf{X}_{(3)}\right\|_1 \quad \text{s.t.} \quad \mathbf{X}_{(n)} = \mathbf{U}_n \mathbf{V}_n, \ \mathbf{U}_n \in \mathrm{St}(I_n, r_n), \ \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T}). \qquad (6)$$

Here, $\lambda_1$ and $\lambda_2$ are adjustable parameters that balance SNN, framelet, and TV. Besides, $\mathbf{W}$ indicates the framelet transformation, $\mathbf{D}$ denotes the difference matrix, and the $\ell_1$-norm is the sum of the absolute values of the matrix elements.

In addition, we notice that how the observed tensor is initialized can affect both the efficiency and the effectiveness of tensor completion. Most existing works focus only on optimizing the model and ignore initializing the tensor before the iterative algorithm. There are two commonly used initialization methods: one sets the unknown values to zero; the other sets them to the average of all observed values. Different from these simple methods, our proposed initialization method considers the locations of the known data and expands the known data outward until the entire tensor is filled. Through this rough estimate of the unknown data, we obtain a more accurate initial tensor, which accelerates the convergence of the solution.

In summary, the low-complexity tensor completion (LTC) scheme is proposed to improve the efficiency of LRTC. The proposed LTC mainly consists of two parts: the SMF model and the alternating direction method of multipliers (ADMM) based solution. Specifically, matrix factorization is introduced into the SNN term to exploit the low-rankness of the entire tensor; by factorizing the unfolding matrices, the computational complexity of the SVD can be significantly reduced, which leads to an efficiency improvement. Then, to guarantee the effectiveness of tensor completion, TV is used to exploit the smoothness globally. After that, the framelet is applied to further ensure the smoothness and preserve information owing to its redundancy. With respect to the proposed SMF model, we also give an effective ADMM-based algorithm to solve it. Additionally, we propose a novel initialization approach to accelerate the convergence. On natural data, such as color images and grayscale videos, the experimental results show that our LTC scheme achieves a better trade-off between performance and complexity for LRTC.

Compared with the existing works, the contributions of this paper are mainly threefold:
(i) By concurrently utilizing the low-rank and smooth properties, we propose an advanced model for low-rank tensor completion. Besides, matrix factorization is introduced into this model to save computational cost.
(ii) We propose an effective tensor initialization approach, which provides a better starting point for tensor completion and accelerates the convergence.
(iii) An ADMM-based algorithm is developed to solve the new SMF model. As shown by the numerical experiments, our method clearly improves the effectiveness of LRTC.

The rest of this paper is structured as follows: Firstly, we present some notations that this work needs in Section 2. Then, we propose the LTC scheme which consists of the SMF model and the corresponding ADMM-based solution in Section 3. Afterwards, the performance of LTC is evaluated and compared with other competing methods in Section 4. Finally, the conclusions are given in Section 5.

2. Preliminary

In this section, we present the basic notations and relevant definitions used in this work [25].

2.1. Tensor Basics

Here, we use different fonts to distinguish data formats. For instance, we write vectors as boldface lowercase letters ($\mathbf{x}$), matrices as boldface capital letters ($\mathbf{X}$), and tensors as calligraphic letters ($\mathcal{X}$). The $(i_1, i_2, \ldots, i_N)$th component of an $N$th-order tensor $\mathcal{X}$ is denoted as $x_{i_1 i_2 \cdots i_N}$.

The inner product of two tensors $\mathcal{X}, \mathcal{Y} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is formulated as

$$\langle \mathcal{X}, \mathcal{Y} \rangle = \sum_{i_1=1}^{I_1} \sum_{i_2=1}^{I_2} \cdots \sum_{i_N=1}^{I_N} x_{i_1 i_2 \cdots i_N} \, y_{i_1 i_2 \cdots i_N}. \qquad (7)$$

As the norm corresponding to the inner product, the Frobenius norm of a tensor is then given as

$$\|\mathcal{X}\|_F = \sqrt{\langle \mathcal{X}, \mathcal{X} \rangle}. \qquad (8)$$

The fiber of a tensor is a vector obtained by fixing every index but one; the mode-$n$ fiber is denoted as $\mathbf{x}_{i_1 \cdots i_{n-1}\,:\,i_{n+1} \cdots i_N}$. The mode-$n$ unfolding of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ is the matrix $\mathbf{X}_{(n)} \in \mathbb{R}^{I_n \times \prod_{j \ne n} I_j}$ whose columns are the mode-$n$ fibers. Additionally, we denote the inverse operator of unfolding as "fold"; for example, folding a matrix back into a tensor along mode $n$ is written as $\mathcal{X} = \mathrm{fold}_n\big(\mathbf{X}_{(n)}\big)$.
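As a concrete illustration of this notation, the sketch below implements the mode-$n$ unfolding, its inverse fold, and the quantities in (7) and (8); the column ordering is one common convention (an assumption, since the paper does not fix it):

```python
import numpy as np

# Mode-n unfolding and folding, plus the inner product (7) and the Frobenius
# norm (8). The fiber ordering follows the common convention of moving mode n
# to the front before flattening.

def unfold(X, n):
    """Mode-n unfolding: mode-n fibers become columns of an I_n x (rest) matrix."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: reshape and move the leading axis back to position n."""
    rest = [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape([shape[n]] + rest), 0, n)

X = np.random.rand(4, 5, 6)
Y = np.random.rand(4, 5, 6)

inner = np.sum(X * Y)              # <X, Y> as in (7)
fro = np.sqrt(np.sum(X * X))       # ||X||_F as in (8)

assert np.allclose(fold(unfold(X, 1), 1, X.shape), X)   # fold inverts unfold
assert np.isclose(fro, np.linalg.norm(X))
```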

2.2. Matrix Factorization

Given a low-rank tensor $\mathcal{X}$ with $\operatorname{rank}\big(\mathbf{X}_{(n)}\big) \le r_n$, the mode-$n$ unfolding can be factorized in the form $\mathbf{X}_{(n)} = \mathbf{U}_n \mathbf{V}_n$, where $\mathbf{U}_n \in \mathrm{St}(I_n, r_n)$, $\mathbf{V}_n \in \mathbb{R}^{r_n \times \prod_{j \ne n} I_j}$, $\mathrm{St}(I_n, r_n) = \{\mathbf{U} \in \mathbb{R}^{I_n \times r_n} : \mathbf{U}^{\top}\mathbf{U} = \mathbf{I}\}$ denotes the Stiefel manifold (the collection of all matrices with orthonormal columns), and $r_n$ is a designated upper bound on the rank. By introducing this matrix factorization, we have the following property of the tensor nuclear norm:

$$\left\|\mathbf{X}_{(n)}\right\|_* = \left\|\mathbf{U}_n \mathbf{V}_n\right\|_* = \left\|\mathbf{V}_n\right\|_*. \qquad (9)$$

Thus, the SNN minimization problem (3) can be rewritten with smaller-scale matrices as follows to reduce the computational complexity:

$$\min_{\mathcal{X}, \mathbf{U}_n, \mathbf{V}_n} \ \sum_{n=1}^{N} \alpha_n \left\|\mathbf{V}_n\right\|_* \quad \text{s.t.} \quad \mathbf{X}_{(n)} = \mathbf{U}_n \mathbf{V}_n, \ \mathbf{U}_n \in \mathrm{St}(I_n, r_n), \ \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T}). \qquad (10)$$
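The saving comes from property (9): since $\mathbf{U}_n$ has orthonormal columns, the nuclear norm of the large unfolding equals that of the small factor. A quick numerical check (illustrative numpy, not the paper's code):

```python
import numpy as np

# Numerical check of (9): if U^T U = I, then ||U V||_* = ||V||_*, so the SVD
# only has to run on the small r_n x J factor V instead of the I_n x J matrix.
rng = np.random.default_rng(0)
I_n, r_n, J = 200, 10, 300

U, _ = np.linalg.qr(rng.standard_normal((I_n, r_n)))      # U on the Stiefel manifold
V = rng.standard_normal((r_n, J))

nuc = lambda M: np.linalg.svd(M, compute_uv=False).sum()  # nuclear norm
assert np.isclose(nuc(U @ V), nuc(V))
```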

2.3. Framelet

In the discrete setting, the framelet transform is denoted as a linear operator $\mathbf{W} \in \mathbb{R}^{m \times n}$ ($m \ge n$). For instance, $\mathbf{W}\mathbf{x}$ means that the framelet transform operator is applied to the image data rearranged as a vector $\mathbf{x} \in \mathbb{R}^{n}$. On the basis of the unitary extension principle, $\mathbf{W}$ denotes the framelet transform matrix satisfying $\mathbf{W}^{\top}\mathbf{W} = \mathbf{I}$. In this paper, the piecewise linear B-spline framelets constructed in [26] are applied to exploit the smoothness and preserve details in tensor completion.
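For intuition, the sketch below builds a single-level 1-D framelet matrix $\mathbf{W}$ with periodic boundary conditions from the piecewise linear B-spline filter bank and checks $\mathbf{W}^{\top}\mathbf{W} = \mathbf{I}$; this is only a toy stand-in for the multilevel 2-D transform of [26]:

```python
import numpy as np

# Single-level 1-D piecewise linear B-spline framelet with periodic boundary.
# The three filters satisfy the unitary extension principle, so the stacked
# (redundant) analysis matrix W obeys W^T W = I.
n = 16
h0 = np.array([1.0, 2.0, 1.0]) / 4.0                 # low-pass
h1 = np.array([1.0, 0.0, -1.0]) * np.sqrt(2) / 4.0   # high-pass 1
h2 = np.array([-1.0, 2.0, -1.0]) / 4.0               # high-pass 2

def circ(h, n):
    """Circulant matrix realizing circular convolution with filter h."""
    col = np.zeros(n)
    col[: len(h)] = h
    return np.column_stack([np.roll(col, j) for j in range(n)])

W = np.vstack([circ(h, n) for h in (h0, h1, h2)])    # 3n x n, redundant
assert np.allclose(W.T @ W, np.eye(n))               # unitary extension principle
```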

3. The Proposed LTC Scheme

In this section, the LTC scheme is proposed, which consists of the SMF model, the tensor initialization method, and the ADMM-based algorithm.

3.1. The Proposed SMF Model

Considering a 3rd-order tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, the proposed SMF model is as follows:

$$\min_{\mathcal{X}, \mathbf{U}_n, \mathbf{V}_n} \ \sum_{n=1}^{3} \alpha_n \left\|\mathbf{V}_n\right\|_* + \lambda_1 \left\|\mathbf{W}\mathbf{X}_{(3)}^{\top}\right\|_1 + \lambda_2 \left\|\mathbf{D}\mathbf{X}_{(3)}\right\|_1 \quad \text{s.t.} \quad \mathbf{X}_{(n)} = \mathbf{U}_n \mathbf{V}_n, \ \mathbf{U}_n \in \mathrm{St}(I_n, r_n), \ n = 1, 2, 3, \ \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T}). \qquad (11)$$

Here, $\lambda_1$ and $\lambda_2$ are regularization parameters, $\mathcal{X}$ is the object tensor, $\mathcal{T}$ is the incomplete input tensor, and $\Omega$ is the set of indices of the available data in $\mathcal{T}$.

Overall, the SMF model contains two main terms: SNN with matrix factorization, and smoothness constraints.

The first term, SNN with matrix factorization, exploits the low-rank property. The goal of introducing SNN is to exploit the global multidimensional structure, which is the basis of LRTC. On top of the SNN, matrix factorization is applied to save computational cost and thereby improve efficiency. Given a low-rank tensor $\mathcal{X}$ with $\operatorname{rank}\big(\mathbf{X}_{(n)}\big) \le r_n$, the mode-$n$ unfolding can be factorized in the form $\mathbf{X}_{(n)} = \mathbf{U}_n \mathbf{V}_n$. Then, the SNN problem can be rewritten with smaller-scale matrices to reduce the computational complexity. For example, the computational complexity of the SVD of $\mathbf{X}_{(n)} \in \mathbb{R}^{I_n \times \prod_{j \ne n} I_j}$ (assuming $I_n \le \prod_{j \ne n} I_j$) is $O\big(I_n^2 \prod_{j \ne n} I_j\big)$. By introducing matrix factorization, the SVD only has to be computed for the smaller factor $\mathbf{V}_n$, so the cost is reduced to $O\big(r_n^2 \prod_{j \ne n} I_j\big)$ with $r_n \ll I_n$ [13].

The second term, smoothness constraints, contains the total variation regularization and the framelet regularization, which are used for a better performance of tensor completion.

The total variation regularization $\|\mathbf{D}\mathbf{X}_{(3)}\|_1$ is used to exploit piecewise smoothness along the mode-3 unfolding of $\mathcal{X}$, where $\mathbf{D}$ is the difference matrix

$$\mathbf{D} = \begin{bmatrix} 1 & -1 & & \\ & 1 & -1 & \\ & & \ddots & \ddots \\ & & & 1 \ \ -1 \end{bmatrix} \in \mathbb{R}^{(I_3 - 1) \times I_3}. \qquad (12)$$

The TV regularization makes the third dimension of the recovered tensor smooth, improving the effectiveness of tensor completion.
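A small sketch of this term (illustrative numpy; the unfolding convention matches the Section 2.1 sketch):

```python
import numpy as np

# Build the difference matrix D of (12) and evaluate the mode-3 TV term
# ||D X_(3)||_1: D differences adjacent rows of X_(3), i.e., adjacent
# slices along the third dimension.
I1, I2, I3 = 32, 32, 8
X = np.random.rand(I1, I2, I3)
X3 = np.moveaxis(X, 2, 0).reshape(I3, -1)          # mode-3 unfolding, I3 x (I1*I2)

D = np.eye(I3 - 1, I3) - np.eye(I3 - 1, I3, k=1)   # rows like [1, -1, 0, ...]
tv_term = np.abs(D @ X3).sum()                     # ||D X_(3)||_1
```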

The framelet regularization $\|\mathbf{W}\mathbf{X}_{(3)}^{\top}\|_1$ retains details in the spatial domain. Here, $\mathbf{W}$ denotes the framelet transform matrix satisfying $\mathbf{W}^{\top}\mathbf{W} = \mathbf{I}$. Owing to its redundancy, the framelet regularization further keeps the output tensor smooth while preserving details.

In summary, SMF exploits the global low-rank property and the smoothness prior along both the spatial and the third mode. As shown in the experiments, the three parts of SMF (the SNN term with matrix factorization, the total variation regularization, and the framelet regularization) jointly take advantage of these two properties to improve the efficiency and effectiveness of tensor completion.

3.2. The Tensor Initialization Method

In order to improve the efficiency of tensor completion, we initialize the incomplete tensor to get a better starting point for the subsequent processing. Considering the locations of the known data, the main idea of our tensor initialization method is to expand the known data outward until the entire tensor is completed.

Here, we use a specific example to illustrate our tensor initialization method. Consider an incomplete color image denoted as $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times 3}$. We make a rough estimate of the missing data by propagating the value of each known pixel to the unknown pixels around it. In other words, the value of each unknown pixel is initialized by averaging the known values adjacent to it. For example, if $x_{i,j,k}$ is unknown and two adjacent pixels, $x_{i-1,j,k}$ and $x_{i,j+1,k}$, are observed, we can set $x_{i,j,k}$ to $(x_{i-1,j,k} + x_{i,j+1,k})/2$. All unknown pixels with observed adjacent pixels can be initialized in this way.

Here, we introduce the operator $\mathcal{H}(\cdot)$, which means

$$\big(\mathcal{H}(\mathcal{X})\big)_{i_1 i_2 i_3} = \begin{cases} 1, & \text{if } x_{i_1 i_2 i_3} \text{ is known}, \\ 0, & \text{otherwise}, \end{cases} \qquad (13)$$

and which is used to project the original tensor to a binarized (indicator) tensor.

In this step, we have completed one expansion of the known pixels, and the pixels initialized in this round are regarded as known in the next expansion. We then repeat this expansion process until the entire tensor is filled. The proposed tensor initialization method is detailed in Algorithm 1, and a Python sketch follows the listing.

3.3. The Proposed ADMM-Based Algorithm

Here, we design an effective ADMM-based algorithm to solve the convex problem. In particular, we introduce two additional variables $\mathbf{M}$ and $\mathbf{N}$, and the model (11) can be equivalently transformed into the following formulation:

$$\min_{\mathcal{X}, \mathbf{U}_n, \mathbf{V}_n, \mathbf{M}, \mathbf{N}} \ \sum_{n=1}^{3} \alpha_n \left\|\mathbf{V}_n\right\|_* + \lambda_1 \left\|\mathbf{N}\right\|_1 + \lambda_2 \left\|\mathbf{M}\right\|_1 \quad \text{s.t.} \quad \mathbf{X}_{(n)} = \mathbf{U}_n \mathbf{V}_n, \ \mathbf{U}_n \in \mathrm{St}(I_n, r_n), \ \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T}), \qquad (14)$$

where $\mathbf{N} = \mathbf{W}\mathbf{X}_{(3)}^{\top}$ and $\mathbf{M} = \mathbf{D}\mathbf{X}_{(3)}$. By introducing the matrices $\mathbf{M}$ and $\mathbf{N}$, we separate the blocks of variables, and the augmented Lagrangian function of (14) (with the constraint $\mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T})$ kept explicit) becomes

$$\mathcal{L} = \sum_{n=1}^{3}\left( \alpha_n \left\|\mathbf{V}_n\right\|_* + \left\langle \boldsymbol{\Gamma}_n, \mathbf{X}_{(n)} - \mathbf{U}_n\mathbf{V}_n \right\rangle + \frac{\rho_n}{2}\left\| \mathbf{X}_{(n)} - \mathbf{U}_n\mathbf{V}_n \right\|_F^2 \right) + \lambda_1 \left\|\mathbf{N}\right\|_1 + \left\langle \boldsymbol{\Lambda}, \mathbf{N} - \mathbf{W}\mathbf{X}_{(3)}^{\top} \right\rangle + \frac{\mu}{2}\left\| \mathbf{N} - \mathbf{W}\mathbf{X}_{(3)}^{\top} \right\|_F^2 + \lambda_2 \left\|\mathbf{M}\right\|_1 + \left\langle \boldsymbol{\Pi}, \mathbf{M} - \mathbf{D}\mathbf{X}_{(3)} \right\rangle + \frac{\nu}{2}\left\| \mathbf{M} - \mathbf{D}\mathbf{X}_{(3)} \right\|_F^2, \qquad (15)$$

where $\boldsymbol{\Gamma}_n$, $\boldsymbol{\Lambda}$, and $\boldsymbol{\Pi}$ are the Lagrange multipliers and $\rho_n$, $\mu$, and $\nu$ are the penalty parameters. Based on ADMM, we can divide problem (15) into subproblems of smaller size that are easier to handle.

Input: The observed tensor $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$.
Output: The initialized tensor $\mathcal{X}$.
1: $\mathcal{X} \leftarrow \mathcal{T}$, $\mathcal{B} \leftarrow \mathcal{H}(\mathcal{X})$
2: while 0 exists in $\mathcal{B}$ do
3:  $\mathcal{X}' \leftarrow \mathcal{X}$, $\mathcal{B}' \leftarrow \mathcal{B}$
4:  for $i = 1$ to $I_1$ do
5:   for $j = 1$ to $I_2$ do
6:    for $k = 1$ to $I_3$ do
7:     if $b_{ijk} = 0$ then
8:      $S \leftarrow \{(i', j', k') \text{ adjacent to } (i, j, k) : b_{i'j'k'} = 1\}$
9:      if $S \neq \emptyset$ then
10:      $x'_{ijk} \leftarrow \frac{1}{|S|}\sum_{(i',j',k') \in S} x_{i'j'k'}$, $b'_{ijk} \leftarrow 1$
11:     end if
12:    end if
13:   end for
14:  end for
15: end for
16: $\mathcal{X} \leftarrow \mathcal{X}'$, $\mathcal{B} \leftarrow \mathcal{B}'$
17: end while
18: return $\mathcal{X}$
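A compact Python sketch of Algorithm 1 (assuming 6-connected neighbors and at least one observed entry; variable names are illustrative):

```python
import numpy as np

# Tensor initialization by outward expansion: every unknown entry that has at
# least one known 6-neighbor is filled with the average of its known
# neighbors; newly filled entries count as known in the next sweep.
def initialize_tensor(T, mask):
    """T: observed 3rd-order tensor; mask: 1 at observed entries, 0 elsewhere."""
    X, B = T.copy(), mask.astype(bool).copy()
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while not B.all():                                # 0 still exists in B
        X_new, B_new = X.copy(), B.copy()
        for idx in zip(*np.nonzero(~B)):              # every unknown entry
            vals = []
            for d in offsets:                         # its 6 direct neighbors
                nb = tuple(np.add(idx, d))
                if all(0 <= nb[a] < X.shape[a] for a in range(3)) and B[nb]:
                    vals.append(X[nb])
            if vals:                                  # any known neighbor?
                X_new[idx] = np.mean(vals)            # rough local estimate
                B_new[idx] = True
        X, B = X_new, B_new
    return X
```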

For the purpose of facilitating the analysis of the time complexity of the proposed ADMM-based algorithm, we assume that $I_1 = I_2 = I_3 = I$, $r_1 = r_2 = r_3 = r$, and $r \ll I$.

The first subproblem optimizes the variable $\mathbf{U}_n$, which is presented as

$$\mathbf{U}_n^{+} = \arg\min_{\mathbf{U}_n \in \mathrm{St}(I_n, r_n)} \ \frac{\rho_n}{2}\left\| \mathbf{X}_{(n)} + \frac{\boldsymbol{\Gamma}_n}{\rho_n} - \mathbf{U}_n \mathbf{V}_n \right\|_F^2. \qquad (16)$$

By solving this subproblem with the orthogonality constraint, the optimal $\mathbf{U}_n$ is calculated. Following [27, 28], the optimal solution is given by

$$\mathbf{U}_n^{+} = \mathbf{B}_n \mathbf{C}_n^{\top}, \qquad (17)$$

where $\mathbf{B}_n \boldsymbol{\Sigma}_n \mathbf{C}_n^{\top}$ is the skinny SVD of $\big(\mathbf{X}_{(n)} + \boldsymbol{\Gamma}_n/\rho_n\big)\mathbf{V}_n^{\top}$. The cost of computing $\mathbf{U}_n$ is $O(rI^3)$.
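A hedged sketch of the update (16)–(17) (illustrative names; a Procrustes-type solution via one skinny SVD):

```python
import numpy as np

# Orthogonality-constrained least squares (16): the minimizer over the
# Stiefel manifold is U = B C^T, where B and C come from the SVD of
# (X_(n) + Gamma_n / rho_n) V_n^T, cf. (17).
def update_U(X_n, V_n, Gamma_n, rho_n):
    B, _, Ct = np.linalg.svd((X_n + Gamma_n / rho_n) @ V_n.T,
                             full_matrices=False)
    return B @ Ct          # columns are orthonormal: U^T U = I
```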

The second subproblem, optimizing the variable $\mathbf{V}_n$, can be written as follows:

$$\mathbf{V}_n^{+} = \arg\min_{\mathbf{V}_n} \ \alpha_n \left\|\mathbf{V}_n\right\|_* + \frac{\rho_n}{2}\left\| \mathbf{X}_{(n)} + \frac{\boldsymbol{\Gamma}_n}{\rho_n} - \mathbf{U}_n^{+} \mathbf{V}_n \right\|_F^2. \qquad (18)$$

Then, $\mathbf{V}_n^{+}$ is the optimal solution to the subproblem (18) if and only if

$$\mathbf{0} \in \partial\big(\alpha_n \|\mathbf{V}_n^{+}\|_*\big) - \rho_n \big(\mathbf{U}_n^{+}\big)^{\top}\left( \mathbf{X}_{(n)} + \frac{\boldsymbol{\Gamma}_n}{\rho_n} - \mathbf{U}_n^{+}\mathbf{V}_n^{+} \right), \qquad (19)$$

where $\partial(\cdot)$ is the subdifferential. Considering the property of $\mathbf{U}_n^{+}$, $\big(\mathbf{U}_n^{+}\big)^{\top}\mathbf{U}_n^{+} = \mathbf{I}$, the formula can be transformed into

$$\mathbf{0} \in \partial\big(\alpha_n \|\mathbf{V}_n^{+}\|_*\big) + \rho_n \left( \mathbf{V}_n^{+} - \big(\mathbf{U}_n^{+}\big)^{\top}\left( \mathbf{X}_{(n)} + \frac{\boldsymbol{\Gamma}_n}{\rho_n} \right) \right). \qquad (20)$$

Obviously, the formula (20) also has to be satisfied by the optimal solution of the following convex problem:

$$\min_{\mathbf{V}_n} \ \alpha_n \left\|\mathbf{V}_n\right\|_* + \frac{\rho_n}{2}\left\| \mathbf{V}_n - \big(\mathbf{U}_n^{+}\big)^{\top}\left( \mathbf{X}_{(n)} + \frac{\boldsymbol{\Gamma}_n}{\rho_n} \right) \right\|_F^2. \qquad (21)$$

Therefore, $\mathbf{V}_n^{+}$ is also the solution to the subproblem (21), which has the explicit solution

$$\mathbf{V}_n^{+} = \mathrm{SVT}_{\alpha_n/\rho_n}\left( \big(\mathbf{U}_n^{+}\big)^{\top}\left( \mathbf{X}_{(n)} + \frac{\boldsymbol{\Gamma}_n}{\rho_n} \right) \right). \qquad (22)$$

Input: The initialized tensor $\mathcal{X}^{0}$, index set $\Omega$, parameters $\alpha_n$, $\lambda_1$, $\lambda_2$, $\rho_n$, $\mu$, $\nu$, maximum iteration number $K$.
Output: The completed tensor $\mathcal{X}$.
1: initialize: $\mathbf{U}_n^{0}$, $\mathbf{V}_n^{0}$, $\mathbf{N}^{0}$, $\mathbf{M}^{0}$, $\boldsymbol{\Gamma}_n^{0} = \mathbf{0}$, $\boldsymbol{\Lambda}^{0} = \mathbf{0}$, $\boldsymbol{\Pi}^{0} = \mathbf{0}$, $k = 0$
2: while not converged and $k < K$ do
3:  for $n = 1$ to 3 do
4:   update $\mathbf{U}_n^{k+1}$ via (17)
5:   update $\mathbf{V}_n^{k+1}$ via (22)
6:  end for
7:  update $\mathbf{N}^{k+1}$ via (24)
8:  update $\mathbf{M}^{k+1}$ via (27)
9:  update $\mathcal{X}^{k+1}$ via (29)
10: update $\boldsymbol{\Gamma}_n^{k+1}$, $\boldsymbol{\Lambda}^{k+1}$, $\boldsymbol{\Pi}^{k+1}$ via (30); $k \leftarrow k + 1$
11: end while
12: return $\mathcal{X} = \mathcal{X}^{k}$

Here, $\mathrm{SVT}_{\tau}(\cdot)$ is the singular value thresholding operator defined by $\mathrm{SVT}_{\tau}(\mathbf{A}) = \mathbf{B}\,\mathrm{diag}\big(\max(\sigma_i - \tau, 0)\big)\,\mathbf{C}^{\top}$, where the SVD of $\mathbf{A}$ is given by $\mathbf{A} = \mathbf{B}\,\mathrm{diag}(\sigma_i)\,\mathbf{C}^{\top}$. The complexity of computing $\mathbf{V}_n$ is $O(rI^3)$.
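A short sketch of this operator (illustrative numpy):

```python
import numpy as np

# Singular value thresholding: the proximal operator of the nuclear norm,
# used in the V_n update (22).
def svt(A, tau):
    B, s, Ct = np.linalg.svd(A, full_matrices=False)
    return (B * np.maximum(s - tau, 0.0)) @ Ct   # shrink singular values by tau

# Usage in (22), with illustrative names:
# V_n = svt(U_n.T @ (X_n + Gamma_n / rho_n), alpha_n / rho_n)
```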

The third subproblem, about $\mathbf{N}$ and related to the framelet regularization, can be written as

$$\mathbf{N}^{+} = \arg\min_{\mathbf{N}} \ \lambda_1 \left\|\mathbf{N}\right\|_1 + \frac{\mu}{2}\left\| \mathbf{N} - \mathbf{W}\mathbf{X}_{(3)}^{\top} + \frac{\boldsymbol{\Lambda}}{\mu} \right\|_F^2. \qquad (23)$$

This problem has the explicit optimal solution

$$\mathbf{N}^{+} = \mathrm{soft}_{\lambda_1/\mu}\left( \mathbf{W}\mathbf{X}_{(3)}^{\top} - \frac{\boldsymbol{\Lambda}}{\mu} \right), \qquad (24)$$

where $\mathrm{soft}_{\tau}(\cdot)$ is a soft-thresholding operator written as

$$\big[\mathrm{soft}_{\tau}(\mathbf{A})\big]_{ij} = \mathrm{sign}(a_{ij}) \max\big(|a_{ij}| - \tau, 0\big). \qquad (25)$$

The cost of the soft-thresholding is proportional to the number of entries. Here, $l$ denotes the framelet level and $s$ denotes the filter number; the cost of calculating $\mathbf{N}$ is then $O(slI^3)$.
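A matching sketch of the entrywise shrinkage (25), which yields both (24) and, below, (27):

```python
import numpy as np

# Soft-thresholding: the proximal operator of the l1-norm, applied entrywise.
def soft(A, tau):
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

# Usage with illustrative names (W and D as explicit matrices, cf. Sections
# 2.3 and 3.1):
# N = soft(W @ X3.T - Lam / mu, lam1 / mu)   # framelet update (24)
# M = soft(D @ X3 - Pi / nu, lam2 / nu)      # TV update (27)
```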

The fourth subproblem, about $\mathbf{M}$ and concerning the total variation regularization, can be calculated as follows:

$$\mathbf{M}^{+} = \arg\min_{\mathbf{M}} \ \lambda_2 \left\|\mathbf{M}\right\|_1 + \frac{\nu}{2}\left\| \mathbf{M} - \mathbf{D}\mathbf{X}_{(3)} + \frac{\boldsymbol{\Pi}}{\nu} \right\|_F^2, \qquad (26)$$

which has the explicit solution

$$\mathbf{M}^{+} = \mathrm{soft}_{\lambda_2/\nu}\left( \mathbf{D}\mathbf{X}_{(3)} - \frac{\boldsymbol{\Pi}}{\nu} \right). \qquad (27)$$

The cost of computing $\mathbf{M}$ is $O(I^3)$.

The final subproblem, optimizing $\mathcal{X}$, can be formulated as

$$\mathcal{X}^{+} = \arg\min_{\mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T})} \ \sum_{n=1}^{3} \frac{\rho_n}{2}\left\| \mathbf{X}_{(n)} - \mathbf{U}_n^{+}\mathbf{V}_n^{+} + \frac{\boldsymbol{\Gamma}_n}{\rho_n} \right\|_F^2 + \frac{\mu}{2}\left\| \mathbf{N}^{+} - \mathbf{W}\mathbf{X}_{(3)}^{\top} + \frac{\boldsymbol{\Lambda}}{\mu} \right\|_F^2 + \frac{\nu}{2}\left\| \mathbf{M}^{+} - \mathbf{D}\mathbf{X}_{(3)} + \frac{\boldsymbol{\Pi}}{\nu} \right\|_F^2. \qquad (28)$$

Setting the gradient of the objective with respect to $\mathbf{X}_{(3)}$ to zero (and using $\mathbf{W}^{\top}\mathbf{W} = \mathbf{I}$), we can update $\mathcal{X}$ as follows:

$$\hat{\mathbf{X}}_{(3)} = \left( \Big(\sum_{n=1}^{3}\rho_n + \mu\Big)\mathbf{I} + \nu\,\mathbf{D}^{\top}\mathbf{D} \right)^{-1} \left( \sum_{n=1}^{3} \rho_n \Big[ \mathrm{fold}_n\Big( \mathbf{U}_n^{+}\mathbf{V}_n^{+} - \frac{\boldsymbol{\Gamma}_n}{\rho_n} \Big) \Big]_{(3)} + \mu\left( \mathbf{W}^{\top}\Big( \mathbf{N}^{+} + \frac{\boldsymbol{\Lambda}}{\mu} \Big) \right)^{\top} + \nu\,\mathbf{D}^{\top}\Big( \mathbf{M}^{+} + \frac{\boldsymbol{\Pi}}{\nu} \Big) \right), \quad \mathcal{X}^{+} = \mathcal{P}_{\Omega^{c}}\big(\hat{\mathcal{X}}\big) + \mathcal{P}_{\Omega}(\mathcal{T}), \qquad (29)$$

where $\Omega^{c}$ denotes the complement of $\Omega$. The cost of calculating $\mathcal{X}$ is $O\big((r + sl)I^3\big)$.

According to ADMM, the multipliers $\boldsymbol{\Gamma}_n$, $\boldsymbol{\Lambda}$, and $\boldsymbol{\Pi}$ are updated as

$$\boldsymbol{\Gamma}_n^{+} = \boldsymbol{\Gamma}_n + \rho_n\big( \mathbf{X}_{(n)}^{+} - \mathbf{U}_n^{+}\mathbf{V}_n^{+} \big), \quad \boldsymbol{\Lambda}^{+} = \boldsymbol{\Lambda} + \mu\big( \mathbf{N}^{+} - \mathbf{W}(\mathbf{X}_{(3)}^{+})^{\top} \big), \quad \boldsymbol{\Pi}^{+} = \boldsymbol{\Pi} + \nu\big( \mathbf{M}^{+} - \mathbf{D}\mathbf{X}_{(3)}^{+} \big). \qquad (30)$$

To summarize, the proposed iterative ADMM-based algorithm for solving the SMF model in (11) is outlined in Algorithm 2. The overall computing complexity of updating $\mathbf{U}_n$, $\mathbf{V}_n$, $\mathbf{N}$, $\mathbf{M}$, and $\mathcal{X}$ at each iteration is $O\big((r + sl)I^3\big)$.

4. Numerical Experiments

In this part, several experiments are conducted on one synthetic dataset and several visual datasets to demonstrate the performance of the proposed LTC scheme. We also compare it with four other state-of-the-art algorithms: MFTC [13], HaLRTC [12], LRTV-PDS [21], and SPC [22]. Here, we note that MFTC and HaLRTC only use the Tucker-rank to exploit the low-rank prior, while LRTV-PDS and SPC use the Tucker-rank and the CP-rank, respectively. Besides, the latter two are expected to achieve better results on visual data since they include a TV term as the smoothness constraint. To measure the estimation accuracy of the completed tensor data, the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) are employed as quality metrics. The PSNR is defined as $\mathrm{PSNR} = 10\log_{10}\big(\mathrm{MAX}^2/\mathrm{MSE}\big)$. Here, MAX is the maximum value of the whole data, and the MSE is formulated as $\mathrm{MSE} = \|\mathcal{X} - \mathcal{T}\|_F^2/(I_1 I_2 I_3)$. The SSIM estimates the resemblance of images in luminance, contrast, and structure. The relative change is defined as $\mathrm{RelCha} = \|\mathcal{X}^{k+1} - \mathcal{X}^{k}\|_F / \|\mathcal{X}^{k}\|_F$, which is used as the stopping criterion of all five algorithms. In the following experiments, if the relative change falls below the tolerance during the iterations, we terminate the iterative process and output $\mathcal{X}^{k+1}$ as the recovered tensor.
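For reference, the metrics and stopping rule can be computed as follows (a minimal numpy sketch with illustrative names):

```python
import numpy as np

# Quality metric and stopping criterion used in the experiments.
def psnr(X_hat, X_true):
    """PSNR in dB; MAX is taken as the peak value of the ground truth."""
    mse = np.mean((X_hat - X_true) ** 2)
    return 10.0 * np.log10(X_true.max() ** 2 / mse)

def rel_change(X_new, X_old):
    """RelCha between consecutive iterates; stop when below the tolerance."""
    return np.linalg.norm(X_new - X_old) / np.linalg.norm(X_old)
```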

4.1. Synthetic Data Completion

Here, we apply our LTC scheme and MFTC to synthetic data.

Figure 1 displays a visualization of a synthetic tensor and the corresponding completion results achieved by MFTC and LTC. We use four Gaussian functions to generate the synthetic 3rd-order tensor and obtain the incomplete tensor by randomly removing 80% of the voxels. Comparing the results of MFTC and LTC, it is obvious that the low-rank assumption alone is not enough for highly missing data, and that smoothness constraints can effectively improve the performance of tensor completion.
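A hedged sketch of such a synthetic test case (the exact centers and widths of the four Gaussian functions are not specified here, so the values below are illustrative):

```python
import numpy as np

# Build a smooth synthetic 3rd-order tensor as a sum of four Gaussian bumps
# and remove 80% of the voxels at random.
rng = np.random.default_rng(42)
n = 30
i, j, k = np.meshgrid(np.arange(n), np.arange(n), np.arange(n), indexing="ij")

T = np.zeros((n, n, n))
for cx, cy, cz in [(8, 8, 8), (8, 22, 15), (22, 8, 22), (22, 22, 8)]:  # illustrative centers
    T += np.exp(-((i - cx) ** 2 + (j - cy) ** 2 + (k - cz) ** 2) / (2 * 5.0 ** 2))

mask = rng.random(T.shape) < 0.2      # keep 20% of voxels (80% missing)
T_obs = T * mask                      # the incomplete input tensor
```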

4.2. Color Images Completion

In this part, we compare the performance of MFTC, HaLRTC, PDS, SPC, and the proposed LTC on four color images: "Lena," "Giant," "Pepper," and "Airplane." The incomplete images are generated by randomly deleting pixels. In the color image completion experiments, the missing ratios (MRs) are set to 70%, 80%, and 90%. The quality metrics of the images recovered by the five methods are listed in Table 1. From Table 1, it is obvious that LTC achieves high metric values and performs better than the other LRTC algorithms except SPC.

Figure 2 presents the results for the color image "Lena" recovered by MFTC, HaLRTC, PDS, SPC, and LTC with different missing ratios. The MRs are set to 70%, 80%, and 90%. From Figure 2, it can be observed that the quality of the images completed by MFTC and HaLRTC degrades as the missing ratio increases, while the results obtained by PDS, SPC, and LTC still retain most of the information of the original image.

Figure 3 presents the recovered color images "Lena," "Giant," "Pepper," and "Airplane" completed by MFTC, HaLRTC, PDS, SPC, and LTC under the same MR. Obviously, we can see that the methods with smoothness constraints (PDS, SPC, and LTC) perform better than those considering only the low-rank prior (MFTC, HaLRTC) across the different color images.

Figure 4 shows the performance comparison of MFTC, HaLRTC, PDS, SPC, and LTC on the color image "Lena" in terms of PSNR, SSIM, and running time. The MRs are set to 50%, 60%, 70%, 80%, and 90%, respectively. We can see that PDS, SPC, and LTC are superior to MFTC and HaLRTC in terms of PSNR and SSIM, and the advantage of the methods with smoothness constraints becomes more prominent as the MR increases. While ensuring the performance, LTC achieves about an 80% running-time reduction compared to PDS and SPC, which implies that introducing matrix factorization indeed improves the efficiency of tensor completion.

In order to analyze the effect of the parameters $\lambda_1$ and $\lambda_2$, we evaluate the performance of the recovered color image "Lena" by LTC. Figure 5 shows the change of the PSNR values for different values of $\lambda_1$ and $\lambda_2$. It is observed that both $\lambda_1$ and $\lambda_2$ evidently affect the completion results of our proposed LTC scheme. Additionally, LTC achieves higher PSNR values for a suitable choice of $\lambda_1$ and $\lambda_2$.

To compare the convergence behavior of LTC with and without tensor initialization, we display the relative change (RelCha) values for the recovered color image "Lena" in Figure 6. It is observed that the RelCha value of LTC with tensor initialization decreases faster than that of LTC without tensor initialization, which confirms the effectiveness of the proposed tensor initialization method in accelerating convergence.

4.3. Video Completion

Here, two videos, "suzie" and "hall," are tested, and only the first 30 frames of each video are used for tensor completion. The size of the videos for LRTC is thus $144 \times 176 \times 30$.

Figure 7 shows the 10th frame of the two videos recovered by the five tensor completion methods with MRs of 70% and 80%, which presents the visual results of the recovered videos. Obviously, the videos recovered by PDS, SPC, and LTC are visually superior to those of MFTC and HaLRTC. In particular, SPC and LTC complete the videos better by preserving details well, while some parts of the video recovered by PDS remain blurry.

Figure 8 presents the PSNR and SSIM values of each frame of the completed video "suzie" by the five algorithms with MRs of 60%, 70%, and 80%, which shows the performance comparison more directly. From the curves in Figure 8, we can conclude that SPC and LTC are better than the other algorithms in both stability and effectiveness.

Figure 9 shows the performance comparison of tensor completion for the video "suzie" by the proposed LTC and the four competing algorithms in terms of PSNR, SSIM, and running time. The MRs are set to 40%, 50%, 60%, 70%, and 80%, respectively. Apparently, PDS, SPC, and LTC perform better than MFTC and HaLRTC when the MR is high. While ensuring the performance, LTC achieves about 90% and 50% running-time reduction compared to SPC and PDS, respectively, which means that introducing matrix factorization can effectively reduce the running time of recovering incomplete videos.

5. Conclusion

In this paper, we propose a low-complexity tensor completion (LTC) scheme. Our model takes advantage of SNN to exploit the low-rankness, TV and framelet regularization to recover details and characterize the smoothness, and matrix factorization to improve the efficiency. Besides, a novel tensor initialization method is proposed to accelerate the convergence. Moreover, an efficient ADMM-based algorithm is developed to solve the SMF model. Numerical experiments on synthetic and real-world data demonstrate both the efficiency and the effectiveness of the proposed LTC scheme. Smoothness constraints for higher-dimensional data will be a direction of future research in tensor completion.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the China NSF Grants (61971217, 61971218, and 61631020), Jiangsu NSF Grant (BK20200444), the fund of Sonar Technology Key Laboratory (Research on the theory and algorithm of signal processing for two-dimensional underwater acoustics coprime array) and the fund of Sonar Technology Key Laboratory (Range estimation and location technology of passive target via multiple array combination), and Jiangsu Key Research and Development Project (BE2020101).