Research Article | Open Access
Special Issue: Array Signal Processing with Imperfect Scenarios

Ruofei Zhou, Gang Wang, Bo Li, Jinlong Wang, Tianzhu Liu, Chungang Liu, "Key-Frame Detection and Super-Resolution of Hyperspectral Video via Sparse-Based Cumulative Tensor Factorization", Mathematical Problems in Engineering, vol. 2020, Article ID 9548749, 20 pages, 2020. https://doi.org/10.1155/2020/9548749

Key-Frame Detection and Super-Resolution of Hyperspectral Video via Sparse-Based Cumulative Tensor Factorization

Guest Editor: Liangtian Wan
Received: 11 May 2020
Revised: 25 May 2020
Accepted: 05 Jun 2020
Published: 14 Jul 2020

Abstract

Thanks to the rapid development of hyperspectral sensors, hyperspectral videos (HSV) can now be collected with high temporal and spectral resolutions and utilized to handle invisible dynamic monitoring missions, such as chemical gas plume tracking. However, using such sequential large-scale data effectively is challenging, because processing these data directly imposes huge demands on computation and memory. This paper presents a key-frame and target-detecting algorithm based on cumulative tensor CANDECOMP/PARAFAC (CP) factorization (CTCF) to select the frames where the target shows up, and a novel super-resolution (SR) method using sparse-based tensor Tucker factorization (STTF) to improve the spatial resolution. In the CTCF method, the HSV sequence is treated as a cumulative tensor and the correlation of adjacent frames is exploited by applying CP tensor approximation. In the proposed STTF-based SR method, we consider each HSV frame as a third-order tensor; the HSV frame super-resolution problem is then transformed into estimations of the dictionaries along three dimensions and estimation of the core tensor. To promote sparse core tensors, a regularizer is incorporated to model the high spatial-spectral correlations. The estimations of the core tensor and the dictionaries along three dimensions are formulated as sparse-based Tucker factorizations of each HSV frame. Experimental results on a real HSV data set demonstrate the superiority of the proposed CTCF and STTF algorithms over comparative state-of-the-art target detection and SR approaches.

1. Introduction

Hyperspectral imaging has been one of the most popular research fields due to its ability to identify materials from very high spectral resolution and coverage. In the last decade, researchers have focused on the processing and application of hyperspectral images (HSIs), such as denoising [1, 2], feature extraction [3, 4], classification [5–11], detection [12–14], and super-resolution (fusion) [15–18]. In this section, research in the latter two fields, which are related to this paper, is briefly introduced.

Basically, target detection is a kind of binary classifier with the purpose of labeling every image pixel as a target or background. In HSIs, pixels with a significantly different spectral signature from their neighboring background pixels are defined as spectral anomalies. Anomaly detectors are statistical or pattern recognition methods used to detect distinct pixels that differ from the background. It is worth mentioning that, in spectral anomaly detection approaches [19–22], such as the Reed-Xiaoli (RX) algorithm [23], no prior information of the target spectral signature is assumed or used. However, we focus on the detection of invisible gas plumes in this paper, and the prior knowledge of the desired target's spectral characteristics is assumed to be known. In such cases, signature-based target detection algorithms are presented instead of anomaly detection. In these algorithms, the spectral characteristics of the target can be represented by a target subspace or a single target spectrum [24]. Likewise, the characteristics of the background can be statistically expressed by a Gaussian distribution or a subspace defining the local or whole background statistics. As for this category, the matched subspace detector (MSD) method [25] is one of the most typical algorithms. In the MSD, the target pixel vectors are represented by a linear combination of the target spectral signature and the background spectral signature, which stand for the subspace target spectra and the subspace background spectra, respectively. Then, the generalized likelihood ratio test (GLRT) is applied, using projection matrices associated with the background subspace and the target-and-background subspace. At last, the comparison between the output of GLRT and a preset threshold makes a final decision about whether the target is absent or present. At the subpixel level, a single pixel may contain several distinct pure materials (endmembers); such pixels are known as mixed pixels.
The presence of mixed pixels is a tough problem caused by the low spatial resolution of HSIs. Accordingly, some unmixing approaches [26–28] have been designed to compute fractional abundances of endmembers. In [29], a hyperspectral unmixing approach based on constrained matrix factorization (CMF) was proposed. Unlike conventional methods, each column vector of the endmember matrix is represented as a nonnegative linear combination of pixel spectra. After the endmember matrix and the corresponding fractional abundance matrix are obtained by solving optimization problems, the abundance map of the target endmember shows the detection result.

As mentioned before, HSIs often suffer from low spatial resolution. To acquire an HSI, the number of sun photons in each spectral band has to be greater than a minimum value, and the number of spectral bands in an HSI is so large that the spatial resolution has to be sacrificed. Therefore, super-resolution (SR) techniques have aroused great interest in the last decade. Generally, the SR methods for HSI can be classified into four categories: Bayesian [30], component analysis [31], deep learning [32], and sparse representation. Due to the limited length of this paper, we focus on the introduction of sparse-based algorithms. In such HSI super-resolution schemes, images are expressed by dictionaries and corresponding sparse coefficients. On the basis of the spatial-spectral sparsity in the HSIs, the dictionaries and sparse coefficients are estimated jointly [33]. Huang et al. [34] introduced a fusion method for multispectral images (MSIs) with different spectral and spatial resolutions based on sparse matrix factorization. Akhtar et al. [35] presented an MSI-HSI fusion approach using sparse coding and Bayesian dictionary learning. Moreover, some algorithms based on matrix factorization [36–38] or unmixing [39] can also be regarded as sparse representation schemes because the source images are decomposed into some basis and the corresponding coefficients. Yokoya et al. proposed a coupled nonnegative matrix factorization (CNMF) [40] algorithm, where unmixing techniques are employed to yield the endmember matrices and the high-resolution (HR) abundance matrices of the HSI. In [41], Lanaras et al. suggested a joint scheme to solve the spectral unmixing problems. In [42], Zhang et al. fused the low-resolution (LR) HSI and HR-MSI based on group spectral embedding and low-rank factorization.

However, the matrix factorization based schemes cannot fully exploit the spatial-spectral correlations of the HSIs. It is believed that considering HSIs as tensors is better because an HSI can be naturally expressed as a third-order tensor. In this paper, a detection algorithm based on cumulative tensor CP factorization (CTCF) is proposed. The sequential HSV data are expressed as a four-dimensional (4D) cumulative tensor; factor matrices are obtained by decomposing the original 4D tensor using CP factorization. When a new frame arrives and is added to the time dimension of the original tensor, this 4D cumulative tensor is updated together with the factor matrices. Consequently, a CP tensor approximation of the new frame is computed from the updated factor matrices and the fitness between the new frame and the approximation is calculated. After comparing the fitness to a preset threshold, we decide whether the new frame continues to be used to update the cumulative tensor or the new frame is a key-frame where the target presents. The CTCF-based method exploits not only the spatial-spectral correlations of the HSIs by applying a tensor model, but also the temporal correlation between adjacent frames of the HSV.

On the other hand, tensor-based analysis has also been widely used in HSI super-resolution [43–45]. To the best of our knowledge, most of the SR algorithms enhance spatial resolution by fusing a high-resolution MSI (HR-MSI) and a low-resolution HSI (LR-HSI) of the same scene. Unfortunately, this is less practical in real applications. In some situations, the LR-HSI is the only data available. In this paper, we suggest an SR algorithm using sparse-based tensor Tucker factorization (STTF). Inspired by the Tucker factorization and its related works, the HSV frames are represented as third-order tensors, which are approximated by the multiplication of the dictionaries along three dimensions (i.e., the dictionaries of the height mode, the width mode, and the spectral mode: they are named "three modes dictionaries" for short in the rest of this paper) and a core tensor. Then, the problem of SR is transformed into the estimations of the three modes dictionaries and estimation of the core tensor. Specifically, the spatial information is represented by the height mode dictionary and the width mode dictionary, the spectral information is represented by the spectral mode dictionary, and the correlations among the three modes dictionaries are modeled by the core tensor. HSIs are generally self-similar, so a sparse prior can be imposed on the core tensor; then, the estimations of the core tensor and three modes dictionaries are formulated as the STTF of the LR and HR HSV frames. In the iterations of STTF, the core tensor and dictionaries are all updated and accurate estimates are yielded when convergence is achieved.

The remainder of this paper is organized as follows. Section 2 presents the materials and methods, including the basic notations and preliminaries of tensor and tensor factorization, the proposed CTCF approach for key-frame detection, and the proposed STTF method for key-frame super-resolution problem. In Section 3, experimental results on real HSV and the discussions are given. The paper is summarized in Section 4 with ideas for future work along the path presented here.

2. Materials and Methods

2.1. Tensor Notations and Preliminaries
2.1.1. Tensor Notations

In this paper, vectors are denoted by boldface lowercase letters (e.g., $\mathbf{x}$), matrices are denoted by boldface capital letters (e.g., $\mathbf{X}$), and tensors are denoted by bold Euler script letters (e.g., $\mathcal{X}$). Generally, a tensor is a kind of multidimensional array, denoted by $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$. Here, tensor $\mathcal{X}$ is an $N$th-order tensor and $I_n$ is the dimension of the $n$th mode. Obviously, vectors are first-order tensors and matrices are second-order tensors. A mode-$n$ fiber is a vector yielded from tensor $\mathcal{X}$ by varying the index $i_n$ with the other indexes fixed. The mode-$n$ unfolding matrix of tensor $\mathcal{X}$ is generated by placing all the mode-$n$ fibers in a matrix as columns, denoted by $\mathbf{X}_{(n)} \in \mathbb{R}^{I_n \times \prod_{m \neq n} I_m}$.
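As a concrete illustration, mode-$n$ unfolding can be sketched in a few lines of NumPy. This is an illustrative helper, not code from the paper; the column ordering follows NumPy's C-order reshape, and unfolding conventions differ across references.

```python
import numpy as np

def unfold(tensor, mode):
    # Move the chosen mode to the front, then flatten the rest:
    # the mode-n fibers become the columns of the result.
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

X = np.arange(24).reshape(2, 3, 4)  # a third-order tensor
print(unfold(X, 0).shape)           # (2, 12)
print(unfold(X, 1).shape)           # (3, 8)
print(unfold(X, 2).shape)           # (4, 6)
```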

An important calculation between a tensor and a matrix is the $n$-mode product, which is defined as
$$\mathcal{Y} = \mathcal{X} \times_n \mathbf{U}, \tag{1}$$
where $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_N}$ and $\mathbf{U} \in \mathbb{R}^{J \times I_n}$, so that $\mathcal{Y} \in \mathbb{R}^{I_1 \times \cdots \times I_{n-1} \times J \times I_{n+1} \times \cdots \times I_N}$. The elements of $\mathcal{X}$ are denoted by $x_{i_1 i_2 \cdots i_N}$, so the elements of $\mathcal{Y}$ are computed by
$$y_{i_1 \cdots i_{n-1}\, j\, i_{n+1} \cdots i_N} = \sum_{i_n = 1}^{I_n} x_{i_1 i_2 \cdots i_N}\, u_{j i_n}. \tag{2}$$

Given the definition of the $n$-mode product, we can obtain
$$\mathcal{Y} = \mathcal{X} \times_n \mathbf{U} \;\Longleftrightarrow\; \mathbf{Y}_{(n)} = \mathbf{U}\,\mathbf{X}_{(n)}. \tag{3}$$

For continuous multiplication of a tensor and matrices in distinct modes, the result is not affected by the multiplication order, described by
$$\mathcal{X} \times_m \mathbf{A} \times_n \mathbf{B} = \mathcal{X} \times_n \mathbf{B} \times_m \mathbf{A} \quad (m \neq n). \tag{4}$$

If the modes are equivalent, equation (4) is transformed into
$$\mathcal{X} \times_n \mathbf{A} \times_n \mathbf{B} = \mathcal{X} \times_n (\mathbf{B}\mathbf{A}). \tag{5}$$

Suppose that $\{\mathbf{U}^{(n)}\}_{n=1}^{N}$ is a collection of matrices; we define tensor $\mathcal{Y}$ as
$$\mathcal{Y} = \mathcal{X} \times_1 \mathbf{U}^{(1)} \times_2 \mathbf{U}^{(2)} \times \cdots \times_N \mathbf{U}^{(N)}. \tag{6}$$

The matricization form of equation (6) is presented by
$$\operatorname{vec}(\mathcal{Y}) = \left( \mathbf{U}^{(N)} \otimes \cdots \otimes \mathbf{U}^{(2)} \otimes \mathbf{U}^{(1)} \right) \operatorname{vec}(\mathcal{X}), \tag{7}$$
where $\operatorname{vec}(\mathcal{Y})$ and $\operatorname{vec}(\mathcal{X})$ are vectors yielded by arranging the mode-1 fibers of the tensors $\mathcal{Y}$ and $\mathcal{X}$. The Kronecker product is denoted by the symbol "$\otimes$".

Moreover, given the tensor $\mathcal{X}$, $\|\mathcal{X}\|_0$ represents the $\ell_0$-norm, which equals the number of nonzero elements of $\mathcal{X}$; $\|\mathcal{X}\|_1$ denotes the $\ell_1$-norm; and $\|\mathcal{X}\|_F$ denotes the Frobenius norm.

The definition of a rank-one tensor is introduced at last. The $N$th-order tensor $\mathcal{X}$ is rank-one if it can be written as the outer product of $N$ vectors, i.e., $\mathcal{X} = \mathbf{a}^{(1)} \circ \mathbf{a}^{(2)} \circ \cdots \circ \mathbf{a}^{(N)}$. The symbol "$\circ$" denotes the vector outer product [46].

2.1.2. Tensor Factorizations

CANDECOMP/PARAFAC (CP) factorization decomposes a tensor into a sum of component rank-one tensors [47]. For example, given a third-order tensor $\mathcal{X} \in \mathbb{R}^{I \times J \times K}$, we may formulate it as
$$\mathcal{X} \approx \sum_{r=1}^{R} \mathbf{a}_r \circ \mathbf{b}_r \circ \mathbf{c}_r, \tag{8}$$
where $R$ is a positive integer and $\mathbf{a}_r \in \mathbb{R}^{I}$, $\mathbf{b}_r \in \mathbb{R}^{J}$, and $\mathbf{c}_r \in \mathbb{R}^{K}$ ($r = 1, 2, \ldots, R$). The elements of tensor $\mathcal{X}$ can be computed by
$$x_{ijk} \approx \sum_{r=1}^{R} a_{ir}\, b_{jr}\, c_{kr}. \tag{9}$$

CP factorization is illustrated in Figure 1.

The factorization result can be expressed by factor matrices of three dimensions. Factor matrices refer to the combination of the vectors from the rank-one components; i.e.,
$$\mathbf{A} = [\mathbf{a}_1 \; \mathbf{a}_2 \; \cdots \; \mathbf{a}_R], \quad \mathbf{B} = [\mathbf{b}_1 \; \mathbf{b}_2 \; \cdots \; \mathbf{b}_R], \quad \mathbf{C} = [\mathbf{c}_1 \; \mathbf{c}_2 \; \cdots \; \mathbf{c}_R]. \tag{10}$$

Following [48], the CP model can be concisely represented as
$$\mathcal{X} \approx [\![\mathbf{A}, \mathbf{B}, \mathbf{C}]\!]. \tag{11}$$

On the basis of the factor matrices, the mode-$n$ unfolding matrices ($n = 1, 2, 3$) of $\mathcal{X}$ can be represented as
$$\mathbf{X}_{(1)} \approx \mathbf{A}(\mathbf{C} \odot \mathbf{B})^{\mathsf T}, \quad \mathbf{X}_{(2)} \approx \mathbf{B}(\mathbf{C} \odot \mathbf{A})^{\mathsf T}, \quad \mathbf{X}_{(3)} \approx \mathbf{C}(\mathbf{B} \odot \mathbf{A})^{\mathsf T}, \tag{12}$$
where the symbol "$\odot$" denotes the Khatri-Rao product [49]. In this way, loss functions can be modeled as the approximation of the mode-$n$ unfolding matrices; then the factor matrices of the CP factorization can be obtained by solving the corresponding optimization problem.
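The alternating scheme implied by equation (12) can be sketched as follows. This is an illustrative NumPy sketch, not the paper's implementation; the helper names are ours, and the unfolding column ordering follows NumPy's C-order reshape, so the Khatri-Rao arguments appear in a different order than in (12) while the model is the same.

```python
import numpy as np

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(U, V):
    # Column-wise Kronecker (Khatri-Rao) product.
    r = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, r)

def cp_als(X, rank, n_iter=200, seed=0):
    # Alternating least squares: each factor solves a linear
    # least-squares problem with the other two factors fixed.
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((s, rank)) for s in X.shape)
    for _ in range(n_iter):
        A = unfold(X, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(X, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(X, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

For a noiseless low-rank tensor this typically converges to a near-exact fit; production code would rather use a dedicated library such as TensorLy.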

Tucker factorization is another popular tensor decomposition approach [50]. It decomposes a tensor into a core tensor multiplied by a matrix along each mode. Thus, in the same case as above where $\mathcal{X} \in \mathbb{R}^{I \times J \times K}$, the factorization can be described as
$$\mathcal{X} \approx \mathcal{G} \times_1 \mathbf{A} \times_2 \mathbf{B} \times_3 \mathbf{C}, \tag{13}$$
where $\mathbf{A} \in \mathbb{R}^{I \times P}$, $\mathbf{B} \in \mathbb{R}^{J \times Q}$, and $\mathbf{C} \in \mathbb{R}^{K \times R}$ are factor matrices which can be regarded as the principal components in each mode. Therefore, Tucker factorization is a form of higher-order principal component analysis (PCA). Tensor $\mathcal{G} \in \mathbb{R}^{P \times Q \times R}$ is the core tensor and its elements stand for the correlation level between the different components. Similar to (11), the Tucker model can be concisely represented by $\mathcal{X} \approx [\![\mathcal{G}; \mathbf{A}, \mathbf{B}, \mathbf{C}]\!]$. Elementwise, equation (13) can be represented as
$$x_{ijk} \approx \sum_{p=1}^{P} \sum_{q=1}^{Q} \sum_{r=1}^{R} g_{pqr}\, a_{ip}\, b_{jq}\, c_{kr}. \tag{14}$$

The Tucker factorization is illustrated in Figure 2.
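A Tucker model of the form (13) can be obtained, for instance, by a truncated higher-order SVD. The sketch below is a generic NumPy illustration under our own helper names, not the paper's code.

```python
import numpy as np

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def mode_dot(X, M, mode):
    # n-mode product: multiply matrix M into the given mode of X.
    new_shape = [M.shape[0]] + [s for i, s in enumerate(X.shape) if i != mode]
    return np.moveaxis((M @ unfold(X, mode)).reshape(new_shape), 0, mode)

def hosvd(X, ranks):
    # Factor matrices: leading left singular vectors of each unfolding.
    Us = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(X, mode), full_matrices=False)
        Us.append(U[:, :r])
    # Core tensor: project X onto the factor matrices.
    G = X
    for mode, U in enumerate(Us):
        G = mode_dot(G, U.T, mode)
    return G, Us

X = np.random.default_rng(0).standard_normal((4, 5, 6))
G, Us = hosvd(X, (4, 5, 6))   # full ranks: factorization is exact
X_rec = mode_dot(mode_dot(mode_dot(G, Us[0], 0), Us[1], 1), Us[2], 2)
```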

2.2. The Proposed CTCF-Based Detection Method

In this subsection, the optimization problem of updating factor matrix is presented, followed with the proposed cumulative tensor CP factorization (CTCF) of third-order tensors. It is then extended to Nth-order tensors. The CTCF-based detection method is described in the end of this subsection with its flowchart shown in Figure 3.

2.2.1. CP Tensor Approximation by Factor Matrices

Similar to equation (12), the mode-$n$ unfolding matrix of $\mathcal{X}$ can be approximated by factor matrices; i.e.,
$$\mathbf{X}_{(n)} \approx \mathbf{A}^{(n)} \left( \mathbf{A}^{(N)} \odot \cdots \odot \mathbf{A}^{(n+1)} \odot \mathbf{A}^{(n-1)} \odot \cdots \odot \mathbf{A}^{(1)} \right)^{\mathsf T}, \tag{15}$$
where the factor matrices $\mathbf{A}^{(1)}, \ldots, \mathbf{A}^{(N)}$ are obtained by CP factorization. The corresponding loss function is
$$L\left( \mathbf{A}^{(1)}, \ldots, \mathbf{A}^{(N)} \right) = \left\| \mathbf{X}_{(n)} - \mathbf{A}^{(n)} \left( \mathbf{A}^{(N)} \odot \cdots \odot \mathbf{A}^{(n+1)} \odot \mathbf{A}^{(n-1)} \odot \cdots \odot \mathbf{A}^{(1)} \right)^{\mathsf T} \right\|_F^2. \tag{16}$$

The Alternating Least Squares (ALS) algorithm is often applied to obtain the factor matrices by solving the following optimization problem:
$$\mathbf{A}^{(n)} = \arg\min_{\mathbf{A}^{(n)}} \left\| \mathbf{X}_{(n)} - \mathbf{A}^{(n)} \left( \mathbf{A}^{(N)} \odot \cdots \odot \mathbf{A}^{(n+1)} \odot \mathbf{A}^{(n-1)} \odot \cdots \odot \mathbf{A}^{(1)} \right)^{\mathsf T} \right\|_F^2, \quad n = 1, \ldots, N. \tag{17}$$

When the tensor is updated, an approximation of the new tensor can be computed from the updated factor matrices, which are obtained by solving equation (17).

2.2.2. CTCF of Third-Order Tensor

Generally, an image is a second-order tensor; sequential images then form a third-order tensor, i.e., a video, adding a temporal dimension to two spatial dimensions. When a new video frame arrives and is added to the time dimension of the original tensor, the result is defined as a three-dimensional (3D) cumulative tensor. As the number of new frames increases, the 3D cumulative tensor is updated frame by frame.

In conventional CP tensor approximation, whenever a new image frame is added in the time dimension, the ALS algorithm needs to be rerun to approximate the new cumulative tensor, which is a time-consuming process. In addition, the temporal correlation between neighboring frames is not exploited in the decomposition of the cumulative tensor. This paper proposes CTCF to update the CP factorization of the original cumulative tensor, obtain the updated factor matrices, and approximate the new frame.

Given an original 3D cumulative tensor $\mathcal{X} \in \mathbb{R}^{I \times J \times T}$, the result of CP factorization is denoted by $\mathcal{X} \approx [\![\mathbf{A}, \mathbf{B}, \mathbf{C}]\!]$. When a new tensor $\mathcal{X}_{\mathrm{new}}$ is added in the time dimension, the updated cumulative tensor is $\tilde{\mathcal{X}}$, of which the CP factorization appears as $\tilde{\mathcal{X}} \approx [\![\tilde{\mathbf{A}}, \tilde{\mathbf{B}}, \tilde{\mathbf{C}}]\!]$. We focus on obtaining $\tilde{\mathbf{A}}$, $\tilde{\mathbf{B}}$, and $\tilde{\mathbf{C}}$ by updating $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$.

The updating process is operated in an alternating way. Firstly, the temporal dimensional factor matrix $\tilde{\mathbf{C}}$ is computed while factor matrices $\mathbf{A}$ and $\mathbf{B}$ are fixed; i.e.,
$$\tilde{\mathbf{C}} = \arg\min_{\tilde{\mathbf{C}}} \left\| \begin{bmatrix} \mathbf{X}_{(3)} \\ \mathbf{X}_{\mathrm{new}(3)} \end{bmatrix} - \tilde{\mathbf{C}} (\mathbf{B} \odot \mathbf{A})^{\mathsf T} \right\|_F^2, \tag{18}$$
where $\tilde{\mathbf{C}}$ is divided into two blocks of rows. Since $\tilde{\mathbf{A}}$ and $\tilde{\mathbf{B}}$ are fixed as $\mathbf{A}$ and $\mathbf{B}$, the first row block of (18) will be minimized if the corresponding block of $\tilde{\mathbf{C}}$ equals $\mathbf{C}$. To minimize the second row block, according to (12), the optimal solution is $\mathbf{C}_{\mathrm{new}} = \mathbf{X}_{\mathrm{new}(3)} \left[ (\mathbf{B} \odot \mathbf{A})^{\mathsf T} \right]^{\dagger}$, where the symbol "$\dagger$" denotes the Moore–Penrose pseudoinverse of a matrix [51]. So, $\tilde{\mathbf{C}}$ can be updated by appending $\mathbf{C}_{\mathrm{new}}$, which is represented by
$$\tilde{\mathbf{C}} = \begin{bmatrix} \mathbf{C} \\ \mathbf{C}_{\mathrm{new}} \end{bmatrix}. \tag{19}$$

Secondly, factor matrix $\tilde{\mathbf{A}}$ is computed while factor matrices $\tilde{\mathbf{B}}$ and $\tilde{\mathbf{C}}$ are fixed. Similar to (16), the loss function of estimating $\tilde{\mathbf{A}}$ is written as
$$L(\tilde{\mathbf{A}}) = \left\| \tilde{\mathbf{X}}_{(1)} - \tilde{\mathbf{A}} (\tilde{\mathbf{C}} \odot \mathbf{B})^{\mathsf T} \right\|_F^2. \tag{20}$$

Deriving with respect to $\tilde{\mathbf{A}}$ and setting the derivative to zero, we have
$$\tilde{\mathbf{A}} = \tilde{\mathbf{X}}_{(1)} (\tilde{\mathbf{C}} \odot \mathbf{B}) \left[ (\tilde{\mathbf{C}} \odot \mathbf{B})^{\mathsf T} (\tilde{\mathbf{C}} \odot \mathbf{B}) \right]^{-1}. \tag{21}$$

To simplify equation (21), denote $\mathbf{P} = \tilde{\mathbf{X}}_{(1)} (\tilde{\mathbf{C}} \odot \mathbf{B})$ and $\mathbf{Q} = (\tilde{\mathbf{C}} \odot \mathbf{B})^{\mathsf T} (\tilde{\mathbf{C}} \odot \mathbf{B})$; thus, when $\mathbf{Q}$ is invertible, we have $\tilde{\mathbf{A}} = \mathbf{P}\mathbf{Q}^{-1}$. According to [47], $\mathbf{Q}$ can be rewritten as
$$\mathbf{Q} = \left( \tilde{\mathbf{C}}^{\mathsf T} \tilde{\mathbf{C}} \right) \ast \left( \mathbf{B}^{\mathsf T} \mathbf{B} \right), \tag{22}$$
where "$\ast$" denotes the Hadamard (elementwise) product.

For computing $\mathbf{P}$, we also divide $\tilde{\mathbf{X}}_{(1)}$ and $\tilde{\mathbf{C}} \odot \mathbf{B}$ into two terms; i.e.,
$$\mathbf{P} = \mathbf{X}_{(1)} (\mathbf{C} \odot \mathbf{B}) + \mathbf{X}_{\mathrm{new}(1)} (\mathbf{C}_{\mathrm{new}} \odot \mathbf{B}). \tag{23}$$

Since $\tilde{\mathbf{B}}$ and $\tilde{\mathbf{C}}$ are fixed, the first term of equation (23) contains only the information of the original tensor, which can be expressed by
$$\mathbf{P}_{\mathrm{old}} = \mathbf{X}_{(1)} (\mathbf{C} \odot \mathbf{B}); \tag{24}$$
so, equation (23) is rewritten as
$$\mathbf{P} = \mathbf{P}_{\mathrm{old}} + \mathbf{X}_{\mathrm{new}(1)} (\mathbf{C}_{\mathrm{new}} \odot \mathbf{B}). \tag{25}$$
Hence, $\tilde{\mathbf{A}}$ can be updated from $\mathbf{A}$ using the mode-1 unfolding matrix of $\mathcal{X}_{\mathrm{new}}$ and the factor matrix $\mathbf{C}_{\mathrm{new}}$ mentioned above. Generally, $\mathbf{P}_{\mathrm{old}}$ is initialized from a small front part of the sequence and updated iteratively by (25). Analogously, the update process of $\mathbf{Q}$ can be represented by
$$\mathbf{Q} = \left( \mathbf{C}^{\mathsf T}\mathbf{C} + \mathbf{C}_{\mathrm{new}}^{\mathsf T}\mathbf{C}_{\mathrm{new}} \right) \ast \left( \mathbf{B}^{\mathsf T}\mathbf{B} \right). \tag{26}$$

The update of $\tilde{\mathbf{A}}$ may be summarized as
$$\tilde{\mathbf{A}} = \left[ \mathbf{P}_{\mathrm{old}} + \mathbf{X}_{\mathrm{new}(1)} (\mathbf{C}_{\mathrm{new}} \odot \mathbf{B}) \right] \left[ \left( \mathbf{C}^{\mathsf T}\mathbf{C} + \mathbf{C}_{\mathrm{new}}^{\mathsf T}\mathbf{C}_{\mathrm{new}} \right) \ast \left( \mathbf{B}^{\mathsf T}\mathbf{B} \right) \right]^{-1}. \tag{27}$$

Finally, the update of factor matrix $\tilde{\mathbf{B}}$ may likewise be expressed by
$$\tilde{\mathbf{B}} = \left[ \mathbf{P}'_{\mathrm{old}} + \mathbf{X}_{\mathrm{new}(2)} (\mathbf{C}_{\mathrm{new}} \odot \tilde{\mathbf{A}}) \right] \left[ \left( \mathbf{C}^{\mathsf T}\mathbf{C} + \mathbf{C}_{\mathrm{new}}^{\mathsf T}\mathbf{C}_{\mathrm{new}} \right) \ast \left( \tilde{\mathbf{A}}^{\mathsf T}\tilde{\mathbf{A}} \right) \right]^{-1}, \tag{28}$$
where $\mathbf{P}'_{\mathrm{old}} = \mathbf{X}_{(2)} (\mathbf{C} \odot \mathbf{A})$ and $\mathbf{X}_{\mathrm{new}(2)}$ is the mode-2 unfolding matrix of $\mathcal{X}_{\mathrm{new}}$.

To make the process clearer, the proposed CTCF of third-order tensor is summarized by Algorithm 1.

Input: original 3D cumulative tensor $\mathcal{X}$, new tensor $\mathcal{X}_{\mathrm{new}}$
 Step 1: new tensor $\mathcal{X}_{\mathrm{new}}$ is added in the time dimension and $\tilde{\mathcal{X}}$ is obtained
 Step 2: decompose $\mathcal{X}$ by CP factorization to obtain $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$
 Step 3: update $\tilde{\mathbf{C}}$ by (19), with $\mathbf{A}$ and $\mathbf{B}$ fixed
 Step 4: update $\tilde{\mathbf{A}}$ by (27), with $\mathbf{B}$ and $\tilde{\mathbf{C}}$ fixed
 Step 5: update $\tilde{\mathbf{B}}$ by (28), with $\tilde{\mathbf{A}}$ and $\tilde{\mathbf{C}}$ fixed
 Step 6: estimate $\tilde{\mathcal{X}}$ by the updated $\tilde{\mathbf{A}}$, $\tilde{\mathbf{B}}$, and $\tilde{\mathbf{C}}$
Output: approximation of the updated cumulative tensor $\tilde{\mathcal{X}}$
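The core of appending a temporal coefficient row for a new slice and approximating that slice from the fixed spatial factors can be sketched as follows. This is an illustrative NumPy sketch under our own naming and ordering conventions, not the paper's code.

```python
import numpy as np

def khatri_rao(U, V):
    # Column-wise Kronecker (Khatri-Rao) product.
    r = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, r)

def append_temporal_row(A, B, C, X_new):
    # Least-squares temporal coefficients for a new I x J slice X_new,
    # with the spatial factors A and B held fixed (cf. equation (19)).
    K = khatri_rao(A, B)                        # rows indexed by (i, j)
    c_new, *_ = np.linalg.lstsq(K, X_new.ravel(), rcond=None)
    approx = (K @ c_new).reshape(X_new.shape)   # CP approximation of the slice
    return np.vstack([C, c_new]), approx
```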
2.2.3. CTCF of Nth-Order Tensor

On the basis of Section 2.2.2, we try to extend CTCF to higher-order tensors. Suppose an $N$-dimensional cumulative tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_{N-1} \times T}$, where the last dimension is the temporal dimension. The CP factorization of $\mathcal{X}$ is represented as $[\![\mathbf{A}^{(1)}, \ldots, \mathbf{A}^{(N)}]\!]$. When a new tensor $\mathcal{X}_{\mathrm{new}}$ is added in the time dimension, the updated cumulative tensor is $\tilde{\mathcal{X}}$, of which the CP factorization is denoted by $[\![\tilde{\mathbf{A}}^{(1)}, \ldots, \tilde{\mathbf{A}}^{(N)}]\!]$.

Similar to Section 2.2.2, the temporal dimensional factor matrix $\tilde{\mathbf{A}}^{(N)}$ is firstly updated with the other matrices fixed. As in (17), the optimization problem of estimating $\tilde{\mathbf{A}}^{(N)}$ is formulated by
$$\tilde{\mathbf{A}}^{(N)} = \arg\min_{\tilde{\mathbf{A}}^{(N)}} \left\| \tilde{\mathbf{X}}_{(N)} - \tilde{\mathbf{A}}^{(N)} \left( \mathbf{A}^{(N-1)} \odot \cdots \odot \mathbf{A}^{(1)} \right)^{\mathsf T} \right\|_F^2. \tag{29}$$

We also separate the original part from the newly added part; i.e.,
$$\tilde{\mathbf{A}}^{(N)} = \begin{bmatrix} \mathbf{A}^{(N)} \\ \mathbf{A}^{(N)}_{\mathrm{new}} \end{bmatrix}. \tag{30}$$

The original part is minimized by fixing the first block as $\mathbf{A}^{(N)}$, and the new part is updated by $\mathbf{A}^{(N)}_{\mathrm{new}} = \mathbf{X}_{\mathrm{new}(N)} \left[ \left( \mathbf{A}^{(N-1)} \odot \cdots \odot \mathbf{A}^{(1)} \right)^{\mathsf T} \right]^{\dagger}$.

The updates of the nontemporal dimensional factor matrices $\tilde{\mathbf{A}}^{(n)}$ ($n = 1, \ldots, N-1$) follow those of factor matrices $\tilde{\mathbf{A}}$ and $\tilde{\mathbf{B}}$ in Section 2.2.2. The loss function of estimating $\tilde{\mathbf{A}}^{(n)}$ is the same as (16). Introducing the accumulated matrices $\mathbf{P}^{(n)}$ and $\mathbf{Q}^{(n)}$, the update of $\tilde{\mathbf{A}}^{(n)}$ may be summarized as
$$\tilde{\mathbf{A}}^{(n)} = \mathbf{P}^{(n)} \left[ \mathbf{Q}^{(n)} \right]^{-1}, \tag{31}$$
where $\mathbf{P}^{(n)}$ and $\mathbf{Q}^{(n)}$ accumulate the contributions of the original and newly added parts analogously to (25) and (26).

2.2.4. CTCF-Based Detection Method

In HSV, the sequential data is expressed as a 4D cumulative tensor; the temporal dimension grows as new frames are added. Whenever a new frame arrives, the results of the original cumulative tensor CP factorization are updated to obtain the factor matrices of the new cumulative tensor, and the CP tensor approximation of the newly added frame is obtained at the same time. If the target is absent, the CP tensor approximation will lead to a small error, since the background information is similar between adjacent frames. On the contrary, if the error is large, the target is likely to be present. We define the fitness between the new frame and its approximation in (34). If the fitness is smaller than the threshold, the target is supposed to appear in the new frame. Otherwise, the new frame is added in the temporal dimension and used to update the original cumulative tensor.

The original 4D cumulative tensor is denoted by $\mathcal{X} \in \mathbb{R}^{I \times J \times K \times T}$, where $T$ denotes the frame number of the initial video. The factor matrices of the four dimensions are represented as
$$\mathcal{X} \approx [\![\mathbf{A}^{(1)}, \mathbf{A}^{(2)}, \mathbf{A}^{(3)}, \mathbf{A}^{(4)}]\!], \tag{32}$$
where $\mathbf{A}^{(1)} \in \mathbb{R}^{I \times R}$, $\mathbf{A}^{(2)} \in \mathbb{R}^{J \times R}$, $\mathbf{A}^{(3)} \in \mathbb{R}^{K \times R}$, and $\mathbf{A}^{(4)} \in \mathbb{R}^{T \times R}$, and $R$ denotes the number of component rank-one tensors in the CP factorization. When a new frame $\mathcal{X}_{\mathrm{new}}$ is added in the temporal dimension of the original 4D cumulative tensor, the 4D cumulative tensor is updated and denoted by $\tilde{\mathcal{X}}$. The factor matrices of $\tilde{\mathcal{X}}$ are expressed by
$$\tilde{\mathcal{X}} \approx [\![\tilde{\mathbf{A}}^{(1)}, \tilde{\mathbf{A}}^{(2)}, \tilde{\mathbf{A}}^{(3)}, \tilde{\mathbf{A}}^{(4)}]\!], \tag{33}$$
where $\tilde{\mathbf{A}}^{(4)} \in \mathbb{R}^{(T+1) \times R}$. Based on Section 2.2.3, we estimate the updated factor matrices and obtain the approximation $\hat{\mathcal{X}}_{\mathrm{new}}$ of the new frame. Actually, this is the specific case of Section 2.2.3 when $N = 4$ and one frame is added at a time.

We define the fitness between $\mathcal{X}_{\mathrm{new}}$ and its approximation $\hat{\mathcal{X}}_{\mathrm{new}}$ as
$$\operatorname{fitness}\left( \mathcal{X}_{\mathrm{new}}, \hat{\mathcal{X}}_{\mathrm{new}} \right) = 1 - \frac{\left\| \mathcal{X}_{\mathrm{new}} - \hat{\mathcal{X}}_{\mathrm{new}} \right\|_F}{\left\| \mathcal{X}_{\mathrm{new}} \right\|_F}. \tag{34}$$

If the target does not appear, the approximation error is small and the fitness is large. Given a preset threshold $\eta$, when the fitness is larger than $\eta$, we decide that the target is absent. Then, the nontarget frame is added in the temporal dimension and the updated 4D cumulative tensor becomes the new original 4D cumulative tensor, which can be expressed as
$$\mathcal{X} \leftarrow \tilde{\mathcal{X}}. \tag{35}$$

If the target appears, the approximation error is large and the fitness is smaller than $\eta$. The residual of $\mathcal{X}_{\mathrm{new}}$ and $\hat{\mathcal{X}}_{\mathrm{new}}$ is the approximation of the target tensor; i.e.,
$$\mathcal{T} = \mathcal{X}_{\mathrm{new}} - \hat{\mathcal{X}}_{\mathrm{new}}. \tag{36}$$

The target of each frame will be shown in 2D form by taking the maximum value of every spectrum. In this way, the proposed CTCF-based detection method can extract not only the key-frames where the target presents, but also the approximate region of target in every key-frame. The flowchart of the proposed method is shown in Figure 3. In Section 3, experiments on real HSV data are conducted and the proposed method is compared with some representative techniques.
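The decision rule above can be sketched in a few lines of NumPy; the function names are illustrative, the threshold direction follows the text (target present when fitness falls below the threshold), and the fitness follows the relative-error form defined in (34).

```python
import numpy as np

def fitness(frame, approx):
    # 1 minus the relative Frobenius-norm error, as in equation (34).
    return 1.0 - np.linalg.norm(frame - approx) / np.linalg.norm(frame)

def is_key_frame(frame, approx, eta=0.9):
    # Target assumed present when the fitness drops below the threshold.
    return fitness(frame, approx) < eta
```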

2.3. The Proposed STTF-Based Super-Resolution Method

In Section 2.2, we presented an approach to detect the frames where the target appears in HSV and the approximate region of the target. However, as discussed in Section 1, there has to be a tradeoff between the spectral resolution and the spatial resolution in HSI imaging systems [52]. The spatial resolution is always low since high spectral resolution is required in HSIs and HSV. So, we are interested in improving the spatial resolution of targets after the detecting process. Instead of fusing HR-MSI and LR-HSI, we try to handle the target SR problem using only the data we already have, which is more practical in real cases.

2.3.1. Problem Formulation

In this subsection, HSIs are represented as 3D tensors with three indexes, which stand for the height, width, and spectral modes. $\mathcal{Z} \in \mathbb{R}^{H \times W \times S}$ denotes the HR-HSI and the LR-HSI is denoted by $\mathcal{Y} \in \mathbb{R}^{h \times w \times S}$, where $h < H$ and $w < W$. The goal is to estimate $\mathcal{Z}$ from $\mathcal{Y}$.

There are two significant characteristics of HR-HSIs [53]: the first one is that spectral vectors can be well approximated in low dimensional subspaces, and the second one is that HSIs are spatially self-similar. This means that sparsity exists in both spectral and spatial dimensions. Inspired by sparse representation [54], the low dimensionality in the spectral domain gives the possibility to form a spectral mode dictionary with few nonzero atoms; the self-similarities in the spatial domain guarantee sparse representations of the height and width modes with spatial dictionaries. In this way, the conventional Tucker factorization is transformed into the multiplication of the core tensor and three modes dictionaries. The factorization is illustrated in Figure 4. The HR-HSI is represented as
$$\mathcal{Z} = \mathcal{G} \times_1 \mathbf{D}_h \times_2 \mathbf{D}_w \times_3 \mathbf{D}_s, \tag{37}$$
where $\mathbf{D}_h \in \mathbb{R}^{H \times n_h}$, $\mathbf{D}_w \in \mathbb{R}^{W \times n_w}$, and $\mathbf{D}_s \in \mathbb{R}^{S \times n_s}$. The variables $n_h$, $n_w$, and $n_s$ denote the numbers of atoms (i.e., the numbers of columns) of $\mathbf{D}_h$, $\mathbf{D}_w$, and $\mathbf{D}_s$, respectively. The core tensor $\mathcal{G} \in \mathbb{R}^{n_h \times n_w \times n_s}$ contains the coefficients of $\mathcal{Z}$ over the three modes dictionaries. We can see that (37) incorporates the information of the separated modes into a unified framework.

The LR key-frame of HSV can be seen as the spatially downsampled version of the HR-HSI $\mathcal{Z}$, which is written as
$$\mathcal{Y} = \mathcal{Z} \times_1 \mathbf{P}_1 \times_2 \mathbf{P}_2, \tag{38}$$
where $\mathbf{P}_1 \in \mathbb{R}^{h \times H}$ and $\mathbf{P}_2 \in \mathbb{R}^{w \times W}$ are downsampling matrices of the height and width modes. Substituting (37) into (38), $\mathcal{Y}$ is represented by
$$\mathcal{Y} = \mathcal{G} \times_1 \mathbf{D}_h^{\ast} \times_2 \mathbf{D}_w^{\ast} \times_3 \mathbf{D}_s, \tag{39}$$
where $\mathbf{D}_h^{\ast} = \mathbf{P}_1 \mathbf{D}_h$ and $\mathbf{D}_w^{\ast} = \mathbf{P}_2 \mathbf{D}_w$ denote the downsampled dictionaries of the height and width modes. To recover $\mathcal{Z}$, we focus on estimating the dictionaries $\mathbf{D}_h$, $\mathbf{D}_w$, and $\mathbf{D}_s$ and the core tensor $\mathcal{G}$.
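The spatial degradation model can be sketched with simple row-selection downsampling matrices applied as n-mode products; this is an illustrative NumPy sketch (the paper does not specify the actual blur/decimation operators, so the matrices below are assumptions):

```python
import numpy as np

def unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def mode_dot(X, M, mode):
    # n-mode product: multiply matrix M into the given mode of X.
    new_shape = [M.shape[0]] + [s for i, s in enumerate(X.shape) if i != mode]
    return np.moveaxis((M @ unfold(X, mode)).reshape(new_shape), 0, mode)

def downsample_matrix(n, factor):
    # Select every `factor`-th row: a crude downsampling operator.
    rows = n // factor
    P = np.zeros((rows, n))
    P[np.arange(rows), np.arange(rows) * factor] = 1.0
    return P

def degrade(Z, f1, f2):
    # Y = Z x1 P1 x2 P2: spatial downsampling, spectral mode untouched.
    P1 = downsample_matrix(Z.shape[0], f1)
    P2 = downsample_matrix(Z.shape[1], f2)
    return mode_dot(mode_dot(Z, P1, 0), P2, 1)
```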

2.3.2. The Proposed STTF-Based SR Algorithm

Since $\mathcal{Y}$ is a downsampled version of $\mathcal{Z}$, recovering $\mathcal{Z}$ from $\mathcal{Y}$ is a typical inverse problem, which is badly ill-posed. So, some prior knowledge of $\mathcal{Z}$ is needed to regularize the super-resolution problem. In HSI processing, spectral sparsity is a widespread regularizer applied to solve varieties of ill-posed problems [55–58]. In such regularization, spectral vectors are linearly combined from a small quantity of different spectral signatures. However, these schemes only take advantage of the sparsity of the spectral domain. In the proposed algorithm, taking into account the HSI self-similarity, sparsity regularization is extended to the spatial domain by exploiting the sparse-based tensor Tucker factorization (STTF). In STTF, the HR-HSI admits a unified sparse representation in terms of the core tensor and the three modes dictionaries.

On the basis of equation (39), the HSV frame super-resolution is formulated as a constrained least-squares optimization problem:
$$\min_{\mathbf{D}_h, \mathbf{D}_w, \mathbf{D}_s, \mathcal{G}} \left\| \mathcal{Y} - \mathcal{G} \times_1 \mathbf{D}_h^{\ast} \times_2 \mathbf{D}_w^{\ast} \times_3 \mathbf{D}_s \right\|_F^2 \quad \text{s.t.} \quad \|\mathcal{G}\|_0 \le K, \tag{40}$$
where $\|\cdot\|_F$ represents the Frobenius norm and $K$ denotes the maximum number of nonzero elements of $\mathcal{G}$. Because of the $\ell_0$-norm constraint, equation (40) is nonconvex. To make the optimization tractable, the $\ell_0$-norm is replaced by the $\ell_1$-norm and (40) is transformed into an unconstrained version:
$$\min_{\mathbf{D}_h, \mathbf{D}_w, \mathbf{D}_s, \mathcal{G}} \left\| \mathcal{Y} - \mathcal{G} \times_1 \mathbf{D}_h^{\ast} \times_2 \mathbf{D}_w^{\ast} \times_3 \mathbf{D}_s \right\|_F^2 + \lambda \|\mathcal{G}\|_1, \tag{41}$$
where $\lambda$ is the parameter of the sparse regularizer. Equation (41) is also nonconvex, and the solutions of $\mathbf{D}_h$, $\mathbf{D}_w$, $\mathbf{D}_s$, and $\mathcal{G}$ are not unique. Nonetheless, if we focus on only one variable with the other variables fixed, the objective function in equation (41) is convex. Inspired by [59, 60], equation (41) can be solved by a proximal alternating optimization scheme, which is guaranteed to reach convergence in a particular situation. Concretely, denoting the objective function of (41) by $f$, the variables $\mathbf{D}_h$, $\mathbf{D}_w$, $\mathbf{D}_s$, and $\mathcal{G}$ are updated iteratively by
$$\mathbf{D}_h \leftarrow \arg\min_{\mathbf{D}_h} f\left( \mathbf{D}_h, \mathbf{D}_w, \mathbf{D}_s, \mathcal{G} \right) + \rho \left\| \mathbf{D}_h - \mathbf{D}_h^{\mathrm{prev}} \right\|_F^2, \tag{42}$$
and likewise for $\mathbf{D}_w$, $\mathbf{D}_s$, and $\mathcal{G}$, where the superscript "prev" denotes the previous estimate from the last iteration and $\rho$ denotes a positive number. The optimizations of $\mathbf{D}_h$, $\mathbf{D}_w$, $\mathbf{D}_s$, and $\mathcal{G}$ will be presented in detail in the Appendix. The conjugate gradient (CG) method [61] and the alternating direction method of multipliers (ADMM) [62] will be used in the optimizations.
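ADMM-style solvers for the sparse core subproblem typically rely on the elementwise soft-thresholding operator, the proximal map of the $\ell_1$ term in (41). The sketch below is a generic illustration, not the paper's exact solver.

```python
import numpy as np

def soft_threshold(X, tau):
    # Elementwise proximal operator of tau * ||X||_1:
    # shrink magnitudes by tau and zero out anything smaller.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)
```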

2.3.3. Initialization of the Proposed Method

Since the optimization problem in (41) is nonconvex, the solution could fall into poor local minima if we set the initialization carelessly. In this paper, we initialize the spatial dictionaries $\mathbf{D}_h$ and $\mathbf{D}_w$ with dictionary-update-cycles KSVD (DUC-KSVD) [63]; this method can promote sparse representations. Then, initialization of the spectral dictionary $\mathbf{D}_s$ is accomplished by the simplex identification split augmented Lagrangian (SISAL) algorithm [64]; this approach can efficiently identify a minimum-volume simplex that contains the spectral vectors.

The proposed STTF-based SR algorithm is summarized in Algorithm 2.

Input: LR-HSI $\mathcal{Y}$
 Initialize $\mathbf{D}_s$ with SISAL
 Initialize $\mathbf{D}_h$ and $\mathbf{D}_w$ with DUC-KSVD
 Initialize $\mathcal{G}$ with (39)
while no convergence do
  Step 1: update $\mathbf{D}_h$ by solving (A.3) with CG
  Step 2: update $\mathbf{D}_w$ by solving (A.6) with CG
  Step 3: update $\mathbf{D}_s$ by solving (A.9) with CG
  Step 4: update $\mathcal{G}$ by solving (A.15) with CG
end while
Estimate $\mathcal{Z}$ by (37)
Output: HR-HSI $\mathcal{Z}$

3. Results and Discussion

3.1. Experimental Data Set

To highlight the advantages of HSIs, we choose invisible gas plume to be the target. The proposed algorithms can be extended to other types of data reasonably. In this section, the HSV data set is acquired by the infrared imaging spectrometer “HyperCam-LW.” Sulfur hexafluoride (SF6) is chosen to be the target, since it is a kind of odorless and colorless gas plume with a distinct absorption peak in LWIR range. The HSV data set consists of 60 infrared hyperspectral frames with the size of . The imaging interval is 4.8 s, and the wavelength of the data ranges from 7.8 μm to 11.8 μm.

In the SR experiments, only the middle pixels (specifically, columns 71 to 198) are used, for reasons connected with the algorithm process. We also remove spectral bands 41–127 because of water vapor absorption and extremely low SNR.

3.2. Compared Methods

For CTCF-based detection method, we compare it with two representative methods: MSD (matched subspace detector) [25] and CMF (constrained matrix factorization) [29]. For STTF-based SR method, we compare it with three state-of-the-art algorithms: bicubic interpolation, sparse representation-based SR method [54], and sequence information-based SR method [65].

3.3. Qualitative and Quantitative Metrics

For detection methods, receiver operating characteristic (ROC) curves [66] are used to evaluate the performance. Generally, a detector outperforms another one if the area under its ROC curve is larger [67]. As suggested in [68], the area under the ROC curve (AUC) is also calculated as a measure of performance of these detection methods. Usually, a better detector gets a higher AUC value.

For SR algorithms, since we directly process the LR-HSI, there is no original HR-HSI (i.e., the ground truth) for reference. Thus, some popular quantitative metrics are not available, such as RMSE (root-mean-square error) [69], PSNR (peak signal to noise ratio), and SAM (spectral angle mapper). In this section, entropy and average gradient are introduced to evaluate the performance of SR methods.

3.3.1. Entropy

Super-resolution aims to introduce more useful information into images, so we may measure the performance of SR methods by the amount of information contained in the experimental results. The entropy is defined as
$$H = -\sum_{i=0}^{L-1} p_i \log_2 p_i,$$
where $p_i$ denotes the probability of grey level $i$ in the image and $L$ denotes the grey-value range. The larger the entropy value of the image, the richer the information contained in the image.

3.3.2. Average Gradient

Another assessment to measure the performance of super-resolution is the change in the amount of detailed information in the image. We may evaluate the experimental results by the average gradient, since it can reflect the ability to express details and measures the clarity of the image. The gradient increases if the greyscale level in one direction of the image varies quickly. The average gradient is formulated as
$$\mathrm{AG} = \frac{1}{(M-1)(N-1)} \sum_{x=1}^{M-1} \sum_{y=1}^{N-1} \sqrt{\frac{1}{2} \left[ \left( \frac{\partial f(x,y)}{\partial x} \right)^2 + \left( \frac{\partial f(x,y)}{\partial y} \right)^2 \right]},$$
where $M$ and $N$ denote the height and width of the image, respectively, and $f(x,y)$ denotes the greyscale value of pixel $(x,y)$ in the image. The larger the average gradient value of the image is, the clearer the image will be.
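Both metrics are straightforward to compute. The sketch below follows the standard definitions for an 8-bit greyscale image; the function names are illustrative, not the paper's code.

```python
import numpy as np

def entropy(img, levels=256):
    # Shannon entropy of the grey-level histogram, in bits.
    hist = np.bincount(img.ravel().astype(np.int64), minlength=levels)
    p = hist / hist.sum()
    p = p[p > 0]                      # skip empty bins (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    # Mean magnitude of forward greyscale differences over the image.
    f = img.astype(np.float64)
    dx = f[1:, :-1] - f[:-1, :-1]     # vertical differences
    dy = f[:-1, 1:] - f[:-1, :-1]     # horizontal differences
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))
```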

Besides, the visual quality of output images is an important qualitative metric.

3.4. Parameters Setting

In MSD, we pick 463 spectra of the gas target and 846 spectra of the background from the 12th frame of HSV to build up the training set. The size of the target subspace and background subspace is and , respectively. In CMF, the number of endmembers is 3, the sparsity of the factor matrices is 2, and the number of iterations is 3. In the proposed CTCF-based method, the original cumulative tensor is obtained by ALS, the tensor rank is 3, the maximum iteration number is 100, and the reconstruction error tolerance is $10^{-8}$; in the update stage, the threshold of fitness is 0.9. In the proposed STTF-based SR method, the number of iterations is 5; the parameter is the weight in (42) and we set ; the parameter controls the sparsity of the core tensor and we set ; the parameter is set by ; the size of the core tensor is set by , , and . The parameters above were decided after a sufficient number of experiments to balance efficiency and stability.

3.5. Experimental Results and Discussion

In this subsection, we show the experimental results of the various methods for detection and super-resolution.

After processing the HSV by the proposed CTCF-based method, we compute the Frobenius-norm values of each frame, which are presented in Figure 5. It is obvious that the target gas appears in the 12th frame and disappears in the 51st frame. Figure 6 compares the ROC curves of the test methods on four frames in detail, and Figure 7 illustrates the general trends of the ROC curves of MSD, CMF, and CTCF, respectively. As can be seen from Figures 6 and 7, the proposed CTCF-based detection algorithm outperforms the other two methods. The AUC values of the three approaches are shown in Table 1. Although the AUC values of CMF in some frames are better, the AUC values of CMF in some other frames are very low (less than 0.98). In contrast, all the results of CTCF lie in the range of 0.98 to 1. From the average value and the variance, we can conclude that the proposed method is superior and more stable. The graphical results are illustrated in Figure 8.


Frame | MSD [25] | CMF [29] | CTCF

12 | 0.9655 | 0.9993 | 0.9980
13 | 0.8462 | 0.9995 | 0.9980
14 | 0.8878 | 0.9994 | 0.9981
15 | 0.8189 | 0.9965 | 0.9981
16 | 0.8734 | 0.9995 | 0.9977
17 | 0.9348 | 0.9987 | 0.9946
18 | 0.5792 | 0.7477 | 0.9915
19 | 0.7894 | 0.9988 | 0.9958
20 | 0.9336 | 0.8991 | 0.9934
21 | 0.8222 | 0.9980 | 0.9966
22 | 0.9001 | 0.9915 | 0.9969
23 | 0.8388 | 0.9986 | 0.9945
24 | 0.8914 | 0.9990 | 0.9983
25 | 0.9169 | 0.9989 | 0.9961
26 | 0.9254 | 0.9947 | 0.9989
27 | 0.8722 | 0.9974 | 0.9951
28 | 0.8503 | 0.9978 | 0.9930
29 | 0.9490 | 0.9892 | 0.9974
30 | 0.9011 | 0.9341 | 0.9885
31 | 0.9157 | 0.9867 | 0.9965
32 | 0.8881 | 0.8582 | 0.9811
33 | 0.9345 | 0.9771 | 0.9933
34 | 0.9007 | 0.9977 | 0.9922
35 | 0.9273 | 0.9952 | 0.9933
36 | 0.9349 | 0.9989 | 0.9950
37 | 0.9528 | 0.9986 | 0.9984
38 | 0.9299 | 0.9875 | 0.9981
39 | 0.8838 | 0.9979 | 0.9962
40 | 0.9295 | 0.9976 | 0.9966
41 | 0.9165 | 0.9939 | 0.9976
42 | 0.9665 | 0.9986 | 0.9988
43 | 0.9660 | 0.9763 | 0.9969
44 | 0.9083 | 0.9995 | 0.9979
45 | 0.9046 | 0.9977 | 0.9935
46 | 0.9156 | 0.9964 | 0.9918
47 | 0.9225 | 0.9979 | 0.9950
48 | 0.8623 | 0.9980 | 0.9967
49 | 0.8894 | 0.9992 | 0.9980
50 | 0.8640 | 0.9969 | 0.9947
Average | 0.8926 | 0.9817 | 0.9954
Variance | 0.4407 × 10−2 | 0.2290 × 10−2 | 0.1134 × 10−6

The target in each key-frame is shown in 2D form (as a grey image) by taking the maximum value of every spectrum. To keep the paper concise, we choose 8 frames to compare the three detectors, as shown in Figure 9. The first to eighth rows present the detection results for frames 15, 18, 22, 28, 31, 39, 48, and 50, respectively. The higher a pixel's grey level in the image, the closer that pixel is to the target. It is apparent that our method extracts more accurate targets.
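The 2D grey-image projection described above is a one-line operation on a (height × width × bands) cube; a minimal sketch with illustrative sizes:

```python
import numpy as np

def spectral_max_projection(cube):
    """Collapse a (H, W, bands) hyperspectral cube to a 2D grey image
    by taking the maximum over each pixel's spectrum."""
    return cube.max(axis=2)

# illustrative usage on a random 64x64 cube with 30 bands
cube = np.random.default_rng(0).random((64, 64, 30))
grey = spectral_max_projection(cube)   # shape (64, 64)
```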

Table 2 shows the entropy and average gradient of the key-frames for the four SR algorithms. Since the sequence-based method needs 5 LR frames to form 1 HR frame, the compared frame range is reduced from 12∼50 to 14∼48. In each row, the bold values represent the highest entropy and the highest average gradient. From Table 2, we can conclude that, first, although interpolation adds information to the frame, the details of the target are lost; second, sparse-representation SR and sequence-information SR have almost the same entropy, but the latter offers more detail because its HR dictionary is formed from several LR dictionaries; finally, the proposed STTF-based SR method outperforms the other three methods on both metrics.
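Entropy and average gradient are standard no-reference sharpness metrics. The sketch below uses one common definition of each (histogram-based Shannon entropy in bits, and the mean RMS of horizontal and vertical differences); the exact constants may differ from the formulas used in the paper.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram; img in [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Mean magnitude of local intensity change (one common definition)."""
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences, cropped to match
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```

Higher entropy indicates a richer grey-level distribution (more information), and a higher average gradient indicates sharper edges and finer texture, which is why the two metrics together are used to rank the SR results.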


Frame | LR frame | Bicubic interpolation | Sparse representation-based SR [54] | Sequence information-based SR [65] | STTF-based SR
(each cell: Entropy / Average gradient)

14 | 5.1603 / 0.0076 | 5.3744 / 0.0052 | 5.4996 / 0.0078 | 5.4259 / 0.0090 | 5.6098 / 0.0121
15 | 4.7086 / 0.0077 | 5.1407 / 0.0063 | 5.2398 / 0.0089 | 5.2184 / 0.0103 | 5.3678 / 0.0139
16 | 5.5521 / 0.0084 | 5.8013 / 0.0060 | 5.8765 / 0.0085 | 5.8203 / 0.0103 | 5.9772 / 0.0135
17 | 5.5293 / 0.0086 | 5.7054 / 0.0056 | 5.7918 / 0.0081 | 5.6125 / 0.0094 | 5.8831 / 0.0129
18 | 4.2989 / 0.0072 | 4.8339 / 0.0062 | 4.9794 / 0.0088 | 5.0423 / 0.0106 | 5.1108 / 0.0138
19 | 4.4843 / 0.0073 | 5.0045 / 0.0063 | 5.1327 / 0.0089 | 5.1831 / 0.0106 | 5.2644 / 0.0140
20 | 5.1442 / 0.0075 | 5.4307 / 0.0060 | 5.5039 / 0.0086 | 5.3987 / 0.0099 | 5.6122 / 0.0137
21 | 4.8821 / 0.0071 | 5.2234 / 0.0060 | 5.3264 / 0.0086 | 5.2578 / 0.0100 | 5.4491 / 0.0137
22 | 4.3472 / 0.0067 | 4.8929 / 0.0065 | 4.9858 / 0.0090 | 4.9409 / 0.0102 | 5.1261 / 0.0141
23 | 4.6127 / 0.0067 | 5.0430 / 0.0061 | 5.1534 / 0.0086 | 5.0806 / 0.0098 | 5.2823 / 0.0135
24 | 4.4189 / 0.0064 | 4.8688 / 0.0060 | 4.9850 / 0.0085 | 4.8916 / 0.0096 | 5.1168 / 0.0133
25 | 4.3273 / 0.0066 | 4.9091 / 0.0066 | 5.0061 / 0.0091 | 4.9607 / 0.0103 | 5.1494 / 0.0143
26 | 4.1394 / 0.0064 | 4.7589 / 0.0066 | 4.8438 / 0.0090 | 4.8078 / 0.0103 | 4.9919 / 0.0141
27 | 4.1127 / 0.0065 | 4.6995 / 0.0066 | 4.7878 / 0.0091 | 4.7713 / 0.0104 | 4.9366 / 0.0142
28 | 3.9657 / 0.0061 | 4.6576 / 0.0066 | 4.7467 / 0.0091 | 4.7507 / 0.0107 | 4.8902 / 0.0142
29 | 4.1885 / 0.0063 | 4.7545 / 0.0064 | 4.8611 / 0.0088 | 4.8481 / 0.0101 | 5.0066 / 0.0137
30 | 3.9672 / 0.0063 | 4.6345 / 0.0067 | 4.7253 / 0.0092 | 4.7241 / 0.0106 | 4.8752 / 0.0144
31 | 3.9440 / 0.0061 | 4.6135 / 0.0065 | 4.7131 / 0.0090 | 4.7349 / 0.0104 | 4.8654 / 0.0141
32 | 3.8661 / 0.0060 | 4.5799 / 0.0064 | 4.6914 / 0.0088 | 4.6971 / 0.0101 | 4.8497 / 0.0138
33 | 4.0479 / 0.0060 | 4.7100 / 0.0064 | 4.8126 / 0.0088 | 4.8114 / 0.0100 | 4.9631 / 0.0137
34 | 4.1691 / 0.0060 | 4.7824 / 0.0066 | 4.8621 / 0.0088 | 4.8462 / 0.0102 | 5.0072 / 0.0136
35 | 4.0933 / 0.0062 | 4.7169 / 0.0067 | 4.8010 / 0.0091 | 4.8245 / 0.0108 | 4.9515 / 0.0143
36 | 3.9157 / 0.0063 | 4.5995 / 0.0067 | 4.6881 / 0.0092 | 4.6712 / 0.0103 | 4.8508 / 0.0142
37 | 3.7810 / 0.0059 | 4.5028 / 0.0064 | 4.6088 / 0.0089 | 4.5984 / 0.0100 | 4.7666 / 0.0138
38 | 3.8814 / 0.0061 | 4.5479 / 0.0065 | 4.6483 / 0.0090 | 4.6395 / 0.0101 | 4.8050 / 0.0140
39 | 4.3168 / 0.0060 | 4.8397 / 0.0061 | 4.9406 / 0.0084 | 4.9135 / 0.0099 | 5.0792 / 0.0130
40 | 3.9333 / 0.0061 | 4.6597 / 0.0067 | 4.7380 / 0.0091 | 4.7209 / 0.0104 | 4.8906 / 0.0142
41 | 4.2009 / 0.0063 | 4.7897 / 0.0066 | 4.8711 / 0.0089 | 4.8346 / 0.0102 | 5.0138 / 0.0138
42 | 4.1083 / 0.0063 | 4.7514 / 0.0067 | 4.8362 / 0.0091 | 4.8398 / 0.0103 | 4.9836 / 0.0142
43 | 4.0485 / 0.0063 | 4.6827 / 0.0067 | 4.7602 / 0.0091 | 4.7117 / 0.0101 | 4.9109 / 0.0142
44 | 4.0521 / 0.0062 | 4.6510 / 0.0063 | 4.7587 / 0.0087 | 4.7273 / 0.0097 | 4.9165 / 0.0136
45 | 4.3442 / 0.0060 | 4.9011 / 0.0061 | 5.0079 / 0.0085 | 4.9380 / 0.0097 | 5.1413 / 0.0134
46 | 4.0006 / 0.0061 | 4.5913 / 0.0062 | 4.7080 / 0.0087 | 4.6467 / 0.0097 | 4.8587 / 0.0136
47 | 4.4749 / 0.0062 | 5.0109 / 0.0060 | 5.1015 / 0.0084 | 5.0007 / 0.0095 | 5.2302 / 0.0131
48 | 4.0685 / 0.0065 | 4.6843 / 0.0065 | 4.7859 / 0.0090 | 4.7607 / 0.0101 | 4.9384 / 0.0141
Avg. | 4.3167 / 0.0066 | 4.8671 / 0.0063 | 4.9651 / 0.0088 | 4.9329 / 0.0101 | 5.1049 / 0.0138

Figure 10 presents the visual quality of the results obtained by the four test methods, using the 16th, 21st, 34th, and 47th frames as representatives. The smaller image with size of is the LR 2D-form frame. The bigger ones with size of are the SR results of the different algorithms. As can be seen from Figure 10, the proposed approach yields clearer outputs with sharper edges and more texture. A drawback is the “checkerboard artifacts,” which may be caused by the deconvolution operations in the method; we aim to fix this in future work.

4. Conclusions

In this paper, aiming at hyperspectral video, we propose a novel key-frame and target detection method based on cumulative tensor CP factorization, termed CTCF, and a super-resolution algorithm based on sparse tensor Tucker factorization, called STTF. Unlike conventional matrix-factorization-based methods, CTCF considers hyperspectral video (HSV) as a 4D cumulative tensor and approximates newly added frames by updating the factor matrices. To overcome the limits of conventional methods and make super-resolution (SR) more practical, STTF exploits the sparsity of HSV frames and factorizes each of them as a sparse core tensor multiplied by dictionaries along the three modes. In this way, the spatial resolution of the LR-HSI is enhanced directly, without HR samples. The experimental results systematically demonstrate that the proposed CTCF and STTF methods outperform other state-of-the-art algorithms.

In future work, we will focus on tensor-factorization-based target tracking methods that can extract the target region more accurately and clearly. For super-resolution, we aim to exploit nonlocal similarities in the tensor factorization framework, which have been widely used in inverse problems. Besides target tracking and super-resolution, region-of-interest (ROI) approaches will be investigated to make HSV target recognition more efficient and fully featured. Inspired by [70] and other related works, we believe that research on chemical gas detection methods will benefit agricultural applications of HSI/HSV. These studies will be of great significance in the Internet of Things (IoT), smart agriculture, pollution monitoring, etc.

Appendix

The optimizations of , , , and in Section 2.3.2 are presented as follows. (1) Optimization of : when , , and are fixed, the optimization of in (42) is represented as where denotes the previous estimate of the height-mode dictionary from the last iteration. Using the properties of the n-mode product (see (3)), (A.1) can be rewritten as where denotes the mode-1 unfolding matrix of and . Equation (A.2) is quadratic and can be solved as a general Sylvester matrix equation; i.e.,

The conjugate gradient (CG) method is utilized to solve (A.3). CG converges after several iterations under mild conditions; in our experiments, the solution of (A.3) is well approximated after 20 iterations. (2) Optimization of : when , , and are fixed, the optimization of in (42) is expressed by where denotes the previous estimate of the width-mode dictionary from the last iteration. Similar to the optimization of , (A.4) can be transformed into where denotes the mode-2 unfolding matrix of and . Equation (A.5) is also quadratic and can be solved as a general Sylvester matrix equation; i.e.,
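The Sylvester-type systems above need never be formed explicitly: CG only requires the action of the operator X ↦ AX + XB. A generic matrix-free sketch (assuming, as in the derivation's quadratic objectives, that A and B are symmetric positive definite so the operator is SPD under the trace inner product; the names are illustrative):

```python
import numpy as np

def sylvester_cg(A, B, C, max_iter=200, tol=1e-12):
    """Solve A X + X B = C by conjugate gradient, without forming
    the Kronecker system; A and B are assumed symmetric positive definite."""
    op = lambda M: A @ M + M @ B           # the Sylvester operator
    X = np.zeros_like(C)
    R = C - op(X)                          # residual
    P = R.copy()                           # search direction
    rs = float(np.sum(R * R))
    for _ in range(max_iter):
        Q = op(P)
        alpha = rs / float(np.sum(P * Q))
        X += alpha * P
        R -= alpha * Q
        rs_new = float(np.sum(R * R))
        if rs_new < tol:
            break
        P = R + (rs_new / rs) * P
        rs = rs_new
    return X
```

In practice a few tens of iterations suffice, which matches the observation above that 20 CG iterations already give a good approximation.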

Likewise, CG is used to solve (A.6). (3) Optimization of : when , , and are fixed, the optimization with respect to in (42) can be formulated as where denotes the previous estimate of the spectral-mode dictionary from the last iteration. Following the same procedure as in the two cases above, we have where denotes the mode-3 unfolding matrix of and . Similarly, (A.8) can be solved as a general Sylvester matrix equation; i.e.,

We apply CG to solve (A.9), and convergence is achieved within a few iterations. (4) Optimization of : when , , and are fixed, the optimization of in (42) can be written as where denotes the previous estimate of the core tensor from the last iteration. Equation (A.10) is convex, so we can employ ADMM to solve it. Introducing the splitting variables and , (A.10) can be transformed into the equivalent constrained form: where

Equation (A.11) is a typical optimization problem in the standard ADMM form. The augmented Lagrangian function for (A.11) is represented as where denotes the Lagrangian multiplier and denotes the penalty parameter. The ADMM iterations are formulated as
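For the ℓ1 splitting variable, the corresponding ADMM subproblem has the familiar elementwise soft-thresholding solution; a minimal sketch (the threshold λ/μ and the variable names are illustrative):

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# inside an ADMM iteration the l1 splitting variable would be updated as, e.g.,
# S = soft_threshold(G + U, lam / mu)
```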

Here, the optimizations of and are independent because the function is decoupled with respect to these variables. Next, (A.14) is discussed in more detail. (i) Update : based on (A.13), we have and the closed-form solution of (A.15) is where . (ii) Update : based on (A.13), we have

Based on (6) and (7), (A.17) is equivalent to where the vectors , , , and are the vectorizations of the tensors , , , and , respectively, and the matrix . Equation (A.18) has the closed-form solution denoted by

However, is so large that solving (A.19) directly is computationally prohibitive. We therefore rewrite the first term of (A.19) as follows: where and () denote the eigenvector matrices and eigenvalue matrices of , , and , respectively. Hence is diagonal and can be computed easily. Moreover, applying and to amounts to i-mode products, and the multiplication in (A.20) is elementwise. Finally, in the second term of (A.19) can be computed by (iii) Update : based on (A.14), is updated by
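The trick in (A.19)–(A.20) avoids ever forming the huge Kronecker-structured matrix: since the Gram matrix of a Kronecker product of dictionaries is the Kronecker product of the small Gram matrices, eigendecomposing each small Gram matrix diagonalizes the whole system, and the solve reduces to a few i-mode products and one elementwise division. A sketch with illustrative names and sizes (I use a row-major vec convention, so the i-th Gram matrix acts on mode i; the paper's column-major convention only reverses the Kronecker order):

```python
import numpy as np

def mode_products(T, mats):
    """Apply mats[i] along mode i of a 3-way tensor."""
    return np.einsum('ai,bj,ck,ijk->abc', mats[0], mats[1], mats[2], T)

def solve_regularized_tucker_system(Ds, Brhs, mu):
    """Solve (kron(G1, G2, G3) + mu*I) vec(X) = vec(Brhs), where Gi = Di^T Di,
    using per-mode eigendecompositions instead of the full Kronecker matrix."""
    eigs = [np.linalg.eigh(D.T @ D) for D in Ds]    # Gi = Qi diag(li) Qi^T
    lams = [e[0] for e in eigs]
    Qs = [e[1] for e in eigs]
    # eigenvalues of the Kronecker product, shifted by the mu*I regularizer
    denom = np.einsum('i,j,k->ijk', *lams) + mu
    Y = mode_products(Brhs, [Q.T for Q in Qs])      # rotate into the eigenbasis
    Y = Y / denom                                   # diagonal solve
    return mode_products(Y, Qs)                     # rotate back
```

The cost is three small eigendecompositions plus a handful of dense tensor-matrix products, instead of factorizing a matrix whose side equals the product of the three core dimensions.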

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

The authors would like to thank Professor Gu from Heilongjiang Province Key Laboratory of Space-Air-Ground Integrated Intelligent Remote Sensing for his selfless help. This work was supported by the National Natural Science Foundation of China (Grant no. 61671184) and the National Natural Science Foundation of Key International Cooperation of China (Grant no. 61720106002).

References

1. H. Fan, C. Li, Y. Guo, G. Kuang, and J. Ma, “Spatial-spectral total variation regularized low-rank tensor decomposition for hyperspectral image denoising,” IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 10, pp. 6196–6213, 2018.
2. X. Zheng, Y. Yuan, and X. Lu, “Hyperspectral image denoising by fusing the selected related bands,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 5, pp. 2596–2609, 2019.
3. D. Hong, N. Yokoya, J. Chanussot, and X. X. Zhu, “CoSpace: common subspace learning from hyperspectral-multispectral correspondences,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 7, pp. 4349–4359, 2019.
4. D. Hong, N. Yokoya, J. Chanussot, J. Xu, and X. X. Zhu, “Learning to propagate labels on graphs: an iterative multitask regression framework for semi-supervised hyperspectral dimensionality reduction,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 158, pp. 35–49, 2019.
5. T. Liu, Y. Gu, X. Jia, J. A. Benediktsson, and J. Chanussot, “Class-specific sparse multiple kernel learning for spectral-spatial hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 12, pp. 7351–7365, 2016.
6. Y. Gu, T. Liu, X. Jia, J. A. Benediktsson, and J. Chanussot, “Nonlinear multiple kernel learning with multiple-structure-element extended morphological profiles for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 6, pp. 3235–3247, 2016.
7. T. Liu, Y. Gu, J. Chanussot, and M. Dalla Mura, “Multimorphological superpixel model for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 12, pp. 6950–6963, 2017.
8. T. Liu, X. Zhang, and Y. Gu, “Unsupervised cross-temporal classification of hyperspectral images with multiple geodesic flow kernel learning,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 12, pp. 9688–9701, 2019.
9. Y. Gu, T. Liu, and J. Li, “Superpixel tensor model for spatial-spectral classification of remote sensing images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 7, pp. 4705–4719, 2019.
10. D. Hong, N. Yokoya, N. Ge, J. Chanussot, and X. X. Zhu, “Learnable manifold alignment (LeMA): a semi-supervised cross-modality learning framework for land cover and land use classification,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 147, pp. 193–205, 2019.
11. M. Song, X. Shang, Y. Wang, C. Yu, and C.-I. Chang, “Class information-based band selection for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 11, pp. 8394–8416, 2019.
12. Y. Zhang, W. Ke, B. Du, and X. Hu, “Independent encoding joint sparse representation and multitask learning for hyperspectral target detection,” IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 11, pp. 1933–1937, 2017.
13. N. M. Nasrabadi, “Hyperspectral target detection: an overview of current and future challenges,” IEEE Signal Processing Magazine, vol. 31, no. 1, pp. 34–44, 2014.
14. Y. Wang, L. Wang, C. Yu et al., “Constrained-target band selection for multiple-target detection,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 8, pp. 6079–6103, 2019.
15. R. Dian, S. Li, and L. Fang, “Learning a low tensor-train rank representation for hyperspectral image super-resolution,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 9, pp. 2672–2683, 2019.
16. L. Fang, H. Zhuo, and S. Li, “Super-resolution of hyperspectral image via superpixel-based sparse representation,” Neurocomputing, vol. 273, no. 17, pp. 171–177, 2018.
17. R. Dian, L. Fang, and S. Li, “Hyperspectral image super-resolution via non-local sparse tensor factorization,” in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3862–3871, Honolulu, HI, USA, July 2017.
18. Y. Wang, L. Wang, H. Xie, and C.-I. Chang, “Fusion of various band selection methods for hyperspectral imagery,” Remote Sensing, vol. 11, no. 18, p. 2125, 2019.
19. Z. Huang, L. Fang, and S. Li, “Subpixel-pixel-superpixel guided fusion for hyperspectral anomaly detection,” IEEE Transactions on Geoscience and Remote Sensing, pp. 1–10, 2020.
20. X. Zhang, G. Wen, and W. Dai, “A tensor decomposition-based anomaly detection algorithm for hyperspectral image,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 10, pp. 5801–5820, 2016.
21. S. Li, K. Zhang, Q. Hao, P. Duan, and X. Kang, “Hyperspectral anomaly detection with multiscale attribute and edge-preserving filters,” IEEE Geoscience and Remote Sensing Letters, vol. 15, no. 10, pp. 1605–1609, 2018.
22. Y. Wang, L.-C. Lee, B. Xue et al., “A posteriori hyperspectral anomaly detection for unlabeled classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 6, pp. 3091–3106, 2018.
23. I. S. Reed and X. Yu, “Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 10, pp. 1760–1770, 1990.
24. D. Meng, X. Wang, M. Huang, L. Wan, and B. Zhang, “Robust weighted subspace fitting for DOA estimation via block sparse recovery,” IEEE Communications Letters, vol. 24, no. 3, pp. 563–567, 2020.
25. L. L. Scharf and B. Friedlander, “Matched subspace detectors,” IEEE Transactions on Signal Processing, vol. 42, no. 8, pp. 2146–2157, 1994.
26. D. Hong, N. Yokoya, J. Chanussot, and X. X. Zhu, “An augmented linear mixing model to address spectral variability for hyperspectral unmixing,” IEEE Transactions on Image Processing, vol. 28, no. 4, pp. 1923–1938, 2019.
27. X. Li, X. Jia, L. Wang, and K. Zhao, “On spectral unmixing resolution using extended support vector machines,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 9, pp. 4985–4996, 2015.
28. M.-D. Iordache, J. M. Bioucas-Dias, and A. Plaza, “Sparse unmixing of hyperspectral data,” IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 6, pp. 2014–2039, 2011.
29. N. Akhtar and A. Mian, “RCMF: robust constrained matrix factorization for hyperspectral unmixing,” IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 6, pp. 3354–3366, 2017.
30. Q. Wei, N. Dobigeon, and J. Tourneret, “Bayesian fusion of hyperspectral and multispectral images,” in Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3176–3180, Florence, Italy, May 2014.
31. V. P. Shah, N. H. Younan, and R. L. King, “An efficient pan-sharpening method via a combined adaptive PCA approach and contourlets,” IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 5, pp. 1323–1335, 2008.
32. R. Dian, S. Li, A. Guo, and L. Fang, “Deep hyperspectral image sharpening,” IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 11, pp. 5345–5355, 2018.
33. W. Dong, F. Fu, G. Shi et al., “Hyperspectral image super-resolution via non-negative structured sparse representation,” IEEE Transactions on Image Processing, vol. 25, no. 5, pp. 2337–2352, 2016.
34. B. Huang, H. Song, H. Cui, J. Peng, and Z. Xu, “Spatial and spectral image fusion using sparse matrix factorization,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 3, pp. 1693–1704, 2014.
35. N. Akhtar, F. Shafait, and A. Mian, “Bayesian sparse representation for hyperspectral image super resolution,” in Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3631–3640, Boston, MA, USA, June 2015.
36. X. Fu, K. Huang, B. Yang, W.-K. Ma, and N. D. Sidiropoulos, “Robust volume minimization-based matrix factorization for remote sensing and document clustering,” IEEE Transactions on Signal Processing, vol. 64, no. 23, pp. 6254–6268, 2016.
37. X. Fu, K. Huang, and N. D. Sidiropoulos, “On identifiability of nonnegative matrix factorization,” IEEE Signal Processing Letters, vol. 25, no. 3, pp. 328–332, 2018.
38. K. Huang, N. D. Sidiropoulos, and A. Swami, “Non-negative matrix factorization revisited: uniqueness and algorithm for symmetric decomposition,” IEEE Transactions on Signal Processing, vol. 62, no. 1, pp. 211–224, 2014.
39. W.-K. Ma, J. M. Bioucas-Dias, T.-H. Chan et al., “A signal processing perspective on hyperspectral unmixing: insights from remote sensing,” IEEE Signal Processing Magazine, vol. 31, no. 1, pp. 67–81, 2014.
40. N. Yokoya, T. Yairi, and A. Iwasaki, “Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion,” IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 2, pp. 528–537, 2012.
41. C. Lanaras, E. Baltsavias, and K. Schindler, “Hyperspectral super-resolution by coupled spectral unmixing,” in Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), pp. 3586–3594, Santiago, Chile, December 2015.
42. K. Zhang, M. Wang, and S. Yang, “Multispectral and hyperspectral image fusion based on group spectral embedding and low-rank factorization,” IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 3, pp. 1363–1371, 2017.
43. Y. Xu, Z. Wu, J. Chanussot, and Z. Wei, “Nonlocal patch tensor sparse representation for hyperspectral image super-resolution,” IEEE Transactions on Image Processing, vol. 28, no. 6, pp. 3034–3047, 2019.
44. Y. Xu, Z. Wu, J. Chanussot, and Z. Wei, “Hyperspectral images super-resolution via learning high-order coupled tensor ring representation,” IEEE Transactions on Neural Networks and Learning Systems, pp. 1–14, 2020.
45. R. Dian and S. Li, “Hyperspectral image super-resolution via subspace-based low tensor multi-rank regularization,” IEEE Transactions on Image Processing, vol. 28, no. 10, pp. 5135–5146, 2019.
46. T. G. Kolda and B. W. Bader, “Tensor decompositions and applications,” SIAM Review, vol. 51, no. 3, pp. 455–500, 2009.
47. H. A. L. Kiers, “Towards a standardized notation and terminology in multiway analysis,” Journal of Chemometrics, vol. 14, no. 3, pp. 105–122, 2000.
48. J. B. Kruskal, “Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics,” Linear Algebra and its Applications, vol. 18, no. 2, pp. 95–138, 1977.
49. A. Smilde, R. Bro, and P. Geladi, Multi-Way Analysis: Applications in the Chemical Sciences, Wiley, West Sussex, England, UK, 2004.
50. L. R. Tucker, “Some mathematical notes on three-mode factor analysis,” Psychometrika, vol. 31, no. 3, pp. 279–311, 1966.
51. G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, MD, USA, 1996.
52. L. Loncan, L. B. de Almeida, J. M. Bioucas-Dias et al., “Hyperspectral pansharpening: a review,” IEEE Geoscience and Remote Sensing Magazine, vol. 3, no. 3, pp. 27–46, 2015.
53. L. Zhuang and J. M. Bioucas-Dias, “Fast hyperspectral image denoising and inpainting based on low-rank and sparse representations,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 3, pp. 730–742, 2018.
54. J. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861–2873, 2010.
55. T. Lu, S. Li, L. Fang, Y. Ma, and J. A. Benediktsson, “Spectral-spatial adaptive sparse representation for hyperspectral image denoising,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 1, pp. 373–385, 2016.
56. R. Arablouei and F. de Hoog, “Hyperspectral image recovery via hybrid regularization,” IEEE Transactions on Image Processing, vol. 25, no. 12, pp. 5649–5663, 2016.
57. L. Zhang, W. Wei, C. Tian, F. Li, and Y. Zhang, “Exploring structured sparsity by a reweighted Laplace prior for hyperspectral compressive sensing,” IEEE Transactions on Image Processing, vol. 25, no. 10, pp. 4974–4988, 2016.
58. L. Fang, C. Wang, S. Li, and J. A. Benediktsson, “Hyperspectral image classification via multiple-feature-based adaptive sparse representation,” IEEE Transactions on Instrumentation and Measurement, vol. 66, no. 7, pp. 1646–1657, 2017.
59. H. Attouch, J. Bolte, and B. F. Svaiter, “Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods,” Mathematical Programming, vol. 137, no. 1-2, pp. 91–129, 2013.
60. H. Attouch, J. Bolte, P. Redont, and A. Soubeyran, “Proximal alternating minimization and projection methods for nonconvex problems: an approach based on the Kurdyka-Łojasiewicz inequality,” Mathematics of Operations Research, vol. 35, no. 2, pp. 438–457, 2010.
61. O. Axelsson, Iterative Solution Methods, Cambridge Univ. Press, Cambridge, UK, 1996.
62. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2010.
63. L. N. Smith and M. Elad, “Improving dictionary learning: multiple dictionary updates and coefficient reuse,” IEEE Signal Processing Letters, vol. 20, no. 1, pp. 79–82, 2013.
64. J. M. Bioucas-Dias, “A variable splitting augmented Lagrangian approach to linear spectral unmixing,” in Proceedings of the 2009 First Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, pp. 1–4, Grenoble, France, August 2009.
65. R. Zhou, G. Wang, D. Zhao, Y. Zou, and T. Zhang, “Super-resolution of low-quality images based on compressed sensing and sequence information,” in Proceedings of the 2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall), pp. 1–5, Honolulu, HI, USA, September 2019.
66. W. Li and Q. Du, “Collaborative representation for hyperspectral anomaly detection,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 3, pp. 1463–1474, 2015.
67. Y. Zhang, B. Du, L. Zhang, and S. Wang, “A low-rank and sparse matrix decomposition-based Mahalanobis distance method for hyperspectral anomaly detection,” IEEE Transactions on Geoscience and Remote Sensing, vol. 54, no. 3, pp. 1376–1389, 2016.
68. L. Wang, C.-I. Chang, L.-C. Lee et al., “Band subset selection for anomaly detection in hyperspectral imagery,” IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 9, pp. 4887–4898, 2017.
69. X. Wang, L. Wan, M. Huang, C. Shen, Z. Han, and T. Zhu, “Low-complexity channel estimation for circular and noncircular signals in virtual MIMO vehicle communication systems,” IEEE Transactions on Vehicular Technology, vol. 69, no. 4, pp. 3916–3928, 2020.
70. W. Lu, X. Xu, G. Huang et al., “Energy efficiency optimization in SWIPT enabled WSNs for smart agriculture,” IEEE Transactions on Industrial Informatics, p. 1, 2020.

Copyright © 2020 Ruofei Zhou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

