Research Article | Open Access
Ting Ge, Ning Mu, Tianming Zhan, Zhi Chen, Wanrong Gao, Shanxiang Mu, "Brain Lesion Segmentation Based on Joint Constraints of Low-Rank Representation and Sparse Representation", Computational Intelligence and Neuroscience, vol. 2019, Article ID 9378014, 11 pages, 2019. https://doi.org/10.1155/2019/9378014

Brain Lesion Segmentation Based on Joint Constraints of Low-Rank Representation and Sparse Representation

Academic Editor: Jussi Tohka
Received: 19 Jan 2019
Revised: 30 May 2019
Accepted: 09 Jun 2019
Published: 01 Jul 2019

Abstract

The segmentation of brain lesions from a brain magnetic resonance (MR) image is of great significance for clinical diagnosis and follow-up treatment. An automatic segmentation method for brain lesions is proposed based on low-rank representation (LRR) and sparse representation (SR) theory. The proposed method decomposes the brain image into a background part composed of brain tissue and a brain lesion part. Considering that each pixel in the brain tissue can be represented by the background dictionary, a low-rank representation that incorporates a sparsity-inducing regularization term is adopted to model this part. Then, the linearized alternating direction method with adaptive penalty (LADMAP) is selected to solve the model, and the brain lesions are obtained from the response of the residual matrix. The presented model not only reflects the global structure of the image but also preserves the local information of the pixels, thus improving the representation accuracy. The experimental results on data from brain tumor patients and multiple sclerosis patients revealed that the proposed method is superior to several existing methods in terms of segmentation accuracy while performing the segmentation automatically.

1. Introduction

In recent years, brain diseases have become a major threat to human health. The segmentation of brain lesions from brain images provides a valuable reference for the follow-up treatment of patients [1]. In the diagnosis of brain diseases, magnetic resonance (MR) imaging is the most commonly used imaging modality. Clinically, MR images of different sequences can be obtained by adjusting acquisition parameters so that brain diseases can be examined from multiple angles. Figure 1 shows two sets of multisequence MR images. It can be seen from these images that each sequence displays the brain lesion regions differently. As such, the complete segmentation of brain lesions from multisequence MR images has become a research hotspot in recent years.

One of the most important tasks in clinical practice is to analyse multisequence MR images and segment the brain lesions in order to calculate the shape and volume of the lesion regions. However, manual segmentation of multiple three-dimensional (3D) images by radiologists is time-consuming, and the results are generally not reproducible [2]. Therefore, automatic or semiautomatic segmentation methods for brain lesions are important. At present, image segmentation methods mainly include atlas-based methods [3–5], curve/surface evolution methods [6–8], learning-based methods [9–13], and methods based on sparse representation (SR) [14–17] and low-rank representation (LRR) [18–21]. The results of atlas-based methods depend on the registration algorithm, and, to date, there is no general registration algorithm that can accurately register a target image with the standard image. Therefore, such methods are commonly used to provide geometric priors for subsequent studies. Methods based on curve/surface evolution are slow when applied to 3D image segmentation; in addition, they involve so many parameters that there is currently no good way to balance them for different target images. Learning-based methods search for an optimal classifier by learning the features of samples, with the parameters computed by optimization rather than set manually. Moreover, most of them classify pixels using multidimensional features, so they are suitable for the segmentation of multisequence MR images. However, learning-based methods use only the features of the pixels themselves, which lack spatial correlation, and the training samples often need to be labelled manually by experts according to their clinical experience, introducing subjectivity and nonreproducibility. Although the above methods can segment the lesion regions to some extent, they require prior information about the lesions, so they are applicable only to the detection of specific brain diseases and cannot achieve batch detection of brain images or fully automatic segmentation of brain lesions.

Because of these factors and concerns, we propose a novel automatic segmentation approach for brain lesions based on the joint constraints of LRR and SR (JCLRRSR) in this paper. Since the LRR model is able to describe the global structure of the brain tissues in an image, while the SR model is good at characterizing the local information of the pixels, the proposed method can improve the representation accuracy of the image, thereby increasing the segmentation accuracy of the brain lesions.

The rest of the paper is organized into four sections. Section 2 introduces the SR model and the LRR model in brief. Section 3 presents the key schemes of the proposed JCLRRSR method for segmenting the brain lesions. The experiments and discussions on the data of patients with brain tumors and multiple sclerosis are given in Section 4. Finally, the conclusions are offered in Section 5.

2. SR and LRR Models

2.1. SR Model

The SR model was derived from the requirements of signal representation, compression, and coding and was first applied only in the field of signal processing. As images have become a principal carrier of information, the SR model has become more widely used in image processing in recent years. It not only achieves good results in classical low-level image processing problems, such as image compression, denoising, restoration, and super-resolution, but also performs satisfactorily in feature extraction, image segmentation, pattern recognition, machine learning, and related problems. In image segmentation applications, the image features extracted from the training samples are used to construct a dictionary. The dictionary is then used to approximate each testing sample, and the class of the testing sample is decided by the approximation residual in each class. Finally, the classes of all testing samples form the image segmentation result.

The basic idea of SR theory is that signals of the same class can be sparsely represented under an overcomplete dictionary [22]. The model can be expressed as follows:

$$\min_{A} \|A\|_0 \quad \text{s.t.} \quad X = DA, \tag{1}$$

where $X$ is the signal matrix; $D = [d_1, d_2, \dots, d_K]$ is the dictionary matrix of the signal; $d_i$ is a dictionary atom; and $K$ is the number of atoms in the dictionary, with $K$ larger than the signal dimension (overcomplete). In addition, $A$ is the representation coefficient matrix of the signal, and $\|\cdot\|_0$ denotes the zero norm of the matrix, that is, the number of nonzero elements in the matrix. Due to the nonconvexity of the zero norm, solving problem (1) is NP-hard. Considering that $A$ is sparse enough, existing studies have shown that convex relaxation can be used as a replacement, resulting in the following problem:

$$\min_{A} \|A\|_1 \quad \text{s.t.} \quad X = DA, \tag{2}$$

where $\|A\|_1$ denotes the $l_1$ norm of the matrix, defined as $\|A\|_1 = \sum_{i,j} |a_{ij}|$, and $a_{ij}$ is the $(i, j)$ element of $A$.
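As an illustrative sketch (not the authors' implementation), the relaxed $l_1$ problem can be solved in its unconstrained Lagrangian form by iterative soft thresholding (ISTA); the function name and parameters here are assumptions for illustration:

```python
import numpy as np

def ista_sparse_code(X, D, lam=0.1, n_iter=200):
    """Sketch of l1 sparse coding: min_A 0.5*||X - D A||_F^2 + lam*||A||_1,
    solved by iterative soft thresholding (ISTA)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        G = A - D.T @ (D @ A - X) / L      # gradient step on the quadratic term
        A = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0.0)  # soft threshold
    return A
```

Because the proximal gradient step uses the exact Lipschitz constant, each iteration does not increase the objective, so the fit improves monotonically from the all-zero initialization.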

2.2. LRR Model

With the development of representation learning theory, LRR [23, 24] has become a classic theory in the field of image processing and has also been widely used in medical image processing and research [25–27]. It aims to find the lowest-rank representation of data under an appropriate dictionary and is good at mining data dependencies across multiple subspaces. Moreover, compared with subspace recovery methods based on SR, LRR is robust to noise and conducive to describing the global structure of the data, which is often unavailable in other methods. At present, the LRR model has been widely used in video patching; face recognition; and image restoration, detection, and segmentation.

An observed signal $X$ with noise can be mapped through a low-dimensional subspace in high-dimensional space to its noise-free true-value signal $X_0$, where $X_0$ is of low rank [23]. Let $E$ be the noise signal, where $e_{ij}$ is the $(i, j)$ element of $E$; then $X = X_0 + E$. Since the noise signals usually account for a small part of the observed signals, the representation model of robust principal component analysis (RPCA) [28, 29] can be constructed as follows:

$$\min_{X_0, E} \operatorname{rank}(X_0) + \lambda \|E\|_0 \quad \text{s.t.} \quad X = X_0 + E, \tag{3}$$

where $\operatorname{rank}(\cdot)$ denotes the rank function and $\lambda$ is a coefficient to adjust the weight of the noise term. As $X_0$ is low-rank and $E$ is sparse, the following optimization problem can be obtained by relaxing problem (3) to its convex hull:

$$\min_{X_0, E} \|X_0\|_* + \lambda \|E\|_1 \quad \text{s.t.} \quad X = X_0 + E, \tag{4}$$

where $\|\cdot\|_*$ denotes the kernel (nuclear) norm of the matrix, that is, the sum of its singular values.

However, RPCA assumes that the data lie in a single low-rank subspace, which is not suitable for cases in which the data lie in multiple subspaces. Subsequent development led to the formation of LRR theory, and the relevant model is as follows:

$$\min_{Z, E} \|Z\|_* + \lambda \|E\|_{2,1} \quad \text{s.t.} \quad X = DZ + E, \tag{5}$$

where $D$ is the dictionary, $Z$ is the representation coefficient matrix, and $\|\cdot\|_{2,1}$ denotes the $l_{2,1}$ norm of the matrix, defined as

$$\|E\|_{2,1} = \sum_{j} \sqrt{\sum_{i} e_{ij}^2}. \tag{6}$$
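For concreteness, the two matrix norms appearing in the relaxed models, the kernel (nuclear) norm and the $l_{2,1}$ norm, can be computed directly with NumPy; this is an illustrative sketch, not part of the original method:

```python
import numpy as np

def nuclear_norm(M):
    # Kernel (nuclear) norm: the sum of singular values,
    # the convex surrogate of rank used in the relaxed RPCA problem.
    return np.linalg.svd(M, compute_uv=False).sum()

def l21_norm(M):
    # l2,1 norm: the sum of the l2 norms of the columns,
    # as applied to the noise term E in the LRR model.
    return np.linalg.norm(M, axis=0).sum()
```

The $l_{2,1}$ norm encourages whole columns of $E$ to vanish, which is why it models sample-specific corruption better than the elementwise $l_1$ norm.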

3. Brain Lesion Segmentation Based on JCLRRSR

3.1. Data Preprocessing

We first registered the multisequence brain MR images with the registration method in the MIPAV software and corrected the grey level of the images by the N4ITK method. We then removed the skull from the T1 sequence image and used the remaining part as a template to remove the skull from the other sequence images. After that, we adjusted the greyscale range of the images to [0, 255] by means of the following equation:

$$I_s = 255 \times \frac{I_s^0 - \min(I_s^0)}{\max(I_s^0) - \min(I_s^0)}, \tag{7}$$

where $I_s^0$ denotes the original image of sequence $s$ and $I_s$ denotes the corresponding image after preprocessing. The same preprocessing is applied to the images of the other sequences.
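The min-max rescaling step above can be sketched as follows (an illustrative implementation, with a guard for constant images added as an assumption):

```python
import numpy as np

def rescale_to_255(img):
    """Min-max rescale of one sequence image to the greyscale range [0, 255]."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # constant image: avoid division by zero
        return np.zeros_like(img)
    return 255.0 * (img - lo) / (hi - lo)
```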

3.2. Background Dictionary Construction

In order to segment the whole brain lesion regions, we regarded all of the brain tissues as background and treated brain lesions as abnormalities in the background distribution. Therefore, the background dictionary plays an important part in the proposed method and directly affects the subsequent segmentation performance. In general, the background dictionary needs to meet three requirements: first, only pixels with brain tissue features should be selected as atoms, while pixels in the lesion regions must not be chosen; second, all categories of brain tissue in the image should be included; and third, the number of atoms in the dictionary should be sufficient. Considering that the greyscale distribution of healthy brain tissue is relatively simple, the preprocessed normal human brain images were first classified, and a certain number of pixels were extracted from the white matter, grey matter, and cerebrospinal fluid, respectively, as training samples. Then, the $w \times w$ neighborhood of each training sample in each sequence image was selected and converted into a greyscale vector of length $w^2$. The greyscale vectors of the $q$ sequences were combined to form the feature vector of each training sample, of length $qw^2$. Finally, all feature vectors were combined to construct the background dictionary matrix required for the proposed method (with $S$ training samples, the size of the dictionary is $qw^2 \times S$).
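The feature and dictionary construction described above can be sketched as follows; the patch width `w`, the list of sequence images, and the helper names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def patch_feature(seqs, r, c, w=3):
    """Concatenate the w-by-w neighbourhood of pixel (r, c) across all q
    sequence images into one feature vector of length q*w*w."""
    h = w // 2
    feats = []
    for img in seqs:                              # one image per MR sequence
        patch = img[r - h:r + h + 1, c - h:c + h + 1]
        feats.append(patch.ravel())
    return np.concatenate(feats)

def build_dictionary(seqs, sample_coords, w=3):
    """Stack the feature vectors of the S training samples column-wise to
    form the background dictionary of size (q*w*w) x S."""
    cols = [patch_feature(seqs, r, c, w) for (r, c) in sample_coords]
    return np.stack(cols, axis=1)
```

In practice the sample coordinates would be drawn from white matter, grey matter, and cerebrospinal fluid as described above; border pixels would need padding, which this sketch omits.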

3.3. JCLRRSR Model

Each pixel in the brain image may correspond to one kind of brain tissue or to a mixture of several kinds of brain tissue. The greyscale features of each kind of tissue can be expressed in a certain subspace, so the greyscale features of all pixels in the image should be considered as lying in multiple subspaces. Meanwhile, the brain lesions are regarded as an abnormality within the normal brain tissue background, which exists independently outside of all these subspaces. Thus, only the pixels belonging to normal brain tissue can be represented by the background dictionary, while the pixels in lesion regions cannot be. Because of this, if we let $X$ be the high-dimensional feature matrix of the brain image to be measured, $X$ can be divided into two parts according to the LRR model (5), that is, the background part $DZ$ composed of brain tissue and the brain lesion part $E$. In model (5), $D$ is the background dictionary matrix, $Z$ is the representation coefficient matrix, and $E$ corresponds to the brain lesions.

The LRR model can effectively characterize the overall structure of the image, while the SR model is good at maintaining the local features of the pixels. Because of the complementary advantages of these two models, we introduced a sparse constraint on the matrix $Z$ into the LRR model, and a new representation model for brain lesions was proposed as follows:

$$\min_{Z, E} \|Z\|_* + \beta \|Z\|_1 + \lambda \|E\|_{2,1} \quad \text{s.t.} \quad X = DZ + E, \tag{8}$$

where $\lambda$ and $\beta$ are the coefficients that adjust the weight of the brain abnormality term and the sparse term, respectively.

By solving model (8) and obtaining the optimal solutions $Z^*$ and $E^*$, corresponding to $Z$ and $E$, respectively, the response value of the $i$-th pixel $x_i$ in $X$ belonging to the abnormal regions can be defined as follows:

$$r_i = \|e_i^*\|_2 = \sqrt{\sum_j (e_{ji}^*)^2}, \tag{9}$$

where $e_i^*$ and $e_{ji}^*$ are the $i$-th column and the $(j, i)$ element of $E^*$, respectively. If $r_i$ is greater than a predetermined threshold, $x_i$ can be determined to be a pixel within the lesion regions.
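Under these definitions, the response computation reduces to column-wise $l_2$ norms of the residual matrix followed by thresholding; a minimal sketch with illustrative names:

```python
import numpy as np

def lesion_response(E, threshold):
    """Response value r_i = ||e_i||_2 for each column of the residual matrix E;
    pixels whose response exceeds the threshold are flagged as lesion."""
    r = np.linalg.norm(E, axis=0)     # l2 norm of each column
    return r, r > threshold
```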

3.4. Model Solving

Since the alternating direction method requires two auxiliary variables and asks for a complex matrix inverse operation in each iteration, we choose the linearized alternating direction method with adaptive penalty (LADMAP) [18, 30] to solve problem (8).

To make the objective function in problem (8) separable, we introduce an auxiliary variable $J$ satisfying $J = Z$; then, we can replace the second term in the objective function with $\beta \|J\|_1$. After that, problem (8) can be converted to the following problem:

$$\min_{Z, J, E} \|Z\|_* + \beta \|J\|_1 + \lambda \|E\|_{2,1} \quad \text{s.t.} \quad X = DZ + E, \; Z = J. \tag{10}$$

The augmented Lagrange function is as follows:

$$L(Z, J, E, Y_1, Y_2, \mu) = \|Z\|_* + \beta \|J\|_1 + \lambda \|E\|_{2,1} + \langle Y_1, X - DZ - E \rangle + \langle Y_2, Z - J \rangle + \frac{\mu}{2} \left( \|X - DZ - E\|_F^2 + \|Z - J\|_F^2 \right), \tag{11}$$

where $Y_1$ and $Y_2$ are the Lagrange multipliers, $\mu$ is the penalty parameter, and $\|\cdot\|_F$ denotes the Frobenius norm.

The above multivariable optimization problem can be solved by alternately updating one variable while fixing the remaining variables. In the $k$-th iteration, problem (10) can be divided into the following three subproblems:

(1) Fix $J$ and $E$ and update $Z$; the objective function becomes

$$Z_{k+1} = \arg\min_Z \|Z\|_* + \frac{\mu_k \eta}{2} \left\| Z - Z_k + \frac{\partial_Z q(Z_k)}{\eta} \right\|_F^2, \tag{12}$$

where the quadratic term $q$ of the Lagrange function is replaced by its first-order approximation at the $k$-th step plus a proximal term [30, 31], $\partial_Z q$ is the derivative of $q$ with respect to $Z$, and $\eta \geq \|D\|_2^2 + 1$.

(2) Fix $Z$ and $E$ and update $J$; the objective function becomes

$$J_{k+1} = \arg\min_J \beta \|J\|_1 + \frac{\mu_k}{2} \left\| J - \left( Z_{k+1} + \frac{Y_{2,k}}{\mu_k} \right) \right\|_F^2. \tag{13}$$

(3) Fix $Z$ and $J$ and update $E$; the objective function becomes

$$E_{k+1} = \arg\min_E \lambda \|E\|_{2,1} + \frac{\mu_k}{2} \left\| E - \left( X - DZ_{k+1} + \frac{Y_{1,k}}{\mu_k} \right) \right\|_F^2. \tag{14}$$

The steps of LADMAP are shown in Algorithm 1, where the order of Steps 2, 3, and 4 can be exchanged, and $\Theta$ and $S$ are the singular value contraction operator [32] and the soft-threshold contraction operator [33], respectively. $\Theta_\varepsilon(\cdot)$ is defined as follows:

$$\Theta_\varepsilon(A) = U S_\varepsilon(\Sigma) V^{T}, \tag{15}$$

where $\varepsilon > 0$ and $U$, $\Sigma$, and $V$ are the matrices obtained by the SVD of $A$; that is, $A = U \Sigma V^{T}$.

Input: high-dimensional feature matrix $X$ of the brain MR image
Output: optimal solutions $Z^*$, $E^*$
Initialize: $Z_0 = J_0 = E_0 = 0$; $Y_{1,0} = Y_{2,0} = 0$; $\mu_0 > 0$; $\mu_{\max} \gg \mu_0$; $\rho > 1$; $\eta \geq \|D\|_2^2 + 1$; tolerances $\varepsilon_1, \varepsilon_2 > 0$; $k = 0$
Step 1: while $\|X - DZ_k - E_k\|_F / \|X\|_F \geq \varepsilon_1$ or $\mu_k \max(\|Z_k - Z_{k-1}\|_F, \|J_k - J_{k-1}\|_F, \|E_k - E_{k-1}\|_F) \geq \varepsilon_2$
     do
Step 2: update $Z$
   $Z_{k+1} = \Theta_{1/(\mu_k \eta)}\left(Z_k - \partial_Z q(Z_k)/\eta\right)$
Step 3: update $J$
   $J_{k+1} = S_{\beta/\mu_k}\left(Z_{k+1} + Y_{2,k}/\mu_k\right)$
Step 4: update $E$
   solve subproblem (14) column by column
Step 5: update the multipliers
   $Y_{1,k+1} = Y_{1,k} + \mu_k (X - DZ_{k+1} - E_{k+1})$
   $Y_{2,k+1} = Y_{2,k} + \mu_k (Z_{k+1} - J_{k+1})$
Step 6: update $\mu$
   $\mu_{k+1} = \min(\mu_{\max}, \rho \mu_k)$,
   where $\rho$ is the adaptive penalty growth factor
Step 7: update $k$
   $k \leftarrow k + 1$
Step 8: end while
   optimal solutions $Z^* = Z_k$, $E^* = E_k$

$S_\varepsilon(\cdot)$ is defined as follows:

$$S_\varepsilon(x) = \operatorname{sgn}(x)\max(|x| - \varepsilon, 0), \tag{16}$$

where $\varepsilon > 0$ and $\operatorname{sgn}(\cdot)$ is the sign function. When operating on a matrix or a vector, $S_\varepsilon$ operates elementwise on the matrix or vector, respectively.

According to Yang et al. [34], Step 4 can be solved as follows: let $Q = X - DZ_{k+1} + Y_{1,k}/\mu_k$; then the $i$-th column of the optimal solution $E_{k+1}$ is

$$e_i = \begin{cases} \dfrac{\|q_i\|_2 - \lambda/\mu_k}{\|q_i\|_2}\, q_i, & \|q_i\|_2 > \lambda/\mu_k, \\[2pt] 0, & \text{otherwise}, \end{cases} \tag{17}$$

where $e_i$ and $q_i$ are the $i$-th columns of the matrices $E_{k+1}$ and $Q$, respectively.
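The three contraction operators used in Algorithm 1 (the soft-threshold operator $S$, the singular value contraction operator $\Theta$, and the column-wise $l_{2,1}$ shrinkage of Yang et al.) can be sketched in NumPy as follows; this is an illustrative implementation, not the authors' code:

```python
import numpy as np

def soft_threshold(M, eps):
    # Elementwise shrinkage S_eps used for the l1 subproblem (the J step)
    return np.sign(M) * np.maximum(np.abs(M) - eps, 0.0)

def svt(M, eps):
    # Singular value contraction Theta_eps for the nuclear-norm step on Z:
    # shrink the singular values, then recompose
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - eps, 0.0)) @ Vt

def l21_shrink(Q, eps):
    # Column-wise shrinkage solving the l2,1 subproblem (the E step):
    # columns with norm below eps are zeroed, the rest are scaled down
    norms = np.linalg.norm(Q, axis=0)
    scale = np.maximum(norms - eps, 0.0) / np.where(norms > 0, norms, 1.0)
    return Q * scale
```

Each operator is the exact proximal map of the corresponding norm, which is what makes the alternating updates in Algorithm 1 cheap: every subproblem has a closed-form solution.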

In summary, the general algorithm of the proposed method in this paper is given in Algorithm 2.

Input: multisequence MR images $I_1, I_2, \dots, I_q$, where $q$ is the number of sequences
Output: lesion region marker
Step 1: multisequence image fusion; establish the feature vector of each pixel and construct the high-dimensional feature matrix $X$
Step 2: construct the dictionary $D$ using the method in Section 3.2
Step 3: solve model (8) according to Algorithm 1 and obtain the optimal solutions $Z^*$ and $E^*$
Step 4: calculate the response value $r_i$ of each pixel according to equation (9)
Step 5: extract the brain lesions by thresholding the response values

4. Experiments and Discussions

4.1. Experimental Data

To evaluate the effectiveness of the JCLRRSR, we performed experiments on the data of two groups of patients with brain diseases. The first dataset consists of the multisequence MR images of patients with brain tumors, provided by the MICCAI 2012 Brain Tumor Segmentation Challenge (BraTS 2012). There are 25 patients' MR data, and each patient's data include four sequences of MR images, namely T1, T2, FLAIR, and T1-enhancement, as well as the ground-truth labels of the brain tumor regions and edema regions. The image size is 240 × 240 × 155 and the resolution is 1 × 1 × 1 mm. In the experimental comparison in this section, both tumor and edema were regarded as brain lesions. The second dataset consists of the multisequence MR images of patients with multiple sclerosis, provided by the ACCORD-MIND database. There are 50 patients' MR data, and each patient's data include four sequences of MR images, namely T1, T2, PD, and FLAIR, as well as the lesion regions labelled manually by radiologists. The image size is 256 × 256 × 46 and the resolution is 0.95 × 0.95 × 3 mm. Because of imperfect preprocessing, the JCLRRSR will segment not only the brain lesions but also any skull that has not been removed completely. To this end, we postprocessed the segmentation results, retaining only the part belonging to the brain tissue and removing the rest. In the analysis of the experimental results, the Dice score was adopted to verify the accuracy of the segmentation.
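The Dice score used above measures the overlap between a binary segmentation and the ground truth as $2|A \cap B| / (|A| + |B|)$; a minimal sketch:

```python
import numpy as np

def dice_score(seg, gt):
    """Dice score between a binary segmentation and the ground truth:
    2 * |intersection| / (|seg| + |gt|)."""
    seg = seg.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    denom = seg.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0   # both empty: perfect match
```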

4.2. Number of Training Samples

The number of training samples in the dictionary is a key factor in the JCLRRSR. More training samples means there are more normal brain tissue samples and so the segmentation results will be better, but the computational efficiency will be lower. Conversely, the fewer the training samples, the higher the model computational efficiency but the lower the segmentation accuracy. Therefore, a clear trade-off between segmentation accuracy and computational efficiency exists. Because only the samples belonging to normal brain tissue, such as white matter, grey matter, and cerebrospinal fluid, are needed in the background dictionary and the greyscale characteristics of the three types of brain tissue are relatively close, the segmentation accuracy will reach a stable state when the training sample capacity reaches a certain number. It can be seen from the brain image that the area of white matter is larger than those of grey matter and cerebrospinal fluid. Therefore, the number of training samples we selected from the white matter was three times the number of selections made from the other two tissues. Figure 2 shows the relationship between the total number of training samples and the segmentation accuracy of brain tumor and multiple sclerosis lesions. It can be seen from the figure that, with the increase in the number of training samples, the segmentation accuracy also increases and that when the number increases to a certain extent, the segmentation accuracy reaches a stable state. In addition, the total number of training samples needed in brain tumor segmentation is less than that in multiple sclerosis injury region segmentation, which is mainly due to the fact that the brain tumor occupies a much larger area than the multiple sclerosis injury regions. In order to balance the segmentation efficiency, the total number of training samples was set to 500 when segmenting the brain tumor and 2000 when segmenting multiple sclerosis lesions in the experiment.

4.3. Size of Neighborhood

When constructing the high-dimensional features of the pixels, we transformed the neighborhood of each pixel into a vector and then merged the vectors from the different sequence images. When the image block is too large, the categories it covers will be inconsistent and the extracted features cannot represent the current pixel well. Conversely, when the image block is too small, there are fewer features and the discrimination between different pixels is insufficient. Therefore, the size of the image block is another key factor in the JCLRRSR. Figure 3 shows the effect of the neighborhood size on the segmentation accuracy of the brain tumor and the multiple sclerosis lesions. It can be seen from the figure that when the neighborhood size is set to , the segmentation accuracy of both the brain tumor and the multiple sclerosis lesions is optimal. The reason for this may be that the grey matter and the cerebrospinal fluid both present an elongated structure in the brain image. When the image block is too large, the central pixel and other pixels in the image block will belong to different brain tissue types. This affects the accuracy of the feature extraction and hence the final segmentation.

4.4. Parameter Settings

There are two parameters, $\lambda$ and $\beta$, involved in the JCLRRSR. Figure 4 shows the effects of $\lambda$ and $\beta$ on the segmentation accuracy of the brain tumor and the multiple sclerosis injury regions, where $\lambda$ takes its value in {0.001, 0.005, 0.01, 0.05, 0.1, 0.5} and $\beta$ takes its value in {0.001, 0.01, 0.05, 0.1, 0.5, 1}. As can be seen from the figure, the algorithm is greatly affected by both parameters for the brain tumor data and the multiple sclerosis data. This is mainly because, although the brain tumor area is much larger than the multiple sclerosis injury regions, it contains multiple subclasses, such as tumor and edema, and there are differences in the characteristics of the pixels in these regions. In this experiment, the values of $\lambda$ and $\beta$ were established separately for the brain tumor data and the multiple sclerosis data.

4.5. Lesion Segmentation Results

Figures 5 and 6 show the segmentation results for the brain tumor and the multiple sclerosis injury regions, respectively. In the segmentation of the brain tumor, since the lesion regions in the image include both the brain tumor and the edema around it, the JCLRRSR detects them as a whole. If a subsequent quantitative analysis of the brain tumor is required, the results can be processed further. The figures show that the segmentation results obtained by the JCLRRSR are close to the ground-truth results, which therefore meet clinical needs. For a better comparative analysis, different data subjects and different numbers of training samples were used to test several segmentation algorithms. In the brain tumor dataset, the samples are divided into high-grade gliomas (HGG) and low-grade gliomas (LGG) according to the degree of tumor malignancy. Separately, in the multiple sclerosis dataset, the samples are divided into big multiple sclerosis lesions (BMSL) and small multiple sclerosis lesions (SMSL) according to the size of the lesions. From Table 1, we can see that the average accuracies of SRD, LRR, and the proposed JCLRRSR method on the different datasets and subjects correlate strongly with the number of training samples, whereas the Global-RX method is not sensitive to the number of training samples. In general, these methods achieve better accuracy on HGG and BMSL because of the large targets present in these two subjects. Besides this, the JCLRRSR method achieves the best segmentation accuracy across the different datasets and subjects. This comparison demonstrates the superiority of the proposed method on multisequence MR images.


Table 1: Segmentation accuracy (Dice score) of the compared methods.

Lesions             Subjects   No. of training samples   Global-RX [35]   SRD [36]   LRR      JCLRRSR
Brain tumor         HGG        200                       0.7542           0.6803     0.7016   0.7624
                               500                       0.7634           0.8153     0.8565   0.9175
                               800                       0.7689           0.8209     0.8602   0.9213
                    LGG        200                       0.7227           0.5422     0.6014   0.6624
                               500                       0.7272           0.7903     0.8325   0.8951
                               800                       0.7305           0.8023     0.8412   0.9031
                    Total      200                       0.7384           0.6112     0.6515   0.7148
                               500                       0.7503           0.8028     0.8445   0.9063
                               800                       0.7497           0.8116     0.8507   0.9122
Multiple sclerosis  BMSL       1000                      0.6674           0.5213     0.5641   0.6425
                               2000                      0.6846           0.7235     0.7637   0.8026
                               3000                      0.6855           0.7321     0.7732   0.8242
                    SMSL       1000                      0.5326           0.4865     0.5245   0.6057
                               2000                      0.5578           0.6395     0.6835   0.7344
                               3000                      0.5587           0.6400     0.6910   0.7356
                    Total      1000                      0.6000           0.5039     0.5443   0.6241
                               2000                      0.6212           0.6815     0.7236   0.7685
                               3000                      0.6221           0.6860     0.7321   0.7799

5. Conclusions

This paper presents an improved segmentation method for brain lesions. The multisequence MR images were first fused to form a high-dimensional feature matrix, during which the neighborhood information was incorporated into the high-dimensional features of each pixel. Then, according to the proposed JCLRRSR model, the image feature matrix was decomposed and modeled under the joint constraints of LRR and SR. The model not only reflects the global structure of the image but also maintains the local information of the pixels, thus improving the decomposition accuracy. Finally, considering computational efficiency, LADMAP was selected to solve the model, and the brain lesions were then segmented. The setting of the neighborhood size, the number of training samples, and the values of the parameters $\lambda$ and $\beta$ involved in the model were discussed in detail in Section 4. To verify the effectiveness of the JCLRRSR approach, experiments were carried out on the brain tumor data and the multiple sclerosis data. The experimental results revealed that the JCLRRSR can not only segment brain lesions automatically but also offers clear advantages in segmentation accuracy over the other existing methods compared.

Data Availability

The two sets of data used to support the findings of this study are both from open datasets. One is from the MICCAI BraTS Challenge 2012 (http://www2.imm.dtu.dk/projects/BRATS2012/data.html). The other is from the ACCORDION MIND database (https://clinicaltrials.gov/ct2/show/NCT00182910).

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This study was funded by the National Natural Science Foundation of China (nos. 61275198 and 60978069).

References

  1. S. Saritha and N. Amutha Prabha, “A comprehensive review: segmentation of MRI images-brain tumor,” International Journal of Imaging Systems and Technology, vol. 26, no. 4, pp. 295–304, 2016.
  2. E.-S. A. El-Dahshan, H. M. Mohsen, K. Revett, and A.-B. M. Salem, “Computer-aided diagnosis of human brain tumor through MRI: a survey and a new algorithm,” Expert Systems with Applications, vol. 41, no. 11, pp. 5526–5545, 2014.
  3. M. Saii and Z. Kraitem, “Automatic brain tumor detection in MRI using image processing techniques,” Biomedical Statistics and Informatics, vol. 2, no. 2, pp. 73–76, 2017.
  4. N. Cordier, H. Delingette, and N. Ayache, “A patch-based approach for the segmentation of pathologies: application to glioma labelling,” IEEE Transactions on Medical Imaging, vol. 35, no. 4, pp. 1066–1076, 2016.
  5. N. Cordier, “Multi-atlas patch-based segmentation and synthesis of brain tumor MR images,” Synfacts, vol. 10, no. 10, p. 1012, 2015.
  6. I. Zabir, S. Paul, M. A. Rayhan et al., “Automatic brain tumor detection and segmentation from multi-modal MRI images based on region growing and level set evolution,” in Proceedings of the IEEE International WIE Conference on Electrical and Computer Engineering, pp. 503–506, Dhaka, Bangladesh, 2016.
  7. M. Dawngliana, D. Deb, M. Handique et al., “Automatic brain tumor segmentation in MRI: hybridized multilevel thresholding and level set,” in Proceedings of the IEEE International Symposium on Advanced Computing and Communication, pp. 219–223, Silchar, India, 2015.
  8. E. Ilunga-Mbuyamba, J. G. Avina-Cervantes, J. Cepeda-Negrete et al., “Automatic selection of localized region-based active contour models using image content analysis applied to brain tumor segmentation,” Computers in Biology and Medicine, vol. 91, no. 1, pp. 69–79, 2017.
  9. M. Havaei, A. Davy, D. Warde-Farley et al., “Brain tumor segmentation with deep neural networks,” Medical Image Analysis, vol. 35, pp. 18–31, 2017.
  10. X. Chen, B. P. Nguyen, C. K. Chui et al., “Automated brain tumor segmentation using kernel dictionary learning and superpixel-level features,” in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, pp. 002547–002552, Budapest, Hungary, 2016.
  11. I. Ali, D. Cem, and S. Melike, “Review of MRI-based brain tumor image segmentation using deep learning methods,” Procedia Computer Science, vol. 102, pp. 317–324, 2016.
  12. N. Boughattas, M. Berar, K. Hamrouni et al., “Feature selection and classification using multiple kernel learning for brain tumor segmentation,” in Proceedings of the 2018 4th International Conference on Advanced Technologies for Signal and Image Processing, pp. 1–5, Sousse, Tunisia, March 2018.
  13. T. Ge, N. Mu, and L. Li, “A brain tumor segmentation method based on softmax regression and graph cut,” Acta Electronica Sinica, vol. 45, no. 3, pp. 644–649, 2017.
  14. Y. Li, F. Jia, and J. Qin, “Brain tumor segmentation from multimodal magnetic resonance images via sparse representation,” Artificial Intelligence in Medicine, vol. 73, pp. 1–13, 2016.
  15. X. Chen, B. P. Nguyen, C. K. Chui et al., “Reworking multilabel brain tumor segmentation: an automated framework using structured kernel sparse representation,” IEEE Systems Man and Cybernetics Magazine, vol. 3, no. 2, pp. 18–22, 2017.
  16. J. J. Tong, P. Zhang, Y. X. Weng et al., “Kernel sparse representation for MRI image analysis in automatic brain tumor segmentation,” Frontiers of Information Technology and Electronic Engineering, vol. 19, no. 4, pp. 471–480, 2018.
  17. G. Wu, Y. Chen, Y. Wang et al., “Sparse representation-based radiomics for the diagnosis of brain tumors,” IEEE Transactions on Medical Imaging, vol. 37, no. 4, pp. 893–905, 2018.
  18. L. Dai, J. Ding, J. Chen et al., “Object segmentation using low-rank representation with multiple block-diagonal priors,” in Proceedings of the 2016 23rd International Conference on Pattern Recognition, pp. 1959–1964, Cancun, Mexico, December 2016.
  19. L. Wei, X. Wang, A. Wu et al., “Robust subspace segmentation by self-representation constrained low-rank representation,” Neural Processing Letters, vol. 48, no. 3, pp. 1671–1691, 2018.
  20. J. Ma, J. Jiang, and C. Li, “Hyperspectral image denoising with segmentation-based low rank representation,” in Proceedings of the 2016 Visual Communications and Image Processing, pp. 1–4, Chengdu, China, November 2016.
  21. K. Tang, Z. Su, W. Jiang et al., “Robust subspace learning-based low-rank representation for manifold clustering,” Neural Computing and Applications, pp. 1–13, 2018.
  22. E. J. Candes, X. Li, Y. Ma et al., “Robust principal component analysis?” Journal of the ACM, vol. 58, no. 3, pp. 1–37, 2011.
  23. G. Liu, Z. Lin, S. Yan et al., “Robust recovery of subspace structures by low-rank representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 171–184, 2013.
  24. Z. C. Lin, “A review on low-rank models in data analysis,” Big Data & Information Analytics, vol. 1, no. 2/3, pp. 139–161, 2016.
  25. F. Shi, J. Cheng, L. Wang et al., “LRTV: MR image super-resolution with low-rank and total variation regularizations,” IEEE Transactions on Medical Imaging, vol. 34, no. 12, pp. 2459–2466, 2015.
  26. S. H. Baete, J. Y. Chen, Y. C. Lin et al., “Low rank plus sparse decomposition of ODFs for improved detection of group-level differences and variable correlations in white matter,” NeuroImage, vol. 174, pp. 138–152, 2018.
  27. R. Liu, H. Nejati, and N. M. Cheung, “Joint estimation of low-rank components and connectivity graph in high-dimensional graph signals: application to brain imaging,” 2018, http://arxiv.org/abs/1801.02303.
  28. N. Vaswani, T. Bouwmans, S. Javed et al., “Robust subspace learning: robust PCA, robust subspace tracking, and robust subspace recovery,” IEEE Signal Processing Magazine, vol. 35, no. 4, pp. 32–55, 2018.
  29. J. Wright, A. Ganesh, S. Rao et al., “Robust principal component analysis: exact recovery of corrupted low-rank matrices via convex optimization,” Advances in Neural Information Processing Systems, vol. 22, pp. 2080–2088, 2009.
  30. Z. Lin, R. Liu, and Z. Su, “Linearized alternating direction method with adaptive penalty for low-rank representation,” Advances in Neural Information Processing Systems, vol. 24, pp. 612–620, 2011.
  31. L. Zhuang, H. Gao, Z. Lin et al., “Non-negative low rank and sparse graph for semi-supervised learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2328–2335, Providence, RI, USA, June 2012.
  32. J. Cai, E. J. Candès, and Z. Shen, “A singular value thresholding algorithm for matrix completion,” SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2010.
  33. Z. Lin, M. Chen, and Y. Ma, “The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices,” 2010, http://arxiv.org/abs/1009.5055.
  34. J. Yang, W. Yin, Y. Zhang et al., “A fast algorithm for edge-preserving variational multichannel image restoration,” SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 569–592, 2009.
  35. I. S. Reed and X. Yu, “Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution,” IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 38, no. 10, pp. 1760–1770, 1990.
  36. W. Li and Q. Du, “Collaborative representation for hyperspectral anomaly detection,” IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 3, pp. 1463–1474, 2015.

Copyright © 2019 Ting Ge et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

