Research Article  Open Access
Ting Ge, Ning Mu, Tianming Zhan, Zhi Chen, Wanrong Gao, Shanxiang Mu, "Brain Lesion Segmentation Based on Joint Constraints of Low-Rank Representation and Sparse Representation", Computational Intelligence and Neuroscience, vol. 2019, Article ID 9378014, 11 pages, 2019. https://doi.org/10.1155/2019/9378014
Brain Lesion Segmentation Based on Joint Constraints of Low-Rank Representation and Sparse Representation
Abstract
The segmentation of brain lesions from a brain magnetic resonance (MR) image is of great significance for clinical diagnosis and follow-up treatment. An automatic segmentation method for brain lesions is proposed based on low-rank representation (LRR) and sparse representation (SR) theory. The proposed method decomposes the brain image into a background part composed of brain tissue and a brain lesion part. Considering that each pixel in the brain tissue can be represented by the background dictionary, a low-rank representation that incorporates a sparsity-inducing regularization term is adopted to model the background part. Then, the linearized alternating direction method with adaptive penalty (LADMAP) is selected to solve the model, and the brain lesions can be obtained from the response of the residual matrix. The presented model not only reflects the global structure of the image but also preserves the local information of the pixels, thus improving the representation accuracy. Experimental results on data from brain tumor patients and multiple sclerosis patients reveal that the proposed method is superior to several existing methods in terms of segmentation accuracy while realizing the segmentation automatically.
1. Introduction
In recent years, brain diseases have become some of the most serious threats to human health. The segmentation of brain lesions from brain images provides a valuable reference for the follow-up treatment of patients [1]. In the diagnosis of brain diseases, magnetic resonance (MR) imaging is the most commonly used imaging modality. Clinically, MR images of different sequences can be obtained by adjusting the imaging parameters, so that brain diseases can be examined from multiple angles. Figure 1 shows two sets of multisequence MR images. It can be seen from these images that each sequence displays the brain lesion regions differently. As such, the complete segmentation of brain lesions from multisequence MR images has recently become a research hotspot.
One of the most important tasks in clinical practice is to analyse multisequence MR images and segment the brain lesions in order to calculate the shape and volume of the lesion regions. However, having radiologists segment multiple three-dimensional (3D) images manually is time-consuming, and the segmentation results are generally not repeatable [2]. Therefore, automatic or semiautomatic segmentation methods for brain lesions are important. At present, image segmentation methods mainly include atlas-based methods [3–5], curve/surface evolution methods [6–8], learning-based methods [9–13], and methods based on sparse representation (SR) [14–17] and low-rank representation (LRR) [18–21]. The results of the atlas-based methods depend on the registration algorithm, and, to date, there is no general registration algorithm that can register the target image with the standard image accurately; therefore, such methods are commonly used to provide geometric priors for subsequent studies. The methods based on curve/surface evolution are slow when applied to 3D image segmentation; in addition, they involve so many parameters that there is currently no good way to balance them across different target images. The learning-based methods mainly search for the optimal classifier by learning the features of samples, in which the parameters are calculated by optimization methods without manual settings. Moreover, most of them classify the pixels by using multidimensional features, so they are suitable for the segmentation of multisequence MR images. However, the learning-based methods use only the features of the pixels themselves, which lack spatial correlation, and the training samples often need to be labelled manually by experts according to their clinical experience, introducing subjectivity and non-repeatability. Although the above methods can segment the lesion regions to some extent, they require prior information about the lesions to be known in advance.
Therefore, they are only applicable to the detection of certain brain diseases, and the goals of batch detection of brain images and fully automatic segmentation of brain lesions cannot be achieved.
Motivated by these limitations, we propose a novel automatic segmentation approach for brain lesions based on the joint constraints of LRR and SR (JCLRRSR). Since the LRR model is able to describe the overall structure of the brain tissues in the image, while the SR model is good at characterizing the local information of the pixels, the proposed method improves the representation accuracy of the image and thereby the segmentation accuracy of the brain lesions.
The rest of the paper is organized as follows. Section 2 briefly introduces the SR model and the LRR model. Section 3 presents the key schemes of the proposed JCLRRSR method for segmenting brain lesions. The experiments and discussions on the data of patients with brain tumors and multiple sclerosis are given in Section 4. Finally, conclusions are offered in Section 5.
2. SR and LRR Models
2.1. SR Model
The SR model was derived from the requirements of signal representation, compression, and coding and was first applied only in the field of signal processing. As images have become a main carrier of information, the SR model has become more widely used in image processing in recent years. It not only achieves good results in classical low-level image processing problems, such as image compression, denoising, restoration, and super-resolution, but also performs satisfactorily in feature extraction, image segmentation, pattern recognition, machine learning, and other problems. In image segmentation applications, the image features extracted from the training samples are used to construct a dictionary. The dictionary is then used to approximate each testing sample, and the class of the testing sample is decided by the approximation residual in each class. Finally, the classes of all testing samples form the image segmentation result.
The basic idea of SR theory is that signals of the same class can be sparsely represented under an overcomplete dictionary [22]. The model can be expressed as follows:

min_A ||A||_0  s.t.  X = DA,  (1)

where X = [x_1, x_2, ..., x_n] is the signal matrix; D = [d_1, d_2, ..., d_K] is the dictionary matrix of the signal; d_k is a dictionary atom; and K is the number of atoms in the dictionary, with K larger than the signal dimension so that D is overcomplete. In addition, A is the representation coefficient matrix of the signal, and ||·||_0 denotes the zero norm of the matrix, that is, the number of nonzero elements in the matrix. Due to the nonconvexity of the zero norm, solving problem (1) is NP-hard. Considering that A is sparse enough, existing studies have shown that convex relaxation can be used to replace the zero norm, resulting in the following problem:

min_A ||A||_1  s.t.  X = DA,  (2)

where ||·||_1 denotes the l_1 norm of the matrix, defined as ||A||_1 = Σ_{i,j} |a_{ij}|, and a_{ij} is the (i, j) element of A.
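As a concrete illustration, the l_1-relaxed problem above can be solved in its unconstrained (lasso) form by iterative soft thresholding (ISTA). The following is a minimal numerical sketch; the dictionary, signal, and parameter values here are illustrative choices of ours, not from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Element-wise soft thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista_sparse_code(D, x, lam=0.01, n_iter=2000):
    """Solve min_a 0.5*||x - D a||_2^2 + lam*||a||_1 by iterative
    soft thresholding (ISTA), an unconstrained form of problem (2)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the data-fit term
        a = soft_threshold(a - grad / L, lam / L)
    return a

# Illustrative example: x is built from exactly two dictionary atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
a_true = np.zeros(50)
a_true[[3, 17]] = [1.5, -2.0]
x = D @ a_true
a_hat = ista_sparse_code(D, x)
```

With a small regularization weight, the recovered coefficient vector concentrates on the two atoms actually used to build the signal, which is the sparsity property the segmentation method relies on.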
2.2. LRR Model
With the development of representation learning theory, LRR [23, 24] has become a classic theory in the field of image processing and has also been widely used in medical image processing and research [25–27]. It aims to find the lowest-rank representation of data under an appropriate dictionary and is good at mining data dependencies across multiple subspaces. Moreover, compared with subspace recovery methods based on SR, LRR is robust to noise and conducive to describing the global structure of the data, which is often unavailable in other methods. At present, the LRR model has been widely used in video patching; face recognition; and image restoration, detection, and segmentation.
For the observed signal X with noise, it can be mapped through a low-dimensional subspace in high-dimensional space to its true-value signal Z without noise, where Z is of low rank [23]. Let E be the noise signal, where e_{ij} is the (i, j) element of E, and then X = Z + E. Since the noise signals usually account for a small part of the observed signals, the representation model of robust principal component analysis (RPCA) [28, 29] can be constructed as follows:

min_{Z,E} rank(Z) + λ||E||_0  s.t.  X = Z + E,  (3)

where rank(·) denotes the rank function and λ is a coefficient to adjust the weight of the noise term. As Z is low-rank and E is sparse, the following optimization problem can be obtained by relaxing problem (3) to its convex hull:

min_{Z,E} ||Z||_* + λ||E||_1  s.t.  X = Z + E,  (4)

where ||·||_* denotes the nuclear norm of the matrix, that is, the sum of its singular values.
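The convex relaxation (4) can be solved by alternating singular value thresholding and soft thresholding within an inexact augmented Lagrangian scheme. The sketch below follows common practice for RPCA solvers; the parameter choices and synthetic data are ours, not from the paper.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Element-wise soft thresholding: the proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(X, lam=None, n_iter=100):
    """Split X into low-rank Z and sparse E by solving problem (4) with an
    inexact augmented Lagrangian / alternating-minimization scheme."""
    m, n = X.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(X, 2)          # common initial penalty choice
    Z = np.zeros_like(X); E = np.zeros_like(X); Y = np.zeros_like(X)
    for _ in range(n_iter):
        Z = svt(X - E + Y / mu, 1.0 / mu)     # low-rank update
        E = soft(X - Z + Y / mu, lam / mu)    # sparse update
        Y = Y + mu * (X - Z - E)              # multiplier update
        mu = min(mu * 1.5, 1e7)               # adaptive penalty growth
    return Z, E

# Illustrative example: a rank-1 matrix corrupted by a few large spikes.
rng = np.random.default_rng(1)
u = rng.standard_normal((30, 1)); v = rng.standard_normal((1, 30))
L0 = u @ v                                     # low-rank part
S0 = np.zeros((30, 30))
idx = rng.choice(900, size=15, replace=False)
S0.flat[idx] = 10.0                            # sparse corruption
Z_hat, E_hat = rpca(L0 + S0)
```

On this synthetic example, the decomposition separates the rank-1 component from the sparse spikes almost exactly, which is the behavior model (4) is designed to deliver.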
However, RPCA assumes that the data lie in a single low-rank subspace, which is not suitable for cases in which the data lie in multiple subspaces. Subsequent development led to the formation of LRR theory, and the relevant model is as follows:

min_{Z,E} ||Z||_* + λ||E||_{2,1}  s.t.  X = DZ + E,  (5)

where D is a dictionary and ||·||_{2,1} denotes the l_{2,1} norm of the matrix, defined as

||E||_{2,1} = Σ_j sqrt(Σ_i (e_{ij})²),  (6)

that is, the sum of the l_2 norms of the columns of E.
3. Brain Lesion Segmentation Based on JCLRRSR
3.1. Data Preprocessing
We first registered the multisequence brain MR images with the registration method in the MIPAV software and corrected the grey-level inhomogeneity of the images with the N4ITK method. We then removed the skull from the T1 sequence image and used the remaining part as a template to remove the skull from the other sequence images. After that, we adjusted the greyscale range of the images to [0, 255] by means of the following linear rescaling:

I'_s = 255 · (I_s − min(I_s)) / (max(I_s) − min(I_s)),  (7)

where I_s denotes the original image of sequence s and I'_s denotes the corresponding image after preprocessing. A similar preprocessing is applied to the images of the other sequences.
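The min-max rescaling of equation (7) can be sketched in a few lines; the function name and the tiny example image are ours.

```python
import numpy as np

def rescale_to_255(img):
    """Linear grey-level rescaling of one sequence image to [0, 255],
    matching the min-max normalization of equation (7)."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # constant image: avoid division by zero
        return np.zeros_like(img)
    return 255.0 * (img - lo) / (hi - lo)

# Illustrative example on a tiny 2 x 2 "image".
img = np.array([[2.0, 4.0],
                [6.0, 10.0]])
out = rescale_to_255(img)
```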
3.2. Background Dictionary Construction
In order to segment the whole brain lesion regions, we regarded all of the brain tissues as background and treated the brain lesions as abnormalities in the background distribution. Therefore, the background dictionary plays an important part in the proposed method and directly affects the subsequent segmentation performance for the brain lesions. In general, the background dictionary needs to meet three requirements: first, only pixels with brain tissue features should be selected as atoms, while pixels in the lesion regions must not be chosen; second, all categories of brain tissue in the image should be included; and third, the number of atoms in the dictionary should be sufficient. Considering that the greyscale distribution of healthy brain tissue is relatively simple, the preprocessed normal human brain images were first classified, and a certain number of pixels were extracted from the white matter, grey matter, and cerebrospinal fluid, respectively, as training samples. Then, the w × w neighborhood of each training sample in each sequence image was selected and converted into a greyscale vector of length w². The greyscale vectors from the K sequences were combined to form the feature vector of each training sample (length w²K). At last, all feature vectors were combined as columns to construct the background dictionary matrix D required for the proposed method: with S training samples, the size of the dictionary is w²K × S.
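The dictionary construction above can be sketched as follows. The helper names and the synthetic "sequences" are ours; interior pixels are assumed so that each neighborhood fits inside the image.

```python
import numpy as np

def patch_feature(volumes, r, c, w=3):
    """Feature vector of pixel (r, c): its w x w neighborhood from each of the
    K sequence images, flattened and concatenated (length w*w*K).
    Assumes (r, c) is an interior pixel so the neighborhood fits in the image."""
    h = w // 2
    return np.concatenate(
        [seq[r - h:r + h + 1, c - h:c + h + 1].ravel() for seq in volumes])

def build_dictionary(volumes, sample_coords, w=3):
    """Background dictionary: one column per training pixel, size (w*w*K, S)."""
    return np.stack([patch_feature(volumes, r, c, w) for r, c in sample_coords],
                    axis=1)

# Illustrative example: two synthetic "sequences" and two training pixels.
seq1 = np.arange(100, dtype=float).reshape(10, 10)
seq2 = seq1 * 2.0
D = build_dictionary([seq1, seq2], [(5, 5), (2, 3)], w=3)
```

With w = 3 and K = 2 sequences, each dictionary column has length 18, matching the w²K dimension described in the text.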
3.3. JCLRRSR Model
Each pixel in the brain image may correspond to one kind of brain tissue or to a mixture of several kinds of brain tissue. The greyscale features of each kind of tissue can be expressed in a certain subspace, and the greyscale features of all pixels in the image should be considered in multiple subspaces. Meanwhile, the brain lesions are regarded as an abnormal form within the normal brain tissue background, which exists independently outside of all subspaces. Thus, only the pixels belonging to normal brain tissue can be represented by the background dictionary, while the pixels in lesion regions cannot be. Because of this, if we let X be the high-dimensional feature matrix of the brain image to be measured, X can be divided into two parts according to the LRR model (5), that is, the background part DZ composed of brain tissue and the brain lesion part E. In model (5), D is the background dictionary matrix, Z is the representation coefficient matrix, and E corresponds to the brain lesions.
The LRR model can effectively characterize the overall structure of the image, while the SR model is good at maintaining the local features of the pixels. Because of the complementary advantages of these two models, we introduced a sparse constraint on the matrix Z into the LRR model, and a new representation model for brain lesions was proposed as follows:

min_{Z,E} ||Z||_* + β||Z||_1 + λ||E||_{2,1}  s.t.  X = DZ + E,  (8)

where λ and β are the coefficients to adjust the weights of the brain abnormalities and the sparse term, respectively.
By solving model (8) and obtaining the optimal solutions Z* and E* corresponding to Z and E, respectively, the response value of the i-th pixel in X belonging to the abnormal regions can be defined as follows:

r(i) = ||e*_i||_2 = sqrt(Σ_j (e*_{ji})²),  (9)

where e*_i and e*_{ji} are the i-th column and the (j, i) element of E*, respectively. If r(i) is greater than a predetermined threshold, the i-th pixel can be determined to lie within the lesion regions.
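The response and the threshold decision can be sketched directly from the residual matrix; the function names and the toy residual are ours.

```python
import numpy as np

def lesion_response(E):
    """Response value of each pixel: the l2 norm of the corresponding
    column of the residual matrix E, as in equation (9)."""
    return np.linalg.norm(E, axis=0)

def lesion_mask(E, threshold):
    """Binary decision: pixels whose response exceeds the threshold
    are marked as lesion pixels."""
    return lesion_response(E) > threshold

# Illustrative example: column 0 carries a large residual, column 1 does not.
E = np.array([[3.0, 0.0],
              [4.0, 0.1]])
```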
3.4. Model Solving
Since the standard alternating direction method requires two auxiliary variables and a costly matrix inversion in each iteration, we chose the linearized alternating direction method with adaptive penalty (LADMAP) [18, 30] to solve problem (8).
To make the objective function in problem (8) separable, we introduced an auxiliary variable J which satisfies J = Z; then, we can replace the second term ||Z||_1 in the objective function with ||J||_1. After that, problem (8) can be converted to the following problem:

min_{Z,J,E} ||Z||_* + β||J||_1 + λ||E||_{2,1}  s.t.  X = DZ + E,  Z = J.  (10)
The augmented Lagrangian function is as follows:

L(Z, J, E, Y_1, Y_2) = ||Z||_* + β||J||_1 + λ||E||_{2,1} + ⟨Y_1, DZ + E − X⟩ + ⟨Y_2, Z − J⟩ + (μ/2)(||DZ + E − X||_F² + ||Z − J||_F²),  (11)

where Y_1 and Y_2 are the Lagrange multipliers, μ > 0 is the penalty parameter, and ⟨·,·⟩ denotes the matrix inner product.
The above multivariable optimization problem can be solved by alternately updating one variable while fixing the remaining variables. In the k-th iteration, problem (10) can be divided into the following three subproblems:
(1) Fix J and E and update Z, and the objective function becomes

Z_{k+1} = argmin_Z ||Z||_* + (μη/2) ||Z − Z_k + ∇_Z q(Z_k)/(μη)||_F²,  (12)

where the quadratic term q of the augmented Lagrangian is replaced by its first-order approximation at Z_k plus a proximal term [30, 31], ∇_Z q(Z_k) is the derivative of q with respect to Z at Z_k, and η > ||D||² + 1.
(2) Fix Z and E and update J, and the objective function becomes

J_{k+1} = argmin_J β||J||_1 + (μ/2) ||J − (Z_{k+1} + Y_2/μ)||_F².  (13)

(3) Fix Z and J and update E, and the objective function becomes

E_{k+1} = argmin_E λ||E||_{2,1} + (μ/2) ||E − (X − DZ_{k+1} − Y_1/μ)||_F².  (14)
The steps of LADMAP are shown in Algorithm 1, where the order of step 2, step 3, and step 4 can be exchanged, and Θ and S are the singular value contraction operator [32] and the soft-threshold contraction operator [33], respectively. Θ is defined as follows:

Θ_τ(Y) = U S_τ(Σ) V^T,  (15)

where τ > 0, and U, Σ, and V are the matrices obtained by the singular value decomposition (SVD) of Y; that is, Y = UΣV^T.

S is defined as follows:

S_τ(x) = sign(x) · max(|x| − τ, 0),  (16)

where x ∈ R and τ > 0. When operating on a matrix or a vector, S_τ is applied element-wise to the entries of the matrix or vector.
According to Yang et al. [34], step 4 can be solved in closed form as follows: let Q = X − DZ_{k+1} − Y_1/μ, and then the i-th column of the optimal solution E_{k+1} is

e_i = ((||q_i||_2 − λ/μ)/||q_i||_2) · q_i  if ||q_i||_2 > λ/μ, and e_i = 0 otherwise,  (17)

where e_i and q_i are the i-th columns of the matrices E_{k+1} and Q, respectively.
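Pulling the pieces of this section together, the following is a minimal, illustrative implementation of the LADMAP-style iteration for model (8) with the auxiliary variable J = Z. The operators match the descriptions above, but the solver structure, parameter values, and synthetic data are our own sketch, not the paper's Algorithm 1.

```python
import numpy as np

def svt(M, tau):
    """Theta_tau: singular value contraction (proximal operator of ||.||_*)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """S_tau: element-wise soft-threshold contraction."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def l21_shrink(Q, tau):
    """Closed-form l2,1 minimization: shrink each column of Q toward zero."""
    norms = np.linalg.norm(Q, axis=0)
    scale = np.maximum(norms - tau, 0.0) / np.where(norms > 0, norms, 1.0)
    return Q * scale

def jclrrsr(X, D, lam=5.0, beta=0.05, mu=0.1, rho=1.1, n_iter=300):
    """LADMAP-style iteration for model (8) with auxiliary variable J = Z.
    Demo parameter values (lam, beta, mu, rho) are illustrative."""
    n = X.shape[1]
    k = D.shape[1]
    Z = np.zeros((k, n)); J = np.zeros((k, n)); E = np.zeros_like(X)
    Y1 = np.zeros_like(X); Y2 = np.zeros((k, n))
    eta = np.linalg.norm(D, 2) ** 2 + 1.0      # linearization constant
    for _ in range(n_iter):
        # step 1: linearized proximal update of Z (nuclear-norm term)
        G = D.T @ (D @ Z + E - X + Y1 / mu) + (Z - J + Y2 / mu)
        Z = svt(Z - G / eta, 1.0 / (mu * eta))
        # step 2: update J (l1 term)
        J = soft(Z + Y2 / mu, beta / mu)
        # step 3: update E (l2,1 term, closed form per column)
        E = l21_shrink(X - D @ Z - Y1 / mu, lam / mu)
        # step 4: multiplier and adaptive penalty updates
        Y1 = Y1 + mu * (D @ Z + E - X)
        Y2 = Y2 + mu * (Z - J)
        mu = min(mu * rho, 1e6)
    return Z, E

# Illustrative example: 20 "background" pixels lie in the span of the
# dictionary D; 5 "lesion" pixels do not, so their residual columns are large.
rng = np.random.default_rng(2)
D = rng.standard_normal((40, 15))
D /= np.linalg.norm(D, axis=0)
Xb = D @ (0.5 * rng.standard_normal((15, 20)))   # background pixels
Xl = rng.standard_normal((40, 5))                # lesion pixels
X = np.hstack([Xb, Xl])
Z_hat, E_hat = jclrrsr(X, D)
```

On this synthetic example, the column norms of the recovered E are near zero for the background pixels and large for the out-of-span "lesion" pixels, which is exactly the response behavior used for segmentation in equation (9).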
In summary, the general algorithm of the proposed method in this paper is given in Algorithm 2.

4. Experiments and Discussions
4.1. Experimental Data
To evaluate the effectiveness of the JCLRRSR, we performed experiments on the data of two groups of patients with brain diseases. The first dataset consists of the multisequence MR images of patients with brain tumors, provided by the MICCAI 2012 Brain Tumor Segmentation Challenge (BraTS 2012). There are 25 patients' MR data, and each patient's data include four sequences of MR images, namely, T1, T2, FLAIR, and contrast-enhanced T1, as well as the ground truth of the brain tumor regions and edema regions. The image size is 240 × 240 × 155 and the resolution is 1 × 1 × 1 mm. In the experimental comparison in this section, both tumor and edema were regarded as brain lesions. The second dataset consists of the multisequence MR images of patients with multiple sclerosis, provided by the ACCORD-MIND database. There are 50 patients' MR data, and each patient's data include four sequences of MR images, namely, T1, T2, PD, and FLAIR, as well as the lesion regions labelled manually by radiologists. The image size is 256 × 256 × 46 and the resolution is 0.95 × 0.95 × 3 mm. Because the skull is not always removed completely during preprocessing, the JCLRRSR may segment residual skull along with the brain lesions. To this end, we postprocessed the segmentation results, retaining only the part belonging to the brain tissue and removing the rest. In the analysis of the experimental results, the Dice score was adopted to quantify the accuracy of the segmentation.
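The Dice score used for evaluation can be computed as follows; the function name and the toy masks are ours.

```python
import numpy as np

def dice_score(seg, gt):
    """Dice overlap between a binary segmentation and its ground truth:
    Dice = 2 * |A intersect B| / (|A| + |B|)."""
    seg = np.asarray(seg, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    denom = seg.sum() + gt.sum()
    if denom == 0:                    # both masks empty: define Dice as 1
        return 1.0
    return 2.0 * np.logical_and(seg, gt).sum() / denom

# Illustrative example: 1 overlapping voxel out of 2 + 2 labelled voxels.
seg = np.array([1, 1, 0, 0])
gt = np.array([1, 0, 1, 0])
```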
4.2. Number of Training Samples
The number of training samples in the dictionary is a key factor in the JCLRRSR. More training samples mean more normal brain tissue samples, so the segmentation results will be better, but the computational efficiency will be lower. Conversely, the fewer the training samples, the higher the computational efficiency but the lower the segmentation accuracy. Therefore, a clear trade-off between segmentation accuracy and computational efficiency exists. Because only samples belonging to normal brain tissue, such as white matter, grey matter, and cerebrospinal fluid, are needed in the background dictionary, and the greyscale characteristics of the three types of brain tissue are relatively close, the segmentation accuracy reaches a stable state once the number of training samples is large enough. It can be seen from the brain image that the area of white matter is larger than those of grey matter and cerebrospinal fluid; therefore, the number of training samples we selected from the white matter was three times the number selected from each of the other two tissues. Figure 2 shows the relationship between the total number of training samples and the segmentation accuracy for brain tumor and multiple sclerosis lesions. It can be seen from the figure that the segmentation accuracy increases with the number of training samples and, beyond a certain number, reaches a stable state. In addition, the total number of training samples needed for brain tumor segmentation is smaller than that for multiple sclerosis lesion segmentation, mainly because a brain tumor occupies a much larger area than the multiple sclerosis injury regions. To balance accuracy against efficiency, the total number of training samples was set to 500 when segmenting the brain tumor and 2000 when segmenting multiple sclerosis lesions in the experiment.
4.3. Size of Neighborhood
When constructing the high-dimensional features of the pixels, we transformed the neighborhood of each pixel into a vector and then concatenated the vectors from the different sequence images. When the image block is too large, it spans several tissue categories and the extracted features cannot represent the central pixel well. Conversely, when the image block is too small, there are too few features and the discrimination between different pixels is insufficient. Therefore, the size of the image block is another key factor in the JCLRRSR. Figure 3 shows the effect of the neighborhood size on the segmentation accuracy of the brain tumor and the multiple sclerosis lesions. It can be seen from the figure that a small neighborhood size yields the optimal segmentation accuracy for both the brain tumor and the multiple sclerosis lesions. The likely reason is that the grey matter and the cerebrospinal fluid both present elongated structures in the brain image: when the image block is too large, the central pixel and the other pixels in the block belong to different brain tissue types, which degrades the accuracy of feature extraction and, in turn, the final segmentation.
4.4. Parameter Settings
There are two parameters, λ and β, involved in the JCLRRSR. Figure 4 shows the effects of λ and β on the segmentation accuracy of the brain tumor and the multiple sclerosis injury regions, where λ takes values in {0.001, 0.005, 0.01, 0.05, 0.1, 0.5} and β takes values in {0.001, 0.01, 0.05, 0.1, 0.5, 1}. As can be seen from the figure, the algorithm is noticeably affected by both parameters for the brain tumor data as well as the multiple sclerosis data. This is mainly because, although the brain tumor area is much larger than the multiple sclerosis injury regions, it contains multiple subclasses, such as tumor and edema, and the characteristics of the pixels in these regions differ. In this experiment, we fixed λ and β at the best-performing values from these ranges for the brain tumor data and for the multiple sclerosis data, respectively.
4.5. Lesion Segmentation Results
Figures 5 and 6 show the segmentation results for the brain tumor and the multiple sclerosis injury regions, respectively. In the segmentation of the brain tumor, since the lesion regions in the image include the brain tumor and the edema around it, the JCLRRSR detects them as a whole; if a subsequent quantitative analysis of the brain tumor is required, the detection results can be processed further. The figures show that the segmentation results obtained by the JCLRRSR are close to the ground-truth results and therefore meet the clinical needs. For a better comparative analysis, different data subjects and different numbers of training samples were used to test several segmentation algorithms. In the brain tumor dataset, the samples are divided into high-grade gliomas (HGG) and low-grade gliomas according to the degree of tumor malignancy. Separately, in the multiple sclerosis dataset, the samples are divided into big multiple sclerosis lesions (BMSL) and small multiple sclerosis lesions according to the size of the lesions. From Table 1, we can see that the average accuracies of SRD, LRR, and the proposed JCLRRSR method on the different datasets and subjects correlate strongly with the number of training samples, whereas the Global-RX method is not sensitive to it. In general, these methods achieve better accuracy on HGG and BMSL because of the large targets present in these two subjects. Besides this, the JCLRRSR method achieves the optimal segmentation accuracy across the different datasets and subjects. This comparison demonstrates the superiority of the proposed method on multisequence MR images.

5. Conclusions
This paper presents an improved segmentation method for brain lesions. The multisequence MR images were first fused to form a high-dimensional feature matrix, during which the neighborhood information was incorporated into the high-dimensional features of each pixel. Then, according to the proposed JCLRRSR model, the image feature matrix was decomposed and modeled under the joint constraints of LRR and SR. The model not only reflects the global structure of the image but also maintains the local information of the pixels, thus improving the decomposition accuracy. Finally, considering the computational efficiency, the LADMAP was selected to solve the model, and then the brain lesions were segmented. The setting of the neighborhood size, the number of training samples, and the values of the parameters λ and β involved in the model were discussed in detail in Section 4. To verify the effectiveness of the JCLRRSR approach, experiments were carried out on the brain tumor data and the multiple sclerosis data. The experimental results revealed that the JCLRRSR can not only segment brain lesions automatically but also has certain advantages in terms of segmentation accuracy as compared with other existing methods.
Data Availability
The two sets of data used to support the findings of this study are both from open datasets. One is from the MICCAI BraTS Challenge 2012 (http://www2.imm.dtu.dk/projects/BRATS2012/data.html). The other is from the ACCORDION MIND database (https://clinicaltrials.gov/ct2/show/NCT00182910).
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This study was funded by the National Natural Science Foundation of China (nos. 61275198 and 60978069).
References
S. Saritha and N. Amutha Prabha, "A comprehensive review: segmentation of MRI images-brain tumor," International Journal of Imaging Systems and Technology, vol. 26, no. 4, pp. 295–304, 2016.
E.-S. A. El-Dahshan, H. M. Mohsen, K. Revett, and A.-B. M. Salem, "Computer-aided diagnosis of human brain tumor through MRI: a survey and a new algorithm," Expert Systems with Applications, vol. 41, no. 11, pp. 5526–5545, 2014.
M. Saii and Z. Kraitem, "Automatic brain tumor detection in MRI using image processing techniques," Biomedical Statistics and Informatics, vol. 2, no. 2, pp. 73–76, 2017.
N. Cordier, H. Delingette, and N. Ayache, "A patch-based approach for the segmentation of pathologies: application to glioma labelling," IEEE Transactions on Medical Imaging, vol. 35, no. 4, pp. 1066–1076, 2016.
N. Cordier, "Multi-atlas patch-based segmentation and synthesis of brain tumor MR images," Synfacts, vol. 10, no. 10, p. 1012, 2015.
I. Zabir, S. Paul, M. A. Rayhan et al., "Automatic brain tumor detection and segmentation from multimodal MRI images based on region growing and level set evolution," in Proceedings of the IEEE International WIE Conference on Electrical and Computer Engineering, pp. 503–506, Dhaka, Bangladesh, 2016.
M. Dawngliana, D. Deb, M. Handique et al., "Automatic brain tumor segmentation in MRI: hybridized multilevel thresholding and level set," in Proceedings of the IEEE International Symposium on Advanced Computing and Communication, pp. 219–223, Silchar, India, 2015.
E. Ilunga-Mbuyamba, J. G. Avina-Cervantes, J. Cepeda-Negrete et al., "Automatic selection of localized region-based active contour models using image content analysis applied to brain tumor segmentation," Computers in Biology and Medicine, vol. 91, no. 1, pp. 69–79, 2017.
M. Havaei, A. Davy, D. Warde-Farley et al., "Brain tumor segmentation with deep neural networks," Medical Image Analysis, vol. 35, pp. 18–31, 2017.
X. Chen, B. P. Nguyen, C. K. Chui et al., "Automated brain tumor segmentation using kernel dictionary learning and superpixel-level features," in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, pp. 002547–002552, Budapest, Hungary, 2016.
I. Ali, D. Cem, and S. Melike, "Review of MRI-based brain tumor image segmentation using deep learning methods," Procedia Computer Science, vol. 102, pp. 317–324, 2016.
N. Boughattas, M. Berar, K. Hamrouni et al., "Feature selection and classification using multiple kernel learning for brain tumor segmentation," in Proceedings of the 2018 4th International Conference on Advanced Technologies for Signal and Image Processing, pp. 1–5, Sousse, Tunisia, March 2018.
T. Ge, N. Mu, and L. Li, "A brain tumor segmentation method based on softmax regression and graph cut," Acta Electronica Sinica, vol. 45, no. 3, pp. 644–649, 2017.
Y. Li, F. Jia, and J. Qin, "Brain tumor segmentation from multimodal magnetic resonance images via sparse representation," Artificial Intelligence in Medicine, vol. 73, pp. 1–13, 2016.
X. Chen, B. P. Nguyen, C. K. Chui et al., "Reworking multilabel brain tumor segmentation: an automated framework using structured kernel sparse representation," IEEE Systems, Man, and Cybernetics Magazine, vol. 3, no. 2, pp. 18–22, 2017.
J. J. Tong, P. Zhang, Y. X. Weng et al., "Kernel sparse representation for MRI image analysis in automatic brain tumor segmentation," Frontiers of Information Technology and Electronic Engineering, vol. 19, no. 4, pp. 471–480, 2018.
G. Wu, Y. Chen, Y. Wang et al., "Sparse representation-based radiomics for the diagnosis of brain tumors," IEEE Transactions on Medical Imaging, vol. 37, no. 4, pp. 893–905, 2018.
L. Dai, J. Ding, J. Chen et al., "Object segmentation using low-rank representation with multiple block-diagonal priors," in Proceedings of the 2016 23rd International Conference on Pattern Recognition, pp. 1959–1964, Cancun, Mexico, December 2016.
L. Wei, X. Wang, A. Wu et al., "Robust subspace segmentation by self-representation constrained low-rank representation," Neural Processing Letters, vol. 48, no. 3, pp. 1671–1691, 2018.
J. Ma, J. Jiang, and C. Li, "Hyperspectral image denoising with segmentation-based low rank representation," in Proceedings of the 2016 Visual Communications and Image Processing, pp. 1–4, Chengdu, China, November 2016.
K. Tang, Z. Su, W. Jiang et al., "Robust subspace learning-based low-rank representation for manifold clustering," Neural Computing and Applications, pp. 1–13, 2018.
E. J. Candès, X. Li, Y. Ma et al., "Robust principal component analysis?" Journal of the ACM, vol. 58, no. 3, pp. 1–37, 2011.
G. Liu, Z. Lin, S. Yan et al., "Robust recovery of subspace structures by low-rank representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 171–184, 2013.
Z. C. Lin, "A review on low-rank models in data analysis," Big Data & Information Analytics, vol. 1, no. 2/3, pp. 139–161, 2016.
F. Shi, J. Cheng, L. Wang et al., "LRTV: MR image super-resolution with low-rank and total variation regularizations," IEEE Transactions on Medical Imaging, vol. 34, no. 12, pp. 2459–2466, 2015.
S. H. Baete, J. Y. Chen, Y. C. Lin et al., "Low rank plus sparse decomposition of ODFs for improved detection of group-level differences and variable correlations in white matter," NeuroImage, vol. 174, pp. 138–152, 2018.
R. Liu, H. Nejati, and N. M. Cheung, "Joint estimation of low-rank components and connectivity graph in high-dimensional graph signals: application to brain imaging," 2018, http://arxiv.org/abs/1801.02303.
N. Vaswani, T. Bouwmans, S. Javed et al., "Robust subspace learning: robust PCA, robust subspace tracking, and robust subspace recovery," IEEE Signal Processing Magazine, vol. 35, no. 4, pp. 32–55, 2018.
J. Wright, A. Ganesh, S. Rao et al., "Robust principal component analysis: exact recovery of corrupted low-rank matrices via convex optimization," Advances in Neural Information Processing Systems, vol. 22, pp. 2080–2088, 2009.
Z. Lin, R. Liu, and Z. Su, "Linearized alternating direction method with adaptive penalty for low-rank representation," Advances in Neural Information Processing Systems, vol. 24, pp. 612–620, 2011.
L. Zhuang, H. Gao, Z. Lin et al., "Non-negative low rank and sparse graph for semi-supervised learning," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2328–2335, Providence, RI, USA, June 2012.
J. Cai, E. J. Candès, and Z. Shen, "A singular value thresholding algorithm for matrix completion," SIAM Journal on Optimization, vol. 20, no. 4, pp. 1956–1982, 2010.
Z. Lin, M. Chen, and Y. Ma, "The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices," 2010, http://arxiv.org/abs/1009.5055.
J. Yang, W. Yin, Y. Zhang et al., "A fast algorithm for edge-preserving variational multichannel image restoration," SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 569–592, 2009.
I. S. Reed and X. Yu, "Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 38, no. 10, pp. 1760–1770, 1990.
W. Li and Q. Du, "Collaborative representation for hyperspectral anomaly detection," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 3, pp. 1463–1474, 2015.
Copyright
Copyright © 2019 Ting Ge et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.