Research Article  Open Access
Haiyan Wang, Guoqiang Han, Haojiang Li, Guihua Tao, Enhong Zhuo, Lizhi Liu, Hongmin Cai, Yangming Ou, "A Collaborative Dictionary Learning Model for Nasopharyngeal Carcinoma Segmentation on Multimodalities MR Sequences", Computational and Mathematical Methods in Medicine, vol. 2020, Article ID 7562140, 15 pages, 2020. https://doi.org/10.1155/2020/7562140
A Collaborative Dictionary Learning Model for Nasopharyngeal Carcinoma Segmentation on Multimodalities MR Sequences
Abstract
Nasopharyngeal carcinoma (NPC) is the most common malignant tumor of the nasopharynx. The delicate nature of the nasopharyngeal structures means that noninvasive magnetic resonance imaging (MRI) is the preferred diagnostic technique for NPC. However, NPC is a typically infiltrative tumor, usually with a small volume, and thus, it remains challenging to discriminate it from tightly connected surrounding tissues. To address this issue, this study proposes a voxelwise discriminative method for locating and segmenting NPC from normal tissues in MRI sequences. The located NPC is refined to obtain its accurate segmentation results by an original multiview collaborative dictionary classification (CODL) model. The proposed CODL reconstructs a latent intact space and equips it with discriminative power for the collective multiview analysis task. Experiments on synthetic data demonstrate that CODL is capable of finding a discriminative space for multiview orthogonal data. We then evaluated the method on real NPC data. Experimental results show that CODL could accurately discriminate and localize NPCs of different volumes. This method achieved superior performance in segmenting NPC compared with benchmark methods. Robust segmentation results show that CODL can effectively assist clinicians in locating NPC.
1. Introduction
Nasopharyngeal carcinoma (NPC) is an enigmatic malignancy with marked racial and geographical differences, being particularly prevalent in southern China, Southeast Asia, and northern Africa [1, 2]. Although advances in therapeutic techniques have contributed to improved clinical outcomes for patients with NPC, the mortality rate remains high. Early detection and accurate tumor localization of NPC are vital for surgical planning. Magnetic resonance imaging (MRI) is the first choice for primary tumor delineation and a presurgical tool for localization and evaluation of the tumor entity [3–5]. In practice, the patient is usually scanned by T1-weighted (T1w) or T2-weighted (T2w) MR imaging. T2w imaging provides finer structural information on soft tissues than T1w imaging. Contrast-enhanced T1-weighted (CET1w) imaging is sometimes performed to provide direct evidence of tumor occurrence. Currently, identification and comprehensive assessment of the NPC tumor entity remain a great challenge: the infiltrative and migratory characteristics of NPC make it difficult to discriminate from surrounding tissues.
To achieve automatic (or semiautomatic) segmentation of the NPC, traditional image processing has been used to fulfill the task. For example, [6] proposed a semiautomatic workflow, including masking, thresholding, and seed growing, to segment NPC from both T2w and CET1w images from 7 patients to help radiation therapy. [7] proposed an automatic NPC segmentation method based on region growing and clustering and used neural networks to classify suspicious regions. [8] proposed to use a genetic algorithm for selecting informative features and a support vector machine for classifying NPC. With the great success of deep learning models in computer vision, [9] proposed to use deep convolutional neural networks and graph cut on T1w images from 30 NPC patients. [10] tested a deep deconvolutional neural network, comprising an encoder network and a decoder network, on CT images from 230 patients. [11] reported an automatic NPC segmentation method based on a convolutional neural network (CNN) architecture with dynamic contrast-enhanced MRI. [12] used fully convolutional networks with auxiliary paths to achieve automatic segmentation of NPC on PET-CT images. [13] used a modified U-Net model to automatically segment NPC on CT images from 502 patients. [14] proposed an automated method based on CNNs for NPC segmentation on dual-sequence MRI (i.e., T1w and T2w) from 44 patients. Furthermore, tumor volume varies greatly, and many tumors are small. Such sample characteristics make it very difficult to construct representative learning models using deep networks.
Recently, multiview learning models have been developed to analyze images from various imaging modalities or views. Fruitful advances have been made in reconstruction, face recognition, human motion recognition, and other object recognition problems [15–17]. In the current study, each patient underwent MRI with three sequences (i.e., T1w, T2w, and CET1w) to benefit from their different imaging characteristics (see Figure 1). The study is aimed at achieving the identification and segmentation of the NPC with high accuracy. Different views usually provide supplementary information, so the problem of NPC segmentation can be formulated as a voxelwise dictionary learning problem with three different views.
However, existing multiview learning methods cannot be directly applied to NPC localization and segmentation. From a methodological standpoint, most NPCs occupy only a small area in the entire slice. Such imbalance results in a high false positive rate when learning models are applied directly. To solve this difficulty, we preprocessed the data using a specially designed deep learning model with a fully convolutional network (FCN) structure to roughly locate the suspicious tumor area. In light of the advantages of multiview subspace learning, we propose a multiview collaborative dictionary learning model, which we call CODL, to further refine the detailed structure of the NPC. The flowchart of NPC segmentation is illustrated in Figure 2.
The major contributions of our work are as follows:
(1) An original collaborative dictionary model for multiview learning (CODL) is proposed to achieve fine segmentation. CODL integrates cooperative information from multiple views to find a latent intact space for the data and renders that latent space discriminative. The latent space is constructed by collaborative dictionary learning that incorporates class membership, so that the consistent indicator matrix acquires discriminative capability.
(2) A numerical scheme for solving CODL is provided. It decomposes the proposed unified framework into solvable subproblems, each with an explicit solution and fast computation.
(3) While using all three MR sequences (T1w, T2w, and CET1w) achieved the highest accuracy, we show that, for patients whose kidney disease precludes the contrast agent required for CET1w imaging [18, 19], using T1w and T2w alone does not significantly undermine segmentation accuracy. This highlights the sensitivity and stability of the proposed CODL algorithm and improves the applicability of the framework.
2. Literature Review
Sparse codes generated by dictionaries can be directly used as features to train classifiers for recognition [20]. This two-stage scheme has been extensively used in many existing methods [21–25]; such methods use the discriminative nature of sparse representation to perform classification. However, the generated sparse codes are often insufficiently discriminative for complex recognition tasks [26]. One alternative is to unify dictionary learning and classifier training in a single optimization model [27]. However, most supervised dictionary methods employ only single-view information in the learning process, so the data cannot be expressed optimally. Besides, such a model will also depend on the peculiarities of the training data.
A naive approach to multiview learning is feature fusion [28]. However, concatenating individual views may be suboptimal if the different views belong to different sample spaces. To address this drawback, weighted combinations [29] have been proposed. Alternatively, recent advances aim to learn from multiview data by finding an intact space, such as multiview intact space learning (MISL) [15] and multiview discriminant analysis with view consistency (MvDAVC) [16]. In such approaches, a latent subspace shared by multiple views is learned by assuming that the input views are generated from this latent subspace. In multiview intact space learning, however, class membership is seldom used to find the optimal latent subspace, which limits its power for supervised learning problems.
The goal of this paper is to develop a multiview dictionary learning method for voxelwise classification; we therefore first gave a brief review of the supervised dictionary and multiview subspace learning methods related to our work. In this paper, we propose a novel collaborative dictionary model for multiview learning, which also incorporates the sample labels into the latent intact space. The construction of the latent space is guided by supervised dictionary learning within each individual view and is thereby equipped with discriminative power.
3. Materials and Methods
3.1. Dataset
A total of 24 patients with nonmetastatic NPC at the Sun Yat-sen University Cancer Center (SYSUCC) were enrolled in this study. MRI was performed on a 3.0-T scanner (Discovery MR750; GE Healthcare, Milwaukee, WI, USA). Axial T1w, axial T2w, and axial CET1w images were acquired with fast spin echo (FSE) sequences. The number of slices per patient was 16, 32, or 36, and not every slice contains lesions. The slice spacing is 5 mm, and the in-plane imaging resolution is high. T1w, T2w, and CET1w MR sequences were assessed for each patient. Regions of interest (ROIs) were drawn by four experienced radiologists (>3 years of clinical experience) using semiautomatic methods. They were required to draw all discernible tumor regions carefully along the axial direction. Any disagreements were resolved through negotiation until the four reached full consensus.
The purpose of this study is to develop a multiview dictionary learning method for voxelwise classification. We first applied rigorous quality control to the selection of slices. Following the principle of aligning multiple modality sequences, a total of 90 slices covering 30 instances of distinct tumor sizes were selected for our experiment. Each instance comprises three MR sequences (i.e., T1w, T2w, and CET1w), well aligned before being fed into the models.
3.2. A Collaborative Dictionary Model for Multiview Classification (CODL)
In this paper, we propose a collaborative multiview learning model that fuses multiple image modalities into a consolidated space. By integrating each single modality and exploiting its characteristics comprehensively, the information among the different modalities is actively learned and reinterpreted in a latent space. Supervised membership is used to render the latent space discriminative, and sample classification is finally conducted within the learned latent space.
3.2.1. Formulation of Multiview Collaborative Classification Model
Mathematically, let $X_v = [x_{v,1}, \dots, x_{v,n}] \in \mathbb{R}^{d_v \times n}$ denote a dataset containing $n$ samples from the $v$-th view ($v = 1, \dots, V$), with each sample characterized by a $d_v$-dimensional vector. We want to consolidate the multiview data into a latent space, denoted by $Z \in \mathbb{R}^{K \times n}$, where $K$ is the dimensionality of the latent space.
Let $D_v \in \mathbb{R}^{d_v \times K}$ denote the dictionary learned in the $v$-th view. The label indicator matrix for the training samples is denoted by $Y$. Our aim is to learn an informative latent space from multiple modalities and then achieve accurate classification within that latent space. To this end, we propose the following model to achieve latent space learning and classification simultaneously:
$$\min_{\{D_v\},\, Z,\, W} \ \sum_{v=1}^{V} \left\| X_v - D_v Z \right\|_F^2 + \lambda \left\| Y - W Z \right\|_F^2 + \gamma \left\| Z \right\|_1, \qquad (1)$$
where $W$ is a linear classifier acting on the latent representations.
The first term in Equation (1) controls data fidelity by minimizing the reconstruction errors in the latent space through the dictionaries $D_v$. The second term renders the latent space with discriminative power. The two terms work collaboratively to yield a sharable latent space for the different views. The third term encourages the loading coefficients to be sparse to achieve an economical expression; it also helps to stabilize the optimization given the large freedom in the objective function. The hyperparameters $\lambda$ and $\gamma$ penalize the label reconstruction error and the sparsity, respectively.
Once we obtain the learned dictionaries $D_v$ and the latent space $Z$, we can map a query sample to its representation in the latent space. For a query with view-wise observations $x_v$, the latent representation $\hat{z}$ is estimated by minimizing the following energy function:
$$\hat{z} = \arg\min_{z} \ \sum_{v=1}^{V} \left\| x_v - D_v z \right\|_2^2. \qquad (2)$$
Finally, we can classify the sample in the latent space using benchmark classification models, e.g., nearest neighbor.
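As a concrete illustration, classification in the latent space with a nearest-neighbor rule can be sketched as follows. This is a minimal NumPy sketch; the function name and the choice of Euclidean distance are our assumptions, not prescribed by the paper.

```python
import numpy as np

def nn_classify(z_query, Z_train, labels):
    """Assign the query the label of its nearest training sample (1-NN),
    measured by Euclidean distance in the learned latent space.

    Z_train: (K, n) matrix of latent training representations.
    z_query: (K,) latent representation of the query sample.
    """
    dists = np.linalg.norm(Z_train - z_query[:, None], axis=0)
    return labels[np.argmin(dists)]
```

Any benchmark classifier (e.g., SVM) could be substituted here, since the latent representations are ordinary feature vectors.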
The proposed CODL not only integrates complementary information in multiple views to find a latent intact space for the data but also renders the latent space discriminative.
3.2.2. Numerical Scheme for Solving CODL
The objective function in Equation (1) is convex with respect to each of the variables $D_v$, $Z$, and $W$ when the others are fixed. Therefore, we used a heuristic alternating direction method to solve it. By minimizing one variable while fixing the others, the alternating direction method iteratively updates each variable until convergence. The alternating minimization method enjoys an excellent characteristic: it decomposes a large, complex problem into small subproblems, enabling parallel solving and quick convergence. For our problem, it decomposes Equation (1) into three subproblems with respect to the three variables $D_v$, $Z$, and $W$.
Step 1, updating $D_v$: by fixing $Z$ and $W$ and discarding irrelevant terms, the objective function in Equation (1) can be simplified as
$$\min_{D_v} \left\| X_v - D_v Z \right\|_F^2. \qquad (3)$$
It is convex and differentiable with respect to the variable $D_v$. By setting the gradient to zero, one has the explicit solution
$$D_v = X_v Z^{\top} \left( Z Z^{\top} \right)^{-1}. \qquad (4)$$
Step 2, updating $Z$: by fixing the variables $D_v$ and $W$, the objective function in Equation (1) can be simplified as
$$\min_{Z} \ \sum_{v=1}^{V} \left\| X_v - D_v Z \right\|_F^2 + \lambda \left\| Y - W Z \right\|_F^2 + \gamma \left\| Z \right\|_1. \qquad (5)$$
This resembles the classical least absolute shrinkage and selection operator (LASSO) problem. Using a proximal gradient, its solution can be obtained by the iterative soft-thresholding algorithm (ISTA) [30]:
$$Z^{t+1} = S_{\eta\gamma}\!\left( Z^{t} - \eta \nabla f\!\left( Z^{t} \right) \right), \qquad (6)$$
where $f(Z)$ denotes the smooth part of Equation (5), $\eta$ is the step size, and $S_{\tau}(\cdot)$ is the elementwise soft-thresholding operator
$$S_{\tau}(a) = \operatorname{sign}(a) \max\left( |a| - \tau, 0 \right). \qquad (7)$$
One could further accelerate ISTA (e.g., via FISTA) to achieve faster convergence.
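The ISTA update for a LASSO-type subproblem can be sketched as follows. This is a generic NumPy sketch for $\min_Z \tfrac{1}{2}\|X - DZ\|_F^2 + \gamma \|Z\|_1$ (a single-view stand-in for the subproblem above); the symbols `D`, `gamma`, and the step-size rule are our notation, not the paper's exact implementation.

```python
import numpy as np

def soft_threshold(A, tau):
    # S_tau(a) = sign(a) * max(|a| - tau, 0), applied elementwise
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def ista(X, D, gamma, n_iter=200):
    """Solve min_Z 0.5*||X - D Z||_F^2 + gamma*||Z||_1 by iterative
    soft-thresholding. The step size eta = 1/L, where L = ||D||_2^2 is
    the Lipschitz constant of the smooth term's gradient."""
    L = np.linalg.norm(D, 2) ** 2
    eta = 1.0 / L
    Z = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ Z - X)               # gradient of the smooth term
        Z = soft_threshold(Z - eta * grad, eta * gamma)
    return Z
```

With `D` equal to the identity, the solution reduces to elementwise soft-thresholding of `X`, which makes the sketch easy to sanity-check.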
Step 3, updating $W$: by fixing $D_v$ and $Z$, the objective function in Equation (1) can be simplified as
$$\min_{W} \left\| Y - W Z \right\|_F^2. \qquad (8)$$
Setting the gradient with respect to $W$ to zero, one has
$$W = Y Z^{\top} \left( Z Z^{\top} \right)^{-1}. \qquad (9)$$
The above three schemes are iteratively updated until convergence.
In the testing phase, one needs to find the new representation for query samples through the dictionaries by solving Equation (2). It is a standard least-squares minimization problem with the explicit solution
$$\hat{z} = \left( \sum_{v=1}^{V} D_v^{\top} D_v \right)^{-1} \sum_{v=1}^{V} D_v^{\top} x_v. \qquad (10)$$
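The closed-form query mapping can be sketched in a few lines of NumPy. This is our illustrative implementation of the normal equations for $\min_z \sum_v \|x_v - D_v z\|_2^2$; the function and variable names are assumptions.

```python
import numpy as np

def map_to_latent(x_views, dictionaries):
    """Map a multiview query sample into the latent space by solving
    min_z sum_v ||x_v - D_v z||^2. The normal equations give
    z = (sum_v D_v^T D_v)^{-1} sum_v D_v^T x_v."""
    K = dictionaries[0].shape[1]
    A = np.zeros((K, K))
    b = np.zeros(K)
    for x_v, D_v in zip(x_views, dictionaries):
        A += D_v.T @ D_v               # accumulate Gram matrices
        b += D_v.T @ x_v               # accumulate projected observations
    return np.linalg.solve(A, b)
```

Using `np.linalg.solve` rather than explicitly inverting the accumulated Gram matrix is the numerically preferred way to evaluate this closed form.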
The pseudocode for solving CODL is provided in Algorithm 1.

3.2.3. Complexity Analysis
The computational time of solving the proposed model is mainly spent updating $D_v$, $Z$, and $W$. As mentioned in Section 3.2.1, $X_v \in \mathbb{R}^{d_v \times n}$, $D_v \in \mathbb{R}^{d_v \times K}$, and $Z \in \mathbb{R}^{K \times n}$, where $d_v$ is the dimensionality of the $v$-th view, $K$ is the dimensionality of the latent space, and $n$ is the number of multiview objects. According to Algorithm 1, the main computational cost of CODL is incurred in the iterative calculations of $D_v$, $Z$, and $W$. In each inner iteration, solving $D_v$ by Equation (4) costs $O(d_v K n + K^3)$, each ISTA step for $Z$ via Equation (6) costs $O\big(\sum_{v} d_v K n\big)$, and solving $W$ via Equation (9) costs $O(K^2 n + K^3)$, each dominated by matrix products and the inversion of a $K \times K$ matrix. Therefore, with $T$ outer iterations, the total computational complexity is on the order of $O\big(T\big(\sum_{v} d_v K n + K^3\big)\big)$.
4. Experiments and Results
We applied the proposed model on both a synthetic dataset and a real NPC dataset. For a fair comparison, each method was run on the synthetic data 10 times, and the averaged results were recorded. On the real NPC dataset, we tested the performance of each method using a 10-fold cross-validation scheme. Classification accuracy was measured in terms of average accuracy across ten trials on different training and testing sets. Moreover, the parameters of each compared method were tuned within the suggested ranges to reach its best performance. For CODL, we empirically tuned the hyperparameters separately for the single-view, two-view, and three-view settings and kept them fixed throughout all experiments. All of our experiments were performed on a desktop computer with a 4.20 GHz Intel(R) Core(TM) i7-7700K CPU, 16.0 GB of RAM, and MATLAB R2017a (×64).
4.1. Evaluation Metrics and Baseline Methods for Performance Comparisons
Six widely used metrics, including the sensitivity (SENS), the dice similarity coefficient (DICE), the area under the receiver operating characteristic curve (AUC), intersection over union (IoU), mean pixel accuracy (MPA), and Hausdorff distance (HD), were employed to measure the performance of each tested method. These quantitative metrics are defined as follows:
$$\mathrm{SENS} = \frac{TP}{TP + FN}, \quad \mathrm{DICE} = \frac{2\,TP}{2\,TP + FP + FN}, \quad \mathrm{IoU} = \frac{TP}{TP + FP + FN}, \quad \mathrm{MPA} = \frac{TPR + TNR}{2},$$
where $TP$, $FP$, $TN$, $FN$, $TPR$, and $TNR$ represent true positives, false positives, true negatives, false negatives, the true positive rate, and the true negative rate, respectively. We also plotted the receiver operating characteristic (ROC) curve for each method and estimated the area under the ROC curve (AUC). For two point sets $A$ and $B$, the Hausdorff distance between these two sets is defined as
$$\mathrm{HD}(A, B) = \max\big( h(A, B),\; h(B, A) \big),$$
where $h(A, B) = \max_{a \in A} \min_{b \in B} \|a - b\|$ and $h(B, A) = \max_{b \in B} \min_{a \in A} \|b - a\|$. For this study, we used the Euclidean norm $\|\cdot\|_2$.
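The overlap metrics and the Hausdorff distance above can be computed directly from binary masks and point sets; a minimal NumPy sketch (our own helper names, brute-force pairwise distances for HD) follows.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute SENS, DICE, and IoU from two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)      # true positives
    fp = np.sum(pred & ~truth)     # false positives
    fn = np.sum(~pred & truth)     # false negatives
    sens = tp / (tp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return sens, dice, iou

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets (one point
    per row): H(A, B) = max(h(A, B), h(B, A)), with
    h(A, B) = max_{a in A} min_{b in B} ||a - b||_2."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

For large boundary point sets, a KD-tree-based implementation would replace the dense pairwise distance matrix, but the brute-force version makes the definition explicit.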
Several benchmark methods serve as baselines for comparison. They are widely used multiview methods that are most relevant to ours:
(i) Support vector machine (SVM) [31]: we concatenate the features of all views and perform support vector machine classification.
(ii) Multiview intact space learning (MISL) [15]: it integrates the encoded complementary information from different views into a latent intact space, showing theoretically that combining multiple views yields abundant information for latent intact space learning.
(iii) Multiview discriminant analysis with view consistency (MvDAVC) [16]: it seeks a single discriminant common space for multiple views in a nonpairwise manner by jointly learning multiple view-specific linear transforms. MvDAVC has achieved good performance on object recognition from multiple views.
(iv) Zhao et al. [12]: it uses fully convolutional networks with an auxiliary path to achieve automatic segmentation of NPC on dual-modality PET-CT images; the auxiliary paths guide the training of the lower layers.
(v) Li et al. [13]: it proposes a modified version of U-Net that performs well on NPC segmentation by modifying the downsampling and upsampling layers to have similar learning ability and to predict the same spatial resolution as the source image.
4.2. Discriminative Capability Tests of CODL on Synthetic Data
We first constructed a synthetic dataset to test the discriminative power of the proposed method. The synthetic data consisted of three classes that were separable within a three-dimensional space but inseparable when projected orthogonally onto a two-dimensional (2D) plane. The samples projected onto each 2D plane were considered an observed individual view. The synthetic data contained 3000 samples from three classes, each following a multivariate normal distribution with a distinct mean vector and covariance matrix.
To test the robustness of the model against noise contamination, the synthetic data were corrupted by Gaussian white noise with a standard deviation of 0.25, 0.5, and 1, respectively. The synthetic data are shown in Figure 3: the first row shows the three different views along the $xy$-, $xz$-, and $yz$-planes, respectively, and the corresponding classification results of the proposed CODL are shown in the second row. Classification performance was measured in terms of average accuracy across ten trials, with a 1:1 split between training and test sets in each trial. The averaged results were recorded and summarized in Table 1.
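Data with this "separable in 3D, inseparable in each 2D projection" structure can be generated as follows. The means and covariance below are our own illustrative choices (the paper's exact values are not reproduced here); each orthogonal 2D projection plays the role of one observed view.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # samples per class

# Illustrative class means placed along the z-axis, so the classes are
# separable in 3D but collapse onto each other once z is dropped.
means = [np.array([0.0, 0.0, 0.0]),
         np.array([0.0, 0.0, 4.0]),
         np.array([0.0, 0.0, 8.0])]
cov = np.eye(3) * 0.25

X = np.vstack([rng.multivariate_normal(m, cov, n) for m in means])
y = np.repeat([0, 1, 2], n)

# Orthogonal 2D projections serve as the observed individual views.
view_xy = X[:, [0, 1]]   # drops z: the three classes overlap here
view_xz = X[:, [0, 2]]
view_yz = X[:, [1, 2]]
```

Gaussian white noise of a chosen standard deviation can then be added to each view to reproduce the robustness experiment.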
 
V1, V2, and V3 denote the $xy$-, $xz$-, and $yz$-plane views, respectively. FeaConcat means that we concatenate the features of all views to generate a combined feature. Fusion means that we construct a multiview latent intact space by fusing all individual views. Classification performance is measured in terms of average accuracy.
Since an individual view cannot reveal the intrinsic structure of the data, classification on each individual view may not obtain accurate results. When the synthetic data were noise free, MvDAVC obtained the highest accuracy by fusing the three views. However, as the noise level increased, its performance fell below that of MISL and CODL. Throughout the experiments, the proposed CODL uniformly achieved the best performance. With increasing noise, the drop in our method's classification accuracy was markedly smaller than that of the other methods. Even when the data were heavily contaminated by noise (standard deviation of 1), CODL remained superior, with the highest accuracy of 74.9%.
4.3. Realistic Experiments on Nasopharyngeal Carcinoma Data
4.3.1. Image Preprocessing
Most NPCs have a small volume and are thus very difficult to discriminate from their large surroundings. Such imbalance also results in a large false positive rate when learning models are applied directly. To address these difficulties, we first designed a fully convolutional network (FCN) to locate a rectangular box bounding the suspicious tumor. The network contains standard layers, including convolution, maximum pooling, and upsampling [32]. Our network uses a skip structure to exploit deep and shallow semantic information, as well as multiscale convolution kernels to obtain a comprehensive global structure. The network was trained to predict a rectangular bounding box for the NPC.
The detailed architecture of the FCN for NPC localization is summarized in Table 2. Figure 4(a) shows the MR slices with bounding boxes identified by the FCN, highlighted in red dots. We selected an outer area by extending the located bounding box fifteen pixels outward to ensure that it sufficiently covers the tumor region.
 
The convolutional layer is denoted by Conv., and the max pooling by maxpool. 
4.3.2. Radiomics Feature Extraction and Classification
Within the bounding box, each voxel is classified into a binary label of tumor vs. normal. The features for each pixel were estimated within a sliding window centered on that pixel. A total of 192 radiomics features (i.e., 32 Gabor, 5 Momentum, 154 GLCM, and 1 Pixel) were extracted for each sliding window; see Section S1 in the Supplementary Material for more information on the radiomics features. This resulted in a sample matrix with 8034 samples and 192 features. The feature-extraction procedure is identical for the T1w, T2w, and CET1w sequences. We use z-score standardization. Finally, we apply an adaptive median filter to perform simple postprocessing on the entire slice, retaining the largest connected area.
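The z-score standardization of the radiomics feature matrix can be sketched as below; the function name and the small epsilon guard against constant features are our additions.

```python
import numpy as np

def zscore(F, eps=1e-12):
    """Standardize each feature column to zero mean and unit variance.
    F is an (n_samples, n_features) radiomics feature matrix; eps
    guards against division by zero for constant-valued features."""
    mu = F.mean(axis=0)
    sigma = F.std(axis=0)
    return (F - mu) / (sigma + eps)
```

In a cross-validation setting, the mean and standard deviation would be estimated on the training fold only and then applied to the test fold, to avoid information leakage.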
We tested the performance of CODL using a 10-fold cross-validation scheme, with a 9:1 ratio of training to test sets in each fold. A total of 30 instances (training cohort: 27; testing cohort: 3) were enrolled in the voxel classification analysis. Classification accuracy was measured in terms of average accuracy across ten trials on different training and testing sets.
4.3.3. Experimental Results
Figure 5 visualizes NPC segmentation results on three typical instances with large, medium, and small tumors, respectively. Each row shows the segmentation results for one instance across MR sequences. From Figure 5, one can see that CODL, fusing the T1w, T2w, and CET1w MR sequences, obtained highly accurate segmentations.
Figure 4 shows the overall segmentation process. As illustrated in Figure 4(a), we select the outer area by expanding the positioned bounding box 15 pixels outward; the extended areas used for fine classification are indicated by solid red lines. Figure 4(b) shows pixelwise fine classification results using MISL, MvDAVC, and CODL. One may observe that CODL obtained the highest accuracy and performed stably in identifying tumors of different volumes. Specifically, Figure 4(c) shows the identified tumors in the whole slices: the proposed method identifies the tumor successfully, with its boundary almost perfectly overlapping the actual one.
We report the detailed numerical results on cropped NPC dataset in Table 3.
 
FeaConcat2 and Fusion2 denote concatenating and fusing T1w and T2w, respectively. FeaConcat3 and Fusion3 denote concatenating and fusing T1w, T2w, and CET1w, respectively. 
In the first section of Table 3, we first tested the classification performance on each individual imaging modality. CODL performed uniformly better than SVM, consistent with its superior performance on the synthetic data. Moreover, CET1w provides more accurate classification than T2w or T1w; the AUCs achieved by CODL on CET1w, T2w, and T1w are reported in Table 3.
Considering that some NPC patients cannot receive CET1w scans because of kidney disease, we reran the experiments using only the T1w and T2w modalities. The results are summarized in the second section of Table 3. Overall, fusing the two modalities yielded higher accuracy than using any single MR modality, and CODL with the fusion of T1w and T2w scored the highest.
Finally, we used all three MR modalities. One may observe that CODL achieved superior performance in classifying nasopharyngeal carcinoma: the DICE, AUC, IoU, and MPA for CODL were uniformly larger than those of the other methods. Incorporating CET1w imaging achieved only a minor improvement over omitting it (see Table 3), implying that CODL can fully exploit the discriminative information in the T1w and T2w modalities, so the loss of accuracy after dropping CET1w is only mild.
Quantitative results of each method are shown as box plots in Figure 6. In terms of DICE, AUC, IoU, MPA, and HD, the performance of CODL is superior to that of the other methods. Another noticeable characteristic of CODL is its robustness: the variances of its scores across the different metrics are dramatically smaller than those of the other methods. Such high robustness coincides with the experiments on the synthetic data.
We report the detailed numerical results on whole MR slices in Table 4.