Computational and Mathematical Methods in Medicine
Volume 2020 | Article ID 6789306 | https://doi.org/10.1155/2020/6789306
Research Article | Open Access
Special Issue: Computational Intelligence Methods for Brain-Machine Interfacing or Brain-Computer Interfacing

Wentao Wu, Daning Li, Jiaoyang Du, Xiangyu Gao, Wen Gu, Fanfan Zhao, Xiaojie Feng, Hong Yan, "An Intelligent Diagnosis Method of Brain MRI Tumor Segmentation Using Deep Convolutional Neural Network and SVM Algorithm", Computational and Mathematical Methods in Medicine, vol. 2020, Article ID 6789306, 10 pages, 2020. https://doi.org/10.1155/2020/6789306

An Intelligent Diagnosis Method of Brain MRI Tumor Segmentation Using Deep Convolutional Neural Network and SVM Algorithm

Guest Editor: Yi-Zhang Jiang
Received: 01 Jun 2020
Accepted: 01 Jul 2020
Published: 14 Jul 2020

Abstract

Among currently proposed brain tumor segmentation methods, those based on traditional image processing and classical machine learning give unsatisfactory results, so methods based on deep learning are now widely used. Among deep learning-based methods, convolutional network models segment brain tumors well but suffer from a large number of parameters and a large loss of information during encoding and decoding. This paper proposes a deep convolutional neural network fused with a support vector machine algorithm (DCNN-F-SVM). The proposed brain tumor segmentation model is divided into three stages. In the first stage, a deep convolutional neural network is trained to learn the mapping from image space to tumor marker space. In the second stage, the predicted labels obtained from training the deep convolutional neural network are input into the integrated support vector machine classifier together with the test images. In the third stage, the deep convolutional neural network and the integrated support vector machine are connected in series to train a deep classifier. Each model was run on the BraTS dataset and a self-made dataset to segment brain tumors. The segmentation results show that the proposed model performs significantly better than the deep convolutional neural network and the integrated SVM classifier used alone.

1. Introduction

The incidence of brain tumors increases with age [1]. This article focuses on gliomas among brain tumors. According to the location of the glioma, its cell type, and the severity of the tumor, the World Health Organization classifies gliomas into grades I–IV: grades I and II are low-grade gliomas, and grades III and IV are high-grade gliomas [2]. To help doctors remove gliomas accurately during surgery, imaging techniques such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET) are commonly used in clinical practice to segment the glioma region in brain images, which helps the doctor safely remove as much of the tumor as possible. MRI in particular offers significant soft tissue contrast and provides abundant physiological tissue information. In the clinical treatment of gliomas, MRI is therefore usually used to assess gliomas preoperatively, intraoperatively, and postoperatively.

A glioma is a tumor composed of a necrotic core, a margin of active tumor, and edematous tissue. Multiple MRI sequences can be used to image the different tumor tissues [3], as shown in Figure 1. At present, glioma MRI generally comprises four modal sequences: T1-weighted, post-contrast T1-weighted (T1ce), T2-weighted, and FLAIR, and different sequences reflect different glioma tissues [4]. In general, the FLAIR sequence is suited to observing edematous tissue, while the T1ce sequence is suited to observing the active components of the tumor core.

MRI-based segmentation of gliomas and their surrounding abnormal tissues helps the doctor observe the external morphology of each tumor tissue and supports imaging-based analysis and further treatment. Segmentation of the glioma is therefore considered a first step in the MRI analysis of glioma patients. Because gliomas have varying degrees of malignancy and contain multiple tumor tissue regions, and because brain MRI is a multimodal, multislice three-dimensional scan, manual segmentation of glioma regions requires a great deal of time and manpower. In addition, manual segmentation is usually based on the image brightness perceived by the human eye, so it is easily affected by image quality and by the personal factors of the annotator, and is prone to both missed and spurious segmentations. Clinical practice therefore needs a fully automatic segmentation method with good accuracy for gliomas. The difficulties in automatic glioma segmentation can be summarized as follows: (1) gliomas are usually distinguished in the image by the change in pixel intensity between the lesion area and the surrounding normal tissue; because of the intensity bias field, the intensity gradient between adjacent tumor tissues is smoothed, blurring tumor tissue boundaries. (2) Gliomas differ in size, shape, and position, which makes them difficult to model with segmentation algorithms; moreover, because the growth position of a glioma is not fixed, it is often accompanied by a tumor mass effect that compresses and deforms the surrounding normal brain tissue, generating irregular background information and increasing the difficulty of segmentation.

In recent years, computer-aided diagnosis technology based on machine learning has been widely used in medical image analysis [5–14]. Because machine learning algorithms can train model parameters on various features of medical images and use the trained model to make predictions from extracted features, they handle classification, regression, and clustering problems in medical images well. At the same time, deep learning can obtain high-dimensional features directly from the data and automatically adjust model parameters through forward propagation and backpropagation, so that model performance on related tasks is optimized. Medical data processing with deep learning has therefore developed into a research hotspot.

Brain tumor segmentation methods can be roughly divided into three categories: those based on traditional image algorithms [15–20], those based on machine learning [21–24], and those based on deep learning [25–30]. In recent years, deep learning has become the method of choice for complex tasks because of its high accuracy. The convolutional neural network (CNN) proposed in [25] has driven tremendous progress in image processing, so CNN-based segmentation is widely used for lung nodules, the retina, liver cancer, and gliomas [26]. Many scholars have begun to apply CNNs to glioma segmentation: reference [31] proposes a brain cancer segmentation method based on a dual-path CNN; reference [32] trained two CNNs to segment high-grade and low-grade gliomas; and reference [33] proposed a two-channel three-dimensional CNN for glioma segmentation.

This paper studies glioma segmentation based on deep learning, aiming to automatically and accurately segment the glioma region from brain MRI. For this task, it proposes the DCNN-F-SVM deep classifier. The main contributions are as follows:
(1) A new deep classifier is proposed, composed of a deep convolutional neural network and an integrated SVM algorithm. First, the CNN is trained to learn the mapping from image space to tumor label space. The labels predicted by the CNN are then input, together with the test images, into an integrated SVM classifier. To make the results more accurate, the classification process is deepened by iterating these two steps again, forming a serial CNN-SVM framework.
(2) Whereas traditional segmentation methods train a single classifier on the training set and then verify it on the test set, the proposed model comprises three stages: first, preprocessing, feature extraction, and training of the CNN and SVM; second, testing and generation of the final segmentation results; third, deepening the CNN-SVM cascade classifier through an iterative step.
(3) The proposed model is evaluated on a public dataset and a self-made dataset. Compared with CNN and SVM used alone, its superiority is reflected in all evaluation indices.

2.1. Process of Brain Tumor Segmentation Algorithm Based on Deep Learning

Among currently proposed glioma segmentation methods, those based on traditional image processing rely heavily on manual intervention and require prior constraints to guarantee the segmentation effect, which makes them neither robust nor efficient. Glioma segmentation methods based on classical machine learning require manually selected image features, so their segmentation quality depends on hand-crafted features and the generalization ability of the algorithm itself is weak.

Glioma segmentation based on deep learning can automatically extract image features through a neural network model and segment the glioma region, so the shortcomings of strong prior constraints and manual intervention in the above methods are overcome. The automation and robustness of the segmentation algorithm are improved, and good results can be achieved in large-scale, complex glioma segmentation scenarios. Figure 2 shows the flow of a deep learning-based glioma segmentation algorithm. The process is as follows: first, obtain the patient's brain MRI and use it as the input data of the algorithm; then, divide the input data into training, validation, and test sets. Because of noise, intensity inhomogeneity, and other factors in the original brain MRI, the divided data must also be preprocessed; commonly used glioma image preprocessing steps include image registration, skull stripping, intensity standardization, and bias correction. Next, the preprocessed input data are used to train the deep learning model. During training, the deep model automatically performs feature extraction and propagates the extracted features forward through the designed model structure. At the same time, the multiregion glioma mask is used as the label to compute the loss value, so that the model parameters are adjusted by backpropagation over multiple iterations to optimize model performance. At the end of each iteration, different evaluation indices are used to evaluate the model, and models that meet the index conditions are saved. Finally, the best-evaluated model is used to segment the test set and obtain the final glioma segmentation results.
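For concreteness, this workflow can be sketched in Python with TensorFlow (the software stack listed in Table 3). The sketch below is illustrative only: the preprocessing shown is a simple z-score standardization, and the helper names preprocess_mri and train_segmentation_model are our hypothetical placeholders, not the code used in this study.

```python
import numpy as np
import tensorflow as tf

def preprocess_mri(volume):
    """Z-score intensity standardization, one of the preprocessing steps
    named above. Registration, skull stripping, and bias correction are
    normally done with dedicated tools and are omitted from this sketch."""
    v = volume.astype(np.float32)
    return (v - v.mean()) / (v.std() + 1e-8)

def train_segmentation_model(model, train_x, train_y, val_x, val_y, epochs=50):
    """Train a Keras model, saving only checkpoints that improve the
    validation loss, mirroring the 'save models that meet the index
    conditions' step described in the text."""
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    ckpt = tf.keras.callbacks.ModelCheckpoint(
        "best_model.h5", monitor="val_loss", save_best_only=True)
    model.fit(train_x, train_y, validation_data=(val_x, val_y),
              epochs=epochs, batch_size=8, callbacks=[ckpt])
    return model
```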

2.2. A Deep Brain Tumor Feature Generation Method

CNNs are well-known practical models in the field of deep learning, and their innovative ideas stem from the processing performed by neurons in the human brain. The neocognitron, proposed in 1980, is considered the original form of the convolutional neural network. Its precursor, the perceptron, is a classic model in machine learning, but it has serious limitations and cannot solve linearly inseparable problems such as XOR. On this basis, reference [34] proposed the LeNet model, which stacks multiple convolutional layers followed by fully connected layers and is trained with the backpropagation algorithm [35]. Reference [36] proposed a shift-invariant artificial neural network and studied the parallel structure of the convolutional neural network. However, these models were limited by the experimental data and hardware conditions of the time and were therefore unsuited to complex tasks such as object detection and scene classification. To address problems in training convolutional neural networks, Krizhevsky et al. proposed the AlexNet model [37], which introduced ReLU activations and dropout and thereby substantially alleviated the overfitting problem.

A CNN is essentially a multilayer perceptron, a multilayer neural network with a clear ordering of layers: an input layer, hidden layers, and an output layer. There can be multiple hidden layers, each composed of multiple two-dimensional planes, and each plane contains multiple neurons. The hidden layers consist of convolution layers, downsampling layers, and fully connected layers. Convolution and downsampling layers appear alternately and can be stacked, and there can also be multiple fully connected layers. The network structure of the classic convolutional neural network LeNet is shown in Figure 3.

In the convolution layer, the feature maps output by the previous layer are convolved with the learned convolution kernels, a bias term is added, and the result is passed through the activation function to form the output feature maps. The downsampling layer performs feature selection, retaining representative features. The fully connected layer is a neural network layer that maps the two-dimensional distributed features into feature vectors for better classification. The output layer is a simple classification layer, usually a logistic regression; here, the Softmax classifier is used.
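A LeNet-style layout like the one in Figure 3 can be written in a few lines of Keras. The layer sizes below follow the classic LeNet configuration and are illustrative assumptions; the paper does not specify its exact architecture.

```python
import tensorflow as tf

# LeNet-style network: alternating convolution and downsampling layers,
# followed by fully connected layers and a Softmax output. Layer sizes
# are illustrative, not the exact configuration used in this study.
def build_lenet_like(input_shape=(32, 32, 1), num_classes=2):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(6, 5, activation="relu"),   # convolution layer
        tf.keras.layers.MaxPooling2D(2),                   # downsampling layer
        tf.keras.layers.Conv2D(16, 5, activation="relu"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(120, activation="relu"),     # fully connected
        tf.keras.layers.Dense(84, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),  # Softmax
    ])
```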

The activation function is usually a nonlinear function, chosen so as to better fit nonlinear models; monotonicity and differentiability need to be considered when selecting it. Two common activation functions are (1) the ReLU function, $f(x) = \max(0, x)$, and (2) the Softplus function, $f(x) = \ln(1 + e^{x})$.
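As a quick check of these definitions, both functions are one-liners in NumPy (a small sketch; the function names are ours):

```python
import numpy as np

def relu(x):
    """ReLU: f(x) = max(0, x)."""
    return np.maximum(0.0, x)

def softplus(x):
    """Softplus: f(x) = ln(1 + e^x). np.logaddexp(0, x) computes
    log(exp(0) + exp(x)) in a numerically stable way."""
    return np.logaddexp(0.0, x)
```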

The CNN model structure is simpler and easier to extend than the neocognitron. In the neocognitron, downsampling and convolution-like layers alternate to perform feature extraction and abstraction; in the convolutional neural network, convolution layers and downsampling layers likewise alternate and serve similar functions. The convolution operation simplifies feature extraction, the activation function replaces the neocognitron's multiple nonlinear functions, and the pooling operation is also simpler. The CNN algorithm flow is shown in Figure 4.

2.3. Introduction of Brain Tumor Dataset

The BraTS Challenge, first held in 2012, provides a brain MRI dataset containing both low-grade and high-grade gliomas. The dataset provides MRI scans of multiple patients together with a multiregion glioma segmentation ground truth for each patient; the ground truth was produced by fusing the results of 20 segmentation algorithms and then manually revised by multiple human experts. Each BraTS competition provides a public glioma dataset. However, the datasets provided since BraTS17 differ significantly from those provided before 2016: the datasets used between BraTS14 and BraTS16 contain images of gliomas both before and after surgery, which makes the segmentation criteria in the dataset inconsistent and unsuitable as a true segmentation standard. The BraTS14 to BraTS16 datasets are therefore no longer used in the challenges from BraTS17 onward. The BraTS18 dataset extends the BraTS17 dataset with the TCIA glioma dataset, which includes images of 262 high-grade and 199 low-grade glioma patients. It contains the MRI scans and ground truth of 543 glioma patients and is currently the most standard glioma segmentation dataset. The details of the BraTS competition datasets over the years are shown in Table 1.


Dataset   Date   Training set   Validation set   Test set   Total

BraTS12   2012   30             10               25         65
BraTS13   2013   30             10               25         65
BraTS14   2014   40             10               25         65
BraTS15   2015   274            —                110        384
BraTS16   2016   274            —                191        465
BraTS17   2017   210            46               146        412
BraTS18   2018   285            67               191        543

As shown in Figure 5, gliomas are generally divided into four tumor regions: peritumoral edema (ED), nonenhancing tumor core (NET), enhancing tumor core (ET), and necrotic core (NCR). Among them, ED, NET, and NCR are actual glioma tissues, while the contrast enhancement of the tumor core serves to make the tumor core easier to observe.
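In BraTS-style annotations, these regions are encoded as integer labels in the ground-truth volume. The sketch below assumes the pre-2017 four-label convention (1 = NCR, 2 = ED, 3 = NET, 4 = ET); the exact encoding varies across dataset releases and should be checked against the version in use.

```python
import numpy as np

# Assumed label convention (pre-2017 BraTS releases):
#   1 = necrotic core (NCR), 2 = edema (ED),
#   3 = nonenhancing tumor (NET), 4 = enhancing tumor (ET).
def tumor_subregions(label_volume):
    whole_tumor = np.isin(label_volume, [1, 2, 3, 4])  # all tumor tissue
    tumor_core = np.isin(label_volume, [1, 3, 4])      # everything but edema
    enhancing = label_volume == 4                      # ET only
    return whole_tumor, tumor_core, enhancing
```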

2.4. Evaluation Method of Segmentation Result

The common indices for evaluating the performance of segmentation models are shown in Table 2.


Index                               Expression/description

True Positive (TP)                  Pixels predicted as glioma that the doctor also marked as glioma
False Positive (FP)                 Pixels predicted as glioma that actually belong to the background
True Negative (TN)                  Pixels predicted as background that actually belong to the background
False Negative (FN)                 Pixels predicted as background that actually belong to the tumor
Dice Similarity Coefficient (DSC)   $\mathrm{DSC} = \frac{2\,\mathrm{TP}}{2\,\mathrm{TP} + \mathrm{FP} + \mathrm{FN}}$
Sensitivity                         $\mathrm{Sensitivity} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}$
Specificity                         $\mathrm{Specificity} = \frac{\mathrm{TN}}{\mathrm{TN} + \mathrm{FP}}$

In addition to the above indices, there are others such as the Hausdorff distance and the positive predictive value. The most commonly used are DSC and sensitivity.
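The three indices reported in this paper can be computed directly from binary masks using the Table 2 definitions, as in the following sketch:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """DSC, sensitivity, and specificity from binary masks (Table 2).
    Assumes both masks contain at least one foreground and one
    background pixel, so no denominator is zero."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    tn = np.sum(~pred & ~truth)
    fn = np.sum(~pred & truth)
    dsc = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dsc, sensitivity, specificity
```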

3. The Proposed DCNN-F-SVM Model

This study proposes a brain tumor segmentation model based on a convolutional neural network fused with an SVM. Figure 6 shows the model flow chart.

The proposed model's segmentation of brain tumor images comprises two parts: the first is preprocessing, feature extraction, and training of the CNN and SVM; the second is testing and generating the final segmentation results. The process can be divided into three stages. In the first stage, the CNN and the integrated SVM are trained to obtain the mapping from the gray-image domain to the tumor-label domain. In the second stage, the labels output by the CNN and the test image are input into the integrated SVM classifier. In the third stage, an iterative step connects the CNN and the integrated SVM classifier, increasing the depth of the cascade. To select optimal features, an intermediate processing step is added to the model, as shown in Figure 7.

Grayscale, local mean, and local median values are used to represent each pixel. These features are used to train the CNN to obtain a nonlinear mapping between input features and labels. In the testing stage, the integrated SVM classifier is trained independently using the CNN label map and the same features as before.
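A minimal sketch of this per-pixel feature extraction, assuming SciPy's image filters and the 5×5 window size reported in Section 4.1 (the function name pixel_features is hypothetical):

```python
import numpy as np
from scipy import ndimage

def pixel_features(image, window=5):
    """Per-pixel features described above: raw grayscale, local mean,
    and local median over a window x window neighborhood."""
    gray = image.astype(np.float32)
    mean = ndimage.uniform_filter(gray, size=window)
    median = ndimage.median_filter(gray, size=window)
    # Stack into an (H*W, 3) feature matrix, one row per pixel.
    return np.stack([gray, mean, median], axis=-1).reshape(-1, 3)
```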

An iterative classification process is applied to the preprocessed input image. First, the CNN classifies the pixels in the key area, generating a presegmentation that is sent to the integrated SVM classifier. A Region Of Interest (ROI) is then generated from the presegmentation, and classification based on the integrated SVM is performed on this ROI in addition to the presegmentation. The integrated SVM thus explores the neighborhood of the CNN output. The CNN then classifies the marked ROI again, and the above steps are repeated to further refine the segmentation results.
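A highly simplified sketch of this iterative refinement follows. It assumes cnn_predict is a callable returning a per-pixel label map, substitutes a single RBF-kernel SVC for the integrated (ensemble) SVM, and picks an arbitrary dilation radius for the ROI; none of these choices are specified by the paper.

```python
import numpy as np
from scipy import ndimage
from sklearn.svm import SVC

def cascade_segment(image, cnn_predict, features, n_rounds=2):
    """Iterative CNN -> SVM refinement sketch.

    cnn_predict: callable mapping an image to an (H, W) integer label map
    features:    (H*W, d) per-pixel feature matrix, e.g. from pixel_features
    """
    labels = cnn_predict(image)  # CNN presegmentation
    for _ in range(n_rounds):
        # ROI: dilate the current tumor prediction to cover its neighborhood
        # (3-iteration dilation is an arbitrary illustrative choice).
        roi = ndimage.binary_dilation(labels > 0, iterations=3)
        idx = roi.reshape(-1)
        # Train the SVM on the CNN's labels over the ROI, then reclassify.
        svm = SVC(kernel="rbf")
        svm.fit(features[idx], labels.reshape(-1)[idx])
        refined = labels.reshape(-1).copy()
        refined[idx] = svm.predict(features[idx])
        labels = refined.reshape(labels.shape)
    return labels
```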

4. Simulation Experiment

4.1. Experimental Setup

The experimental data used in this study include a public dataset and a self-made dataset. The compared models are SVM, CNN, and DCNN-F-SVM. In the experimental parameter settings, the window size is set to 5 and a further parameter to 0.1. The public dataset used is the BraTS18 dataset; the self-made dataset consists of clinical MRI images of 26 patients. The evaluation indices used in the experiments are DSC, sensitivity, and specificity. The software and hardware environment is described in Table 3.


Hardware configuration                Software configuration
Configuration item   Parameter        Configuration item                Parameter

Operating system     Ubuntu 14.04     Development environment           PyCharm
CPU                  AMD A8-5600K     Programming language              Python
RAM                  16.0 GB          Image algorithm library           OpenCV
Video memory         479 MB           Deep learning algorithm library   TensorFlow

4.2. Public Dataset Experiment

After model training is complete, the model predicts on the test set to obtain its glioma segmentation results. On the test set divided by three-fold cross-validation, the evaluation indices of each model on the BraTS18 dataset are shown in Table 4. The data in the table show that the proposed model has better tumor segmentation performance than SVM and CNN. Compared with SVM, the proposed algorithm improves DSC, sensitivity, and specificity by 8.3%, 9.7%, and 1.4%, respectively; compared with CNN, it improves the three indices by 4.7%, 2.6%, and 0.2%, respectively.


Model        DSC      Sensitivity   Specificity

SVM          0.8268   0.8306        0.9845
CNN          0.8556   0.8876        0.9962
DCNN-F-SVM   0.8958   0.9110        0.9982

4.3. Self-Made Dataset Experiment

In this section, clinical MRI images of 26 patients were collected, the three models were trained and used to segment the brain tumors, and the experimental results are reported. Tables 5 and 6 show the segmentation results of CNN and DCNN-F-SVM for the 26 patients, respectively.


Number   DSC      Sensitivity   Specificity   Number   DSC      Sensitivity   Specificity

1        0.8801   0.9020        0.9563        14       0.8695   0.8896        0.9411
2        0.8768   0.8963        0.9368        15       0.8753   0.8976        0.9520
3        0.8893   0.9158        0.9605        16       0.8536   0.8729        0.9264
4        0.8682   0.8910        0.9482        17       0.8463   0.8667        0.9118
5        0.8926   0.9089        0.9795        18       0.8831   0.9053        0.9786
6        0.8796   0.8998        0.9385        19       0.8920   0.9107        0.9632
7        0.8859   0.9096        0.9543        20       0.8697   0.8896        0.9408
8        0.8633   0.8859        0.9386        21       0.8787   0.9006        0.9602
9        0.8828   0.9010        0.9715        22       0.8811   0.9120        0.9632
10       0.8989   0.9157        0.9634        23       0.8980   0.9234        0.9728
11       0.9003   0.9236        0.9726        24       0.8479   0.8752        0.9388
12       0.8429   0.8695        0.9367        25       0.8256   0.8610        0.9286
13       0.8396   0.8600        0.9302        26       0.8694   0.8887        0.9385


Number   DSC      Sensitivity   Specificity   Number   DSC      Sensitivity   Specificity

1        0.8923   0.9220        0.9663        14       0.8956   0.9222        0.9785
2        0.8867   0.9063        0.9368        15       0.8896   0.9185        0.9669
3        0.9091   0.9193        0.9702        16       0.8876   0.9104        0.9678
4        0.8782   0.9014        0.9588        17       0.8782   0.9086        0.9585
5        0.9026   0.9289        0.9795        18       0.9020   0.9103        0.9786
6        0.8998   0.9098        0.9405        19       0.9023   0.9123        0.9752
7        0.9056   0.9196        0.9743        20       0.8885   0.9116        0.9600
8        0.9030   0.9229        0.9696        21       0.8963   0.9205        0.9696
9        0.8927   0.9110        0.9711        22       0.9004   0.9287        0.9745
10       0.9126   0.9289        0.9806        23       0.9102   0.9258        0.9798
11       0.9185   0.9298        0.9885        24       0.8763   0.9115        0.9598
12       0.8789   0.9110        0.9605        25       0.8689   0.9088        0.9469
13       0.8825   0.9168        0.9693        26       0.8996   0.9305        0.9797

Among the index values shown in Table 5, the DSC values are generally distributed around 0.86, with a fluctuation of about 0.18; the sensitivity values are generally distributed around 0.89, with a fluctuation of about 0.14; and the specificity values are generally distributed around 0.95, with a fluctuation of about 0.11.

Among the index values shown in Table 6, the DSC values are generally distributed around 0.89, with a fluctuation of about 0.15; the sensitivity values are generally distributed around 0.91, with a fluctuation of about 0.12; and the specificity values are generally distributed around 0.96, with a fluctuation of about 0.09.

Table 7 shows the DSC, sensitivity, and specificity values of the three methods. The proposed DCNN-F-SVM improves on CNN and SVM used independently: its three indices (DSC, sensitivity, and specificity) are 3.5%, 2.6%, and 3.2% higher than those of SVM, and 1.6%, 0.9%, and 2.4% higher than those of CNN. The proposed model can indeed improve segmentation performance.


Method       DSC      Sensitivity   Specificity

SVM          0.8705   0.9001        0.9586
CNN          0.8869   0.9152        0.9657
DCNN-F-SVM   0.9010   0.9236        0.9889

5. Conclusion

The diagnosis of brain diseases requires accuracy without deviation, as any misdiagnosis can cause irreparable harm. The incidence of brain tumors among brain diseases has remained high, and the number of patients has increased year by year, which has also increased the workload of medical personnel in this field. An accurate and efficient brain tumor image segmentation method is therefore urgently needed to meet this growing demand. Against this background, this paper proposes a deep classifier to improve segmentation accuracy and achieve automatic segmentation without manual intervention. The classifier is mainly composed of a DCNN and an integrated SVM connected in series. The implementation of the model is divided into three stages. In the first stage, a deep convolutional neural network is trained to learn the mapping from image space to tumor marker space. In the second stage, the predicted labels obtained from training the deep convolutional neural network are input into the integrated support vector machine classifier together with the test images. In the third stage, the deep convolutional neural network and the integrated support vector machine are connected in series to train a deep classifier. Simulation experiments verified the superiority and effectiveness of the proposed model. However, the proposed model still has shortcomings, such as a long computation time; optimizing the algorithm and shortening the running time will be the focus of future work.

Data Availability

The labeled dataset used to support the findings of this study is available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This research was supported by the Program for China Northwest Cohort Study of the National Key Research and Development Program of China (Grant numbers 2017YFC0907200 and 2017YFC0907201) and Project of Birth Defect Control and Prevention in Shaanxi of the Shaanxi Health and Family Planning Commission (Grant number Sxwsjswzfcght2016-013).

References

  1. C. S. Muir, H. H. Storm, and A. Polednak, "Brain and other nervous system tumours," Cancer Surveys, vol. 19, no. 20, pp. 369–392, 1994.
  2. D. N. Louis, H. Ohgaki, O. D. Wiestler et al., "The 2007 WHO classification of tumours of the central nervous system," Acta Neuropathologica, vol. 114, no. 5, pp. 547–547, 2007.
  3. S. Bauer, R. Wiest, L. P. Nolte, and M. Reyes, "A survey of MRI-based medical image analysis for brain tumor studies," Physics in Medicine and Biology, vol. 58, no. 13, pp. R97–R129, 2013.
  4. R. J. Gillies, P. E. Kinahan, and H. Hricak, "Radiomics: images are more than pictures, they are data," Radiology, vol. 278, no. 2, pp. 563–577, 2016.
  5. S. Wang and R. M. Summers, "Machine learning and radiology," Medical Image Analysis, vol. 16, no. 5, pp. 933–951, 2012.
  6. P. Qian, H. Friel, M. S. Traughber et al., "Transforming UTE-mDixon MR abdomen-pelvis images into CT by jointly leveraging prior knowledge and partial supervision," IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2020.
  7. P. Qian, Y. Chen, J.-W. Kuo et al., "mDixon-based synthetic CT generation for PET attenuation correction on abdomen and pelvis jointly using transfer fuzzy clustering and active learning-based classification," IEEE Transactions on Medical Imaging, vol. 39, no. 4, pp. 819–832, 2020.
  8. Y. Jiang, K. Zhao, K. Xia et al., "A novel distributed multitask fuzzy clustering algorithm for automatic MR brain image segmentation," Journal of Medical Systems, vol. 43, no. 5, pp. 118:1–118:9, 2019.
  9. P. Qian, K. Xu, T. Wang et al., "Estimating CT from MR abdominal images using novel generative adversarial networks," Journal of Grid Computing, vol. 18, pp. 211–226, 2020.
  10. K. Xia, X. Zhong, L. Zhang, and J. Wang, "Optimization of diagnosis and treatment of chronic diseases based on association analysis under the background of regional integration," Journal of Medical Systems, vol. 43, no. 3, pp. 46:1–46:8, 2019.
  11. P. Qian, C. Xi, M. Xu et al., "SSC-EKE: semi-supervised classification with extensive knowledge exploitation," Information Sciences, vol. 422, pp. 51–76, 2018.
  12. Y. Jiang, Z. Deng, F.-L. Chung et al., "Recognition of epileptic EEG signals using a novel multiview TSK fuzzy system," IEEE Transactions on Fuzzy Systems, vol. 25, no. 1, pp. 3–20, 2017.
  13. P. Qian, J. Zhou, Y. Jiang et al., "Multi-view maximum entropy clustering by jointly leveraging inter-view collaborations and intra-view-weighted attributes," IEEE Access, vol. 6, pp. 28594–28610, 2018.
  14. Y. Jiang, D. Wu, Z. Deng et al., "Seizure classification from EEG signals using transfer learning, semi-supervised learning and TSK fuzzy system," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 25, no. 12, pp. 2270–2284, 2017.
  15. A. Stadlbauer, E. Moser, S. Gruber et al., "Improved delineation of brain tumors: an automated method for segmentation based on pathologic changes of 1H-MRSI metabolites in gliomas," NeuroImage, vol. 23, no. 2, pp. 454–461, 2004.
  16. W. Deng, W. Xiao, H. Deng, and J. Liu, "MRI brain tumor segmentation with region growing method based on the gradients and variances along and inside of the boundary curve," in 2010 3rd International Conference on Biomedical Engineering and Informatics, vol. 1, pp. 393–396, Yantai, China, 2010.
  17. D. Jayadevappa, S. S. Kumar, and D. S. Murty, "A hybrid segmentation model based on watershed and gradient vector flow for the detection of brain tumor," International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 2, no. 3, pp. 29–42, 2009.
  18. M. Prastawa, E. Bullitt, S. Ho, and G. Gerig, "A brain tumor segmentation framework based on outlier detection," Medical Image Analysis, vol. 8, no. 3, pp. 275–283, 2004.
  19. A. Gooya, K. M. Pohl, M. Bilello et al., "GLISTR: glioma image segmentation and registration," IEEE Transactions on Medical Imaging, vol. 31, no. 10, pp. 1941–1954, 2012.
  20. D. Kwon, R. T. Shinohara, H. Akbari, and C. Davatzikos, "Combining generative models for multifocal glioma segmentation and registration," in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 763–770, Springer, Cham, 2014.
  21. H. Khotanlou, O. Colliot, J. Atif, and I. Bloch, "3D brain tumor segmentation in MRI using fuzzy classification, symmetry analysis and spatially constrained deformable models," Fuzzy Sets and Systems, vol. 160, no. 10, pp. 1457–1473, 2009.
  22. S. Bauer, L. P. Nolte, and M. Reyes, "Fully automatic segmentation of brain tumor images using support vector machine classification in combination with hierarchical conditional random field regularization," in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 354–361, Springer, Berlin, 2011.
  23. E. Geremia, B. H. Menze, and N. Ayache, "Spatial decision forests for glioma segmentation in multi-channel MR images," MICCAI Challenge on Multimodal Brain Tumor Segmentation, Springer, 2012.
  24. L. Le Folgoc, A. V. Nori, S. Ancha, and A. Criminisi, "Lifted auto-context forests for brain tumour segmentation," in International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, pp. 171–183, Springer, Cham, 2016.
  25. Y. LeCun and Y. Bengio, "Convolutional networks for images, speech, and time series," in The Handbook of Brain Theory and Neural Networks, vol. 3361, no. 10, 1995.
  26. Z. Zhang and E. Sejdić, "Radiological images and machine learning: trends, perspectives, and prospects," Computers in Biology and Medicine, vol. 108, no. 6, pp. 354–370, 2019.
  27. D. Zikic, Y. Ioannou, M. Brown, and A. Criminisi, "Segmentation of brain tumor tissues with convolutional neural networks," Proceedings MICCAI-BRATS, vol. 36, pp. 36–39, 2014.
  28. P. Dvořák and B. Menze, "Local structure prediction with convolutional neural networks for multimodal brain tumor segmentation," in International MICCAI Workshop on Medical Computer Vision, pp. 59–71, Springer, Cham, 2015.
  29. M. Havaei, A. Davy, D. Warde-Farley et al., "Brain tumor segmentation with deep neural networks," Medical Image Analysis, vol. 35, pp. 18–31, 2017.
  30. S. Pereira, A. Pinto, V. Alves et al., "Deep convolutional neural networks for the segmentation of gliomas in multi-sequence MRI," in BrainLes 2015, pp. 131–143, Springer, Cham, 2015.
  31. E. Shelhamer, J. Long, and T. Darrell, "Fully convolutional networks for semantic segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 4, pp. 640–651, 2017.
  32. K. Kamnitsas, C. Ledig, V. F. J. Newcombe et al., "Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation," Medical Image Analysis, vol. 36, pp. 61–78, 2017.
  33. H. Dong, G. Yang, F. Liu, Y. Mo, and Y. Guo, "Automatic brain tumor detection and segmentation using U-Net based fully convolutional networks," in Annual Conference on Medical Image Understanding and Analysis, pp. 506–517, Springer, Cham, 2017.
  34. Y. LeCun, B. E. Boser, J. S. Denker et al., "Handwritten digit recognition with a back-propagation network," in Advances in Neural Information Processing Systems, Morgan Kaufmann, 1990.
  35. R. Hecht-Nielsen, "Theory of the backpropagation neural network," in International 1989 Joint Conference on Neural Networks, vol. 1, pp. 593–605, Washington, DC, USA, 1989.
  36. W. Zhang, K. Itoh, J. Tanida, and Y. Ichioka, "Parallel distributed processing model with local space-invariant interconnections and its optical architecture," Applied Optics, vol. 29, no. 32, pp. 4790–4797, 1990.
  37. A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, vol. 25, no. 2, pp. 1097–1105, 2012.

Copyright © 2020 Wentao Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


More related articles

 PDF Download Citation Citation
 Download other formatsMore
 Order printed copiesOrder
Views3220
Downloads782
Citations

Related articles

Article of the Year Award: Outstanding research contributions of 2020, as selected by our Chief Editors. Read the winning articles.