Deep Learning as a Gateway to Predictive, Preventive, Personalized, and Participatory Medicine (Special Issue)
Research Article | Open Access
Han Jiao, Xinhua Jiang, Zhiyong Pang, Xiaofeng Lin, Yihua Huang, Li Li, "Deep Convolutional Neural Networks-Based Automatic Breast Segmentation and Mass Detection in DCE-MRI", Computational and Mathematical Methods in Medicine, vol. 2020, Article ID 2413706, 12 pages, 2020. https://doi.org/10.1155/2020/2413706
Deep Convolutional Neural Networks-Based Automatic Breast Segmentation and Mass Detection in DCE-MRI
Breast segmentation and mass detection in medical images are important for diagnosis and treatment follow-up. Automating these challenging tasks can assist radiologists by reducing the heavy manual workload of breast cancer analysis. In this paper, deep convolutional neural networks (DCNN) were employed for breast segmentation and mass detection in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). First, the breast region was segmented from the remaining body parts by a fully convolutional neural network based on U-Net++. Extracting the target area with a deep learning model helps reduce interference from regions outside the breast. Second, a Faster Region-based Convolutional Neural Network (Faster RCNN) was used for mass detection on the segmented breast images. The DCE-MRI dataset used in this study was obtained from 75 patients, and a 5-fold cross validation method was adopted. Breast region segmentation was evaluated by computing the Dice similarity coefficient (DSC), Jaccard coefficient, and segmentation sensitivity. Breast mass detection was evaluated by the sensitivity as a function of the number of false positives per case. The Dice coefficient, Jaccard coefficient, and segmentation sensitivity for breast region segmentation were 0.951, 0.908, and 0.948, respectively, all better than those of the original U-Net algorithm, and the average sensitivity for mass detection reached 0.874 at 3.4 false positives per case.
Breast cancer is one of the most common cancers among women worldwide. Early diagnosis and treatment are proven to reduce the mortality rate. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a more reliable tool for early detection of breast cancer than mammography and ultrasound. Correct interpretation of breast DCE-MRI depends largely on the quality of visualization, operator experience, and the time available for data analysis. Because manual analysis of MR series is time-consuming and error-prone, several dedicated systems have been developed to help radiologists detect and diagnose breast lesions, greatly improving clinicians' work efficiency. Although some computer-aided diagnosis (CAD) systems are currently used in the clinic, fully automatic detection of breast lesions remains an open problem.
Generally, a DCE-MRI image of the breast also includes other organs such as the lung, heart, liver, and pectoral muscles. Separation of the breast region from these organs is necessary for further analysis. Manual segmentation of the breast region is tedious; it is therefore impractical to segment the entire breast region from a large number of diverse datasets. Automatic segmentation of the breast region reduces result bias and can accelerate data processing. Conventional breast region segmentation methods proposed in the literature are based on thresholding, morphology, and fuzzy connectedness between pixels. Thakran et al. segmented the outer and inner breast tissue by thresholding, morphological operations, and B-spline curve fitting. Wu et al. combined edge enhancement and edge linking with candidate evaluation to detect the chest-wall line in the sagittal plane. Jiang et al. used dynamic programming combined with preprocessing to segment the breast region in fat-suppressed transverse DCE-MRI. While these traditional methods have shown good performance, their robustness is insufficient: because they rely only on low-level image information, they tend to fail on certain data. In recent years, convolutional neural networks (CNNs) such as FCN, SegNet, and U-Net have been widely used in biomedical semantic segmentation. These deep networks can automatically extract high-level features and perform pixel-wise segmentation. Xu et al. used a 2D U-Net for automatic breast region segmentation in DCE-MRI. Adoui et al. built two fully convolutional networks based on SegNet and U-Net for breast tumor segmentation. Building on the success of U-Net in medical image segmentation, this study used U-Net++ for breast region segmentation in DCE-MRI and compared it with U-Net. Our results indicate the effectiveness and accuracy of this method in biomedical segmentation tasks.
Because of the varying sizes, shapes, appearances, and densities of masses, CAD for breast mass detection is a challenging task [15, 16]. Conventional methods for breast mass detection mainly rely on threshold values or mass templates based on various filter operators. Huang et al. used multiscale Hessian-based analysis for breast mass detection, and Wang et al. detected breast masses based on Gestalt psychology. These traditional detection algorithms are sensitive to image noise, and their hand-designed features are not sufficiently robust to blurred contrast and bias fields. Deep convolutional neural networks (DCNNs) have significantly outperformed traditional methods in recent years, owing to their strong feature expression ability, which can further improve detection accuracy. With the development of deep learning, DCNNs have been widely used in medical image detection [21–23], and such CAD systems can be transferred between tasks, for example between breast cancer and lung cancer detection. A CAD system assists the radiologist as a second reviewer when evaluating screening medical images. CAD systems based on one of the most successful object detection frameworks, Faster RCNN, have achieved high sensitivity with few false positives on mammogram and tomosynthesis datasets [25, 26]. In the field of breast MRI, the development of deep learning frameworks is limited because only a small number of datasets are available. Against this background, we applied Faster RCNN to our DCE-MRI datasets for breast mass detection.
In this paper, DCNN-based frameworks are presented for computer-aided segmentation of the breast region and detection of breast masses. Breast region segmentation was first performed by U-Net++ to remove all organs except the breast region and thus eliminate interference; U-Net was also implemented as a comparison segmentation framework. Given the limited size of our datasets, data augmentation was used during training to reduce overfitting. Breast mass detection was implemented by training Faster RCNN on images preprocessed by the segmentation network, with labels annotated by radiologists. The proposed methods were validated by a 5-fold cross validation technique to obtain more reliable results than an independent test. Performance of the automatic segmentation was evaluated using similarity coefficients. The breast mass detection model was evaluated using sensitivity versus the number of false positives per case.
2. Materials and Methods
The dataset consisted of 75 women (mean age, 47 years; age range, 18–68 years) with histopathologically confirmed breast masses at the Sun Yat-sen University Cancer Center (Guangzhou, China). Patients with suspicious breast masses were recruited after they provided written informed consent. The Ethics Committee of Sun Yat-sen University Cancer Center approved the study. Patients were scanned in the prone position with the bilateral breasts naturally hanging into the two holes of the coil. A 3.0 T superconductive magnetic system (Discovery 750, GE Healthcare) with a bilateral 8-channel phased-array breast-specific surface coil was used for imaging. Standard imaging was performed, including axial fast spin echo (FSE) T1WI, and axial and sagittal FSE T2WI. Subsequently, diffusion-weighted images (DWI) were acquired in the axial planes. The DCE-MRI data were acquired using the VIBRANT-FLEX technique in the axial orientation. After one set of unenhanced baseline images was obtained, an MRI-specific automatic power injector (Medrad Inc., Pittsburgh, PA, USA) was used to inject 0.1 mmol/kg body weight of contrast medium (gadopentetate dimeglumine; Magnevist, Bayer Schering Pharma, Berlin, Germany) via hand venipuncture at a rate of 3 mL/s, followed by a 10 mL saline flush at 3 mL/s. Dynamic scanning was initiated by simultaneously pushing the high-pressure syringe button and the dynamic scan button. Eight postcontrast sets were acquired under the following scanning conditions: field of view, 32 cm; matrix, 320 × 320; section thickness without a gap, 1.4 mm; repetition time, 3.9 ms; echo time, 1.7 ms; and flip angle, 5°. The patients had not received any treatment before undergoing MRI. In this study, only mass-like lesions with strong contrast enhancement were included.
Diagnosis was confirmed by pathological analysis following core-needle biopsy or surgical excision; lesions that remained unchanged over a minimum follow-up of 2 years were classified as benign.
All images were analyzed independently by two radiologists, each with ten years of experience in breast MRI; any disagreements were resolved by consensus. All lesions were assessed using the Breast Imaging Reporting and Data System (BI-RADS). BI-RADS category 1 (negative) and category 2 (benign) denote an essentially 0% likelihood of cancer. BI-RADS category 3 (probably benign) is assigned to a unique focal finding for which the likelihood of malignancy is greater than 0% but no more than 2%. BI-RADS category 4 (suspicious) and category 5 (highly suggestive of malignancy) describe MRI findings suspicious enough to warrant tissue diagnosis. BI-RADS category 6 (known biopsy-proven malignancy) describes MRI findings of biopsy-proven breast cancer for which surgical excision is recommended when clinically appropriate.
The objective of this study was to detect masses; therefore, all lesions with a BI-RADS category greater than 1 were included in the dataset. Mass diameters in the dataset ranged from 1.07 to 6.69 cm.
Our fully automatic method consists of two parts: breast region segmentation and breast mass detection. Automatic segmentation of breast region is a challenging task because of large variations in breast shapes, sizes, image artifacts, and other noise-induced errors. Inspired by the success of DCNN models on object segmentation tasks, we implemented a U-Net++ framework for breast region segmentation, along with a U-Net framework for comparison. Next, we obtained images of the breast region without interference from other organs/body parts as the input for Faster RCNN to detect breast masses. Finally, the positions and sizes of the breast masses were identified. The whole framework of this proposed method is shown in Figure 1.
U-Net was used in breast region segmentation for comparison, and its architecture is shown in Figure 2. One advantage of this method is its robustness even with small training datasets. The architecture consists of a downsampling path (left side) and an upsampling path (right side). The left side acts as an encoder and extracts features through the network. It contains a typical convolutional structure: two 3 × 3 convolution operations and one 2 × 2 max-pooling operation. Each convolution operation is followed by batch normalization (BN) and a rectified linear unit (ReLU). The right side acts as a decoder with the mirrored structure: a 2 × 2 upsampling operation and two 3 × 3 convolution operations, each followed by BN and a ReLU. Skip connections combine the high-resolution feature maps of the encoder with the upsampling outputs to accurately classify and locate each pixel. Every pixel of the input image is classified as breast region or background. To evaluate the segmentation loss, we used the binary cross entropy (BCE) loss function

$$L_{\mathrm{BCE}} = -\frac{1}{M}\sum_{i=1}^{M}\bigl[y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)\bigr],$$

where $y_i$ is the ground truth and $\hat{y}_i$ is the predicted probability for each of the $M$ pixels.
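The BCE loss above reduces to a few lines of plain Python. The following is an illustrative reimplementation, not the authors' code; in practice, a deep learning framework's built-in binary cross entropy would be used:

```python
import math

def bce_loss(y_true, y_pred, eps=1e-7):
    """Mean binary cross entropy over M pixels.

    y_true: ground-truth labels (0 or 1), one per pixel.
    y_pred: predicted breast-region probabilities in (0, 1).
    """
    assert len(y_true) == len(y_pred)
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        total += y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
    return -total / len(y_true)
```

Confident predictions on the correct class drive the loss toward zero, while confident wrong predictions are penalized heavily.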
U-Net++ is an improved version of U-Net for biomedical image segmentation. At the skip connections of U-Net, U-Net++ adds convolution modules that integrate features from different levels through superposition. Further, to ensure gradient propagation, U-Net++ uses a deep supervision scheme that connects the intermediate modules to the final output, forming a dense block structure as shown in Figure 3. The loss function is a combination of binary cross entropy and the Dice coefficient, evaluated at each of the four semantic levels:

$$\mathcal{L}(Y, \hat{Y}) = -\frac{1}{N}\sum_{b=1}^{N}\left(\frac{1}{2}\,Y_b \log \hat{Y}_b + \frac{2\,Y_b \hat{Y}_b}{Y_b + \hat{Y}_b}\right),$$

where $Y_b$ and $\hat{Y}_b$ denote the flattened ground truths and the flattened predicted probabilities of the $b$-th image, respectively, and $N$ indicates the batch size.
The object detection network Faster RCNN is composed of two modules: a deep fully convolutional network that proposes regions, and a detector that uses the proposed regions. A feature map is extracted from the input image by ResNet-101, which has been proven to perform well on classification tasks and therefore has strong feature extraction ability. A region proposal network (RPN) takes the feature map as input and outputs a set of rectangular object proposals, each with an objectness score. The original RPN structure is used in this paper without modification. ROI Align maps regions of interest (ROI) of different sizes to feature maps of fixed size and effectively avoids the misalignment caused by the two quantization steps in ROI Pooling. For large objects the difference between the two schemes is negligible, but ROI Align is more accurate when the image contains many small objects. In this paper, the target masses in breast MRI images have diameters of less than 48 pixels and therefore count as small targets, so ROI Align was selected for the detection network. The feature map is shared by the RPN and the fully connected (FC) layers. The structure is shown in Figure 4.
Two loss terms were computed: a regression loss $L_{reg}$ over the predicted mass center coordinates, width, and height, and a classification loss $L_{cls}$ for the predicted mass probabilities. The ground truth label is determined for each anchor as follows: if an anchor overlaps a mass with an intersection over union (IOU) greater than 0.7, it is regarded as positive ($p_i^* = 1$); conversely, if an anchor overlaps every mass with an IOU of less than 0.3, it is regarded as negative ($p_i^* = 0$). All other anchors do not contribute to the loss, and only positive anchors contribute to the regression loss. The final loss function is defined as

$$L(\{p_i\}, \{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \lambda\,\frac{1}{N_{reg}}\sum_i p_i^* L_{reg}(t_i, t_i^*).$$

Here, $p_i$ is the predicted probability that anchor $i$ contains a mass, $t_i$ is a vector representing the four parameterized coordinates of the predicted bounding box, and $t_i^*$ is the ground truth associated with a positive anchor. We used binary cross entropy loss for $L_{cls}$ and smooth L1 loss for $L_{reg}$.
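The anchor labeling rule can be illustrated with a small IoU computation. This is a self-contained sketch; boxes are represented here as (x1, y1, x2, y2) corner tuples, which is our convention for illustration, not necessarily the authors':

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def anchor_label(anchor, gt_boxes):
    """Return 1 (positive), 0 (negative), or None (ignored in the loss)."""
    best = max(iou(anchor, gt) for gt in gt_boxes)
    if best > 0.7:
        return 1   # contributes to both L_cls and L_reg
    if best < 0.3:
        return 0   # contributes to L_cls only
    return None    # does not contribute to the loss
```

Anchors in the 0.3–0.7 IoU band are deliberately left unlabeled so that ambiguous overlaps do not pull the classifier in either direction.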
3.1. Performance Evaluation
The performance of the proposed method for breast region segmentation was tested using the Dice similarity coefficient (DSC), Jaccard coefficient, and segmentation sensitivity described by Udupa et al., which are given by

$$\mathrm{DSC} = \frac{2\,|A \cap B|}{|A| + |B|}, \qquad \mathrm{Jaccard} = \frac{|A \cap B|}{|A \cup B|}, \qquad \mathrm{Sensitivity} = \frac{|A \cap B|}{|B|},$$

where A is the automatic segmentation result and B is the ground truth.
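On binary masks these three overlap measures reduce to simple counts; a minimal sketch for flattened 0/1 masks (illustrative only):

```python
def seg_metrics(auto_mask, gt_mask):
    """DSC, Jaccard, and sensitivity between a binary automatic
    segmentation A and ground truth B, both given as flattened
    0/1 lists of equal length."""
    a = sum(auto_mask)                                   # |A|
    b = sum(gt_mask)                                     # |B|
    inter = sum(x * y for x, y in zip(auto_mask, gt_mask))  # |A ∩ B|
    union = a + b - inter                                # |A ∪ B|
    dsc = 2.0 * inter / (a + b)
    jaccard = inter / union
    sensitivity = inter / b
    return dsc, jaccard, sensitivity
```

Note that DSC and Jaccard are monotonically related (DSC = 2J / (1 + J)), but both are commonly reported for comparability with prior work.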
For target detection tasks, performance is measured by sensitivity together with the average number of false positives per sample. In breast mass detection, the sensitivity, also known as the true positive rate (TPR), is the proportion of detected masses among all masses in the dataset, computed from the true positives (TP) and false negatives (FN) as $\mathrm{TPR} = TP / (TP + FN)$; the false positives (FP) determine the false positive rate per case. Note that true negatives (TN) are not meaningful in target detection tasks.
The receiver operating characteristic (ROC) curve was invented by electronics and radar engineers during World War II to detect enemy vehicles on the battlefield, i.e., signal detection theory. Soon afterwards, it was introduced into psychology to study signal perception, and it has since become a very important and common statistical analysis method for evaluating classification and test results in machine learning. However, the classical ROC method cannot evaluate target detection tasks, in which an image may contain any number of findings. In the 1970s, the free-response ROC (FROC) was therefore proposed, which allows the evaluation of an arbitrary number of anomalies on each image. The FROC curve plots the number of false positives per sample on the x-axis and the recall on the y-axis; the closer the curve is to the upper left corner, the better the performance of the model.
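FROC points can be obtained by sweeping a confidence threshold over the scored detections; a minimal sketch, assuming each detection has already been matched against the ground truth (this is illustrative, not the authors' evaluation code):

```python
def froc_points(detections, n_masses, n_cases):
    """Compute (false positives per case, sensitivity) pairs by
    sweeping the confidence threshold from high to low.

    detections: list of (score, is_true_positive) pairs, where the
                TP/FP decision per detection is assumed precomputed.
    n_masses:   total number of ground-truth masses in the dataset.
    n_cases:    number of cases (normalizer for the x-axis).
    """
    points = []
    tp = fp = 0
    # Lowering the threshold admits one more detection at a time.
    for score, is_tp in sorted(detections, key=lambda d: -d[0]):
        if is_tp:
            tp += 1
        else:
            fp += 1
        points.append((fp / n_cases, tp / n_masses))
    return points
```

Reporting "sensitivity at k false positives per case" then amounts to reading the curve at x = k.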
4.1. Breast Segmentation
Owing to the variability of breast MRI, traditional methods cannot always segment the breast region well. Therefore, in this study, we adopted a deep learning method for breast region segmentation.
We performed breast region segmentation by training a U-Net++ model, which removes interference external to the breast region across the different DCE-MRI series. We used the DCE-MRI dataset of 75 patients. For the segmentation task, we created breast region labels for each patient (Figure 5). Because consecutive images in a DCE-MRI series are similar, we created one breast region label for every 10 consecutive images. We employed patient-level 5-fold cross validation, with each subset containing 15 cases of DCE-MRI. Moreover, to illustrate the robustness of our algorithm, we trained the model on four subsets and validated it on the remaining subset. For data augmentation, the training sets were flipped horizontally. We trained the U-Net++ model, and the U-Net model for comparison, for 50 epochs using stochastic gradient descent (SGD) as the optimizer, and used the last epoch to predict the validation datasets. The batch size was set to five given the limitation of GPU memory. The training processes of U-Net and U-Net++ are shown in Figures 6 and 7, respectively, which show the final convergence of the networks.
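Patient-level splitting assigns whole patients, rather than individual slices, to folds, so that no patient's images appear in both training and validation. A minimal sketch of such a split (illustrative only; the authors' actual split procedure is not given in code):

```python
import random

def patient_level_folds(patient_ids, k=5, seed=0):
    """Split patient IDs into k disjoint folds so that every slice of
    a given patient lands in the same fold, avoiding train/validation
    leakage from near-duplicate consecutive slices."""
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)  # fixed seed for reproducibility
    return [ids[i::k] for i in range(k)]
```

With 75 patients and k = 5, each fold holds 15 patients, matching the subsets described above; each cross validation round then trains on four folds and validates on the fifth.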
Segmentation performance was validated by DSC, Jaccard coefficient, and segmentation sensitivity on the validation dataset. For U-Net++, the averages of these three measures over all validation images were 0.951, 0.908, and 0.948, respectively, better than those of U-Net. Evaluation results are provided in Tables 1 and 2, where Exp 1 to 5 are the experiments of the 5-fold cross validation. For each experiment, the mean DSC, Jaccard coefficient, and segmentation sensitivity between the segmentation results and the ground truths of the 15 cases were calculated, and the average performance over the five folds was then assessed. The breast region segmentation results on the validation dataset are shown in Figure 8. In terms of overall segmentation quality, U-Net++ was more complete and robust than U-Net.
4.2. Mass Detection
Encouraged by the overall success of Faster RCNN and deep residual networks on natural images, we applied them to breast mass detection. To eliminate redundant information and external noise, we first used the well-trained U-Net++ model to preprocess the images, so that the training and validation datasets contained only the breast region. All slices containing a mass were extracted, and the center coordinates, width, and height of each mass on the image were marked for training. For validation, each whole case was segmented and processed. The detection results produced by the network were mapped into three-dimensional space, and bounding boxes close to each other in that space were merged into one candidate. A candidate was counted as a TP if its IOU with the ground truth was greater than 0.5, and as an FP otherwise. We employed patient-level 5-fold cross validation, with each subset containing 15 cases of DCE-MRI. Further, to illustrate the robustness of our algorithm, we trained the model on 4 subsets and validated it on the remaining subset. We trained for 30 epochs in total with SGD optimization and a momentum of 0.9. Because of the limitation of GPU memory, the batch size was set to 8, and weight decay was applied. The initial learning rate was 0.001 and was multiplied by 0.1 every 10 epochs. Before training the network, the weights of a pretrained ResNet-101 were loaded for transfer learning, which effectively accelerated convergence. The training process of the network is shown in Figure 9, which shows the final convergence of the network.
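The merging of per-slice detections into 3D candidates can be sketched as a greedy grouping by slice index and box center. This is an illustrative guess at the procedure: the tolerance values (`center_tol`, `slice_gap`) and the exact merging rule used by the authors are not specified in the text:

```python
def merge_slice_detections(dets, center_tol=16, slice_gap=1):
    """Greedily merge per-slice 2D detections into 3D candidates.

    dets: list of (slice_index, x_center, y_center, score) tuples.
    Detections on the same or adjacent slices whose centers fall
    within center_tol pixels of an existing candidate are absorbed
    into it; each candidate keeps its best confidence score.
    """
    candidates = []  # each: {"z", "x", "y", "score"} of the latest box
    for z, x, y, score in sorted(dets):  # process in slice order
        for c in candidates:
            if (z - c["z"] <= slice_gap
                    and abs(x - c["x"]) <= center_tol
                    and abs(y - c["y"]) <= center_tol):
                c.update(z=z, x=x, y=y, score=max(score, c["score"]))
                break
        else:
            candidates.append({"z": z, "x": x, "y": y, "score": score})
    return candidates
```

Grouping in 3D prevents a single mass that spans several slices from being counted as multiple detections (or multiple false positives).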
We validated the Faster RCNN model on the validation dataset, and the average mass-level sensitivity reached 0.874 at 3.4 false positives per case. A comparison of several detection results is shown in Figure 10. The network outputs the specific location and size of each detected mass, together with its probability. For the 5-fold cross validation, Figure 11 shows the FROC curve of each experiment and Figure 12 shows the average FROC curve. Because the dataset is small, a few samples that differ markedly from the other cases could distort the experimental results; cross validation was therefore used to make the results more convincing on a small dataset. The performance of the curves supports the proposed breast mass detection method as a useful tool to assist doctors with diagnosis.
We proposed a DCNN-based method to effectively segment the breast region and detect breast masses in DCE-MRI images. The success of the proposed method mainly depends on two aspects. First, breast region segmentation is necessary because organs outside the breast occupy most of the image area and greatly interfere with mass detection. U-Net++ is a fully convolutional network applicable to various biomedical segmentation problems. The network contains convolutional layers and max-pooling layers and has no fully connected layers; in particular, the dense blocks along the skip connections make full use of the extracted features. Owing to the limited size of the dataset, 5-fold cross validation was adopted. We trained U-Net++ on our own data for 50 epochs, and the DSC, Jaccard, and segmentation sensitivity values obtained were 0.951, 0.908, and 0.948, respectively, better than those obtained with U-Net. The experimental results showed that U-Net++ segmentation of the breast region was more precise and complete than that of U-Net. As seen in Figure 8, the segmentation at the end of the breast region was not very accurate, but the key areas were well preserved, which was helpful for mass detection. Second, Faster RCNN is a flexible generic object detection framework that is easily extended to biomedical detection tasks. This study used ResNet-101 as the backbone of Faster RCNN to extract the feature map from the input image; two parallel branches, bounding box regression and classification, then share this feature map. We used the well-trained U-Net++ model to preprocess the training dataset and eliminate interference. Before training the network, the weights of a pretrained ResNet-101 were loaded for transfer learning, which effectively accelerated convergence. In the performance evaluation, Faster RCNN accurately located breast masses and provided corresponding confidence information.
Using 5-fold cross validation, the average sensitivity reached 0.874 at 3.4 false positives per case. This suggests that our results may have potential value for the early diagnosis and treatment of breast cancer, although more clinical cases are needed for verification.
Our study has some limitations, but it verifies the feasibility of the method on our dataset, which is relatively small. Future research includes detection from multiple orientations, using 3D CNNs to extract 3D feature maps, and establishing a large-scale open dataset to allow convenient performance comparison of different algorithms.
Herein, we presented an automatic DCNN-based method for breast region segmentation and mass detection in DCE-MRI, consisting of breast region segmentation by U-Net++ and breast mass detection by Faster RCNN. The networks were trained and validated on DCE-MRI data from the Sun Yat-sen University Cancer Center and showed good performance. We believe that this method can provide a powerful clinical tool to help doctors diagnose breast masses accurately and quickly. Future work will include establishing a larger open dataset and incorporating the opinions of radiologists and surgeons in clinical practice.
Data Availability
The DCE-MRI data used to support the findings of this study were supplied by the Sun Yat-sen University Cancer Center (Guangzhou, China) under license and have not been made freely available because of patient privacy. Researchers interested in the dataset may contact the corresponding authors by e-mail.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Authors' Contributions
Han Jiao and Xinhua Jiang contributed equally to this study.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (no. 61771494) and the Science and Technology Planning Project of Guangdong Province (no. 2016A010104007). This work was also funded by the Science and Technology Planning Project of Guangdong Province, China (no. 2016B090918066) and the Science and Technology Program of Guangzhou, China (nos. 201704020060 and 201807010057).
- N. Howlader, A. M. Noone, M. Krapcho et al., SEER Cancer Statistics Review, 1975–2008, National Cancer Institute, Bethesda, MD, USA, 2011.
- A. Jemal, F. Bray, M. M. Center, J. Ferlay, E. Ward, and D. Forman, “Global cancer statistics,” CA: A Cancer Journal for Clinicians, vol. 61, no. 2, pp. 69–90, 2011.
- E. Warner, D. B. Plewes, R. S. Shumak et al., “Comparison of breast magnetic resonance imaging, mammography, and ultrasound for surveillance of women at high risk for hereditary breast cancer,” Journal of Clinical Oncology, vol. 19, no. 15, pp. 3524–3531, 2001.
- Z. Pang, D. Zhu, D. Chen, L. Li, and Y. Shao, “A computer-aided diagnosis system for dynamic contrast-enhanced MR images based on level set segmentation and relief feature selection,” Computational and Mathematical Methods in Medicine, vol. 2015, Article ID 450531, 10 pages, 2015.
- A. Gubern-Mérida, R. Martí, J. Melendez et al., “Automated localization of breast cancer in DCE-MRI,” Medical Image Analysis, vol. 20, no. 1, pp. 265–274, 2015.
- S. Thakran, S. Chatterjee, M. Singhal, R. K. Gupta, and A. Singh, “Automatic outer and inner breast tissue segmentation using multi-parametric MRI images of breast tumor patients,” PLoS One, vol. 13, no. 1, Article ID e0190348, 2018.
- S. Wu, S. P. Weinstein, E. F. Conant, M. D. Schnall, and D. Kontos, “Automated chest wall line detection for whole-breast segmentation in sagittal breast MR images,” Medical Physics, vol. 40, no. 4, Article ID 042301, 2013.
- L. Jiang, X. Hu, Q. Xiao, Y. Gu, and Q. Li, “Fully automated segmentation of whole breast using dynamic programming in dynamic contrast enhanced MR images,” Medical Physics, vol. 44, no. 6, pp. 2400–2414, 2017.
- J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 4, pp. 640–651, 2017.
- V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: a deep convolutional encoder-decoder architecture for image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 12, pp. 2481–2495, 2017.
- O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015), pp. 234–241, Munich, Germany, October 2015.
- X. Xu, L. Fu, Y. Chen et al., “Breast region segmentation being convolutional neural network in dynamic contrast enhanced MRI,” in Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 750–753, Honolulu, HI, USA, July 2018.
- M. Adoui, S. A. Mahmoudi, M. A. Larhmam, and M. Benjelloun, “MRI breast tumor segmentation using different encoder and decoder CNN architectures,” Computers, vol. 8, no. 3, p. 52, 2019.
- Z. Zhou, M. M. Rahman Siddiquee, N. Tajbakhsh, and J. Liang, “UNet++: a nested U-net architecture for medical image segmentation,” in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support (DLMIA 2018, ML-CDS 2018), Lecture Notes in Computer Science, vol. 11045, Springer, Cham, Switzerland, 2018.
- C. Gallego-Ortiz and A. L. Martel, “Improving the accuracy of computer-aided diagnosis for breast MR imaging by differentiating between mass and nonmass lesions,” Radiology, vol. 278, no. 3, pp. 679–688, 2015.
- N. Karssemeijer and G. M. te Brake, “Detection of stellate distortions in mammograms,” IEEE Transactions on Medical Imaging, vol. 15, no. 5, pp. 611–619, 1996.
- G. Kom, A. Tiedeu, and M. Kom, “Automated detection of masses in mammograms by local adaptive thresholding,” Computers in Biology and Medicine, vol. 37, no. 1, pp. 37–48, 2007.
- N. H. Eltonsy, G. D. Tourassi, and A. S. Elmaghraby, “A concentric morphology model for the detection of masses in mammography,” IEEE Transactions on Medical Imaging, vol. 26, no. 6, pp. 880–889, 2007.
- Y.-H. Huang, Y.-C. Chang, C.-S. Huang, J.-H. Chen, and R.-F. Chang, “Computerized breast mass detection using multi-scale hessian-based analysis for dynamic contrast-enhanced MRI,” Journal of Digital Imaging, vol. 27, no. 5, pp. 649–660, 2014.
- W. Hongyu, F. Jun, B. Qirong et al., “Breast mass detection in digital mammogram based on gestalt psychology,” Journal of Healthcare Engineering, vol. 2018, Article ID 4015613, 13 pages, 2018.
- W. Zhu, C. Liu, W. Fan, and X. Xie, “Deeplung: deep 3d dual path nets for automated pulmonary nodule detection and classification,” in Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 673–681, Lake Tahoe, NV, USA, March 2018.
- S. Albarqouni, C. Baur, F. Achilles, V. Belagiannis, S. Demirci, and N. Navab, “Aggnet: deep learning from crowds for mitosis detection in breast cancer histology images,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1313–1321, 2016.
- D. Selvathi and A. A. Poornila, “Deep Learning Techniques for Breast Cancer Detection Using Medical Image Analysis,” Biologically Rationalized Computing Techniques for Image Processing Applications, Springer, Berlin, Germany, 2018.
- S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 39, no. 6, pp. 1137–1149, 2017.
- D. Ribli, A. Horváth, Z. Unger, P. Pollner, and I. Csabai, “Detecting and classifying lesions in mammograms with deep learning,” Scientific Reports, vol. 8, no. 1, 2018.
- M. Fan, Y. Li, S. Zheng, W. Peng, W. Tang, and L. Li, “Computer-aided detection of mass in digital breast tomosynthesis using a faster region-based convolutional neural network,” Methods, vol. 166, pp. 103–111, 2019.
- K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, Las Vegas, NV, USA, June 2016.
- K. He, G. Gkioxari, P. Dollar, and R. Girshick, “Mask R-CNN,” in Proceedings of the 2017 IEEE international conference on computer vision (ICCV), IEEE, Venice, Italy, October 2017.
- L. R. Dice, “Measures of the amount of ecologic association between species,” Ecology, vol. 26, no. 3, pp. 297–302, 1945.
- P. Jaccard, “The distribution of the flora in the alpine Zone.1,” New Phytologist, vol. 11, no. 2, pp. 37–50, 1912.
- J. K. Udupa, V. R. Leblanc, Y. Zhuge et al., “A framework for evaluating image segmentation algorithms,” Computerized Medical Imaging and Graphics, vol. 30, no. 2, pp. 75–87, 2006.
Copyright © 2020 Han Jiao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.