Wireless Communications and Mobile Computing / 2020 / Research Article | Open Access

Special Issue: Mobile Intelligence Assisted by Data Analytics and Cognitive Computing 2020

Volume 2020 | Article ID 8893494 | https://doi.org/10.1155/2020/8893494

Mehedi Masud, Ghulam Muhammad, M. Shamim Hossain, Hesham Alhumyani, Sultan S. Alshamrani, Omar Cheikhrouhou, Saleh Ibrahim, "Light Deep Model for Pulmonary Nodule Detection from CT Scan Images for Mobile Devices", Wireless Communications and Mobile Computing, vol. 2020, Article ID 8893494, 8 pages, 2020. https://doi.org/10.1155/2020/8893494

Light Deep Model for Pulmonary Nodule Detection from CT Scan Images for Mobile Devices

Academic Editor: Yin Zhang
Received 27 May 2020
Revised 11 Jun 2020
Accepted 13 Jun 2020
Published 03 Jul 2020


The emergence of cognitive computing and big data analytics has revolutionized the healthcare domain, particularly in cancer detection. Lung cancer is one of the leading causes of death worldwide. Pulmonary nodules in the lung can become cancerous as they develop. Early detection of pulmonary nodules can lead to early treatment and a significant reduction in mortality. In this paper, we propose an end-to-end convolutional neural network- (CNN-) based automatic pulmonary nodule detection and classification system. The proposed CNN architecture has only four convolutional layers and is therefore light in nature. Each convolutional layer consists of two consecutive convolutional blocks, a connector convolutional block, nonlinear activation functions after each block, and a pooling block. The experiments are carried out using the Lung Image Database Consortium (LIDC) database. From the LIDC database, 1279 sample images are selected, of which 569 are noncancerous, 278 are benign, and the rest are malignant. The proposed system achieved 97.9% accuracy. Compared to other well-known CNN architectures, the proposed architecture has far fewer FLOPs and parameters and is thereby suitable for real-time medical image analysis.

1. Introduction

Due to the advancement of sophisticated machine learning algorithms, mobile computing, wireless communications [1, 2], and cognitive computing [3, 4], the healthcare industry has been booming in recent years. The traditional healthcare industry is gradually shifting towards the smart healthcare industry [5]. Smart healthcare enables patients to have their health problems diagnosed from home and to receive prescriptions and advice online, thereby saving the time spent on travel and appointments [6]. One of the major driving forces behind the rise of the smart healthcare industry is the invention of deep learning algorithms in the machine learning domain [7]. Deep learning has brought about a paradigm shift in machine learning. Over the last ten years, it has been used in numerous signal and image processing applications, including medical signals and images [8–10].

Lung cancer has become one of the leading causes of death worldwide. According to 2019 statistics from the American Cancer Society, more than 142K people died of lung and bronchus cancer and more than 228K people were diagnosed with it [11]. The number of cancer deaths can be greatly reduced by early diagnosis.

Early detection of cancer can be performed in two ways: manually by radiologists or automatically by a computer-aided diagnosis (CAD) system. The CAD system is not a standalone system; it can only assist radiologists or doctors in making a correct decision. The final decision rests with the radiologists or the doctors [12]. A radiologist must carefully observe the density of pulmonary nodules, because at an early stage this density may resemble the densities of other lung structures [13]. A CAD system tries to delineate the boundary of a pulmonary nodule by detecting distinguishing features in the nodule. These features are either hand-crafted or deep-learned. Hand-crafted features include information about texture, density, and morphology. The features are fed to a classifier for the detection or classification of the nodule.

The CAD system helps radiologists improve their reading of computed tomography (CT) scans; however, a significant number of nodules remain undetected when a low false-positive rate is desired. This limits the use of CAD systems in practice [14, 15]. Nodules vary widely in shape, size, and type, and some also vary in texture and density. These wide variations are sometimes missed by a CAD system whose algorithm is not sophisticated enough.

Recently, because of the success of deep learning in numerous applications, CAD systems have also been utilizing deep learning [16]. End-to-end deep learning has brought success in many medical image processing applications [17]. Pulmonary nodule detection systems for CT scan images have also used several deep learning architectures in recent years [18–20]. These systems outperformed the systems using hand-crafted features [21].

As healthcare shifts towards smart healthcare, the use of wireless communication and mobile computing within smart healthcare frameworks has been increasing. To date, 3G/4G/5G communication has been used successfully [22–24]. In [22], a blockchain-based security scheme was proposed. An automatic seizure detection system using a mobile framework was proposed in [23]. A deep learning-based radio resource distribution algorithm for 5G was proposed in [24]. Now, the paradigm is shifting towards beyond-5G/6G to provide low latency and high transfer rates and to accommodate many sensors [25]. Smart systems are becoming popular in many applications [26, 27]. One important aspect of smart healthcare is a cognitive computing component. Cognitive computing can facilitate health monitoring, medicine prescription, and mental state recognition [28]. A person's emotions can reveal much about a patient's state; therefore, recognizing the correct emotion can help in understanding the patient's situation.

In smart healthcare, a patient's case can be analysed by multiple doctors from various physical locations. A lung CT scan image can be uploaded to a computer system that can be accessed by several registered doctors. The system can produce a correct segmentation of any nodules and decide whether the image is normal, benign, or malignant.

In this paper, we propose a convolutional neural network- (CNN-) based pulmonary nodule detection and classification system. The classifier outputs one of three classes: normal, benign, or malignant. The performance of the proposed system is compared with some state-of-the-art related systems.

The paper is structured as follows: Section 2 briefly outlines some previous related works. Section 3 describes the proposed system for detecting and classifying pulmonary nodules. Section 4 presents the experimental results and discussion. Section 5 concludes the paper.

2. Related Works

Most of the previous works used the Lung Image Database Consortium (LIDC-IDRI) database [29]. Different works used various numbers of samples from the database based on their selection criteria. In this section, we mainly focus on the works that used the LIDC database; however, some other important works are also mentioned.

First, we mention the works that used hand-crafted features to detect pulmonary nodules. Wu et al. proposed a nodule classification system using textural and radiological features [30]. They used 13 GLCM textural features and 12 radiological features with a back-propagation neural network. A total of 2217 CT slices were used, and an area under the receiver operating characteristic (ROC) curve of 0.91 was obtained.

Shape- and texture-based features together with a genetic algorithm and a support vector machine (SVM) were used to detect nodules in [31]. Before feature extraction, the samples were enhanced by quality threshold clustering and region growing-based segmentation. An accuracy of 97.5% was obtained with 140 samples from the LIDC database.

Orozco et al. developed a lung nodule classification system using 19 GLCM features extracted from different subbands of the wavelet transform and an SVM classifier [32]. An accuracy of 82% was obtained using a subset of the LIDC database. Han et al. used 3D GLCM features and the SVM for nodule classification and obtained an area under the ROC curve of 92.7% [33]. A phylogenetic diversity index and genetic algorithm-based nodule classification system was proposed in [34]. A total of 1403 images from the LIDC database were used in the experiments, and 92.5% accuracy was achieved by the system.

Second, we mention the works on pulmonary nodule detection and classification using deep learning, focusing mainly on papers from 2018 onwards. A 4-channel CNN-based system was proposed in [35]. In this system, the scan images were enhanced by a Frangi filter, and the learning was based on a multigroup criterion. The LIDC images were used, and a sensitivity of 80.1% was obtained.

A topology-based phylogenetic diversity index on CT scans was used with a CNN in [18]. 1404 images from the LIDC database, consisting of 394 malignant and 1011 benign nodules, were used in the experiments, and an accuracy of 92.6% was obtained. A fusion of classifications using the Adaboost back-propagation neural network was used in [36]. Three different sets of features were utilized: one set of GLCM features, a second set of Fourier shape features, and a third set obtained from a CNN architecture. These three sets of features were learned by three neural networks and then fused. 1972 sample images (648 malignant and 1323 benign) from the LIDC database were used in the experiments, and an accuracy of 96.7% was achieved.

Xie et al. proposed a nodule detection system using a faster region-based CNN [37]. A 2D convolutional operation was used to reduce false positives. The system achieved 86.4% accuracy using 150,414 images. An end-to-end automated lung nodule detection system was developed in [38]. The system had three main phases and achieved 91.4% accuracy, with one false positive per scan, using 888 CT scans.

From the above review, we find that significant progress in lung nodule detection has been made during the last seven to eight years. However, challenges remain, including the detection and classification of nodules that vary widely in size, shape, and density. Therefore, there is a need for a fully automated system that can overcome some of these challenges.

3. Proposed System

Major CNN architectures such as AlexNet, VGG Net, GoogLeNet, and ResNet were designed to classify natural images spanning around 1000 classes. These architectures were trained on millions of images and were therefore designed as very deep models. Structured medical data are not available in such quantities; the data size is limited. This limited data can cause overfitting in these architectures. Also, the visualization of medical data may not be meaningful with such very deep models.

3.1. CNN Architecture

In this paper, we developed a CNN architecture that is light (not very deep) and appropriate for medical image processing. The overall architecture is shown in Figure 1. There are four convolutional layers, followed by a global average pooling (GAP), two fully connected (FC) layers, and the softmax output layer, which has three output neurons corresponding to the three classes (normal, benign, and malignant). Each convolutional layer has two successive convolutional blocks with rectified linear units (ReLUs), a connector convolutional block with a ReLU, and a max pooling block. The number of filters in each convolutional block is 16 in the first layer, 32 in the second, 48 in the third, and 64 in the last. The stride of the filters is 1. The output of the connector convolutional block is summed with the output of the second convolutional block before the max pooling. The stride of the max pooling is 2, so the spatial resolution is reduced by a factor of 4 (by 2 in each dimension). Before each convolutional block, zero padding is applied to maintain the size. Mini-batch normalization is applied in each layer to speed up training. The GAP serves the purpose of pooling but is more efficient than conventional pooling [39].
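The layer structure described above can be sketched as follows. PyTorch, the 3×3 kernel size, the 64×64 single-channel input, and the FC widths are illustrative assumptions not stated in the text; the softmax is folded into the training loss, as is conventional.

```python
import torch
import torch.nn as nn

class ConvLayer(nn.Module):
    """One "convolutional layer" as described: two successive conv blocks,
    a connector conv block whose output is summed with the second block's
    output, a ReLU after each block, batch norm, and 2x2 max pooling.
    Kernel size and batch-norm placement are assumptions."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=1, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1)
        self.connector = nn.Conv2d(in_ch, out_ch, 3, stride=1, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(2, stride=2)

    def forward(self, x):
        y = self.relu(self.conv1(x))
        y = self.relu(self.conv2(y))
        c = self.relu(self.connector(x))
        return self.pool(self.bn(y + c))   # sum before max pooling

class LightCNN(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            ConvLayer(1, 16), ConvLayer(16, 32),
            ConvLayer(32, 48), ConvLayer(48, 64))
        self.gap = nn.AdaptiveAvgPool2d(1)   # global average pooling
        self.fc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(),
                                nn.Linear(32, num_classes))

    def forward(self, x):
        x = self.gap(self.features(x)).flatten(1)
        return self.fc(x)   # class logits; softmax applied in the loss
```

A forward pass on a batch of 64×64 patches returns one logit per class, confirming the light footprint of the four-layer design.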

The input to the CNN is the image of size . The CNN has four layers so that the receptive field covers the whole image. We also tested three layers; however, four layers performed better. Each pixel of the input image is normalized by subtracting the mean and dividing by the standard deviation of the pixels of the whole database.
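The dataset-level normalization can be sketched as below; the random array is a toy stand-in for the actual database of CT patches.

```python
import numpy as np

# Dataset-level normalization: subtract the mean and divide by the standard
# deviation computed over all pixels of the whole database (toy data here).
rng = np.random.default_rng(0)
images = rng.random((10, 64, 64))        # hypothetical stack of CT patches
mu, sigma = images.mean(), images.std()
normalized = (images - mu) / sigma       # zero mean, unit std over the stack
```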

The minibatch size was 4 samples, and the cost function was categorical cross-entropy. Before each minibatch, the samples were shuffled to ensure complete randomization of the learning; this also helped to reduce overfitting. The initial weights were set using the initialization of [40]. The Adam optimizer was used for optimizing the weights, the parameters were , and the learning rate was . The proposed CNN architecture is a modified version of the architecture proposed in [41]. The main difference between the two architectures is the number of layers; our proposed architecture has fewer layers, which makes it a light model.
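The Adam update used for weight optimization can be written out as below. Since the paper's parameter values did not survive extraction, the sketch uses Adam's conventional defaults (β₁ = 0.9, β₂ = 0.999, ε = 10⁻⁸) as an assumption.

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for weights w given gradient g (t is 1-based)."""
    m = b1 * m + (1 - b1) * g           # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g       # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)           # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy run: minimizing f(w) = w**2 (gradient 2w) from w = 5
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):
    w, m, v = adam_step(w, 2 * w, m, v, t, lr=0.1)
```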

3.2. Database
3.2.1. Database Selection

The database used in the experiments is the publicly available LIDC-IDRI database [29]. It contains 1018 CT scans of 1010 subjects from seven institutions. The slice thickness of the CT scans varied from 0.6 mm to 5.0 mm, with a median of 2.0 mm. Four expert radiologists annotated the scans in two separate reading sets. In the first set of readings, each suspicious lesion was classified independently as a nonnodule, a nodule with a size smaller than 3 mm, or a nodule with a size greater than or equal to 3 mm. In the second set of readings, 3D segmentation was performed for the nodules greater than or equal to 3 mm.

3.2.2. Samples’ Selection

The samples were selected for the experiments in the following manner. First, all scans with slice thickness above 3.0 mm were removed. Samples with nodule size less than 3 mm were also removed. Nodules of size greater than or equal to 3 mm that were agreed upon by three or four radiologists were retained. The nodules were ranked by malignancy from level 1 to level 5. Levels 1 and 2 were denoted as benign, and levels 4 and 5 as malignant. Samples with malignancy level 3 were not considered, to keep a clear distinction between benign and malignant. Overall, 1279 samples were selected for the experiments, of which 569 were nonnodule, 278 benign, and 432 malignant. Figure 2 shows an example of a CT image, where the lung nodule is marked by a red circle. On the right side of the figure are the ground truths (GTs) and the corresponding segmentations as the nodule region of interest (NROI) by four radiologists. From the figure, we see that the radiologists' segmentations differ for a given sample.

Nodule candidate regions are extracted slice by slice from the LIDC. The candidate nodules' pixels retain their original values within a mask, and the patch is zero-padded to a fixed size as described in [41]. Eventually, all the samples are resized to .

3.2.3. Data Augmentation

The number of samples was not enough for proper training of the CNN, and the numbers of samples per class were unbalanced. Therefore, we needed to increase the number of samples and balance the classes through data augmentation. We applied augmentation only to the training data. Only rotation and translation operations were used. The samples were rotated by random angles (between 10° and 60°) and translated within a range of [-2, 2] pixels.
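The augmentation described above can be sketched with `scipy.ndimage` (an assumption; the paper does not name a library), treating the [-2, 2] translation range as pixel offsets.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def augment(img):
    """Rotate by a random angle in [10, 60] degrees and translate by a
    random offset in [-2, 2] pixels, keeping the image size fixed."""
    angle = rng.uniform(10, 60)
    rotated = ndimage.rotate(img, angle, reshape=False, order=1)
    dy, dx = rng.uniform(-2, 2, size=2)
    return ndimage.shift(rotated, (dy, dx), order=1)
```

Applying `augment` several times to each minority-class training sample both enlarges and rebalances the training set.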

4. Experimental Results and Discussion

The experiments were performed using the 10-fold cross-validation approach. As described earlier, we removed the level 3 samples to keep a clear distinction between benign and malignant samples. In two additional sets of experiments, we also included the level 3 samples. Therefore, we had three sets: set 1 with the level 3 samples removed, set 2 with the level 3 samples included in the benign category, and set 3 with the level 3 samples included in the malignant category. Set 1 had a total of 1279 samples, of which 569 were normal, 278 benign, and 432 malignant. Set 2 had a total of 1508 samples, of which 569 were normal, 507 benign, and 432 malignant. Set 3 had 1508 samples, of which 569 were normal, 278 benign, and 661 malignant. Figure 3 illustrates the accuracy of the proposed system on the three sets. Set 1 achieved an accuracy of 94.65%, set 2 89.21%, and set 3 73.4%. From these results, we conclude that the level 3 samples are more benign than malignant. In the subsequent experiments, we use only set 1.
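The 10-fold cross-validation can be sketched as a simple index partition; `kfold_indices` is a hypothetical helper, not from the paper.

```python
import numpy as np

def kfold_indices(n_samples, k=10, seed=0):
    """Yield (train, test) index arrays for k-fold cross-validation:
    shuffle the indices once, split them into k folds, and use each
    fold as the test set exactly once."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_samples), k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test
```

Over the 10 folds, each of the 1279 samples appears in the test set exactly once, and the reported accuracy is the average over folds.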

Figure 4 displays the confusion matrix of the system using set 1. From the matrix, we find that the normal class was generally not confused with benign or malignant. Some benign and malignant samples were confused with each other; malignant samples were confused the most.

We also computed the recall and precision values from the confusion matrix. The average recall was 98.07%, and the average precision was 98.06%. Figure 5 shows the ROC curve of the system. The area under the curve was 0.987, which is considered very good. Figure 6 illustrates the learning curves in terms of accuracy and loss. From the figure, we find that the accuracy and the loss become steady after some iterations. Figure 7 shows some malignant samples that were misclassified as benign. The misclassified samples did not share any specific characteristic; however, fading boundaries and size could contribute to such misclassification. More investigation into this matter is needed.
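Macro-averaged recall and precision can be read off a confusion matrix as below; the matrix values are hypothetical (only the class totals match set 1), not the paper's actual confusion matrix.

```python
import numpy as np

# Hypothetical 3-class confusion matrix (rows: true class, cols: predicted),
# classes ordered as normal, benign, malignant; rows sum to 569, 278, 432.
cm = np.array([[560,   5,   4],
               [  3, 270,   5],
               [  2,   8, 422]])

recall = np.diag(cm) / cm.sum(axis=1)       # per-class recall
precision = np.diag(cm) / cm.sum(axis=0)    # per-class precision
avg_recall = recall.mean()                  # macro-averaged recall
avg_precision = precision.mean()            # macro-averaged precision
```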

Table 1 compares the performance of the proposed system with some recent related systems that used deep learning. All the compared systems used the same LIDC database; however, the numbers of samples varied. The results of the systems were extracted from the corresponding papers. From the table, we find that the proposed system achieved the highest accuracy. The closest accuracy was that of the system in [36], which used three streams and fused hand-crafted features with CNN features using an Adaboost neural network; it is therefore computationally intensive.

System | Number of samples | Accuracy
[35] | 1006 scans |
[18] | 1011 benign, 394 malignant | 92.6%
[36] | 1011 benign, 394 malignant, 567 normal | 96.7%
Proposed | 278 benign, 432 malignant, 569 normal | 97.9%

The proposed architecture has 275 MFLOPs and approximately 200K parameters. By comparison, AlexNet has around 1.5 GFLOPs and 60 million parameters, and GoogLeNet has around 3 GFLOPs and 7 million parameters. Therefore, the proposed architecture is very light compared to these well-known architectures. All the experiments in this paper were carried out on a quad-core machine with 12 GB RAM and an Nvidia GeForce GTX 1050 GPU.
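The parameter count of the convolutional part can be tallied with the standard formula out_ch × (in_ch × k² + 1). Assuming 3×3 kernels and a connector block operating on each layer's input (both assumptions, not stated in the text), the four layers contribute about 162K parameters, broadly consistent with the ~200K total once the FC layers are added.

```python
def conv_params(in_ch, out_ch, k=3):
    # weights (in_ch * k * k per filter) plus one bias per filter
    return out_ch * (in_ch * k * k + 1)

filters = [16, 32, 48, 64]           # filters per layer, from the text
total, in_ch = 0, 1                  # single-channel CT input assumed
for f in filters:
    total += conv_params(in_ch, f)   # first convolutional block
    total += conv_params(f, f)       # second convolutional block
    total += conv_params(in_ch, f)   # connector convolutional block
    in_ch = f

print(total)  # 162048 convolutional parameters under these assumptions
```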

5. Conclusion

The use of mobile computing, cognitive computing, machine learning, and healthcare data analytics greatly influences our lives. To this end, a pulmonary nodule detection and classification system using a light CNN model was proposed. The system was evaluated using the LIDC database samples. The system achieved 97.9% accuracy when the level 3 malignancy samples were excluded from the experiments. The average recall and precision values were above 98%. Compared to other state-of-the-art systems, the proposed system achieves higher performance. In a future study, we aim to visualize the nodule boundaries. We also want to fuse the features from different layers of the CNN architecture to enhance accuracy. Another direction is to use active learning to improve the performance [42].

Data Availability

Not applicable.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.


Acknowledgments

This study was funded by the Deanship of Scientific Research, Taif University, KSA (Research Project number 1-440-6146).


References

1. Y. Zhang, M. S. Hossain, A. Ghoneim, and M. Guizani, “COCME: content-oriented caching on the mobile edge for wireless communications,” IEEE Wireless Communications, vol. 26, no. 3, pp. 26–31, 2019.
2. J. Wang, Y. Miao, P. Zhou, M. S. Hossain, and S. M. M. Rahman, “A software defined network routing in wireless multihop network,” Journal of Network and Computer Applications, vol. 85, pp. 76–83, 2017.
3. K. Lin, C. Li, D. Tian, A. Ghoneim, M. S. Hossain, and S. U. Amin, “Artificial-intelligence-based data analytics for cognitive communication in heterogeneous wireless networks,” IEEE Wireless Communications, vol. 26, no. 3, pp. 83–89, 2019.
4. Y. Zhang, X. Ma, J. Zhang, M. S. Hossain, G. Muhammad, and S. U. Amin, “Edge intelligence in the cognitive internet of things: improving sensitivity and interactivity,” IEEE Network, vol. 33, no. 3, pp. 58–64, 2019.
5. G. Muhammad, M. F. Alhamid, M. Alsulaiman, and B. Gupta, “Edge computing with cloud for voice disorder assessment and treatment,” IEEE Communications Magazine, vol. 56, no. 4, pp. 60–65, 2018.
6. M. S. Hossain and G. Muhammad, “Emotion-aware connected healthcare big data towards 5G,” IEEE Internet of Things Journal, vol. 5, no. 4, pp. 2399–2406, 2018.
7. G. Muhammad, M. F. Alhamid, and X. Long, “Computing and processing on the edge: smart pathology detection for connected healthcare,” IEEE Network, vol. 33, no. 6, pp. 44–49, 2019.
8. A. Yassine, S. Singh, M. S. Hossain, and G. Muhammad, “IoT big data analytics for smart homes with fog and cloud computing,” Future Generation Computer Systems, vol. 91, pp. 563–573, 2019.
9. M. Masud, M. S. Hossain, and A. Alamri, “Data interoperability and multimedia content management in e-health systems,” IEEE Transactions on Information Technology in Biomedicine, vol. 16, no. 6, pp. 1015–1023, 2012.
10. Z. Ali, G. Muhammad, and M. F. Alhamid, “An automatic health monitoring system for patients suffering from voice complications in smart cities,” IEEE Access, vol. 5, no. 1, pp. 3900–3908, 2017.
11. R. L. Siegel, K. D. Miller, and A. Jemal, “Cancer statistics, 2019,” CA: A Cancer Journal for Clinicians, vol. 69, no. 1, pp. 7–34, 2019.
12. C. Jacobs, E. M. van Rikxoort, T. Twellmann et al., “Automatic detection of subsolid pulmonary nodules in thoracic computed tomography images,” Medical Image Analysis, vol. 18, no. 2, pp. 374–384, 2014.
13. G. Xiuhua, S. Tao, W. Huan, and L. Zhigang, “Prediction models for malignant pulmonary nodules based-on texture features of CT image,” in Theory and Applications of CT Imaging and Analysis, N. Homma, Ed., pp. 63–76, IntechOpen.
14. B. van Ginneken, S. G. Armato III, B. de Hoop et al., “Comparing and combining algorithms for computer-aided detection of pulmonary nodules in computed tomography scans: the ANODE09 study,” Medical Image Analysis, vol. 14, no. 6, pp. 707–722, 2010.
15. M. Firmino, A. H. Morais, R. M. Mendoça, M. R. Dantas, H. R. Hekis, and R. Valentim, “Computer-aided detection system for lung cancer in computed tomography scans: review and future prospects,” Biomedical Engineering Online, vol. 13, no. 1, p. 41, 2014.
16. M. S. Hossain and G. Muhammad, “Cloud-based collaborative media service framework for healthcare,” International Journal of Distributed Sensor Networks, vol. 10, no. 3, Article ID 858712, 2014.
17. S. U. Amin, M. Alsulaiman, G. Muhammad, M. A. Mekhtiche, and M. Shamim Hossain, “Deep learning for EEG motor imagery classification based on multi-layer CNNs feature fusion,” Future Generation Computer Systems, vol. 101, pp. 542–554, 2019.
18. A. O. de Carvalho Filho, A. C. Silva, A. C. de Paiva, R. A. Nunes, and M. Gattass, “Classification of patterns of benignity and malignancy based on CT using topology-based phylogenetic diversity index and convolutional neural network,” Pattern Recognition, vol. 81, pp. 200–212, 2018.
19. N. Tajbakhsh and K. Suzuki, “Comparing two classes of end-to-end machine-learning models in lung nodule detection and classification: MTANNs vs. CNNs,” Pattern Recognition, vol. 63, pp. 476–486, 2017.
20. X. Yuan, L. Xie, and M. Abouelenien, “A regularized ensemble framework of deep learning for cancer detection from multi-class, imbalanced training data,” Pattern Recognition, vol. 77, pp. 160–172, 2018.
21. Y. Wang, Y. Qiu, T. Thai, K. Moore, H. Liu, and B. Zheng, “A two-step convolutional neural network-based computer-aided detection scheme for automatically segmenting adipose tissue volume depicting on CT images,” Computer Methods and Programs in Biomedicine, vol. 144, pp. 97–104, 2017.
22. M. A. Rahman, M. M. Rashid, M. S. Hossain, E. Hassanain, M. F. Alhamid, and M. Guizani, “Blockchain and IoT-based cognitive edge framework for sharing economy services in a smart city,” IEEE Access, vol. 7, pp. 18611–18621, 2019.
23. G. Muhammad, M. Masud, S. U. Amin, R. Alrobaea, and M. F. Alhamid, “Automatic seizure detection in a mobile multimedia framework,” IEEE Access, vol. 6, pp. 45372–45383, 2018.
24. M. S. Hossain and G. Muhammad, “A deep-tree-model-based radio resource distribution for 5G networks,” IEEE Wireless Communications, vol. 27, no. 1, pp. 62–67, 2020.
25. Y. Zhang, Y. Qian, D. Wu, M. S. Hossain, A. Ghoneim, and M. Chen, “Emotion-aware multimedia systems security,” IEEE Transactions on Multimedia, vol. 21, no. 3, pp. 617–624, 2019.
26. A. Alelaiwi, A. Alghamdi, M. Shorfuzzaman, M. Rawashdeh, M. S. Hossain, and G. Muhammad, “Enhanced engineering education using smart class environment,” Computers in Human Behavior, vol. 51, Part B, pp. 852–856, 2015.
27. M. S. Hossain, S. U. Amin, M. Alsulaiman, and G. Muhammad, “Applying deep learning for epilepsy seizure detection and brain mapping visualization,” ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 15, no. 1s, pp. 1–17, 2019.
28. Y. Hao, J. Yang, M. Chen, M. S. Hossain, and M. F. Alhamid, “Emotion-aware video QoE assessment via transfer learning,” IEEE MultiMedia, vol. 26, no. 1, pp. 31–40, 2019.
29. S. G. Armato III, G. McLennan, L. Bidaut et al., “The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): a completed reference database of lung nodules on CT scans,” Medical Physics, vol. 38, no. 2, pp. 915–931, 2011.
30. H. Wu, T. Sun, J. Wang et al., “Combination of radiological and gray level co-occurrence matrix textural features used to distinguish solitary pulmonary nodules by computed tomography,” Journal of Digital Imaging, vol. 26, no. 4, pp. 797–802, 2013.
31. A. O. de Carvalho Filho, W. B. de Sampaio, A. C. Silva, A. C. de Paiva, R. A. Nunes, and M. Gattass, “Automatic detection of solitary lung nodules using quality threshold clustering, genetic algorithm and diversity index,” Artificial Intelligence in Medicine, vol. 60, no. 3, pp. 165–177, 2014.
32. H. M. Orozco, O. O. V. Villegas, V. G. C. Sánchez, H. de Jesús Ochoa Domínguez, and M. de Jesús Nandayapa Alfaro, “Automated system for lung nodules classification based on wavelet feature descriptor and support vector machine,” Biomedical Engineering Online, vol. 14, no. 1, p. 9, 2015.
33. F. Han, H. Wang, G. Zhang et al., “Texture feature analysis for computer-aided diagnosis on pulmonary nodules,” Journal of Digital Imaging, vol. 28, no. 1, pp. 99–115, 2015.
34. A. O. de Carvalho Filho, A. C. Silva, A. Cardoso de Paiva, R. A. Nunes, and M. Gattass, “Computer-aided diagnosis of lung nodules in computed tomography by using phylogenetic diversity, genetic algorithm, and SVM,” Journal of Digital Imaging, vol. 30, no. 6, pp. 812–822, 2017.
35. H. Jiang, H. Ma, W. Qian et al., “An automatic detection system of lung nodule based on multigroup patch-based deep learning network,” IEEE Journal of Biomedical and Health Informatics, vol. 22, no. 4, pp. 1227–1237, 2018.
36. Y. Xie, J. Zhang, Y. Xia, M. Fulham, and Y. Zhang, “Fusing texture, shape and deep model-learned information at decision level for automated classification of lung nodules on chest CT,” Information Fusion, vol. 42, pp. 102–110, 2018.
37. H. Xie, D. Yang, N. Sun, Z. Chen, and Y. Zhang, “Automated pulmonary nodule detection in CT images using deep convolutional neural networks,” Pattern Recognition, vol. 85, pp. 109–119, 2019.
38. X. Huang, W. Sun, T.-L. B. Tseng, C. Li, and W. Qian, “Fast and fully-automated detection and segmentation of pulmonary nodules in thoracic CT scans using deep convolutional neural networks,” Computerized Medical Imaging and Graphics, vol. 74, pp. 25–36, 2019.
39. A. A. Amory, G. Muhammad, and H. Mathkour, “Deep convolutional tree networks,” Future Generation Computer Systems, vol. 101, pp. 152–168, 2019.
40. K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: surpassing human-level performance on ImageNet classification,” in 2015 IEEE International Conference on Computer Vision (ICCV), pp. 1026–1034, Santiago, 2015.
41. T. A. Lampert, A. Stumpf, and P. Gancarski, “An empirical study into annotator agreement, ground truth estimation, and algorithm evaluation,” IEEE Transactions on Image Processing, vol. 25, no. 6, pp. 2557–2572, 2016.
42. G. Muhammad and M. F. Alhamid, “User emotion recognition from a larger pool of social network data using active learning,” Multimedia Tools and Applications, vol. 76, no. 8, pp. 10881–10892, 2017.

Copyright © 2020 Mehedi Masud et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
