Disease Markers

Special Issue

Imaging Disease Markers as a Diagnostic, Prognostic, and Educational Tool 2021


Research Article | Open Access


Varalakshmi Perumal, Vasumathi Narayanan, Sakthi Jaya Sundar Rajasekar, "Prediction of COVID-19 with Computed Tomography Images using Hybrid Learning Techniques", Disease Markers, vol. 2021, Article ID 5522729, 15 pages, 2021. https://doi.org/10.1155/2021/5522729

Prediction of COVID-19 with Computed Tomography Images using Hybrid Learning Techniques

Academic Editor: Dong Pan
Received: 30 Jan 2021
Revised: 18 Mar 2021
Accepted: 31 Mar 2021
Published: 22 Apr 2021

Abstract

Reverse Transcription Polymerase Chain Reaction (RT-PCR), used for diagnosing COVID-19, has been found to give a low detection rate during the early stages of infection, while radiological analysis of CT images has given a higher prediction rate. In this paper, hybrid learning models are used to classify COVID-19 CT images, Community-Acquired Pneumonia (CAP) CT images, and normal CT images with high specificity and sensitivity. The proposed system is compared with various machine learning and deep learning classifiers, and its outcome is also compared with other recent studies on COVID-19 classification. The proposed model is found to outperform them, with an accuracy of 96.69%, sensitivity of 96%, and specificity of 98%.

1. Introduction

The COVID-19 virus, believed to have originated in bats of the genus Rhinolophus, was transmitted to human beings in December 2019. Wuhan city’s Huanan Seafood Market was the nerve center of the COVID-19 outbreak, which spread rapidly around the world [1] and was eventually declared a pandemic by the World Health Organization (WHO) in March 2020 [2]. COVID-19-infected individuals have experienced severe acute respiratory disorders, fever, continuous coughing, and other infections. The mortality rate of this pandemic reached its peak in a short span of time, and early detection of the COVID-19 virus is the best way to reduce mortality. The CT scan images of COVID-19-affected individuals show distinctive characteristics such as patchy multifocal consolidation, ground-glass opacities, interlobular cavitation, lobular septal thickening, clear indications of fibrotic lesions, peribronchovascular thickening, pleural effusion, and thoracic lymphadenopathy. The evolution of consolidation and ground-glass opacities in a COVID-19-affected patient from symptom onset through the next 31 days is delineated in Figure 1 [2–4]. RT-PCR is known to be the standard testing tool but has produced false negatives at the early stages of infection in recent studies [5, 6]. Studies have also postulated the importance of CT scan images for screening COVID-19 with better specificity and sensitivity [7].

The characteristics of COVID-19 are similar to those of other viral pneumonias [4], yet with the help of deep learning techniques one can precisely distinguish between the types of viral pneumonia. The main differences between pneumonias caused by different viruses, including the Respiratory Syncytial Virus (RSV) and Human Metapneumovirus (HMPV), in terms of ground-glass opacity (GGO), consolidation, and pleural effusion are depicted in Table 1, where +++ indicates roughly 50% of the lung area being involved and + indicates roughly 10%.


| Infections | Transmission | GGO | Consolidation | Nodule | Pleural effusion |
| Adenovirus | Respiratory, oral-fecal | +++ | +++ | Centrilobular | C |
| Influenza | Droplet, airborne | ++ | ++ | - | UC |
| RSV | Aerosol, contact | + | + | Centrilobular +++ | C |
| SARS-CoV-2 | Airborne, contact | +++ | + | Rare | Rare |
| HMPV | Contact, droplet | + | + | Centrilobular +++ | UC |

The large number of CT scan images opens up a research area for start-up companies. These techniques proposed by researchers aid radiologists and physicians for fast and early prediction of the disease.

RT-PCR, which is used for diagnosing COVID-19, has a few limitations: test kits are not sufficiently available, testing is time-consuming, and the sensitivity of testing varies. Thus, using CT scan images for screening COVID-19 is important. CT scan images expose patchy ground-glass opacities, hazy white spots in the lungs that are the primary sign of COVID-19. In a recent study [8] of 1,014 patients, a deep learning technique was able to predict 888/1,014 positive cases using CT scan images of suspected COVID-19 patients, while RT-PCR was only able to detect 601/1,014 positive cases. The results show that CT scan images were able to diagnose COVID-19 effectively, thus saving more lives. The mortality rates for different CoV viruses are presented in Table 2. There is little knowledge of what the future of the outbreak will be, and there are different manifestations of COVID-19, as discussed in a study [9]. In another study [10], it was found that CT scans had a high sensitivity for diagnosing COVID-19, and a chest CT scan is considered an important tool for COVID-19 detection in endemic regions. As a result of the sensitivity and specificity of CT scans, a clinical detection threshold based upon ideal CT scan imaging manifestations is now utilized in China. CT scan images therefore act as a good alternative to RT-PCR testing, and chest CT scan images can be utilized as a primary resource for detecting COVID-19 in endemic regions which lack access to testing kits.


| CoV | Year | Origin | Mortality rate | Community attack rate | Incubation time |
| SARS | 2002 | China | 10% | 30%-40% | 4-14 days |
| MERS | 2013 | Saudi Arabia | 34% | 10%-60% | 7 days |
| COVID-19 | 2019 | China | 3.4% | 4%-13% | 6 days |

This also takes less time, thereby saving the radiologist’s time for carrying out further treatments. The following conclusions were drawn from the studies mentioned above:

(1) The sensitivity and specificity of chest CT scans for screening COVID-19 are high. Thus, in endemic regions one can use an automated system that detects COVID-19 precisely.
(2) Chest CT scan images play a vital role in monitoring and evaluating COVID-19 patients with extreme and severe respiratory symptoms. Based on CT scans, the intensity of the lung infection and the time taken by the disease to evolve were assessed, and treatments were discussed accordingly.
(3) Patients infected with COVID-19 require multiple chest CT scans during treatment to track the progression of the disease. Analysing multiple CT images manually is time-consuming and cannot be completed with great precision. Thus, screening many images quickly is a priority, which is achieved through deep learning techniques.
(4) The prime abnormalities which develop after the onset of symptoms in COVID-19-affected patients are ground-glass opacities (GGO), consolidations, and nodules. These features are easily recognizable through deep learning techniques.
(5) Early detection of COVID-19 infection is critical for treatment mitigation and safety control. Compared with RT-PCR, testing with chest CT images is a more dependable, rapid, and practical methodology to scan and monitor COVID-19 patients, specifically in hotspot regions.
(6) Even when symptoms are not visible (asymptomatic cases), CT findings can reveal visible changes and series of abnormalities in COVID-19-affected lungs using the proposed model.

So far, medical and clinical studies on chest CT scan findings have been discussed. Table 3 presents related deep learning studies carried out on chest images.


| Author | Image | Accuracy | Classification |
| Wang [11] | CT | 82.9% | Transfer learning |
| Zhao [12] | CT | 85% | DenseNet |
| Vruddhi Shah [13] | CT | 94.52% | VGG-19 |
| He X [14] | CT | 94% | Self-trans model |
| Michael J. Horry [15] | CT | 84% | Fine-tuned VGG-19 |
| Song Ying [16] | CT | 93% | Deep CNN |

The accuracy of these works is shown along with the classification methods used. The predominant works delineated in Table 3 achieve at most 94.52% accuracy on CT images. It is also seen that most existing models are built for X-ray images [17–26]. Studies have reported instances where a patient’s chest X-ray showed no traces of lung nodules which were later identified using CT scans [13, 15]; CT images thus play a major role in detecting COVID-19 infection. Hence, for the above reasons, a hybrid learning model is proposed which scans CT images and classifies them as COVID-19, CAP, or Normal using machine learning and deep learning techniques.

2. Materials and Methods

Figure 2 shows the overall progression of the proposed hybrid learning model. The CT scan input images are collected from various sources such as Google Images, RSNA, and GitHub, so they differ in resolution, size, and many other features. All CT scan input images are therefore preprocessed to standardize them and then given to the pretrained deep learning models for feature extraction. The extracted features are then given to machine learning classification models. The pretrained deep learning models used in the proposed work are VGG-16, Resnet50, InceptionV3, and AlexNet. The machine learning models used are Support Vector Machine (SVM), Random Forest, Decision Tree, Naive Bayes, and K-Nearest Neighbour (KNN).

2.1. Image Processing

Figure 3 shows the progression of image processing.

Histogram equalization is applied to enhance the quality of the image without losing its important features; the histograms of the original and equalized images are shown in Figure 4. The Wiener filter is then used to remove noise while preserving the fine details and edges of the lungs, with a small filter size chosen to prevent the image from becoming over-smoothed. The Wiener filter is based on estimates of the mean and variance over the local neighbourhood of each pixel, and constructs a pixel-wise linear filter as in Eq (1):

b(x, y) = \mu + \frac{\sigma^2 - \nu^2}{\sigma^2} \bigl( a(x, y) - \mu \bigr), (1)

where b(x, y) is the pixel at position (x, y) in the filtered image, a(x, y) is the pixel at the same position in the original image, \mu and \sigma^2 are the mean and variance of the local neighbourhood, and \nu^2 is the noise variance. Images are then resized to focus on the specific area of interest before feature extraction.
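A minimal NumPy sketch of this preprocessing chain (histogram equalization followed by the adaptive Wiener filter of Eq (1)); the 3×3 window, reflect padding, and noise-variance estimate are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np

def equalize_hist(img):
    """Histogram equalization of an 8-bit grayscale image via its CDF."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
    return cdf[img.astype(np.uint8)].astype(np.uint8)

def wiener_filter(img, k=3, noise_var=None):
    """Adaptive Wiener filter of Eq (1): b = mu + (sigma^2 - nu^2)/sigma^2 * (a - mu)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    mu = win.mean(axis=(-1, -2))      # local mean of each k x k neighbourhood
    var = win.var(axis=(-1, -2))      # local variance
    if noise_var is None:
        noise_var = var.mean()        # common estimate: mean of the local variances
    gain = np.clip(var - noise_var, 0.0, None) / np.maximum(var, 1e-12)
    return mu + gain * (img - mu)
```

When the estimated noise variance exceeds the local variance the gain drops to zero and the filter returns the local mean, which is what smooths flat regions while leaving strong lung edges (high local variance) largely intact.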

2.2. Feature Extraction

Feature extraction is achieved using pretrained CNN models such as VGG-16, Resnet50, InceptionV3, and AlexNet. CNN models are well suited to image classification: an image is viewed as an array of pixels whose size depends on the image resolution. These CNN models consist of a series of convolutional and pooling layers. The convolution operation is applied to a region of the image, sampling the pixel values in that region and combining them into a single value, as defined in Eq (2) and illustrated in Figure 5:

C(i, j) = \sum_{m=1}^{k} \sum_{n=1}^{k} I(i+m-1,\, j+n-1) \cdot K(m, n), (2)

where C(i, j) is the value of the pixel at (i, j) after the convolution operation, I is the input matrix, K is the filter (kernel) matrix, and k is the kernel size (the size of the filter matrix).

The output size of the convolution layer is given in Eq (3):

O = \frac{I - F + 2P}{S} + 1, (3)

where O is the size of the output matrix, I is the size of the input matrix, F is the size of the convolution filter, P is the padding, and S is the stride of the convolution operation. The max-pooling layer performs dimensionality reduction: it downsamples the feature map without losing important information by taking the maximum-valued neuron in each region of the previous layer’s output, as given in Eq (4) and Figure 5:

M(i, j) = \max_{1 \le m,\, n \le F} A\bigl((i-1)S + m,\, (j-1)S + n\bigr), (4)

where M(i, j) is the value of the pixel at (i, j) after the pooling operation, A is the preceding layer’s output, and F is the size of the pooling window. The output size of the max-pooling layer is given in Eq (5):

O = \frac{I - F}{S} + 1, (5)

where O is the size of the output matrix, I is the size of the previous layer’s output, F is the size of the pooling filter, and S is the stride, chosen the same as for the convolution operation. ReLU acts as the activation for the convolutional and max-pooling layers, as given in Eq (6):

f(x) = \max(0, x), (6)

where x is the input value provided to the neuron. The parameters extracted from the series of convolution and pooling operations for each pretrained model used for feature extraction are shown in Table 4. Note that the trainable parameters are comparatively few, and only these are updated by the backpropagation algorithm, which optimizes the weights and biases. Thus, only the important features are utilized for training the model: the extracted features are nonredundant and informative, and are intended to facilitate precise diagnosis of the classes.
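Eqs (3) and (5) can be checked directly. The 224-pixel input with 3×3 convolutions (padding 1, stride 1) and 2×2 pooling (stride 2) used below are the standard VGG-16 settings, quoted here for illustration:

```python
def conv_out_size(i, f, p=0, s=1):
    """Eq (3): O = (I - F + 2P) / S + 1 for a convolution layer."""
    return (i - f + 2 * p) // s + 1

def pool_out_size(i, f, s):
    """Eq (5): O = (I - F) / S + 1 for a max-pooling layer."""
    return (i - f) // s + 1

def relu(x):
    """Eq (6): ReLU activation."""
    return max(0.0, x)

# A 224x224 input through a 3x3 convolution (padding 1, stride 1) keeps its
# size; a 2x2 max-pool with stride 2 halves it.
assert conv_out_size(224, 3, p=1, s=1) == 224
assert pool_out_size(224, 2, s=2) == 112
```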


| Model | Image size | Total parameters | Trainable parameters | Nontrainable parameters | Number of layers |
| VGG-16 | - | 14,882,883 | 166,403 | 14,716,480 | 16 |
| InceptionV3 | - | 22,370,339 | 562,691 | 21,807,648 | 748 |
| Resnet50 | - | 24,155,267 | 562,691 | 23,592,576 | 50 |
| AlexNet | - | 62,378,344 | 229,123 | 62,149,221 | 8 |
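The counts in Table 4 can be approximated from the architectures alone: a single convolution layer has (k·k·c_in + 1)·c_out parameters (weights plus one bias per output channel). Summing over the published VGG-16 convolutional stack, assumed below rather than taken from this paper, gives roughly the nontrainable count reported for VGG-16 above; the small residual difference presumably reflects the authors' counting convention.

```python
def conv_params(c_in, c_out, k=3):
    """Parameters of one conv layer: k*k*c_in weights plus 1 bias, per output channel."""
    return (k * k * c_in + 1) * c_out

# (input channels, output channels) for VGG-16's 13 convolutional layers
vgg16_convs = [(3, 64), (64, 64),
               (64, 128), (128, 128),
               (128, 256), (256, 256), (256, 256),
               (256, 512), (512, 512), (512, 512),
               (512, 512), (512, 512), (512, 512)]
total = sum(conv_params(ci, co) for ci, co in vgg16_convs)
print(total)  # 14714688: the frozen convolutional base of VGG-16
```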

2.3. Classification

Classification refers to a predictive modelling problem where a class label is predicted for an input image. The classification is performed by removing the fully connected layers from the pretrained deep learning models and feeding the extracted features to traditional machine learning classifiers: Support Vector Machine (SVM), Decision Tree, Naive Bayes, K-Nearest Neighbour (KNN), and Random Forest. In SVM, the input values are plotted in an n-dimensional space, and the optimal hyperplane that separates the classes is found. In Random Forest, a large number of decision trees are built to operate as an ensemble: every decision tree predicts a class label, and the class that gets the most votes is chosen as the prediction. In a Decision Tree, each internal node acts as a splitting criterion and the branches lead to a leaf node that provides the output. Naive Bayes is a conditional probability model which uses Bayes’ theorem for classification. KNN is a nonparametric classifier which classifies images based on their K nearest neighbours.
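As a sketch of this stage, the five classifiers can be trained interchangeably on extracted feature vectors with scikit-learn (the library the implementation reports using); the synthetic 64-dimensional Gaussian features below are only a stand-in for real CNN features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Stand-in features for the three classes: COVID-19 (0), CAP (1), Normal (2)
X = np.vstack([rng.normal(loc=c, scale=0.4, size=(60, 64)) for c in range(3)])
y = np.repeat([0, 1, 2], 60)

classifiers = {
    "SVM": SVC(kernel="rbf"),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
scores = {name: clf.fit(X, y).score(X, y) for name, clf in classifiers.items()}
```

Because all five classifiers share the same `fit`/`predict` interface, swapping the CNN backbone or the classifier head only changes one entry in this dictionary, which is what makes the hybrid combinations in Section 3 cheap to enumerate.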

3. Results and Discussion

In this section, datasets that have been utilized for carrying out the experiments are discussed. Further, the comparative analysis of results is discussed.

3.1. Data Formulation

The dataset used here contains CT scan images for COVID-19 (both symptomatic and asymptomatic), CAP, and normal chests. The images were assimilated from multiple sources, shown in Table 5, so that the model could be trained on diverse scanning schemes and learn all possible image types. Image preprocessing was applied to standardize the dataset. A total of approximately 500 CT scan images were obtained for each class to maintain class balance, and the images were split for training, validation, and testing as shown in Table 6. The project was conducted on the Windows platform using Python (Jupyter Notebook). Packages such as pandas for data loading and access, NumPy for array (matrix) creation, scikit-learn for the machine learning classifiers, Keras with the TensorFlow backend for the deep learning classifiers, and Matplotlib for plotting graphs were used in the implementation. These tools completely satisfied the requirements and produced promising results.


| Category | Source | Images |
| COVID-19 | medRxiv, bioRxiv, NEJM, JAMA, Lancet (Medical Segmentation) | 349 |
| | Coronacases | 100 |
| | Radiopaedia | 10 |
| | Zenodo | 9 |
| | Total | 488 |
| CAP | Google Images, RSNA | 500 |
| Normal | Google Images, GitHub | 500 |


| Class | Training | Validation | Testing | Total |
| COVID-19 | 340 | 37 | 111 | 488 |
| CAP | 340 | 49 | 111 | 500 |
| Normal | 340 | 49 | 111 | 500 |
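Under the proportions in Table 6, the 488 COVID-19 images split into 340 training, 37 validation, and 111 test images; a minimal sketch with scikit-learn's `train_test_split` (the integer array is only a placeholder for the actual file paths):

```python
import numpy as np
from sklearn.model_selection import train_test_split

paths = np.arange(488)                  # placeholder for the 488 COVID-19 image paths
train, rest = train_test_split(paths, train_size=340, random_state=0)
val, test = train_test_split(rest, test_size=111, random_state=0)
print(len(train), len(val), len(test))  # 340 37 111, matching Table 6
```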

3.2. Experimental Results

The COVID-19 images are correctly classified by the present model with high precision and recall. Initially, 111 images per class were tested with the machine learning models: Support Vector Machine (SVM), Decision Tree, Naive Bayes, K-Nearest Neighbour (KNN), and Random Forest. Next, the images were trained and tested with the deep learning models: CNN, AlexNet, VGG-16, InceptionV3, and Resnet50. On further analysis, the fully connected layers of the CNN models were removed, and prediction was performed with the machine learning models as hybrid learning models. This showed that hybrid models such as AlexNet+SVM and AlexNet+Random Forest yielded better results than the other models.

Figure 6 shows the colormap images for COVID-19-affected CT scans which were correctly classified by AlexNet+SVM and AlexNet+Random Forest. Figure 7 shows correctly classified CAP images, and Figure 8 shows correctly classified normal CT scan images. The images in Figures 6–8 highlight the infected region in CT scans that are classified as CAP or COVID-19; a normal CT scan has no infected region highlighted. The COVID-19 image shows an infected region in the left lower lobe. This identification of the infected region is performed using the Jet colormap and Turbo heat map provided in Python.
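Highlighting the infected region as in Figures 6–8 essentially amounts to mapping a normalized CT slice through Matplotlib's Jet colormap; the random array below is a stand-in for a real slice, and the overlay and thresholding details of the original figures are not reproduced here.

```python
import numpy as np
import matplotlib

matplotlib.use("Agg")          # headless backend; no display needed
import matplotlib.pyplot as plt

ct_slice = np.random.default_rng(0).random((64, 64))  # stand-in for a normalized CT slice
heat = plt.get_cmap("jet")(ct_slice)                  # RGBA heat map, shape (64, 64, 4)
```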

To compare this work with RT-PCR, 12 sample images from 3 patients were taken to test the model. All the images in Figure 9, which had been found negative by RT-PCR, are classified correctly by AlexNet+SVM and AlexNet+Random Forest. The infected regions are also shown in the images using the colormap functions provided by Python.

Various metrics used to analyse the different models are discussed below. Accuracy shows how correctly the images are classified; precision determines the reproducibility of predictions, i.e., how many of the predicted positives are correct; recall shows how many of the true positives are discovered; and the F1-score combines precision and recall into a balanced average. These values are calculated from the confusion matrix built on the test images. The F1-score, precision, and recall are defined in Eqs (7)–(9):

\mathrm{F1} = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}, (7)

\mathrm{precision} = \frac{TP}{TP + FP}, (8)

where TP is the number of images observed as positive and predicted as positive, and FP is the number of images observed as negative and predicted as positive;

\mathrm{recall} = \frac{TP}{TP + FN}, (9)

where FN is the number of images observed as positive and predicted as negative. Recall is also called sensitivity.

The specificity is defined in Eq (10):

\mathrm{specificity} = \frac{TN}{TN + FP}, (10)

where TN is the number of images observed as negative and predicted as negative, and FP is the number of images observed as negative and predicted as positive.

The accuracy for all the models can be calculated by Eq (11):

\mathrm{accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, (11)

where TN is the number of images observed as negative and predicted as negative. The Root Mean Square Error (RMSE) over all images can be evaluated using Eq (12):

\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2}, (12)

where y_i is the actual value, \hat{y}_i is the predicted value, and N is the total number of images.

The Mean Absolute Error (MAE) can be calculated using Eq (13) [26]:

\mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \left| y_i - \hat{y}_i \right|, (13)

where y_i is the actual value, \hat{y}_i is the predicted value, and N is the total number of images.
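The metrics of Eqs (7)–(13) reduce to a few lines of Python; the helper below mirrors the definitions above (per-class TP, FP, FN, TN counts) rather than any particular library API.

```python
import math

def classification_metrics(tp, fp, fn, tn):
    """Precision, recall (sensitivity), specificity, F1, accuracy per Eqs (7)-(11)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, specificity, f1, accuracy

def rmse(actual, predicted):
    """Eq (12): root mean square error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Eq (13): mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# 90 true positives, 10 false positives, 10 false negatives, 90 true negatives:
p, r, s, f1, acc = classification_metrics(90, 10, 10, 90)  # each metric is ~0.9 here
```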

The confusion matrix is often used to analyse the performance of classification models by comparing the predicted class labels of the test images against their known class labels, and the classification report is used to evaluate the quality of the predicted labels. The confusion matrix and classification report for the models built using conventional machine learning classifiers are presented in Table 7: Random Forest produced the best results, with a precision of 0.95, recall of 0.96, and specificity of 0.97. The confusion matrix and classification report for the models constructed using deep learning techniques are shown in Table 8, where AlexNet produced the best prediction outcomes, with a precision of 0.94, recall of 0.94, and specificity of 0.97. The confusion matrix and classification report for the proposed hybrid learning models are presented in Tables 9–13. The proposed models performed better than the other classifiers: AlexNet+SVM produced the best results, with a precision of 0.96, recall of 0.96, and specificity of 0.98 when tested on 333 test images, while Resnet50+Random Forest also produced good outcomes, with precision, recall, and specificity of 0.95, 0.95, and 0.97, respectively. Feature extraction retains only the features necessary for the classification task and removes the unnecessary ones, which also makes training and testing the classification models faster.


Confusion matrix and classification report:

| Model | Category | COVID-19 | CAP | Normal | Total | Precision | Recall | F1-score | Specificity |
| SVM | COVID-19 | 104 | 7 | 0 | 111 | 0.93 | 0.92 | 0.92 | 0.96 |
| | CAP | 7 | 103 | 2 | 111 | 0.92 | 0.88 | 0.89 | 0.96 |
| | Normal | 1 | 6 | 104 | 111 | 0.93 | 0.98 | 0.95 | 0.96 |
| | Total/Average | 112 | 116 | 106 | 333 | 0.92 | 0.93 | 0.92 | 0.96 |
| Random Forest | COVID-19 | 106 | 5 | 0 | 111 | 0.95 | 0.97 | 0.95 | 0.97 |
| | CAP | 3 | 106 | 2 | 111 | 0.95 | 0.95 | 0.95 | 0.97 |
| | Normal | 0 | 6 | 105 | 111 | 0.95 | 0.98 | 0.95 | 0.97 |
| | Total/Average | 109 | 111 | 107 | 333 | 0.95 | 0.96 | 0.95 | 0.97 |
| Decision Tree | COVID-19 | 104 | 5 | 2 | 111 | 0.93 | 0.93 | 0.93 | 0.96 |
| | CAP | 3 | 103 | 5 | 111 | 0.92 | 0.91 | 0.91 | 0.96 |
| | Normal | 4 | 4 | 103 | 111 | 0.92 | 0.93 | 0.92 | 0.96 |
| | Total/Average | 111 | 112 | 110 | 333 | 0.92 | 0.92 | 0.92 | 0.96 |
| Naive Bayes | COVID-19 | 82 | 20 | 9 | 111 | 0.73 | 0.86 | 0.79 | 0.87 |
| | CAP | 8 | 83 | 20 | 111 | 0.74 | 0.62 | 0.67 | 0.86 |
| | Normal | 5 | 29 | 77 | 111 | 0.69 | 0.72 | 0.70 | 0.87 |
| | Total/Average | 95 | 132 | 106 | 333 | 0.72 | 0.73 | 0.72 | 0.87 |
| KNN | COVID-19 | 104 | 7 | 0 | 111 | 0.93 | 0.91 | 0.91 | 0.97 |
| | CAP | 6 | 103 | 2 | 111 | 0.92 | 0.88 | 0.89 | 0.97 |
| | Normal | 4 | 6 | 101 | 111 | 0.90 | 0.98 | 0.93 | 0.96 |
| | Total/Average | 114 | 117 | 103 | 333 | 0.92 | 0.92 | 0.92 | 0.97 |


Confusion matrix and classification report:

| Model | Category | COVID-19 | CAP | Normal | Total | Precision | Recall | F1-score | Specificity |
| CNN | COVID-19 | 100 | 5 | 6 | 111 | 0.90 | 0.90 | 0.90 | 0.95 |
| | CAP | 5 | 101 | 4 | 111 | 0.91 | 0.89 | 0.89 | 0.95 |
| | Normal | 5 | 7 | 99 | 111 | 0.89 | 0.90 | 0.89 | 0.95 |
| | Total/Average | 111 | 113 | 109 | 333 | 0.90 | 0.90 | 0.90 | 0.95 |
| AlexNet | COVID-19 | 105 | 3 | 3 | 111 | 0.94 | 0.94 | 0.94 | 0.97 |
| | CAP | 2 | 106 | 3 | 111 | 0.95 | 0.95 | 0.95 | 0.97 |
| | Normal | 4 | 2 | 104 | 111 | 0.93 | 0.93 | 0.93 | 0.97 |
| | Total/Average | 111 | 111 | 111 | 333 | 0.94 | 0.94 | 0.94 | 0.97 |
| VGG-16 | COVID-19 | 104 | 4 | 3 | 111 | 0.93 | 0.93 | 0.93 | 0.96 |
| | CAP | 3 | 104 | 4 | 111 | 0.93 | 0.93 | 0.93 | 0.95 |
| | Normal | 4 | 3 | 104 | 111 | 0.93 | 0.93 | 0.93 | 0.96 |
| | Total/Average | 111 | 111 | 111 | 333 | 0.93 | 0.93 | 0.93 | 0.96 |
| Resnet50 | COVID-19 | 102 | 6 | 3 | 111 | 0.92 | 0.93 | 0.92 | 0.96 |
| | CAP | 5 | 101 | 5 | 111 | 0.91 | 0.90 | 0.90 | 0.95 |
| | Normal | 3 | 6 | 102 | 111 | 0.92 | 0.93 | 0.92 | 0.96 |
| | Total/Average | 110 | 113 | 110 | 333 | 0.92 | 0.92 | 0.92 | 0.96 |
| InceptionV3 | COVID-19 | 100 | 4 | 7 | 111 | 0.90 | 0.89 | 0.89 | 0.94 |
| | CAP | 6 | 100 | 5 | 111 | 0.90 | 0.91 | 0.90 | 0.95 |
| | Normal | 6 | 6 | 99 | 111 | 0.89 | 0.89 | 0.89 | 0.95 |
| | Total/Average | 112 | 110 | 111 | 333 | 0.90 | 0.90 | 0.90 | 0.95 |


Confusion matrix and classification report:

| Model | Category | COVID-19 | CAP | Normal | Total | Precision | Recall | F1-score | Specificity |
| CNN+SVM | COVID-19 | 104 | 4 | 3 | 111 | 0.93 | 0.93 | 0.93 | 0.97 |
| | CAP | 3 | 104 | 4 | 111 | 0.93 | 0.93 | 0.93 | 0.97 |
| | Normal | 4 | 3 | 104 | 111 | 0.93 | 0.93 | 0.93 | 0.97 |
| | Total/Average | 111 | 113 | 109 | 333 | 0.93 | 0.93 | 0.93 | 0.97 |
| CNN+Random Forest | COVID-19 | 104 | 3 | 4 | 111 | 0.94 | 0.93 | 0.93 | 0.96 |
| | CAP | 3 | 103 | 5 | 111 | 0.93 | 0.93 | 0.93 | 0.96 |
| | Normal | 5 | 5 | 101 | 111 | 0.91 | 0.92 | 0.91 | 0.97 |
| | Total/Average | 112 | 111 | 110 | 333 | 0.93 | 0.93 | 0.93 | 0.96 |
| CNN+Decision Tree | COVID-19 | 101 | 4 | 6 | 111 | 0.91 | 0.92 | 0.92 | 0.95 |
| | CAP | 3 | 101 | 7 | 111 | 0.91 | 0.92 | 0.92 | 0.96 |
| | Normal | 5 | 5 | 101 | 111 | 0.91 | 0.89 | 0.90 | 0.96 |
| | Total/Average | 109 | 110 | 114 | 333 | 0.91 | 0.91 | 0.91 | 0.96 |
| CNN+Naive Bayes | COVID-19 | 104 | 3 | 4 | 111 | 0.94 | 0.93 | 0.93 | 0.91 |
| | CAP | 3 | 103 | 5 | 111 | 0.93 | 0.93 | 0.93 | 0.92 |
| | Normal | 5 | 5 | 101 | 111 | 0.91 | 0.92 | 0.91 | 0.91 |
| | Total/Average | 112 | 111 | 110 | 333 | 0.93 | 0.93 | 0.93 | 0.91 |
| CNN+KNN | COVID-19 | 102 | 5 | 4 | 111 | 0.92 | 0.91 | 0.91 | 0.95 |
| | CAP | 4 | 102 | 5 | 111 | 0.91 | 0.91 | 0.91 | 0.95 |
| | Normal | 5 | 4 | 102 | 111 | 0.91 | 0.91 | 0.91 | 0.96 |
| | Total/Average | 111 | 111 | 110 | 333 | 0.91 | 0.91 | 0.91 | 0.95 |


Confusion matrix and classification report:

| Model | Category | COVID-19 | CAP | Normal | Total | Precision | Recall | F1-score | Specificity |
| AlexNet+SVM | COVID-19 | 107 | 4 | 0 | 111 | 0.96 | 0.96 | 0.96 | 0.98 |
| | CAP | 1 | 108 | 2 | 111 | 0.95 | 0.97 | 0.96 | 0.97 |
| | Normal | 1 | 3 | 107 | 111 | 0.98 | 0.96 | 0.97 | 0.98 |
| | Total/Average | 109 | 113 | 109 | 333 | 0.96 | 0.96 | 0.96 | 0.98 |
| AlexNet+Random Forest | COVID-19 | 102 | 5 | 4 | 111 | 0.92 | 0.92 | 0.91 | 0.96 |
| | CAP | 4 | 102 | 5 | 111 | 0.91 | 0.91 | 0.91 | 0.98 |
| | Normal | 5 | 4 | 102 | 111 | 0.91 | 0.91 | 0.91 | 0.98 |
| | Total/Average | 111 | 111 | 110 | 333 | 0.91 | 0.91 | 0.91 | 0.98 |
| AlexNet+Decision Tree | COVID-19 | 104 | 4 | 3 | 111 | 0.94 | 0.94 | 0.94 | 0.97 |
| | CAP | 3 | 103 | 4 | 111 | 0.93 | 0.93 | 0.93 | 0.97 |
| | Normal | 4 | 3 | 104 | 111 | 0.93 | 0.93 | 0.93 | 0.96 |
| | Total/Average | 111 | 111 | 111 | 333 | 0.94 | 0.94 | 0.94 | 0.97 |
| AlexNet+Naive Bayes | COVID-19 | 93 | 8 | 10 | 111 | 0.84 | 0.82 | 0.83 | 0.91 |
| | CAP | 10 | 93 | 8 | 111 | 0.84 | 0.83 | 0.83 | 0.90 |
| | Normal | 10 | 11 | 90 | 111 | 0.81 | 0.83 | 0.82 | 0.91 |
| | Total/Average | 113 | 112 | 108 | 333 | 0.83 | 0.83 | 0.83 | 0.91 |
| AlexNet+KNN | COVID-19 | 104 | 3 | 4 | 111 | 0.94 | 0.94 | 0.94 | 0.94 |
| | CAP | 4 | 103 | 4 | 111 | 0.93 | 0.94 | 0.93 | 0.93 |
| | Normal | 3 | 4 | 104 | 111 | 0.94 | 0.93 | 0.93 | 0.94 |
| | Total/Average | 111 | 110 | 112 | 333 | 0.94 | 0.94 | 0.94 | 0.94 |


Confusion matrix and classification report:

| Model | Category | COVID-19 | CAP | Normal | Total | Precision | Recall | F1-score | Specificity |
| VGG-16+SVM | COVID-19 | 105 | 2 | 4 | 111 | 0.95 | 0.95 | 0.95 | 0.97 |
| | CAP | 3 | 105 | 3 | 111 | 0.94 | 0.95 | 0.94 | 0.97 |
| | Normal | 2 | 4 | 105 | 111 | 0.94 | 0.95 | 0.94 | 0.97 |
| | Total/Average | 110 | 111 | 112 | 333 | 0.94 | 0.94 | 0.94 | 0.97 |
| VGG-16+Random Forest | COVID-19 | 106 | 2 | 3 | 111 | 0.95 | 0.95 | 0.96 | 0.97 |
| | CAP | 3 | 106 | 2 | 111 | 0.95 | 0.95 | 0.95 | 0.97 |
| | Normal | 3 | 3 | 105 | 111 | 0.94 | 0.95 | 0.94 | 0.97 |
| | Total/Average | 112 | 111 | 110 | 333 | 0.95 | 0.95 | 0.95 | 0.97 |
| VGG-16+Decision Tree | COVID-19 | 104 | 4 | 3 | 111 | 0.94 | 0.94 | 0.94 | 0.96 |
| | CAP | 3 | 103 | 4 | 111 | 0.93 | 0.94 | 0.93 | 0.96 |
| | Normal | 3 | 4 | 104 | 111 | 0.94 | 0.93 | 0.93 | 0.96 |
| | Total/Average | 111 | 110 | 112 | 333 | 0.94 | 0.94 | 0.93 | 0.96 |
| VGG-16+Naive Bayes | COVID-19 | 94 | 7 | 10 | 111 | 0.85 | 0.86 | 0.86 | 0.93 |
| | CAP | 7 | 94 | 10 | 111 | 0.85 | 0.85 | 0.85 | 0.92 |
| | Normal | 8 | 9 | 94 | 111 | 0.85 | 0.82 | 0.84 | 0.93 |
| | Total/Average | 109 | 110 | 114 | 333 | 0.85 | 0.84 | 0.85 | 0.92 |
| VGG-16+KNN | COVID-19 | 103 | 4 | 4 | 111 | 0.93 | 0.94 | 0.93 | 0.96 |
| | CAP | 3 | 103 | 4 | 111 | 0.94 | 0.93 | 0.93 | 0.96 |
| | Normal | 4 | 4 | 103 | 111 | 0.93 | 0.94 | 0.93 | 0.96 |
| | Total/Average | 110 | 112 | 110 | 333 | 0.93 | 0.94 | 0.93 | 0.96 |


Confusion matrix and classification report:

| Model | Category | COVID-19 | CAP | Normal | Total | Precision | Recall | F1-score | Specificity |
| Resnet50+SVM | COVID-19 | 104 | 4 | 3 | 111 | 0.94 | 0.95 | 0.94 | 0.94 |
| | CAP | 3 | 104 | 4 | 111 | 0.94 | 0.93 | 0.93 | 0.93 |
| | Normal | 3 | 4 | 104 | 111 | 0.94 | 0.94 | 0.94 | 0.94 |
| | Total/Average | 110 | 112 | 111 | 333 | 0.94 | 0.94 | 0.94 | 0.94 |
| Resnet50+Random Forest | COVID-19 | 105 | 3 | 3 | 111 | 0.95 | 0.95 | 0.95 | 0.97 |
| | CAP | 3 | 104 | 4 | 111 | 0.94 | 0.95 | 0.95 | 0.97 |
| | Normal | 3 | 3 | 105 | 111 | 0.94 | 0.94 | 0.95 | 0.97 |
| | Total/Average | 111 | 110 | 111 | 333 | 0.95 | 0.95 | 0.95 | 0.97 |
| Resnet50+Decision Tree | COVID-19 | 102 | 4 | 5 | 111 | 0.92 | 0.93 | 0.93 | 0.96 |
| | CAP | 4 | 103 | 4 | 111 | 0.93 | 0.92 | 0.93 | 0.96 |
| | Normal | 4 | 5 | 102 | 111 | 0.92 | 0.92 | 0.92 | 0.96 |
| | Total/Average | 110 | 111 | 112 | 333 | 0.92 | 0.92 | 0.92 | 0.96 |
| Resnet50+Naive Bayes | COVID-19 | 97 | 7 | 7 | 111 | 0.87 | 0.87 | 0.87 | 0.94 |
| | CAP | 7 | 96 | 8 | 111 | 0.88 | 0.86 | 0.87 | 0.94 |
| | Normal | 8 | 6 | 97 | 111 | 0.86 | 0.86 | 0.86 | 0.93 |
| | Total/Average | 112 | 109 | 112 | 333 | 0.87 | 0.86 | 0.86 | 0.94 |
| Resnet50+KNN | COVID-19 | 102 | 5 | 4 | 111 | 0.92 | 0.92 | 0.92 | 0.95 |
| | CAP | 5 | 101 | 5 | 111 | 0.91 | 0.92 | 0.91 | 0.96 |
| | Normal | 4 | 4 | 103 | 111 | 0.93 | 0.92 | 0.93 | 0.96 |
| | Total/Average | 111 | 110 | 112 | 333 | 0.92 | 0.92 | 0.92 | 0.96 |


Confusion matrix and classification report:

| Model | Category | COVID-19 | CAP | Normal | Total | Precision | Recall | F1-score | Specificity |
| InceptionV3+SVM | COVID-19 | 103 | 4 | 4 | 111 | 0.93 | 0.93 | 0.93 | 0.95 |
| | CAP | 4 | 102 | 5 | 111 | 0.92 | 0.93 | 0.92 | 0.96 |
| | Normal | 4 | 4 | 103 | 111 | 0.93 | 0.92 | 0.94 | 0.96 |
| | Total/Average | 111 | 110 | 112 | 333 | 0.93 | 0.93 | 0.93 | 0.96 |
| InceptionV3+Random Forest | COVID-19 | 102 | 5 | 4 | 111 | 0.92 | 0.92 | 0.92 | 0.96 |
| | CAP | 4 | 103 | 4 | 111 | 0.91 | 0.91 | 0.91 | 0.96 |
| | Normal | 5 | 5 | 101 | 111 | 0.91 | 0.92 | 0.91 | 0.95 |
| | Total/Average | 111 | 113 | 109 | 333 | 0.91 | 0.92 | 0.92 | 0.96 |
| InceptionV3+Decision Tree | COVID-19 | 101 | 5 | 5 | 111 | 0.91 | 0.91 | 0.91 | 0.95 |
| | CAP | 5 | 101 | 5 | 111 | 0.91 | 0.90 | 0.91 | 0.95 |
| | Normal | 5 | 6 | 100 | 111 | 0.90 | 0.90 | 0.90 | 0.95 |
| | Total/Average | 111 | 112 | 110 | 333 | 0.91 | 0.90 | 0.91 | 0.95 |
| InceptionV3+Naive Bayes | COVID-19 | 96 | 9 | 6 | 111 | 0.86 | 0.86 | 0.86 | 0.93 |
| | CAP | 6 | 96 | 9 | 111 | 0.86 | 0.85 | 0.86 | 0.93 |
| | Normal | 7 | 9 | 95 | 111 | 0.89 | 0.88 | 0.88 | 0.93 |
| | Total/Average | 108 | 112 | 107 | 333 | 0.87 | 0.87 | 0.87 | 0.93 |
| InceptionV3+KNN | COVID-19 | 101 | 5 | 5 | 111 | 0.91 | 0.92 | 0.91 | 0.95 |
| | CAP | 4 | 102 | 5 | 111 | 0.92 | 0.91 | 0.92 | 0.95 |
| | Normal | 5 | 5 | 101 | 111 | 0.96 | 0.91 | 0.91 | 0.95 |
| | Total/Average | 110 | 112 | 111 | 333 | 0.91 | 0.92 | 0.92 | 0.95 |

The outcomes of the models trained on images before and after preprocessing are also compared. There is a visible difference in the results, showing the significance of preprocessing the images; this analysis is presented in Table 14. Compared with the studies featured in Table 3, the presented hybrid learning models produce better results.


| Model | With preprocessing ||||| Without preprocessing |||||
| | Accuracy | F1-score | MAE | RMSE | Specificity | Accuracy | F1-score | MAE | RMSE | Specificity |
| SVM | 93.39% | 0.92 | 0.229 | 0.054 | 0.96 | 91.11% | 0.91 | 0.298 | 0.089 | 0.94 |
| Random Forest | 95.19% | 0.95 | 0.218 | 0.049 | 0.97 | 94.23% | 0.94 | 0.227 | 0.054 | 0.96 |
| Decision Tree | 93.12% | 0.92 | 0.262 | 0.063 | 0.96 | 92.02% | 0.92 | 0.265 | 0.077 | 0.94 |
| Naive Bayes | 72.69% | 0.72 | 0.795 | 0.396 | 0.87 | 69.63% | 0.69 | 0.803 | 0.412 | 0.78 |
| KNN | 92.49% | 0.92 | 0.226 | 0.051 | 0.97 | 90.15% | 0.90 | 0.321 | 0.093 | 0.92 |
| CNN | 90.01% | 0.90 | 0.314 | 0.087 | 0.95 | 89.45% | 0.89 | 0.356 | 0.097 | 0.92 |
| AlexNet | 94.59% | 0.94 | 0.206 | 0.061 | 0.97 | 93.78% | 0.93 | 0.226 | 0.049 | 0.94 |
| VGG-16 | 93.69% | 0.93 | 0.227 | 0.051 | 0.96 | 91.82% | 0.92 | 0.225 | 0.051 | 0.93 |
| Resnet50 | 91.59% | 0.91 | 0.272 | 0.077 | 0.96 | 91.18% | 0.91 | 0.299 | 0.065 | 0.93 |
| InceptionV3 | 89.78% | 0.89 | 0.313 | 0.082 | 0.95 | 87.43% | 0.86 | 0.391 | 0.099 | 0.94 |
| CNN+SVM | 91.12% | 0.91 | 0.281 | 0.082 | 0.97 | 88.73% | 0.89 | 0.366 | 0.096 | 0.93 |
| CNN+Random Forest | 92.49% | 0.93 | 0.227 | 0.052 | 0.97 | 89.99% | 0.90 | 0.325 | 0.088 | 0.93 |
| CNN+Decision Tree | 90.99% | 0.91 | 0.271 | 0.078 | 0.96 | 88.31% | 0.88 | 0.347 | 0.093 | 0.93 |
| CNN+Naive Bayes | 82.85% | 0.83 | 0.456 | 0.123 | 0.91 | 79.56% | 0.80 | 0.478 | 0.178 | 0.87 |
| CNN+KNN | 91.89% | 0.92 | 0.253 | 0.061 | 0.95 | 89.56% | 0.89 | 0.312 | 0.079 | 0.90 |
| AlexNet+SVM | 96.69% | 0.97 | 0.217 | 0.043 | 0.98 | 95.12% | 0.95 | 0.217 | 0.047 | 0.93 |
| AlexNet+Random Forest | 96.09% | 0.96 | 0.225 | 0.049 | 0.98 | 95.11% | 0.95 | 0.213 | 0.047 | 0.95 |
| AlexNet+Decision Tree | 93.09% | 0.93 | 0.225 | 0.050 | 0.97 | 92.45% | 0.92 | 0.223 | 0.053 | 0.92 |
| AlexNet+Naive Bayes | 83.13% | 0.83 | 0.421 | 0.099 | 0.91 | 80.55% | 0.81 | 0.492 | 0.153 | 0.86 |
| AlexNet+KNN | 93.39% | 0.93 | 0.220 | 0.055 | 0.94 | 90.91% | 0.91 | 0.279 | 0.050 | 0.91 |
| VGG-16+SVM | 94.59% | 0.95 | 0.205 | 0.061 | 0.97 | 93.69% | 0.93 | 0.221 | 0.045 | 0.93 |
| VGG-16+Random Forest | 95.19% | 0.95 | 0.200 | 0.054 | 0.97 | 93.34% | 0.93 | 0.220 | 0.049 | 0.94 |
| VGG-16+Decision Tree | 93.39% | 0.93 | 0.214 | 0.043 | 0.96 | 91.23% | 0.91 | 0.277 | 0.080 | 0.93 |
| VGG-16+Naive Bayes | 84.68% | 0.85 | 0.419 | 0.083 | 0.92 | 82.87% | 0.83 | 0.455 | 0.122 | 0.88 |
| VGG-16+KNN | 93.09% | 0.93 | 0.261 | 0.062 | 0.96 | 92.45% | 0.92 | 0.227 | 0.053 | 0.92 |
| Resnet50+SVM | 93.69% | 0.94 | 0.227 | 0.050 | 0.97 | 91.78% | 0.93 | 0.220 | 0.048 | 0.94 |
| Resnet50+Random Forest | 94.29% | 0.94 | 0.201 | 0.059 | 0.97 | 86.45% | 0.86 | 0.399 | 0.087 | 0.93 |
| Resnet50+Decision Tree | 92.19% | 0.91 | 0.278 | 0.079 | 0.96 | 89.10% | 0.89 | 0.369 | 0.101 | 0.93 |
| Resnet50+Naive Bayes | 87.08% | 0.87 | 0.389 | 0.100 | 0.94 | 85.18% | 0.85 | 0.402 | 0.077 | 0.90 |
| Resnet50+KNN | 91.89% | 0.92 | 0.249 | 0.058 | 0.96 | 88.99% | 0.89 | 0.337 | 0.088 | 0.91 |
| InceptionV3+SVM | 92.79% | 0.93 | 0.220 | 0.047 | 0.99 | 89.99% | 0.90 | 0.319 | 0.091 | 0.92 |
| InceptionV3+Random Forest | 91.89% | 0.92 | 0.236 | 0.059 | 0.96 | 87.91% | 0.88 | 0.320 | 0.079 | 0.93 |
| InceptionV3+Decision Tree | 90.69% | 0.91 | 0.266 | 0.070 | 0.95 | 88.45% | 0.88 | 0.340 | 0.091 | 0.91 |
| InceptionV3+Naive Bayes | 86.18% | 0.86 | 0.411 | 0.078 | 0.93 | 84.72% | 0.85 | 0.416 | 0.081 | 0.91 |
| InceptionV3+KNN | 91.29% | 0.91 | 0.288 | 0.075 | 0.95 | 90.11% | 0.90 | 0.311 | 0.090 | 0.93 |

Thus, the classification model for predicting COVID-19 has been constructed in a robust way and helps in quicker prediction. The AlexNet model takes 13 minutes 25 seconds for training and 6 minutes 38 seconds for testing; VGG-16 takes 20 minutes 43 seconds for training and 12 minutes 30 seconds for testing; InceptionV3 takes 34 minutes 12 seconds for training and 20 minutes 12 seconds for testing; and Resnet50 takes 43 minutes for training and 21 minutes for testing. As a model gets deeper, it takes more time to train and test: the time taken to run a model grows with the number of layers. RT-PCR, used as the standard reference, takes 1-2 days in India to confirm whether a patient is infected by COVID-19. Compared to RT-PCR, the present model provides quicker prediction and can aid radiologists in carrying out further treatment and procedures, and its accuracy is quite promising. The CT scan images that tested negative by RT-PCR are also correctly predicted by these models. Medical images are often unclear, with lesions and tissues captured in the CT scans impeding the prediction task; to overcome these difficulties, various image preprocessing techniques were applied. The preprocessing techniques incorporated here have an impact on the accuracies and results, providing better-resolution, high-quality, high-definition images for the prediction task. In conclusion, the models presented in this study produce better outcomes (accuracy) and quicker prediction, even in the early stages of infection. In short, the proposed work can be used in this global public health emergency, which requires immediate attention.

4. Conclusion

Early detection of COVID-19 is vital for treating and isolating patients in order to curb the spread of the virus. RT-PCR is regarded as the standard technique, but it has been reported that chest CT can serve as a rapid and reliable approach for COVID-19 screening. The proposed hybrid learning models detect COVID-19 from chest CT scan images with an accuracy of 96.69%, sensitivity of 96%, and specificity of 98% for the AlexNet+SVM model. Even though the patterns of abnormalities in CAP- and COVID-19-affected CT scans overlap, these models perform well with high accuracy, sensitivity, and specificity using multisource data assimilation. Finally, reliable models are proposed to distinguish COVID-19 and CAP from CT scan images.
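The hybrid pattern behind a model such as AlexNet+SVM — deep features fed to a classical classifier instead of a softmax head — together with the sensitivity/specificity metrics quoted above can be sketched as follows. The random class-separated vectors merely stand in for real CNN features, and the three-class layout (COVID-19, CAP, normal) is taken from the paper; everything else is an illustrative assumption.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Simulated deep features: in the paper a pretrained CNN maps each CT image
# to a fixed-length feature vector; here class-separated Gaussians stand in.
rng = np.random.default_rng(0)
n_per_class, n_features = 80, 128
centers = rng.normal(0.0, 4.0, size=(3, n_features))
X = np.vstack([c + rng.normal(0.0, 1.0, size=(n_per_class, n_features))
               for c in centers])
y = np.repeat([0, 1, 2], n_per_class)  # 0=COVID-19, 1=CAP, 2=normal

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Hybrid step: an SVM replaces the CNN's softmax classification head.
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

# Per-class sensitivity and specificity from the 3x3 confusion matrix.
cm = confusion_matrix(y_te, y_pred)
for k, name in enumerate(["COVID-19", "CAP", "normal"]):
    tp = cm[k, k]
    fn = cm[k].sum() - tp
    fp = cm[:, k].sum() - tp
    tn = cm.sum() - tp - fn - fp
    print(f"{name}: sensitivity={tp/(tp+fn):.2f} specificity={tn/(tn+fp):.2f}")
```

The one-versus-rest reduction shown in the loop is the standard way to report per-class sensitivity and specificity for a multiclass problem.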

Data Availability

The datasets analyzed in this study are available from medRxiv, bioRxiv, NEJM, JAMA, The Lancet, Kaggle (https://www.kaggle.com/c/covidct/data), Radiopaedia (https://radiopaedia.org/articles/imaging-data-sets-artificial-intelligence), Zenodo (https://zenodo.org/record/3757476#.YBVbtWRN0zQ), and GitHub (https://github.com/ieee8023/covid-chestxray-dataset).

Ethical Approval

This study does not involve human participants, and hence, ethical approval is not required.

Conflicts of Interest

On behalf of all the authors, the corresponding author states that there is no conflict of interest.

Authors’ Contributions

V.P. performed the supervision, project administration, writing, reviewing, and editing. V.N. performed the data curation and formal analysis and wrote the original draft. S.J.S.R. performed the conceptualization, investigation, and methodology.

References

  1. N. Zhu, D. Zhang, W. Wang et al., “A novel coronavirus from patients with pneumonia in China, 2019,” New England Journal of Medicine, vol. 382, no. 8, pp. 727–733, 2020.
  2. WHO, Coronavirus Disease 2019 (COVID-19) Situation Report–39, World Health Organization, 2020.
  3. H. Shi, X. Han, N. Jiang et al., “Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: a descriptive study,” The Lancet Infectious Diseases, vol. 20, no. 4, pp. 425–434, 2020.
  4. F. Pan, T. Ye, P. Sun et al., “Time course of lung changes at chest CT during recovery from coronavirus disease 2019 (COVID-19),” Radiology, vol. 295, no. 3, pp. 715–721, 2020.
  5. X. Xie, Z. Zhong, W. Zhao, C. Zheng, F. Wang, and J. Liu, “Chest CT for typical coronavirus disease 2019 (COVID-19) pneumonia: relationship to negative RT-PCR testing,” Radiology, vol. 296, no. 2, pp. E41–E45, 2020.
  6. P. Huang, T. Liu, L. Huang et al., “Use of chest CT in combination with negative RT-PCR assay for the 2019 novel coronavirus but high clinical suspicion,” Radiology, vol. 295, no. 1, article 200330, pp. 22–23, 2020.
  7. Y. Fang, H. Zhang, J. Xie et al., “Sensitivity of chest CT for COVID-19: comparison to RT-PCR,” Radiology, vol. 296, no. 2, pp. E115–E117, 2020.
  8. Q. Sun, X. Xu, J. Xie, J. Li, and X. Huang, “Evolution of computed tomography manifestations in five patients who recovered from coronavirus disease 2019 (COVID-19) pneumonia,” Korean Journal of Radiology, vol. 21, 2020.
  9. T. Ai, Z. Yang, H. Hou et al., “Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases,” Radiology, vol. 296, no. 2, pp. E32–E40, 2020.
  10. General Office of National Health Committee, Notice on the Issuance of a Program for the Diagnosis and Treatment of Novel Coronavirus (2019-nCoV) Infected Pneumonia (Trial Revised Fifth Edition), Policy document of the State Administration of Traditional Chinese Medicine, 2020.
  11. X. Xu, X. Jiang, C. Ma et al., “Deep learning system to screen coronavirus disease 2019 pneumonia,” 2020.
  12. J. Zhao, X. He, X. Yang, Y. Zhang, S. Zhang, and P. Xie, “COVID-CT-Dataset: a CT scan dataset about COVID-19,” 2020, https://arxiv.org/abs/2003.13865.
  13. V. Shah, R. Keniya, A. Shridharani, M. Punjabi, J. Shah, and N. Mehendale, “Diagnosis of COVID-19 using CT scan images and deep learning techniques,” Emergency Radiology, 2021.
  14. X. He, X. Yang, S. Zhang et al., “Sample-efficient deep learning for COVID-19 diagnosis based on CT scans,” medRxiv, 2020.
  15. M. J. Horry, S. Chakraborty, M. Paul et al., “COVID-19 detection through transfer learning using multimodal imaging data,” IEEE Access, vol. 8, pp. 149808–149824, 2020.
  16. Y. Song, S. Zheng, L. Li et al., “Deep learning enables accurate diagnosis of novel coronavirus (COVID-19) with CT images,” medRxiv, 2020.
  17. J. Zhang, Y. Xie, Y. Li, C. Shen, and Y. Xia, “COVID-19 screening on chest X-ray images using deep learning based anomaly detection,” 2020, https://arxiv.org/abs/2003.12338.
  18. A. Narin, C. Kaya, and Z. Pamuk, “Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks,” 2020, https://arxiv.org/abs/2003.10849.
  19. E. E. D. Hemdan, M. A. Shouman, and M. E. Karar, “COVIDX-Net: a framework of deep learning classifiers to diagnose COVID-19 in X-ray images,” 2020, https://arxiv.org/abs/2003.11055.
  20. I. D. Apostolopoulos and T. A. Mpesiana, “COVID-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks,” Physical and Engineering Sciences in Medicine, vol. 43, no. 2, pp. 635–640, 2020.
  21. P. Sethy and S. Behera, “Detection of coronavirus disease (COVID-19) based on deep features,” Preprints, 2020.
  22. A. Abbas, M. M. Abdelsamea, and M. M. Gaber, “Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network,” 2020, https://arxiv.org/abs/2003.13815.
  23. P. Afshar, S. Heidarian, F. Naderkhani, A. Oikonomou, K. N. Plataniotis, and A. Mohammadi, “COVID-CAPS: a capsule network-based framework for identification of COVID-19 cases from X-ray images,” 2020, https://arxiv.org/abs/2004.02696.
  24. L. O. Hall, R. Paul, D. B. Goldgof, and G. M. Goldgof, “Finding COVID-19 from chest X-rays using deep learning on a small dataset,” 2020, https://arxiv.org/abs/2004.02060.
  25. M. Farooq and A. Hafeez, “COVID-ResNet: a deep learning framework for screening of COVID-19 from radiographs,” 2020, https://arxiv.org/abs/2003.14395.
  26. H. S. Maghdid, K. Z. Ghafoor, A. S. Sadiq, K. Curran, and K. Rabie, “A novel AI-enabled framework to diagnose coronavirus COVID-19 using smartphone embedded sensors: design study,” 2020, https://arxiv.org/abs/2003.07434.

Copyright © 2021 Varalakshmi Perumal et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
