Abstract

This paper presents an automated and noninvasive technique to discriminate COVID-19 patients from pneumonia patients using chest X-ray images and artificial intelligence. The reverse transcription-polymerase chain reaction (RT-PCR) test is commonly administered to detect COVID-19. However, the RT-PCR test necessitates person-to-person contact to administer, requires a variable amount of time to produce results, and is expensive. Moreover, this test remains inaccessible to a significant portion of the global population. Chest X-ray images can play an important role here, as X-ray machines are commonly available at any healthcare facility. However, the chest X-ray images of COVID-19 and viral pneumonia patients are very similar and often lead to subjective misdiagnosis. This investigation has employed two algorithms to solve this problem objectively. One algorithm extracts lower-dimension encoded features from the X-ray images and applies them to machine learning algorithms for final classification. The other algorithm relies on the built-in feature extractor network of the pretrained deep neural network VGG16 to extract features from the X-ray images and classify them. The simulation results show that the two proposed algorithms can discriminate COVID-19 patients from pneumonia patients with best accuracies of 100% and 98.1%, employing VGG16 and the machine learning algorithms, respectively. The performances of these two algorithms have also been compared with those of other existing state-of-the-art methods.

1. Introduction

On March 11, 2020, the World Health Organization (WHO) declared the COVID-19 outbreak a pandemic [1]. This previously unknown virus first emerged in Wuhan, China, and was initially named the novel coronavirus. Later, the International Committee on Taxonomy of Viruses renamed the virus severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Since then, millions of people worldwide have been infected by this coronavirus and its variants [2].

To prevent the spread of this virus, control mechanisms, including wearing facemasks and massive testing campaigns, have been suggested [3]. Wearing masks has been mandated in public places by many government and private organizations worldwide. A convolutional neural network (CNN)-based facemask detection algorithm has even been developed by researchers [4] to enforce mask wearing in public places.

The reverse transcription-polymerase chain reaction (RT-PCR) test has been introduced to detect the coronavirus and is still considered the gold standard for testing for this virus [5]. However, the RT-PCR test has some limitations: (a) it is not economical, (b) it needs a variable amount of time to produce results, (c) it necessitates person-to-person contact to administer, and (d) it remains out of reach for a major portion of the population due to a lack of healthcare facilities [6]. Moreover, the RT-PCR test is invasive and uncomfortable for patients, especially children. To overcome these limitations, researchers have strived to find an alternative to the RT-PCR test and have recommended using noninvasive techniques instead. Biomedical signals and radiological images are recommended for this purpose.

Biomedical signals, including speech, vowels, words, phrases, and counting numbers, have been used to detect several diseases [7–9]. The literature shows that these signals can be used to detect various diseases, including asthma [10], Alzheimer’s disease [11], Parkinson’s disease [12], vocal fold diseases [13], depression [14], schizophrenia [15, 16], autism [17, 18], dysphonia [19], abnormality in fetal heart rate [20], and breast cancer [21]. Recent works have also demonstrated that coughing sounds can be used to detect respiratory disorders in COVID-19 patients [22, 23]. However, biomedical signal-based disease diagnosis requires sophisticated equipment to administer. In addition, only trained technologists can perform the signal acquisition, processing, and analysis tasks. Biomedical images can overcome these limitations.

Biomedical images have long been used for diagnosing diseases in plants [24] and animals. These diagnoses follow three significant steps: (a) preprocessing, (b) image feature extraction, and (c) classification. The preprocessing techniques may include image acquisition, image resizing, image enhancement, image segmentation, and extraction of the region of interest (ROI). Then, image features are extracted from the preprocessed images. Finally, these features are applied to a classifier for the final diagnosis.

Recently, deep learning-based algorithms have been playing an essential role as classifiers. For example, an enhanced deep learning-based CNN model with a leaky rectified linear unit (ReLU) activation function has been proposed in [25] to detect a skin disease called acne. The authors have used different image processing techniques in their work, namely, k-means clustering, texture analysis, and segmentation. The results show that the deep learning-based algorithm can achieve a higher accuracy (i.e., 97.54%) than the support vector machine (SVM) algorithm while detecting this disease.

Biomedical images, including computerized tomography (CT) and X-ray images, are popularly used to diagnose lung diseases like pneumonia [26–29], tuberculosis [30–32], interstitial lung diseases [33], early lung cancer [34–37], and pulmonary nodules [38–43]. The key advantages of using radiological images are the following: (a) they can be readily produced at any medical facility equipped with the necessary instruments and (b) physicians need considerably less time to perform visual subjective diagnoses. A chest CT image is computed by compiling scans captured from different angles into a single image, whereas a chest X-ray is a single projection image. CT scans provide physicians with more detailed information about the patient for diagnoses compared to X-ray images. However, CT scans are more expensive than their X-ray counterparts and are available only in specialized healthcare facilities. This investigation considers chest X-ray images only. However, comparing the chest X-ray images of patients with COVID-19 and other lung diseases often leads to wrong diagnoses [44, 45]. For example, it is hard to subjectively differentiate between the X-ray images of a COVID-19 patient and a viral pneumonia patient, as shown in Figure 1. However, early discrimination and isolation of COVID-19 patients from pneumonia patients are vital to prevent the spread of the pandemic. It is also essential for healthcare facilities to reduce their ever-increasing burden.

This work presents a noninvasive technique to detect COVID-19 patients using chest X-ray images and artificial intelligence. A deep CNN and several machine learning algorithms have been used as classifiers. The CNN is trained with the original chest X-ray images, whereas the machine learning algorithms are trained with encoded image features extracted from the chest X-ray images. The main contributions of this work are as follows:
(a) to develop a classification model to differentiate COVID-19 and pneumonia patients using lung X-ray images based on machine learning and deep learning approaches
(b) to extract encoded image features and investigate their usefulness in identifying COVID-19 and pneumonia patients using several machine learning algorithms
(c) to reduce the computational burden, and hence speed up the algorithm, using a data reduction technique
(d) to provide a detailed performance analysis of the proposed classification systems in terms of statistical performance parameters
(e) to compare the performances of the proposed techniques with those of other state-of-the-art algorithms

The rest of the paper is organized as follows. Related works are presented in Section 2. Materials and methods are presented in Section 3. Simulation results are presented in Section 4. The paper is concluded with Section 5. A list of acronyms used throughout this paper is provided in Table 1.

2. Related Works

Recently, COVID-19 patient detection using X-ray images and artificial intelligence has drawn considerable attention from researchers. One of the earliest works can be found in [47]. The authors have used seven different CNN architectures to detect COVID-19 patients. They achieved the best detection accuracy with VGG19 (90%) and DenseNet (90%).

Two deep learning models, VGG19 and U-Net, have been deployed in [48] to process the X-ray images and classify them as COVID-19 positive or COVID-19 negative. The proposed system preprocessed the images by segmentation and then categorized them using a transfer learning scheme. The authors achieved a detection accuracy of around 97%.

An Android application was designed in [49] to identify COVID-19 patients using X-ray images. For this purpose, a CNN was developed and deployed on an Android mobile phone. By employing a 5-fold cross-validation, the authors achieved an average accuracy, sensitivity, specificity, precision, and F1-score of 98.65%, 98.49%, 98.82%, 98.81%, and 98.65%, respectively.

To overcome the limitation imposed by the small dataset, a deep convolutional generative adversarial network (DCGAN) has been used in [50]. The DCGAN regenerates enough data from the limited existing data for the training task and hence overcomes the constraint of a limited dataset. The simulation showed that the DCGAN could successfully classify the X-ray images into normal, pneumonia, and COVID-19.

Supervised machine learning techniques have been used in [51] to detect COVID-19 patients based on X-ray images. The authors have extracted a color layout descriptor (CLD) feature from the images. The results show that the CLD can assist a machine learning algorithm in achieving a high precision and recall value while discriminating COVID-19 from other pulmonary diseases.

A novel machine learning algorithm called the Siamese CNN model was proposed in [52] to detect COVID-19 automatically by utilizing X-ray images. The authors have used three consecutive models in parallel to extract the image features. The results showed that the proposed algorithm achieved an accuracy of 96.70% while classifying the X-ray images into COVID-19, non-COVID-19, and pneumonia. In a similar work [53], common bacterial pneumonia, COVID-19, and healthy subjects have also been investigated. The authors have used a transfer learning scheme in their work. They achieved the accuracy, sensitivity, and specificity of 96.78%, 98.66%, and 96.46%, respectively.

In [54], the authors have introduced a novel network architecture to detect COVID-19. They replaced the final classifier layer in the DenseNet-201 with a new network consisting of a global averaging layer, a batch normalization layer, a dense layer with ReLU activation, and a final classification layer. They achieved an accuracy of 94% while detecting COVID-19.

A pretrained Res-CovNet has been used in [55] to classify the X-ray images of healthy, bacterial pneumonia, viral pneumonia, and COVID-19. The authors have introduced a novel framework using the Internet of Medical Things (IoMT) to collect X-ray images from remotely located patients. The results showed that the proposed model could discriminate COVID-19 patients from healthy patients with an accuracy of 98.4%. However, the proposed model detected COVID-19 patients from normal, bacterial pneumonia, and viral pneumonia with a lower accuracy (i.e., 96.2%).

Fifteen (15) pretrained CNN models have been used in [56]. The authors achieved the highest accuracy with the VGG19.

Computer vision algorithms and medical image analysis techniques have been used in [57] to identify COVID-19. For this purpose, the authors have employed three state-of-the-art deep learning models, namely, ResNet-V2, InceptionNet-V3, and NASNetLarge. They have investigated two techniques, namely, (a) with data augmentation and (b) without data augmentation. They achieved 98.63% and 99.02% accuracies for these two cases, respectively.

The pretrained ResNet-50 network model and several image processing techniques, including augmentation, enhancement, normalization, and resizing, have been used in [58] to detect COVID-19 patients. The results showed that the proposed system outperformed other algorithms, including VGG16, VGG19, and DenseNet.

In [59], the Apache Spark system has been utilized as a big data framework to collect the X-ray images of healthy and COVID-19 subjects. Three models, namely, Inception-V3, ResNet-50, and VGG19, have been investigated in their work. All three models achieved an accuracy of 100% while discriminating COVID-19 samples from healthy samples.

A large dataset of X-ray images has been investigated in [60]. In this investigation, the authors used X-ray images of COVID-19 patients from GitHub and healthy X-ray images from the Kaggle website. The authors achieved an accuracy of 100%, which they credited to this large dataset.

A persuasive classification and reliable detection algorithm for COVID-19 has been presented in [61]. The authors have used existing state-of-the-art CNN algorithms in their work and also built a novel CNN from scratch. The achieved accuracy was 100% for the COVID-19 versus healthy classification, and a classification accuracy of 93.75% was achieved when categorizing healthy, COVID-19, and pneumonia patients.

In [62], the authors have used three feature sets, namely, hand-crafted features, radiomics features (specialized for medical images), and deep features (extracted by a pretrained deep learning architecture). The authors combined these feature sets into a fused shallow feature set and concluded that it performed better than the individual feature sets. Four models, namely, Inception-V3, MobileNet, Xception, and DenseNet, have been used in [63] to detect COVID-19 patients using X-ray images. Based on the performance parameters of accuracy, recall, and F1-score, the authors recommend using MobileNet to detect COVID-19 patients.

CT and radiographic images (chest X-rays) have been used in [64] to detect COVID-19. The authors have used two deep learning algorithms for classification, namely, VGG19 and ResNet-50. The simulation results showed that the X-ray images yielded higher accuracy than the CT images.

Nine deep learning algorithms, namely, MobileNet-V2, ResNet-50, Inception-V3, NASNet-Mobile, VGG16, Xception, Inception, ResNet-V2, and DenseNet-121, have been used in [65] to detect COVID-19 patients. The authors recommended using these pretrained deep learning models as they produce results much faster than the RT-PCR test. In [66], the authors have used data processing techniques, including dataset balancing, medical experts’ image analysis, and data augmentation, to implement their algorithm. They achieved an accuracy of 99%.

Unlike other works mentioned above, both chest X-ray images and symptoms have been considered in [67] to detect COVID-19. The symptoms included cough, fever, sore throat, headache, and shortness of breath. These symptoms were preprocessed and applied to a logistic regression analyzer to diagnose COVID-19 patients. Similarly, the chest X-ray images were preprocessed to classify the samples as normal, non-COVID-19-viral, bacterial, and COVID-19 positive. A decision tree algorithm combined the results of logistic regression and multiclass classification for the final classification. The proposed algorithm achieved an accuracy of 78.88%, a specificity of 94%, and a sensitivity of 77%.

Both deep learning and machine learning algorithms have been used in [68] to detect COVID-19 using chest X-ray images. The authors conducted 38 experiments using CNN, 10 experiments using five machine learning algorithms, and 14 experiments using state-of-the-art pretrained networks. They achieved a mean sensitivity, specificity, accuracy, and area under the curve (AUC) of 93.84%, 99.18%, 98.50%, and 96.51%, respectively.

In addition to the hyperparameter tuning, multiobjective adaptive differential evolution (MADE) has been introduced in [69] to detect COVID-19 using a CNN. The simulation results showed that the proposed algorithm achieved an accuracy of 94.48%, which is higher than that of other machine learning algorithms, including random forest (RF), CNN-SVM, DarkNet-19, reduced support vector machine (RSVM), DarkCOVIDNet, DeTrack, and deep transfer learning (DTL).

The abovementioned related works have used different approaches and techniques to detect COVID-19 and achieved varying accuracy levels. However, these works also have some limitations. The major limitations of the abovementioned related works are as follows:
(a) The algorithms need a huge dataset to train the classifiers
(b) The computational complexity of the algorithms is very high, as there are a considerable number of parameters to deal with
(c) Most of the algorithms mainly apply image processing techniques that are also computationally expensive
(d) The accuracy of the algorithms is, in general, still not very high, although a couple of algorithms achieved an accuracy of 100%

3. Materials and Methods

3.1. Database

The X-ray images available on the Kaggle website [46] have been used in this investigation. This database was created to assist scientists, clinicians, and healthcare experts in COVID-19 diagnosis and is one of the most popular databases used by the scientific community. The X-ray images were collected from different sources and stored in the Kaggle database. The database contains the chest X-ray images of 3616 COVID-19-positive cases, 10192 normal (healthy) cases, 6012 lung opacity (non-COVID-19 lung infection) cases, and 1345 viral pneumonia cases. This investigation uses 280 X-ray images of viral pneumonia and COVID-19 samples, randomly selected from the Kaggle database. Seventy percent (70%) of these X-ray images are used for training, and the remaining 30% are equally divided for validation and testing purposes.

3.2. Classification Algorithms

In this work, a CNN and seven machine learning algorithms have been used for COVID-19 detection. CNNs have been popularly used in image classification tasks. The unique characteristic of a CNN is that it can recognize patterns in an image irrespective of their orientation [70]. However, a CNN requires a large dataset for training, whereas publicly available databases, including Kaggle, provide limited datasets. Hence, a transfer learning approach has been used in this work. A pretrained network model called VGG16 has been used for this investigation. The VGG16 network is already trained on large datasets (i.e., 22000 categories of images) and is also available prepackaged with Keras. The VGG16 used in this work consists of a stack of 13 convolutional layers followed by three fully connected layers, as shown in Figure 2. All hidden layers use the ReLU activation function, and the final layer uses the Softmax activation function. The original X-ray images are rescaled to a fixed size of 224 × 224 × 3. Five max-pooling layers carry out spatial pooling to reduce the dimension of the data. This kind of dimension reduction technique is illustrated in Figure 3. As demonstrated in this figure, the image features vary at the different network levels and become more abstract as the depth increases. The detailed steps used by the VGG16 are illustrated in Algorithm 1. The network model was implemented in Google Colab [71].

The seven investigated machine learning algorithms are available in the Statistics and Machine Learning Toolbox of MATLAB 2020. These seven algorithms were selected from among the available machine learning algorithms as they provided the best accuracies. The chosen machine learning algorithms are (a) linear discriminant analysis (LDA), (b) fine tree, (c) logistic regression, (d) coarse Gaussian support vector machine (SVM), (e) cosine k-nearest neighbors (kNN), (f) ensemble subspace discriminant, and (g) linear SVM.

1: Load the images for training, validation, and testing
2: Rescale the images to (224, 224, 3)
3: Normalize the pixel values of the images to the range [0, 1]
4: Load the pre-trained VGG16 model
5: Freeze the base network
6: Flatten the network output
7: Add two dense layers on top of the base network
8: Split the images into training, validation, and testing sets in the ratio 70:15:15
9: Extract the feature maps from the images using the pre-trained VGG16
10: Set the maximum number of epochs, N
11: Set the counter, i = 1
12: do while i ≤ N
13:  Select the hyperparameter values (e.g., learning rate, batch size)
14:  Train the classifier using the training dataset
15:  Evaluate the VGG16 performance with the validation data
16: end
17: Choose the best weight matrix, i.e., the one that provides the minimum validation error rate
18: Compute the prediction scores on the testing samples
19: Classify the samples as COVID-19 or pneumonia using the prediction scores
20: Identify the best-trained model using the test images
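
To make the workflow of Algorithm 1 concrete, the following minimal Keras sketch sets up the same transfer-learning arrangement: a frozen VGG16 base with two dense layers on top. The hidden-layer width, optimizer settings, and the dataset pipeline names (train_ds, val_ds) are illustrative assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Load VGG16 pretrained on ImageNet without its fully connected top.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional base (Algorithm 1, step 5)

# Flatten the feature maps and add two dense layers on top (steps 6-7).
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # hidden size is an assumption
    layers.Dense(2, activation="softmax"),  # COVID-19 vs. pneumonia
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds are assumed tf.data pipelines of rescaled 224x224x3
# images with one-hot labels, split 70:15:15 (step 8).
# history = model.fit(train_ds, validation_data=val_ds, epochs=30)
```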

These machine learning algorithms use the speeded-up robust features (SURF) [72–75] extracted from the X-ray images. SURF is a fast and robust approach that has been popularly used in implementing computer vision algorithms [76–80]. It is computed in two main steps, namely, (a) feature extraction and (b) feature description. The feature extraction step consists of three stages, namely, (a) integral image formation, (b) Hessian matrix-based interest point detection, and (c) scale-space formation. The integral image $I_{\Sigma}(\mathbf{x})$ at location $\mathbf{x} = (x, y)^{T}$ represents the sum of all pixels of the input image $I$ within the rectangular region formed by the origin and $\mathbf{x}$:

$$I_{\Sigma}(\mathbf{x}) = \sum_{i=0}^{i \le x} \sum_{j=0}^{j \le y} I(i, j)$$
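
As a brief illustration of why the integral image expedites the SURF computation, the following NumPy sketch (function names are ours, purely illustrative) builds an integral image with two cumulative sums and then evaluates the sum over any rectangle in just four array lookups, regardless of the rectangle's size.

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Cumulative sum over rows and columns: ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii: np.ndarray, top: int, left: int, bottom: int, right: int) -> float:
    """Sum over the inclusive rectangle [top:bottom, left:right] in four lookups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total
```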

The integral image formation expedites the computation of the SURF features. SURF then uses the Hessian matrix to find the interest points in the integral image. The Hessian matrix $\mathcal{H}(\mathbf{x}, \sigma)$ at point $\mathbf{x}$ and scale $\sigma$ is defined as

$$\mathcal{H}(\mathbf{x}, \sigma) = \begin{bmatrix} L_{xx}(\mathbf{x}, \sigma) & L_{xy}(\mathbf{x}, \sigma) \\ L_{xy}(\mathbf{x}, \sigma) & L_{yy}(\mathbf{x}, \sigma) \end{bmatrix},$$

where $L_{xx}(\mathbf{x}, \sigma)$ is the convolution of the Gaussian second-order derivative $\partial^{2} g(\sigma) / \partial x^{2}$ with the image $I$ at point $\mathbf{x}$ in the $x$-direction. Similarly, $L_{xy}(\mathbf{x}, \sigma)$ is the convolution of the Gaussian second-order derivative with the image at point $\mathbf{x}$ in the $x$- and $y$-directions, and $L_{yy}(\mathbf{x}, \sigma)$ is the convolution of the Gaussian second-order derivative with the image in the $y$-direction.

To calculate the determinant of the Hessian matrix, first, the convolution is applied with the Gaussian kernel, and then, the second-order derivative is calculated. To reduce the computational cost, SURF approximates the convolution and second-order derivatives using box filters. In this work, a box filter of size $9 \times 9$ (the standard SURF choice) is used. Denoting the box-filter approximations of $L_{xx}$, $L_{xy}$, and $L_{yy}$ by $D_{xx}$, $D_{xy}$, and $D_{yy}$, the determinant of the approximated Hessian can be computed as

$$\det(\mathcal{H}_{\text{approx}}) = D_{xx} D_{yy} - (0.9 D_{xy})^{2}.$$

Then, the scale spaces are implemented by image pyramids. Traditionally, the images are repeatedly smoothed with a Gaussian function and subsampled to achieve higher-level representations in the image pyramid. SURF instead analyzes the scale space by upscaling the filter: the filter is upscaled at a constant rate by increasing the filter size and doubling the sampling intervals for the interest point extraction.

The creation of the SURF descriptor takes place in two steps, namely, (a) orientation assignment and (b) descriptor extraction. The orientation assignment makes SURF rotation invariant. To achieve this, SURF calculates the Haar wavelet responses in the $x$-direction and $y$-direction. This is done in a circular neighborhood of radius $6s$ around the key points, where $s$ is the scale at which the key points were detected, as shown in Figure 4. Then, the sum of the vertical and horizontal wavelet responses in a scanning area is calculated.

To extract the descriptor, the first step consists of constructing a square region of size $20s$ centered around the key point and oriented along the orientation mentioned above. Then, the region is split into $4 \times 4$ smaller square subregions. A few features are computed for each subregion at regularly spaced sample points. Let $d_{x}$ be the Haar wavelet response in the horizontal direction and $d_{y}$ the Haar wavelet response in the vertical direction; both $d_{x}$ and $d_{y}$ are weighted by a Gaussian centered at the key point. The wavelet responses $d_{x}$ and $d_{y}$ are summed up over each subregion, and a feature vector is formed. To cope with intensity changes, the absolute sums $\sum |d_{x}|$ and $\sum |d_{y}|$ are also calculated. Hence, each subregion has a four-dimensional descriptor vector, $\mathbf{v} = \left( \sum d_{x}, \sum d_{y}, \sum |d_{x}|, \sum |d_{y}| \right)$. This results in a descriptor vector of length $4 \times 4 \times 4 = 64$ over all subregions. To further reduce the computation cost, eighty percent (80%) of the strongest features were selected from the X-ray images, and the rest were discarded. Then, a 500-word visual vocabulary is formed by using a $k$-means clustering algorithm. The encoded visual word occurrences for the X-ray images of a COVID-19 patient and a pneumonia patient are shown in Figure 5. This figure demonstrates that the visual word occurrences for COVID-19 and pneumonia patients are distinctly different. Finally, the feature vectors are formed for the chest X-ray images, and their dimensions are reduced further by principal component analysis (PCA) retaining 95% of the variance. The computation of the steps mentioned above is illustrated in Figure 6. Once the feature vectors are formed, they are applied to the machine learning algorithms for classification. The detailed steps for the classification of the X-ray images by using machine learning algorithms are illustrated in Algorithm 2.
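
Before turning to Algorithm 2, the following NumPy sketch shows how the 64-dimensional descriptor is assembled from the subregion sums. The $20 \times 20$ grid of precomputed, Gaussian-weighted Haar responses is an assumption consistent with the standard SURF formulation, not necessarily the exact implementation used here.

```python
import numpy as np

def surf_descriptor(dx: np.ndarray, dy: np.ndarray) -> np.ndarray:
    """Assemble a 64-D SURF descriptor from Haar wavelet responses.

    dx, dy: 20x20 grids of Haar responses sampled in the oriented square
    region around a key point (assumed already Gaussian-weighted). Each of
    the 4x4 subregions contributes (sum dx, sum dy, sum |dx|, sum |dy|).
    """
    v = []
    for i in range(0, 20, 5):          # 4 subregion rows of 5x5 samples
        for j in range(0, 20, 5):      # 4 subregion columns
            sx, sy = dx[i:i + 5, j:j + 5], dy[i:i + 5, j:j + 5]
            v.extend([sx.sum(), sy.sum(), np.abs(sx).sum(), np.abs(sy).sum()])
    return np.asarray(v)               # 4 x 4 subregions x 4 values = 64
```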

1: Load the images for training
2: Load the images for testing
3: Set the grid step size  /* dimension [8, 8] */
4: Set the block width  /* dimensions [32, 64, 96, 128] */
5: Extract the features using the SURF algorithm  /* 10240 features */
6: Apply the k-means clustering algorithm  /* number of clusters = 500 */
7: Select the 80% strongest features  /* 8192 features */
8: Compute the histogram
9: Form the dictionary  /* 500 words */
10: Split the images into training and validation sets in the ratio 7:3
11: Set the maximum number of epochs, N
12: Set the counter, i = 1
13: do while i ≤ N
14:  Select the hyperparameter values (e.g., learning rate, batch size)
15:  Train the machine learning algorithm using the training dataset
16:  Validate the results of the machine learning algorithms
17: end
18: Choose the best candidate, typically the one with the minimum validation error rate
19: Generate the prediction scores on the testing samples
20: Use the prediction scores to classify the samples as COVID-19 or pneumonia
21: Test the performance of the algorithms using the test samples
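
For readers who prefer code to pseudocode, the following Python sketch mirrors the overall flow of Algorithm 2; the investigation itself used MATLAB's toolbox implementations. Note that SURF is patent-encumbered and only available in OpenCV builds with the nonfree xfeatures2d module enabled; all names and parameter choices below are illustrative assumptions.

```python
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def extract_surf(images):
    """Detect key points and compute 64-D SURF descriptors per image."""
    surf = cv2.xfeatures2d.SURF_create()  # needs opencv-contrib with nonfree
    descs = []
    for img in images:
        _, d = surf.detectAndCompute(img, None)
        descs.append(d if d is not None else np.empty((0, 64)))
    return descs

def bovw_histograms(descs, kmeans):
    """Encode each image as a normalized visual-word occurrence histogram."""
    hists = []
    for d in descs:
        words = kmeans.predict(d) if len(d) else np.array([], dtype=int)
        h, _ = np.histogram(words, bins=np.arange(kmeans.n_clusters + 1))
        hists.append(h / max(h.sum(), 1))
    return np.vstack(hists)

# train_imgs / train_labels are assumed grayscale arrays and class labels.
# descs = extract_surf(train_imgs)
# kmeans = KMeans(n_clusters=500).fit(np.vstack(descs))  # 500-word vocabulary
# X = bovw_histograms(descs, kmeans)
# X = PCA(n_components=0.95).fit_transform(X)            # keep 95% variance
# clf = SVC(kernel="linear").fit(X, train_labels)        # linear SVM classifier
```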

4. Results and Discussion

As stated earlier, the proposed algorithms discriminate the X-ray images of COVID-19 patients from those of pneumonia patients. The performances of the proposed systems are evaluated with the commonly accepted measures of accuracy, precision, recall/sensitivity, and F1-score, as described in the following equations [81, 82]. The evaluation parameters used in the equations are as follows: (a) TP (true positive): the X-ray image belongs to COVID-19, and the algorithm correctly diagnoses it as COVID-19; (b) TN (true negative): the X-ray image belongs to pneumonia, and the algorithm correctly diagnoses it as pneumonia; (c) FP (false positive): the X-ray image belongs to pneumonia, but the algorithm wrongly diagnoses it as COVID-19; and (d) FN (false negative): the X-ray image belongs to COVID-19, but the algorithm wrongly diagnoses it as pneumonia. The performance measures investigated are defined as follows.

Accuracy is the ratio of the correctly predicted observations to the total observations. It is defined by

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

Precision/positive predictive value (PPV) is the ratio of correctly predicted positive observations to the total predicted positive observations. It is defined by

$$\text{Precision} = \frac{TP}{TP + FP}$$

Recall/true positive rate (TPR) is the ratio of correctly predicted positive observations to all observations in the actual class. It is defined by

$$\text{Recall} = \frac{TP}{TP + FN}$$

The false detection rate (FDR) is the expected ratio of false positive observations to the total number of positive predictions. It is defined by

$$\text{FDR} = \frac{FP}{FP + TP}$$

The false negative rate (FNR) is the test’s probability of missing a true positive. It is defined as

$$\text{FNR} = \frac{FN}{FN + TP}$$

F1-score is the weighted average of precision and recall. Therefore, this score takes both false positives and false negatives into account. F1-score is defined by

$$\text{F1-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$

The geometric mean, G-mean, reveals the combined contribution of sensitivity and specificity. It is formulated as

$$\text{G-mean} = \sqrt{\text{Sensitivity} \times \text{Specificity}}, \quad \text{where } \text{Specificity} = \frac{TN}{TN + FP}$$

The Matthews correlation coefficient (MCC) takes all four evaluation parameters into account. It is defined by

$$\text{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}$$
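
The following small Python helper is a direct transcription of the formulas above (the function and key names are ours, not from the paper's code) and computes all eight measures from the four confusion-matrix counts.

```python
import math

def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the measures defined above; assumes no zero denominators."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # TPR / sensitivity
    specificity = tn / (tn + fp)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "recall": recall,
        "fdr": fp / (fp + tp),
        "fnr": fn / (fn + tp),
        "f1": 2 * precision * recall / (precision + recall),
        "g_mean": math.sqrt(recall * specificity),
        "mcc": (tp * tn - fp * fn) / math.sqrt(
            (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
    }
```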

The performance of the system model with VGG16 was optimized with the parameters listed in Table 2. The training and validation losses are plotted in Figure 7. The performance metrics are summarized in Table 3, which shows that the system model with VGG16 achieves the optimum value of 1.0 for precision, recall, F1-score, G-mean, and MCC. The table also shows a log loss of only 0.0373. The receiver operating characteristic (ROC) curve is shown in Figure 8. The ROC demonstrates that the AUC is 1.0, indicating that the VGG16 could correctly distinguish the X-ray images of the COVID-19 patients from those of the pneumonia patients with an accuracy of 100%.

The simulations were repeated with the encoded SURF features employing seven top-performing machine learning algorithms as mentioned in the previous section. The performances of these machine learning algorithms are listed in Tables 4 and 5. These tables list the performances in terms of TPR, FNR, PPV, FDR, and F1-score. Table 4 shows that the coarse Gaussian SVM provides the highest TPR of 98.8%, the lowest FNR of 1.2%, and the highest F1-score of 98.15% while detecting COVID-19 patients. The linear SVM provides the highest PPV of 98.70% and the lowest FDR of 1.3%.

Table 5 presents the performances of the machine learning algorithms for detecting pneumonia. This table shows that the highest TPR of 98.8% is achieved by the linear SVM and cosine kNN algorithms. The linear SVM also provides the highest F1-score of 98.14%. The coarse Gaussian SVM algorithm provides the highest PPV of 98.70%. The performance comparison of the machine learning models is presented in Table 6, which shows that the linear SVM and coarse Gaussian SVM provide the highest accuracy of 98.10%, with the coarse Gaussian SVM demonstrating the fastest prediction rate (2.04 ms/prediction). The corresponding AUCs are also displayed for each machine learning algorithm. Based on the simulation results presented in Tables 4–6, it can be concluded that the SVM provides the best performance in terms of accuracy, prediction rate, and AUC.

Finally, the proposed methods’ performances are compared with those of recently published works. The comparison results are listed in Table 7. This table shows that the highest accuracy of 100% was achieved with the pretrained VGG16, which is comparable to the works presented in [59–61, 63]. The table also shows that two machine learning algorithms, the linear SVM and the coarse Gaussian SVM, achieve an accuracy of 98.1% with the SURF features. This accuracy is higher than that of the algorithms presented in [47, 48, 52–54, 56, 62, 65, 67]. Although this accuracy is slightly less than that achieved with the VGG16, considering the low training time and prediction time (9.72 seconds and 2.04 milliseconds per prediction, respectively), the system with the SURF features remains a viable option for differentiating COVID-19 patients from pneumonia patients using chest X-ray images and machine learning algorithms.

5. Conclusion

This paper presented a noninvasive, automated detection algorithm to objectively identify COVID-19 patients from viral pneumonia patients based on chest X-ray images. A deep pretrained CNN model, VGG16, and several machine learning algorithms were investigated. Despite the visual similarities between the X-ray images of pneumonia and COVID-19 patients, it is shown that carefully selected machine learning algorithms, combined with discriminative features extracted from the images, could successfully discriminate them. Among the investigated machine learning algorithms, the SVM differentiated the X-ray images of COVID-19 patients from those of viral pneumonia patients with the highest accuracy of 98.1%. It was also shown that the deep pretrained VGG16 achieved an accuracy of 100%, even with the limited data samples.

In the future, other pulmonary diseases like asthma, bacterial pneumonia, and lung opacity that can strongly correlate with COVID-19 will be considered to optimize the proposed algorithms. The proposed model can be easily extended to a multiclass classifier for discriminating COVID-19 from other pulmonary diseases mentioned above. Also, the variable morphology of airways and lung dimensions that can alter the diagnoses for different genders will be examined.

Data Availability

The data can be found at https://www.kaggle.com/tawsifurrahman/covid19-radiography-database.

Conflicts of Interest

The authors declare that they have no conflicts of interest.