Abstract

The coronavirus disease (COVID-19) outbreak, which began in December 2019, has claimed numerous lives and affected all aspects of human life. As it spread, COVID-19 was declared a pandemic by the World Health Organization (WHO), placing tremendous strain on nearly all countries, particularly those with weak health services and delayed response times. This recently identified virus is highly contagious. Controlling its rapid spread requires early detection of infected people through comprehensive screening. Chest radiography imaging is an excellent tool for COVID-19 diagnosis and follow-up. Deep learning (DL) has been used for a variety of healthcare purposes, including diabetic retinopathy detection, image classification, and thyroid diagnosis. Because so many streams of medical images are available (e.g., X-ray, CT, and MRI), DL is a useful strategy for combating the COVID-19 outbreak. In this study, we used a benchmark chest X-ray scan (CXRS) dataset of COVID-19-infected and noninfected patients. We evaluate the results of DL-based convolutional neural network (CNN) models after preprocessing the scans and applying data augmentation. Transfer learning (TL) is used to improve the algorithm’s classification performance on chest radiography images. Finally, the features of the attention and feature interweave modules are combined to create a more accurate feature map. The architecture is trained on COVID-19 CXRS using a CNN, and the newly generated feature layer is applied to the TL architecture. The experimental results show that training enhances the CNN + TL algorithm’s ability to classify CXRS, with an overall detection accuracy of 99.3%, precision of 0.97, recall of 0.98, F-measure of 0.98, and area under the receiver operating characteristic (ROC) curve of 0.97. The results show that further training improves the classification architecture’s performance to 99.3%.

1. Introduction

COVID-19 is a recently identified coronavirus infectious disease [1]. The first instances of COVID-19 appeared at the end of 2019, when a suspicious disease was identified in Wuhan, China. The source of infection was soon confirmed to be a novel coronavirus, and the outbreak has since expanded to several countries worldwide and become a pandemic [2, 3]. Numerous forums have reported COVID-19 details and have provided their users with precautionary measures to avoid transmission of the infection, such as wearing masks, washing hands, and maintaining distance from others [4, 5].

Around the globe, COVID-19 is causing an increase in reported cases and mortality rates. There is a scarcity of data on the effect of cardiovascular complications on fatal outcomes [6]. COVID-19 is a highly contagious infection that spreads primarily through contact with an infected person’s respiratory droplets. Ingestion or inhalation of these droplets may allow the virus to enter the human body [7]. COVID-19 had been recorded in 217 countries and territories across the world as of October 2021, with approximately 235 million confirmed cases and 4.8 million deaths. Figures 1 and 2 depict the total number of confirmed cases and deaths from December 31, 2020, to October 6, 2021, respectively. COVID-19 has been confirmed in 235,673,032 people worldwide, with 4,814,651 deaths, according to WHO data as of October 6, 2021. A total of 6,188,903,420 vaccine doses had been administered as of October 2, 2021 [8].

Early detection and isolation of potentially infectious subjects is a critical step in fighting COVID-19. Reverse transcription-polymerase chain reaction (RT-PCR) with gene sequencing of respiratory or blood samples is the gold-standard screening method for identifying the coronavirus [9]. Nevertheless, because the test requires scarce equipment and adequate facilities and is time-consuming, laborious, and prone to error, most patients cannot be diagnosed immediately in the current health emergency [10]. As a result, the risk of infecting the healthy population increases. Consequently, health professionals have explored faster and more accurate screening processes, such as chest radiography (chest X-ray) or computed tomography (CT) imaging, which can reveal characteristic features associated with the COVID-19 virus [11]. Patients with COVID-19 have been shown to have abnormalities on chest radiographs. The scanning method is therefore regarded as a quick screening tool for identifying suspicious patients in an epidemic region. One significant disadvantage of CT imaging is that CT scanners are not commonly available in many emerging countries.

When a patient has symptoms of COVID-19, such as cough, fever, breathlessness, or chest tightness, the most characteristic finding is so-called “ground-glass opacity,” meaning that certain parts of the lungs appear as a hazy grey rather than black, while the outlines of the blood vessels remain clearly visible. In patients with severe COVID-19, multifocal or diffuse consolidation appears in both lungs, resulting in “white lung.” Although chest X-ray is not sensitive to mild disease, it has already been shown to be useful for other coronaviruses, including severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) [12].

This fact has inspired a large number of research projects proposed and conducted during the first months of 2020. In this study, we first summarize the state-of-the-art work on DL applications for COVID-19 medical image processing. Then, we review DL and the potential it has demonstrated in healthcare over the last decade. Following that, three use cases from China, Korea, and Canada illustrate DL applications for COVID-19 medical image processing. Finally, we cover a few concerns and challenges linked to DL solutions for COVID-19 medical image processing, which are expected to spark further research into outbreak detection and early response, contributing to smart, healthy communities.

In this paper, we propose a pipeline for detecting and tracking down medical CXRS tests in images and generating an automated classification report for the COVID-19 patient. In this approach, features are extracted from the CXRS, and the appropriate features are then selected with the help of a CNN algorithm [13]. The selected features are then used to classify the CXRS of COVID-19 patients by adapting a CNN with and without the TL model. Finally, the features of the attention and feature interweave modules are combined to build a better feature map. TL is used to improve the algorithm’s classification performance on chest radiography images. The model is built and evaluated on COVID-19-infected CXRS [14] and a Kaggle-sourced open dataset of noninfected CXRS [15].

Here is a rundown of our contributions in this domain:
(1) A new DL-based method that identifies and tracks down medical chest X-ray tests in images and generates an automated classification report of COVID-19 patients.
(2) We present a detailed analysis of the efficiency of the proposed pipeline in terms of accuracy, precision, and recall.
(3) We present graphical visualizations of the confusion matrix and receiver operating characteristic (ROC) curve for the best-performing pipeline using the performance evaluation criteria.
(4) We compare the performance accuracy of the proposed pipelines by optimizing different CNN formations without TL and CNN with TL (CNN + TL).
(5) We adopt various optimizers to train the model with minimal loss, reducing overfitting and underfitting to obtain an optimized and efficient model.
(6) The proposed pipeline achieves promising improvements over classical learning methods.

The remainder of the paper is structured as follows. Section 2 discusses related work on the COVID-19 pandemic. Section 3 provides an overview of the dataset, the DL techniques, the TL domain, and the proposed method in detail. Section 4 reports the experimental design, process, and evaluation criteria as well as the outcomes and their description. Section 5 concludes the paper.

2. Related Work

The coronavirus pandemic is a highly infectious disease that can cause serious respiratory illness or even death in certain cases. There has been some research into using machine learning algorithms to combat the coronavirus, but few studies have provided a thorough treatment. De Sousa et al. [16] proposed a model called CNN-COVID for classifying the CXRS of COVID-19 patients. They organized the data into two sets: dataset I contained 217 COVID-19-infected and 1126 noninfected images, while dataset II contained 2025 COVID-19-infected and 2025 noninfected images. For model development, a CNN was used to classify the CXRS. Sheykhivand et al. [17] developed a method based on generative adversarial networks with deep LSTM networks to classify pneumonia without the use of feature extraction/selection. They used a CXRS dataset and classified the scans into two to four classes, such as infectious, respiratory, and COVID-19 classes. Singh et al. [18] presented a model for the detection of COVID-19 in CXRS using hybrid social group optimization (SGO) and a support vector machine (SVM): SGO was used for feature extraction, while the SVM was applied for classification.

For COVID-19 case detection, Fang et al. [19] conducted a study to assess the sensitivity of chest CT imaging against a viral nucleic acid detection approach using real-time polymerase chain reaction (RT-PCR). Bernheim et al. [20] examined the chest CTs of 121 symptomatic COVID-19-positive patients and found a link between the CT findings and the time between symptom onset and the CT scan (i.e., early, 0 to 2 days (36 cases); intermediate, 3 to 5 days (36 cases); and late, 6 to 12 days (25 cases)). According to the findings, 28% of early cases (10/36) had bilateral lung disease, while 76% of intermediate cases (25/33) and 88% of late cases (22/25) had lung infection on both sides. Narin et al. [21] developed a CNN-based method for detecting coronavirus pneumonia in CXR patients. ResNet50, InceptionV3, and Inception-ResNetV2 are the three pretrained networks used in that study. To test the performance of the suggested model, 50 COVID-19 CXRS and 50 normal CXRS were employed. They found that ResNet50, among others, achieved the highest binary classification accuracy of 98%. Zhang et al. [22] investigated whether CXRS can help diagnose COVID-19 viral infection. They used a ResNet50 pretrained on 102 COVID-19 cases and 102 additional pneumonia patients. Abbas et al. [23] applied their previously developed Decompose, Transfer, and Compose network (DeTraC-Net), with a pretrained ResNet-50 as the TL backbone, to classify COVID-19 CXRS against normal and SARS cases. Using a multiobjective differential evolution-based CNN, Singh et al. [24] classified chest CT images of persons with and without COVID-19.

Narin et al. [21] used a dataset of 100 CXR scans, half of which were COVID-19-infected cases, to compare three different deep CNN-based techniques (InceptionV3, ResNet50, and Inception-ResNetV2). The best results were obtained with the pretrained ResNet50, which reached a 98% accuracy rate. Al-Waisy et al. [25] presented a DL-based hybrid multimodal approach to improve COVID-19 pneumonia detection in CXRS. To encode the input image into low-dimensional vectors, Kassania et al. [26] employed CNNs as feature descriptors, which were subsequently processed by several algorithms to produce aggregated results; the results were validated on the same dataset as in [14]. Islam et al. [27] used long short-term memory (LSTM) for COVID-19 identification after extracting features with a CNN, and Garg et al. [28] presented a comparative study of various methods for detecting COVID-19 infection. Chen et al. [29] presented Residual Attention U-Net as an automated multiclass segmentation method to provide the groundwork for quantitative identification of pneumonia related to COVID-19 using CT scans.

To conclude, researchers have found that CXRS reveals critical information on COVID-19. An intelligent method can assist radiologists in detecting COVID-19 from CXRS, which could be useful in remote areas of many emerging regions. In this paper, we propose a pipeline for classifying COVID-19 infection using CXRS. Before being used as input, all CXRSs were resized to 224 × 224 pixels and balanced using data augmentation techniques. We did not use any further preprocessing stages because much other research has been conducted without them, and using a similar approach allowed us to compare our methods with those of other studies. The relevant features from the CXRS are extracted and optimized using the proposed CNN without TL and CNN + TL pipelines. Finally, the features of the attention and feature interweave modules are combined to build a better feature map. Using a variety of classifiers, the selected features were then used to train the CNN to classify the CXRS, and the newly generated feature layer was applied to the current architecture.

3. Materials and Methods

This section presents the proposed pipelines used to achieve the stated objectives. Figure 3 depicts the modules: dataset description, data preprocessing and augmentation, feature extraction, selection of relevant features, a classifier (sigmoid activation) for generating classification reports from CXRS (COVID-19 positive and COVID-19 negative), optimization, and evaluation.

3.1. Dataset Description

We used publicly available datasets created as part of a project by Cohen et al. [14]. CXRSs of MERS, SARS, ARDS, and other respiratory disorders are included in this dataset. The CXRSs in this database were collected indirectly from hospitals and clinicians via a variety of public sources. This dataset was used to obtain CXRSs from patients who were positive for or suspected of having COVID-19. In addition, Kaggle [15] was used to obtain CXRSs of healthy patients; from these, we selected 300 CXRSs so that they approximately balance the COVID-19 CXRSs. The resulting dataset contains 200 COVID-19 CXRSs and 300 normal CXRSs. The posteroanterior (PA) view turns out to be the most commonly used view; hence, we employed the COVID-19 PA-view CXRS for the proposed work. Before being used as input, all CXRSs were resized to 224 × 224 pixels and balanced. Table 1 lists the specifics of the dataset, while Table 2 shows samples of CXRS together with the corresponding COVID-19 and non-COVID-19 patient reports.
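For illustration, a minimal Python sketch of how such a two-source dataset could be assembled and resized is given below; the folder names, the use of Pillow, and the 0–1 rescaling are assumptions made for the example, not details taken from the paper.

```python
from pathlib import Path
from PIL import Image
import numpy as np

def load_scans(folder, label, size=(224, 224)):
    """Load every image in `folder`, resize it to 224 x 224, and attach a
    binary label (1 = COVID-19 positive, 0 = normal)."""
    images, labels = [], []
    for path in Path(folder).glob("*.*"):
        img = Image.open(path).convert("RGB").resize(size)
        images.append(np.asarray(img) / 255.0)
        labels.append(label)
    return images, labels

# The folder names are placeholders for wherever the Cohen et al. and Kaggle
# scans are stored locally.
covid_x, covid_y = load_scans("data/covid_pa", label=1)
normal_x, normal_y = load_scans("data/normal", label=0)
X = np.array(covid_x + normal_x)
y = np.array(covid_y + normal_y)
```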

3.2. Augmenting and Compiling the X-Rays Scans

To prevent overfitting and increase the trained model’s generalization capacity, data augmentation was used. The original image was first rescaled to 224 × 224 pixels, and then five random image areas of 128 × 128 pixels were taken from each image. Every image in the dataset was then flipped horizontally and rotated 5 degrees (clockwise and counterclockwise). From both classes (COVID-19-positive and COVID-19-negative CXRS), a total of 500 X-ray scans of 224 × 224 pixels were retrieved. To avoid producing biased prediction results, data augmentation is applied after partitioning the COVID-19-positive versus COVID-19-negative dataset into three sets (training, validation, and testing).
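A small sketch of these manual augmentation steps (resizing, random 128 × 128 patches, horizontal flip, and ±5° rotations) using Pillow is shown below; the library choice and function layout are illustrative assumptions.

```python
import random
from PIL import Image, ImageOps

def augment_scan(path, n_patches=5):
    """Rescale a scan to 224 x 224, cut five random 128 x 128 patches, and add
    a horizontally flipped copy plus +/- 5 degree rotations of the full scan."""
    img = Image.open(path).convert("RGB").resize((224, 224))
    augmented = []
    for _ in range(n_patches):
        left = random.randint(0, 224 - 128)
        top = random.randint(0, 224 - 128)
        augmented.append(img.crop((left, top, left + 128, top + 128)))
    augmented.append(ImageOps.mirror(img))   # horizontal flip
    augmented.append(img.rotate(5))          # 5 degrees counterclockwise
    augmented.append(img.rotate(-5))         # 5 degrees clockwise
    return augmented
```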

To get around the limited data, we employed the ImageDataGenerator, which creates new images for the training stage. Digital processing was used to create the new images, which are geometric transformations of the originals. Geometric manipulations such as translation, rotation, patch extraction, and reflection do not modify the attributes of the imaged object, allowing for “data augmentation.” The advantage of this method is that it improves the CNN’s capacity to generalize when trained with an augmented dataset [30]. As a result, overfitting, which occurs when a network loses its ability to generalize to new data, can be reduced. For data augmentation, the following settings were used: width shift range (0.2), rotation range (40), height shift range (0.2), zoom range (0.2), shear range (0.2), rescale (1/255), horizontal flip (True), and vertical flip (True). Following these steps, the dataset was balanced between the COVID-positive and COVID-negative classes in the training and testing sets. This database augmentation occurs at run time, when a CXRS is supplied as an input to the classifier, which aids in the best-fit training of our model. Augmentation is a type of image preprocessing in which a model is trained on a large variety of images; scaling, translation, rotation, and flipping, among other methods, can be used to increase the diversity of the images. Once the CXRSs have been augmented, they are reshaped to the input shape of 224 × 224 with a batch size of 32 and used to train on the training set. Table 3 shows the results of the augmented CXRS.
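A minimal sketch of such a generator with the settings listed above is given below, assuming a Keras ImageDataGenerator and a directory layout with one subfolder per class; reading the original “vertical shift (True)” as vertical_flip=True is our interpretation.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation settings mirroring the ones listed in the text; vertical_flip
# is an interpretation of "vertical shift (True)".
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    vertical_flip=True,
)

# New images are generated at run time as batches are requested; the directory
# layout (one subfolder per class) is an assumption about how the data is stored.
train_generator = train_datagen.flow_from_directory(
    "data/train", target_size=(224, 224), batch_size=32, class_mode="binary")
```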

3.3. Model Development

In this section, we design the architecture of the CNN and TL models, as shown in Table 4. The model is trained on the COVID-19 chest scan dataset (our input images). The convolutional and pooling layers are imported, but the “top” of the model (the fully connected layers) is left out. As can be seen in Figure 4, each layer in the feature extraction stage accepts the output of the layer before it as input, and its output is handed on to the layers after it.

The output is generated after the image has passed through a stack of convolutional, nonlinear, pooling, and fully connected layers, followed by the classification layers. To develop the model, several convolutional layers are combined with nonlinear and pooling layers. As the image passes through the network, the result of one convolution layer is used as the input for the following layer. The feature extractor contains Conv2D (128, (3 × 3)), Conv2D (64, (3 × 3)), and Conv2D (128, (3 × 3)) layers, max-pooling layers (pool_size = (2, 2)), and ReLU activation functions between them. The result of the convolution and max-pooling operations is organized into feature maps, which are two-dimensional (2D) maps. With an input image of size 224 × 224 × 3, we obtained feature maps of sizes (222, 222, 32), (220, 220, 128), (108, 108, 64), and (52, 52, 128) for the convolution operations and (110, 110, 128), (54, 54, 64), and (26, 26, 128) for the pooling operations, respectively. This happens with each convolutional layer that follows.
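To make the shape bookkeeping concrete, here is a minimal Keras sketch of a feature extractor that reproduces the feature-map sizes listed above; the leading Conv2D layer with 32 filters is inferred from the first reported shape (and from the 32 output filters mentioned in Section 3.4), while the authoritative configuration is the one in Table 4.

```python
from tensorflow.keras import layers, models

# A reconstruction consistent with the feature-map sizes reported above;
# layer order and filter counts are inferred, not copied from Table 4.
feature_extractor = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu",
                  input_shape=(224, 224, 3)),      # -> (222, 222, 32)
    layers.Conv2D(128, (3, 3), activation="relu"), # -> (220, 220, 128)
    layers.MaxPooling2D(pool_size=(2, 2)),         # -> (110, 110, 128)
    layers.Conv2D(64, (3, 3), activation="relu"),  # -> (108, 108, 64)
    layers.MaxPooling2D(pool_size=(2, 2)),         # -> (54, 54, 64)
    layers.Conv2D(128, (3, 3), activation="relu"), # -> (52, 52, 128)
    layers.MaxPooling2D(pool_size=(2, 2)),         # -> (26, 26, 128)
])
feature_extractor.summary()  # prints the feature-map shapes listed above
```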

The convolution layer is the first layer. It receives the image (a matrix of pixel values), which is read starting from the image’s top-left corner. A smaller matrix, known as a filter (or kernel), is then selected. The filter performs the convolution: it slides along the input image, multiplying its values by the underlying pixel values and summing all the products, which yields a single number for each position. Because it has only read the upper-left region of the image, the filter then moves one unit to the right and repeats the procedure. After the filter has passed over all positions, a new matrix is formed that is smaller than the original input matrix.
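The following NumPy sketch works through this sliding-window computation on a toy 5 × 5 image with a 3 × 3 filter; the averaging filter is chosen arbitrarily for illustration.

```python
import numpy as np

def conv2d_single(image, kernel):
    """Slide a kernel over an image with stride 1 and no padding,
    multiplying and summing at each position (valid cross-correlation)."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)  # one number per filter position
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5 x 5 "image"
kernel = np.ones((3, 3)) / 9.0                     # 3 x 3 averaging filter
print(conv2d_single(image, kernel).shape)          # (3, 3): smaller than the input
```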

After each convolution operation, a nonlinear layer is applied. Its activation function introduces nonlinearity; without it, the network would be insufficiently expressive and unable to model the response variable (such as a class label).

The nonlinear layer is followed by the pooling layer, which performs a downsampling operation on the image’s width and height, so the size of the representation is reduced. This means that once certain features (such as boundaries) have been recognized by the preceding convolution operation, a detailed image is no longer required for further processing, and it is compressed into a less detailed representation.
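A small NumPy sketch of 2 × 2 max pooling on a toy feature map, halving its width and height, is shown below for illustration.

```python
import numpy as np

def max_pool2d(feature_map, pool=2):
    """Downsample a 2D feature map by taking the maximum of each
    non-overlapping pool x pool window (stride = pool)."""
    h, w = feature_map.shape
    h, w = h - h % pool, w - w % pool          # drop any ragged edge
    windows = feature_map[:h, :w].reshape(h // pool, pool, w // pool, pool)
    return windows.max(axis=(1, 3))

fmap = np.array([[1, 3, 2, 0],
                 [4, 6, 1, 2],
                 [7, 2, 9, 5],
                 [1, 0, 3, 4]], dtype=float)
print(max_pool2d(fmap))   # [[6. 2.], [7. 9.]] -- width and height halved
```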

It is important to add a fully connected layer after completing the succession of convolutional, nonlinear, and pooling layers. The output of the convolutional part of the network is fed into this layer. A fully connected layer attached to the end of the network produces an N-dimensional vector, where N is the number of classes from which the model picks the predicted class.

3.4. Classification Model

Following the methodology presented in Figures 3 and 4, pretrained networks were analyzed after constructing a balanced dataset, which is critical for producing good conclusions.

Convolutional neural networks [31] were used to build this architecture. It has three groups of layers, alternating convolution layers (Conv2D), nonlinear layers (ReLU), and pooling layers (MaxPooling2D). These are followed by two densely connected layers (Dense). Consider the first convolution layer, i.e., the Conv2D layer: the value 32 indicates the number of output filters in the convolution, and the kernel size, represented by the integers (3, 3), determines the width and height of the 2D convolution window. The input shape, which describes the input array of pixels, is a core part of the first convolution layer. The following convolution layers, which are constructed in the same fashion, do not require the input shape.
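A trimmed sketch of this layout, with the input_shape argument on the first Conv2D only and two Dense layers at the end, is shown below; the hidden Dense width of 64 is an assumption, as the exact value comes from Table 4.

```python
from tensorflow.keras import layers, models

# Only the first Conv2D receives input_shape; the value 32 is the number of
# output filters and (3, 3) is the kernel size. The hidden Dense width (64)
# is assumed; the exact value is specified in Table 4. Only one of the three
# conv/ReLU/pooling groups is shown here to keep the sketch short.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(224, 224, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # COVID-19 positive vs. negative
])
```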

The use of pretrained models allows a new model to converge faster and work better on a smaller dataset by leveraging characteristics acquired on a bigger dataset [32]. The Keras [33] platform provides pretrained classifiers whose weights are derived from three-channel images, whereas the X-ray data is contained in a single channel.
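One common way to bridge this channel mismatch is to replicate the single grayscale channel three times before feeding the scan to an ImageNet-pretrained network; the sketch below illustrates this workaround, which is an assumption on our part since the text does not state how the mismatch is handled.

```python
import numpy as np

def to_three_channels(gray_scan):
    """Replicate a single-channel X-ray into three identical channels so it
    matches the (H, W, 3) input expected by ImageNet-pretrained weights.
    This is a common workaround; the paper does not state which one it uses."""
    if gray_scan.ndim == 2:
        gray_scan = gray_scan[..., np.newaxis]
    return np.repeat(gray_scan, 3, axis=-1)

xray = np.random.rand(224, 224)        # placeholder single-channel scan
print(to_three_channels(xray).shape)   # (224, 224, 3)
```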

Transfer learning [34] is adopted to fine-tune four popular pretrained DL models using the training images of the COVID CXRS dataset, thereby addressing the constrained data size. In TL, a model trained on one task is repurposed for a related task, usually with some adaptation to the new task. For instance, an image classification network trained on ImageNet (which has millions of annotated images) can be used as a starting point for learning COVID-19 identification on a smaller dataset. In the proposed work, a TL-based model, VGG16 (Visual Geometry Group) [13], is used. TL is most useful when there are not enough training samples to build a network from the ground up, as in medical image classification for rare or novel disorders; this is especially true for deep neural network models, which must train a huge number of parameters. TL allows the model parameters to start from pretrained baseline values that require only minor adjustments to make them appropriate for the new task.
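A minimal Keras sketch of this strategy is shown below, assuming a frozen ImageNet-pretrained VGG16 base with a small binary classification head; the head width and optimizer are assumptions, not values taken from the paper.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# VGG16 convolutional base pretrained on ImageNet, frozen so its parameters
# act as the baseline values described above; only the new head is trained.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(64, activation="relu"),     # head width is an assumption
    layers.Dense(1, activation="sigmoid"),   # COVID-19 positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```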

4. Experimental Results and Performance Evaluation

In this work, we used the CXRS dataset of corona patients to find the best available pretrained neural network for COVID-19 classification. All of the experiments in this study were run on a workstation using the Anaconda framework (Python version 3.6.4, https://anaconda.org/) on a 64-bit OS with 8 GB RAM and an Intel Core i5 CPU. The dataset was split into three parts, 70% training, 10% validation, and 20% testing, using k-fold cross-validation [35]. The CXRS dataset includes 200 images from COVID-19 patients and 300 images from healthy people. Table 5 shows the accuracy, training time, and loss for each formation of the CNN without TL and the CNN + TL model. It is noted that training time and loss are reduced considerably, and the testing accuracy is significantly increased. Table 6 reports the averages of precision, recall, and f-measure for the formations of the CNN and CNN + TL models.
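A sketch of one way to obtain such a 70/10/20 split with scikit-learn is given below; the arrays are placeholders standing in for the 500 preprocessed CXRS and their labels, and stratification is an assumption used to keep the class ratio in each subset.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data: indices standing in for the 500 preprocessed CXRS and a
# label vector for the 200 COVID-19 and 300 normal scans.
X = np.arange(500)
y = np.array([1] * 200 + [0] * 300)

# Two stratified calls reproduce a 70/10/20 split.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=2 / 3, stratify=y_tmp, random_state=42)

print(len(X_train), len(X_val), len(X_test))   # 350 50 100 -> 70% / 10% / 20%
```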

Evaluation criteria: the confusion matrix [36], also termed the contingency table or error matrix, is a special matrix used to visualize a classifier’s performance. The predicted values are shown by the rows, while the actual values are represented by the columns. False positive, true positive, false negative, and true negative are the categories employed in the analysis. Figure 5 shows the structure of the confusion matrix. True positive (TP) denotes that the truth is positive and the classifier predicts a positive outcome; for example, an X-ray report shows that a person is infected with corona, and the model classifies this report correctly. True negative (TN) signifies that the truth is negative and the classifier predicts a negative; for example, an X-ray report shows that a person is not infected with corona, and the classifier correctly reports this. False positive (FP) refers to a situation in which the truth is negative, yet the classifier predicts a positive outcome; for example, an X-ray report shows that a person is not infected with corona, but the classifier incorrectly reports the person as infected. False negative (FN) means that the classifier predicts a negative even though the truth is positive; for example, a person is infected with corona, yet the classifier incorrectly indicates that they are not. The confusion matrices for all the formations of CNN and CNN + TL are presented in Figure 5.
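For illustration, the following scikit-learn sketch computes the confusion-matrix entries and the derived metrics on toy labels (1 = COVID-19 positive, 0 = negative); in the actual evaluation these values would come from the held-out CXRS test set.

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

# Toy ground-truth and predicted labels, for illustration only.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp} TN={tn} FP={fp} FN={fn}")
print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("recall:   ", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("f-measure:", f1_score(y_true, y_pred))         # harmonic mean of the two
```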

Finally, the features of the attention and feature interweave modules are combined to create a more accurate feature map. For feature extraction, a deep learning architecture, VGG-16, is used along with transfer learning. The experimental results show that training enhances the CNN + TL algorithm’s ability to classify chest radiography images, with an overall detection accuracy of 99.3%.

In addition, the ROC curve [37, 38], which plots the TP rate as a function of the FP rate, is used for evaluation. The ROC graph is constructed with the TP rate on the y-axis and the FP rate on the x-axis. Figure 6 depicts the ROC curves of the proposed models. Since the ROC curve is a performance statistic for classification problems at various threshold levels, it indicates how well the classifier can differentiate between classes. The larger the area under the ROC curve, the better the model separates the COVID-19-positive class (class 0) from the COVID-19-negative class (class 1). According to the ROC analysis, the proposed CNN formations perform similarly, with CNN + TL (Figure 6(j)) having a slightly higher area under the curve (0.97) than the others, which means that this classifier detects more true positives and true negatives relative to false negatives and false positives.
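A minimal sketch of how such a ROC curve can be produced with scikit-learn and matplotlib is shown below; the labels and scores are placeholders, whereas in the pipeline the scores would come from the trained classifier’s predicted probabilities on the test set.

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

# Placeholder labels and scores; in practice y_score would come from something
# like model.predict(X_test).ravel() on the held-out CXRS test set.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.92, 0.10, 0.75, 0.40, 0.22, 0.05, 0.88, 0.60]

fpr, tpr, _ = roc_curve(y_true, y_score)   # FP rate (x-axis), TP rate (y-axis)
roc_auc = auc(fpr, tpr)

plt.plot(fpr, tpr, label=f"classifier (area = {roc_auc:.2f})")
plt.plot([0, 1], [0, 1], linestyle="--", label="chance")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```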

The major goal of this research is not to establish the differences between pretrained and freshly trained neural networks; rather, it is to provide a solution for COVID-19 testing that builds on already available and established technology. If the accuracy of the pretrained model is not satisfactory to radiologists, it may be worthwhile to investigate various untrained convolutional neural networks. Additionally, incorporating the patient’s demographics, D-dimer, respiratory rate, myosin level, leukocyte-to-lymphocyte ratio, blood sugar level, temperature, heart rate, and degree of inspiration could increase the overall accuracy rate.

The architecture is trained for COVID-19 CXRS using TL, and the newly generated feature layer is applied to the current architecture. The results show that further training improves the classification architecture’s performance to 99.3%. The CNN + TL classifier in the proposed pipeline yields a classification accuracy of 99.3%, outperforming existing state-of-the-art DL methods for binary classification.

5. Conclusion and Future Work

To conclude, tools that are quick, adaptable, efficient, and easy to use are required to identify and manage COVID-19 contagion. The current gold-standard clinical tests are time-consuming and expensive, causing testing delays. Patients with lower respiratory symptoms or suspected COVID-19 pneumonia can be screened with chest radiography, which is a widely accessible procedure. Adding computer-aided radiography can improve processing and early disease diagnosis; this is especially true during a pandemic, particularly during a surge and in places where radiologists are in short supply. In this study, we investigated and analyzed different hyperparameters to optimize a variety of DL methods (CNN with and without TL) for detecting the radiographic characteristics of COVID-19 pneumonia that are accessible today. To analyze the efficiency of the proposed model, 500 CXRS were acquired from the benchmark repositories [14, 15], with 360 used for training and 40 for validation. After testing 10 different network formations, our findings demonstrated that the CNN + TL pipeline is the best-performing DL network for the classification of COVID-19 pneumonia imaging patterns on chest radiographs. The architecture is trained for COVID-19 CXRS using TL, and the newly generated feature layer is applied to the current architecture. The results show that further training improves the classification architecture’s performance to 99.3%. Future studies can be conducted to expand the specificity of these methods in the context of various respiratory contagions. In addition, the work might be extended to include disease classification and severity assessment.

Data Availability

All data and code related to this article can be requested from the corresponding author.

Conflicts of Interest

There are no conflicts of interest among the authors regarding the publication of the manuscript.

Acknowledgments

This research was supported by Taif University Researchers Supporting Project number (TURSP-2020/254), Taif University, Taif, Saudi Arabia.