Special Issue: Machine Learning Enabled Signal Processing Techniques for Large Scale 5G and 5G Networks
Analysis of Chest X-Ray Images for the Recognition of COVID-19 Symptoms Using CNN
The 2019 coronavirus pandemic (COVID-19) struck without warning, and existing medical screening and clinical management systems were unprepared, contributing to a high fatality rate. Given the virus's ongoing evolution, reemergence remains possible, and the earlier weak preparedness would be unacceptable in such a situation. It is therefore vital to understand and rectify the flaws of past diagnostic work. RT-PCR and antigen tests, both widely used, have suffered from problems: they were either too slow or produced an excessive number of false negatives, and test kits were often in short supply. As a result, disease classification based on chest X-ray images has emerged as an alternative. However, manually managing a large variety of chest X-ray images of COVID-19 and pneumonia patients is complicated and error-prone. A practical way to improve diagnosis is therefore to apply deep learning algorithms that learn from radiographic images to predict COVID-19. We constructed our own convolutional neural network (CNN) by incorporating transfer learning from the popular ResNet, VGG, and InceptionNet models. The endeavor required assembling a sizable dataset that accurately depicts the patient population. Before being fed to the models, the images were enhanced to remove artifacts caused by noise, motion, or blurring that could impair the detection of infection; this preprocessing has a substantial impact on model accuracy. The results indicate that the VGG16 architecture, with a detection accuracy of 95.29%, is optimal for COVID-19 identification from X-ray images. Furthermore, most of the generated models outperform current state-of-the-art research in the same field.
Coronavirus disease 2019 (COVID-19) is an infection caused by a novel coronavirus originally designated 2019-nCoV. It belongs to a family of pathogens that cause respiratory infections, including severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS). The COVID-19 virus was first identified in Wuhan, Hubei, China. The virus causes respiratory illness, with dry cough and shortness of breath as frequent symptoms. No specific drug or vaccine is available, and therapies are continuously being investigated. COVID-19 is a contagious illness spread mainly by droplets formed when an infected individual coughs, sneezes, or breathes out. Before the outbreak, the infection was entirely unknown, and it is regarded as a most significant challenge due to the socioeconomic catastrophe it produces. The pathogen, which affects the upper respiratory system, passes readily from person to person, making the disease hazardous. Early detection may therefore aid in treating, isolating, and hospitalizing infected individuals. Numerous testing techniques for this virus are available, including RT-PCR, RT-LAMP, and electrochemical and optical biosensors for RNA recognition.
Presently, two methods for detecting COVID-19 in afflicted people are available: diagnostic tests (current infection) and antibody tests (past infection). Rapid detection of COVID-19 is accomplished using diagnostic techniques such as reverse transcription-polymerase chain reaction (RT-PCR) and antigen assays. Because false positives (FPs) are more prevalent in antigen testing, RT-PCR is the gold standard for illness detection. However, RT-PCR tests require extensive laboratory work, and the test's cost is a significant problem in several countries with privatized health systems. While PCR and antigen testing can offer a quick diagnosis, medical imaging of the lungs provides information on disease burden. Additionally, a faster and more accurate diagnosis of COVID-19 would help isolate infected individuals more quickly, limiting disease spread.
Apart from laboratory detection procedures, various alternative approaches for detecting COVID-19 are available. The usual medical imaging modalities for diagnosing lung illness are chest radiography (CXR) and computed tomography (CT) images [6, 7]. While CT scans are often employed in diagnosing COVID-19 [8–10], cost and radiation exposure are significant considerations. Additionally, chest CT was found to have high sensitivity for diagnosis, and X-ray images reveal visual indicators linked to COVID-19. CXR images are favored over CT imaging due to their lower radiation dose and widespread availability.
Nowadays, healthcare professionals collect and generate vast amounts of data containing critical information and signals that can be analyzed to overcome the limitations of conventional analytical processes. However, this exponential growth of medical images demands substantial effort from medical experts, a process that is highly subjective and prone to human error. An alternative is to automate the complex procedure of medical analysis by combining health data with contemporary machine learning algorithms. Automated identification techniques thus aid the diagnostic process and provide highly accurate early detection.
Computer-aided chest X-ray examination procedures are required for COVID-19 case identification from chest X-ray images. Deep learning approaches are effective in generating high-quality results while also providing extra benefits such as (1) maximizing the use of unstructured data, (2) eliminating unnecessary costs, (3) reducing feature engineering, and (4) eliminating explicit data labeling. As a result, deep learning algorithms are frequently used to extract essential features from images and categorize them automatically. Moreover, deep learning has made significant contributions to medical image analysis, achieving high classification performance with far less manual effort.
In this work, we describe a deep learning technique for detecting COVID-19 infection from chest X-ray images. To classify X-ray images as COVID-19 positive or COVID-19 negative, we propose a deep convolutional neural network (CNN) model. The technique was developed using a transfer learning strategy over a variety of pretrained dense convolutional networks, including VGG16 , VGG19 , ResNet50 , and InceptionResNet-V2 . A model capable of detecting COVID-19 from chest radiographs should benefit physicians in the triage, quantification, and follow-up of positive patients. Even though this approach does not entirely replace current testing methods, it may reduce the number of cases that need urgent testing or further review by specialists. The contributions of the work are as follows. (1) The study utilized an extensive dataset for model training and validation, resulting in a genuine depiction of the real-world patient population. (2) Fine-tuned models were developed using state-of-the-art CNNs to efficiently classify COVID-19 positive chest X-rays from normal chest X-rays; only the fully connected layers were modified, while the kernels for feature extraction remained unchanged. (3) Efficient preprocessing and enhancement techniques are proposed that improved the accuracy of the deep learning models. (4) Comparison with previous works in the same domain shows that the study outperforms the vast majority of them.
The paper is organized as follows. Section 2 summarizes previous work in the domain. Section 3 describes the datasets used to develop the model. Section 4 describes the model architecture. Section 5 details the model evaluation metrics. Results are presented in Section 6, and Section 7 is dedicated to discussion. Section 8 compares the work with related studies, and Section 9 concludes the paper.
2. Literature Review
Various CNN-based deep neural networks are frequently employed to classify medical images. Using a CNN as a feature extractor in medical image classification can avoid expensive and challenging hand-crafted feature extraction . A CNN with a shallow convolutional layer was developed for diagnosing lung illness from image patches. The testing employed 16,220 patches from 92 HRCT images, and the authors obtained a precision of 94 percent with the suggested model.
Reference  demonstrated a CNN-based approach for analyzing large chest X-ray datasets. The authors utilized the Stanford Normal Radiology Diagnostic Dataset, which comprises about 400,000 CXRs, of which 108,948 frontal-view CXRs were used for the experiments. Their model achieved an accuracy of 0.90 and a recall of 0.91.
The authors in  conducted a comparative multiclass investigation classifying CXRs into normal, bacterial pneumonia, and coronavirus cases using pretrained DCNN models, including VGG16, VGG19, InceptionResNet-V2, InceptionV3, ResNet50, DenseNet201, and MobileNetV2. The InceptionResNet-V2 model achieved an accuracy of 92.11 percent for coronavirus detection.
Reference  presented COVIDX-Net, a deep learning system built on seven DCNNs (VGG19, Xception, ResNetV2, InceptionV3, InceptionResNet-V2, DenseNet201, and MobileNetV2) for diagnosing COVID-19 from X-ray images. The VGG19 and DenseNet201 models outperformed the others with 90 percent accuracy and F1 scores of 0.89 for normal and 0.91 for COVID-19 cases.
Reference  also used deep learning to identify COVID-19 patients from a limited number of chest X-ray images. They employed pretrained ResNet50 networks, achieving an overall accuracy of 89.2%.
Reference  detected COVID-19 in chest X-ray images via a transfer learning InceptionV3 model, demonstrating that the transfer learning approach is stable and simple to extend for COVID-19 detection. Reference  correctly classified healthy individuals, COVID-19, and bacterial pneumonia using an enhanced version of the pretrained ResNet50 network. Reference  classified COVID-19, bacterial pneumonia, viral pneumonia, and normal individuals with a precision of 80.6 percent using the pretrained GoogLeNet model. Reference  used multilevel thresholding in conjunction with a support vector machine (SVM) to accurately categorize X-ray images of COVID-19-infected individuals. Reference  classified COVID-19 X-ray images with high accuracy using machine learning algorithms such as SVM, CNN, and random forest (RF). Reference  fine-tuned seven CNNs, including InceptionV3, ResNet50V2, Xception, DenseNet121, MobileNetV2, EfficientNet-B0, and EfficientNetV2, for COVID-19 detection. Additionally,  developed an optimized CNN model that can be deployed in a low-powered embedded system.
Reference , on the other hand, developed CovidGAN, an Auxiliary Classifier Generative Adversarial Network (ACGAN) model, to generate synthetic chest X-ray (CXR) images, and showed that the CovidGAN-generated synthetic images can improve the performance of CNNs for COVID-19 identification. Classification using a CNN alone achieved a precision of 85%, but with the addition of synthetic images generated by CovidGAN, performance climbed to 95%. Similar work is available in .
The most significant limitation of this prior research is the comparatively small test datasets used for classification. Furthermore, no consideration was given to class imbalance or to an accurate depiction of the patient population, and most works trained models on raw medical images without any preprocessing. Medical images are frequently subject to artifacts caused by noise, motion, or blurring, all of which can impair disease detection, so preprocessing and enhancement of images are critical steps before applying machine learning or deep learning models. The current work addresses these deficiencies and proposes a more effective solution.
3. Materials and Methods
In response to the rapid outbreak of the COVID-19 pandemic and the need for efficient and early diagnosis, several public open-source datasets of chest X-ray and computed tomography (CT) images have become available. We used the COVIDx chest X-ray benchmark dataset, available online at . Its data sources are the COVID-19 X-ray images , COVID-19 chest X-ray dataset , Actualmed COVID-19 chest X-ray dataset , Kaggle COVID-19 radiography database version 3 [38, 39], chest X-ray8 dataset  originally acquired from the National Institutes of Health (NIH) , RSNA International COVID-19 Open Radiology Database (RICORD) , BIMCV-COVID19+ dataset , and the Stony Brook University COVID-19 positive case dataset . The databases used in this work are summarized in Table 1. Our dataset contains 30,882 chest X-ray images: 14,192 negative (non-COVID) and 16,690 positive COVID-19 cases; Figure 1 shows example images of each class. The data were acquired from 17,026 patients. The distribution of positive and negative chest X-ray images in the dataset is shown in Figure 2.
Transfer learning is widely used in image classification problems: rather than learning from scratch, pretrained deep models trained on other enormous datasets are fine-tuned on the dataset at hand. In our approach, we applied some of the most popular transfer learning models available in Python's Keras library, namely VGG16, VGG19, ResNet-50, and InceptionResNet-V2, all pretrained on over a million images from the ImageNet database. Before feeding images to the models, preprocessing techniques were applied to enhance performance and improve the quality of the input. Figure 3 shows the pipeline of our approach.
Medical images are usually exposed to artifacts due to noise, motion, or blurring that can impair disease detection. Hence, image preprocessing and enhancement are essential steps before applying any machine learning or deep learning models. Image preprocessing aims to enhance image quality by suppressing distortion and enhancing image features. The following preprocessing steps were applied in our approach.
(1) Noise removal
Salt-and-pepper, speckle, Gaussian, and Poisson noise types are the most common in medical images. Denoising algorithms such as median, Gaussian, and Wiener filters have proved effective against these types of noise. In our approach, we used Gaussian smoothing by applying the GaussianBlur method in the OpenCV library with a fixed kernel size.
(2) Morphology filter
Morphological operations are based on erosion and dilation. While erosion removes white noise at object boundaries, dilation increases the area again. Erosion followed by dilation is known as opening, and the reverse is known as closing. We applied opening and closing operations with a fixed kernel size to remove any remaining noise and close small dots where they exist.
(3) Contrast enhancement
Contrast limited adaptive histogram equalization (CLAHE) is very powerful for adjusting image contrast; it improves the visibility of foggy image regions, resulting in better image quality and enhanced detail. We applied the CLAHE filter to our image dataset using OpenCV's createCLAHE method with a clip limit of 4 and a fixed tile grid size.
The above preprocessing steps are applied sequentially to the images: each image is converted to grayscale for filtering and then back to RGB, all during image flow to the ImageDataGenerator, along with rescaling pixel values from 0–255 to 0–1 and resizing to the networks' default input size. Figure 4 shows an example image before and after applying the preprocessing method.
The dataset is then split with a 60-20-20% ratio into train, test, and validation sets, respectively; the distribution is summarized in Table 2.
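A minimal sketch of the 60-20-20 split, using the 30,882-image dataset size from above. Integer rounding here may differ by a few images from the counts in Table 2, and patient-level grouping (keeping one patient's images in a single split) is a refinement the paper does not detail, so it is omitted:

```python
import numpy as np

def split_indices(n, ratios=(0.6, 0.2, 0.2), seed=42):
    """Shuffle n sample indices and split them into train/test/validation
    according to the given ratios (60-20-20 by default, as in the paper)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(n * ratios[0])
    n_test = int(n * ratios[1])
    return idx[:n_train], idx[n_train:n_train + n_test], idx[n_train + n_test:]

train, test, val = split_indices(30882)  # dataset size from Section 3
```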
4. Model Architecture
As stated previously, we fine-tuned four pretrained models on the dataset; each is discussed in detail in this section.
4.1. VGG16 and VGG19
VGG16 was first launched in 2014 and won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) . The VGG architecture generally consists of five blocks of small 3×3 convolutional filters, each followed by a ReLU activation function with same padding, 2×2 max-pooling layers with a stride of 2, and three final fully connected layers. The VGG16 and VGG19 architectures are the same except that VGG19 has three more convolutional layers. We used the two architectures implemented in Python's Keras library, removing the three top fully connected layers via the include_top parameter and adding a global average pooling layer, a dropout layer with rate 0.2, and a softmax dense output classifier layer (see Figure 5). VGG16 was trained with the Adam optimizer, with all layers frozen except the last two, which were fine-tuned. The same setup was used for VGG19 but with a learning rate of 0.01.
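The VGG16 head construction described above can be sketched in Keras as follows. This is a sketch, not the authors' exact code: `weights=None` is used here to avoid downloading ImageNet weights (the paper uses ImageNet pretraining), and the Adam learning rate is left at its default because the paper does not state the VGG16 value.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_vgg16_classifier(num_classes=2, input_shape=(224, 224, 3)):
    """VGG16 base with the top fully connected layers removed, plus global
    average pooling, 0.2 dropout, and a softmax classifier, as in the paper.
    The paper uses weights="imagenet"; weights=None avoids the download."""
    base = tf.keras.applications.VGG16(
        weights=None, include_top=False, input_shape=input_shape)
    # Freeze the convolutional base except the last two layers (fine-tuned).
    for layer in base.layers[:-2]:
        layer.trainable = False
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.2),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```

The same head (pooling, dropout, softmax) is reused for the other three backbones, with only the base network and optimizer settings swapped.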
4.2. ResNet-50
The deep residual network (ResNet) won ILSVRC in 2015 by introducing the skip connection concept , which adds a shortcut directly connecting every two layers. This approach is proven to help overcome the vanishing gradient problem that appears as networks grow deeper. We used the 50-layer ResNet version in our experiment, which consists of five convolution blocks. The first block consists of a convolutional layer with a 7×7 kernel, followed by a 3×3 max-pooling layer with a stride of 2. The second block contains three convolutional layers, the first and third with 1×1 kernels and the second with a 3×3 kernel; these three layers are repeated three times, giving nine convolutional layers. Next is the third block, whose three convolutional layers are repeated four times, giving twelve layers. The ResNet-50 architecture continues as depicted in Figure 6, removing the top layers and adding global average pooling, dropout, and dense layers. The model was trained using the RMSprop optimizer and a learning rate of 0.0001.
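The skip-connection mechanism can be illustrated with a toy numpy block: the output is relu(F(x) + x), so the identity path carries the signal (and its gradient) even when the learned transform F contributes little. Real ResNet blocks use the 1×1/3×3/1×1 convolutions with batch normalization described above; the linear F here is purely illustrative.

```python
import numpy as np

def residual_block(x, weight, bias):
    """Toy skip connection: relu(F(x) + x) with a linear F.
    The identity shortcut lets gradients bypass F, the mechanism
    ResNet uses to fight vanishing gradients in deep networks."""
    fx = x @ weight + bias        # learned transform F(x)
    return np.maximum(fx + x, 0)  # add the identity shortcut, then ReLU

x = np.array([1.0, -2.0, 3.0])
# With a zero-initialized transform, the block reduces to relu(x):
out = residual_block(x, np.zeros((3, 3)), np.zeros(3))
```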
4.3. InceptionResNet-V2
The combination of the Inception architecture and the residual connections from ResNet produced the InceptionResNet-V2 network (its architecture is shown in Figure 7), which was found to accelerate the training of Inception networks  while achieving high performance at low computational cost. It consists of a stem block (Figure 8) that performs early convolution and pooling before the Inception modules, three Inception modules named A, B, and C, and two reduction blocks used to change the grid's width and height. The detailed structure of each Inception module block is described in Figure 9. In our experiment, we used the InceptionResNet-V2 architecture in Keras with a classifier layer setup similar to the previous three models, the RMSprop optimizer, and a learning rate of 0.0001.
5. Model Evaluation Metrics
Apart from the confusion matrices, we computed evaluation measures such as accuracy, precision, recall (sensitivity), F1, mean intersection over union (IoU), and dice coefficient scores to evaluate the performance of the proposed models on unseen test data. The accuracy score in (1) is the number of correct predictions over the total count of predictions; the precision metric (2) computes the ratio of true positives to all positive predictions; and the recall (3) calculates the percentage of true positives over the ground-truth positives. F1 (4) combines precision and recall into one metric score. In addition, we computed the IoU score (5), also known as the Jaccard index, which calculates the percentage of overlap between the ground-truth and predicted labels, and the dice coefficient (6), which is very similar and positively correlated to the IoU; both range from 0 (no overlap) to 1 (perfect overlap).
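In terms of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), equations (1)–(6) are the standard definitions consistent with the descriptions above:

```latex
\begin{align}
\text{Accuracy} &= \frac{TP + TN}{TP + TN + FP + FN} \tag{1}\\
\text{Precision} &= \frac{TP}{TP + FP} \tag{2}\\
\text{Recall} &= \frac{TP}{TP + FN} \tag{3}\\
F1 &= 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}} \tag{4}\\
\text{IoU} &= \frac{TP}{TP + FP + FN} \tag{5}\\
\text{Dice} &= \frac{2\,TP}{2\,TP + FP + FN} \tag{6}
\end{align}
```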
6. Results and Discussion
The four model architectures were trained for 20 epochs with an early stopping callback using a patience of six epochs and a minimum delta of 0.01. Figure 10 summarizes the training and validation accuracy and loss curves for each model. The training curve shows how well the model learned over the epochs on the training set; the validation curve tracks performance on unseen data, indicating how well the model generalizes. The VGG16 and VGG19 models stopped early after 14 and 10 epochs, respectively. Their curves show loss steadily decreasing for both training and validation sets, with the VGG16 loss curve especially smooth and free of oscillations, while accuracy increases. The ResNet-50 model stopped early after eight epochs, with one or two small spikes visible in its curves. The InceptionResNet-V2 model stopped after ten epochs; its curves are stable and smooth but show a very slow convergence rate.
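The training configuration above maps onto a standard Keras callback. The monitored quantity and `restore_best_weights` are assumptions, since the paper states only the patience of 6 and minimum delta of 0.01:

```python
import tensorflow as tf

# Early stopping as described: 20-epoch budget, patience of 6 epochs,
# minimum delta of 0.01. Monitoring validation loss and restoring the
# best weights are assumptions not stated in the paper.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=6, min_delta=0.01,
    restore_best_weights=True)

# Usage: model.fit(train_ds, validation_data=val_ds, epochs=20,
#                  callbacks=[early_stop])
```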
As mentioned earlier in Table 2, a test set of 6177 images (3338 positive COVID-19 cases and 2839 negative cases) is reserved for testing and evaluating the generalization performance of the models. Figure 11 shows the confusion matrices of the four models, indicating the number of true-positive (TP), true-negative (TN), false-positive (FP), and false-negative (FN) predictions. Table 3 compares the experimental results using the evaluation metrics stated previously. The VGG16 model achieved an accuracy of 95.3%, an F1 score of 95.26%, and a dice similarity coefficient of 0.953. Close behind is the VGG19 model with an accuracy of 94.5%. The ResNet-50 model achieved a 91.97% F1 score and 92.02% accuracy, while InceptionResNet-V2 achieved an accuracy of 88.4%, which is expected given its very slow learning.
As can be concluded from Table 3, and as expected from the accuracy and loss curves, the VGG16 and VGG19 models attained the best performance, with the highest evaluation metrics on the test set. Their confusion matrices also show the fewest false positives and false negatives: the VGG16 model has 140 false positives and 151 false negatives, while the VGG19 model produced 223 false-positive and 116 false-negative predictions. This insight gives the VGG19 model an advantage over the VGG16 model in our experiment, since minimizing false-negative predictions is a critical matter in healthcare and medical applications, where a missed case leads to delayed diagnosis and treatment. In the next section, a comparative analysis is performed against related work on COVID-19 detection from chest X-ray or CT scan images using different approaches.
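As a sanity check, plugging the VGG16 confusion-matrix counts above (140 FP and 151 FN on the 6177-image test set with 3338 positives and 2839 negatives) into the metric definitions of Section 5 reproduces the reported 95.29% accuracy. The per-class F1 computed this way differs slightly from the 95.26% in Table 3, which presumably averages over both classes:

```python
# VGG16 test-set confusion matrix, derived from the counts in the text:
# 3338 positives with 151 missed, 2839 negatives with 140 misclassified.
fn, fp = 151, 140
tp = 3338 - fn   # true positives
tn = 2839 - fp   # true negatives

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)
iou       = tp / (tp + fp + fn)
dice      = 2 * tp / (2 * tp + fp + fn)

print(f"accuracy={accuracy:.4f}")  # accuracy=0.9529, matching the reported 95.29%
```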
8. Related Work Comparison
Table 4 summarizes the performance of our approach compared to similar studies. Wang et al.  used the same benchmark dataset as our study, applying COVID-Net, the first open-source deep convolutional neural network implemented for COVID-19 detection from chest X-ray images. COVID-Net achieved 93.3% accuracy, while VGG19 and ResNet-50 models on the same data achieved 83% and 90.6% accuracies, respectively. Horry et al.  classified COVID-19 across the three most used medical imaging modalities (chest X-ray, ultrasound, and CT scan images) using transfer learning, obtaining accuracies of 79%, 87%, 73%, and 75% for the VGG16, VGG19, InceptionResNet-V2, and ResNet-50 models, respectively; preprocessing and image-enhancement techniques exhibited a distinct advantage under the same transfer learning paradigm. In addition, Oyelade et al.  proposed a CNN framework called CovFrameNet to classify and detect COVID-19 disease from chest X-ray images; although their model achieved an accuracy of 99.9%, it reached only 85% precision and recall and a 90% F1 score due to class imbalance in the dataset used. Another study, by Ahmed et al. , introduced an Internet of Things- (IoT-) based framework for early detection of COVID-19 using a faster region-based convolutional neural network (Faster R-CNN), achieving 98% accuracy and recalls of 98% and 97% for negative and positive images, respectively. Comparing the findings of these investigations, only Ahmed et al. , who employed the Faster R-CNN, reported a model that outperformed ours, giving our technique an advantage over previous COVID-19 detection studies, especially those that used transfer learning models.
9. Conclusion
As a follow-up to this research, we aim to broaden our investigation to include real-world datasets comprising a variety of chest infections brought on by COVID-19 (multiclass classification). In addition, we may work on developing models that are both more lightweight and highly accurate, for use in portable devices. Finally, because the study focuses on medical data, which is of crucial value, quantifying the uncertainty associated with each model prediction would also be an asset.
More than two years after the COVID-19 outbreak, an early and accurate diagnosis is still necessary. This work implemented and evaluated four pretrained models, VGG16, VGG19, ResNet-50, and InceptionResNet-V2, for COVID-19 detection from chest X-ray images, an inexpensive, fast, and widely available test that can potentially be used in COVID-19 diagnosis. We applied our proposed method to a large benchmark dataset collected from several open sources of chest X-ray images. Our approach examined the significance of image preprocessing and enhancement techniques such as smoothing, denoising, and contrast equalization for improving model performance, particularly on this complex dataset compiled from multiple sources; this could be useful for other researchers who wish to utilize the dataset in their own investigations. Our results demonstrate the power of transfer learning-based methodologies in addressing such problems with satisfying performance. The VGG architecture proved effective throughout the experiments for classifying normal and COVID-19 chest X-ray images, with up to 95.3% accuracy, precision, and recall. Overall, the results of the four models were very promising and show that pretrained transfer learning models perform very well at detecting disease from chest X-ray images.
Data Availability
The dataset used in this study is available from the COVID-Net Open Initiative: https://alexswong.github.io/COVID-Net/ (accessed Nov. 11, 2021).
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
Acknowledgments
The authors extend their appreciation to the Researchers Supporting Project number (RSP-2021/384), King Saud University, Riyadh, Saudi Arabia.
Z. A. Memish, S. Perlman, M. D. Van Kerkhove, and A. Zumla, “Middle East respiratory syndrome,” The Lancet, vol. 395, no. 10229, pp. 1063–1077, 2020.View at: Publisher Site | Google Scholar
N. Chen, M. Zhou, X. Dong et al., “Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study,” The Lancet, vol. 395, no. 10223, pp. 507–513, 2020.View at: Publisher Site | Google Scholar
“Home,Who.int,” 2020, https://www.who.int.View at: Google Scholar
E. Shabani, S. Dowlatshahi, and M. J. Abdekhodaie, “Laboratory detection methods for the human coronaviruses,” European Journal of Clinical Microbiology & Infectious Diseases, vol. 40, no. 2, pp. 225–246, 2021.View at: Google Scholar
W. Wang, Y. Xu, R. Gao et al., “Detection of SARS-CoV-2 in different types of clinical specimens"""",” JAMA, vol. 323, no. 18, pp. 1843-1844, 2020.View at: Publisher Site | Google Scholar
A. K. Jaiswal, P. Tiwari, S. Kumar, D. Gupta, A. Khanna, and J. J. P. C. Rodrigues, “Identifying pneumonia in chest X-rays: a deep learning approach,” Measurement, vol. 145, pp. 511–518, 2019.View at: Publisher Site | Google Scholar
M. Annarumma, S. J. Withey, R. J. Bakewell, E. Pesce, V. Goh, and G. Montana, “Automated triaging of adult chest radiographs with deep artificial neural networks",",” Radiology, vol. 291, no. 1, pp. 196–202, 2019.View at: Publisher Site | Google Scholar
C. Zheng, X. Deng, Q. Fu et al., “Deep learning-based detection for COVID-19 from chest CT using weak label,” MedRxiv, 2020.View at: Google Scholar
X. Wang, X. Deng, Q. Fu et al., “A weakly-supervised framework for COVID-19 classification and lesion localization from chest CT",",” IEEE Transactions on Medical Imaging, vol. 39, no. 8, pp. 2615–2625, 2020.View at: Publisher Site | Google Scholar
H. Ko, H. Chung, W. S. Kang et al., “COVID-19 pneumonia diagnosis using a simple 2D deep learning framework with a single chest CT image: model development and validation,” Journal of Medical Internet Research, vol. 22, no. 6, article e19569, 2020.View at: Publisher Site | Google Scholar
M. Garin, D. F. Carballo, and R. Montet, “High discordance of chest x-ray and CT for detection of pulmonary opacities in ED patients: implications for diagnosing pneumonia,” American Journal of Respiratory and Critical Care Medicine, vol. 31, no. 2, 2013.View at: Google Scholar
T. Ai, Z. Yang, H. Hou et al., “Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases,” Radiology, vol. 296, no. 2, pp. E32–E40, 2020.View at: Publisher Site | Google Scholar
J. P. Kanne, B. P. Little, J. H. Chung, B. M. Elicker, and L. H. Ketai, “Essentials for radiologists on COVID-19: an update—Radiology scientific expert panel,” Radiology, vol. 296, no. 2, pp. E113–E114, 2020.View at: Publisher Site | Google Scholar
S. Minaee, R. Kafieh, M. Sonka, S. Yazdani, and G. J. Soufi, “Deep-COVID: predicting COVID-19 from chest X-ray images using deep transfer learning,” Medical Image Analysis, vol. 65, article 101794, 2020.View at: Publisher Site | Google Scholar
G. Jain, D. Mittal, D. Thakur, and M. K. Mittal, “A deep learning approach to detect COVID-19 coronavirus with X-ray images,” Biocybernetics and Biomedical Engineering, vol. 40, no. 4, pp. 1391–1405, 2020.View at: Publisher Site | Google Scholar
K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” 2014, https://arxiv.org/abs/1409.1556.View at: Google Scholar
K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, Las Vegas, NV, USA, 2016.View at: Publisher Site | Google Scholar
C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, inception-ResNet and the impact of residual connections on learning,” in Thirty-first AAAI conference on artificial intelligence, San Francisco, California, USA, 2017.View at: Google Scholar
Q. Li, W. Cai, X. Wang, Y. Zhou, D. D. Feng, and M. Chen, “Medical image classification with convolutional neural network,” in 2014 13th international conference on control automation robotics & vision (ICARCV), pp. 844–848, Singapore, 2014.View at: Publisher Site | Google Scholar
X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers, “ChestX-ray8: hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax disease,” in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2097–2106, Honolulu, HI, USA, 2017.View at: Google Scholar
K. El Asnaoui and Y. Chawki, “Using X-ray images and deep learning for automated detection of coronavirus disease,” Journal of Biomolecular Structure & Dynamics, vol. 39, no. 10, pp. 3615–3626, 2021.
E. E.-D. Hemdan, M. A. Shouman, and M. E. Karar, “COVIDX-Net: a framework of deep learning classifiers to diagnose COVID-19 in X-ray images,” 2020, https://arxiv.org/abs/2003.11055.
L. O. Hall, R. Paul, D. B. Goldgof, and G. M. Goldgof, “Finding COVID-19 from chest X-rays using deep learning on a small dataset,” 2020, https://arxiv.org/abs/2004.02060.
S. Asif, Y. Wenhui, H. Jin, and S. Jinhai, “Classification of COVID-19 from chest X-ray images using deep convolutional neural network,” in 2020 IEEE 6th International Conference on Computer and Communications (ICCC), pp. 426–433, Chengdu, China, 2020.
Y. Song, S. Zheng, L. Li et al., “Deep learning enables accurate diagnosis of novel coronavirus (COVID-19) with CT images,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 18, no. 6, pp. 2775–2780, 2021.
M. Loey, F. Smarandache, and N. E. M. Khalifa, “Within the lack of chest COVID-19 X-ray dataset: a novel detection model based on GAN and deep transfer learning,” Symmetry, vol. 12, no. 4, p. 651, 2020.
L. N. Mahdy, K. A. Ezzat, H. H. Elmousalami, H. A. Ella, and A. E. Hassanien, “Automatic X-ray COVID-19 lung image classification system based on multi-level thresholding and support vector machine,” medRxiv, 2020.
A. M. Alqudah, S. Qazan, H. Alquran, I. A. Qasmieh, and A. Alqudah, “COVID-2019 detection using X-ray images and artificial intelligence hybrid systems,” Biomedical Signal and Image Analysis and Project, 2020.
M. L. Huang and Y. C. Liao, “A lightweight CNN-based network on COVID-19 detection using X-ray and CT images,” Computers in Biology and Medicine, vol. 146, article 105604, 2022.
T. Sanida, A. Sideris, D. Tsiktsiris, and M. Dasygenis, “Lightweight neural network for COVID-19 detection from chest X-ray images implemented on an embedded system,” Technologies, vol. 10, no. 2, p. 37, 2022.
A. Waheed, M. Goyal, D. Gupta, A. Khanna, F. Al-Turjman, and P. R. Pinheiro, “CovidGAN: data augmentation using auxiliary classifier GAN for improved COVID-19 detection,” IEEE Access, vol. 8, pp. 91916–91923, 2020.
M. J. Horry, S. Chakraborty, M. Paul et al., “COVID-19 detection through transfer learning using multimodal imaging data,” IEEE Access, vol. 8, pp. 149808–149824, 2020.
“Kaggle COVIDx CXR-2 dataset,” https://alexswong.github.io/COVID-Net/.
“COVID-Net open initiative,” 2021, https://alexswong.github.io/COVID-Net.
J. P. Cohen, P. Morrison, L. Dao, K. Roth, T. Q. Duong, and M. Ghassemi, “COVID-19 image data collection: prospective predictions are the future,” 2020, https://arxiv.org/abs/2006.11988.
“COVIDNet,” 2020, https://github.com/agchung/Figure1-COVID-chestxray-dataset.
“Dataset,” 2020, https://github.com/agchung/Actualmed-COVID-chestxray-dataset.
M. E. H. Chowdhury, T. Rahman, A. Khandakar et al., “Can AI help in screening viral and COVID-19 pneumonia,” IEEE Access, vol. 8, pp. 132665–132676, 2020.
T. Rahman, A. Khandakar, Y. Qiblawey et al., “Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images,” Computers in Biology and Medicine, vol. 132, article 104319, 2021.
“National Institute of Health Clinical Center,” https://nihcc.app.box.com/v/ChestXray-NIHCC.
“RSNA International COVID-19 Open Radiology Database (RICORD) release 1c - chest X-ray COVID+ (MIDRC-RICORD-1c),” https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=70230281.
M. D. Vayá, J. M. Saborit, J. A. Montell et al., “BIMCV COVID-19+: a large annotated dataset of RX and CT images from COVID-19 patients,” 2020, https://arxiv.org/abs/2006.01174.
J. Saltz, “Stony Brook University COVID-19 positive cases,” The Cancer Imaging Archive, 2021.
L. Wang, Z. Q. Lin, and A. Wong, “COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images,” Scientific Reports, vol. 10, no. 1, pp. 1–12, 2020.
O. N. Oyelade, A. E.-S. Ezugwu, and H. Chiroma, “CovFrameNet: an enhanced deep learning framework for COVID-19 detection,” IEEE Access, vol. 9, pp. 77905–77919, 2021.
I. Ahmed, A. Ahmad, and G. Jeon, “An IoT-based deep learning framework for early assessment of COVID-19,” IEEE Internet of Things Journal, vol. 8, no. 21, pp. 15855–15862, 2020.