One of the most pressing issues in the current COVID-19 pandemic is the early detection and diagnosis of COVID-19, together with the accurate separation of non-COVID-19 cases, at the lowest possible cost and during the disease's early stages. Deep learning-based models can provide an accurate and efficient approach to the identification and diagnosis of COVID-19, with considerable gains in sensitivity, specificity, and accuracy when applied to imaging modalities. COVID-19 is difficult to detect and recognize because its radiological presentation resembles that of pneumonia. The main objective of this study is therefore to distinguish COVID-19-positive images from pneumonia-positive images. We propose an integrated convolutional neural network that discriminates between COVID-19-infected patients and pneumonia patients; the image datasets are preprocessed before training. The novelty of this work lies in differentiating COVID-19 images from pneumonia images, which can support medical experts in decision-making. To train the model, images are fed directly into the integrated convolutional neural network architecture; the trained system is then evaluated on three datasets: the COVID-19 image dataset, the RSNA pneumonia dataset, and a new dataset created from the first two. Performance is evaluated using sensitivity, specificity, precision, and accuracy, and the system achieves accuracies of 94.04%, 97.2%, and 97.5% on the three datasets, respectively.

1. Introduction

The coronavirus pandemic was caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and has spread rapidly across the world. Because of the virus's insidious infectious nature and the lack of vaccines or effective therapies, early diagnosis is critical to preventing further transmission and to controlling the disease within existing healthcare institutions. The most dangerous illness induced by COVID-19 is pneumonia, which affects the lungs. High fever, dyspnea, cough, and runny nose are among the symptoms of the condition [1]. Analyzing chest X-ray images for abnormalities is the most common way to diagnose these cases. X-radiation, commonly known as X-ray, is a form of electromagnetic radiation that is absorbed to differing degrees by body tissues, which is what produces the radiographic image. As a novel diagnostic approach for COVID-19, X-ray imaging has a number of advantages over conventional testing procedures [2]: low cost, wide availability, noninvasiveness, short acquisition time, and equipment accessibility, to name a few [3]. In light of the current global healthcare crisis, X-ray imaging may be the best choice for a mass, simple, and rapid diagnostic technique in a situation such as COVID-19. Image-based medical diagnosis can save considerable time in detecting COVID-19, which in turn helps to limit and prevent the spread of the virus. The main types of images that can play a major role in defeating COVID-19 are chest X-ray (CXR) and CT [4]. The convolutional layer, which operates on localized regions rather than on the whole image, forms the first part of a CNN. It extracts distinctive features from the raw input and passes them on to the next layer. The pooling layer then takes what was learned by the preceding layer and reduces the computational complexity. In the second part, the fully connected layer processes the features learned by all preceding layers, producing the intended classification outputs [5].

Deep learning applied to clinical images depends on accurate diagnosis. Deep learning models have performed conspicuously well on computer vision problems connected with medical image analysis, and artificial neural networks (ANNs) have outperformed traditional models and strategies for image analysis. Because of the very encouraging results produced by CNNs in medical image analysis and classification, they are expected to become the de facto standard in this area. CNNs have been used for a variety of classification tasks connected with clinical findings: for example, lung disease identification, detection of malaria in images of thin blood smears, breast cancer detection, wireless capsule endoscopy images, computer-aided diagnosis of interstitial lung infection from chest radiography, diagnosis of skin disease by classification, and automatic diagnosis of various chest illnesses using chest X-ray image classification. Many researchers are engaged in experimentation and analysis connected with the diagnosis, treatment, and management of COVID-19. One significant study reported on a chest X-ray dataset that includes X-ray images belonging to COVID-19 patients, pneumonia patients, and individuals with no disease [6]. That study used state-of-the-art CNN architectures for the automated, accurate identification of COVID-19 patients; transfer learning achieved an accuracy of about 97.82% in the identification of COVID-19 [7]. This study, together with another recent and significant one, addressed the validation and adaptability of deep CNNs based on decomposition, transfer, and composition for COVID-19 identification using chest X-ray image classification [8]. In the study at hand, 18 shape features were extracted; the shape features, the two-dimensional size descriptors, and the shape of the region of interest (ROI) were then combined.
The shape features are independent of the gray-level intensity distribution in the ROI and are therefore computed only on the nonderived images. Among the local intensity-difference features, several elements were computed, including busyness, coarseness, complexity, contrast, and texture strength [9].

Examining the group of samples diagnosed with COVID-19, the marked superpixels are somewhat more concentrated on the chest and body area than the marked superpixels in the samples with no diagnosed pathology, where shoulders, neck, and hands are marked with a large number of patches. Moreover, when the COVID-19 images are compared pairwise with the normal images, we can see that the regions of positive superpixels in COVID-19 images generally occupy positions similar to those of the negative superpixels in the other classified images [10]. In the clinical field, deep learning and CNN methods excel not only at generating clinical images but also at classifying and recognizing them; convolutional neural networks are widely used for image classification and recognition [11]. CNNs perform very accurately for pneumonia detection with X-ray images of the chest, with results showing more than 95% accuracy [12].

3. Existing Methodologies

3.1. Convolutional Neural Networks (CNNs)

CNNs are inspired by the visual system of the human brain. The idea behind CNNs, consequently, is to make computers capable of viewing the world as humans perceive it. In this direction, CNNs can be used in the domains of image recognition and analysis, image classification, and natural language processing [13]. A CNN is a kind of deep neural network consisting of pooling, convolutional, and nonlinear activation layers. The convolutional layer, regarded as the fundamental layer of a convolutional neural network, performs the operation called "convolution" that gives the CNN its name. The kernels of the convolutional layer are applied to the input layer [14]. The convolutional layer's results are combined into a feature map. In this work, ReLU has been used as the activation function following a convolutional layer; it helps increase the nonlinearity of the model, since images themselves are highly nonlinear in nature [15]. Accordingly, a CNN combined with ReLU is simpler and faster to train in the present situation. Because the output of ReLU is zero for all negative inputs, it can be written simply as f(z) = max(0, z): the function returns 0 for every negative input and leaves positive inputs unchanged [16].
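The ReLU behaviour described above can be sketched in a few lines. This toy NumPy snippet is purely illustrative and is not part of the authors' implementation:

```python
import numpy as np

def relu(z):
    # ReLU activation: f(z) = max(0, z); negative inputs are clipped to zero,
    # positive inputs pass through unchanged
    return np.maximum(0.0, z)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))  # negative entries become 0.0; 1.5 and 3.0 are unchanged
```

Because the gradient is exactly 0 or 1 almost everywhere, this piecewise-linear form is cheap to compute, which is one reason ReLU-based CNNs train quickly.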

3.2. Deep Convolutional Neural Networks (DCNNs)

Normally, deep convolutional neural networks (DCNNs) perform better on a larger database than on a smaller dataset [17]. Transfer learning (TL) is valuable in systems like CNNs when the available collection of data is relatively limited. In this study, we attempted the task of classifying chest X-ray images into one of the following three classes: COVID-19-positive, viral pneumonia, or normal [5, 18]. The constructed CNNs were tested on databases such as MNIST and CIFAR-10 to see how well they performed; the test accuracies were 99.467% and 91.167%, respectively. These findings are similar to those of other CNNs that have been benchmarked for accuracy. As a result, it was established that the constructed CNN not only has fewer parameters than existing CNNs but also performs well in tests [19].

3.3. KNN Classifier

The input to the classification model is a set of images from two groups: COVID-19 and normal cases. Parallel fractional multichannel exponent moments (FrMEMs) are executed on multicore CPUs to extract the image features. An optimization-based algorithm is then applied for feature selection. Finally, a KNN classifier is trained and evaluated. The results of the proposed COVID-19 X-ray image classification strategy were compared with other popular metaheuristic (MH) procedures employed for feature selection (FS); the specific methods include the sine cosine algorithm (SCA), grey wolf optimization (GWO), the whale optimization algorithm (WOA), and the Harris hawks optimizer (HHO) [20]. These algorithms were used in this comparison because of their established performance in various applications such as global optimization and feature selection [21]. The authors collected images from the SIRM and created a COVID-19 database. This dataset comprises 219 COVID-19-positive images and 1341 COVID-19-negative images. For the two datasets, the images derived from the COVID-19 image dataset were gathered from persons aged between 40 and 84 of both sexes; the data encompass 216 COVID-19-positive and 1675 COVID-19-negative images [22]. The KNN-based method is used to decide whether a given chest X-ray image is a COVID-19 or a normal case. The proposed technique was assessed on a number of distinct datasets. Compared with an effective CNN design, the MobileNet model, the presented strategy achieved practically identical performance on the accuracy, recall, and precision evaluation metrics with the smallest number of features [23].
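To make the final classification stage concrete, the following minimal NumPy sketch shows a k-nearest-neighbour majority vote over precomputed feature vectors. The 2D feature values here are synthetic stand-ins, not the actual FrMEMs descriptors of [20]:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training vectors."""
    # Euclidean distance from the query to every training feature vector
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]     # indices of the k closest samples
    votes = train_y[nearest]            # their class labels (0 = normal, 1 = COVID-19)
    return np.bincount(votes).argmax()  # majority class wins

# toy features: class 0 clustered near the origin, class 1 near (5, 5)
train_X = np.array([[0.1, 0.2], [0.3, 0.1], [0.2, 0.4],
                    [5.0, 5.1], [4.8, 5.2], [5.3, 4.9]])
train_y = np.array([0, 0, 0, 1, 1, 1])

print(knn_predict(train_X, train_y, np.array([5.1, 5.0])))  # -> 1
```

In the papers cited above, the metaheuristics (SCA, GWO, WOA, HHO) would select which feature dimensions enter `train_X` before this voting step runs.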

3.4. FCN- and MMD-Based Domain Adaptation Module

It comprises a backbone network, a classification branch, and a segmentation branch. Compared with an FCN, our model has an auxiliary classification branch. The classification branch is intended for two purposes: one is to enable our method to perform both classification and segmentation tasks for automated disease estimation and COVID-19 diagnosis; the other is to facilitate the use of MMD-oriented strategies for domain adaptation. The backbone network is responsible for extracting deep features by performing spatial convolution and pooling operations on CXRs and DRRs. The extracted deep features are then fed into the classification branch and the segmentation branch independently. In the classification part, we adopt a very straightforward structure with a global average pooling (GAP) layer and a fully convolutional layer. In this framework, an off-the-shelf MMD-based domain adaptation method is adopted. LMMD can quantify the discrepancy between local distributions by taking into account the relationships of the relevant subdomains. By minimizing the LMMD loss while training the deep models, the distributions of relevant subdomains within the same class in the source domain and feature space are drawn closer together. Since the LMMD strategy was proposed in the context of object recognition and digit classification tasks, we apply it to the classification branch directly by aligning the deep features from the GAP layers. The effect of feature alignment is propagated to the segmentation part implicitly through the input of the GAP layer [1, 24].
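As a rough illustration of the distribution-matching idea, the snippet below computes a simple MMD estimate with a linear kernel between two feature sets: the squared distance between their feature means. This is a deliberate simplification of the LMMD loss in [24], which additionally weights samples by class-conditional subdomain:

```python
import numpy as np

def mmd_linear(X, Y):
    """Biased MMD estimate with a linear kernel: squared distance between feature means."""
    delta = X.mean(axis=0) - Y.mean(axis=0)
    return float(delta @ delta)

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 8))       # source-domain features (e.g., from DRRs)
tgt_near = rng.normal(0.1, 1.0, size=(200, 8))  # target features close to the source
tgt_far = rng.normal(2.0, 1.0, size=(200, 8))   # target features far from the source

# the discrepancy grows as the target distribution drifts away from the source
assert mmd_linear(src, tgt_near) < mmd_linear(src, tgt_far)
```

Minimizing such a discrepancy during training pulls the GAP-layer features of the two domains together, which is the mechanism the section above describes.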

3.5. COVID-19-GATNet Architecture

There are two potential ways of improving the expressive power of a CNN model: one is to increase the number of layers to make it deeper, and the other is to increase the number of convolutional channels to make it progressively wider. The workflow of this study starts with a collection of primary datasets comprising two image groups: one group of chest X-ray images belonging to confirmed COVID-19 cases and another group belonging to individuals with pneumonia infection. The concerned clinical experts inspected the dataset and eliminated some of the X-ray images that were not satisfactory in terms of quality and diagnostic parameters. Hence, the resulting dataset was very clean, as every X-ray image was of good quality and clear with regard to significant diagnostic parameters according to their expertise [13]. In DenseNet, each layer in the network is directly connected to the others to realize the reuse of features. As the number of layers increases, the network progressively adds the new feature information produced by each layer to the existing global feature information. COVID-19-GATNet adds multiple sets of independent attention mechanisms through a graph attention layer. After feature extraction is done, the multihead attention mechanism can act on the distribution of relevant features between the central node and neighbouring nodes. Nodes in the same neighbourhood are assigned different weights, which can expand the model scale and make the model's learning capacity more powerful. By combining the strong feature extraction ability of DenseNet with the attention mechanism, the basic structure of the COVID-19-GAT model is obtained [25].

4. Dataset and Preprocessing

4.1. Dataset

Table 1 lists the datasets used for the proposed neural network systems. Our system considers three image datasets taken from open sources. The first is the COVID-19 image dataset, which includes 584 COVID-19 images and 1760 normal images out of a total of 2344 images [26]. The RSNA pneumonia dataset consists of 500 pneumonia images and 1600 normal images out of a total of 2100 images. The third dataset was created by taking data from the COVID-19 image dataset and the RSNA pneumonia dataset; this newly created dataset contains 484 images that are COVID-19-positive or pneumonia-positive, as well as 1400 normal images, out of a total of 1884 images [27]. The RSNA pneumonia dataset comprises images from the NIH (National Institutes of Health) labeled by the Radiological Society of North America together with the Society of Thoracic Radiology. The objective of this dataset was to develop an AI classifier able to distinguish control from pneumonia images, and it was released in a Kaggle competition in 2018. It comprises 26684 images, of which 20672 are control and 6012 are pneumonia images; however, in this study only 2100 images are considered for performance assessment. Choosing X-ray images for analysis is helpful because they are cheaper, faster, and more common [28].

4.2. Dataset Preprocessing

To balance the dataset and enhance the performance of the proposed CNN models in identifying COVID-19 cases, pneumonia-positive images and images discriminating pneumonia from COVID-19-positive cases have been used. Data preprocessing includes tilt correction, noise removal, image cropping, and padding. Tilt correction is the alignment of the chest X-ray into a standard orientation; it is significant because the model can only perceive all of the data consistently when it is trained in the same orientation, and tilted chest X-ray images may result in misalignment for clinical applications. Manually correcting the tilt across a large dataset is time-consuming and costly, so an automatic tilt correction technique is required in preprocessing before training [30]. After applying these preprocessing steps to the data, we observe that model accuracy increases significantly [29]. Removing noise during preprocessing is likewise a vital stage: the data are refined before the later stages of execution and can be seen more clearly, so the model can be trained well. Cropping is applied to center the chest X-ray in the image and discard unnecessary parts of the frame; the region of interest may also be located in different areas within different images. By cropping the image and adding padding, we ensure that practically every image occupies the same area within the overall frame.
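The cropping and padding steps above can be sketched as plain array operations. The snippet below is a minimal NumPy illustration (tilt correction and denoising are omitted) and is not the authors' actual preprocessing code:

```python
import numpy as np

def center_crop(img, size):
    # keep a size x size window around the image centre (assumes img >= size)
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def pad_to(img, size):
    # zero-pad symmetrically so every sample ends up with the same shape
    h, w = img.shape
    ph, pw = size - h, size - w
    return np.pad(img, ((ph // 2, ph - ph // 2), (pw // 2, pw - pw // 2)))

xray = np.ones((220, 180))         # stand-in for a grayscale chest X-ray
cropped = center_crop(xray, 160)   # centre the lungs, drop the borders
uniform = pad_to(cropped, 224)     # pad back up to a fixed network input size
print(uniform.shape)               # (224, 224)
```

Applying the same crop-then-pad pipeline to every image is what guarantees that the anatomy lands in roughly the same region of each training sample.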

5. The Proposed Methodology

The proposed method diagnoses and classifies different kinds of diseases by training a convolutional neural network. The modules include dataset creation, dataset preprocessing, and building the convolutional neural network. A convolutional neural network was chosen because it automatically discovers significant features without the need for human intervention. A new dataset was randomly created from two datasets downloaded from Kaggle: the COVID-19 chest X-ray image dataset and the pneumonia dataset. The pneumonia dataset contains two types of images, pneumonia-positive and pneumonia-negative; the pneumonia-negative images are treated as COVID-19-negative images. The convolutional neural network is trained with pneumonia-negative, COVID-19-positive, and pneumonia-positive images. The method is developed using Python, TensorFlow, and Keras, which combine a variety of methods and models to allow users to build deep neural networks for applications like image recognition and classification. The complete path of the COVID-19-positive image dataset is set, and the model is trained; OpenCV is used to read the images. In the first phase, COVID-19-negative and COVID-19-positive images are trained. The main intention of this work is to distinguish between pneumonia-positive images and COVID-19-positive images [31]. Three different datasets are considered for training and testing; after preprocessing, the image datasets are used for training. Figure 1 shows the proposed architecture of the integrated convolutional neural network.
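The random creation of training and testing subsets can be reproduced with a simple permutation of sample indices. The paper does not specify its splitting code, so the fraction and seed below are illustrative assumptions (matching the 85/15 split described later):

```python
import numpy as np

def train_test_split_indices(n_samples, train_frac=0.85, seed=42):
    # shuffle sample indices reproducibly, then cut at the training fraction
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    cut = int(train_frac * n_samples)
    return idx[:cut], idx[cut:]

# e.g., the 2344 images of the COVID-19 image dataset
train_idx, test_idx = train_test_split_indices(2344)
print(len(train_idx), len(test_idx))  # 1992 352
```

Splitting by index rather than by file path keeps the image arrays and their labels aligned when both are loaded into parallel NumPy arrays.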

The image can be resized and converted to grayscale. In this modified convolutional neural network architecture, the image is given directly as input. We have parallel 2D convolutional layers of 128@3 × 3, 128@5 × 5, and 128@7 × 7; to create the parallel convolutional layers, the kernel sizes are defined as 3, 5, and 7, and all three layers are appended into a single layer. The second layer is a concatenation layer, created by concatenating the outputs of the parallel convolutional layers. Then we have a standard convolutional layer of 64@3 × 3, with 64 kernels, and another of 32@3 × 3, with 32 kernels. The data coming from the 2D convolutional layers are passed through the flatten layer, a dense layer (128), and a dense layer (64). To construct a single long feature vector, we flatten the output of the convolutional layers; this vector is linked to the final classification stage. Finally, the output layer, which has two neurons, produces two results: COVID-19-positive and pneumonia-positive. The metadata.csv file contains the image file name, disease name, patient age, and other details; it is read in order to filter the COVID-19 images from the image dataset. Softmax is used as the activation of the output layer. In this implementation, we have used 85% of the data for training and 15% for testing. In our system, the modified and integrated convolutional neural network is utilized for diagnosing and classifying the chest X-ray images. The first convolutional layer processes the dataset with different diseases, and the output layer produces two results: the first category is an image with disease (positive) and the second is an image without the disease (negative).
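The parallel-kernel idea above can be illustrated with a small NumPy forward pass: three "same"-padded convolutions with kernel sizes 3, 5, and 7 over one toy channel, concatenated, flattened, and fed to a softmax over two classes. This is a single-filter, random-weight sketch of the data flow, not the trained 128-filter Keras model described in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_same(img, kernel):
    # naive 'same'-padded 2D cross-correlation (what CNN libraries call convolution)
    k = kernel.shape[0]
    padded = np.pad(img, k // 2)
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

img = rng.standard_normal((16, 16))               # toy grayscale input
branches = [conv2d_same(img, rng.standard_normal((s, s)))
            for s in (3, 5, 7)]                   # parallel 3x3, 5x5, 7x7 branches
features = np.stack(branches)                     # concatenation layer: (3, 16, 16)
flat = features.ravel()                           # flatten into one long feature vector
W = rng.standard_normal((2, flat.size)) * 0.01    # untrained weights for the 2-class head
logits = W @ flat
probs = np.exp(logits - logits.max())
probs /= probs.sum()                              # softmax: COVID-19 vs pneumonia
print(features.shape, probs)
```

Because each branch uses "same" padding, all three outputs share the input's spatial size, which is what makes the channel-wise concatenation well defined.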

6. Results and Discussion

After preprocessing, the dataset comprised a total of 2344 X-ray images, which were separated into two subsets. The training dataset comprises the 584 COVID-19 X-ray images and 1760 normal X-ray images, for a total of 2344 X-ray images, trained using the integrated convolutional neural network. The new CNN was then tested on the testing dataset, which contained 285 chest X-ray images. Figure 2 shows the confusion matrix of the ICNN on the COVID-19 image dataset.

As a first test, our integrated convolutional neural network is trained with the abovementioned chest X-ray images and tested on the test set of 285 images taken from the COVID-19 image dataset. The model produced 258 true positives, 10 true negatives, 9 false negatives, and 8 false positives among the 285 test images; from these, a sensitivity of 96.60%, specificity of 55.60%, precision of 96.90%, and accuracy of 94.04% were calculated. Other measures, such as negative predictive value, false-positive rate, false discovery rate, F1 score, and Matthews correlation coefficient, are listed in Table 2.
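All four reported measures follow directly from the confusion-matrix cells. The helper below recomputes them from the counts of this first test (differing from the quoted figures only in rounding):

```python
def confusion_metrics(tp, tn, fp, fn):
    # standard derived measures from the four confusion-matrix cells
    return {
        "sensitivity": tp / (tp + fn),              # true-positive rate (recall)
        "specificity": tn / (tn + fp),              # true-negative rate
        "precision":   tp / (tp + fp),              # positive predictive value
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
    }

# counts from the first test on the COVID-19 image dataset
m = confusion_metrics(tp=258, tn=10, fp=8, fn=9)
print({k: round(v * 100, 2) for k, v in m.items()})
# sensitivity 96.63, specificity 55.56, precision 96.99, accuracy 94.04
```

The same helper applied to the counts of the second and third tests reproduces the 97.2% and 97.5% accuracies reported below.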

As a second test, our integrated convolutional neural network is trained using the training dataset with 500 chest X-ray images and tested using the test dataset with 500 images taken from the RSNA pneumonia dataset. Figure 3 shows the confusion matrix of the ICNN on the RSNA pneumonia dataset.

This model produced 451 true positives, 35 true negatives, 4 false negatives, and 10 false positives among the 500 test images. Sensitivity was 99.12%, specificity was 77.78%, precision was 97.83%, and accuracy was 97.2%. Other measures, such as negative predictive value, false discovery rate, F1 score, false-positive rate, and Matthews correlation coefficient, are listed in Table 3. Finally, our integrated convolutional neural network is trained using the training dataset with 484 chest X-ray images and tested using the test dataset with 200 images taken from the newly created dataset. Figure 4 shows the confusion matrix of the ICNN on the newly created dataset.

This model produced 158 true positives, 37 true negatives, 3 false negatives, and 2 false positives among the 200 test images; a sensitivity of 98.14%, specificity of 94.87%, precision of 98.75%, and accuracy of 97.5% were calculated, and other measures such as false-positive rate, negative predictive value, false discovery rate, F1 score, and Matthews correlation coefficient are listed in Table 4. Comparing sensitivity and specificity, our proposed system produced its highest sensitivity, 98.14%, on the newly created dataset.

The specificity performance measure is compared with all the existing methods used in previous work. We have used three different datasets to test our newly developed model. Our system produced better results on the newly created dataset than on the RSNA pneumonia dataset, with a difference of 17.09% in specificity. Figure 5 shows the comparison of specificity performance measures across different methods; the specificity measures for datasets 1, 2, and 3 are plotted, and our third dataset, created from both the COVID-19 image dataset and the RSNA pneumonia dataset, produced good results.

The sensitivity performance measure is likewise compared with all the existing methods used in previous work. Figure 6 shows the comparison of sensitivity performance measures across different methods; the sensitivity measures for datasets 1, 2, and 3 are plotted, and our second dataset, derived from the RSNA pneumonia dataset, produced good results.

The accuracy of our developed system is also compared with all the existing methods used in previous work. Figure 7 shows the comparison of accuracy performance measures across different methods; the accuracy measures for datasets 1, 2, and 3 are plotted, and our third dataset, created from both the COVID-19 image dataset and the RSNA pneumonia dataset, produced the best accuracy, 97.5%.

7. Conclusions

Patients with pneumonia symptoms are more likely to have COVID-19 viral pneumonia than community-acquired bacterial pneumonia, and it is difficult to tell whether pneumonia is caused by the COVID-19 virus or by bacteria. Without a firm diagnosis of the condition, it is impossible to deliver the optimal treatment to individuals with lung infections. We have developed an integrated convolutional neural network that identifies COVID-19-positive images in the COVID-19 image dataset by separating them from normal images; pneumonia-positive images are likewise identified in the RSNA pneumonia dataset by separating them from normal images. COVID-19 images are then diagnosed and distinguished from pneumonia-positive images in the dataset newly created from both the COVID-19 image dataset and the RSNA pneumonia dataset. The model was tested with three different image datasets of 285, 500, and 200 images, respectively. The experimental results revealed that our new ICNN model achieves good accuracy, specificity, and F1-score values. In the future, the system will be tested with a larger image dataset.

Data Availability

The data used to support the findings of this study are included within the article. Should further data or information be required, these are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.


Acknowledgments

The authors thank Chandigarh University, Punjab, for providing characterization support to complete this research work.