VGG-UNet/VGG-SegNet Supported Automatic Segmentation of Endoplasmic Reticulum Network in Fluorescence Microscopy Images
This research work aims to implement an automated segmentation process to extract the endoplasmic reticulum (ER) network in fluorescence microscopy images (FMI) using pretrained convolutional neural networks (CNN). The threshold level of the raw FMI is complex, and extraction of the ER network is a challenging task. Hence, an image conversion procedure is initially employed to reduce this complexity. This work employs the pretrained CNN schemes VGG-UNet and VGG-SegNet to mine the ER network from the chosen FMI test images. The proposed ER segmentation pipeline consists of the following phases: (i) clinical image collection, 16-bit to 8-bit conversion, and resizing; (ii) implementation of the pretrained VGG-UNet and VGG-SegNet; (iii) extraction of the binary form of the ER network; (iv) comparison of the mined ER network with the ground truth; and (v) computation of image measures and validation. The considered FMI dataset consists of 223 test images, and image augmentation is implemented to increase their number. The result of this scheme is then confirmed against other CNN methods, such as U-Net, SegNet, and Res-UNet. The experimental outcome confirms a segmentation accuracy of >98% with VGG-UNet and VGG-SegNet. The results of this research confirm that the proposed pipeline can be considered for examining clinical-grade FMI.
Artificial intelligence (AI) techniques are widely adopted in various engineering and scientific domains to obtain the best possible solutions for a wide range of problems. AI-supported medical data assessment is one such vital research field, in which a chosen scheme is employed to examine the information collected from hospitals and scan centers.
Typically, medical data includes the information collected from patients, such as personal data, diagnostic information, and collected biosignals and bioimages [1–3]. Assessment of medical data collected from the patient is essential during diagnosis, treatment execution, and monitoring of the recovery rate.
The literature confirms that various traditional and machine-learning (ML) procedures are employed to examine a variety of medical data with improved accuracy [4, 5]. Along with the ML schemes, deep learning (DL) procedures are also widely employed to examine various medical data to get a better diagnosis [6, 7]. The DL scheme works well on various medical databases available in the form of features, signals, and images. Therefore, it helps to achieve a superior result compared to other techniques. Most of the earlier works considered the pretrained DL procedure because of its superior result. The conventional and modified forms of the DL techniques are employed in segmentation and classification tasks [8–10].
In this work, the fluorescence microscopy image (FMI) database available in [11] is considered for the assessment. This dataset consists of endoplasmic reticulum (ER) network images, and the study of their structural information plays a vital role in cell condition assessment. Precise segmentation of this network is essential for evaluating the morphology needed to support disease diagnosis and drug discovery. In human cells, the ER network forms a dynamic structure that supports functions including calcium storage, protein synthesis, and lipid metabolism. Therefore, the segmentation and examination of the ER network structure are essential in the medical domain to assess the cell's complete information and protein structure. Further information can be found in [12, 13].
The extraction of the ER network from the FMI is achieved using a CNN approach; in this work, VGG-UNet and VGG-SegNet are enhanced with the VGG19 model. The proposed scheme is tested and validated on the benchmark FMI dataset. This dataset consists of 223 images; data augmentation is implemented to increase the dataset to 2007 images, and each image is resized to a fixed pixel dimension. These images are then considered for testing and validating the ER network segmentation performance of the CNN techniques, and the investigation is implemented using MATLAB®. A detailed comparative assessment of CNN schemes, such as U-Net, SegNet, Res-UNet, VGG-UNet, and VGG-SegNet, is presented. The segmented ER network is compared against the ground truth (GT) available in the database, and the necessary image measures are computed. The experimental investigation with the proposed schemes helped achieve a segmentation accuracy of >98%. This research confirmed that the outcome of VGG-SegNet is better than the other schemes considered in this work. The proposed approach was also tested on the second version (FMI2) of the dataset and achieved a better result. This confirms that the proposed scheme is efficient in examining the FMI; in the future, it can be considered for the clinical-grade ER network evaluation task using the FMI.
The main contributions of this study are as follows: (i) the complexity of the dataset is addressed, and a 16-bit to 8-bit conversion is employed to reduce it; (ii) recent CNN segmentation schemes, VGG-UNet and VGG-SegNet, are implemented to examine the fluorescence microscopy images; (iii) a comparative analysis of commonly considered CNN segmentation schemes is presented.
Other sections of this research are organized as follows: Section 2 presents earlier works on FMI, Section 3 shows the methodology employed, and Sections 4 and 5 discuss the results and conclusions of the present research, respectively.
2. Related Earlier Research
The assessment of ER network morphology is a clinically significant task during the disease diagnosis and drug discovery process. Therefore, this assessment is performed to examine the cell and its related information, and in the literature, the researchers discuss several ER network examination methods.
Usaj et al. [14] discussed single-cell image-supported morphology assessment to detect the cell-to-cell variability of internal structures. Abrisch et al. [15] presented a study of mitochondrial morphology regulation based on fission/fusion machineries converging at the ER network. da Silva et al. [16] discussed various procedures for studying the cell signaling process during cancer and neurodegenerative disorder conditions. Powers et al. [17] presented a detailed assessment of the tubular ER network's reconstitution process.
Image processing-supported cell image assessment is also widely discussed to examine cells under normal and disease conditions. Heinrich et al. [18] presented an automated segmentation procedure to extract cell organelles from volumetric electron microscopy images. Chen et al. [19] discussed a novel three-dimensional residual channel attention procedure to improve the visibility of FMI. Shamir [20] presented low-level image content descriptors to support computer-based FMI examination. Pécot et al. [21] presented a conditional random field-based segmentation and fluorescence estimation procedure to examine live cells. Tahir [22] presented a detailed assessment of the morphological structure of protein images recorded using FMI; this work implemented the gray level co-occurrence matrix (GLCM) technique to assess the FMI pictures. Zhang and Zhao [23] proposed a CNN scheme called CapsNet to classify 2D HeLa cells in an FMI database. Moen et al. [24] presented a detailed assessment of cellular images using deep learning schemes. Mabaso et al. [25] presented a detailed review of the assessment and segmentation of FMI.
Extracting the ER network from FMI is a complex task, and achieving better segmentation accuracy is also challenging. Hence, CNN-supported segmentation is employed to extract and evaluate the ER network from FMI with better segmentation accuracy.
3. Methodology
This part of the work demonstrates the methodology employed to mine the ER network using the CNN scheme.
Figure 1 depicts the architecture employed in this research work to examine the FMI database. Initially, the complex FMI is collected from the dataset. The collected FMI has a complex threshold level, and this complexity is initially reduced using an image conversion process that converts the tagged image file (.tif) format into a bitmap (.bmp) with 256 gray levels; the image is also resized to a fixed pixel dimension. Next, image augmentation is employed to increase the number of test images, and the augmented images are then used to train and validate the CNN schemes. After sufficient training, the segmentation performance is tested, and the extracted ER network section is compared with the GT image. Finally, based on the attained image performance measures, the merit of this proposal is confirmed.
3.1. Image Database
This work considered the ER network FMI dataset [11] for assessment. This dataset was formed with the help of cultured live cells recorded using spinning disk confocal microscopy (SDCM).
The recorded ER network was obtained from cells labelled with Green Fluorescent Protein (GFP) fused to Sec61β and cultivated on MatTek cover-glass dishes to 60% confluence. Pictures were collected using an oil-immersion objective (W.D. 0.13 mm) on a Ti2-E inverted microscope coupled to a 488 nm, 150 mW laser of a four-laser combiner unit (Axxis), a CSU-W1 spinning disk scan head (Yokogawa), and a 95BSI sCMOS camera controlled by Nikon Elements software (Nikon). The imagery was collected as patches, and every picture is available with its GT image. These images were clustered into two categories, FMI version 1 (FMI1) and version 2 (FMI2), and in this study the two versions were separately tested and validated. FMI1 consists of 223 test images, and FMI2 contains 175 images.
The significant complexity of this dataset is that every image is registered as a 16-bit image, which exhibits a complex threshold level and needs a 16-bit to 8-bit conversion to support the computerized evaluation. In this work, the 16-bit to 8-bit conversion is initially performed to reduce the complexity, and the converted image is then considered for the examination; the conversion is achieved using a standard MATLAB command. Figure 2(a) depicts the sample test image, and its histograms for the 16-bit and 8-bit cases are shown in Figures 2(b) and 2(c), respectively.
(a) Test image (8-bit)
(b) Histogram (16-bit)
(c) Histogram (8-bit)
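The same 16-bit to 8-bit reduction can be reproduced outside MATLAB. The following is a minimal Python/NumPy sketch; the min-max scaling used here is an assumption, since the exact MATLAB conversion command is not stated in the text:

```python
import numpy as np

def to_8bit(img16):
    """Rescale a 16-bit microscopy frame to 8-bit via min-max normalisation."""
    img = img16.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: avoid division by zero
        return np.zeros(img.shape, dtype=np.uint8)
    scaled = (img - lo) / (hi - lo)   # map intensities to [0, 1]
    return np.round(scaled * 255).astype(np.uint8)
```

Min-max scaling stretches the usable intensity range before quantization, which is why the 8-bit histogram in Figure 2(c) looks less compressed than the raw 16-bit one.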
The number of images is then increased using image augmentation (picture rotation in fixed angular steps), which raised the image count to 2007 for FMI1. Figures 3 and 4 present the sample test images and augmented images considered in this research work.
(a) Test image
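Rotation-based augmentation of this kind can be sketched as follows. For simplicity, this illustration uses 90° multiples via NumPy; the paper's pipeline rotates in finer angular steps to reach 2007 images:

```python
import numpy as np

def rotation_augment(img):
    """Return the original image plus its 90-, 180-, and 270-degree rotations."""
    return [np.rot90(img, k) for k in range(4)]
```

Applying such a generator to each of the 223 FMI1 images multiplies the dataset size by the number of rotated copies per image.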
3.2. Proposed CNN Scheme
In this research, the CNN segmentation methods U-Net [26] and SegNet [27] were improved using the VGG19 scheme. Earlier versions of these CNN segmentation schemes are available with VGG11 or VGG16 backbones, and those approaches have already confirmed their merit on a class of gray/RGB medical images. In this work, the conventional CNN segmentation procedures were improved by using VGG19 as the encoder and its inverse operation as the decoder. The proposed encoder-decoder network was then trained to extract the ER network from the test images with better accuracy using a SoftMax classifier unit. The architectures of VGG-UNet and VGG-SegNet are depicted in Figures 5 and 6, respectively. Further information on VGG-UNet and VGG-SegNet can be found in [28–32].
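Because VGG19's five convolutional blocks each end in 2×2 max-pooling, the encoder halves the spatial resolution five times, and the decoder must mirror this with five upsampling stages. A small sketch of the resulting feature-map sizes (the 224×224 input is an assumption, since the paper does not state its resize target):

```python
def encoder_feature_sizes(input_size=224, n_blocks=5):
    """Spatial size of the feature map after each VGG19 block (2x2 max-pool)."""
    sizes, s = [], input_size
    for _ in range(n_blocks):
        s //= 2            # each max-pooling stage halves height and width
        sizes.append(s)
    return sizes
```

For a 224×224 input this gives [112, 56, 28, 14, 7]; in VGG-UNet, the skip connections carry the intermediate encoder maps to the matching decoder stages.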
3.3. Pretraining and Segmentation
The considered CNN segmentation models were initially trained using the test/GT images to learn the ER network to be extracted from the FMI. Initially, the performance of the CNN models was tuned using optimizers such as ADAM and stochastic gradient descent (SGD), with batch sizes of 4, 8, 16, and 32, and several candidate learning rates. This initial tuning achieved the best learning (better accuracy with lower dice loss) when the ADAM optimizer was used with a batch size of 8. This process is repeated until a training accuracy of >95% is achieved, and a sample result achieved during this process can be found in Figure 7.
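The optimizer/batch-size tuning described above amounts to a small grid search. A minimal sketch follows, where `train_eval` stands in for a full training-and-validation run; the scoring stand-in `fake_train_eval` is purely hypothetical and only illustrates the selection logic:

```python
import itertools

def tune(train_eval, optimizers=("adam", "sgd"), batch_sizes=(4, 8, 16, 32)):
    """Try every optimizer/batch-size pair; keep the best validation accuracy."""
    best = None
    for opt, bs in itertools.product(optimizers, batch_sizes):
        acc = train_eval(opt, bs)   # train the CNN, return validation accuracy
        if best is None or acc > best[0]:
            best = (acc, opt, bs)
    return best

# hypothetical stand-in for an actual training run
def fake_train_eval(opt, bs):
    return 0.98 if (opt, bs) == ("adam", 8) else 0.95
```

In the paper's experiments, this kind of sweep selected ADAM with a batch size of 8.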
The experimental investigation is repeated using Python®, and the attained results are presented in Figure 8. Figure 8(a) depicts the images considered to train and validate the U-Net, and Figure 8(b) depicts the performance of the considered scheme over 100 epochs (x-axis). Figure 8(b) confirms that the training of this scheme saturates before 50 epochs. This confirms that the pretrained scheme needs only a minimum number of epochs to learn and extract the essential section from the considered test images. A similar result is achieved for the other schemes considered in this study. This confirms that pretrained CNN segmentation works similarly whether implemented in MATLAB® or Python®.
(a) Training images
(b) Convergence of loss and accuracy
3.4. Performance Validation
The overall merit of the CNN image segmentation scheme depends on the performance values computed during the comparison of the extracted ER network and the GT. In this work, the necessary values were computed based on the attained values of true positive (TP), true negative (TN), false positive (FP), and false negative (FN). From these values, other measures, such as Jaccard index (JA), Dice coefficient (DI), accuracy (AC), precision (PR), sensitivity (SE), specificity (SP), F1-score (F1), and negative predictive value (NPV), were derived.
These measures are defined in Eqs. (2) to (8) [33–39], with the standard forms JA = TP/(TP + FP + FN), DI = 2TP/(2TP + FP + FN), AC = (TP + TN)/(TP + TN + FP + FN), PR = TP/(TP + FP), SE = TP/(TP + FN), SP = TN/(TN + FP), F1 = 2·PR·SE/(PR + SE), and NPV = TN/(TN + FN).
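These pixel-level counts and the derived measures are straightforward to compute once the binary segmentation and GT masks are available; a minimal NumPy sketch:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Compare a binary prediction with its ground-truth mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)      # ER pixels correctly detected
    tn = np.sum(~pred & ~gt)    # background correctly rejected
    fp = np.sum(pred & ~gt)     # false ER detections
    fn = np.sum(~pred & gt)     # missed ER pixels
    eps = 1e-12                 # guard against division by zero
    pr = tp / (tp + fp + eps)
    se = tp / (tp + fn + eps)
    return {
        "JA": tp / (tp + fp + fn + eps),
        "DI": 2 * tp / (2 * tp + fp + fn + eps),
        "AC": (tp + tn) / (tp + tn + fp + fn + eps),
        "PR": pr,
        "SE": se,
        "SP": tn / (tn + fp + eps),
        "F1": 2 * pr * se / (pr + se + eps),
        "NPV": tn / (tn + fn + eps),
    }
```

Averaging these dictionaries over the 50 validation images yields the mean ± SD values reported later in Table 3.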
4. Results and Discussion
This part of the work demonstrates the results achieved on a workstation with an Intel i7 2.9 GHz processor, 20 GB RAM, and 4 GB VRAM, equipped with MATLAB®.
Initially, the pretrained U-Net is employed to segment the ER network from the FMI1 dataset. The considered CNN scheme is trained using the resized and augmented test images; this training process is continued until it achieves a training accuracy of >95%, and the other procedures followed in this process are discussed in Subsection 3.3. When the CNN model is completely trained and has achieved the required accuracy, 50 test images each from the FMI1 and FMI2 datasets are considered to validate the segmentation performance of the U-Net. After obtaining the necessary results, a similar procedure is followed with SegNet, Res-UNet, VGG-UNet, and VGG-SegNet, and the results are recorded.
The results achieved at various layers of the VGG-UNet are shown in Figure 9. Figure 9(a) depicts the various layer results of the encoder section; Figure 9(b) depicts the final convolution layer outcome of the decoder; Figures 9(c) and 9(d) depict the outcome of the SoftMax layer and the binary form of the extracted ER network, respectively.
(a) Conv-layer outcome
Every CNN scheme is trained using the FMI1 database (original and augmented images, 2007 in total), and after the training, the segmentation performance is individually validated. The segmentation outcome achieved for a sample test image is depicted in Figure 10. Figure 10(a) shows the GT, and Figures 10(b) to 10(f) present the results of the CNN schemes. After collecting the binary form of the ER network, a relative assessment against the GT is performed, and the obtained image measures are presented in Tables 1 and 2. Table 1 presents the initial measures, such as JA and DI, and Table 2 presents the essential performance values.
The JA, DI, and segmentation accuracy achieved with VGG-UNet are better than those of the other methods, and VGG-UNet and VGG-SegNet help achieve an accuracy of >99%. Although the individual results of U-Net, SegNet, and Res-UNet are good, the comparison confirms that the proposed schemes are superior. A glyph plot is also constructed to confirm the overall performance of the proposed CNNs. This plot also confirms that the overall merit of the proposed CNN schemes is better, and VGG-SegNet achieves a better result than VGG-UNet.
The developed scheme is verified using 50 images each from FMI1 and FMI2, and the attained results are individually recorded. Finally, the mean values of the image measures are computed along with their standard deviation (SD), and the results are presented in Table 3. This table confirms that the segmentation accuracy achieved with the proposed CNN schemes is better (>98%), and the result of VGG-SegNet is comparatively better than that of VGG-UNet. This confirms that the proposed scheme works well on the FMI database and, in the future, can be used to assess clinically collected FMI of the ER network.
Figure 11 presents the glyph plot, which confirms that the results of VGG-UNet and VGG-SegNet are better than those of U-Net, SegNet, and Res-UNet for the Table 2 values. Figure 12 presents the spider plot for the overall results of Table 3, and Figures 12(a) and 12(b) confirm that VGG-UNet and VGG-SegNet provide better results on FMI1 and FMI2 compared to the other approaches. In both cases, the overall performance of VGG-SegNet is better than that of VGG-UNet and the other CNN schemes in this study. Automatic segmentation and evaluation of the endoplasmic reticulum network in fluorescence microscopy images is a difficult task due to the image complexity. This research confirms that the proposed CNN schemes help extract the required sections with better accuracy. The proposed scheme can be considered for examining other complex biomedical images collected from actual clinics in the future.
5. Conclusion
In the medical domain, assessment of the ER network's structural information is essential to support disease analysis and drug discovery operations. This research employs CNN-supported segmentation to extract the ER network from the FMI dataset. This work proposes VGG19-based CNN architectures, VGG-UNet and VGG-SegNet, to extract the needed information from test images. Two FMI image sets (FMI1 and FMI2) are considered for the assessment, and the experimental investigation is performed in a MATLAB® environment. This work presented a detailed assessment of U-Net, SegNet, Res-UNet, and the proposed schemes. The experimental outcome confirmed that the proposed CNN schemes helped achieve a better segmentation accuracy (>98%), and VGG-SegNet offered better overall performance than the other techniques. In the future, this technique can be considered to examine clinically collected FMI.
Data Availability
The fluorescence microscopy images considered in this research work can be accessed from https://ieee-dataport.org/documents/fluorescence-microscopy-image-datasets-deep-learning-segmentation-intracellular-orgenelle
Conflicts of Interest
The authors declare no conflict of interest.
Acknowledgments
The authors of this paper would like to thank the contributors of the dataset.
References
1. Z. Chen, W. Guo, D. Kang et al., “Label-free identification of early stages of breast ductal carcinoma via multiphoton microscopy,” Scanning, vol. 2020, 8 pages, 2020.
2. S. M. U. Talha, T. Mairaj, W. B. Yousuf, and J. A. Zahed, “Region-based segmentation and Wiener pilot-based novel amoeba denoising scheme for CT imaging,” Scanning, vol. 2020, Article ID 6172046, 12 pages, 2020.
3. J. L. Semmlow and B. Griffel, Biosignal and Medical Image Processing, CRC Press, Florida, United States, 2008.
4. S. Maqsood, R. Damasevicius, and F. M. Shah, “An efficient approach for the detection of brain tumor using fuzzy logic and U-NET CNN classification,” International Conference on Computational Science and Its Applications, Springer, Cham, pp. 105–118, 2021.
5. M. P. Rajakumar, R. Sonia, B. Uma Maheswari, and S. P. Karuppiah, “Tuberculosis detection in chest X-ray using mayfly-algorithm optimized dual-deep-learning features,” Journal of X-Ray Science and Technology, vol. 29, no. 6, pp. 961–974, 2021.
6. F. I. Diakogiannis, F. Waldner, P. Caccetta, and C. Wu, “ResUNet-a: a deep learning framework for semantic segmentation of remotely sensed data,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 162, pp. 94–114, 2020.
7. V. Rajinikanth, A. N. Joseph Raj, K. P. Thanaraj, and G. R. Naik, “A customized VGG19 network with concatenation of deep and handcrafted features for brain tumor detection,” Applied Sciences, vol. 10, no. 10, p. 3429, 2020.
8. A. Malhotra, A. Sankaran, M. Vatsa, and R. Singh, “On matching finger-selfies using deep scattering networks,” IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 2, no. 4, pp. 350–362, 2020.
9. W. Zamora-Cárdenas, M. Mendez, S. Calderon-Ramirez et al., “Enforcing morphological information in fully convolutional networks to improve cell instance segmentation in fluorescence microscopy images,” International Work-Conference on Artificial Neural Networks, Springer, Cham, pp. 36–46, 2021.
10. T. Rahman, A. Khandakar, M. A. Kadir et al., “Reliable tuberculosis detection using chest X-ray with deep learning, segmentation and visualization,” IEEE Access, vol. 8, pp. 191586–191601, 2020.
11. Y. Luo, Y. Guo, W. Li, G. Liu, and G. Yang, “Fluorescence microscopy image datasets for deep learning segmentation of intracellular orgenelle networks,” IEEE Dataport, 2020.
12. M. Lu, F. W. van Tartwijk, J. Q. Lin et al., “The structure and global distribution of the endoplasmic reticulum network are actively regulated by lysosomes,” Science Advances, vol. 6, no. 51, article eabc7209, 2020.
13. J. Liu, L. Li, Y. Yang et al., “Automatic reconstruction of mitochondria and endoplasmic reticulum in electron microscopy volumes by deep learning,” Frontiers in Neuroscience, vol. 14, p. 599, 2020.
14. M. Usaj, N. Sahin, H. Friesen et al., “Systematic genetics and single-cell imaging reveal widespread morphological pleiotropy and cell-to-cell variability,” Molecular Systems Biology, vol. 16, no. 2, article e9243, 2020.
15. R. G. Abrisch, S. C. Gumbin, B. T. Wisniewski, L. L. Lackner, and G. K. Voeltz, “Fission and fusion machineries converge at ER contact sites to regulate mitochondrial morphology,” Journal of Cell Biology, vol. 219, no. 4, 2020.
16. D. C. da Silva, P. Valentão, P. B. Andrade, and D. M. Pereira, “Endoplasmic reticulum stress signaling in cancer and neurodegenerative disorders: tools and strategies to understand its complexity,” Pharmacological Research, vol. 155, article 104702, 2020.
17. R. E. Powers, S. Wang, T. Y. Liu, and T. A. Rapoport, “Reconstitution of the tubular endoplasmic reticulum network with purified components,” Nature, vol. 543, no. 7644, pp. 257–260, 2017.
18. L. Heinrich, D. Bennett, D. Ackerman et al., “Automatic whole cell organelle segmentation in volumetric electron microscopy,” bioRxiv, 2020.
19. J. Chen, H. Sasaki, H. Lai et al., “Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes,” Nature Methods, vol. 18, no. 6, pp. 678–687, 2021.
20. L. Shamir, “Assessing the efficacy of low-level image content descriptors for computer-based fluorescence microscopy image analysis,” Journal of Microscopy, vol. 243, no. 3, pp. 284–292, 2011.
21. T. Pécot, P. Bouthemy, J. Boulanger et al., “Background fluorescence estimation and vesicle segmentation in live cell imaging with conditional random fields,” IEEE Transactions on Image Processing, vol. 24, no. 2, pp. 667–680, 2015.
22. M. Tahir, “Pattern analysis of protein images from fluorescence microscopy using gray level co-occurrence matrix,” Journal of King Saud University-Science, vol. 30, no. 1, pp. 29–40, 2018.
23. X. Zhang and S. G. Zhao, “Fluorescence microscopy image classification of 2D HeLa cells based on the CapsNet neural network,” Medical & Biological Engineering & Computing, vol. 57, no. 6, pp. 1187–1198, 2019.
24. E. Moen, D. Bannon, T. Kudo, W. Graf, M. Covert, and D. Van Valen, “Deep learning for cellular image analysis,” Nature Methods, vol. 16, no. 12, pp. 1233–1246, 2019.
25. M. A. Mabaso, D. J. Withey, and B. Twala, “Spot detection methods in fluorescence microscopy imaging: a review,” 2018.
26. O. Ronneberger, P. Fischer, and T. Brox, “U-net: convolutional networks for biomedical image segmentation,” International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, Cham, pp. 234–241, 2015.
27. V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: a deep convolutional encoder-decoder architecture for image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 12, pp. 2481–2495, 2017.
28. V. Rajinikanth, S. Kadry, and Y. Nam, “Convolutional-neural-network assisted segmentation and SVM classification of brain tumor in clinical MRI slices,” Information Technology and Control, vol. 50, no. 2, pp. 342–356, 2021.
29. W. Li, S. Cheng, K. Qian, K. Yue, and H. Liu, “Automatic recognition and classification system of thyroid nodules in CT images based on CNN,” Computational Intelligence and Neuroscience, vol. 2021, 11 pages, 2021.
30. M. Fradi, E. H. Zahzah, and M. Machhout, “Real-time application based CNN architecture for automatic USCT bone image segmentation,” Biomedical Signal Processing and Control, vol. 71, article 103123, 2022.
31. M. Fawakherji, C. Potena, D. D. Bloisi, M. Imperoli, A. Pretto, and D. Nardi, “UAV image based crop and weed distribution estimation on embedded GPU boards,” International Conference on Computer Analysis of Images and Patterns, Springer, Cham, pp. 100–108, 2019.
32. Y. Mei, H. Jin, B. Yu, E. Wu, and K. Yang, “Visual geometry group-UNet: deep learning ultrasonic image reconstruction for curved parts,” The Journal of the Acoustical Society of America, vol. 149, no. 5, pp. 2997–3009, 2021.
33. V. Rajinikanth and S. Kadry, “Development of a framework for preserving the disease-evidence-information to support efficient disease diagnosis,” International Journal of Data Warehousing and Mining (IJDWM), vol. 17, no. 2, pp. 63–84, 2021.
34. K. J. W. Tang, C. K. E. Ang, C. Theodoros, V. Rajinikanth, U. R. Acharya, and K. H. Cheong, “Artificial intelligence and machine learning in emergency medicine,” Biocybernetics and Biomedical Engineering, vol. 41, no. 1, pp. 156–172, 2020.
35. F. Saeed, M. A. Khan, M. Sharif, M. Mittal, L. M. Goyal, and S. Roy, “Deep neural network features fusion and selection based on PLS regression with an application for crops diseases classification,” Applied Soft Computing, vol. 103, article 107164, 2021.
36. W. H. Bangyal, K. Nisar, A. Ibrahim et al., “Comparative analysis of low discrepancy sequence-based initialization approaches using population-based algorithms for solving the global optimization problems,” Applied Sciences, vol. 11, no. 16, article 7591, 2021.
37. W. Haider Bangyal, A. Hameed, J. Ahmad et al., “New modified controlled bat algorithm for numerical optimization problem,” Computers, Materials & Continua, vol. 70, no. 2, pp. 2241–2259, 2022.
38. W. H. Bangyal, R. Qasim, Z. Ahmad et al., “Detection of fake news text classification on COVID-19 using deep learning approaches,” Computational and Mathematical Methods in Medicine, vol. 2021, Article ID 5514220, 14 pages, 2021.
39. D. O. Oyewola, E. G. Dada, S. Misra, and R. Damaševičius, “A novel data augmentation convolutional neural network for detecting malaria parasite in blood smear images,” Applied Artificial Intelligence, pp. 1–22, 2022.