#### Abstract

This research applies metaheuristic optimization inspired by the Egyptian Vulture Optimization (EVO) technique. A biomedical image classification system is developed to manage the complex interaction of the hyperparameters of Convolutional Neural Networks (CNNs). These hyperparameters include the kernel type, kernel size, batch size, epoch count, momentum, learning rate, activation function, number of convolution layers, and dropout. The life cycle of the Egyptian vulture inspires the optimization technique used to resolve this complexity and increase the accuracy of the CNN. The proposed EVO-based CNN model was evaluated against ANN-based and deep learning-based classifiers on brain MRI image datasets. The results confirm the efficiency and performance of the proposed model, which achieved an average detection accuracy of 93% and an average precision of 95%.

#### 1. Introduction

Large volumes of image data have become readily available in recent years for biomedical image analysis and medical text processing, and many new techniques based on machine learning have evolved to handle them. Reliable analysis of images drawn from different clinical records, with consistency, confidentiality, transparency, and validation of the generated data, remains a challenging task. Within image analysis, brain MRI analysis is currently one of the most active research areas [1]. Biomedical data must be analyzed before it can be categorized for diagnostic purposes, and building an efficient model for such image classification is a crucial task for any machine learning approach. Recent works have favored CNN-based classification because of the CNN's strong feature extraction capability [2]. Using a CNN requires a set of parameters that must be tuned to the data manually, from the outside. These parameters help in building an efficient and accurate model within a given time window and are popularly known as hyperparameters; they define the variables of the CNN's network structure. These parameters serve image classification best when they can be set without external intervention [3]. Many optimization techniques, such as genetic and evolutionary algorithms, have proven fruitful for optimizing hyperparameters, but most recent research has focused on the optimization techniques themselves. The lacuna in recent works is tuning the hyperparameters of CNN-based models and validating the resulting configurations [4]. This matters in practical medical settings, where image classification works best when the information is authenticated and correctly assigned to classes. Therefore, in this work, we focus on hyperparameter selection for a tuned CNN architecture optimized for image classification [5]. 
We have implemented Egyptian Vulture Optimization (EVO), a nature-inspired algorithm, to obtain the best-suited optimized hyperparameter values.

Gradient-based optimization techniques in machine learning involve complex mathematical calculations and rely on a limited set of tools. These methods are also inconsistent in terms of exact function evaluation and are efficient only within that limited scope. To avoid such limitations, metaheuristic optimization techniques come into the picture: they are cost-friendly and simple to compute and experiment with. In this work, the biological features of vultures are modeled mathematically to tune the hyperparameters of the CNN [6].

The EVO-based algorithm describes the selection process for the hyperparameters tuned into the CNN network by optimizing the parameter variables obtained during the training procedure. The parameter variables are bounded by the upper and lower limits of the solution vector and are modeled mathematically to obtain the best-optimized hyperparameter values. The EVO technique obtains accurate optimized values by hybridizing the exploitation and exploration behaviors extracted from this metaheuristic optimization technique.

A large body of research on biomedical image classification for designing CAD-based image classification systems has been produced in recent years [1]. It covers the classification of MRI images and many other kinds of biomedical images [7].

Machine learning algorithms with careful execution and performance analysis are a widely used solution for all types of data analysis problems. It is important to note, however, that such an analysis is designed by selecting the tools and models that yield optimal values. Parameters are selected manually to drive the learning phase and to choose the classifier that attains those optimal values. Parameters are basically of two types: model parameters and hyperparameters. Model parameters are the neuron weights of an ANN model and are updated by the learning process [1]. Hyperparameters, in contrast, are set before the learning phase to configure how the ANN model is trained. Recent works have shown that a model with tuned hyperparameters can achieve measurably improved performance. A multispectral CNN-based method has been designed to obtain optimal values for CNN hyperparameters; it was tested on the Titan data along with classical CNN models such as VGG16 and ResNet50.

To regularize the hyperparameters of SVM, the gradient descent method has been applied to obtain the optimized value of a bilevel optimization model. The authors validated the proposed SVM strategy against a few existing gradient descent methods and showed that it achieves better accuracy in terms of forecasting and analysis [1].

The main contributions of this paper can be summarized as follows:

1. We present a thorough, cost-effective multiclass classification system that is capable of classifying brain tumor images from two modern biomedical image datasets (Glioma Brain MRI Images).
2. We develop a hybrid system that combines the Egyptian Vulture Optimization (EVO) algorithm with convolutional neural networks (CNNs) to resolve complexity and increase the accuracy of the CNN.
3. We present broad experimental outcomes and a comparative analysis against several AI-based models to provide more insight into the proposed hybrid system.

This paper is organized as follows. Section 2 discusses the findings of other researchers in this area and offers preliminary remarks based on the related work. The research methodology adopted for model implementation and evaluation is discussed in Section 3. Section 4 provides a detailed discussion of the research layout and experimental setup. Sections 5 and 6 explain the evaluation metrics, the dataset features, the parameter setup, and the analysis of the results. Section 7 concludes this work with remarks on future work.

#### 2. Related Work

Image classification is a recent development of the machine learning and deep learning era. Image classification and clustering techniques are used to solve problems in medical science [1], natural language processing (Haboush et al. [8]; Hammouri et al. [9]), and other engineering fields [10]. In medical science, identifying a brain tumor is a critical task that requires visual analysis, and it can be addressed by the image classification process. Many traditional image classification methods exist, and nowadays CNN is a popular technique. Rath et al. [1] experimented with both traditional methods and deep network techniques and found that the CNN model performs with better accuracy than the other models.

In Rath et al. [1], it was also found that the performance of CNN can be improved by hyperparameter tuning, which is a motivation for this work. Khairandish et al. [2] used a hybrid model of CNN and SVM to classify benign and malignant tumors from brain MRI images and found that the hybrid model performs with an accuracy of 98.49%. In the same vein, Alsaffar et al. [11] evaluated several classification models, including SVM, logistic regression, and nearest neighbors, to analyze X-ray images and spot abnormalities; SVM achieved the best performance.

El-Dahshan et al. [3] used two hybrid classifiers, namely a feed-forward backpropagation artificial neural network (FP-ANN) and k-nearest neighbor (k-NN), to classify MRI data, obtaining accuracies of 97% and 98% for FP-ANN and k-NN, respectively. Jiang & Siddiqui [12] tuned the parameters of support vector machines using stochastic gradient descent and dual coordinate descent to enhance SVM performance, which is a motivation for the hyperparameter tuning of CNN.

Yoo [13] focused on optimizing the hyperparameters of deep neural networks using a dynamic encoding algorithm; model performance improved after hyperparameter tuning. Similarly, Aszemi and Dominic [14] describe how genetic algorithms optimize hyperparameters to enhance the performance of CNN, while Cui and Bai [15] confirm that CNN performance is closely tied to an efficient setup of the model hyperparameters. Singh et al. [16] proposed a Multilevel Particle Swarm Optimization (MPSO) algorithm to find the architecture and hyperparameters of a CNN simultaneously.

Musallam et al. [17] presented a Deep Convolutional Neural Network (DCNN) architecture for effective diagnosis of brain tumors. The proposed model is lightweight, with a small number of convolutional and max-pooling layers and few training iterations, and the reported results indicated a detection accuracy of 97.72%. In the same vein, a lightweight deep neural network model for image classification was presented by Wang et al. [18]; their aim was to improve classification accuracy while reducing the number of parameters. Their model, Dense-MobileNet, utilizes the concept of dense blocks and reported an accuracy of 96.46%. An adaptive medical image classification method based on a CNN with adaptive momentum hyperparameter optimization was presented by Aytaç et al. [19]. This adaptive method reduced the classification error from 6.12% to 5.44%; accordingly, the detection accuracy increased to 95% compared to state-of-the-art CNN architectures on the same datasets.

Table 1 below summarizes the major related work referred to in this paper, with a focus on the implemented models, overall achieved results, and limitations.

Previous research has inspired this work to select a nature-inspired optimization algorithm to enhance the performance of CNN. Hence, the following preliminary remarks can be summarized:

1. Biomedical images such as CT scans and MRIs contain a large quantity of data about the underlying tissue architecture [21]. Interpolation and interpretation of these data are required to yield an exact evaluation and a correct diagnosis of disease.
2. This type of process often requires machine learning techniques such as classification in order to build an expert system that supports an accurate and reliable diagnosis of disease, which benefits the biomedical system.
3. With the many strategies developed for image classification, there are many techniques for analyzing an image. Biomedical images carrying important information about different parts of the human body, such as brain MRIs, cancer or tumor scans, and ultrasound reports, can be analyzed easily using machine learning techniques. For an accurate diagnosis, such images are analyzed by a radiologist and clinical reports are produced.
4. Computer-Aided Diagnosis (CAD) methodologies are developed and organized around various supervised or unsupervised machine learning techniques applied to image classification. Here, we have studied the implementation of some traditional machine learning strategies along with some deep learning-based methodologies [22]. In this work, the focus is on choosing the best values for the CNN hyperparameters to overcome the limitation of choosing them manually.
5. Recent works on bioimage classification have been done without validating the hyperparameters of CNN [23]. Consequently, many metaheuristic optimization methods have gained importance for tuning the hyperparameter values of CNN, and an EVO-based optimization technique can give a better solution for this biomedical image classification task.

#### 3. Methodology

In this work, we combine machine learning techniques with a metaheuristic method to obtain an optimized solution. Biomedical image classification raises issues and challenges associated with CAD-based systems: classifying the images and validating the key attributes of the classification. The inputs of this work are as follows: brain MRI images are collected from the Kaggle Repository, a well-known and authentic data repository. A machine learning pipeline that combines the EVO optimization technique with a CNN whose hyperparameters it tunes is trained to overcome the limitations posed by the huge volume of data. The proposed model can also be deployed in IoT healthcare diagnosis applications such as screening with real-time data [24]. A direct comparison is made against CNN and deep neural network classifiers, and the performance of the combined model is refined and checked for further improvements.

The authenticity of the proposed model has been validated by means of integrity, classification rate, precision, and accuracy. The error factor is reduced during the pretraining and testing phases on the datasets. To check the validity of the performance, evaluation metric graphs have been produced. The experiment is implemented phase-wise.

#### 4. Classifiers

In the machine learning scenario, classifiers are a set of algorithms that automatically assign data to one or more classes. Machine learning algorithms automate tasks that would otherwise be done manually, saving a large amount of time and energy and making the process efficient and easier. The classifiers can be divided into traditional ANN-based classifiers, namely the Multilayer Perceptron (MLP), the Extreme Learning Machine (ELM) [1], and ensemble learning [25]. In the case of ANNs, the neural networks combine behavior observed in different species with machine learning tasks.

An ANN comprises a large number of neurons organized into input and output units. The task of a neuron is to receive information from the input units, operate on it to produce an output, and forward the result to the next unit. ANN has two important variations, MLP and ELM, while the deep learning classifiers are CNN and DNN. Recent research favors deep neural network classifiers in order to improve performance.

##### 4.1. Multilayer Perceptron (MLP)

MLP is considered one of the simplest variations of ANN. It comprises an input layer, one or more hidden layers, and an output layer that produces the target output. An MLP can act as a function approximator and is used in various applications. However, a poor selection of the interconnection weights can disrupt the functional flow of the network architecture.

##### 4.2. Extreme Learning Machine (ELM)

ELM is one of the traditional ANN architectures, commonly known as a feed-forward neural network. It works in two steps: first, the input layer is initialized and the weights between the input layer and the hidden layer are assigned; second, the weights between the hidden layer and the output layer are calculated on the basis of the Moore–Penrose generalized inverse method. ELM networks require the fewest computations compared to other existing methods and are less time-consuming. The activation function is applied without requiring a gradient descent method to calculate the values at the hidden layer, although the choice of activation function affects the resulting network architecture. For *N* training samples (*x_j*, *t_j*) and *L* hidden nodes with activation function *g*, ELM networks can approximate any continuous nonlinear function *f*(*x*) by satisfying the condition

∑_{i=1}^{L} *β_i* *g*(*w_i* · *x_j* + *b_i*) = *t_j*, *j* = 1, …, *N*. (1)

The optimized value of the hidden layer is calculated by

*Hβ* = *T*, (2)

where *H* is the output matrix of the hidden layer connected with the output layer, *β* is the weight matrix of the hidden layer, and *T* is the target output matrix. The objective of ELM is to reduce the error in the model; therefore, the updated weight matrix *β* can be calculated by the following equation:

*β* = *H*^† *T*, (3)

where *H*^† is known as the Moore–Penrose generalized inverse of the hidden layer output.

In spite of the tremendous classification performance of ELM, it struggles with convolution-related issues such as image variations and object detection. The need for a very large number of hidden nodes is the biggest limitation of ELM, and the calculation of the output weight matrix is highly expensive.
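The two-step ELM procedure above can be sketched in a few lines of NumPy. This is an illustrative implementation under the stated assumptions (random input weights, ReLU activation, 1300 hidden nodes by default as in Table 3), not the exact code used in the experiments.

```python
import numpy as np

def elm_train(X, T, n_hidden=1300, seed=0):
    """Train a single-hidden-layer ELM: random input weights (never updated),
    ReLU activation, and output weights via equation (3), beta = pinv(H) @ T."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # step 1: random weights
    b = rng.standard_normal(n_hidden)
    H = np.maximum(0, X @ W + b)        # hidden-layer output matrix H
    beta = np.linalg.pinv(H) @ T        # step 2: Moore-Penrose solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Apply the trained ELM: recompute H and multiply by beta."""
    H = np.maximum(0, X @ W + b)
    return H @ beta
```

Because *β* is obtained in closed form, no iterative gradient descent is needed, which is the source of ELM's speed advantage.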

##### 4.3. Convolutional Neural Network (CNN)

The Convolutional Neural Network is another variation of ANN, whose structure is derived from the series of operations that transform the input into the output; the CNN architecture is illustrated in Figure 1. The CNN is inspired by the connectivity patterns of the visual cortex, which map well to the analysis of brain MRI images [23]. CNN networks are widely used for speech recognition, audio-visual image classification, object identification, etc. Exploiting the spatiotemporal behavior of the input and output captured by the special layers of the CNN is beneficial for image classification. The general architecture of the classifier consists of input layers, convolution layers, and pooling layers. The CNN takes the raw pixels of an image as input and produces class scores as output [26]. The connected hidden layers include pooling layers of smaller size, which allow a large number of images to be trained.

A convolution tool that separates and identifies the various features of the image for analysis is called Feature Extraction, while a fully connected layer utilizes the output from the convolution process and predicts the class of the image based on the features extracted in previous stages. Weight initialization is performed at the beginning using one of several techniques [27]. The main parts of the employed CNN architecture are as follows:

(a) Convolutional Layer: this is the first layer and is used to extract the various features from the input images. In this layer, the mathematical operation of convolution is performed between the input image and a filter of a particular size M×M. By sliding the filter over the input image, the dot product is taken between the filter and the part of the input image covered by it. The output is termed the feature map, which gives us information about the image such as its corners and edges. This feature map is then fed to subsequent layers, which learn further features of the input image.

(b) Pooling Layers: the primary aim of this layer is to decrease the size of the convolved feature map and thereby reduce computational cost. This is achieved by decreasing the connections between layers and operating on each feature map independently. Depending on the method used, there are several types of pooling. In Max Pooling, the largest element is taken from each region of the feature map. Average Pooling calculates the average of the elements in a predefined image section, and Sum Pooling computes the total sum of the elements in the section. The pooling layer usually serves as a bridge between the convolutional layer and the FC layer.

(c) Fully Connected (FC) Layers: the FC layer consists of weights and biases along with neurons and is used to connect the neurons between two different layers. These layers are usually placed before the output layer and form the last few layers of a CNN architecture. The feature maps from the previous layers are flattened and fed to the FC layer; the flattened vector then passes through a few more FC layers where the mathematical operations take place. At this stage, the classification process begins.

(d) Dropout: when all features are connected to the FC layer, the model is prone to overfitting the training dataset. Overfitting occurs when a model performs so well on the training data that its performance on new data suffers. To overcome this problem, a dropout layer is utilized, wherein a few neurons are dropped from the neural network during training, resulting in a reduced model size. With a dropout rate of 0.3, 30% of the nodes are dropped out randomly from the neural network.

(e) Activation Functions: finally, one of the most important parameters of the CNN model is the activation function. Activation functions learn and approximate any kind of continuous and complex relationship between the variables of the network. In simple terms, they decide which information should be fired forward through the network and which should not, adding nonlinearity to the network. Commonly used activation functions include ReLU, Softmax, tanh, and sigmoid, each with a specific usage: for a binary classification CNN model, sigmoid is preferred, while for multiclass classification, Softmax is generally used.

(f) The backpropagation algorithm together with gradient descent search is one of the most used training algorithms [28]. Different CNN architectures are used in research and industry in order to mitigate various limitations of ELM algorithms.
VGG-19, VGG-16, LeNet, and AlexNet are different types of CNN network architecture being used extensively nowadays.
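The convolution and pooling operations described in (a) and (b) can be illustrated with a minimal NumPy sketch: a single-channel "valid" convolution that slides an M×M filter and takes dot products, and a non-overlapping max pooling. Real CNN libraries implement vectorized, multichannel versions of the same idea; this is only a didactic sketch.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution: slide an MxM filter over the image and take
    the dot product at each position, producing the feature map."""
    m = kernel.shape[0]
    h, w = image.shape
    out = np.empty((h - m + 1, w - m + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + m, j:j + m] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keep the largest element of each block."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))
```

For a 4×4 image and a 2×2 filter, `conv2d` yields a 3×3 feature map, and pooling then halves each spatial dimension, exactly the size reduction described in (b).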

##### 4.4. Deep Neural Network (DNN)

The different types of neural networks present in machine learning systems are a key research area nowadays. The deep neural network is a conceptually simple type of neural network that is very powerful and easy to implement [12]. Its layers are bounded only by the computational power of the devices in the network, and its structure is helpful in solving complex problems. A single input layer, where the network architecture starts, is followed by a series of hidden layers [13]. The hidden layers are followed by the final output layer, which is responsible for yielding the output. Linear and nonlinear data transformations are carried out to produce the output of the overall process, and an activation function is applied at every hidden layer during the data transmission.

The backpropagation algorithm is one of the most common learning algorithms for training these kinds of neural networks [14]. The system architecture of a DNN is responsible for the convergence of the deep learning network with respect to nonlinear system processing. The activation function used to solve the problem is designed in such a way that the hyperparameters perform well with the interconnections in the network [15]. Many optimization techniques have evolved to solve deep learning problems through tuned hyperparameters. DNN can be treated as a double-edged solution for interconnecting networks when obtaining the optimal values of these parameters [29]. The network design along with the activation function can strongly affect the optimization results, while the initial values of the hyperparameters and the convergence speed of the network determine how good the final solution is. The output of the next layer can be calculated using (4) and (5):

*z*^(l+1) = *W*^(l) *a*^(l) + *b*^(l), (4)

*a*^(l+1) = *f*(*z*^(l+1)), (5)

where *a*^(l) is the input for the next layer and *b*^(l) is the bias, which connects the different layers.
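Equations (4) and (5) amount to the following layer-by-layer forward pass, sketched here with a ReLU activation chosen purely for illustration:

```python
import numpy as np

def relu(z):
    """Elementwise nonlinearity f in equation (5)."""
    return np.maximum(0, z)

def forward(a, weights, biases):
    """Forward pass through a DNN: for each layer l, compute
    z = W a + b (eq. 4) and then a = f(z) (eq. 5)."""
    for W, b in zip(weights, biases):
        a = relu(W @ a + b)
    return a
```

Each hidden layer thus applies a linear transformation followed by the activation, and the last layer's activations form the network output.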

CNN networks have many variations that solve a large range of problems [16]. For both single-class and multiclass classification, the output matrix is computed at the last layer of the network.

##### 4.5. Egyptian Vulture Optimization (EVO)

The EVO algorithm is designed around the biological features and activities of Egyptian Vultures observed in the wild [30]. The series of actions this bird takes while searching and hunting for food is quite unique compared to other birds. EVO is a metaheuristic optimization algorithm that was originally proposed to solve difficult arrangement problems, and it is organized around the behaviors the Egyptian Vulture uses to feed itself. The behavior of this winged animal translates into an efficient algorithm for solving different kinds of optimization problems. This vulture is famous for its unusual hunting capabilities: raw flesh is its main food, but it also readily eats the eggs of other species, using pebbles to break eggs with hard shells and using twigs to roll and move objects while hunting and collecting food. These two remarkable activities can be transformed step by step into an algorithm. In Figure 2, the phases of EVO are depicted graphically. The EVO algorithm can be formulated as follows:

Step 1: The solution string is initialized, with the parameters represented as variables.
Step 2: The string displays the set of parameters that represent a particular candidate solution.
Step 3: The conditions are checked for each refined variable against the constraints, which are superimposed.
Step 4: The pebbles used for breaking the eggs are tossed at randomly selected points of the string.
Step 5: Selected parts of the string are then rolled, as twigs are, on a multiroller basis.
Step 6: A selected portion of the solution is reversed in order to change the angle of attack.
Step 7: The fitness value of the candidate solution is evaluated.
Step 8: The stopping criteria are checked on a regular basis.

###### 4.5.1. Pebble Tossing

The EVO algorithm uses the biological behavior of the Egyptian Vulture, which breaks the hard eggs of other birds to obtain food: only after breaking the egg completely can the vulture eat. This pattern of action forms the basis of the operator's route-planning technique.

The operation is determined by two variables. The first is Pebble Size (PS), which corresponds to the level of occupying, where PS ≥ 0. The second is Force of Tossing (FT), which corresponds to the level of removal, where FT ≥ 0. Accordingly, if PS > 0, then "Get In"; otherwise, "No Get In." Likewise, if FT > 0, then "Removal"; otherwise, "No Removal." The level of occupying describes how many solution elements the pebble should carry in, and the level of removal describes how many elements are taken out of the solution set; hence, FT denotes the number of nodes removed [31]. Overall, four combinations of operations are possible: Case 1: Get In and No Removal; Case 2: No Get In and No Removal; Case 3: Get In and Removal; Case 4: No Get In and Removal [31].
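A minimal sketch of the pebble-tossing operator, assuming the solution is a list of nodes and that FT and PS directly give the number of elements removed and inserted; the insertion pool and the random positions are illustrative assumptions of this sketch.

```python
import random

def pebble_toss(solution, pool, ps, ft, rng=random):
    """Pebble tossing on a solution string: remove FT nodes at a random
    point ('Removal', FT >= 0) and insert PS nodes drawn from a candidate
    pool ('Get In', PS >= 0). PS = FT = 0 leaves the string unchanged
    (Case 2 above)."""
    s = list(solution)
    if ft > 0 and len(s) > ft:                     # Removal
        start = rng.randrange(len(s) - ft + 1)
        del s[start:start + ft]
    if ps > 0:                                     # Get In
        point = rng.randrange(len(s) + 1)
        s[point:point] = [rng.choice(pool) for _ in range(ps)]
    return s
```

The four cases fall out of the two `if` branches: both taken (Case 3), only the insertion (Case 1), neither (Case 2), or only the removal (Case 4).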

###### 4.5.2. Twigs Rolling

This vulture can find the weak points of twigs, which are movable in nature; after finding a weak point, it rolls the object toward the ground. The change of position in this activity corresponds, in the algorithm, to modifying old positions of the solution and setting new ones. The angle of rotation and the degree of modification affect the variables and parameters, which are formulated mathematically to obtain the desired solution [32]:

(1) DS = Degree of roll
(2) DR = Direction of rolling
(3) DR = 0 if the shift is right; DR = 1 if the shift is left
(4) DR = Left Rolling/Shift if RightHalf > LeftHalf; DR = Right Rolling/Shift if RightHalf < LeftHalf

where 0 and 1 are generated randomly [6].
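Interpreting twigs rolling as a cyclic shift of the solution string by DS positions in the direction given by DR, a minimal sketch looks as follows; this interpretation is an assumption based on the description above.

```python
def twigs_roll(solution, ds, dr):
    """Twigs rolling: cyclically shift the solution string by DS positions.
    DR = 0 rolls/shifts right, DR = 1 rolls/shifts left (the 0/1 choice
    is made randomly in the algorithm)."""
    n = len(solution)
    ds %= n
    if dr == 1:                                       # left roll/shift
        return solution[ds:] + solution[:ds]
    return solution[n - ds:] + solution[:n - ds]      # right roll/shift
```

Rolling by a multiple of the string length is the identity, so DS is taken modulo the string length.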

###### 4.5.3. Changes in Angle

The vulture can rotate the angle at which the pebble hits the egg. In the algorithm, this rotation is treated analogously to pebble tossing as a change of angle, which increases the chances of breaking the egg: the greater the chances, the harder the eggs it can break. A sequence of nodes can be completed by reversing the sequence of connected nodes, and these changed angles are treated as mutation steps. A local search operation decides which neighboring points to select, and the connected graphs depend on the number of nodes in the string. If a string is full of nodes and cannot accommodate pebble tossing, then Change of Angle is a good option for finding the solution [32].
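The change-of-angle step, treated as the reversal of a connected sub-sequence of the string as described above, can be sketched as:

```python
def change_angle(solution, i, j):
    """Change of angle: reverse the sub-sequence between positions i and j
    (inclusive), a mutation-like local-search step on the solution string."""
    s = list(solution)
    s[i:j + 1] = reversed(s[i:j + 1])
    return s
```

Unlike pebble tossing, this operator changes no values and preserves the string length, so it is always applicable, even to a "full" string.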

#### 5. Proposed Biomedical Image Classification Model

The biomedical image dataset is preprocessed and passes through the different phases of classification; the flow diagram is shown in Figure 3. Brain tumor MRI images are collected from the repository and given as input to the CNN model, which processes the images as described in Section 4. The hyperparameters control the CNN model, and the performance of the model depends on them. In this work, the EVO tunes the nine hyperparameters of the CNN to achieve better accuracy. Figure 4 depicts how the EVO tunes the CNN hyperparameters.
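One way to encode a candidate solution for EVO is as a string of the nine hyperparameters named in the abstract. The value ranges below are purely hypothetical placeholders for illustration; the actual ranges used in this work are those listed in Table 4.

```python
import random

# Hypothetical search space for the nine CNN hyperparameters; the real
# ranges are given in Table 4 of the paper.
SEARCH_SPACE = {
    "kernel_type":   ["uniform", "glorot", "he"],
    "kernel_size":   [3, 5, 7],
    "batch_size":    [16, 32, 64],
    "epochs":        [10, 20, 50],
    "momentum":      [0.5, 0.9, 0.99],
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "activation":    ["relu", "tanh", "sigmoid"],
    "conv_layers":   [2, 3, 4],
    "dropout":       [0.2, 0.3, 0.5],
}

def random_solution(rng=random):
    """Encode one candidate as a mapping from hyperparameter name to value;
    the EVO operators then perturb these values within the upper and lower
    bounds of each variable."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
```

Each candidate is trained briefly as a CNN configuration, and its validation accuracy serves as the fitness value evaluated in Step 7 of the algorithm.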

#### 6. Research Metrics and Empirics

In this section, the system configuration, parameters, and datasets used for all the classification models are discussed, along with the range of the hyperparameters for CNN.

##### 6.1. System Configuration

The experimental evaluation is carried out in the Google Colab environment, accessed from a 64-bit Windows 7 machine with an Intel i3 processor and 4 GB of RAM. Google Colab requires zero configuration and provides free access to GPUs for writing and executing Python in the browser, along with easy sharing. Colab also harnesses the full power of popular libraries for analyzing and visualizing data.

##### 6.2. Datasets used for Experimentation

In this work, two brain MRI datasets collected from the Kaggle Repository are considered for experimental evaluation and comparison. The first dataset, known as "Glioma," was published by Bhuvaji et al. [33] and consists of brain MRI images. The second dataset, Brain MRI Images for Brain Tumor Detection, is published and maintained by [34]. A detailed description of the two datasets is given in Table 2. The Glioma dataset contains 3264 samples: 826 glioma tumor samples are used for training and 100 for testing; 827 pituitary tumor samples are used for training and 74 for testing; 822 meningioma tumor samples are used for training and 115 for testing; and 395 no-tumor samples are used for training and 105 for testing. The second dataset contains 253 samples in two classes: the tumor-present class contains 155 samples, of which 108 are used for training and 47 for testing, and the no-tumor class contains 98 samples, of which 68 are used for training and 30 for testing.

##### 6.3. Parameters Selection

The CNN model has many predefined hyperparameters, which control the number of layers, the number of nodes in each layer, etc. [35]. Similarly, in MLP, ELM, and DNN, the hyperparameters control the output of the models [36–38]. As detailed in Table 3, the number of input nodes chosen for MLP and ELM is 2250. The MLP was built with the Keras API; therefore, its number of hidden layers, number of nodes, and activation function did not need to be chosen manually. The ELM architecture uses one hidden layer with 1300 nodes. The number of nodes in the output layer chosen for both MLP and ELM is four. The ReLU activation function is used for the input and hidden layers of the ELM, whereas Softmax is used for its output layer. Tables 4–7 present the CNN parameters, the hyperparameters to be tuned, the hyperparameters of CNN after tuning for the Glioma dataset, and the hyperparameters of CNN after tuning for the Brain MRI dataset.
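The hyperparameters named in the abstract (kernel size, batch size, epochs, momentum, learning rate, activation function, number of convolution layers, dropout) form the search space an optimizer such as EVO explores. A minimal sketch of such a space follows; the candidate values are assumptions for illustration, not the paper's exact ranges from Tables 4–7:

```python
import random

# Illustrative search space; candidate values are hypothetical.
search_space = {
    "kernel_size":   [3, 5, 7],
    "batch_size":    [16, 32, 64],
    "epochs":        [10, 20, 50],
    "momentum":      [0.5, 0.9, 0.99],
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "activation":    ["relu", "tanh", "sigmoid"],
    "conv_layers":   [2, 3, 4],
    "dropout":       [0.2, 0.3, 0.5],
}

def sample_configuration(space, rng=random):
    """Draw one candidate configuration, as a metaheuristic optimizer would
    when proposing a new solution to evaluate."""
    return {name: rng.choice(values) for name, values in space.items()}

config = sample_configuration(search_space)
print(config)
```

An optimizer like EVO would repeatedly propose such configurations, train the CNN under each, and keep the one minimizing the validation cost.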

#### 7. Proposed Model Comparison and Validation

This section discusses the performance evaluation of the proposed CNN with hyperparameters optimized by the EVO algorithm. The proposed hyperparameter-tuned CNN has been compared with variants of ANN-based and deep learning-based classifiers such as MLP, ELM, DNN, and CNN. The various performance measures based on classification accuracy have been recorded for both the Glioma and Brain MRI image datasets. The hyperparameter values that minimize the cost, as obtained by the proposed biomedical image classifier, yield good results in comparison to the other strategies considered for experimentation and comparison. Table 8 provides the detailed comparison results of MLP, ELM, DNN, and CNN against the proposed tuned model for every individual class. The comparison in the table considers the different output classes, including Glioma Tumor, Meningioma Tumor, No Tumor, and Pituitary Tumor. The table also compares the performance on the different tumor classes in terms of classification accuracy, precision, recall, specificity, f1-score, misclassification rate (MCR), false discovery rate (FDR), and true negative rate (TNR).
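The per-class measures in Table 8 all follow from one-vs-rest confusion-matrix counts. The formulas below are the standard definitions; the counts passed in are purely hypothetical, for illustration:

```python
def classification_metrics(tp, fp, tn, fn):
    """Per-class measures from one-vs-rest confusion-matrix counts."""
    total = tp + fp + tn + fn
    accuracy    = (tp + tn) / total
    precision   = tp / (tp + fp)
    recall      = tp / (tp + fn)        # sensitivity
    specificity = tn / (tn + fp)        # equals the true negative rate (TNR)
    f1          = 2 * precision * recall / (precision + recall)
    mcr         = (fp + fn) / total     # misclassification rate
    fdr         = fp / (tp + fp)        # false discovery rate
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1, "mcr": mcr, "fdr": fdr}

# Hypothetical counts for one tumor class.
m = classification_metrics(tp=90, fp=5, tn=190, fn=10)
print(round(m["accuracy"], 3))  # 0.949
```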

Figure 5 provides the comparison results of MLP, ELM, DNN, and CNN against the proposed tuned model over the Glioma dataset as a whole. The comparison considers two dimensions: (a) quality factors, which compare the positive performance indicators such as accuracy, precision, recall, specificity, and F-score; (b) error rates, which compare the negative performance indicators such as MCR, FDR, and TNR. It can be clearly seen that the hyperparameter-tuned CNN outperforms the other models, scoring the highest on the quality indicators and the lowest on the error rates.

Table 9 reports the observed average training and testing accuracy of the hyperparameter-tuned CNN for the Glioma and Brain MRI datasets. The experimental simulations exhibit improved classification performance for the hyperparameter-tuned CNN when trained and tested on the Glioma dataset, providing 98.7% classification accuracy.

The area under the ROC curve represents the accuracy of the classification models [39]. From Figures 6 and 7, it is observed that the hyperparameter-tuned CNN covers more area than the other models. Therefore, it can be concluded that the hyperparameter-tuned CNN performs better than the other experimented models.
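The area under the ROC curve can be computed directly from classifier scores and binary labels via the Mann-Whitney rank formulation (the probability that a randomly chosen positive sample is scored above a randomly chosen negative one). A minimal sketch, with hypothetical toy scores:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank statistic."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count pairs where the positive outranks the negative; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.2]
print(auc(labels, scores))  # 8/9, i.e. ~0.889
```

An AUC of 1.0 would mean every positive is ranked above every negative; 0.5 corresponds to random ranking, which is why a larger covered area indicates a better classifier.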

#### 8. Conclusion and Future Scope

This work presented an efficient classifier model focused on detecting the existence of cancer cells. Once a cancer cell is detected, the second step is to find the category of cancer. This work proposed an optimized CNN based on the EVO approach, which was compared with variants of ANN-based and deep learning-based classifiers such as MLP, ELM, DNN, and CNN. All models were evaluated on two up-to-date datasets of brain MRI images (the Glioma and Brain MRI image datasets). The results achieved confirm the high performance of the CNN model after optimization and tuning of its hyperparameters based on the EVO technique. These results motivate further optimization of the hyperparameters of CNN and the integration of additional deep learning models for improved detection performance. In the future, we will consider tuning our proposed model to perform high-performance classification tasks for other medical images, such as lung cancer images and phasic dopamine releases [40]. Besides, the proposed system can be customized to provide advance detection with high accuracy for several other health risks, such as breast cancer detection [41], tuberculosis disease diagnosis [42], and early-stage diabetes risk prediction [43]. We will also seek to develop a comparative study on the use of CNN with several other metaheuristic algorithms [20], such as particle swarm optimization (PSO) and the Cuckoo Optimization Algorithm (COA) [42].

#### Data Availability

The Glioma data used to support the findings of this study have been deposited in the Cancer Data Access System repository (https://cdas.cancer.gov/datasets/plco/16/).

#### Conflicts of Interest

The authors declare that they have no conflicts of interest.