Swarm Intelligence and Neural Network Schemes for Biomedical Data Evaluation
Research Article | Open Access
Zeynab Nasr Isfahani, Iman JannatDastjerdi, Fatemeh Eskandari, Saeid Jafarzadeh Ghoushchi, Yaghoub Pourasad, "Presentation of Novel Hybrid Algorithm for Detection and Classification of Breast Cancer Using Growth Region Method and Probabilistic Neural Network", Computational Intelligence and Neuroscience, vol. 2021, Article ID 5863496, 14 pages, 2021. https://doi.org/10.1155/2021/5863496
Presentation of Novel Hybrid Algorithm for Detection and Classification of Breast Cancer Using Growth Region Method and Probabilistic Neural Network
Abstract
Mammography is a significant screening test for early detection of breast cancer, which increases the patient's chances of complete recovery. In this paper, a clustering method is presented for detecting the locations and areas of breast cancer tumors. To implement the clustering method, we used the growth region approach, which detects similar nearby pixels. To find the best initial point for detection, it is essential to remove human interaction from the clustering. Therefore, in this paper, the FCM-GA algorithm is used to find the best point for starting growth. The results are compared with the manual selection method and the Gaussian mixture model method for verification. Classification is then performed to diagnose the breast cancer type in two primary datasets, MIAS and BI-RADS, using GLCM features and a probabilistic neural network (PNN). The clustering results show that the presented FCM-GA method outperforms the other methods, achieving a clustering accuracy of 94%, the best among the approaches used in this paper. Furthermore, the results show that the PNN method achieves high accuracy and sensitivity on the MIAS dataset.
1. Introduction
Breast cancer is a deadly and frequent illness that affects people all over the world. In the next 20 years, the number of new breast cancer patients is expected to increase by 75 percent. Consequently, according to the WHO in 2019, precise and early detection plays a critical role in improving diagnosis and increasing the survival rate of patients with breast cancer from 20% to 60%. Tumors come in various forms that must be identified independently, since each might lead to different treatment options and prognoses [1]. To aid oncologic decision-making, cancer categorization strives to give an accurate diagnosis of the illness and a prognosis of tumor activity. Traditional breast cancer categorization, which is mainly focused on clinicopathologic aspects and the use of routine biomarkers, may not represent the wide range of clinical outcomes experienced by individual breast cancers. The biology that underpins cancer genesis and progression is complex. Recent high-throughput technology results have added to our understanding of breast cancer's underlying genetic changes and biological processes [2].
Mammography is the most effective method for the early detection of breast cancer [3]. However, retrospective review reveals that many lesions visible on a mammogram are overlooked by radiologists, for various reasons such as poor quality of the mammogram image, benign appearance of the lesions, and eye fatigue or inattention. Utilizing diagnostic approaches in the early stages of cancer development can be very effective and essential for patient treatment, as early diagnosis helps doctors treat patients and significantly reduces patient mortality. Examination of breast tumors has a special place in the initial diagnosis of breast cancer [4]. Visual diagnosis is prone to error, and the radiologist may fail to identify a tumor. Therefore, an image processing system able to extract features that the human eye cannot detect, or detects only with low accuracy, can be very useful. A tumor is an abnormal mass of cells. Tumor cells grow for reasons that are still unknown, and they grow regardless of the body's needs [5]. Moreover, because they absorb nutrients intended for normal cells from the blood, they are often harmful to the body. Tumors are also called neoplasms. Body tissues are permanently repaired and replaced with new cells following injury or damage caused by natural cell depletion. Therefore, in general, growth and repair depend on the body's needs. Specific organs can grow in size (hypertrophy) or increase the number of cells (hyperplasia) if the organ is required to do more than its capacity [6].
A breast cancer diagnosis can help physicians treat patients and significantly reduce mortality; it increases the 5-year survival rate of patients with this cancer from 14% to 49% [7]. It is very important to screen for breast cancer and to diagnose the tumor quickly and accurately, because visual diagnosis is prone to error and the radiologist may not detect the tumor. An image processing system with high feature extraction power for detecting tumors can therefore be very useful. The reduction in breast cancer mortality achieved through screening could be even more significant: investigations have revealed that radiologists fail to identify a remarkable number of breast cancer cases, and these missed cases represent failures of mammography screening. Lesions may be missed because they are barely visible in the images or because no symptoms are present. For this reason, computer-aided diagnosis (CAD) systems are being developed. These methods use pattern recognition approaches to find image features that characterize the location of breast cancer tumors and present suspicious areas to the radiologist, assisting the radiologist during the examination [8]. Most CAD systems also make diagnostic errors. However, there is evidence that CAD can enhance the radiologist's ability to interpret and detect lesions. Although a small number of recent studies indicate that the performance of existing commercial CAD systems still needs development, they can meet the needs of imaging centers and clinics. Therefore, improving the performance of CAD systems is a crucial issue for investigation, and future developments remain [9].
The convolutional neural network (CNN) is a multilayer system that recovers features from raw input and represents them in a hierarchical structure [10]. Convolution layers, fully connected (FC) layers, pooling layers, and an output layer are among the layers that make up a deep neural network (DNN) [11]. A convolution layer is beneficial for learning high-level characteristics such as the edges of an image. FC layers are used to learn pixel-by-pixel characteristics. A pooling layer can reduce the number of convolved features, lowering the amount of computing power required; max pooling and average pooling are two operations that this layer may execute [12, 13]. There are two types of CNNs used for breast image or data classification: de novo trained models and transfer learning-based models. The term "de novo model" refers to CNN-based models created and trained from the ground up [14]. In contrast, transfer learning networks are CNN models that reuse previously trained neural network models such as AlexNet, the visual geometry group (VGG) network, and residual neural networks [15, 16].
This study aims to cluster the breast cancer area using the region growth method in combination with the FCM-GA approach. The results are compared with the manual selection method and the Gaussian mixture model method for verification. In the second part of the paper, classification is performed to diagnose the breast cancer type in two datasets, MIAS and BI-RADS, using GLCM features and a probabilistic neural network (PNN).
2. Literature Review
Veena and Padma [17] preprocessed the input image with a median filter to reduce image noise. They then used the Gaussian mixture model (GMM), one of the well-known clustering algorithms, for image segmentation and finally applied a probabilistic neural network (PNN) classifier to features extracted with the gray-level co-occurrence matrix (GLCM) algorithm, classifying cases into three categories: benign, malignant, and normal. Punitha et al. [18] used an intelligent artificial bee colony and Improved Monarch Butterfly Optimization technique (IABC-EMBOT) to detect breast cancer. The method offers good speed and accuracy: classification accuracy is 97.53%, sensitivity is up to 96.75%, specificity is up to 97.04%, and the average processing time is 113.42. Sakri et al. [19] presented a feature selection method for predicting the recurrence of breast cancer. The selection method is Particle Swarm Optimization (PSO), used with three different classifiers: KNN, NB, and a fast decision tree. Among the 34 features, the proposed method chooses the best subset and improves the accuracy of all three classifiers: KNN accuracy improved from 70% to 81%, NB from 76% to 80%, and the fast decision tree from 66% to 75%.
Karthik et al. [20] used deep neural networks (DNNs) to learn data characteristics, categorizing breast cancer data using multiple-layer DNNs. Experimental results show that the accuracy obtained from this system is 97.66%, with a sensitivity slightly less than 0.98. The deep network designed in that study is for breast cancer datasets only. Unni et al. [21] used a global thresholding method to estimate the pectoral muscle boundary, then applied morphological methods to correct the extracted area boundaries and a mean filter to eliminate noise. The GLCM algorithm is used to extract features. A subset of these features that provides the best classification rate is then selected using a genetic algorithm. Finally, a support vector machine (SVM) classifier is used to distinguish benign and malignant cancers.
Selvathi and Poornila [22] propose a global thresholding method for extracting breast boundaries, in which images are converted to binary using a fixed threshold value of 18. Each connected component with a significant number of pixels is considered to be the breast area. The region boundary is then smoothed by morphological filtering operations using a disk of radius 5 pixels. Sasikala and Ezhilarasi [23] also proposed a global thresholding method for the extraction of breast boundaries. The noise of the 8-bit image is reduced by a mean filter, and the image contrast is improved by the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm. The images are then converted to binary images with a fixed threshold value, followed by morphology-based filtering operations to eliminate small background objects. Results reported for this study included a maximum accuracy of 97.1%, a sensitivity of 98.8%, and a specificity of 95.4%. Heidari et al. [24] took mammographic image features and created an optimal classification model to estimate the risk of breast cancer. The analyzed dataset contains 500 cases, divided into 50% high risk and 50% low risk. To predict the risk of a cancer diagnosis, they proposed an LPP model that combines several features to reduce the dimensions of the feature space. Unlike typical feature selection techniques, which select a set of optimal features from the primary feature pool, LPP creates a new optimal feature array containing features different from those in the pool, which ultimately produced a 9.7% rise in risk prediction accuracy.
Tariq et al. [25] conducted a study to classify mammographic images of breast cancer. The GLCM algorithm was used to extract texture-type features from the images, and a smaller, individually selected feature set was evaluated in addition to the whole set of features. 60% of the data was used for training, 20% for validation, and 20% for testing. Using an ANN as the classifier, this study achieved 99% accuracy in the image recognition process. Kashyap et al. [26] used a partial differential equation-based process to extract the breast area from mammogram images, together with dark masking and median filtering, and mapped suspicious anomalies with fuzzy c-means clustering. To calculate the texture characteristics of suspected segmented masses, rotated and standard local binary patterns were computed. Finally, support vector machines with polynomial, radial basis function, multilayer perceptron, and linear kernels were used to classify suspicious regions as abnormal or normal.
Chowdhary et al. [43] used the fuzzy c-means bounded probability (IPFCM) method to overcome the drawbacks of traditional methods, namely noise sensitivity and random clustering, and used a fuzzy histogram algorithm for initial preprocessing of mammographic images. Finally, extraction, classification, and validation are performed to assist the radiologist in tumor diagnosis.
Rampun et al. [44] used a pretrained and modified version of AlexNet, fine-tuned on the CBIS-DDSM mammography image database. The AlexNet architecture was used with different parameters and more advanced functions, such as PReLU instead of ReLU. The experimental results of this study are a classification accuracy (ACC) of 80.4% and an area under the curve (AUC) of 0.84. Kim et al. [45] evaluated the feasibility of using a data-driven imaging biomarker (DIB-MG) to characterize a deep CNN algorithm on mammographic images, including normal and benign classes. Using the algorithm, they achieved a sensitivity of 76% and a specificity of 89%.
Zhang et al. [46] used a neural network algorithm to classify mammographic images. They evaluated ten different CNN architectures and concluded that combining data augmentation methods (turning each original image into eight images) and a circular neural network improves classification performance; the area under the ROC curve is 0.73. Salama et al. [47] developed a new computer-aided diagnosis (CAD) system to diagnose breast cancer in digital mammography. They used the WBCT algorithm to extract features, and the GA-SVM-MI optimization technique was used to select the optimal feature set. In this algorithm, the classification accuracy is 97.5% for normal-abnormal cases and 96% for benign-malignant cases (see Table 1).

3. Methods and Materials
3.1. Data Collection
MIAS (Mammographic Image Analysis Society), a UK research group engaged in understanding mammograms, has developed a database of digital mammograms. Films taken from the UK National Breast Screening Programme were digitized with a Joyce-Loebl scanning microdensitometer to a 50-micron pixel edge, a device linear in the optical density range 0-3.2, with 8 bits representing each pixel. The list is split into film pairs, where each pair reflects a single patient's left (even filename numbers) and right (odd filename numbers) mammograms. The file resolution is 1024 × 1024 pixels for all images, and the images have been centered in the matrix. When calcifications are present, center positions and radii refer to clusters rather than individual calcifications. The bottom-left corner is the origin of the coordinate system. In some cases, calcifications are widely dispersed throughout the image rather than concentrated at a single position; center positions and radii are unsuitable in these situations and were removed [48].
The second dataset is a BI-RADS data collection intended to standardize reporting on breast imaging and minimize uncertainty in the interpretation of breast images. It also promotes monitoring of outcomes and evaluation of quality, providing a lexicon of systematic terminology, chapters on report organization, and guidance for use in everyday practice for mammography, breast ultrasound, and MRI. A database of clinical mammograms comprising images from 60 patients was taken from mammogram screening centers. This real-time database includes a broad range of cases that are difficult for radiologists to classify. The clinical mammograms obtained from screening clinics were positive for the presence of abnormalities. Initially, as input, we take a 2D mammogram image of size M × N and apply the average filter to it. The pictures comprise 20 benign, 20 malignant, and 20 normal images of the breast.
3.2. Growth Region Algorithm
This approach divides the image into distinct regions based on the resemblance or homogeneity of neighboring pixels; the pixels in each region are similar with respect to particular parameters, such as color and intensity. Histogram-based image clustering approaches concentrate only on the distribution of image pixels at the gray level, whereas region growing techniques exploit the fact that near-gray levels are often present in the surrounding pixels.
Area-based methods are performed as follows: (1) The initial seeds are regarded as the starting point of the algorithm. (2) The region starts to develop from these seeds; pixels similar to the initial pixels are inserted into this region. (3) When the region's growth ends, the next seed is considered, and the following region's growth continues. (4) These steps continue until every pixel in the image belongs to some region.
The following measures refer to the growth method of the region (Figure 1).
Step 1: select initial seeds. The initial points to start the algorithm must normally be inserted manually; in the manual process, the algorithm begins with the user choosing the initial points. Several techniques automatically extract the initial points, for instance, the use of a random walk algorithm to identify the first points. To pick the initial points, this study proposes an algorithm based on the FCM-GA method. The idea is first to perform clustering with the fuzzy clustering algorithm, which is characterized by the membership grade M and the cluster centers C. Then, through the genetic algorithm, the appropriate values of these parameters are obtained by minimizing the target function; the error performance criterion E and the update relations for m and C follow the standard FCM formulation described in Section 3.3.
Step 2: determine the similarity of regions. After the initial points have been defined in the previous stage, a similarity criterion between regions is selected. This criterion is used to evaluate the resemblance of new pixels to the pixels of a region and decides whether a new pixel is assigned to the corresponding area.
The standard deviation criterion is one such similarity criterion: a new pixel is added to the region when its intensity lies within X standard deviations of the region mean. Here, X determines how much variation the region tolerates; since about 99.7% of the values in a region fall within three standard deviations of its mean, X is usually set to 3. The smaller X is, the fewer points each region contains, and the image splits into more regions. The threshold criterion is another commonly used criterion: the average of the region is determined, and a new pixel is added to the region when the gap between the pixel value and the region average is less than a defined limit. For color images (red, green, and blue), this condition must hold for all three layers before a pixel is connected to the region.
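The two criteria above can be stated compactly; the symbols here (pixel value p, region mean \mu_R, region standard deviation \sigma_R, threshold T) are our own notation, since the original equations are not reproduced:

```latex
% Standard deviation criterion: pixel p joins region R when
\lvert p - \mu_R \rvert \le X\,\sigma_R, \qquad X = 3 \text{ (typically)}
% Threshold criterion: pixel p joins region R when
\lvert p - \mu_R \rvert < T
```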
Step 3: grow the region. After choosing the initial seeds and the similarity criterion, the region's growth is carried out: starting from the initial seeds, adjacent pixels are examined and, if they satisfy the criterion, added to the region.
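The three steps above can be sketched as follows. This is a minimal illustration assuming the standard deviation criterion with running region statistics; the 8-connected neighborhood, the X factor, and the minimum STD floor are illustrative choices, not the paper's exact parameters:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, x_factor=3.0, min_std=5.0):
    """Grow a region from `seed` (row, col), adding 8-connected
    neighbours whose intensity lies within x_factor standard
    deviations of the running region mean."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    # Running statistics of the region (updated as pixels are added).
    n, mean, m2 = 1, float(image[seed]), 0.0
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                    val = float(image[nr, nc])
                    std = max(np.sqrt(m2 / n), min_std)
                    if abs(val - mean) <= x_factor * std:
                        mask[nr, nc] = True
                        frontier.append((nr, nc))
                        # Incremental (Welford-style) mean/variance update.
                        n += 1
                        delta = val - mean
                        mean += delta / n
                        m2 += delta * (val - mean)
    return mask
```

Growth stops automatically once no frontier pixel has an admissible neighbor, which corresponds to the end of a region's growth in step (3).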
3.3. Fuzzy C-Means (FCM)
FCM, introduced by Bezdek et al. (1984), is a classification algorithm based on minimizing an objective function in which the fuzzifier m (m > 1) controls the degree of fuzziness of the classification. U denotes the membership of each data point in each class center, and d is the distance between a data point and the center of a class. U is constrained so that, for each data point, the memberships across all classes sum to one. For each class, the membership function and the class center are then obtained from the corresponding update relations.
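The FCM objective and update relations referred to above can be written in the conventional Bezdek notation (the paper's own equations are not reproduced here), with data points x_i, cluster centers c_j, memberships u_{ij}, and fuzzifier m > 1:

```latex
J_m = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{\,m}\, \lVert x_i - c_j \rVert^2,
\qquad \sum_{j=1}^{C} u_{ij} = 1 \ \ \forall i
```

```latex
u_{ij} = \left[ \sum_{k=1}^{C}
    \left( \frac{\lVert x_i - c_j \rVert}{\lVert x_i - c_k \rVert} \right)^{\frac{2}{m-1}}
  \right]^{-1},
\qquad
c_j = \frac{\sum_{i=1}^{N} u_{ij}^{\,m}\, x_i}{\sum_{i=1}^{N} u_{ij}^{\,m}}
```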
3.4. Genetic Algorithm (GA)
The genetic algorithm (GA) is an optimization tool based on the Darwinian evolutionary rule. In each step of applying the GA, a set of search points is processed stochastically. Every point is encoded as a sequence of characters, and genetic operators (selection, crossover, and mutation) act on these sequences to obtain new points in the search space. Finally, the likelihood of each point's presence in the next generation is determined by its objective function value.
The fitness function used in this research is defined as follows. It is determined by the difference between the reference segmented database image and the image acquired by the region growing method starting from an initial random point, where the comparison is carried out over the gray, white, and black layers of the two segmented images.
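The GA search loop described above can be sketched minimally as follows. This is an illustrative real-valued GA (elitist selection, uniform crossover, Gaussian mutation); the operator settings are ours, and in the paper the fitness would measure the segmentation difference just described rather than the toy function shown in the usage note:

```python
import random

def genetic_search(fitness, bounds, pop_size=100, generations=50,
                   mutation_rate=0.2, seed=0):
    """Minimal real-valued GA: elitist selection, uniform crossover,
    Gaussian mutation. `fitness` is minimized over box `bounds`."""
    rng = random.Random(seed)
    lo, hi = zip(*bounds)
    pop = [[rng.uniform(l, h) for l, h in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        elite = scored[: pop_size // 2]          # survivors
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            # Uniform crossover: each gene comes from either parent.
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            if rng.random() < mutation_rate:
                # Gaussian mutation of one gene, clamped to the bounds.
                i = rng.randrange(len(child))
                step = rng.gauss(0, (hi[i] - lo[i]) * 0.1)
                child[i] = min(hi[i], max(lo[i], child[i] + step))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```

For example, minimizing the sphere-like function (x - 3)^2 + (y + 1)^2 over [-10, 10]^2 converges near (3, -1).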
3.5. Adaptive Median Filter
The adaptive median filter classifies each pixel of the image as noise by comparing it with its surrounding pixels. The size of the neighborhood, as well as the comparison threshold, is adjustable. A pixel that differs from most of its neighbors, and is not structurally aligned with those pixels to which it is similar, is labeled as impulse noise. The objectives are (1) deleting impulse noise, (2) smoothing other noise, and (3) reducing distortion such as excessive thinning or thickening of object boundaries [49].
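A sketch of an adaptive median filter along these lines: the window around each pixel grows until its median is no longer an extreme value, and only pixels judged to be impulses are replaced. The growing-window scheme follows the classic textbook formulation; the window sizes are illustrative, not the paper's settings:

```python
import numpy as np

def adaptive_median(image, max_window=7):
    """Adaptive median filter: the window around each pixel grows
    until its median is not an extreme value (impulse); a centre
    pixel judged to be impulse noise is replaced by that median."""
    img = image.astype(float)
    out = img.copy()
    h, w = img.shape
    pad = max_window // 2
    padded = np.pad(img, pad, mode='edge')
    for r in range(h):
        for c in range(w):
            for k in range(1, pad + 1):
                win = padded[r + pad - k:r + pad + k + 1,
                             c + pad - k:c + pad + k + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax:            # median is not an impulse
                    if not (zmin < img[r, c] < zmax):
                        out[r, c] = zmed          # centre pixel is an impulse
                    break
                # otherwise enlarge the window and try again
            else:
                out[r, c] = zmed                  # max window reached
    return out
```

Because non-impulse pixels are left untouched, fine detail is preserved better than with a plain median filter of fixed size.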
3.6. Gaussian Mixture Model
The value of a pixel in an image (i.e., its intensity or color) can be treated as a random variable. Since every random variable has a probability distribution, pixel values also follow a probability distribution [50]. A reasonable probability distribution for the pixel values of an image is the Gaussian mixture distribution. We presume that the image is partitioned into K classes, where each class k is characterized by its mean, variance, and mixing likelihood.
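The Gaussian mixture distribution referred to above, written in standard notation (mixing weights \pi_k, class means \mu_k, and variances \sigma_k^2; the paper's own equation is not reproduced):

```latex
p(x) = \sum_{k=1}^{K} \pi_k\, \mathcal{N}\!\left( x \mid \mu_k, \sigma_k^2 \right),
\qquad \sum_{k=1}^{K} \pi_k = 1,
```

```latex
\mathcal{N}\!\left( x \mid \mu, \sigma^2 \right)
  = \frac{1}{\sqrt{2\pi\sigma^2}}
    \exp\!\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)
```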
3.7. Expectation-Maximization Algorithm
In its setup, the EM algorithm is very similar to K-means. Likewise, choosing the input partitions is the first step; in this case, to make the comparison of findings more realistic, the same initial partitions as used in the K-means color segmentation were employed, and RGB color was again chosen as the comparison parameter. The EM cycle starts with an Expectation step described by the following equation [51]:
This equation states that the expectation weight of pixel x_i with respect to partition j equals the probability of x_i under partition j divided by the sum of the same probability over all k partitions, which yields the expression for the weights; the second expression is the covariance of the pixel data for each partition. Once the E step has been executed and every pixel has an expectation weight for each partition, the M step, or maximization step, begins. The following equation defines this step:
This equation indicates that the value of partition j is updated to the weighted average of the pixel values for that partition, where the weights are those of the E step. This EM loop is repeated for each new set of partitions until, as in the K-means algorithm, the partition values no longer shift by a significant amount.
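The E and M steps described above can be sketched self-containedly for a 1-D intensity mixture (the paper applies the loop to RGB partitions; the deterministic initialization and fixed iteration count here are illustrative simplifications):

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """EM for a 1-D Gaussian mixture over pixel intensities x.
    E step: per-sample responsibilities; M step: weighted updates
    of mixing weights, means, and variances."""
    mu = np.linspace(x.min(), x.max(), k)       # spread initial means
    var = np.full(k, x.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E step: responsibility of each component for each sample.
        dens = (pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M step: weighted averages using the E-step responsibilities.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    labels = resp.argmax(axis=1)
    return mu, var, pi, labels
```

On well-separated intensity clusters, the recovered means converge to the cluster averages, mirroring the weighted-average update of the M step.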
3.8. Hidden Markov Random Field Model (HMRFM)
HMRFM is defined by a sequence of observations generated as random processes by an underlying Markov process whose state sequence cannot be observed directly. Each observation is presumed to be a stochastic function of the state sequence, and the underlying Markov chain changes its state according to a transition probability matrix. HMMs have been effectively used in speech recognition and handwritten script recognition [52]. Since the original HMMs were created as 1D Markov chains, they cannot be applied directly to 2D/3D problems such as image segmentation with first-order neighborhood structures. Here, we consider an HMM variant in which the underlying stochastic process is a Markov random field rather than a Markov chain and is thus not confined to 1D. We call this particular case an HMRF model. Mathematically, an HMRF model is characterized by the following: (i) Hidden Random Field (MRF). The hidden states take values in a finite state space with a probability distribution; the random field X is an underlying MRF whose configuration is not observable. (ii) Observable Random Field. Y is a random field with a finite state space D. Given any particular configuration x_i, each Y_i has the conditional probability distribution p(y_i | x_i) with its associated parameters. This distribution is termed the emission probability function, and Y is often referred to as the emitted random field. (iii) Conditional Independence. Given the hidden field, the random variables Y_i are conditionally independent.
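The conditional independence property in (iii) can be written in the standard HMRF form, with hidden configuration x and observed field y (our notation, since the original equation is not reproduced):

```latex
p(\mathbf{y} \mid \mathbf{x}) = \prod_{i} p\!\left( y_i \mid x_i \right)
```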
3.9. Probabilistic Neural Network (PNN)
The PNN is a feed-forward neural network widely employed in diagnosis and pattern recognition algorithms. This form of ANN is derived from kernel Fisher discriminant analysis. Within a PNN, the operations are organized into the four layers of a multilayered feed-forward network: input, pattern, summation, and output layers (see Figure 2).
PNN is also utilized in classification problems [53]. When an input is presented, the first layer measures the distance from the input vector to the training input vectors. The pattern layer computes the contribution of each class of inputs and produces its net output as a vector of probabilities. Finally, a competitive transfer function on the output of the second layer selects the maximum of these probabilities, producing a binary indication for the target class and the nontarget classes, respectively. Each neuron in the input layer represents a predictor variable; for categorical variables with N categories, N-1 neurons are used. The pattern layer includes one neuron for every case in the training set, storing the values of the input variables for the case along with the output value. In the summation layer, the pattern neurons' values are added for the class they represent. In the output layer, the weighted sums accumulated in the summation layer are compared for each target category, and the maximum value is used to predict the target.
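The four layers described above can be sketched directly: every training sample is a Gaussian (Parzen) kernel in the pattern layer, the summation layer averages kernels per class, and the output layer takes the argmax. The smoothing parameter sigma and the data in the usage example are illustrative, not the paper's configuration:

```python
import numpy as np

def pnn_classify(train_x, train_y, test_x, sigma=1.0):
    """Probabilistic neural network: pattern layer = one Gaussian
    kernel per training sample; summation layer = mean activation
    per class; output layer = competitive argmax."""
    classes = np.unique(train_y)
    preds = []
    for x in test_x:
        # Pattern layer: kernel activation for every training sample.
        d2 = ((train_x - x) ** 2).sum(axis=1)
        act = np.exp(-d2 / (2 * sigma ** 2))
        # Summation layer: mean activation per class; output: argmax.
        scores = [act[train_y == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```

For example, with two well-separated 2-D clusters as training data, test points near each cluster are assigned to that cluster's class.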
3.10. ROC Curve
To evaluate the outcome of binary classification, sensitivity and specificity are both used as statistical measures. When the data can be separated into positive and negative classes, the consistency of the outcomes of a test that separates the data into these two categories can be observed and described using sensitivity and specificity metrics. Sensitivity is the percentage of positive cases that are correctly identified as positive; specificity is the percentage of negative cases that are correctly labeled as negative. In statistical terms, sensitivity is the number of true-positive cases divided by the sum of true-positive and false-negative cases, and specificity is the number of true-negative cases divided by the sum of true-negative and false-positive cases. The sensitivity and specificity of a test depend on the quality and the type of the test utilized; however, it is not possible to describe the outcome of a test using sensitivity and specificity alone.
The ROC curve is a plot displaying the diagnostic capability of a binary classifier system as its discrimination threshold is varied. The ROC curve is formed by plotting the true-positive rate against the fallout, or false-positive rate, at different threshold settings. The true-positive rate is also known as sensitivity, recall, or probability of detection. Accuracy, specificity, and precision are further performance analysis parameters.
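The measures named above follow directly from confusion matrix counts; this is a straightforward restatement of the definitions, not the paper's code:

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity (TPR), specificity (TNR), fallout (FPR),
    precision, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    fallout = fp / (fp + tn)          # 1 - specificity
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, fallout, precision, accuracy
```

For instance, tp = 90, fn = 10, tn = 80, fp = 20 gives sensitivity 0.90, specificity 0.80, fallout 0.20, and accuracy 0.85.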
4. Results and Discussion
4.1. Presented Approach
The diagram of the presented approach is shown in Figure 3. We used the growth region technique for tumor detection and an adaptive median filter to eliminate noise from the image before classification, since it performs best among spatial filters at distinguishing noise from fine detail. The adaptive median filter applies spatial processing to identify the impulse noise pixels in an image: by comparing each pixel with its surrounding pixels, it classifies pixels as noise. The size of the neighborhood, as well as the comparison threshold, is adjustable.
4.2. The Tumor Detection Stage
4.2.1. Growth Region Method
In the growth region algorithm, the average value of the initial seeds is taken as the average of the tumor area in this work, and the initial STD is set equal to zero for region growth beginning from the initial seeds in the input image. An 8-connected neighborhood is used around each point. The analysis proceeds by checking the pixels around the current point, starting with the initial seed, and attaching them to the class if they belong to it.
As points accumulate in the class, its average and STD are recursively updated from the mean and STD of the previous stage.
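One standard incremental recursion consistent with this description (Welford-style; the paper's own equations are not reproduced), where \mu_n and \sigma_n^2 are the mean and population variance after the n-th pixel x_n joins the class:

```latex
\mu_n = \mu_{n-1} + \frac{x_n - \mu_{n-1}}{n},
\qquad
\sigma_n^2 = \frac{(n-1)\,\sigma_{n-1}^2
  + (x_n - \mu_{n-1})(x_n - \mu_n)}{n}
```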
Newly added points are in turn examined in the same neighborhood, and the criteria for neighboring points applied to this class are updated. This search continues until the first class is fully detected and no further point can be added. The cluster size of the FCM is ten, and the population size of the GA is 100; the mutation rate is 0.2. Figure 4 shows the clustering results using the FCM-GA approach described. The precise location of the target tumor area is detected in the final image (see Figure 5).
4.2.2. Segmentation Using GMM
The outcomes of adaptive median filtering are shown in Figure 6: the image is clarified, and the noise is reduced, with noise pixels replaced by the median pixel value of their neighborhood. Initially, the image is transformed into a grayscale image, adaptive median filtering is applied, and the result is converted to an unsigned 8-bit integer image. Then, GMM clustering with two regions, two GMM components, and 100 iterations is performed on the preprocessed image.
We used the k-means clustering method (k = 2) and applied HMRF-EM. Figures 7 and 8 display the effects of the GMM process: malignant and benign tumors are illustrated in Figures 7 and 8, respectively, and the tumors can be segmented using the presented procedure.
Figure 9 demonstrates the effect of the presented approach on normal breasts. The results do not reveal any critical tumor section in the model; also, the pectoral muscle appears as the white region in the top corner of the figure.
4.2.3. The Performances of Approaches
Table 2 compares the fuzzy fitness values of the three presented techniques in the genetic algorithm using the maximum Jaccard index and the minimum Jaccard distance. The FCM-based fitness function works well in the genetic algorithm, giving the highest similarity and the smallest Jaccard distance.

The clustering methods presented are based on the growth region method, and a variety of performance criteria were then analyzed in this work. As the results show (Table 3), the suggested approach can diminish the RMS error, indicating that the segmentation of the image was performed more precisely. In this analysis, FCM-GA was employed to select the initial points for the region growing process. As mentioned, the appropriate initial points were selected using a genetic algorithm with a fitness function based on fuzzy-logic image clustering. With this hybrid method, we obtain the required initial seeds for starting the growth process.

Consequently, the presented method was applied to mammography MRI breast cancer images, and the outcomes were recorded. The findings revealed that the suggested algorithm reduces the clustering error. We utilized 212 healthy-breast images and 110 breast cancer images to run the algorithm. The performance criteria are shown in Table 4. The findings indicate that the presented techniques achieve maximum sensitivity; in addition, the GMM approach, using the GA to collect initial seeds, offers minimal fallout and optimal sensitivity with better detection performance.

The receiver operating characteristic (ROC) curve in Figure 10 confirms that this method is appropriate: its operating point lies above the chance (guess) line, and the optimal outcome combines minimal fallout with maximal sensitivity.
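The performance criteria used in Tables 4 and 5 follow directly from the confusion-matrix counts; this sketch shows the standard definitions. Note that an operating point lies above the ROC chance diagonal exactly when sensitivity exceeds fallout.

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix criteria: sensitivity (true positive rate),
    fallout (false positive rate), precision, accuracy, and specificity."""
    return {
        "sensitivity": tp / (tp + fn),
        "fallout": fp / (fp + tn),
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "specificity": tn / (tn + fp),
    }
```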
4.3. The Classification Stages
We used two datasets, BIRADS and MIAS, in the classification stage; 60 breast cancer images were utilized from BIRADS, together with 100 malignant and 100 benign breast cancer images. Table 5 displays the sensitivity, fallout, precision, accuracy, and specificity of classification using the PNN method. The results show that the PNN method achieves high accuracy and sensitivity on the MIAS dataset. Also, the fallout for the MIAS dataset is 0.08, lower than the 0.8 obtained for BIRADS, indicating that dataset size influences the classification outcome.
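The PNN classifier can be sketched as a Parzen-window (Gaussian-kernel) classifier over feature vectors such as the GLCM features used here. This is a minimal NumPy illustration of the general technique, not the authors' implementation; `sigma`, the smoothing parameter of the pattern layer, is an assumed hyperparameter.

```python
import numpy as np

class PNN:
    """Minimal probabilistic neural network: each class score is the mean
    Gaussian kernel response over that class's training feature vectors;
    the predicted class maximizes this score."""

    def __init__(self, sigma=1.0):
        self.sigma = sigma

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.groups_ = [X[y == c] for c in self.classes_]
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        scores = np.stack([
            np.exp(-((X[:, None, :] - g[None, :, :]) ** 2).sum(-1)
                   / (2 * self.sigma ** 2)).mean(axis=1)
            for g in self.groups_], axis=1)
        return self.classes_[scores.argmax(axis=1)]
```

In the pipeline above, the rows of `X` would be GLCM feature vectors extracted from the GMM-segmented images, and `y` the benign/malignant labels.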

5. Conclusion
In this study, an automated clustering method based on the growth region method is proposed to detect breast cancer in mammographic MRI images, with FCMGA utilized to find the initial seeds. MRI images are used for target-area detection and extraction via genetic-algorithm-based dynamic image analysis. Comparison of the RMSE reveals that the suggested algorithm has the lowest error rate relative to the other methods. Moreover, the suggested FCMGA method has a higher Jaccard index and a smaller Jaccard distance than the other methods.
Moreover, the suggested GMM solution performs close to the FCMGA method. As the findings show, the suggested approach minimizes the RMS error. For automatic selection of appropriate initial seeds, a genetic algorithm based on fuzzy-logic clustering was used to start the growth process. For classification, we applied the adaptive median filter to remove noise from the breast cancer images while distinguishing fine detail from noise, and then performed GMM clustering on the preprocessed image. The PNN method is then used to classify tumor types based on GLCM features extracted from the GMM-segmented images. We utilized two datasets, MIAS and BIRADS, for this purpose, obtaining strong outcomes with minimal fallout and optimal sensitivity.
Data Availability
The mammography data of this study are available in the MIAS and BIRADS repositories (the Mammographic Image Analysis Society Digital Mammogram Database, ACR BIRADS® Mammography).
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] S. Bharati, P. Podder, and M. Mondal, "Artificial neural network based breast cancer screening: a comprehensive review," 2020, http://arxiv.org/abs/2006.01767.
[2] J. Tsang and G. M. Tse, "Molecular classification of breast cancer," Advances in Anatomic Pathology, vol. 27, no. 1, pp. 27–35, 2020.
[3] R. T. Chlebowski, G. L. Anderson, A. K. Aragaki et al., "Association of menopausal hormone therapy with breast cancer incidence and mortality during long-term follow-up of the women's health initiative randomized clinical trials," JAMA, vol. 324, no. 4, pp. 369–380, 2020.
[4] F. M. Sutter and A. Ye, "Radiation therapy in the management of breast cancer and the impact of BC cancer center for the north on patient choice of treatment," British Columbia Medical Journal, vol. 60, 2018.
[5] S. Dorosti, S. J. Ghoushchi, E. Sobhrakhshankhah, M. Ahmadi, and A. Sharifi, "Application of gene expression programming and sensitivity analyses in analyzing effective parameters in gastric cancer tumor size and location," Soft Computing, vol. 24, no. 13, pp. 9943–9964, 2020.
[6] C. I. Ullrich, R. Aloni, M. E. Saeed, W. Ullrich, and T. Efferth, "Comparison between tumors in plants and human beings: mechanisms of tumor development and therapy with secondary plant metabolites," Phytomedicine, vol. 64, Article ID 153081, 2019.
[7] C. Allemani, T. Matsuda, V. Di Carlo et al., "Global surveillance of trends in cancer survival 2000–14 (CONCORD-3): analysis of individual records for 37 513 025 patients diagnosed with one of 18 cancers from 322 population-based registries in 71 countries," The Lancet, vol. 391, no. 10125, pp. 1023–1075, 2018.
[8] T. Syeda-Mahmood, "Role of big data and machine learning in diagnostic decision support in radiology," Journal of the American College of Radiology, vol. 15, no. 3, pp. 569–576, 2018.
[9] M. Ahmadi, A. Sharifi, S. Hassantabar, and S. Enayati, "QAIS-DSNN: tumor area segmentation of MRI image with optimized quantum matched-filter technique and deep spiking neural network," BioMed Research International, vol. 2021, Article ID 6653879, 16 pages, 2021.
[10] Y. Liu, S. Stojadinovic, B. Hrycushko et al., "A deep convolutional neural network-based automatic delineation strategy for multiple brain metastases stereotactic radiosurgery," PLoS One, vol. 12, no. 10, Article ID e0185844, 2017.
[11] S. P. RM, P. K. R. Maddikunta, M. Parimala et al., "An effective feature engineering for DNN using hybrid PCA-GWO for intrusion detection in IoMT architecture," Computer Communications, vol. 160, pp. 139–149, 2020.
[12] O. Hadad, R. Bakalo, R. Ben-Ari, S. Hashoul, and G. Amit, "Classification of breast lesions using cross-modal deep learning," in Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pp. 109–112, Melbourne, Australia, April 2017.
[13] R. Yamashita, M. Nishio, R. K. G. Do, and K. Togashi, "Convolutional neural networks: an overview and application in radiology," Insights Into Imaging, vol. 9, no. 4, pp. 611–629, 2018.
[14] A. Abdel Rahman, S. Belhaouari, A. Bouzerdoum, H. Baali, T. Alam, and A. Eldaraa, "Breast mass tumor classification using deep learning," in Proceedings of the 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), Doha, Qatar, February 2020.
[15] M. Ahmadi, A. Sharifi, M. Jafarian Fard, and N. Soleimani, "Detection of brain lesion location in MRI images using convolutional neural network and robust PCA," International Journal of Neuroscience, vol. 12, pp. 1–12, 2021.
[16] S. Charan, M. J. Khan, and K. Khurshid, "Breast cancer detection in mammograms using convolutional neural network," in Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), pp. 1–5, IEEE, Sukkur, Pakistan, March 2018.
[17] M. Veena and M. C. Padma, "Detection of breast cancer using digital breast tomosynthesis," in Emerging Research in Electronics, Computer Science and Technology, pp. 721–730, Springer, Singapore, 2019.
[18] S. Punitha, A. Amuthan, and K. Suresh Joseph, "Enhanced monarchy butterfly optimization technique for effective breast cancer diagnosis," Journal of Medical Systems, vol. 43, no. 7, p. 206, 2019.
[19] S. Sakri, N. B. Abdul Rashid, and Z. Muhammad Zain, "Particle swarm optimization feature selection for breast cancer recurrence prediction," IEEE Access, vol. 6, pp. 29637–29647, 2018.
[20] S. Karthik, R. Srinivasa Perumal, and C. Mouli, "Breast cancer classification using deep neural networks," in Knowledge Computing and Its Applications, pp. 227–241, Springer, Singapore, 2018.
[21] A. Unni, E. Nidheep, S. Vinod, and S. Lekha, "Tumour detection in double threshold segmented mammograms using optimized GLCM features fed SVM," in Proceedings of the 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pp. 554–559, IEEE, Bangalore, India, September 2018.
[22] D. Selvathi and A. Aarthy Poornila, "Deep learning techniques for breast cancer detection using medical image analysis," in Biologically Rationalized Computing Techniques for Image Processing Applications, pp. 159–186, Springer, Berlin, Germany, 2018.
[23] S. Sasikala and M. Ezhilarasi, "Fusion of k-Gabor features from mediolateral-oblique and craniocaudal view mammograms for improved breast cancer diagnosis," Journal of Cancer Research and Therapeutics, vol. 14, no. 5, p. 1036, 2018.
[24] M. Heidari, A. B. Hollingsworth, G. Danala et al., "Prediction of breast cancer risk using a machine learning approach embedded with a locality preserving projection algorithm," Physics in Medicine & Biology, vol. 63, no. 3, Article ID 035020, 2018.
[25] N. Tariq, B. Abid, K. Ali Qadeer, I. Hashim, Z. Ali, and I. Khosa, "Breast cancer classification using global discriminate features in mammographic images," Breast Cancer, vol. 10, p. 2, 2019.
[26] K. L. Kashyap, M. K. Bajpai, P. Khanna, and G. George, "Mesh-free based variational level set evolution for breast region segmentation and abnormality detection using mammograms," International Journal for Numerical Methods in Biomedical Engineering, vol. 34, no. 1, Article ID e2907, 2018.
[27] D. Ribli, A. Horváth, Z. Unger, P. Pollner, and I. Csabai, "Detecting and classifying lesions in mammograms with deep learning," Scientific Reports, vol. 8, no. 1, p. 4165, 2018.
[28] F. Gao, T. Wu, L. Jing et al., "A shallow-deep CNN for improved breast cancer diagnosis," Computerized Medical Imaging and Graphics, vol. 70, pp. 53–62, 2018.
[29] H. Jung, B. Kim, I. Lee et al., "Detection of masses in mammograms using a one-stage object detector based on a deep convolutional neural network," PLoS One, vol. 13, no. 9, Article ID e0203355, 2018.
[30] S. Shams, R. Platania, J. Zhang, J. Kim, K. Lee, and S.-J. Park, "Deep generative breast cancer screening and diagnosis," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 859–867, Springer, Granada, Spain, September 2018.
[31] M. A. Al-masni, M. A. Al-antari, J.-M. Park et al., "Simultaneous detection and classification of breast masses in digital mammograms via a deep learning YOLO-based CAD system," Computer Methods and Programs in Biomedicine, vol. 157, pp. 85–94, 2018.
[32] H. Chougrad, Z. Hamid, and A. Omar, "Deep convolutional neural networks for breast cancer screening," Computer Methods and Programs in Biomedicine, vol. 157, pp. 19–30, 2018.
[33] D. A. Ragab, M. Sharkas, S. Marshall, and J. Ren, "Breast cancer detection using deep convolutional neural networks and support vector machines," PeerJ, vol. 7, Article ID e6201, 2019.
[34] M. Hazarika and L. B. Mahanta, "A new breast border extraction and contrast enhancement technique with digital mammogram images for improved detection of breast cancer," Asian Pacific Journal of Cancer Prevention: APJCP, vol. 19, p. 2141, 2018.
[35] P. Anjaiah, K. Rajendra Prasad, and C. Raghavendra, "Effective texture features for segmented mammogram images," International Journal of Engineering & Technology, vol. 7, no. 3, pp. 666–669, 2018.
[36] M. M. Eltoukhy, M. Elhoseny, K. M. Hosny, and K. Amit, "Computer aided detection of mammographic mass using exact Gaussian–Hermite moments," Journal of Ambient Intelligence and Humanized Computing, vol. 167, pp. 1–9, 2018.
[37] T. V. Padmavathy, M. N. Vimalkumar, and D. S. Bhargava, "Adaptive clustering based breast cancer detection with ANFIS classifier using mammographic images," Cluster Computing, vol. 22, no. 6, pp. 13975–13984, 2019.
[38] M. Tahmooresi, A. Afshar, B. Bashari Rad, K. B. Nowshath, and M. A. Bamiah, "Early detection of breast cancer using machine learning techniques," Journal of Telecommunication, Electronic and Computer Engineering (JTEC), vol. 10, no. 3-2, pp. 21–27, 2018.
[39] M. Amrane, S. Oukid, I. Gagaoua, and T. Ensari, "Breast cancer classification using machine learning," in Proceedings of the 2018 Electric Electronics, Computer Science, Biomedical Engineerings' Meeting (EBBT), pp. 1–4, IEEE, Istanbul, Turkey, July 2018.
[40] R. Vijayarajeswari, P. Parthasarathy, S. Vivekanandan, and A. Alavudeen Basha, "Classification of mammogram for early detection of breast cancer using SVM classifier and Hough transform," Measurement, vol. 146, 2019.
[41] C. L. Chowdhary, M. Mittal, P. A. Pattanaik, and Z. Marszalek, "An efficient segmentation and classification system in medical images using intuitionist possibilistic fuzzy c-mean clustering and fuzzy SVM algorithm," Sensors, vol. 20, no. 14, p. 3903, 2020.
[42] C. L. Chowdhary, P. G. Shynu, and V. K. Gurani, "Exploring breast cancer classification of histopathology images from computer vision and image processing algorithms to deep learning," International Journal of Advanced Science and Technology, vol. 29, pp. 43–48, 2020.
[43] C. L. Chowdhary and D. P. Acharjya, "Segmentation of mammograms using a novel intuitionistic possibilistic fuzzy c-mean clustering algorithm," in Nature Inspired Computing, pp. 75–82, Springer, Singapore, 2018.
[44] A. Rampun, B. W. Scotney, P. J. Morrow, and H. Wang, "Breast mass classification in mammograms using ensemble convolutional neural networks," in Proceedings of the 2018 IEEE 20th International Conference on E-Health Networking, Applications and Services (Healthcom), pp. 1–6, IEEE, Ostrava, Czech Republic, September 2018.
[45] E.-K. Kim, H.-E. Kim, K. Han et al., "Applying data-driven imaging biomarker in mammography for breast cancer screening: preliminary study," Scientific Reports, vol. 8, no. 1, p. 2762, 2018.
[46] X. Zhang, Y. Zhang, Y. Erik et al., "Classification of whole mammogram and tomosynthesis images using deep convolutional neural networks," IEEE Transactions on Nanobioscience, vol. 17, no. 3, pp. 237–242, 2018.
[47] M. S. Salama, A. S. Eltrass, and H. M. Elkamchouchi, "An improved approach for computer-aided diagnosis of breast cancer in digital mammography," in Proceedings of the 2018 IEEE International Symposium on Medical Measurements and Applications (MeMeA), pp. 1–5, IEEE, Rome, Italy, June 2018.
[48] J. Suckling, J. Parker, D. Dance et al., Mammographic Image Analysis Society (MIAS) Database v1.21 [dataset], 2015.
[49] M. Rakshit and S. Das, "An efficient ECG denoising methodology using empirical mode decomposition and adaptive switching mean filter," Biomedical Signal Processing and Control, vol. 40, pp. 140–148, 2018.
[50] B. Zong, Q. Song, M. R. Min et al., "Deep autoencoding Gaussian mixture model for unsupervised anomaly detection," in Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, February 2018.
[51] E. Tzoreff and A. J. Weiss, "Expectation-maximization algorithm for direct position determination," Signal Processing, vol. 133, pp. 32–39, 2017.
[52] R. Liu, C. Huang, T. Li, L. Yang, and H. Zhu, "Statistical disease mapping for heterogeneous neuroimaging studies," in Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pp. 1415–1418, IEEE, Washington, DC, USA, April 2018.
[53] Y. Zeinali and B. A. Story, "Competitive probabilistic neural network," Integrated Computer-Aided Engineering, vol. 24, no. 2, pp. 105–118, 2017.
Copyright
Copyright © 2021 Zeynab Nasr Isfahani et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.