Computational Intelligence and Neuroscience

Special Issue

Swarm Intelligence and Neural Network Schemes for Biomedical Data Evaluation


Research Article | Open Access


Zeynab Nasr Isfahani, Iman Jannat-Dastjerdi, Fatemeh Eskandari, Saeid Jafarzadeh Ghoushchi, Yaghoub Pourasad, "Presentation of Novel Hybrid Algorithm for Detection and Classification of Breast Cancer Using Growth Region Method and Probabilistic Neural Network", Computational Intelligence and Neuroscience, vol. 2021, Article ID 5863496, 14 pages, 2021. https://doi.org/10.1155/2021/5863496

Presentation of Novel Hybrid Algorithm for Detection and Classification of Breast Cancer Using Growth Region Method and Probabilistic Neural Network

Academic Editor: Suresh Manic
Received: 09 Apr 2021
Accepted: 10 Jun 2021
Published: 21 Jun 2021

Abstract

Mammography is a significant screening test for early detection of breast cancer, which increases the patient’s chances of complete recovery. In this paper, a clustering method is presented for the detection of breast cancer tumor locations and areas. To implement the clustering method, we used the growth region approach, which detects similar nearby pixels. To find the best initial point for detection, it is essential to remove human interaction from the clustering. Therefore, in this paper, the FCM-GA algorithm is used to find the best point at which to start the growth. The results are compared with the manual selection method and the Gaussian Mixture Model method for verification. Classification is then performed to diagnose the breast cancer type in two primary datasets, MIAS and BI-RADS, using GLCM features and a probabilistic neural network (PNN). The clustering results show that the presented FCM-GA method outperforms the other methods; the FCM-GA clustering accuracy of 94% is the best among the approaches used in this paper. Furthermore, the results show that the PNN method achieves high accuracy and sensitivity on the MIAS dataset.

1. Introduction

Breast cancer is a deadly and frequent illness that affects people all over the world. In the next 20 years, the number of new breast cancer patients is expected to increase by 75 percent. Consequently, according to the WHO in 2019, precise and early detection plays a critical role in improving diagnosis and increasing the survival rate of patients with breast cancer from 20% to 60%. Tumors come in various forms that must be identified separately, since each can lead to different treatment options and prognoses [1]. To aid oncologic decision-making, cancer categorization strives to give an accurate diagnosis of the illness and a prognosis of tumor activity. Traditional breast cancer categorization, which is mainly focused on clinicopathologic aspects and the use of routine biomarkers, may not represent the wide range of clinical outcomes seen across individual breast cancers. The biology that underpins cancer genesis and progression is complex, and recent high-throughput technology results have added to our understanding of breast cancer’s underlying genetic changes and biological processes [2].

Mammography is the most effective method for the early detection of breast cancer [3]. However, retrospective review reveals that many lesions visible on a mammogram are overlooked by radiologists, which can have various causes, such as poor quality of the mammogram image, the benign appearance of the lesions, and eye fatigue or inattention on the part of the radiologist. Utilizing diagnostic approaches in the early stages of cancer development can be very effective and essential for patient treatment, so this early diagnosis can help doctors treat patients and significantly reduce patient mortality. Examination of breast tumors has a special place in the initial diagnosis of breast cancer [4]. Because diagnosis by eye is prone to error, the radiologist may fail to identify the tumor and cancer. Therefore, an image processing system with the power to extract features that the human eye cannot detect, or detects only with low accuracy, can be very useful. A tumor is an abnormal mass of cells. Tumor cells grow for reasons that are still unknown, and they grow regardless of the body’s needs [5]. Moreover, because they absorb from the blood nutrients that would otherwise nourish normal cells, they are often harmful to the body. Tumors are also called neoplasms. Body tissues are continually repaired and replaced with new cells following injury or natural cell depletion; in general, growth and repair depend on the body’s needs. Specific organs can grow in size (hypertrophy) or increase their number of cells (hyperplasia) if the organ is required to do more than its capacity [6].

A breast cancer diagnosis can help physicians treat patients and significantly reduce mortality; it increases the 5-year survival rate of patients with this cancer from 14% to 49% [7]. It is very important to screen for breast cancer and to diagnose the tumor quickly and accurately, because diagnosis by eye is prone to error and the radiologist may not detect the tumor. Therefore, an image processing system with high feature extraction power for detecting tumors can be very useful. The reduction in breast cancer mortality through screening could be even greater: investigations have revealed that radiologists fail to identify a remarkable number of breast cancer cases, and these missed cases represent failures of mammography screening. It is often unclear whether findings that are barely visible in the images should be ignored, and cancer may go undetected in the absence of symptoms. For this reason, computer-aided diagnosis (CAD) systems are being developed. These methods use pattern recognition approaches to find image features that characterize the location of breast cancer tumors, so CAD systems assist the radiologist during the examination of suspicious areas [8]. Most CAD systems also make diagnostic errors; however, there is evidence that CAD can enhance the radiologist’s ability to interpret and detect lesions. Although the results of a small number of recent studies indicate that the performance of existing commercial CAD systems still needs development, such systems can meet the needs of imaging centers and clinics. Improving the performance of CAD systems therefore remains a crucial issue for investigation and future development [9].

The convolutional neural network (CNN) is a multilayer system that recovers features from raw input and embodies a hierarchical structure [10]. Convolution layers, fully connected layers, pooling layers, and an output layer are among the layers that make up a deep neural network (DNN) [11]. The convolution layer is beneficial for learning high-level characteristics such as the edges of an image. FC layers are used to learn pixel-by-pixel characteristics. A pooling layer can reduce the size of the convolved feature maps, lowering the amount of computing power required; max pooling and average pooling are two operations this layer may execute [12, 13]. There are two types of CNNs utilized for breast image or data classification: de novo trained models and transfer learning-based models. The term “de novo model” refers to CNN-based models created and trained from the ground up [14]. In contrast, transfer learning networks are CNN models that reuse previously trained neural network models such as AlexNet, the visual geometry group network, and residual neural networks [15, 16].

This study aims to cluster the breast cancer area using the region growth method in combination with the FCM-GA approach. These results are compared with the manual selection method and Gaussian Mixture Model method for verification. In the second part of the paper, the classification is performed to diagnose breast cancer type in two datasets of MIAS and BI-RADS using features of GLCM and probabilistic neural network (PNN).

2. Literature Review

Veena and Padma [17] preprocessed the input image with a median filter, reducing the image noise. They then used the Gaussian mixture model (GMM), one of the well-known clustering algorithms, for image segmentation and, finally, applied a probabilistic neural network (PNN) classifier to features extracted with the gray-level co-occurrence matrix (GLCM) algorithm, classifying cases into three categories: benign, malignant, and normal. Punitha et al. [18] used intelligent artificial bee colony optimization and an Improved Monarch Butterfly Optimization Technique (IABC-EMBOT) to detect breast cancer. The method offers good speed and accuracy: classification accuracy is 97.53%, sensitivity up to 96.75%, specificity up to 97.04%, and the average processing time is 113.42. Sakri et al. [19] presented a feature selection method for predicting the recurrence of breast cancer. The method is Particle Swarm Optimization (PSO-RM) feature selection, which uses three different classifiers: KNN, NB, and the fast decision tree. From the 34 features, the proposed method chooses the best subset and improves the accuracy of all three classifiers: KNN accuracy improved from 70% to 81%, NB from 76% to 80%, and the fast decision tree from 66% to 75%.

Karthik et al. [20] used deep neural networks (DNNs) to learn data characteristics, categorizing breast cancer data with multilayer DNNs. Experimental results show that the accuracy obtained by this system is 97.66%, with a sensitivity slightly below 0.98; the deep network designed in that study is for breast cancer datasets only. Unni et al. [21] used a global thresholding method to estimate the pectoral muscle boundary, applied morphological methods to correct the extracted area boundaries, and used a mean filter to eliminate noise. The GLCM algorithm is used for feature extraction. A subset of these features that provides the best classification rate is then selected using a genetic algorithm. Finally, a Support Vector Machine (SVM) classifier is used to classify cancers as benign or malignant.

Selvathi and Poornila [22] propose a global thresholding method for extracting the breast boundary, in which images are converted to binary using a fixed threshold value of 18. Each connected component with a significant number of pixels is considered part of the breast area. The region boundary is then smoothed by morphological filtering operations using a disk with a radius of 5 pixels. Sasikala and Ezhilarasi [23] also proposed a global thresholding method for the extraction of breast boundaries. Noise in the 8-bit images is reduced with a mean filter, and the image contrast is improved by the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm. The images are then converted to binary with a fixed threshold value, followed by morphology-based filtering operations to eliminate small background objects. Results reported for this study include a maximum accuracy of 97.1%, a sensitivity of 98.8%, and a specificity of 95.4%. Heidari et al. [24] took mammographic image features and created an optimal classification model to estimate the risk of breast cancer. The 500 analyzed cases are divided into 50% high risk and 50% low risk. To predict the risk of a cancer diagnosis, they proposed an LPP model that combines several features to reduce the dimensionality of the feature space. Unlike typical feature selection techniques that select a set of optimal features from the primary feature pool, LPP creates a new optimal feature array containing features different from those in the pool, which ultimately yielded a 9.7% rise in risk prediction accuracy.

Tariq et al. [25] conducted a study to classify mammographic images of breast cancer. The GLCM algorithm was used to extract texture features from the images, and a smaller, individually selected feature subset was built in addition to the whole feature set. 60% of the data is used for training, 20% for validation, and 20% for testing. Using an ANN as the classifier, this study achieved 99% accuracy in the image recognition process. Kashyap et al. [26] used a partial differential equation-based process to extract the breast area from mammogram images, together with dark masking and median filtering, and mapped suspicious anomalies with fuzzy c-means clustering. To compute the texture characteristics of the suspected segmented masses, rotated and standard local binary patterns were calculated. Finally, support vector machines with polynomial, radial basis function, multilayer perceptron, and linear kernels were used to classify suspicious regions as abnormal or normal.

Chowdhary et al. [43] used the intuitionistic possibilistic fuzzy c-mean (IPFCM) method to overcome the shortcomings of traditional methods, namely noise sensitivity and random clustering, and used a fuzzy histogram algorithm for initial preprocessing of the mammographic images. Finally, feature extraction, classification, and validation are performed to assist the radiologist in tumor diagnosis.

Rampun et al. [44] used a pretrained modified version of AlexNet with detailed adjustments in the database of CBIS-DDSM mammography images. The AlexNet network architecture used different parameters with more advanced functions such as PReLu instead of ReLu. The experimental results of this study are classification accuracy (ACC) of 80.4% and area under the curve (AUC) of 0.84. Kim et al. [45] evaluated the feasibility of using data-driven imaging biomarkers (DIB-MG) to characterize the deep CNN algorithm in mammographic images, including normal and benign classes. Using the algorithm, they have achieved a sensitivity of 76% and a specificity of 89%.

Zhang et al. [46] used a neural network algorithm to classify mammographic images. They evaluated ten different CNN architectures and concluded that combining data augmentation (expanding each original image into eight images) with a convolutional neural network improves classification performance; the area under the ROC curve is 0.73. Salama et al. [47] developed a new computer-aided diagnosis (CAD) system to diagnose breast cancer in digital mammography. They used the WBCT algorithm for feature extraction, and the GA-SVM-MI optimization technique was used to select the optimal feature set. With this algorithm, the classification accuracy is 97.5% for normal versus abnormal cases and 96% for benign cases (see Table 1).


No. | Authors | Method | Objective | Results

1 | Ribli et al. [27] | Faster R-CNN applied to mammography to identify tumors; the method is quite time-efficient, but faster R-CNN is generally weak in that the training set must contain a large yet complete set of ROIs covering all possible lesion variations | Diagnosis and classification of masses in breast tissue | The system detects 90% of malignant lesions in the INbreast dataset with only 0.3 false-positive marks per image

2 | Gao [28] | A low-energy (LE) image similar to full-field digital mammography (FFDM) and a recombined image are used; in the proposed algorithm, a shallow CNN performs image reconstruction and a deep CNN extracts features | Diagnosis and classification of masses in breast tissue | Experimental results on 89 FFDM datasets using recombined “virtual” imaging features: accuracy of 0.90 and AUC = 0.92

3 | Jung et al. [29] | The model uses a single-stage detector and a two-stage detector; one-stage detectors such as RetinaNet generate a fixed number of proposals over a network to cover the space of possible positive samples; unlike RetinaNet, Mask R-CNN can classify bounding boxes at any scale and aspect ratio by segmenting at the pixel level | Diagnosis and classification of masses in breast tissue | The best results of this algorithm for TPR@FPPI on the GURO dataset are (0.99 ± 0.01@3.0), (1.00 ± 0.00@1.3), and (0.94 ± 0.03@0.5)

4 | Shams et al. [30] | A deep generative multitask network (DiaGRAM) addresses data loss and limited training data for interpreting lesions, which is a costly and time-consuming task; the multitask network combines convolutional neural networks with a generative adversarial network | Classification of masses in breast tissue | For the INbreast dataset: accuracy = 93.5 ± 2.9% and AUC = 92.5 ± 2.4%

5 | Al-masni et al. [31] | A CAD system based on deep learning ROI (region of interest) techniques using a CNN called YOLO was developed to detect and classify breast masses as benign or malignant in DDSM mammographic images | Diagnosis and classification of masses in breast tissue | Sensitivity = 93.2%, specificity = 78%, AUC = 87.7%

6 | Chougrad et al. [32] | Describes a deep CNN-based CAD system that distinguishes malignant from benign breast masses in high-resolution mammographic images; the models used are VGG16, ResNet50, and InceptionV3 | Classification of masses in breast tissue | With the ResNet50 network on the DDSM dataset, they achieved an accuracy of 27.27%

7 | Ragab et al. [33] | A region-based method with a threshold of 76 determines the most significant area; DCNN is used at the feature extraction stage; AlexNet is retrained to distinguish the two classes; for better accuracy, the last layer of the DCNN is replaced with an SVM | Diagnosis of masses in breast tissue | Accuracy, AUC, sensitivity, specificity, precision, and F1 score were 80.5%, 88.8%, 88.4%, 84.2%, 86%, and 81.5%, respectively

8 | Hazarika and Mahanta [34] | A method for extracting the breast border region using threshold-based zoning with morphological operations; a two-stage image contrast enhancement is used: in the first phase, a two-step histogram correction technique improves the image globally, and in the second phase, a nonlinear filter based on the local mean and local standard deviation of each pixel is applied to the histogram-modified image | Diagnosis of cancer in breast tissue | The proposed algorithm achieves 98% accuracy on 322 images
9 | Rajendra et al. [35] | Four texture extraction algorithms extract features from mammographic images, and the SVM classification method classifies the mammographic images into normal and abnormal categories | Classification of masses in breast tissue | The best results are obtained with the GLCM feature: accuracy = 92%, sensitivity = 94%, specificity = 93%, precision = 95%

10 | Eltoukhy et al. [36] | A CAD system based on feature extraction using the Gauss-Hermite method; the features are classified with several classifiers, including K-NN, random forest, and AdaBoost | Diagnosis of masses in breast tissue | The accuracy of the method on the IRMA and MIAS image sets is 93.27% and 90.56%, respectively

11 | Padmavathy et al. [37] | A practical NSST + ANFIS method for diagnosing breast cancer; NSST decomposes the original images in multiple directions and at several scales, and ANFIS-based clustering classifies the input images | Diagnosis of masses in breast tissue | Accuracy = 98.2%, sensitivity = 90.4%, specificity = 90.6%

12 | Tahmooresi et al. [38] | A method for the early diagnosis of breast cancer that combines different machine learning methods: Support Vector Machine (SVM), ANN, KNN, and Decision Tree (DT) | Classification of masses in breast tissue | An accuracy of 99.8% was achieved for the diagnosis of breast cancer

13 | Amrane et al. [39] | They studied machine learning methods such as the support vector machine, Naive Bayes classifier, and k-nearest neighbor (KNN) for classifying breast cancer images, and report that the KNN classifier outperforms both the support vector machine and Naive Bayes | Classification of masses in breast tissue | The KNN method has a higher accuracy of 97.57%, while the NB method also achieves a good accuracy of 96.99%

14 | Anjaiah et al. [35] | Multi-ROI segmentation of mammography images; statistical criteria are used to measure the texture characteristics of the mammographic images | Classification of masses in breast tissue | The method helps better capture the texture and shape of suspicious regions in mammography images and improves breast cancer diagnosis

15 | Vijayarajeswari et al. [40] | Normal and abnormal mammographic images are categorized using the Hough transform; results are improved using additional features such as mean, variance, and entropy; finally, an SVM classifier is used | Classification of masses in breast tissue | The diagnosis accuracy is 65% on normal images and 71% on abnormal images

16 | Chowdhary et al. [41] | Intuitionistic possibilistic fuzzy c-mean clustering is used to segment medical images | Segmentation of breast cancer | For MIAS images with noise levels of 5%, 7%, and 9%, the average segmentation accuracy of the presented method is 91.25%, 87.50%, and 85.30%, respectively

17 | Chowdhary et al. [42] | Deep convolutional neural networks classify breast cancer using computer vision and image processing | Classification of breast cancer | The traditional computer vision and image processing paradigm achieves classification accuracies of 85% and 82% for benign and malignant images, respectively

3. Methods and Materials

3.1. Data Collection

MIAS (Mammographic Image Analysis Society), a UK research community engaged in understanding mammograms, has developed a database of digital mammograms. Films taken from the UK National Breast Screening Programme were digitized with a Joyce-Loebl scanning microdensitometer to a 50-micron pixel edge, a device linear in the optical density range 0-3.2, with an 8-bit word for each pixel. The list is split into film pairs, where each pair reflects a single patient’s left (even filename numbers) and right (odd filename numbers) mammograms. The resolution is 1024 × 1024 pixels for all images, and the images have been centered in the matrix. When calcification is present, center positions and radii refer to clusters rather than individual calcifications. The origin of the coordinate system is the bottom-left corner. In some cases, calcification is widely dispersed throughout the image rather than concentrated at a single position; center positions and radii are unsuitable in these situations and were removed [48].

The second dataset is a BI-RADS data collection intended to standardize reporting on breast imaging and minimize uncertainty in its interpretation. It also promotes outcome monitoring and quality evaluation, providing a lexicon of systematic terminology along with chapters on report organization and guidance for everyday practice covering mammography, breast ultrasound, and MRI. A database of clinical mammograms comprising images from 60 patients was obtained from mammogram screening centers. This real-time database includes a broad range of cases that are difficult for radiologists to classify. All clinical mammograms obtained from the screening clinics were positive for the presence of abnormalities. Initially, we take a 2D mammogram image of size M × N as input and apply the averaging filter to it. The images comprise 20 benign, 20 malignant, and 20 normal breast images.

3.2. Growth Region Algorithm

This approach divides the image into distinct regions based on the resemblance or homogeneity of neighboring pixels. The pixels in each region are similar with respect to particular parameters, such as color and intensity. Histogram-based image clustering approaches consider only the distribution of image pixels at the grayscale level, whereas region growing techniques exploit the fact that neighboring pixels often have similar gray levels.

Area-based methods proceed as follows:
(1) A number of initial seeds is taken as the starting point of the algorithm.
(2) The region starts to develop from these seeds: pixels similar to the initial pixels are added to the area.
(3) When the region’s growth ends, the next seed is considered, and the growth of the following area continues.
(4) These steps are repeated until every pixel in the image belongs to some region.

The following steps describe the growth method of the region (Figure 1).

Step 1: select initial seeds. The initial points must be inserted manually to start the algorithm: in the manual process, the algorithm begins with the user choosing the initial points. Several techniques extract the initial points automatically, for instance, using a random-walk algorithm to identify the first points. To pick the initial points, this study proposes an algorithm based on the FCM-GA method. The idea is to first perform clustering with the fuzzy clustering algorithm, characterized by the membership grades M and the cluster centers C; then, through the genetic algorithm, appropriate values of these parameters are obtained by minimizing the target function. In this approach, the error performance criterion E is

E = \sum_{i=1}^{N} \sum_{j=1}^{c} \mu_{ij}^{m} \lVert x_i - c_j \rVert^2,

where the memberships satisfy \sum_{j=1}^{c} \mu_{ij} = 1 for every data point x_i.

Furthermore, the memberships M and the centers C are computed from the following relationships:

\mu_{ij} = \Big[ \sum_{k=1}^{c} \big( \lVert x_i - c_j \rVert / \lVert x_i - c_k \rVert \big)^{2/(m-1)} \Big]^{-1}, \qquad c_j = \frac{\sum_{i=1}^{N} \mu_{ij}^{m} x_i}{\sum_{i=1}^{N} \mu_{ij}^{m}}.

Step 2: determine the similarity of regions. After the initial points have been defined in the previous stage, a similarity criterion between the regions is selected. This criterion is used to evaluate the resemblance of a new pixel to the pixels of a region and decides whether the new pixel is assigned to the corresponding area.

The standard deviation criterion is one such similarity criterion. A new pixel p is assigned to region R when

|I(p) − μ_R| ≤ X σ_R,

where μ_R and σ_R are the mean and standard deviation of the region and X controls how tolerant the region is: with X = 3, about 99.7% of the pixels actually belonging to the region fall inside the accepted band. The smaller X is, the fewer points the region contains, and the more regions the picture splits into. The threshold criterion is another commonly used criterion. In this approach, the average of the region is determined, and a new pixel joins the area under the condition

|I(p) − μ_R| < T,

for a fixed threshold T.

In this context, when the gap between the pixel and the average of the area is less than the defined limit, the area absorbs that pixel. For color images, this condition must hold in all three layers (red, green, and blue) for the pixel to join the region.

Step 3: grow the region. After choosing the initial seeds that start the algorithm and the criterion of similarity between pixels and regions, the region’s growth is carried out. The area develops by examining the pixels adjacent to the region, beginning from the initial seeds.
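As a concrete illustration, the three steps above can be sketched as follows (a simplified sketch of a generic region growing loop, not the authors’ code: single seed, 4-connectivity, and the standard deviation criterion with parameter X; the function name and the synthetic image are ours):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, x=3.0):
    """Grow a region from `seed`, accepting 4-neighbours whose intensity
    lies within `x` standard deviations of the current region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    values = [float(img[seed])]          # intensities collected so far
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        mu, sigma = np.mean(values), np.std(values)
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                # standard deviation criterion |I(p) - mu| <= x * sigma
                # (a floor of 1.0 keeps the test usable when sigma is 0)
                if abs(float(img[nr, nc]) - mu) <= max(x * sigma, 1.0):
                    mask[nr, nc] = True
                    values.append(float(img[nr, nc]))
                    queue.append((nr, nc))
    return mask
```

On a synthetic image with a bright square on a dark background, seeding inside the square recovers exactly the square.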

3.3. Fuzzy C-Means (FCM)

The FCM algorithm (Bezdek et al., 1984) is a classification algorithm based on minimizing the objective function

J_m = \sum_{i=1}^{N} \sum_{j=1}^{c} u_{ij}^{m} d_{ij}^2,

where m > 1 manages the degree of fuzziness of the classification.

u_{ij} is the membership of data point x_i in the class with center c_j, and d_{ij} = \lVert x_i - c_j \rVert is the distance between the data point and the class center.

U is subject to the following conditions: u_{ij} \in [0, 1] and \sum_{j=1}^{c} u_{ij} = 1 for every i.

For each group, the membership function and the class centers are obtained from the following relationships:

u_{ij} = \Big[ \sum_{k=1}^{c} (d_{ij}/d_{ik})^{2/(m-1)} \Big]^{-1}, \qquad c_j = \frac{\sum_{i=1}^{N} u_{ij}^{m} x_i}{\sum_{i=1}^{N} u_{ij}^{m}}.
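The alternating membership and center updates can be sketched in a few lines of NumPy (our own illustrative implementation of standard FCM, not the authors’ code):

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy c-means: alternate the membership and centre updates that
    minimise J_m = sum_ij u_ij^m ||x_i - c_j||^2."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # rows of U sum to 1
    for _ in range(iters):
        Um = U ** m
        # centre update: weighted mean of the data
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distances d_ij (small epsilon avoids division by zero)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1)) *
                   np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return U, centers
```

On two well-separated 1D groups, the centers converge to the group means and each row of U still sums to one.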

3.4. Genetic Algorithm (GA)

The genetic algorithm (GA) is an optimization tool based on the Darwinian evolutionary principle. In each step of applying the GA, a set of search points is processed stochastically. Each point is assigned a sequence of characters (a chromosome), and genetic operators act on these sequences: the resulting sequences are recombined and mutated to obtain new points in the search space. Finally, assuming each point has an objective function value, the likelihood of its presence in the next generation is calculated from that value.

The fitness function in this research is defined based on the difference between the segmented reference image in the database and the image obtained by the region growth method starting at a candidate initial point: the smaller the mismatch between the gray, white, and black layers of the two segmentations, the fitter the candidate seed.
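A toy GA in this spirit can be sketched as below. Note the fitness function here is a stand-in: the paper’s fitness compares segmentations, while this sketch minimizes a simple test function; all names and operator choices (tournament selection, uniform crossover, Gaussian mutation, elitism) are ours:

```python
import numpy as np

def ga_minimize(fitness, bounds, pop=30, gens=60, pm=0.1, seed=0):
    """Toy real-coded GA: tournament selection, uniform crossover,
    Gaussian mutation, elitism. Returns the best point found."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        f = np.array([fitness(p) for p in P])
        # binary tournament selection
        idx = rng.integers(0, pop, size=(pop, 2))
        parents = P[np.where(f[idx[:, 0]] < f[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # uniform crossover between consecutive parents
        mask = rng.random(P.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation with probability pm per gene
        mut = rng.random(P.shape) < pm
        children = np.clip(
            children + mut * rng.normal(0, 0.1 * (hi - lo), P.shape), lo, hi)
        # elitism: carry over the best individual unchanged
        children[0] = P[np.argmin(f)]
        P = children
    f = np.array([fitness(p) for p in P])
    return P[np.argmin(f)]
```

Minimizing (x − 3)² + (y + 1)² over [−5, 5]² converges close to (3, −1).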

3.5. Adaptive Median Filter

The adaptive median filter classifies each pixel of the image as noise by comparing it with its surrounding pixels. The size of the neighborhood, as well as the reference threshold, is adjustable. A pixel that differs from most of its neighbors, and is not structurally aligned with the pixels it resembles, is labeled as impulse noise. The objectives are (1) removing impulse noise, (2) smoothing other noise, and (3) reducing distortion such as excessive thinning or thickening of object boundaries [49].
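A sketch of one common variant, the classic two-stage adaptive median filter, is given below (window sizes and the synthetic example are our choices, not taken from the paper):

```python
import numpy as np

def adaptive_median(img, smax=7):
    """Adaptive median filter: grow the window until its median is not an
    impulse; replace the centre pixel only if the pixel itself is one."""
    pad = smax // 2
    padded = np.pad(img, pad, mode='edge')
    out = img.copy()
    h, w = img.shape
    for r in range(h):
        for c in range(w):
            for s in range(3, smax + 1, 2):          # window sizes 3,5,7,...
                k = s // 2
                win = padded[r + pad - k:r + pad + k + 1,
                             c + pad - k:c + pad + k + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax:               # median is not an impulse
                    if not (zmin < img[r, c] < zmax):
                        out[r, c] = zmed             # centre pixel is an impulse
                    break
            else:                                    # max window reached
                out[r, c] = zmed
    return out
```

A single salt pixel in a flat image is replaced by the local median while the rest of the image is unchanged.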

3.6. Gaussian Mixture Model

The value of a pixel in an image (i.e., its intensity or color) can be treated as a random variable. Since each random variable has a probability distribution, pixel values have a probability distribution [50]. A reasonable probability distribution for the pixel values of an image is the Gaussian mixture distribution, whose density has the form

p(x) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x \mid \mu_k, \sigma_k^2),

where \mathcal{N}(x \mid \mu_k, \sigma_k^2) = \frac{1}{\sqrt{2\pi}\,\sigma_k} \exp\!\big(-(x-\mu_k)^2 / 2\sigma_k^2\big).

We presume that each pixel of the image belongs to one of K classes; \mu_k, \sigma_k^2, and \pi_k are the mean, variance, and mixing probability of class k, with \sum_{k} \pi_k = 1.
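To make the mixture concrete, the following sketch evaluates the mixture density and the class posterior for a pixel intensity (the two-class parameter values are invented for illustration):

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Normal density N(x | mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def mixture_pdf(x, pis, mus, sigmas):
    """p(x) = sum_k pi_k N(x | mu_k, sigma_k^2) -- the density assumed
    for pixel intensities."""
    return sum(p * gaussian(x, m, s) for p, m, s in zip(pis, mus, sigmas))

# two-class example: dark background vs. bright lesion pixels
pis, mus, sigmas = [0.8, 0.2], [40.0, 200.0], [5.0, 10.0]

# posterior probability that intensity 190 belongs to each class
post = [p * gaussian(190.0, m, s) for p, m, s in zip(pis, mus, sigmas)]
post = [v / sum(post) for v in post]
```

A pixel of intensity 190 is assigned to the bright class with near certainty, as expected.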

3.7. Expectation-Maximization Algorithm

The EM algorithm is set up much like K-means. Similarly, the first part is choosing the initial partitions. In this case, to make the comparison of findings more realistic, the same initial partitions as used in the color segmentation with K-means were employed, and RGB color was again chosen as the comparison parameter. The EM cycle starts with an Expectation step described by the following equation [51]:

w_{ij} = \frac{p(x_i \mid \mu_j)}{\sum_{k} p(x_i \mid \mu_k)}.

This equation states that the weight (expectation) of pixel x_i with respect to partition j equals the probability of observing x_i given the partition mean \mu_j, divided by the sum of the same probability over all k components; \sigma denotes the covariance of the pixel data appearing inside each component density. Once the E step has been carried out and every pixel has an expectation weight for each partition, the M step (maximization step) begins, defined by the following equation:

\mu_j = \frac{\sum_i w_{ij} \, x_i}{\sum_i w_{ij}}.

This equation indicates that the value of partition j is updated to the weighted average of the pixel values, where the weights are those computed in the E step. The EM loop is repeated for each new set of partitions until, as in the K-means algorithm, the partition values no longer change by a significant amount.
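These E and M steps can be sketched for 1D intensities with fixed, equal variances (a simplification of full EM: only the means are updated; the data and parameters are synthetic):

```python
import numpy as np

def em_means(x, mu, sigma=10.0, iters=50):
    """EM for a mixture with fixed, equal variances: the E step computes
    responsibilities w_ij, the M step moves each mean to the weighted
    average of the pixels."""
    mu = np.array(mu, dtype=float)
    for _ in range(iters):
        # E step: w_ij proportional to N(x_i | mu_j, sigma^2)
        w = np.exp(-0.5 * ((x[:, None] - mu[None, :]) / sigma) ** 2)
        w /= w.sum(axis=1, keepdims=True)
        # M step: mu_j = sum_i w_ij x_i / sum_i w_ij
        mu = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
    return mu

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(50, 10, 500), rng.normal(180, 10, 500)])
mu = em_means(x, [0.0, 255.0])
```

Starting from the poor initial means 0 and 255, the loop converges to the two true intensity modes.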

3.8. Hidden Markov Random Field Model (HMRFM)

An HMRF model is defined by a sequence of observations generated as stochastic functions of an underlying Markov process whose state sequence cannot be observed directly. The underlying Markov chain changes its state according to an N × N transition probability matrix, where N represents the number of states. HMMs have been used effectively in speech recognition and handwritten script recognition [52]. Since the original HMMs were formulated as 1D Markov chains with first-order neighborhood structures, they cannot be applied directly to 2D/3D problems such as image segmentation. Here, we consider an HMM whose underlying stochastic process is a Markov random field rather than a Markov chain and is thus not confined to 1D. We call this particular case an HMRF model. Mathematically, an HMRF model is characterized by the following: (i) Hidden Markov Random Field. Let X = (X_1, ..., X_N) be a random vector over a finite state space L with a probability distribution; X is an underlying MRF whose configuration is not observable. (ii) Observable Random Field. Y = (Y_1, ..., Y_N) is a random field with a finite state space D. Given any particular configuration x_i, each Y_i follows the conditional probability distribution p(y_i | x_i) with associated parameters \theta_{x_i}. This distribution is termed the emission likelihood function, and Y is often referred to as the emitted random vector. (iii) Conditional Independence. For any configuration x, the random variables Y_i are conditionally independent: p(y | x) = \prod_i p(y_i | x_i).
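The conditional independence property means the emission log-likelihood factorizes over pixels; a minimal sketch with Gaussian emission densities (the labels, class means, and intensities below are invented for illustration):

```python
import numpy as np

def log_emission(y, x, means, sigmas):
    """Conditional independence: log p(y | x) = sum_i log p(y_i | x_i),
    with a Gaussian emission density for each hidden label."""
    mu = means[x]                       # per-pixel mean selected by label
    sd = sigmas[x]
    return np.sum(-0.5 * ((y - mu) / sd) ** 2
                  - np.log(sd * np.sqrt(2 * np.pi)))

means = np.array([30.0, 200.0])         # class means (background, lesion)
sigmas = np.array([5.0, 5.0])
y = np.array([28.0, 33.0, 198.0])       # observed intensities
good = log_emission(y, np.array([0, 0, 1]), means, sigmas)
bad = log_emission(y, np.array([1, 1, 0]), means, sigmas)
```

The labeling that matches the intensities scores a much higher log-likelihood than the swapped labeling, which is exactly the signal a segmentation algorithm maximizes.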

3.9. Probabilistic Neural Network (PNN)

The PNN is a feed-forward neural network widely employed in diagnosis and pattern recognition. This form of ANN is derived from Kernel Fisher discriminant analysis. Within a PNN, the operations are organized into a multilayered feed-forward network with four layers: input, pattern, summation, and output (see Figure 2).

PNN is also utilized in classification problems [53]. When an input is presented, the first layer measures the distance from the input vector to each of the training input vectors. The pattern layer computes the relationship between the input and each class of inputs and produces a net output as a vector of likelihoods. Finally, a competitive transfer function on the output of the second layer selects the maximum of these likelihoods, producing a binary indication for the target class and the nontarget classes, respectively. Each neuron in the input layer represents a predictor variable; for a categorical variable with N categories, N-1 neurons are used. The pattern layer contains one neuron for every case in the training set; it stores the values of the case's input variables along with its output value. In the summation layer, the values of the pattern neurons are added for the class they represent. The output layer compares the weighted votes accumulated in the pattern layer for each target category and uses the maximum value to predict the target.
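The four layers map quite directly onto code. A minimal sketch, assuming a Gaussian kernel in the pattern layer and an illustrative smoothing parameter sigma:

```python
import numpy as np

def pnn_classify(x, train_x, train_y, sigma=0.1):
    """Sketch of the PNN layers: the pattern layer holds one Gaussian
    kernel per training case, the summation layer adds kernels per class,
    and the output layer makes a competitive (max) choice."""
    # pattern layer: one RBF activation per stored training example
    d2 = ((train_x - x) ** 2).sum(axis=1)
    act = np.exp(-d2 / (2 * sigma ** 2))
    # summation layer: accumulate activations per class
    classes = np.unique(train_y)
    sums = np.array([act[train_y == c].sum() for c in classes])
    # output layer: competitive choice of the largest class sum
    return classes[sums.argmax()]
```

Because the pattern layer stores every training case, training is instantaneous, at the cost of memory and per-query work proportional to the training set size.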

3.10. ROC Curve

To evaluate the outcome of a binary (two-class) classification, two statistical measures, sensitivity and specificity, are used. When the data can be separated into positive and negative classes, the consistency of a test that divides the information into these two categories can be observed and described using these metrics. Sensitivity is the percentage of positive cases that are correctly identified as positive; specificity is the percentage of negative cases that are correctly labeled as negative. In statistical terms, sensitivity is the number of true-positive cases divided by the sum of true-positive and false-negative cases:

Sensitivity = TP / (TP + FN)

The sensitivity and specificity of a test depend on its quality and on the type of test utilized; however, the outcome of a test cannot be fully described by either measure alone. Specificity is defined analogously:

Specificity = TN / (TN + FP)

The ROC curve is a plot displaying the diagnostic capability of a binary classifier system as its discrimination threshold is varied. It is formed by plotting the true-positive rate against the fall-out (false-positive rate) at different threshold settings. The true-positive rate is also known as sensitivity, recall, or probability of detection. Accuracy, specificity, and precision are other performance analysis parameters.
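The definitions above reduce to simple ratios over confusion counts, and each ROC point is one (fall-out, sensitivity) pair at a given threshold. A small sketch:

```python
def binary_metrics(tp, fp, tn, fn):
    """Confusion-count definitions used above: sensitivity (recall),
    specificity, precision, and fall-out (false-positive rate)."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),
        "fall_out":    fp / (fp + tn),   # equals 1 - specificity
    }

def roc_points(scores, labels, thresholds):
    """One (fall-out, sensitivity) ROC point per discrimination threshold."""
    pts = []
    for t in thresholds:
        pred = [s >= t for s in scores]
        tp = sum(p and l for p, l in zip(pred, labels))
        fp = sum(p and not l for p, l in zip(pred, labels))
        fn = sum((not p) and l for p, l in zip(pred, labels))
        tn = sum((not p) and (not l) for p, l in zip(pred, labels))
        pts.append((fp / (fp + tn), tp / (tp + fn)))
    return pts
```

Sweeping the threshold from high to low traces the curve from (0, 0) toward (1, 1); a curve above the diagonal indicates better-than-guessing performance.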

4. Results and Discussion

4.1. Presented Approach

The diagram of the presented approach is shown in Figure 3. We used the growth region technique for tumor detection, and adaptive median filters to eliminate noise from the image before classification, since this filter performs best among spatial filters at distinguishing noise from fine detail. The adaptive median filter applies spatial processing to identify impulse-noise pixels in an image: by comparing each pixel with its surrounding pixels, it classifies pixels as noise. Both the size of the neighborhood and the comparison threshold are adjustable.
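A sketch of the filter, assuming the classical adaptive median scheme (window grows until its median is not itself an impulse; only impulse-like pixels are replaced) — the paper does not spell out its exact variant:

```python
import numpy as np

def adaptive_median(img, s_max=7):
    """Adaptive median filter sketch: the window around each pixel grows
    until its median is not an impulse; the pixel is replaced by that
    median only if the pixel itself looks like impulse noise."""
    out = img.copy()
    h, w = img.shape
    pad = s_max // 2
    padded = np.pad(img, pad, mode="edge")
    for i in range(h):
        for j in range(w):
            for s in range(3, s_max + 1, 2):     # 3x3, 5x5, ... windows
                r = s // 2
                win = padded[i + pad - r:i + pad + r + 1,
                             j + pad - r:j + pad + r + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax:           # median is not an impulse
                    if not (zmin < img[i, j] < zmax):
                        out[i, j] = zmed         # pixel is impulse noise
                    break
    return out
```

The adjustable quantities mentioned in the text correspond to the maximum window size `s_max` and the impulse test against the window extremes.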

4.2. The Tumor Detection Stage
4.2.1. Growth Region Method

In the growth region algorithm, the initial seed value is set to the average of the tumor area in this work, and the initial STD is set to zero for region growth beginning from the initial seeds in the input image. An 8-connected neighborhood is used around each initial point. Starting from the initial seed, the surrounding pixels are checked, and those that belong to the class are attached to it.

As points are accumulated into the class, its average and STD are recursively updated from the mean and STD of the previous stage:
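The growth loop with this running update can be sketched as follows; the acceptance threshold of k standard deviations (with a small floor while the STD is still zero) is an illustrative choice, not the paper's stated criterion:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, k=2.0):
    """Region growing from an initial seed with a recursively updated
    class mean and STD; 8-connected neighbours join while they stay
    within k STDs of the running mean."""
    h, w = img.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    mean, m2, n = float(img[seed]), 0.0, 1
    q = deque([seed])
    while q:
        i, j = q.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and not mask[ni, nj]:
                    v = float(img[ni, nj])
                    std = (m2 / n) ** 0.5
                    if abs(v - mean) <= max(k * std, 1e-3):
                        mask[ni, nj] = True
                        q.append((ni, nj))
                        # recursive update of the class mean and STD
                        n += 1
                        delta = v - mean
                        mean += delta / n
                        m2 += delta * (v - mean)
    return mask
```

The search ends exactly as the text describes: the queue empties once no neighbouring pixel satisfies the membership criterion.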

Newly added points are then treated as part of the neighborhood, and the membership criterion for neighboring points of this class is updated accordingly. The search continues until no further point can be added to the first class. The cluster size of the FCM is ten, the population size of the GA is 100, and the mutation rate is 0.2. Figure 4 shows the clustering results using the mentioned FCM-GA approach. The precise location of the target tumor area is detected in the final image (see Figure 5).

4.2.2. Segmentation Using GMM

The outcomes of adaptive median filtering are shown in Figure 6: the image is clarified and the noise is reduced, with each noise pixel replaced by the median value of its neighborhood. Initially, the image is converted to grayscale; adaptive median filtering is then applied, and the output is converted to an unsigned 8-bit integer (uint8). Then, GMM clustering with two regions (two GMM components) and 100 iterations is performed on the preprocessed image.
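This preprocessing-plus-clustering step might be sketched as below using scikit-learn's mixture model (the adaptive median filtering stage is omitted here for brevity, and the uint8 scaling is an illustrative simplification):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_gmm(img, n_regions=2, iters=100, seed=0):
    """Sketch of the GMM clustering step: convert intensities to uint8,
    fit a 2-component mixture, and reshape the per-pixel labels back to
    the image grid."""
    # convert to unsigned 8-bit as described in the text
    u8 = np.clip(img, 0, 255).astype(np.uint8)
    gmm = GaussianMixture(n_components=n_regions, max_iter=iters,
                          random_state=seed)
    labels = gmm.fit_predict(u8.reshape(-1, 1).astype(float))
    return labels.reshape(img.shape)
```

On an image with two distinct intensity populations, the two mixture components separate the regions cleanly.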

We used the k-means clustering method (k = 2) and applied HMRF-EM. Figures 7 and 8 display the effects of the GMM process. Malignant and benign tumors are illustrated in Figures 7 and 8, respectively, and we can segment the tumors using the presented procedure.

Figure 9 demonstrates the effect of the presented approach on normal breasts. The results do not reveal any critical tumor section in the model. Also, the chest muscle appears as the white region in the top corner of the figure.

4.2.3. The Performances of Approaches

Table 2 shows the fuzzy fitness value of the three presented techniques and compares them by their maximum Jaccard index and minimum Jaccard distance. The FCM-based fitness function in the genetic algorithm performs well, owing to the similarity between the techniques and the smallest Jaccard distance.


Method                             Jaccard    Similarity
Growth region, hand selection      0.82       0.63
Growth region, FCM-GA selection    0.95       0.71
GMM method                         0.93       0.65
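The Jaccard measures compared in Table 2 follow directly from set overlap between two binary masks; a minimal sketch:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index and Jaccard distance between two binary masks:
    index = |A ∩ B| / |A ∪ B|, distance = 1 - index."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    idx = inter / union if union else 1.0
    return idx, 1.0 - idx
```

A higher index (and hence a smaller distance) indicates closer agreement between a produced segmentation and its reference.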

The clustering methods presented are based on the growth region method, and their performance was then analyzed against a variety of criteria. As the results show (Table 3), the suggested approach can diminish the RMS error, indicating that the image was segmented more precisely. In this analysis, FCM-GA was employed to select the initial points for the region growth process. As mentioned, the appropriate initial points were selected using a genetic algorithm with a fuzzy-logic-based fitness function for image clustering. With this hybrid method, we obtain the initial seeds required to start the growth process.


Method                             RMS
Growth region, hand selection      0.5711
Growth region, FCM-GA selection    0.3681
GMM method                         0.4216
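A natural reading of the RMS figures in Table 3 is the root-mean-square of the per-pixel difference between the produced mask and a reference mask; the paper does not spell out its exact formula, so this definition is an assumption:

```python
import numpy as np

def rms_error(pred_mask, ref_mask):
    """RMS segmentation error: root-mean-square of the per-pixel
    difference between a produced mask and its reference."""
    diff = pred_mask.astype(float) - ref_mask.astype(float)
    return float(np.sqrt((diff ** 2).mean()))
```

For binary masks this is simply the square root of the fraction of disagreeing pixels, so lower values mean a more precise segmentation.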

Consequently, the presented method was implemented on mammographic MRI breast cancer images, and the outcomes were recorded. The findings revealed that the suggested algorithm can reduce the clustering error. We utilized 212 healthy breast images and 110 breast cancer images for the implementation of the algorithm. The performance criteria are shown in Table 4. The findings indicate that the presented techniques achieve maximum sensitivity; moreover, using the GA to collect the initial seeds yields minimal fall-out and optimum sensitivity, with better detection performance.


Method              Sensitivity    Specificity    Precision    Fall-out
Hand selection      0.72           0.65           0.76         0.35
FCM-GA selection    0.83           0.79           0.94         0.21
GMM method          0.76           0.63           0.92         0.37

We demonstrate that this method is appropriate according to the receiver operating characteristic (ROC) curve. As Figure 10 shows, the curve lies above the line of random guessing, and the optimal outcome combines minimal fall-out with optimum sensitivity.

4.3. The Classification Stages

We used the BI-RADS and MIAS datasets in the classification stage. We utilized 60 breast cancer images in BI-RADS, including 100 malignant and 100 benign breast cancer images. Table 5 displays the sensitivity, fall-out, precision, accuracy, and specificity of classification using the PNN method. The results show that the PNN method achieves high accuracy and sensitivity on the MIAS dataset. Also, the fall-out for the MIAS dataset is 0.08, lower than that of BI-RADS (0.28), so the size of the dataset affects the classification outcomes.


Dataset     Sensitivity    Specificity    Precision    Accuracy    Fall-out
MIAS        0.91           0.92           0.99         0.92        0.08
BI-RADS     0.88           0.72           0.95         0.80        0.28
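The classification stage feeds GLCM texture features to the PNN. A toy grey-level co-occurrence matrix with a few commonly used texture features (contrast, energy, homogeneity) is sketched below; the paper does not list its exact feature set, so these choices are illustrative:

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Toy GLCM for one pixel offset (dx, dy), quantised to `levels`
    grey levels, with three standard texture features."""
    q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[q[i, j], q[i + dy, j + dx]] += 1   # count co-occurrences
    glcm /= glcm.sum()                              # normalise to probabilities
    i_idx, j_idx = np.indices(glcm.shape)
    return {
        "contrast": float((glcm * (i_idx - j_idx) ** 2).sum()),
        "energy": float((glcm ** 2).sum()),
        "homogeneity": float((glcm / (1.0 + np.abs(i_idx - j_idx))).sum()),
    }
```

Such per-image feature vectors are what a PNN of the kind described in Section 3.9 would consume as its input layer.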

5. Conclusion

In this study, an automated clustering method based on the growth region method is proposed to detect breast cancer in mammographic MRI images, with FCM-GA utilized to find the initial seeds. Genetic-algorithm-based dynamic image analysis is applied to detect and extract the target area. Comparison of the RMSE results reveals that the suggested algorithm has the lowest error rate relative to the other methods. Moreover, the suggested FCM-GA method has a higher Jaccard index and a smaller Jaccard distance than the other methods.

Moreover, the suggested GMM solution is close to the FCM-GA method. As the findings show, the RMS error can be minimized by the suggested approach. For automatic selection of appropriate initial seeds to start the growth process, a genetic algorithm based on fuzzy-logic clustering techniques was used. We also applied the adaptive median filter to eliminate noise from the breast cancer images while distinguishing fine detail from noise, and then performed GMM clustering on the preprocessed images. Finally, the PNN method is used to classify tumor types based on GLCM features extracted from the GMM-segmented images. We utilized two datasets, MIAS and BI-RADS, for this purpose. With minimal fall-out and optimum sensitivity, the outcomes are favorable.

Data Availability

The mammography data of this study are available in the MIAS and BI-RADS repositories (the Mammographic Image Analysis Society Digital Mammogram Database, ACR BI-RADS® Mammography).

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. S. Bharati, P. Podder, and M. Mondal, “Artificial neural network based breast cancer screening: a comprehensive review,” 2020, http://arxiv.org/abs/2006.01767. View at: Google Scholar
  2. J. Tsang and G. M. Tse, “Molecular classification of breast cancer,” Advances in Anatomic Pathology, vol. 27, no. 1, pp. 27–35, 2020. View at: Publisher Site | Google Scholar
  3. R. T. Chlebowski, G. L. Anderson, A. K. Aragaki et al., “Association of menopausal hormone therapy with breast cancer incidence and mortality during long-term follow-up of the women’s health initiative randomized clinical trials,” Jama, vol. 324, no. 4, pp. 369–380, 2020. View at: Publisher Site | Google Scholar
  4. F. M. Sutter and A. Ye, “Radiation therapy in the management of breast cancer and the impact of BC cancer center for the north on patient choice of treatment,” British Columbia Medical Journal, vol. 60, 2018. View at: Google Scholar
  5. S. Dorosti, S. J. Ghoushchi, E. Sobhrakhshankhah, M. Ahmadi, and A. Sharifi, “Application of gene expression programming and sensitivity analyses in analyzing effective parameters in gastric cancer tumor size and location,” Soft Computing, vol. 24, no. 13, pp. 9943–9964, 2020. View at: Publisher Site | Google Scholar
  6. C. I. Ullrich, R. Aloni, M. E. Saeed, W. Ullrich, and T. Efferth, “Comparison between tumors in plants and human beings: mechanisms of tumor development and therapy with secondary plant metabolites,” Phytomedicine, vol. 64, Article ID 153081, 2019. View at: Publisher Site | Google Scholar
  7. C. Allemani, T. Matsuda, V. Di Carlo et al., “Global surveillance of trends in cancer survival 2000–14 (CONCORD-3): analysis of individual records for 37 513 025 patients diagnosed with one of 18 cancers from 322 population-based registries in 71 countries,” The Lancet, vol. 391, no. 10125, pp. 1023–1075, 2018. View at: Google Scholar
  8. T. Syeda-Mahmood, “Role of big data and machine learning in diagnostic decision support in radiology,” Journal of the American College of Radiology, vol. 15, no. 3, pp. 569–576, 2018. View at: Publisher Site | Google Scholar
  9. M. Ahmadi, A. Sharifi, S. Hassantabar, and S. Enayati, “QAIS-DSNN: tumor area segmentation of MRI image with optimized quantum matched-filter technique and deep spiking neural network,” BioMed Research International, vol. 2021, Article ID 6653879, 16 pages, 2021. View at: Publisher Site | Google Scholar
  10. Y. Liu, S. Stojadinovic, B. Hrycushko, Z. Wardak, S. Lau, W. Lu et al., “A deep convolutional neural network-based automatic delineation strategy for multiple brain metastases stereotactic radiosurgery,” PloS One, vol. 12, no. 10, Article ID e0185844, 2017. View at: Publisher Site | Google Scholar
  11. S. P. RM, P. K. R. Maddikunta, M. Parimala et al., “An effective feature engineering for DNN using hybrid PCA-GWO for intrusion detection in IoMT architecture,” Computer Communications, vol. 160, pp. 139–149, 2020. View at: Publisher Site | Google Scholar
  12. O. Hadad, R. Bakalo, R. Ben-Ari, S. Hashoul, and G. Amit, “Classification of breast lesions using cross-modal deep learning,” in Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), pp. 109–112, Melbourne, Australia, 2017, April. View at: Google Scholar
  13. R. Yamashita, M. Nishio, R. K. G. Do, and K. Togashi, “Convolutional neural networks: an overview and application in radiology,” Insights Into Imaging, vol. 9, no. 4, pp. 611–629, 2018. View at: Publisher Site | Google Scholar
  14. A. Abdel Rahman, S. Belhaouari, A. Bouzerdoum, H. Baali, T. Alam, and A. Eldaraa, “breast mass tumor classification using deep learning,” in Proceedings of the 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), Doha, Qatar, February 2020. View at: Google Scholar
  15. M. Ahmadi, A. Sharifi, M. Jafarian Fard, and N. Soleimani, “Detection of brain lesion location in MRI images using convolutional neural network and robust PCA,” International Journal of Neuroscience, vol. 12, pp. 1–12, 2021 b. View at: Publisher Site | Google Scholar
  16. S. Charan, M. J. Khan, and K. Khurshid, “Breast cancer detection in mammograms using convolutional neural network,” in Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), pp. 1–5, IEEE, Sukkur, Pakistan, 2018, March. View at: Google Scholar
  17. M. Veena and M. C. Padma, “Detection of breast cancer using digital breast tomosynthesis,” in Emerging Research in Electronics, Computer Science and Technology, pp. 721–730, Springer, Singapore, 2019. View at: Publisher Site | Google Scholar
  18. S. Punitha, A. Amuthan, and K. Suresh Joseph, “Enhanced monarchy butterfly optimization technique for effective breast cancer diagnosis,” Journal of Medical Systems, vol. 43, no. 7, 206 pages, 2019. View at: Publisher Site | Google Scholar
  19. S. Sakri, N. B. Abdul Rashid, and Z. Muhammad Zain, “Particle swarm optimization feature selection for breast cancer recurrence prediction,” IEEE Access, vol. 6, pp. 29637–29647, 2018. View at: Publisher Site | Google Scholar
  20. S. Karthik, R. Srinivasa Perumal, and C. Mouli, “Breast cancer classification using deep neural networks,” in Knowledge Computing and its Applications, pp. 227–241, Springer, Singapore, 2018. View at: Publisher Site | Google Scholar
  21. A. Unni, E. Nidheep, S. Vinod, and S. Lekha, “Tumour detection in double threshold segmented mammograms using optimized GLCM features fed SVM,” in Proceedings of the 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pp. 554–559, IEEE, Bangalore, India, September 2018. View at: Google Scholar
  22. D. Selvathi and A. Aarthy Poornila, “Deep learning techniques for breast cancer detection using medical image analysis,” in Biologically Rationalized Computing Techniques for Image Processing Applications, pp. 159–186, Springer, Berlin, Germany, 2018. View at: Publisher Site | Google Scholar
  23. S. Sasikala and M. Ezhilarasi, “Fusion of k-Gabor features from medio-lateral-oblique and craniocaudal view mammograms for improved breast cancer diagnosis,” Journal of Cancer Research and Therapeutics, vol. 14, no. 5, 1036 pages, 2018. View at: Publisher Site | Google Scholar
  24. M. Heidari, A. B. Hollingsworth, G. Danala et al., “Prediction of breast cancer risk using a machine learning approach embedded with a locality preserving projection algorithm,” Physics in Medicine & Biology, vol. 63, no. 3, Article ID 035020, 2018. View at: Publisher Site | Google Scholar
  25. N. Tariq, B. Abid, K. Ali Qadeer, I. Hashim, Z. Ali, and I. Khosa, “Breast cancer classification using global discriminate features in mammographic images,” Breast Cancer, vol. 10, 2 pages, 2019. View at: Publisher Site | Google Scholar
  26. K. L. Kashyap, M. K. Bajpai, P. Khanna, and G. George, “Mesh‐free based variational level set evolution for breast region segmentation and abnormality detection using mammograms,” International Journal for Numerical Methods in Biomedical Engineering, vol. 34, no. 1, Article ID e2907, 2018. View at: Publisher Site | Google Scholar
  27. D. Ribli, H. Anna, Z. Unger, P. Pollner, and I. Csabai, “Detecting and classifying lesions in mammograms with deep learning,” Scientific Reports, vol. 8, no. 1, 4165 pages, 2018. View at: Publisher Site | Google Scholar
  28. F. Gao, T. Wu, L. Jing et al., “A shallow-deep CNN for improved breast cancer diagnosis,” Computerized Medical Imaging and Graphics, vol. 70, pp. 53–62, 2018. View at: Publisher Site | Google Scholar
  29. H. Jung, B. Kim, I. Lee et al., “Detection of masses in mammograms using a one-stage object detector based on a deep convolutional neural network,” PLoS One, vol. 13, no. 9, Article ID e0203355, 2018. View at: Publisher Site | Google Scholar
  30. S. Shams, R. Platania, J. Zhang, J. Kim, K. Lee, and S.-J. Park, “Deep generative breast cancer screening and diagnosis,” in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 859–867, Springer, Granada, Spain, September 2018. View at: Publisher Site | Google Scholar
  31. M. A. Al-masni, M. A. Al-antari, J.-M. Park et al., “Simultaneous detection and classification of breast masses in digital mammograms via a deep learning YOLO-based CAD system,” Computer Methods and Programs in Biomedicine, vol. 157, pp. 85–94, 2018. View at: Publisher Site | Google Scholar
  32. H. Chougrad, Z. Hamid, and A. Omar, “Deep convolutional neural networks for breast cancer screening,” Computer Methods and Programs in Biomedicine, vol. 157, pp. 19–30, 2018. View at: Publisher Site | Google Scholar
  33. D. A. Ragab, M. Sharkas, S. Marshall, and J. Ren, “Breast cancer detection using deep convolutional neural networks and support vector machines,” Peer Reviewed Journal, vol. 7, Article ID e6201, 2019. View at: Publisher Site | Google Scholar
  34. M. Hazarika and L. B. Mahanta, “A new breast border extraction and contrast enhancement technique with digital mammogram images for improved detection of breast cancer,” Asian Pacific Journal of Cancer Prevention: APJCP, vol. 19, 2141 pages, 2018. View at: Publisher Site | Google Scholar
  35. P. Anjaiah, K. Rajendra Prasad, and C. Raghavendra, “Effective texture features for segmented mammogram images,” International Journal of Engineering &Technology, vol. 7, no. 3, pp. 666–669, 2018. View at: Publisher Site | Google Scholar
  36. M. M. Eltoukhy, M. Elhoseny, K. M. Hosny, and K. Amit, “Computer aided detection of mammographic mass using exact Gaussian–Hermite moments,” Journal of Ambient Intelligence and Humanized Computing, vol. 167, pp. 1–9, 2018. View at: Google Scholar
  37. T. V. Padmavathy, M. N. Vimalkumar, and D. S. Bhargava, “Adaptive clustering based breast cancer detection with ANFIS classifier using mammographic images,” Cluster Computing, vol. 22, no. 6, pp. 13975–13984, 2019. View at: Publisher Site | Google Scholar
  38. M. Tahmooresi, A. Afshar, B. Bashari Rad, K. B. Nowshath, and M. A. Bamiah, “Early detection of breast cancer using machine learning techniques,” Journal of Telecommunication, Electronic and Computer Engineering (JTEC), vol. 10, no. 3-2, pp. 21–27, 2018. View at: Google Scholar
  39. M. Amrane, S. Oukid, I. Gagaoua, and T. Ensarİ, “Breast cancer classification using machine learning,” in Proceedings of the 2018 Electric Electronics, Computer Science, Biomedical Engineerings’ Meeting (EBBT), pp. 1–4, IEEE, Istanbul, Turkey, July 2018. View at: Google Scholar
  40. R. Vijayarajeswari, P. Parthasarathy, S. Vivekanandan, and A. Alavudeen Basha, “Classification of mammogram for early detection of breast cancer using SVM classifier and Hough transform,” Measurement, vol. 146, 2019. View at: Google Scholar
  41. C. L. Chowdhary, M. Mittal, P. A. Pattanaik, and Z. Marszalek, “An efficient segmentation and classification system in medical images using intuitionist possibilistic fuzzy C-mean clustering and fuzzy SVM algorithm,” Sensors, vol. 20, no. 14, 3903 pages, 2020. View at: Publisher Site | Google Scholar
  42. C. L. Chowdhary, P. G. Shynu, and V. K. Gurani, “Exploring breast cancer classification of histopathology images from computer vision and image processing algorithms to deep learning,” International Journal of Advanced Science and Technology, vol. 29, pp. 43–48, 2020. View at: Google Scholar
  43. C. L. Chowdhary and D. P. Acharjya, “Segmentation of mammograms using a novel intuitionistic possibilistic fuzzy c-mean clustering algorithm,” in Nature Inspired Computing, pp. 75–82, Springer, Singapore, 2018. View at: Publisher Site | Google Scholar
  44. A. Rampun, B. W. Scotney, P. J. Morrow, and H. Wang, “Breast mass classification in mammograms using ensemble convolutional neural networks,” in Proceedings of the 2018 IEEE 20th International Conference on E-Health Networking, Applications and Services (Healthcom), pp. 1–6, IEEE, Ostrava, Czech Republic, September 2018. View at: Google Scholar
  45. E.-K. Kim, H.-E. Kim, K. Han et al., “Applying data-driven imaging biomarker in mammography for breast cancer screening: preliminary study,” Scientific Reports, vol. 8, no. 1, 2762 pages, 2018. View at: Publisher Site | Google Scholar
  46. X. Zhang, Y. Zhang, Y. Erik et al., “Classification of whole mammogram and tomosynthesis images using deep convolutional neural networks,” IEEE Transactions on Nanobioscience, vol. 17, no. 3, pp. 237–242, 2018. View at: Publisher Site | Google Scholar
  47. M. S. Salama, A. S. Eltrass, and H. M. Elkamchouchi, “An improved approach for computer-aided diagnosis of breast cancer in digital mammography,” in Proceedings of the 2018 IEEE International Symposium on Medical Measurements and Applications (MeMeA), pp. 1–5, IEEE, Rome, Italy, June 2018. View at: Google Scholar
  48. J. Suckling, J. Parker, D. Dance et al., Mammographic Image Analysis Society (MIAS) Database v1.21 [Dataset], 2015. View at: Google Scholar
  49. M. Rakshit and S. Das, “An efficient ECG denoising methodology using empirical mode decomposition and adaptive switching mean filter,” Biomedical Signal Processing and Control, vol. 40, pp. 140–148, 2018. View at: Publisher Site | Google Scholar
  50. B. Zong, Q. Song, M. R. Min et al., “Deep autoencoding Gaussian mixture model for unsupervised anomaly detection,” in Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 2018, February. View at: Google Scholar
  51. E. Tzoreff and A. J. Weiss, “Expectation-maximization algorithm for direct position determination,” Signal Processing, vol. 133, pp. 32–39, 2017. View at: Publisher Site | Google Scholar
  52. R. Liu, C. Huang, T. Li, L. Yang, and H. Zhu, “Statistical disease mapping for heterogeneous neuroimaging studies,” in Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pp. 1415–1418, IEEE, Washington, DC, USA, 2018 April. View at: Google Scholar
  53. Y. Zeinali and B. A. Story, “Competitive probabilistic neural network,” Integrated Computer-Aided Engineering, vol. 24, no. 2, pp. 105–118, 2017. View at: Publisher Site | Google Scholar

Copyright © 2021 Zeynab Nasr Isfahani et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

