Advanced Deep Learning and Neuro-Evolution Metaheuristic Techniques in Medical Applications
Classification of Breast Cancer Histopathological Images Using DenseNet and Transfer Learning
Breast cancer is one of the most common invasive cancers in women. Analyzing breast cancer is nontrivial and may lead to disagreements among experts. Although deep learning methods have achieved excellent performance in classification tasks, including breast cancer histopathological images, the existing state-of-the-art methods are computationally expensive and may overfit because they extract features from in-distribution images. In this paper, our contribution is mainly twofold. First, we perform a short survey of deep-learning-based models for classifying histopathological images to investigate the most popular and optimized training-testing ratios. Our findings reveal that the most popular training-testing ratio for histopathological image classification is 70%: 30%, whereas the best performance (e.g., accuracy) is achieved with a training-testing ratio of 80%: 20% on an identical dataset. Second, we propose a method named DenTnet to classify breast cancer histopathological images. DenTnet utilizes the principle of transfer learning to mitigate the problem of extracting features from the same distribution, using DenseNet as a backbone model. The proposed DenTnet method is shown to be superior to a number of leading deep learning methods in terms of detection accuracy (up to 99.28% on the BreaKHis dataset with a training-testing ratio of 80%: 20%) with good generalization ability and computational speed. DenTnet thus mitigates the limitations of existing methods, namely their high computational cost and their reliance on a single feature distribution.
Breast cancer is one of the most common invasive cancers in women worldwide. It has recently overtaken lung cancer as the world's most frequently diagnosed cancer . The diagnosis of breast cancer in its early stages significantly decreases the mortality rate by allowing the choice of adequate treatment. With the advent of pattern recognition and machine learning, a great deal of handcrafted or engineered features-based studies have been proposed for classifying breast cancer histology images. In image classification, feature extraction is a cardinal process used to maximize the classification accuracy while minimizing the number of selected features [2–5]. Deep learning models have the power to automatically extract features, retrieve information, and learn high-level representations of the data. Thus, they can solve the problems of common feature extraction methods. The automated classification of breast cancer histopathological images is one of the important tasks in CAD (Computer-Aided Detection/Diagnosis) systems, and deep learning models play a remarkable role by detecting, classifying, and segmenting breast cancer histopathological images. Many researchers worldwide have invested appreciable effort in developing robust computer-aided tools for the classification of breast cancer histopathological images using deep learning. At present, the most popular deep learning models proposed in this research arena are based on CNNs [6–66].
A pretrained CNN model, for example, DenseNet , utilizes dense connections between layers, reduces the number of parameters, strengthens feature propagation, and encourages feature reuse. This improved parameter efficiency makes the network faster and easier to train. Nevertheless, a DenseNet  has excessive connections, as all of its layers are directly connected to each other. These lavish connections have been shown to decrease the computational and parameter efficiency of the network. In addition, the features extracted by a neural network model stay in the same distribution; hence, the model might overfit, as the features cannot be guaranteed to be sufficient. Besides, training a CNN demands a large number of training samples; otherwise, it leads to overfitting and reduced generalization ability. However, it is arduous to secure labeled breast cancer histopathological images, which severely limits the classification ability of CNNs .
On the other hand, transfer learning can expand prior knowledge about data by including information from a different domain to target future data . Consequently, it is a good idea to extract knowledge from a related domain and then transfer it to the target domain. This way, resources can be saved and the efficiency of the model can be improved during training. A great number of breast cancer diagnosis methods based on transfer learning have been proposed and implemented by various researchers (e.g., [57–66]) to achieve state-of-the-art performance (e.g., ACC, AUC, PRS, RES, and F1S) on different datasets. Yet, the limitations of such performance indices, algorithmic assumptions, and computational complexities indicate the need for further development of smart algorithms.
In this paper, we propose a novel neural-network-based approach called DenTnet (see Figure 1) for classifying breast cancer histopathological images by taking advantage of both DenseNet  and transfer learning . To address the cross-domain learning problem, we employ the principle of transfer learning to transfer information from a related domain to the target domain. Our proposed DenTnet is anticipated to increase the accuracy of breast cancer histopathological image classification and to accelerate the learning process. DenTnet demonstrates better performance than alternative CNN and/or transfer-learning-based methods (e.g., see Table 1) on the same dataset and training-testing ratio.
To find the best performance scores of deep learning models for classifying histopathological images, contrasting training-testing ratios have been applied to divergent models on the same dataset. What would be the most popular and/or optimized training-testing ratios to classify histopathological images considering existing state-of-the-art deep learning models? There exist many surveys with systematic, in-depth discussion of contemporary methods and materials for the automatic classification of breast cancer histopathological images [68–72]. Nevertheless, to the best of our knowledge, this question was not directly or indirectly addressed in any of the previous studies. Hence, we perform a succinct survey to investigate it. Our findings show that the most popular training-testing ratio for histopathological image classification is 70%: 30%, whereas the best performance (accuracy) is achieved by using the training-testing ratio of 80%: 20% on the identical dataset.
In summary, the main contributions of this work are as follows:(i)Determine the most popular and/or optimized training-testing ratios for classifying histopathological images using the existing state-of-the-art deep learning models.(ii)Propose a novel approach named DenTnet that amalgamates both DenseNet  and the transfer learning technique to classify breast cancer histopathological images. DenTnet is anticipated to achieve high accuracy and accelerate the learning process due to its utilization of dense connections from its backbone architecture (i.e., DenseNet ).(iii)Determine the generalization ability of DenTnet and measure its superiority using nonparametric statistical tests.
The rest of the paper is organized as follows: Section 2 presents some preliminaries; Section 3 briefly surveys the existing deep models for histopathological image classification and reports our findings; Section 4 depicts the architecture of our proposed DenTnet and its implementation details; Section 5 demonstrates the experimental results and comparison on the BreaKHis dataset ; Section 6 evaluates the generalization ability of DenTnet; Section 7 discusses nonparametric statistical tests, their reported results, and reasons for superiority along with a few hints for further study; and Section 8 concludes the paper.
Breast cancer is one of the oldest known kinds of cancer, first recorded in Egypt . It is caused by the uncontrolled growth and division of cells in the breast, whereby a mass of tissue called a tumor is created. Nowadays, it is one of the most terrifying cancers in women worldwide. For example, in 2020, there were 2.3 million women diagnosed with breast cancer and 685,000 deaths globally . Early detection of breast cancer can save many lives. Breast cancer can be diagnosed using histology and radiology images. The analysis of radiology images can help to identify the areas where an abnormality is located. However, they cannot be used to determine whether the area is cancerous . On the other hand, a biopsy is an examination of tissue removed from a living body to discover the presence, cause, or extent of a disease (e.g., cancer). A biopsy is the only reliable way to determine whether an area is cancerous . Upon completion of the biopsy, the diagnosis depends on the qualification of the histopathologists, who determine the cancerous regions and the degree of malignancy [7, 75]. If the histopathologists are not well trained, the histopathology or biopsy report can lead to an incorrect diagnosis. Besides, there might be a lack of specialists, which may hold up tissue samples for up to a few months. In addition, diagnoses made by unspecialized histopathologists are sometimes difficult to replicate. To make matters worse, at times even expert histopathologists tend to disagree with each other. Despite the notable progress achieved by diagnostic imaging technologies, the final breast cancer grading and staging are still done by pathologists using visual inspection of histological samples under microscopes.
As analyzing breast cancer is nontrivial and may lead to disagreements among experts, computerized and interdisciplinary systems can improve the accuracy of diagnostic results while reducing the processing time. CAD can assist doctors in reading and interpreting medical images by locating and identifying possible abnormalities in the image . It has been reported that utilizing CAD to automatically classify histopathological images not only improves the diagnostic efficiency at low cost but also provides doctors with more objective and accurate diagnosis results . Consequently, there is a strong demand for CAD . Several comprehensive surveys of CAD-based methods exist in the literature. For example, Zebari et al.  provided a common description and analysis of existing CAD systems that utilize both machine learning and deep learning methods, as well as their current state based on mammogram image modalities and classification methods. However, the existing breast cancer diagnosis models suffer from complexity, cost, human-dependency, and inaccuracy . Furthermore, the limited availability of datasets is another practical problem in this arena of research. In addition, every deep learning model demands a metric to judge its performance. Explicitly, performance evaluation metrics are an integral part of every deep learning model, as they indicate its progress.
In the two following subsections, we discuss the commonly used datasets for classifying histopathological images and the performance evaluation metrics of various deep learning models.
2.1. Brief Description of Datasets
Accessing relevant images and datasets is one of the key challenges for image analysis researchers. Datasets and benchmarks enable validating and comparing methods for developing smarter algorithms. Recently, several datasets of breast cancer histopathology images have been released for this purpose. Figure 2 shows a sample breast cancer histopathological image from the BreaKHis  dataset of a patient who suffered from papillary carcinoma (malignant) at four magnification levels: (a) 40x, (b) 100x, (c) 200x, and (d) 400x . The following list of datasets has been used in the literature, as incorporated in Table 2:(i)BreaKHis  It is considered the most popular and clinically valued public breast cancer histopathological dataset. It consists of 7909 breast cancer histopathology images, 2480 benign and 5429 malignant samples, from 82 patients at different magnification factors (i.e., 40x, 100x, 200x, and 400x) .(ii)Bioimaging2015  The Bioimaging2015  dataset contained 249 microscopy training images and 36 microscopy testing images in total, equally distributed among four classes.(iii)ICIAR2018  This dataset, available as part of the BACH grand challenge , was an extended version of the Bioimaging2015 dataset [8, 122]. It contained 100 images in each of four categories (i.e., normal, benign, in situ carcinoma, and invasive carcinoma) .(iv)BACH  The BACH database holds images obtained from the ICIAR2018 Grand Challenge . It consists of 400 images with an equal distribution of normal (100), benign (100), in situ carcinoma (100), and invasive carcinoma (100). The high-resolution images are digitized under the same conditions at a magnification factor of 200x. In this dataset, images have a fixed size of pixels .(v)TMA  The TMA (Tissue MicroArray) database from Stanford University is a public resource with access to 205161 images. 
All the whole-slide images have been scanned at a 20x magnification factor for the tissue and 40x for the cells .(vi)Camelyon  The Camelyon (cancer metastases in lymph nodes) dataset was established for a research challenge competition in 2016. The Camelyon organizers trained CNNs on smaller datasets for classifying breast cancer in lymph nodes and prostate cancer biopsies. The training dataset consists of 270 whole-slide images; among them, 160 are normal slides and 110 slides contain metastases .(vii)PCam  It is a modified version of the Patch Camelyon (PCam) dataset, which consists of 327680 microscopy images with -pixel sized patches extracted from the whole-slide images, each with a binary label indicating the presence of metastatic tissue .(viii)HASHI  Each image in the dataset of HASHI (high-throughput adaptive sampling for whole-slide histopathology image analysis)  has the size of .(ix)MIAS  The Mammographic Image Analysis Society (MIAS) database of digital mammograms  contains 322 mammogram images, each of which has a size of pixels in PGM format .(x)INbreast  The INbreast database has a total of 410 images collected from 115 cases (i.e., patients), indicating benign, malignant, and normal cases with sizes of or pixels. It contains 36 benign and 76 malignant masses .(xi)DDSM  The DDSM  dataset was collected by the expert team at the University of South Florida . It contains 2620 scanned film mammography studies. Explicitly, it involves 2620 breast cases (i.e., patients) categorized into 43 different volumes with an average size of pixels .(xii)CBIS-DDSM  The CBIS-DDSM  is an updated version of the DDSM providing easily accessible data and improved region-of-interest segmentation [128, 146]. 
The CBIS-DDSM dataset comprises 2781 mammograms in PNG format .(xiii)CMTHis  The CMTHis (Canine Mammary Tumor Histopathological Image)  dataset comprises 352 images acquired from 44 clinical cases of canine mammary tumors.(xiv)FABCD  The FABCD (Fully Annotated Breast Cancer Database)  consists of 21 annotated images of carcinomas and 19 images of benign tissue taken from 21 patients .(xv)IICBU2008  The IICBU2008 (Image Informatics and Computational Biology Unit) malignant lymphoma dataset contains 374 H&E-stained microscopy images captured using bright field microscopy .(xvi)VLAD  The VLAD (Vector of Locally Aggregated Descriptors) dataset  consists of 300 annotated images with a resolution of .(xvii)LSC  The LSC (Lymphoma Subtype Classification)  dataset has been prepared by pathologists from different laboratories to create a real-world cohort containing a larger degree of stain and scanning variance . It consists of 374 images with a resolution of .(xviii)KimiaPath24  The official KimiaPath24  dataset consists of a total of 23916 images for training and 1325 images for testing. It is publicly available. It shows various body parts with texture patterns .
2.2. Performance Evaluation Metrics
Performance evaluation of any deep learning model is an important task. An algorithm may give very pleasing results when evaluated using one metric (e.g., ACC) but poor results when evaluated against other metrics (e.g., F1S) . Usually, classification accuracy is used to measure the performance of deep learning algorithms, but it alone is not enough to judge a model perfectly. To truly judge any deep learning algorithm, we can use different types of evaluation metrics, including classification ACC, AUC, PRS, RES, F1S, RTM, and GMN.(i)ACC It is normally defined in terms of error or inaccuracy . It can be calculated using the following equation: ACC = (tp + tn)/(tp + tn + fp + fn), where tn is true negative, tp is true positive, fp is false positive, and fn is false negative. Sometimes, ACC and the percent correct classification (PCC) are used interchangeably.(ii)PRS Its best value is 100 and its worst value is 0. It can be formulated using the following equation: PRS = tp/(tp + fp) × 100%.(iii)RES It should ideally be 100 (the highest) for a good classifier. It can be calculated using the following equation: RES = tp/(tp + fn) × 100%.(iv)AUC It is one of the most widely used metrics for evaluation [177–179]. The AUC of a classifier equals the probability that the classifier ranks a randomly chosen positive sample higher than a randomly chosen negative sample. The AUC varies in value from 0 to 1. If the predictions of a model are 100% wrong, then its AUC = 0; but if its predictions are 100% correct, then its AUC = 1.(v)F1S It is the harmonic mean between precision and recall. It is also called the F-score or F-measure. It is used in deep learning . It conveys the balance between the precision and the recall. It also tells us how many instances the model classifies correctly. Its highest possible value is 1, which indicates perfect precision and recall. Its lowest possible value is 0, when either the precision or the recall is zero. 
It can be formulated as F1S = 2 × (PRS × RES)/(PRS + RES), where PRS is the number of correct positive results divided by the number of positive results predicted by the classifier and RES is the number of correct positive results divided by the number of all relevant samples.(vi)RTM Estimating the RTM complexity of algorithms is mandatory for many applications (e.g., embedded real-time systems ). The optimization of the RTM complexity of algorithms in applications is highly desirable. The total RTM can prove to be one of the most important determinative performance factors in many software-intensive systems.(vii)GMN It indicates the central tendency or typical value of a set of numbers by considering the product of their values instead of their sum. It can be used to attain a more accurate measure of returns than the arithmetic mean or average. The GMN for any set of n numbers x1, x2, …, xn can be defined as GMN = (x1 × x2 × ⋯ × xn)^(1/n).(viii)MCC The Matthews correlation coefficient (MCC) is used as a measure of the quality of binary classifications; it was introduced by biochemist Brian W. Matthews in 1975.(ix)Kappa The metric of Cohen's kappa can be used to evaluate binary classifications.
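As an illustration, the confusion-matrix-based metrics above can be sketched in a few lines of Python (a minimal sketch; the function names are ours and the values are expressed as fractions in [0, 1] rather than percentages):

```python
# Confusion-matrix-based metrics: ACC, PRS (precision), RES (recall), F1S.

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall.
    prs, res = precision(tp, fp), recall(tp, fn)
    return 2 * prs * res / (prs + res)

# Example: 8 true positives, 9 true negatives, 2 false positives, 1 false negative.
print(accuracy(8, 9, 2, 1))   # 0.85
print(f1_score(8, 2, 1))      # ~0.842
```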
3. A Succinct Survey of State of the Art
This section summarizes existing studies relevant to the classification of breast cancer histopathological images, followed by a short discussion and our findings.
3.1. Summary of Previous Studies
Table 2 provides a short summary of previous studies carried out to classify breast cancer from images. Experimental results of miscellaneous deep models in the literature on publicly available datasets demonstrate various degrees of accurate cancer prediction. However, as AUC and ACC are the most important metrics for breast cancer histopathological image classification , the experimental results in Table 2 take them into account as the performance indices.
3.2. Key Techniques and Challenges
The CNNs can be regarded as a variant of the standard neural networks. Instead of using fully connected hidden layers, the CNNs introduce a special network structure, which comprises alternating so-called convolution and pooling layers. They were first introduced to overcome known problems of fully connected deep neural networks when handling high-dimensionality structured inputs, such as images or speech. From Table 2, it is noticeable that CNNs have become the state-of-the-art solution for breast cancer histology image classification. However, there are still challenges even when using CNN-based approaches to classify pathological breast cancer images , as given below:(i)Risk of overfitting The number of parameters of a CNN increases rapidly with the size of the network, which may lead to poor learning.(ii)Being cost-intensive Obtaining a huge number of labeled breast cancer images is very expensive.(iii)Huge training data CNNs need to be trained using a lot of images, which might not be easy to find considering that collecting real-world data is a tedious and expensive process.(iv)Performance degradation Various hyperparameters have a significant influence on the performance of a CNN model. The model's parameters need to be tuned properly to achieve a desirable result , which usually is not an easy task.(v)Employment difficulty In the process of training a CNN model, it is usually inevitable to readjust the learning rate parameters to get better performance. This makes the algorithm arduous for nonexpert users to apply in real-life applications .
Many methods have been proposed in the literature considering the aforementioned challenges. In 2012, the AlexNet  architecture was introduced for the ImageNet Challenge with an error rate of 16%. Later, various variations of AlexNet  with denser networks were introduced. Both AlexNet  and VGGNet  were pioneering works that demonstrated the potential of deep neural networks . AlexNet was designed by Alex Krizhevsky . It contained 8 layers; the first 5 were convolutional layers, some of them followed by max-pooling layers, and the last 3 were fully connected layers . It was the first large-scale CNN architecture that did well on ImageNet  classification. AlexNet  was the winner of the ILSVRC  classification benchmark in 2012. Nevertheless, it was not very deep. SqueezeNet  was proposed to create a smaller neural network with fewer parameters that could easily fit into computer memory and be transmitted over a computer network. It achieved AlexNet -level accuracy on ImageNet with 50x fewer parameters. It was compressed to less than 0.5 MB (510x smaller than AlexNet ) with model compression techniques. The VGG  is a deep CNN used to classify images. VGG19 is a variant of VGG which consists of 19 weighted layers (i.e., 16 convolution layers and 3 fully connected layers, in addition to 5 max-pooling layers and 1 SoftMax layer) . There exist many variants of VGG  (e.g., VGG11, VGG16, and VGG19). VGG19 requires 19.6 billion FLOPs (floating point operations). VGG  is easy to implement but slow to train. Nowadays, many deep-learning-based methods are implemented on influential backbone networks; among them, both DenseNet  and ResNet  are very popular. Due to the long path between the input layer and the output layer, information can vanish before reaching its destination. DenseNet  was developed to minimize this effect. The key base element of ResNet  is the residual block. 
DenseNet  concentrates on making deep learning networks even deeper while simultaneously making them more efficient to train by applying shorter connections among layers. In short, ResNet  adopts summation, whereas DenseNet  deals with concatenation. Yet, the dense concatenation of DenseNet  creates the challenge of demanding high GPU (Graphics Processing Unit) memory and more training time . On the other hand, the identity shortcut that stabilizes training in ResNet  limits its representation capacity . In summary, there is a dilemma in the choice between ResNet  and DenseNet  for many applications in terms of performance and GPU resources .
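The summation-versus-concatenation distinction can be illustrated with a small NumPy sketch (the shapes here are arbitrary illustrations, not DenTnet code):

```python
import numpy as np

# ResNet adds the block output F(x) to its input x (summation), so the
# feature-map shape is unchanged; DenseNet concatenates x and F(x) along
# the channel axis, so the channel count grows layer by layer.

x = np.random.rand(1, 8, 8, 16)   # feature map: batch, height, width, channels
fx = np.random.rand(1, 8, 8, 16)  # output of some convolutional block F(x)

resnet_out = x + fx                               # summation: shape unchanged
densenet_out = np.concatenate([x, fx], axis=-1)   # concatenation: channels double

print(resnet_out.shape)    # (1, 8, 8, 16)
print(densenet_out.shape)  # (1, 8, 8, 32)
```

The growing channel count is precisely why dense concatenation demands more GPU memory than residual summation.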
3.3. Our Findings
Although the various deep learning models in Table 2 often achieved pretty good AUC and ACC scores, such models demand a large amount of data, whereas breast cancer diagnosis always suffers from a lack of data. Adopting artificial data is a tentative solution to this issue, but the determination of the best hyperparameters is extremely difficult. Besides efficient deep learning models, the datasets themselves have some limitations, for example, overinterpretation, which cannot be diagnosed using typical evaluation methods based on the ACC of the model. Deep learning models trained on popular datasets (e.g., BreaKHis ) may suffer from overinterpretation. In overinterpretation, deep learning models make confident predictions based on details that do not make any sense to humans (e.g., promiscuous patterns and image borders). When deep learning models are trained on datasets, they can make apparently authentic predictions based on both meaningful and meaningless subtle signals. This effect can eventually reduce the overall classification performance of deep models. Most probably, this is one of the reasons why no state-of-the-art deep learning model in the literature for classifying breast cancer histopathological images (see Table 2) could show an ACC of 100%.
In addition, the training-testing ratio can regulate the performance of a deep model for image classification. We wish to determine the most popular and/or optimized training-testing ratios for classifying histopathological images using Table 2. To this end, we have calculated the usage frequency of each training-testing ratio (i.e., the percentage of papers that used the same ratio) by considering the data in Table 2 and the following equation: frequency = (number of papers using a given ratio/total number of papers) × 100%.
Figure 3 demonstrates the frequency of usage of the training-testing ratios considering the data in Table 2. From Figure 3, it is noticeable that the most popular training-testing ratio for histopathological image classification is 70%: 30%. The second most used training-testing ratio is 80%: 20%, followed by 90%: 10%, 75%: 25%, 50%: 50%, and so on. Figure 4 presents the GMN of ACC for the most frequently used training-testing ratios considering the data in Table 2. It tells a different story: in terms of ACC, the ratio of 80%: 20% is the best option for the training-testing ratio to classify histopathological images. Explicitly, the GMN of ACC formed a Gaussian-shaped curve whose highest peak belonged to the ratio of 80%: 20%. In short, considering ACC, the training-testing ratio of 80%: 20% is the finest and optimal choice for classifying histopathological images.
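The two statistics used here, usage frequency and the GMN of ACC, can be sketched as follows (the survey entries below are made-up illustrations; the real values come from Table 2):

```python
import math

def usage_frequency(ratio, ratios_used):
    """Percentage of surveyed papers that used the given training-testing ratio."""
    return 100.0 * ratios_used.count(ratio) / len(ratios_used)

def geometric_mean(values):
    """n-th root of the product of n values, computed in log space for stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Illustrative (made-up) survey data: one training-testing ratio per paper.
papers = ["70:30", "70:30", "80:20", "90:10", "70:30"]
print(usage_frequency("70:30", papers))       # 60.0
print(geometric_mean([0.95, 0.97, 0.99]))     # ~0.97
```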
4. Methods and Materials
This section explains in detail our proposed DenTnet model and its implementation. Figure 5 demonstrates a general flowchart of our methodology to classify breast cancer histopathological images automatically.
4.1. Architecture of Our Proposed DenTnet
The architecture of our proposed DenTnet is shown in Figure 1, which consists of four different blocks, namely, the input volume, training from scratch, transfer learning, and fusion and recognition.
4.1.1. Input Volume
The input is a 3D RGB (three-dimensional red, green, and blue) image of a fixed size.
4.1.2. Training from Scratch
Initially, features are extracted from the input images by feeding the input to the convolutional layer. The convolution (conv) layers contain a set of filters (or kernels) whose parameters are learned throughout training. The size of each filter is usually smaller than the actual image; each filter convolves with the image and creates an activation map. Thereafter, the pooling layer progressively decreases the spatial size of the representation to reduce the number of parameters in the network. Instead of saturating activation functions such as sigmoid and tanh, the network utilizes ReLU as its activation function. Finally, the extracted features, that is, the output of the last layer of the training-from-scratch block, are amalgamated with the features extracted by the transfer learning approach. Figure 1 includes the design of the DenseNet  architecture used to extract features via the learning-from-scratch approach.
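The layer types described above (convolution, ReLU, and pooling) can be illustrated with a toy NumPy sketch; in DenTnet these operations are provided by the DenseNet backbone, so this is only a conceptual illustration of each operation:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 'valid' convolution (no padding), producing an activation map."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit activation."""
    return np.maximum(x, 0)

def max_pool_2x2(x):
    """2x2 max pooling; trims odd edges, halving the spatial size."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).max(axis=(1, 3))

image = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "image"
kernel = np.ones((2, 2))                           # toy 2x2 filter
feature = relu(conv2d_valid(image, kernel))        # 3x3 activation map
pooled = max_pool_2x2(feature)                     # spatially reduced map
print(pooled)                                      # [[30.]]
```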
4.1.3. Transfer Learning
In transfer learning, given that a domain D = {X, P(X)} consists of a feature space X and a marginal probability distribution P(X), where X = {x1, …, xn}, and a task T = {Y, f(·)} consists of a label space Y and an objective predictive function f: X → Y, the corresponding label f(x) of a new instance x is predicted by the function f, which is learned from training data consisting of pairs {xi, yi}, where xi ∈ X and yi ∈ Y. When utilizing the learning-from-scratch approach, the extracted features stay in the same distribution. To solve this problem, we amalgamated both the learning-from-scratch and the transfer learning approaches. The learned parameters are further fine-tuned by retraining on the extracted features. This is anticipated to expand the prior knowledge of the network about the data, which might improve the efficiency of the model during training, thereby accelerating the learning speed and also increasing the accuracy of the model. As shown in Figure 1, there is a connection between the input volume and transfer learning blocks. The transfer learning approach extracts features using the ImageNet  weights, that is, the parameters (both trainable and nontrainable) learned from the ImageNet  dataset. Since transfer learning involves transferring knowledge from one domain to another, we have utilized the ImageNet weights, as the models developed for the ImageNet  classification competition are measured against each other for performance. Hence, the ImageNet weights provide a measure of how good a model is for classification. Besides, the ImageNet weights have already shown markedly high accuracy . The extracted features are then passed to the fusion and recognition block, where they are amalgamated with the features extracted by the learning-from-scratch block for recognition.
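The principle can be illustrated with a toy NumPy sketch in which a frozen weight matrix stands in for the pretrained (ImageNet) feature extractor and only a small classification head is fine-tuned; all data and weights below are synthetic illustrations, not DenTnet's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

W_frozen = rng.normal(size=(10, 4))      # "pretrained" weights: never updated
X = rng.normal(size=(64, 10))            # synthetic input batch
y = (X[:, 0] > 0).astype(float)          # synthetic binary labels

w_head = np.zeros(4)                     # trainable classification head

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

feats = X @ W_frozen                     # features from the frozen extractor
for _ in range(200):                     # fine-tune only the head
    p = sigmoid(feats @ w_head)
    grad = feats.T @ (p - y) / len(y)    # gradient of the cross-entropy loss
    w_head -= 0.1 * grad                 # the frozen weights are left untouched
```

Only `w_head` is updated, which mirrors how the transferred (pretrained) parameters carry knowledge from the source domain while the head adapts to the target task.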
4.1.4. Fusion and Recognition
The features extracted based on the ImageNet weights are amalgamated with the features extracted by the training-from-scratch block. A global average pooling is then performed. The dropout technique, which helps to prevent a model from overfitting, is used with the dense fully connected layers. The fully connected layer compiles the data extracted by the previous layers to form the final output. The last step passes the features through the fully connected layer, which then uses SoftMax to determine the class of the input image.
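A minimal NumPy sketch of this block (the feature shapes and weights below are illustrative assumptions, not DenTnet's actual dimensions):

```python
import numpy as np

def softmax(z):
    """Numerically stable SoftMax over a score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
scratch_feats = rng.normal(size=(7, 7, 64))    # from the scratch-trained branch
transfer_feats = rng.normal(size=(7, 7, 64))   # from the transfer-learning branch

# Fusion: concatenate the two feature maps along the channel axis.
fused = np.concatenate([scratch_feats, transfer_feats], axis=-1)  # (7, 7, 128)

# Global average pooling collapses the spatial dimensions.
pooled = fused.mean(axis=(0, 1))               # shape (128,)

# Dense layer + SoftMax yields the two class probabilities.
W = rng.normal(size=(128, 2)) * 0.1            # illustrative dense-layer weights
probs = softmax(pooled @ W)                    # probabilities over {benign, malignant}
```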
4.2. Implementation Details
4.2.1. Data Preparation
We have adopted data augmentation, stain normalization, and image normalization strategies to optimize the training process. Each of them is explained briefly below.
4.2.2. Data Augmentation
Due to the limited size of the input samples, the training of our DenTnet was prone to overfitting, which caused a low detection rate. One solution to alleviate this issue was data augmentation, which generates more training data from the existing training set. Dissimilar data augmentation techniques (e.g., horizontal flipping, rotating, and zooming) were applied to the datasets to create more training samples.
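Two of these augmentation operations can be sketched directly in NumPy (a conceptual illustration; in practice such transforms were applied through the training pipeline):

```python
import numpy as np

image = np.arange(12).reshape(3, 4)      # stand-in for a histopathology image

flipped = np.fliplr(image)               # horizontal flip
rotated = np.rot90(image)                # 90-degree rotation

# Both transforms preserve the pixel content, so labels stay valid;
# flipping twice recovers the original image.
assert np.array_equal(np.fliplr(flipped), image)
```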
4.2.3. Stain Normalization
The breast cancer tissue slices are stained with H&E to differentiate between nuclei stained purple and other tissue structures stained pink and red, helping pathologists analyze the shape of nuclei, density, variability, and overall tissue structure . H&E staining variability between acquired images exists due to different staining protocols, scanners, and raw materials. This is a common problem in histological image analysis. Therefore, stain normalization of H&E-stained histology slides was a key step to reduce the color variation and obtain better color consistency prior to feeding input images into the DenTnet architecture. Different techniques are available for stain normalization of histological images. We have considered the Macenko technique , due to its promising performance in many studies, to standardize the color intensity of the tissue. This technique is based on a singular value decomposition. A logarithmic function is used to adaptively transform the color concentration of the original histopathological image into its optical density (OD) image as OD = −log10(I/I0), where OD denotes the matrix of optical density values, I belongs to the image intensity in red-green-blue space, and I0 addresses the illuminating intensity incident on the histological sample.
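The optical-density transform, the first step of the Macenko technique, can be sketched as follows (assuming 8-bit images, so the background intensity I0 defaults to 255; the subsequent singular value decomposition step is omitted):

```python
import numpy as np

def optical_density(I, I0=255.0):
    """OD = -log10(I / I0), applied per RGB channel."""
    I = np.maximum(I.astype(float), 1.0)   # avoid log(0) for fully dark pixels
    return -np.log10(I / I0)

pixel = np.array([200, 120, 180])          # an H&E-stained RGB pixel
print(optical_density(pixel))              # higher OD where the stain absorbs more

# Pure background (I == I0) has zero optical density.
assert np.allclose(optical_density(np.array([255, 255, 255])), 0.0)
```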
4.2.4. Intensity Normalization
Intensity normalization was another important preprocessing step. Its primary aim was to bring each input image into the same range of values before feeding it to the DenTnet. It also sped up the convergence of DenTnet. Input images were normalized by min-max normalization (one of the most popular ways to normalize data) to the intensity range of [0, 1], which can be computed as x′ = (x − x_min)/(x_max − x_min), where x, x_min, and x_max indicate the pixel, minimum, and maximum intensity values of the input image, respectively.
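The min-max computation above amounts to a one-line rescaling; a small NumPy sketch:

```python
import numpy as np

def min_max_normalize(image):
    """Rescale pixel intensities linearly to the range [0, 1]."""
    x_min, x_max = image.min(), image.max()
    return (image - x_min) / (x_max - x_min)

img = np.array([[50, 100], [150, 250]], dtype=float)
norm = min_max_normalize(img)   # the minimum maps to 0.0, the maximum to 1.0
```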
4.2.5. Hardware and Software Requirements
DenTnet was implemented using the TensorFlow and Keras framework [188, 189] and coded in Python using Jupyter Notebook on a Kaggle Private Kernel. The experiment was performed on a machine with the following configuration: Intel® Xeon® CPU @ 2.30 GHz with 16 CPU Cores, 16 GB RAM, and NVIDIA Tesla P100 GPU. We implemented and trained everything on the cloud using Kaggle GPU hours.
4.2.6. Training and Testing Setup
The dataset was divided in an 80%:20% ratio, where 80% was used for training and the remaining 20% was used for testing. The data used for testing were kept isolated from the training set and never seen by the model during training. To evaluate the image classification, we have computed the recognition rate at the image level, that is, the ratio of (i) the number of correctly classified images to (ii) the total number of images in the test set.
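An 80%:20% split with an isolated test set can be sketched as follows; the shuffling seed and sample count are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def split_80_20(n_samples):
    """Shuffle sample indices and hold out the last 20% as an isolated test set."""
    idx = rng.permutation(n_samples)
    cut = int(0.8 * n_samples)
    return idx[:cut], idx[cut:]

train_idx, test_idx = split_80_20(1000)   # 800 training / 200 testing indices
```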
4.2.7. Training Procedure
In the training of a neural network, a measure of error, known as the loss function, is required to compute the error between the targeted output and the computed output on the training data, and an optimization algorithm is needed to minimize this function. We have considered the Adam optimizer with numerical stability constant epsilon = None, decay = 0.0, and AMSGrad = True. Table 3 presents the hyperparameter values of the proposed deep learning model. The learning rate (also referred to as step size) signifies the proportion by which weights are updated. A smaller value (e.g., 0.000001) slows down the learning process during training, whereas a larger value (e.g., 0.400) results in faster learning. We have considered a learning rate of 0.001. The exponential decay rates of the first and second moment estimates were set to 0.60 and 0.90, respectively. To update the weights, the number of epochs was set to 50 with 3222 steps per epoch and a batch size of 32. For the BreaKHis dataset, we had a training sample of 103104 images, with 12288 validation samples and 697 testing samples. The training process used 10-fold cross-validation, where one of the folds was used to validate the model and the remaining 9 folds were used to train the DenTnet model. The fully connected layer used 1024 units with a dropout rate of 0.50. Finally, the last layer used two units with a SoftMax activation to classify the image into two classes (i.e., benign and malignant). We have used categorical cross-entropy as the objective function to quantify the difference between two probability distributions. The whole training process took more than 4 hours for the breast cancer tissue images.
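The categorical cross-entropy objective named above can be written out directly; a minimal NumPy sketch, with an epsilon guard against log(0) that is our addition:

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean cross-entropy between one-hot targets and predicted distributions."""
    return -np.mean(np.sum(y_true * np.log(y_pred + eps), axis=1))

# Two samples, two classes (benign vs. malignant), one-hot encoded targets.
y_true = np.array([[1.0, 0.0], [0.0, 1.0]])
y_pred = np.array([[0.9, 0.1], [0.2, 0.8]])
loss = categorical_cross_entropy(y_true, y_pred)
# loss = -(ln 0.9 + ln 0.8) / 2, i.e. about 0.164
```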
5. Experimental Results and Comparison on BreaKHis Dataset
This section demonstrates the experimental results achieved from classifying the breast cancer histopathology (i.e., BreaKHis ) images using our proposed DenTnet model.
Figure 6 shows the performance curves obtained during the training of DenTnet using the BreaKHis dataset. A normalized confusion matrix for the classification of the breast cancer test set images is illustrated in Figure 7(a). The main reason for confusion between benign and malignant breast tissues is their similar texture or expression; hence, a careful description of texture is required to remove the confusion between the two classes. For binary classification, only 5 images were misclassified, indicating that DenTnet achieved its best ACC of 99.28%. Figures 7(b) and 7(c) demonstrate the ROC curve and the precision-recall curve for the classification of benign and malignant images from the BreaKHis dataset, respectively. An AUC of 0.99, sensitivity of 97.73%, and specificity of 100% have been reported. Table 4 lists the complete classification report of DenTnet.
Table 1 compares the results obtained by several methods. The methods of Togacar et al., Parvin et al., Man et al., Soumik et al., Liu et al., Zerouaoui and Idri, and Chattopadhyay et al. were centered mainly on CNN models, and they were tested with the same training-testing ratio of 80%:20% on the BreaKHis dataset. Boumaraf et al. suggested a transfer-learning-based method deeming the residual CNN ResNet-18 as a backbone model with a block-wise fine-tuning strategy and obtained a mean ACC of 92.15% applying a training-testing ratio of 80%:20% on the BreaKHis dataset. From Table 1, it is notable that DenTnet [ours] achieved the best ACC on the same ground.
6. Generalization Ability Evaluation of Proposed DenTnet
What would be the performance of the proposed DenTnet on other types of cancer or disease datasets? To evaluate the generalization ability of DenTnet, this section presents the experimental results obtained not only from the BreaKHis dataset but also from the additional datasets of Malaria, CovidXray, and SkinCancer.
6.1. Datasets Irrelevant to Breast Cancer
The three following datasets are not related to breast cancer; their primary purpose here is to evaluate the generalization ability of our proposed method DenTnet: (i) Malaria: this dataset contains a total of 27558 infected and uninfected cell images for malaria. (ii) SkinCancer: this dataset contains balanced images of benign and malignant skin moles. The data consist of two folders, each containing 1800 pictures of the two types of mole. (iii) CovidXray: the corona (COVID-19) virus affects the respiratory system of healthy individuals. The chest X-ray is one of the key imaging methods to identify the coronavirus. This dataset contains chest X-rays of healthy versus pneumonia (corona) infected patients, along with a few other categories including SARS (Severe Acute Respiratory Syndrome), Streptococcus, and ARDS (Acute Respiratory Distress Syndrome), with a goal of predicting and understanding the infection.
6.2. Experimental Results Comparison
Using four datasets in the experiment, DenTnet has been compared with six widely used and well-known deep learning models, namely, AlexNet , ResNet , VGG16 , VGG19 , Inception V3 , and SqueezeNet . To evaluate and analyze the performance of DenTnet, four different cases are considered. The first case is the evaluation of different deep learning methods, which are trained and tested on BreaKHis  dataset. The second case studies the performance of the deep-learning-based classification methods that are trained and tested on Malaria  dataset. The third case is to train and test the deep learning models on SkinCancer  dataset. The final one is to understand and analyze the performance of the deep learning models on CovidXray  dataset. The overall results are tabulated in Tables 5–9. Besides, the RTM in seconds of various datasets using the deep learning models is shown in Table 10.
According to the results in terms of the GMN of ACC, RES, F1S, and AUC shown in Tables 5–9, the proposed DenTnet architecture provides the best scores compared to AlexNet, ResNet, VGG16, VGG19, Inception V3, and SqueezeNet. On the other hand, in terms of the GMN of PRS, DenTnet gets the third best result. Moreover, in most of the cases, AlexNet obtains the lowest results.
6.3. Performance Evaluation
The deepening of deep models makes their parameter count rise rapidly, which may lead to overfitting of the model. To take the edge off the overfitting problem, a large number of dataset images are predominantly required as the training set. With a small dataset, it is possible to reduce the risk of overfitting by reducing the parameters and augmenting the dataset. Accordingly, DenTnet used fewer parameters along with dense connections in the construction of the model, instead of direct connections among the hidden layers of the network. As DenTnet used fewer parameters, it attenuated the vanishing gradient problem and strengthened feature propagation. Consequently, the proposed DenTnet outperformed its alternative state-of-the-art methods. Yet, its runtime was a bit longer on the Malaria and SkinCancer datasets as compared to ResNet. The main reason why the DenTnet model may require more time is that it uses many small convolutions in the network, which can run slower on a GPU than compact large convolutions with the same number of GFLOPS. Still, DenTnet includes fewer parameters when compared to ResNet; hence, it is more efficient in solving the problem of overfitting. In general, all of the used algorithms suffered from some degree of overfitting on all datasets. We minimized such problems by reducing the batch size and adjusting the learning rate and the dropout rate. In some cases, the proposed DenTnet predicted fewer positive samples as compared to ResNet. This is due to its conservative designation of the positive class. Thus, the GMN PRS of the proposed DenTnet was about 2% lower than that of ResNet.
As VGG16 is easy to implement, many deep learning image classification problems benefit from using the network either as a sole model or as a backbone architecture to classify images. While VGG19 is better than the VGG16 model, both are very slow to train; for example, a ResNet with 34 layers requires only 18% of the operations required by a VGG with 19 layers (around half the layers of the ResNet). Regarding AlexNet, the model struggled to capture all features as it is not very deep, resulting in poor performance. The SqueezeNet model achieved approximately the same performance as the AlexNet model. VGG19 and Inception V3 showed almost the same level of effectiveness. Although the ResNet model has proven to be a powerful tool for image classification and is usually fast at inference, it has been shown to take a long time to train. Concisely, using all the benefits of DenseNet with optimization, DenTnet obtained the highest GMN ACC of 0.9463, RES of 0.9649, F1S of 0.9531, and AUC of 0.9465 across all four datasets. This implies that DenTnet has the best generalization ability compared to its alternative methods.
Often, it is important to establish that certain deep learning models are more efficient and practical than their alternatives. Seemingly, it is difficult to establish such superiority from the experimental results obtained in Tables 5–10. Nonetheless, a nonparametric statistical test can give a clear picture of this issue.
7. Nonparametric Statistical Analysis
Figure 9 depicts the performance evaluation of the various algorithms deeming the numerical values of the effectiveness metrics and RTM from Table 11. It is noted that, for better visualization, the RTM scores in Figure 9 use a log-normal distribution with a mean of 10 and standard deviation of 1. From this graph alone, it is extremely hard to rank each algorithm; statistically, however, it is possible to show that one algorithm is better than its alternatives. The Friedman test and its derivatives (e.g., the Iman-Davenport test) are normally referred to as examples of the most well-known nonparametric tests for multiple comparisons. The mathematical equations of the Friedman, Friedman's aligned rank, and Quade tests can be found in the works of Quade and Westfall and Young. The Friedman test ranks a set of algorithms in descending order of performance, but it can solely inform us about the presence of differences among all samples of results under comparison. Hence, its alternatives (e.g., Friedman's aligned rank test and the Quade test) can give us further information. Consequently, we have performed the Friedman, Friedman's aligned rank, and Quade tests for average rankings based on the features of our experimental study. On rejecting the null hypotheses, we have continued with post hoc procedures to find the specific pairs of algorithms that exhibit differences. In the case of 1 × N comparisons, the post hoc procedures comprise Bonferroni-Dunn's, Holm's, Hochberg's, Hommel's [204, 205], Holland and Copenhaver's, Rom's, Finner's, and David Li's procedures, whereas the post hoc procedures of Nemenyi, Shaffer, and Bergmann-Hommel are involved in N × N comparisons. The details can be found in the works of Bergmann and Hommel, García and Herrera, and Hommel and Bernhard.
7.1. Average Ranking of Algorithms
To get the nonparametric statistical test results, the Friedman, Friedman's aligned rank, and Quade tests have been applied to the results of the seven models in Table 11. Explicitly, the statistical tests have been applied to a matrix of dimension 7 × 6, where 7 is the number of models and 6 is the number of result samples (treated as 6 datasets by the statistical software environment) for each model. Table 12 shows the average ranking computed by using the Friedman, Friedman's aligned rank, and Quade nonparametric statistical tests. These nonparametric tests determine whether there are significant differences among the various models taking data from Table 11. They provide the average ranking of all algorithms; that is, the best performing algorithm gets rank 1, the second best algorithm gets rank 2, and so on.
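For reference, the Friedman test applied here is available in SciPy. The sketch below uses hypothetical accuracy scores for three models over six result samples (the actual values come from Table 11 and are not reproduced):

```python
from scipy.stats import friedmanchisquare

# Hypothetical accuracy scores for three models over six result samples;
# each list position is one "dataset" (block) in the Friedman test.
model_a = [0.95, 0.96, 0.94, 0.97, 0.93, 0.95]
model_b = [0.90, 0.91, 0.89, 0.92, 0.88, 0.90]
model_c = [0.85, 0.86, 0.84, 0.87, 0.83, 0.85]

stat, p_value = friedmanchisquare(model_a, model_b, model_c)
# A small p value rejects the null hypothesis that all models perform equally;
# with these perfectly ordered toy scores the statistic is 12.0.
```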
Figure 10 visualizes the average rankings using the data in Table 12. From Figure 10, it is noticeable that the algorithm of DenTnet [ours] is the best performing one, with the longest bars of 0.6667, 0.1395, and 0.7242 for the Friedman test, Friedman's aligned rank test, and Quade test, respectively. This indicates that the algorithm of DenTnet [ours] performs best on the underlying problem of classifying breast cancer histopathological images from four different datasets. The Friedman statistic (distributed according to chi-square with 6 degrees of freedom) is 24.500000. Friedman's aligned statistic (distributed according to chi-square with 6 degrees of freedom) is 23.102557. The Iman-Davenport statistic (distributed according to the F-distribution with 6 and 30 degrees of freedom) is 10.652174. The Quade statistic (distributed according to the F-distribution with 6 and 30 degrees of freedom) is 5.274194. The p values computed through the Friedman statistic, Friedman's aligned statistic, Iman-Davenport statistic, and Quade statistic are 0.000422, 0.000762847204, 0.000002458229, and 0.000820133186, respectively.
Table 13 demonstrates the results obtained from post hoc comparisons with adjusted p values. At the considered levels of significance, each of Bonferroni-Dunn's, Holm's, Hochberg's, Hommel's, Holland's, Rom's, Finner's, and Li's procedures rejects those hypotheses whose unadjusted p value falls below the procedure's respective threshold.
7.2. Post Hoc Procedures: 1 × N Comparisons
In the case of 1 × N comparisons, the post hoc procedures consist of Bonferroni-Dunn's, Holm's, Hochberg's, Hommel's [204, 205], Holland and Copenhaver's, Rom's, Finner's, and David Li's procedures. In these tests, multiple comparison post hoc procedures have been considered for comparing the control algorithm, DenTnet [ours], with the others. The results are reported by computing p values for each comparison. Table 14 depicts the p values obtained using the ranks computed by the nonparametric Friedman, Friedman's aligned rank, and Quade tests. All tests have demonstrated significant improvements of DenTnet [ours] over AlexNet, ResNet, VGG16, VGG19, Inception V3, and SqueezeNet under each and every post hoc procedure. Besides, David Li's procedure had the greatest power, reaching the lowest p value in the comparisons.
7.3. Post Hoc Procedures: N × N Comparisons
In the case of N × N comparisons, the post hoc procedures consist of Nemenyi's, Shaffer's, and Bergmann-Hommel's procedures. Table 15 presents the 21 hypotheses of equality among the 7 different algorithms and the p values achieved. At both considered levels of significance, (i) Nemenyi's, (ii) Holm's, and (iii) Shaffer's procedures reject those hypotheses whose unadjusted p value falls below the procedure's respective threshold, and (iv) Bergmann's procedure rejects the hypotheses of AlexNet versus DenTnet [ours], ResNet versus SqueezeNet, and SqueezeNet versus DenTnet [ours].
7.4. Critical Distance Diagram from the Nemenyi Test
The Nemenyi test is very conservative with low power, and hence it is not a recommended choice in practice. Nevertheless, it has the unique advantage of an associated plot to demonstrate the results of a fair comparison. Figure 11 depicts the Nemenyi post hoc critical distance diagrams at three distinct levels of significance. If the distance between two algorithms is less than the critical distance, then there is no statistically significant difference between them. The diagrams in Figures 11(a) and 11(b), associated with the critical distances of 3.3588 and 3.6768, respectively, are identical, whereas the diagram in Figure 11(c), related to the critical distance of 4.3054, is different. Any two algorithms are considered significantly different if their performance variation is greater than the critical distance. To this end, from Figure 11, it is noticeable that, at the critical distance of 4.3054, both SqueezeNet versus DenTnet [ours] and SqueezeNet versus ResNet are remarkably different, while the other pairs are not remarkably divergent as their performance differences are less than 4.3054. As compared to ResNet, DenTnet [ours] differs from SqueezeNet by a greater distance. On the other hand, SqueezeNet versus DenTnet [ours] and AlexNet versus DenTnet [ours] are significantly different at both of the two smaller critical distances, as is SqueezeNet versus ResNet. Straightforwardly, DenTnet [ours] is markedly unlike both SqueezeNet and AlexNet, but ResNet is not markedly unlike AlexNet. This implies that the method of DenTnet [ours] outperforms that of ResNet, which also agrees with the finding in Figure 10.
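The reported critical distances are consistent with the standard Nemenyi formula CD = q_α · sqrt(k(k + 1)/(6N)) for k = 7 algorithms over N = 6 result samples. The sketch below uses q_α values taken from standard tables; the assignment of significance levels to the three diagrams is our inference and not stated in the text.

```python
import math

# Critical values of the studentized range statistic divided by sqrt(2)
# for k = 7 algorithms, taken from standard tables (our assumption about
# which alpha level corresponds to which diagram in Figure 11).
Q_ALPHA = {0.10: 2.693, 0.05: 2.949, 0.01: 3.452}

def nemenyi_cd(alpha, k=7, n=6):
    """Nemenyi critical distance: CD = q_alpha * sqrt(k * (k + 1) / (6 * N))."""
    return Q_ALPHA[alpha] * math.sqrt(k * (k + 1) / (6 * n))

cds = {a: round(nemenyi_cd(a), 4) for a in Q_ALPHA}
# Two of the three values match the paper's 3.3588 and 4.3054 exactly;
# the third is within about 0.002 of the reported 3.6768.
```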
7.5. Reasons of Superiority
In this study, DenseNet was a great choice as it is very compact and deep. It used fewer training parameters, reduced the risk of model overfitting, and improved the learning rate. In the dense blocks of DenTnet, the outputs from the previous layers were concatenated instead of summed; this type of concatenation helped to markedly speed up the processing of data. The dense block of DenTnet contained convolution and nonlinear layers, to which several optimization techniques (e.g., dropout and BN) were applied. DenTnet scaled to hundreds of layers while exhibiting no optimization difficulties. Overall, this model was applied to a very large number of preprocessed augmented images from the BreaKHis, Malaria, SkinCancer, and CovidXray datasets. To the best of our knowledge, no other study in the literature had such an edge. Additionally, the use of the data augmentation approach in this study positively affected the performance of the model due to the expansion of the training data, which is the foremost requirement of a deep network for its proper working. Our DenTnet was well trained through the tuning of various parameters. For example, in the case of BreaKHis, unlike other existing models, our model was trained on all the magnifications combined (40x, 100x, 200x, and 400x) to avoid any loss of generality.
In sum and substance, based on the aforementioned experimental and nonparametric statistical test results, it is possible to conclude that the proposed DenTnet [ours] outperformed AlexNet, ResNet, VGG16, VGG19, Inception V3, and SqueezeNet in terms of computational speed. Significantly, the accuracy achieved by the proposed DenTnet [ours] surpassed those of existing state-of-the-art models in classifying images of the BreaKHis, Malaria, SkinCancer, and CovidXray datasets.
7.6. Limitation of Proposed Model and Methodology
Despite these promising results, questions remain as to whether the proposed DenTnet model could be utilized to classify multiclass images. Moreover, DenTnet was tested with only one breast cancer dataset (i.e., BreaKHis). Although the generalization ability of DenTnet was studied with three non-breast-cancer-related datasets in Section 6, it is unknown whether DenTnet can generalize to other state-of-the-art breast cancer datasets. Future work should, therefore, investigate the efficacy and generalizability of DenTnet on datasets with multiclass labels, as well as on other publicly available breast cancer datasets (e.g., the most recently introduced MITNET dataset).
The classification performance of any deep learning methodology on breast cancer histopathological images is related to the features, and many studies have predominantly focused on how to develop good feature descriptors and better extract features. Different from traditional handcrafted feature-based models, DenTnet can automatically extract more abstract features. Nevertheless, it is worth noting that although the proposed DenTnet has addressed the cross-domain problem by utilizing the transfer learning approach, the features extracted in the methodology are solely deep-network-based features, extracted by feeding images directly to the model. However, feeding deep models directly with images may not generalize well, as the models consider the color distribution of an image. It is understood that local information can be captured from color images using the Local Binary Pattern (LBP). Therefore, future work can use multiple types of features by combining the features extracted by the proposed method with LBP features to address this issue.
We presented that, for classifying breast cancer histopathological images, the most popular training-testing ratio was 70%:30%, while the best performance was obtained with a training-testing ratio of 80%:20%. We proposed a novel approach named DenTnet to classify histopathology images using a training-testing ratio of 80%:20%. DenTnet achieved a very high classification accuracy on the BreaKHis dataset. Several impediments of existing state-of-the-art methods, including the requirement of high computation and the utilization of an identical feature distribution, were attenuated. To test the generalizability of DenTnet, we conducted experiments on three additional datasets (Malaria, SkinCancer, and CovidXray) of varying difficulty. Experimental results on all four datasets demonstrated that DenTnet achieved better performance in terms of accuracy and computational speed than a large number of effective state-of-the-art classification methods (AlexNet, ResNet, VGG16, VGG19, Inception V3, and SqueezeNet). These findings contribute to our understanding of how a lightweight model can be used to improve accuracy and accelerate the learning process for image classification, including histopathology image classification, on state-of-the-art datasets in the wild. Future work shall investigate the efficacy of DenTnet on datasets with multiclass labels.
The four following publicly available datasets were used in this study: BreaKHis  (https://www.kaggle.com/datasets/ambarish/breakhis), Malaria  (https://www.kaggle.com/iarunava/cell-images-for-detecting-malaria), CovidXray  (https://github.com/ieee8023/covid-chestxray-dataset), and SkinCancer  (https://www.kaggle.com/fanconic/skin-cancer-malignant-vs-benign).
Conflicts of Interest
The authors have no conflicts of interest to declare.
World Health Organization, Breast Cancer Now Most Common Form of Cancer: WHO Taking Action, World Health Organization, Geneva, Switzerland, 2021.
A. A. Ewees, L. Abualigah, D. Yousri et al., “Improved Slime Mould Algorithm Based on Firefly Algorithm for Feature Selection: A Case Study on QSAR Model,” Engineering with Computers, vol. 38, pp. 1–15, 2021.View at: Google Scholar
M. Jannesari, M. Habibzadeh, H. Aboulkheyr et al., “Breast cancer histopathological image classification: a deep learning approach,” in Proceedings of the International Conference on Bioinformatics and Biomedicine, pp. 2405–2412, BIBM, Madrid, Spain, June 2018.View at: Google Scholar
S. H. Kassani, P. H. Kassani, M. J. Wesolowski, K. A. Schneider, and R. Deters, “Classification of histopathological biopsy images using ensemble of deep learning networks,” in Proceedings of the Annual International Conference on Computer Science and Software Engineering (CASCON), pp. 92–99, Markham, Ontario, July 2019.View at: Google Scholar
K. Kumar and A. C. S. Rao, “Breast cancer classification of image using convolutional neural network,” in Proceedings of the International Conference on Recent Advances in Information Technology (RAIT), pp. 1–6, Dhanbad, India, October 2018.View at: Google Scholar
K. Das, S. Conjeti, A. G. Roy, J. Chatterjee, and D. Sheet, “Multiple instance learning of deep convolutional neural networks for breast histopathology whole slide classification,” in Proceedings of the Int. Symposium on Biomedical Imaging (ISBI), pp. 578–581, New York, NY, USA, July 2018.View at: Google Scholar
B. Du, Q. Qi, H. Zheng, Y. Huang, and X. Ding, “Breast cancer histopathological image classification via deep active learning and confidence boosting. Artificial neural networks and machine learning (ICANN),” in Proceedings of the 27th international conference on artificial neural networks, vol. 11140, pp. 109–116, Greece, May 2018.View at: Google Scholar
V. Gupta and A. Bhavsar, “Sequential modeling of deep features for breast cancer histopathological image classification,” in Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops, pp. 2254–2261, Salt Lake City, UT, USA, June 2018.View at: Google Scholar
Y. Benhammou, S. Tabik, B. Achchab, and F. Herrera, “A first study exploring the performance of the state-of-the art CNN model in the problem of breast cancer,” in Proceedings of the International Conference on Learning and Optimization Algorithms: Theory and Applications (LOPAL), no. 1–47, p. 47, Rabat, Morocco, June 2018.View at: Google Scholar
S. Cascianelli, R. Bello-Cerezo, F. Bianconi et al., “Dimensionality reduction strategies for CNN-based classification of histopathological images,” Intelligent Interactive Multimedia Systems and Services, Springer International Publishing: Cham, New York, NY, USA, pp. 21–30, 2018.View at: Google Scholar
Y. Song, H. Chang, H. Huang, and W. Cai, “Supervised intra-embedding of Fisher vectors for histopathology image classification,” in Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), pp. 99–106, Quebec City, Canada, March 2017.View at: Google Scholar
B. Wei, Z. Han, X. He, and Y. Yin, “Deep learning model based breast cancer histopathological image classification,” in Proceedings of the International Conference on Cloud Computing and Big Data Analysis (ICCCBDA), pp. 348–353, Chengdu, April 2017.View at: Google Scholar
K. Das, S. P. K. Karri, A. G. Roy, J. Chatterjee, and D. Sheet, “Classifying histopathology whole-slides using fusion of decisions from deep convolutional network on a collection of random multi-views at multi-magnification,” in Proceedings of the International Symposium on Biomedical Imaging (ISBI), pp. 1024–1027, Melbourne, Australia, March 2017.View at: Google Scholar
Y. Song, J. J. Zou, H. Chang, and W. Cai, “Adapting Fisher vectors for histopathology image classification,” in Proceedings of the International Symposium on Biomedical Imaging (ISBI), pp. 600–603, Melbourne, Australia, April 2017.View at: Google Scholar
V. Gupta and A. Bhavsar, “Partially-Independent framework for breast cancer histopathological image classification,” in Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops, pp. 1123–1130, Long Beach, CA, USA, June 2019.View at: Google Scholar
M. Togacar, K. B. Ozkurt, B. Ergen, and Z. Comert, “BreastNet: a novel convolutional neural network model through histopathological images for the diagnosis of breast cancer,” Physica A: Statistical Mechanics and Its Applications, vol. 545, Article ID 123592, 2020.View at: Publisher Site | Google Scholar
D. Albashish, R. Al-Sayyed, A. Abdullah, M. H. Ryalat, and N. Ahmad Almansour, “Deep CNN Model Based on VGG16 for Breast Cancer Classification,” in Proceedings of the International Conference on Information Technology (ICIT), pp. 805–810, Amman, Jordan, July 2021.View at: Google Scholar
F. Parvin and M. A. Mehedi Hasan, “A comparative study of different types of convolutional neural networks for breast cancer histopathological image classification,” in Proceedings of the 2020 IEEE Region 10 Symposium (TENSYMP), pp. 945–948, Dhaka, Bangladesh, June 2020.View at: Google Scholar
F. A. Spanhol, L. S. Oliveira, C. Petitjean, and L. Heutte, “Breast cancer histopathological image classification using Convolutional Neural Networks,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN), pp. 2560–2567, Vancouver, BC, 2016.View at: Google Scholar
F. A. Spanhol, L. S. Oliveira, P. R. Cavalin, C. Petitjean, and L. Heutte, “Deep features for breast cancer histopathological image classification,” in Proceedings of the International Conference on Systems, Man, and Cybernetics (SMC), pp. 1868–1873, Banff, AB, Canada, 2017.View at: Google Scholar
N. Bayramoglu, J. Kannala, and J. Heikkilä, “Deep learning for magnification independent breast cancer histopathology image classification,” in Proceedings of the International Conference on Pattern Recognition (ICPR), pp. 2440–2445, Cancun, Mexico, December 2016.View at: Google Scholar
J. Sun and A. Binder, “Comparison of deep learning architectures for H&E histopathology images,” in Proceedings of the IEEE Conference on Big Data and Analytics (ICBDA), pp. 43–48, Kuching, Malaysia, November 2017.View at: Google Scholar
M. Talo, “Convolutional neural networks for multi-class histopathology image classification,” 2019, https://arxiv.org/ftp/arxiv/papers/1903/1903.10035.pdf.View at: Google Scholar
F. P. Romero, A. Tang, and S. Kadoury, “Multi-level batch normalization in deep networks for invasive ductal carcinoma cell discrimination in histopathology images,” in Proceedings of the International Symposium on Biomedical Imaging (ISBI), pp. 1092–1095, Venice, Italy, July 2019.View at: Google Scholar
H. L. Minh, M. M. Van, and T. V. Lang, “Deep feature fusion for breast cancer diagnosis on histopathology images,” in Proceedings of the International Conference on Knowledge and Systems Engineering (KSE), pp. 1–6, Da Nang, Vietnam, September 2019.View at: Google Scholar
M. A. Alantari, S. M. Han, and T. S. Kim, “Evaluation of deep learning detection and classification towards computer-aided diagnosis of breast lesions in digital X-ray mammograms,” Computer Methods and Programs in Biomedicine, vol. 196, Article ID 105584, 2020.View at: Publisher Site | Google Scholar
E. Mahraban Nejad, L. S. Affendey, R. B. Latip, I. B. Ishak, and R. Banaeeyan, “Transferred semantic scores for scalable retrieval of histopathological breast cancer images,” International Journal of Multimedia Information Retrieval, vol. 7, no. 4, pp. 241–249, 2018.View at: Publisher Site | Google Scholar
W. Zhi, H. W. F. Yeung, Z. Chen, S. M. Zandavi, Z. Lu, and Y. Y. Chung, “Using transfer learning with convolutional neural networks to diagnose breast cancer from histopathological images,” in Proceedings of the International Conference on Neural Information Processing (ICONIP), vol. 10637, pp. 669–676, Guangzhou, China, 2017.View at: Google Scholar
J. Chang, J. Yu, T. Han, H. Chang, and E. Park, “A method for classifying medical images using transfer learning: a pilot study on histopathology of breast cancer,” in Proceedings of the International Conference on E-Health Networking, Applications and Services (Healthcom), pp. 1–4, Dalian, China, June 2017.View at: Google Scholar
M. F. I. Soumik, A. Z. B. Aziz, and M. A. Hossain, “Improved transfer learning based deep learning model for breast cancer histopathological image classification,” in Proceedings of the 2021 International Conference on Automation, Control and Mechatronics for Industry 4.0 (ACMI), pp. 1–4, Rajshahi, Bangladesh, June 2021.View at: Google Scholar
S. Boumaraf, X. Liu, Z. Zheng, X. Ma, and C. Ferkous, “A new transfer learning based approach to magnification dependent and independent classification of breast cancer in histopathological images,” Biomedical Signal Processing and Control, vol. 63, Article ID 102192, 2021.View at: Publisher Site | Google Scholar
G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger, “Densely Connected Convolutional Networks,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2261–2269, Honolulu, HI, USA, July 2017.View at: Google Scholar
M. A. Mohammed, B. Al-Khateeb, A. N. Rashid, D. A. Ibrahim, M. K. Abd Ghani, and S. A. Mostafa, “Neural network and multi-fractal dimension features for breast cancer classification from ultrasound images,” Computers & Electrical Engineering, vol. 70, pp. 871–882, 2018.View at: Publisher Site | Google Scholar
WHO, “Breast Cancer,” 2021, https://www.who.int/news-room/fact-sheets/detail/breast-cancer.View at: Google Scholar
K. He, X. Zhang, S. Ren, and J. Sun, “Identity Mappings in Deep Residual Networks,” in Proceedings of the European Conference on Computer Vision (ECCV), vol. 9908, pp. 630–645, Amsterdam, The Netherlands, October 2016.View at: Google Scholar
The National Breast Cancer Foundation, Biopsy, Texas, USA, 2018.
A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems 25: Proceedings of the 26th Annual Conference on Neural Information Processing Systems 2012, pp. 1106–1114, Lake Tahoe, Nevada, USA, December 2012.View at: Google Scholar
H. Bay, T. Tuytelaars, and L. V. Gool, “SURF: Speeded Up Robust Features,” in Proceedings of the European Conference on Computer Vision (ECCV), vol. 3951, pp. 404–417, Graz, Austria, May 2006.View at: Google Scholar
University of South Florida, “Digital Database for Screening Mammography,” 2006, http://www.eng.usf.edu/cvprg/mammography/database.html.View at: Google Scholar
J. Suckling and J. Parker, “Mammographic Image Analysis Society (MIAS) Database v1.21 [Dataset],” 2015, https://www.repository.cam.ac.uk/handle/1810/250394.View at: Google Scholar
C. Szegedy, W. Liu, Y. Jia et al., “Going deeper with convolutions,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9, Boston, MA, USA, July 2015.View at: Google Scholar
M. A. Kahya, W. Al-Hayani, and Z. Y. Algamal, “Classification of breast cancer histopathology images based on adaptive sparse support vector machine,” Journal of Applied Mathematics and Bioinformatics, vol. 7, pp. 49–69, 2017.View at: Google Scholar
V. Gupta and A. Bhavsar, “An integrated multi-scale model for breast cancer histopathological image classification with joint colour-texture features,” in Proceedings of the International Conference on Computer Analysis of Images and Patterns (CAIP), vol. 10425, pp. 354–366, Ystad, Sweden, August 2017.View at: Google Scholar
Y. Jia, E. Shelhamer, J. Donahue et al., “Caffe: Convolutional Architecture for Fast Feature Embedding,” in Proceedings of the ACM International Conference on Multimedia (MM), pp. 675–678, Orlando, FL, USA, November 2014.View at: Google Scholar
S. Kaymak, A. Helwan, and D. Uzun, “Breast cancer image classification using artificial neural networks,” in Proceedings of the International Conference on Theory and Application of Soft Computing, Computing with Words and Perception (ICSCCW), Procedia Computer Science, vol. 120, pp. 126–131, Budapest, Hungary, August 2017.View at: Google Scholar
K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, May 2015.View at: Google Scholar
A. A. Nahid, A. Mikaelian, and Y. Kong, “Histopathological breast-image classification with restricted Boltzmann machine along with backpropagation,” Biomedical Research, vol. 29, pp. 2068–2077, 2018.View at: Google Scholar
J. A. Badejo, E. Adetiba, A. Akinrinmade, and M. B. Akanle, “Medical image classification with hand-designed or machine-designed texture descriptors: a performance evaluation,” in Proceedings of the International Work-Conference on Bioinformatics and Biomedical Engineering (IWBBIO), vol. 10814, pp. 266–275, Granada, Spain, 2018.View at: Google Scholar
P. Alirezazadeh, B. Hejrati, A. Monsef-Esfahani, and A. Fathi, “Representation learning-based unsupervised domain adaptation for classification of breast cancer histopathology images,” Biocybernetics and Biomedical Engineering, vol. 38, no. 3, pp. 671–683, 2018.View at: Publisher Site | Google Scholar
J. Spencer and K. S. John, “Random sparse bit strings at the threshold of adjacency,” in Proceedings of the Annual Symposium on Theoretical Aspects of Computer Science (STACS), vol. 1373, pp. 94–104, Paris, France, February 1998.View at: Google Scholar
F. Giannakas, C. Troussas, A. Krouska, C. Sgouropoulou, and I. Voyiatzis, “XGBoost and deep neural network comparison: the case of teams’ performance,” in Proceedings of the International Conference on Intelligent Tutoring Systems (ITS), Virtual Event, vol. 12677, pp. 343–349, 2021.View at: Google Scholar
D. S. Morillo, J. Gonzalez, M. G. Rojo, and J. Ortega, “Classification of breast cancer histopathological images using KAZE features,” in Proceedings of the International Work-Conference on Bioinformatics and Biomedical Engineering (IWBBIO), vol. 10814, pp. 276–286, Granada, Spain, 2018.View at: Google Scholar
P. F. Alcantarilla, A. Bartoli, and A. J. Davison, “KAZE Features,” in Proceedings of the European Conference on Computer Vision (ECCV), vol. 7577, pp. 214–227, Florence, Italy, October 2012.View at: Google Scholar
R. Mukkamala, P. S. Neeraja, S. Pamidi, T. Babu, and T. Singh, “Deep PCANet framework for the binary categorization of breast histopathology images,” in Proceedings of the International Conference on Advances in Computing, Communications and Informatics (ICACCI), pp. 105–110, Bangalore, India, September 2018.View at: Google Scholar
A. Rakhlin, A. Shvets, V. Iglovikov, and A. A. Kalinin, “Deep convolutional neural networks for breast cancer histology image analysis,” in Proceedings of the International Conference on Image Analysis and Recognition (ICIAR), vol. 10882, pp. 737–744, Póvoa de Varzim, Portugal, June 2018.View at: Google Scholar
B. S. Veeling, J. Linmans, J. Winkens, T. Cohen, and M. Welling, “Rotation equivariant CNNs for digital pathology,” in Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), vol. 11071, pp. 210–218, Granada, Spain, 2018.View at: Google Scholar
R. Lenz and P. L. Carmona, “Transform Coding of RGB-Histograms,” in Proceedings of the International Conference on Computer Vision Theory and Applications (VISAPP), pp. 117–124, Lisboa, Portugal, February 2009.View at: Google Scholar
J. Hu, L. Shen, and G. Sun, “Squeeze-and-Excitation networks,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7132–7141, Salt Lake City, UT, USA, June 2018.View at: Google Scholar
M. Babaie, S. Kalra, A. Sriram et al., “Classification and retrieval of digital pathology scans: a new dataset,” in Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops), pp. 760–768, Honolulu, HI, USA, July 2017.View at: Google Scholar
C. Roa, “Data from: High-Throughput Adaptive Sampling for Whole-Slide Histopathology Image Analysis (HASHI) via Convolutional Neural Networks: Application to Invasive Breast Cancer Detection,” 2018, https://datadryad.org/stash/dataset/doi:10.5061/dryad.1g2nt41.View at: Google Scholar
F. Chollet, “Xception: Deep Learning with Depthwise Separable Convolutions,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1800–1807, Honolulu, HI, USA, July 2017.View at: Google Scholar
A. Janowczyk and A. Madabhushi, “Grading of invasive breast carcinoma through Grassmannian VLAD encoding,” Journal of Pathology Informatics, vol. 7, Article ID 27563488, 2016.View at: Google Scholar
R. H. Carvalho, A. S. Martins, L. A. Neves, and M. Z. do Nascimento, “Analysis of features for breast cancer recognition in different magnifications of histopathological images,” in Proceedings of the International Conference on Systems, Signals and Image Processing (IWSSIP), pp. 39–44, Niterói, Brazil, July 2020.View at: Google Scholar
J. Li, J. Zhang, Q. Sun et al., “Breast cancer histopathological image classification based on deep second-order pooling network,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN), pp. 1–7, Glasgow, UK, July 2020.View at: Google Scholar
P. Li, J. Xie, Q. Wang, and Z. Gao, “Towards Faster Training of Global Covariance Pooling Networks by Iterative Matrix Square Root Normalization,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pp. 947–955, Salt Lake City, UT, USA, June 2018.View at: Google Scholar
S. Sabour, N. Frosst, and G. E. Hinton, “Dynamic routing between capsules,” in Advances in Neural Information Processing Systems 30: Proceedings of the Annual Conference on Neural Information Processing Systems, pp. 3856–3866, Long Beach, CA, USA, December 2017.View at: Google Scholar
A. R. H. Khayeat, X. Sun, and P. L. Rosin, “Improved DSIFT descriptor based copy-rotate-move forgery detection,” in Proceedings of the 7th Pacific-Rim Symposium on Image and Video Technology (PSIVT), vol. 9431, pp. 642–655, Auckland, New Zealand, 2015.View at: Google Scholar
J. Wang, J. Yang, K. Yu, F. Lv, T. S. Huang, and Y. Gong, “Locality-constrained linear coding for image classification,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3360–3367, San Francisco, CA, USA, June 2010.View at: Google Scholar
K. C. Burçak, Ö. K. Baykan, and H. Uguz, “A new deep convolutional neural network model for classifying breast cancer histopathological images and the hyperparameter optimisation of the proposed model,” The Journal of Supercomputing, vol. 77, no. 1, pp. 973–989, 2021.View at: Publisher Site | Google Scholar
A. Botev, G. Lever, and D. Barber, “Nesterov’s accelerated gradient and momentum as approximations to regularised update descent,” in Proceedings of the International Joint Conference on Neural Networks (IJCNN), pp. 1899–1903, Anchorage, AK, USA, May 2017.View at: Google Scholar
N. Shi, D. Li, M. Hong, and R. Sun, “RMSprop converges with proper hyper-parameter,” in Proceedings of the International Conference on Learning Representations (ICLR), Virtual Event, Austria, May 2021.View at: Google Scholar
Q. B. Baker and A. Abu Qutaish, “Evaluation of histopathological images segmentation techniques for breast cancer detection,” in Proceedings of the International Conference on Information and Communication Systems (ICICS), pp. 134–139, Valencia, Spain, May 2021.View at: Google Scholar
M. Z. D. Nascimento, A. S. Martins, L. A. Neves, R. P. Ramos, E. L. Flôres, and G. A. Carrijo, “Classification of masses in mammographic image using wavelet domain features and polynomial classifier,” Expert Systems with Applications, vol. 40, no. 15, pp. 6213–6221, 2013.View at: Publisher Site | Google Scholar
M. Tan and Q. V. Le, “EfficientNet: rethinking model scaling for convolutional neural networks,” in Proceedings of the 36th International Conference on Machine Learning (ICML), vol. 97, pp. 6105–6114, Long Beach, California, USA, June 2019.View at: Google Scholar
J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, and L. F. Fei, “ImageNet: A Large-Scale Hierarchical Image Database,” in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248–255, Miami, Florida, USA, June 2009.View at: Google Scholar
A. Ameh Joseph, M. Abdullahi, S. B. Junaidu, H. Hassan Ibrahim, and H. Chiroma, “Improved multi-classification of breast cancer histopathological images using handcrafted features and deep neural network (dense layer),” Intelligent Systems with Applications, vol. 14, Article ID 200066, 2022.View at: Publisher Site | Google Scholar
P. Jungklass and M. Berekovic, “Static allocation of basic blocks based on runtime and memory requirements in embedded real-time systems with hierarchical memory layout,” in Proceedings of the Second Workshop on Next Generation Real-Time Embedded Systems, vol. 87, no. 1–3, pp. 3–14, Budapest, Hungary, February 2021.View at: Google Scholar
I. Loshchilov and F. Hutter, “SGDR: stochastic gradient descent with warm restarts,” in Proceedings of the International Conference on Learning Representations (ICLR), pp. 1–16, Toulon, France, April 2017.View at: Google Scholar
C. Zhang, P. Benz, D. M. Argaw et al., “ResNet or DenseNet? Introducing Dense Shortcuts to ResNet,” in Proceedings of the Winter Conference on Applications of Computer Vision (WACV), pp. 3549–3558, HI, USA, January 2021.View at: Google Scholar
S. Kornblith, J. Shlens, and Q. V. Le, “Do Better ImageNet Models Transfer Better?” in Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2661–2671, Long Beach, CA, USA, June 2019.View at: Google Scholar
M. Macenko, M. Niethammer, J. S. Marron et al., “A method for normalizing histology slides for quantitative analysis,” in Proceedings of the International Symposium on Biomedical Imaging: From Nano to Macro, pp. 1107–1110, Boston, MA, USA, June 2009.View at: Google Scholar
Keras, Keras API, 2021.
H. A. Shehu, R. A. Ramadan, and M. H. Sharif, “Artificial intelligence tools and their capabilities,” PLOMS AI, p. 1, 2021.View at: Google Scholar
D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” in Proceedings of the International Conference on Learning Representations (ICLR), Y. Bengio and Y. LeCun, Eds., San Diego, CA, USA, May 2015.View at: Google Scholar
NIH, “Malaria datasets of the National Institutes of Health (NIH),” 2021, https://www.kaggle.com/iarunava/cell-images-for-detecting-malaria.View at: Google Scholar
Kaggle, “CoronaHack - chest X-ray-dataset,” 2021, https://github.com/ieee8023/covid-chestxray-dataset.View at: Google Scholar
Kaggle, “Skin Cancer: Malignant vs. Benign,” 2021, https://www.kaggle.com/fanconic/skin-cancer-malignant-vs-benign.View at: Google Scholar
H. A. Shehu, W. Browne, and H. Eisenbarth, “An Adversarial Attacks Resistance-Based Approach to Emotion Recognition from Images Using Facial Landmarks,” in Proceedings of the IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 1307–1314, Naples, Italy, August 2020.View at: Google Scholar
M. H. Sharif and C. Djeraba, “A simple method for eccentric event espial using Mahalanobis metric,” in Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications: Proceedings of the 14th Iberoamerican Conference on Pattern Recognition (CIARP), vol. 5856, pp. 417–424, Guadalajara, Mexico, November 2009.View at: Google Scholar
P. Westfall and S. Young, Resampling-Based Multiple Testing: Examples and Methods for P-Value Adjustment, John Wiley & Sons, Hoboken, NJ, USA, 2004.
S. Holm, “A simple sequentially rejective multiple test procedure,” Scandinavian Journal of Statistics, vol. 6, pp. 65–70, 1979.View at: Google Scholar
P. Nemenyi, “Distribution-free Multiple Comparisons,” PhD thesis, Princeton University, Princeton, NJ, USA, 1963.View at: Google Scholar
G. Bergmann and G. Hommel, “Improvements of general multiple test procedures for redundant systems of hypotheses,” in Multiple Hypotheses Testing, pp. 100–115, Springer, New York, NY, USA, 1988.View at: Google Scholar
S. García and F. Herrera, “An extension on ”Statistical comparisons of classifiers over multiple data sets” for all pairwise comparisons,” Journal of Machine Learning Research, vol. 9, pp. 2677–2694, 2008.View at: Google Scholar