Contrast Media & Molecular Imaging

Special Issue

Artificial Intelligence in Radiomics


Research Article | Open Access


Omneya Attallah, Maha Sharkas, "Intelligent Dermatologist Tool for Classifying Multiple Skin Cancer Subtypes by Incorporating Manifold Radiomics Features Categories", Contrast Media & Molecular Imaging, vol. 2021, Article ID 7192016, 14 pages, 2021.

Intelligent Dermatologist Tool for Classifying Multiple Skin Cancer Subtypes by Incorporating Manifold Radiomics Features Categories

Academic Editor: Yu-Dong Zhang
Received: 09 Jul 2021
Revised: 20 Aug 2021
Accepted: 01 Sep 2021
Published: 15 Sep 2021


Skin cancer (SC) rates are rising every year, making SC a critical health issue worldwide. Early and accurate diagnosis of SC is the key to reducing these rates and improving survival. However, manual diagnosis is exhausting, complicated, expensive, prone to diagnostic error, and highly dependent on the dermatologist’s experience and skill. Thus, there is a vital need for automated dermatologist tools capable of accurately classifying SC subclasses. Recently, artificial intelligence (AI) techniques, including machine learning (ML) and deep learning (DL), have demonstrated the success of computer-assisted dermatologist tools in the automatic diagnosis and detection of SC diseases. Previous AI-based dermatologist tools rely on features that are either high-level features extracted by DL methods or low-level features obtained by handcrafted operations, and most of them were constructed for binary classification of SC. This study proposes an intelligent dermatologist tool that automatically and accurately diagnoses multiple skin lesions. The tool incorporates manifold radiomics feature categories, involving high-level features extracted with ResNet-50, DenseNet-201, and DarkNet-53 and low-level features based on the discrete wavelet transform (DWT) and local binary patterns (LBP). The results of the proposed intelligent tool prove that merging manifold features of different categories has a strong influence on classification accuracy. Moreover, these results are superior to those obtained by related AI-based dermatologist tools. Therefore, the proposed intelligent tool can help dermatologists accurately diagnose the SC subcategory. It can also overcome the limitations of manual diagnosis, reduce disease rates, and enhance survival rates.

1. Introduction

The World Health Organization (WHO) has declared that cancer is the foremost cause of death globally. It estimates that the number of individuals diagnosed with cancer will double over the next two decades [1]. Among cancer types, skin cancer (SC) is considered one of the most common deadly tumors among both women and men, with almost 9% of cancer patients in the United States diagnosed with SC [2]. Throughout the last few decades, countries such as Canada and Australia experienced a huge increase in the number of patients diagnosed with SC [3–5]. Moreover, in Brazil, according to the Brazilian Cancer Institute (INCA), 33% of cancer cases are SC [6]. The rates of death and SC incidence are still growing. These rates can be decreased if cancer is detected and treated in its initial stages. Early detection of SC is the keystone to enhancing outcomes and is associated with great improvement in survival rates. Nevertheless, once the disease has progressed beyond the skin, survival rates become poor [7].

SC occurs when skin cells are harmed and injured, for instance, by overexposure to the sun’s ultraviolet radiation. SC can be classified into two main categories: melanocytic and nonmelanocytic lesions. The former category involves the melanoma and nevus SC subtypes, which occur in malignant and benign forms, whereas nonmelanocytic lesions include basal cell and squamous cell carcinoma (SCC), which also appear in malignant and benign types. Actinic keratosis (ak) is an early form of SCC. Furthermore, vascular lesions, benign keratosis, and dermatofibroma are recognized as nonmelanocytic benign lesions [8, 9]. In current medical routine, traditional methods to diagnose and detect SC subtypes involve manual screening and visual examination. These procedures are exhausting, complicated, prone to diagnostic error, and highly dependent on the dermatologist’s experience and abilities [10]. The reason for misdiagnosis is the complex patterns of skin lesions in images [8]. Moreover, to analyze, clarify, and interpret skin lesions, the pixels of these lesions must be recognized explicitly, which is hard for several reasons [11]. First, skin lesion images usually contain hair, oils, and blood vessels that disturb the segmentation process. Furthermore, the low contrast between the lesion and the surrounding regions presents challenges in accurately segmenting the lesion. Lastly, these lesions commonly have distinct shapes, dimensions, and colors, which increases the difficulty of precisely classifying lesion subtypes. These reasons lead to a massive need for automated intelligent systems for skin lesion analysis to overcome the above-mentioned challenges [12].

Recently, artificial intelligence- (AI-) based assistant systems have offered solutions to revolutionize medicine and health care. AI techniques have shown impressive outcomes in numerous medical fields, including breast cancer diagnosis [13, 14], brain tumors [15, 16], gastrointestinal diseases [17], lung diseases [18], and heart complications [19–22]. They have also revealed remarkable success in healthcare applications such as telerehabilitation [23], health monitoring [24], and assisting people with disabilities [25, 26]. Furthermore, the latest surveys [10, 27, 28] have proven the achievement of AI-based dermatologist tools in the automatic diagnosis and detection of SC diseases. These automated systems can assist and support clinicians in making fast and accurate decisions regarding the SC subtype, thus avoiding the challenges of manual diagnosis. They can also offer a user-friendly environment for nonskilled dermatologists. Moreover, they may provide a second opinion, which leads to a more confident decision [29].

Radiomics is an evolving field in the quantitative analysis of medical images [30]; radiomics features are also known as quantitative image features. Radiomics associates the large number of significant features extracted from medical images with biological or clinical endpoints [31]. The integration of radiomics and AI techniques has facilitated the accurate diagnosis of cancer types [32]. This is because radiomics can determine texture and other fundamental components of the tumor from medical images, which helps AI methods perform well and achieve accurate classification or diagnostic results [33]. This paper proposes an intelligent dermatologist tool for the automatic classification of several SC subtypes using an integration of AI and radiomics feature extraction techniques [34]. The motivation behind this work and the novelty of the proposed tool are discussed in the next section. The details of the proposed intelligent tool are illustrated in the methods section.

The paper is arranged as follows. Section 2 provides background on AI-enabled tools for SC diagnosis. Section 3 covers the dataset description, the deep learning methods, and the proposed intelligent tool. Section 4 illustrates the evaluation metrics. Section 5 presents and discusses the results of the proposed tool, and Section 6 concludes the paper.

2. Background on Artificial Intelligence in Skin Cancer Diagnosis

Throughout the past years, several automated tools have been introduced for SC detection and diagnosis. These tools can be classified into two classes: conventional and deep learning- (DL-) based methods. The former are based on traditional machine learning, which involves image preprocessing, image segmentation, and feature extraction steps that mine low-level radiomics features with handcrafted approaches. Monica et al. [35] proposed an automated system based on low-level radiomics feature extraction methods, such as the grey-level co-occurrence matrix (GLCM) and some statistical features, to train an SVM classifier that distinguishes 8 subclasses of SC, reaching an accuracy of 96.25%. Likewise, Arora et al. [36] fused several low-level features using a bag of features (BoF) with SURF features to classify skin images into cancerous and noncancerous. The authors classified images using an SVM classifier and obtained an 85.7% accuracy. Also, Kumar et al. [37] implemented a system for differentiating cancerous and noncancerous skin lesions using low-level features. First, the authors preprocessed images with a median filter. Then, they segmented the lesions using the fuzzy C-means clustering approach. Next, they extracted textural features such as GLCM and local binary pattern (LBP) features as well as color features. Finally, an artificial neural network was trained with the differential evolution algorithm to classify skin lesions, reaching an accuracy of 97.7%.

On the other hand, DL-based techniques are the most recent branch of machine learning and are commonly used in image processing. This is due to their great capacity for diagnosing several diseases from images even without preprocessing, segmentation, or feature extraction. They can also be used as feature extractors that mine high-level radiomics features from medical images [38–40] for use in the classification process. Rodrigues et al. [41] designed an automated system based on DL and the Internet of Things (IoT) to assist doctors in distinguishing between the nevus and melanoma skin cancer subclasses. The authors utilized the VGG, Inception, ResNet, Inception-ResNet, Xception, MobileNet, DenseNet, and NASNet convolutional neural networks (CNNs) as feature extractors. These high-level features were used separately to construct and train numerous classifiers. The highest performance (accuracy of 96.805%) was attained using the deep radiomics features of DenseNet-201 and the k-nearest neighbor (KNN) classifier. Similarly, Khamparia et al. [42] proposed a framework that can remotely classify skin tumors into malignant and benign using DL techniques. The authors extracted high-level deep features from four CNNs, including ResNet-50, VGG-19, Inception, and SqueezeNet, using transfer learning (TL). Next, these features were utilized as inputs to the fully connected layer of a CNN for classification using dense and max-pooling operations, attaining a maximum accuracy of 99.6%. Khan et al. [43] presented a framework for diagnosing SC subclasses that consists of two main stages: segmentation and classification. In the segmentation stage, a mask region-based CNN (Mask R-CNN) was employed based on ResNet-50 and a feature pyramid network. Afterward, in the classification stage, a 24-layer CNN employing the softmax activation function was constructed for classification, achieving an accuracy of 86.5%. Later, Khan et al. [44] preprocessed images using a decorrelation deformation algorithm and then employed Mask R-CNN to segment skin lesions from these images. Next, deep features from the pooling and fully connected layers of DenseNet were extracted and combined. Afterward, optimal features were selected using an entropy-controlled least squares SVM, attaining an accuracy of 88.5%.

Alternatively, some authors combined several high-level deep features. For example, in [45], the authors mined high-level features from pretrained AlexNet and VGG-16 networks. Afterward, these features were combined by concatenation and reduced using principal component analysis (PCA). Finally, the reduced features were used to train several classifiers to label skin tumors as malignant or benign; the bagged tree classifier obtained the highest accuracy of 98.71%. Similarly, Toğaçar et al. [46] introduced an intelligent system to differentiate malignant and benign skin tumors. Initially, images were reconstructed using an autoencoder and then used to train a MobileNet, while the original images were used to train another MobileNet. The high-level features extracted from the two MobileNets were combined, and a spiking neural network (SNN) was employed to perform classification, reaching a 95.27% accuracy. Conversely, the authors of [47] extracted low-level radiomics features based on textural analysis, such as GLCM and LBP features, and then reduced these features using PCA. Afterward, the reduced features were used to train several individual classifiers to separate malignant and benign skin lesions. In parallel, the authors extracted high-level features from a VGG-19 and a customized CNN to classify images into malignant and benign using individual classifiers. Finally, the predictions attained using both levels of features were merged using a voting ensemble classifier, reaching a 97.5% accuracy.

The aforementioned techniques have several drawbacks. First, most of them were constructed for binary classification problems, such as differentiating benign from malignant, cancerous from noncancerous, or two skin lesion subclasses; few classified the several cancer subtypes discussed earlier. The majority are based on either low-level or high-level features, except for [47], which fused both levels to perform binary classification. These shortcomings have motivated us to propose a new intelligent dermatologist tool to classify seven skin cancer categories. The proposed tool examines the influence of combining two low-level radiomics feature sets, studies the effect of merging several high-level deep features, and finally investigates the impact of fusing manifold low-level and high-level features.

3. Methods and Materials

3.1. Feature Extraction Methods
3.1.1. High-Level Radiomics Features Based on Deep Learning Techniques

ResNet is one of the most potent CNNs commonly used in the medical field. It earned a prominent place in the ILSVRC and COCO 2015 competitions [48]. It converges effectively with adequate computation time despite its expanding number of layers. This superiority is due to the new construction introduced by He et al. [48], which relies entirely on the deep residual block. This block embeds shortcut paths alongside the conventional deep CNN layers that allow some layers to be skipped during the training phase, which greatly accelerates convergence [18]. The pretrained ResNet employed in this paper has 50 deep layers.
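The shortcut mechanism described above can be sketched in a few lines. This is an illustrative NumPy toy, not the authors' implementation: real residual blocks use convolutions and batch normalization, for which plain matrix products stand in here.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Simplified residual block on a flat feature vector.

    The block input x is added back to the transformed output (the
    "identity shortcut"), so gradients can bypass the weighted path.
    """
    out = relu(w1 @ x)   # first transformation
    out = w2 @ out       # second transformation (no activation yet)
    return relu(out + x) # identity shortcut, then activation

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w1 = rng.standard_normal((8, 8))
w2 = rng.standard_normal((8, 8))
y = residual_block(x, w1, w2)
print(y.shape)  # (8,)
```

Note that with zero weights the block reduces to `relu(x)`: the input passes through untouched by the weighted path, which is exactly why very deep stacks of such blocks remain trainable.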

DenseNet: several research articles have stated that deep networks can be considerably deeper, more accurate, and cheaper to train when created with short connections between layers close to the input and layers close to the output. Thus, the Dense Convolutional Network (DenseNet) was implemented by Huang et al. [49] based on such short links. DenseNet connects all layers to each other in a feed-forward fashion: each layer receives the feature maps of all preceding layers as input, and its own feature maps are passed on to all succeeding layers. The DenseNet CNN included in this study has 201 deep layers.
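The dense connectivity pattern can be illustrated with a small NumPy sketch (a hypothetical toy, not the authors' code): each layer's input is the concatenation of all earlier feature maps, so the representation grows by a fixed "growth rate" per layer.

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    """Toy dense block on flat feature vectors.

    Every layer sees the concatenation of ALL preceding outputs, the
    defining property of DenseNet connectivity.
    """
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features)            # all earlier feature maps
        w = rng.standard_normal((growth_rate, inp.size))
        features.append(np.maximum(w @ inp, 0.0)) # new maps via ReLU layer
    return np.concatenate(features)

rng = np.random.default_rng(1)
out = dense_block(np.ones(16), num_layers=4, growth_rate=8, rng=rng)
print(out.size)  # 16 + 4 * 8 = 48
```

The input maps are carried through to the output unchanged, alongside the newly computed maps, which is what makes feature reuse cheap in this architecture.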

DarkNet was initially implemented by Redmon and Farhadi [50] in 2017 and strongly depends on YOLO-v2. It has a cascaded series of 1 × 1 and 3 × 3 convolutional layers whose filter counts are doubled after each pooling step, and it employs a global average pooling layer and 1 × 1 convolutions between the 3 × 3 convolutional layers to reduce the feature representation. The DarkNet used in this study has 53 deep layers.

3.1.2. Low-Level Features Based on Handcrafted Techniques

The discrete wavelet transform (DWT) applies orthogonal basis functions termed “wavelets” to analyze input data [51]. For 1D input data, the DWT is accomplished by convolving the input with a low-pass and a high-pass filter [52]. After that, a reduction is accomplished by downsampling the outputs by 2 [53]. This produces two sets of coefficients: the approximation coefficients CA1 and the detail coefficients CD1 [54].
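The filter-and-downsample procedure just described can be sketched as follows. This illustrative NumPy toy uses the simpler Haar wavelet; the paper's pipeline uses the Daubechies-4 (db4) mother wavelet, which only changes the filter coefficients.

```python
import numpy as np

def dwt_level(signal):
    """One DWT decomposition level: filter, then downsample by 2."""
    lo = np.array([1.0, 1.0]) / np.sqrt(2)   # Haar low-pass (averaging)
    hi = np.array([1.0, -1.0]) / np.sqrt(2)  # Haar high-pass (differencing)
    cA = np.convolve(signal, lo)[1::2]       # approximation coefficients CA1
    cD = np.convolve(signal, hi)[1::2]       # detail coefficients CD1
    return cA, cD

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
cA1, cD1 = dwt_level(x)
print(cA1)  # scaled pairwise sums: [10 22 14 10] / sqrt(2)
print(cD1)  # scaled pairwise differences: [2 2 -2 0] / sqrt(2)
```

Each output is half the input length, and further decomposition levels (CA2/CD2, CA3/CD3) are obtained by feeding the approximation coefficients back through the same step.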

The local binary pattern (LBP) was proposed by Ojala et al. [55] as a feature extraction approach that captures local texture information from pixels. It simply transforms an image into a set of local texture codes: LBP assigns a binary label to each neighbor of a pixel according to a threshold determined by the center pixel's value, and the resulting bits form that pixel's code.
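A minimal 3 × 3 LBP matching the thresholding rule just described might look as follows (an illustrative sketch; production code would use an optimized library routine and typically histogram the codes to form the final feature vector).

```python
import numpy as np

def lbp_3x3(img):
    """Compute 8-bit LBP codes for every interior pixel of a 2D image."""
    # neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            centre = img[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di, j + dj] >= centre:  # threshold at the centre
                    code |= 1 << bit               # set this neighbour's bit
            codes[i - 1, j - 1] = code
    return codes

img = np.array([[5, 5, 5],
                [5, 4, 5],
                [5, 5, 5]])
print(lbp_3x3(img))  # [[255]] - every neighbour exceeds the centre
```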

3.2. Dataset

The dataset used in this work is HAM10000 [56]. It contains images of seven subclasses of SC: melanoma (mel), nevus (nv), basal cell carcinoma (bcc), actinic keratosis (ak), vascular lesions (vasc), benign keratosis (bkl), and dermatofibroma (df). The HAM10000 dataset consists of 10,008 dermoscopic images, of which 514 are bcc, 327 ak, 6705 nv, 1095 bkl, 1110 mel, 115 df, and 142 vasc skin lesion subtypes. Samples of these images are shown in Figure 1.

3.3. Proposed Intelligent Dermatologist Tool

The proposed intelligent tool consists of four steps: preprocessing of the dermoscopic photos, feature mining, feature incorporation and selection, and classification. Initially, photos are resized and augmented. Then, in the feature mining step, low-level features are extracted with two traditional feature extraction methods, and high-level features are mined using three DL techniques. Afterward, features of different levels are integrated, examined, and then reduced in the feature incorporation and selection step. Finally, three support vector machine (SVM) classifiers are utilized to classify the multiple SC subclasses. The block diagram of the proposed intelligent dermatologist tool is shown in Figure 2.

3.3.1. Preprocessing of Dermoscopic Images

The dermoscopic images of the HAM10000 dataset have different sizes; therefore, they are all resized to the input dimension of each CNN used in this work (224 × 224 × 3 for ResNet-50 and DenseNet-201, and 256 × 256 × 3 for DarkNet-53). Furthermore, as noted in the dataset section, the number of photos per class is unbalanced; therefore, several augmentation techniques, including shearing, rotation, and top- and bottom-hat filtering, are used to balance the dataset. The number of images after augmentation is 1028 for bcc, 981 for ak, 1050 for nv, 1095 for bkl, 1110 for mel, 920 for df, and 994 for vasc.
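The class-balancing idea can be sketched as follows. This is an assumed implementation: the paper applies shearing, rotation, and top/bottom-hat filtering, for which simple rotations and flips stand in here, and the target count per class is a made-up parameter.

```python
import numpy as np

def augment(image):
    """Yield simple label-preserving variants of one image."""
    yield np.rot90(image, 1)
    yield np.rot90(image, 2)
    yield np.rot90(image, 3)
    yield np.fliplr(image)
    yield np.flipud(image)

def balance(images_by_class, target):
    """Pad each minority class with augmented copies up to `target`."""
    balanced = {}
    for label, images in images_by_class.items():
        pool = list(images)
        variants = (v for img in images for v in augment(img))
        while len(pool) < target:
            pool.append(next(variants))   # add transformed copies
        balanced[label] = pool[:target]   # majority classes are trimmed
    return balanced

rng = np.random.default_rng(2)
data = {"df": [rng.random((8, 8, 3)) for _ in range(3)],
        "vasc": [rng.random((8, 8, 3)) for _ in range(4)]}
out = balance(data, target=10)
print({k: len(v) for k, v in out.items()})  # {'df': 10, 'vasc': 10}
```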

3.3.2. Feature Mining

In this step, two categories of radiomics features are mined: low-level and high-level features. For the low-level features, two handcrafted feature extraction methods, LBP [57] and DWT [58], are used. These techniques are based on texture analysis, which frequently yields sufficient classification performance, particularly when methods are merged [59]. For the DWT, 3 decomposition levels with the Daubechies 4 (db4) mother wavelet are computed. The third-level approximation coefficients CA3 and the three detail coefficients CD3 are taken as low-level features.

On the other hand, the high-level features are extracted from three DL approaches: the ResNet-50, DenseNet-201, and DarkNet-53 CNNs. To mine these features, transfer learning (TL) [60] is first performed on the three deep CNNs pretrained on the ImageNet dataset so that they can classify the seven skin lesion categories. Afterward, a few parameters are adjusted for each CNN. Next, the three CNNs are trained with the resized and augmented images of the HAM10000 dataset. Lastly, high-level features are extracted from the last average pooling layer of each of the three CNNs. The dimensions of the high-level and low-level features are shown in Table 1.

Feature type | Size

Low-level features

High-level features

To reproduce the high-level features, some parameters of the three CNNs are first adjusted: the learning rate is set to 0.003, the number of epochs to 30, the validation frequency to 20, and the mini-batch size to 4. Afterward, TL is employed to reuse the pretrained CNNs (previously trained on the ImageNet dataset) and change the number of outputs of the final layer to seven. Next, the three CNNs are trained on the HAM10000 dataset using the stochastic gradient descent with momentum algorithm. Finally, TL is used to extract the high-level features from the last average pooling layer of the three CNNs. Some features comply with the 174 standards of the image biomarker standardisation initiative (IBSI) [61, 62], while others do not. Table S1 in the supplementary material discusses the compliance/noncompliance of these features.
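The final extraction step, pooling each network's last convolutional activations into one deep feature vector, can be sketched as follows. This is an illustrative NumPy toy; the tensor shapes are typical channel counts for these CNNs and are assumptions, not values reported by the paper.

```python
import numpy as np

def global_average_pool(feature_maps):
    """(channels, H, W) activations -> (channels,) deep feature vector."""
    return feature_maps.mean(axis=(1, 2))

rng = np.random.default_rng(3)
# Stand-in activation tensors with typical (assumed) channel counts:
resnet = rng.standard_normal((2048, 7, 7))    # ResNet-50 final block
densenet = rng.standard_normal((1920, 7, 7))  # DenseNet-201 final block
darknet = rng.standard_normal((1024, 8, 8))   # DarkNet-53 final block

# One pooled vector per CNN, concatenated for the fusion step.
deep_features = np.concatenate([global_average_pool(t)
                                for t in (resnet, densenet, darknet)])
print(deep_features.shape)  # (4992,)
```

The low-level LBP and DWT vectors would then be appended to this concatenation before feature selection.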

3.3.3. Feature Incorporation and Selection

The feature incorporation step is accomplished in three phases. In the first phase, the low-level features extracted in the feature mining stage are integrated by concatenation. In the second phase, the high-level features are fused in the same concatenated manner. In the third phase, each combination of low- and high-level feature sets is merged to determine the influence of incorporating manifold feature categories and to select the integrated combination with the greatest impact on classification performance. After the incorporation phases, the integrated feature set that achieved the highest classification performance undergoes feature selection, which reduces the huge dimension of the fused features. The minimum redundancy maximum relevance (mRMR) feature selection procedure [63] is used in this step.
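A minimal greedy mRMR pass over a fused feature matrix might look like this. It is an assumed implementation of the procedure cited as [63]; absolute Pearson correlation stands in for the mutual-information scores of the original formulation, and the data are a toy construction.

```python
import numpy as np

def mrmr(X, y, k):
    """Greedily pick k columns of X maximizing relevance - redundancy."""
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(X.shape[1])])
    selected = [int(np.argmax(relevance))]  # most relevant feature first
    while len(selected) < k:
        best, best_score = -1, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # mean similarity to the features already chosen
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            if relevance[j] - redundancy > best_score:
                best, best_score = j, relevance[j] - redundancy
        selected.append(best)
    return selected

# Toy fused matrix: column 1 is an exact copy of column 0, so mRMR
# prefers the less redundant column 2 for its second pick.
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
f0 = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 6.0])
fused = np.column_stack([f0, f0, [2.0, 1.0, 4.0, 3.0, 6.0, 5.0]])
print(mrmr(fused, y, k=2))  # [0, 2]
```

The duplicate column's redundancy with the first pick is 1, which cancels its relevance; this is exactly the behavior that makes mRMR preferable to pure relevance ranking after concatenating overlapping feature sets.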

3.3.4. Classification

In the classification step, the well-known SVM classifier is used to classify the seven subclasses of SC with linear, quadratic, and cubic kernel functions. The 5-fold cross-validation (CV) method is utilized to validate the classification outcomes of the proposed dermatologist tool. In the CV procedure, the dataset is first split into 5 equal folds. Then, 4 folds are employed to train the SVM classifiers while the 5th fold is used for testing. This process is repeated 5 times, each time training the SVM classifiers on a different set of 4 folds and testing on the remaining fold. The performance metrics described in the next section are calculated for each testing fold and averaged over the 5 folds.
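This classification-and-validation scheme can be sketched with scikit-learn (a stand-in sketch on synthetic data, not the authors' code; an `SVC` with a polynomial kernel of degree 2 or 3 corresponds to the quadratic and cubic SVMs, and the dataset parameters are invented).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic 7-class stand-in for the fused radiomics feature matrix.
X, y = make_classification(n_samples=350, n_features=40, n_informative=20,
                           n_classes=7, random_state=0)

svms = {"linear": SVC(kernel="linear"),
        "quadratic": SVC(kernel="poly", degree=2),
        "cubic": SVC(kernel="poly", degree=3)}

for name, clf in svms.items():
    # cv=5 performs the 5-fold split/train/test cycle described above.
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f} over {len(scores)} folds")
```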

4. Metrics of Performance

Several metrics are used to measure the performance of the proposed intelligent dermatologist tool, including classification accuracy (CA), F1-score, sensitivity, precision, and specificity [16]. Formulas (1)–(5) are used to determine these metrics:

CA = (TP + TN)/(TP + TN + FP + FN), (1)
Sensitivity = TP/(TP + FN), (2)
Specificity = TN/(TN + FP), (3)
Precision = TP/(TP + FP), (4)
F1-score = 2 × (Precision × Sensitivity)/(Precision + Sensitivity), (5)

where TP is the number of true positives, FN the number of false negatives, TN the number of true negatives, and FP the number of false positives.
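Given a multiclass confusion matrix, formulas (1)–(5) can be computed one-vs-rest per class, as the following illustrative sketch shows (the toy confusion matrix is invented).

```python
import numpy as np

def per_class_metrics(cm):
    """cm: square confusion matrix, rows = true class, cols = predicted."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)                 # correctly predicted per class
    fn = cm.sum(axis=1) - tp         # true members predicted elsewhere
    fp = cm.sum(axis=0) - tp         # others predicted as this class
    tn = cm.sum() - tp - fn - fp     # everything remaining
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = tp.sum() / cm.sum()   # overall classification accuracy
    return accuracy, sensitivity, specificity, precision, f1

cm = [[50, 2], [3, 45]]  # toy 2-class confusion matrix
acc, sens, spec, prec, f1 = per_class_metrics(cm)
print(round(acc, 3))      # 0.95
print(np.round(sens, 3))  # [0.962 0.938]
```

Averaging the per-class vectors gives the mean sensitivity, specificity, precision, and F1-score reported later in Table 3.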

5. Results and Discussion

This section will present and discuss the results of the proposed dermatologist tool. The section will first discuss the classification results utilizing low-level features. Afterward, it will show and illustrate the classification outputs using the high-level features. Next, it will introduce and explain the classification outcomes using the integration of manifold radiomics feature categories. Finally, it will compare the results of the proposed intelligent dermatologist tool with recent related works constructed with the same dataset to verify its competence.

5.1. Results of Low-Level Features

The classification results of the SVM classifiers trained with the low-level features, DWT and LBP, are shown in Figure 3, where DWT-A, DWT-H, DWT-V, and DWT-D denote the approximation, horizontal, vertical, and diagonal DWT coefficients, respectively. As can be noticed from Figure 3, the SVM classifiers trained with low-level features produce classification accuracies that range between 33.5% and 70.5%. The highest accuracy is obtained with the cubic SVM classifier constructed using the DWT-A features. These results verify that low-level features alone are not capable of achieving accurate SC classification.

5.2. Results of High-Level Features

The outputs of the SVM classifiers learned with the high-level features of the DenseNet-201, ResNet-50, and DarkNet-53 CNNs are shown in Figure 4. The maximum accuracies of 95.6%, 95.6%, and 94.9% are obtained by the cubic, quadratic, and linear SVM classifiers, respectively, trained with the high-level features of DenseNet-201. Slightly lower accuracies (95.3%, 95.3%, and 94.8%) are achieved using the same classifiers learned with the ResNet-50 features. The DarkNet-53 features achieve accuracies of 94.6%, 64.3%, and 93.6% using the cubic, quadratic, and linear SVM classifiers, respectively. Figure 4 shows that utilizing high-level features yields higher classification accuracy than the low-level features of Figure 3.

5.3. Results of Incorporating Manifold Feature Categories and Feature Selection

The classification accuracies of the SVM classifiers trained with the incorporated manifold-level features are shown in Table 2. Table 2 first illustrates the accuracy attained using each combination of high-level features with low-level features. As is clear, fusing each single high-level feature set with one or two low-level feature sets improves the classification accuracy, reaching peak accuracies of 97.5%, 97.9%, and 97.9% (linear, quadratic, and cubic SVM, respectively) using the incorporation of DenseNet-201 + DWT-A + LBP features. These accuracies are higher than those attained using either the individual high-level features or the single low-level features shown in Figures 3 and 4.

Incorporated manifold feature sets | Linear | Quadratic | Cubic

Single high-level features incorporated with low-level features
ResNet-50 + DWT-A | 95.7 | 96.1 | 96.2
ResNet-50 + LBP | 96.1 | 96.4 | 96.4
ResNet-50 + LBP + DWT-A | 96.5 | 96.9 | 96.8
DarkNet-53 + DWT-A | 94.6 | 94.9 | 95.1
DarkNet-53 + LBP | 95.8 | 96 | 96.1
DarkNet-53 + DWT-A + LBP | 95.7 | 95.9 | 96
DenseNet-201 + DWT-A | 96.5 | 97.1 | 97.2
DenseNet-201 + LBP | 97.1 | 97.5 | 97.4
DenseNet-201 + DWT-A + LBP | 97.5 | 97.9 | 97.9

Two high-level feature sets incorporated
ResNet-50 + DarkNet-53 | 95.9 | 96.3 | 96.6
ResNet-50 + DenseNet-201 | 97.7 | 98 | 98.1
DenseNet-201 + DarkNet-53 | 97.9 | 98.1 | 98

Two high-level feature sets incorporated with low-level feature sets
ResNet-50 + DenseNet-201 + DWT-A | 98.2 | 98.5 | 98.5
ResNet-50 + DenseNet-201 + LBP | 97.8 | 98.1 | 98.1
ResNet-50 + DenseNet-201 + DWT-A + LBP | 98.2 | 98.6 | 98.5
ResNet-50 + DarkNet-53 + DWT-A | 96.4 | 96.9 | 96.9
ResNet-50 + DarkNet-53 + LBP | 96.7 | 96.9 | 97
DenseNet-201 + DarkNet-53 + DWT-A | 98.2 | 98.4 | 98.5
DenseNet-201 + DarkNet-53 + LBP | 97.9 | 98.4 | 98.4
DenseNet-201 + DarkNet-53 + DWT-A + LBP | 98.3 | 98.6 | 98.6

Three high-level feature sets incorporated
ResNet-50 + DenseNet-201 + DarkNet-53 | 98.2 | 98.5 | 98.5

Three high-level feature sets incorporated with low-level feature sets
ResNet-50 + DenseNet-201 + DarkNet-53 + DWT-A | 98.6 | 98.8 | 98.8
ResNet-50 + DenseNet-201 + DarkNet-53 + LBP | 98.5 | 98.8 | 98.7
ResNet-50 + DenseNet-201 + DarkNet-53 + DWT-A + LBP | 98.7 | 99 | 99

Next, Table 2 reports the results of fusing every two high-level feature sets as well as combining every two high-level feature sets with low-level features. Table 2 verifies that combining two high-level feature sets has a positive impact on accuracy, which increases to 97.9%, 98.1%, and 98% (linear, quadratic, and cubic SVM, respectively) using the DenseNet-201 + DarkNet-53 high-level features. Moreover, when merging two high-level feature sets with the two low-level feature sets, the classification accuracies of the SVM classifiers are further enhanced, reaching maxima in this scenario of 98.2%, 98.6%, and 98.5% with the combined ResNet-50 + DenseNet-201 + DWT-A + LBP features, which are higher than those achieved when combining one high-level feature set with low-level features.

Finally, Table 2 displays the classification accuracies of fusing the three high-level feature sets, both alone and integrated with the low-level feature sets. Table 2 proves that incorporating manifold features of different categories has a high impact on classification accuracy: when the three high-level feature sets of ResNet-50 + DenseNet-201 + DarkNet-53 are merged with the low-level features of DWT-A + LBP, the accuracy is boosted to 98.7%, 99%, and 99% (linear, quadratic, and cubic SVM, respectively). This improvement indicates the capacity of the proposed intelligent dermatologist tool to classify the subclasses of skin cancer. Figure 5 shows the confusion matrix for the cubic SVM classifier trained with the manifold features of ResNet-50 + DenseNet-201 + DarkNet-53 + DWT-A + LBP.

The performance metrics, including the sensitivity, specificity, precision, and F1-score, for the cubic SVM classifier trained with the ResNet-50 + DenseNet-201 + DarkNet-53 + DWT-A + LBP features are shown in Table 3. The mean specificity, sensitivity, precision, and F1-score over the seven classes of SC are 0.9969, 0.9854, 0.9884, and 0.988, respectively. These results verify that the proposed dermatologist tool is reliable because, as stated in [64–66], for a medical system to be reliable, the precision and specificity must exceed 0.95 and the sensitivity should exceed 0.8. The receiver operating characteristic (ROC) curves along with the area under the curve (AUC) are displayed in Figure 6.



The results after applying the mRMR feature selection approach are shown in Figure 7. Note that the classification accuracy of the quadratic and linear SVMs increases to 99.1% and 98.8%, respectively, whereas for the cubic SVM the accuracy stays the same (99%). The mRMR procedure reduces the number of features to 2500, well below the 6495 of the combined manifold features of ResNet-50 + DenseNet-201 + DarkNet-53 + DWT-A + LBP. Figure 8 shows the heat map analysis of the selected radiomics features.

5.4. Comparing the Performance of the Proposed Tool with Related Works

To verify the competence of the proposed intelligent dermatologist tool, its performance is compared with recent related studies based on the HAM10000 dataset. This comparison is shown in Table 4. It is obvious from Table 4 that the proposed tool has superior performance compared with other related works, since the accuracy, sensitivity, specificity, precision, and F1-score achieved using the proposed tool are 99%, 98.54%, 99.69%, 98.84%, and 98.83%, which are greater than those of all other studies. This outperformance arises because the proposed intelligent dermatologist tool is based on incorporating manifold feature categories. It first examined the use of three individual high-level feature sets and two low-level handcrafted feature sets. Next, it investigated the influence of incorporating several high- and low-level features and searched for the best-integrated manifold features. The results of the proposed tool have shown that merging manifold features of different categories has a great impact on classification accuracy. This is not the case in the other related studies shown in Table 4, as they are based on either low-level or high-level features; most employed individual feature sets and did not examine the influence of feature fusion.

Article | Accuracy (%) | Sensitivity | Specificity | Precision | F1-score

Proposed tool | 99 | 98.54% | 99.69% | 98.84% | 98.83%

Early detection of SC is very important to prevent its progression. It can also help in choosing appropriate treatments and follow-up plans and in decreasing death rates. This study proposed an intelligent tool for the automatic classification of lesion types. The results achieved using the proposed intelligent tool are promising and verify that it is an effective method that can be used in clinical practice. In this sense, a key advantage of the proposed tool is its accessibility: it can be used easily in many regions, especially those that suffer from a lack of skilled dermatologists. Besides, this tool will enable dermatologists to automatically diagnose the SC subclass and avoid the challenges they face during manual examination due to the complex patterns of skin lesions in SC images [8]. It will also ease and speed up the diagnosis procedure compared with manual diagnosis. Moreover, accurate classification of the lesion using the proposed tool will spare patients diagnosed with a noncancerous lesion excess hospital visits, as normal medication can cure them without exposure to radiation or chemotherapy. On the other hand, the tool can accurately assign patients to a specific SC category, which helps doctors select the suitable treatment procedure. Several studies have investigated individual feature extraction methods, including traditional low-level features and high-level features based on deep learning, to diagnose SC; however, the fusion of these features is of great importance, as the results of the proposed tool show that integrating them can enhance performance. The results also prove that this tool adds value to the healthcare sector, because it can diagnose the SC category more accurately than the methods used in the literature.

6. Conclusion

Skin cancer (SC) is among the most widespread malignant tumors in human populations. Its rising incidence can be reduced if it is accurately diagnosed and treated during its initial stages. This paper proposed a dermatologist tool based on AI methods and manifold radiomics feature categories to enable doctors to diagnose the SC subtype accurately, which could facilitate choosing appropriate follow-up and treatment plans. The proposed intelligent tool is built on several deep learning and machine learning techniques. It incorporates manifold radiomics feature categories: three high-level feature sets extracted with ResNet-50, DenseNet-201, and DarkNet-53, and two low-level radiomics feature sets based on DWT and LBP. This study showed that integrating both levels of radiomics features boosts the performance of the dermatologist tool compared with using either high-level or low-level features alone. The performance of the tool was compared with related AI-based dermatologist tools, and this comparison verified its superiority; thus, the proposed tool can assist dermatologists in the accurate diagnosis of the SC subcategory and avoid the complications of manual diagnosis. Future work will consider additional deep learning techniques, other radiomics techniques, segmentation methods, and other integration techniques. The main limitation of this tool is that its performance was validated only with 5-fold cross-validation; cross-center validation on other datasets is still required and will therefore be addressed in future work.
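The 5-fold cross-validation mentioned above partitions the dataset into five disjoint folds, training on four folds and testing on the fifth in rotation, so that every sample is tested exactly once. A minimal index-splitting sketch in plain Python, independent of the study's actual implementation:

```python
def kfold_indices(n_samples, k=5):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    # Assign samples to folds round-robin; folds are disjoint and cover all indices.
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        held_out = set(test)
        train = [j for j in range(n_samples) if j not in held_out]
        yield train, test

splits = list(kfold_indices(10, k=5))
# 5 splits; each test fold holds 2 of the 10 samples, and the folds are disjoint
```

In practice, a stratified variant (which preserves the class proportions in every fold) is preferable for imbalanced lesion datasets such as HAM10000.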

Data Availability

The dataset employed in this paper can be found on Kaggle. The code is available at the following link:

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as potential conflicts of interest.

Supplementary Materials

Some radiomics features extracted in this study comply with the standards of the IBSI. Table S1 illustrates the compliance/noncompliance of these features. (Supplementary Materials)


  1. A. Naeem, M. S. Farooq, A. Khelifi, and A. Abid, “Malignant melanoma classification using deep learning: datasets, performance measurements, challenges and opportunities,” IEEE Access, vol. 8, pp. 110575–110597, 2020.
  2. H. Sung, J. Ferlay, R. L. Siegel et al., “Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” CA: A Cancer Journal for Clinicians, vol. 71, no. 3, pp. 209–249, 2021.
  3. C. Sinclair and P. Foley, “Skin cancer prevention in Australia,” British Journal of Dermatology, vol. 161, pp. 116–123, 2009.
  4. L. D. Marrett, P. De, P. Airia, and D. Dryer, “Cancer in Canada in 2008,” Canadian Medical Association Journal, vol. 179, no. 11, pp. 1163–1170, 2008.
  5. D. E. O’Sullivan, D. R. Brenner, P. A. Demers, J. V. Paul, M. F. Christine, and D. K. Will, “Indoor tanning and skin cancer in Canada: a meta-analysis and attributable burden estimation,” Cancer Epidemiology, vol. 59, pp. 1–7, 2019.
  6. P. C. Marcelo de and P. Dubuisson, “An overview of the ultraviolet index and the skin cancer cases in Brazil,” Photochemistry and Photobiology, vol. 78, 2003.
  7. Z. Apalla, A. Lallas, E. Sotiriou, E. Lazaridou, and D. Ioannides, “Epidemiological trends in skin cancer,” Dermatology Practical & Conceptual, vol. 7, pp. 1–6, 2017.
  8. A. Esteva, B. Kuprel, R. A. Novoa et al., “Dermatologist-level classification of skin cancer with deep neural networks,” Nature, vol. 542, no. 7639, pp. 115–118, 2017.
  9. S. S. Han, M. S. Kim, W. Lim, G. H. Park, I. Park, and S. E. Chang, “Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm,” Journal of Investigative Dermatology, vol. 138, no. 7, pp. 1529–1538, 2018.
  10. A. Adegun and S. Viriri, “Deep learning techniques for skin lesion analysis and melanoma cancer detection: a survey of state-of-the-art,” Artificial Intelligence Review, vol. 54, no. 2, pp. 811–841, 2021.
  11. M. E. Vestergaard, P. Macaskill, P. E. Holt, and S. W. Menzies, “Dermoscopy compared with naked eye examination for the diagnosis of primary melanoma: a meta-analysis of studies performed in a clinical setting,” British Journal of Dermatology, vol. 159, pp. 669–676, 2008.
  12. A. Masood and A. Ali Al-Jumaily, “Computer aided diagnostic support system for skin cancer: a review of techniques and algorithms,” International Journal of Biomedical Imaging, vol. 2013, Article ID 323268, 22 pages, 2013.
  13. D. A. Ragab, O. Attallah, M. Sharkas, and J. Ren, “A framework for breast cancer classification using multi-DCNNs,” Computers in Biology and Medicine, vol. 131, Article ID 104245, 2021.
  14. D. A. Ragab, M. Sharkas, and O. Attallah, “Breast cancer diagnosis using an efficient CAD system based on multiple classifiers,” Diagnostics, vol. 9, 2019.
  15. O. Attallah, “MB-AI-His: histopathological diagnosis of pediatric medulloblastoma and its subtypes via AI,” Diagnostics, vol. 11, pp. 359–384, 2021.
  16. O. Attallah, “CoMB-deep: composite deep learning-based pipeline for classifying childhood medulloblastoma and its classes,” Frontiers in Neuroinformatics, vol. 15, p. 21, 2021.
  17. O. Attallah and M. Sharkas, “GASTRO-CADx: a three stages framework for diagnosing gastrointestinal diseases,” PeerJ Computer Science, vol. 7, p. e423, 2021.
  18. O. Attallah, D. A. Ragab, and M. Sharkas, “MULTI-DEEP: a novel CAD system for coronavirus (COVID-19) diagnosis from CT images using multiple convolution neural networks,” PeerJ, vol. 8, Article ID e10086, 2020.
  19. O. Attallah and X. Ma, “Bayesian neural network approach for determining the risk of re-intervention after endovascular aortic aneurysm repair,” Proceedings of the Institution of Mechanical Engineers - Part H: Journal of Engineering in Medicine, vol. 228, no. 9, pp. 857–866, 2014.
  20. O. Attallah, A. Karthikesalingam, P. J. Holt et al., “Using multiple classifiers for predicting the risk of endovascular aortic aneurysm repair re-intervention through hybrid feature selection,” Proceedings of the Institution of Mechanical Engineers - Part H: Journal of Engineering in Medicine, vol. 231, no. 11, pp. 1048–1063, 2017.
  21. O. Attallah, A. Karthikesalingam, P. J. E. Holt et al., “Feature selection through validation and un-censoring of endovascular repair survival data for predicting the risk of re-intervention,” BMC Medical Informatics and Decision Making, vol. 17, no. 1, pp. 115–133, 2017.
  22. A. Karthikesalingam, O. Attallah, X. Ma et al., “An artificial neural network stratifies the risks of reintervention and mortality after endovascular aneurysm repair; a retrospective observational study,” PLoS One, vol. 10, no. 7, Article ID e0129024, 2015.
  23. A. Baraka, H. Shaban, M. Abou El-Nasr, and O. Attallah, “Wearable accelerometer and sEMG-based upper limb BSN for tele-rehabilitation,” Applied Sciences, vol. 9, no. 14, pp. 2795–2816, 2019.
  24. A. Ayman, O. Attalah, and H. Shaban, “An efficient human activity recognition framework based on wearable IMU wrist sensors,” in Proceedings of the 2019 IEEE International Conference on Imaging Systems and Techniques (IST), pp. 1–5, Abu Dhabi, UAE, December 2019.
  25. O. Attallah, J. Abougharbia, M. Tamazin, and A. A. Nasser, “A BCI system based on motor imagery for assisting people with motor deficiencies in the limbs,” Brain Sciences, vol. 10, no. 11, pp. 864–888, 2020.
  26. J. Abougharbia, O. Attallah, and M. Tamazin, “A novel BCI system based on hybrid features for classifying motor imagery tasks,” in Proceedings of the 2019 Ninth International Conference on Image Processing Theory, Tools and Applications (IPTA), pp. 1–6, Istanbul, Turkey, November 2019.
  27. R. B. Oliveira, J. P. Papa, A. S. Pereira, and J. M. R. S. Tavares, “Computational methods for pigmented skin lesion classification in images: review and future trends,” Neural Computing & Applications, vol. 29, no. 3, pp. 613–636, 2018.
  28. M. Goyal, T. Knackstedt, S. Yan, and S. Hassanpour, “Artificial intelligence-based image classification methods for diagnosis of skin cancer: challenges and opportunities,” Computers in Biology and Medicine, vol. 127, Article ID 104065, 2020.
  29. N. Nami, E. Giannini, M. Burroni, M. Fimiani, and P. Rubegni, “Teledermatology: state-of-the-art and future perspectives,” Expert Review of Dermatology, vol. 7, no. 1, pp. 1–3, 2012.
  30. V.-H. Le, Q.-H. Kha, T. N. K. Hung, and N. Q. K. Le, “Risk score generated from CT-based radiomics signatures for overall survival prediction in non-small cell lung cancer,” Cancers, vol. 13, no. 14, p. 3616, 2021.
  31. M. Avanzo, L. Wei, J. Stancanello et al., “Machine and deep learning methods for radiomics,” Medical Physics, vol. 47, pp. e185–e202, 2020.
  32. N. Q. K. Le, T. N. K. Hung, D. T. Do, L. H. T. Lam, L. H. Dang, and T.-T. Huynh, “Radiomics-based machine learning model for efficiently classifying transcriptome subtypes in glioblastoma patients from MRI,” Computers in Biology and Medicine, vol. 132, Article ID 104320, 2021.
  33. P. Afshar, A. Mohammadi, K. N. Plataniotis, A. Oikonomou, and H. Benali, “From handcrafted to deep-learning-based cancer radiomics: challenges and opportunities,” IEEE Signal Processing Magazine, vol. 36, no. 4, pp. 132–160, 2019.
  34. N. Q. K. Le, D. T. Do, F.-Y. Chiu, E. K. Y. Yapp, H.-Y. Yeh, and C.-Y. Chen, “XGBoost improves classification of MGMT promoter methylation status in IDH1 wildtype glioblastoma,” Journal of Personalized Medicine, vol. 10, no. 3, p. 128, 2020.
  35. M. K. Monika, N. Arun Vignesh, C. Usha Kumari, M. N. V. S. S. Kumar, and E. L. Lydia, “Skin cancer detection and classification using machine learning,” Materials Today: Proceedings, vol. 33, pp. 4266–4270, 2020.
  36. G. Arora, A. K. Dubey, and Z. A. Jaffery, “Bag of feature and support vector machine based early diagnosis of skin cancer,” Neural Computing & Applications, pp. 1–8, 2020.
  37. M. Kumar, M. Alshehri, R. AlGhamdi, P. Sharma, and V. Deep, “A DE-ANN inspired skin cancer detection approach using fuzzy c-means clustering,” Mobile Networks and Applications, vol. 25, no. 4, pp. 1319–1329, 2020.
  38. O. Attallah, M. A. Sharkas, and H. Gadelkarim, “Deep learning techniques for automatic detection of embryonic neurodevelopmental disorders,” Diagnostics, vol. 10, no. 1, pp. 27–49, 2020.
  39. D. A. Ragab and O. Attallah, “FUSI-CAD: coronavirus (COVID-19) diagnosis based on the fusion of CNNs and handcrafted features,” PeerJ Computer Science, vol. 6, Article ID e306, 2020.
  40. O. Attallah, F. Anwar, N. M. Ghanem, and M. A. Ismail, “Histo-CADx: duo cascaded fusion stages for breast cancer diagnosis from histopathological images,” PeerJ Computer Science, vol. 7, Article ID e493, 2021.
  41. D. d. A. Rodrigues, R. F. Ivo, S. C. Satapathy, S. Wang, J. Hemanth, and P. P. R. Filho, “A new approach for classification skin lesion based on transfer learning, deep learning, and IoT system,” Pattern Recognition Letters, vol. 136, pp. 8–15, 2020.
  42. A. Khamparia, P. K. Singh, P. Rani, S. Debabrata, K. Ashish, and B. Bharat, “An internet of health things-driven deep learning framework for detection and classification of skin cancer using transfer learning,” Transactions on Emerging Telecommunications Technologies, vol. 7, Article ID e3963, 2020.
  43. M. A. Khan, Y.-D. Zhang, M. Sharif, and T. Akram, “Pixels to classes: intelligent learning framework for multiclass skin lesion localization and classification,” Computers & Electrical Engineering, vol. 90, Article ID 106956, 2021.
  44. M. A. Khan, T. Akram, Y.-D. Zhang, and M. Sharif, “Attributes based skin lesion detection and recognition: a mask RCNN and transfer learning-based deep learning framework,” Pattern Recognition Letters, vol. 143, pp. 58–66, 2021.
  45. J. Amin, A. Sharif, N. Gul et al., “Integrated design of deep features fusion for localization and classification of skin cancer,” Pattern Recognition Letters, vol. 131, pp. 63–70, 2020.
  46. M. Toğaçar, Z. Cömert, and B. Ergen, “Intelligent skin cancer detection applying autoencoder, MobileNetV2 and spiking neural networks,” Chaos, Solitons & Fractals, vol. 144, Article ID 110714, 2021.
  47. S. M. Alizadeh and A. Mahloojifar, “Automatic skin cancer detection in dermoscopy images by combining convolutional neural networks and texture features,” International Journal of Imaging Systems and Technology, vol. 31, 2021.
  48. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, June 2016.
  49. G. Huang, Z. Liu, and L. Van Der Maaten, “Densely connected convolutional networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700–4708, 2017.
  50. J. Redmon and A. Farhadi, “YOLO9000: better, faster, stronger,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7263–7271, 2017.
  51. M. Antonini, M. Barlaud, P. Mathieu, and I. Daubechies, “Image coding using wavelet transform,” IEEE Transactions on Image Processing, vol. 1, no. 2, pp. 205–220, 1992.
  52. H. Demirel, C. Ozcinar, and G. Anbarjafari, “Satellite image contrast enhancement using discrete wavelet transform and singular value decomposition,” IEEE Geoscience and Remote Sensing Letters, vol. 7, pp. 333–337, 2009.
  53. E. Hatamimajoumerd and A. Talebpour, “A temporal neural trace of wavelet coefficients in human object vision: an MEG study,” Frontiers in Neural Circuits, vol. 13, p. 20, 2019.
  54. O. Attallah, M. A. Sharkas, and H. Gadelkarim, “Fetal brain abnormality classification from MRI images of different gestational age,” Brain Sciences, vol. 9, no. 9, pp. 231–252, 2019.
  55. T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale and rotation invariant texture classification with local binary patterns,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971–987, 2002.
  56. P. Tschandl, C. Rosendahl, and H. Kittler, “The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions,” Scientific Data, vol. 5, pp. 180161–180169, 2018.
  57. Z. Guo, L. Zhang, and D. Zhang, “A completed modeling of local binary pattern operator for texture classification,” IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1657–1663, 2010.
  58. D. Zhang, “Wavelet transform,” in Fundamentals of Image Data Mining, Texts in Computer Science, pp. 35–44, Springer, Berlin, Germany, 2019.
  59. M. H. Bharati, J. J. Liu, and J. F. MacGregor, “Image texture analysis: methods and comparisons,” Chemometrics and Intelligent Laboratory Systems, vol. 72, no. 1, pp. 57–71, 2004.
  60. S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Transactions on Knowledge and Data Engineering, vol. 22, pp. 1345–1359, 2009.
  61. A. Zwanenburg, S. Leger, and M. Vallières, “Image biomarker standardisation initiative,” 2016.
  62. A. Zwanenburg, M. Vallières, M. A. Abdalah et al., “The image biomarker standardization initiative: standardized quantitative radiomics for high-throughput image-based phenotyping,” Radiology, vol. 295, no. 2, pp. 328–338, 2020.
  63. M. Radovic, M. Ghalwash, N. Filipovic, and Z. Obradovic, “Minimum redundancy maximum relevance feature selection approach for temporal gene expression data,” BMC Bioinformatics, vol. 18, pp. 9–14, 2017.
  64. D. Colquhoun, “An investigation of the false discovery rate and the misinterpretation of p-values,” Royal Society Open Science, vol. 1, no. 3, Article ID 140216, 2014.
  65. P. D. Ellis, The Essential Guide to Effect Sizes: Statistical Power, Meta-Analysis, and the Interpretation of Research Results, Cambridge University Press, Cambridge, UK, 2010.
  66. O. Attallah, “An effective mental stress state detection and evaluation system using minimum number of frontal brain electrodes,” Diagnostics, vol. 10, no. 5, pp. 292–327, 2020.
  67. H. W. Huang, B. W. Y. Hsu, C. H. Lee, and V. S. Tseng, “Development of a light‐weight deep learning model for cloud applications and remote diagnosis of skin cancers,” The Journal of Dermatology, vol. 48, no. 3, pp. 310–316, 2021.
  68. P. N. Srinivasu, J. G. SivaSai, M. F. Ijaz, A. K. Bhoi, W. Kim, and J. J. Kang, “Classification of skin disease using deep learning neural networks with MobileNet V2 and LSTM,” Sensors, vol. 21, no. 8, p. 2852, 2021.
  69. M. A. Khan, M. Sharif, T. Akram, R. Damaševičius, and R. Maskeliūnas, “Skin lesion segmentation and multiclass classification using deep learning features and improved moth flame optimization,” Diagnostics, vol. 11, no. 5, p. 811, 2021.
  70. S. Hosseinzadeh Kassani and P. Hosseinzadeh Kassani, “A comparative study of deep learning architectures on melanoma detection,” Tissue and Cell, vol. 58, pp. 76–83, 2019.

Copyright © 2021 Omneya Attallah and Maha Sharkas. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
