
Bilal Khan, Rashid Naseem, Muhammad Arif Shah, Karzan Wakil, Atif Khan, M. Irfan Uddin, Marwan Mahmoud, "Software Defect Prediction for Healthcare Big Data: An Empirical Evaluation of Machine Learning Techniques", Journal of Healthcare Engineering, vol. 2021, Article ID 8899263, 16 pages, 2021. https://doi.org/10.1155/2021/8899263

Software Defect Prediction for Healthcare Big Data: An Empirical Evaluation of Machine Learning Techniques

Academic Editor: Nazir Shah
Received: 09 Sep 2020
Revised: 29 Sep 2020
Accepted: 24 Feb 2021
Published: 16 Mar 2021

Abstract

Software defect prediction (SDP) in the initial stages of the software development life cycle (SDLC) remains a critical and important task. SDP has been studied intensively over the last few decades because it helps to assure the quality of software systems. Early identification of defective or imperfect artifacts during software development allows the development team to use the available resources competently and effectively to deliver high-quality software products within the given or limited time. Previously, several researchers have developed models for defect prediction utilizing machine learning (ML) and statistical techniques. ML methods are considered an effective and practical approach to pinpointing defective modules, working by mining hidden patterns among software metrics (attributes). ML techniques have also been applied by several researchers to healthcare datasets. This study applies different ML techniques to software defect prediction using seven broadly used datasets. The ML techniques include the multilayer perceptron (MLP), support vector machine (SVM), decision tree (J48), radial basis function (RBF), random forest (RF), hidden Markov model (HMM), credal decision tree (CDT), K-nearest neighbor (KNN), average one dependency estimator (A1DE), and Naïve Bayes (NB). The performance of each technique is evaluated using different measures, for instance, relative absolute error (RAE), mean absolute error (MAE), root mean squared error (RMSE), root relative squared error (RRSE), recall, and accuracy. The overall outcome shows the best performance for RF, with 88.32% average accuracy and a rank value of 2.96; the second-best performance is achieved by SVM, with 87.99% average accuracy and a rank value of 3.83. Moreover, CDT shows 87.88% average accuracy and a rank value of 3.62, placing it in third position. The comprehensive outcomes of this research can be used as a reference point for new research in the SDP domain, so that any assertion concerning improvement in prediction by any new technique or model can be benchmarked and proved.

1. Introduction

Software engineering (SE) is a discipline concerned with all aspects of software development, from the initial software specification through to maintaining the software after it has gone into use [1]. In the domain of SE, software defect prediction (SDP) is among the most significant and active research areas and plays an important role in software quality assurance (SQA) [2, 3]. The rising complexity and dependencies of software systems have increased the difficulty of delivering software with minimal cost, high quality, and maintainability, and have also increased the chances of introducing software defects (SDs) [4, 5]. An SD is a flaw or deficiency in a software system that causes it to produce an unintended result. An SD can also be the situation in which the final software product does not meet the client's expectation or requirement [6]. SDs can reduce the quality of the software product and increase the development cost.

SDP is an important activity for assuring the quality of a software system: it helps to keep development cost acceptable and to improve quality by identifying defect-prone instances before testing [4]. It also involves categorizing the software components of a software system, which supports the testing process by concentrating testing and evaluation effort on the components classified as defective [7]. Defects adversely affect software reliability and quality [8].

SDP in the early stages of the software development life cycle (SDLC) is considered one of the most challenging aspects of SQA [9]. In SE, bug fixing and testing are very costly and require a massive amount of resources. Forecasting software defects during software development has been investigated by numerous studies in the last decades. Among all these studies, machine learning (ML) techniques are considered the best approach toward SDP [7, 10, 11].

Considering the above issues related to SDP, various researchers have built and evaluated SDP models utilizing diverse classification techniques. Still, it is quite challenging to draw any general conclusion establishing the usability of these techniques. Overall, it has been found that, notwithstanding some dissimilarities between the studies, no particular SDP technique performs better than the others across different datasets. Researchers have used different evaluation measures to assess the proposed models and to find the best model for SDP [12, 13].

This study therefore focuses on an empirical analysis of ten ML techniques, some of which are proposed as new solutions for SDP. The ML techniques include the multilayer perceptron (MLP), radial basis function (RBF), support vector machine (SVM), decision tree (J48), random forest (RF), hidden Markov model (HMM), credal decision tree (CDT), K-nearest neighbor (KNN), average one dependency estimator (A1DE), and Naïve Bayes (NB). Among these techniques, HMM and A1DE are applied to SDP for the first time. The techniques are employed on seven different datasets, namely AR1, AR3, CM1, JM1, KC2, KC3, and MC1. All the experiments are validated using relative absolute error (RAE), mean absolute error (MAE), root relative squared error (RRSE), root mean squared error (RMSE), recall, and accuracy.

The contributions of this research are as follows:
(1) To benchmark ten different ML techniques (MLP, J48, SVM, RF, RBF, HMM, CDT, A1DE, KNN, and NB) for SDP.
(2) To conduct a series of experiments on different datasets, namely AR1, AR3, CM1, JM1, KC2, KC3, and MC1.
(3) To shed light on the experimental outcomes, with evaluation performed using MAE, RAE, RMSE, RRSE, recall, and accuracy.
(4) To show that the experimental outcomes are significantly different and comparable, and to verify the best results, by performing the Friedman two-way analysis of variance by ranks.

Hereinafter, Section 2 presents the literature survey, Section 3 comprises the methodology, Section 4 describes the techniques employed, Section 5 discusses the experimental results, Section 6 covers threats to validity, and Section 7 concludes the study.

2. Literature Survey

This section delivers a brief survey of existing techniques in the field of SDP. Several researchers have employed ML techniques for SDP at the initial phase of software development; a selection of these studies is discussed here. Czibula et al. [11] presented a model grounded on relational association discovery (RAD) for SDP. They performed all investigations on NASA datasets including KC1, KC3, MC2, MW1, JM1, PC3, PC4, PC1, PC2, and CM1. To assess the model against other models, they used accuracy, precision, specificity, probability of detection (PD), and area under the ROC curve as assessment measures. The reported outcomes show that RAD performs better than the other employed techniques.

A framework for SDP named Defect Prediction through Convolutional Neural Network (DP-CNN) has been recommended by Li et al. [14]. The authors evaluated DP-CNN on seven different open source projects, such as Camel, jEdit, Lucene, Xalam, Xerces, Synapse, and Poi, in terms of F-measure in defect prediction. Overall outcomes illustrate that, on average, DP-CNN improved on the state-of-the-art technique by 12%.

Jacob and Raju [15] introduced a hybrid feature selection (HFS) method for SDP. They performed their analysis on NASA datasets including PC1, PC2, PC3, PC4, CM1, JM1, KC3, and MW1. The outcomes of HFS were benchmarked against Naïve Bayes (NB), neural networks (NN), RF, random tree (RT), and J48. Benchmarking was carried out using accuracy, specificity, sensitivity, and Matthew's correlation coefficient (MCC). The analyzed outcomes show that HFS outperforms the others, improving classification accuracy from 82% to 98%.

Bashir et al. [16] presented a combined framework to improve the SDP model using Ranker feature selection (RFS), data sampling (DS), and iterative partition filter (IPF) techniques to overcome class imbalance, noise, and high dimensionality, respectively. Seven ML techniques, including NB, RF, KNN, MLP, SVM, J48, and decision stump, were employed on the CM1, JM1, KC2, MC1, PC1, and PC5 datasets for evaluation. The outcomes were assessed using the receiver operating characteristic (ROC) performance evaluation. Overall, the experimental outcomes of the proposed model outperformed other models.

A new approach for SDP utilizing hybridized gradual relational association rules (HyGRAR) and artificial neural networks (ANN) to classify defective and nondefective objects is proposed in [7]. Experiments were conducted on ten different open source datasets, such as Tomcat 6.0, Anr 1.7, jEdit 4.0, jEdit 4.2, jEdit 4.3, AR1, AR3, AR4, AR5, and AR6. For module evaluation, accuracy, sensitivity, specificity, and precision measures were utilized. The authors concluded that HyGRAR achieved better outcomes compared to most of the previously proposed approaches.

Alsaeedi and Khan [8] performed a comparison of supervised learning techniques, including bagging, SVM, decision tree (DT), and RF, and ensemble classifiers on different NASA datasets such as CM1, MC1, MC2, PC1, PC3, PC4, PC5, KC2, KC3, and JM1. The basic learners and ensemble classifiers were evaluated using G-measure, specificity, F-score, recall, precision, and accuracy. The experimental results show that RF, AdaBoost with RF, and DS with bagging outperform the other employed techniques.

The authors in [9] performed a comparative exploration of several ML techniques for SDP on twelve NASA datasets, namely MW1, CM1, JM1, PC1, PC2, PC3, PC4, PC5, KC1, KC3, MC1, and MC2; the classification techniques include one rule (OneR), NB, MLP, DT, RBF, kStar (K), SVM, KNN, PART, and RF. The performance of each technique was assessed using MCC, ROC area, recall, precision, F-measure, and accuracy.

Malhotra and Kamal [6] evaluated the efficiency of ML classifiers for SDP on twelve imbalanced datasets taken from the NASA repository by employing sampling approaches and cost-sensitive classifiers. They examined five prevailing methods, including J48, RF, NB, AdaBoost, and bagging, and also suggested the SPIDER3 method for SDP. They compared performance on the basis of accuracy, sensitivity, specificity, and precision.

Manjula and Florence [17] developed a hybrid model of the genetic algorithm (GA) and the deep neural network (DNN), where GA is utilized for feature optimization and DNN for classification. The performance of the proposed technique was benchmarked against NB, RF, DT, Immunos, ANN-artificial bee colony (ABC), SVM, majority vote, AntMiner+, and KNN. All experiments were carried out on datasets including KC1, KC2, CM1, PC1, and JM1 and assessed via recall, F-score, sensitivity, precision, specificity, and accuracy. The experimental results show that the proposed technique beats the other techniques in terms of accuracy.

Researchers have used various techniques to overcome the limitations of SDP on a variety of datasets. In each study, different evaluation measures were used to evaluate and benchmark the proposed techniques. The overall summary of the literature discussed above is listed in Table 1, where the first column represents the authors who conducted research studies utilizing various ML techniques. The second column shows the techniques utilized by each individual study, while the third and fourth columns present the datasets and evaluation measures utilized in the different studies. As shown in Table 1, each study used different evaluation measures to achieve higher accuracy, but none focuses on decreasing the error rate, which is a significant consideration.


Author | Technique/Model | Datasets | Evaluation measures
Czibula et al. [11] | RAD | MW1, JM1, PC1, PC2, PC3, PC4, KC1, KC3, MC2, and CM1 | Accuracy, specificity, precision, PD, and ROC
Li et al. [14] | DP-CNN | Camel, jEdit, Lucene, Xalam, Xerces, Synapse, and Poi | F-measure
Jacob and Raju [15] | HFS, NB, NN, RF, RT, J48 | PC1, PC2, PC3, PC4, CM1, MW1, KC3, and JM1 | Specificity, sensitivity, MCC, and accuracy
Bashir et al. [16] | NB, RF, KNN, MLP, SVM, J48, and decision stump | CM1, JM1, KC2, MC1, PC1, and PC5 | ROC
Miholca et al. [7] | HyGRAR | Tomcat 6.0, Anr 1.7, jEdit 4.0, AR1, jEdit 4.2, AR3, jEdit 4.3, AR5, AR4, and AR6 | Accuracy, sensitivity, specificity, and precision
Alsaeedi and Khan [8] | Bagging, SVM, DT, and RF | PC1, PC3, PC4, PC5, JM1, KC2, KC3, MC1, MC2, and CM1 | G-measure, specificity, F-score, recall, precision, and accuracy
Iqbal et al. [9] | OneR, NB, K, MLP, SVM, RBF, RF, KNN, DT, and PART | JM1, MW1, CM1, MC1, PC1, MC2, PC4, PC3, PC2, PC5, KC3, and KC1 | MCC, ROC area, F-measure, recall, precision, and accuracy
Malhotra and Kamal [6] | J48, RF, NB, AdaBoost, bagging, and SPIDER3 | NASA datasets | Accuracy, sensitivity, specificity, and precision
Manjula and Florence [17] | GA, DNN, NB, RF, DT, ABC, SVM, and KNN | KC1, KC2, CM1, PC1, and JM1 | Precision, sensitivity, specificity, recall, F-score, and accuracy

Moreover, ML techniques have also been utilized by many researchers in healthcare engineering and in the development of medical data analysis software [1]. Khan et al. [2] utilized machine learning techniques for the prediction of chronic kidney disease (CKD) to suggest the best model for early prediction of CKD. The study of Makumba et al. [3] on heart disease prediction using data mining (DM)/ML techniques can also serve as a baseline for new researchers; they employed DM/ML techniques on heart disease datasets. Hence, many researchers have utilized ML techniques on different healthcare datasets for the early prediction of disease. However, the most important point is that, when an optimal solution for any kind of disease is proposed, assurance must also be given for the quality of the software that will be developed using that solution. To ensure this, we have to predict the defects that may occur in the software, which would otherwise decrease the quality of the software system. These are the motivations behind this research study.

3. Methodology and Techniques

This study aims to present a performance analysis of ML techniques for SDP on various datasets, including AR1, AR3, CM1, JM1, KC2, KC3, and MC1. All these datasets can be found in the UCI ML repository (https://archive.ics.uci.edu/). The experimentation is performed using the open source ML and DM tool Weka version 3.9 (https://machinelearningmastery.com/use-ensemble-machine-learning-algorithms-weka/). As per the information presented in Table 1 and Figure 1, AR1 and AR3 are reported in the literature a single time, CM1 and JM1 are reported 6 times, KC2 and MC1 are reported 1 time, while KC3 is reported 4 times. Each dataset consists of a number of attributes along with a known output class. All datasets contain numerical data, while the total numbers of attributes and instances differ, as presented in Table 2. In Table 2, the first column shows the datasets, and the second and third columns present the number of metrics (attributes) and the number of cases (instances), respectively. The fourth and fifth columns represent the number of defective modules and the number of nondefective modules, respectively, while the last column shows the type of data in each dataset. Table 3 lists the attributes (software metrics) of the datasets utilized in this research. The experimental setup for SDP is shown in Figure 2, which explains how each task is performed in this research. Before training, a preprocessing step is applied only to the class attribute of each dataset, solely to change its type from numerical to categorical, because some of the ML techniques are unable to work with numerical class attributes. When the ML techniques are applied to each dataset, the outcomes are assessed using different assessment measures to show the performance of each individual technique. Therefore, six assessment measures, namely MAE [13, 18, 19], RMSE [8, 20, 21], RAE [16, 22, 23], RRSE [22, 24], recall [9, 10, 25], and accuracy [26-28], are utilized to evaluate the performance of the ML techniques on the SDP datasets. We have used error-based assessment measures, which are not reported in the literature, while recall and accuracy have been used 3 and 7 times, respectively (Figure 3).
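To make the preprocessing step concrete, the short Python sketch below converts a numeric class attribute into a categorical (nominal) one before classification, mirroring the step described above. The file name CM1.csv and the column name "defects" are illustrative assumptions, not artifacts of this study.

```python
# Minimal sketch of the class-attribute preprocessing described above,
# assuming each dataset is available as a CSV file with a numeric class
# column named "defects" (file and column names are assumptions).
import pandas as pd

def load_and_preprocess(path, class_col="defects"):
    df = pd.read_csv(path)
    # Convert the numeric class attribute to a categorical (nominal) one,
    # analogous to applying Weka's NumericToNominal filter to the class only.
    df[class_col] = df[class_col].map(lambda v: "defective" if v > 0 else "nondefective")
    df[class_col] = df[class_col].astype("category")
    return df

if __name__ == "__main__":
    data = load_and_preprocess("CM1.csv")
    print(data["defects"].value_counts())   # class distribution, cf. Table 2
```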


S. No. | Dataset | No. of attributes | No. of instances | No. of defective modules | No. of nondefective modules | Data type
1 | AR1 | 30 | 121 | 9 (7.4%) | 112 (92.6%) | Numerical
2 | AR3 | 30 | 63 | 8 (12.7%) | 55 (87.3%) | Numerical
3 | CM1 | 22 | 498 | 49 (9.8%) | 449 (90.2%) | Numerical
4 | JM1 | 22 | 9593 | 1759 (18.3%) | 7834 (81.7%) | Numerical
5 | KC2 | 22 | 522 | 107 (20.5%) | 415 (79.5%) | Numerical
6 | KC3 | 40 | 194 | 36 (18.6%) | 158 (81.4%) | Numerical
7 | MC1 | 40 | 9466 | 68 (0.7%) | 9398 (99.3%) | Numerical


Attribute category | Attributes
Halstead attributes | Halstead content, Halstead difficulty, Halstead effort, Halstead error estimator, Halstead length, Halstead level, Halstead program time, Halstead volume, number of operands, number of operators, number of unique operands, number of unique operators
McCabe attributes | Essential complexity, cyclomatic complexity, design complexity, cyclomatic density
Size attributes | Number of lines, LOC total, LOC executable, LOC comments, LOC code and comments, LOC blank
Other attributes | Branch count, condition count, edge count, parameter count, modified condition count, multiple condition count, node count, design density, essential density, decision count, decision density, call pairs, global data complexity, global data density, maintenance severity, normalized cyclomatic complexity, pathological complexity, percent comments
Class attribute | Defective

Table 4 shows the calculation mechanism and a description of each evaluation measure. The second column of Table 4 lists the evaluation measures, while the third column gives the equation of each measure, where $|P_{ij} - T_j|$ is the absolute error, $n$ is the number of records, $T_j$ is the target value for record $j$, $P_{ij}$ is the value predicted by technique $i$ for record $j$ (over the $n$ records), $\bar{T}$ is the mean of the target values, TP is the number of true-positive classifications, FN is the number of false-negative classifications, TN is the number of true-negative classifications, and FP is the number of false-positive classifications.


S. No. | Measure | Equation
1 | MAE | $\mathrm{MAE} = \frac{1}{n}\sum_{j=1}^{n}\left|P_{ij} - T_j\right|$
2 | RMSE | $\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{j=1}^{n}\left(P_{ij} - T_j\right)^2}$
3 | RAE | $\mathrm{RAE} = \frac{\sum_{j=1}^{n}\left|P_{ij} - T_j\right|}{\sum_{j=1}^{n}\left|\bar{T} - T_j\right|}$
4 | RRSE | $\mathrm{RRSE} = \sqrt{\frac{\sum_{j=1}^{n}\left(P_{ij} - T_j\right)^2}{\sum_{j=1}^{n}\left(\bar{T} - T_j\right)^2}}$
5 | Recall | $\mathrm{Recall} = \frac{TP}{TP + FN}$
6 | Accuracy | $\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$
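As a worked illustration of the measures in Table 4, the following Python sketch computes them with NumPy; the "actual" and "predicted" arrays and the confusion-matrix counts are made-up values, not results from this study.

```python
# Sketch of the six evaluation measures in Table 4, computed with NumPy.
import numpy as np

def mae(actual, predicted):
    return np.mean(np.abs(predicted - actual))

def rmse(actual, predicted):
    return np.sqrt(np.mean((predicted - actual) ** 2))

def rae(actual, predicted):
    # error relative to a naive predictor that always outputs the mean target
    return np.sum(np.abs(predicted - actual)) / np.sum(np.abs(actual.mean() - actual))

def rrse(actual, predicted):
    return np.sqrt(np.sum((predicted - actual) ** 2) / np.sum((actual.mean() - actual) ** 2))

def recall(tp, fn):
    return tp / (tp + fn)

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

actual = np.array([0, 1, 0, 1, 1, 0], dtype=float)      # illustrative targets
predicted = np.array([0.1, 0.8, 0.3, 0.6, 0.9, 0.2])     # illustrative predictions
print(mae(actual, predicted), rmse(actual, predicted))
print(rae(actual, predicted) * 100, rrse(actual, predicted) * 100)  # as percentages
print(recall(tp=40, fn=10), accuracy(tp=40, tn=45, fp=5, fn=10))
```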

4. Techniques Employed

ML techniques are now extensively used to extract significant knowledge from massive volumes of data in diverse areas. ML applications cover numerous real-world situations such as cyber-security, bioinformatics, detecting communities in social networks, and software process enhancement to produce high-quality software systems [7]. ML-based solutions for SDP have also been investigated [6, 10, 29]. From these, we have selected the top seven techniques as reported in Table 1; the count of each technique is given in Figure 4. RBF is selected randomly, while the other two, i.e., HMM and A1DE, are new explorations for SDP. All ten selected techniques are briefly discussed in the following subsections.

4.1. Support Vector Machine

SVM has numerous uses in the fields of classification, biophotonics, and pattern recognition [8, 25]. It was first developed for binary classification; however, it can also be used for multiple classes [30]. In binary classification, the core objective of SVM is to define a separating line between the classes of data that maximizes the distance of the margin from the data points lying closest to it. If the data are linearly inseparable, a mathematical function is utilized to transform the data to a higher-dimensional attribute space, so that they may become linearly separable in the new space. The function used is called the kernel function, and the decision function of a linear SVM can be written as
$$f(x) = \operatorname{sign}\left(\sum_{i=1}^{N} \alpha_i y_i (x_i \cdot x) + b\right),$$
where $x_i$ is the input (support vector) with label $y_i$, $\alpha_i$ is the Lagrange multiplier, and $b$ is the bias, while $N$ signifies the number of support vectors. For the nonlinearly separable case, the above equation can be extended for the kernel SVM as
$$f(x) = \operatorname{sign}\left(\sum_{i=1}^{N} \alpha_i y_i K(x_i, x) + b\right),$$
where $K(x_i, x)$ is the kernel function.
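For illustration only, the sketch below trains a linear and an RBF-kernel SVM with scikit-learn on synthetic data; the study itself uses Weka, so this is an assumed, roughly equivalent setup rather than the authors' exact configuration.

```python
# Hedged sketch of linear and RBF-kernel SVMs on synthetic "defect" data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                  # 100 modules, 5 software metrics
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # synthetic defect labels

linear_svm = SVC(kernel="linear").fit(X, y)    # f(x) = sign(sum a_i y_i (x_i . x) + b)
kernel_svm = SVC(kernel="rbf").fit(X, y)       # dot product replaced by K(x_i, x)

print(linear_svm.predict(X[:5]), kernel_svm.predict(X[:5]))
```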

4.2. Decision Tree (J48)

This is the basic C4.5 decision tree (DT) used for classification problems [26]. It uses the gain ratio, a variation of information gain (IG), usually utilized to overcome the bias of IG toward attributes with many values. The attribute with the maximum gain ratio is selected as the splitting attribute in order to build the tree. Gain ratio- (GR-) based DTs perform well compared to IG [31] in terms of accuracy. GR is defined as
$$\mathrm{GR}(A) = \frac{\mathrm{IG}(A)}{\mathrm{SplitInfo}(A)}, \qquad \mathrm{SplitInfo}(A) = -\sum_{i=1}^{v}\frac{|S_i|}{|S|}\log_2\frac{|S_i|}{|S|},$$
where the training set $S$ is partitioned into $v$ subsets $S_i$ by the candidate attribute $A$.
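The gain ratio calculation can be illustrated with the small Python sketch below; the labels and the candidate split are invented for demonstration.

```python
# Illustrative gain-ratio computation for one candidate split (C4.5-style).
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(labels, split_groups):
    # split_groups: label subsets produced by splitting on an attribute
    n = len(labels)
    ig = entropy(labels) - sum(len(g) / n * entropy(g) for g in split_groups)
    split_info = -sum(len(g) / n * np.log2(len(g) / n) for g in split_groups)
    return ig / split_info if split_info > 0 else 0.0

labels = ["defective"] * 4 + ["nondefective"] * 6
groups = [labels[:4], labels[4:]]     # a split that isolates the defective modules
print(gain_ratio(labels, groups))     # 1.0 for this perfectly separating split
```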

4.3. Random Forest

RF belongs to a set of techniques that construct an ensemble, termed a forest, of decision trees from randomized variants of tree induction techniques [32]. RF works by building a multitude of decision trees during the training period and outputting the class that is the mode of the classes output by the individual trees [33]. It is regarded as one of the foremost techniques and is extremely proficient for both classification and regression problems.

4.4. Multilayer Perceptron

MLPs are considered one of the most important classes of neural networks, consisting of an input layer, an output layer, and at least one hidden layer [34-36]. The principle behind the network is that, when data are presented at the input layer, the network neurons perform calculations in successive layers until an output value is obtained at each of the output neurons. A threshold (bias) node is also added to the input layer and contributes to the weighted sum. The resulting calculations are used to obtain the activity of the neurons by applying a sigmoid activation function. The output of neuron $j$ can be defined as
$$y_j = \varphi_j\left(\sum_{i=1}^{n} w_{ij} x_i + \theta_j\right),$$
where $\sum_{i=1}^{n} w_{ij} x_i$ is the linear combination of inputs $x_1, x_2, \ldots, x_n$, $\theta_j$ is the threshold, $w_{ij}$ is the connection weight between input $x_i$ and neuron $j$, $\varphi_j$ is the activation function of the $j$th neuron, and $y_j$ is the output. A sigmoid function is a common choice of activation function and can be described as
$$\varphi(v) = \frac{1}{1 + e^{-v}}.$$
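A minimal forward pass for one layer of such a network, using the sigmoid activation defined above, might look like the following sketch; the weights, thresholds, and inputs are illustrative rather than fitted values.

```python
# Minimal forward pass for one MLP layer with sigmoid activation.
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def layer_output(x, W, theta):
    # net_j = sum_i w_ij * x_i + theta_j ;  y_j = sigmoid(net_j)
    return sigmoid(x @ W + theta)

x = np.array([0.2, 0.7, 0.1])            # inputs x1..xn (e.g., software metrics)
W = np.array([[0.5, -0.3],
              [0.8,  0.2],
              [-0.6, 0.9]])              # w_ij: 3 inputs -> 2 hidden neurons
theta = np.array([0.1, -0.2])            # thresholds (bias terms)
print(layer_output(x, W, theta))         # activities of the two hidden neurons
```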

4.5. Radial Basis Function

RBF is also a neural network model, one that needs very little computational time for training a network [37, 38]. Like the MLP, it contains input, hidden, and output layers. The input variables in the input layer pass directly to the hidden layer without weights. The transfer functions of the hidden nodes are RBFs, whose parameters are learned during training. The process of fitting RBFs to data, for function approximation, is closely related to distance-weighted regression.

4.6. Hidden Markov Model

HMM is a probabilistic, or statistical, Markov model [39] in which the system being modeled is assumed to be a Markov process with unobservable, or hidden, states. It can be regarded as the simplest dynamic Bayesian network. It relies on splitting large data into the smallest sequences of data using a less sensitive pairwise sequence comparison method [40]. This model can be viewed as a generalization of a mixture model in which the hidden variables that control the mixture component to be selected for each observation are connected through a Markov process rather than being independent of each other. HMMs are particularly known for their use in reinforcement learning and temporal pattern recognition such as speech, handwriting, part-of-speech tagging, gesture recognition, partial discharges, musical score following, and bioinformatics [39, 41].
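To make the hidden-state mechanics concrete, the sketch below implements the standard forward algorithm for a two-state HMM with made-up probabilities; it is not the configuration used in the experiments of this study.

```python
# Sketch of the HMM forward algorithm with two hidden states (illustrative).
import numpy as np

pi = np.array([0.6, 0.4])                  # initial state distribution
A = np.array([[0.7, 0.3],                  # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],                  # emission probabilities per state
              [0.2, 0.8]])                 # (2 possible observation symbols)
obs = [0, 1, 1, 0]                         # an observation sequence

alpha = pi * B[:, obs[0]]                  # initialization
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]          # induction step
print(alpha.sum())                         # P(observation sequence | model)
```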

4.7. Credal Decision Tree

Credal decision trees (CDTs) are algorithms for designing classifiers grounded on imprecise probabilities and uncertainty measures [42]. During the construction of a CDT, to avoid producing an overly complex decision tree, a new criterion was introduced: stop splitting once the total uncertainty increases as a result of splitting the decision tree. The function utilized in the total uncertainty measure can be briefly expressed as [43, 44]
$$\mathrm{TU}(\mathcal{K}) = \mathrm{IG}(\mathcal{K}) + \mathrm{GG}(\mathcal{K}),$$
where $\mathcal{K}$ is a credal set on frame $X$, TU is the value of total uncertainty, IG represents a general function of nonspecificity on the resulting credal set, and GG is a general function of randomness for a credal set.

4.8. Average One Dependency Estimator

A1DE is a probabilistic technique used mostly for classification problems. It achieves highly accurate classification by averaging over a small space of alternative NB-like models that have weaker independence assumptions than NB. A1DE was designed to address the attribute-independence issue of the popular naive Bayes classifier. A1DE seeks to estimate the probability of each class $y$ given a specified set of features $x_1, x_2, \ldots, x_n$ [45]. This can be calculated as
$$\hat{P}(y \mid x_1, \ldots, x_n) \propto \sum_{\substack{i = 1 \\ F(x_i) \ge m}}^{n} \hat{P}(y, x_i) \prod_{j=1}^{n} \hat{P}(x_j \mid y, x_i),$$
where $\hat{P}(\cdot)$ represents an estimate of $P(\cdot)$, $F(x_i)$ is the frequency with which the attribute value $x_i$ appears in the training data, and $m$ is a user-specified minimum frequency with which a term must appear in order to be used in the outer summation. Currently, $m$ is usually set to 1.

4.9. Naïve Bayes

NB is a family of simple probabilistic techniques grounded in Bayes' theorem with independence assumptions among the predictors [46, 47]. The NB model is very simple to construct and can be applied to any dataset containing a large amount of data. The posterior probability $P(c \mid x)$ is obtained from $P(c)$, $P(x)$, and $P(x \mid c)$ as $P(c \mid x) = P(x \mid c)\,P(c)/P(x)$. The effect of the value of a predictor (x) on a given class (c) is assumed to be independent of the values of the other predictors.

4.10. K-Nearest Neighbor

KNN is a supervised learning technique in which the training feature attributes are used to forecast the class of new test data. KNN classifies new data based on the least distance from the new data point to its K nearest neighbors [48, 49]. The nearest distance can be found using different distance functions such as the Euclidean distance (ED), Manhattan distance (MD), and Minkowski distance (MkD). In this study, the ED is used, which can be formulated as
$$\mathrm{ED}(X, Y) = \sqrt{\sum_{i=1}^{n}(x_i - y_i)^2},$$
where $X = (x_1, x_2, \ldots, x_n)$ and $Y = (y_1, y_2, \ldots, y_n)$.
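A minimal KNN classifier based on the Euclidean distance above could be sketched as follows; the training points and the query are illustrative values only.

```python
# Simple KNN classifier using the Euclidean distance defined above.
import numpy as np
from collections import Counter

def euclidean(x, y):
    return np.sqrt(np.sum((x - y) ** 2))

def knn_predict(X_train, y_train, query, k=3):
    distances = [euclidean(x, query) for x in X_train]
    nearest = np.argsort(distances)[:k]                  # indices of k closest points
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

X_train = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0]])
y_train = ["nondefective", "nondefective", "defective", "defective"]
print(knn_predict(X_train, y_train, np.array([1.2, 1.9])))
```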

5. Experimental Results

5.1. Results and Analysis

This section provides an experimental study of SDP employing ten ML techniques, using the standard 10-fold cross-validation process for assessment [34]. This process splits the complete data into ten subgroups of equal size; one subgroup is used for testing, whereas the remaining subgroups are used for training. The process continues until each subgroup has been used once for testing.
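A hedged sketch of this 10-fold cross-validation protocol, using scikit-learn in place of the Weka classifiers actually employed, is given below; the data are synthetic and the classifier choice (RF) is only an example.

```python
# Sketch of 10-fold cross-validation on synthetic defect data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                 # 200 modules, 10 metrics
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

cv = KFold(n_splits=10, shuffle=True, random_state=1)   # ten equal-sized subgroups
scores = cross_val_score(RandomForestClassifier(random_state=1), X, y,
                         cv=cv, scoring="accuracy")
print(scores.mean())                           # average accuracy over the ten folds
```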

In this work, we considered seven different software defect datasets, namely AR1, AR3, CM1, JM1, KC2, KC3, and MC1. Using these datasets, we applied a software defect prediction setup in which the performance of all employed ML techniques is compared on the basis of correctly and incorrectly classified instances, true-positive and false-positive rates, MAE, RAE, RMSE, RRSE, recall, and accuracy. Table 5 presents the benchmark analysis of correctly classified instances (CCI), while Table 6 presents the benchmark analysis of incorrectly classified instances (ICI) for the ML techniques. In both tables, the first column lists the techniques employed, while the remaining columns give the details for each dataset with respect to CCI and ICI. Figure 5 shows the overall CCI and ICI performance of each employed ML technique.


S. No. | Technique | AR1 | AR3 | CM1 | JM1 | KC2 | KC3 | MC1
1 | SVM | 111 | 56 | 446 | 7842 | 432 | 159 | 9398
2 | J48 | 109 | 55 | 438 | 7668 | 425 | 154 | 9406
3 | RF | 109 | 58 | 444 | 7930 | 435 | 158 | 9417
4 | MLP | 109 | 59 | 436 | 7863 | 442 | 150 | 9411
5 | RBF | 111 | 55 | 446 | 7869 | 437 | 155 | 9398
6 | HMM | 112 | 55 | 449 | 1759 | 415 | 36 | 9398
7 | CDT | 112 | 55 | 445 | 7833 | 433 | 159 | 9407
8 | A1DE | 110 | 58 | 430 | 7816 | 435 | 155 | 9297
9 | NB | 103 | 57 | 425 | 7810 | 436 | 153 | 8913
10 | KNN | 109 | 54 | 422 | 7395 | 420 | 140 | 9418


S. No. | Technique | AR1 | AR3 | CM1 | JM1 | KC2 | KC3 | MC1
1 | SVM | 10 | 7 | 52 | 1751 | 90 | 35 | 68
2 | J48 | 12 | 8 | 60 | 1925 | 97 | 40 | 60
3 | RF | 12 | 5 | 54 | 1663 | 87 | 36 | 49
4 | MLP | 12 | 4 | 62 | 1730 | 80 | 44 | 55
5 | RBF | 10 | 8 | 52 | 1724 | 85 | 39 | 68
6 | HMM | 9 | 8 | 49 | 7834 | 107 | 158 | 68
7 | CDT | 9 | 8 | 53 | 1760 | 89 | 35 | 59
8 | A1DE | 11 | 5 | 68 | 1777 | 87 | 39 | 169
9 | NB | 18 | 6 | 73 | 1783 | 86 | 41 | 553
10 | KNN | 12 | 9 | 76 | 2198 | 102 | 54 | 48

Table 7 illustrates the true-positive rate (TPR) and false-positive rate (FPR) of each technique on the different datasets. TPR is the probability that positive modules are correctly classified, while FPR is the probability that negative modules are incorrectly classified as positive [5]. The first column of the table lists the datasets used, the second column indicates whether the row reports TPR or FPR for that dataset, and the remaining cells give the TPR or FPR achieved by each individual technique.


Dataset | Rate | SVM | J48 | RF | MLP | RBF | HMM | CDT | A1DE | NB | KNN
AR1 | TPR | 0.917 | 0.901 | 0.901 | 0.901 | 0.917 | 0.926 | 0.926 | 0.909 | 0.851 | 0.901
AR1 | FPR | 0.926 | 0.723 | 0.928 | 0.723 | 0.926 | 0.926 | 0.926 | 0.927 | 0.523 | 0.621
AR3 | TPR | 0.889 | 0.873 | 0.921 | 0.937 | 0.873 | 0.873 | 0.873 | 0.921 | 0.905 | 0.857
AR3 | FPR | 0.657 | 0.446 | 0.332 | 0.33 | 0.766 | 0.873 | 0.873 | 0.332 | 0.227 | 0.555
CM1 | TPR | 0.896 | 0.88 | 0.892 | 0.876 | 0.896 | 0.902 | 0.894 | 0.863 | 0.853 | 0.847
CM1 | FPR | 0.902 | 0.849 | 0.848 | 0.886 | 0.902 | 0.902 | 0.902 | 0.869 | 0.616 | 0.762
JM1 | TPR | 0.817 | 0.799 | 0.827 | 0.82 | 0.82 | 0.183 | 0.817 | 0.815 | 0.814 | 0.771
JM1 | FPR | 0.812 | 0.631 | 0.635 | 0.77 | 0.757 | 0.183 | 0.695 | 0.662 | 0.658 | 0.551
KC2 | TPR | 0.828 | 0.814 | 0.833 | 0.847 | 0.837 | 0.795 | 0.83 | 0.833 | 0.835 | 0.805
KC2 | FPR | 0.634 | 0.422 | 0.431 | 0.435 | 0.472 | 0.795 | 0.439 | 0.424 | 0.473 | 0.432
KC3 | TPR | 0.82 | 0.794 | 0.814 | 0.773 | 0.799 | 0.186 | 0.82 | 0.789 | 0.789 | 0.722
KC3 | FPR | 0.792 | 0.562 | 0.707 | 0.609 | 0.797 | 0.186 | 0.663 | 0.561 | 0.52 | 0.728
MC1 | TPR | 0.993 | 0.994 | 0.995 | 0.994 | 0.993 | 0.993 | 0.994 | 0.982 | 0.942 | 0.995
MC1 | FPR | 0.993 | 0.701 | 0.657 | 0.73 | 0.993 | 0.993 | 0.774 | 0.628 | 0.38 | 0.496

Tables 8 and 9 show the outcomes for the absolute errors, that is, MAE and RAE, respectively. In each table, the first column lists the techniques, while the remaining columns give the error rate for each dataset with respect to the employed technique. As shown in Table 8, when calculating MAE, SVM performs well in reducing the error rate compared to the other techniques: SVM produces better results on five datasets, while MLP and NB produce better results on two datasets each. When calculating RAE, SVM gives better results on four datasets, while A1DE and NB do so on one dataset each. Overall, in terms of absolute error, SVM outperforms the other techniques.


S. No. | Technique | AR1 | AR3 | CM1 | JM1 | KC2 | KC3 | MC1
1 | SVM | 0.826 | 0.1111 | 0.1044 | 0.1825 | 0.1724 | 0.1804 | 0.0072
2 | J48 | 0.127 | 0.1606 | 0.1757 | 0.2573 | 0.2374 | 0.2372 | 0.01
3 | RF | 0.127 | 0.1479 | 0.1631 | 0.2479 | 0.2205 | 0.257 | 0.0083
4 | MLP | 0.1037 | 0.1101 | 0.1568 | 0.2569 | 0.2259 | 0.2371 | 0.0072
5 | RBF | 0.1556 | 0.1812 | 0.1816 | 0.2773 | 0.2395 | 0.2995 | 0.025
6 | HMM | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5
7 | CDT | 0.1378 | 0.209 | 0.1745 | 0.2633 | 0.2296 | 0.2802 | 0.0112
8 | A1DE | 0.157 | 0.105 | 0.1886 | 0.2591 | 0.197 | 0.2708 | 0.0258
9 | NB | 0.1519 | 0.1085 | 0.1524 | 0.1863 | 0.1638 | 0.2162 | 0.059
10 | KNN | 0.1044 | 0.155 | 0.155 | 0.2319 | 0.2114 | 0.2809 | 0.0063

The lowest error value in each column indicates the best-performing technique.

S. No. | Technique | AR1 | AR3 | CM1 | JM1 | KC2 | KC3 | MC1
1 | SVM | 57.249 | 47.908 | 58.379 | 60.9388 | 52.7775 | 59.2313 | 49.9624
2 | J48 | 87.9769 | 69.248 | 98.2132 | 85.9021 | 72.675 | 77.8877 | 69.4547
3 | RF | 87.9611 | 63.4368 | 91.1945 | 82.7532 | 67.4929 | 84.3792 | 57.6946
4 | MLP | 71.8266 | 47.4611 | 87.6482 | 85.7559 | 69.135 | 77.8521 | 49.9963
5 | RBF | 107.8175 | 78.1281 | 100.974 | 92.5753 | 73.3103 | 98.3279 | 174.0763
6 | HMM | 346.3562 | 215.586 | 279.5455 | 166.9291 | 153.0549 | 164.1553 | 3477.5284
7 | CDT | 95.4752 | 90.1037 | 97.5893 | 87.9121 | 70.29 | 91.9819 | 78.072
8 | A1DE | 108.7465 | 43.7714 | 105.4305 | 86.5138 | 60.3 | 88.8947 | 179.4312
9 | NB | 105.249 | 46.7686 | 85.2218 | 62.2139 | 50.1471 | 70.9899 | 410.217
10 | KNN | 72.3151 | 66.846 | 86.6364 | 77.4311 | 64.7095 | 92.209 | 44.0253

The lowest error value in each column indicates the best-performing technique.

Tables 10 and 11 present the outcomes for the squared errors, that is, RMSE and RRSE, respectively. Here, the outcomes differ from those for the absolute errors. When calculating either RMSE or RRSE, RF produces better results for three datasets, namely JM1, KC3, and MC1, and RBF for two datasets, namely CM1 and KC2, whereas MLP and CDT each lead on only one dataset, namely AR3 and AR1, respectively. This analysis shows the best overall performance for RF compared to the other employed ML techniques.


S. No. | Technique | AR1 | AR3 | CM1 | JM1 | KC2 | KC3 | MC1
1 | SVM | 0.2875 | 0.3333 | 0.3231 | 0.4272 | 0.4152 | 0.4247 | 0.0848
2 | J48 | 0.2997 | 0.3424 | 0.3301 | 0.4053 | 0.3968 | 0.43 | 0.0779
3 | RF | 0.2856 | 0.2724 | 0.2951 | 0.3577 | 0.349 | 0.3667 | 0.0669
4 | MLP | 0.2882 | 0.256 | 0.3121 | 0.3706 | 0.3419 | 0.4414 | 0.0754
5 | RBF | 0.2664 | 0.2939 | 0.2919 | 0.3683 | 0.3413 | 0.3879 | 0.0837
6 | HMM | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5
7 | CDT | 0.2627 | 0.3377 | 0.3046 | 0.3752 | 0.3627 | 0.3818 | 0.0772
8 | A1DE | 0.2931 | 0.2925 | 0.3183 | 0.3754 | 0.3554 | 0.4034 | 0.1184
9 | NB | 0.3733 | 0.3176 | 0.38 | 0.4291 | 0.4019 | 0.4546 | 0.24
10 | KNN | 0.3122 | 0.3719 | 0.3905 | 0.475 | 0.4427 | 0.5246 | 0.0712

The lowest error value in each column indicates the best-performing technique.

S. No. | Technique | AR1 | AR3 | CM1 | JM1 | KC2 | KC3 | MC1
1 | SVM | 109.405 | 99.6674 | 108.4851 | 110.4067 | 102.8529 | 109.2121 | 100.3607
2 | J48 | 114.0496 | 102.3912 | 110.822 | 104.7491 | 98.2924 | 110.5573 | 92.2254
3 | RF | 108.6878 | 81.4368 | 99.0872 | 92.4278 | 86.4543 | 94.2824 | 79.2174
4 | MLP | 109.6657 | 76.5306 | 104.785 | 95.7816 | 84.6955 | 113.4827 | 89.3325
5 | RBF | 101.3862 | 87.8669 | 97.9878 | 95.1733 | 84.5435 | 99.7454 | 99.0846
6 | HMM | 190.2829 | 149.5011 | 167.8622 | 129.2111 | 123.8513 | 128.5606 | 592.0558
7 | CDT | 99.9593 | 100.9585 | 102.2522 | 96.9708 | 89.8344 | 98.1628 | 91.4393
8 | A1DE | 111.5568 | 87.4683 | 106.8639 | 97.0031 | 88.0436 | 103.724 | 140.1728
9 | NB | 142.0683 | 94.9565 | 127.572 | 110.8776 | 99.5502 | 116.8883 | 284.2012
10 | KNN | 118.7971 | 111.1859 | 131.084 | 122.7563 | 109.6576 | 134.8914 | 84.3477

The lowest error value in each column indicates the best-performing technique.

Table 12 shows the outcomes achieved using the recall assessment measure. In this table, the first row lists the datasets, while the first column lists the employed techniques; the remaining cells give the outcome of each technique on each dataset. The table shows that, on the AR1 dataset, HMM and CDT perform best, producing the same recall of 0.926. On the AR3 and KC2 datasets, MLP outperforms the other techniques with 0.937 and 0.847, respectively, while on CM1, HMM performs best with 0.902, and on KC3, SVM and CDT both achieve the highest recall of 0.82. Moreover, on the JM1 and MC1 datasets, the results of RF are better than those of the other techniques, at 0.827 and 0.995, respectively. Figure 6 presents the overall recall performance of the ML techniques across the datasets. It can be concluded that RF, MLP, HMM, and CDT perform better in terms of recall.


S. No. | Technique | AR1 | AR3 | CM1 | JM1 | KC2 | KC3 | MC1
1 | SVM | 0.917 | 0.889 | 0.896 | 0.817 | 0.828 | 0.82 | 0.993
2 | J48 | 0.901 | 0.873 | 0.88 | 0.799 | 0.814 | 0.794 | 0.994
3 | RF | 0.901 | 0.921 | 0.892 | 0.827 | 0.833 | 0.814 | 0.995
4 | MLP | 0.901 | 0.937 | 0.876 | 0.82 | 0.847 | 0.773 | 0.994
5 | RBF | 0.917 | 0.873 | 0.896 | 0.82 | 0.837 | 0.799 | 0.993
6 | HMM | 0.926 | 0.873 | 0.902 | 0.183 | 0.795 | 0.186 | 0.933
7 | CDT | 0.926 | 0.873 | 0.894 | 0.817 | 0.83 | 0.82 | 0.994
8 | A1DE | 0.909 | 0.921 | 0.863 | 0.815 | 0.833 | 0.799 | 0.982
9 | NB | 0.851 | 0.905 | 0.853 | 0.814 | 0.835 | 0.789 | 0.942
10 | KNN | 0.901 | 0.857 | 0.847 | 0.771 | 0.805 | 0.722 | 0.995

The highest recall in each column indicates the best-performing technique.

Table 13 shows the accuracy of each employed technique on the different datasets. In this table, the first column lists the datasets, whereas the first row lists the techniques; the remaining cells show the outcome (with rank in parentheses) of each technique on each dataset, so the best performance on each individual dataset can be identified by the rank value. The analysis shows that HMM produces better accuracy on three datasets, namely AR1, AR3, and CM1, with outcomes of 92.562%, 97.3016%, and 90.1606%, respectively. RF yields better accuracy on JM1 and is close to the best on MC1, at 82.6644% and 99.4824%, while SVM and MLP give the best accuracy for KC3 and KC2, at 81.9588% and 84.6743%, respectively. On the MC1 dataset, KNN outperforms the other techniques, achieving an accuracy of 99.49%. The combined performance of all techniques on the individual datasets is presented in Figure 7.


Dataset | SVM | J48 | RF | MLP | RBF | HMM | CDT | A1DE | NB | KNN
AR1 | 91.73 (2.5) | 90.08 (4.25) | 90.08 (4.25) | 90.08 (4.25) | 91.73 (2.5) | 92.56 (1.5) | 92.56 (1.5) | 90.90 (3) | 85.12 (5) | 90.08 (4.25)
AR3 | 88.88 (5) | 87.30 (6.33) | 92.06 (3.5) | 93.65 (2) | 87.30 (6.33) | 97.30 (1) | 87.30 (6.33) | 92.06 (3.5) | 90.47 (4) | 85.71 (7)
CM1 | 89.55 (2.5) | 87.95 (5) | 89.15 (4) | 87.55 (6) | 89.55 (2.5) | 90.16 (1) | 89.35 (3) | 86.34 (7) | 85.34 (8) | 84.39 (9)
JM1 | 81.74 (4) | 79.93 (8) | 82.66 (1) | 81.96 (3) | 82.02 (2) | 18.33 (10) | 81.65 (5) | 81.47 (6) | 81.41 (7) | 77.08 (9)
KC2 | 82.75 (6) | 81.41 (7) | 83.33 (4.5) | 84.67 (1) | 83.71 (2) | 79.50 (9) | 82.95 (5) | 83.33 (4.5) | 83.52 (3) | 80.45 (8)
KC3 | 81.95 (1.5) | 79.38 (4) | 81.44 (2) | 77.31 (7) | 79.89 (3.5) | 18.55 (9) | 81.95 (1.5) | 79.89 (3.5) | 78.86 (6) | 72.16 (8)
MC1 | 99.28 (5.33) | 99.36 (4) | 99.48 (1.5) | 99.41 (2) | 99.28 (5.33) | 99.28 (5.33) | 99.37 (3) | 98.21 (6) | 94.15 (7) | 99.49 (1.5)
Sum (accuracy) | 615.93 | 605.43 | 618.23 | 614.66 | 613.52 | 495.70 | 615.16 | 612.24 | 598.90 | 589.39
Average (accuracy) | 87.99 | 86.49 | 88.32 | 87.81 | 87.65 | 70.81 | 87.88 | 87.46 | 85.56 | 84.20
Sum (rank) | 26.83 | 38.58 | 20.75 | 25.25 | 24.16 | 36.83 | 25.33 | 33.50 | 40.00 | 46.75
Average (rank) | 3.83 | 5.51 | 2.96 | 3.61 | 3.45 | 5.26 | 3.62 | 4.79 | 5.71 | 6.68

Our outcomes suggest that no single ML technique performs well on every dataset. Different assessment measures were utilized to test the performance of each ML technique on every dataset. Table 14 also indicates the ranking of the techniques, where we can see that HMM produces the best results on 3 datasets, which is the maximum number of best results produced by any technique. However, on average, RF produces the best results (average rank = 2.96), and KNN produces the poorest results (average rank = 6.68). This is because RF builds a forest with several trees [33, 50]; in general, the more trees in the forest, the more robust the forest becomes. Likewise, in the RF classifier, a large number of trees in the forest leads to higher accuracy results [51, 52].


Dataset | MAE | RAE | RMSE | RRSE | Recall | Accuracy
AR1 | MLP | SVM | CDT | CDT | HMM, CDT | HMM, CDT
AR3 | A1DE, NB | A1DE | MLP | MLP | MLP | HMM
CM1 | SVM | SVM | RBF | RBF | HMM | HMM
JM1 | SVM | SVM | RF | RF | RF | RF
KC2 | NB, SVM | NB | RBF | RBF | MLP | MLP
KC3 | SVM | SVM | RF | RF | SVM | SVM
MC1 | KNN, SVM, MLP | KNN | RF | RF | RF | KNN, RF

To gain further insight into these numbers, Table 14 shows the overall decision for SDP utilizing ML techniques on the AR1, AR3, CM1, JM1, KC2, KC3, and MC1 datasets; it identifies which technique performs well on each individual dataset with respect to a specific assessment criterion.

A standard approach to benchmarking the performance of classifiers is to count the number of datasets on which an algorithm is the overall winner, also known as the Count of Wins test. We used 7 datasets, and no technique gave the best results on at least 7 datasets at α = 0.05, according to the critical values in Table 3 of [53]. Since the Count of Wins test is also considered a weak testing procedure, we provide the detailed matrix in Table 14. As can be observed for the very first dataset in Table 14, AR1, CDT outperforms the other techniques in terms of increasing accuracy and reducing squared error, while for reducing absolute errors, MLP and SVM also perform well. On the second and third datasets, AR3 and CM1, HMM outperforms the other techniques in terms of increasing accuracy; however, for reducing the error rate, MLP and A1DE produce better results on AR3, and SVM and RBF perform well on CM1. Moreover, on JM1 and MC1, RF and KNN produce better results in terms of increasing accuracy and decreasing the squared error rate, while for decreasing the absolute error, SVM and KNN perform well. Furthermore, on the KC2 dataset, MLP performs well in increasing accuracy, and on the KC3 dataset, SVM performs well. However, on KC2 and KC3, SVM, RF, RBF, and NB perform better in terms of reducing error rates.

All the employed techniques perform well to a certain extent, some in terms of reducing the error rate and some in terms of increasing accuracy, with the exception of J48. J48 is an unstable technique for data containing categorical variables with differing numbers of levels, as in the employed datasets, because the information gain in the decision tree is biased in favor of those metrics with more levels and is therefore fairly imprecise [54]. The performance of every individual technique differs on each individual dataset, which is due to the change of population in each dataset as well as differences in the value ranges and the number of attributes.

5.2. Friedman Two-Way Analysis of Variance by Ranks

To compare all applied ML techniques over numerous datasets, we applied the statistical procedure described by Sheskin [55] and García [56]. The Friedman two-way analysis of variance by ranks (Friedman test) [57] is adopted with rank-order data in a hypothesis-testing setting. A significant test result indicates that there is a significant difference between at least two of the techniques in the set of k techniques. The Friedman test checks whether the measured average ranks are significantly different from the mean rank (in our case, Rj = 4.54). The chi-square (χ²) distribution is used to approximate the Friedman test statistic [55]. Friedman's statistic is
$$\chi_F^2 = \frac{12N}{k(k+1)}\left[\sum_{j=1}^{k} R_j^2 - \frac{k(k+1)^2}{4}\right],$$
where $N$ is the number of datasets, $k$ is the number of techniques, and $R_j$ is the average rank of the $j$th technique.

To reject the null hypothesis, the computed value must be equal to or greater than the tabled critical chi-square value at the prespecified level of significance [55]. The number of degrees of freedom is df = k − 1; thus, df = 10 − 1 = 9. For df = 9 and α = 0.05, the tabled critical chi-square value is 16.92. Since the computed value, 63.218, is greater than the critical value 16.92, the alternative hypothesis is supported at α = 0.05. It can be concluded that there is a significant difference among at least two of the ten ML techniques. This result can be summarized as follows: χ²(9) = 63.218, p < 0.05.
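For readers who wish to reproduce this kind of test, the sketch below computes the Friedman statistic both with SciPy and directly from the formula above; the small score matrix is synthetic and does not reproduce Table 13.

```python
# Illustrative Friedman test on a synthetic accuracy matrix.
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

scores = np.array([                     # rows = datasets, columns = techniques
    [0.91, 0.90, 0.92, 0.89],
    [0.88, 0.87, 0.93, 0.86],
    [0.89, 0.88, 0.90, 0.84],
    [0.82, 0.80, 0.83, 0.77],
    [0.83, 0.81, 0.84, 0.80],
])
print(friedmanchisquare(*scores.T))     # chi-square statistic and p value

# Manual version of the formula above (rank 1 = best accuracy per dataset).
N, k = scores.shape
R = rankdata(-scores, axis=1).mean(axis=0)           # average rank per technique
chi2_f = 12 * N / (k * (k + 1)) * (np.sum(R ** 2) - k * (k + 1) ** 2 / 4)
print(chi2_f)
```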

Since the computed statistic exceeds the critical value, we can continue with post hoc tests to identify the significant pairwise differences among all the techniques. The results are shown in Table 15, where z is the corresponding statistic and the p value is given for each hypothesis. z is computed using the following equation:
$$z = \frac{R_i - R_j}{SE},$$
where $R_i$ is the average rank of the $i$th technique and the standard error is $SE = \sqrt{k(k+1)/(6N)}$. Columns 5 and 6 represent Nemenyi's and Holm's procedures. The second-to-last column lists the difference between the average ranks of the $i$th and $j$th techniques, while the last column refers to the critical difference (CD): the performance of two techniques is significantly different if the corresponding average ranks differ by at least the CD. The CD can be assessed using
$$\mathrm{CD} = q_{\alpha}\sqrt{\frac{k(k+1)}{6N}},$$
where the critical value $q_{\alpha}$ is given in Table 5(b) of Demsar 2006 [53]. The notations ">" and "<" indicate whether the difference of the average ranks (Ri − Rj) is greater or less than the value of CD, respectively; greater means a significant difference between the two means. Here, the value of CD is 0.692.


S. No. | Algo versus algo | z | p | NM (0.05) | Holm | Ri − Rj | CD
1 | RF vs. KNN | 14.8740 | 6.07E−08 | 0.001 | 0.0011 | 3.7143 | >
2 | RBF vs. KNN | 12.9232 | 2.04E−07 | 0.001 | 0.0011 | 3.2271 | >
3 | MLP vs. KNN | 12.2997 | 3.12E−07 | 0.001 | 0.0012 | 3.0714 | >
4 | CDT vs. KNN | 12.2539 | 3.22E−07 | 0.001 | 0.0012 | 3.0600 | >
5 | SVM vs. KNN | 11.3958 | 5.97E−07 | 0.001 | 0.0012 | 2.8457 | >
6 | RF vs. NB | 11.0125 | 7.97E−07 | 0.001 | 0.0013 | 2.7500 | >
7 | J48 vs. RF | 10.2001 | 1.52E−06 | 0.001 | 0.0013 | 2.5471 | >
8 | RF vs. HMM | 9.1990 | 3.57E−06 | 0.001 | 0.0013 | 2.2971 | >
9 | RBF vs. NB | 9.0617 | 4.04E−06 | 0.001 | 0.0014 | 2.2629 | >
10 | MLP vs. NB | 8.4381 | 7.21E−06 | 0.001 | 0.0014 | 2.1071 | >
11 | CDT vs. NB | 8.3924 | 7.53E−06 | 0.001 | 0.0014 | 2.0957 | >
12 | J48 vs. RBF | 8.2494 | 8.65E−06 | 0.001 | 0.0015 | 2.0600 | >
13 | J48 vs. MLP | 7.6258 | 1.62E−05 | 0.001 | 0.0015 | 1.9043 | >
14 | J48 vs. CDT | 7.5800 | 1.7E−05 | 0.001 | 0.0016 | 1.8929 | >
15 | A1DE vs. KNN | 7.5800 | 1.7E−05 | 0.001 | 0.0016 | 1.8929 | >
16 | SVM vs. NB | 7.5343 | 1.78E−05 | 0.001 | 0.0017 | 1.8814 | >
17 | RF vs. A1DE | 7.2940 | 2.3E−05 | 0.001 | 0.0017 | 1.8214 | >
18 | RBF vs. HMM | 7.2482 | 2.41E−05 | 0.001 | 0.0018 | 1.8100 | >
19 | SVM vs. J48 | 6.7219 | 4.32E−05 | 0.001 | 0.0019 | 1.6786 | >
20 | MLP vs. HMM | 6.6247 | 4.83E−05 | 0.001 | 0.0019 | 1.6543 | >
21 | HMM vs. CDT | 6.5789 | 5.09E−05 | 0.001 | 0.0020 | 1.6429 | >
22 | SVM vs. HMM | 5.7208 | 0.000143 | 0.001 | 0.0021 | 1.4286 | >
23 | HMM vs. KNN | 5.6750 | 0.000152 | 0.001 | 0.0022 | 1.4171 | >
24 | RBF vs. A1DE | 5.3432 | 0.000233 | 0.001 | 0.0023 | 1.3343 | >
25 | MLP vs. A1DE | 4.7196 | 0.000545 | 0.001 | 0.0024 | 1.1786 | >
26 | J48 vs. KNN | 4.6739 | 0.000581 | 0.001 | 0.0025 | 1.1671 | >
27 | CDT vs. A1DE | 4.6739 | 0.000581 | 0.001 | 0.0026 | 1.1671 | >
28 | NB vs. KNN | 3.8615 | 0.001919 | 0.001 | 0.0028 | 0.9643 | >
29 | SVM vs. A1DE | 3.8158 | 0.002058 | 0.001 | 0.0029 | 0.9529 | >
30 | A1DE vs. NB | 3.7185 | 0.002391 | 0.001 | 0.0031 | 0.9286 | >
31 | SVM vs. RF | 3.4782 | 0.003479 | 0.001 | 0.0033 | 0.8686 | >
32 | J48 vs. A1DE | 2.9062 | 0.00871 | 0.001 | 0.0036 | 0.7257 | >
33 | RF vs. CDT | 2.6201 | 0.013903 | 0.001 | 0.0038 | 0.6543 | <
34 | RF vs. MLP | 2.5743 | 0.014987 | 0.001 | 0.0042 | 0.6429 | <
35 | RF vs. RBF | 1.9508 | 0.041431 | 0.001 | 0.0045 | 0.4871 | <
36 | HMM vs. A1DE | 1.9050 | 0.044585 | 0.001 | 0.0050 | 0.4757 | <
37 | HMM vs. NB | 1.8135 | 0.051582 | 0.001 | 0.0056 | 0.4529 | <
38 | SVM vs. RBF | 1.5274 | 0.080499 | 0.001 | 0.0063 | 0.3814 | <
39 | J48 vs. HMM | 1.0011 | 0.171458 | 0.001 | 0.0071 | 0.2500 | <
40 | SVM vs. MLP | 0.9039 | 0.194805 | 0.001 | 0.0083 | 0.2257 | <
41 | SVM vs. CDT | 0.8581 | 0.206548 | 0.001 | 0.0100 | 0.2143 | <
42 | J48 vs. NB | 0.8124 | 0.218775 | 0.001 | 0.0125 | 0.2029 | <
43 | RBF vs. CDT | 0.6693 | 0.260042 | 0.001 | 0.0167 | 0.1671 | <
44 | MLP vs. RBF | 0.6236 | 0.274195 | 0.001 | 0.0250 | 0.1557 | <
45 | MLP vs. CDT | 0.0458 | 0.482248 | 0.001 | 0.0500 | 0.0114 | <

In Table 15, the family of hypotheses is ordered by p value. As can be seen, Nemenyi's procedure rejects the first 27 hypotheses, and Holm's procedure also rejects the next 4 hypotheses, since the corresponding p values are smaller than the adjusted α values of the Nemenyi and Holm procedures. Hence, we conclude that the performance of MLP and CDT is comparable, and that KNN has the lowest performance. Besides, the obtained value CD = 0.692 specifies that any difference between the average ranks of two techniques that is equal to or greater than 0.692 is significant. Concerning the pairwise comparisons in Table 15, the difference between the average ranks of two techniques is greater than CD = 0.692 for the first 32 pairs. Thus, it can be concluded that there is a significant difference between the average ranks of the first 32 pairs of techniques.
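As an aid to readers, the sketch below illustrates the standard pairwise z statistics and Nemenyi critical difference (Demsar 2006) computed from the average ranks reported in Table 13. Because the paper's own CD and z values follow its specific ranking and error computation, the numbers produced by this standard formulation need not match Table 15; the critical value q_alpha used here is the tabulated Nemenyi value for 10 classifiers at alpha = 0.05.

```python
# Sketch of post hoc pairwise comparison from average ranks (standard formulas).
import math
from itertools import combinations

avg_rank = {"SVM": 3.83, "J48": 5.51, "RF": 2.96, "MLP": 3.61, "RBF": 3.45,
            "HMM": 5.26, "CDT": 3.62, "A1DE": 4.79, "NB": 5.71, "KNN": 6.68}
N, k = 7, 10                                     # 7 datasets, 10 techniques
se = math.sqrt(k * (k + 1) / (6 * N))            # standard error of rank differences
q_alpha = 3.164                                  # Nemenyi critical value for k = 10
cd = q_alpha * se                                # critical difference
print("CD =", round(cd, 3))

for a, b in combinations(avg_rank, 2):
    diff = abs(avg_rank[a] - avg_rank[b])
    z = diff / se
    verdict = "significant" if diff >= cd else "not significant"
    print(f"{a} vs {b}: z = {z:.2f}, |Ri - Rj| = {diff:.2f}, {verdict}")
```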

6. Threats to Validity

This section discusses the factors that could threaten the validity of this research work.

6.1. Internal Validity

The exploration in this study is grounded in diverse, widely familiar evaluation measures that have been used in various past studies. Among these measures, several are used to assess the error rate, while others are used to assess the accuracy. The threat is that substituting new evaluation measures for the ones utilized here may deteriorate the measured accuracy. Furthermore, the machine learning techniques used in this study could be replaced with, or merged with, other existing techniques, which might produce better outcomes than the employed techniques.

6.2. External Validity

We conducted investigations on various datasets. A threat to validity may arise if the employed techniques are applied to other real data collected from different software development organizations through surveys, or if these datasets are replaced with other datasets, which may affect the outcomes by increasing the error rates. Likewise, the employed techniques might not be capable of producing improved predictions on several other SDP datasets. Hence, this study concentrated on the AR1, AR3, CM1, JM1, KC2, KC3, and MC1 datasets to measure the performance of the utilized techniques.

6.3. Construct Validity

Diverse ML techniques are benchmarked against each other on various datasets on the basis of several evaluation measures. The selection of techniques used in this study centers on their advantageous features over other techniques that researchers have exploited in the last decades. The threat is that, if several new techniques were applied, there is a possibility that these new techniques could outperform the employed ones. Furthermore, applying a different training and testing method, or changing the number of folds in the validation (increasing or decreasing it), might decrease the error rate. It is also possible that using newer evaluation measures would produce improved outcomes that beat the currently achieved outcomes.

7. Conclusions

Nowadays, SDP using ML techniques is regarded as one of the developing research areas. The identification of software defects at the early phases of the SDLC is a challenging task, yet it can contribute to the provision of high-quality software systems. This study focused on comparing ten well-known ML techniques that are broadly used for SDP, on seven extensively used openly available datasets. The ML techniques include SVM, J48, RF, MLP, RBF, HMM, CDT, A1DE, NB, and KNN. The performance is evaluated utilizing different measures such as MAE, RAE, RMSE, RRSE, recall, and accuracy.

The experimental results illustrate that NB and SVM produced the lowest MAE and RAE, respectively. However, experimental results using RMSE, RRSE, recall, and accuracy showed that, on average, RF performed better. Friedman's two-way analysis of variance by ranks was performed on the experimental results using accuracy. The Friedman test indicates that the results are significant at α = 0.05. We also performed a pairwise statistical test, which revealed that several pairs differ significantly. Moreover, a critical difference test showed that RF and KNN produced significantly different results at α = 0.05, where RF performed the best while KNN performed the poorest. The outcomes presented in this study may be used as a reference point for other studies and researchers, such that the outcomes of any proposed technique, model, or framework can be benchmarked and easily confirmed. For future work, class imbalance issues ought to be addressed for these datasets. Furthermore, to increase performance, ensemble learning and feature selection techniques could also be explored.

Data Availability

The datasets used in this research are taken from the UCI ML Repository, available at https://archive.ics.uci.edu/.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. A. O. Balogun, A. O. Bajeh, V. A. Orie, and A. W. Yusuf-asaju, “Software defect prediction using ensemble learning: an ANP based evaluation method,” Journal of Engineering Technology, vol. 3, no. 2, pp. 50–55, 2018. View at: Publisher Site | Google Scholar
  2. T. Menzies, Z. Milton, B. Turhan, B. Cukic, Y. Jiang, and A. Bener, “Defect prediction from static code features: current results, limitations, new approaches,” Automated Software Engineering, vol. 17, no. 4, pp. 375–407, 2010. View at: Publisher Site | Google Scholar
  3. T. Hall, S. Beecham, D. Bowes, D. Gray, and S. Counsell, “A systematic literature review on fault prediction performance in software engineering,” IEEE Transactions on Software Engineering, vol. 38, no. 6, pp. 1276–1304, 2012. View at: Publisher Site | Google Scholar
  4. Z. Li, X.-Y. Jing, and X. Zhu, “Progress on approaches to software defect prediction,” IET Software, vol. 12, no. 3, pp. 161–175, 2018. View at: Publisher Site | Google Scholar
  5. H. Wang, “Software defects classification prediction based on mining software repository,” 2014. View at: Google Scholar
  6. R. Malhotra and S. Kamal, “An empirical study to investigate oversampling methods for improving software defect prediction using imbalanced data,” Neurocomputing, vol. 343, pp. 120–140, 2019. View at: Publisher Site | Google Scholar
  7. D.-L. Miholca, G. Czibula, and I. G. Czibula, “A novel approach for software defect prediction through hybridizing gradual relational association rules with artificial neural networks,” Information Sciences, vol. 441, pp. 152–170, 2018. View at: Publisher Site | Google Scholar
  8. A. Alsaeedi and M. Z. Khan, “Software defect prediction using supervised machine learning and ensemble techniques: a comparative study,” Journal of Software Engineering and Applications, vol. 12, no. 05, pp. 85–100, 2019. View at: Publisher Site | Google Scholar
  9. A. Iqbal et al., “Performance analysis of machine learning techniques on software defect prediction using NASA datasets,” International Journal of Advanced Computer Science and Applications, vol. 10, no. 5, pp. 300–308, 2019. View at: Publisher Site | Google Scholar
  10. T. Menzies, A. Dekhtyar, J. Distefano, and J. Greenwald, “Problems with precision: a response to “comments on data mining static code attributes to learn defect predictors”,” IEEE Transactions on Software Engineering, vol. 33, no. 9, pp. 637–640, 2007. View at: Publisher Site | Google Scholar
  11. G. Czibula, Z. Marian, and I. G. Czibula, “Software defect prediction using relational association rule mining,” Information Sciences, vol. 264, pp. 260–278, 2014. View at: Publisher Site | Google Scholar
  12. C. Willmott and K. Matsuura, “Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance,” Climate Research, vol. 30, no. 1, pp. 79–82, 2005. View at: Publisher Site | Google Scholar
  13. H. Alasker, S. Alharkan, W. Alharkan, A. Zaki, and L. S. Riza, “Detection of kidney disease using various intelligent classifiers,” in Proceedings of the 2017 3rd International Conference on Science in Information Technology: “Theory and Application of IT for Education, Industry, and Society in Big Data Era”, Bandung, Indonesia, October 2017. View at: Google Scholar
  14. J. Li, P. He, J. Zhu, and M. R. Lyu, “Software defect prediction via convolutional neural network,” in Proceedings of the 2017 IEEE International Conference on Software Quality, Reliability & Security. QRS, pp. 318–328, Prague, Czech Republic, July 2017. View at: Google Scholar
  15. S. Jacob and G. Raju, “Software defect prediction in large space systems through hybrid feature selection and classification,” International Arab Journal of Information Technology, vol. 14, no. 2, pp. 208–214, 2017. View at: Google Scholar
  16. K. Bashir, T. Li, C. W. Yohannese, and Y. Mahama, “Enhancing software defect prediction using supervised-learning based framework,” in Proceedings of the 2017 12th International Conference on Intelligent Systems and Knowledge Engineering (ISKE), Nanjing, China, November 2017. View at: Google Scholar
  17. C. Manjula and L. Florence, “Deep neural network based hybrid approach for software defect prediction using software metrics,” Cluster Computing, vol. 22, no. S4, pp. 9847–9863, 2019. View at: Publisher Site | Google Scholar
  18. B. Khan, R. Naseem, M. Ali, M. Arshad, and N. Jan, “Machine learning approaches for liver disease diagnosing,” International Journal of Data Science and Advanced Analytics, vol. 1, no. 1, pp. 27–31, 2019. View at: Google Scholar
  19. S. A. Lauer, K. Sakrejda, E. L. Ray et al., “Prospective forecasts of annual dengue hemorrhagic fever incidence in Thailand, 2010-2014,” Proceedings of the National Academy of Sciences, vol. 115, no. 10, pp. E2175–E2182, 2018. View at: Publisher Site | Google Scholar
  20. K. Balasaravanan and M. Prakash, “Detection of dengue disease using artificial neural network based classification techniquetion,” International Journal of Engineering and Technology, vol. 7, no. 1, pp. 13–15, 2018. View at: Google Scholar
  21. S. Chae, S. Kwon, and D. Lee, “Predicting infectious disease using deep learning and big data,” International Journal of Environmental Research and Public Health, vol. 15, no. 8, 2018. View at: Publisher Site | Google Scholar
  22. C. G. Raji and S. S. Vinod Chandra, “Graft survival prediction in liver transplantation using artificial neural network models,” Journal of Computational Science, vol. 16, pp. 72–78, 2016. View at: Publisher Site | Google Scholar
  23. K. Morik, “Medicine: applications of machine learning,” Encyclopedia of Machine Learning and Data Mining, Springer, Berlin, Germany, 2017. View at: Publisher Site | Google Scholar
  24. B. Khan, R. Naseem, F. Muhammad, G. Abbas, and S. Kim, “An empirical evaluation of machine learning techniques for chronic kidney disease prophecy,” IEEE Access, vol. 8, pp. 55012–55022, 2020. View at: Publisher Site | Google Scholar
  25. C. Davi, A. Pastor, T. Oliveira et al., “Severe dengue prognosis using human genome data and machine learning,” IEEE Transactions on Biomedical Engineering, vol. 66, no. 10, pp. 2861–2868, 2019. View at: Publisher Site | Google Scholar
  26. M. M. Saritas, “Performance analysis of ANN and naive Bayes classification algorithm for data classification,” International Journal of Intelligent Systems and Applications in Engineering, vol. 7, no. 2, pp. 88–91, 2019. View at: Publisher Site | Google Scholar
  27. C. Wu, S.-C. Kao, C.-H. Shih, and M.-H. Kan, “Open data mining for Taiwan's dengue epidemic,” Acta Tropica, vol. 183, pp. 1–7, 2018. View at: Publisher Site | Google Scholar
  28. N. Nahar and F. Ara, “Liver disease prediction by using different decision tree techniques,” International Journal of Data Mining & Knowledge Management Process, vol. 8, no. 2, pp. 01–09, 2018. View at: Publisher Site | Google Scholar
  29. J. Chen, Y. Yang, K. Hu, Q. Xuan, Y. Liu, and C. Yang, “Multiview transfer learning for software defect prediction,” IEEE Access, vol. 7, pp. 8901–8916, 2019. View at: Publisher Site | Google Scholar
  30. S. Khan, R. Ullah, A. Khan, N. Wahab, M. Bilal, and M. Ahmed, “Analysis of dengue infection based on Raman spectroscopy and support vector machine (SVM),” Biomedical Optics Express, vol. 7, no. 6, p. 2249, 2016. View at: Publisher Site | Google Scholar
  31. S. Perveen, M. Shahbaz, K. Keshavjee, and A. Guergachi, “A systematic machine learning based approach for the diagnosis of non-alcoholic fatty liver disease risk and progression,” Scientific Reports, vol. 8, no. 1, pp. 1–12, 2018. View at: Publisher Site | Google Scholar
  32. A. N. Arbain and B. Y. P. Balakrishnan, “A comparison of data mining algorithms for liver disease prediction on imbalanced data,” International Journal of Data Science and Analytics, vol. 1, no. 1, 2019. View at: Google Scholar
  33. A. Gulia, R. Vohra, and P. Rani, “Liver patient classification using intelligent techniques,” International Journal of Computer Science and Information Technologies, vol. 5, no. 4, pp. 5110–5115, 2014. View at: Google Scholar
  34. K. A. Otunaiya and G. Muhammad, “Performance of datamining techniques in the prediction of chronic kidney disease,” Computer Science and Information Technology, vol. 7, no. 2, pp. 48–53, 2019. View at: Publisher Site | Google Scholar
  35. S. Chatterjee, N. Dey, F. Shi, A. S. Ashour, S. J. Fong, and S. Sen, “Clinical application of modified bag-of-features coupled with hybrid neural-based classifier in dengue fever classification using gene expression data,” Medical & Biological Engineering & Computing, vol. 56, no. 4, pp. 709–720, 2018. View at: Publisher Site | Google Scholar
  36. A. B. Nassif, D. Ho, and L. F. Capretz, “Towards an early software estimation using log-linear regression and a multilayer perceptron model,” Journal of Systems and Software, vol. 86, no. 1, pp. 144–160, 2013. View at: Publisher Site | Google Scholar
  37. E.-H. A. Rady and A. S. Anwar, “Prediction of kidney disease stages using data mining algorithms,” Informatics in Medicine Unlocked, vol. 15, Article ID 100178, 2019. View at: Publisher Site | Google Scholar
  38. K. Kesorn, P. Ongruk, J. Chompoosri et al., “Morbidity rate prediction of dengue hemorrhagic fever (DHF) using the support vector machine and the Aedes aegypti infection rate in similar climates and geographical areas,” PLoS One, vol. 10, no. 5, pp. 1–16, 2015. View at: Publisher Site | Google Scholar
  39. M. Stanke and S. Waack, “Gene prediction with a hidden Markov model and a new intron submodel,” Bioinformatics, vol. 19, Suppl. 2, pp. 215–225, 2003. View at: Publisher Site | Google Scholar
  40. L. S. Johnson, S. R. Eddy, and E. Portugaly, “Hidden Markov model speed heuristic and iterative HMM search procedure,” BMC Bioinformatics, vol. 11, 2010. View at: Publisher Site | Google Scholar
  41. S. Z. Yu and H. Kobayashi, “An efficient forward-backward algorithm for an explicit-duration hidden Markov model,” IEEE Signal Processing Letters, vol. 10, no. 1, pp. 11–14, 2003. View at: Google Scholar
  42. C. J. Mantas and J. Abellán, “Credal decision trees in noisy domains,” in Proceedings of the 22nd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. ESANN 2014, pp. 683–688, Bruges, Belgium, April 2014. View at: Google Scholar
  43. Q. He et al., “Novel entropy and rotation forest-based credal decision tree classifier for landslide susceptibility modeling,” Entropy, vol. 21, no. 2, 2019. View at: Publisher Site | Google Scholar
  44. J. Abellán and A. R. Masegosa, “An ensemble method using credal decision trees,” European Journal of Operational Research, vol. 205, no. 1, pp. 218–226, 2010. View at: Publisher Site | Google Scholar
  45. S. Picek, A. Heuser, and S. Guilley, “Template attack versus Bayes classifier,” Journal of Cryptographic Engineering, vol. 7, no. 4, pp. 343–351, 2017. View at: Publisher Site | Google Scholar
  46. A. Naik and L. Samant, “Correlation review of classification algorithm using data mining tool: WEKA, RapidMiner, Tanagra, Orange and KNIME,” Procedia Computer Science, vol. 85, pp. 662–668, 2016. View at: Publisher Site | Google Scholar
  47. T. R. Baitharu and S. K. Pani, “Analysis of data mining techniques for healthcare decision support system using liver disorder dataset,” Procedia Computer Science, vol. 85, pp. 862–870, 2016. View at: Publisher Site | Google Scholar
  48. U. R. Acharya, H. Fujita, S. Bhat et al., “Decision support system for fatty liver disease using GIST descriptors extracted from ultrasound images,” Information Fusion, vol. 29, pp. 32–39, 2016. View at: Publisher Site | Google Scholar
  49. E. K. Hashi, M. S. Uz Zaman, and M. R. Hasan, “An expert clinical decision support system to predict disease using classification techniques,” in Proceedings of the ECCE 2017-International Conference on Electrical, Computer and Communication Engineering, pp. 396–400, Cox’s Bazar, Bangladesh, February 2017. View at: Google Scholar
  50. T. M. Carvajal, K. M. Viacrusis, L. F. T. Hernandez, H. T. Ho, D. M. Amalin, and K. Watanabe, “Machine learning methods reveal the temporal pattern of dengue incidence using meteorological factors in metropolitan Manila, Philippines,” BMC Infectious Diseases, vol. 18, no. 1, pp. 1–15, 2018. View at: Publisher Site | Google Scholar
  51. L. Lau, Y. Kankanige, B. Rubinstein et al., “Machine-learning algorithms predict graft failure after liver transplantation,” Transplantation, vol. 101, no. 4, pp. e125–e132, 2017. View at: Publisher Site | Google Scholar
  52. H. Jin, S. Kim, and J. Kim, “Decision factors on effective liver patient data prediction,” International Journal of Bio-Science and Bio-Technology, vol. 6, no. 4, pp. 167–178, 2014. View at: Publisher Site | Google Scholar
  53. J. Demsar, “Statistical comparisons of classifiers over multiple data sets,” Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006. View at: Google Scholar
  54. H. Deng, G. Runger, and E. Tuv, “Bias of importance measures for multi-valued attributes and solutions,” Lecture Notes in Computer Science, vol. 6792, Springer, Berlin, Germany, 2011. View at: Publisher Site | Google Scholar
  55. D. J. Sheskin, Handbook of Parametric and Nonparametric Statistical Procedures, Chapman & Hall/CRC, Boca Raton, FL, USA, 2000. View at: Google Scholar
  56. S. García and F. Herrera, “An extension on “statistical comparisons of classifiers over multiple data sets” for all pairwise comparisons,” Journal of Machine Learning Research, vol. 9, pp. 2677–2694, 2008. View at: Google Scholar
  57. M. Friedman, “The use of ranks to avoid the assumption of normality implicit in the analysis of variance,” Journal of the American Statistical Association, vol. 32, no. 200, pp. 675–701, 1937. View at: Google Scholar

Copyright © 2021 Bilal Khan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
