Advanced Aspects of Computational Intelligence and Applications of Fuzzy Logic and Soft Computing (Special Issue)
Decision Tree Algorithm in the Performance Evaluation of School-Enterprise Cooperation for Higher Vocational Education
With the development of China’s economy and the internet, machine learning has become one of the most popular ways of working. This study aims to solve the problems of brain drain, inefficiency, and unfairness in the performance appraisal of school-enterprise cooperation. Decision tree technology is used to establish an assessment system. After the assessment index data are segmented, a fuzzy version of the C4.5 algorithm is used to calculate statistics over the different data segments. Finally, the different data types are used to classify the data and construct decision trees. On this basis, a school-enterprise cooperation performance appraisal system for higher vocational education is established and refined, optimizing performance appraisal in school-enterprise cooperation. The system improves work efficiency while reducing manual labor and alleviates the problems in the performance appraisal of cooperation at other colleges and universities. The results show that building decision trees can effectively solve the problems of duplicated indicators and complex calculations in performance appraisal. After optimization, the C4.5 algorithm raises the calculation accuracy to 95%, and the overall speed of tree construction increases by 5% compared with the algorithm before optimization. This breaks with the traditional performance appraisal system and makes the performance appraisal of school-enterprise cooperation more open, transparent, fair, and equal.
Driven by the matching needs of economic development and talent strategy, China is paying more and more attention to the cultivation of talents in various fields and has therefore increased its support for higher vocational colleges and their cooperative enterprises. The decision tree method of machine learning is used to build the model. This model can infer a student’s admission category from the scores of each subject in the first semester after admission, confirming that there is a strong correlation between student admission categories and course performance. Under fivefold cross-validation, the proposed model achieves a classification accuracy of 81.15%. More and more companies choose to cooperate with colleges and universities on development projects in order to attract talents and open markets. The cooperation between enterprises and higher vocational colleges is getting closer, but there are many loopholes in the performance appraisal of school-enterprise cooperation. On the one hand, the interests of the students facing the school must be safeguarded; on the other hand, enterprises do not fully assume their social responsibility. In addition, although cooperation offers the advantage of mutual progress, many shortcomings are also exposed in the economic field and in social development: when a problem arises, there is no negotiation mechanism and no stipulation of the respective responsibilities.
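The fivefold cross-validation mentioned above can be sketched in plain Python. Everything below is illustrative only: the synthetic dataset, the majority-class classifier standing in for the actual decision tree model, and all function names are assumptions, not the authors’ implementation.

```python
import random

def five_fold_indices(n, seed=0):
    """Shuffle sample indices and split them into 5 roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::5] for i in range(5)]

def cross_validate(samples, labels, train_fn, predict_fn):
    """Return the mean accuracy over the 5 held-out folds."""
    folds = five_fold_indices(len(samples))
    accs = []
    for test_idx in folds:
        test_set = set(test_idx)
        train_X = [samples[j] for j in range(len(samples)) if j not in test_set]
        train_y = [labels[j] for j in range(len(samples)) if j not in test_set]
        model = train_fn(train_X, train_y)
        correct = sum(predict_fn(model, samples[j]) == labels[j] for j in test_idx)
        accs.append(correct / len(test_idx))
    return sum(accs) / len(accs)

# A trivial majority-class "model" stands in for the decision tree here.
def train_majority(X, y):
    return max(set(y), key=y.count)

def predict_majority(model, x):
    return model

X = [[i] for i in range(100)]
y = [0] * 70 + [1] * 30          # hypothetical class balance
acc = cross_validate(X, y, train_majority, predict_majority)  # -> 0.7
```

With a 70/30 class balance the majority class wins every training split, so the mean held-out accuracy equals the majority-class proportion, 0.7; swapping in a real tree learner only changes `train_fn`/`predict_fn`.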
There are many types of decision tree algorithms, such as ID3 and C4.5. The ID3 algorithm is not accurate enough on these data, which leads to experimental results that are not particularly ideal. Therefore, the more accurate C4.5 algorithm is used to select the evaluation indicators for the performance system of school-enterprise cooperation. The C4.5 algorithm is an improvement on the ID3 algorithm: it performs data processing and analysis on the basis of ID3, retains the advantages of ID3, and makes great progress in the handling of predictive variables. The algorithm can solve the classification problem on the decision tree dataset. On this basis, a relatively complete performance system is established: weighting is used for index selection, the algorithmic mechanism is used to classify the indicators, and finally the fuzzified data are used to construct a performance appraisal decision tree.
Given these problems and deficiencies, a system suitable for the performance evaluation of the school-enterprise cooperation platform has been researched and developed based on the decision tree algorithm. The performance appraisal system breaks through the previous manual appraisal and turns to data-based development, using the characteristics of the big data era to build an intelligent, data-driven appraisal system. Within this system, the appraisal scheme is established and refined in combination with machine learning algorithms, and the decision tree is used to select and optimize the performance evaluation of school-enterprise cooperation. This helps the school-enterprise cooperation platform establish a relatively complete performance appraisal system. Meanwhile, the system also resolves the division of responsibilities for school-enterprise cooperation in some higher vocational education settings and clarifies the goals of talent training, local economic development, and improvement of scientific research in cooperative projects.
2. The Performance Evaluation Method of School-Enterprise Cooperation Based on Decision Tree Algorithm
In this section, the advantages of school-enterprise cooperation are first explained, and the concept of school-enterprise performance evaluation is briefly discussed. The section then covers the decision tree algorithm and, in particular, the C4.5 algorithm; it also elaborates on the decision tree generation strategy and the pruning of decision trees. Lastly, it explains decision tree data verification.
2.1. Advantages of School-Enterprise Cooperation
School-enterprise cooperation can cultivate talents in many ways. So far, school-enterprise cooperation can improve the economic circulation of enterprises and students’ understanding of knowledge while training students’ practical abilities.
School-enterprise cooperation can overcome various difficulties and promote the progress of scientific and technological achievements. The knowledge learned can be applied in the field of production to improve production technology, and scientific and technological means can be used to reduce labor expenditure and avoid excessive reliance on cheap labor. China’s education department vigorously advocates improving students’ hands-on ability. This not only broadens students’ horizons but also allows them to further experience the fun of hands-on work and then think about the strengths and weaknesses of the actual operations.
2.2. The Concept of School-Enterprise Cooperation Performance Evaluation
The performance evaluation system is one of the systems that organizations use to measure value standards, and it is also the most important value system for such measurement. It is mainly used to evaluate the characteristics of objects appropriately and quantitatively. The performance evaluation system includes three parts: the institutional system, the evaluation organization system, and the performance evaluation index system. The school-enterprise cooperation performance evaluation system is a small branch of overall performance evaluation, which provides corresponding services for school-enterprise cooperation activities. The corresponding cooperation projects are classified. Human resources are the most important goal in the entire project cooperation: in addition to the transmission and training of talents, there are also related management services, product development, and technological innovations, but these goals are all built on talent training, so human resources remain the most important evaluation goal. Second, responsibilities are divided among the related cooperation projects. The overall effectiveness and value are evaluated alongside the relevant school-enterprise cooperation projects. The school-enterprise cooperation of higher vocational colleges is mostly based on talent transfer, related projects, and project implementation. The effectiveness of project cooperation is determined by identifying the person in charge of the project, and a group is set up to be responsible for the related business of school-enterprise cooperation. Performance appraisal is one of the methods of the management team. Based on these features, the general scheme for building a school-enterprise cooperation platform is shown in Figure 1.
Figure 1 shows how the school-enterprise cooperation process is carried out and how to communicate core information between various platforms. The implementation, planning, and performance appraisal of school-enterprise cooperation projects all require the cooperation of various departments. Therefore, the establishment of the school-enterprise cooperation department has largely solved these problems.
2.2.1. The Hierarchical Structure Model of School-Enterprise Cooperation Performance Evaluation
The performance evaluation indicators of school-enterprise cooperation are always based on people, finances, and materials. Therefore, these three indicators are used in the performance evaluation system, and a complete and reasonable performance appraisal system is established. The design of the index system is the core of the evaluation of higher vocational colleges. At present, China’s research on the evaluation system of higher vocational colleges is concentrated on theories and government policies, with few empirical and model studies. This study puts forward 15 key points for constructing the quality evaluation index system of higher vocational colleges. In addition, these three indicators can be subdivided into more indicators to establish the performance appraisal system. These refined indicators are used to establish the school-enterprise cooperation performance appraisal system, as shown in Figure 2.
Figure 2 divides the relevant performance appraisal data and influencing factors and uses data processing technology to quantify the data and assign weights. This facilitates the later use of decision tree algorithms, supports further research and accurate decision trees, and makes performance appraisal fairer and more effective.
2.2.2. Optimization and Selection of Performance Appraisal Indicators for School-Enterprise Cooperation
The target parameters are selected. Under each first-level index, the best-matching second-level indices are selected to construct the decision tree. We screen and count indicators according to key performance indicators (KPI) and key success factors (KSF); the factors influence and restrict one another. Random forest, C4.5, C5.0, and balanced and unbalanced datasets have been used for landslide susceptibility analysis, as shown in Figure 3.
Figure 3 shows that vocational colleges and enterprises cooperate through cooperation platforms and related projects. In these platforms and projects, the factors that affect the establishment of the performance appraisal system are broadly the same for higher vocational colleges and enterprises, with only subtle differences. Therefore, redundant data are removed by the decision tree algorithm for filtering and simplification. The influencing factors are optimized, and the weights are determined through deep school-enterprise integration. The school-enterprise cooperation performance evaluation system is established, a school-enterprise in-depth integration ecosystem model is constructed, and a basic strategy for local undergraduate colleges to build a school-enterprise integration ecosystem is proposed in combination with the comprehensive index method.
2.3. Decision Tree Algorithm
In this section, the decision tree algorithm is explained. First, the theoretical knowledge of the decision tree algorithm is elaborated; then, the features of the decision tree algorithm are described.
2.3.1. Theoretical Knowledge of Decision Tree Algorithm
The structure diagram of a decision tree is like a flowchart, consisting of directed edges and related nodes. A decision tree can produce only one output; for complex outputs, different decision trees need to be established independently. It is a data processing technique often used in data mining to analyze and predict data. The decision tree algorithm is an inductive learning algorithm based on training sample datasets and is often used in classifiers and predictive models. Based on an analysis of the status quo of China’s higher vocational education, this study identifies the main problems in current teaching quality management, motivates the research significance of the teaching monitoring and evaluation system, and further explains the role of data mining technology in the teaching quality evaluation system of higher vocational colleges. The corresponding functions are introduced and explained in the designed teaching quality monitoring and evaluation system. The decision tree is a top-down recursive algorithm with a tree structure, composed of a certain number of nodes and branches. A general decision tree is shown in Figure 4.
In Figure 4, the design of the tree is discussed. The three types of nodes have different meanings. The root node is the starting point of the entire process; all data are stored in the root node. An internal node represents a test on some characteristic indicator; each internal node has its own characteristics and a collection of related attributes. A leaf node indicates a possible result after processing; each possible outcome is classified in a leaf node. Each path from the root to a leaf has its own corresponding rule. Once the meanings of the three node types (root node, internal node, and leaf node) are fixed, the conversion rules of the decision tree model are easy to classify. The performance evaluation model is designed through the corresponding analysis methods. This model transforms qualitative questions into quantitative ones and then provides a comprehensive, objective, scientific, and accurate evaluation method for the function and benefit of school-enterprise cooperation. Teaching evaluation in higher vocational colleges occupies a core position in teaching management. The decision tree is continuously branched, as shown in Figure 5. Finally, useful information is selected and entered into the assessment system according to the node.
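The three node roles described above can be sketched as a minimal tree structure. The `Node` class and the example indicators (“funding level,” “talent training score”) are hypothetical illustrations for this sketch, not the system’s actual schema.

```python
class Node:
    """One node of a decision tree: internal nodes test an attribute,
    leaf nodes carry a predicted class label."""
    def __init__(self, attribute=None, label=None):
        self.attribute = attribute   # attribute key tested at an internal node
        self.label = label           # class label at a leaf node
        self.children = {}           # attribute value -> child Node

    def is_leaf(self):
        return self.label is not None

def classify(node, sample):
    """Follow one root-to-leaf path according to the sample's attribute values."""
    while not node.is_leaf():
        node = node.children[sample[node.attribute]]
    return node.label

# Hypothetical two-level tree: attribute 0 ("funding level") is the root test,
# and attribute 1 ("talent training score") is tested on the "low" branch.
root = Node(attribute=0)
root.children["high"] = Node(label="excellent")
mid = Node(attribute=1)
mid.children["good"] = Node(label="qualified")
mid.children["poor"] = Node(label="unqualified")
root.children["low"] = mid

result = classify(root, {0: "low", 1: "good"})  # -> "qualified"
```

Each call to `classify` traces exactly one root-to-leaf path, matching the one-output property noted above.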
In Figure 5, the overall goal is the assessment and design of the school-enterprise cooperation performance system. Therefore, performance appraisal is the root node, using the key to performance appraisal, combined with machine learning and various indicators for classification. The system will continue to work in cycles until the end of the work index screening. After the system work is over, a decision tree will be generated to evaluate the target.
2.3.2. Features of Decision Tree Algorithm
Related algorithms for decision trees include ID3, C4.5, and CART. In addition, the decision tree has its unique characteristics in data processing:
(1) The classification rules and related systems corresponding to the decision tree algorithm are easy to grasp and understand; in the end, they can be expressed as if-then rules.
(2) Compared with other algorithms, it requires a small amount of calculation yet is highly efficient and can solve data problems in a short time.
(3) Data may be discrete or continuous. Decision trees are better suited to discrete data; continuous data need to be discretized before processing.
(4) The final result of the decision tree algorithm is a tree diagram with definite rules and a complete structure. The level of a node is directly proportional to its effect: the higher the position of a node, the greater its impact on the conclusion.
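Point (1) above, expressing a decision tree as if-then rules, can be illustrated by walking every root-to-leaf path. The nested-dict tree encoding and the indicator names are assumptions made for this sketch.

```python
def extract_rules(tree, path=()):
    """Recursively collect one if-then rule per root-to-leaf path.
    `tree` is either a class label (leaf) or a one-key dict:
    {attribute_name: {value: subtree, ...}}."""
    if not isinstance(tree, dict):
        cond = " AND ".join(f"{a} = {v}" for a, v in path)
        return [f"IF {cond} THEN class = {tree}"]
    (attr, branches), = tree.items()
    rules = []
    for value, subtree in branches.items():
        rules.extend(extract_rules(subtree, path + ((attr, value),)))
    return rules

# Hypothetical appraisal tree with made-up indicator names.
tree = {"funding": {
    "high": "excellent",
    "low": {"training": {"good": "qualified", "poor": "unqualified"}},
}}
for rule in extract_rules(tree):
    print(rule)
```

For this toy tree the walk yields three rules, one per leaf, e.g. `IF funding = high THEN class = excellent`.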
2.4. C4.5 Algorithm under Decision Tree
The C4.5 algorithm is used for the research discussion. The C4.5 decision tree further optimizes and improves the ID3 decision tree: the partition standard of a node is replaced by the information gain rate, and the algorithm can handle continuous values, missing values, and pruning operations (Algorithm 1).
The indicators are classified based on the aggregated data. The training set D is given, and the key indicator W is used for grouping. The information gain is shown in equation (1) as follows:

$\mathrm{Gain}(W) = \mathrm{Info}(D) - \mathrm{Info}_W(D)$ (1)

Information gain is defined as the difference between the original information requirement and the new requirement after partitioning. The expected information is shown in equation (2) as follows:

$\mathrm{Info}(D) = -\sum_{i=1}^{m} p_i \log_2 p_i$ (2)

$p_i$ represents the probability of occurrence of the corresponding value of the random variable. Suppose that the elements in D are divided according to the attribute W, so that W partitions D into v different classes. After the division, the information entropy of the classification is shown in equation (3) as follows:

$\mathrm{Info}_W(D) = \sum_{j=1}^{v} \frac{|D_j|}{|D|}\,\mathrm{Info}(D_j)$ (3)

Through the classification of key indicators, the information gain rate of an indicator further standardizes and unifies the gain by the “split information.” $\mathrm{SplitInfo}_W(D)$ has the same form as $\mathrm{Info}(D)$; its definition is shown in equation (4) as follows:

$\mathrm{SplitInfo}_W(D) = -\sum_{j=1}^{v} \frac{|D_j|}{|D|}\log_2\frac{|D_j|}{|D|}$ (4)

where j = 1, 2, 3, …, v, and $D_j$ represents the j-th partition generated by dividing the training dataset D on attribute W during testing; $|D_j|$ and $|D|$ indicate the numbers of samples included. The information gain rate is equal to the information gain divided by the split information. Its definition is shown in equation (5) as follows:

$\mathrm{GainRatio}(W) = \dfrac{\mathrm{Gain}(W)}{\mathrm{SplitInfo}_W(D)}$ (5)

Gain(W) refers to the information gain of W for the fixed training dataset D. It represents the difference between the empirical entropy H(D) and the empirical conditional entropy H(D|W), as shown in equation (6) as follows:

$\mathrm{Gain}(W) = H(D) - H(D \mid W)$ (6)

H(D) is the entropy of the random variable D. If D is a discrete random variable, then its probability distribution is shown in equation (7) as follows:

$P(D = d_i) = p_i, \quad i = 1, 2, \ldots, n$ (7)

The entropy of the random variable D is defined in equation (8) as follows:

$H(D) = -\sum_{i=1}^{n} p_i \log_2 p_i$ (8)

The conditional entropy H(D|A) shows the uncertainty of D under the condition of the random variable A. Given the random variable A, the conditional entropy of D is shown in equation (9) as follows:

$H(D \mid A) = \sum_{i=1}^{n} P(A = a_i)\, H(D \mid A = a_i)$ (9)

The branch with the largest gain rate is selected as the splitting attribute. When an attribute has many values, the attribute entropy (split information) increases along with the information gain, so the calculated information gain rate does not become very large. Therefore, to a certain extent, this avoids the over-selection of nodes with many attribute values.
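The gain-rate selection described by equations (1)–(5) can be sketched in a few lines of Python. The toy attribute values and labels below are invented for illustration; the functions follow the standard C4.5 definitions rather than any implementation detail given in this study.

```python
from math import log2
from collections import Counter

def info(labels):
    """Expected information (entropy): Info(D) = -sum p_i * log2(p_i)."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def info_after_split(values, labels):
    """Info_W(D): weighted entropy after partitioning D by attribute W."""
    n = len(labels)
    groups = {}
    for v, label in zip(values, labels):
        groups.setdefault(v, []).append(label)
    return sum(len(g) / n * info(g) for g in groups.values())

def split_info(values):
    """SplitInfo_W(D): the same entropy formula applied to attribute values."""
    return info(values)

def gain_ratio(values, labels):
    """C4.5 criterion: information gain normalized by split information."""
    gain = info(labels) - info_after_split(values, labels)
    si = split_info(values)
    return gain / si if si > 0 else 0.0

# Toy appraisal data: attribute W = funding level, class = appraisal result.
W = ["high", "high", "low", "low", "low", "high"]
y = ["pass", "pass", "fail", "fail", "pass", "pass"]
ratio = gain_ratio(W, y)  # about 0.459 for this toy split
```

At a real node, `gain_ratio` would be evaluated for every candidate attribute and the largest value would choose the split, exactly as the selection rule above states.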
2.5. Decision Tree Generation Strategy
Decision trees are widely used in various research fields. Here, the selection of relevant performance evaluation indicators and the quantification of fixed indicators are determined by the choice of nodes and the execution of decision-making standards. All starting points derive and grow from the root node. The careful and accurate division of internal nodes and leaf nodes ensures that the decision tree is useful; otherwise, excessive partitioning will cause data redundancy and other systemic problems. The performance appraisal index is graded to determine the label corresponding to each node, and the node content and the range of appraisal values are set. The label and number of each node area are determined by the result according to the threshold and the size of the assessment value.
Each area is assigned a related threshold and value range and is identified by a corresponding label. In general, the training sample dataset is analyzed with a certain degree of comprehensiveness and history according to actual needs.
2.6. Pruning of Decision Trees
These methods are used to build a decision tree for the quantitative evaluation of performance. However, the resulting tree contains many unrelated branches and leaf nodes, so a reasonable pruning process is needed. According to the pruning algorithm of the decision tree, a training dataset of S elements is obtained, and unsuitable quantitative performance appraisal data are not processed.
The decision tree that basically constitutes the performance measurement is generated through the tree-building procedure. Meanwhile, the newly generated decision tree contains many unrelated branches and useless nodes, so it needs to be optimized by removing them. Based on these data, inappropriate data are deleted. The calculation is improved on the basis of C4.5: when the retention index is ≤ 0, the branch is screened out. In fact, the pruning of the decision tree is the process of checking and correcting the tree generated in the previous stage. It mainly verifies, on a new sample dataset (called the test dataset), the preliminary rules generated while the tree was grown. Pruning cuts off the branches that would otherwise reduce the accuracy of prediction.
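The post-pruning idea described above, cutting a subtree when a validation (test) dataset shows the simpler tree is at least as accurate, can be sketched as reduced-error pruning. The nested-dict tree encoding, the majority tie-breaking, and the toy validation set are all assumptions of this sketch, not this study’s exact retention-index rule.

```python
from collections import Counter

def predict(tree, sample):
    """Descend the nested-dict tree until a leaf (class label) is reached."""
    while isinstance(tree, dict):
        (attr, branches), = tree.items()
        tree = branches[sample[attr]]
    return tree

def accuracy(tree, data):
    return sum(predict(tree, x) == y for x, y in data) / len(data)

def leaf_labels(tree):
    """All class labels reachable under a subtree."""
    if not isinstance(tree, dict):
        return [tree]
    (attr, branches), = tree.items()
    out = []
    for sub in branches.values():
        out.extend(leaf_labels(sub))
    return out

def prune(tree, validation):
    """Bottom-up reduced-error pruning: replace a subtree with its
    majority-class leaf when that does not hurt validation accuracy."""
    if not isinstance(tree, dict):
        return tree
    (attr, branches), = tree.items()
    pruned = {attr: {v: prune(sub, [(x, y) for x, y in validation if x[attr] == v])
                     for v, sub in branches.items()}}
    if not validation:          # no evidence at this node: keep the subtree
        return pruned
    majority = Counter(leaf_labels(pruned)).most_common(1)[0][0]
    leaf_acc = sum(y == majority for _, y in validation) / len(validation)
    if leaf_acc >= accuracy(pruned, validation):
        return majority
    return pruned

# Hypothetical tree and validation data with invented indicator names.
tree = {"funding": {"high": "pass",
                    "low": {"training": {"good": "pass", "poor": "fail"}}}}
validation = [({"funding": "high", "training": "good"}, "pass"),
              ({"funding": "low", "training": "good"}, "pass"),
              ({"funding": "low", "training": "poor"}, "pass")]
pruned = prune(tree, validation)
```

Because every validation label here is "pass", the useless "training" split (and then the whole tree) collapses to a single leaf, which is exactly the branch-removal behavior the text describes.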
2.7. Decision Tree Data Verification
According to the performance data collection and the determination of relevant assessment indicators, after the relevant data indicators are analyzed, the assessment data in Table 1 are obtained.
Step 1: the information entropy of the three levels of assessment indicators A, B, and C is calculated.
Step 2: the information gain rate of each level is calculated.
Step 3: the results are analyzed.
The index system is established according to the calculation equation and the data. The weight table of each evaluation index is shown in Table 2.
In Table 2, the weight values of the assessment indicators are assigned through data collection and related investigations. The decision tree branch nodes are selected through the performance appraisal index weight value, and planning and building of the decision tree model, as shown in Figure 6.
In Figure 6, the quantification is graded by performance appraisal indicators. The overall data after being graded are used to build a model after adjustment to reduce repeated assessment indicators. This greatly improves calculation efficiency, accuracy, and practicability.
3. Analysis of Results
This section analyzes the application results of the decision tree algorithm in the performance evaluation of school-enterprise cooperation and elaborates on the running results of the optimized C4.5 algorithm.
3.1. Analysis of the Application Results of the Decision Tree Algorithm in the Performance Evaluation of School-Enterprise Cooperation
Some problems still exist, as revealed by the analysis of the school-enterprise cooperation performance appraisal system. First, the reference value of the talent training indicators and their level optimization is not comprehensive, and the success of talent training cannot be measured by data alone. The economic development coefficient is not sound: local economic development may be affected by other factors, not just the economic development driven by school-enterprise cooperation, and there are no specific data to prove the impact of local economic development on the performance appraisal indicators of school-enterprise cooperation. These data are used to derive school-enterprise cooperation performance indicators through data mining technology. For enterprises, the influencing factors are not single, and the importance of the various factors differs. The index advantages are quantified, and the objectivity and fairness of the performance evaluation system are realized once the most important indices among the influencing factors are enabled. This not only allows effective assessment but also improves the quality of school-enterprise cooperation, optimizes cooperation projects, and more effectively drives positive economic development and talent training.
After data calculation, a decision tree is generated to standardize and unify the performance appraisal. The index class division is obvious, and the calculation process is quite smooth. This not only reduces labor expenditures but also improves the efficiency and accuracy of the assessment. However, due to the low accuracy of the algorithm in the calculation process, some of the evaluation indicators in the process of generating the decision tree are duplicated, resulting in a large overall data flow. The improved C4.5 algorithm is used to improve these shortcomings, as shown in Figures 7 and 8:
Figure 7 shows that when the processed sample data are the same, the C4.5 algorithm requires less time to process the data. The fifth group shows that, for the same sample size, the overall estimated time of the optimized C4.5 algorithm is less than that of the other algorithms, and the more data are processed, the greater the advantage, which is consistent with the estimates of this study. The time spent processing the data, and the achievable accuracy, increase as the data continue to change. This is conducive to the generation of decision trees.
In Figures 7 and 8, when the overall number of evaluation sample indicators increases, the time it takes for the algorithm to process data also increases. Therefore, the data need to be simplified to increase the speed and accuracy of tree creation. The optimization algorithm reduces the repetition rate of assessment indicators. This not only saves part of the calculation time but also simplifies the branching of the decision tree. The whole operating system is smoother.
3.2. The Running Results of the Optimized C4.5 Algorithm
The optimized C4.5 algorithm calculation and simulated performance appraisal data results are more accurate, and the overall algorithm speed is significantly faster. With the same sample size, its test accuracy is also higher than the previous algorithm, as shown in Figure 9:
In Figure 9, the optimized C4.5 algorithm processes data more accurately. In terms of processing speed, although the optimization itself takes time, building a decision tree with the processed indicators is relatively faster. For example, when the algorithm needs to process 300 indicators, the accuracy of the algorithm before optimization is about 92%, while the accuracy after optimization increases to 95%. This means that the number of repeatedly quantified indicators in the decision tree generation process is reduced, and the overall assessment accuracy and fluency are also improved.
Meanwhile, the optimization algorithm compares the speed of the assessment indicators one by one. The data processing calculation speed is improved by the optimization algorithm, as shown in Figure 10.
In Figure 10, the larger the number of samples being calculated, the greater the speed difference between the two calculation methods. When the sample size is 100, the difference between the two algorithms is small; both take around 50 seconds. However, when the sample size exceeds 500, the difference between the two is very large. This means that the optimized decision tree is more stable in the calculation process and will not let the entire system become disordered as the number of evaluation indices increases. For the same calculation load, for example, 300 samples, the adjusted data are calculated and branched in about 120 seconds overall by the optimization algorithm, whereas the algorithm before optimization takes 150 seconds to calculate and build the tree. This shows that a lot of redundant data were left unfiltered in the original calculation process.
The optimization algorithm improves the assessment accuracy and the speed of tree establishment to a certain extent and can be used in the performance assessment of school-enterprise cooperation.
4. Conclusion
In the era of data, various machine learning and computer applications need constant exploration and innovation. Part of the school data are used to study the application of decision trees in the performance appraisal of school-enterprise cooperation. The decision tree algorithm is used to build a complete evaluation model for the performance appraisal system. First, the data that affect the performance appraisal indicators are collected, classified, and weighted. Then, the C4.5 algorithm is used to compute over these data. The key data generated are classified and grouped to build a decision tree. Subsequently, the decision tree is used to establish the performance appraisal system. The appraisal system drastically reduces the required labor. The decision tree algorithm addresses the loopholes in the performance appraisal of school-enterprise cooperation to a certain extent. The method also provides a certain reference for future work in the school-enterprise cooperation field. Meanwhile, some shortcomings remain due to the inaccuracy of the related algorithms; for example, the node assessment may not be suitable for all school-enterprise cooperation platforms. In subsequent research, the assessment indicators can be further subdivided and combined with regional characteristics for algorithmic simulation.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
S. H. Yoo, H. Geng, T. L. Chiu et al., “Deep learning-based decision-tree classifier for COVID-19 diagnosis from chest X-ray imaging,” Frontiers of Medicine, vol. 7, p. 427, 2020.
B. Charbuty and A. Abdulazeez, “Classification based on decision tree algorithm for machine learning,” Journal of Applied Science and Technology Trends, vol. 2, no. 1, pp. 20–28, 2021.
H. Abudouaini, C. Huang, H. Liu et al., “Assessment of the self-reported dysphagia in patients undergoing one-level versus two-level cervical disc replacement with the Prestige-LP prosthesis,” Clinical Neurology and Neurosurgery, vol. 207, p. 106759, 2021.
W. Zhao and S. Li, “Research and practice of the evaluation system of higher vocational colleges based on data mining,” Vocational Technology, vol. 54, no. 7, pp. 6–11, 2021.
J. Littenberg-Tobias and J. Reich, “Evaluating access, quality, and equity in online learning: a case study of a MOOC-based blended professional degree program,” The Internet and Higher Education, vol. 47, p. 100759, 2020.
K.-H. Hu, F.-H. Chen, M.-F. Hsu, and G.-H. Tzeng, “Construction of an AI-driven risk management framework for financial service firms using the MRDM approach,” International Journal of Information Technology and Decision Making, vol. 20, no. 3, pp. 1–33, 2021.
Q. Lin and M. X. Chen, “Evaluation of the cooperative performance of local undergraduate colleges and universities and the construction of the deep integration ecosystem of schools and enterprises,” Journal of Hengshui Normal College, vol. 42, no. 1, pp. 129–135, 2021.
P. Nancy, S. Muthurajkumar, S. Ganapathy, S. V. N. Santhosh Kumar, M. Selvi, and K. Arputharaj, “Intrusion detection using dynamic feature selection and fuzzy temporal decision tree classification for wireless sensor networks,” IET Communications, vol. 14, no. 5, pp. 888–895, 2020.
X. Meng, P. Zhang, Y. Xu, and H. Xie, “Construction of decision tree based on C4.5 algorithm for online voltage stability assessment,” International Journal of Electrical Power & Energy Systems, vol. 118, p. 105793, 2020.
K. Gajowniczek and T. Ząbkowski, “Interactive decision tree learning and decision rule extraction based on the ImbTreeEntropy and ImbTreeAUC packages,” Processes, vol. 9, no. 7, p. 1107, 2021.
H. Lu and X. Ma, “Hybrid decision tree-based machine learning models for short-term water quality prediction,” Chemosphere, vol. 249, p. 126169, 2020.
M. M. Ghiasi, S. Zendehboudi, and A. A. Mohsenipour, “Decision tree-based diagnosis of coronary artery disease: CART model,” Computer Methods and Programs in Biomedicine, vol. 192, p. 105400, 2020.
E. A. Toraih, R. M. Elshazli, M. H. Hussein et al., “Association of cardiac biomarkers and comorbidities with increased mortality, severity, and cardiac injury in COVID-19 patients: a meta-regression and decision tree analysis,” Journal of Medical Virology, vol. 92, no. 11, pp. 2473–2488, 2020.
L. la Velle, S. Newman, C. Montgomery, and D. Hyatt, “Initial teacher education in England and the Covid-19 pandemic: challenges and opportunities,” Journal of Education for Teaching, vol. 46, no. 4, pp. 596–608, 2020.
O. Supriadi, Musthan, R. N. Sa’odah et al., “Did transformational, transactional leadership style and organizational learning influence innovation capabilities of school teachers during covid-19 pandemic?” Systematic Reviews in Pharmacy, vol. 11, no. 9, pp. 299–311, 2020.
A. Shehadeh, O. Alshboul, R. E. Al Mamlook, and O. Hamedat, “Machine learning models for predicting the residual value of heavy construction equipment: an evaluation of modified decision tree, LightGBM, and XGBoost regression,” Automation in Construction, vol. 129, p. 103827, 2021.
S. Mishra, P. K. Mallick, H. K. Tripathy, A. K. Bhoi, and A. González-Briones, “Performance evaluation of a proposed machine learning model for chronic disease datasets using an integrated attribute evaluator and an improved decision tree classifier,” Applied Sciences, vol. 10, no. 22, p. 8137, 2020.
M. A. Ferrag, L. Maglaras, A. Ahmim, M. Derdour, and H. Janicke, “Rdtids: rules and decision tree-based intrusion detection system for internet-of-things networks,” Future Internet, vol. 12, no. 3, p. 44, 2020.
I. H. Sarker, A. Colman, J. Han, A. I. Khan, Y. B. Abushark, and K. Salah, “Behavdt: a behavioral decision tree learning to build user-centric context-aware predictive model,” Mobile Networks and Applications, vol. 25, no. 3, pp. 1151–1161, 2020.
B. T. Pham, T. V. Phong, T. Nguyen-Thoi et al., “Ensemble modeling of landslide susceptibility using random subspace learner and different decision tree classifiers,” Geocarto International, vol. 37, no. 3, pp. 735–757, 2020.
W. Chen, Y. Li, W. Xue et al., “Modeling flood susceptibility using data-driven approaches of naïve Bayes tree, alternating decision tree, and random forest methods,” The Science of the Total Environment, vol. 701, p. 134979, 2020.
A. Suresh, R. Udendhran, and M. Balamurgan, “Hybridized neural network and decision tree based classifier for prognostic decision making in breast cancers,” Soft Computing, vol. 24, no. 11, pp. 7947–7953, 2020.
M. A. Ganaie, M. Tanveer, and P. N. Suganthan, “Oblique decision tree ensemble via twin bounded SVM,” Expert Systems with Applications, vol. 143, p. 113072, 2020.
T. T. H. Le, H. Kang, and H. Kim, “Household appliance classification using lower odd-numbered harmonics and the bagging decision tree,” IEEE Access, vol. 8, pp. 55937–55952, 2020.