Journal of Electrical and Computer Engineering
Volume 2016 (2016), Article ID 2168478, 5 pages
http://dx.doi.org/10.1155/2016/2168478
Research Article

A Searching Method of Candidate Segmentation Point in SPRINT Classification

1Science and Technology on Information Transmission and Dissemination in Communication Networks Laboratory, Shijiazhuang, China
2State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China

Received 5 April 2016; Revised 5 August 2016; Accepted 1 September 2016

Academic Editor: Bin-Da Liu

Copyright © 2016 Zhihao Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The SPRINT algorithm is a classical algorithm for building a decision tree, which is a widely used method of data classification. However, the SPRINT algorithm has a high computational cost in the calculation of attribute segmentation. In this paper, an improved SPRINT algorithm is proposed, which searches for better candidate segmentation points for discrete and continuous attributes. The experimental results demonstrate that the proposed algorithm reduces the computational cost and improves the efficiency of the algorithm by improving the segmentation of continuous and discrete attributes.

1. Introduction

In recent years, with the rapid development of the economy and the continuous improvement of computer technology, a large number of databases are used in business management, scientific research, and engineering development. Faced with massive stored data, finding valuable information is a very difficult task. Data mining helps people extract valuable information from large, incomplete, random, and fuzzy data. Classification is a very important task in data mining. The purpose of classification is to construct a function or a model by which data can be classified into one of the given categories. The classification model can achieve the goal of forecasting data [1, 2]. The prediction model is derived from historical data records to represent the trend of the given data, so that it can be used to forecast future data.

The ID3 algorithm is a significant algorithm for building a decision tree [3, 4]. Information gain is used in this algorithm to select node attributes in the decision tree. However, ID3 has the shortcoming of being biased toward attributes with many values. The improved method C4.5 was proposed based on the ID3 algorithm [5, 6]; C4.5 uses the information gain ratio instead of the information gain to select attributes of the decision tree, which improves the efficiency of decision trees. Many further improved algorithms based on the ID3 algorithm have been proposed, including SLIQ, SPRINT, and others. The SLIQ [7] algorithm can handle classification of large datasets. The SPRINT algorithm [8–10], based on SLIQ, is unrestricted by memory, and its processing speed is considerable.

The SPRINT algorithm has many advantages: it is unrestricted by memory, and it is a scalable and parallel method of building decision trees. But there are also some shortcomings. For example, finding the best segmentation point of discrete attributes requires a large amount of calculation, and the partition of continuous attributes is unreasonable.

Based on these issues, this paper proposes a new method of searching for the best segmentation point. For the segmentation of discrete attributes, the new method reduces time complexity by avoiding unnecessary computation. For the segmentation of continuous attributes, we can achieve the goal of reducing the depth of decision trees and improving the classification efficiency of decision trees through discretization of continuous attributes.

2. Related Works

Decision tree is one of the most widely used classification models in machine learning applications. Its goal is to extract knowledge from large scale datasets and represent them in a graphically intuitive way.

The paper [1] presents the Importance Aided Decision Tree (IADT), which takes feature importance as additional domain knowledge for enhancing the performance of learners. A decision tree algorithm finds the most important attribute at each node; therefore, the mechanism of feature importance in that paper is relevant domain knowledge for the decision tree algorithm. For automatically designing decision trees, Barros et al. [2] propose a hyperheuristic evolutionary decision tree algorithm tailored to a specific type of classification dataset. The algorithm evolves design components of top-down decision tree induction algorithms.

The key of the ID3 algorithm is that it considers information gain as the reference value for selecting testing attributes, which can lead to lower classification accuracy [3]. The authors in [4] therefore proposed a new scheme to overcome this shortcoming of ID3. The paper uses an improved information gain, based on the dependency degree of condition attributes, as a heuristic when selecting the best segmentation attribute.

Ersoy et al. [5] proposed an improved C4.5 classification algorithm with a hypothesis generation process. The algorithm adopts the k-best Multi-Hypothesis Tracker (MHT) to reduce the number of generated hypotheses, especially in high-clutter scenarios.

In order to solve the security problems of intrusion detection systems (IDS), attack scenarios and patterns should be analyzed and categorized. The enhanced C4.5 [6] is a combination of tree classifiers for addressing security risks in intrusion detection systems. The mechanism uses a multiple-level hybrid classifier that relies on labeled training data and mixed data. Thus, an IDS based on the C4.5 mechanism can be trained with unlabeled data and is capable of detecting previous attacks.

The SLIQ decision tree suffers from sharp decision boundaries, which are hardly suitable in classification. Thus the paper [7] proposes a fuzzy Supervised Learning In Quest (SLIQ) decision tree. The authors construct a fuzzy decision boundary instead of a crisp decision boundary. In order to avoid incomprehensible induction rules in a large and deep decision tree, fuzzy SLIQ constructs a fuzzy binary decision tree, which has a significant reduction in tree size.

SPRINT decision tree algorithm can predict the quality level of system modules, which is good for software testing [8]. The paper presents an improved SPRINT algorithm to calibrate classification trees. It provides a unique tree-pruning technique based on the minimum description length (MDL) principle. Based on this, SPRINT tree-based software quality classification mechanisms are used to predict whether a software module is fault-prone or not fault-prone.

3. SPRINT Algorithm

3.1. Description of SPRINT Algorithm

The SPRINT algorithm has no limit on the number of input records, and its processing speed is considerable. In the initialization phase, the algorithm creates an attribute list and a corresponding statistics table for each attribute of the sample data. Elements in an attribute list are known as attribute records, which consist of record labels, attribute values, and class labels. A statistics table describes the class distribution of an attribute: its two rows, Cabove and Cbelow, describe the class distributions of the processed samples and the unprocessed samples, respectively.

Steps of the original SPRINT algorithm are as follows:

Maketree(node n)
  If (n meets the termination conditions)
    Put n into the queue, labeled as a leaf node;
    Return;
  For (each attribute a)
    Update the histogram in real time;
    Calculate and evaluate the segmentation index for each candidate segmentation point, and find the best segmentation point of a;
  Find out the best segmentation for node n from the best segmentations of all attributes; based on it, split n into two parts n1 and n2;
  Maketree(n1);
  Maketree(n2);

The termination condition of the algorithm covers three cases. (1) No attribute can be used as a testing attribute. (2) All the training samples in the node belong to the same class; in this case the node is used as a leaf node and labeled by this class. (3) The number of training samples is less than a user-defined threshold.
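The recursion above can be sketched as a minimal, in-memory Python program. This is a simplified illustration, not the parallel, disk-resident SPRINT implementation: each node tests a single attribute value for equality instead of searching value subsets, and the data and labels are hypothetical.

```python
from collections import Counter

def gini(labels):
    """Gini index of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values()) if n else 0.0

def make_tree(rows, attrs, min_size=1):
    """rows: list of (feature_dict, label). Recursively split on the single
    attribute-value test with the lowest weighted Gini until termination."""
    labels = [c for _, c in rows]
    # Termination: pure node, no attributes left, or too few samples.
    if len(set(labels)) == 1 or not attrs or len(rows) <= min_size:
        return Counter(labels).most_common(1)[0][0]  # leaf: majority class
    best = None
    for a in attrs:
        for v in {f[a] for f, _ in rows}:
            s1 = [(f, c) for f, c in rows if f[a] == v]
            s2 = [(f, c) for f, c in rows if f[a] != v]
            if not s1 or not s2:
                continue  # not a real split
            g = (len(s1) * gini([c for _, c in s1])
                 + len(s2) * gini([c for _, c in s2])) / len(rows)
            if best is None or g < best[0]:
                best = (g, a, v, s1, s2)
    if best is None:
        return Counter(labels).most_common(1)[0][0]
    g, a, v, s1, s2 = best
    rest = [x for x in attrs if x != a]
    return {"test": (a, v),
            "yes": make_tree(s1, rest, min_size),
            "no": make_tree(s2, rest, min_size)}

rows = [({"vocation": "student"}, "high"), ({"vocation": "worker"}, "low"),
        ({"vocation": "clerk"}, "low"), ({"vocation": "retiree"}, "high")]
tree = make_tree(rows, ["vocation"], min_size=0)
print(tree)
```

The returned nested dictionary mirrors the Maketree recursion: an internal node stores its test and its two children, and leaves store class labels.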

3.2. Segmentation of Attributes

The traditional SPRINT algorithm uses the Gini index [5] to search for the best segmentation attribute; the segmentation with the minimum Gini index represents the largest information gain.

For a dataset S containing examples from n classes, Gini(S) is defined as

Gini(S) = 1 − Σ_{j=1}^{n} p_j^2, (1)

where p_j is the frequency of class j in S. If a partition divides the dataset S into two subsets S1 and S2, and n1 and n2 represent the number of records in subsets S1 and S2, respectively, then the Gini value after the segmentation is

Gini_split(S) = (n1/n) Gini(S1) + (n2/n) Gini(S2). (2)

A segmentation of attribute values providing the least Gini value is chosen as the best segmentation [9].
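As a minimal sketch, formulas (1) and (2) can be computed as follows; the class labels are hypothetical.

```python
from collections import Counter

def gini(records):
    """Formula (1): 1 minus the sum of squared class frequencies."""
    n = len(records)
    if n == 0:
        return 0.0
    counts = Counter(records)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def gini_split(s1, s2):
    """Formula (2): weighted Gini after splitting into subsets s1 and s2."""
    n = len(s1) + len(s2)
    return len(s1) / n * gini(s1) + len(s2) / n * gini(s2)

# A pure subset has Gini 0; a 50/50 mix of two classes has Gini 0.5.
print(gini(["yes", "yes"]))                      # 0.0
print(gini(["yes", "no"]))                       # 0.5
print(gini_split(["yes", "yes"], ["no", "no"]))  # 0.0
```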

For discrete attributes and continuous attributes, the SPRINT algorithm uses different processing methods.

In order to find the discrete attribute segmentation point [7], we assume that the number of a certain attribute's values is N, and these values should be divided into two parts. Every subset of the attribute values is considered as a possible partition, and then the corresponding Gini value is obtained. There are 2^N possible partitioning ways in total. We need to calculate the Gini value for each partitioning way using the exhaustive method and can then obtain the best segmentation.
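This exhaustive search can be sketched as follows. The attribute values match the Table 1 example, while the class labels ("high"/"low" risk) are hypothetical; the Gini helper is the standard definition.

```python
from collections import Counter
from itertools import combinations

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values()) if n else 0.0

def best_discrete_split(rows):
    """Exhaustively test every subset of attribute values as one branch.

    rows: list of (attribute_value, class_label) pairs.
    All 2^N subsets are generated, as in the traditional SPRINT search,
    so the cost grows exponentially with the number of values N.
    """
    values = sorted({v for v, _ in rows})
    n_total = len(rows)
    best = (None, float("inf"))
    for r in range(len(values) + 1):
        for subset in combinations(values, r):
            s1 = [c for v, c in rows if v in subset]
            s2 = [c for v, c in rows if v not in subset]
            if not s1 or not s2:
                continue  # an empty side is not a real split
            g = len(s1) / n_total * gini(s1) + len(s2) / n_total * gini(s2)
            if g < best[1]:
                best = (set(subset), g)
    return best

rows = [("student", "high"), ("worker", "low"),
        ("clerk", "low"), ("retiree", "high")]
subset, g = best_discrete_split(rows)
# {"clerk", "worker"} vs {"student", "retiree"} separates the classes
# perfectly, so the minimum weighted Gini is 0.0.
print(subset, g)
```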

To find a continuous attribute's partitioning point, the split can only occur between two adjacent values. First, the values of the continuous attribute are sorted, and the candidate segmentation points are the intermediate points between adjacent values.

During a scan of the sorted values, the statistics table is updated each time a record is read. The statistics table contains all the information needed to calculate the Gini index. The Gini index is then calculated for each candidate to find the segmentation point with the minimum Gini value.
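A sketch of this search is given below. For clarity it re-scans the records for each candidate midpoint instead of maintaining SPRINT's incremental statistics table, and the ages and labels are hypothetical.

```python
from collections import Counter

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values()) if n else 0.0

def best_continuous_split(rows):
    """One pass over sorted values; each midpoint between distinct
    neighbours is a candidate. rows: list of (value, class_label).
    Returns (best_threshold, best_gini)."""
    rows = sorted(rows)
    n = len(rows)
    best = (None, float("inf"))
    for i in range(n - 1):
        if rows[i][0] == rows[i + 1][0]:
            continue  # identical values cannot be separated
        threshold = (rows[i][0] + rows[i + 1][0]) / 2
        s1 = [c for v, c in rows if v <= threshold]
        s2 = [c for v, c in rows if v > threshold]
        g = len(s1) / n * gini(s1) + len(s2) / n * gini(s2)
        if g < best[1]:
            best = (threshold, g)
    return best

ages = [(23, "low"), (25, "low"), (40, "high"), (52, "high")]
print(best_continuous_split(ages))  # (32.5, 0.0): the midpoint between 25 and 40
```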

Although the traditional method can find the best segmentation point, it is necessary to traverse all segmentations of discrete attributes [8], which gives the algorithm high time complexity. For the segmentation of continuous attributes, dividing them into only two consecutive parts in most cases cannot reflect the distribution of attribute values.

4. Improved SPRINT

4.1. Segmentation of Discrete Attribute

Taking credit risk of bank as an example, the data record is shown in Table 1.

Table 1: Credit risk records of bank.

Values of a discrete attribute with N kinds of values are divided into two sets, and then there are 2^N types of partitions, which means that the Gini index must be calculated 2^N times. In Table 1, there are four kinds of values, student, worker, clerk, and retiree, so 2^4 = 16 kinds of partitions should be considered. Taking into account the commutative law of addition in formula (2), the Gini_split value remains unchanged when the attribute values in sets S1 and S2 are exchanged. For example, if S1 = {student, worker} and S2 = {clerk, retiree}, the resulting Gini_split value is the same as when the attribute values in S1 and S2 are exchanged, that is, S1 = {clerk, retiree} and S2 = {student, worker}.
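This symmetry can be checked directly; a minimal sketch with hypothetical labels, reusing the Gini definitions from Section 3.2:

```python
from collections import Counter

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values()) if n else 0.0

def gini_split(s1, s2):
    n = len(s1) + len(s2)
    return len(s1) / n * gini(s1) + len(s2) / n * gini(s2)

left = ["high", "high", "low"]
right = ["low", "low"]
# Swapping the two sides of the partition leaves the weighted Gini
# unchanged, so one of each mirrored pair of partitions never needs
# to be evaluated.
print(gini_split(left, right) == gini_split(right, left))  # True
```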

According to this property, the number of Gini index calculations can be reduced for the segmentation of discrete attributes in the SPRINT algorithm. To reduce the time complexity of the SPRINT algorithm, this paper proposes an improved discrete attribute partition algorithm.

S is a collection of discrete attribute values, and the number of values in S is N. Now the attribute value set S is divided into two sets, S1 and S2. Select some values from S and put them into S1; the number of selected values is i. The initial value of i is 1, and i grows in steps of one until i = N/2. The numbers of values in S1 and S2 are identical in the case of i = N/2. When N is odd, it is impossible for the sizes of S1 and S2 to be equal. If N is an even number and i is equal to N/2, there are C(N, N/2) combinations of attribute values for S1, and half of these combinations mirror partitions that have already appeared as S2. So after selecting N/2 values for S1, we need to search the previously recorded second collections for one identical to the current S1. If such a collection exists, the partition is deleted. And when i is greater than N/2, the search is stopped.

At the same time, when all values in a subset belong to the same class, this subset can become a leaf node that does not need to be partitioned. We can also ignore the two degenerate cases S1 = ∅ and S2 = ∅. In summary, this paper proposes a new algorithm to reduce the calculation of candidate segmentation points for discrete attributes. Suppose there are N different values in a discrete attribute; the improved algorithm on discrete attributes is as follows.

Step 1. Initialize a class partition table (including four fields: number, first collection, second collection, and Gini value), and set the counter i = 1.

Step 2. If i < N/2, each combination of i values is placed in the first collection of the class partition table, the Gini index of each such division is calculated, and then the next combination is processed.

Step 3. After Step 2 ends, set i = i + 1 and compare i with N/2. If i < N/2, then return to Step 2; if i = N/2, execute the next step; if i > N/2, skip to Step 6.

Step 4. Put each combination of N/2 values in the first collection and the others into the second collection. Search the list of previously generated second collections for one identical to the current first collection. If one exists, this partition is deleted; otherwise, the Gini index of this partition is calculated.

Step 5. Set i = i + 1 and compare i with N/2; if i ≤ N/2, skip to Step 4. If not, skip to the next step.

Step 6. Find out the minimum Gini value based on the optimized class partition table.
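Steps 1–6 can be sketched as follows (the data are hypothetical; N is the number of distinct attribute values). For N = 4 values, only 7 partitions are evaluated instead of 2^4 = 16.

```python
from collections import Counter
from itertools import combinations

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values()) if n else 0.0

def improved_discrete_split(rows):
    """Enumerate only subsets of size 1 .. N/2, dropping the mirror of each
    half-sized subset, instead of all 2^N subsets (Steps 1-6)."""
    values = sorted({v for v, _ in rows})
    n_vals, n_total = len(values), len(rows)
    best, evaluated = (None, float("inf")), 0
    seen_halves = set()
    for i in range(1, n_vals // 2 + 1):
        for subset in combinations(values, i):
            if 2 * i == n_vals:
                fs = frozenset(subset)
                if fs in seen_halves:
                    continue  # mirror of a partition already evaluated
                seen_halves.add(frozenset(values) - fs)
            s1 = [c for v, c in rows if v in subset]
            s2 = [c for v, c in rows if v not in subset]
            g = len(s1) / n_total * gini(s1) + len(s2) / n_total * gini(s2)
            evaluated += 1
            if g < best[1]:
                best = (set(subset), g)
    return best, evaluated

rows = [("student", "high"), ("worker", "low"),
        ("clerk", "low"), ("retiree", "high")]
(best_subset, g), evaluated = improved_discrete_split(rows)
print(best_subset, g, evaluated)  # 7 partitions evaluated instead of 16
```

Note that this sketch omits the extra shortcut for subsets whose records all share one class; only the mirror-elimination of Steps 3–5 is shown.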

It can be seen that the improved algorithm eliminates repeated operations and unnecessary operations, which reduces computation greatly and reduces the time of creating a decision tree.

4.2. Segmentation of Continuous Attribute

Candidate segmentation points are the middle points of two adjacent values for the segmentation of continuous attributes in the SPRINT algorithm, and the attribute values are divided into two parts by a middle point. For example, for two adjacent values a_i and a_{i+1}, their middle point v = (a_i + a_{i+1})/2 is a candidate segmentation point for the continuous attribute. Values of this continuous attribute that are less than v belong to one collection, and the values greater than v belong to the other collection. However, in many cases this segmentation method is not conducive to the classification of the target attribute.

This algorithm includes three steps: sorting, classifying, and combination of continuous attribute values. Classifying is a new idea that is not included in the SPRINT algorithm. Suppose a continuous attribute has values a_1, a_2, …, a_n, and the target attribute has two values, corresponding to positive and negative examples. The continuous attribute values are classified according to the target attribute values: collection 1 contains the values whose records have the positive target attribute value, and collection 2 contains the values whose records have the negative target attribute value. The values in collection 1 and collection 2 are sorted in ascending or descending order. The following processing is then performed on collection 1 and collection 2.

Step 1. Calculate the difference between each pair of neighboring values: d_i = a_{i+1} − a_i.

Step 2. Sort the series d in descending order. Find the top k values in the sorted series; the middle points of the corresponding pairs a_i and a_{i+1} are candidate segmentation points.

Step 3. There are k candidate segmentation points in collection 1 (and likewise in collection 2), so 2k candidate segmentation points are found in total.

Step 4. Sort the candidate segmentation points in ascending or descending order: c_1, c_2, …, c_{2k}.

Step 5. Calculate the differences e_i = c_{i+1} − c_i and find the minimum value in the series e. The corresponding c_i and c_{i+1} are deleted, and their middle point (c_i + c_{i+1})/2 is added to the series of candidate points.

Step 6. Repeat Step 5 until the number of candidate segmentation points is m.

Step 7. Divide all values into m + 1 blocks using the m segmentation points. The values of the continuous attribute have now been divided into blocks through the above steps; these blocks are regarded as discrete attribute values, and the segmentation method for discrete attribute values is then used to process them.

Step 8. Initialize a class partition table (including four fields: number, first collection, second collection, and Gini value), and set the counter i = 1.

Step 9. If i < M/2, where M is the number of blocks, each combination of i blocks is placed in the first collection of the class partition table, the Gini index of the division is calculated, and then the next combination is processed.

Step 10. When the splitting process of Step 9 has ended, set i = i + 1 and compare i with M/2. If i < M/2, then return to the previous step; if i = M/2, execute the next step; if i > M/2, then jump to Step 13.

Step 11. Put each combination of M/2 blocks in the first collection and the others into the second collection. Search the list of previously generated second collections for one identical to the current first collection. If one exists, this partition is deleted; otherwise, the Gini index of this partition is calculated.

Step 12. Set i = i + 1 and compare i with M/2; if i ≤ M/2, return to the previous step. If not, proceed to the next step.

Step 13. Find out the minimum Gini value based on the optimized class partition table.

Steps  8–13 are the same as the improved algorithm on discrete attributes.
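Steps 1–7 of the discretization above can be sketched as follows. The per-collection gap count k and the final number of cut points m are assumptions (the original symbols were lost), and the age values are hypothetical.

```python
def candidate_points(values, k):
    """Steps 1-3: midpoints of the k largest gaps between sorted neighbours."""
    values = sorted(values)
    gaps = [(values[i + 1] - values[i], (values[i] + values[i + 1]) / 2)
            for i in range(len(values) - 1)]
    gaps.sort(reverse=True)  # largest differences first
    return [mid for _, mid in gaps[:k]]

def merge_points(points, m):
    """Steps 5-6: replace the two closest candidates by their midpoint
    until only m cut points remain."""
    points = sorted(points)
    while len(points) > m:
        i = min(range(len(points) - 1), key=lambda j: points[j + 1] - points[j])
        merged = (points[i] + points[i + 1]) / 2
        points = points[:i] + [merged] + points[i + 2:]
    return points

def to_blocks(values, points):
    """Step 7: map each value to the index of the block the cut points define."""
    return [sum(v > p for p in points) for v in values]

# Ages of the positive and negative examples (hypothetical data).
pos, neg = [21, 23, 25, 58, 60], [38, 40, 42, 44]
points = merge_points(candidate_points(pos, 2) + candidate_points(neg, 2), 3)
ages = sorted(pos + neg)
print(points)             # three cut points kept after merging
print(to_blocks(ages, points))
```

The resulting blocks would then be fed to the discrete segmentation of Steps 8–13.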

5. Experiment and Simulation

This experiment uses the Function dataset [11] as experimental samples. Attributes of the dataset include age, salary, vocation, and level, among others. The dataset contains discrete attributes (e.g., vocation) and continuous attributes (e.g., age). VC++ 6.0 is the experimental platform. A comparison of the original SPRINT algorithm [9] and the improved SPRINT algorithm is shown in Table 2.

Table 2: Comparison of the SPRINT algorithm and the improved SPRINT algorithm.

A visualization of the data in Table 2 is shown in Figure 1.

Figure 1: Comparison of the SPRINT algorithm and the improved SPRINT algorithm.

The quantities of data in the five sets increase, so the elapsed time also grows. As shown in Figure 1, the improved SPRINT algorithm greatly reduces the time needed to generate decision trees. The classification accuracy of the decision tree generated by the improved SPRINT algorithm is also tested.

The comparison results of classification accuracy are shown in Table 3.

Table 3: Classification accuracy of original algorithm and improved algorithm.

As shown in Table 3, the improved SPRINT algorithm achieves the same or slightly better classification accuracy than the original algorithm. With the increasing scale of the dataset, the classification accuracy decreases accordingly. The decision tree becomes larger with the increase in the amount of data, which may cause the decrease in accuracy. Controlling the size of the decision tree needs further research.

6. Conclusion

In summary, the improved SPRINT algorithm improves the search for the best segmentation by finding better candidate segmentation points for discrete and continuous attributes, which eliminates unnecessary operations, increases the speed of generating decision trees, and greatly reduces the time cost.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this article.

Acknowledgments

This work was supported by Open Subject Funds of Science and Technology on Information Transmission and Dissemination in Communication Networks Laboratory (ITD-U15002/KX152600011).

References

  1. M. R. A. Iqbal, S. Rahman, S. I. Nabil, and I. U. A. Chowdhury, “Knowledge based decision tree construction with feature importance domain knowledge,” in Proceedings of the 7th International Conference on Electrical and Computer Engineering (ICECE '12), pp. 659–662, Dhaka, Bangladesh, December 2012.
  2. R. C. Barros, M. P. Basgalupp, A. A. Freitas, and A. C. P. L. F. De Carvalho, “Evolutionary design of decision-tree algorithms tailored to microarray gene expression data sets,” IEEE Transactions on Evolutionary Computation, vol. 18, no. 6, pp. 873–892, 2014.
  3. I. Kamwa, S. R. Samantaray, and G. Joós, “On the accuracy versus transparency trade-off of data-mining models for fast-response PMU-based catastrophe predictors,” IEEE Transactions on Smart Grid, vol. 3, no. 1, pp. 152–161, 2012.
  4. H. He, T. M. McGinnity, S. Coleman, and B. Gardiner, “Linguistic decision making for robot route learning,” IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 1, pp. 203–215, 2014.
  5. Y. Ersoy, M. Efe, and B. Nakiboglu, “Enhancement of multiple hypothesis tracking algorithm with C4.5 algorithm,” in Proceedings of the 20th Signal Processing and Communications Applications Conference (SIU '12), pp. 1–4, April 2012.
  6. L. P. Rajeswari and A. Kannan, “An Intrusion Detection System based on multiple level hybrid classifier using enhanced C4.5,” in Proceedings of the International Conference on Signal Processing Communications and Networking (ICSCN '08), pp. 75–79, January 2008.
  7. B. Chandra and P. P. Varghese, “Fuzzy SLIQ decision tree algorithm,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 38, no. 5, pp. 1294–1301, 2008.
  8. L. Rutkowski, L. Pietruczuk, P. Duda, and M. Jaworski, “Decision trees for mining data streams based on the McDiarmid's bound,” IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 6, pp. 1272–1279, 2013.
  9. M. Guijarro, R. Fuentes-Fernández, P. J. Herrera, Á. Ribeiro, and G. Pajares, “New unsupervised hybrid classifier based on the fuzzy integral: applied to natural textured images,” IET Computer Vision, vol. 7, no. 4, pp. 272–278, 2013.
  10. M. L. Othman, I. Aris, S. M. Abdullah, M. L. Ali, and M. R. Othman, “Knowledge discovery in distance relay event report: a comparative data-mining strategy of rough set theory with decision tree,” IEEE Transactions on Power Delivery, vol. 25, no. 4, pp. 2264–2287, 2010.
  11. R. Agrawal, T. Imielinski, and A. Swami, “Database mining: a performance perspective,” IEEE Transactions on Knowledge and Data Engineering, vol. 5, no. 6, pp. 914–925, 1993.