Abstract

Discretization algorithms for real value attributes are of great importance in many areas such as artificial intelligence and machine learning. The algorithms related to the Chi2 algorithm (including the modified Chi2 algorithm and the extended Chi2 algorithm) are well-known discretization algorithms that exploit techniques of probability and statistics. In this paper these algorithms are analyzed and their drawbacks are pointed out. Based on this analysis, a new modified algorithm based on interval similarity is proposed. The new algorithm defines an interval similarity function, which is regarded as a new merging standard in the process of discretization. At the same time, two important parameters in the process of discretization, a condition parameter and a tiny move parameter, together with the discrepancy extent of the sizes of two adjacent intervals, are given in the form of functions. The related theoretical analysis and the experimental results show that the presented algorithm is effective.

1. Introduction

Intelligent information processing is a research hot spot in today's information science theory and applications. In machine learning and data mining, many algorithms have been developed for processing discrete data. Discretization of real value attributes is an important method for compressing data and simplifying analysis, and it is also indispensable in the domains of pattern recognition, machine learning, and rough set analysis. The key to discretization lies in selecting the cut points. At present, there are five different axes by which the proposed discretization algorithms can be classified [14]: supervised versus unsupervised, static versus dynamic, global versus local, top-down (splitting) versus bottom-up (merging), and direct versus incremental. Continuous attributes need to be discretized in many algorithms, such as rule extraction and tag sorting, and especially in rough set theory in data mining research. Regarding algorithms for discretization of real value attributes based on rough sets, people have conducted extensive research and proposed many new discretization methods [5]; one line of thought is that the compatibility of the decision table must not be changed during discretization. The rough set and Boolean logic method proposed by Nguyen and Skowron is quite influential [6]. Moreover, there are two further influential families of supervised discretization methods: the algorithms related to information entropy and the algorithms related to the Chi2 algorithm based on statistical methods. Reference [7] gives an algorithm for discretization of real value attributes based on the decision table and information entropy, which is a heuristic and local algorithm that seeks the best results. Reference [8] proposed a discretization algorithm for real value attributes based on information theory, which regards class-attribute interdependence as an important discretization criterion and selects the candidate cut points that lead to a better correlation between the class labels and the discrete intervals. But this algorithm has the following disadvantages. It uses a user-specified number of intervals when initializing the discretization intervals. The significance test used in the algorithm requires training for the selection of a confidence interval. It initializes the discretization intervals using a maximum entropy discretization method, and such an initialization may be the worst starting point in terms of the CAIR criterion. Moreover, it easily results in an unduly low degree of discretization. Huang solved the above problems, but at the expense of a very high computational cost [9]. Kurgan and Cios improved the discretization criterion and attempted to achieve class-attribute interdependence maximization [10]. But this criterion considers only the dependence between the majority class in each interval and the attribute, which causes excessive discretization, so the result is not precise. References [3, 4, 11, 12] are the Chi2-related algorithms based on statistics. The ChiMerge algorithm introduced by Kerber in 1992 is a supervised global discretization method [11]. The method uses the χ² test to determine whether the current cut point is merged or not. Liu and Setiono [12] proposed the Chi2 algorithm in 1997 based on the ChiMerge algorithm.
In this algorithm, the authors increase the value of the threshold dynamically and decide the merging order of the intervals according to the value of the difference d = χ²_α − χ², where χ² is computed by formula (1) and χ²_α is a fractile (quantile) decided by the significance level α. Tay and Shen further improved the Chi2 algorithm and proposed the modified Chi2 algorithm in [4]. The authors showed that it is unreasonable to decide the degree of freedom by the number of decision classes of the whole system, as in the Chi2 algorithm; conversely, the degree of freedom should be determined by the number of decision classes of each pair of adjacent intervals. In [3], the authors pointed out that the method of calculating the degrees of freedom in the modified Chi2 algorithm is not accurate and proposed the extended Chi2 algorithm, which replaced the difference χ²_α − χ² with the normalized difference d = (χ²_α − χ²)/χ²_α.

Approximate reasoning is an important research area in the artificial intelligence domain [14–17]. It requires measuring the similarity between different patterns and objects. A similarity measure is a function used to compare the similarity of information, data, shapes, pictures, and so on [18]. In some domains, such as picture matching, information retrieval, computer vision, image fusion, remote sensing, and weather forecasting, similarity measures have extremely vital significance [13, 19–22]. Traditional similarity measure methods often directly adopt research results from statistics, such as the cosine distance, the overlap distance, the Euclidean distance, and the Manhattan distance.

The main role of the algorithms related to the Chi2 algorithm is to use the χ² statistic and the significance level α to codetermine whether a cut point can be merged. In this paper, we point out that determining the importance of cut points by the distance χ²_α − χ² divided by χ²_α, as in the extended Chi2 algorithm of reference [3], lacks a theoretical basis and is not accurate: it is unreasonable to first merge the two adjacent intervals that have the maximal difference value. At the same time, based on a study of the applied meaning of the χ² statistic, the drawback of the algorithm is analyzed. To solve these problems, a new modified algorithm based on interval similarity is proposed. The new algorithm defines an interval similarity function, which is regarded as a new merging standard in the process of discretization. At the same time, two important parameters in the process of discretization, a condition parameter and a tiny move parameter, together with the discrepancy extent of the sizes of two adjacent intervals, are given in the form of functions. Besides, two important stipulations are given in the algorithm. The related theoretical analysis and the experimental results show that the presented algorithm is effective.

2. Correlative Conception of Chi2 Algorithm

At first, a few conceptions about discretization are introduced as follows.
(1) Interval and cut point. A single value of a continuous attribute is a cut point; two cut points produce an interval. Two adjacent intervals share a cut point. A discretization algorithm for real value attributes is actually a process of removing cut points and merging adjacent intervals according to definite rules.
(2) χ² and χ²_α. χ² is a statistic in probability theory.

The formula for computing the χ² value is

χ² = Σ_{i=1}^{2} Σ_{j=1}^{k} (A_ij − E_ij)² / E_ij,  (1)

where
k: number of system classes;
A_ij: number of patterns in the ith interval, jth class;
C_j: number of patterns in the jth class;
R_i: number of patterns in the ith interval;
N: total number of patterns;
E_ij = (R_i · C_j)/N: expected frequency of A_ij.

χ²_α is the threshold determined by the significance level α. In statistics, the asymptotic distribution of the χ² statistic with v degrees of freedom is the χ² distribution with v degrees of freedom, namely, the χ²(v) distribution. χ²_α is determined by selecting a desired significance level α.
(3) Inconsistency rate. When the condition attribute values of objects are the same but the decision attribute values are not the same, the classification information of the decision table has a definite inconsistency rate (error rate) δ, where

δ = 1 − γ,  (2)

and γ is the approximate precision. The rectified Chi2 algorithm proposed in this paper controls the merging extent and the information loss in the discretization process with δ.
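To make these conceptions concrete, the following minimal Python sketch (not the paper's code) computes the χ² value of formula (1) for a pair of adjacent intervals and the inconsistency rate δ; the function names and the data layout (intervals as lists of class labels, objects as pairs of a condition-value tuple and a decision value) are illustrative assumptions.

from collections import Counter

def chi2_of_pair(interval1, interval2):
    # interval1, interval2: class labels of the patterns in two adjacent intervals
    classes = sorted(set(interval1) | set(interval2))
    counts = [Counter(interval1), Counter(interval2)]
    R = [len(interval1), len(interval2)]                    # patterns per interval (R_i)
    C = {c: counts[0][c] + counts[1][c] for c in classes}   # patterns per class (C_j)
    N = R[0] + R[1]                                         # total number of patterns
    chi2 = 0.0
    for i in range(2):
        for c in classes:
            E = R[i] * C[c] / N                             # expected frequency E_ij
            if E > 0:
                chi2 += (counts[i][c] - E) ** 2 / E         # (A_ij - E_ij)^2 / E_ij
    return chi2

def inconsistency_rate(rows):
    # rows: list of (condition_tuple, decision); delta is the fraction of
    # objects that share condition values but disagree on the decision
    groups = {}
    for cond, dec in rows:
        groups.setdefault(cond, Counter())[dec] += 1
    return sum(sum(g.values()) - max(g.values()) for g in groups.values()) / len(rows)

For instance, chi2_of_pair(['+', '+', '-'], ['+', '-', '-']) measures how differently the two intervals distribute over the classes '+' and '-'; identical class distributions yield χ² = 0.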

The extended Chi2 algorithm is shown in Algorithm 1 [1].

Step1: Initialization. Set significance level α. Calculate inconsistency rate of the
   information system: δ.
Step2: Sort data in ascending order for each attribute and calculate the χ² value of
   each pair of adjacent intervals according to (1), then use a table to obtain
   the corresponding threshold χ²_α. Calculate difference d = (χ²_α − χ²)/χ²_α.
Step3: Merge.
   While (mergeable cut point)
   {Search cut point that has the maximal difference d, then merge it;
    If δ changes
     {Withdraw merging;
      goto Step4;}
    else goto Step2;
   }
Step4: If α can not be decreased
       Exit procedure;
   Else {α0 = α;
      Decrease the significance level α by one level;
      goto Step2; }
Step5: Do until no attribute can be merged
   {For each mergeable attribute
    {Calculate difference d;
     α = α0;
     sign flag = 0;
     While (flag == 0)
     {While (mergeable cut point)
        {Search cut point that has the maximal difference d, then merge it;
        If δ changes
         {Withdraw merging;
         flag = 1;
         break;}
         Else update difference d;
        }
      If α can not be decreased
        Break;
      Else {Decrease the significance level α by one level;
       Update difference d;}
     }
    }
   }
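As a small illustration of Step2 and Step3, the threshold χ²_α can also be obtained from the χ² quantile function instead of a table lookup. The following Python sketch (an illustrative assumption; the function name and the use of scipy are not from the paper) computes the normalized difference d = (χ²_α − χ²)/χ²_α that decides the merging order:

from scipy.stats import chi2 as chi2_dist

def merge_difference(chi2_value, n_classes_in_pair, alpha):
    # degrees of freedom per the modified/extended Chi2 algorithms:
    # number of decision classes present in the two adjacent intervals minus 1
    df = max(n_classes_in_pair - 1, 1)
    # fractile with P(X > chi2_alpha) = alpha for X ~ chi2(df)
    chi2_alpha = chi2_dist.ppf(1.0 - alpha, df)
    return (chi2_alpha - chi2_value) / chi2_alpha

Note that for α = 0.5 and one degree of freedom the threshold is χ²_0.5(1) ≈ 0.45, and that whenever χ² = 0 the difference d equals 1 regardless of the degrees of freedom; both facts reappear in the analysis of Section 3.1.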

3. Interval Similarity Function

3.1. Insufficiency of Chi2 Correlation Algorithm

In formula (1), C_j/N is the proportion of the number of patterns in the jth class to the total number of patterns, and R_i is the number of patterns in the ith interval. Therefore, the χ² statistic indicates the degree of equality of the class distributions of two adjacent intervals. The smaller the χ² value is, the more similar the class distributions are, and the less important the cut point is; it should be merged.

For the newest extended Chi2 algorithm, it is quite possible to have two groups of adjacent intervals such that the number of classes of one group is greater than that of the other. Then the difference of the class distributions of the pair of adjacent intervals with the greater number of classes is bigger and the corresponding χ² value is greater, while the difference of the class distributions of the pair with the smaller number of classes is smaller and the corresponding χ² value is smaller. Moreover, the degree of freedom of the pair with the greater number of classes is bigger, so its quantile χ²_{α,1} is possibly much greater than χ²_{α,2} (see Figure 1). Therefore, even if χ²_1 > χ²_2 and χ²_{α,1} > χ²_{α,2}, we can still have the situation d_1 = (χ²_{α,1} − χ²_1)/χ²_{α,1} > d_2 = (χ²_{α,2} − χ²_2)/χ²_{α,2}. But in fact, the pair of adjacent intervals with the bigger class-distribution difference and the greater number of classes should not be merged first. This merging standard is not precise in computation, so it is unreasonable to first merge the two adjacent intervals with the maximal difference value.

In the algorithms of the Chi2 series, the expansion of χ² is as follows:

χ² = Σ_{i=1}^{2} Σ_{j=1}^{k} (A_ij − R_i C_j / N)² / (R_i C_j / N).  (3)

In formula (3), χ² is not very accurate under certain situations. Consider two adjacent intervals whose class distributions are close. When the number of patterns of some class increases (both intervals contain this class, the counts of the other classes are invariant, and the count A_ij of this class in one of the two intervals is invariant), the numerator and the denominator of the expanded formula increase at the same time. In this case, the χ² value may first increase and then turn to decrease. In other words, when C_j is considerably bigger than A_ij, the χ² value will increase (the degree of freedom does not change) and the probability of merging the intervals will be reduced. In fact, when the number of patterns of some class increases, this class has stronger independence of the intervals and holds the leading class status. Therefore, compared with the case where the count is not increased, this pair should have the same opportunity of competition and should even be merged first.

The situation when the χ² value is 0 is as follows.

There exist cases where the class distributions of two adjacent intervals are completely uniform, namely, χ² = 0. Thus, d = (χ²_α − χ²)/χ²_α = 1 is relatively very big, and the two intervals are possibly merged first. But in fact, it may be unreasonable to merge them first. For example (see Table 1), a, b, and c are condition attributes and d is the decision attribute. Observing attribute a: the same values are in the identical interval, the numbers of samples of the two intervals are the same, and the classification in a is completely uniform, namely, χ²_a = 0; thus the difference value of a is relatively very big. Even though the degree of freedom of b is bigger than that of a, because the difference of the degrees of freedom between a and b is very small, it is possible that the difference value of a is bigger than the difference value of b. From the computation with Table 1, we obtain the χ² value and the threshold χ²_α of attribute b, with χ²_b > 0; we can see that χ²_b < χ²_α and get d_b < 1. Regarding attribute a in Table 1, χ²_α = 0.45 and χ²_a = 0; then d_a = 1. We can see that d_a > d_b. Thus, the two intervals of attribute a in Table 1 will be merged first, and then samples 3, 4 and samples 1, 5 could conflict on the decision attribute, but this is not the case for b. So, when the χ² value is equal to 0, using the difference value d as the standard of interval merging is inaccurate.

3.2. Interval Similarity Function

Definition 1. Let S be a database, or an information table, and let X and Y be two arrays; then their similarity degree sim(X, Y) is defined as a mapping to the interval [0, 1].
A good similarity measure should have the following characteristics:
(1) for all X, Y, 0 ≤ sim(X, Y) ≤ 1;
(2) for all X, Y, if X and Y are completely different, then sim(X, Y) = 0;
(3) for all X, Y, Z, compared with Z, if X is closer to Y, then sim(X, Y) > sim(X, Z).

Traditional similarity measure methods often directly adopt research results from statistics, such as the cosine distance, the overlap distance, the Euclidean distance, and the Manhattan distance. Based on the analysis of the drawbacks of the Chi2-related algorithms, we propose the following similarity function.

Definition 2. Given two intervals (objects), let x_i be the class label of the ith value in the first interval, and let y_j be the class label of the jth value in the second interval. Then, the difference between x_i and y_j is

diff(x_i, y_j) = 0 if x_i = y_j, and diff(x_i, y_j) = 1 otherwise,  (4)

where 1 ≤ i ≤ m and 1 ≤ j ≤ n, with m and n being the numbers of values in the first and second intervals, respectively.

Definition 3. The similarity function of two adjacent intervals I_1, I_2 is defined as

SIM(I_1, I_2) = (2/π) · λ · arctan( mn / Σ_{i=1}^{m} Σ_{j=1}^{n} diff(x_i, y_j) ),  (5)

In formula (5), λ is a condition parameter, given by formula (6), which switches according to whether the distance |m − n| between the sizes of the two adjacent intervals exceeds the benchmark (Σ_a L_a)/c within the tiny move scope (1 ± t). Here k is the number of classes of the two adjacent intervals, c is the number of condition attributes, "|·|" denotes absolute value, L_a is the number of cut points of attribute a before discretization, and t is the tiny move parameter (0 < t < 1).

Considering any two adjacent intervals I_1 and I_2, Σ_{i=1}^{m} Σ_{j=1}^{n} diff(x_i, y_j) can express the difference degree between them. However, because the numbers of values of different pairs of adjacent intervals differ, it is unreasonable to take this sum alone as the difference measure standard. In order to obtain a uniform standard of difference measure and a fair competition opportunity among the pairs of adjacent intervals, it is reasonable to take Σ_{i=1}^{m} Σ_{j=1}^{n} diff(x_i, y_j)/(mn) as the difference measure standard. In formula (5), when each of the two adjacent intervals contains only one value (m = n = 1), the similarity degree between them is obviously the biggest. In order to compare the similarity degrees of various interval pairs in a uniform situation, we apply the arc tangent function for normalization, mapping the similarity value into (0, 1]. The expression (Σ_a L_a)/c expresses the average normative value of the numbers of cut points before discretization, and we take it as the benchmark of the distance between the sizes of two intervals, allowing a tiny move within that scope.

The reason for selecting the parameter λ is as follows: when the distance |m − n| between the sizes of two adjacent intervals reaches a certain extent, the common part of the two intervals will be relatively small, but the average difference Σ diff(x_i, y_j)/(mn) will quite possibly be big. Thus, the similarity value would be relatively very small, and the intervals would not be easily merged. In fact, considering the relations of containing and being contained between two adjacent intervals, they still deserve a greater merging opportunity, so this would be unfair. Therefore, the parameter λ, as a condition parameter, can play a fair role: when the distance |m − n| between the sizes of two adjacent intervals reaches a certain extent, we select λ as the standard. In addition, considering that the sizes of intervals are related to the number of initial attribute values (the number of cut points), we take (Σ_a L_a)/c as the benchmark of the distance between the sizes of two adjacent intervals, carrying on the tiny move in the scope (Σ_a L_a)/c · (1 ± t). In brief, the interval similarity definition not only inherits the logical aspects of the χ² statistic but also resolves the problems of the Chi2-related algorithms, realizing equality.
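Because the exact closed forms of formulas (5) and (6) involve the condition parameter λ and the tiny move parameter t, the following Python sketch implements only the parts fixed by the text above: the pairwise difference of Definition 2, the averaged difference Σ diff/(mn), and the arc tangent normalization into (0, 1]. The concrete combination shown is an illustrative assumption rather than the paper's exact formula, and λ is omitted.

import math

def diff(x, y):
    # pairwise class-label difference of Definition 2: 0 if equal, else 1
    return 0 if x == y else 1

def interval_similarity(labels1, labels2):
    m, n = len(labels1), len(labels2)
    total = sum(diff(x, y) for x in labels1 for y in labels2)
    avg = total / (m * n)            # uniform difference-measure standard
    if avg == 0:
        return 1.0                   # identical class distributions
    # the arc tangent maps the reciprocal average difference into (0, 1]
    return (2.0 / math.pi) * math.atan(1.0 / avg)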

4. Discretization Algorithm for Real Value Attributes Based on Interval Similarity

In this section we propose a new discretization algorithm for real value attributes based on interval similarity (the algorithm is called SIM for short). The new algorithm defines an interval similarity function, which is regarded as a new merging standard in the process of discretization. In the algorithm we adopt two operations.
(1) When formula (5) yields several maximal similarity values among the pairs of adjacent intervals, we merge the pair of adjacent intervals with the smallest number of classes.
(2) When there are several maximal similarity values and the numbers of classes of the tied pairs of adjacent intervals are the same, we merge the pair of adjacent intervals with the smallest number of samples (namely, m + n is the smallest).

The two operations can reduce the influence of the merging degree on other intervals or attributes, so that the inconsistency rate of the system does not increase prematurely. The algorithm SIM is shown in Algorithm 2; a small helper sketch following Algorithm 2 summarizes the tie-breaking rules.

Step1: Compute inconsistency rate δ of the information system;
Step2: Sort data in ascending order for each attribute and calculate the similarity
    value SIM of each pair of adjacent intervals according to (5) and (6);
Step3: Merge
While (mergeable cut point)
{
 Search cut point that has the maximal similarity value, then merge it;
 If (several maximal values)
   {
   Merge the two adjacent intervals with the smallest number of classes;
   If (δ increases)
   {
    Withdraw merging;
    Exit procedure;
   }
   Else {break; goto Step2;}
   }
 If (several maximal values and the same number of classes among the
    tied pairs of adjacent intervals)
   {
   Merge the two adjacent intervals with the smallest number of samples
   of the adjacent intervals;
   If (δ increases)
   {
    Withdraw merging;
    Exit procedure;
   }
   Else {break; goto Step2;}
   }
}
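The tie-breaking operations (1) and (2) above can be summarized by the following small Python helper (a hypothetical function written for illustration, not part of the original pseudocode):

def select_cut_to_merge(pairs):
    # pairs: list of dicts with keys 'sim' (similarity value), 'n_classes'
    # (number of classes of the two adjacent intervals), 'n_samples' (m + n),
    # and 'cut' (the cut point between the two intervals)
    best_sim = max(p['sim'] for p in pairs)
    candidates = [p for p in pairs if p['sim'] == best_sim]
    # operation (1): fewest classes first; operation (2): fewest samples breaks ties
    winner = min(candidates, key=lambda p: (p['n_classes'], p['n_samples']))
    return winner['cut']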

5. The Experimental Results and Analysis

We adopt datasets from the UCI machine learning repository (see Table 2). The UCI machine learning datasets are commonly used in data mining experiments.

Nine datasets were discretized by the algorithm proposed in this paper (SIM), the EXT (extended Chi2) algorithm, and the Boolean algorithm, respectively. We ran C4.5 on the discretized data. Eighty percent of the examples, chosen randomly, form the training sets; the rest are the testing sets. The average predictive accuracy, the average number of nodes of the decision tree, and the average number of extracted rules are computed and compared for the different algorithms (see Table 3). Meanwhile, the discretized data is classified by the multiclass classification methods [23–26] of SVM. Eighty percent of the examples are randomly chosen as training sets; the rest are testing sets. The model type is C-SVC, and the kernel function is the RBF function. A search range for the penalty parameter C is used, and the kernel function parameter is 0.5. The predictive accuracy (acc) and the number of support vectors (svs) are computed and compared for the above three algorithms (see Table 4).
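The SVM part of this protocol can be sketched as follows in Python, using scikit-learn's SVC as a stand-in for the C-SVC/RBF setup; the 80/20 split, the RBF kernel, and the kernel parameter 0.5 come from the text, while the value of C is a placeholder, and SVC's built-in one-vs-one multiclass strategy corresponds to the 1-V-1 method referred to below.

from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def evaluate_discretized(X, y, C=1.0):
    # 80 percent of the examples are randomly chosen as the training set
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.8)
    clf = SVC(C=C, kernel='rbf', gamma=0.5)   # C-SVC with RBF kernel
    clf.fit(X_tr, y_tr)
    acc = clf.score(X_te, y_te)               # predictive accuracy (acc)
    svs = int(clf.n_support_.sum())           # number of support vectors (svs)
    return acc, svs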

From Table 3, we can see that, compared with the extended Chi2 algorithm and the Boolean discretization algorithm, the average predictive accuracy of the decision trees built on data discretized by the SIM algorithm rises on the nine datasets, except for the Bupa and Pima datasets. In particular, the improvement on the Glass, Wine, and Machine datasets is very large. The average number of nodes of the decision tree and the average number of extracted rules of the SIM algorithm decrease for most of the datasets. These results show the superiority of the proposed interval-similarity-based discretization algorithm.

From Table 4, we can see that, under the 1-V-1 classification method, the predictive accuracy with the SIM algorithm is higher than that of the extended Chi2 algorithm and the Boolean discretization algorithm, except for the Breast and Pima datasets.

Figures 2 and 3 visually depict the predictive accuracy of the decision tree and the SVM under the different discretization algorithms.

From the experiments we can see that the interval-similarity-based discretization algorithm proposed in this paper achieves very good discretization effects.

We now give a further analysis of the algorithms.

In regard to datasets with a greater number of classes, it is quite possible that the differences between the numbers of classes of the pairs of adjacent intervals are very big. Thus, if the extended Chi2 discretization algorithm were used, it would be inaccurate and unreasonable to first merge the two adjacent intervals that have the maximal difference value. The method proposed in this paper avoids this situation, which is also the main reason why the recognition effect on the Glass and Machine datasets is good.

In regard to datasets with a greater number of real value attributes, although the differences between the numbers of classes of the pairs of adjacent intervals are small, many unreasonable situations may appear in the discretization process of the extended Chi2 algorithm; for example, the merging standard is not precise in computation. In such situations, the method proposed in this paper shows its superiority very well (e.g., the Ionosphere and Wine datasets). Comparing the two methods, the difference in the recognition and forecast effects on the Auto and Iris datasets (each of them has three classes) is small, but the method proposed in this paper is better.

In regard to the Bupa and Pima datasets (each of them has two classes), the class distribution difference of each pair of adjacent intervals is not big, and unreasonable factors are improbable; in this case, the merging standard of the extended Chi2 algorithm is possibly more accurate in computation. However, as the Breast dataset has fewer attributes and more samples, the intervals are massive in the process of discretization, and the inconsistency rate increases easily. Here, the extended Chi2 algorithm produces a lower discretization effect, while the SIM algorithm proposed in this paper obtains better discretization results by means of the choice of the two important parameters.

However, from the experiments we can also see that the SIM algorithm does not outperform the extended Chi2 algorithm and the Boolean discretization algorithm on all datasets. The common characteristic of the datasets on which the SIM algorithm does not perform well is that they have fewer classes; that is, such datasets do not carry enough class information.

6. Conclusions and Next Step of Work

The study of discretization algorithms for real value attributes plays an important role in many aspects of computer applications. The series of algorithms related to the Chi2 algorithm, based on probability and statistics theory, offers a new way of thinking about the discretization of real value attributes. Based on the study of these algorithms, a new algorithm using an interval similarity technique is proposed. The new algorithm defines an interval similarity function, which is regarded as a new merging standard in the process of discretization. At the same time, two important parameters, a condition parameter and a tiny move parameter, which embody the equilibrium in the process of discretization and the discrepancy of two adjacent intervals, are given in the function. The new algorithm gives a fair standard and can discretize real value attributes exactly and reasonably; not only can it inherit the logical aspects of the χ² statistic, but it can also avoid the problems of the Chi2-related algorithms. The theoretical analysis and the experimental results show that the presented algorithm is effective.

Acknowledgments

This work is partly supported by the National Natural Science Foundation of China (Grant nos. 61105059, 61175055, and 61173100), International Cooperation and Exchange of the National Natural Science Foundation of China (Grant no. 61210306079), China Postdoctoral Science Foundation (Grant no. 2012M510815), Liaoning Excellent Talents in University (Grant no. LJQ2011116), Sichuan Key Technology Research and Development Program (Grant no. 2011FZ0051), Radio Administration Bureau of MIIT of China (Grant no. [2011] 146), China Institution of Communications (Grant no. [2011] 051), and Sichuan Key Laboratory of Intelligent Network Information Processing (Grant no. SGXZD1002-10).