
Mathematical Problems in Engineering

Volume 2012 (2012), Article ID 864652, 24 pages

http://dx.doi.org/10.1155/2012/864652

## Knowledge Reduction Based on Divide and Conquer Method in Rough Set Theory

Chongqing Key Laboratory of Computational Intelligence, Chongqing University of Posts and Telecommunications, Chongqing 400065, China

Received 27 June 2012; Revised 18 September 2012; Accepted 2 October 2012

Academic Editor: P. Liatsis

Copyright © 2012 Feng Hu and Guoyin Wang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The divide and conquer method is a typical granular computing method using multiple levels of abstraction and granulation. So far, although some achievements based on the divide and conquer method have been made in rough set theory, systematic methods for knowledge reduction based on the divide and conquer method are still absent. In this paper, knowledge reduction approaches based on the divide and conquer method, under the equivalence relation and under the tolerance relation, are presented, respectively. After that, a systematic approach, called the abstract process for knowledge reduction based on the divide and conquer method in rough set theory, is proposed. Based on the presented approach, two algorithms for knowledge reduction, including an algorithm for attribute reduction and an algorithm for attribute value reduction, are presented. Experimental evaluations are performed on UCI data sets and KDDCUP99 data sets. The experimental results illustrate that, compared with KNN, SVM, C4.5, Naive Bayes, and CART, the proposed approaches process large data sets efficiently with good recognition rates.

#### 1. Introduction

In the search for new paradigms of computing, there has recently been a surge of interest, under the name of granular computing [1–3], in computation using multiple levels of abstraction and granulation. To a large extent, the majority of existing studies involve rough sets, fuzzy sets, cluster analysis, and classical divide and conquer methods [4, 5], and aim at solving specific problems [6].

Rough set (RS) theory [7–10] is a valid mathematical theory for dealing with imprecise, uncertain, and vague information. Since it was proposed by Pawlak in 1982 [7], it has been applied successfully in fields such as machine learning, data mining, intelligent data analysis, and control algorithm acquisition. Knowledge reduction is one of the most important contributions of rough set theory to machine learning, pattern recognition, and data mining. Although the problem of finding a minimal reduction of a given information system was proven to be NP-hard [8], many promising heuristics have been developed. A variety of methods for knowledge reduction and their applications can be found in [2, 7–43]. Among these existing methods, one group focuses on the indiscernibility relation in a universe, which captures the equivalence of objects, while the other group considers the discernibility relation, which explores the differences of objects [42]. The indiscernibility relation can be employed to induce a partition of the universe and thereby to construct positive regions, whose objects can be classified without doubt into a certain class with respect to the selected attributes. Knowledge reduction algorithms based on positive regions have thus been proposed in [8, 10, 15, 16, 21, 28, 30]. For the discernibility relation, there are knowledge reduction algorithms based on the discernibility matrix and on information entropy. Reduction methods based on the discernibility matrix [34] have a high storage cost, with space complexity $O(|U|^2 \times |C|)$ for a large decision table with $|U|$ objects and $|C|$ conditional attributes. Thus, storing and deleting the element cells of a discernibility matrix is a time-consuming process. Many researchers have studied discernibility matrix construction and contributed a lot [10, 18, 28, 35, 38, 39, 43]. Knowledge reduction algorithms based on information entropy [20, 37, 42] have also been developed. Although so many algorithms exist, it is still valuable to study new, highly efficient algorithms.

The divide and conquer method is a simple granular computing method. When algorithms are designed with the divide and conquer method, the decision table can be divided into many subdecision tables recursively in the attribute space. That is to say, an original big data set can be divided into many small ones. If the small ones can be processed one by one, instead of processing the original big one as a whole, a lot of time can be saved. Thus, it may be an effective way to process large data sets. The divide and conquer method consists of three vital stages; a minimal sketch of the scheme follows the three stages below.

*Stage 1. *Divide the big original problem into many independent subproblems with the same structure.

*Stage 2. *Conquer the subproblems recursively.

*Stage 3. *Merge the solutions of the subproblems into the solution of the original problem.
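The three stages can be phrased as one generic recursive skeleton. The following is a minimal Python sketch of that scheme; the parameter names (`is_small`, `solve_directly`, `divide`, `merge`) are illustrative placeholders introduced here, not identifiers from the paper.

```python
# Minimal sketch of the three-stage divide and conquer scheme.
# All parameter names are illustrative, not taken from the paper.

def divide_and_conquer(problem, is_small, solve_directly, divide, merge):
    if is_small(problem):                  # small problems are solved directly
        return solve_directly(problem)
    subproblems = divide(problem)          # Stage 1: divide
    solutions = [divide_and_conquer(p, is_small, solve_directly, divide, merge)
                 for p in subproblems]     # Stage 2: conquer recursively
    return merge(solutions)                # Stage 3: merge

# Toy usage: summing a list by repeated halving.
total = divide_and_conquer(
    list(range(10)),
    is_small=lambda p: len(p) <= 1,
    solve_directly=lambda p: sum(p),
    divide=lambda p: [p[:len(p) // 2], p[len(p) // 2:]],
    merge=lambda sols: sum(sols))
print(total)  # 45
```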

So far, some good results for knowledge reduction based on the divide and conquer method have been achieved, such as the computation of the attribute core and the computation of attribute reduction under a given attribute order [12, 13]. Besides, decision tree-based methods [26, 27, 44] have been studied and are very popular. In fact, the construction of a decision tree is a special case of the divide and conquer method, because the tree is generated top-down recursively. In decision tree-based methods, a tree is first constructed by decomposition; more is spent in this first stage, which makes the following two stages convenient and cheap.

However, a systematic method for knowledge reduction based on the divide and conquer method is still absent, especially regarding “how to keep invariance between the solution of the original problem and the solutions of the subproblems.” This makes it difficult to design highly efficient algorithms for knowledge reduction based on the divide and conquer method. Therefore, it is urgent to discuss knowledge reduction methods based on the divide and conquer method systematically and comprehensively.

The contributions of this work are as follows. (1) Some principles for “keeping invariance between the solution of the original problem and the solutions of the subproblems” are concluded. Then, the abstract process for knowledge reduction based on the divide and conquer method in rough set theory is presented, which is helpful for designing highly efficient algorithms based on the divide and conquer method. (2) Fast approaches for knowledge reduction based on the divide and conquer method, including an algorithm for attribute reduction and an algorithm for attribute value reduction, are proposed. Experimental evaluations show that the presented methods are efficient.

The remainder of this paper is organized as follows. The basic theory and methods dealing with the application of rough set theory in data mining are presented in Section 2. Section 3 introduces the abstract process for knowledge reduction based on the divide and conquer method in rough set theory. A quick algorithm for attribute reduction based on the divide and conquer method is presented in Section 4. In Section 5, a fast algorithm for attribute value reduction using the divide and conquer method is proposed. Experimental evaluations showing the performance of the developed methods are discussed in Section 6. The paper ends with conclusions in Section 7.

#### 2. Preliminaries

Rough set theory was introduced by Pawlak as a tool for concept approximation under uncertainty. Basically, the idea is to approximate a concept by three description sets, namely, the lower approximation, the upper approximation, and the boundary region. The approximation process begins by partitioning a given set of objects into equivalence classes called blocks, where the objects in each block are indiscernible from each other relative to their attribute values. The approximation sets and the boundary region are derived from the blocks of a partition of the available objects. The boundary region is the difference between the upper approximation and the lower approximation and provides a basis for measuring the “roughness” of an approximation. Central to the philosophy of the rough set approach to concept approximation is the minimization of the boundary region [28].

For the convenience of description, some basic notions of decision tables are introduced first.

*Definition 2.1 (decision table [36]). *A decision table is defined as $S = (U, A, V, f)$, where $U$ is a non-empty finite set of objects, called the universe; $A$ is a non-empty finite set of attributes, $A = C \cup D$, where $C$ is the set of conditional attributes and $D$ is the set of decision attributes, $C \cap D = \emptyset$. For any attribute $a \in A$, $V_a$ denotes the domain of attribute $a$, and $V = \bigcup_{a \in A} V_a$. Each attribute determines a mapping function $f : U \times A \to V$.

*Definition 2.2 (indiscernibility relation [36]). *Given a decision table $S = (U, C \cup D, V, f)$, each subset of attributes $B \subseteq A$ determines an indiscernibility relation as follows: $IND(B) = \{(x, y) \in U \times U \mid \forall a \in B,\ f(x, a) = f(y, a)\}$.

*Definition 2.3 (lower approximation and upper approximation [36]). *Given a decision table $S = (U, C \cup D, V, f)$, for any subset $X \subseteq U$ and indiscernibility relation $IND(B)$, the lower approximation and upper approximation of $X$ are defined as $\underline{B}X = \{x \in U \mid [x]_B \subseteq X\}$ and $\overline{B}X = \{x \in U \mid [x]_B \cap X \neq \emptyset\}$, where $[x]_B$ denotes the equivalence class of $x$ under $IND(B)$.

*Definition 2.4 (positive region [36]). *Given a decision table $S = (U, C \cup D, V, f)$, $B \subseteq C$, and $U/IND(D) = \{X_1, X_2, \ldots, X_r\}$, the positive region of $D$ relative to $B$ is defined as $POS_B(D) = \bigcup_{i=1}^{r} \underline{B}X_i$.

*Definition 2.5 (relative core [36]). *Given a decision table $S = (U, C \cup D, V, f)$, $B \subseteq C$, and $a \in B$, $a$ is unnecessary in $B$ relative to $D$ if and only if $POS_{B - \{a\}}(D) = POS_B(D)$; otherwise, $a$ is necessary in $B$ relative to $D$. The core of $B$ relative to $D$ is defined as $CORE_D(B) = \{a \in B \mid a$ is necessary in $B$ relative to $D\}$.

*Definition 2.6 (see [36]). *Given a decision table $S = (U, C \cup D, V, f)$ and $B \subseteq C$, if every $a \in B$ is necessary in $B$ relative to $D$, $B$ is called independent relative to $D$.

*Definition 2.7 (relative reduction [36]). *Given a decision table $S = (U, C \cup D, V, f)$ and $B \subseteq C$, if $POS_B(D) = POS_C(D)$ and $B$ is independent relative to $D$, $B$ is called a reduction of $C$ relative to $D$.

*Definition 2.8 (see [18]). *Given a decision table $S = (U, C \cup D, V, f)$, the element of the discernibility matrix $M = (m_{ij})$ can be defined as follows, where $x_i, x_j \in U$ and $d \in D$:
$$m_{ij} = \begin{cases} \{a \in C \mid f(x_i, a) \neq f(x_j, a)\}, & f(x_i, d) \neq f(x_j, d), \\ \emptyset, & \text{otherwise.} \end{cases}$$
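To make Definitions 2.2, 2.4, and 2.8 concrete, the following Python sketch computes equivalence classes, the positive region, and the discernibility matrix cells on a small decision table; the table and all identifiers are made up here for illustration only.

```python
from collections import defaultdict
from itertools import combinations

# A made-up decision table: a1, a2 are condition attributes, d the decision.
U = [
    {"a1": 0, "a2": 0, "d": 0},  # x1
    {"a1": 0, "a2": 1, "d": 1},  # x2
    {"a1": 1, "a2": 1, "d": 1},  # x3
    {"a1": 1, "a2": 1, "d": 0},  # x4 (conflicts with x3)
]
C = ["a1", "a2"]

def partition(objs, attrs):
    """Equivalence classes of IND(attrs) over the given objects (Def. 2.2)."""
    blocks = defaultdict(list)
    for i in objs:
        blocks[tuple(U[i][a] for a in attrs)].append(i)
    return list(blocks.values())

def positive_region(objs, attrs):
    """Objects whose equivalence class carries a single decision (Def. 2.4)."""
    pos = []
    for block in partition(objs, attrs):
        if len({U[i]["d"] for i in block}) == 1:
            pos.extend(block)
    return sorted(pos)

def discernibility_cells(objs, attrs):
    """Cells m_ij for object pairs with different decisions (Def. 2.8)."""
    return {(i, j): {a for a in attrs if U[i][a] != U[j][a]}
            for i, j in combinations(objs, 2)
            if U[i]["d"] != U[j]["d"]}

objs = list(range(len(U)))
print(positive_region(objs, C))       # [0, 1]: x3 and x4 conflict
print(discernibility_cells(objs, C))  # e.g. (0, 1) -> {'a2'}, (2, 3) -> set()
```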

#### 3. The Abstract Process for Knowledge Reduction Based on Divide and Conquer Method in Rough Set Theory

##### 3.1. The Knowledge Reduction Based on Divide and Conquer Method under Equivalence Relation

In research on rough set theory, the divide and conquer method is an effective way to design highly efficient algorithms. It can be used to compute the equivalence classes, the positive region, and the attribute core of a decision table (see Propositions 3.1, 3.2, and 3.3), and even to perform some operations on the discernibility matrix (see Propositions 3.4 and 3.5). In this section, the divide and conquer method under the equivalence relation in rough set theory will be discussed.

Proposition 3.1 (see [13]). *Given a decision table $S = (U, C \cup D, V, f)$, for all $a \in C$, divide $S$ into $k$ subdecision tables $S_1, S_2, \ldots, S_k$ by attribute $a$, where $k = |V_a|$ and $S_i = (U_i, C \cup D, V, f)$, which satisfies $\bigcup_{i=1}^{k} U_i = U$ and $\forall x, y \in U_i: f(x, a) = f(y, a)$. Let us denote by $POS_C(D)$ the positive region of $S$ and the one of $S_i$ by $POS_C^i(D)$, respectively. Then, $POS_C(D) = \bigcup_{i=1}^{k} POS_C^i(D)$.*

Proposition 3.1 presents the approach for computing equivalence classes and the positive region using the divide and conquer method. Compared with a decision tree-based method (without pruning), the approach allows us to generate “clear” leaves (with the same decision) for objects in the positive region and “unclear” leaves, whose objects have different decisions and correspond to the boundary region. It may be an effective way to prevent overfitting, because the “conquer” step can play the role of tree pruning. Furthermore, the approach needs less space because the construction of the tree is not necessary.

Proposition 3.2 (see [13]). *Given a decision table $S = (U, C \cup D, V, f)$, for all $a \in C$, divide $S$ into $k$ sub-decision tables $S_1, S_2, \ldots, S_k$ by attribute $a$, where $S_i = (U_i, C \cup D, V, f)$, which satisfies $\bigcup_{i=1}^{k} U_i = U$ and $\forall x, y \in U_i: f(x, a) = f(y, a)$. Let us denote by $CORE_D(C)$ the attribute core of $S$ and the one of $S_i$ by $CORE_D^i(C)$. Then, $CORE_D(C) = \bigcup_{i=1}^{k} CORE_D^i(C)$.*

Proposition 3.3 (see [13]). *Given a decision table $S = (U, C \cup D, V, f)$, for all $a \in C$, divide $S$ into $k$ sub-decision tables $S_1, S_2, \ldots, S_k$ by attribute $a$, where $S_i = (U_i, C \cup D, V, f)$, which satisfies $\bigcup_{i=1}^{k} U_i = U$ and $\forall x, y \in U_i: f(x, a) = f(y, a)$. Let us denote by $RED_D(C)$ an attribute reduction of $S$ and the one of $S_i$ by $RED_D^i(C)$. Let $B = \bigcup_{i=1}^{k} RED_D^i(C)$, $B' = B \cup \{a\}$. Then, $POS_{B'}(D) = POS_C(D)$.*

Propositions 3.2 and 3.3 present approaches for determining the attribute core and the attribute reduction based on the divide and conquer method. Compared with computing on the original decision table, they may be more efficient and can process bigger data sets, since the big data set has been divided into many small ones.

Obviously, Propositions 3.1, 3.2, and 3.3 hold.
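As a small worked illustration of Proposition 3.1 (constructed here on the four-object toy table of the listing above, not taken from the paper): splitting on $a_1$ yields $U_1 = \{x_1, x_2\}$ (where $a_1 = 0$) and $U_2 = \{x_3, x_4\}$ (where $a_1 = 1$), and

```latex
\[
\mathrm{POS}_C^1(D) = \{x_1, x_2\}, \qquad
\mathrm{POS}_C^2(D) = \emptyset
\quad (x_3 \text{ and } x_4 \text{ agree on } C \text{ but differ on } d),
\]
\[
\mathrm{POS}_C(D) = \mathrm{POS}_C^1(D) \cup \mathrm{POS}_C^2(D) = \{x_1, x_2\}.
\]
```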

The discernibility matrix by Skowron is a useful tool for designing algorithms in rough set theory. However, due to the high complexity of algorithms based on explicitly computing the discernibility matrix, their efficiency needs to be improved. Some useful methods have been proposed in the literature (see [26–31] by Nguyen et al., and the decomposition methods implemented in RSES). Our methods differ from the existing ones as follows.
(1) “How to keep invariance between the solution of the original problem and the solutions of the subproblems” is a key problem. We conclude some principles for computing the positive region, attribute core, attribute reduction, and value reduction (see Propositions 3.1, 3.2, 3.3, 3.4, 3.5, 3.11, and 3.12), which were not concluded before.
(2) Although the decision tree-based methods and our approaches both belong to the divide and conquer method, our approaches cost more on “conquer” and “merge” and less on “divide,” compared with the decision tree-based methods. Furthermore, our approaches need not construct a tree, which may save space.
(3) The existing heuristic methods in [26–28] can improve efficiency by measuring the discernibility degree of different objects quickly. In our approaches, the element cells of the discernibility matrix can be deleted by dividing the decision table quickly, without storing the matrix. Thus, it may be a quick way to operate on the discernibility matrix within small space (see Propositions 3.4 and 3.5).

Given a decision table $S = (U, C \cup D, V, f)$ and its discernibility matrix $M$ (Definition 2.8), for all $a \in C$, let us denote by $M^a$ the subset of element cells of $M$ that contain the attribute $a$. $M^a$ can be labeled as $M^a = \{m \in M \mid a \in m\}$, where (1) $M^a \subseteq M$; (2) for all $m \in M$, if $a \in m$, then $m \in M^a$.

Proposition 3.4. *Given a decision table $S = (U, C \cup D, V, f)$, for all $a \in C$, divide the decision table into $k$ sub-decision tables $S_1, S_2, \ldots, S_k$ by attribute $a$, where $S_i = (U_i, C \cup D, V, f)$. Let us denote by $M$ the discernibility matrix of $S$ and the discernibility matrix of $S_i$ by $M_i$, respectively. Then, $M = M^a \cup \bigcup_{i=1}^{k} M_i$.*

(Note: if $i \neq j$, then $U_i \cap U_j = \emptyset$.)

*Proof. *First, prove $M \subseteq M^a \cup \bigcup_{i=1}^{k} M_i$.

For all $m_{xy} \in M$, we have $x, y \in U$ (Definition 2.8). After the partition of $U$, suppose $x$ and $y$ are divided into the sub-decision tables $S_i$ and $S_j$, respectively.

If $i = j$, then $m_{xy} \in M_i$, that is, $m_{xy} \in \bigcup_{i=1}^{k} M_i$.

If $i \neq j$, then $f(x, a) \neq f(y, a)$, so $a \in m_{xy}$, that is, $m_{xy} \in M^a$.

Thus, $m_{xy} \in M^a \cup \bigcup_{i=1}^{k} M_i$. That is, $M \subseteq M^a \cup \bigcup_{i=1}^{k} M_i$.

Similarly, we can prove $M^a \cup \bigcup_{i=1}^{k} M_i \subseteq M$.

Therefore, Proposition 3.4 holds.

Proposition 3.5. *Given a decision table $S = (U, C \cup D, V, f)$, for all $a \in C$, divide the decision table into $k$ sub-decision tables $S_1, S_2, \ldots, S_k$ by attribute $a$, where $S_i = (U_i, C \cup D, V, f)$. Let us denote by $M$ the discernibility matrix of $S$. Then, from the viewpoint of operating on the discernibility matrix, dividing the decision table by attribute $a$ is equivalent to deleting all the element cells of $M^a$ from $M$ one by one.*

According to Proposition 3.4, it is easy to see that Proposition 3.5 holds.

Propositions 3.4 and 3.5 present an approach for deleting the element cells of the discernibility matrix. With this approach, the element cells can be deleted quickly without constructing or storing the discernibility matrix. It may be an effective way to operate on the discernibility matrix quickly within small space. Thus, Propositions 3.4 and 3.5 can be used to design efficient algorithms that avoid explicitly computing the discernibility matrix; the sketch below checks them on the toy table.
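The claim can be checked mechanically. The following sketch (reusing `U`, `C`, `partition`, and `discernibility_cells` from the earlier listing; the helper `sub_table_cells` is introduced here for illustration) verifies on the toy table that the cells of the sub-tables are exactly the cells of $M$ that do not contain the splitting attribute.

```python
# Sketch of Propositions 3.4 and 3.5: dividing the table by attribute a
# deletes exactly the cells of M^a; the sub-decision tables supply the rest.

def sub_table_cells(objs, attrs, a):
    """Union of the discernibility cells of the sub-tables induced by a."""
    cells = {}
    for block in partition(objs, [a]):      # objects sharing a value of a
        cells.update(discernibility_cells(block, attrs))
    return cells

objs = list(range(len(U)))
M = discernibility_cells(objs, C)
M_a = {pair: cell for pair, cell in M.items() if "a1" in cell}  # M^a
M_sub = sub_table_cells(objs, C, "a1")                          # M_1 ∪ ... ∪ M_k

assert M == {**M_a, **M_sub}   # M = M^a ∪ (M_1 ∪ ... ∪ M_k)
print(sorted(M_a), sorted(M_sub))
```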

##### 3.2. The Knowledge Reduction Based on Divide and Conquer Method under Tolerance Relation

In the course of attribute value reduction, a tolerance relation is often adopted, because some attribute values on condition attributes are deleted. Thus, a tolerance relation may be needed in attribute value reduction. A method was introduced by Kryszkiewicz and Rybinski [17] to process incomplete information systems, where “$*$” is used to represent missing values on condition attributes. Here, “$*$” can also represent the deleted values on condition attributes. According to the tolerance relation by Kryszkiewicz, the divide and conquer method under the tolerance relation will be discussed.

*Definition 3.6 (see [17]). *Given an incomplete decision table $S = (U, C \cup D, V, f)$, a tolerance relation $T$ is defined as $T(B) = \{(x, y) \in U \times U \mid \forall a \in B,\ f(x, a) = f(y, a) \text{ or } f(x, a) = * \text{ or } f(y, a) = *\}$, where $B \subseteq C$.

*Definition 3.7 (see [17]). *The tolerance class of an object $x$ relative to an attribute set $B$ is defined as $I_B(x) = \{y \in U \mid (x, y) \in T(B)\}$. In the tolerance relation-based extension of rough set theory, the lower approximation and upper approximation of an object set $X$ relative to an attribute set $B$ are defined as $\underline{B}X = \{x \in U \mid I_B(x) \subseteq X\}$ and $\overline{B}X = \{x \in U \mid I_B(x) \cap X \neq \emptyset\}$.

*Definition 3.8 (the covering of the universe under tolerance relation). *Given a decision table $S = (U, C \cup D, V, f)$ and condition attribute set $C$, according to the tolerance relation by Kryszkiewicz, the covering of the universe $U$ of $S$ can be defined as $U/T(C) = \{I_C(x) \mid x \in U\}$, where $\bigcup_{x \in U} I_C(x) = U$, and the tolerance classes $I_C(x)$ may overlap.
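The following sketch illustrates Definitions 3.6–3.8 on a made-up incomplete table; all identifiers are constructed here for illustration, with “*” standing for a missing (or deleted) condition attribute value.

```python
STAR = "*"  # a missing or deleted condition attribute value

# Made-up incomplete table with two condition attributes b1 and b2.
ROWS = [
    {"b1": 0, "b2": 0},
    {"b1": 0, "b2": STAR},
    {"b1": 1, "b2": 1},
]
ATTRS = ["b1", "b2"]

def tolerant(x, y, attrs):
    """Tolerance relation of Definition 3.6: values agree, or one is '*'."""
    return all(x[a] == y[a] or STAR in (x[a], y[a]) for a in attrs)

def tolerance_class(i, attrs):
    """Tolerance class I_B(x_i) of Definition 3.7."""
    return [j for j, row in enumerate(ROWS) if tolerant(ROWS[i], row, attrs)]

# The tolerance classes form a covering, not a partition (Definition 3.8):
print([tolerance_class(i, ATTRS) for i in range(len(ROWS))])
# -> [[0, 1], [0, 1], [2]]   (classes may overlap)
```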

*Definition 3.9 (certain decision rule). *Given a decision table $S = (U, C \cup D, V, f)$, for all $x \in POS_C(D)$, the object $x$ can result in a certain decision rule $r_x: \bigwedge_{a \in C}(a, f(x, a)) \to (d, f(x, d))$, where $d \in D$. $C$ and $D$ are called the condition attribute set and the decision attribute set of $r_x$, respectively.

*Definition 3.10 (see [36]). *Given a decision table $S = (U, C \cup D, V, f)$, for an arbitrary certain decision rule $r_x$, there is $\forall y \in I_C(x): f(y, d) = f(x, d)$. For all $a \in C$, if $\forall y \in I_{C - \{a\}}(x): f(y, d) = f(x, d)$ does not hold, then $a$ is necessary in $r_x$; otherwise, $a$ is not necessary in $r_x$.

Proposition 3.11. *Given a decision table $S = (U, C \cup D, V, f)$, and given a decomposing order $a_1 \prec a_2 \prec \cdots \prec a_{|C|}$ over $C$, according to the order and the tolerance relation, $S$ can be divided into $k$ sub-decision tables $S_1, S_2, \ldots, S_k$, where $S_i = (U_i, C \cup D, V, f)$. Assume $S$ and $S_i$ are processed in the same way. For each sub-decision table $S_i$ and for all $x \in U_i$, let us denote by $r_x$ a decision rule relative to the object $x$ in $S$ and a decision rule relative to $x$ in $S_i$ by $r_x^i$, respectively. Then: (1) in $S_i$, if $a \in C$ is necessary in $r_x^i$, then $a$ is necessary in $r_x$; (2) in $S_i$, if $a \in C$ is not necessary in $r_x^i$, then $a$ is not necessary in $r_x$; (3) $r_x = r_x^i$.*

Proposition 3.12. *Given a decision table $S = (U, C \cup D, V, f)$, given a decomposing order $a_1 \prec a_2 \prec \cdots \prec a_{|C|}$ over $C$, and according to the order, $S$ can be divided into $k$ sub-decision tables $S_1, S_2, \ldots, S_k$, where $S_i = (U_i, C \cup D, V, f)$. Assume $S$ and $S_i$ are processed in the same way. For each sub-decision table $S_i$, let us denote by $RULE_i$ the certain decision rule set of $S_i$ and the one of $S$ by $RULE$, respectively. Then, $RULE = \bigcup_{i=1}^{k} RULE_i$.*

Propositions 3.11 and 3.12 present an approach for value reduction based on the divide and conquer method. It keeps invariance between the solution of the original decision table and the solutions of the sub-decision tables. With this approach, decision rules can be generated from the sub-decision tables rather than from the original decision table. It may be a feasible way to process big data sets.

##### 3.3. The Abstract Process for Knowledge Reduction Based on Divide and Conquer Method in Rough Set Theory

According to the divide and conquer method under the equivalence relation and the tolerance relation, the abstract process for knowledge reduction in rough set theory based on the divide and conquer method, APFKRDAC$(S, P)$, will be discussed in this section.

*Algorithm 3.13 (APFKRDAC). *Input: The problem $P$ on $S$.

Output: The solution $R$ of the problem $P$.

*Step 1 (determine a similarity relation of different objects). * Determine a similarity relation between different objects, such as an equivalence relation or a tolerance relation. Generally, reflexivity and symmetry may be necessary.

*Step 2 (determine the decomposing order). * Determine the order $a_1 \prec a_2 \prec \cdots \prec a_m$ for decomposing the universe of the decision table.

*Step 3 (determine the decomposing strategy). * Design a judgment criterion $F_{judge}$ for deciding whether the universe can be decomposed. Design a decomposition function $F_{divide}$, which can be used to decompose the universe recursively. Design a Boolean function $F_{small}$, which can be used to judge whether the size of the problem is small enough to be processed easily. Design a computation function $F_{direct}$, which can be used to process small problems directly. Design a computation function $F_{merge}$, which can be used to merge the solutions of subproblems.

*Step 4 (process the problem based on divide and conquer method). * (4.1) IF $F_{small}(S, P)$ THEN $R = F_{direct}(S, P)$, goto Step 5. (4.2) (Divide) According to the decomposing order, divide $S$ into $k$ sub-decision tables $S_1, S_2, \ldots, S_k$ by $F_{divide}$. (4.3) (Conquer sub-problems recursively) FOR $i$ = 1 TO $k$ DO $R_i$ = APFKRDAC$(S_i, P_i)$. END FOR. Where, $S_i = (U_i, C \cup D, V, f)$, $1 \le i \le k$, and $P_i$ is the sub-problem of $P$ on $S_i$. (4.4) (Merge the solutions of sub-problems) $R = F_{merge}(R_1, R_2, \ldots, R_k)$.

*Step 5 (optimize the solution). *If necessary, optimize the solution $R$.

*Step 6 (return the solution). *RETURN $R$.

Now, let us give an example, computing the positive region of a decision table, to explain Algorithm 3.13 (see Algorithm 3.14, the algorithm for computing the positive region based on the divide and conquer method).

*Algorithm 3.14 (CPRDAC). * Input: The problem $P$ on $S = (U, C \cup D, V, f)$: compute the positive region. Output: The positive region $POS_C(D)$ of $S$.

*Step 1 (Determine the similarity relation). *The equivalence relation.

*Step 2 (Determine the decomposing order). *$a_1 \prec a_2 \prec \cdots \prec a_{|C|}$, where $a_i \in C$.

*Step 3 (Determine the decomposing strategy). * (3.1) Design a judgment criterion $F_{judge}$: on attribute $a$, for all $x, y \in U'$, IF $f(x, a) = f(y, a)$ THEN $F_{judge}(x, y)$ = true; ELSE $F_{judge}(x, y)$ = false; END IF (3.2) Design a decomposition function $F_{divide}$: according to $F_{judge}$, $U'$ can be divided into sub-decision tables on the attributes $a_1, a_2, \ldots, a_{|C|}$ recursively. (3.3) Design a Boolean function $F_{small}$: let $a_i$ be the attribute on which the universe $U'$ is being decomposed. IF $|U'| = 1$ or the objects of $U'$ have the same decision value or $i > |C|$ THEN $F_{small}(U')$ = true. END IF (3.4) Design a computation function $F_{direct}$: for an arbitrary sub-decision table and its universe $U'$, IF the objects of $U'$ have the same decision value THEN $R' = U'$; ELSE $R' = \emptyset$; END IF (3.5) Design a computation function $F_{merge}$: $R = R_1 \cup R_2 \cup \cdots \cup R_k$;

*Step 4 (Process the problem based on the divide and conquer method). * (4.1) IF $F_{small}(U)$ THEN $R = F_{direct}(S, P)$; goto Step 5. (4.2) (Divide) According to the order $a_1 \prec a_2 \prec \cdots \prec a_{|C|}$, $S$ can be divided into $k$ sub-decision tables $S_1, S_2, \ldots, S_k$ by using $F_{divide}$. (4.3) (Conquer sub-problems recursively) FOR $i$ = 1 TO $k$ DO $R_i$ = CPRDAC$(S_i, P_i)$. END FOR Where, $S_i = (U_i, C \cup D, V, f)$, and $P_i$ denotes computing the positive region of $S_i$. (4.4) (Merge the solutions of sub-problems) $R = R_1 \cup R_2 \cup \cdots \cup R_k$.

*Step 5 (Optimize the solution). *$R$ is already an optimized result.

*Step 6 (Return the solution). *RETURN $R$.
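The following Python sketch mirrors the divide-conquer-merge flow of CPRDAC (reusing `U`, `C`, and `defaultdict` from the earlier listing). It is an illustration under those assumptions, not the authors' implementation: the “single decision value” test plays the roles of $F_{small}$ and $F_{direct}$, and plain set union plays the role of $F_{merge}$.

```python
# Sketch of the CPRDAC flow (Algorithm 3.14).  A group whose objects
# share one decision is a "clear" leaf and joins the positive region;
# a group with mixed decisions and no attributes left to split on lies
# in the boundary region and contributes nothing.

def cprdac(objs, attr_order):
    decisions = {U[i]["d"] for i in objs}
    if len(decisions) == 1:            # clear group: solved directly
        return sorted(objs)
    if not attr_order:                 # attributes exhausted: boundary region
        return []
    a, rest = attr_order[0], attr_order[1:]
    groups = defaultdict(list)         # Divide on attribute a
    for i in objs:
        groups[U[i][a]].append(i)
    merged = []                        # Conquer recursively, then Merge
    for sub in groups.values():
        merged.extend(cprdac(sub, rest))
    return sorted(merged)

print(cprdac(list(range(len(U))), C))  # [0, 1], matching positive_region
```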

*Example 3.15. *Given a decision table $S = (U, C \cup D, V, f)$, now compute the positive region of $S$ according to Algorithm 3.14. The whole process can be found in Figure 1.

Let us denote by $POS_i$ the positive region of sub-decision table $S_i$.

Divide $S$ into $S_1$, $S_2$, and $S_3$ on the first attribute of the order.

Conquer $S_1$: divide $S_1$ into $S_{11}$ and $S_{12}$ on the next attribute; conquer $S_{11}$ and $S_{12}$ directly; merge their solutions into $POS_1$.

Similarly, we can conquer $S_2$ and $S_3$, obtaining $POS_2$ and $POS_3$.

Merge the solutions of $S_1$, $S_2$, and $S_3$: $POS_C(D) = POS_1 \cup POS_2 \cup POS_3$.

#### 4. A Fast Algorithm for Attribute Reduction Based on Divide and Conquer Method

Knowledge reduction is the key problem in rough set theory. When the divide and conquer method is used to design algorithms for knowledge reduction, good results may be obtained. However, implementing knowledge reduction based on the divide and conquer method is rather complex, even though the divide and conquer method itself is a simple granular computing method. Here, we discuss quick algorithms for knowledge reduction based on the divide and conquer method.

In the course of attribute reduction, the divide and conquer method is used to compute the equivalence classes, the positive region, and the non-empty label attribute set, and to delete the elements of the discernibility matrix. Due to the complexity of attribute reduction, the following algorithms are not presented in as much detail as Algorithm 3.14.

According to Step 2 of Algorithm 3.13, the attribute set and the order on which the universe of the decision table will be partitioned in turn must be determined. Generally speaking, the decomposing order depends on the problem that needs to be solved. Furthermore, if the order is not given by field experts, it can be computed by the weights in [10, 15, 23, 25, 26, 28, 33, 36, 37, 41]. Of course, if the attribute order is given, it is more suitable for Algorithm 3.13. Most techniques discussed below are based on a given attribute order and the divide and conquer method. In this section, a quick attribute reduction algorithm based on a given attribute order and the divide and conquer method is proposed.

##### 4.1. Attribute Order

In 2001, an algorithm for attribute reduction based on a given attribute order was proposed by Jue Wang and Ju Wang [38]. For the convenience of illustration, some basic notions about the attribute order are introduced here.

Given a decision table $S = (U, C \cup D, V, f)$, an attribute order relation $a_1 \prec a_2 \prec \cdots \prec a_{|C|}$ over $C$ can be defined. Let us denote by $M$ the discernibility matrix of $S$. For any $m \in M$, the attributes of $m$ inherit the order relation of $C$ from left to right, that is, $m = \{a_{i_1}, a_{i_2}, \ldots, a_{i_s}\}$, where $i_1 < i_2 < \cdots < i_s$, and $a_{i_1}$ is the first element of $m$ by the order, called the *non-empty label attribute* of $m$ [33].

For $a_k \in C$, the set $L(a_k) = \{m \in M \mid$ the first element of $m$ by the order is $a_k\}$ is defined; its cells inherit the order relation of $C$ from left to right. Hence, $M$ can be divided into equivalence classes by the label attributes, defining a partition of $M$ denoted by $\{L(a_1), L(a_2), \ldots, L(a_{|C|})\}$ [33]. Supposing $N = \{a_{j_1}, a_{j_2}, \ldots, a_{j_t}\}$ (with $j_1 < j_2 < \cdots < j_t$) is the set of attributes whose classes $L(a_{j_l})$ are non-empty, its maximum non-empty label attribute is $a_{j_t}$.

##### 4.2. Attribute Reduction Based on the Divide and Conquer Method

Lemma 4.1. *$m_{xy} = \emptyset$ (Definition 2.8) if and only if $f(x, d) = f(y, d)$ or $\forall a \in C: f(x, a) = f(y, a)$.*

According to Definition 2.8, obviously Lemma 4.1 holds.

Proposition 4.2. *Given a decision table $S = (U, C \cup D, V, f)$, for all $x, y \in U$, let $M$ be the discernibility matrix of $S$. Then, $m_{xy} = \emptyset$ if and only if one of the following conditions holds: (1) $f(x, d) = f(y, d)$ and $x, y \in POS_C(D)$; (2) $f(x, d) = f(y, d)$ and $x, y \notin POS_C(D)$; (3) $f(x, d) = f(y, d)$, and exactly one of $x$ and $y$ belongs to $POS_C(D)$; (4) $\forall a \in C: f(x, a) = f(y, a)$.*

*Proof. *(Necessity) According to Lemma 4.1, obviously Proposition 4.2 holds.

(Sufficiency):

for all $x, y \in U$, suppose one of conditions (1)–(4) holds.

If $f(x, d) = f(y, d)$ (conditions (1), (2), and (3)), then, by Definition 2.8, $m_{xy} = \emptyset$. The proposition holds.

If condition (4) holds, then there are two cases. *Case 1* ($f(x, d) = f(y, d)$). By Definition 2.8, $m_{xy} = \emptyset$.

That is, the proposition holds. *Case 2* ($f(x, d) \neq f(y, d)$). Since $\forall a \in C: f(x, a) = f(y, a)$, we have $\{a \in C \mid f(x, a) \neq f(y, a)\} = \emptyset$;

hence $m_{xy} = \emptyset$ by Definition 2.8. Thus, the proposition holds.

Therefore, Proposition 4.2 holds.

According to the algorithm in [38], in order to compute the attribute reduction of a decision table, its non-empty label attribute set should be calculated first. Using the divide and conquer method, an efficient algorithm for computing the non-empty label attribute set is developed. A recursive function for computing the non-empty label attribute set is used in the algorithm.

*Function 1*

NonEmptyLabelAttr$(U', k)$

// $S = (U, C \cup D, V, f)$ is the decision table; $k$ is the index of the current attribute $a_k$ in the order $a_1 \prec a_2 \prec \cdots \prec a_{|C|}$.

*Step 1 (Ending Condition). * According to Propositions 4.2 and 5.1,

IF $|U'| \le 1$ or $k > |C|$ or the objects of $U'$ have the same decision value or the objects of $U'$ all lie outside the positive region THEN

return;

END IF

*Step 2 (Compute non-empty labels based on divide and conquer method). * Let NonEmptyLabel be an array used to store the solution.*Step *2.1. IF dividing $U'$ on $a_k$ yields more than one sub-table and the decision values of $U'$ are not all equal THEN

Denote the non-empty label attribute: NonEmptyLabel[$k$] = 1;

END IF*Step *2.2 (Divide). Divide $U'$ into $U_1, U_2, \ldots, U_t$ by $a_k$;*Step *2.3 (Conquer sub-problems). FOR $i$ = 1 TO $t$ DO

NonEmptyLabelAttr$(U_i, k+1)$;

END FOR.*Step *2.4 (Merge the solutions). Here, there is no operation, because the solutions are stored in the array NonEmptyLabel.

END Function 1.
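The idea behind Function 1 can be sketched directly (a simplified Python illustration reusing `U`, `C`, and `defaultdict` from the earlier listing; it is not the authors' implementation). Under the order $a_1 \prec a_2 \prec \cdots$, the label of a cell $m_{xy}$ is the first attribute on which $x$ and $y$ differ, so recursively partitioning the objects in that order discovers every non-empty label without ever building the matrix.

```python
# Simplified sketch of the idea behind Function 1 (not the authors' code).
# When a node of the recursion is split on attribute a, two objects that
# land in different groups differ on a for the first time; if their
# decisions also differ, their matrix cell is labeled a.

def non_empty_labels(objs, attr_order, labels):
    if len(objs) <= 1 or not attr_order:
        return
    a, rest = attr_order[0], attr_order[1:]
    groups = defaultdict(list)
    for i in objs:
        groups[U[i][a]].append(i)
    # With two or more groups, a cross-group pair with different decisions
    # exists exactly when the decisions in this node are not uniform.
    if len(groups) > 1 and len({U[i]["d"] for i in objs}) > 1:
        labels.add(a)
    for sub in groups.values():
        non_empty_labels(sub, rest, labels)

found = set()
non_empty_labels(list(range(len(U))), C, found)
print(found)  # {'a1', 'a2'} on the toy table
```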

Using the above recursive function, an algorithm for computing the non-empty label attribute set of a decision table is developed.

*Algorithm 4.3. *Computation of the Non-Empty Label Attribute Set

Input: A decision table $S = (U, C \cup D, V, f)$ and an attribute order $a_1 \prec a_2 \prec \cdots \prec a_{|C|}$

Output: The non-empty label attribute set $N$ of $S$.

*Step 1. *$N = \emptyset$;

FOR $i$ = 1 TO $|C|$ DO

NonEmptyLabel[$i$] = 0;

END FOR

*Step 2. *NonEmptyLabelAttr$(U, 1)$;

*Step 3. *FOR $i$ = 1 TO $|C|$ DO

IF NonEmptyLabel[$i$] == 1 THEN $N = N \cup \{a_i\}$;

END FOR

*Step 4. *RETURN $N$.

Suppose $n = |U|$ and $m = |C|$. The average time and space complexities of Algorithm 4.3 follow from the conclusions of [45] on the complexity of quick sort for two-dimensional tables.

Obviously, Algorithm 4.3 is an instance of Algorithm 3.13. Given an attribute order over the conditional attributes of a decision table, an efficient attribute reduction algorithm is developed using Algorithm 4.3 and the divide and conquer method.

*Algorithm 4.4. *Computation of Attribute Reduction Based on Divide and Conquer Method

Input: A decision table $S = (U, C \cup D, V, f)$ and an attribute order $a_1 \prec a_2 \prec \cdots \prec a_{|C|}$

Output: An attribute reduction $R$ of $S$.

*Step 1. *$R = \emptyset$, $N = \emptyset$.

*Step 2. *Compute the positive region $POS_C(D)$ of $S$, according to Algorithm 3.14.

*Step 3. *Compute the non-empty label attribute set $N$ by Algorithm 4.3.

*Step 4. *//Suppose $a_j$ is the maximum label attribute of $N$.

$R = R \cup \{a_j\}$;

IF $N \subseteq R$ THEN RETURN $R$;

ELSE

Generate a new attribute order by moving $a_j$ to the head:

$a_j \prec a_1 \prec \cdots \prec a_{j-1} \prec a_{j+1} \prec \cdots \prec a_{|C|}$;

Compute the new non-empty label attribute set $N$ of $S$ by Algorithm 4.3;

GOTO Step 4.

END IF

Suppose $n = |U|$ and $m = |C|$; the average time and space complexities of Algorithm 4.4 can be derived from [45] in the same way.

In Algorithm 4.4, Step 1 is the initialization. In Step 2, the divide and conquer method is used to compute the equivalence classes and the positive region; thus, Step 2 is an instance of Algorithm 3.13. In Step 3, Algorithm 4.3 is used to compute the non-empty label attribute set (Algorithm 4.3 is also an instance of Algorithm 3.13). Step 4 corresponds to Step 5 of Algorithm 3.13; in it, Algorithm 4.3 is called repeatedly to remove the redundant attributes. That is, Algorithm 4.4 is composed of instances of Algorithm 4.3, which illustrates that Algorithm 4.4 is implemented by the divide and conquer method. Basically, the divide and conquer method is used in Algorithm 4.4 to compute the equivalence classes, the positive region, and the non-empty label attribute set, and to delete element cells of the discernibility matrix.

#### 5. A Fast Algorithm for Value Reduction Based on Divide and Conquer Method

Proposition 5.1. *Given a decision table $S = (U, C \cup D, V, f)$, for all $x \in POS_C(D)$, a certain rule $r_x$ can be induced by the object $x$.*

Proposition 5.2. *Given a decision table $S = (U, C \cup D, V, f)$, let us denote by $RULE$ a certain rule set of $S$. Every $r \in RULE$ must be induced by some $x \in POS_C(D)$.*

Proposition 5.3. *Given a decision table $S = (U, C \cup D, V, f)$, let us denote by $RULE$ the certain rule set of $S$. For all $x \in POS_C(D)$, let us denote by $r_x$ the certain rule induced by $x$. Let $RULE' = \{r_x \mid x \in POS_C(D)\}$. Then, $RULE = RULE'$.*

According to Algorithm 3.13 and Propositions 5.1, 5.2, and 5.3, a recursive function and an algorithm for value reduction based on the divide and conquer method are developed as follows.

*Function 2*

DRAVDAC$(U', k)$

//Denote by the array CoreValueAttribute the result of the value reduction of $S$.

//The values of the array CoreValueAttribute are all 0 initially.

*Step 1 (Ending Condition). *IF there is a contradiction on $U'$ THEN

return;

END IF

*Step 2 (Value reduction on $U'$ based on divide and conquer method). **Step *2.1 (Divide).

Divide $U'$ into $t$ sub-decision tables $U_1, U_2, \ldots, U_t$ on attribute $a_k$ by using the tolerance relation.*Step *2.2 (Conquer sub-problems recursively).

Denote by the array $Sol_i$ the solution of $U_i$.

Where, $1 \le i \le t$.

FOR $i$ = 1 TO $t$ DO

$Sol_i$ = DRAVDAC$(U_i, k+1)$;

END FOR*Step *2.3 (Merge the solutions).

FOR $i$ = 1 TO $t$ DO

FOR $j$ = 1 TO $|C|$ DO

IF $Sol_i[j]$ == 1 THEN break; END IF

END FOR

IF $j \le |C|$ THEN CoreValueAttribute[$j$] = 1 END IF

END FOR

END Function 2.
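Before the full algorithm, the test that drives Function 2 (Definition 3.10) can be sketched on its own: a condition value of a certain rule is dispensable when every object tolerant with the reduced description still carries the same decision; dropping a value plays the role of replacing it with “*”. The following simplified, greedy per-rule illustration (reusing `U` and `C` from the earlier listing) is a sketch of that test, not the recursive DRAVDAC itself.

```python
# Greedy per-rule value reduction test (cf. Definition 3.10); a sketch,
# not the recursive DRAVDAC.  Dropping a value from the rule of object i
# plays the role of replacing it with '*'.

def rule_is_certain(i, kept):
    """True if the kept condition values of object i determine its decision."""
    return all(U[j]["d"] == U[i]["d"]
               for j in range(len(U))
               if all(U[i][a] == U[j][a] for a in kept))  # j tolerant with i

def reduce_rule(i):
    kept = list(C)
    for a in C:                         # try to drop each value in turn
        trial = [b for b in kept if b != a]
        if rule_is_certain(i, trial):
            kept = trial
    return {a: U[i][a] for a in kept}, U[i]["d"]

for i in [0, 1]:                        # the objects in the positive region
    print(reduce_rule(i))
# -> ({'a2': 0}, 0) and ({'a1': 0, 'a2': 1}, 1)
```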

Using Function 2, we present an algorithm for value reduction based on divide and conquer method (see Algorithm 5.4).

*Algorithm 5.4. *An Algorithm for Value Reduction Based on Divide and Conquer Method

Input: A decision table $S = (U, C \cup D, V, f)$

Output: The certain rule set $RULE$ of $S$.

*Step 1 (Initiation). *$RULE = \emptyset$.

*Step 2 (Compute the positive region). * According to Algorithm 3.14, compute the positive region $POS_C(D)$ of $S$.

*Step 3 (Compute the non-empty label attribute). * Assume the order for dividing the decision table is $a_1 \prec a_2 \prec \cdots \prec a_{|C|}$.

Compute the non-empty label attribute set $N$ by using Function 1.

*Step 4 (Value reduction on attribute set $N$). * Let $m' = |N|$ and the divide order be $a_1 \prec a_2 \prec \cdots \prec a_{m'}$.

FOR i = $m'$ TO 1 DO

FOR $j$ = 1 TO $|U|$ DO

CoreValueAttribute[$j$] = 0.

END FOR.

Invoke Function 2: DRAVDAC$(U, i)$.

Update $S$ according to the array CoreValueAttribute.

END FOR.

*Step 5 (Get the rule set). *FOR $i$ = 1 TO $|U|$ DO

IF $x_i \in POS_C(D)$ THEN

Construct a rule $r$ in terms of the reduced attribute values of $x_i$; $RULE = RULE \cup \{r\}$;

END IF

END FOR

*Step 6. *RETURN $RULE$.

Suppose $n = |U|$ and $m = |C|$. Step 1 is a simple initialization. The average time complexities of Steps 2 and 3 follow from [45], and Steps 5 and 6 each make a single pass over the objects. The dominant cost is Step 4; let us analyze it.

In Step 4, let the number of non-empty label attributes be $m'$ ($m' \le m$). Function 2 is invoked once per non-empty label attribute, and its cost satisfies a divide and conquer recurrence over the recursive partition of $U$. The time complexity of Step 4, and hence of Algorithm 5.4, lies between the best and the worst case of this recurrence, and it improves further when the data obey a uniform distribution.

The space complexity of Algorithm 5.4 is dominated by the storage of the decision table and of the arrays used by Function 2.

#### 6. Experimental Evaluations

In order to test the efficiency of knowledge reduction based on the divide and conquer method, some experiments have been performed on a personal computer. The experiments are reported as follows.

##### 6.1. The Experimental Evaluations on UCI Data Sets

In this experiment, some experimental evaluations are done to present the efficiency and the recognition results of Algorithms 4.4 and 5.4. Meanwhile, some famous data mining approaches are used for comparison with our methods.

The test procedure is as follows. First, 11 UCI data sets (Zoo, Iris, Wine, Machine, Glass, Voting, Wdbc, Balance-scale, Breast, Crx, and Tic-tac-toe) are used. Second, our methods, namely the discretization algorithm [14] (an improved version of the discretization method in [28]), the attribute reduction algorithm (Algorithm 4.4), and the attribute value reduction algorithm (Algorithm 5.4), are run on the 11 UCI data sets. Third, 5 methods (KNN, SVM, C4.5, Naive Bayes, and CART) are also run on the data sets; they belong to the “top 10 algorithms in data mining” [44], and their source codes are provided by the Weka software. Weka (http://en.wikipedia.org/wiki/Weka_(machine_learning)) is used as the experimental platform and “Java Eclipse Weka” as the development tool. The test method is LOOCV (leave-one-out cross-validation). The experimental computer has an Intel(R) Core(TM)2 Quad CPU Q8200 @2.33 GHz, 2 GB RAM, and Microsoft Windows 7. The specifications of the 11 data sets and the experimental results are as follows.

From Table 1 and Figure 2, it can be found that the recognition results of our methods on the 11 UCI data sets are close to those of KNN and CART and better than those of Naive Bayes, C4.5, and SVM.

##### 6.2. The Experimental Results on KDDCUP99 Data Sets

In order to test the efficiency of our methods for processing large data sets, some experiments are done on the KDDCUP99 data sets with 4,898,432 records, 41 condition attributes, and 23 decision classes (http://kdd.ics.uci.edu/databases//kddcup99/kddcup99.html). Our methods still consist of the discretization algorithm [14] and Algorithms 4.4 and 5.4. Weka is used as the experimental platform and “Java Eclipse Weka” as the development tool (Table 2). The test method is 10-fold cross-validation. The experimental computer has an Intel(R) Core(TM)2 Quad CPU Q8200 @2.33 GHz, 4 GB RAM, and Microsoft Windows Server 2008. The experimental results are as follows.

First, the experiments are done on 10 data sets (≤10^{4} records) drawn from the original KDDCUP99 data sets. The experimental evaluations are shown in Tables 3, 4, and 5, where the time unit in Tables 4 and 5 is “ms”.

From Tables 3, 4, and 5, it can be found that training by SVM costs much time, which shows that SVM is not a good way to process KDDCUP99 data sets with large numbers of records. Thus, SVM is not tested in the following experiments.

Second, the experiments are done on 10 data sets (≤10^{5} records) drawn from the original KDDCUP99 data sets. The experimental evaluations are shown in Tables 6 and 7, where “Tr” is the training time, “Te” is the test time, and the time unit in Table 7 is “ms”.

From Tables 6 and 7, it can be found that the recognition rate of Naive Bayes is lower than those of the others and that KNN needs much test time. Thus, Naive Bayes and KNN are not tested in the following experiments.

Third, the experiments are done on 10 data sets (≤10^{6} records) drawn from the original KDDCUP99 data sets. The experimental evaluations are shown in Table 8, where “RRate” is the recognition rate, “Tr” is the training time, “Te” is the testing time, and the time unit in Table 8 is “ms”.

Fourth, the experiments are done on 10 still larger data sets drawn from the original KDDCUP99 data sets. The experimental results are shown in Table 9, where “RRate” is the recognition rate, “Tr” is the training time, “Te” is the testing time, “-” denotes an overflow of memory, and the time unit in Table 9 is “second”.

From Tables 8 and 9, it can be found that C4.5 is the best one and our method is the second best one among C4.5, CART, and our method. Due to the high complexity of discretization, our method cannot complete the knowledge reduction of all 4,898,432 records in this experiment.

##### 6.3. The Conclusions of Experimental Evaluations

Now, we give some conclusions about our approaches compared with KNN, SVM, C4.5, Naive Bayes, and CART, according to the LOOCV experimental results on the UCI data sets and the 10-fold cross-validation results on the KDDCUP99 data sets.
(1) Compared with KNN, SVM, and Naive Bayes, the LOOCV recognition results of our methods on the UCI data sets are better. Furthermore, our methods have higher efficiency than KNN, SVM, and Naive Bayes on the KDDCUP99 data sets, while they also obtain good recognition results.
(2) Compared with CART, the LOOCV recognition results of our methods on the UCI data sets are close to those of CART, but our methods can process larger KDDCUP99 data sets than CART, while both obtain good recognition results.
(3) Compared with C4.5, the LOOCV recognition results of our methods on the UCI data sets are better. Furthermore, the test time of our methods on the KDDCUP99 data sets is less than that of C4.5, while C4.5 can process larger data sets than our methods. After analyzing the two, we find that our methods are more complex than C4.5 because of the required discretization (C4.5 can process decision tables with continuous values directly, while discretization is necessary for our methods). As a coin has two sides, this additional learning contributes to better rule sets; thus, our methods need less test time than C4.5.

Therefore, the knowledge reduction approaches based on the divide and conquer method are efficient for processing large data sets, although they need to be improved further in the future.

#### 7. Conclusions

In this paper, the abstract process of knowledge reduction based on the divide and conquer method is concluded; it originates from the approach under the equivalence relation and the one under the tolerance relation. Furthermore, an example of computing the positive region of a decision table is introduced. After that, two algorithms for knowledge reduction based on the divide and conquer method, including an algorithm for attribute reduction and an algorithm for attribute value reduction, are presented, respectively. According to the experimental evaluations, the proposed algorithms perform knowledge reduction efficiently on the UCI data sets and the KDDCUP99 data set. Therefore, the divide and conquer method is an efficient and suitable method for knowledge reduction algorithms in rough set theory. With this efficiency, widespread industrial application of rough set theory may become possible.

#### Acknowledgments

This work is supported by the National Natural Science Foundation of China (NSFC) under Grants no. 61073146, no. 61272060, no. 61203308, and no. 41201378, the Scientific and Technological Cooperation Project between the Chinese and Polish Governments under Grant no. 34-5, the Natural Science Foundation Project of CQ CSTC under Grant no. cstc2012jjA1649, and the Doctor Foundation of Chongqing University of Posts and Telecommunications under Grant no. A2012-08.

#### References

- A. Bargiela and W. Pedrycz, *Human-Centric Information Processing Through Granular Modelling*, Springer, Berlin, Germany, 1997.
- W. Pedrycz, A. Skowron, and V. Kreinovich, *Handbook of Granular Computing*, Wiley Interscience, New York, NY, USA, 2007.
- J. T. Yao, *Novel Developments in Granular Computing: Applications for Advanced Human Reasoning and Soft Computation*, Information Science Reference, Hershey, Pa, USA, 2010.
- J. Yao, “A ten-year review of granular computing,” in *Proceedings of the IEEE International Conference on Granular Computing (GRC '07)*, pp. 734–739, November 2007.
- Y. Y. Yao, “Granular computing: past, present and future,” in *Proceedings of the IEEE International Conference on Granular Computing*, pp. 80–85, 2008.
- Y. Y. Yao and J. G. Luo, “Top-down progressive computing,” in *Proceedings of RSKT*, pp. 734–742, Springer, Regina, Canada, 2011.
- Z. Pawlak, “Rough sets,” *International Journal of Computer and Information Sciences*, vol. 11, no. 5, pp. 341–356, 1982.
- Z. Pawlak and A. Skowron, “Rudiments of rough sets,” *Information Sciences*, vol. 177, no. 1, pp. 3–27, 2007.
- Z. Pawlak and A. Skowron, “Rough sets: some extensions,” *Information Sciences*, vol. 177, no. 1, pp. 28–40, 2007.
- Z. Pawlak and A. Skowron, “Rough sets and Boolean reasoning,” *Information Sciences*, vol. 177, no. 1, pp. 41–73, 2007.
- J. W. Grzymala-Busse, “A new version of the rule induction system LERS,” *Fundamenta Informaticae*, vol. 31, no. 1, pp. 27–39, 1997.
- F. Hu and G. Y. Wang, “A quick reduction algorithm based on attribute order,” *Chinese Journal of Computers*, vol. 30, no. 8, pp. 1430–1435, 2007 (Chinese).
- F. Hu, G. Wang, and Y. Xia, “Attribute core computing based on divide and conquer method,” in *Proceedings of the International Conference on Rough Sets and Intelligent Systems Paradigms (RSEISP '07)*, M. Kryszkiewicz et al., Eds., vol. 4585 of *Lecture Notes in Artificial Intelligence*, pp. 310–319, Springer, Warsaw, Poland, 2007.
- F. Hu, G. Wang, and J. Dai, “Quick discretization algorithm for rough set based on dynamic clustering,” *Journal of Southwest Jiaotong University*, vol. 45, no. 6, pp. 977–983, 2010 (Chinese).
- K. Hu, Y. Lu, and C. Shi, “Feature ranking in rough sets,” *AI Communications*, vol. 16, no. 1, pp. 41–50, 2003.
- X. Hu and N. Cercone, “Learning in relational databases: a rough set approach,” *Computational Intelligence*, vol. 11, no. 2, pp. 323–338, 1995.
- M. Kryszkiewicz and H. Rybinski, “Computation of reducts of composed information systems,” *Fundamenta Informaticae*, vol. 27, no. 2-3, pp. 183–195, 1996.
- D. F. Li, G. B. Li, and W. Zhang, “U/a partition based smallest reduction construction,” *Journal of Wuhan University*, vol. 51, pp. 269–272, 2005 (Chinese).
- T. Y. Lin and N. Cercone, Eds., *Rough Sets and Data Mining: Analysis of Imperfect Data*, Kluwer Academic Publishers, Boston, Mass, USA, 1997.
- Q.-H. Liu, F. Li, F. Min, M. Ye, and G.-W. Yang, “Efficient knowledge reduction algorithm based on new conditional information entropy,” *Control and Decision*, vol. 20, no. 8, pp. 878–882, 2005 (Chinese).
- S. W. Liu, Q.-J. Sheng, B. Wu, Z.-Z. Shi, and F. Hu, “Research on efficient algorithms for rough set methods,” *Chinese Journal of Computers*, vol. 40, pp. 637–642, 2003 (Chinese).
- D. Miao, C. Gao, N. Zhang, and Z. Zhang, “Diverse reduct subspaces based co-training for partially labeled data,” *International Journal of Approximate Reasoning*, vol. 52, no. 8, pp. 1103–1117, 2011.
- M. J. Moshkov, M. Piliszczuk, and B. Zielosko, “On partial covers, reducts and decision rules with weights,” in *Transactions on Rough Sets VI*, vol. 4374 of *Lecture Notes in Computer Science*, pp. 211–246, Springer, Berlin, Germany, 2007.
- M. R. Chmielewski, J. W. Grzymala-Busse, N. W. Peterson, and S. Than, “The rule induction system LERS: a new version for personal computers,” in *Proceedings of the International Workshop on Rough Sets and Knowledge Discovery (RSKD '93)*, Alberta, Canada, 1993.
- J. M. Moshkov, A. Skowron, and Z. Suraj, “On minimal rule sets for almost all binary information systems,” *Fundamenta Informaticae*, vol. 80, no. 1–3, pp. 247–258, 2008.
- H. S. Nguyen, “From optimal hyperplanes to optimal decision trees,” *Fundamenta Informaticae*, vol. 34, no. 1-2, pp. 145–174, 1998.
- H. S. Nguyen, “A soft decision tree,” in *Proceedings of the Intelligent Information Systems Symposium (IIS '02)*, M. A. Klopotek, S. Wierzchon, and M. Michalewicz, Eds., Advances in Soft Computing, pp. 57–66, Springer, Berlin, Germany, 2002.
- H. S. Nguyen, “Approximate Boolean reasoning: foundations and applications in data mining,” in *Transactions on Rough Sets V*, vol. 4100 of *Lecture Notes in Computer Science*, pp. 334–506, Springer, Berlin, Germany, 2006.
- H. S. Nguyen, A. Skowron, and P. Synak, “Discovery of data patterns with applications to decomposition and classification problems,” in *Rough Sets in Knowledge Discovery 2*, L. Polkowski and A. Skowron, Eds., vol. 19, pp. 55–97, Physica, Berlin, Germany, 1998.
- S. H. Nguyen and H. S. Nguyen, “Some efficient algorithms for rough set methods,” in *Proceedings of the Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU '96)*, pp. 1451–1456, Granada, Spain, 1996.
- S. H. Nguyen and H. S. Nguyen, “Pattern extraction from data,” *Fundamenta Informaticae*, vol. 34, no. 1-2, pp. 129–144, 1998.
- S. K. Pal, L. Polkowski, and A. Skowron, *Rough-Neural Computing: Techniques for Computing with Words*, Cognitive Technologies, Springer, Berlin, Germany, 2004.
- Y. Qian, J. Liang, W. Pedrycz, and C. Dang, “An efficient accelerator for attribute reduction from incomplete data in rough set framework,” *Pattern Recognition*, vol. 44, no. 8, pp. 1658–1670, 2011.
- A. Skowron and C. Rauszer, “The discernibility matrices and functions in information systems,” in *Intelligent Decision Support: Handbook of Applications and Advances of the Rough Sets Theory*, R. Slowinski, Ed., pp. 331–362, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1992.
- A. Skowron, Z. Pawlak, J. Komorowski, and L. Polkowski, “A rough set perspective on data and knowledge,” in *Handbook of KDD*, W. Kloesgen and J. Zytkow, Eds., pp. 134–149, Oxford University Press, Oxford, UK, 2002.
- G. Y. Wang, *Rough Set Theory and Knowledge Acquisition*, Xi'an Jiaotong University Press, Xi'an, China, 2001.
- G. Y. Wang, H. Yu, and D. C. Yang, “Decision table reduction based on conditional information entropy,” *Chinese Journal of Computers*, vol. 25, no. 7, pp. 759–766, 2002 (Chinese).
- Jue Wang and Ju Wang, “Reduction algorithms based on discernibility matrix: the ordered attributes method,” *Journal of Computer Science and Technology*, vol. 16, no. 6, pp. 489–504, 2001.
- Y. Yao and Y. Zhao, “Discernibility matrix simplification for constructing attribute reducts,” *Information Sciences*, vol. 179, no. 7, pp. 867–882, 2009.
- H.-Z. Yang, L. Yee, and M.-W. Shao, “Rule acquisition and attribute reduction in real decision formal contexts,” *Soft Computing*, vol. 15, no. 6, pp. 1115–1128, 2011.
- M. Zhao, *The Data Description Based on Reduct*, Ph.D. dissertation, Institute of Automation, Chinese Academy of Sciences, Beijing, China, 2004.
- J. Zhou, D. Miao, W. Pedrycz, and H. Zhang, “Analysis of alternative objective functions for attribute reduction in complete decision tables,” *Soft Computing*, vol. 15, no. 8, pp. 1601–1616, 2011.
- W. Ziarko, N. Cercone, and X. Hu, “Rule discovery from database with decision matrices,” in *Proceedings of the 9th International Symposium on Foundations of Intelligent Systems (ISMIS '96)*, pp. 653–662, Zakopane, Poland, May 1996.
- X. Wu, V. Kumar, J. R. Quinlan et al., “Top 10 algorithms in data mining,” *Knowledge and Information Systems*, vol. 14, no. 1, pp. 1–37, 2008.
- F. Hu and G. Y. Wang, “Analysis of the complexity of quick sort for two-dimensional tables,” *Chinese Journal of Computers*, vol. 30, no. 6, pp. 963–968, 2007 (Chinese).