Journal of Applied Mathematics
Volume 2012, Article ID 970576, 13 pages
http://dx.doi.org/10.1155/2012/970576
Research Article

A Granular Reduction Algorithm Based on Covering Rough Sets

1College of Science, Central South University of Forestry and Technology, Changsha 410004, China
2College of Mathematics and Computer Science, Guangxi University for Nationalities, Nanning 530006, China
3School of Economics and Management, Changsha University of Science and Technology, Changsha 410004, China

Received 31 March 2012; Revised 12 July 2012; Accepted 16 July 2012

Academic Editor: Chong Lin

Copyright © 2012 Tian Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Granular reduction is the deletion of dispensable elements from a covering. It is an efficient method for reducing granular structures and removing redundant information from information systems. In this paper, we develop an algorithm based on discernibility matrices to compute all the granular reducts of covering rough sets. Moreover, a discernibility matrix is simplified to its minimal format. In addition, a heuristic algorithm is proposed so that a granular reduct can be generated rapidly.

1. Introduction

With the development of technology, the amount of information grows at a surprising rate. It is a great challenge to extract valuable knowledge from such massive information. Rough set theory was proposed by Pawlak [1, 2] to deal with uncertainty and vagueness, and it has been applied to information processing in various areas [3–8].

One of the most important topics in rough set theory is the design of reduction algorithms. The reduction of Pawlak's rough sets removes dispensable elements from a family of equivalence relations, which induces the equivalence classes, that is, a partition.

Covering generalized rough sets [9–19] and binary relation generalized rough sets [20–26] are the two main extensions of Pawlak's rough sets. The reduction theory of covering rough sets [10, 11, 15, 23, 27, 28] plays an important role in practice. A partition is no longer a partition once any of its elements is deleted, while a covering may remain a covering with invariant set approximations after some elements are dropped. Therefore, there are two types of reduction on covering rough sets. One removes redundant coverings from a family of coverings and is referred to as attribute reduction. The other removes redundant elements from a single covering and is referred to as granular reduction; it finds the minimal subsets of a covering which generate the same set approximations as the original covering. Since granular reduction serves to reduce granular structures and databases and interacts with attribute reduction, it should by no means be ignored. In this paper, we investigate the granular reduction of covering rough sets.

To compute all attribute reducts of Pawlak's rough sets, the discernibility matrix was first presented in [29]. Tsang et al. [15] developed an algorithm based on discernibility matrices to compute attribute reducts for one type of covering rough sets. Zhu and Wang [17] and Zhu [18] first built one type of granular reduction for two covering rough set models. In addition, Yang et al. systematically examined granular reduction in [30] and the relationship between reducts and topology in [31]. Unfortunately, no effective algorithm for granular reduction has hitherto been proposed.

In this paper, we bridge this gap by constructing an algorithm based on discernibility matrices that is applicable to all granular reducts of covering rough sets. This algorithm can reduce granular structures and remove redundant information from information systems. Then a discernibility matrix is simplified to its minimal format. Meanwhile, based on this simplification of the discernibility matrix, a heuristic algorithm is proposed as well.

The remainder of this paper proceeds as follows. Section 2 reviews the relevant background knowledge about the granular reduction. Section 3 constructs the algorithm based on discernibility matrix. Section 4 simplifies the discernibility matrix and proposes a heuristic algorithm. Section 5 concludes the study.

2. Background

Our aim in this section is to give a glimpse of rough set theory.

Let 𝑈 be a finite and nonempty set, and let 𝑅 be an equivalence relation on 𝑈. 𝑅 generates a partition 𝑈/𝑅 = {[𝑥]_𝑅 : 𝑥 ∈ 𝑈} on 𝑈, where [𝑥]_𝑅 is the equivalence class of 𝑥 generated by the equivalence relation 𝑅. We call these the elementary sets of 𝑅 in rough set theory. For any set 𝑋 ⊆ 𝑈, we describe 𝑋 by the elementary sets of 𝑅, and the two sets

𝑅̲(𝑋) = ∪{[𝑥]_𝑅 : [𝑥]_𝑅 ⊆ 𝑋},  𝑅̄(𝑋) = ∪{[𝑥]_𝑅 : [𝑥]_𝑅 ∩ 𝑋 ≠ Ø} (2.1)

are called the lower and upper approximations of 𝑋, respectively. If 𝑅̲(𝑋) = 𝑅̄(𝑋), 𝑋 is an 𝑅-exact set. Otherwise, it is an 𝑅-rough set.
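To make (2.1) concrete, the following is a minimal Python sketch; it is our own illustration (the function name and the data are ours, not the paper's), assuming the partition is given as a list of sets.

```python
# A minimal sketch (illustrative only): Pawlak lower and upper
# approximations of X computed from the partition U/R, as in (2.1).

def pawlak_approximations(partition, X):
    """partition: iterable of equivalence classes (sets); X: a subset of U."""
    X = set(X)
    lower = set().union(*(b for b in partition if set(b) <= X))
    upper = set().union(*(b for b in partition if set(b) & X))
    return lower, upper

# Example: U/R = {{1,2},{3,4},{5,6}}, X = {1,2,3}; X is R-rough since the
# two approximations differ.
print(pawlak_approximations([{1, 2}, {3, 4}, {5, 6}], {1, 2, 3}))
# -> ({1, 2}, {1, 2, 3, 4})
```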

Let ℛ be a family of equivalence relations on 𝑈, and let 𝐴 ∈ ℛ. Denote IND(ℛ) = ∩{𝑅 : 𝑅 ∈ ℛ}. 𝐴 is dispensable in ℛ if and only if IND(ℛ) = IND(ℛ − {𝐴}); otherwise, 𝐴 is indispensable in ℛ. The family ℛ is independent if every 𝐴 ∈ ℛ is indispensable in ℛ; otherwise, ℛ is dependent. 𝒫 ⊆ ℛ is a reduct of ℛ if 𝒫 is independent and IND(𝒫) = IND(ℛ). The set of all indispensable relations in ℛ is called the core of ℛ, denoted CORE(ℛ). Evidently, CORE(ℛ) = ∩RED(ℛ), where RED(ℛ) is the family of all reducts of ℛ. The discernibility matrix method was proposed to compute all reducts of information systems and all relative reducts of decision systems [29].

𝒞 is called a covering of 𝑈 if 𝑈 is a nonempty domain of discourse, 𝒞 is a family of nonempty subsets of 𝑈, and ∪𝒞 = 𝑈.

It is clear that a partition of 𝑈 is certainly a covering of 𝑈, so the concept of a covering is an extension of the concept of a partition.

Definition 2.1 (minimal description [9]). Let 𝒞 be a covering of 𝑈. 𝑀𝑑_𝒞(𝑥) = {𝐾 ∈ 𝒞 : 𝑥 ∈ 𝐾 ∧ (∀𝑆 ∈ 𝒞)(𝑥 ∈ 𝑆 ∧ 𝑆 ⊆ 𝐾 ⇒ 𝐾 = 𝑆)} (2.2) is called the minimal description of 𝑥. When there is no confusion, we omit the subscript 𝒞.

Definition 2.2 (neighborhood [9, 19]). Let 𝒞 be a covering of 𝑈; 𝑁_𝒞(𝑥) = ∩{𝐶 ∈ 𝒞 : 𝑥 ∈ 𝐶} is called the neighborhood of 𝑥. Generally, we omit the subscript 𝒞 when there is no confusion.

Minimal descriptions and neighborhoods are the related information granules used to describe 𝑥, and they serve as approximation elements in rough sets (as shown in Definition 2.3). It holds that 𝑁(𝑥) = ∩{𝐶 ∈ 𝒞 : 𝑥 ∈ 𝐶} = ∩𝑀𝑑(𝑥). The neighborhood of 𝑥 can be seen as the minimum description of 𝑥, and it is the most precise description (for more details see [9]).
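Since Definitions 2.1 and 2.2 drive everything that follows, here is a short Python sketch of both granules. The function names are ours, and the covering is passed as a list of sets, which is an assumption on our part since the paper fixes no data format.

```python
# Illustrative sketch of Definitions 2.1 and 2.2 (names are ours).

def neighborhood(cover, x):
    """N(x): intersection of all covering elements containing x."""
    return set.intersection(*(set(K) for K in cover if x in K))

def minimal_description(cover, x):
    """Md(x): the blocks containing x that are minimal under set inclusion."""
    blocks = [frozenset(K) for K in cover if x in K]
    return {K for K in blocks if not any(S < K for S in blocks)}

# Sanity check of N(x) = ∩ Md(x) on an arbitrary covering of U = {1,2,3,4}:
cover = [{1, 2}, {1, 3}, {2, 3, 4}]
assert neighborhood(cover, 1) == set.intersection(*map(set, minimal_description(cover, 1)))
```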

Definition 2.3 (covering lower and upper approximation operations [19]). Let 𝒞 be a covering of 𝑈. The operations 𝐶𝐿_𝒞 : 𝑃(𝑈) → 𝑃(𝑈) and 𝐼𝐿_𝒞 : 𝑃(𝑈) → 𝑃(𝑈) are defined as follows: for all 𝑋 ∈ 𝑃(𝑈),

𝐶𝐿_𝒞(𝑋) = ∪{𝐾 ∈ 𝒞 : 𝐾 ⊆ 𝑋} = ∪{𝐾 : ∃𝑥 s.t. (𝐾 ∈ 𝑀𝑑(𝑥)) ∧ (𝐾 ⊆ 𝑋)},
𝐼𝐿_𝒞(𝑋) = {𝑥 : 𝑁(𝑥) ⊆ 𝑋} = ∪{𝑁(𝑥) : 𝑁(𝑥) ⊆ 𝑋}. (2.3)

We call 𝐶𝐿_𝒞 the first, the second, the third, and the fourth covering lower approximation operation and 𝐼𝐿_𝒞 the fifth, the sixth, and the seventh covering lower approximation operation with respect to the covering 𝒞.

The operations 𝐹𝐻, 𝑆𝐻, 𝑇𝐻, 𝑅𝐻, 𝐼𝐻, 𝑋𝐻, 𝑉𝐻 : 𝑃(𝑈) → 𝑃(𝑈) are defined as follows: for all 𝑋 ∈ 𝑃(𝑈),

𝐹𝐻_𝒞(𝑋) = 𝐶𝐿(𝑋) ∪ (∪{𝑀𝑑(𝑥) : 𝑥 ∈ 𝑋 − 𝐶𝐿(𝑋)}),
𝑆𝐻_𝒞(𝑋) = ∪{𝐾 : 𝐾 ∈ 𝒞, 𝐾 ∩ 𝑋 ≠ Ø},
𝑇𝐻_𝒞(𝑋) = ∪{𝑀𝑑(𝑥) : 𝑥 ∈ 𝑋},
𝑅𝐻_𝒞(𝑋) = 𝐶𝐿(𝑋) ∪ (∪{𝐾 : 𝐾 ∩ (𝑋 − 𝐶𝐿(𝑋)) ≠ Ø}),
𝐼𝐻_𝒞(𝑋) = 𝐼𝐿(𝑋) ∪ (∪{𝑁(𝑥) : 𝑥 ∈ 𝑋 − 𝐼𝐿(𝑋)}) = ∪{𝑁(𝑥) : 𝑥 ∈ 𝑋},
𝑋𝐻_𝒞(𝑋) = {𝑥 : 𝑁(𝑥) ∩ 𝑋 ≠ Ø},
𝑉𝐻_𝒞(𝑋) = ∪{𝑁(𝑥) : 𝑁(𝑥) ∩ 𝑋 ≠ Ø}. (2.4)

𝐹𝐻_𝒞, 𝑆𝐻_𝒞, 𝑇𝐻_𝒞, 𝑅𝐻_𝒞, 𝐼𝐻_𝒞, 𝑋𝐻_𝒞, and 𝑉𝐻_𝒞 are called the first, the second, the third, the fourth, the fifth, the sixth, and the seventh covering upper approximation operations with respect to 𝒞, respectively. We leave out the subscript 𝒞 when there is no confusion.
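As a small illustration of the fifth pair of operations (written 𝐼𝐿 and 𝐼𝐻 in our reconstruction above), the following Python sketch builds them from neighborhoods; the code is ours and reuses the neighborhood() helper from the previous sketch.

```python
# Sketch of the fifth lower/upper approximation pair via neighborhoods
# (our code; assumes neighborhood() from the sketch above).

def fifth_lower(cover, U, X):
    """IL(X) = {x in U : N(x) ⊆ X}."""
    X = set(X)
    return {x for x in U if neighborhood(cover, x) <= X}

def fifth_upper(cover, U, X):
    """IH(X) = union of N(x) over all x in X."""
    return set().union(*(neighborhood(cover, x) for x in X))

U = {1, 2, 3, 4}
cover = [{1, 2}, {1, 3}, {2, 3, 4}]
print(fifth_lower(cover, U, {1, 2}), fifth_upper(cover, U, {1, 2}))
# -> {1, 2} {1, 2}
```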

As shown in [32], every approximation operation in Definition 2.3 may be applied in certain circumstances, and we choose the suitable approximation operation according to the specific situation. So it is important to design granular reduction algorithms for all of these models.

More precise approximation spaces are proposed in [30], and as a further result a reasonable granular reduction of coverings is also introduced there. Let ℳ_𝒞 = ∪{𝑀𝑑(𝑥) : 𝑥 ∈ 𝑈} and 𝒩_𝒞 = {𝑁(𝑥) : 𝑥 ∈ 𝑈}. ⟨𝑈, ℳ_𝒞⟩ is the approximation space of the first and the third types of covering rough sets, ⟨𝑈, 𝒞⟩ is the approximation space of the second and the fourth types, and ⟨𝑈, 𝒩_𝒞⟩ is the approximation space of the fifth, the sixth, and the seventh types (see [30] for the details). In this paper, we design the granular reduction algorithm for the fifth, the sixth, and the seventh types of covering rough sets.

Let 𝒞 be a covering of 𝑈, with ⟨𝑈, 𝒞⟩ denoting a covering approximation space. ℳ_𝒞 denotes an ℳ-approximation space, and 𝒩_𝒞 represents an 𝒩-approximation space. We omit the subscript 𝒞 when there is no confusion (see [30] for the details).

3. Discernibility Matrixes Based on Covering Granular Reduction

In the original Pawlak's rough sets, the family of equivalence classes induced by the equivalence relations is a partition, and once any of its elements is deleted, it is no longer a partition. Granular reduction refers to the method of reducing granular structures and removing redundant information from databases; therefore, granular reduction is not applicable to the original Pawlak's rough sets. However, in covering rough sets, one of the most important extensions of Pawlak's rough sets, a covering still works even after the omission of some of its elements, as long as the set approximations are invariant. The purpose of covering granular reduction is to find the minimal subsets keeping the same set approximations. It is therefore meaningful and necessary to develop an algorithm for covering granular reduction.

The quintuple (𝑈, 𝒞, 𝐶𝐿, 𝐶𝐻) is called a covering rough set system (CRSS), where 𝒞 is a covering of 𝑈, 𝐶𝐿 and 𝐶𝐻 are the lower and upper approximation operations with respect to the covering 𝒞, and ⟨𝑈, 𝒜_𝒞⟩ is the approximation space. According to the categories of covering approximation operations in [30], there are two situations as follows.
(1) If 𝒜_𝒞 = 𝒞 or 𝒜_𝒞 = ℳ_𝒞, then 𝒜_𝒞 ⊆ 𝒞; thus 𝒜_𝒞 is the unique granular reduct of 𝒞. There is no need to develop an algorithm to compute granular reducts for the first, the second, the third, and the fourth types of covering rough sets.
(2) If 𝒜_𝒞 = 𝒩_𝒞, then 𝒜_𝒞 is generally not a subset of 𝒞. Consequently, an algorithm is needed to compute all granular reducts of 𝒞 for the fifth, the sixth, and the seventh types of covering rough set models.

Next we examine the granular reduction algorithm for the fifth, the sixth, and the seventh types of covering rough sets. Let 𝒞 be a covering of 𝑈 and 𝒩_𝒞 = {𝑁(𝑥) : 𝑥 ∈ 𝑈}; 𝒩_𝒞 is the collection of all approximation elements of the fifth, the sixth, or the seventh type of lower/upper approximation operations, and it is called the 𝒩-approximation space of 𝒞. Given a pair of approximation operations, the set approximations of any 𝑋 ⊆ 𝑈 are determined by the 𝒩-approximation space. Thus, for the fifth, the sixth, and the seventh types of covering rough set models, the purpose of granular reduction is to find the minimal subsets 𝒞′ of 𝒞 such that 𝒩_𝒞′ = 𝒩_𝒞. The granular reducts based on the 𝒩-approximation spaces are called 𝒩-reducts. 𝑁red(𝒞) is the set of all 𝒩-reducts of 𝒞, and 𝑁𝐼(𝒞) is the set of all 𝒩-irreducible elements of 𝒞 (see [30] for the details).

In Pawlak's rough set theory, for every pair 𝑥, 𝑦 ∈ 𝑈, if 𝑦 belongs to the equivalence class containing 𝑥, we say that 𝑥 and 𝑦 are indiscernible; otherwise, they are discernible. Let ℛ = {𝑅₁, 𝑅₂, …, 𝑅ₙ} be a family of equivalence relations on 𝑈 and 𝑅ᵢ ∈ ℛ. 𝑅ᵢ is indispensable in ℛ if and only if there is a pair 𝑥, 𝑦 ∈ 𝑈 such that the relation between 𝑥 and 𝑦 is altered after deleting 𝑅ᵢ from ℛ. The attribute reduction of Pawlak's rough sets finds minimal subsets of ℛ which keep the relations invariant for any 𝑥, 𝑦 ∈ 𝑈. Based on this statement, the discernibility matrix method to compute all reducts of Pawlak's rough sets was proposed in [29]. In covering rough sets, however, the discernibility relation between 𝑥, 𝑦 ∈ 𝑈 differs from that in Pawlak's rough sets.

Let 𝒞 be a covering of 𝑈 and (𝑥, 𝑦) ∈ 𝑈 × 𝑈. We call (𝑥, 𝑦) indiscernible if 𝑦 ∈ 𝑁(𝑥), that is, 𝑁(𝑦) ⊆ 𝑁(𝑥); otherwise, (𝑥, 𝑦) is discernible. When 𝒞 is a partition, this new discernibility relation coincides with Pawlak's, so it is an extension of Pawlak's discernibility relation. In Pawlak's rough sets, (𝑥, 𝑦) is indiscernible if and only if (𝑦, 𝑥) is indiscernible. However, for a general covering, if 𝑁(𝑦) ⊆ 𝑁(𝑥) and 𝑁(𝑥) ⊄ 𝑁(𝑦), that is, 𝑦 ∈ 𝑁(𝑥) and 𝑥 ∉ 𝑁(𝑦), then (𝑥, 𝑦) is indiscernible while (𝑦, 𝑥) is discernible. Hereafter, we call these relations the relations of (𝑥, 𝑦) with respect to 𝒞. The following proposition characterizes these relations.

Proposition 3.1. Let 𝒞 = {𝐶ᵢ : 𝑖 = 1, 2, …, 𝑛} be a covering of 𝑈, and let 𝒞ₓ = {𝐶ᵢ ∈ 𝒞 : 𝑥 ∈ 𝐶ᵢ}. (1) 𝑦 ∈ 𝑁(𝑥) if and only if 𝒞ₓ ⊆ 𝒞_𝑦. (2) 𝑦 ∉ 𝑁(𝑥) if and only if there is 𝐶ᵢ ∈ 𝒞 such that 𝑥 ∈ 𝐶ᵢ and 𝑦 ∉ 𝐶ᵢ.

Proof. (1) 𝑦 ∈ 𝑁(𝑥) = ∩𝒞ₓ ⟺ 𝑦 ∈ 𝐶ᵢ for any 𝐶ᵢ ∈ 𝒞ₓ ⟺ 𝐶ᵢ ∈ 𝒞_𝑦 for any 𝐶ᵢ ∈ 𝒞ₓ ⟺ 𝒞ₓ ⊆ 𝒞_𝑦.
(2) It is evident from (1).
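Proposition 3.1 gives a direct test for the covering discernibility relation. The following sketch (ours) checks it without computing the full neighborhood, and its sample call exhibits the asymmetry discussed above.

```python
# Our illustration of the covering discernibility relation: (x, y) is
# indiscernible iff y belongs to every block containing x, i.e.,
# C_x ⊆ C_y (Proposition 3.1).

def indiscernible(cover, x, y):
    return all(y in K for K in cover if x in K)

cover = [{1, 2}, {1, 3}, {2, 3, 4}]
# (4, 2) is indiscernible but (2, 4) is discernible:
print(indiscernible(cover, 4, 2), indiscernible(cover, 2, 4))  # True False
```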

Theorem 3.2. Let 𝒞 be a covering of 𝑈 and 𝐶ᵢ ∈ 𝒞. Then 𝒩_𝒞 ≠ 𝒩_{𝒞−{𝐶ᵢ}} if and only if there is (𝑥, 𝑦) ∈ 𝑈 × 𝑈 whose discernibility relation with respect to 𝒞 is changed after deleting 𝐶ᵢ from 𝒞.

Proof. Suppose that 𝒩_𝒞 ≠ 𝒩_{𝒞−{𝐶ᵢ}}; then there is at least one element 𝑥 ∈ 𝑈 such that 𝑁_𝒞(𝑥) ≠ 𝑁_{𝒞−{𝐶ᵢ}}(𝑥), that is, 𝑁_𝒞(𝑥) ⊊ 𝑁_{𝒞−{𝐶ᵢ}}(𝑥). Since 𝑁_{𝒞−{𝐶ᵢ}}(𝑥) − 𝑁_𝒞(𝑥) ≠ Ø, suppose that 𝑦 ∈ 𝑁_{𝒞−{𝐶ᵢ}}(𝑥) − 𝑁_𝒞(𝑥); then 𝑦 ∈ 𝑁_{𝒞−{𝐶ᵢ}}(𝑥) and 𝑦 ∉ 𝑁_𝒞(𝑥). Namely, (𝑥, 𝑦) is discernible with respect to 𝒞, while (𝑥, 𝑦) is indiscernible with respect to 𝒞 − {𝐶ᵢ}.
Conversely, suppose that there is (𝑥, 𝑦) ∈ 𝑈 × 𝑈 whose discernibility relation with respect to 𝒞 is changed after deleting 𝐶ᵢ from 𝒞; put differently, (𝑥, 𝑦) is discernible with respect to 𝒞, while (𝑥, 𝑦) is indiscernible with respect to 𝒞 − {𝐶ᵢ}. Then we have 𝑦 ∈ 𝑁_{𝒞−{𝐶ᵢ}}(𝑥) and 𝑦 ∉ 𝑁_𝒞(𝑥), so 𝑦 ∈ 𝑁_{𝒞−{𝐶ᵢ}}(𝑥) − 𝑁_𝒞(𝑥). Thus 𝑁_𝒞(𝑥) ≠ 𝑁_{𝒞−{𝐶ᵢ}}(𝑥), which implies 𝒩_𝒞 ≠ 𝒩_{𝒞−{𝐶ᵢ}}.

The purpose of granular reduction of a covering 𝒞 is to find the minimal subsets of 𝒞 which keep the same classification ability as 𝒞 or, put differently, keep 𝒩_𝒞 invariant. By Theorem 3.2, keeping 𝒩_𝒞 unchanged amounts to keeping the discernibility relation of every (𝑥, 𝑦) ∈ 𝑈 × 𝑈 invariant. Based on this statement, we are able to compute granular reducts with a discernibility matrix.

Definition 3.3. Let 𝑈 = {𝑥₁, 𝑥₂, …, 𝑥ₙ}, and let 𝒞 be a covering of 𝑈. 𝑀(𝑈, 𝒞) is an 𝑛 × 𝑛 matrix (𝑐ᵢⱼ)ₙ×ₙ, called the discernibility matrix of (𝑈, 𝒞), where
(1) 𝑐ᵢⱼ = Ø if 𝑥ⱼ ∈ 𝑁(𝑥ᵢ);
(2) 𝑐ᵢⱼ = {𝐶 ∈ 𝒞 : 𝑥ᵢ ∈ 𝐶, 𝑥ⱼ ∉ 𝐶} if 𝑥ⱼ ∉ 𝑁(𝑥ᵢ).
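Definition 3.3 translates directly into code. The sketch below is ours; it stores the covering as a dict from block names to sets (an assumption of this sketch) so that matrix entries can be reported by name.

```python
# A direct transcription of Definition 3.3 (our sketch): c_ij = Ø if
# x_j ∈ N(x_i), otherwise the names of the blocks containing x_i but not x_j.

def discernibility_matrix(cover, U):
    """cover: dict mapping block names to sets; U: iterable of objects."""
    U = list(U)
    M = {}
    for xi in U:
        # N(x_i): intersection of all blocks containing x_i
        N_xi = set.intersection(*(set(S) for S in cover.values() if xi in S))
        for xj in U:
            if xj in N_xi:
                M[(xi, xj)] = frozenset()
            else:
                M[(xi, xj)] = frozenset(
                    name for name, S in cover.items() if xi in S and xj not in S)
    return M
```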

This definition of the discernibility matrix is more concise than the one in [11, 15], owing to the more reasonable statement of the discernibility relations. Likewise, we restate the characterizations of 𝒩-reduction.

Proposition 3.4. Consider that 𝑁𝐼(𝒞) = {𝐶 : 𝑐ᵢⱼ = {𝐶} for some 𝑐ᵢⱼ ∈ 𝑀(𝑈, 𝒞)}.

Proof. For any 𝐶 ∈ 𝑁𝐼(𝒞), 𝒩_{𝒞−{𝐶}} ≠ 𝒩_𝒞; then there is (𝑥ᵢ, 𝑥ⱼ) ∈ 𝑈 × 𝑈 such that 𝑥ⱼ ∈ 𝑁_{𝒞−{𝐶}}(𝑥ᵢ) and 𝑥ⱼ ∉ 𝑁_𝒞(𝑥ᵢ). This implies that 𝑥ᵢ ∈ 𝐶 and 𝑥ⱼ ∉ 𝐶. Moreover, for any 𝐶′ ∈ 𝒞 − {𝐶}, since 𝑥ⱼ ∈ 𝑁_{𝒞−{𝐶}}(𝑥ᵢ), we have 𝑥ⱼ ∈ 𝐶′ whenever 𝑥ᵢ ∈ 𝐶′. Thus 𝑐ᵢⱼ = {𝐶}.
If 𝑐ᵢⱼ = {𝐶} for some 𝑐ᵢⱼ ∈ 𝑀(𝑈, 𝒞), then 𝑥ᵢ ∈ 𝐶 and 𝑥ⱼ ∉ 𝐶, and for any 𝐶′ ∈ 𝒞 − {𝐶}, if 𝑥ᵢ ∈ 𝐶′ then 𝑥ⱼ ∈ 𝐶′. That is, 𝑥ⱼ ∈ 𝑁_{𝒞−{𝐶}}(𝑥ᵢ) and 𝑥ⱼ ∉ 𝑁_𝒞(𝑥ᵢ), so 𝑁_{𝒞−{𝐶}}(𝑥ᵢ) ≠ 𝑁_𝒞(𝑥ᵢ). Namely, 𝒩_𝒞 ≠ 𝒩_{𝒞−{𝐶}}, which implies 𝐶 ∈ 𝑁𝐼(𝒞).

Proposition 3.5. Suppose that 𝒞′ ⊆ 𝒞; then 𝒩_𝒞′ = 𝒩_𝒞 if and only if 𝒞′ ∩ 𝑐ᵢⱼ ≠ Ø for every 𝑐ᵢⱼ ≠ Ø.

Proof. 𝒩_𝒞′ = 𝒩_𝒞 ⟺ for any (𝑥ᵢ, 𝑥ⱼ) ∈ 𝑈 × 𝑈, 𝑥ⱼ ∈ 𝑁_𝒞′(𝑥ᵢ) if and only if 𝑥ⱼ ∈ 𝑁_𝒞(𝑥ᵢ) ⟺ for any (𝑥ᵢ, 𝑥ⱼ) ∈ 𝑈 × 𝑈, there is 𝐶 ∈ 𝒞′ such that 𝑥ᵢ ∈ 𝐶 and 𝑥ⱼ ∉ 𝐶 if and only if there is 𝐶 ∈ 𝒞 such that 𝑥ᵢ ∈ 𝐶 and 𝑥ⱼ ∉ 𝐶 ⟺ for any 𝑐ᵢⱼ ≠ Ø, 𝒞′ ∩ 𝑐ᵢⱼ ≠ Ø.

Proposition 3.6. Suppose that 𝒞′ ⊆ 𝒞; then 𝒞′ ∈ 𝑁red(𝒞) if and only if 𝒞′ is a minimal set satisfying 𝒞′ ∩ 𝑐ᵢⱼ ≠ Ø for every 𝑐ᵢⱼ ≠ Ø.

Definition 3.7. Let 𝑈 = {𝑥₁, 𝑥₂, …, 𝑥ₙ}, let 𝒞 = {𝐶₁, 𝐶₂, …, 𝐶ₘ} be a covering of 𝑈, and let 𝑀(𝑈, 𝒞) = (𝑐ᵢⱼ)ₙ×ₙ be the discernibility matrix of (𝑈, 𝒞). The discernibility function 𝑓(𝑈, 𝒞) is a Boolean function of 𝑚 Boolean variables 𝐶₁, 𝐶₂, …, 𝐶ₘ, corresponding to the covering elements 𝐶₁, 𝐶₂, …, 𝐶ₘ, respectively, defined as 𝑓(𝑈, 𝒞)(𝐶₁, 𝐶₂, …, 𝐶ₘ) = ⋀{⋁(𝑐ᵢⱼ) : 𝑐ᵢⱼ ∈ 𝑀(𝑈, 𝒞), 𝑐ᵢⱼ ≠ Ø}, where ⋁(𝑐ᵢⱼ) is the disjunction of all variables in 𝑐ᵢⱼ.

Theorem 3.8. Let 𝒞 be a covering of 𝑈, let 𝑓(𝑈, 𝒞) be the discernibility function, and let 𝑔(𝑈, 𝒞) be the reduced disjunctive form of 𝑓(𝑈, 𝒞) obtained by applying the multiplication and absorption laws. If 𝑔(𝑈, 𝒞) = (⋀𝒞₁) ∨ ⋯ ∨ (⋀𝒞ₗ), where 𝒞ₖ ⊆ 𝒞, 𝑘 = 1, 2, …, 𝑙, and every element in 𝒞ₖ appears only once, then 𝑁red(𝒞) = {𝒞₁, 𝒞₂, …, 𝒞ₗ}.

Proof. For every 𝑘 = 1, 2, …, 𝑙, ⋀𝒞ₖ implies ⋁(𝑐ᵢⱼ) for any nonempty 𝑐ᵢⱼ ∈ 𝑀(𝑈, 𝒞), so 𝒞ₖ ∩ 𝑐ᵢⱼ ≠ Ø. Let 𝒞′ₖ = 𝒞ₖ − {𝐶} for any 𝐶 ∈ 𝒞ₖ; then 𝑔(𝑈, 𝒞) ≢ (⋀𝒞₁) ∨ ⋯ ∨ (⋀𝒞′ₖ) ∨ ⋯ ∨ (⋀𝒞ₗ). If 𝒞′ₖ ∩ 𝑐ᵢⱼ ≠ Ø for every 𝑐ᵢⱼ ∈ 𝑀(𝑈, 𝒞), then ⋀𝒞′ₖ implies ⋁(𝑐ᵢⱼ) for every 𝑐ᵢⱼ ∈ 𝑀(𝑈, 𝒞), that is, 𝑔(𝑈, 𝒞) ≡ (⋀𝒞₁) ∨ ⋯ ∨ (⋀𝒞′ₖ) ∨ ⋯ ∨ (⋀𝒞ₗ), which is a contradiction. This implies that there is 𝑐ᵢ₀ⱼ₀ ∈ 𝑀(𝑈, 𝒞) such that 𝒞′ₖ ∩ 𝑐ᵢ₀ⱼ₀ = Ø. Thus, by Propositions 3.5 and 3.6, 𝒞ₖ is a reduct of 𝒞.
For any 𝒞′ ∈ 𝑁red(𝒞), we have 𝒞′ ∩ 𝑐ᵢⱼ ≠ Ø for every nonempty 𝑐ᵢⱼ ∈ 𝑀(𝑈, 𝒞), so ⋀𝒞′ implies 𝑓(𝑈, 𝒞) = 𝑔(𝑈, 𝒞). Suppose that 𝒞ₖ − 𝒞′ ≠ Ø for every 𝑘 = 1, 2, …, 𝑙; then for every 𝑘 there is 𝐶ₖ ∈ 𝒞ₖ − 𝒞′. By rewriting 𝑔(𝑈, 𝒞) = (⋁ₖ₌₁ˡ 𝐶ₖ) ∧ Φ, we see that ⋀𝒞′ implies the clause ⋁ₖ₌₁ˡ 𝐶ₖ, that is, 𝒞′ ∩ {𝐶₁, …, 𝐶ₗ} ≠ Ø, which is a contradiction. So 𝒞ₖ₀ ⊆ 𝒞′ for some 𝑘₀; since both 𝒞′ and 𝒞ₖ₀ are reducts, it is evident that 𝒞′ = 𝒞ₖ₀. Consequently, 𝑁red(𝒞) = {𝒞₁, 𝒞₂, …, 𝒞ₗ}.

Algorithm 3.9. Consider the following:
input: 𝑈,𝒞,
output: 𝑁red(𝒞) and 𝑁𝐼(𝒞) // the set of all granular reducts and the set of all 𝒩-irreducible elements.
Step 1: 𝑀(𝑈, 𝒞) = (𝑐ᵢⱼ)ₙ×ₙ; for each 𝑐ᵢⱼ, let 𝑐ᵢⱼ = Ø.
Step 2: for each 𝑥ᵢ ∈ 𝑈, compute 𝑁(𝑥ᵢ) = ∩{𝐶 ∈ 𝒞 : 𝑥ᵢ ∈ 𝐶}; if 𝑥ⱼ ∉ 𝑁(𝑥ᵢ), let 𝑐ᵢⱼ = {𝐶 ∈ 𝒞 : 𝑥ᵢ ∈ 𝐶, 𝑥ⱼ ∉ 𝐶}.
Step 3: 𝑓(𝑈, 𝒞)(𝐶₁, 𝐶₂, …, 𝐶ₘ) = ⋀{⋁(𝑐ᵢⱼ) : 𝑐ᵢⱼ ∈ 𝑀(𝑈, 𝒞), 𝑐ᵢⱼ ≠ Ø}.
Step 4: reduce 𝑓(𝑈, 𝒞) to 𝑔(𝑈, 𝒞) = (⋀𝒞₁) ∨ ⋯ ∨ (⋀𝒞ₗ) // where 𝒞ₖ ⊆ 𝒞, 𝑘 = 1, 2, …, 𝑙, and every element in 𝒞ₖ appears only once.
Step 5: output 𝑁red(𝒞) = {𝒞₁, 𝒞₂, …, 𝒞ₗ} and 𝑁𝐼(𝒞) = ∩𝑁red(𝒞).
Step 6: end.
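A runnable sketch of Algorithm 3.9 follows. The code is ours, not the authors'; it expands the discernibility function clause by clause, applying absorption after every multiplication, and checks itself against Example 3.10 below. It assumes discernibility_matrix() from the earlier sketch.

```python
# Sketch of Algorithm 3.9 (our implementation): multiplication plus
# absorption on the discernibility function yields all N-reducts.

def all_granular_reducts(cover, U):
    M = discernibility_matrix(cover, U)
    clauses = {c for c in M.values() if c}                 # Step 3: CNF clauses
    # absorption on f: drop clauses that strictly contain another clause
    clauses = [c for c in clauses if not any(d < c for d in clauses)]
    implicants = [frozenset()]                             # Step 4: expand to DNF
    for clause in clauses:
        expanded = {imp | {name} for imp in implicants for name in clause}
        # absorption on g: keep only minimal conjunctions
        implicants = [p for p in expanded if not any(q < p for q in expanded)]
    return implicants                                      # Step 5: Nred(C)

# Check against Example 3.10 (covering copied from the paper):
cover = {"C1": {1, 2, 3}, "C2": {1, 4}, "C3": {3, 4, 5}, "C4": {1, 4, 5},
         "C5": {1, 4, 6}, "C6": {3, 4}, "C7": {1, 4, 5, 6}}
reducts = all_granular_reducts(cover, range(1, 7))
print(sorted(sorted(r) for r in reducts))
# -> [['C1', 'C3', 'C4', 'C5'], ['C1', 'C3', 'C5', 'C7']]
NI = frozenset.intersection(*reducts)                      # {'C1', 'C3', 'C5'}
```

Intermediate absorption is safe here: a partial product that strictly contains another can never grow into a conjunction the smaller one cannot also reach, so the minimal results are preserved.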

The following example is used to illustrate our idea.

Example 3.10. Suppose that 𝑈 = {𝑥₁, 𝑥₂, …, 𝑥₆}, where 𝑥ᵢ, 𝑖 = 1, 2, …, 6, denote six objects, and let 𝐶ᵢ, 𝑖 = 1, 2, …, 7, denote seven properties. The information is presented in Table 1; that the 𝑖th object possesses the 𝑗th attribute is indicated by a mark in the 𝑖𝑗-position of the table.

Table 1

      𝐶₁  𝐶₂  𝐶₃  𝐶₄  𝐶₅  𝐶₆  𝐶₇
𝑥₁    ✓   ✓       ✓   ✓       ✓
𝑥₂    ✓
𝑥₃    ✓       ✓           ✓
𝑥₄        ✓   ✓   ✓   ✓   ✓   ✓
𝑥₅            ✓   ✓           ✓
𝑥₆                    ✓       ✓

{𝑥1,𝑥2,𝑥3} is the set of all objects possessing the attribute 𝐶1, and it is denoted by 𝐶1={𝑥1,𝑥2,𝑥3}. Similarly, 𝐶2={𝑥1,𝑥4}, 𝐶3={𝑥3,𝑥4,𝑥5}, 𝐶4={𝑥1,𝑥4,𝑥5}, 𝐶5={𝑥1,𝑥4,𝑥6}, 𝐶6={𝑥3,𝑥4}, and 𝐶7={𝑥1,𝑥4,𝑥5,𝑥6}. Evidently, 𝒞={𝐶1,𝐶2,𝐶3,𝐶4,𝐶5,𝐶6,𝐶7} is a covering on 𝑈.

Then 𝑁(𝑥₁) = {𝑥₁}, 𝑁(𝑥₂) = {𝑥₁, 𝑥₂, 𝑥₃}, 𝑁(𝑥₃) = {𝑥₃}, 𝑁(𝑥₄) = {𝑥₄}, 𝑁(𝑥₅) = {𝑥₄, 𝑥₅}, and 𝑁(𝑥₆) = {𝑥₁, 𝑥₄, 𝑥₆} (note that 𝑥₁ belongs to both 𝐶₅ and 𝐶₇, the only blocks containing 𝑥₆).

The discernibility matrix of (𝑈, 𝒞) is exhibited as follows (row 𝑖 lists 𝑐ᵢ₁, …, 𝑐ᵢ₆):

𝑥₁: Ø, {𝐶₂,𝐶₄,𝐶₅,𝐶₇}, {𝐶₂,𝐶₄,𝐶₅,𝐶₇}, {𝐶₁}, {𝐶₁,𝐶₂,𝐶₅}, {𝐶₁,𝐶₂,𝐶₄}
𝑥₂: Ø, Ø, Ø, {𝐶₁}, {𝐶₁}, {𝐶₁}
𝑥₃: {𝐶₃,𝐶₆}, {𝐶₃,𝐶₆}, Ø, {𝐶₁}, {𝐶₁,𝐶₆}, {𝐶₁,𝐶₃,𝐶₆}
𝑥₄: {𝐶₃,𝐶₆}, {𝐶₂,𝐶₃,𝐶₄,𝐶₅,𝐶₆,𝐶₇}, {𝐶₂,𝐶₄,𝐶₅,𝐶₇}, Ø, {𝐶₂,𝐶₅,𝐶₆}, {𝐶₂,𝐶₃,𝐶₄,𝐶₆}
𝑥₅: {𝐶₃}, {𝐶₃,𝐶₄,𝐶₇}, {𝐶₄,𝐶₇}, Ø, Ø, {𝐶₃,𝐶₄}
𝑥₆: Ø, {𝐶₅,𝐶₇}, {𝐶₅,𝐶₇}, Ø, {𝐶₅}, Ø (3.1)

𝑓(𝑈, 𝒞)(𝐶₁, 𝐶₂, …, 𝐶₇) = ⋀{⋁(𝑐ᵢⱼ) : 𝑖, 𝑗 = 1, 2, …, 6, 𝑐ᵢⱼ ≠ Ø}
= 𝐶₁ ∧ 𝐶₃ ∧ 𝐶₅ ∧ (𝐶₄ ∨ 𝐶₇)
= (𝐶₁ ∧ 𝐶₃ ∧ 𝐶₄ ∧ 𝐶₅) ∨ (𝐶₁ ∧ 𝐶₃ ∧ 𝐶₅ ∧ 𝐶₇). (3.2)

So 𝑁red(𝒞) = {{𝐶₁, 𝐶₃, 𝐶₄, 𝐶₅}, {𝐶₁, 𝐶₃, 𝐶₅, 𝐶₇}} and 𝑁𝐼(𝒞) = {𝐶₁, 𝐶₃, 𝐶₅}. As a result, Table 1 can be simplified into Table 2 or Table 3 while the ability of classification stays invariant. Obviously, the granular reduction algorithm can reduce data sets as shown.

Table 2

      𝐶₁  𝐶₃  𝐶₄  𝐶₅
𝑥₁    ✓       ✓   ✓
𝑥₂    ✓
𝑥₃    ✓   ✓
𝑥₄        ✓   ✓   ✓
𝑥₅        ✓   ✓
𝑥₆                ✓

Table 3

      𝐶₁  𝐶₃  𝐶₅  𝐶₇
𝑥₁    ✓       ✓   ✓
𝑥₂    ✓
𝑥₃    ✓   ✓
𝑥₄        ✓   ✓   ✓
𝑥₅        ✓       ✓
𝑥₆            ✓   ✓

4. The Simplification of Discernibility Matrixes

To find the set of all granular reducts, we have proposed the discernibility matrix method. Unfortunately, computing all reducts is at least NP-hard, since the discernibility matrix in this paper is more complex than the one in [33]. Accordingly, we simplify the discernibility matrices in this section. In addition, a heuristic algorithm is presented to avoid the NP-hard problem.

Definition 4.1. Let 𝑀(𝑈, 𝒞) = (𝑐ᵢⱼ)ₙ×ₙ be the discernibility matrix of (𝑈, 𝒞). For any 𝑐ᵢⱼ ∈ 𝑀(𝑈, 𝒞), if there is a nonempty element 𝑐ᵢ₀ⱼ₀ ∈ 𝑀(𝑈, 𝒞) − {𝑐ᵢⱼ} such that 𝑐ᵢ₀ⱼ₀ ⊆ 𝑐ᵢⱼ, let 𝑐′ᵢⱼ = Ø; otherwise, let 𝑐′ᵢⱼ = 𝑐ᵢⱼ. We then get a new discernibility matrix SIM(𝑈, 𝒞) = (𝑐′ᵢⱼ)ₙ×ₙ, which is called the simplified discernibility matrix of (𝑈, 𝒞).
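Definition 4.1, read so that one representative is kept among duplicate entries (this is how the worked examples below treat the repeated singleton {𝐶₁}), can be sketched as follows; the code is ours.

```python
# Our sketch of Definition 4.1: blank every entry that contains another
# nonempty entry, keeping one representative of each minimal entry.

def simplify_matrix(M):
    entries = {c for c in M.values() if c}
    minimal = {c for c in entries if not any(d < c for d in entries)}
    SIM, seen = {}, set()
    for pos, c in M.items():
        if c in minimal and c not in seen:
            SIM[pos] = c
            seen.add(c)     # one representative per distinct minimal entry
        else:
            SIM[pos] = frozenset()
    return SIM
```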

Theorem 4.2. Let 𝑀(𝑈, 𝒞) be the discernibility matrix of (𝑈, 𝒞), let SIM(𝑈, 𝒞) be the simplified discernibility matrix, and let 𝒞′ ⊆ 𝒞. Then 𝒞′ ∩ 𝑐ᵢⱼ ≠ Ø for any nonempty element 𝑐ᵢⱼ ∈ 𝑀(𝑈, 𝒞) if and only if 𝒞′ ∩ 𝑐′ᵢⱼ ≠ Ø for any nonempty element 𝑐′ᵢⱼ ∈ SIM(𝑈, 𝒞).

Proof. If 𝒞′ ∩ 𝑐ᵢⱼ ≠ Ø for every nonempty 𝑐ᵢⱼ ∈ 𝑀(𝑈, 𝒞), it is evident that 𝒞′ ∩ 𝑐′ᵢⱼ ≠ Ø for every nonempty 𝑐′ᵢⱼ ∈ SIM(𝑈, 𝒞), since every nonempty entry of SIM(𝑈, 𝒞) is an entry of 𝑀(𝑈, 𝒞).
Conversely, suppose that 𝒞′ ∩ 𝑐′ᵢⱼ ≠ Ø for every nonempty 𝑐′ᵢⱼ ∈ SIM(𝑈, 𝒞), and consider any nonempty 𝑐ᵢⱼ ∈ 𝑀(𝑈, 𝒞). If there is a nonempty 𝑐ᵢ₀ⱼ₀ ∈ 𝑀(𝑈, 𝒞) − {𝑐ᵢⱼ} such that 𝑐ᵢ₀ⱼ₀ ⊆ 𝑐ᵢⱼ, take 𝑐ᵢ₀ⱼ₀ minimal, that is, 𝑐ᵢ₁ⱼ₁ ⊄ 𝑐ᵢ₀ⱼ₀ for any nonempty 𝑐ᵢ₁ⱼ₁ ∈ 𝑀(𝑈, 𝒞) − {𝑐ᵢⱼ, 𝑐ᵢ₀ⱼ₀}; then 𝑐′ᵢ₀ⱼ₀ = 𝑐ᵢ₀ⱼ₀ ≠ Ø. Since 𝒞′ ∩ 𝑐′ᵢ₀ⱼ₀ ≠ Ø, we have 𝒞′ ∩ 𝑐ᵢ₀ⱼ₀ ≠ Ø and thus 𝒞′ ∩ 𝑐ᵢⱼ ≠ Ø. If 𝑐ᵢ₀ⱼ₀ ⊄ 𝑐ᵢⱼ for any nonempty 𝑐ᵢ₀ⱼ₀ ∈ 𝑀(𝑈, 𝒞) − {𝑐ᵢⱼ}, then 𝑐′ᵢⱼ = 𝑐ᵢⱼ; since 𝒞′ ∩ 𝑐′ᵢⱼ ≠ Ø, we have 𝒞′ ∩ 𝑐ᵢⱼ ≠ Ø. Thus 𝒞′ ∩ 𝑐ᵢⱼ ≠ Ø for every nonempty 𝑐ᵢⱼ ∈ 𝑀(𝑈, 𝒞).

Proposition 4.3. Suppose that 𝒞′ ⊆ 𝒞; then 𝒞′ ∈ 𝑁red(𝒞) if and only if 𝒞′ is a minimal set satisfying 𝒞′ ∩ 𝑐′ᵢⱼ ≠ Ø for every nonempty 𝑐′ᵢⱼ ∈ SIM(𝑈, 𝒞).

Proposition 4.4. Consider that ∪{𝑐′ᵢⱼ : 𝑐′ᵢⱼ ∈ SIM(𝑈, 𝒞)} = ∪𝑁red(𝒞).

Proof. Suppose that 𝐶 ∈ ∪{𝑐′ᵢⱼ : 𝑐′ᵢⱼ ∈ SIM(𝑈, 𝒞)}; then there is 𝑐′ᵢⱼ ∈ SIM(𝑈, 𝒞) such that 𝐶 ∈ 𝑐′ᵢⱼ. For any nonempty 𝑐′ᵢⱼ ∈ SIM(𝑈, 𝒞), if 𝐶 ∈ 𝑐′ᵢⱼ, let 𝑐¹ᵢⱼ = {𝐶}; otherwise, let 𝑐¹ᵢⱼ = {𝐶ᵢⱼ}, where 𝐶ᵢⱼ ∈ 𝑐′ᵢⱼ. Let 𝑀₁(𝑈, 𝒞) = (𝑐¹ᵢⱼ)ₙ×ₙ; it is easy to prove that 𝐶 ∈ ∪{𝑐¹ᵢⱼ : 𝑐¹ᵢⱼ ∈ 𝑀₁(𝑈, 𝒞)} ∈ 𝑁red(𝒞). Thus 𝐶 ∈ ∪𝑁red(𝒞).
Suppose that 𝐶 ∈ ∪𝑁red(𝒞); then there is 𝒞ₖ ∈ 𝑁red(𝒞) such that 𝐶 ∈ 𝒞ₖ. From Proposition 4.3, we know that 𝒞ₖ is a minimal set satisfying 𝒞ₖ ∩ 𝑐′ᵢⱼ ≠ Ø for every nonempty 𝑐′ᵢⱼ ∈ SIM(𝑈, 𝒞). So there is a 𝑐′ᵢⱼ ∈ SIM(𝑈, 𝒞) such that 𝐶 ∈ 𝑐′ᵢⱼ, or else 𝐶 would be redundant in 𝒞ₖ. Thus 𝐶 ∈ ∪{𝑐′ᵢⱼ : 𝑐′ᵢⱼ ∈ SIM(𝑈, 𝒞)}.
In summary, ∪{𝑐′ᵢⱼ : 𝑐′ᵢⱼ ∈ SIM(𝑈, 𝒞)} = ∪𝑁red(𝒞).

Proposition 4.5. Let SIM(𝑈, 𝒞) = (𝑐′ᵢⱼ)ₙ×ₙ be the simplified discernibility matrix of (𝑈, 𝒞); then SIM(𝑈, 𝒞) is the minimal matrix which computes all granular reducts of 𝒞. That is, for any matrix 𝑀₀(𝑈, 𝒞) = (𝑑ᵢⱼ)ₙ×ₙ with 𝑑ᵢⱼ ⊆ 𝑐′ᵢⱼ, 𝑀₀(𝑈, 𝒞) can compute all granular reducts of 𝒞 if and only if 𝑑ᵢⱼ = 𝑐′ᵢⱼ for all 1 ≤ 𝑖, 𝑗 ≤ 𝑛.

Proof. If 𝑑ᵢⱼ = 𝑐′ᵢⱼ for all 1 ≤ 𝑖, 𝑗 ≤ 𝑛, then 𝑀₀(𝑈, 𝒞) = SIM(𝑈, 𝒞), and 𝑀₀(𝑈, 𝒞) can compute all granular reducts of 𝒞.
Suppose that there is a nonempty 𝑐′ᵢ₀ⱼ₀ ∈ SIM(𝑈, 𝒞) such that 𝑑ᵢ₀ⱼ₀ ⊊ 𝑐′ᵢ₀ⱼ₀. If |𝑐′ᵢ₀ⱼ₀| = 1, say 𝑐′ᵢ₀ⱼ₀ = {𝐶₀}, then 𝑑ᵢ₀ⱼ₀ = Ø. From the definition of the simplified discernibility matrix, we know that 𝐶₀ ∉ 𝑐′ᵢⱼ for any 𝑐′ᵢⱼ ∈ SIM(𝑈, 𝒞) − {𝑐′ᵢ₀ⱼ₀}, hence 𝐶₀ ∉ 𝑑ᵢⱼ for any 𝑑ᵢⱼ ∈ 𝑀₀(𝑈, 𝒞). So 𝑀₀(𝑈, 𝒞) cannot compute any granular reduct of 𝒞. If |𝑐′ᵢ₀ⱼ₀| ≥ 2, we may suppose that 𝑑ᵢ₀ⱼ₀ ≠ Ø. Then there is 𝐶 ∈ 𝑐′ᵢ₀ⱼ₀ − 𝑑ᵢ₀ⱼ₀; let 𝑐¹ᵢ₀ⱼ₀ = {𝐶}. For any 𝑐′ᵢⱼ ∈ SIM(𝑈, 𝒞) − {𝑐′ᵢ₀ⱼ₀}, if 𝐶 ∈ 𝑐′ᵢⱼ, let 𝑐¹ᵢⱼ = Ø; otherwise, let 𝑐¹ᵢⱼ = {𝐶ᵢⱼ}, where 𝐶ᵢⱼ ∈ 𝑐′ᵢⱼ − 𝑐′ᵢ₀ⱼ₀. Let 𝑀₁(𝑈, 𝒞) = (𝑐¹ᵢⱼ)ₙ×ₙ and 𝒞′ = ∪{𝑐¹ᵢⱼ : 𝑐¹ᵢⱼ ∈ 𝑀₁(𝑈, 𝒞)}; it is easy to prove that 𝒞′ ∈ 𝑁red(𝒞). However, 𝒞′ ∩ 𝑑ᵢ₀ⱼ₀ = Ø, that is, 𝑀₀(𝑈, 𝒞) cannot compute all granular reducts of 𝒞. Thus, if 𝑀₀(𝑈, 𝒞) can compute all granular reducts of 𝒞, then 𝑑ᵢⱼ = 𝑐′ᵢⱼ for all 1 ≤ 𝑖, 𝑗 ≤ 𝑛.

From the above propositions, we know that the simplified discernibility matrix is the minimal discernibility matrix which computes the same reducts as the original one. Hereafter, we only examine simplified discernibility matrices instead of general discernibility matrices. The following example illustrates our idea.

Example 4.6. Simplifying the discernibility matrix of (𝑈, 𝒞) in Example 3.10 leaves only one representative of each minimal entry:

𝑥₁: Ø, Ø, Ø, {𝐶₁}, Ø, Ø
𝑥₂–𝑥₄: all entries Ø
𝑥₅: {𝐶₃}, Ø, {𝐶₄,𝐶₇}, Ø, Ø, Ø
𝑥₆: Ø, Ø, Ø, Ø, {𝐶₅}, Ø

𝑓(𝑈, 𝒞)(𝐶₁, 𝐶₂, …, 𝐶₇) = ⋀{⋁(𝑐′ᵢⱼ) : 𝑖, 𝑗 = 1, 2, …, 6, 𝑐′ᵢⱼ ≠ Ø} = 𝐶₅ ∧ 𝐶₁ ∧ 𝐶₃ ∧ (𝐶₄ ∨ 𝐶₇) = (𝐶₁ ∧ 𝐶₃ ∧ 𝐶₄ ∧ 𝐶₅) ∨ (𝐶₁ ∧ 𝐶₃ ∧ 𝐶₅ ∧ 𝐶₇). (4.1)

So 𝑁red(𝒞) = {{𝐶₁, 𝐶₃, 𝐶₄, 𝐶₅}, {𝐶₁, 𝐶₃, 𝐶₅, 𝐶₇}} and 𝑁𝐼(𝒞) = {𝐶₁, 𝐶₃, 𝐶₅}.

From the above example, it is easy to see that the simplified discernibility matrix simplifies the computing process remarkably. Especially, when 𝒞 is a consistent covering as proposed in [30], that is, 𝑁red(𝒞) = {𝑁𝐼(𝒞)}, the unique reduct is 𝑁red(𝒞) = {∪{𝑐′ᵢⱼ : 𝑐′ᵢⱼ ∈ SIM(𝑈, 𝒞)}}.

Unfortunately, although the simplified discernibility matrices are simpler, computing reducts by the discernibility function is still NP-hard. Accordingly, we develop a heuristic algorithm to obtain a reduct from a discernibility matrix directly.

Let 𝑀(𝑈, 𝒞) = (𝑐ᵢⱼ)ₙ×ₙ be a discernibility matrix, and let SIM(𝑈, 𝒞) = (𝑐′ᵢⱼ)ₙ×ₙ be its simplification. We denote the number of elements in 𝑐′ᵢⱼ by |𝑐′ᵢⱼ| and, for any 𝐶 ∈ 𝒞, the number of entries 𝑐′ᵢⱼ which contain 𝐶 by ||𝐶||. By Proposition 3.4, every singleton entry {𝐶} of SIM(𝑈, 𝒞) satisfies 𝐶 ∈ 𝑁𝐼(𝒞), so such elements must be preserved. Since ∪{𝑐′ᵢⱼ : |𝑐′ᵢⱼ| ≥ 2} = ∪𝑁red(𝒞) − 𝑁𝐼(𝒞), if |𝑐′ᵢⱼ| ≥ 2, the elements in 𝑐′ᵢⱼ may either be deleted from 𝒞 or be preserved. Suppose that 𝐶₀ ∈ ∪{𝑐′ᵢⱼ : |𝑐′ᵢⱼ| ≥ 2}; if ||𝐶₀|| ≥ ||𝐶|| for any 𝐶 ∈ ∪{𝑐′ᵢⱼ : |𝑐′ᵢⱼ| ≥ 2}, then 𝐶₀ is called a maximal element with respect to the simplified discernibility matrix SIM(𝑈, 𝒞). The heuristic algorithm to get a reduct directly from a discernibility matrix proceeds as follows.

Algorithm 4.7. Consider the following:
input: 𝑈,𝒞,
output: a granular reduct red.
Step 1: 𝑀(𝑈, 𝒞) = (𝑐ᵢⱼ)ₙ×ₙ; for each 𝑐ᵢⱼ, let 𝑐ᵢⱼ = Ø.
Step 2: for each 𝑥ᵢ ∈ 𝑈, compute 𝑁(𝑥ᵢ) = ∩{𝐶 ∈ 𝒞 : 𝑥ᵢ ∈ 𝐶}; if 𝑥ⱼ ∉ 𝑁(𝑥ᵢ), let 𝑐ᵢⱼ = {𝐶 ∈ 𝒞 : 𝑥ᵢ ∈ 𝐶, 𝑥ⱼ ∉ 𝐶} // get the discernibility matrix.
Step 3: for each 𝑐ᵢⱼ ∈ 𝑀(𝑈, 𝒞), if there is a nonempty element 𝑐ᵢ₀ⱼ₀ ∈ 𝑀(𝑈, 𝒞) − {𝑐ᵢⱼ} such that 𝑐ᵢ₀ⱼ₀ ⊆ 𝑐ᵢⱼ, let 𝑐ᵢⱼ = Ø // get the simplified discernibility matrix.
Step 4: for each 𝐶ᵢ appearing in the matrix, compute ||𝐶ᵢ|| and select a maximal element 𝐶₀ of SIM(𝑈, 𝒞); for each 𝑐ᵢⱼ, if 𝐶₀ ∈ 𝑐ᵢⱼ, let 𝑐ᵢⱼ = {𝐶₀}.
Step 5: if there is 𝑐ᵢⱼ such that |𝑐ᵢⱼ| ≥ 2, return to Step 3; else output red = ∪{𝑐ᵢⱼ : 𝑐ᵢⱼ ∈ 𝑀(𝑈, 𝒞)}.
Step 6: end.
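The heuristic of Algorithm 4.7 can be sketched as follows. The code is ours; it reuses simplify_matrix() from the earlier sketch and, as one assumption of the sketch, counts ||𝐶|| only over multi-element entries, which is where a choice remains.

```python
# Sketch of Algorithm 4.7: repeatedly pick a maximal element C0 and collapse
# every entry containing C0 to {C0}; the surviving singletons form one reduct.

from collections import Counter

def heuristic_reduct(M):
    entries = [c for c in simplify_matrix(M).values() if c]
    while any(len(c) >= 2 for c in entries):
        counts = Counter(n for c in entries if len(c) >= 2 for n in c)
        C0 = counts.most_common(1)[0][0]                  # a maximal element
        entries = [frozenset({C0}) if C0 in c else c for c in entries]
        # re-simplify: drop entries strictly containing another entry
        entries = [c for c in entries if not any(d < c for d in entries)]
    return set().union(*entries)

# On the data of Example 3.10 this returns {C1, C3, C5} plus C4 or C7, i.e.,
# one of the two reducts of Example 4.8 below (which one depends on the
# tie-break inside Counter.most_common).
```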

Example 4.8. The simplified discernibility matrix of (𝑈, 𝒞) in Example 3.10 is as follows:

𝑥₁: Ø, Ø, Ø, {𝐶₁}, Ø, Ø
𝑥₂–𝑥₄: all entries Ø
𝑥₅: {𝐶₃}, Ø, {𝐶₄,𝐶₇}, Ø, Ø, Ø
𝑥₆: Ø, Ø, Ø, Ø, {𝐶₅}, Ø (4.2)

For the maximal element 𝐶₄ of SIM(𝑈, 𝒞), let 𝑐¹₅₃ = {𝐶₄}; then we get 𝑀₁(𝑈, 𝒞) as follows:

𝑥₁: Ø, Ø, Ø, {𝐶₁}, Ø, Ø
𝑥₂–𝑥₄: all entries Ø
𝑥₅: {𝐶₃}, Ø, {𝐶₄}, Ø, Ø, Ø
𝑥₆: Ø, Ø, Ø, Ø, {𝐶₅}, Ø (4.3)

Thus, {𝐶₁, 𝐶₃, 𝐶₄, 𝐶₅} = ∪{𝑐¹ᵢⱼ : 𝑐¹ᵢⱼ ∈ 𝑀₁(𝑈, 𝒞)} is a granular reduct of 𝒞.

For the maximal element 𝐶₇ of SIM(𝑈, 𝒞), let 𝑐²₅₃ = {𝐶₇}; then we get 𝑀₂(𝑈, 𝒞) as follows:

𝑥₁: Ø, Ø, Ø, {𝐶₁}, Ø, Ø
𝑥₂–𝑥₄: all entries Ø
𝑥₅: {𝐶₃}, Ø, {𝐶₇}, Ø, Ø, Ø
𝑥₆: Ø, Ø, Ø, Ø, {𝐶₅}, Ø (4.4)

Thus, {𝐶₁, 𝐶₃, 𝐶₅, 𝐶₇} = ∪{𝑐²ᵢⱼ : 𝑐²ᵢⱼ ∈ 𝑀₂(𝑈, 𝒞)} is also a granular reduct of 𝒞.

The above example shows that the heuristic algorithm avoids the NP-hard problem and generates a granular reduct directly from the simplified discernibility matrix. With the heuristic algorithm, the granular reduction theory based on discernibility matrices is no longer limited to the theoretical level but is applicable in practice.

5. Conclusion

In this paper, we develop, for the first time, an algorithm based on discernibility matrices to compute all the granular reducts of covering rough sets. A simplification of the discernibility matrix is also proposed for the first time. Moreover, a heuristic algorithm to compute a granular reduct is presented to avoid the NP-hard problem in granular reduction, so that a granular reduct can be generated rapidly.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grants no. 11201490 and no. 11061004 and by the Science and Technology Plan Project of Hunan Province under Grant no. 2011FJ3152.

References

  1. Z. Pawlak, “Rough sets,” International Journal of Computer and Information Sciences, vol. 11, no. 5, pp. 341–356, 1982.
  2. Z. Pawlak, Rough Sets: Theoretical Aspects of Reasoning about Data, Kluwer Academic, Boston, Mass, USA, 1991.
  3. G. Dong, J. Han, J. Lam, J. Pei, K. Wang, and W. Zou, “Mining constrained gradients in large databases,” IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 8, pp. 922–938, 2004.
  4. S. Pal and P. Mitra, “Case generation using rough sets with fuzzy representation,” IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 3, pp. 292–300, 2004.
  5. L. Polkowski and A. Skowron, Rough Sets and Current Trends in Computing, vol. 1424, Springer, Berlin, Germany, 1998.
  6. L. Polkowski and A. Skowron, Eds., Rough Sets in Knowledge Discovery, vol. 1, Physica-Verlag, Berlin, Germany, 1998.
  7. L. Polkowski and A. Skowron, Eds., Rough Sets in Knowledge Discovery, vol. 2, Physica-Verlag, Berlin, Germany, 1998.
  8. N. Zhong, Y. Yao, and M. Ohshima, “Peculiarity oriented multidatabase mining,” IEEE Transactions on Knowledge and Data Engineering, vol. 15, no. 4, pp. 952–960, 2003.
  9. Z. Bonikowski, E. Bryniarski, and U. Wybraniec-Skardowska, “Extensions and intentions in the rough set theory,” Information Sciences, vol. 107, no. 1–4, pp. 149–167, 1998.
  10. E. Bryniarski, “A calculus of rough sets of the first order,” Bulletin of the Polish Academy of Sciences, vol. 37, no. 1–6, pp. 71–78, 1989.
  11. C. Degang, W. Changzhong, and H. Qinghua, “A new approach to attribute reduction of consistent and inconsistent covering decision systems with covering rough sets,” Information Sciences, vol. 177, no. 17, pp. 3500–3518, 2007.
  12. C. Degang, Z. Wenxiu, D. Yeung, and E. C. C. Tsang, “Rough approximations on a complete completely distributive lattice with applications to generalized rough sets,” Information Sciences, vol. 176, no. 13, pp. 1829–1848, 2006.
  13. T.-J. Li, Y. Leung, and W.-X. Zhang, “Generalized fuzzy rough approximation operators based on fuzzy coverings,” International Journal of Approximate Reasoning, vol. 48, no. 3, pp. 836–856, 2008.
  14. T.-J. Li and W.-X. Zhang, “Rough fuzzy approximations on two universes of discourse,” Information Sciences, vol. 178, no. 3, pp. 892–906, 2008.
  15. E. C. C. Tsang, C. Degang, and D. S. Yeung, “Approximations and reducts with covering generalized rough sets,” Computers & Mathematics with Applications, vol. 56, no. 1, pp. 279–289, 2008.
  16. E. Tsang, D. Chen, J. Lee, and D. S. Yeung, “On the upper approximations of covering generalized rough sets,” in Proceedings of the 3rd International Conference on Machine Learning and Cybernetics, pp. 4200–4203, 2004.
  17. W. Zhu and F.-Y. Wang, “Reduction and axiomization of covering generalized rough sets,” Information Sciences, vol. 152, pp. 217–230, 2003.
  18. W. Zhu, “Topological approaches to covering rough sets,” Information Sciences, vol. 177, no. 6, pp. 1499–1508, 2007.
  19. W. Zhu, “Relationship between generalized rough sets based on binary relation and covering,” Information Sciences, vol. 179, no. 3, pp. 210–225, 2009.
  20. X. Y. Chen and Q. G. Li, “Construction of rough approximations in fuzzy setting,” Fuzzy Sets and Systems, vol. 158, no. 23, pp. 2641–2653, 2007.
  21. T. Y. Lin, “Topological and fuzzy rough sets,” in Intelligent Decision Support: Handbook of Applications and Advances of the Rough Set Theory, R. Slowinski, Ed., pp. 287–304, Kluwer Academic, Boston, Mass, USA, 1992.
  22. A. Skowron and J. Stepaniuk, “Tolerance approximation spaces,” Fundamenta Informaticae, vol. 27, no. 2-3, pp. 245–253, 1996.
  23. C. Wang, C. Wu, and D. Chen, “A systematic study on attribute reduction with rough sets based on general binary relations,” Information Sciences, vol. 178, no. 9, pp. 2237–2261, 2008.
  24. Y. Y. Yao, “On generalizing Pawlak approximation operators,” in LNAI, vol. 1424, pp. 298–307, 1998.
  25. Y. Y. Yao, “Constructive and algebraic methods of the theory of rough sets,” Information Sciences, vol. 109, no. 1–4, pp. 21–47, 1998.
  26. Y. Y. Yao, “Relational interpretations of neighborhood operators and rough set approximation operators,” Information Sciences, vol. 111, no. 1–4, pp. 239–259, 1998.
  27. F. Hu, G. Y. Wang, H. Huang et al., “Incremental attribute reduction based on elementary sets,” in RSFDGrC 2005, vol. 3641 of LNAI, pp. 185–193, 2005.
  28. R. Jensen and Q. Shen, “Semantics-preserving dimensionality reduction: rough and fuzzy-rough-based approaches,” IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 12, pp. 1457–1471, 2004.
  29. A. Skowron and C. Rauszer, “The discernibility matrices and functions in information systems,” in Intelligent Decision Support: Handbook of Applications and Advances of the Rough Sets Theory, R. Slowinski, Ed., Kluwer Academic, Boston, Mass, USA, 1992.
  30. T. Yang and Q. G. Li, “Reduction about approximation spaces of covering generalized rough sets,” International Journal of Approximate Reasoning, vol. 51, no. 3, pp. 335–345, 2010.
  31. T. Yang, Q. G. Li, and B. L. Zhou, “Granular reducts from the topological view of covering rough sets,” in Proceedings of the 8th IEEE International Conference on Granular Computing, Zhejiang University, Hangzhou, China, August 2012.
  32. T. Yang, Q. G. Li, and B. L. Zhou, “Related family: a new method for attribute reduction of covering information systems,” Information Sciences. In press.
  33. S. K. M. Wong and W. Ziarko, “On optimal decision rules in decision tables,” Bulletin of the Polish Academy of Sciences, vol. 33, no. 11-12, pp. 693–696, 1985.