Research Article | Open Access


Tian Yang, Zhaowen Li, Xiaoqing Yang, "A Granular Reduction Algorithm Based on Covering Rough Sets", Journal of Applied Mathematics, vol. 2012, Article ID 970576, 13 pages, 2012. https://doi.org/10.1155/2012/970576

A Granular Reduction Algorithm Based on Covering Rough Sets

Academic Editor: Chong Lin
Received: 31 Mar 2012
Revised: 12 Jul 2012
Accepted: 16 Jul 2012
Published: 30 Aug 2012

Abstract

Granular reduction deletes dispensable elements from a covering; it is an efficient method to reduce granular structures and remove redundant information from information systems. In this paper, we develop an algorithm based on discernibility matrices to compute all granular reducts of covering rough sets. Moreover, the discernibility matrix is simplified to a minimal format. In addition, a heuristic algorithm is proposed so that a single granular reduct can be generated rapidly.

1. Introduction

With the development of technology, the amount of information grows at a surprising rate, and it is a great challenge to extract valuable knowledge from such massive information. Rough set theory was proposed by Pawlak [1, 2] to deal with uncertainty and vagueness, and it has been applied to information processing in various areas [3–8].

One of the most important topics in rough set theory is the design of reduction algorithms. The reduction of Pawlak's rough sets removes dispensable elements from a family of equivalence relations which induce the equivalence classes, that is, a partition.

Covering generalized rough sets [9–19] and binary relation generalized rough sets [20–26] are two main extensions of Pawlak's rough sets. The reduction theory of covering rough sets [10, 11, 15, 23, 27, 28] plays an important role in practice. A partition is no longer a partition if any of its elements is deleted, while a covering may still be a covering, with invariant set approximations, after dropping some elements. Therefore, there are two types of reduction on covering rough sets: one removes redundant coverings from a family of coverings and is referred to as attribute reduction; the other removes redundant elements from a single covering and is referred to as granular reduction, which finds the minimal subsets of a covering that generate the same set approximations as the original covering. Since granular reduction can be employed to reduce granular structures and databases and interacts with attribute reduction, it should by no means be ignored. In this paper, we investigate the granular reduction of covering rough sets.

In order to compute all attribute reducts for Pawlak's rough sets, the discernibility matrix was initially presented in [29]. Tsang et al. [15] developed a discernibility matrix algorithm to compute attribute reducts for one type of covering rough sets. Zhu and Wang [17] and Zhu [18] first built one type of granular reduction for two covering rough set models. In addition, Yang et al. systematically examined granular reduction in [30] and the relationship between reducts and topology in [31]. Unfortunately, no effective algorithm for granular reduction has hitherto been proposed.

In this paper, we bridge the gap by constructing an algorithm based on discernibility matrices that computes all granular reducts of covering rough sets. This algorithm can reduce granular structures and remove redundant information from information systems. The discernibility matrix is then simplified to a minimal format, and based on this simplification, a heuristic algorithm is proposed as well.

The remainder of this paper proceeds as follows. Section 2 reviews the relevant background knowledge about the granular reduction. Section 3 constructs the algorithm based on discernibility matrix. Section 4 simplifies the discernibility matrix and proposes a heuristic algorithm. Section 5 concludes the study.

2. Background

Our aim in this section is to give a glimpse of rough set theory.

Let π‘ˆ be a finite and nonempty set, and let 𝑅 be an equivalence relation on π‘ˆ. 𝑅 generates a partition π‘ˆ/𝑅={[π‘₯]π‘…βˆ£π‘₯βˆˆπ‘‹} on π‘ˆ, where [π‘₯]𝑅 is an equivalence class of π‘₯ generated by the equivalence relation 𝑅. We call it elementary sets of 𝑅 in rough set theory. For any set 𝑋, we describe 𝑋 by the elementary sets of 𝑅, and the two sets π‘…βˆ—ξ€½[π‘₯]=βˆͺπ‘…βˆ£[π‘₯]π‘…ξ€ΎβŠ†π‘‹,π‘…βˆ—ξ€½[π‘₯]=βˆͺπ‘…βˆ£[π‘₯]π‘…ξ€Ύβˆ©π‘‹β‰ Γ˜(2.1) are called the lower and upper approximations of 𝑋, respectively. If π‘…βˆ—(𝑋)=π‘…βˆ—(𝑋),𝑋 is an 𝑅-exact set. Otherwise, it is an 𝑅-rough set.

Let β„™ be a family of equivalence relations on π‘ˆ, and let IND(β„™)=∩{π‘…βˆΆπ‘…βˆˆβ„™}. For π΄βˆˆβ„™, 𝐴 is dispensable in β„™ if and only if IND(β„™)=IND(β„™βˆ’{𝐴}); otherwise, 𝐴 is indispensable in β„™. The family β„™ is independent if every π΄βˆˆβ„™ is indispensable in β„™; otherwise, β„™ is dependent. β„šβŠ†β„™ is a reduct of β„™ if β„š is independent and IND(β„š)=IND(β„™). The set of all indispensable relations in β„™ is called the core of β„™, denoted as CORE(β„™). Evidently, CORE(β„™)=∩RED(β„™), where RED(β„™) is the family of all reducts of β„™. The discernibility matrix method was proposed to compute all reducts of information systems and relative reducts of decision systems [29].

π’ž is called a covering of π‘ˆ, where π‘ˆ is a nonempty domain of discourse, and π’ž is a family of nonempty subsets of π‘ˆ and βˆͺπ’ž=π‘ˆ.

It is clear that a partition of π‘ˆ is certainly a covering of π‘ˆ, so the concept of a covering is an extension of the concept of a partition.

Definition 2.1 (minimal description [9]). Let π’ž be a covering of π‘ˆ. Then
π‘€π‘‘π’ž(π‘₯)={πΎβˆˆπ’žβˆ£π‘₯∈𝐾∧(βˆ€π‘†βˆˆπ’ž, π‘₯βˆˆπ‘†βˆ§π‘†βŠ†πΎβ‡’πΎ=𝑆)} (2.2)
is called the minimal description of π‘₯. When there is no confusion, we omit π’ž from the subscript.

Definition 2.2 (neighborhood [9, 19]). Let π’ž be a covering of π‘ˆ, and π‘π’ž(π‘₯)=∩{πΆβˆˆπ’žβˆ£π‘₯∈𝐢} is called the neighborhood of π‘₯. Generally, we omit the subscript π’ž when there is no confusion.

Minimal descriptions and neighborhoods are information granules used to describe π‘₯; they serve as approximation elements in rough sets (as shown in Definition 2.3). Note that 𝑁(π‘₯)=∩{πΆβˆˆπ’žβˆ£π‘₯∈𝐢}=βˆ©π‘€π‘‘(π‘₯). The neighborhood of π‘₯ can be seen as the minimum description of π‘₯, and it is the most precise description (see [9] for more details).
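The neighborhood operator can be computed mechanically by intersecting all blocks that contain π‘₯. The sketch below is our own illustration (not the authors' code); the covering is the one later used in Example 3.10, with object π‘₯α΅’ encoded as the integer 𝑖:

```python
# Illustrative sketch: N(x) = ∩{C ∈ π’ž | x ∈ C} for a covering given as a
# list of blocks (the covering of Example 3.10, objects encoded as ints).
from functools import reduce

def neighborhood(x, covering):
    """Intersect every block of the covering that contains x."""
    return reduce(lambda a, b: a & b, [C for C in covering if x in C])

covering = [frozenset(s) for s in (
    {1, 2, 3},      # C1
    {1, 4},         # C2
    {3, 4, 5},      # C3
    {1, 4, 5},      # C4
    {1, 4, 6},      # C5
    {3, 4},         # C6
    {1, 4, 5, 6},   # C7
)]

print(sorted(neighborhood(5, covering)))  # [4, 5]
```

Here N(x5) = C3 ∩ C4 ∩ C7 = {x4, x5}, agreeing with the neighborhoods computed in Example 3.10.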

Definition 2.3 (covering lower and upper approximation operations [19]). Let π’ž be a covering of π‘ˆ. The operations πΆπΏπ’žβˆΆπ‘ƒ(π‘ˆ)→𝑃(π‘ˆ) and πΆπΏβ€²π’žβˆΆπ‘ƒ(π‘ˆ)→𝑃(π‘ˆ) are defined as follows: for all π‘‹βˆˆπ‘ƒ(π‘ˆ),
πΆπΏπ’ž(𝑋)=βˆͺ{πΎβˆˆπ’žβˆ£πΎβŠ†π‘‹}=βˆͺ{πΎβˆ£βˆƒπ‘₯ s.t. πΎβˆˆπ‘€π‘‘(π‘₯)∧KβŠ†π‘‹},
πΆπΏβ€²π’ž(𝑋)={π‘₯βˆ£π‘(π‘₯)βŠ†π‘‹}=βˆͺ{𝑁(π‘₯)βˆ£π‘(π‘₯)βŠ†π‘‹}. (2.3)
We call πΆπΏπ’ž the first, the second, the third, or the fourth covering lower approximation operation and πΆπΏβ€²π’ž the fifth, the sixth, or the seventh covering lower approximation operation, with respect to the covering π’ž.
The operations 𝐹𝐻, 𝑆𝐻, 𝑇𝐻, 𝑅𝐻, 𝐼𝐻, 𝑋𝐻, and π‘‰π»βˆΆπ‘ƒ(π‘ˆ)→𝑃(π‘ˆ) are defined as follows: for all π‘‹βˆˆπ‘ƒ(π‘ˆ),
πΉπ»π’ž(𝑋)=𝐢𝐿(𝑋)βˆͺ(βˆͺ{𝑀𝑑(π‘₯)∣π‘₯βˆˆπ‘‹βˆ’πΆπΏ(𝑋)}),
π‘†π»π’ž(𝑋)=βˆͺ{πΎβˆ£πΎβˆˆπ’ž, πΎβˆ©π‘‹β‰ Γ˜},
π‘‡π»π’ž(𝑋)=βˆͺ{𝑀𝑑(π‘₯)∣π‘₯βˆˆπ‘‹},
π‘…π»π’ž(𝑋)=𝐢𝐿(𝑋)βˆͺ(βˆͺ{𝐾∣𝐾∩(π‘‹βˆ’πΆπΏ(𝑋))β‰ Γ˜}),
πΌπ»π’ž(𝑋)=𝐢𝐿(𝑋)βˆͺ(βˆͺ{𝑁(π‘₯)∣π‘₯βˆˆπ‘‹βˆ’πΆπΏ(𝑋)})=βˆͺ{𝑁(π‘₯)∣π‘₯βˆˆπ‘‹},
π‘‹π»π’ž(𝑋)={π‘₯βˆ£π‘(π‘₯)βˆ©π‘‹β‰ Γ˜},
π‘‰π»π’ž(𝑋)=βˆͺ{𝑁(π‘₯)βˆ£π‘(π‘₯)βˆ©π‘‹β‰ Γ˜}. (2.4)
πΉπ»π’ž, π‘†π»π’ž, π‘‡π»π’ž, π‘…π»π’ž, πΌπ»π’ž, π‘‹π»π’ž, and π‘‰π»π’ž are called the first, the second, the third, the fourth, the fifth, the sixth, and the seventh covering upper approximation operations with respect to π’ž, respectively. We leave out π’ž at the subscript when there is no confusion.

As shown in [32], every approximation operation in Definition 2.3 may be applied in certain circumstances, and we choose the suitable approximation operation according to the specific situation. It is therefore important to design granular reduction algorithms for all of these models.

More precise approximation spaces are proposed in [30]. As a further result, a reasonable granular reduction of coverings is also introduced. Let β„³π’ž=βˆͺ{𝑀𝑑(π‘₯)∣π‘₯βˆˆπ‘ˆ}, π’©π’ž={𝑁(π‘₯)∣π‘₯βˆˆπ‘ˆ}. βŸ¨π‘ˆ,β„³π’žβŸ© is the approximation space of the first and the third types of covering rough sets, βŸ¨π‘ˆ,π’žβŸ© is the approximation space of the second and the fourth types of covering rough sets, and βŸ¨π‘ˆ,π’©π’žβŸ© is the approximation space of the fifth, the sixth, and the seventh types of covering rough sets (referred to [30] for the details). In this paper, we design the algorithm of granular reduction for the fifth, the sixth, and the seventh type of covering rough sets.

Let π’ž be a covering of π‘ˆ, denoting a covering approximation space. β„³π’ž denotes an β„³-approximation space. π’©π’ž represents an 𝒩-approximation space. We omit π’ž at the subscript when there is no confusion (referred to [30] for the details).

3. Discernibility Matrices Based on Covering Granular Reduction

In the original Pawlak rough sets, the family of equivalence classes induced by the equivalence relations is a partition, and once any of its elements is deleted, it is no longer a partition; granular reduction, which reduces granular structures and removes redundant information from databases, is therefore not applicable to Pawlak's rough sets. However, in one of the main extensions of Pawlak's rough sets, a covering may remain a covering even after some of its elements are omitted, as long as the set approximations are invariant. The purpose of covering granular reduction is to find the minimal subsets keeping the same set approximations, so it is meaningful and necessary to develop an algorithm for covering granular reduction.

The quadruple (π‘ˆ,π’ž,𝐢𝐿,𝐢𝐻) is called a covering rough set system (CRSS), where π’ž is a covering of π‘ˆ, 𝐢𝐿 and 𝐢𝐻 are the lower and upper approximation operations with respect to π’ž, and βŸ¨π‘ˆ,π’œπ’žβŸ© is the approximation space. According to the categories of covering approximation operations in [30], there are two situations.
(1) If π’œπ’ž=π’ž or π’œπ’ž=β„³π’ž, then π’œπ’žβŠ†π’ž; thus π’œπ’ž is the unique granular reduct of π’ž, and there is no need to develop an algorithm to compute granular reducts for the first, second, third, and fourth types of covering rough sets.
(2) If π’œπ’ž=π’©π’ž, then in general π’œπ’ž is not a subset of π’ž. Consequently, an algorithm is needed to compute all granular reducts of π’ž for the fifth, sixth, and seventh types of covering rough set models.

Next we examine the granular reduction algorithm for the fifth, sixth, and seventh types of covering rough sets. Let π’ž be a covering of π‘ˆ. Since π’©π’ž={𝑁(π‘₯)∣π‘₯βˆˆπ‘ˆ}, π’©π’ž is the collection of all approximation elements of the fifth, sixth, or seventh type of lower/upper approximation operations; π’©π’ž is called the 𝒩-approximation space of π’ž. Given a pair of approximation operations, the set approximations of any π‘‹βŠ†π‘ˆ are determined by the 𝒩-approximation space. Thus, for the fifth, sixth, and seventh types of covering rough set models, the purpose of granular reduction is to find the minimal subsets π’žβ€² of π’ž such that π’©π’ž=π’©π’žβ€². The granular reducts based on the 𝒩-approximation spaces are called the 𝒩-reducts. 𝑁red(π’ž) is the set of all 𝒩-reducts of π’ž, and 𝑁𝐼(π’ž) is the set of all 𝒩-irreducible elements of π’ž (see [30] for details).

In Pawlak's rough set theory, for every pair π‘₯,π‘¦βˆˆπ‘ˆ, if 𝑦 belongs to the equivalence class containing π‘₯, we say that π‘₯ and 𝑦 are indiscernible; otherwise, they are discernible. Let ℝ={𝑅1,𝑅2,…,𝑅𝑛} be a family of equivalence relations on π‘ˆ and π‘…π‘–βˆˆβ„. 𝑅𝑖 is indispensable in ℝ if and only if there is a pair π‘₯,π‘¦βˆˆπ‘ˆ whose relation is altered after deleting 𝑅𝑖 from ℝ. The attribute reduction of Pawlak's rough sets finds the minimal subsets of ℝ which keep the relations invariant for all π‘₯,π‘¦βˆˆπ‘ˆ. Based on this observation, the discernibility matrix method for computing all reducts of Pawlak's rough sets was proposed in [29]. In covering rough sets, however, the discernibility relation between π‘₯,π‘¦βˆˆπ‘ˆ differs from that in Pawlak's rough sets.

Let π’ž be a covering on π‘ˆ, (π‘₯,𝑦)βˆˆπ‘ˆΓ—π‘ˆ. Then we call (π‘₯,𝑦) indiscernible if π‘¦βˆˆπ‘(π‘₯), that is, 𝑁(𝑦)βŠ†π‘(π‘₯). Otherwise, (π‘₯,𝑦) is discernible. When π’ž is a partition, the new discernibility relation coincides with that in Pawlak's. It is an extension of Pawlak's discernibility relation. In Pawlak's rough sets, (π‘₯,𝑦) is indiscernible if and only if (𝑦,π‘₯) is indiscernible. However, for a general covering, if 𝑁(𝑦)βŠ†π‘(π‘₯) and 𝑁(𝑦)≠𝑁(π‘₯), that is, π‘¦βˆˆπ‘(π‘₯) and π‘₯βˆ‰π‘(𝑦), (𝑦,π‘₯) is discernible while (π‘₯,𝑦) is indiscernible. Thereafter, we call these relations the relations of (π‘₯,𝑦) with respect to π’ž. The following theorem characterizes these relations.

Proposition 3.1. Let π’ž={πΆπ‘–βˆ£π‘–=1,2,3,…,𝑛} be a covering on π‘ˆ, and let π’žπ‘₯={πΆπ‘–βˆˆπ’žβˆ£π‘₯βˆˆπΆπ‘–}.(1)π‘¦βˆˆπ‘(π‘₯) if and only if π’žπ‘₯βŠ†π’žπ‘¦.(2)π‘¦βˆ‰π‘(π‘₯) if and only if there is πΆπ‘–βˆˆπ’ž such that π‘₯βˆˆπΆπ‘– and π‘¦βˆ‰πΆπ‘–.

Proof. (1)π‘¦βˆˆπ‘(π‘₯)=βˆ©π’žπ‘₯⇔ for any πΆπ‘–βˆˆπ’žπ‘₯, π‘¦βˆˆπΆπ‘–β‡” for any πΆπ‘–βˆˆπ’žπ‘₯, πΆπ‘–βˆˆπ’žπ‘¦β‡”π’žπ‘₯βŠ†π’žπ‘¦.
(2) It is evident from (1).
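Proposition 3.1(1) can be checked exhaustively on a small example. The snippet below is our own illustration (the covering is hand-made, not from the paper): 𝑦 ∈ 𝑁(π‘₯) holds exactly when every block containing π‘₯ also contains 𝑦.

```python
# Illustrative check of Proposition 3.1(1): y ∈ N(x) iff π’ž_x βŠ† π’ž_y,
# verified exhaustively on a small hand-made covering.
U = [1, 2, 3, 4]
covering = [frozenset({1, 2}), frozenset({2, 3, 4}), frozenset({1, 3, 4})]

def N(x):
    n = set(U)
    for C in covering:
        if x in C:
            n &= C
    return n

def blocks_of(x):                  # π’ž_x = {C ∈ π’ž | x ∈ C}
    return {C for C in covering if x in C}

for x in U:
    for y in U:
        assert (y in N(x)) == (blocks_of(x) <= blocks_of(y))
```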

Theorem 3.2. Let π’ž be a covering on π‘ˆ, πΆπ‘–βˆˆπ’ž. Then π’©π’žβ‰ π’©π’žβˆ’{𝐢𝑖} if and only if there is (π‘₯,𝑦)βˆˆπ‘ˆΓ—π‘ˆ whose discernibility relation with respect to π’ž is changed after deleting 𝐢𝑖 from π’ž.

Proof. Suppose that π’©π’žβ‰ π’©π’žβˆ’{𝐢𝑖}, then there is at least one element π‘₯βˆˆπ‘ˆ such that π‘π’ž(π‘₯)β‰ π‘π’žβˆ’{𝐢𝑖}(π‘₯), that is, π‘π’ž(π‘₯)βŠ‚π‘π’žβˆ’{𝐢𝑖}(π‘₯). Since π‘π’žβˆ’{𝐢𝑖}(π‘₯)βˆ’π‘π’ž(π‘₯)β‰ Γ˜, suppose that π‘¦βˆˆπ‘π’žβˆ’{𝐢𝑖}(π‘₯)βˆ’π‘π’ž(π‘₯), then π‘¦βˆˆπ‘π’žβˆ’{𝐢𝑖}(π‘₯) and π‘¦βˆ‰π‘π’ž(π‘₯). Namely, (π‘₯,𝑦) is discernible with respect to π’ž, while (π‘₯,𝑦) is indiscernible with respect to π’žβˆ’{𝐢𝑖}.
Suppose that there is (π‘₯,𝑦)βˆˆπ‘ˆΓ—π‘ˆ whose discernibility relation with respect to π’ž is changed after deleting 𝐢𝑖 from π’ž. Put differently, (π‘₯,𝑦) is discernible with respect to π’ž, while (π‘₯,𝑦) is indiscernible with respect to π’žβˆ’{𝐢𝑖}. Then we have π‘¦βˆˆπ‘π’žβˆ’{𝐢𝑖}(π‘₯) and π‘¦βˆ‰π‘π’ž(π‘₯), so π‘¦βˆˆπ‘π’žβˆ’{𝐢𝑖}(π‘₯)βˆ’π‘π’ž(π‘₯). Thus, π‘π’ž(π‘₯)β‰ π‘π’žβˆ’{𝐢𝑖}(π‘₯). It implies π’©π’žβ‰ π’©π’žβˆ’{𝐢𝑖}.

The purpose of granular reduction of a covering π’ž is to find the minimal subsets of π’ž which keep the same classification ability as π’ž or, put differently, keep π’©π’ž invariant. By Theorem 3.2, keeping π’©π’ž unchanged is equivalent to keeping the discernibility relation of every (π‘₯,𝑦)βˆˆπ‘ˆΓ—π‘ˆ invariant. Based on this statement, we are able to compute granular reducts with the discernibility matrix.

Definition 3.3. Let π‘ˆ={π‘₯1,π‘₯2,…,π‘₯𝑛}, and let π’ž be a covering of π‘ˆ. 𝑀(π‘ˆ,π’ž) is an 𝑛×𝑛 matrix (𝑐𝑖𝑗)𝑛×𝑛, called the discernibility matrix of (π‘ˆ,π’ž), where
(1) 𝑐𝑖𝑗=Ø if π‘₯π‘—βˆˆπ‘(π‘₯𝑖);
(2) 𝑐𝑖𝑗={πΆβˆˆπ’žβˆ£π‘₯π‘–βˆˆπΆ, π‘₯π‘—βˆ‰πΆ} if π‘₯π‘—βˆ‰π‘(π‘₯𝑖).
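A direct transcription of this definition may help. The following sketch is our own illustration (the dictionary encoding of a covering as {name: block} is an assumption, not the authors' notation); it is exercised on the covering of Example 3.10 with objects encoded as integers:

```python
# Sketch of Definition 3.3: c_ij = Ø if x_j ∈ N(x_i), otherwise
# c_ij = {C ∈ π’ž | x_i ∈ C, x_j βˆ‰ C}. Covering given as {name: block}.
def discernibility_matrix(U, covering):
    def N(x):
        n = set(U)
        for C in covering.values():
            if x in C:
                n &= C
        return n
    M = {}
    for xi in U:
        Ni = N(xi)
        for xj in U:
            if xj in Ni:
                M[(xi, xj)] = frozenset()          # indiscernible pair
            else:
                M[(xi, xj)] = frozenset(
                    name for name, C in covering.items()
                    if xi in C and xj not in C)
    return M

covering = {"C1": {1, 2, 3}, "C2": {1, 4}, "C3": {3, 4, 5}, "C4": {1, 4, 5},
            "C5": {1, 4, 6}, "C6": {3, 4}, "C7": {1, 4, 5, 6}}
M = discernibility_matrix(range(1, 7), covering)
print(sorted(M[(5, 3)]))  # ['C4', 'C7']
```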

This definition of discernibility matrix is more concise than the one in [11, 15] due to the reasonable statement of the discernibility relations. Likewise, we restate the characterizations of 𝒩-reduction.

Proposition 3.4. Consider that 𝑁𝐼(π’ž)={πΆβˆ£π‘π‘–π‘—={𝐢} for some π‘π‘–π‘—βˆˆπ‘€(π‘ˆ,π’ž)}.

Proof. For any πΆβˆˆπ‘πΌ(π’ž), π’©π’žβ‰ π’©π’žβˆ’{𝐢}, then there is (π‘₯𝑖,π‘₯𝑗)βˆˆπ‘ˆΓ—π‘ˆ such that π‘₯π‘—βˆˆπ‘π’žβˆ’{𝐢}(π‘₯𝑖) and π‘₯π‘—βˆ‰π‘π’ž(π‘₯𝑖). It implies that π‘₯π‘–βˆˆπΆ and π‘₯π‘—βˆ‰πΆ. Moreover, for any πΆβ€²βˆˆπ’žβˆ’{𝐢}, since π‘₯π‘—βˆˆπ‘π’žβˆ’{𝐢}(π‘₯𝑖), we have π‘₯π‘–βˆˆπΆβ€² if π‘₯π‘–βˆˆπΆβ€². Thus, 𝑐𝑖𝑗={𝐢}.
If 𝑐𝑖𝑗={𝐢} for some π‘π‘–π‘—βˆˆπ‘€(π‘ˆ,π’ž), then π‘₯π‘–βˆˆπΆ and π‘₯π‘—βˆ‰πΆ. And for any πΆβ€²βˆˆπ’žβˆ’{𝐢}, if π‘₯π‘–βˆˆπΆβ€², then π‘₯π‘–βˆˆπΆβ€², that is, π‘₯π‘—βˆˆπ‘π’žβˆ’{𝐢}(π‘₯𝑖) and π‘₯π‘—βˆ‰π‘π’ž(π‘₯𝑖), then π‘π’žβˆ’{𝐢}(π‘₯𝑖)β‰ π‘π’ž(π‘₯𝑖). Namely, π’©π’žβ‰ π’©π’žβˆ’{𝐢}, which implies πΆβˆˆπ‘πΌ(π’ž).

Proposition 3.5. Suppose that π’žβ€²βŠ†π’ž, then π’©π’ž=π’©π’žξ…ž if and only if π’žβ€²βˆ©π‘π‘–π‘—β‰ Γ˜ for every π‘π‘–π‘—β‰ Γ˜.

Proof. π’©π’ž=π’©π’žξ…žβ€‰β‡” for any (π‘₯𝑖,π‘₯𝑗)βˆˆπ‘ˆΓ—π‘ˆ, π‘₯π‘—βˆ‰π‘π’ž(π‘₯𝑖) if and only if π‘₯π‘—βˆ‰π‘π’žξ…ž(π‘₯𝑖), ⇔ for any (π‘₯𝑖,π‘₯𝑗)βˆˆπ‘ˆΓ—π‘ˆ, there is πΆβˆˆπ’ž such that π‘₯π‘–βˆˆπΆ and π‘₯π‘—βˆ‰πΆ if and only if there is πΆβ€²βˆˆπ’žβ€² such that π‘₯π‘–βˆˆπΆβ€² and π‘₯π‘—βˆ‰πΆβ€², ⇔ for any π‘π‘–π‘—β‰ Γ˜, π’žβ€²β‰ Γ˜.

Proposition 3.6. Suppose that π’žβ€²βŠ†π’ž, then π’žξ…žβˆˆπ‘red(π’ž) if and only if π’žβ€² is a minimal set satisfying π’žβ€²βˆ©π‘π‘–π‘—β‰ Γ˜ for every π‘π‘–π‘—β‰ Γ˜.

Definition 3.7. Let π‘ˆ={π‘₯1,π‘₯2,…,π‘₯𝑛}, let π’ž={𝐢1,𝐢2,…,πΆπ‘š} be a covering of π‘ˆ, and let 𝑀(π‘ˆ,π’ž)=(𝑐𝑖𝑗)𝑛×𝑛 be the discernibility matrix of (π‘ˆ,π’ž). A discernibility function 𝑓(π‘ˆ,π’ž) is a Boolean function of π‘š Boolean variables, 𝐢1,𝐢2,…,πΆπ‘š, corresponding to the covering elements 𝐢1,𝐢2,…,πΆπ‘š, respectively, defined as 𝑓(π‘ˆ,π’ž)(𝐢1,𝐢2,…,πΆπ‘š)=∧{∨(𝑐𝑖𝑗)βˆ£π‘π‘–π‘—βˆˆπ‘€(π‘ˆ,π’ž),π‘π‘–π‘—β‰ Γ˜}.

Theorem 3.8. Let π’ž be a covering of π‘ˆ, let 𝑓(π‘ˆ,π’ž) be the discernibility function, and let 𝑔(π‘ˆ,π’ž) be the reduced disjunctive form of 𝑓(π‘ˆ,π’ž) obtained by applying the multiplication and absorption laws. If 𝑔(π‘ˆ,π’ž)=(βˆ§π’ž1)βˆ¨β‹―βˆ¨(βˆ§π’žπ‘™), where π’žπ‘˜βŠ†π’ž, π‘˜=1,2,…,𝑙, and every element in π’žπ‘˜ appears only once, then 𝑁red(π’ž)={π’ž1,π’ž2,…,π’žπ‘™}.

Proof. For every π‘˜=1,2,…,𝑙, βˆ§π’žπ‘˜β‰€βˆ¨π‘π‘–π‘— for any π‘π‘–π‘—βˆˆπ‘€(π‘ˆ,π’ž), so π’žπ‘˜βˆ©π‘π‘–π‘—β‰ Γ˜. Let π’žβ€²π‘˜=π’žπ‘˜βˆ’{𝐢} for any πΆβˆˆπ’žπ‘˜; then 𝑔(π‘ˆ,π’ž)β‰¨(βˆ§π’ž1)βˆ¨β‹―βˆ¨(βˆ§π’žπ‘˜βˆ’1)∨(βˆ§π’žβ€²π‘˜)∨(βˆ§π’žπ‘˜+1)βˆ¨β‹―βˆ¨(βˆ§π’žπ‘™). If π’žβ€²π‘˜βˆ©π‘π‘–π‘—β‰ Γ˜ for every π‘π‘–π‘—βˆˆπ‘€(π‘ˆ,π’ž), then βˆ§π’žβ€²π‘˜β‰€βˆ¨π‘π‘–π‘— for every π‘π‘–π‘—βˆˆπ‘€(π‘ˆ,π’ž), that is, 𝑔(π‘ˆ,π’ž)β‰₯(βˆ§π’ž1)βˆ¨β‹―βˆ¨(βˆ§π’žβ€²π‘˜)βˆ¨β‹―βˆ¨(βˆ§π’žπ‘™), which is a contradiction. It follows that there is 𝑐𝑖0𝑗0βˆˆπ‘€(π‘ˆ,π’ž) such that π’žβ€²π‘˜βˆ©π‘π‘–0𝑗0=Ø. Thus, π’žπ‘˜ is a reduct of π’ž.
For any π’žβ€²βˆˆπ‘red(π’ž), we have π’žβ€²βˆ©π‘π‘–π‘—β‰ Γ˜ for every π‘π‘–π‘—βˆˆπ‘€(π‘ˆ,π’ž), so 𝑓(π‘ˆ,π’ž)∧(βˆ§π’žβ€²)=(∧(βˆ¨π‘π‘–π‘—))∧(βˆ§π’žβ€²)=βˆ§π’žβ€², which implies βˆ§π’žβ€²β‰€π‘“(π‘ˆ,π’ž)=𝑔(π‘ˆ,π’ž). Suppose that π’žπ‘˜βˆ’π’žβ€²β‰ Γ˜ for every π‘˜=1,2,…,𝑙; then for every π‘˜ there is πΆπ‘˜βˆˆπ’žπ‘˜βˆ’π’žβ€². By rewriting 𝑔(π‘ˆ,π’ž)=(𝐢1∨𝐢2βˆ¨β‹―βˆ¨πΆπ‘™)∧Φ, we get βˆ§π’žβ€²β‰€πΆ1βˆ¨β‹―βˆ¨πΆπ‘™. Thus, there is πΆπ‘˜0 such that βˆ§π’žβ€²β‰€πΆπ‘˜0, that is, πΆπ‘˜0βˆˆπ’žβ€², which is a contradiction. So π’žπ‘˜0βŠ†π’žβ€² for some π‘˜0; since both π’žβ€² and π’žπ‘˜0 are reducts, it is evident that π’žβ€²=π’žπ‘˜0. Consequently, 𝑁red(π’ž)={π’ž1,π’ž2,…,π’žπ‘™}.

Algorithm 3.9. Consider the following:
Input: βŸ¨π‘ˆ,π’žβŸ©.
Output: 𝑁red(π’ž) and 𝑁𝐼(π’ž) // the set of all granular reducts and the set of all 𝒩-irreducible elements.
Step 1: 𝑀(π‘ˆ,π’ž)=(𝑐𝑖𝑗)𝑛×𝑛; for each 𝑐𝑖𝑗, let 𝑐𝑖𝑗=Ø.
Step 2: for each π‘₯π‘–βˆˆπ‘ˆ, compute 𝑁(π‘₯𝑖)=∩{πΆβˆˆπ’žβˆ£π‘₯π‘–βˆˆπΆ}. If π‘₯π‘—βˆ‰π‘(π‘₯𝑖), set 𝑐𝑖𝑗={πΆβˆˆπ’žβˆ£π‘₯π‘–βˆˆπΆ, π‘₯π‘—βˆ‰πΆ}.
Step 3: 𝑓(π‘ˆ,π’ž)(𝐢1,𝐢2,…,πΆπ‘š)=∧{∨(𝑐𝑖𝑗)βˆ£π‘π‘–π‘—βˆˆπ‘€(π‘ˆ,π’ž), 𝑐𝑖𝑗≠Ø}.
Step 4: reduce 𝑓(π‘ˆ,π’ž) to 𝑔(π‘ˆ,π’ž)=(βˆ§π’ž1)βˆ¨β‹―βˆ¨(βˆ§π’žπ‘™) // where π’žπ‘˜βŠ†π’ž, π‘˜=1,2,…,𝑙, and every element in π’žπ‘˜ appears only once.
Step 5: output 𝑁red(π’ž)={π’ž1,π’ž2,…,π’žπ‘™} and 𝑁𝐼(π’ž)=βˆ©π‘red(π’ž).
Step 6: end.
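The algorithm can be sketched in Python as follows. This is our own illustrative encoding, not the authors' implementation; the brute-force minimal-hitting-set search stands in for expanding the discernibility function (Propositions 3.5 and 3.6 guarantee they agree) and is only feasible for small coverings:

```python
# Sketch of Algorithm 3.9: all N-reducts are exactly the minimal subsets
# of the covering hitting every nonempty discernibility-matrix entry.
from itertools import combinations

def granular_reducts(U, covering):
    def N(x):
        n = set(U)
        for C in covering.values():
            if x in C:
                n &= C
        return n
    # Step 2: the nonempty entries c_ij (as a set of clauses)
    clauses = {frozenset(name for name, C in covering.items()
                         if xi in C and xj not in C)
               for xi in U for xj in U if xj not in N(xi)}
    # Steps 3-5: minimal hitting sets of the clauses are the reducts
    names = sorted(covering)
    reducts = []
    for r in range(len(names) + 1):
        for S in map(frozenset, combinations(names, r)):
            if all(S & c for c in clauses) and not any(R <= S for R in reducts):
                reducts.append(S)
    return reducts

covering = {"C1": {1, 2, 3}, "C2": {1, 4}, "C3": {3, 4, 5}, "C4": {1, 4, 5},
            "C5": {1, 4, 6}, "C6": {3, 4}, "C7": {1, 4, 5, 6}}
print(sorted(map(sorted, granular_reducts(range(1, 7), covering))))
# [['C1', 'C3', 'C4', 'C5'], ['C1', 'C3', 'C5', 'C7']]
```

On the covering of Example 3.10 this reproduces the two 𝒩-reducts computed below.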

The following example is used to illustrate our idea.

Example 3.10. Suppose that π‘ˆ={π‘₯1,π‘₯2,…,π‘₯6}, where π‘₯𝑖 (𝑖=1,2,…,6) denote six objects, and let 𝐢𝑗 (𝑗=1,2,…,7) denote seven properties. The information is presented in Table 1: a βˆ— in position (𝑖,𝑗) indicates that the 𝑖th object possesses the 𝑗th attribute.


Table 1

Objects  𝐢1  𝐢2  𝐢3  𝐢4  𝐢5  𝐢6  𝐢7
π‘₯1       *   *       *   *       *
π‘₯2       *
π‘₯3       *       *           *
π‘₯4           *   *   *   *   *   *
π‘₯5               *   *           *
π‘₯6                       *       *

{π‘₯1,π‘₯2,π‘₯3} is the set of all objects possessing the attribute 𝐢1, and it is denoted by 𝐢1={π‘₯1,π‘₯2,π‘₯3}. Similarly, 𝐢2={π‘₯1,π‘₯4}, 𝐢3={π‘₯3,π‘₯4,π‘₯5}, 𝐢4={π‘₯1,π‘₯4,π‘₯5}, 𝐢5={π‘₯1,π‘₯4,π‘₯6}, 𝐢6={π‘₯3,π‘₯4}, and 𝐢7={π‘₯1,π‘₯4,π‘₯5,π‘₯6}. Evidently, π’ž={𝐢1,𝐢2,𝐢3,𝐢4,𝐢5,𝐢6,𝐢7} is a covering on π‘ˆ.

Then, 𝑁(π‘₯1)={π‘₯1}, 𝑁(π‘₯2)={π‘₯1,π‘₯2,π‘₯3}, 𝑁(π‘₯3)={π‘₯3}, 𝑁(π‘₯4)={π‘₯4}, 𝑁(π‘₯5)={π‘₯4,π‘₯5}, and 𝑁(π‘₯6)={π‘₯4,π‘₯6}.

The discernibility matrix of (π‘ˆ,π’ž) is exhibited as follows:

Ø          {𝐢2,𝐢4,𝐢5,𝐢7}        {𝐢2,𝐢4,𝐢5,𝐢7}   {𝐢1}   {𝐢1,𝐢2,𝐢5}   {𝐢1,𝐢2,𝐢4}
Ø          Ø                    Ø               {𝐢1}   {𝐢1}         {𝐢1}
{𝐢3,𝐢6}   {𝐢3,𝐢6}              Ø               {𝐢1}   {𝐢1,𝐢6}      {𝐢1,𝐢3,𝐢6}
{𝐢3,𝐢6}   {𝐢2,𝐢3,𝐢4,𝐢5,𝐢6,𝐢7} {𝐢2,𝐢4,𝐢5,𝐢7}   Ø      {𝐢2,𝐢5,𝐢6}   {𝐢2,𝐢3,𝐢4,𝐢6}
{𝐢3}      {𝐢3,𝐢4,𝐢7}           {𝐢4,𝐢7}         Ø      Ø            {𝐢3,𝐢4}
Ø          {𝐢5,𝐢7}              {𝐢5,𝐢7}         Ø      {𝐢5}         Ø
(3.1)

𝑓(π‘ˆ,π’ž)(𝐢1,𝐢2,…,𝐢7)=∧{∨(𝑐𝑖𝑗)βˆ£π‘–,𝑗=1,2,…,6, 𝑐𝑖𝑗≠Ø}
= (𝐢2∨𝐢4∨𝐢5∨𝐢7)∧(𝐢2∨𝐢4∨𝐢5∨𝐢7)∧𝐢1∧(𝐢1∨𝐢2∨𝐢5)∧(𝐢1∨𝐢2∨𝐢4)
∧ 𝐢1∧𝐢1∧𝐢1
∧ (𝐢3∨𝐢6)∧(𝐢3∨𝐢6)∧𝐢1∧(𝐢1∨𝐢6)∧(𝐢1∨𝐢3∨𝐢6)
∧ (𝐢3∨𝐢6)∧(𝐢2∨𝐢3∨𝐢4∨𝐢5∨𝐢6∨𝐢7)∧(𝐢2∨𝐢4∨𝐢5∨𝐢7)∧(𝐢2∨𝐢5∨𝐢6)∧(𝐢2∨𝐢3∨𝐢4∨𝐢6)
∧ 𝐢3∧(𝐢3∨𝐢4∨𝐢7)∧(𝐢4∨𝐢7)∧(𝐢3∨𝐢4)
∧ (𝐢5∨𝐢7)∧(𝐢5∨𝐢7)∧𝐢5
= (𝐢5∧𝐢1∧𝐢3∧𝐢4)∨(𝐢5∧𝐢1∧𝐢3∧𝐢7). (3.2)

So 𝑁red(π’ž)={{𝐢1,𝐢3,𝐢4,𝐢5},{𝐢1,𝐢3,𝐢5,𝐢7}}, 𝑁𝐼(π’ž)={𝐢1,𝐢3,𝐢5}. As a result, Table 1 can be simplified into Table 2 or Table 3, and the ability of classification is invariant. Obviously, the granular reduction algorithm can reduce data sets as shown.



Table 2

Objects  𝐢1  𝐢3  𝐢4  𝐢5
π‘₯1       *       *   *
π‘₯2       *
π‘₯3       *   *
π‘₯4           *   *   *
π‘₯5           *   *
π‘₯6                   *


Table 3

Objects  𝐢1  𝐢3  𝐢5  𝐢7
π‘₯1       *       *   *
π‘₯2       *
π‘₯3       *   *
π‘₯4           *   *   *
π‘₯5           *       *
π‘₯6               *   *

4. The Simplification of Discernibility Matrices

For the purpose of finding the set of all granular reducts, we have proposed the discernibility matrix method. Unfortunately, computing all reducts in this way is NP-hard, since the discernibility matrix in this paper is more complex than the one in [33]. Accordingly, we simplify the discernibility matrices in this section. In addition, a heuristic algorithm is presented to avoid the NP-hard computation.

Definition 4.1. Let 𝑀(π‘ˆ,π’ž)=(𝑐𝑖𝑗)𝑛×𝑛 be the discernibility matrix of (π‘ˆ,π’ž). For any π‘π‘–π‘—βˆˆπ‘€(π‘ˆ,π’ž), if there is a nonempty element 𝑐𝑖0𝑗0βˆˆπ‘€(π‘ˆ,π’ž)βˆ’{𝑐𝑖𝑗} such that 𝑐𝑖0𝑗0βŠ†π‘π‘–π‘—, let 𝑐′𝑖𝑗=Ø; otherwise, 𝑐′𝑖𝑗=𝑐𝑖𝑗. We then obtain a new matrix SIM(π‘ˆ,π’ž)=(𝑐′𝑖𝑗)𝑛×𝑛, which is called the simplified discernibility matrix of (π‘ˆ,π’ž).
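A sketch of this simplification follows (our own reading of the definition on distinct entry values: one representative of each inclusion-minimal value is kept, since under a strictly positional reading two duplicated singleton entries would erase each other):

```python
# Sketch of Definition 4.1, read on distinct entry values: keep only the
# inclusion-minimal nonempty entries (one representative each).
def simplify(entries):
    """entries: iterable of frozensets (the nonempty c_ij)."""
    distinct = set(entries)
    return {c for c in distinct if not any(d < c for d in distinct)}

# Distinct nonempty entries of the matrix (3.1) from Example 3.10:
entries = [frozenset(e) for e in (
    {"C2", "C4", "C5", "C7"}, {"C1"}, {"C1", "C2", "C5"}, {"C1", "C2", "C4"},
    {"C3", "C6"}, {"C1", "C6"}, {"C1", "C3", "C6"},
    {"C2", "C3", "C4", "C5", "C6", "C7"}, {"C2", "C5", "C6"},
    {"C2", "C3", "C4", "C6"}, {"C3"}, {"C3", "C4", "C7"}, {"C4", "C7"},
    {"C3", "C4"}, {"C5", "C7"}, {"C5"},
)]
print(sorted(map(sorted, simplify(entries))))
# [['C1'], ['C3'], ['C4', 'C7'], ['C5']]
```

The surviving entries are exactly those of the simplified matrix computed in Example 4.6.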

Theorem 4.2. Let 𝑀(π‘ˆ,π’ž) be the discernibility matrix of (π‘ˆ,π’ž), let SIM(π‘ˆ,π’ž) be the simplified discernibility matrix, and let π’žβ€²βŠ†π’ž. Then π’žβ€²βˆ©π‘π‘–π‘—β‰ Γ˜ for any nonempty element π‘π‘–π‘—βˆˆπ‘€(π‘ˆ,π’ž) if and only if π’žβ€²βˆ©π‘β€²π‘–π‘—β‰ Γ˜ for any nonempty element 𝑐′𝑖𝑗∈SIM(π‘ˆ,π’ž).

Proof. If π’žβ€²βˆ©π‘π‘–π‘—β‰ Γ˜ for every π‘π‘–π‘—β‰ Γ˜ and π‘π‘–π‘—βˆˆπ‘€(π‘ˆ,π’ž), it is evident that π’žβ€²βˆ©π‘ξ…žπ‘–π‘—β‰ Γ˜ for every π‘ξ…žπ‘–π‘—β‰ Γ˜ and π‘ξ…žπ‘–π‘—βˆˆSIM(π‘ˆ,π’ž).
Suppose that π’žβ€²βˆ©π‘ξ…žπ‘–π‘—β‰ Γ˜ for every π‘ξ…žπ‘–π‘—β‰ Γ˜ and π‘ξ…žπ‘–π‘—βˆˆSIM(π‘ˆ,π’ž). For any nonempty π‘π‘–π‘—βˆˆπ‘€(π‘ˆ,π’ž), if there is an nonempty element 𝑐𝑖0𝑗0βˆˆπ‘€(π‘ˆ,π’ž)βˆ’{𝑐𝑖𝑗} such that 𝑐𝑖0𝑗0βŠ†π‘π‘–π‘—, and for any nonempty element 𝑐𝑖1𝑗1βˆˆπ‘€(π‘ˆ,π’ž)βˆ’{𝑐𝑖𝑗,𝑐𝑖0𝑗0}, 𝑐𝑖1𝑗1ΜΈβŠ†π‘π‘–0𝑗0, then π‘ξ…žπ‘–0𝑗0=𝑐𝑖0𝑗0β‰ Γ˜. Since π’žβ€²βˆ©π‘ξ…žπ‘–0𝑗0β‰ Γ˜, then π’žβ€²βˆ©π‘π‘–0𝑗0β‰ Γ˜; thus, π’žβ€²βˆ©π‘π‘–π‘—β‰ Γ˜. If 𝑐𝑖0𝑗0ΜΈβŠ†π‘π‘–π‘— for any nonempty element 𝑐𝑖0𝑗0βˆˆπ‘€(π‘ˆ,π’ž)βˆ’{𝑐𝑖𝑗}, then π‘ξ…žπ‘–π‘—=𝑐𝑖𝑗. Since π’žβ€²βˆ©π‘ξ…žπ‘–π‘—β‰ Γ˜, then π’žβ€²βˆ©π‘π‘–π‘—β‰ Γ˜. Thus, π’žβ€²βˆ©π‘π‘–π‘—β‰ Γ˜ for every nonempty π‘π‘–π‘—βˆˆπ‘€(π‘ˆ,π’ž).

Proposition 4.3. Suppose that π’žβ€²βŠ†π’ž; then π’žβ€²βˆˆπ‘red(π’ž) if and only if π’žβ€² is a minimal set satisfying π’žβ€²βˆ©π‘β€²π‘–π‘—β‰ Γ˜ for every nonempty 𝑐′𝑖𝑗∈SIM(π‘ˆ,π’ž).

Proposition 4.4. Consider that βˆͺ{π‘ξ…žπ‘–π‘—βˆ£π‘ξ…žπ‘–π‘—βˆˆSIM(π‘ˆ,π’ž)}=βˆͺ𝑁red(π’ž).

Proof. Suppose that 𝐢∈βˆͺ{π‘β€²π‘–π‘—βˆ£π‘β€²π‘–π‘—βˆˆSIM(π‘ˆ,π’ž)}; then there is 𝑐′𝑖𝑗∈SIM(π‘ˆ,π’ž) such that πΆβˆˆπ‘β€²π‘–π‘—. For every 𝑐′𝑖𝑗∈SIM(π‘ˆ,π’ž), if πΆβˆˆπ‘β€²π‘–π‘—, let 𝑐1𝑖𝑗={𝐢}; otherwise, let 𝑐1𝑖𝑗={𝐢𝑖𝑗}, where πΆπ‘–π‘—βˆˆπ‘β€²π‘–π‘—. Set 𝑀1(π‘ˆ,π’ž)=(𝑐1𝑖𝑗)𝑛×𝑛; it is easy to prove that βˆͺ{𝑐1π‘–π‘—βˆ£π‘1π‘–π‘—βˆˆπ‘€1(π‘ˆ,π’ž)}βˆˆπ‘red(π’ž), and this reduct contains 𝐢. Thus, 𝐢∈βˆͺ𝑁red(π’ž).
Suppose that 𝐢∈βˆͺ𝑁red(π’ž), then there is π’žπ‘˜βˆˆπ‘red(π’ž) such that πΆβˆˆπ’žπ‘˜. From Proposition 4.3, we know that π’žπ‘˜ is a minimal set satisfying π’žπ‘˜βˆ©π‘β€²π‘–π‘—β‰ Γ˜ for every π‘ξ…žπ‘–π‘—β‰ Γ˜ and π‘ξ…žπ‘–π‘—βˆˆSIM(π‘ˆ,π’ž). So there is a π‘ξ…žπ‘–π‘—βˆˆSIM(π‘ˆ,π’ž) such that πΆβˆˆπ‘ξ…žπ‘–π‘—, or else 𝐢 is redundant in π’žπ‘˜. Thus, 𝐢∈βˆͺ{π‘ξ…žπ‘–π‘—βˆ£π‘ξ…žπ‘–π‘—βˆˆSIM(π‘ˆ,π’ž)}.
In summary, βˆͺ{π‘ξ…žπ‘–π‘—βˆ£π‘ξ…žπ‘–π‘—βˆˆSIM(π‘ˆ,π’ž)}=βˆͺ𝑁red(π’ž).

Proposition 4.5. Let SIM(π‘ˆ,π’ž)=(π‘ξ…žπ‘–π‘—)𝑛×𝑛 be the simplified discernibility matrix of (π‘ˆ,π’ž), then SIM(π‘ˆ,π’ž) is the minimal matrix to compute all granular reducts of π’ž, that is, for any matrix 𝑀0(π‘ˆ,π’ž)=(𝑑𝑖𝑗)𝑛×𝑛 where π‘‘π‘–π‘—βŠ†π‘ξ…žπ‘–π‘—, 𝑀0(π‘ˆ,π’ž) can compute all granular reducts of π’ž if and only if 𝑑𝑖𝑗=π‘ξ…žπ‘–π‘— for 1≀𝑖,𝑗≀𝑛.

Proof. If 𝑑𝑖𝑗=π‘ξ…žπ‘–π‘— for 1≀𝑖,𝑗≀𝑛, then 𝑀0(π‘ˆ,π’ž)=SIM(π‘ˆ,π’ž), and 𝑀0(π‘ˆ,π’ž) can compute all granular reducts of π’ž.
Suppose that there is a nonempty π‘ξ…žπ‘–0𝑗0∈SIM(π‘ˆ,π’ž) such that 𝑑𝑖0𝑗0βŠ‚π‘ξ…žπ‘–0𝑗0. If |π‘ξ…žπ‘–0𝑗0|=1, suppose that π‘ξ…žπ‘–0𝑗0={𝐢0}, then 𝑑𝑖0𝑗0=Ø. From the definition of the simplification discernibility matrix, we know that 𝐢0βˆ‰π‘ξ…žπ‘–π‘— for any π‘ξ…žπ‘–π‘—βˆˆSIM(π‘ˆ,π’ž)βˆ’{π‘ξ…žπ‘–0𝑗0}, then 𝐢0βˆ‰π‘‘π‘–π‘— for any π‘‘π‘–π‘—βˆˆπ‘€0(π‘ˆ,π’ž). So 𝑀0(π‘ˆ,π’ž) cannot compute any granular reducts of π’ž. If |π‘ξ…žπ‘–0𝑗0|β‰₯2, we suppose that 𝑑𝑖0𝑗0β‰ Γ˜. Then there is a 𝐢∈(π‘ξ…žπ‘–0𝑗0βˆ’π‘‘π‘–0𝑗0), and let 𝑐1𝑖0𝑗0={𝐢}. For any π‘ξ…žπ‘–π‘—βˆˆSIM(π‘ˆ,π’ž)βˆ’{π‘ξ…žπ‘–0𝑗0}, if πΆβˆˆπ‘ξ…žπ‘–π‘—, let 𝑐1𝑖𝑗=Ø. Otherwise, let 𝑐1𝑖𝑗={𝐢𝑖𝑗} where πΆπ‘–π‘—βˆˆπ‘ξ…žπ‘–π‘—βˆ’π‘ξ…žπ‘–0𝑗0. Let 𝑀1(π‘ˆ,π’ž)=(𝑐1𝑖𝑗)𝑛×𝑛 and π’žβ€²=βˆͺ{𝑐1π‘–π‘—βˆ£π‘1π‘–π‘—βˆˆπ‘€1(π‘ˆ,π’ž)}, and it is easy to prove that π’žβ€²βˆˆπ‘red(π’ž). However, π’žβ€²βˆ©π‘‘π‘–0𝑗0=Ø, that is, 𝑀0(π‘ˆ,π’ž) cannot compute all granular reducts of π’ž. Thus, if 𝑀0(π‘ˆ,π’ž) can compute all granular reducts of π’ž, then 𝑑𝑖𝑗=π‘ξ…žπ‘–π‘— for 1≀𝑖,𝑗≀𝑛.

From the above propositions, we know that the simplified discernibility matrix is the minimal discernibility matrix computing the same reducts as the original one. Hereafter, we only examine simplified discernibility matrices instead of general discernibility matrices. The following example illustrates our idea.

Example 4.6. The simplified discernibility matrix of (π‘ˆ,π’ž) in Example 3.10 is as follows:

Ø        Ø   Ø          {𝐢1}   Ø      Ø
Ø        Ø   Ø          Ø      Ø      Ø
Ø        Ø   Ø          Ø      Ø      Ø
Ø        Ø   Ø          Ø      Ø      Ø
{𝐢3}    Ø   {𝐢4,𝐢7}    Ø      Ø      Ø
Ø        Ø   Ø          Ø      {𝐢5}   Ø

𝑓(π‘ˆ,π’ž)(𝐢1,𝐢2,…,𝐢7)=∧{∨(𝑐′𝑖𝑗)βˆ£π‘–,𝑗=1,2,…,6, 𝑐′𝑖𝑗≠Ø}=𝐢5∧𝐢1∧𝐢3∧(𝐢4∨𝐢7)=(𝐢5∧𝐢1∧𝐢3∧𝐢4)∨(𝐢5∧𝐢1∧𝐢3∧𝐢7). (4.1)

So 𝑁red(π’ž)={{𝐢1,𝐢3,𝐢4,𝐢5},{𝐢1,𝐢3,𝐢5,𝐢7}}, 𝑁𝐼(π’ž)={𝐢1,𝐢3,𝐢5}.

From the above example, it is easy to see that the simplified discernibility matrix shortens the computation remarkably. In particular, when π’ž is a consistent covering as proposed in [30], that is, 𝑁red(π’ž)={𝑁𝐼(π’ž)}, the unique reduct is 𝑁red(π’ž)={βˆͺ{π‘β€²π‘–π‘—βˆ£π‘β€²π‘–π‘—βˆˆSIM(π‘ˆ,π’ž)}}.

Unfortunately, although the simplified discernibility matrices are simpler, computing reducts through the discernibility function is still NP-hard. Accordingly, we develop a heuristic algorithm to obtain a single reduct from a discernibility matrix directly.

Let 𝑀(π‘ˆ,π’ž)=(𝑐𝑖𝑗)𝑛×𝑛 be a discernibility matrix. We denote the number of the elements in 𝑐𝑖𝑗 by |𝑐𝑖𝑗|. For any πΆβˆˆπ’ž, ||𝐢|| denotes the number of 𝑐𝑖𝑗 which contain 𝐢. Let π‘π‘–π‘—βˆˆπ‘€(π‘ˆ,π’ž), if for any πΆβˆˆπ‘πΌ(π’ž), πΆβˆ‰π‘π‘–π‘—, then π‘ξ…žπ‘–π‘—=𝑐𝑖𝑗. Since βˆͺ{π‘ξ…žπ‘–π‘—βˆ£|π‘ξ…žπ‘–π‘—|β‰₯2}=βˆͺ𝑁red(π’ž)βˆ’π‘πΌ(π’ž), if |π‘ξ…žπ‘–π‘—|β‰₯2, then the elements in π‘ξ…žπ‘–π‘— may either be deleted from π’ž or be preserved. Suppose that 𝐢0∈βˆͺ{π‘ξ…žπ‘–π‘—βˆ£|π‘ξ…žπ‘–π‘—|β‰₯2}, if ||𝐢0||β‰₯||𝐢|| for any 𝐢∈βˆͺ{π‘ξ…žπ‘–π‘—βˆ£|π‘ξ…žπ‘–π‘—|β‰₯2}, 𝐢0 is called the maximal element with respect to the simplified discernibility matrix SIM(π‘ˆ,π’ž). The heuristic algorithm to get a reduct from a discernibility matrix directly proceeds as follows.

Algorithm 4.7. Consider the following:
Input: βŸ¨π‘ˆ,π’žβŸ©.
Output: a granular reduct red.
Step 1: 𝑀(π‘ˆ,π’ž)=(𝑐𝑖𝑗)𝑛×𝑛; for each 𝑐𝑖𝑗, let 𝑐𝑖𝑗=Ø.
Step 2: for each π‘₯π‘–βˆˆπ‘ˆ, compute 𝑁(π‘₯𝑖)=∩{πΆβˆˆπ’žβˆ£π‘₯π‘–βˆˆπΆ}. If π‘₯π‘—βˆ‰π‘(π‘₯𝑖), set 𝑐𝑖𝑗={πΆβˆˆπ’žβˆ£π‘₯π‘–βˆˆπΆ, π‘₯π‘—βˆ‰πΆ} // get the discernibility matrix.
Step 3: for each π‘π‘–π‘—βˆˆπ‘€(π‘ˆ,π’ž), if there is a nonempty element 𝑐𝑖0𝑗0βˆˆπ‘€(π‘ˆ,π’ž)βˆ’{𝑐𝑖𝑗} such that 𝑐𝑖0𝑗0βŠ†π‘π‘–π‘—, let 𝑐𝑖𝑗=Ø // get the simplified discernibility matrix.
Step 4: for each πΆπ‘–βˆˆβˆͺ𝑀(π‘ˆ,π’ž), compute ||𝐢𝑖|| and select a maximal element 𝐢0 of SIM(π‘ˆ,π’ž). For each π‘π‘–π‘—βˆˆπ‘€(π‘ˆ,π’ž), if 𝐢0βˆˆπ‘π‘–π‘—, let 𝑐𝑖𝑗={𝐢0}.
Step 5: if there is π‘π‘–π‘—βˆˆπ‘€(π‘ˆ,π’ž) such that |𝑐𝑖𝑗|β©Ύ2, return to Step 3; else output red=βˆͺ𝑀(π‘ˆ,π’ž).
Step 6: end.
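A self-contained Python sketch of this heuristic follows (our own encoding and interpretation, not the authors' implementation; ties between maximal elements are broken arbitrarily, so either reduct of Example 3.10 may be produced):

```python
# Sketch of Algorithm 4.7: build the entries, keep inclusion-minimal ones,
# repeatedly collapse multi-element entries onto a maximal element, and
# return the union of the final entries as one N-reduct.
from collections import Counter

def heuristic_reduct(U, covering):
    def N(x):
        n = set(U)
        for C in covering.values():
            if x in C:
                n &= C
        return n
    entries = {frozenset(name for name, C in covering.items()
                         if xi in C and xj not in C)
               for xi in U for xj in U if xj not in N(xi)}
    while True:
        # keep only inclusion-minimal entry values (the simplified matrix)
        entries = {c for c in entries if not any(d < c for d in entries)}
        big = [c for c in entries if len(c) >= 2]
        if not big:
            break
        counts = Counter(x for c in big for x in c)
        c0 = counts.most_common(1)[0][0]          # a maximal element
        entries = ({frozenset({c0}) if c0 in c else c for c in big}
                   | {c for c in entries if len(c) < 2})
    return set().union(*entries) if entries else set()

covering = {"C1": {1, 2, 3}, "C2": {1, 4}, "C3": {3, 4, 5}, "C4": {1, 4, 5},
            "C5": {1, 4, 6}, "C6": {3, 4}, "C7": {1, 4, 5, 6}}
print(sorted(heuristic_reduct(range(1, 7), covering)))
# one of ['C1', 'C3', 'C4', 'C5'] or ['C1', 'C3', 'C5', 'C7']
```

Unlike the exhaustive discernibility-function expansion, each pass only counts occurrences and collapses entries, so a single reduct is obtained without enumerating the whole search space.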

Example 4.8. The simplified discernibility matrix of (π‘ˆ,π’ž) in Example 3.10 is as follows: βŽ›βŽœβŽœβŽœβŽœβŽœβŽœβŽœβŽœβŽœβŽœβŽœβŽΓ˜ξ€½πΆ5ξ€ΎΓ˜ξ€½πΆ1ξ€Ύξ€½πΆΓ˜Γ˜Γ˜Γ˜Γ˜Γ˜Γ˜Γ˜3ξ€Ύξ€½πΆΓ˜Γ˜Γ˜Γ˜Γ˜Γ˜Γ˜Γ˜Γ˜Γ˜Γ˜Γ˜Γ˜4,𝐢7ξ€ΎβŽžβŽŸβŽŸβŽŸβŽŸβŽŸβŽŸβŽŸβŽŸβŽŸβŽŸβŽŸβŽ Γ˜Γ˜Γ˜Γ˜Γ˜Γ˜Γ˜Γ˜Γ˜.(4.2)
For the maximal element C_4 of SIM(U, 𝒞), replacing the two-element entry {C_4, C_7} by {C_4} yields the matrix M_1(U, 𝒞), whose nonempty entries are {C_5}, {C_1}, {C_3}, and {C_4}. (4.3)
Thus, {C_1, C_3, C_4, C_5} = ∪{c¹_ij : c¹_ij ∈ M_1(U, 𝒞)} is a granular reduct of 𝒞.
For the maximal element C_7 of SIM(U, 𝒞), replacing the two-element entry {C_4, C_7} by {C_7} yields the matrix M_2(U, 𝒞), whose nonempty entries are {C_5}, {C_1}, {C_3}, and {C_7}. (4.4) Thus, {C_1, C_3, C_5, C_7} = ∪{c²_ij : c²_ij ∈ M_2(U, 𝒞)} is also a granular reduct of 𝒞.
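The two reducts of Example 4.8 can be reproduced mechanically: collapse the single two-element entry of SIM(U, 𝒞) to either of its blocks and take the union of all entries. A small sketch, assuming the entries are represented as frozensets of block names:

```python
# Reproducing the two reducts of Example 4.8. The nonempty entries of
# the simplified discernibility matrix, as frozensets of block names.
entries = [frozenset({'C5'}), frozenset({'C1'}),
           frozenset({'C3'}), frozenset({'C4', 'C7'})]

reducts = []
for choice in ('C4', 'C7'):  # the two maximal elements of SIM(U, C)
    # Collapse the two-element entry to the chosen block; keep singletons.
    collapsed = [e if len(e) == 1 else frozenset({choice}) for e in entries]
    reducts.append(sorted(set().union(*collapsed)))

print(reducts)  # [['C1', 'C3', 'C4', 'C5'], ['C1', 'C3', 'C5', 'C7']]
```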

The above example shows that the heuristic algorithm avoids the NP-hard problem and generates a granular reduct directly from the simplified discernibility matrix. With the heuristic algorithm, the granular reduction theory based on discernibility matrices is no longer limited to the theoretical level but becomes applicable in practice.

5. Conclusion

In this paper, we develop, for the first time, an algorithm based on discernibility matrices to compute all the granular reducts of covering rough sets. A simplification of the discernibility matrix is also proposed for the first time. Moreover, a heuristic algorithm is presented that avoids the NP-hard problem in granular reduction, so that a granular reduct can be generated rapidly.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grants no. 11201490 and no. 11061004 and by the Science and Technology Plan Projects of Hunan Province under Grant no. 2011FJ3152.

References

  1. Z. Pawlak, "Rough sets," International Journal of Computer and Information Sciences, vol. 11, no. 5, pp. 341–356, 1982.
  2. Z. Pawlak, Rough Sets: Theoretical Aspects of Reasoning about Data, Kluwer Academic, Boston, Mass, USA, 1991.
  3. G. Dong, J. Han, J. Lam, J. Pei, K. Wang, and W. Zou, "Mining constrained gradients in large databases," IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 8, pp. 922–938, 2004.
  4. S. Pal and P. Mitra, "Case generation using rough sets with fuzzy representation," IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 3, pp. 292–300, 2004.
  5. L. Polkowski and A. Skowron, Rough Sets and Current Trends in Computing, vol. 1424, Springer, Berlin, Germany, 1998.
  6. L. Polkowski and A. Skowron, Eds., Rough Sets in Knowledge Discovery, vol. 1, Physica-Verlag, Berlin, Germany, 1998.
  7. L. Polkowski and A. Skowron, Eds., Rough Sets in Knowledge Discovery, vol. 2, Physica-Verlag, Berlin, Germany, 1998.
  8. N. Zhong, Y. Yao, and M. Ohshima, "Peculiarity oriented multidatabase mining," IEEE Transactions on Knowledge and Data Engineering, vol. 15, no. 4, pp. 952–960, 2003.
  9. Z. Bonikowski, E. Bryniarski, and U. Wybraniec-Skardowska, "Extensions and intentions in the rough set theory," Information Sciences, vol. 107, no. 1–4, pp. 149–167, 1998.
  10. E. Bryniarski, "A calculus of rough sets of the first order," Bulletin of the Polish Academy of Sciences, vol. 37, no. 1–6, pp. 71–78, 1989.
  11. C. Degang, W. Changzhong, and H. Qinghua, "A new approach to attribute reduction of consistent and inconsistent covering decision systems with covering rough sets," Information Sciences, vol. 177, no. 17, pp. 3500–3518, 2007.
  12. C. Degang, Z. Wenxiu, D. Yeung, and E. C. C. Tsang, "Rough approximations on a complete completely distributive lattice with applications to generalized rough sets," Information Sciences, vol. 176, no. 13, pp. 1829–1848, 2006.
  13. T.-J. Li, Y. Leung, and W.-X. Zhang, "Generalized fuzzy rough approximation operators based on fuzzy coverings," International Journal of Approximate Reasoning, vol. 48, no. 3, pp. 836–856, 2008.
  14. T.-J. Li and W.-X. Zhang, "Rough fuzzy approximations on two universes of discourse," Information Sciences, vol. 178, no. 3, pp. 892–906, 2008.
  15. E. C. C. Tsang, C. Degang, and D. S. Yeung, "Approximations and reducts with covering generalized rough sets," Computers & Mathematics with Applications, vol. 56, no. 1, pp. 279–289, 2008.
  16. E. Tsang, D. Chen, J. Lee, and D. S. Yeung, "On the upper approximations of covering generalized rough sets," in Proceedings of the 3rd International Conference on Machine Learning and Cybernetics, pp. 4200–4203, 2004.
  17. W. Zhu and F.-Y. Wang, "Reduction and axiomization of covering generalized rough sets," Information Sciences, vol. 152, pp. 217–230, 2003.
  18. W. Zhu, "Topological approaches to covering rough sets," Information Sciences, vol. 177, no. 6, pp. 1499–1508, 2007.
  19. W. Zhu, "Relationship between generalized rough sets based on binary relation and covering," Information Sciences, vol. 179, no. 3, pp. 210–225, 2009.
  20. X. Y. Chen and Q. G. Li, "Construction of rough approximations in fuzzy setting," Fuzzy Sets and Systems, vol. 158, no. 23, pp. 2641–2653, 2007.
  21. T. Y. Lin, "Topological and fuzzy rough sets," in Intelligent Decision Support: Handbook of Applications and Advances of the Rough Set Theory, R. Slowinski, Ed., pp. 287–304, Kluwer Academic, Boston, Mass, USA, 1992.
  22. A. Skowron and J. Stepaniuk, "Tolerance approximation spaces," Fundamenta Informaticae, vol. 27, no. 2-3, pp. 245–253, 1996.
  23. C. Wang, C. Wu, and D. Chen, "A systematic study on attribute reduction with rough sets based on general binary relations," Information Sciences, vol. 178, no. 9, pp. 2237–2261, 2008.
  24. Y. Y. Yao, "On generalizing Pawlak approximation operators," in LNAI, vol. 1424, pp. 298–307, 1998.
  25. Y. Y. Yao, "Constructive and algebraic methods of the theory of rough sets," Information Sciences, vol. 109, no. 1–4, pp. 21–47, 1998.
  26. Y. Y. Yao, "Relational interpretations of neighborhood operators and rough set approximation operators," Information Sciences, vol. 111, no. 1–4, pp. 239–259, 1998.
  27. F. Hu, G. Y. Wang, H. Huang et al., "Incremental attribute reduction based on elementary sets," RSFDGrC, vol. 36, no. 41, pp. 185–193, 2005.
  28. R. Jensen and Q. Shen, "Semantics-preserving dimensionality reduction: rough and fuzzy-rough-based approaches," IEEE Transactions on Knowledge and Data Engineering, vol. 16, no. 12, pp. 1457–1471, 2004.
  29. A. Skowron and C. Rauszer, "The discernibility matrices and functions in information systems," in Intelligent Decision Support: Handbook of Applications and Advances of the Rough Sets Theory, R. Slowinski, Ed., Kluwer Academic, Boston, Mass, USA, 1992.
  30. T. Yang and Q. G. Li, "Reduction about approximation spaces of covering generalized rough sets," International Journal of Approximate Reasoning, vol. 51, no. 3, pp. 335–345, 2010.
  31. T. Yang, Q. G. Li, and B. L. Zhou, "Granular reducts from the topological view of covering rough sets," in Proceedings of the 8th IEEE International Conference on Granular Computing, Zhejiang University, Hangzhou, China, August 2012.
  32. T. Yang, Q. G. Li, and B. L. Zhou, "Related family: a new method for attribute reduction of covering information systems," Information Sciences. In press.
  33. S. K. M. Wong and W. Ziarko, "On optimal decision rules in decision tables," Bulletin of the Polish Academy of Sciences, vol. 33, no. 11-12, pp. 693–696, 1985.

Copyright © 2012 Tian Yang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

