Special Approach to Near Set Theory
The aim of this paper is to introduce two approaches to near sets by using a special neighbourhood. Some fundamental properties and characterizations are given. We obtain a comparison between these new set approximations and the set approximations introduced by Peters (2011, 2009, 2007, 2006).
Rough set theory, proposed by Pawlak in 1982 [1, 2], can be seen as a new mathematical approach to vagueness. The rough set philosophy is founded on the assumption that with every object of the universe of discourse we associate some information (data, knowledge). For example, if objects are patients suffering from a certain disease, symptoms of the disease form information about the patients. Objects characterized by the same information are indiscernible (similar) in view of the available information about them. The indiscernibility relation generated in this way is the mathematical basis of rough set theory. This understanding of indiscernibility is related to the idea of Gottfried Wilhelm Leibniz that objects are indiscernible if and only if all available functionals take on identical values (Leibniz's Law of Indiscernibility: The Identity of Indiscernibles). However, in the rough set approach, indiscernibility is defined relative to a given set of partial functions (attributes).
Any set of all indiscernible (similar) objects is called an elementary set and forms a basic granule (atom) of knowledge about the universe. Any union of elementary sets is referred to as a crisp (precise) set. A set which is not crisp is called a rough (imprecise, vague) set.
Consequently, each rough set has boundary region cases, that is, objects that cannot with certainty be classified either as members of the set or as members of its complement. Obviously, crisp sets have no boundary region elements at all. This means that boundary region cases cannot be properly classified by employing the available knowledge.
Thus, the assumption that objects can be seen only through the information available about them leads to the view that knowledge has a granular structure. Due to the granularity of knowledge, some objects of interest cannot be discerned and appear to be the same (identical or similar). Consequently, vague concepts, in contrast to precise concepts, cannot be characterized in terms of information about their elements.
Ultimately, there is interest in selecting probe functions that lead to descriptions of objects that are minimally near each other. This is an essential idea in the near set approach [5–7] and differs markedly from the minimum description length (MDL) principle proposed in 1983 by Jorma Rissanen. MDL depends on the identification of possible data models and possible probability models. By contrast, the near set approach deals with a set that is the domain of a description used to identify similar objects. The term similar is used here to denote the presence of objects that have descriptions that match each other to some degree.
The near set approach leads to partitions of ensembles of sample objects with measurable information content and to an approach to feature selection. The proposed feature selection method considers combinations of probe functions, taken several at a time, in searching for those combinations that lead to partitions of a set of objects with the highest information content.
In this paper, we assume that any vague concept is replaced by a pair of precise concepts, called the lower and the upper approximations of the vague concept. The lower approximation consists of all objects which surely belong to the concept, and the upper approximation contains all objects which possibly belong to the concept. The difference between the upper and the lower approximation constitutes the boundary region of the vague concept. These approximations are the two basic operations in rough set theory, and they can be useful in the analysis of sample data. The proposed approach does not depend on the joint probability of finding a feature value for input vectors that belong to the same class. In addition, the proposed approach to measuring the information content of families of neighbourhoods differs from the rough set approach. The near set approach does not depend on a preferential ordering of the value sets of functions representing object features. The contribution of this research is the introduction of a generalization of the near set approach to feature selection.
Rough set theory expresses vagueness, not by means of membership, but by employing a boundary region of a set. If the boundary region of a set is empty, it means that the set is crisp, otherwise the set is rough (inexact). The nonempty boundary region of a set means that our knowledge about the set is not sufficient to define the set precisely.
Suppose we are given a set $U$ of objects called the universe and an indiscernibility relation $R \subseteq U \times U$, representing our lack of knowledge about the elements of $U$. For the sake of simplicity, we assume that $R$ is an equivalence relation and that $X$ is a subset of $U$. We want to characterize the set $X$ with respect to $R$. To this end we will need the basic concepts of rough set theory given below.
The equivalence class of $R$ determined by an element $x \in U$ is $R(x) = \{y \in U : x\,R\,y\}$. Hence the $R$-lower and $R$-upper approximations and the boundary region of a subset $X \subseteq U$ are
$$R_*(X) = \{x \in U : R(x) \subseteq X\},$$
$$R^*(X) = \{x \in U : R(x) \cap X \neq \emptyset\},$$
$$BN_R(X) = R^*(X) \setminus R_*(X).$$
It is easily seen that the approximations are in fact the interior and closure operations in a topology generated by the indiscernibility relation $R$.
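The approximation operations above can be sketched in a few lines of code. The following Python fragment is an illustrative sketch only (the function names and the toy partition are our own, not from the paper): it computes the lower and upper approximations and the boundary region directly from the equivalence classes of $R$.

```python
# Illustrative sketch: the universe is split into equivalence classes of the
# indiscernibility relation R, and a subset X is approximated from below and
# above by unions of whole classes.

def lower_approx(partition, X):
    """R-lower approximation: union of classes entirely contained in X."""
    return {x for c in partition if c <= X for x in c}

def upper_approx(partition, X):
    """R-upper approximation: union of classes that intersect X."""
    return {x for c in partition if c & X for x in c}

def boundary(partition, X):
    """Boundary region: upper approximation minus lower approximation."""
    return upper_approx(partition, X) - lower_approx(partition, X)

partition = [{1, 2}, {3, 4}, {5}]          # equivalence classes of R
X = {1, 2, 3}
print(sorted(lower_approx(partition, X)))  # [1, 2]
print(sorted(upper_approx(partition, X)))  # [1, 2, 3, 4]
print(sorted(boundary(partition, X)))      # [3, 4]
```

Here the nonempty boundary {3, 4} witnesses that X is rough: the class {3, 4} straddles X and its complement.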
The rough membership function $\mu_X^R : U \to [0,1]$ is a measure of the degree to which $x$ belongs to $X$ in view of the information expressed by $R$. It is defined as
$$\mu_X^R(x) = \frac{|X \cap R(x)|}{|R(x)|},$$
where $|\cdot|$ denotes the cardinality of a set.
A rough set can also be characterized numerically by the accuracy measure of an approximation, defined as
$$\alpha_R(X) = \frac{|R_*(X)|}{|R^*(X)|}.$$
Obviously, $0 \leq \alpha_R(X) \leq 1$. If $\alpha_R(X) = 1$, then $X$ is crisp with respect to $R$ ($X$ is precise with respect to $R$); otherwise, if $\alpha_R(X) < 1$, then $X$ is rough with respect to $R$ ($X$ is vague with respect to $R$).
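Both numeric characterizations are easy to compute once the partition is known. A minimal sketch with toy data (names and values are our own assumptions):

```python
# Sketch of the rough membership function and the accuracy measure.

def rough_membership(partition, X, x):
    """mu_X^R(x) = |X ∩ R(x)| / |R(x)|, where R(x) is the class containing x."""
    Rx = next(c for c in partition if x in c)
    return len(X & Rx) / len(Rx)

def accuracy(partition, X):
    """alpha_R(X) = |lower| / |upper|; assumes the upper approximation is nonempty."""
    lower = {x for c in partition if c <= X for x in c}
    upper = {x for c in partition if c & X for x in c}
    return len(lower) / len(upper)

partition = [{1, 2}, {3, 4}, {5}]
X = {1, 2, 3}
print(rough_membership(partition, X, 3))  # 0.5 -> 3's class {3, 4} is half inside X
print(accuracy(partition, X))             # 0.5 -> |{1, 2}| / |{1, 2, 3, 4}|, so X is rough
```

An accuracy strictly below 1 signals a nonempty boundary region, in agreement with the definitions above.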
Underlying the study of near set theory is an interest in classifying sample objects by means of probe functions associated with object features. More recently, the term feature has been defined as the form, fashion, or shape (of an object).
Let $F$ denote a set of features for objects in a set $X$. For any feature $a \in F$, we associate a probe function $f_a$ that maps $X$ to some set $V_a$ (the range of $f_a$).
$(U, F, N_r, \nu_B)$ is a generalized approximation space, where $U$ is a universe of objects, $F$ is a set of functions representing object features, $N_r$ is a neighbourhood family function, and $\nu_B$ is an overlap function defined by
$$\nu_B(X, Y) = \frac{|X \cap Y|}{|Y|}, \quad Y \neq \emptyset,$$
where $Y$ is a member of the family of neighbourhoods, and $\nu_B(X, Y)$ is equal to 1 if $Y = \emptyset$.
The overlap function $\nu_B$ maps a pair of sets to a number in $[0,1]$, representing the degree of overlap between the sets of objects with features in $B \subseteq F$.
The $N_r(B)$-lower and upper approximations and the boundary region of a set $X$ with respect to features from the probe functions $B$ are defined as
$$N_r(B)_* X = \bigcup_{x : [x]_B \subseteq X} [x]_B, \qquad N_r(B)^* X = \bigcup_{x : [x]_B \cap X \neq \emptyset} [x]_B,$$
$$BN_{N_r(B)}(X) = N_r(B)^* X \setminus N_r(B)_* X.$$
Objects $x$ and $x'$ are minimally near each other if there exists $\phi_i \in B$ such that $\phi_i(x) = \phi_i(x')$. A set $X$ is said to be near to a set $X'$ if there exist $x \in X$ and $x' \in X'$ such that $x$ and $x'$ are near objects. A set $X$ is termed a near set relative to a chosen family of neighbourhoods $N_r(B)$ if $|BN_{N_r(B)}(X)| \geq 0$.
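The nearness conditions above can be illustrated with a small sketch. The objects, probe functions, and the exact-match condition $\phi_i(x) = \phi_i(x')$ used below are illustrative assumptions, not the paper's data:

```python
# Sketch of object nearness and set nearness via probe functions.

def near_objects(x, y, probes):
    """x and y are near if at least one probe function gives them equal values."""
    return any(phi(x) == phi(y) for phi in probes)

def near_sets(X, Y, probes):
    """X is near to Y if some x in X and y in Y are near objects."""
    return any(near_objects(x, y, probes) for x in X for y in Y)

colour = {"a": 1, "b": 1, "c": 2}.get   # hypothetical probe functions
size   = {"a": 3, "b": 4, "c": 4}.get
probes = [colour, size]

print(near_objects("a", "b", probes))   # True:  same colour value
print(near_sets({"a"}, {"c"}, probes))  # False: no feature value shared
print(near_sets({"b"}, {"c"}, probes))  # True:  same size value
```

A single matching feature value between any pair of elements already makes the two sets near, which is what makes the notion much weaker than equality of descriptions.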
3. Approach to Near Set Theory
In this section we aim to introduce a generalized approach to near sets by using new neighbourhoods, and we deduce modifications of some related concepts.
Definition 3.1. Let $\phi_i$, $i = 1, \ldots, n$, be probe functions on a nonempty set $X$. A general neighbourhood of an element $x \in X$ is
$$(\phi_i)_\varepsilon(x) = \{y \in X : |\phi_i(x) - \phi_i(y)| < \varepsilon\},$$
where $|\cdot|$ is the absolute value and $\varepsilon$ is the length of a neighbourhood with respect to the feature $\phi_i$.
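Read this way, a general neighbourhood is straightforward to compute. A minimal sketch, assuming numeric feature values and the strict inequality of Definition 3.1 (the sample universe and feature values are hypothetical):

```python
# Sketch of a general epsilon-neighbourhood under one probe function phi.

def general_neighbourhood(x, universe, phi, eps):
    """All y in the universe whose phi-value lies within length eps of phi(x)."""
    return {y for y in universe if abs(phi(x) - phi(y)) < eps}

universe = {"a", "b", "c", "d"}
phi = {"a": 0.1, "b": 0.2, "c": 0.6, "d": 0.9}.get  # hypothetical feature values

print(sorted(general_neighbourhood("a", universe, phi, 0.2)))   # ['a', 'b']
print(sorted(general_neighbourhood("c", universe, phi, 0.45)))  # ['b', 'c', 'd']
```

Note that the length $\varepsilon$ controls the granularity: enlarging it merges more objects into one neighbourhood, which is what later drives the comparison of accuracy measures.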
Definition 3.3. Let $R$ be a general relation on a nonempty set $X$. Then we can deduce a special neighbourhood of an element $x \in X$ as follows.
Remark 3.4. Let $R$ be a general relation on a nonempty set $X$. The special neighbourhood of an element with respect to two features is defined as the intersection of the two single-feature special neighbourhoods; consequently, the special neighbourhood with respect to any finite number of features is the intersection of the corresponding single-feature neighbourhoods.
Definition 3.5. Let $\phi_i$, $i = 1, \ldots, n$, be probe functions defined on a nonempty set $X$. The family of special neighbourhoods with respect to one feature is defined as the collection of the special neighbourhoods of all elements of $X$.
Remark 3.6. The family of neighbourhoods with respect to two features is defined analogously; consequently, so is the family with respect to any finite number of features.
Definition 3.7. Let $\phi_i$, $i = 1, \ldots, n$, be probe functions representing features of $x, y \in X$. Objects $x$ and $y$ are minimally near each other if there exists $\phi_i$ such that $y \in (\phi_i)_\varepsilon(x)$, where $\varepsilon$ is the length of a general neighbourhood defined in Definition 3.1 with respect to the feature $\phi_i$.
Definition 3.8. Let $A, B \subseteq X$. The set $A$ is said to be minimally near to $B$ if there exist $x \in A$ and $y \in B$ such that $x$ and $y$ are minimally near each other.
Remark 3.9. We can determine a degree of nearness between two sets $A$ and $B$ as follows.
Theorem 3.10. Let be probe functions representing features of . Then is near to if , where , .
Theorem 3.11. Any subset of is near to .
Every set is a near set (near to itself) as every element is near to itself.
Definition 3.12. Let $\phi_i$, $i = 1, \ldots, n$, be probe functions on a nonempty set $X$. The lower and upper approximations of any subset $A \subseteq X$ by using the special neighbourhood are defined as follows; consequently, the boundary region is the difference between the upper and the lower approximation.
Definition 3.13. Let $\phi_i$, $i = 1, \ldots, n$, be probe functions on a nonempty set $X$. The accuracy measure of any subset $A \subseteq X$ by using the special neighbourhood with respect to the features is defined as follows.
Remark 3.14. The accuracy measure takes values in $[0,1]$ and measures the degree of exactness of any subset $A \subseteq X$. If it equals 1, then $A$ is an exact set with respect to the features.
Definition 3.15. Let $\phi_i$, $i = 1, \ldots, n$, be probe functions on a nonempty set $X$. The new generalized lower rough coverage of any subset $Y$ of the family of special neighbourhoods is defined as follows; if the lower approximation is empty, the coverage is taken to be 1.
Remark 3.16. The lower rough coverage measures the degree to which the subset covers the sure region (the acceptable objects).
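As a sketch of how such a coverage degree can be computed, assume the measure is the fraction of the lower approximation of the decision class that lies inside the given subset, with the empty case taken as 1 as in Definition 3.15 (the names and toy data below are our own assumptions):

```python
# Sketch of a lower rough coverage degree over a decision class D.

def lower_coverage(Y, lower_D):
    """|Y ∩ lower(D)| / |lower(D)|, with the empty case defined as 1."""
    if not lower_D:
        return 1.0
    return len(Y & lower_D) / len(lower_D)

lower_D = {1, 2, 3, 4}                           # lower approximation of decision class D
print(lower_coverage({1, 2}, lower_D))           # 0.5: half of the sure region is covered
print(lower_coverage({1, 2, 3, 4, 5}, lower_D))  # 1.0: the sure region is fully covered
print(lower_coverage({9}, set()))                # 1.0: empty sure region, by convention
```

This is the quantity used later in the medical application to grade how strongly a group of patients exhibits the decision class.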
4. Modification of Our Approach to Near Sets
In this section, we introduce a modification of the approach of Section 3 and deduce generalizations of some concepts. Finally, we show that the modified approach of this section is the best of the approaches considered.
Definition 4.1. Let $\phi_i$, $i = 1, \ldots, n$, be probe functions on a nonempty set $X$. The modified near lower, upper, and boundary approximations of any subset $A \subseteq X$ are defined as follows.
Definition 4.2. Let $\phi_i$, $i = 1, \ldots, n$, be probe functions on a nonempty set $X$. The new accuracy measure of any subset $A \subseteq X$ is defined as follows.
Theorem 4.3. Let , then
(1) is near to and ;
(2) is near to and ;
(3) is near to ;
(4) is near to .
Remark 4.4. A set is called a near set if .
Definition 4.5. Let $\phi_i$, $i = 1, \ldots, n$, be probe functions on a nonempty set $X$. The new generalized lower rough coverage of any subset $Y$ of the family of special neighbourhoods is defined as follows; if the lower approximation is empty, the coverage is taken to be 1.
Now, we give an example to explain these definitions.
Example 4.6. Let $\phi_1, \phi_2, \phi_3$ be three features defined on a nonempty set $X$ as in Table 1.
If the length of the neighbourhood of the feature $\phi_1$ (resp., $\phi_2$ and $\phi_3$) equals 0.2 (resp., 0.9 and 0.5), then the special neighbourhoods of the elements of $X$, and hence the lower and upper approximations and the accuracy measures, can be computed for each choice of features.
Theorem 4.7. Every rough set is a near set but not every near set is a rough set.
Proof. There are two cases to consider:
(1) Given a set that has been approximated with a nonempty boundary; this means the set is a rough set as well as a near set.
(2) Given a set that has been approximated with an empty boundary; this means the set is a near set but not a rough set.
The following example proves Theorem 4.7.
Example 4.8. From Example 4.6, the set considered is a near set in each case, but it is not a rough set with respect to three features by using the approximations introduced by Peters, with respect to two features by using our approach defined in Section 3, or with respect to only one feature by using our modified approach defined in Section 4.
The following example gives a comparison between the classical and the new general near approaches by using their accuracy measures.
From Table 2, we note that when we use our generalized set approximations of near sets with respect to one feature, many subsets become exact sets. Also, with respect to two features, all subsets become completely exact. Consequently, we consider our approximations a starting point for real-life applications in many fields of science.
5. Medical Application
If we consider that the features in Example 4.6 represent measurements for a certain disease and that the set of objects consists of patients, then for any group of patients we can determine the degree of this disease by using the lower rough coverage based on the decision class, as in the following examples.
Example 5.1. In Example 4.6, given the decision class and the following groups of patients, we get the following results.
So these sets cover the acceptable objects to the degrees given in Table 3, where
Remark 5.2. If we want to determine the degree to which the lower approximation of the decision class covers a given set, then we use the following formulas:
From this table, we can say that our modified approach is better than the classical approach of near set theory, since our lower approximations enlarge the set of acceptable objects.
For example, when we used the classical approximations, the group has no disease with respect to one feature and has this disease with a ratio of 50% with respect to three features, unless the group is itself the decision class of this disease.
But when we used our modified set approximations with respect to two or three features, we find that the degree of this disease in the group is 100%.
In this paper, we used a special neighbourhood to introduce a generalization of the traditional set approximations. In addition, we introduced a modification of our special approach to near sets. Our approaches are mathematical tools that modify the traditional approximations, and the suggested near approximations open a way for constructing new types of lower and upper approximations.
Z. Pawlak, Rough Sets: Theoretical Aspects of Reasoning about Data, System Theory, Knowledge Engineering and Problem Solving, vol. 9, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1991.
R. Ariew, D. Garber, and G. W. Leibniz, Eds., Philosophical Essays, Hackett, Indianapolis, Ind, USA, 1989.
J. F. Peters, “Classification of objects by means of features,” in Proceedings of the IEEE Symposium Series on Foundations of Computational Intelligence (IEEE SCCI '07), pp. 1–8, Honolulu, Hawaii, USA, 2007.
J. F. Peters, A. Skowron, and J. Stepaniuk, “Nearness in approximation spaces,” in Proceedings of Concurrency, Specification & Programming (CSP '06), G. Lindemann, H. Schlingloff et al., Eds., Informatik-Berichte Nr. 206, pp. 434–445, Humboldt-Universität zu Berlin, 2006.
Z. Pawlak and A. Skowron, “Rough membership functions,” in Advances in the Dempster-Shafer Theory of Evidence, R. Yager, M. Fedrizzi, and J. Kacprzyk, Eds., pp. 251–271, John Wiley & Sons, New York, NY, USA, 1994.
M. Pavel, Fundamentals of Pattern Recognition, vol. 174, Marcel Dekker, New York, NY, USA, 2nd edition, 1993.