Abstract

A fuzzy classification model is studied in the paper. It is based on the contaminated (robust) model, which produces fuzzy expected risk measures characterizing classification errors. Optimal classification parameters of the model are derived by minimizing the fuzzy expected risk. It is shown that an algorithm for computing the classification parameters is reduced to a set of standard support vector machine tasks with weighted data points. Experimental results with synthetic and real data illustrate the proposed fuzzy model.

1. Introduction

A main goal of statistical machine learning is to predict an unobserved output value $y$ based on an observed input vector $x$. A particularly important problem of statistical machine learning is the classification problem, which can be regarded as a task of classifying objects into classes in accordance with their properties or features. Many models have been constructed for solving machine learning problems in recent decades. However, a large part of these models is based on restrictive assumptions, for instance, a large amount of training data, a known type of the noise probability distribution, point-valued observations, and so forth. At the same time, real applications often cannot satisfy all or some of these assumptions for several reasons. In order to relax some of the restrictive assumptions, many methods have been proposed. One of the directions for developing such methods is fuzzy classification, which applies the main ideas of fuzzy set theory to various classification problems.

Most fuzzy classification models can conditionally be divided into three groups. Two groups suppose that there are precise observations, but fuzzy sets are used to take into account different contributions of the observations or the imprecision of results, for instance, the imprecision of separating surfaces. According to models of the first group with precise observations, a membership value or a membership function is assigned to every point from the training set in accordance with some rules [1–5]. As a result, different input points can make different contributions to the learning of the decision surface. Then the initial classification problem is solved as a weighted classification problem by means of known methods, for example, by means of the support vector machine (SVM) approach proposed by Vapnik [6]. According to models of the second group, a fuzzy separating surface, for instance, a fuzzy hyperplane in the feature space, is constructed [7, 8]. In other words, the result of the classification problem solution is a set of hyperplanes with a membership function derived from the model. The third group of models supposes that learning observations are themselves interval-valued or fuzzy-valued [9–11] due to imperfection of measurement tools or imprecision of expert information if used as data. It should be noted that methods based on fuzzy sets can be used for generalizing two-class classification to multiclass classification. In particular, Wilk and Wozniak [12] proposed a corresponding method by exploiting a fuzzy inference system.

The main difficulty of most fuzzy models is how to determine the membership functions for points from the training set. Therefore, a fuzzy classification model is proposed in the present paper which constructs the membership function on the basis of available statistical data by using an extension of the well-known $\varepsilon$-contamination neighborhood or $\varepsilon$-contaminated (robust) models [13]. The main ideas underlying the proposed model are the following.

The empirical probability distribution accepted for deriving the empirical expected risk (a measure of the classification error) and used in the standard SVM is replaced by a set of probability distributions, which is produced by applying the $\varepsilon$-contaminated model. Here it is assumed that the contaminating probability distribution can be arbitrary. As a result, we could construct an imprecise classification model for a predefined value $\varepsilon$. However, the main difficulty of using the $\varepsilon$-contaminated model is that the value $\varepsilon$ is unknown, and there are no rules or methods for its determination. Therefore, the next idea is to use all values $\varepsilon \in [0, 1]$ and to construct a model which covers all of them.

Note that the set of probability distributions produced on the basis of the $\varepsilon$-contaminated model for a fixed $\varepsilon$ is convex. This implies that the expected risk as an expectation of a loss function lies in an interval with some lower and upper bounds. Moreover, we get a set of nested intervals for different values of $\varepsilon$ ranging from 0 to 1, which can be regarded as a fuzzy set. This implies that we obtain the fuzzy expected risk parametrized by the classification parameters or by the parameters of a separating surface. The next task is to find the classification parameters which minimize the fuzzy expected risk. This task is solved in the framework of the SVM by taking a special ranking index for ranking the fuzzy expected risk measures. The fuzziness of the expected risk measure strongly depends on the number of points in the training set. It is assumed that the fuzziness increases as the number of data points decreases.

It can be seen from the previous idea that the proposed model encompasses peculiarities of the first and the second group of models.

In order to simplify the description of the proposed classification model, we consider the one-class classification model, also known as the novelty detection model [14–17]. Moreover, we restrict ourselves to studying a model proposed by Schölkopf et al. [16, 18]. Nevertheless, this model can easily be extended to the case of binary classification. The aim of the paper is to show the main principles for constructing the fuzzy classification model.

The paper is organized as follows. Section 2 presents the standard one-class classification problem proposed in [16, 18]. The $\varepsilon$-contaminated robust model and its peculiarities are considered in Section 3. The set of probability distributions produced by the model and the fuzzy expected risk measure are studied in the same section. The SVM approach for computing the optimal classification parameters is provided in Section 4. In this section, one of the possible ways for decomposing the quadratic programming problem is considered. Numerical experiments with synthetic and some real data illustrating the accuracy of the proposed model are provided in Section 5. In Section 6, concluding remarks are given.

2. One-Class Classification

Suppose we have unlabeled training data $x_1, \dots, x_n \in \mathcal{X}$, where $n$ is the number of observations and $\mathcal{X}$ is some set; for instance, it is a compact subset of $\mathbb{R}^m$. According to the papers [16, 18], the well-known one-class classification (novelty detection) model aims to construct a function $f$ which takes the value $+1$ in a "small" region capturing most of the data points and $-1$ elsewhere. It can be done by mapping the data into the feature space corresponding to a kernel and by separating them from the origin with maximum margin.

Let $\Phi : \mathcal{X} \to \mathcal{H}$ be a feature map such that the data points are mapped into an alternative higher-dimensional feature space $\mathcal{H}$. In other words, this is a map into an inner product space such that the inner product in the image of $\Phi$ can be computed by evaluating some simple kernel $K(x, y) = \langle \Phi(x), \Phi(y) \rangle$, such as the Gaussian kernel

$K(x, y) = \exp\left( -\frac{\|x - y\|^2}{\sigma^2} \right),$

where $\sigma$ is the kernel parameter determining the geometrical structure of the mapped samples in the kernel space.
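As a small illustration, the kernel matrix can be computed in a few lines of base R; the parametrization with $\sigma^2$ in the denominator matches the formula above and is the only assumption made here.

# Gaussian (RBF) kernel matrix for an n x m data matrix X;
# sigma is the kernel width parameter from the formula above.
gaussian_kernel <- function(X, sigma = 1) {
  d2 <- as.matrix(dist(X))^2   # squared Euclidean distances between rows
  exp(-d2 / sigma^2)
}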

The aim of classification is to find a hyperplane that separates the data from the origin with maximal margin. We use the parameter $\nu \in (0, 1]$, which is analogous to the parameter used for the $\nu$-SVM [19]. It bounds the fraction of input data for which $f(x_i) = -1$.

To find the optimal parameters $w$ and $\rho$, the following quadratic program has to be solved:

$\min_{w \in \mathcal{H},\ \xi \in \mathbb{R}^n,\ \rho \in \mathbb{R}} \quad \frac{1}{2}\|w\|^2 + \frac{1}{\nu n} \sum_{i=1}^{n} \xi_i - \rho$

subject to

$\langle w, \Phi(x_i) \rangle \ge \rho - \xi_i, \quad \xi_i \ge 0, \quad i = 1, \dots, n.$

Slack variables $\xi_i$ are used to allow points to violate the margin constraints. Using multipliers $\alpha_i, \beta_i \ge 0$, we introduce a Lagrangian:

$L(w, \xi, \rho, \alpha, \beta) = \frac{1}{2}\|w\|^2 + \frac{1}{\nu n} \sum_{i=1}^{n} \xi_i - \rho - \sum_{i=1}^{n} \alpha_i \left( \langle w, \Phi(x_i) \rangle - \rho + \xi_i \right) - \sum_{i=1}^{n} \beta_i \xi_i.$

It is shown in [19] that the dual problem is of the form

$\min_{\alpha} \quad \frac{1}{2} \sum_{i, j = 1}^{n} \alpha_i \alpha_j K(x_i, x_j)$

subject to

$0 \le \alpha_i \le \frac{1}{\nu n}, \quad \sum_{i=1}^{n} \alpha_i = 1.$

The value of $\rho$ can be obtained as

$\rho = \sum_{j=1}^{n} \alpha_j K(x_j, x_i)$

for any $i$ such that $0 < \alpha_i < 1/(\nu n)$.

After substituting the obtained solution into the expression for the decision function $f$, we get

$f(x) = \operatorname{sgn}\left( \sum_{i=1}^{n} \alpha_i K(x_i, x) - \rho \right).$
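For concreteness, the dual problem above can be sketched in R with the quadprog package and the gaussian_kernel helper; the small ridge added to the kernel matrix and the tolerances for selecting a margin support vector are implementation choices, not part of the model.

library(quadprog)

# One-class SVM dual: min 0.5 * a' K a  s.t.  sum(a) = 1, 0 <= a_i <= 1/(nu*n)
one_class_svm <- function(X, nu = 0.5, sigma = 1) {
  n <- nrow(X)
  K <- gaussian_kernel(X, sigma) + diag(1e-8, n)  # ridge keeps K positive definite
  Amat <- cbind(rep(1, n), diag(n), -diag(n))     # equality column first, then bounds
  bvec <- c(1, rep(0, n), rep(-1 / (nu * n), n))
  sol <- solve.QP(Dmat = K, dvec = rep(0, n), Amat = Amat, bvec = bvec, meq = 1)
  alpha <- sol$solution
  # rho is recovered from a support vector strictly inside the box constraints
  i <- which(alpha > 1e-6 & alpha < 1 / (nu * n) - 1e-6)[1]
  list(alpha = alpha, rho = sum(alpha * K[, i]))
}

A new point $x$ is then classified by the sign of $\sum_i \alpha_i K(x_i, x) - \rho$.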

3. Robust Models and Fuzzy Expected Losses

3.1. Robust Models and the Expected Risk

Robust models have been exploited in classification problems due to the opportunity to avoid some strong assumptions underlying the standard classification models. As pointed out by Xu et al. [20], the use of robust optimization in classification is not new. There are many published results providing various robust classification and regression models (see, e.g., [21–24]) in which box-type uncertainty sets are considered. One of the popular robust classification models is based on the assumption that inputs are subject to an additive noise, and every data point is only known to belong to the interior of a Euclidean ball. Another class of robust models is based on relaxing strong assumptions about the probability distribution of data points (see, e.g., [25]).

We consider a model which can partially be regarded as a special case of these models and is based on the framework of $\varepsilon$-contaminated (robust) models [13]. They are constructed by eliciting a Bayesian prior distribution $P$ as an estimate of the true prior distribution. The $\varepsilon$-contaminated model is a class of probabilities which, for fixed $\varepsilon \in [0, 1]$ and $P$, is the set $\mathcal{M}(\varepsilon, P) = \{ (1 - \varepsilon) P + \varepsilon Q \}$, where $Q$ is an arbitrary probability distribution. The rate $\varepsilon$ reflects the amount of uncertainty in $P$ [26]. In other words, we take an arbitrary probability distribution $Q$ from the unit simplex denoted by $\Delta$. According to these models, for fixed $\varepsilon$, $\mathcal{M}(\varepsilon, P)$ is the set of all probability distributions $\pi = (\pi_1, \dots, \pi_n)$ whose components have the lower bound $(1 - \varepsilon) p_i$ and the upper bound $(1 - \varepsilon) p_i + \varepsilon$. Of course, the assumption that $Q$ is restricted by the unit simplex is only one of the possible types of $\varepsilon$-contaminated models. Generally, there are a lot of different assumptions which produce specific robust models.
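A minimal sketch in R makes the structure of this set explicit; the function below simply evaluates the interval bounds for the point probabilities.

# Interval bounds for the point probabilities in the contaminated set
# M(eps, P) = {(1 - eps) * p + eps * q : q in the unit simplex}.
contamination_bounds <- function(p, eps) {
  data.frame(lower = (1 - eps) * p,
             upper = (1 - eps) * p + eps)
}

# Example: the empirical distribution over n = 5 points with eps = 0.2
contamination_bounds(rep(1 / 5, 5), eps = 0.2)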

Let us rewrite the problem in the general form of minimizing the expected risk [6]:

$R(w, \rho) = \int_{\mathcal{X}} L(x, w, \rho) \, \mathrm{d}P(x).$

Here the loss function $L$ is the hinge loss function, which is represented as

$L(x, w, \rho) = \max \left\{ 0, \rho - \langle w, \Phi(x) \rangle \right\}.$

The standard SVM technique is to assume that $P$ is the empirical (nonparametric) probability distribution, whose use leads to the empirical expected risk

$R_{\mathrm{emp}}(w, \rho) = \frac{1}{n} \sum_{i=1}^{n} L(x_i, w, \rho).$

The assumption of the empirical probability distribution means that every point has the probability $1/n$. This is too strong an assumption when the number of points is not large, and its validity may be doubtful in this case. Therefore, in order to relax the strong condition on the probabilities of points, we apply the $\varepsilon$-contaminated model. According to the model, we replace the probability distribution $p = (1/n, \dots, 1/n)$ by the set of probability distributions $\mathcal{M}(\varepsilon, p)$. In other words, there is an unknown precise "true" probability distribution, but we do not know it and only know that it belongs to the set $\mathcal{M}(\varepsilon, p)$.

3.2. The Fuzzy Expected Risk

Let $\pi$ be a probability distribution which belongs to the set $\mathcal{M}(\varepsilon, p)$. Since the set $\mathcal{M}(\varepsilon, p)$ is convex, the values of the expected risk for every value $\varepsilon$ are restricted by some lower and upper bounds $\underline{R}(\varepsilon)$ and $\overline{R}(\varepsilon)$ such that every point of the interval of expected risk corresponds to a probability distribution from $\mathcal{M}(\varepsilon, p)$. Note that $\underline{R}(0) = \overline{R}(0) = R_{\mathrm{emp}}$ by $\varepsilon = 0$. This implies that there holds

$\underline{R}(\varepsilon) \le R_{\mathrm{emp}} \le \overline{R}(\varepsilon).$

If we consider all possible values of the parameter $\varepsilon$, then we get the set of nested intervals $[\underline{R}(\varepsilon), \overline{R}(\varepsilon)]$ with the parameter $\varepsilon$. Moreover, the expected risk interval is reduced to a point by $\varepsilon = 0$, and the largest interval takes place by $\varepsilon = 1$. This set of intervals can be viewed as a fuzzy set of the expected risk with a membership function defined by the nesting of the intervals. A similar idea to obtain fuzzy sets by means of the contamination robust model has been mentioned by Utkin and Zhuk [27].

The upper bound for the expected risk by fixed $\varepsilon$ can be found as a solution to the following programming problem:

$\overline{R}(\varepsilon) = \max_{\pi \in \mathcal{M}(\varepsilon, p)} \sum_{i=1}^{n} \pi_i L(x_i, w, \rho)$

subject to

$\pi_i = (1 - \varepsilon) \frac{1}{n} + \varepsilon q_i, \quad i = 1, \dots, n, \quad q = (q_1, \dots, q_n) \in \Delta.$

The obtained optimization problem is linear with the optimization variables $q_1, \dots, q_n$, but its objective function depends on $w$ and $\rho$. Therefore, it cannot be directly solved by well-known methods. In order to overcome this difficulty, note, however, that all points $q$ belong to the simplex $\Delta$ in a finite-dimensional space. According to general results from linear programming theory, an optimal solution to the previously mentioned problem is achieved at extreme points of the simplex, and the number of its extreme points is $n$. Extreme points of the simplex are of the form

$(1, 0, \dots, 0),\ (0, 1, 0, \dots, 0),\ \dots,\ (0, \dots, 0, 1).$

This implies that there holds

$\overline{R}(\varepsilon) = (1 - \varepsilon) \frac{1}{n} \sum_{i=1}^{n} L(x_i, w, \rho) + \varepsilon \max_{i = 1, \dots, n} L(x_i, w, \rho).$

The same can be written for the lower bound:

$\underline{R}(\varepsilon) = (1 - \varepsilon) \frac{1}{n} \sum_{i=1}^{n} L(x_i, w, \rho) + \varepsilon \min_{i = 1, \dots, n} L(x_i, w, \rho).$
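Both bounds are immediate to compute from the vector of point losses, as the following R sketch shows; the loss values in the example call are arbitrary illustrative numbers.

# Lower and upper bounds of the expected risk under the eps-contaminated
# empirical distribution, computed from the point losses L_i.
risk_bounds <- function(losses, eps) {
  emp <- mean(losses)
  c(lower = (1 - eps) * emp + eps * min(losses),
    upper = (1 - eps) * emp + eps * max(losses))
}

# Nested intervals for several values of eps (one column per eps)
sapply(c(0, 0.25, 0.5, 1), function(e) risk_bounds(c(0, 0.3, 1.2, 0.7), e))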

3.3. The Fuzzy Decision Problem and a Way for Its Solving

Now we can write a new criterion of decision making about the optimal parameters $(w, \rho)$. The parameters $(w^*, \rho^*)$ are optimal iff, for all $(w, \rho)$, the fuzzy expected risk $\widetilde{R}(w^*, \rho^*)$ does not exceed $\widetilde{R}(w, \rho)$. The next question is how to compare the fuzzy sets. This is one of the most controversial questions in the fuzzy literature. Most ranking methods are based on transforming a fuzzy set into a real number called the ranking index. Here we use the index proposed in [28], which can be written in terms of the considered decision problem as

$I_\beta(\widetilde{R}(w, \rho)) = \int_0^1 \left( \beta \overline{R}_\alpha + (1 - \beta) \underline{R}_\alpha \right) \mathrm{d}\alpha,$

where $[\underline{R}_\alpha, \overline{R}_\alpha]$ is the $\alpha$-cut of $\widetilde{R}(w, \rho)$. Here $\beta \in [0, 1]$ is a parameter of pessimism. It can be regarded as a caution parameter proposed in [29]. The caution parameter reflects the degree of ambiguity aversion. The more ambiguity averse the decision maker is, the higher is the influence of the lower interval limit of the generalized expected utility, that is, of the upper bound of the risk. The value $\beta = 1$ corresponds to strict ambiguity aversion; $\beta = 0$ expresses maximal ambiguity seeking attitudes. It is assumed here that the variable $\varepsilon$ is a function of $\alpha$, for example, $\varepsilon = 1 - \alpha$.
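Under the bounds derived in Section 3.2, the index can be evaluated by one-dimensional numerical integration; the sketch below relies on the risk_bounds helper above and base R's integrate, and the default choice eps_of_alpha is just the example mapping $\varepsilon = 1 - \alpha$ mentioned above.

# Ranking index: beta-weighted mixture of the alpha-cut bounds of the
# fuzzy expected risk, integrated over alpha in [0, 1].
ranking_index <- function(losses, beta = 1,
                          eps_of_alpha = function(a) 1 - a) {
  f <- function(a) {                     # integrate() passes a vector of nodes
    b <- sapply(eps_of_alpha(a), function(e) risk_bounds(losses, e))
    beta * b["upper", ] + (1 - beta) * b["lower", ]
  }
  integrate(f, lower = 0, upper = 1)$value
}

ranking_index(c(0, 0.3, 1.2, 0.7), beta = 0.5)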

Then we write a new criterion of decision making. Parameters $(w^*, \rho^*)$ are optimal iff for all $(w, \rho)$ there holds

$I_\beta(\widetilde{R}(w^*, \rho^*)) \le I_\beta(\widetilde{R}(w, \rho)).$

It follows from the above that the optimal parameters can be obtained by solving the following optimization problem:

$(w^*, \rho^*) = \arg \min_{w, \rho} I_\beta(\widetilde{R}(w, \rho)).$

Denote $D = \int_0^1 \varepsilon(\alpha) \, \mathrm{d}\alpha$. The optimal expected risk is now of the form

$I_\beta(\widetilde{R}(w, \rho)) = (1 - D) R_{\mathrm{emp}}(w, \rho) + D \left( \beta \max_{i} L_i + (1 - \beta) \min_{i} L_i \right),$

where $L_i = L(x_i, w, \rho)$.

We can see that the previous objective function consists of two main parts. The first part is the modified empirical expected risk. The second part can be regarded as the Hurwicz criterion with optimism parameter $1 - \beta$, which is exploited in decision problems when we have no information about the states of nature. If $D$ takes values from 0 to 1, then the objective function is a convex combination of the Hurwicz criterion and the expected risk under the condition that the probabilities of states of nature are identical and equal to $1/n$. Moreover, the objective function can also be regarded as the Hodges-Lehmann criterion [30] by $\beta = 1$. This means that we are not sure that the probability of every point is $1/n$, and this disbelief is compensated by the more guaranteed Hurwicz approach with a coefficient depending on the function $\varepsilon(\alpha)$. It is interesting that the fuzzy classification problem has been transformed into a standard decision problem comprising several decision criteria.

Before solving the optimization problem for computing the parameters $(w, \rho)$, we have to define the function $\varepsilon(\alpha)$ which determines the membership function of the fuzzy expected losses. As we have pointed out, the simplest way is to assume that $\varepsilon = 1 - \alpha$. Then we get

$D = \int_0^1 (1 - \alpha) \, \mathrm{d}\alpha = \frac{1}{2}.$
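This value is easy to check numerically in base R:

# D for the simplest choice eps(alpha) = 1 - alpha; the result is 0.5
integrate(function(a) 1 - a, lower = 0, upper = 1)$value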

However, this way does not take into account the possible dependence of the fuzzy set (its fuzziness) on the number $n$ of points in the training set. We assume that the fuzziness has to increase as $n$ decreases.

Let us consider the meaning of $D$ in the objective function (22). If $D = 1$, then the optimal parameters are defined only by two single points maximizing and minimizing the loss $L_i$. This is an extreme case when we suppose that the empirical probability distribution is totally wrong. This takes place when we have a small number of points in the training set. Another extreme case is $D = 0$. This case corresponds to the standard approach in classification based on the empirical expected risk. It was shown by Vapnik [6] that the second case can be applied for large numbers of observations. Hence, we can state that $D \to 0$ by $n \to \infty$ and $D$ is close to 1 by small values of $n$.

One of the possible functions satisfying the above conditions for $D$ is, for instance, $D(n) = 1/n$ or $D(n) = \exp(1 - n)$. Hence

$I_\beta(\widetilde{R}(w, \rho)) = (1 - D(n)) \frac{1}{n} \sum_{i=1}^{n} L_i + D(n) \left( \beta \max_{i} L_i + (1 - \beta) \min_{i} L_i \right).$

The next task is to minimize the function $I_\beta(\widetilde{R}(w, \rho))$ over the parameters $w$ and $\rho$. This task will be solved in the framework of the SVM.

4. The SVM Approach

In order to use the SVM approach to the fuzzy classification problem, we use the hinge loss function (10). Let us add the standard Tikhonov regularization term $\frac{1}{2}\|w\|^2$ (this is the most popular penalty or smoothness term) [31] to the objective function (22). The smoothness (Tikhonov) term can be regarded as a constraint which enforces uniqueness by penalizing functions with wild oscillation and effectively restricting the space of admissible solutions (we refer to [32] for a detailed analysis of regularization methods). Moreover, we introduce the optimization variables

$\xi_i \ge L(x_i, w, \rho), \quad i = 1, \dots, n.$

This leads to the quadratic programming problem

$\min_{w, \xi, \rho} \quad \frac{1}{2}\|w\|^2 - \rho + \frac{1}{\nu} \left( (1 - D) \frac{1}{n} \sum_{i=1}^{n} \xi_i + D \beta \max_{i} \xi_i + D (1 - \beta) \min_{i} \xi_i \right)$

subject to

$\langle w, \Phi(x_i) \rangle \ge \rho - \xi_i, \quad \xi_i \ge 0, \quad i = 1, \dots, n.$

If we suppose that the minimum of the slack variables is attained at a fixed point $k$, that is, $\min_i \xi_i = \xi_k$, then the objective function can be decomposed into $n$ objective functions of the form

$F_k = \frac{1}{2}\|w\|^2 - \rho + \frac{1}{\nu} \left( (1 - D) \frac{1}{n} \sum_{i=1}^{n} \xi_i + D \beta \max_{i} \xi_i + D (1 - \beta) \xi_k \right).$

The optimal parameters $w$ and $\rho$ correspond to the smallest value of $F_k$, $k = 1, \dots, n$.
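The resulting search can be organized as a simple loop over the candidate points; in the sketch below, solve_weighted_ocsvm is a hypothetical placeholder for a routine solving the $k$-th weighted one-class SVM subproblem and returning its optimal objective value $F_k$ together with the fitted parameters.

# Schematic driver for the decomposition: one weighted one-class SVM per
# candidate point k; the fit with the smallest objective F_k is returned.
# solve_weighted_ocsvm is a hypothetical placeholder, not a library function.
fuzzy_ocsvm <- function(X, nu, sigma, D, beta) {
  n <- nrow(X)
  fits <- lapply(seq_len(n), function(k)
    solve_weighted_ocsvm(X, nu, sigma, D, beta, k))
  objs <- vapply(fits, function(fit) fit$objective, numeric(1))
  fits[[which.min(objs)]]
}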

Instead of minimizing the primal objective function, a dual objective function, the so-called Lagrangian, can be formed, of which the saddle point is the optimum. Moreover, if we minimize the primal objective function, the dual objective function has to be maximized. The Lagrangian is constructed with Lagrange multipliers $\alpha_i, \beta_i$, $i = 1, \dots, n$. Hence, the dual variables have to satisfy the positivity constraints $\alpha_i \ge 0$, $\beta_i \ge 0$ for all $i$. The saddle point can be found by setting the derivatives of the Lagrangian with respect to $w$, $\xi_i$, and $\rho$ equal to zero:

Here $I[\cdot]$ is the indicator function taking the value $1$ if its argument is true and $0$ otherwise. Using (30)–(33), we get the dual optimization problem (34) subject to the constraints (35) and (36).

The decision function $f$ can be rewritten in terms of the Lagrange multipliers as

$f(x) = \operatorname{sgn}\left( \sum_{i=1}^{n} \alpha_i K(x_i, x) - \rho \right).$

Hence, we find the optimal value of $\rho$ by taking a point $x_i$ whose margin constraint is active; that is, there holds

$\rho = \sum_{j=1}^{n} \alpha_j K(x_j, x_i).$

Let us write the Karush-Kuhn-Tucker complementarity conditions:

It follows from the second condition that the corresponding equality can hold only for a single value $k$ of the index such that $\xi_k = \min_i \xi_i$. Here we assume that the values $\xi_i$ do not coincide. Therefore, the condition is satisfied for all $i \ne k$. Returning to the constraints (35)–(36), we get the corresponding bounds for the Lagrange multipliers.

It follows from the previous constraints that the optimization problem (34)–(36) can be decomposed into $n$ problems, each being a standard SVM dual with weighted data points, solved subject to (40) for $k = 1, \dots, n$.

The optimal values of $w$ and $\rho$ correspond to the smallest value of the primal objective functions $F_k$ and to the largest value of the dual objective functions, $k = 1, \dots, n$.

So, we have $n$ simple quadratic programming problems whose solutions can be obtained by means of well-known methods and tools.

5. Experiments

We illustrate the method proposed in this paper via several examples; all computations have been performed using the statistical software R. We investigate the performance of the proposed method and compare it with the standard SVM approach by considering the accuracy (ACC), which is the proportion of correctly classified cases in a sample of data and is often used to quantify the predictive performance of classification methods. ACC is an estimate of a classifier's probability of a correct response, and it is an important statistical measure of the performance of a one-class classification test. In novelty detection, ACC combines two accuracy measures: the normal accuracy rate, which measures how well the algorithm recognizes new examples of the known class, and the novelty accuracy rate, which does the same for examples of an unknown novel class. ACC can formally be written as

$\mathrm{ACC} = \frac{1}{N} \sum_{i=1}^{N} I\left[ f(x_i) = y_i \right],$

where $y_i$ is the label of the $i$th test example $x_i$ and $N$ is the number of test examples.
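In R, these quantities reduce to simple proportions; the sketch below assumes that both predictions and true labels are coded as +1 (normal) and -1 (novel), and reports the overall ACC together with the two rates.

# Accuracy measures for novelty detection; pred and truth are vectors of
# labels coded +1 (normal) and -1 (novel/abnormal).
acc_measures <- function(pred, truth) {
  c(ACC     = mean(pred == truth),
    normal  = mean(pred[truth ==  1] ==  1),   # normal accuracy rate
    novelty = mean(pred[truth == -1] == -1))   # novelty accuracy rate
}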

We will denote the accuracy measure for the proposed model as ACCfuzzy and for the standard SVM as ACCst.

All the experiments use the standard Gaussian radial basis function (GRBF) kernel with the kernel parameter $\sigma$. Different values of the parameter $\sigma$ have been tested, choosing those leading to the best results.

We consider the performance of our method with synthetic data having two features, $x_1$ and $x_2$. The training set, consisting of two subsets, is generated in accordance with normal probability distributions such that $n(1 - r)$ examples (the first subset) are generated with one pair of mean values and $nr$ examples (the second subset) have another pair of mean values. The standard deviation is the same for both subsets and both features. Here $r$ is the portion of abnormal examples in the training set. The values of the means, the standard deviation, and the parameters $\nu$ and $\beta$ are fixed for all experiments.
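A generator for such a training set is a few lines of R; the concrete means, standard deviation, and contamination ratio below are illustrative values only, not the ones used in the reported experiments.

# Synthetic training set: a "normal" bivariate normal subset contaminated
# by an "abnormal" one; r is the portion of abnormal examples.
make_synthetic <- function(n = 20, r = 0.2,
                           mu_normal = c(0, 0), mu_abnormal = c(3, 3),
                           sd0 = 1) {
  n_ab <- round(n * r)
  n_no <- n - n_ab
  X <- rbind(
    cbind(rnorm(n_no, mu_normal[1], sd0),   rnorm(n_no, mu_normal[2], sd0)),
    cbind(rnorm(n_ab, mu_abnormal[1], sd0), rnorm(n_ab, mu_abnormal[2], sd0)))
  list(X = X, y = c(rep(1, n_no), rep(-1, n_ab)))
}

set.seed(1)
train <- make_synthetic()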

Figure 1 illustrates how the contours depend on the parameter $\beta$ for the fuzzy model (thick curve) and for the standard SVM (dashed curve) for a very small number of points in the training set. The parameter $\beta$ takes three values, one per panel. One can see from the pictures that the region bounded by the "fuzzy" contour decreases together with $\beta$. The value $\beta = 1$ provides the cautious strategy of decision making. It can be seen from the first picture that the region bounded by the "fuzzy" contour is larger than the same region corresponding to the standard model. The regions almost coincide for the optimistic strategy when $\beta = 0$ (see Figure 1(c)). Figures 2 and 3 illustrate similar dependencies for larger numbers of training points.

The accuracy measure for different conditions is shown in Table 1. It can be seen from the results given in the table that the accuracy measure of the fuzzy model is larger than the same measure of the standard SVM. Of course, this relation takes place when $n$ is rather small and the contamination of the normal data has a large ratio.

As a further example, we applied all analyzed models to the well-known "Iris" data set from the UCI Machine Learning Repository [33]. The data set contains three classes (Iris Setosa, Iris Versicolour, Iris Virginica) of 50 instances each. The number of features is four (sepal length in cm, sepal width in cm, petal length in cm, petal width in cm). We suppose that examples from the Iris Setosa class are abnormal. For the experiment, we randomly select $n$ points such that $n(1 - r)$ points are taken from the set of positively labelled examples and $nr$ points are from negatively labelled examples. The remaining parameters for modelling are fixed. It is investigated how the accuracy measures depend on the amount of training data. In particular, for the smaller training set, ACCfuzzy exceeds ACCst, and the advantage diminishes as $n$ grows. One can see that the fuzzy approach provides better numerical results in comparison with the standard approach when the number of examples is rather small.

6. Conclusion

In this paper, a fuzzy one-class classification model has been proposed, which is based on applying the $\varepsilon$-contaminated model. The algorithm for computing the optimal parameters of classification is reduced to a finite number of standard SVM tasks with weighted data points, where the weights are assigned in accordance with a predefined rule derived from the comparison of fuzzy expected risk measures. It is easy to implement with standard functions of the statistical software package R.

Experimental results with synthetic data and with the well-known "Iris" data set from the UCI Machine Learning Repository have illustrated that the proposed fuzzy model outperforms the standard approach for a small number of observations. Due to the proposed fuzzy model, we do not need to assign a certain value to the contamination parameter $\varepsilon$. However, we have to define the function $\varepsilon(\alpha)$ (or, equivalently, $D(n)$) and the parameter $\beta$.

It should be noted that the proposed model can easily be extended to the case of binary or multiclass classification. This is a direction for future work.

We have investigated only one fuzzy ranking index for comparing fuzzy numbers. However, there are many efficient indices whose application to classification problems in the framework of the proposed model could give better classification accuracy and better models. This is another direction for future work.

Acknowledgment

The author would like to express his appreciation to the anonymous referees and the Editor of this Journal whose valuable comments have improved the paper.