Advances in Operations Research


Research Article | Open Access

Volume 2009 | Article ID 252989 | 34 pages

Mathematical Programming Approaches to Classification Problems

Academic Editor: Mahyar Amouzegar
Received: 22 Mar 2009
Revised: 31 Aug 2009
Accepted: 19 Oct 2009
Published: 03 Feb 2010


Discriminant Analysis (DA) is widely applied in many fields. Recent research has shown that standard DA assumptions, such as a normal distribution of the data and equality of the variance-covariance matrices, are not always satisfied. Mathematical Programming (MP) approaches have frequently been used in DA and can be considered a valuable alternative to the classical DA models, as they provide more flexibility in the analysis process. The aim of this paper is to present a comparative study in which we analyze the performance of three statistical and several MP methods using linear and nonlinear discriminant functions in two-group classification problems. New classification procedures are adapted to the context of nonlinear discriminant functions. Different applications are used to compare these methods, including the Support Vector Machines- (SVMs-) based approach. The findings of this study will be useful in assisting decision-makers to choose the most appropriate model for their decision-making situation.

1. Introduction

Discriminant Analysis (DA) is widely applied in many fields such as the social sciences, finance, and marketing. The purpose of DA is to study the differences between two or more mutually exclusive groups and to classify a new observation into the appropriate group. The most popular methods used in DA are statistical. The pioneer of these methods is Fisher [1], who proposed a parametric method introducing linear discriminant functions for two-group classification problems. Somewhat later, Smith [2] introduced a quadratic discriminant function, which, along with other discriminant analyses such as logit and probit, has received a good deal of attention over the past several decades. Recent research has shown that the standard assumptions of DA, such as the normality of the data distribution and the equality of the variance-covariance matrices, are not always verified. The MP approach has also been widely used in DA and can be considered a valuable alternative to the classical models of DA. The aim of these MP models is either to minimize the violations (the distances between the misclassified observations and the cutoff value) or to minimize the number of misclassified observations. They require no assumptions about the population's distribution and provide more flexibility for the analysis by introducing new constraints, such as normalization constraints, or by weighting the deviations in the objective function, with higher weights for the deviations of misclassified observations and lower weights for those of correctly classified observations. However, special difficulties and even anomalous results restrict the performance of these MP models [3]. These difficulties may be classified under the headings of "degeneracy" and "stability" [4, 5]. Solutions are classed as degenerate if the analysis presents unbounded solutions in which improvement of the objective function is unconstrained.
Similarly, the results are classed as unstable if, for example, they depend on the position of the data relative to the origin. A solution is deemed unacceptable when all of the coefficients of the discriminant function equal zero, leading to all of the observations being classified in the same group [6, 7]. To overcome these problems, different normalization constraints have been identified and variants of MP formulations for classification problems have been proposed [4, 8-11].

For any given discriminant problem, the choice of an appropriate method for analyzing the data is not always an easy task. Several comparative studies using both statistical and MP approaches have been carried out on real data [12-14], and most of them use linear discriminant functions. Recently, new MP formulations have been developed based on nonlinear functions, which may produce better classification performance than can be obtained from a linear classifier. Nonlinear discriminant functions can be generated from MP methods by transforming the variables [15], by forming dichotomous categorical variables from the original variables [16], by using piecewise-linear functions [17] or kernel transformations that attempt to render the data linearly separable, or by using multihyperplanes formulations [18].

The aim of this paper is, thus, to conduct a comparative study in which we analyze the performance of three statistical methods: (1) the Linear Discriminant Function (LDF), (2) the Logistic function (LG), and (3) the Quadratic Discriminant Function (QDF), along with five MP methods based on linear discriminant functions: the MSD model, the Ragsdale and Stam [19] (RS) model, the model of Lam et al. [12] (LPM), the Lam and Moy [10] combined model (MC), and the MCA model [20]. These methods will be compared to the second-order MSD model [15], the popular SVM-based approach, the piecewise-linear models, and the multihyperplanes models. New classification procedures adapted to the latter models, which are based on nonlinear discriminant functions, will be proposed. Different applications in the financial and medical domains are used to compare the different models. We will examine the conditions under which these various approaches give similar or different results.

In this paper, we report the results of the different approaches cited above. The paper is organized as follows: first, we discuss the standard MP discriminant models, followed by a presentation of MP discriminant models based on nonlinear functions. Then, we develop new classification models based on piecewise-nonlinear functions and hypersurfaces. Next, we present the datasets used in the analysis process. Finally, we compare the performance of the classical models and the different MP models, including the SVM-based approach, and draw our conclusions.

2. The MP Methods

In general, DA is applied to two or more groups. In this paper, we discuss the case of discrimination with two groups. Consider a classification problem with $k$ attributes. Let $X$ be an $(n \times k)$ matrix representing the attribute scores of a known sample of $n$ objects from the groups $G_h$ $(h = 1, 2)$. Hence, $x_{ij}$ is the value of the $j$th attribute for the $i$th object, $a_j$ is the weight assigned to the $j$th attribute in the linear combination which identifies the hyperplane, and $\Sigma_h$ $(k \times k)$ is the variance-covariance matrix of group $h$.

2.1. The Linear MP Models for Classification Problem

In this section, we will present seven MP formulations for the classification problem. These formulations assume that all group $G_1$ ($G_2$) cases are below (above) the cutoff score $c$. This score defines the hyperplane which allows the two groups to be separated as follows:
$$\sum_{j=1}^{k} x_{ij} a_j \le c, \quad i \in G_1, \qquad (2.1)$$
$$\sum_{j=1}^{k} x_{ij} a_j > c, \quad i \in G_2, \qquad (2.2)$$
($a_j$ and $c$ are free), with $c$ the cutoff value or threshold.

The MP models can yield unbounded or unacceptable solutions and are not invariant to a shift of origin. To remove these weaknesses, different normalization constraints have been proposed: (N1) $\sum_{j=1}^{k} a_j + c = 1$; (N2) $\sum_{j=1}^{k} a_j = c$ [4]; (N3) the normalization constant $\pm 1$, that is, $c = \pm 1$, obtained by defining binary variables $c^+$ and $c^-$ such that $c = c^+ - c^-$ with $c^+ + c^- = 1$; and (N4) the normalization for invariance under origin shift [11]. In the normalization (N4), the free variables $a_j$ are represented in terms of two nonnegative variables ($a_j^+$ and $a_j^-$) such that $a_j = a_j^+ - a_j^-$, and the absolute values of the $a_j$ $(j = 1, 2, \ldots, k)$ are constrained to sum to a constant:
$$\sum_{j=1}^{k} \left(a_j^+ + a_j^-\right) = 1. \qquad (2.3)$$
Under the normalization (N4), two binary variables $\zeta_j^+$ and $\zeta_j^-$ are introduced in the models in order to exclude the occurrence of both $a_j^+ > 0$ and $a_j^- > 0$ [11]. The definition of $\zeta_j^+$ and $\zeta_j^-$ requires the following constraints:
$$\varepsilon \zeta_j^+ \le a_j^+ \le \zeta_j^+, \quad \varepsilon \zeta_j^- \le a_j^- \le \zeta_j^-, \quad j = 1, \ldots, k, \qquad (2.4)$$
$$\zeta_j^+ + \zeta_j^- \le 1, \quad \zeta_j^+ \in \{0,1\}, \quad \zeta_j^- \in \{0,1\}, \quad a_j^+ \ge 0, \quad a_j^- \ge 0, \quad j = 1, \ldots, k. \qquad (2.5)$$
The classification rule assigns the observation $x_0$ to group $G_1$ if $\sum_{j=1}^{k} x_{0j} a_j \le c$ and to group $G_2$ otherwise.

2.1.1. MSD Model (Minimize the Sum of Deviations)

The problem can be expressed as follows:
$$\text{minimize} \sum_{i} d_i \qquad (2.6)$$
subject to
$$\sum_{j=1}^{k} x_{ij} a_j \le c + d_i, \quad i \in G_1, \qquad (2.6a)$$
$$\sum_{j=1}^{k} x_{ij} a_j > c - d_i, \quad i \in G_2, \qquad (2.6b)$$
($a_j$ and $c$ are free and $d_i \ge 0$ for all $i$), where $d_i$ is the external deviation from the hyperplane for observation $i$.

The objective takes the value zero when the two groups can be separated by the hyperplane. One of the normalization constraints cited above must be introduced to avoid unacceptable solutions that assign zero weights to all discriminant coefficients.
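As an illustration, the MSD model with normalization (N1) can be set up as an ordinary LP. The sketch below, under the assumptions of a small hypothetical two-attribute sample and a tolerance of $10^{-6}$ standing in for the strict inequality of group 2, uses SciPy's linprog:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical two-group sample: rows are observations, columns attributes.
G1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
G2 = np.array([[3.0, 3.0], [4.0, 3.0], [3.0, 4.0]])
k = G1.shape[1]
n1, n2 = len(G1), len(G2)
n = n1 + n2
eps = 1e-6  # approximates the strict inequality for group 2

# Variable vector: [a_1..a_k, c, d_1..d_n]; minimize the sum of deviations.
c_obj = np.concatenate([np.zeros(k + 1), np.ones(n)])

A_ub, b_ub = [], []
for i, x in enumerate(G1):          # x.a - c - d_i <= 0
    row = np.zeros(k + 1 + n)
    row[:k], row[k], row[k + 1 + i] = x, -1.0, -1.0
    A_ub.append(row); b_ub.append(0.0)
for i, x in enumerate(G2):          # x.a + d_i >= c + eps
    row = np.zeros(k + 1 + n)
    row[:k], row[k], row[k + 1 + n1 + i] = -x, 1.0, -1.0
    A_ub.append(row); b_ub.append(-eps)

# Normalization (N1): sum_j a_j + c = 1, ruling out the trivial zero solution.
A_eq = [np.concatenate([np.ones(k + 1), np.zeros(n)])]
b_eq = [1.0]

bounds = [(None, None)] * (k + 1) + [(0, None)] * n
res = linprog(c_obj, A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=np.array(A_eq), b_eq=b_eq, bounds=bounds)
a, cut = res.x[:k], res.x[k]
```

Since this sample is linearly separable, the optimal objective is zero and the fitted hyperplane classifies every training observation correctly.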

2.1.2. Ragsdale and Stam Two-Stage Model (RS) [19]

$$\text{minimize} \sum_{i} d_i \qquad (2.7)$$
subject to
$$\sum_{j=1}^{k} x_{ij} a_j - d_i \le c_1, \quad i \in G_1, \qquad (2.7a)$$
$$\sum_{j=1}^{k} x_{ij} a_j + d_i > c_2, \quad i \in G_2, \qquad (2.7b)$$
($a_j$ are free and $d_i \ge 0$ for all $i$), where $c_1$ and $c_2$ are two predetermined constants with $c_1 < c_2$. The values chosen by Ragsdale and Stam are $c_1 = 0$ and $c_2 = 1$. Two methods were proposed to determine the cutoff value. The first is to set the cutoff value equal to $(c_1 + c_2)/2$. The second requires the solution of another LP problem which minimizes only the deviations of the observations whose classification scores lie between $c_1$ and $c_2$; observations with classification scores below $c_1$ or above $c_2$ are assumed to be correctly classified. The advantage of this latter method is that it excludes any observation with a classification score on the wrong side of the hyperplane. However, for simplicity, we use the first method in our empirical study. Moreover, we will also solve the model by treating $c_1$ and $c_2$ as decision variables and adding the constraint:
$$c_2 - c_1 = 1. \qquad (2.7c)$$
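A minimal sketch of the RS program with the fixed constants $c_1 = 0$ and $c_2 = 1$, on hypothetical data centred so that no intercept term is needed, and with the strict inequality (2.7b) relaxed to a non-strict one as is usual in LP practice:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical sample, centred around the origin (the RS model compares
# scores with the fixed constants c1 = 0 and c2 = 1).
G1 = np.array([[-1.0, -1.0], [-2.0, 0.0], [0.0, -2.0]])
G2 = np.array([[2.0, 2.0], [3.0, 1.0], [1.0, 3.0]])
k, n1, n2 = 2, len(G1), len(G2)
n = n1 + n2
c1, c2 = 0.0, 1.0

# Variables: [a_1..a_k, d_1..d_n]; minimize the total deviation.
c_obj = np.concatenate([np.zeros(k), np.ones(n)])
A_ub, b_ub = [], []
for i, x in enumerate(G1):          # x.a - d_i <= c1
    row = np.zeros(k + n); row[:k], row[k + i] = x, -1.0
    A_ub.append(row); b_ub.append(c1)
for i, x in enumerate(G2):          # x.a + d_i >= c2
    row = np.zeros(k + n); row[:k], row[k + n1 + i] = -x, -1.0
    A_ub.append(row); b_ub.append(-c2)

bounds = [(None, None)] * k + [(0, None)] * n
res = linprog(c_obj, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds)
a = res.x[:k]
cutoff = (c1 + c2) / 2.0            # first method for the cutoff value
```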

2.1.3. Lam et al. Method [9, 12]

This model, abbreviated as LPC, is defined as follows.

$$\text{minimize} \sum_{i \in G_1} \left(d_i^+ + d_i^-\right) + \sum_{i \in G_2} \left(e_i^+ + e_i^-\right) \qquad (2.8)$$
subject to
$$\sum_{j=1}^{k} x_{ij} a_j + d_i^- - d_i^+ = c_1, \quad i \in G_1, \qquad (2.8a)$$
$$\sum_{j=1}^{k} x_{ij} a_j + e_i^- - e_i^+ = c_2, \quad i \in G_2, \qquad (2.8b)$$
$$c_2 - c_1 \ge 1 \quad (\text{normalization constraint}), \qquad (2.8c)$$
($c_1$, $c_2$, $a_j$ are free), where $d_i^+$ and $d_i^-$ are, respectively, the external and the internal deviations from the discriminant axis to observation $i$ in group 1.

$e_i^+$ and $e_i^-$ are, respectively, the internal and the external deviations from the discriminant axis to observation $i$ in group 2.

$c_1$ and $c_2$ are defined as decision variables and can be given different meaningful interpretations.

A particular case of this model is that of Lee and Ord [21], which is based on minimal absolute deviations with $c_1 = 0$ and $c_2 = 1$.

A new formulation of LPC is to choose $c_h$ $(h = 1, 2)$ as the mean classification score of group $h$, as follows (LPM):
$$\text{minimize} \sum_{i \in G_1} \left(d_i^+ + d_i^-\right) + \sum_{i \in G_2} \left(e_i^+ + e_i^-\right) \qquad (2.9)$$
subject to

$$\sum_{j=1}^{k} \left(x_{ij} - \mu_{1j}\right) a_j + d_i^- - d_i^+ = 0, \quad i \in G_1, \qquad (2.9a)$$
$$\sum_{j=1}^{k} \left(x_{ij} - \mu_{2j}\right) a_j + e_i^- - e_i^+ = 0, \quad i \in G_2, \qquad (2.9b)$$
$$\sum_{j=1}^{k} \left(\mu_{2j} - \mu_{1j}\right) a_j \ge 1, \qquad (2.9c)$$
with $\mu_{hj} = \sum_{r \in G_h} x_{rj} / n_h$ the mean of the $x_{rj}$ over group $G_h$ and $n_h$ the number of observations in group $G_h$.

The objective of the LPM model is to minimize the total deviation of the classification scores from their group mean scores, in order to obtain attribute weights $a_j$ that are considered more stable than those of the other LP approaches. The weights obtained from solving the LPM are then used to compute the classification scores of all the objects. Lam et al. [12] proposed two formulations to determine the cutoff value $c$; one of them consists of minimizing the sum of the deviations from the cutoff value $c$ (LP2).

The linear programming model LP2 is the following:

$$\text{minimize} \sum_{i=1}^{n} d_i \qquad (2.10)$$
subject to
$$\sum_{j=1}^{k} x_{ij} a_j - d_i \le c, \quad i \in G_1, \qquad (2.10a)$$
$$\sum_{j=1}^{k} x_{ij} a_j + d_i \ge c, \quad i \in G_2, \qquad (2.10b)$$
($c$ is free and $d_i \ge 0$).
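The two-step procedure, LPM for the weights followed by LP2 for the cutoff, can be sketched as follows on hypothetical data; the variable layout is an implementation choice, not part of the original formulation:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical two-group sample.
G1 = np.array([[-1.0, -1.0], [-2.0, 0.0], [0.0, -2.0]])
G2 = np.array([[2.0, 2.0], [3.0, 1.0], [1.0, 3.0]])
k, n1, n2 = 2, len(G1), len(G2)
mu1, mu2 = G1.mean(axis=0), G2.mean(axis=0)

# --- Step 1 (LPM): variables [a, d+, d-, e+, e-], all deviations >= 0.
nv = k + 2 * n1 + 2 * n2
c_obj = np.concatenate([np.zeros(k), np.ones(nv - k)])
A_eq, b_eq = [], []
for i, x in enumerate(G1):          # (x_i - mu1).a + d-_i - d+_i = 0
    row = np.zeros(nv)
    row[:k] = x - mu1
    row[k + i] = -1.0               # d+_i
    row[k + n1 + i] = 1.0           # d-_i
    A_eq.append(row); b_eq.append(0.0)
for i, x in enumerate(G2):          # (x_i - mu2).a + e-_i - e+_i = 0
    row = np.zeros(nv)
    row[:k] = x - mu2
    row[k + 2 * n1 + i] = -1.0      # e+_i
    row[k + 2 * n1 + n2 + i] = 1.0  # e-_i
    A_eq.append(row); b_eq.append(0.0)
# (mu2 - mu1).a >= 1
A_ub = [np.concatenate([-(mu2 - mu1), np.zeros(nv - k)])]
bounds = [(None, None)] * k + [(0, None)] * (nv - k)
res1 = linprog(c_obj, A_ub=np.array(A_ub), b_ub=[-1.0],
               A_eq=np.array(A_eq), b_eq=b_eq, bounds=bounds)
a = res1.x[:k]

# --- Step 2 (LP2): find the cutoff c for the fixed weights a.
s1, s2 = G1 @ a, G2 @ a
n = n1 + n2
c2_obj = np.concatenate([[0.0], np.ones(n)])  # variables [c, d_1..d_n]
A2, b2 = [], []
for i, s in enumerate(s1):          # s - d_i <= c
    row = np.zeros(1 + n); row[0], row[1 + i] = -1.0, -1.0
    A2.append(row); b2.append(-s)
for i, s in enumerate(s2):          # s + d_i >= c
    row = np.zeros(1 + n); row[0], row[1 + n1 + i] = 1.0, -1.0
    A2.append(row); b2.append(s)
bounds2 = [(None, None)] + [(0, None)] * n
res2 = linprog(c2_obj, A_ub=np.array(A2), b_ub=b2, bounds=bounds2)
cutoff = res2.x[0]
```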

2.1.4. Combined Method [10]

This method combines several discriminant methods to predict the classification of new observations and is divided into two stages. The first stage consists of choosing several discriminant models; each method is then applied independently, and the application of each method provides a classification score for each observation. The group having the higher group-mean classification score is denoted $G_H$ and the one having the lower group-mean classification score is denoted $G_L$. The second stage consists of calculating the partial weights of the observations using the scores obtained in the first stage. For group $G_H$, the partial weight $t_{ri}$ of the $i$th observation obtained from solving method $r$ ($r = 1, \ldots, R$, where $R$ is the number of methods utilized) is calculated as the difference between the observation's classification score and the group-minimum classification score, divided by the difference between the maximum and the minimum classification scores:
$$t_{ri} = \frac{\sum_{j=1}^{k} x_{ij} a_j^r - \min_{i \in G_H} \sum_{j=1}^{k} x_{ij} a_j^r}{\max_{i \in G_H} \sum_{j=1}^{k} x_{ij} a_j^r - \min_{i \in G_H} \sum_{j=1}^{k} x_{ij} a_j^r}, \quad i \in G_H. \qquad (2.11)$$
The largest partial weight is equal to 1 and the smallest partial weight is equal to zero.

The same calculations are used for each observation of group $G_L$, but in this case the partial weight of each observation is equal to one minus the value obtained in the calculation. Thus, in this group, the observations with the smallest classification scores are those with the greatest likelihood of belonging to the group:

$$t_{ri} = 1 - \frac{\sum_{j=1}^{k} x_{ij} a_j^r - \min_{i \in G_L} \sum_{j=1}^{k} x_{ij} a_j^r}{\max_{i \in G_L} \sum_{j=1}^{k} x_{ij} a_j^r - \min_{i \in G_L} \sum_{j=1}^{k} x_{ij} a_j^r}, \quad i \in G_L. \qquad (2.12)$$
The same procedure is repeated for all the discriminant methods used in the combined method. The final combined weight $w_i$ is the sum of all the partial weights obtained. The final combined weights of all observations are used as the weights in the objective function of the LP model in the second stage. A larger combined weight for an observation indicates that there is little chance that this observation has been misclassified. To each combined weight, the authors add a small positive constant $\varepsilon$ in order to ensure that all the observations enter the classification model, even those with the smallest partial weights under all the discriminant methods.
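The min-max partial weights (2.11) and (2.12) amount to the following small helper (the function name is illustrative):

```python
import numpy as np

def partial_weights(scores, higher_mean_group=True):
    """Min-max partial weights t_ri of one method's classification scores.

    For the higher-mean group G_H the weight grows with the score;
    for the lower-mean group G_L it is one minus that quantity.
    """
    s = np.asarray(scores, dtype=float)
    t = (s - s.min()) / (s.max() - s.min())
    return t if higher_mean_group else 1.0 - t
```

The final combined weight of an observation is then the sum of its partial weights over all $R$ methods, plus the small constant $\varepsilon$.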

The LP formulation which combines the results of different discriminant methods is the following weighting MSD (W-MSD) model:

$$\text{minimize} \sum_{i} w_i d_i \qquad (2.13)$$
subject to
$$\sum_{j=1}^{k} x_{ij} a_j \le c + d_i, \quad i \in G_1, \qquad (2.13a)$$
$$\sum_{j=1}^{k} x_{ij} a_j \ge c - d_i, \quad i \in G_2, \qquad (2.13b)$$
$$\sum_{j=1}^{k} a_j + c = 1, \qquad (2.13c)$$
($a_j$ and $c$ are free for all $j$ and $d_i \ge 0$ for all $i$). The advantage of this model is its ability to weight the observations. Other formulations are also possible, for example, the weighted RS model (W-RS).

In our empirical study, the three methods LDF, MSD, and LPM are combined to form the combined method MC1, and the methods LDF, RS, and LPM are combined to form the combined method MC2. Other combinations are also possible.

2.1.5. The MCA Model [22]

$$\text{maximize} \sum_{h=1}^{2} \sum_{i=1}^{n_h} \beta_{hi} \qquad (2.14)$$
subject to
$$\sum_{j=1}^{k} x_{ij} a_j - c + (M + \Delta)\beta_{1i} \le M, \quad i \in G_1, \qquad (2.14a)$$
$$\sum_{j=1}^{k} x_{ij} a_j - c - (M + \Delta)\beta_{2i} \ge -M, \quad i \in G_2, \qquad (2.14b)$$
($a_j$, $c$ are free, $\beta_{hi} \in \{0,1\}$, $h = 1, 2$, $i = 1, \ldots, n_h$), with $\beta_{hi} = 1$ if the observation is classified correctly, $\Delta > 0$ very small, and $M > 0$ large. The model must be normalized to prevent trivial solutions.
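A sketch of the MCA model as a mixed-integer program, using SciPy's milp (available from SciPy 1.9). The normalization (N1) is added here as one possible way to prevent trivial solutions; the values of $M$ and $\Delta$ are chosen arbitrarily for this hypothetical sample:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

G1 = np.array([[-1.0, -1.0], [-2.0, 0.0], [0.0, -2.0]])
G2 = np.array([[2.0, 2.0], [3.0, 1.0], [1.0, 3.0]])
k = G1.shape[1]
n1, n2 = len(G1), len(G2)
n = n1 + n2
M, delta = 100.0, 1e-3              # arbitrary big-M and small Delta

# Variables: [a_1..a_k, c, beta_1..beta_n]; maximize sum(beta) = minimize -sum(beta).
c_obj = np.concatenate([np.zeros(k + 1), -np.ones(n)])

rows, lb, ub = [], [], []
for i, x in enumerate(G1):          # x.a - c + (M+Delta)*beta_i <= M
    row = np.zeros(k + 1 + n)
    row[:k], row[k], row[k + 1 + i] = x, -1.0, M + delta
    rows.append(row); lb.append(-np.inf); ub.append(M)
for i, x in enumerate(G2):          # x.a - c - (M+Delta)*beta_i >= -M
    row = np.zeros(k + 1 + n)
    row[:k], row[k], row[k + 1 + n1 + i] = x, -1.0, -(M + delta)
    rows.append(row); lb.append(-M); ub.append(np.inf)
# Normalization (N1), added to prevent the trivial solution.
rows.append(np.concatenate([np.ones(k + 1), np.zeros(n)]))
lb.append(1.0); ub.append(1.0)

constraints = LinearConstraint(np.array(rows), lb, ub)
integrality = np.concatenate([np.zeros(k + 1), np.ones(n)])  # betas integer
bounds = Bounds(np.concatenate([np.full(k + 1, -np.inf), np.zeros(n)]),
                np.concatenate([np.full(k + 1, np.inf), np.ones(n)]))
res = milp(c_obj, constraints=constraints, integrality=integrality, bounds=bounds)
beta = res.x[k + 1:]
```

On this separable sample every observation can be correctly classified, so the optimum assigns $\beta_{hi} = 1$ to all six observations.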

2.1.6. The MIP EDEA-DA Model (MIP EDEA-DA) [23]

Two stages characterize this model:

First stage (classification and identification of misclassified observations) is to

$$\text{minimize } d \qquad (2.15)$$
subject to
$$\sum_{j=1}^{k} x_{ij}\left(a_j^+ - a_j^-\right) - c - d \le 0, \quad i \in G_1, \qquad (2.15a)$$
$$\sum_{j=1}^{k} x_{ij}\left(a_j^+ - a_j^-\right) - c + d \ge 0, \quad i \in G_2, \qquad (2.15b)$$
$$\sum_{j=1}^{k} \left(a_j^+ + a_j^-\right) = 1, \qquad (2.15c)$$
$$\varepsilon \zeta_j^+ \le a_j^+ \le \zeta_j^+, \quad \varepsilon \zeta_j^- \le a_j^- \le \zeta_j^-, \quad j = 1, \ldots, k, \qquad (2.15d)$$
$$\zeta_j^+ + \zeta_j^- \le 1, \quad j = 1, \ldots, k, \qquad (2.15e)$$
$$\sum_{j=1}^{k} \left(\zeta_j^+ + \zeta_j^-\right) = k, \qquad (2.15f)$$
($\zeta_j^+ \in \{0,1\}$, $\zeta_j^- \in \{0,1\}$, $d$ and $c$ are free), with $a_j = a_j^+ - a_j^-$ and $c^*$, $d^*$ the optimal solution of model (2.15). There are two cases.

(i) If $d^* < 0$, then there are no misclassified observations and all the observations are separated into group 1 or group 2 by the hyperplane $\sum_j x_{ij} a_j^* = c^*$. We stop the procedure at this stage. (ii) If $d^* > 0$, then there are misclassified observations, and the second stage follows after classifying the observations into the appropriate sets ($E_1$, $E_2$).

The classification rule is

if $\sum_{j=1}^{k} a_j^* x_{ij} < c^* - d^*$, then $i \in G_1$ $(= E_1)$; if $\sum_{j=1}^{k} a_j^* x_{ij} > c^* + d^*$, then $i \in G_2$ $(= E_2)$; if $c^* - d^* \le \sum_{j=1}^{k} a_j^* x_{ij} \le c^* + d^*$, then the appropriate group of observation $i$ is determined by the second stage.
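The three-way first-stage rule can be coded directly; under the reading adopted here, scores strictly below $c^* - d^*$ go to $E_1$, scores strictly above $c^* + d^*$ go to $E_2$, and the band in between is deferred to the second stage (the function name is illustrative):

```python
import numpy as np

def edea_da_first_stage_rule(scores, c_star, d_star):
    """Three-way first-stage rule of the MIP EDEA-DA model.

    Returns 1 (group 1 / E1), 2 (group 2 / E2), or 0 when the score
    falls in the overlap band [c*-d*, c*+d*] and the observation is
    passed on to the second stage.
    """
    s = np.asarray(scores, dtype=float)
    out = np.zeros(len(s), dtype=int)
    out[s < c_star - d_star] = 1
    out[s > c_star + d_star] = 2
    return out
```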

Second stage (classification) is to

$$\text{minimize} \sum_{i \in C_1} r_i + \sum_{i \in C_2} r_i \qquad (2.16)$$
subject to
$$\sum_{j=1}^{k} x_{ij}\left(a_j^+ - a_j^-\right) - c - M r_i \le -\varepsilon, \quad i \in C_1, \qquad (2.16a)$$
$$\sum_{j=1}^{k} x_{ij}\left(a_j^+ - a_j^-\right) - c + M r_i \ge 0, \quad i \in C_2, \qquad (2.16b)$$
$$\sum_{j=1}^{k} \left(a_j^+ + a_j^-\right) = 1, \qquad (2.16c)$$
$$\varepsilon \zeta_j^+ \le a_j^+ \le \zeta_j^+, \quad \varepsilon \zeta_j^- \le a_j^- \le \zeta_j^-, \quad j = 1, \ldots, k, \qquad (2.16d)$$
$$\zeta_j^+ + \zeta_j^- \le 1, \quad j = 1, \ldots, k, \qquad (2.16e)$$
$$\sum_{j=1}^{k} \left(\zeta_j^+ + \zeta_j^-\right) = k, \qquad (2.16f)$$
($\zeta_j^+ \in \{0,1\}$, $\zeta_j^- \in \{0,1\}$, $a_j^+ \ge 0$, $a_j^- \ge 0$, $r_i \in \{0,1\}$, and $c$ is free), with $C_1 = G_1 - E_1$, $C_2 = G_2 - E_2$.

The classification rule is


The advantage of this model is that it minimizes the number of misclassified observations. However, the performance of the model depends on the choice of the numbers $M$ and $\varepsilon$, which are subjectively determined by the researcher, and also on the choice of the software used to solve the model.

2.2. The Nonlinear MP Models
2.2.1. The Second-Order MSD Formulation [15]

The second-order MSD model has the form:
$$\text{minimize} \sum_{i \in G_1} d_{1i}^+ + \sum_{i \in G_2} d_{2i}^- \qquad (2.17)$$
subject to
$$\sum_{j=1}^{k} x_{ij} a_{jL} + \sum_{j=1}^{k} x_{ij}^2 a_{jQ} + \sum_{h<m} x_{ih} x_{im} a_{hm} + d_{1i}^- - d_{1i}^+ = c, \quad i \in G_1, \qquad (2.17a)$$
$$\sum_{j=1}^{k} x_{ij} a_{jL} + \sum_{j=1}^{k} x_{ij}^2 a_{jQ} + \sum_{h<m} x_{ih} x_{im} a_{hm} + d_{2i}^- - d_{2i}^+ = c, \quad i \in G_2, \qquad (2.17b)$$
$$\sum_{j=1}^{k} a_{jL} + \sum_{j=1}^{k} a_{jQ} + \sum_{h<m} a_{hm} + c = 1, \qquad (2.17c)$$
where ($a_{jL}$, $a_{jQ}$, $a_{hm}$ are free, $h, j, m = 1, \ldots, k$; $d_{ri}^+, d_{ri}^- \ge 0$, $r = 1, 2$, $i = 1, \ldots, n_r$), $a_{jL}$ is the coefficient of the linear term $x_{ij}$ of attribute $j$, $a_{jQ}$ is the coefficient of the quadratic term $x_{ij}^2$ of attribute $j$, $a_{hm}$ are the coefficients of the cross-product terms involving attributes $h$ and $m$, $d_{ri}^+$ and $d_{ri}^-$ are the deviations of group $r$ observations, and $c$ is the cutoff value.

The constraint (2.17c) is the normalization constraint, which prevents trivial solutions. Other normalization constraints are also possible [4, 11]. It is interesting to note that the cross-product terms can be eliminated from the model when the attributes are uncorrelated [15].
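Since the second-order model is linear in its coefficients, it can be solved with any of the linear formulations after expanding the attribute matrix; a sketch of that expansion (the helper name is illustrative):

```python
import numpy as np
from itertools import combinations

def second_order_features(X, include_cross_terms=True):
    """Expand an (n, k) attribute matrix with squared and cross-product
    terms, so that the second-order MSD model reduces to the linear MSD
    model applied to the expanded attributes."""
    X = np.asarray(X, dtype=float)
    parts = [X, X ** 2]
    if include_cross_terms and X.shape[1] > 1:
        # one column x_h * x_m per attribute pair h < m
        parts.append(np.column_stack([X[:, h] * X[:, m]
                                      for h, m in combinations(range(X.shape[1]), 2)]))
    return np.hstack(parts)
```

Passing include_cross_terms=False corresponds to the uncorrelated-attributes case in which the cross-product terms are dropped.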

In order to reduce the influence of the group sizes and give more importance to the deviation cost of each group, we propose replacing the objective function (2.17) by the following function (2.17'):
$$(1 - \lambda)\,\frac{\sum_{i=1}^{n_1} d_{1i}^+}{n_1} + \lambda\,\frac{\sum_{i=1}^{n_2} d_{2i}^-}{n_2}, \qquad (2.17')$$
with $\lambda \in [0,1]$ a constant representing the relative importance of the costs associated with misclassification in the first and the second groups.

The classification rule is


2.2.2. The Piecewise-Linear Models [17]

Recently, two piecewise-linear models were developed by Glen: the piecewise MCA and piecewise MSD models. These methods assume that the discriminant function is nonlinear, and this nonlinearity is approximated by piecewise-linear functions. The concept is illustrated in Figure 1.

In Figure 1, the piecewise-linear functions are ACB' and BCA', while the component linear functions are represented by the lines AA' and BB'. Note that for the piecewise-linear function ACB' (respectively, BCA'), the region of correctly classified group 2 (group 1) observations is convex, while the region of correctly classified group 1 (group 2) observations is nonconvex. The optimal piecewise-linear discriminant function is obtained by considering the two preceding cases separately: the MP must be solved twice, once constraining all group 1 elements to a convex region and once constraining all group 2 elements to a convex region. Only the second case is considered in developing the following MP models.

(a) The Piecewise-Linear MCA Model [17]
The MCA model for generating a piecewise-linear function with $s$ segments is:
$$\text{maximize} \sum_{h=1}^{2} \sum_{i=1}^{n_h} \beta_{hi} \qquad (2.19)$$
subject to
$$\sum_{j=1}^{k} x_{ij}\left(a_{lj}^+ - a_{lj}^-\right) + (M + \varepsilon)\delta_{li} \le c_l + M, \quad i \in G_1, \; l = 1, \ldots, s, \qquad (2.19a)$$
$$\sum_{j=1}^{k} x_{ij}\left(a_{lj}^+ - a_{lj}^-\right) - (M + \varepsilon)\beta_{2i} \ge c_l - M, \quad i \in G_2, \; l = 1, \ldots, s, \qquad (2.19b)$$
$$\sum_{l=1}^{s} \delta_{li} - \beta_{1i} \ge 0, \quad i \in G_1, \qquad (2.19c)$$
$$\sum_{j=1}^{k} \left(a_{lj}^+ + a_{lj}^-\right) = 1, \quad l = 1, \ldots, s, \qquad (2.19d)$$
where $c_l$ is free, $a_{lj}^+, a_{lj}^- \ge 0$ $(l = 1, \ldots, s, \; j = 1, \ldots, k)$, $\beta_{hi} \in \{0,1\}$ $(h = 1, 2, \; i = 1, \ldots, n_h)$, and $\delta_{li} \in \{0,1\}$ $(l = 1, \ldots, s, \; i = 1, \ldots, n_1)$, with $\varepsilon > 0$ a small interval within which observations are considered misclassified and $M$ a large positive number,
$\beta_{hi} = 1$ if the observation is correctly classified,
$\delta_{li} = 1$ $(i \in G_1)$ if the group 1 observation is correctly classified by function $l$ on its own.
The correctly classified group 2 observations are identified by the $s$ constraints of type (2.19b). An observation of group 1 is correctly classified only if it is correctly classified by at least one of the $s$ segments of the piecewise-linear function (constraint (2.19c)).
The classification rule of an observation 𝑥0 is
if $\sum_{j=1}^{k} x_{0j} a_{lj}^* \le c_l^*$ for at least one segment $l$, then $x_0 \in G_1$; otherwise $x_0 \in G_2$. $\qquad$ (2.20)
A similar model must also be constructed for the case in which the nonconvex region is associated with group 2 and the convex region with group 1.
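The classification rule (2.20), with the nonconvex region attached to group 1, reduces to checking whether any segment places the observation below its cutoff (a sketch; the names are illustrative):

```python
import numpy as np

def piecewise_classify(x0, A, c):
    """Classify x0 with an s-segment piecewise-linear function.

    A is an (s, k) array of segment weight vectors a_l and c the s cutoffs.
    With the nonconvex region attached to group 1, x0 is put in group 1
    as soon as at least one segment classifies it there."""
    scores = np.asarray(A, dtype=float) @ np.asarray(x0, dtype=float)
    return 1 if np.any(scores <= np.asarray(c, dtype=float)) else 2
```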

(b) The Piecewise-Linear MSD Model [17]:
$$\text{minimize} \sum_{h=1}^{2} \sum_{i=1}^{n_h} d_{hi} \qquad (2.21)$$
subject to
$$\sum_{j=1}^{k} x_{ij}\left(a_{lj}^+ - a_{lj}^-\right) - e_{li} \le c_l - \varepsilon, \quad i \in G_1, \; l = 1, \ldots, s, \qquad (2.21a)$$
$$\sum_{j=1}^{k} x_{ij}\left(a_{lj}^+ - a_{lj}^-\right) + f_{li} \ge c_l + \varepsilon, \quad i \in G_2, \; l = 1, \ldots, s, \qquad (2.21b)$$
$$\sum_{j=1}^{k} \left(a_{lj}^+ + a_{lj}^-\right) = 1, \quad l = 1, \ldots, s, \qquad (2.21c)$$
$$d_{2i} - f_{li} \ge 0, \quad i \in G_2, \; l = 1, \ldots, s, \qquad (2.21d)$$
$$e_{li} - e_{pi} + U\delta_{li} \le U, \quad i \in G_1, \; l = 1, \ldots, s, \; p = 1, \ldots, s \; (p \ne l), \qquad (2.21e)$$
$$e_{li} - d_{1i} + U\delta_{li} \le U, \quad i \in G_1, \; l = 1, \ldots, s, \qquad (2.21f)$$
$$\sum_{l=1}^{s} \delta_{li} = 1, \quad i \in G_1, \qquad (2.21g)$$
where $c_l$ is free, $a_{lj}^+, a_{lj}^- \ge 0$ $(l = 1, \ldots, s, \; j = 1, \ldots, k)$, $d_{hi} \ge 0$ $(h = 1, 2, \; i = 1, \ldots, n_h)$, $e_{li} \ge 0$, $\delta_{li} \in \{0,1\}$ $(l = 1, \ldots, s, \; i = 1, \ldots, n_1)$, and $f_{li} \ge 0$ $(l = 1, \ldots, s, \; i = 1, \ldots, n_2)$, with $\varepsilon > 0$ a small interval and $U > 0$ an upper bound on the $e_{li}$.
$e_{li}$ is the deviation of group 1 observation $i$ from component function $l$ of the piecewise-linear function, where $e_{li} = 0$ if the observation is correctly classified by function $l$ on its own and $e_{li} > 0$ if it is misclassified by function $l$ on its own.
$f_{li}$ is the deviation of group 2 observation $i$ from component function $l$ of the piecewise-linear function, where $f_{li} = 0$ if the observation is correctly classified by function $l$ on its own and $f_{li} > 0$ if it is misclassified by function $l$ on its own. A group 2 observation is correctly classified only if it is correctly classified by each of the $s$ component functions.
$d_{2i}$ is the lower bound on the deviation of group 2 observation $i$ from the $s$-segment piecewise-linear discriminant function, where $d_{2i} = 0$ if the observation is correctly classified and $d_{2i} > 0$ if it is misclassified.
The binary variable $\delta_{li}$ is introduced in the model to determine $d_{1i}$ by detecting the minimal deviation $e_{li}$.
The classification rule is the same as that of the piecewise-linear MCA model.
The piecewise-linear MCA and MSD models must each be solved twice: once with all group 1 observations in the convex region and once with all group 2 observations in the convex region, in order to obtain the best classification. Other models have been developed by Better et al. [18]. These models are more effective than the piecewise-linear models on more complex datasets and do not require that one of the groups lie in a convex region.

2.2.3. The Multihyperplanes Models [18]

The multihyperplanes models can be interpreted as models that identify several hyperplanes used successively. The objective is to generate a decision tree of conditional rules to separate the points. This approach constitutes an innovation in the area of Support Vector Machines (SVMs) in the context of the successive perfect separation decision tree. Its advantage is that it constructs a nonlinear discriminant function without the need for a kernel transformation of the data as in SVM. The first model using multihyperplanes is the Successive Perfect Separation decision tree (SPS).

(a) The Successive Perfect Separation Decision Tree (SPS)
The specific structure is developed in the context of the SPS decision tree, the tree which results from applying the SPS procedure. At each depth $l < D$, this procedure compels all the observations of either group 1 or group 2 to lie on one side of the hyperplane; thus, at each depth the tree has one leaf node terminating the branch that correctly classifies the observations of a given group. In Figure 2, the points represented as circles and triangles must be separated. The segments PQ, QR, and RS of the three hyperplanes separate all the points. Note that the circles are correctly classified either by H1 or by H2 and H3, whereas the triangles are correctly classified by the tree if they are correctly classified by H1 and H2 or by H1 and H3. Several tree types are possible. Specific binary variables called "slicing variables" are used to describe the specific structure of the tree; they define how the tree is sliced in order to classify an observation correctly.
The specific structure SPS decision tree model is formulated as follows:
For $D = 3$:
$$\text{minimize} \sum_{i=1}^{n} \delta_i \qquad (2.22)$$
subject to
$$\sum_{j=1}^{k} x_{ij} a_{jd} - M\delta_{di} \le c_d - \varepsilon, \quad i \in G_1, \; d = 1, 2, 3, \qquad (2.22a)$$
$$\sum_{j=1}^{k} x_{ij} a_{jd} + M\delta_{di} \ge c_d + \varepsilon, \quad i \in G_2, \; d = 1, 2, 3, \qquad (2.22b)$$
$$M\left(sl_1 + sl_2 + \delta_i\right) \ge \delta_{1i} + \delta_{2i} + \delta_{3i} - 2, \quad i \in G_1, \qquad (2.22c)$$
$$M\left(sl_1 + sl_2\right) + M\delta_i \ge \delta_{1i} + \delta_{2i} + \delta_{3i}, \quad i \in G_2, \qquad (2.22d)$$
$$M\left(2 - sl_1 - sl_2\right) + M\delta_i \ge \delta_{1i} + \delta_{2i} + \delta_{3i}, \quad i \in G_1, \qquad (2.22e)$$
$$M\left(2 - sl_1 - sl_2 + \delta_i\right) \ge \delta_{1i} + \delta_{2i} + \delta_{3i} - 2, \quad i \in G_2, \qquad (2.22f)$$
$$M\left(1 + sl_1 - sl_2 + \delta_i\right) \ge \delta_{1i} - M\mu_i, \quad i \in G_1, \qquad (2.22g)$$
$$M\left(1 + sl_1 - sl_2\right) + M\delta_i \ge \delta_{2i} + \delta_{3i} - M\left(1 - \mu_i\right), \quad i \in G_1, \qquad (2.22h)$$
$$M\left(1 + sl_1 - sl_2 + \delta_i\right) \ge \delta_{1i}, \quad i \in G_2, \qquad (2.22i)$$
$$M\left(1 + sl_1 - sl_2 + \delta_i\right) \ge \delta_{2i} + \delta_{3i} - 1, \quad i \in G_2, \qquad (2.22j)$$
$$M\left(1 - sl_1 + sl_2 + \delta_i\right) \ge \delta_{1i}, \quad i \in G_1, \qquad (2.22k)$$
$$M\left(1 - sl_1 + sl_2 + \delta_i\right) \ge \delta_{2i} + \delta_{3i} - 1, \quad i \in G_1, \qquad (2.22l)$$
$$M\left(1 - sl_1 + sl_2 + \delta_i\right) \ge \delta_{1i} - M\mu_i, \quad i \in G_2, \qquad (2.22m)$$
$$M\left(1 - sl_1 + sl_2\right) + M\delta_i \ge \delta_{2i} + \delta_{3i} - M\left(1 - \mu_i\right), \quad i \in G_2, \qquad (2.22n)$$
$$\sum_{j=1}^{k} \sum_{d=1}^{3} a_{jd} = 1, \qquad (2.22o)$$
noting that $\delta_i \in \{0,1\}$ $(i \in G_1 \cup G_2)$, $\delta_{di} \in \{0,1\}$ $(i \in G_1 \cup G_2, \; d = 1, 2, 3)$, $\mu_i \in \{0,1\}$ $(i \in G_1 \cup G_2)$, $sl_t \in \{0,1\}$ $(t = 1, 2)$, and $a_{jd}$, $c_d$ are free $(j = 1, \ldots, k)$, where $M$ is a large and $\varepsilon$ a very small constant. Consider the following:
$$\delta_i = \begin{cases} 0 & \text{if } i \text{ is correctly classified by the tree}, \\ 1 & \text{otherwise}, \end{cases} \qquad \delta_{di} = \begin{cases} 0 & \text{if } i \text{ is correctly classified by hyperplane } d, \\ 1 & \text{otherwise}. \end{cases} \qquad (2.23)$$
Constraints (2.22c) and (2.22d) represent the tree type (0,0) and are activated when $sl_1 = 0$ and $sl_2 = 0$. Similarly, constraints (2.22e) and (2.22f) for tree type (1,1) are activated only when $sl_1 = 1$ and $sl_2 = 1$. For the tree types (0,1) and (1,0), corresponding to constraints (2.22g)-(2.22n), a binary variable $\mu_i$ is introduced in order to activate or deactivate the constraints relevant to these trees. When $sl_1 = 0$ and $sl_2 = 1$, the constraints (2.22g)-(2.22j) for tree type (0,1) are activated, so that an observation from group 1 is correctly classified by the tree if it is correctly classified either by the first hyperplane or by both the second and the third hyperplanes. An observation from group 2 is correctly classified by the tree if it is correctly classified either by hyperplanes 1 and 2 or by hyperplanes 2 and 3.
This classification is established in the case $\mu_i = 0$, which activates constraints (2.22g) and (2.22h). The case corresponding to tree type (1,0) is a "mirror image" of the previous one. However, the model becomes difficult to solve when the number of possible tree types increases (large $D$): as $D$ increases, the number of possible tree types grows and so does the number of constraints. For these reasons, Better et al. [18] developed the following model.
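The type-(0,1) tree logic described above can be stated as a small predicate, following the textual description (the names are illustrative):

```python
def sps_tree_01_correct(ok, group):
    """Check whether an observation is correctly classified by a type-(0,1)
    SPS tree of depth 3.

    `ok` is a boolean triple: ok[d] is True when hyperplane d+1 on its own
    classifies the observation correctly.  Group 1 needs H1, or both H2
    and H3; group 2 needs H1 and H2, or H2 and H3.
    """
    h1, h2, h3 = ok
    if group == 1:
        return h1 or (h2 and h3)
    return (h1 and h2) or (h2 and h3)
```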

(b) The General Structure SPS Model (GSPS)
$$\text{minimize} \sum_{i=1}^{n} \delta_{iD} \qquad (2.24)$$
subject to
$$\sum_{j=1}^{k} x_{ij} a_{jd} - M\left(\sum_{l=1}^{d-1} v_{il} + \delta_{id}\right) \le c_d - \varepsilon, \quad i \in G_1, \; d = 1, \ldots, D, \qquad (2.24a)$$
$$\sum_{j=1}^{k} x_{ij} a_{jd} + M\left(\sum_{l=1}^{d-1} v_{il} + \delta_{id}\right) \ge c_d + \varepsilon, \quad i \in G_2, \; d = 1, \ldots, D, \qquad (2.24b)$$
$$\mu_d \ge \delta_{id}, \quad i \in G_1, \; d = 1, \ldots, D-1, \qquad (2.24c)$$
$$1 - \mu_d \ge \delta_{id}, \quad i \in G_2, \; d = 1, \ldots, D-1, \qquad (2.24d)$$
$$v_{id} \le \mu_d, \quad i \in G_1, \; d = 1, \ldots, D-1, \qquad (2.24e)$$
$$v_{id} \le 1 - \mu_d, \quad i \in G_2, \; d = 1, \ldots, D-1, \qquad (2.24f)$$
$$v_{id} \le 1 - \delta_{id}, \quad i \in G_1 \cup G_2, \; d = 1, \ldots, D-1, \qquad (2.24g)$$
$$\sum_{j=1}^{k} \sum_{d=1}^{D} a_{jd} = 1, \qquad (2.24h)$$
where $\delta_{id} \in \{0,1\}$ $(i \in G_1 \cup G_2, \; d = 1, \ldots, D)$, $\mu_d \in \{0,1\}$ $(d = 1, \ldots, D-1)$, $0 \le v_{id} \le 1$ $(d = 1, \ldots, D-1)$, and $a_{jd}$, $c_d$ are free $(j = 1, \ldots, k, \; d = 1, \ldots, D)$. The variables $\mu_d$ and $v_{id}$ are not defined for the final hyperplane ($D$). The variable $\mu_d$ is defined as
$$\mu_d = \begin{cases} 0 & \text{if all } i \in G_1 \text{ are compelled to lie on one side of hyperplane } d, \\ 1 & \text{if all } i \in G_2 \text{ are compelled to lie on one side of hyperplane } d. \end{cases} \qquad (2.25)$$
Constraints (2.24c) and (2.24d) force all group 1 or all group 2 observations to lie on one side of the hyperplane according to the value of $\mu_d$. In fact, due to constraint (2.24c), if $\mu_d = 0$, all group 1 observations, and possibly some group 2 observations, lie on one side of hyperplane $d$; only group 2 observations lie on the other side, and these can be permanently classified. Conversely, due to constraint (2.24d), if $\mu_d = 1$, the observations permanently classified by hyperplane $d$ are those belonging to group 1. The variables $v_{id}$ identify the correctly classified and misclassified observations of each group from the permanent values of the $\delta_{id}$. In the case $\mu_1 = 1$, the permanent values to establish are those of group 1 observations with $\delta_{i1} = 0$, because these observations are separated in such a way that we do not need to consider them again; for them, $\mu_1 = 1$ and $\delta_{i1} = 0$ force $v_{i1}$ to equal 1. Forcing $v_{i1} = 0$ for group 1 observations means that they have not yet been permanently separated from the group 2 observations and one or more further hyperplanes are needed to separate them. Thus, $v_{i1} = 0$ if $\mu_1 = 0$ or $\delta_{i1} = 1$ (enforced by constraints (2.24e) and (2.24g)).
For the empirical study, the SPS and GSPS models will be solved using the following two normalization constraints:
$$\text{(N1)} \quad \sum_{j=1}^{k} \sum_{d=1}^{D} a_{jd} = 1, \qquad \text{(N2)} \quad \sum_{j=1}^{k} a_{jd} = 1, \quad d = 1, \ldots, D. \qquad (2.26)$$
The models presented previously are based either on piecewise-linear separation or on multihyperplanes separation. New models based on piecewise-nonlinear separation and on multihypersurfaces are proposed in the next section.

3. The Proposed Models

In this section different models are proposed. Some use the piecewise-nonlinear separation and the others use the multihypersurfaces.

3.1. The Piecewise-Nonlinear Models (Quadratic Separation)

The piecewise-linear MCA and MSD models are based on piecewise-linear functions. To improve the performance of these models, we propose two models based on piecewise-nonlinear functions. The basic concept is illustrated in Figure 3.

The curves AA' and BB' represent the component functions of the piecewise-nonlinear functions ACB' and BCA'. The interpretation is the same as in Figure 1. Note, however, that the use of piecewise-nonlinear functions helps reduce the number of misclassified observations. Based on this idea, we propose models built on piecewise-nonlinear functions, in which the first constraints of the piecewise-linear MCA and MSD models are replaced by constraints that are linear in the coefficients but nonlinear in the attributes:
$$\sum_{j} x_{ij} a_{ljL} + \sum_{j} x_{ij}^2 a_{ljQ} + \sum_{h<m} x_{ih} x_{im} a_{lhm} + (M + \varepsilon)\delta_{li} \le c_l + M, \quad i \in G_1, \; l = 1, \ldots, s, \qquad (3.22a)$$
$$\sum_{j} x_{ij} a_{ljL} + \sum_{j} x_{ij}^2 a_{ljQ} + \sum_{h<m} x_{ih} x_{im} a_{lhm} - (M + \varepsilon)\beta_{2i} \ge c_l - M, \quad i \in G_2, \; l = 1, \ldots, s, \qquad (3.22b)$$
where $a_{ljL}$, $a_{ljQ}$, $a_{lhm}$ are unrestricted in sign ($h, j, m = 1, \ldots, k$ and $l = 1, \ldots, s$), $a_{ljL}$ are the coefficients of the linear terms of attribute $j$ for function $l$, $a_{ljQ}$ are the coefficients of the quadratic terms of attribute $j$ for function $l$, and $a_{lhm}$ are the coefficients of the cross-product terms of attributes $h$ and $m$ for function $l$.

Note that if the attributes are uncorrelated, the cross-product terms can be excluded from the models. Other general nonlinear terms can also be included. On the other hand, the normalization constraint is replaced by the following constraint:
$$\sum_{j} a_{ljL} + \sum_{j} a_{ljQ} + \sum_{h<m} a_{lhm} = 1, \quad l = 1, \ldots, s. \qquad (3.22c)$$
The resulting piecewise-quadratic separation models are the following.

3.1.1. The Piecewise-Quadratic Separation MCA Model (QSMCA)

$$\text{maximize} \sum_{r=1}^{2} \sum_{i=1}^{n_r} \beta_{ri} \qquad (3.23)$$
subject to
$$\sum_{j=1}^{k} x_{ij} a_{ljL} + \sum_{j=1}^{k} x_{ij}^2 a_{ljQ} + \sum_{h<m} x_{ih} x_{im} a_{lhm} + (M + \varepsilon)\delta_{li} \le c_l + M, \quad i \in G_1, \; l = 1, \ldots, s, \qquad (3.23a)$$
$$\sum_{j=1}^{k} x_{ij} a_{ljL} + \sum_{j=1}^{k} x_{ij}^2 a_{ljQ} + \sum_{h<m} x_{ih} x_{im} a_{lhm} - (M + \varepsilon)\beta_{2i} \ge c_l - M, \quad i \in G_2, \; l = 1, \ldots, s, \qquad (3.23b)$$
$$\sum_{l=1}^{s} \delta_{li} - \beta_{1i} \ge 0, \quad i \in G_1, \qquad (3.23c)$$
$$\sum_{j=1}^{k} a_{ljL} + \sum_{j=1}^{k} a_{ljQ} + \sum_{h<m} a_{lhm} = 1, \quad l = 1, \ldots, s, \qquad (3.23d)$$
where $c_l$, $a_{ljL}$, $a_{ljQ}$, $a_{lhm}$ are unrestricted in sign ($h, j, m = 1, \ldots, k$ and $l = 1, \ldots, s$), $\beta_{ri} \in \{0,1\}$ $(r = 1, 2, \; i = 1, \ldots, n_r)$, and $\delta_{li} \in \{0,1\}$ $(l = 1, \ldots, s, \; i = 1, \ldots, n_1)$. The classification rule for an observation $x_0$ is: if
$$\sum_{j=1}^{k} x_{0j} a_{ljL}^* + \sum_{j=1}^{k} x_{0j}^2 a_{ljQ}^* + \sum_{h<m} x_{0h} x_{0m} a_{lhm}^* \le c_l^* \qquad (3.24)$$
for at least one function $l$, then $x_0 \in G_1$; otherwise $x_0 \in G_2$.

3.1.2. The Piecewise-Quadratic Separation MSD Model (QSMSD)

$$\text{minimize} \sum_{r=1}^{2} \sum_{i=1}^{n_r} d_{ri} \qquad (3.25)$$
subject to
$$\sum_{j=1}^{k} x_{ij} a_{ljL} + \sum_{j=1}^{k} x_{ij}^2 a_{ljQ} + \sum_{h<m} x_{ih} x_{im} a_{lhm} - e_{li} \le c_l - \varepsilon, \quad i \in G_1, \; l = 1, \ldots, s, \qquad (3.25a)$$
$$\sum_{j=1}^{k} x_{ij} a_{ljL} + \sum_{j=1}^{k} x_{ij}^2 a_{ljQ} + \sum_{h<m} x_{ih} x_{im} a_{lhm} + f_{li} \ge c_l + \varepsilon, \quad i \in G_2, \; l = 1, \ldots, s, \qquad (3.25b)$$
$$\sum_{j=1}^{k} a_{ljL} + \sum_{j=1}^{k} a_{ljQ} + \sum_{h<m} a_{lhm} = 1, \quad l = 1, \ldots, s, \qquad (3.25c)$$
$$d_{2i} - f_{li} \ge 0, \quad i \in G_2, \; l = 1, \ldots, s, \qquad (3.25d)$$
$$e_{li} - e_{pi} + U\delta_{li} \le U, \quad i \in G_1, \; l = 1, \ldots, s, \; p = 1, \ldots, s \; (p \ne l), \qquad (3.25e)$$
$$e_{li} - d_{1i} + U\delta_{li} \le U, \quad i \in G_1, \; l = 1, \ldots, s, \qquad (3.25f)$$
$$\sum_{l=1}^{s} \delta_{li} = 1, \quad i \in G_1, \qquad (3.25g)$$
where $c_l$, $a_{ljL}$, $a_{ljQ}$, $a_{lhm}$ are free $(l = 1, \ldots, s, \; j = 1, \ldots, k)$, $d_{ri} \ge 0$ $(r = 1, 2, \; i = 1, \ldots, n_r)$, $e_{li} \ge 0$, $\delta_{li} \in \{0,1\}$ $(l = 1, \ldots, s, \; i = 1, \ldots, n_1)$, and $f_{li} \ge 0$ $(l = 1, \ldots, s, \; i = 1, \ldots, n_2)$. The interpretation of this model is the same as that of the piecewise-linear MSD model, and the classification rule is the same as that of the QSMCA model.

The construction of the piecewise QSMSD and QSMCA models for the case in which group 1 lies in the convex region and group 2 in the nonconvex region is equally valid. Despite the complexity of these models (especially for very large datasets), their advantage is that an optimal solution can be reached with fewer arcs than the number of linear segments otherwise required. Their disadvantage remains the need to solve each model twice: once with group 1 convex and once with group 2 convex. The following quadratic specific structure models offer a way around these problems, accelerating the search for solutions and handling problems of larger size.

3.2. The Quadratic Specific Structure Models (QSS)

The quadratic specific structure models are based on the use of nonlinear separation. The following figure illustrates a particular case of the QSS models.

In Figure 4, the points are separated using two curves φ1 and φ2. The circles are correctly classified by φ1 or by φ2, and the triangles are correctly classified by both φ1 and φ2. As for the SPS and GSPS models, many tree-like structures are possible for the QSS models. Based on this idea, the quadratic SPS and quadratic GSPS models are proposed.
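The union logic of Figure 4 can be made concrete with a small sketch; the two curves and cutoffs below are invented for illustration and are not those of the figure:

```python
# Illustrative stand-ins for the two separating curves: a point is "accepted"
# by a curve when its quadratic score falls below that curve's cutoff.
phi1 = lambda x, y: x * x + y <= 2.0
phi2 = lambda x, y: (x - 3.0) ** 2 + y <= 2.0

def in_group1(x, y):
    """Circles (group 1) are well classified by phi1 or phi2, so the group-1
    region is the union of the two acceptance regions; triangles (group 2)
    must be rejected by both curves."""
    return phi1(x, y) or phi2(x, y)
```

This union of acceptance regions is what lets the QSS models separate a nonconvex group-2 region with only two quadratic surfaces.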

3.2.1. The Quadratic SPS Model (QSPS)

Similar to the piecewise QSMSD and QSMCA models, the first constraints of the SPS model are replaced by constraints that are linear in the coefficients but nonlinear (quadratic) in the attributes. Writing the quadratic score of observation x_i on surface d as

f_d(x_i) = Σ_{j=1}^{k} x_{ij} a_{dj}^{L} + Σ_{j=1}^{k} x_{ij}^{2} a_{dj}^{Q} + Σ_{j<m} x_{ij} x_{im} a_{djm},

the QSPS model (with D = 3) is the following:

minimize  Σ_{i=1}^{n} δ_i    (3.26)

subject to

f_d(x_i) − M δ_{di} ≤ c_d − ε,   i ∈ G1, d = 1, 2, 3,    (3.26a)
f_d(x_i) + M δ_{di} ≥ c_d + ε,   i ∈ G2, d = 1, 2, 3,    (3.26b)
M(s_{l1} + s_{l2}) + M δ_i ≥ δ_{1i} + δ_{2i} + δ_{3i} − 2,   i ∈ G1,    (3.26c)
M(s_{l1} + s_{l2}) + M δ_i ≥ δ_{1i} + δ_{2i} + δ_{3i},   i ∈ G2,    (3.26d)
M(2 − s_{l1} − s_{l2}) + M δ_i ≥ δ_{1i} + δ_{2i} + δ_{3i},   i ∈ G1,    (3.26e)
M(2 − s_{l1} − s_{l2}) + M δ_i ≥ δ_{1i} + δ_{2i} + δ_{3i} − 2,   i ∈ G2,    (3.26f)
M(1 + s_{l1} − s_{l2}) + M δ_i ≥ δ_{1i} − M μ_i,   i ∈ G1,    (3.26g)
M(1 + s_{l1} − s_{l2}) + M δ_i ≥ δ_{2i} + δ_{3i} − M(1 − μ_i),   i ∈ G1,    (3.26h)
M(1 + s_{l1} − s_{l2}) + M δ_i ≥ δ_{1i},   i ∈ G2,    (3.26i)
M(1 + s_{l1} − s_{l2}) + M δ_i ≥ δ_{2i} + δ_{3i} − 1,   i ∈ G2,    (3.26j)
M(1 − s_{l1} + s_{l2}) + M δ_i ≥ δ_{1i},   i ∈ G1,    (3.26k)
M(1 − s_{l1} + s_{l2}) + M δ_i ≥ δ_{2i} + δ_{3i} − 1,   i ∈ G1,    (3.26l)
M(1 − s_{l1} + s_{l2}) + M δ_i ≥ δ_{1i} − M μ_i,   i ∈ G2,    (3.26m)
M(1 − s_{l1} + s_{l2}) + M δ_i ≥ δ_{2i} + δ_{3i} − M(1 − μ_i),   i ∈ G2,    (3.26n)
Σ_{j=1}^{k} a_{dj}^{L} + Σ_{j=1}^{k} a_{dj}^{Q} + Σ_{j<m} a_{djm} = 1,   d = 1, …, D,    (3.26o)

where δ_i ∈ {0, 1} (i ∈ G1 ∪ G2), δ_{di} ∈ {0, 1} (i ∈ G1 ∪ G2, d = 1, 2, 3), μ_i ∈ {0, 1} (i ∈ G1 ∪ G2), s_{lt} ∈ {0, 1} (t = 1, 2), and a_{dj}^{L}, a_{dj}^{Q}, a_{djm}, and c_d are free (j, m = 1, …, k).

3.2.2. The Quadratic GSPS (QGSPS)

Replacing the first constraints of the GSPS model with constraints that, as in the QSPS model, are linear in the coefficients but nonlinear in the attributes, we obtain the following QGSPS model:

minimize  Σ_{i=1}^{n} δ_{iD}    (3.27)

subject to

Σ_{j=1}^{k} x_{ij} a_{dj}^{L} + Σ_{j=1}^{k} x_{ij}^{2} a_{dj}^{Q} + Σ_{j<m} x_{ij} x_{im} a_{djm} − M(Σ_{d′=1}^{d−1} v_{id′} + δ_{id}) ≤ c_d − ε,   i ∈ G1, d = 1, …, D,    (3.27a)

Σ_{j=1}^{k} x_{ij} a_{dj}^{L} + Σ_{j=1}^{k} x_{ij}^{2} a_{dj}^{Q} + Σ_{j<m} x_{ij} x_{im} a_{djm} + M(Σ_{d′=1}^{d−1} v_{id′} + δ_{id}) ≥ c_d + ε,   i ∈ G2, d = 1, …, D,    (3.27b)

μ_d − δ_{id} ≥ 0,   i ∈ G1, d = 1, …, D − 1,    (3.27c)
1 − μ_d − δ_{id} ≥ 0,   i ∈ G2, d = 1, …, D − 1,    (3.27d)
v_{id} − μ_d ≤ 0,   i ∈ G1, d = 1, …, D − 1,    (3.27e)
v_{id} − (1 − μ_d) ≤ 0,   i ∈ G2, d = 1, …, D − 1,    (3.27f)
v_{id} ≥ 1 − δ_{id},   i ∈ G1 ∪ G2, d = 1, …, D − 1,    (3.27g)
Σ_{j=1}^{k} a_{dj}^{L} + Σ_{j=1}^{k} a_{dj}^{Q} + Σ_{j<m} a_{djm} = 1,   d = 1, …, D,    (3.27h)

where δ_{id} ∈ {0, 1} (i ∈ G1 ∪ G2, d = 1, …, D), μ_d ∈ {0, 1} (d = 1, …, D − 1), 0 ≤ v_{id} ≤ 1 (i ∈ G1 ∪ G2, d = 1, …, D − 1), and a_{dj}^{L}, a_{dj}^{Q}, a_{djm}, and c_d are free (j, m = 1, …, k; d = 1, …, D).
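The big-M terms in constraints such as (3.27a) and (3.27b) act as standard on/off switches: when the associated 0-1 indicator equals 1, the constraint holds for any coefficients. A small numeric illustration in Python, using M = 1000 and ε = 0.005 as chosen in the experiments of Section 4.2 (the function name is ours):

```python
M, EPS = 1000.0, 0.005

def g1_constraint_satisfied(score, cutoff, indicator):
    """A (3.27a)-style group-1 constraint: score - M*indicator <= cutoff - eps.
    Switching the 0-1 indicator on makes the constraint hold for any score,
    effectively deactivating it."""
    return score - M * indicator <= cutoff - EPS

active = g1_constraint_satisfied(5.0, 1.0, 0)   # violated: 5.0 > 1.0 - 0.005
relaxed = g1_constraint_satisfied(5.0, 1.0, 1)  # satisfied: -995.0 <= 0.995
```

This is why the choice of M matters in practice: too small an M fails to deactivate constraints, while a very large M can cause the numerical instability discussed in the introduction.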

As mentioned above, the cross-product terms can be excluded from the quadratic models if the attributes are uncorrelated, and other types of nonlinear functions are possible.

4. A Comparative Study

4.1. The Datasets

In this study we choose four datasets.

(i) The first dataset (D1) is data presented by Johnson and Wichern [24] and used by Glen [11], who was trying to apply new approaches to the problem of variable selection using an LP model. This dataset consists of 46 firms (21 bankrupt firms and 25 non-bankrupt firms). The four variables measured were the following financial ratios: cash flow to total debt, net income to total assets, current assets to current liabilities, and current assets to net sales.
(ii) The second dataset (D2) is a Tunisian dataset concerning 62 breast tumors. Five variables characterize these tumors: four protein expression scores (EGFR, Her2, Her3, and estrogens) and the tumor size in cm. The tumors are divided into two groups according to the SBR grade (grades II and III), which reflects the advancement state of the cancer (source: Centre of Biotechnology of Sfax).
(iii) The third dataset is a Japanese dataset (D3) containing 100 Japanese banks divided into two groups of 50. Seven financial ratios (return on total assets, equity to total assets, operating costs to profits, return on domestic assets, bad loan ratio, loss ratio on bad loans, and return on equity) characterize this data [25].
(iv) The fourth dataset is the Wisconsin Breast Cancer data (D4), consisting of 683 patients screened for breast cancer and divided into two groups: 444 benign cases and 239 malignant tumors. Nine attributes characterize this data (clump thickness, uniformity of cell size, uniformity of cell shape, marginal adhesion, single epithelial cell size, bare nuclei, bland chromatin, normal nucleoli, and mitoses).

The objective is to discriminate between the groups of each dataset using the various methods cited above.

4.2. The Results

Different studies have shown that the reliability of the LDF method depends on the verification of certain hypotheses such as the normality of the data and the equality of the variance-covariance matrices. The results obtained from testing these hypotheses in our datasets are shown in Table 1.

Dataset    Normality       Equality of the variance-covariance matrices (Σ1 = Σ2)
D1         Not verified    Verified
D2         Not verified    Not verified
D3         Not verified    Verified
D4         Not verified    Not verified


The computer program AMOS 4 is used to test normality. To test the equality of the variance-covariance matrices and to determine the classification rates, the SPSS program is used. According to Table 1, the normality hypothesis is not verified for any dataset, while the equality of the variance-covariance matrices is verified for the D1 and D3 datasets but not for the second and fourth datasets. The results of the statistical approaches are obtained using the SPSS program. The SVM-based approach is solved with the WinSVM package. The experiments are conducted on an Intel(R) Celeron(R) M processor at 1500 MHz in a C/C++ environment. The various MPs were solved with the CPLEX 10.0 package. For the experiments, we chose M = 1000 and ε = Δ = 0.005. Microsoft Excel is used to determine the apparent hit rates (proportion of observations classified correctly), the Leave-One-Out (LOO) hit rates, and the holdout sample hit rates, which are the performance measures compared between the models. In order to evaluate the performance of the different approaches, a Leave-One-Out (LOO) procedure is used for the first three datasets. The advantage of this procedure is that it overcomes the bias of the apparent hit rates. The LOO hit rate is calculated by omitting each observation in turn from the training sample and using the remaining observations to generate a discriminant function, which is then used to classify the omitted observation. Although the computational efficiency of this procedure can be improved in statistical discriminant analysis, it is not practical in MP analysis unless only a relatively small number of observations is included. For this reason, the LOO hit rate was not used for the fourth dataset. The performance of the different MP methods on this dataset (D4) is instead addressed using the "split-half" technique.
The large number of observations available in this dataset makes it possible to adopt this latter approach by partitioning the complete set of observations (683) into training and holdout samples. The training sample of dataset D4 consisted of a random sample of 73% of the observations in each group, with 340 observations in group 1 and 160 in group 2 (500 observations in total). The remaining 27% of the observations (104 group 1 observations and 79 group 2 observations) formed the holdout sample. To evaluate the performance of the various approaches, the training sample was used to generate classification models using the different methods, and these models were then used to determine the holdout sample hit rates. The performance of LDF on this dataset (D4) was evaluated in the same way.

Furthermore, the "split-half" technique is also employed for the first three datasets in order to evaluate the performance of the SVM-based approach. As with dataset D4, the three datasets D1, D2, and D3 are partitioned into training and holdout samples. The training sample of the first dataset contains 24 observations (11 in group 1 and 13 in group 2), and its holdout sample contains 22 observations (10 in group 1 and 12 in group 2). For the second dataset, the training sample contains 45 observations (15 in group 1 and 30 in group 2); the remaining 17 observations (7 in group 1 and 10 in group 2) form the holdout sample. The third dataset is partitioned into a training sample of 70 observations (35 in each group) and a holdout sample of 30 observations.
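The group-wise random partitioning described above can be sketched as follows (a hypothetical helper; the D4 sample sizes from the text are used as an example):

```python
import random

def split_sample(indices, n_train, seed=0):
    """Randomly partition one group's observation indices into a training
    part of size n_train and a holdout part (the remainder)."""
    rng = random.Random(seed)
    shuffled = list(indices)
    rng.shuffle(shuffled)
    return shuffled[:n_train], shuffled[n_train:]

# D4 proportions: 444 group-1 and 239 group-2 observations, roughly 73% of
# each group (340 and 160 observations, respectively) kept for training.
g1_train, g1_hold = split_sample(range(444), 340)
g2_train, g2_hold = split_sample(range(444, 683), 160)
```

Splitting each group separately, as done here, preserves the group proportions in both the training and holdout samples.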

In this study, the complete set of observations of each dataset was first used as the training sample, giving the apparent hit rate, in order to demonstrate the computational feasibility of the different approaches. The "split-half" and LOO procedures then make it possible to assess the performance of the classification models generated by the different methods.
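As a sketch of the LOO procedure, the loop below omits each observation in turn and refits before classifying it. A simple nearest-centroid rule stands in for the discriminant-function estimation; this stand-in is an assumption for illustration only:

```python
def nearest_centroid_fit(X, y):
    """Group means, standing in for estimating a discriminant function."""
    groups = {}
    for xi, yi in zip(X, y):
        groups.setdefault(yi, []).append(xi)
    return {g: [sum(col) / len(rows) for col in zip(*rows)]
            for g, rows in groups.items()}

def nearest_centroid_predict(centroids, x):
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda g: dist2(centroids[g], x))

def loo_hit_rate(X, y):
    """Omit each observation in turn, fit on the rest, classify the omitted one."""
    hits = 0
    for i in range(len(X)):
        centroids = nearest_centroid_fit(X[:i] + X[i + 1:], y[:i] + y[i + 1:])
        hits += nearest_centroid_predict(centroids, X[i]) == y[i]
    return hits / len(X)
```

For an MP model, each pass of this loop means re-solving a mixed-integer program, which is why LOO is impractical for the 683-observation dataset D4.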

4.2.1. The Results of the Linear Programming Models

The results of the correct classification rates (apparent rates) using MCA and MSD methods with various normalization constraints are presented in Table 2.

                                      D1                  D2                  D3                  D4
Normalization                         (n1 = 21, n2 = 25,  (n1 = 22, n2 = 40,  (n1 = 50, n2 = 50,  (n1 = 444, n2 = 239,
constraint                             n = 46)             n = 62)             n = 100)            n = 683)
                                      MCA       MSD       MCA        MSD       MCA     MSD        MCA       MSD

(N1) Σ_{j=1}^{k} a_j + c = 1          91,3 (4)  84,78 (7) 83,87 (10) 64,52 (22) 96 (4) 86 (14)    97,2 (19) 94,7 (36)
(N2) Σ_{j=1}^{k} a_j = 1              91,3 (4)  89,1 (5)  83,87 (10) 66,13 (21) 96 (4) 91 (9)     97,2 (19) 96,6 (23)
(N3) c = ±1                           91,3 (4)  89,1 (5)  83,87 (10) 72,6 (17)  96 (4) 91 (9)     97,2 (19) 96,6 (23)
(N4) invariance under origin shift    91,3 (4)  89,1 (5)  83,87 (10) 75,8 (15)  96 (4) 91 (9)     97,2 (19) 96,6 (23)

The values between parentheses are the numbers of misclassified observations.
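The hit rates and the parenthesized misclassification counts in these tables are related by a single formula, sketched below:

```python
def apparent_hit_rate(n_misclassified, n_total):
    """Proportion of observations classified correctly, as a percentage."""
    return 100.0 * (n_total - n_misclassified) / n_total

# e.g. 4 misclassified observations out of the 46 firms of dataset D1
rate = apparent_hit_rate(4, 46)  # about 91,3 to one decimal place
```

The same formula applies to the LOO and holdout hit rates, with the counts taken over the left-out or holdout observations instead.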

According to this table, the MCA model performs better than the MSD model under the different normalization constraints used. The best classification rates for the MSD model are obtained using constraints (N3) and (N4), except for D1 and D3, where the difference between the three normalization constraints (N2), (N3), and (N4) is not significant. The classification rates for dataset D2 differ across these constraints. This may be due to the nature of the data, to the unequal group sizes, or to the fact that the model with the (N2) normalization generates a discriminant function whose constant term is exactly zero but also excludes solutions in which the variable coefficients sum to zero, and should rather be solved with both positive and negative normalization constants [4, 11]. However, the performance of the MCA model remains unchanged under the different normalization constraints. For each of the two methods using the normalization constraint (N4), the LOO hit rates for the three datasets D1, D2, and D3 and the holdout sample hit rates for dataset D4 are presented in Table 3.


        D1 LOO     D2 LOO      D3 LOO     D4 holdout
        hit rate   hit rate    hit rate   hit rate

MSD     89,1 (5)   66,13 (21)  84 (16)    95,6 (8)
MCA     89,1 (5)   70,97 (18)  84 (16)    98,36 (3)

From Table 3, we can conclude that the difference between the MSD and MCA models is not significant for the first and third datasets. However, the MCA model performs better than the MSD model for the second and fourth datasets. Furthermore, the computational time of the MCA model is lower than that of the MSD model, especially for the fourth dataset: using the complete set of observations, the MSD model was solved in less than 7 seconds, while the MCA model required less than 4 seconds to obtain the estimated coefficients of the discriminant function. For the other datasets, the difference in solution time between the two models is not significant (less than 2 seconds).

On the other hand, to solve the RS models, two cases are considered: in the first, c1 and c2 take the values 0 and 1, respectively (Case 1); in the second, the cutoff values c1 and c2 are treated as decision variables (Case 2). The RS model for the complete set of observations of dataset D4 was solved in 3 seconds; its computational time for the other datasets is less than 2 seconds. The apparent and LOO hit rates for the discriminant functions generated by the RS models are shown in Table 4.

                              D1                   D2                    D3                   D4
                              Apparent  LOO        Apparent  LOO         Apparent  LOO        Apparent   Holdout
                              hit rate  hit rate   hit rate  hit rate    hit rate  hit rate   hit rate   hit rate

c1 = 0 and c2 = 1             89,1 (5)  86,9 (6)   71 (18)   62,9 (23)   94 (6)    83 (17)    96,2 (26)  96,7 (6)
c1 and c2 decision variables  91,3 (4)  89,1 (5)   79 (13)   70,97 (18)  96 (4)    85 (15)    97,36 (18) 98,9 (2)

The values in parentheses are the numbers of misclassified observations.

The apparent and LOO hit rates of the RS model improve noticeably from Case 1 to Case 2, particularly for the second and fourth datasets. For D1 and D3, the difference between the numbers of misclassified observations in the two cases is marginal; only one or two additional observations are correctly classified. However, for D2 and D4, there is a clear difference. Thus, when normality and/or equality of the variance-covariance matrices are not verified, it is most appropriate to treat the cutoff values as decision variables. The results of the three combined models are given in Table 5.

2nd-step    D1 (n1 = 21, n2 = 25, n = 46)           D2 (n1 = 22, n2 = 40, n = 62)            D3 (n1 = 50, n2 = 50, n = 100)
weighting   MC1                 MC2                 MC1                  MC2                  MC1              MC2
model       Apparent  LOO       Apparent  LOO       Apparent   LOO       Apparent   LOO       Apparent  LOO   Apparent  LOO

W-MSD       86,9 (6)  84,78 (7) 86,9 (6)  84,78 (7) 64,5 (22)  62,9 (23) 62,9 (23)  59,68 (25) 93 (7)   83 (17) 94 (6)  85 (15)
W-RS        91,3 (4)  89,1 (5)  91,3 (4)  89,1 (5)  72,6 (17)  66,13 (21) 69,35 (19) 64,5 (22) 94 (6)   84 (16) 94 (6)  85 (15)

The MSD weighting model (W-MSD) and the RS weighting model (W-RS) are used in the second stage to solve the MC1 and MC2 models. The results show that the choice of the model used in the second stage affects the correct classification rates: these rates are higher when a W-RS model is used in the second stage. The difference between the models is not very significant for the first and third datasets, for which equality of the variance-covariance matrices is verified. However, for dataset D2, the MC1 model, which combines the LDF, LPM, and RS models, performs better than the MC2 model, which combines the LDF, MSD, and RS models. In fact, the LPM model used in MC1 has the advantage of forcing the observations' classification scores to cluster around the mean scores of their own groups. Applying the MC1 and MC2 models requires considerable computational effort: to determine the classification rate, each model used in the combined approach must be solved separately. The computational time therefore becomes important, exceeding 10 seconds for dataset D4, for example. For this reason, such a method may not be beneficial when the dataset is sufficiently large.

The results of the various models for the four datasets are presented in Table 6.

                      D1                   D2                    D3                   D4
                      (n1 = 21, n2 = 25,   (n1 = 22, n2 = 40,    (n1 = 50, n2 = 50,   (n1 = 444, n2 = 239,
                       n = 46)              n = 62)               n = 100)             n = 683)
                      Σ1 = Σ2              Σ1 ≠ Σ2               Σ1 = Σ2              Σ1 ≠ Σ2
                      Apparent  LOO        Apparent   LOO        Apparent  LOO        Apparent    Holdout
                      hit rate  hit rate   hit rate   hit rate   hit rate  hit rate   (n = 683)   (n = 183)

LDF                   89,1 (5)  89,1 (5)   74,2 (16)  66,12 (21) 91 (9)    88 (12)    96,3 (25)   99,45 (1)
LG                    91,3 (4)  89,1 (5)   74,2 (16)  66,12 (21) 93 (7)    88 (12)    96,9 (21)   98,9 (2)
MSD                   89,1 (5)  89,1 (5)   75,8 (15)  66,12 (21) 91 (9)    84 (16)    96,6 (23)   95,6 (8)
RS                    91,3 (4)  89,1 (5)   79 (13)    70,97 (18) 96 (4)    85 (15)    97,36 (18)  98,9 (2)
MCA                   91,3 (4)  89,1 (5)   83,87 (10) 70,97 (18) 96 (4)    84 (16)    97,2 (19)   98,36 (3)
MIPEDEA-DA            91,3 (4)  89,1 (5)   85,4 (9)   75,8 (15)  96 (4)    91 (9)     97,2 (19)   98,9 (2)
LPM                   89,1 (5)  89,1 (5)   74,2 (16)  66,12 (21) 93 (7)    84 (16)    96,6 (23)   96,7 (6)
MC1 (LDF, LPM, RS)    91,3 (4)  89,1 (5)   72,5 (17)  66,13 (21) 94 (6)    85 (15)    96,6 (23)   97,27 (5)
MC2 (LDF, MSD, RS)    91,3 (4)  89,1 (5)   69,35 (19) 64,5 (22)  94 (6)    84 (16)    96,3 (25)   96,7 (6)

The values in parentheses are the numbers of misclassified observations.

Table 6 shows that the correct classification rates (apparent hit rates) obtained by the MCA, RS, and MIPEDEA-DA models are superior to those obtained by the other models, especially when the normality and equality of the variance-covariance matrices hypotheses are violated. The two combined methods, MC1 and MC2, give similar results for the first dataset, while for the other datasets MC1 performs better than MC2. It must be noted that the performance of the combined method can be affected by the choice of the procedures used within it. Furthermore, the difference between these models is significant, especially for dataset D2. In terms of computational time, the resolution of the statistical methods LDF and LG using the complete dataset takes less than one second, which is faster than the resolution of the other MP models.

On the other hand, it is important to note that the correct classification rate of the RS model may be improved by selecting the most appropriate cutoff value c. This cutoff value can be obtained by solving an LP problem in the second stage using a variety of objective functions, such as MIP or MSD, instead of simply using the cutoff value (c1 + c2)/2 [19]. In fact, for the third dataset D3, the apparent hit rate found by Glen [14] using the RS model is 95%, which is marginally below the apparent hit rate of 96% found in our study. Glen [14] used the 0 and 1 cutoff values in the first stage and the MSD model in the second stage of the RS model. We can therefore conclude that the RS model performs best when the cutoff values are treated as decision variables and the cutoff value (c1 + c2)/2 is simply used in the second stage. Consequently, no binary variables are needed, unlike the case in which MSD or MIP models are applied in the second stage of the RS model. This result is interesting in the sense that such a model is very easy to solve and does not require much computational time (in general, less than 3 seconds). In fact, Glen [14] mentioned that the computational time for the RS model using an MIP model in the second stage, excluding the process for identifying the misclassified observations of G1 and G2, was lower than that of the MCA model; indeed, the latter model involves more binary variables than the RS model. In addition, for datasets D1 and D3, we note that the RS, MCA, and MIPEDEA-DA models produce the same apparent hit rates. However, for the second dataset, the MIPEDEA-DA model, followed by the MCA model, performs better than the other approaches.
On the other hand, the results obtained by the LOO procedure show that the MIPEDEA-DA model performs better than the other models for the second and third datasets, while for the first dataset the difference between the models is not significant. In terms of the holdout sample hit rate obtained using the classification models generated from the 73% training sample of dataset D4, the statistical method LDF performs better than the other approaches, followed by the LG, RS, and MIPEDEA-DA models.
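The simple second-stage rule recommended here, using the midpoint cutoff (c1 + c2)/2, can be sketched as follows (the direction of the inequality is an assumption for illustration):

```python
def rs_classify(score, c1, c2):
    """Second-stage RS rule with the midpoint cutoff (c1 + c2) / 2:
    scores at or below the midpoint go to group 1, others to group 2."""
    return 1 if score <= (c1 + c2) / 2.0 else 2

# With the Case 1 cutoffs c1 = 0 and c2 = 1, the midpoint is 0.5.
group = rs_classify(0.2, 0.0, 1.0)
```

Because the midpoint is a closed-form expression in c1 and c2, this second stage involves no binary variables and no additional optimization.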

4.2.2. The Result of Nonlinear MP Models

(a) Comparison of SPS and GSPS Models Using the Two Normalization Constraints N’1 and N’2
To compare the two normalization constraints N’1 and N’2, the SPS and GSPS models were solved using the first three datasets (D1, D2, and D3). The results are presented in Table 7.
According to Table 7, the models using the second normalization constraint perform better than those using the first. An important finding concerns the SPS models, which could not produce any solution for the second dataset, whereas the QSPS models perform very well, especially with the second normalization constraint. Furthermore, the GSPSN’2 model performs better than the GSPSN’1 model, especially for the first dataset. Thus, compared to the normalization (N’1) used by Better et al. [18], our proposed normalization (N’2) can produce better results. The comparison of the different models developed is discussed in the following section.

            D1 (n1 = 21; n2 = 25)    D2 (n1 = 22; n2 = 40)    D3 (n1 = 50; n2 = 50)
            D = 2      D = 3         D = 2       D = 3        D = 2      D = 3

SPSN’1                 82,6 (8)      —                                   99 (1)
SPSN’2                 100 (0)       —                                   100 (0)
QSPSN’1                100 (0)       97 (3)                              100 (0)
QSPSN’2                100 (0)       100 (0)                             100 (0)
GSPSN’1     58,7 (19)  89,1 (5)      67,7 (20)   100 (0)      74 (26)    100 (0)
GSPSN’2     100 (0)                  83,87 (10)  100 (0)      99 (1)     100 (0)
QGSPSN’1    100 (0)                  96,8 (2)    100 (0)      85 (15)    100 (0)
QGSPSN’2    100 (0)                  100 (0)                  97 (3)     100 (0)

A dash indicates that no solution was obtained; a blank cell was not solved because a 100% rate was already reached with the smaller structure (the SPS and QSPS models use the fixed three-surface structure, D = 3).

The values in parentheses are the numbers of misclassified observations.

(b) Comparison of Different Models
The results of the different models are presented in Table 8. From Table 8, the nonlinear MP models outperform the classical approaches. This may be due to the fact that the performance of the latter approaches requires the verification of some standard hypotheses. In fact, the LDF and QDF perform best when the data distribution is normal, and this hypothesis is not verified for these datasets. Although the LG model does not require such a restriction, it has not provided higher hit rates than the other approaches, especially for the second dataset. The second-order MSD model also performs worse than the other models. Furthermore, the piecewise QSMCA and QGSPSN’2 models perform better than the piecewise-linear models (MCA and MSD) for the first and second datasets; in fact, the optimal solution is reached rapidly using these models (the hit rates equal 100% using S = 2 and D = 2). For the second dataset D2, the piecewise-quadratic models (QSMCA and QSMSD) and the multihyperplane and multihypersurface models perform better than the other approaches. Moreover, the difference between these models and the standard piecewise models is not significant for dataset D3, but the piecewise QSMCA and QSMSD models reach optimality rapidly using only S = 2. Comparing the nonlinear MP models in terms of computational time, the solution time of the QGSPS model, which provides the estimated coefficients of the discriminant function, is lower than that of the GSPS model for all datasets. Using dataset D4, for example, the solution time of the QGSPS with D = 2 is 11 seconds (for D = 3, it is 21 seconds), while the resolution of the GSPS takes more than 960 seconds. For the other datasets, the solution time of the QGSPS model is less than 3 seconds.
On the other hand, when the piecewise models are solved only for the case in which group 1 lies in the convex region, the optimal solution is obtained in more than 7 seconds; otherwise, the resolution time approximately doubles. In fact, using dataset D4, the resolution of the piecewise QSMCA in the case where G1 is in the convex region required 8 seconds using three arcs (s = 3). However, to obtain the optimal solution, the model must also be solved for the case where G2 is in the convex region, and the computational time then doubles.

                    D1                    D2                    D3                   D4
                    (n1 = 21; n2 = 25)    (n1 = 22; n2 = 40)    (n1 = 50; n2 = 50)   (n1 = 444; n2 = 239)

LDF                 89,1 (5)              74,2 (16)             91 (9)               96,3 (25)
LG                  91,3 (4)              74,2 (16)             93 (7)               96,9 (21)
QDF                 76,08 (14)            72,58 (17)            85 (15)              90,8 (63)
Second-order MSD    93,47 (3)             75,8 (15)             85 (15)              90,8 (63)

                    S = 2     S = 3       S = 2      S = 3      S = 2     S = 3      S = 2       S = 3

Piecewise MCA       91,3 (4)  100 (0)     72,5 (17)  98,39 (1)  99 (1)    100 (0)    87,55 (85)
Piecewise MSD       97,8 (1)  97,8 (1)    87,1 (8)   96,8 (2)   99 (1)    100 (0)
Piecewise QSMCA     100 (0)               100 (0)               100 (0)              98,97 (7)   100 (0)
Piecewise QSMSD     97,8 (1)  100 (0)     100 (0)               100 (0)              100 (0)

                    D = 2     D = 3       D = 2      D = 3      D = 2     D = 3      D = 2       D = 3

SPSN’2              100 (0)               —                     100 (0)              98,24 (12)
QSPSN’2             100 (0)               100 (0)               100 (0)              98,82 (8)
GSPSN’2             100 (0)               83,87 (10) 100 (0)    99 (1)    100 (0)    98,82 (8)   99,7 (2)
QGSPSN’2            100 (0)               100 (0)               97 (3)    100 (0)    99,85 (1)

The values in parentheses are the numbers of misclassified observations. A dash indicates that no solution was obtained; a blank cell was not solved for that structure.