Journal of Applied Mathematics
Volume 2012 (2012), Article ID 315868, 14 pages
An Application of Monte-Carlo-Based Sensitivity Analysis on the Overlap in Discriminant Analysis
Department of Mathematics, Science and Research Branch, Islamic Azad University, Tehran, Iran
Received 22 June 2012; Revised 21 September 2012; Accepted 25 September 2012
Academic Editor: George Jaiani
Copyright © 2012 S. Razmyan and F. Hosseinzadeh Lotfi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Discriminant analysis (DA) estimates a discriminant function by minimizing group misclassifications in order to predict the group membership of newly sampled data. A major source of misclassification in DA is the overlap between groups. The uncertainty in the input variables and model parameters needs to be properly characterized for decision making. This study combines DEA-DA with a sensitivity analysis approach to assess the influence of banks’ variables on the overall variance of the overlap in DA and thereby determine which variables are most significant. A Monte-Carlo-based sensitivity analysis is used to compute the set of first-order sensitivity indices of the variables and to estimate the contribution of each uncertain variable. The results show that, for decision making, the uncertainties in the loans granted and in the various deposit variables are more significant than the uncertainties in the banks’ other variables.
1. Introduction
The classification problem of assigning observations to one of several groups plays an important role in decision making. When observations are restricted to one of two groups, the resulting binary classification has wide applicability in business environments.
Discriminant analysis (DA) is a classification method that can distinguish the group membership of a new observation. A group of observations whose memberships have already been identified is used to estimate a discriminant function by some criterion, such as the minimization of misclassification. A new sample is then classified into one of the groups based on the results obtained.
Mangasarian showed that linear programming (LP) can be used to determine separating hyperplanes; namely, when two sets of observations are linearly separable, a linear discriminant function can be obtained by LP. For sets of observations that are not necessarily linearly separable, Freed and Glover and Hand proposed LP methods for generating linear discriminant functions, using objectives such as minimization of the sum of deviations (MSD) or maximization of the minimum deviation (MMD) of misclassified observations from the separating hyperplane.
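The MSD idea can be sketched as a small linear program that minimizes the sum of deviations of misclassified observations from a separating hyperplane. The toy data, the variable names, and the normalization constraint used to exclude the trivial hyperplane are illustrative assumptions, not the exact formulations of the cited papers.

```python
import numpy as np
from scipy.optimize import linprog

# Toy linearly separable groups (illustrative data)
G1 = np.array([[0.0, 0.5], [1.0, 0.0], [0.5, 1.0]])
G2 = np.array([[4.0, 5.0], [5.0, 4.5], [5.5, 5.0]])
m, n1, n2 = G1.shape[1], len(G1), len(G2)
n = n1 + n2

# Variables: [w (m weights), c (cutoff), d (n deviations)]; minimize sum(d)
cost = np.concatenate([np.zeros(m + 1), np.ones(n)])
A_ub = np.zeros((n, m + 1 + n))
for j, x in enumerate(G1):                 # G1 side: w.x - c <= d_j
    A_ub[j, :m], A_ub[j, m], A_ub[j, m + 1 + j] = x, -1.0, -1.0
for j, x in enumerate(G2):                 # G2 side: w.x - c >= -d_j
    r = n1 + j
    A_ub[r, :m], A_ub[r, m], A_ub[r, m + 1 + r] = -x, 1.0, -1.0
b_ub = np.zeros(n)

# Normalization w.(mean2 - mean1) = 1 excludes the trivial solution w = 0
A_eq = np.zeros((1, m + 1 + n))
A_eq[0, :m] = G2.mean(axis=0) - G1.mean(axis=0)
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(None, None)] * (m + 1) + [(0, None)] * n)
w, c, d = res.x[:m], res.x[m], res.x[m + 1:]
```

For linearly separable data the optimal total deviation is zero, so every G1 score falls on or below the cutoff and every G2 score on or above it.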
Later models are based on goal programming (GP) extensions of LP that choose different criteria, such as minimizing the maximum deviation, maximizing the minimum deviation, minimizing the sum of interior deviations, minimizing the sum of deviations, minimizing the number of misclassified observations, minimizing external deviations, maximizing internal deviations, and maximizing the ratio of internal to external deviations, as well as hybrid models; each of these has both advantages and deficiencies [5–9].
In DA, LP- and other mathematical programming (MP-) based approaches are nonparametric and more flexible than statistical methods [6, 10]. Retzlaff-Roberts [11, 12] and Tofallis proposed the use of DEA ratio models for DA. Using a data envelopment analysis (DEA) additive model, Sueyoshi described a goal programming formulation of DA in which the model is more directly linked to minimizing the sum of deviations from the separating hyperplane; this method was named DEA-DA to distinguish it from other DA and DEA approaches. The original GP version of DEA-DA could not deal with negative data, so Sueyoshi extended DEA-DA to overcome this deficiency. This approach was designed to minimize the total distance of misclassified observations and was formulated as a two-stage GP problem. The number of misclassifications can, however, also serve as the measure of misclassification, with binary variables indicating whether observations are correctly or incorrectly classified. Bajgier and Hill proposed a mixed integer programming (MIP) model that included the number of misclassifications in the objective function for the two-group discriminant problem. Gehrlein and Wilson introduced MIP approaches for minimizing the number of misclassified observations in multigroup problems. Chang and Kuo proposed a procedure based on the benchmarking model of DEA to solve two-group problems. Sueyoshi reformulated DEA-DA as a MIP to minimize the total number of misclassified observations. When the overlap between two groups is not a serious problem, dropping the first stage of the two-stage MIP approach simplifies the estimation process.
Sensitivity analysis provides an understanding of how the model outputs are affected by changes in the inputs; it can therefore help to increase confidence in the model and its predictions. Sensitivity analysis can be used to decide whether the input estimates are sufficiently precise to give reliable predictions, or to find model parameters that can be eliminated.
Two classes of sensitivity analysis can be distinguished.
Local Sensitivity Analysis
This approach studies how small variations of the inputs around a given value change the value of the output. It is practical when the input-output relationship can be assumed to be linear around the baseline.
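A one-at-a-time local analysis of this kind can be sketched with central finite differences; the example function and baseline below are hypothetical.

```python
import numpy as np

def local_sensitivities(f, x0, h=1e-6):
    """Local sensitivity analysis: partial derivatives of f at a
    baseline x0, estimated one input at a time by central differences."""
    x0 = np.asarray(x0, dtype=float)
    grads = np.empty(x0.size)
    for i in range(x0.size):
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h                      # perturb only input i upward
        xm[i] -= h                      # and downward
        grads[i] = (f(xp) - f(xm)) / (2 * h)
    return grads

# Example: f(x) = x1^2 + 3*x2 around the baseline (1, 2);
# the exact partial derivatives there are 2 and 3.
g = local_sensitivities(lambda x: x[0] ** 2 + 3 * x[1], [1.0, 2.0])
```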
Global Sensitivity Analysis
This approach takes into account the whole variation range of the inputs and aims to apportion the output uncertainty among the uncertainties in the inputs. The input factors are typically described by probability distribution functions that cover their ranges of existence.
Local methods are less helpful when sensitivity analysis is used to compare the effects of various factors on the output, because in this case the relative uncertainty of each input should be weighted. A global sensitivity analysis technique thus incorporates the influence of the whole range of variation and the form of the probability density function of each input. Variance-based methods can be considered quantitative methods for global sensitivity analysis. In this study, the Sobol’ decomposition in the framework of Monte-Carlo simulation (MCS), which belongs to the family of quantitative global sensitivity analysis methods, is applied to study the effect of the variability in DA due to the uncertainty in the variables. The results of the sensitivity analysis determine which of the variables have the most dominant influence on the uncertainty in the model output.
This paper is organized as follows: Section 2 briefly introduces the DEA-DA model; Section 3 describes the sensitivity analysis based on Monte-Carlo simulation; Section 4 contains illustrative examples; and the conclusion is provided in Section 5.
2. Data Envelopment Analysis-Discriminant Analysis (DEA-DA)
The two-stage MIP approach is used in this study to describe DEA-DA. We consider two groups whose union contains all of the observations, each observation having several independent factors. It is necessary to identify the group membership of each observation before the computation. In the two-stage approach, the computation process consists of classification with overlap identification, followed by handling of the overlap. The first stage is formulated as model (2.1).
From this model, a discriminant score for group classification is obtained, together with the size of the overlap between the two groups.
Given an optimal solution of model (2.1), the original data set is classified into subsets. Observations in the first two subsets are assigned to their respective groups, because their locations are identified by model (2.1). The remaining two subsets consist of the observations that have not yet been classified in the first stage.
Stage 2. If an overlap was identified in the first stage, we reclassify all of the observations belonging to it, because the group membership of these observations is still undetermined. The second stage is formulated as model (2.3).
Here, the binary variables count the observations classified incorrectly, and the objective function minimizes the number of such misclassifications. A weight identifies the relative importance of the two groups in terms of their numbers of observations. In model (2.3), it is necessary to prescribe a large number and a small number. An additional constraint prevents certain pairs of the binary variables from occurring together.
After an optimal solution is obtained, the second stage classifies the observations in the overlap: each observation is assigned to the first or the second group according to its discriminant score relative to the cutoff. Thus, all of the observations in the overlap are classified into one of the two groups at the end of the second stage.
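The misclassification-counting idea behind the second stage can be sketched as a small mixed integer program. This toy one-dimensional model, with an illustrative normalization constraint and with a large number M and a small number eps as the model requires, follows the generic big-M construction rather than Sueyoshi's exact formulation.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy overlapping 1-D data (hypothetical scores, not the paper's bank data):
# one G1 observation lies inside G2's range and must be counted as misclassified.
G1 = np.array([[0.0], [1.0], [2.0], [8.0]])
G2 = np.array([[6.0], [7.0], [9.0], [10.0]])
m, n1, n2 = 1, len(G1), len(G2)
n = n1 + n2
M, eps = 100.0, 0.01            # the prescribed large and small numbers

# Variables: [w (m), c (1), y (n)]; y_j = 1 iff observation j is misclassified
cost = np.concatenate([np.zeros(m + 1), np.ones(n)])
rows, ub = [], []
for j, x in enumerate(G1):      # G1 correct iff w.x + eps <= c (unless y_j = 1)
    r = np.zeros(m + 1 + n); r[:m] = x; r[m] = -1.0; r[m + 1 + j] = -M
    rows.append(r); ub.append(-eps)
for j, x in enumerate(G2):      # G2 correct iff w.x - eps >= c (unless y_j = 1)
    r = np.zeros(m + 1 + n); r[:m] = -x; r[m] = 1.0; r[m + 1 + n1 + j] = -M
    rows.append(r); ub.append(-eps)
ineq = LinearConstraint(np.array(rows), ub=np.array(ub))
# Illustrative normalization excluding the trivial hyperplane w = 0
norm = np.zeros(m + 1 + n); norm[:m] = G2.mean(axis=0) - G1.mean(axis=0)
eq = LinearConstraint(norm.reshape(1, -1), lb=1.0, ub=1.0)
integrality = np.concatenate([np.zeros(m + 1), np.ones(n)])
bounds = Bounds(lb=np.concatenate([np.full(m + 1, -np.inf), np.zeros(n)]),
                ub=np.concatenate([np.full(m + 1, np.inf), np.ones(n)]))
res = milp(cost, constraints=[ineq, eq], integrality=integrality, bounds=bounds)
y = np.round(res.x[m + 1:])
```

With this data, exactly one observation (the G1 outlier at 8.0) must be misclassified, so the optimal objective value is one.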
3. Sensitivity Analysis Based on Monte-Carlo Simulation (MCS)
Sensitivity analysis was created to deal simply with uncertainties in the input variables and model parameters. The results of a sensitivity analysis can determine which of the input parameters have the most dominant influence on the uncertainty in the model output. A variance-based sensitivity analysis, which addresses the inverse problem of attributing the output variance to uncertainty in the inputs, quantifies the contribution that each input factor makes to the variance of the output quantity of interest. A global sensitivity analysis of complex numerical models can be performed by calculating variance-based importance measures of the input variables, such as the Sobol’ indices. These indices are calculated by evaluating a multidimensional integral using a Monte-Carlo technique. This approach makes it possible to analyze the influence of the different variables and of their subsets, the structure of the model function, and so forth.
It is assumed that a mathematical model with input parameters gathered in an input vector with a joint probability density function (pdf) can be represented as a model function. Because the variables are affected by several kinds of heterogeneous uncertainties that reflect imperfect knowledge of the system, it is assumed that the input variables are independent and that their probability density functions are known, even if they are not actually random variables.
The Sobol’ sensitivity method explores the multidimensional space of the unknown input parameters with a certain number of MC samples. The sensitivity indices are generated by a decomposition of the model function over the multidimensional factor space into summands of increasing dimensionality, where the constant term is the mean value of the function and the integral of each summand over any of its own variables is zero. Due to this property, the summands are orthogonal to each other.
The sensitivity index of a factor represents its fractional contribution to the variance of a given output variable. To calculate the sensitivity indices, the total variance of the model output is apportioned among all of the input factors.
By integrating the square of the decomposition and using the orthogonality of the summands, the total variance can be decomposed into partial variances of increasing order. The first-order partial variance of a factor is the variance of the conditional expectation: the variance, taken over all values of that factor, of the expectation of the output when the factor is held fixed. This is an intuitive measure of the sensitivity of the output to a factor, as it measures the amount by which the conditional expectation of the output varies with the value of the factor while averaging over the remaining factors. Following this definition of the partial variances, each sensitivity index is defined as the ratio of the corresponding partial variance to the total variance.
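In standard notation (the symbols below are chosen here for illustration; the original equations did not survive into this version), the classical Sobol' decomposition and the resulting indices read:

```latex
f(x_1,\dots,x_k) = f_0 + \sum_{i=1}^{k} f_i(x_i)
  + \sum_{1 \le i < j \le k} f_{ij}(x_i,x_j) + \dots
  + f_{1,2,\dots,k}(x_1,\dots,x_k),

V = \sum_{i} V_i + \sum_{i<j} V_{ij} + \dots + V_{1,2,\dots,k},
\qquad V_i = \operatorname{Var}_{x_i}\!\bigl( E[\,Y \mid x_i\,] \bigr),

S_i = \frac{V_i}{V}.
```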
Higher-order indices can be calculated with a similar approach. The decomposition of the total variance induces a corresponding decomposition of the sensitivity indices, whose terms over all orders sum to one.
The Sobol’ indices are usually computed with MC simulation. The mean value and the total and partial variances are estimated from samples; to estimate the partial variances, two different, independently generated sample sets are mixed.
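A minimal sketch of this estimator (the toy model, sample sizes, and function names are illustrative assumptions): two independent sample matrices are mixed column by column, and the first-order index of each factor is recovered from the covariance of the corresponding model outputs.

```python
import numpy as np

def sobol_first_order(model, k, N=100000, seed=None):
    """Estimate first-order Sobol' indices by Monte-Carlo sampling.

    Two independent sample matrices A and B are drawn; for factor i the
    mixed matrix C_i keeps column i from A and takes the rest from B,
    so that mean(f(A)*f(C_i)) - f0^2 estimates the partial variance V_i.
    """
    rng = np.random.default_rng(seed)
    A = rng.random((N, k))          # inputs assumed uniform on [0, 1]
    B = rng.random((N, k))
    fA = model(A)
    f0 = fA.mean()                  # mean value of the model output
    V = fA.var()                    # total variance
    S = np.empty(k)
    for i in range(k):
        C = B.copy()
        C[:, i] = A[:, i]           # mix the two samples in column i
        S[i] = (np.mean(fA * model(C)) - f0 ** 2) / V
    return S

# Toy model f = x1 + 2*x2 with independent uniform inputs:
# the exact first-order indices are S1 = 1/5 and S2 = 4/5.
S = sobol_first_order(lambda X: X[:, 0] + 2.0 * X[:, 1], k=2, N=200000, seed=1)
```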
4. Illustrative Examples
Classification methods are widely used in economics and finance. They are useful for classifying sectors into different groups based upon their performance and for predicting the group memberships of new firms. Most researchers have used classification methods to classify firms based upon performance assessment. DA is the classification method used in this study. The purpose of the first stage of DEA-DA is to determine whether there is an overlap between the two groups. The existence of an overlap is the main source of misclassification in DA. By identifying the overlap between the two groups, it is possible to increase the number of correctly classified observations. If there is no overlap, any DA method may produce an almost perfect classification. If there is an overlap, however, an additional computational process is needed to deal with it. There is thus a tradeoff between computational effort/time and a high level of classification capability.
Misclassification can result from an intersection between two groups. Many researchers have proposed approaches that exploit the advantage of identifying and minimizing the overlap of two groups for risk management in the classification problem [19, 20, 25, 26]. Given the importance of the banking sector for the whole economy in general and for the financial system in particular, in this section we present an application of the sensitivity analysis of the overlap to data from a commercial bank in Iran. This is illustrated numerically, in two different examples, for bank branches with more than 20 and more than 30 personnel.
If we wish to take into account the inherent randomness in the criteria, we have to bring a stochastic characterization into play. The stochastic efficiency assessment of banking branches normally requires performing a set of analyses on DMUs with a suite of variables as criteria.
First, we use the additive model to discriminate between banking branches. Most models need to examine both a DEA efficiency score and slacks, or an efficiency score measured by them, depending upon input-based or output-based measurement. The additive model aggregates input-oriented and output-oriented measures to produce the efficiency score. Consequently, the efficiency status is more easily determined by the additive model than by the radial model.
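A sketch of the additive model (Charnes et al., 1985) as a linear program, with toy data and variable names that are assumptions for illustration: a unit is efficient exactly when the maximal total slack against the variable-returns-to-scale frontier is zero.

```python
import numpy as np
from scipy.optimize import linprog

def additive_slack(X, Y, o):
    """Additive DEA model for unit o: maximize the total input and output
    slack subject to X.T @ lam + s_in = X[o], Y.T @ lam - s_out = Y[o],
    sum(lam) = 1, lam, s_in, s_out >= 0. Zero optimal slack = efficient."""
    n, m = X.shape
    _, s = Y.shape
    # Variables: [lam (n), s_in (m), s_out (s)]; linprog minimizes, so negate
    c = np.concatenate([np.zeros(n), -np.ones(m + s)])
    A_eq = np.zeros((m + s + 1, n + m + s))
    A_eq[:m, :n] = X.T
    A_eq[:m, n:n + m] = np.eye(m)          # input slacks
    A_eq[m:m + s, :n] = Y.T
    A_eq[m:m + s, n + m:] = -np.eye(s)     # output slacks
    A_eq[m + s, :n] = 1.0                  # convexity constraint
    b_eq = np.concatenate([X[o], Y[o], [1.0]])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n + m + s))
    return -res.fun                        # maximal total slack

# Toy data: 3 branches, 1 input, 1 output; branch 2 is dominated
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[2.0], [4.0], [2.0]])
vals = [additive_slack(X, Y, o) for o in range(3)]
```

Here branches 0 and 1 lie on the frontier (zero slack), while branch 2 has a total slack of one against the midpoint of the other two.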
In the two examples, the real data sets consist of 78 and 18 banking branches, respectively. This study selects 31 and 8 branches as inefficient and 47 and 10 branches as efficient, respectively, in Examples 1 and 2, as documented in Tables 1 and 2. For the classifications based on the additive model, three variables (personnel, payable interest, and non-performing loans) are considered as inputs. The output variables are loans granted, long-term deposits, current deposits, non-benefit deposits, short-term deposits, received interest, and received fees.
Then, for the sensitivity analysis in DEA-DA, each observation is modeled as a random parameter characterized by its mean value and its coefficient of variation (COV), with a zero-mean random variable generated for the MC simulation. The determination of the bank branches’ parameters carries a high degree of uncertainty, and the specification of these parameters can involve a significant degree of expert judgment. Additionally, the COV of these variables plays an important role in the variation of the efficiency. Here, the COV of all of the parameters is assumed to equal 0.05.
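The random-parameter construction described here, with each variable written as its mean times one plus the COV times a zero-mean random variable, can be sketched as follows; the mean values are hypothetical, while the COV of 0.05 matches the text.

```python
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([120.0, 45.0, 60.0])  # hypothetical branch variables
cov = 0.05                            # coefficient of variation from the text
N = 100000                            # Monte-Carlo sample size

# x = mean * (1 + cov * xi) with xi ~ N(0, 1): E[x] = mean and
# std(x) / E[x] = cov for every variable
xi = rng.standard_normal((N, mean.size))
X = mean * (1.0 + cov * xi)
```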
To compute the sensitivity indices, the Sobol’ sampling scheme is used. Sobol’ sampling vectors are quasi-random, that is, deterministic low-discrepancy sequences of points with no intrinsic random properties. In this study, the sensitivity analysis is applied to assess the influence of the banks’ variables on the overlap.
After one hundred estimates with a sample size of 5000, convergence was observed in the first-order Sobol’ indices derived by Sobol’ sampling of the uniform criteria spaces for the different banks’ variables. The sensitivity indices are depicted in Figures 1 and 2, which compare the first-order indices of the banks’ variables. The total fraction of the variance captured by the first-order terms is approximately 99%, which indicates that, for this problem, higher-order contributions to the Sobol’ series are relatively small. The overall variance in the banks’ efficiency is affected by the variances of each of the random variables. Figure 1 indicates that 54% and 33% of the overall variance of the overlap in DEA-DA is attributable to the variance in loans granted and in the different deposits, respectively, while the personnel, received interest, received fee, and non-performing loans variables have little effect. Figure 2 likewise indicates that the uncertainties in loans granted and in the different deposits are the dominant contributors to the overlap in DEA-DA.
5. Conclusion
Due to the inherent complexity and randomness of the data in DEA, and to problems involving unpredictable or stochastic variables, a probabilistic analysis may be the most rational method of analysis. The results of such a probabilistic approach open the door to an appropriate estimation of the deciding variables in DEA. For the overlap in DA, the analytical results show that the loans granted and the different deposit variables are the main sources of uncertainty, while the other variables have a relatively small effect. The main advantage of the sensitivity analysis approach used here is that it provides a quantified evaluation of the influence of the individual variables in DA, and its results may be used for decision making.
The authors would like to thank the anonymous reviewers for comments which helped to improve the paper.
- T. Sueyoshi and M. Goto, “Can R&D expenditure avoid corporate bankruptcy? Comparison between Japanese machinery and electric equipment industries using DEA-discriminant analysis,” European Journal of Operational Research, vol. 196, no. 1, pp. 289–311, 2009.
- O. L. Mangasarian, “Linear and nonlinear separation of patterns by linear programming,” Operations Research, vol. 13, pp. 444–452, 1965.
- N. Freed and F. Glover, “A linear programming approach to the discriminant problem,” Decision Sciences, vol. 12, pp. 68–74, 1981.
- D. J. Hand, Discrimination and Classification, John Wiley & Sons, Chichester, UK, 1981.
- N. Freed and F. Glover, “Simple but powerful goal programming models for discriminant problems,” European Journal of Operational Research, vol. 7, no. 1, pp. 44–60, 1981.
- N. Freed and F. Glover, “Evaluating alternative linear programming models to solve the two-group discriminant problem,” Decision Sciences, vol. 17, pp. 151–162, 1986.
- W. J. Banks and P. L. Abad, “An efficient optimal solution algorithm for the classification problem,” Decision Sciences, vol. 22, pp. 1008–1023, 1991.
- F. Glover, “Improved linear programming models for discriminant analysis,” Decision Sciences, vol. 21, pp. 771–785, 1990.
- D. L. Retzlaff-Roberts, “A ratio model for discriminant analysis using linear programming,” European Journal of Operational Research, vol. 94, no. 1, pp. 112–121, 1996.
- E. A. Joachimsthaler and E. A. Stain, “Four approaches to the classification problem in discriminant analysis: an experimental study,” Decision Sciences, vol. 19, pp. 322–333, 1988.
- D. L. Retzlaff-Roberts, “Relating discriminant analysis and data envelopment analysis to one another,” Computers and Operations Research, vol. 23, no. 4, pp. 311–322, 1996.
- D. L. Retzlaff-Roberts, “A ratio model for discriminant analysis using linear programming,” European Journal of Operational Research, vol. 94, pp. 112–121, 1996.
- C. Tofallis, “Improving discernment in DEA using profiling,” Omega, vol. 24, no. 3, pp. 361–364, 1996.
- T. Sueyoshi, “DEA-discriminant analysis in the view of goal programming,” European Journal of Operational Research, vol. 115, no. 3, pp. 564–582, 1999.
- T. Sueyoshi, “Extended DEA-discriminant analysis,” European Journal of Operational Research, vol. 131, no. 2, pp. 324–351, 2001.
- S. M. Bajgier and A. V. Hill, “An experimental comparison of statistical and linear programming approaches to the discriminant problem,” Decision Sciences, vol. 13, pp. 604–618, 1982.
- W. V. Gehrlein, “General mathematical programming formulations for the statistical classification problem,” Operations Research Letters, vol. 5, no. 6, pp. 299–304, 1986.
- J. M. Wilson, “Integer programming formulations of statistical classification problems,” Omega, vol. 24, no. 6, pp. 681–688, 1996.
- D. S. Chang and Y. C. Kuo, “An approach for the two-group discriminant analysis: an application of DEA,” Mathematical and Computer Modelling, vol. 47, no. 9-10, pp. 970–981, 2008.
- T. Sueyoshi, “Mixed integer programming approach of extended DEA-discriminant analysis,” European Journal of Operational Research, vol. 152, no. 1, pp. 45–55, 2004.
- T. Sueyoshi, “Financial ratio analysis of electric power industry,” Asia-Pacific Journal of Operational Research, vol. 22, pp. 349–376, 2005.
- A. Saltelli, K. Chan, and E. M. Scott, Sensitivity Analysis, Wiley Series in Probability and Statistics, John Wiley & Sons, Chichester, UK, 2000.
- A. Saltelli, M. Ratto, T. Andres, F. Campolongo, J. Cariboni, and F. Gatelli, Global Sensitivity Analysis: The Primer, John Wiley & Sons, Chichester, UK, 2008.
- I. M. Sobol', “Sensitivity estimates for nonlinear mathematical models,” Mathematical Modeling and Computational Experiment, vol. 1, no. 4, pp. 496–515, 1993.
- J. J. Glen, “A comparison of standard and two-stage mathematical programming discriminant analysis methods,” European Journal of Operational Research, vol. 171, no. 2, pp. 496–515, 2006.
- P. C. Pendharkar, “A hybrid radial basis function and data envelopment analysis neural network for classification,” Computers & Operations Research, vol. 38, no. 1, pp. 256–266, 2011.
- A. Charnes, W. W. Cooper, B. Golany, L. Seiford, and J. Stutz, “Foundations of data envelopment analysis for Pareto-Koopmans efficient empirical production functions,” Journal of Econometrics, vol. 30, no. 1-2, pp. 91–107, 1985.
- A. Yazdani and T. Takada, “Probabilistic study of the influence of ground motion variables on response spectra,” Structural Engineering & Mechanics, vol. 39, 2011.