Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 423524, 24 pages
http://dx.doi.org/10.1155/2012/423524
Research Article

A Two-Stage DEA to Analyze the Effect of Entrance Deregulation on Iranian Insurers: A Robust Approach

Department of Industrial Engineering, Iran University of Science and Technology, Narmak, Tehran 1684613114, Iran

Received 17 February 2012; Revised 5 May 2012; Accepted 29 May 2012

Academic Editor: Yi-Chung Hu

Copyright © 2012 Seyed Gholamreza Jalali Naini and Hamid Reza Nouralizadeh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We use a two-stage data envelopment analysis (DEA) model to analyze the effects of entrance deregulation on efficiency in the Iranian insurance market. In the first stage, we propose a robust optimization approach to overcome the sensitivity of DEA results to uncertainty in the output parameters. The efficiency of each ongoing insurer is estimated with the proposed robust DEA model, and the insurers are then ranked by their relative efficiency scores over the eight-year period from 2003 to 2010. In the second stage, a comprehensive statistical analysis using generalized estimating equations (GEE) is conducted to examine other factors that could affect the efficiency scores. The first-stage DEA results indicate a decline in efficiency over the entrance deregulation period, while further statistical analysis confirms that the disregard of solvency, a widespread paradigm among state-owned companies, is one of the main drivers of efficiency in the Iranian insurance market.

1. Introduction

In line with Iran's Third Development Plan of 1999, the Iranian parliament approved the establishment of private insurance companies in August 2002 with the aim of improving efficiency, increasing consumer choice through increased rivalry, and enhancing transparency in the market [1]. The literature differs on the impact of deregulation on the efficiency of insurance companies. Rees et al. [2] reported small improvements in the German and British life insurance markets after deregulation, and Hussels and Ward likewise could not find any strong evidence of deregulation effects on insurance business efficiency between 1992 and 2002 [3]. Wang and her colleague showed the whole market becoming more competitive as market leaders lost share after entrance deregulation in life insurance [4]. Jeng and Lai's study showed that deregulation and liberalization, that is, the reduction of governmental or other barriers to the market, do not have a major adverse impact on the technical, cost, and revenue efficiency of existing firms in the long run; the dominance of existing firms declined but persisted throughout the sample period [5]. Cummins and Rubio-Misas found a positive effect of deregulation on the efficiency of Spanish insurance companies [6]. Boonyasai et al. found similar evidence in some Southeast Asian countries stemming from joint deregulation and liberalization policies [7]. Based on these studies, one can observe that developed insurance markets have in some cases improved slightly after deregulation, while no effect, or even a negative effect, is also possible.

In this paper, we test the effects of deregulation on the Iranian insurance market. After Iran's revolution in 1979, all private insurers were compulsorily merged into four state-owned companies. Following the entrance deregulation, the Iranian insurance market has witnessed a considerable increase in the number of companies since 2003. A criticism of the entrance deregulation process, however, was the lack of sufficient supervision in the insurance market. To find evidence of whether the Iranian insurance companies have been able to improve their efficiency as a result of the deregulation process, this paper analyzes the efficiency of the Iranian insurance market using a two-stage robust DEA model. We use data from 2003 to 2010, a period that covers the implementation of the entrance deregulation. To the best of our knowledge, this paper is the first to examine the relative efficiency of all Iranian insurance companies over the period of entrance deregulation.

The academic novelty of this research lies in presenting a comprehensive literature review and applying a two-stage robust DEA model to the problem. We show that traditional DEA is limited in its ability to cover all aspects of insurers' behavior and that a robust DEA model can overcome the limitations of previous methods. We also use a GEE model to extract the most significant factors explaining the robust CRS efficiency scores; applying a GEE model as the second stage in analyzing efficiency scores is another novelty of this research. While some efforts have been made to analyze the effects of insurance deregulation on the efficiency of the industry throughout the world using DEA, few studies, if any, have considered this issue within the Iranian insurance market.

The rest of the paper is arranged as follows. Section 2 reviews the characteristics of the Iranian insurance market. Section 3 presents the theoretical framework: economic efficiency, the mathematical programming approach to efficiency measurement (DEA), robust optimization, and generalized estimating equations (GEE). Section 4 discusses applications of DEA to institutional changes in the insurance industry. Section 5 contrasts the value-added and financial intermediary approaches to output selection for the DEA model. Section 6 presents the computational results for the Iranian insurance companies from 2003 to 2010: the first stage compares traditional CRS DEA scores with robust CRS DEA scores, and the second stage applies a GEE model to capture the most important factors explaining the efficiencies. The final section concludes.

2. Overview of the Iranian Insurance Market

The Iranian insurance market is one of the least developed in the world, and its position does not differ substantially from that of its neighbors. Over the past 10 years, as shown in Table 1, the Iranian insurance market has ranked last among its neighbors in premium per capita, while its rank by penetration ratio is somewhat better, as shown in Table 2. However, if the latter index is compared with the global average of 6.89% in 2010 [20], its level is not satisfactory. It seems that the insurance sector has not been able to play its primary role in the economy when compared with its counterparts in developed countries. During the last 10 years, the Iranian insurance industry has witnessed two major institutional changes. The first was the entrance deregulation that abolished the monopoly of state-owned companies in 2001; under the new law, private insurance companies have been established and the number of insurance companies has increased from 4 to 23. The effects of this institutional change can be seen in Tables 1 and 2, where the premium per capita and penetration ratio increased at a very high clip, from US$ 11.1 to US$ 34.4 and from 0.86% to 1.28%, respectively, within three years. The second institutional change was the price deregulation of 2009, under which insurance companies are allowed to set their own premium rates in property and casualty lines of business (P&L). It is also evident that, following price deregulation, the growth of the insurance industry was revitalized after a period of calm. Due to the lack of available data, however, this paper focuses on the first event.

Table 1: Premium per Capita (PPC) trend of selected countries 2003–2010 (US Dollar).
Table 2: Penetration ratio (PR) trend of selected countries 2003–2010 (%).

The last issue is that the share of life insurance in Iran is very low compared with the global average. Based on the yearbook of the Central Insurance of IR Iran, the average share of life insurance in the total written premium of Iran's market has been around 7.5% over the last 10 years. Analyzing this issue is beyond the scope of this paper, but the phenomenon has caused all companies established in Iran to be general (mixed) insurers that mostly do business in P&L lines. This, in turn, allows us to treat all companies in a similar way when assessing efficiency.

3. Theoretical Framework

3.1. The Economic Efficiency

Frontier methodologies have been used in the majority of papers published in recent years. There are two common tools among frontier methodologies, the econometric frontier approach and data envelopment analysis (DEA), and both have their own pros and cons. Unlike data envelopment analysis, the econometric stochastic frontier approach cannot easily accommodate multiple inputs and outputs, and it requires researchers to impose a functional form on the data and to make assumptions about the distributional form of the inefficiency term. Both approaches presume that the production function is known. Economic efficiency stems from the production frontier in the theory of the firm. Figure 1 shows a production frontier (PF) for a firm with a single input and output.

Figure 1: Production Frontier (PF) for a firm with one input and one output.

If a firm is producing at point $(I,J)$ at time $t$, it could produce more efficiently by moving to the frontier $\mathrm{PF}_t$ horizontally or vertically. Moving to $\mathrm{PF}_t$ horizontally, by reducing excess input, is called the input-oriented approach; moving vertically, by producing more output, is called the output-oriented approach. In the input-oriented approach, the technical efficiency of the firm is the ratio $0H/0I$, the reciprocal of its distance from the frontier $\mathrm{PF}_t$. Defining efficiency by the distance from the production frontier was formalized by Shephard [21] as follows: assume a producer uses input vector $x=(x_1,x_2,\ldots,x_k)$ to obtain output vector $y=(y_1,y_2,\ldots,y_n)$; Shephard's distance is then defined as
$$D(x,y)=\sup\left\{\theta:\left(\frac{x}{\theta},y\right)\in V(y)\right\}=\Bigl(\inf\{\theta:(\theta x,y)\in V(y)\}\Bigr)^{-1},\tag{3.1}$$
where $V$, $D$, and $\theta$ are the production frontier, the distance function, and the distance, respectively. Technical efficiency is therefore defined as $\mathrm{TE}(x,y)=1/D(x,y)$, and $\mathrm{TE}(x,y)$ for each decision-making unit can be obtained by linear programming [22]. If a firm uses two or more inputs, inefficiency can also stem from not deploying the cost-minimizing combination of inputs; this is called allocative inefficiency. A firm is fully cost efficient if it operates at a point where it is both technically and allocatively efficient. Cost efficiency is then defined as
$$\text{Cost efficiency}=\text{Technical efficiency}\times\text{Allocative efficiency}.\tag{3.2}$$
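The single-input, single-output case makes the definition concrete: under constant returns to scale, the distance function compares the firm's output-input ratio with the best ratio observed in the sample, and technical efficiency is its reciprocal. A minimal Python sketch, with hypothetical data and an illustrative function name (not from the paper):

```python
def technical_efficiency(x, y, observations):
    """Input-oriented technical efficiency for one input and one output
    under constant returns to scale: the firm's productivity y/x relative
    to the best productivity observed, i.e. TE(x, y) = 1 / D(x, y)."""
    best_ratio = max(yj / xj for xj, yj in observations)  # slope of the CRS frontier
    return (y / x) / best_ratio

# Hypothetical observations (input, output); the frontier passes through the best ratio.
firms = [(2.0, 2.0), (2.0, 3.0), (1.0, 1.0)]
te = technical_efficiency(2.0, 2.0, firms)  # firm at (2, 2): 1.0 / 1.5
```

The firm producing 2 units from 2 units of input scores 2/3, since the best observed ratio is 3/2.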

We can also use the production frontier to capture productivity improvement. Productivity refers to technology improvement between periods and differs from efficiency. Figure 1 shows the production frontiers for periods $t$ and $t+1$ ($\mathrm{PF}_t$ and $\mathrm{PF}_{t+1}$, resp.) for the one-input, one-output firm. The frontier for period $t$ lies to the right of the frontier for period $t+1$, revealing that productivity has improved between periods $t$ and $t+1$. Suppose a firm operates at point $(x^t,y^t)$ in period $t$ and at point $(x^{t+1},y^{t+1})$ in period $t+1$. As Figure 1 makes clear, both the productivity and the efficiency of the firm have improved between the two periods: the firm's operation in period $t+1$ would have been impossible in period $t$, indicating productivity growth, and $x^{t+1}$ is closer to its frontier than $x^t$, indicating that efficiency has also improved between $t$ and $t+1$. Based on these two kinds of distance, the Malmquist index can be defined to capture total factor productivity. To determine whether productivity change has occurred between periods $t$ and $t+1$, we can choose $\mathrm{PF}_t$ or $\mathrm{PF}_{t+1}$ as the reference frontier. With respect to the period-$t$ frontier, an input-oriented Malmquist productivity index can be defined as
$$M^t=\frac{D^t\left(x^t,y^t\right)}{D^t\left(x^{t+1},y^{t+1}\right)}.\tag{3.3}$$

Similarly, the Malmquist index based on the period-$(t+1)$ frontier can be defined. To avoid arbitrarily selecting one frontier to compute the index, the geometric mean can be applied:
$$M\left(x^{t+1},y^{t+1},x^{t},y^{t}\right)=\left[\frac{D^{t}\left(x^{t},y^{t}\right)}{D^{t}\left(x^{t+1},y^{t+1}\right)}\times\frac{D^{t+1}\left(x^{t},y^{t}\right)}{D^{t+1}\left(x^{t+1},y^{t+1}\right)}\right]^{1/2}.\tag{3.4}$$
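Given distance-function values measured against both frontiers, (3.3) and (3.4) reduce to simple arithmetic. A short Python sketch with hypothetical distance values (all numbers are illustrative, not from the paper):

```python
import math

def malmquist(d_t, d_t1):
    """Geometric-mean Malmquist index (3.4).
    d_t  = (D^t(x^t, y^t),     D^t(x^{t+1}, y^{t+1}))
    d_t1 = (D^{t+1}(x^t, y^t), D^{t+1}(x^{t+1}, y^{t+1}))"""
    m_t = d_t[0] / d_t[1]    # index relative to the period-t frontier, eq. (3.3)
    m_t1 = d_t1[0] / d_t1[1] # index relative to the period-(t+1) frontier
    return math.sqrt(m_t * m_t1)

# Hypothetical distances: the firm ends up closer to an improving frontier,
# so the index exceeds 1 (productivity growth).
m = malmquist((1.25, 1.0), (1.5, 1.0))
```

With these assumed distances the two single-frontier indices are 1.25 and 1.5, so the geometric mean is about 1.37.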

With some mathematical manipulation, these expressions can be decomposed into technical change and efficiency change. If the production frontier changes with scale, efficiency can be separated into pure technical and scale efficiency. Pure technical efficiency is defined as the distance from the variable returns to scale (VRS) frontier, and the relationship $\mathrm{TE}(x,y)=\mathrm{PT}(x,y)\times S(x,y)$ separates the two components, where $S(x,y)$ represents scale efficiency and $\mathrm{PT}(x,y)$ pure technical efficiency. Two types of production frontier can be considered for the single-input, single-output case: the constant returns to scale (CRS) frontier and the variable returns to scale (VRS) frontier. It is socially and economically optimal for firms to operate at constant returns to scale, which provides the motivation for separating pure technical and scale efficiency [23].

It is worth noting that the assumption that the efficient frontier is a convex set has been criticized by some scholars. Deprins et al. [24] criticize the DEA methodology for imposing the convexity assumption; they propose eliminating it, which leads to the free disposal hull (FDH) estimation technique. FDH assumes free disposability, that is, that producible outputs do not decrease if some inputs increase. It has been shown that FDH increases goodness of fit [25]; however, a convex frontier may be required in some industries [23].

3.2. Mathematical Programming (DEA)

DEA is a mathematical programming technique introduced by Charnes et al. in 1978 [26]. There are two approaches in DEA modeling: input oriented and output oriented. Input-oriented models maximize weighted outputs given the level of inputs, whereas output-oriented models minimize weighted inputs given the level of outputs [27]. The basic fractional constant returns to scale (CRS) DEA model (3.5) estimates the relative efficiencies of $n$ DMUs, each described by $m$ inputs and $s$ outputs denoted by $b_{1j},\ldots,b_{mj}$ and $a_{1j},\ldots,a_{sj}$, respectively. The ratio of the weighted sum of outputs to the weighted sum of inputs for a given $\mathrm{DMU}_0$ is maximized:
$$\max\ \delta_0=\frac{\sum_{r=1}^{s}x_r a_{r0}}{\sum_{i=1}^{m}v_i b_{i0}}\quad\text{subject to}\quad\frac{\sum_{r=1}^{s}x_r a_{rj}}{\sum_{i=1}^{m}v_i b_{ij}}\le 1,\quad j=1,\ldots,n,\qquad x_r,v_i\ge 0,\tag{3.5}$$
where $x_r$ and $v_i$ are the weight factors and $\delta_0$, $a_{r0}$, and $b_{i0}$ are the observed efficiency, output, and input values, respectively, of $\mathrm{DMU}_0$, the DMU to be evaluated. Model (3.5) is a nonlinear fractional programming model which can be converted into the following LP model [26]:
$$\max\ \delta_0=\sum_{r=1}^{s}x_r a_{r0}\quad\text{subject to}\quad\sum_{i=1}^{m}v_i b_{i0}=1,\quad\sum_{r=1}^{s}x_r a_{rj}-\sum_{i=1}^{m}v_i b_{ij}\le 0,\quad j=1,\ldots,n,\qquad x_r,v_i\ge 0.\tag{3.6}$$
In order to obtain an objective function without any uncertain parameter, we further reformulate model (3.6), using an auxiliary variable $W$, as follows:
$$\begin{aligned}\max\ &W\\ \text{subject to}\quad &W-\sum_{r=1}^{s}x_r\alpha_{r0}\le 0,\\ &\sum_{r=1}^{s}x_r\alpha_{rj}-\sum_{i=1}^{m}v_i b_{ij}\le 0,\quad j=1,\ldots,n,\\ &\sum_{i=1}^{m}v_i b_{ij}=1,\\ &x_r,v_i\ge 0,\end{aligned}\tag{3.7}$$
where $x$ and $v$ are the output and input weight variables and the indices $i$, $r$, and $j$ run over the inputs, outputs, and DMUs, respectively. One can observe that in this formulation the output parameter $\alpha_{rj}$ is uncertain. We cannot solve this problem with standard linear programming techniques, since the primary assumption, namely, the certainty of the input parameters, is violated.
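As a sketch of how the linearized model (3.6) can be solved in practice, the following Python code builds the multiplier form for one DMU and passes it to a standard LP solver. It assumes SciPy is available; the data and function name are hypothetical, for illustration only:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(inputs, outputs, o):
    """Solve the linearized CCR multiplier model (3.6) for DMU o.
    inputs:  (m, n) array of b_ij; outputs: (s, n) array of a_rj.
    Decision variables are the output weights x_r and input weights v_i."""
    m, n = inputs.shape
    s, _ = outputs.shape
    # Maximize sum_r x_r * a_ro  ->  minimize its negative.
    c = np.concatenate([-outputs[:, o], np.zeros(m)])
    # sum_r x_r a_rj - sum_i v_i b_ij <= 0 for every DMU j.
    A_ub = np.hstack([outputs.T, -inputs.T])
    b_ub = np.zeros(n)
    # Normalization: sum_i v_i b_io = 1.
    A_eq = np.concatenate([np.zeros(s), inputs[:, o]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

# Hypothetical data: 1 input, 1 output, 3 DMUs.
b = np.array([[2.0, 2.0, 1.0]])   # inputs b_ij
a = np.array([[2.0, 3.0, 1.0]])   # outputs a_rj
eff0 = ccr_efficiency(b, a, 0)    # DMU 1 measured against the best ratio (DMU 2)
```

For this toy data the best output-input ratio is 3/2, so the first DMU scores 2/3, matching the single-ratio case.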

3.3. Robust Optimization

The implicit assumption behind traditional DEA is that inputs and outputs are deterministic. But the outputs of an insurer are not necessarily so. For example, if we consider paid losses as an output, the figure may change due to incorrect estimates of the outstanding loss reserve (the loss that has been incurred and reported but not yet paid) or the IBNR reserve (the loss that has been incurred but not yet reported). These outputs can be revised over time; however, the revisions are hard to find in the financial statements that insurers release to the public, so it is not possible to determine an insurer's real ultimate loss for each year.

Robust optimization is one of the leading optimization methodologies for handling uncertainty [28]. In classical optimization modeling, input parameters are treated as certain values; in real cases, however, we are not certain about all parameter values, as mentioned before. Robust optimization is a relatively new approach for incorporating uncertainty within mathematical models, and owing to its applicability it has recently become popular among practitioners and has been applied in many different contexts [29].

Soyster was the first to address uncertainty in optimization [30]. He considered the worst possible case for each realization of the data, whereby some optimality may be lost but the final solution remains feasible for all possible cases. Ben-Tal and Nemirovski developed a robust method based on cone programming; they proposed an ellipsoidal uncertainty set, which turns an ordinary linear programming problem into a nonlinear one [31]. Although Ben-Tal and Nemirovski's robust approach has proven to be efficient, it requires solving a nonlinear optimization problem, which is unpopular among many practitioners.

Bertsimas and Sim proposed another robust optimization method for linear programming problems under uncertainty [28]. They assumed the uncertainty set to be polyhedral and, by proposing a new type of norm, proved that the method preserves the linearity of the problem. Gharakhani et al. developed such a robust DEA model in an educational context to measure the efficiency of public high schools in Iran [32]. In this paper we apply a similar approach to insurance companies, and in the next section we formulate a DEA model based on the approach of Bertsimas and Sim [28]. To introduce the type of uncertainty used in this paper, consider the following standard linear programming problem:
$$\min\ c^{\top}x\quad\text{subject to}\quad Ax\le b,\quad x\in X.\tag{3.8}$$
The uncertainty is assumed to influence the technical coefficient matrix $A$. Consider a particular row $i$ of $A$, and let $J_i$ represent the set of uncertain coefficients in that row. Each entry $\tilde{a}_{ij}$, $j\in J_i$, is assumed to be a symmetric and bounded random variable taking values in $[a_{ij}-\hat{a}_{ij},\,a_{ij}+\hat{a}_{ij}]$, centered at its expected value $a_{ij}$, where $\hat{a}_{ij}$ denotes the maximum possible deviation from the corresponding estimate.

Bertsimas and Sim proposed an approach for linear optimization that provides full control over the degree of conservatism while keeping the advantages of Soyster's linear framework. They defined the scaled deviation from the nominal value $a_{ij}$ as
$$\eta_{ij}=\frac{\tilde{a}_{ij}-a_{ij}}{\hat{a}_{ij}},\qquad\sum_{j=1}^{n}\left|\eta_{ij}\right|\le\Gamma_i,\quad i=1,\ldots,m,\tag{3.9}$$
where $\eta_{ij}$ has an unknown but symmetric distribution and can only take values within $[-1,1]$. Although the cumulative scaled deviation of constraint $i$ could in principle take any value within $[-n,n]$, it is assumed to be bounded by $\Gamma_i$. In this approach, $\Gamma_i$ is a control parameter known as the price of robustness; it adjusts the robustness of the method against the conservatism of the solution. For $\Gamma_i=0$ we obtain the nominal model, with no uncertain coefficient involved. On the other hand, $\Gamma_i=n$ means that the $i$th constraint is protected against all possible realizations of the uncertain coefficients. For any value of $\Gamma_i$ within $(0,n)$, the decision maker trades off the level of protection of the constraint against the conservatism of the solution.
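The role of $\Gamma_i$ as the price of robustness can be seen directly: for a fixed candidate solution $x$, the worst-case protection added to constraint $i$ is the sum of the $\Gamma_i$ largest deviation terms $\hat{a}_{ij}|x_j|$. A pure-Python sketch, restricted to integer $\Gamma$ for simplicity and using hypothetical numbers:

```python
def protection(a_hat, x, gamma):
    """Worst-case protection term of one constraint in the Bertsimas-Sim
    model: the sum of the gamma largest deviations a_hat_j * |x_j|
    (gamma assumed integer here for simplicity)."""
    devs = sorted((ah * abs(xj) for ah, xj in zip(a_hat, x)), reverse=True)
    return sum(devs[:gamma])

a_hat = [0.5, 0.2, 0.4]   # maximum deviations of the uncertain coefficients
x = [1.0, 2.0, 1.0]       # a candidate solution
levels = [protection(a_hat, x, g) for g in range(4)]  # Gamma = 0, 1, 2, 3
```

The protection grows monotonically with $\Gamma$: zero for the nominal model and full coverage of every deviation at $\Gamma=n$.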

Based on the Bertsimas-Sim approach, we can reformulate the DEA model. The reformulated model, known as the robust counterpart, is as follows:
$$\begin{aligned}\max\ &W\\ \text{subject to}\quad &W-\sum_{r=1}^{s}x_r a_{r0}+z_0\Gamma+\sum_{r=1}^{s}p_{r0}\le 0,\\ &\sum_{r=1}^{s}x_r a_{rj}-\sum_{i=1}^{m}v_i b_{ij}+z_j\Gamma+\sum_{r=1}^{s}p_{rj}\le 0,\quad j=1,\ldots,n,\\ &z_j+p_{rj}\ge\hat{a}_{rj}\,y_r,\quad r=1,\ldots,s,\ j=1,\ldots,n,\\ &\sum_{i=1}^{m}v_i b_{ij}=1,\quad j=1,\ldots,n,\\ &-y_r\le x_r\le y_r,\quad r=1,\ldots,s,\\ &x_r,z_j,y_r,v_i,p_{rj}\ge 0,\end{aligned}\tag{3.10}$$
where $a_{rj}$ is the nominal value of $\tilde{a}_{rj}$; $\Gamma$ is the degree of uncertainty in the constraint parameters; $\hat{a}_{rj}$ is the precision of the estimate of $a_{rj}$; $z_j$ is an auxiliary variable of the robust counterpart denoting the cost of robustness in each constraint; $p_{rj}$ is an auxiliary variable of the robust counterpart counting the uncertain parameters in each constraint; and $y_r$ is a decision variable that linearizes the absolute value $|x_r|$. The other notations are as defined in the previous equations.

This robust counterpart is clearly a linear programming model that can be solved with popular solver packages. Since the original DEA model is a linear program, incorporating uncertainty does not impair the solvability of the model; in other words, the reformulation preserves the linearity of the original problem.

3.4. Statistical Analysis of Correlated Data Using Generalized Estimating Equations (GEE)

The generalized estimating equations (GEE) method, an extension of the quasi-likelihood approach, is used to analyze longitudinal and other correlated data. Textbooks advise researchers not to treat observations from the same cluster as if they were independent and warn against being misled by great masses of observations [33]. Some articles discuss how much statistical information is obtainable from observations on individuals in clusters, such as insurance companies' efficiencies over 8 years. Investigators often take a conservative approach, but the GEE approach uses weighted combinations of observations to extract the appropriate amount of information from correlated data. GEEs belong to a class of semiparametric regression techniques in that they rely on specification of only the first two moments. Under mild regularity conditions, parameter estimates from GEEs are consistent [34]. The parameter estimates are typically obtained via the Newton-Raphson algorithm.

The marginal model takes the form $g(E[\delta_{ij}])=x_{ij}^{\top}\beta$, where $\delta_{ij}$ is the response (the CRS efficiency score) for DMU $i$ at time $j$; $x_{ij}$ is the vector of covariates (the explanatory factors); $\beta$ is a $p\times 1$ vector of unknown regression coefficients; and $g(\cdot)$ is the link function.

Let $\omega$ be the vector of all (identifiable) parameters of the covariance structure of the observed dependent variables, let $X_n$ be the fixed matrix collecting all $x_{ij}$, and let $\delta_n$ be the vector of responses. It is assumed throughout that the interpretation of $\beta$ does not depend on the value of $\omega$. The starting point is the assumption of the existence of a set of unbiased estimating functions for the parameters of the mean structure, denoted $g_n\equiv g_n(\delta_n,X_n,\beta)$, such that $E(g_n;\beta,\omega\mid X_n)=0$ for all possible $\beta$, $\omega$, which are uncorrelated with each other. Optimal estimating functions, in a variance-minimizing sense with respect to $g$, are given by
$$g^{*}=\sum_{n=1}^{N}E\left[\frac{\partial g_{n}}{\partial\beta}\right]^{\top}\operatorname{Cov}^{-1}(g_{n})\,g_{n},\tag{3.11}$$
where $\operatorname{Cov}(g_n)$ is the covariance of $g_n$ conditional on $X_n$. The choice $g_n=(\delta_n-\mu_n)$, where $\mu_n$ is a correctly specified model of the conditional mean $E(\delta_n\mid X_n)$ and is a function of $\beta$ but not of $\omega$, leads to the estimating functions referred to as the generalized estimating equations (GEEs) by Liang and Zeger [34]. A GEE estimator of $\beta$, $\hat{\beta}$, is obtained as the root of the unbiased estimating functions:
$$0=\sum_{n=1}^{N}E\left[\frac{\partial\mu_{n}}{\partial\beta}\right]^{\top}\operatorname{Cov}^{-1}(\delta_{n})\,(\delta_{n}-\mu_{n}),\tag{3.12}$$
where $\operatorname{Cov}(\delta_n)$ is the covariance of $\delta_n$ conditional on $X_n$, which depends on $\omega$. Usually $\omega$ is unknown and must be estimated. It can be shown, however, that the nuisance parameter $\omega$ has only a small impact on $g^{*}$ and on the solution of $g^{*}=0$, at least for large $N$; thus, replacing $\omega$ by any consistent estimator $\hat{\omega}$, for example, the classical minimum distance estimator, does not affect the asymptotic variance of $\hat{\beta}$. To complete the estimating equations for $\beta$, Liang and Zeger propose a working correlation matrix $R(\alpha)$, common to all units, as a working model of the correlation structure of the observed dependent variables, where $\alpha$ is a possibly vector-valued parameter [34].
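As a concrete illustration of the working correlation matrix $R(\alpha)$, the exchangeable structure, a natural choice when the same insurer is observed repeatedly over 8 years, assigns a single common correlation $\alpha$ to every pair of periods. A minimal sketch (the parameter values are illustrative):

```python
def exchangeable_correlation(t, alpha):
    """Exchangeable working correlation matrix R(alpha): ones on the
    diagonal and a common correlation alpha for every pair of periods."""
    return [[1.0 if i == j else alpha for j in range(t)] for i in range(t)]

# Working correlation for one insurer observed over 8 years.
R = exchangeable_correlation(8, 0.3)
```

Other common working structures (independence, AR(1), unstructured) are built analogously; GEE estimates of $\beta$ remain consistent even if the chosen structure is misspecified.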

4. Applying DEA in Institutions Changes Measurement; Cases from Insurance Industry

Owing to its convenience, the majority of efficiency studies conducted in recent years have used DEA approaches. Based on a working paper by Eling and Luhnen, there are 87 studies that apply DEA to efficiency analysis in the insurance industry; they also identify 11 DEA and SFA studies whose main purpose is the assessment of regulatory changes in the insurance industry [35]. We extend this survey in Table 3.

Table 3: Summary of previous research.

5. Output Selection; Value Added versus Financial Intermediary

As shown in Table 3, scholars agree on input selection but not on output selection. Selecting and measuring outputs has been a challenging step in insurance frontier efficiency studies. There are two major approaches to measuring insurers' outputs: the value-added approach [23] and the financial intermediary approach [36].

The value-added approach uses outputs related to the amount of financial services an insurance company provides. In this approach, a P/L insurer's outputs consist primarily of intangible financial services; it is therefore necessary to define suitable proxies that are highly correlated with the quantity of financial services provided. Based on the value-added approach and the operating-cost allocation concept developed by Berger and Humphrey in 1992 [37], Cummins and Weiss discussed the three principal services provided by P/L insurers: risk pooling and risk bearing, real insurance services, and financial intermediation. They recommend losses incurred, the sum of losses paid plus the net change in loss reserves for the period, as the most common proxy for the quantity of risk-pooling and real insurance services; in practice, the present value of real losses incurred (PV(L)) is used as the loss-based proxy [23].

Alternatively, the financial intermediary approach developed by Brockett et al. in 2004 considers three outputs that play the most crucial role in financial safety for three groups of stakeholders: the firm's policyholders, employees, and regulators. They use a rule of thumb, ceteris paribus, discussed originally by Charnes and Cooper: an increase in an output or, alternatively, a decrease in an input should be desirable and should improve the efficiency score. This rule challenges the recommendation of Cummins and Weiss to choose losses incurred as an output. When researchers want to test whether a particular variable is an input or an output, all other things being held constant, they should check whether an increase in the quantity is favorable or unfavorable. Brockett et al. argue that no insurance firm would encourage its employees to perform in a manner that engendered large losses while charging premiums similar to its competitors'. A firm that pays great losses due to a catastrophe without an appropriate change in premiums may become insolvent, not efficient [36].

In the financial intermediary view, an insurance firm provides a bundle of attributes to its stakeholders. In fact, the pledged payment of losses can be viewed as an intermediate stage by which the insurer collects money, investors are rewarded, consumers receive a valued promise of quick claim payment, and consumers, regulators, and employees receive a promise of the firm's future solvency. In the financial intermediary approach, the frontier efficiency method is used as a goal-directing technique in which the firm's managers balance maintaining short-term claim-paying ability and preserving the long-term ability to meet the firm's fiduciary responsibilities against earning a reasonable financial return.

Leverty and Grace empirically examined the two approaches for measuring outputs in property-liability insurer efficiency studies. Their study shows that the value-added approach is closely related to traditional measures of firm performance, whereas the financial intermediary approach is not. In addition, firms that are efficient under the value-added approach are less likely to become insolvent than firms characterized as efficient by the financial intermediary approach. They also find that the theoretical concern regarding the value-added approach's use of losses as a measure of output is not validated empirically, and they conclude that the value-added approach is the appropriate measure of insurer efficiency [38]. Based on this work, we choose the value-added approach in our research.

6. Computational Results

6.1. First Stage: Efficiency Scores, Traditional DEA versus Robust DEA Results

To calculate insurers' efficiencies, we used panel data for the years 2003–2010, obtained from the yearbook of the Central Insurance of IR Iran, on 20 mixed (general) insurance companies over a period of 8 years. The sample consists of 139 observations. The insurance companies considered in this analysis represent almost all of the market. In keeping with the popular DEA rule of thumb, the number of companies in each year is, with few exceptions, greater than three times the number of inputs plus outputs.

To determine the inputs, we followed the previous works discussed in the literature review section; for the outputs, we followed the value-added approach as described in Cummins and Weiss [23], but in order to capture the intermediary function of insurance companies we added ROE as an output. In short, we measured outputs by (1) losses incurred and (2) return on equity (ROE), and inputs by (1) number of employees, (2) general and administrative expenses, and (3) surplus, that is, total assets minus total liabilities. It should be mentioned that all insurance companies in Iran are general (mixed), and over the last 10 years the average share of life insurance in the total premium of Iran's market has been around 7.5%; Iranian insurers mostly do business in P&L lines. This allows us to consider losses incurred as an output for all observations. Table 4 presents the CRS efficiency scores for the Iranian insurance companies; some conclusions can be drawn from it.

Table 4: CRS relative efficiency observed in Iranian insurance companies in percent (%): 2003–2010; DEA with certain and uncertain outputs (traditional DEA versus robust DEA).

First, based on DEA with certain and uncertain outputs, the overall inefficiency gap for the Iranian insurance industry is 0.18 and 0.3, respectively. These gaps, however, do not follow any particular trend, which implies that, overall, no major change can be found in the market. The same is true of the standard deviations in both DEA models (see Table 4).

Second, as expected, the CRS efficiency of the model with certain outputs is always greater than that of the model with uncertain outputs, but the gap differs between private and state-owned insurers. As shown in Figure 2, the average gap for private insurers is much higher than that for state-owned insurers, indicating that the private insurers would be much more vulnerable if conditions changed. This convinces us to work with robust CRS efficiency scores instead of traditional CRS scores: in case of any change in outputs such as ROE or losses incurred, the efficiency of private insurers would change dramatically. Indeed, the robust model is the right model for such uncertain data.

Figure 2: Average gap between traditional efficiency scores and robust efficiency scores for different types of insurance companies.

Third, it seems that some private companies have been able to compete with their state-owned counterparts. For example, DMU8, DMU10, and DMU14, established in 2003, 2003, and 2004, respectively, have caught up with the state-owned insurers (DMU1, DMU2, DMU3, and DMU4). This shows that the position of state-owned insurers is not secure; they should upgrade the quality of their management practices in response to the results of the present research (see Figure 3).

Figure 3: Comparison between average of robust CRS efficiency scores for state owned insurers and selective private insurers.

The reason why these insurers have been able to catch up with their state-owned counterparts is beyond the scope of this research and could be addressed in a separate study. It can be briefly noted, however, that all of these companies are captive.

6.2. Second Stage: Finding the Factors Explaining Efficiency Scores

To find the factors explaining efficiency scores, we first considered the research of Barros et al. [19] and then modified it to fit the Iranian insurance industry. Barros et al. defined the determinant factors as Life, Nonlife, M&A, Foreign, Big, Quoted, MkShare, and CastNew. Life is a dummy variable which is one for life insurance companies. Nonlife is a dummy variable which is one for nonlife insurance companies. M&A is a dummy variable which is one for enterprises involved in mergers and acquisitions. Foreign is a dummy variable which is equal to one for foreign insurance companies in the sample. Big is a dummy variable which is one for big companies as measured by the total value of assets. Quoted is a dummy variable which is one for companies quoted on the stock market. MkShare is the logarithm of the market share of the insurance companies analyzed. CastNew is the logarithm of the equity/invested assets ratio.

Because all Iranian insurers are general, we omitted the Life and NonLife factors. Since there is no evidence of mergers and acquisitions, we did not consider the M&A factor either. There is no foreign insurer in Iran, so this factor was not considered as well. In addition, we considered some other factors that are significant in Iran.

Figure 4 presents two significant factors, the average premium to surplus (P/S) ratio and the ownership status of companies, in Iran from 2003 to 2010. The bars represent the average P/S ratio and the lines show the average efficiency scores. The P/S ratio is an indicator of an insurer's underwriting risk and typically lies between 100% and 300%. The more solvent an insurer, the closer its P/S ratio is to 100%. However, as illustrated in Figure 4, this ratio for Iranian companies varies considerably beyond its regular range, from below 100% up to 1200%, due to the absence of a solvency regime. The average P/S ratio for state-owned companies clearly fluctuates in a noticeably higher range than that of private companies. Despite this, the average efficiency of private companies is always lower than that of state-owned companies. Based on Figure 4, we conclude that ownership and P/S may have a significant effect on efficiency scores.

Figure 4: Comparison between premium to surplus ratio and efficiency for different types of insurance companies.
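For readers unfamiliar with the indicator, the P/S ratio is simply written premium divided by surplus (total assets minus total liabilities). A minimal sketch with invented balance-sheet figures, not taken from the paper's data:

```python
def premium_to_surplus(premium, assets, liabilities):
    """P/S ratio, with surplus defined as assets minus liabilities."""
    surplus = assets - liabilities
    return premium / surplus

# Hypothetical insurer: the figures below are illustrative only
ps = premium_to_surplus(premium=450.0, assets=1000.0, liabilities=850.0)
in_typical_range = 1.0 <= ps <= 3.0   # the 100%-300% band cited in the text
```

An insurer writing premium well beyond this band relative to its surplus, as many in the sample do, is carrying outsized underwriting risk per unit of capital.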

The capital structure of the different types of companies is also compared in Figure 5. We use the financial leverage index (FLI) as the main measure of capital structure; it is calculated by dividing return on equity (ROE) by return on assets (ROA) and is therefore equivalent to the ratio of assets to equity. Its inverse, the equity-to-assets ratio, lies between 0 and 100% and is closer to 0 for a company that takes on large liabilities with a small amount of capital. As can be observed from Figure 5, FLI varies considerably across the different types of companies. While the difference in capital structure was significant in the early years of the observation period, it decreased over the course of institutional change. Based on Figure 5, we conclude that FLI may have a significant effect on efficiency scores. This factor is similar to CastNew, used in Barros et al. [19].

Figure 5: Comparison between capital structure and efficiency for different types of insurance companies. The Financial Leverage Index (FLI), a ratio indicating the capital structure, is defined as the ratio of assets to equity.
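Since ROE is net income over equity and ROA is net income over assets, FLI = ROE/ROA reduces to assets over equity. The identity can be checked with invented figures (illustrative only, not from the paper's data):

```python
# Hypothetical insurer figures
net_income, equity, assets = 12.0, 100.0, 400.0

roe = net_income / equity        # return on equity
roa = net_income / assets        # return on assets
fli = roe / roa                  # financial leverage index

# FLI equals assets/equity; its inverse is the equity-to-assets ratio
assert abs(fli - assets / equity) < 1e-9
inverse_fli = 1.0 / fli          # fraction of assets financed by equity
```

Here a quarter of the assets are financed by equity, so FLI is 4; a more leveraged insurer would push FLI higher and its inverse toward 0.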

Based on the preceding discussion, we consider six factors in the next stage, the GEE model, to find the most significant factors explaining efficiency scores: ownership, presence on the stock exchange market, insurer's size, insurer's market share, capital structure, and solvency index.

7. Second Stage: The Most Important Factors Explaining Efficiencies

In order to examine the hypothesis that insurers' efficiency is explained by different contextual variables, we used the two-stage method proposed by Coelli et al. [39], estimating the following regression:

δ̂_it = β_1 + β_2 × Private_it + β_3 × Stock_it + β_4 × Big_it + β_5 × Marketshare_it + β_6 × Capitalstructure_it + β_7 × PrmtoSrp_it,   (7.1)

where δ̂_it represents the CRS efficiency score of insurer i at time t. Private is a dummy variable which is one for private insurance companies; its inclusion is based on the assumption that private companies may exhibit higher efficiency due to their type of ownership. Stock is a dummy variable which is one for insurance companies whose shares can be traded on the stock exchange, aiming to capture the effect of transparency due to stock market governance requirements. Big is a dummy variable which is one for big companies as measured by the total value of assets. Following Barros et al. [19], we set a threshold on companies' total assets: if a company's total assets exceed $300 million U.S., it is considered a big company and its Big variable equals 1. This lets us summarize the data without losing valuable information, because there are two major categories of companies in Iran: some are old and huge, while the others are recently established and small. Marketshare is the market share of each insurance company within the market. Capitalstructure is the ratio of equity to total assets (equal to return on assets (ROA) divided by return on equity (ROE), i.e., FLI⁻¹), aiming to capture the effect of capital structure. PrmtoSrp is the ratio of premium to surplus, aiming to measure the risk-aversion behavior of insurers and the company's attitude toward the solvency issue.

Obviously, an insurer's efficiency in a given year is correlated with its efficiency in other years; in other words, there are correlations among the yearly observations belonging to an insurer. GEE, an extension of the quasi-likelihood approach, is used to analyze longitudinal and other correlated data. Some articles discuss how much statistical information is obtainable from observations on individuals in clusters, such as the cluster of an insurance company's efficiency scores over 8 years.

Following Liang and Zeger [34], we employ the GEE method to determine the most important variables, with 139 observations categorized into 20 clusters. The results are presented in Table 5. Several models were estimated in order to compare the results. The results are quite stable, since the variables that were significant in Model 1 remained significant after dropping the insignificant variables. Based on Table 5, we can conclude that Private contributes negatively to efficiency, signifying that this type of company faces different constraints in the Iranian insurance market. Second, the variable Capitalstructure shows that the specific capital structure of Iranian insurance companies exerts a positive effect on efficiency. Finally, the variable PrmtoSrp has a positive influence on efficiency. Together with the result for Capitalstructure, this shows that the companies which do not respect solvency measures are more efficient than the others. Marketshare, Big, and Stock do not have a significant effect on efficiency.

Table 5: Second stage—Generalized estimating equations models that capture the most important factors explaining the efficiency score for robust DEA.

Since the GEE model builds a working correlation matrix in order to obtain the optimal estimators, we test higher degrees of correlation between within-subject observations. This can be seen in Table 6, where we estimate the parameters with M = 5, M = 6, and M = 7. Setting M = 5, for instance, means that an insurer's efficiency score in a given year is related to its efficiency in the 5 adjacent years. M cannot exceed 7, because the maximum number of observations within a cluster (within-subject) is 8 (a company's efficiency scores over 8 years). Table 6 shows that Private and PrmtoSrp remain significant as we increase the degree of within-subject dependency.

Table 6: Sensitivity analysis of the significant factors, Private and PrmtoSrp, based on the within-subject dependencies.
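The working correlation structure being varied here can be sketched directly: an M-dependent matrix lets observations up to M years apart share a common correlation. The value 0.3 below is an arbitrary placeholder; in GEE the correlation parameters are estimated from the data.

```python
import numpy as np

def m_dependent_corr(n_periods, m, rho=0.3):
    """Working correlation matrix in which observations at most
    m periods apart are correlated (rho) and uncorrelated beyond m."""
    R = np.eye(n_periods)
    for i in range(n_periods):
        for j in range(n_periods):
            if 0 < abs(i - j) <= m:
                R[i, j] = rho
    return R

# 8 yearly scores per insurer, dependence up to M = 5 years apart
R = m_dependent_corr(8, 5)
```

With M = 7 the band covers the whole 8-year cluster, which is why larger values of M are impossible here.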

In the GEE model, QIC is a measure used to choose the best correlation structure and can also be applied to determine the best subset of covariates for a particular model. The best model is the one with the smallest QIC value. Taking QIC as the model's goodness of fit, it improves as we increase M, the within-subject dependency.

8. Concluding Remarks

In this paper, we have analyzed the technical efficiency of Iranian insurance companies between 2003 and 2010, a period in which insurers experienced intense volatility due to the entrance deregulation of the market. We propose a two-stage procedure to analyze the most important determinants affecting efficiency scores. In the first stage, we obtained the CRS efficiency scores by the robust DEA model proposed by Bertsimas and Sim in 2003. In the second stage, we determined the most important factors that explain the efficiency scores by using GEE, developed by Liang and Zeger in 1986. The major result of our study is that ownership type and failure to meet risk management rules are the main drivers of efficiency. In other words, any state-owned insurer that issued more policies without respecting sufficient capital provision could obtain a better score. It should be mentioned here that state-owned companies issued more policies and paid more losses in the hope that the government would support them in case of financial difficulties. Indeed, instead of issuing insurance policies in proportion to their capital, they issued policies on the strength of the government's credit. This finding shows the violation of competition rules by the state, and the inadequacy of the institutions necessary for private sector development.

What should the managers of inefficient insurance companies do to improve efficiency? First, in order to prepare the institutions, they must pursue new regulations that require state-owned insurers to hold sufficient capital and prevent them from issuing insurance policies without limit. Also, following Williamson, if we consider culture as the first level of institutions [40], it seems that private companies should try to change the common belief that state-owned companies are more reliable. This cannot be achieved unless their quality of service is as good as that of their state-owned counterparts, which in turn requires them to design and manage their processes efficiently. They should establish a benchmark management procedure in order to evaluate their relative position and to adopt appropriate managerial procedures for catching up with the frontier of "best practices." It seems that some private companies have already been able to compete with their state-owned counterparts. Finally, the regulatory authority has an important role in creating a fair business environment and improving the efficiency of insurers by (1) participating in developing new rules of business and enforcing its regulatory duties; (2) developing indicators to monitor solvency and requiring state-owned companies to comply with them, since one of the initial duties of the Central Insurance of IR Iran is to arrange the market so that insurers can compete fairly while meeting at least the minimum required solvency margin; and (3) publishing information in order to introduce greater transparency into the market, especially information related to sufficient capital and reserves for future commitments.

References

  1. Management and Planning Organization, “The law of the third economic, social, and cultural development plan,” Tech. Rep., Management and Planning Organization, Tehran, Iran, 2003.
  2. R. Rees, E. Kèssner, P. Klemperer, and C. Matutes, “Regulation and efficiency in European insurance markets,” Economic Policy, no. 29, pp. 365–397, 1999.
  3. S. Hussels and D. R. Ward, “The impact of deregulation on the German and UK life insurance markets: an analysis of efficiency and productivity between 1991–2002,” Working Paper, Cranfield Research Paper Series, 2007.
  4. J. L. Wang, L. Y. Tzeng, and E. L. Wang, “The nightmare of the leader: the impact of deregulation on an oligopoly insurance market,” Journal of Insurance, vol. 26, no. 1, pp. 15–28, 2003.
  5. V. Jeng and G. C. Lai, “The impact of deregulation on efficiency: an analysis of life insurance industry in Taiwan from 1981 to 2004,” Risk Management and Insurance Review, vol. 11, no. 2, pp. 349–375, 2008.
  6. J. D. Cummins and M. Rubio-Misas, “Deregulation, consolidation, and efficiency: evidence from the Spanish insurance industry,” Journal of Money, Credit and Banking, vol. 38, no. 2, pp. 323–355, 2006.
  7. T. Boonyasai, M. F. Grace, and H. D. Skipper Jr., The Effect of Liberalization and Deregulation on Life Insurer Efficiency [M.S. thesis], Georgia State University, Atlanta, Ga, USA, 1999.
  8. Central Insurance of IR Iran, Statistical Year Book 2009-2010, Central Insurance of IR Iran, Tehran, Iran.
  9. J. D. Cummins, G. Turchetti, and M. A. Weiss, “Productivity and technical efficiency in the Italian insurance industry,” Working Paper 96-10, Wharton School, Philadelphia, Pa, USA, 1996.
  10. H. Fukuyama, “Investigating productive efficiency and productivity changes of Japanese life insurance companies,” Pacific Basin Finance Journal, vol. 5, no. 4, pp. 481–509, 1997.
  11. J. D. Cummins, M. A. Weiss, and H. Zi, “Organizational form and efficiency: the coexistence of stock and mutual property-liability insurers,” Management Science, vol. 45, no. 9, pp. 1254–1269, 1999.
  12. B. Mahlberg and T. Url, “The transition to the single market in the German insurance industry,” Working Paper, Austrian Institute of Economic Research, 2000.
  13. A. G. Noulas, T. Hatzigayios, J. Lazaridis, and K. Lyroudi, “Non-parametric production frontier approach to the study of efficiency of non-life insurance companies in Greece,” Journal of Financial Management and Analysis, vol. 14, no. 1, pp. 19–26, 2001.
  14. B. Mahlberg and T. Url, “Effects of the single market on the Austrian insurance industry,” Empirical Economics, vol. 28, no. 4, pp. 813–838, 2003.
  15. K. C. Ennsfellner, D. Lewis, and R. I. Anderson, “Production efficiency in the Austrian insurance industry: a Bayesian examination,” Journal of Risk and Insurance, vol. 71, no. 1, pp. 135–159, 2004.
  16. C. P. Barros, N. Barroso, and M. R. Borges, “Evaluating the efficiency and productivity of insurance companies with a malmquist index: a case study for Portugal,” Geneva Papers on Risk and Insurance, vol. 30, no. 2, pp. 244–267, 2005.
  17. O. Badunenko, B. Grechanyuk, and O. Talavera, “Development under regulation: the way of the Ukrainian insurance market,” Discussion Papers of DIW Berlin, German Institute for Economic Research, Berlin, Germany, 2006.
  18. C. P. Barros, A. Ibiwoye, and S. Managi, “Productivity change of Nigerian insurance companies: 1994–2005,” African Development Review, vol. 20, no. 3, pp. 505–528, 2008.
  19. C. P. Barros, M. Nektarios, and A. Assaf, “Efficiency in the Greek insurance industry,” European Journal of Operational Research, vol. 205, no. 2, pp. 431–436, 2010.
  20. D. Stalb and L. Bevere, World Insurance in 2010, Swiss Reinsurance, 2011.
  21. R. W. Shephard, Theory of Cost and Production Functions, Princeton University Press, Princeton, NJ, USA, 1970.
  22. A. Charnes, W. W. Cooper, A. Y. Lewin, and L. M. Seiford, Data Envelopment Analysis: Theory, Methodology and Applications, Kluwer Academic, New York, NY, USA, 1995.
  23. J. D. Cummins and M. A. Weiss, “Analyzing firm performance in the insurance industry using frontier efficiency methods,” Center for Financial Institutions Working Papers, Wharton School Center for Financial Institutions, University of Pennsylvania, 1998.
  24. D. Deprins, L. Simar, and H. Tulkens, “Measuring labor-efficiency in post offices,” in The Performance of Public Enterprises: Concepts and Measurement, pp. 243–268, Elsevier Science, Amsterdam, The Netherlands, 1984.
  25. J. D. Cummins and H. Zi, “Comparison of frontier efficiency methods: an application to the U.S. life insurance industry,” Journal of Productivity Analysis, vol. 10, no. 2, pp. 131–152, 1998.
  26. A. Charnes, W. W. Cooper, and E. Rhodes, “Measuring the efficiency of decision making units,” European Journal of Operational Research, vol. 2, no. 6, pp. 429–444, 1978.
  27. S. T. Cooper and E. Cohn, “Estimation of a frontier production function for the South Carolina educational process,” Economics of Education Review, vol. 16, no. 3, pp. 313–327, 1997.
  28. D. Bertsimas and M. Sim, “Robust discrete optimization and network flows,” Mathematical Programming B, vol. 98, no. 1–3, pp. 49–71, 2003.
  29. M. A. Shafia, M. Aghaee, and A. Jamili, “A new mathematical model for the job shop scheduling problem with uncertain processing times,” International Journal of Industrial Engineering Computations, vol. 2, pp. 295–306, 2010.
  30. A. Soyster, “Convex programming with set-inclusive constraints and applications to inexact linear programming,” Operations Research, vol. 21, no. 5, pp. 1154–1157, 1973.
  31. A. Ben-Tal and A. Nemirovski, “Robust solutions of linear programming problems contaminated with uncertain data,” Mathematical Programming B, vol. 88, no. 3, pp. 411–424, 2000. View at Scopus
  32. M. Gharakhani, I. Kazemi, and H. Alizadeh Haji, “A robust DEA model for measuring the relative efficiency of Iranian high schools,” Management Science Letter, vol. 1, no. 3, pp. 389–404, 2011.
  33. J. A. Hanley, A. Negassa, M. D. D. B. Edwardes, and J. E. Forrester, “Statistical analysis of correlated data using generalized estimating equations: an orientation,” American Journal of Epidemiology, vol. 157, no. 4, pp. 364–375, 2003.
  34. K. Y. Liang and S. L. Zeger, “Longitudinal data analysis using generalized linear models,” Biometrika, vol. 73, no. 1, pp. 13–22, 1986.
  35. M. Eling and M. Luhnen, “Frontier efficiency methodologies to measure performance in the insurance industry: overview and new empirical evidence,” University of St. Gallen Working Papers on Risk Management and Insurance Paper 56, 2008.
  36. P. L. Brockett, W. W. Cooper, L. L. Golden, J. J. Rousseau, and Y. Wang, “Financial intermediary versus production approach to efficiency of marketing distribution systems and organizational structure of insurance companies,” Journal of Risk and Insurance, vol. 72, no. 3, pp. 393–412, 2005.
  37. A. N. Berger and D. B. Humphrey, “Measurement and efficiency issues in commercial banking,” in Output Measurement in the Service Sectors, pp. 245–300, University of Chicago Press, Chicago, Ill, USA, 1992.
  38. J. T. Leverty and M. F. Grace, “The robustness of output measures in property-liability insurance efficiency studies,” Journal of Banking and Finance, vol. 34, no. 7, pp. 1510–1524, 2010.
  39. T. J. Coelli, P. Rao, and G. E. Battese, An Introduction to Efficiency and Productivity Analysis, Kluwer Academic Press, New York, NY, USA, 1998.
  40. O. E. Williamson, “The new institutional economics: taking stock, looking ahead,” Journal of Economic Literature, vol. 38, no. 3, pp. 595–613, 2000.