Abstract

Recently, a series of divergence measures has emerged from information theory and statistics, and numerous inequalities have been established among them. However, none of them is a metric in topology. In this paper, we propose a class of metric divergence measures, namely, $L_p(P,Q)$, $p \ge 1$, and study their mathematical properties. We then study an important divergence measure widely used in credit scoring, called information value. In particular, we explore the mathematical reasoning of weight of evidence and suggest a better alternative to weight of evidence. Finally, we propose using $L_p(P,Q)$, $p \ge 1$, as alternatives to information value to overcome its disadvantages.

1. Introduction

The information measure is an important concept in information theory and statistics. It is related to the system of measurement of information, or the amount of information, based on the probabilities of the events that convey information. Divergence measures are an important type of information measure. They are commonly used to quantify an appropriate distance or difference between two probability distributions.

Let $\Gamma_n = \{P = (p_1, p_2, \ldots, p_n) : p_i > 0, \sum_{i=1}^{n} p_i = 1\}$, $n \ge 2$, be the set of finite discrete probability distributions, as in [1]. For all $P, Q \in \Gamma_n$, the following divergence measures are well known in the literature of information theory and statistics.

Hellinger Discrimination [2]. One has $h(P,Q) = 1 - \sum_{i=1}^{n} \sqrt{p_i q_i} = \frac{1}{2}\sum_{i=1}^{n} \left(\sqrt{p_i} - \sqrt{q_i}\right)^2$.

Shannon’s Entropy [3]. One has $H(P) = -\sum_{i=1}^{n} p_i \log p_i$, which is sometimes referred to as a measure of uncertainty. The entropy of a discrete random variable is defined in terms of its probability distribution and is a good measure of randomness or uncertainty.

Note that in the original definition of Shannon’s entropy the log is to the base 2, and the entropy is expressed in bits in information theory. The log can be taken to any other base, and the resulting entropy differs from the base-2 entropy only by a constant factor, by the change-of-base formula for logarithms. Hence, without loss of generality, we will assume all the logs are natural logarithms.

Kullback and Leibler’s Relative Information [4]. One has $K(P,Q) = \sum_{i=1}^{n} p_i \log \frac{p_i}{q_i}$. Its symmetric form is the well-known $J$-divergence.

J-Divergence (Jeffreys [5], Kullback and Leibler [4]). One has $J(P,Q) = K(P,Q) + K(Q,P) = \sum_{i=1}^{n} (p_i - q_i) \log \frac{p_i}{q_i}$.

Triangular Discrimination [6]. One has $\Delta(P,Q) = \sum_{i=1}^{n} \frac{(p_i - q_i)^2}{p_i + q_i}$.

Symmetric Chi-Square Divergence (Dragomir et al. [7]). One has $\Psi(P,Q) = \chi^2(P,Q) + \chi^2(Q,P) = \sum_{i=1}^{n} \frac{(p_i - q_i)^2 (p_i + q_i)}{p_i q_i}$, where $\chi^2(P,Q) = \sum_{i=1}^{n} \frac{(p_i - q_i)^2}{q_i}$ is the well-known $\chi^2$-divergence (Pearson [8]).

Jensen-Shannon Divergence (Sibson [9], Burbea and Rao [10, 11]). One has $I(P,Q) = \frac{1}{2}\left[\sum_{i=1}^{n} p_i \log \frac{2p_i}{p_i + q_i} + \sum_{i=1}^{n} q_i \log \frac{2q_i}{p_i + q_i}\right]$.

Arithmetic-Geometric Divergence (Taneja [12]). One has $T(P,Q) = \sum_{i=1}^{n} \frac{p_i + q_i}{2} \log \frac{p_i + q_i}{2\sqrt{p_i q_i}}$. Moreover, $T(P,Q) = \frac{1}{2}\left[K\left(\frac{P+Q}{2}, P\right) + K\left(\frac{P+Q}{2}, Q\right)\right]$.

Taneja’s Divergence (Taneja [12]). One has $d(P,Q) = 1 - \sum_{i=1}^{n} \frac{\sqrt{p_i} + \sqrt{q_i}}{2}\sqrt{\frac{p_i + q_i}{2}}$. The information measures $I$, $T$, and $d$ can be written in terms of Kullback and Leibler’s relative information and the mean distribution $\frac{P+Q}{2}$; for instance, $I(P,Q) = \frac{1}{2}\left[K\left(P, \frac{P+Q}{2}\right) + K\left(Q, \frac{P+Q}{2}\right)\right]$.

Relative Information of Type $s$. Cressie and Read [13] considered the one-parametric generalization of the information measure $K(P,Q)$, called the relative information of type $s$, given by $K_s(P,Q) = [s(s-1)]^{-1}\left[\sum_{i=1}^{n} p_i^{s} q_i^{1-s} - 1\right]$, $s \ne 0, 1$. It also has some well-known special cases; for instance, $\lim_{s \to 1} K_s(P,Q) = K(P,Q)$, $\lim_{s \to 0} K_s(P,Q) = K(Q,P)$, $K_{1/2}(P,Q) = 4h(P,Q)$, $K_2(P,Q) = \frac{1}{2}\chi^2(P,Q)$, and $K_{-1}(P,Q) = \frac{1}{2}\chi^2(Q,P)$.

It is shown in [1] that $K_s(P,Q)$ is nonnegative and convex in the pair $(P,Q)$.

J-Divergence of Type s [14].

It admits the following particular cases:(i), (ii), (iii), (iv), (v).

Unified Generalization of the Jensen-Shannon Divergence and the Arithmetic-Geometric Mean Divergence [14].

It admits the following particular cases:(i), (ii), (iii), (iv), (v).

Taneja proved in [14] that all three $s$-type information measures are nonnegative and convex in the pair $(P,Q)$. He also obtained inequalities relating the various divergence measures; from these inequalities we see that $\Delta$, $I$, $h$, $J$, $T$, and $\Psi$ are all nonnegative.

We also note that in the original definition of $\Gamma_n$ in [1] all $p_i$ are required to be positive. Yet, in reality some $p_i$ may be 0. In this case, divergence measures such as $K(P,Q)$ and $J(P,Q)$ will be undefined. We have extended the definition of $\Gamma_n$ to include the cases when some $p_i = 0$. We assume that $0 \log 0 = 0$, which is easily justified by continuity since $x \log x \to 0$ as $x \to 0^{+}$. For convenience, we also assume $0 \log \frac{0}{q} = 0$ and $p \log \frac{p}{0} = \infty$ for $p > 0$.
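To make these conventions concrete, the following sketch (ours, written in Python with NumPy; the paper itself contains no code, and the function names are hypothetical) computes a few of the divergence measures above under the conventions $0\log 0 = 0$ and $p\log(p/0) = \infty$.

```python
import numpy as np

def kl(p, q):
    """Kullback and Leibler's relative information K(P, Q) in nats; 0*log(0/q) is taken as 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                                   # terms with p_i = 0 contribute nothing
    with np.errstate(divide="ignore"):             # p_i > 0 = q_i yields +inf, as in the convention
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def j_divergence(p, q):
    """J-divergence J(P, Q) = K(P, Q) + K(Q, P)."""
    return kl(p, q) + kl(q, p)

def hellinger(p, q):
    """Hellinger discrimination h(P, Q) = 1 - sum_i sqrt(p_i * q_i)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(1.0 - np.sum(np.sqrt(p * q)))

def triangular(p, q):
    """Triangular discrimination: sum_i (p_i - q_i)^2 / (p_i + q_i), skipping p_i = q_i = 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    s = p + q
    return float(np.sum((p[s > 0] - q[s > 0]) ** 2 / s[s > 0]))

def jensen_shannon(p, q):
    """Jensen-Shannon divergence I(P, Q) = (1/2)[K(P, M) + K(Q, M)] with M = (P + Q)/2."""
    m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    return 0.5 * (kl(p, m) + kl(q, m))

P, Q = [0.5, 0.3, 0.2], [0.2, 0.3, 0.5]
print(kl(P, Q), j_divergence(P, Q), hellinger(P, Q), triangular(P, Q), jensen_shannon(P, Q))
```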

A problem with the above divergence measures is that none of them is a real distance, that is, a metric, in topology. In this paper, we will study a class of metric divergence measures, $L_p(P,Q)$ with $p \ge 1$. We then study the underlying mathematics of a special divergence measure called information value, which is widely used in credit scoring. We propose using $L_p(P,Q)$ as alternatives to IV in order to overcome the disadvantages of information value.

The rest of this paper is organized as follows. In Section 2, after reviewing metric spaces, we show that none of the above divergence measures is a metric. We then study a class of metric divergence measures, $L_p(P,Q)$. Section 3 is concerned with information value. We examine a rule of thumb and weight of evidence and suggest a better alternative to weight of evidence. We then propose using $L_p(P,Q)$ as alternatives to IV to overcome the disadvantages of information value. Section 4 presents some numerical results. Finally, the paper is concluded in Section 5.

2. Metric Divergence Measures

2.1. Review of Metric Space

Definition 1. Suppose $d$ is a real-valued function on $X \times X$ such that, for all $x$, $y$, $z$ of a set $X$, (1) $d(x,y) \ge 0$ (nonnegativity), (2) $d(x,y) = 0$ if and only if $x = y$ (identity), (3) $d(x,y) = d(y,x)$ (symmetry), (4) $d(x,z) \le d(x,y) + d(y,z)$ (triangle inequality).
Such a “distance function” $d$ is called a metric on $X$, and the pair $(X, d)$ is called a metric space. If $d$ satisfies (1)–(3) but not necessarily (4), it is called a semimetric.
A metric space is a topological space in a natural manner, and therefore all definitions and theorems about general topological spaces also apply to a metric space. For instance, in a metric space one can define open and closed sets, convergence of sequences of points, compact space, and connected space.

Definition 2. A metric $d_1$ is said to be upper bounded by another metric $d_2$ if there exists a positive constant $c$ such that $d_1(x,y) \le c\, d_2(x,y)$ for all $x, y \in X$. In this case, $d_2$ is said to be lower bounded by $d_1$.
If $d_1$ is upper bounded by $d_2$, then convergence in the metric space $(X, d_2)$ implies convergence in the metric space $(X, d_1)$.

Definition 3. Two metrics $d_1$ and $d_2$ are equivalent if there exist 2 positive constants $c_1$ and $c_2$ such that $c_1 d_2(x,y) \le d_1(x,y) \le c_2 d_2(x,y)$ for all $x, y \in X$.
If two metrics $d_1$ and $d_2$ are equivalent, they determine the same convergent sequences.

2.2. Nonmetric Divergence Measures

Proposition 4. None of the divergence measures introduced in Section 1 is a metric in topology. Indeed, none of them satisfies the triangle inequality.

Proof. We disprove them either numerically or analytically by constructing counter examples in . (a)Let, , and . Then, , and. (b)Let, , and . Then , , , and (c)Let, , and . Then, , and .(d)Let , , and . Then Hence,. Indeed, is not symmetric either (see [15]).(e)Let, , . Then Hence,.(f)Let, and . Then , , , and.(g)Let , , . Then, , , , and .
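Since the numerical details of the counterexamples above did not survive typesetting, here is an illustrative check of our own (the triple $P$, $Q$, $R$ below is not necessarily the one used in the proof): for these three distributions in $\Gamma_2$, the Kullback-Leibler, J-, and Hellinger divergences all violate the triangle inequality.

```python
import numpy as np

def kl(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))        # all entries are positive here

measures = {
    "K (Kullback-Leibler)": kl,
    "J (symmetrized KL)":   lambda p, q: kl(p, q) + kl(q, p),
    "h (Hellinger)":        lambda p, q: 1.0 - float(np.sum(np.sqrt(np.asarray(p) * np.asarray(q)))),
}

P, Q, R = [0.5, 0.5], [0.3, 0.7], [0.1, 0.9]       # our illustrative triple
for name, d in measures.items():
    lhs, rhs = d(P, R), d(P, Q) + d(Q, R)
    print(f"{name}: d(P,R) = {lhs:.4f}, d(P,Q) + d(Q,R) = {rhs:.4f}, triangle violated: {lhs > rhs}")
```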

2.3. A Natural Metric Divergence Measure

If we pick up the common part of and , we will obtain a metric divergence measure Sinceand for all, bothandare upper bounded by; that is, , .

2.4. $L_p$-Divergence

Recall that for a real number $p \ge 1$, the $l_p$-norm of a vector $x = (x_1, \ldots, x_n)$ is defined by $\|x\|_p = \left(\sum_{i=1}^{n} |x_i|^p\right)^{1/p}$. We will apply the $l_p$-metric induced by the $l_p$-norm to divergence measures to obtain the $L_p$-divergence. For convenience, we will use the upper case notation $L_p$.

Definition 5. For two probability distributions $P = (p_1, \ldots, p_n)$ and $Q = (q_1, \ldots, q_n)$, one defines their $L_p$-divergence as $L_p(P,Q) = \left(\sum_{i=1}^{n} |p_i - q_i|^p\right)^{1/p}$. Here, $p$ is used as a superscript, a subscript, and a radical root. It should not be confused with the vector $P$.
In particular, when $p = 1$, we have the variational distance: $L_1(P,Q) = \sum_{i=1}^{n} |p_i - q_i|$. When $p = 2$, we obtain the Euclidean distance: $L_2(P,Q) = \left(\sum_{i=1}^{n} (p_i - q_i)^2\right)^{1/2}$. When $p = \infty$, we have $L_\infty(P,Q) = \max_{1 \le i \le n} |p_i - q_i|$. It is known that $l_p$-norms are decreasing in $p$. Moreover, all $l_p$ metrics are equivalent.

Lemma 6. If $1 \le p \le q \le \infty$, then the $l_p$-norms in $\mathbb{R}^n$ satisfy $\|x\|_q \le \|x\|_p$.

Corollary 7. If $1 \le p \le q \le \infty$, then the $L_p$-divergences satisfy $L_q(P,Q) \le L_p(P,Q)$.

Theorem 8. The $L_p$-divergences are all bounded by the constant 2 for $p \ge 1$; that is, $L_p(P,Q) \le 2$ for all $P, Q \in \Gamma_n$. In particular, $L_2(P,Q) \le \sqrt{2}$ and $L_\infty(P,Q) \le 1$.

Proof. We first prove the general case. From Corollary 7, it is sufficient to prove that the $L_1$-divergence is bounded by 2. Let $P$ and $Q$ be two probability distributions. Without loss of generality, let us assume that $p_i \ge q_i$ for $1 \le i \le k$ and $p_i < q_i$ for $k+1 \le i \le n$. Then, we have $L_1(P,Q) = \sum_{i=1}^{k} (p_i - q_i) + \sum_{i=k+1}^{n} (q_i - p_i) \le \sum_{i=1}^{k} p_i + \sum_{i=k+1}^{n} q_i \le 1 + 1 = 2$. Noting that $|p_i - q_i| \le \max(p_i, q_i) \le 1$ for each $i$, we have $L_\infty(P,Q) = \max_{1 \le i \le n} |p_i - q_i| \le 1$. Hence, $L_2^2(P,Q) = \sum_{i=1}^{n} (p_i - q_i)^2 \le L_\infty(P,Q)\, L_1(P,Q) \le 2$, so $L_2(P,Q) \le \sqrt{2}$. Therefore, we have proved the 2 particular cases.
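The following sketch (ours) computes $L_p(P,Q)$ for $p = 1, 2, \infty$ and checks, on randomly sampled pairs of distributions, that the values decrease in $p$ and stay within the bounds of Theorem 8.

```python
import numpy as np

def lp(p_dist, q_dist, r):
    """L_p divergence between two discrete distributions; r = np.inf gives the maximum metric."""
    d = np.abs(np.asarray(p_dist, float) - np.asarray(q_dist, float))
    return float(d.max()) if np.isinf(r) else float(np.sum(d ** r) ** (1.0 / r))

rng = np.random.default_rng(0)
for _ in range(1000):
    P, Q = rng.dirichlet(np.ones(6)), rng.dirichlet(np.ones(6))   # random points of Gamma_6
    l1, l2, linf = lp(P, Q, 1), lp(P, Q, 2), lp(P, Q, np.inf)
    assert l1 >= l2 >= linf                                       # Corollary 7: decreasing in p
    assert l1 <= 2 and l2 <= np.sqrt(2) and linf <= 1             # Theorem 8
print("monotonicity and bounds hold for all sampled pairs")
```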

The following result shows that the relative entropy $K(P,Q)$ is lower bounded by one half of the square of the $L_1$-divergence. Its proof can be found in [15].

Lemma 9. One has $K(P,Q) \ge \frac{1}{2} L_1^2(P,Q)$.

Theorem 10. The square root of $J(P,Q)$ is lower bounded by $L_1(P,Q)$; that is, $\sqrt{J(P,Q)} \ge L_1(P,Q)$.

Proof. Applying Lemma 9 to $(P,Q)$ and $(Q,P)$, we obtain $K(P,Q) + K(Q,P) \ge L_1^2(P,Q)$. Note that the left-hand side is nothing but $J(P,Q)$. The proof is completed by taking the square root on both sides.
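As a quick numerical sanity check of Lemma 9 and Theorem 10 (our own sketch, kept self-contained):

```python
import numpy as np

def kl(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(1)
for _ in range(1000):
    P, Q = rng.dirichlet(np.ones(5)), rng.dirichlet(np.ones(5))
    l1 = float(np.sum(np.abs(P - Q)))
    assert kl(P, Q) >= 0.5 * l1 ** 2               # Lemma 9 (Pinsker's inequality in nats)
    assert np.sqrt(kl(P, Q) + kl(Q, P)) >= l1      # Theorem 10: sqrt(J) >= L_1
print("Lemma 9 and Theorem 10 hold for all sampled pairs")
```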

Remark 11. $K(P,Q)$ and hence $J(P,Q)$ are not bounded by any constant. This can be seen, for example, by taking $P = (1, 0)$ and $Q = (\varepsilon, 1 - \varepsilon)$ and letting $\varepsilon \to 0^{+}$, so that $K(P,Q) = \log(1/\varepsilon) \to \infty$. Since the $L_p$-divergences are all bounded by the constant 2 for $p \ge 1$, $K$ and hence $J$ are not equivalent to the $L_p$-divergences.
We now establish the convexity property for the $L_p$-divergence, which is useful in optimization.

Theorem 12. $L_p(P,Q)$ is convex in the pair $(P,Q)$; that is, if $(P_1, Q_1)$ and $(P_2, Q_2)$ are two pairs of probability distributions, then $L_p\big(\lambda P_1 + (1-\lambda) P_2,\, \lambda Q_1 + (1-\lambda) Q_2\big) \le \lambda L_p(P_1, Q_1) + (1-\lambda) L_p(P_2, Q_2)$ for all $0 \le \lambda \le 1$.

Proof. Let $0 \le \lambda \le 1$, and write $P_j = (p_{j1}, \ldots, p_{jn})$ and $Q_j = (q_{j1}, \ldots, q_{jn})$ for $j = 1, 2$. Then $L_p\big(\lambda P_1 + (1-\lambda) P_2,\, \lambda Q_1 + (1-\lambda) Q_2\big) = \left(\sum_{i=1}^{n} \big|\lambda (p_{1i} - q_{1i}) + (1-\lambda)(p_{2i} - q_{2i})\big|^p\right)^{1/p} \le \lambda \left(\sum_{i=1}^{n} |p_{1i} - q_{1i}|^p\right)^{1/p} + (1-\lambda) \left(\sum_{i=1}^{n} |p_{2i} - q_{2i}|^p\right)^{1/p} = \lambda L_p(P_1, Q_1) + (1-\lambda) L_p(P_2, Q_2)$. Here, the first inequality is from the well-known Minkowski inequality.

It follows from the following results that we can generate infinitely many metric divergence measures using the existing ones.

Proposition 13. If $d_1$ and $d_2$ are two metric divergence measures, so are the following 3 measures: (1) $c_1 d_1 + c_2 d_2$ for all $c_1 > 0$ and $c_2 > 0$, (2) $\max(d_1, d_2)$, (3) $\sqrt{d_1^2 + d_2^2}$.

Proof. The proof of (1) and (2) is trivial and hence will be omitted. As for (3), it is sufficient to verify the triangle inequality, since nonnegativity, identity, and symmetry are all easy to verify.
To begin with, let us first prove an inequality: for any nonnegative $a$, $b$, $c$, $d$, $\sqrt{(a+c)^2 + (b+d)^2} \le \sqrt{a^2 + b^2} + \sqrt{c^2 + d^2}$ (33). It is easy to see that inequality (33) is equivalent to the following inequality: $(a+c)^2 + (b+d)^2 \le a^2 + b^2 + c^2 + d^2 + 2\sqrt{(a^2 + b^2)(c^2 + d^2)}$ (34). Inequality (34) is equivalent to the following inequality: $ac + bd \le \sqrt{(a^2 + b^2)(c^2 + d^2)}$ (35). Inequality (35) is equivalent to the following inequality: $0 \le (ad - bc)^2$ (36). Since inequality (36) is always true, inequality (33) is true.
Now, let us assume $P$, $Q$, $R$ are 3 arbitrary probability distributions. Since $d_1$ and $d_2$ satisfy the triangle inequality, we have $\sqrt{d_1^2(P,R) + d_2^2(P,R)} \le \sqrt{\big(d_1(P,Q) + d_1(Q,R)\big)^2 + \big(d_2(P,Q) + d_2(Q,R)\big)^2} \le \sqrt{d_1^2(P,Q) + d_2^2(P,Q)} + \sqrt{d_1^2(Q,R) + d_2^2(Q,R)}$. The last inequality results from inequality (33).

Remark 14. da Costa and Taneja [16] show that and are metrics divergence measures for all . Since, , , , , , and are all constant factors of special cases of or , they are all metric divergence measures by Proposition 13. Yet, da Costa and Taneja did not disprove or discuss any applications of these divergence measures.

3. Information Value in Credit Scoring

Information value, or IV in short, is a widely used measure in credit scoring in the financial industry. It is a numerical value that quantifies the predictive power of an independent continuous variable $x$ in capturing the binary dependent variable $y$. Mathematically, it is defined as [17] $\mathrm{IV} = \sum_{i=1}^{n} \left(\frac{g_i}{G} - \frac{b_i}{B}\right) \log \frac{g_i / G}{b_i / B}$, where $n$ is the number of bins or groups of variable $x$, $g_i$ and $b_i$ are the numbers of good and bad accounts in bin $i$, and $G$ and $B$ are the total numbers of good accounts and bad accounts in the population. Hence, $P = (g_1/G, \ldots, g_n/G)$ and $Q = (b_1/B, \ldots, b_n/B)$ are the distributions of good accounts and bad accounts. Therefore, $\mathrm{IV} = J(P, Q)$. Usually, “good” means $y = 0$ and “bad” means $y = 1$. It could be the other way around, since IV is symmetric about good and bad. If $g_i/G = b_i/B$ for all $i$, then IV = 0; that is, $x$ has no information on $y$.
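A minimal IV calculation from binned counts might look as follows (our sketch; the counts are hypothetical). Note that a bin with 0 good or 0 bad accounts makes the logarithm blow up, a limitation discussed later.

```python
import numpy as np

def information_value(goods, bads):
    """IV = sum_i (g_i/G - b_i/B) * ln((g_i/G) / (b_i/B)) over the bins of one variable."""
    g, b = np.asarray(goods, float), np.asarray(bads, float)
    pg, pb = g / g.sum(), b / b.sum()                    # distributions of goods and bads across bins
    return float(np.sum((pg - pb) * np.log(pg / pb)))    # undefined if any bin has 0 goods or 0 bads

# Hypothetical binned counts for one variable (5 bins).
goods = [1200, 1500, 1800, 2100, 2400]
bads  = [  90,   70,   60,   45,   35]
print(round(information_value(goods, bads), 4))
```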

IV is mainly used to reduce the number of variables as the initial step in the logistic regression, especially in big data with many variables. IV is based on an analysis of each individual predictor in turn without taking into account the other predictors.

3.1. IV and WOE

One advantage of IV is its close tie with weight of evidence (WOE), defined by $\mathrm{WOE}_i = \log \frac{g_i / G}{b_i / B}$. WOE measures the strength of each grouped attribute in separating good and bad accounts. According to [17], WOE is the log of the odds ratio, which measures the odds of being good. Moreover, WOE is monotonic and linear.

Yet, WOE is not an accurate measure in that it is not the log of odds, and hence its linearity is not guaranteed. Indeed, $g_i/G$ and $b_i/B$ are from two different probability distributions. They represent the number of good accounts in bin $i$ divided by the total number of good accounts in the population and the number of bad accounts in bin $i$ divided by the total number of bad accounts in the population, respectively. In general, $G \ne B$, as can be seen from Exhibit 6.2 in [17].

To make WOE a log of odds, let us change its definition to $\log \frac{g_i / n_i}{b_i / n_i} = \log \frac{g_i}{b_i}$ and denote it by WOE1. The cancelled $n_i$ is the number of accounts in bin $i$, and so $n_i = g_i + b_i$.

As is well known, the logistic regression models the log odds, expressed in conditional probabilities, as a linear function of the independent variable; that is, $\log \frac{P(\text{good} \mid x)}{P(\text{bad} \mid x)} = \beta_0 + \beta_1 x$. When $x$ falls into bin $i$, the empirical log odds becomes $\log \frac{g_i}{b_i} = \mathrm{WOE1}_i$. Hence, the WOE1 values are either continuously increasing or continuously decreasing in a linear fashion.

IV and WOE1 can be used together to select independent variables for logistic regression. When a continuous variable $x$ has a large IV, we make it a candidate variable for logistic regression if its WOE1 values are linear. It is common to plot the WOE1 values versus the mean values of $x$ in bin $i$.
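The sketch below (ours; the counts and bin means are hypothetical) computes both WOE and WOE1 per bin, together with the points one would plot against the bin means of $x$.

```python
import numpy as np

goods = np.array([1200, 1500, 1800, 2100, 2400], float)    # hypothetical good counts per bin
bads  = np.array([  90,   70,   60,   45,   35], float)    # hypothetical bad counts per bin
x_means = np.array([22.0, 31.0, 40.0, 49.0, 58.0])         # mean of x within each bin

woe  = np.log((goods / goods.sum()) / (bads / bads.sum()))  # WOE: log ratio of the two distributions
woe1 = np.log(goods / bads)                                 # WOE1: log odds of good within the bin

for xm, w, w1 in zip(x_means, woe, woe1):
    print(f"bin mean x = {xm:5.1f}   WOE = {w:7.4f}   WOE1 = {w1:7.4f}")
# A scatter plot of (x_means, woe1), e.g. with matplotlib, is the usual linearity check.
```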

3.2. A Rule of Thumb of IV

Intuitively, the larger the IV, the more predictive the independent variable. However, if IV is too large, it should be checked for over-predicting. For instance, $x$ may be a post-knowledge variable.

To quantify IV, a rule of thumb is proposed in [17, 18]: (i) less than 0.02: unpredictive, (ii) 0.02 to 0.1: weak, (iii) 0.1 to 0.3: medium, (iv) 0.3+: strong.

In addition, mathematical reasoning behind the rule of thumb is given in [18]. In more detail, IV can be expressed as the average of 2 likelihood ratio test statistics, each following a Chi-square distribution with $n - 1$ degrees of freedom. The close relationship between IV and the likelihood ratio test allows using the Chi-square distribution to assign a significance level.

However, this is doubtful. On the one hand, the two likelihood ratio test statistics are not necessarily independent. On the other hand, even if they are independent, it is still not enough. Let us assume that 2 × IV follows a Chi-square distribution with $2(n-1)$ degrees of freedom. Yet, the critical values of the Chi-square distribution are too large compared with the values in the rule of thumb, as can be seen from the Chi-square tables in many books about probability, say [19].

We only list the first several rows of Table 1. As Table 1 grows with increasing DF, the values in each column increase. One may use the Excel function CHIINV, or its newer and more accurate version CHISQ.INV.RT, to build Table 1; it returns the inverse of the right-tailed probability of the Chi-square distribution with the given degrees of freedom.

The critical values are as small as the values in the rule of thumb only when the degrees of freedom are as small as 6. For instance, there is a probability of 0.995 that a Chi-square random variable with 6 degrees of freedom will be larger than or equal to 0.68; that is, $P(\chi^2_6 \ge 0.68) = 0.995$. Yet, there is a probability of 0.995 that a Chi-square random variable with 10 degrees of freedom will be larger than or equal to 2.16. There is a probability of 0.995 that a Chi-square random variable with 18 degrees of freedom will be larger than or equal to 6.26. On the basis of the above, the rule of thumb is more or less empirical.
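The quoted critical values can be reproduced with SciPy (our sketch; Table 1 itself can be built with the Excel functions mentioned above):

```python
from scipy.stats import chi2

# Right-tailed critical values x with P(X >= x) = 0.995 for a Chi-square variable with df degrees of freedom.
for df in (6, 10, 18):
    print(df, round(chi2.isf(0.995, df), 3))   # 0.676, 2.156, 6.265, i.e. roughly 0.68, 2.16, 6.26
```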

3.3. Calculation of IV

The calculation of IV is simple once binning is done. In this sense, IV is a subjective measure. It depends on how the binning is done and how many bins are used. Different binning methods may result in different IV values, whereas the logistic regression in the later stages will not use the information of these bins.

In practice, 10 or 20 bins are used. The more bins there are, the better the chance that the good accounts will be separated from the bad accounts. Yet, we cannot divide the values of $x$ indefinitely, since we may not be able to avoid 0 good accounts or 0 bad accounts in some bins. To overcome the limitation of the logarithm function in the $J$-divergence, the binning should avoid 0 good accounts or 0 bad accounts in any bin.

The idea of binning is to assign values of $x$ with similar behaviors to the same group or bin. In particular, equal values of $x$ must fall into the same bin. A natural way of binning is to sort the data first and then divide them into bins with an equal number of observations (the last bin may have fewer observations). This works well if $x$ has no repeating values at all. In reality, $x$ often has repeating values (called tied values in statistics), which may cause problems when the tied values of $x$ fall into different bins.

Proc Rank in SAS is a good candidate for binning (as opposed to the function cut in R). When there are no tied values in $x$, it simply divides the values of $x$ into bins of equal size. When there are tied values in $x$, it treats the tied values according to its option TIES.

Proc Rank begins with sorting the values of $x$ within a BY group. It then assigns each nonmissing value an ordinal number that indicates its rank or position in the sequence. In case of ties, option TIES will be used. Depending on whether TIES = LOW, HIGH, or MEAN (the default), the lowest rank, the highest rank, or the average rank will be assigned to all the tied values. Next, the following formula is used to calculate the binning value of each nonmissing value of $x$: $\left\lfloor \frac{\mathrm{rank} \times k}{n + 1} \right\rfloor$, where $\lfloor \cdot \rfloor$ is the floor function, rank is the value’s rank, $k$ is the number of bins (the GROUPS= value), and $n$ is the number of nonmissing observations. Note that the range of the binning values is from 0 to $k - 1$. Finally, all the values of $x$ are binned according to their binning values. In case one bin has less than 5% of the population, we may combine this bin with its neighboring bin.

To illustrate the use of Proc Rank with GROUPS = 10 and TIES = MEAN, let us look at an imaginary dataset with one variable age and 100 observations. Assume this dataset has been sorted and has fifty observations with a value of 10, thirty observations with a value of 20, ten observations with a value of 30, nine observations with a value of 40, and one observation with a value of 50.

The first 50 observations have a tied value of 10. Each of them will be assigned an average rank of $(1 + 2 + \cdots + 50)/50 = 25.5$ and hence a binning value of $\lfloor 25.5 \times 10 / 101 \rfloor = 2$. The next 30 observations have a tied value of 20. Each of them will be assigned an average rank of $(51 + \cdots + 80)/30 = 65.5$ and hence a binning value of $\lfloor 65.5 \times 10 / 101 \rfloor = 6$. The next 10 observations have a tied value of 30. Each of them will be assigned an average rank of 85.5 and hence a binning value of $\lfloor 85.5 \times 10 / 101 \rfloor = 8$. The next 9 observations have a tied value of 40. Each of them will be assigned an average rank of 95 and hence a binning value of $\lfloor 95 \times 10 / 101 \rfloor = 9$. The last observation has a rank of 100 and hence will be assigned a binning value of $\lfloor 100 \times 10 / 101 \rfloor = 9$. In summary, the 100 observations are divided into 4 bins: the first 50 observations, the next 30 observations, the next 10 observations, and the last 10 observations.
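A small Python sketch (ours, using pandas) that mimics Proc Rank with GROUPS=10 and TIES=MEAN reproduces the binning above:

```python
import pandas as pd

def proc_rank_groups(values, k):
    """Mimic SAS Proc Rank with GROUPS=k and TIES=MEAN:
    binning value = floor(rank * k / (n + 1)), using average ranks for tied values."""
    s = pd.Series(values)
    n = s.notna().sum()                      # number of nonmissing observations
    ranks = s.rank(method="average")         # average rank for ties; missing values stay missing
    return ranks * k // (n + 1)              # floor division gives binning values 0, ..., k - 1

ages = [10] * 50 + [20] * 30 + [30] * 10 + [40] * 9 + [50]
bins = proc_rank_groups(ages, k=10)
print(bins.value_counts().sort_index())      # bins 2, 6, 8, 9 with sizes 50, 30, 10, 10
```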

Remark 15. Missing values are not ranked and are left missing in Proc Rank. Yet, they may be kept in a separate bin by means of Proc Summary or Proc Means in the calculation of IV.

Remark 16. If $x$ has less than $k$ different values, the number of bins produced by Proc Rank will be less than $k$.

After binning is done for $x$, a simple SAS program can be written to calculate IV. Meanwhile, WOE1 values are calculated per bin, as for WOE in [17]. If IV is less than 0.02, we will discard this independent variable. If IV is larger than 0.3, over-predicting will be checked. If IV is between 0.02 and 0.3 and the WOE1 values are linear, we will include this independent variable as a candidate variable in logistic regression. If IV is between 0.02 and 0.3 but the WOE1 values are not linear, we may transform the independent variable to make WOE1 more linear. If a transformation preserves the rank order of the original independent variable, the binning by Proc Rank will be preserved. Therefore, we have obtained the following result.
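In code, this screening rule amounts to a few comparisons (our sketch; the thresholds come from the rule of thumb in Section 3.2, and the WOE1 linearity check is left abstract):

```python
def screen_variable(iv, woe1_is_linear):
    """Decide what to do with one candidate variable based on its IV and WOE1 linearity."""
    if iv < 0.02:
        return "discard: unpredictive"
    if iv > 0.3:
        return "strong: check for over-predicting"
    if woe1_is_linear:
        return "keep as a candidate for logistic regression"
    return "try a rank-preserving transformation of x and recheck WOE1 linearity"

print(screen_variable(0.25, woe1_is_linear=True))
```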

Proposition 17. IV, when binning by Proc Rank, is invariant under any strictly monotonic transformations.

3.4. Mathematical Properties of IV

IV is the information statistic for the difference between the information in the good accounts and the information in the bad accounts. Indeed, IV is the $J$-divergence between the distribution of good accounts and the distribution of bad accounts. Thus, IV is lower bounded by the square of $L_1(P,Q)$ by Theorem 10.

Property 1. One has $\mathrm{IV} \ge L_1^2(P,Q)$.

Property 2. IV satisfies the inequalities (15); that is, In particular, IV is upper bounded by: Note that there is a direct proof of the above inequality, which is much easier than that in [14]. Let us assume without loss of generality that , , and .

Proof. Using the identity and making Taylor’s expansion of function around 1 for , we obtain Multiplying and summing up from to , we obtain Similarly,Multiplyingand summing up from to , we obtain The proof is completed by noting that

Theorem 18. IV is convex in the pair $(P,Q)$.

Proof. Applying Theorem 2.7.2 from [15], the convexity of the relative entropy, to both $K(P,Q)$ and $K(Q,P)$, we obtain that $\mathrm{IV} = K(P,Q) + K(Q,P)$ is convex in the pair $(P,Q)$.

Remark 19. is not convex albeit a metric.

Property 3. If more than 95% of the population of $x$ have the same value, then IV = 0. In particular, if $x$ has just one value, then IV = 0.

Proof. Assume more than 95% of the population of $x$ have the same value $v$. Then, all the population with value $v$ will fall into the same bin, called the majority bin. The rest of the population, whose values are different from $v$, will be combined into the majority bin. Thus, there will be only one bin for all the values of $x$. Therefore $g_1/G = b_1/B = 1$, and hence IV = 0.

Remark 20. If the population whose values are not $v$ is not combined into the majority bin, then IV could be larger than 0.02. As shown in Table 2, $x$ has 10000 observations, where 95.8%, or 9580 observations, have the same value, say 2, and the remaining 420 observations have another value, say 4. Both bins contribute a value larger than 0.02 to IV. Statistically, the 4.2% of the population are outliers and can be neglected. Hence, it is more meaningful to say that $x$ has no information on $y$.

3.5. Alternatives to IV

As we have seen above, IV has 3 shortcomings: (1) it is not a metric; (2) no groups are allowed to have 0 bad accounts or 0 good accounts; and (3) its range is too broad, from 0 to $\infty$.

Theoretically, any divergence measure of the difference or distance between the good distribution and the bad distribution can be an alternative to IV. In particular, $L_p(P,Q)$ ($p \ge 1$) are good alternatives to IV. They overcome all 3 shortcomings of IV: (1) $L_p(P,Q)$ are all metrics; (2) they allow bins to have 0 bad accounts or 0 good accounts; and (3) they all have a much narrower range, from 0 to 2.

While $L_p(P,Q)$ do not seem to have a tie with weight of evidence, they can be made as quantifiable as IV. For instance, we may adopt a rule of thumb for $L_p(P,Q)$: (i) weak: less than 6% of its upper bound, (ii) medium: 6% to 30% of its upper bound, (iii) strong: larger than 30% of its upper bound.

In particular, (i) weak: less than 0.12 for $L_1$, 0.085 for $L_2$, and 0.06 for $L_\infty$, (ii) medium: 0.12 to 0.60 for $L_1$, 0.085 to 0.424 for $L_2$, and 0.06 to 0.30 for $L_\infty$, (iii) strong: 0.60+ for $L_1$, 0.424+ for $L_2$, and 0.30+ for $L_\infty$.
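The following sketch (ours; the binned good and bad distributions are hypothetical) computes $L_1$, $L_2$, and $L_\infty$ between the good and bad distributions and applies the thresholds above.

```python
import numpy as np

def lp_divergence(p_dist, q_dist, r):
    """L_p divergence between two discrete distributions; r = np.inf gives the maximum metric."""
    d = np.abs(np.asarray(p_dist, float) - np.asarray(q_dist, float))
    return float(d.max()) if np.isinf(r) else float(np.sum(d ** r) ** (1.0 / r))

BOUNDS = {1: 2.0, 2: np.sqrt(2.0), np.inf: 1.0}     # upper bounds of L_1, L_2, L_inf

def strength(p_dist, q_dist, r):
    frac = lp_divergence(p_dist, q_dist, r) / BOUNDS[r]
    return "weak" if frac < 0.06 else ("medium" if frac <= 0.30 else "strong")

good = [0.10, 0.20, 0.30, 0.25, 0.15]               # hypothetical distribution of goods across bins
bad  = [0.25, 0.30, 0.25, 0.12, 0.08]               # hypothetical distribution of bads across bins
for r in (1, 2, np.inf):
    print(r, round(lp_divergence(good, bad, r), 4), strength(good, bad, r))
```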

Remark 21. If $L_1(P,Q) > 0.12$, then IV > 0.0144.

The lower bound 0.12 of $L_1(P,Q)$ can be adjusted as needed. It can also be combined with IV to enhance accuracy. For instance, when the number of independent variables is large enough, we may select only those which satisfy both the lower bound of $L_1(P,Q)$ and that of IV.

4. Numerical Results

To illustrate our results, we use Exhibit 6.2 in [17] but add one column for WOE1. We use the real WOE, not its “more user-friendly” form of 100 times WOE.

Table 3 reports the resulting values. From Figure 1, we also see that the WOE1 for nonmissing values has a linear trend for the variable age.

5. Conclusions

In this paper, we have proposed a class of metric divergence measures, namely, $L_p(P,Q)$, $p \ge 1$, and studied their mathematical properties. We studied information value, an important divergence measure widely used in credit scoring. After exploring the mathematical reasoning of a rule of thumb and weight of evidence, we suggested an alternative to weight of evidence. Finally, we proposed using $L_p(P,Q)$ as alternatives to information value to overcome its disadvantages.