Abstract

The negative binomial distribution becomes highly skewed under extreme dispersion. Even at moderately large sample sizes, the distribution of the sample mean retains a heavy right tail, and the standard normal approximation often does not provide adequate inferences about the data's expected value in this setting. In previous work, we examined alternative methods of generating confidence intervals for the expected value based upon Gamma and Chi Square approximations and tail probability bounds such as Bernstein's inequality. We now propose growth estimators of the negative binomial mean. Under high dispersion, zero values are likely to be overrepresented in the data. A growth estimator constructs a normal-style confidence interval by effectively removing a small, predetermined number of zeros from the data. We propose growth estimators based upon multiplicative adjustments of the sample mean and upon direct removal of zeros from the sample. These methods do not require estimating the nuisance dispersion parameter. We will demonstrate that the growth estimators' confidence intervals provide improved coverage over a wide range of parameter values and that the estimators asymptotically converge to the sample mean. Interestingly, the proposed methods succeed despite adding both bias and variance to the normal approximation.

1. Introduction

Confidence intervals are routinely applied to limited samples of data based upon their asymptotic properties. For instance, the central limit theorem states that the sample mean will have an approximately Normal distribution for large sample sizes provided that the data's second moment is finite. This Normal approximation is a fundamental tool for inferences about the data's expected value in a wide variety of settings. Confidence intervals for the mean are often based upon Normal quantiles even when the sample size is quite moderate (e.g., 30 or 50). However, the Normal approximation's quality cannot be ensured for highly skewed distributions [1]. In this setting, the sample mean may converge to the Normal in distribution at a much slower rate. The Negative Binomial distribution is known to have an extremely heavy right tail, especially under high dispersion.

In previous work [2], we established that the Normal confidence interval significantly undercovers the mean at moderate sample sizes. We also suggested alternatives based upon Gamma and Chi Square approximations along with tail probability bounds such as Bernstein's inequality. We now propose growth estimators for the mean. These estimators seek to account for the relative overrepresentation of zero values in highly dispersed negative binomial data. This may be accomplished by imposing a correction factor as a multiplicative adjustment to the sample mean or by directly removing a small number of zero values from the sample. We will demonstrate that these alternative procedures provide confidence intervals with improved coverage. Estimators based on the growth method can also be shown to converge to the Normal approximation as the sample size grows.

Section 2 reviews the negative binomial distribution and discusses parameter estimation under high dispersion. Section 3 reviews existing methods of constructing confidence intervals for the mean of negative binomial random variables. In Section 4, we introduce the growth method and propose two new confidence intervals for the negative binomial mean. Section 5 presents a simulation experiment comparing these methods to existing procedures in terms of coverage probability. Finally, we conclude with a discussion in Section 6.

2. The Negative Binomial Distribution

The Negative Binomial distribution models the probability that a total of $x$ failures will result before $\alpha$ successes are observed. Each count is constructed from (possibly unobserved) independent trials that each result in success with a fixed probability $p$. The expected value of $X$ is $\mu = \alpha(1-p)/p$. The Negative Binomial distribution may be alternatively parameterized in terms of the mean $\mu$ and dispersion $\alpha$ directly [3]. For any $\mu > 0$ and $\alpha > 0$, a Negative Binomial $(\mu, \alpha)$ random variable $X$ has the probability mass function

\[ P(X = x) = \frac{\Gamma(x + \alpha)}{\Gamma(\alpha)\, x!} \left( \frac{\alpha}{\alpha + \mu} \right)^{\alpha} \left( \frac{\mu}{\alpha + \mu} \right)^{x}, \quad x = 0, 1, 2, \ldots \tag{1} \]
The variance of $X$ is given by $\sigma^2 = \mu + \mu^2/\alpha$. As $\alpha$ grows large, (1) converges to the probability mass function of a Poisson $(\mu)$ random variable. Smaller values of $\alpha$ lead to larger variances, so the selection of $\alpha$ controls the degree of dispersion in the data. At very small values of $\alpha$, the dispersion becomes extreme, and the data may be relatively sparse. Because the Negative Binomial distribution has a heavy right tail, the small number of nonzero values may be spread over an extremely wide range. In light of these concerns, it is not surprising that the normal approximation exhibits a slow convergence as a function of sample size.

For all methods, we assume that the data $X_1, \ldots, X_n$ consist of independent, identically distributed (i.i.d.) Negative Binomial $(\mu, \alpha)$ random variables, where $\mu$ and $\alpha$ are unknown. We seek to generate accurate and reliable inferences about $\mu$. The dispersion $\alpha$ may be considered a nuisance parameter. In the previous work of Shilane et al. [2], some methods relied upon estimates of $\alpha$ while others directly estimated the variance with the unbiased estimator $\hat{\sigma}^2 = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2$. In general, we prefer to estimate the variance directly where possible. A variety of research suggests that estimating small values of $\alpha$ is especially difficult in small sample sizes. Some existing procedures include the method of moments estimator $\hat{\alpha} = \bar{X}^2 / (\hat{\sigma}^2 - \bar{X})$ and an iterative maximum likelihood estimator (MLE) [4, 5]. Aragón et al. [6] and Ferreri [7] provide conditions for the existence and uniqueness of the MLE. Meanwhile, Pieters et al. [8] compare an MLE procedure to the method of moments at small sample sizes. These procedures encounter difficulties when the variance estimate $\hat{\sigma}^2$ is less than the sample mean $\bar{X}$. The method of moments estimator will provide an implausible negative number, while maximum likelihood procedures will produce highly variable results by constraining the fitted variance to be at least as large as the sample mean. (Additionally, the glm.nb method in the R statistical programming language will often produce computational errors in this setting rather than return an MLE for $\alpha$.)
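A minimal R sketch of the method of moments computation illustrates the difficulty (the function name alpha.mom is ours and purely illustrative):

    # Method of moments estimate of the dispersion alpha from a vector x of
    # i.i.d. Negative Binomial counts. The moment equation
    # sigma^2 = mu + mu^2 / alpha yields alpha = mu^2 / (sigma^2 - mu).
    alpha.mom <- function(x) {
      xbar <- mean(x)
      s2 <- var(x)                 # unbiased sample variance
      if (s2 <= xbar) return(NA)   # no plausible positive solution exists
      xbar^2 / (s2 - xbar)
    }

When $\hat{\sigma}^2 \le \bar{X}$, the sketch returns NA rather than the implausible negative (or infinite) value that the moment equation would otherwise produce.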

For any $\gamma \in (0, 1)$, we seek to construct high-quality $(1 - \gamma) \cdot 100\%$ confidence intervals for the mean $\mu$ based upon a sample of $n$ i.i.d. random variables. A method's coverage probability is the chance that the interval will contain the parameter of interest as a function of the sample size $n$ and the parameters $\mu$ and $\alpha$. We will primarily judge the quality of a confidence interval in terms of its coverage probability, which ideally would be exactly $1 - \gamma$ across all sample sizes and parameter values. However, there are many secondary factors that can impact the selection of methods. Shorter intervals provide greater precision and insight about the underlying scientific problem. The variability of this length should also be minimized. When this variability is too great, the resulting interval may significantly understate or overstate the degree of certainty about the parameter range. Where possible, we prefer methods that can be assured of producing plausible parameter ranges. Since the Negative Binomial distribution draws from a nonnegative sample space, we prefer methods that will result in a nonnegative confidence interval.

3. Prior Methods

The normal approximation and the bootstrap bias-corrected and accelerated (BCa) method [9] are considered standard techniques for the construction of confidence intervals for the mean. Shilane et al. [2] found that these procedures perform similarly over a wide variety of sample sizes and parameter values in negative binomial models. They also proposed several alternative methods, including Gamma and Chi Square approximations along with tail probability bounds such as Bernstein's inequality. These methods improved upon the standard techniques in terms of coverage probabilities over complementary subspaces of parameter values. We will briefly review these techniques in the following subsections.

3.1. The Gamma Approximation

Shilane et al. [2] proved a limit theorem stating that the sample mean $\bar{X}$ converges to a Gamma distribution as the sample size grows large and the dispersion parameter approaches zero. The shape parameter is $n\alpha$, and the rate parameter is given by $n\alpha/\mu$. The Gamma approximation therefore requires estimates of $\mu$ and $\alpha$. The sample mean $\bar{X}$ may be plugged in for $\mu$ directly. Due to the difficulties previously discussed with MLE methods under high dispersion, we will rely upon a method of moments estimator of $\alpha$ as a default. (In practice, an MLE may be substituted where possible.) To account for the possibility of negative values, we recommend truncating all estimates below a small positive constant to ensure positive estimates. Once the shape and rate parameters are estimated, a confidence interval for $\mu$ is given by the $\gamma/2$ and $1 - \gamma/2$ quantiles of the estimated Gamma distribution.
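The following R sketch outlines one implementation of this interval; the default truncation constant alpha.min is an assumed placeholder rather than a prescribed value:

    # Gamma approximation interval: the sample mean is treated as
    # approximately Gamma with shape n * alpha and rate n * alpha / mu,
    # plugging in the sample mean for mu and a truncated method of moments
    # estimate for alpha.
    gamma.ci <- function(x, gamma = 0.05, alpha.min = 1e-4) {
      n <- length(x)
      xbar <- mean(x)
      a.hat <- xbar^2 / (var(x) - xbar)                      # method of moments
      if (!is.finite(a.hat) || a.hat < alpha.min) a.hat <- alpha.min  # truncate
      qgamma(c(gamma / 2, 1 - gamma / 2),
             shape = n * a.hat, rate = n * a.hat / xbar)
    }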

The Gamma approximation generally performs best when $n$ is large and $\alpha$ is small. In this setting, the Gamma method improves upon the Normal approximation's coverage probability by about 1-2%. At more moderate sample sizes and dispersions, the Gamma approximation is not especially accurate and often performs worse than the Normal approximation.

3.2. The Chi Square Approximation

A special case of the Gamma approximation of Section 3.1 occurs when the rate parameter $n\alpha/\mu$ equals $1/2$. In this setting, the Gamma parameters correspond to a Chi Square distribution with $2n\alpha$ degrees of freedom. This Chi Square approximation works especially well when the parameter relationship $\mu = 2n\alpha$ approximately holds. The parameter relationship may also be phrased in terms of a ratio statistic $\hat{R} = 2n\hat{\alpha}/\bar{X}$. Simulation studies conducted by Shilane et al. [2] demonstrate that the Chi Square approximation will provide reasonably good coverage when the ratio statistic is reasonably close to 1, such as values between 0.5 and 1.5. Moreover, the ratio statistic provides information about whether the Chi Square interval is too narrow or too wide. When the ratio statistic exceeds 1, the interval is too wide, and when the ratio is less than 1, it is too narrow. This information may be used to compare the results of other procedures even when the Chi Square method performs poorly. However, it should be emphasized that the Chi Square approximation is limited in its applications; because the relationship $\mu = 2n\alpha$ can hold at only one sample size, asymptotically the method will severely overcover the mean.
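In code, the interval and its accompanying diagnostic might be sketched as follows (this reading of the quantile computation mirrors the Gamma interval above and is our interpretation):

    # Chi Square interval: the sample mean is treated as approximately
    # chi square with 2 * n * alpha degrees of freedom, which matches the
    # Gamma approximation when its rate parameter equals 1/2.
    chisq.ci <- function(x, gamma = 0.05, alpha.hat) {
      n <- length(x)
      qchisq(c(gamma / 2, 1 - gamma / 2), df = 2 * n * alpha.hat)
    }

    # Ratio statistic: values near 1 indicate that the chi square special
    # case approximately holds; values above 1 suggest an overly wide interval.
    ratio.stat <- function(x, alpha.hat) {
      2 * length(x) * alpha.hat / mean(x)
    }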

3.3. Bernstein’s Inequality

At small sample sizes and high dispersion, parametric methods that construct confidence intervals by inverting hypothesis testing procedures [10-13] may be inadequate. Tail probability bounds provide an alternative methodology that typically relies upon milder assumptions about the data. Bounds such as Bernstein's inequality [14], Bennett's inequality [15, 16], or methods based on the work of Hoeffding [17] and Berry-Esseen [18-21] may be employed. Rosenblum and van der Laan [22] apply these bounds to produce confidence intervals based on the estimators' empirical influence curves.

In the Negative Binomial setting, Shilane et al. [2] adapted Bernstein's inequality to provide an improvement over a naive confidence interval. The method requires only independent data with finite variance and imposes a heuristic assumption of boundedness in a range $[a, b]$. Under these conditions, a variant [23] of Bernstein's inequality may be applied to generate a confidence interval for $\mu$ of the form

\[ \bar{X} \pm \left[ \frac{(b - a) \log(2/\gamma)}{3n} + \sqrt{ \left( \frac{(b - a) \log(2/\gamma)}{3n} \right)^{2} + \frac{2 \hat{\sigma}^{2} \log(2/\gamma)}{n} } \right]. \tag{2} \]
In addition to estimating the variance with $\hat{\sigma}^2$, the Bernstein confidence interval requires the selection of a bounding range $[a, b]$. Since Negative Binomial variables are nonnegative, $a = 0$ is a natural choice. However, these data are unbounded above, so any finite selection of $b$ will impose a heuristic bound on the data. As a default, one may select the sample's maximum value or some multiple thereof. Shilane et al. [2] also considered a variant of Bernstein's Inequality for unbounded data. However, they found the method to be impractical due to the extremely conservative nature of the tail bound in this setting.
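A sketch of this computation in R, taking $a = 0$ and following the form of (2) as given above (the exact constants of the published variant may differ):

    # Bernstein-style interval for data heuristically bounded in [0, b].
    # Inverts a bound of the form
    #   P(|Xbar - mu| > t) <= 2 * exp(-n t^2 / (2 s2 + (2/3) b t))
    # at level gamma by solving the resulting quadratic in t.
    bernstein.ci <- function(x, gamma = 0.05, b = max(x)) {
      n <- length(x)
      s2 <- var(x)                 # direct variance estimate
      L <- log(2 / gamma)
      t <- b * L / (3 * n) + sqrt((b * L / (3 * n))^2 + 2 * s2 * L / n)
      mean(x) + c(-t, t)           # may extend below zero
    }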

The bounded variant of Bernstein’s Inequality improved upon the alternative methods in coverage for small sample sizes and high dispersions. However, its confidence intervals were far wider and more variable than the other candidates’ results. While this is an improvement over a naive interval for the data’s mean (e.g., all values between zero and the sample’s maximum), the Bernstein method lacks the interpretative quality provided by parametric approximations. Furthermore, Bernstein’s Inequality is a conservative bound, so asymptotically the method will significantly overcover the mean.

4. Growth Estimators for $\mu$

The Gamma, Chi Square, and Normal approximations all seek to utilize the existing data to generate an inference for $\mu$ under parametric assumptions about the distribution of $\bar{X}$. However, none of these techniques directly considers the data's relative sparsity under high dispersion. When the probability of a zero value is high, many samples of data will include more than this expected proportion of zeros due to chance error. This overrepresentation of zeros exacerbates the difficulty of estimation in what is already a sparse data setting. For instance, in one high-dispersion case, we expect 85.3% of the sample to be zeros. If the underlying experiment were repeated a large number of times, then roughly half of the samples would have more than 85.3% zeros. Furthermore, when samples of moderate size are drawn, a simulation experiment suggests that the resulting sample mean will be less than $\mu$ roughly 66% of the time.
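This phenomenon is straightforward to reproduce. In the R sketch below, the parameter values (mu = 1, alpha = 0.1, n = 100) are chosen purely for illustration:

    # Overrepresentation of zeros under high dispersion. In dnbinom() and
    # rnbinom(), the size argument plays the role of the dispersion alpha.
    mu <- 1; alpha <- 0.1; n <- 100
    dnbinom(0, size = alpha, mu = mu)   # expected proportion of zeros
    xbars <- replicate(10000, mean(rnbinom(n, size = alpha, mu = mu)))
    mean(xbars < mu)                    # proportion of sample means below mu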

We propose growth estimators for $\mu$ as a method of accounting for the potential overrepresentation of zeros in the data set. We will construct growth estimators using two separate procedures: adjusting the mean through a multiplicative growth factor and direct removal of some zero values from the data. The details of these procedures are provided in Sections 4.1 and 4.2, respectively.

These growth procedures are motivated by shrinkage estimators. For instance, in constructing a confidence interval for the Binomial proportion $p$, Agresti and Coull [24] suggest augmenting the existing data with two additional successes and two additional failures. A Normal approximation confidence interval for $p$ based on these augmented data can be shown to perform measurably better than the Normal approximation alone. The method of Agresti and Coull [24] effectively shrinks the estimate of $p$ toward the value 0.5 by adding a small amount of data. By contrast, we seek to grow the estimate of the Negative Binomial mean by removing a small amount of zero-valued data.

4.1. Growth by Adjustment (GBA)

Suppose we believe that the data set contains approximately $k$ too many zero values. The removal of these $k$ zeros is tantamount to reweighting the sample mean by the growth factor

\[ g = \frac{n}{n - k}. \tag{3} \]

Therefore, we estimate $\mu$ with the value

\[ \hat{\mu}_{GBA} = g \bar{X} = \frac{n \bar{X}}{n - k}. \tag{4} \]

This growth estimator has the expected value

\[ E[\hat{\mu}_{GBA}] = \frac{n \mu}{n - k}, \tag{5} \]

and its standard error is

\[ SE(\hat{\mu}_{GBA}) = \frac{n}{n - k} \cdot \frac{\sigma}{\sqrt{n}} = \frac{\sqrt{n}\, \sigma}{n - k}. \tag{6} \]
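A minimal R sketch of the GBA point estimate and standard error (the function name gba is illustrative):

    # Growth by Adjustment: scale the sample mean by the growth factor
    # g = n / (n - k); the standard error scales by the same factor.
    gba <- function(x, k) {
      n <- length(x)
      g <- n / (n - k)
      list(estimate = g * mean(x),
           se = g * sqrt(var(x) / n))
    }

Note that k need not be an integer here, a point we return to in Section 5.3.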

4.2. Growth by Removal (GBR)

The GBA method artificially inflates the sample mean by the growth factor $n/(n-k)$. However, this growth may also be achieved through direct removal of zeros. We will refer to this procedure as Growth by Removal (GBR). As in the GBA method, this procedure depends upon the value of $k$. In general, $k$ should be selected according to a predetermined rule, and it may not exceed the overall number of zeros in the data set. In practice, $k$ should be less than this maximum because the method is intended to remove only extraneous zeros.

In the original sample, the data's sum $\sum_{i=1}^{n} X_i$ had the expected value $n\mu$. Since removing zero values does not change this sum, the mean of the remaining $n - k$ values, $\hat{\mu}_{GBR} = \sum_{i=1}^{n} X_i / (n - k)$, has the expected value $n\mu/(n - k)$, as in (5). However, the associated standard error is computed differently from that of the GBA method. The standard deviation of the remaining data is centered around $\hat{\mu}_{GBR}$ rather than $\bar{X}$. The standard error then divides this quantity by $\sqrt{n - k}$. Assuming the data are sorted in decreasing order, this is given by

\[ SE(\hat{\mu}_{GBR}) = \sqrt{ \frac{ \sum_{i=1}^{n-k} (X_i - \hat{\mu}_{GBR})^2 }{ (n - k - 1)(n - k) } }. \tag{7} \]
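The corresponding R sketch for GBR (again with an illustrative function name):

    # Growth by Removal: drop k zeros, average the remaining n - k values,
    # and center the spread calculation at the GBR estimate itself.
    gbr <- function(x, k) {
      k <- floor(k)                  # GBR requires an integer number of zeros
      n <- length(x)
      stopifnot(sum(x == 0) >= k)
      y <- sort(x, decreasing = TRUE)[1:(n - k)]   # removes k zeros
      est <- sum(x) / (n - k)
      s2 <- sum((y - est)^2) / (n - k - 1)
      list(estimate = est, se = sqrt(s2 / (n - k)))
    }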

4.3. Convergence to the Normal Approximation

Whether we employ the GBA or GBR method, the mean is adjusted by the growth factor $n/(n - k)$. For any fixed value of $k$, the growth estimator will asymptotically converge in sample size to $\bar{X}$. Since the central limit theorem applies, we propose a Normal approximation based upon this adjustment. We will rely upon the unbiased estimate $\hat{\sigma}^2$ of the variance from the full data set (with no zeros removed). With $z_{1 - \gamma/2}$ defined as the $(1 - \gamma/2)$th quantile of the standard Normal distribution, the GBA confidence interval will have the form

\[ \frac{n \bar{X}}{n - k} \pm z_{1 - \gamma/2} \frac{\sqrt{n}\, \hat{\sigma}}{n - k}. \tag{8} \]

Meanwhile, the GBR method's interval applies the alternative computation (7) of the standard error. That is,

\[ \frac{\sum_{i=1}^{n} X_i}{n - k} \pm z_{1 - \gamma/2} \sqrt{ \frac{ \sum_{i=1}^{n-k} (X_i - \hat{\mu}_{GBR})^2 }{ (n - k - 1)(n - k) } }. \tag{9} \]
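Combining the pieces, both intervals may be sketched in R as follows, reusing the gba() and gbr() functions above:

    # Normal-style growth intervals. GBA uses the variance of the full data
    # set; GBR uses its own standard error from the reduced sample.
    growth.ci <- function(x, k, gamma = 0.05, method = c("GBA", "GBR")) {
      method <- match.arg(method)
      z <- qnorm(1 - gamma / 2)
      fit <- if (method == "GBA") gba(x, k) else gbr(x, k)
      fit$estimate + c(-z, z) * fit$se
    }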

4.4. Selection of $k$

Interestingly, the growth estimator adds both bias and variance to the Normal approximation of $\bar{X}$. The degree of additional error may be controlled through the selection of $k$, the number of zeros to remove from the data set. We emphasize that this selection should be made with extreme caution. In Section 5, we will explore how the Growth method's coverage probability is impacted by the choice of $k$. Based upon the results of these simulation experiments, we recommend the following default choices of $k$:

\[ k = \begin{cases} \min(n/10, \, 15) & \text{under high dispersion (small } \hat{\alpha}) \\ \min(n/10, \, 5) & \text{at more moderate dispersions.} \end{cases} \tag{10} \]

The intuition behind these choices is as follows: at small sample sizes, no more than one zero may be removed per ten data points. When the dispersion is high, we allow for more aggressive removal of zeros (up to a maximum of 15) to account for a higher degree of overrepresentation. At more moderate dispersions, we limit this removal to no more than 5 zeros. The value of $\alpha$ may be estimated using maximum likelihood estimation, or the method of moments when the former is not available.
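A sketch of this default rule in R; the dispersion cutoff of 0.1 separating the two cases is an assumed placeholder rather than a published threshold:

    # Default choice of k in the spirit of (10): at most one zero removed
    # per ten observations, capped at 15 under high dispersion and at 5
    # otherwise. Fractional values are acceptable for GBA.
    choose.k <- function(n, alpha.hat, cutoff = 0.1) {
      cap <- if (alpha.hat < cutoff) 15 else 5
      min(n / 10, cap)
    }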

Equation (10) may also be refined to incorporate a continuous function of $\alpha$. A simple linear interpolation between the respective maxima could be considered. However, deriving a more analytic relationship among $n$, $\mu$, and $\alpha$ to optimize the coverage of the resulting confidence interval encounters a number of difficulties. The analytic distribution of growth method confidence intervals is a function of the negative binomial likelihood, which already has an extremely complex form. Additionally, the difficulty of estimating $\alpha$ in small sample sizes may lead to misspecification of $k$. In combination, it would be very challenging to extrapolate the functional form of $k$ to extremely small values of $\alpha$. In light of these concerns, we recommend using (10) as a default choice, and modifications may be considered in specific cases with additional simulation studies.

5. Simulation Studies

5.1. Experimental Design

We designed a simulation experiment to compare the Growth methods to the Bernstein, Gamma, Chi Square, and Normal confidence intervals. We did not include the bootstrap BCa method in the experiment due to its computational requirements and similarity to the Normal approximation in the simulations of Shilane et al. [2]. We selected an extensive set of parameters and sample sizes, with values displayed in Table 1. The most extreme case roughly corresponds to flipping a coin with a one-fourth of one percent chance of landing heads, while the most moderate case corresponds to a far more balanced coin.

Each combination of $\mu$, $\alpha$, and $n$ led to a unique and independent simulation experiment. Each experiment generated 10000 independent size-$n$ sets of i.i.d. Negative Binomial $(\mu, \alpha)$ random variables in the R statistical programming language. On each size-$n$ data set, a 95% confidence interval was generated according to each proposed method. We then approximated the coverage probability of each method by the empirical proportion of confidence intervals containing the true value of $\mu$.
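The following R skeleton indicates the structure of one simulation cell, shown for the GBA interval and reusing the sketches above (in the actual study, every candidate method was evaluated on the same data sets):

    # Estimate the coverage probability of a method at one (mu, alpha, n)
    # combination by the proportion of intervals containing the true mu.
    # The true alpha is passed to choose.k() here for brevity; in practice
    # the dispersion would be estimated from each data set.
    coverage <- function(mu, alpha, n, reps = 10000, gamma = 0.05) {
      hits <- replicate(reps, {
        x <- rnbinom(n, size = alpha, mu = mu)
        ci <- growth.ci(x, choose.k(n, alpha), gamma, method = "GBA")
        ci[1] <= mu && mu <= ci[2]
      })
      mean(hits)
    }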

5.2. Coverage Probability Results

Figures 1, 2, and 3 provide examples of the simulation results at high, medium, and low dispersions. Each figure displays estimated coverage probabilities for the Normal, Gamma, Chi Square, Bernstein, GBA, and GBR methods as a function of sample size at particular combinations of $\mu$ and $\alpha$. Figure 1 depicts the most extreme dispersion considered. Here even the Bernstein method requires a significantly large sample size to reach the desired 95% coverage probability. The Chi Square approximation performs as expected, reaching 95% coverage almost exactly at the sample size where the relationship $\mu = 2n\alpha$ holds and overcovering for larger sample sizes. Because $\alpha$ is very small, we expect the Gamma approximation to outperform the Normal. However, because of the extreme dispersion, both methods require an extremely large sample size to appropriately cover the mean. Even at the largest sample size considered, the Gamma method only covers 87.91% of the time, and the Normal only reaches a rate of 85.92%. Meanwhile, the Growth methods provide small but steady improvements over the Gamma approximation.

Figure 2 displays coverage probabilities at a moderate dispersion more typical of a Negative Binomial study. Both the Bernstein and Chi Square methods quickly overcover the mean, surpassing 95% coverage at small sample sizes. The Gamma and Normal approximations are roughly in equipoise at this moderate level of dispersion. The Growth method improves upon their coverage by about 3% at sample sizes up to 100 and maintains at least a 1% improvement at the largest sample sizes considered. Despite the fairly moderate dispersion, the Normal and Gamma approximations appear to require 250 or more data points to approach convergence, whereas the Growth method reaches this point considerably earlier.

Figure 3 displays coverage probabilities in a low-dispersion case. Because the dispersion is quite small, the Negative Binomial model is reasonably close to the Poisson distribution. We therefore expect the Normal approximation to perform quite well. Even in this case, the GBA method provides small improvements over the Normal at small and moderate sample sizes. The GBR method also provides early improvements. However, its coverage briefly dips below that of the Normal approximation at moderate sample sizes. With relatively few zeros removed from the data, the Growth methods and Normal approximation quickly fall into agreement. By contrast, the Gamma approximation does not perform well in this scenario because its underlying limit theorem requires high dispersion relative to the sample size. Similarly, the Chi Square and Bernstein methods almost immediately overcover the mean.

5.3. The Effect of Misspecified Growth

The Growth Method simulation results of Section 5.2 were obtained under the default selections of $k$ given by (10). These recommended settings were obtained through a process of trial and error as applied to the simulation studies. The intuition behind these recommendations is that zero values may be removed more aggressively under higher dispersions. We believe that the extensive range of parameters tested in the simulation study provides reasonable evidence that these recommendations will generalize well. Other approaches to selecting $k$ may also consider the observed sample mean or alter the gradations by sample size.

We urge the practitioner to exercise caution in selecting how many zeros to remove. An overzealous selection of $k$ may lead to poor performance in the growth method's coverage probability. As an example, we repeated the simulation study of Figure 3's low-dispersion setting with $k = \min(n/5, 15)$ instead of the recommended value of $\min(n/10, 5)$. This aggressive approach removes zeros at double the recommended rate and allows for a maximum that triples the recommendation for the low-dispersion case. Figure 4 displays the consequences of misspecified growth. Rather than the small improvement over or close agreement with the Normal approximation seen in Figure 3, the growth method decreases significantly in coverage. This dip in performance continues until the maximum removal is reached at $n = 75$. For larger sample sizes, the Growth Method begins to rebound toward the Normal approximation. However, this case suggests that the practitioner should be careful not to select a value of $k$ that is too large, especially when the dispersion is mild.

By contrast, higher dispersion settings allow for far more aggressive growth. We also repeated the simulation study under the most extreme dispersion considered. Figure 5 depicts the Growth Method's coverage when $k = \min(n/2, 50)$, which effectively removes half the data points until sample size 100. In this example, the Growth method crosses the 95% coverage threshold at a comparatively small sample size, a mark not reached by the Gamma or Normal approximations even at the largest sample sizes considered. Indeed, the Growth Method accelerates in coverage even faster than the Bernstein Method.

Overall, the GBA method appears to perform slightly better than the GBR method across all simulation experiments. This difference may be attributed to the selection of $k$. The GBR method requires an integer value so that exactly $k$ zeros may be removed. By contrast, the GBA method allows for adjustments using continuous values of $k$. The recommended selection procedure of (10) allows for fractional proportions of the overall sample size. This additional fraction allows for increased growth, which in turn leads to improved coverage. The GBA and GBR methods also differ in the computation of their standard errors, but these standard errors appear to be largely similar in value. We will substantiate this claim further in Section 5.4.

5.4. Confidence Interval Length Considerations

We have adopted coverage probability as our preferred metric of a confidence interval's quality. However, the length of these intervals is an important secondary consideration. Shorter intervals suggest greater precision in the estimator when the coverage probabilities are approximately equal. Figures 6 and 7 compare the proposed methods' lengths. In each simulation experiment, we recorded the median and standard deviation of the lengths of the 10000 confidence intervals generated by each method. We then computed the ratio of each method's median length to that of the Normal approximation in each experiment. Figure 6 displays the distribution of these ratios, and Figure 7 depicts the analogous ratios of the standard deviations of length.

In general, the Bernstein method produces confidence intervals that are typically a factor of 1.8 longer than the corresponding Normal approximation's. Because the GBA and GBR methods usually remove about one zero per ten data points, their median lengths were typically a factor of roughly $10/9 \approx 1.11$ larger than the Normal approximation's interval. Likewise, the same growth factor applies to the standard deviations of length in Figure 7. The Bernstein Method has a length variability that is roughly double that of the Normal approximation. This increased variability of length contributes to the method's overcoverage, as extremely long confidence intervals are far more likely to contain the mean. By contrast, the Gamma method typically produces a confidence interval that is roughly 95% of the length of the Normal approximation's. This length depends on the degree of dispersion, with greater lengths (and improved coverage) occurring at smaller values of $\alpha$.

6. Discussion

The proposed growth methods provide improved confidence intervals for the mean of Negative Binomial random variables with unknown dispersion. Removing a small number of zeros from the data set bolsters the coverage probability at small and moderate sample sizes. Asymptotically, the GBA and GBR methods converge to the Normal approximation. Overall, applying a growth estimator produces intervals that are longer and more variable than the Normal approximation. The degree of increase may be controlled through a selection of the growth factor $n/(n-k)$ or, equivalently, the removal factor $k$. This selection depends on the sample size and degree of dispersion in the data. We emphasize that the number of zeros to remove should be selected cautiously to prevent coverage drop-offs such as the example depicted in Figure 4.

The GBA and GBR methods often provide similar coverage results, so other factors may be considered in selecting between them. First, the GBA growth factor (3) allows for noninteger values of $k$, while GBR must remove an integer number of zeros. The GBA method's possibility of continuous adjustments may present an opportunity for finer tuning of the method to the problem at hand. By contrast, a manual removal of zeros is more intuitively appealing. The remainder of the inference follows the same methods as the Normal approximation, which will be more familiar to many practitioners. When there are fewer than $k$ overall zeros in the sample, it is also more reasonable to only remove the existing zeros rather than to adjust by a growth factor dependent on $k$. With all of this in mind, the GBR method may be a more practical choice even if the GBA method can provide slight improvements in the results.

The previous work of Shilane et al. [2] provided a piecewise solution to performing inference on the Negative Binomial mean. The Gamma, Chi Square, and Normal approximations performed well in largely complementary settings, and Bernstein's inequality was used at smaller sample sizes and high dispersions. However, we have demonstrated that the GBA and GBR methods can perform well in a wide variety of settings. Using the relatively simple guidelines for selecting $k$ in (10), these procedures outperformed the parametric approximations at both high and low dispersions. Because it allows for continuous values of $k$, the GBA method generally provided small improvements over the GBR results. A tail probability bound such as Bernstein's Inequality may still be considered at very small sample sizes and extremely high dispersions, but the growth method accelerates quickly as a function of sample size.

Future work on this problem may provide more solid theory for how the value of $k$ should be selected. Growth estimators add both bias and variance to the Normal approximation, so a traditional bias-variance trade-off calculation does not apply. Indeed, a criterion such as the mean squared error would be optimized with the selection of $k = 0$, which is equivalent to the Normal approximation. The cautionary tale depicted in Figure 4 suggests that coverage is optimized at some intermediate value of $k$. However, the analytic coverage probability calculation is intractable because it depends upon the permutation distribution of the Negative Binomial sample.

The Growth Method may be extended to other sparse estimation problems. Shilane and Bean [25] previously examined the quality of the Normal approximation in two-sample Negative Binomial inference, and growth estimators may be easily adapted to this setting. We are also presently examining an application of the growth method for estimating the mean of Gamma random variables.