Abstract

We first review existing sequential methods for estimating a binomial proportion. Afterward, we propose a new family of group sequential sampling schemes for estimating a binomial proportion with prescribed margin of error and confidence level. In particular, we establish the uniform controllability of coverage probability and the asymptotic optimality for such a family of sampling schemes. Our theoretical results show that the parameters of this family of sampling schemes can be determined so that the prescribed confidence level is guaranteed with little waste of samples. Analytic bounds for the cumulative distribution functions and expectations of sample numbers are derived. Moreover, we discuss the inherent connection among various sampling schemes. Numerical issues are addressed for improving the accuracy and efficiency of computation. Computational experiments are conducted for comparing sampling schemes. Illustrative examples are given for applications in clinical trials.

1. Introduction

Estimating a binomial proportion is a problem of ubiquitous significance in many areas of engineering and sciences. For economic reasons and other concerns, it is important to use as few samples as possible to guarantee the required reliability of estimation. To achieve this goal, sequential sampling schemes can be very useful. In a sequential sampling scheme, the total number of observations is not fixed in advance. The sampling process is continued stage by stage until a prespecified stopping rule is satisfied. The stopping rule is evaluated with accumulated observations. In many applications, for administrative feasibility, the sampling experiment is performed in a group fashion. Similar to group sequential tests [1, 8], [2], an estimation method based on taking samples by groups and evaluating them sequentially is referred to as a group sequential estimation method. It should be noted that group sequential estimation methods are general enough to include fixed-sample-size and fully sequential procedures as special cases. Particularly, a fixed-sample-size method can be viewed as a group sequential procedure of only one stage. If the increment between the sample sizes of consecutive stages is equal to 1, then the group sequential method is actually a fully sequential method.

It is a common contention that statistical inference, as a unique science to quantify the uncertainties of inferential statements, should avoid errors in the quantification of uncertainties, while minimizing the sampling cost. That is, a statistical inferential method is expected to be exact and efficient. The conventional notion of exactness is that no approximation is involved, except the round-off error due to finite word length of computers. Existing sequential methods for estimating a binomial proportion are predominantly of an asymptotic nature (see, e.g., [37] and the references therein). Undoubtedly, asymptotic techniques provide approximate solutions and important insights for the relevant problems. However, any asymptotic method inevitably introduces unknown error in the resultant approximate solution due to the necessary use of a finite number of samples. In the direction of nonasymptotic sequential estimation, the primary goal is to ensure that the true coverage probability is above the prespecified confidence level for any value of the associated parameter, while the required sample size is as low as possible. In this direction, Mendo and Hernando [8] developed an inverse binomial sampling scheme for estimating a binomial proportion with relative precision. Tanaka [9] developed a rigorous method for constructing fixed-width sequential confidence intervals for a binomial proportion. Although no approximation is involved, Tanaka’s method is very conservative due to the bounding techniques employed in the derivation of sequential confidence intervals. Franzén [10] studied the construction of fixed-width sequential confidence intervals for a binomial proportion. However, no effective method for defining stopping rules is proposed in [10]. In his later paper [11], Franzén proposed to construct fixed-width confidence intervals based on sequential probability ratio tests (SPRTs) invented by Wald [12]. His method can generate fixed-sample-size confidence intervals based on SPRTs. Unfortunately, he made a fundamental error by assuming that if the width of the fixed-sample-size confidence interval decreases below the prespecified length as the number of samples increases, then the fixed-sample-size confidence interval at the termination of the sampling process is the desired fixed-width sequential confidence interval guaranteeing the prescribed confidence level. More recently, Frey published a paper [13] in The American Statistician (TAS) on the classical problem of sequentially estimating a binomial proportion with prescribed margin of error and confidence level. Before Frey submitted his original manuscript to TAS in July 2009, a general framework of multistage parameter estimation had been established by Chen [14–18], which provides exact methods for estimating parameters of common distributions with various error criteria. This framework is also proposed in [19]. The approach of Frey [13] is similar to that of Chen [14–18] for the specific problem of estimating a binomial proportion with prescribed margin of error and confidence level.

In this paper, our primary interests are in the exact sequential methods for the estimation of a binomial proportion with prescribed margin of error and confidence level. We first introduce the exact approach established in [14–18]. In particular, we introduce the inclusion principle proposed in [18] and its applications to the construction of concrete stopping rules. We investigate the connection among various stopping rules. Afterward, we propose a new family of stopping rules which are extremely simple and accommodate some existing stopping rules as special cases. We provide rigorous justification for the feasibility and asymptotic optimality of such stopping rules. We prove that the prescribed confidence level can be guaranteed uniformly for all values of a binomial proportion by choosing appropriate parametric values for the stopping rule. We show that as the margin of error tends to zero, the sample size tends to the attainable minimum as if the binomial proportion were exactly known. We derive analytic bounds for distributions and expectations of sample numbers. In addition, we address some critical computational issues and propose methods to improve the accuracy and efficiency of numerical calculation. We conduct extensive numerical experiments to study the performance of various stopping rules. We determine parametric values for the proposed stopping rules to achieve unprecedented efficiency while guaranteeing prescribed confidence levels. We attempt to make our proposed method as user-friendly as possible so that it can be immediately applicable even for laypersons.

The remainder of the paper is organized as follows. In Section 2, we introduce the exact approach proposed in [14–18]. In Section 3, we discuss the general principle of constructing stopping rules. In Section 4, we propose a new family of sampling schemes and investigate their feasibility, optimality, and analytic bounds of the distribution and expectation of sample numbers. In Section 5, we compare various computational methods. In particular, we illustrate why the natural method of evaluating coverage probability based on gridding parameter space is neither rigorous nor efficient. In Section 6, we present numerical results for various sampling schemes. In Section 7, we illustrate the applications of our group sequential method in clinical trials. Section 8 is the conclusion. The proofs of theorems are given in appendices. Throughout this paper, we shall use the following notations. The empty set is denoted by . The set of positive integers is denoted by . The ceiling function is denoted by . The notation denotes the probability of the event associated with parameter . The expectation of a random variable is denoted by . The standard normal distribution is denoted by . For , the notation denotes the critical value such that . For , in the case that are i.i.d. samples of , we denote the sample mean by , which is also called the relative frequency when is a Bernoulli random variable. The other notations will be made clear as we proceed.

2. How Can It Be Exact?

In many areas of scientific investigation, the outcome of an experiment is dichotomous in nature and can be modeled as a Bernoulli random variable , defined in probability space , such that where is referred to as a binomial proportion. In general, there is no analytic method for evaluating the binomial proportion . A frequently used approach is to estimate based on i.i.d. samples of . To reduce the sampling cost, it is appropriate to estimate by a multistage sampling procedure. More formally, let and , with , be the prespecified margin of error and confidence level, respectively. The objective is to construct a sequential estimator for based on a multistage sampling scheme such that for any . Throughout this paper, the probability is referred to as the coverage probability. Accordingly, the probability is referred to as the complementary coverage probability. Clearly, a complete construction of a multistage estimation scheme needs to determine the number of stages, the sample sizes for all stages, the stopping rule, and the estimator for . Throughout this paper, we let denote the number of stages and let denote the number of samples at the th stage. That is, the sampling process consists of stages with sample sizes . For , define and . The stopping rule is to be defined in terms of . Of course, the index of stage at the termination of the sampling process, denoted by , is a random number. Accordingly, the number of samples at the termination of the experiment, denoted by , is a random number which equals . Since for each , is a maximum-likelihood and minimum-variance unbiased estimator of , the sequential estimator for is taken as In the above discussion, we have outlined the general characteristics of a multistage sampling scheme for estimating a binomial proportion. It remains to determine the number of stages, the sample sizes for all stages, and the stopping rule so that the resultant estimator satisfies (2) for any .

Actually, the problem of sequential estimation of a binomial proportion has been treated by Chen [14–18] in a general framework of multistage parameter estimation. The techniques of [14–18] are sufficient to offer exact solutions for a wide range of sequential estimation problems, including the estimation of a binomial proportion as a special case. The central idea of the approach in [14–18] is the control of coverage probability by a single parameter , referred to as the coverage tuning parameter, and the adaptive rigorous checking of coverage guarantee by virtue of bounds of coverage probabilities. It is recognized in [14–18] that, due to the discontinuity of the coverage probability on parameter space, the conventional method of evaluating the coverage probability for a finite number of parameter values is neither rigorous nor computationally efficient for checking the coverage probability guarantee.

As mentioned in the introduction, Frey published an article [13] in TAS on the sequential estimation of a binomial proportion with prescribed margin of error and confidence level. For clarity of presentation, the comparison of the works of Chen and Frey is given in Section 5.4. In the remainder of this section, we shall only introduce the idea and techniques of [14–18], which had been developed by Chen before Frey submitted his original manuscript to TAS in July 2009. We will introduce the approach of [14–18] with a focus on the special problem of estimating a binomial proportion with prescribed margin of error and confidence level.

2.1. Four Components Suffice

The exact methods of [14–18] for multistage parameter estimation have four main components as follows.
(i) Stopping rules parameterized by the coverage tuning parameter such that the associated coverage probabilities can be made arbitrarily close to by choosing to be a sufficiently small number.
(ii) Recursively computable lower and upper bounds for the complementary coverage probability for a given and an interval of parameter values.
(iii) Adapted branch and bound algorithm.
(iv) Bisection coverage tuning.

Without looking at the technical details, one can see that these four components are sufficient for constructing a sequential estimator so that the prescribed confidence level is guaranteed. The reason is as follows. As lower and upper bounds for the complementary coverage probability are available, the global optimization technique, the branch and bound (B&B) algorithm [20], can be used to compute exactly the maximum of the complementary coverage probability on the whole parameter space. Thus, it is possible to check rigorously whether the coverage probability associated with a given is no less than the prespecified confidence level. Since the coverage probability can be controlled by , it is possible to determine as large as possible to guarantee the desired confidence level by a bisection search. This process is referred to as bisection coverage tuning in [14–18]. Since a critical subroutine needed for bisection coverage tuning is to check whether the coverage probability is no less than the prespecified confidence level, it is not necessary to compute exactly the maximum of the complementary coverage probability. Therefore, Chen revised the standard B&B algorithm to reduce the computational complexity and called the improved algorithm the adapted B&B Algorithm. The idea is to adaptively partition the parameter space into many subintervals. If, for all subintervals, the upper bounds of the complementary coverage probability are no greater than , then declare that the coverage probability is guaranteed. If there exists a subinterval for which the lower bound of the complementary coverage probability is greater than , then declare that the coverage probability is not guaranteed. Continue partitioning the parameter space if no decision can be made. The four components are illustrated in the sequel under the headings of stopping rules, interval bounding, adapted branch and bound, and bisection coverage tuning.

2.2. Stopping Rules

The first component for the exact sequential estimation of a binomial proportion is the stopping rule for constructing a sequential estimator such that the coverage probability can be controlled by the coverage tuning parameter . For convenience of describing some concrete stopping rules, define where and are integers such that . Assume that . For the purpose of controlling the coverage probability by the coverage tuning parameter, Chen has proposed four stopping rules as follows.

Stopping Rule A. Continue sampling until for some .

Stopping Rule B. Continue sampling until for some .

Stopping Rule C. Continue sampling until and for some .

Stopping Rule D. Continue sampling until for some .

Stopping Rule A was first proposed in [14, ] and restated in [15, ]. Stopping Rule B was first proposed in [16, ] and represented as the third stopping rule in [21, ]. Stopping Rule C originated from [17, ] and was restated as the first stopping rule in [21, ]. Stopping Rule D was described in the remarks following of [22]. All these stopping rules can be derived from the general principles proposed in [18, ] and [19, ].

Given that a stopping rule can be expressed in terms of and for , it is possible to find a bivariate function on , taking values from , such that the stopping rule can be stated as follows: continue sampling until for some . It can be checked that such a representation applies to Stopping Rules A, B, C, and D. For example, Stopping Rule B can be expressed in this way by virtue of function such that The motivation for introducing the function is to parameterize the stopping rule in terms of design parameters. Function determines the form of the stopping rule and, consequently, the sample sizes for all stages can be chosen as functions of design parameters. Specifically, let To avoid unnecessary checking of the stopping criterion and thus reduce administrative cost, there should be a possibility that the sampling process is terminated at the first stage. Hence, the minimum sample size should be chosen to ensure that . This implies that the sample size for the first stage can be taken as . On the other hand, since the sampling process must be terminated at or before the th stage, the maximum sample size should be chosen to guarantee that . This implies that the sample size for the last stage can be taken as . If the number of stages is given, then the sample sizes for stages in between and can be chosen as integers between and . Particularly, if the group sizes are expected to be approximately equal, then the sample sizes can be taken as Since the stopping rule is associated with the coverage tuning parameter , it follows that the number of stages and the sample sizes can be expressed as functions of . In this sense, it can be said that the stopping rule is parameterized by the coverage tuning parameter . The above method of parameterizing stopping rules has been used in [14–17] and proposed in [21, , page 9].
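To make the above parametrization concrete, the following Python sketch computes the minimum and maximum sample sizes and approximately equal group sizes from a generic stopping-boundary function. The helper names, the convention that the rule stops when F(k, n) <= 0 (with k the number of successes), and the equal-increment formula are illustrative assumptions of this sketch rather than the exact expressions (6)-(8).

import math

def can_stop(F, n):
    # Stopping is possible with n samples if the criterion holds for some success count k.
    return any(F(k, n) <= 0 for k in range(n + 1))

def must_stop(F, n):
    # Stopping is certain with n samples if the criterion holds for every success count k.
    return all(F(k, n) <= 0 for k in range(n + 1))

def group_sample_sizes(F, n_stages, n_cap=10**6):
    # Minimum sample size: the smallest n for which termination at the first stage is possible.
    n_min = next(n for n in range(1, n_cap) if can_stop(F, n))
    # Maximum sample size: the smallest n for which termination is guaranteed.
    n_max = next(n for n in range(n_min, n_cap) if must_stop(F, n))
    if n_stages == 1:
        return [n_max]
    # Approximately equal increments between consecutive stages (one possible reading of (8)).
    return [math.ceil(n_min + i * (n_max - n_min) / (n_stages - 1)) for i in range(n_stages)]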

2.3. Interval Bounding

The second component for the exact sequential estimation of a binomial proportion is the method of bounding the complementary coverage probability for in an interval contained in the interval . Applying of [15] to the special case of a Bernoulli distribution immediately yields for all . The bounds of (9) can be shown as follows. Note that for . As a consequence of the monotonicity of and with respect to , where is a real number independent of , the lower and upper bounds of for can be given as and , respectively.

On page 15, in equation of [15], Chen proposed to apply the recursive method of Schultz et al. [23, ] to compute the lower and upper bounds of given by (9). It should be pointed out that such lower and upper bounds of can also be computed by the recursive path-counting method of Franzén [10, page 49].
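To illustrate how such quantities can be evaluated, the following Python sketch computes the complementary coverage probability exactly at a single parameter value for a fully sequential rule, by propagating the probabilities of the sample paths that have not yet stopped. This is a probability-weighted variant of the recursive path-counting idea; the stopping rule is supplied as a generic indicator stop(k, n), an assumption of this sketch rather than the notation of [15] or [10].

def complementary_coverage(p, eps, stop, n_max):
    # alive[k] = probability of observing k successes in n trials without having stopped earlier.
    alive = {0: 1.0}
    miss = 0.0
    for n in range(1, n_max + 1):
        nxt = {}
        for k, w in alive.items():
            nxt[k] = nxt.get(k, 0.0) + w * (1.0 - p)      # next observation is a failure
            nxt[k + 1] = nxt.get(k + 1, 0.0) + w * p      # next observation is a success
        alive = {}
        for k, w in nxt.items():
            if stop(k, n) or n == n_max:                  # termination (forced at the last stage)
                if abs(k / n - p) >= eps:                 # the final estimate misses p by eps or more
                    miss += w
            else:
                alive[k] = w
    return miss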

2.4. Adapted Branch and Bound

The third component for the exact sequential estimation of a binomial proportion is the adapted B&B algorithm, which was proposed in [15, ], for quick determination of whether the coverage probability is no less than for any value of the associated parameter. Such a task of checking the coverage probability is also referred to as checking the coverage probability guarantee. Given that lower and upper bounds of the complementary coverage probability on an interval of parameter values can be obtained by the interval bounding techniques, this task can be accomplished by applying the B&B algorithm [20] to compute exactly the maximum of the complementary coverage probability on the parameter space. However, in our applications, it suffices to determine whether the maximum of the complementary coverage probability with respect to is greater than the confidence parameter . For fast checking whether the maximal complementary coverage probability exceeds , Chen proposed to reduce the computational complexity by revising the standard B&B algorithm as the Adapted B&B Algorithm in [15, ]. To describe this algorithm, let denote the parameter space . For an interval , let denote the maximum of the complementary coverage probability with respect to . Let and be, respectively, the lower and upper bounds of , which can be obtained by the interval bounding techniques introduced in Section 2.3. Let be a prespecified tolerance, which is much smaller than . The adapted B&B algorithm of [15] is presented, with a slight modification, as Algorithm 1.

Let ,   and .
Let if . Otherwise, let be empty.
While is nonempty, and is greater than max , do the following:
    Split each interval in as two new intervals of equal length.
       Let denote the set of all new intervals obtained from this splitting procedure.
    Eliminate any interval from such that .
    Let be the set processed by the above elimination procedure.
    Let and . Let .
If is empty and , then declare max .
 Otherwise, declare max .

It should be noted that for a sampling scheme with a symmetrical stopping boundary, the initial interval may be taken as for the sake of efficiency. In Section 5.1, we will illustrate why the adapted B&B algorithm is superior to the direct evaluation based on gridding the parameter space. As will be seen in Section 5.2, the objective of the adapted B&B algorithm can also be accomplished by the Adaptive Maximum Checking Algorithm due to Chen [21, ] and rediscovered by Frey [13, Appendix]. An explanation is given in Section 5.3 for the advantage of working with the complementary coverage probability.
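The following Python sketch captures the logic of the adapted B&B check described above; the function names and the conservative treatment of undecided subintervals narrower than the tolerance are our own assumptions, not a transcription of Algorithm 1.

def coverage_guaranteed(lower, upper, delta, tol=1e-12, a=0.0, b=1.0):
    # lower(x, y), upper(x, y): bounds on the maximal complementary coverage probability over [x, y].
    pending = [(a, b)]
    while pending:
        x, y = pending.pop()
        if upper(x, y) <= delta:
            continue                    # prune: the guarantee holds on [x, y]
        if lower(x, y) > delta:
            return False                # the guarantee is violated somewhere in [x, y]
        if y - x <= tol:
            return False                # undecided on a tiny interval: fail conservatively
        m = (x + y) / 2.0
        pending.append((x, m))          # branch into two subintervals of equal length
        pending.append((m, y))
    return True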

2.5. Bisection Coverage Tuning

The fourth component for the exact sequential estimation of a binomial proportion is Bisection Coverage Tuning. Based on the adaptive rigorous checking of coverage probability, Chen proposed in [14, ] and [15, ] to apply a bisection search method to determine maximal such that the coverage probability is no less than for any value of the associated parameter. Moreover, Chen has developed asymptotic results in [15, page 21, ] for determining the initial interval of needed for the bisection search. Specifically, if the complementary coverage probability associated with tends to as , then the initial interval of can be taken as , where is the largest integer such that the complementary coverage probability associated with is no greater than for all . By virtue of a bisection search, it is possible to obtain such that the complementary coverage probability associated with is guaranteed to be no greater than for all .
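A minimal sketch of bisection coverage tuning is given below, assuming a Boolean subroutine coverage_ok(zeta), for instance the adapted B&B check sketched above, that certifies the confidence level for a given value of the coverage tuning parameter. The bracketing by powers of two mirrors the initialization described in this section, while the helper names are hypothetical.

def tune_zeta(coverage_ok, zeta_start=1.0, iterations=30):
    # Find a power-of-two bracket [lo, hi] with coverage_ok(lo) True and coverage_ok(hi) False.
    if coverage_ok(zeta_start):
        return zeta_start
    hi = zeta_start
    while not coverage_ok(hi / 2.0):
        hi /= 2.0
    lo = hi / 2.0
    # Bisect to enlarge zeta as much as possible while keeping the coverage guarantee.
    for _ in range(iterations):
        mid = (lo + hi) / 2.0
        if coverage_ok(mid):
            lo = mid
        else:
            hi = mid
    return lo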

3. Principle of Constructing Stopping Rules

In this section, we shall illustrate the inherent connection among various stopping rules. It will be demonstrated that many stopping rules can be derived by virtue of the inclusion principle proposed by Chen [18, ].

3.1. Inclusion Principle

The problem of estimating a binomial proportion can be considered as a special case of parameter estimation for a random variable parameterized by , where the objective is to construct a sequential estimator for such that for any . Assume that the sampling process consists of stages with sample sizes . For , define an estimator for in terms of samples of . Let be a sequence of confidence intervals such that for any , is defined in terms of and that the coverage probability can be made arbitrarily close to by choosing to be a sufficiently small number. In of [18], Chen proposed the following general stopping rule: At the termination of the sampling process, a sequential estimator for is taken as , where is the index of stage at the termination of sampling process.

Clearly, the general stopping rule (10) can be restated as follows.

Continue sampling until the confidence interval is included by interval for some .

The sequence of confidence intervals is parameterized by for the purpose of controlling the coverage probability . Due to the inclusion relationship , such a general methodology of using a sequence of confidence intervals to construct a stopping rule for controlling the coverage probability is referred to as the inclusion principle. It is asserted by of [18] that provided that for and . This demonstrates that if the number of stages is bounded with respect to , then the coverage probability associated with the stopping rule derived from the inclusion principle can be controlled by . Actually, before explicitly proposing the inclusion principle in [18], Chen had extensively applied the inclusion principle in [14–17] to construct stopping rules for estimating parameters of various distributions such as binomial, Poisson, geometric, hypergeometric, and normal distributions. A more general version of the inclusion principle is proposed in [19, ]. For simplicity of the stopping rule, Chen made an effort to eliminate the computation of confidence limits.

In the context of estimating a binomial proportion , the inclusion principle immediately leads to the following general stopping rule: Consequently, the sequential estimator for is taken as according to (3). It should be pointed out that the stopping rule (12) had been rediscovered by Frey in the first paragraph of Section 2 of [13]. The four stopping rules considered in his paper follow immediately from applying various confidence intervals to the general stopping rule (12).
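The general stopping rule (12) admits a very short computational description: at each stage, compute the current confidence interval and stop as soon as it is contained in the interval of half-width equal to the margin of error around the current estimate. A minimal Python sketch follows, with the interval constructor left abstract; how its nominal level is tied to the coverage tuning parameter is not reproduced here.

def should_stop(k, n, eps, interval):
    # interval(k, n) returns (lower, upper) confidence limits based on k successes in n trials.
    p_hat = k / n
    lo, hi = interval(k, n)
    # Inclusion principle: stop once the confidence interval lies inside [p_hat - eps, p_hat + eps].
    return p_hat - eps <= lo and hi <= p_hat + eps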

In the sequel, we will illustrate how to apply (12) to the derivation of Stopping Rules A, B, C, and D introduced in Section 2.2 and other specific stopping rules.

3.2. Stopping Rule from Wald Intervals

By virtue of Wald’s method of interval estimation for a binomial proportion , a sequence of confidence intervals for can be constructed such that and that for and . Note that, for , the event is the same as the event . So, applying this sequence of confidence intervals to (12) results in the stopping rule “continue sampling until for some ”. Since for any , there exists a unique number such that , this stopping rule is equivalent to “Continue sampling until for some .” This stopping rule is actually the same as Stopping Rule D, since for .
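For instance, with the standard Wald interval the inclusion condition reduces, by elementary algebra, to a threshold on the sample size, which is the spirit of Stopping Rule D. In the sketch below the nominal level alpha is left as a free parameter; how it relates to the coverage tuning parameter is not specified here.

import math
from statistics import NormalDist

def wald_interval(k, n, alpha):
    p_hat = k / n
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    half = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat - half, p_hat + half

def wald_stop(k, n, eps, alpha):
    # Inclusion of the Wald interval in [p_hat - eps, p_hat + eps] is equivalent to
    # n >= z**2 * p_hat * (1 - p_hat) / eps**2.
    p_hat = k / n
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    return n >= z * z * p_hat * (1.0 - p_hat) / (eps * eps)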

3.3. Stopping Rule from Revised Wald Intervals

Define for , where is a positive number. Inspired by Wald’s method of interval estimation for , a sequence of confidence intervals can be constructed such that and that for and . This sequence of confidence intervals was applied by Frey [13] to the general stopping rule (12). As a matter of fact, such an idea of revising the Wald interval by replacing the relative frequency involved in the confidence limits with had been proposed by Chen [24, ].

As can be seen from , page 243, of Frey [13], applying (12) with the sequence of revised Wald intervals yields the stopping rule “Continue sampling until for some .” Clearly, replacing in Stopping Rule D with also leads to this stopping rule.

3.4. Stopping Rule from Wilson’s Confidence Intervals

Making use of the interval estimation method of Wilson [25], one can obtain a sequence of confidence intervals for such that and that for and . It should be pointed out that the sequence of Wilson’s confidence intervals has been applied by Frey [13, , page 243] to the general stopping rule (12) for estimating a binomial proportion.

Since a stopping rule that directly involves the sequence of Wilson’s confidence intervals is cumbersome, it is desirable to eliminate the computation of Wilson’s confidence intervals in the stopping rule. For this purpose, we need to use the following result.

Theorem 1. Assume that and . Then, Wilson’s confidence intervals satisfy for .

See Appendix A for a proof. As a consequence of Theorem 1 and the fact that for any , there exists a unique number such that , applying the sequence of Wilson’s confidence intervals to (12) leads to the following stopping rule.

Continue sampling until for some .
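For reference, the Wilson interval used above has the standard closed form below; plugged into the should_stop helper sketched in Section 3.1, it gives the stopping rule derived from Wilson's confidence intervals. The choice of critical value z is again left as a free parameter.

import math

def wilson_interval(k, n, z):
    p_hat = k / n
    denom = 1.0 + z * z / n
    center = (p_hat + z * z / (2.0 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1.0 - p_hat) / n + z * z / (4.0 * n * n))
    return center - half, center + half

# Example usage: should_stop(k, n, eps, lambda kk, nn: wilson_interval(kk, nn, z))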

3.5. Stopping Rule from Clopper-Pearson Confidence Intervals

Applying the interval estimation method of Clopper-Pearson [26], a sequence of confidence intervals for can be obtained such that for and , where the upper confidence limit satisfies the equation if ; and the lower confidence limit satisfies the equation if . The well-known equation in [27, page 173] implies that , with , is decreasing with respect to and that , with , is increasing with respect to . It follows that for . Consequently, for . This demonstrates that applying the sequence of Clopper-Pearson confidence intervals to the general stopping rule (12) gives Stopping Rule C.

It should be pointed out that Stopping Rule C was rediscovered by Frey as the third stopping rule in , page 243 of his paper [13].
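The Clopper-Pearson limits themselves are conveniently computed from beta quantiles. A minimal sketch follows, using scipy for the beta quantile function; in practice the nominal level alpha would be tied to the coverage tuning parameter as described above, which is an assumption of this sketch.

from scipy.stats import beta

def clopper_pearson_interval(k, n, alpha):
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2.0, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1.0 - alpha / 2.0, k + 1, n - k)
    return lower, upper

# Combined with the should_stop helper of Section 3.1, this yields the stopping rule of this subsection.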

3.6. Stopping Rule from Fishman’s Confidence Intervals

By the interval estimation method of Fishman [28], a sequence of confidence intervals for can be obtained such that Under the assumption that and , by techniques similar to those used in the proof of of [22], it can be shown that for . Therefore, applying the sequence of confidence intervals of Fishman to the general stopping rule (12) gives Stopping Rule A.

It should be noted that Fishman’s confidence intervals are actually derived from the Chernoff bounds on the tail probabilities of the sample mean of a Bernoulli random variable. Hence, Stopping Rule A is also referred to as the stopping rule from Chernoff bounds in this paper.

3.7. Stopping Rule from Confidence Intervals of Chen et al.  

Using the interval estimation method of Chen et al. [29], a sequence of confidence intervals for can be obtained such that and that for and . Under the assumption that and , by techniques similar to those used in the proof of of [30], it can be shown that for . This implies that applying the sequence of confidence intervals of Chen et al. to the general stopping rule (12) leads to Stopping Rule B.

Actually, the confidence intervals of Chen et al. [29] are derived from Massart’s inequality [31] on the tail probabilities of the sample mean of a Bernoulli random variable. For this reason, Stopping Rule B is also referred to as the stopping rule from Massart’s inequality in [21, ].

4. Double-Parabolic Sequential Estimation

From Sections 2.2, 3.2, and 3.7, it can be seen that, by introducing a new parameter and letting take values and , respectively, Stopping Rules B and D can be accommodated as special cases of the following general stopping rule.

Continue the sampling process until for some , where .

Moreover, as can be seen from (16), the stopping rule derived from applying Wilson’s confidence intervals to (12) can also be viewed as a special case of such general stopping rule with .

From the stopping condition (21), it can be seen that the stopping boundary is associated with the double-parabolic function such that and correspond to the sample mean and sample size, respectively. For , , and , stopping boundaries with various are shown in Figure 1.

For fixed and , the parameters and affect the shape of the stopping boundary as follows. As increases, the span of the stopping boundary increases along the sample-mean axis. By decreasing , the stopping boundary can be dragged in the direction of increasing sample size. Hence, the parameter is referred to as the dilation coefficient. The parameter is referred to as the coverage tuning parameter. Since the stopping boundary consists of two parabolas, this approach of estimating a binomial proportion is referred to as the double-parabolic sequential estimation method.

4.1. Parametrization of the Sampling Scheme

In this section, we shall parameterize the double-parabolic sequential sampling scheme by the method described in Section 2.2. From the stopping condition (21), the stopping rule can be restated as follows. Continue sampling until for some , where the function is defined by Clearly, the function associated with the double-parabolic sequential sampling scheme depends on the design parameters and . Applying the function defined by (22) to (6) yields Since is usually small in practical applications, we restrict to satisfy . As a consequence of and the fact that for any , it must be true that for any . It follows from (23) that , which implies that the minimum sample size can be taken as On the other hand, applying the function defined by (22) to (7) gives Since for any , it follows from (25) that , which implies that maximum sample size can be taken as Therefore, the sample sizes can be chosen as functions of , and which satisfy the following constraint: In particular, if the number of stages is given and the group sizes are expected to be approximately equal, then the sample sizes, , for all stages can be obtained by substituting defined by (24) and defined by (26) into (8). For example, if the values of design parameters are and , then the sample sizes of this sampling scheme are calculated as The stopping rule is completely determined by substituting the values of design parameters into (21).

4.2. Uniform Controllability of Coverage Probability

Clearly, for prespecified , and , the coverage probability depends on the parameter , the number of stages , and the sample sizes . As illustrated in Section 4.1, the number of stages and the sample sizes can be defined as functions of . That is, the stopping rule can be parameterized by . Accordingly, for any , the coverage probability becomes a function of . The following theorem shows that it suffices to choose small enough to guarantee the prespecified confidence level.

Theorem 2. Let and be fixed. Assume that the number of stages and the sample sizes are functions of such that the constraint (27) is satisfied. Then, is no less than for any provided that

See Appendix B for a proof. For Theorem 2 to be valid, the choice of sample sizes is very flexible. Particularly, the sample sizes can be arithmetic or geometric progressions, or follow any other pattern, as long as the constraint (27) is satisfied. It can be seen that for the coverage probability to be uniformly controllable, the dilation coefficient must be greater than . Theorem 2 asserts that there exists such that the coverage probability is no less than , regardless of the associated binomial proportion . For the purpose of reducing sampling cost, we want to have a value of as large as possible such that the prespecified confidence level is guaranteed for any . This can be accomplished by the technical components introduced in Sections 2.1, 2.3, 2.4, and 2.5. Clearly, for every value of , we can obtain a corresponding value of (as large as possible) to ensure the desired confidence level. However, the performances of the resultant stopping rules are different. Therefore, we can try a number of values of and pick the best resultant stopping rule for practical use.

4.3. Asymptotic Optimality of Sampling Schemes

We now provide an important reason for proposing a sampling scheme of this structure by showing its asymptotic optimality. Since the performance of a group sampling scheme will be close to that of its fully sequential counterpart, we investigate the optimality of the fully sequential sampling scheme. In this scenario, the sample sizes are consecutive integers such that The fully sequential sampling scheme can be viewed as a special case of a group sampling scheme of stages and group size . Clearly, if and are fixed, the sampling scheme is dependent only on . Hence, for any , if we allow to vary in , then the coverage probability and the average sample number are functions of . We are interested in knowing the asymptotic behavior of these functions as , since is usually small in practical situations. The following theorem provides the desired insights.

Theorem 3. Assume that and are fixed. Define for and . Then, for any .

See Appendix C for a proof. From (32), it can be seen that for any if . Such value can be taken as an initial value for the coverage tuning parameter . In addition to providing guidance on the coverage tuning techniques, Theorem 3 also establishes the optimality of the sampling scheme. To see this, let denote the minimum sample size required for a fixed-sample-size procedure to guarantee that for any , where . It is well known that from the central limit theorem, Applying (33), (34), and letting , we have for and , which implies the asymptotic optimality of the double-parabolic sampling scheme. By virtue of (33), an approximate formula for computing the average sample number is given as follows: for and . From (34), one obtains , which is a well-known result in statistics. In situations that no information of is available, one usually uses as the sample size for estimating the binomial proportion with prescribed margin of error and confidence level . Since the sample size formula (36) can lead to under-coverage, researchers in many areas are willing to use a more conservative but rigorous sample size formula which is derived from the Chernoff-Hoeffding bound [32, 33]. Comparing (35) and (37), one can see that under the premise of guaranteeing the prescribed confidence level , the double-parabolic sampling scheme can lead to a substantial reduction of sample number when the unknown binomial proportion is close to or .
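For a quick sense of the magnitudes involved, the following Python snippet computes the classical normal-approximation sample size, its worst case at one half, and the conservative sample size from the Chernoff-Hoeffding bound mentioned above. These are the standard textbook formulas; identifying them with the equation numbers (35)-(37) is our reading of the text.

from math import ceil, log
from statistics import NormalDist

def normal_approx_n(p, eps, delta):
    # Classical normal-approximation sample size z**2 * p * (1 - p) / eps**2.
    z = NormalDist().inv_cdf(1.0 - delta / 2.0)
    return ceil(z * z * p * (1.0 - p) / (eps * eps))

def worst_case_normal_n(eps, delta):
    # Worst case of the above at p = 1/2.
    z = NormalDist().inv_cdf(1.0 - delta / 2.0)
    return ceil(z * z / (4.0 * eps * eps))

def chernoff_hoeffding_n(eps, delta):
    # Conservative but rigorous sample size from the Chernoff-Hoeffding bound.
    return ceil(log(2.0 / delta) / (2.0 * eps * eps))

# For eps = 0.05 and delta = 0.05: worst_case_normal_n -> 385, chernoff_hoeffding_n -> 738.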

4.4. Bounds on Distribution and Expectation of Sample Number

We shall derive analytic bounds for the cumulative distribution function and expectation of the sample number associated with the double-parabolic sampling scheme. In this direction, we have obtained the following results.

Theorem 4. Let . Define for . Let denote the index of stage such that . Then, for . Moreover, .

See Appendix D for a proof. By the symmetry of the double-parabolic sampling scheme, similar analytic bounds for the distribution and expectation of the sample number can be derived for the case that .

5. Comparison of Computational Methods

In this section, we shall compare various computational methods. First, we will illustrate why a frequently used method of evaluating the coverage probability based on gridding the parameter space is not rigorous and is less efficient than the adapted B&B algorithm. Second, we will introduce the Adaptive Maximum Checking Algorithm of [21], which has better computational efficiency than the adapted B&B algorithm. Third, we will explain that it is more advantageous in terms of numerical accuracy to work with the complementary coverage probability than to evaluate the coverage probability directly. Finally, we will compare the computational methods of Chen [14–18] and Frey [13] for the design of sequential procedures for estimating a binomial proportion.

5.1. Verifying Coverage Guarantee without Gridding Parameter Space

For the purpose of constructing a sampling scheme so that the prescribed confidence level is guaranteed, an essential task is to determine whether the coverage probability associated with a given stopping rule is no less than . In other words, it is necessary to compare the infimum of coverage probability with . To accomplish such a task of checking coverage guarantee, a natural method is to evaluate the infimum of coverage probability as follows:
(i) choose grid points from parameter space ;
(ii) compute for ;
(iii) take as .

This method can easily be mistaken for an exact approach and has been frequently used for evaluating coverage probabilities in many problem areas.

It is not hard to show that if the sample size of a sequential procedure has a support , then the coverage probability is discontinuous at , where  is a nonnegative integer no greater than . The set typically has a large number of parameter values. Due to the discontinuity of the coverage probability as a function of , the coverage probabilities can differ significantly for two parameter values which are extremely close. This implies that an intolerable error can be introduced by taking the minimum of coverage probabilities of a finite number of parameter values as the infimum of coverage probability on the whole parameter space. So, if one simply uses the minimum of the coverage probabilities of a finite number of parameter values as the infimum of coverage probability to check the coverage guarantee, the sequential estimator of the resultant stopping rule will fail to guarantee the prescribed confidence level.

In addition to the lack of rigor, another drawback of checking coverage guarantee based on gridding the parameter space is its low efficiency. A critical issue is the choice of the number, , of grid points. If the number is too small, the induced error can be substantial. On the other hand, choosing a large number for results in high computational complexity.

In contrast to the method based on gridding parameter space, the adapted B&B algorithm is a rigorous approach for checking coverage guarantee as a consequence of the mechanism for comparing the bounds of coverage probability with the prescribed confidence level. The algorithm is also efficient due to the mechanism of pruning branches.

5.2. Adaptive Maximum Checking Algorithm

As illustrated in Section 2, the techniques developed in [14–18] are sufficient to provide exact solutions for a wide range of sequential estimation problems. However, one of the four components, the adapted B&B algorithm, requires computing both the lower and upper bounds of the complementary coverage probability. To further reduce the computational complexity, it is desirable to have a checking algorithm which needs only one of the lower and upper bounds. For this purpose, Chen had developed the Adaptive Maximum Checking Algorithm (AMCA) in [21, ] and [19, ]. In the following introduction of the AMCA, we shall follow the description of [21]. The AMCA can be applied to a wide class of computational problems dependent on the following critical subroutine.

Determine whether a function is smaller than a prescribed number for every value of contained in interval .

Particularly, for checking the coverage guarantee in the context of estimating a binomial proportion, the parameter is the binomial proportion and the function is actually the complementary coverage probability. In many situations, it is impossible or very difficult to evaluate for every value of in interval , since the interval may contain infinitely many or an extremely large number of values. Similar to the adapted B&B algorithm, the purpose of AMCA is to reduce the computational complexity associated with the problem of determining whether the maximum of over is less than . The only assumption required for AMCA is that, for any interval , it is possible to compute an upper bound such that for any and that the upper bound converges to as the interval width tends to . The backward AMCA proceeds as in Algorithm 2.

Choose initial step size .
Let , and .
While , do the following:
     Let and ;
      While st = 0, do the following:
      Let and .
      If , then let and .
      Otherwise, let and .
      If , then let st = 1 and .
      If , then let st = 1 and F = 1.
Return .

The output of the backward AMCA is a binary variable such that “” means “” and “” means “.” An intermediate variable is introduced in the description of AMCA such that “” means that the left endpoint of the interval is reached. The backward AMCA starts from the right endpoint of the interval (i.e., ) and attempts to find an interval such that . If such an interval is available, then the algorithm attempts to go backward to find the next consecutive interval with twice the width. If doubling the interval width fails to guarantee , then try to repeatedly cut the interval width in half to ensure that . If the interval width becomes smaller than a prescribed tolerance , then AMCA declares that “.” For our relevant statistical problems, if for some , it is certain that “” will be declared. On the other hand, it is possible that “” is declared even though for any . However, such a situation can be made extremely rare and immaterial if we choose to be a very small number. Moreover, this will only introduce negligible conservativeness in the evaluation of if is chosen to be sufficiently small (e.g., ). Clearly, the backward AMCA can be easily modified into a forward AMCA. Moreover, the AMCA can also be easily modified into an Adaptive Minimum Checking Algorithm (forward and backward). For checking the maximum of complementary coverage probability , one can use the AMCA with over interval . We would like to point out that, in contrast to the adapted B&B algorithm, it seems difficult to generalize the AMCA to problems involving multidimensional parameter spaces.
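The following Python sketch condenses the backward AMCA just described, assuming only an upper-bound routine upper(x, y) for the complementary coverage probability over a subinterval [x, y]. The helper names, the initial step size, and the conservative failure when the step drops below the tolerance are our own choices for this sketch.

def amca_max_below_delta(upper, delta, a=0.0, b=1.0, eta=1e-12, d0=0.1):
    right = b
    d = d0
    while right > a:
        d = min(2.0 * d, right - a)            # try to double the step without passing the left endpoint
        while upper(right - d, right) > delta:
            d /= 2.0                           # halve the step until the bound becomes conclusive
            if d < eta:
                return False                   # cannot certify the guarantee
        right -= d                             # guarantee certified on [right - d, right]; move leftward
    return True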

5.3. Working with Complementary Coverage Probability

We would like to point out that, instead of evaluating the coverage probability as in [13], it is better to evaluate the complementary coverage probability for the purpose of reducing numerical error. The advantage of working with the complementary coverage probability can be explained as follows. Note that, in many cases, the coverage probability is very close to and the complementary coverage probability is very close to . Since the absolute precision for computing a number close to is much lower than the absolute precision for computing a number close to , the method of directly evaluating the coverage probability will lead to intolerable numerical error for problems involving small . As an example, consider a situation in which the complementary coverage probability is of the order of . Direct computation of the coverage probability can easily lead to an absolute error of the order of . However, the absolute error of computing the complementary coverage probability can be readily controlled at the order of .
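The point can be seen directly in double-precision arithmetic: storing a coverage probability that differs from one by a tiny amount destroys most of the significant digits of that difference, whereas the complementary coverage probability keeps its full relative precision.

tiny = 1e-12                      # a complementary coverage probability of this order
coverage = 1.0 - tiny             # storing the coverage probability instead
recovered = 1.0 - coverage
print(recovered)                  # about 9.99978e-13 with IEEE-754 doubles: already wrong in the fifth digit
print(tiny)                       # 1e-12: full relative precision when tracked directly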

5.4. Comparison of Approaches of Chen and Frey

As mentioned in the introduction, Frey published a paper [13] in The American Statistician (TAS) on the sequential estimation of a binomial proportion with prescribed margin of error and confidence level. The approaches of Chen and Frey are based on the same strategy as follows. First, construct a family of stopping rules parameterized by (and possibly other design parameters) so that the associated coverage probability can be controlled by parameter in the sense that the coverage probability can be made arbitrarily close to by increasing . Second, apply a bisection search method to determine the parameter so that the coverage probability is no less than the prescribed confidence level for any .

For the purpose of controlling the coverage probability, Frey [13] applied the inclusion principle previously proposed in [18, ] and used in [14–17]. As illustrated in Section 3, the central idea of the inclusion principle is to use a sequence of confidence intervals to construct stopping rules so that the sampling process is continued until a confidence interval is included by an interval defined in terms of the estimator and margin of error. Due to the inclusion relationship, the associated coverage probability can be controlled by the confidence coefficients of the sequence of confidence intervals. The critical value used by Frey plays the same role for controlling coverage probabilities as that of the coverage tuning parameter used by Chen. Frey [13] stated stopping rules in terms of confidence limits. This way of expressing stopping rules is straightforward and insightful, since one can readily see the principle behind the construction. For convenience of practical use, Chen proposed to eliminate the necessity of computing confidence limits.

Similar to the AMCA proposed in [21, ], the algorithm of Frey [13, Appendix] for checking coverage guarantee adaptively scans the parameter space based on interval bounding. The adaptive method used by Frey for updating step size is essentially the same as that of the AMCA. Ignoring the number in Frey’s expression “,” which has very little impact on the computational efficiency, Frey’s step size can be identified as the adaptive step size in the AMCA. The operation associated with “” has a function similar to that of the command “Let and ” in the outer loop of the AMCA. The operation associated with Frey’s expression “, ” is equivalent to that of the command “Let and ” in the inner loop of the AMCA. Frey proposed to declare a failure of coverage guarantee if “the distance from to the candidate value for falls below .” The number “” actually plays the same role as “” in the AMCA, where “” is recommended by [21].

6. Numerical Results

In this section, we shall illustrate the proposed double-parabolic sampling scheme through examples. As demonstrated in Sections 2.2 and 4, the double-parabolic sampling scheme can be parameterized by the dilation coefficient and the coverage tuning parameter . Hence, the performance of the resultant stopping rule can be optimized with respect to and by choosing various values of from interval and determining the corresponding values of by the computational techniques introduced in Section 2 to guarantee the desired confidence level.

6.1. Asymptotic Analysis May Be Inadequate

For fully sequential cases, we have evaluated the double-parabolic sampling scheme with , , , and . The stopping boundary is displayed in the left side of Figure 2. The function of coverage probability with respect to the binomial proportion is shown in the right side of Figure 2, which indicates that the coverage probabilities are generally substantially lower than the prescribed confidence level . By considering as a small number and applying the asymptotic theory, the coverage probability associated with the sampling scheme is expected to be close to . This numerical example demonstrates that although the asymptotic method is insightful and involves virtually no computation, it may not be adequate.

In general, the main drawback of an asymptotic method is that there is no guarantee of coverage probability. Although an asymptotic method asserts that if the margin of error tends to , the coverage probability will tend to the prespecified confidence level , it is difficult to determine how small the margin of error must be for the asymptotic method to be applicable. Note that implies the average sample size tends to . However, in reality, the sample sizes must be finite. Consequently, an asymptotic method inevitably introduces unknown statistical error. Since an asymptotic method does not necessarily guarantee the prescribed confidence level, it is not fair to compare its associated sample size with that of an exact method, which guarantees the prespecified confidence level.

This example also indicates that, due to the discrete nature of the problem, the coverage probability is a discontinuous and erratic function of , which implies that Monte Carlo simulation is not suitable for evaluating the coverage performance.

6.2. Parametric Values of Fully Sequential Schemes

For fully sequential cases, to allow direct application of our double-parabolic sequential method, we have obtained values of coverage tuning parameter , which guarantee the prescribed confidence levels, for double-parabolic sampling schemes with and various combinations of as shown in Table 1. We used the computational techniques introduced in Section 2 to obtain this table.

To illustrate the use of Table 1, suppose that one wants a fully sequential sampling procedure to ensure that for any . This means that one can choose , and the range of sample size is given by (30). From Table 1, it can be seen that the value of corresponding to and is . Consequently, the stopping rule is completely determined by substituting the values of design parameters , , , and into its definition. The stopping boundary of this sampling scheme is displayed in the left side of Figure 3. The function of coverage probability with respect to the binomial proportion is shown in the right side of Figure 3.

6.3. Parametric Values of Group Sequential Schemes

In many situations, especially in clinical trials, it is desirable to use group sequential sampling schemes. In Tables 2 and 3, assuming that sample sizes satisfy (8) for the purpose of having approximately equal group sizes, we have obtained parameters for concrete schemes by the computational techniques introduced in Section 2.

For dilation coefficient and confidence parameter , we have obtained values of coverage tuning parameter , which guarantee the prescribed confidence level , for double-parabolic sampling schemes, with the number of stages ranging from to , as shown in Table 2.

For dilation coefficient and confidence parameter , we have obtained values of coverage tuning parameter , which guarantee the prescribed confidence level , for double-parabolic sampling schemes, with the number of stages ranging from to , as shown in Table 3.

To illustrate the use of these tables, suppose that one wants a ten-stage sampling procedure of approximately equal group sizes to ensure that for any . This means that one can choose , and sample sizes satisfying (8). To obtain appropriate parameter values for the sampling procedure, one can look at Table 3 to find the coverage tuning parameter corresponding to and . From Table 3, it can be seen that can be taken as . Consequently, the stopping rule is completely determined by substituting the values of design parameters , , , , and into its definition and (8). The stopping boundary of this sampling scheme and the function of coverage probability with respect to the binomial proportion are displayed, respectively, in the left and right sides of Figure 4.

6.4. Comparison of Sampling Schemes

We have conducted numerical experiments to investigate the impact of dilation coefficient on the performance of our double-parabolic sampling schemes. Our computational experiences indicate that the dilation coefficient is frequently a good choice in terms of average sample number and coverage probability. For example, consider the case that the margin of error is given as and the prescribed confidence level is with . For the double-parabolic sampling scheme with the dilation coefficient chosen as , and , we have determined that, to ensure the prescribed confidence level , it suffices to set the coverage tuning parameter as and , respectively. The average sample numbers of these sampling schemes and the coverage probabilities as functions of the binomial proportion are shown, respectively, in the left and right sides of Figure 5. From Figure 5, it can be seen that a double-parabolic sampling scheme with dilation coefficient has better performance in terms of average sample number and coverage probability as compared to that of the double-parabolic sampling scheme with smaller or larger values of dilation coefficient.

We have investigated the impact of confidence intervals on the performance of fully sequential sampling schemes constructed from the inclusion principle. We have observed that the stopping rule derived from Clopper-Pearson intervals generally outperforms the stopping rules derived from other types of confidence intervals. However, via appropriate choice of the dilation coefficient, the double-parabolic sampling scheme can perform uniformly better than the stopping rule derived from Clopper-Pearson intervals. To illustrate, consider the case that and . For stopping rules derived from Clopper-Pearson intervals, Fishman’s intervals, Wilson’s intervals, and revised Wald intervals with , we have determined that to guarantee the prescribed confidence level , it suffices to set the coverage tuning parameter as , and , respectively. For the stopping rule derived from Wald intervals, we have determined to ensure the confidence level, under the condition that the minimum sample size is taken as . Recall that for the double-parabolic sampling scheme with , we have obtained for the purpose of guaranteeing the confidence level. The average sample numbers of these sampling schemes are shown in Figure 6. From these plots, it can be seen that as compared to the stopping rule derived from Clopper-Pearson intervals, the stopping rule derived from the revised Wald intervals performs better in the region of close to or , but performs worse in the region of in the middle of . The performance of the stopping rules derived from Fishman’s intervals (i.e., from the Chernoff bound) and Wald intervals is clearly inferior to that of the stopping rule derived from Clopper-Pearson intervals. It can be observed that the double-parabolic sampling scheme uniformly outperforms the stopping rule derived from Clopper-Pearson intervals.

6.5. Estimation with High Confidence Level

In some situations, we need to estimate a binomial proportion with a high confidence level. For example, one might want to construct a sampling scheme such that, for and , the resultant sequential estimator satisfies for any . By working with the complementary coverage probability, we determined that it suffices to let the dilation coefficient and the coverage tuning parameter . The stopping boundary and the function of coverage probability with respect to the binomial proportion are displayed, respectively, in the left and right sides of Figure 7. As addressed in Section 5.3, it should be noted that it is impossible to obtain such a sampling scheme without working with the complementary coverage probability.

7. Illustrative Examples for Clinical Trials

In this section, we shall illustrate the applications of our double-parabolic group sequential estimation method in clinical trials.

An example of our double-parabolic sampling scheme can be illustrated as follows. Assume that is given and that the sampling procedure is expected to have stages with sample sizes satisfying (8). Choosing , we have determined that it suffices to take to guarantee that the coverage probability is no less than for all . Accordingly, the sample sizes of this sampling scheme are calculated as , and . This sampling scheme, with a sample path, is shown in the left side of Figure 8. In this case, the stopping rule can be equivalently described by virtue of Figure 8 as follows: continue sampling until the sample path hits a green line at some stage. The coverage probability is shown in the right side of Figure 8.

To apply this estimation method in a clinical trial for estimating the proportion of a binomial response with margin of error and confidence level , we can form seven groups of patients with group sizes , and . In the first stage, we conduct the experiment with the patients of the first group. We observe the relative frequency of response and record it as . Suppose that there are patients with positive responses; then the relative frequency at the first stage is . With the values of , we check whether the stopping rule is satisfied. This is equivalent to checking whether the point hits a green line at the first stage. For such a value of , it can be seen that the stopping condition is not fulfilled. So, we need to conduct the second stage of the experiment with the patients of the second group. We observe the responses of these patients. Suppose we observe that patients in this group have positive responses. Adding this count to , the number of positive responses before the second stage, gives positive responses among patients. So, at the second stage, we obtain the relative frequency . Since the stopping rule is not satisfied with the values of , we need to conduct the third stage of the experiment with the patients of the third group. Suppose we observe that patients in this group have positive responses. Adding this count to , the number of positive responses before the third stage, gives positive responses among patients. So, at the third stage, we obtain the relative frequency . Since the stopping rule is not satisfied with the values of , we need to conduct the fourth stage of the experiment with the patients of the fourth group. Suppose we observe that patients in this group have positive responses. Adding this count to , the number of positive responses before the fourth stage, gives positive responses among patients. So, at the fourth stage, we obtain the relative frequency . Since the stopping rule is not satisfied with the values of , we need to conduct the fifth stage of the experiment with the patients of the fifth group. Suppose we observe that patients in this group have positive responses. Adding this count to , the number of positive responses before the fifth stage, gives positive responses among patients. So, at the fifth stage, we obtain the relative frequency . It can be seen that the stopping rule is satisfied with the values of . Therefore, we can terminate the sampling experiment and take as an estimate of the proportion of the whole population having positive responses. With the prescribed confidence level, one can believe that the difference between the true value of and its estimate is less than .
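The stage-by-stage bookkeeping described above can be automated. The sketch below is a minimal driver, assuming the incremental group sizes and a stopping-rule function (for instance, the double-parabolic rule with its determined parameters, or the Clopper-Pearson rule sketched earlier) are supplied; the names observe_group and should_stop are hypothetical placeholders for the actual collection of patient responses and the chosen rule.

```python
def run_group_sequential_trial(group_sizes, observe_group, should_stop):
    """Carry out the group sequential estimation procedure stage by stage.

    group_sizes   : list of incremental group sizes (patients per stage).
    observe_group : function m -> number of positive responses among m new patients.
    should_stop   : function (k, n) -> bool implementing the chosen stopping rule.

    Returns the final relative frequency, the total number of patients used,
    and the stage index at termination.
    """
    k = n = 0
    for stage, m in enumerate(group_sizes, start=1):
        k += observe_group(m)   # positive responses observed in the new group
        n += m                  # cumulative sample size
        if should_stop(k, n) or stage == len(group_sizes):
            break
    return k / n, n, stage
```

The final relative frequency returned by such a driver is the sequential estimate reported at the termination of the trial.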

In this experiment, we use only samples to obtain the estimate for . Apart from round-off error, there is no other source of error in the reported statistical accuracy, since no asymptotic approximation is involved. As compared to a fixed-sample-size procedure, we achieved a substantial saving of samples. To see this, one can check that the rigorous formula (37) gives a sample size of , which is overly conservative. The classical approximate formula (35) gives a sample size of , which is known to be insufficient to guarantee the prescribed confidence level . The exact method of [34] shows that at least samples are needed. As compared to the best fixed sample size obtained by the method of [34], the reduction of sample size resulting from our double-parabolic sampling scheme is . It can be seen that the fixed-sample-size procedure wastes samples as compared to our group sequential method, which is also an exact method. This percentage might not be a serious concern if it represented only a saving of simulation runs. However, since the samples here are patients, the reduction of sample size is important for ethical and economic reasons. Using our group sequential method, the worst-case sample size is , which is only more than the minimum sample size of the fixed-sample-size procedure. However, many samples can be saved in the average case.
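As a point of reference, fixed sample sizes of the kind compared above can be recomputed from standard formulas. The sketch below uses the classical normal-approximation formula evaluated at the worst case p = 1/2 and the conservative Hoeffding-type bound; these are standard expressions and may differ in detail from formulas (35) and (37) of the paper, which are not reproduced here. The exact minimum fixed sample size of [34] requires a search over n with exact coverage computation and is not shown.

```python
from math import ceil, log
from scipy.stats import norm

def approx_sample_size(eps, alpha):
    """Classical normal-approximation sample size at the worst case p = 1/2:
    n >= z_{alpha/2}^2 / (4 eps^2)."""
    z = norm.ppf(1 - alpha / 2)
    return ceil(z * z / (4 * eps * eps))

def hoeffding_sample_size(eps, alpha):
    """Conservative sample size from the Hoeffding bound:
    n >= ln(2/alpha) / (2 eps^2)."""
    return ceil(log(2 / alpha) / (2 * eps * eps))
```

For example, with eps = 0.05 and alpha = 0.05, the normal approximation gives 385 samples while the Hoeffding-type bound gives 738, which illustrates how conservative the rigorous fixed-sample-size formulas can be.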

As or becomes smaller, the reduction of samples is more significant. For example, for and , we have a double-parabolic sampling scheme with stages. The sampling scheme, together with a sample path, is shown in the left side of Figure 9. The coverage probability is shown in the right side of Figure 9.

8. Conclusion

In this paper, we have reviewed recent developments in group sequential estimation methods for a binomial proportion. We have illustrated the inclusion principle and its applications to various stopping rules. We have described computational techniques in the literature which suffice for determining the parameters of stopping rules so as to guarantee desired confidence levels. Moreover, we have proposed a new family of sampling schemes with stopping boundaries of double-parabolic shape, which are parameterized by the coverage tuning parameter and the dilation coefficient. These parameters can be determined by the exact computational techniques to reduce the sampling cost while ensuring prescribed confidence levels. The new family of sampling schemes is extremely simple in structure and asymptotically optimal as the margin of error tends to zero. We have established analytic bounds for the distribution and expectation of the sample number at the termination of the sampling process. We have obtained parameter values via the exact computational techniques for the proposed sampling schemes such that the confidence levels are guaranteed and the sampling schemes are generally more efficient than existing ones.

Appendices

A. Proof of Theorem 1

Consider the function for and . It can be checked that , which shows that, for any fixed , is a unimodal function of , with the maximum attained at . By this property of and the definition of Wilson's confidence intervals, we have for , where we have used the facts that and . Recall that . It follows that for . This completes the proof of the theorem.
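For reference, Wilson's confidence interval with normal critical value z takes the standard form below (the paper's own notation for the endpoints is not reproduced here, so this is only a reminder of the usual expression):

```latex
% Standard Wilson interval for k successes in n trials, with \hat{p} = k/n
\left( \hat{p} + \frac{z^2}{2n} \;\pm\; z \sqrt{\frac{\hat{p}(1-\hat{p})}{n} + \frac{z^2}{4n^2}} \right)
\Bigg/ \left( 1 + \frac{z^2}{n} \right)
```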

B. Proof of Theorem 2

By the assumption that , we have and, consequently, . It follows from the definition of the sampling scheme that the sampling process must stop at or before the th stage. In other words, . This allows one to write for . By virtue of the well-known Chernoff-Hoeffding bound [32, 33], we have for . Making use of (B.1), (B.2), and the fact that as can be seen from (30), we have for any . Therefore, to guarantee that for any , it is sufficient to choose such that . This inequality can be written as or, equivalently, . The proof of the theorem is thus completed.
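The Chernoff-Hoeffding bound invoked here is, in its standard form, the following large-deviation inequality for the relative frequency of n Bernoulli trials with success probability p (the paper's notation may differ):

```latex
\Pr\{\widehat{p}_n \ge z\} \le \exp\bigl(-n\,\mathcal{H}(z,p)\bigr)\ \text{for } z \ge p,
\qquad
\mathcal{H}(z,p) = z\ln\frac{z}{p} + (1-z)\ln\frac{1-z}{1-p},
```

with the symmetric inequality for the lower tail; a weaker but convenient consequence is the two-sided bound \Pr\{|\widehat{p}_n - p| \ge \varepsilon\} \le 2\exp(-2n\varepsilon^2).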

C. Proof of Theorem 3

First, we need to show that for any . Clearly, the sample number is a random number dependent on . Note that for any , the sequences and are subsets of . By the strong law of large numbers, for almost every , the sequence converges to . Since every subsequence of a convergent sequence must converge, it follows that the sequences and converge to as , provided that as . Since it is certain that as , we have that is a sure event. It follows that is an almost sure event. By the definition of the sampling scheme, we have as a sure event. Hence, is an almost sure event. Define . We need to show that is an almost sure event. For this purpose, we let and aim to show that . As a consequence of , we have . By the continuity of the function with respect to and , we have . On the other hand, as a consequence of , we have . Making use of the continuity of the function with respect to and , we have . Combining (C.3) and (C.5) yields , and thus . This implies that is an almost sure event and thus for .

Next, we need to show that for any . For simplicity of notation, let and . Note that . Clearly, for any , we have . Recall that we have established that almost surely as . This implies that and in probability as tends to zero. It follows from Anscombe's random central limit theorem [35] that, as tends to zero, converges in distribution to a Gaussian random variable with zero mean and unit variance. Hence, from (C.6), we have , and from (C.7), we have . Since this argument holds for arbitrarily small , it must be true that . So, for any .
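In one standard formulation (possibly differing in notation from [35]), Anscombe's random central limit theorem asserts that if the random sample size grows in the sense that its ratio to a deterministic sequence tending to infinity converges to one in probability, then the studentized relative frequency remains asymptotically standard normal:

```latex
\frac{\sqrt{N_\varepsilon}\,\bigl(\widehat{p}_{N_\varepsilon} - p\bigr)}{\sqrt{p(1-p)}}
\;\xrightarrow{\ d\ }\; N(0,1) \quad\text{as } \varepsilon \to 0,
\qquad\text{provided } \frac{N_\varepsilon}{n_\varepsilon} \to 1 \text{ in probability with } n_\varepsilon \to \infty .
```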

Now, we focus our attention on showing that for any . For this purpose, it suffices to show that for any . For simplicity of notation, we abbreviate as in the sequel. Since we have established , we can conclude that . Noting that , we have . Combining (C.12) and (C.14) yields . On the other hand, using , we can write . Since , for the purpose of establishing , it remains to show that . Consider the functions and for . Note that for all . For , there exists a positive number such that for any , since is a continuous function of . From now on, let be sufficiently small such that . Then, for all . This implies that for all . Taking complementary events on both sides of (C.20) leads to for all . Since for all , it follows that for all . Therefore, we have shown that if is sufficiently small, then there exists a number such that for all . Using this inclusion relationship and the Chernoff-Hoeffding bound [32, 33], we have for all , provided that is sufficiently small. Letting and using (C.24), we have , provided that is sufficiently small. Consequently, , since and as . So, we have established (C.11). Since the argument holds for arbitrarily small , it must be true that for any . This completes the proof of the theorem.

D. Proof of Theorem 4

Recall that denotes the index of the stage at the termination of the sampling process. Observing that , we have . Making use of this result and the fact that , we have . By the definition of the stopping rule, we have for , where for . By the assumption that and are nonnegative, we have for . It follows from (D.3) that for . By the definition of , we have for . Making use of this fact, the inclusion relationship , , and the Chernoff-Hoeffding bound [32, 33], we have for . It follows from (D.2) and (D.4) that . This completes the proof of the theorem.

Acknowledgment

This work was supported in part by NIH/NCI Grants 1 P01 CA116676, P30 CA138292-01, and 5 P50 CA128613.