Abstract

An appropriate sample size is crucial for the success of many studies that involve a large number of comparisons. Sample size formulas for testing multiple hypotheses are provided in this paper. They can be used to determine the sample sizes required to provide adequate power while controlling the familywise error rate or the false discovery rate, to derive the growth rate of sample size with respect to an increasing number of comparisons or a decrease in effect size, and to assess the reliability of study designs. It is demonstrated that practical sample sizes can often be achieved, even when adjustments for a large number of comparisons are made, as in many genomewide studies.

1. Introduction

With the recent advancement in high-throughput technologies, simultaneous testing of a large number of hypotheses has become a common practice for many types of genomewide studies. Examples include genetic association studies and DNA microarray studies. In a genomewide association analysis, a large number of genetic markers are tested for association with the disease [1]. In DNA microarray studies, the interest is typically to identify differentially expressed genes between patient groups among a large number of candidate genes [2].

The challenges in designing such large-scale studies include the selection of features of scientific importance to be investigated, the selection of an appropriate sample size to provide adequate power, and the choice of methods for adjusting for multiple testing [3–7]. There have been recent methodological breakthroughs in multiple comparisons, notably in controlling the false discovery rate (FDR) [8, 9], which is particularly useful for DNA microarray and protein array studies and is also increasingly used in genomewide association studies [10]. On the other hand, Bonferroni-type adjustments remain surprisingly useful. For example, Klein et al. [1] successfully identified two SNPs associated with age-related macular degeneration (AMD) using a Bonferroni adjustment. Witte et al. [11] made the interesting observation that the relative sample size under a Bonferroni adjustment is approximately linear in the logarithm of the number of comparisons.

An appropriate sample size is crucial for the success of studies involving a large number of comparisons. However, an optimal and reliable sample size is extremely challenging to identify, as it typically depends on other design parameters that often have to be estimated from preliminary data. Preliminary data are often limited at the design stage, which leads to unreliable estimates of design parameters and creates extra uncertainty in sample size estimation. Thus, it is of great practical interest to examine the relationship between sample size and other design parameters, such as the number of comparisons to be made. In this paper, we go beyond Witte et al.'s [11] observation by providing explicit sample size formulas, examining various genomic analyses, and deriving a sample size formula for FDR control. Explicit sample size formulas are desirable because they elucidate how changes in other design parameters affect the sample size. This is of fundamental importance for understanding the reliability of study designs.

2. Sample Size Formulas

For testing a single hypothesis, the sample size problem is typically formulated as finding the number of subjects needed to ensure a desired power $1-\beta$ for detecting an effect size $\Delta$ at a prespecified significance level $\alpha$. Consider a one-sided test for equality of two normal means with known variances $\sigma_1^2$ and $\sigma_2^2$, respectively. The sample size per group ($n$) is as follows [12]:
$$ n = \frac{(z_\alpha + C z_\beta)^2}{\Delta^2}, \qquad (2.1) $$
where $\Delta = |\mu_1-\mu_2|/\sqrt{\sigma_1^2+\sigma_2^2}$, $C = 1$, $\Phi(z_t) = 1-t$, and $\Phi(z)$ is the cumulative distribution function (CDF) of the standard normal distribution.
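As a quick numerical check of (2.1), the minimal sketch below evaluates the formula with scipy; the function name and example inputs are ours, not the paper's.

```python
# Minimal sketch of formula (2.1); names and example values are illustrative only.
from scipy.stats import norm

def sample_size_per_group(alpha, beta, delta, C=1.0):
    """n = (z_alpha + C*z_beta)^2 / delta^2 for a one-sided z-test."""
    z_alpha = norm.isf(alpha)   # upper-tail quantile: P(Z > z_alpha) = alpha
    z_beta = norm.isf(beta)
    return (z_alpha + C * z_beta) ** 2 / delta ** 2

# alpha = 0.05, power = 90% (beta = 0.10), effect size delta = 0.5
print(sample_size_per_group(0.05, 0.10, 0.5))   # approximately 34 subjects per group
```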

Many of the most widely used statistical tests have sample size formulas similar to (2.1). For example, the commonly used Mann-Whitney test for comparing two continuous distributions without a normality assumption has a sample size formula of the same form as (2.1). Similarly, for testing equality of two binomial proportions, using either independent samples or correlated samples as in McNemar's test, the sample size formulas are also of the form (2.1), as discussed in Rosner [12].

For testing a single hypothesis, the influences of $\alpha$, $\beta$, and $\Delta$ on the sample size $n$ can be inferred easily from the above sample size formula (2.1), and are well known. When testing multiple hypotheses, one must guard against an abundance of false-positive results. The traditional criterion for error control in such situations is the familywise error rate (FWER), which is the probability of rejecting one or more true null hypotheses. The simplest and most commonly used method for controlling FWER is the Bonferroni correction, which is discussed in the next subsection.

2.1. FWER Control

In this section, we present sample size formulas for multiple comparisons in the context of controlling the familywise error rate (FWER). Suppose we make $M$ comparisons with the same effect size $\Delta$. If we wish to retain a familywise error rate $\alpha$ and power $1-\beta$, then with the Bonferroni adjustment, $\alpha_{\mathrm{bon}} = \alpha/M$, the sample size corresponding to (2.1) becomes
$$ n_M = \frac{(z_{\alpha/M} + C z_\beta)^2}{\Delta^2}. \qquad (2.2) $$
To see how $n_M$ changes as $M$ increases, we can use the following well-known fact: when $\alpha < 0.5$,
$$ \phi(z_\alpha)\left(\frac{1}{z_\alpha} - \frac{1}{z_\alpha^3}\right) \le 1 - \Phi(z_\alpha) \le \frac{\phi(z_\alpha)}{z_\alpha}. $$
Since $\alpha/M = 1 - \Phi(z_{\alpha/M})$, we can approximate $z_{\alpha/M}$ by $z^*_{\alpha/M}$, where
$$ z^{*2}_{\alpha/M} \equiv 2\log\left(\frac{M}{\alpha}\right) - \log(2\pi) - \log\left(2\log\frac{M}{\alpha}\right). \qquad (2.3) $$
The explicit approximation of $z^2_{\alpha/M}$ in (2.3) works extremely well for $M$ ranging from $10$ to $10^{10}$. Putting (2.3) into (2.2) yields the following approximation of the required sample size $n_M$:
$$ n^*_M = \frac{(z^*_{\alpha/M} + C z_\beta)^2}{\Delta^2}. \qquad (2.4) $$
Then, for fixed $(\alpha, \beta, \Delta)$, from (2.3) and (2.4), we have
$$ n_M \approx n^*_M \approx \frac{2}{\Delta^2}\log\frac{M}{\alpha}, \qquad \text{as } M \to +\infty. \qquad (2.5) $$
A few facts are evident from the above approximation. First, $n_M$ is approximately a linear function of $\log M$ (base 10) with slope $2/\Delta^2$. Second, the impact of $\beta$ on $n_M$ (or $n^*_M$) is negligible when $M$ is large. Third, a decrease in $\alpha$ affects $n_M$ (or $n^*_M$) in the same way as a proportional increase in $M$. The impact of $\Delta$ on $n_M$ (or $n^*_M$) is demonstrated in Figure 1 with $\alpha = 0.05$, $1-\beta = 0.90$, and $\Delta = 0.5$, $1$, and $2$, respectively. It shows that $n_M$ (open circles) can indeed be approximated well by a linear function of $\log M$. The lines are calculated from the approximate normal quantiles in (2.4) for $n^*_M$. Moreover, when $\Delta$ is large (e.g., $\Delta = 2$), the slope is very small.
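The sketch below (ours, not part of the paper) compares the exact Bonferroni sample size (2.2) with the explicit approximation (2.3)-(2.4) over a range of $M$.

```python
# Exact Bonferroni sample size n_M (2.2) versus the explicit approximation n*_M (2.3)-(2.4).
import numpy as np
from scipy.stats import norm

def n_bonferroni(M, alpha, beta, delta, C=1.0):
    """Exact n_M: Bonferroni-adjusted level alpha/M plugged into (2.1)."""
    return (norm.isf(alpha / M) + C * norm.isf(beta)) ** 2 / delta ** 2

def n_bonferroni_approx(M, alpha, beta, delta, C=1.0):
    """Approximate n*_M using the explicit quantile approximation (2.3)."""
    z_star_sq = 2 * np.log(M / alpha) - np.log(2 * np.pi) - np.log(2 * np.log(M / alpha))
    return (np.sqrt(z_star_sq) + C * norm.isf(beta)) ** 2 / delta ** 2

for M in (10, 1_000, 100_000, 10_000_000):
    print(M, round(n_bonferroni(M, 0.05, 0.10, 1.0), 1),
          round(n_bonferroni_approx(M, 0.05, 0.10, 1.0), 1))
```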

The simple Bonferroni correction is very useful when the number of true alternatives is small, as often occurs, for example, in candidate gene association studies. The Bonferroni approach is also easy to apply: it remains convenient when the hypotheses involve many covariates and nuisance parameters, whereas permutation approaches may not be applicable because they require some symmetry or exchangeability of the null hypotheses [13, 14]. Next, we give two practical examples to illustrate the growth rate of sample size relative to the number of tests $M$ to be performed.

The AMD Example
Age-related macular degeneration (AMD) is a major cause of blindness in the elderly. Klein et al. [1] reported a genomewide screen of 96 cases and 50 controls for polymorphisms associated with AMD. They examined 116,204 single-nucleotide polymorphisms (SNPs), two of which were found to be strongly associated with the disease phenotype. This is an example of testing equality of two binomial proportions in two independent groups (cases and controls). The required sample size for each marker is given by (2.2) or (2.4) with $\Delta^2 = (p_1-p_2)^2/(2\bar p\bar q)$ and $C = \sqrt{(p_1 q_1 + p_2 q_2)/(2\bar p\bar q)}$, where $\bar p = (p_1+p_2)/2$, $\bar q = 1-\bar p$, and $q_i = 1-p_i$. Sample size growth under the Bonferroni correction is plotted in Figure 2 against $\log M$ using the SNP rs1329428 (Table 1) identified in Klein et al. [1]. Using the Bonferroni adjustment, the sample sizes are calculated to provide 90% power to detect the association at the familywise significance level $\alpha = 5\%$. The open circles and plus signs are the sample sizes $n_M$ from (2.2) under the dominant and recessive odds ratios, respectively. The corresponding lines are the sample sizes $n^*_M$ based on (2.4).
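In code, the two-proportion case-control calculation looks like the sketch below; since Table 1 is not reproduced here, the case and control frequencies are placeholders rather than the rs1329428 values.

```python
# Sketch of (2.2) for comparing two independent proportions; frequencies are placeholders.
import numpy as np
from scipy.stats import norm

def n_two_proportions(M, alpha, beta, p1, p2):
    """Per-group sample size at Bonferroni level alpha/M, with
    Delta^2 = (p1 - p2)^2 / (2*pbar*qbar) and C = sqrt((p1*q1 + p2*q2) / (2*pbar*qbar))."""
    q1, q2 = 1 - p1, 1 - p2
    p_bar = (p1 + p2) / 2
    q_bar = 1 - p_bar
    delta_sq = (p1 - p2) ** 2 / (2 * p_bar * q_bar)
    C = np.sqrt((p1 * q1 + p2 * q2) / (2 * p_bar * q_bar))
    return (norm.isf(alpha / M) + C * norm.isf(beta)) ** 2 / delta_sq

# Hypothetical carrier frequencies in cases and controls; 116,204 SNPs, FWER 5%, power 90%
print(n_two_proportions(M=116204, alpha=0.05, beta=0.10, p1=0.75, p2=0.45))
```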

The TDT Example
To test for linkage or association in family-based studies, the transmission/disequilibrium test (TDT) of Spielman et al. [15] examines the transmission of an allele from heterozygous parents to their affected offspring. If an allele is associated with disease risk, it may be transmitted more than 50% of the time. Risch and Merikangas [16] studied the required sample size for the TDT in affected sib pairs. The TDT is equivalent to McNemar's test for two correlated proportions with the hypotheses $H_0: p = 0.5$ versus $H_1: p > 0.5$, for the specified alternative $p = p_A$, where $p_A$ is the probability that an $A/B$ parent transmits allele $A$ to an affected offspring. The number of matched pairs needed is given by (2.1) with $C = 2\sqrt{p_A(1-p_A)}$ and $\Delta^2 = [2(p_A - 0.5)]^2 p_D$, where $p_D$ is the projected proportion of discordant pairs among all matched pairs. If we assume that each family used in the analysis has only one marker-heterozygous parent, then $n$ is the number of families required. Sample sizes for the TDT are plotted in Figure 3 using the setup given in Risch and Merikangas [16]. Using the Bonferroni adjustment, the sample sizes are calculated to provide $1-\beta = 90\%$ power to identify a disease gene at the familywise significance level $\alpha = 5\%$. The plus signs and open triangles are the sample sizes $n_M$ calculated from (2.2) for disease frequencies equal to 0.1 and 0.5, respectively. The corresponding lines are $n^*_M$ based on (2.4).
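A sketch of the TDT sample size calculation is below; the transmission probability $p_A$ and the discordant-pair proportion $p_D$ are illustrative placeholders, not the values used by Risch and Merikangas [16].

```python
# Sketch of (2.2) for the TDT / McNemar setting; parameter values are placeholders.
import numpy as np
from scipy.stats import norm

def n_tdt_families(M, alpha, beta, p_A, p_D):
    """Number of families (matched pairs) at Bonferroni level alpha/M,
    with Delta^2 = [2(p_A - 0.5)]^2 * p_D and C = 2*sqrt(p_A*(1 - p_A))."""
    delta_sq = (2 * (p_A - 0.5)) ** 2 * p_D
    C = 2 * np.sqrt(p_A * (1 - p_A))
    return (norm.isf(alpha / M) + C * norm.isf(beta)) ** 2 / delta_sq

# e.g. one million markers, transmission probability 0.6, half of the pairs discordant
print(n_tdt_families(M=1_000_000, alpha=0.05, beta=0.10, p_A=0.6, p_D=0.5))
```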

2.2. FDR Control

For testing multiple hypotheses, such as the analysis of many genes in a microarray study, the possible outcomes can be described as in Table 2.

It is likely that many genes are differentially expressed in a microarray study [7]. A natural way to control the overall number of false positives is to control the expected proportion of false positives. Benjamini and Hochberg [8] defined the false discovery rate (FDR), using the notation of Table 2, as
$$ \mathrm{FDR} = P(R>0)\, E\!\left[\frac{V}{R} \,\Big|\, R>0\right], \qquad \mathrm{FDR} = 0 \text{ for } R = 0. \qquad (2.6) $$
Storey [9] defines the positive FDR (pFDR) as $\mathrm{pFDR} = \mathrm{FDR}/P(R>0)$. When $M$ is large, as assumed below, $P(R>0) \approx 1$ unless the power $1-\beta$ is very small, so $\mathrm{FDR} \approx \mathrm{pFDR}$.

The required sample size for multiple testing depends on $\alpha$, $1-\beta$, $M$, and the effect size $\Delta$ of each individual gene. For ease of exposition, we assume an equal effect size $\Delta$ for all differentially expressed genes, say $m_1$ genes; thus, the power $1-\beta$ of detecting any individual differentially expressed gene is the same for all of the $m_1$ genes between samples of two conditions of sizes $n_1$ and $n_2$. The expected outcomes in multiple testing can be expressed as functions of $\alpha$, $\beta$, $m_0$, and $m_1$ and are summarized in Table 3.

By the law of large numbers, from Table 3, $\mathrm{FDR} = E(V/R) \approx m_0\alpha/(m_0\alpha + m_1(1-\beta))$. Denote the desired FDR level by $f$. Then, from the above equation, we have
$$ \alpha_{\mathrm{fdr}} = \frac{f}{1-f}\left[\left(\frac{m_1}{M}\right)^{-1} - 1\right]^{-1}(1-\beta). \qquad (2.7) $$
To account for dependence among the tests, we follow Shao and Tseng [17]. Let $T_i$ be the test statistic of a one-sided two-sample z-test for the $i$th alternative hypothesis, let $p_i$ be its $P$ value, and let $u_i = I(p_i < \alpha)$ be the rejection status at level $\alpha$: $u_i = 1$ if the $i$th test results in a rejection and $0$ otherwise. Furthermore, if we denote the pairwise correlation coefficient between two test statistics by $\rho^U_{ij} = \mathrm{Corr}(T_i, T_j)$, then the correlation between $u_i$ and $u_j$, $\theta^U_{ij} = \mathrm{Corr}(u_i, u_j)$, can be derived from the correlations of the test statistics as follows:
$$ \theta^U_{ij} = \frac{F\big(\tilde z_\alpha, \tilde z_\alpha; \rho^U_{ij}\big) - (1-\beta)^2}{\beta(1-\beta)}, \qquad (2.8) $$
where $F$ is the CDF of the standard bivariate normal distribution and $\tilde z_\alpha = -z_\alpha + \Delta/\sqrt{n_1^{-1}+n_2^{-1}}$ [18]. Under local dependence assumptions, the total number of true discoveries, $U = \sum_{i=1}^{m_1} u_i$, has an approximately normal distribution: $U \sim N(m_1(1-\beta), \sigma^2_U)$, where $\sigma^2_U = m_1\beta(1-\beta)[1 + \theta_U(m_1-1)]$ and $\theta_U = \{m_1(m_1-1)\}^{-1}\sum_{i\neq j}\theta^U_{ij}$ is the average correlation among the true discoveries. The local dependence assumption can be viewed as a simplified formulation of the condition for the central limit theorem under "strong mixing" given in Theorem 27.4 of Billingsley [19]. "Mixing" means, roughly, that random variables far apart from one another are nearly independent. We think the local dependence assumption is reasonable in many genetic studies. For example, linkage disequilibrium can result in local dependence among genetic markers. In biomarker studies, biomarkers in the same pathway are often correlated, which also results in local dependence.
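The mapping in (2.8) from the correlation of two test statistics to the correlation of their rejection indicators can be sketched as follows. For internal consistency, the comparison-wise power $1-\beta$ is computed from $(\alpha, \Delta, n_1, n_2)$ inside the function rather than passed separately, a simplification of ours; the helper name and illustrative inputs are also ours.

```python
# Sketch of (2.8); the helper name and illustrative inputs are ours.
import numpy as np
from scipy.stats import norm, multivariate_normal

def rejection_correlation(rho, alpha, delta, n1, n2):
    """theta_ij = [F(z~, z~; rho) - (1 - beta)^2] / (beta * (1 - beta))."""
    z_tilde = -norm.isf(alpha) + delta / np.sqrt(1 / n1 + 1 / n2)
    power = norm.cdf(z_tilde)            # comparison-wise power 1 - beta of the one-sided z-test
    beta = 1 - power
    # F: CDF of the standard bivariate normal with correlation rho
    F = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]]).cdf([z_tilde, z_tilde])
    return (F - power ** 2) / (beta * power)

# Illustrative values only
print(rejection_correlation(rho=0.5, alpha=0.001, delta=1.0, n1=30, n2=30))
```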

It is often desirable to find the sample size that ensures a familywise power $\Psi$ of identifying at least a given fraction $r \in (0,1)$ of the $m_1$ true discoveries: $\Psi = P(U \ge [m_1 r])$. The above normal approximation of $U$ allows a closed-form solution for the comparison-wise $\beta$:
$$ \beta_{\mathrm{fdr}} = 1 - r - \frac{1 - 2r + \sqrt{4 m_1^* r(1-r) + 1}}{2 m_1^* + 2}, \qquad (2.9) $$
where $m_1^* = m_1/\{[1+\theta_U(m_1-1)]\, z^2_{1-\Psi}\}$. When $m_1$ is large, to have a familywise power $\Psi$ of detecting at least $100r\%$ of the $m_1$ true alternatives while controlling the FDR at $f$, the sample size needed for a one-sided z-test is given by (2.1), with $\alpha$ and $\beta$ determined by (2.7) and (2.9) iteratively.
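A sketch of this calculation is given below. When the average correlation $\theta_U$ is treated as a fixed input (as in the microarray example that follows), (2.9) yields $\beta$ directly, (2.7) then yields $\alpha$, and (2.1) gives $n$; the iteration mentioned in the text arises when $\theta_U$ itself is re-estimated through (2.8) as $n$ changes. The function name and interface are ours, not the paper's.

```python
# Sketch of the FDR-based sample size calculation combining (2.9), (2.7), and (2.1),
# with the average correlation theta_U supplied as a fixed input. Names are ours.
import numpy as np
from scipy.stats import norm

def n_fdr(M, m1, f, Psi, r, delta, theta_U, C=1.0):
    """Per-group n controlling FDR at f with familywise power Psi of detecting
    at least a fraction r of the m1 true alternatives."""
    z_1mPsi = norm.isf(1 - Psi)                         # z_{1-Psi}
    m1_star = m1 / ((1 + theta_U * (m1 - 1)) * z_1mPsi ** 2)
    # (2.9): comparison-wise beta
    beta = 1 - r - (1 - 2 * r + np.sqrt(4 * m1_star * r * (1 - r) + 1)) / (2 * m1_star + 2)
    # (2.7): comparison-wise significance level alpha for FDR level f
    alpha = f / (1 - f) * (1 - beta) / ((m1 / M) ** -1 - 1)
    # (2.1): per-group sample size for a one-sided z-test
    n = (norm.isf(alpha) + C * norm.isf(beta)) ** 2 / delta ** 2
    return n, alpha, beta
```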

A Microarray Example
We now consider a well-known leukemia dataset from Golub et al. [2] to demonstrate the relationship between sample size and the number of comparisons when controlling FDR. The original purpose of the experiment described in Golub et al. [2] was to identify susceptibility genes related to the clinical heterogeneity of two subclasses of leukemia: acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML). The dataset contains 7129 attributes from 47 patients with ALL and 25 patients with AML. We can apply (2.1), (2.7), and (2.9) iteratively to obtain the required sample size when controlling FDR. Figure 4 shows three different settings for controlling FDR at $f = 5\%$ with $\Psi = 90\%$. Based on the top 100 most differentially expressed genes in Golub et al. [2], $\theta_U = 0.07$ (see (2.9)). The open circles represent the sample sizes $n_M$ needed when the number of true alternatives $m_1$ stays constant ($m_1 = 40$); in this case, the sample size is a linear function of $\log M$ as $M$ increases. The plus signs denote the sample sizes $n_M$ when the number of true alternatives increases at a slower pace than $M$ ($m_1 = 2\log M$); the sample size is again approximately a linear function of $\log M$. The triangles denote the sample sizes $n_M$ when the proportion of true alternatives is constant ($m_1/M = 10\%$); here the sample size remains roughly constant as the number of tests increases, as expected from (2.7). The lines in Figure 4 represent the sample sizes $n^*_M$ based on (2.4).
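For illustration, the `n_fdr` sketch given after (2.9) can be evaluated with the Figure 4 settings; the fraction $r$ and the effect size $\Delta$ are not stated in the text, so the values below are placeholders.

```python
# Assumes the n_fdr sketch defined after (2.9) above is in scope.
# M = 7129 probes, m1 = 40 true alternatives, FDR f = 5%, familywise power Psi = 90%,
# theta_U = 0.07 (from the top 100 genes); r = 0.9 and delta = 1.0 are guesses.
n, alpha, beta = n_fdr(M=7129, m1=40, f=0.05, Psi=0.90, r=0.9, delta=1.0, theta_U=0.07)
print(round(n, 1), alpha, round(1 - beta, 3))
```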

3. Discussion

In this short paper, we have shown that a large increase in the number of comparisons often requires only a small increase in sample size. We further demonstrated that, when controlling FDR, the sample size may sometimes even stay constant as the number of comparisons increases (Figure 4). The sample size required for testing $M$ hypotheses generally grows no faster than a linear function of $\log M$, even when a simple Bonferroni adjustment is used, and the slope of the linear growth rate (in $\log M$) is small when detecting a large effect size. These results have important implications in practice because of the wide use of multiple comparisons.

In this paper, we discuss sample size formulas based on a fixed effect size in the alternative hypotheses. In reality, the effect sizes may follow a distribution, and simulation methods may be useful in determining the sample size. We used a z-test to derive the sample size formulas, because a large sample size is usually required for studies with multiple comparisons. If the effect size is large and the sample size is small, a t-test may be more appropriate; however, we expect the relationship between the sample size and the logarithm of the number of comparisons to remain approximately linear.

In practice, if feasible, using a conservative sample size can reduce the chance of obtaining false-positive results and helps ensure reproducibility [6]. The simple sample size formulas provided in this paper can be used to select a suitable sample size by varying other design parameters and by taking into consideration the reliability of the proposed designs. While FDR is very useful and increasingly used in multiple comparisons, our experience in helping biomedical investigators and the analysis in this paper indicate that the simple Bonferroni approach can often provide conservative but useful sample sizes.