International Journal of Plant Genomics


Review Article | Open Access

Volume 2008 | Article ID 286561 | 8 pages

Statistical Methods for Mapping Multiple QTL

Academic Editor: Shizhong Xu
Received: 17 Dec 2007
Accepted: 29 Apr 2008
Published: 08 Jun 2008


Since Lander and Botstein proposed the interval mapping method for QTL data analysis in 1989, tremendous progress has been made in developing new and powerful statistical methods for QTL analysis. Recent research has focused on statistical methods and issues for mapping multiple QTL together. In this article, we review this progress. We focus the discussion on statistical methods for mapping multiple QTL by maximum likelihood and Bayesian methods, and on determining appropriate thresholds for the analysis.

1. Introduction

Quantitative genetics studies the variation of quantitative traits and its genetic basis. When R. A. Fisher laid down the basic theoretical foundations of quantitative genetics, the focus of study was to partition the overall variation into genetic and environmental components. With the development of polymorphic markers for many species, current research interest is to partition the genetic variation among individual quantitative trait loci (QTL) in the genome, as well as interactions among them [1]. A QTL is a chromosomal region that is likely to contain causal genetic factors for the phenotypic variation under study.

The basic principle of QTL mapping was established in Sax's work [2] in beans. If there is linkage disequilibrium (LD) between the causal factor and a marker locus, mean values of the trait under study will differ among subject groups with different genotypes at the marker locus [3]. Though this idea is still used directly in certain settings (e.g., LD-based QTL mapping in unrelated humans), advances in QTL mapping methodology have allowed the simultaneous use of multiple markers to improve the accuracy and power of estimating QTL locations and effects. Lander and Botstein [4] presented a likelihood-based framework for interval mapping (IM), where the putative QTL genotype is conditional upon the genotypes of a pair of flanking markers as well as the phenotype. A least-squares equivalent of IM [5] was also proposed, in which phenotypic values are regressed onto the expected genetic coefficients of a putative QTL. Motivated by the conditional independence structure among marker genotypes, composite interval mapping [6] introduces additional markers as covariates into the likelihood function to reduce the confounding effects of nearby QTL when scanning the current interval. However, most of these methods were still designed to detect a single QTL at a time, based on a statistical test of whether a candidate position has a significant effect. The test is applied to each position in the genome, creating a genome scan for QTL.

Though intuitive and widely used, these methods are still insufficient to study the genetic architecture of complex quantitative traits that are affected by multiple QTL. When a trait is affected by multiple loci, it is more efficient statistically to search for those QTL together. Also in order to study epistasis of QTL, multiple QTL need to be analyzed together. In this setting, QTL analysis is basically a model-selection problem. In this paper, we discuss recent research progress and outstanding statistical issues associated with mapping multiple QTL in experimental cross populations.

2. Multiple Interval Mapping (MIM)

Multiple interval mapping is targeted to analyze multiple QTL with epistasis together through a model selection procedure to search for the best genetic model for the quantitative trait [1, 7, 8].

For $m$ putative causal genes for the trait, the MIM model is specified as
$$y_i = u + \sum_{r=1}^{m} \alpha_r x^*_{ir} + \sum_{r \neq s \subset (1,\dots,m)}^{t} \beta_{rs}\,(x^*_{ir} x^*_{is}) + e_i, \tag{1}$$
where:
(i) $y_i$ is the phenotypic value of individual $i$, $i = 1, 2, \dots, n$;
(ii) $u$ is the mean of the model;
(iii) $\alpha_r$ is the main effect of the $r$th putative causal gene, $r = 1, \dots, m$;
(iv) $x^*_{ir}$ is an indicator variable denoting the genotype of the $r$th putative causal gene, which follows a multinomial distribution conditional upon the flanking marker genotypes and genetic distances;
(v) $\beta_{rs}$ is the possible epistatic effect between the $r$th and $s$th putative causal genes, assuming there are $t$ such effects;
(vi) $e_i$ is an environmental effect assumed to be normally distributed.

As shown by Kao and Zeng [7] and Kao et al. [8], given a genetic model (number, locations, and interactions of multiple QTL), this linear model implies a likelihood function similar to that of IM but with more complexity. An expectation/maximization (EM) algorithm can be used to maximize the likelihood and obtain maximum likelihood estimates (MLEs) of the parameters.

The following model-selection method is used to traverse the genetic model space in QTL Cartographer [1, 9, 10].
(1) Forward selection of QTL main effects sequentially. In each cycle of selection, pick the best position for an additional QTL, and then perform a likelihood ratio test for its main effect. If the test statistic exceeds the critical value, this effect is retained in the model. Stop when no more QTL can be found.
(2) Search for epistatic effects between the QTL main effects included in the model, and perform likelihood ratio tests on them. If a test statistic exceeds the critical value, the epistatic effect is retained in the model. Repeat the process until no more significant epistatic effects can be found.
(3) Reevaluate the significance of each QTL main effect in the model. If the test statistic for a QTL falls below the significance threshold conditional on the other retained effects, this QTL is removed from the model. However, if a QTL is involved in a significant epistatic effect with another QTL, it is not subject to this backward elimination. This process is performed stepwise until no effects can be dropped.
(4) Optimize the estimates of QTL positions based on the currently selected model. Instead of performing a multidimensional search around the regions of the current position estimates, the positions are updated in turn, one region at a time. For the $r$th QTL in the model, the region between its two neighboring QTL is scanned to find the position that maximizes the likelihood (conditional on the current estimates of the positions of the other QTL and of QTL epistasis). This refinement is repeated sequentially for each QTL position until the position estimates no longer change.
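
The forward-selection step (1) above can be sketched with single-marker least-squares fits and the likelihood ratio statistic $LR = n\log(\text{RSS}_0/\text{RSS}_1)$. This is a deliberately simplified stand-in for MIM's EM-based multi-QTL refitting: the 0/1 genotype coding, helper names, and residual-carrying shortcut are ours, not QTL Cartographer's.

```python
import math

def rss(y, x=None):
    """Residual sum of squares from regressing y on an intercept alone
    (x is None) or on an intercept plus a single marker column x."""
    n = len(y)
    my = sum(y) / n
    if x is None:
        return sum((yi - my) ** 2 for yi in y)
    mx = sum(x) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    if sxx == 0.0:  # constant marker, nothing to fit
        return sum((yi - my) ** 2 for yi in y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    return sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))

def forward_select(y, markers, threshold):
    """Each cycle: fit every unused marker to the current residuals, keep
    the one with the largest LR = n*log(RSS0/RSS1) if it exceeds the
    threshold; stop when no remaining marker qualifies."""
    n = len(y)
    resid = list(y)
    selected = []
    while True:
        best_lr, best_j = 0.0, None
        for j, x in enumerate(markers):
            if j not in selected:
                lr = n * math.log(rss(resid) / rss(resid, x))
                if lr > best_lr:
                    best_lr, best_j = lr, j
        if best_j is None or best_lr < threshold:
            return selected
        selected.append(best_j)
        # regress the chosen marker out and carry the residuals forward
        x = markers[best_j]
        mx, mr = sum(x) / n, sum(resid) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        b = sum((xi - mx) * (ri - mr) for xi, ri in zip(x, resid)) / sxx
        a = mr - b * mx
        resid = [ri - a - b * xi for ri, xi in zip(resid, x)]
```

Working on residuals rather than refitting the full model is a shortcut; MIM instead re-maximizes the joint likelihood over all retained QTL at each step.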

An important issue in model selection is the significance level for including or eliminating effects. In regression analysis, such a threshold is usually decided by an information criterion, which has the general form
$$IC = -2\left(\log L_k - \frac{k\,c(n)}{2}\right), \tag{2}$$
where $L_k$ is the likelihood of the data given a genetic model with $k$ parameters, and $c(n)$ is a penalty function that can take a variety of forms, such as:
(i) $c(n) = \log(n)$, the classical Bayesian information criterion (BIC);
(ii) $c(n) = 2$, the Akaike information criterion (AIC);
(iii) $c(n) = 2\log(\log(n))$;
(iv) $c(n) = 2\log(n)$;
(v) $c(n) = 3\log(n)$.

When the penalty for an additional parameter in the model is low, more QTL and epistatic effects are likely to be included in the model. Thus it is particularly important to determine an appropriate penalty function for model selection.
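
The criteria differ only in the penalty $c(n)$, so a small helper makes the comparison concrete (the function and penalty names are ours; the $2\log(\log(n))$ penalty is the one usually attributed to Hannan and Quinn):

```python
import math

def info_criterion(log_lik, k, n, penalty="bic"):
    """IC = -2*(log L_k - k*c(n)/2) for several penalty functions c(n)."""
    c = {
        "bic":   math.log(n),                 # classical BIC
        "aic":   2.0,                         # AIC
        "hq":    2.0 * math.log(math.log(n)), # 2*log(log(n))
        "2logn": 2.0 * math.log(n),
        "3logn": 3.0 * math.log(n),
    }[penalty]
    return -2.0 * (log_lik - k * c / 2.0)
```

For the same fit, a heavier penalty yields a larger IC for each added parameter, so fewer QTL and epistatic effects survive selection.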

Sequential genome scans require detectable main effects for the components of an interaction effect. The alternative, an exhaustive search over all marker combinations, poses both computational and statistical problems even in two dimensions. Using a yeast eQTL mapping data set with over 6000 expression traits and 112 individuals [11], Storey et al. [12] showed that the sequential search was more powerful than the exhaustive one for detecting pairwise QTL main effects and interaction effects. However, in a different setting, using simulations under a series of quantitative trait model assumptions, Evans et al. [13] showed that the exhaustive search can be more powerful than the sequential one with over 1000 individuals in the mapping population and a Bonferroni correction for 100 000 tests. The inconsistency is partially related to sample size: a larger sample can make unadjusted $P$ values more significant. Witte et al. [14] showed that, under a simple Bonferroni correction, the required sample size increases only linearly with the logarithm of the number of tests.

3. Threshold to Claim QTL

We need to decide the threshold for declaring QTL from the profile of test statistics across the genome. General asymptotic results for regression and likelihood ratio tests are not directly applicable to genome scans, given the large number of correlated tests performed and the limited sample size.

3.1. Type I Error Rate Control

When markers are dense and the sample size is large, Lander and Botstein [4] showed that an appropriate threshold for the LOD score is $t_\alpha/(2\log 10)$, where $t_\alpha$ solves the equation $\alpha = (C + 2G t_\alpha)\,\chi^2(t_\alpha)$. Here $C$ is the number of chromosomes of the organism, $G$ is the length of the genetic map measured in Morgans, and $\chi^2(t_\alpha)$ is the probability that a random variable from a $\chi^2_1$ distribution exceeds $t_\alpha$.
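
The threshold equation can be solved numerically. The sketch below is our own helper, using the $\chi^2_1$ tail $P(X > t) = \operatorname{erfc}(\sqrt{t/2})$ and bisection, and then converting the likelihood-ratio threshold $t_\alpha$ to a LOD threshold:

```python
import math

def chi2_1_tail(t):
    """P(X > t) for X ~ chi-square with 1 degree of freedom."""
    return math.erfc(math.sqrt(t / 2.0))

def lod_threshold(alpha, C, G):
    """Solve alpha = (C + 2*G*t) * P(chi2_1 > t) for t by bisection
    (the left side is decreasing in t), then convert t to a LOD score."""
    f = lambda t: (C + 2.0 * G * t) * chi2_1_tail(t) - alpha
    lo, hi = 1e-6, 100.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    t = (lo + hi) / 2.0
    return t / (2.0 * math.log(10.0))
```

For instance, with $\alpha = 0.05$, $C = 10$ chromosomes, and $G = 10$ Morgans, the solution lands near the classical LOD threshold of about 3.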

Churchill and Doerge [15] proposed a method based on permutation tests to find an empirical threshold specific to a QTL mapping study. The data are shuffled by randomly pairing one individual's genotypes with another's phenotypes, simulating the null hypothesis of no intrinsic relationship between genotype and phenotype. This method thus takes into account the sample size, the genome size of the organism under study, the genetic marker density, segregation ratio distortions, and missing data.

According to Churchill and Doerge [15], the genome-wide threshold to control the type I error rate for mapping a single trait can be found by the following procedure.
(1) Shuffle the data $N$ times by randomly pairing trait values with genotypes. When multiple traits are under study, the phenotypes should be shuffled together to preserve their correlation structure.
(2) Perform the mapping analysis and record the maximum test statistic in each of the $N$ shuffled data sets. This provides an empirical distribution $F_M$ of the genome-scan test statistic under the null.
(3) The $100(1-\alpha)$ percentile of $F_M$ provides the estimated critical value.
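
The three steps above can be sketched generically, with the single-position test statistic passed in as a function (the function names and the simple percentile rule are ours):

```python
import random

def permutation_threshold(genotypes, phenotypes, stat_fn, n_perm=1000, alpha=0.05):
    """Churchill-Doerge empirical threshold: shuffle phenotypes against
    genotypes, record the maximum of stat_fn over all genome positions in
    each permuted scan, and return the 100*(1-alpha) percentile of the
    maxima.  genotypes is a list of per-marker genotype vectors."""
    maxima = []
    perm = list(phenotypes)
    for _ in range(n_perm):
        random.shuffle(perm)  # break the genotype-phenotype pairing
        maxima.append(max(stat_fn(g, perm) for g in genotypes))
    maxima.sort()
    return maxima[int((1.0 - alpha) * n_perm) - 1]
```

For a multiple-trait study, the rows of a phenotype matrix would be shuffled as whole units, keeping the trait correlation structure intact.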

This permutation procedure is equivalent to the Bonferroni correction for multiple testing when the test statistics are independent. Suppose there are $n$ such statistics $t_i$ ($i = 1, \dots, n$) from a null distribution $F$. Then $F_M(T)$, the distribution function of the maximum of the $n$ statistics, can be expressed as $\Pr(\max(t_i) < T) = F(T)^n$. When we find a threshold $T$ such that $\Pr(\max(t_i) > T) = 1 - F(T)^n \leq \alpha$, this is equivalent to requiring $(1 - \Pr(t_i > T))^n \geq 1 - \alpha$, or approximately $\Pr(t_i > T) \leq \alpha/n$, the Bonferroni-adjusted threshold. When the test statistics are correlated, the permutation method provides a threshold that is lower than that from Bonferroni.

A related permutation procedure was also suggested by Doerge and Churchill [16] for mapping procedures in which QTL are declared sequentially by forward selection. Two methods were suggested to find a genome-wide threshold for the second QTL while controlling for the effects of the first.
(i) Conditional empirical threshold (CET). Mapping subjects are put into blocks according to the genotype of the marker identified as (or closest to) the first QTL, and permutation is applied within each block. Following the procedure of Churchill and Doerge [15] described above, the maximal test statistic of each genome scan is collected and the CET is obtained. One problem with the CET is that markers linked to the first QTL will continue to show association with the trait variation, as in the original data. To avoid the CET being inflated by such markers, it is suggested to exclude the entire chromosome on which the first QTL is located when collecting null statistics.
(ii) Residual empirical threshold (RET). The residuals from the genetic model with the first QTL are used as new phenotypic values to be permuted. Maximal null statistics from genome-wide scans are then collected to find the RET.

Applied to multiple interval mapping, Zeng et al. [1] also suggested using a residual permutation or bootstrap test to estimate the appropriate threshold for model selection in each step of the sequential test. In this test, after a model with the identified QTL effects is fitted, the residuals of individuals are calculated and permuted or bootstrapped to generate a null model for the conditional test for additional QTL. This threshold is more appropriate for the conditional search, but computationally more intensive.

3.2. Score-Statistic-Based Resampling Methods

Permutation is a computationally intensive way to generate an empirical threshold. Zou et al. [17] suggested that the genome-wide threshold could be computed more efficiently from score statistics and a resampling method. A score statistic can be computed at each genome position. If we multiply each individual's score contribution by a standard normal random variable with mean zero and variance one, the resulting score statistic mimics one drawn under the null hypothesis. Thus, by repeatedly multiplying by fresh sets of standard normal variables, we can very efficiently generate an empirical null distribution of the score statistic. This method is flexible and can be used to test a null hypothesis in a complex model. A similar algorithm was suggested by Seaman and Müller-Myhsok [18]. Conneely and Boehnke [19] extended the approach by replacing the resampling step with high-dimensional numerical integration.
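
A minimal multiplier-resampling sketch in this spirit follows; the per-individual score contributions and the normalization below are simplified stand-ins for the actual score function of Zou et al., which also conditions on estimated nuisance parameters:

```python
import random

def score_statistic(contrib):
    """Normalized score statistic from per-individual contributions u_i:
    S = (sum u_i)^2 / sum u_i^2, approximately chi-square(1) under the null."""
    s = sum(contrib)
    v = sum(u * u for u in contrib)
    return s * s / v if v > 0.0 else 0.0

def resample_max_null(contribs_by_pos, n_resample=1000):
    """Multiplier resampling: draw one N(0,1) multiplier per individual
    and apply the same multipliers at every genome position, which mimics
    the null while preserving the correlation of statistics across
    positions.  Returns the genome-wide maxima over the resamples."""
    n = len(contribs_by_pos[0])
    maxima = []
    for _ in range(n_resample):
        z = [random.gauss(0.0, 1.0) for _ in range(n)]
        maxima.append(max(
            score_statistic([u * zi for u, zi in zip(contrib, z)])
            for contrib in contribs_by_pos))
    return maxima
```

A genome-wide threshold is then the upper $\alpha$ percentile of the returned maxima, obtained without refitting any model.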

Unlike these approaches, Nyholt [20] suggested another method to address the multiple testing issue: the effective number of independent tests across the genome can be approximated as a function of the eigenvalues of the correlation matrix of marker genotypes.

3.3. False Discovery Rate (FDR)

In QTL mapping, as marker coverage of a genome increases, it is less likely that a causal variant is in LD with no marker and is therefore missed. On the other hand, the number of markers showing significant correlation with the phenotype by chance is also expected to grow if the type I error rate for each test is controlled at a preset level. To handle this multiple testing problem, stringent family-wise control of type I error is usually applied, which is designed to control the probability of making at least one false discovery in a genome-wide test. However, a more powerful approach may be to control the false discovery rate (FDR) [21], that is, the expected proportion of false discoveries among all the markers passing a threshold. This essentially allows multiple false positive declarations when many "significant" test statistics are found. Such a relaxation is driven by the nature of the problem under study. "It is now often up to the statistician to find as many interesting features in a data set as possible rather than test a very specific hypothesis on one item" [22].

Following the notation of Storey [22], Table 1 shows the possible outcomes when $m$ hypotheses $H_1, H_2, \dots, H_m$ are tested. For independent tests, Benjamini and Hochberg [21] provided a procedure (known as the linear step-up or BH procedure) to control the expected FDR, that is, $E[(V/R) \mid R > 0] \times \Pr(R > 0)$, at the desired level $\alpha(m_0/m)$ (since $m_0$ is generally unknown, the conservative upper bound $m_0 = m$ is used), as follows:
(i) sort the $m$ $P$ values from smallest to largest, $P_{(1)} \leq P_{(2)} \leq \dots \leq P_{(m)}$;
(ii) starting from $P_{(m)}$, compare each $P_{(i)}$ with $\alpha(i/m)$;
(iii) let $k$ be the first time $P_{(i)} \leq \alpha(i/m)$; reject the hypotheses corresponding to $P_{(1)}$ through $P_{(k)}$.

Table 1: Possible outcomes of $m$ hypothesis tests.

                     Accepted null    Rejected null    Total
Null true                 $U$              $V$          $m_0$
Alternative true          $T$              $S$          $m_1$
Total                     $W$              $R$          $m$
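
The linear step-up procedure can be sketched in a few lines (a minimal illustration; the function name and index handling are ours):

```python
def bh_reject(pvalues, alpha):
    """Benjamini-Hochberg linear step-up: sort P values ascending, find
    the largest k with P_(k) <= alpha*k/m, and reject the hypotheses
    carrying the k smallest P values.  Returns rejected indices."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= alpha * rank / m:
            k = rank  # the step-up keeps the largest qualifying rank
    return sorted(order[:k])
```

Note that a $P$ value above its own line can still be rejected if a larger-ranked $P$ value qualifies, which is what distinguishes step-up from step-down procedures.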

Benjamini and Yekutieli [23] showed that if the test statistics are positively regression dependent on each hypothesis from the subset corresponding to true null hypotheses (PRDS), the BH procedure still controls FDR at level $\alpha(m_0/m)$. For QTL mapping, PRDS can be interpreted as follows [24]: if two markers have correlated allele frequencies and neither is related biologically to the trait, the test statistics associated with the two markers should be positively correlated. Such positive correlation is intuitively correct and supported by simulation results [24].

To check the performance of the BH procedure for FDR control in a genome-wide QTL scan for a single trait, Sabatti et al. [24] considered a simulated case-control study in humans. Three susceptibility genes were simulated to affect disease status; the genes were assumed to be additive and located on different chromosomes. The results showed that the BH procedure can control the expected FDR in a single-trait genome-wide scan. For multiple-trait QTL analysis, Benjamini and Yekutieli [25] considered 8 positively or negatively correlated traits and showed by simulation that the BH approach appeared to work for multiple-trait analysis as well.

According to Benjamini and Yekutieli [25], controlling FDR for the QTL analysis of each trait at level $\alpha$ does not always mean that the overall FDR across the multiple traits is also $\alpha$: if there are $k$ independent and nonheritable traits, the overall FDR is $1 - (1-\alpha)^k \approx k\alpha$. It is safer to control FDR over all the tests simultaneously.

Yekutieli and Benjamini [26] suggested making use of the dependency structure in the data, rather than treating it as a nuisance. They expected an increase in testing power when an empirical null distribution of the test statistic is used to obtain $P$ values, instead of an assumed theoretical one. Empirical null distributions are used extensively in pFDR and local FDR estimation, as discussed below.

Though the BH approach is a handy and intuitive tool, it should be used with caution in QTL mapping. First, the BH approach controls the expected value of FDR; simulation studies showed that the actual FDR for a particular QTL mapping data set can be higher [24]. Second, $\text{FDR} = E[(V/R) \mid R > 0]\Pr(R > 0)$ may be tricky to interpret when $\Pr(R > 0)$ is far below 1. Weller et al. [27] were the first to apply the FDR criterion in QTL mapping. They claimed that most of the 10 QTL declared in their study were probably true, having controlled FDR at 25% using the BH approach. Zaykin et al. [28], however, objected to this interpretation, because $E[(V/R) \mid R > 0]$ could be much higher than 25% when $\Pr(R > 0)$ was much smaller than 1. It is $E[(V/R) \mid R > 0]$, also known as pFDR and discussed in Section 3.4, that really carries the information about the proportion of false discoveries. Weller [29] further argued that if one assumes $R$ follows a Poisson distribution, $\Pr(R > 0)$ should be very close to 1 when $R = 10$ is observed. In the later FDR literature, the assumption $\Pr(R > 0) \approx 1$ has been widely adopted [25, 30].

3.4. Positive False Discovery Rate (pFDR)

pFDR, or $E[(V/R) \mid R > 0]$, was originally considered less favorable than FDR [21] because, without the additional term $\Pr(R > 0)$, we cannot fix an arbitrary threshold $\alpha$, $0 < \alpha < 1$, and guarantee pFDR $\leq \alpha$ regardless of the actual proportion of true null hypotheses among all tests. For example, when $m_0/m = 1$, pFDR is always 1 and cannot be controlled at a small $\alpha$. In this case, however, FDR = pFDR $\times \Pr(R > 0)$ can still be controlled at $\alpha$ by shrinking the rejection region to push $\Pr(R > 0)$, and hence FDR, toward 0. Thus, Benjamini and Yekutieli [25] considered that pFDR should only be estimated after a fixed rejection threshold has been decided; in QTL mapping, pFDR should estimate (rather than control) the proportion of true/false QTL among the linkage peaks at which the null hypothesis was rejected by some rule (e.g., a type I error control procedure).

From the discussion in Section 3.3 we know that allowing $\Pr(R > 0)$ much smaller than 1 may cause trouble in interpreting the results; and when $m_0/m \approx 1$, we might want to see an FDR measure that is close to 1 [31]. This helped revive interest in pFDR. Storey [31] presented a Bayesian interpretation of pFDR based on $P$ values. Assume there are $m$ tests $H_1, H_2, \dots, H_m$ with associated $P$ values $P_1, P_2, \dots, P_m$, and
(i) $H_i = 0$ denotes that the $i$th null hypothesis is true, and $H_i = 1$ otherwise;
(ii) to model the uncertainty in each hypothesis test, each $H_i$ is viewed as an identical and independent random variable from a Bernoulli distribution with $\Pr(H_i = 0) = \pi_0$ and $\Pr(H_i = 1) = \pi_1 = 1 - \pi_0$;
(iii) $P_i$ follows a distribution $F_0$ if $H_i = 0$ and $F_1$ if $H_i = 1$; unconditionally, it follows the mixture distribution $F_m = \pi_0 F_0 + \pi_1 F_1$;
(iv) $\Gamma = [0, \gamma]$ is the common rejection region for all tests; then

$$\text{pFDR}(\gamma) = \Pr(H_i = 0 \mid P_i \in \Gamma) \tag{3}$$
$$= \frac{\Pr(H_i = 0)\Pr(P_i \leq \gamma \mid H_i = 0)}{\Pr(P_i \leq \gamma)} \tag{4}$$
$$\approx \frac{\pi_0 \gamma}{(R/m)\left[1 - (1-\gamma)^m\right]}. \tag{5}$$

Notice that when $m$ is reasonably large (so that $1 - (1-\gamma)^m \approx 1$), formula (5) is roughly $\pi_0\gamma(m/R)$; with $\pi_0 = 1$, this is the BH approach to estimating FDR. We can also see from formula (5) that, instead of assuming $\pi_0 = 1$, a data-driven estimate of $\pi_0$ gives a smaller pFDR estimate, and hence a more powerful procedure, whenever $\pi_0$ is estimated to be smaller than 1. Efron et al. [32] gave an upper bound for $\pi_0$ by assuming an "acceptance region" $A = [\lambda, 1]$ such that $\Pr(H = 0 \mid P \in A) \approx 1$:

$$\frac{\int_A f_m(z)\,dz}{\int_A f_0(z)\,dz} = \frac{\int_A \left[\pi_0 f_0(z) + (1 - \pi_0) f_1(z)\right] dz}{\int_A f_0(z)\,dz} \geq \frac{\int_A \pi_0 f_0(z)\,dz}{\int_A f_0(z)\,dz} = \pi_0, \tag{6}$$
where $f_0$, $f_1$, and $f_m$ are the density functions of $F_0$, $F_1$, and $F_m$, respectively. The left-hand side of this inequality is equivalent to the following estimator [30]:
$$\hat{\pi}_0 = \frac{\#\{P_i > \lambda\}}{m(1 - \lambda)}. \tag{7}$$

These formulas lead to the $q$ value, the minimal pFDR attained when rejecting hypotheses with $P$ values no larger than $P_i$. This approach is more powerful than the BH approach [21] in genomics applications. An R package "qvalue" is available to convert a list of $P$ values to $q$ values [30]. Besides estimating pFDR for a given rejection region, the $q$ value approach can also find a $\gamma$ that controls a specified pFDR level. However, as discussed above, FDR control in QTL mapping should be applied with caution.
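
Formulas (5) and (7) are simple enough to sketch directly. This is a minimal illustration with our own function names; the qvalue package additionally smooths $\hat{\pi}_0$ over a range of $\lambda$ values and enforces monotonicity of the $q$ values:

```python
def pi0_estimate(pvalues, lam=0.5):
    """Formula (7): pi0_hat = #{P_i > lambda} / (m*(1-lambda)).
    P values above lambda are assumed to come mostly from true nulls."""
    m = len(pvalues)
    return sum(1 for p in pvalues if p > lam) / (m * (1.0 - lam))

def pfdr_estimate(pvalues, gamma, lam=0.5):
    """Formula (5): pFDR_hat = pi0*gamma / ((R/m)*(1-(1-gamma)^m))
    for the rejection region [0, gamma]."""
    m = len(pvalues)
    r = max(1, sum(1 for p in pvalues if p <= gamma))  # R, kept >= 1
    pi0 = min(1.0, pi0_estimate(pvalues, lam))
    return pi0 * gamma / ((r / m) * (1.0 - (1.0 - gamma) ** m))
```

With half the $P$ values near zero, $\hat{\pi}_0 \approx 0.5$ halves the pFDR estimate relative to the conservative $\pi_0 = 1$ of the BH approach.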

When ğ‘ž are identical and independent distributed, pFDR becomes identical to the definition of proportion of false positives (PFP) [22, 33]:ğ‘žPFP is also estimated similarly as pFDR (cf. formula (5) [33]:ΓContrary to FDR discussed in Section 3.3, PFP has a property that when it is controlled at Γ for each of 𝑃𝑖 sets of independent tests, the overall PFP is still PFP=𝐸(𝑉)=∑𝐸(𝑅)(8)𝑚𝑖=1𝐻𝛾𝑃𝑟𝑖=0∑𝑚𝑖=1𝐻[𝛾𝑃𝑟𝑖𝐻=0+𝑃𝑟𝑖𝑃=1𝑃𝑟𝑖≤𝛾∣𝐻𝑖]=∑=1(9)𝑚𝑖=1𝜋0∑𝑚𝑖=1[𝜋0+𝜋1𝑃𝑃𝑟𝑖≤𝛾∣𝐻𝑖𝑃=1/𝑃𝑟𝑖≤𝛾∣𝐻𝑖],=0(10). 𝜋PFP=0𝛾𝑅/𝑚.(11) from different sets of tests can have different distributions. In this case, PFP can be different from pFDR [33].

Currently available procedures to control or estimate pFDR or PFP may have variable utility in various mapping designs. Chen and Storey [34] noted that the threshold for controlling FDR at the marker level in a one-dimensional genome scan for a single trait could be "dubious," because FDR is affected by the marker densities along the chromosomes. Since a true discovery is a claim of a QTL at a marker in strong LD with a causal polymorphism, markers in strong LD with a true discovery can be additional true discoveries. Thus, FDR decreases as we genotype more markers around a true linkage peak. Using simulations, Chen and Storey [34] showed that the threshold obtained by controlling FDR varied with the marker density. In real applications, however, people generally consider all markers surrounding a test statistic peak to be parts of one QTL, rather than distinct positive discoveries. On the other hand, we can still estimate FDR from the $P$ values at the linkage peaks of different traits that pass a certain cutoff. This is a common situation in an expression QTL study, where thousands of traits are analyzed together.

Zou and Zuo [35] showed that familywise error rate control via Bonferroni correction can be more powerful than PFP control. In their simulation, they assumed 1 to 5 true non-null hypotheses out of 1000 independent tests, corresponding to $\pi_1 \leq 0.5\%$, which might be too pessimistic for certain QTL mapping studies. As we can see from formula (10), when $\pi_1$ is so small, $\Pr(P_i \leq \gamma \mid H_i = 1)/\Pr(P_i \leq \gamma \mid H_i = 0)$ has to be very large for its product with $\pi_1$ to be considerably larger than $\pi_0$, which is what an acceptable PFP level requires. When the density of $P_i \mid H_i = 1$ is monotonically decreasing on $[0, \gamma]$, which is quite common in reality, $\gamma$ has to be very small to increase this ratio of power over type I error. Thus, it is not surprising that such a $\gamma$ from PFP control results in a more stringent test than that under Bonferroni correction. A similar argument was made in [36], pointing out that familywise error rate control is effective when $\pi_1$ is relatively low, while the PFP or pFDR approach can be more powerful when $\pi_1$ is high. Again, expression QTL studies stand out as an example where pFDR is more favorable: in the yeast eQTL data [11], Storey et al. [12] estimated $\pi_1 = 0.85$ among the genome-wide maximal test statistics of the expression traits.

3.5. Local FDR

The Bayesian interpretation of pFDR extends naturally to local FDR [32], denoted here as fdr:
$$\text{fdr}(T_i) = \Pr(H_i = 0 \mid T_i) \tag{12}$$
$$= \frac{\pi_0 f_0}{\pi_0 f_0 + \pi_1 f_1} = \frac{\pi_0 f_0}{f_m}, \tag{13}$$
where $T_i$ is the test statistic associated with $H_i$; $H_i = 0$ denotes that the $i$th null hypothesis is true, and $H_i = 1$ otherwise; $T_i$ has density $f_0$ if $H_i = 0$ and $f_1$ if $H_i = 1$, or the mixture density $f_m$ when the hypotheses are mixed together.

There is great similarity between formula (3) and formula (12). In fact, Efron [37] showed that the $q$ value associated with $T_i$ is equivalent to $E_{t \geq T_i}[\text{fdr}(t)]$. Storey et al. [12] showed that fdr could be estimated within each cycle of a sequential genome scan over thousands of expression traits; the average of the fdr over the rejection region then estimates pFDR.

The key step in estimating fdr is to estimate $f_0/f_m$. It is possible to assume a standard distribution for $f_0$ under the null hypothesis and to estimate $f_m$ by nonparametric regression [37]. The following procedure is modified from [12, 32]:
(1) permute the data under the null hypothesis $B$ times, and obtain null statistics $z_{ij}$, $i = 1, \dots, m$, $j = 1, \dots, B$;
(2) estimate $f_0/f_m$ from the $T_i$ and $z_{ij}$ (see below);
(3) estimate $\pi_0$ using formula (7).

$f_0/f_m$ is estimated in the following way:
(1) pool all $T_i$ and $z_{ij}$ into bins;
(2) create an indicator variable $y$: let $y = 1$ for each $T_i$ and $y = 0$ for each $z_{ij}$; thus, $\Pr(y = 1) = f_m/(f_m + B f_0)$ in each bin;
(3) obtain a smooth estimate of $\Pr(y = 1)$ in each bin from an overall regression curve across bins, combining natural cubic splines with a generalized linear model for binomially distributed responses;
(4) equate the estimate of $\Pr(y = 1)$ with $f_m/(f_m + B f_0)$ and solve to get a moment estimate of $f_0/f_m$.
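
A crude version of this estimator can be sketched by replacing the spline-smoothed GLM of step (3) with raw within-bin ratios; the bin count, the clamping at 1, and the function name are our choices:

```python
def local_fdr_binned(obs, null, pi0, n_bins=20):
    """Estimate fdr(T_i) = pi0 * f0/fm by pooling observed statistics
    (obs, length m) and permutation statistics (null, length B*m) into
    equal-width bins; within a bin, Pr(y=1) = fm/(fm + B*f0) inverts to
    f0/fm = n_null / (B * n_obs)."""
    B = len(null) / len(obs)
    lo = min(min(obs), min(null))
    hi = max(max(obs), max(null))
    width = (hi - lo) / n_bins
    def bin_of(t):
        return min(n_bins - 1, int((t - lo) / width))
    n_obs = [0] * n_bins
    n_null = [0] * n_bins
    for t in obs:
        n_obs[bin_of(t)] += 1
    for z in null:
        n_null[bin_of(z)] += 1
    # every observed t lies in a bin with n_obs >= 1 by construction
    return [min(1.0, pi0 * n_null[bin_of(t)] / (B * n_obs[bin_of(t)]))
            for t in obs]
```

Without the smoothing step, the raw ratios are noisy in sparse bins, which is exactly what the spline-GLM fit of the published procedure is there to repair.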

Notice that $T_i$ and its associated $P_i$ or $H_i$ are assumed to come from a mixture distribution in both pFDR and fdr estimation. Thus, as pointed out by Storey [22], there is a connection between multiple hypothesis testing and classification. For each test, the testing procedure classifies $H_i$ as 0 or 1, that is, accepted or rejected. Classification decisions can be based on $T_i$ with a rejection region $\Gamma$: if $T_i \in \Gamma$, we classify $H_i$ as 1.

4. Bayesian Methods

Bayesian QTL mapping methods try to take full account of the uncertainties in QTL number, location, and effects by studying their joint distributions. Such a method takes the prior knowledge about these parameters as a prior distribution, reduces the uncertainty by integrating the information from the data, and expresses the remaining uncertainty as a posterior distribution of parameters.

Satagopan et al. [38] illustrated the use of a Markov chain Monte Carlo (MCMC) algorithm to generate a sequence of samples from the joint posterior distribution of QTL parameters, conditional upon the number of QTL. The Gibbs sampling algorithm approximates the joint distribution by sampling each parameter in turn, conditional upon the current values of the remaining parameters. Conjugate priors can be chosen so that most of these conditional distributions have explicit forms and can be sampled directly; otherwise, the Metropolis-Hastings algorithm is used. Point estimates of individual parameters are obtained as the means of the corresponding marginal distributions, and interval estimates as high posterior density regions.
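
As a toy illustration of sampling each parameter in turn from its full conditional, consider a Gibbs sampler for the mean and variance of a single normal sample. This is a deliberately simplified model, not a QTL model, and the flat and $1/\sigma^2$ priors are our choices:

```python
import random

def gibbs_normal(y, n_iter=2000, burn_in=500):
    """Gibbs sampler for y_i ~ N(mu, sigma2) with a flat prior on mu and
    an improper 1/sigma2 prior, so that the full conditionals are
    mu | sigma2 ~ N(ybar, sigma2/n) and
    sigma2 | mu ~ Inverse-Gamma(n/2, sum((y_i-mu)^2)/2).
    Returns posterior draws of (mu, sigma2) after burn-in."""
    n = len(y)
    ybar = sum(y) / n
    mu = ybar
    sigma2 = sum((v - ybar) ** 2 for v in y) / n
    draws = []
    for it in range(n_iter):
        # sample mu from its full conditional
        mu = random.gauss(ybar, (sigma2 / n) ** 0.5)
        # sample sigma2 from its full conditional
        # (inverse-gamma draw via rate / Gamma(shape, scale=1))
        shape = n / 2.0
        rate = sum((v - mu) ** 2 for v in y) / 2.0
        sigma2 = rate / random.gammavariate(shape, 1.0)
        if it >= burn_in:
            draws.append((mu, sigma2))
    return draws
```

Averaging the retained draws gives the posterior point estimates described above; a QTL sampler cycles in the same way over effects, positions, and missing genotypes.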

Sen and Churchill [39] proposed sampling QTL genotypes from their posterior distribution conditional upon marker genotypes and trait phenotypes. This multiple imputation method offers a framework for handling several issues in QTL mapping: nonnormal and multivariate phenotypes, covariates, and genotyping errors. It differs from usual MCMC procedures in its two-step approach: first, QTL genotypes are sampled conditional only on marker genotypes; then, weights are calculated as the likelihood of the phenotypes given each sampled genotype. These genotypes and weights are then used to estimate posterior distributions.

In both studies, the MCMC simulations were conditional upon the number of QTL ($L$) underlying the phenotype. Different values of $L$ were tried, and Bayes factors were used to decide which value of $L$ was more plausible. A Bayes factor is the ratio of the probability densities of the data under two hypotheses; it is a likelihood ratio in some simple cases. In Bayesian QTL mapping, however, the likelihoods are weighted by the prior distributions of the parameters under the two hypotheses to obtain the Bayes factor [38].

The development of the reversible jump MCMC algorithm [40] suggested a way to treat $L$ as a parameter and generate its posterior distribution. Satagopan and Yandell [41] introduced this method to QTL mapping: when updating the current value of $L$, a single QTL may be added to or deleted from the model. Yi et al. [42] extended this method to model interacting QTL by allowing three types of transdimensional moves: a main effect or epistatic effect, a single QTL with its effects, or a pair of QTL may be added or deleted in one updating step. However, they also observed low acceptance rates for the transdimensional jump proposals, and hence slow mixing of the Markov chains and high computational demands.

Yi [43] introduced the composite model space approach as a unified Bayesian QTL mapping framework. The approach incorporates reversible jump MCMC as a special case and turns the transdimensional search into a fixed-dimensional one. The central idea is to use a fixed-length vector to record the current locations of $L$ putative QTL, and an indicator vector to record whether each QTL's main or interaction effect is included or excluded. These two vectors determine the actual number of QTL, and the fixed length sets an upper bound on the number of detectable QTL. This approach has been implemented in the R/qtlbim package [44, 45].

Without considering computational efficiency, the upper bound can actually be fairly large. Unlike multiple regression based on least squares, which runs into trouble when the number of explanatory variables exceeds the number of observations, Bayesian analysis can handle a large number of explanatory variables through a large number of cycles within each step of Gibbs sampling. Xu [46] showed an example where markers across the whole genome were used simultaneously in a Bayesian linear regression, and suggested that the Bayesian method results in model selection naturally: markers with weak effects are "shrunk" so that their posterior mean effects are near zero, while markers with strong effects remain strong.

A closely related procedure is ridge regression, where the parameters of a usual linear model $Y = X\beta + \epsilon$ ($\epsilon \sim N(0, \sigma^2 I)$, $I$ an identity matrix) are estimated as $\hat{\beta} = (X^T X + K)^{-1} X^T Y$ with $K = \lambda I$. This estimator is identical to a Bayesian one that assumes each element of $\beta$ has a normal prior distribution $N(0, \sigma^2/\lambda)$ [47]. Ridge regression was initially applied to marker-assisted selection by Whittaker et al. [48] to reduce the standard errors of estimation. Xu [46] found that ridge regression was not applicable to genome-wide markers simultaneously, because each marker demands a different $\lambda$ in its prior variance. This problem can be fixed by a generalized ridge estimator in which $K$ is a diagonal matrix with distinct diagonal elements $\lambda_1, \lambda_2, \dots$ [49]. However, an iterative procedure is required to find these $\lambda$'s, which is similar to the MCMC sampling in Bayesian analysis.
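
The ridge estimator and its shrinkage behavior can be sketched without any linear-algebra library (a minimal illustration with $K = \lambda I$ as in the text; the Gaussian elimination helper is ours):

```python
def ridge(X, y, lam):
    """Ridge estimate beta_hat = (X'X + lam*I)^{-1} X'y, i.e. the
    posterior mean under independent N(0, sigma2/lam) priors on the
    coefficients.  Solved by Gaussian elimination with partial pivoting."""
    p = len(X[0])
    # build A = X'X + lam*I and b = X'y
    A = [[sum(row[i] * row[j] for row in X) + (lam if i == j else 0.0)
          for j in range(p)] for i in range(p)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(p)]
    # forward elimination
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # back substitution
    beta = [0.0] * p
    for i in reversed(range(p)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, p))) / A[i][i]
    return beta
```

Increasing $\lambda$ shrinks every coefficient toward zero by the same rule; the generalized ridge estimator of the text replaces the single $\lambda$ with one $\lambda_j$ per marker.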

A Bayesian QTL mapping analysis usually results in a posterior distribution over the QTL model space. Numerical characteristics of this distribution provide estimates for parameters. Such an estimate is based upon the entire model space, weighted by the posterior probabilities, and hence is more robust than the usual MLE. For hypothesis testing, Bayes factors, together with prior odds, are used to compare two hypotheses. Unlike P values, Bayes factors are calculated under both competing hypotheses; like P values, they have to be compared with some commonly used cutoff values (as 0.05 is for P values) to decide which hypothesis to prefer [50].
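
A toy calculation shows how a Bayes factor weighs the evidence under both hypotheses at once. The model (a normal mean with known unit variance) and all numbers are illustrative, not taken from any QTL study:

```python
import math

def normal_pdf(x, var):
    # Density of N(0, var) evaluated at x
    return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

# y_i ~ N(mu, 1); H0: mu = 0 versus H1: mu ~ N(0, tau2).
# The sample mean is sufficient, so BF_10 is the ratio of its marginal
# densities under the two hypotheses: N(0, tau2 + 1/n) versus N(0, 1/n).
n, ybar, tau2 = 25, 0.6, 1.0
bf_10 = normal_pdf(ybar, tau2 + 1.0 / n) / normal_pdf(ybar, 1.0 / n)
print(round(bf_10, 1))  # roughly 15: "strong" support for H1 on common scales
```

Both the numerator and the denominator are full marginal likelihoods, so neither hypothesis plays the privileged role that the null does in a P-value calculation.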

There is clearly a null hypothesis and an alternative hypothesis in a single-QTL scan [4]. In model selection for a multiple-QTL model, however, the number of hypotheses in the model space is huge, and the data may well support many models for a complex trait. The posterior distribution makes such uncertainty explicit. Picking one better-supported model to interpret the mapping results may not fully convey the uncertainty of the inference. On the other hand, a pragmatic mathematical model may deliberately simplify the complexity and present an inference with the basic structural characteristics. See Kass and Raftery [50] for more discussion.
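
One way to convey this uncertainty is to average inference over the model space rather than condition on a single best model. The posterior probabilities and effect estimates below are invented for illustration:

```python
# Three candidate QTL models with hypothetical posterior probabilities and
# model-specific estimates of one locus effect (the locus is absent from
# the third model, where its effect is exactly zero).
post_prob = [0.5, 0.3, 0.2]
effect = [1.8, 2.1, 0.0]

# Model-averaged effect: weight each model's estimate by its posterior support
bma_effect = sum(w * b for w, b in zip(post_prob, effect))

# Posterior probability that the locus is in the model at all
incl_prob = sum(w for w, b in zip(post_prob, effect) if b != 0.0)
```

Here the averaged effect (1.53) and the inclusion probability (0.8) summarize the whole posterior, whereas reporting only the top model (effect 1.8) would hide the 20% of posterior mass on models without the locus.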

Bayesian QTL mapping does not involve the multiple testing issue discussed above. Bayesians agree that very strong evidence should be required to call a QTL from the genome, but to them the reason is that the genome is so large that the prior probability of any one variant, or combination of variants, being causative is very low. Thus, in Bayesian QTL mapping, the multiple testing issue is handled by a low prior density on any one marker, or low prior odds for any one hypothesis; such a stringent requirement is recommended even when exploring a very limited QTL model space, unless there is strong prior knowledge to the contrary [51].
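
A small numeric sketch (all values hypothetical) shows how low prior odds temper even strong data evidence:

```python
# Hypothetical genome scan: about 5 QTL expected among 1000 markers, so the
# prior odds that any one particular marker is causative are low.
prior_odds = 5.0 / 995.0
bayes_factor = 200.0  # strong evidence from the data at one marker

posterior_odds = bayes_factor * prior_odds
posterior_prob = posterior_odds / (1.0 + posterior_odds)
# Despite a large Bayes factor, posterior_prob is only about 0.5: the low
# prior plays the role that a stringent genome-wide threshold plays for
# frequentist tests.
```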

5. Conclusion

To document the evolution of the statistical approaches for QTL mapping, we have attempted to follow some threads of methodological development on multiple QTL mapping, threshold determination, and Bayesian QTL mapping methods. This area has been advanced greatly by the interaction between genotyping technologies and statistical methodologies in the last several years, and it will continue to be so in the future. The availability of efficient computational algorithms and software is essential to the QTL mapping community. However, it is equally important that these tools be applied with a thorough understanding of the genetic data and of the tools themselves.


Acknowledgment

This work was partially supported by the USDA Cooperative State Research, Education and Extension Service, Grant no. 2005-00754.


References

  1. Z.-B. Zeng, C.-H. Kao, and C. J. Basten, “Estimating the genetic architecture of quantitative traits,” Genetical Research, vol. 74, no. 3, pp. 279–289, 1999.
  2. K. Sax, “The association of size differences with seed-coat pattern and pigmentation in Phaseolus vulgaris,” Genetics, vol. 8, no. 6, pp. 552–560, 1923.
  3. T. F. C. Mackay, “The genetic architecture of quantitative traits,” Annual Review of Genetics, vol. 35, pp. 303–339, 2001.
  4. E. S. Lander and D. Botstein, “Mapping Mendelian factors underlying quantitative traits using RFLP linkage maps,” Genetics, vol. 121, no. 1, pp. 185–199, 1989.
  5. C. S. Haley and S. A. Knott, “A simple regression method for mapping quantitative trait loci in line crosses using flanking markers,” Heredity, vol. 69, no. 4, pp. 315–324, 1992.
  6. Z.-B. Zeng, “Theoretical basis for separation of multiple linked gene effects in mapping quantitative trait loci,” Proceedings of the National Academy of Sciences of the United States of America, vol. 90, no. 23, pp. 10972–10976, 1993.
  7. C.-H. Kao and Z.-B. Zeng, “General formulas for obtaining the MLEs and the asymptotic variance-covariance matrix in mapping quantitative trait loci when using the EM algorithm,” Biometrics, vol. 53, no. 2, pp. 653–665, 1997.
  8. C.-H. Kao, Z.-B. Zeng, and R. D. Teasdale, “Multiple interval mapping for quantitative trait loci,” Genetics, vol. 152, no. 3, pp. 1203–1216, 1999.
  9. Z.-B. Zeng, J. Liu, L. F. Stam, C.-H. Kao, J. M. Mercer, and C. C. Laurie, “Genetic architecture of a morphological shape difference between two Drosophila species,” Genetics, vol. 154, no. 1, pp. 299–310, 2000.
  10. C. Basten, B. S. Weir, and Z.-B. Zeng, QTL Cartographer, Version 1.17, North Carolina State University, Raleigh, NC, USA, 2002.
  11. R. B. Brem, G. Yvert, R. Clinton, and L. Kruglyak, “Genetic dissection of transcriptional regulation in budding yeast,” Science, vol. 296, no. 5568, pp. 752–755, 2002.
  12. J. D. Storey, J. M. Akey, and L. Kruglyak, “Multiple locus linkage analysis of genomewide expression in yeast,” PLoS Biology, vol. 3, no. 8, pp. 1–11, 2005.
  13. D. M. Evans, J. Marchini, A. P. Morris, and L. R. Cardon, “Two-stage two-locus models in genome-wide association,” PLoS Genetics, vol. 2, no. 9, pp. 1424–1432, 2006.
  14. J. S. Witte, R. C. Elston, and L. R. Cardon, “On the relative sample size required for multiple comparisons,” Statistics in Medicine, vol. 19, no. 3, pp. 369–372, 2000.
  15. G. A. Churchill and R. W. Doerge, “Empirical threshold values for quantitative trait mapping,” Genetics, vol. 138, no. 3, pp. 963–971, 1994.
  16. R. W. Doerge and G. A. Churchill, “Permutation tests for multiple loci affecting a quantitative character,” Genetics, vol. 142, no. 1, pp. 285–294, 1996.
  17. F. Zou, J. P. Fine, J. Hu, and D. Y. Lin, “An efficient resampling method for assessing genome-wide statistical significance in mapping quantitative trait loci,” Genetics, vol. 168, no. 4, pp. 2307–2316, 2004.
  18. S. R. Seaman and B. Müller-Myhsok, “Rapid simulation of P values for product methods and multiple-testing adjustment in association studies,” The American Journal of Human Genetics, vol. 76, no. 3, pp. 399–408, 2005.
  19. K. N. Conneely and M. Boehnke, “So many correlated tests, so little time! Rapid adjustment of P values for multiple correlated tests,” The American Journal of Human Genetics, vol. 81, no. 6, pp. 1158–1168, 2007.
  20. D. R. Nyholt, “A simple correction for multiple testing for single-nucleotide polymorphisms in linkage disequilibrium with each other,” The American Journal of Human Genetics, vol. 74, no. 4, pp. 765–769, 2004.
  21. Y. Benjamini and Y. Hochberg, “Controlling the false discovery rate: a practical and powerful approach to multiple testing,” Journal of the Royal Statistical Society: Series B, vol. 57, no. 1, pp. 289–300, 1995.
  22. J. D. Storey, “The positive false discovery rate: a Bayesian interpretation and the q-value,” Annals of Statistics, vol. 31, no. 6, pp. 2013–2035, 2003.
  23. Y. Benjamini and D. Yekutieli, “The control of the false discovery rate in multiple testing under dependency,” Annals of Statistics, vol. 29, no. 4, pp. 1165–1188, 2001.
  24. C. Sabatti, S. Service, and N. Freimer, “False discovery rate in linkage and association genome screens for complex disorders,” Genetics, vol. 164, no. 2, pp. 829–833, 2003.
  25. Y. Benjamini and D. Yekutieli, “Quantitative trait loci analysis using the false discovery rate,” Genetics, vol. 171, no. 2, pp. 783–790, 2005.
  26. D. Yekutieli and Y. Benjamini, “Resampling-based false discovery rate controlling procedure for dependent test statistics,” Journal of Statistical Planning and Inference, vol. 82, no. 1-2, pp. 171–196, 1999.
  27. J. I. Weller, J. Z. Song, D. W. Heyen, H. A. Lewin, and M. Ron, “A new approach to the problem of multiple comparisons in the genetic dissection of complex traits,” Genetics, vol. 150, no. 4, pp. 1699–1706, 1998.
  28. D. V. Zaykin, S. S. Young, and P. H. Westfall, “Using the false discovery rate approach in the genetic dissection of complex traits: a response to Weller et al.,” Genetics, vol. 154, no. 4, pp. 1917–1918, 2000.
  29. J. I. Weller, “Using the false discovery rate approach in the genetic dissection of complex traits: a response to Zaykin et al.,” Genetics, vol. 154, no. 4, p. 1919, 2000.
  30. J. D. Storey and R. Tibshirani, “Statistical significance for genomewide studies,” Proceedings of the National Academy of Sciences of the United States of America, vol. 100, no. 16, pp. 9440–9445, 2003.
  31. J. D. Storey, “A direct approach to false discovery rates,” Journal of the Royal Statistical Society: Series B, vol. 64, no. 3, pp. 479–498, 2002.
  32. B. Efron, R. Tibshirani, J. D. Storey, and V. Tusher, “Empirical Bayes analysis of a microarray experiment,” Journal of the American Statistical Association, vol. 96, no. 456, pp. 1151–1160, 2001.
  33. R. L. Fernando, D. Nettleton, B. R. Southey, J. C. M. Dekkers, M. F. Rothschild, and M. Soller, “Controlling the proportion of false positives in multiple dependent tests,” Genetics, vol. 166, no. 1, pp. 611–619, 2004.
  34. L. Chen and J. D. Storey, “Relaxed significance criteria for linkage analysis,” Genetics, vol. 173, no. 4, pp. 2371–2381, 2006.
  35. G. Zou and Y. Zuo, “On the sample size requirement in genetic association tests when the proportion of false positives is controlled,” Genetics, vol. 172, no. 1, pp. 687–691, 2006.
  36. K. F. Manly, D. Nettleton, and J. T. G. Hwang, “Genomics, prior probability, and statistical tests of multiple hypotheses,” Genome Research, vol. 14, no. 6, pp. 997–1001, 2004.
  37. B. Efron, “Local false discovery rates,” 2005.
  38. J. M. Satagopan, B. S. Yandell, M. A. Newton, and T. C. Osborn, “A Bayesian approach to detect quantitative trait loci using Markov chain Monte Carlo,” Genetics, vol. 144, no. 2, pp. 805–816, 1996.
  39. Ś. Sen and G. A. Churchill, “A statistical framework for quantitative trait mapping,” Genetics, vol. 159, no. 1, pp. 371–387, 2001.
  40. P. J. Green, “Reversible jump Markov chain Monte Carlo computation and Bayesian model determination,” Biometrika, vol. 82, no. 4, pp. 711–732, 1995.
  41. J. M. Satagopan and B. S. Yandell, “Estimating the number of quantitative trait loci via Bayesian model determination,” in Proceedings of the Joint Statistical Meetings, Chicago, Ill, USA, August 1996.
  42. N. Yi, S. Xu, and D. B. Allison, “Bayesian model choice and search strategies for mapping interacting quantitative trait loci,” Genetics, vol. 165, no. 2, pp. 867–883, 2003.
  43. N. Yi, “A unified Markov chain Monte Carlo framework for mapping multiple quantitative trait loci,” Genetics, vol. 167, no. 2, pp. 967–975, 2004.
  44. N. Yi, B. S. Yandell, G. A. Churchill, D. B. Allison, E. J. Eisen, and D. Pomp, “Bayesian model selection for genome-wide epistatic quantitative trait loci analysis,” Genetics, vol. 170, no. 3, pp. 1333–1344, 2005.
  45. B. S. Yandell, T. Mehta, S. Banerjee et al., “R/qtlbim: QTL with Bayesian interval mapping in experimental crosses,” Bioinformatics, vol. 23, no. 5, pp. 641–643, 2007.
  46. S. Xu, “Estimating polygenic effects using markers of the entire genome,” Genetics, vol. 163, no. 2, pp. 789–801, 2003.
  47. T. Hsiang, “A Bayesian view on ridge regression,” The Statistician, vol. 24, no. 4, pp. 267–268, 1975.
  48. J. C. Whittaker, R. Thompson, and M. C. Denham, “Marker-assisted selection using ridge regression,” Genetical Research, vol. 75, no. 2, pp. 249–252, 2000.
  49. V. M. Roso, F. S. Schenkel, S. P. Miller, and L. R. Schaeffer, “Estimation of genetic effects in the presence of multicollinearity in multibreed beef cattle evaluation,” Journal of Animal Science, vol. 83, no. 8, pp. 1788–1800, 2005.
  50. R. E. Kass and A. E. Raftery, “Bayes factors,” Journal of the American Statistical Association, vol. 90, no. 430, pp. 773–795, 1995.
  51. D. J. Balding, “A tutorial on statistical methods for population association studies,” Nature Reviews Genetics, vol. 7, no. 10, pp. 781–791, 2006.

Copyright © 2008 Wei Zou and Zhao-Bang Zeng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

