Journal of Probability and Statistics
Volume 2012 (2012), Article ID 537474, 15 pages
Research Article

Testing Homogeneity in a Semiparametric Two-Sample Problem

1Department of Statistics and Actuarial Science, School of Finance and Statistics, East China Normal University, Shanghai 200241, China
2Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, ON, Canada N2L 3G1
3Department of Mathematics and Statistics, York University, Toronto, ON, Canada M3J 1P3

Received 18 November 2011; Accepted 24 January 2012

Academic Editor: Yongzhao Shao

Copyright © 2012 Yukun Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


We study a two-sample homogeneity testing problem in which one sample comes from a population with density $f(x)$ and the other from a mixture population with mixture density $(1-\lambda)f(x)+\lambda g(x)$. This problem arises naturally in many statistical applications, such as tests for partially differential gene expression in microarray studies or genetic studies of gene mutation. Under the semiparametric assumption $g(x)=f(x)e^{\alpha+\beta x}$, a penalized empirical likelihood ratio test could be constructed, but its implementation is hindered by the fact that there is neither a feasible algorithm for computing the test statistic nor any established results on its theoretical properties. To circumvent these difficulties, we propose an EM test based on the penalized empirical likelihood. We prove that the EM test has a simple chi-square limiting distribution, and we demonstrate its competitive testing performance by simulation. A real-data example is used to illustrate the proposed methodology.

1. Introduction

Let π‘₯1,…,π‘₯𝑛0 be a random sample from a population with distribution function 𝐹, and let 𝑦1,…,𝑦𝑛1 be a random sample from a population with distribution function 𝐻. Testing whether the two populations have the same distribution, that is, 𝐻0∢𝐹=𝐻 versus 𝐻1βˆΆπΉβ‰ π», with both 𝐹 and 𝐻 completely unspecified, will require a nonparametric test. Since 𝐻1βˆΆπΉβ‰ π» is a very broad hypothesis, many times one may want to consider some more specified alternative, for example, the two populations only differ in location. In the present paper, we will consider a specified alternative in which one of the two samples has a mixture structure. More specifically, we haveπ‘₯1,…,π‘₯𝑛0i.i.d.βˆΌπ‘“(π‘₯),𝑦1,…,𝑦𝑛1i.i.d.βˆΌβ„Ž(𝑦)=(1βˆ’πœ†)𝑓(𝑦)+πœ†π‘”(𝑦),(1.1) where 𝑓(π‘₯)=𝑑𝐹(π‘₯)/𝑑π‘₯, 𝑔(𝑦)=𝑑𝐺(𝑦)/𝑑𝑦, β„Ž(𝑦)=𝑑𝐻(𝑦)/𝑑𝑦, and πœ†βˆˆ(0,1) is an unknown parameter sometimes called contamination proportion. The problem of interest is to test 𝐻0βˆΆπ‘“=β„Ž or equivalently πœ†=0. This particular two-sample problem arises naturally in a variety of statistical applications such as test for partial differential gene expression in microarray study, genetic studies for gene mutation, case-control studies with contaminated controls, or the test of a treatment effect in the presence of nonresponders in biological experiments (see Qin and Liang [1] for details).

If no auxiliary information is available, this is merely the usual two-sample goodness-of-fit problem, on which there is an extensive literature; see Zhang [2] and the references therein. However, such tests are not tailored to the specific alternative with a mixture structure and may be inferior to methods designed for it. In this paper, we propose an empirical likelihood-based testing procedure for this mixture alternative under Anderson's semiparametric assumption [3]. Motivated by the logistic regression model, the semiparametric assumption proposed by Anderson [3] links the two distribution functions $F$ and $G$ through the equation
\[
\log\frac{g(x)}{f(x)}=\alpha+\beta x, \tag{1.2}
\]
where $\alpha$ and $\beta$ are both unknown parameters. There are many examples in which the logarithm of the density ratio is linear in the observations.

Example 1.1. Let $F$ and $G$ be the distribution functions of Binomial$(m,p_1)$ and Binomial$(m,p_2)$, respectively, and let the densities $f$ and $g$ be the probability mass functions corresponding to $F$ and $G$. Then,
\[
\log\frac{g(x)}{f(x)}=m\log\frac{1-p_2}{1-p_1}+\left\{\log\frac{p_2\left(1-p_1\right)}{p_1\left(1-p_2\right)}\right\}x. \tag{1.3}
\]

Example 1.2. Let $F$ be the distribution function of $N(\mu_1,\sigma^2)$ and $G$ the distribution function of $N(\mu_2,\sigma^2)$. Then,
\[
\log\frac{g(x)}{f(x)}=\frac{1}{2\sigma^2}\left(\mu_1^2-\mu_2^2\right)+\frac{1}{\sigma^2}\left(\mu_2-\mu_1\right)x. \tag{1.4}
\]
In practice, one may need to apply some transformation to the data (e.g., a logarithm transformation) in order to justify the use of the semiparametric model assumption (1.2).

Example 1.3. Let $F$ and $G$ be the distribution functions of $\log N(\mu_1,\sigma^2)$ and $\log N(\mu_2,\sigma^2)$, respectively. It is clear that the density ratio is a linear function of the log-transformed data:
\[
\log\frac{g(x)}{f(x)}=\frac{1}{2\sigma^2}\left(\mu_1^2-\mu_2^2\right)+\frac{1}{\sigma^2}\left(\mu_2-\mu_1\right)\log x. \tag{1.5}
\]

Example 1.4. Let $F$ and $G$ be the distribution functions of Gamma$(m_1,\theta)$ and Gamma$(m_2,\theta)$, respectively. In this case,
\[
\log\frac{g(x)}{f(x)}=\log\frac{\Gamma\left(m_1\right)}{\Gamma\left(m_2\right)}+\left(m_1-m_2\right)\log\theta+\left(m_2-m_1\right)\log x. \tag{1.6}
\]
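As a quick sanity check of identities such as (1.4), the density ratio can be verified numerically. The sketch below uses only the standard library, with arbitrary illustrative parameter values:

```python
import math

def norm_pdf(x, mu, sigma2):
    """Density of N(mu, sigma2)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

# Illustrative values for mu_1, mu_2, sigma^2 (not from the paper).
mu1, mu2, sigma2 = 0.5, 2.0, 1.5
alpha = (mu1 ** 2 - mu2 ** 2) / (2 * sigma2)   # intercept from (1.4)
beta = (mu2 - mu1) / sigma2                    # slope from (1.4)

for x in [-1.0, 0.0, 0.7, 3.2]:
    ratio = math.log(norm_pdf(x, mu2, sigma2) / norm_pdf(x, mu1, sigma2))
    assert abs(ratio - (alpha + beta * x)) < 1e-12
```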

The semiparametric modeling assumption (1.2) is very flexible and has the advantage of not placing any specific restrictions on the functional form of $f$. Under this assumption, various approaches have been proposed to test homogeneity in the two-sample problem (see [1, 4, 5] and references therein). This paper adds to that literature by introducing a new type of test statistic based on the empirical likelihood [6, 7].

The empirical likelihood (EL) is a nonparametric likelihood method that enjoys many nice properties paralleling those of parametric likelihood methods: for example, it is range preserving, transformation respecting, and Bartlett correctable, and it offers a systematic approach to incorporating auxiliary information [8–11]. In general, if the parameters are identifiable, the empirical likelihood ratio (ELR) test has a chi-square limiting distribution under the null hypothesis. However, for the testing problem above, the parameters under $H_0$ are not identifiable, which results in an intractable null limiting distribution for the ELR test. To circumvent this problem, we add a penalty to the log EL that penalizes values of $\lambda$ close to zero. Working like a soft threshold, the penalty makes the parameters roughly identifiable. Intuitively, the penalized (or modified) ELR test should restore the usual chi-square limiting distribution. Unfortunately, two things hinder the direct use of the penalized ELR test. One is that, to the best of our knowledge, there is no feasible algorithm for computing the penalized ELR test statistic. The other is that there has been no research on the asymptotic properties of the penalized ELR test. Therefore, one cannot obtain critical values for the penalized ELR test either through simulation or from an asymptotic reference distribution. We find that the EM test [12, 13] based on the penalized EL is a nice solution to the testing problem.

The remainder of this paper is organized as follows. In Section 2, we introduce the ELR and the penalized ELR. The penalized EL-based EM test is given in Section 3. A key computational issue of the EM test is discussed in Section 4. Sections 5 and 6 contain a simulation study and a real-data application, respectively. For clarity, all proofs are postponed to the appendix.

2. Empirical Likelihood

Let $\{t_1,\dots,t_{n_0},t_{n_0+1},\dots,t_n\}=\{x_1,\dots,x_{n_0},y_1,\dots,y_{n_1}\}$ denote the combined two-sample data, where $n=n_0+n_1$. Under Anderson's semiparametric assumption (1.2), the likelihood of the two-sample data (1.1) is
\[
L=\prod_{i=1}^{n_0}dF\left(t_i\right)\prod_{j=n_0+1}^{n}\left[1-\lambda+\lambda e^{\alpha+\beta t_j}\right]dF\left(t_j\right). \tag{2.1}
\]
Let $p_h=dF(t_h)$, $h=1,\dots,n$. The EL is simply the likelihood $L$ subject to the constraints $p_h\ge 0$, $\sum_{h=1}^{n}p_h=1$, and $\sum_{h=1}^{n}p_h(e^{\alpha+\beta t_h}-1)=0$. The corresponding log-EL is
\[
l=\sum_{h=1}^{n}\log p_h+\sum_{j=1}^{n_1}\log\left[1-\lambda+\lambda e^{\alpha+\beta y_j}\right]. \tag{2.2}
\]
We are interested in testing
\[
H_0:\ \lambda=0\quad\text{or}\quad(\alpha,\beta)=(0,0). \tag{2.3}
\]
Under the null hypothesis, the constraint $\sum_{h=1}^{n}p_h(e^{\alpha+\beta t_h}-1)=0$ always holds, and $\sup_{H_0}l=-n\log n$. Under the alternative hypothesis, for any fixed $(\lambda,\alpha,\beta)$, maximizing $l$ with respect to the $p_h$'s leads to the log-EL function of $(\lambda,\alpha,\beta)$:
\[
l(\lambda,\alpha,\beta)=-\sum_{h=1}^{n}\log\left[1+\xi\left(e^{\alpha+\beta t_h}-1\right)\right]-n\log n+\sum_{j=1}^{n_1}\log\left[1-\lambda+\lambda e^{\alpha+\beta y_j}\right], \tag{2.4}
\]
where $\xi$ is the solution to the equation
\[
\sum_{h=1}^{n}\frac{e^{\alpha+\beta t_h}-1}{1+\xi\left(e^{\alpha+\beta t_h}-1\right)}=0. \tag{2.5}
\]
Hence, the EL ratio function is $R(\lambda,\alpha,\beta)=2\{l(\lambda,\alpha,\beta)+n\log n\}$, and the ELR is $R=\sup R(\lambda,\alpha,\beta)$.
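For fixed $(\alpha,\beta)$, the left-hand side of (2.5) is strictly decreasing in $\xi$ on the interval where all implied weights $1+\xi(e^{\alpha+\beta t_h}-1)$ stay positive, so $\xi$ can be found by bisection. A minimal sketch (the helper name `solve_xi` is ours, not from the paper; it assumes $e^{\alpha+\beta t_h}-1$ takes both signs so that a root exists):

```python
import math

def solve_xi(alpha, beta, t, tol=1e-12):
    """Solve equation (2.5): sum_h v_h / (1 + xi * v_h) = 0 for xi,
    where v_h = exp(alpha + beta * t_h) - 1.  The left-hand side is
    strictly decreasing in xi wherever all 1 + xi * v_h > 0, so
    bisection applies.  Assumes the v_h take both signs."""
    v = [math.exp(alpha + beta * th) - 1.0 for th in t]
    vmax, vmin = max(v), min(v)
    # Keep every weight 1 + xi * v_h positive (so the implied p_h > 0).
    lo = -1.0 / vmax + 1e-10 if vmax > 0 else -1e6
    hi = -1.0 / vmin - 1e-10 if vmin < 0 else 1e6

    def score(xi):
        return sum(vh / (1.0 + xi * vh) for vh in v)

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```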

The null hypothesis $H_0$ holds when $\lambda=0$ regardless of $(\alpha,\beta)$, or when $(\alpha,\beta)=(0,0)$ regardless of $\lambda$. This implies that the parameter $(\lambda,\alpha,\beta)$ is not identifiable under $H_0$, resulting in rather complicated asymptotic properties of the ELR. One may instead consider the modified or penalized likelihood method [14] and define the penalized log-EL function $pl(\lambda,\alpha,\beta)=l(\lambda,\alpha,\beta)+\log(\lambda)$. Accordingly, the penalized EL ratio function is
\[
pR(\lambda,\alpha,\beta)=2\{pl(\lambda,\alpha,\beta)-pl(1,0,0)\}
=-2\sum_{h=1}^{n}\log\left[1+\xi\left(e^{\alpha+\beta t_h}-1\right)\right]
+2\sum_{j=1}^{n_1}\log\left(1-\lambda+\lambda e^{\alpha+\beta y_j}\right)+2\log(\lambda), \tag{2.6}
\]
where $\xi$ is the solution to (2.5). The penalty function $\log(\lambda)$ goes to $-\infty$ as $\lambda$ approaches 0. Therefore, $\lambda$ is bounded away from 0, and the null hypothesis in (2.3) then reduces to $(\alpha,\beta)=(0,0)$. That is, the parameters in the penalized log-EL function are asymptotically identifiable. However, the asymptotic behavior of the penalized ELR test is still complicated. Meanwhile, the computation of the penalized ELR test statistic is another obstacle to implementing the penalized ELR method: no feasible and stable algorithm has been found for this purpose. The EL-based EM test proposed in this paper provides an efficient way to solve the problem.

3. EL-Based EM Test

Motivated by Chen and Li [12] and Li et al. [13], we propose an EM test based on the penalized EL to test the hypothesis (2.3). The EM test statistics are derived iteratively. We first choose a finite set $\Lambda=\{\lambda_1,\dots,\lambda_L\}\subset(0,1]$, for instance, $\Lambda=\{0.1,0.2,\dots,0.9,1.0\}$, and a positive integer $K$ (2 or 3 in general). For each $l=1,\dots,L$, we carry out the following steps.

Step 1. Let $k=1$ and $\lambda_l^{(k)}=\lambda_l$. Calculate $(\alpha_l^{(k)},\beta_l^{(k)})=\arg\max_{\alpha,\beta}pR(\lambda_l^{(k)},\alpha,\beta)$.

Step 2. Update $(\lambda,\alpha,\beta)$ by running the following algorithm $K-1$ times.
Substep 2.1. Calculate the posterior probabilities
\[
w_{jl}^{(k)}=\frac{\lambda_l^{(k)}\exp\left(\alpha_l^{(k)}+\beta_l^{(k)}y_j\right)}{1-\lambda_l^{(k)}+\lambda_l^{(k)}\exp\left(\alpha_l^{(k)}+\beta_l^{(k)}y_j\right)},\quad j=1,\dots,n_1, \tag{3.1}
\]
and update $\lambda$ by
\[
\lambda_l^{(k+1)}=\arg\max_{\lambda}\left\{\sum_{j=1}^{n_1}\left(1-w_{jl}^{(k)}\right)\log(1-\lambda)+\sum_{j=1}^{n_1}w_{jl}^{(k)}\log(\lambda)+\log(\lambda)\right\}. \tag{3.2}
\]
Substep 2.2. Update $(\alpha,\beta)$ by $(\alpha_l^{(k+1)},\beta_l^{(k+1)})=\arg\max_{\alpha,\beta}pR(\lambda_l^{(k+1)},\alpha,\beta)$.
Substep 2.3. Let $k=k+1$ and continue.

Step 3. Define the test statistics $M_n^{(K)}(\lambda_l)=pR(\lambda_l^{(K)},\alpha_l^{(K)},\beta_l^{(K)})$.

The EM test statistic is defined as
\[
\mathrm{EM}_n^{(K)}=\max\left\{M_n^{(K)}\left(\lambda_l\right),\ l=1,\dots,L\right\}. \tag{3.3}
\]
We reject the null hypothesis $H_0$ when the EM test statistic is greater than some critical value determined by the following limiting distribution.
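Substep 2.1 and the $\lambda$-update (3.2) are straightforward to code: setting the derivative of the criterion in (3.2) to zero gives the closed-form maximizer $(\sum_j w_{jl}^{(k)}+1)/(n_1+1)$, which also appears in the appendix. A sketch (function names are ours), with the maximization over $(\alpha,\beta)$ left to Section 4:

```python
import math

def posterior_weights(lam, alpha, beta, y):
    """Posterior probability (3.1) that each y_j came from the g component."""
    w = []
    for yj in y:
        e = lam * math.exp(alpha + beta * yj)
        w.append(e / (1.0 - lam + e))
    return w

def update_lambda(w):
    """Maximizer of (3.2); setting the derivative
    -sum(1 - w)/(1 - lam) + (sum(w) + 1)/lam = 0 gives the closed form."""
    n1 = len(w)
    return (sum(w) + 1.0) / (n1 + 1.0)
```

For instance, with $\alpha=\beta=0$ every weight equals $\lambda/(1-\lambda+\lambda)=\lambda$, so the update pulls $\lambda$ toward $(n_1\lambda+1)/(n_1+1)$.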

Theorem 3.1. Suppose $\rho=n_1/n\in(0,1)$ is a constant. Assume that the null hypothesis $H_0$ holds and that $E(t_h)=0$ and $\mathrm{Var}(t_h)=\sigma^2\in(0,\infty)$ for $h=1,\dots,n$. For $l=1,\dots,L$ and any fixed $k$, it holds that
\[
\lambda_l^{(k)}-\lambda_l=o_p(1),\qquad
\alpha_l^{(k)}=O_p\left(n^{-1}\right),\qquad
\beta_l^{(k)}=\frac{\bar y-\bar x}{\lambda_l\sigma^2}+o_p\left(n^{-1/2}\right), \tag{3.4}
\]
where $\bar x=(1/n_0)\sum_{i=1}^{n_0}x_i$ and $\bar y=(1/n_1)\sum_{j=1}^{n_1}y_j$.

Remark 3.2. The assumption $E(t_h)=0$ is only for convenience and is not necessary. Otherwise, we can replace $t_h$ and $\alpha$ with $t_h-E(t_h)$ and $\alpha+\beta E(t_h)$, respectively.

Theorem 3.3. Assume that the conditions of Theorem 3.1 hold and that $1\in\Lambda$. Under the null hypothesis (2.3), $\mathrm{EM}_n^{(K)}\to\chi_1^2$ in distribution as $n\to\infty$.

We finish this section with an additional remark.

Remark 3.4. We point out that the idea of the EM test can also be generalized to more general models such as $\log(g(x)/f(x))=\alpha+\beta_1x+\cdots+\beta_kx^k$ for some integer $k$, or $\log(g(x)/f(x))=\alpha+\beta_1t_1(x)+\cdots+\beta_kt_k(x)$ with the $t_i(\cdot)$'s being known functions.

4. Computation of the EM Test

A key step of the EM test procedure is to maximize $pR(\lambda,\alpha,\beta)$ with respect to $(\alpha,\beta)$ for fixed $\lambda$. In this section, we propose a computation strategy that provides a stable solution to this optimization problem. Throughout this section, $\lambda$ is held fixed.

The objective function is $pR(\lambda,\alpha,\beta)=G(\xi^*,\alpha,\beta)$, where
\[
G(\xi,\alpha,\beta)=-2\sum_{h=1}^{n}\log\left[1+\xi\left(e^{\alpha+\beta t_h}-1\right)\right]+2\sum_{j=1}^{n_1}\log\left[1-\lambda+\lambda e^{\alpha+\beta y_j}\right]+2\log(\lambda) \tag{4.1}
\]
and $\xi^*=\xi^*(\alpha,\beta)$ is the solution to
\[
\frac{\partial G}{\partial\xi}=-2\sum_{h=1}^{n}\frac{e^{\alpha+\beta t_h}-1}{1+\xi\left(e^{\alpha+\beta t_h}-1\right)}=0. \tag{4.2}
\]
If $(\alpha,\beta)$ is the maximum point of $pR(\lambda,\alpha,\beta)$, it should in general satisfy
\[
\frac{\partial G}{\partial\alpha}=-2\sum_{h=1}^{n}\frac{\xi e^{\alpha+\beta t_h}}{1+\xi\left(e^{\alpha+\beta t_h}-1\right)}+2\sum_{j=1}^{n_1}\frac{\lambda e^{\alpha+\beta y_j}}{1-\lambda+\lambda e^{\alpha+\beta y_j}}=0. \tag{4.3}
\]
Combining (4.2) and (4.3) leads to
\[
\xi=\frac{1}{n}\sum_{j=1}^{n_1}\frac{\lambda e^{\alpha+\beta y_j}}{1-\lambda+\lambda e^{\alpha+\beta y_j}}. \tag{4.4}
\]
Putting this expression for $\xi$ back into (4.1), we obtain a new function
\[
H(\alpha,\beta)=-2\sum_{h=1}^{n}\log\left[1+\left(e^{\alpha+\beta t_h}-1\right)\frac{1}{n}\sum_{j=1}^{n_1}\frac{\lambda e^{\alpha+\beta y_j}}{1-\lambda+\lambda e^{\alpha+\beta y_j}}\right]+2\sum_{j=1}^{n_1}\log\left(1-\lambda+\lambda e^{\alpha+\beta y_j}\right). \tag{4.5}
\]
It can be verified that $H(\alpha,\beta)$ is almost surely concave in a neighborhood of $(0,0)$ for given $\lambda$, so maximizing $H(\alpha,\beta)$ with respect to $(\alpha,\beta)$ gives the maximum of $pR(\lambda,\alpha,\beta)$ for fixed $\lambda$ (the two objectives differ only by the constant $2\log\lambda$). The stability of the method is illustrated by the following simulation study.
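The strategy above can be sketched directly: evaluate $H(\alpha,\beta)$ of (4.5) and maximize it over $(\alpha,\beta)$. For illustration we use a crude grid search around $(0,0)$ in place of a Newton-type method; the function names are ours:

```python
import math

def H(alpha, beta, lam, t, y):
    """Profile objective (4.5): pR(lam, alpha, beta) minus the constant
    2*log(lam), with xi replaced by its stationary-point expression (4.4)."""
    n = len(t)
    xi = sum(lam * math.exp(alpha + beta * yj)
             / (1.0 - lam + lam * math.exp(alpha + beta * yj)) for yj in y) / n
    first = sum(math.log(1.0 + xi * (math.exp(alpha + beta * th) - 1.0)) for th in t)
    second = sum(math.log(1.0 - lam + lam * math.exp(alpha + beta * yj)) for yj in y)
    return -2.0 * first + 2.0 * second

def maximize_H(lam, t, y):
    """Crude grid search around (0, 0); a real implementation would use a
    Newton-type method, exploiting the local concavity of H."""
    best = (0.0, 0.0, H(0.0, 0.0, lam, t, y))
    for a in [i * 0.05 - 0.5 for i in range(21)]:
        for b in [i * 0.05 - 0.5 for i in range(21)]:
            val = H(a, b, lam, t, y)
            if val > best[2]:
                best = (a, b, val)
    return best
```

Note that $H(0,0)=0$ for any data and any fixed $\lambda$, which gives a convenient check on an implementation.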

5. Simulation Study

We consider the two models of Examples 1.3 and 1.4, with $\mu_1=0$, $\mu_2=\mu$, and $\sigma^2=1$ for Example 1.3, and $m_1=1$, $m_2=m$, and $\theta=1$ for Example 1.4. Nominal levels of 0.01, 0.05, and 0.10 are considered. The logarithm transformation is applied to the original data before the EM test is used. The initial set $\Lambda=\{0.1,0.2,\dots,1\}$ and the iteration number $K=3$ are used to calculate the EM test statistic.

One competitive method for testing homogeneity under the semiparametric two-sample model is the score test proposed by Qin and Liang [1]. This method is based on
\[
S(\alpha,\beta)=\left.\frac{\partial l(\lambda,\alpha,\beta)}{\partial\lambda}\right|_{\lambda=0}=\sum_{j=1}^{n_1}\left(e^{\alpha+\beta y_j}-1\right), \tag{5.1}
\]
where $l(\lambda,\alpha,\beta)$ is the log empirical likelihood function given in (2.4). Let $(\hat\alpha_1,\hat\beta_1)=\arg\max_{\alpha,\beta}l(1,\alpha,\beta)$. The score test statistic is defined as $T_1=S(\hat\alpha_1,\hat\beta_1)/(1+n_1/n_0)$, which has a $\chi_1^2$ limiting distribution under the null hypothesis.

We compare the EM test and the score test in terms of type I error and power. We calculate the type I error of each method under the null hypothesis based on 20,000 repetitions, and the power under the alternative models based on 2,000 repetitions. For a fair comparison, simulated critical values are used to calculate the power. We consider two sample sizes, 50 and 200, and $K=1,2,3$. Tables 1 and 2 contain the simulation results for the log-normal models, and Tables 3 and 4 those for the gamma models.

Table 1: Type I error and power comparisons (%) of the EM test and the score test (SC test) for the log-normal model: $n_0=n_1=50$.
Table 2: Type I error and power comparisons (%) of the EM test and the score test (SC test) for the log-normal model: $n_0=n_1=200$.
Table 3: Type I error and power comparisons (%) of the EM test and the score test (SC test) for the gamma model: $n_0=n_1=50$.
Table 4: Type I error and power comparisons (%) of the EM test and the score test (SC test) for the gamma model: $n_0=n_1=200$.

The results show that the EM test and the score test have similar type I errors. For both methods, the type I errors are somewhat larger than the nominal levels when the sample size is $n=50$; they are close to the nominal levels when the sample size is increased to $n=200$. For the log-normal models, the two methods have almost the same power when the alternatives are close to the null, such as $\mu=1$; the EM test becomes much more powerful when the alternatives are distant and the sample size increases. In the case of $n=50$, $\lambda=0.2$, $\mu=3$, and nominal level 0.01, the EM test has a 10% gain in power over the score test; the gain rises to almost 30% when $\lambda=0.1$, $\mu=3$, and the sample size increases to $n=200$. For the gamma models, the advantage of the EM test is more obvious: for both sample sizes $n=50$ and $n=200$, the EM test is more powerful than the score test.

6. Real Example

We apply our EM test procedure to the drug abuse data [15] from a study of addiction to morphine in rats. In this study, rats received morphine by pressing a lever, and the frequency of lever presses (the self-injection rate) after a six-day treatment with morphine was recorded as the response variable. The data consist of the numbers of lever presses for five groups of rats: four treatment groups with different dose levels and one saline group (the control group).

We analyzed the response variable (the number of lever presses) for the treatment group at the first dose level and for the control group. The data are tabulated in Table 3 of Fu et al. [5]. Following Boos and Brownie [16] and Fu et al. [5], we analyze the transformed data $\log_{10}(R+1)$, with $R$ being the number of lever presses. Instead of using parametric models as in Boos and Brownie [16] and Fu et al. [5], we adopt Anderson's semiparametric approach. That is, we assume that the response variable in the control group comes from $f(x)$, while the response variable in the treatment group comes from $h(x)=(1-\lambda)f(x)+\lambda g(x)$ with $g(x)/f(x)=\exp(\alpha+\beta x)$. The EM test statistics for testing homogeneity under the semiparametric two-sample model are found to be $\mathrm{EM}_n^{(1)}=14.090$, $\mathrm{EM}_n^{(2)}=14.150$, and $\mathrm{EM}_n^{(3)}=14.167$. Calibrated by the $\chi_1^2$ limiting distribution, the $P$ values are all around 0.02%. We also applied the score test of Qin and Liang [1]. The score test statistic is 9.417, with a $P$ value equal to 0.2% calibrated by the $\chi_1^2$ limiting distribution. We also used permutation methods to obtain the $P$ values of the two tests. Based on 50,000 permutations, the $P$ values of the three EM test statistics are all around 0.03%, and the $P$ value of the score test is around 0.5%. In accordance with Fu et al. [5], both methods suggest a significant treatment effect, with the proposed EM test providing much stronger evidence than the score test.
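The permutation calibration used here is generic: pool the two samples, reshuffle the group labels, and recompute the statistic. A sketch with a placeholder difference-of-means statistic (the paper instead permutes its EM and score statistics; the function name is ours):

```python
import random

def permutation_pvalue(x, y, statistic, n_perm=5000, seed=1):
    """Generic two-sample permutation P value: the fraction of relabelled
    datasets whose statistic is at least as large as the observed one
    (with the usual +1 correction in numerator and denominator)."""
    rng = random.Random(seed)
    observed = statistic(x, y)
    pooled = list(x) + list(y)
    n0 = len(x)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if statistic(pooled[:n0], pooled[n0:]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Placeholder statistic: difference of sample means.
mean_diff = lambda a, b: sum(b) / len(b) - sum(a) / len(a)
```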



Appendix

The proofs of Theorems 3.1 and 3.3 are based on the three lemmas given below. Lemma A.1 establishes the order of the maximum empirical likelihood estimators of $\alpha$ and $\beta$ when $\lambda$ is bounded away from 0 under the null hypothesis. Lemma A.2 shows that the EM iteration updates the value of $\lambda$ by an amount of order $o_p(1)$. Theorem 3.1 is then proved by iteratively applying Lemmas A.1 and A.2. Lemma A.3 gives an approximation of the penalized ELR for any $\lambda$ bounded away from 0, from which we prove Theorem 3.3.

Lemma A.1. Assume the conditions of Theorem 3.1. Let $\lambda\in[\epsilon,1]$ for some constant $\epsilon>0$, and let $(\hat\alpha,\hat\beta)=\arg\max_{\alpha,\beta}pR(\lambda,\alpha,\beta)$. Then, we have
\[
\hat\alpha=O_p\left(n^{-1}\right),\qquad
\hat\beta=\frac{\bar y-\bar x}{\lambda\sigma^2}+o_p\left(n^{-1/2}\right) \tag{A.1}
\]
with $\bar x=(1/n_0)\sum_{i=1}^{n_0}x_i$ and $\bar y=(1/n_1)\sum_{j=1}^{n_1}y_j$.

Proof. Since $\lambda\ge\epsilon>0$, the parameters $(\alpha,\beta)$ in the empirical likelihood ratio are identifiable. Therefore, $(\hat\alpha,\hat\beta)$ is $\sqrt{n}$-consistent for the true value $(0,0)$; that is, $\hat\alpha=O_p(n^{-1/2})$ and $\hat\beta=O_p(n^{-1/2})$ [10].
Following the arguments in Section 4, the maximum empirical likelihood estimate $(\hat\alpha,\hat\beta)$ should satisfy (where the dependence of $\xi$ on $(\hat\alpha,\hat\beta)$ is suppressed)
\[
\frac{\partial G(\xi,\hat\alpha,\hat\beta)}{\partial\alpha}=-2\sum_{h=1}^{n}\frac{\xi e^{\hat\alpha+\hat\beta t_h}}{1+\xi\left(e^{\hat\alpha+\hat\beta t_h}-1\right)}+2\sum_{j=1}^{n_1}\frac{\lambda e^{\hat\alpha+\hat\beta y_j}}{1-\lambda+\lambda e^{\hat\alpha+\hat\beta y_j}}=0, \tag{A.2}
\]
\[
\frac{\partial G(\xi,\hat\alpha,\hat\beta)}{\partial\beta}=-2\sum_{h=1}^{n}\frac{\xi e^{\hat\alpha+\hat\beta t_h}t_h}{1+\xi\left(e^{\hat\alpha+\hat\beta t_h}-1\right)}+2\sum_{j=1}^{n_1}\frac{\lambda e^{\hat\alpha+\hat\beta y_j}y_j}{1-\lambda+\lambda e^{\hat\alpha+\hat\beta y_j}}=0, \tag{A.3}
\]
with
\[
\xi=\frac{1}{n}\sum_{j=1}^{n_1}\frac{\lambda e^{\hat\alpha+\hat\beta y_j}}{1-\lambda+\lambda e^{\hat\alpha+\hat\beta y_j}}. \tag{A.4}
\]
Applying a Taylor expansion to the right-hand side of (A.4), we get
\[
\xi=\frac{n_1}{n}\lambda+o_p(1). \tag{A.5}
\]
Further applying a first-order Taylor expansion to (A.2) and using (A.4), we get
\[
n\xi(1-\xi)\hat\alpha+\xi(1-\xi)\sum_{h=1}^{n}t_h\hat\beta-n_1\lambda(1-\lambda)\hat\alpha-\lambda(1-\lambda)\sum_{j=1}^{n_1}y_j\hat\beta=O_p(n)\left(\hat\alpha^2+\hat\beta^2\right). \tag{A.6}
\]
Note that both $\hat\alpha$ and $\hat\beta$ are of order $O_p(n^{-1/2})$ and that both $\sum_{h=1}^{n}t_h$ and $\sum_{j=1}^{n_1}y_j$ are of order $O_p(n^{1/2})$. Combining (A.5) and (A.6) yields $n_1\lambda(\lambda-n_1\lambda/n)\hat\alpha=O_p(1)$. Therefore, $\hat\alpha=O_p(n^{-1})$.
Similarly, a first-order Taylor expansion of (A.3) results in
\[
0=-\xi\sum_{h=1}^{n}t_h-\xi(1-\xi)\sum_{h=1}^{n}t_h\hat\alpha-\xi(1-\xi)\sum_{h=1}^{n}t_h^2\hat\beta+\lambda\sum_{j=1}^{n_1}y_j+\lambda(1-\lambda)\sum_{j=1}^{n_1}y_j\hat\alpha+\lambda(1-\lambda)\sum_{j=1}^{n_1}y_j^2\hat\beta+O_p(n)\left(\hat\alpha^2+\hat\beta^2\right). \tag{A.7}
\]
By the same reasoning as for $\hat\alpha$, it follows from (A.7) that
\[
\left\{n_1\lambda\left(1-\frac{n_1}{n}\lambda\right)\sigma^2-n_1\lambda(1-\lambda)\sigma^2\right\}\hat\beta=\lambda\sum_{j=1}^{n_1}y_j-\frac{n_1}{n}\lambda\sum_{h=1}^{n}t_h+o_p\left(n^{1/2}\right). \tag{A.8}
\]
After some algebra, we have $\hat\beta=(\bar y-\bar x)/(\lambda\sigma^2)+o_p(n^{-1/2})$, which completes the proof.

Suppose that $\lambda$, $\hat\alpha$, and $\hat\beta$ have the properties given in Lemma A.1. For $j=1,\dots,n_1$, let $w_j=\lambda\exp(\hat\alpha+\hat\beta y_j)/(1-\lambda+\lambda\exp(\hat\alpha+\hat\beta y_j))$. The updated value of $\lambda$ is
\[
\lambda^*=\arg\max_{\lambda}\left\{\sum_{j=1}^{n_1}\left(1-w_j\right)\log(1-\lambda)+\sum_{j=1}^{n_1}w_j\log(\lambda)+\log(\lambda)\right\}. \tag{A.9}
\]
It can be verified that the closed form of $\lambda^*$ is $\lambda^*=\left(\sum_{j=1}^{n_1}w_j+1\right)/\left(n_1+1\right)$. We now show that this iteration changes the value of $\lambda$ only by an $o_p(1)$ term.
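The closed form of $\lambda^*$ can be checked numerically against a grid search over the criterion in (A.9); the weights below are arbitrary illustrative values:

```python
import math

def objective(lam, w):
    """The criterion in (A.9) as a function of lam for fixed weights w;
    the two log(lam) sums are merged into (sum(w) + 1) * log(lam)."""
    return (sum((1.0 - wj) * math.log(1.0 - lam) for wj in w)
            + (sum(w) + 1.0) * math.log(lam))

w = [0.1, 0.4, 0.7, 0.9, 0.3]
closed_form = (sum(w) + 1.0) / (len(w) + 1.0)
# Fine grid over (0, 1); the grid maximizer should sit next to the closed form.
grid = [i / 10000 for i in range(1, 10000)]
numeric = max(grid, key=lambda lam: objective(lam, w))
assert abs(numeric - closed_form) < 1e-3
```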

Lemma A.2. Assume that the conditions of Lemma A.1 hold. Then, $\lambda^*=\lambda+o_p(1)$.

Proof. Let $\hat\lambda=\sum_{j=1}^{n_1}w_j/n_1$. By Lemma A.1, $\hat\alpha=o_p(1)$ and $\hat\beta=o_p(1)$. Applying a first-order Taylor expansion, we have
\[
\hat\lambda=\frac{1}{n_1}\sum_{j=1}^{n_1}\frac{\lambda\exp\left(\hat\alpha+\hat\beta y_j\right)}{1-\lambda+\lambda\exp\left(\hat\alpha+\hat\beta y_j\right)}=\lambda+O_p(1)\left(\hat\alpha+\hat\beta\right)=\lambda+o_p(1). \tag{A.10}
\]
Some simple algebra shows that
\[
\lambda^*-\hat\lambda=\frac{1-\hat\lambda}{n_1+1}=o_p(1). \tag{A.11}
\]
Therefore, $\lambda^*=\lambda+o_p(1)$, which finishes the proof.

Proof of Theorem 3.1. With the above two technical lemmas, the proof is the same as that of Theorem 1 in Li et al. [13] and is therefore omitted.

The next lemma is a technical preparation for proving Theorem 3.3. It investigates the asymptotic approximation of the penalized ELR for any $\lambda$ bounded away from 0.

Lemma A.3. Assume the conditions of Theorem 3.1, and let $\lambda\in[\epsilon,1]$ for some $\epsilon>0$. Then,
\[
pR\left(\lambda,\hat\alpha,\hat\beta\right)=n\rho(1-\rho)\sigma^{-2}\left(\bar y-\bar x\right)^2+2\log\lambda+o_p(1). \tag{A.12}
\]

Proof. By Lemma A.1, we have $\hat\alpha=O_p(n^{-1})$ and $\hat\beta=O_p(n^{-1/2})$. Applying a second-order Taylor expansion to $pR(\lambda,\hat\alpha,\hat\beta)$ and noting that $\partial pR/\partial\alpha|_{(\alpha,\beta)=(0,0)}=0$, we have
\[
pR\left(\lambda,\hat\alpha,\hat\beta\right)=2\left(-\xi\sum_{h=1}^{n}t_h+\lambda\sum_{j=1}^{n_1}y_j\right)\hat\beta-\left\{\xi(1-\xi)\sum_{h=1}^{n}t_h^2-\lambda(1-\lambda)\sum_{j=1}^{n_1}y_j^2\right\}\hat\beta^2+2\log\lambda+o_p(1). \tag{A.13}
\]
Using (A.5) and the facts that both $\sum_{h=1}^{n}t_h^2/n$ and $\sum_{j=1}^{n_1}y_j^2/n_1$ converge to $\sigma^2$ in probability, the above expression simplifies to
\[
pR\left(\lambda,\hat\alpha,\hat\beta\right)=2\frac{n_1n_0}{n}\lambda\left(\bar y-\bar x\right)\hat\beta-\frac{n_1n_0}{n}\lambda^2\sigma^2\hat\beta^2+2\log\lambda+o_p(1). \tag{A.14}
\]
Plugging in the approximation $\hat\beta=(\bar y-\bar x)/(\lambda\sigma^2)+o_p(n^{-1/2})$, we get
\[
pR\left(\lambda,\hat\alpha,\hat\beta\right)=\frac{n_1n_0}{n}\frac{\left(\bar y-\bar x\right)^2}{\sigma^2}+2\log\lambda+o_p(1)=n\rho(1-\rho)\sigma^{-2}\left(\bar y-\bar x\right)^2+2\log\lambda+o_p(1). \tag{A.15}
\]
This completes the proof.

Proof of Theorem 3.3. Without loss of generality, assume $0<\lambda_1<\lambda_2<\cdots<\lambda_L=1$. By Theorem 3.1 and Lemma A.3, for $l=1,\dots,L$, we have
\[
pR\left(\lambda_l^{(K)},\alpha_l^{(K)},\beta_l^{(K)}\right)=n\rho(1-\rho)\sigma^{-2}\left(\bar y-\bar x\right)^2+2\log\lambda_l+o_p(1). \tag{A.16}
\]
This leads to
\[
\mathrm{EM}_n^{(K)}=\max_{1\le l\le L}pR\left(\lambda_l^{(K)},\alpha_l^{(K)},\beta_l^{(K)}\right)=n\rho(1-\rho)\sigma^{-2}\left(\bar y-\bar x\right)^2+o_p(1), \tag{A.17}
\]
where the remainder is still $o_p(1)$ since the maximum is taken over a finite set.
Note that, as $n$ tends to infinity, $\sqrt{n}(\bar y-\bar x)\to N(0,\sigma^2/[\rho(1-\rho)])$ in distribution. Therefore,
\[
\mathrm{EM}_n^{(K)}\longrightarrow\chi_1^2 \tag{A.18}
\]
in distribution as $n$ goes to infinity. This completes the proof.


References

1. J. Qin and K. Y. Liang, "Hypothesis testing in a mixture case-control model," Biometrics, vol. 67, pp. 182–193, 2011.
2. J. Zhang, "Powerful two-sample tests based on the likelihood ratio," Technometrics, vol. 48, no. 1, pp. 95–103, 2006.
3. J. A. Anderson, "Multivariate logistic compounds," Biometrika, vol. 66, no. 1, pp. 17–26, 1979.
4. T. Lancaster and G. Imbens, "Case-control studies with contaminated controls," Journal of Econometrics, vol. 71, no. 1-2, pp. 145–160, 1996.
5. Y. Fu, J. Chen, and J. D. Kalbfleisch, "Modified likelihood ratio test for homogeneity in a two-sample problem," Statistica Sinica, vol. 19, no. 4, pp. 1603–1619, 2009.
6. A. B. Owen, "Empirical likelihood ratio confidence intervals for a single functional," Biometrika, vol. 75, no. 2, pp. 237–249, 1988.
7. A. B. Owen, "Empirical likelihood ratio confidence regions," The Annals of Statistics, vol. 18, no. 1, pp. 90–120, 1990.
8. P. Hall and B. La Scala, "Methodology and algorithms of empirical likelihood," International Statistical Review, vol. 58, pp. 109–127, 1990.
9. T. DiCiccio, P. Hall, and J. Romano, "Empirical likelihood is Bartlett-correctable," The Annals of Statistics, vol. 19, no. 2, pp. 1053–1061, 1991.
10. J. Qin and J. Lawless, "Empirical likelihood and general estimating equations," The Annals of Statistics, vol. 22, no. 1, pp. 300–325, 1994.
11. S. E. Ahmed, A. Hussein, and S. Nkurunziza, "Robust inference strategy in the presence of measurement error," Statistics & Probability Letters, vol. 80, no. 7-8, pp. 726–732, 2010.
12. J. Chen and P. Li, "Hypothesis test for normal mixture models: the EM approach," The Annals of Statistics, vol. 37, no. 5, pp. 2523–2542, 2009.
13. P. Li, J. Chen, and P. Marriott, "Non-finite Fisher information and homogeneity: an EM approach," Biometrika, vol. 96, no. 2, pp. 411–426, 2009.
14. J. Chen, "Penalized likelihood-ratio test for finite mixture models with multinomial observations," The Canadian Journal of Statistics, vol. 26, no. 4, pp. 583–599, 1998.
15. J. R. Weeks and R. J. Collins, "Primary addiction to morphine in rats," Federation Proceedings, vol. 30, p. 277, 1971.
16. D. D. Boos and C. Brownie, "Mixture models for continuous data in dose-response studies when some animals are unaffected by treatment," Biometrics, vol. 47, pp. 1489–1504, 1991.