Abstract

We study a two-sample homogeneity testing problem in which one sample comes from a population with density $f(x)$ and the other comes from a mixture population with mixture density $(1-\lambda)f(x)+\lambda g(x)$. This problem arises naturally in many statistical applications, such as tests for partial differential gene expression in microarray studies or genetic studies of gene mutation. Under the semiparametric assumption $g(x)=f(x)e^{\alpha+\beta x}$, a penalized empirical likelihood ratio test can be constructed, but its implementation is hindered by the fact that there is neither a feasible algorithm for computing the test statistic nor any available result on its theoretical properties. To circumvent these difficulties, we propose an EM test based on the penalized empirical likelihood. We prove that the EM test has a simple chi-square limiting distribution, and we demonstrate its competitive testing performance by simulation. A real-data example is used to illustrate the proposed methodology.

1. Introduction

Let $x_1,\ldots,x_{n_0}$ be a random sample from a population with distribution function $F$, and let $y_1,\ldots,y_{n_1}$ be a random sample from a population with distribution function $H$. Testing whether the two populations have the same distribution, that is, $H_0\colon F=H$ versus $H_1\colon F\ne H$ with both $F$ and $H$ completely unspecified, requires a nonparametric test. Since $H_1\colon F\ne H$ is a very broad hypothesis, one may often want to consider a more specific alternative, for example, that the two populations differ only in location. In the present paper, we consider a specific alternative in which one of the two samples has a mixture structure. More precisely, we assume
\[
x_1,\ldots,x_{n_0}\ \text{i.i.d.}\sim f(x),\qquad
y_1,\ldots,y_{n_1}\ \text{i.i.d.}\sim h(y)=(1-\lambda)f(y)+\lambda g(y),\tag{1.1}
\]
where $f(x)=dF(x)/dx$, $g(y)=dG(y)/dy$, $h(y)=dH(y)/dy$, and $\lambda\in(0,1)$ is an unknown parameter, sometimes called the contamination proportion. The problem of interest is to test $H_0\colon f=h$, or equivalently $\lambda=0$. This particular two-sample problem arises naturally in a variety of statistical applications, such as tests for partial differential gene expression in microarray studies, genetic studies of gene mutation, case-control studies with contaminated controls, or tests for a treatment effect in the presence of nonresponders in biological experiments (see Qin and Liang [1] for details).

If no auxiliary information is available, this is merely the usual two-sample goodness-of-fit problem, on which there is an extensive literature; see Zhang [2] and the references therein. However, these tests are not tailored to the specific alternative with a mixture structure and may be inferior to methods designed for it. In this paper, we propose an empirical likelihood-based testing procedure for this mixture alternative under Anderson's semiparametric assumption [3]. Motivated by the logistic regression model, the semiparametric assumption proposed by Anderson [3] links the two distribution functions $F$ and $G$ through the equation
\[
\log\frac{g(x)}{f(x)}=\alpha+\beta x,\tag{1.2}
\]
where $\alpha$ and $\beta$ are both unknown parameters. There are many examples in which the logarithm of the density ratio is linear in the observations.

Example 1.1. Let $F$ and $G$ be the distribution functions of Binomial$(m,p_1)$ and Binomial$(m,p_2)$, respectively, and let $f$ and $g$ denote the corresponding probability mass functions. Then,
\[
\log\frac{g(x)}{f(x)}=m\log\frac{1-p_2}{1-p_1}+\log\!\left\{\frac{p_2(1-p_1)}{p_1(1-p_2)}\right\}x.\tag{1.3}
\]

Example 1.2. Let $F$ be the distribution function of $N(\mu_1,\sigma^2)$ and $G$ the distribution function of $N(\mu_2,\sigma^2)$. Then,
\[
\log\frac{g(x)}{f(x)}=\frac{1}{2\sigma^2}\left(\mu_1^2-\mu_2^2\right)+\frac{1}{\sigma^2}\left(\mu_2-\mu_1\right)x.\tag{1.4}
\]
In practice, one may need to apply some transformation to the data (e.g., a logarithm transformation) in order to justify the use of the semiparametric model assumption (1.2).

Example 1.3. Let $F$ and $G$ be the distribution functions of $\log N(\mu_1,\sigma^2)$ and $\log N(\mu_2,\sigma^2)$, respectively. The log density ratio is then a linear function of the log-transformed data:
\[
\log\frac{g(x)}{f(x)}=\frac{1}{2\sigma^2}\left(\mu_1^2-\mu_2^2\right)+\frac{1}{\sigma^2}\left(\mu_2-\mu_1\right)\log x.\tag{1.5}
\]

Example 1.4. Let $F$ and $G$ be the distribution functions of Gamma$(m_1,\theta)$ and Gamma$(m_2,\theta)$, respectively. In this case,
\[
\log\frac{g(x)}{f(x)}=\log\frac{\Gamma(m_1)}{\Gamma(m_2)}+\left(m_1-m_2\right)\log\theta+\left(m_2-m_1\right)\log x.\tag{1.6}
\]
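As a quick numerical sanity check of these examples, the linearity of the log density ratio can be verified directly; the following minimal sketch assumes NumPy and SciPy are available and uses arbitrary illustrative parameter values.

```python
import numpy as np
from scipy.stats import norm, gamma
from scipy.special import gammaln

x = np.linspace(0.5, 5.0, 5)

# Example 1.2: N(mu1, sigma^2) versus N(mu2, sigma^2); log ratio is linear in x.
mu1, mu2, sigma = 0.0, 1.5, 1.0
lhs = norm.logpdf(x, loc=mu2, scale=sigma) - norm.logpdf(x, loc=mu1, scale=sigma)
alpha = (mu1**2 - mu2**2) / (2.0 * sigma**2)
beta = (mu2 - mu1) / sigma**2
print(np.allclose(lhs, alpha + beta * x))              # True

# Example 1.4: Gamma(m1, theta) versus Gamma(m2, theta); log ratio is linear in log x.
m1, m2, theta = 1.0, 2.5, 1.0
lhs = gamma.logpdf(x, m2, scale=theta) - gamma.logpdf(x, m1, scale=theta)
alpha_g = gammaln(m1) - gammaln(m2) + (m1 - m2) * np.log(theta)
beta_g = m2 - m1
print(np.allclose(lhs, alpha_g + beta_g * np.log(x)))  # True
```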

The semiparametric modeling assumption (1.2) is very flexible and has the advantage of not imposing any specific restriction on the functional form of $f$. Under this assumption, various approaches have been proposed for testing homogeneity in the two-sample problem (see [1, 4, 5] and references therein). This paper adds to this literature by introducing a new type of test statistic based on the empirical likelihood [6, 7].

The empirical likelihood (EL) is a nonparametric likelihood method with many nice properties paralleling those of parametric likelihood methods: for example, it is range preserving, transformation respecting, and Bartlett correctable, and it provides a systematic approach to incorporating auxiliary information [8–11]. In general, if the parameters are identifiable, the empirical likelihood ratio (ELR) test has a chi-square limiting distribution under the null hypothesis. For the testing problem above, however, the parameters under $H_0$ are not identifiable, which results in an intractable null limiting distribution for the ELR test. To circumvent this problem, we add a penalty to the log EL that penalizes values of $\lambda$ too close to zero. Acting like a soft threshold, the penalty makes the parameters roughly identifiable. Intuitively, the penalized (or modified) ELR test should then recover the usual chi-square limiting distribution. Unfortunately, two things hinder the direct use of the penalized ELR test. First, to the best of our knowledge, there is no feasible algorithm for computing the penalized ELR test statistic. Second, there has been no research on the asymptotic properties of the penalized ELR test, so critical values cannot be obtained either from simulations or from an asymptotic reference distribution. We find that an EM test [12, 13] based on the penalized EL provides a nice solution to this testing problem.

The remainder of this paper is organized as follows. In Section 2, we introduce the ELR and the penalized ELR. The penalized EL-based EM test is given in Section 3. A key computational issue of the EM test is discussed in Section 4. Sections 5 and 6 contain a simulation study and a real-data application, respectively. For clarity, all proofs are postponed to the appendix.

2. Empirical Likelihood

Let $\{t_1,\ldots,t_{n_0},t_{n_0+1},\ldots,t_n\}=\{x_1,\ldots,x_{n_0},y_1,\ldots,y_{n_1}\}$ denote the combined two-sample data, where $n=n_0+n_1$. Under Anderson's semiparametric assumption (1.2), the likelihood of the two-sample data (1.1) is
\[
L=\prod_{i=1}^{n_0}dF(t_i)\prod_{j=n_0+1}^{n}\left\{1-\lambda+\lambda e^{\alpha+\beta t_j}\right\}dF(t_j).\tag{2.1}
\]
Let $p_\ell=dF(t_\ell)$, $\ell=1,\ldots,n$. The EL is simply the likelihood $L$ subject to the constraints $p_\ell\ge 0$, $\sum_{\ell=1}^{n}p_\ell=1$, and $\sum_{\ell=1}^{n}p_\ell(e^{\alpha+\beta t_\ell}-1)=0$. The corresponding log-EL is
\[
l=\sum_{\ell=1}^{n}\log p_\ell+\sum_{j=1}^{n_1}\log\left\{1-\lambda+\lambda e^{\alpha+\beta y_j}\right\}.\tag{2.2}
\]
We are interested in testing
\[
H_0\colon\ \lambda=0\quad\text{or}\quad(\alpha,\beta)=(0,0).\tag{2.3}
\]
Under the null hypothesis, the constraint $\sum_{\ell=1}^{n}p_\ell(e^{\alpha+\beta t_\ell}-1)=0$ always holds and $\sup_{H_0}l=-n\log n$. Under the alternative hypothesis, for any fixed $(\lambda,\alpha,\beta)$, maximizing $l$ with respect to the $p_\ell$'s leads to the log-EL function of $(\lambda,\alpha,\beta)$:
\[
l(\lambda,\alpha,\beta)=-\sum_{\ell=1}^{n}\log\left\{1+\xi\left(e^{\alpha+\beta t_\ell}-1\right)\right\}-n\log n+\sum_{j=1}^{n_1}\log\left\{1-\lambda+\lambda e^{\alpha+\beta y_j}\right\},\tag{2.4}
\]
where $\xi$ is the solution to the following equation:
\[
\sum_{\ell=1}^{n}\frac{e^{\alpha+\beta t_\ell}-1}{1+\xi\left(e^{\alpha+\beta t_\ell}-1\right)}=0.\tag{2.5}
\]
Hence, the EL ratio function is $R(\lambda,\alpha,\beta)=2\{l(\lambda,\alpha,\beta)+n\log n\}$ and the ELR is $R=\sup R(\lambda,\alpha,\beta)$.

The null hypothesis $H_0$ holds if $\lambda=0$ regardless of $(\alpha,\beta)$, or if $(\alpha,\beta)=(0,0)$ regardless of $\lambda$. This implies that the parameter $(\lambda,\alpha,\beta)$ is not identifiable under $H_0$, resulting in rather complicated asymptotic properties of the ELR. One may consider the modified or penalized likelihood method [14] and define the penalized log-EL function $pl(\lambda,\alpha,\beta)=l(\lambda,\alpha,\beta)+\log(\lambda)$. Accordingly, the penalized EL ratio function is
\[
pR(\lambda,\alpha,\beta)=2\{pl(\lambda,\alpha,\beta)-pl(1,0,0)\}
=-2\sum_{\ell=1}^{n}\log\left\{1+\xi\left(e^{\alpha+\beta t_\ell}-1\right)\right\}
+2\sum_{j=1}^{n_1}\log\left\{1-\lambda+\lambda e^{\alpha+\beta y_j}\right\}+2\log(\lambda),\tag{2.6}
\]
where $\xi$ is the solution to (2.5). The penalty function $\log(\lambda)$ goes to $-\infty$ as $\lambda$ approaches $0$. Therefore, $\lambda$ is bounded away from $0$, and the null hypothesis in (2.3) then reduces to $(\alpha,\beta)=(0,0)$; that is, the parameters in the penalized log-EL function are asymptotically identifiable. However, the asymptotic behavior of the penalized ELR test is still complicated. Moreover, the computation of the penalized ELR test statistic is another obstacle to implementing the penalized ELR method: no feasible and stable algorithm has been found for this purpose. The EL-based EM test proposed in this paper provides an efficient way to solve the problem.
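To make these quantities concrete, the following minimal sketch (an illustration only, not the authors' code) shows how $R(\lambda,\alpha,\beta)$ and $pR(\lambda,\alpha,\beta)$ can be evaluated at a fixed $(\lambda,\alpha,\beta)$: solve (2.5) for $\xi$ by one-dimensional root finding and substitute it into (2.6). It assumes NumPy/SciPy and two data arrays `x` and `y` holding the two samples.

```python
import numpy as np
from scipy.optimize import brentq

def solve_xi(alpha, beta, t):
    """Solve (2.5): sum_l (e^{a+b*t_l} - 1) / {1 + xi*(e^{a+b*t_l} - 1)} = 0."""
    e = np.expm1(alpha + beta * t)            # e^{alpha + beta*t_l} - 1
    if e.min() >= 0.0 or e.max() <= 0.0:      # 0 not inside the convex hull: no interior
        return 0.0                            # solution; a full implementation should flag this
    lo = -1.0 / e.max() + 1e-10               # interval on which all EL weights stay positive
    hi = -1.0 / e.min() - 1e-10
    return brentq(lambda xi: np.sum(e / (1.0 + xi * e)), lo, hi)

def pR(lam, alpha, beta, x, y):
    """Penalized EL ratio (2.6); dropping the 2*log(lam) term gives R(lam, alpha, beta)."""
    t = np.concatenate([x, y])
    xi = solve_xi(alpha, beta, t)
    e_t = np.expm1(alpha + beta * t)
    mix = 1.0 - lam + lam * np.exp(alpha + beta * y)
    return (-2.0 * np.sum(np.log1p(xi * e_t))
            + 2.0 * np.sum(np.log(mix))
            + 2.0 * np.log(lam))
```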

3. EL-Based EM Test

Motivated by Chen and Li [12] and Li et al. [13], we propose an EM test based on the penalized EL for the hypothesis (2.3). The EM test statistics are derived iteratively. We first choose a finite set $\Lambda=\{\lambda_1,\ldots,\lambda_L\}\subset(0,1]$, for instance $\Lambda=\{0.1,0.2,\ldots,0.9,1.0\}$, and a positive integer $K$ (2 or 3 in general). For each $l=1,\ldots,L$, we proceed with the following steps.

Step 1. Let $k=1$ and $\lambda_l^{(k)}=\lambda_l$. Calculate $(\alpha_l^{(k)},\beta_l^{(k)})=\arg\max_{\alpha,\beta}pR(\lambda_l^{(k)},\alpha,\beta)$.

Step 2. Update $(\lambda,\alpha,\beta)$ by applying the following algorithm $K-1$ times.
Substep 2.1. Calculate the posterior weights
\[
w_{jl}^{(k)}=\frac{\lambda_l^{(k)}\exp\left(\alpha_l^{(k)}+\beta_l^{(k)}y_j\right)}{1-\lambda_l^{(k)}+\lambda_l^{(k)}\exp\left(\alpha_l^{(k)}+\beta_l^{(k)}y_j\right)},\qquad j=1,\ldots,n_1,\tag{3.1}
\]
and update $\lambda$ by
\[
\lambda_l^{(k+1)}=\arg\max_{\lambda}\left[\sum_{j=1}^{n_1}\left(1-w_{jl}^{(k)}\right)\log(1-\lambda)+\sum_{j=1}^{n_1}w_{jl}^{(k)}\log(\lambda)+\log(\lambda)\right].\tag{3.2}
\]
Substep 2.2. Update $(\alpha,\beta)$ by $(\alpha_l^{(k+1)},\beta_l^{(k+1)})=\arg\max_{\alpha,\beta}pR(\lambda_l^{(k+1)},\alpha,\beta)$.
Substep 2.3. Let $k=k+1$ and continue.

Step 3. Define the test statistic $M_n^{(K)}(\lambda_l)=pR(\lambda_l^{(K)},\alpha_l^{(K)},\beta_l^{(K)})$.

The EM test statistic is defined as
\[
\mathrm{EM}_n^{(K)}=\max\left\{M_n^{(K)}(\lambda_l),\ l=1,\ldots,L\right\}.\tag{3.3}
\]
We reject the null hypothesis $H_0$ when the EM test statistic exceeds a critical value determined by the following limiting distribution.
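A compact sketch of Steps 1–3 is given below; it is an illustration of the scheme above rather than the authors' implementation. It assumes the function `pR` from the sketch in Section 2 and a helper `max_alpha_beta(lam, x, y)` that returns $\arg\max_{\alpha,\beta}pR(\lambda,\alpha,\beta)$ for fixed $\lambda$ (one way to construct such a helper is sketched in Section 4), and it uses the closed-form maximizer of (3.2) derived in the appendix.

```python
import numpy as np

def em_test(x, y, Lambda=np.arange(0.1, 1.01, 0.1), K=3):
    """EM test statistic EM_n^{(K)} in (3.3) for the two samples x and y."""
    stats = []
    for lam0 in Lambda:
        lam = lam0
        alpha, beta = max_alpha_beta(lam, x, y)           # Step 1
        for _ in range(K - 1):                            # Step 2, repeated K-1 times
            w = lam * np.exp(alpha + beta * y)
            w = w / (1.0 - lam + w)                       # posterior weights (3.1)
            lam = (w.sum() + 1.0) / (len(y) + 1.0)        # closed-form maximizer of (3.2)
            alpha, beta = max_alpha_beta(lam, x, y)       # Substep 2.2
        stats.append(pR(lam, alpha, beta, x, y))          # Step 3: M_n^{(K)}(lambda_l)
    return max(stats)                                     # EM_n^{(K)}
```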

Theorem 3.1. Suppose $\rho=n_1/n\in(0,1)$ is a constant. Assume the null hypothesis $H_0$ holds, and $E(t_\ell)=0$ and $\mathrm{Var}(t_\ell)=\sigma^2\in(0,\infty)$ for $\ell=1,\ldots,n$. For $l=1,\ldots,L$ and any fixed $k$, it holds that
\[
\lambda_l^{(k)}-\lambda_l=o_p(1),\qquad
\alpha_l^{(k)}=O_p\left(n^{-1}\right),\qquad
\beta_l^{(k)}=\frac{\bar y-\bar x}{\lambda_l\sigma^2}+o_p\left(n^{-1/2}\right),\tag{3.4}
\]
where $\bar x=(1/n_0)\sum_{i=1}^{n_0}x_i$ and $\bar y=(1/n_1)\sum_{j=1}^{n_1}y_j$.

Remark 3.2. The assumption $E(t_\ell)=0$ is only for convenience and is not necessary. Otherwise, we can replace $t_\ell$ and $\alpha$ with $t_\ell-E(t_\ell)$ and $\alpha+\beta E(t_\ell)$, respectively.

Theorem 3.3. Assume the conditions of Theorem 3.1 hold and $1\in\Lambda$. Under the null hypothesis (2.3), $\mathrm{EM}_n^{(K)}\to\chi^2_1$ in distribution as $n\to\infty$.
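In practice, Theorem 3.3 calibrates the test. For instance, assuming SciPy is available, the $P$ value of an observed statistic can be read off the upper tail of the $\chi^2_1$ distribution (the numeric value below is purely illustrative):

```python
from scipy.stats import chi2

em_stat = 3.84                        # illustrative value of EM_n^{(K)}
p_value = chi2.sf(em_stat, df=1)      # upper-tail probability under the chi^2_1 limit
print(round(p_value, 3))              # about 0.05
```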

We finish this section with an additional remark.

Remark 3.4. We point out that the idea of the EM test can also be generalized to more general models such as $\log(g(x)/f(x))=\alpha+\beta_1x+\cdots+\beta_kx^k$ for some integer $k$, or $\log(g(x)/f(x))=\alpha+\beta_1t_1(x)+\cdots+\beta_kt_k(x)$ with the $t_i(\cdot)$'s being known functions.

4. Computation of the EM Test

A key step of the EM test procedure is to maximize $pR(\lambda,\alpha,\beta)$ with respect to $(\alpha,\beta)$ for fixed $\lambda$. In this section, we propose a computational strategy that provides a stable solution to this optimization problem. Throughout this section, $\lambda$ is assumed to be fixed.

The objective function is $pR(\lambda,\alpha,\beta)=G(\xi,\alpha,\beta)$, where
\[
G(\xi,\alpha,\beta)=-2\sum_{\ell=1}^{n}\log\left\{1+\xi\left(e^{\alpha+\beta t_\ell}-1\right)\right\}
+2\sum_{j=1}^{n_1}\log\left\{1-\lambda+\lambda e^{\alpha+\beta y_j}\right\}+2\log(\lambda)\tag{4.1}
\]
and $\xi=\xi(\alpha,\beta)$ is the solution to
\[
\frac{\partial G}{\partial\xi}=-2\sum_{\ell=1}^{n}\frac{e^{\alpha+\beta t_\ell}-1}{1+\xi\left(e^{\alpha+\beta t_\ell}-1\right)}=0.\tag{4.2}
\]
If $(\alpha,\beta)$ is the maximum point of $pR(\lambda,\alpha,\beta)$, it should in general satisfy
\[
\frac{\partial G}{\partial\alpha}=-2\sum_{\ell=1}^{n}\frac{\xi e^{\alpha+\beta t_\ell}}{1+\xi\left(e^{\alpha+\beta t_\ell}-1\right)}
+2\sum_{j=1}^{n_1}\frac{\lambda e^{\alpha+\beta y_j}}{1-\lambda+\lambda e^{\alpha+\beta y_j}}=0.\tag{4.3}
\]
Combining (4.2) and (4.3) leads to
\[
\xi=\frac{1}{n}\sum_{j=1}^{n_1}\frac{\lambda e^{\alpha+\beta y_j}}{1-\lambda+\lambda e^{\alpha+\beta y_j}}.\tag{4.4}
\]
Substituting this expression for $\xi$ back into (4.1), we obtain a new function
\[
H(\alpha,\beta)=-2\sum_{\ell=1}^{n}\log\left\{1+\left(e^{\alpha+\beta t_\ell}-1\right)\frac{1}{n}\sum_{j=1}^{n_1}\frac{\lambda e^{\alpha+\beta y_j}}{1-\lambda+\lambda e^{\alpha+\beta y_j}}\right\}
+2\sum_{j=1}^{n_1}\log\left\{1-\lambda+\lambda e^{\alpha+\beta y_j}\right\}.\tag{4.5}
\]
It can be verified that $H(\alpha,\beta)$ is almost surely concave in a neighborhood of $(0,0)$ for given $\lambda$, so maximizing $H(\alpha,\beta)$ with respect to $(\alpha,\beta)$ yields the maximum of $pR(\lambda,\alpha,\beta)$ for fixed $\lambda$. The stability of the method is illustrated by the following simulation study.
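A minimal sketch of this strategy is given below, under the assumption that a generic two-dimensional optimizer from SciPy is acceptable; it can serve as the `max_alpha_beta` helper assumed in the EM-test sketch of Section 3. Since the $\xi$ in (4.4) lies in $(0,n_1/n)$ and $e^{\alpha+\beta t_\ell}-1>-1$, the logarithms in (4.5) are always well defined.

```python
import numpy as np
from scipy.optimize import minimize

def negH(params, lam, x, y):
    """-H(alpha, beta) in (4.5) for fixed lam (the constant penalty 2*log(lam) is omitted)."""
    alpha, beta = params
    t = np.concatenate([x, y])
    mix = 1.0 - lam + lam * np.exp(alpha + beta * y)            # 1 - lam + lam*e^{alpha+beta*y_j}
    xi = np.sum(lam * np.exp(alpha + beta * y) / mix) / len(t)  # closed form (4.4)
    e_t = np.expm1(alpha + beta * t)                            # e^{alpha+beta*t_l} - 1
    H = -2.0 * np.sum(np.log1p(xi * e_t)) + 2.0 * np.sum(np.log(mix))
    return -H

def max_alpha_beta(lam, x, y):
    """Maximize H(alpha, beta) for fixed lam; returns (alpha_hat, beta_hat)."""
    res = minimize(negH, x0=np.zeros(2), args=(lam, x, y), method="Nelder-Mead")
    return res.x[0], res.x[1]
```

At the maximizer, $pR(\lambda,\hat\alpha,\hat\beta)$ can then be recovered either as $H(\hat\alpha,\hat\beta)+2\log\lambda$ or, more conservatively, by re-solving (2.5) for $\xi$ as in the Section 2 sketch.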

5. Simulation Study

We consider the two models of Examples 1.3 and 1.4, with $\mu_1=0$, $\mu_2=\mu$, and $\sigma^2=1$ for Example 1.3, and $m_1=1$, $m_2=m$, and $\theta=1$ for Example 1.4. Nominal levels of 0.01, 0.05, and 0.10 are considered. The logarithm transformation is applied to the original data before the EM test is used. The initial set $\Lambda=\{0.1,0.2,\ldots,1\}$ and the iteration number $K=3$ are used to calculate the EM test statistic.
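For concreteness, data under the log-normal setting above can be generated as in the following sketch (an illustration only; the parameter values match those stated in the text, everything else is an assumption):

```python
import numpy as np

def simulate_lognormal(n0, n1, lam, mu, rng=None):
    """x_i ~ logN(0,1); each y_j ~ logN(mu,1) with probability lam, logN(0,1) otherwise."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.lognormal(mean=0.0, sigma=1.0, size=n0)
    from_g = rng.random(n1) < lam
    y = np.where(from_g,
                 rng.lognormal(mean=mu, sigma=1.0, size=n1),
                 rng.lognormal(mean=0.0, sigma=1.0, size=n1))
    # Log-transform before testing, so that the density ratio is linear (Example 1.3).
    return np.log(x), np.log(y)

# Illustrative usage: x, y = simulate_lognormal(50, 50, 0.2, 3.0); em = em_test(x, y)
```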

One competitive method for testing homogeneity under the semiparametric two-sample model is the score test proposed by Qin and Liang [1]. This method is based on
\[
S(\alpha,\beta)=\left.\frac{\partial l(\lambda,\alpha,\beta)}{\partial\lambda}\right|_{\lambda=0}
=\sum_{j=1}^{n_1}\left\{e^{\alpha+\beta y_j}-1\right\},\tag{5.1}
\]
where $l(\lambda,\alpha,\beta)$ is the log empirical likelihood function given in (2.4). Let $(\hat\alpha_1,\hat\beta_1)=\arg\max_{\alpha,\beta}l(1,\alpha,\beta)$. The score test statistic is defined as $T_1=S(\hat\alpha_1,\hat\beta_1)/(1+n_1/n_0)$, which has a $\chi^2_1$ limiting distribution under the null hypothesis.
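For comparison, a sketch of this score test (ours, not Qin and Liang's code) can reuse the `max_alpha_beta` helper from Section 4 with $\lambda=1$, since maximizing $pR(1,\alpha,\beta)$ is equivalent to maximizing $l(1,\alpha,\beta)$ and the penalty $2\log\lambda$ vanishes at $\lambda=1$:

```python
import numpy as np

def score_test(x, y):
    """Score statistic T_1, compared with chi^2_1 quantiles."""
    alpha1, beta1 = max_alpha_beta(1.0, x, y)     # argmax of l(1, alpha, beta)
    S = np.sum(np.expm1(alpha1 + beta1 * y))      # S(alpha_1, beta_1) in (5.1)
    return S / (1.0 + len(y) / len(x))            # T_1 = S / (1 + n_1/n_0)
```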

We compare the EM test and the score test in terms of type I error and power. We calculate the type I errors of each method under the null hypothesis based on 20,000 repetitions, and the powers under the alternative models based on 2,000 repetitions. For a fair comparison, simulated critical values are used to calculate the powers. We consider two sample sizes, $n=50$ and $n=200$, and $K=1,2,3$. Tables 1 and 2 contain the simulation results for the log-normal models, and Tables 3 and 4 those for the gamma models.

The results show that the EM test and the score test have similar type I errors. For both methods, the type I errors are somewhat larger than the nominal levels when the sample size is $n=50$; they are close to the nominal levels when the sample size is increased to $n=200$. For the log-normal models, the two methods have almost the same power when the two densities $f$ and $g$ are close to each other, such as $\mu=1$; the EM test becomes much more powerful when the alternatives are distant and the sample size increases. In the case of $n=50$, $\lambda=0.2$, $\mu=3$, and nominal level 0.01, the EM test has a 10% gain in power compared with the score test; the gain rises to almost 30% when $\lambda=0.1$, $\mu=3$, and the sample size increases to $n=200$. For the gamma models, the advantage of the EM test is even more obvious: for both sample sizes $n=50$ and $n=200$, the EM test is more powerful than the score test.

6. Real Example

We apply our EM test procedure to the drug abuse data [15] from a study of addiction to morphine in rats. In this study, rats received morphine by pressing a lever, and the frequency of lever presses (self-injection rate) after a six-day treatment with morphine was recorded as the response variable. The data consist of the numbers of lever presses for five groups of rats: four treatment groups with different dose levels and one saline group (control group).

We analyze the response variables (the numbers of lever presses) of the treatment group at the first dose level and of the control group. The data are tabulated in Table 3 of Fu et al. [5]. Following Boos and Brownie [16] and Fu et al. [5], we analyze the transformed data $\log_{10}(R+1)$, with $R$ being the number of lever presses. Instead of using parametric models as in Boos and Brownie [16] and Fu et al. [5], we adopt Anderson's semiparametric approach. That is, we assume that the response variable in the control group comes from $f(x)$, while the response variable in the treatment group comes from $h(x)=(1-\lambda)f(x)+\lambda g(x)$ with $g(x)/f(x)=\exp(\alpha+\beta x)$. The EM test statistics for testing homogeneity under the semiparametric two-sample model are found to be $\mathrm{EM}_n^{(1)}=14.090$, $\mathrm{EM}_n^{(2)}=14.150$, and $\mathrm{EM}_n^{(3)}=14.167$. Calibrated by the $\chi^2_1$ limiting distribution, the $P$ values are all around 0.02%. We also applied the score test of Qin and Liang [1]; the score test statistic is 9.417, with a $P$ value of 0.2% calibrated by the $\chi^2_1$ limiting distribution. We also used permutation methods to obtain the $P$ values of the two types of tests. Based on 50,000 permutations, the $P$ values of the three EM test statistics are all around 0.03%, and the $P$ value of the score test is around 0.5%. In accordance with Fu et al. [5], both methods suggest a significant treatment effect, while the proposed EM test gives much stronger evidence than the score test.
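The $\chi^2_1$-calibrated $P$ values quoted above can be reproduced directly from the reported statistics; a small check, assuming SciPy:

```python
from scipy.stats import chi2

for name, stat in [("EM_n^(1)", 14.090), ("EM_n^(2)", 14.150),
                   ("EM_n^(3)", 14.167), ("score", 9.417)]:
    print(name, chi2.sf(stat, df=1))   # about 1.7e-4 (0.02%) for the EM statistics,
                                       # about 2.1e-3 (0.2%) for the score statistic
```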

Appendix

Proofs

The proofs of Theorems 3.1 and 3.3 are based on the three lemmas given below. Lemma A.1 establishes the order of the maximum empirical likelihood estimators of $\alpha$ and $\beta$ with $\lambda$ bounded away from 0 under the null hypothesis. Lemma A.2 shows that the EM iteration changes the value of $\lambda$ only by an amount of order $o_p(1)$. Theorem 3.1 is then proved by iteratively applying Lemmas A.1 and A.2. Lemma A.3 gives an approximation of the penalized ELR for any $\lambda$ bounded away from 0, based on which we prove Theorem 3.3.

Lemma A.1. Assume the conditions of Theorem 3.1 hold. Let $\lambda\in[\epsilon,1]$ for some constant $\epsilon>0$ and $(\hat\alpha,\hat\beta)=\arg\max_{\alpha,\beta}pR(\lambda,\alpha,\beta)$. Then, we have
\[
\hat\alpha=O_p\left(n^{-1}\right),\qquad
\hat\beta=\frac{\bar y-\bar x}{\lambda\sigma^2}+o_p\left(n^{-1/2}\right)\tag{A.1}
\]
with $\bar x=(1/n_0)\sum_{i=1}^{n_0}x_i$ and $\bar y=(1/n_1)\sum_{j=1}^{n_1}y_j$.

Proof. Since $\lambda\ge\epsilon>0$, the parameters $(\alpha,\beta)$ in the empirical likelihood ratio are identifiable. Therefore, $(\hat\alpha,\hat\beta)$ is $\sqrt n$-consistent for the true value $(0,0)$; that is, $\hat\alpha=O_p(n^{-1/2})$ and $\hat\beta=O_p(n^{-1/2})$ [10].
Following the arguments in Section 4, the maximum empirical likelihood estimator $(\hat\alpha,\hat\beta)$ should satisfy (here the fixed $\lambda$ is suppressed in the notation)
\[
\frac{\partial G(\hat\xi,\hat\alpha,\hat\beta)}{\partial\alpha}
=-2\sum_{\ell=1}^{n}\frac{\hat\xi e^{\hat\alpha+\hat\beta t_\ell}}{1+\hat\xi\left(e^{\hat\alpha+\hat\beta t_\ell}-1\right)}
+2\sum_{j=1}^{n_1}\frac{\lambda e^{\hat\alpha+\hat\beta y_j}}{1-\lambda+\lambda e^{\hat\alpha+\hat\beta y_j}}=0,\tag{A.2}
\]
\[
\frac{\partial G(\hat\xi,\hat\alpha,\hat\beta)}{\partial\beta}
=-2\sum_{\ell=1}^{n}\frac{\hat\xi e^{\hat\alpha+\hat\beta t_\ell}t_\ell}{1+\hat\xi\left(e^{\hat\alpha+\hat\beta t_\ell}-1\right)}
+2\sum_{j=1}^{n_1}\frac{\lambda e^{\hat\alpha+\hat\beta y_j}y_j}{1-\lambda+\lambda e^{\hat\alpha+\hat\beta y_j}}=0\tag{A.3}
\]
with
\[
\hat\xi=\frac{1}{n}\sum_{j=1}^{n_1}\frac{\lambda e^{\hat\alpha+\hat\beta y_j}}{1-\lambda+\lambda e^{\hat\alpha+\hat\beta y_j}}.\tag{A.4}
\]
Applying a Taylor expansion to the right-hand side of (A.4), we get
\[
\hat\xi=\frac{n_1}{n}\lambda+o_p(1).\tag{A.5}
\]
Further applying a first-order Taylor expansion to (A.2) and using (A.4), we get
\[
n\hat\xi\left(1-\hat\xi\right)\hat\alpha+\hat\xi\left(1-\hat\xi\right)\sum_{\ell=1}^{n}t_\ell\hat\beta
-n_1\lambda(1-\lambda)\hat\alpha-\lambda(1-\lambda)\sum_{j=1}^{n_1}y_j\hat\beta
=O_p(n)\left(\hat\alpha^2+\hat\beta^2\right).\tag{A.6}
\]
Note that both $\hat\alpha$ and $\hat\beta$ are of order $O_p(n^{-1/2})$ and that both $\sum_{\ell=1}^{n}t_\ell$ and $\sum_{j=1}^{n_1}y_j$ are of order $O_p(n^{1/2})$. Combining (A.5) and (A.6) yields $n_1\lambda(\lambda n_1/n-\lambda)\hat\alpha=O_p(1)$. Therefore, $\hat\alpha=O_p(n^{-1})$.
Similarly, a first-order Taylor expansion of (A.3) results in
\[
0=-\hat\xi\sum_{\ell=1}^{n}t_\ell-\hat\xi\left(1-\hat\xi\right)\sum_{\ell=1}^{n}t_\ell\hat\alpha
-\hat\xi\left(1-\hat\xi\right)\sum_{\ell=1}^{n}t_\ell^2\hat\beta
+\lambda\sum_{j=1}^{n_1}y_j+\lambda(1-\lambda)\sum_{j=1}^{n_1}y_j\hat\alpha
+\lambda(1-\lambda)\sum_{j=1}^{n_1}y_j^2\hat\beta
+O_p(n)\left(\hat\alpha^2+\hat\beta^2\right).\tag{A.7}
\]
With the same reasoning as for $\hat\alpha$, it follows from (A.7) that
\[
\left\{n_1\lambda\left(1-\frac{n_1}{n}\lambda\right)\sigma^2-n_1\lambda(1-\lambda)\sigma^2\right\}\hat\beta
=\lambda\sum_{j=1}^{n_1}y_j-\frac{n_1}{n}\lambda\sum_{\ell=1}^{n}t_\ell+o_p\left(n^{1/2}\right).\tag{A.8}
\]
After some algebra, we have $\hat\beta=(\bar y-\bar x)/(\lambda\sigma^2)+o_p(n^{-1/2})$, which completes the proof.

Suppose that $\lambda$, $\hat\alpha$, and $\hat\beta$ have the properties given in Lemma A.1. For $j=1,\ldots,n_1$, let $w_j=\lambda\exp(\hat\alpha+\hat\beta y_j)/\{1-\lambda+\lambda\exp(\hat\alpha+\hat\beta y_j)\}$. The updated value of $\lambda$ is
\[
\lambda^{*}=\arg\max_{\lambda}\left[\sum_{j=1}^{n_1}\left(1-w_j\right)\log(1-\lambda)+\sum_{j=1}^{n_1}w_j\log(\lambda)+\log(\lambda)\right].\tag{A.9}
\]
Setting the derivative of the objective function in (A.9) with respect to $\lambda$ to zero gives $-\sum_{j=1}^{n_1}(1-w_j)/(1-\lambda)+\{\sum_{j=1}^{n_1}w_j+1\}/\lambda=0$, that is, $\lambda(n_1+1)=\sum_{j=1}^{n_1}w_j+1$; hence $\lambda^{*}$ has the closed form $\lambda^{*}=\{1/(n_1+1)\}(\sum_{j=1}^{n_1}w_j+1)$. We now show that this iteration changes the value of $\lambda$ only by an $o_p(1)$ term.

Lemma A.2. Assume the conditions of Lemma A.1 hold. Then, $\lambda^{*}=\lambda+o_p(1)$.

Proof. Let $\hat\lambda=\sum_{j=1}^{n_1}w_j/n_1$. According to Lemma A.1, $\hat\alpha=o_p(1)$ and $\hat\beta=o_p(1)$. Applying a first-order Taylor expansion, we have
\[
\hat\lambda=\frac{1}{n_1}\sum_{j=1}^{n_1}\frac{\lambda\exp\left(\hat\alpha+\hat\beta y_j\right)}{1-\lambda+\lambda\exp\left(\hat\alpha+\hat\beta y_j\right)}
=\lambda+O_p(1)\left(\hat\alpha+\hat\beta\right)=\lambda+o_p(1).\tag{A.10}
\]
Some simple algebra shows that
\[
\lambda^{*}-\hat\lambda=\frac{1-\hat\lambda}{n_1+1}=o_p(1).\tag{A.11}
\]
Therefore, $\lambda^{*}=\lambda+o_p(1)$, and this finishes the proof.

Proof of Theorem 3.1. With the above two technical lemmas, the proof is the same as that of Theorem  1 in Li et al. [13] and therefore is omitted.

The next lemma is a technical preparation for proving Theorem 3.3. It investigates the asymptotic approximation of the penalized ELR for any 𝜆 bounded away from 0.

Lemma A.3. Assume the conditions of Theorem 3.1 hold and $\lambda\in[\epsilon,1]$ for some $\epsilon>0$. Then,
\[
pR\left(\lambda,\hat\alpha,\hat\beta\right)=\frac{n\rho(1-\rho)}{\sigma^2}\left(\bar y-\bar x\right)^2+2\log\lambda+o_p(1).\tag{A.12}
\]

Proof. With Lemma A.1, we have $\hat\alpha=O_p(n^{-1})$ and $\hat\beta=O_p(n^{-1/2})$. Applying a second-order Taylor expansion to $pR(\lambda,\hat\alpha,\hat\beta)$ and noting that $\partial pR/\partial\alpha|_{(\alpha,\beta)=(0,0)}=0$, we have
\[
pR\left(\lambda,\hat\alpha,\hat\beta\right)
=2\left\{-\hat\xi\sum_{\ell=1}^{n}t_\ell+\lambda\sum_{j=1}^{n_1}y_j\right\}\hat\beta
-\left\{\hat\xi\left(1-\hat\xi\right)\sum_{\ell=1}^{n}t_\ell^2-\lambda(1-\lambda)\sum_{j=1}^{n_1}y_j^2\right\}\hat\beta^2
+2\log\lambda+o_p(1).\tag{A.13}
\]
Using (A.5) and the facts that both $\sum_{\ell=1}^{n}t_\ell^2/n$ and $\sum_{j=1}^{n_1}y_j^2/n_1$ converge to $\sigma^2$ in probability, the above expression simplifies to
\[
pR\left(\lambda,\hat\alpha,\hat\beta\right)
=2\frac{n_0n_1}{n}\lambda\left(\bar y-\bar x\right)\hat\beta
-\frac{n_0n_1}{n}\lambda^2\sigma^2\hat\beta^2
+2\log\lambda+o_p(1).\tag{A.14}
\]
Plugging in the approximation $\hat\beta=(\bar y-\bar x)/(\lambda\sigma^2)+o_p(n^{-1/2})$, we get
\[
pR\left(\lambda,\hat\alpha,\hat\beta\right)
=\frac{n_0n_1}{n}\frac{\left(\bar y-\bar x\right)^2}{\sigma^2}+2\log\lambda+o_p(1)
=\frac{n\rho(1-\rho)}{\sigma^2}\left(\bar y-\bar x\right)^2+2\log\lambda+o_p(1).\tag{A.15}
\]
This completes the proof.

Proof of Theorem 3.3. Without loss of generality, we assume $0<\lambda_1<\lambda_2<\cdots<\lambda_L=1$. According to Theorem 3.1 and Lemma A.3, for $l=1,\ldots,L$ we have
\[
pR\left(\lambda_l^{(K)},\alpha_l^{(K)},\beta_l^{(K)}\right)
=\frac{n\rho(1-\rho)}{\sigma^2}\left(\bar y-\bar x\right)^2+2\log\lambda_l+o_p(1).\tag{A.16}
\]
This leads to
\[
\mathrm{EM}_n^{(K)}=\max_{1\le l\le L}pR\left(\lambda_l^{(K)},\alpha_l^{(K)},\beta_l^{(K)}\right)
=\frac{n\rho(1-\rho)}{\sigma^2}\left(\bar y-\bar x\right)^2+o_p(1),\tag{A.17}
\]
where the remainder is still $o_p(1)$ because the maximum is taken over a finite set.
Note that, as $n$ tends to infinity, $\sqrt n\left(\bar y-\bar x\right)\to N\left(0,\sigma^2/\{\rho(1-\rho)\}\right)$ in distribution. Therefore,
\[
\mathrm{EM}_n^{(K)}\to\chi^2_1\tag{A.18}
\]
in distribution as $n$ goes to infinity. This completes the proof.