Abstract

This paper presents an upper bound for each of the generalized p values for testing one population variance, the difference between two population variances, and the ratio of two population variances for lognormal distributions when the coefficients of variation are known. For each of the proposed generalized p values, we derive a closed-form expression for its upper bound. Numerical computations illustrate the theoretical results.

1. Introduction

The problem of statistical inference for population variances has been widely discussed by various authors; see, for example, Singh et al. [1], Agrawal and Sthapit [2], Arcos Cebrián and Rueda García [3], and Arcos et al. [4]. Kadilar and Cingi [5] proposed some ratio estimators for the population variance in simple and stratified random sampling. Cojbasic and Tomovic [6] proposed bootstrap methods to construct confidence intervals for the population variance of one sample and for the difference between the variances of two samples. Cojbasic and Loncar [7] proposed a one-sided bootstrap method to construct confidence intervals for the population variance of skewed distributions. Rajic et al. [8] proposed a new method, based on t-statistics and the bootstrap, for testing one population variance and the difference between the variances of two samples. Singh and Malik [9] proposed a family of estimators for the population variance using auxiliary attributes. In this paper, we use the generalized p values, proposed by Tsui and Weerahandi [10] and Weerahandi [11], to construct new generalized p values for testing one population variance, the difference between two population variances, and the ratio of two population variances of lognormal distributions when the coefficients of variation are known. This problem is analogous to the Behrens-Fisher problem; see, for example, Tang and Tsui [12] and Somkhuean et al. [13]. Section 2 outlines the basic steps used to construct a generalized p value for testing a hypothesis of this kind. The derivation of each of the upper bounds mentioned above is presented in Section 3. Numerical results are given in Section 4, and the conclusion is presented in Section 5.

2. Generalized Values

The concept of generalized p values was introduced by Tsui and Weerahandi [10] and Weerahandi [11]. We briefly review it as follows.

Let $X$ be a random variable with density function $f(x \mid \zeta)$, where $\zeta = (\theta, \delta)$, $\theta$ is the parameter of interest, and $\delta$ is a nuisance parameter.

Suppose we want to test
$$H_0: \theta \le \theta_0 \quad \text{versus} \quad H_1: \theta > \theta_0,$$
where $\theta_0$ is a specified quantity. Let $x$ be a particular observed sample. The generalized test variable $T(X; x, \zeta)$ is required to satisfy the following conditions:
(A1) For fixed $x$ and $\zeta$, the distribution of $T(X; x, \zeta)$ is free of the nuisance parameter $\delta$.
(A2) The observed value $T(x; x, \zeta)$ is free of any unknown parameter.
(A3) $T(X; x, \zeta)$ is either stochastically increasing or decreasing in $\theta$ for any given $x$ and fixed values of $\delta$.

Under the above conditions, if $T(X; x, \zeta)$ is a stochastically increasing test variable, then the subset of the sample space on which $T$ is at least as large as its observed value is an extreme region. For the one-sided hypothesis given above, the data-based extreme region is of the form
$$C(x, \zeta) = \{X : T(X; x, \zeta) \ge T(x; x, \zeta)\}.$$
Given the observed sample $x$, the generalized p value is defined as
$$p = \sup_{\theta \le \theta_0} P\bigl(X \in C(x, \zeta)\bigr) = P\bigl(T(X; x, \zeta) \ge T(x; x, \zeta) \mid \theta = \theta_0\bigr).$$
For further details and several applications based on the generalized p value, we refer to the book by Weerahandi [11].
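To make the construction concrete, the following sketch (in Python, with names of our own choosing) estimates a generalized p value by Monte Carlo for the simplest textbook case, testing $H_0: \sigma^2 \le \sigma_0^2$ for a normal sample; it illustrates only the general recipe, not the lognormal problem treated below.

# A minimal Monte Carlo sketch of the generalized p value recipe for testing
# H0: sigma^2 <= sigma0^2 with a normal sample (our own illustrative example,
# not the lognormal problem of this paper).  The generalized test variable is
# T = (n - 1) * s2_obs / U with U ~ chi^2_{n-1} standing in for (n-1)S^2/sigma^2,
# so the observed value of T is sigma^2 and a small value of P(T <= sigma0^2)
# is evidence against H0.
import numpy as np
from scipy import stats

def generalized_p_value(s2_obs, n, sigma0_sq, n_sim=200_000, seed=1):
    rng = np.random.default_rng(seed)
    U = rng.chisquare(df=n - 1, size=n_sim)
    T = (n - 1) * s2_obs / U
    return float(np.mean(T <= sigma0_sq))

# For this simple example the generalized p value coincides with the classical
# chi-square p value, which gives a direct check of the Monte Carlo estimate.
s2_obs, n, sigma0_sq = 2.5, 20, 1.5
print(generalized_p_value(s2_obs, n, sigma0_sq))
print(stats.chi2.sf((n - 1) * s2_obs / sigma0_sq, df=n - 1))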

Moreover, Tsui and Weerahandi [10] used the generalized p value for the Behrens-Fisher problem of testing the difference between the means of two independent normal distributions with possibly unequal variances. Later, Tang and Tsui [12] extended the work of Weerahandi [11] and of Gamage and Weerahandi [14] to derive a closed-form upper bound for the generalized p value (see also, e.g., Kabaila and Lloyd [15]).

In this paper, we extend the work of Tang and Tsui [12] to find an upper bound for each of the generalized p values for testing one population variance, the difference between two population variances, and the ratio of two population variances of lognormal distributions with known coefficients of variation.

3. Main Results for the Population Variance of Lognormal Distributions with Known Coefficients of Variation

Let $X_{ij}$, for $i = 1, 2$ and $j = 1, 2, \ldots, n_i$, be random samples from lognormal distributions and let $Y_{ij} = \ln X_{ij} \sim N(\mu_i, \sigma_i^2)$, where $\mu_i$ and $\sigma_i^2$ denote the mean and variance of $Y_{ij}$, respectively. In particular, the mean, variance, and coefficient of variation of the lognormal distribution are, respectively, given by
$$E(X_{ij}) = e^{\mu_i + \sigma_i^2/2}, \qquad \operatorname{Var}(X_{ij}) = \bigl(e^{\sigma_i^2} - 1\bigr)e^{2\mu_i + \sigma_i^2}, \qquad \tau_i = \sqrt{e^{\sigma_i^2} - 1},$$
where $\tau_i$ denotes the coefficient of variation of $X_{ij}$, which is computed from $\tau_i = \sqrt{\operatorname{Var}(X_{ij})}/E(X_{ij})$.
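As a quick sanity check on these standard lognormal moment identities (and on the consequence that a known coefficient of variation $\tau_i$ determines the log-scale variance through $\sigma_i^2 = \ln(1 + \tau_i^2)$), the following short simulation may be helpful; the parameter values are arbitrary and the check is ours, not part of the original paper.

# Simulation check of the lognormal moment identities quoted above.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.2, 0.6
x = rng.lognormal(mean=mu, sigma=sigma, size=2_000_000)

mean_theory = np.exp(mu + sigma**2 / 2)
var_theory = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)
cv_theory = np.sqrt(np.exp(sigma**2) - 1)

print(x.mean(), mean_theory)               # empirical vs. theoretical mean
print(x.var(), var_theory)                 # empirical vs. theoretical variance
print(x.std() / x.mean(), cv_theory)       # empirical vs. theoretical coefficient of variation
print(np.log(1 + cv_theory**2), sigma**2)  # known CV recovers the log-scale variance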

It is easy to see that
$$\operatorname{Var}(X_{ij}) = \tau_i^2 e^{2\mu_i + \sigma_i^2},$$
and the parameters of interest are the one population variance $\operatorname{Var}(X_{1j})$, the difference $\operatorname{Var}(X_{1j}) - \operatorname{Var}(X_{2j})$, and the ratio $\operatorname{Var}(X_{1j})/\operatorname{Var}(X_{2j})$. For testing the null hypotheses concerning these three parameters (stated in Cases 1–3 below), the sufficient statistics are the log-scale sample means and variances $(\bar Y_i, S_{Y_i}^2)$, where
$$\bar Y_i = \frac{1}{n_i}\sum_{j=1}^{n_i} Y_{ij}, \qquad S_{Y_i}^2 = \frac{1}{n_i - 1}\sum_{j=1}^{n_i} \bigl(Y_{ij} - \bar Y_i\bigr)^2.$$
It is known that the following statistics are independent:
$$\bar Y_i \sim N\!\left(\mu_i, \frac{\sigma_i^2}{n_i}\right), \qquad \frac{(n_i - 1)S_{Y_i}^2}{\sigma_i^2} \sim \chi^2_{n_i - 1}, \qquad i = 1, 2.$$
We denote by $y_1$ and $y_2$ the vectors of the observed samples and let $(\bar y_i, s_{y_i}^2)$ be the observed value of the sufficient statistic $(\bar Y_i, S_{Y_i}^2)$. Following Tang and Tsui [12] and Somkhuean et al. [13], under repeated sampling, $(\bar Y_i, S_{Y_i}^2)$ follows the same probability distributions as in (9).
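Everything that follows requires only the ability to reproduce the repeated-sampling distributions in (9) given the observed data. A minimal simulation sketch, written in our own notation and assuming the standard normal-theory results stated above, is as follows.

# Simulate the repeated-sampling distribution of the log-scale sufficient
# statistics (Ybar_i, S_i^2) for one sample, under the standard results in (9)
# as reconstructed above (notation is ours).
import numpy as np

def simulate_sufficient_stats(mu, sigma2, n, n_sim=100_000, seed=0):
    rng = np.random.default_rng(seed)
    ybar = rng.normal(loc=mu, scale=np.sqrt(sigma2 / n), size=n_sim)  # Ybar ~ N(mu, sigma^2/n)
    s2 = sigma2 * rng.chisquare(df=n - 1, size=n_sim) / (n - 1)       # (n-1)S^2/sigma^2 ~ chi^2_{n-1}
    return ybar, s2                                                   # independent by construction

ybar, s2 = simulate_sufficient_stats(mu=1.0, sigma2=np.log(1 + 0.5**2), n=15)  # e.g., CV = 0.5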

Case  1. The hypothesis to be tested is given in (10). The parameter of interest, the one population variance of a lognormal distribution when the coefficient of variation is known, is given in (11). The generalized test variable for this parameter is given in (12).

It is easy to see that the generalized test variable in (12) satisfies conditions (A1)–(A3) in Section 2.

The generalized p value is defined, under the null hypothesis, by (13). Following (13), the generalized p value for (10) can be written as (14), where $E$ denotes an expectation operator and $\Phi$ is the cdf of the standard normal distribution.
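The exact expression in (14) could not be reproduced here, but its stated form, an expectation of a standard normal cdf taken with respect to an auxiliary random variable, can be evaluated either by Monte Carlo or by one-dimensional quadrature. The sketch below does the latter for a generic integrand g and a chi-square mixing variable; both g and the degrees of freedom are hypothetical placeholders, not the quantities derived in (14).

# Numerically evaluate a quantity of the generic form E_V[ Phi( g(V) ) ], where
# V ~ chi^2_df, by one-dimensional quadrature.  The integrand g below is a
# hypothetical placeholder, not the expression actually appearing in (14).
import numpy as np
from scipy import stats, integrate

def expected_normal_cdf(g, df):
    integrand = lambda v: stats.norm.cdf(g(v)) * stats.chi2.pdf(v, df)
    value, _ = integrate.quad(integrand, 0.0, np.inf)
    return value

# Placeholder integrand, purely for illustration.
print(expected_normal_cdf(lambda v: 1.0 - np.sqrt(v / 10.0), df=10))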

Theorem 1. If then is a convex function of where is - distribution with degrees of freedom.

Proof. See Appendix.

Theorem 2. The upper bound of in (14) takes the form , for , , where is a cdf of -distribution function with degrees of freedom and is a cdf of the standard normal distribution.

Proof. See Appendix.

Case  2. The hypothesis to be tested is given in (15). The parameter of interest, the difference between the two population variances of the lognormal distributions, is given in (16). Without loss of generality, suppose that the two samples are labeled accordingly. The generalized test variable for this parameter is given in (18). It is easy to see that the generalized test variable in (18) satisfies conditions (A1)–(A3) in Section 2.

The generalized p value for Case 2 is defined, under the null hypothesis, by (19). Following (19), the generalized p value for (15) can be written as (20), where $E$ denotes an expectation operator and the cdf appearing in (20) is that of a distribution with the indicated degrees of freedom.
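For the two-sample cases, the p value in (20) (and similarly in (26) below) is again an expectation, now taken over statistics from both samples. Since the exact integrand was lost, the sketch below only illustrates the generic computation: a Monte Carlo average of a cdf evaluated at a function of two independent chi-square variables, with a hypothetical integrand h.

# Monte Carlo evaluation of a quantity of the generic two-sample form
# E[ G( h(V1, V2) ) ] with independent V1 ~ chi^2_{n1-1} and V2 ~ chi^2_{n2-1}.
# Here G is taken to be the standard normal cdf and h is a hypothetical
# placeholder; neither is the exact quantity appearing in (20).
import numpy as np
from scipy import stats

def two_sample_expectation(h, n1, n2, n_sim=200_000, seed=2):
    rng = np.random.default_rng(seed)
    v1 = rng.chisquare(df=n1 - 1, size=n_sim)
    v2 = rng.chisquare(df=n2 - 1, size=n_sim)
    return float(np.mean(stats.norm.cdf(h(v1, v2))))

# Placeholder integrand, purely for illustration.
print(two_sample_expectation(lambda v1, v2: (v1 - v2) / np.sqrt(v1 + v2), n1=15, n2=20))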

Theorem 3. If then for fixed and , is a convex function for .

Proof. See Appendix.

Theorem 4. If and , , are independent random variables such that , , , then is a convex function for .

Proof. See Appendix.

Theorem 5. The upper bound of is for , , where is the -distribution function with degrees of freedom and is the inverse function of .

Proof. See Appendix.

Case  3. The hypothesis to be tested is given in (21). The parameter of interest, the ratio of the two population variances of the lognormal distributions, is given in (22). The generalized test variable for this parameter is given in (24). It is easy to see that the generalized test variable in (24) satisfies conditions (A1)–(A3) in Section 2.

The generalized p value for Case 3 is defined, under the null hypothesis, by (25). Following (25), the generalized p value for (21) can be written as (26), where $E$ denotes an expectation operator and the cdf appearing in (26) is that of a distribution with the indicated degrees of freedom.

Theorem 6. The upper bound of is for , , where is the -distribution function with degrees of freedom and is the inverse function of .

Proof. The proof is similar to that of Theorem 5.

4. Numerical Results

In this section, we used functions written in the statistical software of [16] to compute the values of the upper bounds of the generalized p values proposed in Theorems 2, 5, and 6. For given values of the sample sizes, the coefficients of variation, and the hypothesized parameter values, we computed the upper bounds of the three generalized p values using the results of Theorems 2, 5, and 6; the results are shown in Tables 1–3. As these tables show, the upper bounds of the generalized p values depend mainly on the sample sizes, the coefficients of variation, and the hypothesized parameter values, and the computed values are consistent with the bounds established in Theorems 2, 5, and 6.
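As an indication of how tables such as Tables 1–3 can be produced, the sketch below tabulates a bound of the generic expectation form used above over a small grid of sample sizes and hypothesized values; the function bound() is a hypothetical placeholder standing in for the closed-form expressions of Theorems 2, 5, and 6, which are not reproduced here.

# Tabulate a generic upper-bound expression over a grid of sample sizes and
# hypothesized values.  The function bound() below is a hypothetical placeholder
# for the closed-form bounds of Theorems 2, 5, and 6, which we could not
# reproduce from the text.
import numpy as np
from scipy import stats, integrate

def bound(n, theta0):
    # Placeholder: E_V[ Phi( theta0 - sqrt(V/(n-1)) ) ] with V ~ chi^2_{n-1}.
    integrand = lambda v: stats.norm.cdf(theta0 - np.sqrt(v / (n - 1))) * stats.chi2.pdf(v, n - 1)
    return integrate.quad(integrand, 0.0, np.inf)[0]

for n in (10, 30, 50):
    row = [round(bound(n, theta0), 6) for theta0 in (0.5, 1.0, 1.5)]
    print(n, row)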

5. Conclusion

We proposed three new generalized p values for testing hypotheses about one population variance, the difference between two population variances, and the ratio of two population variances of lognormal distributions when the coefficients of variation are known. We also proved new upper bounds for the proposed generalized p values. We note that the results for Cases 1, 2, and 3 are analogous to the upper bound of the generalized p value for the Behrens-Fisher problem proposed by Tang and Tsui [12]. The numerical results in Tables 1–3 agree with the upper bounds established in Theorems 2, 5, and 6; we also found that the proposed upper bounds increase with the relevant parameter values. For example, in one of the settings of Table 1, the upper bound of the p value obtained from Theorem 2 is 0.022261291, and this upper bound approaches a limiting value as the corresponding parameter increases. Similar results hold for the other cases. For the two-tailed versions of the tests, it is straightforward to apply the corresponding theorem of Tang and Tsui [12] to all of the hypotheses in this paper, so we omit the details.

Appendix

Proof of Theorem 1. Defining , we have . Let be the probability density function of .
Hence For , . Hence and
Moreover Hence , and is convex in .

Proof of Theorem 2. Denote and .
From (14), we have For any and , hence, by Theorem 1, we have For , we have

Proof of Theorem 3. Define , , and let be the probability density function of the -distribution with degrees of freedom. Hence For , implies that , , and .
We have We have . Hence is a convex function for .

Proof of Theorem 4. where is the cdf of a standard normal distribution.
Let , , and let be the probability density function of the standard normal distribution. Hence We have We have . Hence is a convex function for . As a result, is convex in .

Proof of Theorem 5. Denote From (24), for any and , hence, by Theorem 1, such that For , we have

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The second author gratefully acknowledges Grant no. KMUTNB-GOV-59-36 from King Mongkut’s University of Technology North Bangkok.