Abstract

We give a multivariate generalization of the Macdonald distribution and study several of its properties. We also define the multivariate Macdonald-gamma distribution and derive a number of results pertaining to it.

1. Introduction

The three-parameter Macdonald distribution (Nagar et al. [1, 2]) is defined by the probability density function (p.d.f.)

f(x; α, β, λ) = (λ^β x^(β−1) / (Γ(α) Γ(β))) Γ(α − β; λx),  x > 0, (1)

where α > 0, β > 0, λ > 0, and the extended gamma function, Γ(α; σ), is defined as

Γ(α; σ) = ∫_0^∞ t^(α−1) exp(−t − σ/t) dt,  Re(σ) > 0. (2)

For Re(α) > 0 and by taking σ = 0, it is clear that the extended gamma function reduces to the classical gamma function, Γ(α; 0) = Γ(α). The extended gamma function has proved very useful in various problems in engineering and physics; see, for example, Aslam Chaudhry and Zubair [38].

We will write X ~ M(α, β, λ) if the random variable X follows the Macdonald distribution with the p.d.f. (1). If λ = 1 in the density above, then we will simply write X ~ M(α, β). It has been shown in Nagar et al. [9] that the product of two independent gamma variables follows a Macdonald distribution. A random variable X is said to have a two-parameter gamma distribution, denoted by X ~ Ga(α, θ), if its p.d.f. is given by

f(x; α, θ) = x^(α−1) exp(−x/θ) / (θ^α Γ(α)),  x > 0, α > 0, θ > 0. (3)

Note that for θ = 1 the above distribution reduces to a standard gamma distribution, and in this case we write X ~ Ga(α). There are several univariate, multivariate, and matrix variate generalizations of the gamma distribution; see, for example, standard texts such as Johnson et al. [10], Kotz et al. [11], and Gupta and Nagar [12]. Kalla et al. [13], by using the generalized gamma function defined and studied by Al-Musallam and Kalla [14, 15], have defined a generalization of the gamma distribution which includes a number of distributions as special cases. In a recent article, Gupta et al. [16] have generalized the matrix variate gamma distribution. If X ~ Ga(α) and Y ~ Ga(β) are independent, then XY ~ M(α, β).
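As a quick numerical illustration of the product construction above, one can multiply independent standard gamma draws; the check below is limited to the first moment, E(XY) = αβ, which follows from independence and the gamma mean (a minimal NumPy sketch; the parameter values are arbitrary):

```python
import numpy as np

# The product of two independent standard gamma variables follows a
# Macdonald distribution (Nagar et al. [9]); here we only verify the
# first moment E(XY) = a*b, which follows from independence.
rng = np.random.default_rng(0)
a, b, n = 2.0, 3.0, 200_000
x = rng.gamma(a, 1.0, n)   # X ~ Ga(a)
y = rng.gamma(b, 1.0, n)   # Y ~ Ga(b)
product = x * y            # approximately M(a, b) distributed
print(product.mean())      # close to a*b = 6
```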

Recently, Nagar et al. [17] have used the Macdonald distribution to construct a new bivariate distribution which they call the Macdonald-gamma distribution. The random variables X and Y are said to have a Macdonald-gamma distribution, denoted by (X, Y) ~ MG(α, β, λ), if their joint p.d.f. is given by

f(x, y; α, β, λ) = (λ^β / (Γ(α) Γ(β))) x^(β−1) y^(α−β−1) exp(−y − λx/y),  x > 0, y > 0, (4)

where α > 0, β > 0, and λ > 0. The bivariate distribution defined by the above density has many interesting features. For example, the marginal and the conditional distributions of X are Macdonald and gamma, respectively, the marginal distribution of Y is gamma, and the conditional distribution of Y given X is extended gamma. The Macdonald-gamma distribution is positive likelihood ratio dependent (PLRD). Several results pertaining to this distribution, such as marginal and conditional distributions, moments, entropies, the information matrix, and the distribution of the sum, are derived in Nagar et al. [17]. The distributions of the product and the ratio of independent or correlated Macdonald random variables are derived in Nagar et al. [9].

In this article, we study a multivariate generalization of the Macdonald distribution defined by the density (1) and derive properties such as marginal and conditional distributions, moments, distributions of sums, and factorizations. We also give a multivariate generalization of the Macdonald-gamma distribution and study its properties.

2. Some Definitions and Preliminary Results

In this section, we give some definitions and results which are used in subsequent sections.

The gamma function was first introduced by Leonhard Euler in 1729 as the limit of a discrete expression and later as an absolutely convergent improper integral:

Γ(α) = ∫_0^∞ t^(α−1) exp(−t) dt,  Re(α) > 0. (5)

The gamma function has many beautiful properties and has been used in almost all branches of science and engineering. Replacing exp(−t) by exp(−t − σ/t), Re(σ) > 0, in (5), the gamma function with an additional parameter can be given as

Γ(α; σ) = ∫_0^∞ t^(α−1) exp(−t − σ/t) dt,  Re(σ) > 0. (6)

The extended gamma function is very similar to the modified Bessel function of type 2. An integral representation of the modified Bessel function of type 2 (Gradshteyn and Ryzhik [18, Eq. 3.471.9]) is given by

∫_0^∞ x^(ν−1) exp(−β/x − γx) dx = 2 (β/γ)^(ν/2) K_ν(2√(βγ)), (7)

where Re(β) > 0 and Re(γ) > 0. Comparing (2) and (7), it can easily be seen that

Γ(α; σ) = 2 σ^(α/2) K_α(2√σ). (8)
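The relation between the extended gamma function and the modified Bessel function of type 2 can be verified numerically (a sketch using SciPy; `quad` and `kv` are the standard quadrature and Bessel-K routines, and the parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv  # modified Bessel function of the second kind

def ext_gamma(alpha, sigma):
    # Extended gamma function: integral of t^(alpha-1) exp(-t - sigma/t) over (0, inf).
    val, _ = quad(lambda t: t**(alpha - 1) * np.exp(-t - sigma / t), 0, np.inf)
    return val

alpha, sigma = 2.5, 1.3
lhs = ext_gamma(alpha, sigma)
rhs = 2 * sigma**(alpha / 2) * kv(alpha, 2 * np.sqrt(sigma))
print(lhs, rhs)  # the two values agree
```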

Also, substituting t → σ/t in (2), it can be checked that

Γ(α; σ) = σ^α Γ(−α; σ). (9)

Finally, we define the beta type 1, beta type 2, and Dirichlet type 1 distributions. These definitions can be found in Johnson et al. [19].

Definition 1. The random variable X is said to have a beta type 1 distribution with parameters a and b, denoted as X ~ B1(a, b), a > 0, b > 0, if its p.d.f. is given by

f(x; a, b) = x^(a−1) (1 − x)^(b−1) / B(a, b),  0 < x < 1, (10)

where B(a, b) is the beta function defined by

B(a, b) = ∫_0^1 t^(a−1) (1 − t)^(b−1) dt = Γ(a) Γ(b) / Γ(a + b),  a > 0, b > 0. (11)

Definition 2. The random variable X is said to have a beta type 2 distribution with parameters a and b, denoted as X ~ B2(a, b), a > 0, b > 0, if its p.d.f. is given by

f(x; a, b) = x^(a−1) (1 + x)^(−(a+b)) / B(a, b),  x > 0. (12)

Definition 3. The random variables X_1, …, X_n are said to have a Dirichlet type 1 distribution with parameters a_1, …, a_n, and b, denoted by (X_1, …, X_n) ~ D1(a_1, …, a_n; b), if their joint p.d.f. is

f(x_1, …, x_n) = (Γ(a_1 + ⋯ + a_n + b) / (Γ(a_1) ⋯ Γ(a_n) Γ(b))) (∏_{i=1}^n x_i^(a_i−1)) (1 − ∑_{i=1}^n x_i)^(b−1), (13)

where x_i > 0, ∑_{i=1}^n x_i < 1, a_i > 0, i = 1, …, n, and b > 0.
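The Dirichlet type 1 distribution can be sampled directly: its joint law is that of the first n coordinates of a standard (n+1)-part Dirichlet vector with parameters (a_1, …, a_n, b). A small NumPy sketch checking the means E(X_i) = a_i/(a_1 + ⋯ + a_n + b) (the parameter values are arbitrary):

```python
import numpy as np

# Dirichlet type 1 samples: the first n coordinates of a standard
# Dirichlet with parameters (a_1, ..., a_n, b).
rng = np.random.default_rng(1)
a, b = np.array([2.0, 3.0, 1.5]), 2.5
samples = rng.dirichlet(np.append(a, b), size=100_000)[:, :-1]
print(samples.mean(axis=0))  # approx a / (a.sum() + b)
```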

The matrix variate generalizations of beta type 1 and beta type 2 distributions have been defined and studied extensively. For example, see Gupta and Nagar [12].

If V ~ B2(a, b), then the hth moment of V is given as

E(V^h) = B(a + h, b − h) / B(a, b) = Γ(a + h) Γ(b − h) / (Γ(a) Γ(b)),  −a < h < b. (14)
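The beta type 2 moment formula is easy to check by Monte Carlo, using the fact that the ratio of two independent standard gamma variables is beta type 2 (a sketch; the parameter values are arbitrary):

```python
import math
import numpy as np

# Check E(V^h) = Gamma(a+h)Gamma(b-h) / (Gamma(a)Gamma(b)), -a < h < b,
# for V ~ B2(a, b), sampled as the ratio of independent standard gammas.
rng = np.random.default_rng(2)
a, b, h, n = 3.0, 4.0, 1.5, 400_000
v = rng.gamma(a, 1.0, n) / rng.gamma(b, 1.0, n)
mc = np.mean(v**h)
exact = math.gamma(a + h) * math.gamma(b - h) / (math.gamma(a) * math.gamma(b))
print(mc, exact)  # the estimates agree to about two decimals
```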

Theorem 4. If (X, Y) ~ MG(α, β, λ), then the p.d.f. of the sum X + Y admits an expansion in Laguerre polynomials, where L_k(·) is the Laguerre polynomial of degree k and 1F1 represents the confluent hypergeometric function.

Proof. See Nagar et al. [17].

3. Density Function

We propose a multivariate generalization of the Macdonald distribution as follows.

Definition 5. The random variables X_1, …, X_n are said to have a multivariate Macdonald distribution with parameters α, β_1, …, β_n, and λ, denoted as (X_1, …, X_n) ~ M_n(α; β_1, …, β_n; λ), if their joint p.d.f. is given by

f(x_1, …, x_n) = c (∏_{i=1}^n x_i^(β_i−1)) Γ(α − ∑_{i=1}^n β_i; λ ∑_{i=1}^n x_i), (16)

where x_i > 0, i = 1, …, n, α > 0, β_i > 0, λ > 0, and c is the normalizing constant.

Since the integration of the p.d.f. (16) over its support set is one, we have

c ∫_0^∞ ⋯ ∫_0^∞ (∏_{i=1}^n x_i^(β_i−1)) Γ(α − ∑_{i=1}^n β_i; λ ∑_{i=1}^n x_i) dx_1 ⋯ dx_n = 1. (17)

Replacing Γ(α − ∑_{i=1}^n β_i; λ ∑_{i=1}^n x_i) by its equivalent integral, namely,

∫_0^∞ t^(α − ∑_{i=1}^n β_i − 1) exp(−t − λ ∑_{i=1}^n x_i / t) dt, (18)

in (17), the normalizing constant c is derived as

c = λ^(∑_{i=1}^n β_i) / (Γ(α) ∏_{i=1}^n Γ(β_i)). (19)

Theorem 6. If (X_1, …, X_n) ~ M_n(α; β_1, …, β_n; λ), then for m < n, (X_1, …, X_m) ~ M_m(α; β_1, …, β_m; λ).

Proof. By using (18), the joint density of X_1, …, X_n can be written as

c (∏_{i=1}^n x_i^(β_i−1)) ∫_0^∞ t^(α − ∑_{i=1}^n β_i − 1) exp(−t − λ ∑_{i=1}^n x_i / t) dt, (20)

where x_i > 0, i = 1, …, n. Now, integrating x_{m+1}, …, x_n in the above density, we get the marginal density of X_1, …, X_m as

c (∏_{i=m+1}^n Γ(β_i) λ^(−β_i)) (∏_{i=1}^m x_i^(β_i−1)) ∫_0^∞ t^(α − ∑_{i=1}^m β_i − 1) exp(−t − λ ∑_{i=1}^m x_i / t) dt. (21)

Now, writing the above integral in terms of the extended gamma function and simplifying, we get the desired result.

Corollary 7. If (X_1, …, X_n) ~ M_n(α; β_1, …, β_n; λ), then for i = 1, …, n the marginal distribution of X_i is Macdonald, X_i ~ M(α, β_i, λ).

Theorem 8. Let (X_1, …, X_n) ~ M_n(α; β_1, …, β_n; λ) and define S = ∑_{i=1}^n X_i and Y_i = X_i/S for i = 1, …, n − 1. Then, (Y_1, …, Y_{n−1}) and S are independent, (Y_1, …, Y_{n−1}) ~ D1(β_1, …, β_{n−1}; β_n), and S ~ M(α, ∑_{i=1}^n β_i, λ).

Proof. Substituting x_i = s y_i for i = 1, …, n − 1 and x_n = s(1 − ∑_{i=1}^{n−1} y_i) with the Jacobian s^(n−1) in (16), the joint density of Y_1, …, Y_{n−1} and S is derived as

c (∏_{i=1}^{n−1} y_i^(β_i−1)) (1 − ∑_{i=1}^{n−1} y_i)^(β_n−1) s^(∑_{i=1}^n β_i − 1) Γ(α − ∑_{i=1}^n β_i; λs), (22)

where y_i > 0, i = 1, …, n − 1, ∑_{i=1}^{n−1} y_i < 1, and s > 0. From the above factorization, it is clear that (Y_1, …, Y_{n−1}) and S are independent, (Y_1, …, Y_{n−1}) ~ D1(β_1, …, β_{n−1}; β_n), and S ~ M(α, ∑_{i=1}^n β_i, λ).
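The factorization in Theorem 8 can be probed by simulation if one assumes the hierarchical representation Y ~ Ga(α) with X_i | Y independent Ga(β_i) scaled by Y/λ (this gamma-mixture construction is our assumption, not stated in the original text); the normalized components should then have Dirichlet means β_i/∑β_j and be uncorrelated with the sum:

```python
import numpy as np

# Numerical probe of the Dirichlet/sum factorization under an ASSUMED
# hierarchical construction of the multivariate Macdonald vector:
# Y ~ Ga(alpha), then X_i | Y independent Ga(beta_i) with scale Y/lam.
rng = np.random.default_rng(3)
alpha, beta, lam, n = 4.0, np.array([1.0, 2.0, 3.0]), 1.5, 200_000
y = rng.gamma(alpha, 1.0, n)
x = rng.gamma(beta, (y / lam)[:, None], size=(n, 3))

s = x.sum(axis=1)
ratios = x / s[:, None]
print(ratios.mean(axis=0))                  # approx beta / beta.sum()
print(np.corrcoef(ratios[:, 0], s)[0, 1])   # approx 0 (independence)
```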

Corollary 9. If (X_1, X_2) ~ M_2(α; β_1, β_2; λ), then

X_1/(X_1 + X_2) ~ B1(β_1, β_2). (23)

4. Joint Moments

We derive the joint moments of random variables jointly distributed as multivariate Macdonald. These moments enable us to compute several expected values, such as means, variances, and covariances.

Using (16) and (19), the joint moments of X_1, …, X_n are obtained as

E(∏_{i=1}^n X_i^(r_i)) = c ∫_0^∞ ⋯ ∫_0^∞ (∏_{i=1}^n x_i^(β_i+r_i−1)) Γ(α − ∑_{i=1}^n β_i; λ ∑_{i=1}^n x_i) dx_1 ⋯ dx_n (24)

= λ^(−∑_{i=1}^n r_i) (Γ(α + ∑_{i=1}^n r_i)/Γ(α)) ∏_{i=1}^n (Γ(β_i + r_i)/Γ(β_i)),  β_i + r_i > 0. (25)

From the above expression, the joint (r, s)th moment of X_i and X_j, for i ≠ j, is given by

E(X_i^r X_j^s) = λ^(−(r+s)) (Γ(α + r + s)/Γ(α)) (Γ(β_i + r)/Γ(β_i)) (Γ(β_j + s)/Γ(β_j)). (26)

Substituting r = h and s = −h in (25), the hth moment of X_i/X_j is obtained as

E((X_i/X_j)^h) = Γ(β_i + h) Γ(β_j − h)/(Γ(β_i) Γ(β_j)),  −β_i < h < β_j, (27)

which, on comparing the above moment expression with (14), shows that X_i/X_j has a standard beta type 2 distribution with parameters β_i and β_j. Substituting appropriately in (25), means and variances of X_i and X_j and the covariance between X_i and X_j are computed as

E(X_i) = αβ_i/λ,  var(X_i) = αβ_i(α + β_i + 1)/λ^2,  cov(X_i, X_j) = αβ_iβ_j/λ^2. (28)

The correlation coefficient between X_i and X_j is given by

corr(X_i, X_j) = (β_iβ_j/((α + β_i + 1)(α + β_j + 1)))^(1/2). (29)

Further, for β_i = β_j = β, the correlation coefficient between X_i and X_j is given by

corr(X_i, X_j) = β/(α + β + 1). (30)
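These low-order moments can be checked numerically under an assumed gamma-mixture representation of the Macdonald vector, namely Y ~ Ga(α) with X_i | Y independent Ga(β_i) scaled by Y/λ; both this construction and the target values E(X_i) = αβ_i/λ and cov(X_i, X_j) = αβ_iβ_j/λ² reflect our parametrization and are assumptions rather than statements from the original text:

```python
import numpy as np

# Monte Carlo check of the mean and covariance under the ASSUMED
# gamma-mixture construction: Y ~ Ga(alpha), X_i | Y ~ Ga(beta_i, scale=Y/lam).
rng = np.random.default_rng(4)
alpha, b1, b2, lam, n = 3.0, 2.0, 1.0, 2.0, 500_000
y = rng.gamma(alpha, 1.0, n)
x1 = rng.gamma(b1, y / lam)
x2 = rng.gamma(b2, y / lam)
print(x1.mean(), alpha * b1 / lam)                     # both approx 3.0
print(np.cov(x1, x2)[0, 1], alpha * b1 * b2 / lam**2)  # both approx 1.5
```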

5. Factorizations

In this section, we give several factorizations of the multivariate Macdonald density.

Theorem 10. Let (X_1, …, X_n) ~ M_n(α; β_1, …, β_n; λ). For i = 1, …, n − 1, define Z_i = ∑_{j=1}^i X_j / ∑_{j=1}^{i+1} X_j and Z_n = ∑_{j=1}^n X_j. Then, the random variables Z_1, …, Z_n are independent, Z_i ~ B1(∑_{j=1}^i β_j, β_{i+1}), i = 1, …, n − 1, and Z_n ~ M(α, ∑_{j=1}^n β_j, λ).

Proof. From the transformation given in the theorem, we obtain , , and with the Jacobian . Now, making appropriate substitutions in the joint p.d.f. of , we obtain

Further, the above expression is simplified as

where and . Now, from the above factorization, we get the result.

Theorem 11. Let . Define and , for . Then, are independent, , , and .

Proof. This result is obtained from Theorem 10, by observing that , for , , and , where .

Theorem 12. Let . Define and , for . Then, , are independent, for and .

Proof. The result is obtained from Theorem 11 by taking into account that for , , and , where .

Theorem 13. Let . Define and for . Then, are independent, , for and .

Proof. Making the substitutions , , and with the Jacobian in (16), we obtain the joint p.d.f. of as

which can be rewritten as

Now, the desired result follows from the above factorization.

Theorem 14. Let . Define and , for . Then, , are independent, , for and .

Proof. The result is obtained from Theorem 13, by noting that for , and for .

Theorem 15. Let . Define and , for . Then, are independent, for and .

Proof. This result follows from Theorem 14, by observing that for , and , where .

6. The Multivariate Macdonald-Gamma Distribution

We propose a multivariate generalization of the Macdonald-gamma distribution as follows.

Definition 16. The random variables X_1, …, X_n and Y are said to have a multivariate Macdonald-gamma distribution with parameters α, β_1, …, β_n, and λ, denoted as (X_1, …, X_n, Y) ~ MG_n(α; β_1, …, β_n; λ), if their joint p.d.f. is

f(x_1, …, x_n, y) = c (∏_{i=1}^n x_i^(β_i−1)) y^(α − ∑_{i=1}^n β_i − 1) exp(−y − λ ∑_{i=1}^n x_i / y), (36)

where x_i > 0, i = 1, …, n, y > 0, α > 0, β_i > 0, λ > 0, and the normalizing constant c is given by (19).
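For n = 1 the density above reduces to a bivariate Macdonald-gamma density. As a sanity check on the assumed form f(x, y) = (λ^β/(Γ(α)Γ(β))) x^(β−1) y^(α−β−1) exp(−y − λx/y) (our reconstruction; the parametrization α, β, λ is an assumption), numerical integration over the support should return 1:

```python
import numpy as np
from math import gamma
from scipy.integrate import dblquad

# Normalization check of the ASSUMED bivariate Macdonald-gamma density.
a, b, lam = 3.0, 1.5, 2.0
c = lam**b / (gamma(a) * gamma(b))

def f(y, x):  # dblquad integrates func(y, x): y is the inner variable
    return c * x**(b - 1) * y**(a - b - 1) * np.exp(-y - lam * x / y)

total, _ = dblquad(f, 0, np.inf, 0, np.inf)
print(total)  # approx 1
```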

Theorem 17. If (X_1, …, X_n, Y) ~ MG_n(α; β_1, …, β_n; λ), then for m < n, (X_1, …, X_m, Y) ~ MG_m(α; β_1, …, β_m; λ).

Proof. Integrating x_{m+1}, …, x_n in (36), we get the marginal density of X_1, …, X_m and Y as

c (∏_{i=m+1}^n Γ(β_i) λ^(−β_i)) (∏_{i=1}^m x_i^(β_i−1)) y^(α − ∑_{i=1}^m β_i − 1) exp(−y − λ ∑_{i=1}^m x_i / y), (37)

where x_i > 0, i = 1, …, m, and y > 0.

Corollary 18. If (X_1, …, X_n, Y) ~ MG_n(α; β_1, …, β_n; λ), then for i = 1, …, n the joint distribution of X_i and Y is Macdonald-gamma, (X_i, Y) ~ MG(α, β_i, λ).

Theorem 19. If (X_1, …, X_n, Y) ~ MG_n(α; β_1, …, β_n; λ), then for m ≤ n, (X_1, …, X_m) ~ M_m(α; β_1, …, β_m; λ).

Proof. Integrating y in (37) by using the definition of the extended gamma function, we obtain the desired result.

Corollary 20. If (X_1, …, X_n, Y) ~ MG_n(α; β_1, …, β_n; λ), then for i = 1, …, n the marginal distribution of X_i is Macdonald, X_i ~ M(α, β_i, λ).

Theorem 21. Let (X_1, …, X_n, Y) ~ MG_n(α; β_1, …, β_n; λ) and define S = ∑_{i=1}^n X_i and W_i = X_i/S for i = 1, …, n − 1. Then, (W_1, …, W_{n−1}) and (S, Y) are independent, (W_1, …, W_{n−1}) ~ D1(β_1, …, β_{n−1}; β_n), and (S, Y) ~ MG(α, ∑_{i=1}^n β_i, λ).

Proof. Substituting x_i = s w_i for i = 1, …, n − 1 and x_n = s(1 − ∑_{i=1}^{n−1} w_i) with the Jacobian s^(n−1) in (36), the joint density of W_1, …, W_{n−1}, S, and Y is derived as

c (∏_{i=1}^{n−1} w_i^(β_i−1)) (1 − ∑_{i=1}^{n−1} w_i)^(β_n−1) s^(∑_{i=1}^n β_i − 1) y^(α − ∑_{i=1}^n β_i − 1) exp(−y − λs/y),

where w_i > 0, i = 1, …, n − 1, ∑_{i=1}^{n−1} w_i < 1, s > 0, and y > 0. From the above factorization, it is clear that (W_1, …, W_{n−1}) and (S, Y) are independent, (W_1, …, W_{n−1}) ~ D1(β_1, …, β_{n−1}; β_n), and (S, Y) ~ MG(α, ∑_{i=1}^n β_i, λ).

Corollary 22. If , then

Corollary 23. If and for and . Then, and are independent, and .

Theorem 24. If , then the p.d.f. of is given by where and .

Proof. From Theorem 21, the joint distribution of and is Macdonald-gamma, . Further, from Theorem 4, we get the distribution of .

Using (36) and (19), the joint moments of X_1, …, X_n and Y are obtained as

E((∏_{i=1}^n X_i^(r_i)) Y^s) = λ^(−∑_{i=1}^n r_i) (Γ(α + ∑_{i=1}^n r_i + s)/Γ(α)) ∏_{i=1}^n (Γ(β_i + r_i)/Γ(β_i)).

Next, we give several factorizations of the multivariate Macdonald-gamma density.

Theorem 25. Let . For , define and . Then, and are independent, , , and .

Proof. Similar to the proof of Theorem 10.

Theorem 26. Let . Define and , for . Then, and are independent, , , and .

Proof. This result is obtained from Theorem 25, by observing that , for , , and , where .

Theorem 27. Let . Define and , for . Then, , and are independent, for , and .

Proof. The result is obtained from Theorem 26 by taking into account that for , , and , where .

Theorem 28. Let . Define and for . Then, and are independent, , for , and .

Proof. Similar to the proof of Theorem 13.

Theorem 29. Let . Define and , for . Then, and are independent, , for , and .

Proof. The result is obtained from Theorem 28, by noting that for , , and for .

Theorem 30. Let . Define and , for . Then, and are independent, for and .

Proof. This result follows from Theorem 29, by observing that for , , and , where .

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this article.

Acknowledgments

The research work of Daya K. Nagar and Luz Estela Sánchez was supported by the Sistema Universitario de Investigación, Universidad de Antioquia, under the Project no. IN10231CE.