Abstract

We study the matrix variate confluent hypergeometric function kind 1 distribution, which is a generalization of the matrix variate gamma distribution. We give several properties of this distribution. We also derive the density functions of the sum $X+Y$ and of several matrix quotients formed from $X$ and $Y$, where the independent $p\times p$ random matrices $X$ and $Y$ follow confluent hypergeometric function kind 1 and gamma distributions, respectively.

1. Introduction

The matrix variate gamma distribution has many applications in multivariate statistical analysis. The Wishart distribution, which is the distribution of the sample variance-covariance matrix when sampling from a multivariate normal distribution, is a special case of the matrix variate gamma distribution.

The purpose of this paper is to give a generalization of the matrix variate gamma distribution and study its properties.

We begin with a brief review of some definitions and notations. We adhere to standard notations (cf. Gupta and Nagar [1]). Let $A$ be a $p\times p$ matrix. Then, $A'$ denotes the transpose of $A$; $\operatorname{tr}(A)$ is the trace of $A$; $\operatorname{etr}(A)=\exp(\operatorname{tr}(A))$; $\det(A)$ is the determinant of $A$; $\|A\|$ is the norm of $A$, the maximum of the absolute values of the eigenvalues of the matrix $A$; $A>0$ means that $A$ is symmetric positive definite; and $A^{1/2}$ denotes the unique symmetric positive definite square root of $A>0$. The multivariate gamma function $\Gamma_p(a)$ is defined by
$$\Gamma_p(a)=\int_{X>0}\operatorname{etr}(-X)\det(X)^{a-(p+1)/2}\,dX=\pi^{p(p-1)/4}\prod_{i=1}^{p}\Gamma\!\left(a-\frac{i-1}{2}\right),\quad \operatorname{Re}(a)>\frac{p-1}{2}. \quad (1)$$

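Because of the product form in (1), $\Gamma_p(a)$ reduces to ordinary gamma functions and is easy to evaluate numerically. A minimal sketch in Python (the function name is ours):

```python
import math

def multivariate_gamma(a: float, p: int) -> float:
    """Gamma_p(a) via the product formula in (1):
    pi^{p(p-1)/4} * prod_{i=1}^{p} Gamma(a - (i-1)/2)."""
    if a <= (p - 1) / 2:
        raise ValueError("Gamma_p(a) requires a > (p - 1)/2")
    return math.pi ** (p * (p - 1) / 4) * math.prod(
        math.gamma(a - (i - 1) / 2) for i in range(1, p + 1)
    )

# For p = 1 it is the ordinary gamma function.
assert abs(multivariate_gamma(2.5, 1) - math.gamma(2.5)) < 1e-12
```
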
The symmetric positive definite random matrix $X$ is said to have a matrix variate gamma distribution, denoted by $X\sim\mathrm{Ga}(p,\alpha,\Sigma)$, if its probability density function (p.d.f.) is given by
$$\frac{\det(\Sigma)^{-\alpha}}{\Gamma_p(\alpha)}\operatorname{etr}(-\Sigma^{-1}X)\det(X)^{\alpha-(p+1)/2},\quad X>0, \quad (2)$$
where $\Sigma$ is a symmetric positive definite matrix of order $p$ and $\operatorname{Re}(\alpha)>(p-1)/2$. For $\Sigma=I_p$, the above density reduces to a standard matrix variate gamma density, and in this case we write $X\sim\mathrm{Ga}(p,\alpha)$. Further, if $X\sim\mathrm{Ga}(p,\alpha)$ and $Y\sim\mathrm{Ga}(p,\beta)$ are independent gamma matrices, then the random matrix $(X+Y)^{-1/2}X(X+Y)^{-1/2}$ follows a matrix variate beta type 1 distribution with parameters $\alpha$ and $\beta$.
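
For simulation experiments, a draw from the matrix variate gamma density above can be sketched through a Bartlett-type triangular construction; the parameterization (rate one, scale matrix $\Sigma$) follows the density as written here, and the function names are ours:

```python
import numpy as np

def sample_matrix_gamma(alpha: float, Sigma: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Draw X ~ Ga(p, alpha, Sigma): X = A (T T') A' with Sigma = A A',
    T lower triangular, T[i,i]^2 ~ Gamma(alpha - i/2) (rows indexed from 0),
    and T[i,j] ~ N(0, 1/2) below the diagonal."""
    p = Sigma.shape[0]
    A = np.linalg.cholesky(Sigma)
    T = np.zeros((p, p))
    for i in range(p):
        T[i, i] = np.sqrt(rng.gamma(alpha - i / 2.0))
        T[i, :i] = rng.normal(scale=np.sqrt(0.5), size=i)
    return A @ (T @ T.T) @ A.T

rng = np.random.default_rng(0)
X = sample_matrix_gamma(3.0, np.eye(2), rng)   # one 2 x 2 draw
```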

By replacing $\operatorname{etr}(-\Sigma^{-1}X)$ by the confluent hypergeometric function of matrix argument ${}_1F_1(\alpha;\beta;-\Sigma^{-1}X)$, a generalization of the matrix variate gamma distribution can be defined by the p.d.f.
$$C\det(X)^{\nu-(p+1)/2}\,{}_1F_1(\alpha;\beta;-\Sigma^{-1}X),\quad X>0, \quad (3)$$
where $\operatorname{Re}(\nu)>(p-1)/2$ and $C$ is the normalizing constant. In Section 2, it has been shown that, for $\operatorname{Re}(\nu)>(p-1)/2$, $\operatorname{Re}(\alpha-\nu)>(p-1)/2$, $\operatorname{Re}(\beta-\nu)>(p-1)/2$, and $\Sigma>0$, the normalizing constant can be evaluated as
$$C=\frac{\Gamma_p(\alpha)\Gamma_p(\beta-\nu)}{\Gamma_p(\nu)\Gamma_p(\beta)\Gamma_p(\alpha-\nu)}\det(\Sigma)^{-\nu}. \quad (4)$$
Therefore, the p.d.f. in (3) can be written explicitly as
$$\frac{\Gamma_p(\alpha)\Gamma_p(\beta-\nu)}{\Gamma_p(\nu)\Gamma_p(\beta)\Gamma_p(\alpha-\nu)}\det(\Sigma)^{-\nu}\det(X)^{\nu-(p+1)/2}\,{}_1F_1(\alpha;\beta;-\Sigma^{-1}X),\quad X>0, \quad (5)$$
where $\operatorname{Re}(\nu)>(p-1)/2$, $\operatorname{Re}(\alpha-\nu)>(p-1)/2$, $\operatorname{Re}(\beta-\nu)>(p-1)/2$, $\Sigma>0$, and ${}_1F_1$ is the confluent hypergeometric function of the first kind of matrix argument (Gupta and Nagar [1]). Since the density given above involves the confluent hypergeometric function, we will call the corresponding distribution a confluent hypergeometric function distribution. We will write $X\sim\mathrm{CH1}(\nu,\alpha,\beta;\Sigma)$ to say that the random matrix $X$ has a confluent hypergeometric function distribution defined by the density (5). It has been shown by van der Merwe and Roux [2] that the above density can be obtained as a limiting case of a density involving the Gauss hypergeometric function of matrix argument. For $\alpha=\beta$, the density (5) reduces to a matrix variate gamma density, and for $\Sigma=I_p$ it reduces to
$$\frac{\Gamma_p(\alpha)\Gamma_p(\beta-\nu)}{\Gamma_p(\nu)\Gamma_p(\beta)\Gamma_p(\alpha-\nu)}\det(X)^{\nu-(p+1)/2}\,{}_1F_1(\alpha;\beta;-X),\quad X>0, \quad (6)$$
where $\operatorname{Re}(\nu)>(p-1)/2$, $\operatorname{Re}(\alpha-\nu)>(p-1)/2$, and $\operatorname{Re}(\beta-\nu)>(p-1)/2$. In this case we will write $X\sim\mathrm{CH1}(\nu,\alpha,\beta)$. The matrix variate confluent hypergeometric function kind 1 distribution occurs as the distribution of the matrix ratio of independent gamma and beta matrices. For $p=1$, (6) reduces to the univariate confluent hypergeometric function kind 1 density given by (Orozco-Castañeda et al. [3])
$$\frac{\Gamma(\alpha)\Gamma(\beta-\nu)}{\Gamma(\nu)\Gamma(\beta)\Gamma(\alpha-\nu)}\,x^{\nu-1}\,{}_1F_1(\alpha;\beta;-x),\quad x>0, \quad (7)$$
where $\nu>0$, $\alpha-\nu>0$, $\beta-\nu>0$, and ${}_1F_1$ is the confluent hypergeometric function of the first kind (Luke [4]). The random variable having the above density will be designated by $x\sim\mathrm{CH1}(\nu,\alpha,\beta)$. Since the matrix variate confluent hypergeometric function kind 1 distribution generalizes the matrix variate gamma distribution, it can serve quite effectively as an alternative to the gamma distribution.
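
For $p=1$, the density (7) involves only scalar special functions, so its normalizing constant is easy to check numerically. The sketch below uses the parameterization written in (7); the symbols $\nu,\alpha,\beta$ follow our reading of the density and should be treated as an assumption:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, hyp1f1

def ch1_pdf(x, nu, alpha, beta):
    """Univariate CH1 density (7): c * x^(nu-1) * 1F1(alpha; beta; -x), x > 0."""
    c = gamma(alpha) * gamma(beta - nu) / (gamma(nu) * gamma(beta) * gamma(alpha - nu))
    return c * x ** (nu - 1) * hyp1f1(alpha, beta, -x)

nu, alpha, beta = 2.0, 3.5, 4.0          # requires alpha > nu and beta > nu
total, _ = quad(ch1_pdf, 0, np.inf, args=(nu, alpha, beta))
print(total)                              # ~ 1.0, consistent with the constant in (7)
```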

Although ample information about the matrix variate gamma distribution is available, little appears to have been done in the literature to study the matrix variate confluent hypergeometric function kind 1 distribution.

In this paper, we study several properties including stochastic representations of the matrix variate confluent hypergeometric function kind 1 distribution. We also derive the density function of the matrix quotient of two independent random matrices having confluent hypergeometric function kind 1 and gamma distributions. Further, densities of several other matrix quotients and matrix products involving confluent hypergeometric function kind 1, beta type 1, beta type 2, and gamma matrices are derived.

2. Some Definitions and Preliminary Results

In this section we give some definitions and preliminary results which are used in subsequent sections.

A more general integral representation of the multivariate gamma function can be obtained as
$$\int_{X>0}\operatorname{etr}(-\Sigma^{-1}X)\det(X)^{a-(p+1)/2}\,dX=\Gamma_p(a)\det(\Sigma)^{a}, \quad (8)$$
where $\operatorname{Re}(a)>(p-1)/2$ and $\Sigma>0$. The above result can be established for real $\Sigma$ by substituting $X=\Sigma^{1/2}Y\Sigma^{1/2}$ with the Jacobian $J(X\to Y)=\det(\Sigma)^{(p+1)/2}$ in (1), and it follows for complex $\Sigma$ by analytic continuation.

The multivariate generalization of the beta function is given by
$$B_p(a,b)=\int_0^{I_p}\det(X)^{a-(p+1)/2}\det(I_p-X)^{b-(p+1)/2}\,dX=\frac{\Gamma_p(a)\Gamma_p(b)}{\Gamma_p(a+b)}, \quad (9)$$
where $\operatorname{Re}(a)>(p-1)/2$ and $\operatorname{Re}(b)>(p-1)/2$.

The generalized hypergeometric function of one matrix argument, defined in Constantine [5], is given by
$${}_rF_s(a_1,\ldots,a_r;b_1,\ldots,b_s;X)=\sum_{k=0}^{\infty}\sum_{\kappa}\frac{(a_1)_\kappa\cdots(a_r)_\kappa}{(b_1)_\kappa\cdots(b_s)_\kappa}\frac{C_\kappa(X)}{k!}, \quad (10)$$
where $a_1,\ldots,a_r$, $b_1,\ldots,b_s$ are arbitrary complex numbers, $X$ is a $p\times p$ complex symmetric matrix, $C_\kappa(X)$ is the zonal polynomial of the complex symmetric matrix $X$ corresponding to the ordered partition $\kappa=(k_1,\ldots,k_p)$, $k_1\geq\cdots\geq k_p\geq 0$, $k_1+\cdots+k_p=k$, and $\sum_\kappa$ denotes the summation over all partitions $\kappa$ of $k$. The generalized hypergeometric coefficient $(a)_\kappa$ used above is defined by
$$(a)_\kappa=\prod_{i=1}^{p}\left(a-\frac{i-1}{2}\right)_{k_i}, \quad (11)$$
where
$$(a)_r=a(a+1)\cdots(a+r-1),\quad r=1,2,\ldots,\ \text{with}\ (a)_0=1. \quad (12)$$
Conditions for convergence of the series in (10) are available in the literature. From (10), it follows that
$${}_1F_1(a;c;X)=\sum_{k=0}^{\infty}\sum_{\kappa}\frac{(a)_\kappa}{(c)_\kappa}\frac{C_\kappa(X)}{k!}, \quad (13)$$
$${}_2F_1(a,b;c;X)=\sum_{k=0}^{\infty}\sum_{\kappa}\frac{(a)_\kappa(b)_\kappa}{(c)_\kappa}\frac{C_\kappa(X)}{k!},\quad \|X\|<1. \quad (14)$$
By taking $r=s=0$ and $r=1$, $s=0$ in (10), it can be observed that
$${}_0F_0(X)=\operatorname{etr}(X),\qquad {}_1F_0(a;X)=\det(I_p-X)^{-a},\quad \|X\|<1. \quad (15)$$
Substituting $c=b$ in (14) and using (15), the Gauss hypergeometric function reduces as
$${}_2F_1(a,b;b;X)=\det(I_p-X)^{-a},\quad \|X\|<1. \quad (16)$$
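
For $p=1$ the only partition of $k$ is $\kappa=(k)$, with $C_{(k)}(x)=x^{k}$ and $(a)_\kappa=(a)_k$, so (10) collapses to the classical series; written out:

```latex
\[
  {}_rF_s(a_1,\ldots,a_r;b_1,\ldots,b_s;x)
  =\sum_{k=0}^{\infty}
   \frac{(a_1)_k\cdots(a_r)_k}{(b_1)_k\cdots(b_s)_k}\,
   \frac{x^{k}}{k!}.
\]
```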

The integral representations of the confluent hypergeometric function ${}_1F_1$ and the Gauss hypergeometric function ${}_2F_1$ are given by
$${}_1F_1(a;c;X)=\frac{\Gamma_p(c)}{\Gamma_p(a)\Gamma_p(c-a)}\int_0^{I_p}\operatorname{etr}(XY)\det(Y)^{a-(p+1)/2}\det(I_p-Y)^{c-a-(p+1)/2}\,dY, \quad (17)$$
$${}_2F_1(a,b;c;X)=\frac{\Gamma_p(c)}{\Gamma_p(a)\Gamma_p(c-a)}\int_0^{I_p}\det(Y)^{a-(p+1)/2}\det(I_p-Y)^{c-a-(p+1)/2}\det(I_p-XY)^{-b}\,dY,\quad \|X\|<1, \quad (18)$$
where $\operatorname{Re}(a)>(p-1)/2$ and $\operatorname{Re}(c-a)>(p-1)/2$.
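
For $p=1$, (17) is the classical Euler integral for ${}_1F_1$, which can be verified numerically:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as beta_fn, hyp1f1

# p = 1 case of (17): 1F1(a; c; x) = B(a, c-a)^{-1} * int_0^1 e^{xy} y^{a-1} (1-y)^{c-a-1} dy.
a, c, x = 1.5, 4.0, -2.3
integral, _ = quad(lambda y: np.exp(x * y) * y ** (a - 1) * (1 - y) ** (c - a - 1), 0, 1)
print(integral / beta_fn(a, c - a), hyp1f1(a, c, x))   # the two values should agree
```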

Further generalizations of (8) and (9) in terms of zonal polynomials, due to Constantine [5], are given as
$$\int_{X>0}\operatorname{etr}(-\Sigma^{-1}X)\det(X)^{a-(p+1)/2}C_\kappa(TX)\,dX=(a)_\kappa\,\Gamma_p(a)\det(\Sigma)^{a}\,C_\kappa(T\Sigma),\quad \operatorname{Re}(a)>\frac{p-1}{2}, \quad (19)$$
$$\int_0^{I_p}\det(X)^{a-(p+1)/2}\det(I_p-X)^{b-(p+1)/2}C_\kappa(TX)\,dX=\frac{(a)_\kappa}{(a+b)_\kappa}\,B_p(a,b)\,C_\kappa(T),\quad \operatorname{Re}(a),\operatorname{Re}(b)>\frac{p-1}{2}, \quad (20)$$
respectively.

For $\operatorname{Re}(Z)>0$, $\operatorname{Re}(a)>(p-1)/2$, and $\operatorname{Re}(b)>(p-1)/2$, we have
$$\int_{X>0}\operatorname{etr}(-XZ)\det(X)^{a-(p+1)/2}\,{}_1F_1(\alpha;\beta;XT)\,dX=\Gamma_p(a)\det(Z)^{-a}\,{}_2F_1(\alpha,a;\beta;TZ^{-1}), \quad (21)$$
$$\int_0^{I_p}\det(X)^{a-(p+1)/2}\det(I_p-X)^{b-(p+1)/2}\,{}_1F_1(\alpha;\beta;XT)\,dX=B_p(a,b)\,{}_2F_2(\alpha,a;\beta,a+b;T), \quad (22)$$
where, in (21), $\|TZ^{-1}\|<1$.

We can establish (21) and (22) by expanding ${}_1F_1$ in series form using (10), integrating term by term by applying (19) and (20), and finally summing the resulting series.

Note that the series expansions for ${}_1F_1$ and ${}_2F_1$ given in (13) and (14) can be obtained by expanding $\operatorname{etr}(XY)$ and $\det(I_p-XY)^{-b}$, $\|X\|<1$, in (17) and (18) and integrating term by term using (20). Substituting $X=I_p$ in (18) and integrating, we obtain
$${}_2F_1(a,b;c;I_p)=\frac{\Gamma_p(c)\Gamma_p(c-a-b)}{\Gamma_p(c-a)\Gamma_p(c-b)}, \quad (23)$$
where $\operatorname{Re}(c)>(p-1)/2$, $\operatorname{Re}(c-a)>(p-1)/2$, $\operatorname{Re}(c-b)>(p-1)/2$, and $\operatorname{Re}(c-a-b)>(p-1)/2$. The confluent hypergeometric function satisfies Kummer's relation
$${}_1F_1(a;c;X)=\operatorname{etr}(X)\,{}_1F_1(c-a;c;-X). \quad (24)$$
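
For $p=1$, (23) is Gauss's summation theorem and (24) is the classical Kummer transformation; both are quick to confirm numerically:

```python
import numpy as np
from scipy.special import gamma, hyp1f1, hyp2f1

# p = 1 case of (23): 2F1(a, b; c; 1) = G(c) G(c-a-b) / (G(c-a) G(c-b)).
a, b, c = 0.7, 1.1, 3.0
lhs = hyp2f1(a, b, c, 1.0)
rhs = gamma(c) * gamma(c - a - b) / (gamma(c - a) * gamma(c - b))
assert np.isclose(lhs, rhs)

# p = 1 case of Kummer's relation (24): 1F1(a; c; x) = e^x 1F1(c-a; c; -x).
for x in (-5.0, -0.5, 0.3, 2.0):
    assert np.isclose(hyp1f1(a, c, x), np.exp(x) * hyp1f1(c - a, c, -x))
```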

For properties and further results on these functions, the reader is referred to Constantine [5], James [6], Muirhead [7], and Gupta and Nagar [1]. The numerical computation of a hypergeometric function of matrix argument is very difficult. However, some numerical methods have been proposed in recent years; see Hashiguchi et al. [8] and Koev and Edelman [9].

The generalized hypergeometric function with two $p\times p$ complex symmetric matrices $X$ and $Y$ is defined by
$${}_rF_s(a_1,\ldots,a_r;b_1,\ldots,b_s;X,Y)=\sum_{k=0}^{\infty}\sum_{\kappa}\frac{(a_1)_\kappa\cdots(a_r)_\kappa}{(b_1)_\kappa\cdots(b_s)_\kappa}\frac{C_\kappa(X)C_\kappa(Y)}{C_\kappa(I_p)\,k!}. \quad (25)$$
It is clear from the above definition that the order of $X$ and $Y$ is unimportant; that is,
$${}_rF_s(a_1,\ldots,a_r;b_1,\ldots,b_s;X,Y)={}_rF_s(a_1,\ldots,a_r;b_1,\ldots,b_s;Y,X). \quad (26)$$
Also, if one of the argument matrices is the identity, this function reduces to the one-matrix-argument function. Further, the two-matrix-argument function can be obtained from the one-matrix-argument function by averaging over the orthogonal group $O(p)$ using a result given in James [6]; namely,
$$\int_{O(p)}C_\kappa(XHYH')\,dH=\frac{C_\kappa(X)C_\kappa(Y)}{C_\kappa(I_p)}, \quad (27)$$
where $dH$ denotes the normalized invariant measure on $O(p)$. That is,
$${}_rF_s(a_1,\ldots,a_r;b_1,\ldots,b_s;X,Y)=\int_{O(p)}{}_rF_s(a_1,\ldots,a_r;b_1,\ldots,b_s;XHYH')\,dH, \quad (28)$$
given in James [6].

Finally, we define the inverted matrix variate gamma, matrix variate beta type 1, and matrix variate beta type 2 distributions. These definitions can be found in Gupta and Nagar [1] and Iranmanesh et al. [10].

Definition 1. A $p\times p$ random symmetric positive definite matrix $X$ is said to have an inverted matrix variate gamma distribution with parameters $\alpha$, $\beta$, and $\Sigma$, denoted by $X\sim\mathrm{IMG}(\alpha,\beta,\Sigma)$, if its p.d.f. is given by
$$\frac{\det(\Sigma)^{\alpha}}{\beta^{p\alpha}\,\Gamma_p(\alpha)}\det(X)^{-\alpha-(p+1)/2}\operatorname{etr}\!\left(-\frac{1}{\beta}\Sigma X^{-1}\right),\quad X>0,$$
where $\operatorname{Re}(\alpha)>(p-1)/2$, $\beta>0$, and $\Sigma$ is a symmetric positive definite matrix of order $p$.

Definition 2. A $p\times p$ random symmetric positive definite matrix $U$ is said to have a matrix variate beta type 1 distribution with parameters $a$ and $b$, denoted as $U\sim\mathrm{B1}(p,a,b)$, if its p.d.f. is given by
$$\frac{\det(U)^{a-(p+1)/2}\det(I_p-U)^{b-(p+1)/2}}{B_p(a,b)},\quad 0<U<I_p,$$
where $\operatorname{Re}(a)>(p-1)/2$ and $\operatorname{Re}(b)>(p-1)/2$.

Definition 3. A $p\times p$ random symmetric positive definite matrix $V$ is said to have a matrix variate beta type 2 distribution with parameters $a$ and $b$, denoted as $V\sim\mathrm{B2}(p,a,b)$, if its p.d.f. is given by
$$\frac{\det(V)^{a-(p+1)/2}\det(I_p+V)^{-(a+b)}}{B_p(a,b)},\quad V>0,$$
where $\operatorname{Re}(a)>(p-1)/2$ and $\operatorname{Re}(b)>(p-1)/2$.

Note that if $U\sim\mathrm{B1}(p,a,b)$, then $I_p-U\sim\mathrm{B1}(p,b,a)$. Further, if $X$ and $Y$ are independent, $X\sim\mathrm{Ga}(p,a)$ and $Y\sim\mathrm{Ga}(p,b)$, then $(X+Y)^{-1/2}X(X+Y)^{-1/2}\sim\mathrm{B1}(p,a,b)$ and $Y^{-1/2}XY^{-1/2}\sim\mathrm{B2}(p,a,b)$.
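
The $p=1$ case of these gamma-based constructions is easy to test by Monte Carlo; the sketch below checks both with a Kolmogorov–Smirnov test:

```python
import numpy as np
from scipy import stats

# If x ~ Ga(a) and y ~ Ga(b) are independent, then x/(x+y) ~ B1(a, b)
# and x/y ~ B2(a, b) (the beta prime distribution).
rng = np.random.default_rng(1)
a, b = 2.5, 4.0
x = rng.gamma(a, size=100_000)
y = rng.gamma(b, size=100_000)
print(stats.kstest(x / (x + y), "beta", args=(a, b)))   # large p-value expected
print(stats.kstest(x / y, "betaprime", args=(a, b)))    # large p-value expected
```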

We conclude this section by evaluating the normalizing constant $C$ in (3). Since the density integrates to one over its support set, we have
$$C^{-1}=\int_{X>0}\det(X)^{\nu-(p+1)/2}\,{}_1F_1(\alpha;\beta;-\Sigma^{-1}X)\,dX.$$
Rewriting ${}_1F_1(\alpha;\beta;-\Sigma^{-1}X)=\operatorname{etr}(-\Sigma^{-1}X)\,{}_1F_1(\beta-\alpha;\beta;\Sigma^{-1}X)$ using Kummer's relation (24) and integrating by applying (21), we get
$$C^{-1}=\Gamma_p(\nu)\det(\Sigma)^{\nu}\,{}_2F_1(\beta-\alpha,\nu;\beta;I_p),$$
where $\operatorname{Re}(\nu)>(p-1)/2$. Finally, writing ${}_2F_1(\beta-\alpha,\nu;\beta;I_p)$ in terms of multivariate gamma functions by using (23), we obtain
$$C=\frac{\Gamma_p(\alpha)\Gamma_p(\beta-\nu)}{\Gamma_p(\nu)\Gamma_p(\beta)\Gamma_p(\alpha-\nu)}\det(\Sigma)^{-\nu},$$
where $\operatorname{Re}(\nu)>(p-1)/2$, $\operatorname{Re}(\alpha-\nu)>(p-1)/2$, and $\operatorname{Re}(\beta-\nu)>(p-1)/2$.
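
For $p=1$, the same three steps can be written out with scalar functions; under the parameterization used above, the derivation reads:

```latex
\begin{align*}
C^{-1} &= \int_0^{\infty} x^{\nu-1}\,{}_1F_1(\alpha;\beta;-x)\,dx
        = \int_0^{\infty} x^{\nu-1} e^{-x}\,{}_1F_1(\beta-\alpha;\beta;x)\,dx
        && \text{(Kummer's relation)}\\
       &= \sum_{k=0}^{\infty}\frac{(\beta-\alpha)_k}{(\beta)_k\,k!}
          \int_0^{\infty} x^{\nu+k-1}e^{-x}\,dx
        = \Gamma(\nu)\,{}_2F_1(\beta-\alpha,\nu;\beta;1)
        && \text{(termwise integration)}\\
       &= \frac{\Gamma(\nu)\,\Gamma(\beta)\,\Gamma(\alpha-\nu)}
               {\Gamma(\alpha)\,\Gamma(\beta-\nu)}.
        && \text{(Gauss's summation)}
\end{align*}
```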

3. Properties

In this section we study several properties of the confluent hypergeometric function kind 1 distribution defined in Section 1. For the sake of completeness, we first state the following results established in Gupta and Nagar [1].

(1) Let $X\sim\mathrm{CH1}(\nu,\alpha,\beta;\Sigma)$ and let $A$ be a $p\times p$ constant nonsingular matrix. Then, $AXA'\sim\mathrm{CH1}(\nu,\alpha,\beta;A\Sigma A')$.

(2) Let $X\sim\mathrm{CH1}(\nu,\alpha,\beta;I_p)$ and let $H$ be a $p\times p$ orthogonal matrix whose elements are either constants or random variables distributed independently of $X$. Then, the distribution of $X$ is invariant under the transformation $X\to HXH'$ if $H$ is a matrix of constants. Further, if $H$ is a random matrix, then $HXH'$ and $H$ are independent.

(3) Let $X\sim\mathrm{CH1}(\nu,\alpha,\beta;\Sigma)$. Then, the cumulative distribution function (cdf) of $X$ is derived as where .

(4) Let $X\sim\mathrm{CH1}(\nu,\alpha,\beta;\Sigma)$, where $X$ is a $p\times p$ matrix. Define and . If , then (i) and are independent, , , and (ii) and are independent, and .

(5) Let $A$ be a constant matrix of rank . If $X\sim\mathrm{CH1}(\nu,\alpha,\beta;\Sigma)$, then and .

(6) Let $X\sim\mathrm{CH1}(\nu,\alpha,\beta;\Sigma)$ and let $a$ be a nonzero $p$-dimensional column vector of constants; then and . Further, if $y$ is a $p$-dimensional random vector, independent of $X$, and , then it follows that and .

It may also be mentioned here that properties (1)–(6) given above are modified forms of results given in Section 8.10 of Gupta and Nagar [1].

If the random matrices and are independent, and , , then Roux and van der Merwe [11] have shown that has a matrix variate beta type 2 distribution with parameters and .

The matrix variate confluent hypergeometric function kind 1 distribution can be derived as the distribution of the matrix ratio of independent gamma and beta matrices. It has been shown in Gupta and Nagar [1] that if $X\sim\mathrm{Ga}(p,\nu)$ and $U\sim\mathrm{B1}(p,\alpha-\nu,\beta-\alpha)$ are independent, then $U^{-1/2}XU^{-1/2}\sim\mathrm{CH1}(\nu,\alpha,\beta)$.
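
This stochastic representation gives a direct way to simulate the distribution. The sketch below assumes the parameter correspondence $X\sim\mathrm{Ga}(p,\nu)$, $U\sim\mathrm{B1}(p,\alpha-\nu,\beta-\alpha)$ stated above (our reading, requiring $\beta>\alpha$ and $\alpha>\nu$); the helpers repeat the Bartlett-type construction sketched in Section 1:

```python
import numpy as np

def sample_matrix_gamma(alpha, p, rng):
    """Ga(p, alpha, I): Bartlett-type construction, as sketched in Section 1."""
    T = np.zeros((p, p))
    for i in range(p):
        T[i, i] = np.sqrt(rng.gamma(alpha - i / 2.0))
        T[i, :i] = rng.normal(scale=np.sqrt(0.5), size=i)
    return T @ T.T

def sqrtm_spd(A):
    """Unique symmetric positive definite square root A^{1/2}."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(w)) @ V.T

def sample_ch1(nu, alpha, beta, p, rng):
    """S = U^{-1/2} X U^{-1/2} with X ~ Ga(p, nu) and U ~ B1(p, alpha-nu, beta-alpha);
    the mapping of (nu, alpha, beta) to CH1(nu, alpha, beta) is an assumption."""
    X = sample_matrix_gamma(nu, p, rng)
    A = sample_matrix_gamma(alpha - nu, p, rng)
    B = sample_matrix_gamma(beta - alpha, p, rng)
    Si = np.linalg.inv(sqrtm_spd(A + B))
    U = Si @ A @ Si                      # U ~ B1(p, alpha-nu, beta-alpha)
    Ui = np.linalg.inv(sqrtm_spd(U))
    return Ui @ X @ Ui

rng = np.random.default_rng(2)
S = sample_ch1(2.0, 4.0, 6.0, 2, rng)    # one 2 x 2 draw
```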

The expected values of and , for , can easily be obtained from the above results. For any fixed , , where , and where . Hence, for all , which implies that

The Laplace transform of the density of $X$, where $X\sim\mathrm{CH1}(\nu,\alpha,\beta;\Sigma)$, is given by
$$E[\operatorname{etr}(-XZ)]=\frac{\Gamma_p(\alpha)\Gamma_p(\beta-\nu)}{\Gamma_p(\beta)\Gamma_p(\alpha-\nu)}\det(I_p+\Sigma Z)^{-\nu}\,{}_2F_1\!\left(\beta-\alpha,\nu;\beta;(I_p+\Sigma Z)^{-1}\right),\quad Z>0,$$
where we have used (24) and (21). From the above expression, the Laplace transform of the density of $X$, where $X\sim\mathrm{CH1}(\nu,\alpha,\beta)$, is derived as
$$E[\operatorname{etr}(-XZ)]=\frac{\Gamma_p(\alpha)\Gamma_p(\beta-\nu)}{\Gamma_p(\beta)\Gamma_p(\alpha-\nu)}\det(I_p+Z)^{-\nu}\,{}_2F_1\!\left(\beta-\alpha,\nu;\beta;(I_p+Z)^{-1}\right).$$

Theorem 4. Let $X\sim\mathrm{CH1}(\nu,\alpha,\beta;\Sigma)$; then
$$E[\det(X)^{h}]=\det(\Sigma)^{h}\,\frac{\Gamma_p(\nu+h)}{\Gamma_p(\nu)}\,\frac{\Gamma_p(\alpha-\nu-h)}{\Gamma_p(\alpha-\nu)}\,\frac{\Gamma_p(\beta-\nu)}{\Gamma_p(\beta-\nu-h)},$$
where $\operatorname{Re}(\nu+h)>(p-1)/2$, $\operatorname{Re}(\alpha-\nu-h)>(p-1)/2$, and $\operatorname{Re}(\beta-\nu-h)>(p-1)/2$.

Proof. From the density (5) of $X$, we have
$$E[\det(X)^{h}]=C\int_{X>0}\det(X)^{\nu+h-(p+1)/2}\,{}_1F_1(\alpha;\beta;-\Sigma^{-1}X)\,dX.$$
Now, evaluating the above integral by using (34), we get
$$E[\det(X)^{h}]=C\,\frac{\Gamma_p(\nu+h)\,\Gamma_p(\beta)\,\Gamma_p(\alpha-\nu-h)}{\Gamma_p(\alpha)\,\Gamma_p(\beta-\nu-h)}\det(\Sigma)^{\nu+h},$$
where $\operatorname{Re}(\nu+h)>(p-1)/2$, $\operatorname{Re}(\alpha-\nu-h)>(p-1)/2$, and $\operatorname{Re}(\beta-\nu-h)>(p-1)/2$. Finally, substituting for $C$ from (4) and simplifying the above expression, we get the desired result.
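
A quick univariate Monte Carlo experiment is consistent with this moment formula; here $p=1$, the sampler uses the gamma/beta representation, and the parameterization is the one assumed throughout our reconstruction:

```python
import numpy as np
from scipy.special import gamma

# p = 1 check of E[X^h] = [G(nu+h)/G(nu)] [G(a-nu-h)/G(a-nu)] [G(b-nu)/G(b-nu-h)],
# with X ~ CH1(nu, a, b) simulated as X = g/u, g ~ Ga(nu), u ~ B1(a - nu, b - a).
rng = np.random.default_rng(3)
nu, a, b, h = 2.0, 5.0, 6.5, 0.7          # needs -nu < h < a - nu
x = rng.gamma(nu, size=1_000_000) / rng.beta(a - nu, b - a, size=1_000_000)
mc = np.mean(x ** h)
exact = (gamma(nu + h) / gamma(nu)) * (gamma(a - nu - h) / gamma(a - nu)) \
        * (gamma(b - nu) / gamma(b - nu - h))
print(mc, exact)                           # the two values should be close
```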

Corollary 5. Let $X\sim\mathrm{CH1}(\nu,\alpha,\beta)$; then
$$E[\det(X)^{h}]=\frac{\Gamma_p(\nu+h)}{\Gamma_p(\nu)}\,\frac{\Gamma_p(\alpha-\nu-h)}{\Gamma_p(\alpha-\nu)}\,\frac{\Gamma_p(\beta-\nu)}{\Gamma_p(\beta-\nu-h)}, \quad (42)$$
where $\operatorname{Re}(\nu+h)>(p-1)/2$, $\operatorname{Re}(\alpha-\nu-h)>(p-1)/2$, and $\operatorname{Re}(\beta-\nu-h)>(p-1)/2$.

Using (42), the mean and the variance of $\det(X)$ are derived as
$$E[\det(X)]=\frac{\Gamma_p(\nu+1)\,\Gamma_p(\alpha-\nu-1)\,\Gamma_p(\beta-\nu)}{\Gamma_p(\nu)\,\Gamma_p(\alpha-\nu)\,\Gamma_p(\beta-\nu-1)},$$
where $\operatorname{Re}(\alpha-\nu-1)>(p-1)/2$, and
$$\operatorname{var}[\det(X)]=E[\det(X)^{2}]-\{E[\det(X)]\}^{2},$$
where $E[\det(X)^{2}]$ follows from (42) with $h=2$. For a symmetric matrix , is derived as . Replacing by its integral representation, namely, where and , one obtains . Now, evaluating the above integral by using (19), we obtain . Finally, evaluating the integral involving by using (Khatri [12]) we get where and .

Proceeding similarly and using the result (Khatri [12]) the expected value of is derived as where . Finally, evaluating the above integral using (20), we obtain

In the next theorem, we derive the confluent hypergeometric function kind 1 distribution using independent beta and gamma matrices.

Theorem 6. Let $X\sim\mathrm{Ga}(p,\nu)$ and $U\sim\mathrm{B1}(p,\alpha-\nu,\beta-\alpha)$ be independent, where $\operatorname{Re}(\nu)>(p-1)/2$, $\operatorname{Re}(\alpha-\nu)>(p-1)/2$, and $\operatorname{Re}(\beta-\alpha)>(p-1)/2$. Then, $U^{-1/2}XU^{-1/2}\sim\mathrm{CH1}(\nu,\alpha,\beta)$.

Proof. See Gupta and Nagar [1].

Theorem 7. Let $X\sim\mathrm{Ga}(p,\nu)$ and $U\sim\mathrm{B1}(p,\alpha-\nu,\beta-\alpha)$ be independent. Then, $X^{1/2}U^{-1}X^{1/2}\sim\mathrm{CH1}(\nu,\alpha,\beta)$.

Proof. The result follows from Theorem 6 and the facts that $U^{-1/2}XU^{-1/2}$ and $X^{1/2}U^{-1}X^{1/2}$ have the same eigenvalues and that the matrix variate confluent hypergeometric function kind 1 distribution is orthogonally invariant.

Theorem 8. Let and be independent, and . Then, .

Proof. Noting that and using Theorem 6 we get the result.

Theorem 9. Let and be independent, and . Then, .

Proof. The desired result is obtained by observing that and using Theorem 6.

Theorem 10. Let and be independent, and . Then, .

Proof. Noting that and using Theorem 6 we get the result.

Theorem 11. Let and be independent, and . Then, .

Proof. It is well known that and are independent, and . Therefore, using Theorem 7, .

Theorem 12. Let and be independent, and . Then, .

Proof. The proof is similar to the proof of Theorem 11.

4. Distributions of Sum and Quotients

In statistical distribution theory it is well known that if $X\sim\mathrm{Ga}(p,a)$ and $Y\sim\mathrm{Ga}(p,b)$ are independent, then $X+Y\sim\mathrm{Ga}(p,a+b)$, $(X+Y)^{-1/2}X(X+Y)^{-1/2}\sim\mathrm{B1}(p,a,b)$, and $Y^{-1/2}XY^{-1/2}\sim\mathrm{B2}(p,a,b)$. In this section we derive similar results when $X$ and $Y$ are independent confluent hypergeometric function kind 1 and gamma matrices, respectively.

Theorem 13. Let $X\sim\mathrm{CH1}(\nu,\alpha,\beta)$ and $Y\sim\mathrm{Ga}(p,\gamma)$ be independent. Then, the p.d.f. of $R=(X+Y)^{-1/2}X(X+Y)^{-1/2}$ is given by
$$\frac{\Gamma_p(\alpha)\Gamma_p(\beta-\nu)\Gamma_p(\nu+\gamma)}{\Gamma_p(\nu)\Gamma_p(\beta)\Gamma_p(\alpha-\nu)\Gamma_p(\gamma)}\det(R)^{\nu-(p+1)/2}\det(I_p-R)^{\gamma-(p+1)/2}\,{}_2F_1(\beta-\alpha,\nu+\gamma;\beta;R),\quad 0<R<I_p.$$

Proof. Using the independence, the joint p.d.f. of $X$ and $Y$ is given by
$$K\det(X)^{\nu-(p+1)/2}\det(Y)^{\gamma-(p+1)/2}\operatorname{etr}(-Y)\,{}_1F_1(\alpha;\beta;-X),\quad X>0,\ Y>0, \quad (58)$$
where
$$K=\frac{\Gamma_p(\alpha)\Gamma_p(\beta-\nu)}{\Gamma_p(\nu)\Gamma_p(\beta)\Gamma_p(\alpha-\nu)\Gamma_p(\gamma)}. \quad (59)$$
Making the transformation $S=X+Y$, $R=S^{-1/2}XS^{-1/2}$, with the Jacobian $J(X,Y\to R,S)=\det(S)^{(p+1)/2}$, we obtain the joint p.d.f. of $R$ and $S$ as
$$K\det(R)^{\nu-(p+1)/2}\det(I_p-R)^{\gamma-(p+1)/2}\operatorname{etr}(-S)\det(S)^{\nu+\gamma-(p+1)/2}\operatorname{etr}(SR)\,{}_1F_1(\alpha;\beta;-S^{1/2}RS^{1/2}), \quad (60)$$
where $0<R<I_p$ and $S>0$. Now, rewriting $\operatorname{etr}(SR)\,{}_1F_1(\alpha;\beta;-S^{1/2}RS^{1/2})={}_1F_1(\beta-\alpha;\beta;S^{1/2}RS^{1/2})$ by Kummer's relation (24), integrating $S$ in (60) by applying (21), and substituting for $K$, we obtain the desired result.

Corollary 14. Let and be independent, and . Then, the p.d.f. of is given by

Corollary 15. Let and be independent, and . Then, the p.d.f. of is given by

Proof. Interchanging subscripts and in Corollary 14, the p.d.f. of is given by where and . Now, the result follows from the fact that and have the same eigenvalues, and the distribution is orthogonally invariant.

Corollary 16. Let and be independent, and . Then, .

Corollary 17. Let the random matrices and be independent, and . Then, .

Corollary 18. Let and be independent, and . Then, .

Theorem 19. Let , , and be independent, , , and . Then, the p.d.f. of is given by

Proof. Using the independence of and and Theorem 6, . Further, using the independence of and and Theorem 13, we obtain the desired result.

Corollary 20. Let , , and be independent, , , and . Then, .

Proof. For , the p.d.f. of given in the above theorem reduces to Now, simplifying the Gauss hypergeometric function as where we have used (16), the desired result is obtained.

Corollary 21. Let and be independent, and . Then, the p.d.f. of is given by Further, if , then .

Proof. Observe that and have the same eigenvalues and that the distribution of is orthogonally invariant. Therefore, the random matrices and are identically distributed. Now, setting and , where and , we observe that and are identically distributed.

Theorem 22. Let $X\sim\mathrm{CH1}(\nu,\alpha,\beta)$ and $Y\sim\mathrm{Ga}(p,\gamma)$ be independent. Then, the p.d.f. of $S=X+Y$ is given by
$$\frac{\Gamma_p(\alpha)\Gamma_p(\beta-\nu)}{\Gamma_p(\beta)\Gamma_p(\alpha-\nu)\Gamma_p(\nu+\gamma)}\operatorname{etr}(-S)\det(S)^{\nu+\gamma-(p+1)/2}\,{}_2F_2(\beta-\alpha,\nu;\beta,\nu+\gamma;S),\quad S>0,$$
and the p.d.f. of $W=Y^{-1/2}XY^{-1/2}$ is given by
$$\frac{\Gamma_p(\alpha)\Gamma_p(\beta-\nu)\Gamma_p(\nu+\gamma)}{\Gamma_p(\nu)\Gamma_p(\beta)\Gamma_p(\alpha-\nu)\Gamma_p(\gamma)}\det(W)^{\nu-(p+1)/2}\det(I_p+W)^{-(\nu+\gamma)}\,{}_2F_1\!\left(\beta-\alpha,\nu+\gamma;\beta;W(I_p+W)^{-1}\right),\quad W>0.$$

Proof. Substituting $W=Y^{-1/2}XY^{-1/2}$ and $V=Y$ with the Jacobian $J(X,Y\to W,V)=\det(V)^{(p+1)/2}$ in (58), we obtain the joint p.d.f. of $W$ and $V$ as
$$K\det(W)^{\nu-(p+1)/2}\operatorname{etr}(-V)\det(V)^{\nu+\gamma-(p+1)/2}\,{}_1F_1(\alpha;\beta;-V^{1/2}WV^{1/2}),\quad W>0,\ V>0.$$
Now, applying Kummer's relation (24) and integrating $V$ by using (21) yields the density of $W$. The marginal density of $S$ is obtained by integrating $R$ in (60) by using (22).

It may be remarked here that the density of $W$ given in the above theorem can also be obtained from the density of $R$ derived in Theorem 13 by making the transformation $W=(I_p-R)^{-1/2}R(I_p-R)^{-1/2}$.

Corollary 23. Let and be independent random matrices, and . Then, the p.d.f. of is given by and the p.d.f. of is given by

Proof. Interchanging subscripts and in Theorem 22, the p.d.f. of is given by where now and . The desired result is now obtained by observing that . Similarly, the p.d.f. of is obtained by interchanging subscripts and in the p.d.f. of .

Corollary 24. Let the random matrices and be independent, and . Then, .

Proof. The desired result is obtained by substituting in the p.d.f. of and simplifying the resulting expression by using (16).

Corollary 25. Let the random matrices and be independent, and . Then, .

Proof. The desired result is obtained by substituting in the p.d.f. of and simplifying the resulting expression by using (16).

Corollary 26. Let the random matrices and be independent, and . Then .

Proof. The result is obtained by substituting in the p.d.f. of and simplifying the resulting expression by using (15).

Corollary 27. Let the random matrices and be independent, and . Then .

Proof. The result is obtained by substituting in the p.d.f. of and simplifying the resulting expression by using (15).

Corollary 28. Let the random matrices and be independent, and . Then, and .

Proof. Substitute in the p.d.f. of or in the p.d.f. of and simplify the resulting expression to get the desired result.

5. Distribution of the Determinant

This section gives distributional results for the determinant of a random matrix distributed as confluent hypergeometric function kind 1.

In an unpublished report, Coelho et al. [13] have shown that if $R$ is a positive random variable and $E(R^{h})$ is defined for all $h$ in some neighborhood of zero, then the moments $E(R^{h})$ uniquely identify the distribution of $R$. In the next theorem, we will use this result to derive the distribution of the product of two independent confluent hypergeometric function kind 1 variables.

Theorem 29. If and are independent, then .

Proof. The $h$th moment of the product is derived by using (45). Now, using the duplication formula for the gamma function, namely,
$$\Gamma(2a)=(2\pi)^{-1/2}\,2^{2a-1/2}\,\Gamma(a)\,\Gamma\!\left(a+\frac{1}{2}\right),$$
the $h$th moment of the product is rewritten accordingly.
Finally, comparison of the above expression with the one given in (45) yields the desired result.
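
The duplication formula itself is standard and easy to confirm numerically:

```python
import math

# Gamma(2a) = (2*pi)^(-1/2) * 2^(2a - 1/2) * Gamma(a) * Gamma(a + 1/2).
for a in (0.3, 1.0, 2.7):
    lhs = math.gamma(2 * a)
    rhs = (2 * math.pi) ** -0.5 * 2 ** (2 * a - 0.5) * math.gamma(a) * math.gamma(a + 0.5)
    assert math.isclose(lhs, rhs)
```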

Theorem 30. If $X\sim\mathrm{CH1}(\nu,\alpha,\beta)$, then $\det(X)$ is distributed as $\prod_{i=1}^{p}x_i$, where $x_1,\ldots,x_p$ are independent, $x_i\sim\mathrm{CH1}\!\left(\nu-\frac{i-1}{2},\,\alpha-(i-1),\,\beta-(i-1)\right)$, $i=1,\ldots,p$.

Proof. Writing the multivariate gamma functions in terms of ordinary gamma functions, (42) is rewritten as a product of univariate moment expressions. Now, comparing the above expression with (45), we get the desired result.

Corollary 31. If , then

Proof. For , is distributed as , where and are independent, and . From Theorem 29, we have .

Corollary 32. If , then is distributed as , where are independent, , .

6. Distribution of Eigenvalues

In this section, we derive the density of the eigenvalues of a random matrix distributed as confluent hypergeometric function kind 1.

Theorem 33. Let $X$ be a $p\times p$ positive definite random matrix with the p.d.f. $f(X)$. Then, the joint p.d.f. of the eigenvalues $\lambda_1>\lambda_2>\cdots>\lambda_p>0$ of $X$ is given by
$$\frac{\pi^{p^2/2}}{\Gamma_p(p/2)}\prod_{i<j}(\lambda_i-\lambda_j)\int_{O(p)}f(H\Lambda H')\,dH,$$
where $\Lambda=\operatorname{diag}(\lambda_1,\ldots,\lambda_p)$ and $dH$ is the unit invariant Haar measure on the group of orthogonal matrices $O(p)$.

The proof of Theorem 33 and several other related results can be found in Muirhead [7].

Theorem 34. If $X\sim\mathrm{CH1}(\nu,\alpha,\beta;\Sigma)$, then the joint p.d.f. of the eigenvalues $\lambda_1>\cdots>\lambda_p>0$ of $X$ is given by
$$\frac{\pi^{p^2/2}\,\Gamma_p(\alpha)\,\Gamma_p(\beta-\nu)\,\det(\Sigma)^{-\nu}}{\Gamma_p(p/2)\,\Gamma_p(\nu)\,\Gamma_p(\beta)\,\Gamma_p(\alpha-\nu)}\prod_{i=1}^{p}\lambda_i^{\nu-(p+1)/2}\prod_{i<j}(\lambda_i-\lambda_j)\,{}_1F_1(\alpha;\beta;-\Sigma^{-1},\Lambda),$$
where $\Lambda=\operatorname{diag}(\lambda_1,\ldots,\lambda_p)$ and ${}_1F_1(\cdot;\cdot;\cdot,\cdot)$ is the two-matrix-argument confluent hypergeometric function.

Proof. The p.d.f. of $X$ is given by (5). Applying Theorem 33, we obtain the joint p.d.f. of the eigenvalues of $X$ as
$$\frac{\pi^{p^2/2}}{\Gamma_p(p/2)}\,C\,\det(\Lambda)^{\nu-(p+1)/2}\prod_{i<j}(\lambda_i-\lambda_j)\int_{O(p)}{}_1F_1(\alpha;\beta;-\Sigma^{-1}H\Lambda H')\,dH.$$
Now, using (28), we obtain the desired result.

7. A Generalized Form

In this section, we give a more general form of the matrix variate confluent hypergeometric function kind 1 distribution by introducing an additional factor in the p.d.f. (5). The p.d.f. of , in this case, is given by where . We will write if the density of is given by (82). For and , the above p.d.f. reduces to a density which is a special case of the generalized hypergeometric function density defined by Roux [14].

Theorem 35. Let . Further, let the prior distribution of be a generalized matrix variate confluent hypergeometric function kind 1 distribution with parameters , , , , , and , . Then, the marginal distribution of is a generalized inverted matrix variate beta distribution with the density where .

Proof. By definition, the marginal density of , denoted by , is obtained as Now, substituting for and , we get Finally, evaluating the above expression by using (21) and simplifying, we get the desired result.

Theorem 36. Let . Further, let the prior distribution of be a generalized matrix variate confluent hypergeometric function kind 1 distribution with parameters , , , , , and , . Then, the posterior distribution of is a generalized matrix variate confluent hypergeometric function kind 1 distribution with parameters , , , , , and .

Proof. By definition and Theorem 35, we have Now, substituting appropriately, we get which is the desired result.

From the above results it is quite clear that the generalized matrix variate confluent hypergeometric function kind 1 distribution, used as a prior distribution, is conjugate. Thus, this distribution may be used as an alternative to the matrix variate gamma distribution.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The research work of Daya K. Nagar was supported by the Sistema Universitario de Investigación, Universidad de Antioquia, by Project no. IN10164CE.