#### Abstract

We study the matrix variate confluent hypergeometric function kind 1 distribution, which is a generalization of the matrix variate gamma distribution, and give several of its properties. We also derive the density functions of the sum and of certain matrix quotients of independent random matrices $X$ and $Y$, where $X$ follows a confluent hypergeometric function kind 1 distribution and $Y$ a gamma distribution.

#### 1. Introduction

The matrix variate gamma distribution has many applications in multivariate statistical analysis. The Wishart distribution, which is the distribution of the sample variance covariance matrix when sampling from a multivariate normal distribution, is a special case of the matrix variate gamma distribution.

The purpose of this paper is to give a generalization of the matrix variate gamma distribution and study its properties.

We begin with a brief review of some definitions and notation. We adhere to standard notation (cf. Gupta and Nagar [1]). Let $A$ be a $p \times p$ matrix. Then $A'$ denotes the transpose of $A$; $\mathrm{tr}(A)$ the trace of $A$; $\mathrm{etr}(A) = \exp(\mathrm{tr}(A))$; $\det(A)$ the determinant of $A$; the norm of $A$ the maximum of the absolute values of the eigenvalues of the matrix $A$; $A > 0$ means that $A$ is symmetric positive definite; and $A^{1/2}$ denotes the unique symmetric positive definite square root of $A > 0$. The multivariate gamma function is defined by
$$\Gamma_p(a) = \int_{X>0} \mathrm{etr}(-X)\,\det(X)^{a-(p+1)/2}\,dX, \quad \operatorname{Re}(a) > \frac{p-1}{2}. \tag{1}$$

The symmetric positive definite random matrix $X$ ($p \times p$) is said to have a matrix variate gamma distribution, denoted by $X \sim \mathrm{Ga}(p, \alpha, \Sigma)$, if its probability density function (p.d.f.) is given by
$$\frac{\det(\Sigma)^{-\alpha}}{\Gamma_p(\alpha)}\,\mathrm{etr}(-\Sigma^{-1}X)\,\det(X)^{\alpha-(p+1)/2}, \quad X > 0,$$
where $\Sigma$ is a symmetric positive definite matrix of order $p$ and $\operatorname{Re}(\alpha) > (p-1)/2$. For $\Sigma = I_p$, the above density reduces to a standard matrix variate gamma density and in this case we write $X \sim \mathrm{Ga}(p, \alpha)$. Further, if $X \sim \mathrm{Ga}(p, \alpha)$ and $Y \sim \mathrm{Ga}(p, \beta)$ are independent gamma matrices, then the random matrix $(X+Y)^{-1/2}X(X+Y)^{-1/2}$ follows a matrix variate beta type 1 distribution with parameters $\alpha$ and $\beta$.
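For the scalar case $p = 1$ the gamma-Wishart connection can be illustrated numerically. The following Python sketch (an illustration added here, not part of the original text; all variable names are ours) checks by Monte Carlo that a sum of squared $N(0, \sigma^2)$ variables, i.e. a $1 \times 1$ Wishart matrix, behaves like a gamma variate with shape $n/2$ and scale $2\sigma^2$:

```python
import numpy as np

# Monte Carlo check (p = 1): if z_1, ..., z_n are i.i.d. N(0, sigma^2), then
# S = sum_i z_i^2 is Wishart W_1(n, sigma^2), i.e. gamma with shape n/2 and
# scale 2*sigma^2, so E[S] = n*sigma^2 and Var[S] = 2*n*sigma^4.
rng = np.random.default_rng(0)
n, sigma2, reps = 8, 2.0, 200_000

z = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
s = (z ** 2).sum(axis=1)

mean_theory = n * sigma2          # 16.0
var_theory = 2 * n * sigma2 ** 2  # 64.0

print(s.mean(), mean_theory)
print(s.var(), var_theory)
```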

By replacing $\mathrm{etr}(-\Sigma^{-1}X)$ by the confluent hypergeometric function of matrix argument ${}_1F_1(\alpha; \beta; -\Sigma^{-1}X)$, a generalization of the matrix variate gamma distribution can be defined by the p.d.f.
$$C(\nu, \alpha, \beta, \Sigma)\,\det(X)^{\nu-(p+1)/2}\,{}_1F_1(\alpha; \beta; -\Sigma^{-1}X), \quad X > 0, \tag{3}$$
where $C(\nu, \alpha, \beta, \Sigma)$ is the normalizing constant. In Section 2, it has been shown that, for $\operatorname{Re}(\nu) > (p-1)/2$, $\operatorname{Re}(\beta) > (p-1)/2$, $\operatorname{Re}(\alpha-\nu) > (p-1)/2$, $\operatorname{Re}(\beta-\nu) > (p-1)/2$, and $\Sigma > 0$, the normalizing constant can be evaluated as
$$C(\nu, \alpha, \beta, \Sigma) = \frac{\Gamma_p(\alpha)\,\Gamma_p(\beta-\nu)}{\Gamma_p(\nu)\,\Gamma_p(\beta)\,\Gamma_p(\alpha-\nu)}\,\det(\Sigma)^{-\nu}. \tag{4}$$
Therefore, the p.d.f. in (3) can be written explicitly as
$$\frac{\Gamma_p(\alpha)\,\Gamma_p(\beta-\nu)}{\Gamma_p(\nu)\,\Gamma_p(\beta)\,\Gamma_p(\alpha-\nu)}\,\det(\Sigma)^{-\nu}\,\det(X)^{\nu-(p+1)/2}\,{}_1F_1(\alpha; \beta; -\Sigma^{-1}X), \quad X > 0, \tag{5}$$
where $\operatorname{Re}(\nu) > (p-1)/2$, $\operatorname{Re}(\beta) > (p-1)/2$, $\operatorname{Re}(\alpha-\nu) > (p-1)/2$, $\operatorname{Re}(\beta-\nu) > (p-1)/2$, $\Sigma > 0$, and ${}_1F_1$ is the confluent hypergeometric function of the first kind of matrix argument (Gupta and Nagar [1]). Since the density given above involves the confluent hypergeometric function, we will call the corresponding distribution a confluent hypergeometric function kind 1 distribution. We will write $X \sim \mathrm{CH}_p(\nu, \alpha, \beta, \Sigma)$ to say that the random matrix $X$ has a confluent hypergeometric function kind 1 distribution defined by the density (5). It has been shown by van der Merwe and Roux [2] that the above density can be obtained as a limiting case of a density involving the Gauss hypergeometric function of matrix argument. For $\beta = \alpha$, the density (5) reduces to a matrix variate gamma density, and for $\Sigma = I_p$ it reduces to
$$\frac{\Gamma_p(\alpha)\,\Gamma_p(\beta-\nu)}{\Gamma_p(\nu)\,\Gamma_p(\beta)\,\Gamma_p(\alpha-\nu)}\,\det(X)^{\nu-(p+1)/2}\,{}_1F_1(\alpha; \beta; -X), \quad X > 0, \tag{6}$$
where $\operatorname{Re}(\nu) > (p-1)/2$, $\operatorname{Re}(\beta) > (p-1)/2$, $\operatorname{Re}(\alpha-\nu) > (p-1)/2$, and $\operatorname{Re}(\beta-\nu) > (p-1)/2$. In this case we will write $X \sim \mathrm{CH}_p(\nu, \alpha, \beta)$. The matrix variate confluent hypergeometric function kind 1 distribution occurs as the distribution of the matrix ratio of independent gamma and beta matrices. For $p = 1$, (6) reduces to a univariate confluent hypergeometric function kind 1 density given by (Orozco-Castañeda et al. [3])
$$\frac{\Gamma(\alpha)\,\Gamma(\beta-\nu)}{\Gamma(\nu)\,\Gamma(\beta)\,\Gamma(\alpha-\nu)}\,x^{\nu-1}\,{}_1F_1(\alpha; \beta; -x), \quad x > 0, \tag{7}$$
where $\nu > 0$, $\alpha - \nu > 0$, $\beta - \nu > 0$, and ${}_1F_1$ is the confluent hypergeometric function of the first kind (Luke [4]). The random variable having the above density will be designated by $x \sim \mathrm{CH}(\nu, \alpha, \beta)$. Since the matrix variate confluent hypergeometric function kind 1 distribution is a generalization of the matrix variate gamma distribution, it can serve as an effective alternative to the matrix variate gamma distribution.
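As a numerical sanity check of the univariate confluent hypergeometric function kind 1 density discussed above, the following Python sketch (ours, not from the original text) verifies that a density of the assumed form $f(x) \propto x^{\nu-1}\,{}_1F_1(\alpha;\beta;-x)$, with normalizing constant $\Gamma(\alpha)\Gamma(\beta-\nu)/[\Gamma(\nu)\Gamma(\beta)\Gamma(\alpha-\nu)]$ (our reconstruction; the parameter values are arbitrary), integrates to one:

```python
import math
from scipy.integrate import quad
from scipy.special import hyp1f1

# Assumed univariate CH kind 1 density:
#   f(x) = C * x^(nu-1) * 1F1(alpha; beta; -x),  x > 0,
#   C = Gamma(alpha)*Gamma(beta-nu) / (Gamma(nu)*Gamma(beta)*Gamma(alpha-nu)).
nu, alpha, beta = 1.5, 2.5, 3.5   # beta > alpha > nu > 0 keeps f positive
C = (math.gamma(alpha) * math.gamma(beta - nu)
     / (math.gamma(nu) * math.gamma(beta) * math.gamma(alpha - nu)))

density = lambda x: C * x ** (nu - 1) * hyp1f1(alpha, beta, -x)
total, _ = quad(density, 0, math.inf)
print(total)  # should be close to 1
```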

Although ample information about the matrix variate gamma distribution is available, little appears to have been done in the literature to study the matrix variate confluent hypergeometric function kind 1 distribution.

In this paper, we study several properties including stochastic representations of the matrix variate confluent hypergeometric function kind 1 distribution. We also derive the density function of the matrix quotient of two independent random matrices having confluent hypergeometric function kind 1 and gamma distributions. Further, densities of several other matrix quotients and matrix products involving confluent hypergeometric function kind 1, beta type 1, beta type 2, and gamma matrices are derived.

#### 2. Some Definitions and Preliminary Results

In this section we give some definitions and preliminary results which are used in subsequent sections.

A more general integral representation of the multivariate gamma function can be obtained as
$$\int_{X>0} \mathrm{etr}(-\Sigma X)\,\det(X)^{a-(p+1)/2}\,dX = \Gamma_p(a)\,\det(\Sigma)^{-a}, \tag{8}$$
where $\operatorname{Re}(a) > (p-1)/2$ and $\operatorname{Re}(\Sigma) > 0$. The above result can be established for real $\Sigma > 0$ by substituting $X = \Sigma^{-1/2}\,Y\,\Sigma^{-1/2}$ with the Jacobian $J(X \to Y) = \det(\Sigma)^{-(p+1)/2}$ in (1), and it follows for complex $\Sigma$ by analytic continuation.
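In the scalar case ($p = 1$, with $\Sigma$ a positive scalar $\sigma$) the integral above is simply $\int_0^\infty e^{-\sigma x} x^{a-1}\,dx = \Gamma(a)\,\sigma^{-a}$, which is easy to confirm numerically (illustration ours):

```python
import math
from scipy.integrate import quad

# p = 1 case of the generalized gamma integral:
#   integral_0^inf exp(-sigma*x) * x^(a-1) dx = Gamma(a) * sigma^(-a).
a, sigma = 2.5, 3.0
lhs, _ = quad(lambda x: math.exp(-sigma * x) * x ** (a - 1), 0, math.inf)
rhs = math.gamma(a) * sigma ** (-a)
print(lhs, rhs)
```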

The multivariate generalization of the beta function is given by
$$B_p(a, b) = \int_{0<X<I_p} \det(X)^{a-(p+1)/2}\,\det(I_p - X)^{b-(p+1)/2}\,dX = \frac{\Gamma_p(a)\,\Gamma_p(b)}{\Gamma_p(a+b)}, \tag{9}$$
where $\operatorname{Re}(a) > (p-1)/2$ and $\operatorname{Re}(b) > (p-1)/2$.

The generalized hypergeometric function of one matrix argument, defined in Constantine [5], is given by
$${}_rF_s(a_1, \ldots, a_r; b_1, \ldots, b_s; X) = \sum_{k=0}^{\infty} \sum_{\kappa} \frac{(a_1)_\kappa \cdots (a_r)_\kappa}{(b_1)_\kappa \cdots (b_s)_\kappa}\,\frac{C_\kappa(X)}{k!}, \tag{10}$$
where $a_1, \ldots, a_r$, $b_1, \ldots, b_s$ are arbitrary complex numbers, $X$ is a $p \times p$ complex symmetric matrix, $C_\kappa(X)$ is the zonal polynomial of the complex symmetric matrix $X$ corresponding to the ordered partition $\kappa = (k_1, \ldots, k_p)$, $k_1 \geq \cdots \geq k_p \geq 0$, $k_1 + \cdots + k_p = k$, and $\sum_\kappa$ denotes summation over all partitions $\kappa$ of $k$. The generalized hypergeometric coefficient $(a)_\kappa$ used above is defined by
$$(a)_\kappa = \prod_{i=1}^{p} \left(a - \frac{i-1}{2}\right)_{k_i}, \tag{11}$$
where $(a)_k = a(a+1)\cdots(a+k-1)$, $k = 1, 2, \ldots$, with $(a)_0 = 1$. Conditions for convergence of the series in (10) are available in the literature. From (10), it follows that
$${}_0F_0(X) = \sum_{k=0}^{\infty} \sum_{\kappa} \frac{C_\kappa(X)}{k!} = \mathrm{etr}(X), \tag{12}$$
$${}_1F_1(a; c; X) = \sum_{k=0}^{\infty} \sum_{\kappa} \frac{(a)_\kappa}{(c)_\kappa}\,\frac{C_\kappa(X)}{k!}, \tag{13}$$
$${}_2F_1(a, b; c; X) = \sum_{k=0}^{\infty} \sum_{\kappa} \frac{(a)_\kappa (b)_\kappa}{(c)_\kappa}\,\frac{C_\kappa(X)}{k!}, \quad \|X\| < 1. \tag{14}$$
By taking $r = 1$, $s = 0$ in (10), it can be observed that
$${}_1F_0(a; X) = \sum_{k=0}^{\infty} \sum_{\kappa} (a)_\kappa\,\frac{C_\kappa(X)}{k!} = \det(I_p - X)^{-a}, \quad \|X\| < 1. \tag{15}$$
Substituting $c = b$ in (14) and using (15), the Gauss hypergeometric function is reduced as
$${}_2F_1(a, b; b; X) = \det(I_p - X)^{-a}, \quad \|X\| < 1. \tag{16}$$
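For $p = 1$ the zonal polynomial $C_\kappa(x)$ reduces to $x^k$ and the series definition above collapses to the classical one-variable hypergeometric series. The following Python sketch (ours) compares a truncated ${}_1F_1$ series against scipy's implementation:

```python
import math
from scipy.special import hyp1f1

def poch(a, k):
    """Rising factorial (a)_k = a*(a+1)*...*(a+k-1), with (a)_0 = 1."""
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def f11_series(a, c, x, terms=60):
    """Truncated series sum_k (a)_k/(c)_k * x^k / k!  (the p = 1 case)."""
    return sum(poch(a, k) / poch(c, k) * x ** k / math.factorial(k)
               for k in range(terms))

a, c, x = 1.5, 4.0, 2.0
print(f11_series(a, c, x), hyp1f1(a, c, x))
```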

The integral representations of the confluent hypergeometric function and the Gauss hypergeometric function are given by
$${}_1F_1(a; c; X) = \frac{\Gamma_p(c)}{\Gamma_p(a)\,\Gamma_p(c-a)} \int_{0<Y<I_p} \mathrm{etr}(XY)\,\det(Y)^{a-(p+1)/2}\,\det(I_p - Y)^{c-a-(p+1)/2}\,dY, \tag{17}$$
$${}_2F_1(a, b; c; X) = \frac{\Gamma_p(c)}{\Gamma_p(a)\,\Gamma_p(c-a)} \int_{0<Y<I_p} \det(I_p - XY)^{-b}\,\det(Y)^{a-(p+1)/2}\,\det(I_p - Y)^{c-a-(p+1)/2}\,dY, \quad \|X\| < 1, \tag{18}$$
where $\operatorname{Re}(a) > (p-1)/2$ and $\operatorname{Re}(c-a) > (p-1)/2$.
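For $p = 1$ these are the classical Euler integral representations, which can be verified directly (numerical illustration ours):

```python
import math
from scipy.integrate import quad
from scipy.special import hyp1f1, hyp2f1

# p = 1 Euler-type representations:
#   1F1(a; c; x)    = G * int_0^1 e^(x*t)      * t^(a-1) * (1-t)^(c-a-1) dt
#   2F1(a, b; c; x) = G * int_0^1 (1-x*t)^(-b) * t^(a-1) * (1-t)^(c-a-1) dt
# with G = Gamma(c) / (Gamma(a)*Gamma(c-a)), 0 < a < c, |x| < 1 for 2F1.
a, b, c, x = 1.2, 0.7, 3.5, 0.4
G = math.gamma(c) / (math.gamma(a) * math.gamma(c - a))
i1, _ = quad(lambda t: math.exp(x * t) * t ** (a - 1) * (1 - t) ** (c - a - 1), 0, 1)
i2, _ = quad(lambda t: (1 - x * t) ** (-b) * t ** (a - 1) * (1 - t) ** (c - a - 1), 0, 1)
print(G * i1, hyp1f1(a, c, x))
print(G * i2, hyp2f1(a, b, c, x))
```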

Further generalizations of (8) and (9) in terms of zonal polynomials, due to Constantine [5], are given as
$$\int_{X>0} \mathrm{etr}(-\Lambda X)\,\det(X)^{a-(p+1)/2}\,C_\kappa(X)\,dX = (a)_\kappa\,\Gamma_p(a)\,\det(\Lambda)^{-a}\,C_\kappa(\Lambda^{-1}), \quad \operatorname{Re}(a) > \frac{p-1}{2}, \tag{19}$$
$$\int_{0<X<I_p} \det(X)^{a-(p+1)/2}\,\det(I_p - X)^{b-(p+1)/2}\,C_\kappa(X)\,dX = \frac{(a)_\kappa}{(a+b)_\kappa}\,B_p(a, b)\,C_\kappa(I_p), \quad \operatorname{Re}(b) > \frac{p-1}{2},\tag{20}$$
respectively.

For $\operatorname{Re}(a) > (p-1)/2$ and $\operatorname{Re}(\Lambda) > 0$, we have
$$\int_{X>0} \mathrm{etr}(-\Lambda X)\,\det(X)^{a-(p+1)/2}\,{}_1F_1(\alpha; \beta; X)\,dX = \Gamma_p(a)\,\det(\Lambda)^{-a}\,{}_2F_1\!\left(\alpha, a; \beta; \Lambda^{-1}\right), \tag{21}$$
$$\int_{X>0} \mathrm{etr}(-\Lambda X)\,\det(X)^{a-(p+1)/2}\,{}_2F_1(\alpha, \beta; \gamma; X)\,dX = \Gamma_p(a)\,\det(\Lambda)^{-a}\,{}_3F_2\!\left(\alpha, \beta, a; \gamma; \Lambda^{-1}\right). \tag{22}$$

We can establish (21) and (22) by expanding ${}_1F_1$ and ${}_2F_1$ in series form using (10), integrating term by term by applying (19), and finally summing the resulting series.

Note that the series expansions for ${}_1F_1$ and ${}_2F_1$ given in (13) and (14) can be obtained by expanding $\mathrm{etr}(XY)$ and $\det(I_p - XY)^{-b}$, $\|X\| < 1$, in (17) and (18) and integrating using (20). Substituting $X = I_p$ in (18) and integrating, we obtain
$${}_2F_1(a, b; c; I_p) = \frac{\Gamma_p(c)\,\Gamma_p(c-a-b)}{\Gamma_p(c-a)\,\Gamma_p(c-b)}, \tag{23}$$
where $\operatorname{Re}(a) > (p-1)/2$, $\operatorname{Re}(c-a) > (p-1)/2$, $\operatorname{Re}(c-b) > (p-1)/2$, and $\operatorname{Re}(c-a-b) > (p-1)/2$. The hypergeometric function ${}_1F_1$ satisfies Kummer's relation
$${}_1F_1(a; c; X) = \mathrm{etr}(X)\,{}_1F_1(c-a; c; -X). \tag{24}$$
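Both Kummer's relation and the Gauss summation above are easy to check numerically in the scalar case (illustration ours):

```python
import math
from scipy.special import hyp1f1, hyp2f1

# Kummer's relation (p = 1): 1F1(a; c; x) = e^x * 1F1(c-a; c; -x).
a, c, x = 1.3, 3.7, 2.0
lhs = hyp1f1(a, c, x)
rhs = math.exp(x) * hyp1f1(c - a, c, -x)
print(lhs, rhs)

# Gauss summation (p = 1): 2F1(a, b; c; 1) = G(c)G(c-a-b) / (G(c-a)G(c-b)),
# valid when c - a - b > 0 (here c - a - b = 1.5).
b = 0.9
gauss = (math.gamma(c) * math.gamma(c - a - b)
         / (math.gamma(c - a) * math.gamma(c - b)))
print(hyp2f1(a, b, c, 1.0), gauss)
```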

For properties and further results on these functions the reader is referred to Constantine [5], James [6], Muirhead [7], and Gupta and Nagar [1]. The numerical computation of a hypergeometric function of matrix argument is very difficult. However, some numerical methods have been proposed in recent years; see Hashiguchi et al. [8] and Koev and Edelman [9].

The generalized hypergeometric function with complex symmetric matrices $X$ and $Y$ ($p \times p$) is defined by
$${}_rF_s(a_1, \ldots, a_r; b_1, \ldots, b_s; X, Y) = \sum_{k=0}^{\infty} \sum_{\kappa} \frac{(a_1)_\kappa \cdots (a_r)_\kappa}{(b_1)_\kappa \cdots (b_s)_\kappa}\,\frac{C_\kappa(X)\,C_\kappa(Y)}{C_\kappa(I_p)\,k!}.$$
It is clear from the above definition that the order of $X$ and $Y$ is unimportant; that is, ${}_rF_s(\cdot; \cdot; X, Y) = {}_rF_s(\cdot; \cdot; Y, X)$. Also, if one of the argument matrices is the identity, this function reduces to the one-matrix argument function. Further, the two-matrix argument function can be obtained from the one-matrix function by averaging over the orthogonal group, using a result given in James [6]; namely,
$${}_rF_s(a_1, \ldots, a_r; b_1, \ldots, b_s; X, Y) = \int_{O(p)} {}_rF_s(a_1, \ldots, a_r; b_1, \ldots, b_s; X H Y H')\,[dH],$$
where $[dH]$ denotes the normalized invariant measure on the orthogonal group $O(p)$. That is,
$$\int_{O(p)} C_\kappa(X H Y H')\,[dH] = \frac{C_\kappa(X)\,C_\kappa(Y)}{C_\kappa(I_p)},$$
given in James [6].

Finally, we define the inverted matrix variate gamma, matrix variate beta type 1, and matrix variate beta type 2 distributions. These definitions can be found in Gupta and Nagar [1] and Iranmanesh et al. [10].

Definition 1. A $p \times p$ random symmetric positive definite matrix $X$ is said to have an inverted matrix variate gamma distribution with parameters $\alpha$, $\beta$, and $\Psi$, denoted by $X \sim \mathrm{IG}_p(\alpha, \beta, \Psi)$, if its p.d.f. is given by
$$\frac{\det(\Psi)^{\alpha}}{\beta^{p\alpha}\,\Gamma_p(\alpha)}\,\det(X)^{-\alpha-(p+1)/2}\,\mathrm{etr}\!\left(-\frac{1}{\beta}\,\Psi X^{-1}\right), \quad X > 0,$$
where $\operatorname{Re}(\alpha) > (p-1)/2$, $\beta > 0$, and $\Psi$ is a symmetric positive definite matrix of order $p$.

Definition 2. A $p \times p$ random symmetric positive definite matrix $X$ is said to have a matrix variate beta type 1 distribution with parameters $a$ and $b$, denoted as $X \sim B1(p; a, b)$, if its p.d.f. is given by
$$\frac{\det(X)^{a-(p+1)/2}\,\det(I_p - X)^{b-(p+1)/2}}{B_p(a, b)}, \quad 0 < X < I_p,$$
where $\operatorname{Re}(a) > (p-1)/2$ and $\operatorname{Re}(b) > (p-1)/2$.

Definition 3. A $p \times p$ random symmetric positive definite matrix $X$ is said to have a matrix variate beta type 2 distribution with parameters $a$ and $b$, denoted as $X \sim B2(p; a, b)$, if its p.d.f. is given by
$$\frac{\det(X)^{a-(p+1)/2}\,\det(I_p + X)^{-(a+b)}}{B_p(a, b)}, \quad X > 0,$$
where $\operatorname{Re}(a) > (p-1)/2$ and $\operatorname{Re}(b) > (p-1)/2$.

Note that if $X \sim B2(p; a, b)$, then $(I_p + X)^{-1} \sim B1(p; b, a)$. Further, if $X$ and $Y$ are independent, $X \sim \mathrm{Ga}(p, a)$ and $Y \sim \mathrm{Ga}(p, b)$, then $(X+Y)^{-1/2}X(X+Y)^{-1/2} \sim B1(p; a, b)$ and $Y^{-1/2}XY^{-1/2} \sim B2(p; a, b)$.
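For $p = 1$ these relations reduce to familiar facts about beta, beta prime, and gamma variables, which the following Monte Carlo sketch (ours) illustrates:

```python
import numpy as np

# Scalar (p = 1) checks:
#  - u ~ Ga(a), v ~ Ga(b) independent  =>  u/(u+v) ~ B1(a, b), u/v ~ B2(a, b);
#  - x ~ B2(a, b)                      =>  1/(1+x) ~ B1(b, a).
rng = np.random.default_rng(1)
a, b, n = 2.0, 3.0, 400_000

u = rng.gamma(a, size=n)
v = rng.gamma(b, size=n)
r = u / (u + v)        # B1(a, b): mean a/(a+b) = 0.4
x = u / v              # B2(a, b) (beta prime)
w = 1.0 / (1.0 + x)    # B1(b, a): mean b/(a+b) = 0.6

print(r.mean(), a / (a + b))
print(w.mean(), b / (a + b))
```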

We conclude this section by evaluating the normalizing constant in (3). Since the density integrates to one over its support set, we have
$$C(\nu, \alpha, \beta, \Sigma)^{-1} = \int_{X>0} \det(X)^{\nu-(p+1)/2}\,{}_1F_1(\alpha; \beta; -\Sigma^{-1}X)\,dX.$$
By rewriting ${}_1F_1$ using Kummer's relation (24) and integrating by applying (21), we get
$$C(\nu, \alpha, \beta, \Sigma)^{-1} = \Gamma_p(\nu)\,\det(\Sigma)^{\nu}\,{}_2F_1(\beta - \alpha, \nu; \beta; I_p),$$
where $\operatorname{Re}(\nu) > (p-1)/2$. Finally, writing ${}_2F_1(\beta-\alpha, \nu; \beta; I_p)$ in terms of multivariate gamma functions by using (23), we obtain
$$C(\nu, \alpha, \beta, \Sigma) = \frac{\Gamma_p(\alpha)\,\Gamma_p(\beta-\nu)}{\Gamma_p(\nu)\,\Gamma_p(\beta)\,\Gamma_p(\alpha-\nu)}\,\det(\Sigma)^{-\nu},$$
where $\operatorname{Re}(\nu) > (p-1)/2$, $\operatorname{Re}(\beta) > (p-1)/2$, $\operatorname{Re}(\alpha-\nu) > (p-1)/2$, $\operatorname{Re}(\beta-\nu) > (p-1)/2$, and $\Sigma > 0$.

#### 3. Properties

In this section we study several properties of the confluent hypergeometric function kind 1 distribution defined in Section 1. For the sake of completeness, we first state the following results established in Gupta and Nagar [1].

(1) Let $X \sim \mathrm{CH}_p(\nu, \alpha, \beta, \Sigma)$ and let $A$ be a $p \times p$ constant nonsingular matrix. Then, $AXA' \sim \mathrm{CH}_p(\nu, \alpha, \beta, A\Sigma A')$.

(2) Let $X \sim \mathrm{CH}_p(\nu, \alpha, \beta)$ and let $H$ be a $p \times p$ orthogonal matrix whose elements are either constants or random variables distributed independently of $X$. Then, the distribution of $X$ is invariant under the transformation $X \to HXH'$ if $H$ is a matrix of constants. Further, if $H$ is a random matrix, then $HXH'$ and $H$ are independent.

(3) Let $X \sim \mathrm{CH}_p(\nu, \alpha, \beta)$. Then, the cumulative distribution function (cdf) of $X$ is derived as , where .

(4) Let , where is a matrix. Define and . If , then (i) and are independent, and , , and (ii) and are independent, and .

(5) Let be a constant matrix of rank . If , then and .

(6) Let and let be a nonzero -dimensional column vector of constants; then and . Further, if is an -dimensional random vector, independent of , and , then it follows that and .

It may also be mentioned here that the properties given above are modified forms of results given in Section 8.10 of Gupta and Nagar [1].

If the random matrices and are independent, and , , then Roux and van der Merwe [11] have shown that has a matrix variate beta type 2 distribution with parameters and .

The matrix variate confluent hypergeometric function kind 1 distribution can be derived as the distribution of the matrix ratio of independent gamma and beta matrices. It has been shown in Gupta and Nagar [1] that if $Y \sim \mathrm{Ga}(p, \nu)$ and $X \sim B1(p; \alpha - \nu, \beta - \alpha)$ are independent, then $X^{-1/2} Y X^{-1/2} \sim \mathrm{CH}_p(\nu, \alpha, \beta)$.

The expected values of and , for , can easily be obtained from the above results. For any fixed , ,
where , and
where . Hence, for all ,
which implies that

The Laplace transform of the density of $X$, where $X \sim \mathrm{CH}_p(\nu, \alpha, \beta)$, is given by
$$E[\mathrm{etr}(-ZX)] = \frac{\Gamma_p(\alpha)\,\Gamma_p(\beta-\nu)}{\Gamma_p(\beta)\,\Gamma_p(\alpha-\nu)}\,\det(I_p + Z)^{-\nu}\,{}_2F_1\!\left(\beta - \alpha, \nu; \beta; (I_p + Z)^{-1}\right), \quad I_p + Z > 0,$$
where we have used (24) and (21). From the above expression, the Laplace transform of the density of , where , is derived as

Theorem 4. Let ; then
where , , and .

Proof. From the density of , we have Now, evaluating the above integral by using (34), we get where , , and . Finally, simplifying the above expression, we get the desired result.

Corollary 5. Let ; then
where , , and .

Using (42), the mean and the variance of are derived as
where , , and
where and . For a symmetric matrix , is derived as
Replacing by its integral representation, namely,
where and , one obtains
Now, evaluating the above integral by using (19), we obtain
Finally, evaluating the integral involving by using (Khatri [12])
we get
where and .

Proceeding similarly and using the result (Khatri [12]), the expected value of is derived as
where . Finally, evaluating the above integral using (20), we obtain

In the next theorem, we derive the confluent hypergeometric function kind 1 distribution using independent beta and gamma matrices.

Theorem 6. Let $X \sim B1(p; \alpha - \nu, \beta - \alpha)$ and $Y \sim \mathrm{Ga}(p, \nu)$ be independent, $\operatorname{Re}(\nu) > (p-1)/2$, $\operatorname{Re}(\alpha - \nu) > (p-1)/2$, and $\operatorname{Re}(\beta - \alpha) > (p-1)/2$. Then, $X^{-1/2} Y X^{-1/2} \sim \mathrm{CH}_p(\nu, \alpha, \beta)$.

Proof. See Gupta and Nagar [1].

Theorem 7. Let $X \sim B1(p; \alpha - \nu, \beta - \alpha)$ and $Y \sim \mathrm{Ga}(p, \nu)$ be independent. Then, $Y^{1/2} X^{-1} Y^{1/2} \sim \mathrm{CH}_p(\nu, \alpha, \beta)$.

Proof. The result follows from Theorem 6, the fact that $X^{-1/2} Y X^{-1/2}$ and $Y^{1/2} X^{-1} Y^{1/2}$ have the same eigenvalues, and the orthogonal invariance of the matrix variate confluent hypergeometric function kind 1 distribution.

Theorem 8. Let and be independent, and . Then, .

Proof. Noting that and using Theorem 6 we get the result.

Theorem 9. Let and be independent, and . Then, .

Proof. The desired result is obtained by observing that and using Theorem 6.

Theorem 10. Let and be independent, and . Then, .

Proof. Noting that and using Theorem 6 we get the result.

Theorem 11. Let and be independent, and . Then, .

Proof. It is well known that and are independent, and . Therefore, using Theorem 7, .

Theorem 12. Let and be independent, and . Then, .

Proof. The proof is similar to the proof of Theorem 11.

#### 4. Distributions of Sum and Quotients

In statistical distribution theory it is well known that if $X \sim \mathrm{Ga}(p, a)$ and $Y \sim \mathrm{Ga}(p, b)$ are independent, then $X + Y \sim \mathrm{Ga}(p, a + b)$, $(X+Y)^{-1/2} X (X+Y)^{-1/2} \sim B1(p; a, b)$, and $X + Y$ and $(X+Y)^{-1/2} X (X+Y)^{-1/2}$ are independent. In this section we derive similar results when $X$ and $Y$ are independent confluent hypergeometric function kind 1 and gamma matrices, respectively.
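In the scalar case this classical gamma decomposition is easy to illustrate by simulation (sketch ours):

```python
import numpy as np

# p = 1: for independent x ~ Ga(a), y ~ Ga(b) (unit scale),
#   x + y ~ Ga(a + b),  x/(x + y) ~ B1(a, b),  and the two are independent.
rng = np.random.default_rng(2)
a, b, n = 1.5, 2.5, 500_000

x = rng.gamma(a, size=n)
y = rng.gamma(b, size=n)
s = x + y
r = x / s

print(s.mean(), a + b)          # gamma mean a + b
print(r.mean(), a / (a + b))    # beta mean a/(a+b)
print(np.corrcoef(s, r)[0, 1])  # near zero: sum and ratio uncorrelated
```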

Theorem 13. Let and be independent, and . Then, the p.d.f. of is given by

Proof. Using independence, the joint p.d.f. of and is given by
where