Abstract

We generalize the univariate Pareto distribution of the second kind to the matrix case and give its derivation using the matrix variate gamma distribution. We study several properties such as the cumulative distribution function, the marginal distribution of a submatrix, triangular factorization, the moment generating function, and expected values of the Pareto matrix. Some of these results are expressed in terms of special functions of matrix argument, and zonal and invariant polynomials.

1. Introduction

The Lomax distribution, also called the Pareto distribution of the second kind, is given by the p.d.f.
\[
f(x) = \frac{\alpha}{\beta}\Big(1 + \frac{x}{\beta}\Big)^{-(\alpha+1)}, \quad x > 0,
\]
where $\alpha > 0$ is the shape parameter and $\beta > 0$ is the scale parameter. The Lomax distribution, named after K. S. Lomax, is a heavy-tailed probability distribution often used in business, economics, and actuarial modeling. The standard Pareto distribution of the second kind has $\beta = 1$, with the p.d.f.
\[
f(x) = \alpha (1 + x)^{-(\alpha+1)}, \quad x > 0.
\]
Although a wealth of results on the Pareto distribution is available in the literature (see Johnson et al. [1]), nothing appears to have been done to define and study a matrix variate Pareto distribution.
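Starting from the univariate p.d.f. above, the closed-form c.d.f. $F(x) = 1 - (1 + x/\beta)^{-\alpha}$ can be checked numerically; a minimal sketch (the helper names are ours, not from the paper):

```python
from scipy import integrate

def lomax_pdf(x, alpha, beta):
    """Lomax (Pareto II) density: (alpha/beta) * (1 + x/beta)**-(alpha+1), x > 0."""
    return (alpha / beta) * (1.0 + x / beta) ** (-(alpha + 1.0))

def lomax_cdf(x, alpha, beta):
    """Closed-form c.d.f. obtained by integrating the density: 1 - (1 + x/beta)**-alpha."""
    return 1.0 - (1.0 + x / beta) ** (-alpha)

alpha, beta = 2.5, 1.5
num, _ = integrate.quad(lomax_pdf, 0.0, 3.0, args=(alpha, beta))
print(num, lomax_cdf(3.0, alpha, beta))  # numerical integral of the pdf matches the cdf
```

The same check with the upper limit sent to infinity confirms that the density integrates to one.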

Therefore, in this paper, we define matrix variate Pareto distribution and study several of its properties.

We will use the following standard notations (cf. Gupta and Nagar [2]). Let $A$ be a $p \times p$ matrix. Then, $A'$ denotes the transpose of $A$; $A > 0$ means that $A$ is symmetric positive definite; and $A^{1/2}$ denotes the unique symmetric positive definite square root of $A > 0$. The submatrices $X_{22\cdot 1}$ and $X_{11\cdot 2}$ of the matrix
\[
X = \begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{pmatrix}
\]
are defined as $X_{22\cdot 1} = X_{22} - X_{21}X_{11}^{-1}X_{12}$ and $X_{11\cdot 2} = X_{11} - X_{12}X_{22}^{-1}X_{21}$, respectively.

The multivariate gamma function, which is frequently used in multivariate statistical analysis, is defined by
\[
\Gamma_p(a) = \pi^{p(p-1)/4} \prod_{i=1}^{p} \Gamma\Big(a - \frac{i-1}{2}\Big), \quad \operatorname{Re}(a) > \frac{p-1}{2}, \tag{1.1}
\]
\[
\Gamma_p(a) = \int_{X>0} \operatorname{etr}(-X)\det(X)^{a-(p+1)/2}\,\mathrm{d}X, \tag{1.2}
\]
\[
\int_{X>0} \operatorname{etr}(-XZ)\det(X)^{a-(p+1)/2}\,\mathrm{d}X = \Gamma_p(a)\det(Z)^{-a}, \quad \operatorname{Re}(Z) > 0. \tag{1.3}
\]
The multivariate generalization of the beta function is given by
\[
B_p(a,b) = \frac{\Gamma_p(a)\,\Gamma_p(b)}{\Gamma_p(a+b)} = \int_{0<X<I_p} \det(X)^{a-(p+1)/2}\det(I_p - X)^{b-(p+1)/2}\,\mathrm{d}X, \tag{1.4}
\]
where $\operatorname{Re}(a) > (p-1)/2$ and $\operatorname{Re}(b) > (p-1)/2$. Further, by using the matrix transformation $X = (I_p + Y)^{-1}$ in (1.4) with the Jacobian $J(X \to Y) = \det(I_p + Y)^{-(p+1)}$ one can easily establish the identity
\[
\int_{Y>0} \det(Y)^{a-(p+1)/2}\det(I_p + Y)^{-(a+b)}\,\mathrm{d}Y = B_p(a,b). \tag{1.5}
\]
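The product formula for $\Gamma_p(a)$ is most conveniently computed on the log scale. A minimal sketch (function names are ours), cross-checked against SciPy's `multigammaln`:

```python
import math
from scipy import special

def multigamma_ln(a, p):
    """log Gamma_p(a) = (p(p-1)/4) log(pi) + sum_{i=1}^p log Gamma(a - (i-1)/2)."""
    return (p * (p - 1) / 4.0) * math.log(math.pi) + sum(
        math.lgamma(a - (i - 1) / 2.0) for i in range(1, p + 1)
    )

def multibeta_ln(a, b, p):
    """log B_p(a, b) = log Gamma_p(a) + log Gamma_p(b) - log Gamma_p(a + b)."""
    return multigamma_ln(a, p) + multigamma_ln(b, p) - multigamma_ln(a + b, p)

print(multigamma_ln(3.2, 3), special.multigammaln(3.2, 3))  # the two values agree
```

For $p = 1$ both helpers reduce to the ordinary log-gamma and log-beta functions, which gives a second easy consistency check.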

The beta type 1 and beta type 2 families of distributions are defined by the density functions (Johnson et al. [1])
\[
\{B(a,b)\}^{-1}\, x^{a-1}(1-x)^{b-1}, \quad 0 < x < 1, \tag{1.6}
\]
\[
\{B(a,b)\}^{-1}\, x^{a-1}(1+x)^{-(a+b)}, \quad x > 0, \tag{1.7}
\]
respectively, where $a > 0$, $b > 0$, and
\[
B(a,b) = \frac{\Gamma(a)\,\Gamma(b)}{\Gamma(a+b)}. \tag{1.8}
\]
Recently, Cardeño et al. [3] have defined and studied the family of beta type 3 distributions. A random variable $x$ is said to follow a beta type 3 distribution if its density function is given by
\[
\frac{2^{a}}{B(a,b)}\, x^{a-1}(1-x)^{b-1}(1+x)^{-(a+b)}, \quad 0 < x < 1. \tag{1.9}
\]
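A quick numerical sanity check (ours) that the three univariate densities, including the $2^a$ normalizing constant of the beta type 3 density, each integrate to one:

```python
from math import gamma
from scipy import integrate

def beta_fn(a, b):
    return gamma(a) * gamma(b) / gamma(a + b)

def beta1_pdf(x, a, b):   # beta type 1, support 0 < x < 1
    return x ** (a - 1) * (1 - x) ** (b - 1) / beta_fn(a, b)

def beta2_pdf(x, a, b):   # beta type 2, support x > 0
    return x ** (a - 1) * (1 + x) ** (-(a + b)) / beta_fn(a, b)

def beta3_pdf(x, a, b):   # beta type 3, support 0 < x < 1
    return 2 ** a * x ** (a - 1) * (1 - x) ** (b - 1) * (1 + x) ** (-(a + b)) / beta_fn(a, b)

a, b = 2.0, 3.5
for pdf, hi in [(beta1_pdf, 1.0), (beta2_pdf, float("inf")), (beta3_pdf, 1.0)]:
    total, _ = integrate.quad(pdf, 0.0, hi, args=(a, b))
    print(total)   # each total is 1 up to quadrature error
```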

If a random variable $x$ has the p.d.f. (1.6), then we will write $x \sim \mathrm{B1}(a,b)$, and if the p.d.f. of the random variable $x$ is given by (1.7), then $x \sim \mathrm{B2}(a,b)$. The distribution given by the density (1.9) will be designated by $x \sim \mathrm{B3}(a,b)$. The matrix variate generalizations of (1.6), (1.7), and (1.9) are defined as follows (Gupta and Nagar [2, 4, 5]).

Definition 1.1. A $p \times p$ random symmetric positive definite matrix $X$ is said to have a matrix variate beta type 1 distribution with parameters $(a, b)$, denoted as $X \sim \mathrm{B1}(p, a, b)$, if its p.d.f. is given by
\[
\{B_p(a,b)\}^{-1} \det(X)^{a-(p+1)/2}\det(I_p - X)^{b-(p+1)/2}, \quad 0 < X < I_p,
\]
where $a > (p-1)/2$ and $b > (p-1)/2$.

Definition 1.2. A $p \times p$ random symmetric positive definite matrix $X$ is said to have a matrix variate beta type 2 distribution with parameters $(a, b)$, denoted as $X \sim \mathrm{B2}(p, a, b)$, if its p.d.f. is given by
\[
\{B_p(a,b)\}^{-1} \det(X)^{a-(p+1)/2}\det(I_p + X)^{-(a+b)}, \quad X > 0,
\]
where $a > (p-1)/2$ and $b > (p-1)/2$.

Definition 1.3. A $p \times p$ random symmetric positive definite matrix $X$ is said to have a matrix variate beta type 3 distribution with parameters $(a, b)$, denoted as $X \sim \mathrm{B3}(p, a, b)$, if its p.d.f. is given by
\[
2^{pa}\{B_p(a,b)\}^{-1} \det(X)^{a-(p+1)/2}\det(I_p - X)^{b-(p+1)/2}\det(I_p + X)^{-(a+b)}, \quad 0 < X < I_p,
\]
where $a > (p-1)/2$ and $b > (p-1)/2$.

2. The Density Function

First we define the matrix variate Pareto distribution of the second kind.

Definition 2.1. A $p \times p$ random symmetric positive definite matrix $X$ is said to have a matrix variate Pareto distribution of the second kind with parameter $\alpha$, denoted as $X \sim \mathrm{P}(p, \alpha)$, if its p.d.f. is given by
\[
\frac{\Gamma_p(\alpha + (p+1)/2)}{\Gamma_p((p+1)/2)\,\Gamma_p(\alpha)} \det(I_p + X)^{-(\alpha + (p+1)/2)}, \quad X > 0, \; \alpha > \frac{p-1}{2}. \tag{2.1}
\]

Definition 2.2. A $p \times p$ random symmetric positive definite matrix $X$ is said to have a matrix variate Lomax distribution with parameters $\alpha$ and $\Theta$, denoted as $X \sim \mathrm{L}(p, \alpha, \Theta)$, if its p.d.f. is given by
\[
\frac{\Gamma_p(\alpha + (p+1)/2)}{\Gamma_p((p+1)/2)\,\Gamma_p(\alpha)} \det(\Theta)^{\alpha}\det(\Theta + X)^{-(\alpha + (p+1)/2)}, \quad X > 0,
\]
where $\Theta$ is a $p \times p$ symmetric positive definite matrix and $\alpha > (p-1)/2$.

From Definitions 2.1 and 2.2 it is clear that if $X \sim \mathrm{P}(p, \alpha)$, then $\Theta^{1/2} X \Theta^{1/2} \sim \mathrm{L}(p, \alpha, \Theta)$ for a $p \times p$ symmetric positive definite constant matrix $\Theta$, and if $X \sim \mathrm{L}(p, \alpha, \Theta)$, then $\Theta^{-1/2} X \Theta^{-1/2} \sim \mathrm{P}(p, \alpha)$.

For $p = 1$, the matrix variate Pareto distribution and the matrix variate Lomax distribution reduce to their respective univariate forms.

The matrix variate Pareto distribution can be derived by using independent gamma matrices. A $p \times p$ random symmetric positive definite matrix $X$ is said to have a matrix variate gamma distribution with parameters $a$ and $\Sigma$, denoted by $X \sim \mathrm{Ga}(p, a, \Sigma)$, if its p.d.f. is given by
\[
\{\Gamma_p(a)\}^{-1}\det(\Sigma)^{-a}\operatorname{etr}(-\Sigma^{-1}X)\det(X)^{a-(p+1)/2}, \quad X > 0,
\]
where $a > (p-1)/2$ and $\Sigma$ is a $p \times p$ symmetric positive definite matrix.

Theorem 2.3. Let $X \sim \mathrm{Ga}(p, (p+1)/2, I_p)$ and $Y \sim \mathrm{Ga}(p, \alpha, I_p)$ be independent, $\alpha > (p-1)/2$. Then, $Y^{-1/2} X Y^{-1/2} \sim \mathrm{P}(p, \alpha)$.

Proof. The joint density function of $X$ and $Y$ is given by
\[
\{\Gamma_p((p+1)/2)\,\Gamma_p(\alpha)\}^{-1}\operatorname{etr}(-X - Y)\det(Y)^{\alpha-(p+1)/2}, \quad X > 0, \; Y > 0.
\]
Transforming $U = Y^{-1/2} X Y^{-1/2}$, $V = Y$ with the Jacobian $J(X, Y \to U, V) = \det(V)^{(p+1)/2}$ in the joint density of $X$ and $Y$, we obtain the joint density of $U$ and $V$ as
\[
\{\Gamma_p((p+1)/2)\,\Gamma_p(\alpha)\}^{-1}\operatorname{etr}[-(I_p + U)V]\det(V)^{(\alpha+(p+1)/2)-(p+1)/2}, \quad U > 0, \; V > 0.
\]
Now, the desired result is obtained by integrating $V$ using (1.3).
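Theorem 2.3 can be illustrated by simulation. The sketch below (ours) uses the identification of $\mathrm{Ga}(p, a, I_p)$ with one half of a Wishart matrix with $2a$ degrees of freedom — an assumption of this sketch, not a statement from the paper — and checks the Monte Carlo mean of $Y^{-1/2}XY^{-1/2}$ against $(p+1)/(2\alpha - p - 1)\,I_p$, the mean implied by the gamma-ratio construction (our computation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p, alpha = 2, 3.0                 # alpha > (p+1)/2, so the mean exists
n = 20000

# Assumed identification: Ga(p, a, I_p) = W / 2 with W ~ Wishart_p(2a, I_p).
X = stats.wishart.rvs(df=p + 1, scale=np.eye(p), size=n, random_state=rng) / 2.0
Y = stats.wishart.rvs(df=int(2 * alpha), scale=np.eye(p), size=n, random_state=rng) / 2.0

mean = np.zeros((p, p))
for Xi, Yi in zip(X, Y):
    w, V = np.linalg.eigh(Yi)                     # spectral decomposition of Y
    Y_inv_half = V @ np.diag(w ** -0.5) @ V.T     # symmetric square root Y^{-1/2}
    mean += Y_inv_half @ Xi @ Y_inv_half / n

target = (p + 1) / (2 * alpha - p - 1) * np.eye(p)
print(mean.round(2))   # close to target, here the identity matrix
```

The tolerance is generous because the Pareto matrix is heavy tailed, so the Monte Carlo average converges slowly.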

The cumulative distribution function of $X \sim \mathrm{P}(p, \alpha)$ is obtained as
\[
P(X < T) = \frac{\Gamma_p(\alpha+(p+1)/2)}{\Gamma_p((p+1)/2)\,\Gamma_p(\alpha)} \int_{0<X<T} \det(I_p + X)^{-(\alpha+(p+1)/2)}\,\mathrm{d}X
= \frac{\Gamma_p(\alpha+(p+1)/2)\det(T)^{(p+1)/2}}{\Gamma_p((p+1)/2)\,\Gamma_p(\alpha)} \int_{0<Y<I_p} \det(I_p + T^{1/2} Y T^{1/2})^{-(\alpha+(p+1)/2)}\,\mathrm{d}Y,
\]
where the last line has been obtained by substituting $X = T^{1/2} Y T^{1/2}$ with the Jacobian $J(X \to Y) = \det(T)^{(p+1)/2}$. Now, writing $\det(I_p + T^{1/2} Y T^{1/2}) = \det(I_p + YT)$, the above expression is rewritten as
\[
P(X < T) = \frac{\Gamma_p(\alpha+(p+1)/2)\det(T)^{(p+1)/2}}{\Gamma_p((p+1)/2)\,\Gamma_p(\alpha)} \int_{0<Y<I_p} \det(I_p + YT)^{-(\alpha+(p+1)/2)}\,\mathrm{d}Y.
\]
Finally, using the integral representation of the Gauss hypergeometric function of matrix argument (Herz [6], Constantine [7], James [8], and Gupta and Nagar [2]), namely,
\[
{}_2F_1(a, b; c; Z) = \frac{\Gamma_p(c)}{\Gamma_p(a)\,\Gamma_p(c-a)} \int_{0<Y<I_p} \det(Y)^{a-(p+1)/2}\det(I_p - Y)^{c-a-(p+1)/2}\det(I_p - YZ)^{-b}\,\mathrm{d}Y,
\]
where $\operatorname{Re}(a) > (p-1)/2$, and $\operatorname{Re}(c - a) > (p-1)/2$, we obtain
\[
P(X < T) = \frac{\Gamma_p(\alpha+(p+1)/2)\,\Gamma_p((p+1)/2)}{\Gamma_p(\alpha)\,\Gamma_p(p+1)} \det(T)^{(p+1)/2}\, {}_2F_1\Big(\frac{p+1}{2}, \alpha + \frac{p+1}{2}; p+1; -T\Big).
\]

The moment generating function of $X \sim \mathrm{P}(p, \alpha)$ is derived as
\[
E[\operatorname{etr}(-TX)] = \frac{\Gamma_p(\alpha+(p+1)/2)}{\Gamma_p((p+1)/2)\,\Gamma_p(\alpha)} \int_{X>0} \operatorname{etr}(-TX)\det(I_p + X)^{-(\alpha+(p+1)/2)}\,\mathrm{d}X,
\]
where $\operatorname{Re}(T) > 0$. Now, evaluating the above integral, we obtain
\[
E[\operatorname{etr}(-TX)] = \frac{\Gamma_p(\alpha+(p+1)/2)}{\Gamma_p(\alpha)}\, \Psi\Big(\frac{p+1}{2}; \frac{p+1}{2} - \alpha; T\Big),
\]
where the confluent hypergeometric function $\Psi(a; c; Z)$, with symmetric matrix $Z$ as argument, is defined by the integral
\[
\Psi(a; c; Z) = \{\Gamma_p(a)\}^{-1} \int_{X>0} \operatorname{etr}(-XZ)\det(X)^{a-(p+1)/2}\det(I_p + X)^{c-a-(p+1)/2}\,\mathrm{d}X,
\]
valid for $\operatorname{Re}(Z) > 0$ and $\operatorname{Re}(a) > (p-1)/2$.
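For $p = 1$ the representation above reduces to $E[e^{-tX}] = \alpha\,\Psi(1; 1-\alpha; t)$, with $\Psi$ the classical confluent hypergeometric function of the second kind (Tricomi's $U$); a numerical check (ours) against SciPy's `hyperu`:

```python
import math
from scipy import integrate, special

alpha, t = 3.0, 0.7

# Direct Laplace transform of the standard Pareto II density alpha * (1+x)**-(alpha+1).
direct, _ = integrate.quad(
    lambda x: math.exp(-t * x) * alpha * (1.0 + x) ** (-(alpha + 1.0)),
    0.0, float("inf"),
)

# The same quantity via the confluent hypergeometric function of the second kind.
via_hyperu = alpha * special.hyperu(1.0, 1.0 - alpha, t)
print(direct, via_hyperu)   # the two values agree
```

At $t = 0$ the identity gives $\alpha\,U(1, 1-\alpha, 0) = 1$, as a moment generating function evaluated at the origin must.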

3. Properties

In this section, we give several properties of the matrix variate Pareto distribution of the second kind defined in the previous section.

Theorem 3.1. Let $X \sim \mathrm{P}(p, \alpha)$ and let $A$ be a $p \times p$ constant nonsingular matrix. Then, the density of $Y = AXA'$ is
\[
\frac{\Gamma_p(\alpha+(p+1)/2)}{\Gamma_p((p+1)/2)\,\Gamma_p(\alpha)} \det(AA')^{\alpha}\det(AA' + Y)^{-(\alpha+(p+1)/2)}, \quad Y > 0,
\]
that is, $AXA' \sim \mathrm{L}(p, \alpha, AA')$.

Theorem 3.2. Let $X \sim \mathrm{P}(p, \alpha)$ and let $H$ be a $p \times p$ orthogonal matrix, whose elements are either constants or random variables distributed independently of $X$. Then, the distribution of $X$ is invariant under the transformation $X \to HXH'$. Further, if $H$ is a random matrix, then $HXH'$ and $H$ are distributed independently.

Theorem 3.3. If $X \sim \mathrm{P}(p, \alpha)$, then $(I_p + X)^{-1} \sim \mathrm{B1}(p, \alpha, (p+1)/2)$ and $X(I_p + X)^{-1} \sim \mathrm{B1}(p, (p+1)/2, \alpha)$. Further, if $X \sim \mathrm{L}(p, \alpha, \Theta)$, then $\Theta^{1/2}(\Theta + X)^{-1}\Theta^{1/2} \sim \mathrm{B1}(p, \alpha, (p+1)/2)$.

Theorem 3.4. Let $X = \begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{pmatrix}$, where $X_{11}$ is a $q \times q$ matrix. Define $X_{22\cdot 1} = X_{22} - X_{21}X_{11}^{-1}X_{12}$ and $X_{11\cdot 2} = X_{11} - X_{12}X_{22}^{-1}X_{21}$. If $X \sim \mathrm{P}(p, \alpha)$, then (i) $X_{11}$ and $X_{22\cdot 1}$ are independent, $X_{11} \sim \mathrm{B2}(q, (p+1)/2, \alpha - (p-q)/2)$ and $X_{22\cdot 1} \sim \mathrm{P}(p-q, \alpha)$; (ii) $X_{22}$ and $X_{11\cdot 2}$ are independent, $X_{22} \sim \mathrm{B2}(p-q, (p+1)/2, \alpha - q/2)$ and $X_{11\cdot 2} \sim \mathrm{P}(q, \alpha)$.

Proof. From the partition of $X$, we have
\[
\det(I_p + X) = \det(I_q + X_{11})\det\big[I_{p-q} + X_{22} - X_{21}(I_q + X_{11})^{-1}X_{12}\big].
\]
Now, making the transformation $Z = X_{22} - X_{21}X_{11}^{-1}X_{12}$ and $W' = (I_q + X_{11})^{-1/2}X_{11}^{-1/2}X_{12}$ with the Jacobian $J(X_{11}, X_{12}, X_{22} \to X_{11}, W, Z) = \det(X_{11})^{(p-q)/2}\det(I_q + X_{11})^{(p-q)/2}$ in the density of $X$, we get the joint density of $X_{11}$, $W$, and $Z$ as proportional to
\[
\det(X_{11})^{(p-q)/2}\det(I_q + X_{11})^{(p-q)/2-(\alpha+(p+1)/2)}\det\big[I_{p-q} + Z + WW'\big]^{-(\alpha+(p+1)/2)}.
\]
Further, transforming $V = (I_{p-q} + Z)^{-1/2}W$ with the Jacobian $J(W \to V) = \det(I_{p-q} + Z)^{q/2}$, the joint density of $X_{11}$, $V$, and $Z$ is derived as proportional to
\[
\det(X_{11})^{(p+1)/2-(q+1)/2}\det(I_q + X_{11})^{-[(p+1)/2 + \alpha - (p-q)/2]}\,\det(I_{p-q} + Z)^{-(\alpha+(p-q+1)/2)}\,\det(I_q + V'V)^{-(\alpha+(p+1)/2)},
\]
where $X_{11} > 0$, $Z > 0$, and $V$ is a $(p-q) \times q$ real matrix. From the above factorization, it is clear that $X_{11}$ and $X_{22\cdot 1} = Z$ are independent, and $X_{11} \sim \mathrm{B2}(q, (p+1)/2, \alpha - (p-q)/2)$, $X_{22\cdot 1} \sim \mathrm{P}(p-q, \alpha)$. The second part is similar.

Theorem 3.5. Let $A$ be a constant $q \times p$ matrix of rank $q$. If $X \sim \mathrm{P}(p, \alpha)$, then
\[
(AA')^{-1/2} A X A' (AA')^{-1/2} \sim \mathrm{B2}\Big(q, \frac{p+1}{2}, \alpha - \frac{p-q}{2}\Big), \qquad
\big[(AA')^{-1/2} A X^{-1} A' (AA')^{-1/2}\big]^{-1} \sim \mathrm{P}(q, \alpha).
\]

Proof. Write $A = LG$, where $L$ is a $q \times q$ nonsingular matrix and $G$ is a $q \times p$ matrix with orthonormal rows, $GG' = I_q$. Now,
\[
A X A' = L\,(G X G')\,L',
\]
where $U = GXG'$, $G$ is a $q \times p$ matrix, and $AA' = LL'$. From Theorem 3.4, $U = GXG' \sim \mathrm{B2}(q, (p+1)/2, \alpha - (p-q)/2)$, and Theorem 3.1, $Y = AXA' = LUL'$ has the p.d.f. proportional to
\[
\det(Y)^{(p-q)/2}\det(AA' + Y)^{-(\alpha+(q+1)/2)}.
\]
Now, noting that $AA' = LL'$ and making the transformation $Z = (AA')^{-1/2}Y(AA')^{-1/2}$ with the Jacobian $J(Y \to Z) = \det(AA')^{(q+1)/2}$ in the above density, we get the desired result. The proof of the second part is similar.

From Theorem 3.5, it is clear that if $X \sim \mathrm{P}(p, \alpha)$ and $a$ is a nonzero constant $p$-dimensional vector, then $a'Xa/(a'a) \sim \mathrm{B2}((p+1)/2, \alpha - (p-1)/2)$. Further, if $a$ is a random vector, independent of $X$, and $\Pr(a = 0) = 0$, then it follows that $a'Xa/(a'a) \sim \mathrm{B2}((p+1)/2, \alpha - (p-1)/2)$ and $a'a/(a'X^{-1}a) \sim \mathrm{P}(1, \alpha)$.

From the above results, it is straightforward to show that if $a$ is a nonzero constant vector or a random vector independent of $X$ with $\Pr(a = 0) = 0$, then
\[
E\Big(\frac{a'Xa}{a'a}\Big) = \frac{(p+1)/2}{\alpha - (p+1)/2} = \frac{p+1}{2\alpha - p - 1}, \quad \alpha > \frac{p+1}{2}.
\]

The expectation of $X$, $E(X)$, can easily be obtained from the above result. For any fixed $a \in \mathbb{R}^p$,
\[
E(a'Xa) = \frac{p+1}{2\alpha - p - 1}\, a'a,
\]
where $\alpha > (p+1)/2$. Hence,
\[
a'\Big[E(X) - \frac{p+1}{2\alpha - p - 1} I_p\Big]a = 0
\]
for all $a$, which implies that
\[
E(X) = \frac{p+1}{2\alpha - p - 1}\, I_p, \quad \alpha > \frac{p+1}{2}.
\]

Theorem 3.6. If $X \sim \mathrm{P}(p, \alpha)$ and $X = TT'$, where $T = (t_{ij})$ is a lower triangular matrix with positive diagonal elements, then $t_{11}^2, \ldots, t_{pp}^2$ are all independent, $t_{ii}^2 \sim \mathrm{B2}((p-i+2)/2, \alpha - (p-i)/2)$, $i = 1, \ldots, p$.

Proof. Making the transformation $X = TT'$ with the Jacobian $J(X \to T) = 2^p \prod_{i=1}^{p} t_{ii}^{p-i+1}$ in (2.1), the density of $T$ is derived as
\[
\frac{2^p\,\Gamma_p(\alpha+(p+1)/2)}{\Gamma_p((p+1)/2)\,\Gamma_p(\alpha)} \prod_{i=1}^{p} t_{ii}^{p-i+1}\det(I_p + TT')^{-(\alpha+(p+1)/2)}, \tag{3.12}
\]
where $t_{ii} > 0$, $i = 1, \ldots, p$, and $-\infty < t_{ij} < \infty$ for $i > j$. Now, partition $T$ as
\[
T = \begin{pmatrix} t_{11} & 0 \\ t_1 & T_{22} \end{pmatrix},
\]
where $t_1$ is a $(p-1)$-dimensional column vector and $T_{22}$ is a $(p-1) \times (p-1)$ lower triangular matrix. Then
\[
\det(I_p + TT') = (1 + t_{11}^2)\det\big[I_{p-1} + T_{22}T_{22}' + (1 + t_{11}^2)^{-1}t_1 t_1'\big].
\]
Now, make the transformation $w = (1 + t_{11}^2)^{-1/2}(I_{p-1} + T_{22}T_{22}')^{-1/2}t_1$ with the Jacobian $J(t_1 \to w) = (1 + t_{11}^2)^{(p-1)/2}\det(I_{p-1} + T_{22}T_{22}')^{1/2}$ in (3.12) to get the joint density of $t_{11}$, $w$, and $T_{22}$ as proportional to
\[
t_{11}^{p}(1 + t_{11}^2)^{-(\alpha+1)}\,(1 + w'w)^{-(\alpha+(p+1)/2)}\,\prod_{i=2}^{p} t_{ii}^{p-i+1}\det(I_{p-1} + T_{22}T_{22}')^{-(\alpha+p/2)}.
\]
From the above factorization, it is clear that $t_{11}$, $w$, and $T_{22}$ are all independent, $t_{11}^2 \sim \mathrm{B2}((p+1)/2, \alpha - (p-1)/2)$, and the density of $T_{22}$ is proportional to
\[
\prod_{i=2}^{p} t_{ii}^{p-i+1}\det(I_{p-1} + T_{22}T_{22}')^{-(\alpha+p/2)},
\]
which has the same form as the density (3.12) with $p$ replaced by $p-1$. Repeating the argument given above on the density function of $T_{22}$, we observe that $t_{22}^2 \sim \mathrm{B2}(p/2, \alpha - (p-2)/2)$ and is independent of the remaining elements of $T_{22}$. Continuing further with the same argument, we get the desired result.

Corollary 3.7. If $X \sim \mathrm{P}(p, \alpha)$, then the distribution of $\det(X)$ is the same as the distribution of the product of $p$ independent beta type 2 variables, that is,
\[
\det(X) \sim \prod_{i=1}^{p} u_i,
\]
where $u_i \sim \mathrm{B2}((p-i+2)/2, \alpha - (p-i)/2)$, $i = 1, \ldots, p$.
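Corollary 3.7 suggests a simple Monte Carlo spot check (ours, reusing the same Wishart-based sampler and therefore the same assumed identification of the matrix variate gamma): the mean of $\det(X)$ should match the closed-form product $\prod_{i=1}^{p} \big((p-i+2)/2\big)\big/\big(\alpha - (i+1)/2\big)$, our computation from the moment formula:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
p, alpha = 2, 4.0
n = 20000

# Assumed identification: Ga(p, a, I_p) = Wishart_p(2a, I_p) / 2, as in the earlier sketch.
X = stats.wishart.rvs(df=p + 1, scale=np.eye(p), size=n, random_state=rng) / 2.0
Y = stats.wishart.rvs(df=int(2 * alpha), scale=np.eye(p), size=n, random_state=rng) / 2.0

# det(Y^{-1/2} X Y^{-1/2}) = det(X) / det(Y)
mc_mean = np.mean(np.linalg.det(X) / np.linalg.det(Y))

prod = 1.0
for i in range(1, p + 1):
    prod *= ((p - i + 2) / 2.0) / (alpha - (i + 1) / 2.0)

print(mc_mean, prod)   # the two values are close
```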

Corollary 3.8. If $X \sim \mathrm{P}(p, \alpha)$, then $\det(X_{(1)}), \det(X_{(2)})/\det(X_{(1)}), \ldots, \det(X_{(p)})/\det(X_{(p-1)})$ are independently distributed, where $X_{(i)}$ denotes the $i \times i$ leading principal submatrix of $X$ and $\det(X_{(0)}) = 1$. Further, for $i = 1, \ldots, p$, $\det(X_{(i)})/\det(X_{(i-1)}) \sim \mathrm{B2}((p-i+2)/2, \alpha - (p-i)/2)$.

Theorem 3.9. If $X \sim \mathrm{P}(p, \alpha)$ and $X = TT'$, where $T = (t_{ij})$ is an upper triangular matrix with positive diagonal elements, then $t_{11}^2, \ldots, t_{pp}^2$ are all independent, $t_{ii}^2 \sim \mathrm{B2}((i+1)/2, \alpha - (i-1)/2)$, $i = 1, \ldots, p$.

Proof. Making the transformation $X = TT'$ with the Jacobian $J(X \to T) = 2^p \prod_{i=1}^{p} t_{ii}^{i}$ in (2.1), the density of $T$ is derived as
\[
\frac{2^p\,\Gamma_p(\alpha+(p+1)/2)}{\Gamma_p((p+1)/2)\,\Gamma_p(\alpha)} \prod_{i=1}^{p} t_{ii}^{i}\det(I_p + TT')^{-(\alpha+(p+1)/2)}, \tag{3.19}
\]
where $t_{ii} > 0$, $i = 1, \ldots, p$, and $-\infty < t_{ij} < \infty$ for $i < j$. Now, partition $T$ as
\[
T = \begin{pmatrix} T_{11} & t_1 \\ 0 & t_{pp} \end{pmatrix},
\]
where $t_1$ is a $(p-1)$-dimensional column vector and $T_{11}$ is a $(p-1) \times (p-1)$ upper triangular matrix. Then
\[
\det(I_p + TT') = (1 + t_{pp}^2)\det\big[I_{p-1} + T_{11}T_{11}' + (1 + t_{pp}^2)^{-1}t_1 t_1'\big].
\]
Now, make the transformation $w = (1 + t_{pp}^2)^{-1/2}(I_{p-1} + T_{11}T_{11}')^{-1/2}t_1$ with the Jacobian $J(t_1 \to w) = (1 + t_{pp}^2)^{(p-1)/2}\det(I_{p-1} + T_{11}T_{11}')^{1/2}$ in (3.19) to get the joint density of $t_{pp}$, $w$, and $T_{11}$ as proportional to
\[
t_{pp}^{p}(1 + t_{pp}^2)^{-(\alpha+1)}\,(1 + w'w)^{-(\alpha+(p+1)/2)}\,\prod_{i=1}^{p-1} t_{ii}^{i}\det(I_{p-1} + T_{11}T_{11}')^{-(\alpha+p/2)}.
\]
From the above factorization, it is clear that $t_{pp}$, $w$, and $T_{11}$ are all independent, $t_{pp}^2 \sim \mathrm{B2}((p+1)/2, \alpha - (p-1)/2)$, and the density of $T_{11}$ is proportional to
\[
\prod_{i=1}^{p-1} t_{ii}^{i}\det(I_{p-1} + T_{11}T_{11}')^{-(\alpha+p/2)},
\]
which has the same form as the density (3.19) with $p$ replaced by $p-1$. Repeating the argument given above on the density function of $T_{11}$, we observe that $t_{p-1,p-1}^2 \sim \mathrm{B2}(p/2, \alpha - (p-2)/2)$ and is independent of the remaining elements of $T_{11}$. Continuing further with the same argument, we get the desired result.

Corollary 3.10. If $X \sim \mathrm{P}(p, \alpha)$, then the distribution of $\det(X)$ is the same as the distribution of the product of $p$ independent beta type 2 variables, that is,
\[
\det(X) \sim \prod_{i=1}^{p} v_i,
\]
where $v_i \sim \mathrm{B2}((i+1)/2, \alpha - (i-1)/2)$, $i = 1, \ldots, p$.

Corollary 3.11. If $X \sim \mathrm{P}(p, \alpha)$, then $\det(X^{(1)}), \det(X^{(2)})/\det(X^{(1)}), \ldots, \det(X^{(p)})/\det(X^{(p-1)})$ are independently distributed, where $X^{(i)}$ denotes the $i \times i$ trailing principal submatrix of $X$ and $\det(X^{(0)}) = 1$. Further, $\det(X^{(i)})/\det(X^{(i-1)}) \sim \mathrm{B2}((p-i+2)/2, \alpha - (p-i)/2)$ for $i = 1, \ldots, p$.

We conclude this section by deriving joint moments of $\det(X)$ and $\det(I_p + X)$.

Theorem 3.12. Let $X \sim \mathrm{P}(p, \alpha)$, then
\[
E\big[\det(X)^{h}\det(I_p + X)^{-s}\big] = \frac{\Gamma_p((p+1)/2 + h)\,\Gamma_p(\alpha + s - h)\,\Gamma_p(\alpha + (p+1)/2)}{\Gamma_p((p+1)/2)\,\Gamma_p(\alpha)\,\Gamma_p(\alpha + s + (p+1)/2)},
\]
where $h > -1$ and $\alpha + s - h > (p-1)/2$.

Proof. By definition,
\[
E\big[\det(X)^{h}\det(I_p + X)^{-s}\big] = \frac{\Gamma_p(\alpha+(p+1)/2)}{\Gamma_p((p+1)/2)\,\Gamma_p(\alpha)} \int_{X>0} \det(X)^{(p+1)/2 + h - (p+1)/2}\det(I_p + X)^{-(\alpha + s + (p+1)/2)}\,\mathrm{d}X.
\]
Now, evaluating the above integral using (1.5), we get the result.

Corollary 3.13. If $X \sim \mathrm{L}(p, \alpha, \Theta)$, then
\[
E\big[\det(X)^{h}\det(\Theta + X)^{-s}\big] = \det(\Theta)^{h-s}\, \frac{\Gamma_p((p+1)/2 + h)\,\Gamma_p(\alpha + s - h)\,\Gamma_p(\alpha + (p+1)/2)}{\Gamma_p((p+1)/2)\,\Gamma_p(\alpha)\,\Gamma_p(\alpha + s + (p+1)/2)}.
\]

By writing multivariate gamma functions in terms of ordinary gamma functions, the expressions $E[\det(X)^h]$ and $E[\det(I_p + X)^{-s}]$ can be simplified as
\[
E\big[\det(X)^{h}\big] = \prod_{i=1}^{p} \frac{\Gamma((p-i+2)/2 + h)\,\Gamma(\alpha - (i-1)/2 - h)}{\Gamma((p-i+2)/2)\,\Gamma(\alpha - (i-1)/2)},
\]
\[
E\big[\det(I_p + X)^{-s}\big] = \prod_{i=1}^{p} \frac{\Gamma(\alpha + s - (i-1)/2)\,\Gamma(\alpha + (p-i+2)/2)}{\Gamma(\alpha - (i-1)/2)\,\Gamma(\alpha + s + (p-i+2)/2)}.
\]
Substituting $h = 1, 2$ and $s = 1, 2$, the first and second order moments of $\det(X)$ and $\det(I_p + X)^{-1}$ are calculated as
\[
E\big[\det(X)^{h}\big] = \prod_{i=1}^{p} \frac{((p-i+2)/2)_h}{(\alpha - (i-1)/2 - h)_h}, \quad h = 1, 2,
\]
\[
E\big[\det(I_p + X)^{-s}\big] = \prod_{i=1}^{p} \frac{(\alpha - (i-1)/2)_s}{(\alpha + (p-i+2)/2)_s}, \quad s = 1, 2,
\]
where the Pochhammer notation $(a)_k$ is defined by $(a)_k = a(a+1)\cdots(a+k-1) = \Gamma(a+k)/\Gamma(a)$ with $(a)_0 = 1$.
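The reduction from multivariate to ordinary gamma functions can be verified mechanically; a small sketch (ours) comparing the $\Gamma_p$ ratio for $E[\det(X)^h]$ (the $s = 0$ case) with the product of Pochhammer factors:

```python
import math

def poch(a, k):
    """Pochhammer symbol (a)_k = Gamma(a + k) / Gamma(a)."""
    return math.gamma(a + k) / math.gamma(a)

def mgln(a, p):
    """log Gamma_p(a) via the product formula."""
    return (p * (p - 1) / 4.0) * math.log(math.pi) + sum(
        math.lgamma(a - (i - 1) / 2.0) for i in range(1, p + 1)
    )

p, alpha, h = 3, 6.0, 2

# Ratio of multivariate gamma functions (the s = 0 case of the moment formula).
lhs = math.exp(mgln((p + 1) / 2 + h, p) + mgln(alpha - h, p)
               - mgln((p + 1) / 2, p) - mgln(alpha, p))

# Product of ordinary Pochhammer factors.
rhs = 1.0
for i in range(1, p + 1):
    rhs *= poch((p - i + 2) / 2.0, h) / poch(alpha - (i - 1) / 2.0 - h, h)

print(lhs, rhs)   # the two values agree
```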

4. Results Involving Zonal and Invariant Polynomials

Let $C_\kappa(X)$ be the zonal polynomial of a $p \times p$ symmetric matrix $X$ corresponding to the ordered partition $\kappa = (k_1, \ldots, k_p)$, $k_1 \geq \cdots \geq k_p \geq 0$, $k_1 + \cdots + k_p = k$. Then, for small values of $k$, explicit formulas for $C_\kappa(X)$ are available as (James [8])
\[
C_{(1)}(X) = \operatorname{tr} X, \quad C_{(2)}(X) = \frac{(\operatorname{tr} X)^2 + 2\operatorname{tr}(X^2)}{3}, \quad C_{(1,1)}(X) = \frac{2\big[(\operatorname{tr} X)^2 - \operatorname{tr}(X^2)\big]}{3}.
\]
From the above results, it is straightforward to show that
\[
(\operatorname{tr} X)^2 = C_{(2)}(X) + C_{(1,1)}(X), \quad \operatorname{tr}(X^2) = C_{(2)}(X) - \tfrac{1}{2}C_{(1,1)}(X).
\]
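The degree-two formulas can be verified directly; a small numeric check (ours) of the two displayed identities on a random symmetric matrix:

```python
import numpy as np

def C2(X):
    """Zonal polynomial C_(2)(X) = ((tr X)^2 + 2 tr(X^2)) / 3."""
    t = np.trace(X)
    return (t ** 2 + 2.0 * np.trace(X @ X)) / 3.0

def C11(X):
    """Zonal polynomial C_(1,1)(X) = (2/3) ((tr X)^2 - tr(X^2))."""
    t = np.trace(X)
    return (2.0 / 3.0) * (t ** 2 - np.trace(X @ X))

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
X = (A + A.T) / 2.0          # arbitrary symmetric matrix

print(C2(X) + C11(X), np.trace(X) ** 2)            # sum recovers (tr X)^2
print(C2(X) - 0.5 * C11(X), np.trace(X @ X))       # difference recovers tr(X^2)
```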

For an ordered partition $\kappa$ of $k$, the hypergeometric functions ${}_1F_1(a; c; Z)$ and ${}_2F_1(a, b; c; Z)$ of matrix argument are defined by
\[
{}_1F_1(a; c; Z) = \sum_{k=0}^{\infty}\sum_{\kappa} \frac{(a)_\kappa}{(c)_\kappa}\frac{C_\kappa(Z)}{k!}, \qquad
{}_2F_1(a, b; c; Z) = \sum_{k=0}^{\infty}\sum_{\kappa} \frac{(a)_\kappa (b)_\kappa}{(c)_\kappa}\frac{C_\kappa(Z)}{k!},
\]
where the generalized hypergeometric coefficient $(a)_\kappa$ is defined by
\[
(a)_\kappa = \prod_{i=1}^{p}\Big(a - \frac{i-1}{2}\Big)_{k_i}.
\]
Further, $\det(I_p - Z)^{-a}$, in terms of zonal polynomials, can be expanded as
\[
\det(I_p - Z)^{-a} = \sum_{k=0}^{\infty}\sum_{\kappa} \frac{(a)_\kappa\, C_\kappa(Z)}{k!}, \quad \|Z\| < 1,
\]
where $\sum_{\kappa}$ denotes summation over all ordered partitions $\kappa$ of $k$.

For properties and further results, the reader is referred to Constantine [7] and Gupta and Nagar [2].

Lemma 4.1. Let $Z$ be an arbitrary complex symmetric matrix with $\operatorname{Re}(Z) > 0$. Then
\[
\int_{X>0} \operatorname{etr}(-XZ)\det(X)^{a-(p+1)/2} C_\kappa(X)\,\mathrm{d}X = (a)_\kappa\,\Gamma_p(a)\det(Z)^{-a}\,C_\kappa(Z^{-1}), \quad \operatorname{Re}(a) > \frac{p-1}{2}.
\]

Davis [9, 10] introduced a class of polynomials $C_\phi^{\kappa,\lambda}(X, Y)$ of $p \times p$ symmetric matrix arguments $X$ and $Y$, which are invariant under the transformation $X \to HXH'$, $Y \to HYH'$, $H \in O(p)$. For properties and applications of invariant polynomials, we refer to Davis [9, 10], Chikuse [11], and Nagar and Gupta [12]. Let $\kappa$, $\lambda$, and $\phi$ be ordered partitions of the nonnegative integers $k$, $l$, and $f = k + l$, respectively. Then
\[
C_\phi^{\kappa,\lambda}(X, X) = \theta_\phi^{\kappa,\lambda} C_\phi(X), \qquad \theta_\phi^{\kappa,\lambda} = \frac{C_\phi^{\kappa,\lambda}(I_p, I_p)}{C_\phi(I_p)}, \qquad \phi \in \kappa \cdot \lambda,
\]
where $\phi \in \kappa \cdot \lambda$ denotes that the irreducible representation of $\mathrm{Gl}(p, \mathbb{R})$, the group of real invertible $p \times p$ matrices, indexed by $2\phi$ appears in the decomposition of the tensor product $2\kappa \otimes 2\lambda$ of the irreducible representations indexed by $2\kappa$ and $2\lambda$. Further,
\[
C_\kappa(X)\,C_\lambda(X) = \sum_{\phi \in \kappa\cdot\lambda} \theta_\phi^{\kappa,\lambda}\, C_\phi(X).
\]

From the density of , we have where the last line has been obtained by using (4.6).

Using results on zonal polynomials, it is easy to see that Further, using the invariance of the distribution of and the above results, one obtains

Theorem 4.2. Let and be independent, . Define and . Then, the density of is given by Further, the density of is derived as

Proof. The joint density of and is given by Making the transformation and with the Jacobian in (4.17), the joint density of and is derived as where , , and . Since, and , using (4.5), we can write where and are the ordered partitions of and , respectively. Now, the application of yields Finally, substituting (4.20) in (4.18), the joint density of and is obtained as Now, the integration of in (4.21) using (4.9) yields the density of . The density of is obtained by substituting with the Jacobian in (4.21) and integrating by using (4.10).

Acknowledgment

The research work of D. K. Nagar was supported by the Comité para el Desarrollo de la Investigación, Universidad de Antioquia, research Grant no. IN560CE.