Mathematical Problems in Engineering, Volume 2019, Article ID 6963493, 12 pages. https://doi.org/10.1155/2019/6963493
Research Article

## A New Generalized Inequality for Covariance in $n$ Dimensions

1School of Mathematics and Statistics, Hainan Normal University, Haikou, Hainan, 571158, China
2Institute of Plasma Physics, Czech Academy of Sciences, Za Slovankou 3, 18200 Prague 8, Czech Republic

Correspondence should be addressed to Shi-you Lin; linsy1111@foxmail.com

Received 19 August 2018; Revised 15 December 2018; Accepted 24 December 2018; Published 16 January 2019

Copyright © 2019 Mei Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Inspired by the work of Zhefei He and Mingjin Wang which was published in the Journal of Inequalities and Applications in 2015, this paper further generalizes some related results to the case of multidimensional random variables. The resulting inequality for covariance is then applied to different multidimensional statistical distributions (multiuniform, multinomial, and multinormal). Coordinate dependence of the inequality is also examined. The obtained formulas could be useful for making estimates in multivariate statistics.

#### 1. Introduction

The concept of covariance appears ubiquitously in probability theory and statistics as the basic measure of correlation between random variables (see, e.g., standard textbooks [1–5]). An intuitive though elusive idea of correlation was transformed into a sound mathematical language in the 19th century by Auguste Bravais and Francis Galton. Readers interested in the fascinating history of this subject are advised to consult the comprehensive work of Stigler [6] or two tutorial articles [7, 8]. The analysis of covariance possesses great practical importance in applied sciences [9, 10], especially in engineering (error analysis, optimum control, probabilistic design, and system identification) [11, 12], in biotechnology [13] and medical sciences [14, 15], and in economics [16].

Naturally, the means/variances/covariances of any ensemble of random variables are uniquely determined by the probability distribution (PD) corresponding to the concrete problem under study. However, both in pure mathematics and in applications, one frequently encounters a case when the pertinent PD is unknown. Under such circumstances, it is often highly desirable to provide at least well defined general estimates (constraints) regarding the mean/variance/covariance, namely, estimates which are independent of the specific PD. These constraints typically take the form of an inequality.

One remarkable result of the abovementioned sort was obtained by Chebyshev as early as in 1882 [1–3, 6]. More explicitly, the so-called Chebyshev inequality enables one to estimate which maximum fraction of values of a given random variable can be located further than a prescribed distance from the mean. Closely related are the subsequently found Ostrowski and Grüss type inequalities [1–3] in all their different variants (as listed in the comprehensive monographs [17–19]).

To this day, the works of Chebyshev, Ostrowski, and Grüss have continued to inspire active mathematical research focused on inequalities/estimates of (co)variance. This fact is clearly documented by the rich literature dealing with the subject [20–37]. For the purposes of the present article we specifically highlight an inequality for covariance which was derived recently by He and Wang in [20]. Namely, the following theorem has been proven to hold.

Theorem I. Here $X$ is a single random variable. One assumes that $X$ has a finite expectation value $\mu$ and a finite variance $\sigma^2$. Furthermore, $f, g$ are any bounded differentiable functions, such that $|f'(x)| \le M_1$ and $|g'(x)| \le M_2$ for all $x$. Then
$$\bigl|\operatorname{Cov}\bigl(f(X), g(X)\bigr)\bigr| \le M_1 M_2\, \sigma^2. \tag{1}$$
The covariance is defined explicitly below in Section 2.
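Taking inequality (1) in the form $|\operatorname{Cov}(f(X),g(X))| \le M_1 M_2\,\sigma^2$ as reconstructed here, one can sanity-check it numerically. The following sketch evaluates both sides exactly for a four-point distribution; the distribution and the test functions are illustrative choices of ours, not taken from [20].

```python
import math

# Discrete random variable X, uniform on {0, 1, 2, 3}.
xs = [0.0, 1.0, 2.0, 3.0]
p = [0.25] * 4

def expect(h):
    """Exact expectation of h(X) under the discrete distribution."""
    return sum(h(x) * w for x, w in zip(xs, p))

mu = expect(lambda x: x)
var = expect(lambda x: (x - mu) ** 2)

f = math.sin                     # |f'(x)| = |cos x| <= M1 = 1
g = lambda x: x * x              # |g'(x)| = |2x|   <= M2 = 6 on [0, 3]
M1, M2 = 1.0, 6.0

cov = expect(lambda x: f(x) * g(x)) - expect(f) * expect(g)
bound = M1 * M2 * var
```

Since the distribution is finite, all expectations are computed exactly, so the check involves no sampling error.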

In addition, He and Wang have applied inequality (1) to several concrete probability distributions (uniform, Gamma, Beta, Poisson, and binomial) and derived in this way various other, more specific inequalities.

The purpose of the present article is to generalize the work of He and Wang to the situation when the single random variable $X$ is replaced by an $n$-dimensional random variable $X = (X_1, \dots, X_n)$ with $n \ge 2$. More explicitly, we wish to generalize statement (1) and subsequently examine different multidimensional statistical distributions (multiuniform, multinomial, and multinormal).

The paper is structured as follows: Section 2 reviews all the necessary (well known and standard) prerequisites needed for an appropriate multidimensional generalization of statement (1) of Theorem I, such as the concept and basic properties of the expectation, variance, and covariance; the Cauchy-Schwarz inequality; and the Lagrange mean value theorem in $n$ dimensions. Section 3 represents the main core of our work, since it generalizes (1) as mentioned above. In Section 4 we apply our newly derived multidimensional covariance inequality to concrete multidimensional probability distributions (multiuniform, multinomial, normal, and multinormal). The purpose of Section 5 is to examine the coordinate dependence of our generalized inequality (33) stated by Theorem II. Section 6 contains a brief conclusion and prospects of our future research work.

#### 2. Preparatory Considerations

##### 2.1. Multidimensional Random Variables
###### 2.1.1. Discrete Case

Assume that (i) $\Omega \subset \mathbb{R}^n$ is a countable set; (ii) $X = (X_1, \dots, X_n)$ is an $n$-dimensional random variable taking values $x = (x_1, \dots, x_n) \in \Omega$; (iii) $p(x) \ge 0$ are the associated probabilities completely determining the statistical distribution of $X$.

Then

(a) the probabilities satisfy the normalization condition
$$\sum_{x \in \Omega} p(x) = 1. \tag{2}$$
(b) the expectation of $X$ is given by the formulas
$$\mu_j = E[X_j] = \sum_{x \in \Omega} x_j\, p(x), \tag{3}$$
$$\mu = (\mu_1, \dots, \mu_n). \tag{4}$$
Hereafter we shall assume that $\mu$ is well defined and finite.
(c) the variance of $X_j$ ($j = 1, \dots, n$) is given by the formula
$$\sigma_j^2 = \operatorname{Var}(X_j) = \sum_{x \in \Omega} (x_j - \mu_j)^2\, p(x). \tag{5}$$
Hereafter we shall assume that each $\sigma_j^2$ is well defined and finite. Relation (5) can be recast into an equivalent form
$$\sigma_j^2 = E[X_j^2] - \mu_j^2. \tag{6}$$

Let $f$ be a function which is bounded in $\Omega$. Boundedness of $f$ means that
$$|f(x)| \le C_f < \infty \quad \text{for all } x \in \Omega. \tag{7}$$
Then

($\alpha$) the expectation
$$E[f(X)] = \sum_{x \in \Omega} f(x)\, p(x) \tag{8}$$
is finite and well defined, since
$$\bigl|E[f(X)]\bigr| \le \sum_{x \in \Omega} |f(x)|\, p(x) \le C_f. \tag{9}$$
($\beta$) also the variance
$$\operatorname{Var} f(X) = E\bigl[f(X)^2\bigr] - \bigl(E[f(X)]\bigr)^2 \tag{10}$$
is finite and well defined, since
$$\operatorname{Var} f(X) \le E\bigl[f(X)^2\bigr] \le C_f^2. \tag{11}$$

###### 2.1.2. Continuous Case

Assume that (i) $\Omega \subseteq \mathbb{R}^n$ is an open set; (ii) $X = (X_1, \dots, X_n)$ is an $n$-dimensional random variable taking values $x \in \Omega$; (iii) $\rho(x) \ge 0$ is the associated probability density completely determining the statistical distribution of $X$.

Then

(a) the probability density satisfies the normalization condition
$$\int_{\Omega} \rho(x)\, dx = 1. \tag{12}$$
(b) the expectation of $X$ is given by the formula
$$\mu = (\mu_1, \dots, \mu_n), \qquad \mu_j = E[X_j] = \int_{\Omega} x_j\, \rho(x)\, dx. \tag{13}$$
Hereafter we shall assume that $\mu$ is well defined and finite.
(c) the variance of $X_j$ ($j = 1, \dots, n$) is given by the formula
$$\sigma_j^2 = \operatorname{Var}(X_j) = \int_{\Omega} (x_j - \mu_j)^2\, \rho(x)\, dx. \tag{14}$$
Hereafter we shall assume that each $\sigma_j^2$ is well defined and finite. Similarly as above in (5), also relation (14) can be recast into an equivalent form
$$\sigma_j^2 = E[X_j^2] - \mu_j^2. \tag{15}$$

Let $f$ be a function which is measurable and bounded in $\Omega$. Boundedness of $f$ means that
$$|f(x)| \le C_f < \infty \quad \text{for all } x \in \Omega. \tag{16}$$
Then

($\alpha$) the expectation
$$E[f(X)] = \int_{\Omega} f(x)\, \rho(x)\, dx \tag{17}$$
is finite and well defined, since
$$\bigl|E[f(X)]\bigr| \le \int_{\Omega} |f(x)|\, \rho(x)\, dx \le C_f. \tag{18}$$
($\beta$) also the variance
$$\operatorname{Var} f(X) = E\bigl[f(X)^2\bigr] - \bigl(E[f(X)]\bigr)^2 \tag{19}$$
is finite and well defined, since
$$\operatorname{Var} f(X) \le E\bigl[f(X)^2\bigr] \le C_f^2. \tag{20}$$

##### 2.2. Covariance

Let $X$ be an $n$-dimensional random variable, and let $f, g$ be bounded (measurable) functions. We define the covariance by the prescription
$$\operatorname{Cov}\bigl(f(X), g(X)\bigr) = E\Bigl[\bigl(f(X) - E[f(X)]\bigr)\bigl(g(X) - E[g(X)]\bigr)\Bigr], \tag{21}$$
with the expectations given by (8) in the discrete case and by (17) in the continuous case. Equivalently one can write
$$\operatorname{Cov}\bigl(f(X), g(X)\bigr) = E\bigl[f(X) g(X)\bigr] - E[f(X)]\, E[g(X)]. \tag{22}$$
Recall that $\operatorname{Cov}(X_j, X_j) = \sigma_j^2$, as follows from (5)-(6).

After taking the absolute value of (21) one gets
$$\bigl|\operatorname{Cov}\bigl(f(X), g(X)\bigr)\bigr| \le E\Bigl[\bigl|f(X) - E[f(X)]\bigr|\, \bigl|g(X) - E[g(X)]\bigr|\Bigr]. \tag{23}$$
The well known Cauchy-Schwarz inequality (to be reviewed in the following subsection) implies however
$$E\Bigl[\bigl|f(X) - E[f(X)]\bigr|\, \bigl|g(X) - E[g(X)]\bigr|\Bigr] \le \sqrt{E\Bigl[\bigl(f(X) - E[f(X)]\bigr)^2\Bigr]}\, \sqrt{E\Bigl[\bigl(g(X) - E[g(X)]\bigr)^2\Bigr]}. \tag{24}$$
Hence one may conclude that
$$\bigl|\operatorname{Cov}\bigl(f(X), g(X)\bigr)\bigr| \le \sqrt{\operatorname{Var} f(X)}\, \sqrt{\operatorname{Var} g(X)}. \tag{25}$$
This means also that $\operatorname{Cov}(f(X), g(X))$ is well defined and finite as long as the variances $\operatorname{Var} f(X)$ and $\operatorname{Var} g(X)$ are well defined and finite.
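The two equivalent covariance formulas and the Cauchy-Schwarz consequence (25) can be verified exactly on a small discrete example; the weights and test functions below are arbitrary illustrative choices of ours.

```python
import itertools
import math

# A small 2-D discrete distribution on {0,1,2}^2 with arbitrary weights.
omega = list(itertools.product([0, 1, 2], repeat=2))
w = [1.0 + x1 + 2 * x2 for (x1, x2) in omega]
total = sum(w)
p = [v / total for v in w]

def expect(h):
    return sum(h(x) * q for x, q in zip(omega, p))

f = lambda x: math.sin(x[0]) + x[1]
g = lambda x: x[0] * x[1]

Ef, Eg = expect(f), expect(g)
cov_centred = expect(lambda x: (f(x) - Ef) * (g(x) - Eg))  # centred definition
cov_product = expect(lambda x: f(x) * g(x)) - Ef * Eg      # product-minus-means form
var_f = expect(lambda x: (f(x) - Ef) ** 2)
var_g = expect(lambda x: (g(x) - Eg) ** 2)
```

Both covariance expressions agree to machine precision, and the covariance is dominated by the product of standard deviations.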

##### 2.3. The Cauchy-Schwarz Inequality

Let $u = (u_1, \dots, u_n)$ and $v = (v_1, \dots, v_n)$ be any two vectors in $\mathbb{R}^n$. Then
$$\left|\sum_{i=1}^{n} u_i v_i\right| \le \left(\sum_{i=1}^{n} u_i^2\right)^{1/2} \left(\sum_{i=1}^{n} v_i^2\right)^{1/2}. \tag{26}$$
The proof is discussed in all standard textbooks of functional analysis, e.g., in [38].
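A direct numerical check of the vector inequality (26), with two arbitrary random vectors:

```python
import math
import random

random.seed(0)
u = [random.uniform(-1, 1) for _ in range(10)]
v = [random.uniform(-1, 1) for _ in range(10)]

dot = sum(a * b for a, b in zip(u, v))
norm_u = math.sqrt(sum(a * a for a in u))
norm_v = math.sqrt(sum(b * b for b in v))
```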

##### 2.4. The Lagrange Mean Value Theorem in Dimensions

Let $f$ be a real valued differentiable function defined in an open convex set $\Omega \subseteq \mathbb{R}^n$. Let $a \in \Omega$, $b \in \Omega$. Convexity of $\Omega$ implies that the straight line segment connecting $a$ with $b$ is entirely contained within $\Omega$. The Lagrange mean value theorem states that there always exists a number $\theta \in (0, 1)$ with the basic property
$$f(b) - f(a) = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\bigl(a + \theta(b - a)\bigr)\, (b_i - a_i). \tag{27}$$
Because it is not easy to find a proof of statement (27) in available standard textbooks, we prefer to supplement here our own short proof. It is inspired by Theorem 4.2 given on page 378 of [39]: Direct calculation confirms that
$$f(b) - f(a) = \int_0^1 \frac{d}{dt}\, f\bigl(a + t(b - a)\bigr)\, dt = \int_0^1 \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\bigl(a + t(b - a)\bigr)\, (b_i - a_i)\, dt. \tag{28}$$
The Lagrange mean value theorem of the integral calculus (see, e.g., [40]) implies however that
$$\int_0^1 \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\bigl(a + t(b - a)\bigr)\, (b_i - a_i)\, dt = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\bigl(a + \theta(b - a)\bigr)\, (b_i - a_i), \tag{29}$$
where $\theta \in (0, 1)$ is a generally unknown fixed number (depending of course not only upon $a$ and $b$ but also on the function $f$). Combination of (28) and (29) yields now immediately the desired claim (27).
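The intermediate point guaranteed by the theorem can be located numerically in a concrete case. The sketch below uses $f(x, y) = x^2 + xy$ on the segment from $(0,0)$ to $(1,1)$ (our own example); the directional derivative along the segment is monotone here, so bisection finds $\theta$.

```python
# Mean value theorem along a segment: f(b) - f(a) equals the gradient of f
# at a + theta*(b - a), dotted with (b - a), for some theta in (0, 1).
def f(x, y):
    return x * x + x * y

def grad_f(x, y):
    return (2 * x + y, x)

a, b = (0.0, 0.0), (1.0, 1.0)
lhs = f(*b) - f(*a)

def directional(theta):
    """Gradient of f at a + theta*(b - a), dotted with b - a."""
    x = a[0] + theta * (b[0] - a[0])
    y = a[1] + theta * (b[1] - a[1])
    gx, gy = grad_f(x, y)
    return gx * (b[0] - a[0]) + gy * (b[1] - a[1])

# Bisection on directional(theta) - lhs (monotone increasing for this f).
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if directional(mid) < lhs:
        lo = mid
    else:
        hi = mid
theta = 0.5 * (lo + hi)
```

For this $f$, the restriction to the segment is $t \mapsto 2t^2$, so the exact answer is $\theta = 1/2$.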

#### 3. Multidimensional Generalization of Theorem I

##### 3.1. Preliminaries

Before formulating the above advertised multidimensional generalization of Theorem I, let us introduce some additional notations and conventions. Recall that $X = (X_1, \dots, X_n)$ is an $n$-dimensional random variable with a finite expectation $\mu = (\mu_1, \dots, \mu_n)$ and finite variances $\sigma_i^2$ (see Section 2.1 for details). We define an auxiliary quantity
$$V = \sum_{i=1}^{n} \sigma_i^2. \tag{30}$$
Symbol $\Omega$ will hereafter stand for an open convex subset of $\mathbb{R}^n$ (the case $\Omega = \mathbb{R}^n$ is also allowed to occur). If $X$ is a discrete random vector, then we tacitly assume that all its values lie in $\Omega$.

Assume now that two functions $f, g : \Omega \to \mathbb{R}$ are continuous and differentiable in $\Omega$. Assume also that all the partial derivatives $\partial f / \partial x_i$, $\partial g / \partial x_i$ are bounded in $\Omega$. Then one may define the auxiliary symbols
$$M_i^{(f)} = \sup_{x \in \Omega} \left|\frac{\partial f}{\partial x_i}(x)\right|, \qquad M_i^{(g)} = \sup_{x \in \Omega} \left|\frac{\partial g}{\partial x_i}(x)\right|. \tag{31}$$
Subsequently one may introduce additional notations $K_f$ and $K_g$ through the formulas
$$K_f = \left(\sum_{i=1}^{n} \bigl(M_i^{(f)}\bigr)^2\right)^{1/2}, \qquad K_g = \left(\sum_{i=1}^{n} \bigl(M_i^{(g)}\bigr)^2\right)^{1/2}. \tag{32}$$

##### 3.2. Our Basic Theorem

Now we are ready to state our own multidimensional generalization of Theorem I.

Theorem II. Assuming the above specified notations and conventions, one has
$$\bigl|\operatorname{Cov}\bigl(f(X), g(X)\bigr)\bigr| \le K_f\, K_g \sum_{i=1}^{n} \sigma_i^2 = K_f\, K_g\, V. \tag{33}$$

Proof. Since the proof is a bit lengthy, we shall conveniently divide it into parts:

(i) let $t \in \Omega$ be a fixed parameter. Then $f(t)$ is just a fixed number, and
$$\operatorname{Var} f(X) = \operatorname{Var}\bigl(f(X) - f(t)\bigr) \le E\Bigl[\bigl(f(X) - f(t)\bigr)^2\Bigr]. \tag{34}$$
Define a quantity
$$\Phi(t) = E\Bigl[\bigl(f(X) - f(t)\bigr)^2\Bigr]. \tag{35}$$
The Lagrange mean value theorem (Section 2.4) states that
$$f(X) - f(t) = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\bigl(t + \theta(X - t)\bigr)\, (X_i - t_i), \tag{36}$$
where $\theta \in (0, 1)$. Hence
$$|f(X) - f(t)| \le \sum_{i=1}^{n} \left|\frac{\partial f}{\partial x_i}\bigl(t + \theta(X - t)\bigr)\right| |X_i - t_i| \le \sum_{i=1}^{n} M_i^{(f)} |X_i - t_i|. \tag{37}$$
In the last line of (37) we have used the triangle inequality;

(ii) by definition (17), we have
$$\Phi(t) = \int_{\Omega} \bigl(f(x) - f(t)\bigr)^2\, \rho(x)\, dx. \tag{38}$$
But the involved integral satisfies an inequality
$$\int_{\Omega} \bigl(f(x) - f(t)\bigr)^2\, \rho(x)\, dx \le \int_{\Omega} \left(\sum_{i=1}^{n} M_i^{(f)} |x_i - t_i|\right)^{2} \rho(x)\, dx. \tag{39}$$
Relations (38)-(39) correspond of course to continuous statistical distributions. Analogical formulas apply also in the case of discrete random variables (one merely replaces the integrals by sums). We leave the details to the reader;

(iii) after combining (37) with (39) one finds that
$$\Phi(t) \le E\left[\left(\sum_{i=1}^{n} M_i^{(f)} |X_i - t_i|\right)^{2}\right]. \tag{40}$$
Cauchy-Schwarz inequality (26) implies however
$$\left(\sum_{i=1}^{n} M_i^{(f)} |X_i - t_i|\right)^{2} \le \left(\sum_{i=1}^{n} \bigl(M_i^{(f)}\bigr)^2\right) \left(\sum_{i=1}^{n} (X_i - t_i)^2\right). \tag{41}$$
Hence also
$$\Phi(t) \le K_f^2\, E\left[\sum_{i=1}^{n} (X_i - t_i)^2\right], \tag{42}$$
where
$$K_f^2 = \sum_{i=1}^{n} \bigl(M_i^{(f)}\bigr)^2, \tag{43}$$
consonantly with (32);

(iv) proceeding further, the variance relation
$$\sigma_i^2 = E[X_i^2] - \mu_i^2 \tag{44}$$
implies $E[X_i^2] = \sigma_i^2 + \mu_i^2$, giving in turn
$$E\bigl[(X_i - t_i)^2\bigr] = E[X_i^2] - 2 t_i \mu_i + t_i^2 = \sigma_i^2 + (\mu_i - t_i)^2. \tag{45}$$
This is valid since $E[X_i] = \mu_i$ and $\operatorname{Var}(X_i) = \sigma_i^2$. We may thus conclude that
$$\Phi(t) \le K_f^2 \sum_{i=1}^{n} \bigl(\sigma_i^2 + (\mu_i - t_i)^2\bigr); \tag{47}$$

(v) comparison of (35) and (47) provides an inequality
$$E\Bigl[\bigl(f(X) - f(t)\bigr)^2\Bigr] \le K_f^2 \sum_{i=1}^{n} \bigl(\sigma_i^2 + (\mu_i - t_i)^2\bigr). \tag{48}$$
So far, the entity $t$ was treated as a fixed parameter;

(vi) now we shall set $t$ to be a random variable equivalent to $X$ (an independent copy $X'$ of $X$). Consequently we take the expectation of (48) over $t$. This results in
$$E\Bigl[\bigl(f(X) - f(X')\bigr)^2\Bigr] \le K_f^2 \sum_{i=1}^{n} \Bigl(\sigma_i^2 + E\bigl[(\mu_i - X_i')^2\bigr]\Bigr) = 2 K_f^2 \sum_{i=1}^{n} \sigma_i^2. \tag{49}$$
Yet, by the definition of the variance (19), one has $E[(f(X) - f(X'))^2] = 2 \operatorname{Var} f(X)$, and one can convert (49) into a simple outcome
$$\operatorname{Var} f(X) \le K_f^2\, V, \tag{50}$$
where of course
$$V = \sum_{i=1}^{n} \sigma_i^2, \tag{51}$$
consonantly with (30);

(vii) a completely analogical sequence of considerations can be applied also to the case of $g$. One arrives at an inequality
$$\operatorname{Var} g(X) \le K_g^2\, V, \tag{52}$$
where
$$K_g^2 = \sum_{i=1}^{n} \bigl(M_i^{(g)}\bigr)^2. \tag{53}$$
Combination of (25), (50), and (52) provides now the desired final claim
$$\bigl|\operatorname{Cov}\bigl(f(X), g(X)\bigr)\bigr| \le \sqrt{\operatorname{Var} f(X)}\, \sqrt{\operatorname{Var} g(X)} \le K_f\, K_g\, V. \tag{54}$$
This is exactly as stated above in (33). Thus, our Theorem II is proven.
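Assuming (33) takes the form $|\operatorname{Cov}(f(X),g(X))| \le K_f K_g \sum_i \sigma_i^2$ with $K_f, K_g$ as in (31)-(32), the inequality can be checked exactly on a toy two-dimensional discrete distribution; the grid, the enclosing convex set $(-0.1, 2.1)^2$, and the test functions below are our own illustrative choices.

```python
import itertools
import math

# Uniform distribution on the grid {0,1,2}^2; Omega = (-0.1, 2.1)^2 contains it.
pts = list(itertools.product([0.0, 1.0, 2.0], repeat=2))
p = 1.0 / len(pts)

def expect(h):
    return sum(h(x, y) * p for x, y in pts)

f = lambda x, y: math.sin(x) + y   # |df/dx| <= 1,   |df/dy| <= 1   on Omega
g = lambda x, y: x * y             # |dg/dx| <= 2.1, |dg/dy| <= 2.1 on Omega
K_f = math.sqrt(1.0 ** 2 + 1.0 ** 2)
K_g = math.sqrt(2.1 ** 2 + 2.1 ** 2)

mean = lambda i: expect(lambda x, y: (x, y)[i])
var = lambda i: expect(lambda x, y: ((x, y)[i] - mean(i)) ** 2)
V = var(0) + var(1)                # sum of the coordinate variances

cov = expect(lambda x, y: f(x, y) * g(x, y)) - expect(f) * expect(g)
bound = K_f * K_g * V
```

All expectations are exact finite sums, so the comparison is free of sampling noise.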

#### 4. An Application of Theorem II on Different Probability Distributions

In the present section, we shall derive some new inequalities by applying our basic Theorem II to three different types of probability distributions of an -dimensional random variable. The definitions of the distributions discussed below can be found in [4, 41].

##### 4.1. Multiuniform Distribution
###### 4.1.1. Definition

The probability density function is given by the formula
$$\rho(x) = \begin{cases} c, & x \in \Omega, \\ 0, & x \notin \Omega. \end{cases} \tag{55}$$
Notice that

(a) the probability density satisfies the normalization condition
$$\int_{\Omega} \rho(x)\, dx = 1. \tag{56}$$
Therefore
$$c\, V(\Omega) = 1. \tag{57}$$
Define
$$V(\Omega) = \int_{\Omega} dx; \tag{58}$$
this is the volume of the region $\Omega$. Then
$$\rho(x) = \frac{1}{V(\Omega)} \quad \text{for } x \in \Omega. \tag{59}$$
(b) the expectation of $X_i$ ($i = 1, \dots, n$) is given by the formula combining (55), (59), and (13),
$$\mu_i = \frac{1}{V(\Omega)} \int_{\Omega} x_i\, dx. \tag{60}$$
(c) the variance of $X_i$ ($i = 1, \dots, n$) is then given by the formula combining (15) and (60),
$$\sigma_i^2 = \frac{1}{V(\Omega)} \int_{\Omega} x_i^2\, dx - \mu_i^2. \tag{61}$$
After recalling (30) one can write
$$V = \sum_{i=1}^{n} \sigma_i^2 = \sum_{i=1}^{n} \left(\frac{1}{V(\Omega)} \int_{\Omega} x_i^2\, dx - \mu_i^2\right). \tag{62}$$

###### 4.1.2. An Application of Theorem II

Theorem II-1. Assume that two functions $f, g$ are continuous and differentiable in $\Omega$. Assume also that all the partial derivatives $\partial f / \partial x_i$, $\partial g / \partial x_i$ are bounded in $\Omega$. Recalling notations (31) and (32), one has
$$\bigl|\operatorname{Cov}\bigl(f(X), g(X)\bigr)\bigr| \le K_f\, K_g\, V, \tag{63}$$
where
$$V = \sum_{i=1}^{n} \left(\frac{1}{V(\Omega)} \int_{\Omega} x_i^2\, dx - \mu_i^2\right). \tag{64}$$

Proof. Let $X$ be an $n$-dimensional random variable which possesses the multiuniform distribution. According to (55), (59), and (17) we have
$$E[f(X)] = \frac{1}{V(\Omega)} \int_{\Omega} f(x)\, dx, \tag{65}$$
and therefore
$$\operatorname{Cov}\bigl(f(X), g(X)\bigr) = \frac{1}{V(\Omega)} \int_{\Omega} f(x) g(x)\, dx - \frac{1}{V(\Omega)^2} \int_{\Omega} f(x)\, dx \int_{\Omega} g(x)\, dx. \tag{66}$$
Then
$$\bigl|\operatorname{Cov}\bigl(f(X), g(X)\bigr)\bigr| \le K_f\, K_g \sum_{i=1}^{n} \sigma_i^2, \tag{67}$$
due to Theorem II, (33). After inserting (62) one gets
$$\bigl|\operatorname{Cov}\bigl(f(X), g(X)\bigr)\bigr| \le K_f\, K_g \sum_{i=1}^{n} \left(\frac{1}{V(\Omega)} \int_{\Omega} x_i^2\, dx - \mu_i^2\right). \tag{68}$$
Thus the proof is completed.

###### 4.1.3. Special Case: Defined within an -Dimensional Rectangular Box

Assume that $\Omega$ corresponds to a rectangular box. Stated mathematically,
$$\Omega = (a_1, b_1) \times (a_2, b_2) \times \cdots \times (a_n, b_n),$$
where $a_i < b_i$ and $i = 1, \dots, n$. Then,

(a) according to (58),
$$V(\Omega) = \prod_{i=1}^{n} (b_i - a_i). \tag{69}$$
(b) the expectation of $X_i$ is equal to
$$\mu_i = \frac{a_i + b_i}{2}, \tag{70}$$
whereas the expectation of $X_i^2$ is
$$E[X_i^2] = \frac{a_i^2 + a_i b_i + b_i^2}{3}. \tag{71}$$
(c) the variance of $X_i$ can be expressed as
$$\sigma_i^2 = E[X_i^2] - \mu_i^2 \tag{72}$$
$$= \frac{(b_i - a_i)^2}{12}. \tag{73}$$
Subsequently we have
$$V = \sum_{i=1}^{n} \sigma_i^2 = \frac{1}{12} \sum_{i=1}^{n} (b_i - a_i)^2. \tag{74}$$

Now we are ready to make the following statement.

Theorem II-2. One has
$$\bigl|\operatorname{Cov}\bigl(f(X), g(X)\bigr)\bigr| \le \frac{K_f\, K_g}{12} \sum_{i=1}^{n} (b_i - a_i)^2, \tag{75}$$
where
$$K_f = \left(\sum_{i=1}^{n} \bigl(M_i^{(f)}\bigr)^2\right)^{1/2}, \qquad K_g = \left(\sum_{i=1}^{n} \bigl(M_i^{(g)}\bigr)^2\right)^{1/2}. \tag{76}$$

Proof. The proof follows immediately after substituting (69), (70), and (74) into (63) and (64).
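The one-dimensional building block of (74), $\operatorname{Var}(X_i) = (b_i - a_i)^2 / 12$ for a uniform variable on $(a_i, b_i)$, can be confirmed numerically; the interval below is an arbitrary illustrative choice, and the box case follows coordinate by coordinate since the density factorizes.

```python
# Check Var(X) = (b - a)**2 / 12 for the uniform density on (a, b)
# using a midpoint Riemann sum.
a, b = 1.0, 4.0
n = 100000
h = (b - a) / n
grid = [a + (k + 0.5) * h for k in range(n)]
dens = 1.0 / (b - a)

mean = sum(x * dens * h for x in grid)
var = sum((x - mean) ** 2 * dens * h for x in grid)
exact = (b - a) ** 2 / 12.0
```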

##### 4.2. Multinomial Distribution
###### 4.2.1. Definition

Assume that

(i) $N_1, N_2, \dots, N_k$ are random variables, where (a) $N$ is a given fixed number of independent trials; (b) $k$ is the number of possible outcomes in each trial, and here the outcomes can be labeled as $1, 2, \dots, k$; (c) $N_i$ is the number of occurrences of the outcome $i$ during the whole $N$ trials;
(ii) $p_i$ is the prescribed probability of the outcome $i$ for a single trial, and one has
$$\sum_{i=1}^{k} p_i = 1. \tag{77}$$

Then
(a) the probability distribution function is given by the formula
$$P(N_1 = n_1, \dots, N_k = n_k) = \frac{N!}{n_1!\, n_2! \cdots n_k!}\, p_1^{n_1} p_2^{n_2} \cdots p_k^{n_k}, \qquad \sum_{i=1}^{k} n_i = N. \tag{78}$$
(b) the expectation is equal to
$$E[N_i] = N p_i. \tag{79}$$
(c) the variance is equal to
$$\operatorname{Var}(N_i) = N p_i (1 - p_i). \tag{80}$$
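The multinomial moments can be verified exhaustively for small parameters; besides the mean and variance above, the sketch also checks the standard covariance fact $\operatorname{Cov}(N_i, N_j) = -N p_i p_j$ for $i \ne j$. The values $N = 4$, $k = 3$, and $p$ are arbitrary illustrative choices.

```python
import itertools
import math

# Multinomial distribution with N trials and probabilities p = (p1, p2, p3).
N = 4
p = [0.2, 0.3, 0.5]

outcomes = [c for c in itertools.product(range(N + 1), repeat=3) if sum(c) == N]

def pmf(c):
    coef = math.factorial(N)
    for n_i in c:
        coef //= math.factorial(n_i)
    return coef * p[0] ** c[0] * p[1] ** c[1] * p[2] ** c[2]

def expect(h):
    return sum(h(c) * pmf(c) for c in outcomes)

total = sum(pmf(c) for c in outcomes)
mean0 = expect(lambda c: c[0])
var0 = expect(lambda c: (c[0] - mean0) ** 2)
mean1 = expect(lambda c: c[1])
cov01 = expect(lambda c: c[0] * c[1]) - mean0 * mean1
```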

###### 4.2.2. An Application of Theorem II

Theorem II-3. Assume that two functions $f, g$ are continuous and differentiable in an open convex set $\Omega \subset \mathbb{R}^k$ which contains all the admissible values of $(N_1, \dots, N_k)$. Assume also that all the partial derivatives $\partial f / \partial x_i$, $\partial g / \partial x_i$ are bounded in $\Omega$. Recalling notations (31) and (32), one has
$$\bigl|\operatorname{Cov}\bigl(f(N_1, \dots, N_k),\, g(N_1, \dots, N_k)\bigr)\bigr| \le K_f\, K_g\, N \sum_{i=1}^{k} p_i (1 - p_i). \tag{81}$$

Proof. Let $X = (N_1, \dots, N_k)$ be a $k$-dimensional random variable which possesses the multinomial distribution. According to (78) and (8) we have
$$E[f(X)] = \sum_{n_1 + \cdots + n_k = N} f(n_1, \dots, n_k)\, P(N_1 = n_1, \dots, N_k = n_k), \tag{82}$$
and therefore
$$\operatorname{Cov}\bigl(f(X), g(X)\bigr) = E\bigl[f(X) g(X)\bigr] - E[f(X)]\, E[g(X)]. \tag{83}$$
Yet (33) of Theorem II implies
$$\bigl|\operatorname{Cov}\bigl(f(X), g(X)\bigr)\bigr| \le K_f\, K_g \sum_{i=1}^{k} \operatorname{Var}(N_i). \tag{84}$$
Combination of (83), (84), and (80) yields the desired inequality (81); thus the proof is completed.

##### 4.3. One-Dimensional Normal Distribution

Since the normal distribution was not discussed in paper [20] of He and Wang, we shall discuss first the case of a single random variable. Later in the subsequent subsection we shall extend our result to the case of two random variables ($n = 2$).

###### 4.3.1. Definition

Assume here that (i) $\Omega = \mathbb{R}$; (ii) $X$ is a one-dimensional random variable; (iii) $\mu \in \mathbb{R}$ and $\sigma > 0$ are given prescribed parameters.

Then
(a) the probability density function is given by the formula
$$\rho(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right). \tag{85}$$
(b) the expectation is equal to
$$E[X] = \mu. \tag{86}$$
(c) the variance is equal to
$$\operatorname{Var}(X) = \sigma^2. \tag{87}$$

###### 4.3.2. An Application of Theorem I

Theorem I-1. Assume that two functions $f, g$ are continuous and differentiable in $\mathbb{R}$. Assume also that the derivatives $f'$, $g'$ are bounded in $\mathbb{R}$, with $M_1 = \sup_{x} |f'(x)|$ and $M_2 = \sup_{x} |g'(x)|$. Then
$$\bigl|\operatorname{Cov}\bigl(f(X), g(X)\bigr)\bigr| \le M_1 M_2\, \sigma^2. \tag{88}$$

Proof. Let $X$ be a random variable which possesses the normal distribution. According to (85) and (17) we have
$$E[f(X)] = \frac{1}{\sigma \sqrt{2\pi}} \int_{-\infty}^{+\infty} f(x) \exp\left(-\frac{(x - \mu)^2}{2\sigma^2}\right) dx, \tag{89}$$
and therefore
$$\operatorname{Cov}\bigl(f(X), g(X)\bigr) = E\bigl[f(X) g(X)\bigr] - E[f(X)]\, E[g(X)]. \tag{90}$$
Yet (1) of Theorem I implies
$$\bigl|\operatorname{Cov}\bigl(f(X), g(X)\bigr)\bigr| \le M_1 M_2 \operatorname{Var}(X). \tag{91}$$
Combination of (90), (91), and (87) yields the desired inequality (88); thus the proof is completed.
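Assuming the bound takes the form $|\operatorname{Cov}(f(X),g(X))| \le M_1 M_2 \sigma^2$, it can be checked for the standard normal distribution by numerical quadrature; the test functions $\sin$ and $\tanh$ (both with derivative bounded by 1) are our own illustrative choices.

```python
import math

# Midpoint-rule integration against the standard normal density on [-10, 10];
# the tail mass beyond +-10 is negligible (~1e-22).
def phi(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

n = 50000
lo, hi = -10.0, 10.0
h = (hi - lo) / n
grid = [lo + (k + 0.5) * h for k in range(n)]

def expect(fun):
    return sum(fun(x) * phi(x) * h for x in grid)

f, g = math.sin, math.tanh       # |f'| <= M1 = 1, |g'| <= M2 = 1
M1 = M2 = 1.0

mass = expect(lambda x: 1.0)
var_x = expect(lambda x: x * x) - expect(lambda x: x) ** 2
cov = expect(lambda x: f(x) * g(x)) - expect(f) * expect(g)
bound = M1 * M2 * var_x
```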

###### 4.3.3. Standard Normal Distribution

The standard normal distribution corresponds to choosing $\mu = 0$ and $\sigma = 1$. Accordingly we have
$$\rho(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}, \qquad E[X] = 0, \qquad \operatorname{Var}(X) = 1. \tag{92}$$
After substituting (92) into (88) one arrives at the following outcome.

Theorem I-2. Under the assumptions of Theorem I-1, for the standard normal random variable $X$ one has
$$\bigl|\operatorname{Cov}\bigl(f(X), g(X)\bigr)\bigr| \le M_1 M_2. \tag{93}$$

##### 4.4. Standard Bivariate Normal Distribution
###### 4.4.1. Definition

Assume that (i) $\Omega = \mathbb{R}^2$; (ii) $X = (X_1, X_2)$ with $X_1$ and $X_2$ independent; (iii) $\varphi(x_i)$ denotes the standard normal probability density function of $X_i$, with $i \in \{1, 2\}$:
$$\varphi(x_i) = \frac{1}{\sqrt{2\pi}}\, e^{-x_i^2/2}. \tag{94}$$
Then
(a) the probability density function is given by the formula
$$\rho(x_1, x_2) = \varphi(x_1)\, \varphi(x_2). \tag{95}$$
(b) the expectation is equal to
$$E[X_1] = E[X_2] = 0. \tag{96}$$
(c) the variance is equal to
$$\operatorname{Var}(X_1) = \operatorname{Var}(X_2) = 1. \tag{97}$$

###### 4.4.2. An Application of Theorem II

Theorem II-4. Assume that two functions $f, g$ are continuous and differentiable in $\mathbb{R}^2$. Assume also that all the partial derivatives $\partial f / \partial x_i$, $\partial g / \partial x_i$ are bounded in $\mathbb{R}^2$. Recalling (31) and (32), one has
$$\bigl|\operatorname{Cov}\bigl(f(X), g(X)\bigr)\bigr| \le 2\, K_f\, K_g. \tag{98}$$

Proof. Let $X = (X_1, X_2)$ be a 2-dimensional random variable which possesses the standard bivariate normal distribution. According to (95) and (17), one has
$$E[f(X)] = \int_{\mathbb{R}^2} f(x_1, x_2)\, \varphi(x_1)\, \varphi(x_2)\, dx_1\, dx_2, \tag{99}$$
and therefore
$$\operatorname{Cov}\bigl(f(X), g(X)\bigr) = E\bigl[f(X) g(X)\bigr] - E[f(X)]\, E[g(X)]. \tag{100}$$
Yet (33) of Theorem II implies
$$\bigl|\operatorname{Cov}\bigl(f(X), g(X)\bigr)\bigr| \le K_f\, K_g\, \bigl(\operatorname{Var}(X_1) + \operatorname{Var}(X_2)\bigr). \tag{101}$$
Combination of (100), (101), and (97) yields the desired inequality (98); thus the proof is completed.
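Assuming the bivariate bound takes the form $|\operatorname{Cov}(f(X),g(X))| \le 2 K_f K_g$, a Monte Carlo sanity check is straightforward; the functions below (with all partial derivatives bounded by 1, hence $K_f = K_g = \sqrt{2}$) are our own illustrative choices.

```python
import math
import random

# Monte Carlo check for the standard bivariate normal:
# |Cov(f(X), g(X))| <= K_f * K_g * (Var X1 + Var X2) = 2 * K_f * K_g.
random.seed(7)
n = 20000
f = lambda x, y: math.sin(x + y)              # partials bounded by 1
g = lambda x, y: math.tanh(x) + math.tanh(y)  # partials bounded by 1
K_f = K_g = math.sqrt(2.0)

samples = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
Ef = sum(f(x, y) for x, y in samples) / n
Eg = sum(g(x, y) for x, y in samples) / n
cov = sum((f(x, y) - Ef) * (g(x, y) - Eg) for x, y in samples) / n
bound = K_f * K_g * 2.0                       # Var X1 = Var X2 = 1
```

By the odd symmetry of $f$ and $g$, the true covariance here is far below the bound, so the sampling error does not endanger the check.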

#### 5. Coordinate Dependence of Theorem II

Our basic relation (33) of Theorem II is formulated in terms of an $n$-dimensional random variable $X$. Clearly, for a given $n$-dimensional statistical problem there are many different (mutually equivalent) choices of independent random variables (coordinates) in terms of which the probability distribution can be expressed. It is not a priori clear whether or not our basic random inequality (33) is coordinate dependent. As we show below on a concrete example, the answer is affirmative. This means also that inequality (33) can be optimized (made stronger) by carrying out a suitable coordinate transformation.

##### 5.1. Correlated Bivariate Normal Distribution

As an illustrative example we shall take the correlated bivariate normal distribution described in [4]. Correspondingly, we have $n = 2$, $\Omega = \mathbb{R}^2$, and $X = (X_1, X_2)$. The probability distribution of $X$ is characterized by the formula
$$\rho(x_1, x_2) = \frac{1}{2\pi \sqrt{1 - r^2}} \exp\left(-\frac{x_1^2 - 2 r\, x_1 x_2 + x_2^2}{2 (1 - r^2)}\right), \tag{102}$$
where $r \in (-1, 1)$ is a fixed correlation parameter.

Importantly, the above introduced correlated bivariate normal distribution can be converted into an uncorrelated normal distribution via performing a bijective coordinate transformation $(X_1, X_2) \mapsto (Y_1, Y_2)$. Here, according to [4],
$$Y_1 = X_1, \qquad Y_2 = \frac{X_2 - r X_1}{\sqrt{1 - r^2}}. \tag{104}$$
The associated probability density takes the form already discussed above in (95), i.e.,
$$\rho(y_1, y_2) = \varphi(y_1)\, \varphi(y_2).$$
Direct calculation yields $E[Y_1] = E[Y_2] = 0$ and $\operatorname{Var}(Y_1) = \operatorname{Var}(Y_2) = 1$. Hence also $\operatorname{Var}(Y_1) + \operatorname{Var}(Y_2) = 2$.
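The decorrelating effect of this transformation can be illustrated by sampling; the explicit map $Y_1 = X_1$, $Y_2 = (X_2 - r X_1)/\sqrt{1 - r^2}$ is the standard construction (our assumption of its explicit form), and the sample correlation of $(Y_1, Y_2)$ should vanish up to Monte Carlo noise.

```python
import math
import random

# Build a correlated pair (X1, X2) with correlation r from independent
# standard normals, then apply the decorrelating map and measure the
# sample correlation of the result.
random.seed(12345)
r = 0.8
n = 20000
s = math.sqrt(1.0 - r * r)

ys = []
for _ in range(n):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1, x2 = z1, r * z1 + s * z2          # corr(X1, X2) = r by construction
    ys.append((x1, (x2 - r * x1) / s))    # transformed, should be uncorrelated

m1 = sum(y[0] for y in ys) / n
m2 = sum(y[1] for y in ys) / n
c12 = sum((y[0] - m1) * (y[1] - m2) for y in ys) / n
v1 = sum((y[0] - m1) ** 2 for y in ys) / n
v2 = sum((y[1] - m2) ** 2 for y in ys) / n
corr = c12 / math.sqrt(v1 * v2)
```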

##### 5.2. An Application of Theorem II: Correlated Case

For the sake of maximum simplicity, we shall apply Theorem II to a concrete case of two elementary functions $f$ and $g$. Direct calculation yields the associated quantities (31)-(32). Relation (33) of Theorem II then boils down to an explicit inequality (110), where the covariance (111) is evaluated with respect to the correlated distribution of Section 5.1.

##### 5.3. An Application of Theorem II: Uncorrelated Case

In the coordinates introduced above by (104), we express the same two functions in terms of $(Y_1, Y_2)$. Direct calculation again yields the corresponding quantities (31)-(32). According to our above derived statement (98) of Theorem II-4, relation (33) of Theorem II boils down to an explicit inequality (115), where the covariance (116) is evaluated with respect to the uncorrelated density of Section 5.1.

##### 5.4. Analysis

The functions entering (111) and (116) represent the same pair of random variables, just expressed in different coordinates. If so, then the covariances (111) and (116) must be equal. This can easily be verified explicitly as a consistency check. One starts with (111) and performs the substitution following (104). The corresponding Jacobian is equal to
$$\frac{\partial(x_1, x_2)}{\partial(y_1, y_2)} = \sqrt{1 - r^2}. \tag{117}$$
Straightforward manipulations then confirm the anticipated equality of (111) and (116) (relation (118)), exactly as claimed.

According to (118), the l.h.s. of (110) and (115) are equal. However, the r.h.s. of (110) differs from the r.h.s. of (115), the latter being $r$-dependent. This means in turn that statement (33) of our basic Theorem II does generally depend upon the coordinates chosen. Coordinate transformations can thus be exploited to optimize (strengthen) inequality (33).

#### 6. Conclusion and Prospects

In summary, our basic Theorem II stated above generalizes a recently derived random inequality of [20] to the case of multidimensional random variables. Six subsequent additional results (Theorems I-1~I-2, Theorems II-1~II-4) apply then Theorem II to different frequently encountered statistical distributions (multiuniform, multinomial, normal, and multinormal). Furthermore, we show in Section 5 that basic inequality (33) of Theorem II is coordinate dependent (and thus optimizable via carrying out suitable coordinate transformations). The just mentioned formulas and insights could be useful for making estimates in multivariate statistics.

Finally, we find it useful to list below three open questions which may be worthy of examination in the future. Namely, (i) we have assumed above that $\Omega$ is a convex open set. A nonconvex path connected open set can sometimes be made convex via a suitable coordinate transformation, in which case Theorem II applies more generally than stated in Section 3. It is not a priori clear, however, what would happen in the case when $\Omega$ is a disconnected open set. This remains to be seen; (ii) an optimization of inequality (33) via coordinate transformations represents a very promising direction of further research; (iii) it might be desirable to supplement some real life application which would reflect the practical value of our inequality (33). For example, one may think about applications in physics or economics.

#### Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

#### Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

#### Acknowledgments

This work was supported by the Natural Science Foundation of Hainan Province (Grant no. 2018CXTD338; contribution rate: 30%), the National Natural Science Foundation of China (Grant no. 11761027; contribution rate: 50%), the Scientific Research Foundation of Hainan Province Education Bureau (Grant no. Hnky2016-14; contribution rate: 10%), and the Educational Reform Foundation of Hainan Province Education Bureau (Grant no. Hnjg2017ZD-13; contribution rate: 10%).

#### References

1. S. Ross, A First Course in Probability, Pearson, London, UK, 8th edition, 2010.
2. J. T. McClave, Statistics, Pearson, London, UK, 2017.
3. W. Mendenhall, R. J. Beaver, and B. M. Beaver, Introduction to Probability and Statistics, Brooks/Cole Publishing Co., Pacific Grove, Calif, USA, 14th edition, 2013.
4. J. Pitman, Probability, Springer, New York, NY, USA, 2011.
5. M. Bilodeau and D. Brenner, Theory of Multivariate Statistics, Springer, New York, NY, USA, 1999.
6. S. M. Stigler, The History of Statistics: The Measurement of Uncertainty before 1900, Harvard University Press, Cambridge, Mass, USA, 1990.
7. L. Debnath and K. Basu, “A short history of probability theory and its applications,” International Journal of Mathematical Education in Science and Technology, vol. 46, no. 1, pp. 13–39, 2015.
8. S. E. Fienberg, “A brief history of statistics in three and one-half chapters: A review essay,” Statistical Science, vol. 7, no. 2, pp. 208–225, 1992.
9. S. M. Ross, Introduction to Probability and Statistics for Engineers and Scientists, Academic Press, San Diego, Calif, USA, 5th edition, 2014.
10. T. T. Soong, Fundamentals of Probability and Statistics for Engineers, Wiley, England, UK, 2004.
11. T. Z. Fahidy, “Basic applications of the analysis of variance and covariance in electrochemical science and engineering,” in Modern Aspects of Electrochemistry, vol. 40, pp. 37–74, Springer, New York, NY, USA, 2007.
12. G. G. Vining and S. Kowalski, Statistical Methods for Engineers, Cengage Learning, Boston, Mass, USA, 3rd edition, 2011.
13. T. Coffey and H. Yang, Statistics for Biotechnology Process Development, Chapman and Hall/CRC, Boca Raton, Fla, USA, 2018.
14. J. C. Bailar and D. C. Hoaglin, Medical Uses of Statistics, Wiley, Hoboken, NJ, USA, 3rd edition, 2009.
15. T. Park and Y. J. Lee, “Covariance models for nested repeated measures data: analysis of ovarian steroid secretion data,” Statistics in Medicine, vol. 21, no. 1, pp. 143–164, 2002.
16. N. Balakrishnan, Methods and Applications of Statistics in Business, Finance, and Management Science, Wiley, Hoboken, NJ, USA, 2010.
17. M. J. Cloud, B. C. Drachman, and L. Lebedev, Inequalities: With Applications to Engineering, Springer, New York, NY, USA, 2014.
18. D. S. Mitrinovic, J. Pecaric, and A. M. Fink, Classical and New Inequalities in Analysis, Springer, New York, NY, USA, 1993.
19. G. A. Anastassiou, Advanced Inequalities, World Scientific Publishing, Singapore, 2011.
20. Z. He and M. Wang, “An inequality for covariance with applications,” Journal of Inequalities and Applications, vol. 2015, article no. 413, 2015.
21. G. Anastassiou, “On Grüss type multivariate integral inequalities,” Mathematica Balkanica, vol. 17, no. 1-2, 2003.
22. G. A. Anastassiou, “Multivariate Chebyshev-Grüss and comparison of integral means type inequalities via a multivariate Euler type identity,” Demonstratio Mathematica, vol. 40, no. 3, pp. 537–558, 2007.
23. M. W. Alomari, “New Grüss type inequalities for double integrals,” Applied Mathematics and Computation, vol. 228, pp. 102–107, 2014.
24. A. Florea and C. P. Niculescu, “A note on Ostrowski's inequality,” Journal of Inequalities and Applications, vol. 2005, no. 5, Article ID 459, 2005.
25. N. S. Barnett and S. S. Dragomir, “An inequality of Ostrowski's type for cumulative distribution functions,” Kyungpook Mathematical Journal, vol. 39, no. 2, pp. 303–311, 1999.
26. N. S. Barnett, P. Cerone, and S. S. Dragomir, “Inequalities for Random Variables Over a Finite Interval,” in RGMIA Monographs, Victoria University, Melbourne, Australia, 2004.
27. B. G. Pachpatte, “On Grüss type inequalities for double integrals,” Journal of Mathematical Analysis and Applications, vol. 267, no. 2, Article ID 454, 2002.
28. B. G. Pachpatte, “On multidimensional Grüss type inequalities,” Journal of Inequalities in Pure and Applied Mathematics, vol. 3, no. 2, 2002.
29. X. Li, R. N. Mohapatra, and R. S. Rodriguez, “Grüss-type inequalities,” Journal of Mathematical Analysis and Applications, vol. 267, no. 2, pp. 434–443, 2002.
30. R. P. Agarwal, N. S. Barnett, P. Cerone, and S. S. Dragomir, “A survey on some inequalities for expectation and variance,” Computers & Mathematics with Applications, vol. 49, no. 2-3, pp. 429–480, 2005.
31. F. Ahmad, N. S. Barnett, and S. S. Dragomir, “New weighted Ostrowski and Chebyshev type inequalities,” Nonlinear Analysis, vol. 71, no. 12, pp. e1408–e1412, 2009.
32. P. Cerone and S. S. Dragomir, “Bounding the Chebyshev functional for the Riemann-Stieltjes integral via a Beesack inequality and applications,” Computers & Mathematics with Applications, vol. 58, no. 6, pp. 1247–1252, 2009.
33. G. Farid, “Straightforward Proofs of Ostrowski Inequality and Some Related Results,” International Journal of Analysis, vol. 2016, Article ID 3918483, 5 pages, 2016.
34. W. Liu, A. Tuna, and Y. Jiang, “On weighted Ostrowski type, Trapezoid type, Grüss type and Ostrowski-Grüss like inequalities on time scales,” Applicable Analysis: An International Journal, vol. 93, no. 3, pp. 551–571, 2014.
35. A. Tuna and D. Daghan, “Generalization of Ostrowski and Ostrowski-Grüss type inequalities on time scales,” Computers & Mathematics with Applications, vol. 60, no. 3, pp. 803–811, 2010.
36. A. Tuna, Y. Jiang, and W. Liu, “Weighted Ostrowski, Ostrowski-Grüss and Ostrowski-Chebyshev type inequalities on time scales,” Publicationes Mathematicae Debrecen, vol. 81, no. 1-2, pp. 81–102, 2012.
37. B. Zheng and Q. Feng, “Generalized n-dimensional Ostrowski type and Grüss type inequalities on time scales,” Journal of Applied Mathematics, vol. 2014, Article ID 434958, 11 pages, 2014.
38. J. Conway, A Course in Point Set Topology, Springer International Publishing, Switzerland, 2014.
39. S. Lang, Undergraduate Analysis, Springer-Verlag, New York, NY, USA, 1983.
40. M. Comenetz, Calculus: The elements, World Scientific, Singapore, 2002.
41. K. Chung and F. AitSahlia, Elementary Probability Theory, Springer, New York, NY, USA, 2011.