Abstract

We define and study several properties of what we call the Maximal Strichartz Family of Gaussian Distributions. This is a subfamily of the family of Gaussian Distributions that arises naturally in the context of the Linear Schrödinger Equation and Harmonic Analysis, as the set of maximizers of certain norms introduced by Strichartz. From a statistical perspective, this family carries some extra structure with respect to the general family of Gaussian Distributions. In this paper, we analyse this extra structure in several ways. We first compute the Fisher Information Matrix of the family, then introduce some measures of statistical dispersion, and finally introduce a Partial Stochastic Order on the family. Moreover, we indicate how these tools can be used to distinguish between distributions which belong to the family and distributions which do not. We also show that all our results are in accordance with the dispersive PDE nature of the family.

1. Introduction

The most important multivariate distribution is the Multivariate Normal (MVN) Distribution. To fix the notation, we give here its definition.

Definition 1. One says that a random variable $X$ is distributed as a Multivariate Normal Distribution if its probability density function (pdf) takes the form $f(x) = (2\pi)^{-d/2}(\det\Sigma)^{-1/2}\exp\!\left(-\tfrac{1}{2}(x-\mu)^{T}\Sigma^{-1}(x-\mu)\right)$, $x \in \mathbb{R}^{d}$, where $\mu \in \mathbb{R}^{d}$ is the mean value vector and $\Sigma$ is the positive definite symmetric Variance-Covariance Matrix.
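As a quick numerical companion to Definition 1, the following sketch evaluates the density above directly and checks it against SciPy's implementation; the mean vector and covariance matrix are arbitrary illustrative choices, not quantities from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

# arbitrary illustrative parameters (not from the paper)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
x = np.array([0.5, -1.5])

# direct evaluation of the density formula
d = mu.size
quad = (x - mu) @ np.linalg.inv(Sigma) @ (x - mu)
pdf_formula = np.exp(-0.5*quad) / np.sqrt((2*np.pi)**d * np.linalg.det(Sigma))

# SciPy's implementation of the same pdf
pdf_scipy = multivariate_normal(mean=mu, cov=Sigma).pdf(x)
print(pdf_formula, pdf_scipy)   # the two values agree
```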

Its importance derives mainly (but not only) from the Multivariate Central Limit Theorem which has the following statement.

Theorem 2. Suppose that $X$ is a random vector with mean $\mu$ and Variance-Covariance Matrix $\Sigma$, and assume that all its second moments are finite. If $X_1, X_2, \dots$ is a sequence of i.i.d. random vectors distributed as $X$, then $\sqrt{N}\,(\bar{X}_N - \mu) \xrightarrow{d} \mathcal{N}(0,\Sigma)$ as $N \to \infty$, where $\xrightarrow{d}$ represents the convergence in distribution.
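A small Monte Carlo sketch of the Multivariate Central Limit Theorem; the choice of i.i.d. summands (uniform coordinates on $[-1,1]^2$, with covariance $\tfrac13 I$) is arbitrary and only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 200, 10_000                         # summands per sample, Monte Carlo samples

# i.i.d. random vectors uniform on [-1,1]^2: mean 0, covariance (1/3) I
X = rng.uniform(-1.0, 1.0, size=(N, n, 2))
S = X.sum(axis=1) / np.sqrt(n)             # CLT scaling of the centred sums

print(S.mean(axis=0))                      # ~ (0, 0)
print(np.cov(S.T))                         # ~ (1/3) * identity, the limit covariance
z = S[:, 0] / np.sqrt(1/3)                 # a standardized coordinate
print(np.mean(z**4))                       # ~ 3, the Gaussian fourth moment
```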

Due to its importance, several authors have tried to give characterizations of this family of distributions. See, for example, [1, 2] for an extended discussion on multivariate distributions and their properties. Here, we concentrate on characterizing the MVN through variational principles, such as the maximization of certain functionals. A well-known characterization of the Gaussian Distribution is through the maximization of the Differential Entropy, under the constraint of fixed variance. We focus on the case when the support of the pdf is the whole Euclidean Space $\mathbb{R}^{d}$.

Theorem 3. Let $X$ be a random variable whose pdf is $f$. The Differential Entropy is defined by the following functional: $h(f) = -\int_{\mathbb{R}^{d}} f(x)\,\log f(x)\,dx$. The Multivariate Normal Distribution has the largest Differential Entropy amongst all the random variables with equal Variance-Covariance Matrix $\Sigma$. Moreover, the maximal value of the Differential Entropy is $\tfrac{1}{2}\log\!\left((2\pi e)^{d}\det\Sigma\right)$.
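The following sketch checks the closed-form maximal entropy value numerically and compares it with a non-Gaussian competitor of equal variance; the covariance matrix is an arbitrary example.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm, laplace

Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])             # arbitrary positive definite covariance
mvn = multivariate_normal(mean=[0.0, 0.0], cov=Sigma)

# closed form 0.5 * log((2*pi*e)^d * det(Sigma)) vs SciPy's entropy()
d = 2
closed_form = 0.5*np.log((2*np.pi*np.e)**d * np.linalg.det(Sigma))
print(closed_form, mvn.entropy())          # the two values agree

# a non-Gaussian law with the same (unit) variance has smaller entropy
print(norm(scale=1.0).entropy())           # ~ 1.4189 for N(0, 1)
print(laplace(scale=1/np.sqrt(2)).entropy())  # ~ 1.3466 < 1.4189, same variance 1
```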

We refer to the Appendix for a proof of this well-known theorem. This characterization is, in some sense, not completely satisfactory, because it is given only under the restriction of fixed variance. A more general characterization of the Gaussian Distribution has been given in a setting which, at first sight, seems very far from statistics, namely that of Harmonic Analysis and Partial Differential Equations. We first introduce the so-called admissible exponents.

Definition 4. Fix . One calls a set of exponents admissible if and

Remark 5. These exponents are characteristic quantities of certain norms, the Strichartz Norms, which arise naturally in the context of Dispersive Equations and can vary from equation to equation. We refer to [3] for more details.
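As a hedged sketch of Definition 4, the helper below encodes what we assume to be the intended (standard Schrödinger) admissibility condition, $q, r \ge 2$ and $2/q + d/r = d/2$ with the endpoint $(q,r,d) = (2,\infty,2)$ excluded, and evaluates it at the classical symmetric exponent.

```python
import math

def is_admissible(q, r, d, tol=1e-12):
    """Assumed (standard) Schrodinger admissibility: q, r >= 2,
    2/q + d/r = d/2, excluding the forbidden endpoint (q, r, d) = (2, inf, 2)."""
    if q < 2 or r < 2:
        return False
    if d == 2 and q == 2 and math.isinf(r):
        return False
    lhs = 2/q + (0.0 if math.isinf(r) else d/r)
    return abs(lhs - d/2) < tol

d = 1
q = r = 2*(d + 2)/d                  # symmetric exponent q = r = 2(d+2)/d
print(q, is_admissible(q, r, d))     # 6.0 True: the classical 1-d Strichartz exponent
print(is_admissible(2, 4, 1))        # False: 2/2 + 1/4 != 1/2
```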

Here is the precise characterization of the Multivariate Normal Distribution, through Strichartz Estimates.

Theorem 6 (see [4–7]). Suppose or . Then, for every and admissible and for every such that , we have where is the Sharp Homogeneous Strichartz Constant, defined by and given by . Moreover, inequality (6) becomes an equality if and only if is the pdf of a Multivariate Normal Distribution.

For several other important results on Strichartz Estimates, we refer to [8–11] and the references therein.

Remark 7. Unlike the characterization obtained through the Differential Entropy Functional, this one does not require the restriction of fixed variance, and so it is, in some sense, more “general.” The result is conjectured to be true for any dimension . See, for example, [7], where the optimal constant has been computed in any dimension , under the hypothesis that the maximizers are Gaussians also in dimension .

We refer to [7] for the relation of this result with harmonic analysis and restriction theorems.

Strichartz Estimates are a fundamental tool in the problem of global well-posedness of PDEs and measure a particular type of dispersion (see, e.g., [3–5, 7, 12, 13]). Strichartz Estimates carry some interesting statistical features, and these are what we want to analyse in the present paper.

The symmetries of the functional in (6) give rise to a family of distributions that we call the Maximal Strichartz Family of Gaussian Distributions: . We refer to Section 2 for its precise construction. This is a subfamily of the family of Gaussian Distributions and, among other things, it has the feature that the Mean Vector and the Variance-Covariance Matrix depend on common parameters. Therefore, from a statistical perspective, this family carries some extra structure with respect to the general family of Gaussian Distributions. This extra structure becomes evident from the form of the Fisher Information Metric of the family.

Theorem 8. Consider , a probability distribution function belonging to the Maximal Strichartz Family of Gaussian Distributions , defined in (9). The vector of parameters , indexing , is given by . Then, the Fisher Information Matrix of is given (i) in the spherical case by and (ii) in the elliptical case (see Section 3 for the precise definition) by .

Remark 9. Technically, the only possible case inside the Maximal Strichartz Family of Gaussian Distributions is when , since (the spherical case, with ). The form of the Fisher Information Matrix, in that case, simplifies to a lower dimension. Nevertheless, performing the computation in the way we did makes it possible to compute a distance (in some sense centred at the Maximal Strichartz Family of Gaussian Distributions) between members of the Maximal Strichartz Family of Gaussian Distributions and other Gaussian Distributions, for which the orthogonal matrix condition is not necessarily satisfied. In particular, it can distinguish between Gaussians evolving through the PDE flow (see Section 2) and Gaussians which do not.

Remark 10. We believe that using the flow of a Partial Differential Equation is a natural way to produce probability density functions, in particular in this case, since the PDE flow that we use preserves the probability constraint. See Section 2.2 for more details on this comment.

As we said, Strichartz Estimates are a way to measure the dispersion caused by the flow of the PDE to which they are related. In statistics, dispersion describes how stretched or squeezed a distribution is. A measure of statistical dispersion is a nonnegative real number which is small for data that are very concentrated and increases as the data become more spread out. Common examples of measures of statistical dispersion are the variance, the standard deviation, the range, and many others. Here, we connect the two closely related concepts (dispersion in statistics and in PDEs) by introducing some measures of statistical dispersion, like the Index of Dispersion in Definition 38 (see Section 4), which reflect the dispersive PDE nature of the Maximal Strichartz Family of Gaussian Distributions.

Definition 11. Consider the norms and on the space of Variance-Covariance Matrices and on the space of mean values . One defines the following Index of Dispersion: , with and , where is as follows: , while is given by . One calls the -Dispersion Index of the Maximal Family of Gaussians, and one calls the -Static Dispersion Index of the Maximal Family of Gaussians.

We compute this Index of Dispersion for our family of distributions and show that it is consistent with PDE results. We refer to Definition 38 for more details.

Another important concept in probability and statistics is the one of Stochastic Order. A Stochastic Order is a way to consistently put a set of random variables in a sequence. Most of the Stochastic Orders are partial orders, in the sense that an order between the random variables exists, but not all the random variables can be put in the same sequence. Many different Stochastic Orders exist and have different applications. For more details on Stochastic Orders, we refer to [14]. Here, we use our Index of Dispersion to define a Stochastic Order on the Maximal Strichartz Family of Gaussian Distributions and see how there are natural ways of partially ranking the distributions of the family (see Section 5), in agreement with the flow of the PDE.

Definition 12. Consider two random variables and such that , for any and . One says that the two random variables are ordered according to their Dispersion Index if and only if the following condition is satisfied:

Remark 13. In this definition, the index can vary according to the context and to the choice of the norms used in the definition of the index.

An important tool which will be fundamental in our analysis is what we call the -Characteristic Function (see Section 2 and [7, 15]). We conclude the paper with an appendix in which, among other things, we use the concept of -Characteristic Function to define generalized types of Momenta that exist also for the Multivariate Cauchy Distribution.

2. Construction of the Maximal Strichartz Family of Gaussian Distributions

This section is devoted to the construction of the Maximal Strichartz Family of Gaussian Distributions; see Figure 1. This is basically done through PDE methods. The program is the following. (1) We define -Characteristic Functions. (2) We prove that if generates a probability distribution , then (see below for its precise definition) still defines a probability distribution . (3) By means of -Characteristic Functions, we give the explicit expression of the generator of the family. (4) We use symmetries and invariances to build the complete family .

2.1. The -Characteristic Functions

Following the program, we first need to introduce the tool of -Characteristic Functions to characterize . It is basically the Fourier Transform but, differently from the Characteristic Function, the average is taken not with respect to the pdf itself but with respect to a power of the pdf.

Definition 14. Consider a Schwartz function, namely, a function belonging to the space , with and being multi-indices, endowed with the following norm: . Moreover, suppose that , namely, that defines a continuous probability distribution function. Then, one defines . One calls the -Characteristic Function of . Moreover, one defines the Inverse -Characteristic Function by .

We refer to the note [15] for examples and properties of -Characteristic Functions and to the Appendix for a simple, straightforward application of this tool. In particular, we notice that .

Remark 15. If is complex valued (not just real valued) and, for example, , then there are distinct complex roots of . In our discussion, this will not create any problem for us, because our process starts with and produces . We want to remark that the map is a multivalued function. For this reason, we cannot reconstruct uniquely a generator, given the family that it generates. See formula (39) below and [15] for more details.

Remark 16. We could define -Characteristic Functions for more general functions , with a locally compact abelian group and a general field. We do not pursue this direction here and leave it for future work. We notice that can also be considered as a -Expected Value:

2.2. Conservation of Mass and Flow on the Space of Probability Measures

In this subsection, we show that if defines a probability distribution, then also defines a probability distribution. This is mainly a consequence of the fact that is a unitary operator.

Theorem 17. Consider , the set of all probability distributions on and a solution to (27). Then induces a flow in the space of probability distributions.

Proof. Consider such that ; so is a probability distribution on . Consider , the solution of (27) with initial datum . Then . So and hence . Therefore, for every , is a probability distribution.

Remark 18. This situation is in striking contrast with the heat equation, where, if one starts with a probability distribution as initial datum, the constraint of being a probability measure is instantaneously broken.
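A numerical illustration of Theorem 17 and Remark 18 (a sketch under the assumed convention $i u_t + u_{xx} = 0$ for the free flow, evolved spectrally): the squared modulus of the Schrödinger solution keeps unit mass for all times, while the same quantity under the heat flow immediately drops below one.

```python
import numpy as np

# spatial grid and Fourier frequencies
N, L = 4096, 80.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)

# Gaussian initial datum whose squared modulus is the N(0,1) density
u0 = (2*np.pi)**(-0.25)*np.exp(-x**2/4)
u0_hat = np.fft.fft(u0)

for t in [0.0, 0.5, 2.0]:
    u_schr = np.fft.ifft(np.exp(-1j*k**2*t)*u0_hat)   # free Schroedinger flow
    u_heat = np.fft.ifft(np.exp(-k**2*t)*u0_hat)      # heat flow, for comparison
    mass_schr = np.sum(np.abs(u_schr)**2)*dx
    mass_heat = np.sum(np.abs(u_heat)**2)*dx
    print(t, round(mass_schr, 6), round(mass_heat, 6))
# the Schroedinger column stays at 1.0; the heat column decays below 1
```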

2.3. Fundamental Solution for the Linear Schrödinger Equation Using -Characteristic Functions

In this subsection, we solve the Linear Schrödinger Equation with initial datum . This will produce a natural generator of a family of probability distributions, due to Theorem 17.

We first notice that the initial datum becomes a probability density function only if multiplied by a suitable constant. But, since the equation is linear, we can carry out the computation without that constant and include it at the end.

Remark 19. These computations are well known, but we perform them in detail here, in order to clarify what we will compute in the context of -Characteristic Functions.

Since , then also and . So, we can apply the -Characteristic Function to both sides of (27) and get , whose solution is . We now need to compute the -Characteristic Function of the initial datum and then the Inverse -Characteristic Function of to get the explicit form of the solution. We have , by using contour integrals. We notice that, with a simple change of variables, we have . Hence . With this, we can conclude . Now, we make the change of variables to get . Hence, we obtain . Now, we have to find a constant such that, for every , the function is a probability density function. The condition to be satisfied is the following: , which implies . Therefore, the function induces the probability density function: , so is going to be the generator of the family of distributions .

Remark 20. This procedure works because the Gaussian Distribution is, up to constants, a fixed point of the -Characteristic Function. Indeed, if is Gaussian, then for some normalizing constant . Moreover, the Schrödinger flow preserves Gaussian Distributions; namely, if the initial datum is Gaussian, then the solution is Gaussian at any future and past time.
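A numerical hint at the first claim of the remark, as a sketch only: we assume, as one possible convention, that the $p$-Characteristic Function of $f$ is, up to normalization, the Fourier transform of $f^{1/p}$ (the precise convention is in [15]). For a Gaussian pdf the result is again Gaussian up to a constant, which we check by fitting the logarithm of its modulus with a parabola.

```python
import numpy as np

x = np.linspace(-30.0, 30.0, 6001)
dx = x[1] - x[0]
f = (2*np.pi)**(-0.5)*np.exp(-x**2/2)       # N(0,1) pdf

p = 2.0
xi = np.linspace(-2.0, 2.0, 21)
# assumed convention: F_p[f](xi) = integral of f(x)^(1/p) * exp(-i xi x) dx
Fp = np.array([np.sum(f**(1/p)*np.exp(-1j*s*x))*dx for s in xi])

# if the result is Gaussian up to a constant, log|F_p| is an exact parabola in xi
a, b, c = np.polyfit(xi, np.log(np.abs(Fp)), 2)
print(a, b)      # a ~ -1, b ~ 0: |F_p(xi)| is proportional to exp(-xi^2)
```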

2.4. Strichartz Estimates and Their Symmetries

In this subsection, we deduce the Strichartz Estimates for the Schrödinger equation in the case of probability distributions and discuss their symmetries. For clarity, we repeat here the definition of admissible exponents and the Strichartz Estimate.

Definition 21. Fix . One calls a set of exponents admissible if and

Theorem 22 (see [4–7]). For dimension or (and for any , supposing that Gaussians are maximizers) and admissible pair, the Sharp Homogeneous Strichartz Constant defined by is given by . Moreover, if one defines by , then, for every (always supposing that Gaussians are maximizers), one has that is a decreasing function of and . For any and admissible pair, the Sharp Dual Homogeneous Strichartz Constant is defined by . One has that .

This is the version of the theorem on Strichartz Estimates without the restriction , as proved in [7]. From this, we can very easily deduce Theorem 6.

Proof of Theorem 6. Just substitute the condition in all the statements.

As explained, for example, in [12], Strichartz Estimates are invariant under the following set of symmetries.

Lemma 23 (see [12]). Let be the group of transformations generated by (i) space-time translations: , with , ; (ii) parabolic dilations: , with ; (iii) change of scale: , with ; (iv) space rotations: , with ; (v) phase shifts: , with ; (vi) Galilean transformations: , with . Then, if solves (27) and , also solves (27). Moreover, the constants , , and are left unchanged by the action of .

The only point here is that not all these symmetries leave the set of probability distributions invariant. Therefore, we need to reduce the set of symmetries in our treatment and, in particular, we need to combine the change of scale and the parabolic dilations in order to keep the whole family inside the space of probability distributions .

Lemma 24. Consider such that maximizes (6); then .

Proof. Consider ; so .
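A quick check of the point behind Lemma 24, in a one-dimensional sketch with a hypothetical grid and linear interpolation: the $L^2$-normalized rescaling $u_\lambda(x) = \lambda^{d/2} u(\lambda x)$ has unit $L^2$ norm whenever $u$ does, so $|u_\lambda|^2$ remains a probability density for every $\lambda > 0$.

```python
import numpy as np

d = 1
x = np.linspace(-40.0, 40.0, 80001)
dx = x[1] - x[0]
u = (2*np.pi)**(-0.25)*np.exp(-x**2/4)          # |u|^2 is the N(0,1) density

def rescale(u_vals, lam):
    # combined change of scale / parabolic dilation, normalized in L^2:
    # u_lam(x) = lam^(d/2) * u(lam * x), evaluated on the same grid by interpolation
    return lam**(d/2)*np.interp(lam*x, x, u_vals)

for lam in [0.5, 1.0, 3.0]:
    mass = np.sum(np.abs(rescale(u, lam))**2)*dx
    print(lam, round(mass, 6))                   # ~1.0 for every lam
```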

Remark 25. We notice that some of the symmetries can be seen just at the level of the generator of the family but not by the family of probability distributions . For example, the phase shifts , with , give rise to the same probability distribution function because and, partially, the Galilean transformations , with , reduce to a space translation with , since . In some sense, the parameter can be seen as a latent variable.

Therefore, we have the complete set of probability distributions induced by the generator .

Theorem 26. Consider a probability distribution function generated by (see Section 2.3). Let be the group of transformations generated by (i) inertial-space translations and time translations: , with , and ; (ii) scaling-parabolic dilations: , with ; (iii) space rotations: , with . Then, if solves (27) and , also solves (27), is still a probability distribution for every , and the constant is left unchanged by the action of .

This theorem produces the following definition.

Definition 27. One calls Maximal Strichartz Family of Gaussian Distributions the following family of distributions:

Remark 28. Let be the pdf defined in (39). Then, choose with , , and . This implies that . For this reason, we call the Family Generator of . We notice also that, in the definition of the family and with respect to Theorem 26, we used as scale parameter instead of . This is done without loss of generality, since .

Right away we can compute the Variance-Covariance Matrix and Mean Vector of the family.

Corollary 29. Suppose is a random variable with pdf . Then its Expected Value is and its Variance is

Proof. The proof is a direct computation.

Remark 30. We see that, differently from the general family of Gaussian Distributions, here the Mean Vector and the Variance-Covariance Matrix are related by a common parameter, which represents the time flow.
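A numerical illustration of Remark 30 (a sketch, again under the assumed convention $i u_t + u_{xx} = 0$ and a standard Gaussian initial density): the variance of $|u(t,\cdot)|^2$, computed from a spectral evolution, grows quadratically in $t$, so the spread of the family member is tied to the time parameter.

```python
import numpy as np

N, L = 8192, 160.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)

u0 = (2*np.pi)**(-0.25)*np.exp(-x**2/4)      # |u0|^2 is the N(0,1) density
u0_hat = np.fft.fft(u0)

for t in [0.0, 1.0, 2.0, 3.0]:
    u = np.fft.ifft(np.exp(-1j*k**2*t)*u0_hat)
    f = np.abs(u)**2                          # the pdf at time t
    mean = np.sum(x*f)*dx
    var = np.sum((x - mean)**2*f)*dx
    print(t, round(var, 4))   # ~1, 2, 5, 10 under this convention: growth like 1 + t^2
```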

3. The Fisher Information Metric of the Maximal Strichartz Family

Information geometry is a branch of mathematics that applies the techniques of differential geometry to the field of statistics and probability theory. This is done by interpreting the probability distributions of a statistical model as the points of a Riemannian Manifold, forming in this way a statistical manifold. The Fisher Information Metric provides a natural Riemannian Metric for this manifold, but it is not the only possible one. With this tool, we can define and compute meaningful distances between probability distributions, in both the discrete and the continuous cases. The set of parameters by which a certain family of distributions is indexed, together with the geometrical structure of the parameter set, is therefore crucial. We refer to [16] for a general reference on information geometry. The first to introduce a notion of distance between two probability distributions was Rao [17], who used the Fisher Information Matrix as a Riemannian Metric on the space of parameters.

In this section, we restrict our attention to the Fisher Information Metric of the Maximal Strichartz Family of Gaussian Distributions and provide details on the additional structure that the family has with respect to the hyperbolic model of the general Family of Gaussian Distributions. See, for example, [18–20].

3.1. The Fisher Information Metric for the Multivariate Gaussian Distribution

First, we give the general definition of the Fisher Information Metric.

Definition 31. Consider a statistical manifold , with coordinates given by $\theta$ and with probability density function $p(x;\theta)$. Here, $x$ is a specific observation of the discrete or continuous random variables . The probability is normalized, so that $\int p(x;\theta)\,dx = 1$ for every $\theta$. The Fisher Information Metric is defined by the following formula: $g_{jk}(\theta) = \int \frac{\partial \log p(x;\theta)}{\partial \theta_j}\,\frac{\partial \log p(x;\theta)}{\partial \theta_k}\,p(x;\theta)\,dx.$

Remark 32. The integral is performed over all values that the random variable can take. Again, the variable $\theta$ is understood as a coordinate on the statistical manifold , intended as a Riemannian Manifold. Under certain regularity conditions (any that allow integration by parts), $g_{jk}$ can be rewritten as $g_{jk}(\theta) = -\int \frac{\partial^{2} \log p(x;\theta)}{\partial \theta_j\,\partial \theta_k}\,p(x;\theta)\,dx.$

Now, to compute explicitly the Fisher Information Matrix of the family , we use the following theorem, which can be found in [21].

Theorem 33. The Fisher Information Matrix for an -variate Gaussian Distribution can be computed in the following way. Let $\mu(\theta)$ be the vector of Expected Values and let $\Sigma(\theta)$ be the Variance-Covariance Matrix. Then, the typical element $\mathcal{I}_{ij}$ of the Fisher Information Matrix for is $\mathcal{I}_{ij} = \frac{\partial \mu}{\partial \theta_i}^{T}\,\Sigma^{-1}\,\frac{\partial \mu}{\partial \theta_j} + \frac{1}{2}\,\mathrm{tr}\!\left(\Sigma^{-1}\,\frac{\partial \Sigma}{\partial \theta_i}\,\Sigma^{-1}\,\frac{\partial \Sigma}{\partial \theta_j}\right)$, where $^{T}$ denotes the transpose of a vector, $\mathrm{tr}$ denotes the trace of a square matrix, and $\partial \mu / \partial \theta_i$, $\partial \Sigma / \partial \theta_i$ denote the componentwise derivatives with respect to the parameter $\theta_i$.
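The sketch below implements the Gaussian Fisher-information formula stated above (mean-gradient term plus a covariance trace term) with finite-difference derivatives, for a toy parametrization in which, as in our family, the mean and the covariance share parameters; the functions mean_fn and cov_fn are hypothetical stand-ins and not the family of Section 2.

```python
import numpy as np

def fisher_mvn(theta, mean_fn, cov_fn, eps=1e-6):
    """Fisher Information Matrix of a multivariate Gaussian whose mean and
    covariance depend on a common parameter vector theta, following the formula
    of Theorem 33; derivatives are approximated by central finite differences."""
    theta = np.asarray(theta, dtype=float)
    m = theta.size
    Sinv = np.linalg.inv(cov_fn(theta))
    dmu, dS = [], []
    for i in range(m):
        e = np.zeros(m); e[i] = eps
        dmu.append((mean_fn(theta + e) - mean_fn(theta - e))/(2*eps))
        dS.append((cov_fn(theta + e) - cov_fn(theta - e))/(2*eps))
    F = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            F[i, j] = dmu[i] @ Sinv @ dmu[j] \
                      + 0.5*np.trace(Sinv @ dS[i] @ Sinv @ dS[j])
    return F

# hypothetical toy parametrization: mean and covariance share (t, sigma)
d = 2
mean_fn = lambda th: th[0]*np.ones(d)                 # "mean drifts with t"
cov_fn = lambda th: (th[1]**2 + th[0]**2)*np.eye(d)   # "spread grows with |t|"
print(fisher_mvn([0.5, 1.0], mean_fn, cov_fn))
```

Because the mean and the covariance share the parameters, the resulting matrix generically has nonzero mixed entries, which is exactly the extra structure discussed in Remark 9.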

Now, we just have to compute the Fisher Information Matrix entry by entry, following the theorem. We recall here that we are considering the following family of Gaussian Distributions: and that, in particular, the Expected Value of a random variable with distribution belonging to the family is given by , while the Variance-Covariance Matrix is given by .

Remark 34. We remark again that and depend on some common parameters, like the time .

3.2. Proof of Theorem 8: The Spherical Multivariate Gaussian Distribution

Here, we consider the case in which , namely, the case where the Variance-Covariance Matrix is given by . In this case, the vector of parameters is given by , with and being , while , , and are scalars. In order to fix the notation, we call , , , , and . Now, we want to compute all the coefficients of . We use the symmetry of the information matrix , so . The relevant coefficients are (i) , (ii) , (iii) , because does not depend on and does not depend on ; (iv) , because does not depend on and does not depend on ; (v) , (vi) , (vii) , because does not depend on and does not depend on ; (viii) , because does not depend on and does not depend on ; (ix) , (x) , (xi) , (xii) , (xiii) , (xiv) , (xv) .

In conclusion, we have

3.3. Proof of Theorem 8: The Elliptical Multivariate Gaussian Distribution

We define . We also define . Using this notation, we are going to compute the matrix . The relevant coefficients are (i) , (ii) , (iii) , because does not depend on and does not depend on ; (iv) , because does not depend on and does not depend on ; (v) , (vi) , (vii) , because does not depend on and does not depend on ; (viii) , because does not depend on and does not depend on ; (ix) , (x) , (xi) , (xii) , (xiii) , (xiv) , (xv) . In conclusion, we have . This concludes the proof of Theorem 8.

3.4. The General Multivariate Gaussian Distribution

As pointed out in [18, 19], for general Multivariate Normal Distributions, the explicit form of the Fisher distance has not yet been computed in closed form, even in the simple case where the parameters are , , and . From a technical point of view, as pointed out in [18, 19], the main difficulty arises from the fact that the sectional curvatures of the Riemannian Manifold induced by and endowed with the Fisher Information Metric are not all constant. We remark again here that the distance induced by our Fisher Information Matrix is centred at the Maximal Strichartz Family of Gaussian Distributions, in order to highlight the difference between members of the Maximal Strichartz Family of Gaussian Distributions and other Gaussian Distributions, for which is not necessarily satisfied. In particular, our metric distinguishes between Gaussians evolving through the PDE flow (see Section 2) and Gaussians which do not.

Remark 35. We say that two parameters and are orthogonal if the elements of the corresponding rows and columns of the Fisher Information Matrix are zero. Orthogonal parameters are easy to deal with, in the sense that their maximum likelihood estimates are independent and can be calculated separately. In particular, for our family the parameters and are both orthogonal to both the parameters and . Some partial results, for example, when either the mean or the variance is kept constant, can be deduced. See, for example, [18–20].

Remark 36. The Fisher Information Metric is not the only possible choice to compute distances between pdfs of the family of Gaussian Distributions. For example, in [20], the authors parametrize the family of normal distributions as the symmetric space endowed with the following metric: . Moreover, the authors in [20] computed the Riemann Curvature Tensor of the metric and, in any dimension, the distance between two normal distributions with the same mean and different variance, as well as the distance between two normal distributions with the same variance and different mean.

Remark 37. If we consider just the submanifold given by the restriction to the coordinates and on the ellipse , we recover the hyperbolic distance: . The geometry, however, does not seem to be that of a product space, at least considering the fact that the mixed entries are not zero in our parametrization.

4. Overdispersion, Equidispersion, and Underdispersion for the Family

As we said, Strichartz Estimates are a way to measure the dispersion caused by the flow of the PDE to which they are related. In statistics, dispersion describes how spread out a distribution is. In this section, we connect the two closely related concepts (dispersion in statistics and in PDEs) by introducing some measures of statistical dispersion, like the Index of Dispersion in Definition 38 below, which reflect the dispersive PDE nature of the Maximal Strichartz Family of Gaussian Distributions. We compute this Index of Dispersion for our family of distributions and show that it is consistent with PDE results.

Definition 38. Consider the norms and on the space of Variance-Covariance Matrices and on the space of mean values . One defines the following Index of Dispersion: , with and , where is as follows: , while is given by . One calls the -Dispersion Index of the Maximal Family of Gaussians, and one calls the -Static Dispersion Index of the Maximal Family of Gaussians. Moreover, one says that the distribution is (i) -overdispersed, if ; (ii) -equidispersed, if ; (iii) -underdispersed, if . Analogously, one says that the distribution is (i) -overdispersed, if ; (ii) -equidispersed, if ; (iii) -underdispersed, if .

Here, we discuss some particular cases and compute the dispersion indexes and for certain specific norms , and .

(i) In the case , the -Static Dispersion Index of the Maximal Family of Gaussians that we choose is given by the variance of the distribution. We choose and , so we get . Now, in the spherical case , one gets . So, the distribution is (i) -overdispersed, if ; (ii) -equidispersed, if ; (iii) -underdispersed, if . Therefore, with , the type of dispersion does not depend on the dimension .

Remark 39. In the strictly Strichartz case , we have that the dispersion is measured just by the scaling factor .

Choosing instead as -Static Dispersion Index of the Maximal Family of Gaussians, we have some small differences: . Now, in the spherical case , we get . So, the distribution is (i) -overdispersed, if ; (ii) -equidispersed, if ; (iii) -underdispersed, if . So, with , the type of dispersion does depend on the dimension .

(ii) In the case , when is different from zero, we can express as a function of . In fact, we have and so . For example, if we now choose and , we get , so for and we get, in the spherical case , . So the distribution is (i) -overdispersed, if ; (ii) -equidispersed, if ; (iii) -underdispersed, if .

Remark 40. In particular, from this, we notice that if at the distribution is -equidispersed, then an instant later the distribution is -overdispersed; in fact, . This is in agreement with the dispersive properties of the family and, in some sense, legitimizes our choice of Indexes of Dispersion. Moreover, if is actually different from , namely , we can argue that the Gaussian Distribution that we are analysing does not come from the Maximal Strichartz Family of Gaussian Distributions.

Remark 41. This index is different from the Fisher Index, which is basically the variance-to-mean ratio and is the natural one for count data. The Fisher Index is thus more appropriate for families of distributions which are related to the Poisson distribution and for which it is dimensionless. In fact, in our case, and in contrast with the Poisson case, we scale the Variance-Covariance Matrix by the square of the Expected Value: .

Remark 42. The characterization of the Gaussian Distribution given by Theorems 6 and 22 can also be used to give a measure of dispersion with respect to the Maximal Family of Gaussian Distributions, considering the Strichartz Norm: . By Theorem 6, one has that . When the index is close to one, the distribution is close, in some sense, to the family , while, when the index is close to zero, the distribution is very far from . This index clearly does not distinguish between distributions in the family . It would be very interesting to see whether an Index of Dispersion being close to one for a general distribution implies proximity to the Maximal Family of Gaussian Distributions from the distributional point of view as well, and not just from the point of view of dispersion.

5. Partial Stochastic Ordering on

Using the concept of Index of Dispersion, we can give a Partial Stochastic Order to the family . For a more complete treatment on Stochastic Orders, we refer to [14]. We start the analysis of this section with the definition of Mean-Preserving Spread.

Definition 43. A Mean-Preserving Spread (MPS) is a map from to itself, where and are, respectively, the pdfs of the random variables and , with the property of leaving the Expected Value unchanged: for any and in the space of parameters.
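A classical way to realize a Mean-Preserving Spread, used here only as an illustrative sketch and not as the construction of the paper, is to add independent zero-mean noise to a random variable: the mean is unchanged while the dispersion increases.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=200_000)        # X ~ N(0, 1)
z = rng.normal(0.0, 0.5, size=200_000)        # independent zero-mean noise
y = x + z                                     # a mean-preserving spread of X

print(round(x.mean(), 3), round(y.mean(), 3)) # both ~ 0.0
print(round(x.var(), 3), round(y.var(), 3))   # ~ 1.0 vs ~ 1.25: Y is more dispersed
```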

The concept of a Mean-Preserving Spread provides a partial ordering of probability distributions according to their level of dispersion. We then give the following definition.

Definition 44. Consider two random variables and such that , for any and . One says that the two random variables are ordered according to their Dispersion Index if and only if the following condition is satisfied:

Now, we give some examples of ordering according to the Indexes of Dispersion that we discussed previously.

(i) In the case , we choose and , so we get . Now, in the spherical case , one gets . Using this index, we have the following partial ordering: . This order does not depend on the dimension . By choosing instead , we obtain . Now, again in the spherical case , one gets , which is the same ordering as before. This order again does not depend on the dimension , and this seems to suggest that, even if the value of the Dispersion Index might depend on the choice of the norms, the Partial Order is less sensitive to it.

Remark 45. In the strictly Strichartz case , we have that the Stochastic Order is given just by the scaling factor .

(ii) In the case when is different from zero, we have . If we now choose and , we get , so, for and , we get, in the spherical case , the following Partial Order:

Remark 46. Again, in the strictly Strichartz case , we have that the Stochastic Order is given just by the scaling factor .

Remark 47. In the case of the -Static Dispersion Index of the Maximal Family of Gaussians , the roles of and seem interchangeable. This suggests a dimensional reduction in the parameter space but, when , and the parameter decouple and start to play slightly different roles. This again suggests a way to distinguish between Gaussian Distributions which come from the family and Gaussians which do not, and so to distinguish between Gaussians which are solutions of the Linear Schrödinger Equation and Gaussians which are not.

Remark 48. Using the definition of Entropy, we deduce that, for Gaussian Distributions, . We see that, for our family , the Entropy increases every time we increase , , and , but not when we increase and . In particular, the fact that the Entropy increases with is in accordance with the Second Principle of Thermodynamics.

Remark 49. It seems that the construction of similar indexes can be performed in more general situations. In particular, we think that an index similar to can be computed in every situation in which a family of distributions has a Variance-Covariance Matrix and an Expected Value that depend on common parameters.

6. Conclusions

In this paper, we have constructed and studied the Maximal Strichartz Family of Gaussian Distributions. This subfamily of the family of Gaussian Distributions arises naturally in the context of Partial Differential Equations and Harmonic Analysis, as the set of maximizers of certain functionals introduced by Strichartz [4] in the context of the Schrödinger Equation. We analysed the Fisher Information Matrix of the family and showed that this matrix possesses an extra structure with respect to the general family of Gaussian Distributions. We studied the spherical and elliptical cases and computed explicitly the Fisher Information Metric in both. We interpreted the Fisher Information Metric as a distance which can distinguish between Gaussians which maximize the Strichartz Norm and Gaussians which do not, and also as a distance between Gaussians which are solutions of the Linear Schrödinger Equation and Gaussians which are not. After this, we introduced some measures of statistical dispersion that we called the -Dispersion Index of the Maximal Family of Gaussian Distributions and the -Static Dispersion Index of the Maximal Family of Gaussian Distributions. We showed that these Indexes of Dispersion are consistent with the dispersive nature of the Schrödinger Equation and can again be used to distinguish between Gaussians belonging to the family and other Gaussians. Moreover, we showed that our Indexes of Dispersion induce a Partial Stochastic Order on the Maximal Strichartz Family of Gaussian Distributions, which is in accordance with the flow of the PDE.

Appendix

In this Appendix, we give the proof of Theorem 3 (for completeness) and we will use the concept of -Characteristic Function to define a generalized type of Moments that exist also for the Multivariate Cauchy Distribution.

A. Proof of Theorem 3

The proof is very simple and can be found in several places (see, e.g., a nice treatment in [22]). We report here, for completeness, the computation in the case . We consider the variational derivative of with the constraint of being a probability distribution and with the constraint of having a fixed variance . This gives rise to the following Euler-Lagrange Equation with two Lagrange multipliers and : , where is some function with Expected Value . The two Lagrange multipliers appear because of the two constraints. One constraint is related to the normalization condition and the other is related to the requirement of fixed variance: . Now, we take the variational derivative of the functional . To be at a critical point, we need to impose that this variational derivative is zero. Therefore, we get . Since this must hold for any variation , the term in brackets must be zero, and so solving for yields . Now, we use the constraints of the problem and solve for and . From , we get the condition , and from , we get . Solving for and , we get and , which altogether give the Gaussian Distribution:
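As a numerical complement to the variational argument, the sketch below evaluates $-\int f\log f$ by quadrature for three unit-variance densities (Gaussian, Laplace, and uniform, an arbitrary choice of competitors) and confirms that the Gaussian attains the largest value.

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]

def entropy(f):
    # -integral of f*log(f), ignoring points where the density vanishes
    mask = f > 0
    return -np.sum(f[mask]*np.log(f[mask]))*dx

gauss = np.exp(-x**2/2)/np.sqrt(2*np.pi)                         # variance 1
lap = np.exp(-np.abs(x)*np.sqrt(2))/np.sqrt(2)                   # Laplace, variance 1
unif = np.where(np.abs(x) <= np.sqrt(3), 1/(2*np.sqrt(3)), 0.0)  # uniform, variance 1

print(entropy(gauss), 0.5*np.log(2*np.pi*np.e))   # ~1.4189, matches the closed form
print(entropy(lap), entropy(unif))                 # both strictly smaller
```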

B. On the -Momenta of Order and the Cauchy Distribution

In this subsection, we discuss another application of the concept of -Characteristic Functions. In particular, we build -Momenta in a way similar to what happens for the usual Characteristic Function and the usual Momenta. We apply this tool to the case of the Cauchy Distribution and see that, in certain cases, in contrast to the well-known case of , we can build some finite generalized Momenta. We refer to [15] for a more detailed discussion on -Characteristic Functions.

Definition 50. Consider the -Characteristic Function of : . Then, if is times continuously differentiable on , one defines the -Moment of order by the formula

Proof. This is a direct and simple computation.

Remark 51. Here, we do not consider the possibility of the different roots of unity that can appear in the computation of the -Characteristic Function. We refer, for the precise theory, to [15].

From now on, we concentrate only on the case of the Multivariate Cauchy Distribution.

Definition 52. One says that a random variable is distributed as a Multivariate Cauchy Distribution if and only if its pdf takes the form

Now, we want to determine for which and the -Momenta of order exist and are finite. In other words, we want to find for which values of , , and we have that . Therefore, we compute , where the constant may vary from step to step. So, we have that the -Momenta of order exist and are finite when , namely, if and only if the order of the moment satisfies the following condition: . In the case , we need and, in general, we need in order for the -Moment of order to be well defined.
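A symbolic sanity check of the phenomenon described above, in dimension one and under our assumption (following our reading of [15]) that the generalized moment of order $k$ involves integrating $|x|^k$ against a power of the Cauchy density: raising the density to a sufficiently high power makes the moment finite, while the ordinary second moment diverges.

```python
import sympy as sp

x = sp.symbols('x', real=True)
cauchy = 1/(sp.pi*(1 + x**2))                    # standard Cauchy density

# ordinary second moment: divergent
m2 = sp.integrate(x**2*cauchy, (x, -sp.oo, sp.oo))
print(m2)                                        # oo

# second moment weighted by the squared density: finite
m2_pow = sp.integrate(x**2*cauchy**2, (x, -sp.oo, sp.oo))
print(sp.simplify(m2_pow))                       # 1/(2*pi), a finite value
```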

Competing Interests

The author declares that he has no competing interests.

Acknowledgments

The author wants to thank his family and Victoria Ban for their constant support. He thanks Professor Maung Min-Oo for useful discussions on the Fisher Information Metric. The author also thanks his Supervisor, Professor Narayanaswamy Balakrishnan, for his constant help and guidance and for an inspiring talk on dispersion indexes.