Abstract
Let X be an (N, d)-anisotropic Gaussian random field. Under some general conditions on X, we establish a relationship between a class of continuous functions satisfying the Lipschitz condition and a class of polar functions of X. We prove upper and lower bounds for the intersection probability of a nonpolar function and X in terms of Hausdorff measure and capacity, respectively. We also determine the Hausdorff and packing dimensions of the time set at which a nonpolar function intersects X. The class of Gaussian random fields that satisfy our conditions includes not only fractional Brownian motion and the Brownian sheet, but also such anisotropic fields as fractional Brownian sheets, solutions to the stochastic heat equation driven by space-time white noise, and operator-scaling Gaussian random fields with stationary increments.
1. Introduction
Gaussian random fields have been extensively studied in probability theory and applied in a wide range of scientific areas including physics, engineering, hydrology, biology, economics, and finance. Two of the most important Gaussian random fields are the Brownian sheet and fractional Brownian motion.
On the other hand, many data sets from various areas such as image processing, hydrology, geostatistics, and spatial statistics are anisotropic in nature, in the sense that they have different geometric and probabilistic characteristics along different directions. Hence fractional Brownian motion, which is isotropic in the sense that the distribution of its increments depends only on the Euclidean distance between the time points, is not adequate for modelling such phenomena. Many authors have proposed anisotropic Gaussian random fields as more realistic models; see [1, 2] and the references therein for more information.
Typical examples of anisotropic Gaussian random fields are fractional Brownian sheets and the solution to the stochastic heat equation. It is known that sample path properties, such as fractal dimensions, of these anisotropic Gaussian random fields can be very different from those of isotropic ones such as Lévy's fractional Brownian motion; see, for example, [3–7]. Recently, Xiao [2] systematically studied the analytic and geometric properties of anisotropic Gaussian random fields under certain general conditions. Biermé et al. [1] studied the hitting probabilities and the Hausdorff dimension of the inverse images of anisotropic Gaussian random fields under some conditions. Their main goal is to characterize the anisotropic nature of the Gaussian random fields by a multiparameter index H = (H_1, …, H_N). This index is often related to the operator-self-similarity or multi-self-similarity of the Gaussian random field under study. In this paper, we further discuss the polar functions of anisotropic Gaussian random fields.
We will continue to use the same setting as in Biermé et al. [1]. Let H = (H_1, …, H_N) ∈ (0, 1)^N be a fixed vector and, for a_j < b_j (j = 1, …, N), let I = ∏_{j=1}^{N} [a_j, b_j] ⊂ ℝ^N denote a compact interval (or a rectangle). For example, we may take I = [ε, 1]^N, where ε ∈ (0, 1) is a fixed constant.
Let X = {X(t), t ∈ ℝ^N} be a Gaussian random field with values in ℝ^d, defined on a probability space, with mean zero and whose components X_1, …, X_d are independent. Suppose that, for each k ∈ {1, …, d}, X_k satisfies the following general conditions. (C1) There exist positive and finite constants , , and such that, for all s, t ∈ I, (C2) There exists a positive and finite constant such that, for all s, t ∈ I,
Here Var(X_k(t) | X_k(s)) denotes the conditional variance of X_k(t) given X_k(s). We will call such an X an (N, d)-anisotropic Gaussian random field. Xiao [2] and Biermé et al. [1] gave some remarks on the above conditions. We point out that the class of Gaussian random fields that satisfy conditions (C1) and (C2) is large. It includes not only the well-known fractional Brownian motion and the Brownian sheet, but also such anisotropic random fields as fractional Brownian sheets (cf. [3, 4, 7]), solutions to the stochastic heat equation driven by space-time white noise (cf. [5, 6, 8–10]), and many more.
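For the reader's orientation, conditions of this type are stated in Biermé et al. [1] and Xiao [2] in the following form (written for a generic real-valued coordinate process, denoted here by X_0; H = (H_1, …, H_N) ∈ (0, 1)^N is the index vector and c_1, c_2, c_3 are generic constants):
\[
\text{(C1)}\qquad c_1 \sum_{j=1}^{N} |s_j - t_j|^{2H_j} \le \mathbb{E}\bigl[(X_0(t) - X_0(s))^2\bigr] \le c_2 \sum_{j=1}^{N} |s_j - t_j|^{2H_j}, \qquad s, t \in I;
\]
\[
\text{(C2)}\qquad \operatorname{Var}\bigl(X_0(t) \mid X_0(s)\bigr) \ge c_3 \sum_{j=1}^{N} |s_j - t_j|^{2H_j}, \qquad s, t \in I.
\]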
In the following, we introduce notation for several classes of functions satisfying certain conditions. The relationships among them will be studied in Section 3.
Let f be a continuous function with values in ℝ^d. As usual, f is said to be a polar function for the random field X if (3) holds. Let denote the collection of the continuous functions satisfying (3).
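Condition (3) is of the same type as the classical polarity condition used for Brownian motion and fractional Brownian motion in [12–14]; in that standard formulation, a continuous function f is polar for X when
\[
P\bigl\{ X(t) = f(t) \ \text{for some } t \bigr\} = 0,
\]
and f is nonpolar for X when this probability is positive.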
Let be a fixed vector, and let denote the collection of all Hölder continuous functions of any order less than along the th direction in time; that is, there exists a finite and positive constant , depending only on and , such that for all , , and ,
Moreover, let denote the collection of all functions satisfying the following condition: there exist finite and positive constants and , depending only on and , such that for all and ,
Note that if , then the functions in are called Hölder continuous of any order less than , and the functions in are called quasi-spirals of order ; see Kahane [11]. Hence and can be regarded as natural generalizations of Hölder continuous functions and quasi-spirals, respectively.
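For orientation, we recall the classical one-parameter notions from Kahane [11] (stated here with a generic exponent α ∈ (0, 1); the exponents actually used in this paper are the ones fixed in the definitions of the two classes above): f is Hölder continuous of any order less than α if, for every β ∈ (0, α), there is a finite constant C_β with
\[
|f(t) - f(s)| \le C_\beta\, |t - s|^{\beta} \qquad \text{for all } s, t,
\]
and f is a quasi-spiral (quasi-helix) of order α if there are constants 0 < c_1 \le c_2 < \infty with
\[
c_1\, |t - s|^{\alpha} \le |f(t) - f(s)| \le c_2\, |t - s|^{\alpha} \qquad \text{for all } s, t.
\]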
In the studies of random fields, it is interesting to consider the following questions. (i) Given a nonrandom continuous function f, when is it nonpolar for X in the sense that ? When is it polar for X in the sense that ? (ii) Given a nonrandom Borel set , what is the probability for the random set ? What are the Hausdorff and packing dimensions of the set if f is nonpolar or ?
The above are important questions in the fractal theory of random fields, and the related results are known only for a few types of random fields. For example, Graversen [12] studied the characteristics of polar functions for two-dimensional Brownian motion. Le Gall [13] made a further study of d-dimensional Brownian motion and posed an open problem about the existence of nonpolar continuous functions satisfying a Hölder condition. Some of these results have been partially extended to fractional Brownian motion, which has stationary increments, by Xiao [14], to the Brownian sheet, which has independent increments, by Chen [15], and recently to the anisotropic fractional Brownian sheets by Chen [4].
In all these papers, the isotropy, independent increments, or stationary increments of the Brownian sheet and fractional Brownian motion have played crucial roles. Since, in general, anisotropic random fields have neither isotropy nor independent or stationary increments, owing to their general dependence structure, it is more difficult to investigate fine properties of their sample paths. The main objective of this paper is to further investigate the characteristics of the polar functions and the intersection probabilities for X satisfying conditions (C1) and (C2), using the approach of Biermé et al. [1] and Xiao [2]. Our main results, in some cases, strengthen the results in the aforementioned works, and their proofs differ from those for the Brownian sheet and fractional Brownian motion. Of particular significance, we determine the exact Hausdorff and packing dimensions of the time set at which a nonpolar function intersects X. For the intersection probability, however, we can only establish upper and lower bounds in terms of Hausdorff measure and capacity, respectively; see Theorem 16. It is still an open problem to prove the best upper bound in terms of capacity. We should also point out that, compared with the isotropic case, the anisotropic nature of X induces a far richer fractal structure into the properties of the nonpolar functions for X.
The rest of the paper is organized as follows. In Section 2, we derive a few preliminary estimates and lemmas for X that will be useful in our arguments. In Section 3, we obtain the relationship between the class of continuous functions satisfying the Lipschitz condition and the class of polar functions of X. We also give upper and lower bounds for the probability that a nonpolar function intersects X and determine the Hausdorff and packing dimensions of the set of time points at which a nonpolar function intersects X. A question posed by Le Gall [13] about the existence of nonpolar continuous Hölder functions for Brownian motion is also answered. Finally, in Section 4, we show that our main results in Section 3 can be applied to solutions of stochastic partial differential equations.
Throughout this paper we will use c to denote an unspecified positive and finite constant whose precise value is not important and may be different in each appearance. More specific constants in Section n are numbered as c_{n,1}, c_{n,2}, and so on.
2. Some Preliminary Estimates
Because of the complex dependence structure of anisotropic Gaussian random fields, the proofs of the main results in Sections 3 and 4 are quite involved. We therefore split them into several lemmas, which will be used in Sections 3 and 4.
Let be a compact set in and let denote the covariance matrix of the random vector . Then, for all , where .
We need upper and lower bounds for the covariance determinant in (6). For the sake of completeness, we provide a simple proof by using the expressions for the characteristic functions and the density functions of Gaussian random fields.
Lemma 1. Let X be an -Gaussian random field satisfying conditions (C1) and (C2) and let be a compact set in . Then there exist positive constants and such that, for all , ,
Proof. Since is a compact set in , there exists a positive constant such that . In order to prove (7), it suffices to show that (7) holds for all with . We claim that, for all with ,
If , then by using the expression for the characteristic functions and the density functions of Gaussian random fields, it turns out that
Applying the fact that the conditional distribution of given is still Gaussian with mean and variance , one can evaluate the integral on the right-hand side of (9) and thus deduce that (8) holds.
If , then we can deduce that the correlation coefficient of and is equal to , so there exists such that a.s.; in particular, a simple estimate implies that (8) still holds in this case.
We now prove the upper bound in (7). Note that is a mean zero Gaussian vector. Since is a positive continuous function on , there exists a positive constant such that, for all ,
This, together with (1), (2), and (8), implies that the upper bound in (7) holds. The lower bound in (7) follows from (2), (8), and (10). This completes the proof of Lemma 1.
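The computation behind (8) and (9) rests on the elementary conditional-variance factorization for a nondegenerate centered Gaussian vector; writing X_0 for a generic coordinate process, it reads
\[
\det \operatorname{Cov}\bigl(X_0(s), X_0(t)\bigr) = \operatorname{Var}\bigl(X_0(s)\bigr)\, \operatorname{Var}\bigl(X_0(t) \mid X_0(s)\bigr).
\]
Since the conditional variance never exceeds \mathbb{E}[(X_0(t) - X_0(s))^2], conditions (C1) and (C2) control this determinant from above and below once the variance of X_0 is bounded between positive constants on the compact set, in line with the argument just given.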
By an argument similar to that of Testard in [16], we provide a proof of the following lemma.
Lemma 2. Let X be an -Gaussian random field satisfying conditions (C1) and (C2) and let be a compact set in . Then there exist positive constants and such that, for all , we have where , .
Proof. Since is a compact set in , there exists a positive constant such that . As usual, the proof is divided into proving the lower and upper bounds separately. We first prove the lower bound in (11). By (1) and (7), we have Taking , we have, for all , It follows from (12) and (13) that Note that Then inequalities (14) and (15) imply Now we prove the upper bound in (11). By using (1) and (7) and repeating the procedure in (12), we can derive Taking , we have, for all , It follows from (17) and (18) that Combining (15), (18), and (19), we obtain Inequalities (16) and (20) together complete the proof of Lemma 2.
Let be a metric on defined by
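In the anisotropic setting of [1, 2], the metric in (21) is customarily given by (our notation; ρ and the index vector H follow those references)
\[
\rho(s, t) = \sum_{j=1}^{N} |s_j - t_j|^{H_j}, \qquad s, t \in \mathbb{R}^{N},
\]
so that, under (C1), \mathbb{E}[(X_0(t) - X_0(s))^2] is comparable to \rho(s, t)^2, with comparison constants depending only on N, c_1, and c_2.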
In the following, by modifying the argument in the proof of Proposition 4.4 in [8], we provide a slightly more general result.
Lemma 3. Let X be an -Gaussian random field satisfying conditions (C1) and (C2). Then there exist positive constants and such that, for all , , and all , where denotes the ball of radius centered at in the metric defined by (21).
Proof. Using Gaussian regression, we have Note that, for all , the Gaussian random variables and are independent. By using the triangle inequality, we can deduce that, for all , where . Then By the Cauchy-Schwarz inequality, (1), and (23), we have Therefore, there exists a positive constant such that, for all and , we have . Recall that, by the unimodality of the centered Gaussian process , we have Note that and are independent. It follows from (27) that In order to estimate , we denote by the Gaussian process − and note that and the canonical metric Therefore, by the Hölder inequality and the Cauchy-Schwarz inequality, we have By using (1), (10), (30), and the fact that , we have Then where is the metric entropy number of and . It follows from Dudley's theorem (see Kahane [11]) that Combining (25), (27), (28), and (33) and using the independence of the coordinate processes of X, we have This finishes the proof of Lemma 3.
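For completeness, we record the entropy bound of Dudley invoked above (see Kahane [11]): for a centered Gaussian process \{Y(t), t \in T\} with canonical metric d_Y and d_Y-diameter D,
\[
\mathbb{E}\Bigl[\sup_{t \in T} Y(t)\Bigr] \le C \int_{0}^{D} \sqrt{\log N_{d_Y}(T, \varepsilon)}\; d\varepsilon,
\]
where N_{d_Y}(T, \varepsilon) denotes the minimal number of d_Y-balls of radius \varepsilon needed to cover T and C is a universal constant.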
Lemma 4. Let X be an -Gaussian random field satisfying conditions (C1) and (C2) and let be a compact set in . Then there exists a positive constant such that, for all , and ,
Proof. Note that
Denote by the identity matrix of order 2 and let . Then the inverse of is given by
where denotes the determinant of .
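For a symmetric positive definite 2 × 2 matrix the inversion used here is explicit; in generic notation,
\[
\Gamma = \begin{pmatrix} a & b \\ b & c \end{pmatrix}, \qquad \Gamma^{-1} = \frac{1}{\det \Gamma} \begin{pmatrix} c & -b \\ -b & a \end{pmatrix}, \qquad \det \Gamma = ac - b^2 > 0.
\]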
By (36), Lemma 2, Fubini’s theorem, and some elementary calculations, we derive
If , then
For all , we can deduce
If , by the inequality above and taking and , we have
It follows from Lemma 1 that, for all ,
By using (42) and the fact that , we have
Combining (36) through (43), we prove that Lemma 4 holds.
To prove the lower bound in Theorem 11, we will use the two lemmas below, which are slightly more general results obtained by modifying the arguments in [3, 17].
Lemma 5. Let , and be given constants. Then for all constants , and , there exists a positive and finite constant , depending on , , , , , , , and only, such that, for all ,
Proof. Let and . By the symmetry of the integrand, we get
Putting and using the fact that , we see that the above integral is bounded by
where we have used the substitution .
Let . If , then for , it follows from (46) and Hölder's inequality that there exists a positive and finite constant , which depends on , , , , , , , and only, such that
where we have used the fact that .
If , then some elementary calculations imply that, for all ,
where depends on , , , , , , , and only. By (47) and (48), the proof of Lemma 5 is finished.
Lemma 6. Let , , , and be positive constants. For and , let Then there exist positive and finite constants and , depending on , , , and only, such that the following hold for all reals satisfying : (i) if , then (ii) if , then (iii) if and , then
Proof. If , by using Lemma 10 in [3], we can prove that inequalities (50), (51), and (52) hold. If , then we can split the integral in (49) so that Let . Since and , , and are positive constants, we get By using (53), (54), and Lemma 10 in [3] again, we can also prove (50), (51), and (52); in this case . Thus, the proof of Lemma 6 is finished.
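For orientation, integral estimates of the kind collected in (50)–(52) distinguish three regimes according to the product of the exponents; a prototype, stated with generic constants α, β > 0 and 0 < A < 1 standing for the quantities fixed in (49), is
\[
\int_{0}^{1} \frac{dt}{(A + t^{\alpha})^{\beta}} \le c \times
\begin{cases}
A^{-(\beta - \frac{1}{\alpha})}, & \alpha\beta > 1,\\
\log\bigl(1 + A^{-1}\bigr), & \alpha\beta = 1,\\
1, & \alpha\beta < 1,
\end{cases}
\]
where c depends only on α and β; this is the pattern behind the three cases of Lemma 6.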
Let and be given vectors. For convenience, we may further assume
Lemma 7. Let X be an -Gaussian random field satisfying conditions (C1) and (C2). If , then there exists a positive and finite constant , depending on , , , , , and only, such that, for all ,
Proof. Note that . Then, by using Lemma 1, we have Let be the unique positive integer such that Then we choose positive constants such that for each and Applying Lemma 5 to (57) with we obtain that By repeatedly applying Lemma 5 to the integral in the above inequality for steps, we have Since the satisfy (59), we have Thus, the integral on the right-hand side of (62) is finite. This completes the proof of Lemma 7.
Lemma 8. Let X be an -Gaussian random field satisfying conditions (C1) and (C2). If , then there exist positive and finite constants and such that, for all , , where and depends on , , , , , , and only.
Proof. For our purpose, let us note that (55) implies
Then, there exist such that for all and , we have . By using Lemma 1 and , we have
By a change of variable, we have
In order to show that the integral in (67) is finite, we will integrate iteratively. We only need to consider the case when
Here and in the sequel . Then, by using (55), we can deduce that
where is the unique integer satisfying (68).
If in (68), we integrate . Note that and = . Then we can use (52) of Lemma 6 with and to get
since .
If in (68), we integrate first. Since , we can use (50) of Lemma 6 with and to get
We can repeat this procedure for integrating .
Note that if , then . We need to use (51) of Lemma 6 with and to integrate and obtain
since .
On the other hand, if , then . By using (50) of Lemma 6 with and to integrate , we can deduce
Note that and = for a small enough . Applying (52) to integrate in (73), we see that
since . Combining (70) through (74) yields (64). This completes the proof of Lemma 8.
3. Characteristics of Polar Functions
In this section, we provide some necessary conditions and some sufficient conditions for a function to be polar for X. We also give the intersection probabilities for a nonpolar function and X and determine the Hausdorff and packing dimensions of the set .
Let us note that holds if and only if there exists a rectangle such that . For our purpose, it suffices to consider the polar functions of X in a rectangle with .
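The equivalence stated here is a consequence of countable subadditivity: the parameter domain can be written as a countable union of compact rectangles with rational vertices, so that, in generic notation (with (0, \infty)^N standing for the parameter range appearing above),
\[
P\bigl\{ X(t) = f(t) \ \text{for some } t \in (0, \infty)^N \bigr\} > 0
\iff
P\bigl\{ X(t) = f(t) \ \text{for some } t \in I \bigr\} > 0 \ \text{for some rational rectangle } I \subset (0, \infty)^N.
\]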
Theorem 9. Let X be an -Gaussian random field satisfying conditions (C1) and (C2) on . If , then .
Proof. For any constants and any rectangle with , it follows from an argument similar to that in the proof of Theorem 4.2 in [2] that there is a random variable with finite moments of all orders and an event of probability 1 such that, for all ,
For any , in order to prove , it suffices to prove that, for any and any rectangle ,
Fix and choose such that . Since a.s., there exist and such that and, for any , . For any integer , divide the rectangle into subrectangles with sides parallel to the axes and side lengths . Let be the lower-left vertex of .
Let , be fixed. If there exists such that , then by (77) and ,