Abstract

In this article we examine some properties of the solutions of the parabolic Anderson model. In particular we discuss intermittency of the field of solutions of this random partial differential equation, when it occurs, and what the field looks like when intermittency does not hold. We also explore the behavior of a polymer model created by a Gibbs measure based on solutions to the parabolic Anderson equation.

1. Introduction

It has been twenty years since the publication of the seminal work “Parabolic Anderson Problem and Intermittency” by Carmona and Molchanov. Their memoir has inspired an enormous amount of research on the subject in the intervening years. In this paper we hope to give an account of what is now known about the behavior of solutions of the parabolic Anderson equation as well as the behavior of typical paths under the Anderson polymer measure. Perhaps the initial and still most compelling reason for studying the parabolic Anderson model is physical. In the three-dimensional case it provides a model for the growth of magnetic fields in young stars. In addition, it also has an interpretation as a population growth model. Further, since the work of [1] it has also provided a model for a polymer in a random medium. Furthermore, it is a part of the theory of stochastic partial differential equations. Besides its interest as what can be described as a canonical object, in the sense that Brownian motion is a canonical object, its appeal also derives from its relations to many other important models. In particular, it is a close relative of other models, for example, the stepping stone model [2], catalytic branching [3], super random walk, Burgers' equation [4], and the KPZ equation [4]. The original motivation of Anderson concerned the question of whether there were bound states for electrons in crystals with impurities. The crystal structure is taken to be and the impurities are modeled by means of an i.i.d. random field . The phenomenon of localization can be expressed as the existence of eigenfunctions of the Hamiltonian .

The equation satisfied by a magnetic field generated by a turbulent flow leads naturally to an analogous parabolic equation with a time varying random field as opposed to the time stationary field which arises in the original localization question. The difference is that the random medium changes rapidly in the case of the magnetic field generated by turbulent flow whereas the impurities in the localization problem can be considered to be unchanging in time. In the latter case, the fluctuations in the medium are slow compared to the phenomenon of interest, the capture of electrons.

The magnetic field in a star is generated by the turbulent flow of electrical charges. Turbulent flows are modeled by means of a randomly evolving velocity field, see, for example, [5] or [6] for a canonical example. Let denote such a velocity field on . Typically, this field is incompressible, Markov in time and Gaussian together with other spectral properties. The magnetic field generated by charges carried by the Lagrangian flow of is incompressible and evolves according to an equation of the form where in this case is the standard Laplacian on . This equation has been studied in [7–11], to name but a few references. We mention now some of the results of [9], on a tractable version of the model which one gets when is replaced by and in the environment one takes the time correlation in the velocity field to . In this version of the model, the field is replaced by a matrix Wiener process, on some probability space satisfying Then (1) satisfied by takes the form where , the discrete Laplacian, is given by and is the Stratonovich differential of . Converting to the Itô differential instead leads to the following equivalent form:

The average (over the medium) of the magnetic field, is easily seen to satisfy Since the spectrum of is , for , this equation implies that the first moment Lyapunov exponent satisfies In [9], by simply using Itô's formula, an equation was derived for the average (over the medium) magnetic field energy This satisfies Under an assumption of homogeneity on , one has and so for all time , where Writing , this becomes In the physically relevant dimension , taking the Fourier transform of the eigenvalue equation reveals that the operator possesses a positive eigenvalue if and only if where is the three-dimensional torus. This implies that the solution satisfies This implies whereas This last inequality is the definition of full intermittency: the second moment grows at an exponential rate strictly greater than twice that of the first moment. As a consequence, the field has widely separated large peaks. This explains the well-known phenomenon of sunspots. They are widely separated sites of high magnetic field energy. The main question of interest in astrophysics is whether the magnitude of grows exponentially, a.s., in other words, is the a.s. Lyapunov exponent positive? This is the question of whether A further question regards the physically relevant asymptotic behavior of the a.s. Lyapunov exponent as , since is the inverse Reynolds number, which in this situation is very small. Another interesting question is whether has a limiting distribution. These questions have an affirmative answer in the scalar case and remain difficult open problems in the multidimensional model just discussed.

2. Parabolic Anderson Model

The commonly studied parabolic Anderson equation is a scalar version of the magnetic field equation (1) and most progress has occurred in and we will treat this case first. The velocity field may be replaced by a random environment that is either a stationary in time random field or an evolving random field . Typically, the variables and , in both cases are assumed to be i.i.d. The stationary in time field can be thought of as a model where the phenomenon of interest evolves much more rapidly than the evolution of the environment. The nonstationary case models a phenomenon whose evolution is on a time scale comparable to the time scale of the evolution of the random environment. An enormous literature has been developed in the time-stationary case; a partial random sample of works on this topic includes [12–17]. In this paper we will only discuss the nonstationary case. When considering the discrete model, that is, , we take the operator to be the discrete Laplacian as mentioned above, whereas it is the ordinary Laplacian when the model is set in . The parameter in the model is called the diffusivity. The parabolic Anderson model is defined as a parabolic partial differential equation. A canonical version of this model is with the random environment provided by a white noise potential. In the case, one takes to be standard, one-dimensional Brownian motions defined on some probability space , where . This field is assumed to be correlated in time and space, that is, where denotes expectation with respect to . The differential form of the parabolic Anderson equation is then The notation indicates the Stratonovich differential (see [18] for a description of Stratonovich differentials), and this is preferred over the Itô differential because of the simplicity of the Feynman-Kac representation which will appear shortly. The equivalent integral formulation is The differential formulation can be expressed converting to the Itô differential by At times it will be convenient to discuss the Itô solution defined by The relation between the two is that . Typically, it is assumed that and that it is bounded from above, which guarantees existence and uniqueness of the solution. The positivity of assures that for all . Fundamental information on this equation, including existence and uniqueness results, and its applications are contained in [19]. The field of solutions exhibits interesting behavior as revealed by the growth properties of its moments which will be discussed at length below. At the risk of stating what is obvious, we stress that the random variables in the field are dependent. The correlation structure of this field is examined in detail in Theorem 2 later in this paper.
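To make the discrete equation concrete, the following is a minimal simulation sketch of the Itô form of the equation on a periodic one-dimensional lattice; the dimension, lattice size, time step, value of the diffusivity, and flat initial condition are all illustrative assumptions rather than choices made in the text.

```python
# Minimal Euler-Maruyama sketch (illustrative assumptions throughout) of the Ito form
#   dv(t,x) = kappa * (Laplacian v)(t,x) dt + v(t,x) dB_x(t)
# on a periodic one-dimensional lattice, with independent Brownian motions B_x, one per site.
import numpy as np

rng = np.random.default_rng(0)

N = 200          # number of lattice sites (periodic boundary conditions)
kappa = 0.5      # diffusivity
dt = 1e-3        # time step
T = 2.0          # final time

v = np.ones(N)   # flat initial condition v(0, x) = 1
for _ in range(int(T / dt)):
    lap = np.roll(v, 1) + np.roll(v, -1) - 2.0 * v   # discrete Laplacian on the circle
    dB = rng.normal(0.0, np.sqrt(dt), size=N)        # one independent Brownian increment per site
    v = v + kappa * lap * dt + v * dB                # explicit Euler step for the Ito equation
    v = np.maximum(v, 0.0)                           # crude fix: the exact solution is nonnegative,
                                                     # but the explicit scheme can overshoot below zero

# for the Ito form the mean of v(t, x) over the noise stays equal to one,
# while individual realisations develop the tall isolated peaks discussed below
print("spatial average of v(T, .):", float(v.mean()))
print("largest peak of v(T, .):  ", float(v.max()))
```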

The solution of (20) has a Feynman-Kac representation as an average over a path space. This is done by means of a family of measures , on the set of right-continuous paths possessing left limits from which have a finite number of jumps on any compact subset of . Define analogously. We let be the canonical process on , that is, for . Then the measures are taken to be the ones that make the canonical process the pure jump Markov process on with generator . This process satisfies at and waits at for a random amount of time which is exponentially distributed with parameter , and then selects, using the uniform measure on its neighbors, one of the neighbors of and jumps there and then proceeds afresh as if starting from time at the new position.

The solution to (20) can then be expressed as Unless explicitly mentioned otherwise, we will take so that
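The representation can also be sampled directly: for one fixed realization of the Brownian field, the solution at a site is an average over paths of the rate-2dκ jump walk of the exponential of the field integrated along the path. The sketch below does this in dimension one with initial condition identically one; the lattice truncation, the time grid, and all parameter values are illustrative assumptions.

```python
# Monte Carlo sketch of the Feynman-Kac representation (illustrative assumptions throughout):
# average exp( field integrated along the path ) over paths of the rate-2*d*kappa jump walk,
# for one fixed realisation of the Brownian field, with u_0 = 1.
import numpy as np

rng = np.random.default_rng(1)

d, kappa, t = 1, 0.5, 1.0
rate = 2 * d * kappa                    # total jump rate of the walk
n_steps = 1000                          # time grid on which the environment is stored
dt = t / n_steps
M = 25                                  # truncate the lattice to the sites -M, ..., M

# one fixed environment: independent Brownian increments, one per site and time step
dB = rng.normal(0.0, np.sqrt(dt), size=(2 * M + 1, n_steps))

def feynman_kac_sample():
    """exp of the field integrated along one path of the jump walk started at 0."""
    x, integral, k = 0, 0.0, 0
    next_jump = rng.exponential(1.0 / rate) / dt          # time of the next jump, in units of dt
    while k < n_steps:
        if k < next_jump:
            integral += dB[x + M, k]                      # the walk sits at x during step k
            k += 1
        else:
            x = max(-M, min(M, x + rng.choice((-1, 1))))  # jump to a uniformly chosen neighbour
            next_jump += rng.exponential(1.0 / rate) / dt
    return np.exp(integral)

n_paths = 2000
u_estimate = np.mean([feynman_kac_sample() for _ in range(n_paths)])
print("Monte Carlo estimate of u(t, 0) for this environment:", float(u_estimate))
```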

3. Relations to Other Processes and Equations

The parabolic Anderson equation is closely related to equations for many other models. Probably the best-known connection to another equation is provided by the Hopf-Cole transformation. If we take a solution to a parabolic Anderson equation with a spatially smooth random force term and set and , then satisfies Burgers' equation Similarly, setting , where solves (23), yields a solution of the KPZ equation, see, for example, [4],

Equation (20) may be cast as a particular case of a pair of equations driven by two jointly Gaussian, standard Brownian fields, which also relates it to the mutually catalytic branching and stepping stone models. Assume the correlation between the fields is controlled by a parameter by

An interesting observation of Etheridge and Fleischmann, [20], neatly ties the symbiotic branching and stepping stone models together with the parabolic Anderson model by taking different values for the parameter . Consider the following pair of equations: When in (26), this is known as the mutually catalytic branching model for two interacting populations and has been studied in [3, 20–25] to name just a small random sample.

If in (26) and , then and solves This is known as the stepping stone model from population genetics and has been the subject of the works [2, 16, 26] among others.

Finally, when in (26) and one gets and is the solution of (20) with .

Other versions of noise have been considered as well and this can lead to substantial technical difficulties. For example, the recent works [27–31] replace the field by catalysts which are from interacting particle systems such as the voter model or exclusion processes. Among the problems unresolved here is the existence of the a.s. Lyapunov exponent. The subadditive argument which worked so well for the environment fails for these models.

4. Moment Lyapunov Exponents

One of the most interesting properties of the solutions of (20) is that of intermittency. Intermittency is defined in terms of the moment Lyapunov exponents. These are the limits for of The existence of these limits was proved in [19]. For one uses (116) to compute . This is easily done with the observation that for any fixed path , the random variable is Gaussian with mean and variance . Thus, by Fubini's theorem, so that . An interesting quantity, the overlap, arises when computing . Here we will use to denote the product measure of with itself on . Then we use the observation that for two fixed paths, and , the variables and are jointly Gaussian with Thus, is Gaussian with mean and variance by (116), This implies
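For orientation, here is a hedged sketch of the standard computation behind these statements, in our own notation: Q_x is the law of the jump walk, angle brackets denote the average over the Brownian field, the initial condition is identically one, and the Lyapunov exponents are normalized as the limits of t^{-1} times the logarithm of the moments.

```latex
% Hedged sketch of the first and second moment computation (our notation, u_0 \equiv 1).
\[
  F_t := \int_0^t dB_{X_s}(s)
  \quad\text{is, for a fixed path } X,\ \text{Gaussian with mean } 0 \text{ and variance } t,
\]
so by Fubini's theorem
\[
  \big\langle u(t,x) \big\rangle
   = E^{Q_x}\!\big[\big\langle e^{F_t}\big\rangle\big]
   = E^{Q_x}\!\big[e^{t/2}\big] = e^{t/2},
  \qquad \gamma_1 = \tfrac12 .
\]
For two independent copies $X^1, X^2$ of the walk under $Q_x \otimes Q_x$,
\[
  \operatorname{Cov}\big(F_t^1, F_t^2 \,\big|\, X^1, X^2\big)
   = \int_0^t \mathbf{1}_{\{X^1_s = X^2_s\}}\, ds ,
\]
so that $F_t^1 + F_t^2$ is Gaussian with variance $2t + 2\int_0^t \mathbf{1}_{\{X^1_s = X^2_s\}}\,ds$ and
\[
  \big\langle u(t,x)^2 \big\rangle
   = e^{t}\; E^{Q_x \otimes Q_x}\Big[\exp\Big(\int_0^t \mathbf{1}_{\{X^1_s = X^2_s\}}\, ds\Big)\Big].
\]
The time the two walks spend together is the overlap, and $\gamma_2 > 2\gamma_1$ holds exactly when
this exponential functional of the overlap grows exponentially in $t$.
```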

For general , Hölder's inequality implies and is convex. Full intermittency is then the property that is strictly convex on , that is, It is by now classical, and was proven in [19], that in dimensions , full intermittency holds for all , but in dimensions , full intermittency only holds for where is a dimension-dependent constant. This was done in [19] by noting that in dimensions and the operator has a positive eigenvalue for any . By contrast, in dimensions there is a positive constant such that has a positive eigenvalue only for . When there is a positive eigenvalue, , one can conclude that When the spectrum of has no eigenvalue, that is, when in dimension with , one has Using this with (120) we have the following consequence due to Carmona and Molchanov [19].

Theorem 1. Full intermittency holds in dimensions and for any . For each , there is a positive such that for , full intermittency holds.

Below we will give a probabilistic proof of this result which identifies the value . In addition, in [19], it was shown that there is a sequence which was proved to be strictly increasing in [32], such that This has been refined, extended, and improved in the recent work of Greven and den Hollander, [32].

The physical phenomenon of intermittency is the property that a random field possesses widely separated high peaks. The best-known field exhibiting this property is the magnetic field energy in a star. In our sun, this manifests itself as sunspots, where most of the magnetic field energy is concentrated, thereby lowering the temperature and causing the darkening which appears as a spot. Intermittency properties of the field were established in [19] and will be discussed below.

5. Itô Solution and Probabilistic Proof of Intermittency

The Itô solution, , is the solution when the Itô differential is used in (20) instead of the Stratonovich differential. Recalling this solution satisfies The relation between and is simply Then defining , it follows that and so full intermittency holds if and only if .
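In the same notation as the sketch above (initial condition one, u the Stratonovich solution, v the Itô solution), the relation and its consequence for intermittency read, schematically:

```latex
% Hedged sketch of the Ito-Stratonovich relation (our notation, u_0 \equiv 1).
\[
  v(t,x) = e^{-t/2}\, u(t,x),
  \qquad\text{hence}\qquad
  \tilde{\gamma}_p = \gamma_p - \tfrac{p}{2}.
\]
Since $\gamma_1 = \tfrac12$, one has $\tilde{\gamma}_1 = 0$, and therefore
\[
  \gamma_2 > 2\gamma_1 = 1
  \iff
  \tilde{\gamma}_2 > 0 ,
\]
that is, full intermittency is equivalent to exponential growth of the second moment of the
Ito solution.
```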

We show that, in dimensions greater than , the Itô solution has a bounded second moment for and that the second moment grows exponentially for . This was shown in [19] using spectral considerations outlined in the previous section, so here we will give a probabilistic argument which was alluded to in [19] but does not seem to have appeared anywhere. First, recall that the jump rate of is . Thus, if is an independent copy of , the jump rate of is , that is, . Recalling that we have

Now we introduce some helpful notation. We let and for , If the underlying chain is recurrent, then all of these stopping times are finite. If the chain is transient, then only a finite number of these times is finite. Set, for , The distribution of with respect to is exponential with parameter . For all , the random variable is the duration of the th sojourn at and is the duration of the th excursion from . In the recurrent case, the random variables and are all finite and independent. They are also independent in the transient case “up to the time they all become infinite.” For a given , if , then writing , The skeletal random walk of can now be defined using these stopping times. This is the Markov chain on which keeps track of the sites visited by , namely, . Observe that if , then is a geometric random variable with parameter , the probability of no return to the origin which is independent of . For , and it is well known that , see, for example, [2]. Since and the random variables are independent, an easy argument shows that the total time spent at the origin is thus and is an exponentially distributed random variable with parameter with respect to the measure . If , one has We remark that by stationarity of the field and the choice of , it follows that is independent of . When , one has . An important consequence of this little computation is the lack of intermittency for . Indeed, for , Consequently, since , for , and there is no intermittency for large in dimensions .
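Reading the sojourn computation above literally, the total time the difference walk spends at the origin is exponential with parameter 4dκ q_d, where q_d is the probability that the embedded simple random walk never returns to the origin, and its exponential moment is finite exactly when this parameter exceeds one. The sketch below estimates q_3 by truncated Monte Carlo (the classical value is about 0.66) and reports the corresponding crossover value of κ; the identification of this crossover with the threshold in Theorem 1 is our reading of the argument, and all numerical choices are illustrative.

```python
# Monte Carlo sketch (illustrative assumptions) of the crossover suggested above:
# estimate the escape probability q_3 of simple random walk in three dimensions and report
# the value of kappa at which the exponential parameter 4*d*kappa*q_d equals one.
import numpy as np

rng = np.random.default_rng(2)

d = 3
n_walks, max_steps = 5000, 3000       # truncation slightly overestimates the escape probability

pos = np.zeros((n_walks, d), dtype=np.int64)
not_returned = np.ones(n_walks, dtype=bool)
for _ in range(max_steps):
    axes = rng.integers(d, size=n_walks)
    signs = rng.choice((-1, 1), size=n_walks)
    steps = np.zeros((n_walks, d), dtype=np.int64)
    steps[np.arange(n_walks), axes] = signs
    pos += steps * not_returned[:, None]     # walks that have already returned stay frozen
    not_returned &= pos.any(axis=1)          # a walk sitting at the origin has returned

q = not_returned.mean()                 # escape probability; the classical value in d = 3 is ~ 0.66
kappa_c = 1.0 / (4 * d * q)             # where the parameter of the total time at the origin equals 1
print(f"estimated escape probability q_3 ~ {q:.3f}")
print(f"crossover value of kappa suggested by the computation: ~ {kappa_c:.3f}")
```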

For or , however, we proceed as follows to show that there is intermittency for all . It will be useful to make a change of variable and to set up some more notation. First we observe that . Thus, We use Theorem 2 and it then suffices to show . For simplicity, we denote

Notice that the random variables are exponentially distributed with parameter with respect to .

Recall for or that . Choose sufficiently large so that . Then, The central limit theorem implies that for sufficiently large. Observe that Hence, Consequently, As a result, intermittency holds for all in dimensions and .

We return now to so that and show that intermittency holds for . Again, it suffices to show . Recall that the are exponentially distributed with parameter with respect to . Standard large deviation estimates (see, e.g., [33, Theorem 9.3]) show that for fixed , Also, As in the previous paragraph, for every we can find a such that Choosing , we get Recalling (56), it follows that for all sufficiently large, If and we choose with , with the corresponding choice of as in (61), then for all sufficiently large , Therefore, , for every and intermittency holds in dimension or for all .

6. Covariance Structure and Association

Now we will examine the covariance structure of the field in the intermittent regime, that is, when so that . Recall that this range for corresponds exactly to the range of for which the operator has a positive eigenvalue. We will also establish that this field is associated. The mixed second moments are significant in understanding the structure of the field . They are given by The asymptotics of the function can be evaluated as follows. Define Then observe that But there is a scaling relation since is a rate simple symmetric random walk on with respect to . This shows that The function arises as the partition function of a homopolymer, [34], and by the Feynman-Kac formula, solves The spectrum of the operator on the right hand side, satisfies and for , there is a dimension-dependent such that In the above, is a simple eigenvalue for and the portion is purely a.c. spectrum. In fact, one now sees that from the section on intermittency. We denote by the eigenfunction corresponding to and note that it is given by, see [34], where is the symbol (Fourier transform) of and is the -dimensional torus. The representation (74) can be used to establish the exponential decay of , and there is a positive constant such that By the spectral theorem, letting be the resolution of the identity for the operator , one has Note that . The following result was used in [19] to prove a central limit theorem for sums of the form which will be described later in this paper.

Theorem 2. For and any or for and , where In addition, when there is a positive eigenvalue for , this eigenfunction satisfies where depends on and .

This implies exponential decay in the spatial variable for the covariance of solutions of (20). Recalling the equivalence in law stated in (69) it follows from (78) that

Corollary 3. For and any or for and , Consequently,

Note that (84) gives a quantitative expression for the intermittency condition . Since , we see that We note that as . Its rate of decay depends on the dimension.

An important property of the field is that the random variables in this field are associated. A collection of random variables , where is a countable set, is said to be associated if for any and coordinate-wise increasing functions , and any finite subcollection , it holds that This notion was introduced in [35] and is of course related to the FKG inequality. One important aspect of this property was developed in [36] where Newman established a central limit theorem for the collection under the assumptions that the are associated, stationary, and satisfy the finite susceptibility condition Note that the bound provided by (82) implies that the field has finite susceptibility in the intermittent regime. A classical application of Newman's central limit theorem is to take the , the spins of a ferromagnetic stochastic Ising model, and derive a central limit theorem for sums, , over growing boxes , with respect to a Gibbs state. The spins are correlated, but they possess the property of being associated and stationary with respect to the Gibbs state.
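The mechanism behind Newman's theorem can be seen on a toy field: increasing functions of independent variables are associated, so a stationary moving average of i.i.d. exponentials is an associated field with summable covariances, and its normalized box sums should look Gaussian. The field, the box size, and the number of samples in the sketch below are illustrative choices, unrelated to the models above.

```python
# Toy illustration (illustrative choices throughout) of the CLT for associated variables:
# X_i = Z_i + Z_{i+1}, with Z i.i.d. exponential, is stationary, associated, and has finite
# susceptibility, so its normalised box sums should be approximately Gaussian.
import numpy as np

rng = np.random.default_rng(3)

n_boxes, box_size = 2000, 500
Z = rng.exponential(1.0, size=(n_boxes, box_size + 1))
X = Z[:, :-1] + Z[:, 1:]                 # moving average of i.i.d. exponentials

S = X.sum(axis=1)                        # box sums
S_std = (S - S.mean()) / S.std()         # centred and scaled

skew = float(np.mean(S_std ** 3))                    # close to 0 for a Gaussian limit
excess_kurtosis = float(np.mean(S_std ** 4) - 3.0)   # close to 0 for a Gaussian limit
print(f"sample skewness {skew:.3f}, excess kurtosis {excess_kurtosis:.3f}")
```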

The solutions of the parabolic Anderson equation (20) are associated. The following result was established in [14]. The proof uses a result of Pitt, [37], which states that a necessary and sufficient condition for the association of a Gaussian vector is the point-wise nonnegativity of its correlation function. Since association is preserved by convergence in distribution, the result below is proved using a simple approximation procedure.

Theorem 4. Let be a field of , standard, one-dimensional Brownian motions on some probability space . Then , and the field of solutions of (20) is associated.

7. Almost Sure Lyapunov Exponents

In the previous section we examined the behavior of the moments of . We now turn our attention to the a.s. behavior of the solution of (20); that is, we consider the a.s. Lyapunov exponent defined by The existence of this limit was first established by Carmona and Molchanov in [19] in the case when either or, more generally, when has compact support. The technique of proof used Liggett's subadditive ergodic theorem, [38]. The sub-additivity, when , is an easy consequence of the Feynman-Kac representation (116) The Markov property is used in going from line 2 to line 3, and this technique broke down at this step in the case of . The latter case was established in [39] using a block argument from percolation theory. This type of block argument originated in [40]. Using the fact that time increments of the field are independent over disjoint space-time blocks in , the proof established an oriented percolation scheme and applied a recurrence result from [41] for such schemes.

In view of the application to stellar magnetic fields, a significant aspect of is its asymptotic behavior as . The exact asymptotics were established independently in [39] and in [42].

The asymptotics are derived from the Feynman-Kac representation (116) through analysis of the Gaussian field where This field places a natural metric on by means of

The index set of the field is too large from the point of view of the metric entropy, see [43] for an explanation of the metric entropy, induced by the metric . Thus we restrict the index set by specifying the number of jumps of its elements in the interval. So, using to denote the number of jumps of in we can define the space of paths

The superadditive functional is the supremum of a Gaussian field indexed by the set . This set has a suitably bounded entropy, which, by a theorem of Fernique and Talagrand, implies . This bound allows, by means of Liggett's subadditive ergodic theorem, the conclusion that there is a positive constant such that

An interesting and presumably difficult problem is to determine the proper scale in order to obtain a limit law. It is conjectured that should have a nontrivial limit law, possibly related to the Airy distribution. This conjecture comes from related results arising in random matrix theory such as [44, 45]. Similarly, one would expect nontrivial limit laws for .

The asymptotics established in [39, 42] for is that

These asymptotics are arrived at by decomposing the Feynman-Kac representation, (116), where denotes equality in law (distribution). Note that the only change has been to have the time direction be the same in both and . The intuition now is that the conditional expectation in the th term should be nearly . One quickly realizes that the sum over for suitable choices of and is not significant, so that only terms of the form matter. But, by Brownian scaling, . Using this in (97) and simple large deviations results for the Poisson distributed leads to an upper bound for the asymptotics of . The lower bound comes from looking at a particular path that dominates the Feynman-Kac expectation and using similar estimates.

Thus, for small which says the a.s. behavior is much smaller than the first moment behavior. This just reflects the fact that the expectation of is dominated by large values of which occur with small, but not too small, probability. This is related to the intermittency and will be examined in the section on sums over boxes below.

We would like to point out that is an increasing function of . Also, since one has for all . Moreover, it was pointed out in [19] that for . The argument given there goes as follows. First define From the fact that , it follows that . Note for that and . Thus, for every , there is a such that . This implies . But obviously, so we conclude that for one has and, consequently, .

We end this section with a discussion of the relation between the a.s. Lyapunov exponent and the moment Lyapunov exponents from [39].

Theorem 5. We have the following: where .

We give a brief sketch of the proof. In [46], it was shown that . In [39] the large deviation estimate for every there is a such that was established by means of a block argument. Thus, for , Thus, for every , which gives .

8. Solution of PAM as Interacting Diffusion

An interesting point of view regarding the solution of (20) was proposed in [47, 48], which grew out of work on the stepping stone model in [26]. In [48], Shiga and Shimizu view the entire field as a Markov process in a subset of a particular -space. In [47, 48] and the related works [32, 49], a more general underlying Markov process is used than the one with generator . However, for simplicity we will confine our discussion to the case where the operator appearing in (20) is . Take any summable collection of positive numbers that also satisfies for some positive , Then set for , The space is endowed with the product topology. We recall the following theorem of [48].

Theorem 6. Given , the SDE (20) has a unique strong solution with and strongly continuous paths in a.s. The process is a Markov process on with semigroup which satisfies for depending smoothly on only finitely many coordinates and where If then and is a Feller diffusion.

One interesting aspect of this theorem is the door it opens to applying the techniques of diffusion processes to the process . So one can ask what the invariant measures for the semigroup are. Since the components of are interacting processes, this also brings the point of view of interacting particle systems onto the scene. In the latter field, characterization of all the invariant, shift invariant, and ergodic measures is a common theme. The reader is referred to [50] or [51] for examples and many references. One example of such a result is the following due to Shiga from [47]. To describe his results we introduce some notation and relevant objects. Denote by the probability measures on where is the Borel field on . Denote by the action of the dual of on the space , that is, with , Then we can define the invariant measures for the process by We can also consider the class of probability measures that are invariant with respect to the group of shifts , Let Consider the initial configuration which means all coordinates take on the value . View this as starting the process with initial distribution . An important result of Shiga, with additions made by [32, 49], explains the asymptotic behavior of in the nonintermittent regime, .

Theorem 7. Assume and . Then (i) for each exists and for ; (ii) the set of extreme points of is exactly . If is ergodic with respect to the group of translations and , then (iii) the measure is associated.

We observe that the association of follows from the association of the field since this property is preserved by limits in distribution. As a complement, Shiga in [47] also gave the behavior of the field when so that the process is in the intermittent regime. More recently, this was extended to by Cox and Greven, [49], and then Greven and den Hollander, [32], into the intermittent region, . In this case, it turns out that the process dies out in the following sense: its law tends to the point mass on the configuration of all 's. By this we mean the element all of whose components are . We denote the point mass on by .

Theorem 8. If then for every and every if the initial law of is then

As pointed out in [32], one may take , in which case . Thus, while the system is dying out, this implies that there are very high, widely separated peaks in the field for large values of . This is the phenomenon of intermittency.

9. Intermittency and Sums over Boxes

In an effort to quantify the intermittency effect, we consider sums of the field of solutions over boxes in that grow in size as . This is inspired as well by the developments of [52], where limit laws for sums of products of exponentials of nonnegative, random variables , namely, were studied. Under a Cramér type condition, for some , a weak law of large numbers, a central limit theorem, and convergence to stable laws were established for under appropriate rates of growth of and proper centerings and scalings. Earlier efforts in this direction for time-stationary models were made in [53, 54]. The transition to this setting is provided by considering the Feynman-Kac representation of solutions of the parabolic Anderson model which resembles sums of the form with a random walk on started at under the measure . Recall that the solution of the stochastic equation is given by means of the Feynman-Kac formula Thus, the analog of (113) would be where and is the number of points in the box.

Two observations follow from our previous considerations. First, if is fixed while , then the existence of the a.s. Lyapunov exponent implies that We will refer to (118) as the quenched average. On the other hand, if remains fixed while , then the ergodic theorem implies We will refer to (119) as the annealed average. When and there is no discrepancy between the quenched and annealed averages since, as was shown above, . However, when or or when and , one has and as a result there arises a discrepancy between the quenched and annealed averages. This discrepancy is a manifestation of intermittency and if we allow to grow at an appropriate rate, we can begin to quantify how large a box must be in order to capture the large peaks in the field that are giving rise to the annealed average.

Recall that for , full intermittency of the field occurs which was defined as

It is the condition (120) which is at work behind the scenes in [55] and the main result gives some information on the spread of the high peaks as . We refer the reader to a more comprehensive exposition on intermittency in Grimmett and Kesten [56].

We now define two critical values. First, for , set Here, the derivative is with respect to . The critical values are and . Since , we have that . Full intermittency implies that . Among the results of [55] were the following.

Theorem 9. Assume and that if or that when .
Quenched Asymptotics. If , then
Transition Range. If with , then
Annealed Asymptotics. If with , then for every ,

In the range with , we have This means that at time , the box with and is large enough to capture the high peaks which yield the annealed asymptotics, while in the subexponential regime, Thus, boxes of subexponential size do not contain high peaks in the field . Notice that at while at . The curve as goes from to describes the transition of from the quenched asymptotics at to the annealed asymptotics at . Loosely speaking, if we interpret in as a temperature, there is a transition from the low temperature phase at where one “low energy state” dominates (i.e., the behavior of is dominated by one summand) to the high temperature phase at where there is averaging. Going further, for , we will see in the next section that the central limit theorem holds. This can be seen as a manifestation of complete disorder.

10. Association and Central Limit Theorem

In a previous section we discussed the property of association for the . This property also held for the limiting laws of the field which arose when the distribution of was .

A central limit theorem for sums of the elements of the field was established in [55]. This theorem took the form with provided that with . The proof followed Bernstein's method of decomposing the sum (127) into sums over disjoint, slightly separated boxes. The proof in [55] was quite technical and relied on approximation of the solution to obtain some degree of independence and a difficult large deviation result from [39]. An alternate proof of a stronger result, Theorem 11, using the ideas of Newman about associated random variables was given in [49]. The proof is simpler than the proof in [55] and yields more information about the variance of , relating it to the first eigenfunction of . The new proof also gives the joint distribution of these sums over disjoint growing boxes.

The key to the proof of a central limit theorem for associated random variables is the following inequality of [36].

Theorem 10 (Newman). Suppose have finite variance and are associated. Then, for any ,

The content of this theorem is that if the sum of the covariances can be controlled, then the distribution of associated random variables can be compared to the distribution of independent random variables. The application of Newman's ideas only requires an extension to triangular arrays of random variables and a verification of the finite susceptibility condition in Theorem 10.

Switching to to denote time, we will be concerned with sums of the variables over boxes for . The central limit result is then the following.

Theorem 11. Suppose and let be the solution of the parabolic Anderson equation (20). Define the random variables If with , then where the field is composed of random variables with

In order to verify the finite susceptibility condition, one uses (82), which gives It then follows that Then, since one gets Define which, by stationarity, does not depend on and By inverting the Fourier transform in (74), we have so that by (74), we get Then, by (137), one can conclude that which gives the variance of the limit in the theorem. Several open problems remain. The first question is whether a central limit theorem holds for a field with distribution . This would only require checking the finite susceptibility condition and applying (87). Work similar to this is found in [57]. The second question is whether an invariance principle holds. There are invariance principles for associated fields established in [58–60]. Finally, one expects convergence of properly normalized and centered sums to a stable law of index when and . Results that suggest this have appeared in [52, 61].

11. Large Deviations and Concentration Effects

In this section we examine some results on the distribution of the elements of the field . We will begin with a concentration of measure result for an element of the field . Early works on the concentration of measure phenomenon appeared in [62], for example. Recently, this subject has justifiably received a lot of attention; typical references include [63–65] and the wonderful monograph of Ledoux [66], which has an extensive list of references. We recall an observation of Talagrand from [65]. The Chernoff bound for Bernoulli random variables states that Talagrand's observation is “a random variable that smoothly depends on the influence of many independent random variables satisfies Chernoff-type bounds.” The solution of (20), , depends smoothly on many independent random variables, namely, the increments in the Brownian field . In order to make sense of this, it is natural to make use of the Malliavin calculus. Perhaps the first use of the Malliavin calculus in disordered systems appeared in [67]. In the context of solutions of (20), Rovira and Tindel, [68], used the Malliavin calculus to establish the concentration inequality.

The use of the Malliavin calculus, see [69] for more details, starts by expressing the functional, for a fixed path , in the form where depends on by the relation . Obviously, . The family is called a centered, isonormal Gaussian family and defines an abstract Wiener space as in [69]. The Malliavin derivative of a square integrable random variable defined on this space is, when it exists, a random element of that we will view as a stochastic process indexed by time and space. The Malliavin derivative is heuristically equal to and can be formally computed as such. The Malliavin derivative of is thus the element of defined by Then taking and applying the chain rule, we find the Malliavin derivative of is given by Setting , and then differentiating yields Using again the chain rule, we obtain where is the probability measure on defined by Notice then that Also important is the bound Thus Thus the functional of the Brownian field is in the space of -functionals whose first derivative is in . This allows the application of a result from [70]: if with and , for any , The application of this result, cited in [68], is the concentration inequality Since , the concentration inequality is seen to hold for as well. Thus, has sub-Gaussian tails. Now as we have seen, holds a.s. so one can conclude from (155) that
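A hedged sketch of this computation, in our own notation and assuming initial condition one, is the following; Q_x is the law of the jump walk and μ_{t,x} denotes the path measure obtained by tilting Q_x with the Feynman-Kac weight.

```latex
% Hedged sketch of the Malliavin computation (our notation, u_0 \equiv 1).
\[
  F(X) = \int_0^t dB_{X_s}(s)
  \quad\Longrightarrow\quad
  D_{s,y} F(X) = \mathbf{1}_{\{X_s = y\}}\,\mathbf{1}_{\{s \le t\}} .
\]
By the chain rule,
\[
  D_{s,y}\, u(t,x) = E^{Q_x}\big[ e^{F(X)}\, \mathbf{1}_{\{X_s = y\}} \big],
  \qquad
  D_{s,y} \log u(t,x) = \mu_{t,x}\big( X_s = y \big),
\]
where $\mu_{t,x}$ has density $e^{F(X)}/u(t,x)$ with respect to $Q_x$.  Since
$\sum_y \mu_{t,x}(X_s = y)^2 \le \sum_y \mu_{t,x}(X_s = y) = 1$,
\[
  \big\| D \log u(t,x) \big\|^2
   = \int_0^t \sum_{y} \mu_{t,x}(X_s = y)^2 \, ds \;\le\; t ,
\]
so the Malliavin derivative of $\log u(t,x)$ is bounded by $\sqrt{t}$ in norm, which is exactly
the kind of bound that the quoted result turns into Gaussian concentration of $\log u(t,x)$
around its mean on the scale $\sqrt{t}$.
```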

The concentration results are thus closely related to large deviation results established in [39, 71–73]. For example, by means of a block argument, it was shown in [39] that for every there is a such that This is not as precise as the bound in (155) which gives the upper bound . An interesting effect here is that the upper bound for is of the same order of magnitude in as the lower bound, but the probability of the event has much smaller order. The first point is verified by simply noting

The probability of lower deviations for below has a much smaller order of magnitude than the probability for deviations above . This is similar to phenomena found in first passage percolation [74] or in increasing subsequences in samples as in [75], to mention just a couple of instances. However, in contrast to the present situation, in the cited examples, the random variables involved are positive. Since the model in [74] is close to the present case, we will give a brief description of the first passage percolation model and refer the reader to [16, 56, 76, 77]. The functionals involved in the first passage percolation stand in close analogy to the functional in Section 7. In first passage percolation on , the edge set is the set of edges between adjacent vertices in the usual lattice structure. The edges are endowed with an i.i.d. random field of non-negative random variables with common distribution function . The random variable assigned to an edge is thought of as the time required to traverse the edge . One then considers “up-right” paths in the lattice, that is, sequences of alternating vertices and edges, where and are connected by the edge and the components of are greater than or equal to the components of . The passage time of is Then in [56] it was shown that the functional satisfies It was also observed there that if and only if where is the critical probability for the existence of an infinite cluster in Bernoulli percolation in . By means of a sub-additive argument, Chow and Zhang [74] showed that whereas The intuitive reason for the difference is that for the event to occur, the entire medium needs to be anomalous. However, for the event to occur, the medium only needs to be anomalous along a single path from to .
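The directed ("up-right") passage time just described can be computed for one realization of the edge weights by a simple dynamic programming recursion, taking at each vertex the better of the two incoming edges. The sketch below does this on a square grid with exponential edge weights; the weight distribution and grid size are illustrative choices.

```python
# Dynamic programming sketch (illustrative choices throughout) of the directed first
# passage time over "up-right" paths with i.i.d. nonnegative edge weights.
import numpy as np

rng = np.random.default_rng(4)

n = 400
w_right = rng.exponential(1.0, size=(n + 1, n + 1))   # weight of the edge from (i, j) to (i + 1, j)
w_up = rng.exponential(1.0, size=(n + 1, n + 1))      # weight of the edge from (i, j) to (i, j + 1)

T = np.zeros((n + 1, n + 1))                          # minimal passage time from the origin
for i in range(n + 1):
    for j in range(n + 1):
        if i == 0 and j == 0:
            continue
        candidates = []
        if i > 0:
            candidates.append(T[i - 1, j] + w_right[i - 1, j])
        if j > 0:
            candidates.append(T[i, j - 1] + w_up[i, j - 1])
        T[i, j] = min(candidates)                     # best of the two incoming edges

print("T(0, (n, n)) / n ~", float(T[n, n] / n))       # approximates the linear growth rate on the diagonal
```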

In [73], a similar phenomenon was pointed out for the functional defined in (93). The limit (95) holds as in the case of first passage percolation. There are two differences: the first is the rather minor one that the functional involves a rather than an . This has the effect of switching the roles of upper versus lower deviations. Now the event can occur if the medium is anomalous along one path, whereas can only occur if the entire medium up to time is anomalous. The second difference is that the random variables involved in the functional can be negative. However, since the negative tails are not too heavy, this difficulty can be overcome and the following result holds.

Theorem 12. For as defined above in Model 1 in dimension and , for the lower large deviations one has whereas for the upper large deviations, one has

An analogous result was established in [72] for solutions of (20). Generalizations of these results appeared in [71].

Theorem 13. For Model 1 and each for the lower large deviations, one has and for the upper large deviations, one has

12. Parabolic Anderson in

The situation becomes technically more difficult when one considers the version of (20). For the model we let be a Gaussian field of identically distributed standard Brownian motions, defined on a probability space . We can no longer assume that the motions are independent and obtain a solution to the analog of (20). The dependence will need to be spatially smooth for this, so denote the correlation function of the field by , Notice that we have the symmetry . We will assume that is continuously differentiable with first derivative Hölder continuous of order for some . The normalization is forced by the assumption that is a standard Brownian motion. This gives the important approximation It is necessary to assume that in order to avoid the degenerate case .

The continuous space version of (20) in integrated form is thus where denotes the Laplacian in .

Equation (170) is called the parabolic Anderson model in . As noted in [78], unless the function is twice continuously differentiable, (170) will not have a solution, in that any prospective solution would lack a well-defined spatial Laplacian. Accordingly, the equation was reformulated as the integral equation for the Gaussian kernel corresponding to speed Brownian motion. This equation has the same solution as (170) in the case of a sufficiently smooth field . In [78], existence of solutions and other results were established. In particular, the Feynman-Kac representation remains valid for the following solution: Here the expectation is taken with respect to a speed -dimensional Brownian motion, that is, the diffusion with generator .

The principal result of [79] is the existence of the a.s. Lyapunov exponent for (170). The subadditivity results which worked so readily in the discrete spatial setting do not easily carry over to the present case. This technical difficulty can be overcome by a probabilistic version of a parabolic Harnack inequality.

Theorem 14. There exists a positive constant such that for any nonnegative bounded function on with on a set of strictly positive Lebesgue measure in , the solution of (170) with satisfies for any ,

The asymptotics are of a different order of magnitude in than in . In [79] it was shown that However, recently, Rael [80] has shown that is the correct order of magnitude, It is an open problem to show that for some constant ,

13. Anderson Polymer Models

The field can be viewed as a random medium through which the process will evolve up to time . The influence of the medium on the process is obtained by a change of measure. The resulting measure on is viewed as a measure on polymers. The Anderson polymer model is the Gibbs measure on defined by for bounded measurable , where is the partition function. This model has received a lot of attention in recent years and a partial list of references would include [33, 68, 81–95].

By the Feynman-Kac formula, is the solution of the time-dependent parabolic Anderson equation (or stochastic heat equation) The functions and thus have the same distribution so one can make use of the properties of and apply them to . In particular, as in the case of the a.s. Lyapunov exponent, the limit exists a.s. Brownian scaling gives that Thus, by (96), it follows that

We will use the notation and when .

Similar discrete models have received considerable attention in recent years. Earlier works [87, 96–98] and many others focused on a discrete model. In this model, the path space is and is the coordinate map. The process is a simple random walk started at in . The random medium consists of i.i.d. random variables, defined on a probability space . A typical choice would be standard centered Gaussians so that the log moment generating function defined by is defined for all . Then the discrete polymer measure on is defined by where the partition function . The questions now focus on the behavior of the paths with respect to the measure . It was observed by Bolthausen [96] that is a positive martingale with respect to the field Moreover, for , the event is measurable with respect to the tail field This implies or . The case is referred to as weak disorder and is called strong disorder. The behavior of under is reflected in the behavior of . For example, denote the product measure of with itself by . With this notation, define the overlap by Then, a result of Comets-Shiga-Yoshida says that and in the case of weak disorder there exist positive constants such that for all sufficiently large , The outcome is called weak localization.
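For one realization of the environment, the normalized partition function of the discrete polymer, its endpoint distribution, and the endpoint overlap of two independent polymers sharing that environment can all be computed exactly by a transfer-matrix recursion over the walk's position. The sketch below does this in dimension one with standard Gaussian weights; the inverse temperature, the polymer length, and the Gaussian choice of environment are illustrative assumptions.

```python
# Transfer-matrix sketch (illustrative assumptions throughout) of the discrete directed polymer:
# W_n = E[ exp( beta * sum_i omega(i, S_i) - n * lambda(beta) ) ] over simple random walk paths,
# with lambda(beta) = beta^2 / 2 for standard Gaussian omega, so that E[W_n] = 1 over environments.
import numpy as np

rng = np.random.default_rng(5)

beta, n = 0.8, 200
lam = beta ** 2 / 2.0                            # log moment generating function of a standard Gaussian
omega = rng.normal(size=(n + 1, 2 * n + 1))      # environment omega(i, x) for x = -n, ..., n

f = np.zeros(2 * n + 1)                          # f[x] = E[ exp(beta * H_i - i * lam); S_i = x ]
f[n] = 1.0                                       # the walk starts at the origin (array index n)
for i in range(1, n + 1):
    f = 0.5 * (np.roll(f, 1) + np.roll(f, -1))   # one step of the simple random walk
    f *= np.exp(beta * omega[i] - lam)           # reweight by the environment seen at time i

W_n = float(f.sum())                             # normalised partition function for this environment
endpoint = f / W_n                               # endpoint distribution under the polymer measure
overlap = float(np.sum(endpoint ** 2))           # chance two independent polymers end at the same site

print(f"W_n = {W_n:.4f}   endpoint overlap = {overlap:.4f}")
```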

Returning to the continuous model, is a positive martingale with respect to which therefore converges, . Again the weak disorder-strong disorder distinction is defined by the dichotomy or . Denote the product measure of with itself by . In [99], two versions of the overlap were considered as The quantity measures, for two independent samples, and , sharing the same environment, the proportion of time spent together. The version of the overlap measures the amount of time up to that the endpoints of independent samples drawn with respect to the measure agree. By taking the logarithmic Malliavin derivative of the partition function with respect to , one arrives naturally at . By taking the logarithmic Itô derivative of the partition function, one arrives at . The overlap occurs in statistical mechanics in a natural way and of course the present model is essentially similar to standard statistical mechanical models in that it involves a Gibbs measure. In statistical mechanics, counterparts of these overlaps can be found for the Sherrington-Kirkpatrick model and for other disordered systems. Coming via integration by parts, the first overlap has been the most successful tool in the last decade [100–103] for studying the low temperature regime.

Now in [68, 84, 87], it was demonstrated that strong disorder and weak localization are equivalent as

In [85], the stronger result was established as

There are several related results on localization. Strong concentration for the directed polymer in a random environment for the parabolic Anderson model (with a space-dependent potential only) with a Pareto potential was established in [17]. The main difference is that there the favourable sites in the environment have a simple characterization in terms of the potential. In the discrete time case with heavy tailed potentials, see [82] for similar conclusions. When the tails are less heavy, the favourite corridors can no longer be characterized by maxima of the potential, and they are no longer explicit, but complete site localization can still be proved [95]. Note that in the discrete case, only a little is known about the random geodesics [104] in first passage percolation, which are the zero-temperature favourite paths. For the solution of the KPZ equation in one dimension, the distribution of the favourite end-point has been recently computed in [93], and it is the location of the maximum of an Airy process minus a parabola.

In [99], the behavior of the overlaps and was quantified. This used, among other things, the relation (182) and the identity which is obtained by the Malliavin calculus and integration by parts. Observe from (194) that

Theorem 15. (i) For all and the limit exists almost surely, is nonrandom, and is equal to
(ii) The limit exists for all and all except for an at most countable set of values of , and
(iii) As , we have and
(iv)

(v) Weak versus Strong Disorder. From (196) and (197) it follows that for some critical value depending only on the dimension. This is the weak disorder regime.

In dimension , it is known by the second moment method [85] that converges to a positive limit, so that the equality holds in the left member of (202) for small. Hence, in that case.

There are many interesting open problems on Anderson polymers. (i) The favorite point is defined in [99] as The proper scaling for with respect to is unknown. (ii) The favorite path was defined in [99] by and little is known about the path . (iii) The proper scaling for with respect to is also unknown. (iv) In dimension , it is expected, in view of results for discrete models [90], as mentioned above, that , but this remains open.

Acknowledgment

M. Cranston was supported by a grant from NSF, DMS 0854940.