Review Article | Open Access

# Properties of the Parabolic Anderson Model and the Anderson Polymer Model

**Academic Editor:** S. Sagitov

#### Abstract

In this article we examine some properties of the solutions of the parabolic Anderson model. In particular, we discuss intermittency of the field of solutions of this random partial differential equation: when it occurs, and what the field looks like when intermittency does not hold. We also explore the behavior of a polymer model defined by a Gibbs measure based on solutions of the parabolic Anderson equation.

#### 1. Introduction

It has been twenty years since the publication of the seminal memoir “Parabolic Anderson Problem and Intermittency” by Carmona and Molchanov, which has inspired an enormous amount of research in the intervening years. In this paper we hope to give an account of what is now known about the behavior of solutions of the parabolic Anderson equation, as well as the behavior of typical paths under the Anderson polymer measure. Perhaps the initial, and still most compelling, reason for studying the parabolic Anderson model is physical: in the three-dimensional case it provides a model for the growth of magnetic fields in young stars. In addition, it has an interpretation as a population growth model, and since the work of [1] it has also provided a model for a polymer in a random medium. Furthermore, it is a part of the theory of stochastic partial differential equations. Besides its interest as what can be described as a canonical object, in the sense that Brownian motion is a canonical object, its interest also derives from its relations to many other important models, for example, the stepping stone model [2], catalytic branching [3], the super random walk, the Burgers equation [4], and the KPZ equation [4]. The original motivation of Anderson concerned the question of whether there are bound states for electrons in crystals with impurities. The crystal structure is taken to be the lattice $\mathbb{Z}^d$ and the impurities are modeled by means of an i.i.d. random field $\{V(x)\}_{x \in \mathbb{Z}^d}$. The phenomenon of localization can be expressed as the existence of eigenfunctions of the Hamiltonian $\kappa\Delta + V$.

The equation satisfied by a magnetic field generated by a turbulent flow leads naturally to an analogous parabolic equation with a time-varying random field, as opposed to the time-stationary field which arises in the original localization question. The difference is that the random medium changes rapidly in the case of the magnetic field generated by turbulent flow, whereas the impurities in the localization problem can be considered to be unchanging in time. In the latter case, the fluctuations in the medium are slow compared to the phenomenon of interest, the capture of electrons.

The magnetic field in a star is generated by the turbulent flow of electrical charges. Turbulent flows are modeled by means of a randomly evolving velocity field; see, for example, [5] or [6] for a canonical example. Let $v(t,x)$ denote such a velocity field on $\mathbb{R}^3$. Typically, this field is incompressible, Markov in time, and Gaussian, together with other spectral properties. The magnetic field generated by charges carried by the Lagrangian flow of $v$ is incompressible and evolves according to an equation of the form (1), where in this case $\Delta$ is the standard Laplacian on $\mathbb{R}^3$. This equation has been studied in [7–11], to name but a few references. We mention now some of the results of [9] on a tractable version of the model, obtained when $\mathbb{R}^3$ is replaced by $\mathbb{Z}^3$ and the time correlation in the velocity field is taken to zero. In this version of the model, the velocity field is replaced by a field of matrix Wiener processes on some probability space. Then (1) takes the form (3), where the discrete Laplacian $\Delta$ is given by $\Delta f(x) = \sum_{|y-x|=1}(f(y)-f(x))$ and the noise enters as a Stratonovich differential. Converting to the Itô differential instead leads to an equivalent Itô form of the equation.

The average (over the medium) of the magnetic field is easily seen to satisfy the corresponding averaged equation, which implies an upper bound on the first moment Lyapunov exponent. In [9], by simply using Itô's formula, an equation was derived for the average (over the medium) magnetic field energy. Under an assumption of homogeneity on the initial data, the energy is independent of the spatial variable for all time $t$. In the physically relevant dimension $d = 3$, taking the Fourier transform of the resulting eigenvalue equation reveals that the relevant operator possesses a positive eigenvalue if and only if an explicit integral condition over the three-dimensional torus holds; when it does, the energy grows exponentially while the first moment does not keep pace. This last inequality is the definition of full intermittency: the exponential growth rate of the second moment strictly exceeds twice that of the first moment. As a consequence, the field has widely separated large peaks. This explains the well-known phenomenon of sunspots: they are widely separated sites of high magnetic field energy. The main question of interest in astrophysics is whether the magnitude of the field grows exponentially a.s.; in other words, is the a.s. Lyapunov exponent positive? A further question regards the physically relevant asymptotic behavior of the a.s. Lyapunov exponent as $\kappa \to 0$, since $\kappa$ is the inverse Reynolds number, which in this situation is very small. Another interesting question is whether the suitably normalized field has a limiting distribution. These questions have an affirmative answer in the scalar case and remain difficult open problems in the multidimensional model just discussed.

#### 2. Parabolic Anderson Model

The commonly studied parabolic Anderson equation is a scalar version of the magnetic field equation (1). Most progress has occurred in the lattice setting $\mathbb{Z}^d$, and we will treat this case first. The velocity field may be replaced by a random environment that is either a stationary-in-time random field $\{V(x)\}_{x\in\mathbb{Z}^d}$ or an evolving random field $\{W_x(t)\}_{x\in\mathbb{Z}^d}$. Typically, the variables in both cases are assumed to be i.i.d. The stationary-in-time field can be thought of as a model where the phenomenon of interest evolves much more rapidly than the environment. The nonstationary case models a phenomenon whose evolution is on a time scale comparable to that of the random environment. An enormous literature has developed in the time-stationary case; a partial random sample of works on this topic includes [12–17]. In this paper we will only discuss the nonstationary case. When considering the discrete model, that is, on $\mathbb{Z}^d$, we take the operator to be the discrete Laplacian as mentioned above, whereas it is the ordinary Laplacian when the model is set in $\mathbb{R}^d$. The parameter $\kappa$ in the model is called the diffusivity. The parabolic Anderson model is defined as a parabolic partial differential equation; a canonical version of this model has the random environment provided by a white noise potential. In the $\mathbb{Z}^d$ case, one takes $\{W_x\}_{x\in\mathbb{Z}^d}$ to be standard, one-dimensional Brownian motions defined on some probability space $(\Omega, \mathcal{F}, P)$. This field is uncorrelated in space and Brownian in time; that is, $\langle W_x(t)W_y(s)\rangle = \delta(x,y)\,(t\wedge s)$, where $\langle\cdot\rangle$ denotes expectation with respect to $P$. The differential form of the parabolic Anderson equation is then
$$du(t,x) = \kappa\Delta u(t,x)\,dt + u(t,x)\circ dW_x(t).$$
The notation $\circ\,dW_x(t)$ indicates the Stratonovich differential (see [18] for a description of Stratonovich differentials), and this is preferred over the Itô differential because of the simplicity of the Feynman-Kac representation which will appear shortly.
The equivalent integral formulation is obtained by integrating in time. The differential formulation can be expressed via the Itô differential as
$$du(t,x) = \kappa\Delta u(t,x)\,dt + \tfrac12 u(t,x)\,dt + u(t,x)\,dW_x(t).$$
At times it will be convenient to discuss the Itô solution $\tilde u$ defined by
$$d\tilde u(t,x) = \kappa\Delta \tilde u(t,x)\,dt + \tilde u(t,x)\,dW_x(t).$$
The relation between the two is that $u(t,x) = e^{t/2}\,\tilde u(t,x)$. Typically, it is assumed that the initial condition $u_0$ is nonnegative and bounded from above, which guarantees existence and uniqueness of the solution. The positivity of $u_0$ assures that $u(t,x) > 0$ for all $t > 0$. Fundamental information on this equation, including existence and uniqueness results, and its applications are contained in [19]. The field of solutions exhibits interesting behavior as revealed by the growth properties of its moments, which will be discussed at length below. At the risk of stating the obvious, we stress that the random variables in the field are dependent. The correlation structure of this field is examined in detail in Theorem 2 later in this paper.
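As a concrete illustration, the Itô form just described can be simulated directly. The following sketch (the periodic one-dimensional lattice, all parameter values, and variable names are our own choices, not from the text) applies a plain Euler-Maruyama step to the Itô equation $d\tilde u = \kappa\Delta\tilde u\,dt + \tilde u\,dW_x(t)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch (our parameters): Euler-Maruyama scheme for the Ito
# form of the discrete parabolic Anderson equation on a periodic 1-D lattice,
#   d u(t,x) = kappa * (Delta u)(t,x) dt + u(t,x) dW_x(t),
# with Delta the discrete Laplacian and independent Brownian motions W_x.
N = 64        # number of lattice sites (periodic boundary)
kappa = 0.5   # diffusivity
dt = 1e-3
T = 1.0

u = np.ones(N)                                     # initial condition u_0 = 1
for _ in range(int(T / dt)):
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)   # discrete Laplacian
    dW = rng.normal(0.0, np.sqrt(dt), size=N)      # independent BM increments
    u = u + kappa * lap * dt + u * dW

print(u.mean())  # spatial average of the Ito solution at time T
```

Since the noise enters multiplicatively, the simulated field stays positive, and averaging many independent runs of `u.mean()` would approach 1, reflecting that the Itô solution has constant expectation.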

The solution of (20) has a Feynman-Kac representation as an average over a path space. This is done by means of a family of measures $\{P_x\}_{x\in\mathbb{Z}^d}$ on the space $D([0,\infty);\mathbb{Z}^d)$ of right-continuous paths from $[0,\infty)$ to $\mathbb{Z}^d$ possessing left limits and having a finite number of jumps on any compact subset of $[0,\infty)$. The spaces $D([0,t];\mathbb{Z}^d)$ are defined analogously. We let $X$ be the canonical process, that is, $X_s(\omega) = \omega(s)$. The measures $P_x$ are taken to be the ones that make $X$ the pure jump Markov process on $\mathbb{Z}^d$ with generator $\kappa\Delta$. Under $P_x$, this process starts at $x$ and waits at its current position for a random amount of time which is exponentially distributed with parameter $2d\kappa$; it then selects, using the uniform measure, one of the $2d$ neighbors of its current position, jumps there, and proceeds afresh as if starting from time $0$ at the new position.

The solution to (20) can then be expressed as
$$u(t,x) = E_x\!\left[\exp\!\left(\int_0^t W_{X_{t-s}}(ds)\right)u_0(X_t)\right].$$
Unless explicitly mentioned otherwise, we will take $u_0 \equiv 1$, so that
$$u(t,x) = E_x\!\left[\exp\!\left(\int_0^t W_{X_{t-s}}(ds)\right)\right].$$
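The path measure $P_x$ underlying this representation is easy to sample. A minimal sketch (function and parameter names are ours) of the rate-$2d\kappa$ pure jump process: wait an exponential time, then jump to a uniformly chosen nearest neighbor.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sketch (our names): simulate the pure-jump Markov process X in
# the Feynman-Kac representation.  From its current site, X waits an
# Exp(2*d*kappa)-distributed time and then jumps to one of its 2d nearest
# neighbours, chosen uniformly.
def sample_path(t, kappa, d, rng):
    """Return jump times and positions of X on [0, t], started at the origin."""
    x = np.zeros(d, dtype=int)
    times, sites = [0.0], [x.copy()]
    s = rng.exponential(1.0 / (2 * d * kappa))   # first waiting time
    while s < t:
        i = rng.integers(d)                      # coordinate to move
        x[i] += rng.choice([-1, 1])              # uniform nearest neighbour
        times.append(s)
        sites.append(x.copy())
        s += rng.exponential(1.0 / (2 * d * kappa))
    return times, sites

times, sites = sample_path(10.0, kappa=0.5, d=3, rng=rng)
print(len(times) - 1)   # number of jumps in [0, 10]; the mean is 2*d*kappa*t
```

Averaging the exponential functional of such paths over many samples (and over realizations of the field) is exactly the Monte Carlo counterpart of the Feynman-Kac formula above.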

#### 3. Relations to Other Processes and Equations

The parabolic Anderson equation is closely related to equations for many other models. Probably the best-known connection to another equation is provided by the Hopf-Cole transformation. If we take a solution $u$ of a parabolic Anderson equation with a spatially smooth random force term $F$ and set $h = \log u$ and $v = -2\kappa\nabla h$, then $v$ satisfies the Burgers equation. Similarly, setting $h = \log u$, where $u$ solves (23), yields a solution of the KPZ equation; see, for example, [4].
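For the reader's convenience, here is the standard Hopf-Cole computation in the smooth setting (our normalization; conventions differ by constants across the literature):

```latex
% If  \partial_t u = \kappa\,\Delta u + F\,u  with smooth forcing F, set
% h = \log u.  Using  \Delta u / u = \Delta h + |\nabla h|^2,
\partial_t h
  = \frac{\partial_t u}{u}
  = \kappa\,\frac{\Delta u}{u} + F
  = \kappa\,\Delta h + \kappa\,|\nabla h|^2 + F,
% which is a KPZ equation.  Taking  v = -2\kappa\,\nabla h  then gives the
% Burgers equation with forcing:
\partial_t v + (v\cdot\nabla)\,v = \kappa\,\Delta v - 2\kappa\,\nabla F .
```

The factor $2\kappa$ in the definition of $v$ is what makes the nonlinearity appear with unit coefficient; other choices simply rescale it.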

Equation (20) may be cast as a particular case of a pair of equations driven by two jointly Gaussian, standard Brownian fields, which also relates it to the mutually catalytic branching and stepping stone models. Assume the correlations between the fields are controlled by a parameter $\varrho \in [-1,1]$.

An interesting observation of Etheridge and Fleischmann [20] neatly ties the symbiotic branching and stepping stone models together with the parabolic Anderson model by taking different values for the parameter $\varrho$. Consider the pair of equations (26). When $\varrho = 0$ in (26), this is known as the mutually catalytic branching model for two interacting populations and has been studied in [3, 20–25], to name just a small random sample.

If $\varrho = -1$ in (26) and $u_0 + v_0 \equiv 1$, then $u + v \equiv 1$ and $u$ solves the resulting closed equation. This is known as the stepping stone model from population genetics and has been the subject of the works [2, 16, 26], among others.

Finally, when $\varrho = 1$ in (26) and $u_0 = v_0$, one gets $u = v$, and $u$ is the solution of (20).

Other versions of noise have been considered as well, and this can lead to substantial technical difficulties. For example, the recent works [27–31] replace the Brownian field by catalysts coming from interacting particle systems such as the voter model or exclusion processes. Among the problems unresolved here is the existence of the a.s. Lyapunov exponent. The subadditive argument which worked so well for the white noise environment fails for these models.

#### 4. Moment Lyapunov Exponents

One of the most interesting properties of the solutions of (20) is that of intermittency. Intermittency is defined in terms of the moment Lyapunov exponents. These are the limits, for $p = 1, 2, \ldots$, of
$$\gamma(p) = \lim_{t\to\infty}\frac{1}{t}\log\big\langle u(t,x)^p\big\rangle.$$
The existence of these limits was proved in [19]. For $p = 1$ one uses (116) to compute $\gamma(1)$. This is easily done with the observation that for any fixed path $X$, the random variable $\int_0^t W_{X_{t-s}}(ds)$ is Gaussian with mean $0$ and variance $t$. Thus, by Fubini's theorem,
$$\langle u(t,x)\rangle = E_x\!\left[\Big\langle \exp\!\Big(\int_0^t W_{X_{t-s}}(ds)\Big)\Big\rangle\right] = e^{t/2},$$
so that $\gamma(1) = 1/2$. An interesting quantity, the overlap, arises when computing $\gamma(2)$. Here we will use $E_x \otimes E_x$ to denote expectation under the product measure of $P_x$ with itself. Then we use the observation that for two fixed paths, $X^1$ and $X^2$, the variables $\int_0^t W_{X^1_{t-s}}(ds)$ and $\int_0^t W_{X^2_{t-s}}(ds)$ are jointly Gaussian with covariance $\int_0^t \mathbf{1}_{\{X^1_s = X^2_s\}}\,ds$. Thus, their sum is Gaussian with mean $0$ and variance $2t + 2\int_0^t \mathbf{1}_{\{X^1_s = X^2_s\}}\,ds$, and by (116),
$$\big\langle u(t,x)^2\big\rangle = e^{t}\,E_x\otimes E_x\!\left[\exp\!\left(\int_0^t \mathbf{1}_{\{X^1_s = X^2_s\}}\,ds\right)\right].$$
This implies
$$\gamma(2) = 1 + \lim_{t\to\infty}\frac{1}{t}\log E_x\otimes E_x\!\left[\exp\!\left(\int_0^t \mathbf{1}_{\{X^1_s = X^2_s\}}\,ds\right)\right].$$
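The overlap formula lends itself to a simple Monte Carlo check. The sketch below (all names and parameter values are our own) estimates $E_x\otimes E_x[\exp(\int_0^t \mathbf{1}_{\{X^1_s=X^2_s\}}\,ds)]$ by simulating the difference walk, a rate-$4d\kappa$ simple random walk, and recording its occupation time of the origin:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative Monte Carlo sketch (our parameters) of the overlap identity
#   <u(t,x)^2> = e^t * E[ exp( int_0^t 1{X^1_s = X^2_s} ds ) ],
# where X^1 - X^2 is a rate-4*d*kappa simple random walk started at 0.
def overlap_time(t, kappa, d, rng):
    """Time in [0, t] the difference walk spends at the origin."""
    rate = 4 * d * kappa
    pos = np.zeros(d, dtype=int)
    now, at_origin = 0.0, 0.0
    while True:
        hold = rng.exponential(1.0 / rate)
        if now + hold >= t:
            if not pos.any():
                at_origin += t - now
            return at_origin
        if not pos.any():
            at_origin += hold
        now += hold
        i = rng.integers(d)
        pos[i] += rng.choice([-1, 1])

t, kappa, d, n = 2.0, 0.5, 1, 2000
est = np.mean([np.exp(overlap_time(t, kappa, d, rng)) for _ in range(n)])
print(est)   # estimate of e^{-t} <u(t,x)^2>; lies between 1 and e^t
```

Since the occupation time is between $0$ and $t$, the estimate is pinned between $1$ and $e^t$; its excess over $1$ is the contribution of the overlap, which drives the gap $\gamma(2) - 2\gamma(1)$.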

For general $p$, Hölder's inequality implies $2\gamma(p) \le \gamma(p+1) + \gamma(p-1)$, so $\gamma$ is convex. Full intermittency is then the property that $\gamma$ is strictly convex, that is,
$$\gamma(p+1) - \gamma(p) > \gamma(p) - \gamma(p-1)\quad\text{for all } p \ge 1.$$
It is by now classical, and was proven in [19], that in dimensions $d = 1, 2$ full intermittency holds for all $\kappa > 0$, but in dimensions $d \ge 3$ full intermittency only holds for $\kappa < \kappa_c(d)$, where $\kappa_c(d)$ is a dimension-dependent constant. This was done in [19] by noting that in dimensions $1$ and $2$ the operator $2\kappa\Delta + \delta_0$ has a positive eigenvalue for any $\kappa > 0$. By contrast, in dimensions $d \ge 3$ there is a positive constant $\kappa_c(d)$ such that $2\kappa\Delta + \delta_0$ has a positive eigenvalue only for $\kappa < \kappa_c(d)$. When there is a positive eigenvalue $\lambda(\kappa)$, one can conclude that the overlap expectation, and hence the second moment, grows exponentially. When the spectrum of $2\kappa\Delta + \delta_0$ has no positive eigenvalue, that is, in dimension $d \ge 3$ with $\kappa \ge \kappa_c(d)$, the overlap expectation stays bounded. Using this with (120) we have the following consequence due to Carmona and Molchanov [19].

Theorem 1. *Full intermittency holds in dimensions $d = 1$ and $d = 2$ for any $\kappa > 0$. For each $d \ge 3$, there is a positive $\kappa_c(d)$ such that for $\kappa < \kappa_c(d)$, full intermittency holds.*

Below we will give a probabilistic proof of this result which identifies the value $\kappa_c(d)$. In addition, in [19] it was shown that there is a sequence of critical values of $\kappa$, one for each moment, which was proved to be strictly increasing in [32]. This has been refined, extended, and improved in the recent work of Greven and den Hollander [32].

The physical phenomenon of intermittency is the property that a random field possesses widely separated high peaks. The best-known field exhibiting this property is the magnetic field energy in a star. In our sun, this exhibits itself as sunspots, where most of the magnetic field energy is concentrated, thereby lowering the temperature and causing the darkening which appears as a spot. Intermittency properties of the field were established in [19] and will be discussed below.

#### 5. Itô Solution and Probabilistic Proof of Intermittency

The Itô solution, $\tilde u$, is the solution obtained when the Itô differential is used in (20) instead of the Stratonovich differential; it satisfies $d\tilde u(t,x) = \kappa\Delta\tilde u(t,x)\,dt + \tilde u(t,x)\,dW_x(t)$. The relation between $u$ and $\tilde u$ is simply $u(t,x) = e^{t/2}\,\tilde u(t,x)$. Then, defining $\tilde\gamma(p) = \lim_{t\to\infty}\frac{1}{t}\log\langle\tilde u(t,x)^p\rangle$, it follows that $\gamma(p) = \tilde\gamma(p) + p/2$; since $\langle \tilde u(t,x)\rangle = 1$ for all $t$, we have $\tilde\gamma(1) = 0$, and so full intermittency holds if and only if $\tilde\gamma(2) > 0$.
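The relation between the two solutions follows from the standard Stratonovich-to-Itô conversion; schematically, in our notation:

```latex
% The Stratonovich and Ito differentials differ by half the quadratic
% covariation of u(.,x) with W_x.  Since the martingale part of du is
% u dW_x, we have  d<u(.,x), W_x>_t = u(t,x) dt,  so
u(t,x)\circ dW_x(t)
  = u(t,x)\,dW_x(t) + \tfrac12\,d\langle u(\cdot,x),\,W_x\rangle_t
  = u(t,x)\,dW_x(t) + \tfrac12\,u(t,x)\,dt .
% Thus u solves the Ito-driven equation with an extra drift u/2, whence
u(t,x) = e^{t/2}\,\tilde u(t,x),
\qquad
\gamma(p) = \tilde\gamma(p) + \tfrac{p}{2}.
```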

We show that in dimensions greater than $2$, the Itô solution has bounded second moment for $\kappa$ large and that the second moment grows exponentially for $\kappa$ small. This was shown in [19] using the spectral considerations outlined in the previous section, so here we will give a probabilistic argument which was alluded to in [19] but does not seem to have appeared anywhere. First, recall that the jump rate of $X$ is $2d\kappa$. Thus, if $X^2$ is an independent copy of $X^1$, the jump rate of $X^1 - X^2$ is $4d\kappa$. Recalling that $u(t,x) = e^{t/2}\tilde u(t,x)$, we have
$$\big\langle \tilde u(t,x)^2\big\rangle = E_x\otimes E_x\!\left[\exp\!\left(\int_0^t \mathbf{1}_{\{X^1_s = X^2_s\}}\,ds\right)\right].$$

Now we introduce some helpful notation. Let $Y = X^1 - X^2$, a rate-$4d\kappa$ simple symmetric random walk started at the origin, and let the stopping times in question be the successive visit times of $Y$ to the origin. If the underlying chain is recurrent, then all of these stopping times are finite. If the chain is transient, then only a finite number of these times is finite. For $n \ge 0$, let $D_n$ be the duration of the $n$th sojourn at the origin and $E_n$ the duration of the $n$th excursion from the origin; each $D_n$ is exponentially distributed with parameter $4d\kappa$. In the recurrent case, the random variables $D_n$ and $E_n$ are all finite and independent. They are also independent in the transient case “up to the time they all become infinite.” The skeletal random walk of $Y$ can now be defined using these stopping times: this is the discrete-time Markov chain on $\mathbb{Z}^d$ which keeps track of the sites visited by $Y$. Observe that for $d \ge 3$ the number $N$ of visits of $Y$ to the origin is a geometric random variable with parameter $q_d$, the probability of no return to the origin, which is independent of $\kappa$; for $d \ge 3$, $q_d > 0$, and it is well known how to compute it; see, for example, [2]. Since the sojourn times $D_n$ are independent exponentials, an easy argument shows that the total time spent at the origin,
$$L = \sum_{n=0}^{N-1} D_n,$$
is an exponentially distributed random variable with parameter $4d\kappa q_d$. If $4d\kappa q_d > 1$, one has
$$E\big[e^{L}\big] = \frac{4d\kappa q_d}{4d\kappa q_d - 1} < \infty,$$
while if $4d\kappa q_d \le 1$, one has $E[e^L] = \infty$. An important consequence of this little computation is the lack of intermittency for $\kappa > 1/(4dq_d)$. Indeed, since the overlap up to time $t$ is at most $L$,
$$\big\langle \tilde u(t,x)^2\big\rangle = E\!\left[\exp\!\left(\int_0^t \mathbf{1}_{\{Y_s = 0\}}\,ds\right)\right] \le E\big[e^{L}\big] < \infty.$$
Consequently, $\tilde\gamma(2) = 0$ for $\kappa > 1/(4dq_d)$, and there is no intermittency for large $\kappa$ in dimensions $d \ge 3$.
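The claim that the total occupation time of the origin is exponential with parameter proportional to the escape probability can be checked numerically. A rough Monte Carlo sketch for a transient case (the parameter values, the finite time horizon, and the approximate escape probability $q_3 \approx 0.66$ for the three-dimensional walk are our own inputs):

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative check (our names): for d >= 3 the total time L that a
# rate-2*d*kappa walk spends at the origin should be exponential with
# parameter 2*d*kappa*q_d, where q_d is the escape probability
# (q_3 is roughly 0.66), so E[L] should be near 1/(2*d*kappa*q_3).
def local_time_at_origin(horizon, kappa, d, rng):
    rate = 2 * d * kappa
    pos = np.zeros(d, dtype=int)
    now, L = 0.0, 0.0
    while now < horizon:
        hold = rng.exponential(1.0 / rate)
        if not pos.any():                       # currently at the origin
            L += min(hold, horizon - now)
        now += hold
        i = rng.integers(d)
        pos[i] += rng.choice([-1, 1])
    return L

kappa, d = 0.5, 3
samples = [local_time_at_origin(50.0, kappa, d, rng) for _ in range(2000)]
print(np.mean(samples))   # should be near 1/(2*d*kappa*q_3), about 0.5
```

The finite horizon truncates the (transient) walk, but since almost all of the local time accrues early, the sample mean lands close to the predicted value.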

For $d = 1$ or $d = 2$, however, we proceed as follows to show that there is intermittency for all $\kappa > 0$. It will be useful to make a change of variables and to set up some more notation. We use Theorem 2, and it then suffices to show $\tilde\gamma(2) > 0$. For simplicity, we keep the notation for the sojourn and excursion durations introduced above.

Notice that the random variables are exponentially distributed with parameter with respect to .

Recall for $d = 1$ or $d = 2$ that the walk is recurrent, so all the sojourn and excursion times are finite. Choosing the number of returns sufficiently large, the central limit theorem provides a lower bound, valid for all sufficiently large $t$, on the probability that the walk accumulates an occupation time at the origin of order $t$. Hence the overlap expectation grows exponentially, and consequently $\tilde\gamma(2) > 0$. As a result, intermittency holds for all $\kappa > 0$ in dimensions $1$ and $2$.

We return now to $d \ge 3$, so that $q_d > 0$, and show that intermittency holds for small $\kappa$. Again, it suffices to show $\tilde\gamma(2) > 0$. Recall that the sojourn times are exponentially distributed with parameter $4d\kappa$. Standard large deviation estimates (see, e.g., [33, Theorem 9.3]) control the probability that a fixed number of sojourns are all long. As in the previous paragraph, for every $\epsilon > 0$ one can arrange that the occupation time of the origin up to time $t$ exceeds a positive multiple of $t$ with probability at least $e^{-\epsilon t}$ for all sufficiently large $t$. If $\kappa$ is small, the exponential gain from this occupation time beats the probability cost in the overlap expectation. Therefore $\tilde\gamma(2) > 0$ for every sufficiently small $\kappa$, and intermittency holds in dimension $d \ge 3$ for all $\kappa$ sufficiently small.

#### 6. Covariance Structure and Association

Now we will examine the covariance structure of the field in the intermittent regime, that is, when $\kappa$ is small enough that $\tilde\gamma(2) > 0$. Recall that this range of $\kappa$ corresponds exactly to the range for which the operator $2\kappa\Delta + \delta_0$ has a positive eigenvalue. We will also establish that this field is associated. The mixed second moments are significant in understanding the structure of the field and are given by the overlap formula. The asymptotics of the relevant two-point function can be evaluated as follows. Define it in terms of the difference walk and observe that it obeys a scaling relation, since $X^1 - X^2$ is a rate-$4d\kappa$ simple symmetric random walk on $\mathbb{Z}^d$. The resulting function arises as the partition function of a homopolymer [34] and, by the Feynman-Kac formula, solves the associated eigenvalue problem. The spectrum of the operator on the right-hand side is as follows: for $d \ge 3$ there is a dimension-dependent threshold below which $\lambda(\kappa)$ is a simple eigenvalue, and the remaining portion of the spectrum is purely absolutely continuous. In fact, one now sees from the section on intermittency that $\tilde\gamma(2) = \lambda(\kappa)$. We denote by $\psi$ the eigenfunction corresponding to $\lambda(\kappa)$ and note that it is given, see [34], in terms of the symbol (Fourier transform) of $\Delta$ on the $d$-dimensional torus. The representation (74) can be used to establish the exponential decay of $\psi$: there is a positive constant such that $\psi$ decays exponentially in the spatial variable. By the spectral theorem, letting $E_\lambda$ be the resolution of the identity for the operator, one has the spectral representation (76). The following result was used in [19] to prove a central limit theorem for sums of the form which will be described later in this paper.

Theorem 2. *For and any or for and ,
**
where**
In addition, when there is a positive eigenvalue for , this eigenfunction satisfies
**
where depends on and .*

This implies exponential decay in the spatial variable for the covariance of solutions of (20). Recalling the equivalence in law stated in (69) it follows from (78) that

Corollary 3. *For and any or for and ,
**
Consequently,
*

Note that (84) gives a quantitative expression for the intermittency condition $\tilde\gamma(2) > 0$. We note that the covariance decays as the spatial separation grows; its rate of decay depends on the dimension.

An important property of the field is that its random variables are associated. A collection of random variables $\{Y_i\}_{i\in I}$, where $I$ is a countable set, is said to be associated if for any coordinate-wise increasing functions $f, g$ and any finite subcollection $Y_{i_1}, \ldots, Y_{i_n}$, it holds that
$$\mathrm{Cov}\big(f(Y_{i_1},\ldots,Y_{i_n}),\, g(Y_{i_1},\ldots,Y_{i_n})\big) \ge 0.$$
This notion was introduced in [35] and is of course related to the FKG inequality. One important aspect of this property was developed in [36], where Newman established a central limit theorem for the collection $\{Y_i\}$ under the assumptions that the $Y_i$ are associated, stationary, and satisfy finite susceptibility:
$$\sum_{i} \mathrm{Cov}(Y_0, Y_i) < \infty.$$
Note that the bound provided by (82) implies that the field has finite susceptibility in the intermittent regime. A classical application of Newman's central limit theorem is to take the $Y_i$ to be the spins of a ferromagnetic stochastic Ising model and derive a central limit theorem for sums over growing boxes with respect to a Gibbs state. The spins are correlated, but they possess the property of being associated and stationary with respect to the Gibbs state.

The solutions of the parabolic Anderson equation (20) are associated. The following result was established in [14]. The proof uses a result of Pitt [37], which states that a necessary and sufficient condition for the association of a Gaussian vector is the pointwise nonnegativity of its correlation function. Since association is preserved by convergence in distribution, the result below is proved using a simple approximation procedure.
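Pitt's criterion can be illustrated numerically: for a Gaussian pair with nonnegative correlation, any two coordinate-wise increasing functions of the pair should be nonnegatively correlated. A small sketch (the particular functions and the correlation value are our own choices):

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative sketch of Pitt's criterion (our example): a Gaussian vector
# with pointwise nonnegative correlations is associated, so coordinate-wise
# increasing functions of it have nonnegative covariance.
n = 200_000
cov = np.array([[1.0, 0.4], [0.4, 1.0]])     # nonnegative correlation
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

f = np.tanh(x + y)        # coordinate-wise increasing in (x, y)
g = np.maximum(x, y)      # coordinate-wise increasing in (x, y)
c = np.mean(f * g) - np.mean(f) * np.mean(g)
print(c)                  # sample covariance; association predicts c >= 0
```

Replacing the off-diagonal entry by a negative number breaks the hypothesis of Pitt's theorem, and one can then find increasing functions with negative covariance.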

Theorem 4. *Let $\{W_x\}_{x\in\mathbb{Z}^d}$ be a field of independent, standard, one-dimensional Brownian motions on some probability space. Then, for each $t$, the field of solutions of (20) is associated.*

#### 7. Almost Sure Lyapunov Exponents

In the previous section we examined the behavior of the moments of $u(t,x)$. We now turn our attention to the a.s. behavior of the solution of (20); that is, we consider the a.s. Lyapunov exponent defined by
$$\lambda(\kappa) = \lim_{t\to\infty}\frac{1}{t}\log u(t,x).$$
The existence of this limit was first established by Carmona and Molchanov in [19] in the case when either $u_0 = \delta_0$ or, more generally, when $u_0$ has compact support. The technique of proof used Liggett's subadditive ergodic theorem [38]. The subadditivity, when $u_0 = \delta_0$, is an easy consequence of the Feynman-Kac representation (116): one splits the expectation at an intermediate time using the Markov property. It is exactly this use of the Markov property that breaks down in the case $u_0 \equiv 1$. The latter case was established in [39] using a block argument from percolation theory. This type of block argument originated in [40]. Using the fact that time increments of the field are independent over disjoint space-time blocks, the proof established an oriented percolation scheme and applied a recurrence result from [41] for such schemes.

In view of the application to stellar magnetic fields, a significant aspect of $\lambda(\kappa)$ is its asymptotic behavior as $\kappa \to 0$. The exact asymptotics were established independently in [39] and in [42].

The asymptotics are derived from the Feynman-Kac representation (116) through analysis of an associated Gaussian field indexed by paths, which places a natural (canonical) metric on the path space.

The index set of this field is too large from the point of view of metric entropy (see [43] for an explanation of metric entropy) induced by the canonical metric. Thus we restrict the index set by specifying the number of jumps of its elements in the interval $[0,t]$; using the number of jumps of a path in $[0,t]$ as a parameter, we can define the corresponding spaces of paths.

The superadditive functional of interest is the supremum of a Gaussian field indexed by this restricted set of paths. The set has suitably bounded entropy, which, by a theorem of Fernique and Talagrand, yields a bound on the expected supremum. This bound allows, by means of Liggett's subadditive ergodic theorem, the conclusion that there is a positive constant governing the growth of the functional.

An interesting and presumably difficult problem is to determine the proper scale on which to obtain a limit law. It is conjectured that, suitably centered and scaled, the quantity should have a nontrivial limit law, possibly related to the Airy distribution. This conjecture comes from related results arising in random matrix theory such as [44, 45]. Similarly, one would expect nontrivial limit laws in the related regimes.

The asymptotics established in [39, 42] for $\lambda(\kappa)$ as $\kappa \to 0$ are as follows.

These asymptotics are arrived at by decomposing the Feynman-Kac representation (116) according to the number of jumps of the path, where equality in law (distribution) is used; note that the only change has been to make the time direction the same in both the path and the field. The intuition is that the conditional expectation in the $n$th term should be nearly maximal along optimal paths. One quickly realizes that the sum over $n$ outside a suitable window is not significant, so that only terms of the form with an appropriate number of jumps matter; these can be renormalized by Brownian scaling. Using this in (97) and simple large deviations results for the Poisson-distributed number of jumps leads to an upper bound for the asymptotics of $\lambda(\kappa)$. The lower bound comes from looking at a particular path that dominates the Feynman-Kac expectation and using similar estimates.

Thus, for small $\kappa$, the a.s. behavior is much smaller than the first moment behavior. This just reflects the fact that the expectation of $u(t,x)$ is dominated by large values of $u(t,x)$ which occur with small, but not too small, probability. This is related to the intermittency and will be examined in the section on sums over boxes below.

We would like to point out that $\lambda(\kappa)$ is an increasing function of $\kappa$, and that $\lambda(\kappa) \le \gamma(1)$ for all $\kappa$. Moreover, it was pointed out in [19] that $\lambda(\kappa) > 0$ for all $\kappa > 0$. The argument given there proceeds by bounding the solution at a fixed time from below by a positive quantity and iterating, via the Markov property, to conclude positivity of the exponent.

We end this section with a discussion of the relation between the a.s. Lyapunov exponent and the moment Lyapunov exponents from [39].

Theorem 5. *We have the following:
**
where .*

We give a brief sketch of the proof. In [46], one of the two inequalities was shown. In [39], the complementary large deviation estimate — for every $\epsilon > 0$ there is a constant such that the relevant probability decays exponentially in $t$ — was established by means of a block argument. Combining the two estimates for large $t$ gives the stated identity.

#### 8. Solution of PAM as Interacting Diffusion

An interesting point of view regarding the solution of (20) was proposed in [47, 48], which grew out of work on the stepping stone model in [26]. In [48], Shiga and Shimizu view the entire field $\{u(t,x)\}_{x\in\mathbb{Z}^d}$ as a Markov process in a subset of a particular weighted sequence space. In [47, 48] and the related works [32, 49], a more general underlying Markov process is used than the one with generator $\kappa\Delta$. However, for simplicity we will confine our discussion to the case where the operator appearing in (20) is $\kappa\Delta$. Take any summable collection of positive weights on $\mathbb{Z}^d$ satisfying a comparability condition under the discrete Laplacian for some positive constant; the state space is then the set of fields with finite weighted norm, endowed with the product topology. We recall the following theorem of [48].

Theorem 6. *Given , the SDE (20) has a unique strong solution with and strongly continuous paths in a.s. The process is a Markov process on with semigroup which satisfies
**
for depending smoothly on only finitely many coordinates and where
**
If *