## Function Spaces, Hyperspaces, and Asymmetric and Fuzzy Structures

Research Article | Open Access

# A Weak Comparison Principle for Reaction-Diffusion Systems

**Academic Editor:** S. Romaguera

#### Abstract

We prove a weak comparison principle for a reaction-diffusion system without uniqueness of solutions. We apply the abstract results to the Lotka-Volterra system with diffusion, a generalized logistic equation, and to a model of fractional-order chemical autocatalysis with decay. Moreover, in the case of the Lotka-Volterra system a weak maximum principle is given, and a suitable estimate in the space of essentially bounded functions is proved for at least one solution of the problem.

#### 1. Introduction

Comparison results for parabolic equations and ordinary differential equations are well known in the literature (see, e.g., [1–4] among many others). One important application of such results is the theory of monotone dynamical systems, which leads to a more precise characterization of ω-limit sets and attractors. In recent years, several authors have been working in this direction (see, e.g., [4–8] for the deterministic case, and [9–12] for the stochastic case). In all these papers, the classical situation is considered, in which the initial-value problem possesses a unique solution.

However, the situation is more complicated when we consider a differential equation for which uniqueness of the Cauchy problem fails (or is simply not known to hold). Let us consider an abstract parabolic problem: for which we can prove that for every initial datum in the phase space (with a partial order ) there exists at least one solution.
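In symbols chosen here purely for illustration (the paper's own display (1.1) uses its own notation, which is not reproduced above), such an abstract problem can be sketched as:

```latex
% Abstract parabolic problem (illustrative notation):
\frac{du}{dt} = A u + f(t, u), \qquad u(0) = u_0 \in X,
% where X is the phase space endowed with a partial order \le and,
% for every u_0 \in X, at least one solution exists (uniqueness may fail).
```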

If we try to compare solutions of (1.1) for two ordered initial data , then we can consider either a strong comparison principle or a weak one.

The strong version would imply the existence of a solution with such that for any solution with , and, vice versa, the existence of a solution with such that (1.2) is satisfied for any solution with . This kind of result is established in [13] for delayed ordinary differential equations, defining then a multivalued order-preserving dynamical system.

The weak version of the comparison principle says that if , then there exist two solutions of (1.1) such that , , and (1.2) hold.

There is in fact an intermediate version of the comparison principle, which says that if we fix a solution of (1.1) with , then there exists a solution with such that (1.2) is satisfied (and vice versa). This is proved in [14] for a differential inclusion generated by a subdifferential map.
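Writing $u_0 \le v_0$ for the ordered initial data and $u, v$ for solutions starting at them (notation introduced here for illustration), the three versions can be summarized as:

```latex
% Strong: a distinguished solution dominates ALL solutions from the smaller datum.
\exists\, v,\ v(0) = v_0:\quad u(t) \le v(t)\ \ \forall t,\ \forall u \text{ with } u(0) = u_0.
% Intermediate: each fixed solution is dominated by SOME solution (and vice versa).
\forall\, u,\ u(0) = u_0\ \ \exists\, v,\ v(0) = v_0:\quad u(t) \le v(t)\ \ \forall t.
% Weak: SOME ordered pair of solutions exists.
\exists\, u, v,\ u(0) = u_0,\ v(0) = v_0:\quad u(t) \le v(t)\ \ \forall t.
```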

In this paper, we establish a weak comparison principle for a reaction-diffusion system in which the nonlinear term satisfies suitable dissipative and growth conditions, ensuring existence of solutions but not uniqueness. This principle is applied to several well-known models in physics and biology. Namely, a weak comparison of solutions is proved for the Lotka-Volterra system, for a generalized logistic equation, and for a model of fractional-order chemical autocatalysis with decay. Moreover, in the case of the Lotka-Volterra system a weak maximum principle is given, and a suitable estimate in the space of essentially bounded functions is proved for at least one solution of the problem.
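As an illustration of the kind of order preservation these results describe, the following sketch simulates a scalar logistic reaction-diffusion equation with an explicit finite-difference scheme and checks that two ordered initial data produce ordered solutions. The equation, coefficients, and discretization here are illustrative choices, not taken from the paper.

```python
import numpy as np

def step(u, dt, dx, d):
    """One explicit-Euler step for u_t = d*u_xx + u*(1 - u) with u = 0 on the boundary."""
    lap = np.zeros_like(u)
    lap[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / dx**2
    u_new = u + dt * (d * lap + u * (1 - u))
    u_new[0] = u_new[-1] = 0.0  # homogeneous Dirichlet boundary condition
    return u_new

def simulate(u0, t_end=0.5, d=0.01, n=101):
    """Integrate the discretized equation up to t_end from the initial profile u0(x)."""
    x, dx = np.linspace(0.0, 1.0, n, retstep=True)
    dt = 0.2 * dx**2 / d  # well inside the stability/monotonicity limit of the scheme
    u, t = u0(x), 0.0
    while t < t_end:
        u = step(u, dt, dx, d)
        t += dt
    return u

# Ordered initial data: 0.2*sin(pi x) <= 0.5*sin(pi x) on [0, 1].
u = simulate(lambda x: 0.2 * np.sin(np.pi * x))
v = simulate(lambda x: 0.5 * np.sin(np.pi * x))
print(np.all(u <= v + 1e-12))  # prints True
```

Because the explicit scheme is monotone for this step size, the discrete solutions inherit the ordering of the initial data, mirroring the comparison principle in the scalar case (cf. Remark 2.5).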

We note that in the papers [15, 16] the existence of a global attractor is proved for such reaction-diffusion systems. In the near future, we will apply these results to obtain theorems concerning the structure of the global attractor.

#### 2. Comparison Results for Reaction-Diffusion Systems

We shall denote by and the norm and scalar product in the space , . Let be an integer and let be a bounded open subset with smooth boundary. Consider the problem: where , , , , is a real matrix with a positive symmetric part , . Moreover, is jointly continuous on and satisfies the following conditions: where , .
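Since the displays for (2.1) and (2.2) carry the paper's own constants and exponents, the following is only a standard template for this class of problems (symbols assumed, not verbatim):

```latex
% Reaction-diffusion system (template):
\frac{\partial u}{\partial t} - a\,\Delta u + f(t, u) = h(t, x)
\ \ \text{in } (\tau, T) \times \Omega, \qquad
u|_{\partial\Omega} = 0, \qquad u(\tau) = u_\tau,
% with a an m \times m real matrix whose symmetric part is positive definite,
% and f jointly continuous, satisfying dissipativity and growth of the type
(f(t,u), u) \ \ge\ \alpha \sum_{i=1}^{m} |u_i|^{p_i} - C_1, \qquad
|f_i(t,u)| \ \le\ C_2 \Big( 1 + \textstyle\sum_{j=1}^{m} |u_j|^{\,p_j(1 - 1/p_i)} \Big).
```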

Let , , and let be the dual space of . We denote the norms in and by and , respectively. For , we define the spaces We take , where .

We say that the function is a weak solution of (2.1) if , , , and for all , where denotes pairing in the space , and .
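In a standard notation (assumed here, since the paper's displays are not reproduced above), the weak formulation requires that, for every test function $v$,

```latex
% Weak formulation (template): u \in L^\infty_{loc}(\tau,T;H) \cap L^2_{loc}(\tau,T;V),
% with the equation holding in the distributional sense:
\frac{d}{dt}\,(u, v) + (a\,\nabla u, \nabla v) + (f(t,u), v) \ =\ (h, v)
\qquad \text{in } \mathcal{D}'(\tau, T),\ \ \forall v \in V,
% where (\cdot,\cdot) also denotes the duality pairing where needed.
```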

Under conditions (2.2), it is known [17, page 284] that for any there exists at least one weak solution of (2.1), and also that the function is absolutely continuous on and for a.a. .

Denote . Any weak solution satisfies and

If, additionally, we assume that is continuously differentiable with respect to for any , and where , , then the weak solution of (2.1) is unique. Here, denotes the Jacobian matrix of .

We consider also the following assumption: there exists such that for any and any such that and if , and , which means that the system is cooperative in the ball of radius centered at .
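One common way to state cooperativity (the sign convention depends on whether the nonlinearity enters the equation with a plus or a minus sign; this is an illustrative form):

```latex
% Cooperative (quasimonotone) condition in the ball |u| \le R: for u \le v
% with u_i = v_i for some index i,
f_i(t, u) \ \ge\ f_i(t, v),
% which, for differentiable f with the convention u_t - a\Delta u + f = h,
% amounts to off-diagonal sign conditions
\frac{\partial f_i}{\partial u_j}(t, u) \ \le\ 0 \qquad (i \ne j,\ |u| \le R).
```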

Consider the two problems: where are jointly continuous on . In addition to conditions (2.2) and (2.6)-(2.7), we shall consider the following:

Lemma 2.1. *If satisfy (2.2) and (2.10), then the constants have to be the same for and .*

* Proof. * Denote by , , and the constants corresponding to in (2.2). Arguing by contradiction, let, for example, . Take the sequence , where as . Then by (2.2), (2.10), and Young’s inequality, we have
But implies the existence of such that , which is a contradiction. Hence, .

Conversely, let . Then we take with as so that
As before, we obtain a contradiction, so .

Repeating similar arguments for the other , we obtain that for .
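The structure of the contradiction can be sketched as follows (exponents $p^1, p^2$ and constants are illustrative): combining the dissipativity of one nonlinearity with the ordering (2.10), the growth of the other, and Young's inequality along a sequence $|u_n| = n \to \infty$ gives, schematically,

```latex
% Schematic contradiction (illustrative symbols):
\alpha\, n^{p^1} - C_1 \ \le\ (f^1(t, u_n), u_n)
\ \le\ (f^2(t, u_n), u_n) \ \le\ C\,(1 + n^{p^2}),
% which is impossible for large n if p^1 > p^2; the symmetric
% argument excludes p^1 < p^2, so the exponents must coincide.
```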

We recall [15] that under conditions (2.2) any solution of (2.8) satisfies the inequality: for some constant . Of course, the same is valid for any solution of (2.9). From (2.13), for any we obtain

We shall denote by the solution of (2.8) corresponding to the initial data and by the solution of (2.9) corresponding to the initial data . Also, we take for .

We obtain the following comparison result.

Theorem 2.2. *Assume that satisfy (2.2), (2.6), and (2.10). If and satisfies (2.7) with , where is taken from (2.14), then , for all .*

*Remark 2.3. *The results remain valid if, instead, satisfies (2.7) with .

*Remark 2.4. *If satisfies (2.7) for an arbitrary (i.e., in the whole space ), then the result is true for any initial data .

* Proof. * Let . The function satisfies (2.6) with . For any define by
Note that and , so for all . For the function , we can obtain by (2.10) and the mean value theorem that

For all , we have
where , for , and if . For any , we have that , and then by , , (2.14), and (2.7), we get

By Gronwall’s lemma, we get
Thus , which means that , for a.a. , and all , .
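The form of Gronwall's lemma used in this step is the standard differential one:

```latex
% Gronwall's lemma (differential form): if y is absolutely continuous and
% y'(t) \le g(t)\, y(t) for a.a. t \ge \tau with g \in L^1_{loc}, then
y(t) \ \le\ y(\tau)\, \exp\!\Big( \int_{\tau}^{t} g(s)\, ds \Big), \qquad t \ge \tau.
% A nonnegative quantity vanishing at t = \tau therefore stays identically zero,
% which is what forces the order between the two solutions.
```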

*Remark 2.5. *In the scalar case, that is, , condition (2.7) is trivially satisfied.

When condition (2.6) fails to be true, we will obtain a weak comparison principle.

Define a sequence of smooth functions satisfying

For every we put , where . Then and for any ,

Let be a mollifier, that is, , , and for all . We define the functions Since for any is uniformly continuous on , there exist such that for all satisfying , and for all for which we have. We put . Then , for all .
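The standard mollifier construction assumed in this step reads:

```latex
% Standard mollifier on \mathbb{R}^m (illustrative normalization):
\rho \in C_0^{\infty}(\mathbb{R}^m), \qquad \rho \ge 0, \qquad
\operatorname{supp} \rho \subset \{\, |s| \le 1 \,\}, \qquad
\int_{\mathbb{R}^m} \rho(s)\, ds = 1,
% with rescalings and smoothed nonlinearities
\rho_{\epsilon}(s) = \epsilon^{-m} \rho(s/\epsilon), \qquad
f^{\epsilon}(t, u) = \int_{\mathbb{R}^m} f(t, u - s)\, \rho_{\epsilon}(s)\, ds.
```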

For further arguments we need the following technical result [16, Lemma 2].

Lemma 2.6. *Let satisfy (2.2). For all , the following statements hold:
**
where is a nonnegative number, and the positive constants , , do not depend on .*

Consider first the scalar case.

Theorem 2.7. *Let . Assume that satisfy (2.2) and (2.10). If , there exist two solutions (of (2.8) and (2.9), resp.) such that for all .*

* Proof. * For the functions we take the approximations (defined in Lemma 2.6), which satisfy (2.24)–(2.26), and consider the problems
for . Problem (2.27) has a unique solution for any initial data . In view of Lemma 2.1, the constant is the same for and . We note that
Then, it is clear that for every .

By Theorem 2.2, since , we have , for all , for the corresponding solutions of (2.27).

In view of Lemma 2.6, one can obtain in a standard way that (2.13) is satisfied for the solutions of (2.27) with a constant not depending on and replacing by . Hence, the sequences are bounded in . It follows from (2.25) that are bounded in and also that is bounded in , where . By the Compactness Lemma [18], we have that for some functions , :
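The Compactness Lemma invoked here is usually the Aubin–Lions lemma; one standard formulation is:

```latex
% Aubin--Lions lemma (one standard version): let V \subset H \subset V' with
% V \hookrightarrow H compact. If
\{u_n\}\ \text{is bounded in } L^{2}(\tau, T; V)
\quad \text{and} \quad
\{du_n/dt\}\ \text{is bounded in } L^{q}(\tau, T; V') \ \ (q > 1),
% then \{u_n\} has a subsequence converging strongly in L^{2}(\tau, T; H).
```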
Also, arguing as in [19, page 3037] we obtain
Moreover, by (2.24) and (2.31) we have for a.a. and then the boundedness of in implies that converges to weakly in [18]. It follows that are weak solutions of (2.8) and (2.9), respectively, with , .

Moreover, one can prove that
Indeed, we define the functions , , which are nonincreasing in view of (2.13). Also, from (2.30) we have for a.a. . Then one can prove that for all (see [15, page 623] for the details). Hence, . Together with (2.35) this implies (2.36) (see again [15, page 623] for more details).

Hence, passing to the limit we obtain

Further, let us prove the general case for an arbitrary .

Theorem 2.8. *Assume that satisfy (2.2) and (2.10). Also, suppose that either or satisfies (2.7) for an arbitrary . If , there exist two solutions (of (2.8) and (2.9), resp.) such that , for all .*

* Proof. * Let be the function which satisfies (2.7). We take the approximations (defined in Lemma 2.6), which satisfy (2.24)–(2.26). Then, we consider problems (2.27).

In view of Lemma 2.1, the constants are the same for and . We note that
Then, it is clear that for every .

Using Lemma 2.6 it is standard to obtain estimate (2.14) with a constant not depending on . Hence, the solutions of (2.27) satisfy
We note that
if , since in such a case . Hence, if , the functions satisfy condition (2.7) with . Therefore, for any and any such that and if , and , we have
Thus, if , the functions satisfy condition (2.7) with .

By Theorem 2.2, since , we have , for all , , for the corresponding solutions of (2.27).

Repeating the proof of Theorem 2.7, we obtain that the sequences converge (up to a subsequence) in the sense of (2.29)–(2.36) to the solutions of problems (2.8) and (2.9), respectively. Also, it holds

In the applications, we need to generalize this theorem to the case where the constant can be negative. We shall do this when have sublinear growth (i.e., for all ). Consider for (2.1) the following conditions: where , and .

Let satisfy (2.43) with constants . Then if , we make in (2.1) the change of variable , where . Hence, multiplying (2.8) and (2.9) by we have

It is easy to check that if is a weak solution of (2.44), then is a weak solution of (2.8) (and the same is true, of course, for (2.45) and (2.9)). Conversely, if is a weak solution of (2.8), then is a weak solution of (2.44).
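The exponential change of variable described above is the classical device for shifting the dissipativity constant; schematically (symbols chosen for illustration):

```latex
% If u solves u_t - a\Delta u + f(t,u) = h and v(t) = e^{-kt} u(t), then
v_t - a\,\Delta v + e^{-kt} f\big(t, e^{kt} v\big) + k\, v \ =\ e^{-kt}\, h(t, x),
% so the transformed nonlinearity \tilde f(t,v) = e^{-kt} f(t, e^{kt} v) + k v
% gains the dissipative term k v, making the effective constant nonnegative.
```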

The functions satisfy (2.2) with for all . Indeed, where .

Then, we obtain the following.

Theorem 2.9. *Assume that satisfy (2.43) and (2.10). Also, suppose that either or satisfies (2.7) for an arbitrary . If , there exist two solutions (of (2.8) and (2.9), resp.) such that , for all .*

* Proof. *We consider problems (2.44) and (2.45). In view of (2.46), satisfy (2.2). Also, defining , it is clear that (2.10) holds. Finally, if, for example, satisfies (2.7) for any , then it is obvious that the same is true for as well.

Hence, by Theorem 2.8 there exist two solutions (of (2.44) and (2.45), resp.), with such that for all . Thus
and are solutions of (2.8) and (2.9), respectively, such that .

*Remark 2.10. *If satisfy (2.6), then the solutions given in Theorem 2.9 are unique for the corresponding initial data.

#### 3. Comparison for Positive Solutions

Denote . Let us consider the previous results in the case where the solutions have to be positive. Consider now the following conditions: for all , a.e. and if . Obviously, in the scalar case these conditions just mean that for a.e. .
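Conditions of this kind are usually quasipositivity conditions; an illustrative form (the sign depends on how the nonlinearity enters the equation) is:

```latex
% Quasipositivity (illustrative): for u \ge 0 with u_i = 0,
f_i(t, u) \ \le\ 0,
% which, with the convention u_t - a\Delta u + f(t,u) = h and h \ge 0,
% prevents the i-th component from crossing zero, so u(t) \ge 0 is preserved.
```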

It is well known (see [16, Lemma 5] for a detailed proof) that if we assume conditions (2.2) only for , and also (2.6) and (3.1)-(3.2), then for any there exists a unique weak solution of (2.1). Moreover, is such that for all .

On the other hand, if we assume these conditions except (2.6), then there exists at least one weak solution of (2.1) such that for all [16, Theorem 4]. Moreover, we can prove the following.

Lemma 3.1. *Assume conditions (2.2), (2.6) only for , and also (3.1)-(3.2). Then there exists a weak solution of (2.1), which is unique in the class of solutions satisfying for all .*

* Proof. *Let be two solutions with , such that for all . Denote . Then in a standard way by the mean value theorem, we obtain
where so that . The uniqueness follows from Gronwall’s lemma.

We also prove a result similar to Lemma 2.1. Denote by , , and the constants corresponding to in (2.2) for problems (2.8) and (2.9), respectively. Arguing as in the proof of Lemma 2.1, we obtain the following lemma.

Lemma 3.2. *If satisfy (2.2) and (2.10) for , then for all .*

Theorem 3.3. *Let satisfy (2.6) and (3.1)-(3.2). Assume that satisfy (2.2) and (2.10) for . If and satisfies (2.7) for with , where is taken from (2.14), then for all , where are the solutions corresponding to and , respectively.*

* Proof. *As the solutions , corresponding to and , are nonnegative, repeating exactly the same steps as in the proof of Theorem 2.2 we obtain the desired result.

*Remark 3.4. *The results remain valid if, instead, satisfies (2.7) with the same .

*Remark 3.5. *If satisfies (2.7) for an arbitrary (i.e., in the whole space ), then the result is true for any initial data .

We shall need also the following modification of Theorem 3.3.

Theorem 3.6. *Let satisfy (2.6) and (3.1)-(3.2). Assume that satisfy (2.2) and (2.10) for . Let . Suppose that satisfies
**
for any and any such that and if , and with , where is taken from (2.14).**Then there exists a constant such that
**
where are the solutions corresponding to and , respectively.*

* Proof. *Arguing as in the proof of Theorem 2.2 we obtain the inequality
where is defined in (2.15).

Using (2.17), , (2.14), and (3.5), we get
Thus
for some constant . By Gronwall’s lemma, we get

Let us consider now the multivalued case. We will obtain first some auxiliary statements.

We shall define suitable approximations. For any we put , where , and was defined in (2.20). Then and for any . We first check that satisfy conditions (2.2) for , where the constants do not depend on .

Lemma 3.7. *Let satisfy (2.2) for . For all one has
**
for , where the positive constants , , and do not depend on .**If , then for any one has
**
Moreover, if satisfy (3.2), then also satisfies this condition.*

* Proof. *In view of (2.2) we get
where , for some constant . Also,
for some constant . Thus, for , , we have (3.12) for the functions .

Moreover, if , then for any ,

Finally, if (3.2) is satisfied, then
for all , a.e. and such that and if .

Let , . We also define the approximations , where . Then (3.11) holds. We check that satisfy conditions (2.2) for , where the constants do not depend on .

Lemma 3.8. *Let satisfy (2.2) for . For all one has
**
for , where the positive constants , , and do not depend on .**If , then for any one has
**Moreover, if satisfy (3.2), then also satisfy this condition.*

* Proof. * In view of (3.12), we have
where we have used that implies . Finally, (3.19) and condition (3.2) are proved in the same way as in Lemma 3.7.

For every consider the sequence defined by , where either or , as defined before. Since any are uniformly continuous on , for any there exist such that for all satisfying , and for all for which we have. We put . Then , for all . Since for any compact subset and any we have uniformly on , we obtain the existence of a sequence such that , as , and , for any and any satisfying . We define the function given by where .

Lemma 3.9. *Let satisfy (2.2) for . For all we have
**
for , where the positive constants , and depend neither on nor .**Moreover, if satisfy (3.2), then also satisfy this condition if .*

* Proof. *Since satisfy (3.12) and satisfies (3.18), we have
for some constant .

On the other hand,