Abstract

We use the steepest descent method in an Orlicz–Wasserstein space to study the existence of solutions for a very broad class of kinetic equations, including the Boltzmann equation, the Vlasov–Poisson equation, the porous medium equation, and the parabolic p-Laplacian equation, among others. We combine a splitting technique with an iterative variational scheme to build a discrete solution that converges to a weak solution of our problem.

1. Introduction

The general model describing kinetic equations is an evolution equation for an unknown function , representing a time-dependent probability density of a material in a given spatial domain. In the present work, may measure the density distribution of a system of identical particles of a bulk material. The density depends on the time , the position , and the velocity of the particles at . Roughly speaking, the equation describes the evolution of the density function in the phase space , with an open bounded domain with periodic boundary. As a probability density, remains nonnegative in the course of time and satisfies the mass conservation principle , for all : where the initial datum is a probability density on . Here, is an open, bounded, convex, and smooth domain of , with periodic, is the Legendre transform of a cost function , and is a convex function.

Equation (1) can be viewed as the balance of a streaming phenomenon and a general nonlinear interaction phenomenon between the particles, described, respectively, as

Accordingly, the transport equation (2) can be interpreted as a reduction of (1) in the absence of interaction phenomena, whereas (1) reduces to (3) in the absence of streaming.

One interest in considering (1) under a general nonlinearity is that it covers a very broad range of problems occurring in physics, while also posing a purely mathematical challenge. It has been motivated by previous works in the literature, namely the works in [17], where (1) is investigated in some particular cases. Indeed, in [3], the authors dealt with the heat equation:

By fixing a probability density with finite and a time step , they define the mass density as a discrete solution of (4) at time , which minimizes the functional on , where is the set of all probability densities on having finite second moments and is the 2-Wasserstein metric defined as

By defining as , they let tend to 0 and then show that the sequence converges to a nonnegative function , which solves (4) in a weak sense.
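The minimizing-movement scheme described above can be illustrated numerically. The following Python sketch is our own illustration, not part of the cited works: it restricts the scheme to centered Gaussians, for which the 2-Wasserstein distance between N(0, s0²) and N(0, s²) is |s − s0| and the entropy functional has a closed form, and checks that the discrete flow reproduces the heat-equation variance law σ²(t) = σ₀² + 2t as the time step tends to 0.

```python
import math

def jko_step(sigma, tau):
    # One minimizing-movement step for E(rho) = ∫ rho log rho, restricted to
    # centered Gaussians N(0, sigma^2). The step minimizes
    #   F(s) = (s - sigma)^2 / (2*tau) - 0.5 * log(2*pi*e*s^2),
    # whose first-order condition (s - sigma)/tau = 1/s gives
    #   s^2 - sigma*s - tau = 0  =>  s = (sigma + sqrt(sigma^2 + 4*tau)) / 2.
    return (sigma + math.sqrt(sigma ** 2 + 4.0 * tau)) / 2.0

def jko_flow(sigma0, tau, n_steps):
    s = sigma0
    for _ in range(n_steps):
        s = jko_step(s, tau)
    return s

# The heat equation evolves the variance as sigma^2(t) = sigma0^2 + 2t;
# the discrete scheme should reproduce this as tau -> 0.
sigma0, T, tau = 1.0, 0.5, 1e-4
s = jko_flow(sigma0, tau, int(T / tau))
print(s ** 2, sigma0 ** 2 + 2 * T)  # the two values should be close
```

Each step incurs an O(τ²) variance deficit, so the accumulated error over T/τ steps is O(τ), consistent with the convergence of the scheme.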

In [1], the existence of solutions for the spatially homogeneous equation associated with (1), that is, the equation for fixed , has been proved by M. Agueh, using a variational scheme similar to that in [6]. Here, is a bounded and convex domain (see [6] for more details).

A particular case of (1), namely, the kinetic equation obtained by choosing and , has been studied in [5] by using a discretization scheme based on the “splitting method.” This enables the authors to decompose a discrete solution of the kinetic equation (8) in the form , where stands for a discrete solution of the free transport equation when is fixed, and for a discrete solution of the diffusion equation in (8) when is fixed.

Defining as they show that converges to a nonnegative function which solves the kinetic equation (8) in a weak sense.

Such a decomposition is not suitable in the case of problem (1) because of its nonlinear structure. To deal with the more general class of kinetic equations (1), we combine ideas from the splitting method in [5] with techniques developed in [1] for the spatially homogeneous equations:

To the best of our knowledge, our technique is new and is stated in a more general setting. It is worth mentioning that the class of kinetic equations (1) also includes the Vlasov–Poisson equation, obtained when , and the parabolic p-Laplacian equation, obtained in the case and with .

In order to facilitate the reading of the paper, we summarize below the main steps and technical schemes according to which our results will be carried out:
(1) First of all, we fix a time step and define as a discrete solution of the kinetic equation (1) at time , for (see Section 2.1).
(2) Next, we prove that the solution of the Monge problem is defined by where and are as in Section 2.1. We use (16) to show that the sequence satisfies the time-discretization equation of the kinetic equation (1) weakly, for , where tends to 0 and when tends to 0.
(3) Then, we define an approximate solution of the kinetic equation (1) (see (118)), and we prove that the sequence converges to a nonnegative function which solves the kinetic equation (1) in a weak sense when tends to 0.

The convergence result is achieved as follows:
(a) The weak convergence of to in for follows from the displacement convexity of the functional on the set of all probability densities (see Proposition 3), and its strong convergence in is obtained thanks to a diagonal method combined with a result obtained in [1].
(b) The convexity of and the boundedness of in help to prove the weak convergence of the nonlinear term to in .
(c) Finally, the strong convergence of to in and the weak convergence of the nonlinear term to in enable us to establish that is a weak solution of the kinetic equation (1).

The paper is structured as follows. In Section 2, we state the required hypotheses and set up some tools relevant to our problem. In Section 3, we give the variational formulation of the discrete problem and construct the discrete solution. Section 4 establishes our main result by proving the convergence of the discrete solutions to a weak solution of the considered problem. Section 5 ends the paper with an illustrative example, followed by an appendix on some regularity results.

2. Preliminaries

Throughout this work, we will assume the following: is a convex function of class such that is convex. is an even and convex function such that and , for all , with . is a probability density on such that with and with .

Remark 1. Typical examples satisfying assumption are the functions and .

Proposition 1. Assume that satisfies . Let be a convex function such that is convex and decreasing.

Then, the functionalis displacement convex.

Proof. Let and be a -optimal map that pushes forward to . We define and , where .
Then, we have From [1] and Proposition 3, we have that Replacing (21) in (20), we obtain Recalling again [1] and Proposition 3, we get that is diagonalizable with positive eigenvalues. So, using the fact that the map is concave on the set of diagonalizable matrices with positive eigenvalues, we get Since is decreasing, then From (23) and (24) and the fact that is convex, we obtain Hence, we conclude that the functional is displacement convex.
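For measures on the real line, displacement convexity can be checked by hand. The following Python sketch is our own illustration under simplifying assumptions, not the proof above: the displacement interpolation between two centered Gaussians is again Gaussian with linearly interpolated standard deviation, and we verify midpoint convexity of the entropy along this path.

```python
import math

# Displacement interpolation between N(0, s0^2) and N(0, s1^2) on the line
# is Gaussian with standard deviation (1 - t)*s0 + t*s1; the internal-energy
# functional E(rho) = ∫ rho log rho then reads
#   E(t) = -0.5 * log(2*pi*e*((1 - t)*s0 + t*s1)^2).
def E(t, s0=1.0, s1=3.0):
    s = (1.0 - t) * s0 + t * s1
    return -0.5 * math.log(2.0 * math.pi * math.e * s * s)

# Check midpoint convexity on a grid: E((a + b)/2) <= (E(a) + E(b))/2.
ts = [i / 10.0 for i in range(11)]
ok = all(E((a + b) / 2) <= (E(a) + E(b)) / 2 + 1e-12
         for a in ts for b in ts)
print(ok)  # True: the entropy is displacement convex along this path
```

Here convexity follows because t ↦ (1 − t)s0 + t·s1 is affine and positive and −log is convex, which is the one-dimensional shadow of the matrix concavity argument used in the proof.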

Corollary 1. Since the functional is displacement convex, we have that

and is the -optimal map that pushes forward to .

Definition 1. Let be nonnegative. We say that a nonnegative function on is a weak solution of (1) on the time interval , for some , if for every test function with time support , we have

2.1. The Flow and Descent Algorithm

Assume that the probability density satisfies and fix a time step ; then, we define the following:(1)(2)(3) for fixed such that (4)

For each fixed, denotes the unique minimizer of the variational problem:

is the set of all probability densities on having a finite -moment, and stands for the Kantorovich work defined as

We obtain the terms , for , by induction as follows:
(i) By fixing , we define .
(ii) By fixing , we define where and is the unique minimizer of the variational problem with

The existence of and will be proved further in Sections 2 and 3, respectively.
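As a rough structural sketch of the alternating algorithm above, one can implement the two phases on a particle approximation of the density. The interaction step below is a simple stochastic stand-in of our own for the variational minimization (which the paper performs exactly); the sketch only checks two structural properties: mass conservation and the fact that free transport shifts the mean position by τ times the mean velocity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(0.0, 1.0, n)   # particle positions
v = rng.normal(1.0, 0.5, n)   # particle velocities
w = np.full(n, 1.0 / n)       # equal weights: a probability measure
tau = 0.1                     # time step

def transport_step(x, v, tau):
    # Discrete solution of the free transport equation along characteristics x' = v.
    return x + tau * v, v

def interaction_step(v, tau, sigma=1.0):
    # Stand-in for the variational (JKO-type) step in velocity: a small
    # diffusive kick playing the role of the nonlinear interaction operator.
    return v + np.sqrt(2.0 * sigma * tau) * rng.normal(0.0, 1.0, v.shape)

mean_x0, mean_v0 = x.mean(), v.mean()
x, v = transport_step(x, v, tau)
v = interaction_step(v, tau)

assert abs(w.sum() - 1.0) < 1e-12          # mass conservation: weights untouched
print(x.mean() - mean_x0, tau * mean_v0)   # transport shifts the mean by tau * <v>
```

Both quantities printed agree, since the transport step acts affinely on positions and the interaction step only modifies velocities.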

2.2. c-Wasserstein Metric

In this section, we define a Wasserstein metric corresponding to a cost function , and we study its topology.

Definition 2. Assume that satisfies . Let be two probability measures on . We define the -Wasserstein metric between and by
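For measures on the line with cost c(z) = |z|, the metric above reduces to the classical 1-Wasserstein distance, which SciPy computes directly from the cumulative distribution functions. The following quick sanity check (our own example, not part of the paper) verifies the translation property of the metric on discrete measures:

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two discrete probability measures on the line with the same weights;
# nu is mu translated by 0.5.
x = np.array([0.0, 1.0, 2.0])
y = x + 0.5
w = np.array([0.2, 0.5, 0.3])

# For measures on R, the Kantorovich problem with cost c(z) = |z| has the
# closed form W1(mu, nu) = ∫ |F_mu - F_nu|, which scipy implements.
d = wasserstein_distance(x, y, w, w)
print(d)  # shifting every atom by 0.5 costs exactly 0.5
```

For a general strictly convex cost c the optimal plan in one dimension is still the monotone coupling, so the same CDF-based computation applies with the cost evaluated on quantile differences.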

Theorem 1. Assume that satisfies . Then, is a distance on the probability space . Furthermore, if is a sequence in and , then converges to in the metric space if and only if converges narrowly to in .

Proof. Let be two probability measures on such that . Then, there exists a sequence in which converges to 0 such thatDenote by the solution of Kantorovich problem:Then, we obtainSince converges to 0, then tends to and when goes to , for all such that . Then, using the fact that is coercive, we deduce that there exists such thatThis with (37) implies thatThus, we conclude thatLet be the solution of the Kantorovich problem:Then, using (40), we obtainWe deduce that a.e. So for all , we haveConsequently, .
Let us fix two probability measures and on . Since is even, thenfor all . We deduce from (44) thatLet be three probability measures on . Define and . Denote by the solution of Kantorovich problemand denote by the solution of the Kantorovich problem:Using the Gluing lemma [8], there exists a probability measure on such thatfor some Borel subsets and of . Let be a probability measure on defined by . Then, , and we use the convexity of to get thatSo, , and we conclude thatHence, is a distance on .
Let us now study the topology of .
Let be a sequence on and such that converges to 0 when tends to . Define . Since converges to 0, we use the fact that is coercive to have when . We deduce that Note that the 1-Wasserstein metric between and is We deduce that converges to 0 when tends to . Since the 1-Wasserstein metric induces the narrow topology of , we conclude that the sequence converges narrowly to in .
Assume now that the sequence converges narrowly to in . Fix and denote by the solution of Kantorovich problem:Since converges narrowly to , then converges narrowly to some andSo,for all . Hence, for all , there exists such thatSo, for all . Then,Consequently, converges narrowly to in the metric space .
We now establish the existence of a solution for the variational problem defined by in the metric space , where is the set of all probability measures on having finite -moment, that is, and being a time step.

Lemma 1. Assume that , , and satisfy, respectively, , , and . Then, the following is obtained:(i)The map is lower semicontinuous in .(ii)The functional is lower semicontinuous in .(iii)The set is a closed subset of ,

Proof. (i) Let . Let be a sequence in such that converges to in the metric space . Denote by the solution of the Kantorovich problem: We have Since converges to narrowly, then converges to narrowly and This implies that Thus, we obtain the proof of . (ii) Since is convex and , Hence, if converges to weakly in with , we have and then which completes the proof of . The proof of is a consequence of .

2.3. Existence Results for the Discrete Problem

First of all, we introduce here, for unbounded domains, an analogue of the maximum principle stated for bounded domains in [1]. This maximum principle plays a central role in the search for solutions of the discrete problem . It is also used later to establish the convergence of our algorithm towards a weak solution of the kinetic equation (1).

Proposition 2. (maximum principle). Assume that the initial datum satisfies with , satisfies , and satisfies .
Then, any solution of the variational problemsatisfies , with and .

Proof. We define . Assume by contradiction that has a positive Lebesgue measure. Then, , where is the minimizer of Otherwise, This yields a contradiction.
Define , i.e., , for all , and denote by and the marginals of . We have with and then and .
Let and denote, respectively, the density functions of and . Becausewe have on and on .
For small enough, define and by for all .
, , and we getTaking into account some ideas from [1], we shall prove that , which leads to a contradiction.
Indeed, Since is nonnegative on and , we have Also, since is convex and of class , then on and on .
Hence, we have Since is continuous, then Consequently, there exists such that, for , So, we fix small enough that , and we use (81) and (83) to obtain Now, fixing and combining (77) and (84) yield This is a contradiction, since is a minimizer of on . Consequently, is negligible and then .
The proof of is analogous to that of . Thus, we conclude that .

Lemma 2. Let be a probability density on such that and

The variational problem defined in (32) admits a unique minimizer in and .

Proof. Since and , then . Then, we use Proposition 2, and we obtain that any minimizer of the variational problem satisfies , with and .
By using Proposition 2 and the fact that is convex, we obtain that for all probability densities such that . We now use Lemma 1 to get that the functional is lower semicontinuous on the Wasserstein space . We conclude that the problem admits a solution . The strict convexity of and implies the strict convexity of the map and that of the maps , and accordingly the uniqueness of the minimizer of .

3. Euler–Lagrange Equation for the Problem

In this section, we prove that the sequence is a time discretization of the kinetic equation (1). In order to achieve it, we need the following lemma.

Lemma 3. (explicit expression for optimal maps). Assume that satisfies and satisfies . Then, the Monge problemadmits a unique solution such thatwhere is the unique minimizer of the variational problem.

Proof. Let be a test function and consider the diffeomorphism in defined by Define the probability density on . Since satisfies and is a diffeomorphism pushing forward to , we obtain the following Monge–Kantorovich-type energy inequality: Recalling the definition of , we have Dividing (91) by and using (92) and the dominated convergence theorem, we have Furthermore, since , the Monge–Kantorovich-type energy inequality gives This implies that We now combine (93) and (95) to derive Since and , then . So, Thus, for , we have We use the convexity of and the fact that to obtain Now, from (92) and the dominated convergence theorem, we obtain Since minimizes the functional on the probability space, the Euler–Lagrange equation yields Consequently, by using (96) and (100), we have Next, by replacing by in (96), we obtain the desired equation: Thus, we conclude that Note that is invertible and . Then, we obtain the explicit expression of the optimal map : The proof of this lemma is complete.
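In one dimension with quadratic cost, the optimal map has the explicit monotone-rearrangement form T = F_ν⁻¹ ∘ F_μ. The following sketch is a hypothetical Gaussian example of our own (not the map of Lemma 3) checking the push-forward property on samples: between centered Gaussians N(0, s0²) and N(0, s1²), the optimal map is simply T(x) = (s1/s0)·x.

```python
import numpy as np

# For the quadratic cost on R, the optimal (Monge) map between mu and nu is
# the monotone rearrangement T = F_nu^{-1} o F_mu; between centered Gaussians
# N(0, s0^2) and N(0, s1^2), it reduces to T(x) = (s1/s0) * x.
rng = np.random.default_rng(1)
s0, s1 = 1.0, 2.0
x = rng.normal(0.0, s0, 100_000)  # samples of mu = N(0, s0^2)
y = (s1 / s0) * x                 # push the samples forward through T

print(y.std())  # close to s1: T pushes N(0, s0^2) forward to N(0, s1^2)
```

The explicit form of the optimal map in Lemma 3 plays the same role in the scheme: it lets the Euler–Lagrange equation of the discrete problem be written pointwise.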
We are now ready to show that the sequence satisfies the time discretization (17) of the kinetic equation (1).
Let ; then By using , , and , we have Next, we use the Taylor formula and the expression of the optimal map to obtain Here, and .
Now define If we show that tends to 0 as goes to 0, then we are done.
Indeed, from the maximum principle, , and then where is a compact subset of such that .
Since minimizes the general energy functional on the metric space , then We use in (112) the expression of the optimal maps and the definition of , and then we have Because is convex and , we have By using (114) and the fact that , (113) becomes Recalling (115) and (111), we obtain We combine (116) and (117) to conclude that tends to 0 when goes to 0. Accordingly, we conclude that the sequence results from a time discretization of the kinetic equation (1).
Recalling Section 2.1, we define an approximate solution over of the kinetic equation (1) as follows:In the next section, we establish the convergence of the sequence to a weak solution of (1).

4. Convergence Results

4.1. Weak Convergence of

Let us consider the sequence as defined in (118).

Lemma 4. Assume that satisfies . Then, for all , we have

Proof. Taking , in Corollary 1, we obtain where is the -optimal map that pushes forward to . We use the expression of and obtain Since and , (121) becomes From the convexity of and , we have Thus, using (123) with and , (122) becomes Since and , then ; hence, Multiplying (125) by , we obtain after integration and then, by an iteration process on , we get the proof of Lemma 4.

Lemma 5. Assume that with and . For , we have

Proof. Let and let be a step such that .
We use Lemma 3 and the definition of in (118) to obtain

4.2. Weak Convergence of the Linear Term

Here, we study the weak convergence of the linear term.

Proposition 3. Assume that satisfy . Then, there exists a function in such that, up to a subsequence, converges to weakly in and weakly-∗ in .

Moreover,for every test function , with .

Proof. Since (see Proposition 2 and Lemma 4), then Hence, the sequence is bounded in , and there is a subsequence of , still denoted , that converges to a nonnegative function weakly in and weakly-∗ in .
Since , for every such that , we finally obtain Proposition 3.

4.3. Weak Convergence of the Nonlinear Term

In this section, we establish the weak convergence of the nonlinear term.

First, we prove that the sequence is bounded in , and that is bounded in where .

Lemma 6. Assume that , , and satisfy, respectively, , , and . Then, for all fixed,

Proof. Using (115), we have But , and then Recalling (123), with and , we have with .
Then, Moreover, and then, Because , we have for all and then Consequently, Using , we obtain

4.4. Strong Convergence of

In order to prove the strong convergence result, we need to establish the following compactness results for in .

Lemma 7 (velocity-compactness). Assume that satisfies . Fix and such that is small. Then, where is a constant.

Proof. We use , , and (140) to obtain Then, and , and approximating by functions in , we obtain This implies that

Lemma 8. (position-compactness). Assume that satisfies . We fix and such that is small. Then,where

Proof. By fixing , (17) becomes where tends to 0 when goes to 0. We use (146) and the fact that is bounded in to deduce that is bounded in . Then, by using Lemma 4, we have Thus, , and approximating by functions, we use the fact that is bounded in to deduce that

Lemma 9. Assume that holds. Let be sufficiently small that , where is fixed. We have where is a positive constant depending on .

Proof. Choose such that and . DefineUsing the definition of in (118), we obtainwhere .
Note thatUsing (152), we havewhereUsing as a -optimal map that pushes forward to , we haveOn the contrary, (see Lemma 6), and from assumptions on , is continuous on , and then there exists a constant such that for . Since (see Lemma 2) and , then . So .
Approximating by functions and using the dominated convergence theorem, we haveNote that and (see Lemma 6).
So, by using Hölder’s inequality, we have that Using the algorithm defined in Section 2.1 and Lemma 6, we have Thus, Applying Hölder’s inequality in the previous relation yields We notice that Next, we use Lemma 6, and we have We combine (160), (161), and (162) to get Writing , we then have On the other hand, Since and then , we approximate by functions and we get Note that , and according to Lemma 8, . So, since is continuous and , we have .
Consequently,Next, we use the fact that and the boundedness in , of (see Lemma 8), to getThus,where .
Let us now proceed to the following estimate.

Lemma 10. Assume that satisfies . Then, for every fixed and for all small, we have where is a positive constant.

Proof. Since , we use Taylor’s formula to have thatwhere and .
Since , we have (see Lemma 2). Thus, using (171) and the fact that is continuous and nonnegative, we have Define . We use Lemma 9 and obtain Using , we have where is a positive constant.
We will use Lemmas 7, 8, and 10, to prove that the sequence is precompact in .

Lemma 11. Assume that , , and satisfy, respectively, , , and . Then, is precompact in , where is the open ball centered at origin of radius .

Proof. Define . Then, , with . , , and is compact and So, there exists such that .
Define and we have and
Let , , such that , , and , where .
We use Lemmas 7, 8, and 10 to have , where , , and are defined in Lemmas 7, 8, and 10. So, by using the Riesz–Fréchet–Kolmogorov theorem, we deduce that the sequence is precompact in . Therefore, , and then for all , there exists such that , and since is bounded in , we have .
We use a corollary of [9] to conclude that the sequence is precompact in , which implies that converges strongly to some function in .
Now, let us prove that the sequence is precompact in .

Lemma 12. Assume that , , and satisfy, respectively, , , and . Then, the sequence is precompact in .

Proof. Since , Lemma 3 implies that is bounded in . Then, converges weakly to some function in (up to a subsequence).
Let be an open ball of radius centered at origin, with . Then, and .
Using the previous results, the sequence is precompact in . A subsequence converges strongly to some function in . Let us denote by the subsequence of which converges strongly to in . Fix in . We have where tends to 0 when goes to ; for , there exists such that for all . Since converges strongly to in , we deduce that when goes to 0 and . We conclude that converges weakly to in , for . So for .
Now, let us prove that tends to 0 as and .
Note that Let us recall that Since , for , as we can find such that, for every , we have Then, for all , Consequently, converges strongly to in .
By the previous lemma, converges strongly to in (up to a subsequence). Since is continuous, converges strongly to in . We conclude that the sequence converges to in . Let us recall that is bounded in , so converges weakly to in .
On the other hand, is bounded in . Then, a subsequence converges weakly to in .
We derive from the proof of Theorem 3.10 in [1] that the sequence converges weakly to in .

Theorem 2. Assume that , , and satisfy, respectively, , , and . Let and be such that and . Then,

Moreover, converges weakly to in and in the weak sense.

Proof. Let us recall the following:
(i) converges strongly to in 
(ii) converges weakly to in 
(iii) converges weakly to in 
Then, we have Since and is bounded in , we use and to have Also, we have Using , , and the fact that and that is bounded in , we get To conclude the proof of Theorem 2, we need to establish the following limit, whose proof is derived from the three following lemmas:

Lemma 13.

Proof. Since is convex and , then So, Passing to the limit, we obtain Taking into account (187) and (189) in (194) enables us to conclude the proof of Lemma 13.

Lemma 14.

Proof. Using the proof of Lemma 6, we have where is a -optimal map that pushes forward . Thus, Since , then Also, where .
Let us prove that Indeed, Using , the continuity of , the boundedness of in , and the fact that converges uniformly to on the compact , we obtain Next, we combine (199), (200), and (203) to get Since is strictly convex, Consequently, replacing and in (5), we obtain Lemma 14.

Lemma 15.

Proof. Let us write for . We have .
Approximating by functions and using (17), we have where is defined in (17) and tends to 0 as . By integrating (206) over , one obtains Next, from the definition of , we derive that We combine (208) and (209) to obtain where Let us show that tends to 0 as .
Assume that .
(i) If , we use (116) and we have Recalling the fact that , we have From the statements in Section 2.1, we get and accordingly, So and then, From the proof of Lemma 6, we have Then, we obtain So, tends to 0 as .
(ii) If , we have and then, From the statements in Section 2.1, we obtain and then, Thus, tends to 0 as .
By the change of variable , we havewhereRecalling the definition of , we getThus,Besides,and since ,We also haveButand then by the dominated convergence theorem, we haveand using (232) and (233), we obtain thatNext, the equalityfollows by recalling (229), (227), and (234), and accordingly,We notice thatwhere , and since is convex, thenAccordingly,and after integration, we getSince , for , and then, we obtainTending to 0 in the relation above enables us to haveMoreover, expressions (236) and (242) giveSo, tending to in (243), we obtain Lemma 15.
Now, we derive (185) by combining Lemmas 13–15.
Let us now show that the sequence converges weakly to in and in the weak sense.
Fix and let be such that .
We have Indeed, We use , , and the fact that and is bounded in , and we let tend to 0 in (245) to obtain (244).
Let us show that in the weak sense. So, let and be a test function and , with . Define .
Since is convex and , we have Splitting (246) as follows, we derive from (185) that Consequently, We divide (249) by , let tend to 0, and obtain Replacing by in the previous relation, we obtain We conclude that converges weakly to in , and then the proof of Theorem 2 is complete.

4.5. Existence of Solutions

Now, we combine the weak convergence of the sequence and that of the nonlinear term to prove that the kinetic equation (1) admits a weak solution in the sense of Definition 1.

Theorem 3. Let be a probability density on such that , , and , with and satisfying, respectively, and .

Then, for any test function such that , we have where is the limit function obtained in Proposition 3.

Proof. Fix and let be a test function such that . Integrating (17) over , we obtain where tends to 0 as .
We observe that Hence, We combine (254), (255), and (256) to obtain Here, Finally, we use the weak convergence of to , the weak convergence of to , and the uniform convergence of to on the compact set when tends to 0, to derive the following equality: i.e., In conclusion, we have proved that solves (1) in the weak sense.

5. Numerical Implementation

In this section, we use the iterative operator-splitting methods as in [10] to construct an approximate weak solution of the kinetic equation (1) as stated in Section 2.1, and we establish the rate of convergence of our algorithm to the exact solution of the kinetic equation. After fixing a time step and denoting , with , the scheme consists in defining the approximate solution of (1) as follows: where , with and , ; denotes the approximate solution of (1) at defined in Section 2.1.

Remark 2. By fixing , we obtain By fixing , we obtain as follows: where with Note that, for and for , , we obtain , , , , and , where , , , and are defined as in the descent algorithm (see Section 2.1).
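The first-order character of Lie (sequential) operator splitting, which underlies the scheme above, can be observed on a toy linear system where both sub-flows are explicit. This is a hypothetical 2×2 example of our own, with A playing the role of streaming and B of interaction; halving the step size roughly halves the splitting error.

```python
import math
import numpy as np

# Streaming part (nilpotent): exp(t*A) = I + t*A.
A = np.array([[0.0, 1.0], [0.0, 0.0]])

def expA(t):
    return np.eye(2) + t * A

# Interaction part (diagonal): exp(t*B) = diag(exp(-t), exp(-2t)).
def expB(t):
    return np.diag([math.exp(-t), math.exp(-2.0 * t)])

def exact(t, u0):
    # A + B = [[-1, 1], [0, -2]] is upper triangular, so its exponential
    # has the closed form below.
    e1, e2 = math.exp(-t), math.exp(-2.0 * t)
    return np.array([[e1, e1 - e2], [0.0, e2]]) @ u0

def lie_splitting(u0, T, n):
    tau = T / n
    u = u0.copy()
    for _ in range(n):
        u = expA(tau) @ (expB(tau) @ u)  # one split step: interaction, then streaming
    return u

u0, T = np.array([1.0, 1.0]), 1.0
err = lambda n: np.linalg.norm(lie_splitting(u0, T, n) - exact(T, u0))
print(err(100), err(200))  # halving tau roughly halves the error (first order)
```

Since A and B do not commute, the per-step defect is of size τ²·‖[A, B]‖, which accumulates to an O(τ) global error, matching the first-order estimate of the error analysis below.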

5.1. Error Analysis

An estimate of the rate of convergence of to the exact solution of (1) is given as follows.

Theorem 4. Assume that , , and satisfy, respectively, , , and , and that , for all ; . Then, we obtain the following error estimate: where is the error between the exact solution and the approximate solution , with and , and is a constant depending on , , , and .

Proof. We use the error function and the linearized equation of (1) via iterative splitting methods. Then, we have, for : Since and , Lemma 16 gives that and and We conclude that and .
Thus, where , , and are constants depending on , , , and . By using (267), we have We now use (270) and (271) to obtain with . As in (272), we have Next, from (269), (270), and (271), we derive Accordingly, where depends on and . Finally, we use (273) and (276) to conclude that for 

5.2. Numerical Example

In this section, we solve (1) when , , and , i.e., the following Boltzmann equation:

We suppose .

The initial datum is a probability density on .

Let us show that the function defined by is a solution of the kinetic equation (278). Indeed, with the initial datum

Moreover,

Hence, satisfies the kinetic equation (280).

Supposing fixed, one has with

Consequently, for all .
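The verification that a candidate function satisfies the kinetic equation can also be done numerically. As a sketch, we check with central finite differences that a hypothetical traveling profile f(t, x, v) = exp(−(x − vt)²), a stand-in of our own rather than the paper's solution (279), solves the free transport part f_t + v·f_x = 0:

```python
import numpy as np

# Hypothetical solution f(t, x, v) = exp(-(x - v*t)^2) of the free transport
# equation f_t + v * f_x = 0 (a stand-in used only to illustrate the
# verification procedure).
def f(t, x, v):
    return np.exp(-(x - v * t) ** 2)

t, x, v, h = 0.3, 0.7, 1.2, 1e-5
ft = (f(t + h, x, v) - f(t - h, x, v)) / (2 * h)  # central difference in t
fx = (f(t, x + h, v) - f(t, x - h, v)) / (2 * h)  # central difference in x
residual = ft + v * fx
print(abs(residual))  # should be tiny: O(h^2) discretization error only
```

Analytically, f_t = 2v(x − vt)f and f_x = −2(x − vt)f, so f_t + v·f_x vanishes exactly; the computed residual is only the finite-difference error.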

5.3. The Figures

Here, we perform the first-order approximation of the solution of (278) using the algorithm in (261). We limit ourselves to the first-order approximation for the sake of simplicity and to avoid very complex and cumbersome calculations. Numerical computations were carried out for , and the graphical representations of both the analytical solution and the approximate solution , produced with the Scilab software, show agreement between the approximate solution and the analytical solution.

Moreover, error tables obtained for some values of the time in the interval , for different values of the step , are presented below.

By a simple computation, the expression of appears as follows:

For some values of , we draw the figures of both the analytical solution and the numerical solution (approximate solution). We notice that Figure 1 represents an approximation of Figure 2 when , Figure 3 represents an approximation of Figure 4 when , and Figure 5 is an approximation of Figure 6 when .

The progression of the errors is measured in the and norms between the numerical solution and the analytical solution . One can notice the convergence of our method in the and norms as decreases to 0, as shown in Table 1.
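The discrete norms used in such an error table can be computed as follows. This is a self-contained sketch with stand-in data of our own, since the actual numerical and analytical solutions are produced by the Scilab code:

```python
import numpy as np

# Stand-in analytical profile f and perturbed "numerical" solution u1 on a
# uniform grid (hypothetical data, for illustrating the norm computations).
xs = np.linspace(0.0, 1.0, 201)
f = np.exp(-xs ** 2)
u1 = f + 0.01 * np.sin(np.pi * xs)

dx = xs[1] - xs[0]
err_L1 = np.sum(np.abs(u1 - f)) * dx  # discrete L1 norm of the error
err_inf = np.max(np.abs(u1 - f))      # discrete L-infinity norm of the error
print(err_L1, err_inf)
```

Recomputing these two numbers for several step sizes and times yields the entries of the error table; both should decrease as the step decreases.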

Appendix

Here, we give the regularity result necessary in the proof of Theorem 4.

Lemma 16. Assume that , , and satisfy, respectively, , , and , , for all and . Then, , for all and for all .

Moreover, and

Proof. Since , then , where is the complement of . In Section 2.1, we have , where and is the unique solution of the variational problem with Since , then .
The -optimal map that pushes forward to then satisfies We now use the explicit expression of (see Lemma 3): along with (A.5) and to get This implies that , and then . Thus, we deduce that .
Since , we obtain , for all . Finally, we obtain by induction , for all .
Since and have compact support and is strictly convex, the -optimal map that pushes forward to is differentiable, and we have Hence, satisfies the Jacobian equation: Since , the maximum principle gives . Then, we deduce that Note that is diagonalizable and has positive eigenvalues (see [1]). Then, we deduce that and . By using (A.8), we obtain

Data Availability

There are no data underlying the findings in this paper to be shared.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This subject was suggested to us by the late Professor and friend Martial Agueh, University of Victoria, Canada. The authors pay their heartfelt respect to his memory. This work was supported by IMSP under the CEA-SMA project grant.