Abstract

This paper presents a computational study of a quasi-Galerkin projection-based method to deal with a class of systems of random ordinary differential equations (r.o.d.e.'s) which are assumed to depend on a finite number of random variables (r.v.'s). This class of systems of r.o.d.e.'s appears in different areas, particularly in epidemiological modelling. In contrast with other available Galerkin-based techniques, such as generalized Polynomial Chaos, the proposed method expands the solution directly in terms of the random inputs rather than of auxiliary r.v.'s. Theoretically, Galerkin projection-based methods take advantage of orthogonality with the aim of simplifying the computations involved in solving r.o.d.e.'s, that is, in computing both the solution and its main statistical functions, such as the expectation and the standard deviation. This approach requires the previous determination of an orthonormal basis which, in practice, can become computationally burdensome and, as a consequence, can ruin the method. Motivated by this fact, we present a technique to deal with r.o.d.e.'s that avoids constructing an orthogonal basis and remains computationally competitive even when statistical dependence among the random input parameters is assumed. Through a wide range of examples, including a classical epidemiological model, we show the ability of the method to solve r.o.d.e.'s.

1. Introduction and Motivation

The nondeterministic nature of phenomena in areas such as engineering, physics, chemistry, and epidemiology often leads to continuous mathematical models formulated in terms of random ordinary differential equations (r.o.d.e.'s). The uncertainty in such phenomena enters through the input model parameters (coefficients, source/forcing term, initial/boundary conditions), which are then treated as random variables (r.v.'s) and stochastic processes (s.p.'s) rather than constants and ordinary functions, respectively. The complexity of such stochastic continuous models increases when the random inputs are assumed to be statistically dependent, a hypothesis often met in practice.

Solving a r.o.d.e. consists not only of computing its solution, which is a s.p., but also its main statistical characteristics, such as the mean and standard deviation/variance functions. To achieve these objectives, a large number of methods have been proposed, although we underline that most of them rely on the statistical independence of the random input parameters. A good account of such methods can be found in [1].

Since it shares common basics with the method that we will present later, here we highlight a family of powerful methods to deal with r.o.d.e.'s, usually referred to as generalized Polynomial Chaos (gPC). These techniques are based on the Galerkin projection method [2–6]. In this context, all involved random magnitudes are assumed to be elements of the set $L^2(\Omega,\mathcal{F},P)$, where $(\Omega,\mathcal{F},P)$ is a probability space. This means that its elements have finite variance. The set $L^2(\Omega,\mathcal{F},P)$ endowed with the inner product $\langle u,v\rangle = \mathbb{E}[uv]$, $u,v \in L^2(\Omega,\mathcal{F},P)$, where $\mathbb{E}[\cdot]$ denotes the expectation operator, is a Hilbert space [7]. The gPC method takes advantage of the following key property: a finite set of orthogonal functions $\{\Phi_i(\zeta)\}_{i=0}^{P}$ in $L^2(\Omega,\mathcal{F},P)$, which depend on a vector of r.v.'s $\zeta = (\zeta_1,\dots,\zeta_s)$, generates a finite-dimensional subspace of $L^2(\Omega,\mathcal{F},P)$. Then, given an element of $L^2(\Omega,\mathcal{F},P)$, which can be a r.v. $\eta$ or a s.p. $X(t)$, we can obtain an approximation of $\eta$ or $X(t)$ as its projection onto this subspace; that is,

$\eta \approx \sum_{i=0}^{P} \hat{\eta}_i\, \Phi_i(\zeta), \qquad X(t) \approx \sum_{i=0}^{P} \hat{X}_i(t)\, \Phi_i(\zeta), \qquad (1)$

where the so-called Fourier coefficients $\hat{\eta}_i$ and $\hat{X}_i(t)$ are given by

$\hat{\eta}_i = \frac{\langle \eta, \Phi_i \rangle}{\langle \Phi_i, \Phi_i \rangle}, \qquad \hat{X}_i(t) = \frac{\langle X(t), \Phi_i \rangle}{\langle \Phi_i, \Phi_i \rangle}, \qquad (2)$

respectively.
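To fix ideas, the following minimal Mathematica sketch computes the projection coefficients (2) of a random magnitude onto the span of a family of orthogonal polynomials; the standard Gaussian r.v. and the (probabilists') Hermite polynomials, which are orthogonal with respect to it, are illustrative choices of ours and are not prescribed by the paper.

    ip[u_, v_] := Expectation[u v, z \[Distributed] NormalDistribution[0, 1]];
    he[n_] := 2^(-n/2) HermiteH[n, z/Sqrt[2]];   (* probabilists' Hermite He_n(z) *)
    eta = Exp[z];                                (* an element of L^2 to be projected *)
    ord = 4;
    coeffs = Table[ip[eta, he[n]]/ip[he[n], he[n]], {n, 0, ord}];  (* Fourier coefficients, cf. (2) *)
    approx = coeffs.Table[he[n], {n, 0, ord}];   (* projection of eta onto span{He_0, ..., He_ord} *)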

In the following, we summarize how the gPC technique works when dealing with r.o.d.e.'s. Let us consider the r.o.d.e.

$\mathcal{L}(t, \zeta; X) = f(t; \zeta), \qquad (3)$

where $\mathcal{L}$ denotes a differential operator, the vector $\zeta = (\zeta_1,\dots,\zeta_s)$ represents the random parameters, $X = X(t;\zeta)$ is the solution s.p. that is to be determined, and $f$ is a forcing term. In this context, $P$ is usually referred to as the order of the chaos. In order to avoid cumbersome notation in the subsequent presentation, we will assume that $s = 1$; that is, there is only one random parameter: $\zeta$. In a first step, one substitutes the approximate representations of $X(t)$ and $f$ given by (1) into the r.o.d.e. (3). Second, one multiplies the r.o.d.e. by every element $\Phi_j(\zeta)$ of the expansion basis, and then the ensemble average defined by the inner product is taken. This leads to

$\Big\langle \mathcal{L}\Big(t, \zeta; \sum_{i=0}^{P} \hat{X}_i(t)\, \Phi_i(\zeta)\Big),\, \Phi_j(\zeta) \Big\rangle = \big\langle f(t;\zeta),\, \Phi_j(\zeta) \big\rangle, \qquad 0 \le j \le P, \qquad (4)$

which corresponds to a deterministic system of $P+1$ coupled differential equations whose unknowns are the node functions $\hat{X}_i(t)$. These unknowns can be computed by standard numerical techniques, such as a Runge-Kutta scheme, for instance. According to expression (1), this determines an approximation of the solution s.p. $X(t)$. Approximations to the expectation and variance functions of $X(t)$ are computed as

$\mathbb{E}[X(t)] \approx \hat{X}_0(t), \qquad \mathrm{Var}[X(t)] \approx \sum_{i=1}^{P} \big(\hat{X}_i(t)\big)^2\, \langle \Phi_i, \Phi_i \rangle, \qquad (5)$

respectively.
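A minimal Mathematica sketch of this Galerkin procedure, assuming the illustrative scalar linear model $\dot{X}(t) = -A\,X(t)$, $X(0) = 1$, with $A = \mu + \sigma Z$ and $Z$ a standard Gaussian r.v. (the model and all parameter values below are ours, not taken from the paper), could read as follows:

    mu = 1; sigma = 1/5; ord = 3;                  (* chaos order P = 3 *)
    he[n_] := 2^(-n/2) HermiteH[n, z/Sqrt[2]];     (* probabilists' Hermite basis *)
    ip[u_, v_] := Expectation[u v, z \[Distributed] NormalDistribution[0, 1]];
    galerkin = Table[ip[(mu + sigma z) he[k], he[j]]/ip[he[j], he[j]], {j, 0, ord}, {k, 0, ord}];
    xs = Table[x[k][t], {k, 0, ord}];
    odes = Thread[D[xs, t] == -galerkin.xs];                   (* deterministic system, cf. (4) *)
    ics = Thread[(xs /. t -> 0) == UnitVector[ord + 1, 1]];    (* X(0) = 1 is deterministic *)
    sol = First[NDSolve[Join[odes, ics], Table[x[k], {k, 0, ord}], {t, 0, 2}]];
    mean[t_] = x[0][t] /. sol;                                 (* E[X(t)] ~ x_0(t), cf. (5) *)
    var[t_] = Sum[(x[k][t] /. sol)^2 k!, {k, 1, ord}];         (* Var[X(t)] ~ sum x_k^2 k!, cf. (5) *)

Here $k!$ is the Gaussian norm $\langle \mathrm{He}_k, \mathrm{He}_k \rangle$ of the probabilists' Hermite polynomials, so the last two lines are exactly the statistics in (5).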

Now, we remark on some aspects of gPC that will help to better highlight the differences with the method presented in the next section. When dealing with r.o.d.e.'s that depend on a number of random parameters, in general, the gPC method represents these parameters, as well as the solution s.p. $X(t)$, in terms of a family of auxiliary r.v.'s $\zeta = (\zeta_1,\dots,\zeta_s)$ rather than directly in terms of the random inputs. The selection of the r.v.'s $\zeta_i$ as well as of the orthogonal basis $\{\Phi_i\}$ must be made according to the probability distributions of the random inputs. In [3], the authors provide a comprehensive criterion for making this choice in such a way that an exponential convergence of the error measures for the average and variance of the approximations of the solution holds. In [3], the usefulness of this guideline is shown when there is just one random input parameter and it belongs to one of the standard distributions (Poisson, binomial, Gaussian, beta, exponential, etc.). In the usual case where two or more r.v.'s appear as input parameters, no optimal criterion has been established, even assuming that the involved r.v.'s have standard probability distributions.

From a theoretical point of view, the orthogonality of the basis functions used in the previous development makes it possible to cancel some of the computations involved (inner products) when setting up the deterministic system of o.d.e.'s (4), although in practice the total number of cancelations depends strongly on the specific form of the r.o.d.e. (3). However, this theoretical advantage may not compensate for the high computational cost that the orthogonalization process usually entails. Moreover, orthogonalization is a critical part of the process, and the computed basis may suffer a loss of orthogonality, which may ruin the computation of the involved inner products and the building and resolution of system (4) [8, pp. 230–232]. This leads us to present a method, based on the Galerkin projection, that overcomes this drawback. The study focuses on computational aspects.

The paper is organized as follows. In Section 2, we present the Galerkin projection-based method that allows us to solve a certain class of systems of r.o.d.e.'s whose uncertainty is introduced through a finite number of dependent r.v.'s. Expressions for the expectation and the variance of the solution s.p. are also given. Section 3 presents the algorithm of the method proposed in Section 2 and discusses its most relevant computational aspects. Section 4 begins with a test example for which the exact expressions of the mean and variance are available, so that the quality of the approximations provided by the proposed method can be properly assessed. This comparative study is completed by showing the competitiveness of our approach against Monte Carlo simulation. The rest of the section is devoted to a wide range of examples, involving both r.o.d.e.'s and systems of r.o.d.e.'s, where the proposed method is satisfactorily applied. In all these examples, randomness is considered through different joint probability distributions to better illustrate the power of the proposed method. Conclusions are drawn in Section 5.

2. Method

Hereinafter, we will consider systems of r.o.d.e.'s of the form

$\dot{Y}(t) - F(Y(t), t; A) = \mathbf{0}, \qquad Y(t_0) = Y_0, \qquad (6)$

where $t$ is the independent variable and $A = (A_1,\dots,A_m)^T$ denotes the vector of random input parameters, which can appear in the coefficients and/or the forcing term of the r.o.d.e. as well as in the initial condition. The components $A_i$, $1 \le i \le m$, of $A$ are assumed to be elements of the Hilbert space $L^2(\Omega,\mathcal{F},P)$. $f_A(a)$ will denote the joint probability density function (p.d.f.) of $A$. Also, $\mathbf{0}$ is the null vector of size $r$; $Y(t) = (Y_1(t),\dots,Y_r(t))^T$ and $Y_0 = (Y_{0,1},\dots,Y_{0,r})^T$ are the vectors of unknown functions and initial conditions, respectively. Here, $(\cdot)^T$ denotes the transpose operator for vectors and matrices. $F$ is a map which is assumed to be a polynomial transformation of $Y_i$, $1 \le i \le r$, and $A_j$, $1 \le j \le m$. At this point, notice that an important feature of our approach is that we do not assume statistical independence among the input parameters.

As in this paper we are concerned with constructive computational aspects, we will assume that the conditions for the existence and uniqueness of a solution stochastic process to the initial value problem (6) are satisfied. We point out that the available results mainly consist of a natural generalization of the classical Picard theorem, based upon the convergence in the space $L^2(\Omega,\mathcal{F},P)$ of the successive approximations [1, page 118].

In $L^2(\Omega,\mathcal{F},P)$, we consider the linear subspace spanned by the canonical polynomials of degree equal to or less than $p$ in the random model parameters $A_i$, $1 \le i \le m$; that is,

$\mathcal{B} = \big\{ A_1^{i_1} A_2^{i_2} \cdots A_m^{i_m} :\ 0 \le i_1 + i_2 + \cdots + i_m \le p \big\}. \qquad (7)$

The number of elements of the set $\mathcal{B}$ is $q = \binom{m+p}{m}$.

Next, we approximate the unknown s.p. $Y(t)$ of (6) in terms of the elements $\phi_1(A),\dots,\phi_q(A)$ of $\mathcal{B}$ (thus, directly in terms of the random model parameters):

$Y(t) \approx Y^{(p)}(t) = \sum_{k=1}^{q} U_k(t)\, \phi_k(A). \qquad (8)$

Therefore,

$\dot{Y}^{(p)}(t) = \sum_{k=1}^{q} \dot{U}_k(t)\, \phi_k(A). \qquad (9)$

We substitute both expressions into the r.o.d.e. (6):

$\sum_{k=1}^{q} \dot{U}_k(t)\, \phi_k(A) - F\Big( \sum_{k=1}^{q} U_k(t)\, \phi_k(A),\, t;\, A \Big) = \mathbf{0}. \qquad (10)$

In order to compute the unknown functions $U_k(t)$, we are going to establish a deterministic system of o.d.e.'s. This is achieved by multiplying the previous expressions by the elements of the set $\mathcal{B}$ and then taking the ensemble average defined by the $m$-dimensional integral

$\langle u, v \rangle = \int_{D(A)} u(a)\, v(a)\, f_A(a)\, \mathrm{d}a, \qquad (11)$

where $D(A) \subseteq \mathbb{R}^m$ denotes the support of the vector of random model parameters. This leads to

$\sum_{k=1}^{q} \langle \phi_k, \phi_j \rangle\, \dot{U}_k(t) = \Big\langle F\Big( \sum_{k=1}^{q} U_k(t)\, \phi_k(A),\, t;\, A \Big),\, \phi_j \Big\rangle, \qquad 1 \le j \le q. \qquad (12)$

As we are directly developing the involved random magnitudes in terms of $A_i$, $1 \le i \le m$, in contrast with the development presented in Section 1 (see expression (4)), now we do not need to substitute the components of the vector of random model parameters $A$. Likewise, we can establish the corresponding deterministic initial conditions following a similar reasoning. First, we consider the truncated approximation of the initial condition,

$Y_0 \approx Y_0^{(p)} = \sum_{k=1}^{q} U_k(t_0)\, \phi_k(A), \qquad (13)$

and, following an analogous reasoning, one gets

$\sum_{k=1}^{q} \langle \phi_k, \phi_j \rangle\, U_k(t_0) = \langle Y_0, \phi_j \rangle, \qquad 1 \le j \le q. \qquad (14)$

Expressions (12)–(14) constitute a deterministic initial value problem (i.v.p.), usually referred to as the auxiliary system, whose unknowns $U_k(t)$, $1 \le k \le q$, can be computed using one of the numerous numerical techniques available, such as a Runge-Kutta scheme. In this manner, an approximation to the solution s.p. defined by (8) is obtained. In addition, the approximations of the expectation and the variance-covariance matrix of $Y(t)$ are given by

$\mathbb{E}[Y(t)] \approx \mathbb{E}\big[Y^{(p)}(t)\big] = \sum_{k=1}^{q} U_k(t)\, \mathbb{E}[\phi_k(A)], \qquad (15)$

$\mathrm{Cov}\big[Y^{(p)}(t)\big] = \sum_{k=1}^{q} \sum_{l=1}^{q} U_k(t)\, \big(U_l(t)\big)^T \langle \phi_k, \phi_l \rangle - \mathbb{E}\big[Y^{(p)}(t)\big]\, \Big(\mathbb{E}\big[Y^{(p)}(t)\big]\Big)^T, \qquad (16)$

respectively. Notice that (16) is a square matrix of size $r \times r$ whose $i$-th diagonal element represents the variance of the $i$-th component $Y_i^{(p)}(t)$ of $Y^{(p)}(t)$.
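As a concrete illustration, the following Mathematica sketch assembles and solves the auxiliary system (12)–(14) and evaluates (15)-(16) for the scalar linear model $\dot{Y}(t) = -A\,Y(t)$, $Y(0) = 1$, with a single random input $A$ and the canonical (non-orthogonal) basis $\{1, A, \dots, A^p\}$; the distribution of $A$ is an assumption of ours for the sake of the example.

    p = 3;
    dist = NormalDistribution[1, 1/10];                           (* assumed p.d.f. of A *)
    mom[k_] := Expectation[a^k, a \[Distributed] dist];           (* raw moments E[A^k] *)
    gram = Table[mom[i + j], {j, 0, p}, {i, 0, p}];               (* <A^i, A^j>, cf. (11) *)
    shift = Table[mom[i + j + 1], {j, 0, p}, {i, 0, p}];          (* <A^{i+1}, A^j> *)
    us = Table[u[i][t], {i, 0, p}];
    odes = Thread[D[us, t] == -Inverse[gram].shift.us];           (* Galerkin system (12) *)
    ics = Thread[(us /. t -> 0) == LinearSolve[gram, Table[mom[j], {j, 0, p}]]];  (* (14) *)
    sol = First[NDSolve[Join[odes, ics], Table[u[i], {i, 0, p}], {t, 0, 2}]];
    mean[t_] = Sum[u[i][t] mom[i], {i, 0, p}] /. sol;             (* expression (15) *)
    var[t_] = (Sum[u[i][t] u[j][t] mom[i + j], {i, 0, p}, {j, 0, p}] /. sol) - mean[t]^2;  (* (16) *)

Since the initial condition is deterministic, LinearSolve returns exactly $(1, 0, \dots, 0)$; for this linear model, the Gram matrix is constant, so system (12) is put in explicit form by a single matrix inversion.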

Remark 1. For the sake of clarity in the presentation, we have used the multi-index notation to express the polynomial basis, $\phi_k(A) = A^{\mathbf{i}} = A_1^{i_1} A_2^{i_2} \cdots A_m^{i_m}$, with $\mathbf{i} = (i_1,\dots,i_m)$, $0 \le i_1 + \cdots + i_m \le p$. As a consequence, we have represented the solution s.p. in the form (8) rather than (1), where $\Phi_i$, $0 \le i \le P$, are the elements of the corresponding basis. However, notice that both expressions are equivalent except for orthogonality. In fact, writing $\Phi_k(A)$ instead of $\Phi_k(\zeta)$ (since now we are directly using the vector of random model parameters to represent the solution), this identification can be checked by considering the simple tensor-product structure of the monomials together with a bijection $k \mapsto \mathbf{i}(k)$ between the single index $k$, $1 \le k \le q$, and the multi-index $\mathbf{i}$. Specifically, one gets the identification $\Phi_k(A) = \phi_k(A) = A^{\mathbf{i}(k)}$.

3. Algorithm

Based on the method developed above, in this section we give an algorithm to compute the expectation and the standard deviation of the solution s.p. of a random i.v.p. of the form (6), whose analytical expressions are given by (15) and (16), respectively.

The inputs of the algorithm are as follows:
(i) the r.o.d.e.'s (model): $\dot{Y}(t) = F(Y(t), t; A)$, with random initial condition $Y(t_0) = Y_0$;
(ii) the random model parameters (these include both the coefficients/forcing terms of the r.o.d.e. and the initial conditions): $A = (A_1,\dots,A_m)^T$;
(iii) the joint probability density function (p.d.f.) of $A$: $f_A(a)$;
(iv) the truncation order $p$ of the polynomial expansion of the random model parameters (see expression (7)).

Now, we describe the algorithm.
(i) Step 1. Define the scalar product given by (11).
(ii) Step 2. Build the basis $\mathcal{B}$ of canonical polynomials of degree equal to or less than $p$ in the random model parameters, as given in (7).
(iii) Step 3. Consider the truncated expansions of the solution s.p., its derivative, and the random initial condition given by (8), (9), and (13), respectively.
(iv) Step 4. Substitute the above expansions into the model to obtain the random i.v.p. (10) and (13).
(v) Step 5. Obtain the auxiliary system in two phases: first, multiplying each equation of the random i.v.p. (10) and (13) by the polynomials of the basis built in Step 2 and, second, taking the ensemble average defined by the inner product constructed in Step 1. In this manner, expressions (12) and (14) are obtained, respectively.
(vi) Step 6. Solve numerically the auxiliary system set up in Step 5.
(vii) Step 7. Compute the expectation and the standard deviation (or, equivalently, the variance) of the solution s.p. from expressions (15) and (16).
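For two dependent random inputs, Steps 1 and 2 can be set up directly, as in the following Mathematica sketch; the bivariate Gaussian joint p.d.f. below is an illustrative assumption of ours, the scalar product mirrors the 2-dimensional integral (11), and the basis mirrors (7).

    p = 2;
    fA = PDF[MultinormalDistribution[{1, 2}, {{1, 1/2}, {1/2, 2}}], {a1, a2}];  (* assumed joint p.d.f. *)
    ip[u_, v_] := Integrate[u v fA, {a1, -Infinity, Infinity}, {a2, -Infinity, Infinity}];  (* Step 1, cf. (11) *)
    basis = Flatten[Table[a1^i a2^j, {i, 0, p}, {j, 0, p - i}]];  (* Step 2: total degree <= p, cf. (7) *)
    Length[basis]                 (* q = Binomial[2 + p, 2] = 6 canonical polynomials *)
    ip[basis[[4]], basis[[2]]]    (* e.g. <A1, A2> = E[A1 A2] = 1/2 + 1*2 = 5/2 *)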

3.1. Computational Aspects and Implementation

From the computational point of view, Step 5 of the above algorithm (the building of the auxiliary system) is the most demanding, because we have to evaluate many inner products that may involve unbounded integrals of transcendental functions. During this step, using the linearity of the inner product, we can express the equations of the auxiliary system in terms of the simplest inner products, namely those of the form

$\big\langle A_1^{i_1} A_2^{i_2} \cdots A_m^{i_m},\, 1 \big\rangle = \mathbb{E}\big[ A_1^{i_1} A_2^{i_2} \cdots A_m^{i_m} \big], \qquad (19)$

which are the least difficult to compute, although still not straightforward. This fact has led us to work directly with the canonical basis defined in (7), as opposed to other possibilities, for instance an orthonormal basis, which is theoretically more appropriate: any advantage of orthogonality eventually disappears once the expressions are expanded. Moreover, and specifically in the case of an orthonormal basis, its calculation relies on a QR decomposition, which is stable in certain implementations but computationally expensive [8], all the more so if we bear in mind that we deal with inner products that may be defined by unbounded integrals of transcendental functions.
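By linearity, any inner product between polynomials in the inputs collapses into a combination of the basic moments (19). A small Mathematica sketch of this decomposition, where the placeholder head e[i, j] (a name of ours) stands for $\mathbb{E}[A_1^i A_2^j]$ so that the moments stay unevaluated:

    ipSym[q1_, q2_] := Total[(Last[#] e @@ First[#]) & /@ CoefficientRules[Expand[q1 q2], {a1, a2}]];
    ipSym[a1 + a2^2, a1 a2]     (* returns e[2, 1] + e[1, 3], i.e. E[A1^2 A2] + E[A1 A2^3] *)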

The algorithm has been implemented using Mathematica [9], and it is available at http://gpcdep.imm.upv.es. We have used the symbolic manipulation features of this computational software program to optimise the performance of the algorithm. Therefore, even though the natural implementation order seems to be the one shown in the algorithm above, we have decided to implement a symbolic version of Steps 1, 3, 4, and 5 as a first stage, in which all the inner products are decomposed into auxiliary inner products of the form (19) and these new inner products are not evaluated (they are manipulated symbolically). In doing so, we have observed that many of the inner products are repeated. In fact, in Examples 2–8 of Section 4, only around 2%–20% of them are distinct, which implies an important computational saving in the most critical step. The greater the independence among the r.v.'s, the greater the saving, because the inner products corresponding to independent r.v.'s can be computed separately. Table 1 collects the percentages of computational saving related to the inner products involved in the examples shown in Section 4.
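A sketch of the kind of caching that exploits this repetition: with memoized definitions, each distinct moment of the form (19) is evaluated only once, no matter how many times it reappears across the auxiliary system (the joint distribution below is again an illustrative assumption of ours).

    dist = MultinormalDistribution[{1, 2}, {{1, 1/2}, {1/2, 2}}];
    mom[i_, j_] := mom[i, j] = Expectation[a1^i a2^j, {a1, a2} \[Distributed] dist];
    (* the first call with a given (i, j) computes the integral; later calls are lookups *)
    (* if A1 and A2 were independent, mom[i, j] would factor as E[A1^i] E[A2^j], and each
       one-dimensional moment could be computed separately *)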

Once the auxiliary system has been stated symbolically, we define the inner product and try to use the simplification commands provided by Mathematica to obtain the inner products (19) in a form that can be computed faster. This is usually easy when the inner product is defined on a bounded domain. On unbounded domains, simplification is often not possible, and the computation of the inner products becomes difficult to carry out as the degree of the polynomial increases.

Then, we compute the nonrepeated inner products of the form (19) appearing in the auxiliary system. We must point out that this is the most demanding step.

Next, we substitute the obtained inner products into the auxiliary system, which is then solved numerically using the NDSolve command. Finally, the expectation and the standard deviation are computed.

4. Examples

In this section, we present a variety of examples where the method developed above is applied. With the aim of better illustrating the ability of the method to deal with a certain number of dependent r.v.'s, in Examples 1–6 we consider the following pattern model:

$\dot{Y}(t) = A_1 + A_2\, Y(t) + A_3\, \big(Y(t)\big)^2, \qquad Y(0) = A_4, \qquad (20)$

which is a Riccati-type r.o.d.e. that includes the linear model as a particular case when $A_3 = 0$. In these examples, we will increase the number of parameters $A_i$, $1 \le i \le 4$, that are assumed to be r.v.'s. Different joint probability distributions for the involved r.v.'s will also be considered.
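For the Riccati nonlinearity, the Galerkin projection of the quadratic term produces products of basis elements; with a single random coefficient and the canonical basis, these all reduce to higher moments $\mathbb{E}[A^{i+k+j}]$. A minimal Mathematica sketch, assuming (only for illustration) that the linear coefficient is the sole random parameter, i.e. $\dot{Y} = c_1 + A\,Y + c_3 Y^2$, $Y(0) = y_0$, with distribution and values of ours:

    p = 3; c1 = 1; c3 = -1/2; y0 = 1/10;
    dist = NormalDistribution[-1, 1/10];                        (* assumed p.d.f. of A *)
    mom[k_] := mom[k] = Expectation[a^k, a \[Distributed] dist];
    gram = Table[mom[i + j], {j, 0, p}, {i, 0, p}];
    us = Table[u[i][t], {i, 0, p}];
    rhs = Table[c1 mom[j]                                       (* <c1, A^j> *)
         + Sum[u[i][t] mom[i + j + 1], {i, 0, p}]               (* <A Y, A^j> *)
         + c3 Sum[u[i][t] u[k][t] mom[i + k + j], {i, 0, p}, {k, 0, p}],  (* <c3 Y^2, A^j> *)
        {j, 0, p}];
    odes = Thread[D[us, t] == Inverse[gram].rhs];               (* nonlinear auxiliary system, cf. (12) *)
    ics = Thread[(us /. t -> 0) == y0 UnitVector[p + 1, 1]];    (* Y(0) = y0 deterministic *)
    sol = First[NDSolve[Join[odes, ics], Table[u[i], {i, 0, p}], {t, 0, 2}]];
    mean[t_] = Sum[u[i][t] mom[i], {i, 0, p}] /. sol;           (* cf. (15) *)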

The first two are test examples, since their exact solutions are available; thus, the quality of the approximations provided by our approach can be properly assessed. In the first one, we compare the approximations provided by our approach with those obtained by Monte Carlo simulation, which can be considered the most commonly used approach to deal with r.o.d.e.'s. In the second test example, we detail all the computations involved in order to clarify the notation and methodology introduced above.

In Example 7, we illustrate the technique using another r.o.d.e. Finally, we complete the set of examples by showing the ability of the proposed technique to deal with a classical SIRS epidemiological model.

The Mathematica source code, its explanation, and its application to solve Examples 1–7 can be found at http://gpcdep.imm.upv.es.

Example 1. Let us consider a linear r.o.d.e. of the form (21), where the input random vector is assumed to have a bivariate Gaussian distribution. Denoting by $f$ the joint p.d.f. of this vector, and taking into account that the exact solution s.p. of (21) is known in closed form, its (exact) expectation and variance can be computed explicitly.
In order to illustrate the competitiveness of the proposed method against Monte Carlo simulation, Figures 1 and 2 show the relative errors of both approaches with respect to the exact expectation and standard deviation, respectively, on the considered time interval. To better highlight the differences, a logarithmic scale has been used on the vertical axis of both plots. In accordance with the notation introduced in the previous section, the approximations to both statistical moments have been carried out with $m = 2$ and $p = 10$ (i.e., polynomials of degree equal to or less than 10 have been used), whereas the Monte Carlo approximations have been performed using a large number of simulations. These plots show that the proposed method provides very good approximations, which improve on those generated by the Monte Carlo technique.
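For reference, a crude Monte Carlo estimator of the mean and standard deviation for a model of this type can be sketched as follows; the linear model $\dot{Y} = -A\,Y$, $Y(0) = Y_0$ with exact solution $Y_0 e^{-A t}$, the bivariate Gaussian parameters, and the sample size are all illustrative assumptions of ours, not the data of this example.

    samples = RandomVariate[
       MultinormalDistribution[{1, 1}, {{1/100, 1/200}, {1/200, 1/25}}], 10^5];
    paths[t_] := samples[[All, 1]] Exp[-samples[[All, 2]] t];   (* Y0 Exp[-A t], one value per draw *)
    mcMean[t_] := Mean[paths[t]];
    mcSD[t_] := StandardDeviation[paths[t]];

The sampling error of such an estimator decays only like the reciprocal square root of the number of simulations, which explains why a moderate-order Galerkin expansion can outperform it.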

Example 2. Let us consider a r.o.d.e. of the form (20) whose two input parameters are dependent r.v.'s with a known joint p.d.f. In Figure 3, we show the approximations of the expectation (solid line) and of the expectation plus/minus the standard deviation (dashed lines) of the solution s.p. Both approximations have been computed using polynomials of degree equal to or less than 3; then, according to the previous notation, $m = 2$ and $p = 3$, and the total number of polynomials in the canonical basis is $q = \binom{2+3}{2} = 10$, namely the monomials in the two random parameters of total degree at most 3.

The quality of the approximations to the expectation and the standard deviation can be assessed in this example, since the corresponding exact expressions are available. Figures 4(a) and 4(b) show the absolute errors of the approximations to the expectation and the standard deviation of the solution s.p., respectively. Both plots show that the approximations provided by the method presented in Section 2 are reliable.

Example 3. Let us consider the r.o.d.e. (20) where now uncertainty appears through two of its coefficients, which are assumed to form a random vector with a bivariate Gaussian distribution. Figure 5 shows the approximations of the expectation (solid line) and of the expectation plus/minus the standard deviation (dashed lines) of the solution s.p. In this example, both approximations have been computed using polynomials of degree equal to or less than 2; according to the previous notation, one has $m = 2$ and $p = 2$.

Example 4. Now, we analyse the r.o.d.e. (20) where randomness is considered through three random parameters with a given joint p.d.f. Using the method presented in Section 2 with polynomials of degree equal to or less than 3 (that is, $m = 3$ and $p = 3$), we have obtained approximations to the expectation and to the expectation plus/minus the standard deviation of the solution s.p. Results are depicted in Figure 6.

Example 5. In this example, we deal with the r.o.d.e. (20) assuming that uncertainty appears through all the parameters $A_i$, $1 \le i \le 4$, which are grouped into two random vectors with multivariate Gaussian distributions. Figure 7 shows the approximations of the expectation (solid line) and of the expectation plus/minus the standard deviation (dashed lines) of the solution s.p. In this example, both approximations have been computed using polynomials of degree equal to or less than 2; according to the previous notation, one has $m = 4$ and $p = 2$.

Example 6. In this example, we deal with the r.o.d.e. (20) assuming again that uncertainty appears through all the parameters $A_i$, $1 \le i \le 4$, which now form a single random vector with a multivariate Gaussian distribution. Figure 8 shows the approximations of the expectation (solid line) and of the expectation plus/minus the standard deviation (dashed lines) of the solution s.p. Both approximations have been computed using polynomials of degree equal to or less than 2; according to the previous notation, one has $m = 4$ and $p = 2$.

Example 7. In this example, we study a nonlinear r.o.d.e. whose input random vector is assumed to have a bivariate Gaussian distribution. Figure 9 shows the approximations of the expectation (solid line) and of the expectation plus/minus the standard deviation (dashed lines) of the solution s.p. Both approximations have been computed using polynomials of degree equal to or less than 2; according to the previous notation, one has $m = 2$ and $p = 2$.

Example 8. In this example, we consider the SIRS (Susceptible-Infectious-Recovered-Susceptible) model for the transmission dynamics of the Respiratory Syncytial Virus (RSV) proposed by Weber et al. in [10]. This model is based on the following nonautonomous system of differential equations:

$\dot{S}(t) = \mu - \mu S(t) - \beta(t)\, S(t)\, I(t) + \gamma R(t),$
$\dot{I}(t) = \beta(t)\, S(t)\, I(t) - \nu I(t) - \mu I(t),$
$\dot{R}(t) = \nu I(t) - \gamma R(t) - \mu R(t),$

where $S(t)$, $I(t)$, and $R(t)$ are the proportions of susceptible, infectious, and recovered individuals, respectively; $\mu$ is the birth rate, which is supposed to be equal to the mortality rate; $\gamma$ is the rate of loss of immunity; $\nu$ is the rate of loss of infectiousness; and $\beta(t) = b_0\big(1 + b_1 \cos(2\pi t + \varphi)\big)$ is the infection transmission rate, with $b_0$ being the average transmission rate, $b_1$ the amplitude of the seasonal fluctuation, and $\varphi$ the seasonal phase.
In [10], the authors study the spread of RSV in Finland, obtaining fitted values for the parameters $\mu$, $\gamma$, $\nu$, $b_0$, $b_1$, and $\varphi$. We will assume that the random parameters form two random vectors with bivariate Gaussian distributions, chosen so that all the above r.v.'s have as their means the deterministic values given in [10].
Now, we assume that a given percentage of the total population is initially infectious, so the initial conditions $S(0)$, $I(0)$, and $R(0)$ are fixed accordingly.
In Figures 10, 11, and 12, we have plotted the approximations of the expectation (solid line) and of the expectation plus/minus the standard deviation (dashed lines) of the susceptible ($S$), infectious ($I$), and recovered ($R$) proportions, respectively. These approximations have been computed using polynomials of degree equal to or less than 2; according to the previous notation, one has $m = 4$ and $p = 2$.
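As a building block, the underlying deterministic SIRS system, solved at a single draw of the parameters, can be sketched in Mathematica as follows; every numerical value below is an illustrative placeholder of ours, not one of the fitted values of [10].

    mu = 1/25; gam = 2; nu = 36; b0 = 45; b1 = 1/3; phi = 1/2;  (* hypothetical values *)
    beta[t_] := b0 (1 + b1 Cos[2 Pi t + phi]);
    i0 = 1/1000;                                                (* assumed initial infectious fraction *)
    sirs = First[NDSolve[{
         s'[t] == mu - mu s[t] - beta[t] s[t] i[t] + gam r[t],
         i'[t] == beta[t] s[t] i[t] - nu i[t] - mu i[t],
         r'[t] == nu i[t] - gam r[t] - mu r[t],
         s[0] == 1 - i0, i[0] == i0, r[0] == 0},
        {s, i, r}, {t, 0, 10}]];
    Plot[Evaluate[{s[t], i[t], r[t]} /. sirs], {t, 0, 10}]

The Galerkin machinery of Section 2 then projects this vector field onto the canonical basis exactly as in (12)–(14), with one set of deterministic coefficient functions per compartment.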

5. Conclusions

In this paper, we have presented a method to deal with dependent randomness in a class of continuous models (systems of random ordinary differential equations), focusing on computational aspects and on its applicability to a wide range of examples. The method shares its basics with generalized Polynomial Chaos (gPC), since it is based on a variation of the Galerkin projection technique. However, the first main difference with respect to the gPC approach is that it represents both the random model inputs and the solution stochastic process directly in terms of the random model parameters; in this way, accurate approximations of the expectation and standard deviation of the solution are provided. Second, the method avoids constructing an orthogonal basis to set up such representations. In this manner, one of the computational bottlenecks of gPC, namely the construction of the so-called auxiliary system (see expressions (12)–(14)) from the orthogonal basis, is avoided. Finally, we have shown through a wide variety of examples the ability of the method to deal successfully with random continuous models whose uncertainty is given by statistically dependent random variables, which is the most complex situation.

Acknowledgments

This work has been partially supported by the Spanish M.C.Y.T. Grants: DPI2010-20891-C02-01 and FIS PI-10/01433; the Universitat Politècnica de València Grant: PAID06-11-2070, and the Universitat de València Grant: UV-INV-PRECOMP12-80708.