Journal of Applied Mathematics
Volume 2018, Article ID 7403745, 23 pages
https://doi.org/10.1155/2018/7403745
Research Article

Numerical Procedures for Random Differential Equations

1Mathematical Modeling and Control, Department of Mathematics, Faculty of Sciences and Techniques of Tangier, Abdelmalek Essaadi University, Tangier, Morocco
2Research Center STIS, Department of Applied Mathematics and Informatics, ENSET, Mohammed V University, Rabat, Morocco
3Department of Mechanical Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Saudi Arabia
4LTI, Ecole Nationale des Sciences Appliquées de Tanger, Abdelmalek Essaadi University, Tangier, Morocco

Correspondence should be addressed to Lahcen Azrar; l.azrar@um5s.net.ma

Received 2 February 2018; Accepted 26 March 2018; Published 21 May 2018

Academic Editor: M. Shariyat

Copyright © 2018 Mohamed Ben Said et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Some methodological approaches based on generalized polynomial chaos for linear differential equations with random parameters following various types of distribution laws are proposed. Mainly, an internal random coefficients method (IRCM) is elaborated for a large number of random parameters. A procedure to build a new polynomial chaos basis and a connection between the one-dimensional and multidimensional polynomials are developed. This allows easy handling of random parameters with various laws. A compact matrix formulation is given, and the required matrices and scalar products are explicitly presented. For random excitations with an arbitrary number of uncertain variables, the IRCM is coupled to the superposition method, leading to successive random differential equations with the same main random operator and right-hand sides depending only on one random parameter. This methodological approach leads to equations with a reduced number of random variables and thus to a large reduction of the CPU time and memory required for the numerical solution. The conditional expectation method, as well as the Monte-Carlo procedure, is also elaborated to provide reference solutions. The applicability and effectiveness of the developed methods are demonstrated by some numerical examples.

1. Introduction

Stochastic and random differential equations constitute a growing field of great scientific interest. There are mainly three categories of random differential equations. The first and simplest class is one where only the initial conditions are random. The second class is characterized by the presence of random nonhomogeneous or input terms, and the third consists of differential equations with random coefficients. To deal with errors and uncertainties, random coefficients have been increasingly used in the last few decades.

This paper focuses on the combined second and third classes because this type of equation offers a natural and rational approach to the mathematical modeling of many physical phenomena. The last decades have witnessed an enormous effort in the fields of parameter uncertainty and random or stochastic differential processes. This is due to the fact that any physical system contains uncertainties, and its real phenomena may be modeled by stochastic differential equations with random or stochastic process coefficients. These equations take into account the approximate knowledge of the numerical values of the physical parameters on which the system depends and have been a matter of intensive investigation.

A number of techniques are available for uncertainty sensitivity and propagation, such as the Monte-Carlo procedure [1, 2], sensitivity analysis methods [3], and polynomial chaos [4, 5], among others. The Monte-Carlo (MC) method has been the mainstream uncertainty quantification technique for decades. It is the most widely used method and is valid for a wide range of problems. However, it is computationally expensive since it requires a large number of simulations using the full model.
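As an illustration of the cost, a minimal Monte-Carlo sketch for a second-order equation with random coefficients is given below; the oscillator, the uniform parameter laws, and the sinusoidal load are illustrative assumptions, not taken from this paper.

```python
# Minimal Monte-Carlo sketch for a random oscillator
# m(xi) u'' + c u' + k(xi) u = f(t); all values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n_samples = 1_000
t_eval = np.linspace(0.0, 10.0, 200)

def solve_one(m, k):
    # First-order form y = (u, u'); f(t) = sin(t) as a placeholder load.
    rhs = lambda t, y: [y[1], (np.sin(t) - 0.05 * y[1] - k * y[0]) / m]
    return solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], t_eval=t_eval).y[0]

# Sample uncertain mass and stiffness (uniform laws, for illustration)
# and solve the full model once per sample.
samples = np.array([solve_one(m, k)
                    for m, k in zip(rng.uniform(0.9, 1.1, n_samples),
                                    rng.uniform(0.8, 1.2, n_samples))])
mean, var = samples.mean(axis=0), samples.var(axis=0)
```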

An alternative approach is based on the expansion of the response in terms of a series of polynomials that are orthogonal with respect to mean value operations. Polynomial chaos was first introduced by Wiener [6], where Hermite polynomials were used to model stochastic processes with Gaussian random variables. A number of other expansions have been proposed in the literature for representing non-Gaussian processes [7, 8]. Recent review papers by Stefanou [9] and by Schuëller and Pradlwarter [10] summarize the past and current status of procedures for stochastic structural analysis.
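The orthogonality underlying the Hermite chaos is easy to check numerically. The short sketch below verifies $E[He_j(X)\,He_k(X)] = j!\,\delta_{jk}$ for a standard Gaussian $X$, using the probabilists' Hermite module of NumPy.

```python
# Probabilists' Hermite polynomials He_n are orthogonal with respect to
# the standard Gaussian weight: E[He_j(X) He_k(X)] = j! * delta_jk.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial

nodes, weights = hermegauss(20)            # Gauss quadrature for exp(-x^2/2)
weights = weights / np.sqrt(2 * np.pi)     # normalize to the N(0,1) density

def he(n, x):
    # Evaluate He_n via a coefficient vector selecting the n-th polynomial.
    c = np.zeros(n + 1); c[n] = 1.0
    return hermeval(x, c)

for j in range(4):
    for k in range(4):
        inner = np.sum(weights * he(j, nodes) * he(k, nodes))
        expected = factorial(j) if j == k else 0.0
        assert abs(inner - expected) < 1e-10
```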

This polynomial representation provides a framework suitable for computational simulation and is therefore widespread in the mathematical and numerical analysis of many engineering problems. Various problems have been solved based on this approximation, such as the solution of stochastic differential equations [11], linear structural dynamics [4, 5], nonlinear random vibration [12, 13], soil-structure interaction [14], structural reliability [15], and identification [16, 17]. More recently, Trcala used polynomial chaos for nonlinear diffusion problems of moisture transfer in wood [18]. The accuracy of the PC approximation has been evaluated by Field and Grigoriu [19]. The convergence of the decomposition of the solution into polynomial chaos is studied by Dvurecenskij et al. [20], and the conditions on the distribution function of the random vector appearing in the solution that ensure convergence toward the solution are given by Ernst et al. [21].

Polynomial chaos has been used in many finite element problems [4]. Accurate discrete modeling of complex industrial structures leads to large finite element models. To reduce the CPU time, reducing the order of the models is very useful. Component mode synthesis (CMS) is a well-established method for efficiently constructing models to analyze the dynamics of large and complex structures that are often described by separate substructure (or component) models. Sarsri et al. [22] coupled CMS methods with first- and second-order polynomial chaos projection methods to compute the frequency transfer functions of stochastic structures. This coupling approach has been used by Sarsri and Azrar in the time domain [23], as well as in a coupling with the perturbation method [24].

Polynomial chaos methods are well suited for random differential equations (RDE) whose main coefficients are defined by a small number of random variables. It is well known that if the number of considered random variables increases, the number of unknowns to be determined for solving the random system grows very rapidly with the degree of the polynomials. Thus, for an accurate solution, the CPU time and memory required may be prohibitive. This restricts these methods to random differential equations with very few random parameters.

An alternative approach, called the internal random coefficients method (IRCM), is developed in this paper. A careful presentation is given in the framework of higher-order random differential equations. This method is based on generalized polynomial chaos and the superposition principle. It can be used to solve random differential equations with a large number of random variables and an input right-hand side decomposed into an arbitrary number of random coefficients. The considered random parameters may follow various distribution laws.

A procedure to build a new polynomial chaos basis and a connection between the one-dimensional and multidimensional polynomials are established. Different distribution laws can easily be considered. Based on the superposition principle, the random differential equation with an input depending on several random variables is decomposed into a sequence of RDEs with the same main random operator and reduced right-hand sides. A series of RDEs with a reduced number of random variables thus has to be solved based on the generalized polynomial chaos decomposition. The global system is then solved with a drastic reduction of the CPU time and memory space. For the sake of comparison, the conditional expectation method is developed for the considered random differential equations, as is the Monte-Carlo method. The applicability and effectiveness of the presented methodological approach have been demonstrated by numerically solving various examples.

2. Mathematical Formulation

In this work, various methodological approaches are elaborated to solve higher-order initial value problems whose coefficients depend linearly and nonlinearly on random variables, subjected to a random input right-hand side. For this aim, the following stochastic differential equation is considered:

$L(\xi)\, U(t, \omega) = f(t, \omega)$ (1)

with deterministic initial conditions, where $U(t, \omega)$ is the stochastic process response and $L(\xi)$ is a linear random operator of order $m$ defined by

$L(\xi) = \sum_{i=0}^{m} a_i(\xi)\, \dfrac{d^i}{dt^i}.$ (2)

It is assumed that the random coefficients $a_i$ depend on the random vector $\xi$, which is defined in a probability space $(\Omega_1, \mathcal{F}_1, P_1)$. The input right-hand side, $f(t, \omega)$, is assumed to depend on the random vector $\eta$, which is defined in the probability space $(\Omega_2, \mathcal{F}_2, P_2)$. An explicit expression of $f$ is given later.

For the numerical solution of (1), a new procedure based on the generalized polynomial chaos (GPC) expansion is elaborated. Herein, the classical GPC procedure is first reviewed in a clear manner.

In the present work, the random variables $\xi_i$, components of the vector $\xi$, are assumed to be independent but may have general, distinct distributions $w_i$. If the $w_i$ are classical distributions, such as normal, uniform, gamma, and beta, the associated known polynomial chaos can be used. Otherwise, the procedure developed in this paper is used to build the needed polynomial basis. Explicit expressions of this basis for general cases are given. In addition, the number of random variables $n$ and the differential order $m$ are arbitrary.

The concept of internal random coefficients is introduced and combined with the superposition principle and generalized polynomial chaos expansion. For the sake of comparison, conditional expectation and Monte-Carlo methods are also elaborated.

2.1. General Polynomial Chaos Formulation
2.1.1. General Formulation

For general purposes, let us consider random vectors $\xi = (\xi_1, \ldots, \xi_n)$ and $\eta = (\eta_1, \ldots, \eta_r)$, where the $\xi_i$ and $\eta_j$ are random variables defined from the underlying probability space to $\mathbb{R}$. The random vectors $\xi$ and $\eta$ are assumed to be independent and gathered in the vector $\zeta$:

$\zeta = (\xi_1, \ldots, \xi_n, \eta_1, \ldots, \eta_r).$

We assume that the random vector $\zeta$ has a distribution function with respect to the Lebesgue measure denoted by $w$. $L^2_w$ denotes the set of square-integrable functions with respect to the weight measure $w$:

$L^2_w = \{\, u : \int_{\mathbb{R}^{n+r}} u^2(x)\, w(x)\, dx < \infty \,\},$

with the following associated inner product:

$\langle u, v \rangle = \int_{\mathbb{R}^{n+r}} u(x)\, v(x)\, w(x)\, dx.$ (6)

Let us note that the distribution $w$ may be Gaussian or non-Gaussian. In the present analysis, various types of distribution functions may be considered.

The general polynomial chaos associated with the random vector $\zeta$ is denoted by $\{\Psi_k\}_{k \ge 0}$. These polynomials coincide with the orthogonal polynomials associated with the inner product defined in (6) and verify

$\langle \Psi_j, \Psi_k \rangle = \gamma_k\, \delta_{jk},$

where the constants $\gamma_k$ are given by $\gamma_k = \langle \Psi_k, \Psi_k \rangle$.

The solution of the main equation (1) is a time stochastic process depending on the random vector $\zeta$ and decomposed in the polynomial chaos basis $\{\Psi_k\}$:

$U(t, \omega) = \sum_{k=0}^{\infty} u_k(t)\, \Psi_k(\zeta(\omega)).$

For general purposes, the random coefficients $a_i$ are assumed to depend linearly and nonlinearly on the random variables $\xi_j$, $j = 1, \ldots, n$, and are written in a general polynomial form in these variables, with deterministic coefficients.

The right-hand side of (1), $f(t, \omega)$, is assumed to be a time-dependent random function that depends linearly on $\eta$ and is expressed by

$f(t, \omega) = \sum_{i=0}^{r} c_i\, g_i(t)\, \eta_i(\omega), \qquad \eta_0 = 1,$ (11)

where the $g_i$ are deterministic functions and the $c_i$ deterministic constants.

Note that a more general right-hand side excitation can be decomposed in the form (11) using the Karhunen-Loève expansion [25].
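A discrete Karhunen-Loève decomposition can be sketched as follows; the exponential covariance kernel, the grid, and the truncation level are illustrative assumptions.

```python
# Discrete Karhunen-Loeve sketch: expand a process with exponential
# covariance C(s,t) = exp(-|s-t|/ell) into r uncorrelated modes.
import numpy as np

t = np.linspace(0.0, 1.0, 101)
ell, r = 0.3, 5
C = np.exp(-np.abs(t[:, None] - t[None, :]) / ell)

# Eigen-decomposition of the symmetric covariance matrix.
vals, vecs = np.linalg.eigh(C)
idx = np.argsort(vals)[::-1]              # sort by decreasing energy
vals, vecs = vals[idx], vecs[:, idx]

# Truncated expansion f(t) ~ sum_i sqrt(lambda_i) eta_i g_i(t), with
# eta_i zero-mean, unit-variance, uncorrelated random variables.
rng = np.random.default_rng(1)
eta = rng.standard_normal(r)
f_sample = (vecs[:, :r] * np.sqrt(vals[:r])) @ eta
```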

Based on a reduced decomposition using the first $N + 1$ terms, the stochastic process can be approximated by

$U(t, \omega) \approx \sum_{k=0}^{N} u_k(t)\, \Psi_k(\zeta(\omega)),$ (12)

where $\Psi_k$ is a multidimensional general polynomial chaos depending on the random vector $\zeta$.

The insertion of these expressions in (1) leads to the following order-$m$ random differential equation:

$\sum_{k=0}^{N} \sum_{i=0}^{m} a_i(\xi)\, \dfrac{d^i u_k(t)}{dt^i}\, \Psi_k(\zeta) = f(t, \omega).$ (13)

Projecting this equation with respect to $\Psi_l$ for $l = 0$ to $N$, the following deterministic differential system is then obtained:

$\sum_{k=0}^{N} \sum_{i=0}^{m} \langle a_i\, \Psi_k, \Psi_l \rangle\, \dfrac{d^i u_k(t)}{dt^i} = \langle f, \Psi_l \rangle, \qquad l = 0, \ldots, N.$ (14)

Note that the first- and mainly the second-order differential equations of the above kind, $m = 1$ or 2, have been investigated by many authors when the number $n$ of random variables is small. When the random variables are Gaussian, Hermite-chaos polynomials on $(-\infty, \infty)$ are used in [4]. This standard approach is very often used in structural dynamics. Various other works consider $\xi_i$ that are uniform, gamma, or beta distributed, and thus Legendre-chaos on $[-1, 1]$, Laguerre-chaos on $[0, \infty)$, and Jacobi-chaos on $[-1, 1]$ are, respectively, used [8].

In the case of independent variables, the expansion on the polynomial basis of the vector $\zeta$ is related to the polynomial bases associated with each random variable. This relation is clarified and the correspondence is clearly established herein. The procedure clarifying the inner products used in the differential system (14) is established, and an explicit, simple procedure is given in the next subsection.

Firstly, this procedure is established in the next paragraphs for independent random variables, based on the relationships between the random variables and the random vector $\zeta$. Secondly, new variables that are not necessarily independent are introduced, and the procedure is established for general cases.

2.1.2. Condensed Formulation

In order to formulate the considered problem in a condensed form, the following mathematical developments will be used. As the variables $\zeta_i$ are assumed to be pairwise independent, the joint distribution function is then given by

$w(x) = \prod_{i=1}^{n+r} w_i(x_i),$

where $w_i$, $i = 1$ to $n + r$, are the marginal distributions associated with each variable $\zeta_i$.

The general polynomial chaos associated with each variable $\zeta_i$ is denoted by $\{\phi_k^i\}_{k \ge 0}$. These polynomials coincide with the orthogonal polynomials associated with the inner product defined in $L^2_{w_i}$ with respect to the weight function $w_i$, with the associated inner product given by

$\langle u, v \rangle_i = \int_{\mathbb{R}} u(x)\, v(x)\, w_i(x)\, dx.$ (17)

The set of orthogonal polynomials $\{\phi_k^i\}$ satisfies the orthogonality conditions

$\langle \phi_j^i, \phi_k^i \rangle_i = h_k^i\, \delta_{jk},$

where $h_k^i = \langle \phi_k^i, \phi_k^i \rangle_i$.

In order to make a correspondence between the sets of polynomial chaos associated with each variable $\zeta_i$ and the set associated with the random vector $\zeta$, the following total order is introduced on the set $\mathbb{N}^{n+r}$: multi-indices are ordered by increasing total degree, with a fixed ordering inside each degree (20).

This order allows considering a bijection $h$ from $\mathbb{N}$ to $\mathbb{N}^{n+r}$, defined by associating with each integer the multi-index of the same rank in this order (21).

This bijection relates each element of $\mathbb{N}^{n+r}$ to its rank defined in (20). Let $p$ be a nonzero integer; the integer $N$ used in the decomposition (12) is taken as

$N + 1 = \dfrac{(n + r + p)!}{(n + r)!\; p!}.$

The choice of $N$ allows decomposing the solution in the set of polynomial chaos associated with the vector $\zeta$ of degree less than or equal to $p$. So, for all integers $k$ between 0 and $N$, there is a single multi-index $\alpha = (\alpha_1, \ldots, \alpha_{n+r})$ such that

$h(k) = \alpha, \qquad |\alpha| = \alpha_1 + \cdots + \alpha_{n+r} \le p.$ (23)

This one-to-one correspondence allows writing the multidimensional polynomial chaos associated with the random vector $\zeta$ as a function of the one-dimensional polynomial chaos corresponding to each variable by

$\Psi_k(\zeta) = \prod_{i=1}^{n+r} \phi_{\alpha_i}^i(\zeta_i),$

where $\alpha = (\alpha_1, \ldots, \alpha_{n+r})$ is the multi-index associated with the integer $k$, introduced by (23).
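The sketch below illustrates this correspondence: it enumerates the multi-indices of total degree at most $p$ and evaluates $\Psi_k$ as a product of one-dimensional polynomials. The tie-break inside each total degree is one possible choice; the paper fixes its own total order (20).

```python
# Sketch of the bijection k <-> multi-index alpha with |alpha| <= p, and of
# Psi_k(z) = prod_i phi_{alpha_i}^i(z_i).
from itertools import product
from math import comb

def multi_indices(dim, p):
    """Enumerate multi-indices of dimension `dim`, graded by total degree."""
    out = []
    for deg in range(p + 1):
        out.extend(a for a in product(range(deg + 1), repeat=dim)
                   if sum(a) == deg)
    return out

def psi(k, z, indices, phi):
    """Evaluate Psi_k(z) = prod_i phi[i][alpha_i](z_i); phi holds the
    one-dimensional chaos polynomials of each variable as callables."""
    val = 1.0
    for i, a in enumerate(indices[k]):
        val *= phi[i][a](z[i])
    return val

indices = multi_indices(3, 2)             # n + r = 3 variables, degree <= 2
assert len(indices) == comb(3 + 2, 2)     # (n+r+p)! / ((n+r)! p!) = 10
```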

For all integers $j$ and $k$ between 0 and $N$, the square matrix of order $N + 1$ is defined by

Let $\mathbb{P}_p$ be the set of polynomials of degree less than or equal to $p$. Then, the set of the classical polynomial chaos $\{\phi_k^i\}_{0 \le k \le p}$ is an orthogonal basis of $\mathbb{P}_p$ for the inner product defined by (17). Let $\{1, x, \ldots, x^p\}$ be the canonical basis of $\mathbb{P}_p$ and $M^i$ be the passage matrix from the canonical basis to the chaos basis. This matrix can be obtained in a standard way, using the Gram-Schmidt procedure or a recursive method.
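A minimal sketch of this construction is given below; it builds the rows of the passage matrix by Gram-Schmidt orthogonalization, with the inner product evaluated by quadrature. The uniform weight on $[-1, 1]$ is only an example; any density with finite moments can be plugged in through its quadrature rule.

```python
# Gram-Schmidt sketch: build the passage matrix M from the canonical basis
# {1, x, ..., x^p} to polynomials orthogonal for a given weight.
import numpy as np

p = 3
nodes, weights = np.polynomial.legendre.leggauss(2 * p + 2)
weights = weights / 2.0                    # uniform density on [-1, 1]

def inner(c1, c2):
    # <u, v> = int u(x) v(x) w(x) dx via quadrature, on coefficient vectors.
    u = np.polynomial.polynomial.polyval(nodes, c1)
    v = np.polynomial.polynomial.polyval(nodes, c2)
    return np.sum(weights * u * v)

M = np.zeros((p + 1, p + 1))
for k in range(p + 1):
    c = np.zeros(p + 1); c[k] = 1.0        # start from the monomial x^k
    for j in range(k):                     # subtract projections on phi_j
        c -= inner(c, M[j]) / inner(M[j], M[j]) * M[j]
    M[k] = c                               # row k: coefficients of phi_k

# For this weight, the rows reproduce the monic Legendre polynomials.
```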

Let the vectors $\Phi^i = (\phi_0^i, \ldots, \phi_p^i)^T$ and $X = (1, x, \ldots, x^p)^T$. Then, one has

$\Phi^i = M^i X.$ (26)

For $j$ and $k$ between 0 and $p$, the moment of order $j + k$ of the random variable $\zeta_i$ is defined by

$m_{j+k}^i = \int_{\mathbb{R}} x^{j+k}\, w_i(x)\, dx.$

The square matrices of order $p + 1$, $H^i$, are defined by

$H^i_{jk} = m_{j+k}^i.$

This gives explicitly the matrix of one-dimensional inner products by

$\langle \phi_j^i, \phi_k^i \rangle_i = \big( M^i H^i (M^i)^T \big)_{jk}.$

Let $k$ and $l$ be two integers between 0 and $N$. From (23), we have, respectively, unique elements $\alpha$ and $\beta$ in $\mathbb{N}^{n+r}$ such that $h(k) = \alpha$ and $h(l) = \beta$. Rewriting $\langle a_i\, \Psi_k, \Psi_l \rangle$ using expression (26), an explicit expression (32) is obtained in which the multidimensional inner product factorizes into products of one-dimensional inner products associated with each variable.

The expression $\langle f, \Psi_l \rangle$ in the right-hand side of (14) is thus given by

$\langle f, \Psi_l \rangle = \sum_{i=0}^{r} c_i\, g_i(t)\, \langle \eta_i, \Psi_l \rangle.$ (33)

For independent random variables, a relationship between the multidimensional and the associated one-dimensional generalized polynomials is thus established. This leads to explicit and closed forms of the scalar products used and of the terms to be computed numerically. Using these relationships results in the following deterministic differential system (34):

For a compact formulation and using notation (26), the square matrices $A_i$ of order $N + 1$ and the time-dependent vector $F(t)$ of dimension $N + 1$ are introduced, for all integers $i$ between 0 and $m$, by

$(A_i)_{lk} = \langle a_i\, \Psi_k, \Psi_l \rangle, \qquad F_l(t) = \langle f, \Psi_l \rangle.$ (35)-(36)

Using these notations, the differential system (34) is rewritten in the following closed-form system:

$\sum_{i=0}^{m} A_i\, \dfrac{d^i \mathbf{u}(t)}{dt^i} = F(t), \qquad \mathbf{u}(t) = (u_0(t), \ldots, u_N(t))^T.$ (37)

Using the presented methodological approach, the truncated solution (12) can be numerically obtained. Its mean and variance are given by

$E[U(t)] = u_0(t), \qquad \operatorname{Var}[U(t)] = \sum_{k=1}^{N} u_k^2(t)\, \langle \Psi_k, \Psi_k \rangle.$
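In code, this post-processing step is immediate once the deterministic coefficients $u_k(t)$ and the norms $\langle \Psi_k, \Psi_k \rangle$ are available; a minimal sketch:

```python
# Post-processing sketch: mean and variance of the truncated gPC solution
# U(t) ~ sum_k u_k(t) Psi_k. With Psi_0 = 1 and orthogonality,
#   E[U] = u_0(t),  Var[U] = sum_{k>=1} u_k(t)^2 <Psi_k, Psi_k>.
import numpy as np

def pc_mean_var(u, gamma):
    """u: (N+1, n_t) coefficient histories; gamma: (N+1,) norms <Psi_k, Psi_k>."""
    mean = u[0]
    var = np.sum(u[1:] ** 2 * gamma[1:, None], axis=0)
    return mean, var
```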

Equation (14) is usually given for Hermite polynomials when the random variables are Gaussian. This kind of projection is classically done, and many authors follow this procedure.

In this paper, the random variables may follow various types of laws. The presented generalized formalism allows one to handle multiple laws by using the canonical basis and the orthogonalization principle with respect to a scalar product associated with the distribution function of the random vector. Explicit and closed forms of the scalar products used and of the terms to be computed numerically are given.

It should be noted that the major problem of this classical polynomial chaos decomposition is that the number of unknowns to estimate increases very rapidly when the degree of the polynomial chaos and the number of random parameters increase. More precisely, for $n$ random variables, the number of unknown coefficients in the polynomial chaos of orders less than or equal to $p$ is $(n + p)! / (n!\, p!)$. Table 1 presents the numbers of needed unknown terms for various $n$ and $p$. This very fast growth of dimensionality is the main limitation of this classical approach.

Table 1: Number of needed unknown terms in the polynomial chaos.
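These counts follow directly from the binomial formula; the snippet below reproduces the growth for a few representative values of $n$ and $p$ (chosen here for illustration).

```python
# The number of gPC unknowns for n random variables and maximal degree p is
# (n + p)! / (n! p!) = C(n + p, p); a quick look at its growth:
from math import comb

for n in (2, 5, 10, 20):
    print(n, [comb(n + p, p) for p in (1, 2, 3, 4)])
# e.g., n = 20 and p = 4 already require 10626 coupled unknowns.
```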

To overcome this drawback, the concept of internal random coefficients is introduced herein. It allows drastically reducing the number of random variables, especially when the initially considered number of random variables is large.

2.2. Internal Random Coefficients Method (IRCM)

Let us recall that $m$ is the derivative order of the random operator $L$ and $n$ is the number of random variables defining its coefficients $a_i$.

If $n \le m + 1$, then it is better to use the methodological approach presented in the previous sections.

If $n > m + 1$, then the coefficients $a_i$ will be considered as the new random variables, called here internal random coefficients. In fact, the following procedure is of great interest when the number of random variables is large with respect to the differential order $m$. A large reduction will result from the procedures developed herein.

Let us consider the worst case, in which all the coefficients $a_0, \ldots, a_m$ are random. The coefficients $a_i$ and the random vector $\eta$ can be gathered in a new random vector

$\zeta' = (a_0, a_1, \ldots, a_m, \eta_1, \ldots, \eta_r),$

which depends on the initial random vector $\zeta$. It has to be noted that the coefficients $a_i$ are not independent of each other but are independent of $\eta$.

In this section, it is assumed that the distribution function of the random vector $\xi$ is a continuous function in $\mathbb{R}^n$ and has a compact support denoted by $K$.

This hypothesis ensures the quadratic convergence of the series formed by the polynomial chaos to the solution [20, 21]. Further, we assume that there exists a diffeomorphism $\Phi$ from $K$ to $\Phi(K) \subset \mathbb{R}^n$, defined by $\theta = \Phi(\xi)$,

such that, for all $i$ between 0 and $m$, one has $\theta_i = a_i(\xi)$ and, for all $i$ between $m + 1$ and $n - 1$, one has $\theta_i = \xi_{j_i}$ for some indices $j_i$.

The other components of $\Phi$, from $m + 1$ to $n - 1$, are thus chosen from the components of $\xi$ to complete the construction of $\Phi$.

Let us put $\theta = \Phi(\xi)$. The random vector $\theta$, so defined, has a distribution function with respect to the Lebesgue measure, denoted by $w_\theta$. This function is given in terms of the distribution function $w_\xi$ associated with the random vector $\xi$ by

$w_\theta(y) = \dfrac{w_\xi(\Phi^{-1}(y))}{\left| \det J_\Phi(\Phi^{-1}(y)) \right|},$

where $\det J_\Phi$ is the determinant of the Jacobian matrix of $\Phi$.

Let $w_a$ be the distribution function, with respect to the associated Lebesgue measure, of the random vector $a = (a_0, \ldots, a_m)$, obtained by integrating $w_\theta$ over the remaining components:

$w_a(y) = \int w_\theta(y, s_{m+1}, \ldots, s_{n-1})\, ds_{m+1} \cdots ds_{n-1},$

where $y$ is an observation of the random vector $a$.

For normalized parameters, the following reduced variables are introduced:

$\bar{a}_i = \dfrac{a_i - E[a_i]}{\sigma_{a_i}},$

where $E[a_i]$ and $\sigma_{a_i}$ are the mean and standard deviation of $a_i$. Then, for all integers $i$ between 0 and $m$, one has

$a_i = E[a_i] + \sigma_{a_i}\, \bar{a}_i.$

Using these new expressions, the main random equation (1) is reduced to the following simplified random differential equation:

$\sum_{i=0}^{m} \left( E[a_i] + \sigma_{a_i}\, \bar{a}_i \right) \dfrac{d^i U}{dt^i} = f(t, \omega).$ (48)

It has to be noted here that the number of random variables is reduced and the resulting random differential equation is thus easier to handle.

Recall that the coefficients $\bar{a}_i$ and the random variables $\eta_i$, used in the right-hand side of (48), are independent of each other. This is due to the fact that the random vector $\bar{a} = (\bar{a}_0, \ldots, \bar{a}_m)$ depends only on the random vector $\xi$; $\bar{a}$ and $\eta$ are then independent.

Let $w_{\bar{a}}$ and $w_\eta$ be, respectively, the distribution functions of the random vectors $\bar{a}$ and $\eta$ with respect to the associated Lebesgue measures. Taking into account the independence of the random variables, one has

$w_{(\bar{a}, \eta)} = w_{\bar{a}} \cdot w_\eta.$

The decomposition of the solution of the random equation (48), according to the multidimensional polynomial chaos basis $\{\Psi_k(\bar{a}, \eta)\}$, is given by

$U(t, \omega) = \sum_{k=0}^{\infty} u_k(t)\, \Psi_k(\bar{a}, \eta).$ (50)

Using the first $N' + 1$ terms of the series (50), the process can be approximated by

$U(t, \omega) \approx \sum_{k=0}^{N'} u_k(t)\, \Psi_k(\bar{a}, \eta).$ (51)

Inserting these expressions in (48), the following random differential equation results:

$\sum_{k=0}^{N'} \sum_{i=0}^{m} \left( E[a_i] + \sigma_{a_i}\, \bar{a}_i \right) \dfrac{d^i u_k(t)}{dt^i}\, \Psi_k(\bar{a}, \eta) = f(t, \omega).$ (52)

Projecting this equation with respect to $\Psi_l$, for $l = 0, \ldots, N'$, leads to the projected system (53).

Let $\{P_k\}$ be the polynomial chaos associated with the random vector $\bar{a}$.

Using the bijection (21), there exist two integers $k_1$ and $k_2$, associated with the parts of the multi-index of $k$ corresponding to $\bar{a}$ and to $\eta$, respectively, such that, for all integers $k$, the correspondence $k \leftrightarrow (k_1, k_2)$ of (54) holds.

The multidimensional polynomial chaos associated with the random vector $(\bar{a}, \eta)$ can be expressed through the polynomial chaos $P_{k_1}$ associated with the random vector $\bar{a}$ and the polynomial chaos $Q_{k_2}$ associated with the random variables $\eta_i$, using the notations (23) and (54), by

$\Psi_k(\bar{a}, \eta) = P_{k_1}(\bar{a})\, Q_{k_2}(\eta).$ (55)

Inserting this relationship into (53), the following random differential equation (56) results:

Based on the independence of the random vector $\bar{a}$ and the random variables $\eta_i$, the inner products in (56) are rearranged as

$\langle \bar{a}_i\, \Psi_k, \Psi_l \rangle = \langle \bar{a}_i\, P_{k_1}, P_{l_1} \rangle\, \langle Q_{k_2}, Q_{l_2} \rangle = \langle \bar{a}_i\, P_{k_1}, P_{l_1} \rangle\, \gamma_{k_2}\, \delta_{k_2 l_2},$ (57)

where $\delta_{jk}$ is the Kronecker symbol.

Let us consider the canonical basis of the polynomials in the variables $\bar{a}_0, \ldots, \bar{a}_m$. This basis can be ordered in the following form:

The orthogonalization of this basis gives the polynomial chaos associated with the random vector $\bar{a}$. This orthogonalization is performed using a classical method, such as the Gram-Schmidt procedure, in $L^2_{w_{\bar{a}}}$ with the following inner product:

Let $T$ be the tensor defined, for all the needed indices, by

This integral can be computed using the distribution function of the vector $\xi$ and the diffeomorphism $\Phi$ by

The integer $N'$, considered in the decomposition (51), is chosen such that the considered polynomials have degree less than or equal to $p$; it is expressed by

Let us consider the canonical basis of this polynomial space; then

Let us denote by $M$ the passage matrix from the canonical basis to the polynomial chaos basis associated with the vector $\bar{a}$ of degree less than or equal to $p$. Using the following notations,

one gets

The matrices, defined for all integers $j$ and $k$ between 0 and $N'$, are introduced, for $i = 0, \ldots, m$, by

Let $\alpha$ and $\beta$ be elements in $\mathbb{N}^{m+1}$ such that

The matrices of order $N' + 1$, for $i = 0, \ldots, m$, are defined by

This allows defining the matrices $\tilde{A}_i$ and $\tilde{B}$ by

The random differential system (52) is thus reduced to the following deterministic one, where the integers $k_1$ and $k_2$ are related to $k$ and $l$ as introduced in (54):

Finally, the last differential system is rewritten in the following condensed matrix form (73), where

Based on the presented mathematical procedures, the truncated solution (51) can be numerically obtained.

An equivalent form of (73) has been introduced in (37), based on the original random variables. These two equations give the approximate solution of $U$. The coefficients of these two systems are given, respectively, by the inner products in (57) for (73) and in (32)-(33) for (37).

It should be noted, on one hand, that these coefficients are hard to determine for (37) but easier for (73). On the other hand, the number of random variables used in (73) is largely reduced. This reduces the number of unknowns needed to determine the solution $U$ and greatly simplifies the required numerical computation.

Note that when uncertainties come from the random parameters of the system, they can be efficiently handled by the previous mathematical procedures. When the right-hand side excitation depends on many random variables, the previous internal coefficients procedure can be largely improved by the so-called superposition method.

2.3. Superposition Method

Thanks to the linear behavior of the considered system, the superposition principle can be applied to the standard random differential equation (13) and to the equation (52) resulting from the internal random coefficients procedure. In both cases, the considered random right-hand side depends linearly on the random variables $\eta_i$. The superposition method consists of solving differential equations with elementary right-hand sides given by

$L\, U_i(t, \omega) = c_i\, g_i(t)\, \eta_i(\omega), \qquad i = 0, \ldots, r,$

where $L$ may be the left-hand side differential operator of (13) or (52). In fact, this superposition adds only one random variable to each system, and the resulting random vectors are $(\xi, \eta_i)$ for (13) and $(\bar{a}, \eta_i)$ for (52), for each $i$.
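A minimal sketch of the superposition step is given below; the damped oscillator standing in for the operator $L$ and the two elementary loads are illustrative assumptions.

```python
# Superposition sketch: for a linear operator L and f = sum_i g_i(t) eta_i,
# solve L U_i = g_i once per term and recombine U = sum_i eta_i U_i.
import numpy as np
from scipy.integrate import solve_ivp

t_eval = np.linspace(0.0, 10.0, 200)
g = [lambda t: np.sin(t), lambda t: np.cos(2 * t)]   # elementary loads

def solve_elementary(gi):
    # u'' + 0.1 u' + u = g_i(t), zero initial conditions (illustrative L).
    rhs = lambda t, y: [y[1], gi(t) - 0.1 * y[1] - y[0]]
    return solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], t_eval=t_eval).y[0]

U_elem = np.array([solve_elementary(gi) for gi in g])

# For any realization of the random amplitudes eta, the response is a
# linear combination of the precomputed elementary solutions.
eta = np.array([0.7, -1.2])
U = eta @ U_elem
```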

As the internal random coefficients procedure leads to a reduced number of random variables, the superposition principle is applied to (52). Our focus is then on the resulting random differential equation (77), with the deterministic initial conditions given by

The polynomial chaos associated with the random variable $\eta_i$ is considered and denoted by $\{Q_j\}$. The solution $U_i$ is in this case a process that depends on $t$, on the random vector $\bar{a}$, and on the random variable $\eta_i$. The solution is then written in the following form:

Since the random vectors $\bar{a}$ and $\eta$ are independent, the factorization by the polynomial chaos associated with the random variable $\eta_i$ is used in the decomposition (50). The solution can be rewritten as follows:

The insertion of this decomposition in the differential equation (77) and the projection of the equation and of the initial conditions over each polynomial chaos $Q_j$, for $j = 0, 1, \ldots$, lead to the following simplified system of random differential equations (81a)-(81c), written for $j = 0$, for $j = 1$, and for $j \ge 2$, respectively:

It has to be noted that the case $j \ge 2$ leads to null initial conditions and a null right-hand side. In this case, the solution is then null.

The solution given in (80) is then reduced to

For $j = 0$ and $j = 1$, and considering the decomposition of the solution according to the polynomial chaos associated with the random vector $\bar{a}$, one gets

Based on this decomposition, the differential equations (81a)-(81c) are then reduced to the following, for $j = 0$ and for $j = 1$:

Projecting this expression over each polynomial chaos $P_l$, for $l = 0$ to $N'$, one gets the following equations for $j = 0$ and, similarly, for $j = 1$:

Using the matrices $\tilde{A}_i$ and $\tilde{B}$ introduced in (71), the last equations become the following.

For $l = 0$ to $N'$, and, for $j = 1$, where the integers $k_1$ and $k_2$ are related to $k$ and $l$ by the notations (54).

These deterministic systems are then rewritten in the following compact matrix form.

For $i = 0$ to $r$, and, for $j = 1$, where

The final truncated solution is then given by

Its mean and variance are given by

This solution is the main approximate solution of the random initial value problem (1) elaborated herein. Following (89) and (90), one has to solve NL simple initial value differential systems. These systems can be solved by standard methods such as the Runge-Kutta method.
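A sketch of this final solution step is given below for $m = 2$; the matrices, the load, and the dimensions are placeholders standing in for the quantities defined in (35)-(36), and an explicit Runge-Kutta scheme is used through SciPy.

```python
# Sketch: solve A2 u'' + A1 u' + A0 u = F(t), the compact deterministic
# system for m = 2, with an explicit Runge-Kutta scheme (RK45).
import numpy as np
from scipy.integrate import solve_ivp

N1 = 6                                     # N + 1 chaos coefficients
A2, A1, A0 = np.eye(N1), 0.1 * np.eye(N1), np.eye(N1)   # placeholders
F = lambda t: np.sin(t) * np.ones(N1)
A2_inv = np.linalg.inv(A2)

def rhs(t, y):
    # First-order form: y = (u, u'), with u the vector of chaos coefficients.
    u, du = y[:N1], y[N1:]
    ddu = A2_inv @ (F(t) - A1 @ du - A0 @ u)
    return np.concatenate([du, ddu])

sol = solve_ivp(rhs, (0.0, 10.0), np.zeros(2 * N1), method="RK45",
                t_eval=np.linspace(0.0, 10.0, 200))
u_coeffs = sol.y[:N1]                      # time histories of the u_k(t)
```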

The main advantages of this methodological approach are as follows:
(i) The deterministic differential systems have the same left-hand side, which is built only once and in a compact matrix form.
(ii) Simple right-hand sides are obtained for an arbitrary number of random variables defining the excitation.
(iii) A large reduction of the number of unknowns based on polynomial chaos can result.
(iv) The CPU time and memory needed can be largely reduced.
(v) The numerical solution can be done in a parallel manner, particularly for a large number of uncertain parameters.

2.4. Conditional Expectation Method

For the sake of comparison, a methodological approach based on the conditional expectation method is also elaborated here to solve (1). Remember that the solution of (1) is a stochastic process that depends on the random vectors $\xi$ and $\eta$ and on time $t$. Let $x$ be an observation of the random vector $(\xi, \eta)$. The conditional expectation of $U$ given $(\xi, \eta) = x$, denoted by $U(t \mid x)$, is the solution of the deterministic equation

$L_x\, U(t \mid x) = f(t, x),$ (93)

where $L_x$ is the deterministic operator obtained by fixing the random parameters at the observation $x$:

$L_x = \sum_{i=0}^{m} a_i(x)\, \dfrac{d^i}{dt^i}.$ (94)

As in the first section, the random vector $(\xi, \eta)$ is assumed to have a density function relative to the Lebesgue measure, denoted by $w$. The moments of order $q$ of the solution of (1) are then given by the Bayes formulas:

$E[U^q(t)] = \int U^q(t \mid x)\, w(x)\, dx.$ (95)

The numerical computation of the integral (95) needs the solution of (93) for every observation vector $x$. The analytical solution can be obtained in general cases. Assume that $\operatorname{supp}(w) \subset D$, where $D$ is a bounded domain in $\mathbb{R}^{n+r}$.

The integral can be approximated by the Gauss-Legendre quadrature method. Let $\{x_i\}$ and $\{\varpi_i\}$ be the sets of Gauss points and associated Gauss weights. The moments can be approximated by

$E[U^q(t)] \approx \sum_{i} \varpi_i\, U^q(t \mid x_i)\, w(x_i).$
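A minimal sketch of this quadrature is given below, assuming for illustration only two random parameters on a common interval and a user-supplied deterministic solver for (93).

```python
# Sketch: first two moments of U(t | x) by tensorized Gauss-Legendre
# quadrature over a bounded 2D domain,
#   E[U^q] ~ sum_ij w_i w_j U(t | x_i, x_j)^q w(x_i, x_j).
import numpy as np

def moments(solve_conditional, density, a, b, n_pts=10):
    x, w = np.polynomial.legendre.leggauss(n_pts)
    x = 0.5 * (b - a) * x + 0.5 * (b + a)        # map [-1, 1] -> [a, b]
    w = 0.5 * (b - a) * w
    m1 = m2 = 0.0
    for xi, wi in zip(x, w):
        for xj, wj in zip(x, w):
            u = solve_conditional(xi, xj)        # deterministic solve of (93)
            m1 += wi * wj * u * density(xi, xj)
            m2 += wi * wj * u ** 2 * density(xi, xj)
    return m1, m2
```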

The main advantage of this method is that it provides a general solution that is considered here as a reference solution, together with the one obtained by the Monte-Carlo method. However, the drawback of these two methods is that a very large CPU time is needed for accurate numerical solutions.

3. Applications

For the sake of clarity, the developed methodological approaches are explicitly presented for a standard initial value problem. Let us consider the following second-order differential equation (97), modeling the dynamical forced behavior of Euler-Bernoulli beams with the following uncertain parameters: thickness $h$, mass density $\rho$, and Young modulus $E$, in which the remaining coefficients are deterministic constants. The considered physical random variables $h$, $\rho$, and $E$ are written in the following forms, where the $\xi_i$ are random variables with zero mean and unit variance, pairwise independent, and possessing distribution functions relative to the Lebesgue measure, respectively denoted by $w_i$:

It has to be noted that the internal random coefficients $a_0$, $a_1$, and $a_2$ depend nonlinearly on the random variables $h$, $\rho$, and $E$, and the number of terms in these expansions is an arbitrary integer. For a clear presentation, the three methodological approaches elaborated here are developed for the solution of (97).

3.1. Generalized Polynomial Chaos Method

The random vector is $\xi$, the order of the linear random operator is $m = 2$, and the internal coefficients in this case are $a_0$, $a_1$, and $a_2$. Note that these coefficients depend nonlinearly on $h$, $\rho$, and $E$; the order of nonlinearity is 2 for one of them and 4 for the two others. Based on the notations of the first section, these coefficients can be expanded as follows, where the expansion coefficients are given by

Let $k$ and $l$ be integers from 0 to $N$; using the notation (23), one gets the multi-indices $\alpha$ and $\beta$, where $\alpha$ and $\beta$ are in $\mathbb{N}^3$.

The matrices $A_i$, for $i = 0$ to 2, introduced in (35), are then given by the following.

For $j$ and $k$ between 0 and $N$,