*Journal of Applied Mathematics*, Volume 2018, Article ID 7403745, 23 pages. https://doi.org/10.1155/2018/7403745
Research Article

## Numerical Procedures for Random Differential Equations

1Mathematical Modeling and Control, Department of Mathematics, Faculty of Sciences and Techniques of Tangier, Abdelmalek Essaadi University, Tangier, Morocco
2Research Center STIS, Department of Applied Mathematics and Informatics, ENSET, Mohammed V University, Rabat, Morocco
3Department of Mechanical Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Saudi Arabia
4LTI, Ecole Nationale des Sciences Appliquées de Tanger, Abdelmalek Essaadi University, Tangier, Morocco

Correspondence should be addressed to Lahcen Azrar; l.azrar@um5s.net.ma

Received 2 February 2018; Accepted 26 March 2018; Published 21 May 2018

Copyright © 2018 Mohamed Ben Said et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Some methodological approaches based on generalized polynomial chaos for linear differential equations with random parameters following various types of distribution laws are proposed. Mainly, an internal random coefficients method (IRCM) is elaborated for a large number of random parameters. A procedure to build a new polynomial chaos basis and a connection between the one-dimensional and multidimensional polynomials are developed. This allows random parameters with various laws to be handled easily. A compact matrix formulation is given and the required matrices and scalar products are explicitly presented. For random excitations with an arbitrary number of uncertain variables, the IRCM is coupled to the superposition method, leading to successive random differential equations with the same main random operator and right-hand sides depending only on one random parameter. This methodological approach leads to equations with a reduced number of random variables and thus to a large reduction of the CPU time and memory required for the numerical solution. The conditional expectation method is also elaborated for reference solutions, as well as the Monte-Carlo procedure. The applicability and effectiveness of the developed methods are demonstrated by some numerical examples.

#### 1. Introduction

Stochastic and random differential equations constitute a growing field of great scientific interest. There are mainly three categories of random differential equations. The first and simplest class is one where only the initial conditions are random. The second class is characterized by the presence of random nonhomogeneous or input terms, and the third one consists of differential equations with random coefficients. To deal with errors and uncertainties, random coefficients have been increasingly used in the last few decades.

This paper focuses on the combined second and third classes because this type of equation offers a natural and rational approach to the mathematical modeling of many physical phenomena. The last decades have witnessed an enormous effort in the fields of parameter uncertainty and random or stochastic differential processes. This is due to the fact that any physical system contains uncertainties, and real phenomena may be modeled by stochastic differential equations with random or stochastic process coefficients. These equations take into account the approximate knowledge of the numerical values of the physical parameters on which the system depends and have been a matter of intensive investigation.

A number of techniques are available for uncertainty sensitivity analysis and propagation, such as the Monte-Carlo procedure [1, 2], sensitivity analysis methods [3], and polynomial chaos [4, 5], among others. The Monte-Carlo (MC) method has been the mainstream uncertainty quantification technique for decades. It is the most used method and is valid for a wide range of problems. However, it is computationally very expensive since it requires a large number of simulations using the full model.
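
As a minimal illustration of the Monte-Carlo procedure (the test problem below is ours, not one of the paper's examples), consider the scalar equation $u' = -ku$, $u(0) = 1$, with a uniformly distributed rate $k$; the mean of $u(1) = e^{-k}$ is estimated by sampling and compared with its analytic value:

```python
import numpy as np

# Monte-Carlo propagation of a random parameter through a simple model.
# Hypothetical test problem (not from the paper): u'(t) = -k u, u(0) = 1,
# with k ~ Uniform(0.5, 1.5), so u(1) = exp(-k).
rng = np.random.default_rng(0)

def mc_mean(n_samples):
    k = rng.uniform(0.5, 1.5, size=n_samples)
    return np.exp(-k).mean()          # sample mean of u(1) = exp(-k)

# Exact mean: E[exp(-k)] = exp(-0.5) - exp(-1.5) for this uniform density
exact = np.exp(-0.5) - np.exp(-1.5)
estimate = mc_mean(200_000)
```

The slow $O(N^{-1/2})$ convergence of the sample mean is precisely the cost the chaos-based methods of this paper aim to avoid.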

An alternative approach is based on the expansion of the response in terms of a series of polynomials that are orthogonal with respect to mean value operations. Polynomial chaos was first introduced by Wiener [6], where Hermite polynomials were used to model stochastic processes with Gaussian random variables. A number of other expansions have been proposed in the literature for representing non-Gaussian processes [7, 8]. Recent review papers by Stefanou [9] and by Schuëller and Pradlwarter [10] assessed the past and current status of procedures for stochastic structural analysis.

This polynomial representation provides a framework suitable for computational simulation and has therefore become widespread in the mathematical and numerical analysis of many engineering problems. Various problems have been solved based on this approximation, such as the solution of stochastic differential equations [11], linear structural dynamics [4, 5], nonlinear random vibration [12, 13], soil-structure interaction [14], structural reliability [15], and identification [16, 17]. More recently, Trcala used polynomial chaos for nonlinear diffusion problems of moisture transfer in wood [18]. The accuracy of the PC approximation has been evaluated by Field and Grigoriu [19]. Convergence of the decomposition of the solution into polynomial chaos was studied by Dvurecenskij et al. [20], and the conditions on the distribution function of the random vector appearing in the solution that ensure convergence toward the solution are given by Ernst et al. [21].

Polynomial chaos has been used in many finite element problems [4]. Accurate discrete modeling of complex industrial structures leads to large finite element models. To reduce the CPU time, model-order reduction is very useful. Component mode synthesis (CMS) is a well-established method for efficiently constructing models to analyze the dynamics of large and complex structures that are often described by separate substructure (or component) models. Sarsri et al. [22] coupled CMS methods with first- and second-order polynomial chaos projection methods to compute the frequency transfer functions of stochastic structures. This coupling methodological approach has been used by Sarsri and Azrar in the time domain [23], as well as in a coupling with the perturbation method [24].

The polynomial chaos methods are well suited for random differential equations (RDE) whose main coefficients are defined by a small number of random variables. It is well known that if the number of considered random variables increases, the number of unknowns to be determined for solving the random system grows very rapidly with the degree of the polynomials. Thus, for an accurate solution, the required CPU time and memory may be prohibitive. This largely restricts these methods to random differential equations with very few random parameters.

An alternative approach called the internal random coefficients method (IRCM) is developed in this paper. A careful presentation is given in the frame of higher-order random differential equations. This method is based on generalized polynomial chaos and the superposition principle. It can be used to solve random differential equations with a large number of random variables and an input right-hand side decomposed into an arbitrary number of random coefficients. The considered random parameters may follow various distribution laws.

A procedure to build a new polynomial chaos basis and a connection between the one-dimensional and multidimensional polynomials is established. Different distribution laws can easily be considered. Based on the superposition principle, the random differential equation with an input depending on several random variables is decomposed into a sequence of RDEs with the same main random operator and reduced right-hand sides. A series of RDEs with a reduced number of random variables thus has to be solved based on the generalized polynomial chaos decomposition. The global system is then solved with a drastic reduction of the CPU time and memory space. For the sake of comparison, the conditional expectation method is developed for the considered random differential equations, as well as the Monte-Carlo method. The applicability and effectiveness of the presented methodological approach are demonstrated by numerically solving various examples.
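
The superposition step relies on the linearity of the operator; a quick numerical check (illustrative, with an arbitrary operator and forcing terms of our choosing) confirms that, for zero initial data, the response of a linear equation to a sum of right-hand sides equals the sum of the individual responses:

```python
import numpy as np

# Superposition check for a linear ODE (illustrative, not the paper's system):
# u' + a*u = f1(t) + f2(t), u(0) = 0. For a linear operator with zero initial
# data, the response to f1 + f2 equals the sum of the individual responses.
a = 2.0
f1 = lambda t: np.sin(t)
f2 = lambda t: np.cos(3 * t)

def solve(f, T=2.0, n=4000):
    # classical RK4 for u' = -a*u + f(t), u(0) = 0
    dt = T / n
    u, t = 0.0, 0.0
    rhs = lambda t, u: -a * u + f(t)
    for _ in range(n):
        k1 = rhs(t, u)
        k2 = rhs(t + dt / 2, u + dt / 2 * k1)
        k3 = rhs(t + dt / 2, u + dt / 2 * k2)
        k4 = rhs(t + dt, u + dt * k3)
        u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return u

u_sum = solve(lambda t: f1(t) + f2(t))   # response to the combined input
u_parts = solve(f1) + solve(f2)          # sum of individual responses
```

Because the RK4 update is itself linear in the forcing, the two results agree to machine precision.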

#### 2. Mathematical Formulation

In this work, various methodological approaches are elaborated to solve higher-order initial value problems whose coefficients depend linearly and nonlinearly on random variables and which are subjected to a random input right-hand side. For this aim, the following stochastic differential equation is considered:

$$\mathcal{L}\,u(t,\omega) = f(t,\omega) \tag{1}$$

with deterministic initial conditions, where $u(t,\omega)$ is the stochastic process response and $\mathcal{L}$ is a linear random operator of order $m$ defined by

$$\mathcal{L} = \sum_{i=0}^{m} a_i(\omega)\,\frac{d^{i}}{dt^{i}}. \tag{2}$$

It is assumed that the random coefficients $a_i$ depend on the random vector $\xi = (\xi_1, \dots, \xi_n)$, which is defined in a probability space $(\Omega_1, \mathcal{F}_1, P_1)$. The input right-hand side, $f(t,\omega)$, is assumed to be dependent on the random vector $\eta$ that is defined in the probability space $(\Omega_2, \mathcal{F}_2, P_2)$. An explicit expression of $f$ is given later.

For the numerical solution of (1), a new procedure based on the general polynomial chaos (GPC) expansion is elaborated. Herein, the classical GPC procedure is reviewed in a clear manner.

In the present work, the random variables, components of the considered random vectors, are assumed to be independent but may have general distinct distributions. If these are classical distributions, such as normal, uniform, gamma, and beta, the associated known polynomial chaos can be used. Otherwise, the procedure developed in this paper will be used to build the needed polynomial basis. Explicit expressions of this basis for general cases are given. In addition, the number of random variables and the differential order are arbitrary.

The concept of internal random coefficients is introduced and combined with the superposition principle and generalized polynomial chaos expansion. For the sake of comparison, conditional expectation and Monte-Carlo methods are also elaborated.

##### 2.1. General Polynomial Chaos Formulation
###### 2.1.1. General Formulation

For general purpose, let us consider random vectors $\xi$ and $\eta$, presented in the following forms:

$$\xi = (\xi_1, \dots, \xi_n), \qquad \eta = (\eta_1, \dots, \eta_q), \tag{3}$$

where the components are random variables defined from the probabilistic field to $\mathbb{R}$. The random vectors $\xi$ and $\eta$ are assumed to be independent and gathered in the vector $\zeta$:

$$\zeta = (\xi, \eta) = (\zeta_1, \dots, \zeta_N), \qquad N = n + q. \tag{4}$$

We assume that the random vector $\zeta$ has a distribution function with respect to the Lebesgue measure denoted by $w$. $L^2_w(\mathbb{R}^N)$ denotes the set of square-integrable functions with respect to the weight measure $w$:

$$L^2_w(\mathbb{R}^N) = \Big\{ g : \mathbb{R}^N \to \mathbb{R} \;\Big|\; \int_{\mathbb{R}^N} g(x)^2\, w(x)\, dx < \infty \Big\}, \tag{5}$$

with the following associated inner product:

$$\langle g, h \rangle = \int_{\mathbb{R}^N} g(x)\, h(x)\, w(x)\, dx. \tag{6}$$

Let us note that the distribution $w$ may be Gaussian or non-Gaussian. In the present analysis, various types of distribution functions may be considered.

The general polynomial chaos associated with the random vector $\zeta$ is denoted by $\{\Phi_k\}_{k \geq 0}$. These polynomials coincide with the orthogonal polynomials associated with the inner product defined in (6) and verify

$$\langle \Phi_j, \Phi_k \rangle = h_k\, \delta_{jk}, \tag{7}$$

where $h_k$ are given by

$$h_k = \langle \Phi_k, \Phi_k \rangle = \int_{\mathbb{R}^N} \Phi_k(x)^2\, w(x)\, dx. \tag{8}$$
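
For a classical case, these orthogonality relations can be checked numerically. The sketch below (an illustration, not the paper's computation) verifies them for the Legendre chaos associated with the uniform density on $[-1, 1]$, using Gauss-Legendre quadrature:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Orthogonality check for the Legendre chaos (uniform weight on [-1, 1]):
# <phi_j, phi_k> = ∫ phi_j(x) phi_k(x) * (1/2) dx must vanish for j != k.
x, w = L.leggauss(20)                 # Gauss-Legendre nodes and weights
w = w / 2.0                           # density of Uniform(-1, 1) is 1/2

def phi(k, x):
    c = np.zeros(k + 1); c[k] = 1.0
    return L.legval(x, c)             # Legendre polynomial P_k evaluated at x

G = np.array([[np.sum(w * phi(j, x) * phi(k, x)) for k in range(5)]
              for j in range(5)])
# Gram matrix: diagonal h_k = 1 / (2k + 1), off-diagonal entries ~ 0
```

With 20 quadrature nodes the integrals of these low-degree products are exact up to rounding, so the Gram matrix is diagonal to machine precision.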

The solution of the main equation (1) is a time stochastic process depending on the random vector $\zeta$ and decomposed in the polynomial chaos basis $\{\Phi_k\}$:

$$u(t, \zeta) = \sum_{k=0}^{\infty} u_k(t)\, \Phi_k(\zeta). \tag{9}$$

For general purpose, the random coefficients $a_i$ are assumed to depend linearly and nonlinearly on the random variables $\xi_1, \dots, \xi_n$ and are written in the general polynomial form given in (10).

The right-hand side of (1), $f(t, \eta)$, is assumed to be a time-dependent random function that depends linearly on the components of $\eta$ and is expressed by

$$f(t, \eta) = f_0(t) + \sum_{j=1}^{q} b_j\, f_j(t)\, \eta_j, \tag{11}$$

where $f_j$ and $b_j$ are considered deterministic functions and constants.

Note that a more general right-hand side excitation can be decomposed in the form (11) using the Karhunen-Loève expansion [25].
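
A discrete sketch of such a Karhunen-Loève decomposition (illustrative; the exponential covariance kernel, correlation length, and grid are our choices) eigendecomposes the covariance matrix on a grid and checks that a few dominant modes capture the kernel:

```python
import numpy as np

# Discrete Karhunen-Loeve sketch (illustrative): the covariance
# C(s, t) = exp(-|s - t| / ell) on [0, 1] is eigendecomposed on a grid,
# and the process is represented by a few dominant modes, as in (11).
n, ell = 200, 0.5
t = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(t[:, None] - t[None, :]) / ell)

vals, vecs = np.linalg.eigh(C)        # eigh returns ascending eigenvalues
vals, vecs = vals[::-1], vecs[:, ::-1]

m = 10                                # number of retained KL modes
C_m = (vecs[:, :m] * vals[:m]) @ vecs[:, :m].T   # rank-m reconstruction
rel_err = np.linalg.norm(C - C_m) / np.linalg.norm(C)
```

The eigenvalues of a smooth covariance decay rapidly, so truncating to a handful of modes already reproduces the covariance with a small relative error.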

Based on a reduced decomposition using the first $P + 1$ terms, the stochastic process can be approximated by

$$u(t, \zeta) \approx \sum_{k=0}^{P} u_k(t)\, \Phi_k(\zeta), \tag{12}$$

where $\Phi_k$ is a multidimensional general polynomial chaos depending on the random vector $\zeta$.

The insertion of these expressions in (1) leads to a random differential equation of order $m$ in the unknown coefficients $u_k(t)$ (equation (13)).

Projecting this equation with respect to $\Phi_k$ for $k = 0$ to $P$, the deterministic differential system (14) of $P + 1$ coupled equations for the coefficients $u_k(t)$ is then obtained.
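
To make the structure of a Galerkin-projected system concrete, the sketch below sets up and integrates such a system for a hypothetical first-order example $u' = -(\bar{a} + \sigma\xi)\,u$ with a single Gaussian variable and Hermite chaos (this toy problem and its notation are ours, not the paper's):

```python
import numpy as np

# Galerkin-projected gPC system for a hypothetical first-order example:
# u'(t, xi) = -(abar + sig*xi) * u, u(0) = 1, xi ~ N(0, 1), Hermite chaos.
# Using xi*H_j = H_{j+1} + j*H_{j-1} and <H_k^2> = k!, projection on H_k gives
#   u_k' = -abar*u_k - sig*(u_{k-1} + (k+1)*u_{k+1}),
# with u_{-1} = u_{p+1} = 0: a coupled deterministic system for the u_k(t).
abar, sig, p = 1.0, 0.2, 8
u = np.zeros(p + 1); u[0] = 1.0       # chaos coefficients at t = 0

def rhs(u):
    du = -abar * u
    du[1:] -= sig * u[:-1]                          # sig * u_{k-1} term
    du[:-1] -= sig * np.arange(1, p + 1) * u[1:]    # sig * (k+1) * u_{k+1}
    return du

dt, nsteps = 1e-3, 1000               # integrate to t = 1 with RK4
for _ in range(nsteps):
    k1 = rhs(u); k2 = rhs(u + dt/2 * k1); k3 = rhs(u + dt/2 * k2); k4 = rhs(u + dt * k3)
    u = u + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

mean_gpc = u[0]                       # E[u(1)] is the 0th coefficient
mean_exact = np.exp(-abar + sig**2 / 2)   # E[exp(-(abar + sig*xi))]
```

Note that the randomness has been entirely transferred into the coupling structure of a deterministic ODE system, which is the essence of the projection step.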

Note that the first- and mainly the second-order differential equations of the above kind, $m = 1$ or $2$, have been investigated by many authors when the number $n$ of random variables is very small. When the random variables are Gaussian, Hermite-chaos polynomials in $]-\infty, +\infty[$ are used in [4]. This standard approach is very often used in structural dynamics. Various other works are elaborated when the random variables are uniform, gamma, or beta, and thus Legendre-chaos in $[-1, 1]$, Laguerre-chaos in $[0, +\infty[$, and Jacobi-chaos in $[-1, 1]$ are, respectively, used [8].

The expansion on the polynomial basis of the vector $\zeta$ is related to the polynomial basis associated with each random variable in the case of independent variables. This relation is clarified and the correspondence is clearly established herein. The procedure clarifying the inner products used in the differential system (14) is established, and an explicit simple procedure is given in the next subsection.

Firstly, this procedure is established in the next paragraphs for independent random variables, based on the relationships between the random variables and the random vector $\zeta$. Secondly, new variables are introduced that are not necessarily independent, and the procedure is established for general cases.

###### 2.1.2. Condensed Formulation

In order to formulate the considered problem in a condensed form, the following mathematical developments will be used. As the variables $\zeta_1, \dots, \zeta_N$ are assumed to be pairwise independent, the joint distribution function is then given by

$$w(x_1, \dots, x_N) = \prod_{i=1}^{N} w_i(x_i), \tag{15}$$

where $w_i$, $i = 1$ to $N$, are the marginal distribution functions associated with each variable $\zeta_i$.

The general polynomial chaos associated with each variable $\zeta_i$ is denoted by $\{\phi^i_k\}_{k \geq 0}$. These polynomials coincide with the orthogonal polynomials associated with the inner product defined in $L^2_{w_i}(\mathbb{R})$ with respect to the weight function $w_i$:

$$L^2_{w_i}(\mathbb{R}) = \Big\{ g : \mathbb{R} \to \mathbb{R} \;\Big|\; \int_{\mathbb{R}} g(x)^2\, w_i(x)\, dx < \infty \Big\}, \tag{16}$$

with the associated inner product given by

$$\langle g, h \rangle_i = \int_{\mathbb{R}} g(x)\, h(x)\, w_i(x)\, dx. \tag{17}$$

The set of orthogonal polynomials $\{\phi^i_k\}$ satisfies the orthogonality conditions:

$$\langle \phi^i_j, \phi^i_k \rangle_i = h^i_k\, \delta_{jk}, \tag{18}$$

where

$$h^i_k = \langle \phi^i_k, \phi^i_k \rangle_i. \tag{19}$$

In order to make a correspondence between the set of polynomial chaos associated with each variable $\zeta_i$ and that associated with the random vector $\zeta$, a total order is introduced on the set $\mathbb{N}^N$: multi-indices are ordered first by their total degree $|\alpha| = \alpha_1 + \dots + \alpha_N$ and then, within the same total degree, lexicographically (equation (20)).

This order allows considering a bijection $\tau$ from $\mathbb{N}^N$ to $\mathbb{N}$, defined by associating with each multi-index its rank in this total order (equation (21)).

This bijection relates each element in $\mathbb{N}^N$ to its order defined in (20). Let $p$ be a nonzero integer; the integer $P$ used in the decomposition (12) is taken as

$$P + 1 = \frac{(N + p)!}{N!\, p!}. \tag{22}$$

The choice of $P$ allows decomposing the solution in the set of polynomial chaos associated with the vector $\zeta$ of degree less than or equal to $p$. So, for every integer $k$ between $0$ and $P$, there is a single multi-index $\alpha^k = (\alpha^k_1, \dots, \alpha^k_N)$ such that

$$\tau(\alpha^k) = k. \tag{23}$$
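
The ordering and truncation can be sketched by explicitly enumerating the multi-indices of total degree at most $p$ (an illustrative construction; the tie-breaking rule within a given total degree is our choice):

```python
from itertools import product
from math import comb

# Enumerate the multi-indices alpha = (alpha_1, ..., alpha_n) with
# alpha_1 + ... + alpha_n <= p, sorted by total degree first: this realizes
# the bijection between the one-dimensional index k and the multi-index
# used by the multidimensional chaos (illustrative construction).
def multi_indices(n, p):
    idx = [a for a in product(range(p + 1), repeat=n) if sum(a) <= p]
    return sorted(idx, key=lambda a: (sum(a), a))   # graded order

n, p = 3, 4
idx = multi_indices(n, p)
# the number of retained terms matches the binomial count (n + p)! / (n! p!)
```

The position of each multi-index in this sorted list is exactly the integer index of the corresponding multidimensional chaos polynomial.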

This one-to-one correspondence allows writing the multidimensional polynomial chaos associated with the random vector $\zeta$ as a function of the one-dimensional polynomial chaos corresponding to each variable by

$$\Phi_k(\zeta) = \prod_{i=1}^{N} \phi^i_{\alpha^k_i}(\zeta_i), \tag{24}$$

where $\alpha^k = (\alpha^k_1, \dots, \alpha^k_N)$ is the multi-index associated with the integer $k$, introduced by (23).
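
The product construction and the orthogonality it inherits dimension-wise can be verified numerically; the sketch below (illustrative) builds a two-dimensional Legendre chaos for two independent uniform variables:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Product construction of a multidimensional chaos (illustrative): for two
# independent Uniform(-1, 1) variables, Phi_{(i, j)}(x, y) = P_i(x) * P_j(y),
# where P_k are Legendre polynomials; orthogonality follows per dimension.
x, wx = L.leggauss(12)                # 1-D Gauss-Legendre rule
w2 = np.outer(wx, wx) / 4.0           # tensor weight of the uniform density

def P(k, x):
    c = np.zeros(k + 1); c[k] = 1.0
    return L.legval(x, c)

def inner(a, b):
    # <Phi_a, Phi_b> over [-1, 1]^2 with tensor Gauss-Legendre quadrature
    fa = np.outer(P(a[0], x), P(a[1], x))
    fb = np.outer(P(b[0], x), P(b[1], x))
    return np.sum(w2 * fa * fb)

pairs = [(i, j) for i in range(3) for j in range(3)]
G = np.array([[inner(a, b) for b in pairs] for a in pairs])
# diagonal entries are 1 / ((2i + 1)(2j + 1)); off-diagonal entries ~ 0
```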

For all integers $k$ and $l$ between $0$ and $P$, a square matrix of order $P + 1$ is defined from the corresponding inner-product terms (equation (25)).

Let $\mathcal{P}_p$ denote the set of polynomials of degree less than or equal to $p$. Then, the set of the classical polynomial chaos $\{\phi^i_0, \dots, \phi^i_p\}$ is an orthogonal basis of $\mathcal{P}_p$ for the inner product defined by (17). Let $\{1, x, \dots, x^p\}$ be the canonical basis of $\mathcal{P}_p$ and $M_i$ be the passage matrix from the canonical basis to the chaos basis $\{\phi^i_k\}$. This matrix can be obtained in a standard way, using the Gram-Schmidt procedure or a recursive method.
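
A minimal sketch of this Gram-Schmidt construction, building an orthogonal polynomial basis from the canonical basis using only the moments of the weight (illustrative; for the uniform density on $[-1, 1]$ the result must coincide with the monic Legendre polynomials):

```python
# Gram-Schmidt construction of an orthogonal polynomial basis from the
# canonical basis {1, x, x^2, ...}, using only the moments of the weight
# (illustrative). Polynomials are stored as coefficient lists, low degree first.
def moments(kmax):
    # m_k = ∫ x^k * (1/2) dx over [-1, 1]: 0 for odd k, 1/(k+1) for even k
    return [0.0 if k % 2 else 1.0 / (k + 1) for k in range(kmax)]

def gram_schmidt(deg):
    m = moments(2 * deg + 1)
    inner = lambda a, b: sum(ai * bj * m[i + j]          # <a, b> via moments
                             for i, ai in enumerate(a)
                             for j, bj in enumerate(b))
    basis = []
    for k in range(deg + 1):
        q = [0.0] * (k + 1); q[k] = 1.0                  # monomial x^k
        for prev in basis:                               # subtract projections
            c = inner(q, prev) / inner(prev, prev)
            for i, ci in enumerate(prev):
                q[i] -= c * ci
        basis.append(q)
    return basis                                          # monic, orthogonal

basis = gram_schmidt(3)
# basis[2] should be x^2 - 1/3 and basis[3] should be x^3 - (3/5) x,
# i.e. the monic Legendre polynomials for the uniform weight
```

The same routine works for any weight whose moments are available, which is exactly what makes the approach applicable to nonclassical distribution laws.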

Let the vectors $\phi^i(x) = (\phi^i_0(x), \dots, \phi^i_p(x))^T$ and $X(x) = (1, x, \dots, x^p)^T$ (26). Then, one has $\phi^i(x) = M_i\, X(x)$ (27).

For $k$ and $l$ between $0$ and $p$, the moment of order $k + l$ of the random variable $\zeta_i$ is defined by

$$m^i_{k+l} = \int_{\mathbb{R}} x^{k+l}\, w_i(x)\, dx. \tag{28}$$

The square matrices of order $p + 1$, $H^i$ for $i = 1$ to $N$, are defined from these moments by $(H^i)_{kl} = m^i_{k+l}$ (29). This gives explicitly the inner-product matrix of each chaos basis in terms of the passage and moment matrices (equation (30)).

Let $k$ and $l$ be two integers between $0$ and $P$. From (23) we have, respectively, unique elements $\alpha^k$ and $\alpha^l$ in $\mathbb{N}^N$ such that $\tau(\alpha^k) = k$ and $\tau(\alpha^l) = l$. Rewriting the corresponding products of polynomial chaos using expression (26), the required inner products are explicitly obtained in closed form.

The expression in the right-hand side of (14) is thus also obtained in an explicit closed form.

For independent random variables, a relationship between the multidimensional and the associated one-dimensional generalized polynomials is established. This leads to explicit and closed forms of the used scalar products and of the terms to be numerically computed. Using these relationships, the deterministic differential system (34) for the chaos coefficients is obtained.

For a compact formulation and using notation (26), square matrices of order $P + 1$ and a time-dependent vector of dimension $P + 1$ are introduced for all integers $k$ (equation (35)).

Using these notations, the differential system (34) is rewritten in a closed-form matrix system (36).

Using the presented methodological approach, the truncated solution (12) can be numerically obtained. Its mean and variance are given by

$$E[u(t, \cdot)] = u_0(t), \qquad \operatorname{Var}[u(t, \cdot)] = \sum_{k=1}^{P} u_k^2(t)\, h_k. \tag{37}$$
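
The coefficient-based mean and variance can be checked on a hand-built expansion; the sketch below (illustrative, with Hermite chaos and coefficients of our choosing) compares them with direct sampling:

```python
import numpy as np

# Mean and variance from chaos coefficients (standard gPC identities):
# with Phi_0 = 1 and <Phi_k> = 0 for k >= 1,
#   E[u] = u_0,   Var[u] = sum_{k>=1} u_k^2 * <Phi_k^2>.
# Illustrative check with Hermite chaos: u = 2 + 3*xi + 0.5*(xi^2 - 1),
# so (u_0, u_1, u_2) = (2, 3, 0.5) and <H_k^2> = k!.
u_coeff = np.array([2.0, 3.0, 0.5])
h = np.array([1.0, 1.0, 2.0])         # <H_k^2> = k! for k = 0, 1, 2

mean = u_coeff[0]
var = np.sum(u_coeff[1:]**2 * h[1:])  # 3^2 * 1 + 0.5^2 * 2 = 9.5

# cross-check by direct sampling of xi ~ N(0, 1)
rng = np.random.default_rng(1)
xi = rng.standard_normal(400_000)
u_samp = 2 + 3 * xi + 0.5 * (xi**2 - 1)
samp_mean, samp_var = u_samp.mean(), u_samp.var()
```

The mean and variance are thus read off the chaos coefficients at no extra cost, with no sampling of the solution required.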

Equation (14) is usually given for Hermite polynomials when the random variables are Gaussian. This kind of projection is classically done, and many authors follow this procedure.

In this paper, the random variables may follow various types of laws. The presented generalized formalism allows one to handle multiple distribution laws by using the canonical basis and the orthogonalization principle with respect to a scalar product associated with the distribution function of the random vector. Explicit and closed forms of the used scalar products and of the terms to be numerically computed are given.

It should be noted that the major problem of this classical polynomial chaos expansion is that the number of unknowns to estimate increases very rapidly when the degree of the polynomial chaos and the number of random parameters increase. More clearly, for $n$ random variables, the number of unknown coefficients in the polynomial chaos of order less than or equal to $p$ is $(n + p)!/(n!\, p!)$. Table 1 presents the numbers of needed unknown terms for various $n$ and $p$. This very fast growth of dimensionality is the main limitation of this classical approach.
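
The count $(n + p)!/(n!\, p!) = \binom{n+p}{p}$ and its growth can be tabulated directly (the specific $(n, p)$ pairs below are our choices, not necessarily those of Table 1):

```python
from math import comb

# Number of chaos coefficients for n random variables and total degree <= p:
# (n + p)! / (n! p!) = C(n + p, p). Its growth is what limits the classical
# expansion (cf. Table 1).
def n_terms(n, p):
    return comb(n + p, p)

growth = {(n, p): n_terms(n, p) for n in (2, 5, 10, 20) for p in (2, 3, 5)}
```

Already for 20 variables and degree 5 there are tens of thousands of coupled unknowns, which motivates the IRCM reduction developed in this paper.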