Dynamic Neural Networks for Model-Free Control and Identification
3D Nonparametric Neural Identification
This paper presents a state identification study of 3D partial differential equations (PDEs) using the differential neural network (DNN) approximation. Many physical situations in applied mathematics and engineering are described by PDEs; such models suffer from many sources of uncertainty in their mathematical representation. Moreover, finding exact solutions of these uncertain PDEs is not a trivial task, especially when the PDE is posed in two or more dimensions. Given the continuous nature and the temporal evolution of these systems, differential neural networks are an attractive option as nonparametric identifiers capable of estimating a 3D distributed model. The adaptive laws for the weights ensure the "practical stability" of the DNN trajectories with respect to the parabolic three-dimensional (3D) PDE states. To verify the qualitative behavior of the suggested methodology, a nonparametric modeling problem for a distributed parameter plant is analyzed.
1.1. 3D Partial Differential Equations
Partial differential equations (PDEs) are of vast importance in applied mathematics, physics, and engineering, since many real physical situations can be modelled by them. The dynamics of natural phenomena are usually described by a set of differential equations obtained through mathematical modeling rules. Almost every system described by a PDE has already appeared in the one- and two-dimensional situations; appending a third dimension ascends the dimensional ladder to its ultimate rung, in physical space at least. For instance, linear second-order 3D partial differential equations appear in many problems: equilibrium configurations of solid bodies, the three-dimensional wave equation governing vibrations of solids, liquids, gases, and electromagnetic waves, and the three-dimensional heat equation modeling basic spatial diffusion processes. These equations define a state expressed in rectangular coordinates. There are some basic solution techniques for 3D PDEs: separation of variables and Green's functions or fundamental solutions. Unfortunately, the most powerful of the planar tools, conformal mapping, does not carry over to higher dimensions. For this reason, many numerical techniques for solving such PDEs, for example, the finite difference method (FDM) and the finite element method (FEM), have been developed (see [2, 3]). The principal disadvantage of these methods is that they require complete mathematical knowledge of the system to define a mesh (domain discretization) on which the functions are approximated locally. The construction of a mesh in two or more dimensions is a nontrivial task. Usually, in practice, only low-order approximations are employed, resulting in a continuous approximation of the function across the mesh but not of its partial derivatives. The discontinuities in the derivative approximation can adversely affect the stability of the solution.
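As a minimal sketch of the kind of local, mesh-based approximation that FDM relies on (an illustration assumed here, not taken from the paper), the following code evaluates the standard second-order 7-point Laplacian stencil on a uniform 3D grid and checks it against a field with a known Laplacian:

```python
import numpy as np

# Uniform mesh on the unit cube; n points per axis, spacing h.
n = 21
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# Test field u = x^2 + y^2 + z^2, whose exact Laplacian is 6 everywhere.
u = X**2 + Y**2 + Z**2

# 7-point central-difference Laplacian at interior nodes.
lap = np.zeros_like(u)
lap[1:-1, 1:-1, 1:-1] = (
    u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1]
    + u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1]
    + u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2]
    - 6.0 * u[1:-1, 1:-1, 1:-1]
) / h**2

# The second-order stencil is exact for quadratics.
print(np.allclose(lap[1:-1, 1:-1, 1:-1], 6.0))
```

The stencil is exact here only because the test field is quadratic; for general fields the local truncation error is of order h².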
However, all those methods are well defined only if the PDE structure is perfectly known. In fact, most suitable numerical solutions can be achieved only if the PDE is linear. Nevertheless, there are few methods to solve or approximate the PDE solution when its structure (even in the linear case) is uncertain. This paper suggests a different numerical solution for uncertain systems (given by a 3D PDE) based on the neural network approach.
1.2. Application of Neural Networks to Model PDEs
Recent results show that neural network techniques are very effective for identifying a wide class of systems when no complete model information is available, or even when the plant is considered a gray box. It is well known that radial basis function neural networks (RBFNNs) and multilayer perceptrons (MLPs) are powerful tools for approximating nonlinear uncertain functions: any continuous function defined on a compact set can be approximated to arbitrary accuracy by such a class of neural networks. Since the solutions of interesting PDEs are uniformly continuous and the viable sets that arise in common problems are often compact, neural networks seem like ideal candidates for approximating viability problems (see [7, 8]). Neural networks may provide exact approximations of PDE solutions; however, numerical constraints prevent this exactness, because it is practically impossible to simulate NN structures with an infinite number of nodes (see [7, 9, 10]). The differential neural network (DNN) approach avoids many problems related to global extremum search by converting the learning process into an adequate feedback design (see [11, 12]). Lyapunov's stability theory has been used within the neural network framework (see [4, 11, 13]). The contribution of this paper is the development of a nonparametric identifier for uncertain 3D systems described by partial differential equations. The method produces an artificial mathematical model in three dimensions that is able to describe the PDE dynamics. The numerical algorithm required to solve the non-parametric identifier is also developed.
2. 3D Finite Differences Approximation
The problem requires the proposal of a non-parametric identifier based on DNNs in three dimensions and may be treated within the PDE framework. Therefore, this section introduces the DNN approximation characteristics used to reconstruct the trajectory profiles for a family of 3D PDEs.
Consider the set of uncertain second-order PDEs in (1), where the state has components defined over a given spatial domain and the state dynamics are affected by a noise term. This PDE is supplied with the set of initial and boundary conditions given in (2).
System (1), armed with the boundary and initial conditions (2), is considered in a Hilbert space equipped with an inner product. Let us consider a vector function that is piecewise continuous in time and belongs to the set of Lebesgue-measurable, square-integrable functions. Suppose that the nonlinear function satisfies a Lipschitz condition with a positive constant, which is used just to ensure that the state equation has a unique solution over the considered interval. The norm used here stands for the Sobolev norm, defined as follows.
Definition 1. Let an open set in Euclidean space be given. Define the Sobolev norm of a function as indicated, where the integration is performed in the Lebesgue sense. The completion of the space of functions with respect to this norm is the Sobolev space. In the square-integrable case, the Sobolev space is a Hilbert space (see [14, 15]).
Below, we will use the norm (4) for the state functions at each fixed time.
2.1. Numerical Approximation for Uncertain Functions
Now consider a function in this space. It can be rewritten as a series expansion over a set of functions constituting a basis; this expression is referred to as the vector-function series expansion. Based on this series expansion, an NN takes a mathematical structure that can be used to approximate a nonlinear function with an adequate selection of its size parameters. Following the Stone-Weierstrass theorem, if the NN approximation error is considered, then for any arbitrary positive constant there are constants such that the error remains below that constant. The main idea behind the application of DNNs to approximate the 3D PDE solution is to use a class of finite-difference methods, but for uncertain nonlinear functions. Thus, it is necessary to construct an interior set (commonly called a grid or mesh) that divides each coordinate subdomain into equidistant sections (Figure 1).
Using the mesh description, one can use the following definitions:
Analogously, we may consider the other cases. Using the mesh description and applying the finite-difference representation, one gets the discrete approximation of the nonlinear PDE (1), which can be represented as follows.
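The sigmoidal series approximation invoked above can be illustrated numerically. In the following sketch, the network size, the random input weights, and the target function are all illustrative assumptions, not taken from the paper; it fits a single-hidden-layer sigmoidal network to a smooth scalar function by least squares and checks the sup-norm error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample a smooth target function on a compact interval.
x = np.linspace(-1.0, 1.0, 200)[:, None]
f = np.sin(np.pi * x).ravel()

# Single hidden layer of m sigmoidal nodes with fixed random input
# weights C and biases b; only the output weights W are fitted.
m = 50
C = rng.normal(size=(1, m))
b = rng.normal(size=m)
S = 1.0 / (1.0 + np.exp(-(x @ C + b)))      # sigmoidal activations

# Least-squares fit of the output weights.
W, *_ = np.linalg.lstsq(S, f, rcond=None)

# Sup-norm approximation error over the sample grid.
err = np.max(np.abs(S @ W - f))
```

In the spirit of the Stone-Weierstrass-type results cited in the text, the error can be driven below any prescribed positive constant by enlarging the network; here a modest number of nodes already yields a small sup-norm error.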
2.2. 3D Approximation for Uncertain PDE
By simply adding and subtracting the corresponding terms, one can describe (1) in an equivalent form, where the additional terms are defined at each node of the mesh.
Here the modelling error term appears, together with constant matrices and a set of sigmoidal functions of the corresponding sizes, known as the neural network activation functions. These functions obey sector conditions and are bounded over the spatial domain. Following the DNN methodology and applying the same representation to (12), we get, for each mesh node, the following robust adaptive non-parametric identifier. In this equation, the term usually recognized as the modeling error satisfies the following identity (the dependence of each sigmoidal function on its arguments has been omitted).
We will assume that the modeling error terms satisfy the following.
Assumption 2. The modeling error is absolutely bounded in :
Assumption 3. The modeling-error gradient is bounded, with constant bounds.
3. DNN Identification for Distributed Parameters Systems
3.1. DNN Identifier Structure
Based on the DNN methodology, consider the DNN identifier defined at every node of the mesh, where the activation functions act on each of the states, a constant matrix is to be selected, and the estimate of the state is produced. Obviously, the proposed methodology implies designing an individual DNN identifier for each mesh point; the collection of such identifiers constitutes a DNN net of connected identifiers working in parallel. This means that the applied DNN approximation significantly simplifies the specification of the activation vectors, which are now constant for any fixed mesh node.
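A minimal scalar sketch of such a DNN identifier can be written as d/dt x̂ = A·x̂ + W·σ(x̂) + u, with a gradient-type learning law for W driven by the identification error, in the spirit of the DNN methodology cited above. The plant, the gains, and the exciting input below are illustrative assumptions, not the identifier's actual parameters:

```python
import numpy as np

A, k = -5.0, 10.0        # stable (Hurwitz) design constant, learning gain
dt, T = 1e-3, 20.0       # Euler step and horizon

x, xhat, W = 1.0, 0.0, 0.0   # plant state, estimate, adapted weight
for i in range(int(T / dt)):
    u = np.sin(i * dt)       # persistently exciting input, known to the identifier
    e = xhat - x             # identification error
    # "Unknown" plant used only as a data source (true weight is 4).
    xdot = A * x + 4.0 * np.tanh(x) + u
    # DNN identifier with sigmoidal activation.
    xhatdot = A * xhat + W * np.tanh(xhat) + u
    # Gradient-type learning law: adjust W to decrease the error.
    Wdot = -k * e * np.tanh(xhat)
    x, xhat, W = x + dt * xdot, xhat + dt * xhatdot, W + dt * Wdot

print(abs(xhat - x) < 0.1)   # the estimate converges to the plant state
```

A Lyapunov argument of the type used in the paper (quadratic in the error plus quadratic in the weight mismatch) shows that this error decreases and the weight stays bounded; the full 3D scheme runs one such identifier per mesh node in parallel.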
3.2. Learning Laws for Identifier’s Weights
For each mesh node, define the vector functions describing the error between the trajectories produced by the model and the DNN identifier, as well as their derivatives with respect to each of the three spatial coordinates. Let the weights be time-variant matrices satisfying the following nonlinear matrix differential equations, in which the corresponding sigmoidal functions appear, together with positive definite gain matrices and the positive definite solutions of the algebraic Riccati equations defined as follows.
The matrices involved take a common form in which the index indicates the spatial coordinate with respect to which the partial derivative is taken.
This special class of Riccati equation has a unique positive definite solution if and only if the four conditions given in the cited reference (page 65, Chapter 2, Nonlinear System Identification: Differential Learning) are fulfilled: the matrix is stable, the corresponding pair is controllable, the corresponding pair is observable, and the weighting matrices are selected in such a way as to satisfy an inequality restricting the largest eigenvalue, which guarantees the existence of a unique positive solution. The main result obtained in this part is stated in the practical stability framework.
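These existence conditions can be checked numerically. The sketch below, with purely illustrative matrices (assumptions, not the identifier's actual parameters), solves a standard continuous algebraic Riccati equation for a stable A and positive definite Q and verifies that the solution is positive definite:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative data: A stable, (A, B) controllable, Q > 0.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.eye(1)

# Solves A'P + PA - P B R^{-1} B' P + Q = 0.
P = solve_continuous_are(A, B, Q, R)

# For the stability argument, P must be symmetric positive definite.
eigs = np.linalg.eigvalsh(P)
print(np.all(eigs > 0))
```

The paper's Riccati equations carry additional terms tied to the identifier structure; the sketch only illustrates the standard existence check (stability, controllability, observability, and an eigenvalue restriction on the weights) in its simplest form.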
4. Practical Stability and Stabilization
The following definition and proposition are needed for the main results of the paper. Consider the nonlinear ODE system (30), subject to a bounded external perturbation or uncertainty.
Definition 4 (practical stability). Assume that a time interval and a fixed function over it are given. The nonlinear system (30) is said to be practically stable over this interval under the presence of the perturbation if there exists a bound (depending on the given data and the interval) such that the trajectory remains within it for all times in the interval whenever the initial condition does.
Similarly to Lyapunov stability theory for nonlinear systems, the aforementioned direct method was applied to establish the practical stability of nonlinear systems, using practical Lyapunov-like functions under the presence of external perturbations and model uncertainties. Note that these functions have properties differing significantly from the usual Lyapunov functions in classical stability theory.
The subsequent proposition requires the following lemma.
Lemma 5. Let a nonnegative function satisfy the following differential inequality, with positive constants as indicated. Then the corresponding exponential bound holds, with the limit function defined as
Proof. The proof of this lemma can be obtained directly by the application of the Gronwall-Bellman Lemma.
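The Gronwall-type bound behind Lemma 5 can be checked numerically. Assuming the inequality takes the standard form dV/dt ≤ −aV + b with a > 0 (an illustrative assumption, since the paper's constants were not recoverable here), the bound is V(t) ≤ V(0)e^{−at} + (b/a)(1 − e^{−at}), so trajectories converge into the zone V ≤ b/a:

```python
import numpy as np

a, b = 2.0, 0.5          # illustrative positive constants
dt, T = 1e-3, 10.0
t = np.arange(0.0, T, dt)

# Worst case of the inequality: integrate the equality dV/dt = -a V + b.
V = np.empty_like(t)
V[0] = 5.0
for k in range(1, len(t)):
    V[k] = V[k - 1] + dt * (-a * V[k - 1] + b)

# Analytic Gronwall bound; the explicit-Euler curve stays below it.
bound = V[0] * np.exp(-a * t) + (b / a) * (1.0 - np.exp(-a * t))
print(np.all(V <= bound + 1e-6))
```

This is exactly the structure used later in the paper: the identification error enters a residual zone of radius b/a, which is the "practical stability" zone rather than asymptotic convergence to zero.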
Proposition 6. Given a time interval and a function over a continuously differentiable real-valued function satisfying , for all , is said to be -practical Lyapunov-like function over under if there exists a constant such that with a bounded nonnegative nonlinear function with upper bound . Moreover, the trajectories of belong to the zone when . In this proposition denotes the derivative of along , that is, .
Proof. The proof follows directly from Lemma 5.
Definition 7. Given a time interval and a function over , nonlinear system (30) is -practically stable, under if there exists an -practical Lyapunov-like function over under .
5. Identification Problem Formulation
The state identification problem for the nonlinear system (13) analyzed in this study can now be stated as follows.
Problem. For the nonlinear system given by the vector PDE (20), study the quality of the DNN identifier supplied with the adjustment (learning) laws (22), estimate the upper bound of the identification error (with the bound from (24)), and, if possible, reduce it to its lowest value by selecting the free parameters participating in the DNN identifier.
Reducing the identification error means that the differential neural network has converged to the solution of the 3D PDE; this can be observed in the matching between the DNN and the PDE state.
6. Main Result
The main result of this paper is presented in the following theorem.
Theorem 8. Consider the nonlinear model (1), given by the system of PDEs with uncertainties (perturbations) in the states, under the boundary conditions (2). Suppose also that the DNN identifier is given by (20), whose parameters are adjusted by the learning laws (22). If positive definite matrices provide the existence of positive solutions to the Riccati equations (24), then for every mesh node the following upper bound on the identification error is ensured. Moreover, the weights remain bounded, proportionally to the perturbation levels.
Proof. The proof is given in the appendix.
7. Simulation Results
7.1. Numerical Example
Below, a numerical simulation gives a qualitative illustration for a benchmark system. Consider the following three-dimensional PDE, where it is assumed that there is access to discrete measurements of the state along the whole domain (which is feasible in practice) and where a noise term affects the state dynamics. This model is used only to generate the data to test the 3D identifier based on the DNN. Boundary conditions and initial conditions were selected as follows. The trajectories of the model can be seen in Figure 3, as well as the estimated state produced by the DNN identifier. The efficiency of the identification process provided by the suggested DNN algorithm is shown in Figure 4.
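Since the benchmark PDE itself was not recoverable here, the sketch below is a hypothetical reconstruction of the kind of data-generating model used in such tests: an explicit-Euler, finite-difference simulation of a 3D diffusion equation u_t = D(u_xx + u_yy + u_zz) on the unit cube with homogeneous Dirichlet boundaries. D, the grid, and the initial profile are illustrative assumptions:

```python
import numpy as np

n, D = 11, 0.1
h = 1.0 / (n - 1)
dt = 0.1 * h**2 / (6 * D)     # well inside the explicit stability limit dt <= h^2/(6D)

x = np.linspace(0.0, 1.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
# Initial state: a smooth bump vanishing on the boundary faces.
u = np.sin(np.pi * X) * np.sin(np.pi * Y) * np.sin(np.pi * Z)

for _ in range(200):
    # 7-point Laplacian on interior nodes; boundary entries stay at zero.
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1, 1:-1] = (
        u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1]
        + u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1]
        + u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2]
        - 6.0 * u[1:-1, 1:-1, 1:-1]
    ) / h**2
    u += dt * D * lap

# Diffusion with homogeneous Dirichlet boundaries only dissipates the profile.
print(u.max() < 1.0)
```

The state snapshots produced by such a loop play the role of the "discrete measurements along the whole domain" mentioned above, and would be fed to the per-node DNN identifiers as training data.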
7.2. Tumour Growth Example
The mathematical model of brain tumour growth presented in this section is based on published results. Here the diffusion coefficient is considered constant. Let us consider the following three-dimensional parabolic equation of tumour growth, in which the growth rate of the brain tumour, the diffusion coefficient, the drift velocity field, the proliferation coefficient, and the decay coefficient of cells appear. It is assumed that there is access to discrete measurements of the state along the whole domain, which is feasible in practice using PET-CT (positron emission tomography-computed tomography) technology. This model is used only to generate the data to test the 3D identifier based on the DNN. Boundary conditions and initial conditions were selected as follows. The trajectories of the model and the estimated state produced by the DNN identifier can be seen in Figure 7. The dissimilarity between the two trajectories depends on the learning period required for adjusting the DNN identifier. The error between the trajectories produced by the model and the proposed identifier is close to zero for almost all points, which shows the efficiency of the identification process provided by the suggested DNN algorithm (Figure 8).
8. Conclusion
The adaptive DNN method proposed here solves the problem of non-parametric identification of nonlinear systems (with uncertainties) given by an uncertain 3D PDE. Practical stability of the identification process is demonstrated based on a Lyapunov-like analysis. The upper bound of the identification error is explicitly constructed. Numerical examples demonstrate the estimation efficiency of the suggested methodology.
Appendix
Consider the Lyapunov-like function defined as the composition of the NML individual Lyapunov functions over the whole mesh:
The time derivative of this function can be obtained so that
Applying the standard matrix inequality, valid for any pair of matrices of compatible dimensions, to the terms containing the errors and their derivatives, we obtain the following bound. By the Riccati equations defined in (24), and in view of the adjustment equations for the weights (22), the previous inequality simplifies to
Applying Lemma 5, one has , which completes the proof.
Y. Pinchover and J. Rubinstein, An Introduction to Partial Differential Equations, Cambridge University Press, 2008.
G. D. Smith, Numerical Solution of Partial Differential Equations: Finite Difference Methods, Clarendon Press, Oxford, UK, 1978.
T. J. R. Hughes, The Finite Element Method, Prentice Hall, Upper Saddle River, NJ, USA, 1987.
R. Fuentes, A. Poznyak, T. Poznyak, and I. Chairez, “Neural numerical modeling for uncertain distributed parameter system,” in Proceedings of the International Joint Conference on Neural Networks, pp. 909–916, Atlanta, Ga, USA, June 2009.
S. Haykin, Neural Networks. A Comprehensive Foundation, IEEE Press, Prentice Hall U.S., New York, NY, USA, 2nd edition, 1999.
G. Cybenko, “Approximation by superpositions of a sigmoidal function,” Mathematics of Control, Signals and Systems, vol. 2, pp. 303–314, 1989.
I. E. Lagaris, A. Likas, and D. I. Fotiadis, “Artificial neural networks for solving ordinary and partial differential equations,” IEEE Transactions on Neural Networks, vol. 9, no. 5, pp. 987–1000, 1998.
M. W. M. G. Dissanayake and N. Phan-Thien, “Neural-network-based approximations for solving partial differential equations,” Communications in Numerical Methods in Engineering, vol. 10, no. 3, pp. 195–201, 1994.
A. Poznyak, E. Sanchez, and W. Yu, Differential Neural Networks for Robust Nonlinear Control (Identification, State Estimation and Trajectory Tracking), World Scientific, 2001.
F. L. Lewis, A. Yeşildirek, and K. Liu, “Multilayer neural-net robot controller with guaranteed tracking performance,” IEEE Transactions on Neural Networks, vol. 7, no. 2, pp. 1–11, 1996.
H. K. Khalil, Nonlinear Systems, Prentice-Hall, Upper Saddle River, NJ, USA, 2002.
R. A. Adams and J. Fournier, Sobolev Spaces, Academic Press, New York, NY, USA, 2nd edition, 2003.
M. R. Islam and N. Alias, “A case study: 2D Vs 3D partial differential equation toward tumour cell visualisation on multi-core parallel computing atmosphere,” International Journal for the Advancement of Science and Arts, vol. 10, no. 1, pp. 25–35, 2010.
A. Poznyak, Advanced Mathematical Tools for Automatic Control Engineers: Deterministic Techniques, vol. 1, Elsevier, 2008.