Abstract

This paper presents a state identification study of 3D partial differential equations (PDEs) using a differential neural network (DNN) approximation. Many physical situations in applied mathematics and engineering can be described by PDEs; however, such models carry many sources of uncertainty in their mathematical representation. Moreover, finding exact solutions of those uncertain PDEs is not a trivial task, especially when the PDE is defined in two or more dimensions. Given the continuous nature and the temporal evolution of these systems, differential neural networks are an attractive option as nonparametric identifiers capable of estimating a 3D distributed model. The adaptive laws for the weights ensure the "practical stability" of the DNN trajectories with respect to the states of the parabolic three-dimensional (3D) PDE. To verify the qualitative behavior of the suggested methodology, a nonparametric modeling problem for a distributed parameter plant is analyzed.

1. Introduction

1.1. 3D Partial Differential Equations

Partial differential equations (PDEs) are of vast importance in applied mathematics, physics, and engineering, since many real physical situations can be modelled by them. The dynamics of natural phenomena are usually described by sets of differential equations obtained through mathematical modeling rules [1]. Almost every system described by a PDE has already been studied in the one- and two-dimensional settings. Appending a third dimension ascends the dimensional ladder to its ultimate rung, in physical space at least. For instance, linear second-order 3D partial differential equations appear in many problems: models of equilibrium configurations of solid bodies, the three-dimensional wave equation governing vibrations of solids, liquids, gases, and electromagnetic waves, and the three-dimensional heat equation modeling basic spatial diffusion processes. These equations define a state over rectangular coordinates in $\mathbb{R}^3$. There are some basic underlying techniques to solve 3D PDEs: separation of variables and Green's functions or fundamental solutions. Unfortunately, the most powerful of the planar tools, conformal mapping, does not carry over to higher dimensions. For this reason, many numerical techniques for solving such PDEs have been developed, for example, the finite difference method (FDM) and the finite element method (FEM) (see [2, 3]). The principal disadvantage of these methods is that they require complete mathematical knowledge of the system to define a mesh (domain discretization) on which the functions are approximated locally. The construction of a mesh in two or more dimensions is a nontrivial task. Usually, in practice, only low-order approximations are employed, resulting in a continuous approximation of the function across the mesh but not of its partial derivatives. The discontinuities in the derivative approximation can adversely affect the stability of the solution. Moreover, all these methods are well defined only if the PDE structure is perfectly known; in fact, most suitable numerical solutions can be achieved only when the PDE is linear. There are few methods to solve or approximate the solution of a PDE whose structure (even in the linear case) is uncertain. This paper suggests a different numerical solution for uncertain systems (given by a 3D PDE) based on the neural network approach [4].

1.2. Application of Neural Networks to Model PDEs

Recent results show that neural network techniques are very effective for identifying a wide class of systems when no complete model information is available, or even when the plant is considered as a gray box. It is well known that radial basis function neural networks (RBFNNs) and multilayer perceptrons (MLPs) are powerful tools for approximating nonlinear uncertain functions (see [5]): any continuous function defined on a compact set can be approximated to arbitrary accuracy by such classes of neural networks [6]. Since the solutions of interesting PDEs are uniformly continuous and the viable sets that arise in common problems are often compact, neural networks seem like ideal candidates for approximating viability problems (see [7, 8]). Neural networks could, in principle, provide exact approximations of PDE solutions; however, numerical constraints preclude this exactness because it is practically impossible to simulate NN structures with an infinite number of nodes (see [7, 9, 10]). The differential neural network (DNN) approach avoids many problems related to global extremum search by converting the learning process into an adequate feedback design (see [11, 12]). Lyapunov's stability theory has been used within the neural network framework (see [4, 11, 13]). The contribution of this paper is the development of a nonparametric identifier for 3D uncertain systems described by partial differential equations. The method produces an artificial mathematical model in three dimensions that is able to describe the PDE dynamics. The numerical algorithm required to solve the nonparametric identifier was also developed.

2. 3D Finite Differences Approximation

This work proposes a nonparametric identifier based on DNNs in three dimensions, treated within the PDE framework. This section therefore introduces the DNN approximation used to reconstruct the trajectory profiles of a family of 3D PDEs.

Consider the set of uncertain second-order PDEs
$$u_t = f\left(u, u_x, u_{xx}, u_y, u_{yy}, u_z, u_{zz}, u_{xy}, u_{yx}, u_{xz}, u_{yz}\right) + \xi, \tag{1}$$
where $u = u(x,y,z,t)$ has $n$ components ($u(x,y,z,t) \in \mathbb{R}^n$), the subscripts denote the partial derivatives with respect to $t$, $x$, $y$, $z$, $xx$, $yy$, $zz$, $xy$, $xz$, $yx$, and $yz$, the domain is $(x,y,z) \in [0,1]\times[0,1]\times[0,1]$, $t \ge 0$, and $\xi = \xi(x,y,z,t) \in \mathbb{R}^n$ is a noise in the state dynamics. This PDE is supplied with the set of initial and boundary conditions
$$u_x(0,y,z,t) = 0 \in \mathbb{R}^n,\quad u(x,0,z,t) = u_0 \in \mathbb{R}^n,\quad u(x,y,0,t) = u_0 \in \mathbb{R}^n,\quad u(x,y,z,0) = c \in \mathbb{R}^n. \tag{2}$$
In (1), $u_t(x,y,z,t)$ stands for $\partial u(x,y,z,t)/\partial t$.

System (1), armed with the boundary and initial conditions (2), evolves in a Hilbert space $H$ equipped with an inner product $(\cdot,\cdot)$. Let us consider a vector function $g(t) \in H$ that is piecewise continuous in $t$. By $L_\infty(a,b;H)$ we denote the set of $H$-valued functions $g$ such that $(g(\cdot),u)$ is Lebesgue measurable for all $u \in H$ and $\operatorname{ess\,sup}_{t\in[a,b]}\|g(\gamma,t)\| < \infty$, $\gamma \in \mathbb{R}^n$. Suppose that the nonlinear function $g(\gamma,t)$ satisfies the Lipschitz condition
$$\|g(\gamma,t) - g(\eta,t)\| \le L\,\|\gamma - \eta\|,\qquad \gamma,\eta \in B_r(\gamma_0) = \left\{\gamma \in \mathbb{R}^n : \|\gamma - \gamma_0\| \le r\right\},\quad t \in [t_0,t_1], \tag{3}$$
where $L$ is a positive constant and $\|g\|^2 = (g,g)$; this condition is used just to ensure that there exists some $\delta > 0$ such that the state equation $\dot\gamma = g(\gamma,t)$ with $\gamma(t_0) = \gamma_0$ has a unique solution over $[t_0, t_0+\delta]$ (see [14]). The norm defined above stands for the Sobolev space defined in [15] as follows.

Definition 1. Let $\Omega$ be an open set in $\mathbb{R}^n$, and let $\nu \in C^m(\Omega)$. Define the norm of $\nu(\gamma)$ as
$$\|\nu\|_{m,p} = \left( \sum_{0 \le |\alpha| \le m} \int_\Omega \left|D^\alpha \nu(\gamma)\right|^p \, d\gamma \right)^{1/p} \tag{4}$$
($1 \le p < \infty$, $D^\alpha \nu(\gamma) = (\partial^{|\alpha|}/\partial\gamma^\alpha)\,\nu(\gamma)$). This is the Sobolev norm, in which the integration is performed in the Lebesgue sense. The completion of the space of functions $\{\nu(\gamma) \in C^m(\Omega) : \|\nu\|_{m,p} < \infty\}$ with respect to $\|\cdot\|_{m,p}$ is the Sobolev space $H^{m,p}(\Omega)$. For $p = 2$, the Sobolev space is a Hilbert space (see [14, 15]).

Below we will use the norm (4) for the functions $u(\cdot,t)$ for each fixed $t$.

2.1. Numerical Approximation for Uncertain Functions

Now consider a function $f_0(\cdot)$ in $H^{m,2}(\Omega)$. By [16], $f_0(\cdot)$ can be rewritten as
$$f_0(\gamma,\theta) = \sum_{i}\sum_{j}\sum_{k} \theta_{ijk}\,\Psi_{ijk}(\gamma),\qquad \theta_{ijk} = \int_{-\infty}^{+\infty} f_0(\gamma)\,\Psi_{ijk}(\gamma)\,d\gamma,\quad i,j,k \in \mathbb{Z}, \tag{5}$$
where $\{\Psi_{ijk}(\gamma)\}$ are functions constituting a basis in $H^{m,2}(\Omega)$. The last expression is referred to as a vector function series expansion of $f_0(\gamma,\theta)$. Based on this series expansion, an NN takes the following mathematical structure:
$$\hat f_0(\gamma,\theta) = \sum_{i=L_1}^{L_2}\sum_{j=M_1}^{M_2}\sum_{k=N_1}^{N_2} \theta_{ijk}\,\Psi_{ijk}(\gamma) = \Theta^{\top} W(\gamma), \tag{6}$$
which can be used to approximate a nonlinear function $f_0(\gamma,\theta) \in S$ with an adequate selection of integers $L_1, L_2, M_1, M_2, N_1, N_2 \in \mathbb{Z}^+$, where
$$\Theta = \left[\theta_{L_1 M_1 N_1}, \ldots, \theta_{L_1 M_1 N_2}, \ldots, \theta_{L_2 M_1 N_1}, \ldots, \theta_{L_2 M_2 N_2}\right],\qquad W(\gamma) = \left[\Psi_{L_1 M_1 N_1}, \ldots, \Psi_{L_1 M_1 N_2}, \ldots, \Psi_{L_2 M_1 N_1}, \ldots, \Psi_{L_2 M_2 N_2}\right]. \tag{7}$$
Following the Stone-Weierstrass theorem [17], if
$$\epsilon_{L_1,L_2,M_1,M_2,N_1,N_2} = f_0(\gamma,\theta) - \hat f_0(\gamma,\theta) \tag{8}$$
is the NN approximation error, then for any arbitrary positive constant $\varepsilon$ there are constants $L_1, L_2, M_1, M_2, N_1, N_2$ such that for all $x \in X$
$$\left\|\epsilon_{L_1,L_2,M_1,M_2,N_1,N_2}\right\|^2 \le \varepsilon. \tag{9}$$
The main idea behind the application of DNNs [11] to approximate the 3D PDE solution is to use a class of finite-difference methods, but for uncertain nonlinear functions. So it is necessary to construct an interior set (commonly called a grid or mesh) that divides the subdomain $x \in [0,1]$ into $L$ equidistant sections, $y \in [0,1]$ into $M$, and $z \in [0,1]$ into $N$ equidistant sections, with each node (Figure 1) defined as $(x_i, y_j, z_k)$ in such a way that $x_0 = y_0 = z_0 = 0$ and $x_L = y_M = z_N = 1$.

Using the mesh description, one can introduce the following definitions:
$$u^{i,j,k}(t) = u(x_i, y_j, z_k, t),\qquad u_t^{i,j,k}(t) = \left.\frac{\partial u(x,y,z,t)}{\partial t}\right|_{x=x_i,\,y=y_j,\,z=z_k},$$
$$u_x^{i,j,k}(t) = \left.u_x(x,y,z,t)\right|_{x=x_i,\,y=y_j,\,z=z_k},\qquad u_{xx}^{i,j,k}(t) = \left.u_{xx}(x,y,z,t)\right|_{x=x_i,\,y=y_j,\,z=z_k}. \tag{10}$$

Analogously, we may consider the other cases ($u_y$, $u_{yy}$, $u_z$, $u_{zz}$, $u_{xy}$, $u_{xz}$, $u_{yz}$). Using the mesh description and applying the finite-difference representation, one gets
$$u_x^{i,j,k}(t) \approx \frac{u^{i,j,k}(t) - u^{i-1,j,k}(t)}{\Delta x},\qquad u_{xx}^{i,j,k}(t) \approx \frac{u_x^{i,j,k}(t) - u_x^{i-1,j,k}(t)}{\Delta x}, \tag{11}$$
and similarly for all other cases, so that the $(\Delta x, \Delta y, \Delta z)$-approximation of the nonlinear PDE (1) can be represented as
$$\dot u^{i,j,k}(t) = u_t^{i,j,k}(t) \approx \Phi\big(u^{i,j,k}, u^{i-1,j,k}, u^{i-2,j,k}, u^{i,j-1,k}, u^{i,j-2,k}, u^{i,j,k-1}, u^{i,j,k-2}, u^{i-1,j-1,k}, u^{i-1,j,k-1}, u^{i,j-1,k-1}, u^{i-1,j-1,k-1}\big) \tag{12}$$
($i = 0,\ldots,L$; $j = 0,\ldots,M$; $k = 0,\ldots,N$).
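To make the construction concrete, the following is a minimal sketch (an illustration, not the authors' code) of the $(\Delta x, \Delta y, \Delta z)$-approximation (11)-(12): backward differences on the equidistant mesh turn the 3D PDE (1) into a family of coupled ODEs, one per node $(x_i, y_j, z_k)$.

```python
import numpy as np

L, M, N = 10, 10, 10                      # sections per axis
dx, dy, dz = 1.0 / L, 1.0 / M, 1.0 / N

def bdiff(u, axis, h):
    """First backward difference of the mesh function u[i, j, k] along one axis."""
    d = np.zeros_like(u)
    dst = [slice(None)] * 3; dst[axis] = slice(1, None)
    src = [slice(None)] * 3; src[axis] = slice(None, -1)
    d[tuple(dst)] = (u[tuple(dst)] - u[tuple(src)]) / h
    return d

def semi_discrete_rhs(u, Phi):
    """du/dt at every node, as in (12): Phi applied to u and its mesh differences."""
    ux, uy, uz = bdiff(u, 0, dx), bdiff(u, 1, dy), bdiff(u, 2, dz)
    uxx, uyy, uzz = bdiff(ux, 0, dx), bdiff(uy, 1, dy), bdiff(uz, 2, dz)
    uxy, uxz, uyz = bdiff(ux, 1, dy), bdiff(ux, 2, dz), bdiff(uy, 2, dz)
    return Phi(u, ux, uxx, uy, uyy, uz, uzz, uxy, uxz, uyz)

# One explicit Euler step for a heat-type example u_t = 0.15 (u_xx + u_yy + u_zz):
u = np.random.rand(L + 1, M + 1, N + 1)   # mesh values u[i, j, k]
heat = lambda u, ux, uxx, uy, uyy, uz, uzz, uxy, uxz, uyz: 0.15 * (uxx + uyy + uzz)
u = u + 1e-3 * semi_discrete_rhs(u, heat)
```

The uncertain right-hand side $\Phi$ is exactly what the DNN identifier of the next subsection approximates; the sketch only shows how the spatial discretization produces the ODE family.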

2.2. 3D Approximation for Uncertain PDE

By simply adding and subtracting the corresponding terms, one can describe (1) as
$$u_t = A u + V_1 \sigma(u) + V_2 \varphi_1(u_x) + V_3 \gamma_1(u_{xx}) - V_4 \varphi_2(u_y) + V_5 \gamma_2(u_{yy}) + V_6 \varphi_3(u_z) + V_7 \gamma_3(u_{zz}) + V_8 \psi_1(u_{xy}) + V_9 \psi_2(u_{yz}) - V_{10} \psi_3(u_{xz}) + V_{11} \sigma_2(u_{xyz}) + \tilde f, \tag{13}$$
where $u_t = u_t(x,y,z,t)$, $u = u(x,y,z,t)$, $\sigma = \sigma(x,y,z)$, $u_x = u_x(x,y,z,t)$, $u_{xx} = u_{xx}(x,y,z,t)$, and similarly for $\sigma_2$, $\varphi_i$, $\gamma_i$, $\psi_i$, $u_y$, $u_{yy}$, $u_z$, $u_{zz}$, $u_{xy}$, $u_{yz}$, $u_{xz}$, and $u_{xyz}$ ($i = \overline{1,3}$); $A \in \mathbb{R}^{n\times n}$, $V_1 \in \mathbb{R}^{n\times s_1}$, $V_2 \in \mathbb{R}^{n\times s_2}$, $V_3 \in \mathbb{R}^{n\times s_3}$, $V_4 \in \mathbb{R}^{n\times s_4}$, $V_5 \in \mathbb{R}^{n\times s_5}$, $V_6 \in \mathbb{R}^{n\times s_6}$, $V_7 \in \mathbb{R}^{n\times s_7}$, $V_8 \in \mathbb{R}^{n\times s_8}$, $V_9 \in \mathbb{R}^{n\times s_9}$, $V_{10} \in \mathbb{R}^{n\times s_{10}}$, $V_{11} \in \mathbb{R}^{n\times s_{11}}$, and $\tilde f = \tilde f(x,y,z,t)$.

Here $\tilde f(x,y,z,t) \in \mathbb{R}^n$ represents the modelling error term, $A$ and $V_k$ ($k = \overline{1,11}$) are constant matrices, and the sigmoidal functions have the corresponding sizes ($\sigma(x,y,z) \in \mathbb{R}^{s_1}$, $\varphi_1(x,y,z) \in \mathbb{R}^{s_2}$, $\gamma_1(x,y,z) \in \mathbb{R}^{s_3}$, $\varphi_2(x,y,z) \in \mathbb{R}^{s_4}$, $\gamma_2(x,y,z) \in \mathbb{R}^{s_5}$, $\psi_1(x,y,z) \in \mathbb{R}^{s_6}$, $\varphi_3(x,y,z) \in \mathbb{R}^{s_7}$, $\gamma_3(x,y,z) \in \mathbb{R}^{s_8}$, $\psi_2(x,y,z) \in \mathbb{R}^{s_9}$, $\psi_3(x,y,z) \in \mathbb{R}^{s_{10}}$, and $\sigma_2(x,y,z) \in \mathbb{R}^{s_{11}}$); they are known as the neural network activation functions. These functions obey the following sector conditions:
$$\|\sigma_1(x,y,z) - \sigma_1(x',y',z')\|^2 \le L_{\sigma_1}\left(\|x-x'\|^2 + \|y-y'\|^2 + \|z-z'\|^2\right),$$
$$\|\sigma_2(x,y,z) - \sigma_2(x',y',z')\|^2 \le L_{\sigma_2}\left(\|x-x'\|^2 + \|y-y'\|^2 + \|z-z'\|^2\right),$$
$$\|\phi_s(x,y,z) - \phi_s(x',y',z')\|^2 \le L_{\phi_s}\left(\|x-x'\|^2 + \|y-y'\|^2 + \|z-z'\|^2\right), \tag{14}$$
and they are bounded in $x$, $y$, and $z$, that is,
$$\|\sigma_l(\cdot)\|^2 \le \sigma_l^+,\qquad \|\varphi_l(\cdot)\|^2 \le \varphi_l^+,\qquad \|\gamma_l(\cdot)\|^2 \le \gamma_l^+,\qquad \|\psi_l(\cdot)\| \le \psi_l^+,\qquad l = \overline{1,3}. \tag{15}$$
Following the DNN methodology [11] and applying the same representation to (12), we get for each $i \in \{1,\ldots,L\}$, $j \in \{1,\ldots,M\}$, $k \in \{1,\ldots,N\}$ the following robust adaptive nonparametric representation:
$$u_t^{i,j,k}(t) = A^{i,j,k} u^{i,j,k}(t) + \sum_{p=1}^{11} W_p^{i,j,k}\,\phi_s\big(U(t)\big) + \tilde f^{i,j,k}(t). \tag{16}$$
In the sum, $s = \overline{1,3}$, $\phi$ represents the functions $\sigma$, $\varphi$, $\gamma$, $\psi$, and $U$ is the corresponding $u^{i,j,k}$, $u^{i-1,j,k}$, $u^{i-2,j,k}$, $u^{i,j-1,k}$, $u^{i,j-2,k}$, $u^{i,j,k-1}$, $u^{i,j,k-2}$, $u^{i-1,j-1,k}$, $u^{i,j-1,k-1}$, $u^{i-1,j,k-1}$, $u^{i-1,j-1,k-1}$. In this equation the term $\tilde f^{i,j,k}(t)$, usually recognized as the modeling error, satisfies the following identity (the dependence of each sigmoidal function on $x_i, y_j, z_k$ is omitted):
$$\tilde f^{i,j,k}(t) = \Phi\big(u^{i,j,k}, u^{i-1,j,k}, u^{i-2,j,k}, u^{i,j-1,k}, u^{i,j-2,k}, u^{i,j,k-1}, u^{i,j,k-2}, u^{i-1,j-1,k}, u^{i-1,j,k-1}, u^{i,j-1,k-1}, u^{i-1,j-1,k-1}\big) - A^{i,j,k} u^{i,j,k}(t) - \sum_{p=1}^{11} W_p^{i,j,k}\,\phi_s\big(U(t)\big), \tag{17}$$
where $W_p^{i,j,k} \in \mathbb{R}^{n\times s_p}$, $p = \overline{1,11}$, and $\phi$, $s$, and $U(t)$ are as above.

We will assume that the modeling error terms satisfy the following.

Assumption 2. The modeling error is absolutely bounded in $\Omega$:
$$\big\|\tilde f^{i,j,k}\big\|^2 \le f_1^{i,j,k}. \tag{18}$$

Assumption 3. The modeling error gradient
$$\nabla_s \tilde f^{i,j,k} = \left.\nabla_s \tilde f(x,y,z,t)\right|_{s = s_i} \tag{19}$$
is bounded as $\big\|\nabla_s \tilde f^{i,j,k}\big\|^2 \le f_r^{i,j,k}$, where $s = x, y, z$ and $f_r^{i,j,k}$ ($r = \overline{1,3}$) are constants.

3. DNN Identification for Distributed Parameters Systems

3.1. DNN Identifier Structure

Based on the DNN methodology [11], consider the DNN identifier
$$\frac{d}{dt}\hat u^{i,j,k} = A^{i,j,k}\hat u^{i,j,k} + \sum_{p=1}^{11} W_p^{i,j,k}(t)\,\phi_s\big(\hat U\big) \tag{20}$$
for all $i = 0,\ldots,L$, with $\hat u^{-1}(t) = \hat u^{-2}(t) = 0$, where $\phi$ represents the activation functions $\sigma$, $\varphi$, $\gamma$, and $\psi$, $s = \overline{1,3}$, and $\hat U$ is each one of the states $\hat u^{i,j,k}$, $\hat u^{i-1,j,k}$, $\hat u^{i-2,j,k}$, $\hat u^{i,j-1,k}$, $\hat u^{i,j-2,k}$, $\hat u^{i,j,k-1}$, $\hat u^{i,j,k-2}$, $\hat u^{i-1,j-1,k}$, $\hat u^{i-1,j,k-1}$, $\hat u^{i,j-1,k-1}$, $\hat u^{i-1,j-1,k-1}$; $A^{i,j,k} \in \mathbb{R}^{n\times n}$ is a constant matrix to be selected, and $\hat u^{i,j,k}(t)$ is the estimate of $u^{i,j,k}(t)$. Obviously, the proposed methodology implies designing an individual DNN identifier for each point $(x_i, y_j, z_k)$. The collection of such identifiers constitutes a DNN net containing $L \times M \times N$ connected DNN identifiers working in parallel. Here $\sigma_1(x_i,y_j,z_k)$, $\varphi_1(x_i,y_j,z_k)$, $\varphi_2(x_i,y_j,z_k)$, $\varphi_3(x_i,y_j,z_k)$, $\gamma_1(x_i,y_j,z_k)$, $\gamma_2(x_i,y_j,z_k)$, $\gamma_3(x_i,y_j,z_k)$, $\psi_1(x_i,y_j,z_k)$, $\psi_2(x_i,y_j,z_k)$, $\psi_3(x_i,y_j,z_k)$, and $\sigma_2(x_i,y_j,z_k)$ are the NN activation vectors. This means that the applied DNN approximation significantly simplifies the specification of $\sigma_1(\cdot,\cdot)$, $\varphi_1(\cdot,\cdot)$, $\varphi_2(\cdot,\cdot)$, $\varphi_3(\cdot,\cdot)$, $\gamma_1(\cdot,\cdot)$, $\gamma_2(\cdot,\cdot)$, $\gamma_3(\cdot,\cdot)$, $\psi_1(\cdot,\cdot)$, $\psi_2(\cdot,\cdot)$, $\psi_3(\cdot,\cdot)$, and $\sigma_2(\cdot,\cdot)$, which are now constant for any fixed $(x_i, y_j, z_k)$.
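As a concrete illustration, the following is a minimal sketch, assuming a scalar state ($n = 1$) at each node: one explicit Euler step of the DNN identifier (20) at node $(i,j,k)$. The helper names are illustrative; the 11 entries of `neighbours` are the neighbour estimates listed after (20).

```python
import numpy as np

N_TERMS = 11

def phi(v):
    """A generic bounded sigmoid standing in for the activations sigma, phi, gamma, psi."""
    return 1.0 / (1.0 + np.exp(-v)) - 0.5

def identifier_step(u_hat, A, W, neighbours, dt):
    """d/dt u_hat = A u_hat + sum_{p=1}^{11} W_p phi(U_p), discretised in time."""
    du = A * u_hat + sum(W[p] * phi(neighbours[p]) for p in range(N_TERMS))
    return u_hat + dt * du

A = -5.0                                  # a stable "matrix" (a scalar here), to be selected
W = 0.1 * np.random.randn(N_TERMS)        # adaptive weights, driven by the learning laws (22)
u_hat = 0.0
u_hat = identifier_step(u_hat, A, W, np.random.randn(N_TERMS), dt=1e-3)
```

In the full scheme, one such update runs in parallel at every mesh node, with the weights `W` evolving according to the laws of the next subsection.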

3.2. Learning Laws for Identifier’s Weights

For each $i = 0,\ldots,L$; $j = 0,\ldots,M$; $k = 0,\ldots,N$, define the vector functions describing the error between the trajectories produced by the model and by the DNN identifier, as well as their derivatives with respect to $x$, $y$, and $z$:
$$\tilde u^{i,j,k}(t) = \hat u^{i,j,k}(t) - u^{i,j,k}(t),\qquad \tilde u_s^{i,j,k}(t) = \hat u_s^{i,j,k}(t) - u_s^{i,j,k}(t),\quad s = x, y, z. \tag{21}$$
Let $W_r^{i,j,k}(t) \in \mathbb{R}^{n \times s_r}$, $r = \overline{1,11}$, be time-varying matrices. These matrices satisfy the following nonlinear matrix differential equations:
$$\dot W_r^{i,j,k}(t) = \frac{d}{dt}W_r^{i,j,k}(t) = -\alpha\,W_r^{i,j,k}(t) - K_r^{-1}\Big[P^{i,j,k}\,\tilde u^{i,j,k}\,U_{(i,j,k),r}^{\top}\,\Omega_r(x_i,y_j,z_k) + S_1^{i,j,k}\,\tilde u_x^{i,j,k}\,U_{(i,j,k),r}^{\top}\,\Omega_{r,x}(x_i,y_j,z_k) + S_2^{i,j,k}\,\tilde u_y^{i,j,k}\,U_{(i,j,k),r}^{\top}\,\Omega_{r,y}(x_i,y_j,z_k) + S_3^{i,j,k}\,\tilde u_z^{i,j,k}\,U_{(i,j,k),r}^{\top}\,\Omega_{r,z}(x_i,y_j,z_k)\Big], \tag{22}$$
where $\Omega_r(x_i,y_j,z_k)$ ($r = \overline{1,11}$) represents the corresponding sigmoidal function $\sigma_l(x_i,y_j,z_k)$, $\varphi_l(x_i,y_j,z_k)$, $\gamma_l(x_i,y_j,z_k)$, or $\psi_l(x_i,y_j,z_k)$ ($l = \overline{1,3}$). Here
$$U_{(i,j,k),1}(t) = \hat u^{i,j,k}(t),\quad U_{(i,j,k),2}(t) = \hat u^{i-1,j,k}(t),\quad U_{(i,j,k),3}(t) = \hat u^{i-2,j,k}(t),\quad U_{(i,j,k),4}(t) = \hat u^{i,j-1,k}(t),\quad U_{(i,j,k),5}(t) = \hat u^{i,j-2,k}(t),\quad U_{(i,j,k),6}(t) = \hat u^{i,j,k-1}(t),\quad U_{(i,j,k),7}(t) = \hat u^{i,j,k-2}(t),\quad U_{(i,j,k),8}(t) = \hat u^{i-1,j-1,k}(t),\quad U_{(i,j,k),9}(t) = \hat u^{i-1,j,k-1}(t),\quad U_{(i,j,k),10}(t) = \hat u^{i,j-1,k-1}(t),\quad U_{(i,j,k),11}(t) = \hat u^{i-1,j-1,k-1}(t), \tag{23}$$
with positive matrices $K_r$ ($r = \overline{1,11}$), and with $P^{i,j,k}$, $S_1^{i,j,k}$, $S_2^{i,j,k}$, and $S_3^{i,j,k}$ ($i = 0,\ldots,L$; $j = 0,\ldots,M$; $k = 0,\ldots,N$) the positive definite solutions ($P^{i,j,k} > 0$, $S_1^{i,j,k} > 0$, $S_2^{i,j,k} > 0$, $S_3^{i,j,k} > 0$) of the algebraic Riccati equations defined as follows:
$$\operatorname{Ric}\!\big(P^{i,j,k}\big) = P^{i,j,k}A^{i,j,k} + \big(A^{i,j,k}\big)^{\top}P^{i,j,k} + P^{i,j,k}R_P^{i,j,k}P^{i,j,k} + Q_P^{i,j,k} = 0,$$
$$\operatorname{Ric}\!\big(S_1^{i,j,k}\big) = S_1^{i,j,k}A^{i,j,k} + \big(A^{i,j,k}\big)^{\top}S_1^{i,j,k} + S_1^{i,j,k}R_{S_1}^{i,j,k}S_1^{i,j,k} + Q_{S_1}^{i,j,k} = 0,$$
$$\operatorname{Ric}\!\big(S_2^{i,j,k}\big) = S_2^{i,j,k}A^{i,j,k} + \big(A^{i,j,k}\big)^{\top}S_2^{i,j,k} + S_2^{i,j,k}R_{S_2}^{i,j,k}S_2^{i,j,k} + Q_{S_2}^{i,j,k} = 0,$$
$$\operatorname{Ric}\!\big(S_3^{i,j,k}\big) = S_3^{i,j,k}A^{i,j,k} + \big(A^{i,j,k}\big)^{\top}S_3^{i,j,k} + S_3^{i,j,k}R_{S_3}^{i,j,k}S_3^{i,j,k} + Q_{S_3}^{i,j,k} = 0, \tag{24}$$
where each $R_B^{i,j,k}$ has the form
$$R_B^{i,j,k} = \sum_{r=1}^{11} W_r^{i,j,k}\,\Lambda_{r+b}\,\big(W_r^{i,j,k}\big)^{\top} + \Lambda_b, \tag{25}$$
in which $B$ stands for $P$, $S_1$, $S_2$, or $S_3$ with $b = 7, 14, 21, 28$, respectively.

Matrices $Q_B^{i,j,k}$ have the form
$$Q_B^{i,j,k} = \big\|\Omega_1^{m}(x_i,y_j,z_k)\big\|^2\Lambda_1^{-1} + \big\|\Omega_2^{m}(x_{i-1},y_j,z_k)\big\|^2\Lambda_2^{-1} + \big\|\Omega_3^{m}(x_{i-2},y_j,z_k)\big\|^2\Lambda_3^{-1} + \big\|\Omega_4^{m}(x_i,y_{j-1},z_k)\big\|^2\Lambda_4^{-1} + \big\|\Omega_5^{m}(x_i,y_{j-2},z_k)\big\|^2\Lambda_5^{-1} + \big\|\Omega_6^{m}(x_i,y_j,z_{k-1})\big\|^2\Lambda_6^{-1} + \big\|\Omega_7^{m}(x_i,y_j,z_{k-2})\big\|^2\Lambda_7^{-1} + \big\|\Omega_8^{m}(x_{i-1},y_{j-1},z_k)\big\|^2\Lambda_8^{-1} + \big\|\Omega_9^{m}(x_i,y_{j-1},z_{k-1})\big\|^2\Lambda_9^{-1} + \big\|\Omega_{10}^{m}(x_{i-1},y_j,z_{k-1})\big\|^2\Lambda_{10}^{-1} + \big\|\Omega_{11}^{m}(x_{i-1},y_{j-1},z_{k-1})\big\|^2\Lambda_{11}^{-1} + \bar Q_B^{i,j,k}, \tag{26}$$
where $B$ stands for $P$, $S_1$, $S_2$, or $S_3$, the superscript $m$ denotes the corresponding partial derivative (none for $P$; with respect to $x$ for $S_1$, to $y$ for $S_2$, and to $z$ for $S_3$), and $\Lambda_l$ ($l = \overline{1,46}$) are positive definite weighting matrices, with
$$\Omega_{r,x}(x_i,y_j,z_k) = \left.\frac{d}{dx}\Omega_r(x,y,z)\right|_{x=x_i,\,y=y_j,\,z=z_k},\qquad \Omega_{r,y}(x_i,y_j,z_k) = \left.\frac{d}{dy}\Omega_r(x,y,z)\right|_{x=x_i,\,y=y_j,\,z=z_k},\qquad \Omega_{r,z}(x_i,y_j,z_k) = \left.\frac{d}{dz}\Omega_r(x,y,z)\right|_{x=x_i,\,y=y_j,\,z=z_k}. \tag{27}$$
Here $\tilde W_r^{i,j,k}(t) = W_r^{i,j,k}(t) - W_r^{i,j,k,*}$, $r = \overline{1,11}$.

The special class of Riccati equation
$$PA + A^{\top}P + PRP + Q = 0 \tag{28}$$
has a unique positive definite solution $P$ if and only if the four conditions given in [11] (page 65, Chapter 2, Nonlinear System Identification: Differential Learning) are fulfilled: (1) the matrix $A$ is stable, (2) the pair $(A, R^{1/2})$ is controllable, (3) the pair $(Q^{1/2}, A)$ is observable, and (4) the matrices $(A, Q, R)$ are selected so as to satisfy the inequality
$$\frac{1}{4}\big(A^{\top}R^{-1} - R^{-1}A\big)\,R\,\big(A^{\top}R^{-1} - R^{-1}A\big)^{\top} + Q \le A^{\top}R^{-1}A, \tag{29}$$
which restricts the largest eigenvalue of $R$, guaranteeing the existence of the unique positive solution. The main result obtained in this part is stated in the practical stability framework.
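Numerically, equation (28) can be tackled in several ways; the following is a minimal sketch (an assumed scheme, not the authors' algorithm) of a fixed-point iteration that solves the Lyapunov equation $A^{\top}P + PA = -(Q + P_k R P_k)$ at every pass.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def solve_special_riccati(A, R, Q, iters=500, tol=1e-12):
    """Fixed-point iteration for P A + A^T P + P R P + Q = 0 (equation (28))."""
    P = np.zeros_like(Q)
    for _ in range(iters):
        # solve_continuous_lyapunov(a, q) returns X with a X + X a^T = q
        P_next = solve_continuous_lyapunov(A.T, -(Q + P @ R @ P))
        if np.linalg.norm(P_next - P) < tol:
            return P_next
        P = P_next
    return P

A = np.array([[-4.0, 1.0], [0.0, -3.0]])   # stable, as condition (1) requires
R = 0.1 * np.eye(2)                         # kept small, in the spirit of inequality (29)
Q = np.eye(2)
P = solve_special_riccati(A, R, Q)
assert np.all(np.linalg.eigvalsh(P) > 0)    # positive definite solution found
```

When $R$ is small enough to respect (29), the iteration is monotone and converges to the unique positive definite solution; for large $R$ it may diverge, mirroring the eigenvalue restriction noted above.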

4. Practical Stability and Stabilization

The following definition and proposition are needed for the main results of the paper. Consider the following nonlinear ODE system:
$$\dot z_t = g(z_t, v_t) + \varpi_t \tag{30}$$
with $z_t \in \mathbb{R}^n$, $v_t \in \mathbb{R}^m$, and $\varpi_t$ an external perturbation or uncertainty such that $\|\varpi_t\|^2 \le \varpi^+$.

Definition 4 (Practical Stability). Assume that a time interval $T$ and a fixed function $v_t \in \mathbb{R}^m$ over $T$ are given. Given $\varepsilon > 0$, the nonlinear system (30) is said to be $\varepsilon$-practically stable over $T$ under the presence of $\varpi_t$ if there exists a $\delta > 0$ ($\delta$ depends on $\varepsilon$ and the interval $T$) such that $z_t \in B[0,\varepsilon]$ for all $t \ge t_0$ whenever $z_{t_0} \in B[0,\delta]$.

Similarly to the Lyapunov stability theory for nonlinear systems, the direct method is applied here to the $\varepsilon$-practical stability of nonlinear systems, using practical Lyapunov-like functions under the presence of external perturbations and model uncertainties. Note that these functions have properties differing significantly from the usual Lyapunov functions of classic stability theory.

The subsequent proposition requires the following lemma.

Lemma 5. Let $V(t)$ be a nonnegative function satisfying the differential inequality
$$\dot V(t) \le -\alpha V(t) + \beta, \tag{31}$$
where $\alpha > 0$ and $\beta \ge 0$. Then
$$\left[1 - \frac{\mu}{V(t)}\right]_+ \to 0 \quad \text{as } t \to \infty, \tag{32}$$
with $\mu = \beta/\alpha$ and the function $[\cdot]_+$ defined as
$$[z]_+ = \begin{cases} z & \text{if } z \ge 0, \\ 0 & \text{if } z < 0. \end{cases} \tag{33}$$

Proof. The proof of this lemma follows directly from the application of the Gronwall-Bellman lemma.
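For completeness, the Gronwall-Bellman step can be spelled out in a few lines (an added sketch using only (31), not part of the original proof):

```latex
% From \dot V \le -\alpha V + \beta, multiply by e^{\alpha t} and integrate:
\frac{d}{dt}\bigl(e^{\alpha t}V(t)\bigr) \le \beta e^{\alpha t}
\quad\Longrightarrow\quad
V(t) \le V(0)\,e^{-\alpha t} + \frac{\beta}{\alpha}\bigl(1 - e^{-\alpha t}\bigr),
% hence \limsup_{t\to\infty} V(t) \le \mu with \mu = \beta/\alpha,
% which is precisely the statement (32): [1 - \mu/V(t)]_{+} \to 0.
```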

Proposition 6. Given a time interval $T$ and a function $v(\cdot)$ over $T$, a continuously differentiable real-valued function $V(z,t)$ satisfying $V(0,t) = 0$ for all $t \in T$ is said to be an $\varepsilon$-practical Lyapunov-like function over $T$ under $v$ if there exists a constant $\alpha > 0$ such that
$$\dot V(z,t) \le -\alpha V(z,t) + H\big(\varpi^+\big), \tag{34}$$
with $H$ a bounded nonnegative nonlinear function with upper bound $H^+$. Moreover, the trajectories of $z_t$ belong to the zone $\varepsilon = H^+/\alpha$ as $t \to \infty$. In this proposition, $\dot V(z_t,t)$ denotes the derivative of $V(z,t)$ along $z_t$, that is, $\dot V(z,t) = V_z(z,t)\big(g(z_t,v_t) + \varpi_t\big) + V_t(z,t)$.

Proof. The proof follows directly from Lemma 5.

Definition 7. Given a time interval $T$ and a function $v(\cdot)$ over $T$, the nonlinear system (30) is $\varepsilon$-practically stable over $T$ under $v$ if there exists an $\varepsilon$-practical Lyapunov-like function $V(z,t)$ over $T$ under $v$.
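A small numerical illustration of Proposition 6 (under an assumed scalar system, chosen here only for illustration): for $\dot z = -a z + w(t)$ with $|w(t)| \le w^+$ and $V(z) = z^2/2$, one gets $\dot V \le -aV + (w^+)^2/(2a)$, so the ultimate zone is $\varepsilon = (w^+)^2/(2a^2)$.

```python
import numpy as np

a, w_plus, dt = 2.0, 0.5, 1e-3
rng = np.random.default_rng(0)
z = 5.0                                    # start far outside the ultimate zone
for _ in range(20_000):
    w = w_plus * (2.0 * rng.random() - 1.0)    # bounded perturbation
    z += dt * (-a * z + w)
eps = w_plus**2 / (2.0 * a**2)             # zone from Proposition 6: eps = H+/alpha
print(abs(z) <= np.sqrt(2.0 * eps))        # True: z ended inside B[0, w_plus/a]
```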

5. Identification Problem Formulation

The state identification problem for the nonlinear system (13) analyzed in this study can now be stated as follows.

Problem. For the nonlinear system given by the vector PDE (13), identified by the DNN identifier (20) supplied with the adjustment (learning) laws (22), estimate the upper bound $\delta$ of the identification error given by
$$\delta = \overline{\lim_{t\to\infty}}\;\sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}\big\|\hat u^{i,j,k}(t) - u^{i,j,k}(t)\big\|_{P^{i,j,k}}^2 \tag{35}$$
(with $P^{i,j,k}$ from (24)) and, if possible, reduce it to its lowest possible value by selecting the free parameters of the DNN identifier ($A$, $K_r$, $r = \overline{1,11}$).

This implies that reducing the identification error $\delta$ means that the differential neural network has converged to the solution of the 3D PDE, which can be observed in the matching between the DNN trajectories and the PDE states.
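Evaluating the index (35) from mesh data is straightforward; the following is a minimal sketch (illustrative helper, not from the original paper) of the $P$-weighted squared mismatch accumulated over the whole mesh.

```python
import numpy as np

def identification_index(u_hat, u, P):
    """Index (35): u_hat, u are (L+1, M+1, N+1, n) arrays; P is (L+1, M+1, N+1, n, n)."""
    e = u_hat - u
    # sum over all nodes (i, j, k) of e^T P^{i,j,k} e
    return float(np.einsum('ijka,ijkab,ijkb->', e, P, e))
```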

6. Main Result

The main result of this paper is presented in the following theorem.

Theorem 8. Consider the nonlinear model (1) ($i = 0,\ldots,L$; $j = 0,\ldots,M$; $k = 0,\ldots,N$), given by the system of PDEs with uncertainties (perturbations) in the states, under the border conditions (2). Suppose also that the DNN identifier is given by (20), whose parameters are adjusted by the learning laws (22) with gains $\alpha_m^{i,j,k}$ ($i = 0,\ldots,L$; $j = 0,\ldots,M$; $k = 0,\ldots,N$). If positive definite matrices $Q_1^{i,j,k}$, $Q_2^{i,j,k}$, and $Q_3^{i,j,k}$ provide the existence of positive solutions $P^{i,j,k}$, $S_1^{i,j,k}$, $S_2^{i,j,k}$, and $S_3^{i,j,k}$ to the Riccati equations (24), then for all $i = 0,\ldots,L$; $j = 0,\ldots,M$; $k = 0,\ldots,N$ the following $\mu$-upper bound is ensured:
$$\overline{\lim_{t\to\infty}}\;\sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}\Big(\big\|\hat u^{i,j,k}(t) - u^{i,j,k}(t)\big\|_{P^{i,j,k}}^2 + \big\|\hat u_x^{i,j,k}(t) - u_x^{i,j,k}(t)\big\|_{S_1^{i,j,k}}^2 + \big\|\hat u_y^{i,j,k}(t) - u_y^{i,j,k}(t)\big\|_{S_2^{i,j,k}}^2 + \big\|\hat u_z^{i,j,k}(t) - u_z^{i,j,k}(t)\big\|_{S_3^{i,j,k}}^2\Big) \le \mu \tag{36}$$
with $\mu_0 = LMN\,\alpha^{-1/2}$ and $\mu = \mu_0 \max_{i,j,k}\Psi$, where
$$\Psi = \varpi\sum_{s=1}^{3} f_s^{i,j,k} + \tilde f^{i,j,k}(t)^{\top}\Lambda_7^{-1}\tilde f^{i,j,k}(t) + \tilde f_x^{i,j,k}(t)^{\top}\Lambda_{14}^{-1}\tilde f_x^{i,j,k}(t) + \tilde f_y^{i,j,k}(t)^{\top}\Lambda_{21}^{-1}\tilde f_y^{i,j,k}(t) + \tilde f_z^{i,j,k}(t)^{\top}\Lambda_{28}^{-1}\tilde f_z^{i,j,k}(t),$$
$$\varpi = \max\left\{\lambda_{\max}\big(\Lambda_7^{-1}\big),\ \lambda_{\max}\big(\Lambda_{14}^{-1}\big),\ \lambda_{\max}\big(\Lambda_{21}^{-1}\big),\ \lambda_{\max}\big(\Lambda_{28}^{-1}\big)\right\}. \tag{37}$$
Moreover, the weights $W_{r,t}$, $r = \overline{1,11}$, remain bounded, being proportional to $\mu$, that is, $\|W_{r,t}\|_{K_r} \le \mu$, $r = \overline{1,11}$.

Proof. The proof is given in the appendix.

7. Simulation Results

7.1. Numerical Example

Below, a numerical simulation provides a qualitative illustration on a benchmark system. Consider the following three-dimensional PDE:
$$u_t(x,y,z,t) = c_1 u_{xx}(x,y,z,t) + c_2 u_{yy}(x,y,z,t) + c_3 u_{zz}(x,y,z,t) + \xi(x,y,z,t), \tag{38}$$
where $c_1 = c_2 = c_3 = 0.15$. It is assumed that there is access to discrete measurements of the state $u(x,y,z,t)$ along the whole domain, which is feasible in practice, and $\xi(x,y,z,t)$ is a noise in the state dynamics. This model is used only to generate the data for testing the DNN-based 3D identifier. Boundary and initial conditions were selected as follows:
$$u(0,0,0,t) = \operatorname{rand}(1),\qquad u_x(0,0,0,t) = 0,\qquad u_y(0,0,0,t) = 0,\qquad u_z(0,0,0,t) = 0. \tag{39}$$
The trajectories of the model, as well as the estimated state produced by the DNN identifier, can be seen in Figure 3. The efficiency of the identification process provided by the suggested DNN algorithm is shown in Figure 4.
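The benchmark data can be reproduced with a few lines; the following is a minimal sketch of the data generator for (38) (illustrative, not the authors' code): explicit Euler on the noisy 3D heat equation with $c_1 = c_2 = c_3 = 0.15$.

```python
import numpy as np

rng = np.random.default_rng(0)
L = M = N = 10
dx = dy = dz = 1.0 / L
c, dt = 0.15, 1e-4

def laplacian3d(u):
    """Second differences of u[i, j, k] in x, y and z at the interior nodes."""
    lap = np.zeros_like(u)
    lap[1:-1, :, :] += (u[2:, :, :] - 2 * u[1:-1, :, :] + u[:-2, :, :]) / dx**2
    lap[:, 1:-1, :] += (u[:, 2:, :] - 2 * u[:, 1:-1, :] + u[:, :-2, :]) / dy**2
    lap[:, :, 1:-1] += (u[:, :, 2:] - 2 * u[:, :, 1:-1] + u[:, :, :-2]) / dz**2
    return lap

u = rng.random((L + 1, M + 1, N + 1))      # random initial profile, as in (39)
for _ in range(1000):
    xi = 1e-3 * rng.standard_normal(u.shape)   # state noise xi(x, y, z, t)
    u += dt * c * laplacian3d(u) + np.sqrt(dt) * xi
    u[0, :, :] = u[1, :, :]                # zero-gradient border: u_x(0, y, z, t) = 0
```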

Figures 5 and 6 show the trajectories of the PDE and of the DNN for the coordinate $z$, and the error index between the PDE and the DNN at 10 seconds for the coordinates $x$ and $z$, respectively (Figure 2).

7.2. Tumour Growth Example

The mathematical model of brain tumour growth presented in this section is based on the results of [18]. Here the diffusion coefficient is considered constant. Let us consider the following three-dimensional parabolic equation of tumour growth:
$$u_t(x,y,z,t) = -P u_x(x,y,z,t) - R u_y(x,y,z,t) - S u_z(x,y,z,t) + Q u_{xx}(x,y,z,t) + Q u_{yy}(x,y,z,t) + Q u_{zz}(x,y,z,t) + \Gamma u(x,y,z,t) - L u(x,y,z,t), \tag{40}$$
where $u(x,y,z,t)$ is the growth rate of a brain tumour, $Q$ is the diffusion coefficient, $W = (P, R, S)$ is the drift velocity field, $\Gamma = \Gamma(u)$ is the proliferation coefficient, and $L = L(u)$ is the decay coefficient of the cells. It is assumed that there is access to discrete measurements of the state $u(x,y,z,t)$ along the whole domain, which is feasible in practice by PET-CT (positron emission tomography-computed tomography) technology. The domain is $0 \le x \le 1$, $0 \le y \le 1$, and $0 \le z \le 1$. This model is used only to generate the data for testing the DNN-based 3D identifier. Boundary and initial conditions were selected as follows:
$$u(0,0,0,t) = 200 \pm 20\mu,\qquad u_x(0,0,0,t) = 0,\qquad u_y(0,0,0,t) = 0,\qquad u_z(0,0,0,t) = 0. \tag{41}$$
The trajectories of the model and the estimated state produced by the DNN identifier can be seen in Figure 7. The dissimilarity between the two trajectories depends on the learning period required for adjusting the DNN identifier. The error between the trajectories produced by the model and by the proposed identifier is close to zero for almost all $x$, $y$, $z$ and all $t$; the resulting efficiency of the identification process provided by the suggested DNN algorithm is shown in Figure 8.
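For reference, the following is a minimal sketch of the right-hand side of the tumour model (40) on the same mesh; the numerical constants below are illustrative placeholders, not values taken from [18].

```python
# Placeholder coefficients (assumptions for illustration only):
Qd = 0.05                                  # diffusion coefficient Q
Pd, Rd, Sd = 0.1, 0.1, 0.1                 # drift velocity field W = (P, R, S)
Gamma, Ldec = 0.02, 0.005                  # proliferation Gamma and decay L

def tumour_rhs(u, ux, uy, uz, uxx, uyy, uzz):
    """u_t = -P u_x - R u_y - S u_z + Q (u_xx + u_yy + u_zz) + (Gamma - L) u, as in (40)."""
    return (-Pd * ux - Rd * uy - Sd * uz
            + Qd * (uxx + uyy + uzz)
            + (Gamma - Ldec) * u)
```

The function accepts the mesh differences produced by any backward-difference scheme (such as the one sketched in Section 2.1), so the same identifier machinery applies unchanged.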

Figures 9 and 10 show the trajectories of the PDE and of the DNN for the coordinate $z$, and the error index between the PDE and the DNN at 46 days for the coordinates $x$ and $z$, respectively.

8. Conclusions

The adaptive DNN method proposed here solves the problem of nonparametric identification of nonlinear systems (with uncertainties) given by 3D uncertain PDEs. Practical stability of the identification process is demonstrated by a Lyapunov-like analysis. The upper bound of the identification error is explicitly constructed. Numerical examples demonstrate the estimation efficiency of the suggested methodology.

Appendix

Consider the Lyapunov-like function defined as the composition of the $L \cdot M \cdot N$ individual Lyapunov functions $V^{i,j,k}(t)$ over the whole mesh:
$$V(t) = \sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}\left[V^{i,j,k}(t) + \sum_{r=1}^{11}\operatorname{tr}\Big\{\tilde W_r^{i,j,k}(t)^{\top}K_r\,\tilde W_r^{i,j,k}(t)\Big\}\right],$$
$$V^{i,j,k}(t) = \big\|\tilde u^{i,j,k}(t)\big\|_{P^{i,j,k}}^2 + \big\|\tilde u_x^{i,j,k}(t)\big\|_{S_1^{i,j,k}}^2 + \big\|\tilde u_y^{i,j,k}(t)\big\|_{S_2^{i,j,k}}^2 + \big\|\tilde u_z^{i,j,k}(t)\big\|_{S_3^{i,j,k}}^2. \tag{A.1}$$

The time derivative $\dot V(t)$ of $V(t)$ is
$$\dot V(t) = 2\sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}\tilde u^{i,j,k}(t)^{\top}P^{i,j,k}\frac{d}{dt}\tilde u^{i,j,k}(t) + 2\sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}\tilde u_x^{i,j,k}(t)^{\top}S_1^{i,j,k}\frac{d}{dt}\tilde u_x^{i,j,k}(t) + 2\sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}\tilde u_y^{i,j,k}(t)^{\top}S_2^{i,j,k}\frac{d}{dt}\tilde u_y^{i,j,k}(t) + 2\sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}\tilde u_z^{i,j,k}(t)^{\top}S_3^{i,j,k}\frac{d}{dt}\tilde u_z^{i,j,k}(t) + 2\sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}\sum_{r=1}^{11}\operatorname{tr}\Big\{\tilde W_r^{i,j,k}(t)^{\top}K_r\,\dot{\tilde W}_r^{i,j,k}(t)\Big\}. \tag{A.2}$$

Applying the matrix inequality given in [19],
$$XY^{\top} + YX^{\top} \le X\Lambda X^{\top} + Y\Lambda^{-1}Y^{\top}, \tag{A.3}$$
valid for any $X, Y \in \mathbb{R}^{r\times s}$ and for any $0 < \Lambda = \Lambda^{\top} \in \mathbb{R}^{s\times s}$, to the terms containing $\tilde f^{i,j,k}(t)$ and their derivatives, we obtain
$$\dot V(t) \le I_1(t) + I_2(t) + I_3(t) + I_4(t) - 2\alpha\sum_{r=1}^{11}\sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}\operatorname{tr}\Big\{\tilde W_r^{i,j,k}(t)^{\top}K_r\,\tilde W_r^{i,j,k}(t)\Big\} + 2\alpha\sum_{r=1}^{11}\sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}\operatorname{tr}\Big\{\tilde W_r^{i,j,k}(t)^{\top}K_r\,W_r^{i,j,k}(t)\Big\} + 2\sum_{r=1}^{11}\sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}\operatorname{tr}\Big\{\tilde W_r^{i,j,k}(t)^{\top}K_r\,\dot W_r^{i,j,k}(t)\Big\} - \alpha\sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}V^{i,j,k}(t) + \varpi\sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}\sum_{s=1}^{3}f_s^{i,j,k}, \tag{A.4}$$
where
$$I_1(t) = \sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}\tilde u^{i,j,k}(t)^{\top}\operatorname{Ric}\!\big(P^{i,j,k}\big)\,\tilde u^{i,j,k}(t),\qquad I_2(t) = \sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}\tilde u_x^{i,j,k}(t)^{\top}\operatorname{Ric}\!\big(S_1^{i,j,k}\big)\,\tilde u_x^{i,j,k}(t),$$
$$I_3(t) = \sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}\tilde u_y^{i,j,k}(t)^{\top}\operatorname{Ric}\!\big(S_2^{i,j,k}\big)\,\tilde u_y^{i,j,k}(t),\qquad I_4(t) = \sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}\tilde u_z^{i,j,k}(t)^{\top}\operatorname{Ric}\!\big(S_3^{i,j,k}\big)\,\tilde u_z^{i,j,k}(t). \tag{A.5}$$
By the Riccati equations defined in (24), $I_1(t) = I_2(t) = I_3(t) = I_4(t) = 0$, and in view of the weight adjustment equations (22), the previous inequality simplifies to
$$\dot V(t) \le -\alpha\sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}V^{i,j,k}(t) - 2\alpha\sum_{r=1}^{11}\sum_{i=0}^{L}\sum_{j=0}^{M}\sum_{k=0}^{N}\operatorname{tr}\Big\{\tilde W_r^{i,j,k}(t)^{\top}K_r\,\tilde W_r^{i,j,k}(t)\Big\} + \Psi \le -\alpha V(t) + \Psi. \tag{A.6}$$

Applying Lemma 5, one has $\left[1 - \mu/V(t)\right]_+ \to 0$ as $t \to \infty$, which completes the proof.