Research Article | Open Access
State Estimation for Discrete-Time Stochastic Neural Networks with Mixed Delays
This paper investigates the stability analysis problem for discrete-time neural networks (NNs) with discrete and distributed time delays. Stability theory and a linear matrix inequality (LMI) approach are developed to establish sufficient conditions for the NNs to be globally asymptotically stable and to design a state estimator for the discrete-time neural networks. For both the discrete and distributed delays, a delay-interval decomposition approach is employed, and Lyapunov-Krasovskii functionals (LKFs) are constructed on these intervals, so that a new stability criterion is derived in terms of linear matrix inequalities (LMIs). Numerical examples are given to demonstrate the effectiveness and applicability of the proposed method.
In the past decades, recurrent neural networks (RNNs) have been widely studied due to their wide applications in areas such as pattern recognition, associative memory, combinatorial optimization, and signal processing. Dynamical behaviors (e.g., stability, instability, periodic oscillation, and chaos) of neural networks are known to be crucial in applications. It is noted that the stability of neural networks is a prerequisite for some optimization problems. As is well known, many biological and artificial neural networks contain inherent time delays in signal transmission due to the finite speed of information processing, which may cause oscillation, divergence, and instability. In recent years, a great number of papers have been published on various networks with time delays [1–10].
On one hand, a delay-dependent stability condition for continuous-time RNNs with time-varying delays was derived by defining a new Lyapunov functional, and the obtained condition includes some existing delay-independent ones; see [11, 12]. However, when a computer is used to simulate, experiment with, or compute continuous-time RNNs, it is necessary to discretize the continuous-time networks to formulate a discrete-time system. The study of the dynamics of discrete-time neural networks is therefore crucially needed. In particular, the stability of discrete-time neural networks (DNNs) has been studied in [13–18], since DNNs play a more important role than their continuous-time counterparts in today's digital life.
On the other hand, in many applications the neuron states are seldom fully available in the network outputs, so it becomes important to estimate the neuron states through the available measurements. Recently, the state estimation problem for neural networks has attracted considerable attention, and the delay-dependent state estimation problem has been studied widely for NNs; see [19–26].
Stochastic disturbances are mostly inevitable owing to thermal noise in electronic implementations. It has also been revealed that certain stochastic inputs could make a neural network unstable.
Summarizing the above discussion, in this paper the stability problem is considered for discrete-time neural networks with discrete and distributed delays. Firstly, the mathematical models are established. Secondly, a new and less conservative stability criterion is derived by using a novel Lyapunov-Krasovskii functional. Thirdly, a numerical example is provided to show the effectiveness of the main result. The main technical difficulty of this paper lies in the partition of the distributed time-varying delays. The novel contribution of this work with respect to the existing literature is the construction of a novel Lyapunov-Krasovskii functional adapted to this partition of the distributed time-varying delays. In Corollary 11, by applying Lemma 7, which is proved in Section 2, a further new stability criterion is obtained.
Notation. Throughout this paper, ℝⁿ and ℝⁿˣᵐ denote, respectively, the n-dimensional Euclidean space and the set of all n × m real matrices. The superscript T denotes matrix transposition, and the notation X ≥ Y (resp., X > Y), where X and Y are symmetric matrices, means that X − Y is positive semidefinite (resp., positive definite). In symmetric block matrices, the symbol * is used as an ellipsis for terms induced by symmetry. ‖·‖ stands for the Euclidean vector norm in ℝⁿ. ℕ denotes the set including zero and the positive integers. 𝔼{x} and 𝔼{x | y} denote the expectation of x and the expectation of x conditional on y. (Ω, ℱ, 𝒫) is a probability space, where Ω is the sample space, ℱ is the σ-algebra of subsets of the sample space, and 𝒫 is the probability measure on ℱ.
Consider the following discrete-time recurrent neural network with time-varying delays described by where x(k) ∈ ℝⁿ is the neural state vector at time k; A = diag(a₁, …, aₙ) with |aᵢ| < 1 is the state feedback coefficient matrix; the matrices W₀, W₁, and W₂ are the connection weight matrix, the discretely delayed connection weight matrix, and the distributively delayed connection weight matrix, respectively; J is the exogenous input; f(·), g(·), and h(·) are the neuron activation functions, which satisfy f(0) = 0, g(0) = 0, and h(0) = 0; τ(k) and d(k), respectively, denote the discrete and distributed time-varying delays; ω(k) is a scalar Wiener process on a probability space (Ω, ℱ, 𝒫) with 𝔼{ω(k)} = 0, 𝔼{ω²(k)} = 1, and 𝔼{ω(i)ω(j)} = 0 for i ≠ j.
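As a concrete illustration, this class of delayed stochastic networks can be sketched numerically. The matrices, delay bounds, noise intensity, and initial condition below are illustrative assumptions, not values from the paper; the structure (state feedback, discretely and distributively delayed activations, state-dependent noise) follows the model described above.

```python
import numpy as np

# Minimal simulation sketch of a discrete-time stochastic delayed RNN.
# All numerical values are illustrative assumptions, not from the paper.
rng = np.random.default_rng(0)
n, T = 2, 200
A = np.diag([0.3, 0.5])                   # |a_i| < 1 (state feedback matrix)
W0 = 0.1 * rng.standard_normal((n, n))    # connection weight matrix
W1 = 0.05 * rng.standard_normal((n, n))   # discretely delayed weights
W2 = 0.02 * rng.standard_normal((n, n))   # distributively delayed weights
tau_max, d_max = 4, 3                     # assumed delay bounds

f = g = h = np.tanh                       # sector-bounded activations, f(0) = 0
x = np.zeros((T + 1, n))
x[:tau_max] = 0.5                         # constant initial condition

for k in range(tau_max, T):
    tau_k = 2 + (k % 3)                   # bounded discrete delay tau(k)
    d_k = 1 + (k % d_max)                 # bounded distributed delay d(k)
    dist = sum(h(x[k - i]) for i in range(1, d_k + 1))
    # state-dependent noise intensity (Lipschitz in x), scalar Wiener increment
    noise = 0.01 * np.linalg.norm(x[k]) * rng.standard_normal()
    x[k + 1] = (A @ x[k] + W0 @ f(x[k]) + W1 @ g(x[k - tau_k])
                + W2 @ dist + noise)

print(np.abs(x[-1]))
```

With these (assumed) small weights, the total gain of the right-hand side is below one, so the trajectory remains bounded and contracts toward the origin.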
Assumption 1. For any s₁, s₂ ∈ ℝ with s₁ ≠ s₂, the activation functions satisfy the sector-bound conditions where the sector bounds are known constants.
Remark 2. The condition on the activation functions in Assumption 1 was originally employed in  and has subsequently been used in recent papers on the stability of neural networks; see, for example, [5, 6, 11, 28, 29].
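As an illustration, sector conditions of this type can be checked numerically for a common activation. The choice f = tanh and the presumed sector bounds 0 and 1 are an assumed example, not taken from the paper:

```python
import numpy as np

# Illustrative numerical check that tanh satisfies a sector condition
# of the Assumption 1 type with assumed bounds 0 and 1:
#   0 <= (f(s1) - f(s2)) / (s1 - s2) <= 1   for all s1 != s2.
s = np.linspace(-5, 5, 401)
s1, s2 = np.meshgrid(s, s)
mask = s1 != s2                      # exclude the diagonal s1 == s2
quot = (np.tanh(s1[mask]) - np.tanh(s2[mask])) / (s1[mask] - s2[mask])
print(quot.min(), quot.max())
```

Since tanh is strictly increasing with derivative at most 1, every difference quotient lies strictly between 0 and 1.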
Assumption 3. The noise intensity function vector satisfies the Lipschitz condition; that is, there exists a constant such that the following inequality holds for any arguments:
Assumption 4. The time-varying delays τ(k) and d(k) are bounded, and their probability distributions can be observed. Assume that τ(k) takes values in one of two known intervals, each with a known probability, and, similarly, that d(k) takes values in one of two known intervals, each with a known probability.
Remark 5. It is noted that the binary stochastic variable was first introduced in .
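A minimal sketch of the delay model of Assumption 4 and the associated binary stochastic variable: a delay that falls in one of two intervals with a known probability. The intervals [1, 3] and [4, 6] and the probability 0.7 below are assumed for illustration only:

```python
import numpy as np

# Illustrative sampling of a delay tau(k) with an observable probability
# distribution over two intervals (Assumption 4 style). Interval choices
# and beta are assumptions, not values from the paper.
rng = np.random.default_rng(3)
beta = 0.7                        # Prob{tau(k) in the first interval}
T = 100_000
in_first = rng.random(T) < beta   # binary stochastic variable delta(k)
tau = np.where(in_first,
               rng.integers(1, 4, T),   # first interval: {1, 2, 3}
               rng.integers(4, 7, T))   # second interval: {4, 5, 6}
print(in_first.mean(), tau.min(), tau.max())
```

The empirical frequency of the first interval approaches beta, which is what lets the distribution be "observed" and exploited in the analysis.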
To describe the probability distribution of the time-varying delays, we define the following sets , , and , , together with the mapping functions
Remark 6. Consider , ,
Similarly, , ,
Proof. When  and as , we can easily deduce , so . Similarly, the result for  can be deduced. The proof is complete.
The system (1) can be rewritten as
As mentioned before, it is very difficult or even impossible to acquire complete information on the neuron states in relatively large-scale neural networks. The main purpose of this study is therefore to develop an efficient approach to estimating the neuron states via the available network outputs. It is assumed that the measured network outputs are of the form where y(k) is the measured output, C is a known constant matrix with appropriate dimensions, and the nonlinear disturbance on the network outputs satisfies
As a matter of fact, the activation functions are known. In order to fully utilize this information, the state estimator for the neural network is constructed as where x̂(k) is the estimate of the neuron state and K is the estimator gain matrix to be determined. Define the error signal e(k) = x(k) − x̂(k); then the error-state system is obtained as follows:
Denote , , , ; then (12) can be rewritten as
The initial condition associated with the error system (13) is given as where and .
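The estimator structure above (a copy of the network dynamics corrected by output injection) can be sketched on a simplified deterministic delayed network. All matrices and the gain K below are assumed for illustration; K is not the gain designed later in the paper:

```python
import numpy as np

# Sketch of a Luenberger-type state estimator for a simplified
# deterministic delayed network. All numerical values are assumptions.
n, T, tau = 2, 120, 2
A = np.diag([0.4, 0.6])                       # state feedback matrix
W0 = np.array([[0.1, -0.05], [0.05, 0.1]])    # connection weights
W1 = np.array([[0.05, 0.02], [-0.02, 0.05]])  # delayed weights
C = np.eye(n)                                 # output matrix: y(k) = C x(k)
K = 0.3 * np.eye(n)                           # assumed estimator gain

f = g = np.tanh
x = np.zeros((T + 1, n)); x[:tau + 1] = 1.0   # true network state
xh = np.zeros((T + 1, n))                     # estimator state, x_hat(0) = 0

for k in range(tau, T):
    x[k + 1] = A @ x[k] + W0 @ f(x[k]) + W1 @ g(x[k - tau])
    y = C @ x[k]                              # measured output
    # copy of the network dynamics plus output-injection correction
    xh[k + 1] = (A @ xh[k] + W0 @ f(xh[k]) + W1 @ g(xh[k - tau])
                 + K @ (y - C @ xh[k]))

err = np.linalg.norm(x - xh, axis=1)          # estimation error norm
print(err[0], err[-1])
```

For these (assumed) values the error dynamics are contractive, so the estimation error decays toward zero, which is the behavior the mean-square stability analysis is designed to guarantee.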
Lemma 7. For any constant matrix , any integers , and any vector function  that satisfies , such that the sums in the following are well defined, one has where the matrix and vector, independent of , are arbitrary ones of appropriate dimensions.
Proof. It is well known that where the vector , , has appropriate dimensions. From this, we can get which is equivalent to (18).
Lemma 8 (Zhu and Yang ). For any constant matrix , any integers , and any vector function where satisfies , such that the sums in the following are well defined; then
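Summation inequalities of this kind are of discrete Jensen type. Their basic form can be spot-checked numerically; the matrices, vectors, and the specific inequality (sum of vectors versus sum of quadratic forms) below are illustrative of the technique, not the exact statement of Lemmas 7 and 8:

```python
import numpy as np

# Numerical spot-check of a discrete Jensen-type summation inequality:
#   (sum_i x_i)^T R (sum_i x_i) <= N * sum_i x_i^T R x_i   for R >= 0.
# Illustrative data; not the precise inequality of the lemmas above.
rng = np.random.default_rng(2)
n, N = 3, 7
M = rng.standard_normal((n, n))
R = M @ M.T + n * np.eye(n)            # symmetric positive definite R
xs = rng.standard_normal((N, n))       # arbitrary vector sequence x_1..x_N

s = xs.sum(axis=0)
lhs = s @ R @ s                        # quadratic form of the sum
rhs = N * sum(x @ R @ x for x in xs)   # N times the sum of quadratic forms
print(lhs, rhs)
```

The inequality follows from the Cauchy-Schwarz inequality in the inner product induced by R; such bounds are what let the cross terms in the Lyapunov difference be dominated by tractable quadratic forms.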
3. New Stability Criteria
Definition 9. The system (11) is said to be a globally asymptotic state estimator of the system (8) if the estimation error system (13) is globally asymptotically stable in mean square; that is,
Theorem 10. Under Assumptions 1, 3, and 4, the system (15) is globally asymptotically stable in mean square if there exist matrices , , , , and , positive diagonal matrices , , and , and scalars  and  such that the following LMI holds:
The estimator gain can then be designed as .
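The role of an LMI condition of this type can be illustrated on a delay-free special case: for a Schur-stable matrix A there exists P > 0 with AᵀPA − P < 0, obtainable by summing a convergent series. The matrices below are illustrative and this is not the LMI of Theorem 10:

```python
import numpy as np

# Delay-free illustration of the Lyapunov/LMI idea: for a Schur-stable A,
# P = sum_k (A^T)^k Q A^k (with Q > 0) satisfies A^T P A - P = -Q < 0.
# Matrices are illustrative assumptions.
A = np.array([[0.5, 0.1], [0.0, 0.4]])   # Schur stable (eigenvalues inside unit circle)
Q = np.eye(2)

P = np.zeros((2, 2))
Ak = np.eye(2)
for _ in range(200):                     # truncated series; converges geometrically
    P += Ak.T @ Q @ Ak
    Ak = A @ Ak

lyap = A.T @ P @ A - P                   # should be approximately -Q
print(np.linalg.eigvalsh(P).min(), np.linalg.eigvalsh(lyap).max())
```

Feasibility of such a matrix inequality certifies stability; the LMI of the theorem plays the same role for the delayed stochastic error system, with P replaced by the decision matrices of the Lyapunov-Krasovskii functional.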
Proof. We construct a new Lyapunov-Krasovskii functional as
Taking the difference of the functional along the solution of the system, we obtain From Remark 6, we can get
Similarly, the following equation can be deduced:  It is also easy to deduce that , , and . Consider  Then, by using Lemma 7 and , we have  By using Lemma 7 again, we have  Letting , , , we have