Abstract

This paper investigates the stability analysis problem for discrete-time neural networks (NNs) with discrete and distributed time delays. Stability theory and a linear matrix inequality (LMI) approach are developed to establish sufficient conditions for the NNs to be globally asymptotically stable and to design a state estimator for the discrete-time neural networks. For both the discrete and distributed delays, the delay interval is decomposed, and Lyapunov-Krasovskii functionals (LKFs) are constructed on these subintervals, so that a new stability criterion is obtained in terms of linear matrix inequalities (LMIs). Numerical examples are given to demonstrate the effectiveness and applicability of the proposed method.

1. Introduction

In the past decades, recurrent neural networks (RNNs) have been widely studied due to their applications in areas such as pattern recognition, associative memory, combinatorial optimization, and signal processing. Dynamical behaviors (e.g., stability, instability, periodic oscillation, and chaos) of neural networks are known to be crucial in applications. It is noted that the stability of neural networks is a prerequisite for some optimization problems. As is well known, many biological and artificial neural networks contain inherent time delays in signal transmission due to the finite speed of information processing, which may cause oscillation, divergence, and instability. In recent years, a great number of papers have been published on various networks with time delays [1–10].

On one hand, a delay-dependent stability condition for continuous-time RNNs with time-varying delays was derived by defining a new Lyapunov functional, and the obtained condition includes some existing delay-independent ones; see [11, 12]. However, when continuous-time RNNs are simulated, tested, or computed on digital hardware, it is necessary to discretize the continuous-time networks to obtain a discrete-time system. The study of the dynamics of discrete-time neural networks is therefore crucially needed. In particular, the stability of discrete-time neural networks (DNNs) has been studied in [13–18], since DNNs play a more important role than their continuous-time counterparts in today's digital applications.

On the other hand, in many applications the neuron states are seldom fully available in the network outputs, so the neuron state estimation problem, namely estimating the neuron states through the available measurements, becomes important. Recently, the state estimation problem for neural networks has attracted considerable attention, and the delay-dependent state estimation problem has been studied widely for NNs; see [19–26].

Stochastic disturbances are almost inevitable owing to thermal noise in electronic implementations. It has also been revealed that certain stochastic inputs could make a neural network unstable.

Summarizing the above discussion, in this paper the stability problem is considered for discrete-time neural networks with discrete and distributed delays. Firstly, the mathematical models are established. Secondly, a new and less conservative stability criterion is derived by using a novel Lyapunov-Krasovskii functional. Thirdly, a numerical example is provided to show the effectiveness of the main result. The main technical difficulty of our paper lies in the partition of the distributed time-varying delays. The novel contribution of this work with respect to the existing literature is the construction of a novel Lyapunov-Krasovskii functional according to the partition of the distributed time-varying delays. In Corollary 11, using Lemma 7, which is proved in Section 2, we obtain a new stability criterion.

Notation. Throughout this paper, $\mathbb{R}^n$ and $\mathbb{R}^{n \times m}$ denote, respectively, the $n$-dimensional Euclidean space and the set of all $n \times m$ real matrices. The superscript $T$ denotes matrix transposition, and the notation $X \ge Y$ (resp., $X > Y$), where $X$ and $Y$ are symmetric matrices, means that $X - Y$ is positive semidefinite (resp., positive definite). In symmetric block matrices, the symbol $*$ is used as an ellipsis for terms induced by symmetry. $\|\cdot\|$ stands for the Euclidean vector norm in $\mathbb{R}^n$, defined as $\|x\| = \sqrt{x^T x}$. $\mathbb{N}$ denotes the set including zero and the positive integers. $\mathbb{E}\{x\}$ and $\mathbb{E}\{x \mid y\}$ denote the expectation of $x$ and the expectation of $x$ conditional on $y$, respectively. $(\Omega, \mathcal{F}, \mathcal{P})$ is a probability space, where $\Omega$ is the sample space, $\mathcal{F}$ is the $\sigma$-algebra of subsets of the sample space, and $\mathcal{P}$ is the probability measure on $\mathcal{F}$.

2. Preliminaries

Consider the following discrete-time recurrent neural network with time-varying delays: where is the neural state vector at time ; with is the state feedback coefficient matrix; the matrices , , and are the connection weight matrix, the discretely delayed connection weight matrix, and the distributively delayed connection weight matrix, respectively; is the exogenous input; , , and are the neuron activation functions, which satisfy , , and ; and , respectively, denote the discrete and distributed time-varying delays. is a scalar Wiener process on a probability space with , , and .
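For concreteness, a model of this class typically takes a form such as the following (the symbols below are illustrative placeholders in standard notation, not necessarily the paper's exact equation (1)):
\[
x(k+1) = A x(k) + W_0 f\bigl(x(k)\bigr) + W_1 g\bigl(x(k-\tau(k))\bigr) + W_2 \sum_{i=1}^{d(k)} h\bigl(x(k-i)\bigr) + J + \sigma\bigl(k, x(k), x(k-\tau(k))\bigr)\,\omega(k),
\]
where $A = \mathrm{diag}(a_1, \dots, a_n)$ is the state feedback coefficient matrix, $W_0$, $W_1$, and $W_2$ are the connection, discretely delayed connection, and distributively delayed connection weight matrices, $\tau(k)$ and $d(k)$ are the discrete and distributed time-varying delays, $J$ is the exogenous input, and $\omega(k)$ is the scalar Wiener process.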

Assumption 1. For any , , , the activation functions satisfy where , and are constants.
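In the cited literature such an assumption is usually stated as a sector-type condition; a standard form, with illustrative bounds $l_i^-$ and $l_i^+$, is
\[
l_i^- \le \frac{f_i(s_1) - f_i(s_2)}{s_1 - s_2} \le l_i^+, \qquad \forall s_1, s_2 \in \mathbb{R},\ s_1 \ne s_2,\ i = 1, \dots, n,
\]
where $l_i^-$ and $l_i^+$ are known constants (which may be positive, negative, or zero).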

Remark 2. The condition on the activation function in Assumption 1 was originally employed in [27] and has been subsequently used in recent papers with the problem of stability of neural networks; see [5, 6, 11, 28, 29], for example.

Assumption 3. The noise intensity function vector satisfies the Lipschitz condition; that is, there exists a constant such that the following inequality holds for any :
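A typical statement of such a Lipschitz condition, written here with an illustrative constant $\rho > 0$, is
\[
\bigl\| \sigma(k, x, y) - \sigma(k, \hat{x}, \hat{y}) \bigr\|^2 \le \rho \bigl( \| x - \hat{x} \|^2 + \| y - \hat{y} \|^2 \bigr), \qquad \forall x, y, \hat{x}, \hat{y} \in \mathbb{R}^n .
\]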

Assumption 4. The time-varying delays and are bounded, , , and its probability distribution can be observed. Assume that takes values in and , where , , and . Similarly, takes values in , and , where , , and .

Remark 5. It is noted that the binary stochastic variable was first introduced in [6].

To describe the probability distribution of time-varying delays, we define the following sets , , and , . Define mapping functions
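A common way to encode such probability-distribution information (used, e.g., in [6]) is through a Bernoulli-distributed indicator; an illustrative construction for the discrete delay is
\[
\alpha(k) = \begin{cases} 1, & \tau(k) \in [\tau_0, \tau_1], \\ 0, & \tau(k) \in (\tau_1, \tau_2], \end{cases} \qquad \Pr\{\alpha(k) = 1\} = \mathbb{E}\{\alpha(k)\} = \alpha_0,
\]
so that the delay splits into two interval-restricted delays, with the distributed delay $d(k)$ handled analogously.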

Remark 6. Consider , ,
Similarly, , ,

Proof. When , and as , we can easily deduce , so . Similarly, the result about can be deduced. The proof is complete.

The system (1) can be rewritten as

As mentioned before, it is very difficult or even impossible to acquire complete information on the neuron states in relatively large-scale neural networks. The main purpose of this study is therefore to develop an efficient approach to estimating the neuron states via the available network outputs. It is assumed that the measured network outputs are of the form where is the measured output, is a known constant matrix with appropriate dimensions, and is a nonlinear disturbance on the network outputs satisfying

As a matter of fact, the activation functions are known. In order to fully utilize the information of the activation functions, the state estimator for the neural network is constructed as where is the estimate of the neuron state and is the estimator gain matrix to be determined. Define the error signal ; thus, we obtain the error state system as follows:
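Estimators of this kind are usually of Luenberger type; a sketch in illustrative notation (matching the placeholder model above, not necessarily the paper's exact equation) is
\[
\hat{x}(k+1) = A \hat{x}(k) + W_0 f\bigl(\hat{x}(k)\bigr) + W_1 g\bigl(\hat{x}(k-\tau(k))\bigr) + W_2 \sum_{i=1}^{d(k)} h\bigl(\hat{x}(k-i)\bigr) + J + K \bigl[ y(k) - C\hat{x}(k) - \varphi\bigl(k, \hat{x}(k)\bigr) \bigr],
\]
where the measurement is modeled as $y(k) = C x(k) + \varphi(k, x(k))$ and $K$ is the gain to be designed; the error $e(k) = x(k) - \hat{x}(k)$ then obeys a delayed error system driven by differences such as $f(x(k)) - f(\hat{x}(k))$.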

Denote , , , ; then (12) can be rewritten as

The initial condition associated with the error system (13) is given as where and .

By defining and combining (8) and (13) with , we can obtain the following system: where Then, it is easy to show the following equations:

Lemma 7. For any constant matrix , any integers , and any vector function , where satisfies such that the sums in the following are well defined, one has where the matrix and the vector , independent of and , are arbitrary ones of appropriate dimensions.

Proof. It is well known that where the vector , , is of appropriate dimensions and . From this, we can get which is equivalent to (18).

Lemma 8 (Zhu and Yang [28]). For any constant matrix , any integers , and any vector function where satisfies , such that the sums in the following are well defined; then
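Lemma 8 is the standard discrete Jensen inequality; in common notation, for a matrix $M = M^T > 0$, integers $\tau_2 > \tau_1 \ge 0$, and a vector function $\eta(\cdot)$, it reads
\[
-(\tau_2 - \tau_1) \sum_{i=k-\tau_2}^{k-\tau_1-1} \eta^T(i)\, M\, \eta(i) \le -\left( \sum_{i=k-\tau_2}^{k-\tau_1-1} \eta(i) \right)^{T} M \left( \sum_{i=k-\tau_2}^{k-\tau_1-1} \eta(i) \right).
\]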

3. New Stability Criteria

In this section, we will establish new stability criteria for system (1). Since the system in (8) involves a stochastic parameter, to investigate its stability, we need the following definition.

Definition 9. The system (11) is said to be a globally asymptotic state estimator of the system (8) if the estimation error system (13) is globally asymptotically stable in mean square; that is,
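In standard notation, global asymptotic stability in mean square of the error system means
\[
\lim_{k \to \infty} \mathbb{E}\, \| e(k) \|^2 = 0 \quad \text{for every admissible initial condition.}
\]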

Theorem 10. Under Assumptions 1, 3, and 4, the system (15) is globally asymptotically stable in mean square, if there exist matrices , , , , and , and positive diagonal matrices , , and , and scalars and such that the following LMI holds:
The estimator gain can then be designed as .
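In designs of this type, the gain usually enters the LMI through a linearizing change of variables, for example $X = PK$ with $P > 0$ (an illustrative parameterization, not necessarily the one used here), so that a feasible solution of the LMI yields
\[
K = P^{-1} X .
\]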

Proof. We construct a new Lyapunov-Krasovskii functional as where
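A delay-partitioned LKF of the kind used here typically contains terms of the following form (an illustrative sketch; the paper's exact functional includes additional interval-wise terms):
\[
V(k) = x^T(k) P x(k) + \sum_{j=1}^{m} \sum_{i=k-\tau_j}^{k-\tau_{j-1}-1} x^T(i) Q_j x(i) + \sum_{j=1}^{m} (\tau_j - \tau_{j-1}) \sum_{s=-\tau_j}^{-\tau_{j-1}-1} \sum_{i=k+s}^{k-1} \delta^T(i) R_j \delta(i),
\]
where $\delta(i) = x(i+1) - x(i)$ and $0 = \tau_0 < \tau_1 < \dots < \tau_m$ are the partition points of the delay interval.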
Taking the difference of the functional along the solution of the system, we obtain From Remark 6, we can get
Similarly, the following equation can be deduced: And it is easy to deduce that , , and . Consider Then, by using Lemma 7 and , we have Applying Lemma 7 again, we have Let , , ; then we have
From Assumption 1, and , we have and .
It can be deduced that there exist , , and such that where denotes the unit column vector having a 1 on its th row and zeros elsewhere.
According to (10), ; the following equation can be concluded, where is a positive scalar: And from Assumption 3, we can obtain, for a positive scalar, Combining (27)–(36), we obtain where
Using the Schur complement, we can let be equal to Pre- and postmultiplying (39), respectively, by and its transpose yields Then, by denoting , one gets that the LMI condition (40) guarantees (23).
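The Schur complement step relies on the standard equivalence: for symmetric blocks $S_{11}$ and $S_{22}$,
\[
\begin{bmatrix} S_{11} & S_{12} \\ S_{12}^{T} & S_{22} \end{bmatrix} < 0 \iff S_{22} < 0 \ \text{and}\ S_{11} - S_{12} S_{22}^{-1} S_{12}^{T} < 0 .
\]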
It is obvious that The summation of both sides of (41) from 1 to (where is a positive integer) is equal to So, we can conclude that is convergent and This completes the proof.

Based on Theorem 10, a further improved delay-dependent stability criterion of the system (15) is given in the following corollary by using Lemma 8.

Corollary 11. Under Assumptions 1, 3, and 4, the system (15) is globally asymptotically stable in mean square, if there exist matrices , , , , and such that the following LMI holds: where is defined in Theorem 10 and .

Proof. From Theorem 10, we know that Then, Using Lemma 8 and , we can deduce that, for any matrices with appropriate dimensions,

4. Examples

In this section, a numerical example is given to illustrate the effectiveness and benefits of the developed methods.

Example 1. We consider the delayed stochastic DNNs (1) with the following parameters: And the activation functions satisfy Assumption 1 with For the parameters listed above, letting , , , and , we can obtain a feasible solution, which shows that our method is effective. Owing to space limitations, we provide only part of the feasible solution here:
Therefore, according to Theorem 10, the gain matrix of the desired estimator can be obtained as When we set , , , and , we obtain Figures 1 and 2, which depict the trajectories of and of the state estimator with the initial conditions , . From Theorem 10, it follows that the state estimator (46) is indeed a state estimator of the delayed neural network (1). Figure 3 further confirms that the estimation error tends to zero as .
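Feasibility tests of this kind can be reproduced numerically with an LMI/SDP solver. Below is a minimal sketch in Python using cvxpy, checking the basic discrete-time Lyapunov LMI $A^T P A - P < 0$ for an illustrative 2-by-2 system; the matrix $A$ and the margin eps are placeholders, not the paper's example data:

import numpy as np
import cvxpy as cp

# Illustrative state matrix (a placeholder, not the paper's example data).
A = np.array([[0.5, 0.1],
              [0.0, 0.4]])
n = A.shape[0]

# Decision variable: a symmetric Lyapunov matrix P.
P = cp.Variable((n, n), symmetric=True)

eps = 1e-6  # small margin to enforce strict inequalities numerically

# cvxpy's PSD constraints expect symmetric expressions, so we symmetrize
# M = A^T P A - P explicitly (mathematically M is already symmetric).
M = A.T @ P @ A - P
constraints = [
    P >> eps * np.eye(n),
    0.5 * (M + M.T) << -eps * np.eye(n),
]

# Pure feasibility problem: any feasible P certifies asymptotic stability
# of the delay-free linear part x(k+1) = A x(k).
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print("status:", prob.status)
if prob.status == "optimal":
    print("feasible P =\n", P.value)

The full criteria of Theorem 10 and Corollary 11 would be checked in the same way, with the decision variables and block structure of the corresponding LMIs in place of this scalar Lyapunov test.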

5. Conclusions

The robust stability of stochastic discrete-time NNs with mixed delays has been investigated in this research via the Lyapunov functional method. By employing delay partitioning and introducing a new Lyapunov functional, more general LMI conditions for the stability of stochastic discrete-time NNs are established. Finally, the feasibility and effectiveness of the developed methods, and their reduced conservatism compared with most existing results, have been shown by numerical simulation examples. The foregoing results have the potential to be useful for the study of stochastic discrete-time NNs, and they can also be extended to complex networks with mixed time-varying delays.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (2010 CB732501) and the Fund of the Sichuan Provincial Key Laboratory of Signal and Information Processing (SZJJ2009-002).