#### Abstract

This paper investigates the stability of stochastic discrete-time neural networks (NNs) with discrete time-varying delays and leakage delay. By introducing a partition of the time-varying and leakage delays into the discrete-time system, we construct a novel Lyapunov-Krasovskii functional based on stability theory. Sufficient conditions are then derived that guarantee the global asymptotic stability of the equilibrium point. A numerical example is given to demonstrate the effectiveness and applicability of the proposed method.

#### 1. Introduction

Neural networks (NNs) have received much interest owing to their wide application in areas such as signal processing, pattern recognition, and static image processing [1, 2]. One of the most important and challenging questions in the theoretical analysis of NNs concerns their dynamical behaviors, for example, stability, instability, periodic oscillation, and chaos. Among these, stability analysis has received particular attention, and various stability conditions have been obtained in [3–10]. The main problem in stability analysis is how to construct appropriate Lyapunov functions, which are widely used in various fields [11–18]. The construction of a Lyapunov function is in turn determined by the structure of the neural network. The study of neural networks generally considers the following factors.

It is well known that time delay is inherent in various systems, including artificial neural networks, owing to the finite speed of signal transmission [3]. Delays in a system may cause oscillation and divergence and may degrade performance. Hence, the stability analysis of systems with time delay has been studied widely. Stability criteria are commonly classified as delay-independent or delay-dependent. It has been proved in [4, 5] that delay-dependent criteria are less conservative than delay-independent ones, especially for small values of the delay. Delay-dependent stability conditions for continuous-time NNs with time-varying delays have been reported in the literature [4–10]. The approaches used to handle the time-varying delay are mostly based on introducing free weighting matrices [19], the model transformation method [20], the linear matrix inequality (LMI) approach, and the delay-partitioning approach in [21, 22].

In digital implementations, most signals, including those of continuous-time NNs, must be processed, measured, or computed by a computer, so continuous-time signals must be discretized before being delivered to the computer. The stability of discrete-time neural networks is therefore of practical importance, and a growing body of literature has been devoted to it [22–28].

As pointed out in [29], discrete-time delays were introduced into the leakage (or forgetting) terms of bidirectional associative memory (BAM) neural networks. From the literature [30–36] it can be found that leakage terms tend to destabilize neural networks. In [30, 31], systems with leakage delays were studied; however, unlike in [29], the leakage delays there were time varying. Moreover, various kinds of neural networks with time delays in the leakage terms have been studied, with particular attention to the effect of the leakage term on their dynamical behavior [32–42]. In [33], passivity analysis of neural networks with time-varying delays and leakage delay was considered by means of the free-weighting-matrix method and stochastic analysis techniques. Furthermore, Li and others in [34, 35] studied the stability of various differential systems, including neural networks with leakage terms, using the contraction mapping theorem, Brouwer's fixed point theorem, the Lyapunov-Krasovskii functional method, and the free-weighting-matrix technique. Unfortunately, the neural networks with leakage terms studied in most of the literature are continuous-time systems, and little work has addressed discrete-time neural networks with leakage delay. On the other hand, it is now well recognized that stochastic disturbances are mostly inevitable owing to noise in electronic implementations, and it has been shown that certain stochastic inputs can make a neural network unstable.

In this paper, the stability problem is considered for discrete-time neural networks with time-varying delays in the leakage terms. Firstly, the mathematical models are established. Secondly, a less conservative stability criterion is derived by using a novel Lyapunov-Krasovskii functional that depends on the delay partition. Thirdly, a numerical example is provided to show the effectiveness of the main result.

*Notation*. Throughout this paper, ℝⁿ denotes the n-dimensional Euclidean space, and ℝ^(m×n) denotes the set of all m×n real matrices. The superscript T denotes matrix transposition, and the notation X ≥ Y (X > Y, resp.), where X and Y are symmetric matrices, means that X − Y is positive semidefinite (positive definite, resp.). In symmetric block matrices, the symbol ∗ is used as an ellipsis for terms induced by symmetry. ‖·‖ stands for the Euclidean vector norm in ℝⁿ. E{x} and E{x | y} denote the expectation of x and the expectation of x conditional on y. (Ω, F, P) is a probability space, where Ω is the sample space, F is the σ-algebra of subsets of the sample space, and P is the probability measure on F.

#### 2. Preliminaries

Discretizing the continuous-time NNs with leakage terms [39], we consider the following discrete-time system with time-varying delays in the leakage terms, where is the state vector at time ; with is the state feedback coefficient matrix; the matrices and are the connection weight matrix and the discretely delayed connection weight matrix, respectively; , are the neuron activation functions, which satisfy , ; denotes the discrete time-varying delay; and denotes the leakage delay.
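Since the displayed system equations are not reproduced above, the sketch below assumes the standard form of a discrete-time NN with leakage delay, x(k+1) = x(k) − C·x(k−σ) + A·f(x(k)) + B·f(x(k−τ(k))); all matrices, delay values, and the activation are illustrative choices, not taken from the paper.

```python
import numpy as np

# Hypothetical instance of a discrete-time NN with leakage delay, assumed here
# in the standard form x(k+1) = x(k) - C x(k-sigma) + A f(x(k)) + B f(x(k-tau(k))).
# All numerical values below are illustrative, not taken from the paper.
C = np.diag([0.5, 0.4])                      # state feedback (leakage) coefficient matrix
A = np.array([[0.10, -0.05], [0.02, 0.10]])  # connection weight matrix
B = np.array([[0.05, 0.01], [-0.03, 0.04]])  # delayed connection weight matrix
sigma, tau_max = 1, 3                        # leakage delay and bound on tau(k)
f = np.tanh                                  # activation function with f(0) = 0

hist = max(sigma, tau_max)
x = [np.array([0.5, -0.3])] * (hist + 1)     # constant initial history
for k in range(hist, hist + 200):
    tau_k = 1 + (k % tau_max)                # a bounded time-varying delay in {1, 2, 3}
    x.append(x[k] - C @ x[k - sigma] + A @ f(x[k]) + B @ f(x[k - tau_k]))

print(np.linalg.norm(x[-1]))                 # trajectory contracts toward the equilibrium 0
```

For these (stable) parameter values the state norm decays to zero; a larger leakage delay or a leakage matrix closer to the identity can destabilize the iteration, which is the phenomenon the paper's criteria are designed to rule out.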

*Assumption 1. *For any , , , the activation functions satisfy the following inequality, where , , , and are constants.
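As a concrete illustration of a sector-type bound of the kind stated in Assumption 1 (the displayed inequality is not reproduced above, so the form l ≤ (f(s1) − f(s2))/(s1 − s2) ≤ L used here is an assumption), one can check it numerically for f = tanh with l = 0 and L = 1:

```python
import math

# Sector-type bound assumed for the activation (illustrative form):
#   l <= (f(s1) - f(s2)) / (s1 - s2) <= L   for all s1 != s2.
# For f = tanh, the mean value theorem with 0 < tanh'(x) <= 1 gives l = 0, L = 1.
def in_sector(f, s1, s2, l, L):
    q = (f(s1) - f(s2)) / (s1 - s2)
    return l <= q <= L

pairs = [(-2.0, 1.0), (0.5, 3.0), (-0.1, 0.2)]
ok = all(in_sector(math.tanh, a, b, 0.0, 1.0) for a, b in pairs)
print(ok)
```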

*Remark 2. *The condition on the activation functions in Assumption 1 was originally employed in [6, 7] and has been subsequently used in recent papers with the problem of stability of neural networks; see [21–28].

The system (1) has equilibrium points under Assumption 1. Let be the equilibrium point of system (1), and shift it to the origin by letting ; then system (1) with stochastic disturbances can be rewritten as follows, where is the state vector of the transformed system, , the transformed neuron activation functions are , , and is a scalar Wiener process on a probability space with , , (). Then , can be verified from Remark 2.

*Assumption 3. *The noise intensity function vector satisfies the Lipschitz condition; that is, the nonlinear function satisfies the following inequality, where is a known scalar constant.
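A minimal example of a noise-intensity function obeying a Lipschitz-type growth bound of the kind stated in Assumption 3; the scalar form |g(u)| ≤ ρ·|u| used here is an illustrative assumption, since the displayed inequality is not reproduced above.

```python
import math

# A noise-intensity function satisfying an assumed Lipschitz-type bound
#   |g(u)| <= rho * |u|   (illustrative form, not the paper's exact inequality).
rho = 0.3

def g(u):
    return rho * math.sin(u)   # |sin(u)| <= |u|, hence |g(u)| <= rho * |u|

ok = all(abs(g(u)) <= rho * abs(u) for u in (-2.0, -0.5, 0.0, 0.7, 3.1))
print(ok)
```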

*Assumption 4. *The time-varying delay is bounded, , and its probability distribution can be observed. Assume that there exists an integer such that , , , which means that we divide into parts, and , where , , , and , .

To describe the probability distribution of time-varying delay, we define the following set , . Define mapping functions
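The delay-partitioning idea of Assumption 4 can be sketched as follows: the interval [τ_m, τ_M] is split into m subintervals, and each realization of τ(k) is mapped to the index of the subinterval containing it. The bounds and partition number below are illustrative, not taken from the paper.

```python
# Sketch of the delay partition of Assumption 4: [tau_m, tau_M] is split into
# m subintervals, and each realisation of tau(k) is mapped to the index of the
# subinterval containing it.  The numbers below are illustrative.
tau_m, tau_M, m = 2, 10, 4
width = (tau_M - tau_m) / m
bounds = [tau_m + i * width for i in range(m + 1)]   # tau_0 < tau_1 < ... < tau_m

def partition_index(tau):
    """Return the index i (1-based) of the subinterval containing tau."""
    for i in range(1, m + 1):
        if tau <= bounds[i]:
            return i
    raise ValueError("delay outside [tau_m, tau_M]")

print([partition_index(t) for t in (2, 4, 7, 10)])
```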

*Remark 5. *Consider , ,

*Remark 6. *In [21], the delay-partitioning projection technique is used: is partitioned into several components, that is, , where is a positive integer, in order to derive less restrictive stability criteria. Differently from [21], we use probability theory to describe the partition of the time delay, which has the advantage of reducing the computational burden.

Then the system (3) can be rewritten as

#### 3. New Stability Criteria

In this section, we will establish new stability criteria for system (1). The following lemma is needed in order to derive our main results.

Lemma 7 (discrete Jensen inequality, Zhu and Yang [23]). *For any constant matrix , , integers , and vector function such that the sums in the following are well defined, the following holds:*

The following theorem can be obtained when the stochastic term is removed from system (6).

Theorem 8. *Under Assumptions 1 and 4, the system (6) with is globally asymptotically stable, if there exist positive matrices , , , , , (), , , and free matrices , , , with appropriate dimensions, such that the following LMI holds:**where , denotes a matrix whose th element is while the th element is , and all other elements are zero matrices. For example, .*

*Proof. *For convenience, we denote ; that is, , and then system (6) with can be rewritten as follows:

We construct a new Lyapunov-Krasovskii functional as follows, where

Taking the difference of the functional along the solutions of (9), we obtain the following. According to Remark 5, it is easy to get the estimate below, where .

Consider

Using the algebraic expression in and applying Lemma 7, we can get

Let , , , ; then

As is well known, for any vectors , and any symmetric matrix , the following inequality holds. Denoting , , , where is a free matrix, the last inequality can be expressed as follows, which is equivalent to the form below. Similarly, we can get the analogous inequality. Combining those inequalities, we obtain
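The vector inequality invoked at this step is not reproduced above; a completing-the-square bound commonly used in such proofs is 2xᵀy ≤ xᵀXx + yᵀX⁻¹y for any symmetric X > 0, which the sketch below checks numerically. Whether this is exactly the displayed inequality is an assumption.

```python
import numpy as np

# Assumed completing-the-square bound:
#   2 x^T y <= x^T X x + y^T X^{-1} y   for any symmetric X > 0,
# which follows from (X^{1/2} x - X^{-1/2} y)^T (X^{1/2} x - X^{-1/2} y) >= 0.
rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
X = M @ M.T + n * np.eye(n)              # symmetric positive definite
x, y = rng.standard_normal(n), rng.standard_normal(n)

lhs = 2 * x @ y
rhs = x @ X @ x + y @ np.linalg.solve(X, y)
print(lhs <= rhs)
```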

According to the equation in Assumption 1, we have the following. It can be deduced that there exist and such that the inequality below holds, where denotes the unit column vector having one element on its th row and zeros elsewhere. Derived from , we can get the following equation for any matrices with appropriate dimensions. Combining (12)–(25), we obtain the following. Then, if the condition below holds, we can conclude that is convergent and , which implies that the system (6) is globally asymptotically stable. Using the Schur complement and the boundary condition , we can get the inequality in this theorem.

This completes the proof of the theorem.
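The Schur-complement step used at the end of the proof can be illustrated numerically: for R > 0, the block matrix [[Q, S], [Sᵀ, −R]] is negative definite iff Q + S·R⁻¹·Sᵀ < 0. The matrices below are illustrative, not the paper's LMI variables.

```python
import numpy as np

# Schur-complement equivalence: for R > 0, the block matrix
#   [[Q, S], [S^T, -R]] < 0   iff   Q + S R^{-1} S^T < 0.
# The matrices below are illustrative, not the paper's LMI variables.
rng = np.random.default_rng(2)
n = 3
S = rng.standard_normal((n, n))
M = rng.standard_normal((n, n))
R = M @ M.T + n * np.eye(n)              # positive definite block
Q = -(M @ M.T) - 10 * np.eye(n)          # negative definite block with margin

block = np.block([[Q, S], [S.T, -R]])
schur = Q + S @ np.linalg.solve(R, S.T)

block_neg = bool(np.all(np.linalg.eigvalsh(block) < 0))
schur_neg = bool(np.all(np.linalg.eigvalsh(schur) < 0))
print(block_neg, schur_neg)
```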

*Remark 9. *The stability analysis problem of a general class of discrete-time neural networks with leakage delays is dealt with in Theorem 8. The stability condition is expressed as a set of LMIs, which can be checked by resorting to the MATLAB LMI Toolbox.

*Remark 10. *Studies similar to this paper have been reported in [21, 22, 33]. We note that our new stability criterion for the stochastic discrete-time system benefits from the idea of delay partitioning. Based on the general assumption on the time delay, we represent as parts, so that the condition is not only delay dependent but also dependent on the partitioning number.

*Remark 11. *From the proof of Theorem 8, it can easily be seen that we enlarged , , into . We can obtain the next result by handling , , in a different way.

Corollary 12. *Under Assumptions 1 and 4, the system (6) with is globally asymptotically stable, if there exist positive matrices , , , , , (), , , and free matrix , , , , with any appropriate dimensions, such that the following LMI holds:**and the definition of is the same as the definition in Theorem 8, and , , , in inequality (29) are , , , in Theorem 8, respectively.*

*Proof. * in is handled in the following way:

When , , , , the last inequality is equivalent to the following. Then

By solving the convex LMI condition at boundary conditions , we can get the LMIs in Corollary 12. This completes the proof of Corollary 12.

*Remark 13. *The difference between Theorem 8 and Corollary 12 is obvious. The LMIs in Corollary 12 are derived from the property of and are simpler than the LMIs in Theorem 8.

The next result can be obtained when the stochastic term in (6) is retained.

Corollary 14. *Under Assumptions 1 and 4, the system (6) with is globally asymptotically stable, if there exist positive matrices , , , (), (), , , and free matrix with any appropriate dimensions, such that the following LMI holds:*