Abstract

The problem of passivity analysis for discrete-time stochastic neural networks with time-varying delays is investigated in this paper. New delay-dependent passivity conditions are obtained in terms of linear matrix inequalities. Less conservative conditions are obtained by using integral inequalities, which aid in establishing criteria ensuring the positiveness of the Lyapunov-Krasovskii functional. Finally, numerical examples are given to show the effectiveness of the proposed method.

1. Introduction

Neural networks have been widely applied in many areas in the past few decades, such as static image processing, pattern recognition, and combinatorial optimization [1–3]. In practice, time delays are frequently encountered in neural networks. Owing to the finite signal propagation time and the finite speed of information processing, the existence of delays may cause oscillation, instability, and divergence in neural networks. Moreover, stochastic perturbations and parameter uncertainties are two main sources of performance degradation in delayed neural networks. Because of its importance in both theory and practice, the problem of stability for stochastic delayed neural networks with parameter uncertainties has become a topic of intense interest, and there have been many important and interesting results in this field [3, 6–17].

It should be noticed that most existing results on neural networks focus on the continuous-time case [3, 7–12]. However, discrete-time systems play a crucial role in today's information society. In particular, when implementing delayed continuous-time neural networks for computer simulation, one needs to formulate a discrete-time system. Thus, it is necessary to study the dynamics of discrete-time neural networks. In recent years, many important results have been published in the literature [13–17]. Kwon et al. [14] discussed stability criteria for discrete-time systems with time-varying delays. Wang et al. [16] studied the exponential stability of discrete-time neural networks with distributed delays by means of Lyapunov-Krasovskii functional theory and linear matrix inequality techniques. In [17], the authors were concerned with robust state estimation for discrete-time neural networks with successive packet dropouts, linear fractional uncertainties, and mixed time delays.

On the other hand, passivity is a significant concept that characterizes the input-output behavior of dynamic systems and offers a powerful tool for analyzing mechanical systems, nonlinear systems, and electrical circuits [18]. Passivity theory was first introduced in circuit analysis [19]. During the past several decades, passivity theory has found successful applications in various areas such as complexity, signal processing, stability, chaos control, and fuzzy control. Thus, the problem of passivity for time-delay neural networks has received much attention, and many effective approaches have been proposed in this research area [20–27]. The authors of [21, 22] discussed the problem of passivity for neural networks with time delays. Recently, Lee et al. [23] further studied the problem of dissipativity analysis for neural networks with time delays by using the reciprocally convex approach and linear matrix inequality techniques. Very recently, in [24], a passivity criterion for discrete-time stochastic bidirectional associative memory neural networks with time-varying delays was developed. In [25], some delay-dependent sufficient passivity conditions were obtained for stochastic discrete-time neural networks with time-varying delays by means of linear matrix inequality techniques and the free-weighting matrix approach. A less conservative passivity criterion for discrete-time stochastic neural networks with time-varying delays was derived in [26]. However, there is still room for decreasing the conservatism.

Motivated by the above discussion, the problem of passivity for discrete-time stochastic neural networks with time-varying delays is studied. The major contributions of this paper are as follows. First of all, different from the traditional approaches, a new inequality is introduced to handle the delay-related summation terms; this method can effectively reduce the conservatism. Secondly, not all the symmetric matrices in the Lyapunov functional are required to be positive definite, and full use is made of the relationships between the time-varying delay and its lower and upper bounds. New passivity conditions are presented in terms of matrix inequalities. Finally, numerical examples are given to indicate the effectiveness of the proposed method.

Notations. Throughout this paper, the superscripts “$-1$” and “$T$” stand for the inverse and transpose of a matrix, respectively; $P > 0$ ($P \ge 0$, $P < 0$, $P \le 0$) means that the matrix $P$ is symmetric positive definite (positive semidefinite, negative definite, negative semidefinite); $\mathbb{E}\{\cdot\}$ stands for the mathematical expectation operator with respect to the given probability measure; $\|\cdot\|$ refers to the Euclidean vector norm; $(\Omega, \mathcal{F}, \{\mathcal{F}_k\}, \mathcal{P})$ denotes a complete probability space with a filtration $\{\mathcal{F}_k\}$ satisfying the usual conditions (it contains all $\mathcal{P}$-null sets and is right continuous); $\mathbb{Z}[a, b]$ denotes the discrete interval $\{a, a+1, \ldots, b\}$ for given integers $a \le b$; $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space; $\mathbb{R}^{m \times n}$ is the set of $m \times n$ real matrices; $*$ denotes the symmetric block in a symmetric matrix; $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ denote, respectively, the maximal and minimal eigenvalues of matrix $A$.

2. Problem Statement and Preliminaries

Consider the following DSNN with time-varying delays:
\[
\begin{aligned}
x(k+1) &= A x(k) + B f(x(k)) + C f(x(k-\tau(k))) + u(k) + \sigma(k, x(k), x(k-\tau(k)))\,\omega(k), \\
y(k) &= f(x(k)), \qquad x(s) = \phi(s), \quad s \in \mathbb{Z}[-\tau_M, 0],
\end{aligned}
\tag{1}
\]
where $x(k) \in \mathbb{R}^n$ is the neuron state vector of the system; $y(k)$ is the output of the neural network; $u(k)$ is the input vector; $\phi(s)$ is the initial condition; $A$ is the state feedback coefficient matrix; $B$ and $C$ are the connection weight matrix and the delayed connection weight matrix, respectively; $f(\cdot)$ represents the neuron activation functions; $\tau(k)$ denotes the known time-varying delay and satisfies $\tau_m \le \tau(k) \le \tau_M$; $\sigma(\cdot,\cdot,\cdot)$ is the diffusion coefficient vector; and $\omega(k)$ is a scalar Brownian motion defined on the probability space $(\Omega, \mathcal{F}, \mathcal{P})$ with
\[
\mathbb{E}\{\omega(k)\} = 0, \qquad \mathbb{E}\{\omega^2(k)\} = 1, \qquad \mathbb{E}\{\omega(i)\,\omega(j)\} = 0 \;\; (i \ne j).
\]
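
To make the setting concrete, the following short script simulates a system of the form assumed above. It is only an illustrative sketch: the matrices, delay bounds, input, and diffusion term are placeholder choices, not the parameters of the paper's examples.

```python
import numpy as np

# Illustrative simulation of a DSNN of the assumed form
#   x(k+1) = A x(k) + B f(x(k)) + C f(x(k - tau(k))) + u(k) + sigma(.) w(k).
# All parameters below are placeholders, not those of Examples 1 and 2.

rng = np.random.default_rng(0)
n, T = 2, 200
tau_m, tau_M = 2, 5                      # delay bounds tau_m <= tau(k) <= tau_M

A = np.diag([0.4, 0.3])                  # state feedback coefficient matrix
B = 0.1 * rng.standard_normal((n, n))    # connection weight matrix
C = 0.1 * rng.standard_normal((n, n))    # delayed connection weight matrix
f = np.tanh                              # sector-bounded activation

x = np.zeros((T + tau_M + 1, n))         # first tau_M + 1 rows: initial condition
for k in range(T):
    tau = int(rng.integers(tau_m, tau_M + 1))    # time-varying delay
    u = 0.01 * rng.standard_normal(n)            # exogenous input
    # Diffusion term chosen so that sigma^T sigma <= rho1 x^T x + rho2 x_d^T x_d
    sigma = 0.05 * (x[k + tau_M] + x[k + tau_M - tau])
    w = rng.standard_normal()                    # Brownian increment
    x[k + tau_M + 1] = (A @ x[k + tau_M] + B @ f(x[k + tau_M])
                        + C @ f(x[k + tau_M - tau]) + u + sigma * w)
```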

Assumption 1. The neuron activation function $f(\cdot)$ satisfies
\[
l_i^- \le \frac{f_i(a) - f_i(b)}{a - b} \le l_i^+
\]
for all $a, b \in \mathbb{R}$, $a \ne b$, $i = 1, 2, \ldots, n$, where $l_i^-$ and $l_i^+$ are known real constants.

Remark 2. In Assumption 1, $l_i^-$ and $l_i^+$ can be positive, negative, or zero. Moreover, when $b = 0$ and $f_i(0) = 0$, the condition reduces to $l_i^- \le f_i(a)/a \le l_i^+$ for $a \ne 0$.
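
For instance, the commonly used activation $f_i(s) = \tanh(s)$ satisfies Assumption 1 with $l_i^- = 0$ and $l_i^+ = 1$: by the mean value theorem,
\[
\frac{\tanh(a) - \tanh(b)}{a - b} = \operatorname{sech}^2(\xi) \in (0, 1]
\]
for some $\xi$ between $a$ and $b$.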

Assumption 3. $\sigma(\cdot,\cdot,\cdot)$ is a continuous function satisfying
\[
\sigma^T(k, x, y)\,\sigma(k, x, y) \le \rho_1 x^T x + \rho_2 y^T y,
\]
where $\rho_1 \ge 0$ and $\rho_2 \ge 0$ are known constant scalars.

The following lemmas and definition will be used in the proof of the main results.

Lemma 4. For integers $a < b$ and a vector function $x(i)$, $i \in \mathbb{Z}[a, b]$, for any positive semidefinite matrix the following inequality holds:

Proof. In fact, we have Thus, one can easily obtain The proof is completed.

Remark 5. The new inequality was proposed in [5, 6] for continuous-time systems; it is worth noting that this paper is the first to extend the method to discrete-time neural networks.

Lemma 6 (see [4, 13]). Let $M \in \mathbb{R}^{n \times n}$ be a positive-definite matrix and let $x_i \in \mathbb{R}^n$, $i = 1, 2, \ldots, N$; then
\[
\Big(\sum_{i=1}^{N} x_i\Big)^T M \Big(\sum_{i=1}^{N} x_i\Big) \le N \sum_{i=1}^{N} x_i^T M x_i.
\]
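
The form displayed above is the standard discrete Jensen inequality; the following small script spot-checks it numerically on a random instance (an illustration added here, not part of the original paper).

```python
import numpy as np

# Numerical spot-check of the discrete Jensen inequality of Lemma 6:
#   (sum_i x_i)^T M (sum_i x_i) <= N * sum_i x_i^T M x_i   for M > 0.

rng = np.random.default_rng(1)
n, N = 4, 7
Q = rng.standard_normal((n, n))
M = Q @ Q.T + n * np.eye(n)          # a random positive-definite matrix
X = rng.standard_normal((N, n))      # vectors x_1, ..., x_N

s = X.sum(axis=0)
lhs = s @ M @ s
rhs = N * sum(x @ M @ x for x in X)
assert lhs <= rhs + 1e-9             # inequality holds on this instance
```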

Lemma 7 (see [24]). Let $D$, $E$, and $F$ be real matrices with appropriate dimensions, with $F$ satisfying $F^T F \le I$; then, for any scalar $\varepsilon > 0$,
\[
D F E + (D F E)^T \le \varepsilon D D^T + \varepsilon^{-1} E^T E.
\]
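
The following script numerically spot-checks the bound recalled above, in the standard form reconstructed here, by verifying that the right-hand side minus the left-hand side is positive semidefinite for random matrices and several values of $\varepsilon$ (an illustration added here, not from the paper).

```python
import numpy as np

# Spot-check of the norm-bounded uncertainty estimate of Lemma 7:
#   D F E + (D F E)^T <= eps * D D^T + (1/eps) * E^T E   whenever F^T F <= I.

rng = np.random.default_rng(2)
n = 4
D = rng.standard_normal((n, n))
E = rng.standard_normal((n, n))
F = rng.standard_normal((n, n))
F /= np.linalg.norm(F, 2)            # scale so that F^T F <= I

for eps in (0.5, 1.0, 2.0):
    lhs = D @ F @ E + (D @ F @ E).T
    rhs = eps * D @ D.T + (1.0 / eps) * E.T @ E
    # smallest eigenvalue of (rhs - lhs) must be nonnegative (up to tolerance)
    assert np.linalg.eigvalsh(rhs - lhs).min() >= -1e-9
```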

Lemma 8 (see [3]). For any constant matrices $\Omega_1$, $\Omega_2$, $\Omega_3$ with appropriate dimensions and a function $\tau(k)$ satisfying $\tau_m \le \tau(k) \le \tau_M$,
\[
\Omega_1 + (\tau(k) - \tau_m)\,\Omega_2 + (\tau_M - \tau(k))\,\Omega_3 < 0
\]
if and only if
\[
\Omega_1 + (\tau_M - \tau_m)\,\Omega_2 < 0 \quad \text{and} \quad \Omega_1 + (\tau_M - \tau_m)\,\Omega_3 < 0.
\]
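
Under the form recalled above, the equivalence follows from a convexity argument: the matrix function
\[
\Xi(\tau) = \Omega_1 + (\tau - \tau_m)\,\Omega_2 + (\tau_M - \tau)\,\Omega_3
\]
is affine in $\tau$, so $\Xi(\tau) < 0$ holds for every $\tau \in [\tau_m, \tau_M]$ if and only if it holds at the two endpoints, namely $\Xi(\tau_M) = \Omega_1 + (\tau_M - \tau_m)\,\Omega_2 < 0$ and $\Xi(\tau_m) = \Omega_1 + (\tau_M - \tau_m)\,\Omega_3 < 0$.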

Definition 9 (see [25]). The system (1) is said to be passive if there exists a scalar $\gamma > 0$ such that
\[
2\,\mathbb{E}\Big\{\sum_{k=0}^{k_p} y^T(k)\,u(k)\Big\} \ge -\gamma \sum_{k=0}^{k_p} u^T(k)\,u(k)
\]
for all $k_p \ge 0$ and for all solutions of (1) with zero initial condition.
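
As an illustration added here (not from the paper), the passivity inequality can be evaluated empirically along simulated trajectories; a single trajectory can only suggest, not prove, passivity, since the definition involves an expectation over the noise.

```python
import numpy as np

# Empirical evaluation of the passivity inequality of Definition 9 along one
# simulated trajectory started from the zero initial condition:
#   2 * sum_k y(k)^T u(k)  >=  -gamma * sum_k u(k)^T u(k)   (before expectation).

def passivity_gap(y, u, gamma):
    """Return LHS - RHS for sequences y(k), u(k); nonnegative values are
    consistent with passivity at level gamma on this sample path."""
    lhs = 2.0 * sum(yk @ uk for yk, uk in zip(y, u))
    rhs = -gamma * sum(uk @ uk for uk in u)
    return lhs - rhs
```

Averaging this gap over many independent noise realizations approximates the expectation in Definition 9.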

3. Main Results

In this section, the passivity of discrete-time stochastic neural networks with time-varying delays will be investigated by means of the new integral inequality and the Lyapunov method. In this paper, some of the symmetric matrices in the Lyapunov-Krasovskii functional are not required to be positive definite.

Denote

Main results are given in the following theorems.

Theorem 10. Under Assumptions 1 and 3, the discrete-time stochastic neural network (1) is passive, if there exist matrices , , , , , , , , , the positive diagonal matrices   , and scalars , , such that the following matrix inequalities hold: where
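
Since the conditions of Theorem 10 are expressed as LMIs, their feasibility can be tested numerically with a semidefinite-programming solver. The sketch below shows only the generic pattern of such a test with CVXPY; the single constraint is a placeholder standing in for the theorem's actual inequalities, which couple the system matrices, the delay bounds, and the scalars.

```python
import cvxpy as cp
import numpy as np

# Generic pattern for testing delay-dependent LMI conditions of the kind in
# Theorem 10. The constraint below is a placeholder, not the actual LMIs.

n = 2
A = np.diag([0.4, 0.3])                  # illustrative system matrix only

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)

eps = 1e-6
constraints = [
    P >> eps * np.eye(n),
    Q >> eps * np.eye(n),
    A.T @ P @ A - P + Q << -eps * np.eye(n),   # placeholder stability-type LMI
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("feasible" if prob.status == cp.OPTIMAL else prob.status)
```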

Proof. Define a new augmented Lyapunov-Krasovskii functional as follows: where Firstly, we show that the Lyapunov-Krasovskii functional is positive definite. By using Lemma 6, one can obtain Then, it follows from (19)–(21) that From condition (15), there exists a scalar , for any , such that Now, taking the forward difference of along the trajectories of system (1), it yields that From the new inequality of Lemma 4, one can get It is easy to get From Assumption 3 and inequality (16), we have From Assumption 1, it follows that Thus, for the diagonal matrices , one can obtain the following inequalities: Combining (25)–(30), it yields where From (15)–(17), observing that , , and , one can conclude that Then, By the definition of and inequality (23), one can find that So, one has for all . This completes the proof.

Remark 11. It should be pointed out that the new inequality is introduced to bound the delay-related summation terms, which is substantially different from traditional approaches. This method can effectively reduce the conservatism of the results.

Remark 12. In this paper, not all the matrices in the Lyapunov functional need to be positive definite. In fact, the conditions in (15) ensure the positive definiteness of the Lyapunov functional. This is greatly different from traditional approaches to passivity analysis of discrete-time neural networks, which always require the Lyapunov matrices to be positive definite.

Remark 13. It can be seen that the delay-interval summation term is divided into two parts; the aim is to make full use of the relationship between $\tau(k) - \tau_m$ and $\tau_M - \tau(k)$. Then, taking advantage of the integral inequality and Lemma 8, new passivity conditions are obtained in terms of LMIs.

Corollary 14. Under Assumptions 1 and 3, the discrete-time stochastic neural network (1) is passive, if there exist scalars , , matrices , , , , , , , , , and the positive diagonal matrices   , such that linear matrix inequalities (16) and (17) hold.
Now, we consider the following stochastic discrete-time neural network with time-varying delay and parameter uncertainties:
\[
x(k+1) = (A + \Delta A(k))\,x(k) + (B + \Delta B(k))\,f(x(k)) + (C + \Delta C(k))\,f(x(k-\tau(k))) + u(k) + \sigma(k, x(k), x(k-\tau(k)))\,\omega(k),
\tag{37}
\]
where $\Delta A(k)$, $\Delta B(k)$, and $\Delta C(k)$ denote the parameter uncertainties, which are assumed to be of the form
\[
\begin{bmatrix} \Delta A(k) & \Delta B(k) & \Delta C(k) \end{bmatrix} = D F(k) \begin{bmatrix} E_a & E_b & E_c \end{bmatrix},
\]
where $D$, $E_a$, $E_b$, $E_c$ are known constant matrices and $F(k)$ is an unknown matrix-valued function subject to
\[
F^T(k) F(k) \le I.
\]

Theorem 15. Under Assumptions 1 and 3, the discrete-time stochastic uncertain neural network (37) is robustly passive if there exist scalars , , , matrices , , , , , , , , , and the positive diagonal matrices , such that the following matrix inequalities hold: where the remaining notations are the same as defined in Theorem 10.

Proof. By replacing $A$, $B$, and $C$ in (17) with $A + \Delta A(k)$, $B + \Delta B(k)$, and $C + \Delta C(k)$, respectively, and then using Lemma 7, the desired result can be obtained immediately. The proof is completed.

4. Numerical Examples

In this section, numerical examples are presented to show the effectiveness of the results obtained in this paper.

Example 1. Consider the system (1) with the following parameters: The activation functions are taken as It can be verified that In this example, if and , the optimal passivity performance obtained is by the method in [25] and by the method in [26], while by Theorem 10 in this paper the optimal passivity performance is . The comparisons of are listed in Table 1 for , . Then, assuming , the optimal passivity performance obtained by Theorem 10 for different can be found in Table 2. It can be seen that our results are less conservative than those in [25, 26].
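
The optimal passivity performance reported here is typically obtained by treating $\gamma$ as a decision variable and minimizing it subject to the theorem's LMIs, since $\gamma$ enters them linearly. The following sketch shows only that optimization pattern; the block constraint is a placeholder, not the LMIs of Theorem 10.

```python
import cvxpy as cp
import numpy as np

# Pattern for computing an optimal passivity level gamma by SDP: minimize
# gamma subject to LMIs in which it appears linearly. Placeholder constraint.

n = 2
A = np.diag([0.4, 0.3])                  # illustrative matrix only

P = cp.Variable((n, n), symmetric=True)
gamma = cp.Variable(nonneg=True)

eps = 1e-6
lmi = cp.bmat([[A.T @ P @ A - P, A.T @ P],
               [P @ A, P - gamma * np.eye(n)]])
constraints = [P >> eps * np.eye(n),
               lmi << -eps * np.eye(2 * n)]
prob = cp.Problem(cp.Minimize(gamma), constraints)
prob.solve(solver=cp.SCS)
print("optimal gamma ~", gamma.value)
```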

Example 2. Consider the system (1) with the following parameters: The activation functions are taken as It can be verified that For this example, when and , by Theorem 10 we can get that the upper bound of the time-varying delay is . When and , by Theorem 10 we can get . The upper bounds of for different and are summarized in Table 3. It can be found from Table 3 that, for the same , a larger passivity performance corresponds to a larger ; for the same , a smaller passivity performance corresponds to a larger .

5. Conclusions

In this paper, the problem of passivity analysis for discrete-time stochastic neural networks with time-varying delays has been investigated. The presented sufficient conditions are based on the Lyapunov-Krasovskii functional, a new inequality, and the linear matrix inequality approach. Numerical examples are given to demonstrate the usefulness and effectiveness of the proposed results. Finally, it is worth noting that the method proposed in this paper may be applicable in many other areas, such as Markov jump neural networks, Markov jump neural networks with incomplete transition descriptions, and switched neural networks, which deserve further investigation.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by National Natural Science Foundation of China (Grants nos. 61273015 and 61473001), the Natural Science Research Project of Fuyang Normal College (2013FSKJ09), and the Teaching Reform Project of Fuyang Normal College (2013JYXM48).