Research Article  Open Access
New Results on Passivity for Discrete-Time Stochastic Neural Networks with Time-Varying Delays
Abstract
The problem of passivity analysis for discrete-time stochastic neural networks with time-varying delays is investigated in this paper. New delay-dependent passivity conditions are obtained in terms of linear matrix inequalities. Less conservative conditions are obtained by using integral inequalities, which help to ensure the positiveness of the Lyapunov-Krasovskii functional. Finally, numerical examples are given to show the effectiveness of the proposed method.
1. Introduction
Neural networks have been widely applied in many areas over the past few decades, such as signal processing, pattern recognition, and combinatorial optimization [1–3]. In practice, time delays are frequently encountered in neural networks. Owing to the finite signal propagation time and the finite speed of information processing, the existence of delays may cause oscillation, instability, and divergence in neural networks. Moreover, stochastic perturbations and parameter uncertainties are two main sources of performance degradation in delayed neural networks. Because of its importance in both theory and practice, the stability of stochastic delayed neural networks with parameter uncertainties has become a hot issue, and many important and interesting results have been reported in this field [3, 6–17].
It should be noticed that most existing results focus on continuous-time neural networks [3, 7–12]. However, discrete-time systems play a crucial role in today's information society. In particular, when a delayed continuous-time neural network is implemented for computer simulation, it must be formulated as a discrete-time system. Thus, it is necessary to study the dynamics of discrete-time neural networks, and in recent years many important results have been published in the literature [13–17]. Kwon et al. [14] discussed stability criteria for discrete-time systems with time-varying delays. Wang et al. [16] studied the exponential stability of discrete-time neural networks with distributed delays by means of Lyapunov-Krasovskii functional theory and linear matrix inequality techniques. In [17], the authors considered robust state estimation for discrete neural networks with successive packet dropouts, linear fractional uncertainties, and mixed time-delays.
On the other hand, passivity is a significant concept that characterizes the input-output behavior of dynamic systems and offers a powerful tool for analyzing mechanical systems, nonlinear systems, and electrical circuits [18]. Passivity theory was first presented in circuit analysis [19]. During the past several decades, it has found successful applications in various areas such as complexity, signal processing, stability, chaos control, and fuzzy control. Thus, the problem of passivity for time-delay neural networks has received much attention, and many effective approaches have been proposed in this research area [20–27]. The authors of [21, 22] discussed the passivity of neural networks with time-delays. Recently, Lee et al. [23] further studied dissipativity analysis for neural networks with time-delays by using the reciprocally convex approach and linear matrix inequality techniques. Very recently, a passivity criterion for discrete-time stochastic bidirectional associative memory neural networks with time-varying delays was developed in [24]. In [25], some delay-dependent sufficient passivity conditions were obtained for stochastic discrete-time neural networks with time-varying delays in terms of linear matrix inequalities and the free-weighting matrices approach. A less conservative passivity criterion for discrete-time stochastic neural networks with time-varying delays was derived in [26]. However, there is still room for reducing the conservatism.
Motivated by the above discussion, the problem of passivity for discrete-time stochastic neural networks with time-varying delays is studied. The major contributions of this paper are as follows. First, departing from the traditional approach, a new inequality is introduced to handle the delay-related terms; this method can effectively reduce the conservatism. Second, not all the symmetric matrices in the Lyapunov functional are required to be positive definite, and the relationships among the delay-related terms are fully exploited. New passivity conditions are presented in terms of matrix inequalities. Finally, numerical examples are given to indicate the effectiveness of the proposed method.
Notations. Throughout this paper, the superscripts "−1" and "T" stand for the inverse and the transpose of a matrix, respectively; P > 0 (P ≥ 0, P < 0, P ≤ 0) means that the matrix P is symmetric positive definite (positive semidefinite, negative definite, negative semidefinite); E{·} stands for the mathematical expectation operator with respect to the given probability measure; ‖·‖ refers to the Euclidean vector norm; (Ω, F, P) denotes a complete probability space with a filtration containing all null sets and satisfying the right-continuity condition; Z[a, b] denotes the discrete interval {a, a + 1, ..., b}; R^n denotes the n-dimensional Euclidean space; R^{n×m} is the set of all n × m real matrices; "∗" denotes the symmetric block in a symmetric matrix; λ_max(A) and λ_min(A) denote, respectively, the maximal and minimal eigenvalues of matrix A.
2. Problem Statement and Preliminaries
Consider the following DSNN with time-varying delays, where is the neuron state vector of the system; is the output of the neural network; is the input vector; is the initial condition; is the state feedback coefficient matrix; and are the connection weight matrix and the delayed connection weight matrix, respectively; represents the neuron activation functions; denotes the known time-varying delay and satisfies ; is the diffusion coefficient vector; and is a scalar Brownian motion defined on the probability space with
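Since the concrete model equations of system (1) are not reproduced above, the following sketch simulates a generic discrete-time stochastic delayed neural network of a commonly used form, x(k+1) = A x(k) + W0 f(x(k)) + W1 f(x(k − τ(k))) + u(k) + σ(k) ω(k). All matrices, dimensions, and parameter values below are illustrative placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical parameters (placeholders, not the paper's concrete data).
rng = np.random.default_rng(0)
n = 2                                    # number of neurons
A = np.diag([0.3, 0.2])                  # state feedback coefficient matrix
W0 = 0.1 * rng.standard_normal((n, n))   # connection weight matrix
W1 = 0.1 * rng.standard_normal((n, n))   # delayed connection weight matrix

def f(x):
    """Sector-bounded activation; tanh satisfies Assumption 1 with l_i = 0, u_i = 1."""
    return np.tanh(x)

def simulate(N=200, tau_m=1, tau_M=3):
    """Simulate x(k+1) = A x(k) + W0 f(x(k)) + W1 f(x(k - tau(k))) + u(k) + sigma(k) w(k),
    with the time-varying delay tau(k) drawn from [tau_m, tau_M]."""
    x = np.zeros((N + 1, n))
    x[0] = np.array([0.5, -0.5])                 # initial condition
    hist = {k: x[0] for k in range(-tau_M, 1)}   # constant initial history
    for k in range(N):
        tau = rng.integers(tau_m, tau_M + 1)     # time-varying delay in [tau_m, tau_M]
        x_del = hist[k - tau]                    # delayed state x(k - tau(k))
        u = 0.01 * np.sin(0.1 * k) * np.ones(n)  # bounded external input
        sigma = 0.05 * x[k]                      # state-dependent diffusion term
        w = rng.standard_normal()                # scalar Brownian-motion increment
        x[k + 1] = A @ x[k] + W0 @ f(x[k]) + W1 @ f(x_del) + u + sigma * w
        hist[k + 1] = x[k + 1]
    return x

traj = simulate()
print(traj.shape)
```

With the small spectral radius of A and bounded activations, the sampled trajectory stays bounded, which is what the passivity analysis of the following sections presupposes.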
Assumption 1. The neuron activation function satisfies for all , , where and are known real constants.
Remark 2. In Assumption 1, and can be positive, negative, or zero. Moreover, when , then .
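Assumption 1 is the standard sector condition on the activation functions. As a quick numerical illustration (assuming the usual form l_i ≤ (f_i(s1) − f_i(s2))/(s1 − s2) ≤ u_i, since the inequality itself is omitted above), the sketch below checks that f(s) = tanh(s) satisfies it with l = 0 and u = 1:

```python
import numpy as np

# Sample many pairs (s1, s2) and verify the difference quotient of tanh
# lies in the sector [l, u] = [0, 1], as required by Assumption 1.
rng = np.random.default_rng(1)
s1 = rng.uniform(-5, 5, 10000)
s2 = rng.uniform(-5, 5, 10000)
mask = np.abs(s1 - s2) > 1e-3            # avoid ill-conditioned quotients
q = (np.tanh(s1[mask]) - np.tanh(s2[mask])) / (s1[mask] - s2[mask])
print(q.min(), q.max())                  # all quotients fall inside [0, 1]
```

By the mean value theorem each quotient equals sech²(c) for some c between s1 and s2, which lies in (0, 1], so the check passes for every sample.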
Assumption 3. is the continuous function satisfying where , are known constant scalars.
The following lemmas and definition will be used in the proof of the main results.
Lemma 4. For integers and vector function , , for any positive semidefinite matrix the following inequality holds:
Proof. In fact, we have Thus, one can easily obtain The proof is completed.
Remark 5. The new inequality was proposed in [5, 6] for continuous-time systems; it is worth noting that this paper is the first to extend the method to the study of discrete-time neural networks.
Lemma 6 (see [4, 13]). Let be a positive-definite matrix, ; then
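The exact statement of Lemma 6 is not reproduced above; lemmas cited in this role are typically the discrete Jensen summation inequality. Assuming that form, (Σ x_i)ᵀ M (Σ x_i) ≤ N Σ x_iᵀ M x_i for a positive-definite M, the following sketch verifies it numerically on random data:

```python
import numpy as np

# Numeric check of the discrete Jensen-type summation inequality
# (assumed form of Lemma 6): for M > 0 and vectors x_1, ..., x_N,
#   (sum_i x_i)^T M (sum_i x_i) <= N * sum_i x_i^T M x_i.
rng = np.random.default_rng(2)
n, N = 3, 7
B = rng.standard_normal((n, n))
M = B @ B.T + n * np.eye(n)        # positive-definite matrix
X = rng.standard_normal((N, n))    # x_1, ..., x_N stored as rows

s = X.sum(axis=0)
lhs = s @ M @ s
rhs = N * sum(x @ M @ x for x in X)
print(lhs <= rhs)
```

The inequality follows from the Cauchy-Schwarz inequality in the M-weighted inner product, so it holds for every sample, not just statistically.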
Lemma 7 (see [24]). Let be real matrices with appropriate dimensions, with matrix satisfying ; there exists a scalar such that
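Since the matrices in Lemma 7 are elided above, the sketch below assumes the standard norm-bounded-uncertainty form in which such lemmas are usually stated: for Fᵀ F ≤ I and any scalar ε > 0, D F E + Eᵀ Fᵀ Dᵀ ≤ ε D Dᵀ + ε⁻¹ Eᵀ E. The code checks this bound numerically:

```python
import numpy as np

# Verify D F E + (D F E)^T  <=  eps * D D^T + (1/eps) * E^T E  (assumed
# form of Lemma 7) by checking that rhs - lhs is positive semidefinite.
rng = np.random.default_rng(3)
n = 3
D = rng.standard_normal((n, n))
E = rng.standard_normal((n, n))
F0 = rng.standard_normal((n, n))
F = F0 / max(1.0, np.linalg.norm(F0, 2))   # scale so that F^T F <= I
eps = 0.7

lhs = D @ F @ E + (D @ F @ E).T
rhs = eps * D @ D.T + (1.0 / eps) * E.T @ E
gap_min_eig = np.linalg.eigvalsh(rhs - lhs).min()
print(gap_min_eig)
```

This is the inequality that converts the uncertain terms in Theorem 15 into deterministic LMI blocks; the minimal eigenvalue of the gap stays nonnegative (up to floating-point noise) for any admissible F and ε > 0.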
Lemma 8 (see [3]). For any constant matrices , , with appropriate dimensions, and a function satisfying , then if and only if
Definition 9 (see [25]). The system (1) is said to be passive if there exists a scalar satisfying for all and for all solutions of (1) with .
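The inequality in Definition 9 is omitted above; the sketch below assumes the commonly used discrete-time form, 2 Σ_k y(k)ᵀ u(k) ≥ −γ Σ_k u(k)ᵀ u(k) under zero initial conditions, and checks it empirically for a simple scalar system that is passive with γ = 0 (verifiable with the storage function V(x) = 2x², since 2 y u − ΔV = 1.5 x² ≥ 0 along trajectories):

```python
import numpy as np

# Empirically check the assumed discrete passivity inequality
#   2 * sum_k y(k) u(k)  >=  -gamma * sum_k u(k)^2
# on the scalar system x(k+1) = 0.5 x(k) + u(k), y(k) = x(k) + u(k),
# which is passive with gamma = 0 (storage function V(x) = 2 x^2).
rng = np.random.default_rng(4)

def supply_sums(u):
    x, s_yu, s_uu = 0.0, 0.0, 0.0
    for uk in u:
        y = x + uk                # output
        s_yu += y * uk            # accumulate y(k) * u(k)
        s_uu += uk * uk           # accumulate u(k)^2
        x = 0.5 * x + uk          # state update
    return s_yu, s_uu

gamma = 0.0
ok = True
for _ in range(50):               # many random input sequences
    u = rng.standard_normal(100)
    s_yu, s_uu = supply_sums(u)
    ok = ok and (2 * s_yu >= -gamma * s_uu - 1e-9)
print(ok)
```

This is exactly the kind of input-output inequality that the LMI conditions of the next section certify for system (1) without simulation.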
3. Main Results
In this section, the passivity of discrete-time stochastic neural networks with time-varying delays is investigated by means of the new integral inequality and the Lyapunov method. In this paper, some of the symmetric matrices in the Lyapunov-Krasovskii functional are not required to be positive definite.
Denote
Main results are given in the following theorems.
Theorem 10. Under Assumptions 1 and 3, the discrete-time stochastic neural network (1) is passive if there exist matrices , , , , , , , , , positive diagonal matrices , and scalars , , such that the following matrix inequalities hold: where
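The concrete blocks of the matrix inequalities in Theorem 10 are omitted in this version, so the following sketch only illustrates the generic numerical step performed when verifying conditions of this type: assemble the symmetric block matrix from candidate Lyapunov matrices and test its sign through eigenvalues. All matrices below are placeholders, not the theorem's actual blocks.

```python
import numpy as np

# Placeholder candidate matrices (illustrative only).
rng = np.random.default_rng(5)
n = 2
P = 2.0 * np.eye(n)                     # candidate Lyapunov matrix, P > 0
Q = np.eye(n)                           # candidate weighting matrix, Q > 0
S = 0.1 * rng.standard_normal((n, n))   # small off-diagonal coupling block

# Assemble a symmetric 2x2 block matrix, as done for the LMI conditions.
Xi = np.block([[-P,  S],
               [S.T, -Q]])

def is_neg_def(M, tol=0.0):
    """A symmetric matrix is negative definite iff its largest eigenvalue < 0."""
    return np.linalg.eigvalsh(M).max() < tol

print(is_neg_def(Xi))
```

In practice one would solve for P, Q, S with a semidefinite-programming solver rather than fix them by hand; the eigenvalue test above is the a-posteriori feasibility check.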
Proof. Define a new augmented Lyapunov-Krasovskii functional as follows: where Firstly, we show that the Lyapunov-Krasovskii functional is positive definite. By using Lemma 6, one can obtain Then, it follows from (19)–(21) that From condition (15), there exists a scalar , for any , such that Now, taking the forward difference of along the trajectories of system (1) yields From the new inequality of Lemma 4, one can get It is easy to get From Assumption 3 and inequality (16), we have From Assumption 1, it follows that Thus, for the diagonal matrices , one can obtain the following inequalities: Combining (25)–(30) yields where From (15)–(17), observing that , , and , one can conclude that Then, By the definition of and inequality (23), one can find that So, one has for all . This completes the proof.
Remark 11. It should be pointed out that the new inequality is introduced to deal with and , which differs substantially from traditional approaches. This method can effectively reduce the conservatism of the results.
Remark 12. In this paper, not all the matrices in the Lyapunov functional need to be positive definite. In fact, the conditions in (15) guarantee the positive definiteness of the Lyapunov functional. This differs greatly from traditional approaches to passivity analysis of discrete-time neural networks, which always require the Lyapunov matrices themselves to be positive definite.
Remark 13. It can be seen that the term is divided into two parts; the aim is to make full use of the relationship between and . Then, taking advantage of the integral inequality and Lemma 8, new passivity conditions are obtained in terms of LMIs.
Corollary 14. Under Assumptions 1 and 3, the discrete-time stochastic neural network (1) is passive if there exist scalars , , matrices , , , , , , , , , and positive diagonal matrices , such that linear matrix inequalities (16) and (17) hold.
Now, we consider the following stochastic discrete-time neural networks with time-varying delay and parameter uncertainties: where , , and denote the parameter uncertainties, which are assumed to be of the form where , are known constant matrices and is an unknown matrix-valued function subject to
Theorem 15. Under Assumptions 1 and 3, the discrete-time stochastic uncertain neural network (37) is robustly passive if there exist scalars , , , matrices , , , , , , , , , and positive diagonal matrices , such that the following matrix inequalities hold: where , are the same as defined in Theorem 10.
Proof. By replacing , , in (17) with , , , respectively, and then using Lemma 7, the desired results are obtained immediately. The proof is completed.
4. Numerical Examples
In this section, numerical examples are presented to demonstrate the effectiveness of the results obtained in this paper.
Example 1. Consider the system (1) with the following parameters: The activation functions are taken as It can be verified that In this example, if and , the optimal passivity performance obtained is by the method in [25] and by the method in [26], while by Theorem 10 in this paper, the optimal passivity performance . The comparisons of are listed in Table 1, when , . Then, when we assume , the optimal passivity performance obtained by Theorem 10 for different can be found in Table 2. It can be seen that our results are less conservative than the ones in [25, 26].
Example 2. Consider the system (1) with the following parameters: The activation functions are taken as It can be verified that For this example, when and , by Theorem 10 we can get that the upper bound of the time-varying delay is . When and , by Theorem 10 we can get ; the upper bounds of for different and are summarized in Table 3. It can be found from Table 3 that, for the same , a larger passivity performance corresponds to a larger ; for the same , a smaller passivity performance corresponds to a larger .

5. Conclusions
In this paper, the problem of passivity analysis for discrete-time stochastic neural networks with time-varying delays has been investigated. The presented sufficient conditions are based on the Lyapunov-Krasovskii functional, a new inequality, and the linear matrix inequality approach. Numerical examples are given to demonstrate the usefulness and effectiveness of the proposed results. Finally, it is worth noting that the proposed method may be applicable to many other areas, such as Markov jump neural networks, Markov jump neural networks with incomplete transition descriptions, and switched neural networks, which deserve further investigation.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (Grants nos. 61273015 and 61473001), the Natural Science Research Project of Fuyang Normal College (2013FSKJ09), and the Teaching Reform Project of Fuyang Normal College (2013JYXM48).
References
1. M. M. Gupta, L. Jin, and N. Homma, Static and Dynamic Neural Networks: From Fundamentals to Advanced Theory, John Wiley & Sons, New York, NY, USA, 2003.
2. J. J. Hopfield, “Neurons with graded response have collective computational properties like those of two-state neurons,” Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 3088–3092, 1984.
3. D. Yue, Y. Zhang, and E. Tian, “Improved global robust delay-dependent stability criteria for delayed cellular neural networks,” International Journal of Computer Mathematics, vol. 85, no. 8, pp. 1265–1277, 2008.
4. S. Xu, J. Lam, B. Zhang, and Y. Zou, “A new result on the delay-dependent stability of discrete systems with time-varying delays,” International Journal of Robust and Nonlinear Control, vol. 24, no. 16, pp. 2512–2521, 2014.
5. P.-L. Liu, “Delay-dependent global exponential robust stability for delayed cellular neural networks with time-varying delay,” ISA Transactions, vol. 52, no. 6, pp. 711–716, 2013.
6. P.-L. Liu, “A delay decomposition approach to robust stability analysis of uncertain systems with time-varying delay,” ISA Transactions, vol. 51, no. 6, pp. 694–701, 2012.
7. J. Sun and J. Chen, “Stability analysis of static recurrent neural networks with interval time-varying delay,” Applied Mathematics and Computation, vol. 221, pp. 111–120, 2013.
8. D. Zhang and L. Yu, “Exponential state estimation for Markovian jumping neural networks with time-varying discrete and distributed delays,” Neural Networks, vol. 35, pp. 103–111, 2012.
9. P. G. Park, J. W. Ko, and C. Jeong, “Reciprocally convex approach to stability of systems with time-varying delays,” Automatica, vol. 47, no. 1, pp. 235–238, 2011.
10. C. Lin, Q.-G. Wang, and T. H. Lee, “A less conservative robust stability test for linear uncertain time-delay systems,” IEEE Transactions on Automatic Control, vol. 51, no. 1, pp. 87–91, 2006.
11. R. Lu, H. Wu, and J. Bai, “New delay-dependent robust stability criteria for uncertain neutral systems with mixed delays,” Journal of the Franklin Institute, vol. 351, no. 3, pp. 1386–1399, 2014.
12. J. Cheng, H. Zhu, S. Zhong, F. Zheng, and Y. Zeng, “Finite-time filtering for switched linear systems with a mode-dependent average dwell time,” Nonlinear Analysis: Hybrid Systems, vol. 15, pp. 145–156, 2015.
13. Z. Wang, G. Wei, and G. Feng, “Reliable ${H}_{\infty}$ control for discrete-time piecewise linear systems with infinite distributed delays,” Automatica, vol. 45, no. 12, pp. 2991–2994, 2009.
14. O. M. Kwon, M. J. Park, J. H. Park, S. M. Lee, and E. J. Cha, “New criteria on delay-dependent stability for discrete-time neural networks with time-varying delays,” Neurocomputing, vol. 121, pp. 185–194, 2013.
15. J. Cheng, H. Zhu, S. Zhong, Q. Zhong, and Y. Zeng, “Finite-time ${H}_{\infty}$ estimation for discrete-time Markov jump systems with time-varying transition probabilities subject to average dwell time switching,” Communications in Nonlinear Science and Numerical Simulation, vol. 20, no. 2, pp. 571–582, 2015.
16. T. Wang, C. Zhang, S. Fei, and T. Li, “Further stability criteria on discrete-time delayed neural networks with distributed delay,” Neurocomputing, vol. 111, pp. 195–203, 2013.
17. X. Kan, H. Shu, and Z. Li, “Robust state estimation for discrete-time neural networks with mixed time-delays, linear fractional uncertainties and successive packet dropouts,” Neurocomputing, vol. 135, pp. 130–138, 2014.
18. R. Lozano, B. Brogliato, O. Egeland, and B. Maschke, Dissipative Systems Analysis and Control: Theory and Applications, Springer, London, UK, 2nd edition, 2007.
19. V. Belevitch, Classical Network Synthesis, Van Nostrand, New York, NY, USA, 1968.
20. B. Zhang, S. Xu, and J. Lam, “Relaxed passivity conditions for neural networks with time-varying delays,” Neurocomputing, vol. 142, pp. 299–306, 2014.
21. D. H. Ji, J. H. Koo, S. C. Won, S. M. Lee, and J. H. Park, “Passivity-based control for Hopfield neural networks using convex representation,” Applied Mathematics and Computation, vol. 217, no. 13, pp. 6168–6175, 2011.
22. Z.-G. Wu, J. H. Park, H. Su, and J. Chu, “New results on exponential passivity of neural networks with time-varying delays,” Nonlinear Analysis: Real World Applications, vol. 13, no. 4, pp. 1593–1599, 2012.
23. T. H. Lee, M.-J. Park, J. H. Park, O.-M. Kwon, and S.-M. Lee, “Extended dissipative analysis for neural networks with time-varying delays,” IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 10, pp. 1936–1941, 2014.
24. R. Raja, U. K. Raja, R. Samidurai, and A. Leelamani, “Passivity analysis for uncertain discrete-time stochastic BAM neural networks with time-varying delays,” Neural Computing and Applications, vol. 25, no. 3-4, pp. 751–766, 2014.
25. Q. Song, J. Liang, and Z. Wang, “Passivity analysis of discrete-time stochastic neural networks with time-varying delays,” Neurocomputing, vol. 72, no. 7–9, pp. 1782–1788, 2009.
26. Z.-G. Wu, P. Shi, H. Su, and J. Chu, “Dissipativity analysis for discrete-time stochastic neural networks with time-varying delays,” IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 3, pp. 345–355, 2013.
27. Z. Wu, P. Shi, H. Su, and J. Chu, “Stability and dissipativity analysis of static neural networks with time delay,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 2, pp. 199–210, 2012.
Copyright
Copyright © 2015 Wei Kang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.