Discrete Dynamics in Nature and Society
Volume 2009, Article ID 139671, 17 pages
https://doi.org/10.1155/2009/139671

Research Article | Open Access

New Results on Passivity Analysis of Delayed Discrete-Time Stochastic Neural Networks

Academic Editor: Guang Zhang
Received: 24 Jul 2009
Accepted: 10 Nov 2009
Published: 23 Dec 2009

Abstract

The problem of passivity analysis for a class of discrete-time stochastic neural networks (DSNNs) with time-varying interval delay is investigated. Delay-dependent sufficient criteria are derived in terms of linear matrix inequalities (LMIs). The results are shown to generalize some previous results and to be less conservative than existing works. Meanwhile, the computational complexity of the obtained stability conditions is reduced because fewer variables are involved. Two numerical examples are given to show the effectiveness and the benefits of the proposed method.

1. Introduction

Over the past decades, neural networks (NNs) have attracted considerable attention because of their extensive applications in pattern recognition, optimization solvers, model identification, signal processing, and other engineering areas [1]. Meanwhile, time delays are frequently encountered in various engineering, biological, and economic systems due to the finite switching speed of amplifiers and the inherent communication time of neurons. It has been revealed that time delay may cause instability and oscillation of neural networks [2, 3]. For these reasons, the stability problem of NNs with delays has been extensively studied; for example, see [2–10]. It is well known that delay-dependent stability conditions are generally less conservative than delay-independent conditions, especially when the size of the delay is small. Therefore, considerable attention has been focused on the derivation of delay-dependent stability results, and many effective approaches have been proposed to reduce the conservatism of such results and thereby further improve the quality of delay-dependent stability criteria.

Most NNs studied so far are assumed to act in a continuous-time manner; however, in the implementation and application of neural networks, discrete-time models become more and more important than their continuous-time counterparts [11]. Therefore, the stability analysis problem for discrete-time neural networks has received growing interest, and some stability criteria have been proposed in the literature; for example, see [11–15] and the references cited therein. Moreover, stochastic disturbances usually appear in the electrical circuit design of neural networks, and a neural network can be stabilized or destabilized by certain stochastic inputs. The delay-dependent stability problems of stochastic neural networks have been studied in some works, such as [16, 17] and the references cited therein.

On the other hand, passivity theory plays an important role in the analysis and design of linear and nonlinear delayed systems. Recently, the passivity of linear systems with delays [18–20] and the passivity of neural networks with delays [21–23] have been studied. Based on the Lyapunov-Krasovskii method and the LMI framework, the passivity properties of delayed NNs were first studied in [21]. In [23], the problem of passivity and robust passivity for a class of discrete-time stochastic neural networks with time-varying delays was studied.

In this paper, by constructing a new Lyapunov-Krasovskii functional, improved delay-dependent passivity and robust passivity criteria for DSNNs are obtained in the form of linear matrix inequalities (LMIs). It is shown that the obtained conditions are less conservative and more efficient than those in [23]. Two numerical examples are also provided to show the effectiveness of the proposed criteria.

Notation 1. Throughout this paper, stands for the set of nonnegative integers; and denote the -dimensional Euclidean space and the set of all real matrices, respectively. For a real symmetric matrix, denotes that it is positive definite (positive semidefinite). The notation (resp., ) means that and are symmetric matrices and that is positive semidefinite (resp., positive definite). is used to denote an identity matrix of proper dimension. Matrices, if not explicitly stated, are assumed to have compatible dimensions. The symmetric terms in a symmetric matrix are denoted by . The superscript represents the transpose. We use and to denote the minimum and maximum eigenvalues of a real symmetric matrix, respectively. Let be a complete probability space with a filtration satisfying the usual conditions (i.e., it is right continuous and contains all -null sets); let be the family of all -measurable -valued random variables such that , where stands for the mathematical expectation operator with respect to the given probability measure.

2. Problem Formulation and Preliminaries

Consider the uncertain DSNNs with time-varying interval delay described by

where is the neural state vector; and denote the neuron activation functions; is the output of the neural network; is the input vector; denotes the time-varying delay satisfying

and are known positive integers. , , and , where with describing the rate with which the th neuron will reset its potential to the resting state in isolation when disconnected from the networks and external inputs, is the connection weight matrix, and is the delayed connection weight matrix. , and represent the time-varying parameter uncertainties and are assumed to satisfy the following admissible condition:

where , and are known real constant matrices of appropriate dimensions; is the unknown time-varying matrix-valued function satisfying

In system (2.1), is a scalar Wiener process (Brownian motion) on with

and there exist two positive constants and such that

The initial condition of system (2.1) is given by

The activation functions in (2.1) satisfy the following assumption.

Assumption 2.1. The activation functions in (2.1) are bounded and satisfy the following: where are constants. Denote and .
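The inequality in Assumption 2.1 did not survive extraction. In the discrete-time literature this assumption draws on (e.g., [11]), the condition is usually stated in the sector-bounded form below; the symbols $k_i^-$, $k_i^+$, $K_1$, and $K_2$ are reconstructed for reference and are not taken verbatim from the source:

$$k_i^- \le \frac{f_i(s_1)-f_i(s_2)}{s_1-s_2} \le k_i^+, \qquad \forall\, s_1, s_2 \in \mathbb{R},\ s_1 \neq s_2,\ i=1,2,\ldots,n,$$

with $K_1=\operatorname{diag}(k_1^-,\ldots,k_n^-)$ and $K_2=\operatorname{diag}(k_1^+,\ldots,k_n^+)$; the constants $k_i^{\pm}$ may be positive, negative, or zero, consistent with Remark 2.2.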

Remark 2.2. As pointed out in [11], the constants in Assumption 2.1 are allowed to be positive, negative, or zero. So the assumption is weaker in comparison with those made in [2, 13], and so forth.

Definition 2.3. The delayed DSNN (2.1) is said to be passive if there exists a scalar such that
The purpose of this paper is to find the maximum allowed delay upper bound for a given lower bound such that the system described by (2.1) with uncertainties (2.3) is robustly passive.
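The inequality defining passivity in Definition 2.3 was likewise lost in extraction. In the discrete-time stochastic setting of [23], passivity is typically defined by the existence of a scalar $\gamma>0$ such that, under zero initial condition,

$$2\sum_{k=0}^{n}\mathbb{E}\{\,y^{T}(k)\,u(k)\,\}\;\ge\;-\gamma\sum_{k=0}^{n}u^{T}(k)\,u(k),\qquad \forall\, n\ge 0,$$

for all admissible input sequences $u(k)$; this standard form is assumed here only for reference.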

In obtaining the main results of this paper, the following lemmas will be useful in the proofs.

Lemma 2.4 (see [3]). Given constant matrices , where , the LMI is equivalent to the following condition:
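The matrix blocks of Lemma 2.4 were stripped in extraction. Judging by its phrasing and its role in the proofs below, the result is the standard Schur complement lemma; a usual statement, with reconstructed block symbols, is

$$S=\begin{bmatrix} S_{11} & S_{12}\\ S_{12}^{T} & S_{22}\end{bmatrix}<0 \quad\Longleftrightarrow\quad S_{22}<0 \ \text{ and } \ S_{11}-S_{12}S_{22}^{-1}S_{12}^{T}<0,$$

where $S_{11}=S_{11}^{T}$ and $S_{22}=S_{22}^{T}$.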

Lemma 2.5 (see [11]). Given matrices and with , the inequality holds for all satisfying if and only if there exists a scalar such that
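Lemma 2.5 has also lost its symbols. In this line of work it is normally the well-known bounding lemma for norm-bounded uncertainty; a usual statement, again with reconstructed symbols, is

$$\Omega+DFE+E^{T}F^{T}D^{T}<0 \ \text{ for all } F \text{ with } F^{T}F\le I \quad\Longleftrightarrow\quad \exists\,\varepsilon>0:\ \Omega+\varepsilon DD^{T}+\varepsilon^{-1}E^{T}E<0,$$

where $\Omega=\Omega^{T}$.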

3. Main Results

In this section, we present a delay-dependent criterion guaranteeing the passivity of DSNNs with time-varying delay:

Theorem 3.1. Under Assumption 2.1, given two scalars and , for any delay satisfying (2.2), system (3.1) is passive in the sense of Definition 2.3 if there exist two scalars , five positive definite matrices , three diagonal matrices , and six matrices with appropriate dimensions such that the following LMIs hold: where with
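The concrete LMI blocks of (3.2) and (3.3) were lost in extraction, so they cannot be restated here. As a purely illustrative sketch of how delay-dependent LMI conditions of this kind are checked numerically (the paper itself uses the Matlab LMI Toolbox; see Example 4.2), the following Python/CVXPY snippet tests feasibility of a simple discrete-time Lyapunov-type LMI; the matrix A and the LMI are placeholders, not the conditions of Theorem 3.1.

# Illustrative only: feasibility check of a simple discrete-time
# Lyapunov LMI  A^T P A - P < 0 with P > 0.  The data and the LMI are
# placeholders, NOT the conditions (3.2)-(3.3) of Theorem 3.1.
import numpy as np
import cvxpy as cp

A = np.array([[0.5, 0.1],
              [0.0, 0.3]])   # assumed test matrix
n = A.shape[0]
eps = 1e-6                   # small margin to enforce strict inequalities numerically

P = cp.Variable((n, n), symmetric=True)
constraints = [
    P >> eps * np.eye(n),                    # P > 0
    A.T @ P @ A - P << -eps * np.eye(n),     # A^T P A - P < 0 (CVXPY constrains the symmetric part)
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("LMI feasible:", prob.status == cp.OPTIMAL)

For the actual criteria of Theorem 3.1, the decision variables would instead be the two scalars, five positive definite matrices, three diagonal matrices, and six free matrices listed above, assembled into the block LMIs (3.2) and (3.3).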

Proof. Choose a new Lyapunov-Krasovskii functional candidate as follows: where
Defining , calculating the difference of along the trajectory of system (3.1), and taking the mathematical expectation, we have where Define , and it is easy to see that From Assumption 2.1, it follows that for For any diagonal matrices , and in Assumption 2.1, it is easy to get
On the other hand, using (3.2) Since , combining (3.9)–(3.13), we get Applying Lemma 2.4 to (3.3) yields From (3.15), we obtain for all . By the definition of , we know that for all . Thus, by Definition 2.3, the proof of Theorem 3.1 is complete.

Remark 3.2. In Theorem 3.1, the free-weighting matrices are introduced so as to reduce the conservatism of the delay-dependent results.

Remark 3.3. Since the activation functions satisfy Assumption 2.1, we know that So, then So, is nonnegative definite. The term is included in the Lyapunov functional to further reduce the conservatism, as illustrated by the example in Section 4. If is not included in (3.6), we obtain Corollary 3.4.

Corollary 3.4. Given two scalars and . Then, for any delay satisfying (2.2), system (3.1) is passive in the sense of Definition 2.3 if there exist two scalars , five positive definite matrices , two diagonal matrices , and six matrices with appropriate dimensions, such that (3.2) and the following LMI hold: where with

Next, we provide the delay-dependent robust passivity analysis for uncertain DSNN (2.1).

Theorem 3.5. Given two scalars and . Then, under Assumption 2.1, for any delay satisfying (2.2), system (3.1) is robustly passive if there exist two scalars , five positive definite matrices , three diagonal matrices , and six matrices with appropriate dimensions such that (3.3) and the following LMI hold: where with

Proof. Assume that inequality (3.25) holds; then, according to Lemma 2.4, we have Then, from Lemma 2.5, we know that LMI (3.28) is equivalent to the following inequality: It can be verified that the above inequality is exactly the left-hand side of (3.3) when , and are replaced with , and , respectively. The result then follows from Theorem 3.1.
We now consider the DSNN without the stochastic term. In this case, system (2.1) reduces to From Theorem 3.5, we can easily obtain the following corollary.

Corollary 3.6. Given two scalars and . Then, for any delay satisfying (2.2), system (3.30) is robustly passive if there exist three scalars , five positive definite matrices , three diagonal matrices , and six matrices with appropriate dimensions such that the following LMI holds: where with

4. Numerical Examples

This section presents two numerical examples that demonstrate the validity of the method described above.

Example 4.1. Consider the delayed DSNN (3.1) with the following parameters: , and the activation functions satisfy Assumption 2.1 with , , , and .

The activation functions satisfy Assumption 2.1 with , , , and .

From Tables 1 and 2, it is easy to see that the results in this paper are superior to those in [23].


Table 1: Allowable upper bounds of the time-varying delay for different lower bounds, obtained by the method of [23] and by Corollary 3.4.

Table 2: Allowable upper bounds of the time-varying delay for different lower bounds, obtained by the method of [23], by Corollary 3.4, and by Theorem 3.1.

Example 4.2. Consider the delayed uncertain DSNN (3.1) with the following parameters: , and the activation functions satisfy Assumption 2.1 with , .
If the time-varying delay satisfies , then by using the Matlab LMI Toolbox we can find a solution to the LMI (3.25) as follows: and . Therefore, by Theorem 3.5, we know that system (3.1) with the above parameters is robustly passive.
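Since the matrices of Examples 4.1 and 4.2 did not survive extraction, they cannot be reproduced here. The following Python sketch only illustrates how one might simulate a discrete-time stochastic delayed network of the generic form assumed in Section 2 and monitor the passivity sum of Definition 2.3 along a sample path; every matrix, the noise intensity, the output map, and the delay bounds below are placeholder assumptions, not the data of the examples.

# Illustrative simulation of a generic DSNN and empirical check of the
# passivity sum from Definition 2.3.  All parameters are placeholders,
# NOT the data of Example 4.1 or 4.2.
import numpy as np

rng = np.random.default_rng(0)

A  = np.diag([0.4, 0.5])                    # assumed state-decay matrix
W0 = np.array([[0.1, -0.2], [0.2, 0.1]])    # assumed connection weights
W1 = np.array([[0.1, 0.05], [-0.1, 0.1]])   # assumed delayed connection weights
C  = np.eye(2)                              # assumed output map y(k) = C f(x(k))
tau_m, tau_M = 2, 7                         # assumed delay bounds as in (2.2)
f = np.tanh                                 # sector-bounded activation (satisfies Assumption 2.1)

N = 500
x = np.zeros((N + tau_M + 1, 2))            # zero initial condition on [0, tau_M]
y_hist, u_hist = [], []

for k in range(tau_M, N + tau_M):
    tau = int(rng.integers(tau_m, tau_M + 1))     # time-varying delay in [tau_m, tau_M]
    u = 0.1 * rng.standard_normal(2)              # exogenous input
    w = rng.standard_normal()                     # scalar Wiener increment
    sigma = 0.05 * (x[k] + x[k - tau])            # assumed noise intensity (linear growth)
    x[k + 1] = A @ x[k] + W0 @ f(x[k]) + W1 @ f(x[k - tau]) + u + sigma * w
    y_hist.append(C @ f(x[k]))
    u_hist.append(u)

y_arr, u_arr = np.array(y_hist), np.array(u_hist)
lhs = 2.0 * np.sum(y_arr * u_arr)                 # 2 * sum_k y(k)^T u(k)
rhs = np.sum(u_arr * u_arr)                       # sum_k u(k)^T u(k)
gamma_emp = max(0.0, -lhs / rhs)                  # smallest gamma satisfying the sum on this run
print("empirical gamma for this sample path:", gamma_emp)

Such a simulation can only provide a sanity check on a single sample path; it is the LMI conditions of Theorems 3.1 and 3.5 that guarantee passivity for all admissible delays, inputs, and uncertainties.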

5. Conclusions

In this paper, we have considered the problem of passivity and robust passivity analysis for a class of DSNNs with time-varying delay. By choosing a new Lyapunov-Krasovskii functional, improved delay-dependent criteria have been proposed. Finally, two numerical examples have been provided to illustrate the effectiveness of the obtained results.

Acknowledgments

This work is partially supported by the Natural Science Foundation of China (Grants nos. 60874030, 60835001, and 60574006), the Qing Lan Project of the Jiangsu Higher Education Institutions of China, and the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Grants nos. 07KJB510125, 08KJD510008, and 09KJB510018).

References

1. S. Arik, “Global asymptotic stability of a class of dynamical neural networks,” IEEE Transactions on Circuits and Systems I, vol. 47, no. 4, pp. 568–571, 2000.
2. J. Cao, “Global stability conditions for delayed CNNs,” IEEE Transactions on Circuits and Systems I, vol. 48, no. 11, pp. 1330–1333, 2001.
3. X. Liao, G. Chen, and E. N. Sanchez, “Delay-dependent exponential stability analysis of delayed neural networks: an LMI approach,” Neural Networks, vol. 15, no. 7, pp. 855–866, 2002.
4. Y. He, G. P. Liu, D. Rees, and M. Wu, “Stability analysis for neural networks with time-varying interval delay,” IEEE Transactions on Neural Networks, vol. 18, no. 6, pp. 1850–1854, 2007.
5. T. Li, L. Guo, and C. Sun, “Further result on asymptotic stability criterion of neural networks with time-varying delays,” Neurocomputing, vol. 71, no. 1–3, pp. 439–447, 2007.
6. J. Yu, K. Zhang, S. Fei, and T. Li, “Simplified exponential stability analysis for recurrent neural networks with discrete and distributed time-varying delays,” Applied Mathematics and Computation, vol. 205, no. 1, pp. 465–474, 2008.
7. O. M. Kwon and J. H. Park, “Improved delay-dependent stability criterion for neural networks with time-varying delays,” Physics Letters A, vol. 373, no. 5, pp. 529–535, 2009.
8. T. Li and S. M. Fei, “Stability analysis of Cohen-Grossberg neural networks with time-varying and distributed delays,” Neurocomputing, vol. 71, no. 4–6, pp. 1069–1081, 2008.
9. O. M. Kwon and J. H. Park, “New delay-dependent robust stability criterion for uncertain neural networks with time-varying delays,” Applied Mathematics and Computation, vol. 205, no. 1, pp. 417–427, 2008.
10. J. Yu, K. Zhang, and S. Fei, “Mean square exponential stability of generalized stochastic neural networks with time-varying delays,” Asian Journal of Control, vol. 11, pp. 633–642, 2009.
11. Y. R. Liu, Z. D. Wang, and X. H. Liu, “Robust stability of discrete-time stochastic neural networks with time-varying delays,” Neurocomputing, vol. 71, no. 4–6, pp. 823–833, 2008.
12. S. Mohamad and K. Gopalsamy, “Exponential stability of continuous-time and discrete-time cellular neural networks with delays,” Applied Mathematics and Computation, vol. 135, no. 1, pp. 17–38, 2003.
13. W. H. Chen, X. M. Lu, and D. Y. Liang, “Global exponential stability for discrete-time neural networks with variable delays,” Physics Letters A, vol. 358, no. 3, pp. 186–198, 2006.
14. B. Y. Zhang, S. Y. Xu, and Y. Zou, “Improved delay-dependent exponential stability criteria for discrete-time recurrent neural networks with time-varying delays,” Neurocomputing, vol. 72, pp. 321–330, 2008.
15. Z. Liu, S. Lv, S. Zhong, and M. Ye, “New improved exponential stability criteria for discrete-time neural networks with time-varying delay,” Discrete Dynamics in Nature and Society, vol. 2009, Article ID 874582, 23 pages, 2009.
16. Y. Sun and J. Cao, “Stabilization of stochastic delayed neural networks with Markovian switching,” Asian Journal of Control, vol. 10, no. 3, pp. 327–340, 2008.
17. W.-H. Chen and X. Lu, “Mean square exponential stability of uncertain stochastic delayed neural networks,” Physics Letters A, vol. 372, no. 7, pp. 1061–1069, 2008.
18. S.-I. Niculescu and R. Lozano, “On the passivity of linear delay systems,” IEEE Transactions on Automatic Control, vol. 46, no. 3, pp. 460–464, 2001.
19. B. T. Cui and M. G. Hua, “Robust passive control for uncertain discrete-time systems with time-varying delays,” Chaos, Solitons & Fractals, vol. 29, no. 2, pp. 331–341, 2006.
20. H. Gao, T. Chen, and T. Chai, “Passivity and passification for networked control systems,” SIAM Journal on Control and Optimization, vol. 46, no. 4, pp. 1299–1322, 2007.
21. C. G. Li and X. F. Liao, “Passivity analysis of neural networks with time delay,” IEEE Transactions on Circuits and Systems II, vol. 52, no. 8, pp. 471–475, 2005.
22. W. H. Chen, Z. H. Guan, and X. M. Lu, “Passivity control synthesis for uncertain Markovian jump system with multiple mode-dependent time-delays,” Asian Journal of Control, vol. 7, no. 2, pp. 135–143, 2005.
23. Q. Song, J. Liang, and Z. Wang, “Passivity analysis of discrete-time stochastic neural networks with time-varying delays,” Neurocomputing, vol. 72, no. 7–9, pp. 1782–1788, 2009.

Copyright © 2009 Jianjiang Yu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

