Mathematical Problems in Engineering
Volume 2013, Article ID 486257, 10 pages
http://dx.doi.org/10.1155/2013/486257
Research Article

Exponential Stability Results of Discrete-Time Stochastic Neural Networks with Time-Varying Delays

Department of Electronics and Information Engineering, Shunde Polytechnic, Foshan 528300, China

Received 31 January 2013; Accepted 7 April 2013

Academic Editor: Weihai Zhang

Copyright © 2013 Yajun Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

An innovative stability analysis approach for a class of discrete-time stochastic neural networks (DSNNs) with time-varying delays is developed. By constructing a novel piecewise Lyapunov-Krasovskii functional candidate, a new sum inequality is presented that handles the sum terms without ignoring any useful terms. Model transformation is no longer needed, and free-weighting matrices are added to reduce the conservatism in the derivation of our results, so improved computational efficiency can be expected. Numerical examples and simulations are also given to show the effectiveness and reduced conservatism of the proposed criteria.

1. Introduction

In the past decades, neural networks (NNs) have attracted considerable attention due to their potential applications in associative memory, pattern recognition, optimization, signal processing, and so forth [1–3]. It is well known that stability is one of the preconditions in the design of neural networks. For example, if a neural network is employed to solve some optimization problem, it is highly desirable for the NN to have a unique, globally stable equilibrium. Therefore, stability analysis of NNs is a very important issue and has been studied extensively [4–11]. It is worth noting that most NNs have been analyzed using continuous-time models. However, when continuous-time networks are implemented for computer-based simulation, experimentation, or computation, it is necessary to discretize them to formulate a discrete-time system. Under mild or no restrictions on the discretization step size, the dynamic characteristics of the continuous-time counterpart can be inherited by the discrete-time analogue to a certain extent, and the discrete-time model also preserves some other properties of the continuous-time system.

On the other hand, as a result of the finite switching speed of amplifiers and the inherent communication time of neurons, time delays are frequently encountered in electronic implementations of neural networks. Time delays can markedly change the dynamic behaviors of neural networks and are very often a source of instability, oscillation, and poor performance. Therefore, stability analysis of neural networks with time delays has been studied extensively during the past years; see [12–14] and the references therein. In practice, when modeling real neural systems, stochastic disturbances are probably among the main sources of undesirable behaviors of neural networks. It has been proved that certain stochastic inputs can make a neural network unstable. Therefore, it is necessary to take into account both time delays and stochastic external fluctuations when modeling neural networks.

Recently, stochastic discrete-time systems have been studied extensively. The work [15] presented a necessary and sufficient condition for the existence of a control, transforming the controller design into solving coupled matrix-valued equations for discrete-time systems with state- and disturbance-dependent noise. In [16], the robust filtering analysis and synthesis of nonlinear stochastic systems with state- and exogenous-disturbance-dependent noise are presented. Instead of solving the Hamilton-Jacobi inequalities, an algorithm more convenient for practical applications is given by solving several linear matrix inequalities, and a few examples show the effectiveness of the proposed methods. In [17], based on a nonlinear stochastic bounded real lemma and an exponential estimate formula, an exponential mean square filter design for nonlinear stochastic time-delay systems is presented via solving a Hamilton-Jacobi inequality.

But as pointed out in some previous articles, the discretization cannot preserve the dynamics of the continuous-time system even for a small sampling period. Therefore, the study of the dynamics of discrete-time neural networks is crucially needed; see [18–22] and the references therein.

Based on the discussions previously mentioned, the problem of stability analysis for discrete stochastic neural networks (DSNNs) with time-varying delays has been investigated recently. In [23], the problem of exponential stability analysis for uncertain discrete-time stochastic neural networks with time-varying delays is investigated by utilizing the Lyapunov-Krasovskii method, converting the addressed stability analysis problem into a convex optimization problem. In [24], combining the free-weighting-matrix method with a new Lyapunov-Krasovskii functional candidate, a delay-dependent stability condition has been obtained, which proves to be less conservative than that of [23]. In [25], a new Lyapunov-Krasovskii functional candidate and the delay partition idea are used to solve the problem of asymptotic stability analysis in the mean square for a class of DSNNs with time-varying delay; the stability analysis problem is converted into a feasibility problem of LMIs, and a numerical example is provided to show the usefulness of the proposed condition.

In [26], the global exponential stability problem for a class of discrete-time uncertain stochastic neural networks with time-varying delays is studied, and an improved result is obtained.

In [27], the midpoint of the time delay variation interval is introduced, and the variation interval is divided into two subintervals; a new Lyapunov-Krasovskii functional candidate is constructed, the variation in the two subintervals is checked by LMIs, some novel delay-dependent stability criteria for the addressed neural networks are derived, and less conservatism is obtained.

In [28], a new Lyapunov functional candidate with the idea of delay partitioning is introduced; the effects of both the variation range and the distribution probability of the time delay are taken into account at the same time; the time-varying delay is characterized by introducing a Bernoulli stochastic variable; the distribution probability of the time delay is translated into parameter matrices of the transformed DSNN model, so the conservatism has been reduced further. However, one of the main issues in the stability criteria is how to reduce the possible conservatism induced by the introduction of the Lyapunov-Krasovskii functional candidate when dealing with the time delay, which leaves much room for further research using the latest analysis techniques.

In this paper, we develop an innovative stability analysis approach for a class of discrete-time stochastic neural networks with time-varying delays. By constructing a novel piecewise Lyapunov-Krasovskii functional, a new sum inequality is presented to deal with the sum terms without ignoring any useful terms, and no model transformation is needed in the derivation of our results. All results are expressed in the form of LMIs, whose feasibility can be easily checked by using the numerically efficient Matlab LMI toolbox, and no tuning of parameters is required, so improved computational efficiency can be expected. Numerical examples are also given to show the effectiveness and reduced conservatism of the proposed criteria.

Notation. Throughout this paper, if not explicitly stated, matrices are assumed to have compatible dimensions. The notation \(X > 0\) (\(X \ge 0\), \(X < 0\), \(X \le 0\)) means that the symmetric matrix \(X\) is positive definite (positive semidefinite, negative definite, negative semidefinite). The superscript \(T\) stands for the transpose of a matrix; the shorthand \(\mathrm{diag}\{\cdots\}\) denotes a block diagonal matrix; \(\|\cdot\|\) represents the Euclidean norm of a vector or the spectral norm of a matrix; \(\lambda_{\max}(\cdot)\) and \(\lambda_{\min}(\cdot)\) denote the maximal and minimal eigenvalues of a matrix, respectively. \(I\) refers to an identity matrix of appropriate dimensions, \(\mathbb{E}\{\cdot\}\) stands for the mathematical expectation, and \(*\) denotes the symmetric terms in a symmetric matrix. Sometimes, the arguments of a function will be omitted in the analysis when no confusion can arise.

2. System Description

Consider the following n-neuron discrete stochastic neural network with mixed time-varying delays: where is the neuron state vector and denotes the neuron activation function, with . and are the connection weight matrix and the delayed connection weight matrix, respectively; represents the transmission time-varying delay; is a continuous function; and is a Brownian motion defined on the complete probability space with , .
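The display equation of the model did not survive extraction. As an assumption (the symbol names below are our labels, consistent with the surrounding prose but not necessarily the paper's), a DSNN of the type studied in the cited works [23, 25] takes the form

```latex
x(k+1) = A\,x(k) + B\,f\bigl(x(k)\bigr) + C\,g\bigl(x(k-\tau(k))\bigr)
  + \sigma\bigl(k,\, x(k),\, x(k-\tau(k))\bigr)\,\omega(k),
```

where \(A\) is the state feedback matrix, \(B\) and \(C\) are the connection and delayed connection weight matrices, \(\tau(k)\) is the transmission time-varying delay, \(\sigma(\cdot)\) is the continuous (noise intensity) function, and \(\omega(k)\) is the Brownian motion mentioned in the text.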

For further discussion, we introduce the following assumptions and lemmas.

Assumption 1. There exist two positive constants and such that
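The inequality of Assumption 1 is missing from the extracted text; in the standard interval-delay setting (with \(\tau_1, \tau_2\) as our labels for the two positive constants), it reads

```latex
\tau_1 \le \tau(k) \le \tau_2 .
```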

Assumption 2. For , the neuron activation functions in the DSNNs in (1) satisfy where are some constants.
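The sector condition itself is also missing from the extraction; the form commonly used in the cited works (the bounds \(l_i^-, l_i^+\) are our labels for the constants referred to in Assumption 2) is

```latex
l_i^{-} \;\le\; \frac{f_i(s_1) - f_i(s_2)}{s_1 - s_2} \;\le\; l_i^{+},
\qquad \forall\, s_1 \ne s_2,\; i = 1, \dots, n .
```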

Remark 3. Assumption 2 previously mentioned on the activation function has been widely used in many papers; see, for example, [16–21, 23–26].

Lemma 4 (see [27]). For any symmetric constant matrix , , integers , and vector-valued function , one has
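The inequality of Lemma 4 is missing from the extracted text; the discrete Jensen inequality referred to in Remark 5 is standardly stated (in our notation) as

```latex
\Bigl(\sum_{i=a}^{b} x(i)\Bigr)^{T} M \Bigl(\sum_{i=a}^{b} x(i)\Bigr)
\;\le\; (b - a + 1) \sum_{i=a}^{b} x^{T}(i)\, M\, x(i),
```

for a symmetric matrix \(M \ge 0\), integers \(b \ge a\), and a vector-valued function \(x(\cdot)\).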

Remark 5. Lemma 4 is called the discrete Jensen inequality, which is a very important tool for obtaining the main results in this paper. This lemma has been used in the literature, for example, in [28, 29].

Lemma 6 (see [30]). For any constant matrix , a scalar , a positive integer, a time-varying delay , and vector function such that the following sums are well defined, the following inequalities hold: Furthermore, if then where and ,

Remark 7. Lemma 6 is given and proved in [29], and it is an effective method to reduce the conservatism when studying the time-delay stability problem for discrete-time systems; see the literature mentioned previously [30]. Our main study is based on Lemma 6. It is worth mentioning that if and in the proof of (6) are directly ignored, then the following inequality can be derived:
Compared with the literature [12–18], none of the useful terms is ignored in (5); what is more, (5) provides a tighter bound for the sum terms than those based on (10). However, since (6) involves the time-varying delay terms and , it cannot be directly solved with the MATLAB LMI toolbox. To tackle this problem, (6) is used to transfer the time-varying matrix inequality into a set of solvable LMIs. That is, the additional information and in (10) is effectively expressed by (7), and then less conservative results can be expected.

3. Main Results

For convenience of presentation, we use the following notations:

In this section, a new delay-dependent stability criterion is proposed for system (1) with time-varying delay satisfying (3); sufficient conditions for stability are given as follows in Theorem 8.

Theorem 8. For given and , diagonal matrices , and , the system (1) is said to be exponentially stable if there exist a scalar , diagonal matrices , and symmetric matrices such that the following LMIs (12), (13), and (14) hold for : where where The other terms in have the same expressions as those in .

Proof. Define ,.
Construct the following Lyapunov-Krasovskii functional candidates: where where if is an integer, then , else .
By calculating the difference of along the solution of the system (1), and taking the mathematical expectation, we have where From Assumption 2, we can obtain that So we can get that
Now, we are in a position to prove that holds for both and .
Case I (when ). Using Lemmas 4 and 6 to deal with the second sum terms on the right side of (25), we have where
At the same time, for any matrices of appropriate dimensions, we have where
In addition, it can be deduced from Assumption 2 that there exist two positive diagonal matrices such that
By substituting (22)–(25) into (19), adding (30) and (32) into the right side of (19), and using (26) and (27), we can get that where and ,
So (12) and (13) imply that ; from Lemma 6, (35) guarantees that , so there exists a positive scalar such that
Case II (when ). Keeping (26) and by utilizing Lemmas 4 and 6 to deal with the accumulative items of (27) and (28), we can get where
So the whole difference of the Lyapunov-Krasovskii functional candidate is given as follows: where , and . For and , we can also get that there exists a positive scalar such that Combining Cases I and II, we can conclude that (12), (13), and (14) guarantee that Defining a new function and then using an analysis similar to that of Theorem 1 in [18], we can easily get that the system (1) is globally exponentially stable in the mean square sense. This completes the proof.

Remark 9. It can be seen from the proof previously mentioned that no model transformation has been employed to deal with the sum terms, and none of the useful items are ignored in the proof.
If we neglect the effect of the stochastic term in (1), then and (1) will reduce to For system (42), we can obtain the following corollary based on Theorem 8.
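The display equation for the reduced system (42) is likewise missing. Under the same assumed labeling as before, dropping the diffusion term (\(\sigma \equiv 0\)) leaves the deterministic delayed network

```latex
x(k+1) = A\,x(k) + B\,f\bigl(x(k)\bigr) + C\,g\bigl(x(k - \tau(k))\bigr).
```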

Corollary 10. For given , , and and diagonal matrices , , and , the system (42) is said to be exponentially stable if there exist diagonal matrices and symmetric positive definite matrices such that (12) and the following LMIs hold for :

Remark 11. The system (42) has been studied by many researchers; many stability criteria have been proposed, and many improved analysis results have been obtained; see [20–22].

4. Numerical Example

In this section, three examples are given to demonstrate the benefits of the proposed method.

Example 1. Consider the following stochastic discrete neural networks [25, 26]: with the following parameters:
So it can be verified that
For , by using Matlab LMI toolbox, the maximum allowable value is . Setting and in Theorem 8, we can solve a set of feasible solutions for the LMIs (12), (13) and (14) which are listed as follows:
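The LMIs (12)–(14) themselves are not reproduced in this extracted text, so they cannot be restated here. As a rough illustration of the kind of feasibility check the Matlab LMI toolbox performs, the following pure-Python sketch solves the much simpler delay-free discrete Lyapunov equation A^T P A - P = -Q and tests P for positive definiteness, which certifies Schur stability of x(k+1) = A x(k). The matrix values are illustrative assumptions, not the paper's parameters.

```python
def kron(A, B):
    """Kronecker product of two square matrices (lists of lists)."""
    n, m = len(A), len(B)
    K = [[0.0] * (n * m) for _ in range(n * m)]
    for i in range(n):
        for j in range(n):
            for p in range(m):
                for q in range(m):
                    K[i * m + p][j * m + q] = A[i][j] * B[p][q]
    return K

def solve(M, b):
    """Solve M x = b by Gaussian elimination with partial pivoting."""
    n = len(M)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        for r in range(c + 1, n):
            f = aug[r][c] / aug[c][c]
            for k in range(c, n + 1):
                aug[r][k] -= f * aug[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][k] * x[k] for k in range(r + 1, n))) / aug[r][r]
    return x

def lyapunov_check(A, Q):
    """Solve A^T P A - P = -Q; P > 0 certifies Schur stability (2x2 case)."""
    n = len(A)
    At = [[A[j][i] for j in range(n)] for i in range(n)]
    K = kron(At, At)                  # vec(A^T P A) = (A^T kron A^T) vec(P)
    for i in range(n * n):
        K[i][i] -= 1.0                # (A^T kron A^T - I) vec(P) = -vec(Q)
    q = [-Q[i][j] for i in range(n) for j in range(n)]
    p = solve(K, q)
    P = [[p[i * n + j] for j in range(n)] for i in range(n)]
    # Positive leading principal minors <=> P positive definite (n = 2)
    pd = P[0][0] > 0 and P[0][0] * P[1][1] - P[0][1] * P[1][0] > 0
    return P, pd

A = [[0.5, 0.1], [0.0, 0.3]]          # assumed Schur-stable test matrix
P, pd = lyapunov_check(A, [[1.0, 0.0], [0.0, 1.0]])
print(pd)  # True: a positive definite P exists, so A is Schur stable
```

A genuine reproduction of Theorem 8 would instead pass the delay-dependent LMIs (12)–(14) to a semidefinite programming solver; this sketch only conveys the feasibility-checking workflow.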
At the same time, we set ; the state curves shown in Figure 1 can then be obtained by using Matlab simulation software. From Figure 1, we can see that the system (1) is asymptotically stable.
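The Matlab code behind Figure 1 is not included in the text. A rough Python stand-in (all parameter values, the delay pattern, and the tanh activation are assumptions for illustration, and the stochastic term is dropped so the run is deterministic) shows a delayed discrete-time network state decaying toward the origin, in the spirit of Figure 1:

```python
import math

def simulate(steps=100):
    """Iterate x(k+1) = A x(k) + B f(x(k)) + C f(x(k - tau(k))) with f = tanh
    and a delay tau(k) alternating between 2 and 4 (interval delay [2, 4])."""
    A = [[0.4, 0.0], [0.0, 0.3]]        # state matrix (assumed)
    B = [[0.10, -0.05], [0.02, 0.10]]   # connection weights (assumed)
    C = [[0.05, 0.01], [-0.02, 0.05]]   # delayed connection weights (assumed)
    tau_max = 4
    hist = [[1.0, -0.8] for _ in range(tau_max + 1)]  # constant initial segment

    def f(v):
        return [math.tanh(s) for s in v]

    def mv(M, v):
        return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

    for k in range(steps):
        tau = 2 if k % 2 == 0 else 4    # time-varying delay within [2, 4]
        x, xd = hist[-1], hist[-1 - tau]
        ax, bf, cf = mv(A, x), mv(B, f(x)), mv(C, f(xd))
        hist.append([ax[i] + bf[i] + cf[i] for i in range(2)])
    return hist[-1]

x_final = simulate()
print(max(abs(s) for s in x_final) < 1e-2)  # True: the trajectory decays to the origin
```

With these (contractive) assumed weights the state converges regardless of the delay pattern, matching the qualitative behavior the text describes for Figure 1.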
In our study, our purpose is to compare the maximum upper bound for different . Now assume different lower bounds ; by solving the LMIs (12), (13), and (14), we can get the maximum upper bounds of the delay, which are listed in Table 1. From Table 1, we can see that Theorem 8 is less conservative than the criteria proposed in [25, 26].

Table 1: Allowable upper bound of the delay for various lower bounds.
Figure 1: State trajectories of the system in Example 1.

Remark 12. According to Theorem 8, we can conclude that system (1) with the previous parameters is mean square exponentially stable, and meanwhile it is very easy to check with the Matlab LMI Toolbox that our results improve the conclusions in [26]. If we neglect the uncertainty effect and take , then, applying the criteria in [26], the maximum value of for mean square exponential stability of system (1) is 55. By using Theorem 8 of this paper and taking the value 62 in LMIs (12), (13), and (14), we find that LMIs (12), (13), and (14) are feasible, which shows that our result is less conservative than that of [26] under identical conditions.

Example 2. Consider the neural networks (1) with the following parameters [23]:
Take the activation function as follows: From the previous parameters, it can be verified that
For Corollary 1 in [23] with the previous parameters, we find that the conditions are unsolvable, but our criterion in Theorem 8 is solvable. By virtue of the Matlab LMI Toolbox, we can obtain the feasible solutions as follows:

Example 3. Consider the neural networks (1) with the following parameters [23, 27]:
Take the activation function as follows:
From the previous parameters, it can be verified that
For this example, the conditions in [23] cannot be satisfied. For , it has been verified in [27] that the maximum allowable time delay for system (42) is . By letting and in Example 3, we find that the LMIs for system (42) are feasible, which implies that the exponential stability result proposed in Corollary 10 of this paper provides less conservatism than those in [23, 27].

5. Conclusion

An effective sum inequality has been introduced to derive delay-dependent stability criteria for a class of discrete stochastic neural network systems with an interval time-varying delay. By choosing a piecewise Lyapunov-Krasovskii functional candidate and employing the proposed sum inequalities, significant performance improvement has been achieved while noticeably reducing the number of scalar decision variables in the LMIs. All results are given in the form of LMIs. Numerical examples show that the obtained results are less conservative than some existing ones in the literature.

References

  1. S. Arik, “An analysis of exponential stability of delayed neural networks with time varying delays,” Neural Networks, vol. 17, no. 7, pp. 1027–1031, 2004.
  2. J. Cao and M. Xiao, “Stability and Hopf bifurcation in a simplified BAM neural network with two time delays,” IEEE Transactions on Neural Networks, vol. 18, no. 2, pp. 416–430, 2007.
  3. J. Cao, K. Yuan, and H. X. Li, “Global asymptotical stability of recurrent neural networks with multiple discrete delays and distributed delays,” IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1646–1651, 2006.
  4. X. Lou, Q. Ye, and B. Cui, “Exponential stability of genetic regulatory networks with random delays,” Neurocomputing, vol. 73, no. 4–6, pp. 759–769, 2010.
  5. H. Yang, T. Chu, and C. Zhang, “Exponential stability of neural networks with variable delays via LMI approach,” Chaos, Solitons and Fractals, vol. 30, no. 1, pp. 133–139, 2006.
  6. H. Huang, Y. Qu, and H. X. Li, “Robust stability analysis of switched Hopfield neural networks with time-varying delay under uncertainty,” Physics Letters A, vol. 345, no. 4–6, pp. 345–354, 2005.
  7. C. Song, H. Gao, and W. X. Zheng, “A new approach to stability analysis of discrete-time recurrent neural networks with time-varying delay,” Neurocomputing, vol. 72, no. 10–12, pp. 2563–2568, 2009.
  8. J. Cao and J. Wang, “Global asymptotic and robust stability of recurrent neural networks with time delays,” IEEE Transactions on Circuits and Systems I, vol. 52, no. 2, pp. 417–426, 2005.
  9. S. Xu, J. Lam, and D. W. C. Ho, “A new LMI condition for delay-dependent asymptotic stability of delayed Hopfield neural networks,” IEEE Transactions on Circuits and Systems II, vol. 53, no. 3, pp. 230–234, 2006.
  10. Y. Zhang, D. Yue, and E. Tian, “New stability criteria of neural networks with interval time-varying delay: a piecewise delay method,” Applied Mathematics and Computation, vol. 208, no. 1, pp. 249–259, 2009.
  11. Z. Wang, Y. Liu, and X. Liu, “On global asymptotic stability of neural networks with discrete and distributed delays,” Physics Letters A, vol. 345, no. 4–6, pp. 299–308, 2005.
  12. H. Shao and Q. L. Han, “New stability criteria for linear discrete-time systems with interval-like time-varying delays,” IEEE Transactions on Automatic Control, vol. 56, no. 3, pp. 619–625, 2011.
  13. Z. Wu, H. Su, J. Chu, and W. Zhou, “Improved delay-dependent stability condition of discrete recurrent neural networks with time-varying delays,” IEEE Transactions on Neural Networks, vol. 21, no. 4, pp. 692–697, 2010.
  14. Q. Song, J. Liang, and Z. Wang, “Passivity analysis of discrete-time stochastic neural networks with time-varying delays,” Neurocomputing, vol. 72, no. 7–9, pp. 1782–1788, 2009.
  15. W. Zhang, Y. Huang, and H. Zhang, “Stochastic H2/H∞ control for discrete-time systems with state and disturbance dependent noise,” Automatica, vol. 43, no. 3, pp. 513–521, 2007.
  16. W. Zhang, B. S. Chen, and C. S. Tseng, “Robust H∞ filtering for nonlinear stochastic systems,” IEEE Transactions on Signal Processing, vol. 53, no. 2, pp. 589–598, 2005.
  17. W. H. Zhang, G. Feng, and Q. H. Li, “Robust H∞ filtering for general nonlinear stochastic state-delayed systems,” Mathematical Problems in Engineering, vol. 2012, Article ID 231352, 15 pages, 2012.
  18. X. Wei, D. Zhou, and Q. Zhang, “On asymptotic stability of discrete-time non-autonomous delayed Hopfield neural networks,” Computers and Mathematics with Applications, vol. 57, no. 11-12, pp. 1938–1942, 2009.
  19. S. Hu and J. Wang, “Global robust stability of a class of discrete-time interval neural networks,” IEEE Transactions on Circuits and Systems I, vol. 53, no. 1, pp. 129–138, 2006.
  20. X. L. Zhu, Y. Wang, and G. H. Yang, “New delay-dependent stability results for discrete-time recurrent neural networks with time-varying delay,” Neurocomputing, vol. 72, no. 13–15, pp. 3376–3383, 2009.
  21. W. Xiong and J. Cao, “Global exponential stability of discrete-time Cohen-Grossberg neural networks,” Neurocomputing, vol. 64, no. 1–4, pp. 433–446, 2005.
  22. L. Wang and Z. Xu, “Sufficient and necessary conditions for global exponential stability of discrete-time recurrent neural networks,” IEEE Transactions on Circuits and Systems I, vol. 53, no. 6, pp. 1373–1380, 2006.
  23. Y. Liu, Z. Wang, and X. Liu, “Robust stability of discrete-time stochastic neural networks with time-varying delays,” Neurocomputing, vol. 71, no. 4–6, pp. 823–833, 2008.
  24. Q. Song and Z. Wang, “A delay-dependent LMI approach to dynamics analysis of discrete-time recurrent neural networks with time-varying delays,” Physics Letters A, vol. 368, no. 1-2, pp. 134–145, 2007.
  25. Y. Ou, H. Liu, Y. Si, and Z. Feng, “Stability analysis of discrete-time stochastic neural networks with time-varying delays,” Neurocomputing, vol. 73, no. 4–6, pp. 740–748, 2010.
  26. M. Luo, S. Zhong, R. Wang, and W. Kang, “Robust stability analysis for discrete-time stochastic neural networks systems with time-varying delays,” Applied Mathematics and Computation, vol. 209, no. 2, pp. 305–313, 2009.
  27. Y. Zhang, S. Xu, and Z. Zeng, “Novel robust stability criteria of discrete-time stochastic recurrent neural networks with time delay,” Neurocomputing, vol. 72, no. 13–15, pp. 3343–3351, 2009.
  28. Y. Zhang, D. Yue, and E. Tian, “Robust delay-distribution-dependent stability of discrete-time stochastic neural networks with time-varying delay,” Neurocomputing, vol. 72, no. 4–6, pp. 1265–1273, 2009.
  29. X. M. Zhang and Q. L. Han, “A new finite sum inequality approach to delay-dependent H∞ control of discrete-time systems with time-varying delay,” International Journal of Robust and Nonlinear Control, vol. 18, no. 6, pp. 630–647, 2008.
  30. C. Peng, “Improved delay-dependent stabilisation criteria for discrete systems with a new finite sum inequality,” IET Control Theory & Applications, vol. 6, no. 3, pp. 448–453, 2012.