Research Article  Open Access
Improved Stability Criteria of Static Recurrent Neural Networks with a Time-Varying Delay
Abstract
This paper investigates the stability of static recurrent neural networks (SRNNs) with a time-varying delay. Based on the complete delay-decomposing approach and the quadratic separation framework, a novel Lyapunov-Krasovskii functional is constructed. By employing a reciprocally convex technique to consider the relationship between the time-varying delay and its varying interval, some improved delay-dependent stability conditions are presented in terms of linear matrix inequalities (LMIs). Finally, a numerical example is provided to show the merits and the effectiveness of the proposed methods.
1. Introduction
During the past decades, recurrent neural networks (RNNs) have been successfully applied in many fields, such as signal processing, pattern classification, associative memory design, and optimization. Therefore, the study of RNNs has attracted considerable attention, and various issues of neural networks have been investigated (see, e.g., [1–4] and the references therein). Since integration and communication delays are unavoidable in the implementation of RNNs and are often the main source of instability and oscillations, much effort has been devoted to the stability of RNNs with time delays (see, e.g., [5–14]).
RNNs can be classified into local field networks and static neural networks according to the choice of basic variables (local field states or neuron states) [15]. Recently, the stability of static recurrent neural networks (SRNNs) with a time-varying delay was investigated in [16], where sufficient conditions guaranteeing the global asymptotic stability of the neural network were obtained. Nevertheless, some negative semi-definite terms were ignored in [16], which led to conservatism in the derived result. By retaining these terms and considering the lower bound of the delay, some improved stability conditions were derived for SRNNs with an interval time-varying delay in [17]. In [18], an input-output framework was proposed to investigate the stability of SRNNs with linear fractional uncertainties and delays. Based on the augmented Lyapunov-Krasovskii functional approach, some new conditions were derived to ensure the stability of SRNNs in [19–22], but the results can be further improved.
In this paper, the stability of SRNNs with a time-varying delay is investigated based on the complete delay-decomposing approach [12]. By employing a reciprocally convex technique, some sufficient conditions are derived in the form of linear matrix inequalities (LMIs). Their effectiveness and merits are illustrated by a numerical example.
Notations. Throughout this paper, $A^{T}$ and $A^{-1}$ stand for the transpose and the inverse of the matrix $A$, respectively; $P>0$ ($P\geq 0$) means that the matrix $P$ is symmetric and positive definite (positive semi-definite); $\mathbb{R}^{n}$ denotes the $n$-dimensional Euclidean space; $\mathrm{diag}\{\cdots\}$ denotes a block-diagonal matrix; $\|\cdot\|$ is the Euclidean norm in $\mathbb{R}^{n}$; the symbol $*$ within a matrix represents the symmetric terms of the matrix. Matrices, if not explicitly stated, are assumed to have compatible dimensions.
2. System Description
Consider the following delayed neural network: where and denote the neuron state vector and the input vector, respectively; is the neuron activation function; is the initial condition; and are known interconnection weight matrices; and is the time-varying delay and satisfies

Furthermore, the neuron activation functions satisfy the following assumption.
Assumption 1. The neuron activation functions are bounded and satisfy where for . For simplicity, denote .
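As an illustrative aside (not part of the original analysis), the sector condition of Assumption 1 can be spot-checked numerically for a concrete activation function. The sketch below uses tanh, for which the bounds 0 and 1 on the difference quotient are the standard choice; both the function and the bounds are assumptions for illustration, not the paper's data.

```python
import math

def sector_ratio(f, a, b):
    """Difference quotient (f(a) - f(b)) / (a - b) appearing in Assumption 1."""
    return (f(a) - f(b)) / (a - b)

# For f = tanh the quotient always lies in [k_minus, k_plus] = [0, 1],
# so the sector condition holds with these (assumed) bounds.
k_minus, k_plus = 0.0, 1.0
pairs = [(-2.0, 1.5), (0.3, -0.7), (4.0, 3.9), (-1.0, 2.0)]
for a, b in pairs:
    r = sector_ratio(math.tanh, a, b)
    assert k_minus <= r <= k_plus, (a, b, r)
print("tanh satisfies the sector bounds [0, 1] on the sampled pairs")
```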
Under Assumption 1, there exists an equilibrium of (1). Hence, by the transformation , (1) can be transformed into where is the state vector; is the initial condition; and the transformed neuron activation functions satisfy
Notice that there exists an equilibrium point in neural network (5), corresponding to the initial condition . Based on the analysis above, the problem of analyzing the stability of system (1) at the equilibrium is changed into the problem of analyzing the stability of the zero solution of system (5).
Before presenting our main results, we first introduce two lemmas, which are useful in the stability analysis of the considered neural network.
Lemma 2 (see [23]). Let be a constant real matrix, and suppose with such that the subsequent integration is well defined. Then, one has where .
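Lemma 2 is a Jensen-type integral inequality. As a hedged numeric aside (not part of the proof machinery), its scalar form, (integral of w)·r·(integral of w) ≤ h·(integral of w·r·w) over an interval of length h with r > 0, can be spot-checked with a simple Riemann sum; the integrands and parameters below are illustrative assumptions.

```python
import math

def jensen_gap(w, h, r=2.0, n=10000):
    """Right side minus left side of the scalar Jensen-type inequality,
    approximated with a left Riemann sum on [0, h]; nonnegative if it holds."""
    dt = h / n
    ts = [i * dt for i in range(n)]
    integral_w = sum(w(t) for t in ts) * dt
    integral_quad = sum(w(t) * r * w(t) for t in ts) * dt
    lhs = integral_w * r * integral_w
    rhs = h * integral_quad
    return rhs - lhs

# Sampled integrands (illustrative): the gap should never be negative.
for w in (math.sin, math.exp, lambda t: t * t - 0.5):
    assert jensen_gap(w, h=1.3) >= -1e-9
print("Jensen-type inequality holds on the sampled integrands")
```

For a constant integrand the two sides coincide, which matches the equality case of the inequality.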
Lemma 3 (see [24]). Let be given functions having positive values for arbitrary values of the independent variable in an open subset of . Then the reciprocally convex combination of in satisfies subject to
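As a hedged illustration of Lemma 3 (the reciprocally convex bound of [24]), its simplest scalar instance states that, for 0 < a < 1 and |s| ≤ r, (r/a)x² + (r/(1-a))y² ≥ rx² + 2sxy + ry². The grid check below uses illustrative values of r and s; it is a numeric sanity check, not a proof.

```python
import itertools

def rc_gap(x, y, a, r=1.5, s=1.0):
    """Left side minus right side of the scalar reciprocally convex bound;
    nonnegative whenever 0 < a < 1 and |s| <= r."""
    lhs = (r / a) * x * x + (r / (1.0 - a)) * y * y
    rhs = r * x * x + 2.0 * s * x * y + r * y * y
    return lhs - rhs

alphas = [0.1, 0.3, 0.5, 0.7, 0.9]
vals = [-2.0, -0.5, 0.0, 1.0, 3.0]
for a, x, y in itertools.product(alphas, vals, vals):
    assert rc_gap(x, y, a) >= -1e-9, (a, x, y)
print("reciprocally convex bound holds on the sampled grid")
```

The point of the lemma is that the single matrix condition on r and s (here |s| ≤ r) removes the need to bound 1/a and 1/(1-a) separately, which is the source of the conservatism reduction exploited in Theorem 5.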
3. Main Results
In the sequel, following the method proposed in [13], we decompose the delay interval into equidistant subintervals, where is a given integer; that is, with . Thus, for any , there exists an integer such that . Then the Lyapunov-Krasovskii functional candidate is chosen as with where are to be determined, , , and denotes the th row of matrix .
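The bookkeeping of the delay decomposition can be sketched as follows: the delay interval is split into m equal subintervals, and at each instant the delay falls into exactly one of them. The endpoints 0.0 and 2.0 and the choice m = 4 below are illustrative assumptions, not the paper's data.

```python
def decompose(h_lower, h_upper, m):
    """Grid of m + 1 equidistant points partitioning [h_lower, h_upper]
    into m subintervals, as in the complete delay-decomposing approach."""
    step = (h_upper - h_lower) / m
    return [h_lower + j * step for j in range(m + 1)]

def locate(tau, grid):
    """Return the 1-based index i with grid[i-1] <= tau <= grid[i]."""
    for i in range(1, len(grid)):
        if grid[i - 1] <= tau <= grid[i]:
            return i
    raise ValueError("tau lies outside the delay interval")

grid = decompose(0.0, 2.0, 4)   # -> [0.0, 0.5, 1.0, 1.5, 2.0]
assert locate(0.7, grid) == 2   # tau = 0.7 falls in the second subinterval
assert locate(2.0, grid) == 4   # the upper endpoint belongs to the last one
```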
Remark 4. Notice that a novel term, which is continuous at , is included in the Lyapunov-Krasovskii functional (10); it plays an important role in reducing the conservativeness of the derived result.
Next, we develop some new delay-dependent stability criteria for the delayed neural networks described by (5) and (6) with satisfying (2) and (3). By employing the Lyapunov-Krasovskii functional (10), the following theorem is obtained.
Theorem 5. For a given positive integer , scalars and , the origin of system (5) with the activation function satisfying (6) and a time-varying delay satisfying conditions (3) is globally asymptotically stable if there exist and , , , and , , with appropriate dimensions, such that, for , where with
Proof. From Assumption 1, it can be deduced that, for any diagonal matrices , ,
Now, calculating the derivative of along the solutions of neural network (5) yields
where
By Lemmas 2 and 3, it can be deduced that
where .
Next, we introduce a new vector as
where
Then, rewrite system (5) as
Adding the right sides of (18) to (19) and applying (21) yield
where
For all , if , which is equivalent to LMIs (14) in the sense of Schur complement [25], then for any . Note that is continuous at , so the system (5) is globally asymptotically stable. This completes the proof.
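As a hedged practical aside, conditions of the kind derived above are checked by feeding the LMIs of Theorem 5 to a semidefinite-programming solver. Since the theorem's specific matrices are defined above, the self-contained sketch below only illustrates the mechanics on the most basic Lyapunov LMI, AᵀP + PA < 0 with P > 0, solved here by vectorizing the Lyapunov equation; the test matrix A is an illustrative assumption.

```python
import numpy as np

def lyapunov_certificate(A):
    """Solve A^T P + P A = -I for P by vectorization: this P certifies the
    Lyapunov LMI A^T P + P A < 0, P > 0, whenever A is Hurwitz."""
    n = A.shape[0]
    I = np.eye(n)
    # vec(A^T P) + vec(P A) = (kron(I, A^T) + kron(A^T, I)) vec(P)
    M = np.kron(I, A.T) + np.kron(A.T, I)
    P = np.linalg.solve(M, -I.flatten()).reshape(n, n)
    return 0.5 * (P + P.T)   # symmetrize against round-off

A = np.array([[-2.0, 1.0], [0.0, -3.0]])        # Hurwitz test matrix (assumed)
P = lyapunov_certificate(A)
assert np.all(np.linalg.eigvalsh(P) > 0)                   # P > 0
assert np.all(np.linalg.eigvalsh(A.T @ P + P @ A) < 0)     # LMI satisfied
print("Lyapunov LMI certificate found")
```

In practice the full LMIs (13)-(15) would instead be passed to a dedicated SDP front end with the decision matrices declared as variables; the verification pattern (solve, then check the eigenvalue signs) is the same.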
Remark 6. In the proof of Theorem 5, and are not simply enlarged to , as is done in [16]. By employing the reciprocally convex approach to take this information into account, Theorem 5 may be less conservative, as will be verified by the simulation results in the next section.
Remark 7. In previous works such as [16, 19], considerable attention has been paid to the case in which the derivative of the time-varying delay satisfies (3). However, in the case of satisfying , the treatment in [16, 19] means that in (27) is enlarged to , which inevitably introduces conservativeness. By contrast, this case can be taken fully into account by replacing with in Theorem 5.
For the case in which the time-varying delay is non-differentiable or is unknown, setting , , in Theorem 5, a delay-dependent and rate-independent criterion is easily derived as follows.
Corollary 8. For a given positive integer and scalars , the origin of system (5) with the activation function satisfying (6) and a time-varying delay satisfying condition (2) is globally asymptotically stable if there exist with appropriate dimensions such that, for , the LMIs in (15) and (29) hold, where , and are defined in Theorem 5.
4. Numerical Example
In this section, we will provide a numerical example to show the effectiveness of the presented criteria.
Example 1. Consider neural network (1) with the following parameters: The activation functions satisfy (6) with
This example has been discussed in [16–22]. By using Theorem 5 and Corollary 8 with , for various , the upper bounds that guarantee the global asymptotic stability of neural network (1) are computed and listed in Table 1. The upper bounds obtained by our method are much larger than those reported in [16–22], which shows that the conditions proposed in this paper improve on the existing ones.
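As a hedged cross-check (outside the LMI analysis), a certified delay bound can be sanity-tested by simulating a delayed system of the form of (5). The scalar model and all parameter values below (a, w, tau, the constant initial condition) are illustrative assumptions, not the matrix data of Example 1; the integration uses a plain fixed-step Euler scheme with a history buffer.

```python
import math

def simulate(a=1.0, w=0.5, tau=1.0, x0=1.0, T=30.0, dt=0.001):
    """Euler simulation of the scalar delayed system
    x'(t) = -a*x(t) + w*tanh(x(t - tau)) with constant history x0;
    returns the state at time T."""
    n_hist = int(tau / dt)
    hist = [x0] * (n_hist + 1)   # hist[0] holds the value at t - tau
    x = x0
    for _ in range(int(T / dt)):
        x = x + dt * (-a * x + w * math.tanh(hist[0]))
        hist.append(x)
        hist.pop(0)
    return x

# With w < a the origin is stable and the trajectory decays toward zero.
assert abs(simulate()) < 1e-2
print("trajectory decays toward the origin")
```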

5. Conclusions
This paper has studied the stability of SRNNs by constructing a complete delay-decomposing Lyapunov-Krasovskii functional. Some improved delay-dependent stability conditions, formulated as linear matrix inequalities (LMIs), have been derived by utilizing a reciprocally convex technique to consider the relationship between the time-varying delay and its varying interval. Finally, a numerical example has been provided to show the effectiveness of the proposed methods.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (nos. 61304064, 61363073, 61273157, and 61262032), the Hunan Provincial Natural Science Foundation of China (no. 13JJ6058), the Research Foundation of Education Bureau of Hunan Province (no. 13A075), and the Natural Science Foundation of Hunan University of Technology (no. 2012HZX06).
References
[1] A. N. Michel and D. Liu, Qualitative Analysis and Synthesis of Recurrent Neural Networks, Marcel Dekker, New York, NY, USA, 2002.
[2] Q. J. Zhang and X. Q. Lu, “A recurrent neural network for nonlinear fractional programming,” Mathematical Problems in Engineering, vol. 2012, Article ID 807656, 18 pages, 2012.
[3] K. Goto and K. Shibata, “Emergence of prediction by reinforcement learning using a recurrent neural network,” Journal of Robotics, vol. 2010, Article ID 437654, 9 pages, 2010.
[4] C.-C. Ku and K. Y. Lee, “Diagonal recurrent neural networks for dynamic systems control,” IEEE Transactions on Neural Networks, vol. 6, no. 1, pp. 144–156, 1995.
[5] Y. Liu, Z. Wang, and X. Liu, “Global exponential stability of generalized recurrent neural networks with discrete and distributed delays,” Neural Networks, vol. 19, no. 5, pp. 667–675, 2006.
[6] T. Li, W. X. Zheng, and C. Lin, “Delay-slope-dependent stability results of recurrent neural networks,” IEEE Transactions on Neural Networks, vol. 22, no. 12, pp. 2138–2143, 2011.
[7] T. Li, Q. Luo, C. Sun, and B. Zhang, “Exponential stability of recurrent neural networks with time-varying discrete and distributed delays,” Nonlinear Analysis: Real World Applications, vol. 10, no. 4, pp. 2581–2589, 2009.
[8] J. Cao and J. Wang, “Global asymptotic and robust stability of recurrent neural networks with time delays,” IEEE Transactions on Circuits and Systems I, vol. 52, no. 2, pp. 417–426, 2005.
[9] C. Li and X. Liao, “Robust stability and robust periodicity of delayed recurrent neural networks with noise disturbance,” IEEE Transactions on Circuits and Systems I, vol. 53, no. 10, pp. 2265–2273, 2006.
[10] Y. Liu, Z. Wang, A. Serrano, and X. Liu, “Discrete-time recurrent neural networks with time-varying delays: exponential stability analysis,” Physics Letters A, vol. 362, no. 5-6, pp. 480–488, 2007.
[11] Y. He, G. P. Liu, D. Rees, and M. Wu, “Stability analysis for neural networks with time-varying interval delay,” IEEE Transactions on Neural Networks, vol. 18, no. 6, pp. 1850–1854, 2007.
[12] H.-B. Zeng, Y. He, M. Wu, and C.-F. Zhang, “Complete delay-decomposing approach to asymptotic stability for neural networks with time-varying delays,” IEEE Transactions on Neural Networks, vol. 22, no. 5, pp. 806–812, 2011.
[13] X.-M. Zhang and Q.-L. Han, “New Lyapunov-Krasovskii functionals for global asymptotic stability of delayed neural networks,” IEEE Transactions on Neural Networks, vol. 20, no. 3, pp. 533–539, 2009.
[14] S.-P. Xiao and X.-M. Zhang, “New globally asymptotic stability criteria for delayed cellular neural networks,” IEEE Transactions on Circuits and Systems II, vol. 56, no. 8, pp. 659–663, 2009.
[15] Z.-B. Xu, H. Qiao, J. Peng, and B. Zhang, “A comparative study of two modeling approaches in neural networks,” Neural Networks, vol. 17, no. 1, pp. 73–85, 2004.
[16] H. Shao, “Delay-dependent stability for recurrent neural networks with time-varying delays,” IEEE Transactions on Neural Networks, vol. 19, no. 9, pp. 1647–1651, 2008.
[17] Z. Zuo, C. Yang, and Y. Wang, “A new method for stability analysis of recurrent neural networks with interval time-varying delay,” IEEE Transactions on Neural Networks, vol. 21, no. 2, pp. 339–344, 2010.
[18] X. Li, H. Gao, and X. Yu, “A unified approach to the stability of generalized static neural networks with linear fractional uncertainties and delays,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 41, no. 5, pp. 1275–1286, 2011.
[19] Y.-Y. Wu and Y.-Q. Wu, “Stability analysis for recurrent neural networks with time-varying delay,” International Journal of Automation and Computing, vol. 6, no. 3, pp. 223–227, 2009.
[20] H.-B. Zeng, S.-P. Xiao, and B. Liu, “New stability criteria for recurrent neural networks with a time-varying delay,” International Journal of Automation and Computing, vol. 8, no. 1, pp. 128–133, 2011.
[21] Y. Q. Bai and J. Chen, “New stability criteria for recurrent neural networks with interval time-varying delay,” Neurocomputing, vol. 121, pp. 179–184, 2013.
[22] M. D. Ji, Y. He, C. K. Zhang, and M. Wu, “Novel stability criteria for recurrent neural network with time-varying delay,” Neurocomputing, 2014.
[23] Q.-L. Han, “A new delay-dependent stability criterion for linear neutral systems with norm-bounded uncertainties in all system matrices,” International Journal of Systems Science, vol. 36, no. 8, pp. 469–475, 2005.
[24] P. Park, J. W. Ko, and C. Jeong, “Reciprocally convex approach to stability of systems with time-varying delays,” Automatica, vol. 47, no. 1, pp. 235–238, 2011.
[25] S. Boyd, L. E. Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, Pa, USA, 1994.
Copyright
Copyright © 2014 Lei Ding et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.