Mathematical Problems in Engineering
Volume 2014 (2014), Article ID 951973, 9 pages
http://dx.doi.org/10.1155/2014/951973
Research Article

Delay-Dependent State Estimation of Static Neural Networks with Time-Varying and Distributed Delays

1School of Electronics and Information Engineering, Soochow University, Suzhou 215006, China
2Texas A&M University at Qatar, P.O. Box 23874, Doha, Qatar

Received 5 June 2014; Accepted 6 September 2014; Published 29 September 2014

Academic Editor: Chuandong Li

Copyright © 2014 Lei Shao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper focuses on studying the state estimation problem of static neural networks with time-varying and distributed delays. By constructing a suitable Lyapunov functional and employing two integral inequalities, a sufficient condition is obtained under which the estimation error system is globally asymptotically stable. It can be seen that this condition is dependent on the two kinds of time delays. To reduce the conservatism of the derived result, Wirtinger inequality is employed to handle a cross term in the time-derivative of Lyapunov functional. It is further shown that the design of the gain matrix of state estimator is transformed to finding a feasible solution of a linear matrix inequality, which is efficiently facilitated by available algorithms. A numerical example is explored to demonstrate the effectiveness of the developed result.

1. Introduction

As a kind of recurrent neural network [1], static neural networks have attracted increasing attention from the communities of artificial intelligence, nonlinear science, and systems and control during the past few years. Many exciting applications have been established in various areas including combinatorial optimization, image processing, pattern recognition, knowledge engineering, and the semantic web [2]. As a matter of fact, time delay is frequently encountered in neural network models and leads to unexpected dynamical behaviors. For example, the existence of time delay may make the underlying neural network unstable or even chaotic (especially when the delay is sufficiently large). On the other hand, successful applications depend closely on the stability of the constructed neural network. Consequently, stability analysis of delayed static neural networks has been extensively discussed and some interesting stability criteria have been reported in the open literature (see, e.g., [3–7] and the references therein).

With the rapid development of modern industry, the problems to be tackled are highly nonlinear. Therefore, a neural network applied to solve a complex nonlinear problem often has a large number of neurons. As suggested in [8], it is very hard or expensive to acquire complete information about the states of all neurons in such a relatively large-scale recurrent neural network, while, in practice, it is necessary to know this information in advance and then make use of it to achieve specific objectives [9, 10]. It is thus of great significance to investigate the state estimation problem of delayed static neural networks. Inspired by [11–13], the authors in [14] proposed an improved delay partition approach to the state estimation problem of static neural networks with time-varying delay. In [15], this issue was studied for discrete-time static neural networks. A delay-range-dependent condition was derived in terms of a linear matrix inequality (LMI). To reduce its conservatism, some free-weighting matrices [16] were introduced. Other related results can be found in [17–19].

It should be noted that the above mentioned results on state estimation of delayed static neural networks take only time-varying delay into account. In fact, distributed delay, which is distinct from time-varying delay, should also be considered, because parallel pathways with various axon sizes and lengths frequently occur in neural networks and the signal transmission between neurons is distributed in general [20–22]. That is to say, it is very important to study the effect of distributed delay on state estimation of static neural networks. This is the motivation of the present study.

In this paper, we are concerned with the state estimation problem of delayed static neural networks, where both time-varying and distributed delays are considered. To the best of our knowledge, this is the first time that distributed delay has been introduced into static neural networks. The mathematical model of this kind of delayed static neural network is presented. Then, by constructing an appropriate Lyapunov functional and employing the Jensen and Wirtinger inequalities [23, 24], a sufficient condition, dependent on both the time-varying delay and the distributed delay, is established under which the estimation error system is globally asymptotically stable. The desired gain matrix of the state estimator is then obtained by solving an LMI [25]. An example is finally given to show the effectiveness of the developed result. Although some important results on state estimation of delayed static neural networks are available in [14, 15, 17–19], distributed delay has not yet been taken into consideration there. One contribution of this study is to close this gap and present an efficient approach to handling this issue for delayed static neural networks with the two kinds of time delays. At the same time, some recently proposed techniques are employed to derive a delay-dependent criterion such that a proper state estimator can be easily implemented by solving a convex optimization problem. This is the second contribution of this study.

Notations. Let $\mathbb{R}$ be the set of real numbers, $\mathbb{R}^n$ the $n$-dimensional Euclidean space, and $\mathbb{R}^{m \times n}$ the set of all $m \times n$ real matrices. For a real matrix $P$, $P > 0$ ($P < 0$) means that $P$ is symmetric and positive definite (negative definite). The superscripts $T$ and $-1$, respectively, stand for the transpose and inverse of a matrix. $I$ is an identity matrix with appropriate dimension. $\operatorname{diag}\{\cdots\}$ denotes a block diagonal matrix. The symmetric block in a symmetric matrix is denoted by $\ast$. Matrices, if not explicitly stated, are assumed to have compatible dimensions.

2. Problem Formulation and Preliminaries

In the hardware implementation of recurrent neural networks, time delays are unavoidable owing to the finite switching speed of amplifiers and the finite speed of signal transmission between neurons. In practice, some time delays vary with time; delays of this kind are called time-varying delays. On the other hand, the so-called distributed delay should also be taken into account, since parallel pathways with different axon sizes are actually found and the signal transmission is distributed in a neural network. Therefore, it is reasonable to consider the static neural network with time-varying and distributed delays which is described by where with being the number of neurons, is the state of the th neuron, is a firing rate, is an activation function of the th neuron, and are delayed connection weights between neurons and , and are, respectively, time-varying and distributed delays, and and are external inputs of neuron .
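As a sketch of the structure just described (symbol names are illustrative and not necessarily those of (1): $a_i$ a firing rate, $w_{ij}$ and $h_{ij}$ delayed connection weights, $\tau(t)$ and $d$ the time-varying and distributed delays, and $J_i$ an external input), a static neural network with both kinds of delay is commonly written as

```latex
\dot{x}_i(t) = -a_i x_i(t)
  + f_i\!\left( \sum_{j=1}^{n} w_{ij}\, x_j\big(t - \tau(t)\big)
  + \sum_{j=1}^{n} h_{ij} \int_{t-d}^{t} x_j(s)\, ds + J_i \right),
  \quad i = 1, 2, \dots, n.
```

Here the activation acts on the delayed weighted sum of states, which is the defining feature of the static (as opposed to local field) neural network model.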

Remark 1. It is clear that both time-varying and distributed delays are taken into account in the static neural network (1). To the best of our knowledge, this is the first attempt to introduce distributed delay in static neural networks. Additionally, it can be found in (1) that only is assumed to vary with time while the distributed delay is constant. It should be emphasized that the approach developed later can be easily extended to the case in which is also time-varying. Here, just for simplicity, the distributed delay is supposed to be constant.

Let with or . Then, the delayed static neural network (1) can be rewritten in the compact form

As discussed in [14], it is difficult or even impossible to fully know the state information of all neurons in a delayed static neural network. However, in some applications, one needs to utilize this information to accomplish desired objectives. In this situation, it is necessary to present an efficient algorithm to estimate the states of the underlying static neural network. Then, in place of the “true” states of the neurons, the estimated states can be directly used in practice. Generally, one is able to measure the output of a static neural network. As a result, the output measurement plays a key role when estimating the neurons’ states. Besides this, the activation function is also fixed in the design process of a neural network. Taking these facts into account, the output measurement of the delayed static neural network (3) is assumed to be of the form and a state estimator is constructed as where is a real known matrix with compatible dimension, is a nonlinear disturbance on the output measurement, and , to be determined, is the gain matrix of the state estimator.
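In compact form, and with illustrative symbols that need not match (3)–(5) exactly ($A$ the diagonal firing-rate matrix, $W_1$ and $W_2$ delayed weight matrices, $C$ the output matrix, $g(t, x(t))$ the output disturbance, and $K$ the gain to be designed), a Luenberger-type estimator of the kind described above takes the form

```latex
y(t) = C x(t) + g\big(t, x(t)\big), \qquad
\dot{\hat{x}}(t) = -A \hat{x}(t)
  + f\!\left( W_1 \hat{x}\big(t - \tau(t)\big)
  + W_2 \int_{t-d}^{t} \hat{x}(s)\, ds + J \right)
  + K \big( y(t) - C \hat{x}(t) - g(t, \hat{x}(t)) \big).
```

The correction term $K(y(t) - \hat{y}(t))$ injects the measured output into the copy of the network dynamics, so the estimation error is driven to zero when the gain $K$ is chosen appropriately.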

Define the error signal to be the difference between and (i.e., ). It follows from (3)–(5) that the error signal satisfies Let Then, (6) can be simplified as

Throughout this paper, the following three assumptions are always made.

Assumption 2. For $i = 1, 2, \dots, n$ and any $s_1 \neq s_2$, the activation function $f_i(\cdot)$ satisfies $l_i^- \leq \dfrac{f_i(s_1) - f_i(s_2)}{s_1 - s_2} \leq l_i^+$, where $l_i^-$ and $l_i^+$ are constant scalars.

Assumption 3. There exist constant scalars $\tau > 0$ and $\mu$ such that $0 \leq \tau(t) \leq \tau$ and $\dot{\tau}(t) \leq \mu$.

Assumption 4. For any vectors $x, y \in \mathbb{R}^n$, there are two real known constant matrices $U_1$ and $U_2$ with compatible dimensions such that $\big[g(t, x) - g(t, y) - U_1(x - y)\big]^T \big[g(t, x) - g(t, y) - U_2(x - y)\big] \leq 0$.

Remark 5. As mentioned in [21], the two scalars in Assumption 2 can be positive, zero, or negative. This means that monotonicity is no longer required of the activation function, so the condition is more general than the popularly adopted sigmoid functions such as $\tanh(s)$ and $1/(1 + e^{-s})$.

Remark 6. Inequality (11) in Assumption 4 is known as the sector-bounded condition [26], which has been widely used in the state estimation theory of delayed neural networks [22, 27].

Let and be the th rows of the matrices and , respectively. That is, and . It follows from (9) that where is the th element of .

From (12), for any diagonal matrix , it is clear that with

Lemma 7. For any given diagonal matrix , one has the following inequality:

Note that . It immediately follows from (11) that, for any positive scalar , After some manipulations, one can arrive at the following lemma.

Lemma 8. For any given scalar , satisfies

Lemma 9. For any real matrices $X$ and $Y$ with $Y = Y^T > 0$, one always has $-X^T Y^{-1} X \leq Y - X - X^T$.

Proof. It follows directly from $(X - Y)^T Y^{-1} (X - Y) \geq 0$. This completes the proof.

Remark 10. It is usually difficult to solve a nonlinear matrix inequality. Lemma 9 will be employed to transform a nonlinear matrix inequality into an LMI so that it can be solved efficiently.

Before ending this section, we recall two integral inequalities, which are essential to the derivation of our main result.

Lemma 11 (Jensen inequality [23]). For any given matrix $R \in \mathbb{R}^{n \times n}$ with $R = R^T > 0$, scalars $a, b$ with $a < b$, and continuous function $\omega : [a, b] \to \mathbb{R}^n$, one has
$$(b - a) \int_a^b \omega^T(s) R\, \omega(s)\, ds \geq \left( \int_a^b \omega(s)\, ds \right)^{T} R \left( \int_a^b \omega(s)\, ds \right).$$

Lemma 12 (Wirtinger inequality [24]). For any given matrix $R \in \mathbb{R}^{n \times n}$ with $R = R^T > 0$, scalars $a, b$ with $a < b$, and continuously differentiable function $x : [a, b] \to \mathbb{R}^n$, one has
$$\int_a^b \dot{x}^T(s) R\, \dot{x}(s)\, ds \geq \frac{1}{b - a} \big( x(b) - x(a) \big)^T R \big( x(b) - x(a) \big) + \frac{3}{b - a}\, \Omega^T R\, \Omega,$$
where
$$\Omega = x(b) + x(a) - \frac{2}{b - a} \int_a^b x(s)\, ds.$$

Remark 13. By the Newton-Leibniz formula, $\int_a^b \dot{x}(s)\, ds = x(b) - x(a)$, so when the function in Lemma 11 is differentiable, applying Jensen inequality to its derivative recovers only the first term of the Wirtinger bound; the extra nonnegative term makes Wirtinger inequality less conservative than Jensen inequality. It is thus believed [24] that better performance can be achieved by Wirtinger inequality than by Jensen inequality. In this study, Lemmas 11 and 12 will be, respectively, employed to deal with the corresponding integral terms in the time-derivative of the Lyapunov functional.
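As a quick numerical illustration of this comparison (not from the paper; the trajectory, weight matrix, and grid below are arbitrary choices), the following sketch evaluates both lower bounds on the integral of $\dot{x}^T(s) R \dot{x}(s)$ for a smooth trajectory and confirms that the Wirtinger-based bound is the tighter of the two:

```python
import numpy as np

# Arbitrary test data (illustrative only): a smooth trajectory x(s) in R^2
# on [a, b], and a symmetric positive definite weight matrix R.
a, b = 0.0, 1.0
s = np.linspace(a, b, 20001)
h = s[1] - s[0]
R = np.array([[2.0, 0.5],
              [0.5, 1.0]])

x = np.stack([np.sin(3 * s), np.cos(2 * s)])              # x(s)
xdot = np.stack([3 * np.cos(3 * s), -2 * np.sin(2 * s)])  # dx/ds

def trap(y):
    # Trapezoidal rule on the uniform grid, along the last axis.
    return h * (y[..., 0] / 2 + y[..., 1:-1].sum(axis=-1) + y[..., -1] / 2)

# True value of the integral of xdot^T R xdot over [a, b].
true_val = trap(np.einsum('it,ij,jt->t', xdot, R, xdot))

# Jensen-type bound: (x(b) - x(a))^T R (x(b) - x(a)) / (b - a).
diff = x[:, -1] - x[:, 0]
jensen = diff @ R @ diff / (b - a)

# Wirtinger-based bound adds a nonnegative correction term.
omega = x[:, -1] + x[:, 0] - 2.0 / (b - a) * trap(x)
wirtinger = jensen + 3.0 / (b - a) * (omega @ R @ omega)

print(true_val >= wirtinger >= jensen)  # expect True
```

The correction term involves the integral of the trajectory itself, which is why the Wirtinger bound exploits more information about $x(s)$ than the endpoint-only Jensen bound.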

3. Delay-Dependent State Estimation Criterion

Based on Lemmas 7–12, a design criterion is presented for the delayed static neural network (3), which depends on both the time-varying and distributed delays. It is shown that the design of a suitable gain matrix in (5) is transformed into finding a feasible solution of an LMI.

Theorem 14. For given scalars , , and , the resulting error system (8) is globally asymptotically stable if there are real matrices , , , , diagonal matrices , and a scalar such that the following LMI is satisfied:where Moreover, the gain matrix of the state estimator (5) can be designed as

Proof. By Lemma 9 and (23), one hasPre- and postmultiplying (26), respectively, by and its transpose and noting yieldwith According to Schur complement [25], (27) is equivalent to where
Construct a Lyapunov functional By computing the time-derivative of along the solutions of the error system (8), one gets where is used to derive the above inequality. By Lemma 7, for diagonal matrices , , and , it is not difficult to deduce that By Lemmas 11 and 12, one has By combining (17) and (32)–(34) together, one can deduce where Then, it immediately follows from (29) that for any . According to the theory of Lyapunov stability, the error system (8) is globally asymptotically stable. This completes the proof.

Remark 15. The objective of this study is to propose a delay-dependent approach to the state estimation problem of static neural networks with time-varying and distributed delays. In Theorem 14, a design criterion is derived by means of an LMI, which can be efficiently solved in practice by resorting to well-established algorithms [25]. It should be pointed out that, in order to reduce the conservatism of Theorem 14, Wirtinger inequality is utilized to deal with the corresponding term in (32), although it could also be estimated by Jensen inequality.
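For intuition about what finding a feasible solution of an LMI involves, the following sketch (illustrative only; the matrices below are arbitrary and unrelated to the example in Section 4) constructs a feasible point of the simplest Lyapunov LMI $A^T P + P A < 0$, $P > 0$, by solving the Lyapunov equation $A^T P + P A = -Q$ in vectorized (Kronecker) form; general LMIs such as (23) are instead handed to semidefinite programming solvers [25]:

```python
import numpy as np

# Illustrative sketch (not the paper's LMI (23)): for a Hurwitz matrix A,
# the Lyapunov LMI  A^T P + P A < 0, P > 0  is feasible, and a feasible P
# can be computed by solving A^T P + P A = -Q for any Q > 0 via the
# vectorized (Kronecker product) form of the equation.
A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])   # Hurwitz: trace < 0, det > 0
n = A.shape[0]
Q = np.eye(n)

# vec(A^T P + P A) = (I kron A^T + A^T kron I) vec(P) = -vec(Q)
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(K, -Q.flatten()).reshape(n, n)
P = (P + P.T) / 2  # symmetrize against round-off

eigP = np.linalg.eigvalsh(P)                # should be all positive
eigL = np.linalg.eigvalsh(A.T @ P + P @ A)  # should be all negative
print(eigP.min() > 0 and eigL.max() < 0)    # feasibility check: expect True
```

The eigenvalue checks at the end are exactly the feasibility conditions of the LMI; a dedicated SDP solver performs the analogous certification automatically for the much larger block matrix in (23).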

4. A Numerical Example

Let and consider the delayed static neural network (3) with the following parameters: Let and ; by solving the LMI (23) in Theorem 14 for different , the gain matrix can be obtained, which is summarized in Table 1. It is also found that when and , the LMI (23) is feasible for . If , a feasible solution is Then, the gain matrix can be designed as Now, let and . We change the value of and solve the LMI (23). The numerical results are given in Table 2. For and , the allowable maximum value of such that the LMI (23) is feasible is , and the gain matrix is

tab1
Table 1: The gain matrix obtained by Theorem 14 for different .
tab2
Table 2: The gain matrix obtained by Theorem 14 for different .

5. Conclusion

In this paper, the state estimation problem has been studied for a class of delayed static neural networks in which both time-varying and distributed delays are taken into account. Based on the Lyapunov stability theory and some integral inequalities, a delay-dependent design criterion has been presented by means of an LMI, so the proper gain matrix can be readily obtained in practice with mature algorithms. Finally, a numerical example has been provided to illustrate the effectiveness of the developed result on the state estimator design of static neural networks with mixed delays.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive comments that have greatly improved the quality of this paper. This work was supported by the National Natural Science Foundation of China under Grant nos. 61005047 and 61372146 and the Natural Science Foundation of Jiangsu Province of China under Grant no. BK2010214. Also, this publication was made possible by NPRP Grant # 4-1162-1-181 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.

References

  1. Z.-B. Xu, H. Qiao, J. Peng, and B. Zhang, “A comparative study of two modeling approaches in neural networks,” Neural Networks, vol. 17, no. 1, pp. 73–85, 2004.
  2. M. M. Gupta, L. Jin, and N. Homma, Static and Dynamic Neural Networks: From Fundamentals to Advanced Theory, Wiley, New York, NY, USA, 2003.
  3. X. Li, H. Gao, and X. Yu, “A unified approach to the stability of generalized static neural networks with linear fractional uncertainties and delays,” IEEE Transactions on Systems, Man, and Cybernetics B: Cybernetics, vol. 41, no. 5, pp. 1275–1286, 2011.
  4. P. Li and J. Cao, “Stability in static delayed neural networks: a nonlinear measure approach,” Neurocomputing, vol. 69, no. 13–15, pp. 1776–1781, 2006.
  5. C.-D. Zheng, H. Zhang, and Z. Wang, “Delay-dependent globally exponential stability criteria for static neural networks: an LMI approach,” IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 56, no. 7, pp. 605–609, 2009.
  6. Z.-G. Wu, J. Lam, H. Su, and J. Chu, “Stability and dissipativity analysis of static neural networks with time delay,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 2, pp. 199–210, 2012.
  7. H. Shao, “Delay-dependent stability for recurrent neural networks with time-varying delays,” IEEE Transactions on Neural Networks, vol. 19, no. 9, pp. 1647–1651, 2008.
  8. Z. Wang, D. W. C. Ho, and X. Liu, “State estimation for delayed neural networks,” IEEE Transactions on Neural Networks, vol. 16, no. 1, pp. 279–284, 2005.
  9. L. Jin, P. N. Nikiforuk, and M. M. Gupta, “Adaptive control of discrete-time nonlinear systems using recurrent neural networks,” IEE Proceedings: Control Theory and Applications, vol. 141, no. 3, pp. 169–176, 1994.
  10. H. Huang, T. Huang, X. Chen, and C. Qian, “Exponential stabilization of delayed recurrent neural networks: a state estimation based approach,” Neural Networks, vol. 48, pp. 153–157, 2013.
  11. Y. Ariba and F. Gouaisbaut, “Delay-dependent stability analysis of linear systems with time-varying delay,” in Proceedings of the 46th IEEE Conference on Decision and Control (CDC ’07), pp. 2053–2058, New Orleans, La, USA, December 2007.
  12. S. Mou, H. Gao, J. Lam, and W. Qiang, “A new criterion of delay-dependent asymptotic stability for Hopfield neural networks with time delay,” IEEE Transactions on Neural Networks, vol. 19, no. 3, pp. 532–535, 2008.
  13. Q.-L. Han, “A delay decomposition approach to stability of linear neutral systems,” in Proceedings of the 17th IFAC World Congress, pp. 2607–2612, Seoul, Republic of Korea, July 2008.
  14. H. Huang, G. Feng, and J. Cao, “State estimation for static neural networks with time-varying delay,” Neural Networks, vol. 23, no. 10, pp. 1202–1207, 2010.
  15. C.-Y. Lu, “A delay-range-dependent approach to design state estimator for discrete-time recurrent neural networks with interval time-varying delay,” IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 55, no. 11, pp. 1163–1167, 2008.
  16. Y. He, Q.-G. Wang, M. Wu, and C. Lin, “Delay-dependent state estimation for delayed neural networks,” IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 1077–1081, 2006.
  17. Q. Duan, H. Su, and Z.-G. Wu, “H∞ state estimation of static neural networks with time-varying delay,” Neurocomputing, vol. 97, pp. 16–21, 2012.
  18. H. Huang, T. Huang, and X. Chen, “Guaranteed H∞ performance state estimation of delayed static neural networks,” IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 60, no. 6, pp. 371–375, 2013.
  19. Y. Liu, S. M. Lee, O. M. Kwon, and J. H. Park, “A study on H∞ state estimation of static neural networks with time-varying delays,” Applied Mathematics and Computation, vol. 226, pp. 589–597, 2014.
  20. K. Gopalsamy and X. Z. He, “Stability in asymmetric Hopfield nets with transmission delays,” Physica D, vol. 76, no. 4, pp. 344–358, 1994.
  21. Z. Wang, Y. Liu, and X. Liu, “State estimation for jumping recurrent neural networks with discrete and distributed delays,” Neural Networks, vol. 22, no. 1, pp. 41–48, 2009.
  22. Y. Chen and W. X. Zheng, “Stochastic state estimation for neural networks with distributed delays and Markovian jump,” Neural Networks, vol. 25, pp. 14–20, 2012.
  23. K. Gu, V. L. Kharitonov, and J. Chen, Stability of Time-Delay Systems, Control Engineering, Birkhäuser, Boston, Mass, USA, 2003.
  24. A. Seuret and F. Gouaisbaut, “Wirtinger-based integral inequality: application to time-delay systems,” Automatica, vol. 49, no. 9, pp. 2860–2866, 2013.
  25. S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM, Philadelphia, Pa, USA, 1994.
  26. H. K. Khalil, Nonlinear Systems, Prentice-Hall, Upper Saddle River, NJ, USA, 3rd edition, 2002.
  27. H. Huang, T. Huang, and X. Chen, “A mode-dependent approach to state estimation of recurrent neural networks with Markovian jumping parameters and mixed delays,” Neural Networks, vol. 46, pp. 50–61, 2013.