Journal of Control Science and Engineering
Volume 2016 (2016), Article ID 1759650, 11 pages
http://dx.doi.org/10.1155/2016/1759650
Research Article

Improved Results on State Estimation of Static Neural Networks with Time Delay

1School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China
2School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China
3Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu 611731, China

Received 29 September 2016; Accepted 10 November 2016

Academic Editor: Xian Zhang

Copyright © 2016 Bin Wen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper studies the problem of state estimation for a class of delayed static neural networks. The aim is to design a delay-dependent state estimator such that the dynamics of the error system are globally exponentially stable and a prescribed H∞ performance is guaranteed. Improved delay-dependent conditions are established by constructing augmented Lyapunov-Krasovskii functionals (LKFs). The desired estimator gain matrix can be characterized in terms of the solution to a set of linear matrix inequalities (LMIs). Numerical examples are provided to illustrate the effectiveness of the proposed method in comparison with some existing results.

1. Introduction

Neural networks (NNs) have drawn a great deal of attention due to their extensive applications in various fields such as associative memory, pattern recognition, signal processing, combinatorial optimization, and adaptive control [1–3]. In the real world, time delays are unavoidably encountered in electronic implementations of neural networks because of the finite switching speed of amplifiers. The presence of time delay may cause instability or deteriorate the performance of neural networks. Thus, much recent work [1–9] has addressed the stability problem of delayed NNs.

In this paper we mainly focus on static neural networks (SNNs), which are one type of recurrent neural network (RNN). The other type, local field neural networks, has been studied extensively, while comparatively little effort has been devoted to delayed SNNs. The main difference between the two types is whether the neuron states or the local field states of the neurons are taken as the basic variables. As mentioned in [10, 11], local field neural network models and SNN models are not always equivalent; it is therefore necessary to study SNNs separately. Recently, many interesting results on the stability analysis of SNNs have been reported in the literature [2, 12–16].

Meanwhile, the state estimation of neural networks is an important issue. Generally, a neural network is a highly interconnected network with a great number of neurons, so it is very difficult to completely acquire the state information of all neurons. On the other hand, one needs to know the neuron states in order to make use of neural networks in practice. The state estimation problem for neural networks has been investigated in [17–30]. Among these works, the state estimation of static neural networks with time delay was studied in [17–19, 28, 30, 31]. In [28], a delay partition approach was proposed to deal with the state estimation problem for a class of static neural networks with time-varying delay. In [30], the guaranteed-performance state estimation problem of static neural networks was considered. Further improved results were obtained in [17, 18, 31] by using the convex combination approach. The exponential state estimation of neural networks with time-varying delay was studied in [19]. However, the information of the neuron activation functions has not been adequately taken into account, and the inequalities used may introduce some conservatism. Therefore, the guaranteed-performance state estimation problem has not yet been fully studied and leaves room for improvement.

This paper investigates the problem of state estimation for a class of delayed static neural networks. Delay-dependent criteria are proposed such that the resulting filtering error system is globally exponentially stable with a guaranteed H∞ performance. Differently from the time-varying delays considered in many papers such as [17, 19, 28], we consider a delay that varies in an interval whose lower bound is nonzero, and we fully exploit the information of this lower bound. By using a delay equal-partitioning method, augmented Lyapunov-Krasovskii functionals (LKFs) are properly constructed, which differs from the existing relevant results. The free-weighting matrix technique is then used to obtain a tighter upper bound on the derivatives of the LKFs. As mentioned in Remark 10, we also reduce conservatism by taking advantage of the information on the activation function. However, the slack variables introduced in our results increase the computational burden. To reduce the number of decision variables, and hence the computational burden, integral inequalities are combined with the reciprocally convex approach. Compared with the existing results in [17–19], the criteria in this paper not only lead to less conservatism but also strike a balance between computational burden and conservatism.

The main contributions of this paper are as follows:
(1) Augmented LKFs are properly constructed based on the equal-partitioning method.
(2) Integral inequalities are used to reduce the computational burden.
(3) Time delay is discussed under two different conditions: time-invariant delay and time-varying delay. In the time-varying case, the delay varies in an interval whose lower bound is nonzero, and the information of the lower bound is fully taken into account in the LKFs.
(4) Conservatism is reduced by taking advantage of the information on the activation function.

The remainder of this paper is organized as follows. The state estimation problem is formulated in Section 2. Section 3 is dedicated to the design of state estimators for delayed static neural networks under two different conditions. In Section 4, two numerical examples with simulation results are provided to show the effectiveness of the results. Finally, conclusions are drawn in Section 5.

Notations. The notations used throughout the paper are fairly standard. R^n denotes the n-dimensional Euclidean space, and R^{m×n} is the set of all m × n real matrices. The notation P > 0 (P < 0) means that P is a symmetric positive (negative) definite matrix. A^{-1} and A^T denote the inverse and the transpose of a matrix A, respectively, and I represents the identity matrix with proper dimensions. A symmetric term in a symmetric matrix is denoted by ∗. diag{⋯} stands for a block-diagonal matrix. Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations.

2. Problem Formulation

Consider the delayed static neural network subject to noise disturbances described by

x'(t) = -A x(t) + f(W x(t - h(t)) + J) + B_1 w(t),
y(t) = C x(t) + C_d x(t - h(t)) + B_2 w(t),
z(t) = H x(t),

where x(t) ∈ R^n is the state vector of the neural network, y(t) is the neural network output measurement, z(t), to be estimated, is a linear combination of the state, w(t) is the noise input belonging to L_2[0, ∞), f(·) denotes the neuron activation function with f(0) = 0, A = diag{a_1, ..., a_n} with a_i > 0 is a positive diagonal matrix, B_1, B_2, C, C_d, and H are real known matrices with appropriate dimensions, W denotes the connection weight matrix, h(t) represents the time-varying delay, J represents the exogenous input vector, and the function φ(s), s ∈ [-h_2, 0], is the initial condition.

In this paper, the time delay h(t) is discussed under two different conditions:
(c1) time-invariant delay: h(t) is a constant;
(c2) time-varying delay: h(t) varies in an interval with a nonzero lower bound and has a bounded derivative,
where the delay bounds and the derivative bound are constants.
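Written with explicit symbols, the two cases take the standard form used in the delay-dependent stability literature; the symbol names h(t), \tau, h_1, h_2, and \mu below are notational assumptions:

```latex
\text{(c1) time-invariant delay: } h(t) \equiv \tau, \quad \tau > 0;
\qquad
\text{(c2) time-varying delay: } 0 < h_1 \le h(t) \le h_2, \quad \dot{h}(t) \le \mu,
```

where h_1, h_2, and \mu are constants.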

In order to conduct the analysis, the following assumptions are necessary.

Assumption 1. For any a, b ∈ R with a ≠ b, each activation function f_i(·) satisfies

l_i^- ≤ (f_i(a) - f_i(b)) / (a - b) ≤ l_i^+, i = 1, 2, ..., n,

where l_i^- and l_i^+ are constants, and we define L_1 = diag{l_1^-, l_2^-, ..., l_n^-} and L_2 = diag{l_1^+, l_2^+, ..., l_n^+}.

We construct a state estimator for the estimation of z(t), in which x̂(t) is the estimated state vector of the neural network, ŷ(t) and ẑ(t) denote the estimated measurements of y(t) and z(t), and K is the gain matrix to be determined. Defining the state estimation error e(t) = x(t) - x̂(t) and the output estimation error z̃(t) = z(t) - ẑ(t), we can easily obtain the error system (4).
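A commonly used Luenberger-type structure for such estimators, consistent with the description above, is sketched below; the matrix names are assumptions matching a standard static-NN model, and K is the gain to be designed:

```latex
\dot{\hat{x}}(t) = -A\hat{x}(t) + f\bigl(W\hat{x}(t-h(t)) + J\bigr) + K\bigl(y(t)-\hat{y}(t)\bigr),\\
\hat{y}(t) = C\hat{x}(t) + C_d\,\hat{x}(t-h(t)), \qquad \hat{z}(t) = H\hat{x}(t).
```

With e(t) = x(t) - \hat{x}(t), the corresponding error dynamics take the form

```latex
\dot{e}(t) = -(A+KC)\,e(t) - KC_d\,e(t-h(t)) + g(t) + (B_1 - KB_2)\,w(t),
\qquad \tilde{z}(t) = H e(t),
```

where g(t) = f(Wx(t-h(t))+J) - f(W\hat{x}(t-h(t))+J) denotes the difference of the activation terms.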

Definition 2 (see [18]). For any finite initial condition φ, the error system (4) with w(t) ≡ 0 is said to be globally exponentially stable with a decay rate α if there exist constants κ > 0 and α > 0 such that the estimation error is bounded by a decaying exponential of the initial condition. Given a prescribed disturbance attenuation level γ > 0, the error system is said to be globally exponentially stable with H∞ performance γ when it is globally exponentially stable and, under the zero initial condition, the energy of the output estimation error is bounded by γ² times the energy of the disturbance for every nonzero disturbance w ∈ L_2[0, ∞).
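In explicit form (with assumed symbols e(t) for the state estimation error, \tilde{z}(t) for the output estimation error, and \phi for the initial condition), the two properties of Definition 2 read:

```latex
% global exponential stability with decay rate \alpha (case w \equiv 0):
\|e(t)\| \le \kappa\, e^{-\alpha t} \sup_{-h_2 \le s \le 0}\|\phi(s)\|, \qquad t \ge 0;

% H-infinity performance level \gamma (zero initial condition):
\int_0^{\infty} \tilde{z}^{T}(t)\,\tilde{z}(t)\,dt \;\le\; \gamma^{2}\int_0^{\infty} w^{T}(t)\,w(t)\,dt .
```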

Lemma 3 (see [32]). For any constant matrix M = M^T > 0, scalars τ_1, τ_2 with τ_2 > τ_1 ≥ 0, and a vector function such that the following integrations are well defined, the Jensen integral inequality holds.
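In the standard form given in [32], the Jensen integral inequality of Lemma 3 reads:

```latex
-(\tau_2-\tau_1)\int_{t-\tau_2}^{t-\tau_1}\dot{x}^{T}(s)\,M\,\dot{x}(s)\,ds
\;\le\;
-\left(\int_{t-\tau_2}^{t-\tau_1}\dot{x}(s)\,ds\right)^{\!T} M \left(\int_{t-\tau_2}^{t-\tau_1}\dot{x}(s)\,ds\right).
```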

Lemma 4 (Schur complement). Given constant symmetric matrices S_1, S_2, and S_3, where S_1 = S_1^T and S_2 = S_2^T > 0, then S_1 + S_3^T S_2^{-1} S_3 < 0 if and only if the corresponding block matrix is negative definite.
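In the form usually invoked to linearize quadratic matrix inequalities, the Schur complement lemma states that, for S_1 = S_1^T and S_2 = S_2^T > 0,

```latex
S_{1} + S_{3}^{T} S_{2}^{-1} S_{3} < 0
\quad\Longleftrightarrow\quad
\begin{bmatrix} S_{1} & S_{3}^{T} \\ S_{3} & -S_{2} \end{bmatrix} < 0 .
```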

Lemma 5 (see [33]). For any scalar α ∈ (0, 1), positive definite matrix R, and vectors ζ_1, ζ_2, if there exists a matrix S such that the associated block matrix is nonnegative definite, then the reciprocally convex combination inequality holds.
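Lemma 5 is the reciprocally convex combination inequality of [33]; in its standard form, for \alpha \in (0,1), R = R^T > 0, and a matrix S satisfying the constraint on the right, it states:

```latex
\frac{1}{\alpha}\,\zeta_{1}^{T}R\,\zeta_{1} + \frac{1}{1-\alpha}\,\zeta_{2}^{T}R\,\zeta_{2}
\;\ge\;
\begin{bmatrix}\zeta_{1}\\ \zeta_{2}\end{bmatrix}^{T}
\begin{bmatrix} R & S \\ S^{T} & R \end{bmatrix}
\begin{bmatrix}\zeta_{1}\\ \zeta_{2}\end{bmatrix},
\qquad
\begin{bmatrix} R & S \\ S^{T} & R \end{bmatrix} \ge 0 .
```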

3. State Estimator Design

In this section, the state estimation problem is discussed under two different conditions: time-invariant delay and time-varying delay. We consider the constant-delay case first. For convenience of presentation, we introduce the notation used to state the conditions below.

Theorem 6. Under Assumption 1, for given scalars τ > 0, α > 0, and γ > 0 and an integer m ≥ 1, system (4) is globally exponentially stable with H∞ performance γ if there exist positive diagonal matrices, symmetric positive definite matrices, and free-weighting matrices with appropriate dimensions such that LMI (12) holds, where the LMI matrix is symmetric, its nonzero blocks are as specified, and all other entries are zeros. The estimator gain matrix K is then given in terms of the LMI solution.

Proof. Construct an augmented Lyapunov-Krasovskii functional candidate based on the delay equal-partitioning described above. Calculating the derivative of the LKF along the trajectory of the system, and then applying Lemma 3 to bound the integral terms, we obtain inequalities (17) and (18). According to Assumption 1, the sector condition on the activation function holds, which is equivalent to the quadratic constraint (20) on the state and the activation terms.
Similarly, we obtain the corresponding quadratic constraint (21) for the delayed activation terms.
According to the system equation, the zero equality (22) holds for arbitrary free-weighting matrices. Combining the equalities and inequalities (17), (18), (20), (21), and (22), we obtain an upper bound on the derivative of the LKF in terms of the matrix defined in Theorem 6. Based on Lemma 4, one can deduce that this matrix is negative definite whenever LMI (12) holds.
If LMI (12) holds, the derivative bound is negative, so the H∞-type inequality follows; since the LKF is nonnegative, under the zero initial condition we conclude that the error system (4) guarantees the H∞ performance according to Definition 2. In the sequel, we show the global exponential stability of the estimation error system with w(t) = 0. When w(t) = 0, the error system (4) becomes (30) and the zero equality (22) changes accordingly. Considering the same Lyapunov-Krasovskii functional candidate and calculating its time derivative along the solution of (30), it is obvious that if LMI (12) holds, then the derivative is negative, so we get (34). Integrating inequality (34), we obtain (35). From (15), we have (36). Combining (35) and (36) yields an exponential bound on the error, and hence the error system (4) is globally exponentially stable. In conclusion, if LMI (12) holds, then the state estimator for the static neural network achieves the prescribed H∞ performance and guarantees the global exponential stability of the error system. This completes the proof.
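A representative form of a delay-equal-partitioned augmented LKF on which such proofs rest is sketched below; this form is an assumption consistent with the method described in the text (partition number m, augmented vector \eta), not necessarily the paper's exact functional:

```latex
V(t) = e^{\alpha t}\, e^{T}(t) P\, e(t)
     + \sum_{i=1}^{m} \int_{t-\frac{i\tau}{m}}^{t-\frac{(i-1)\tau}{m}} e^{\alpha s}\,\eta^{T}(s)\, Q_{i}\, \eta(s)\, ds
     + \frac{\tau}{m}\sum_{i=1}^{m} \int_{-\frac{i\tau}{m}}^{-\frac{(i-1)\tau}{m}} \int_{t+\theta}^{t} e^{\alpha s}\,\dot{e}^{T}(s)\, Z_{i}\, \dot{e}(s)\, ds\, d\theta,
```

with P > 0, Q_i > 0, Z_i > 0 and \eta(s) an augmented vector stacking the error state and the activation terms.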

Remark 7. Based on the delay-partitioning method together with the free-weighting matrix approach, a new delay-dependent condition is proposed in Theorem 6 for the state estimation of static neural networks (1) with time-invariant delay. The delay-partitioning method reduces conservatism by employing more detailed information about the time delay. The simulation results in Section 4 reveal the effectiveness of the delay-partitioning approach for the design of state estimators for static neural networks.

In the following, we will study the time-varying delay case; the result is as follows.

Theorem 8. Under Assumption 1, for given delay bounds, a delay-derivative bound, a performance level γ > 0, and an integer m ≥ 1, system (4) is globally exponentially stable with H∞ performance γ if there exist symmetric positive definite matrices, positive diagonal matrices, and free-weighting matrices with appropriate dimensions such that LMIs (39) hold, where the LMI matrix is symmetric, its nonzero blocks are as specified, and all other entries are zeros. The estimator gain matrix K is then given in terms of the LMI solution.

Proof. Construct an augmented Lyapunov-Krasovskii functional candidate adapted to the interval time-varying delay. Calculating the derivative of the LKF along the trajectory of the system, and applying Lemmas 3 and 5 to bound the integral terms, we obtain the corresponding inequalities. According to Assumption 1, and similarly to (20), quadratic constraints on the activation terms hold; according to the system equation, the corresponding zero equality also holds. Combining the equalities and inequalities from (45) to (48), we obtain an upper bound on the derivative of the LKF in terms of the matrix defined in Theorem 8. Based on Lemma 4, one can deduce that this matrix is negative definite whenever LMIs (39) hold.
If LMIs (39) hold, the derivative bound is negative; since the LKF is nonnegative, under the zero initial condition the H∞-type inequality follows. Therefore, the error system (4) guarantees the H∞ performance according to Definition 2. The remainder of the proof is similar to that of Theorem 6. This completes the proof.

Remark 9. If we only use the free-weighting matrix method together with the delay-partitioning method to deal with the state estimation problem of static neural networks (1), a great many free-weighting matrices will be introduced as the number of partitions increases, which leads to complexity and computational burden. Therefore, in this paper we also make use of integral inequalities to reduce the number of decision variables, and hence the computational burden, because only one matrix is introduced no matter how large the number of partitions is. Moreover, the reciprocally convex approach is used together with the integral inequalities to reduce conservatism.

Remark 10. In some previous works [18, 19, 30], only a special case of the sector constraint on the activation function was used to reduce conservatism. In our proof, the full set of sector conditions on the activation function has been used, which plays an important role in reducing conservatism.

4. Numerical Examples

In this section, numerical examples are provided to illustrate the effectiveness of the developed method for the state estimation of static neural networks.

Example 1. Consider the neural network (1) with a first set of parameters. To compare with the existing results, we fix the remaining scalars and obtain the optimal H∞ performance index for different values of the delay and of the partition number m. The results are summarized in Table 1.

Table 1: The optimal H∞ performance index for different values of the delay and the partition number m.

From Table 1, it is clear that our results achieve a better performance. In addition, the optimal H∞ performance index becomes smaller as the partition number m increases, which shows that the delay-partitioning method can effectively reduce conservatism.

Example 2. Consider the neural network (1) with a second set of parameters. The activation function satisfies Assumption 1, and the corresponding sector bounds are easily obtained. We fix the delay bounds and the delay-derivative bound, and the noise disturbance is assumed to be an L_2 signal. By solving the LMIs of Theorem 8 through the Matlab LMI toolbox, we obtain the gain matrix of the estimator. Figure 1 presents the state variables and their estimates for the neural network (1) from the given initial values. Figure 2 shows the state response of the error system (4) under the corresponding initial condition. It is clear that the errors converge rapidly to zero. The simulation results reveal the effectiveness of the proposed approach to the design of state estimators for static neural networks.
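The closed-loop behavior described in Example 2 can be illustrated with a small simulation. The sketch below uses a forward-Euler discretization of a hypothetical two-neuron static NN with a tanh activation and a Luenberger-type estimator driven by the output injection K(y − ŷ); all matrices here are illustrative assumptions, not the parameters of the paper's examples:

```python
import math

# Illustrative two-neuron static NN with a Luenberger-type estimator.
# All matrices below are hypothetical choices, NOT the parameters of the
# paper's examples; tanh satisfies Assumption 1 with bounds l- = 0, l+ = 1.
A = [[1.5, 0.0], [0.0, 1.2]]   # positive diagonal matrix A
W = [[0.3, -0.2], [0.1, 0.4]]  # delayed connection weight matrix W
C = [[1.0, 0.0]]               # output matrix: y = C x
K = [[0.8], [0.5]]             # assumed estimator gain K

dt, h = 0.01, 0.5              # Euler step and (constant) delay
d = int(h / dt)                # delay expressed in discrete steps

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def f(v):
    return [math.tanh(u) for u in v]

def step(x, x_del, inj):
    """One Euler step of dx/dt = -A x + f(W x(t-h)) + inj."""
    Ax, fW = matvec(A, x), f(matvec(W, x_del))
    return [x[i] + dt * (-Ax[i] + fW[i] + inj[i]) for i in range(len(x))]

# Constant initial histories on [-h, 0] for the plant and the estimator.
xs  = [[1.0, -1.0]] * (d + 1)
xhs = [[0.0, 0.0]] * (d + 1)
for _ in range(3000):          # simulate 30 seconds
    x, xh = xs[-1], xhs[-1]
    y_err = matvec(C, [x[i] - xh[i] for i in range(2)])  # y - yhat
    inj = matvec(K, y_err)                               # output injection K(y - yhat)
    xs.append(step(x, xs[-d - 1], [0.0, 0.0]))
    xhs.append(step(xh, xhs[-d - 1], inj))

e0 = max(abs(xs[d][i] - xhs[d][i]) for i in range(2))    # error at t = 0
eT = max(abs(xs[-1][i] - xhs[-1][i]) for i in range(2))  # error at t = 30
print(eT < 1e-3 * e0)
```

For this (assumed) stable configuration, the estimation error decays by several orders of magnitude over the simulated horizon, mirroring the rapid convergence seen in Figure 2.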

Figure 1: The state variables and their estimates.
Figure 2: The state response of the error system.

5. Conclusions

In this paper, we investigated the state estimation problem for a class of delayed static neural networks. By constructing augmented Lyapunov-Krasovskii functionals, new delay-dependent conditions were established. The design of the desired estimator is achieved by solving a set of linear matrix inequalities, which can be done efficiently by resorting to standard numerical algorithms. Finally, numerical examples were provided to illustrate the effectiveness of the proposed method in comparison with some existing results.

Competing Interests

The authors declare that they have no competing interests.

References

  1. O. M. Kwon, S. M. Lee, and Ju H. Park, “Improved results on stability analysis of neural networks with time-varying delays: novel delay-dependent criteria,” Modern Physics Letters B, vol. 24, no. 8, pp. 775–789, 2010.
  2. B. Du and J. Lam, “Stability analysis of static recurrent neural networks using delay-partitioning and projection,” Neural Networks, vol. 22, no. 4, pp. 343–347, 2009.
  3. J. Tian and S. Zhong, “Improved delay-dependent stability criterion for neural networks with time-varying delay,” Applied Mathematics and Computation, vol. 217, no. 24, pp. 10278–10288, 2011.
  4. A. Arunkumar, R. Sakthivel, K. Mathiyalagan, and J. H. Park, “Robust stochastic stability of discrete-time fuzzy Markovian jump neural networks,” ISA Transactions, vol. 53, no. 4, pp. 1006–1014, 2014.
  5. K. Shi, H. Zhu, S. Zhong, Y. Zeng, and Y. Zhang, “Less conservative stability criteria for neural networks with discrete and distributed delays using a delay-partitioning approach,” Neurocomputing, vol. 140, pp. 273–282, 2014.
  6. M. Luo, S. Zhong, R. Wang, and W. Kang, “Robust stability analysis for discrete-time stochastic neural networks systems with time-varying delays,” Applied Mathematics and Computation, vol. 209, no. 2, pp. 305–313, 2009.
  7. B. Chen and J. Wang, “Global exponential periodicity and global exponential stability of a class of recurrent neural networks with various activation functions and time-varying delays,” Neural Networks, vol. 20, no. 10, pp. 1067–1080, 2007.
  8. X. Nie and J. Cao, “Stability analysis for the generalized Cohen-Grossberg neural networks with inverse Lipschitz neuron activations,” Computers & Mathematics with Applications, vol. 57, no. 9, pp. 1522–1536, 2009.
  9. O. M. Kwon, M. J. Park, J. H. Park, S. M. Lee, and E. J. Cha, “Improved approaches to stability criteria for neural networks with time-varying delays,” Journal of the Franklin Institute, vol. 350, no. 9, pp. 2710–2735, 2013.
  10. Z.-B. Xu, H. Qiao, J. Peng, and B. Zhang, “A comparative study of two modeling approaches in neural networks,” Neural Networks, vol. 17, no. 1, pp. 73–85, 2004.
  11. H. Qiao, J. Peng, Z.-B. Xu, and B. Zhang, “A reference model approach to stability analysis of neural networks,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 33, no. 6, pp. 925–936, 2003.
  12. P. Li and J. Cao, “Stability in static delayed neural networks: a nonlinear measure approach,” Neurocomputing, vol. 69, no. 13–15, pp. 1776–1781, 2006.
  13. C.-D. Zheng, H. Zhang, and Z. Wang, “Delay-dependent globally exponential stability criteria for static neural networks: an LMI approach,” IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 56, no. 7, pp. 605–609, 2009.
  14. H. Shao, “Delay-dependent stability for recurrent neural networks with time-varying delays,” IEEE Transactions on Neural Networks, vol. 19, no. 9, pp. 1647–1651, 2008.
  15. X. Li, H. Gao, and X. Yu, “A unified approach to the stability of generalized static neural networks with linear fractional uncertainties and delays,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 41, no. 5, pp. 1275–1286, 2011.
  16. Z.-G. Wu, J. Lam, H. Su, and J. Chu, “Stability and dissipativity analysis of static neural networks with time delay,” IEEE Transactions on Neural Networks and Learning Systems, vol. 23, no. 2, pp. 199–210, 2012.
  17. Q. Duan, H. Su, and Z.-G. Wu, “H∞ state estimation of static neural networks with time-varying delay,” Neurocomputing, vol. 97, pp. 16–21, 2012.
  18. H. Huang, T. Huang, and X. Chen, “Guaranteed H∞ performance state estimation of delayed static neural networks,” IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 60, no. 6, pp. 371–375, 2013.
  19. Y. Liu, S. M. Lee, O. M. Kwon, and J. H. Park, “A study on H∞ state estimation of static neural networks with time-varying delays,” Applied Mathematics and Computation, vol. 226, pp. 589–597, 2014.
  20. T. Li, S.-M. Fei, and Q. Zhu, “Design of exponential state estimator for neural networks with distributed delays,” Nonlinear Analysis: Real World Applications, vol. 10, no. 2, pp. 1229–1242, 2009.
  21. M. S. Mahmoud, “New exponentially convergent state estimation method for delayed neural networks,” Neurocomputing, vol. 72, no. 16, pp. 3935–3942, 2009.
  22. C.-D. Zheng, M. Ma, and Z. Wang, “Less conservative results of state estimation for delayed neural networks with fewer LMI variables,” Neurocomputing, vol. 74, no. 6, pp. 974–982, 2011.
  23. D. Zhang and L. Yu, “Exponential state estimation for Markovian jumping neural networks with time-varying discrete and distributed delays,” Neural Networks, vol. 35, pp. 103–111, 2012.
  24. Y. Chen and W. X. Zheng, “Stochastic state estimation for neural networks with distributed delays and Markovian jump,” Neural Networks, vol. 25, pp. 14–20, 2012.
  25. R. Rakkiyappan, N. Sakthivel, J. H. Park, and O. Kwon, “Sampled-data state estimation for Markovian jumping fuzzy cellular neural networks with mode-dependent probabilistic time-varying delays,” Applied Mathematics and Computation, vol. 221, pp. 741–769, 2013.
  26. H. Huang, T. Huang, and X. Chen, “A mode-dependent approach to state estimation of recurrent neural networks with Markovian jumping parameters and mixed delays,” Neural Networks, vol. 46, pp. 50–61, 2013.
  27. S. Lakshmanan, K. Mathiyalagan, J. H. Park, R. Sakthivel, and F. A. Rihan, “Delay-dependent H∞ state estimation of neural networks with mixed time-varying delays,” Neurocomputing, vol. 129, pp. 392–400, 2014.
  28. H. Huang, G. Feng, and J. Cao, “State estimation for static neural networks with time-varying delay,” Neural Networks, vol. 23, no. 10, pp. 1202–1207, 2010.
  29. H. Huang and G. Feng, “Delay-dependent H∞ and generalized H2 filtering for delayed neural networks,” IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 56, no. 4, pp. 846–857, 2009.
  30. H. Huang, G. Feng, and J. Cao, “Guaranteed performance state estimation of static neural networks with time-varying delay,” Neurocomputing, vol. 74, no. 4, pp. 606–616, 2011.
  31. M. S. Ali, R. Saravanakumar, and S. Arik, “Novel H∞ state estimation of static neural networks with interval time-varying delays via augmented Lyapunov-Krasovskii functional,” Neurocomputing, vol. 171, pp. 949–954, 2016.
  32. K. Gu, J. Chen, and V. L. Kharitonov, Stability of Time-Delay Systems, Springer Science & Business Media, 2003.
  33. P. Park, J. W. Ko, and C. Jeong, “Reciprocally convex approach to stability of systems with time-varying delays,” Automatica, vol. 47, no. 1, pp. 235–238, 2011.