Delay-Dependent State Estimation of Static Neural Networks with Time-Varying and Distributed Delays
This paper studies the state estimation problem of static neural networks with time-varying and distributed delays. By constructing a suitable Lyapunov functional and employing two integral inequalities, a sufficient condition is obtained under which the estimation error system is globally asymptotically stable. This condition depends on both kinds of time delays. To reduce the conservatism of the derived result, the Wirtinger inequality is employed to handle a cross term in the time-derivative of the Lyapunov functional. It is further shown that the design of the estimator gain matrix reduces to finding a feasible solution of a linear matrix inequality, which is efficiently handled by available algorithms. A numerical example is explored to demonstrate the effectiveness of the developed result.
As a kind of recurrent neural network, static neural networks have attracted increasing attention from the communities of artificial intelligence, nonlinear science, and systems and control during the past few years. Many exciting applications have been established in various areas, including combinatorial optimization, image processing, pattern recognition, knowledge engineering, and the semantic web. As a matter of fact, time delay is frequently encountered in neural network models and can lead to unexpected dynamical behaviors. For example, the existence of time delay may make the underlying neural network unstable or even chaotic (especially when the time delay is relatively large). On the other hand, successful application depends closely on the stability of the constructed neural network. Consequently, stability analysis of delayed static neural networks has been extensively discussed, and some interesting stability criteria have been reported in the open literature (see, e.g., [3–7] and references therein).
With the rapid development of modern industry, the problems to be tackled are often highly nonlinear. Therefore, a neural network applied to solve a complex nonlinear problem often has a great number of neurons. As suggested in , it is very hard or expensive to acquire complete information on the states of all neurons in such a relatively large-scale recurrent neural network, while, in practice, this information must be known in advance and then used to achieve specific objectives [9, 10]. It is thus of great significance to investigate the state estimation problem of delayed static neural networks. Inspired by [11–13], the authors in  proposed an improved delay partition approach to the state estimation problem of static neural networks with time-varying delay. In , this issue was studied for discrete-time static neural networks, and a delay-range-dependent condition was derived in terms of a linear matrix inequality (LMI). To reduce its conservatism, some free-weighting matrices  were introduced. Other related results can be found in [17–19].
It should be noted that, in the above-mentioned results on state estimation of delayed static neural networks, only time-varying delay was taken into account. In fact, distributed delay, which is distinct from time-varying delay, should also be considered, because parallel pathways with various axon sizes and lengths frequently occur in neural networks and the signal transmission between neurons is generally distributed [20–22]. In other words, it is very important to study the effect of distributed delay on the state estimation of static neural networks. This is the motivation of the present study.
In this paper, we are concerned with the state estimation problem of delayed static neural networks, where both time-varying and distributed delays are considered. To the best of our knowledge, this is the first time that distributed delay has been introduced into static neural networks. The mathematical model of this class of delayed static neural networks is presented. Then, by constructing an appropriate Lyapunov functional and employing the Jensen and Wirtinger inequalities [23, 24], a sufficient condition, dependent on both the time-varying delay and the distributed delay, is established under which the estimation error system is globally asymptotically stable. The desired gain matrix of the state estimator is then obtained by solving an LMI. An example is finally given to show the effectiveness of the developed result. Although some important results on state estimation of delayed static neural networks are available in [14, 15, 17–19], distributed delay has not yet been taken into consideration there. One contribution of this study is to close this gap and present an efficient approach to handling this issue for delayed static neural networks with the two kinds of time delays. At the same time, some recently proposed techniques are employed to derive a delay-dependent criterion such that a proper state estimator can be easily implemented by solving a convex optimization problem; this is the second contribution of this study.
Notations. Let R denote the set of real numbers, R^n the n-dimensional Euclidean space, and R^{m×n} the set of all m×n real matrices. For a real matrix P, P > 0 (P < 0) means that P is symmetric and positive definite (negative definite). The superscripts T and −1, respectively, stand for the transpose and inverse of a matrix. I is an identity matrix with appropriate dimensions. diag{·} denotes a block diagonal matrix. The symmetric block in a symmetric matrix is denoted by ∗. Matrices, if not explicitly stated, are assumed to have compatible dimensions.
2. Problem Formulation and Preliminaries
In hardware implementations of recurrent neural networks, time delays are unavoidable owing to the finite switching speed of amplifiers and the finite speed of signal transmission between neurons. In practice, some time delays may vary with time; such delays are referred to as time-varying delays. On the other hand, the so-called distributed delay should also be taken into account, since parallel pathways with different axon sizes are actually found and the signal transmission in a neural network is distributed. Therefore, it is reasonable to consider a static neural network with both time-varying and distributed delays, described by model (1), in which each neuron has a state, a firing rate, and an activation function, the neurons are coupled through delayed connection weights, the time-varying and the distributed delay enter the coupling terms, and each neuron receives an external input.
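The displayed model (1) is missing from this version of the text. Mixed-delay static neural network models in the literature typically take a form along the following lines; all symbols here are generic placeholders chosen for illustration and need not match the paper's notation:

```latex
\dot{x}_i(t) = -a_i\, x_i(t)
  + f_i\!\Big(\sum_{j=1}^{n} w_{ij}\, x_j\big(t-\tau(t)\big)
  + \sum_{j=1}^{n} v_{ij} \int_{t-d}^{t} x_j(s)\,\mathrm{d}s + J_i\Big),
\qquad i = 1,\dots,n,
```

where \(x_i(t)\) is the state of the \(i\)th neuron, \(a_i > 0\) a firing rate, \(f_i\) its activation function, \(w_{ij}\) and \(v_{ij}\) delayed connection weights, \(\tau(t)\) the time-varying delay, \(d > 0\) the constant distributed delay, and \(J_i\) an external input.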
Remark 1. Clearly, both time-varying and distributed delays are taken into account in the static neural network (1). To the best of our knowledge, this is the first attempt to introduce distributed delay into static neural networks. Additionally, it can be seen from (1) that only the first delay is assumed to vary with time, while the distributed delay is constant. It should be emphasized that the approach developed later can be easily extended to the case where the distributed delay is also time-varying; here, purely for simplicity, it is supposed to be constant.
Let with or . Then, the delayed static neural network (1) can be rewritten in the compact form (3).
As discussed in , it is difficult or even impossible to fully know the state information of all neurons in a delayed static neural network. However, in some applications, one needs to utilize this information to accomplish desired objectives. In this situation, it is necessary to develop an efficient algorithm for estimating the states of the underlying static neural network; the estimated states can then be used in practice in place of the “true” states of the neurons. Generally, one is able to measure the output of a static neural network, so the output measurement plays a key role when estimating the neurons’ states. Besides this, the activation function is also fixed in the design process of a neural network. Taking these facts into account, the output measurement of the delayed static neural network (3) is assumed to be of the form (4), and a state estimator is constructed as (5), where is a real known matrix with compatible dimensions, is a nonlinear disturbance on the output measurement, and , to be determined, is the gain matrix of the state estimator.
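As an illustration of how such an output-injection (Luenberger-type) estimator operates, the following sketch simulates a small delayed network together with an estimator driven by the output innovation. Everything here (the two-neuron dimension, the matrices `A`, `W`, `C`, the gain `K`, the `tanh` activation, the constant delay) is a hypothetical stand-in, not the paper's model or its numerical example; the distributed-delay term and the output disturbance are omitted for brevity.

```python
import numpy as np

# Hypothetical two-neuron delayed static network and a Luenberger-type
# state estimator (illustrative values only; not the paper's example).
A = np.diag([2.0, 2.5])                      # firing-rate matrix
W = np.array([[0.3, -0.2], [0.1, 0.4]])      # delayed connection weights
C = np.array([[1.0, 0.0]])                   # output matrix
K = np.array([[1.5], [0.5]])                 # assumed estimator gain
f = np.tanh                                  # sector-bounded activation
tau, dt, T = 0.1, 0.001, 8.0                 # constant delay, step, horizon

n_delay = int(round(tau / dt))
x = np.array([1.0, -0.8])                    # true state
xh = np.zeros(2)                             # estimated state
hist_x = [x.copy()] * (n_delay + 1)          # delay buffers with a
hist_xh = [xh.copy()] * (n_delay + 1)        # constant initial history

for _ in range(int(T / dt)):
    xd, xhd = hist_x[0], hist_xh[0]          # states delayed by tau
    innovation = C @ x - C @ xh              # measured minus estimated output
    x = x + dt * (-A @ x + f(W @ xd))        # forward-Euler step, true system
    xh = xh + dt * (-A @ xh + f(W @ xhd) + K @ innovation)
    hist_x.append(x.copy()); hist_x.pop(0)   # slide the delay buffers
    hist_xh.append(xh.copy()); hist_xh.pop(0)

err = float(np.linalg.norm(x - xh))          # estimation error at time T
```

With a suitably chosen gain, the innovation term drives the estimation error toward zero despite the estimator never seeing the full state; here `err` is tiny after eight seconds of simulated time.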
Throughout this paper, the following three assumptions are always made.
Assumption 2. For and any different , the activation function satisfies where and are constant scalars.
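The condition stripped from Assumption 2 is, in this line of work, commonly stated as a sector bound on the slope of each activation function. One standard form, with generic symbols that need not match the paper's, is:

```latex
l_i^{-} \;\le\; \frac{f_i(s_1) - f_i(s_2)}{s_1 - s_2} \;\le\; l_i^{+},
\qquad \forall\, s_1 \ne s_2,\quad i = 1,\dots,n,
```

where \(l_i^{-}\) and \(l_i^{+}\) are the constant scalars referred to in the assumption; as noted in Remark 5, they may be positive, zero, or negative.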
Assumption 3. There exist constant scalars and such that
Assumption 4. For any vectors , there are two real known constant matrices and with compatible dimensions such that
Remark 5. As mentioned in , the scalars and in Assumption 2 can be positive, zero, or negative. This means that monotonicity is no longer required of the activation function, so the assumption is more general than the popularly adopted sigmoid-type functions.
Let and be the th rows of the matrices and , respectively. That is, and . It follows from (9) that where is the th element of .
From (12), it is clear that, for any appropriately dimensioned diagonal matrix , the inequality with holds. This leads to the following lemma.
Lemma 7. For any given diagonal matrix , one has the following inequality:
Note that . It immediately follows from (11) that, for any positive scalar , the corresponding estimate holds. After some manipulations, one can arrive at the following lemma.
Lemma 8. For any given scalar , satisfies
Lemma 9. For any real matrices and , one always has
Proof. It can be easily proven by noting . This completes the proof.
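The statement of Lemma 9 is missing above. The bound standardly used for exactly the purpose described in Remark 10 (replacing a nonlinear term such as \(-PR^{-1}P\) by an affine upper bound) is, for \(Y = Y^{\top} > 0\) and any real matrix \(X\) of compatible dimensions:

```latex
-X^{\top} Y^{-1} X \;\le\; Y - X - X^{\top},
```

which follows immediately from expanding \((X - Y)^{\top} Y^{-1} (X - Y) \ge 0\), consistent with the one-line proof given above. The paper's exact statement may differ; this is offered only as the classical form of such a lemma.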
Remark 10. It is known that it is usually difficult to solve a nonlinear matrix inequality. Lemma 9 will be employed to transform a nonlinear matrix inequality into an LMI such that it can be efficiently solved.
Before ending this section, we recall two integral inequalities that are essential to the derivation of our main result.
Lemma 11 (Jensen inequality ). For any given matrix with , scalars with , and continuous function , then
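The inequality of Lemma 11 was stripped from this version of the text. In its standard continuous form (with generic symbols), the Jensen inequality reads: for a matrix \(R = R^{\top} > 0\), scalars \(b > a\), and a continuous function \(\omega : [a,b] \to \mathbb{R}^n\),

```latex
\int_{a}^{b} \omega^{\top}(s)\, R\, \omega(s)\,\mathrm{d}s
\;\ge\; \frac{1}{b-a}
\left(\int_{a}^{b} \omega(s)\,\mathrm{d}s\right)^{\!\top} R
\left(\int_{a}^{b} \omega(s)\,\mathrm{d}s\right).
```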
Lemma 12 (Wirtinger inequality ). For any given matrix with , scalars with , and continuously differentiable function , one has where
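The inequality of Lemma 12 is likewise missing. The Wirtinger-based integral inequality commonly cited in this setting is: for \(R = R^{\top} > 0\), scalars \(b > a\), and a continuously differentiable \(\omega : [a,b] \to \mathbb{R}^n\),

```latex
\int_{a}^{b} \dot{\omega}^{\top}(s)\, R\, \dot{\omega}(s)\,\mathrm{d}s
\;\ge\; \frac{1}{b-a}\Big( \nu_1^{\top} R\, \nu_1 + 3\, \nu_2^{\top} R\, \nu_2 \Big),
```

where \(\nu_1 = \omega(b)-\omega(a)\) and \(\nu_2 = \omega(b)+\omega(a) - \frac{2}{b-a}\int_{a}^{b}\omega(s)\,\mathrm{d}s\). The first term alone recovers the Jensen-type bound, and the extra nonnegative term \(3\,\nu_2^{\top} R\, \nu_2\) is what makes this inequality less conservative, as Remark 13 observes.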
Remark 13. According to the Newton-Leibniz formula, when the function in Lemma 11 is differentiable, it is known that the Wirtinger inequality is less conservative than the Jensen inequality. It is thus believed  that better performance can be achieved by the Wirtinger inequality than by the Jensen inequality. In this study, Lemmas 11 and 12 will be employed, respectively, to deal with the terms , , and in the time-derivative of the Lyapunov functional .
3. Delay-Dependent State Estimation Criterion
Based on Lemmas 7–12, a design criterion is presented for the delayed static neural network (3), which depends on both the time-varying and distributed delays. It is shown that the design of a suitable gain matrix in (5) is transferred to finding a feasible solution of an LMI.
Theorem 14. For given scalars , , and , the resulting error system (8) is globally asymptotically stable if there exist real matrices , , , , diagonal matrices , and a scalar such that the LMI (23) is satisfied, where . Moreover, the gain matrix of the state estimator (5) can be designed as
Proof. By Lemma 9 and (23), one has (26). Pre- and postmultiplying (26), respectively, by and its transpose, and noting , yields (27) with
According to Schur complement , (27) is equivalent to
Construct the Lyapunov functional . Computing the time-derivative of along the solutions of the error system (8) gives , where is used to derive the above inequality. By Lemma 7, for diagonal matrices , , and , it is not difficult to deduce that . By Lemmas 11 and 12, one has . Combining (17) and (32)–(34), one can deduce that , where . Then, it immediately follows from (29) that for any . According to Lyapunov stability theory, the error system (8) is globally asymptotically stable. This completes the proof.
Remark 15. The objective of this study is to propose a delay-dependent approach to the state estimation problem of static neural networks with time-varying and distributed delays. In Theorem 14, a design criterion is derived in terms of an LMI, which can be efficiently solved in practice by resorting to well-established algorithms . It should be pointed out that, in order to reduce the conservatism of Theorem 14, the Wirtinger inequality is utilized to deal with the term in (32), although it could also be estimated by the Jensen inequality.
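To make "finding a feasible solution of an LMI" concrete, the toy sketch below checks feasibility of the simplest Lyapunov-type LMI, A^T P + P A < 0 with P > 0, by solving the associated Lyapunov equation through Kronecker-product vectorization. This is only an analogue of the procedure under assumed data; the actual LMI (23) of Theorem 14 involves many more decision variables and would normally be handed to an off-the-shelf semidefinite-programming/LMI solver.

```python
import numpy as np

# Toy analogue of LMI feasibility (illustration only; not the LMI (23)
# of Theorem 14).  For a Hurwitz matrix A, the Lyapunov LMI
#     A^T P + P A < 0,   P = P^T > 0
# is feasible, and one feasible P solves A^T P + P A = -Q for any Q > 0.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])                  # hypothetical stable matrix
Q = np.eye(2)
n = A.shape[0]

# Vectorize the Lyapunov equation: with row-major flattening,
# flatten(A^T P) = kron(A^T, I) p  and  flatten(P A) = kron(I, A^T) p.
M = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
P = np.linalg.solve(M, -Q.flatten()).reshape(n, n)
P = (P + P.T) / 2                            # symmetrize numerically

eig_P = np.linalg.eigvalsh(P)                # must all be positive
eig_L = np.linalg.eigvalsh(A.T @ P + P @ A)  # must all be negative
feasible = bool(eig_P.min() > 0 and eig_L.max() < 0)
```

The returned `P` certifies feasibility; in the paper's setting, the matrices returned by the solver for (23) are what the estimator gain is recovered from.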
4. A Numerical Example
Let and consider the delayed static neural network (3) with the following parameters: Let and ; by solving the LMI (23) in Theorem 14 for different , the gain matrix can be obtained, which is summarized in Table 1. It is also found that when and , the LMI (23) is feasible for . If , a feasible solution is Then, the gain matrix can be designed as Now, let and . We change the value of and solve the LMI (23). The numerical results are given in Table 2. For and , the allowable maximum value of such that the LMI (23) is feasible is , and the gain matrix is
5. Conclusions
In this paper, the state estimation problem has been studied for a class of delayed static neural networks in which both time-varying and distributed delays are taken into account. Based on Lyapunov stability theory and some integral inequalities, a delay-dependent design criterion has been presented in terms of an LMI, so the proper gain matrix can be easily obtained in practice by mature algorithms. Finally, a numerical example has been provided to illustrate the effectiveness of the developed result on the state estimator design of static neural networks with mixed delays.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
The authors would like to thank the anonymous reviewers for their constructive comments, which have greatly improved the quality of this paper. This work was supported by the National Natural Science Foundation of China under Grant nos. 61005047 and 61372146 and the Natural Science Foundation of Jiangsu Province of China under Grant no. BK2010214. This publication was also made possible by NPRP Grant # 4-1162-1-181 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
M. M. Gupta, L. Jin, and N. Homma, Static and Dynamic Neural Networks: From Fundamentals to Advanced Theory, Wiley, New York, NY, USA, 2003.
H. K. Khalil, Nonlinear Systems, Prentice-Hall, Upper Saddle River, NJ, USA, 3rd edition, 2002.