Journal of Control Science and Engineering

Volume 2016, Article ID 1759650, 11 pages

http://dx.doi.org/10.1155/2016/1759650

## Improved Results on State Estimation of Static Neural Networks with Time Delay

^{1}School of Aeronautics and Astronautics, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China

^{2}School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China

^{3}Key Laboratory for Neuroinformation of Ministry of Education, University of Electronic Science and Technology of China, Chengdu 611731, China

Received 29 September 2016; Accepted 10 November 2016

Academic Editor: Xian Zhang

Copyright © 2016 Bin Wen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper studies the problem of state estimation for a class of delayed static neural networks. The aim is to design a delay-dependent state estimator such that the dynamics of the error system are globally exponentially stable and a prescribed $H_{\infty}$ performance is guaranteed. Some improved delay-dependent conditions are established by constructing augmented Lyapunov-Krasovskii functionals (LKFs). The desired estimator gain matrix can be characterized in terms of the solution to a set of linear matrix inequalities (LMIs). Numerical examples are provided to illustrate the effectiveness of the proposed method compared with some existing results.

#### 1. Introduction

Neural networks (NNs) have drawn a great deal of attention due to their extensive applications in various fields such as associative memory, pattern recognition, signal processing, combinatorial optimization, and adaptive control [1–3]. In the real world, time delays are unavoidably encountered in electronic implementations of neural networks because of the finite switching speed of the amplifiers. The presence of time delay may cause instability or deteriorate the performance of neural networks. Thus, much recent work [1–9] has addressed the stability problem of delayed NNs.

In this paper we mainly focus on static neural networks (SNNs), which are one type of recurrent neural networks (RNNs). The other type, local field neural networks, has been studied extensively, while comparatively little attention has been paid to delayed SNNs. The main difference between SNNs and local field neural networks is whether the neuron states or the local field states of the neurons are taken as the basic variables. As mentioned in [10, 11], local field neural network models and SNN models are not always equivalent; it is therefore necessary to study SNNs separately. Recently, many interesting results on the stability analysis of SNNs have been reported in the literature [2, 12–16].

Meanwhile, the state estimation of neural networks is an important issue. Generally, a neural network is a highly interconnected network with a great number of neurons, so it would be very difficult to completely acquire the state information of all neurons. On the other hand, one needs to know the neuron states in order to make use of the neural network in practice. Results on the state estimation problem for neural networks have been reported in [17–30]. Among them, the state estimation of static neural networks with time delay was studied in [17–19, 28, 30, 31]. In [28], a delay partition approach was proposed to deal with the state estimation problem for a class of static neural networks with time-varying delay. In [30], the guaranteed $H_{\infty}$ performance state estimation problem for static neural networks was considered. Further improved results were obtained in [17, 18, 31] by using the convex combination approach. The exponential state estimation of time-varying delayed neural networks was studied in [19]. However, the information on the neuron activation functions has not been adequately taken into account, and the inequalities used may introduce a degree of conservatism. Therefore, the guaranteed $H_{\infty}$ performance state estimation problem has not yet been fully studied and leaves room for improvement.

This paper investigates the problem of state estimation for a class of delayed static neural networks. Delay-dependent criteria are proposed such that the resulting filtering error system is globally exponentially stable with a guaranteed $H_{\infty}$ performance. Different from the time-varying delays considered in many papers such as [17, 19, 28], we consider delays varying within an interval whose lower bound is nonzero, and we fully exploit the information on the lower bound of the delay. By using a delay equal-partitioning method, augmented Lyapunov-Krasovskii functionals (LKFs) are constructed in a way that differs from the existing relevant results. The free-weighting matrix technique is then used to obtain a tighter upper bound on the derivatives of the LKFs. As mentioned in Remark 10, we also reduce conservatism by taking advantage of the information on the activation function. The slack variables introduced in our results, however, increase the computational burden; to reduce the number of decision variables and thereby the computational burden, integral inequalities are combined with the reciprocally convex approach. Compared with the existing results in [17–19], the criteria in this paper not only are less conservative but also strike a balance between computational burden and conservatism.

The main contributions of this paper are as follows:

1. Augmented LKFs are properly constructed based on the equal-partitioning method.
2. Integral inequalities are used to reduce the computational burden.
3. Time delay is discussed under two different conditions: time-invariant delay and time-varying delay. In the time-varying delay case, we consider delays that vary in an interval whose lower bound is nonzero, and the information on the lower bound is fully taken into account in the LKFs.
4. Conservatism is reduced by taking advantage of the information on the activation function.

The remainder of this paper is organized as follows. The state estimation problem is formulated in Section 2. Section 3 is dedicated to the design of state estimators for delayed static neural networks under two different conditions. In Section 4, two numerical examples with simulation results are provided to show the effectiveness of the results. Finally, conclusions are drawn in Section 5.

*Notations.* The notations used throughout the paper are fairly standard. $\mathbb{R}^{n}$ denotes the $n$-dimensional Euclidean space; $\mathbb{R}^{m \times n}$ is the set of all $m \times n$ real matrices; the notation $P > 0$ ($P < 0$) means that $P$ is a symmetric positive (negative) definite matrix; $A^{-1}$ and $A^{T}$ denote the inverse and the transpose of a matrix $A$; $I$ represents the identity matrix with proper dimensions; a symmetric term in a symmetric matrix is denoted by $*$; $\mathrm{Sym}\{A\}$ represents $A + A^{T}$; $\mathrm{diag}\{\cdots\}$ stands for a block-diagonal matrix. Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations.

#### 2. Problem Formulation

Consider the delayed static neural network subject to noise disturbances described by model (1), where $x(t) \in \mathbb{R}^{n}$ is the state vector of the neural network, $y(t)$ is the neural network output measurement, $z(t)$, the signal to be estimated, is a linear combination of the state, $w(t)$ is the noise input belonging to $\mathcal{L}_{2}[0, \infty)$, $f(\cdot)$ denotes the neuron activation function, $A = \mathrm{diag}\{a_{1}, \ldots, a_{n}\}$ with $a_{i} > 0$ is a positive diagonal matrix, the remaining system matrices are real known matrices with appropriate dimensions, among which the weight matrices denote the connection weights, $d(t)$ represents the time-varying delay, $J$ represents the exogenous input vector, and the function $\phi(s)$ is the initial condition.

In this paper, the time delay is discussed under two different conditions:

(c1) time-invariant delay: $d(t) = d$;

(c2) time-varying delay: $0 < d_{1} \le d(t) \le d_{2}$, $\dot{d}(t) \le \mu$,

where $d_{1}$, $d_{2}$, and $\mu$ are constants.

In order to conduct the analysis, the following assumptions are necessary.

*Assumption 1. *For any $a, b \in \mathbb{R}$, $a \neq b$, the activation function satisfies
$$l_{i}^{-} \le \frac{f_{i}(a) - f_{i}(b)}{a - b} \le l_{i}^{+}, \quad i = 1, \ldots, n,$$
where $l_{i}^{-}$ and $l_{i}^{+}$ are constants, and we define $L_{1} = \mathrm{diag}\{l_{1}^{-}, \ldots, l_{n}^{-}\}$ and $L_{2} = \mathrm{diag}\{l_{1}^{+}, \ldots, l_{n}^{+}\}$.
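As a quick numeric illustration (not part of the paper), the common activation $f = \tanh$ satisfies the sector-bound condition of Assumption 1 with $l^{-} = 0$ and $l^{+} = 1$, since its difference quotients equal $\tanh'(\xi) \in (0, 1]$ by the mean value theorem:

```python
import numpy as np

# Sector-bound check for Assumption 1 with f = tanh:
# l^- <= (f(a) - f(b)) / (a - b) <= l^+ for all a != b,
# expected to hold with l^- = 0 and l^+ = 1.
rng = np.random.default_rng(42)
a = rng.uniform(-5, 5, 10000)
b = rng.uniform(-5, 5, 10000)
mask = np.abs(a - b) > 1e-8            # avoid a == b
slopes = (np.tanh(a[mask]) - np.tanh(b[mask])) / (a[mask] - b[mask])
lo, hi = slopes.min(), slopes.max()    # empirical sector bounds
```

All sampled difference quotients fall inside the sector $[0, 1]$, so $L_{1} = 0$ and $L_{2} = I$ are valid bounds for $\tanh$-type activations.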

We construct a state estimator for the estimation of $z(t)$, where $\hat{x}(t)$ is the estimated state vector of the neural network, $\hat{y}(t)$ and $\hat{z}(t)$ denote the estimates of $y(t)$ and $z(t)$, and $K$ is the gain matrix to be determined. Defining the errors $e(t) = x(t) - \hat{x}(t)$ and $\tilde{z}(t) = z(t) - \hat{z}(t)$, we can easily obtain the error system (4), where $g(\cdot)$ denotes the mismatch between the activation terms of the network and of the estimator.
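To make the estimator structure concrete, the sketch below simulates a hypothetical two-neuron static NN of the form $\dot{x}(t) = -Ax(t) + f(Wx(t-\tau) + J)$ together with a Luenberger-type estimator driven by the output injection $K(y - \hat{y})$. All matrices ($A$, $W$, $C$, $K$, $J$) are illustrative values, not taken from the paper, and $K$ is picked by hand rather than from the LMI conditions of Section 3:

```python
import numpy as np

# Euler simulation of a toy delayed static NN and its state estimator.
# Matrices are assumptions for illustration only.
def f(v):
    return np.tanh(v)              # satisfies Assumption 1 with l- = 0, l+ = 1

A = 2.0 * np.eye(2)                # positive diagonal self-feedback matrix
W = np.array([[0.10, -0.20],
              [0.05,  0.10]])      # connection weights
C = np.eye(2)                      # output matrix
K = np.eye(2)                      # hand-picked estimator gain
J = np.array([0.1, -0.1])          # constant exogenous input

dt, tau, T = 0.001, 0.1, 5.0
d = int(tau / dt)                  # delay expressed in steps
n = int(T / dt)

x = np.zeros((n + 1, 2)); xh = np.zeros((n + 1, 2))
x[0] = [1.0, -0.5]                 # true initial state
xh[0] = [0.0, 0.0]                 # estimator starts from zero

for k in range(n):
    xd = x[max(k - d, 0)]          # delayed true state (constant history)
    xhd = xh[max(k - d, 0)]        # delayed estimated state
    innov = C @ x[k] - C @ xh[k]   # output injection term y - y_hat
    x[k + 1] = x[k] + dt * (-A @ x[k] + f(W @ xd + J))
    xh[k + 1] = xh[k] + dt * (-A @ xh[k] + f(W @ xhd + J) + K @ innov)

e0 = np.linalg.norm(x[0] - xh[0])  # initial estimation error
eT = np.linalg.norm(x[-1] - xh[-1])  # final estimation error
```

With these values the error dynamics contract at a rate of roughly $\lambda_{\min}(A + KC) - \|W\| \approx 2.8$, so the estimation error decays by several orders of magnitude over the simulated horizon.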

*Definition 2 (see [18]). *For any finite initial condition $\phi$, the error system (4) with $w(t) = 0$ is said to be globally exponentially stable with a decay rate $k$ if there exist constants $k > 0$ and $\beta > 0$ such that $\|e(t)\| \le \beta e^{-kt} \sup_{s \le 0} \|\phi(s)\|$. Given a prescribed disturbance attenuation level $\gamma > 0$, the error system is said to be globally exponentially stable with $H_{\infty}$ performance $\gamma$ when the error system is globally exponentially stable and the response under the zero initial condition satisfies $\|\tilde{z}\|_{2} \le \gamma \|w\|_{2}$ for every nonzero $w \in \mathcal{L}_{2}[0, \infty)$, where $\|v\|_{2} = (\int_{0}^{\infty} v^{T}(t) v(t)\, dt)^{1/2}$.
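As a minimal toy illustration of the exponential-stability bound in Definition 2 (a scalar system chosen for illustration, not the paper's error system): for $\dot{e} = -3e$, the solution $e(t) = e(0)e^{-3t}$ satisfies $|e(t)| \le \beta e^{-kt}|e(0)|$ with decay rate $k = 3$ and $\beta = 1$:

```python
import numpy as np

# Check the exponential decay bound of Definition 2 on the scalar
# system de/dt = -3 e, whose exact solution is e(t) = e(0) exp(-3t).
k, beta, e0 = 3.0, 1.0, 2.0
t = np.linspace(0.0, 4.0, 400)
e = e0 * np.exp(-3.0 * t)                  # exact trajectory
bound = beta * np.exp(-k * t) * abs(e0)    # claimed exponential envelope
ok = np.all(np.abs(e) <= bound + 1e-12)    # envelope dominates trajectory
```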

Lemma 3 (see [32]). *For any constant matrix $M = M^{T} > 0$, scalars $a < b$, and vector function $\omega : [a, b] \to \mathbb{R}^{n}$ such that the following integrations are well defined,
$$\left( \int_{a}^{b} \omega(s)\, ds \right)^{T} M \left( \int_{a}^{b} \omega(s)\, ds \right) \le (b - a) \int_{a}^{b} \omega^{T}(s) M \omega(s)\, ds.$$*
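Assuming Lemma 3 is the standard Jensen-type integral inequality (as its use in bounding the LKF derivative suggests), a discretized numeric sanity check with an arbitrary positive definite $M$ and a sample vector function is:

```python
import numpy as np

# Discretized check of Jensen's integral inequality:
# (∫ω)ᵀ M (∫ω) <= (b - a) ∫ ωᵀ M ω  for symmetric M > 0.
rng = np.random.default_rng(0)
B = rng.standard_normal((2, 2))
M = B @ B.T + 2 * np.eye(2)            # symmetric positive definite

a, b, N = 0.0, 1.5, 2000
t = np.linspace(a, b, N)
dt = t[1] - t[0]
omega = np.vstack([np.sin(3 * t), np.cos(t) + t]).T   # sample ω(t)

int_omega = omega.sum(axis=0) * dt     # Riemann sum for ∫ω ds
lhs = int_omega @ M @ int_omega
rhs = (b - a) * sum(w @ M @ w for w in omega) * dt
```

The inequality is strict here because $\omega$ is not constant on $[a, b]$.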

Lemma 4 (Schur complement). *Given constant symmetric matrices $S_{1}$, $S_{2}$, and $S_{3}$, where $S_{1} = S_{1}^{T}$ and $0 < S_{2} = S_{2}^{T}$, then $S_{1} + S_{3}^{T} S_{2}^{-1} S_{3} < 0$ if and only if
$$\begin{bmatrix} S_{1} & S_{3}^{T} \\ S_{3} & -S_{2} \end{bmatrix} < 0.$$*
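A numeric illustration of the Schur complement equivalence, in the form most convenient to check: a symmetric block matrix $\begin{bmatrix} S_{11} & S_{12} \\ S_{12}^{T} & S_{22} \end{bmatrix}$ is negative definite if and only if $S_{22} < 0$ and $S_{11} - S_{12} S_{22}^{-1} S_{12}^{T} < 0$ (the matrices below are arbitrary illustrative values):

```python
import numpy as np

# Verify the Schur complement equivalence on a random well-conditioned
# example: block < 0  <=>  S22 < 0 and S11 - S12 S22^{-1} S12ᵀ < 0.
rng = np.random.default_rng(1)

def is_neg_def(X):
    # negative definiteness via eigenvalues of the symmetric part
    return bool(np.all(np.linalg.eigvalsh((X + X.T) / 2) < 0))

S11 = -3.0 * np.eye(2)
S22 = -2.0 * np.eye(2)
S12 = 0.2 * rng.standard_normal((2, 2))   # small off-diagonal coupling

block = np.block([[S11, S12], [S12.T, S22]])
schur = S11 - S12 @ np.linalg.inv(S22) @ S12.T

left = is_neg_def(block)                       # block condition
right = is_neg_def(S22) and is_neg_def(schur)  # Schur condition
```

Both sides of the equivalence evaluate identically, as the lemma asserts.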

Lemma 5 (see [33]). *For scalars $\alpha_{1}, \alpha_{2} > 0$ satisfying $\alpha_{1} + \alpha_{2} = 1$, vectors $\beta_{1}$ and $\beta_{2}$, and a matrix $R > 0$, if there exists a matrix $S$ satisfying $\begin{bmatrix} R & S \\ S^{T} & R \end{bmatrix} \ge 0$, then the following inequality holds:
$$\frac{1}{\alpha_{1}} \beta_{1}^{T} R \beta_{1} + \frac{1}{\alpha_{2}} \beta_{2}^{T} R \beta_{2} \ge \begin{bmatrix} \beta_{1} \\ \beta_{2} \end{bmatrix}^{T} \begin{bmatrix} R & S \\ S^{T} & R \end{bmatrix} \begin{bmatrix} \beta_{1} \\ \beta_{2} \end{bmatrix}.$$*
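Taking Lemma 5 to be the standard reciprocally convex combination inequality, the following sketch checks it numerically with illustrative choices $R = 2I$ and $S = 0.5I$ (for which the side condition $\begin{bmatrix} R & S \\ S^{T} & R \end{bmatrix} \ge 0$ holds):

```python
import numpy as np

# Numeric check of the reciprocally convex inequality:
# (1/a1) b1ᵀRb1 + (1/a2) b2ᵀRb2 >= [b1;b2]ᵀ [[R,S],[Sᵀ,R]] [b1;b2]
# whenever [[R,S],[Sᵀ,R]] >= 0 and a1 + a2 = 1, a1, a2 > 0.
R = 2.0 * np.eye(2)
S = 0.5 * np.eye(2)
big = np.block([[R, S], [S.T, R]])     # eigenvalues 1.5 and 2.5 > 0
side_ok = bool(np.all(np.linalg.eigvalsh(big) > 0))

rng = np.random.default_rng(7)
b1 = rng.standard_normal(2)
b2 = rng.standard_normal(2)
a1 = 0.3
a2 = 1.0 - a1

lhs = b1 @ R @ b1 / a1 + b2 @ R @ b2 / a2
v = np.concatenate([b1, b2])
rhs = v @ big @ v
```

The gap between the two sides is exactly what the reciprocally convex approach trades for fewer decision variables compared with bounding each term separately.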

#### 3. State Estimator Design

In this section, the state estimation problem will be discussed under two different conditions: time-invariant delay and time-varying delay. We consider the constant time delay case first. For convenience of presentation, we introduce the block notation used in the conditions below.

Theorem 6. *Under Assumption 1, for given scalars and an integer partition number, system (4) is globally exponentially stable with $H_{\infty}$ performance if there exist positive diagonal matrices and matrices with appropriate dimensions such that the following LMI holds, where the LMI matrix is symmetric, with the nonzero entries listed below and all other entries zero. The estimator gain matrix is then obtained from the LMI solution.*

*Proof. *Construct a Lyapunov-Krasovskii functional candidate as given below. Calculating the derivative of the functional along the trajectory of the system, we obtain (17), and using Lemma 3 we obtain (18). According to Assumption 1, we have (19), which is equivalent to (20).

Similarly, we obtain (21).

According to the system equation, equality (22) holds. Combining the equalities and inequalities (17), (18), (20), (21), and (22), we can obtain a bound on the derivative of the functional, in which the matrix is defined subsequently. Based on Lemma 4, one can then deduce the condition that follows.

If LMI (12) holds, then the derivative bound is negative, and since the functional is nonnegative, under the zero initial condition we obtain the $H_{\infty}$ performance inequality; therefore, the error system (4) achieves the prescribed $H_{\infty}$ performance according to Definition 2. In the sequel, we show the global exponential stability of the estimation error system with $w(t) = 0$. When $w(t) = 0$, the error system (4) becomes (30), and (22) becomes (31). Considering the same Lyapunov-Krasovskii functional candidate and calculating its time derivative along the solutions of (30), we derive the corresponding bound, which is negative whenever the LMI holds. Integrating the resulting inequality (34), we obtain (35). From (15), we have (36). Combining (35) and (36) yields the exponential estimate, and hence the error system (4) is globally exponentially stable. In summary, if the LMI is feasible, then the state estimator for the static neural network achieves the prescribed $H_{\infty}$ performance and guarantees the global exponential stability of the error system. This completes the proof.
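The entries of the LMI in Theorem 6 are not reproduced above, but the underlying Lyapunov reasoning can be sketched on a delay-free analogue: for the linearized error matrix $A_{e} = -(A + KC)$ (with illustrative values of $A$, $C$, $K$, not from the paper), a matrix $P > 0$ solving $A_{e}^{T}P + PA_{e} = -I$ certifies that $V = e^{T}Pe$ decreases along trajectories:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Delay-free sketch of the Lyapunov certificate behind Theorem 6.
# A, C, K are illustrative values; the paper's LMI additionally
# handles the delayed and nonlinear terms.
A = 2.0 * np.eye(2)
C = np.eye(2)
K = np.eye(2)
Ae = -(A + K @ C)                  # linearized error-system matrix (= -3I here)

# solve_continuous_lyapunov(a, q) solves a X + X aᴴ = q;
# with a = Ae.T this gives Aeᵀ P + P Ae = -I.
P = solve_continuous_lyapunov(Ae.T, -np.eye(2))
eigs = np.linalg.eigvalsh((P + P.T) / 2)   # P must be positive definite
```

Here $A_{e} = -3I$, so the Lyapunov equation gives $P = I/6 > 0$, confirming exponential stability of the sketch system; the full delay-dependent LMI plays the same role for the error system (4).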

*Remark 7. *Based on the delay partitioning method together with the free-weighting matrix approach, a new delay-dependent condition is proposed in Theorem 6 for the state estimation of static neural networks (1) with time-invariant delay. The delay partitioning method reduces conservatism by employing more detailed information on the time delay. The simulation results in Section 4 reveal the effectiveness of the delay partitioning approach for the design of state estimators for static neural networks.

In the following, we study the time-varying delay case; the result is as follows.

Theorem 8. *Under Assumption 1, for given scalars and an integer partition number, system (4) is globally exponentially stable with $H_{\infty}$ performance if there exist positive diagonal matrices and matrices with appropriate dimensions such that the following LMIs hold, where the LMI matrix is symmetric, with the nonzero entries listed below and all other entries zero. The estimator gain matrix is then obtained from the LMI solution.*

*Proof. *Construct a Lyapunov-Krasovskii functional candidate as given below. Calculating the derivative of the functional along the trajectory of the system, we obtain (45), and using Lemmas 3 and 5 we obtain (46). According to Assumption 1, and similarly to (20), we obtain (47); according to the system equation, equality (48) holds. Combining the equalities and inequalities (45) to (48), we can obtain a bound on the derivative of the functional, in which the matrix is defined subsequently. Based on Lemma 4, one can then deduce the condition that follows.

If LMI (39) holds, then the derivative bound is negative, and since the functional is nonnegative, under the zero initial condition we obtain the $H_{\infty}$ performance inequality. Therefore, the error system (4) achieves the prescribed $H_{\infty}$ performance according to Definition 2. The remainder of the proof is similar to that of Theorem 6. This completes the proof.

*Remark 9. *If only the free-weighting matrix method together with the delay partitioning method were used to deal with the state estimation problem of static neural networks (1), a great many free-weighting matrices would be introduced as the number of partitions increases, leading to complexity and computational burden. Hence, in this paper we also make use of integral inequalities to reduce the number of decision variables and thereby the computational burden, because only one matrix is introduced no matter how large the number of partitions is. Moreover, the reciprocally convex approach is used together with the integral inequalities to reduce conservatism.

*Remark 10. *In some previous works [18, 19, 30], only a special case of the activation-function condition in Assumption 1 was used to reduce conservatism. In our proof, several additional sector-type inequalities derived from Assumption 1 have been used, and they play an important role in reducing conservatism.

#### 4. Numerical Examples

In this section, numerical examples are provided to illustrate the effectiveness of the developed method for the state estimation of static neural networks.

*Example 1. *Consider the neural network (1) with the following parameters. To compare with the existing results, we fix the remaining parameters at the values given above and obtain the optimal $H_{\infty}$ performance index for different values of the delay and of the partition number; the results are summarized in Table 1.