## Functional Differential and Difference Equations with Applications


# Switched Exponential State Estimation and Robust Stability for Interval Neural Networks with Discrete and Distributed Time Delays

**Academic Editor:** Agacik Zafer

#### Abstract

The interval exponential state estimation and robust exponential stability of switched interval neural networks with discrete and distributed time delays are considered. First, by combining the theory of switched systems with that of interval neural networks, the mathematical model of the switched interval neural network with discrete and distributed time delays and the corresponding interval estimation error system are established. Second, by applying the augmented Lyapunov-Krasovskii functional approach and available output measurements, the dynamics of the estimation error system are proved to be globally exponentially stable for all admissible time delays. Both the existence conditions and an explicit characterization of the desired estimator are derived in terms of linear matrix inequalities (LMIs). Moreover, a delay-dependent criterion is developed that guarantees the robust exponential stability of the switched interval neural networks with discrete and distributed time delays. Finally, two numerical examples are provided to illustrate the validity of the theoretical results.

#### 1. Introduction

In the past few decades, different models of neural networks, such as Hopfield neural networks, Cohen-Grossberg neural networks, cellular neural networks, and bidirectional associative memory neural networks, have been extensively investigated owing to their wide applications in areas such as associative memory, pattern classification, reconstruction of moving images, signal processing, and optimization; see [1]. In almost all applications of neural networks, a fundamental issue is stability, which is the prerequisite for the designed network to work at all [2–12]. In hardware implementations of neural networks, time delay is inevitably encountered, and it is usually both discrete and distributed owing to the finite switching speed of amplifiers. Time delay is often the main cause of instability and poor performance in neural networks. Moreover, because of unavoidable factors such as modeling error, external perturbation, and parameter fluctuation, the neural network model inevitably involves uncertainties such as perturbations and component variations, which may alter the stability of the network. Therefore, it is of great importance to study the robust stability of neural networks with time delays in the presence of uncertainties; see, for example, [11, 13–26] and the references therein. Uncertainties mainly take two forms, namely, interval uncertainty and norm-bounded uncertainty. Recently, some sufficient conditions for the global robust stability of interval neural networks with time delays and parametric uncertainties have been obtained in terms of LMIs [13–16, 18].

Hybrid systems have attracted significant attention because they can model many practical control problems that involve the integration of supervisory logic-based control schemes and feedback control algorithms. As a special class of hybrid systems, switched systems are nonlinear systems composed of a family of continuous-time or discrete-time subsystems together with a rule that orchestrates the switching between them. Recently, switched neural networks, whose individual subsystems are a set of neural networks, have found applications in high-speed signal processing, artificial intelligence, and gene selection in DNA microarray analysis [27–29]. Consequently, several researchers have studied stability issues for switched neural networks [19–24]. In [19], based on the Lyapunov-Krasovskii method and the LMI approach, some sufficient conditions were derived for the global robust exponential stability of a class of switched Hopfield neural networks with time-varying delay under uncertainty. In [20], by combining Cohen-Grossberg neural networks with an arbitrary switching rule, the mathematical model of a class of switched Cohen-Grossberg neural networks with mixed time-varying delays was established, and the robust stability of such networks was analyzed. In [21], by employing nonlinear measure and LMI techniques, some new sufficient conditions were obtained to ensure the global robust asymptotic stability and global robust stability of the unique equilibrium for a class of switched recurrent neural networks with time-varying delay.
In [22], a large class of switched recurrent neural networks with time-varying structured uncertainties and time-varying delay was investigated, and some delay-dependent robust periodicity criteria guaranteeing the existence, uniqueness, and global asymptotic stability of the periodic solution for all admissible parametric uncertainties were derived by means of free weighting matrices and LMIs. In [23], based on the multiple Lyapunov functions method and LMI techniques, the authors presented sufficient conditions in terms of LMIs that guarantee the robust exponential stability of uncertain switched Cohen-Grossberg neural networks with interval time-varying delay and distributed time-varying delay under a switching rule with the average dwell time property. It should be noted that almost all of the results in the literature above [19–23] treat the robust stability of switched neural networks with norm-bounded uncertainty; few works deal with the global exponential robust stability of switched neural networks with interval uncertainty, despite its potential and practical importance in many areas such as system control and error analysis; see [14, 15].

The neuron states in relatively large-scale neural networks are often not completely available in the network outputs. Thus, in many applications, one needs to estimate the neuron states through available measurements and then use the estimated states to achieve certain design objectives. For example, in [30], a recurrent neural network was applied to model an unknown system, and the neuron states of the designed network were then utilized by the control law. From the point of view of control, therefore, the state estimation problem for neural networks is significant for many applications. Recently, some results on the neuron state estimation problem for neural networks with or without time delays have appeared in the literature [31–35]. In [31, 32], the authors studied state estimation for Markovian jumping recurrent neural networks with time delays by constructing Lyapunov-Krasovskii functionals and LMIs. In [33], the interconnection matrix of the neural network was assumed to be norm-bounded; through available output measurements and by using the LMI technique, the authors proved that the dynamics of the estimation error is globally exponentially stable for all admissible time delays. In [34], based on the free-weighting matrix approach, a delay-dependent criterion was established to estimate the neuron states through available output measurements such that the dynamics of the estimation error is globally exponentially stable for neural networks with time-varying delays; the results are applicable even when the derivative of the time-varying delay takes arbitrary values. In [35], by using the Lyapunov-Krasovskii functional approach, the authors presented existence conditions for the state estimators in terms of the solution to an LMI.
In [36], where the neuron activation function and the perturbed function of the measurement equation were assumed to be sector-bounded, an LMI-based state estimator and a stability criterion for delayed Hopfield neural networks were developed. In [37], based on an augmented Lyapunov-Krasovskii functional and passivity theory, the authors proved that the estimation error system is exponentially stable and passive from the control input to the output error; a new delay-dependent state estimator for switched Hopfield neural networks was obtained by solving the resulting LMIs. Despite these advances in neural network state estimation, the state estimation problem for switched interval neural networks has not yet been investigated in the literature, although it is important in both theory and applications.

Motivated by the preceding discussion, the aim of this paper is to present a new class of neural network models, namely, switched interval neural networks with discrete and distributed time delays under interval parameter uncertainties, obtained by integrating the theory of switched systems with neural networks. Based on Lyapunov stability theory and by using available output measurements, the dynamics of the estimation error system will be proved to be globally exponentially stable for all admissible time delays. Both the existence conditions and an explicit characterization of the desired estimator are derived in terms of LMIs. In addition, a delay-dependent criterion will be derived such that the proposed switched interval neural network is globally robustly exponentially stable. The proposed criteria are represented in terms of LMIs, which can be solved efficiently by recently developed convex optimization algorithms [38].
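The LMI-based criteria below are meant to be checked numerically. As a minimal sketch of this workflow (using the classical Lyapunov inequality $A^{T}P + PA < 0$ as a small stand-in for the paper's much larger LMIs; all matrices here are illustrative placeholders):

```python
import numpy as np

def is_positive_definite(m, tol=1e-9):
    """Symmetric with all eigenvalues greater than tol."""
    return np.allclose(m, m.T) and np.min(np.linalg.eigvalsh(m)) > tol

# Placeholder data (not the paper's): a Hurwitz matrix A and a weight Q > 0.
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
Q = np.eye(2)

# Solve the Lyapunov LMI A^T P + P A < 0 through the equation
# A^T P + P A = -Q, vectorized via Kronecker products (column-major vec).
I = np.eye(2)
M = np.kron(I, A.T) + np.kron(A.T, I)
P = np.linalg.solve(M, -Q.flatten(order="F")).reshape(2, 2, order="F")

# P > 0 together with A^T P + P A < 0 certifies exponential stability of x' = A x.
```

In practice, the full LMIs of Theorems 3.2 and 4.1 would instead be handed to a semidefinite programming solver, which searches for a feasible point rather than solving a single linear equation.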

The rest of this paper is organized as follows. In Section 2, the model formulation and some preliminaries are given. Section 3 treats switched exponential state estimation problem for interval neural networks with discrete and distributed time delays. In Section 4, the robust exponential stability is discussed and a delay-dependent criterion is developed. Two numerical examples are presented to demonstrate the validity of the proposed results in Section 5. Some conclusions are drawn in Section 6.

*Notations*

Throughout this paper, $\mathbb{R}$ denotes the set of real numbers, $\mathbb{R}^{n}$ denotes the $n$-dimensional Euclidean space, and $\mathbb{R}^{n\times m}$ denotes the set of all $n\times m$ real matrices. For any matrix $A$, $A>0$ ($A<0$) means that $A$ is positive definite (negative definite). $A^{-1}$ denotes the inverse of $A$, and $A^{T}$ denotes the transpose of $A$. $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ denote the maximum and minimum eigenvalues of $A$, respectively. Given a vector $x=(x_{1},\ldots,x_{n})^{T}\in\mathbb{R}^{n}$, $\|x\| = (\sum_{i=1}^{n} x_{i}^{2})^{1/2}$. For $\tau>0$, $C([-\tau,0];\mathbb{R}^{n})$ denotes the family of continuous functions $\varphi$ from $[-\tau,0]$ to $\mathbb{R}^{n}$ with the norm $\|\varphi\| = \sup_{-\tau\le s\le 0}|\varphi(s)|$. $\dot{x}(t)$ denotes the derivative of $x(t)$; $*$ represents the symmetric terms of a matrix. Matrices whose dimensions are not explicitly stated are assumed to have compatible dimensions for algebraic operations.

#### 2. Neural Network Model and Preliminaries

The model of an interval neural network with discrete and distributed time delays can be described by the differential equation system where is the vector of neuron states; is the output vector; , are the vector-valued neuron activation functions; is a mapping from to , which represents the neuron-dependent nonlinear disturbances on the network outputs; is a known constant matrix with appropriate dimensions; is a constant external input vector. , denote the discrete and distributed time delays, respectively, with , ; is a constant diagonal matrix, and , are the neural self-inhibitions; , are the connection weight matrices; , with , , .

The initial value associated with the system (2.1) is assumed to be , , .

Throughout this paper, the following assumptions are made on , and . For any two different , where ; is a constant, . For any two different , where is a constant matrix.

Based on some transformations [15], the system (2.1) can be equivalently written as where , and where denotes the column vector whose corresponding element is 1 and all other elements are 0.

The switched interval neural network with discrete and distributed time delays consists of a set of interval neural networks with discrete and distributed time delays together with a switching rule. Each of the interval neural networks is regarded as an individual subsystem, and the operation mode of the switched network is determined by the switching rule. According to (2.1), the switched interval neural network with discrete and distributed delays can be represented as follows: where , with , , is the switching signal, a piecewise constant function of time. For any , , , and . This means that the matrices are allowed to take values, at an arbitrary time, in the finite set . In this paper, it is assumed that the switching rule is not known a priori but that its instantaneous value is available in real time.
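The role of the piecewise-constant switching signal can be sketched as follows (an illustrative fragment; the subsystem matrices are placeholders, and the five-second dwell time matches the periodic rule used later in Example 5.1):

```python
import numpy as np

# Two illustrative subsystem matrices (placeholders, not the paper's data).
A_modes = [np.array([[-1.0, 0.5], [0.0, -2.0]]),
           np.array([[-3.0, 0.0], [1.0, -1.0]])]

def sigma(t, dwell=5.0):
    """Piecewise-constant switching signal: alternate modes every `dwell` seconds."""
    return int(t // dwell) % len(A_modes)

# At time t, the active subsystem matrix is A_modes[sigma(t)].
```

The signal's instantaneous value is available in real time, but its future switching pattern need not be known in advance.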

By (2.4), the system (2.6) can be rewritten as where , and satisfies the following matrix quadratic inequality:

Define the indicator function , where . Therefore, the system model (2.8) can also be written as where is satisfied under any switching rule.

In this paper, our main purpose is to develop an efficient algorithm to estimate the neuron states in (2.12) from the available network outputs in (2.12). The full-order state estimator is of the form where is the estimation of the neuron state, and the matrix is the estimator gain matrix to be designed.
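A full-order estimator of the Luenberger type described above can be sketched as a single integration step (a simplified illustration: the function and matrix names are hypothetical, and the delayed and distributed-delay terms of the paper's estimator are omitted):

```python
import numpy as np

def estimator_step(x_hat, y, A, B, K, C, u, f, dt):
    """One forward-Euler step of a full-order Luenberger-type estimator:
        x_hat' = -A x_hat + B f(x_hat) + u + K (y - C x_hat),
    where K is the estimator gain and (y - C x_hat) is the output
    innovation.  Delay terms of the paper's estimator are omitted."""
    dx = -A @ x_hat + B @ f(x_hat) + u + K @ (y - C @ x_hat)
    return x_hat + dt * dx
```

The gain $K$ is precisely the object designed via the LMIs of Theorem 3.2 below.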

Let the error state be ; then it follows from (2.12) and (2.13) that

For presentation convenience, set , , , . Then the system (2.14) becomes where and it satisfies the following quadratic inequality

The initial value associated with (2.15) is , , .

To obtain the main results of this paper, the following definitions and lemmas are introduced.

*Definition 2.1. *For the switched estimation error-state system (2.15), the trivial solution is said to be globally exponentially stable if there exist positive scalars and such that
where is the solution of the system (2.15) with the initial value *, *, .

Lemma 2.2 (see [15]). *Let and be two arbitrary quadratic forms over , then for all satisfying if and only if there exists such that
*

Lemma 2.3 (Jensen’s inequality). *For any constant matrix $M \in \mathbb{R}^{n\times n}$, $M > 0$, scalar $\gamma > 0$, and vector function $\omega : [0,\gamma] \to \mathbb{R}^{n}$ such that the integrations concerned are well defined,
$$\gamma \int_{0}^{\gamma} \omega^{T}(s)\, M\, \omega(s)\, ds \;\ge\; \left( \int_{0}^{\gamma} \omega(s)\, ds \right)^{T} M \left( \int_{0}^{\gamma} \omega(s)\, ds \right).$$*
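In its standard form, Jensen's inequality states that $\gamma \int_{0}^{\gamma} \omega^{T}(s) M \omega(s)\,ds \ge (\int_{0}^{\gamma} \omega(s)\,ds)^{T} M (\int_{0}^{\gamma} \omega(s)\,ds)$ for $M>0$. A quick numerical check on a discretized grid (illustrative random data):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, gamma = 3, 400, 2.0            # dimension, grid points, interval length
ds = gamma / N
M = rng.standard_normal((n, n))
M = M @ M.T + n * np.eye(n)          # a positive definite weight matrix M > 0
w = rng.standard_normal((N, n))      # samples of a vector function w(s) on [0, gamma]

# Left side: gamma * integral of w(s)^T M w(s) ds (Riemann sum).
lhs = gamma * sum(float(wi @ M @ wi) for wi in w) * ds
# Right side: (integral of w)^T M (integral of w).
Iw = w.sum(axis=0) * ds
rhs = float(Iw @ M @ Iw)
```

In the discretized form, the inequality is exact (it reduces to the Cauchy-Schwarz inequality in the $M$-weighted inner product), so the check does not depend on the grid resolution.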

Lemma 2.4 (see [38]). *The following LMI
$$\begin{pmatrix} Q(x) & S(x) \\ S^{T}(x) & R(x) \end{pmatrix} > 0,$$
where $Q(x) = Q^{T}(x)$ and $R(x) = R^{T}(x)$ depend affinely on $x$, is equivalent to each of the following conditions:
(1) $R(x) > 0$, $Q(x) - S(x) R^{-1}(x) S^{T}(x) > 0$;
(2) $Q(x) > 0$, $R(x) - S^{T}(x) Q^{-1}(x) S(x) > 0$.*
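Lemma 2.4 is the Schur complement lemma: a symmetric block matrix $\begin{pmatrix} Q & S \\ S^{T} & R \end{pmatrix}$ is positive definite if and only if $R>0$ and $Q - S R^{-1} S^{T} > 0$. A numerical spot-check of this equivalence with illustrative random matrices:

```python
import numpy as np

def is_pd(m, tol=1e-9):
    """Symmetric with all eigenvalues greater than tol."""
    return np.allclose(m, m.T) and np.min(np.linalg.eigvalsh(m)) > tol

rng = np.random.default_rng(1)
n = 3
G = rng.standard_normal((n, n)); Q = G @ G.T + n * np.eye(n)   # Q = Q^T > 0
H = rng.standard_normal((n, n)); R = H @ H.T + n * np.eye(n)   # R = R^T > 0
S = 0.1 * rng.standard_normal((n, n))                           # small coupling
block = np.block([[Q, S], [S.T, R]])

# Equivalence: block > 0  iff  R > 0 and Q - S R^{-1} S^T > 0.
block_pd = is_pd(block)
schur_pd = is_pd(R) and is_pd(Q - S @ np.linalg.inv(R) @ S.T)
```

This equivalence is what allows the nonlinearly coupled inequalities of Theorem 3.1 to be rewritten as the LMIs of Theorem 3.2.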

Lemma 2.5. *Given any real matrices $X$, $Y$, and $\Lambda > 0$ with appropriate dimensions, the following matrix inequality holds:
$$X^{T} Y + Y^{T} X \le X^{T} \Lambda X + Y^{T} \Lambda^{-1} Y.$$*
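Taking Lemma 2.5 in its standard form, $X^{T}Y + Y^{T}X \le X^{T}\Lambda X + Y^{T}\Lambda^{-1}Y$ for any $\Lambda > 0$ (an assumption about the stripped statement), the bound follows from expanding $(\Lambda^{1/2}X - \Lambda^{-1/2}Y)^{T}(\Lambda^{1/2}X - \Lambda^{-1/2}Y) \ge 0$ and can be verified numerically by checking that the difference of the two sides is positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
X = rng.standard_normal((n, n))
Y = rng.standard_normal((n, n))
G = rng.standard_normal((n, n))
Lam = G @ G.T + n * np.eye(n)        # any Lambda > 0

# The bound says X^T Lam X + Y^T Lam^{-1} Y - (X^T Y + Y^T X) >= 0 (PSD).
diff = X.T @ Lam @ X + Y.T @ np.linalg.inv(Lam) @ Y - (X.T @ Y + Y.T @ X)
min_eig = float(np.min(np.linalg.eigvalsh(diff)))
```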

#### 3. Switched Exponential State Estimation for Interval Neural Networks

In this section, we will study the global exponential stability of the system (2.15) under arbitrary switching rule. By constructing a suitable Lyapunov-Krasovskii functional, a delay-dependent criterion for the global exponential stability of the estimation process (2.15) is derived. The following theorem shows that this criterion can be obtained if a quadratic matrix inequality involving several scalar parameters is feasible.

Theorem 3.1. *If there exist scalars and , a matrix and two diagonal matrices , such that the following quadratic matrix inequalities:
**
are satisfied, where
**
then the switched error-state system (2.14) of the neural network (2.12) is globally exponentially stable under any switching rules. Moreover, the estimate of the error-state decay can be given by
**
where .*

*Proof. *Consider the following Lyapunov-Krasovskii functional:
Calculating the time derivative of along the solution of the system (2.15), we obtain
By the assumption and Lemmas 2.3 and 2.5, we have
In the light of (3.7)–(3.10), we obtain that
This implies that
where . By Lemma 2.2, (2.17) and (3.11), we can obtain that for all . Hence,
From (3.4), it is easy to get
On the other hand, we also have
By combining (3.12), (3.13), and (3.14), it follows that
where. The proof is completed.

The inequalities in (3.1) are nonlinear and coupled, each involving many parameters, and are therefore difficult to solve directly. A meaningful approach to such a problem is to convert the nonlinearly coupled matrix inequalities into LMIs while the estimator gain is designed simultaneously. In the following, we address this design problem, that is, we give a practical design procedure for the estimator gain, , such that the set of inequalities (3.1) in Theorem 3.1 is satisfied.

Theorem 3.2. *If there exist two scalars , a matrix , and two diagonal matrices , such that the following linear matrix inequalities:
**
are satisfied, where
**
and the estimator gain is given by , then the switched error-state system (2.14) of the neural network (2.12) is globally exponentially stable under any switching rules. Moreover, the estimate of the error-state decay can be given by
**
where .*

*Proof. *Using Lemma 2.4, (3.16) holds if and only if
where.

Noticing that , it can be easily seen that (3.19) is the same as (3.1). Hence, it follows from Theorem 3.1 that, with the estimator gain given by , the switched error-state system (2.14) of the neural network (2.12) is globally exponentially stable under any switching rules. The proof of Theorem 3.2 is complete.

#### 4. The Stability of Switched Interval Neural Networks

In this section, we consider the stability of the switched interval neural network (2.1) with discrete and distributed time delays but without the output . It should be noted that the stability of the switched interval neural network without the output can be obtained as a by-product, and the main results can be derived easily from the previous section.

Consider the interval neural network (2.1) without the output

The assumptions on the model (4.1) are the same as those in Section 2. Without loss of generality, it is assumed that the neural network (4.1) has a unique equilibrium point, denoted by . For simplicity, the equilibrium is shifted to the origin by letting , and the system (4.1) can be represented as where , . The initial value associated with (4.1) then becomes .

The system (4.2) can also be written as an equivalent form

Similar to the system (2.8), the switched interval neural network with discrete and distributed time delays and without the output can be written as

The following theorem gives a condition, which can ensure that the switched system (4.4) is globally exponentially stable under any switching rules.

Theorem 4.1. *If there exist a scalar , a matrix , and two diagonal matrices , such that the following linear matrix inequalities:
**
are satisfied, where
**
then switched interval neural network system (4.4) is exponentially stable under any switching rules. Moreover, the estimate of the state decay is given by
**
where .*

*Proof. *By using the Lyapunov-Krasovskii functional in (3.4) with the matrix , and following lines similar to the proof of Theorem 3.1, the proof of Theorem 4.1 follows readily.

When the distributed delays , the system (4.4) reduces to the switched interval neural network with discrete delays
By Theorem 4.1, it is easy to obtain the following corollary.

Corollary 4.2. *If there exist scalars , a matrix , and two diagonal matrices , , such that the following linear matrix inequalities:
**
are satisfied, where
**
then switched interval neural network system (4.8) is globally exponentially stable under any switching rules. Moreover, the estimate of the state decay is given by
**
where .*

*Remark 4.3. *In [25], based on the homeomorphism mapping theorem and by using a Lyapunov functional, some delay-independent stability criteria were obtained ensuring the existence, uniqueness, and global asymptotic stability of the equilibrium point for neural networks with multiple time delays under parameter uncertainties. In [26], the authors dealt with the global robust asymptotic stability of a large class of dynamical neural networks with multiple time delays; a new alternative sufficient condition for the existence, uniqueness, and global asymptotic stability of the equilibrium point under parameter uncertainties was proposed by employing a new Lyapunov functional. In this paper, when , the switched system model (4.4) degenerates into the interval neural network model (4.1) with discrete and distributed time delays. It is easy to see that the models studied in [25, 26] are special cases of the model (4.1). Hence, the results obtained in this paper extend and improve the stability results in the existing literature [25, 26].

*Remark 4.4. *In [39], the authors considered Markovian jumping fuzzy Hopfield neural networks with mixed random time-varying delays; by applying the Lyapunov functional method and the LMI technique, delay-dependent robust exponential state estimation and new sufficient conditions guaranteeing robust exponential stability (in the mean-square sense) were proposed. In [40], the authors considered delay-dependent robust asymptotic state estimation for fuzzy Hopfield neural networks with mixed interval time-varying delays; by constructing a Lyapunov-Krasovskii functional containing a triple integral term and by employing some analysis techniques, sufficient conditions were derived in terms of LMIs. In future work, based on [39, 40], models of switched interval fuzzy Hopfield neural networks with mixed random time-varying delays and of switched interval discrete-time fuzzy complex networks are expected to be established, and the strategy proposed in this paper will be utilized to investigate the corresponding state estimation and stability problems.

#### 5. Illustrative Examples

In this section, two illustrative examples will be given to check the validity of the results obtained in Theorems 3.2 and 4.1.

*Example 5.1. *Consider the second-order switched interval neural network with discrete and distributed delays in (2.6) described by , , , , and . Obviously, the assumptions and are satisfied with . The neural network system parameters are defined as

In the following, we design an estimator , for the switched interval neural network in this example. Solving the LMIs in (3.16) by using an appropriate LMI solver in MATLAB, the scalars , feasible positive definite matrices , and the matrices , can be obtained as
Then the estimator gain , can be designed as
By Theorem 3.2, the switched error-state system of the neural network in this example is globally exponentially stable under any switching rules. Moreover, the estimate of the error-state decay is given by

To carry out a numerical simulation of the switched error-state system, set , , , and , and assume that the two subsystems are switched every five seconds. Figure 1 displays the trajectories of the error state with initial value . It can be seen that these trajectories converge to . This accords with the conclusion of Theorem 3.2.
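The five-second periodic switching used above can be reproduced with a simple forward-Euler simulation (a simplified sketch: the delay terms are omitted, and the two Hurwitz subsystem matrices are illustrative stand-ins, not the example's actual data):

```python
import numpy as np

# Simplified switched error dynamics e' = A_{sigma(t)} e (delay terms omitted);
# both matrices below are Hurwitz, so each subsystem is exponentially stable.
A_modes = [np.array([[-2.0, 0.5], [-0.5, -2.0]]),
           np.array([[-1.0, 1.0], [-1.0, -1.5]])]

def simulate(e0, T=20.0, dt=1e-3, dwell=5.0):
    """Forward-Euler integration with the subsystems switched every `dwell` s."""
    e = np.array(e0, dtype=float)
    norms = []
    for k in range(int(T / dt)):
        mode = int((k * dt) // dwell) % len(A_modes)   # periodic switching rule
        e = e + dt * (A_modes[mode] @ e)
        norms.append(float(np.linalg.norm(e)))
    return norms

norms = simulate([-3.0, 2.0])
# The error norm decays toward zero under the periodic switching rule.
```

With both subsystems exponentially stable and a common decay direction, the error norm shrinks across every dwell interval, mirroring the convergence seen in Figure 1.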

*Example 5.2. *Consider the second-order switched interval neural network with discrete and distributed delays described by