Research Article | Open Access

# Input-to-State Stability for Dynamical Neural Networks with Time-Varying Delays

**Academic Editor:** Sabri Arik

#### Abstract

A class of dynamical neural network models with time-varying delays is considered. By employing the Lyapunov-Krasovskii functional method and the linear matrix inequality (LMI) technique, some new sufficient conditions ensuring the input-to-state stability (ISS) property of the nonlinear network systems are obtained. Finally, numerical examples are provided to illustrate the efficiency of the derived results.

#### 1. Introduction

Recently, dynamical neural networks (DNNs), first introduced by Hopfield in [1], have been extensively studied due to their wide applications in various areas such as associative memory, parallel computation, signal processing, optimization, and moving object speed detection. Since time delay is inevitably encountered in the implementation of DNNs and is frequently a source of oscillation and instability, neural networks with time delays have become a topic of great theoretical and practical importance, and many interesting results have been derived (see, e.g., [2–5] and [6–9]). Furthermore, in the practical evolution of such networks, an exactly constant delay is rare and is only a rough approximation of a time-varying delay: delays generally vary with time because the speed of information transmission from one neuron to another fluctuates. Accordingly, the dynamical behavior of neural networks with time-varying delays has been discussed over the last decades (see, e.g., [3, 8–11]).

It is well known that neural networks are often influenced by external disturbances and input errors. Thus many dissipative properties such as robustness [12], passivity [13], and input-to-state stability [4, 10, 11, 14–19] are clearly significant for analyzing the dynamical behavior of the networks. For instance, Ahn incorporated a robust training law in switched Hopfield neural networks with external disturbances to study boundedness and exponential stability [12], and studied passivity in [13]. In particular, ISS implies not only that the unperturbed system is asymptotically stable in the Lyapunov sense but also that its behavior remains bounded when its inputs are bounded. It is one of the most useful dissipative properties for nonlinear systems; it was first introduced for nonlinear control systems by Sontag in [20], and then extended by Praly and Jiang [21], Angeli et al. [22], and Ahn (see [17, 19] and references therein). Against this research background, the ISS properties of neural networks have been investigated in recent years (see, e.g., [16–19] and references therein). For example, by using the Lyapunov function method, some nonlinear feedback matrix norm conditions for ISS have been developed for recurrent neural networks [16]. Moreover, Ahn utilized the Lyapunov function method to discuss the robust stability problem for a class of recurrent neural networks, and some LMI sufficient conditions have been proposed to guarantee ISS (see [17]). In [18], by employing a suitable Lyapunov function, some results on boundedness, ISS, and convergence are established. Also, in [19] a new sufficient condition is derived to guarantee ISS of Takagi-Sugeno fuzzy Hopfield neural networks with time delay. However, there are few results dealing with the ISS of dynamical neural networks (DNNs) with time-varying delays ([11]).

Motivated by the above discussion, we study the ISS properties of DNNs with time-varying delays in this paper. By using the Lyapunov-Krasovskii functional technique, ISS conditions for the considered dynamical neural networks are given in terms of LMIs, which can be easily checked by standard numerical packages. We also provide two illustrative examples to demonstrate the effectiveness of the proposed stability results.

The organization of this paper is as follows. In Section 2, our mathematical model of dynamical neural networks is presented and some preliminaries are given. In Section 3, the main results on both ISS and asymptotic stability of dynamical neural networks with time-varying delays are proposed. In Section 4, two numerical examples are given to demonstrate the effectiveness of the theoretical results. Concluding remarks are collected in Section 5. The proof of Lemma 2.4 is given in the appendix.

*Notation*

Let $\mathbb{R}^{n}$ denote the $n$-dimensional Euclidean space and $|\cdot|$ the usual Euclidean norm. Denote by $\mathcal{C}=C([-\tau,0];\mathbb{R}^{n})$ the space of continuous functions from $[-\tau,0]$ to $\mathbb{R}^{n}$, and designate the norm of an element $\varphi$ in $\mathcal{C}$ by $\|\varphi\|=\sup_{-\tau\le s\le 0}|\varphi(s)|$. $\mathbb{R}^{n\times n}$ is the set of all $n\times n$ real matrices. Let $A^{T}$, $A^{-1}$, $\lambda_{\max}(A)$, $\lambda_{\min}(A)$, and $\|A\|$ denote the transpose, the inverse, the largest eigenvalue, the smallest eigenvalue, and the Euclidean norm of a square matrix $A$, respectively. The notation $P>0$ ($P\ge 0$) means that $P$ is real symmetric and positive definite (positive semidefinite). The notation $P>Q$ ($P\ge Q$), where $P$ and $Q$ are symmetric matrices, means that $P-Q$ is positive definite (positive semidefinite). $I$ denotes the identity matrix. The set of all measurable locally essentially bounded functions $u:\mathbb{R}_{+}\to\mathbb{R}^{m}$, endowed with the (essential) supremum norm $\|u\|_{\infty}=\operatorname{ess\,sup}_{t\ge 0}|u(t)|$, is denoted by $L_{\infty}^{m}$. In addition, $u_{T}$ denotes the truncation of $u$ at $T$; that is, $u_{T}(t)=u(t)$ if $t\le T$, and $u_{T}(t)=0$ if $t>T$. We recall that a function $\gamma:\mathbb{R}_{+}\to\mathbb{R}_{+}$ is a $\mathcal{K}$-function if it is continuous, strictly increasing, and $\gamma(0)=0$; it is a $\mathcal{K}_{\infty}$-function if it is a $\mathcal{K}$-function and also $\gamma(s)\to\infty$ as $s\to\infty$. A function $\beta:\mathbb{R}_{+}\times\mathbb{R}_{+}\to\mathbb{R}_{+}$ is a $\mathcal{KL}$-function if, for each fixed $t$, the function $\beta(\cdot,t)$ is a $\mathcal{K}$-function and, for each fixed $s$, $\beta(s,\cdot)$ decreases to zero as $t\to\infty$.

#### 2. Mathematical Model and Preliminaries

Consider the following nonlinear time-delay system:
$$\dot{x}(t)=f(t,x_{t},u(t)),\quad t\ge t_{0}, \tag{2.1}$$
where $x(t)\in\mathbb{R}^{n}$ is the state vector and $u(t)\in\mathbb{R}^{m}$ is the input function; $x_{t}\in\mathcal{C}$ is the standard function given by $x_{t}(s)=x(t+s)$, $s\in[-\tau,0]$. Without loss of generality, we suppose that $f(t,0,0)=0$, which ensures that $x(t)\equiv 0$ is the trivial solution of the unforced system $\dot{x}(t)=f(t,x_{t},0)$. Define $x(t;t_{0},\xi,u)$ as the solution of system (2.1) with initial value $\xi\in\mathcal{C}$ at time $t_{0}$.

Given a continuous functional $V:\mathbb{R}_{+}\times\mathcal{C}\to\mathbb{R}_{+}$, the upper right-hand derivative of $V$ along the solution is given by
$$D^{+}V(t,x_{t})=\limsup_{h\to 0^{+}}\frac{V(t+h,x_{t+h})-V(t,x_{t})}{h}.$$

For the delayed dynamical system, we first give the definition of input-to-state stability (ISS), as in the non-delayed case.

*Definition 2.1. * System (2.1) is ISS if there exist a $\mathcal{KL}$-function $\beta$ and a $\mathcal{K}$-function $\gamma$ such that, for each input $u\in L_{\infty}^{m}$ and each initial condition $\xi\in\mathcal{C}$, the solution satisfies
$$|x(t;t_{0},\xi,u)|\le\beta(\|\xi\|,t-t_{0})+\gamma(\|u\|_{\infty}),\quad t\ge t_{0}.$$
Note that, by causality, the same definition would result if one replaced $\|u\|_{\infty}$ by $\sup_{t_{0}\le s\le t}|u(s)|$.
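As a concrete illustration of Definition 2.1 (not taken from the paper), consider the scalar system $\dot{x}=-x+u$: by the variation-of-constants formula, its trajectories obey the ISS estimate with $\beta(r,t)=re^{-t}$ and $\gamma(r)=r$. The following self-contained Python sketch checks this bound numerically along a forward-Euler trajectory; the system, input, and tolerance are illustrative choices.

```python
import math

def simulate(x0, u, t_end=10.0, dt=1e-3):
    """Forward-Euler trajectory of the scalar system x'(t) = -x(t) + u(t)."""
    x, t, traj = x0, 0.0, []
    while t < t_end:
        traj.append((t, x))
        x += dt * (-x + u(t))
        t += dt
    return traj

x0 = 3.0
u = lambda t: 0.5 * math.sin(2.0 * t)     # bounded input, ||u||_inf = 0.5
beta = lambda r, t: r * math.exp(-t)      # class-KL decay term
gamma = lambda r: r                       # class-K input gain

# Check the ISS estimate |x(t)| <= beta(|x0|, t) + gamma(||u||_inf) pointwise.
for t, x in simulate(x0, u):
    assert abs(x) <= beta(abs(x0), t) + gamma(0.5) + 1e-2  # small Euler slack
print("ISS estimate holds along the sampled trajectory")
```

The bound captures both facets of ISS: the effect of the initial state fades (the $\beta$ term), while the input contributes only a bounded offset (the $\gamma$ term).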

*Definition 2.2. * A continuously differentiable functional $V:\mathbb{R}_{+}\times\mathcal{C}\to\mathbb{R}_{+}$ is called an ISS Lyapunov-Krasovskii functional if there exist functions $\alpha_{1}$, $\alpha_{2}$ of class $\mathcal{K}_{\infty}$, a function $\chi$ of class $\mathcal{K}$, and a continuous positive definite function $\alpha_{3}$ such that
$$\alpha_{1}(|\phi(0)|)\le V(t,\phi)\le\alpha_{2}(\|\phi\|), \tag{2.4}$$
$$D^{+}V(t,x_{t})\le-\alpha_{3}(|x(t)|)\quad\text{whenever }|x(t)|\ge\chi(|u(t)|).$$

*Remark 2.3. * A continuously differentiable functional $V$ is an ISS Lyapunov-Krasovskii functional if and only if there exist $\alpha_{1},\alpha_{2}\in\mathcal{K}_{\infty}$, a continuous positive definite function $\alpha_{3}$, and $\sigma\in\mathcal{K}$ such that (2.4) holds and
$$D^{+}V(t,x_{t})\le-\alpha_{3}(|x(t)|)+\sigma(|u(t)|).$$
The proof is similar to that of Remark 2.4 in [23]. We omit it here.

Similarly to the case of ordinary differential equations (ODEs), we establish a link between the ISS property and the ISS Lyapunov-Krasovskii functional for time-delay systems in the following lemma.

Lemma 2.4. *The system (2.1) is ISS if it admits an ISS Lyapunov-Krasovskii functional.* For completeness, the proof is given in the appendix.

To obtain our results, we need the following two useful lemmas.

Lemma 2.5 (Schur complement [24]). *For a given symmetric matrix $S=\begin{pmatrix}S_{11}&S_{12}\\ S_{12}^{T}&S_{22}\end{pmatrix}$, where $S_{11}\in\mathbb{R}^{r\times r}$, the following three conditions are equivalent: (i) $S<0$; (ii) $S_{11}<0$ and $S_{22}-S_{12}^{T}S_{11}^{-1}S_{12}<0$; (iii) $S_{22}<0$ and $S_{11}-S_{12}S_{22}^{-1}S_{12}^{T}<0$.*
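The Schur complement lemma can be sanity-checked numerically in the simplest nontrivial case, a symmetric $2\times 2$ matrix of scalars $\begin{pmatrix}a&b\\ b&c\end{pmatrix}$. The sketch below (an illustration, not part of the paper) compares the Schur-complement condition against a direct eigenvalue test on random samples.

```python
import random

def eigs_2x2_sym(a, b, c):
    """Eigenvalues of the symmetric matrix [[a, b], [b, c]]."""
    tr, det = a + c, a * c - b * b
    disc = (tr * tr / 4.0 - det) ** 0.5   # always real for symmetric matrices
    return tr / 2.0 - disc, tr / 2.0 + disc

def neg_def_direct(a, b, c):
    """Negative definiteness via the largest eigenvalue."""
    return eigs_2x2_sym(a, b, c)[1] < 0

def neg_def_schur(a, b, c):
    """Schur test: the (1,1) block and its Schur complement are negative."""
    return a < 0 and c - b * b / a < 0

random.seed(0)
for _ in range(10_000):
    a, b, c = (random.uniform(-3, 3) for _ in range(3))
    assert neg_def_direct(a, b, c) == neg_def_schur(a, b, c)
print("Schur-complement test matches the eigenvalue test on 10000 random matrices")
```

This equivalence is what lets the theorems below replace a nonlinear definiteness condition by an LMI that solvers can handle.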

Lemma 2.6 (see [25]). *Given any matrices $X$, $Y$, and $\Lambda$ with appropriate dimensions such that $\Lambda>0$, and any scalar $\varepsilon>0$, one has
$$X^{T}Y+Y^{T}X\le\varepsilon X^{T}\Lambda X+\varepsilon^{-1}Y^{T}\Lambda^{-1}Y.$$*
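In its standard form, Lemma 2.6 with $\Lambda=I$ and vectors $x,y$ reduces to the Young-type inequality $2x^{T}y\le\varepsilon x^{T}x+\varepsilon^{-1}y^{T}y$, which follows from $0\le|\sqrt{\varepsilon}\,x-y/\sqrt{\varepsilon}|^{2}$. A quick numerical sanity check of this special case (illustrative only):

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def bound_holds(x, y, eps, tol=1e-12):
    """Check the Young-type inequality 2 x^T y <= eps x^T x + (1/eps) y^T y."""
    return 2 * dot(x, y) <= eps * dot(x, x) + dot(y, y) / eps + tol

random.seed(1)
for _ in range(1000):
    n = random.randint(1, 5)
    x = [random.uniform(-2, 2) for _ in range(n)]
    y = [random.uniform(-2, 2) for _ in range(n)]
    eps = random.uniform(0.1, 5.0)
    assert bound_holds(x, y, eps)
print("Young-type inequality verified on 1000 random samples")
```

In the proofs below, this bound is what splits the cross terms involving the delayed state into separately tractable quadratic terms.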

In this paper, we consider the following dynamical neural networks with time-varying delays:
$$\dot{x}_{i}(t)=-c_{i}x_{i}(t)+\sum_{j=1}^{n}a_{ij}f_{j}(x_{j}(t))+\sum_{j=1}^{n}b_{ij}f_{j}(x_{j}(t-\tau(t)))+u_{i}(t),$$
or equivalently
$$\dot{x}(t)=-Cx(t)+Af(x(t))+Bf(x(t-\tau(t)))+u(t), \tag{2.9}$$
where $x(t)\in\mathbb{R}^{n}$ is the neuron state, $u(t)$ is the input, and $f(\cdot)$ denotes the nonlinear neuron activation function. $C=\operatorname{diag}(c_{1},\dots,c_{n})$ is a positive diagonal matrix. $A$ and $B$ are the interconnection matrices representing the weighting coefficients of the neurons. $\tau(t)$ is the time-varying delay.

Throughout this paper, we always suppose that (H1) the activation function satisfies $f(0)=0$ and is globally Lipschitz, and (H2) the delay satisfies $0\le\tau(t)\le\tau$ for some constant $\tau>0$. From (H1), we easily see that $x(t)\equiv 0$ is the solution of (2.9) with $u\equiv 0$.
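A common concrete choice satisfying assumptions of this type is the activation $f=\tanh$, which vanishes at the origin and is globally Lipschitz with constant $1$. The following sketch (an illustration; the paper does not fix a particular activation) verifies both properties numerically:

```python
import math, random

f = math.tanh   # hypothetical activation; the paper does not fix one

assert f(0.0) == 0.0   # ensures x = 0 solves the unforced system
random.seed(2)
for _ in range(5000):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    # global Lipschitz condition |f(a) - f(b)| <= L |a - b| with L = 1
    assert abs(f(a) - f(b)) <= abs(a - b) + 1e-12
print("tanh satisfies f(0) = 0 and is globally Lipschitz with constant 1")
```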

#### 3. ISS Analysis

In this section, we give two theorems on ISS in the form of LMIs.

Theorem 3.1. *Let (H1) and (H2) hold. If there exist a positive definite matrix $P$ and a positive diagonal matrix $\Lambda$ satisfying the LMI condition (3.1), then the system (2.9) is ISS.*

*Proof. * We consider the following functional:
Its derivative along the solution of (2.9) is given as
We have
Since the first term of the right-hand side of (3.4) is negative semidefinite, we obtain
From (H1), we obtain
where .

Then by Lemma 2.6, we have
Substituting (3.5), (3.6), and (3.7) into (3.3), we finally obtain
where .

Defining suitable $\mathcal{K}_{\infty}$-functions from the quadratic bounds above, we can obtain that

Note that this condition is equivalent to (3.1) by Lemma 2.5. Then the defined functional $V$ is an ISS Lyapunov-Krasovskii functional. It follows from Lemma 2.4 and Remark 2.3 that the delayed neural network (2.9) is ISS. The proof is complete.
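Once candidate matrices $P$ and $\Lambda$ have been produced by an LMI solver, their positive definiteness can be confirmed independently via Sylvester's criterion. The sketch below uses hypothetical $3\times 3$ candidates, since the paper's numerical solutions are not reproduced here:

```python
def det(M):
    """Determinant by cofactor expansion (adequate for small LMI blocks)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_positive_definite(M):
    """Sylvester's criterion: every leading principal minor is positive."""
    return all(det([row[:k] for row in M[:k]]) > 0 for k in range(1, len(M) + 1))

# Hypothetical candidates (not the paper's actual solutions)
P = [[2.0, 0.3, 0.1],
     [0.3, 1.5, 0.2],
     [0.1, 0.2, 1.8]]
Lam = [[1.0, 0.0, 0.0],
       [0.0, 0.7, 0.0],
       [0.0, 0.0, 1.2]]
assert is_positive_definite(P) and is_positive_definite(Lam)
print("candidate P and Lambda pass the positive-definiteness check")
```

The same check applied to the negated LMI block verifies its negative definiteness, which is the feasibility condition of Theorem 3.1.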

*Remark 3.2. * Theorem 3.1 reduces to an asymptotic stability condition for dynamical neural networks with time-varying delays when $u\equiv 0$.

*Remark 3.3. * Recently, some results on ISS or IOSS were obtained in [10, 17–19, 26]. However, these results were restricted to the nondelayed or constant-delay case. In contrast to the results of [10, 17–19, 26], in this paper we consider dynamical neural networks with time-varying delays and propose a set of delay-independent ISS criteria for these networks.

In the following, we give a delay-dependent sufficient criterion.

Theorem 3.4. *Let (H1) and (H2) hold. The system (2.9) is ISS if there exist a symmetric positive definite matrix $P$ and a positive definite matrix $Q$ such that the LMI conditions (3.10) and (3.11) are satisfied.*

* Proof. * We consider the following functional:
where $Q$ is a positive definite matrix.

The derivative of (3.12) along the trajectories of the system is obtained as follows:
From (3.10), this reduces to
From (H1), we obtain that
where .

From Lemma 2.6, we have
Then
For the third term of (3.14), we have
Substituting (3.15), (3.16), (3.17), and (3.18) into (3.14), we can obtain the following inequality:
where we denote that
From (3.11), we easily obtain that .

Define $\mathcal{K}_{\infty}$-functions as in the proof of Theorem 3.1. Then we can obtain that
From Lemma 2.4 and Remark 2.3, the system (2.9) is ISS. The proof is complete.

#### 4. Illustrative Examples

In this section, we will give two examples to show the efficiency of the results derived in Section 3.

*Example 4.1. * Consider a three-dimensional dynamical neural network (2.9) with parameters defined as

The activation function and the time-varying delay $\tau(t)$ are chosen so that they satisfy assumptions (H1) and (H2), respectively; obviously, suitable constants exist that satisfy these conditions.

By using MATLAB to solve the LMI (3.1), we obtain a feasible solution.
From Theorem 3.1, we can see that the delayed neural network (2.9) is ISS.
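To see the conclusion of Theorem 3.1 at work, one can simulate a small delayed network of the form (2.9) under a bounded input and observe that the state remains bounded. The parameters below are hypothetical (the example's actual matrices are not reproduced here), and the forward-Euler scheme with a delay buffer is only a rough numerical sketch:

```python
import math

# Hypothetical network data; the example's actual matrices are not reproduced
C = [2.0, 2.0, 2.0]                        # positive diagonal decay rates
A = [[0.2, -0.1, 0.1], [0.1, 0.3, -0.2], [-0.1, 0.2, 0.2]]
B = [[0.1, 0.2, -0.1], [-0.2, 0.1, 0.1], [0.1, -0.1, 0.2]]
tau = lambda t: 0.5 + 0.3 * math.sin(t)    # time-varying delay, bounded by 0.8
u = lambda t: [0.3 * math.cos(t)] * 3      # bounded external input

def simulate(x0, t_end=20.0, dt=1e-3):
    """Forward Euler for x' = -Cx + A f(x) + B f(x(t - tau(t))) + u(t),
    with f = tanh; the initial history is approximated by clamping at t = 0."""
    hist = [list(x0)]
    for k in range(int(t_end / dt)):
        t = k * dt
        x = hist[-1]
        xd = hist[max(0, k - int(tau(t) / dt))]   # delayed state sample
        fx = [math.tanh(v) for v in x]
        fxd = [math.tanh(v) for v in xd]
        uk = u(t)
        hist.append([x[i] + dt * (-C[i] * x[i]
                                  + sum(A[i][j] * fx[j] for j in range(3))
                                  + sum(B[i][j] * fxd[j] for j in range(3))
                                  + uk[i])
                     for i in range(3)])
    return hist

traj = simulate([1.0, -2.0, 0.5])
assert all(max(abs(v) for v in x) < 3.0 for x in traj)   # bounded response
print("state stays bounded under the bounded input, as ISS predicts")
```

With the strong diagonal decay chosen here, the transient from the initial state dies out and the state settles into a bounded response driven by the input, which is exactly the qualitative behavior that the ISS estimate guarantees.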

*Example 4.2. * Consider a three-dimensional dynamical neural network (2.9) with parameters given as

The activation function and the time-varying delay $\tau(t)$ are chosen so that assumptions (H1) and (H2) can be checked for any $t$, and the required constants are computed accordingly.

By solving (3.10) and (3.11), we obtain a feasible solution.

From Theorem 3.4, we can see that the delayed neural network (2.9) is ISS.

However, the above results cannot be obtained by using the ISS criteria in existing publications (e.g., [10, 11, 17–19, 26]).

#### 5. Conclusions

In this paper, dynamical neural networks with time-varying delays were considered. By using the Lyapunov-Krasovskii functional method and linear matrix inequality (LMI) techniques, several theorems for verifying the ISS property of DNNs with time-varying delays have been obtained. It is shown that ISS can be determined by solving a set of LMIs, which can be checked by standard numerical packages in MATLAB. Finally, two numerical examples were given to illustrate the theoretical results.

#### Appendix

*Proof of Lemma 2.4. * We divide the proof of this lemma into four claims. *Claim* *1.* The solution $x=0$ of the system (2.1) is uniformly asymptotically stable if and only if there exist a function $\beta$ of class $\mathcal{KL}$ and a positive number $\delta$, independent of $t_{0}$, such that for $\|\xi\|\le\delta$ it satisfies
$$|x(t;t_{0},\xi,0)|\le\beta(\|\xi\|,t-t_{0}),\quad t\ge t_{0}. \tag{A.1}$$
Particularly, the system is uniformly globally asymptotically stable if and only if (A.1) holds for any $\xi$.

This claim is standard, so we omit the proof here.

*Claim* *2.* For each $u$, if there exist a continuous functional $V$, functions of class $\mathcal{K}_{\infty}$, and a continuous positive definite function such that (A.2) and (A.3) hold,
then the solution $x=0$ is globally uniformly asymptotically stable, and there exists a $\beta\in\mathcal{KL}$ such that (A.4) holds.

*Proof. * From [27], the solution $x=0$ is globally uniformly asymptotically stable. Then, by Claim 1, we obtain (A.4). The proof is complete.

*Claim* *3.* Let (A.3) in Claim 2 be replaced by condition (A.5).
Then for any admissible input, there exist constants such that (A.6) holds.

*Proof. * Define the sublevel set determined by (A.2) and (A.5) (without loss of generality, we may normalize so that the stated inequalities hold). We divide the argument into two cases. *Case* *1.* The trajectory starts inside the sublevel set.

We claim that the trajectory always remains in this set. Indeed, if the trajectory reached the boundary of the set at some time, then by (A.2) and (A.5) the derivative condition would force it back into the set; arguing in the same way at any later boundary time, we obtain the claim. *Case* *2.* The trajectory starts outside the sublevel set.

Consider the first entry time into the set. We prove that this time is finite. From (A.2), (A.5), and the assumption of Case 2, the functional is strictly decreasing along the trajectory, so it reaches the level of the set in finite time, and the entry time is finite. Then, by Case 1, the trajectory remains in the set once it reaches its boundary. Then we obtain (A.6). The proof is complete.

*Claim* *4.* Let (A.3) in Claim 2 be replaced by condition (A.7). Then the system is ISS.

*Proof. * From Claim 3, we obtain the corresponding estimate on the trajectory. Since $x(t)$ depends only on the input values defined on $[t_{0},t]$, we may replace $\|u\|_{\infty}$ by the norm of the truncation $u_{t}$, which yields the ISS estimate of Definition 2.1. This proves that the system is ISS.

#### Acknowledgment

The work is supported partially by the National Natural Science Foundation of China under Grants no. 10971240, 61263020, and 61004042, the Key Project of the Chinese Education Ministry under Grant no. 212138, the Natural Science Foundation of Chongqing under Grant CQ CSTC 2011BB0117, and the Science and Technology Project of the Chongqing Education Commission under Grant KJ120630.

#### References

1. J. J. Hopfield, “Neurons with graded response have collective computational properties like those of two state neurons,” *Proceedings of the National Academy of Sciences of the United States of America*, vol. 81, no. 10, pp. 3088–3092, 1984.
2. S. Arik, “Stability analysis of delayed neural networks,” *IEEE Transactions on Circuits and Systems I*, vol. 47, no. 7, pp. 1089–1092, 2000.
3. T. Ensari and S. Arik, “Global stability analysis of neural networks with multiple time varying delays,” *IEEE Transactions on Automatic Control*, vol. 50, no. 11, pp. 1781–1785, 2005.
4. C. K. Ahn, “Passive learning and input-to-state stability of switched Hopfield neural networks with time-delay,” *Information Sciences*, vol. 180, pp. 4582–4594, 2010.
5. C. K. Ahn, “${\mathcal{L}}_{2}-{\mathcal{L}}_{\infty}$ filtering for time-delayed switched Hopfield neural networks,” *International Journal of Innovative Computing, Information and Control*, vol. 7, no. 5, pp. 1831–1843, 2011.
6. T. W. Huang, A. Chan, Y. Huang, and J. D. Cao, “Stability of Cohen-Grossberg neural networks with time-varying delays,” *Neural Networks*, vol. 20, no. 8, pp. 868–873, 2007.
7. Z. Yang and D. Xu, “Robust stability of uncertain impulsive control systems with time-varying delay,” *Computers & Mathematics with Applications*, vol. 53, no. 5, pp. 760–769, 2007.
8. Z. Yang, T. Huang, L. Zhang, and Z. Yang, “On networked control of impulsive hybrid systems,” *Computers & Mathematics with Applications*, vol. 61, no. 8, pp. 2076–2080, 2011.
9. Z. Yang and D. Xu, “Impulsive effects on stability of Cohen-Grossberg neural networks with variable delays,” *Applied Mathematics and Computation*, vol. 177, no. 1, pp. 63–78, 2006.
10. C. K. Ahn, “An input-to-state stability approach to filter design for neural networks with noise disturbance,” *Advanced Science Letters*, vol. 5, pp. 275–278, 2012.
11. S. Zhu and Y. Shen, “Two algebraic criteria for input-to-state stability of recurrent neural networks with time-varying delays,” *Neural Computing and Applications*, pp. 1–7, 2012.
12. C. K. Ahn, “Linear matrix inequality optimization approach to exponential robust filtering for switched Hopfield neural networks,” *Journal of Optimization Theory and Applications*, vol. 154, no. 2, pp. 573–587, 2012.
13. C. K. Ahn, “An error passivation approach to filtering for switched neural networks with noise disturbance,” *Neural Computing and Applications*, vol. 21, no. 5, pp. 853–861, 2012.
14. Z. C. Yang and W. S. Zhou, “Input-to-state stability of impulsive hybrid systems with stochastic effects,” in *Proceedings of the 24th IEEE Chinese Control and Decision Conference*, pp. 286–291, 2012.
15. Z. Yang and Y. Hong, “Stabilization of impulsive hybrid systems using quantized input and output feedback,” *Asian Journal of Control*, vol. 14, no. 3, pp. 679–692, 2012.
16. E. N. Sanchez and J. P. Perez, “Input-to-state stability (ISS) analysis for dynamic neural networks,” *IEEE Transactions on Circuits and Systems I*, vol. 46, no. 11, pp. 1395–1398, 1999.
17. C. K. Ahn, “Robust stability of recurrent neural networks with ISS learning algorithm,” *Nonlinear Dynamics*, vol. 65, no. 4, pp. 413–419, 2011.
18. C. K. Ahn, “${\mathcal{L}}_{2}-{\mathcal{L}}_{\infty}$ nonlinear system identification via recurrent neural networks,” *Nonlinear Dynamics*, vol. 62, no. 3, pp. 543–552, 2010.
19. C. K. Ahn, “Some new results on stability of Takagi-Sugeno fuzzy Hopfield neural networks,” *Fuzzy Sets and Systems*, vol. 179, pp. 100–111, 2011.
20. E. D. Sontag, “Smooth stabilization implies coprime factorization,” *IEEE Transactions on Automatic Control*, vol. 34, no. 4, pp. 435–443, 1989.
21. L. Praly and Z.-P. Jiang, “Stabilization by output feedback for systems with ISS inverse dynamics,” *Systems & Control Letters*, vol. 21, no. 1, pp. 19–33, 1993.
22. D. Angeli, E. D. Sontag, and Y. Wang, “A characterization of integral input-to-state stability,” *IEEE Transactions on Automatic Control*, vol. 45, no. 6, pp. 1082–1097, 2000.
23. E. D. Sontag and Y. Wang, “On characterizations of the input-to-state stability property,” *Systems & Control Letters*, vol. 24, no. 5, pp. 351–359, 1995.
24. S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, *Linear Matrix Inequalities in System and Control Theory*, vol. 15 of *SIAM Studies in Applied Mathematics*, SIAM, Philadelphia, Pa, USA, 1994.
25. A. S. Poznyak and E. N. Sanchez, “Nonlinear systems approximation by neural networks: error stability analysis,” *Intelligent Automation and Soft Computing*, vol. 1, pp. 247–258, 1995.
26. C. K. Ahn, “A new robust training law for dynamic neural networks with external disturbance: an LMI approach,” *Discrete Dynamics in Nature and Society*, vol. 2010, Article ID 415895, 14 pages, 2010.
27. J. Hale, *Theory of Functional Differential Equations*, Springer, New York, NY, USA, 1976.

#### Copyright

Copyright © 2012 Weisong Zhou and Zhichun Yang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.