Abstract

A class of dynamical neural network models with time-varying delays is considered. By employing the Lyapunov-Krasovskii functional method and linear matrix inequalities (LMIs) technique, some new sufficient conditions ensuring the input-to-state stability (ISS) property of the nonlinear network systems are obtained. Finally, numerical examples are provided to illustrate the effectiveness of the derived results.

1. Introduction

Recently, dynamical neural networks (DNNs), which were first introduced by Hopfield in [1], have been extensively studied due to their wide applications in various areas such as associative memory, parallel computation, signal processing, optimization, and moving object speed detection. Since time delay is inevitably encountered in the implementation of DNNs and is frequently a source of oscillation and instability, neural networks with time delays have become a topic of great theoretical and practical importance, and many interesting results have been derived (see, e.g., [2–5] and [6–9]). Furthermore, in the practical evolution of such networks, a purely constant delay is rarely encountered and is only a rough approximation of the time-varying delays that actually arise: delays generally vary with time because the transmission of information from one neuron to another is itself time dependent. Accordingly, the dynamical behavior of neural networks with time-varying delays has been discussed extensively in the last decades (see, e.g., [3, 8–11]).

It is well known that neural networks are often influenced by external disturbances and input errors. Thus dissipative properties such as robustness [12], passivity [13], and input-to-state stability [4, 10, 11, 14–19] are clearly important for analyzing the dynamical behavior of such networks. For instance, Ahn incorporated a robust training law into switched Hopfield neural networks with external disturbances to study boundedness and exponential stability [12], and studied passivity in [13]. In particular, ISS implies not only that the unperturbed system is asymptotically stable in the Lyapunov sense but also that its behavior remains bounded when its inputs are bounded. It is one of the most useful dissipative properties for nonlinear systems; it was first introduced for nonlinear control systems by Sontag in [20] and then extended by Praly and Jiang [21], Angeli et al., and Ahn (see [17, 19] and the references therein). Against this research background, the ISS properties of neural networks have been investigated in recent years (see, e.g., [16–19] and the references therein). For example, by using the Lyapunov function method, some matrix norm conditions for ISS have been developed for recurrent neural networks with nonlinear feedback [16]. Moreover, Ahn used the Lyapunov function method to discuss the robust stability problem for a class of recurrent neural networks, and proposed LMI-based sufficient conditions guaranteeing ISS [17]. In [18], by employing a suitable Lyapunov function, some results on boundedness, ISS, and convergence were established. Also, in [19] a new sufficient condition was derived to guarantee ISS of Takagi-Sugeno fuzzy Hopfield neural networks with time delay. However, there are few results dealing with the ISS of dynamical neural networks (DNNs) with time-varying delays ([11]).

Motivated by the above discussions, we study the ISS properties of DNNs with time-varying delays in this paper. By using the Lyapunov-Krasovskii functional technique, ISS conditions for the considered dynamical neural networks are given in terms of LMIs, which can easily be checked by standard numerical packages. We also provide two illustrative examples to demonstrate the effectiveness of the proposed stability results.

The organization of this paper is as follows. In Section 2, our mathematical model of dynamical neural networks is presented and some preliminaries are given. In Section 3, the main results on both ISS and asymptotic stability of dynamical neural networks with time-varying delays are proposed. In Section 4, two numerical examples are given to demonstrate the effectiveness of the theoretical results. Concluding remarks are collected in Section 5. The proof of Lemma 2.4 is given in the Appendix.

Notations
Let $\mathbb{R}^n$ denote the $n$-dimensional Euclidean space and $|\cdot|$ denote the usual Euclidean norm. Denote by $C = C([-\tau, 0]; \mathbb{R}^n)$ the space of continuous functions mapping $[-\tau, 0]$ into $\mathbb{R}^n$, and designate the norm of an element $\phi$ in $C$ by $\|\phi\| = \sup_{-\tau \le \theta \le 0} |\phi(\theta)|$. $\mathbb{R}^{n \times m}$ is the set of all $n \times m$ real matrices. Let $A^{T}$, $A^{-1}$, $\lambda_{\max}(A)$, $\lambda_{\min}(A)$, and $\|A\|$ denote the transpose, the inverse, the largest eigenvalue, the smallest eigenvalue, and the Euclidean norm of a square matrix $A$, respectively. The notation $P > 0$ ($P \ge 0$) means that $P$ is real symmetric and positive definite (positive semidefinite). The notation $P > Q$ ($P \ge Q$), where $P$ and $Q$ are symmetric matrices, means that $P - Q$ is positive definite (positive semidefinite). $I$ denotes the identity matrix of appropriate dimension. The set of all measurable locally essentially bounded functions $u : \mathbb{R}_{\ge 0} \to \mathbb{R}^m$, endowed with the (essential) supremum norm $\|u\|_{\infty} = \operatorname{ess\,sup}_{t \ge 0} |u(t)|$, is denoted by $L_{\infty}^{m}$. In addition, $u_t$ denotes the truncation of $u$ at $t$; that is, $u_t(s) = u(s)$ if $s \le t$, and $u_t(s) = 0$ if $s > t$. We recall that a function $\gamma : \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$ is a $\mathcal{K}$ function if it is continuous, strictly increasing, and $\gamma(0) = 0$; it is a $\mathcal{K}_{\infty}$ function if it is a $\mathcal{K}$ function and also $\gamma(s) \to \infty$ as $s \to \infty$. A function $\beta : \mathbb{R}_{\ge 0} \times \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$ is a $\mathcal{KL}$ function if, for each fixed $t$, the function $\beta(\cdot, t)$ is a $\mathcal{K}$ function and, for each fixed $s$, $\beta(s, \cdot)$ is decreasing to zero as $t \to \infty$.
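For instance (a simple illustration of these definitions, not taken from the original text), $\gamma(s) = s^{2}$ is a $\mathcal{K}_{\infty}$ function, while $\beta(s, t) = s\,e^{-t}$ is a $\mathcal{KL}$ function: for each fixed $t$ it is continuous, strictly increasing in $s$, and vanishes at $s = 0$, and for each fixed $s$ it decreases to zero as $t \to \infty$.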

2. Mathematical Model and Preliminaries

Consider the following nonlinear time-delay system (2.1), where $x(t)$ is the state vector and $u(t)$ is the input function; $x_t$ is the standard segment function given by $x_t(\theta) = x(t + \theta)$ for $\theta \in [-\tau, 0]$. Without loss of generality, we suppose that the right-hand side vanishes at the origin, which ensures that $x \equiv 0$ is the trivial solution of the unforced system. We denote by $x(t; t_0, \xi, u)$ the solution of system (2.1) with initial value $\xi$ at time $t_0$.
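A minimal sketch of the standard form of (2.1), assuming the symbols $x$, $x_t$, $u$, and $f$ used in the description above, is
\[
\dot{x}(t) = f\bigl(t, x_t, u(t)\bigr), \qquad t \ge t_0, \qquad x_{t_0} = \xi \in C,
\]
with $f(t, 0, 0) = 0$, so that $x \equiv 0$ is indeed a solution of the unforced system.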

Given a continuous functional $V : \mathbb{R}_{\ge 0} \times C \to \mathbb{R}_{\ge 0}$, the upper right-hand derivative of $V$ along the solution of (2.1) is given by
\[
D^{+}V(t, x_t) = \limsup_{h \to 0^{+}} \frac{1}{h}\Bigl[ V\bigl(t + h, x_{t+h}\bigr) - V\bigl(t, x_t\bigr) \Bigr].
\]

For delayed dynamical systems, we first give the definition of input-to-state stability (ISS), as in the usual case.

Definition 2.1. System (2.1) is ISS if there exist a $\mathcal{KL}$ function $\beta$ and a $\mathcal{K}$ function $\gamma$ such that, for each input $u \in L_{\infty}^{m}$ and each initial condition, the corresponding solution satisfies the ISS estimate below. Note that, by causality, the same definition would result if one replaced the supremum norm of $u$ by that of its truncation $u_t$.
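In the literature initiated by [20] and extended to time-delay systems in, e.g., [23], the estimate in Definition 2.1 is standardly written as follows; we assume this form, with the symbols of the Notations:
\[
|x(t; t_0, \xi, u)| \le \beta\bigl(\|\xi\|, t - t_0\bigr) + \gamma\bigl(\|u\|_{\infty}\bigr), \qquad t \ge t_0 .
\]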

Definition 2.2. A continuously differentiable functional $V$ is called an ISS Lyapunov-Krasovskii functional if there exist functions $\alpha_{1}, \alpha_{2}$ of class $\mathcal{K}_{\infty}$, a function $\chi$ of class $\mathcal{K}$, and a continuous positive definite function $W$ such that conditions (2.4) and (2.5) hold.

Remark 2.3. A continuously differentiable functional $V$ is an ISS Lyapunov-Krasovskii functional if and only if there exist functions of the above classes such that (2.4) holds together with a dissipation-type bound on $D^{+}V$. The proof is similar to that of Remark 2.4 in [23]; we omit it here.
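For reference, the conditions in Definition 2.2 and Remark 2.3 typically take the following form in the time-delay ISS literature (e.g., [23]); we record them here as an assumed formulation rather than a reproduction of the original displays: a sandwich bound
\[
\alpha_{1}\bigl(|\phi(0)|\bigr) \le V(t, \phi) \le \alpha_{2}\bigl(\|\phi\|\bigr)
\]
(cf. (2.4)), together with a dissipation-type bound on the derivative along solutions of (2.1),
\[
D^{+}V(t, x_t) \le -W\bigl(|x(t)|\bigr) + \chi\bigl(|u(t)|\bigr).
\]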

Similarly to the case of ordinary differential equations (ODEs), we establish a link between the ISS property and the ISS Lyapunov-Krasovskii functional for time-delay systems in the following lemma.

Lemma 2.4. The system (2.1) is ISS if it admits an ISS Lyapunov-Krasovskii functional.
For completeness, the proof is given in the Appendix.

To obtain our results, we need the following two useful lemmas.

Lemma 2.5 (Schur Complement [24]). For a given symmetric matrix $S = \begin{pmatrix} S_{11} & S_{12} \\ S_{12}^{T} & S_{22} \end{pmatrix}$, where $S_{11} \in \mathbb{R}^{r \times r}$, the following three conditions are equivalent: (i) $S < 0$; (ii) $S_{11} < 0$ and $S_{22} - S_{12}^{T} S_{11}^{-1} S_{12} < 0$; (iii) $S_{22} < 0$ and $S_{11} - S_{12} S_{22}^{-1} S_{12}^{T} < 0$.
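As an illustration of how Lemma 2.5 is typically applied (this particular instance is ours, not one of the original displays): for symmetric matrices $P$ and $R$ with $R > 0$, the nonlinear matrix inequality
\[
A^{T}P + PA + PBR^{-1}B^{T}P < 0
\]
holds if and only if the linear matrix inequality
\[
\begin{pmatrix} A^{T}P + PA & PB \\ B^{T}P & -R \end{pmatrix} < 0
\]
holds, since the left-hand side of the former is the Schur complement of the block $-R$ in the latter.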

Lemma 2.6 (see [25]). Given any vectors $a$ and $b$ and any matrix $X$ of appropriate dimensions such that $X = X^{T} > 0$, and any scalar $\varepsilon > 0$, one has $2 a^{T} b \le \varepsilon\, a^{T} X a + \varepsilon^{-1} b^{T} X^{-1} b$.

In this paper, we consider the following dynamical neural networks with time-varying delays, written componentwise or, equivalently, in the compact vector form (2.9), where $x(t)$ is the neuron state, $u(t)$ is the input, and the nonlinearity is the neuron activation function; the self-feedback matrix is positive diagonal, the interconnection matrices represent the weighting coefficients of the neurons, and the delay is time varying.
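A standard vector form of such networks, which we record here for reference (the particular symbols $A$, $W_{0}$, $W_{1}$, $g$, and $\tau(t)$ are notational assumptions on our part, not necessarily those of the original displays in (2.9)), is
\[
\dot{x}(t) = -A x(t) + W_{0}\, g\bigl(x(t)\bigr) + W_{1}\, g\bigl(x(t - \tau(t))\bigr) + u(t),
\]
where $A$ is the positive diagonal self-feedback matrix, $W_{0}$ and $W_{1}$ are the interconnection matrices, $g$ is the activation function, and $\tau(t)$ is the time-varying delay.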

Throughout this paper, we always suppose that two standing assumptions, one on the activation function and one on the time-varying delay, hold. Under these assumptions, we easily see that $x \equiv 0$ is a solution of (2.9) when $u \equiv 0$.
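Standing assumptions of this type typically take the following form, stated here with our own symbols as an assumed formulation: the activation function is Lipschitz continuous and vanishes at the origin, and the delay is bounded with a bounded rate of variation, for example
\[
|g_{i}(s_{1}) - g_{i}(s_{2})| \le l_{i}\,|s_{1} - s_{2}|, \quad g_{i}(0) = 0, \qquad 0 \le \tau(t) \le \bar{\tau}, \quad \dot{\tau}(t) \le \mu < 1,
\]
for all $s_{1}, s_{2} \in \mathbb{R}$ and $t \ge 0$.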

3. ISS Analysis

In this section, we give two theorems on ISS in the form of LMIs.

Theorem 3.1. Let the two standing assumptions hold. If there exist a positive definite matrix and a positive diagonal matrix such that the LMI (3.1) is satisfied, with the notation specified there, then the system (2.9) is ISS.

Proof. We consider the following Lyapunov-Krasovskii functional. Its derivative along the solutions of (2.9) is given by (3.3), from which we have (3.4). Since the first term on the right-hand side of (3.4) is negative semidefinite, we obtain (3.5). From the assumption on the activation function, we obtain (3.6), with the notation defined there.
Then, by Lemma 2.6, we have (3.7). Substituting (3.5), (3.6), and (3.7) into (3.3), we finally obtain (3.8), with the notation defined there.
Defining suitable class-$\mathcal{K}$ functions as required by Remark 2.3, we then obtain the corresponding dissipation estimate for the derivative of the functional.
Note that the resulting matrix condition is equivalent to (3.1) by Lemma 2.5. Hence the functional defined above is an ISS Lyapunov-Krasovskii functional, and it follows from Lemma 2.4 and Remark 2.3 that the delayed neural network (2.9) is ISS. The proof is complete.

Remark 3.2. Theorem 3.1 reduces to an asymptotic stability condition for dynamical neural networks with time-varying delays when the input is identically zero.

Remark 3.3. Recently, some results on ISS or IOSS were obtained in [10, 17–19, 26]. However, these results were restricted to the delay-free or constant-delay cases. In contrast to the results in [10, 17–19, 26], in this paper we consider dynamical neural networks with time-varying delays and propose delay-independent ISS criteria for these networks.

In the following, we give a delay-dependent sufficient criterion.

Theorem 3.4. Let the two standing assumptions hold. The system (2.9) is ISS if there exist a symmetric positive definite matrix and a positive definite matrix such that the LMIs (3.10) and (3.11) hold, with the notation specified there.

Proof. We consider the Lyapunov-Krasovskii functional (3.12), in which the weighting matrix is positive definite.
The derivative of (3.12) along the trajectories of the system is computed first; using (3.10), it reduces to (3.14). From the assumption on the activation function, we obtain (3.15), with the notation defined there.
From Lemma 2.6, we have (3.16) and hence (3.17). For the third term of (3.14), we have (3.18). Substituting (3.15), (3.16), (3.17), and (3.18) into (3.14), we obtain the desired inequality, with the notation introduced there. From (3.11), we easily obtain that the resulting matrix is negative definite.
Defining class-$\mathcal{K}$ functions as in the proof of Theorem 3.1, we then obtain the corresponding dissipation estimate. From Lemma 2.4 and Remark 2.3, the system (2.9) is ISS. The proof is complete.

4. Illustrative Examples

In this section, we give two examples to show the effectiveness of the results derived in Section 3.

Example 4.1. Consider a three-dimensional dynamical neural network (2.9) with the parameters defined as follows.
The activation function and the time-varying delay are chosen as indicated; they satisfy the two standing assumptions, respectively, and the associated constants are easily verified.
By using MATLAB to solve the LMI (3.1), we obtain a feasible solution. From Theorem 3.1, we can then conclude that the delayed neural network (2.9) is ISS.
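Although the specific LMI (3.1) and the feasible matrices returned by MATLAB are not listed here, the workflow of such a numerical feasibility check can be sketched with any semidefinite programming interface. The following minimal Python/CVXPY sketch is our own illustration: it uses a hypothetical system matrix and a generic Lyapunov-type LMI rather than the paper's condition (3.1).

import numpy as np
import cvxpy as cp

# Hypothetical 3x3 Hurwitz system matrix (illustration only, not the
# example data of Section 4).
A = np.array([[-2.0, 0.5, 0.0],
              [0.3, -3.0, 0.2],
              [0.0, 0.4, -2.5]])
n = A.shape[0]

# Decision variable: a symmetric matrix P; eps enforces strictness numerically.
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                 # P > 0
               A.T @ P + P @ A << -eps * np.eye(n)]  # A'P + PA < 0

# Pure feasibility problem: any feasible P certifies the LMI.
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)

print("status:", prob.status)          # 'optimal' means the LMI is feasible
if prob.status == cp.OPTIMAL:
    print("P =\n", np.round(P.value, 4))

In MATLAB, such checks are commonly carried out with the LMI toolbox or with YALMIP.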

Example 4.2. Consider a three-dimensional dynamical neural network (2.9) with the parameters given as follows.
The activation function and the time-varying delay are chosen as indicated. The two standing assumptions can be checked with the corresponding constants, which hold for any admissible argument; the required delay bounds also hold.

By solving (3.10) and (3.11), we obtain the following feasible solution.

From Theorem 3.4, we can see that the delayed neural network (2.9) is ISS.

However, the above results cannot be obtained by using the ISS criteria in existing publications (e.g., [10, 11, 17–19, 26]).

5. Conclusions

In this paper, dynamical neural networks with time-varying delays were considered. By using the Lyapunov-Krasovskii functional method and the linear matrix inequality (LMI) technique, several theorems for determining the ISS property of DNNs with time-varying delays have been obtained. It is shown that ISS can be verified by solving a set of LMIs, which can be checked by standard numerical packages in MATLAB. Finally, two numerical examples were given to illustrate the theoretical results.

Appendix

Proof of Lemma 2.4. We divide the proof of this lemma into four claims.
Claim 1. (i) The trivial solution of the system (2.9) is uniformly asymptotically stable if and only if there exist a class-$\mathcal{KL}$ function and a positive number, independent of the initial time, such that (A.1) holds for all initial conditions whose norm does not exceed this number. (ii) In particular, the system (2.9) is uniformly globally asymptotically stable if and only if (A.1) holds for any initial condition.
The claim is standard, so we omit the proof here.
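The estimate (A.1) invoked in Claim 1 is the familiar comparison-function bound for uniform asymptotic stability; in the notation assumed above it reads
\[
|x(t; t_0, \xi, 0)| \le \beta\bigl(\|\xi\|, t - t_0\bigr), \qquad t \ge t_0,
\]
for all initial conditions $\xi$ with norm not exceeding the positive number in part (i), and for all $\xi \in C$ in the global case of part (ii).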

Claim 2. If there exist a continuous functional, functions of the classes introduced in the Notations, and a continuous positive definite function such that (A.2) and (A.3) hold, then the trivial solution is globally uniformly asymptotically stable, and there exists a class-$\mathcal{KL}$ function such that (A.4) holds.

Proof. From [27], the solution is globally uniformly asymptotically stable. Then by Claim 1, we obtain (A.4). The proof is complete.

Claim 3. Let (A.3) in Claim 2 be replaced by the condition (A.5). Then, for any bounded input, there exist quantities such that the estimate (A.6) holds.

Proof. Choose the relevant quantities as indicated (without loss of generality, we may assume the stated normalization). In the following, we divide the argument into two cases.
Case  1.   .
We claim that the trajectory will always remain in the region under consideration. With the quantities defined as indicated, if the trajectory reaches the boundary of this region, then the stated inequalities hold and the trajectory cannot leave; the remaining possibility is analyzed in the same way. We thus obtain the required bound.
Case  2.   , that is, .
Let the quantities be defined as indicated; we prove that the corresponding time is finite. From (A.2), (A.5), and the condition of Case 2, we have the stated bound. Since the bounding function is strictly decreasing and tends to zero, the time in question is finite. Then, by the argument of Case 1, the trajectory will always remain in the region once it reaches its boundary. We thus obtain (A.6). The proof is complete.

Claim 4. Let (A.3) in Claim 2 be replaced by the corresponding input-dependent condition stated there. Then the system is ISS.

Proof. From Claim 3, we have the corresponding estimate. Since the solution up to time $t$ depends only on the input defined up to time $t$, we obtain the same estimate with the truncated input. Combining these bounds yields an ISS estimate of the form in Definition 2.1, which proves that the system is ISS.

Acknowledgment

The work is supported partially by the National Natural Science Foundation of China under Grants no. 10971240, 61263020, and 61004042, the Key Project of the Chinese Education Ministry under Grant no. 212138, the Natural Science Foundation of Chongqing under Grant CQ CSTC 2011BB0117, and the Foundation of the Science and Technology Project of the Chongqing Education Commission under Grant KJ120630.