Discrete Dynamics in Nature and Society

Volume 2010 (2010), Article ID 810408, 19 pages

http://dx.doi.org/10.1155/2010/810408

## Global Dissipativity on Uncertain Discrete-Time Neural Networks with Time-Varying Delays

^{1}Department of Mathematics, Chongqing Jiaotong University, Chongqing 400074, China

^{2}Yangtze Center of Mathematics, Sichuan University, Chengdu 610064, China

^{3}Department of Mathematics, Southeast University, Nanjing 210096, China

Received 24 December 2009; Accepted 14 February 2010

Academic Editor: Yong Zhou

Copyright © 2010 Qiankun Song and Jinde Cao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The problems of global dissipativity and global exponential dissipativity are investigated for uncertain discrete-time neural networks with time-varying delays and general activation functions. By constructing appropriate Lyapunov-Krasovskii functionals and employing the linear matrix inequality technique, several new delay-dependent criteria for checking the global dissipativity and global exponential dissipativity of the addressed neural networks are established in terms of linear matrix inequalities (LMIs), which can be checked numerically using the effective LMI toolbox in MATLAB. Illustrative examples are given to show the effectiveness of the proposed criteria. It is noteworthy that because neither model transformation nor free-weighting matrices are employed to deal with cross terms in the derivation of the dissipativity criteria, the obtained results are less conservative and more computationally efficient.

#### 1. Introduction

In the past few decades, delayed neural networks have found successful applications in many areas such as signal processing, pattern recognition, associative memories, parallel computation, and optimization solvers [1]. In such applications, the qualitative analysis of the dynamical behaviors is a necessary step for the practical design of neural networks [2]. Many important results on the dynamical behaviors have been reported for delayed neural networks; see [1–16] and the references therein for some recent publications.

It should be pointed out that all of the abovementioned literature on the dynamical behaviors of delayed neural networks concerns the continuous-time case. However, when implementing a continuous-time delayed neural network for computer simulation, it becomes essential to formulate a discrete-time system that is an analogue of the continuous-time delayed neural network. To some extent, the discrete-time analogue inherits the dynamical characteristics of the continuous-time delayed neural network under mild or no restriction on the discretization step-size, and also retains some functional similarity [17]. Unfortunately, as pointed out in [18], the discretization cannot preserve the dynamics of the continuous-time counterpart even for a small sampling period, and therefore there is a crucial need to study the dynamics of discrete-time neural networks. Recently, the dynamics analysis problem for discrete-time delayed neural networks and discrete-time systems with time-varying state delay has been extensively studied; see [17–21] and references therein.

It is well known that the stability problem is central to the analysis of dynamical systems, and various types of stability of an equilibrium point have captured the attention of researchers. Nevertheless, from a practical point of view, it is not always the case that every neural network has its orbits approach a single equilibrium point; in some situations there may be no equilibrium point at all. Therefore, the concept of dissipativity has been introduced [22]. As pointed out in [23], dissipativity is also an important concept in dynamical neural networks. The concept of dissipativity in dynamical systems is more general and has found applications in areas such as stability theory, chaos and synchronization theory, system norm estimation, and robust control [23]. Some sufficient conditions for checking the dissipativity of delayed neural networks and nonlinear delay systems have been derived; see, for example, [23–33] and references therein. In [23, 24], the authors analyzed the dissipativity of neural networks with constant delays and derived some sufficient conditions for their global dissipativity. In [25, 26], the authors considered the global dissipativity and global robust dissipativity of neural networks with both time-varying delays and unbounded distributed delays; several sufficient conditions for checking these properties were obtained. In [27, 28], by using the linear matrix inequality technique, the authors investigated the global dissipativity of neural networks with both discrete time-varying delays and distributed time-varying delays. In [29], the authors developed dissipativity notions for nonnegative dynamical systems with respect to linear and nonlinear storage functions and linear supply rates, and obtained a key result on the linearization of nonnegative dissipative dynamical systems.
In [30], the uniform dissipativity of a class of nonautonomous neural networks with time-varying delays was investigated by employing the M-matrix theory and inequality techniques. In [31–33], the dissipativity of a class of nonlinear delay systems was considered, and some sufficient conditions for checking the dissipativity were given. However, all of the abovementioned literature on the dissipativity of delayed neural networks and nonlinear delay systems concerns the continuous-time case. To the best of our knowledge, few authors have considered the problem of the dissipativity of uncertain discrete-time neural networks with time-varying delays. Therefore, the study of the dissipativity of uncertain discrete-time neural networks is not only important but also necessary.

Motivated by the above discussions, the objective of this paper is to study the problem on global dissipativity and global exponential dissipativity for uncertain discrete-time neural networks. By employing appropriate Lyapunov-Krasovskii functionals and LMI technique, we obtain several new sufficient conditions for checking the global dissipativity and global exponential dissipativity of the addressed neural networks.

*Notations 1. *The notations are quite standard. Throughout this paper, $I$ represents the identity matrix with appropriate dimensions; $\mathbb{N}$ stands for the set of nonnegative integers; $\mathbb{R}^n$ and $\mathbb{R}^{n \times m}$ denote, respectively, the $n$-dimensional Euclidean space and the set of all $n \times m$ real matrices. The superscript "$T$" denotes matrix transposition and the asterisk "*" denotes the elements below the main diagonal of a symmetric block matrix. $|A|$ denotes the absolute-value matrix given by $|A| = (|a_{ij}|)_{n \times n}$; the notation $X \ge Y$ (resp., $X > Y$) means that $X$ and $Y$ are symmetric matrices and that $X - Y$ is positive semidefinite (resp., positive definite). $\|x\|$ is the Euclidean norm in $\mathbb{R}^n$. For a positive constant $a$, $[a]$ denotes the integer part of $a$. For integers $a$, $b$ with $a < b$, $N[a, b]$ denotes the discrete interval given by $N[a, b] = \{a, a+1, \ldots, b\}$. $C(N[a, b], \mathbb{R}^n)$ denotes the set of all functions $\phi: N[a, b] \to \mathbb{R}^n$. Matrices, if not explicitly specified, are assumed to have compatible dimensions.

#### 2. Model Description and Preliminaries

In this paper, we consider the following discrete-time neural network model:

$$x(k+1) = Cx(k) + Af(x(k)) + Bf(x(k - \tau(k))) + u \tag{2.1}$$

for $k \in \mathbb{N}$, where $x(k) = (x_1(k), x_2(k), \ldots, x_n(k))^T$, $x_i(k)$ is the state of the $i$th neuron at time $k$; $f(x(k)) = (f_1(x_1(k)), f_2(x_2(k)), \ldots, f_n(x_n(k)))^T$, $f_j(x_j(k))$ denotes the activation function of the $j$th neuron at time $k$; $u = (u_1, u_2, \ldots, u_n)^T$ is the input vector; the positive integer $\tau(k)$ corresponds to the transmission delay and satisfies $\tau_1 \le \tau(k) \le \tau_2$ ($\tau_1$ and $\tau_2$ are known integers); $C = \mathrm{diag}(c_1, c_2, \ldots, c_n)$, where $c_i$ ($0 < c_i < 1$) describes the rate with which the $i$th neuron will reset its potential to the resting state in isolation when disconnected from the networks and external inputs; $A = (a_{ij})_{n \times n}$ is the connection weight matrix; $B = (b_{ij})_{n \times n}$ is the delayed connection weight matrix.
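As a quick sanity check on this recurrence, the following minimal Python sketch iterates a scalar ($n = 1$) instance of the model with a `tanh` activation. All parameter values here are illustrative only, not taken from the paper; the final check uses the crude bound $(|a| + |b| + |u|)/(1 - |c|)$, which holds whenever $|c| < 1$ and $|f| \le 1$.

```python
import math

def simulate(c, a, b, u, tau, x0_hist, steps):
    """Iterate a scalar instance of model (2.1):
    x(k+1) = c*x(k) + a*f(x(k)) + b*f(x(k - tau)) + u, with f = tanh."""
    f = math.tanh
    xs = list(x0_hist)                      # initial segment of length tau + 1
    for _ in range(steps):
        x_now, x_del = xs[-1], xs[-1 - tau]
        xs.append(c * x_now + a * f(x_now) + b * f(x_del) + u)
    return xs

# With |c| < 1 and |f| <= 1, every trajectory eventually enters the ball
# of radius (|a| + |b| + |u|) / (1 - |c|), a crude attractive set.
traj = simulate(c=0.5, a=0.3, b=0.2, u=1.0, tau=3, x0_hist=[5.0] * 4, steps=200)
radius = (0.3 + 0.2 + 1.0) / (1 - 0.5)      # = 3.0
print(all(abs(x) <= radius for x in traj[-50:]))
```

Note that this bound ignores the delay entirely; the LMI criteria developed below are delay-dependent and therefore sharper.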

The initial condition associated with model (2.1) is given by $x(s) = \phi(s)$ for $s \in N[-\tau_2, 0]$, where $\phi \in C(N[-\tau_2, 0], \mathbb{R}^n)$.

Throughout this paper, we make the following assumption [6].

(**H**) For any $a, b \in \mathbb{R}$ with $a \neq b$, there exist constants $l_i^-$ and $l_i^+$ such that

$$l_i^- \le \frac{f_i(a) - f_i(b)}{a - b} \le l_i^+, \quad i = 1, 2, \ldots, n.$$
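Assumption (**H**) is a sector-type condition on the difference quotients of the activation functions. The snippet below (illustrative only; `sector_bounds` is not from the paper) estimates these quotients numerically for $f = \tanh$, which satisfies (**H**) with constants $0$ and $1$ since its derivative lies in $(0, 1]$.

```python
import itertools
import math
import random

def sector_bounds(f, samples):
    """Empirical bounds on the difference quotients (f(a) - f(b)) / (a - b),
    which assumption (H) requires to lie between two fixed constants."""
    quots = [(f(a) - f(b)) / (a - b)
             for a, b in itertools.combinations(samples, 2)]
    return min(quots), max(quots)

random.seed(0)
pts = [random.uniform(-5.0, 5.0) for _ in range(200)]
lo, hi = sector_bounds(math.tanh, pts)
# tanh has derivative in (0, 1], so every difference quotient lies in (0, 1].
print(0.0 <= lo and hi <= 1.0)
```

Because the constants may be positive, negative, or zero, nonmonotonic activations are also admissible, which is what makes this assumption weaker than the usual Lipschitz-type ones.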

Similar to [23], we also give the following definitions for discrete-time neural networks (2.1).

*Definition 2.1. *Discrete-time neural networks (2.1) are said to be globally dissipative if there exists a compact set $S \subseteq \mathbb{R}^n$ such that, for every initial state $x_0 \in \mathbb{R}^n$, there exists a positive integer $K(x_0)$ such that $x(k, k_0, x_0) \in S$ for all $k \ge k_0 + K(x_0)$, where $x(k, k_0, x_0)$ denotes the solution of (2.1) from initial state $x_0$ and initial time $k_0$. In this case, $S$ is called a globally attractive set. A set $S$ is called positive invariant if $x_0 \in S$ implies $x(k, k_0, x_0) \in S$ for $k \ge k_0$.

*Definition 2.2. *Let $S$ be a globally attractive set of discrete-time neural networks (2.1). Discrete-time neural networks (2.1) are said to be globally exponentially dissipative if there exists a compact set $S^* \supseteq S$ in $\mathbb{R}^n$ such that, for any initial state $x_0 \notin S^*$, there exist constants $M(x_0) > 0$ and $0 < \lambda < 1$ such that

$$\inf_{\tilde{x} \in S^*} \| x(k, k_0, x_0) - \tilde{x} \| \le M(x_0) \lambda^{k - k_0}, \quad k \ge k_0.$$

The set $S^*$ is called a globally exponentially attractive set; here $x_0 \notin S^*$ means $x_0 \in \mathbb{R}^n \setminus S^*$.
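Definition 2.2 asks the distance from the trajectory to the attractive set to decay geometrically. A toy scalar recursion (hypothetical, not the paper's model) makes this concrete: the map $x \mapsto 0.5x + 1$ is attracted to $x^* = 2$, and the distance to a ball containing $x^*$ shrinks at rate $\lambda = 0.5$.

```python
def dist_to_ball(x, r):
    """Distance from the scalar state x to the set S = {x : |x| <= r}."""
    return max(abs(x) - r, 0.0)

# Toy recursion x(k+1) = 0.5*x(k) + 1, attracted to x* = 2; take S of radius 2.5.
x, r, lam = 50.0, 2.5, 0.5
dists = []
for k in range(20):
    dists.append(dist_to_ball(x, r))
    x = 0.5 * x + 1.0

# Global exponential dissipativity: dist(x(k), S) <= M * lam**k with M = dist(x(0), S).
print(all(d <= dists[0] * lam ** k + 1e-12 for k, d in enumerate(dists)))
```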

To prove our results, the following lemmas are necessary.

Lemma 2.3 (see [34]). *Given constant matrices $\Omega_1$, $\Omega_2$, and $\Omega_3$, where $\Omega_1 = \Omega_1^T$ and $0 < \Omega_2 = \Omega_2^T$, then
$$\Omega_1 + \Omega_3^T \Omega_2^{-1} \Omega_3 < 0$$
is equivalent to the following conditions:
$$\begin{pmatrix} \Omega_1 & \Omega_3^T \\ \Omega_3 & -\Omega_2 \end{pmatrix} < 0 \quad \text{or} \quad \begin{pmatrix} -\Omega_2 & \Omega_3 \\ \Omega_3^T & \Omega_1 \end{pmatrix} < 0.$$*
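Lemma 2.3 is the Schur complement lemma of [34]. In the scalar case the equivalence can be checked directly by random sampling; `is_neg_def_2x2` and the sampled quantities below are illustrative names, not notation from the paper.

```python
import random

def is_neg_def_2x2(m):
    """A symmetric 2x2 matrix [[q, s], [s, r]] is negative definite iff
    q < 0 and its determinant q*r - s*s is positive."""
    (q, s), (_, r) = m
    return q < 0 and q * r - s * s > 0

# Scalar instance of Lemma 2.3: for w2 > 0,
#   w1 + w3 * w2^{-1} * w3 < 0   <=>   [[w1, w3], [w3, -w2]] < 0.
random.seed(1)
agree = True
for _ in range(1000):
    w1 = random.uniform(-5.0, 5.0)          # Omega_1 (scalar, symmetric)
    w2 = random.uniform(0.1, 5.0)           # Omega_2 > 0
    w3 = random.uniform(-5.0, 5.0)          # Omega_3
    schur = w1 + w3 * (1.0 / w2) * w3 < 0
    block = is_neg_def_2x2([[w1, w3], [w3, -w2]])
    agree = agree and (schur == block)
print(agree)
```

This is the device that lets a nonlinear matrix inequality be rewritten as the linear matrix inequality solved numerically in Section 4.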

Lemma 2.4 (see [35, 36]). *Given matrices $Q$, $H$, and $E$ with $Q = Q^T$, then
$$Q + HF(k)E + E^T F^T(k) H^T < 0$$
holds for all $F(k)$ satisfying $F^T(k)F(k) \le I$ if and only if there exists a scalar $\varepsilon > 0$ such that
$$Q + \varepsilon H H^T + \varepsilon^{-1} E^T E < 0.$$*
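Lemma 2.4 eliminates the uncertain factor $F(k)$ at the cost of a scalar $\varepsilon$. In the scalar case both sides reduce to $q + 2|he| < 0$ (the $\varepsilon$-side is minimized at $\varepsilon = |e/h|$), which the sketch below verifies by sampling; all names are illustrative, not the paper's notation.

```python
import random

def robust_lhs(q, h, e):
    """q + 2*h*f*e < 0 for every |f| <= 1 (worst case is f = sign(h*e))."""
    return q + 2.0 * abs(h * e) < 0

def exists_eps(q, h, e):
    """Feasibility of q + eps*h*h + e*e/eps < 0 for some eps > 0;
    the left side is minimized at eps = |e/h| (assuming h != 0)."""
    eps = abs(e / h)
    return q + eps * h * h + e * e / eps < 0

random.seed(2)
ok = True
for _ in range(1000):
    q = random.uniform(-10.0, 2.0)
    h = random.choice([-1.0, 1.0]) * random.uniform(0.1, 3.0)
    e = random.choice([-1.0, 1.0]) * random.uniform(0.1, 3.0)
    ok = ok and (robust_lhs(q, h, e) == exists_eps(q, h, e))
print(ok)
```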

#### 3. Main Results

In this section, we shall establish our main criteria based on the LMI approach. For presentation convenience, in the following, we denote

Theorem 3.1. *Suppose that (**H**) holds. If there exist nine symmetric positive definite matrices , , , , and four positive diagonal matrices , , such that the following LMI holds:
*

*where , , , , , , , , , , , , , , , and then discrete-time neural network (2.1) is globally dissipative, and*

*is a positive invariant and globally attractive set.*

*Proof. *For positive diagonal matrices and , we know from assumption (**H**) that

Defining , we consider the following Lyapunov-Krasovskii functional candidate for model (2.1) as
where
Calculating the difference of () along the positive half trajectory of (2.1), we obtain

Similarly, one has
When ,
When ,

For positive diagonal matrices and , we can get from assumption (**H**) that [6]

Denoting , where
it follows from (3.7) to (3.14) that
From condition (3.2) and inequality (3.16), we get
when , that is, . Therefore, discrete-time neural network (2.1) is a globally dissipative system, and the set is a positive invariant and globally attractive set as LMI (3.2) holds. The proof is completed.

We are now in a position to discuss the global exponential dissipativity of discrete-time neural network (2.1).

Theorem 3.2. *Under the conditions of Theorem 3.1, neural network (2.1) is globally exponentially dissipative, and
**
is a positive invariant and globally attractive set.*

*Proof. *When , that is, , we know from (3.16) that
From the definition of in (3.5), it is easy to verify that
where
For any scalar , it follows from (3.19) and (3.20) that
Summing up both sides of (3.22) from to with respect to , we have
It is easy to compute that
From (3.20), we obtain
It follows from (3.23)–(3.25) that
where
Since , by the continuity of functions , we can choose a scalar such that . Obviously, . From (3.26), we get

From the definition of in (3.5), we have
Let , , then , . It follows from (3.28) and (3.29) that
for all , which means that discrete-time neural network (2.1) is globally exponentially dissipative, and the set is a positive invariant and globally attractive set as LMI (3.2) holds. The proof is completed.

*Remark 3.3. *In the study of dissipativity of neural networks, assumption (**H**) of this paper is the same as that in [28]; the constants $l_i^-$ and $l_i^+$ ($i = 1, 2, \ldots, n$) in assumption (**H**) are allowed to be positive, negative, or zero. Hence, assumption (**H**), first proposed by Liu et al. in [6], is weaker than the assumptions in [23–27, 30].

*Remark 3.4. *The idea behind constructing the Lyapunov-Krasovskii functional (3.5) is to divide the delay interval into two subintervals, so that the proposed Lyapunov-Krasovskii functional takes a different form depending on which subinterval the time-delay belongs to. The main advantage of such a Lyapunov-Krasovskii functional is that it makes full use of the information on the considered time-delay.

Now, let us consider the case when the parameter uncertainties appear in the discrete-time neural networks with time-varying delays. In this case, model (2.1) can be further generalized to the following one:

where , , are known real constant matrices, and the time-varying matrices , and represent the time-varying parameter uncertainties that are assumed to satisfy the following admissible condition:

where $H$ and $E$ are known real constant matrices, and $F(k)$ is the unknown time-varying matrix-valued function subject to the following condition:

$$F^T(k) F(k) \le I, \quad \forall k \in \mathbb{N}.$$
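To see the effect of such admissible, norm-bounded uncertainties, the following scalar sketch perturbs the rate coefficient of a scalar instance of model (3.31) by $\Delta c(k) = h\,F(k)\,e$ with $|F(k)| \le 1$, and checks that trajectories remain bounded as long as the worst-case rate satisfies $|c| + |he| < 1$. All parameters and names are illustrative, not those of the paper.

```python
import math
import random

def simulate_uncertain(c, a, b, u, tau, h, e, steps, seed):
    """Scalar sketch of model (3.31): the rate coefficient c is perturbed at
    each step by an admissible uncertainty dc(k) = h * F(k) * e, |F(k)| <= 1."""
    random.seed(seed)
    act = math.tanh
    xs = [4.0] * (tau + 1)                   # constant initial segment
    for _ in range(steps):
        Fk = random.uniform(-1.0, 1.0)       # scalar F(k) with F^T F <= I
        ck = c + h * Fk * e                  # perturbed rate, |ck| <= c + |h*e|
        xs.append(ck * xs[-1] + a * act(xs[-1]) + b * act(xs[-1 - tau]) + u)
    return xs

# With |c| + |h*e| < 1, the worst-case contraction still yields the crude
# attractive radius (|a| + |b| + |u|) / (1 - |c| - |h*e|).
c, h, e = 0.4, 0.5, 0.2                      # worst-case rate 0.4 + 0.1 = 0.5
traj = simulate_uncertain(c, a=0.3, b=0.2, u=1.0, tau=2, h=h, e=e, steps=300, seed=3)
radius = (0.3 + 0.2 + 1.0) / (1.0 - (c + abs(h * e)))    # = 3.0
print(all(abs(x) <= radius for x in traj[-100:]))
```

Theorem 3.5 below makes this robustness rigorous for the vector model via the LMI machinery of Lemmas 2.3 and 2.4, without any such worst-case slack.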
For model (3.31), we readily have the following result.

Theorem 3.5. *Suppose that (**H**) holds. If there exist nine symmetric positive definite matrices , , , , and four positive diagonal matrices , , such that the following LMI holds:
*

*where*

*and , , , , , , , , , , and then uncertain discrete-time neural network (3.31) is globally dissipative and globally exponentially dissipative, and*

*is a positive invariant and globally attractive set.*

*Proof. *By Lemma 2.3, we know that LMI (3.34) is equivalent to the following inequality:
From Lemma 2.4, we know that (3.37) is equivalent to the following inequality:
that is
Let , , . As an application of Lemma 2.3, we know that (3.39) is equivalent to the following inequality:
By simple computation and noting that , we have
Therefore, inequality (3.40) is just the same as inequality (3.2) when we use , , to replace , , of inequality (3.2), respectively. From Theorems 3.1 and 3.2, we know that uncertain discrete-time neural network (3.31) is globally dissipative and globally exponentially dissipative, and
is a positive invariant and globally attractive set. The proof is then completed.

#### 4. Examples

*Example 4.1. *Consider a discrete-time neural network (2.1) with

It is easy to check that assumption (**H**) is satisfied, and , , , , , . Thus,
Using the MATLAB LMI Control Toolbox, we can find a feasible solution to the LMI in (3.2) as follows:
Therefore, by Theorem 3.1, we know that model (2.1) with the above given parameters is globally dissipative and globally exponentially dissipative, and the corresponding positive invariant and globally attractive set is easily computed.

The following example illustrates that complex dynamics may appear when the sufficient conditions ensuring global dissipativity are not satisfied.

*Example 4.2. *Consider a discrete-time neural network (2.1) with
It is easy to check that the linear matrix inequality (3.2), with the same activation functions and delay bounds as in Example 4.1, has no feasible solution. Figure 1 depicts the states of the considered neural network (2.1) for the given initial conditions. One can see from Figure 1 that chaotic behavior appears for neural network (2.1) with the above given parameters.

#### 5. Conclusions

In this paper, the global dissipativity and global exponential dissipativity have been investigated for uncertain discrete-time neural networks with time-varying delays and general activation functions. By constructing appropriate Lyapunov-Krasovskii functionals and employing the linear matrix inequality technique, several new delay-dependent criteria for checking the global dissipativity and global exponential dissipativity of the addressed neural networks have been derived in terms of LMIs, which can be checked numerically using the effective LMI toolbox in MATLAB. Illustrative examples have also been given to show the effectiveness of the proposed criteria. It is noteworthy that because neither model transformation nor free-weighting matrices are employed to deal with cross terms in the derivation of the dissipativity criteria, the obtained results are less conservative and more computationally efficient.

#### Acknowledgments

The authors would like to thank the reviewers and the editor for their valuable suggestions and comments which have led to a much improved paper. This work was supported by the National Natural Science Foundation of China under Grants 60974132, 60874088, and 10772152.

#### References

1. K. Gopalsamy and X.-Z. He, “Delay-independent stability in bidirectional associative memory networks,” *IEEE Transactions on Neural Networks*, vol. 5, no. 6, pp. 998–1002, 1994.
2. M. Forti, “On global asymptotic stability of a class of nonlinear systems arising in neural network theory,” *Journal of Differential Equations*, vol. 113, no. 1, pp. 246–264, 1994.
3. S. Arik, “An analysis of exponential stability of delayed neural networks with time varying delays,” *Neural Networks*, vol. 17, no. 7, pp. 1027–1031, 2004.
4. D. Xu and Z. Yang, “Impulsive delay differential inequality and stability of neural networks,” *Journal of Mathematical Analysis and Applications*, vol. 305, no. 1, pp. 107–120, 2005.
5. T. Chen, W. Lu, and G. Chen, “Dynamical behaviors of a large class of general delayed neural networks,” *Neural Computation*, vol. 17, no. 4, pp. 949–968, 2005.
6. Y. Liu, Z. Wang, and X. Liu, “Global exponential stability of generalized recurrent neural networks with discrete and distributed delays,” *Neural Networks*, vol. 19, no. 5, pp. 667–675, 2006.
7. X. Liao, Q. Luo, Z. Zeng, and Y. Guo, “Global exponential stability in Lagrange sense for recurrent neural networks with time delays,” *Nonlinear Analysis: Real World Applications*, vol. 9, no. 4, pp. 1535–1557, 2008.
8. H. Zhang, Z. Wang, and D. Liu, “Global asymptotic stability of recurrent neural networks with multiple time-varying delays,” *IEEE Transactions on Neural Networks*, vol. 19, no. 5, pp. 855–873, 2008.
9. J. H. Park and O. M. Kwon, “Further results on state estimation for neural networks of neutral-type with time-varying delay,” *Applied Mathematics and Computation*, vol. 208, no. 1, pp. 69–75, 2009.
10. S. Xu, W. X. Zheng, and Y. Zou, “Passivity analysis of neural networks with time-varying delays,” *IEEE Transactions on Circuits and Systems II*, vol. 56, no. 4, pp. 325–329, 2009.
11. J.-C. Ban, C.-H. Chang, S.-S. Lin, and Y.-H. Lin, “Spatial complexity in multi-layer cellular neural networks,” *Journal of Differential Equations*, vol. 246, no. 2, pp. 552–580, 2009.
12. Z. Wang, Y. Liu, and X. Liu, “State estimation for jumping recurrent neural networks with discrete and distributed delays,” *Neural Networks*, vol. 22, no. 1, pp. 41–48, 2009.
13. J. H. Park, C. H. Park, O. M. Kwon, and S. M. Lee, “A new stability criterion for bidirectional associative memory neural networks of neutral-type,” *Applied Mathematics and Computation*, vol. 199, no. 2, pp. 716–722, 2008.
14. H. J. Cho and J. H. Park, “Novel delay-dependent robust stability criterion of delayed cellular neural networks,” *Chaos, Solitons and Fractals*, vol. 32, no. 3, pp. 1194–1200, 2007.
15. J. H. Park, “Robust stability of bidirectional associative memory neural networks with time delays,” *Physics Letters A*, vol. 349, no. 6, pp. 494–499, 2006.
16. J. H. Park, S. M. Lee, and H. Y. Jung, “LMI optimization approach to synchronization of stochastic delayed discrete-time complex networks,” *Journal of Optimization Theory and Applications*, vol. 143, no. 2, pp. 357–367, 2009.
17. S. Mohamad and K. Gopalsamy, “Exponential stability of continuous-time and discrete-time cellular neural networks with delays,” *Applied Mathematics and Computation*, vol. 135, no. 1, pp. 17–38, 2003.
18. S. Hu and J. Wang, “Global robust stability of a class of discrete-time interval neural networks,” *IEEE Transactions on Circuits and Systems I*, vol. 53, no. 1, pp. 129–138, 2006.
19. H. Gao and T. Chen, “New results on stability of discrete-time systems with time-varying state delay,” *IEEE Transactions on Automatic Control*, vol. 52, no. 2, pp. 328–334, 2007.
20. Z. Wu, H. Su, J. Chu, and W. Zhou, “Improved result on stability analysis of discrete stochastic neural networks with time delay,” *Physics Letters A*, vol. 373, no. 17, pp. 1546–1552, 2009.
21. J. Cao and F. Ren, “Exponential stability of discrete-time genetic regulatory networks with delays,” *IEEE Transactions on Neural Networks*, vol. 19, no. 3, pp. 520–523, 2008.
22. J. K. Hale, *Asymptotic Behavior of Dissipative Systems*, vol. 25 of *Mathematical Surveys and Monographs*, American Mathematical Society, Providence, RI, USA, 1988.
23. X. Liao and J. Wang, “Global dissipativity of continuous-time recurrent neural networks with time delay,” *Physical Review E*, vol. 68, no. 1, Article ID 016118, 7 pages, 2003.
24. S. Arik, “On the global dissipativity of dynamical neural networks with time delays,” *Physics Letters A*, vol. 326, no. 1-2, pp. 126–132, 2004.
25. Q. Song and Z. Zhao, “Global dissipativity of neural networks with both variable and unbounded delays,” *Chaos, Solitons and Fractals*, vol. 25, no. 2, pp. 393–401, 2005.
26. X. Y. Lou and B. T. Cui, “Global robust dissipativity for integro-differential systems modeling neural networks with delays,” *Chaos, Solitons and Fractals*, vol. 36, no. 2, pp. 469–478, 2008.
27. J. Cao, K. Yuan, D. W. C. Ho, and J. Lam, “Global point dissipativity of neural networks with mixed time-varying delays,” *Chaos*, vol. 16, no. 1, Article ID 013105, 9 pages, 2006.
28. Q. Song and J. Cao, “Global dissipativity analysis on uncertain neural networks with mixed time-varying delays,” *Chaos*, vol. 18, no. 4, Article ID 043126, 10 pages, 2008.
29. W. M. Haddad and V. S. Chellaboina, “Stability and dissipativity theory for nonnegative dynamical systems: a unified analysis framework for biological and physiological systems,” *Nonlinear Analysis: Real World Applications*, vol. 6, no. 1, pp. 35–65, 2005.
30. Y. Huang, D. Xu, and Z. Yang, “Dissipativity and periodic attractor for non-autonomous neural networks with time-varying delays,” *Neurocomputing*, vol. 70, no. 16–18, pp. 2953–2958, 2007.
31. H. Tian, N. Guo, and A. Shen, “Dissipativity of delay functional differential equations with bounded lag,” *Journal of Mathematical Analysis and Applications*, vol. 355, no. 2, pp. 778–782, 2009.
32. M. S. Mahmoud, Y. Shi, and F. M. AL-Sunni, “Dissipativity analysis and synthesis of a class of nonlinear systems with time-varying delays,” *Journal of the Franklin Institute*, vol. 346, no. 6, pp. 570–592, 2009.
33. S. Gan, “Dissipativity of $\theta$-methods for nonlinear delay differential equations of neutral type,” *Applied Numerical Mathematics*, vol. 59, no. 6, pp. 1354–1365, 2009.
34. S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, *Linear Matrix Inequalities in System and Control Theory*, vol. 15 of *SIAM Studies in Applied Mathematics*, SIAM, Philadelphia, Pa, USA, 1994.
35. P. P. Khargonekar, I. R. Petersen, and K. Zhou, “Robust stabilization of uncertain linear systems: quadratic stabilizability and ${H}^{\infty}$ control theory,” *IEEE Transactions on Automatic Control*, vol. 35, no. 3, pp. 356–361, 1990.
36. L. Xie, M. Fu, and C. E. de Souza, “${H}_{\infty}$-control and quadratic stabilization of systems with parameter uncertainty via output feedback,” *IEEE Transactions on Automatic Control*, vol. 37, no. 8, pp. 1253–1256, 1992.