Global Dissipativity on Uncertain Discrete-Time Neural Networks with Time-Varying Delays
Qiankun Song1,2 and Jinde Cao3
Academic Editor: Yong Zhou
Received: 24 Dec 2009
Accepted: 14 Feb 2010
Published: 18 Mar 2010
Abstract
The problems of global dissipativity and global exponential dissipativity are investigated for uncertain discrete-time neural networks with time-varying delays and general activation functions. By constructing appropriate Lyapunov-Krasovskii functionals and employing the linear matrix inequality technique, several new delay-dependent criteria for checking the global dissipativity and global exponential dissipativity of the addressed neural networks are established in terms of linear matrix inequalities (LMIs), which can be checked numerically using the effective LMI toolbox in MATLAB. Illustrative examples are given to show the effectiveness of the proposed criteria. It is noteworthy that because neither model transformation nor free-weighting matrices are employed to deal with cross terms in the derivation of the dissipativity criteria, the obtained results are less conservative and more computationally efficient.
1. Introduction
In the past few decades, delayed neural networks have found successful applications in many areas such as signal processing, pattern recognition, associative memories, parallel computation, and optimization solvers [1]. In such applications, the qualitative analysis of the dynamical behaviors is a necessary step for the practical design of neural networks [2]. Many important results on the dynamical behaviors have been reported for delayed neural networks; see [1–16] and the references therein for some recent publications.
It should be pointed out that all of the above-mentioned literature on the dynamical behaviors of delayed neural networks is concerned with the continuous-time case. However, when implementing a continuous-time delayed neural network for computer simulation, it becomes essential to formulate a discrete-time system that is an analogue of the continuous-time delayed neural network. To some extent, the discrete-time analogue inherits the dynamical characteristics of the continuous-time delayed neural network under mild or no restriction on the discretization step-size, and also retains some functional similarity [17]. Unfortunately, as pointed out in [18], the discretization cannot preserve the dynamics of the continuous-time counterpart even for a small sampling period, and therefore there is a crucial need to study the dynamics of discrete-time neural networks. Recently, the dynamics analysis problem for discrete-time delayed neural networks and discrete-time systems with time-varying state delay has been extensively studied; see [17–21] and references therein.
It is well known that the stability problem is central to the analysis of a dynamic system, where various types of stability of an equilibrium point have captured the attention of researchers. Nevertheless, from a practical point of view, it is not always the case that every neural network has its orbits approach a single equilibrium point. It is possible that there is no equilibrium point in some situations. Therefore, the concept of dissipativity has been introduced [22]. As pointed out in [23], dissipativity is also an important concept in dynamical neural networks. The concept of dissipativity in dynamical systems is a more general one, and it has found applications in areas such as stability theory, chaos and synchronization theory, system norm estimation, and robust control [23]. Some sufficient conditions for checking the dissipativity of delayed neural networks and nonlinear delay systems have been derived; see, for example, [23–33] and references therein. In [23, 24], the authors analyzed the dissipativity of neural networks with constant delays and derived some sufficient conditions for their global dissipativity. In [25, 26], the authors considered the global dissipativity and global robust dissipativity of neural networks with both time-varying delays and unbounded distributed delays; several sufficient conditions for checking the global dissipativity and global robust dissipativity were obtained. In [27, 28], by using the linear matrix inequality technique, the authors investigated the global dissipativity of neural networks with both discrete time-varying delays and distributed time-varying delays. In [29], the authors developed dissipativity notions for nonnegative dynamical systems with respect to linear and nonlinear storage functions and linear supply rates, and obtained a key result on linearization of nonnegative dissipative dynamical systems. In [30], the uniform dissipativity of a class of nonautonomous neural networks with time-varying delays was investigated by employing M-matrix theory and inequality techniques. In [31–33], the dissipativity of a class of nonlinear delay systems was considered, and some sufficient conditions for checking the dissipativity were given. However, all of the above-mentioned literature on the dissipativity of delayed neural networks and nonlinear delay systems is concerned with the continuous-time case. To the best of our knowledge, few authors have considered the problem of the dissipativity of uncertain discrete-time neural networks with time-varying delays. Therefore, the study of the dissipativity of uncertain discrete-time neural networks is not only important but also necessary.
Motivated by the above discussions, the objective of this paper is to study the problem of global dissipativity and global exponential dissipativity for uncertain discrete-time neural networks. By employing appropriate Lyapunov-Krasovskii functionals and the LMI technique, we obtain several new sufficient conditions for checking the global dissipativity and global exponential dissipativity of the addressed neural networks.
Notations 1. The notations are quite standard. Throughout this paper, $I$ represents the identity matrix with appropriate dimensions; $\mathbb{N}$ stands for the set of nonnegative integers; $\mathbb{R}^n$ and $\mathbb{R}^{n\times m}$ denote, respectively, the $n$-dimensional Euclidean space and the set of all $n\times m$ real matrices. The superscript “$T$” denotes matrix transposition and the asterisk “$*$” denotes the elements below the main diagonal of a symmetric block matrix. $|A|$ denotes the absolute-value matrix given by $|A| = (|a_{ij}|)_{n\times n}$; the notation $X \ge Y$ (resp., $X > Y$) means that $X$ and $Y$ are symmetric matrices and that $X - Y$ is positive semidefinite (resp., positive definite). $\|\cdot\|$ is the Euclidean norm in $\mathbb{R}^n$. For a positive constant $a$, $\lfloor a \rfloor$ denotes the integer part of $a$. For integers $a$, $b$ with $a < b$, $N[a, b]$ denotes the discrete interval given by $N[a, b] = \{a, a+1, \ldots, b\}$. $C(N[-\tau_M, 0], \mathbb{R}^n)$ denotes the set of all functions $\varphi : N[-\tau_M, 0] \to \mathbb{R}^n$. Matrices, if not explicitly specified, are assumed to have compatible dimensions.
2. Model Description and Preliminaries
In this paper, we consider the following discrete-time neural network model:

$$x(k+1) = C x(k) + A f(x(k)) + B f(x(k - \tau(k))) + u, \tag{2.1}$$

for $k \in \mathbb{N}$, where $x(k) = (x_1(k), x_2(k), \ldots, x_n(k))^T$, $x_i(k)$ is the state of the $i$th neuron at time $k$; $f(x(k)) = (f_1(x_1(k)), f_2(x_2(k)), \ldots, f_n(x_n(k)))^T$, $f_i(x_i(k))$ denotes the activation function of the $i$th neuron at time $k$; $u = (u_1, u_2, \ldots, u_n)^T$ is the input vector; the positive integer $\tau(k)$ corresponds to the transmission delay and satisfies $\tau_m \le \tau(k) \le \tau_M$ ($\tau_m \ge 0$ and $\tau_M > 0$ are known integers); $C = \mathrm{diag}(c_1, c_2, \ldots, c_n)$, where $c_i$ ($|c_i| < 1$) describes the rate with which the $i$th neuron resets its potential to the resting state in isolation when disconnected from the network and external inputs; $A = (a_{ij})_{n\times n}$ is the connection weight matrix; $B = (b_{ij})_{n\times n}$ is the delayed connection weight matrix.
The initial condition associated with model (2.1) is given by $x(s) = \varphi(s)$, $s \in N[-\tau_M, 0]$, where $\varphi \in C(N[-\tau_M, 0], \mathbb{R}^n)$.
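To make the recursion in (2.1) concrete, here is a minimal simulation sketch in Python; the matrices C, A, B, the input u, the delay bounds, and the tanh activation are illustrative assumptions for this sketch only, not the parameters of the examples in Section 4.

```python
import numpy as np

# Illustrative parameters (assumed for this sketch, not taken from the paper)
n = 2
C = np.diag([0.3, 0.2])                      # |c_i| < 1: self-feedback rates
A = np.array([[0.1, -0.2], [0.05, 0.1]])     # connection weight matrix
B = np.array([[-0.1, 0.05], [0.2, -0.1]])    # delayed connection weight matrix
u = np.array([0.5, -0.3])                    # constant external input
tau_m, tau_M = 1, 3                          # delay bounds: tau_m <= tau(k) <= tau_M
f = np.tanh                                  # activation satisfying assumption (H)

def simulate(phi, steps, rng=np.random.default_rng(0)):
    """Iterate x(k+1) = C x(k) + A f(x(k)) + B f(x(k - tau(k))) + u.

    phi: initial history of shape (tau_M + 1, n), phi[j] = x(j - tau_M).
    """
    hist = list(phi)                          # hist[-1] is the current state x(k)
    for _ in range(steps):
        tau_k = int(rng.integers(tau_m, tau_M + 1))  # time-varying delay
        x_now, x_del = hist[-1], hist[-1 - tau_k]
        hist.append(C @ x_now + A @ f(x_now) + B @ f(x_del) + u)
    return np.array(hist)

traj = simulate(np.zeros((tau_M + 1, n)), steps=200)
print(traj[-1])  # trajectories remain bounded when the network is dissipative
```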
Throughout this paper, we make the following assumption [6].
(H) For any $s_1, s_2 \in \mathbb{R}$, $s_1 \ne s_2$, there exist constants $l_i^-$ and $l_i^+$ such that

$$l_i^- \le \frac{f_i(s_1) - f_i(s_2)}{s_1 - s_2} \le l_i^+, \quad i = 1, 2, \ldots, n.$$
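For instance, the usual activations satisfy (H): for tanh the difference quotient lies in [0, 1], so (H) holds with $l_i^- = 0$ and $l_i^+ = 1$. A minimal numerical check (tanh assumed here purely for illustration):

```python
import numpy as np

# Empirically bound the difference quotient of tanh on random pairs:
# (f(s1) - f(s2)) / (s1 - s2) should lie in [l^-, l^+] = [0, 1].
rng = np.random.default_rng(2)
s1, s2 = rng.standard_normal(10_000), rng.standard_normal(10_000)
mask = s1 != s2
q = (np.tanh(s1[mask]) - np.tanh(s2[mask])) / (s1[mask] - s2[mask])
print(q.min() >= 0.0, q.max() <= 1.0)   # True True
```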
Similar to [23], we also give the following definitions for discrete-time neural network (2.1).
Definition 2.1. Discrete-time neural network (2.1) is said to be globally dissipative if there exists a compact set $S \subseteq \mathbb{R}^n$ such that, for every $\varphi \in C(N[-\tau_M, 0], \mathbb{R}^n)$, there exists a positive integer $K(\varphi)$ such that $x(k, 0, \varphi) \in S$ when $k \ge K(\varphi)$, where $x(k, 0, \varphi)$ denotes the solution of (2.1) from initial state $\varphi$ and initial time $0$. In this case, $S$ is called a globally attractive set. A set $S$ is called positive invariant if $\varphi(s) \in S$ for all $s \in N[-\tau_M, 0]$ implies $x(k, 0, \varphi) \in S$ for $k \in \mathbb{N}$.
Definition 2.2. Let $S$ be a globally attractive set of discrete-time neural network (2.1). Discrete-time neural network (2.1) is said to be globally exponentially dissipative if there exists a compact set $S^{*} \supseteq S$ in $\mathbb{R}^n$ such that, for any $\varphi \notin S^{*}$, there exist constants $M(\varphi) > 0$ and $0 < \alpha < 1$ such that

$$\inf_{\tilde{x} \in S^{*}} \| x(k, 0, \varphi) - \tilde{x} \| \le M(\varphi)\, \alpha^{k}, \quad k \in \mathbb{N}.$$

The set $S^{*}$ is called a globally exponentially attractive set, where $\varphi \notin S^{*}$ means $\varphi(s) \in \mathbb{R}^n$ but $\varphi(s) \notin S^{*}$ for some $s \in N[-\tau_M, 0]$.
To prove our results, the following lemmas are necessary.
Lemma 2.3 (see [34]). Given constant matrices $\Omega_1$, $\Omega_2$, and $\Omega_3$, where $\Omega_1 = \Omega_1^T$ and $0 < \Omega_2 = \Omega_2^T$, then

$$\Omega_1 + \Omega_3^T \Omega_2^{-1} \Omega_3 < 0$$

is equivalent to the following conditions:

$$\begin{pmatrix} \Omega_1 & \Omega_3^T \\ \Omega_3 & -\Omega_2 \end{pmatrix} < 0 \quad \text{or} \quad \begin{pmatrix} -\Omega_2 & \Omega_3 \\ \Omega_3^T & \Omega_1 \end{pmatrix} < 0.$$
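As a quick numerical illustration of the lemma (with randomly generated $\Omega_2 > 0$ and $\Omega_3$, and $\Omega_1$ chosen so that the first inequality holds), the sketch below confirms that the corresponding block matrix is negative definite; sizes and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
Omega3 = rng.standard_normal((m, n))
M = rng.standard_normal((m, m))
Omega2 = M @ M.T + np.eye(m)                  # Omega_2 = Omega_2^T > 0
# Choose Omega_1 so that Omega_1 + Omega_3^T Omega_2^{-1} Omega_3 = -I < 0
Omega1 = -Omega3.T @ np.linalg.solve(Omega2, Omega3) - np.eye(n)

def neg_def(X):
    """True if the symmetric matrix X is negative definite."""
    return bool(np.all(np.linalg.eigvalsh(X) < 0))

first = neg_def(Omega1 + Omega3.T @ np.linalg.solve(Omega2, Omega3))
block = np.block([[Omega1, Omega3.T], [Omega3, -Omega2]])
print(first, neg_def(block))                  # both print True
```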
Lemma 2.4 (see [35, 36]). Given matrices $Q$, $H$, and $E$ with $Q = Q^T$, then

$$Q + HFE + E^T F^T H^T < 0$$

holds for all $F$ satisfying $F^T F \le I$ if and only if there exists a scalar $\varepsilon > 0$ such that

$$Q + \varepsilon H H^T + \varepsilon^{-1} E^T E < 0.$$
3. Main Results
In this section, we shall establish our main criteria based on the LMI approach. For presentation convenience, in the following, we denote
Theorem 3.1. Suppose that (H) holds. If there exist nine symmetric positive definite matrices , , , , and four positive diagonal matrices , , such that the following LMI holds:
where , , , , , , , , , , , , , , , and then discrete-time neural network (2.1) is globally dissipative, and
is a positive invariant and globally attractive set.
Proof. For positive diagonal matrices and , we know from assumption (H) that
Defining $\tau_0 = \lfloor (\tau_m + \tau_M)/2 \rfloor$, we consider the following Lyapunov-Krasovskii functional candidate for model (2.1):
where
Calculating the difference of $V_1(k)$ along the positive half trajectory of (2.1), we obtain
Similarly, one has
When $\tau_m \le \tau(k) \le \tau_0$,
When $\tau_0 < \tau(k) \le \tau_M$,
For positive diagonal matrices and , we can get from assumption (H) that [6]
Denoting , where
it follows from (3.7) to (3.14) that
From condition (3.2) and inequality (3.16), we get
when , that is, . Therefore, discrete-time neural network (2.1) is a globally dissipative system, and the set is a positive invariant and globally attractive set as LMI (3.2) holds. The proof is completed.
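On the numerical side, conditions of this type are feasibility problems in the matrix variables and are solved in Section 4 with the MATLAB LMI Control Toolbox. As a hedged Python analogue, the sketch below checks a deliberately simplified, delay-independent Lyapunov-type condition ($P > 0$ with $C^T P C - P < 0$, for the linear part of (2.1)) by solving a discrete Lyapunov equation; it is not the full LMI (3.2), whose block entries are not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Assumed diagonal state matrix with |c_i| < 1, as in model (2.1)
C = np.diag([0.3, 0.2])
Q = np.eye(2)                                # any Q = Q^T > 0

# SciPy solves A X A^T - X + Q = 0; with A = C^T this is C^T P C - P = -Q
P = solve_discrete_lyapunov(C.T, Q)

eigs = np.linalg.eigvalsh(P)
print(eigs)                                  # all positive => P > 0
print(np.all(eigs > 0))                      # the simplified condition is feasible
```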
We are now in a position to discuss the global exponential dissipativity of discrete-time neural network (2.1).
Theorem 3.2. Under the conditions of Theorem 3.1, neural network (2.1) is globally exponentially dissipative, and
is a positive invariant and globally attractive set.
Proof. When , that is, , we know from (3.16) that
From the definition of in (3.5), it is easy to verify that
where
For any scalar , it follows from (3.19) and (3.20) that
Summing up both sides of (3.22) from to with respect to , we have
It is easy to compute that
From (3.20), we obtain
It follows from (3.23)โ(3.25) that
where
Since , by the continuity of functions , we can choose a scalar such that . Obviously, . From (3.26), we get
From the definition of in (3.5), we have
Let , , then , . It follows from (3.28) and (3.29) that
for all , which means that discrete-time neural network (2.1) is globally exponentially dissipative, and the set is a positive invariant and globally attractive set as LMI (3.2) holds. The proof is completed.
Remark 3.3. In the study of the dissipativity of neural networks, assumption (H) of this paper is the same as that in [28]; the constants $l_i^-$ and $l_i^+$ ($i = 1, 2, \ldots, n$) in assumption (H) are allowed to be positive, negative, or zero. Hence assumption (H), first proposed by Liu et al. in [6], is weaker than the assumptions in [23–27, 30].
Remark 3.4. The idea behind constructing the Lyapunov-Krasovskii functional (3.5) is to divide the delay interval $[\tau_m, \tau_M]$ into two subintervals $[\tau_m, \tau_0]$ and $(\tau_0, \tau_M]$, so that the proposed Lyapunov-Krasovskii functional differs according to the subinterval in which the time-delay lies. The main advantage of such a Lyapunov-Krasovskii functional is that it makes full use of the information on the considered time-delay $\tau(k)$.
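Schematically, and only as a simplified sketch under assumed names (the functional (3.5) actually used above involves nine matrix variables), such a delay-partitioned construction looks like:

```latex
% Simplified two-subinterval sketch; P, Q_1, Q_2 > 0 are assumed names and
% \tau_0 = \lfloor(\tau_m+\tau_M)/2\rfloor splits the delay interval.
V(k) = x^{T}(k)\,P\,x(k)
     + \sum_{i=k-\tau_0}^{k-1} x^{T}(i)\,Q_1\,x(i)           % tracks \tau(k)\in[\tau_m,\tau_0]
     + \sum_{i=k-\tau_M}^{k-\tau_0-1} x^{T}(i)\,Q_2\,x(i)    % tracks \tau(k)\in(\tau_0,\tau_M]
```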
Now, let us consider the case when parameter uncertainties appear in the discrete-time neural network with time-varying delays. In this case, model (2.1) can be further generalized to the following one:

$$x(k+1) = (C + \Delta C(k)) x(k) + (A + \Delta A(k)) f(x(k)) + (B + \Delta B(k)) f(x(k - \tau(k))) + u, \tag{3.31}$$

where $C$, $A$, $B$ are known real constant matrices, and the time-varying matrices $\Delta C(k)$, $\Delta A(k)$, and $\Delta B(k)$ represent the time-varying parameter uncertainties that are assumed to satisfy the following admissible condition:

$$[\Delta C(k) \ \ \Delta A(k) \ \ \Delta B(k)] = H F(k) [E_1 \ \ E_2 \ \ E_3],$$

where $H$ and $E_i$ ($i = 1, 2, 3$) are known real constant matrices, and $F(k)$ is the unknown time-varying matrix-valued function subject to the following condition:

$$F^T(k) F(k) \le I, \quad k \in \mathbb{N}.$$
For model (3.31), we readily have the following result.
Theorem 3.5. Suppose that (H) holds. If there exist nine symmetric positive definite matrices , , , , and four positive diagonal matrices , , such that the following LMI holds:
where
and , , , , , , , , , , and then uncertain discrete-time neural network (3.31) is globally dissipative and globally exponentially dissipative, and
is a positive invariant and globally attractive set.
Proof. By Lemma 2.3, we know that LMI (3.34) is equivalent to the following inequality:
From Lemma 2.4, we know that (3.37) is equivalent to the following inequality:
that is,
Let , , . As an application of Lemma 2.3, we know that (3.39) is equivalent to the following inequality:
By simple computation and noting that , we have
Therefore, inequality (3.40) is just the same as inequality (3.2) when we use , , to replace , , of inequality (3.2), respectively. From Theorems 3.1 and 3.2, we know that uncertain discrete-time neural network (3.31) is globally dissipative and globally exponentially dissipative, and
is a positive invariant and globally attractive set. The proof is then completed.
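The equivalences used in this proof follow the standard robustification recipe; schematically, with $\Omega$ standing for the nominal LMI block and $\Gamma_H$, $\Gamma_E$ for the aggregated uncertainty channels (names assumed here for illustration only):

```latex
% Absorb the norm-bounded uncertainty F(k) (F^T F <= I) by Lemma 2.4,
% then return to LMI form by the Schur complement (Lemma 2.3):
\Omega + \Gamma_{H} F(k) \Gamma_{E} + \Gamma_{E}^{T} F^{T}(k) \Gamma_{H}^{T} < 0
\quad \text{for all } F^{T}F \le I
\iff \exists\, \varepsilon > 0:\;
\Omega + \varepsilon \Gamma_{H} \Gamma_{H}^{T}
       + \varepsilon^{-1} \Gamma_{E}^{T} \Gamma_{E} < 0
\iff
\begin{pmatrix}
\Omega + \varepsilon \Gamma_{H} \Gamma_{H}^{T} & \Gamma_{E}^{T} \\
* & -\varepsilon I
\end{pmatrix} < 0 .
```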
4. Examples
Example 4.1. Consider a discrete-time neural network (2.1) with
It is easy to check that assumption (H) is satisfied, and , , , , , . Thus,
By the MATLAB LMI Control Toolbox, we can find a solution to the LMI in (3.2) as follows:
Therefore, by Theorems 3.1 and 3.2, we know that model (2.1) with the above given parameters is globally dissipative and globally exponentially dissipative. It is also easy to compute the corresponding positive invariant and globally attractive set.
The following example is given to illustrate that when the sufficient conditions ensuring global dissipativity are not satisfied, complex dynamics can appear.
Example 4.2. Consider a discrete-time neural network (2.1) with
It is easy to check that the linear matrix inequality (3.2), with the remaining data taken as in Example 4.1, has no feasible solution. Figure 1 depicts the states of the considered neural network (2.1) under the given initial conditions. One can see from Figure 1 that chaotic behavior appears in neural network (2.1) with the above given parameters.
5. Conclusions
In this paper, the global dissipativity and global exponential dissipativity have been investigated for uncertain discrete-time neural networks with time-varying delays and general activation functions. By constructing appropriate Lyapunov-Krasovskii functionals and employing the linear matrix inequality technique, several new delay-dependent criteria for checking the global dissipativity and global exponential dissipativity of the addressed neural networks have been derived in terms of LMIs, which can be checked numerically using the effective LMI toolbox in MATLAB. Illustrative examples are also given to show the effectiveness of the proposed criteria. It is noteworthy that because neither model transformation nor free-weighting matrices are employed to deal with cross terms in the derivation of the dissipativity criteria, the obtained results are less conservative and more computationally efficient.
Acknowledgments
The authors would like to thank the reviewers and the editor for their valuable suggestions and comments which have led to a much improved paper. This work was supported by the National Natural Science Foundation of China under Grants 60974132, 60874088, and 10772152.
References
K. Gopalsamy and X.-Z. He, “Delay-independent stability in bidirectional associative memory networks,” IEEE Transactions on Neural Networks, vol. 5, no. 6, pp. 998–1002, 1994.
M. Forti, “On global asymptotic stability of a class of nonlinear systems arising in neural network theory,” Journal of Differential Equations, vol. 113, no. 1, pp. 246–264, 1994.
S. Arik, “An analysis of exponential stability of delayed neural networks with time varying delays,” Neural Networks, vol. 17, no. 7, pp. 1027–1031, 2004.
D. Xu and Z. Yang, “Impulsive delay differential inequality and stability of neural networks,” Journal of Mathematical Analysis and Applications, vol. 305, no. 1, pp. 107–120, 2005.
T. Chen, W. Lu, and G. Chen, “Dynamical behaviors of a large class of general delayed neural networks,” Neural Computation, vol. 17, no. 4, pp. 949–968, 2005.
Y. Liu, Z. Wang, and X. Liu, “Global exponential stability of generalized recurrent neural networks with discrete and distributed delays,” Neural Networks, vol. 19, no. 5, pp. 667–675, 2006.
X. Liao, Q. Luo, Z. Zeng, and Y. Guo, “Global exponential stability in Lagrange sense for recurrent neural networks with time delays,” Nonlinear Analysis: Real World Applications, vol. 9, no. 4, pp. 1535–1557, 2008.
H. Zhang, Z. Wang, and D. Liu, “Global asymptotic stability of recurrent neural networks with multiple time-varying delays,” IEEE Transactions on Neural Networks, vol. 19, no. 5, pp. 855–873, 2008.
J. H. Park and O. M. Kwon, “Further results on state estimation for neural networks of neutral-type with time-varying delay,” Applied Mathematics and Computation, vol. 208, no. 1, pp. 69–75, 2009.
S. Xu, W. X. Zheng, and Y. Zou, “Passivity analysis of neural networks with time-varying delays,” IEEE Transactions on Circuits and Systems II, vol. 56, no. 4, pp. 325–329, 2009.
Z. Wang, Y. Liu, and X. Liu, “State estimation for jumping recurrent neural networks with discrete and distributed delays,” Neural Networks, vol. 22, no. 1, pp. 41–48, 2009.
J. H. Park, C. H. Park, O. M. Kwon, and S. M. Lee, “A new stability criterion for bidirectional associative memory neural networks of neutral-type,” Applied Mathematics and Computation, vol. 199, no. 2, pp. 716–722, 2008.
H. J. Cho and J. H. Park, “Novel delay-dependent robust stability criterion of delayed cellular neural networks,” Chaos, Solitons and Fractals, vol. 32, no. 3, pp. 1194–1200, 2007.
J. H. Park, “Robust stability of bidirectional associative memory neural networks with time delays,” Physics Letters A, vol. 349, no. 6, pp. 494–499, 2006.
J. H. Park, S. M. Lee, and H. Y. Jung, “LMI optimization approach to synchronization of stochastic delayed discrete-time complex networks,” Journal of Optimization Theory and Applications, vol. 143, no. 2, pp. 357–367, 2009.
S. Mohamad and K. Gopalsamy, “Exponential stability of continuous-time and discrete-time cellular neural networks with delays,” Applied Mathematics and Computation, vol. 135, no. 1, pp. 17–38, 2003.
S. Hu and J. Wang, “Global robust stability of a class of discrete-time interval neural networks,” IEEE Transactions on Circuits and Systems I, vol. 53, no. 1, pp. 129–138, 2006.
H. Gao and T. Chen, “New results on stability of discrete-time systems with time-varying state delay,” IEEE Transactions on Automatic Control, vol. 52, no. 2, pp. 328–334, 2007.
Z. Wu, H. Su, J. Chu, and W. Zhou, “Improved result on stability analysis of discrete stochastic neural networks with time delay,” Physics Letters A, vol. 373, no. 17, pp. 1546–1552, 2009.
J. Cao and F. Ren, “Exponential stability of discrete-time genetic regulatory networks with delays,” IEEE Transactions on Neural Networks, vol. 19, no. 3, pp. 520–523, 2008.
J. K. Hale, Asymptotic Behavior of Dissipative Systems, vol. 25 of Mathematical Surveys and Monographs, American Mathematical Society, Providence, RI, USA, 1988.
X. Liao and J. Wang, “Global dissipativity of continuous-time recurrent neural networks with time delay,” Physical Review E, vol. 68, no. 1, Article ID 016118, 7 pages, 2003.
Q. Song and Z. Zhao, “Global dissipativity of neural networks with both variable and unbounded delays,” Chaos, Solitons and Fractals, vol. 25, no. 2, pp. 393–401, 2005.
X. Y. Lou and B. T. Cui, “Global robust dissipativity for integro-differential systems modeling neural networks with delays,” Chaos, Solitons and Fractals, vol. 36, no. 2, pp. 469–478, 2008.
J. Cao, K. Yuan, D. W. C. Ho, and J. Lam, “Global point dissipativity of neural networks with mixed time-varying delays,” Chaos, vol. 16, no. 1, Article ID 013105, 9 pages, 2006.
Q. Song and J. Cao, “Global dissipativity analysis on uncertain neural networks with mixed time-varying delays,” Chaos, vol. 18, no. 4, Article ID 043126, 10 pages, 2008.
W. M. Haddad and V. S. Chellaboina, “Stability and dissipativity theory for nonnegative dynamical systems: a unified analysis framework for biological and physiological systems,” Nonlinear Analysis: Real World Applications, vol. 6, no. 1, pp. 35–65, 2005.
Y. Huang, D. Xu, and Z. Yang, “Dissipativity and periodic attractor for non-autonomous neural networks with time-varying delays,” Neurocomputing, vol. 70, no. 16–18, pp. 2953–2958, 2007.
H. Tian, N. Guo, and A. Shen, “Dissipativity of delay functional differential equations with bounded lag,” Journal of Mathematical Analysis and Applications, vol. 355, no. 2, pp. 778–782, 2009.
M. S. Mahmoud, Y. Shi, and F. M. AL-Sunni, “Dissipativity analysis and synthesis of a class of nonlinear systems with time-varying delays,” Journal of the Franklin Institute, vol. 346, no. 6, pp. 570–592, 2009.
S. Gan, “Dissipativity of θ-methods for nonlinear delay differential equations of neutral type,” Applied Numerical Mathematics, vol. 59, no. 6, pp. 1354–1365, 2009.
S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, vol. 15 of SIAM Studies in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1994.
P. P. Khargonekar, I. R. Petersen, and K. Zhou, “Robust stabilization of uncertain linear systems: quadratic stabilizability and H∞ control theory,” IEEE Transactions on Automatic Control, vol. 35, no. 3, pp. 356–361, 1990.
L. Xie, M. Fu, and C. E. de Souza, “H∞-control and quadratic stabilization of systems with parameter uncertainty via output feedback,” IEEE Transactions on Automatic Control, vol. 37, no. 8, pp. 1253–1256, 1992.