Abstract
The problems of global dissipativity and global exponential dissipativity are investigated for uncertain discrete-time neural networks with time-varying delays and general activation functions. By constructing appropriate Lyapunov-Krasovskii functionals and employing the linear matrix inequality technique, several new delay-dependent criteria for checking the global dissipativity and global exponential dissipativity of the addressed neural networks are established in terms of linear matrix inequalities (LMIs), which can be checked numerically using the effective LMI toolbox in MATLAB. Illustrative examples are given to show the effectiveness of the proposed criteria. It is noteworthy that, because neither model transformation nor free-weighting matrices are employed to deal with cross terms in the derivation of the dissipativity criteria, the obtained results are less conservative and more computationally efficient.
1. Introduction
In the past few decades, delayed neural networks have found successful applications in many areas such as signal processing, pattern recognition, associative memories, parallel computation, and optimization solvers [1]. In such applications, the qualitative analysis of the dynamical behaviors is a necessary step for the practical design of neural networks [2]. Many important results on the dynamical behaviors have been reported for delayed neural networks; see [1–16] and the references therein for some recent publications.
It should be pointed out that all of the abovementioned literature on the dynamical behaviors of delayed neural networks concerns the continuous-time case. However, when implementing a continuous-time delayed neural network for computer simulation, it becomes essential to formulate a discrete-time system that is an analogue of the continuous-time delayed neural network. To some extent, the discrete-time analogue inherits the dynamical characteristics of the continuous-time delayed neural network under mild or no restriction on the discretization step size, and also retains some functional similarity [17]. Unfortunately, as pointed out in [18], the discretization cannot preserve the dynamics of the continuous-time counterpart even for a small sampling period, and therefore there is a crucial need to study the dynamics of discrete-time neural networks. Recently, the dynamics analysis problem for discrete-time delayed neural networks and discrete-time systems with time-varying state delay has been extensively studied; see [17–21] and the references therein.
It is well known that the stability problem is central to the analysis of a dynamic system, where various types of stability of an equilibrium point have captured the attention of researchers. Nevertheless, from a practical point of view, it is not always the case that every neural network has its orbits approach a single equilibrium point; it is possible that there is no equilibrium point at all in some situations. Therefore, the concept of dissipativity has been introduced [22]. As pointed out in [23], dissipativity is also an important concept for dynamical neural networks. Dissipativity is a more general concept than stability of an equilibrium, and it has found applications in areas such as stability theory, chaos and synchronization theory, system norm estimation, and robust control [23]. Some sufficient conditions checking the dissipativity of delayed neural networks and nonlinear delay systems have been derived; see, for example, [23–33] and the references therein. In [23, 24], the authors analyzed neural networks with constant delays and derived some sufficient conditions for their global dissipativity. In [25, 26], the authors considered the global dissipativity and global robust dissipativity of neural networks with both time-varying delays and unbounded distributed delays; several sufficient conditions for checking the global dissipativity and global robust dissipativity were obtained. In [27, 28], by using the linear matrix inequality technique, the authors investigated the global dissipativity of neural networks with both discrete time-varying delays and distributed time-varying delays. In [29], the authors developed dissipativity notions for nonnegative dynamical systems with respect to linear and nonlinear storage functions and linear supply rates, and obtained a key result on the linearization of nonnegative dissipative dynamical systems.
In [30], the uniform dissipativity of a class of nonautonomous neural networks with time-varying delays was investigated by employing the M-matrix and inequality techniques. In [31–33], the dissipativity of a class of nonlinear delay systems was considered, and some sufficient conditions for checking the dissipativity were given. However, all of the abovementioned literature on the dissipativity of delayed neural networks and nonlinear delay systems concerns the continuous-time case. To the best of our knowledge, few authors have considered the problem of the dissipativity of uncertain discrete-time neural networks with time-varying delays. Therefore, the study of the dissipativity of uncertain discrete-time neural networks is not only important but also necessary.
Motivated by the above discussions, the objective of this paper is to study the problems of global dissipativity and global exponential dissipativity for uncertain discrete-time neural networks. By employing appropriate Lyapunov-Krasovskii functionals and the LMI technique, we obtain several new sufficient conditions for checking the global dissipativity and global exponential dissipativity of the addressed neural networks.
Notations 1. The notations are quite standard. Throughout this paper, I represents the identity matrix with appropriate dimensions; N stands for the set of nonnegative integers; R^n and R^{n×m} denote, respectively, the n-dimensional Euclidean space and the set of all n×m real matrices. The superscript "T" denotes matrix transposition and the asterisk "*" denotes the elements below the main diagonal of a symmetric block matrix. |A| denotes the absolute-value matrix given by |A| = (|a_ij|)_{n×n}; the notation X ≥ Y (resp., X > Y) means that X and Y are symmetric matrices and that X − Y is positive semidefinite (resp., positive definite). ||·|| is the Euclidean norm in R^n. For a positive constant a, [a] denotes the integer part of a. For integers a, b with a < b, N[a, b] denotes the discrete interval given by N[a, b] = {a, a + 1, ..., b}. C(N[−τ_2, 0], R^n) denotes the set of all functions φ: N[−τ_2, 0] → R^n. Matrices, if not explicitly specified, are assumed to have compatible dimensions.
2. Model Description and Preliminaries
In this paper, we consider the following discrete-time neural network model
for k ∈ N, where x(k) = (x_1(k), x_2(k), ..., x_n(k))^T ∈ R^n, and x_i(k) is the state of the ith neuron at time k; f(x(k)) = (f_1(x_1(k)), f_2(x_2(k)), ..., f_n(x_n(k)))^T, where f_i denotes the activation function of the ith neuron at time k; J ∈ R^n is the input vector; the positive integer τ(k) corresponds to the transmission delay and satisfies τ_1 ≤ τ(k) ≤ τ_2 (τ_1 and τ_2 are known integers); C = diag(c_1, c_2, ..., c_n), where c_i (i = 1, 2, ..., n) describes the rate with which the ith neuron will reset its potential to the resting state in isolation when disconnected from the networks and external inputs; A is the connection weight matrix; B is the delayed connection weight matrix.
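The displayed equation of model (2.1) was lost in extraction; the following is a hedged reconstruction in the standard form used for discrete-time delayed neural networks, with the symbol names C, A, B, J, and τ(k) taken from the description above:

```latex
x(k+1) = C\,x(k) + A\,f\big(x(k)\big) + B\,f\big(x(k-\tau(k))\big) + J, \qquad k \in \mathbb{N},
```

where f(x(k)) = (f_1(x_1(k)), f_2(x_2(k)), ..., f_n(x_n(k)))^T.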
The initial condition associated with model (2.1) is given by
Throughout this paper, we make the following assumption [6].
(H) For any s_1, s_2 ∈ R with s_1 ≠ s_2, there exist constants l_i and σ_i (i = 1, 2, ..., n) such that l_i ≤ (f_i(s_1) − f_i(s_2))/(s_1 − s_2) ≤ σ_i. Similar to [23], we also give the following definitions for discrete-time neural networks (2.1).
Definition 2.1. Discrete-time neural networks (2.1) are said to be globally dissipative if there exists a compact set S ⊆ R^n such that, for any initial state x_0 ∈ R^n, there exists a positive integer K(x_0) with x(k; 0, x_0) ∈ S for all k ≥ K(x_0), where x(k; 0, x_0) denotes the solution of (2.1) from initial state x_0 and initial time 0. In this case, S is called a globally attractive set. A set S is called positive invariant if x_0 ∈ S implies x(k; 0, x_0) ∈ S for k ∈ N.
Definition 2.2. Let S be a globally attractive set of discrete-time neural networks (2.1). Discrete-time neural networks (2.1) are said to be globally exponentially dissipative if there exists a compact set S* ⊇ S in R^n such that, for any x_0 ∈ R^n \ S*, there exist constants M(x_0) > 0 and 0 < α < 1 such that the distance from the solution x(k; 0, x_0) to S* is bounded by M(x_0)α^k. The set S* is called a globally exponentially attractive set, where x ∈ R^n \ S* means x ∈ R^n but x ∉ S*.
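The displayed estimate in Definition 2.2 was lost in extraction; a hedged reconstruction, patterned on the continuous-time definition of [23] with the exponential factor replaced by a geometric one, is:

```latex
\inf_{\tilde{x}\in S^{*}} \big\| x(k;0,x_{0}) - \tilde{x} \big\| \;\le\; M(x_{0})\,\alpha^{k}, \qquad k \in \mathbb{N},
```

where M(x_0) > 0 and 0 < α < 1 are the constants of the definition.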
To prove our results, the following lemmas are necessary.
Lemma 2.3 (see [34]). Given constant matrices Ω_1, Ω_2, and Ω_3, where Ω_1 = Ω_1^T and Ω_2 = Ω_2^T > 0, then Ω_1 + Ω_3^T Ω_2^{-1} Ω_3 < 0 is equivalent to the following conditions:
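The displayed conditions of Lemma 2.3 were lost in extraction. The lemma is the standard Schur complement result, so a hedged reconstruction (the symbol names Ω_1, Ω_2, Ω_3 are assumed) reads:

```latex
\Omega_{1} + \Omega_{3}^{T}\Omega_{2}^{-1}\Omega_{3} < 0
\;\Longleftrightarrow\;
\begin{bmatrix} \Omega_{1} & \Omega_{3}^{T} \\ \Omega_{3} & -\Omega_{2} \end{bmatrix} < 0
\;\Longleftrightarrow\;
\begin{bmatrix} -\Omega_{2} & \Omega_{3} \\ \Omega_{3}^{T} & \Omega_{1} \end{bmatrix} < 0 .
```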
Lemma 2.4 (see [35, 36]). Given matrices Q, H, and E with Q = Q^T, then Q + HFE + E^T F^T H^T < 0 holds for all F satisfying F^T F ≤ I if and only if there exists a scalar ε > 0 such that
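The concluding inequality of Lemma 2.4 was lost in extraction. The lemma is the standard norm-bounded uncertainty result, so a hedged reconstruction (symbol names Q, H, E, F, ε assumed) is:

```latex
Q + HFE + E^{T}F^{T}H^{T} < 0 \ \text{ for all } F \text{ with } F^{T}F \le I
\;\Longleftrightarrow\;
\exists\,\varepsilon > 0:\ Q + \varepsilon H H^{T} + \varepsilon^{-1} E^{T} E < 0 .
```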
3. Main Results
In this section, we shall establish our main criteria based on the LMI approach. For presentation convenience, in the following, we denote
Theorem 3.1. Suppose that (H) holds. If there exist nine symmetric positive definite matrices , , , , and four positive diagonal matrices , , such that the following LMI holds: where , , , , , , , , , , , , , , , and then discrete-time neural network (2.1) is globally dissipative, and is a positive invariant and globally attractive set.
Proof. For positive diagonal matrices and , we know from assumption (H) that
Defining , we consider the following Lyapunov-Krasovskii functional candidate for model (2.1) as
where
Calculating the difference of () along the positive half trajectory of (2.1), we obtain
Similarly, one has
When ,
When ,
For positive diagonal matrices and , we can get from assumption (H) that [6]
Denoting , where
it follows from (3.7) to (3.14) that
From condition (3.2) and inequality (3.16), we get
when , that is, . Therefore, discrete-time neural network (2.1) is a globally dissipative system, and the set is a positive invariant and globally attractive set as LMI (3.2) holds. The proof is completed.
We are now in a position to discuss the global exponential dissipativity of discrete-time neural network (2.1) as follows.
Theorem 3.2. Under the conditions of Theorem 3.1, neural network (2.1) is globally exponentially dissipative, and is a positive invariant and globally attractive set.
Proof. When , that is, , we know from (3.16) that
From the definition of in (3.5), it is easy to verify that
where
For any scalar , it follows from (3.19) and (3.20) that
Summing up both sides of (3.22) from to with respect to , we have
It is easy to compute that
From (3.20), we obtain
It follows from (3.23)–(3.25) that
where
Since , by the continuity of functions , we can choose a scalar such that . Obviously, . From (3.26), we get
From the definition of in (3.5), we have
Let , , then , . It follows from (3.28) and (3.29) that
for all , which means that discrete-time neural network (2.1) is globally exponentially dissipative, and the set is a positive invariant and globally attractive set as LMI (3.2) holds. The proof is completed.
Remark 3.3. In the study of the dissipativity of neural networks, assumption (H) of this paper is the same as that in [28]; the constants l_i and σ_i (i = 1, 2, ..., n) in assumption (H) are allowed to be positive, negative, or zero. Hence, assumption (H), first proposed by Liu et al. in [6], is weaker than the assumptions in [23–27, 30].
Remark 3.4. The idea behind constructing Lyapunov-Krasovskii functional (3.5) is to divide the delay interval [τ_1, τ_2] into two subintervals, so that the proposed Lyapunov-Krasovskii functional takes a different form according to the subinterval in which the time delay lies. The main advantage of such a Lyapunov-Krasovskii functional is that it makes full use of the information on the considered time delay τ(k).
Now, let us consider the case when the parameter uncertainties appear in the discrete-time neural networks with time-varying delays. In this case, model (2.1) can be further generalized to the following one:
where C, A, and B are known real constant matrices, and the time-varying matrices ΔC(k), ΔA(k), and ΔB(k) represent the time-varying parameter uncertainties, which are assumed to satisfy the following admissible condition:
where H and E are known real constant matrices, and F(k) is an unknown time-varying matrix-valued function subject to the following condition:
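The displayed admissible-uncertainty condition was lost in extraction; a hedged reconstruction in the form standard for norm-bounded uncertainty (the factor names E_c, E_a, E_b are assumptions, not taken from the source) is:

```latex
\big[\,\Delta C(k)\;\;\Delta A(k)\;\;\Delta B(k)\,\big] = H\,F(k)\,\big[\,E_{c}\;\;E_{a}\;\;E_{b}\,\big],
\qquad F^{T}(k)\,F(k) \le I, \quad k \in \mathbb{N}.
```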
For model (3.31), we readily have the following result.
Theorem 3.5. Suppose that (H) holds. If there exist nine symmetric positive definite matrices , , , , and four positive diagonal matrices , , such that the following LMI holds: where and , , , , , , , , , , and then uncertain discrete-time neural network (3.31) is globally dissipative and globally exponentially dissipative, and is a positive invariant and globally attractive set.
Proof. By Lemma 2.3, we know that LMI (3.34) is equivalent to the following inequality: From Lemma 2.4, we know that (3.37) is equivalent to the following inequality: that is Let , , . As an application of Lemma 2.3, we know that (3.39) is equivalent to the following inequality: By simple computation and noting that , we have Therefore, inequality (3.40) is just the same as inequality (3.2) when we use , , to replace , , of inequality (3.2), respectively. From Theorems 3.1 and 3.2, we know that uncertain discrete-time neural network (3.31) is globally dissipative and globally exponentially dissipative, and is a positive invariant and globally attractive set. The proof is then completed.
4. Examples
Example 4.1. Consider a discrete-time neural network (2.1) with
It is easy to check that assumption (H) is satisfied, and , , , , , . Thus,
By the Matlab LMI Control Toolbox, we can find a solution to the LMI in (3.2) as follows:
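The numerical solution values of this example were lost in extraction, so they cannot be reproduced here. As a hedged illustration only (not the paper's MATLAB code, and with hypothetical matrices Q, S, R rather than the example's data), the following NumPy sketch shows how the Schur complement of Lemma 2.3 reduces the positive-definiteness test of a block matrix, which is the kind of check underlying the feasibility of LMI (3.2):

```python
import numpy as np

# Hypothetical symmetric blocks for illustration; Example 4.1's
# actual matrices are not reproduced here.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # Q = Q^T
S = np.array([[1.0, 0.5], [0.0, 1.0]])   # off-diagonal block
R = np.array([[5.0, 0.0], [0.0, 5.0]])   # R = R^T > 0

def is_pos_def(M):
    """Test positive definiteness via eigenvalues of the symmetric part."""
    return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2) > 0))

# The block matrix [[Q, S], [S^T, R]] is positive definite ...
block = np.block([[Q, S], [S.T, R]])

# ... if and only if R > 0 and the Schur complement Q - S R^{-1} S^T > 0.
schur = Q - S @ np.linalg.inv(R) @ S.T

# The two tests agree, as Lemma 2.3 asserts.
assert is_pos_def(block) == (is_pos_def(R) and is_pos_def(schur))
```

In practice the MATLAB LMI toolbox (or a semidefinite-programming solver) searches for the unknown matrices directly, but once candidate values are returned, an eigenvalue check of this kind verifies them.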
Therefore, by Theorem 3.1, we know that model (2.1) with the above parameters is globally dissipative and globally exponentially dissipative. It is easy to compute that the positive invariant and globally attractive set is .
The following example illustrates that, when the sufficient conditions ensuring global dissipativity are not satisfied, complex dynamics can appear.
Example 4.2. Consider a discrete-time neural network (2.1) with It is easy to check that linear matrix inequality (3.2) with and of Example 4.1 has no feasible solution. Figure 1 depicts the states of the considered neural network (2.1) with initial conditions , , . One can see from Figure 1 that chaotic behavior appears for neural network (2.1) with the above parameters.
5. Conclusions
In this paper, the global dissipativity and global exponential dissipativity have been investigated for uncertain discrete-time neural networks with time-varying delays and general activation functions. By constructing appropriate Lyapunov-Krasovskii functionals and employing the linear matrix inequality technique, several new delay-dependent criteria for checking the global dissipativity and global exponential dissipativity of the addressed neural networks have been derived in terms of LMIs, which can be checked numerically using the effective LMI toolbox in MATLAB. Illustrative examples have also been given to show the effectiveness of the proposed criteria. It is noteworthy that, because neither model transformation nor free-weighting matrices are employed to deal with cross terms in the derivation of the dissipativity criteria, the obtained results are less conservative and more computationally efficient.
Acknowledgments
The authors would like to thank the reviewers and the editor for their valuable suggestions and comments which have led to a much improved paper. This work was supported by the National Natural Science Foundation of China under Grants 60974132, 60874088, and 10772152.