Abstract

The problems of global dissipativity and global exponential dissipativity are investigated for uncertain discrete-time neural networks with time-varying delays and general activation functions. By constructing appropriate Lyapunov-Krasovskii functionals and employing the linear matrix inequality technique, several new delay-dependent criteria for checking the global dissipativity and global exponential dissipativity of the addressed neural networks are established in terms of linear matrix inequalities (LMIs), which can be checked numerically using the effective LMI toolbox in MATLAB. Illustrative examples are given to show the effectiveness of the proposed criteria. It is noteworthy that, because neither model transformation nor free-weighting matrices are employed to deal with cross terms in the derivation of the dissipativity criteria, the obtained results are less conservative and more computationally efficient.

1. Introduction

In the past few decades, delayed neural networks have found successful applications in many areas such as signal processing, pattern recognition, associative memories, parallel computation, and optimization solvers [1]. In such applications, the qualitative analysis of the dynamical behaviors is a necessary step for the practical design of neural networks [2]. Many important results on the dynamical behaviors have been reported for delayed neural networks; see [1-16] and the references therein for some recent publications.

It should be pointed out that all of the abovementioned literature on the dynamical behaviors of delayed neural networks is concerned with the continuous-time case. However, when implementing a continuous-time delayed neural network for computer simulation, it becomes essential to formulate a discrete-time system that is an analogue of the continuous-time delayed neural network. To some extent, the discrete-time analogue inherits the dynamical characteristics of the continuous-time delayed neural network under mild or no restriction on the discretization step-size, and also retains some functional similarity [17]. Unfortunately, as pointed out in [18], the discretization cannot preserve the dynamics of the continuous-time counterpart even for a small sampling period, and therefore there is a crucial need to study the dynamics of discrete-time neural networks. Recently, the dynamics analysis problem for discrete-time delayed neural networks and discrete-time systems with time-varying state delay has been extensively studied; see [17-21] and references therein.

It is well known that the stability problem is central to the analysis of a dynamic system, and various types of stability of an equilibrium point have captured the attention of researchers. Nevertheless, from a practical point of view, it is not always the case that the orbits of a neural network approach a single equilibrium point; it is even possible that no equilibrium point exists in some situations. Therefore, the concept of dissipativity has been introduced [22]. As pointed out in [23], dissipativity is also an important concept in dynamical neural networks. The concept of dissipativity in dynamical systems is more general and has found applications in areas such as stability theory, chaos and synchronization theory, system norm estimation, and robust control [23]. Some sufficient conditions for checking the dissipativity of delayed neural networks and nonlinear delay systems have been derived; see, for example, [23-33] and references therein. In [23, 24], the authors analyzed the dissipativity of neural networks with constant delays and derived some sufficient conditions for their global dissipativity. In [25, 26], the authors considered the global dissipativity and global robust dissipativity of neural networks with both time-varying delays and unbounded distributed delays; several sufficient conditions for checking the global dissipativity and global robust dissipativity were obtained. In [27, 28], by using the linear matrix inequality technique, the authors investigated the global dissipativity of neural networks with both discrete time-varying delays and distributed time-varying delays. In [29], the authors developed dissipativity notions for nonnegative dynamical systems with respect to linear and nonlinear storage functions and linear supply rates, and obtained a key result on the linearization of nonnegative dissipative dynamical systems. In [30], the uniform dissipativity of a class of nonautonomous neural networks with time-varying delays was investigated by employing the M-matrix and inequality techniques. In [31-33], the dissipativity of a class of nonlinear delay systems was considered, and some sufficient conditions for checking the dissipativity were given. However, all of the aforementioned works on the dissipativity of delayed neural networks and nonlinear delay systems are concerned with the continuous-time case. To the best of our knowledge, few authors have considered the dissipativity of uncertain discrete-time neural networks with time-varying delays. Therefore, the study of the dissipativity of uncertain discrete-time neural networks is not only important but also necessary.

Motivated by the above discussions, the objective of this paper is to study the problems of global dissipativity and global exponential dissipativity for uncertain discrete-time neural networks. By employing appropriate Lyapunov-Krasovskii functionals and the LMI technique, we obtain several new sufficient conditions for checking the global dissipativity and global exponential dissipativity of the addressed neural networks.

Notations 1. The notations are quite standard. Throughout this paper, $I$ represents the identity matrix with appropriate dimensions; $\mathbb{N}$ stands for the set of nonnegative integers; $\mathbb{R}^n$ and $\mathbb{R}^{n\times m}$ denote, respectively, the $n$-dimensional Euclidean space and the set of all $n\times m$ real matrices. The superscript "$T$" denotes matrix transposition, and the asterisk "$*$" denotes the elements below the main diagonal of a symmetric block matrix. $|A|$ denotes the absolute-value matrix given by $|A| = (|a_{ij}|)_{n\times n}$; the notation $X \ge Y$ (resp., $X > Y$) means that $X$ and $Y$ are symmetric matrices and that $X - Y$ is positive semidefinite (resp., positive definite). $\|\cdot\|$ is the Euclidean norm in $\mathbb{R}^n$. For a positive constant $a$, $[a]$ denotes the integer part of $a$. For integers $a$, $b$ with $a < b$, $N[a,b]$ denotes the discrete interval given by $N[a,b] = \{a, a+1, \ldots, b-1, b\}$, and $C(N[-\bar{\tau},0], \mathbb{R}^n)$ denotes the set of all functions $\phi: N[-\bar{\tau},0] \to \mathbb{R}^n$. Matrices, if not explicitly specified, are assumed to have compatible dimensions.

2. Model Description and Preliminaries

In this paper, we consider the following discrete-time neural network model:

$$x(k+1) = Cx(k) + Ag(x(k)) + Bg(x(k-\tau(k))) + u \qquad (2.1)$$
for $k \in \mathbb{N}$, where $x(k) = (x_1(k), x_2(k), \ldots, x_n(k))^T \in \mathbb{R}^n$, and $x_i(k)$ is the state of the $i$th neuron at time $k$; $g(x(k)) = (g_1(x_1(k)), g_2(x_2(k)), \ldots, g_n(x_n(k)))^T \in \mathbb{R}^n$, and $g_j(x_j(k))$ denotes the activation function of the $j$th neuron at time $k$; $u = (u_1, u_2, \ldots, u_n)^T$ is the input vector; the positive integer $\tau(k)$ corresponds to the transmission delay and satisfies $\underline{\tau} \le \tau(k) \le \bar{\tau}$ ($\underline{\tau} \ge 0$ and $\bar{\tau} \ge 0$ are known integers); $C = \mathrm{diag}\{c_1, c_2, \ldots, c_n\}$, where $c_i$ ($0 \le c_i < 1$) describes the rate with which the $i$th neuron resets its potential to the resting state in isolation when disconnected from the network and external inputs; $A = (a_{ij})_{n\times n}$ is the connection weight matrix; $B = (b_{ij})_{n\times n}$ is the delayed connection weight matrix.
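To make the recursion (2.1) concrete, the following minimal Python sketch iterates the model with a delay buffer. All matrices, the activation, and the delay signal below are illustrative placeholders chosen to satisfy the stated structural constraints ($0 \le c_i < 1$, integer delay in $[\underline{\tau},\bar{\tau}]$); they are not parameters from the paper.

```python
import numpy as np

n = 2
C = np.diag([0.1, 0.2])                       # 0 <= c_i < 1
A = np.array([[0.05, -0.02], [0.01, 0.03]])   # connection weights (placeholder)
B = np.array([[0.02, 0.01], [-0.01, 0.04]])   # delayed connection weights (placeholder)
u = np.array([0.1, -0.05])                    # constant external input (placeholder)
tau_lo, tau_hi = 2, 5                         # bounds on the delay tau(k)

g = np.tanh                                   # an activation satisfying (H)
tau = lambda k: tau_lo + (k % (tau_hi - tau_lo + 1))  # some integer delay in [2, 5]

K = 200
x = np.zeros((K + tau_hi + 1, n))
x[:tau_hi + 1] = 0.5                          # initial sequence phi(s), s in N[-5, 0]
for k in range(tau_hi, K + tau_hi):
    x[k + 1] = C @ x[k] + A @ g(x[k]) + B @ g(x[k - tau(k)]) + u
print(x[-1])                                  # state after K steps
```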

The initial condition associated with model (2.1) is given by

$$x(s) = \varphi(s), \quad s \in N[-\bar{\tau}, 0]. \qquad (2.2)$$

Throughout this paper, we make the following assumption [6].

(H) For any $j \in \{1, 2, \ldots, n\}$, $g_j(0) = 0$ and there exist constants $G_j^-$ and $G_j^+$ such that
$$G_j^- \le \frac{g_j(\alpha_1) - g_j(\alpha_2)}{\alpha_1 - \alpha_2} \le G_j^+, \quad \forall \alpha_1, \alpha_2 \in \mathbb{R}, \; \alpha_1 \ne \alpha_2. \qquad (2.3)$$
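The sector constants in (H) can be estimated numerically for a given scalar activation. The short sketch below samples difference quotients on a fine grid; it is only a heuristic check (the true constants are the infimum and supremum over all pairs), shown here for the first activation used in Example 4.1 below.

```python
import numpy as np

def sector_bounds(g, lo=-50.0, hi=50.0, m=200001):
    s = np.linspace(lo, hi, m)
    q = np.diff(g(s)) / np.diff(s)    # difference quotients on adjacent grid points
    return q.min(), q.max()

g1 = lambda x: np.tanh(0.7 * x) + 0.1 * np.sin(x)
print(sector_bounds(g1))              # approximately (-0.1, 0.8); cf. Example 4.1
```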

Similar to [23], we also give the following definitions for discrete-time neural network (2.1).

Definition 2.1. Discrete-time neural network (2.1) is said to be globally dissipative if there exists a compact set $S \subseteq \mathbb{R}^n$ such that, for any $x_0 \in \mathbb{R}^n$, there exists a positive integer $K(x_0) > 0$ such that $x(k, k_0, x_0) \in S$ for all $k \ge k_0 + K(x_0)$, where $x(k, k_0, x_0)$ denotes the solution of (2.1) from initial state $x_0$ and initial time $k_0$. In this case, $S$ is called a globally attractive set. A set $S$ is called positively invariant if $x_0 \in S$ implies $x(k, k_0, x_0) \in S$ for all $k \ge k_0$.

Definition 2.2. Let $S$ be a globally attractive set of discrete-time neural network (2.1). Discrete-time neural network (2.1) is said to be globally exponentially dissipative if there exists a compact set $S^* \supseteq S$ in $\mathbb{R}^n$ such that, for all $x_0 \in \mathbb{R}^n \setminus S^*$, there exist constants $M(x_0) > 0$ and $0 < \beta < 1$ such that
$$\inf_{\tilde{x} \in S^*} \left\| x(k, k_0, x_0) - \tilde{x} \right\| \le M(x_0) \beta^{k - k_0}, \quad k \ge k_0. \qquad (2.4)$$
The set $S^*$ is called a globally exponentially attractive set, where $x \in \mathbb{R}^n \setminus S^*$ means $x \in \mathbb{R}^n$ but $x \notin S^*$.

To prove our results, the following lemmas are necessary.

Lemma 2.3 (see [34]). Given constant matrices $P$, $Q$, and $R$, where $P^T = P$, $Q^T = Q$, then
$$\begin{pmatrix} P & R \\ R^T & -Q \end{pmatrix} < 0 \qquad (2.5)$$
is equivalent to the following conditions:
$$Q > 0, \qquad P + R Q^{-1} R^T < 0. \qquad (2.6)$$
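A quick numerical sanity check of Lemma 2.3 (the Schur complement lemma) can be run as follows; the matrices are randomly generated, and $P$ is constructed so that the second condition in (2.6) holds by design.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
Q = rng.standard_normal((n, n)); Q = Q @ Q.T + n * np.eye(n)   # Q > 0
R = rng.standard_normal((n, n))
RQR = R @ np.linalg.solve(Q, R.T)                              # R Q^{-1} R^T
P = -RQR - np.eye(n)                                           # so P + R Q^{-1} R^T = -I < 0

M = np.block([[P, R], [R.T, -Q]])
print(np.linalg.eigvalsh(M).max() < 0)       # True: block form (2.5) is negative definite
print(np.linalg.eigvalsh(P + RQR).max() < 0) # True: condition (2.6) holds
```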

Lemma 2.4 (see [35, 36]). Given matrices $P$, $Q$, and $R$ with $P^T = P$, then
$$P + QF(k)R + R^T F^T(k) Q^T < 0 \qquad (2.7)$$
holds for all $F(k)$ satisfying $F^T(k)F(k) \le I$ if and only if there exists a scalar $\varepsilon > 0$ such that
$$P + \varepsilon^{-1} Q Q^T + \varepsilon R^T R < 0. \qquad (2.8)$$
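Lemma 2.4 can likewise be illustrated by sampling admissible uncertainties: if (2.8) holds for some $\varepsilon > 0$ (below, $\varepsilon = 1$ by construction), then (2.7) should hold for every $F$ with $F^T F \le I$. A sampling sketch with random trial matrices, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
Q = rng.standard_normal((n, n))
R = rng.standard_normal((n, n))
P = -(Q @ Q.T + R.T @ R) - np.eye(n)          # makes (2.8) hold with eps = 1

ok = True
for _ in range(1000):
    F = rng.standard_normal((n, n))
    F /= max(1.0, np.linalg.norm(F, 2))       # enforce the bound F^T F <= I
    M = P + Q @ F @ R + R.T @ F.T @ Q.T
    ok = ok and np.linalg.eigvalsh(M).max() < 0
print(ok)                                     # True, as Lemma 2.4 predicts
```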

3. Main Results

In this section, we shall establish our main criteria based on the LMI approach. For presentation convenience, we introduce the following notation:

$$G_1 = \mathrm{diag}\{G_1^-, G_2^-, \ldots, G_n^-\}, \qquad G_2 = \mathrm{diag}\{G_1^+, G_2^+, \ldots, G_n^+\}, \qquad G_3 = \mathrm{diag}\{G_1^- G_1^+, G_2^- G_2^+, \ldots, G_n^- G_n^+\},$$
$$G_4 = \mathrm{diag}\left\{ \frac{G_1^- + G_1^+}{2}, \frac{G_2^- + G_2^+}{2}, \ldots, \frac{G_n^- + G_n^+}{2} \right\}, \qquad \delta = \left[ \frac{\underline{\tau} + \bar{\tau}}{2} \right]. \qquad (3.1)$$
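For reference, the diagonal matrices in (3.1) are immediate to compute from the sector bounds of (H); the snippet below does so for the constants of Example 4.1, so the printed values can be compared with (4.2).

```python
import numpy as np

Gm = np.array([-0.1, -0.2])                   # G_j^- from Example 4.1
Gp = np.array([0.8, 0.6])                     # G_j^+ from Example 4.1
tau_lo, tau_hi = 5, 9

G1, G2 = np.diag(Gm), np.diag(Gp)
G3 = np.diag(Gm * Gp)                         # diag{G_j^- G_j^+}
G4 = np.diag((Gm + Gp) / 2)                   # diag{(G_j^- + G_j^+)/2}
delta = (tau_lo + tau_hi) // 2                # integer part of (tau_ + tau^)/2
print(np.diag(G3), np.diag(G4), delta)        # [-0.08 -0.12] [0.35 0.2 ] 7
```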

Theorem 3.1. Suppose that (H) holds. If there exist nine symmetric positive definite matrices $P > 0$, $W > 0$, $R > 0$, $Q_i > 0$ $(i = 1, 2, \ldots, 6)$ and four positive diagonal matrices $D_i > 0$ $(i = 1, 2)$, $H_i > 0$ $(i = 1, 2)$ such that the following LMI holds:
$$\Pi = \begin{pmatrix} \Pi_{11} & \Pi_{12} & \Pi_{13} & \Pi_{14} & \Pi_{15} & 0 & 0 \\ * & \Pi_{22} & \Pi_{23} & \Pi_{24} & 0 & 0 & 0 \\ * & * & \Pi_{33} & \Pi_{34} & 0 & \Pi_{36} & 0 \\ * & * & * & \Pi_{44} & 0 & 0 & 0 \\ * & * & * & * & \Pi_{55} & 0 & \Pi_{57} \\ * & * & * & * & * & \Pi_{66} & 0 \\ * & * & * & * & * & * & \Pi_{77} \end{pmatrix} < 0, \qquad (3.2)$$
where $\Pi_{11} = C^T P C - P + Q_1 + (1 + \bar{\tau} - \underline{\tau})(Q_2 - 2G_1 D_1 + 2G_2 D_2) - G_3 H_1 + (C-I)^T (Q_4 + Q_6)(C-I) - (1/\delta^2) Q_4 + Q_5 + W$, $\Pi_{12} = C^T P A + (1 + \bar{\tau} - \underline{\tau})(D_1 - D_2) + G_4 H_1 + (C-I)^T (Q_4 + Q_6) A$, $\Pi_{13} = C^T P B + (C-I)^T (Q_4 + Q_6) B$, $\Pi_{14} = C^T P + (C-I)^T (Q_4 + Q_6)$, $\Pi_{15} = (1/\delta^2) Q_4$, $\Pi_{22} = (1 + \bar{\tau} - \underline{\tau}) Q_3 - H_1 + A^T (P + Q_4 + Q_6) A$, $\Pi_{23} = A^T (P + Q_4 + Q_6) B$, $\Pi_{24} = A^T (P + Q_4 + Q_6)$, $\Pi_{33} = -Q_3 - H_2 + B^T (P + Q_4 + Q_6) B$, $\Pi_{34} = B^T (P + Q_4 + Q_6)$, $\Pi_{36} = -D_1 + D_2 + G_4 H_2$, $\Pi_{44} = P + Q_4 + Q_6 - R$, $\Pi_{55} = -Q_1 - (1/\delta^2) Q_4 - (1/(\bar{\tau} - \delta)^2) Q_6$, $\Pi_{57} = (1/(\bar{\tau} - \delta)^2) Q_6$, $\Pi_{66} = -Q_2 + 2G_1 D_1 - 2G_2 D_2 - G_3 H_2$, and $\Pi_{77} = -Q_5 - (1/(\bar{\tau} - \delta)^2) Q_6$, then discrete-time neural network (2.1) is globally dissipative, and
$$S = \left\{ x : \|x\| \le \left[ \frac{u^T R u}{\lambda_{\min}(W)} \right]^{1/2} \right\} \qquad (3.3)$$
is a positive invariant and globally attractive set.

Proof. For positive diagonal matrices $D_1 > 0$ and $D_2 > 0$, we know from assumption (H) that
$$\left[ g(x(k)) - G_1 x(k) \right]^T D_1 x(k) \ge 0, \qquad \left[ G_2 x(k) - g(x(k)) \right]^T D_2 x(k) \ge 0. \qquad (3.4)$$
Defining $\eta(k) = x(k+1) - x(k)$, we consider the following Lyapunov-Krasovskii functional candidate for model (2.1):
$$V(k, x(k)) = \sum_{i=1}^{8} V_i(k, x(k)), \qquad (3.5)$$
where
$$V_1(k, x(k)) = x^T(k) P x(k), \qquad V_2(k, x(k)) = \sum_{i=k-\delta}^{k-1} x^T(i) Q_1 x(i),$$
$$V_3(k, x(k)) = \sum_{i=k-\tau(k)}^{k-1} x^T(i) Q_2 x(i) + \sum_{l=k-\bar{\tau}+1}^{k-\underline{\tau}} \sum_{i=l}^{k-1} x^T(i) Q_2 x(i),$$
$$V_4(k, x(k)) = \sum_{i=k-\tau(k)}^{k-1} g^T(x(i)) Q_3 g(x(i)) + \sum_{l=k-\bar{\tau}+1}^{k-\underline{\tau}} \sum_{i=l}^{k-1} g^T(x(i)) Q_3 g(x(i)),$$
$$V_5(k, x(k)) = 2 \sum_{i=k-\tau(k)}^{k-1} \left[ g(x(i)) - G_1 x(i) \right]^T D_1 x(i) + 2 \sum_{l=k-\bar{\tau}+1}^{k-\underline{\tau}} \sum_{i=l}^{k-1} \left[ g(x(i)) - G_1 x(i) \right]^T D_1 x(i),$$
$$V_6(k, x(k)) = 2 \sum_{i=k-\tau(k)}^{k-1} \left[ G_2 x(i) - g(x(i)) \right]^T D_2 x(i) + 2 \sum_{l=k-\bar{\tau}+1}^{k-\underline{\tau}} \sum_{i=l}^{k-1} \left[ G_2 x(i) - g(x(i)) \right]^T D_2 x(i),$$
$$V_7(k, x(k)) = \frac{1}{\delta} \sum_{l=-\delta}^{-1} \sum_{i=k+l}^{k-1} \eta^T(i) Q_4 \eta(i),$$
$$V_8(k, x(k)) = \begin{cases} \displaystyle \sum_{i=k-\underline{\tau}}^{k-1} x^T(i) Q_5 x(i) + \frac{1}{\bar{\tau} - \delta} \sum_{l=k-\delta}^{k-\underline{\tau}-1} \sum_{i=l}^{k-1} \eta^T(i) Q_6 \eta(i), & \underline{\tau} \le \tau(k) \le \delta, \\[2mm] \displaystyle \sum_{i=k-\bar{\tau}}^{k-1} x^T(i) Q_5 x(i) + \frac{1}{\bar{\tau} - \delta} \sum_{l=k-\bar{\tau}}^{k-\delta-1} \sum_{i=l}^{k-1} \eta^T(i) Q_6 \eta(i), & \delta < \tau(k) \le \bar{\tau}. \end{cases} \qquad (3.6)$$
Calculating the differences of $V_i(k)$ along the positive half trajectory of (2.1), we obtain
$$\begin{aligned} \Delta V_1(k, x(k)) &= x^T(k) \left( C^T P C - P \right) x(k) + 2x^T(k) C^T P A g(x(k)) + 2x^T(k) C^T P B g(x(k-\tau(k))) \\ &\quad + 2x^T(k) C^T P u + g^T(x(k)) A^T P A g(x(k)) + 2g^T(x(k)) A^T P B g(x(k-\tau(k))) \\ &\quad + 2g^T(x(k)) A^T P u + g^T(x(k-\tau(k))) B^T P B g(x(k-\tau(k))) + 2g^T(x(k-\tau(k))) B^T P u + u^T P u, \end{aligned} \qquad (3.7)$$
$$\Delta V_2(k, x(k)) = x^T(k) Q_1 x(k) - x^T(k-\delta) Q_1 x(k-\delta), \qquad (3.8)$$
$$\begin{aligned} \Delta V_3(k, x(k)) &= \sum_{i=k+1-\tau(k+1)}^{k} x^T(i) Q_2 x(i) - \sum_{i=k-\tau(k)}^{k-1} x^T(i) Q_2 x(i) + \sum_{l=k-\bar{\tau}+2}^{k+1-\underline{\tau}} \sum_{i=l}^{k} x^T(i) Q_2 x(i) - \sum_{l=k-\bar{\tau}+1}^{k-\underline{\tau}} \sum_{i=l}^{k-1} x^T(i) Q_2 x(i) \\ &= \sum_{i=k+1-\tau(k+1)}^{k-\underline{\tau}} x^T(i) Q_2 x(i) + \sum_{i=k-\underline{\tau}+1}^{k-1} x^T(i) Q_2 x(i) + x^T(k) Q_2 x(k) - \sum_{i=k-\tau(k)+1}^{k-1} x^T(i) Q_2 x(i) \\ &\quad - x^T(k-\tau(k)) Q_2 x(k-\tau(k)) + (\bar{\tau} - \underline{\tau}) x^T(k) Q_2 x(k) - \sum_{l=k-\bar{\tau}+1}^{k-\underline{\tau}} x^T(l) Q_2 x(l) \\ &\le (1 + \bar{\tau} - \underline{\tau}) x^T(k) Q_2 x(k) - x^T(k-\tau(k)) Q_2 x(k-\tau(k)). \end{aligned} \qquad (3.9)$$
Similarly, one has
$$\begin{aligned} \Delta V_4(k, x(k)) &\le (1 + \bar{\tau} - \underline{\tau}) g^T(x(k)) Q_3 g(x(k)) - g^T(x(k-\tau(k))) Q_3 g(x(k-\tau(k))), \\ \Delta V_5(k, x(k)) &\le 2(1 + \bar{\tau} - \underline{\tau}) \left[ g(x(k)) - G_1 x(k) \right]^T D_1 x(k) - 2 \left[ g(x(k-\tau(k))) - G_1 x(k-\tau(k)) \right]^T D_1 x(k-\tau(k)), \\ \Delta V_6(k, x(k)) &\le 2(1 + \bar{\tau} - \underline{\tau}) \left[ G_2 x(k) - g(x(k)) \right]^T D_2 x(k) - 2 \left[ G_2 x(k-\tau(k)) - g(x(k-\tau(k))) \right]^T D_2 x(k-\tau(k)), \\ \Delta V_7(k, x(k)) &= \eta^T(k) Q_4 \eta(k) - \frac{1}{\delta} \sum_{i=k-\delta}^{k-1} \eta^T(i) Q_4 \eta(i) \le \eta^T(k) Q_4 \eta(k) - \frac{1}{\delta^2} \left( \sum_{i=k-\delta}^{k-1} \eta^T(i) \right) Q_4 \left( \sum_{i=k-\delta}^{k-1} \eta(i) \right) \\ &= x^T(k) (C-I)^T Q_4 (C-I) x(k) + 2x^T(k) (C-I)^T Q_4 A g(x(k)) + 2x^T(k) (C-I)^T Q_4 B g(x(k-\tau(k))) \\ &\quad + 2x^T(k) (C-I)^T Q_4 u + g^T(x(k)) A^T Q_4 A g(x(k)) + 2g^T(x(k)) A^T Q_4 B g(x(k-\tau(k))) \\ &\quad + 2g^T(x(k)) A^T Q_4 u + g^T(x(k-\tau(k))) B^T Q_4 B g(x(k-\tau(k))) + 2g^T(x(k-\tau(k))) B^T Q_4 u + u^T Q_4 u \\ &\quad + \frac{1}{\delta^2} \begin{pmatrix} x(k) \\ x(k-\delta) \end{pmatrix}^T \begin{pmatrix} -Q_4 & Q_4 \\ Q_4 & -Q_4 \end{pmatrix} \begin{pmatrix} x(k) \\ x(k-\delta) \end{pmatrix}, \end{aligned} \qquad (3.10)$$
where we used $\eta(k) = (C-I)x(k) + Ag(x(k)) + Bg(x(k-\tau(k))) + u$, which follows from (2.1). When $\underline{\tau} \le \tau(k) \le \delta$,
$$\begin{aligned} \Delta V_8(k, x(k)) &= x^T(k) Q_5 x(k) - x^T(k-\underline{\tau}) Q_5 x(k-\underline{\tau}) + \frac{\delta - \underline{\tau}}{\bar{\tau} - \delta} \eta^T(k) Q_6 \eta(k) - \frac{1}{\bar{\tau} - \delta} \sum_{i=k-\delta}^{k-\underline{\tau}-1} \eta^T(i) Q_6 \eta(i) \\ &\le x^T(k) Q_5 x(k) - x^T(k-\underline{\tau}) Q_5 x(k-\underline{\tau}) + \eta^T(k) Q_6 \eta(k) - \frac{1}{(\bar{\tau} - \delta)^2} \left( \sum_{i=k-\delta}^{k-\underline{\tau}-1} \eta^T(i) \right) Q_6 \left( \sum_{i=k-\delta}^{k-\underline{\tau}-1} \eta(i) \right) \\ &= -x^T(k-\underline{\tau}) Q_5 x(k-\underline{\tau}) + x^T(k) \left[ (C-I)^T Q_6 (C-I) + Q_5 \right] x(k) + 2x^T(k) (C-I)^T Q_6 A g(x(k)) \\ &\quad + 2x^T(k) (C-I)^T Q_6 B g(x(k-\tau(k))) + 2x^T(k) (C-I)^T Q_6 u + g^T(x(k)) A^T Q_6 A g(x(k)) \\ &\quad + 2g^T(x(k)) A^T Q_6 B g(x(k-\tau(k))) + 2g^T(x(k)) A^T Q_6 u + g^T(x(k-\tau(k))) B^T Q_6 B g(x(k-\tau(k))) \\ &\quad + 2g^T(x(k-\tau(k))) B^T Q_6 u + u^T Q_6 u + \frac{1}{(\bar{\tau} - \delta)^2} \begin{pmatrix} x(k-\underline{\tau}) \\ x(k-\delta) \end{pmatrix}^T \begin{pmatrix} -Q_6 & Q_6 \\ Q_6 & -Q_6 \end{pmatrix} \begin{pmatrix} x(k-\underline{\tau}) \\ x(k-\delta) \end{pmatrix}. \end{aligned} \qquad (3.11)$$
When $\delta < \tau(k) \le \bar{\tau}$, the same arguments give
$$\begin{aligned} \Delta V_8(k, x(k)) &= x^T(k) Q_5 x(k) - x^T(k-\bar{\tau}) Q_5 x(k-\bar{\tau}) + \eta^T(k) Q_6 \eta(k) - \frac{1}{\bar{\tau} - \delta} \sum_{i=k-\bar{\tau}}^{k-\delta-1} \eta^T(i) Q_6 \eta(i) \\ &\le -x^T(k-\bar{\tau}) Q_5 x(k-\bar{\tau}) + x^T(k) \left[ (C-I)^T Q_6 (C-I) + Q_5 \right] x(k) + \cdots + u^T Q_6 u \\ &\quad + \frac{1}{(\bar{\tau} - \delta)^2} \begin{pmatrix} x(k-\bar{\tau}) \\ x(k-\delta) \end{pmatrix}^T \begin{pmatrix} -Q_6 & Q_6 \\ Q_6 & -Q_6 \end{pmatrix} \begin{pmatrix} x(k-\bar{\tau}) \\ x(k-\delta) \end{pmatrix}, \end{aligned} \qquad (3.12)$$
where the omitted cross terms coincide with those in (3.11). For positive diagonal matrices $H_1 > 0$ and $H_2 > 0$, we can get from assumption (H) that [6]
$$0 \le \begin{pmatrix} x(k) \\ g(x(k)) \end{pmatrix}^T \begin{pmatrix} -G_3 H_1 & G_4 H_1 \\ G_4 H_1 & -H_1 \end{pmatrix} \begin{pmatrix} x(k) \\ g(x(k)) \end{pmatrix}, \qquad (3.13)$$
$$0 \le \begin{pmatrix} x(k-\tau(k)) \\ g(x(k-\tau(k))) \end{pmatrix}^T \begin{pmatrix} -G_3 H_2 & G_4 H_2 \\ G_4 H_2 & -H_2 \end{pmatrix} \begin{pmatrix} x(k-\tau(k)) \\ g(x(k-\tau(k))) \end{pmatrix}. \qquad (3.14)$$
Denoting $\alpha(k) = \left( x^T(k), g^T(x(k)), g^T(x(k-\tau(k))), u^T, x^T(k-\delta), x^T(k-\tau(k)), \xi^T(k) \right)^T$, where
$$\xi(k) = \begin{cases} x(k-\underline{\tau}), & \underline{\tau} \le \tau(k) \le \delta, \\ x(k-\bar{\tau}), & \delta < \tau(k) \le \bar{\tau}, \end{cases} \qquad (3.15)$$
it follows from (3.7)-(3.14) that
$$\Delta V(k, x(k)) \le -x^T(k) W x(k) + u^T R u + \alpha^T(k) \Pi \alpha(k), \qquad (3.16)$$
where the blocks $\Pi_{ij}$ of $\Pi$ collect exactly the coefficients of the corresponding quadratic and cross terms in (3.7)-(3.14), as listed in the statement of the theorem. From condition (3.2) and inequality (3.16), we get
$$\Delta V(k, x(k)) \le -x^T(k) W x(k) + u^T R u \le -\lambda_{\min}(W) \|x(k)\|^2 + u^T R u < 0 \qquad (3.17)$$
when $x(k) \in \mathbb{R}^n \setminus S$, that is, when $\|x(k)\| > \left[ u^T R u / \lambda_{\min}(W) \right]^{1/2}$. Therefore, discrete-time neural network (2.1) is a globally dissipative system, and the set
$$S = \left\{ x : \|x\| \le \left[ \frac{u^T R u}{\lambda_{\min}(W)} \right]^{1/2} \right\} \qquad (3.18)$$
is a positive invariant and globally attractive set as LMI (3.2) holds. The proof is completed.
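In practice, the feasibility of LMI (3.2) can be tested with any semidefinite programming toolbox; the authors use the MATLAB LMI toolbox. The following CVXPY transcription is a sketch we provide for illustration only: the helper name dissipativity_lmi, the solver choice, and the strictness margin eps are our own conventions, and the block layout should be checked against (3.2) before serious use.

```python
import numpy as np
import cvxpy as cp

def dissipativity_lmi(C, A, B, G1, G2, G3, G4, tau_lo, tau_hi, eps=1e-6):
    """Feasibility sketch of LMI (3.2); returns the solver status."""
    n = C.shape[0]
    delta = (tau_lo + tau_hi) // 2
    r = 1 + tau_hi - tau_lo                      # the factor (1 + tau^ - tau_)
    Ci = C - np.eye(n)
    Z = np.zeros((n, n))

    sym = lambda: cp.Variable((n, n), symmetric=True)
    P, W, R = sym(), sym(), sym()
    Q = [sym() for _ in range(6)]                # Q1..Q6 stored as Q[0]..Q[5]
    d1, d2, h1, h2 = (cp.Variable(n) for _ in range(4))
    D1, D2, H1, H2 = cp.diag(d1), cp.diag(d2), cp.diag(h1), cp.diag(h2)
    Q46 = Q[3] + Q[5]                            # Q4 + Q6

    P11 = (C.T @ P @ C - P + Q[0] + r * (Q[1] - 2 * G1 @ D1 + 2 * G2 @ D2)
           - G3 @ H1 + Ci.T @ Q46 @ Ci - Q[3] / delta**2 + Q[4] + W)
    P12 = C.T @ P @ A + r * (D1 - D2) + G4 @ H1 + Ci.T @ Q46 @ A
    P13 = C.T @ P @ B + Ci.T @ Q46 @ B
    P14 = C.T @ P + Ci.T @ Q46
    P15 = Q[3] / delta**2
    P22 = r * Q[2] - H1 + A.T @ (P + Q46) @ A
    P23 = A.T @ (P + Q46) @ B
    P24 = A.T @ (P + Q46)
    P33 = -Q[2] - H2 + B.T @ (P + Q46) @ B
    P34 = B.T @ (P + Q46)
    P36 = -D1 + D2 + G4 @ H2
    P44 = P + Q46 - R
    P55 = -Q[0] - Q[3] / delta**2 - Q[5] / (tau_hi - delta)**2
    P57 = Q[5] / (tau_hi - delta)**2
    P66 = -Q[1] + 2 * G1 @ D1 - 2 * G2 @ D2 - G3 @ H2
    P77 = -Q[4] - Q[5] / (tau_hi - delta)**2

    Pi = cp.bmat([
        [P11,   P12,   P13,   P14,   P15,   Z,     Z],
        [P12.T, P22,   P23,   P24,   Z,     Z,     Z],
        [P13.T, P23.T, P33,   P34,   Z,     P36,   Z],
        [P14.T, P24.T, P34.T, P44,   Z,     Z,     Z],
        [P15.T, Z,     Z,     Z,     P55,   Z,     P57],
        [Z,     Z,     P36.T, Z,     Z,     P66,   Z],
        [Z,     Z,     Z,     Z,     P57.T, Z,     P77]])
    Pi = 0.5 * (Pi + Pi.T)                       # symmetrize for the solver

    cons = [V >> eps * np.eye(n) for V in [P, W, R] + Q]
    cons += [v >= eps for v in (d1, d2, h1, h2)]
    cons += [Pi << -eps * np.eye(7 * n)]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status
```

With the data of Example 4.1 below, dissipativity_lmi(C, A, B, G1, G2, G3, G4, 5, 9) would be expected to report a feasible ("optimal") status.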

We are now in a position to discuss the global exponential dissipativity of discrete-time neural network (2.1).

Theorem 3.2. Under the conditions of Theorem 3.1, neural network (2.1) is globally exponentially dissipative, and the set $S$ given in (3.3) is a positive invariant and globally exponentially attractive set.

Proof. When $x \in \mathbb{R}^n \setminus S$, that is, $\|x\| > \left[ u^T R u / \lambda_{\min}(W) \right]^{1/2}$, we know from (3.16) that
$$\Delta V(k, x(k)) \le -\lambda_{\min}(-\Pi) \|x(k)\|^2. \qquad (3.19)$$
From the definition of $V(k)$ in (3.5), it is easy to verify that
$$V(k) \le \lambda_{\max}(P) \|x(k)\|^2 + \rho \sum_{i=k-\bar{\tau}}^{k-1} \|x(i)\|^2, \qquad (3.20)$$
where
$$\rho = \lambda_{\max}(Q_1) + (1 + \bar{\tau} - \underline{\tau}) \left[ \lambda_{\max}(Q_2) + \lambda_{\max}(Q_3) \lambda_{\max}(L) + 2\lambda_{\max}(L D_1) + 2\lambda_{\max}(|G_1 D_1|) + 2\lambda_{\max}(L D_2) + 2\lambda_{\max}(|G_2 D_2|) \right] + 4\bar{\tau}\delta \lambda_{\max}(Q_4) + \lambda_{\max}(Q_5) + 4(\bar{\tau} - \underline{\tau}) \lambda_{\max}(Q_6), \qquad (3.21)$$
with $L = \mathrm{diag}\{L_1, \ldots, L_n\}$, $L_j = \max\{|G_j^-|, |G_j^+|\}$, so that $\|g(x)\| \le \|Lx\|$ by (H). For any scalar $\mu > 1$, it follows from (3.19) and (3.20) that
$$\mu^{j+1} V(j+1) - \mu^j V(j) = \mu^{j+1} \Delta V(j) + \mu^j (\mu - 1) V(j) \le \left[ \mu^j (\mu - 1) \lambda_{\max}(P) - \mu^{j+1} \lambda_{\min}(-\Pi) \right] \|x(j)\|^2 + \rho \mu^j (\mu - 1) \sum_{i=j-\bar{\tau}}^{j-1} \|x(i)\|^2. \qquad (3.22)$$
Summing up both sides of (3.22) from $0$ to $k-1$ with respect to $j$, we have
$$\mu^k V(k) - V(0) \le \left[ (\mu - 1) \lambda_{\max}(P) - \mu \lambda_{\min}(-\Pi) \right] \sum_{j=0}^{k-1} \mu^j \|x(j)\|^2 + \rho (\mu - 1) \sum_{j=0}^{k-1} \sum_{i=j-\bar{\tau}}^{j-1} \mu^j \|x(i)\|^2. \qquad (3.23)$$
It is easy to compute that
$$\sum_{j=0}^{k-1} \sum_{i=j-\bar{\tau}}^{j-1} \mu^j \|x(i)\|^2 \le \left( \sum_{i=-\bar{\tau}}^{-1} \sum_{j=0}^{i+\bar{\tau}} + \sum_{i=0}^{k-1-\bar{\tau}} \sum_{j=i+1}^{i+\bar{\tau}} + \sum_{i=k-\bar{\tau}}^{k-1} \sum_{j=i+1}^{k-1} \right) \mu^j \|x(i)\|^2 \le \bar{\tau} \mu^{\bar{\tau}} \sup_{s \in N[-\bar{\tau}, 0]} \|x(s)\|^2 + \bar{\tau} \mu^{\bar{\tau}} \sum_{i=0}^{k-1} \mu^i \|x(i)\|^2. \qquad (3.24)$$
From (3.20), we obtain
$$V(0) \le \left[ \lambda_{\max}(P) + \rho \bar{\tau} \right] \sup_{s \in N[-\bar{\tau}, 0]} \|x(s)\|^2. \qquad (3.25)$$
It follows from (3.23)-(3.25) that
$$\mu^k V(k) \le L_1(\mu) \sup_{s \in N[-\bar{\tau}, 0]} \|x(s)\|^2 + L_2(\mu) \sum_{j=0}^{k-1} \mu^j \|x(j)\|^2, \qquad (3.26)$$
where
$$L_1(\mu) = \lambda_{\max}(P) + \rho \bar{\tau} + \rho (\mu - 1) \bar{\tau} \mu^{\bar{\tau}}, \qquad L_2(\mu) = (\mu - 1) \lambda_{\max}(P) - \mu \lambda_{\min}(-\Pi) + \rho (\mu - 1) \bar{\tau} \mu^{\bar{\tau}}. \qquad (3.27)$$
Since $L_2(1) = -\lambda_{\min}(-\Pi) < 0$, by the continuity of the function $L_2(\mu)$, we can choose a scalar $\gamma > 1$ such that $L_2(\gamma) \le 0$. Obviously, $L_1(\gamma) > 0$. From (3.26), we get
$$\gamma^k V(k) \le L_1(\gamma) \sup_{s \in N[-\bar{\tau}, 0]} \|x(s)\|^2. \qquad (3.28)$$
From the definition of $V(k)$ in (3.5), we have
$$V(k) \ge \lambda_{\min}(P) \|x(k)\|^2. \qquad (3.29)$$
Let $M = \left[ L_1(\gamma) / \lambda_{\min}(P) \right]^{1/2}$ and $\beta = 1/\sqrt{\gamma}$; then $M > 0$ and $0 < \beta < 1$. It follows from (3.28) and (3.29) that
$$\|x(k)\| \le M \beta^k \sup_{s \in N[-\bar{\tau}, 0]} \|x(s)\| \qquad (3.30)$$
for all $k \in \mathbb{N}$, which means that discrete-time neural network (2.1) is globally exponentially dissipative, and the set $S$ is a positive invariant and globally exponentially attractive set as LMI (3.2) holds. The proof is completed.
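The scalar $\gamma$ in the proof only needs to satisfy $L_2(\gamma) \le 0$ with $\gamma > 1$, and it can be located numerically once the constants in (3.21) and (3.27) are known. A bisection sketch follows; the scalar values are illustrative stand-ins, not quantities computed from the paper.

```python
import numpy as np

# lam_P ~ lambda_max(P), lmin ~ lambda_min(-Pi), rho from (3.21) -- placeholders.
lam_P, lmin, rho, tau_hi = 60.0, 0.5, 25.0, 9

def L2(mu):
    return (mu - 1) * lam_P - mu * lmin + rho * (mu - 1) * tau_hi * mu**tau_hi

lo, hi = 1.0, 2.0
while L2(hi) < 0:                  # expand the bracket until a sign change
    hi *= 2
for _ in range(60):                # bisect on [lo, hi]
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if L2(mid) < 0 else (lo, mid)
gamma = lo                         # L2(gamma) <= 0 up to tolerance
beta = 1 / np.sqrt(gamma)          # the exponential decay base in (3.30)
print(gamma, beta)
```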

Remark 3.3. In the study of dissipativity of neural networks, assumption (H) of this paper is the same as that in [28]; the constants $G_j^-$ and $G_j^+$ $(j = 1, 2, \ldots, n)$ in assumption (H) are allowed to be positive, negative, or zero. Hence assumption (H), first proposed by Liu et al. in [6], is weaker than the assumptions in [23-27, 30].

Remark 3.4. The idea behind the construction of Lyapunov-Krasovskii functional (3.5) is to divide the delay interval $[\underline{\tau}, \bar{\tau}]$ into the two subintervals $[\underline{\tau}, \delta]$ and $[\delta, \bar{\tau}]$, so that the proposed Lyapunov-Krasovskii functional differs according to the subinterval in which the time delay $\tau(k)$ lies. The main advantage of such a Lyapunov-Krasovskii functional is that it makes full use of the information on the considered time delay $\tau(k)$.

Now, let us consider the case when the parameter uncertainties appear in the discrete-time neural networks with time-varying delays. In this case, model (2.1) can be further generalized to the following one:

$$x(k+1) = (C + \Delta C(k)) x(k) + (A + \Delta A(k)) g(x(k)) + (B + \Delta B(k)) g(x(k-\tau(k))) + u, \qquad (3.31)$$

where $C$, $A$, $B$ are known real constant matrices, and the time-varying matrices $\Delta C(k)$, $\Delta A(k)$, and $\Delta B(k)$ represent the parameter uncertainties, which are assumed to satisfy the following admissible condition:

$$\left[ \Delta C(k) \;\; \Delta A(k) \;\; \Delta B(k) \right] = M F(k) \left[ N_1 \;\; N_2 \;\; N_3 \right], \qquad (3.32)$$

where $M$ and $N_i$ $(i = 1, 2, 3)$ are known real constant matrices, and $F(k)$ is an unknown time-varying matrix-valued function subject to the following condition:

$$F^T(k) F(k) \le I. \qquad (3.33)$$
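An admissible uncertainty realization satisfying (3.32) and (3.33) is easy to generate numerically, for example by scaling a random matrix to spectral norm at most one; $M$ and the $N_i$ below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2
M = 0.1 * np.eye(n)                           # illustrative uncertainty scaling
N1, N2, N3 = np.eye(n), np.eye(n), np.eye(n)  # illustrative structure matrices

def F(k, rng=rng):
    Fk = rng.standard_normal((n, n))
    return Fk / max(1.0, np.linalg.norm(Fk, 2))   # guarantees F^T(k) F(k) <= I

Fk = F(0)
dC, dA, dB = M @ Fk @ N1, M @ Fk @ N2, M @ Fk @ N3          # a realization of (3.32)
print(np.all(np.linalg.eigvalsh(np.eye(n) - Fk.T @ Fk) >= -1e-12))  # True: (3.33) holds
```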

For model (3.31), we have the following result readily.

Theorem 3.5. Suppose that (H) holds. If there exist nine symmetric positive definite matrices $P > 0$, $W > 0$, $R > 0$, $Q_i > 0$ $(i = 1, 2, \ldots, 6)$, four positive diagonal matrices $D_i > 0$ $(i = 1, 2)$, $H_i > 0$ $(i = 1, 2)$, and a scalar $\varepsilon > 0$ such that the following LMI holds:
$$\Gamma = \begin{pmatrix} \Gamma_1 & \Gamma_2 & \Gamma_3 & \Gamma_4 & 0 & \varepsilon \Gamma_5 \\ * & -P & 0 & 0 & PM & 0 \\ * & * & -Q_4 & 0 & Q_4 M & 0 \\ * & * & * & -Q_6 & Q_6 M & 0 \\ * & * & * & * & -\varepsilon I & 0 \\ * & * & * & * & * & -\varepsilon I \end{pmatrix} < 0, \qquad (3.34)$$
where
$$\Gamma_1 = \begin{pmatrix} \Omega_{11} & \Omega_{12} & 0 & 0 & \Omega_{15} & 0 & 0 \\ * & \Omega_{22} & 0 & 0 & 0 & 0 & 0 \\ * & * & \Omega_{33} & 0 & 0 & \Omega_{36} & 0 \\ * & * & * & \Omega_{44} & 0 & 0 & 0 \\ * & * & * & * & \Omega_{55} & 0 & \Omega_{57} \\ * & * & * & * & * & \Omega_{66} & 0 \\ * & * & * & * & * & * & \Omega_{77} \end{pmatrix},$$
$$\Gamma_2 = \left( PC, PA, PB, P, 0, 0, 0 \right)^T, \qquad \Gamma_3 = \left( Q_4(C-I), Q_4 A, Q_4 B, Q_4, 0, 0, 0 \right)^T,$$
$$\Gamma_4 = \left( Q_6(C-I), Q_6 A, Q_6 B, Q_6, 0, 0, 0 \right)^T, \qquad \Gamma_5 = \left( N_1, N_2, N_3, 0, 0, 0, 0 \right)^T, \qquad (3.35)$$
with $\Omega_{11} = -P + Q_1 + (1 + \bar{\tau} - \underline{\tau})(Q_2 - 2G_1 D_1 + 2G_2 D_2) - G_3 H_1 - (1/\delta^2) Q_4 + Q_5 + W$, $\Omega_{12} = (1 + \bar{\tau} - \underline{\tau})(D_1 - D_2) + G_4 H_1$, $\Omega_{15} = (1/\delta^2) Q_4$, $\Omega_{22} = (1 + \bar{\tau} - \underline{\tau}) Q_3 - H_1$, $\Omega_{33} = -Q_3 - H_2$, $\Omega_{36} = -D_1 + D_2 + G_4 H_2$, $\Omega_{44} = -R$, $\Omega_{55} = -Q_1 - (1/\delta^2) Q_4 - (1/(\bar{\tau} - \delta)^2) Q_6$, $\Omega_{57} = (1/(\bar{\tau} - \delta)^2) Q_6$, $\Omega_{66} = -Q_2 + 2G_1 D_1 - 2G_2 D_2 - G_3 H_2$, and $\Omega_{77} = -Q_5 - (1/(\bar{\tau} - \delta)^2) Q_6$, then uncertain discrete-time neural network (3.31) is globally dissipative and globally exponentially dissipative, and
$$S = \left\{ x : \|x\| \le \left[ \frac{u^T R u}{\lambda_{\min}(W)} \right]^{1/2} \right\} \qquad (3.36)$$
is a positive invariant and globally attractive set.

Proof. By Lemma 2.3, we know that LMI (3.34) is equivalent to the following inequality:
$$\begin{pmatrix} \Gamma_1 & \Gamma_2 & \Gamma_3 & \Gamma_4 \\ * & -P & 0 & 0 \\ * & * & -Q_4 & 0 \\ * & * & * & -Q_6 \end{pmatrix} + \varepsilon^{-1} \begin{pmatrix} 0 \\ PM \\ Q_4 M \\ Q_6 M \end{pmatrix} \begin{pmatrix} 0 \\ PM \\ Q_4 M \\ Q_6 M \end{pmatrix}^T + \varepsilon \begin{pmatrix} \Gamma_5 \\ 0 \\ 0 \\ 0 \end{pmatrix} \begin{pmatrix} \Gamma_5 \\ 0 \\ 0 \\ 0 \end{pmatrix}^T < 0. \qquad (3.37)$$
From Lemma 2.4, we know that (3.37) is equivalent to the following inequality:
$$\begin{pmatrix} \Gamma_1 & \Gamma_2 & \Gamma_3 & \Gamma_4 \\ * & -P & 0 & 0 \\ * & * & -Q_4 & 0 \\ * & * & * & -Q_6 \end{pmatrix} + \begin{pmatrix} 0 \\ PM \\ Q_4 M \\ Q_6 M \end{pmatrix} F(k) \begin{pmatrix} \Gamma_5 \\ 0 \\ 0 \\ 0 \end{pmatrix}^T + \begin{pmatrix} \Gamma_5 \\ 0 \\ 0 \\ 0 \end{pmatrix} F^T(k) \begin{pmatrix} 0 \\ PM \\ Q_4 M \\ Q_6 M \end{pmatrix}^T < 0, \qquad (3.38)$$
that is,
$$\begin{pmatrix} \Gamma_1 & \Gamma_2 + \Gamma_5 F^T(k) M^T P & \Gamma_3 + \Gamma_5 F^T(k) M^T Q_4 & \Gamma_4 + \Gamma_5 F^T(k) M^T Q_6 \\ * & -P & 0 & 0 \\ * & * & -Q_4 & 0 \\ * & * & * & -Q_6 \end{pmatrix} < 0. \qquad (3.39)$$
Let $\Phi = \Gamma_2 + \Gamma_5 F^T(k) M^T P$, $\Psi = \Gamma_3 + \Gamma_5 F^T(k) M^T Q_4$, $\Upsilon = \Gamma_4 + \Gamma_5 F^T(k) M^T Q_6$. As an application of Lemma 2.3, we know that (3.39) is equivalent to the following inequality:
$$\Gamma_1 + \Phi P^{-1} \Phi^T + \Psi Q_4^{-1} \Psi^T + \Upsilon Q_6^{-1} \Upsilon^T < 0. \qquad (3.40)$$
By simple computation, and noting that $\left[ \Delta C(k) \; \Delta A(k) \; \Delta B(k) \right] = M F(k) \left[ N_1 \; N_2 \; N_3 \right]$, we have
$$\Phi = \left( P(C + \Delta C), P(A + \Delta A), P(B + \Delta B), P, 0, 0, 0 \right)^T,$$
$$\Psi = \left( Q_4(C + \Delta C - I), Q_4(A + \Delta A), Q_4(B + \Delta B), Q_4, 0, 0, 0 \right)^T,$$
$$\Upsilon = \left( Q_6(C + \Delta C - I), Q_6(A + \Delta A), Q_6(B + \Delta B), Q_6, 0, 0, 0 \right)^T. \qquad (3.41)$$
Therefore, inequality (3.40) is exactly inequality (3.2) with $C + \Delta C$, $A + \Delta A$, and $B + \Delta B$ in place of $C$, $A$, and $B$, respectively. From Theorems 3.1 and 3.2, we know that uncertain discrete-time neural network (3.31) is globally dissipative and globally exponentially dissipative, and
$$S = \left\{ x : \|x\| \le \left[ \frac{u^T R u}{\lambda_{\min}(W)} \right]^{1/2} \right\} \qquad (3.42)$$
is a positive invariant and globally attractive set. The proof is then completed.

4. Examples

Example 4.1. Consider a discrete-time neural network (2.1) with
$$C = \begin{pmatrix} 0.04 & 0 \\ 0 & 0.01 \end{pmatrix}, \qquad A = \begin{pmatrix} 0.01 & 0.08 \\ 0.05 & 0.02 \end{pmatrix}, \qquad B = \begin{pmatrix} 0.05 & 0.01 \\ 0.02 & 0.07 \end{pmatrix}, \qquad u = \begin{pmatrix} 0.21 \\ 0.14 \end{pmatrix},$$
$$g_1(x) = \tanh(0.7x) + 0.1 \sin x, \qquad g_2(x) = \tanh(0.4x) + 0.2 \sin x, \qquad \tau(k) = 7 - 2\sin\frac{k\pi}{2}. \qquad (4.1)$$

It is easy to check that assumption (H) is satisfied, and $G_1^- = -0.1$, $G_1^+ = 0.8$, $G_2^- = -0.2$, $G_2^+ = 0.6$, $\underline{\tau} = 5$, $\bar{\tau} = 9$. Thus,
$$G_1 = \begin{pmatrix} -0.1 & 0 \\ 0 & -0.2 \end{pmatrix}, \qquad G_2 = \begin{pmatrix} 0.8 & 0 \\ 0 & 0.6 \end{pmatrix}, \qquad G_3 = \begin{pmatrix} -0.08 & 0 \\ 0 & -0.12 \end{pmatrix}, \qquad G_4 = \begin{pmatrix} 0.35 & 0 \\ 0 & 0.2 \end{pmatrix}, \qquad \delta = 7. \qquad (4.2)$$
By the MATLAB LMI Control Toolbox, we can find a solution to the LMI in (3.2) as follows:
$$Q_1 = \begin{pmatrix} 7.1693 & 0.0107 \\ 0.0107 & 7.3526 \end{pmatrix}, \qquad Q_2 = \begin{pmatrix} 1.5095 & 0.0059 \\ 0.0059 & 1.6330 \end{pmatrix}, \qquad Q_3 = \begin{pmatrix} 2.2455 & 0.0336 \\ 0.0336 & 2.7158 \end{pmatrix},$$
$$Q_4 = \begin{pmatrix} 2.3895 & 0.0065 \\ 0.0065 & 1.8946 \end{pmatrix}, \qquad Q_5 = \begin{pmatrix} 7.1927 & 0.0108 \\ 0.0108 & 7.3714 \end{pmatrix}, \qquad Q_6 = \begin{pmatrix} 2.4575 & 0.0064 \\ 0.0064 & 1.8764 \end{pmatrix},$$
$$P = \begin{pmatrix} 57.3173 & 0.0727 \\ 0.0727 & 57.7631 \end{pmatrix}, \qquad W = \begin{pmatrix} 5.0249 & 0.0110 \\ 0.0110 & 5.1411 \end{pmatrix}, \qquad R = \begin{pmatrix} 72.2368 & 0.0445 \\ 0.0445 & 71.4916 \end{pmatrix},$$
$$D_1 = \begin{pmatrix} 1.2817 & 0 \\ 0 & 1.7411 \end{pmatrix}, \qquad D_2 = \begin{pmatrix} 2.1180 & 0 \\ 0 & 2.3157 \end{pmatrix}, \qquad H_1 = \begin{pmatrix} 19.7946 & 0 \\ 0 & 23.7062 \end{pmatrix}, \qquad H_2 = \begin{pmatrix} 9.1110 & 0 \\ 0 & 10.5938 \end{pmatrix}. \qquad (4.3)$$
Therefore, by Theorems 3.1 and 3.2, we know that model (2.1) with the above given parameters is globally dissipative and globally exponentially dissipative. It is easy to compute that the positive invariant and globally attractive set is $S = \{ x : \|x\| \le 0.9553 \}$.
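The radius of $S$ can be reproduced directly from (3.3) and the feasible solution (4.3); the small check below, with $u = (0.21, 0.14)^T$ as given in (4.1), returns a value close to 0.9553 up to the rounding of the displayed matrices.

```python
import numpy as np

W = np.array([[5.0249, 0.0110], [0.0110, 5.1411]])
R = np.array([[72.2368, 0.0445], [0.0445, 71.4916]])
u = np.array([0.21, 0.14])

radius = np.sqrt(u @ R @ u / np.linalg.eigvalsh(W).min())
print(radius)                                 # about 0.956, consistent with 0.9553
```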

The following example illustrates that, when the sufficient conditions ensuring global dissipativity are not satisfied, complex dynamics may appear.

Example 4.2. Consider a discrete-time neural network (2.1) with
$$C = \begin{pmatrix} 0.12 & 0 \\ 0 & 0.68 \end{pmatrix}, \qquad A = \begin{pmatrix} 16.74 & 2 \\ 5.3 & 8.4 \end{pmatrix}, \qquad B = \begin{pmatrix} 9.92 & 9.71 \\ 9.21 & 9.7 \end{pmatrix}, \qquad u = \begin{pmatrix} 10.9 \\ 3.4 \end{pmatrix},$$
$$g_1(x) = \tanh(0.7x) + 0.1 \sin x, \qquad g_2(x) = \tanh(0.4x) + 0.2 \sin x, \qquad \tau(k) = 7 - 2\sin\frac{k\pi}{2}. \qquad (4.4)$$
It is easy to check that the linear matrix inequality (3.2) with the $G_i$ $(i = 1, 2, 3, 4)$ and $\delta$ of Example 4.1 has no feasible solution. Figure 1 depicts the states $x_1(k)$ and $x_2(k)$ of the considered neural network (2.1) with initial conditions $x_1(s) = 0.5$, $x_2(s) = 0.45$, $s \in N[-9, 0]$. One can see from Figure 1 that chaotic behaviors appear for neural network (2.1) with the above given parameters.
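For readers without access to Figure 1, the trajectory can be regenerated by direct simulation. The parameters below are those transcribed in (4.4); the printed tail of the orbit should show bounded but highly irregular behavior.

```python
import numpy as np

C = np.array([[0.12, 0.0], [0.0, 0.68]])
A = np.array([[16.74, 2.0], [5.3, 8.4]])
B = np.array([[9.92, 9.71], [9.21, 9.7]])
u = np.array([10.9, 3.4])

def g(x):
    return np.array([np.tanh(0.7 * x[0]) + 0.1 * np.sin(x[0]),
                     np.tanh(0.4 * x[1]) + 0.2 * np.sin(x[1])])

def tau(k):
    return int(round(7 - 2 * np.sin(k * np.pi / 2)))   # takes values in {5, 7, 9}

K, tau_hi = 300, 9
x = np.zeros((K + tau_hi + 1, 2))
x[:tau_hi + 1] = [0.5, 0.45]                  # x1(s) = 0.5, x2(s) = 0.45 on N[-9, 0]
for k in range(tau_hi, K + tau_hi):
    x[k + 1] = C @ x[k] + A @ g(x[k]) + B @ g(x[k - tau(k)]) + u
print(x[-5:])                                 # bounded but highly irregular tail
```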

5. Conclusions

In this paper, the global dissipativity and global exponential dissipativity have been investigated for uncertain discrete-time neural networks with time-varying delays and general activation functions. By constructing appropriate Lyapunov-Krasovskii functionals and employing the linear matrix inequality technique, several new delay-dependent criteria for checking the global dissipativity and global exponential dissipativity of the addressed neural networks have been derived in terms of LMIs, which can be checked numerically using the effective LMI toolbox in MATLAB. Illustrative examples have also been given to show the effectiveness of the proposed criteria. It is noteworthy that, because neither model transformation nor free-weighting matrices are employed to deal with cross terms in the derivation of the dissipativity criteria, the obtained results are less conservative and more computationally efficient.

Acknowledgments

The authors would like to thank the reviewers and the editor for their valuable suggestions and comments which have led to a much improved paper. This work was supported by the National Natural Science Foundation of China under Grants 60974132, 60874088, and 10772152.