Discrete Dynamics in Nature and Society

Research Article | Open Access

Volume 2010 | Article ID 810408 | 19 pages | https://doi.org/10.1155/2010/810408

Global Dissipativity on Uncertain Discrete-Time Neural Networks with Time-Varying Delays

Qiankun Song and Jinde Cao

Academic Editor: Yong Zhou
Received: 24 Dec 2009
Accepted: 14 Feb 2010
Published: 18 Mar 2010

Abstract

The problems of global dissipativity and global exponential dissipativity are investigated for uncertain discrete-time neural networks with time-varying delays and general activation functions. By constructing appropriate Lyapunov-Krasovskii functionals and employing the linear matrix inequality technique, several new delay-dependent criteria for checking the global dissipativity and global exponential dissipativity of the addressed neural networks are established in terms of linear matrix inequalities (LMIs), which can be checked numerically using the effective LMI toolbox in MATLAB. Illustrative examples are given to show the effectiveness of the proposed criteria. It is noteworthy that, because neither model transformation nor free-weighting matrices are employed to deal with cross terms in the derivation of the dissipativity criteria, the obtained results are less conservative and more computationally efficient.

1. Introduction

In the past few decades, delayed neural networks have found successful applications in many areas such as signal processing, pattern recognition, associative memories, parallel computation, and optimization solvers [1]. In such applications, the qualitative analysis of the dynamical behaviors is a necessary step for the practical design of neural networks [2]. Many important results on the dynamical behaviors have been reported for delayed neural networks; see [1–16] and the references therein for some recent publications.

It should be pointed out that all of the abovementioned works on the dynamical behaviors of delayed neural networks are concerned with the continuous-time case. However, when implementing a continuous-time delayed neural network for computer simulation, it becomes essential to formulate a discrete-time system that is an analogue of the continuous-time network. To some extent, the discrete-time analogue inherits the dynamical characteristics of the continuous-time delayed neural network under mild or no restriction on the discretization step size, and it also retains some functional similarity [17]. Unfortunately, as pointed out in [18], the discretization cannot preserve the dynamics of the continuous-time counterpart even for a small sampling period, and therefore there is a crucial need to study the dynamics of discrete-time neural networks. Recently, the dynamics analysis problem for discrete-time delayed neural networks and discrete-time systems with time-varying state delay has been extensively studied; see [17–21] and the references therein.

It is well known that the stability problem is central to the analysis of a dynamic system, and various types of stability of an equilibrium point have captured the attention of researchers. Nevertheless, from a practical point of view, it is not always the case that the orbits of a neural network approach a single equilibrium point; it is possible that there is no equilibrium point at all in some situations. Therefore, the concept of dissipativity was introduced [22]. As pointed out in [23], dissipativity is also an important concept for dynamical neural networks; it is more general than stability and has found applications in areas such as stability theory, chaos and synchronization theory, system norm estimation, and robust control [23]. Some sufficient conditions for the dissipativity of delayed neural networks and nonlinear delay systems have been derived; see, for example, [23–33] and the references therein. In [23, 24], the authors analyzed the dissipativity of neural networks with constant delays and derived some sufficient conditions for their global dissipativity. In [25, 26], the authors considered the global dissipativity and global robust dissipativity of neural networks with both time-varying delays and unbounded distributed delays; several sufficient conditions for checking the global dissipativity and global robust dissipativity were obtained. In [27, 28], by using the linear matrix inequality technique, the authors investigated the global dissipativity of neural networks with both discrete time-varying delays and distributed time-varying delays. In [29], the authors developed dissipativity notions for nonnegative dynamical systems with respect to linear and nonlinear storage functions and linear supply rates, and obtained a key result on the linearization of nonnegative dissipative dynamical systems. In [30], the uniform dissipativity of a class of nonautonomous neural networks with time-varying delays was investigated by employing the M-matrix and inequality techniques. In [31–33], the dissipativity of a class of nonlinear delay systems was considered, and some sufficient conditions for checking it were given. However, all of the aforementioned works on the dissipativity of delayed neural networks and nonlinear delay systems are concerned with the continuous-time case. To the best of our knowledge, few authors have considered the dissipativity of uncertain discrete-time neural networks with time-varying delays. Therefore, the study of the dissipativity of uncertain discrete-time neural networks is not only important but also necessary.

Motivated by the above discussions, the objective of this paper is to study the global dissipativity and global exponential dissipativity problems for uncertain discrete-time neural networks with time-varying delays. By employing appropriate Lyapunov-Krasovskii functionals and the LMI technique, we obtain several new sufficient conditions for checking the global dissipativity and global exponential dissipativity of the addressed neural networks.

Notations 1. The notations are quite standard. Throughout this paper, $I$ represents the identity matrix with appropriate dimensions; $\mathbb{N}$ stands for the set of nonnegative integers; $\mathbb{R}^n$ and $\mathbb{R}^{n\times m}$ denote, respectively, the $n$-dimensional Euclidean space and the set of all $n\times m$ real matrices. The superscript "$T$" denotes matrix transposition, and the asterisk "$*$" denotes the entries below the main diagonal of a symmetric block matrix. $|A|$ denotes the absolute-value matrix given by $|A|=(|a_{ij}|)_{n\times n}$; the notation $X\ge Y$ (resp., $X>Y$) means that $X$ and $Y$ are symmetric matrices and that $X-Y$ is positive semidefinite (resp., positive definite). $\|\cdot\|$ is the Euclidean norm in $\mathbb{R}^n$. For a positive constant $a$, $[a]$ denotes the integer part of $a$. For integers $a$, $b$ with $a<b$, $\mathbb{N}[a,b]$ denotes the discrete interval given by $\mathbb{N}[a,b]=\{a,a+1,\ldots,b-1,b\}$. $C(\mathbb{N}[-\bar\tau,0],\mathbb{R}^n)$ denotes the set of all functions $\phi:\mathbb{N}[-\bar\tau,0]\to\mathbb{R}^n$. Matrices, if not explicitly specified, are assumed to have compatible dimensions.

2. Model Description and Preliminaries

In this paper, we consider the following discrete-time neural network model:

\[
x(k+1)=Cx(k)+Ag(x(k))+Bg(x(k-\tau(k)))+u \tag{2.1}
\]
for $k\in\mathbb{N}$, where $x(k)=(x_1(k),x_2(k),\ldots,x_n(k))^T\in\mathbb{R}^n$ and $x_i(k)$ is the state of the $i$th neuron at time $k$; $g(x(k))=(g_1(x_1(k)),g_2(x_2(k)),\ldots,g_n(x_n(k)))^T\in\mathbb{R}^n$, where $g_j(x_j(k))$ denotes the activation function of the $j$th neuron at time $k$; $u=(u_1,u_2,\ldots,u_n)^T$ is the input vector; the positive integer $\tau(k)$ corresponds to the transmission delay and satisfies $\underline\tau\le\tau(k)\le\bar\tau$ ($\bar\tau\ge\underline\tau\ge0$ are known integers); $C=\mathrm{diag}\{c_1,c_2,\ldots,c_n\}$, where $c_i$ ($0\le c_i<1$) describes the rate with which the $i$th neuron resets its potential to the resting state in isolation when disconnected from the network and external inputs; $A=(a_{ij})_{n\times n}$ is the connection weight matrix; and $B=(b_{ij})_{n\times n}$ is the delayed connection weight matrix.

The initial condition associated with model (2.1) is given by

π‘₯ξ€Ίβˆ’(𝑠)=πœ‘(𝑠),π‘ βˆˆβ„•ξ€»πœ,0.(2.2)

Throughout this paper, we make the following assumption [6].

(H) For any $j\in\{1,2,\ldots,n\}$, $g_j(0)=0$ and there exist constants $G_j^-$ and $G_j^+$ such that
\[
G_j^-\le\frac{g_j(\alpha_1)-g_j(\alpha_2)}{\alpha_1-\alpha_2}\le G_j^+,\quad\forall\alpha_1,\alpha_2\in\mathbb{R},\ \alpha_1\ne\alpha_2. \tag{2.3}
\]
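In practice, the constants $G_j^-$ and $G_j^+$ in (2.3) can be estimated numerically by sampling difference quotients of a given activation; the short sketch below does this for a hypothetical activation $g(x)=\tanh(0.5x)$, whose quotients are known to lie in $[0,0.5]$, so the sampling is only a sanity check.

```python
import numpy as np

def sector_bounds(g, lo=-10.0, hi=10.0, m=5000, seed=0):
    """Estimate G^- and G^+ of (2.3) by sampling difference quotients of g."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(lo, hi, size=m)
    b = rng.uniform(lo, hi, size=m)
    keep = np.abs(a - b) > 1e-9          # enforce alpha_1 != alpha_2
    q = (g(a[keep]) - g(b[keep])) / (a[keep] - b[keep])
    return q.min(), q.max()

# Hypothetical activation: its difference quotients lie in [0, 0.5].
g_lo, g_hi = sector_bounds(lambda x: np.tanh(0.5 * x))
print(f"estimated G^- ~ {g_lo:.3f}, G^+ ~ {g_hi:.3f}")
```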

Similar to [23], we also give the following definitions for discrete-time neural networks (2.1).

Definition 2.1. Discrete-time neural network (2.1) is said to be globally dissipative if there exists a compact set $S\subseteq\mathbb{R}^n$ such that for all $x_0\in\mathbb{R}^n$ there exists a positive integer $K(x_0)>0$ with $x(k,k_0,x_0)\in S$ whenever $k\ge k_0+K(x_0)$, where $x(k,k_0,x_0)$ denotes the solution of (2.1) from initial state $x_0$ and initial time $k_0$. In this case, $S$ is called a globally attractive set. A set $S$ is called positive invariant if $x_0\in S$ implies $x(k,k_0,x_0)\in S$ for $k\ge k_0$.

Definition 2.2. Let $S$ be a globally attractive set of discrete-time neural network (2.1). Network (2.1) is said to be globally exponentially dissipative if there exists a compact set $S^*\supset S$ in $\mathbb{R}^n$ such that for all $x_0\in\mathbb{R}^n\setminus S^*$ there exist constants $M(x_0)>0$ and $0<\beta<1$ such that
\[
\inf\big\{\|x(k,k_0,x_0)-\tilde x\|:\tilde x\in S^*\big\}\le M(x_0)\beta^{k-k_0}. \tag{2.4}
\]
The set $S^*$ is called a globally exponentially attractive set, where $x\in\mathbb{R}^n\setminus S^*$ means $x\in\mathbb{R}^n$ but $x\notin S^*$.
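Definition 2.1 can also be probed empirically: simulate (2.1) from an initial state and record the index after which the trajectory remains inside a candidate ball $S=\{x:\|x\|\le r\}$. The sketch below does this for a hypothetical, contractive instance of (2.1); the radius, matrices, and the constant delay are placeholders, and the helper name `settle_index` is ours.

```python
import numpy as np

def settle_index(step, history, r, horizon=2000):
    """Smallest k after which the simulated trajectory stays in {x: ||x|| <= r}.

    Returns np.inf if the trajectory ends outside the ball.
    step: callable (k, hist) -> next state, one iteration of (2.1).
    """
    hist = list(history)
    inside = []
    for k in range(horizon):
        hist.append(step(k, hist))
        inside.append(np.linalg.norm(hist[-1]) <= r)
    if not inside[-1]:
        return np.inf
    k = horizon
    while k > 0 and inside[k - 1]:       # walk back over the trailing run inside S
        k -= 1
    return k

# Placeholder dynamics: a contractive instance of (2.1) with g = tanh, tau(k) = 3.
C = np.diag([0.3, 0.2]); A = 0.1 * np.eye(2); B = 0.05 * np.eye(2)
u = np.array([0.2, -0.1])
step = lambda k, h: C @ h[-1] + A @ np.tanh(h[-1]) + B @ np.tanh(h[-4]) + u
print(settle_index(step, [np.array([5.0, -4.0])] * 4, r=1.0))
```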

To prove our results, the following lemmas are necessary.

Lemma 2.3 (see [34]). Given constant matrices $P$, $Q$, and $R$, where $P^T=P$ and $Q^T=Q$, then
\[
\begin{bmatrix}P&R\\R^T&-Q\end{bmatrix}<0 \tag{2.5}
\]
is equivalent to the following conditions:
\[
Q>0,\qquad P+RQ^{-1}R^T<0. \tag{2.6}
\]
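Lemma 2.3 is the standard Schur complement, and its stated equivalence is easy to sanity-check numerically. In the snippet below, $P$ is constructed so that (2.6) holds by design, and both sides of the equivalence are then tested.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
Q = 0.1 * rng.standard_normal((n, n))
Q = (Q + Q.T) / 2 + n * np.eye(n)                   # symmetric and Q > 0
R = rng.standard_normal((n, n))
P = -(R @ np.linalg.solve(Q, R.T)) - np.eye(n)      # forces P + R Q^{-1} R^T = -I < 0

block = np.block([[P, R], [R.T, -Q]])               # the matrix in (2.5)
cond_25 = np.linalg.eigvalsh(block).max() < 0       # (2.5): the block matrix is < 0
cond_26 = (np.linalg.eigvalsh(Q).min() > 0 and
           np.linalg.eigvalsh(P + R @ np.linalg.solve(Q, R.T)).max() < 0)  # (2.6)
print(cond_25, cond_26)                             # both True, as the lemma predicts
```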

Lemma 2.4 (see [35, 36]). Given matrices $P$, $Q$, and $R$ with $P^T=P$, then
\[
P+QF(k)R+R^TF^T(k)Q^T<0 \tag{2.7}
\]
holds for all $F(k)$ satisfying $F^T(k)F(k)\le I$ if and only if there exists a scalar $\varepsilon>0$ such that
\[
P+\varepsilon^{-1}QQ^T+\varepsilon R^TR<0. \tag{2.8}
\]
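The "if" direction of Lemma 2.4 can likewise be checked by sampling: once (2.8) holds for some $\varepsilon>0$, every $F(k)$ with $F^T(k)F(k)\le I$ must satisfy (2.7). A randomized test under that construction:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 3, 2, 2
Qm = rng.standard_normal((n, p))
Rm = rng.standard_normal((q, n))
eps = 1.0
# Choose P so that (2.8) holds exactly: P + eps^{-1} Q Q^T + eps R^T R = -I.
P = -(Qm @ Qm.T / eps + eps * Rm.T @ Rm) - np.eye(n)

for _ in range(1000):
    F = rng.standard_normal((p, q))
    F /= max(1.0, np.linalg.svd(F, compute_uv=False)[0])  # now F^T F <= I
    lhs = P + Qm @ F @ Rm + Rm.T @ F.T @ Qm.T             # left-hand side of (2.7)
    assert np.linalg.eigvalsh(lhs).max() < 0
print("(2.7) held for all sampled F")
```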

3. Main Results

In this section, we shall establish our main criteria based on the LMI approach. For presentation convenience, in the following, we denote

𝐺1𝐺=diagβˆ’1,πΊβˆ’2,…,πΊβˆ’π‘›ξ€Ύ,𝐺2𝐺=diag+1,𝐺+2,…,𝐺+𝑛,𝐺3𝐺=diagβˆ’1𝐺+1,πΊβˆ’2𝐺+2,…,πΊβˆ’π‘›πΊ+𝑛,𝐺4𝐺=diagβˆ’1+𝐺+12,πΊβˆ’2+𝐺+22𝐺,…,βˆ’π‘›+𝐺+𝑛2ξƒ°ξ‚Έ,𝛿=𝜏+𝜏2ξ‚Ή.(3.1)

Theorem 3.1. Suppose that (H) holds. If there exist nine symmetric positive definite matrices $P>0$, $W>0$, $R>0$, $Q_i>0$ $(i=1,2,3,4,5,6)$ and four positive diagonal matrices $D_i>0$ $(i=1,2)$, $H_i>0$ $(i=1,2)$ such that the following LMI holds:
\[
\Pi=\begin{bmatrix}\Pi_{11}&\Pi_{12}&\Pi_{13}&\Pi_{14}&\Pi_{15}&0&0\\ *&\Pi_{22}&\Pi_{23}&\Pi_{24}&0&0&0\\ *&*&\Pi_{33}&\Pi_{34}&0&\Pi_{36}&0\\ *&*&*&\Pi_{44}&0&0&0\\ *&*&*&*&\Pi_{55}&0&\Pi_{57}\\ *&*&*&*&*&\Pi_{66}&0\\ *&*&*&*&*&*&\Pi_{77}\end{bmatrix}<0, \tag{3.2}
\]
where $\Pi_{11}=C^TPC-P+Q_1+(1+\bar\tau-\underline\tau)(Q_2-2G_1D_1+2G_2D_2)-G_3H_1+(C-I)^T(Q_4+Q_6)(C-I)-\frac{1}{\delta^2}Q_4+Q_5+W$, $\Pi_{12}=C^TPA+(1+\bar\tau-\underline\tau)(D_1-D_2)+G_4H_1+(C-I)^T(Q_4+Q_6)A$, $\Pi_{13}=C^TPB+(C-I)^T(Q_4+Q_6)B$, $\Pi_{14}=C^TP+(C-I)^T(Q_4+Q_6)$, $\Pi_{15}=\frac{1}{\delta^2}Q_4$, $\Pi_{22}=(1+\bar\tau-\underline\tau)Q_3-H_1+A^T(P+Q_4+Q_6)A$, $\Pi_{23}=A^T(P+Q_4+Q_6)B$, $\Pi_{24}=A^T(P+Q_4+Q_6)$, $\Pi_{33}=-Q_3-H_2+B^T(P+Q_4+Q_6)B$, $\Pi_{34}=B^T(P+Q_4+Q_6)$, $\Pi_{36}=-D_1+D_2+G_4H_2$, $\Pi_{44}=P+Q_4+Q_6-R$, $\Pi_{55}=-Q_1-\frac{1}{\delta^2}Q_4-\frac{1}{(\bar\tau-\delta)^2}Q_6$, $\Pi_{57}=\frac{1}{(\bar\tau-\delta)^2}Q_6$, $\Pi_{66}=-Q_2+2G_1D_1-2G_2D_2-G_3H_2$, and $\Pi_{77}=-Q_5-\frac{1}{(\bar\tau-\delta)^2}Q_6$, then discrete-time neural network (2.1) is globally dissipative, and
\[
S=\Big\{x:\|x\|\le\Big(\frac{u^TRu}{\lambda_{\min}(W)}\Big)^{1/2}\Big\} \tag{3.3}
\]
is a positive invariant and globally attractive set.

Proof. For positive diagonal matrices $D_1>0$ and $D_2>0$, we know from assumption (H) that
\[
\big(g(x(k))-G_1x(k)\big)^TD_1x(k)\ge0,\qquad \big(G_2x(k)-g(x(k))\big)^TD_2x(k)\ge0. \tag{3.4}
\]
Defining $\eta(k)=x(k+1)-x(k)$, we consider the following Lyapunov-Krasovskii functional candidate for model (2.1):
\[
V(k,x(k))=\sum_{i=1}^{8}V_i(k,x(k)), \tag{3.5}
\]
where
\begin{align*}
V_1(k,x(k))&=x^T(k)Px(k),\qquad V_2(k,x(k))=\sum_{i=k-\delta}^{k-1}x^T(i)Q_1x(i),\\
V_3(k,x(k))&=\sum_{i=k-\tau(k)}^{k-1}x^T(i)Q_2x(i)+\sum_{l=k-\bar\tau+1}^{k-\underline\tau}\sum_{i=l}^{k-1}x^T(i)Q_2x(i),\\
V_4(k,x(k))&=\sum_{i=k-\tau(k)}^{k-1}g^T(x(i))Q_3g(x(i))+\sum_{l=k-\bar\tau+1}^{k-\underline\tau}\sum_{i=l}^{k-1}g^T(x(i))Q_3g(x(i)),\\
V_5(k,x(k))&=2\sum_{i=k-\tau(k)}^{k-1}\big(g(x(i))-G_1x(i)\big)^TD_1x(i)+2\sum_{l=k-\bar\tau+1}^{k-\underline\tau}\sum_{i=l}^{k-1}\big(g(x(i))-G_1x(i)\big)^TD_1x(i),\\
V_6(k,x(k))&=2\sum_{i=k-\tau(k)}^{k-1}\big(G_2x(i)-g(x(i))\big)^TD_2x(i)+2\sum_{l=k-\bar\tau+1}^{k-\underline\tau}\sum_{i=l}^{k-1}\big(G_2x(i)-g(x(i))\big)^TD_2x(i),\\
V_7(k,x(k))&=\frac{1}{\delta}\sum_{l=-\delta}^{-1}\sum_{i=k+l}^{k-1}\eta^T(i)Q_4\eta(i),\\
V_8(k,x(k))&=\begin{cases}\displaystyle\sum_{i=k-\underline\tau}^{k-1}x^T(i)Q_5x(i)+\frac{1}{\bar\tau-\delta}\sum_{l=k-\delta}^{k-\underline\tau-1}\sum_{i=l}^{k-1}\eta^T(i)Q_6\eta(i),&\underline\tau\le\tau(k)\le\delta,\\[3mm]
\displaystyle\sum_{i=k-\bar\tau}^{k-1}x^T(i)Q_5x(i)+\frac{1}{\bar\tau-\delta}\sum_{l=k-\bar\tau}^{k-\delta-1}\sum_{i=l}^{k-1}\eta^T(i)Q_6\eta(i),&\delta<\tau(k)\le\bar\tau.\end{cases} \tag{3.6}
\end{align*}
Calculating the difference of $V_i(k)$ $(i=1,2,3)$ along the positive half trajectory of (2.1), we obtain
\begin{align*}
\Delta V_1(k,x(k))&=x^T(k)\big(C^TPC-P\big)x(k)+2x^T(k)C^TPAg(x(k))+2x^T(k)C^TPBg(x(k-\tau(k)))+2x^T(k)C^TPu\\
&\quad+g^T(x(k))A^TPAg(x(k))+2g^T(x(k))A^TPBg(x(k-\tau(k)))+2g^T(x(k))A^TPu\\
&\quad+g^T(x(k-\tau(k)))B^TPBg(x(k-\tau(k)))+2g^T(x(k-\tau(k)))B^TPu+u^TPu, \tag{3.7}
\end{align*}
\[
\Delta V_2(k,x(k))=x^T(k)Q_1x(k)-x^T(k-\delta)Q_1x(k-\delta), \tag{3.8}
\]
\begin{align*}
\Delta V_3(k,x(k))&=\sum_{i=k+1-\tau(k+1)}^{k}x^T(i)Q_2x(i)-\sum_{i=k-\tau(k)}^{k-1}x^T(i)Q_2x(i)+\sum_{l=k-\bar\tau+2}^{k+1-\underline\tau}\sum_{i=l}^{k}x^T(i)Q_2x(i)-\sum_{l=k-\bar\tau+1}^{k-\underline\tau}\sum_{i=l}^{k-1}x^T(i)Q_2x(i)\\
&=\sum_{i=k+1-\tau(k+1)}^{k-\underline\tau}x^T(i)Q_2x(i)+\sum_{i=k-\underline\tau+1}^{k-1}x^T(i)Q_2x(i)+x^T(k)Q_2x(k)-\sum_{i=k-\tau(k)+1}^{k-1}x^T(i)Q_2x(i)\\
&\quad-x^T(k-\tau(k))Q_2x(k-\tau(k))+\big(\bar\tau-\underline\tau\big)x^T(k)Q_2x(k)-\sum_{l=k-\bar\tau+1}^{k-\underline\tau}x^T(l)Q_2x(l)\\
&\le\big(1+\bar\tau-\underline\tau\big)x^T(k)Q_2x(k)-x^T(k-\tau(k))Q_2x(k-\tau(k)). \tag{3.9}
\end{align*}
Similarly, one has
\begin{align*}
\Delta V_4(k,x(k))&\le\big(1+\bar\tau-\underline\tau\big)g^T(x(k))Q_3g(x(k))-g^T(x(k-\tau(k)))Q_3g(x(k-\tau(k))),\\
\Delta V_5(k,x(k))&\le2\big(1+\bar\tau-\underline\tau\big)\big(g(x(k))-G_1x(k)\big)^TD_1x(k)-2\big(g(x(k-\tau(k)))-G_1x(k-\tau(k))\big)^TD_1x(k-\tau(k)),\\
\Delta V_6(k,x(k))&\le2\big(1+\bar\tau-\underline\tau\big)\big(G_2x(k)-g(x(k))\big)^TD_2x(k)-2\big(G_2x(k-\tau(k))-g(x(k-\tau(k)))\big)^TD_2x(k-\tau(k)),\\
\Delta V_7(k,x(k))&=\eta^T(k)Q_4\eta(k)-\frac{1}{\delta}\sum_{i=k-\delta}^{k-1}\eta^T(i)Q_4\eta(i)\le\eta^T(k)Q_4\eta(k)-\frac{1}{\delta^2}\Bigg(\sum_{i=k-\delta}^{k-1}\eta(i)\Bigg)^TQ_4\Bigg(\sum_{i=k-\delta}^{k-1}\eta(i)\Bigg)\\
&=x^T(k)(C-I)^TQ_4(C-I)x(k)+2x^T(k)(C-I)^TQ_4Ag(x(k))+2x^T(k)(C-I)^TQ_4Bg(x(k-\tau(k)))\\
&\quad+2x^T(k)(C-I)^TQ_4u+g^T(x(k))A^TQ_4Ag(x(k))+2g^T(x(k))A^TQ_4Bg(x(k-\tau(k)))+2g^T(x(k))A^TQ_4u\\
&\quad+g^T(x(k-\tau(k)))B^TQ_4Bg(x(k-\tau(k)))+2g^T(x(k-\tau(k)))B^TQ_4u+u^TQ_4u\\
&\quad+\frac{1}{\delta^2}\begin{bmatrix}x(k)\\x(k-\delta)\end{bmatrix}^T\begin{bmatrix}-Q_4&Q_4\\Q_4&-Q_4\end{bmatrix}\begin{bmatrix}x(k)\\x(k-\delta)\end{bmatrix}. \tag{3.10}
\end{align*}
When $\underline\tau\le\tau(k)\le\delta$,
\begin{align*}
\Delta V_8(k,x(k))&=x^T(k)Q_5x(k)-x^T(k-\underline\tau)Q_5x(k-\underline\tau)+\frac{\delta-\underline\tau}{\bar\tau-\delta}\eta^T(k)Q_6\eta(k)-\frac{1}{\bar\tau-\delta}\sum_{i=k-\delta}^{k-\underline\tau-1}\eta^T(i)Q_6\eta(i)\\
&\le x^T(k)Q_5x(k)-x^T(k-\underline\tau)Q_5x(k-\underline\tau)+\eta^T(k)Q_6\eta(k)-\frac{1}{(\bar\tau-\delta)(\delta-\underline\tau)}\Bigg(\sum_{i=k-\delta}^{k-\underline\tau-1}\eta(i)\Bigg)^TQ_6\Bigg(\sum_{i=k-\delta}^{k-\underline\tau-1}\eta(i)\Bigg)\\
&\le-x^T(k-\underline\tau)Q_5x(k-\underline\tau)+x^T(k)\big((C-I)^TQ_6(C-I)+Q_5\big)x(k)+2x^T(k)(C-I)^TQ_6Ag(x(k))\\
&\quad+2x^T(k)(C-I)^TQ_6Bg(x(k-\tau(k)))+2x^T(k)(C-I)^TQ_6u+g^T(x(k))A^TQ_6Ag(x(k))\\
&\quad+2g^T(x(k))A^TQ_6Bg(x(k-\tau(k)))+2g^T(x(k))A^TQ_6u+g^T(x(k-\tau(k)))B^TQ_6Bg(x(k-\tau(k)))\\
&\quad+2g^T(x(k-\tau(k)))B^TQ_6u+u^TQ_6u+\frac{1}{(\bar\tau-\delta)^2}\begin{bmatrix}x(k-\underline\tau)\\x(k-\delta)\end{bmatrix}^T\begin{bmatrix}-Q_6&Q_6\\Q_6&-Q_6\end{bmatrix}\begin{bmatrix}x(k-\underline\tau)\\x(k-\delta)\end{bmatrix}. \tag{3.11}
\end{align*}
When $\delta<\tau(k)\le\bar\tau$,
\begin{align*}
\Delta V_8(k,x(k))&=x^T(k)Q_5x(k)-x^T(k-\bar\tau)Q_5x(k-\bar\tau)+\eta^T(k)Q_6\eta(k)-\frac{1}{\bar\tau-\delta}\sum_{i=k-\bar\tau}^{k-\delta-1}\eta^T(i)Q_6\eta(i)\\
&\le x^T(k)Q_5x(k)-x^T(k-\bar\tau)Q_5x(k-\bar\tau)+\eta^T(k)Q_6\eta(k)-\frac{1}{(\bar\tau-\delta)^2}\Bigg(\sum_{i=k-\bar\tau}^{k-\delta-1}\eta(i)\Bigg)^TQ_6\Bigg(\sum_{i=k-\bar\tau}^{k-\delta-1}\eta(i)\Bigg)\\
&=-x^T(k-\bar\tau)Q_5x(k-\bar\tau)+x^T(k)\big((C-I)^TQ_6(C-I)+Q_5\big)x(k)+2x^T(k)(C-I)^TQ_6Ag(x(k))\\
&\quad+2x^T(k)(C-I)^TQ_6Bg(x(k-\tau(k)))+2x^T(k)(C-I)^TQ_6u+g^T(x(k))A^TQ_6Ag(x(k))\\
&\quad+2g^T(x(k))A^TQ_6Bg(x(k-\tau(k)))+2g^T(x(k))A^TQ_6u+g^T(x(k-\tau(k)))B^TQ_6Bg(x(k-\tau(k)))\\
&\quad+2g^T(x(k-\tau(k)))B^TQ_6u+u^TQ_6u+\frac{1}{(\bar\tau-\delta)^2}\begin{bmatrix}x(k-\bar\tau)\\x(k-\delta)\end{bmatrix}^T\begin{bmatrix}-Q_6&Q_6\\Q_6&-Q_6\end{bmatrix}\begin{bmatrix}x(k-\bar\tau)\\x(k-\delta)\end{bmatrix}. \tag{3.12}
\end{align*}
For positive diagonal matrices $H_1>0$ and $H_2>0$, we can get from assumption (H) that [6]
\[
0\le\begin{bmatrix}x(k)\\g(x(k))\end{bmatrix}^T\begin{bmatrix}-G_3H_1&G_4H_1\\G_4H_1&-H_1\end{bmatrix}\begin{bmatrix}x(k)\\g(x(k))\end{bmatrix}, \tag{3.13}
\]
\[
0\le\begin{bmatrix}x(k-\tau(k))\\g(x(k-\tau(k)))\end{bmatrix}^T\begin{bmatrix}-G_3H_2&G_4H_2\\G_4H_2&-H_2\end{bmatrix}\begin{bmatrix}x(k-\tau(k))\\g(x(k-\tau(k)))\end{bmatrix}. \tag{3.14}
\]
Denoting
\[
\alpha(k)=\big(x^T(k),\,g^T(x(k)),\,g^T(x(k-\tau(k))),\,u^T,\,x^T(k-\delta),\,x^T(k-\tau(k)),\,\xi^T(k)\big)^T,\qquad
\xi(k)=\begin{cases}x(k-\underline\tau),&\underline\tau\le\tau(k)\le\delta,\\ x(k-\bar\tau),&\delta<\tau(k)\le\bar\tau,\end{cases} \tag{3.15}
\]
it follows from (3.7)–(3.14) that
\begin{align*}
\Delta V(k,x(k))&\le x^T(k)\Big(C^TPC-P+Q_1+\big(1+\bar\tau-\underline\tau\big)\big(Q_2-2G_1D_1+2G_2D_2\big)-G_3H_1\\
&\qquad\qquad+(C-I)^T(Q_4+Q_6)(C-I)-\frac{1}{\delta^2}Q_4+Q_5\Big)x(k)\\
&\quad+2x^T(k)\Big(C^TPA+\big(1+\bar\tau-\underline\tau\big)(D_1-D_2)+G_4H_1+(C-I)^T(Q_4+Q_6)A\Big)g(x(k))\\
&\quad+2x^T(k)\big(C^TPB+(C-I)^T(Q_4+Q_6)B\big)g(x(k-\tau(k)))+2x^T(k)\big(C^TP+(C-I)^T(Q_4+Q_6)\big)u\\
&\quad+\frac{2}{\delta^2}x^T(k)Q_4x(k-\delta)+g^T(x(k))\Big(A^TPA+\big(1+\bar\tau-\underline\tau\big)Q_3-H_1+A^T(Q_4+Q_6)A\Big)g(x(k))\\
&\quad+2g^T(x(k))\big(A^TPB+A^T(Q_4+Q_6)B\big)g(x(k-\tau(k)))+2g^T(x(k))\big(A^TP+A^T(Q_4+Q_6)\big)u\\
&\quad+g^T(x(k-\tau(k)))\big(B^TPB-Q_3-H_2+B^T(Q_4+Q_6)B\big)g(x(k-\tau(k)))\\
&\quad+2g^T(x(k-\tau(k)))\big(B^TP+B^T(Q_4+Q_6)\big)u+2g^T(x(k-\tau(k)))\big(-D_1+D_2+G_4H_2\big)x(k-\tau(k))\\
&\quad+u^T(P+Q_4+Q_6)u+x^T(k-\delta)\Big(-Q_1-\frac{1}{\delta^2}Q_4-\frac{1}{(\bar\tau-\delta)^2}Q_6\Big)x(k-\delta)\\
&\quad+\frac{2}{(\bar\tau-\delta)^2}x^T(k-\delta)Q_6\xi(k)+x^T(k-\tau(k))\big(-Q_2+2G_1D_1-2G_2D_2-G_3H_2\big)x(k-\tau(k))\\
&\quad+\xi^T(k)\Big(-Q_5-\frac{1}{(\bar\tau-\delta)^2}Q_6\Big)\xi(k)\\
&=-x^T(k)Wx(k)+u^TRu+\alpha^T(k)\Pi\alpha(k). \tag{3.16}
\end{align*}
From condition (3.2) and inequality (3.16), we get
\[
\Delta V(k,x(k))\le-x^T(k)Wx(k)+u^TRu\le-\lambda_{\min}(W)\|x\|^2+u^TRu<0 \tag{3.17}
\]
when $x\notin S$, that is, $x\in\mathbb{R}^n\setminus S$. Therefore, discrete-time neural network (2.1) is a globally dissipative system, and the set
\[
S=\Big\{x:\|x\|\le\Big(\frac{u^TRu}{\lambda_{\min}(W)}\Big)^{1/2}\Big\} \tag{3.18}
\]
is a positive invariant and globally attractive set as LMI (3.2) holds. The proof is completed.
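Since $\Pi$ in (3.2) is a fixed $7\times7$ block matrix in the decision variables, a candidate solution can be verified directly by assembling $\Pi$ and testing that its largest eigenvalue is negative. The following NumPy sketch performs only that verification step; actually searching for feasible matrices requires an SDP/LMI solver (the paper uses the MATLAB LMI toolbox), and the helper name `assemble_Pi` is ours.

```python
import numpy as np

def assemble_Pi(P, W, R, Q, D, H, C, A, B, G1, G2, G3, G4, tau_lo, tau_hi):
    """Assemble the block matrix Pi of LMI (3.2).

    Q = [Q1,...,Q6], D = [D1, D2], H = [H1, H2] (lists of n x n arrays).
    """
    n = P.shape[0]
    I = np.eye(n)
    delta = (tau_lo + tau_hi) // 2       # integer part, as in (3.1)
    r = 1 + tau_hi - tau_lo              # the factor (1 + bar-tau - underline-tau)
    S = P + Q[3] + Q[5]                  # P + Q4 + Q6 (0-based list indexing)
    T = Q[3] + Q[5]                      # Q4 + Q6
    Pi = np.zeros((7 * n, 7 * n))
    blk = lambda i, j: (slice(i * n, (i + 1) * n), slice(j * n, (j + 1) * n))
    Pi[blk(0, 0)] = (C.T @ P @ C - P + Q[0] + r * (Q[1] - 2 * G1 @ D[0] + 2 * G2 @ D[1])
                     - G3 @ H[0] + (C - I).T @ T @ (C - I) - Q[3] / delta**2 + Q[4] + W)
    Pi[blk(0, 1)] = C.T @ P @ A + r * (D[0] - D[1]) + G4 @ H[0] + (C - I).T @ T @ A
    Pi[blk(0, 2)] = C.T @ P @ B + (C - I).T @ T @ B
    Pi[blk(0, 3)] = C.T @ P + (C - I).T @ T
    Pi[blk(0, 4)] = Q[3] / delta**2
    Pi[blk(1, 1)] = r * Q[2] - H[0] + A.T @ S @ A
    Pi[blk(1, 2)] = A.T @ S @ B
    Pi[blk(1, 3)] = A.T @ S
    Pi[blk(2, 2)] = -Q[2] - H[1] + B.T @ S @ B
    Pi[blk(2, 3)] = B.T @ S
    Pi[blk(2, 5)] = -D[0] + D[1] + G4 @ H[1]
    Pi[blk(3, 3)] = S - R
    Pi[blk(4, 4)] = -Q[0] - Q[3] / delta**2 - Q[5] / (tau_hi - delta)**2
    Pi[blk(4, 6)] = Q[5] / (tau_hi - delta)**2
    Pi[blk(5, 5)] = -Q[1] + 2 * G1 @ D[0] - 2 * G2 @ D[1] - G3 @ H[1]
    Pi[blk(6, 6)] = -Q[4] - Q[5] / (tau_hi - delta)**2
    for i in range(7):                   # mirror the '*' entries by symmetry
        for j in range(i + 1, 7):
            Pi[blk(j, i)] = Pi[blk(i, j)].T
    return Pi

# A candidate (P, W, R, Q_i, D_i, H_i) satisfies (3.2) exactly when
# np.linalg.eigvalsh(assemble_Pi(...)).max() < 0.
```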

We are now in a position to discuss the global exponential dissipativity of discrete-time neural network (2.1).

Theorem 3.2. Under the conditions of Theorem 3.1, neural network (2.1) is globally exponentially dissipative, and the set $S$ defined in (3.3) is a positive invariant and globally exponentially attractive set.

Proof. When $x\notin S$, that is, $x\in\mathbb{R}^n\setminus S$, we know from (3.16) that
\[
\Delta V(k,x(k))\le-\lambda_{\min}(-\Pi)\|x(k)\|^2. \tag{3.19}
\]
From the definition of $V(k)$ in (3.5), it is easy to verify that
\[
V(k)\le\lambda_{\max}(P)\|x(k)\|^2+\rho\sum_{i=k-\bar\tau}^{k-1}\|x(i)\|^2, \tag{3.20}
\]
where
\[
\rho=\lambda_{\max}(Q_1)+\big(1+\bar\tau-\underline\tau\big)\Big(\lambda_{\max}(Q_2)+\lambda_{\max}(Q_3)\lambda_{\max}(L)+2\lambda_{\max}(LD_1)+2\lambda_{\max}\big(|G_1D_1|\big)+2\lambda_{\max}(LD_2)+2\lambda_{\max}\big(|G_2D_2|\big)\Big)+4\bar\tau\delta\lambda_{\max}(Q_4)+\lambda_{\max}(Q_5)+4\big(\bar\tau-\underline\tau\big)\lambda_{\max}(Q_6). \tag{3.21}
\]
For any scalar $\mu>1$, it follows from (3.19) and (3.20) that
\[
\mu^{j+1}V(j+1)-\mu^jV(j)=\mu^{j+1}\Delta V(j)+\mu^j(\mu-1)V(j)\le\big[\mu^j(\mu-1)\lambda_{\max}(P)-\mu^{j+1}\lambda_{\min}(-\Pi)\big]\|x(j)\|^2+\rho\mu^j(\mu-1)\sum_{i=j-\bar\tau}^{j-1}\|x(i)\|^2. \tag{3.22}
\]
Summing up both sides of (3.22) from $0$ to $k-1$ with respect to $j$, we have
\[
\mu^kV(k)-V(0)\le\big[(\mu-1)\lambda_{\max}(P)-\mu\lambda_{\min}(-\Pi)\big]\sum_{j=0}^{k-1}\mu^j\|x(j)\|^2+\rho(\mu-1)\sum_{j=0}^{k-1}\sum_{i=j-\bar\tau}^{j-1}\mu^j\|x(i)\|^2. \tag{3.23}
\]
It is easy to compute that
\[
\sum_{j=0}^{k-1}\sum_{i=j-\bar\tau}^{j-1}\mu^j\|x(i)\|^2\le\Bigg(\sum_{i=-\bar\tau}^{-1}\sum_{j=0}^{i+\bar\tau}+\sum_{i=0}^{k-1-\bar\tau}\sum_{j=i+1}^{i+\bar\tau}+\sum_{i=k-\bar\tau}^{k-1}\sum_{j=i+1}^{k-1}\Bigg)\mu^j\|x(i)\|^2\le\bar\tau\mu^{\bar\tau}\sup_{s\in\mathbb{N}[-\bar\tau,0]}\|x(s)\|^2+\bar\tau\mu^{\bar\tau}\sum_{i=0}^{k-1}\mu^i\|x(i)\|^2. \tag{3.24}
\]
From (3.20), we obtain
\[
V(0)\le\big(\lambda_{\max}(P)+\rho\bar\tau\big)\sup_{s\in\mathbb{N}[-\bar\tau,0]}\|x(s)\|^2. \tag{3.25}
\]
It follows from (3.23)–(3.25) that
\[
\mu^kV(k)\le L_1(\mu)\sup_{s\in\mathbb{N}[-\bar\tau,0]}\|x(s)\|^2+L_2(\mu)\sum_{i=0}^{k-1}\mu^i\|x(i)\|^2, \tag{3.26}
\]
where
\[
L_1(\mu)=\lambda_{\max}(P)+\rho\bar\tau+\rho(\mu-1)\bar\tau\mu^{\bar\tau},\qquad L_2(\mu)=(\mu-1)\lambda_{\max}(P)-\mu\lambda_{\min}(-\Pi)+\rho(\mu-1)\bar\tau\mu^{\bar\tau}. \tag{3.27}
\]
Since $L_2(1)<0$, by the continuity of the function $L_2(\mu)$, we can choose a scalar $\gamma>1$ such that $L_2(\gamma)\le0$. Obviously, $L_1(\gamma)>0$. From (3.26), we get
\[
\gamma^kV(k)\le L_1(\gamma)\sup_{s\in\mathbb{N}[-\bar\tau,0]}\|x(s)\|^2. \tag{3.28}
\]
From the definition of $V(k)$ in (3.5), we have
\[
V(k)\ge\lambda_{\min}(P)\|x(k)\|^2. \tag{3.29}
\]
Let $M=\sqrt{L_1(\gamma)/\lambda_{\min}(P)}$ and $\beta=1/\sqrt{\gamma}$; then $M>0$ and $0<\beta<1$. It follows from (3.28) and (3.29) that
\[
\|x(k)\|\le M\beta^k\sup_{s\in\mathbb{N}[-\bar\tau,0]}\|x(s)\| \tag{3.30}
\]
for all $k\in\mathbb{N}$, which means that discrete-time neural network (2.1) is globally exponentially dissipative, and the set $S$ is a positive invariant and globally exponentially attractive set as LMI (3.2) holds. The proof is completed.

Remark 3.3. In the study of the dissipativity of neural networks, assumption (H) of this paper is the same as that in [28]; the constants $G_j^-$ and $G_j^+$ $(j=1,2,\ldots,n)$ in assumption (H) are allowed to be positive, negative, or zero. Hence assumption (H), first proposed by Liu et al. in [6], is weaker than the assumptions in [23–27, 30].

Remark 3.4. The idea behind the construction of the Lyapunov-Krasovskii functional (3.5) is to divide the delay interval $[\underline\tau,\bar\tau]$ into the two subintervals $[\underline\tau,\delta]$ and $[\delta,\bar\tau]$, so that the proposed Lyapunov-Krasovskii functional takes a different form depending on which subinterval the time delay $\tau(k)$ belongs to. The main advantage of such a Lyapunov-Krasovskii functional is that it makes full use of the information on the considered time delay $\tau(k)$.

Now, let us consider the case when parameter uncertainties appear in the discrete-time neural network with time-varying delays. In this case, model (2.1) can be further generalized to the following one:

\[
x(k+1)=(C+\Delta C(k))x(k)+(A+\Delta A(k))g(x(k))+(B+\Delta B(k))g(x(k-\tau(k)))+u, \tag{3.31}
\]

where $C$, $A$, $B$ are known real constant matrices, and the time-varying matrices $\Delta C(k)$, $\Delta A(k)$, and $\Delta B(k)$ represent the time-varying parameter uncertainties that are assumed to satisfy the following admissible condition:

\[
\begin{bmatrix}\Delta C(k)&\Delta A(k)&\Delta B(k)\end{bmatrix}=MF(k)\begin{bmatrix}N_1&N_2&N_3\end{bmatrix}, \tag{3.32}
\]

where $M$ and $N_i$ $(i=1,2,3)$ are known real constant matrices, and $F(k)$ is the unknown time-varying matrix-valued function subject to the following condition:

\[
F^T(k)F(k)\le I. \tag{3.33}
\]
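The admissible uncertainty (3.32)-(3.33) is straightforward to sample for simulation studies: draw any $F(k)$ with $F^T(k)F(k)\le I$ and push it through $M$ and $N_i$. A small sketch with hypothetical shapes and data (a rotation matrix is used for $F(k)$, so the norm condition holds with equality):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, q = 2, 2, 2
M = 0.1 * rng.standard_normal((n, p))                # known uncertainty structure
N1, N2, N3 = (0.1 * rng.standard_normal((q, n)) for _ in range(3))

def admissible_F(k):
    """A time-varying F(k) with F^T(k) F(k) <= I (here a rotation: F^T F = I)."""
    t = 0.3 * k
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

F = admissible_F(5)
dC, dA, dB = M @ F @ N1, M @ F @ N2, M @ F @ N3      # eq. (3.32)
print(np.allclose(F.T @ F, np.eye(q)))               # confirms condition (3.33)
```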

For model (3.31), we have the following result readily.

Theorem 3.5. Suppose that (H) holds. If there exist nine symmetric positive definite matrices $P>0$, $W>0$, $R>0$, $Q_i>0$ $(i=1,2,3,4,5,6)$, four positive diagonal matrices $D_i>0$ $(i=1,2)$, $H_i>0$ $(i=1,2)$, and a scalar $\varepsilon>0$ such that the following LMI holds:
\[
\Gamma=\begin{bmatrix}\Gamma_1&\Gamma_2&\Gamma_3&\Gamma_4&0&\varepsilon\Gamma_5\\ *&-P&0&0&PM&0\\ *&*&-Q_4&0&Q_4M&0\\ *&*&*&-Q_6&Q_6M&0\\ *&*&*&*&-\varepsilon I&0\\ *&*&*&*&*&-\varepsilon I\end{bmatrix}<0, \tag{3.34}
\]
where
\[
\Gamma_1=\begin{bmatrix}\Omega_{11}&\Omega_{12}&0&0&\Omega_{15}&0&0\\ *&\Omega_{22}&0&0&0&0&0\\ *&*&\Omega_{33}&0&0&\Omega_{36}&0\\ *&*&*&\Omega_{44}&0&0&0\\ *&*&*&*&\Omega_{55}&0&\Omega_{57}\\ *&*&*&*&*&\Omega_{66}&0\\ *&*&*&*&*&*&\Omega_{77}\end{bmatrix},
\]
\[
\Gamma_2=\begin{bmatrix}PC&PA&PB&P&0&0&0\end{bmatrix}^T,\qquad \Gamma_3=\begin{bmatrix}Q_4(C-I)&Q_4A&Q_4B&Q_4&0&0&0\end{bmatrix}^T,
\]
\[
\Gamma_4=\begin{bmatrix}Q_6(C-I)&Q_6A&Q_6B&Q_6&0&0&0\end{bmatrix}^T,\qquad \Gamma_5=\begin{bmatrix}N_1&N_2&N_3&0&0&0&0\end{bmatrix}^T, \tag{3.35}
\]
with $\Omega_{11}=-P+Q_1+(1+\bar\tau-\underline\tau)(Q_2-2G_1D_1+2G_2D_2)-G_3H_1-\frac{1}{\delta^2}Q_4+Q_5+W$, $\Omega_{12}=(1+\bar\tau-\underline\tau)(D_1-D_2)+G_4H_1$, $\Omega_{15}=\frac{1}{\delta^2}Q_4$, $\Omega_{22}=(1+\bar\tau-\underline\tau)Q_3-H_1$, $\Omega_{33}=-Q_3-H_2$, $\Omega_{36}=-D_1+D_2+G_4H_2$, $\Omega_{44}=-R$, $\Omega_{55}=-Q_1-\frac{1}{\delta^2}Q_4-\frac{1}{(\bar\tau-\delta)^2}Q_6$, $\Omega_{57}=\frac{1}{(\bar\tau-\delta)^2}Q_6$, $\Omega_{66}=-Q_2+2G_1D_1-2G_2D_2-G_3H_2$, and $\Omega_{77}=-Q_5-\frac{1}{(\bar\tau-\delta)^2}Q_6$, then uncertain discrete-time neural network (3.31) is globally dissipative and globally exponentially dissipative, and
\[
S=\Big\{x:\|x\|\le\Big(\frac{u^TRu}{\lambda_{\min}(W)}\Big)^{1/2}\Big\} \tag{3.36}
\]
is a positive invariant and globally attractive set.

Proof. By Lemma 2.3, we know that LMI (3.34) is equivalent to the following inequality:
\[
\begin{bmatrix}\Gamma_1&\Gamma_2&\Gamma_3&\Gamma_4\\ *&-P&0&0\\ *&*&-Q_4&0\\ *&*&*&-Q_6\end{bmatrix}+\varepsilon^{-1}\begin{bmatrix}0\\PM\\Q_4M\\Q_6M\end{bmatrix}\begin{bmatrix}0\\PM\\Q_4M\\Q_6M\end{bmatrix}^T+\varepsilon\begin{bmatrix}\Gamma_5\\0\\0\\0\end{bmatrix}\begin{bmatrix}\Gamma_5\\0\\0\\0\end{bmatrix}^T<0. \tag{3.37}
\]
From Lemma 2.4, we know that (3.37) is equivalent to the following inequality:
\[
\begin{bmatrix}\Gamma_1&\Gamma_2&\Gamma_3&\Gamma_4\\ *&-P&0&0\\ *&*&-Q_4&0\\ *&*&*&-Q_6\end{bmatrix}+\begin{bmatrix}0\\PM\\Q_4M\\Q_6M\end{bmatrix}F(k)\begin{bmatrix}\Gamma_5\\0\\0\\0\end{bmatrix}^T+\begin{bmatrix}\Gamma_5\\0\\0\\0\end{bmatrix}F^T(k)\begin{bmatrix}0\\PM\\Q_4M\\Q_6M\end{bmatrix}^T<0, \tag{3.38}
\]
that is,
\[
\begin{bmatrix}\Gamma_1&\Gamma_2+\Gamma_5F^T(k)M^TP&\Gamma_3+\Gamma_5F^T(k)M^TQ_4&\Gamma_4+\Gamma_5F^T(k)M^TQ_6\\ *&-P&0&0\\ *&*&-Q_4&0\\ *&*&*&-Q_6\end{bmatrix}<0. \tag{3.39}
\]
Let $\Phi=\Gamma_2+\Gamma_5F^T(k)M^TP$, $\Psi=\Gamma_3+\Gamma_5F^T(k)M^TQ_4$, and $\Upsilon=\Gamma_4+\Gamma_5F^T(k)M^TQ_6$. As an application of Lemma 2.3, we know that (3.39) is equivalent to the following inequality:
\[
\Gamma_1+\Phi P^{-1}\Phi^T+\Psi Q_4^{-1}\Psi^T+\Upsilon Q_6^{-1}\Upsilon^T<0. \tag{3.40}
\]
By simple computation, and noting that $[\Delta C(k)\ \Delta A(k)\ \Delta B(k)]=MF(k)[N_1\ N_2\ N_3]$, we have
\[
\Phi=\begin{bmatrix}P(C+\Delta C)&P(A+\Delta A)&P(B+\Delta B)&P&0&0&0\end{bmatrix}^T,
\]
\[
\Psi=\begin{bmatrix}Q_4(C+\Delta C-I)&Q_4(A+\Delta A)&Q_4(B+\Delta B)&Q_4&0&0&0\end{bmatrix}^T,
\]
\[
\Upsilon=\begin{bmatrix}Q_6(C+\Delta C-I)&Q_6(A+\Delta A)&Q_6(B+\Delta B)&Q_6&0&0&0\end{bmatrix}^T. \tag{3.41}
\]
Therefore, inequality (3.40) is exactly inequality (3.2) with $C+\Delta C$, $A+\Delta A$, and $B+\Delta B$ in place of $C$, $A$, and $B$, respectively. From Theorems 3.1 and 3.2, we know that uncertain discrete-time neural network (3.31) is globally dissipative and globally exponentially dissipative, and
\[
S=\Big\{x:\|x\|\le\Big(\frac{u^TRu}{\lambda_{\min}(W)}\Big)^{1/2}\Big\} \tag{3.42}
\]
is a positive invariant and globally attractive set. The proof is then completed.

4. Examples

Example 4.1. Consider a discrete-time neural network (2.1) with
\[
C=\begin{bmatrix}0.04&0\\0&0.01\end{bmatrix},\quad A=\begin{bmatrix}0.01&0.08\\-0.05&0.02\end{bmatrix},\quad B=\begin{bmatrix}-0.05&0.01\\0.02&0.07\end{bmatrix},\quad u=\begin{bmatrix}-0.21\\0.14\end{bmatrix},
\]
\[
g_1(x)=\tanh(0.7x)-0.1\sin x,\quad g_2(x)=\tanh(0.4x)+0.2\cos x,\quad \tau(k)=7-2\sin\Big(\frac{k\pi}{2}\Big). \tag{4.1}
\]
It is easy to check that assumption (H) is satisfied, with $G_1^-=-0.1$, $G_1^+=0.8$, $G_2^-=-0.2$, $G_2^+=0.6$, $\underline\tau=5$, and $\bar\tau=9$. Thus
\[
G_1=\begin{bmatrix}-0.1&0\\0&-0.2\end{bmatrix},\quad G_2=\begin{bmatrix}0.8&0\\0&0.6\end{bmatrix},\quad G_3=\begin{bmatrix}-0.08&0\\0&-0.12\end{bmatrix},\quad G_4=\begin{bmatrix}0.35&0\\0&0.2\end{bmatrix},\quad \delta=7. \tag{4.2}
\]
By the MATLAB LMI Control Toolbox, we can find a solution to the LMI in (3.2) as follows:
\[
Q_1=\begin{bmatrix}7.1693&-0.0107\\-0.0107&7.3526\end{bmatrix},\quad Q_2=\begin{bmatrix}1.5095&-0.0059\\-0.0059&1.6330\end{bmatrix},\quad Q_3=\begin{bmatrix}2.2455&0.0336\\0.0336&2.7158\end{bmatrix},
\]
\[
Q_4=\begin{bmatrix}2.3895&-0.0065\\-0.0065&1.8946\end{bmatrix},\quad Q_5=\begin{bmatrix}7.1927&-0.0108\\-0.0108&7.3714\end{bmatrix},\quad Q_6=\begin{bmatrix}2.4575&-0.0064\\-0.0064&1.8764\end{bmatrix},
\]
\[
P=\begin{bmatrix}57.3173&-0.0727\\-0.0727&57.7631\end{bmatrix},\quad W=\begin{bmatrix}5.0249&-0.0110\\-0.0110&5.1411\end{bmatrix},\quad R=\begin{bmatrix}72.2368&0.0445\\0.0445&71.4916\end{bmatrix},
\]
\[
D_1=\begin{bmatrix}1.2817&0\\0&1.7411\end{bmatrix},\quad D_2=\begin{bmatrix}2.1180&0\\0&2.3157\end{bmatrix},\quad H_1=\begin{bmatrix}19.7946&0\\0&23.7062\end{bmatrix},\quad H_2=\begin{bmatrix}9.1110&0\\0&10.5938\end{bmatrix}. \tag{4.3}
\]
Therefore, by Theorems 3.1 and 3.2, model (2.1) with the above parameters is globally dissipative and globally exponentially dissipative. It is easy to compute that the positive invariant and globally attractive set is $S=\{x:\|x\|\le0.9553\}$.
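The radius of $S$ follows directly from the reported solution: evaluating (3.3) with the matrices $R$, $W$ and the input $u$ above essentially reproduces the stated value (the printed matrices are rounded to four decimals, which accounts for the last digit).

```python
import numpy as np

u = np.array([-0.21, 0.14])
R = np.array([[72.2368, 0.0445], [0.0445, 71.4916]])
W = np.array([[5.0249, -0.0110], [-0.0110, 5.1411]])

radius = np.sqrt(u @ R @ u / np.linalg.eigvalsh(W).min())  # formula (3.3)
print(round(radius, 4))  # 0.9552 with the rounded data; the paper reports 0.9553
```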

The following example illustrates that, when the sufficient conditions ensuring global dissipativity are not satisfied, complex dynamics may appear.

Example 4.2. Consider a discrete-time neural network (2.1) with
\[
C=\begin{bmatrix}0.12&0\\0&0.68\end{bmatrix},\quad A=\begin{bmatrix}16.7&4\\25.3&8.4\end{bmatrix},\quad B=\begin{bmatrix}-9.9&-29.7\\19.2&-19.7\end{bmatrix},\quad u=\begin{bmatrix}10.9\\-3.4\end{bmatrix},
\]
\[
g_1(x)=\tanh(0.7x)-0.1\sin x,\quad g_2(x)=\tanh(0.4x)+0.2\cos x,\quad \tau(k)=7-2\sin\Big(\frac{k\pi}{2}\Big). \tag{4.4}
\]
It is easy to check that the linear matrix inequality (3.2), with the $G_i$ $(i=1,2,3,4)$ and $\delta$ of Example 4.1, has no feasible solution. Figure 1 depicts the states of the considered neural network (2.1) with initial conditions $x_1(s)=0.5$, $x_2(s)=0.45$, $s\in\mathbb{N}[-9,0]$. One can see from Figure 1 that chaotic behaviors appear for neural network (2.1) with the above parameters.

5. Conclusions

In this paper, the global dissipativity and global exponential dissipativity have been investigated for uncertain discrete-time neural networks with time-varying delays and general activation functions. By constructing appropriate Lyapunov-Krasovskii functionals and employing the linear matrix inequality technique, several new delay-dependent criteria for checking the global dissipativity and global exponential dissipativity of the addressed neural networks have been derived in terms of LMIs, which can be checked numerically using the effective LMI toolbox in MATLAB. Illustrative examples have also been given to show the effectiveness of the proposed criteria. It is noteworthy that, because neither model transformation nor free-weighting matrices are employed to deal with cross terms in the derivation of the dissipativity criteria, the obtained results are less conservative and more computationally efficient.

Acknowledgments

The authors would like to thank the reviewers and the editor for their valuable suggestions and comments which have led to a much improved paper. This work was supported by the National Natural Science Foundation of China under Grants 60974132, 60874088, and 10772152.

References

1. K. Gopalsamy and X.-Z. He, "Delay-independent stability in bidirectional associative memory networks," IEEE Transactions on Neural Networks, vol. 5, no. 6, pp. 998–1002, 1994.
2. M. Forti, "On global asymptotic stability of a class of nonlinear systems arising in neural network theory," Journal of Differential Equations, vol. 113, no. 1, pp. 246–264, 1994.
3. S. Arik, "An analysis of exponential stability of delayed neural networks with time varying delays," Neural Networks, vol. 17, no. 7, pp. 1027–1031, 2004.
4. D. Xu and Z. Yang, "Impulsive delay differential inequality and stability of neural networks," Journal of Mathematical Analysis and Applications, vol. 305, no. 1, pp. 107–120, 2005.
5. T. Chen, W. Lu, and G. Chen, "Dynamical behaviors of a large class of general delayed neural networks," Neural Computation, vol. 17, no. 4, pp. 949–968, 2005.
6. Y. Liu, Z. Wang, and X. Liu, "Global exponential stability of generalized recurrent neural networks with discrete and distributed delays," Neural Networks, vol. 19, no. 5, pp. 667–675, 2006.
7. X. Liao, Q. Luo, Z. Zeng, and Y. Guo, "Global exponential stability in Lagrange sense for recurrent neural networks with time delays," Nonlinear Analysis: Real World Applications, vol. 9, no. 4, pp. 1535–1557, 2008.
8. H. Zhang, Z. Wang, and D. Liu, "Global asymptotic stability of recurrent neural networks with multiple time-varying delays," IEEE Transactions on Neural Networks, vol. 19, no. 5, pp. 855–873, 2008.
9. J. H. Park and O. M. Kwon, "Further results on state estimation for neural networks of neutral-type with time-varying delay," Applied Mathematics and Computation, vol. 208, no. 1, pp. 69–75, 2009.
10. S. Xu, W. X. Zheng, and Y. Zou, "Passivity analysis of neural networks with time-varying delays," IEEE Transactions on Circuits and Systems II, vol. 56, no. 4, pp. 325–329, 2009.
11. J.-C. Ban, C.-H. Chang, S.-S. Lin, and Y.-H. Lin, "Spatial complexity in multi-layer cellular neural networks," Journal of Differential Equations, vol. 246, no. 2, pp. 552–580, 2009.
12. Z. Wang, Y. Liu, and X. Liu, "State estimation for jumping recurrent neural networks with discrete and distributed delays," Neural Networks, vol. 22, no. 1, pp. 41–48, 2009.
13. J. H. Park, C. H. Park, O. M. Kwon, and S. M. Lee, "A new stability criterion for bidirectional associative memory neural networks of neutral-type," Applied Mathematics and Computation, vol. 199, no. 2, pp. 716–722, 2008.
14. H. J. Cho and J. H. Park, "Novel delay-dependent robust stability criterion of delayed cellular neural networks," Chaos, Solitons and Fractals, vol. 32, no. 3, pp. 1194–1200, 2007.
15. J. H. Park, "Robust stability of bidirectional associative memory neural networks with time delays," Physics Letters A, vol. 349, no. 6, pp. 494–499, 2006.
16. J. H. Park, S. M. Lee, and H. Y. Jung, "LMI optimization approach to synchronization of stochastic delayed discrete-time complex networks," Journal of Optimization Theory and Applications, vol. 143, no. 2, pp. 357–367, 2009.
17. S. Mohamad and K. Gopalsamy, "Exponential stability of continuous-time and discrete-time cellular neural networks with delays," Applied Mathematics and Computation, vol. 135, no. 1, pp. 17–38, 2003.
18. S. Hu and J. Wang, "Global robust stability of a class of discrete-time interval neural networks," IEEE Transactions on Circuits and Systems I, vol. 53, no. 1, pp. 129–138, 2006.
19. H. Gao and T. Chen, "New results on stability of discrete-time systems with time-varying state delay," IEEE Transactions on Automatic Control, vol. 52, no. 2, pp. 328–334, 2007.
20. Z. Wu, H. Su, J. Chu, and W. Zhou, "Improved result on stability analysis of discrete stochastic neural networks with time delay," Physics Letters A, vol. 373, no. 17, pp. 1546–1552, 2009.
21. J. Cao and F. Ren, "Exponential stability of discrete-time genetic regulatory networks with delays," IEEE Transactions on Neural Networks, vol. 19, no. 3, pp. 520–523, 2008.
22. J. K. Hale, Asymptotic Behavior of Dissipative Systems, vol. 25 of Mathematical Surveys and Monographs, American Mathematical Society, Providence, RI, USA, 1988.
23. X. Liao and J. Wang, "Global dissipativity of continuous-time recurrent neural networks with time delay," Physical Review E, vol. 68, no. 1, Article ID 016118, 7 pages, 2003.
24. S. Arik, "On the global dissipativity of dynamical neural networks with time delays," Physics Letters A, vol. 326, no. 1-2, pp. 126–132, 2004.
25. Q. Song and Z. Zhao, "Global dissipativity of neural networks with both variable and unbounded delays," Chaos, Solitons and Fractals, vol. 25, no. 2, pp. 393–401, 2005.
26. X. Y. Lou and B. T. Cui, "Global robust dissipativity for integro-differential systems modeling neural networks with delays," Chaos, Solitons and Fractals, vol. 36, no. 2, pp. 469–478, 2008.
27. J. Cao, K. Yuan, D. W. C. Ho, and J. Lam, "Global point dissipativity of neural networks with mixed time-varying delays," Chaos, vol. 16, no. 1, Article ID 013105, 9 pages, 2006.
28. Q. Song and J. Cao, "Global dissipativity analysis on uncertain neural networks with mixed time-varying delays," Chaos, vol. 18, no. 4, Article ID 043126, 10 pages, 2008.
29. W. M. Haddad and V. S. Chellaboina, "Stability and dissipativity theory for nonnegative dynamical systems: a unified analysis framework for biological and physiological systems," Nonlinear Analysis: Real World Applications, vol. 6, no. 1, pp. 35–65, 2005.
30. Y. Huang, D. Xu, and Z. Yang, "Dissipativity and periodic attractor for non-autonomous neural networks with time-varying delays," Neurocomputing, vol. 70, no. 16–18, pp. 2953–2958, 2007.
31. H. Tian, N. Guo, and A. Shen, "Dissipativity of delay functional differential equations with bounded lag," Journal of Mathematical Analysis and Applications, vol. 355, no. 2, pp. 778–782, 2009.
32. M. S. Mahmoud, Y. Shi, and F. M. AL-Sunni, "Dissipativity analysis and synthesis of a class of nonlinear systems with time-varying delays," Journal of the Franklin Institute, vol. 346, no. 6, pp. 570–592, 2009.
33. S. Gan, "Dissipativity of θ-methods for nonlinear delay differential equations of neutral type," Applied Numerical Mathematics, vol. 59, no. 6, pp. 1354–1365, 2009.
34. S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, vol. 15 of SIAM Studies in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1994.
35. P. P. Khargonekar, I. R. Petersen, and K. Zhou, "Robust stabilization of uncertain linear systems: quadratic stabilizability and H∞ control theory," IEEE Transactions on Automatic Control, vol. 35, no. 3, pp. 356–361, 1990.
36. L. Xie, M. Fu, and C. E. de Souza, "H∞-control and quadratic stabilization of systems with parameter uncertainty via output feedback," IEEE Transactions on Automatic Control, vol. 37, no. 8, pp. 1253–1256, 1992.

Copyright © 2010 Qiankun Song and Jinde Cao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
