Discrete Dynamics in Nature and Society
Volume 2008, Article ID 421614, 14 pages
http://dx.doi.org/10.1155/2008/421614
Research Article

Delay-Dependent Exponential Stability for Discrete-Time BAM Neural Networks with Time-Varying Delays

1Department of Mathematics, Henan Institute of Science and Technology, Xinxiang 453003, China
2College of Mathematics and Information Science, Henan Normal University, Xinxiang 453007, China
3Key Laboratory of Measurement and Control of Complex Systems of Engineering, Ministry of Education, School of Automation, Southeast University, Nanjing 210096, China

Received 18 May 2008; Revised 5 August 2008; Accepted 10 September 2008

Academic Editor: Yong Zhou

Copyright © 2008 Yonggang Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper considers the delay-dependent exponential stability of discrete-time BAM neural networks with time-varying delays. By constructing a new Lyapunov functional, an improved delay-dependent exponential stability criterion is derived in terms of a linear matrix inequality (LMI). Moreover, in order to reduce the conservativeness, some slack matrices are introduced. Two numerical examples are presented to show the effectiveness and reduced conservatism of the proposed method.

1. Introduction

Bidirectional associative memory (BAM) neural networks were first introduced by Kosko [1, 2]. They generalize the single-layer autoassociative Hebbian correlator to a two-layer pattern-matched heteroassociative circuit. This class of neural networks has been successfully applied to pattern recognition and artificial intelligence. In such applications, it is very important to guarantee that the designed neural networks are stable. On the other hand, time delays are unavoidably encountered in the implementation of neural networks, and their existence may lead to instability, oscillation, and poor performance. Therefore, the asymptotic or exponential stability analysis of BAM neural networks with time delays has received great attention during the past years; see, for example, [3–15]. The obtained results can be classified into two categories: delay-independent results [3–5, 7, 10] and delay-dependent results [6, 9, 11, 12, 14–16]. Generally speaking, the delay-dependent results are less conservative than the delay-independent ones, especially when the size of the time delay is small.

It should be pointed out that all of the above-mentioned references are concerned with continuous-time BAM neural networks. However, when implementing a continuous-time neural network for experimental or computational purposes, it is essential to formulate a discrete-time system that is an analogue of the continuous-time recurrent neural network [17]. Generally speaking, the stability analysis of continuous-time neural networks does not carry over to the discrete-time version. Therefore, the stability of various discrete-time neural networks with time delays has been widely studied in recent years [17–28]. Using the M-matrix method, Liang and Cao [25] studied the exponential stability of a continuous-time BAM neural network with constant delays and its discrete analogue. Liang et al. [26] also studied the dynamics of discrete-time BAM neural networks with variable time delays, but under unreasonably severe constraints on the delay functions. By using the Lyapunov functional method and the linear matrix inequality technique, the exponential stability and robust exponential stability of discrete-time BAM neural networks with variable delays were considered in [27, 28], respectively. However, the results presented in [27, 28] are still conservative, since the Lyapunov functionals constructed there are rather simple. Therefore, there is much room to improve the results in [27, 28].

In this paper, we present a new exponential stability criterion for discrete-time BAM neural networks with time-varying delays. Compared with the existing methods, the main contributions of this paper are as follows. Firstly, a more general Lyapunov functional is employed to obtain an improved exponential stability criterion; secondly, in order to reduce the conservativeness, some slack matrices, which bring much flexibility in solving the LMI, are introduced. The proposed exponential stability criterion is expressed as an LMI, which can be efficiently solved by the LMI toolbox in Matlab. Finally, two numerical examples are presented to show that our result is less conservative than some existing ones [27, 28].

The organization of this paper is as follows. Section 2 presents the problem formulation for discrete-time BAM neural networks with time-varying delays. In Section 3, the main result of this paper is established and some remarks are given. In Section 4, numerical examples are given to demonstrate the proposed method. Finally, the conclusion is drawn in Section 5.

Notation
Throughout this paper, the superscript "$T$" stands for matrix transposition. $R^n$ and $R^{n\times n}$ denote the $n$-dimensional Euclidean space and the set of all $n\times n$ real matrices, respectively. For a real symmetric matrix $P$, $P>0$ ($P\ge0$) means that $P$ is positive definite (positive semidefinite). $I$ denotes an identity matrix of appropriate dimension. For integers $a,b$ with $a<b$, $N[a,b]$ denotes the discrete interval $N[a,b]=\{a,a+1,\ldots,b-1,b\}$. $\lambda_M(X)$ and $\lambda_m(X)$ stand for the maximum and minimum eigenvalues of the symmetric matrix $X$, respectively. Matrices, if not explicitly stated, are assumed to have compatible dimensions. The symmetric terms in a symmetric matrix are denoted by $*$.

2. Problem Formulation

Consider the following discrete-time BAM neural network with time-varying delays:
$$\begin{aligned}
u_i(k+1)&=a_iu_i(k)+\sum_{j=1}^{m}w_{ij}f_j\big(v_j(k-\tau(k))\big)+I_i,\quad i=1,2,\ldots,n,\\
v_j(k+1)&=b_jv_j(k)+\sum_{i=1}^{n}v_{ji}g_i\big(u_i(k-h(k))\big)+J_j,\quad j=1,2,\ldots,m,
\end{aligned}\tag{2.1}$$
where $u_i(k)$ and $v_j(k)$ are the states of the $i$th neuron of the neural field $F_X$ and the $j$th neuron of the neural field $F_Y$ at time $k$, respectively; $a_i,b_j\in(0,1)$ describe the stability of the internal neuron processes on the $X$-layer and the $Y$-layer, respectively; $w_{ij},v_{ji}$ are real constants denoting the synaptic connection weights; $f_j(\cdot)$ and $g_i(\cdot)$ denote the activation functions of the $j$th neuron of $F_Y$ and the $i$th neuron of $F_X$, respectively; $I_i$ and $J_j$ denote the external constant inputs acting on the $i$th neuron of $F_X$ and the $j$th neuron of $F_Y$, respectively. $\tau(k)$ and $h(k)$ represent the time-varying delays satisfying
$$\tau_m\le\tau(k)\le\tau_M,\qquad h_m\le h(k)\le h_M,\tag{2.2}$$
where $\tau_m$, $\tau_M$, $h_m$, and $h_M$ are positive integers.

Throughout this paper, we make the following assumptions on the activation functions $f_j(\cdot)$ and $g_i(\cdot)$.

(A1) The activation functions $f_j(\cdot),g_i(\cdot)$ $(i=1,2,\ldots,n,\ j=1,2,\ldots,m)$ are bounded on $R$.
(A2) For any $\zeta_1,\zeta_2\in R$, there exist positive scalars $l_{1j},l_{2i}$ such that
$$0\le|f_j(\zeta_1)-f_j(\zeta_2)|\le l_{1j}|\zeta_1-\zeta_2|,\quad j=1,2,\ldots,m,$$
$$0\le|g_i(\zeta_1)-g_i(\zeta_2)|\le l_{2i}|\zeta_1-\zeta_2|,\quad i=1,2,\ldots,n.\tag{2.3}$$

It is clear that under Assumptions (A1) and (A2), system (2.1) has at least one equilibrium. In order to simplify the proof, we shift the equilibrium point $u^*=[u_1^*,u_2^*,\ldots,u_n^*]^T$, $v^*=[v_1^*,v_2^*,\ldots,v_m^*]^T$ of system (2.1) to the origin. Let $x_i(k)=u_i(k)-u_i^*$, $y_j(k)=v_j(k)-v_j^*$, and, with a slight abuse of notation, $f_j(y_j(k))=f_j(v_j(k))-f_j(v_j^*)$, $g_i(x_i(k))=g_i(u_i(k))-g_i(u_i^*)$; then system (2.1) is transformed into
$$\begin{aligned}
x_i(k+1)&=a_ix_i(k)+\sum_{j=1}^{m}w_{ij}f_j\big(y_j(k-\tau(k))\big),\quad i=1,2,\ldots,n,\\
y_j(k+1)&=b_jy_j(k)+\sum_{i=1}^{n}v_{ji}g_i\big(x_i(k-h(k))\big),\quad j=1,2,\ldots,m.
\end{aligned}\tag{2.4}$$
Obviously, the activation functions $f_j(\cdot),g_i(\cdot)$ satisfy the following condition.

(A3) For any $\zeta\in R$, there exist positive scalars $l_{1j},l_{2i}$ such that
$$0\le|f_j(\zeta)|\le l_{1j}|\zeta|,\quad j=1,2,\ldots,m,\qquad 0\le|g_i(\zeta)|\le l_{2i}|\zeta|,\quad i=1,2,\ldots,n.\tag{2.5}$$
System (2.4) is supplemented with the initial condition
$$x_i(s)=\phi_i(s),\quad y_j(s)=\psi_j(s),\quad s\in N[-\bar\tau,0],\quad i=1,2,\ldots,n,\ j=1,2,\ldots,m,\tag{2.6}$$
where $\bar\tau=\max\{\tau_M,h_M\}$.

For convenience, we rewrite system (2.4) in the vector form
$$\begin{aligned}
x(k+1)&=Ax(k)+Wf\big(y(k-\tau(k))\big),\\
y(k+1)&=By(k)+Vg\big(x(k-h(k))\big),\\
x(s)&=\phi(s),\quad y(s)=\psi(s),\quad s\in N[-\bar\tau,0],
\end{aligned}\tag{2.7}$$
where $x(k)=[x_1(k),x_2(k),\ldots,x_n(k)]^T$, $y(k)=[y_1(k),y_2(k),\ldots,y_m(k)]^T$, $f(y(k))=[f_1(y_1(k)),f_2(y_2(k)),\ldots,f_m(y_m(k))]^T$, $g(x(k))=[g_1(x_1(k)),g_2(x_2(k)),\ldots,g_n(x_n(k))]^T$, $A=\mathrm{diag}\{a_1,a_2,\ldots,a_n\}$, $B=\mathrm{diag}\{b_1,b_2,\ldots,b_m\}$, $W=(w_{ij})_{n\times m}$, $V=(v_{ji})_{m\times n}$, $\phi(s)=[\phi_1(s),\phi_2(s),\ldots,\phi_n(s)]^T$, $\psi(s)=[\psi_1(s),\psi_2(s),\ldots,\psi_m(s)]^T$.
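As a quick sanity check of the formulation, the vector-form recursion (2.7) can be iterated directly. All numbers below are illustrative assumptions, not data from the paper: small generic weight matrices, tanh activations (which satisfy (A3) with l_1j = l_2i = 1), randomly switching integer delays within their bounds, and a random initial history; a minimal sketch:

```python
import numpy as np

# Illustrative instance of system (2.7); all numbers here are assumptions.
A = np.diag([0.6, 0.7])                    # a_i in (0, 1)
B = np.diag([0.5, 0.4])                    # b_j in (0, 1)
W = np.array([[0.10, 0.05], [0.05, 0.10]])  # synaptic weights (hypothetical)
V = np.array([[0.10, 0.00], [0.05, 0.10]])
f = g = np.tanh                             # satisfy (A3) with l_1j = l_2i = 1

def simulate(T=80, tau_m=1, tau_M=3, h_m=1, h_M=2, seed=0):
    rng = np.random.default_rng(seed)
    d = max(tau_M, h_M)
    # indices 0..d hold the initial history phi, psi on N[-d, 0]
    x = list(rng.uniform(-1.0, 1.0, (d + 1, 2)))
    y = list(rng.uniform(-1.0, 1.0, (d + 1, 2)))
    for k in range(d, d + T):
        tau = int(rng.integers(tau_m, tau_M + 1))  # tau_m <= tau(k) <= tau_M
        h = int(rng.integers(h_m, h_M + 1))        # h_m <= h(k) <= h_M
        x.append(A @ x[k] + W @ f(y[k - tau]))
        y.append(B @ y[k] + V @ g(x[k - h]))
    return np.array(x), np.array(y)

x, y = simulate()
print(np.linalg.norm(x[-1]) + np.linalg.norm(y[-1]))  # contracts toward the origin
```

With these deliberately small weights the coupled trajectory decays geometrically, which is the behavior the stability criterion of Section 3 certifies for admissible parameter sets.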

From the above analysis, we can see that the exponential stability problem of system (2.1) at the equilibrium $(u^*,v^*)$ is transformed into the stability problem of the zero solution of system (2.7). Therefore, in the following part, we are devoted to the exponential stability analysis of system (2.7).

Before giving the main result, we first introduce the following definition and lemma.

Definition 2.1. The trivial solution of BAM neural network (2.7) is said to be globally exponentially stable if there exist scalars $r>1$ and $M>1$ such that
$$\|x(k)\|^2+\|y(k)\|^2\le M\Big(\sup_{s\in N[-h_M,0]}\|\phi(s)\|^2+\sup_{s\in N[-\tau_M,0]}\|\psi(s)\|^2\Big)r^{-k}.\tag{2.8}$$
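For intuition, in the scalar delay-free case $x(k+1)=ax(k)$ with $|a|<1$, the definition is met with $r=1/a^2$ and $M=1$, since $\|x(k)\|^2=a^{2k}\|x(0)\|^2$. A toy numerical check of estimate (2.8) for this case (the values $a=0.5$, $x(0)=3$ are arbitrary assumptions):

```python
# Scalar delay-free case x(k+1) = a x(k): definition (2.8) holds with
# r = 1/a^2 and M = 1. Values below are arbitrary illustrative choices.
a, x0 = 0.5, 3.0
r = 1.0 / a**2                               # r = 4 > 1
x = x0
for k in range(50):
    assert x**2 <= x0**2 * r**(-k) + 1e-12   # the bound (2.8) with M = 1
    x = a * x
```

The bound is tight here (it holds with equality at every step), which is why $M=1$ suffices in this trivial case; the delayed system (2.7) generally needs $M>1$ to absorb the initial history.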

Lemma 2.2 (see [29]). For any real vectors $a,b$ and any matrix $Q>0$ of appropriate dimensions, it follows that
$$2a^Tb\le a^TQa+b^TQ^{-1}b.\tag{2.9}$$
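The inequality follows from expanding $(Qa-b)^TQ^{-1}(Qa-b)\ge0$. A quick numerical sanity check on random instances (all data generated here is illustrative):

```python
import numpy as np

# Check 2 a^T b <= a^T Q a + b^T Q^{-1} b on random vectors and random Q > 0.
rng = np.random.default_rng(1)
for _ in range(1000):
    n = int(rng.integers(1, 6))
    a, b = rng.normal(size=n), rng.normal(size=n)
    M = rng.normal(size=(n, n))
    Q = M @ M.T + np.eye(n)                  # positive definite by construction
    lhs = 2.0 * a @ b
    rhs = a @ Q @ a + b @ np.linalg.solve(Q, b)
    # the gap equals (Qa - b)^T Q^{-1} (Qa - b) >= 0
    assert lhs <= rhs + 1e-9
print("Lemma 2.2 holds on 1000 random instances")
```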

3. Main Result

In this section, we are in a position to present the global exponential stability criterion of system (2.7).

Theorem 3.1. Let $L_1=\mathrm{diag}\{l_{11},l_{12},\ldots,l_{1m}\}$ and $L_2=\mathrm{diag}\{l_{21},l_{22},\ldots,l_{2n}\}$ be the diagonal matrices formed from the constants in (2.3). Under Assumptions (A1) and (A2), system (2.7) is globally exponentially stable if there exist matrices $P_i>0$, $Q_i>0$, $R_i>0$, $Z_i>0$ $(i=1,2)$, matrices $L_j,M_j,N_j,T_j$ $(j=1,2,\ldots,8)$, and diagonal matrices $D_1=\mathrm{diag}\{d_{11},d_{12},\ldots,d_{1m}\}>0$, $D_2=\mathrm{diag}\{d_{21},d_{22},\ldots,d_{2n}\}>0$ such that the following LMI holds:
$$\begin{bmatrix}
\Omega & h_ML & (h_M-h_m)M & \tau_MN & (\tau_M-\tau_m)T\\
* & -h_MZ_1 & 0 & 0 & 0\\
* & * & -(h_M-h_m)Z_1 & 0 & 0\\
* & * & * & -\tau_MZ_2 & 0\\
* & * & * & * & -(\tau_M-\tau_m)Z_2
\end{bmatrix}<0,\tag{3.1}$$
where $\Omega=(\omega_{ij})_{8\times8}$ is symmetric with
$$\begin{aligned}
\omega_{11}&=rA^TP_1A+r^{h_M}h_M(A-I)^TZ_1(A-I)-P_1+(h_M-h_m+1)Q_1+R_1+L_1+L_1^T,\\
\omega_{12}&=L_2^T+T_1-N_1,\quad \omega_{13}=L_3^T-T_1,\quad \omega_{14}=rA^TP_1W+r^{h_M}h_M(A-I)^TZ_1W+L_4^T,\\
\omega_{15}&=L_5^T+N_1,\quad \omega_{16}=-L_1+L_6^T+M_1,\quad \omega_{17}=-M_1+L_7^T,\quad \omega_{18}=L_8^T,\\
\omega_{22}&=-r^{-\tau_M}Q_2+D_1+T_2+T_2^T-N_2-N_2^T,\quad \omega_{23}=-T_2+T_3^T-N_3^T,\quad \omega_{24}=T_4^T-N_4^T,\\
\omega_{25}&=T_5^T+N_2-N_5^T,\quad \omega_{26}=T_6^T+M_2-L_2-N_6^T,\quad \omega_{27}=-M_2+T_7^T-N_7^T,\quad \omega_{28}=T_8^T-N_8^T,\\
\omega_{33}&=-r^{-\tau_M}R_2-T_3-T_3^T,\quad \omega_{34}=-T_4^T,\quad \omega_{35}=N_3-T_5^T,\quad \omega_{36}=M_3-L_3-T_6^T,\\
\omega_{37}&=-M_3-T_7^T,\quad \omega_{38}=-T_8^T,\\
\omega_{44}&=rW^TP_1W+r^{h_M}h_MW^TZ_1W-L_1^{-1}D_1L_1^{-1},\quad \omega_{45}=N_4,\quad \omega_{46}=M_4-L_4,\quad \omega_{47}=-M_4,\quad \omega_{48}=0,\\
\omega_{55}&=rB^TP_2B+r^{\tau_M}\tau_M(B-I)^TZ_2(B-I)-P_2+(\tau_M-\tau_m+1)Q_2+R_2+N_5+N_5^T,\\
\omega_{56}&=M_5-L_5+N_6^T,\quad \omega_{57}=-M_5+N_7^T,\quad \omega_{58}=rB^TP_2V+r^{\tau_M}\tau_M(B-I)^TZ_2V+N_8^T,\\
\omega_{66}&=-r^{-h_M}Q_1+D_2-L_6-L_6^T+M_6+M_6^T,\quad \omega_{67}=-L_7^T-M_6+M_7^T,\quad \omega_{68}=-L_8^T+M_8^T,\\
\omega_{77}&=-r^{-h_M}R_1-M_7-M_7^T,\quad \omega_{78}=-M_8^T,\\
\omega_{88}&=rV^TP_2V+r^{\tau_M}\tau_MV^TZ_2V-L_2^{-1}D_2L_2^{-1},
\end{aligned}$$
and
$$L=\big[L_1^T\,L_2^T\,\cdots\,L_8^T\big]^T,\quad M=\big[M_1^T\,M_2^T\,\cdots\,M_8^T\big]^T,\quad N=\big[N_1^T\,N_2^T\,\cdots\,N_8^T\big]^T,\quad T=\big[T_1^T\,T_2^T\,\cdots\,T_8^T\big]^T.\tag{3.2}$$
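In practice, LMI (3.1) is checked by a semidefinite-programming solver such as the Matlab LMI toolbox mentioned above. As a much smaller illustration of the kind of certificate involved, the sketch below verifies only a necessary condition for feasibility: the delay-free linear parts $A$ and $B$ must be Schur stable, i.e., $A^TPA-P<0$ must admit some $P>0$. The matrices are illustrative assumptions, and this is not a substitute for solving the full LMI (3.1):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def schur_certificate(M):
    """Return P > 0 with M^T P M - P = -I, which exists iff M is Schur stable."""
    P = solve_discrete_lyapunov(M.T, np.eye(M.shape[0]))
    residual = M.T @ P @ M - P + np.eye(M.shape[0])
    assert np.allclose(residual, 0.0, atol=1e-8)  # P solves the Lyapunov equation
    return P

A = np.diag([0.8, 0.9])   # illustrative a_i in (0, 1)
B = np.diag([0.5, 0.4])   # illustrative b_j in (0, 1)
for name, M in (("A", A), ("B", B)):
    P = schur_certificate(M)
    assert np.all(np.linalg.eigvalsh(P) > 0)  # P > 0: Schur stability certified
    print(name, "is Schur stable")
```

For a diagonal matrix with entries $a_i$, this yields $P=\mathrm{diag}\{1/(1-a_i^2)\}$, positive exactly when $|a_i|<1$, consistent with the assumption $a_i,b_j\in(0,1)$ in Section 2.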

Proof. Choose the following Lyapunov-Krasovskii functional for system (2.7):
$$V(k)=V_1(k)+V_2(k)+V_3(k)+V_4(k)+V_5(k),\tag{3.3}$$
where
$$\begin{aligned}
V_1(k)&=r^kx^T(k)P_1x(k)+r^ky^T(k)P_2y(k),\\
V_2(k)&=\sum_{l=k-h(k)}^{k-1}r^lx^T(l)Q_1x(l)+\sum_{l=k-\tau(k)}^{k-1}r^ly^T(l)Q_2y(l),\\
V_3(k)&=\sum_{\theta=-h_M+1}^{-h_m}\sum_{l=k+\theta}^{k-1}r^lx^T(l)Q_1x(l)+\sum_{\theta=-\tau_M+1}^{-\tau_m}\sum_{l=k+\theta}^{k-1}r^ly^T(l)Q_2y(l),\\
V_4(k)&=\sum_{l=k-h_M}^{k-1}r^lx^T(l)R_1x(l)+\sum_{l=k-\tau_M}^{k-1}r^ly^T(l)R_2y(l),\\
V_5(k)&=r^{h_M}\sum_{\theta=-h_M}^{-1}\sum_{l=k+\theta}^{k-1}r^l\eta_1^T(l)Z_1\eta_1(l)+r^{\tau_M}\sum_{\theta=-\tau_M}^{-1}\sum_{l=k+\theta}^{k-1}r^l\eta_2^T(l)Z_2\eta_2(l),
\end{aligned}\tag{3.4}$$
and $\eta_1(l)=x(l+1)-x(l)$, $\eta_2(l)=y(l+1)-y(l)$.
Calculating the difference of $V(k)$ along the trajectories of system (2.7), we obtain
$$\begin{aligned}
\Delta V_1(k)={}&r^{k+1}\big[Ax(k)+Wf(y(k-\tau(k)))\big]^TP_1\big[Ax(k)+Wf(y(k-\tau(k)))\big]\\
&+r^{k+1}\big[By(k)+Vg(x(k-h(k)))\big]^TP_2\big[By(k)+Vg(x(k-h(k)))\big]\\
&-r^kx^T(k)P_1x(k)-r^ky^T(k)P_2y(k)\\
={}&r^k\big\{x^T(k)\big(rA^TP_1A-P_1\big)x(k)+2rx^T(k)A^TP_1Wf(y(k-\tau(k)))\\
&+rf^T(y(k-\tau(k)))W^TP_1Wf(y(k-\tau(k)))+y^T(k)\big(rB^TP_2B-P_2\big)y(k)\\
&+2ry^T(k)B^TP_2Vg(x(k-h(k)))+rg^T(x(k-h(k)))V^TP_2Vg(x(k-h(k)))\big\},
\end{aligned}\tag{3.5}$$
$$\begin{aligned}
\Delta V_2(k)\le{}&r^kx^T(k)Q_1x(k)-r^{k-h_M}x^T(k-h(k))Q_1x(k-h(k))\\
&+r^ky^T(k)Q_2y(k)-r^{k-\tau_M}y^T(k-\tau(k))Q_2y(k-\tau(k))\\
&+\sum_{l=k+1-h_M}^{k-h_m}r^lx^T(l)Q_1x(l)+\sum_{l=k+1-\tau_M}^{k-\tau_m}r^ly^T(l)Q_2y(l),
\end{aligned}\tag{3.6}$$
$$\begin{aligned}
\Delta V_3(k)={}&(h_M-h_m)r^kx^T(k)Q_1x(k)+(\tau_M-\tau_m)r^ky^T(k)Q_2y(k)\\
&-\sum_{l=k+1-h_M}^{k-h_m}r^lx^T(l)Q_1x(l)-\sum_{l=k+1-\tau_M}^{k-\tau_m}r^ly^T(l)Q_2y(l),
\end{aligned}\tag{3.7}$$
$$\begin{aligned}
\Delta V_4(k)={}&r^kx^T(k)R_1x(k)-r^{k-h_M}x^T(k-h_M)R_1x(k-h_M)\\
&+r^ky^T(k)R_2y(k)-r^{k-\tau_M}y^T(k-\tau_M)R_2y(k-\tau_M),
\end{aligned}\tag{3.8}$$
$$\begin{aligned}
\Delta V_5(k)\le{}&r^kr^{h_M}h_M\big[(A-I)x(k)+Wf(y(k-\tau(k)))\big]^TZ_1\big[(A-I)x(k)+Wf(y(k-\tau(k)))\big]\\
&+r^kr^{\tau_M}\tau_M\big[(B-I)y(k)+Vg(x(k-h(k)))\big]^TZ_2\big[(B-I)y(k)+Vg(x(k-h(k)))\big]\\
&-r^k\sum_{l=k-h(k)}^{k-1}\eta_1^T(l)Z_1\eta_1(l)-r^k\sum_{l=k-h_M}^{k-h(k)-1}\eta_1^T(l)Z_1\eta_1(l)\\
&-r^k\sum_{l=k-\tau(k)}^{k-1}\eta_2^T(l)Z_2\eta_2(l)-r^k\sum_{l=k-\tau_M}^{k-\tau(k)-1}\eta_2^T(l)Z_2\eta_2(l).
\end{aligned}\tag{3.9}$$
By Lemma 2.2, we have
$$\begin{aligned}
-\sum_{l=k-h(k)}^{k-1}\eta_1^T(l)Z_1\eta_1(l)&\le2\big[x^T(k)-x^T(k-h(k))\big]L^T\xi(k)+h(k)\,\xi^T(k)LZ_1^{-1}L^T\xi(k)\\
&\le\xi^T(k)\big(\Psi_1L^T+L\Psi_1^T\big)\xi(k)+h_M\,\xi^T(k)LZ_1^{-1}L^T\xi(k),
\end{aligned}\tag{3.10}$$
$$\begin{aligned}
-\sum_{l=k-h_M}^{k-h(k)-1}\eta_1^T(l)Z_1\eta_1(l)&\le2\big[x^T(k-h(k))-x^T(k-h_M)\big]M^T\xi(k)+\big(h_M-h(k)\big)\xi^T(k)MZ_1^{-1}M^T\xi(k)\\
&\le\xi^T(k)\big(\Psi_2M^T+M\Psi_2^T\big)\xi(k)+(h_M-h_m)\,\xi^T(k)MZ_1^{-1}M^T\xi(k),
\end{aligned}\tag{3.11}$$
$$\begin{aligned}
-\sum_{l=k-\tau(k)}^{k-1}\eta_2^T(l)Z_2\eta_2(l)&\le2\big[y^T(k)-y^T(k-\tau(k))\big]N^T\xi(k)+\tau(k)\,\xi^T(k)NZ_2^{-1}N^T\xi(k)\\
&\le\xi^T(k)\big(\Psi_3N^T+N\Psi_3^T\big)\xi(k)+\tau_M\,\xi^T(k)NZ_2^{-1}N^T\xi(k),
\end{aligned}\tag{3.12}$$
$$\begin{aligned}
-\sum_{l=k-\tau_M}^{k-\tau(k)-1}\eta_2^T(l)Z_2\eta_2(l)&\le2\big[y^T(k-\tau(k))-y^T(k-\tau_M)\big]T^T\xi(k)+\big(\tau_M-\tau(k)\big)\xi^T(k)TZ_2^{-1}T^T\xi(k)\\
&\le\xi^T(k)\big(\Psi_4T^T+T\Psi_4^T\big)\xi(k)+(\tau_M-\tau_m)\,\xi^T(k)TZ_2^{-1}T^T\xi(k),
\end{aligned}\tag{3.13}$$
where $L,M,N,T$ are as in (3.2) and
$$\Psi_1=\big[I\;0\;0\;0\;0\;{-I}\;0\;0\big]^T,\quad\Psi_2=\big[0\;0\;0\;0\;0\;I\;{-I}\;0\big]^T,\quad\Psi_3=\big[0\;{-I}\;0\;0\;I\;0\;0\;0\big]^T,\quad\Psi_4=\big[0\;I\;{-I}\;0\;0\;0\;0\;0\big]^T,\tag{3.14}$$
and
$$\xi(k)=\big[x^T(k)\;\;y^T(k-\tau(k))\;\;y^T(k-\tau_M)\;\;f^T(y(k-\tau(k)))\;\;y^T(k)\;\;x^T(k-h(k))\;\;x^T(k-h_M)\;\;g^T(x(k-h(k)))\big]^T.$$
By inequalities (2.5), the following inequalities hold for any positive diagonal matrices $D_1=\mathrm{diag}\{d_{11},\ldots,d_{1m}\}$ and $D_2=\mathrm{diag}\{d_{21},\ldots,d_{2n}\}$:
$$r^k\big[y^T(k-\tau(k))D_1y(k-\tau(k))-f^T(y(k-\tau(k)))L_1^{-1}D_1L_1^{-1}f(y(k-\tau(k)))\big]\ge0,$$
$$r^k\big[x^T(k-h(k))D_2x(k-h(k))-g^T(x(k-h(k)))L_2^{-1}D_2L_2^{-1}g(x(k-h(k)))\big]\ge0,\tag{3.15}$$
where $L_1=\mathrm{diag}\{l_{11},l_{12},\ldots,l_{1m}\}$, $L_2=\mathrm{diag}\{l_{21},l_{22},\ldots,l_{2n}\}$.
Substituting (3.10)–(3.13) into (3.9) and adding the left-hand sides of (3.15), we have
$$\begin{aligned}
\Delta V(k)&\le\Delta V_1(k)+\Delta V_2(k)+\Delta V_3(k)+\Delta V_4(k)+\Delta V_5(k)\\
&\quad+r^k\big[y^T(k-\tau(k))D_1y(k-\tau(k))-f^T(y(k-\tau(k)))L_1^{-1}D_1L_1^{-1}f(y(k-\tau(k)))\big]\\
&\quad+r^k\big[x^T(k-h(k))D_2x(k-h(k))-g^T(x(k-h(k)))L_2^{-1}D_2L_2^{-1}g(x(k-h(k)))\big]\\
&\le r^k\xi^T(k)\big[\Omega+h_MLZ_1^{-1}L^T+(h_M-h_m)MZ_1^{-1}M^T+\tau_MNZ_2^{-1}N^T+(\tau_M-\tau_m)TZ_2^{-1}T^T\big]\xi(k),
\end{aligned}\tag{3.16}$$
where $\Omega,L,M,N$, and $T$ are defined in Theorem 3.1. Thus, if LMI (3.1) holds, it can be concluded by the Schur complement that $\Delta V(k)<0$, which implies $V(k)\le V(0)$. Note that
$$V_1(0)=x^T(0)P_1x(0)+y^T(0)P_2y(0)\le\lambda_M(P_1)\sup_{s\in N[-h_M,0]}\|\phi(s)\|^2+\lambda_M(P_2)\sup_{s\in N[-\tau_M,0]}\|\psi(s)\|^2,\tag{3.17}$$
$$V_2(0)=\sum_{l=-h(0)}^{-1}r^lx^T(l)Q_1x(l)+\sum_{l=-\tau(0)}^{-1}r^ly^T(l)Q_2y(l)\le\lambda_M(Q_1)\frac{1-r^{-h_M}}{r-1}\sup_{s\in N[-h_M,0]}\|\phi(s)\|^2+\lambda_M(Q_2)\frac{1-r^{-\tau_M}}{r-1}\sup_{s\in N[-\tau_M,0]}\|\psi(s)\|^2,\tag{3.18}$$
$$V_3(0)\le(h_M-h_m)\lambda_M(Q_1)\frac{1-r^{-h_M}}{r-1}\sup_{s\in N[-h_M,0]}\|\phi(s)\|^2+(\tau_M-\tau_m)\lambda_M(Q_2)\frac{1-r^{-\tau_M}}{r-1}\sup_{s\in N[-\tau_M,0]}\|\psi(s)\|^2,\tag{3.19}$$
$$V_4(0)\le\lambda_M(R_1)\frac{1-r^{-h_M}}{r-1}\sup_{s\in N[-h_M,0]}\|\phi(s)\|^2+\lambda_M(R_2)\frac{1-r^{-\tau_M}}{r-1}\sup_{s\in N[-\tau_M,0]}\|\psi(s)\|^2,\tag{3.20}$$
$$V_5(0)\le r^{h_M}h_M\lambda_M(Z_1)\frac{1-r^{-h_M}}{r-1}\sup_{s\in N[-h_M,-1]}\|\eta_1(s)\|^2+r^{\tau_M}\tau_M\lambda_M(Z_2)\frac{1-r^{-\tau_M}}{r-1}\sup_{s\in N[-\tau_M,-1]}\|\eta_2(s)\|^2.\tag{3.21}$$
Since $\eta_1(s)=x(s+1)-x(s)$ and $\eta_2(s)=y(s+1)-y(s)$ on the initial interval,
$$\sup_{s\in N[-h_M,-1]}\|\eta_1(s)\|^2\le4\sup_{s\in N[-h_M,0]}\|\phi(s)\|^2,\tag{3.22}$$
$$\sup_{s\in N[-\tau_M,-1]}\|\eta_2(s)\|^2\le4\sup_{s\in N[-\tau_M,0]}\|\psi(s)\|^2.\tag{3.23}$$
Substituting (3.22) and (3.23) into (3.21), and combining (3.17)–(3.21), we have
$$V(0)=\sum_{i=1}^{5}V_i(0)\le\rho_1\sup_{s\in N[-h_M,0]}\|\phi(s)\|^2+\rho_2\sup_{s\in N[-\tau_M,0]}\|\psi(s)\|^2,\tag{3.24}$$
where
$$\rho_1=\lambda_M(P_1)+\big[(h_M-h_m+1)\lambda_M(Q_1)+\lambda_M(R_1)+4r^{h_M}h_M\lambda_M(Z_1)\big]\frac{1-r^{-h_M}}{r-1},$$
$$\rho_2=\lambda_M(P_2)+\big[(\tau_M-\tau_m+1)\lambda_M(Q_2)+\lambda_M(R_2)+4r^{\tau_M}\tau_M\lambda_M(Z_2)\big]\frac{1-r^{-\tau_M}}{r-1}.\tag{3.25}$$
On the other hand,
$$V(k)\ge r^kx^T(k)P_1x(k)+r^ky^T(k)P_2y(k)\ge r^k\big\{\lambda_m(P_1)\|x(k)\|^2+\lambda_m(P_2)\|y(k)\|^2\big\}.\tag{3.26}$$
Therefore, we obtain
$$\|x(k)\|^2+\|y(k)\|^2\le\frac{\alpha}{\beta}\Big(\sup_{s\in N[-h_M,0]}\|\phi(s)\|^2+\sup_{s\in N[-\tau_M,0]}\|\psi(s)\|^2\Big)r^{-k},\tag{3.27}$$
where $\alpha=\max\{\rho_1,\rho_2\}$, $\beta=\min\{\lambda_m(P_1),\lambda_m(P_2)\}$. Obviously $\alpha/\beta>1$; by Definition 2.1, system (2.7), and hence system (2.1), is globally exponentially stable.

Remark 3.2. Based on the linear matrix inequality (LMI) technique and Lyapunov stability theory, the exponential stability of the discrete-time delayed BAM neural network (2.1) was investigated in [26–28]. However, it should be noted that the results in [26] rely on the unreasonably severe constraints $\tau(k)-1<\tau(k+1)<\tau(k)+1$ and $h(k)-1<h(k+1)<h(k)+1$ on the delay functions, and the results in [27, 28] are based on rather simple Lyapunov functionals. In this paper, in order to obtain a less conservative stability criterion, the novel Lyapunov functional $V(k)$ is employed, which contains the additional terms $V_4(k)$ and $V_5(k)$ and is more general than the traditional ones in [26–28]. Moreover, some slack matrices, which bring much flexibility in solving the LMI, are introduced.

Remark 3.3. By setting 𝑟=1 in Theorem 3.1, we can obtain the global asymptotic stability criterion of discrete-time BAM neural network (2.7).

Remark 3.4. Theorem 3.1 depends on both the delay upper bounds $\tau_M,h_M$ and the delay intervals $\tau_M-\tau_m$ and $h_M-h_m$.

Remark 3.5. The proposed method in this paper can be generalized to more complex neural networks, such as delayed discrete-time BAM neural networks with parameter uncertainties [30, 31] and stochastic perturbations [30–32], delayed interval discrete-time BAM neural networks [28], and discrete-time analogues of BAM neural networks with mixed delays [32, 33].

4. Numerical Examples

Example 4.1. Consider the delayed discrete-time BAM neural network (2.7) with the following parameters:
$$A=\begin{bmatrix}0.8&0\\0&0.9\end{bmatrix},\quad B=\begin{bmatrix}0.5&0\\0&0.4\end{bmatrix},\quad W=\begin{bmatrix}0.1&0.01\\0.2&0.1\end{bmatrix},\quad V=\begin{bmatrix}0.15&0\\0.2&0.1\end{bmatrix}.\tag{4.1}$$
The activation functions satisfy Assumptions (A1) and (A2) with
$$L_1=\begin{bmatrix}1&0\\0&1\end{bmatrix},\quad L_2=\begin{bmatrix}1&0\\0&1\end{bmatrix}.\tag{4.2}$$

For this example, applying Theorem 3.1 with $r=1$ yields the maximum allowable delay bounds guaranteeing the asymptotic stability of the system. Table 1 compares these bounds with those obtained by [27, Theorem 1] and [28, Corollary 1]. It is clear that our result is less conservative than those in [27, 28].

Table 1: Maximum allowable delay bounds for Example 4.1.

Example 4.2 (see [27]). Consider the delayed discrete-time BAM neural network (2.7) with the following parameters:
$$A=\begin{bmatrix}\tfrac{1}{5}&0\\0&\tfrac{1}{5}\end{bmatrix},\quad B=\begin{bmatrix}\tfrac{1}{10}&0\\0&\tfrac{1}{10}\end{bmatrix},\quad W=\begin{bmatrix}\tfrac{1}{8}&\tfrac{1}{8}\\0&1\end{bmatrix},\quad V=\begin{bmatrix}0&\tfrac{1}{2}\\0&0.20\end{bmatrix}.\tag{4.3}$$
The activation functions satisfy Assumptions (A1) and (A2) with
$$L_1=\begin{bmatrix}1&0\\0&1\end{bmatrix},\quad L_2=\begin{bmatrix}1&0\\0&1\end{bmatrix}.\tag{4.4}$$
Now, we assume $\tau(k)=h(k)=4-2\sin((\pi/2)k)$ and $r=1.55$. It is obvious that $2\le\tau(k)=h(k)\le6$, that is, $h_m=\tau_m=2$ and $h_M=\tau_M=6$. In this case, the exponential stability conditions proposed in [27, 28] are not satisfied. However, it can be concluded from Theorem 3.1 that this system is globally exponentially stable. If instead we set $h_m=\tau_m=\tau_M=2$ and $r=2.7$, the maximum allowable delay bound $h_M$ guaranteeing global exponential stability obtained by [27, Theorem 1] and by [28, Corollary 1] is $h_M=3$ in both cases, whereas Theorem 3.1 yields $h_M=4$. For this example, it is seen that our result improves the existing results in [27, 28].
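The delay in this example takes only integer values: $\tau(k)=h(k)=4-2\sin((\pi/2)k)$ cycles through 4, 2, 4, 6. A direct simulation of (2.7) with this delay pattern is sketched below; the tanh activations and the weight values are illustrative assumptions (the matrices follow an uncertain reading of (4.3) and may differ in sign or scale from the original data), so the run demonstrates the delay mechanism rather than reproducing the paper's exact trajectory:

```python
import numpy as np

# Simulate (2.7) with the periodic integer delay tau(k) = h(k) = 4 - 2 sin(pi k / 2),
# which cycles through 4, 2, 4, 6. Matrices and tanh activations are assumptions.
A = np.diag([0.2, 0.2])
B = np.diag([0.1, 0.1])
W = np.array([[0.125, 0.125], [0.0, 1.0]])
V = np.array([[0.0, 0.5], [0.0, 0.2]])
f = g = np.tanh

def delay(k):
    # round() guards against floating-point noise in sin at multiples of pi
    return 4 - 2 * int(round(np.sin(np.pi * k / 2)))   # values in {2, 4, 6}

d = 6                                                  # tau_M = h_M = 6
rng = np.random.default_rng(2)
x = list(rng.uniform(-1.0, 1.0, (d + 1, 2)))           # history phi on N[-6, 0]
y = list(rng.uniform(-1.0, 1.0, (d + 1, 2)))           # history psi on N[-6, 0]
for k in range(d, d + 200):
    tau = delay(k - d)                                 # k - d plays the role of time 0, 1, ...
    x.append(A @ x[k] + W @ f(y[k - tau]))
    y.append(B @ y[k] + V @ g(x[k - tau]))             # tau(k) = h(k) in this example
print(np.linalg.norm(x[-1]) + np.linalg.norm(y[-1]))   # decays toward 0
```

With these strongly contractive diagonal parts the trajectory decays despite the 2-to-6-step delay swings, which is the qualitative behavior the exponential stability certificate of Theorem 3.1 guarantees.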

5. Conclusion

In this paper, we have considered the delay-dependent exponential stability of discrete-time BAM neural networks with time-varying delays. Based on the Lyapunov stability theory and the linear matrix inequality technique, a novel delay-dependent stability criterion has been obtained. Two numerical examples show that the proposed stability condition is less conservative than previously established ones.

Acknowledgments

The authors would like to thank the Associate Editor and the anonymous reviewers for their constructive comments and suggestions to improve the quality of the paper. This work was supported by the National Natural Science Foundation of China (no. 60643003) and the Natural Science Foundation of Henan Education Department (no. 2007120005).

References

1. B. Kosko, “Adaptive bidirectional associative memories,” Applied Optics, vol. 26, no. 23, pp. 4947–4960, 1987.
2. B. Kosko, “Bidirectional associative memories,” IEEE Transactions on Systems, Man and Cybernetics, vol. 18, no. 1, pp. 49–60, 1988.
3. J. Cao and L. Wang, “Exponential stability and periodic oscillatory solution in BAM networks with delays,” IEEE Transactions on Neural Networks, vol. 13, no. 2, pp. 457–463, 2002.
4. J. Cao and M. Dong, “Exponential stability of delayed bi-directional associative memory networks,” Applied Mathematics and Computation, vol. 135, no. 1, pp. 105–112, 2003.
5. Y. Li, “Existence and stability of periodic solution for BAM neural networks with distributed delays,” Applied Mathematics and Computation, vol. 159, no. 3, pp. 847–862, 2004.
6. X. Huang, J. Cao, and D.-S. Huang, “LMI-based approach for delay-dependent exponential stability analysis of BAM neural networks,” Chaos, Solitons & Fractals, vol. 24, no. 3, pp. 885–898, 2005.
7. S. Arik, “Global asymptotic stability analysis of bidirectional associative memory neural networks with time delays,” IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 580–586, 2005.
8. M. Jiang, Y. Shen, and X. Liao, “Global stability of periodic solution for bidirectional associative memory neural networks with varying-time delay,” Applied Mathematics and Computation, vol. 182, no. 1, pp. 509–520, 2006.
9. J. H. Park, “A novel criterion for global asymptotic stability of BAM neural networks with time delays,” Chaos, Solitons & Fractals, vol. 29, no. 2, pp. 446–453, 2006.
10. X. Lou and B. Cui, “On the global robust asymptotic stability of BAM neural networks with time-varying delays,” Neurocomputing, vol. 70, no. 1–3, pp. 273–279, 2006.
11. J. H. Park, “Robust stability of bidirectional associative memory neural networks with time delays,” Physics Letters A, vol. 349, no. 6, pp. 494–499, 2006.
12. J. Cao, D. W. C. Ho, and X. Huang, “LMI-based criteria for global robust stability of bidirectional associative memory networks with time delay,” Nonlinear Analysis: Theory, Methods & Applications, vol. 66, no. 7, pp. 1558–1572, 2007.
13. D. W. C. Ho, J. Liang, and J. Lam, “Global exponential stability of impulsive high-order BAM neural networks with time-varying delays,” Neural Networks, vol. 19, no. 10, pp. 1581–1590, 2006.
14. L. Sheng and H. Yang, “Novel global robust exponential stability criterion for uncertain BAM neural networks with time-varying delays,” Chaos, Solitons & Fractals. In press.
15. J. H. Park, S. M. Lee, and O. M. Kwon, “On exponential stability of bidirectional associative memory neural networks with time-varying delays,” Chaos, Solitons & Fractals. In press.
16. Y. Wang, “Global exponential stability analysis of bidirectional associative memory neural networks with time-varying delays,” Nonlinear Analysis: Real World Applications. In press.
17. Y. Liu, Z. Wang, A. Serrano, and X. Liu, “Discrete-time recurrent neural networks with time-varying delays: exponential stability analysis,” Physics Letters A, vol. 362, no. 5-6, pp. 480–488, 2007.
18. S. Hu and J. Wang, “Global stability of a class of discrete-time recurrent neural networks,” IEEE Transactions on Circuits and Systems I, vol. 49, no. 8, pp. 1104–1117, 2002.
19. S. Mohamad and K. Gopalsamy, “Exponential stability of continuous-time and discrete-time cellular neural networks with delays,” Applied Mathematics and Computation, vol. 135, no. 1, pp. 17–38, 2003.
20. W. Xiong and J. Cao, “Global exponential stability of discrete-time Cohen-Grossberg neural networks,” Neurocomputing, vol. 64, pp. 433–446, 2005.
21. L. Wang and Z. Xu, “Sufficient and necessary conditions for global exponential stability of discrete-time recurrent neural networks,” IEEE Transactions on Circuits and Systems I, vol. 53, no. 6, pp. 1373–1380, 2006.
22. W.-H. Chen, X. Lu, and D.-Y. Liang, “Global exponential stability for discrete-time neural networks with variable delays,” Physics Letters A, vol. 358, no. 3, pp. 186–198, 2006.
23. Q. Song and Z. Wang, “A delay-dependent LMI approach to dynamics analysis of discrete-time recurrent neural networks with time-varying delays,” Physics Letters A, vol. 368, no. 1-2, pp. 134–145, 2007.
24. B. Zhang, S. Xu, and Y. Zou, “Improved delay-dependent exponential stability criteria for discrete-time recurrent neural networks with time-varying delays,” Neurocomputing. In press.
25. J. Liang and J. Cao, “Exponential stability of continuous-time and discrete-time bidirectional associative memory networks with delays,” Chaos, Solitons & Fractals, vol. 22, no. 4, pp. 773–785, 2004.
26. J. Liang, J. Cao, and D. W. C. Ho, “Discrete-time bidirectional associative memory neural networks with variable delays,” Physics Letters A, vol. 335, no. 2-3, pp. 226–234, 2005.
27. X.-G. Liu, M.-L. Tang, R. Martin, and X.-B. Liu, “Discrete-time BAM neural networks with variable delays,” Physics Letters A, vol. 367, no. 4-5, pp. 322–330, 2007.
28. M. Gao and B. Cui, “Global robust exponential stability of discrete-time interval BAM neural networks with time-varying delays,” Applied Mathematical Modelling. In press.
29. M. S. Mahmoud, Resilient Control of Uncertain Dynamical Systems, vol. 303 of Lecture Notes in Control and Information Sciences, Springer, Berlin, Germany, 2004.
30. Z. Wang, S. Lauria, J. Fang, and X. Liu, “Exponential stability of uncertain stochastic neural networks with mixed time-delays,” Chaos, Solitons & Fractals, vol. 32, no. 1, pp. 62–72, 2007.
31. Z. Wang, H. Shu, J. Fang, and X. Liu, “Robust stability for stochastic Hopfield neural networks with time delays,” Nonlinear Analysis: Real World Applications, vol. 7, no. 5, pp. 1119–1128, 2006.
32. Z. Wang, Y. Liu, M. Li, and X. Liu, “Stability analysis for stochastic Cohen-Grossberg neural networks with mixed time delays,” IEEE Transactions on Neural Networks, vol. 17, no. 3, pp. 814–820, 2006.
33. Z. Wang, Y. Liu, and X. Liu, “On global asymptotic stability of neural networks with discrete and distributed delays,” Physics Letters A, vol. 345, no. 4–6, pp. 299–308, 2005.