Abstract

This paper deals with the stability analysis problem for a class of discrete-time stochastic BAM neural networks with discrete and distributed time-varying delays. By constructing a suitable Lyapunov-Krasovskii functional and employing M-matrix theory, we find sufficient conditions ensuring the global exponential stability of the equilibrium point of stochastic BAM neural networks with time-varying delays. The conditions obtained here are expressed in terms of LMIs, whose feasibility can be easily checked by using the MATLAB LMI Control Toolbox. A numerical example is presented to show the effectiveness of the derived LMI-based stability conditions.

1. Introduction

Recently, the study of bidirectional associative memory (BAM) neural networks has attracted the attention of many researchers due to their applications in many fields such as pattern recognition, automatic control, associative memory, signal processing, and optimization; see, for example, [1–9]. The BAM neural network model, proposed by Kosko [10, 11], is a two-layer nonlinear feedback network in which the neurons in one layer are fully interconnected to the neurons in the other layer, while there are no interconnections among neurons within the same layer.

Furthermore, due to the finite switching speed of neuron amplifiers and the finite speed of signal propagation, time delays are unavoidable in the implementation of neural networks [12–14]. According to the way they occur, time delays can be classified into two types: discrete and distributed delays. Discrete time delays are relatively easier to identify in practice, and hence the stability analysis of BAM neural networks with discrete delays has been an attractive subject of research in the past few years; see [15, 16]. On the other hand, owing to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, a neural network usually has a spatial nature. Therefore, it is necessary to introduce continuously distributed delays over a certain duration of time; see [17, 18].

Moreover, in implementations of neural networks, stochastic disturbances are inevitable owing to thermal noise in electronic devices. Practically, the stochastic phenomenon usually appears in the electrical circuit design of neural networks, and stochastic effects can destabilize a neural system. Therefore, it is significant and important to consider the effect of stochastic perturbations on the stability of neural networks with delays. It is noted that most BAM neural networks have been assumed to act in a continuous-time manner; however, only a few works on discrete-time BAM networks have appeared in the literature; see [6, 19–24] and the references cited therein. Therefore, there is a crucial need to study the dynamics of discrete-time BAM neural networks, which is all the more significant from a practical point of view. In [19], Gao and Cui discussed the global robust exponential stability of discrete-time interval BAM neural networks with time-varying delays, and in [24], the authors investigated the global exponential stability of a discrete-time BAM neural network with time-varying delay. In the aforementioned references, the stability problem for BAM neural networks is considered only with discrete delays; distributed delays have not been taken into account and remain challenging. Our main aim in this work is therefore to make a first attempt to close this gap.

Motivated by the above points, in this paper we study the exponential stability problem for a new class of discrete-time stochastic BAM neural networks with both discrete and distributed delays. The existence of the equilibrium point is proved under mild conditions on the activation functions. By constructing an appropriate Lyapunov-Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish sufficient conditions for the discrete-time BAM neural networks to be globally exponentially stable in the mean square. Here, we note that the LMIs can be easily solved by using the Matlab LMI toolbox, and no tuning of parameters is required. Finally, a numerical example is presented to show the usefulness of the derived LMI-based stability conditions.

Notations. Throughout this paper, $\mathbb{R}^n$ and $\mathbb{R}^{n\times m}$ denote, respectively, the $n$-dimensional Euclidean space and the set of all $n\times m$ real matrices. $I$ denotes the identity matrix of appropriate dimensions, and $\operatorname{diag}\{\cdot\}$ denotes a diagonal matrix. For real symmetric matrices $X$ and $Y$, the notation $X\ge Y$ (resp., $X>Y$) means that the matrix $X-Y$ is positive semidefinite (resp., positive definite). $\mathbb{N}=\{1,2,\dots,n\}$, and $\|\cdot\|$ stands for the Euclidean norm in $\mathbb{R}^n$. $\lambda_{\max}(X)$ (resp., $\lambda_{\min}(X)$) stands for the maximum (resp., minimum) eigenvalue of the matrix $X$. The symbol $*$ within a matrix represents the symmetric term of the matrix.

2. Problem Description and Preliminaries

Consider the following discrete-time stochastic BAM neural network with both discrete and distributed delays:

$$
\begin{aligned}
x_i(k+1) &= a_i x_i(k) + \sum_{j=1}^{n} c_{ji} f_j\bigl(y_j(k)\bigr) + \sum_{j=1}^{n} w_{ji}\, g_j\bigl(y_j(k-\tau_{ji}(k))\bigr) + \sum_{j=1}^{n} m_{ji} \sum_{\ell=1}^{+\infty} \mu_\ell\, h_j\bigl(y_j(k-\ell)\bigr) \\
&\quad + I_i + \delta_i\bigl(x_i(k),\, y_j(k-\tau_{ji}(k)),\, k\bigr)\, w_1(k), \qquad i \in \mathbb{N}, \\
y_j(k+1) &= b_j y_j(k) + \sum_{i=1}^{n} d_{ij} \hat f_i\bigl(x_i(k)\bigr) + \sum_{i=1}^{n} v_{ij}\, \hat g_i\bigl(x_i(k-\sigma_{ij}(k))\bigr) + \sum_{i=1}^{n} n_{ij} \sum_{\mathcal N=1}^{+\infty} \rho_{\mathcal N}\, \hat h_i\bigl(x_i(k-\mathcal N)\bigr) \\
&\quad + J_j + \chi_j\bigl(y_j(k),\, x_i(k-\sigma_{ij}(k)),\, k\bigr)\, w_2(k), \qquad j \in \mathbb{N},
\end{aligned} \tag{2.1}
$$

or, in the equivalent vector form,

$$
\begin{aligned}
x(k+1) &= A x(k) + C f(y(k)) + W g\bigl(y(k-\tau(k))\bigr) + M \sum_{\ell=1}^{+\infty} \mu_\ell\, h\bigl(y(k-\ell)\bigr) + I + \delta\bigl(x(k),\, y(k-\tau(k)),\, k\bigr)\, w_1(k), \\
y(k+1) &= B y(k) + D \hat f(x(k)) + V \hat g\bigl(x(k-\sigma(k))\bigr) + N \sum_{\mathcal N=1}^{+\infty} \rho_{\mathcal N}\, \hat h\bigl(x(k-\mathcal N)\bigr) + J + \chi\bigl(y(k),\, x(k-\sigma(k)),\, k\bigr)\, w_2(k),
\end{aligned} \tag{2.2}
$$

for $k=1,2,\dots$, where $x(k)$ and $y(k)$ are the neural state vectors; $A=\operatorname{diag}\{a_1,a_2,\dots,a_n\}$ and $B=\operatorname{diag}\{b_1,b_2,\dots,b_n\}$ are the state feedback coefficient matrices; $C=[c_{ij}]_{n\times n}$, $D=[d_{ij}]_{n\times n}$, $W=[w_{ij}]_{n\times n}$, $V=[v_{ij}]_{n\times n}$, $M=[m_{ij}]_{n\times n}$, and $N=[n_{ij}]_{n\times n}$ are, respectively, the connection weight matrices, the discretely delayed connection weight matrices, and the distributively delayed connection weight matrices; $\tau(k)$ and $\sigma(k)$ denote the discrete time-varying delays satisfying

$$
\tau_m \le \tau(k) \le \tau_M, \qquad \sigma_m \le \sigma(k) \le \sigma_M, \tag{2.3}
$$

where $\tau_m$, $\tau_M$, $\sigma_m$, and $\sigma_M$ are known positive integers, and the indices $\ell$ and $\mathcal N$ range over the distributed time-varying delays. Furthermore,

$$
\begin{aligned}
f(y(k)) &= \bigl[f_1(y_1(k)),\, f_2(y_2(k)),\, \dots,\, f_n(y_n(k))\bigr]^T, & \hat f(x(k)) &= \bigl[\hat f_1(x_1(k)),\, \hat f_2(x_2(k)),\, \dots,\, \hat f_n(x_n(k))\bigr]^T, \\
g(y(k)) &= \bigl[g_1(y_1(k)),\, g_2(y_2(k)),\, \dots,\, g_n(y_n(k))\bigr]^T, & \hat g(x(k)) &= \bigl[\hat g_1(x_1(k)),\, \hat g_2(x_2(k)),\, \dots,\, \hat g_n(x_n(k))\bigr]^T, \\
h(y(k)) &= \bigl[h_1(y_1(k)),\, h_2(y_2(k)),\, \dots,\, h_n(y_n(k))\bigr]^T, & \hat h(x(k)) &= \bigl[\hat h_1(x_1(k)),\, \hat h_2(x_2(k)),\, \dots,\, \hat h_n(x_n(k))\bigr]^T
\end{aligned} \tag{2.4}
$$

denote the neuron activation functions. The constant vectors $J=[J_1,J_2,\dots,J_n]^T$ and $I=[I_1,I_2,\dots,I_n]^T$ are the external inputs from outside the system; $\mu_\ell$ $(\ell=1,2,\dots)$ and $\rho_{\mathcal N}$ $(\mathcal N=1,2,\dots)$ are scalar constants; $w_1(k)$ and $w_2(k)$ are scalar Wiener processes (Brownian motions) on the probability space $(\Omega,\mathcal F,\mathfrak P)$ with

$$
\begin{aligned}
\mathbb E\bigl[w_1(k)\bigr]=0, \quad \mathbb E\bigl[w_1^2(k)\bigr]=1, \quad \mathbb E\bigl[w_1(i)\,w_1(j)\bigr]=0 \ (i\ne j), \\
\mathbb E\bigl[w_2(k)\bigr]=0, \quad \mathbb E\bigl[w_2^2(k)\bigr]=1, \quad \mathbb E\bigl[w_2(i)\,w_2(j)\bigr]=0 \ (i\ne j),
\end{aligned} \tag{2.5}
$$

with $\mathbb E[\cdot]$ being the mathematical expectation operator; $\delta:\mathbb R^n\times\mathbb R^n\times\mathbb N\to\mathbb R^n$ and $\chi:\mathbb R^n\times\mathbb R^n\times\mathbb N\to\mathbb R^n$ are the nonlinear vector functions representing the disturbance intensities.
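For readers who wish to experiment with the model, the following minimal Python sketch simulates a truncated version of (2.2): the infinite distributed-delay sums are cut off after finitely many terms (justified by the convergence conditions in Assumption 4 below), and every matrix, kernel, delay, and noise intensity in it is an illustrative placeholder rather than data from this paper.

```python
import numpy as np

# Minimal simulation sketch of the stochastic BAM model (2.2). The infinite
# distributed-delay sums are truncated at K_TRUNC terms; all parameter values
# below are illustrative placeholders.
rng = np.random.default_rng(0)
n, K_TRUNC, T = 2, 50, 200

A = np.diag([0.3, 0.4]); B = np.diag([0.2, 0.3])
C = D = W = V = M = N = 0.1 * np.ones((n, n))
mu  = np.exp(-4.0 * np.arange(1, K_TRUNC + 1))  # kernel mu_l (Assumption 4)
rho = np.exp(-4.0 * np.arange(1, K_TRUNC + 1))  # kernel rho_N

f = g = h = np.tanh          # y-layer activations f, g, h
fh = gh = hh = np.tanh       # x-layer activations f-hat, g-hat, h-hat
tau   = lambda k: 3 + int(round(np.sin(k * np.pi / 2)))  # bounded discrete delays
sigma = lambda k: 3 + int(round(np.cos(k * np.pi / 2)))

hist = K_TRUNC + 5           # history long enough for every delayed index
x = 0.1 * rng.standard_normal((T + hist, n))
y = 0.1 * rng.standard_normal((T + hist, n))

for k in range(hist, T + hist - 1):
    dist_y = sum(mu[l - 1] * h(y[k - l]) for l in range(1, K_TRUNC + 1))
    dist_x = sum(rho[l - 1] * hh(x[k - l]) for l in range(1, K_TRUNC + 1))
    # delta and chi: simple state-proportional noise intensities (placeholder)
    x[k + 1] = (A @ x[k] + C @ f(y[k]) + W @ g(y[k - tau(k)])
                + M @ dist_y + 0.1 * x[k] * rng.standard_normal())
    y[k + 1] = (B @ y[k] + D @ fh(x[k]) + V @ gh(x[k - sigma(k)])
                + N @ dist_x + 0.1 * y[k] * rng.standard_normal())

print("final states:", x[-1], y[-1])
```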

In this paper, we make the following assumptions on the neuron activation functions.

Assumption 1. For $j,i\in\{1,2,\dots,n\}$, the neuron activation functions $f_j(\cdot)$, $\hat f_i(\cdot)$, $g_j(\cdot)$, $\hat g_i(\cdot)$, $h_j(\cdot)$, and $\hat h_i(\cdot)$ in (2.2) are continuous as well as bounded on $\mathbb R$.

Assumption 2. For $j,i\in\{1,2,\dots,n\}$, the neuron activation functions in (2.2) satisfy

$$
\begin{aligned}
l_j^- \le \frac{f_j(s_1)-f_j(s_2)}{s_1-s_2} \le l_j^+, \quad
u_i^- \le \frac{\hat f_i(t_1)-\hat f_i(t_2)}{t_1-t_2} \le u_i^+, \quad
m_j^- \le \frac{g_j(s_1)-g_j(s_2)}{s_1-s_2} \le m_j^+, \\
v_i^- \le \frac{\hat g_i(t_1)-\hat g_i(t_2)}{t_1-t_2} \le v_i^+, \quad
n_j^- \le \frac{h_j(s_1)-h_j(s_2)}{s_1-s_2} \le n_j^+, \quad
w_i^- \le \frac{\hat h_i(t_1)-\hat h_i(t_2)}{t_1-t_2} \le w_i^+,
\end{aligned} \tag{2.6}
$$

for all $s_1,s_2,t_1,t_2\in\mathbb R$ with $s_1\ne s_2$ and $t_1\ne t_2$, where $l_j^-$, $l_j^+$, $m_j^-$, $m_j^+$, $n_j^-$, $n_j^+$, $u_i^-$, $u_i^+$, $v_i^-$, $v_i^+$, $w_i^-$, and $w_i^+$ are some constants.
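The sector constants above enter the stability conditions of Section 3 only through diagonal matrices built from the products $l_j^+ l_j^-$ and midpoints $(l_j^+ + l_j^-)/2$ (and likewise for the other five pairs of constants). A small sketch with illustrative values:

```python
import numpy as np

# Sketch: the diagonal bound matrices used later in Theorem 3.1, built from
# one pair of sector constants of Assumption 2. Values are illustrative.
l_minus = np.array([-2.0, -2.0, -1.0])   # l_j^-
l_plus  = np.array([ 2.0,  2.0,  1.0])   # l_j^+

L1 = np.diag(l_plus * l_minus)           # L1 = diag{ l_j^+ l_j^- }
L2 = np.diag((l_plus + l_minus) / 2.0)   # L2 = diag{ (l_j^+ + l_j^-)/2 }
print(L1, L2, sep="\n")
```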

Remark 2.1. Assumption 2 was first introduced by Liu et al. [25]. The constants $l_j^-$, $l_j^+$, $m_j^-$, $m_j^+$, $n_j^-$, $n_j^+$, $u_i^-$, $u_i^+$, $v_i^-$, $v_i^+$, $w_i^-$, and $w_i^+$ in Assumption 2 are allowed to be positive, negative, or zero. Hence the activation functions considered in this paper may be nonmonotonic and are more general than the usual sigmoid and Lipschitz functions. Because these constants give only a coarse quantification of the lower and upper bounds of the activation functions, we adopt such generalized activation functions, which is very helpful for reducing the possible conservatism of the LMI-based technique.
In order to simplify our proof, we shift the equilibrium point $x^*=(x_1^*,x_2^*,\dots,x_n^*)^T$ and $y^*=(y_1^*,y_2^*,\dots,y_n^*)^T$ of system (2.2) to the origin. Let $u(k)=x(k)-x^*$ and $v(k)=y(k)-y^*$; then system (2.2) can be transformed into

$$
\begin{aligned}
u(k+1) &= A u(k) + C f(v(k)) + W g\bigl(v(k-\tau(k))\bigr) + M \sum_{\ell=1}^{+\infty} \mu_\ell\, h\bigl(v(k-\ell)\bigr) + \delta\bigl(u(k),\, v(k-\tau(k)),\, k\bigr)\, w_1(k), \\
v(k+1) &= B v(k) + D \hat f(u(k)) + V \hat g\bigl(u(k-\sigma(k))\bigr) + N \sum_{\mathcal N=1}^{+\infty} \rho_{\mathcal N}\, \hat h\bigl(u(k-\mathcal N)\bigr) + \chi\bigl(v(k),\, u(k-\sigma(k)),\, k\bigr)\, w_2(k),
\end{aligned} \tag{2.7}
$$

where $u(k)=[u_1(k),u_2(k),\dots,u_n(k)]^T$, $v(k)=[v_1(k),v_2(k),\dots,v_n(k)]^T$, and

$$
\begin{aligned}
f(v(k)) &= \bigl[f_1(v_1(k)),\dots,f_n(v_n(k))\bigr]^T = f\bigl(y(k)+y^*\bigr)-f(y^*), & \hat f(u(k)) &= \bigl[\hat f_1(u_1(k)),\dots,\hat f_n(u_n(k))\bigr]^T = \hat f\bigl(x(k)+x^*\bigr)-\hat f(x^*), \\
g(v(k)) &= \bigl[g_1(v_1(k)),\dots,g_n(v_n(k))\bigr]^T = g\bigl(y(k)+y^*\bigr)-g(y^*), & \hat g(u(k)) &= \bigl[\hat g_1(u_1(k)),\dots,\hat g_n(u_n(k))\bigr]^T = \hat g\bigl(x(k)+x^*\bigr)-\hat g(x^*), \\
h(v(k)) &= \bigl[h_1(v_1(k)),\dots,h_n(v_n(k))\bigr]^T = h\bigl(y(k)+y^*\bigr)-h(y^*), & \hat h(u(k)) &= \bigl[\hat h_1(u_1(k)),\dots,\hat h_n(u_n(k))\bigr]^T = \hat h\bigl(x(k)+x^*\bigr)-\hat h(x^*).
\end{aligned} \tag{2.8}
$$

Assumption 3. Obviously, the transformed activation functions $f_j$, $\hat f_i$, $g_j$, $\hat g_i$, $h_j$, and $\hat h_i$ $(i,j\in\mathbb N)$ satisfy the following conditions:

$$
\begin{aligned}
l_j^- \le \frac{f_j(s)}{s} \le l_j^+, \quad
u_i^- \le \frac{\hat f_i(t)}{t} \le u_i^+, \quad
m_j^- \le \frac{g_j(s)}{s} \le m_j^+, \quad
v_i^- \le \frac{\hat g_i(t)}{t} \le v_i^+, \quad
n_j^- \le \frac{h_j(s)}{s} \le n_j^+, \quad
w_i^- \le \frac{\hat h_i(t)}{t} \le w_i^+,
\end{aligned} \tag{2.9}
$$

for all $s,t\in\mathbb R$, $s,t\ne 0$.

Assumption 4. The constants $\mu_\ell,\rho_{\mathcal N}\ge0$ satisfy the following convergence conditions:

$$
\sum_{\ell=1}^{+\infty} \mu_\ell < +\infty, \qquad
\sum_{\ell=1}^{+\infty} \ell\,\mu_\ell < +\infty, \qquad
\sum_{\mathcal N=1}^{+\infty} \rho_{\mathcal N} < +\infty, \qquad
\sum_{\mathcal N=1}^{+\infty} \mathcal N\,\rho_{\mathcal N} < +\infty. \tag{2.10}
$$
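As a quick illustration, for the geometric kernel $\mu_\ell = e^{-4\ell}$ used later in Example 4.1, both series in (2.10) have closed forms; the sketch below (assuming NumPy is available) compares them against truncated numerical sums.

```python
import numpy as np

# Checking Assumption 4 for the geometric kernel mu_l = e^{-4l} of Example
# 4.1: truncated sums versus the closed forms q/(1-q) and q/(1-q)^2.
q = np.exp(-4.0)
l = np.arange(1, 200)
print(np.sum(q**l),     "vs", q / (1 - q))       # sum_l mu_l
print(np.sum(l * q**l), "vs", q / (1 - q)**2)    # sum_l l * mu_l
```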

Remark 2.2. Assumption 4 ensures that the distributed-delay terms $M\sum_{\ell=1}^{+\infty}\mu_\ell\, h(v(k-\ell))$ and $N\sum_{\mathcal N=1}^{+\infty}\rho_{\mathcal N}\, \hat h(u(k-\mathcal N))$ are convergent, which is significant for the subsequent analysis.

Assumption 5. There exist constant matrices $G$ and $K$ such that

$$
\delta^T(x,y,k)\,\delta(x,y,k) \le \|G x\|^2, \qquad
\chi^T(x,y,k)\,\chi(x,y,k) \le \|K y\|^2, \qquad \forall x,y\in\mathbb R^n. \tag{2.11}
$$

The following definition and lemmas will be essential in establishing the exponential stability conditions.

Definition 2.3. The delayed discrete-time stochastic BAM neural network (2.7) is said to be globally exponentially stable if there exist two positive scalars $\nu>0$ and $0<\mathcal G<1$ such that

$$
\|u(k)\| + \|v(k)\| \le \nu\,\mathcal G^{\,k} \Bigl( \sup_{-\sigma_M \le s \le 0} \|u(s)\| + \sup_{-\tau_M \le s \le 0} \|v(s)\| \Bigr). \tag{2.12}
$$

Lemma 2.4. Let $X$ and $Y$ be any $n$-dimensional real vectors, and let $P$ be an $n\times n$ positive semidefinite matrix. Then the following matrix inequality holds:

$$
2 X^T P Y \le X^T P X + Y^T P Y. \tag{2.13}
$$
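Lemma 2.4 is simply the expansion of $(X-Y)^T P (X-Y) \ge 0$. A random numerical sanity check, for the skeptical reader:

```python
import numpy as np

# Random sanity check of Lemma 2.4; the inequality is the expansion of
# (X - Y)^T P (X - Y) >= 0 for positive semidefinite P.
rng = np.random.default_rng(1)
for _ in range(1000):
    S = rng.standard_normal((4, 4))
    P = S @ S.T                              # PSD by construction
    X, Y = rng.standard_normal(4), rng.standard_normal(4)
    assert 2 * X @ P @ Y <= X @ P @ X + Y @ P @ Y + 1e-9
print("Lemma 2.4 held on all sampled instances")
```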

Lemma 2.5. Let $M\in\mathbb R^{n\times n}$ be a positive semidefinite matrix, $x_i\in\mathbb R^n$, and $a_i\ge0$ $(i=1,2,\dots)$. If the series concerned are convergent, then the following inequality holds:

$$
\Bigl( \sum_{i=1}^{+\infty} a_i x_i \Bigr)^{\!T} M \Bigl( \sum_{i=1}^{+\infty} a_i x_i \Bigr) \le \Bigl( \sum_{i=1}^{+\infty} a_i \Bigr) \sum_{i=1}^{+\infty} a_i\, x_i^T M x_i. \tag{2.14}
$$
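Lemma 2.5 can likewise be spot-checked numerically on finitely many terms (the infinite-series case follows by a limiting argument):

```python
import numpy as np

# Finite-sum spot check of Lemma 2.5:
# (sum a_i x_i)^T M (sum a_i x_i) <= (sum a_i) * sum a_i x_i^T M x_i.
rng = np.random.default_rng(2)
n_dim, m_terms = 4, 30
S = rng.standard_normal((n_dim, n_dim))
Mmat = S @ S.T                               # PSD M
a = rng.random(m_terms)                      # coefficients a_i >= 0
x = rng.standard_normal((m_terms, n_dim))
lhs = (a @ x) @ Mmat @ (a @ x)               # a @ x equals sum_i a_i x_i
rhs = a.sum() * sum(a[i] * x[i] @ Mmat @ x[i] for i in range(m_terms))
assert lhs <= rhs + 1e-9
print(f"{lhs:.4f} <= {rhs:.4f}")
```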

In the rest of the paper, we will focus on the stability analysis of the stochastic BAM neural network (SBAMNN) (2.7). By choosing an appropriate Lyapunov-Krasovskii functional, we aim to develop an LMI approach for deriving sufficient conditions under which the SBAMNN (2.7) is globally exponentially stable.

3. Main Results

Now, we are in a position to state our main results in the following theorem.

Theorem 3.1. Under Assumptions 1–5, the discrete-time stochastic BAM neural network (2.7) is globally exponentially stable in the mean square if there exist constants $\lambda_0>0$ and $\epsilon_0>0$, diagonal matrices $\Lambda_1=\operatorname{diag}\{\lambda_1^{(1)},\lambda_2^{(1)},\dots,\lambda_n^{(1)}\}>0$, $\Lambda_2=\operatorname{diag}\{\lambda_1^{(2)},\lambda_2^{(2)},\dots,\lambda_n^{(2)}\}>0$, $\Gamma_1=\operatorname{diag}\{\gamma_1^{(1)},\gamma_2^{(1)},\dots,\gamma_n^{(1)}\}>0$, $\Gamma_2=\operatorname{diag}\{\gamma_1^{(2)},\gamma_2^{(2)},\dots,\gamma_n^{(2)}\}>0$, $\Omega_1=\operatorname{diag}\{\omega_1^{(1)},\omega_2^{(1)},\dots,\omega_n^{(1)}\}>0$, and $\Omega_2=\operatorname{diag}\{\omega_1^{(2)},\omega_2^{(2)},\dots,\omega_n^{(2)}\}>0$, and positive definite matrices $P_1>0$, $P_2>0$, $Q_1>0$, $Q_2>0$, $R_1>0$, and $R_2>0$, such that the following LMIs hold:

$$
P_1 < \lambda_0 I, \qquad P_2 < \epsilon_0 I,
$$

$$
\Xi_1 = \begin{bmatrix}
\Pi_{11} & 0 & \Lambda_2 U_2 & \Gamma_2 V_2 & 0 & \Omega_2 W_2 & 0 \\
* & -Q_2 & 0 & 0 & 0 & 0 & 0 \\
* & * & \Pi_{33} & 0 & 0 & 0 & 0 \\
* & * & * & -\Gamma_2 & 0 & 0 & 0 \\
* & * & * & * & \Pi_{55} & 0 & 0 \\
* & * & * & * & * & -\Omega_2 & 0 \\
* & * & * & * & * & * & \Pi_{77}
\end{bmatrix} < 0, \qquad
\Xi_2 = \begin{bmatrix}
\Theta_{11} & 0 & \Lambda_1 L_2 & \Gamma_1 M_2 & 0 & \Omega_1 N_2 & 0 \\
* & -Q_1 & 0 & 0 & 0 & 0 & 0 \\
* & * & \Theta_{33} & 0 & 0 & 0 & 0 \\
* & * & * & -\Gamma_1 & 0 & 0 & 0 \\
* & * & * & * & \Theta_{55} & 0 & 0 \\
* & * & * & * & * & -\Omega_1 & 0 \\
* & * & * & * & * & * & \Theta_{77}
\end{bmatrix} < 0, \tag{3.1}
$$

where

$$
\begin{aligned}
\Pi_{11} &= A^T P_1 A + 2P_1 - \Lambda_2 U_1 - \Gamma_2 V_1 - \Omega_2 W_1 + (1+\sigma_M-\sigma_m) Q_2 + \bar\rho R_2 + \lambda_0 G^T G, \\
\Pi_{33} &= B D P_2 D^T B^T - \Lambda_2 + 2 P_2, \qquad
\Pi_{55} = B V P_2 V^T B^T + D V P_2 V^T D^T + P_2, \\
\Pi_{77} &= B N P_2 N^T B^T + D N P_2 N^T D^T + V N P_2 N^T V^T - \bar\rho^{-1} R_2, \\
\Theta_{11} &= B^T P_2 B + 2P_2 - \Lambda_1 L_1 - \Gamma_1 M_1 - \Omega_1 N_1 + (1+\tau_M-\tau_m) Q_1 + \bar\mu R_1 + \epsilon_0 K^T K, \\
\Theta_{33} &= A C P_1 C^T A^T - \Lambda_1 + 2 P_1, \qquad
\Theta_{55} = A W P_1 W^T A^T + C W P_1 W^T C^T + P_1, \\
\Theta_{77} &= A M P_1 M^T A^T + C M P_1 M^T C^T + W M P_1 M^T W^T - \bar\mu^{-1} R_1, \\
L_1 &= \operatorname{diag}\bigl\{ l_1^+ l_1^-,\, l_2^+ l_2^-,\, \dots,\, l_n^+ l_n^- \bigr\}, \qquad
L_2 = \operatorname{diag}\Bigl\{ \tfrac{l_1^+ + l_1^-}{2},\, \tfrac{l_2^+ + l_2^-}{2},\, \dots,\, \tfrac{l_n^+ + l_n^-}{2} \Bigr\}, \\
M_1 &= \operatorname{diag}\bigl\{ m_1^+ m_1^-,\, m_2^+ m_2^-,\, \dots,\, m_n^+ m_n^- \bigr\}, \qquad
M_2 = \operatorname{diag}\Bigl\{ \tfrac{m_1^+ + m_1^-}{2},\, \tfrac{m_2^+ + m_2^-}{2},\, \dots,\, \tfrac{m_n^+ + m_n^-}{2} \Bigr\}, \\
N_1 &= \operatorname{diag}\bigl\{ n_1^+ n_1^-,\, n_2^+ n_2^-,\, \dots,\, n_n^+ n_n^- \bigr\}, \qquad
N_2 = \operatorname{diag}\Bigl\{ \tfrac{n_1^+ + n_1^-}{2},\, \tfrac{n_2^+ + n_2^-}{2},\, \dots,\, \tfrac{n_n^+ + n_n^-}{2} \Bigr\}, \\
U_1 &= \operatorname{diag}\bigl\{ u_1^+ u_1^-,\, u_2^+ u_2^-,\, \dots,\, u_n^+ u_n^- \bigr\}, \qquad
U_2 = \operatorname{diag}\Bigl\{ \tfrac{u_1^+ + u_1^-}{2},\, \tfrac{u_2^+ + u_2^-}{2},\, \dots,\, \tfrac{u_n^+ + u_n^-}{2} \Bigr\}, \\
V_1 &= \operatorname{diag}\bigl\{ v_1^+ v_1^-,\, v_2^+ v_2^-,\, \dots,\, v_n^+ v_n^- \bigr\}, \qquad
V_2 = \operatorname{diag}\Bigl\{ \tfrac{v_1^+ + v_1^-}{2},\, \tfrac{v_2^+ + v_2^-}{2},\, \dots,\, \tfrac{v_n^+ + v_n^-}{2} \Bigr\}, \\
W_1 &= \operatorname{diag}\bigl\{ w_1^+ w_1^-,\, w_2^+ w_2^-,\, \dots,\, w_n^+ w_n^- \bigr\}, \qquad
W_2 = \operatorname{diag}\Bigl\{ \tfrac{w_1^+ + w_1^-}{2},\, \tfrac{w_2^+ + w_2^-}{2},\, \dots,\, \tfrac{w_n^+ + w_n^-}{2} \Bigr\}, \\
\bar\mu &= \sum_{k=1}^{+\infty} \mu_k, \qquad \bar\rho = \sum_{k=1}^{+\infty} \rho_k.
\end{aligned} \tag{3.2}
$$
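Since the blocks of $\Xi_1$ are given explicitly, a candidate solution can be tested without any SDP machinery by assembling the matrix numerically and inspecting its largest eigenvalue; actually searching for feasible $P_1$, $P_2$, $Q_2$, $R_2$, and the multipliers is a semidefinite program and would be done with the MATLAB LMI toolbox cited above, or any SDP solver. The Python sketch below assembles $\Xi_1$ as defined in (3.1)-(3.2); every numerical input is an illustrative placeholder, not a claimed feasible point.

```python
import numpy as np

# Assembling Xi_1 of (3.1)-(3.2) for *given* candidate matrices and testing
# Xi_1 < 0 via its largest eigenvalue. All inputs below are placeholders.
def xi1(P1, P2, Q2, R2, Lam2, Gam2, Om2, A, B, D, V, N,
        U1, U2, V1, V2, W1, W2, G, sig_m, sig_M, rho_bar, lam0):
    n = A.shape[0]
    Z = np.zeros((n, n))
    Pi11 = (A.T @ P1 @ A + 2 * P1 - Lam2 @ U1 - Gam2 @ V1 - Om2 @ W1
            + (1 + sig_M - sig_m) * Q2 + rho_bar * R2 + lam0 * G.T @ G)
    Pi33 = B @ D @ P2 @ D.T @ B.T - Lam2 + 2 * P2
    Pi55 = B @ V @ P2 @ V.T @ B.T + D @ V @ P2 @ V.T @ D.T + P2
    Pi77 = (B @ N @ P2 @ N.T @ B.T + D @ N @ P2 @ N.T @ D.T
            + V @ N @ P2 @ N.T @ V.T - R2 / rho_bar)
    row1 = [Pi11, Z, Lam2 @ U2, Gam2 @ V2, Z, Om2 @ W2, Z]
    diag = [None, -Q2, Pi33, -Gam2, Pi55, -Om2, Pi77]
    X = np.zeros((7 * n, 7 * n))
    for j in range(7):                       # first block row and column
        X[:n, j*n:(j+1)*n] = row1[j]
        X[j*n:(j+1)*n, :n] = row1[j].T
    for j in range(1, 7):                    # remaining diagonal blocks
        X[j*n:(j+1)*n, j*n:(j+1)*n] = diag[j]
    return X

I3 = np.eye(3)
Xi = xi1(I3, I3, I3, I3, 5*I3, 5*I3, 5*I3, 0.3*I3, 0.3*I3, 0.2*I3, 0.2*I3,
         0.2*I3, -I3, 0*I3, -I3, 0*I3, -I3, 0*I3, 0.2*I3, 3, 4, 0.02, 2.0)
print("largest eigenvalue of Xi_1:", np.linalg.eigvalsh(Xi).max())
```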

Proof. Let us choose the Lyapunov-Krasovskii functional $V(k)=V_1(k)+V_2(k)+V_3(k)+V_4(k)$ with

$$
\begin{aligned}
V_1(k) &= u^T(k) P_1 u(k) + v^T(k) P_2 v(k), \\
V_2(k) &= \sum_{i=k-\tau(k)}^{k-1} v^T(i) Q_1 v(i) + \sum_{i=k-\sigma(k)}^{k-1} u^T(i) Q_2 u(i), \\
V_3(k) &= \sum_{j=k-\tau_M+1}^{k-\tau_m} \sum_{i=j}^{k-1} v^T(i) Q_1 v(i) + \sum_{j=k-\sigma_M+1}^{k-\sigma_m} \sum_{i=j}^{k-1} u^T(i) Q_2 u(i), \\
V_4(k) &= \sum_{i=1}^{+\infty} \mu_i \sum_{j=k-i}^{k-1} v^T(j) R_1 v(j) + \sum_{i=1}^{+\infty} \rho_i \sum_{j=k-i}^{k-1} u^T(j) R_2 u(j).
\end{aligned} \tag{3.3}
$$

In order to analyze the global exponential stability of the stochastic BAM neural network, we calculate the difference $\Delta V(k)=V(k+1)-V(k)$ along the trajectories of the BAM neural network (2.7); then we have

$$
\Delta V(k) = \Delta V_1(k) + \Delta V_2(k) + \Delta V_3(k) + \Delta V_4(k), \tag{3.4}
$$

where, taking the expectation with respect to the noise and using Assumption 5 together with $P_1<\lambda_0 I$ and $P_2<\epsilon_0 I$ (so that $\delta^T P_1 \delta \le \lambda_0\|Gu(k)\|^2$ and $\chi^T P_2 \chi \le \epsilon_0\|Kv(k)\|^2$),

$$
\begin{aligned}
\Delta V_1(k) ={} & \Bigl[Au(k)+Cf(v(k))+Wg(v(k-\tau(k)))+M\textstyle\sum_{\ell=1}^{+\infty}\mu_\ell h(v(k-\ell))\Bigr]^T P_1 \bigl[\,\cdot\,\bigr] - u^T(k)P_1u(k) \\
& + \Bigl[Bv(k)+D\hat f(u(k))+V\hat g(u(k-\sigma(k)))+N\textstyle\sum_{\mathcal N=1}^{+\infty}\rho_{\mathcal N}\hat h(u(k-\mathcal N))\Bigr]^T P_2 \bigl[\,\cdot\,\bigr] - v^T(k)P_2v(k) \\
& + \delta^T(u,v,k)P_1\delta(u,v,k) + \chi^T(u,v,k)P_2\chi(u,v,k) \\
\le{} & u^T(k)\bigl[A^TP_1A-P_1\bigr]u(k) + 2u^T(k)A^TP_1Cf(v(k)) + 2u^T(k)A^TP_1Wg(v(k-\tau(k))) \\
& + 2u^T(k)A^TP_1M\sum_{\ell=1}^{+\infty}\mu_\ell h(v(k-\ell)) + f^T(v(k))C^TP_1Cf(v(k)) + 2f^T(v(k))C^TP_1Wg(v(k-\tau(k))) \\
& + 2f^T(v(k))C^TP_1M\sum_{\ell=1}^{+\infty}\mu_\ell h(v(k-\ell)) + g^T(v(k-\tau(k)))W^TP_1Wg(v(k-\tau(k))) \\
& + 2g^T(v(k-\tau(k)))W^TP_1M\sum_{\ell=1}^{+\infty}\mu_\ell h(v(k-\ell)) + \Bigl(\sum_{\ell=1}^{+\infty}\mu_\ell h(v(k-\ell))\Bigr)^T M^TP_1M \Bigl(\sum_{\ell=1}^{+\infty}\mu_\ell h(v(k-\ell))\Bigr) \\
& + v^T(k)\bigl[B^TP_2B-P_2\bigr]v(k) + 2v^T(k)B^TP_2D\hat f(u(k)) + 2v^T(k)B^TP_2V\hat g(u(k-\sigma(k))) \\
& + 2v^T(k)B^TP_2N\sum_{\mathcal N=1}^{+\infty}\rho_{\mathcal N}\hat h(u(k-\mathcal N)) + \hat f^T(u(k))D^TP_2D\hat f(u(k)) + 2\hat f^T(u(k))D^TP_2V\hat g(u(k-\sigma(k))) \\
& + 2\hat f^T(u(k))D^TP_2N\sum_{\mathcal N=1}^{+\infty}\rho_{\mathcal N}\hat h(u(k-\mathcal N)) + \hat g^T(u(k-\sigma(k)))V^TP_2V\hat g(u(k-\sigma(k))) \\
& + 2\hat g^T(u(k-\sigma(k)))V^TP_2N\sum_{\mathcal N=1}^{+\infty}\rho_{\mathcal N}\hat h(u(k-\mathcal N)) + \Bigl(\sum_{\mathcal N=1}^{+\infty}\rho_{\mathcal N}\hat h(u(k-\mathcal N))\Bigr)^T N^TP_2N \Bigl(\sum_{\mathcal N=1}^{+\infty}\rho_{\mathcal N}\hat h(u(k-\mathcal N))\Bigr) \\
& + u^T(k)\,\lambda_0 G^TG\,u(k) + v^T(k)\,\epsilon_0 K^TK\,v(k),
\end{aligned} \tag{3.5}
$$

where $[\,\cdot\,]$ repeats the preceding bracket. By using Lemma 2.4, we have

$$
\begin{aligned}
2u^T(k)A^TP_1Cf(v(k)) &\le u^T(k)P_1u(k) + f^T(v(k))\,ACP_1C^TA^T f(v(k)), \\
2u^T(k)A^TP_1Wg(v(k-\tau(k))) &\le u^T(k)P_1u(k) + g^T(v(k-\tau(k)))\,AWP_1W^TA^T g(v(k-\tau(k))), \\
2u^T(k)A^TP_1M\textstyle\sum_\ell\mu_\ell h(v(k-\ell)) &\le u^T(k)P_1u(k) + \bigl(\textstyle\sum_\ell\mu_\ell h(v(k-\ell))\bigr)^T AMP_1M^TA^T \bigl(\textstyle\sum_\ell\mu_\ell h(v(k-\ell))\bigr), \\
2f^T(v(k))C^TP_1Wg(v(k-\tau(k))) &\le f^T(v(k))P_1f(v(k)) + g^T(v(k-\tau(k)))\,CWP_1W^TC^T g(v(k-\tau(k))), \\
2f^T(v(k))C^TP_1M\textstyle\sum_\ell\mu_\ell h(v(k-\ell)) &\le f^T(v(k))P_1f(v(k)) + \bigl(\textstyle\sum_\ell\mu_\ell h(v(k-\ell))\bigr)^T CMP_1M^TC^T \bigl(\textstyle\sum_\ell\mu_\ell h(v(k-\ell))\bigr), \\
2g^T(v(k-\tau(k)))W^TP_1M\textstyle\sum_\ell\mu_\ell h(v(k-\ell)) &\le g^T(v(k-\tau(k)))P_1g(v(k-\tau(k))) + \bigl(\textstyle\sum_\ell\mu_\ell h(v(k-\ell))\bigr)^T WMP_1M^TW^T \bigl(\textstyle\sum_\ell\mu_\ell h(v(k-\ell))\bigr), \\
2v^T(k)B^TP_2D\hat f(u(k)) &\le v^T(k)P_2v(k) + \hat f^T(u(k))\,BDP_2D^TB^T \hat f(u(k)), \\
2v^T(k)B^TP_2V\hat g(u(k-\sigma(k))) &\le v^T(k)P_2v(k) + \hat g^T(u(k-\sigma(k)))\,BVP_2V^TB^T \hat g(u(k-\sigma(k))), \\
2v^T(k)B^TP_2N\textstyle\sum_{\mathcal N}\rho_{\mathcal N}\hat h(u(k-\mathcal N)) &\le v^T(k)P_2v(k) + \bigl(\textstyle\sum_{\mathcal N}\rho_{\mathcal N}\hat h(u(k-\mathcal N))\bigr)^T BNP_2N^TB^T \bigl(\textstyle\sum_{\mathcal N}\rho_{\mathcal N}\hat h(u(k-\mathcal N))\bigr), \\
2\hat f^T(u(k))D^TP_2V\hat g(u(k-\sigma(k))) &\le \hat f^T(u(k))P_2\hat f(u(k)) + \hat g^T(u(k-\sigma(k)))\,DVP_2V^TD^T \hat g(u(k-\sigma(k))), \\
2\hat f^T(u(k))D^TP_2N\textstyle\sum_{\mathcal N}\rho_{\mathcal N}\hat h(u(k-\mathcal N)) &\le \hat f^T(u(k))P_2\hat f(u(k)) + \bigl(\textstyle\sum_{\mathcal N}\rho_{\mathcal N}\hat h(u(k-\mathcal N))\bigr)^T DNP_2N^TD^T \bigl(\textstyle\sum_{\mathcal N}\rho_{\mathcal N}\hat h(u(k-\mathcal N))\bigr), \\
2\hat g^T(u(k-\sigma(k)))V^TP_2N\textstyle\sum_{\mathcal N}\rho_{\mathcal N}\hat h(u(k-\mathcal N)) &\le \hat g^T(u(k-\sigma(k)))P_2\hat g(u(k-\sigma(k))) + \bigl(\textstyle\sum_{\mathcal N}\rho_{\mathcal N}\hat h(u(k-\mathcal N))\bigr)^T VNP_2N^TV^T \bigl(\textstyle\sum_{\mathcal N}\rho_{\mathcal N}\hat h(u(k-\mathcal N))\bigr).
\end{aligned}
$$

Moreover,

$$
\begin{aligned}
\Delta V_2(k) &= \sum_{i=k+1-\tau(k+1)}^{k} v^T(i)Q_1v(i) - \sum_{i=k-\tau(k)}^{k-1} v^T(i)Q_1v(i) + \sum_{i=k+1-\sigma(k+1)}^{k} u^T(i)Q_2u(i) - \sum_{i=k-\sigma(k)}^{k-1} u^T(i)Q_2u(i) \\
&\le v^T(k)Q_1v(k) - v^T(k-\tau(k))Q_1v(k-\tau(k)) + \sum_{i=k-\tau_M+1}^{k-\tau_m} v^T(i)Q_1v(i) \\
&\quad + u^T(k)Q_2u(k) - u^T(k-\sigma(k))Q_2u(k-\sigma(k)) + \sum_{i=k-\sigma_M+1}^{k-\sigma_m} u^T(i)Q_2u(i),
\end{aligned} \tag{3.6}
$$

$$
\begin{aligned}
\Delta V_3(k) &= \sum_{j=k-\tau_M+2}^{k-\tau_m+1} \sum_{i=j}^{k} v^T(i)Q_1v(i) - \sum_{j=k-\tau_M+1}^{k-\tau_m} \sum_{i=j}^{k-1} v^T(i)Q_1v(i) + \sum_{j=k-\sigma_M+2}^{k-\sigma_m+1} \sum_{i=j}^{k} u^T(i)Q_2u(i) - \sum_{j=k-\sigma_M+1}^{k-\sigma_m} \sum_{i=j}^{k-1} u^T(i)Q_2u(i) \\
&= (\tau_M-\tau_m)\,v^T(k)Q_1v(k) - \sum_{j=k-\tau_M+1}^{k-\tau_m} v^T(j)Q_1v(j) + (\sigma_M-\sigma_m)\,u^T(k)Q_2u(k) - \sum_{j=k-\sigma_M+1}^{k-\sigma_m} u^T(j)Q_2u(j),
\end{aligned} \tag{3.7}
$$

$$
\begin{aligned}
\Delta V_4(k) &= \sum_{i=1}^{+\infty} \mu_i \bigl[ v^T(k)R_1v(k) - v^T(k-i)R_1v(k-i) \bigr] + \sum_{i=1}^{+\infty} \rho_i \bigl[ u^T(k)R_2u(k) - u^T(k-i)R_2u(k-i) \bigr] \\
&\le \bar\mu\, v^T(k)R_1v(k) - \frac{1}{\bar\mu}\Bigl(\sum_{\ell=1}^{+\infty}\mu_\ell h(v(k-\ell))\Bigr)^T R_1 \Bigl(\sum_{\ell=1}^{+\infty}\mu_\ell h(v(k-\ell))\Bigr) \\
&\quad + \bar\rho\, u^T(k)R_2u(k) - \frac{1}{\bar\rho}\Bigl(\sum_{\mathcal N=1}^{+\infty}\rho_{\mathcal N}\hat h(u(k-\mathcal N))\Bigr)^T R_2 \Bigl(\sum_{\mathcal N=1}^{+\infty}\rho_{\mathcal N}\hat h(u(k-\mathcal N))\Bigr),
\end{aligned} \tag{3.8}
$$

where the last step follows from Lemma 2.5. It is clear from (2.9) that

$$
\begin{aligned}
\bigl[f_j(v_j(k)) - l_j^+ v_j(k)\bigr]\bigl[f_j(v_j(k)) - l_j^- v_j(k)\bigr] &\le 0, \qquad &(3.9)\\
\bigl[g_j(v_j(k)) - m_j^+ v_j(k)\bigr]\bigl[g_j(v_j(k)) - m_j^- v_j(k)\bigr] &\le 0, \qquad &(3.10)\\
\bigl[h_j(v_j(k)) - n_j^+ v_j(k)\bigr]\bigl[h_j(v_j(k)) - n_j^- v_j(k)\bigr] &\le 0, \qquad &(3.11)\\
\bigl[\hat f_i(u_i(k)) - u_i^+ u_i(k)\bigr]\bigl[\hat f_i(u_i(k)) - u_i^- u_i(k)\bigr] &\le 0, \qquad &(3.12)\\
\bigl[\hat g_i(u_i(k)) - v_i^+ u_i(k)\bigr]\bigl[\hat g_i(u_i(k)) - v_i^- u_i(k)\bigr] &\le 0, \qquad &(3.13)\\
\bigl[\hat h_i(u_i(k)) - w_i^+ u_i(k)\bigr]\bigl[\hat h_i(u_i(k)) - w_i^- u_i(k)\bigr] &\le 0, \qquad &(3.14)
\end{aligned}
$$

which is equivalent to

$$
\begin{bmatrix} v(k) \\ f(v(k)) \end{bmatrix}^T
\begin{bmatrix} l_j^+ l_j^-\, e_j e_j^T & -\dfrac{l_j^+ + l_j^-}{2}\, e_j e_j^T \\[2mm] -\dfrac{l_j^+ + l_j^-}{2}\, e_j e_j^T & e_j e_j^T \end{bmatrix}
\begin{bmatrix} v(k) \\ f(v(k)) \end{bmatrix} \le 0, \qquad j=1,2,\dots,n, \tag{3.15}
$$

where $e_j$ denotes the unit column vector having a "1" in its $j$th entry and zeros elsewhere. Consequently,

$$
\sum_{j=1}^{n} \lambda_j^{(1)}
\begin{bmatrix} v(k) \\ f(v(k)) \end{bmatrix}^T
\begin{bmatrix} l_j^+ l_j^-\, e_j e_j^T & -\dfrac{l_j^+ + l_j^-}{2}\, e_j e_j^T \\[2mm] -\dfrac{l_j^+ + l_j^-}{2}\, e_j e_j^T & e_j e_j^T \end{bmatrix}
\begin{bmatrix} v(k) \\ f(v(k)) \end{bmatrix} \le 0,
\quad\text{that is,}\quad
\begin{bmatrix} v(k) \\ f(v(k)) \end{bmatrix}^T
\begin{bmatrix} \Lambda_1 L_1 & -\Lambda_1 L_2 \\ -\Lambda_1 L_2 & \Lambda_1 \end{bmatrix}
\begin{bmatrix} v(k) \\ f(v(k)) \end{bmatrix} \le 0. \tag{3.16}
$$

Similarly, from (3.10)–(3.14), we have

$$
\begin{bmatrix} v(k) \\ g(v(k)) \end{bmatrix}^T \begin{bmatrix} \Gamma_1 M_1 & -\Gamma_1 M_2 \\ -\Gamma_1 M_2 & \Gamma_1 \end{bmatrix} \begin{bmatrix} v(k) \\ g(v(k)) \end{bmatrix} \le 0, \tag{3.17}
$$

$$
\begin{bmatrix} v(k) \\ h(v(k)) \end{bmatrix}^T \begin{bmatrix} \Omega_1 N_1 & -\Omega_1 N_2 \\ -\Omega_1 N_2 & \Omega_1 \end{bmatrix} \begin{bmatrix} v(k) \\ h(v(k)) \end{bmatrix} \le 0, \tag{3.18}
$$

$$
\begin{bmatrix} u(k) \\ \hat f(u(k)) \end{bmatrix}^T \begin{bmatrix} \Lambda_2 U_1 & -\Lambda_2 U_2 \\ -\Lambda_2 U_2 & \Lambda_2 \end{bmatrix} \begin{bmatrix} u(k) \\ \hat f(u(k)) \end{bmatrix} \le 0, \tag{3.19}
$$

$$
\begin{bmatrix} u(k) \\ \hat g(u(k)) \end{bmatrix}^T \begin{bmatrix} \Gamma_2 V_1 & -\Gamma_2 V_2 \\ -\Gamma_2 V_2 & \Gamma_2 \end{bmatrix} \begin{bmatrix} u(k) \\ \hat g(u(k)) \end{bmatrix} \le 0, \tag{3.20}
$$

$$
\begin{bmatrix} u(k) \\ \hat h(u(k)) \end{bmatrix}^T \begin{bmatrix} \Omega_2 W_1 & -\Omega_2 W_2 \\ -\Omega_2 W_2 & \Omega_2 \end{bmatrix} \begin{bmatrix} u(k) \\ \hat h(u(k)) \end{bmatrix} \le 0. \tag{3.21}
$$

Then, from (3.5)–(3.8) and (3.16)–(3.21), we obtain

$$
\begin{aligned}
\Delta V(k) \le{} & u^T(k)\bigl[ A^TP_1A + 2P_1 - \Lambda_2U_1 - \Gamma_2V_1 - \Omega_2W_1 + (1+\sigma_M-\sigma_m)Q_2 + \bar\rho R_2 + \lambda_0 G^TG \bigr] u(k) \\
& + 2u^T(k)\Lambda_2U_2\hat f(u(k)) + 2u^T(k)\Gamma_2V_2\hat g(u(k)) + 2u^T(k)\Omega_2W_2\hat h(u(k)) - u^T(k-\sigma(k))Q_2u(k-\sigma(k)) \\
& + \hat f^T(u(k))\bigl[ BDP_2D^TB^T - \Lambda_2 + 2P_2 \bigr]\hat f(u(k)) - \hat g^T(u(k))\Gamma_2\hat g(u(k)) \\
& + \hat g^T(u(k-\sigma(k)))\bigl[ BVP_2V^TB^T + DVP_2V^TD^T + P_2 \bigr]\hat g(u(k-\sigma(k))) - \hat h^T(u(k))\Omega_2\hat h(u(k)) \\
& + \Bigl(\sum_{\mathcal N=1}^{+\infty}\rho_{\mathcal N}\hat h(u(k-\mathcal N))\Bigr)^T \bigl[ BNP_2N^TB^T + DNP_2N^TD^T + VNP_2N^TV^T - \bar\rho^{-1}R_2 \bigr] \Bigl(\sum_{\mathcal N=1}^{+\infty}\rho_{\mathcal N}\hat h(u(k-\mathcal N))\Bigr) \\
& + v^T(k)\bigl[ B^TP_2B + 2P_2 - \Lambda_1L_1 - \Gamma_1M_1 - \Omega_1N_1 + (1+\tau_M-\tau_m)Q_1 + \bar\mu R_1 + \epsilon_0 K^TK \bigr] v(k) \\
& + 2v^T(k)\Lambda_1L_2f(v(k)) + 2v^T(k)\Gamma_1M_2g(v(k)) + 2v^T(k)\Omega_1N_2h(v(k)) - v^T(k-\tau(k))Q_1v(k-\tau(k)) \\
& + f^T(v(k))\bigl[ ACP_1C^TA^T - \Lambda_1 + 2P_1 \bigr]f(v(k)) - g^T(v(k))\Gamma_1g(v(k)) \\
& + g^T(v(k-\tau(k)))\bigl[ AWP_1W^TA^T + CWP_1W^TC^T + P_1 \bigr]g(v(k-\tau(k))) - h^T(v(k))\Omega_1h(v(k)) \\
& + \Bigl(\sum_{\ell=1}^{+\infty}\mu_\ell h(v(k-\ell))\Bigr)^T \bigl[ AMP_1M^TA^T + CMP_1M^TC^T + WMP_1M^TW^T - \bar\mu^{-1}R_1 \bigr] \Bigl(\sum_{\ell=1}^{+\infty}\mu_\ell h(v(k-\ell))\Bigr) \\
={} & \xi^T(k)\,\Xi_1\,\xi(k) + \eta^T(k)\,\Xi_2\,\eta(k), 
\end{aligned} \tag{3.22}
$$

where

$$
\begin{aligned}
\xi^T(k) &= \Bigl[ u^T(k),\; u^T(k-\sigma(k)),\; \hat f^T(u(k)),\; \hat g^T(u(k)),\; \hat g^T(u(k-\sigma(k))),\; \hat h^T(u(k)),\; \Bigl(\sum_{\mathcal N=1}^{+\infty}\rho_{\mathcal N}\hat h(u(k-\mathcal N))\Bigr)^T \Bigr], \\
\eta^T(k) &= \Bigl[ v^T(k),\; v^T(k-\tau(k)),\; f^T(v(k)),\; g^T(v(k)),\; g^T(v(k-\tau(k))),\; h^T(v(k)),\; \Bigl(\sum_{\ell=1}^{+\infty}\mu_\ell h(v(k-\ell))\Bigr)^T \Bigr].
\end{aligned}
$$
Therefore, if the LMIs (3.1) hold, then $\Delta V(k)\le 0$, and consequently $V(k)\le V(0)$ for all $k$. By (3.22), the SBAMNN (2.7) is globally asymptotically stable in the mean square.
Now, we are in a position to establish the exponential stability of the SBAMNN (2.7).
Then there exists a scalar $\beta>0$ such that

$$
\Delta V(k) \le -\beta\bigl(\|u(k)\|^2 + \|v(k)\|^2\bigr). \tag{3.23}
$$

From (3.3), it can be verified that

$$
\begin{aligned}
V(k) &\le \lambda_{\max}(P_1)\|u(k)\|^2 + \lambda_{\max}(P_2)\|v(k)\|^2 + \lambda_{\max}(Q_1)\sum_{i=k-\tau_M}^{k-1}\|v(i)\|^2 + \lambda_{\max}(Q_2)\sum_{i=k-\sigma_M}^{k-1}\|u(i)\|^2 \\
&\quad + (\tau_M-\tau_m)\lambda_{\max}(Q_1)\sum_{i=k-\tau_M}^{k-1}\|v(i)\|^2 + (\sigma_M-\sigma_m)\lambda_{\max}(Q_2)\sum_{i=k-\sigma_M}^{k-1}\|u(i)\|^2 \\
&= \lambda_{\max}(P_1)\|u(k)\|^2 + \lambda_{\max}(P_2)\|v(k)\|^2 + \beta_1\sum_{i=k-\sigma_M}^{k-1}\|u(i)\|^2 + \beta_2\sum_{i=k-\tau_M}^{k-1}\|v(i)\|^2,
\end{aligned} \tag{3.24}
$$

where $\beta_1=(1+\sigma_M-\sigma_m)\lambda_{\max}(Q_2)$ and $\beta_2=(1+\tau_M-\tau_m)\lambda_{\max}(Q_1)$.

Choose a scalar $\theta>1$ satisfying

$$
-\beta\theta + (\theta-1)\bigl[\lambda_{\max}(P_1)+\lambda_{\max}(P_2)\bigr] + (\theta-1)(\beta_1+\beta_2)\bigl[\sigma_M\theta^{\sigma_M}+\tau_M\theta^{\tau_M}\bigr] = 0. \tag{3.25}
$$

Then, by (3.23) and (3.24), we have

$$
\begin{aligned}
\theta^{k+1}V(k+1)-\theta^kV(k) &= \theta^{k+1}\bigl[V(k+1)-V(k)\bigr] + \theta^k(\theta-1)V(k) = \theta^{k+1}\Delta V(k) + \theta^k(\theta-1)V(k) \\
&\le -\beta_3\theta^k\bigl(\|u(k)\|^2+\|v(k)\|^2\bigr) + \beta_4\theta^k\Bigl(\sum_{i=k-\sigma_M}^{k-1}\|u(i)\|^2 + \sum_{i=k-\tau_M}^{k-1}\|v(i)\|^2\Bigr),
\end{aligned} \tag{3.26}
$$

where $\beta_3=\beta\theta-(\theta-1)\bigl(\lambda_{\max}(P_1)+\lambda_{\max}(P_2)\bigr)$ and $\beta_4=(\theta-1)(\beta_1+\beta_2)$.

Therefore, for any integer $N\ge\max\{\tau_M,\sigma_M\}+1$, summing both sides of (3.26) from $0$ to $N-1$ with respect to $k$, we have

$$
\theta^N V(N) - V(0) \le -\beta_3\sum_{k=0}^{N-1}\theta^k\bigl(\|u(k)\|^2+\|v(k)\|^2\bigr) + \beta_4\sum_{k=0}^{N-1}\theta^k\Bigl(\sum_{i=k-\sigma_M}^{k-1}\|u(i)\|^2 + \sum_{i=k-\tau_M}^{k-1}\|v(i)\|^2\Bigr). \tag{3.27}
$$

Here, we note that for $\tau_M\ge1$ and $\sigma_M\ge1$,

$$
\begin{aligned}
\sum_{k=0}^{N-1}\theta^k\Bigl(\sum_{i=k-\sigma_M}^{k-1}\|u(i)\|^2 + \sum_{i=k-\tau_M}^{k-1}\|v(i)\|^2\Bigr)
&\le \sigma_M^2\theta^{\sigma_M+1}\sup_{-\sigma_M\le i\le0}\|u(i)\|^2 + \tau_M^2\theta^{\tau_M+1}\sup_{-\tau_M\le i\le0}\|v(i)\|^2 \\
&\quad + \sigma_M\theta^{\sigma_M}\sum_{k=0}^{N-1}\theta^k\|u(k)\|^2 + \tau_M\theta^{\tau_M}\sum_{k=0}^{N-1}\theta^k\|v(k)\|^2.
\end{aligned} \tag{3.28}
$$

Substituting (3.28) into (3.27) gives

$$
\begin{aligned}
\theta^N V(N) &\le \bigl[-\beta_3+\beta_4\bigl(\sigma_M\theta^{\sigma_M}+\tau_M\theta^{\tau_M}\bigr)\bigr]\sum_{k=0}^{N-1}\theta^k\bigl(\|u(k)\|^2+\|v(k)\|^2\bigr) \\
&\quad + \beta_4\bigl(\sigma_M^2\theta^{\sigma_M+1}+\tau_M^2\theta^{\tau_M+1}\bigr)\Bigl(\sup_{-\sigma_M\le i\le0}\|u(i)\|^2+\sup_{-\tau_M\le i\le0}\|v(i)\|^2\Bigr)+V(0).
\end{aligned} \tag{3.29}
$$

We can observe that

$$
V(N) \ge \min\bigl\{\lambda_{\min}(P_1),\lambda_{\min}(P_2)\bigr\}\bigl(\|u(N)\|^2+\|v(N)\|^2\bigr). \tag{3.30}
$$

It follows easily from (3.24) that

$$
V(0) \le \bigl(\lambda_{\max}(P_1)+\beta_1\sigma_M\bigr)\sup_{-\sigma_M\le i\le0}\|u(i)\|^2 + \bigl(\lambda_{\max}(P_2)+\beta_2\tau_M\bigr)\sup_{-\tau_M\le i\le0}\|v(i)\|^2. \tag{3.31}
$$

Then it follows from (3.25), (3.29), (3.30), and (3.31) that

$$
\|u(N)\| + \|v(N)\| \le \nu\,\mathcal G^{\,N}\Bigl(\sup_{-\sigma_M\le i\le0}\|u(i)\| + \sup_{-\tau_M\le i\le0}\|v(i)\|\Bigr), \tag{3.32}
$$

where $\mathcal G=\theta^{-1/2}$ and

$$
\nu = \Biggl(\frac{\lambda_{\max}(P_1)+\lambda_{\max}(P_2)+\beta_1\sigma_M+\beta_2\tau_M+\beta_4\bigl(\sigma_M^2\theta^{\sigma_M+1}+\tau_M^2\theta^{\tau_M+1}\bigr)}{\min\bigl\{\lambda_{\min}(P_1),\lambda_{\min}(P_2)\bigr\}}\Biggr)^{1/2}. \tag{3.33}
$$

This indicates that the discrete-time stochastic BAM neural network (2.7) is globally exponentially stable in the mean square. This completes the proof of the theorem.
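The scalar $\theta>1$ in (3.25) can be computed by one-dimensional root finding: the left-hand side equals $-\beta<0$ at $\theta=1$ and becomes positive for large enough $\theta$. A sketch (assuming SciPy is available; all constants below are illustrative placeholders, in the proof they come from (3.23) and (3.24)):

```python
import numpy as np
from scipy.optimize import brentq

# Root-finding sketch for the scalar equation (3.25). phi(1) = -beta < 0 and
# phi(2) > 0 for these constants, so brentq finds a root in between.
beta, beta1, beta2 = 0.5, 0.2, 0.3
lamP1, lamP2 = 1.0, 1.0      # lambda_max(P1), lambda_max(P2)
sigM, tauM = 4, 4

def phi(theta):
    return (-beta * theta
            + (theta - 1) * (lamP1 + lamP2)
            + (theta - 1) * (beta1 + beta2)
              * (sigM * theta**sigM + tauM * theta**tauM))

theta = brentq(phi, 1.0, 2.0)
print("theta =", theta)
```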

For a deterministic BAM neural network, we have the following system of equations:

$$
\begin{aligned}
x(k+1) &= A x(k) + C f(y(k)) + W g\bigl(y(k-\tau(k))\bigr) + M \sum_{\ell=1}^{+\infty}\mu_\ell\, h\bigl(y(k-\ell)\bigr) + I, \\
y(k+1) &= B y(k) + D \hat f(x(k)) + V \hat g\bigl(x(k-\sigma(k))\bigr) + N \sum_{\mathcal N=1}^{+\infty}\rho_{\mathcal N}\, \hat h\bigl(x(k-\mathcal N)\bigr) + J.
\end{aligned} \tag{3.34}
$$

Then, by Theorem 3.1, it is very easy to obtain the following theorem.

Theorem 3.2. Under Assumptions 1–4, the discrete-time BAM neural network (3.34) is globally exponentially stable if there exist diagonal matrices $\Lambda_1=\operatorname{diag}\{\lambda_1^{(1)},\dots,\lambda_n^{(1)}\}>0$, $\Lambda_2=\operatorname{diag}\{\lambda_1^{(2)},\dots,\lambda_n^{(2)}\}>0$, $\Gamma_1=\operatorname{diag}\{\gamma_1^{(1)},\dots,\gamma_n^{(1)}\}>0$, $\Gamma_2=\operatorname{diag}\{\gamma_1^{(2)},\dots,\gamma_n^{(2)}\}>0$, $\Omega_1=\operatorname{diag}\{\omega_1^{(1)},\dots,\omega_n^{(1)}\}>0$, and $\Omega_2=\operatorname{diag}\{\omega_1^{(2)},\dots,\omega_n^{(2)}\}>0$, and positive definite matrices $P_1>0$, $P_2>0$, $Q_1>0$, $Q_2>0$, $R_1>0$, and $R_2>0$, such that the following LMIs hold:

$$
\Xi_3 = \begin{bmatrix}
\Psi_{11} & 0 & \Lambda_2 U_2 & \Gamma_2 V_2 & 0 & \Omega_2 W_2 & 0 \\
* & -Q_2 & 0 & 0 & 0 & 0 & 0 \\
* & * & \Pi_{33} & 0 & 0 & 0 & 0 \\
* & * & * & -\Gamma_2 & 0 & 0 & 0 \\
* & * & * & * & \Pi_{55} & 0 & 0 \\
* & * & * & * & * & -\Omega_2 & 0 \\
* & * & * & * & * & * & \Pi_{77}
\end{bmatrix} < 0, \qquad
\Xi_4 = \begin{bmatrix}
\Phi_{11} & 0 & \Lambda_1 L_2 & \Gamma_1 M_2 & 0 & \Omega_1 N_2 & 0 \\
* & -Q_1 & 0 & 0 & 0 & 0 & 0 \\
* & * & \Theta_{33} & 0 & 0 & 0 & 0 \\
* & * & * & -\Gamma_1 & 0 & 0 & 0 \\
* & * & * & * & \Theta_{55} & 0 & 0 \\
* & * & * & * & * & -\Omega_1 & 0 \\
* & * & * & * & * & * & \Theta_{77}
\end{bmatrix} < 0, \tag{3.35}
$$

where

$$
\Psi_{11} = A^T P_1 A + 2P_1 - \Lambda_2 U_1 - \Gamma_2 V_1 - \Omega_2 W_1 + (1+\sigma_M-\sigma_m) Q_2 + \bar\rho R_2, \qquad
\Phi_{11} = B^T P_2 B + 2P_2 - \Lambda_1 L_1 - \Gamma_1 M_1 - \Omega_1 N_1 + (1+\tau_M-\tau_m) Q_1 + \bar\mu R_1, \tag{3.36}
$$

and $\Pi_{33}$, $\Pi_{55}$, $\Pi_{77}$, $\Theta_{33}$, $\Theta_{55}$, and $\Theta_{77}$ are as defined in Theorem 3.1.

Proof. Similar to the proof of Theorem 3.1, we can derive the stability result. The proof is straightforward and hence omitted.

If we neglect the distributed delay terms in (2.2), it can be reduced to

$$
\begin{aligned}
x(k+1) &= A x(k) + C f(y(k)) + W g\bigl(y(k-\tau(k))\bigr) + I + \delta\bigl(x(k),\, y(k-\tau(k)),\, k\bigr)\, w_1(k), \\
y(k+1) &= B y(k) + D \hat f(x(k)) + V \hat g\bigl(x(k-\sigma(k))\bigr) + J + \chi\bigl(y(k),\, x(k-\sigma(k)),\, k\bigr)\, w_2(k).
\end{aligned} \tag{3.37}
$$

For system (3.37), we have the following stability result.

Corollary 3.3. Under Assumptions 1–5, the discrete-time stochastic BAM neural network (3.37) is globally exponentially stable in the mean square if there exist diagonal matrices $\Lambda_1=\operatorname{diag}\{\lambda_1^{(1)},\dots,\lambda_n^{(1)}\}>0$, $\Lambda_2=\operatorname{diag}\{\lambda_1^{(2)},\dots,\lambda_n^{(2)}\}>0$, $\Gamma_1=\operatorname{diag}\{\gamma_1^{(1)},\dots,\gamma_n^{(1)}\}>0$, and $\Gamma_2=\operatorname{diag}\{\gamma_1^{(2)},\dots,\gamma_n^{(2)}\}>0$, and positive definite matrices $P_1>0$, $P_2>0$, $Q_1>0$, and $Q_2>0$, such that the following LMIs hold:

$$
\Xi_5 = \begin{bmatrix}
\Upsilon_{11} & 0 & \Lambda_2 U_2 & \Gamma_2 V_2 & 0 \\
* & -Q_2 & 0 & 0 & 0 \\
* & * & \Pi_{33} & 0 & 0 \\
* & * & * & -\Gamma_2 & 0 \\
* & * & * & * & \Pi_{55}
\end{bmatrix} < 0, \qquad
\Xi_6 = \begin{bmatrix}
\Sigma_{11} & 0 & \Lambda_1 L_2 & \Gamma_1 M_2 & 0 \\
* & -Q_1 & 0 & 0 & 0 \\
* & * & \Theta_{33} & 0 & 0 \\
* & * & * & -\Gamma_1 & 0 \\
* & * & * & * & \Theta_{55}
\end{bmatrix} < 0, \tag{3.38}
$$

where

$$
\Upsilon_{11} = A^T P_1 A + 2P_1 - \Lambda_2 U_1 - \Gamma_2 V_1 + (1+\sigma_M-\sigma_m) Q_2, \qquad
\Sigma_{11} = B^T P_2 B + 2P_2 - \Lambda_1 L_1 - \Gamma_1 M_1 + (1+\tau_M-\tau_m) Q_1, \tag{3.39}
$$

and $\Pi_{33}$, $\Pi_{55}$, $\Theta_{33}$, and $\Theta_{55}$ are as defined in Theorem 3.1.

4. Numerical Example

To illustrate the effectiveness of our stability criterion, we give the following numerical example.

Example 4.1. Consider the stochastic BAM neural network (2.2) with the following parameters:

$$
A=\operatorname{diag}\{0.3,\,0.3,\,0.4\}, \quad B=\operatorname{diag}\{0.3,\,0.2,\,0.2\}, \quad
C=\begin{bmatrix} 0.4 & 0.2 & 0.1 \\ 0 & 0.2 & 0.3 \\ 0.1 & 0 & 0.2 \end{bmatrix}, \quad
D=\begin{bmatrix} 0.2 & 0.1 & 0 \\ 0.2 & 0.3 & 0.2 \\ 0 & 0.2 & 0.2 \end{bmatrix},
$$

$$
W=\begin{bmatrix} 0.2 & 0.2 & 0.6 \\ 0.3 & 0.1 & 0 \\ 0 & 0.2 & 0.5 \end{bmatrix}, \quad
V=\begin{bmatrix} 0.4 & 0.4 & 0.2 \\ 0 & 0.1 & 0.2 \\ 0.3 & 0 & 0.3 \end{bmatrix}, \quad
M=\begin{bmatrix} 0.2 & 0.6 & 0.1 \\ 0.1 & 0.3 & 0 \\ 0 & 0.7 & 0.5 \end{bmatrix}, \quad
N=\begin{bmatrix} 0.4 & 0.5 & 0.3 \\ 0 & 0.1 & 0.3 \\ 0.4 & 0 & 0.4 \end{bmatrix},
$$

$$
G=\operatorname{diag}\{0.2,\,0.3,\,0.3\}, \quad K=\operatorname{diag}\{0.4,\,0.2,\,0.2\}, \quad
\tau(k)=3+\sin\frac{k\pi}{2}, \quad \sigma(k)=2-\cos\frac{k\pi}{2}, \quad
I=3\sin\frac{k\pi}{2}, \quad J=2\cos\frac{k\pi}{2}, \quad \mu_k=\rho_k=e^{-4k},
$$

$$
f(y(k))=g(y(k))=h(y(k))=\begin{bmatrix} \tanh(4y_1(k)) \\ \tanh(4y_2(k)) \\ \tanh(y_3(k)) \end{bmatrix}, \qquad
\hat f(x(k))=\hat g(x(k))=\hat h(x(k))=\begin{bmatrix} \tanh(x_1(k)) \\ \tanh(4x_2(k)) \\ \tanh(x_3(k)) \end{bmatrix}. \tag{4.1}
$$

It can be verified that $\sigma_m=\tau_m=3$, $\sigma_M=\tau_M=4$, $l_1^+=m_1^+=n_1^+=2$, $l_1^-=m_1^-=n_1^-=-2$, $l_2^+=m_2^+=n_2^+=2$, $l_2^-=m_2^-=n_2^-=-2$, $l_3^+=m_3^+=n_3^+=1$, $l_3^-=m_3^-=n_3^-=-1$, $u_1^+=v_1^+=w_1^+=1$, $u_1^-=v_1^-=w_1^-=-1$, $u_2^+=v_2^+=w_2^+=2$, $u_2^-=v_2^-=w_2^-=-2$, $u_3^+=v_3^+=w_3^+=1$, and $u_3^-=v_3^-=w_3^-=-1$, with

$$
L_1=M_1=N_1=\operatorname{diag}\{-4,\,-4,\,-1\}, \quad L_2=M_2=N_2=0, \quad
U_1=V_1=W_1=\operatorname{diag}\{-1,\,-4,\,-1\}, \quad U_2=V_2=W_2=0. \tag{4.2}
$$

By using the Matlab LMI toolbox, we solve the LMIs (3.1) in Theorem 3.1 and obtain the feasible solutions as follows:

$$
P_1=\begin{bmatrix} 3.7477 & 0.5120 & 0.2586 \\ 0.5120 & 7.5557 & 0.6515 \\ 0.2586 & 0.6515 & 9.0264 \end{bmatrix}, \quad
P_2=\begin{bmatrix} 7.6744 & 0.2512 & 0.0154 \\ 0.2512 & 6.2020 & 0.0046 \\ 0.0154 & 0.0046 & 8.4207 \end{bmatrix}, \quad
Q_1=\begin{bmatrix} 2.8411 & 0.0680 & 0.2237 \\ 0.0680 & 3.6480 & 0.2702 \\ 0.2237 & 0.2702 & 2.1788 \end{bmatrix},
$$

$$
Q_2=\begin{bmatrix} 2.5375 & 0.5389 & 0.6248 \\ 0.5389 & 3.6343 & 0.3781 \\ 0.6248 & 0.3781 & 2.1497 \end{bmatrix}, \quad
R_1=\begin{bmatrix} 1.8738 & 0.6069 & 1.2100 \\ 0.6069 & 2.2212 & 1.3303 \\ 1.2100 & 1.3303 & 2.1414 \end{bmatrix}, \quad
R_2=\begin{bmatrix} 0.5616 & 0.7036 & 2.2034 \\ 0.7036 & 0.8218 & 0.4770 \\ 2.2034 & 0.4770 & 1.2174 \end{bmatrix},
$$

$$
\Lambda_1=\operatorname{diag}\{6.5450,\,10.4004,\,16.6900\}, \quad \Lambda_2=\operatorname{diag}\{12.0795,\,9.8357,\,15.5096\}, \quad
\Gamma_1=\operatorname{diag}\{1.7050,\,3.6765,\,2.1403\},
$$

$$
\Gamma_2=\operatorname{diag}\{2.3176,\,3.2988,\,2.0030\}, \quad
\Omega_1=\operatorname{diag}\{1.7050,\,3.6765,\,2.1403\}, \quad
\Omega_2=\operatorname{diag}\{2.3176,\,3.2988,\,2.0030\}, \quad
\lambda_0=1.7593, \quad \epsilon_0=3.6570. \tag{4.3}
$$

Then it follows from Theorem 3.1 that the SBAMNN (2.7) with the given parameters is globally exponentially stable in the mean square. Our main purpose in this example is to estimate the maximum allowable delay upper bounds $\sigma_M$ and $\tau_M$ for given lower bounds $\sigma_m$ and $\tau_m$ (Table 1). For instance, if we set $\sigma_m=\tau_m=2$, the allowable time-delay upper bound obtained by Gao and Cui [19] is 4. In our paper, however, the LMIs remain feasible for time delays satisfying $0<\tau(k)\le\tau_M$ and $0<\sigma(k)\le\sigma_M$ with $\tau_M$ and $\sigma_M$ being any arbitrarily large finite values. This is much larger than the bound in [19], which shows that the developed method is less conservative (Figure 1).
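The positive definiteness of the reported feasible matrices can be spot-checked from their smallest eigenvalues; the sketch below does this for $P_1$ and $Q_1$ as transcribed in (4.3), and the remaining matrices can be checked in exactly the same way.

```python
import numpy as np

# Spot check: smallest eigenvalues of two of the matrices reported in (4.3);
# positive values confirm positive definiteness.
P1 = np.array([[3.7477, 0.5120, 0.2586],
               [0.5120, 7.5557, 0.6515],
               [0.2586, 0.6515, 9.0264]])
Q1 = np.array([[2.8411, 0.0680, 0.2237],
               [0.0680, 3.6480, 0.2702],
               [0.2237, 0.2702, 2.1788]])
for name, mat in (("P1", P1), ("Q1", Q1)):
    print(name, "min eigenvalue:", np.linalg.eigvalsh(mat).min())
```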

5. Conclusion

In this paper, we have considered the stability analysis problem for a class of discrete-time stochastic BAM neural networks with both discrete and distributed delays. By employing a Lyapunov-Krasovskii functional, a linear matrix inequality (LMI) approach has been developed to establish sufficient conditions for the SBAMNNs to be globally exponentially stable. It has been shown that the delayed SBAMNNs are globally exponentially stable if certain LMIs are solvable, and the feasibility of these LMIs can be easily checked by the numerically efficient LMI toolbox in Matlab. A numerical example has been given to demonstrate the effectiveness of the obtained stability conditions.

Acknowledgments

The work of the first author was supported by UGC Rajiv Gandhi National Fellowship. The work of the second author was supported by the Korean Research Foundation Grant funded by the Korean Government with Grant no. KRF 2010-0003495 and the work of the third author was supported by the CSIR, New Delhi. The authors are very much thankful to the reviewers and editors for their valuable comments and suggestions for improving this work.