R. Raja, R. Sakthivel, S. Marshal Anthoni, "Exponential Stability for Discrete-Time Stochastic BAM Neural Networks with Discrete and Distributed Delays", International Scholarly Research Notices, vol. 2011, Article ID 153409, 23 pages, 2011. https://doi.org/10.5402/2011/153409
Exponential Stability for Discrete-Time Stochastic BAM Neural Networks with Discrete and Distributed Delays
R. Raja,1 R. Sakthivel,2 and S. Marshal Anthoni3
1Department of Mathematics, Periyar University, Salem 636 011, India
2Department of Mathematics, Sungkyunkwan University, Suwon 440-746, Republic of Korea
3Department of Mathematics, Anna University of Technology, Coimbatore 641 047, India
Academic Editor: N. I. Trinajstić
Received 13 Sep 2011
Accepted 23 Oct 2011
Published 16 Jan 2012
This paper deals with the stability analysis problem for a class of discrete-time stochastic
BAM neural networks with discrete and distributed time-varying delays. By constructing a suitable
Lyapunov-Krasovskii functional and employing M-matrix theory, we find some sufficient
conditions ensuring the global exponential stability of the equilibrium point for stochastic BAM
neural networks with time-varying delays. The conditions obtained here are expressed in terms
of LMIs, whose feasibility can be easily checked using the MATLAB LMI Control Toolbox. A numerical
example is presented to show the effectiveness of the derived LMI-based stability conditions.
1. Introduction
Recently, the study of bidirectional associative memory (BAM) neural networks has attracted the attention of many researchers due to their applications in many fields such as pattern recognition, automatic control, associative memory, signal processing, and optimization; see, for example, [1–9]. The BAM neural network model, proposed by Kosko [10, 11], is a two-layer nonlinear feedback network in which the neurons in one layer are fully interconnected to the neurons in the other layer, while there are no interconnections among neurons within the same layer.
Furthermore, due to the finite switching speed of neuron amplifiers and the finite speed of signal propagation, time delays are unavoidable in the implementation of neural networks [12–14]. According to the way they occur, time delays can be classified into two types: discrete and distributed. Discrete time delays are relatively easier to identify in practice, and hence the stability analysis of BAM networks with discrete delays has been an attractive subject of research in the past few years; see [15, 16]. On the other hand, due to the presence of a large number of parallel pathways with a variety of axon sizes and lengths, a neural network usually has a spatial nature. Therefore, it is necessary to introduce continuously distributed delays over a certain duration of time; see [17, 18].
Moreover, in implementations of neural networks, stochastic disturbances are inevitable owing to thermal noise in electronic devices. Practically, the stochastic phenomenon usually appears in the electrical circuit design of neural networks, and stochastic effects can destabilize a neural system. Therefore, it is of significance to consider the influence of stochastic effects on the stability of neural networks with delays. It is noted that most BAM neural networks have been assumed to act in a continuous-time manner; when it comes to the implementation of discrete-time BAM networks, only a few works have appeared in the literature; see [6, 19–24] and the references cited therein. Therefore, there is a crucial need to study the dynamics of discrete-time BAM neural networks, which is all the more significant from a practical point of view. Gao and Cui discussed the global robust exponential stability of discrete-time interval BAM neural networks with time-varying delays, and, in a related work, the global exponential stability of a discrete-time BAM neural network with a time-varying delay was investigated. In the aforementioned references, the stability problem for BAM neural networks is considered only with discrete delay; distributed delay has not been taken into account and remains challenging. Our main aim in this work is therefore to make a first attempt to fill this gap.
Motivated by the above points, in this paper we study the exponential stability problem for a new class of discrete-time stochastic BAM neural networks with both discrete and distributed delays. The existence of the equilibrium point is proved under mild conditions on the activation functions. By constructing an appropriate Lyapunov-Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish sufficient conditions for the discrete-time BAM neural networks to be globally exponentially stable in the mean square. Here, we note that the LMIs can be easily solved by using the MATLAB LMI Toolbox, and no tuning of parameters is involved. Finally, a numerical example is presented to show the usefulness of the derived LMI-based stability conditions.
Notations. Throughout this paper, $\mathbb{R}^n$ and $\mathbb{R}^{n\times m}$ denote, respectively, the $n$-dimensional Euclidean space and the set of all $n\times m$ real matrices. $I$ denotes the identity matrix with appropriate dimensions and $\operatorname{diag}\{\cdot\}$ denotes a diagonal matrix. For real symmetric matrices $X$ and $Y$, the notation $X\ge Y$ (resp., $X>Y$) means that the matrix $X-Y$ is positive semidefinite (resp., positive definite). $\|\cdot\|$ stands for the Euclidean norm in $\mathbb{R}^n$. $\lambda_{\max}(A)$ (resp., $\lambda_{\min}(A)$) stands for the maximum (resp., minimum) eigenvalue of the matrix $A$. The symbol $*$ within a matrix represents the symmetric term of the matrix.
2. Problem Description and Preliminaries
Consider a discrete-time stochastic BAM neural network with both discrete and distributed delays of the following form:
or, in an equivalent form,
for , where and are the neural state vectors; and are the state feedback coefficient matrices; , and are, respectively, the connection weight matrices, the discretely delayed connection weight matrices, and the distributed delayed connection weight matrices; and denote the discrete time-varying delays satisfying
where , and are known positive integers, and denotes the distributed time-varying delays. Further,
denote the neuron activation functions. The constant vectors and are the external inputs from outside the system; and are scalar constants, where and are scalar Wiener processes (Brownian motions) on the probability space with
with being the mathematical expectation operator; and are the nonlinear vector functions representing the disturbance intensities.
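Since the displayed equations of the model did not survive, a representative form matching the description above can be sketched as follows; every symbol name here ($A$, $B$, $C$, $D$, $\tau(k)$, $h(k)$, $\mu_m$, $\nu_m$, etc.) is an illustrative assumption rather than the paper's actual notation:

$$
\begin{aligned}
x(k+1) &= A\,x(k) + B\,f\big(y(k)\big) + C\,f\big(y(k-\tau(k))\big) + D\sum_{m=1}^{\infty}\mu_m f\big(y(k-m)\big) + I_1 + \sigma_1\big(k,\,x(k),\,y(k-\tau(k))\big)\,\omega_1(k),\\
y(k+1) &= \widehat{A}\,y(k) + \widehat{B}\,g\big(x(k)\big) + \widehat{C}\,g\big(x(k-h(k))\big) + \widehat{D}\sum_{m=1}^{\infty}\nu_m g\big(x(k-m)\big) + I_2 + \sigma_2\big(k,\,y(k),\,x(k-h(k))\big)\,\omega_2(k).
\end{aligned}
$$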
In this paper, we make the following assumptions on the neuron activation functions.
Assumption 1. For , the neuron activation functions , , , , , and in (2.2) are continuous as well as bounded on $\mathbb{R}$.
Assumption 2. For , the neuron activation functions in (2.2) satisfy
where , and are some constants.
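The sector-type condition that Assumption 2 refers to is typically stated as follows (constants $l_i^-$, $l_i^+$ named here for illustration):

$$
l_i^- \le \frac{f_i(s_1)-f_i(s_2)}{s_1-s_2} \le l_i^+, \qquad \forall\, s_1, s_2 \in \mathbb{R},\ s_1 \neq s_2,
$$

where $l_i^-$ and $l_i^+$ may be positive, negative, or zero.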
Remark 2.1. Assumption 2 was first introduced by Liu et al. . The constants , and in Assumption 2 are allowed to be positive, negative, or zero. Hence, the activation functions considered in this paper may be nonmonotonic and are more general than the usual sigmoid and Lipschitz-continuous functions. Such conditions give a sharp quantification of the lower and upper bounds of the activation functions; using such generalized activation functions is very helpful in the LMI-based technique for reducing the possible conservatism. In order to simplify our proof, we shift the equilibrium point and of system (2.2) to the origin. Let and ; then system (2.2) can be transformed into
Assumption 3. Obviously, the activation functions satisfy the following condition:
Assumption 4. The constants satisfy the following convergent conditions:
Remark 2.2. Assumption 4 ensures that the terms and are convergent, which is significant for the subsequent analysis.
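A convergence condition of the kind Assumption 4 describes is commonly written as follows (kernel names $\mu_m$, $\nu_m$ assumed here for illustration):

$$
\bar{\mu} := \sum_{m=1}^{\infty} \mu_m < +\infty, \qquad \bar{\nu} := \sum_{m=1}^{\infty} \nu_m < +\infty,
$$

which guarantees that the distributed-delay summations in the model are well defined.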
Assumption 5. There exist constant matrices and such that
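Noise-intensity bounds of the kind Assumption 5 describes are commonly written as follows (all symbols assumed here for illustration):

$$
\operatorname{trace}\!\big[\sigma_1^{\mathsf T}(k,u,v)\,\sigma_1(k,u,v)\big] \le \|M_1 u\|^2 + \|M_2 v\|^2
$$

for constant matrices $M_1$ and $M_2$, with an analogous bound on $\sigma_2$.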
The following definition and lemmas will be essential in employing the exponential stability conditions.
Definition 2.3. The delayed discrete-time stochastic BAM neural network (2.7) is said to be globally exponentially stable, if there exist two positive scalars and such that
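A standard way to write the decay estimate of Definition 2.3 (scalars $\alpha > 0$ and $0 < \beta < 1$, initial data $\phi$, $\psi$, and maximal delay $\tau^\ast$ named here for illustration) is:

$$
\mathbb{E}\|x(k)\|^2 + \mathbb{E}\|y(k)\|^2 \le \alpha\, \beta^{\,k} \sup_{s \in \{-\tau^\ast, \dots, 0\}} \big( \mathbb{E}\|\phi(s)\|^2 + \mathbb{E}\|\psi(s)\|^2 \big), \qquad k \ge 0.
$$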
Lemma 2.4. Let and be any n-dimensional real vectors, and let be a positive semidefinite matrix. Then the following matrix inequality holds:
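The inequality of Lemma 2.4 is typically the elementary bound (symbols assumed here for illustration):

$$
a^{\mathsf T} X b + b^{\mathsf T} X a \le a^{\mathsf T} X a + b^{\mathsf T} X b,
$$

which follows from $(a-b)^{\mathsf T} X (a-b) \ge 0$ for positive semidefinite $X$.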
Lemma 2.5. Let be a positive semidefinite matrix, , and . If the series concerned are convergent, the following inequality holds:
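Lemma 2.5 is typically the discrete Jensen-type inequality for series (names assumed here for illustration): for a positive semidefinite matrix $M$, scalars $a_m \ge 0$, and vectors $x_m \in \mathbb{R}^n$,

$$
\left( \sum_{m=1}^{\infty} a_m x_m \right)^{\!\mathsf T} M \left( \sum_{m=1}^{\infty} a_m x_m \right) \le \left( \sum_{m=1}^{\infty} a_m \right) \sum_{m=1}^{\infty} a_m\, x_m^{\mathsf T} M x_m,
$$

provided the series concerned are convergent.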
In the rest of the paper, we will focus on the stability analysis of SBAMNN (2.7). By choosing an appropriate Lyapunov-Krasovskii functional, we aim to develop an LMI approach for deriving sufficient conditions under which the SBAMNN (2.7) is globally exponentially stable.
3. Main Results
Now, we are in a position to state our main results in the following theorem.
Theorem 3.1. Under Assumptions 1–5, the discrete-time stochastic BAM neural network (2.7) is globally exponentially stable in the mean square if there exist constants , diagonal matrices , , , , , and , and positive definite matrices , such that the following LMIs hold:
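The LMIs of Theorem 3.1 did not survive extraction, but the feasibility check the theorem calls for ultimately reduces to verifying positive definiteness of candidate matrix variables. A minimal sketch in Python (the matrices `P1` and `P2` are hypothetical placeholders, not the paper's actual LMI solutions):

```python
import numpy as np

def is_positive_definite(M, tol=1e-9):
    """Symmetrize M (guards against round-off) and test that all eigenvalues exceed tol."""
    M = (M + M.T) / 2.0
    return bool(np.linalg.eigvalsh(M)[0] > tol)

# Hypothetical candidate solutions of the LMIs (placeholders for illustration).
P1 = np.array([[2.0, 0.3],
               [0.3, 1.5]])
P2 = np.array([[1.8, -0.2],
               [-0.2, 2.2]])

feasible = is_positive_definite(P1) and is_positive_definite(P2)
print("candidate LMI variables positive definite:", feasible)
```

In practice one would obtain such candidates from an LMI solver (the paper uses the MATLAB LMI Control Toolbox); the eigenvalue test above only verifies a given candidate.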
Proof. Let us choose the Lyapunov-Krasovskii functional as
In order to analyze the global exponential stability of the stochastic BAM neural network, we calculate the difference of the Lyapunov-Krasovskii functional along the trajectories of the BAM neural network (2.7); then we have
By using Lemma 2.4, we have
It is clear from (2.9) that
which is equivalent to
where denotes the unit column vector having “1” element on its th row and zeros elsewhere. Consequently,
Similarly, from (3.10)–(3.14), we have
Then from (3.5)–(3.8) and (3.16)–(3.21), we obtain
where . Therefore, if the LMIs (3.1) hold, it can be concluded that . It follows that . By (3.22), the SBAMNN is globally asymptotically stable in the mean square. We are now in a position to establish the exponential stability of the SBAMNN (2.7). To this end, note that there exists a scalar such that
From (3.3), it can be verified that
where and . Choose a scalar , satisfying
Then by (3.23) and (3.24), we have
where and . Therefore, for any integer and , summing up both sides of (3.26) from 0 to with respect to , we have
Here, we note that for , , we have
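The summation argument above yields mean-square exponential decay. As an illustrative numerical check, one can simulate a scalar toy analogue of system (2.7) over many noise realizations and observe the empirical second moment decaying geometrically; the parameter values below are assumptions for illustration, not the paper's numerical example:

```python
import numpy as np

# Scalar toy analogue of (2.7): a stable linear part, a bounded (tanh)
# delayed nonlinearity, and multiplicative noise.  Parameters are illustrative.
a, b, sigma, tau = 0.5, 0.2, 0.1, 2
steps, paths = 60, 500
rng = np.random.default_rng(0)

x = np.ones((paths, steps + tau + 1))   # constant initial history x(-tau..0) = 1
for k in range(tau, tau + steps):
    w = rng.standard_normal(paths)      # discrete-time white noise increments
    x[:, k + 1] = a * x[:, k] + b * np.tanh(x[:, k - tau]) + sigma * x[:, k] * w

msq = (x ** 2).mean(axis=0)             # empirical mean square E|x(k)|^2
print(msq[tau], msq[-1])                # strong decay of the second moment
```

Because the noise is multiplicative (it vanishes at the origin), the empirical second moment decays toward zero rather than settling at a noise floor, consistent with mean-square exponential stability.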