Abstract

The stability of reaction-diffusion recurrent neural networks (RNNs) with continuously distributed delays and stochastic influence is considered. Some new sufficient conditions guaranteeing the almost sure exponential stability and the mean square exponential stability of an equilibrium solution are obtained. Lyapunov's functional method, M-matrix properties, some inequality techniques, and the nonnegative semimartingale convergence theorem are used in our approach. The obtained results improve some published results.

1. Introduction

For decades, studies have focused intensively on recurrent neural networks (RNNs) because of their successful hardware implementation and their various applications such as classification, associative memories, parallel computation, optimization, signal processing, and pattern recognition; see, for example, [1–3]. These applications rely crucially on the analysis of the dynamical behavior of neural networks. Recently, it has been realized that axonal signal transmission delays often occur in various neural networks and may cause undesirable dynamic network behaviors such as oscillation and instability. Consequently, the stability analysis problems for delayed recurrent neural networks (DRNNs) have drawn considerable attention. To date, a great number of results on DRNNs have been reported in the literature; see, for example, [4–9] and the references therein. To a large extent, the existing literature on theoretical studies of DRNNs is predominantly concerned with deterministic differential equations. The literature dealing with the inherent randomness associated with signal transmission seems to be scarce; such studies are, however, important for understanding the dynamical characteristics of neuron behavior in random environments, for two reasons: (i) in real nervous systems and in the implementation of artificial neural networks, synaptic transmission is a noisy process brought on by random fluctuations in the release of neurotransmitters and other probabilistic causes; hence, noise is unavoidable and should be taken into consideration in modeling [10]; (ii) it has been realized that a neural network can be stabilized or destabilized by certain stochastic effects [11, 12].

Although systems are often perturbed by various types of environmental “noise”, it turns out that one reasonable interpretation of the “noise” perturbation is the so-called white noise $\dot{w}(t)$, where $w(t)$ is a Brownian motion process, also called a Wiener process [12, 13]. A more detailed account of the mechanism of stochastic effects on the interaction of neurons and on analog circuit implementation can be found in [13, 14]. However, because Brownian motion is nowhere differentiable, the derivative of Brownian motion cannot be defined in the ordinary way, and the stability analysis of stochastic neural networks is therefore difficult. Some initial results have just appeared; see, for example, [11, 15–23]. In [11, 15], Liao and Mao discussed the exponential stability of stochastic recurrent neural networks (SRNNs); in [16], the authors continued their research and discussed the almost sure exponential stability of a class of stochastic CNNs with discrete delays by using the nonnegative semimartingale convergence theorem; in [18], the exponential stability of SRNNs was investigated via a Razumikhin-type theorem; in [17], Wan and Sun investigated the mean square exponential stability of stochastic delayed Hopfield neural networks (HNNs); in [19], Zhao and Ding studied the almost sure exponential stability of SRNNs; in [20], Sun and Cao investigated the $p$th moment exponential stability of stochastic recurrent neural networks with time-varying delays.

The delays in all of the above-mentioned papers are largely restricted to being discrete. As is well known, the use of constant fixed delays in models of delayed feedback provides a good approximation in simple circuits consisting of a small number of cells. However, neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths. Thus there will be a distribution of conduction velocities along these pathways and a distribution of propagation delays. In these circumstances, signal propagation is not instantaneous and cannot be modeled with discrete delays; a more appropriate way is to incorporate continuously distributed delays. For instance, in [24], Tank and Hopfield designed an analog neural circuit with distributed delays, which can solve a general problem of recognizing patterns in a time-dependent signal. For the more satisfactory hypothesis of incorporating continuously distributed delays, we refer to [5, 25, 26]. In [27], Wang et al. developed a linear matrix inequality (LMI) approach to study the stability of SRNNs with mixed delays. To the best of the authors' knowledge, few authors have investigated the convergence dynamics of SRNNs with unbounded distributed delays. On the other hand, if the RNNs depend only on time, or on the current time and a time delay, the model is in fact an ordinary differential equation or a functional differential equation. In practice, however, the diffusion phenomena cannot be ignored in neural networks and electric circuits once electrons transport in a nonuniform electromagnetic field. Hence, it is essential to consider the state variables varying with both time and space. Neural networks with diffusion terms can commonly be expressed by partial differential equations [28–30].

Keeping this in mind, in this paper we consider the SRNNs described by the following stochastic reaction-diffusion RNNs. In the above model, is the number of neurons in the network, is the space variable, is the state variable of the $i$th neuron at time and in space variable , and and denote the output of the $j$th unit at time and in space variable ; , , and are constants: represents the rate with which the $i$th unit resets its potential to the resting state in isolation when disconnected from the network and the external stochastic perturbation, and is a positive constant; and weight the strength of the $j$th unit on the $i$th unit at time . Moreover, are independent scalar standard Wiener processes on the complete probability space with the natural filtration generated by the standard Wiener process , which is independent of , where we associate with the canonical space generated by and denote by the associated $\sigma$-algebra generated by with the probability measure . Furthermore, we assume the following boundary conditions, where the smooth function is a diffusion operator, is a compact set with smooth boundary and measure in , and and are the boundary value and initial value, respectively.

For the sake of convenience, some standing definitions and assumptions are formulated below: is differentiable and , for ; , , and are Lipschitz continuous with positive Lipschitz constants , , , respectively, and , for , ; for , there is a positive constant such that

Remark 1.1. The authors in [28] (see , , ) and [30] (see , , ) also studied the convergence dynamics of (1.1) under the following foundational conditions: the kernels , are real-valued nonnegative piecewise continuous functions defined on and satisfy: ; for each , we have ; for each , there is a positive constant such that ; for each , there is a positive constant such that . Take, as an example, the widely applied delay kernels mentioned in [31–33], given by for , where , . One can easily find that the conditions on the kernels in [28, 30] cannot all be satisfied at the same time. Therefore, the applicability of the main results in [28, 30] is somewhat limited because of the obviously restrictive assumptions on the kernels. The main purpose of this paper is to further investigate the convergence of stochastic reaction-diffusion RNNs with more general kernels.
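As a hypothetical illustration of the issue raised in this remark, take the gamma-type kernels $K(s) = s^{m}\beta^{m+1}e^{-\beta s}/m!$ ($m \ge 0$ an integer, $\beta > 0$), a common choice in the distributed-delay literature; the exact kernels used in the cited works are not reproduced here. The following Python sketch numerically checks the kinds of conditions at issue: normalization, a finite mean delay, and an exponential moment, which is finite only for $\mu < \beta$.

```python
# A minimal numerical check, assuming gamma-type delay kernels
# K(s) = s^m * beta^(m+1) * exp(-beta*s) / m!  (m >= 0, beta > 0) as a
# hypothetical stand-in for the kernels discussed in the remark.
import math
from scipy.integrate import quad

def kernel_moment(m, beta, mu=0.0, power=0):
    """Integral of s^power * e^(mu*s) * K(s) over [0, inf)."""
    c = beta ** (m + 1) / math.factorial(m)
    integrand = lambda s: c * s ** (m + power) * math.exp((mu - beta) * s)
    value, _ = quad(integrand, 0, math.inf)
    return value

for m, beta in [(0, 1.0), (1, 1.0), (2, 0.5)]:
    norm = kernel_moment(m, beta)             # = 1: kernel is normalized
    mean = kernel_moment(m, beta, power=1)    # = (m+1)/beta: finite mean delay
    mu = 0.5 * beta                           # exponential moment finite only for mu < beta
    expm = kernel_moment(m, beta, mu=mu)      # = (beta/(beta-mu))^(m+1)
    print(f"m={m}, beta={beta}: norm={norm:.3f}, "
          f"mean delay={mean:.3f}, exp moment={expm:.3f} (mu={mu})")
```

The exponential moment blows up as $\mu \to \beta$, so no single constant $\mu$ works uniformly across the family, which suggests why kernel conditions of the type imposed in [28, 30] need not hold simultaneously for such standard kernels.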

2. Preliminaries

Let , and let denote the space of scalar-valued Lebesgue measurable functions on , which is a Banach space under the -norm , ; we then define the norm as . Assume to be an -measurable -valued random variable, where, for example, is restricted on , and is the space of all continuous -valued functions defined on with the norm . Clearly, (1.1) admits an equilibrium solution .

Definition 2.1. Equation (1.1) is said to be almost surely exponentially stable if there exists a positive constant such that, for each pair of and ,

Definition 2.2. Equation (1.1) is said to be exponentially stable in mean square if there exists a pair of positive constants and such that

Lemma 2.3 (semimartingale convergence theorem [12]). Let $A(t)$ and $U(t)$ be two continuous adapted increasing processes on $t \ge 0$ with $A(0) = U(0) = 0$ a.s., let $M(t)$ be a real-valued continuous local martingale with $M(0) = 0$ a.s., and let $\zeta$ be a nonnegative $\mathcal{F}_0$-measurable random variable. Define $X(t) = \zeta + A(t) - U(t) + M(t)$ for $t \ge 0$. If $X(t)$ is nonnegative, then $\{\lim_{t\to\infty} A(t) < \infty\} \subset \{\lim_{t\to\infty} X(t) < \infty\} \cap \{\lim_{t\to\infty} U(t) < \infty\}$ a.s., where $B \subset D$ a.s. means $P(B \cap D^{c}) = 0$. In particular, if $\lim_{t\to\infty} A(t) < \infty$ a.s., then, for almost all $\omega$, $\lim_{t\to\infty} X(t) < \infty$ and $\lim_{t\to\infty} U(t) < \infty$; that is, both $X(t)$ and $U(t)$ converge to finite random variables.
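As a toy illustration of how the lemma is used (a sketch with illustrative parameters, not the paper's computation), consider $dX = -\lambda X\,dt + \sigma X\,dw$, so that $X(t) = X(0) - U(t) + M(t)$ with $A(t) \equiv 0$, $U(t) = \int_0^t \lambda X(s)\,ds$, and $M(t) = \int_0^t \sigma X(s)\,dw(s)$; since $A(t) \equiv 0 < \infty$, the lemma predicts that $X(t)$ and $U(t)$ both converge to finite limits almost surely.

```python
# Toy Euler-Maruyama illustration of Lemma 2.3 (illustrative values, not
# from the paper): dX = -lam*X dt + sig*X dw gives a nonnegative
# semimartingale with A(t) = 0, so X(t) and U(t) = int lam*X ds converge.
import numpy as np

rng = np.random.default_rng(0)
lam, sig = 1.0, 0.5              # drift rate and noise intensity (assumed)
dt, n_steps = 1e-3, 200_000      # step size and horizon T = 200
x, u = 1.0, 0.0                  # X(0) = 1, U(0) = 0
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt))
    u += lam * x * dt                      # increasing process U(t)
    x += -lam * x * dt + sig * x * dw      # one Euler-Maruyama step
print(f"X(T) = {x:.3e}  (tends to 0),  U(T) = {u:.3f}  (finite limit)")
```

This mirrors the role the lemma plays in the proof of Theorem 3.1, where the nonnegative process is built from the Lyapunov functional.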

Lemma 2.4 (see [34]). Let $M$ be a $Z$-matrix. Then $M$ is a nonsingular $M$-matrix if and only if there exists a vector $\xi = (\xi_1, \ldots, \xi_n)^{T}$ with $\xi_i > 0$, $i = 1, \ldots, n$, such that $M\xi > 0$ componentwise.

Furthermore, an $M$-matrix is a $Z$-matrix with eigenvalues whose real parts are positive. In mathematics, the class of $Z$-matrices consists of those matrices whose off-diagonal entries are less than or equal to zero; that is, a $Z$-matrix $Z = (z_{ij})$ satisfies $z_{ij} \le 0$ for $i \ne j$.
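Mirroring the Matlab eigenvalue computation used later in Example 4.1, the following Python sketch (with an illustrative $2 \times 2$ matrix, not the one from the paper) verifies both characterizations: the eigenvalue test above and the positive-vector test of Lemma 2.4.

```python
# A sketch of the M-matrix tests in Lemma 2.4 (the matrix below is
# illustrative, not the paper's example matrix).
import numpy as np

def is_nonsingular_M_matrix(M, tol=1e-12):
    M = np.asarray(M, dtype=float)
    off_diag = M - np.diag(np.diag(M))
    if np.any(off_diag > tol):       # must be a Z-matrix: m_ij <= 0 for i != j
        return False
    return bool(np.all(np.linalg.eigvals(M).real > tol))  # eigenvalues in right half-plane

M = np.array([[ 2.0, -1.0],
              [-0.5,  3.0]])
print(is_nonsingular_M_matrix(M))    # True

# Positive-vector witness as in Lemma 2.4: the inverse of a nonsingular
# M-matrix is entrywise nonnegative, so xi = M^{-1} * ones satisfies
# M @ xi = ones > 0 (and xi is strictly positive for this M).
xi = np.linalg.solve(M, np.ones(2))
print(xi, M @ xi)                    # xi > 0 componentwise, M @ xi = [1, 1]
```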

3. Main Results

Theorem 3.1. Suppose that the assumptions , , , , and hold and that is a nonsingular $M$-matrix, where . Then the trivial solution of system (1.1) is almost surely exponentially stable and also exponentially stable in mean square.

Proof. From and Lemma 2.4, there exist constants , such that This can also be expressed as follows: From the assumption , one can choose a constant such that Consider the following Lyapunov functional: From the boundary condition , we get Using the Itô formula, for , we have Notice that Substituting inequality (3.9) into (3.8), it is easy to calculate that On the other hand, we observe that From inequality (3.4), we have Hence, is bounded. Integrating both sides of (3.10) with respect to , we have Therefore, we have It is obvious that the right-hand side of (3.14) is a nonnegative martingale. From Lemma 2.3, it can easily be seen that that is On the other hand, since , taking expectations on both sides of equality (3.14) yields From (3.16) and (3.17), the trivial solution of system (1.1) is almost surely exponentially stable and also exponentially stable in mean square. This completes the proof.

Removing the reaction-diffusion term from system (1.1), we investigate the following stochastic recurrent neural networks with unbounded distributed delays: We have the following Corollary 3.2 for system (3.18). The derived conditions for almost sure exponential stability and exponential stability in mean square can be viewed as byproducts of Theorem 3.1, so the proof is trivial and is omitted.
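As a numerical illustration, the following Euler-Maruyama sketch assumes a network of the standard form $dx_i(t) = \bigl[-a_i x_i(t) + \sum_j b_{ij} f_j(x_j(t)) + \sum_j c_{ij} \int_0^{\infty} K(s)\, g_j(x_j(t-s))\,ds\bigr]dt + \sigma_i(x_i(t))\,dw_i(t)$ for system (3.18); the coefficients, activations, noise term, and kernel below are hypothetical placeholders rather than the paper's, and the unbounded distributed delay is truncated to a finite history window.

```python
# A hedged simulation sketch for a network of the assumed form of (3.18);
# every numeric value here is a hypothetical placeholder.
import numpy as np

rng = np.random.default_rng(1)
n = 2
a = np.array([3.0, 3.0])                     # self-feedback rates a_i > 0
B = np.array([[0.2, -0.1], [0.1, 0.2]])      # instantaneous connection weights b_ij
C = np.array([[0.1, 0.1], [-0.1, 0.1]])      # distributed-delay weights c_ij
f = g = np.tanh                              # Lipschitz activations (constant 1)
sigma = lambda v: 0.1 * v                    # diagonal Lipschitz noise intensity

dt, T, tail = 1e-3, 10.0, 5.0                # step, horizon, truncated history length
n_steps, n_hist = int(T / dt), int(tail / dt)
beta = 1.0
s = np.arange(n_hist) * dt
K = beta * np.exp(-beta * s) * dt            # kernel K(s) = beta*e^(-beta*s), dt pre-weighted

hist = np.full((n_hist, n), 0.5)             # constant initial history x(s) = 0.5, s <= 0
x = hist[-1].copy()
for _ in range(n_steps):
    # Riemann sum for int K(s) g(x(t - s)) ds; hist[-1] is the most recent state
    delayed = K @ g(hist[::-1])
    drift = -a * x + B @ f(x) + C @ delayed
    x = x + drift * dt + sigma(x) * rng.normal(0.0, np.sqrt(dt), size=n)
    hist = np.roll(hist, -1, axis=0)
    hist[-1] = x                             # slide the history window forward
print("x(T) =", x)                           # decays toward 0 for these values
```

With these placeholder values the self-feedback rates dominate the Lipschitz-weighted connection strengths, so the sample paths decay toward zero, which is consistent with the kind of M-matrix condition stated in Corollary 3.2.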

Corollary 3.2. Suppose that the assumptions , , , and hold and that is a nonsingular $M$-matrix, where . Then the trivial solution of system (3.18) is almost surely exponentially stable and also exponentially stable in mean square.

Furthermore, if we remove the noise from the system, then system (3.18) reduces to the following system: To investigate the stability of model (3.20), we modify the assumption as follows: , , and are Lipschitz continuous with positive Lipschitz constants , , respectively, and , for , .

The conditions for the exponential stability of system (3.20) can be obtained directly from Corollary 3.2.

Corollary 3.3. Suppose that the assumptions , , , and hold and that is a nonsingular $M$-matrix, where . Then the trivial solution of system (3.20) is exponentially stable.

4. An Illustrative Example

In this section, a numerical example is presented to illustrate the correctness of our main result.

Example 4.1. Consider the following two-dimensional stochastic reaction-diffusion recurrent neural network with unbounded distributed delays: where , ; ; , , , ; , , , ; ; , , , ; , , , for . In the example, let , where is a positive constant. According to Theorem 3.1, by simple computation, we get Therefore, With the help of Matlab, one can quickly obtain the eigenvalues of the matrix , namely and ; by Lemma 2.4, is a nonsingular $M$-matrix. Therefore, all conditions of Theorem 3.1 hold; that is to say, the trivial solution of system (4.1) is almost surely exponentially stable and also exponentially stable in mean square. Choosing , these conclusions can be verified by the numerical simulations shown in Figures 1–4.

5. Conclusions

In this paper, stochastic recurrent neural networks with unbounded distributed delays and reaction-diffusion terms have been investigated. The features of both stochastic systems and reaction-diffusion systems have been taken into account in the neural networks. The proposed results generalize and improve some earlier published results. The results obtained in this paper are independent of the magnitude of the delays and of the diffusion effect, which implies that strong self-regulation is dominant in the networks. If we remove the noise and reaction-diffusion terms from the system, the derived conditions for the stability of general deterministic neural networks can be viewed as byproducts of our results.

Acknowledgments

The authors are extremely grateful to Professor Binggen Zhang and the anonymous reviewers for their constructive and valuable comments, which have contributed a lot to the improved presentation of this paper. This work was supported by the Key Project of Chinese Ministry of Education (2011), the Foundation of Chinese Society for Electrical Engineering (2008), the Excellent Youth Foundation of Educational Committee of Hunan Province of China (10B002), the National Natural Science Funds of China for Distinguished Young Scholar (50925727), the National Natural Science Foundation of China (60876022), and the Scientific Research Fund of Yunnan Province of China (2010ZC150).