Abstract

This paper addresses the mean square exponential stability of stochastic Cohen-Grossberg neural networks (SCGNN), whose state variables are described by stochastic nonlinear integro-differential equations. With the help of a Lyapunov functional, stochastic analysis techniques, and inequality techniques, some novel sufficient conditions for mean square exponential stability of SCGNN are given. Furthermore, we also establish some sufficient conditions for checking the exponential stability of Cohen-Grossberg neural networks with unbounded distributed delays.

1. Introduction

Consider the Cohen-Grossberg neural networks (CGNN) described by the system of ordinary differential equations

$$\dot{x}_i(t) = -a_i(x_i(t))\left[b_i(x_i(t)) - \sum_{j=1}^{n} t_{ij}\, s_j(x_j(t))\right], \quad i = 1, 2, \ldots, n, \tag{1.1}$$

where $n$ corresponds to the number of units in a neural network; $x_i(t)$ denotes the potential (or voltage) of cell $i$ at time $t$; $s_j(\cdot)$ denotes a nonlinear output function between cell $i$ and cell $j$; $a_i(\cdot)$ represents an amplification function; $b_i(\cdot)$ represents an appropriately behaved function; the connection matrix $T = (t_{ij})_{n \times n}$ denotes the strengths of connectivity between cells, and if the output from neuron $j$ excites (resp., inhibits) neuron $i$, then $t_{ij} \ge 0$ (resp., $t_{ij} \le 0$).
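To fix ideas, here is a minimal simulation sketch of (1.1) by forward Euler. The concrete choices of $a_i$, $b_i$, $s_j$, and the connection matrix below are illustrative assumptions only, chosen to satisfy the standing hypotheses of Section 2; they are not the paper's specification.

```python
import numpy as np

# Minimal forward-Euler simulation sketch of the CGNN (1.1).
# Illustrative assumptions: tanh outputs, a_i(u) = 1 + 0.5*cos(u) as a
# bounded positive amplification, and linear b_i(u) = g_i * u.

n = 3
rng = np.random.default_rng(0)
T = rng.uniform(-0.5, 0.5, size=(n, n))   # connection matrix (t_ij)
g = np.array([1.0, 1.2, 0.8])             # rates for b_i(u) = g_i * u

def a(x):                                  # amplification, 0.5 <= a_i(u) <= 1.5
    return 1.0 + 0.5 * np.cos(x)

def b(x):                                  # "appropriately behaved" function
    return g * x

def s(x):                                  # neuron output function
    return np.tanh(x)

def euler(x0, dt=1e-3, steps=20000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        dx = -a(x) * (b(x) - T @ s(x))     # right-hand side of (1.1)
        x += dt * dx
    return x

print(euler([0.3, -0.2, 0.1]))             # trajectory endpoint
```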

During hardware implementation, time delays occur due to the finite switching speed of the amplifiers and communication time; thus, it is important to incorporate delays into neural network models. Take the delayed cellular neural network as an example, which has been successfully applied to solve moving image processing problems [1]. For model (1.1), Ye et al. [2] introduced delays by considering a system of delay differential equations, and Guo and Huang [3] generalized model (1.1) to the delay differential equations (1.3). Some other more detailed justifications for introducing delays into model equations of neural networks can be found in [4, 5] and the references therein.

The delays in all of the above-mentioned papers are restricted to be discrete. As is well known, the use of constant fixed delays in models of delayed feedback provides a good approximation in simple circuits consisting of a small number of cells. However, neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, so there will be a distribution of conduction velocities along these pathways and a distribution of propagation delays. In these circumstances the signal propagation is not instantaneous and cannot be modeled with discrete delays; a more appropriate way is to incorporate continuously distributed delays. For instance, in [6], Tank and Hopfield designed an analog neural circuit with distributed delays, which can solve a general problem of recognizing patterns in a time-dependent signal. For this more satisfactory hypothesis of continuously distributed delays, we refer to [7–11]. Model (1.3) can then be modified into a system of integro-differential equations of the form

$$\dot{x}_i(t) = -a_i(x_i(t))\left[b_i(x_i(t)) - \sum_{j=1}^{n} t_{ij} \int_{-\infty}^{t} k_{ij}(t - s)\, s_j(x_j(s))\, ds + J_i\right], \quad i = 1, 2, \ldots, n, \tag{1.4}$$

with initial values given by $x_i(s) = \phi_i(s)$ for $s \in (-\infty, 0]$, where each $\phi_i$ is bounded and continuous on $(-\infty, 0]$.
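For intuition about such distributed-delay terms, consider the simplest exponential kernel $k_{ij}(u) = \beta e^{-\beta u}$ with $\beta > 0$ (an illustrative special case, not the paper's general kernel). The delayed term can then be traded for an auxiliary ordinary differential equation, the classical linear chain trick:

$$z_{ij}(t) = \int_{-\infty}^{t} \beta e^{-\beta(t - s)}\, s_j(x_j(s))\, ds \quad\Longrightarrow\quad \dot{z}_{ij}(t) = \beta\bigl(s_j(x_j(t)) - z_{ij}(t)\bigr),$$

so the integro-differential system becomes a finite-dimensional ODE system in $(x, z)$.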

In the past few years, the dynamical behaviors of stochastic neural networks have emerged as a new subject of research, mainly for two reasons: (i) in real nervous systems and in the implementation of artificial neural networks, synaptic transmission is a noisy process brought on by random fluctuations from the release of neurotransmitters and other probabilistic causes; hence, noise is unavoidable and should be taken into consideration in modeling [12–14]; (ii) it has been realized that a neural network can be stabilized or destabilized by certain stochastic effects [15–17]. Although systems are often perturbed by various types of environmental “noise” [12–14, 18], it turns out that one of the reasonable interpretations for the “noise” perturbation is the so-called white noise $\dot{w}(t)$, where $w(t)$ is the Brownian motion process, also called the Wiener process [17, 19]. More detailed mechanisms of the stochastic effects on the interaction of neurons can be found in [19]. However, because Brownian motion is nowhere differentiable, the derivative of Brownian motion cannot be defined in the ordinary way, and the stability analysis for stochastic neural networks is difficult. In [12], through constructing a novel Lyapunov-Krasovskii functional, Zhu and Cao obtained several novel sufficient conditions to ensure the exponential stability of the trivial solution in the mean square. In [13], using a linear matrix inequality (LMI) approach, Zhu et al. investigated the asymptotic mean square stability of Cohen-Grossberg neural networks with random delay. In [14], by utilizing the Poincaré inequality, Pan and Zhong derived some sufficient conditions to check the almost sure exponential stability and mean square exponential stability of stochastic reaction-diffusion Cohen-Grossberg neural networks. In [20], Wang et al. developed a linear matrix inequality (LMI) approach to study the stability of SCGNN with mixed delays. To the best of the authors' knowledge, the convergence dynamics of stochastic Cohen-Grossberg neural networks with unbounded distributed delays have not been studied yet and still remain a challenging task.

Keeping this in mind, in this paper we consider the SCGNN described by the following stochastic nonlinear integro-differential equations:

$$dx_i(t) = -a_i(x_i(t))\left[b_i(x_i(t)) - \sum_{j=1}^{n} t_{ij} \int_{-\infty}^{t} k_{ij}(t - s)\, s_j(x_j(s))\, ds + J_i\right] dt + \sum_{j=1}^{n} \sigma_{ij}(x_j(t))\, dw_j(t), \quad i = 1, 2, \ldots, n, \tag{1.5}$$

where $\sigma = (\sigma_{ij})_{n \times n}$ is the diffusion coefficient matrix and $w(t) = (w_1(t), \ldots, w_n(t))^{T}$ is an $n$-dimensional Brownian motion defined on a complete probability space $(\Omega, \mathcal{F}, P)$ with a natural filtration $\{\mathcal{F}_t\}_{t \ge 0}$ (i.e., $\mathcal{F}_t = \sigma\{w(s) : 0 \le s \le t\}$).
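To make the model concrete, here is a minimal Euler–Maruyama simulation sketch of (1.5). It assumes the exponential kernel from the linear-chain-trick remark above, the same illustrative choices of $a_i$, $b_i$, $s_j$ as before, and a simple linear diagonal diffusion $\sigma_{ii}(u) = 0.1u$; none of these choices come from the paper.

```python
import numpy as np

# Euler-Maruyama sketch for the SCGNN (1.5) with exponential delay kernels
# k_ij(u) = beta * exp(-beta * u), handled through auxiliary states z (the
# linear chain trick). All parameter choices are illustrative assumptions.

n, beta, dt, steps = 3, 2.0, 1e-3, 20000
rng = np.random.default_rng(1)
T = rng.uniform(-0.3, 0.3, size=(n, n))    # connection matrix (t_ij)
g = np.array([1.0, 1.2, 0.8])              # b_i(u) = g_i * u
J = np.zeros(n)                            # external inputs

a = lambda x: 1.0 + 0.5 * np.cos(x)        # bounded positive amplification
s = np.tanh                                # output function
sigma = lambda x: 0.1 * x                  # diagonal diffusion sigma_ii(u)

x = np.array([0.3, -0.2, 0.1])             # state x(t)
z = s(x)                                   # z_j(t): filtered delayed outputs

for _ in range(steps):
    drift = -a(x) * (g * x - T @ z + J)    # drift of (1.5); delay term via z
    dw = rng.normal(0.0, np.sqrt(dt), n)   # Brownian increments
    x = x + drift * dt + sigma(x) * dw
    z = z + beta * (s(x) - z) * dt         # chain-trick auxiliary dynamics

print(x)
```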

Obviously, model (1.5) is quite general, and it includes several well-known neural network models as special cases, such as Hopfield neural networks, cellular neural networks, and bidirectional associative memory neural networks [21, 22].

The remainder of this paper is organized as follows. In Section 2, the basic notations and assumptions are introduced. In Section 3, some criteria are proposed to determine mean square exponential stability for (1.5); furthermore, we also establish some sufficient conditions for checking the exponential stability of Cohen-Grossberg neural networks with unbounded distributed delays. In Section 4, an illustrative example is given. We conclude this paper in Section 5.

2. Preliminaries

Note that a vector $x = (x_1, x_2, \ldots, x_n)^{T} \in \mathbb{R}^{n}$ can be equipped with the common norms
$$\|x\|_{1} = \sum_{i=1}^{n} |x_i|, \qquad \|x\|_{2} = \Bigl(\sum_{i=1}^{n} x_i^{2}\Bigr)^{1/2}, \qquad \|x\|_{\infty} = \max_{1 \le i \le n} |x_i|.$$
For the sake of convenience, some standing definitions and assumptions are formulated below.

Definition 2.1 (see [17]). The trivial solution of (1.5) is said to be mean square exponentially stable if there is a pair of positive constants $\lambda$ and $K$ such that
$$\mathbb{E}\,\|x(t;\xi)\|^{2} \le K\, \mathbb{E}\,\|\xi\|^{2}\, e^{-\lambda t} \quad \text{for all } t \ge 0,$$
where $\lambda$ is also called the convergence rate.
One also assumes the following hypotheses:
$(H_1)$ There exist positive constants $\underline{a}_i$ and $\overline{a}_i$ such that $0 < \underline{a}_i \le a_i(u) \le \overline{a}_i$ for all $u \in \mathbb{R}$, $i = 1, \ldots, n$.
$(H_2)$ There exist positive constants $\gamma_i$ such that $\dfrac{b_i(u) - b_i(v)}{u - v} \ge \gamma_i$ for all $u, v \in \mathbb{R}$ with $u \ne v$.
$(H_3)$ There exist positive constants $L_j$ such that $|s_j(u) - s_j(v)| \le L_j\, |u - v|$ for all $u, v \in \mathbb{R}$.
$(H_4)$ Assume $\sigma_{ij}(x_j^{*}) = 0$ ($x^{*}$ to be determined later), and there exist positive constants $M_{ij}$ such that $|\sigma_{ij}(u) - \sigma_{ij}(v)|^{2} \le M_{ij}\, (u - v)^{2}$ for all $u, v \in \mathbb{R}$.

Remark 2.2. The activation functions are typically assumed to be continuous, bounded, differentiable, and monotonically increasing, such as functions of sigmoid type; these conditions are no longer needed in this paper. For example, when neural networks are designed for solving optimization problems in the presence of constraints (linear, quadratic, or more general programming problems), unbounded activations modeled by diode-like exponential-type functions are needed to impose constraint satisfaction [3]. In this paper, the activation functions also include some typical functions widely used in circuit design, such as the nondifferentiable piecewise-linear output function $s(u) = \frac{1}{2}(|u + 1| - |u - 1|)$ and nonmonotonically increasing functions such as Gaussian and inverse Gaussian functions; see [4, 5, 23] and the references therein.

Using the variable substitution $y_i(t) = x_i(t) - x_i^{*}$, where $x^{*} = (x_1^{*}, \ldots, x_n^{*})^{T}$ is the equilibrium of (1.5), system (1.5) can for convenience be put in the form (2.8), whose trivial solution is $y(t) \equiv 0$. As usual, the initial conditions for system (2.8) are $y(s) = \xi(s)$, $s \in (-\infty, 0]$, $\xi \in L_{\mathcal{F}_0}^{2}((-\infty, 0]; \mathbb{R}^{n})$, where $L_{\mathcal{F}_0}^{2}((-\infty, 0]; \mathbb{R}^{n})$ is the family of all $\mathcal{F}_0$-measurable, $\mathbb{R}^{n}$-valued random variables $\xi$ satisfying $\sup_{s \le 0} \mathbb{E}\,\|\xi(s)\|^{2} < \infty$.
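For orientation, writing $\alpha_i(y_i) = a_i(y_i + x_i^{*})$, $\beta_i(y_i) = b_i(y_i + x_i^{*}) - b_i(x_i^{*})$, and $\tilde{s}_j(y_j) = s_j(y_j + x_j^{*}) - s_j(x_j^{*})$ (these shifted symbols are our notation, and the cancellation of the input terms relies on the kernel normalization in (2.10)), the shifted system has the shape

$$dy_i(t) = -\alpha_i(y_i(t))\left[\beta_i(y_i(t)) - \sum_{j=1}^{n} t_{ij} \int_{-\infty}^{t} k_{ij}(t - s)\, \tilde{s}_j(y_j(s))\, ds\right] dt + \sum_{j=1}^{n} \tilde{\sigma}_{ij}(y_j(t))\, dw_j(t),$$

where $\tilde{\sigma}_{ij}(y_j) = \sigma_{ij}(y_j + x_j^{*})$ vanishes at $y_j = 0$ by $(H_4)$, so $y \equiv 0$ is indeed the trivial solution.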

The local Lipschitz condition and the linear growth condition implied by $(H_1)$–$(H_4)$ guarantee that (2.8) has a unique global solution on $t \ge 0$ for the given initial conditions [17].

If $V \in C^{2,1}(\mathbb{R}^{n} \times \mathbb{R}_{+}; \mathbb{R}_{+})$, define an operator $\mathcal{L}V$ associated with (2.8) as
$$\mathcal{L}V(y(t), t) = V_{t}(y(t), t) + V_{y}(y(t), t)\, f(y(t), t) + \tfrac{1}{2}\operatorname{trace}\bigl[\sigma^{T}(y(t))\, V_{yy}(y(t), t)\, \sigma(y(t))\bigr],$$
where $f$ and $\sigma$ denote the drift and diffusion of (2.8), $V_{t} = \partial V / \partial t$, $V_{y} = (\partial V / \partial y_{1}, \ldots, \partial V / \partial y_{n})$, and $V_{yy} = (\partial^{2} V / \partial y_{i} \partial y_{j})_{n \times n}$.

We always assume that the delay kernels $k_{ij}$ are real-valued nonnegative functions defined on $[0, \infty)$ and satisfy
$$\int_{0}^{\infty} k_{ij}(s)\, ds = 1, \qquad \int_{0}^{\infty} k_{ij}(s)\, e^{\mu s}\, ds < \infty \tag{2.10}$$
for some positive constant $\mu$. A typical example of such a delay kernel function is given by $k(s) = \frac{\beta^{m+1} s^{m}}{m!}\, e^{-\beta s}$ for $s \in [0, \infty)$, where $\beta > 0$ and $m$ is a nonnegative integer. These kernels have been used in [6, 9, 24] for various stability investigations on neural network models. In [24], these kernels are called the Gamma Memory Filter.
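For the Gamma memory kernel just quoted, both requirements in (2.10) can be checked in closed form (a routine computation with the Gamma integral, under the normalization assumed above):

$$\int_{0}^{\infty} \frac{\beta^{m+1} s^{m}}{m!}\, e^{-\beta s}\, ds = \frac{\beta^{m+1}}{m!} \cdot \frac{m!}{\beta^{m+1}} = 1, \qquad \int_{0}^{\infty} \frac{\beta^{m+1} s^{m}}{m!}\, e^{(\mu - \beta) s}\, ds = \Bigl(\frac{\beta}{\beta - \mu}\Bigr)^{m+1} < \infty \quad \text{for } 0 < \mu < \beta.$$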

3. Main Results

A point $x^{*} = (x_1^{*}, \ldots, x_n^{*})^{T} \in \mathbb{R}^{n}$ is called an equilibrium (or trivial solution) of system (1.4) if it satisfies
$$b_i(x_i^{*}) - \sum_{j=1}^{n} t_{ij}\, s_j(x_j^{*}) + J_i = 0, \quad i = 1, 2, \ldots, n. \tag{3.1}$$
Different from the bounded activation function case, where the existence of an equilibrium point is always guaranteed [25], for unbounded activations it may happen that there is no equilibrium point [26]. In fact, we have the following theorem.

Theorem 3.1. Under the assumptions $(H_1)$–$(H_3)$, if there exists a positive diagonal matrix $Q = \operatorname{diag}(q_1, \ldots, q_n)$ such that the condition (3.2) holds, then for every input $J$, system (1.4) has a unique equilibrium $x^{*}$.

Proof of Theorem 3.1. A point $x^{*}$ is an equilibrium of system (1.5) if it satisfies (3.1). Define the map $h : \mathbb{R}^{n} \to \mathbb{R}^{n}$ componentwise by $h_i(x) = b_i(x_i) - \sum_{j=1}^{n} t_{ij}\, s_j(x_j) + J_i$. Similar to the proofs of the corresponding theorems of [25], one can show that $h$ is injective on $\mathbb{R}^{n}$ and that $\|h(x)\| \to \infty$ as $\|x\| \to \infty$. Then $h$ is a homeomorphism on $\mathbb{R}^{n}$; therefore, $h(x) = 0$ has a unique root $x^{*}$ on $\mathbb{R}^{n}$. This completes the proof.

Obviously, the following inequality (3.5) implies the inequality (3.2); hence, if the assumptions $(H_1)$–$(H_4)$ are true, then system (2.8) admits an equilibrium solution. We have the following theorem on the stochastic stability of the unique equilibrium $x^{*}$.

Theorem 3.2. Under the assumptions $(H_1)$–$(H_4)$, if there exists a positive diagonal matrix $Q = \operatorname{diag}(q_1, \ldots, q_n)$ such that the condition (3.5) holds, then the trivial solution of system (2.8) is mean square exponentially stable.

Proof of Theorem 3.2. With the shifted functions $\alpha_i$, $\beta_i$, $\tilde{s}_j$, and $\tilde{\sigma}_{ij}$ introduced in Section 2 and using (3.3), model (2.8) takes the form (3.6), with the initial condition $y(s) = \xi(s)$ for $s \in (-\infty, 0]$.
Then the assumptions $(H_2)$, $(H_3)$, and $(H_4)$ carry over to the shifted functions $\beta_i$, $\tilde{s}_j$, and $\tilde{\sigma}_{ij}$ with the same constants. From assumptions (2.10) and (3.5), we can pick a constant $\lambda > 0$ satisfying the requirements (3.10) and (3.11). Define a Lyapunov functional $V$ accordingly. As $V$ has at most one point where it is not differentiable, we can calculate its rate of change by using the Dini derivative instead. Calculating the operator $\mathcal{L}V$ associated with (3.6), and applying the elementary inequality $2ab \le a^{2} + b^{2}$, we obtain (3.14); at the same time, a parallel calculation of the operator associated with (3.6) yields (3.15). From (3.14) and (3.15), it is easy to get (3.16). Using the Itô formula, for $t \ge 0$, from inequality (3.11) and inequality (3.16) we obtain (3.17). On the other hand, from inequality (3.10) we see that $V$ is bounded. According to stochastic analysis theory [17], taking expectations on both sides of (3.17) yields that there exists a positive constant $K$ such that $\mathbb{E}\,\|y(t)\|^{2} \le K\, \mathbb{E}\,\|\xi\|^{2}\, e^{-\lambda t}$; then the trivial solution of system (3.6) is mean square exponentially stable, that is to say, the trivial solution of system (2.8) is mean square exponentially stable. This completes the proof.
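To indicate the shape of the final estimate (a schematic reconstruction in our notation, not the paper's exact functional): with a weighted functional of the type

$$V(t) = e^{\lambda t} \sum_{i=1}^{n} q_i\, y_i^{2}(t) + (\text{delay compensation terms}),$$

the operator estimate $\mathcal{L}V \le 0$ combined with the Itô formula gives $\mathbb{E}\,V(t) \le \mathbb{E}\,V(0) \le K\, \mathbb{E}\,\|\xi\|^{2}$, whence

$$\mathbb{E}\,\|y(t)\|^{2} \le \frac{K}{\min_i q_i}\, \mathbb{E}\,\|\xi\|^{2}\, e^{-\lambda t},$$

which is exactly the estimate required by Definition 2.1.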

Furthermore, if we remove the noise from the system, then system (2.8) reduces to system (1.4), and the derived conditions for exponential stability of system (1.4) can be viewed as byproducts of our results on the general SCGNN. For convenience of reading, we provide the definition of global exponential stability.

Definition 3.3. The trivial solution of CGNN model (1.4) is said to be globally exponentially stable if there exist positive constants $\lambda$ and $K$, which are independent of the initial values of the system, such that $\|x(t) - x^{*}\| \le K\, \|\phi - x^{*}\|\, e^{-\lambda t}$ for all $t \ge 0$, where the initial values are given by the continuous functions $x_i(s) = \phi_i(s)$, $s \in (-\infty, 0]$.

We have the following Corollary 3.4 for system (1.4). As the main idea comes from Theorem 3.2, the proof is straightforward, and we omit it.

Corollary 3.4. Under the assumptions $(H_1)$–$(H_3)$, if there exists a positive diagonal matrix $Q = \operatorname{diag}(q_1, \ldots, q_n)$ such that the condition (3.5) holds with the noise terms removed, then the trivial solution of system (1.4) is globally exponentially stable.

Remark 3.5. To the best of our knowledge, few authors have considered exponential stability for recurrent neural networks with unbounded distributed delays; we can only find the paper [27] in this direction. However, it is assumed in [27] that the delay kernel satisfies three conditions (i), (ii), and (iii). Obviously, these requirements are strong, and our paper relaxes the assumptions on the kernel. In fact, using assumptions (2.10) instead of assumptions (i), (ii), and (iii), and choosing the same Lyapunov functionals as in Theorem 3.2 of this paper, one can obtain the same results as those in [27].

Remark 3.6. We notice that Wang et al. developed a linear matrix inequality (LMI) approach to study the stability of SCGNN with mixed delays and obtained some novel results in a very recent paper [20], where the considered model includes both discrete and distributed delays; yet the distributed delays there are bounded. In fact, a neural network model with unbounded distributed delay is more general than one with discrete and bounded distributed delays, because the distributed delay reduces to a discrete delay when the delay kernel is a Dirac delta function centered at a certain time.
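Concretely, taking the kernel to be the Dirac mass $k_{ij}(u) = \delta(u - \tau_{ij})$ (our notation for the fixed delay $\tau_{ij} > 0$) collapses the distributed term to a discrete delay:

$$\int_{-\infty}^{t} \delta(t - s - \tau_{ij})\, s_j(x_j(s))\, ds = s_j(x_j(t - \tau_{ij})).$$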

Remark 3.7. The results in this paper show that the convergence criteria for SCGNN with unbounded distributed delays are independent of the delays but dependent on the magnitude of the noise; therefore, noisy fluctuations should be taken into account adequately when designing SCGNN.

Remark 3.8. Comparing our results with the previous results derived in the literature for the usual continuously distributed delay CNN without stochastic perturbation, we find by Corollary 3.4 that the corresponding main results obtained in [8–10] follow as immediate special cases.

4. Illustrative Examples

In this section, an example is presented to demonstrate the correctness and effectiveness of the main results obtained in this paper.

Example 4.1. Consider the stochastic neural network with distributed delays given by (4.1). Figure 1 shows the schematic of the entire delayed neural network, where the nonlinear neuron transfer function is constructed using voltage operational amplifiers. The time delay is achieved using a digital signal processor (DSP) with an analog-to-digital converter (ADC) and a digital-to-analog converter (DAC). In the experiment, the time-delay circuit consists of a TMS320F2812 device and a DAC7724 device. Here, the white noise is generated by an Agilent E8257D signal generator.

In the example, let the delay kernels, activation functions, and coefficients be as specified in (4.1). Notice that each delay kernel satisfies (2.10), and each activation function satisfies the assumptions $(H_1)$–$(H_4)$. By simple computation, one can easily verify the stability condition (3.5); it follows from Theorem 3.2 that system (4.1) is mean square exponentially stable.
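As the concrete coefficients of (4.1) are not reproduced above, the following sketch checks a diagonal-dominance-style condition of the kind used in Theorem 3.2 on hypothetical data; the inequality tested below is our schematic stand-in for (3.5), not the paper's exact condition.

```python
import numpy as np

# Schematic numerical check of a diagonal-dominance-type condition:
#   2 q_i a_lo_i gamma_i > q_i a_hi_i sum_j |t_ij| L_j
#                          + L_i sum_j q_j a_hi_j |t_ji| + q_i M_i.
# Both the inequality and all parameter values are illustrative assumptions.

q     = np.array([1.0, 1.0])       # entries of the positive diagonal matrix Q
a_lo  = np.array([0.5, 0.5])       # lower bounds of a_i, cf. (H1)
a_hi  = np.array([1.5, 1.5])       # upper bounds of a_i, cf. (H1)
gamma = np.array([2.0, 2.0])       # monotonicity rates of b_i, cf. (H2)
L     = np.array([1.0, 1.0])       # Lipschitz constants of s_j, cf. (H3)
M     = np.array([0.1, 0.1])       # noise intensity bounds, cf. (H4)
t     = np.array([[0.2, -0.3],
                  [0.1,  0.4]])    # connection matrix (t_ij)

lhs = 2.0 * q * a_lo * gamma
rhs = (q * a_hi * (np.abs(t) @ L)          # own-row coupling terms
       + L * (np.abs(t).T @ (q * a_hi))    # transposed coupling terms
       + q * M)                            # noise contribution
print("stability condition holds:", bool(np.all(lhs > rhs)))  # True here
```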

5. Conclusions

In this paper, we studied nonlinear continuous-time stochastic Cohen-Grossberg neural networks (SCGNN) with unbounded distributed delays. Without assuming smoothness, monotonicity, or boundedness of the activation functions, by applying the Lyapunov functional method, stochastic analysis techniques, and inequality techniques, we gave some novel sufficient conditions for mean square exponential stability of SCGNN. Furthermore, as byproducts of our main results, we also established some sufficient conditions for checking the exponential stability of Cohen-Grossberg neural networks with unbounded distributed delays. The significance of this paper is that it offers a wider selection of network parameters for achieving the required convergence in practice.

Acknowledgments

The authors are extremely grateful to Professor Juan J. Nieto and the anonymous reviewers for their constructive and valuable comments, which have contributed a lot to the improved presentation of this paper. This work was supported by the Foundation of the Chinese Society for Electrical Engineering (2008), the Excellent Youth Foundation of the Educational Committee of Hunan Province (10B002), the National Natural Science Funds of China for Distinguished Young Scholars (50925727), and the National Natural Science Foundation of China (60876022).