Neural Network for Complex Systems: Theory and ApplicationsView this Special Issue
Research Article | Open Access
Ruofeng Rao, Shouming Zhong, "Stability Analysis of Impulsive Stochastic Reaction-Diffusion Cellular Neural Network with Distributed Delay via Fixed Point Theory", Complexity, vol. 2017, Article ID 6292597, 9 pages, 2017. https://doi.org/10.1155/2017/6292597
Stability Analysis of Impulsive Stochastic Reaction-Diffusion Cellular Neural Network with Distributed Delay via Fixed Point Theory
This paper investigates the stochastically exponential stability of reaction-diffusion impulsive stochastic cellular neural networks (CNN). The reaction-diffusion pulse stochastic system model characterizes the complexity of practical engineering and brings about mathematical difficulties, too. However, the difficulties have been overcome by constructing a new contraction mapping and an appropriate distance on a product space which is guaranteed to be a complete space. This is the first time to employ the fixed point theorem to derive the stability criterion of reaction-diffusion impulsive stochastic CNN with distributed time delays. Finally, an example is provided to illustrate the effectiveness of the proposed methods.
In 1988, cellular neural networks (CNN) were originally introduced in [1, 2]. Since then, dynamic neural networks have received extensive attention due to their classification, associative memory, and parallel computing tasks and the ability to solve complex optimization problems. It is generally known that almost all neural networks have similar applications ([3–12]), but the key to the success of these applications lies in the stability of the system. In fact, there are a number of literatures involved in the stability analysis of CNN ([5, 7, 12–14]). In practical engineering, time delay and pulse are unavoidable. Since neural networks usually have spatial properties, due to the existence of parallel paths of various axonal sizes and lengths, it is necessary to introduce continuous distributed delays to simulate them over a given time horizon. Besides, many evolutionary processes, especially the biological neural network in biological systems and bursting rhythm models in pathology, frequency-modulated signal processing systems, are characterized by abrupt changes of states at certain time instants. In addition, electrons have diffusion behavior in inhomogeneous media. Noise disturbance is unavoidable in real nervous systems, which is a major source of instability and poor performance in neural networks. A neural network can be stabilized or destabilized by certain stochastic inputs. The synaptic transmission in real neural networks can be viewed as a noisy process introduced by random fluctuations from the release of neurotransmitters and other probabilistic causes. Hence, the above influent factors should be also taken into consideration in stability analysis of neural networks. So, in this paper, we consider a class of impulsive stochastic reaction-diffusion cellular neural networks with distributed delay. Lyapunov function method is one of the common techniques to solve the stability of neural networks in recent decades. However, every method has its limit. Different methods lead to different criteria for stability criteria which may imply innovations. Fixed point theory and method is one of the alternative methods ([15–22]). Unlike the known literature, we try to employ Banach fixed point theory in this paper to derive the stability of impulsive stochastic reaction-diffusion cellular neural networks with distributed delay. In the next sections, we shall give some model descriptions and preliminaries and employ Banach fixed point theorem, inequality, Burkholder-Davis-Gundy inequality, and the continuous semigroup of Laplace operators to derive the stochastically exponential stability criterion of the complex system. Of course, to overcome the difficulty of the complex mathematical model, we need to formulate a new contraction mapping on a product space. Moreover, in order to guarantee the completeness of product space, we need to give a reasonable definition of distance. Finally, an example is provided to illustrate the effectiveness of the proposed result.
2. Model Description and Preliminaries
Consider the following reaction-diffusion impulsive stochastic cellular neural networks under Dirichlet boundary value:where is a bounded domain with the smooth boundary . is the state variable of the th neuron at time and in space variable for with . denotes the active function of neuron. is the rate with which the th neuron will reset its potential to the resting state in isolation when disconnected from the networks and the external inputs. , , and are elements of feedback template. Let be a real-valued Brownian motion defined on the complete probability space which has natural filtration . Denote by the space of all real-valued square integrable functions with the inner product , for which derives the norm for . is a Borel measurable function. Denote by the Laplace operator, with domain , which generates a strongly continuous semigroup , where and are the Sobolev spaces with compactly supported sets. denotes the divergence of (see, e.g., [25, 26]). is the diffusion coefficient, and time delays . Besides, initial value is continuous for . The fixed impulsive moments satisfy with . and stand for the right-hand and left-hand limit of at time , respectively. Further, suppose that , .
In this paper, we assume that
(H1) , where and are constants;
(H2) , and are Lipschitz continuous with Lipschitz constants , , and for , respectively. In addition, , .
Definition 1. For any and , a stochastic process is called a mild solution of impulsive system (1) if, for any , ; and, for any , is adapted to with and the following stochastic integral equations hold for all , a.s. for any and ,
Lemma 3 ( inequality). Assume that with , and , ; then,
Lemma 4 (Banach contraction mapping principle). Let be a contraction operator on a complete metric space ; then there exists a unique point for which .
3. Main Result: Stochastically Exponential Stability
Theorem 5. Assume that (H1) and (H2) hold. Then, CNN (1) is stochastically exponentially mean square stable if the following condition holds: where and
Proof. Firstly, we need to formulate a contraction mapping on a product space.
Let be the Banach space of all -adapted mean square continuous processes consisting of functions at with such that as , where is a positive scalar. Now, we construct an operator with as follows:Equipped with the following distance: becomes a complete metric space, where , .
Next, we are to apply contractive mapping theory to complete the proof via three steps.
Step 1. From (7), for , we claim that is mean square continuous. Indeed, let be a small enough scalar: Firstly, we estimate Next, we evaluateVia Burkholder-Davis-Gundy inequality, we can conclude that if , Due to , it is obvious that So, we have proved from (10)–(14) that is mean square continuous at with .
Next, we claim that Indeed, obviously, (11)–(13) hold for all , too. In addition, let be small enough: On the other hand, let be small enough: This together with (16) implies that (15) holds.
Step 2. We claim that Indeed, we have the following inequality similar to (10): Condition (H1) yields For any given , the assumption tells us that there exists such that Moreover, inequality gives which together with the arbitrariness of derives Besides, Using similar methods of (21) and (22), we can deduce from (24) that Similar to that of (24) and (22), we can also obtain Now, similar to that of (22), we know from (26) that Similarly, inequality yields Similar to (22), we can conclude from (28) that Hence,Burkholder-Davis-Gundy inequality and inequality derivewhich together with the arbitrariness of implies Next, we may assume that and .
In addition, one can deduce from (H1) Besides, we can estimate by means of definite integral Moreover, the arbitrariness of implies Hence, if , Combining (19), (20), (23), (30), (32), and (36) results in (18).
Step 3. Finally, we claim that is a contractive mapping on .
Indeed, from the above two steps, we know , and then .
On the other hand, for any and , Besides, it follows by the inequality that