Abstract

We study the exponential synchronization problem for a class of stochastic competitive neural networks with different timescales, spatial diffusion, time-varying leakage delays, and discrete and distributed time-varying delays. By introducing several important inequalities and using the Lyapunov functional technique, an adaptive feedback controller is designed to realize exponential synchronization of the proposed competitive neural networks in terms of -norm. Based on the theoretical results obtained in this paper, the influences of the timescale, external stimulus constants, disposable scaling constants, and controller parameters on synchronization are analyzed. Numerical simulations are presented to show the feasibility of the theoretical results.

1. Introduction

Neural networks are mathematical models inspired by the structure and functional aspects of biological neural networks. Meyer-Baese et al. [1] proposed competitive neural networks with different timescales, which describe the dynamics of cortical cognitive maps with unsupervised synaptic modifications. In the competitive neural network model, there are two types of state variables: the short-term-memory (STM) variables describing the fast neural activity and the long-term-memory (LTM) variables describing the slow unsupervised synaptic modifications. Hence, there are two timescales in competitive neural networks, one corresponding to the fast change of the state and the other to the slow change of the synapses by external stimuli. The above competitive neural networks are described by the following differential equations, where is the neuron current activity level, is the synaptic efficiency, is the output of neurons, is the time constant of the neuron, denotes the connection strength of the th neuron on the th neuron, is the strength of the external stimulus, is the constant external stimulus, is the number of the constant external stimuli, and is the timescale of the STM state.
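For reference, the standard form of this model (following [1]; the symbol names below are chosen here for illustration and may differ from the original display) couples an STM equation for the neural activity with an LTM equation for the synaptic efficiency:
\[
\begin{aligned}
\text{STM:}\quad \varepsilon\,\dot{x}_i(t) &= -a_i x_i(t) + \sum_{j=1}^{N} D_{ij}\, f\bigl(x_j(t)\bigr) + B_i \sum_{l=1}^{P} m_{il}(t)\, y_l,\\
\text{LTM:}\quad \dot{m}_{il}(t) &= -m_{il}(t) + y_l\, f\bigl(x_i(t)\bigr), \qquad i=1,\dots,N,\ l=1,\dots,P,
\end{aligned}
\]
where $x_i$ is the neuron current activity level, $m_{il}$ the synaptic efficiency, $f(\cdot)$ the neuron output, $a_i$ the time constant, $D_{ij}$ the connection strength of the $j$th neuron on the $i$th neuron, $B_i$ the strength of the external stimulus, $y_l$ the constant external stimulus, $P$ the number of constant external stimuli, and $\varepsilon>0$ the timescale of the STM state.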

Synchronization problems of neural networks have been widely studied because of their extensive applications in secure communication, information processing, and chaos generator design. Synchronization of competitive neural networks with different timescales has attracted great interest [2–7]. In [7], Gan et al. studied adaptive synchronization for a class of competitive neural networks with different timescales and stochastic perturbation by constructing a Lyapunov-Krasovskii functional, where and are the discrete time-varying delay and the distributed time-varying delay, respectively; and are, respectively, the discrete time-varying delay connection strength and the distributed time-varying delay connection strength of the th neuron on the th neuron; is the disposable scaling constant.

The first term on each of the right-hand sides of (2) is called the leakage term, corresponding to a stabilizing negative feedback of the system [8, 9]. In the real world, transmission delays often appear in leakage terms, and they are then called leakage delays [10]. Leakage delays have been incorporated into neural networks by many researchers [11–14]. However, in most of the works listed above, the leakage delays are constants. As pointed out in [15–18], the delays in neural networks are usually time-varying. Hence, results for neural networks with constant delays in the leakage term are incomplete.

In addition, the dynamic behaviors of neural networks derive from the interactions of neurons, which depend not only on time but also on the spatial position of each neuron [19, 20]. From this point of view, diffusion phenomena should not be ignored in neural networks. Many good results on reaction-diffusion neural networks have been obtained [21–25]. In most of the literature listed above, the boundary conditions are assumed to be of Dirichlet type. In engineering applications, such as thermodynamics, Neumann boundary conditions also need to be considered. As far as we know, there are few results concerning the synchronization of competitive neural networks with reaction-diffusion terms under Neumann boundary conditions.

Based on the above discussion, we are concerned with the combined effects of time-varying leakage delays, stochastic perturbation, and spatial diffusion on the synchronization of competitive neural networks with Neumann boundary conditions in terms of -norm via an adaptive feedback controller, improving the previous results. To this end, we discuss the following neural networks, where is a bounded compact set with smooth boundary in space ; denotes the state of the th neuron at time and in space ; is the Laplace operator; and are the discrete time-varying delay and the distributed time-varying delay, respectively; is the time-varying leakage delay; corresponds to the transmission diffusion coefficient along the th neuron.

Let , where and ; then system (3) can be rewritten as follows, where . Without loss of generality, the input stimulus vector is assumed to be normalized with magnitude . System (4) is then simplified to

The boundary condition of system (5) takes the form given below. The initial value of system (5) takes the following form, where , , , , and is the Banach space of continuous functions mapping into with the topology of uniform convergence and the -norm ( is a positive integer) defined by
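One standard choice of such a norm, assumed here for concreteness (with $u=(u_1,\dots,u_n)^{T}$ a continuous function on $\bar{\Omega}$ and $p$ the positive integer above), is
\[
\|u\|_{p} = \Bigl( \sum_{i=1}^{n} \int_{\Omega} \bigl|u_i(x)\bigr|^{p} \,\mathrm{d}x \Bigr)^{1/p}.
\]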

In order to observe the exponential synchronization behavior of system (5), the response system with stochastic perturbation is designed as follows, where and denote the state of the response system; and constitute the synchronization error system; is the noise intensity matrix; and the stochastic disturbance is a Brownian motion defined on (where is the sample space, is the -algebra of subsets of the sample space, and is the probability measure on ), and where is the mathematical expectation operator with respect to the given probability measure . In addition, is a feedback controller of the following form, and the feedback strength is updated by the following law, where and are arbitrary positive constants.
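For orientation, adaptive schemes of this kind typically combine a linear state-feedback term with a gain driven by the accumulated spatial error. One common form, given here only as an assumed sketch and not necessarily identical to (11) and (12), is
\[
u_i(t,x) = -k_i(t)\, e_i(t,x), \qquad \dot{k}_i(t) = \alpha_i \int_{\Omega} e_i^{2}(t,x)\,\mathrm{d}x,
\]
where $e_i$ denotes the synchronization error and $\alpha_i>0$; the feedback strength $k_i(t)$ keeps growing as long as the error persists, which is what makes the controller adaptive.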

The boundary condition and initial condition for the response system (9) are given in the following forms, where and .

Subtracting (5) from (9) yields the following error system, where .

Throughout this paper, we make the following hypotheses.

There exists a positive constant such that the neuron activation function satisfies the following conditions: where

There exists a positive constant such that for all , and

There exist positive constants and such that or for all , .

There exist positive constants and such that or for all .

There exist positive constants and such that or for all .

The paper is organized as follows. In the next section, we introduce some definitions and state several lemmas which will be essential to our proofs. In Section 3, by constructing a suitable Lyapunov functional, some new criteria are obtained to ensure the exponential synchronization of systems (5) and (9) under the adaptive feedback controller (11) and (12). Numerical simulations are carried out in Section 4 to illustrate the feasibility of the main theoretical results. A brief conclusion is given in Section 5.

2. Preliminary

In this section, we introduce some notations and lemmas which will be useful in the next section.

Definition 1. The noise-perturbed response system (9) and the drive system (5) are said to be exponentially synchronized under the adaptive controller (11) and (12) based on -norm if there exist constants and such that the estimate below holds, where and are the solutions of systems (9) and (5) with different initial functions (14) and (7), respectively, and
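In other words, exponential synchronization in this sense requires an estimate of the (assumed but standard) form
\[
\mathbb{E}\,\bigl\| z(t,\cdot) - u(t,\cdot) \bigr\|_{p} \le M e^{-\lambda t}, \qquad t \ge 0,
\]
where $z$ and $u$ denote the solutions of the response and drive systems, respectively, and $M$ and $\lambda$ are the two positive constants in the definition.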

Lemma 2 (Wang [26], Itô’s formula). Let be Itô processes, and , where is the space of absolutely integrable functions and is the space of square-integrable functions. If ( is the family of all nonnegative functions on which are continuously twice differentiable in and once differentiable in ), then are still Itô processes, and the following holds, where
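For a scalar function $V \in C^{2,1}$ and an Itô process $\mathrm{d}x(t) = f(t)\,\mathrm{d}t + g(t)\,\mathrm{d}B(t)$, the formula takes the standard form
\[
\mathrm{d}V\bigl(x(t),t\bigr) = \Bigl[ V_t\bigl(x(t),t\bigr) + V_x\bigl(x(t),t\bigr)\, f(t) + \tfrac{1}{2}\,\operatorname{trace}\bigl( g^{T}(t)\, V_{xx}\bigl(x(t),t\bigr)\, g(t) \bigr) \Bigr]\,\mathrm{d}t + V_x\bigl(x(t),t\bigr)\, g(t)\,\mathrm{d}B(t),
\]
where $V_t$, $V_x$, and $V_{xx}$ denote the partial derivative in $t$, the gradient in $x$, and the Hessian in $x$, respectively.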

Lemma 3 (Mei et al. [27]). Let and let . Then

Lemma 4 (Mao [28]). Let be continuous functions. Suppose that positive constants and satisfy Then

Lemma 5 (Gu et al. [29]). Suppose that is a bounded domain of with a smooth boundary . Let be real-valued functions belonging to . Then the following identity holds, where is the gradient operator.
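This is essentially Green's first identity; one standard statement, written here in the present setting, is
\[
\int_{\Omega} v(x)\,\Delta u(x)\,\mathrm{d}x = \int_{\partial\Omega} v(x)\,\frac{\partial u(x)}{\partial \nu}\,\mathrm{d}S - \int_{\Omega} \nabla u(x)\cdot\nabla v(x)\,\mathrm{d}x,
\]
where $\nu$ is the outward unit normal on $\partial\Omega$; under the Neumann boundary condition the boundary integral vanishes.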

Lemma 6. Let be a positive integer and let be a bounded domain of with a smooth boundary . Let be a real-valued function with . Then the following inequality holds, where is the smallest positive eigenvalue of the Neumann boundary problem:

The proof of Lemma 6 is attached in Appendix.

Remark 7. If , the integral inequality (28) is the Poincaré integral inequality in [30]. The smallest eigenvalue of the Neumann boundary problem (29) is determined by the boundary of [30]. If , then

3. Exponential Synchronization Criterion

In this section, the exponential synchronization criterion for the drive system (5) and the response system (9) is obtained under the adaptive feedback controller (11) and (12). For convenience, the following notation is introduced.

Denote the quantities below, where , , , , and , , , , , , , , , and are nonnegative real numbers.

Theorem 8. Under assumptions , the nonlinear coupled neural networks (9) and (5) can be exponentially synchronized under the adaptive feedback controller (11) and (12) based on -norm, provided that the following condition is also satisfied.
.

Proof. Define the Lyapunov functional below, where .
By (10), Itô’s differential formula, and the Dini derivative, it can be deduced that
From the boundary conditions (6) and (13) and Lemma 6, we get
By Lemma 4, we obtain
It follows from (23) that
It follows from (24) that
Substituting (34)–(37) into (33), it follows from (31) and that
which implies that
Note that
Since
then
Similarly,
Applying (42) and (43) to (40), we have
where
Therefore,
Further, we obtain
From (46) and (47), we have
Hence, the nonlinear coupled neural networks (9) and (5) can be exponentially synchronized under the adaptive feedback controller (11) and (12) based on -norm. The proof of Theorem 8 is complete.

Remark 9. This is the first time that the combined effects of time-varying leakage delays, discrete time-varying delays, distributed time-varying delays, stochastic perturbation, and spatial diffusion on the exponential synchronization of competitive neural networks under an adaptive feedback controller have been considered. The neural networks discussed in [6, 7, 31] are special cases of the model in this paper. In this sense, our results are more general.

Remark 10. In Theorem 8, sufficient conditions are derived to achieve adaptive synchronization for the proposed competitive neural networks. Compared with the adaptive synchronization criteria given in [7], the conditions obtained in Theorem 8 depend not only on the timescale but also on the controller parameter . This is beneficial for designing an adaptive controller that realizes adaptive synchronization of the neural networks. Therefore, the criteria derived in this paper have a wider range of application.

4. Numerical Simulations

In this section, numerical examples are presented to demonstrate the main results of Theorem 8.

In system (5), we choose . Then system (5) takes the following form, where , , , and . The parameters of (49) are assumed as follows: , , , , , , , , , , , , , , , , , , , , , and . The initial conditions of system (49) are chosen as follows, where .

Numerical simulation illustrates that the reaction-diffusion neural network (49) with boundary condition (6) and the initial condition (50) exhibits a chaotic behavior (see Figure 1).

The noise-perturbed response system is described by the following equations, where . The adaptive controller is

The initial conditions for the response system (51) are chosen as where .

Evidently, , , , and . Let , , , and . By simple computation, it is easy to verify that assumptions are satisfied. According to Theorem 8, the drive system (49) and the response system (51) are exponentially synchronized based on -norm. Numerical simulation illustrates our results (see Figure 2).
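To give a concrete sense of how such a verification can be carried out numerically, the following is a minimal sketch of a drive-response simulation. It is not the exact pair (49)/(51): the delays are dropped, all parameter values are illustrative, and the controller is taken in the assumed form u = -k(t)e with gain law k̇_i = α ∫ e_i² dx. The sketch uses a one-dimensional spatial domain with zero-flux (Neumann) boundaries and an explicit Euler-Maruyama scheme.

```python
# Minimal sketch of adaptive drive-response synchronization for a stochastic
# reaction-diffusion competitive network.  The dropped delays, all parameter
# values, and the controller form u = -k(t)*e are illustrative assumptions;
# this is not the exact pair (49)/(51).
import numpy as np

rng = np.random.default_rng(0)

# 1-D spatial grid on Omega = [-1, 1] with zero-flux (Neumann) boundaries
Nx = 41
x = np.linspace(-1.0, 1.0, Nx)
dx = x[1] - x[0]

def laplacian(u):
    """Finite-difference Laplacian along the last axis with zero-flux ends."""
    lap = np.empty_like(u)
    lap[..., 1:-1] = (u[..., 2:] - 2.0 * u[..., 1:-1] + u[..., :-2]) / dx**2
    lap[..., 0] = 2.0 * (u[..., 1] - u[..., 0]) / dx**2    # ghost-point Neumann
    lap[..., -1] = 2.0 * (u[..., -2] - u[..., -1]) / dx**2
    return lap

# illustrative two-neuron parameters
eps = 0.5                         # STM timescale
a = np.array([1.0, 1.0])          # leakage rates
D = np.array([[ 2.0, -0.1],       # connection strengths
              [-5.0,  3.0]])
B = np.array([1.0, 1.0])          # external-stimulus strengths
dcoef = np.array([0.1, 0.1])      # diffusion coefficients
sigma = 0.1                       # noise intensity (proportional to the error)
alpha = 1.0                       # adaptive-gain parameter
f = np.tanh                       # activation function

dt, T = 1e-3, 5.0

# drive states (xs, S), response states (zs, w), adaptive gains k
xs = 0.1 * rng.standard_normal((2, Nx))
S = 0.1 * rng.standard_normal((2, Nx))
zs = xs + 0.5
w = S + 0.5
k = np.zeros(2)

for n in range(int(T / dt)):
    fx, fz = f(xs), f(zs)
    # drive system: STM with diffusion, normalized LTM
    dxs = (-a[:, None] * xs + D @ fx + B[:, None] * S) / eps + dcoef[:, None] * laplacian(xs)
    dS = -S + fx
    # synchronization error and adaptive feedback u = -k(t) * e
    e = zs - xs
    u = -k[:, None] * e
    dW = np.sqrt(dt) * rng.standard_normal((2, Nx))
    # response system: same dynamics plus controller and error-proportional noise
    dzs = (-a[:, None] * zs + D @ fz + B[:, None] * w + u) / eps + dcoef[:, None] * laplacian(zs)
    dw = -w + fz
    xs += dt * dxs
    S += dt * dS
    zs += dt * dzs + sigma * e * dW        # noise vanishes together with the error
    w += dt * dw
    # gain update: k_i' = alpha * integral of e_i^2 over Omega
    k += dt * alpha * (e**2).sum(axis=1) * dx

err = np.sqrt(((zs - xs) ** 2).sum(axis=1) * dx)   # spatial L2 error per neuron
print("final L2 errors:", err, "final adaptive gains:", k)
```

The gain stops growing once the error is suppressed, which is the qualitative behavior an adaptive update law such as (12) is designed to produce.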

Remark 11. The conclusions given in Theorem 8 show that the adaptive synchronization criteria for competitive neural networks depend on the timescale , the disposable scaling constant , and the external stimulus . When increases, and increases or decreases, respectively, assumption can be satisfied more easily, and the adaptive synchronization of the competitive neural networks is more easily realized. The dynamical behaviors of the synchronization errors between systems (49) and (51) with different timescales, disposable scaling constants, and external stimuli, respectively, are shown in Figures 3–5.

Remark 12. By (48), it is clear that the controller parameter determines the rate of synchronization. That is, the larger the controller parameter is, the faster systems (49) and (51) realize synchronization. Hence, our results are consistent with the practical situation. The dynamical behaviors of the synchronization errors between systems (49) and (51) with different controller parameters are shown in Figure 6.

The parameter is another controller parameter in the feedback controller (11). Numerical simulations indicate that increasing the controller parameter helps the competitive neural networks realize synchronization. The dynamical behaviors of the synchronization errors between systems (49) and (51) with different controller parameters are shown in Figure 7. However, we have not been able to prove this; it remains an interesting open problem.

Remark 13. In many cases, two-neuron networks show the same behavior as large-size networks, and many research methods used for two-neuron networks can be applied to large-size networks. Therefore, a two-neuron network is used here as an example to illustrate our theoretical results. In addition, the parameter values are selected, somewhat arbitrarily, so that neural network (49) exhibits chaotic behavior.

5. Conclusion

In this paper, an adaptive feedback controller was designed to achieve exponential synchronization for stochastic competitive neural networks with spatial diffusion, time-varying leakage delays, and discrete and distributed time-varying delays based on -norm. Evidently, the model discussed in this paper is more general than the corresponding models with constant delays. By constructing a Lyapunov functional and using stochastic analysis theory, novel exponential synchronization criteria depending on the timescale , the external stimulus constants , the disposable scaling constants , and the controller parameter were obtained. The theoretical analysis shows that competitive neural networks can achieve exponential synchronization more easily by increasing the timescale and the disposable scaling constants or by reducing the external stimulus constants, respectively. Numerical examples and simulations were given to show the effectiveness of the obtained results.

Figures 8 and 9 show that increasing the diffusion coefficients or decreasing the diffusion space is beneficial for competitive neural networks with reaction-diffusion terms to realize synchronization. However, the exponential synchronization criteria obtained in this paper are independent of the diffusion coefficients and the diffusion space. They cannot reflect the influence of the diffusion coefficients and diffusion space on synchronization, which limits the scope of application of the results. Therefore, we will investigate this in future work.

Appendix

Proof of Lemma 6. According to the eigenvalue theory of elliptic operators, the Laplacian on with the Neumann boundary conditions is a self-adjoint operator with compact inverse, so there exists a sequence of nonnegative eigenvalues , as well as a sequence of corresponding eigenfunctions , for the Neumann boundary problem (29); that is,
Multiplying the second equation of (A.1) by and integrating over , by Green's formula (27) we obtain
It is easy to show that (A.2) also holds for .
The sequence of eigenfunctions generates an orthonormal basis of . Hence, for any , there exists a sequence of constants such that
It follows from (A.2) and (A.3) that
The proof of Lemma 6 is complete.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Science and Technology Support Program of Hebei Academy of Sciences (17606), the Science and Technology Support Program of Hebei Province (16290106D), and the National Natural Science Foundation of China (61305076).