Mathematical Problems in Engineering
Volume 2015, Article ID 140857, 8 pages
http://dx.doi.org/10.1155/2015/140857
Research Article

Input-to-State Stability of Stochastic Memristive Neural Networks with Time-Varying Delay

Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, Wuxi 214122, China

Received 24 September 2014; Accepted 10 January 2015

Academic Editor: Nazim I. Mahmudov

Copyright © 2015 Xu Y. Lou and Qian Ye. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper is concerned with the input-to-state stability problem for a class of memristive neural networks. We consider neural networks that take into account both stochastic effects and time-varying delay, and we introduce the notion of mean-square exponential input-to-state stability. Using stochastic analysis theory and the Itô formula for stochastic differential equations, we establish sufficient conditions for both mean-square exponential input-to-state stability and mean-square exponential stability. Numerical simulations are also provided to demonstrate the theoretical results.

1. Introduction

In recent years, memristor-based (memristive) neural network models have been extensively investigated and successfully applied to function approximation [1], associative memory [2], chaos synchronization [3], and image processing [4]. As is well known, due to the existence of time delays, stability is one of the most important issues for delayed memristive neural networks, and many stability results have been reported in the literature; for instance, see [5–9] and references therein. The authors in [5] studied the global exponential stability of a class of memristor-based recurrent neural networks with time-varying delays. Exponential stability was addressed in [6] for a class of stochastic memristor-based recurrent neural networks with time-varying delays. The authors in [7] considered the exponential dissipativity of memristor-based recurrent neural networks with time-varying delays. The recent work [8] analyzed the robust stability of uncertain memristive neural networks with norm-bounded parameter uncertainty.

Despite these rich achievements, most of the above results focus on deterministic rather than stochastic memristive neural network models. However, noise is unavoidable in modeling neural networks and should be taken into consideration. In real neural networks, as pointed out in [10], synaptic transmission can be viewed as a noisy process introduced by random fluctuations from the release of neurotransmitters and other probabilistic causes [11]. It is therefore of practical importance to analyze the dynamics of stochastic memristive neural networks (SMNNs). To the best of our knowledge, however, few studies have focused on the stability of stochastic memristor-based neural networks.

Input-to-state stability (ISS) [12] is an important property of nonlinear systems, as it provides an effective way to tackle the stabilization of nonlinear systems or the problem of robust/adaptive nonlinear control in the presence of the various uncertainties arising in control engineering applications. As we know, the states of stochastic neural networks may remain bounded instead of converging to an equilibrium point as time goes to infinity. ISS analysis opens a new path for the application of dynamic neural networks to nonlinear control. For instance, when a nonlinear controller is designed based on a model identified from experimental data using this kind of neural network, the identification error remains bounded even in the presence of model mismatch [13]. Therefore, it is significant to study the input-to-state stability of neural networks [11, 13]. Note that the ISS property of neural networks investigated in [13] was analyzed in the deterministic case and without considering delays; moreover, only asymptotic ISS, rather than exponential ISS, was considered in that work. The recent work [11] presented sufficient conditions for mean-square exponential ISS of stochastic delayed neural networks, but with constant synaptic weights. However, memristor-based synapses have been shown to be essential for performing useful computation and adaptation in large-scale artificial neural networks [14]. Moreover, memristive neural networks have been used in image denoising, edge extraction, and Chinese character recognition, and memristor synaptic networks have been shown to offer greater bionic feasibility, higher integration, and easier template replacement [4]. Therefore, it is important to explore the ISS property of delayed SMNNs in depth, which is also a difficult task due to the presence of the memristor-based synapses.

In this paper, we construct SMNNs with time-varying delay and focus on mean-square exponential input-to-state stability (eISS). The proposed method is also applicable to the synchronization problem of SMNNs. We present sufficient conditions for mean-square eISS and mean-square exponential stability based on stochastic analysis theory and the Itô formula. In addition, the derived conditions do not require the activation functions of the SMNNs to be bounded, differentiable, or monotonic.

The paper is organized as follows. In Section 2, we introduce the model of SMNNs and some preliminaries. In Section 3, some sufficient conditions are derived to guarantee the mean-square eISS and mean-square exponential stability of the proposed model. Section 4 gives an example to illustrate the effectiveness of our results. Finally, conclusions are drawn in Section 5.

Notations. Throughout the paper, ℝⁿ denotes the n-dimensional Euclidean space. For a vector x ∈ ℝⁿ, |x| denotes the Euclidean norm. L∞ denotes the class of essentially bounded functions u from [0, ∞) to ℝⁿ with ‖u‖∞ = ess sup_{t≥0} |u(t)| < ∞. co{X} denotes the closure of the convex hull of a set X. Let C([-τ, 0]; ℝⁿ) denote the family of continuous functions φ from [-τ, 0] to ℝⁿ with the uniform norm ‖φ‖ = sup_{-τ≤s≤0} |φ(s)|. Let L²_{F₀}([-τ, 0]; ℝⁿ) denote the family of all F₀-measurable, C([-τ, 0]; ℝⁿ)-valued random variables ξ such that sup_{-τ≤s≤0} E|ξ(s)|² < ∞, where E stands for the expectation operator with respect to the given probability measure P.

2. Model Description and Preliminaries

Consider an SMNN described by the following stochastic differential equations:

dx_i(t) = [ -d_i x_i(t) + Σ_{j=1}^{n} a_ij(x_j(t)) f_j(x_j(t)) + Σ_{j=1}^{n} b_ij(x_j(t - τ(t))) g_j(x_j(t - τ(t))) + u_i ] dt + σ_i(t, x_i(t), x_i(t - τ(t))) dω(t),  i = 1, 2, ..., n, (1)

where n denotes the number of neurons in the network, x_i(t) is the state of the ith neuron at time t, f_j(·) and g_j(·) represent the neuron activation functions of the jth neuron, and u_i is the external constant input of the ith neuron. τ(t) corresponds to the transmission delay, which is supposed to be differentiable and to satisfy 0 ≤ τ(t) ≤ τ and τ̇(t) ≤ μ, where τ and μ are positive constants. ω(t) is a standard Brownian motion defined on the complete probability space (Ω, F, P) with a natural filtration {F_t}_{t≥0}. σ = (σ_1, ..., σ_n)ᵀ is a Borel measurable function. D = diag(d_1, ..., d_n) is a self-feedback connection matrix. A(x(t)) = (a_ij(x_j(t)))_{n×n} and B(x(t)) = (b_ij(x_j(t - τ(t))))_{n×n} are memristive connection weights, which represent the neuron interconnection matrix and the delayed neuron interconnection matrix, respectively. In artificial neural networks, a_ij and b_ij are state-dependent bounded functions owing to the memristors working as synaptic weights; the connection weights change according to the state of each subsystem.

The memristor-based synaptic weights a_ij(x_j(t)) and b_ij(x_j(t - τ(t))) will change as the pinched hysteresis loops change [15]. According to the feature of the memristor, let

a_ij(x_j(t)) = â_ij if |x_j(t)| ≤ T_j, and a_ij(x_j(t)) = ǎ_ij if |x_j(t)| > T_j;
b_ij(x_j(t - τ(t))) = b̂_ij if |x_j(t - τ(t))| ≤ T_j, and b_ij(x_j(t - τ(t))) = b̌_ij if |x_j(t - τ(t))| > T_j, (2)

where the switching thresholds T_j > 0 and â_ij, ǎ_ij, b̂_ij, b̌_ij, i, j = 1, 2, ..., n, are known constants relating to the memristance.
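The state-dependent switching of the memristive weights can be illustrated with a small sketch; the threshold and the two weight states below are hypothetical values, not constants from the paper.

```python
def memristive_weight(x, w_inside, w_outside, threshold):
    """State-dependent synaptic weight: returns one of two known
    constants depending on whether |x| exceeds the switching threshold.
    The constants and threshold are illustrative only."""
    return w_inside if abs(x) <= threshold else w_outside

# Hypothetical memristance-related constants for one synapse.
a_hat, a_check, T = 0.4, -0.2, 1.0
w_small = memristive_weight(0.5, a_hat, a_check, T)   # |x| <= T
w_large = memristive_weight(1.5, a_hat, a_check, T)   # |x| > T
```

The weight thus jumps between two constant levels as the neuron state crosses the threshold, which is what makes the right-hand side of (1) discontinuous and motivates the set-valued treatment below.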

Let ā_ij = max{â_ij, ǎ_ij}, a̱_ij = min{â_ij, ǎ_ij}, b̄_ij = max{b̂_ij, b̌_ij}, and ḇ_ij = min{b̂_ij, b̌_ij} for i, j = 1, 2, ..., n. Using set-valued analysis [16], it follows from (1) that

dx_i(t) ∈ [ -d_i x_i(t) + Σ_{j=1}^{n} co{a̱_ij, ā_ij} f_j(x_j(t)) + Σ_{j=1}^{n} co{ḇ_ij, b̄_ij} g_j(x_j(t - τ(t))) + u_i ] dt + σ_i(t, x_i(t), x_i(t - τ(t))) dω(t). (3)

Then, by Filippov's theorem [17], there exist measurable functions a*_ij(t) ∈ co{a̱_ij, ā_ij} and b*_ij(t) ∈ co{ḇ_ij, b̄_ij}, i, j = 1, 2, ..., n, such that the solutions of the differential inclusion (3) are the trajectories of the following system for almost all t ≥ 0:

dx_i(t) = [ -d_i x_i(t) + Σ_{j=1}^{n} a*_ij(t) f_j(x_j(t)) + Σ_{j=1}^{n} b*_ij(t) g_j(x_j(t - τ(t))) + u_i ] dt + σ_i(t, x_i(t), x_i(t - τ(t))) dω(t). (4)

The initial conditions associated with system (1) are given by x_i(s) = φ_i(s), s ∈ [-τ, 0], i = 1, 2, ..., n, where each φ_i is bounded and continuous on [-τ, 0].
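For intuition, a one-neuron instance of system (1) can be simulated with the Euler-Maruyama scheme; the parameters, the tanh activations, the linear diffusion term, and the constant initial history below are all illustrative assumptions, not values from the paper.

```python
import numpy as np

# Euler-Maruyama sketch of a scalar stochastic memristive neuron.
rng = np.random.default_rng(0)

d, u, tau, T_switch = 1.5, 0.1, 0.2, 1.0   # hypothetical parameters
dt, steps = 1e-3, 5000
delay_steps = int(tau / dt)

def a_w(x):  # memristive weight switching with the current state
    return 0.3 if abs(x) <= T_switch else -0.3

def b_w(x):  # memristive weight switching with the delayed state
    return 0.2 if abs(x) <= T_switch else -0.1

x = np.zeros(steps + 1)
x[0] = 0.5  # constant initial history on [-tau, 0]

for k in range(steps):
    xk = x[k]
    xd = x[max(k - delay_steps, 0)]          # delayed state x(t - tau)
    drift = -d * xk + a_w(xk) * np.tanh(xk) + b_w(xd) * np.tanh(xd) + u
    diffusion = 0.1 * xk                     # assumed sigma(t, x, x_delayed)
    x[k + 1] = xk + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
```

Because the input u is bounded and the self-feedback dominates, the simulated state stays bounded rather than converging to zero, which is exactly the qualitative behavior the ISS notion captures.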

In order to obtain the main theorem, we require the following assumptions.

Assumption 1. For all s₁, s₂ ∈ ℝ, the neuron activation functions f_j and g_j satisfy the following Lipschitz condition: |f_j(s₁) - f_j(s₂)| ≤ l_j |s₁ - s₂| and |g_j(s₁) - g_j(s₂)| ≤ k_j |s₁ - s₂|, where l_j, k_j, j = 1, 2, ..., n, are positive Lipschitz constants.
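As a concrete instance of Assumption 1, the common activation choice f(s) = tanh(s) (used here only as an example; the paper does not fix the activations) satisfies the Lipschitz condition with constant 1, which can be spot-checked numerically:

```python
import numpy as np

# Random spot-check that |tanh(s1) - tanh(s2)| <= 1 * |s1 - s2|,
# i.e. tanh satisfies Assumption 1 with Lipschitz constant l = 1.
rng = np.random.default_rng(1)
s1 = rng.uniform(-10, 10, 10_000)
s2 = rng.uniform(-10, 10, 10_000)
mask = np.abs(s1 - s2) > 1e-9                 # avoid division by ~0
ratios = np.abs(np.tanh(s1[mask]) - np.tanh(s2[mask])) / np.abs(s1[mask] - s2[mask])
lipschitz_ok = bool(np.all(ratios <= 1.0 + 1e-9))
```

The bound holds because |tanh'(s)| = sech²(s) ≤ 1 everywhere, so the mean value theorem gives the Lipschitz constant directly.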

Assumption 2. For all t ≥ 0, there exist nonnegative constants ρ_1i and ρ_2i such that |σ_i(t, s₁, s₂) - σ_i(t, s₃, s₄)|² ≤ ρ_1i (s₁ - s₃)² + ρ_2i (s₂ - s₄)² for all s₁, s₂, s₃, s₄ ∈ ℝ, i = 1, 2, ..., n.

Assumption 3. Consider σ_i(t, 0, 0) ≡ 0, f_j(0) = 0, and g_j(0) = 0 for i, j = 1, 2, ..., n.

Under Assumptions 1 and 2 and by using the elementary inequality (a + b)² ≤ 2a² + 2b², the drift and diffusion coefficients of system (4) satisfy the Lipschitz condition. Moreover, it follows from Assumptions 1–3 that they also satisfy the linear growth condition. By Theorem 3.1 in the book [18], there then exists a unique solution of system (4), and hence of (1). Let x(t; ξ) denote the solution from the initial data ξ ∈ L²_{F₀}([-τ, 0]; ℝⁿ). It is clear that, under Assumption 3, system (1) with zero input admits the trivial solution x(t; 0) ≡ 0.
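The elementary inequality invoked above, (a + b)² ≤ 2a² + 2b², follows from (a - b)² ≥ 0; a quick random check:

```python
import numpy as np

# Verify (a + b)^2 <= 2 a^2 + 2 b^2 on random samples; it is equivalent
# to (a - b)^2 >= 0, so it holds for all real a and b.
rng = np.random.default_rng(2)
a = rng.normal(size=100_000)
b = rng.normal(size=100_000)
inequality_holds = bool(np.all((a + b) ** 2 <= 2 * a ** 2 + 2 * b ** 2 + 1e-9))
```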

Definition 4. The trivial solution of system (1) is said to be mean-square exponentially input-to-state stable if, for every ξ ∈ L²_{F₀}([-τ, 0]; ℝⁿ) and every input u ∈ L∞, there exist scalars λ > 0, M > 0, and γ > 0 such that the following inequality holds for all t ≥ 0:

E|x(t; ξ)|² ≤ M e^{-λt} sup_{-τ≤s≤0} E|ξ(s)|² + γ ‖u‖∞².

Remark 5. It follows from the definition that if the system is mean-square input-to-state stable, then the states remain bounded whenever the inputs of the system are bounded. In addition, the definition is more general than mean-square asymptotic stability or mean-square exponential stability, which becomes explicit by taking u ≡ 0.

3. Main Results

In this section, the mean-square eISS and exponential stability of the trivial solution of SMNN (1) are addressed.

Theorem 6. Under Assumptions 1–3, the trivial solution of system (1) is mean-square exponentially input-to-state stable if there exist positive scalars and constants such that the following conditions hold:

Proof. For the sake of simplicity, denote , where . Let us construct the following Lyapunov-Krasovskii functional: According to the Itô formula, it follows that where and is the weak infinitesimal operator such that Based on Assumptions 1–3, we have Using the well-known Young's inequality 2ab ≤ εa² + ε⁻¹b² (ε > 0), it follows that where , , are arbitrary positive constants.
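The form of Young's inequality used to split the cross terms, 2ab ≤ εa² + ε⁻¹b² for any ε > 0, can be verified on random samples:

```python
import numpy as np

# Verify 2ab <= eps*a^2 + b^2/eps for several eps > 0; it is equivalent
# to (sqrt(eps)*a - b/sqrt(eps))^2 >= 0, so it holds for all real a, b.
rng = np.random.default_rng(3)
a = rng.normal(size=50_000)
b = rng.normal(size=50_000)
young_ok = all(
    bool(np.all(2 * a * b <= eps * a ** 2 + b ** 2 / eps + 1e-9))
    for eps in (0.5, 1.0, 2.0)
)
```

The free parameter ε is exactly what gives the theorem its tunable constants: a different ε trades off the weight placed on the a² and b² terms.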
Substituting (18) into (17) yields
Consequently, we can derive
In order to further derive the eISS in the mean-square sense, similarly to [11], define a stopping time. Applying the Dynkin formula, we obtain that is,
Using the monotone convergence theorem and conditions (12), letting the stopping time go to infinity one can easily derive On the other hand, it follows from (13) that Combining (23) and (24) yields where , . Therefore, the inequality in Definition 4 holds, which verifies that the trivial solution of system (1) is mean-square exponentially input-to-state stable. This completes the proof of Theorem 6.

Remark 7. Since the sufficient conditions of Theorem 6 involve several free parameters, one has considerable freedom in selecting parameters so that the conditions are satisfied for a given neural network. Theorem 6 may appear to be a natural extension of the main scheme proposed in [11] for stochastic delayed neural networks, but the ISS analysis here is more challenging and the conditions are different due to the presence of the memristor-based synaptic weights. Moreover, the criterion in Theorem 6 is less conservative owing to the introduction of extra positive constants.

Remark 8. With the progress in the experimental realization of memristive devices, the implementation of artificial neural networks with memristive synapses is feasible. For instance, a simple winner-take-all architecture in neural systems is implemented in [19] based on spike-timing-dependent plasticity (STDP) using the device integration in memristor-MOS technology.
When u ≡ 0, Theorem 6 reduces to mean-square exponential stability of the SMNN. This conclusion is stated as follows.

Corollary 9. Under Assumptions 1–3, the trivial solution of system (1) with u ≡ 0 is mean-square exponentially stable if there exist positive scalars and constants such that the following conditions hold:

4. Example

In this section, a numerical example is employed to illustrate our results. Simulation results show that the derived conditions are valid.

Example 1. Consider a two-dimensional SMNN with time-varying delay: where , , , , and with

Obviously, the activation functions satisfy Assumptions 1 and 3 with . In addition, satisfies Assumption 2 with , , , , , , , and . According to the definition of , we have , . Take , , , , , and . Then, it is easy to check the conditions in (12) as follows: where

Therefore, all conditions in Theorem 6 are satisfied, which implies that the trivial solution of system (27) is mean-square exponentially input-to-state stable. Figure 1 shows the time responses of the states with different initial values. When the inputs are zero, by Corollary 9, the trivial solution of system (27) is mean-square exponentially stable, which is verified in Figure 2.

Figure 1: Transient response of variables in system (27).
Figure 2: Transient response of variables with zero inputs in system (27).
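The qualitative behavior asserted by Corollary 9, namely mean-square decay under zero input, can be reproduced with a small Monte Carlo sketch. The two-neuron parameters below are hypothetical and are not the matrices of system (27).

```python
import numpy as np

# Monte Carlo estimate of E|x(t)|^2 for a two-neuron memristive system
# with zero input; under dominant self-feedback the mean-square state
# decays, illustrating Corollary 9. All parameters are assumptions.
rng = np.random.default_rng(4)

d = np.array([2.0, 2.0])                 # self-feedback gains
dt, steps, n_paths, delay_steps = 1e-3, 3000, 200, 100

def A(x):  # state-dependent weights with a unit switching threshold
    return np.where(np.abs(x) <= 1.0, 0.3, -0.3)

X = np.zeros((n_paths, steps + 1, 2))
X[:, 0] = rng.uniform(-1, 1, size=(n_paths, 2))  # random initial states

for k in range(steps):
    xk = X[:, k]
    xd = X[:, max(k - delay_steps, 0)]            # delayed state
    drift = -d * xk + A(xk) * np.tanh(xk) + 0.2 * np.tanh(xd)
    noise = 0.1 * xk * np.sqrt(dt) * rng.standard_normal((n_paths, 2))
    X[:, k + 1] = xk + drift * dt + noise

ms_start = float(np.mean(np.sum(X[:, 0] ** 2, axis=1)))   # E|x(0)|^2
ms_end = float(np.mean(np.sum(X[:, -1] ** 2, axis=1)))    # E|x(T)|^2
```

Averaging |x(t)|² over many independent sample paths approximates the mean-square norm whose exponential decay the corollary guarantees.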

5. Conclusions

In this paper, the input-to-state stability of a class of stochastic memristive recurrent neural networks with time-varying delay has been addressed. By utilizing the Lyapunov functional method combined with stochastic analysis theory, a sufficient criterion for exponential input-to-state stability of memristive recurrent neural networks has been obtained; to the best of our knowledge, this property had not previously been discussed for stochastic memristive neural networks. Finally, simulation results have been provided to illustrate the effectiveness of the proposed criteria. Further investigations will be aimed at the input-to-output stability of stochastic memristive neural networks and the learning mechanisms of artificial neural networks with memristor-based synaptic weights.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is partially supported by National Natural Science Foundation of China (61473136), the Fundamental Research Funds for the Central Universities (JUSRP51322B), the 111 Project (B12018), and the China Scholarship Council.

References

  1. L. D. Wang, M. T. Duan, and S. K. Duan, “Memristive Chebyshev neural network and its applications in function approximation,” Mathematical Problems in Engineering, vol. 2013, Article ID 429402, 7 pages, 2013.
  2. Y. V. Pershin and M. Di Ventra, “Experimental demonstration of associative memory with memristive neural networks,” Neural Networks, vol. 23, no. 7, pp. 881–886, 2010.
  3. G. D. Zhang and Y. Shen, “Exponential synchronization of delayed memristor-based chaotic neural networks via periodically intermittent control,” Neural Networks, vol. 55, pp. 1–10, 2014.
  4. L. D. Wang, M. T. Duan, S. K. Duan, and X. F. Hu, “Neural networks based on STDP rules and memristor bridge synapses with applications in image processing,” Science China: Information Sciences, vol. 44, no. 7, pp. 920–930, 2014.
  5. G. D. Zhang, Y. Shen, and J. W. Sun, “Global exponential stability of a class of memristor-based recurrent neural networks with time-varying delays,” Neurocomputing, vol. 97, pp. 149–154, 2012.
  6. J. Li, M. Hu, and L. Guo, “Exponential stability of stochastic memristor-based recurrent neural networks with time-varying delays,” Neurocomputing, vol. 138, pp. 92–98, 2014.
  7. Z. Y. Guo, J. Wang, and Z. Yan, “Global exponential dissipativity and stabilization of memristor-based recurrent neural networks with time-varying delays,” Neural Networks, vol. 48, pp. 158–172, 2013.
  8. X. Wang, C. D. Li, and T. W. Huang, “Delay-dependent robust stability and stabilization of uncertain memristive delay neural networks,” Neurocomputing, vol. 140, pp. 155–161, 2014.
  9. A. Wu and Z. G. Zeng, “Improved conditions for global exponential stability of a general class of memristive neural networks,” Communications in Nonlinear Science and Numerical Simulation, vol. 20, no. 3, pp. 975–985, 2015.
  10. S. Haykin, Neural Networks, Prentice-Hall, Upper Saddle River, NJ, USA, 1994.
  11. Q. Zhu and J. D. Cao, “Mean-square exponential input-to-state stability of stochastic delayed neural networks,” Neurocomputing, vol. 131, pp. 157–163, 2014.
  12. E. D. Sontag, “On the input-to-state stability property,” European Journal of Control, vol. 1, no. 1, pp. 24–36, 1995.
  13. E. N. Sanchez and J. P. Perez, “Input-to-state stability (ISS) analysis for dynamic neural networks,” IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 46, no. 11, pp. 1395–1398, 1999.
  14. S. P. Wen, Z. G. Zeng, and T. W. Huang, “Associative learning of integrate-and-fire neurons with memristor-based synapses,” Neural Processing Letters, vol. 38, no. 1, pp. 69–80, 2013.
  15. S. P. Wen, Z. G. Zeng, and T. W. Huang, “Exponential stability analysis of memristor-based recurrent neural networks with time-varying delays,” Neurocomputing, vol. 97, pp. 233–240, 2012.
  16. J. P. Aubin and H. Frankowska, Set-Valued Analysis, Birkhäuser, Boston, Mass, USA, 2009.
  17. A. F. Filippov, “Classical solutions of differential equations with multi-valued right-hand side,” SIAM Journal on Control, vol. 5, no. 4, pp. 609–621, 1967.
  18. X. R. Mao, Stochastic Differential Equations and Applications, Horwood Publishing, Chichester, UK, 2nd edition, 2007.
  19. I. E. Ebong and P. Mazumder, “CMOS and memristor-based neural network design for position detection,” Proceedings of the IEEE, vol. 100, no. 6, pp. 2050–2060, 2012.