Mathematical Problems in Engineering

Volume 2017, Article ID 3273758, 10 pages

https://doi.org/10.1155/2017/3273758

## Stability Analysis for Stochastic Neutral-Type Memristive Neural Networks with Time-Varying Delay and S-Type Distributed Delays

Department of Mathematics, Nanchang University, Nanchang, Jiangxi Province 330031, China

Correspondence should be addressed to Zuoliang Xiong; xiong1601@163.com

Received 5 August 2016; Accepted 9 November 2016; Published 12 February 2017

Academic Editor: Qingling Zhang

Copyright © 2017 Changjian Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

In this paper, we consider the input-to-state stability of a class of stochastic neutral-type memristive neural networks. Neutral terms and S-type distributed delays are taken into account in our system. Using stochastic analysis theory and the Itô formula, we obtain conditions for the mean-square exponential input-to-state stability of the system. A numerical example is given to illustrate the correctness of our conclusions.

#### 1. Introduction

The complex network is considered to be one of the leading research subjects of science and technology in the twenty-first century; it includes neural networks, communication networks, power networks, and social networks. In particular, research on neural networks is very broad, covering control theory, stability theory, and bifurcation theory (see [1–4]). As a special case of complex networks, memristive neural networks can better simulate the human brain, so they have also become a focus for many scholars. The memristor is a kind of nonlinear resistor with a memory function that can simulate the mechanism of human neurons and synapses. Recently, memristive neural network systems have been successfully applied to associative memory, chaos synchronization, image processing, and so on. Although many achievements have been made in applications, most current research efforts on memristive neural networks focus on deterministic models (see [5–7]). However, in reality, noise disturbances always exist and may cause instability and other poor performance. It is therefore necessary to study memristive neural networks with random disturbances, owing to their theoretical and practical significance.

In addition, in order to deal with dynamic images, we need to introduce delays between the signal transmissions of neurons, which leads to memristive neural networks with delays. In practical applications, time-varying delays are more common in dynamic systems, because a constant delay is only an idealized approximation of a time-varying one. Many scholars have made great achievements in this respect (see [8–11]). Here, we have to point out that neural networks are composed of a large number of neurons, many of which cluster into spherical or layered structures, interact with each other, and are connected through axons to a variety of complex neural pathways. Thus, distributed delays exist in the transmission of signals. Usually, discrete delays and distributed delays cannot contain each other in the same system; however, as shown in [12], discrete delays and distributed delays can be written in a unified form under the Lebesgue–Stieltjes integral, that is, as S-type distributed delays (see [13, 14]). In fact, the differential expression of such systems is related not only to the derivative of the current state but also to the derivative of past states; such a system is called a neutral delay neural network. Therefore, it is very significant to study stochastic neutral-type memristive neural networks with time-varying delays and S-type distributed delays.

The control input has a great influence on the dynamic behavior of a neural network. Input-to-state stability (ISS) was first proposed by Sontag to check robust stability and is more general than traditional stability notions, which include asymptotic stability, exponential stability, and almost sure stability. In [15, 16], global asymptotic stability for a class of discrete-time recurrent neural networks was studied. In [17–22], exponential stability and almost sure stability of neural networks were investigated. In the traditional stability framework, the state of the neural network approaches the equilibrium point as time tends to infinity, but this does not always happen in practice. ISS analysis opens up new applications of dynamic neural networks in nonlinear systems. In [23], input-to-state stability for a class of stochastic memristive neural networks with time-varying delay was studied.

Motivated by the above discussion, and noting that although the stability of stochastic neural networks has been studied, there are few results on the stability of stochastic neutral-type memristive neural networks, in this paper we consider stochastic neutral-type memristive neural networks to fill this gap. Using stochastic analysis theory and the Itô formula, we obtain sufficient conditions for mean-square exponential input-to-state stability and some corollaries for system (1).

The rest of the paper is organized as follows. In Section 2, we present the model and give some hypotheses. The main conclusions are proved in Section 3. In Section 4, a numerical example is given to illustrate the correctness of our conclusions. Finally, further discussion is given in Section 5.

Throughout this paper, solutions of all the systems considered are intended in the Filippov sense. Let $\mathbb{R}^{n}$ represent the $n$-dimensional Euclidean space. The superscript "$T$" denotes the transpose of a matrix or vector. $L_{\infty}$ denotes the class of essentially bounded functions $u$ from $[0,\infty)$ to $\mathbb{R}^{n}$ with $\|u\|_{\infty} = \operatorname{ess\,sup}_{t \ge 0} |u(t)| < \infty$. Let $\tau > 0$, and let $C([-\tau, 0]; \mathbb{R}^{n})$ denote the family of continuous functions $\varphi$ from $[-\tau, 0]$ to $\mathbb{R}^{n}$ with the norm $\|\varphi\| = \sup_{-\tau \le s \le 0} |\varphi(s)|$, where $|\cdot|$ is the Euclidean norm in $\mathbb{R}^{n}$. Let $C_{\mathcal{F}_{0}}^{b}([-\tau, 0]; \mathbb{R}^{n})$ denote the family of all bounded, $\mathcal{F}_{0}$-measurable, $C([-\tau, 0]; \mathbb{R}^{n})$-valued stochastic variables $\xi$ such that $\mathbb{E}\|\xi\|^{2} < \infty$, where $\mathbb{E}$ stands for the expectation operator with respect to the given probability measure $P$.

#### 2. Preliminaries

Consider the following stochastic neutral-type memristive neural network, referred to as system (1), for $t \ge 0$, with initial conditions given on $(-\infty, 0]$, where $x_i(t)$ is the voltage of the capacitor $C_i$; $f_j$, $g_j$, and $h_j$ represent the neuron activation functions of the $j$th neuron at time $t$; $u_i$ is the external input of the $i$th neuron at time $t$; $W(t)$ is a standard Brownian motion defined on the complete probability space $(\Omega, \mathcal{F}, P)$ with a natural filtration $\{\mathcal{F}_t\}_{t \ge 0}$; and $\sigma_i$ is a Borel measurable function. $C$ and $E$ are self-feedback connection matrices, and $a_{ij}(\cdot)$, $b_{ij}(\cdot)$, and $d_{ij}(\cdot)$ represent memristor-based weights, determined by the memductances of the corresponding memristors. The distributed-delay term $\int_{-\infty}^{0} h_j(x_j(t+s))\,d\eta_j(s)$ is a Lebesgue–Stieltjes integral, where $\eta_j$ is a nonnegative function of bounded variation on $(-\infty, 0]$. According to the pinched hysteretic loops of ideal memristors, each memristive weight switches between two constant values according to a threshold condition on the corresponding state. By applying the theory of differential inclusions and set-valued maps to system (1), the state-dependent weights can be replaced by their convex closures; then, using Filippov's theorem in [24], there exist measurable selections of the set-valued weights such that system (1) holds with the same initial conditions.
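For orientation, systems of this class typically take the following form. This is a sketch consistent with the description above, not a verbatim reproduction of system (1); the neutral coefficients $e_{ij}$ and self-feedback rates $c_i$ are illustrative names:

```latex
d\Big[x_{i}(t) - \sum_{j=1}^{n} e_{ij}\,x_{j}\big(t-\tau(t)\big)\Big]
 = \Big[-c_{i}\,x_{i}(t)
 + \sum_{j=1}^{n} a_{ij}\big(x_{i}(t)\big)\, f_{j}\big(x_{j}(t)\big)
 + \sum_{j=1}^{n} b_{ij}\big(x_{i}(t)\big)\, g_{j}\big(x_{j}(t-\tau(t))\big) \\
 + \sum_{j=1}^{n} d_{ij}\big(x_{i}(t)\big)\int_{-\infty}^{0} h_{j}\big(x_{j}(t+s)\big)\,d\eta_{j}(s)
 + u_{i}(t)\Big]\,dt
 + \sigma_{i}\big(t, x_{i}(t), x_{i}(t-\tau(t))\big)\,dW(t),
\qquad i = 1,\dots,n.
```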

To obtain the main results, we need the following hypotheses. (H1) The activation functions $f_j$, $g_j$, and $h_j$ satisfy Lipschitz conditions with positive Lipschitz constants $l_j$, $m_j$, and $p_j$, respectively, and vanish at zero. (H2) For all $s \in (-\infty, 0]$, the kernels $\eta_j$ are nonnegative functions of bounded variation satisfying a suitable normalization condition. (H3) The diffusion coefficient $\sigma_i$ satisfies a Lipschitz-type growth condition with positive constants $\mu_i$ and $\nu_i$. (H4) There exists a positive constant bounding the time-varying delay $\tau(t)$. (H5) $E$ is a matrix satisfying $\|E\| < 1$, where the matrix norm is defined as $\|E\| = \sqrt{\lambda_{\max}(E^{T}E)}$ and $\lambda_{\max}(\cdot)$ denotes the maximum eigenvalue (spectral radius) of the matrix.
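The Lipschitz-type hypotheses on the activations and the diffusion coefficient can be summarized in the following typical form. This is a sketch; the constant names $l_j$, $m_j$, $p_j$, $\mu_i$, $\nu_i$ are illustrative rather than those fixed in the original text:

```latex
% Lipschitz conditions on the activations, vanishing at zero:
|f_{j}(x)-f_{j}(y)| \le l_{j}|x-y|,\quad
|g_{j}(x)-g_{j}(y)| \le m_{j}|x-y|,\quad
|h_{j}(x)-h_{j}(y)| \le p_{j}|x-y|, \\
f_{j}(0)=g_{j}(0)=h_{j}(0)=0.
% Growth condition on the diffusion coefficient:
|\sigma_{i}(t,x_{1},y_{1})-\sigma_{i}(t,x_{2},y_{2})|^{2}
 \le \mu_{i}\,|x_{1}-x_{2}|^{2} + \nu_{i}\,|y_{1}-y_{2}|^{2},
\qquad \sigma_{i}(t,0,0)=0.
```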

*Definition 1* (see [25]). The trivial solution of system (1) is said to be mean-square exponentially input-to-state stable if for every initial datum $\xi$ and every input $u$ there exist scalars $\alpha > 0$, $\beta > 0$, and $\gamma > 0$ such that the following inequality holds:
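The defining inequality, written in the conventional form used for mean-square exponential input-to-state stability (see, e.g., [23]), is:

```latex
\mathbb{E}\,\big|x(t;\xi,u)\big|^{2}
 \;\le\; \alpha\,e^{-\beta t}\,\mathbb{E}\,\|\xi\|^{2}
 \;+\; \gamma\,\|u\|_{\infty}^{2},
\qquad t \ge 0 .
```

When $u \equiv 0$, this reduces to the usual notion of mean-square exponential stability.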

#### 3. Main Results

In this section, the mean-square exponential input-to-state stability of the trivial solution for system (1) is addressed.

Theorem 2. *Under the hypotheses above, the trivial solution of system (1) is mean-square exponentially input-to-state stable if there exist positive constants such that the following conditions hold:*

*Proof. *In order to obtain the mean-square exponential input-to-state stability, we consider a Lyapunov–Krasovskii functional $V(t)$ containing the neutral term, a single-integral term over the delay interval, and a double-integral term for the S-type distributed delays. Applying the Itô formula to $V(t)$ yields an expression involving the weak infinitesimal operator $\mathcal{L}V(t)$. Under the Lipschitz hypotheses on the activations and the diffusion coefficient, each term of $\mathcal{L}V(t)$ can be bounded, and by the elementary inequality $2ab \le a^{2} + b^{2}$ a further estimate is obtained. By condition (12), there exists a sufficiently small constant $\varepsilon > 0$ such that this estimate still holds after multiplication by $e^{\varepsilon t}$. Defining a suitable stopping time and taking expectations, we obtain a bound on $\mathbb{E}V(t)$; letting the stopping time tend to infinity gives a bound valid for all $t \ge 0$. Using $\|E\| < 1$ and the inequality $(a+b)^{2} \le 2a^{2} + 2b^{2}$, the bound on $\mathbb{E}V(t)$ translates into a bound on $\mathbb{E}|x(t)|^{2}$. Finally, we can find a large enough constant and a sufficiently small constant such that the inequality of Definition 1 holds. Hence, the desired assertion is derived.
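A typical Lyapunov–Krasovskii functional for neutral systems with both discrete and S-type distributed delays has the following shape. This is a sketch; the constants $\varepsilon$, $\lambda_{1}$, $\lambda_{2}$ are illustrative placeholders, not necessarily those chosen in the proof:

```latex
V(t) = e^{\varepsilon t}\Big|x(t) - E\,x\big(t-\tau(t)\big)\Big|^{2}
     + \lambda_{1}\int_{t-\tau(t)}^{t} e^{\varepsilon(s+\tau)}\,|x(s)|^{2}\,ds \\
     + \lambda_{2}\sum_{j=1}^{n}\int_{-\infty}^{0}\!\int_{t+s}^{t}
         e^{\varepsilon(v-s)}\,h_{j}^{2}\big(x_{j}(v)\big)\,dv\,d\eta_{j}(s).
```

The first term handles the neutral difference, the second compensates the discrete delay, and the double integral compensates the distributed delay under the Itô calculation.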

Theorem 3. *Under the hypotheses above, if the conditions of (12) are satisfied, the trivial solution of system (1) with $u \equiv 0$ is mean-square exponentially stable.* *Moreover, when we remove the S-type distributed delays, system (1) becomes the following system:*

Corollary 4. *Under the hypotheses above, the trivial solution of system (29) is mean-square exponentially input-to-state stable if there exist positive constants such that the following conditions hold:*

Corollary 5. *Under the hypotheses above, if the conditions of (30) are satisfied, the trivial solution of system (29) with $u \equiv 0$ is mean-square exponentially stable.*

*Remark 6. *In particular, when we remove the neutral terms, system (29) becomes the system in [23]; from [23] we can see that, under certain conditions, the trivial solution of that system is mean-square exponentially input-to-state stable and, with zero input, mean-square exponentially stable. So we can say that our model is an extension of the model in [23].

*Remark 7. *In fact, choosing $\eta_j$ as a step function, where $\eta_j$ is a nonnegative function of bounded variation on $(-\infty, 0]$, the S-type distributed delay terms reduce to discrete delay terms, so system (1) contains the system in [25]. In addition, supposing that $\eta_j$ is absolutely continuous, the S-type distributed delay terms become the usual distributed delays, so our system contains the recent work of [26]. This shows that this paper is more general than the existing articles.
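The two reductions in this remark can be made concrete as follows; the delay value $\tau$ and density $k_j$ are illustrative:

```latex
% Unit step at s = -\tau: the Lebesgue-Stieltjes integral collapses to a
% point evaluation, i.e., a discrete delay term.
\eta_{j}(s) = \begin{cases} 0, & s < -\tau,\\ 1, & -\tau \le s \le 0,\end{cases}
\qquad\Longrightarrow\qquad
\int_{-\infty}^{0} h_{j}\big(x_{j}(t+s)\big)\,d\eta_{j}(s)
  = h_{j}\big(x_{j}(t-\tau)\big).
% Absolutely continuous case: d\eta_j(s) = k_j(s)\,ds with a nonnegative
% density k_j recovers the generally distributed delay
% \int_{-\infty}^{0} k_j(s)\,h_j(x_j(t+s))\,ds.
```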

*Remark 8. *Building on the achievements of [23, 25], this paper discusses a more general class of neural network systems by introducing factors such as neutral terms, S-type distributed delays, and stochastic perturbations, and analyzes the mean-square exponential input-to-state stability of the given neutral stochastic system by utilizing the Lyapunov–Krasovskii functional method, stochastic analysis techniques, and the Itô formula. The Lyapunov–Krasovskii functional considered in our paper is more complex compared with those in [23, 25], since it covers neutral terms and double integrals. Therefore, our theoretical results can be seen as an extension of [23, 25]. In addition, our results are computationally efficient, as the sufficient conditions can be easily checked without using a linear matrix inequality toolbox.

#### 4. Numerical Simulation

In this section, a numerical example is given to illustrate the correctness of our conclusions. Consider a two-dimensional instance of system (1) with specified activation functions, delays, and coefficient matrices. The chosen parameters satisfy the hypotheses above. Then, it is easy to check the following conditions:
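As a complementary illustration of the kind of two-dimensional system simulated in this section, the following is a hedged Euler–Maruyama simulation sketch for a stochastic neutral-type delayed network of this class. All coefficient values (`C`, `A`, `B`, `E`, `u`, `sigma`, the delay `tau`) are illustrative placeholders, not the constants of the paper's example:

```python
import numpy as np

# Euler-Maruyama simulation of a 2-D stochastic neutral-type neural network
# with a constant delay (a stand-in for the time-varying delay tau(t)).
rng = np.random.default_rng(0)

n = 2
dt = 1e-3
T = 10.0
tau = 0.5                                  # delay (illustrative)
delay_steps = int(tau / dt)
steps = int(T / dt)

C = np.diag([3.0, 3.0])                    # self-feedback matrix
A = np.array([[0.2, -0.1], [0.1, 0.2]])    # instantaneous weights
B = np.array([[0.1, 0.1], [-0.1, 0.1]])    # delayed weights
E = 0.1 * np.eye(n)                        # neutral-term matrix, ||E|| < 1
u = np.array([0.5, -0.5])                  # constant external input
sigma = 0.1                                # noise intensity
f = np.tanh                                # activation, Lipschitz constant 1

# History buffer: constant initial condition on [-tau, 0].
x = np.zeros((steps + delay_steps + 1, n))
x[:delay_steps + 1] = np.array([1.0, -1.0])

for k in range(delay_steps, delay_steps + steps):
    xt = x[k]
    xd = x[k - delay_steps]                # delayed state x(t - tau)
    # Evolve the neutral variable y(t) = x(t) - E x(t - tau).
    drift = -C @ xt + A @ f(xt) + B @ f(xd) + u
    dW = rng.normal(0.0, np.sqrt(dt), n)
    y_next = (xt - E @ xd) + drift * dt + sigma * xt * dW
    # Recover x(t + dt) from y(t + dt) and the delayed state at t + dt.
    x[k + 1] = y_next + E @ x[k + 1 - delay_steps]

print("state at T =", T, ":", x[-1])
print("trajectory bounded:", np.all(np.abs(x) < 10.0))
```

With strong self-feedback and small noise, the trajectories remain bounded, which is consistent with the mean-square exponential ISS behavior established above.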