Abstract

An innovative stability analysis approach for a class of discrete-time stochastic neural networks (DSNNs) with time-varying delays is developed. By constructing a novel piecewise Lyapunov-Krasovskii functional candidate, a new sum inequality is presented that handles the sum terms without discarding any useful terms. No model transformation is needed, and free-weighting matrices are introduced to reduce the conservatism of the derived results, so improved computational efficiency can be expected. Numerical examples and simulations are also given to show the effectiveness and reduced conservatism of the proposed criteria.

1. Introduction

In the past decades, neural networks (NNs) have attracted considerable attention due to their potential applications in associative memory, pattern recognition, optimization, signal processing, and so forth [1–3]. It is well known that stability is one of the preconditions in the design of neural networks. For example, if a neural network is employed to solve an optimization problem, it is highly desirable for it to have a unique, globally stable equilibrium. Therefore, stability analysis of NNs is a very important issue and has been studied extensively [4–11]. It is worth noting that most NNs have been analyzed using continuous-time models. However, when a continuous-time network is implemented for computer-based simulation, experimentation, or computation, it is necessary to discretize it to formulate a discrete-time system. Under mild restrictions, or none, on the discretization step size, the discrete-time analogue inherits the dynamic characteristics of its continuous-time counterpart to a certain extent and also preserves some of its other properties.

On the other hand, as a result of the finite switching speed of amplifiers and the inherent communication time of neurons, time delays are frequently encountered in electronic implementations of neural networks. Time delays can change the dynamic behaviors of neural networks considerably and are very often sources of instability, oscillation, and poor performance. Therefore, stability analysis of neural networks with time delays has been studied extensively during the past years; see [12–14] and the references therein. In practice, when modeling real neural systems, stochastic disturbances are probably among the main sources of undesirable behaviors of neural networks. It has been proved that certain stochastic inputs can make a neural network unstable. Therefore, it is necessary to take into account both time delays and stochastic external fluctuations when modeling neural networks.

Recently, stochastic discrete-time systems have been studied extensively. In [15], a necessary and sufficient condition for the existence of the control was presented, transforming the controller design into the solution of coupled matrix-valued equations for discrete-time systems with state- and disturbance-dependent noise. In [16], the robust filtering analysis and synthesis of nonlinear stochastic systems with state- and exogenous-disturbance-dependent noise are presented. Instead of solving Hamilton-Jacobi inequalities, an algorithm more convenient for practical applications is given by solving several linear matrix inequalities, and a few examples show the effectiveness of the proposed methods. In [17], based on a nonlinear stochastic bounded real lemma and an exponential estimate formula, an exponential mean square filtering design for nonlinear stochastic time-delay systems is presented via the solution of a Hamilton-Jacobi inequality.

However, as pointed out in some previous articles, the discretization cannot preserve the dynamics of the continuous-time system even for a small sampling period. Therefore, the study of the dynamics of discrete-time neural networks is crucially needed; see [18–22] and the references therein.

Based on the discussions above, the problem of stability analysis for discrete stochastic neural networks (DSNNs) with time-varying delays has been investigated recently. In [23], the problem of exponential stability analysis for uncertain discrete-time stochastic neural networks with time-varying delays is investigated by utilizing the Lyapunov-Krasovskii method, converting the addressed stability analysis problem into a convex optimization problem. In [24], by combining the free-weighting-matrix method with a new Lyapunov-Krasovskii functional candidate, a delay-dependent stability condition is obtained which proves to be less conservative than that of [23]. In [25], a new Lyapunov-Krasovskii functional candidate and the delay-partitioning idea are used to solve the problem of asymptotic stability analysis in the mean square for a class of DSNNs with time-varying delay; the stability analysis problem is converted into a feasibility problem of LMIs, and a numerical example is provided to show the usefulness of the proposed condition.

In [26], the global exponential stability problem for a class of discrete-time uncertain stochastic neural networks with time-varying delays is studied, and an improved result is obtained.

In [27], the midpoint of the time-delay variation interval is introduced, and the variation interval is divided into two subintervals; a new Lyapunov-Krasovskii functional candidate is constructed, the variation in the two subintervals is checked by LMIs, and some novel delay-dependent stability criteria for the addressed neural networks are derived with less conservatism.

In [28], a new Lyapunov functional candidate based on the idea of delay partitioning is introduced; the effects of both the variation range and the distribution probability of the time delay are taken into account at the same time. The time-varying delay is characterized by introducing a Bernoulli stochastic variable, and the distribution probability of the time delay is translated into parameter matrices of the transformed DSNN model, so the conservatism is reduced further. However, one of the main issues in such stability criteria is how to reduce the possible conservatism induced by the introduction of the Lyapunov-Krasovskii functional candidate when dealing with the time delay, which leaves much room for further research using the latest analysis techniques.

In this paper, we develop an innovative stability analysis approach for a class of discrete-time stochastic neural networks with time-varying delays. By constructing a novel piecewise Lyapunov-Krasovskii functional, a new sum inequality is presented to handle the sum terms without ignoring any useful terms, and no model transformation is needed in the derivation of our results. All results are expressed in the form of LMIs, whose feasibility can be easily checked by using the numerically efficient Matlab LMI toolbox, and no tuning of parameters is required, so improved computational efficiency can be expected. Numerical examples are also given to show the effectiveness and reduced conservatism of the proposed criteria.

Notation. Throughout this paper, if not stated explicitly, matrices are assumed to have compatible dimensions. The notation means that the symmetric matrix is positive definite (positive semidefinite, negative definite, negative semidefinite). The superscript stands for the transpose of a matrix; the shorthand denotes a block diagonal matrix; represents the Euclidean norm of a vector or the spectral norm of a matrix; and denote the maximal and minimal eigenvalues of matrix , respectively. refers to an identity matrix of appropriate dimensions, stands for the mathematical expectation, and denotes the symmetric terms. Sometimes, the arguments of a function will be omitted in the analysis when no confusion can arise.

2. System Description

Consider the following -neuron discrete stochastic neural networks with mixed time-varying delays: where is the neuron state vector, and denotes the neuron activation function, with . are the connection weight matrix and the delayed connection weight matrix, respectively; represents the transmission time-varying delay; is a continuous function; and is a Brownian motion defined on the complete probability space with , .
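
For concreteness, a representative form of such a model, assuming the standard structure used in related works such as [23–26] (the notation here is illustrative and is not claimed to coincide exactly with that of (1)), is
\[
x(k+1) = C\,x(k) + A\,f\bigl(x(k)\bigr) + B\,f\bigl(x(k-\tau(k))\bigr) + \sigma\bigl(k, x(k), x(k-\tau(k))\bigr)\,\omega(k),
\]
where $C$ denotes a state (self-connection) matrix, $A$ and $B$ the connection and delayed connection weight matrices, $f(\cdot)$ the activation function, $\tau(k)$ the time-varying delay, $\sigma(\cdot)$ the noise intensity function, and $\omega(k)$ the scalar Brownian motion.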

For further discussion, we introduce the following assumptions and lemmas.

Assumption 1. There exist two positive constants and such that

Assumption 2. For , the neuron activation functions in the DSNNs in (1) satisfy where are some constants.
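
In its commonly used form, and writing the constants of Assumption 2 in illustrative notation, this condition reads
\[
l_i^{-} \le \frac{f_i(s_1) - f_i(s_2)}{s_1 - s_2} \le l_i^{+}, \qquad \forall\, s_1, s_2 \in \mathbb{R},\ s_1 \ne s_2,\ i = 1, \dots, n,
\]
where $l_i^{-}$ and $l_i^{+}$ are known constants bounding the slope of each activation function.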

Remark 3. Assumption 2 above on the activation function has been widely used in many papers; see, for example, [16–21, 23–26].

Lemma 4 (see [27]). For any symmetric constant matrix , , integers , and vector valued function , one has
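
In its standard form (again in illustrative notation), the discrete Jensen inequality states that for a symmetric positive definite matrix $M \in \mathbb{R}^{n \times n}$, integers $a \le b$, and vectors $x(i) \in \mathbb{R}^{n}$,
\[
\Bigl( \sum_{i=a}^{b} x(i) \Bigr)^{T} M \Bigl( \sum_{i=a}^{b} x(i) \Bigr) \le (b - a + 1) \sum_{i=a}^{b} x(i)^{T} M\, x(i).
\]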

Remark 5. Lemma 4 is called the discrete Jensen inequality, which is a very important tool for obtaining the main results of this paper. This lemma has been used in the literature; see, for example, [28, 29].

Lemma 6 (see [30]). For any constant matrix , a scalar , a time-varying positive integer , and vector function , such that the following sums are well defined, the following inequalities hold: Furthermore, if then where and , , ,

Remark 7. Lemma 6 is given and proved in [29]; it provides an effective method for reducing conservatism when studying the time-delay stability problem for discrete systems; see the literature mentioned previously [30]. Our main study is based on Lemma 6. It is worth mentioning that if and in the proof of (6) are directly ignored, then the following inequality can be derived:
Compared with the literature [12–18], none of the useful terms is ignored in (5); what is more, (5) provides a tighter bound for dealing with sum terms than those based on (10). However, since (6) is related to the time-varying delay terms and , it cannot be directly solved with the Matlab LMI toolbox. To tackle this problem, (6) is used to transform the time-varying matrix inequality into a set of solvable LMIs. That is, the additional information and in (10) is effectively expressed by (7), and then less conservative results can be expected.

3. Main Results

For convenience of presentation, we use the following notations:

In this section, a new delay-dependent stability criterion is proposed for system (1) with time-varying delay satisfying (3); sufficient conditions for stability are given in Theorem 8.

Theorem 8. For given and , diagonal matrices , and , the system (1) is exponentially stable if there exist a scalar , diagonal matrices , and symmetric matrices such that the following LMIs (12), (13), and (14) hold for : where where where The other terms in have the same expression as that in .

Proof. Define ,.
Construct the following Lyapunov-Krasovskii functional candidates: where where if is an integer, then , else .
By calculating the difference of along the solution of the system (1), and taking the mathematical expectation, we have where From Assumption 2, we can obtain that So we can get that
Now, we are in a position to prove that holds for both and .
Case I (when ). Using Lemmas 4 and 6 to deal with the second sum terms on the right-hand side of (25), we have where
At the same time, for any matrices of appropriate dimensions, we have where
In addition, it can be deduced from Assumption 2 that there exist two positive diagonal matrices such that
By substituting (22)–(25) into (19), adding (30) and (32) to the right-hand side of (19), and using (26) and (27), we can get that where , and ,
So (12) and (13) imply that Then, from Lemma 6, (35) guarantees that , and hence there exists a positive scalar such that
Case II (when ). Keeping (26) and utilizing Lemmas 4 and 6 to deal with the sum terms in (27) and (28), we can get where
So the whole difference of the Lyapunov-Krasovskii functional candidate is given as follows: where , and . For and , we can also get that there exists a positive scalar such that Combining Cases I and II, we can conclude that (12), (13), and (14) guarantee that Defining a new function and then using an analysis method similar to that of Theorem 1 in [18], we can easily get that system (1) is globally exponentially stable in the mean square sense. This completes the proof.

Remark 9. It can be seen from the proof above that no model transformation has been employed to deal with the sum terms, and none of the useful terms are ignored in the proof.
If we neglect the effect of the stochastic term in (1), then and (1) reduces to For system (42), we can obtain the following corollary based on Theorem 8; a representative form of this reduced model, in the same illustrative notation as in Section 2, is sketched below.
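
Removing the noise intensity term from the representative structure sketched after (1) yields
\[
x(k+1) = C\,x(k) + A\,f\bigl(x(k)\bigr) + B\,f\bigl(x(k-\tau(k))\bigr),
\]
written in the same illustrative notation as before rather than the exact notation of (42).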

Corollary 10. For given , , and and diagonal matrices , , and , the system (42) is exponentially stable if there exist diagonal matrices , symmetric positive definite matrices and such that (12) and the following LMIs hold for :

Remark 11. The system (42) has been studied by many researchers, many stability criteria have been proposed, and many improved analysis results have been obtained; see [20–22].

4. Numerical Examples

In this section, three examples are given to demonstrate the benefits of the proposed method.

Example 1. Consider the following stochastic discrete neural networks [25, 26]: with the following parameters:
So it can be verified that
For , by using the Matlab LMI toolbox, the maximum allowable value is . Setting and in Theorem 8, we can obtain a set of feasible solutions to the LMIs (12), (13), and (14), which are listed as follows:
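
Such a feasibility check can also be reproduced outside MATLAB; a minimal sketch in Python with CVXPY is given below. The constraint used here is a simple delay-free discrete-time Lyapunov LMI with placeholder data, not the actual conditions (12), (13), and (14) of Theorem 8.

import numpy as np
import cvxpy as cp

# Placeholder system matrix (not taken from Example 1).
A = np.array([[0.8, 0.1],
              [0.0, 0.7]])
n = A.shape[0]
eps = 1e-6

# Decision variable: symmetric matrix P.
P = cp.Variable((n, n), symmetric=True)

# Discrete-time Lyapunov LMI: P > 0 and A^T P A - P < 0.
lyap = A.T @ P @ A - P
constraints = [P >> eps * np.eye(n),
               (lyap + lyap.T) / 2 << -eps * np.eye(n)]  # symmetrized for the solver

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print("LMI status:", prob.status)   # "optimal" indicates feasibility
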
At the same time, we set ; the state curve shown in Figure 1 can be obtained by using Matlab simulation software. From Figure 1, we can see that the system (1) is asymptotically stable.
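
As a rough illustration of how such a state trajectory can be generated, a minimal simulation sketch in Python is given below; the matrices, delay bounds, activation function, and noise intensity used here are placeholders and are not the parameters of Example 1.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Placeholder parameters (illustrative only).
C = np.diag([0.8, 0.7])                          # state (self-feedback) matrix
A = np.array([[0.10, -0.05], [0.02, 0.10]])      # connection weights
B = np.array([[0.05, 0.01], [-0.02, 0.04]])      # delayed connection weights
tau_m, tau_M = 2, 6                              # delay bounds
f = np.tanh                                      # sector-bounded activation

N, n = 100, 2
x = np.zeros((N + tau_M + 1, n))
x[:tau_M + 1] = rng.uniform(-0.5, 0.5, size=(tau_M + 1, n))  # initial sequence

for k in range(tau_M, tau_M + N):
    tau_k = rng.integers(tau_m, tau_M + 1)       # time-varying delay in [tau_m, tau_M]
    w = rng.standard_normal()                    # Brownian motion increment
    sigma = 0.1 * x[k]                           # placeholder noise intensity
    x[k + 1] = C @ x[k] + A @ f(x[k]) + B @ f(x[k - tau_k]) + sigma * w

plt.plot(x[tau_M:])                              # state curves, cf. Figure 1
plt.xlabel("k")
plt.show()
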
In our study, our purpose is to compare the maximum upper bound of the delay for different . Now assume different lower bounds ; by solving the LMIs (12), (13), and (14), we can obtain the maximum upper bounds of the delay, which are listed in Table 1. From Table 1, we can see that Theorem 8 is less conservative than the criteria proposed in [25, 26].
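
Entries of this kind are obtained by fixing the lower delay bound and increasing the upper bound until the LMIs (12), (13), and (14) become infeasible. A minimal sketch of that search is shown below; lmis_feasible is a hypothetical user-supplied routine that assembles and checks the LMIs for a given pair of bounds (for instance, along the lines of the CVXPY sketch given above).

def max_upper_bound(tau_m, lmis_feasible, tau_search_limit=200):
    """Return the largest tau_M >= tau_m for which the LMIs remain feasible."""
    best = None
    for tau_M in range(tau_m, tau_search_limit + 1):
        if lmis_feasible(tau_m, tau_M):
            best = tau_M
        else:
            break  # feasibility is typically monotone in tau_M
    return best

# Hypothetical usage, one entry of Table 1 per lower bound:
# for tau_m in (2, 4, 6, 8):
#     print(tau_m, max_upper_bound(tau_m, lmis_feasible))
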

Remark 12. According to Theorem 8, we can conclude that system (1) with the above parameters is mean-square exponentially stable; meanwhile, using the Matlab LMI toolbox, it is easy to check that our results improve on the conclusions in [26]. If we neglect the uncertainty effect and take , then, applying the criteria in [26], the maximum value of for mean-square exponential stability of system (1) is 55. By using Theorem 8 of this paper and taking 62 in the LMIs (12), (13), and (14), we find that they are feasible, which shows that our result is less conservative than [26] under identical conditions.

Example 2. Consider the neural networks (1) with the following parameters [23]:
Take the activation function as follows: From the previous parameters, it can be verified that
For Corollary 1 in [23], using the above parameters, we find that it is infeasible, whereas our criterion in Theorem 8 is feasible. By virtue of the Matlab LMI toolbox, we can obtain the feasible solutions as follows:

Example 3. Consider the neural networks (1) with the following parameters [23, 27]:
Take the activation function as follows:
From the previous parameters, it can be verified that
For this example, the conditions in [23] cannot be satisfied. For , it has been verified in [27] that the maximum allowable time delay for system (42) is . By letting and in Example 3, we find that the LMIs for system (42) are feasible, which implies that the exponential stability result proposed in Corollary 10 of this paper is less conservative than those in [23, 27].

5. Conclusion

An effective sum inequality has been introduced to derive delay-dependent stability criteria for a class of discrete stochastic neural networks with an interval time-varying delay. By choosing a piecewise Lyapunov-Krasovskii functional candidate and employing the proposed sum inequalities, significant performance improvement has been achieved while noticeably reducing the number of LMI scalar decision variables. All results are given in the form of LMIs. Numerical examples show that the obtained results are less conservative than some existing ones in the literature.