Abstract

By combining Lyapunov-Krasovskii functional theory with the reciprocal convex technique, a new sufficient condition is derived that guarantees the global stability of recurrent neural networks with both time-varying and continuously distributed delays, in which an improved delay-partitioning technique is employed. The LMI-based criterion depends on both the upper and lower bounds of the state delay and its derivative, which distinguishes it from existing ones and widens its applicability whenever the lower bound of the delay derivative is available. Finally, some numerical examples illustrate the reduced conservatism of the derived results obtained by thinning the delay interval.

1. Introduction

Various classes of neural networks have been studied increasingly over the past few decades, owing to their practical importance and successful applications in many areas such as optimization, image processing, and associative memory design. In those applications, the key requirement on the designed neural network is that it be convergent. Meanwhile, since communication delays inevitably exist and are a main source of oscillation and instability in various dynamical systems, great efforts have been made to analyze the dynamical behaviors of time-delay systems, including delayed neural networks (DNNs), and many elegant results have been reported; see [1-31] and the references therein. In practical applications, although it is difficult to describe the form of the delay precisely, the ranges of the time delay and its variation rate can be measured. Since the Lyapunov functional approach imposes no restriction on the delay and its derivative and yields simple stability results, the Lyapunov-Krasovskii functional (LKF) approach has been widely utilized, as it can fully exploit the information on the time-delay system. Thus, the delay-dependent or delay-derivative-dependent stability of DNNs has recently become a topic of primary significance, whose main purpose is to derive the maximum allowable upper bound on the time delay such that the system remains convergent [7-9, 11-16]. Meanwhile, since a neural network usually has a spatial nature owing to the presence of numerous parallel pathways with a variety of axon sizes and lengths, it is desirable to model it by introducing a distributed delay over a certain duration of time, so that the distant past has less influence than the recent behavior of the state. In other words, when studying the stability of DNNs, the distributed delay should be taken into consideration simultaneously [18-26].

Presently, in tackling the effect of time delay, the delay-partitioning idea has been verified to be more effective in reducing conservatism and has been widely employed [10-17]. In [11], a delay-partitioning idea was used to tackle the time-varying delay of DNNs based on the improved idea in [10], and in [12, 13] some researchers put forward another novel delay-partitioning idea to tackle constant delay, which can be more transparent and concise than the idea based on the one in [10]. Later, although this idea was extended to the time-varying delay case [14, 15, 17], it cannot deal efficiently with interval time-varying delay, especially when the lower bound of the delay is greater than 0. Meanwhile, as for time-varying delay, because the convex combination technique can play an important role in reducing conservatism, it has received much attention and achieved some great improvements in studying the stability of time-delay systems including DNNs [15-18, 28-30]. Its basic idea is to approximate the integral terms of quadratic quantities by a convex combination of quadratic terms in the LMIs. However, owing to the inversely weighted nature of the coefficients in the Jensen inequality approach, or the limitations of the free-weighting-matrix method, some important terms have still been ignored when estimating the derivative of the Lyapunov-Krasovskii functional under the convex combination technique [15-18]. In [31], together with an integral inequality lemma, the authors put forward the reciprocal convex technique, which can account for these previously ignored terms. Yet we have noticed that the reciprocal convex technique alone cannot tackle the case in which the lower and upper bounds of the time delay can be measured simultaneously, which still requires the convex combination technique.
To date, no researcher has investigated the stability of DNNs by combining the improved delay-partitioning idea in [17] with both the reciprocal convex technique and the convex combination one when the lower bound of the delay derivative is available, which motivates the present work.

In this paper, we investigate the asymptotic stability of recurrent neural networks with both time-varying and continuously distributed delays, in which both the upper and lower bounds of the time delay and its derivative are treated. By applying an improved delay-partitioning idea, an LMI-based condition is derived from the combination of the reciprocal convex technique and the convex combination one, which exhibits good delay dependence and computational efficiency. Finally, we give three numerical examples to illustrate the reduced conservatism.

Notations
For symmetric matrices (resp., ) means that is a positive-definite (resp., positive-semidefinite) matrix; denotes the trace of the matrix ; with denotes the symmetric term in a symmetric matrix.

2. Problem Formulations

Consider the DNNs with continuously distributed delay of the following form: where is the neuron state vector; and are known constant matrices; stands for the neuron activation function, is a constant input vector, and ; here the delay kernel is a real-valued nonnegative continuous function defined on and satisfies for .
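The normalization imposed on the delay kernel can be checked numerically for a standard choice. The exponential kernel used below is purely an illustrative assumption; the paper's kernel is only required to be nonnegative, continuous, and normalized:

```python
import numpy as np

# The delay kernel k(s) in (2.1) is a nonnegative continuous function on
# [0, inf) whose integral is normalized. The exponential kernel
# k(s) = exp(-s) is a common concrete choice, assumed here purely for
# illustration (the paper does not fix a specific kernel).
s = np.linspace(0.0, 50.0, 500_001)   # [0, 50] truncates the tail of exp(-s)
ds = s[1] - s[0]
k = np.exp(-s)

nonnegative = bool(np.all(k >= 0.0))
integral = float(k.sum() * ds)        # Riemann sum of the kernel's integral
```

Any kernel with these two properties (e.g., a Gamma-distribution density) fits the same framework.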

The following assumptions on system (2.1) are utilized throughout this paper. (H1) The delay denotes one continuous function satisfying in which are constants. Here we denote and . (H2) For the constants , the functions in (2.1) are bounded and satisfy the following condition: In what follows, we denote , and set , , respectively.

Remark 2.1. As pointed out in [22], the constants in (H2) are allowed to be positive, negative, or zero. Thus, some previously used Lipschitz conditions are just special cases of (H2), which means that the activation functions considered here are more general than the previous ones.
Suppose is the equilibrium point of the system (2.1). In order to prove the result, we will shift the equilibrium to the origin by changing the variable . Then the system (2.1) can be transformed into where , and with . Note that the function satisfies , and
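The sector-type condition (H2) can be illustrated numerically for a representative activation. The choice of tanh and the sector bounds 0 and 1 below are illustrative assumptions, not the paper's (unspecified) constants:

```python
import numpy as np

# Spot-check of a sector condition of the form (H2): tanh is monotone
# with derivative in (0, 1], so every difference quotient satisfies
#   0 <= (g(a) - g(b)) / (a - b) <= 1,
# i.e., tanh fits (H2) with sector bounds l = 0, L = 1 (an assumed,
# illustrative instance of the condition).
rng = np.random.default_rng(0)
a = rng.uniform(-5.0, 5.0, 10_000)
b = rng.uniform(-5.0, 5.0, 10_000)
ratio = (np.tanh(a) - np.tanh(b)) / (a - b)
in_sector = bool(np.all((ratio >= 0.0) & (ratio <= 1.0)))
```

An activation with negative slope somewhere would need a negative lower sector constant, which (H2) also permits, as Remark 2.1 notes.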
In order to establish the stability criterion, firstly, the following lemmas are introduced.

Lemma 2.2 (see [27]). For any constant matrix , scalar functional , and a vector function such that the following integration is well defined, then .
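The integral inequality of Lemma 2.2 is of Jensen type. As a hedged numerical spot-check, the standard matrix form of Jensen's inequality can be verified on a discretized grid; the matrix, horizon, and vector function below are arbitrary illustrative choices:

```python
import numpy as np

# Numerical spot-check of the standard Jensen integral inequality
# underlying Lemma 2.2: for a constant matrix M > 0, scalar h > 0,
# and a vector function w on [0, h],
#   h * int_0^h w(s)^T M w(s) ds >= (int_0^h w(s) ds)^T M (int_0^h w(s) ds).
rng = np.random.default_rng(1)
n, steps = 3, 4000
h = 1.5
s = np.linspace(0.0, h, steps)
ds = s[1] - s[0]

B = rng.standard_normal((n, n))
M = B @ B.T + n * np.eye(n)            # M > 0 by construction

w = np.stack([np.sin(3.0 * s) + s,     # an arbitrary smooth w(s)
              np.cos(2.0 * s),
              s ** 2], axis=1)         # shape (steps, n)

lhs = h * float(np.einsum('ti,ij,tj->', w, M, w)) * ds
iw = w.sum(axis=0) * ds                # int_0^h w(s) ds
rhs = float(iw @ M @ iw)
```

The inequality is tight only when w is constant, which is why the estimate leaves slack that the reciprocal convex technique later exploits.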

Lemma 2.3 (see [31]). Let the functions take positive values in an open subset of and satisfy with and ; then the reciprocal convex combination of over the set satisfies
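The bound of Lemma 2.3 can be spot-checked numerically in its standard quadratic form from [31]; the matrices and vectors below are arbitrary illustrative choices:

```python
import numpy as np

# Numerical spot-check of the reciprocally convex bound behind Lemma 2.3:
# for R > 0 and any matrix S with [[R, S], [S^T, R]] >= 0, every
# alpha in (0, 1) satisfies
#   (1/alpha) x^T R x + (1/(1-alpha)) y^T R y
#       >= [x; y]^T [[R, S], [S^T, R]] [x; y],
# i.e., the inversely weighted terms are bounded below uniformly in alpha.
rng = np.random.default_rng(2)
n = 3
B = rng.standard_normal((n, n))
R = B @ B.T + n * np.eye(n)              # R > 0
S = 0.1 * rng.standard_normal((n, n))    # small S keeps the block matrix PSD

big = np.block([[R, S], [S.T, R]])
psd_ok = bool(np.min(np.linalg.eigvalsh(big)) >= 0.0)  # the side condition

x = rng.standard_normal(n)
y = rng.standard_normal(n)
xy = np.concatenate([x, y])
rhs = float(xy @ big @ xy)
lhs_min = min(float(x @ R @ x) / a + float(y @ R @ y) / (1.0 - a)
              for a in np.linspace(0.01, 0.99, 99))
bound_ok = lhs_min >= rhs - 1e-9
```

The point of the lemma is that the right-hand side is independent of alpha, so alpha-dependent (delay-dependent) coefficients can be bounded without discarding the cross terms.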

Then the problem to be addressed in the next section can be formulated as developing a condition ensuring that the DNNs (2.4) are asymptotically stable.

3. Delay-Derivative-Dependent Stability

In this section, by utilizing the reciprocal convex technique of [31], we present a novel delay-derivative-dependent stability criterion for the system (2.4) in terms of LMIs.

Theorem 3.1. Given a positive integer and setting , the system (2.4) satisfying (2.2) and (2.5) is globally asymptotically stable if there exist matrices , diagonal matrices , matrices , and constant matrices such that the LMIs in (3.1)-(3.2) hold: where , and with denoting the zero matrix of appropriate dimensions consisting of columns, , , , , , , , and

Proof. Based on (2.5) and denoting , we construct the Lyapunov-Krasovskii functional candidate as where with , , to be determined, and By direct calculation and using any constant matrices , the time derivative of the functional (3.5) along the trajectories of system (2.4) yields Moreover, together with Lemmas 2.2 and 2.3 and (3.1), we can estimate as follows: From (2.5), the following inequality holds for any diagonal matrices of compatible dimensions, setting : Now, adding the terms on the right-hand sides of (3.8)-(3.12) and employing inequality (3.13), we deduce in which is presented in (3.2), and Then, by employing the convex combination technique, the LMIs in (3.2) guarantee , which implies that there must exist a positive scalar such that for . It then follows from Lyapunov-Krasovskii stability theory that the system (2.4) is asymptotically stable, and the proof is completed.
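The LMIs of Theorem 3.1 involve the paper's problem-specific data, which are not reproduced here. As a minimal, self-contained sketch of how such an LMI-based stability test is verified numerically, consider the classical delay-independent Lyapunov-Krasovskii condition for a linear delayed system; the matrices and the hand-picked certificate below are illustrative assumptions only:

```python
import numpy as np

# Classical delay-independent LK condition for x'(t) = A x(t) + A_d x(t - h):
# the origin is asymptotically stable for every h >= 0 if there exist
# P > 0 and Q > 0 such that
#     [[A^T P + P A + Q,  P A_d],
#      [A_d^T P,          -Q   ]]  <  0.
# A, A_d and the hand-picked certificate P = Q = I are assumptions for
# illustration; in practice an SDP solver searches for P and Q.
A = np.array([[-2.0, 0.0],
              [0.0, -2.0]])
Ad = np.array([[-0.5, 0.2],
               [0.0, -0.5]])
P = np.eye(2)
Q = np.eye(2)

lmi = np.block([[A.T @ P + P @ A + Q, P @ Ad],
                [Ad.T @ P, -Q]])
feasible = bool(np.max(np.linalg.eigvalsh(lmi)) < 0.0)
```

Theorem 3.1 refines this pattern: the delay-partitioning and reciprocal convex terms enlarge the LMIs but make the resulting test depend on (and benefit from) the bounds on the delay and its derivative.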

Remark 3.2. Presently, the convex combination technique has been widely employed to tackle time-varying delay owing to the fact that it can reduce conservatism more effectively than the previous methods; see [15-18, 28-30]. In [31], the authors put forward the reciprocal convex approach, which can reduce conservatism more effectively than the convex combination one. Yet, to our knowledge, no researchers have utilized both of them simultaneously to tackle the stability of DNNs.

Remark 3.3. One can easily check that the theorem in this work achieves some significant improvements over the one in [17], as follows. Firstly, some terms ignored in [17] have been fully considered in this paper when estimating the derivative of the Lyapunov-Krasovskii functional in (3.11).
Secondly, owing to the introduction of the reciprocal convex approach, Theorem 3.1 is much less complicated than the ones in [17], which results in a degree of computational simplicity. Thirdly, although the reciprocal convex approach plays an important role in tackling the range of the time delay, it cannot efficiently deal with the effect of the lower and upper bounds on the delay derivative, as can be checked in (3.14). Thus we employ the convex combination technique to overcome this shortcoming.

Remark 3.4. When is not differentiable or (resp., ) is unknown, by setting or (resp., ) in (3.5), our theorem still holds.

Remark 3.5. Owing to the delay-partitioning idea introduced in this work, the difficulty and complexity of checking the theorem grow as the integer increases, and the dimension of the LMIs in (3.2) becomes much higher. Yet, based on the results in [12-16], the improvement in the maximum allowable upper bound of becomes insignificant as increases. Thus, when applying the idea to real cases, it is not necessary to partition the interval into more than subintervals.

4. Numerical Examples

In this section, three numerical examples are presented to illustrate that our results are superior to those obtained by the convex combination technique alone.

Example 4.1. We revisit the system considered in [9, 11, 17] with the following parameters: in which is set. If we do not consider the existence of , then by utilizing Theorem 3.1 and Remark 3.2, the corresponding maximum allowable upper bounds (MAUBs) for different derived by Theorem 3.1 in [17] and by the theorem in this paper are summarized in Table 1, which demonstrates that Theorem 3.1 of this paper with is less conservative than the theorem in [17]. Moreover, if we set , it is still easy to verify that our results yield much less conservative bounds than the one in [17], as shown in Table 2.
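MAUBs such as those in Tables 1 and 2 are typically computed by bisection over the delay bound, calling the LMI feasibility test at each candidate. The sketch below assumes a hypothetical, monotone feasibility oracle standing in for the solver call; the interface and the toy threshold are illustrative, not the paper's:

```python
def max_allowable_upper_bound(feasible, lo=0.0, hi=10.0, tol=1e-3):
    """Bisect for the largest delay bound at which `feasible` still holds.

    `feasible(h)` stands in for the LMI feasibility test of Theorem 3.1 at
    delay bound h (a hypothetical interface; any SDP solver call would do)
    and is assumed monotone: feasible below the MAUB, infeasible above it.
    """
    if not feasible(lo):
        return None                    # infeasible even at the lower end
    while feasible(hi):                # grow the bracket until infeasible
        lo, hi = hi, 2.0 * hi
        if hi > 1e6:
            return lo                  # effectively delay-independent
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo


# Toy oracle whose true MAUB is 3.7, to exercise the search itself.
maub = max_allowable_upper_bound(lambda h: h <= 3.7)
```

The returned value underestimates the true threshold by at most `tol`, which is why reported MAUBs are usually quoted to a fixed number of decimals.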

Tables 1 and 2 indicate that the conservatism of the stability criterion can be greatly reduced if is taken into consideration. Moreover, although the delay-partitioning idea has also been used in [17], the corresponding MAUBs derived by [17] and by Theorem 3.1 are summarized in Table 3, which shows that the idea in this work is more efficient than the one in [17] even for .

Example 4.2. Consider the delayed neural networks (2.1) with , which have been addressed extensively; see [2, 15, 17] and the references therein. Together with the delay-partitioning idea and for different , the works [15, 17] calculated the MAUBs such that the origin of the system is globally asymptotically stable for satisfying . By resorting to Theorem 3.1 and Remark 3.2, the corresponding results are given in Table 4, which indicates that our delay-partitioning idea is more effective than the relevant ones in [15, 17] for and .

Example 4.3. Consider the delayed neural networks (2.4) with and . For , choosing various in Table 5 and applying Theorem 3.1 in our work and the one in [17], we can find the MAUBs on for which the system remains asymptotically stable.

Table 5 indicates that the delay-partitioning idea in our work is less conservative than the ones in [17].

5. Conclusion

This paper has investigated the asymptotic stability of DNNs with continuously distributed delay. By employing an improved delay-partitioning idea and combining the reciprocal convex technique with the convex combination one, a stability criterion with significantly reduced conservatism has been established in terms of LMIs. The proposed stability condition benefits from the partitioning of the delay intervals and from the reciprocal convex technique. Three numerical examples have been given to demonstrate the effectiveness of the presented criteria and the improvements over some existing ones. Finally, it is worth noting that the delay-partitioning idea presented in this work is widely applicable.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (nos. 60875035, 60904020, 61004064, and 61004032) and the Special Foundation of China Postdoctoral Science Foundation Projects (no. 201003546).