Discrete Dynamics in Nature and Society

Volume 2012, Article ID 426350, 20 pages

http://dx.doi.org/10.1155/2012/426350

## Combined Convex Technique on Delay-Distribution-Dependent Stability for Delayed Neural Networks

Key Laboratory of Measurement and Control of CSE, School of Automation, Southeast University, Ministry of Education, Nanjing 210096, China

Received 26 March 2012; Revised 27 May 2012; Accepted 27 May 2012

Academic Editor: Zhengqiu Zhang

Copyright © 2012 Ting Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Together with the Lyapunov-Krasovskii functional approach and an improved delay-partitioning idea, a novel sufficient condition is derived that guarantees a class of delayed neural networks to be asymptotically stable in the mean-square sense, where the probability distribution of the time-varying delay and both bounds on its variation rate are assumed to be measurable. By combining the reciprocal convex technique with the general convex one, the criterion is presented in terms of LMIs, and its solvability depends on the sizes of both the time-delay range and its variation rates; the criterion can be made much less conservative by thinning the delay subintervals. Finally, four numerical examples demonstrate that the proposed idea reduces conservatism more effectively than some earlier reported ones.

#### 1. Introduction

In past decades, neural networks have been applied to various signal processing problems, such as optimization, image processing, and associative memory design, as well as other engineering fields. In those applications, a key requirement on the designed neural network is that it be globally stable. Meanwhile, since communication delay inevitably exists and can induce oscillation and instability in various dynamical systems, great efforts have been made to analyze the dynamics of delayed systems, including delayed neural networks (DNNs), and many elegant results have been reported; see [1–35]. In practical applications, though it is difficult to describe the form of the time-delay precisely, the bounds on the time-delay and its variation rates can still be measured. Since the Lyapunov functional approach imposes no restriction on the delay variation and yields some simple stability criteria, the Lyapunov-Krasovskii functional (LKF) has been widely utilized, as its analysis can make full use of the information on the time-delay of DNNs. Thus, delay-dependent stability has recently become a topic of primary significance, in which the main purpose is to derive an allowable delay upper bound guaranteeing the global stability of the addressed DNNs [4–9, 16, 17, 21–35]. Furthermore, in recent years, delay-partitioning ideas have proven more effective in reducing conservatism than some previously reported techniques and have received much research attention [21–27]; yet these convex ideas still need further improvement, since they cannot effectively tackle interval variable delay or cannot fully utilize every delay subinterval, issues which have been fully addressed in [28].

Meanwhile, it can be seen from many existing references that only the deterministic time-delay case has been considered, with stability criteria derived from the information on the delay and its variation range. Actually, the time-delay in some DNNs often exists in a stochastic fashion. In practice, to propagate and control stochastic signals through universal learning networks, the probabilistic universal learning network (PULN) was proposed. In a PULN, the output signal of a node is transferred to another node through multiple branches with arbitrary time-delays, which are random and whose probability can often be measured by statistical methods. In this case, if some values of the time-delay are very large but the probability of the delay taking such large values is very small, considering only the information on the delay variation range may lead to a more conservative result. Thus, recently, some researchers have considered the stability of various systems, including DNNs, with probability-distributed delays [29–39]. In [36–39], the authors analyzed the stability and its applications for networked control systems, uncertain linear systems, and T-S fuzzy systems, in which the probabilistic delay has been fully considered. As for discrete-time DNNs with probabilistic delay, global stability has been considered and some elegant results have been proposed in [29–33]. Yet it has come to our attention that, though some works have studied the dynamics of continuous-time DNNs with probabilistic delay [34, 35], the lower limits of delay variation have not been considered; in fact, such available information could play an important role in extending the results' application area, as illustrated in [40], which, however, did not take the delay distribution probability into consideration.
At present, for time-varying delay, the reciprocal convex approach in [41] has proven more effective in reducing conservatism than some earlier convex techniques [28, 42]. Yet, to the authors' best knowledge, few have used the combination of the reciprocal convex technique and general convex ones to tackle the global stability of DNNs with probabilistic time-varying delay, which constitutes the main focus of the present work.

By taking both the bounds on the probabilistic time-delay and on its time variation into consideration, we investigate the mean-square stability of DNNs, in which an improved delay-partitioning idea is utilized and a novel Lyapunov functional is chosen. Through combining the reciprocal convex technique with the general convex one, a less conservative condition is given in terms of LMIs, which exhibits good delay dependence and computational efficiency. Finally, we give four numerical examples to illustrate that the derived results can be less conservative than some existing ones.

The notations in this paper are standard. For symmetric matrices $P$ and $Q$, $P > Q$ (resp., $P \ge Q$) means that $P - Q$ is a positive-definite (resp., positive-semidefinite) matrix, and $*$ denotes the symmetric term in a symmetric matrix, that is, $\left[\begin{smallmatrix} X & Y \\ * & Z \end{smallmatrix}\right] = \left[\begin{smallmatrix} X & Y \\ Y^{T} & Z \end{smallmatrix}\right]$.

#### 2. Problem Formulations and Preliminaries

Consider the delayed neural networks as follows: where is a real -vector denoting the state variables associated with the neurons, represents the neuron activation function, is a constant input vector, , and are the appropriately dimensional constant matrices.

The following assumptions on the system (2.1) are made throughout this paper.

*Assumption 2.1. *The time-varying delay satisfies . Moreover, considering the information on the probability distribution of , two sets and functions are defined as , , and
where , , and . It is easy to check that means that the event occurs and means that the event occurs. Therefore, a stochastic variable can be defined as

*Assumption 2.2. *$\delta(t)$ is a Bernoulli distributed sequence with
$$\operatorname{Prob}\{\delta(t) = 1\} = \mathbb{E}\{\delta(t)\} = \delta_0, \qquad \operatorname{Prob}\{\delta(t) = 0\} = 1 - \delta_0,$$
where $\delta_0 \in [0, 1]$ is a constant and $\mathbb{E}\{\delta(t)\}$ is the mathematical expectation of $\delta(t)$. It is easy to check that $\mathbb{E}\{\delta(t) - \delta_0\} = 0$ and $\mathbb{E}\{(\delta(t) - \delta_0)^2\} = \delta_0(1 - \delta_0)$.
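Assumption 2.2 rests on the two standard moment identities of a Bernoulli variable: the mean of $\delta(t)$ equals $\delta_0$ and the mean of $(\delta(t) - \delta_0)^2$ equals $\delta_0(1 - \delta_0)$. A minimal simulation sketch (our own illustration; the value $\delta_0 = 0.7$ and the sample size are arbitrary choices, not values from the paper) checks both:

```python
import random

def simulate_bernoulli_delay_indicator(delta0, n_samples, seed=0):
    """Simulate the Bernoulli variable delta(t) with Prob{delta = 1} = delta0
    and return the sample mean of delta and of (delta - delta0)^2."""
    rng = random.Random(seed)
    samples = [1 if rng.random() < delta0 else 0 for _ in range(n_samples)]
    mean_delta = sum(samples) / n_samples
    mean_sq_dev = sum((s - delta0) ** 2 for s in samples) / n_samples
    return mean_delta, mean_sq_dev

mean_delta, mean_sq_dev = simulate_bernoulli_delay_indicator(0.7, 100_000)
# mean_delta should be close to delta0 = 0.7,
# mean_sq_dev close to delta0 * (1 - delta0) = 0.21
```

In the setting above, each sample of the indicator would decide which of the two delay subintervals the time-delay falls into at that instant.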

*Assumption 2.3. *For the constants , the nonlinear function in (2.1) satisfies the following condition:
Here, we denote , , and
It is clear that under Assumptions 2.1–2.3, system (2.1) has one equilibrium point . For convenience, we shift the equilibrium point to the origin by letting , , and the system (2.1) can be converted to
where . Based on the methods in [37–39], the system above can be equivalently converted to
It is easy to check that the function satisfies , and
Then, the problem to be addressed in the paper can be formulated as developing a condition ensuring that the system (2.9) is asymptotically stable.

In order to obtain the stability criterion for system (2.9), the following lemmas are introduced.

Lemma 2.4 (see [27]). *For any constant matrix $R = R^{T} > 0$, a scalar $h > 0$, and a vector function $\omega : [0, h] \to \mathbb{R}^n$ such that the following integration is well defined, then $h \int_0^h \omega^{T}(s) R\, \omega(s)\, ds \ge \left( \int_0^h \omega(s)\, ds \right)^{T} R \left( \int_0^h \omega(s)\, ds \right)$.*
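In its scalar form ($n = 1$, $R = 1$), Lemma 2.4 is Jensen's integral inequality: $h\int_0^h \omega^2(s)\,ds \ge (\int_0^h \omega(s)\,ds)^2$. A small sketch (our own illustration; the test function $\omega(s) = \sin(s) + 2$ and the interval length are arbitrary) checks this scalar case by midpoint Riemann sums:

```python
import math

def jensen_gap(omega, h, n=10_000):
    """Return h * int_0^h omega(s)^2 ds - (int_0^h omega(s) ds)^2,
    approximated by midpoint Riemann sums; Jensen's integral
    inequality says this quantity is nonnegative."""
    ds = h / n
    pts = [(i + 0.5) * ds for i in range(n)]
    int_sq = sum(omega(s) ** 2 for s in pts) * ds
    int_w = sum(omega(s) for s in pts) * ds
    return h * int_sq - int_w ** 2

gap = jensen_gap(lambda s: math.sin(s) + 2.0, h=1.5)       # strictly positive
const_gap = jensen_gap(lambda s: 3.0, h=1.5)               # equality case: ~0
```

As the constant-function case shows, the bound is tight exactly when $\omega$ is constant over the interval.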

Lemma 2.5 (see [41]). *Let the functions $f_1, f_2, \ldots, f_N : \mathbb{R}^m \to \mathbb{R}$ have positive values in an open subset $D$ of $\mathbb{R}^m$. Then the reciprocally convex combination of the $f_i$ over $D$ satisfies
$$\min_{\{\alpha_i \,\mid\, \alpha_i > 0,\ \sum_i \alpha_i = 1\}} \sum_i \frac{1}{\alpha_i} f_i(t) = \sum_i f_i(t) + \max_{g_{i,j}(t)} \sum_{i \neq j} g_{i,j}(t)$$
subject to
$$g_{i,j} : \mathbb{R}^m \to \mathbb{R}, \qquad g_{j,i}(t) \triangleq g_{i,j}(t), \qquad \begin{bmatrix} f_i(t) & g_{i,j}(t) \\ g_{i,j}(t) & f_j(t) \end{bmatrix} \ge 0.
$$*
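For $N = 2$ in the scalar case, the lemma reduces to the classical identity $\min_{0<\alpha<1}\, (f_1/\alpha + f_2/(1-\alpha)) = f_1 + f_2 + 2\sqrt{f_1 f_2}$, where $g_{1,2} = \sqrt{f_1 f_2}$ is the largest coupling value keeping the $2 \times 2$ matrix in the constraint positive semidefinite. A small numerical sketch (our own illustration; the values $f_1 = 2$ and $f_2 = 5$ are arbitrary) confirms this:

```python
import math

def reciprocal_convex_min(f1, f2, n=100_000):
    """Grid-search the minimum over alpha in (0, 1) of
    f1/alpha + f2/(1 - alpha), the reciprocally convex combination."""
    best = float("inf")
    for i in range(1, n):
        a = i / n
        best = min(best, f1 / a + f2 / (1.0 - a))
    return best

f1, f2 = 2.0, 5.0
grid_min = reciprocal_convex_min(f1, f2)
closed_form = f1 + f2 + 2.0 * math.sqrt(f1 * f2)  # value given by the lemma
```

In the stability proof, this identity is what lets the reciprocally weighted integral terms be bounded without introducing extra free-weighting matrices.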

#### 3. Delay-Distribution-Dependent Stability

Firstly, we can rewrite the system (2.9) as Now, letting be positive integers, we divide the delay intervals and into segments of equal length, respectively. Moreover, we introduce the following denotations: Then, based on (2.10) and (3.2), we can construct the following Lyapunov-Krasovskii functional candidate: where with , constant matrices , , constant matrices , constant matrices , and Denoting a parameter set , , we then give one proposition which is essential in the subsequent deduction.
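The uniform division of a delay interval used above can be sketched as follows (a minimal illustration; the helper name, the numerical bounds, and the partition number m are hypothetical placeholders rather than the paper's own symbols):

```python
def partition_delay_interval(tau_lo, tau_hi, m):
    """Divide the delay interval [tau_lo, tau_hi] into m subintervals of
    equal length and return the m + 1 partition points
    tau_lo, tau_lo + d, ..., tau_hi with d = (tau_hi - tau_lo) / m."""
    if m < 1 or tau_hi < tau_lo:
        raise ValueError("need m >= 1 and tau_hi >= tau_lo")
    d = (tau_hi - tau_lo) / m
    return [tau_lo + i * d for i in range(m + 1)]

# e.g. splitting [0.5, 2.0] into m = 3 equal segments
points = partition_delay_interval(0.5, 2.0, 3)
# points == [0.5, 1.0, 1.5, 2.0]
```

Each adjacent pair of points then delimits one delay subinterval, over which a separate group of Lyapunov-Krasovskii functional terms is defined.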

Proposition 3.1. *If the parameter set satisfies the following condition:
**
then the Lyapunov-Krasovskii functional (3.3) is definitely positive.*

Moreover, in order to simplify the subsequent proof, we also give some notations in the following:

Theorem 3.2. *For two given positive integers , and time-delay satisfying (2.2), the delayed neural network (3.1) is globally asymptotically stable in the mean-square sense if there exist one parameter set satisfying Proposition 3.1, matrices , and diagonal matrices making , making , such that the following LMIs in (3.9) hold:
**
where
**
with the notations in and denoting the appropriately dimensioned zero matrices making them columns:
*

*Proof. *Firstly, we show the uniqueness of the equilibrium point by the method of contradiction. Here we denote the equilibrium point of DNNs (2.9) as ; then we have
Now suppose that another equilibrium point exists; then it follows that
Then combining (3.13) and (3.14) yields that
with , , and . Meanwhile, it is noted that
Yet on the other hand, let
Then, multiplying the term in (3.9) by and on its left-hand and right-hand sides, respectively, and using (3.15) and (3.16), we can deduce that
Thus we can derive , which contradicts (3.15) and implies . That is to say, the origin of the DNNs (3.1) is the unique equilibrium point.

Next, through directly calculating and using the denotations in (3.7)-(3.8), the stochastic differential of in (3.3) along the trajectories of system (3.1) yields
Moreover, we can compute out as follows:
Then, by resorting to Lemmas 2.4 and 2.5, and using the denotations , the following inequalities can be derived:
Then, it follows from (3.21) that satisfies
From (2.10), for any diagonal matrices , , , , , and setting in (3.7), in (3.8), the following inequality holds:
Moreover, together with (3.1) and any constant matrices , one can deduce
Now, adding the right-hand terms of (3.19) and (3.22)–(3.24) to and taking the mathematical expectation on both sides, we can deduce
where are presented in (3.9), and
Then, utilizing the general convex technique in [28, 39], the LMIs described by (3.9) can guarantee , which indicates that there must exist a positive scalar such that for any . It then follows from the Lyapunov-Krasovskii stability theorem that the system (3.1) is asymptotically stable in the mean-square sense, which completes the proof.

*Remark 3.3. *Transmitted delays always exist in various dynamical networks owing to the finite switching speed of amplifiers in electronic neural networks or the finite signal propagation time in biological networks. Furthermore, with the development of network delay tomography, the probability distribution of the time-delay can be estimated. Thus, when this probability is available, it is helpful to utilize such information to reduce conservatism [29–35]. Yet, up till now, few authors have utilized the delay-partitioning idea to investigate the stability of DNNs with probabilistic time-varying delay; in this work, we apply an improved idea which can fully consider the information on every delay subinterval. Moreover, though the stability criterion in (3.9) is not presented in the form of standard LMIs, it is still convenient and straightforward to check its feasibility, without tuning any parameters, by resorting to the Matlab LMI Toolbox.

*Remark 3.4. *At present, in the literature [21–28], various convex combination techniques have been widely employed and improved to tackle constant or time-varying delays, since they can efficiently reduce conservatism. In [41], the authors put forward the reciprocal convex approach, which takes into account important terms previously ignored and is more effective than the techniques in [21–28]. Yet it has come to our attention that the reciprocal convex approach cannot efficiently tackle the case in which both bounds on the delay derivative are available. In this paper, we first combine the reciprocal convex technique with the general convex ones to study the stability of DNNs with probabilistic time-varying delay.

*Remark 3.5. *As for in (3.3), if we denote (resp., ), our results remain true when only (resp., ) is available. If we set in (3.3) simultaneously, Theorem 3.2 still holds when are unknown or are not differentiable. Moreover, the number of free-weighting matrices in Theorem 3.2 is much smaller than in the existing results [34, 37], and since the reciprocal convex technique is used, much greater computational simplicity can be achieved from a mathematical point of view.

*Remark 3.6. *In view of the delay-partitioning idea employed in this work, as the integers increase, the dimension of the derived LMIs becomes higher and it takes more computing time to check them. Yet, once the lower bound of is set and , the maximum allowable delay upper bound grows only slightly and approaches an approximate upper limit [22–27]. Thus, when employing the idea in real cases, it is unnecessary to partition the two delay intervals into too many segments.

*Remark 3.7. *In order to give more general results, the delay intervals and are, respectively, divided into subintervals in this work, which makes the condition of Theorem 3.2 seem very complicated. Yet if we set , our results avoid this complexity to some degree. Moreover, if we choose the simpler Lyapunov-Krasovskii functional in (3.3) with and , the condition of Theorem 3.2 becomes much less complicated.

#### 4. Numerical Examples

In this section, four numerical examples will be presented to illustrate the derived results. Firstly, we will utilize a numerical example to illustrate the significance of studying the lower bound of delay derivative.

*Example 4.1. *As a special case of , we revisit the delayed neural networks considered in [21, 28] with
and is set. If we do not consider the existence of , then, by utilizing Theorem 3.2 and Remark 3.5, the corresponding maximum allowable upper bounds (MAUBs) for different derived by the results in [21] and in this paper can be summarized in Table 1, which demonstrates that Theorem 3.2 with is somewhat more conservative than the results in [21, 28]. Yet, if we set , it is easy to verify that our results yield much less conservative results than those in [21, 28], as shown in Table 2.

Based on Tables 1 and 2, it is indicated that the conservatism of the stability criterion can be greatly reduced if we take the available into consideration. Moreover, though the delay-partitioning idea has been used in [28], the corresponding MAUBs derived by [28] and Theorem 3.2 are summarized in Table 3, which shows that our idea can be more efficient than the one in [28] even for .

*Example 4.2. *Considering the special case of , we consider the delayed neural networks (2.1) with
which has been addressed extensively; see [26, 27] and the references therein. Together with the delay-partitioning idea and for different , the work [27] has calculated the MAUBs such that the origin of the system is globally asymptotically stable for satisfying . By resorting to Theorem 3.2 and Remark 3.5, the corresponding results are given in Table 4, which indicates that our delay-partitioning idea can be more effective than the relevant ones in [27] even for and .

*Example 4.3. *We still consider the DNNs with the following parameters [24, 28] by setting :