Research Article | Open Access

Xiongrui Wang, Ruofeng Rao, Shouming Zhong, "LMI-Based Stability Criterion for Impulsive CGNNs via Fixed Point Theory", *Mathematical Problems in Engineering*, vol. 2015, Article ID 281681, 10 pages, 2015. https://doi.org/10.1155/2015/281681

# LMI-Based Stability Criterion for Impulsive CGNNs via Fixed Point Theory

**Academic Editor:** Xinggang Yan

#### Abstract

The linear matrix inequality (LMI) method and the contraction mapping theorem are employed to prove the existence of a globally exponentially stable trivial solution for impulsive Cohen-Grossberg neural networks (CGNNs). It is worth mentioning that this is the first time the contraction mapping theorem has been used to prove stability for CGNNs, whereas only the Leray-Schauder fixed point theorem was applied in previous related literature. An example is given to illustrate the effectiveness of the proposed method, which admits a large allowable variation range of the impulses.

#### 1. Introduction and Preliminaries

The dynamics of Cohen-Grossberg neural networks (CGNNs) have been extensively investigated owing to their immense application potential in areas such as pattern recognition, parallel computing, associative memory, combinatorial optimization, and signal and image processing. These successful applications, however, always depend on the stability of the equilibrium solution of the CGNNs. Traditionally, Lyapunov theory has been employed to solve the stability problem of dynamical systems [1–11]. In studying the stability of neural networks, the Lyapunov-Krasovskii functional method combines well with other techniques, such as the linear matrix inequality (LMI) optimization approach, $M$-matrix theory, and nonsmooth analysis (see, e.g., [6–8]). Of course, as one of the stability analysis methods, the Lyapunov method has its limitations. In fact, a stability criterion for impulsive neural networks is regarded as effective and efficient if it allows a large variation range of the impulses (see, e.g., the remark in [12]). Thereby, methods different from Lyapunov's direct method may yield more efficient stability criteria for impulsive systems. Indeed, fixed point theories have attracted the attention of many authors. Burton [13, 14], Rao and Pu [15], Jung [16], Luo [17], Zhang [18], and Wu et al. [19] studied stability by means of fixed point theory, which resolves difficulties encountered when studying stability via Lyapunov's direct method. The contraction mapping theorem has been the usual tool for studying the stability of neural networks, except for CGNNs: owing to certain technical difficulties, only the Leray-Schauder fixed point theorem has been applied to the stability of CGNNs [20, 21]. In this paper, the contraction mapping theorem is applied to the stability analysis of CGNNs.
We hope that our newly obtained stability criterion allows a larger variation range of the impulses than those in previously related literature. This is the main purpose of this paper.

Consider the following CGNNs:
$$
\begin{cases}
\dot{x}_i(t) = -a_i\big(x_i(t)\big)\Big[b_i\big(x_i(t)\big) - \sum_{j=1}^{n} c_{ij} f_j\big(x_j(t)\big) - \sum_{j=1}^{n} d_{ij} g_j\big(x_j(t-\tau_j(t))\big)\Big], & t \ge t_0,\ t \ne t_k, \\
\Delta x_i(t_k) = x_i(t_k^+) - x_i(t_k^-) = I_k\big(x_i(t_k^-)\big), & k = 1, 2, \ldots,
\end{cases} \tag{1}
$$
where $i = 1, 2, \ldots, n$. $a_i(\cdot)$ and $b_i(\cdot)$ represent an amplification function and an appropriately behaved function at time $t$, respectively. $f_j$ and $g_j$ are the activation functions of the neurons, and $c_{ij}$ and $d_{ij}$ are connection parameters. The time-varying delays satisfy $0 \le \tau_j(t) \le \tau$. The impulsive moments $t_k$ satisfy $t_0 < t_1 < t_2 < \cdots$ and $\lim_{k \to \infty} t_k = +\infty$. $x_i(t_k^+)$ and $x_i(t_k^-)$ stand for the right-hand and left-hand limits of $x_i(t)$ at time $t_k$, respectively, and $\Delta x_i(t_k)$ means the abrupt change of $x_i(t)$ at the impulsive moment $t_k$.
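As a purely illustrative companion to the model, the following sketch simulates a small impulsive CGNN of the general form above by forward Euler integration. Every function and parameter choice here (the amplification $2+\cos x$, the behaved function $2x$, `tanh` activations, the connection matrices, and the shrinking impulse rule) is a hypothetical example, not the network studied in this paper.

```python
import numpy as np

def simulate(T=10.0, dt=1e-3, impulse_times=(2.0, 4.0, 6.0, 8.0)):
    """Forward-Euler simulation of a toy 2-neuron impulsive CGNN."""
    n = 2
    C = np.array([[0.10, -0.20], [0.05, 0.10]])   # connection parameters c_ij
    D = np.array([[0.05, 0.10], [-0.10, 0.05]])   # delayed connections d_ij
    tau = 0.5                                      # constant delay
    a = lambda x: 2.0 + np.cos(x)                  # amplification, in [1, 3]
    b = lambda x: 2.0 * x                          # behaved function, b'(x) = 2
    f = g = np.tanh                                # activation functions

    steps = int(round(T / dt))
    delay_steps = int(round(tau / dt))
    x = np.zeros((steps + 1, n))
    x[0] = [0.5, -0.3]                             # initial state
    imp_steps = {int(round(s / dt)) for s in impulse_times}

    for k in range(steps):
        xd = x[max(k - delay_steps, 0)]            # delayed state x(t - tau)
        dx = -a(x[k]) * (b(x[k]) - C @ f(x[k]) - D @ g(xd))
        x[k + 1] = x[k] + dt * dx
        if k + 1 in imp_steps:                     # abrupt change at t_k
            x[k + 1] *= 0.8
    return x

traj = simulate()
```

With these choices the trajectory decays to the origin, consistent with the trivial solution being stable; of course, stability here is only observed numerically, not proved.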

Throughout this paper, we assume that $f_j(0) = g_j(0) = 0$ and $b_i(0) = 0$ for $i, j = 1, 2, \ldots, n$. Denote by $x(t) = x(t; t_0, \varphi)$ the solution of system (1) with the initial condition $x(s) = \varphi(s)$ for $s \in [t_0 - \tau, t_0]$, where $\varphi$ is continuous on $[t_0 - \tau, t_0]$. The solution of system (1) is, in the time variable $t$, a continuous vector-valued function.

Throughout this paper, we assume the following.
(H1) For any $i$ and $j$, there exist constants $\underline{a}_i > 0$, $\overline{a}_i > 0$, $F_j \ge 0$, and $G_j \ge 0$ such that, for all $u, v \in \mathbb{R}$,
$$\underline{a}_i \le a_i(u) \le \overline{a}_i, \qquad |f_j(u) - f_j(v)| \le F_j |u - v|, \qquad |g_j(u) - g_j(v)| \le G_j |u - v|.$$
(H2) For any $i$, $b_i$ is differentiable, and there exists a constant $B_i > 0$ such that
$$b_i'(u) \ge B_i, \quad u \in \mathbb{R}.$$
(H3) There exist nonnegative constants $\gamma_k$ such that
$$|I_k(u) - I_k(v)| \le \gamma_k |u - v|, \quad u, v \in \mathbb{R}.$$
(H4) For each $i$, there exists a constant $L_i \ge 0$ such that
$$|a_i(u) - a_i(v)| \le L_i |u - v|, \quad u, v \in \mathbb{R}.$$

*Remark 1.* $a_i$ represents an amplification function and $b_i$ an appropriately behaved function, and a great many functions satisfy the above assumptions. For example, let $a_i(u) = 2 + \cos u$; then $1 \le a_i(u) \le 3$ and $|a_i(u) - a_i(v)| \le |u - v|$; hence the bounds required of the amplification function hold. Now we let $b_i(u) = 2u$, and obviously $b_i'(u) = 2 > 0$.
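The kind of claims made in Remark 1 are easy to check numerically. The sketch below verifies, on a grid, that the sample choices $a_i(u) = 2 + \cos u$ and $b_i(u) = 2u$ (illustrative functions of our own choosing) give a bounded, Lipschitz amplification function and a differentiable behaved function with derivative bounded below, and that `tanh`, a common activation choice, is 1-Lipschitz.

```python
import numpy as np

# Grid-based sanity check of sample functions against the kinds of bounds
# in (H1)-(H2); the specific functions are illustrative assumptions.
u = np.linspace(-50.0, 50.0, 200001)

a_vals = 2.0 + np.cos(u)                      # candidate amplification function
a_lower, a_upper = a_vals.min(), a_vals.max()

b = lambda x: 2.0 * x                         # candidate behaved function
h = 1e-6
db = (b(u + h) - b(u - h)) / (2.0 * h)        # central-difference derivative

v = np.tanh(u)                                # a common activation choice
lip = np.abs(np.diff(v) / np.diff(u)).max()   # largest secant slope
```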

From the above assumptions, it is obvious that the null solution $x \equiv 0$ is a trivial solution of system (1).

*Definition 2.* System (1) is said to be globally exponentially stable if, for any initial condition $\varphi$, there exist positive constants $M$ and $\lambda$ such that
$$\|x(t; t_0, \varphi)\| \le M \|\varphi\| e^{-\lambda (t - t_0)}, \quad t \ge t_0,$$
where $\|\varphi\| = \sup_{t_0 - \tau \le s \le t_0} \|\varphi(s)\|$.
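To make Definition 2 concrete, consider a hypothetical scalar impulsive system of our own choosing (not from this paper): $\dot{x} = -2x$ between impulses, with the state multiplied by $1.5$ at each impulsive moment $t_k = k$. Even though every impulse expands the state, the closed-form solution $x(t) = x_0 \cdot 1.5^{\lfloor t \rfloor} e^{-2t}$ still satisfies the bound of Definition 2 with $M = 1$ and $\lambda = 2 - \ln 1.5 > 0$, since $1.5^{\lfloor t \rfloor} \le e^{t \ln 1.5}$.

```python
import math

def x(t, x0=1.0):
    # closed-form solution: decay at rate 2, multiplied by 1.5 at t = 1, 2, ...
    return x0 * (1.5 ** math.floor(t)) * math.exp(-2.0 * t)

lam = 2.0 - math.log(1.5)   # guaranteed exponential decay rate
M = 1.0                     # since 1.5**floor(t) <= exp(t * ln 1.5)

checks = [abs(x(t)) <= M * math.exp(-lam * t) * (1.0 + 1e-9)
          for t in [0.0, 0.5, 0.999, 1.0, 2.5, 7.0, 10.0]]
```

This is the sense in which a criterion can tolerate a large variation range of the impulses: the continuous dynamics absorb the expansive jumps.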

Lemma 3 (see [22]). *Let $P$ be a contraction operator on a complete metric space $\Theta$; then there exists a unique point $\theta \in \Theta$ for which $P(\theta) = \theta$.*
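Lemma 3 is the classical Banach contraction principle, and a small numerical illustration may help: on the complete metric space $[\cos 1, 1]$ with the usual metric, $P(x) = \cos x$ is a contraction (its derivative is bounded in absolute value by $\sin 1 < 1$ there, and the first iterate from any starting point in $[0,1]$ lands in this interval), so Picard iteration converges to the unique fixed point. This generic sketch is not the operator constructed in the proof below.

```python
import math

def fixed_point(P, x0, tol=1e-12, max_iter=1000):
    """Picard iteration for a contraction P; returns the unique fixed point."""
    x = x0
    for _ in range(max_iter):
        nxt = P(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("iteration did not converge")

theta = fixed_point(math.cos, 0.5)   # unique solution of cos(theta) = theta
```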

#### 2. Main Result

Before giving the main result, we first define some matrices as follows: Assume, in addition, that there exists a constant with for each , and that is a positive constant satisfying . Denote .

Theorem 4. *If there exists a positive constant such that the following LMI condition holds, then system (1) is globally exponentially stable.*
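The concrete LMI of Theorem 4 is not reproduced here, but LMI conditions of this kind are checked numerically in practice. As a generic stand-in (a standard Lyapunov inequality, not the paper's actual condition), the sketch below solves $A^{T}P + PA = -Q$ for an illustrative stable matrix $A$ and verifies the matrix inequalities $P > 0$ and $A^{T}P + PA < 0$ through eigenvalues of symmetric matrices; the matrices $A$ and $Q$ are arbitrary choices of ours.

```python
import numpy as np

def is_pos_def(S, tol=1e-10):
    """Check S > 0 via the smallest eigenvalue of its symmetric part."""
    return bool(np.linalg.eigvalsh((S + S.T) / 2.0).min() > tol)

A = np.array([[-2.0, 1.0], [0.0, -3.0]])   # illustrative Hurwitz matrix
Q = np.eye(2)                              # illustrative Q > 0

# Solve the Lyapunov equation A^T P + P A = -Q via a Kronecker linear system.
n = A.shape[0]
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(K, -Q.flatten()).reshape(n, n)
P = (P + P.T) / 2.0                        # symmetrize against round-off

feasible = is_pos_def(P) and is_pos_def(-(A.T @ P + P @ A))
```

For the genuine LMI of the theorem one would assemble the block matrix from the problem data and run the same kind of eigenvalue (or semidefinite-programming) feasibility test.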

*Proof.* Our proof is based on the contraction mapping principle (Lemma 3) and may be divided into four steps.
*Step 1*. We set up a suitable space of functions.

Let , and let be the space consisting of functions such that, for any :
(a) is continuous on with ;
(b) ;
(c) as , where is the constant with ;
(d) both and exist; besides, for .
Moreover, is a complete metric space when it is equipped with a metric defined by
where and .
*Remark 5.* It follows that
and hence
*Step 2*. Define a mapping on the space .

Multiplying both sides of system (1) by yields, for and ,
Let be small enough. After integrating from to , we get
Letting in (13), we have
Setting in (14), we obtain
which yields by letting
Combining (14) and (16) results in
Hence,
which produces
Thereby, we may define the mapping , where , and . In addition, for any , the mapping satisfies
and as .
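The Step 2 construction follows a familiar pattern: rewrite the system with an integrating factor, integrate, and read off an operator whose fixed point is the solution. As a drastically simplified stand-in (a scalar delay equation of our own choosing, not the paper's mapping), the sketch below builds the variation-of-constants operator for $\dot{x}(t) = -x(t) + 0.3\tanh(x(t-\tau))$ with constant initial history $\varphi \equiv 1$ and iterates it to a fixed point.

```python
import numpy as np

# Toy variation-of-constants operator for x'(t) = -x(t) + 0.3*tanh(x(t - tau)):
# (P q)(t) = e^{-t} * (1 + \int_0^t e^{s} * 0.3 * tanh(q(s - tau)) ds).
# All parameter choices are illustrative.
tau, T, dt = 0.5, 5.0, 1e-3
N = int(round(T / dt))
d = int(round(tau / dt))
t = np.linspace(0.0, T, N + 1)
phi0 = 1.0                                 # constant history phi(s) = 1

def Pmap(q):
    qd = np.concatenate([np.full(d, phi0), q[: N + 1 - d]])   # q(s - tau)
    integrand = np.exp(t) * 0.3 * np.tanh(qd)
    trapz = np.concatenate(
        [[0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2.0) * dt])
    return np.exp(-t) * (phi0 + trapz)

q = np.full(N + 1, phi0)                   # initial guess: the constant history
for _ in range(60):
    q_new = Pmap(q)
    delta = float(np.max(np.abs(q_new - q)))
    q = q_new
    if delta < 1e-10:
        break
residual = float(np.max(np.abs(Pmap(q) - q)))
```

Because $0.3\tanh$ is $0.3$-Lipschitz and $\int_0^t e^{-(t-s)}\,ds \le 1$, the operator is a contraction with factor $0.3$ in the sup norm, so the iterates converge geometrically; the fixed point is the solution of the delay equation.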

Next, we will prove that the mapping maps the space into itself.

It is obvious that conditions (a) and (b) hold in . So we only need to prove that, for any ,
Indeed, it is obvious that
where
It follows that as , and hence as .

Note that if . So, for any , there exists a positive constant such that
Thereby,
The continuity of implies that there exists a constant such that
In addition, condition (H1) yields that . Let , and we can derive from (25), (27), and the change-of-variables formula for integrals that
Now we can conclude from the arbitrariness of that , .

Similarly, condition (H2) yields
Let ; then by (24) we get
Combining (29) and (30) results in
Then the arbitrariness of yields that , .

It follows by (H3) and (25) that
In addition,
which implies that the function is increasing on . This yields
From (32) and (34), the arbitrariness of yields that , .

So we can deduce from the above analysis that , , and hence condition (c) is satisfied.

Next, we prove that condition (d) holds in .

Indeed, for a given , it follows from (20) that
where
It is obvious that as . From (36), letting be small enough, we get
which implies that . On the other hand, letting be small enough, we have
which leads to . Hence, condition (d) holds in .

It follows from the above analysis that for all .
*Step 3*. We claim that is a contraction mapping on .

Indeed, for and , we have
where
It follows by the triangle inequality, (H1), (H2), and the differential mean value theorem that
On the other hand,
So we can get by the above analysis