Abstract

A class of impulsive Cohen-Grossberg neural networks with time delay in the leakage term is investigated. By using the M-matrix method and the technique of delay differential inequality, the attracting and invariant sets of the networks are obtained. The results in this paper extend and improve those in earlier publications. An example is presented to illustrate the effectiveness of our conclusions.

1. Introduction

The Cohen-Grossberg neural network model, which was initially proposed by Cohen and Grossberg [1] in 1983, has found successful applications in many fields such as pattern recognition, parallel computing, associative memory, signal and image processing, and combinatorial optimization. Hence, there has been increasing interest in studying the stability and asymptotic behavior of this model with delays, impulses, and a unique equilibrium, and many significant results have been obtained (see, e.g., [2–6]). However, an equilibrium point sometimes does not exist in many real physical systems, so it is an interesting subject to discuss the attracting and invariant sets of such neural networks [7, 8].

On the other hand, leakage delay, that is, the time delay in the leakage term, is an important factor affecting the stability of a system and has attracted considerable attention (see, e.g., [9–13]). However, to the best of our knowledge, so far there are few results on the attracting and invariant sets of Cohen-Grossberg neural networks with leakage delay. Motivated by the above discussion, in this paper we investigate the attracting and quasi-invariant sets of a class of impulsive Cohen-Grossberg neural networks with leakage delay. By using the M-matrix method and the technique of delay differential inequality, the attracting and quasi-invariant sets of the addressed networks are obtained. The results in this paper extend and improve those in earlier publications. An example is presented to illustrate the effectiveness of our conclusions.

2. Model Description and Preliminaries

Let $\mathbb{R}^{n}$ be the space of $n$-dimensional real column vectors and let $\mathbb{R}^{m\times n}$ denote the set of $m\times n$ real matrices. Usually, $E$ denotes the unit matrix of appropriate dimension. For $A, B\in\mathbb{R}^{m\times n}$ or $A, B\in\mathbb{R}^{n}$, the notation $A\ge B$ ($A>B$, $A\le B$, $A<B$) means that each pair of corresponding elements of $A$ and $B$ satisfies the inequality "$\ge$" ("$>$", "$\le$", "$<$"). In particular, $A$ is called a nonnegative matrix if $A\ge 0$, and $z$ is called a positive vector if $z>0$.

Let $\tau=\max\{\delta,\overline{\tau}\}$, where $\delta$ is the leakage delay and $\overline{\tau}$ is the bound of the transmission delays in system (2) below; for $x\in\mathbb{R}^{n}$ and $A=(a_{ij})\in\mathbb{R}^{n\times n}$, we define
$$[x]^{+}=(|x_{1}|,\ldots,|x_{n}|)^{T},\qquad [A]^{+}=(|a_{ij}|)_{n\times n}.$$

$C[X,Y]$ denotes the space of continuous mappings from the topological space $X$ to the topological space $Y$. In particular, let $C\triangleq C[[-\tau,0],\mathbb{R}^{n}]$.

$PC^{1}[J,\mathbb{R}^{n}]=\{\psi:J\to\mathbb{R}^{n}\mid\psi(t)$ is continuous and has a continuous derivative everywhere except at a finite number of points $t_{k}$, at which $\psi(t_{k}^{+})$, $\psi(t_{k}^{-})$, $\dot{\psi}(t_{k}^{+})$, and $\dot{\psi}(t_{k}^{-})$ exist and $\psi(t_{k}^{+})=\psi(t_{k})$, $\dot{\psi}(t_{k}^{+})=\dot{\psi}(t_{k})\}$, where $\dot{\psi}$ denotes the derivative of $\psi$. $PC$ is the space of piecewise right-hand continuous functions equipped with the norm $\|\psi\|_{\tau}=\sup_{-\tau\le s\le 0}\|\psi(s)\|$, where $\|\cdot\|$ is a norm in $\mathbb{R}^{n}$.

$PC[[t_{0}-\tau,\infty),\mathbb{R}^{n}]=\{x:x(t)$ is continuous at $t\ne t_{k}$, $x(t_{k}^{+})$ and $x(t_{k}^{-})$ exist, and $x(t_{k}^{+})=x(t_{k})$, for $k\in\mathbb{N}\}$.

In this paper, we consider the following Cohen-Grossberg neural network with impulses and time delays:
$$\dot{x}_{i}(t)=-a_{i}(x_{i}(t))\Big[b_{i}(x_{i}(t-\delta))-\sum_{j=1}^{n}c_{ij}f_{j}(x_{j}(t))-\sum_{j=1}^{n}d_{ij}g_{j}(x_{j}(t-\tau_{ij}(t)))-I_{i}\Big],\quad t\ge t_{0},\ t\ne t_{k},$$
$$x_{i}(t_{k})=J_{ik}(x_{i}(t_{k}^{-})),\quad k\in\mathbb{N},\tag{2}$$
where $i=1,2,\ldots,n$, and $n$ corresponds to the number of units in the neural network; $x_{i}(t)$ corresponds to the state of the $i$th unit at time $t$; $f_{j}$ and $g_{j}$ are the activation functions of the $j$th unit; $\tau_{ij}(t)$ denotes the transmission delay and satisfies $0\le\tau_{ij}(t)\le\overline{\tau}$ ($\overline{\tau}$ is a constant); $\delta\ge 0$ is the leakage delay; $c_{ij}$ and $d_{ij}$ are the connection weights; $a_{i}(\cdot)$ represents the amplification function of the $i$th neuron; $b_{i}(\cdot)$ is the behaved function; $I_{i}$ is the external input; $J_{ik}(\cdot)$ is the impulsive perturbation of the $i$th unit at the impulsive moment $t_{k}$. The fixed impulsive moments $t_{k}$ ($k\in\mathbb{N}$) satisfy $t_{0}<t_{1}<t_{2}<\cdots$ and $\lim_{k\to\infty}t_{k}=+\infty$.
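To make the model concrete, the following Python sketch integrates a small two-neuron instance of a network of this general form with a fixed-step Euler scheme, mainly to illustrate the roles of the leakage delay, the transmission delay, and the impulsive moments. All parameter values, the tanh activations, and the impulse map are illustrative assumptions, not the data of the paper's example, and the right-hand side follows the general Cohen-Grossberg form reconstructed above rather than any specific system studied later.

```python
import numpy as np

# Hypothetical two-neuron impulsive Cohen-Grossberg network with leakage delay:
#   dx_i/dt = -a_i(x_i(t)) * [ b_i(x_i(t - delta))
#                              - sum_j c_ij * f_j(x_j(t))
#                              - sum_j d_ij * g_j(x_j(t - tau)) - I_i ],  t != t_k,
#   x_i(t_k) = J_ik(x_i(t_k^-)) at the impulsive moments t_k.
# Everything below is an illustrative assumption, not the paper's example data.

n = 2
delta = 0.1                                # leakage delay
tau = 0.2                                  # (constant) transmission delay
C = np.array([[0.2, -0.1], [0.1, 0.2]])    # connection weights for f
D = np.array([[0.1,  0.1], [-0.1, 0.1]])   # connection weights for g
I = np.array([0.5, -0.3])                  # external inputs
impulse_times = [1.0, 2.0, 3.0, 4.0]       # fixed impulsive moments t_k

a = lambda x: 1.0 + 0.5 / (1.0 + x**2)     # amplification functions, values in [1, 1.5]
b = lambda x: 2.0 * x                      # behaved functions
f = np.tanh                                # activation f_j
g = np.tanh                                # activation g_j
J = lambda x: 0.8 * x                      # impulse map x(t_k) = J(x(t_k^-))

h = 0.001                                  # Euler step
T = 5.0
steps = round(T / h)
hist_len = round(max(delta, tau) / h)      # history buffer length
k_delta = round(delta / h)
k_tau = round(tau / h)

# Constant initial function on [-max(delta, tau), 0].
x = np.zeros((steps + hist_len + 1, n))
x[: hist_len + 1] = np.array([0.8, -0.6])

next_impulse = 0
for s in range(hist_len, hist_len + steps):
    t = (s - hist_len) * h
    xd = x[s - k_delta]                    # x(t - delta), leakage term
    xt = x[s - k_tau]                      # x(t - tau), transmission term
    rhs = -a(x[s]) * (b(xd) - C @ f(x[s]) - D @ g(xt) - I)
    x[s + 1] = x[s] + h * rhs
    # Apply the impulse map when a fixed impulsive moment t_k is crossed.
    if next_impulse < len(impulse_times) and t + h >= impulse_times[next_impulse]:
        x[s + 1] = J(x[s + 1])
        next_impulse += 1

print("state at T =", T, ":", x[-1])
```

A fixed-step Euler scheme is used here only because constant delays then correspond to fixed index offsets into the history buffer; any delay-aware solver could be substituted.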

Definition 1 (see [14]). A function $x(t)$ is said to be a solution of (2) through $(t_{0},\varphi)$ if $x(t)\in PC[[t_{0}-\tau,\infty),\mathbb{R}^{n}]$ and $x(t)$ satisfies (2) for $t\ge t_{0}$ with the initial condition
$$x(t_{0}+s)=\varphi(s),\quad s\in[-\tau,0],\ \varphi\in PC.$$
Throughout the paper, we always assume that, for any $\varphi\in PC$, system (2) has at least one solution through $(t_{0},\varphi)$, denoted by $x(t,t_{0},\varphi)$ or $x_{t}(t_{0},\varphi)$ (simply $x(t)$ and $x_{t}$ if no confusion occurs), where $x_{t}(t_{0},\varphi)(s)=x(t+s,t_{0},\varphi)$, $s\in[-\tau,0]$.

Definition 2 (see [7]). The set $S\subset PC$ is called a positive invariant set of (2) if, for any initial value $\varphi\in S$, we have the solution $x_{t}(t_{0},\varphi)\in S$ for $t\ge t_{0}$.

Definition 3 (see [7]). The set $S\subset PC$ is called a quasi-invariant set of (2) if there exist a nonnegative matrix $\Lambda$ and a nonnegative vector $r$ such that, for any initial value $\varphi\in S$, there exists a vector $z$ such that the solution of (2) satisfies $[x(t,t_{0},\varphi)]^{+}\le\Lambda z+r$, $t\ge t_{0}$. Obviously, the set $S$ is an invariant set of (2) if $\Lambda=E$ and $r=0$.

Definition 4 (see [7]). The set $S\subset PC$ is called a global attracting set of (2) if, for any initial value $\varphi\in PC$, the solution $x(t,t_{0},\varphi)$ converges to $S$ as $t\to+\infty$. That is,
$$\operatorname{dist}(x(t,t_{0},\varphi),S)\to 0\quad\text{as } t\to+\infty,$$
where $\operatorname{dist}(x,S)=\inf_{y\in S}\|x-y\|$ for $x\in\mathbb{R}^{n}$.
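As a simple illustration of Definitions 2 and 4 (an assumed toy example, not taken from the paper), consider the scalar equation $\dot{x}(t)=-ax(t)+I$ with constants $a>0$ and $I\in\mathbb{R}$, with no delay and no impulses. The variation-of-constants formula gives
$$x(t)=e^{-a(t-t_{0})}x(t_{0})+\frac{I}{a}\bigl(1-e^{-a(t-t_{0})}\bigr),\qquad\text{so}\qquad |x(t)|\le e^{-a(t-t_{0})}|x(t_{0})|+\frac{|I|}{a}.$$
Hence the set $S=\{x\in\mathbb{R}:|x|\le|I|/a\}$ is a global attracting set, and it is also positively invariant, since $|x(t_{0})|\le|I|/a$ forces $|x(t)|\le|I|/a$ for all $t\ge t_{0}$. The results in Section 3 establish sets of an analogous flavor for (2), with the role of $|I|/a$ played by quantities built from the M-matrix in (A4).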

Definition 5 (see [8]). The zero solution of (2) is said to be globally exponentially stable if, for any solution $x(t,t_{0},\varphi)$, there exist constants $\lambda>0$ and $M\ge 1$ such that $\|x(t,t_{0},\varphi)\|\le M\|\varphi\|_{\tau}e^{-\lambda(t-t_{0})}$, $t\ge t_{0}$.

Definition 6 (see [15]). Let the matrix $D\in\mathbb{R}^{n\times n}$ have nonpositive off-diagonal elements; then each of the following conditions is equivalent to the statement that $D$ is a nonsingular M-matrix.
(i) All the leading principal minors of $D$ are positive.
(ii) $D=sE-G$ and $s>\rho(G)$, where $G\ge 0$ and $\rho(G)$ is the spectral radius of $G$.
(iii) The diagonal elements of $D$ are all positive and there exists a positive vector $z$ such that $Dz>0$ or $D^{T}z>0$.
For a nonsingular M-matrix $D$, we denote $\Omega_{M}(D)\triangleq\{z\in\mathbb{R}^{n}:Dz>0,\ z>0\}$.
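Condition (iii) of Definition 6 is convenient to verify numerically. The Python sketch below checks characterizations (i) and (iii) for a small matrix with nonpositive off-diagonal entries; the test matrix is a hypothetical illustration (it is not the matrix of the example in Section 4), and only numpy is assumed to be available.

```python
import numpy as np

def is_nonsingular_M_matrix(D, tol=1e-12):
    """Check Definition 6 for a square matrix D with nonpositive off-diagonal
    entries, using two of the equivalent characterizations:
      (i)   all leading principal minors of D are positive;
      (iii) there is a positive vector z with D z > 0.
    For (iii) we take z as the solution of D z = (1, ..., 1)^T; for a matrix with
    nonpositive off-diagonal entries this z is positive exactly when D is a
    nonsingular M-matrix, and then z lies in the M-cone Omega_M(D).
    """
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    # Sanity check: off-diagonal entries must be nonpositive.
    off = D - np.diag(np.diag(D))
    if np.any(off > tol):
        return False
    # (i) leading principal minors.
    minors_positive = all(np.linalg.det(D[:k, :k]) > tol for k in range(1, n + 1))
    # (iii) positive vector z with D z > 0.
    try:
        z = np.linalg.solve(D, np.ones(n))
    except np.linalg.LinAlgError:
        return False
    return minors_positive and bool(np.all(z > tol))

# Hypothetical 2x2 matrix (illustrative only, not the paper's example matrix).
D = np.array([[ 1.5, -0.4],
              [-0.3,  1.2]])
print(is_nonsingular_M_matrix(D))    # True
print(is_nonsingular_M_matrix(-D))   # False: -D has positive off-diagonal entries
```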

Lemma 7 (see [14]). For a nonsingular M-matrix $D$, $\Omega_{M}(D)$ is nonempty, and for any $z_{1},z_{2}\in\Omega_{M}(D)$ and any scalars $k_{1},k_{2}>0$ we have
$$k_{1}z_{1}+k_{2}z_{2}\in\Omega_{M}(D).$$
So $\Omega_{M}(D)$ is a cone without conical surface in $\mathbb{R}^{n}$. We call it an "M-cone."

Lemma 8 (see [16]). Let and satisfy where , , (), , , , and , . Suppose that is a nonsingular M-matrix.
(1) If the initial condition satisfies where , , then , for .
(2) Suppose that there exist a scalar and a vector such that where If the initial condition satisfies then

3. Main Results

In this paper, we always suppose the following.
(A1) , where is a constant, .
(A2) is differentiable, and there exist constants such that , , for any .
(A3) and are Lipschitz continuous; that is, there exist constants and such that, for any ,
(A4) is a nonsingular M-matrix, where (see the sketch after this list for a typical form of this matrix).
(A5) , , for any , where , .
(A6) For , where , , and , satisfy where .
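The displayed formulas inside (A1)–(A6) were lost in extraction, so the following display only sketches the form that the matrix in (A4) typically takes in this setting; the amplification bounds $0<\underline{a}_{i}\le a_{i}(\cdot)\le\overline{a}_{i}$, the lower bounds $b_{i}'(\cdot)\ge\gamma_{i}>0$, and the Lipschitz constants $p_{j}$, $q_{j}$ of $f_{j}$, $g_{j}$ are assumed names matching the roles described in (A1)–(A3), not necessarily the authors' exact notation:
$$D=\operatorname{diag}\bigl(\underline{a}_{1}\gamma_{1},\ldots,\underline{a}_{n}\gamma_{n}\bigr)-\Bigl(\overline{a}_{i}\bigl(|c_{ij}|p_{j}+|d_{ij}|q_{j}\bigr)\Bigr)_{n\times n}.$$
Read this way, (A4) asks that the decay supplied by the amplification and behaved functions dominate the Lipschitz gains of the delayed and nondelayed interconnections.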

Theorem 9. Assume that (A1)–(A6) hold. Then, for any , the set is a quasi-invariant set of (2).

Proof. Combining (2) with the mean value theorem, we can get where .
Case 1. Let us first consider . In this case ,  . Notice that ; there exist such that , for all .
From (A1)–(A4) and (16), we have Hence,
Case 2. Let us consider . From (16), we get Then from (A1)–(A4), we have From (17) and (20), we get
For the initial conditions , , where , we have By (21) and Lemma 8, we have
Suppose that for all the inequalities hold, where . Then, from (A5) and (A6), This, together with , leads to On the other hand, It follows from (A4), (26), (27), and Lemma 8 that By induction, we can conclude that From (A6), and we have This implies that the conclusion holds and the proof is complete.

Theorem 10. Assume that (A1)–(A6) hold. Then the set is a global attracting set of (2).

Proof. By an argument similar to the proof of Theorem 9, we can get (21) and (31).
By continuity, we can find and a sufficiently small such that where
For the initial conditions , , where , we have and so By induction and Lemma 8, we can conclude that From (A6), we conclude that This implies that the conclusion holds and the proof is complete.

Theorem 11. Assume that (A1)–(A5) with hold. Then is a positive invariant set and also a global attracting set of (2).

Proof (straightforward). Obviously, if , then is a solution of (2).

Corollary 12. Assume that (A1)–(A5) with ,   hold. If where satisfy ,  , then the zero solution of (2) is globally exponentially stable.

Remark 13. If , and , then from Theorems 9 and 10 we can get Theorem 5.1 and Corollary 5.1 in [16].

Remark 14. If , and , then from Corollary 12 we can get Corollary 3.1 in [5].

4. Illustrative Example

Consider the Cohen-Grossberg neural network (2) with the following parameters, activation functions, amplification functions, behaved functions, and delay functions:

Obviously, ,  ,  ,  and . The parameters of condition (A4) are as follows:

We can easily observe that is a nonsingular M-matrix and

Let and which satisfies the inequality

Clearly, all conditions of Theorems 9 and 10 are satisfied. So is a quasi-invariant set and also a global attracting set of (2).
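Because the numerical data of this example did not survive extraction, the Python sketch below only illustrates how the verification in this section would proceed for a hypothetical two-neuron parameter set: assemble an (A4)-style matrix from assumed bounds, confirm the nonsingular M-matrix property via Definition 6(iii), and obtain a vector in the M-cone. None of the numbers are the paper's, and the construction of the matrix follows the typical form sketched after (A1)–(A6), not necessarily the authors' exact definition.

```python
import numpy as np

# Hypothetical data standing in for the example's lost parameters (assumptions).
a_lower = np.array([1.0, 1.0])             # lower bounds of the amplification functions
a_upper = np.array([1.5, 1.5])             # upper bounds of the amplification functions
gamma   = np.array([2.0, 2.0])             # lower bounds of b_i'(.)
p = np.array([1.0, 1.0])                   # Lipschitz constants of f_j
q = np.array([1.0, 1.0])                   # Lipschitz constants of g_j
C = np.array([[0.2, 0.1], [0.1, 0.2]])     # |c_ij|
Dmat = np.array([[0.1, 0.1], [0.1, 0.1]])  # |d_ij|

# An (A4)-style matrix: decay part minus Lipschitz-weighted interconnections.
M = np.diag(a_lower * gamma) - a_upper[:, None] * (C * p + Dmat * q)

# Verify the nonsingular M-matrix property via Definition 6(iii): solve M z = 1;
# for a matrix with nonpositive off-diagonal entries, z > 0 if and only if M is
# a nonsingular M-matrix, and then z lies in the M-cone Omega_M(M).
z = np.linalg.solve(M, np.ones(2))
print("M =\n", M)
print("z =", z, "-> nonsingular M-matrix:", bool(np.all(z > 0)))
```

With the actual parameters of the example restored, the same few lines would reproduce the check that the matrix in (A4) is a nonsingular M-matrix and supply a concrete vector in its M-cone for Theorems 9 and 10.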

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (11171374) and Natural Science Foundation of Shandong Province (ZR2011AZ001).