Abstract and Applied Analysis

Volume 2014, Article ID 327070, 14 pages

http://dx.doi.org/10.1155/2014/327070
Research Article

Existence, Uniqueness, and Stability Analysis of Impulsive Neural Networks with Mixed Time Delays

Qiang Xi and Jianguo Si

1School of Mathematics, Shandong University, Jinan 250100, China

2School of Mathematics and Quantitative Economics, Shandong University of Finance and Economics, Jinan 250002, China

Received 22 January 2014; Revised 16 March 2014; Accepted 25 March 2014; Published 15 May 2014

Academic Editor: Ivanka Stamova

Copyright © 2014 Qiang Xi and Jianguo Si. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We study a class of impulsive neural networks with mixed time delays and generalized activation functions. The mixed delays include a time-varying transmission delay, a bounded time-varying distributed delay, and a discrete constant delay in the leakage term. Using the contraction mapping theorem, we obtain a sufficient condition guaranteeing the global existence and uniqueness of the solution of the addressed neural networks. In addition, a delay-independent sufficient condition for the existence of an equilibrium point and some delay-dependent sufficient conditions for stability are derived by topological degree theory and the Lyapunov-Krasovskii functional method, respectively. The presented results require neither boundedness, monotonicity, nor differentiability of the activation functions, nor differentiability (or even boundedness of the derivative) of the time-varying delays. Moreover, the proposed stability criteria are given in terms of linear matrix inequalities (LMIs), which can be conveniently checked with the MATLAB LMI toolbox. Finally, an example is given to show the effectiveness and reduced conservatism of the obtained results.

1. Introduction

As is well known, time delay in a system is a common phenomenon: the future state of the system depends not only on the present state but also on past states. Delays are unavoidably encountered in many fields such as automatic control, biological chemistry, physics, engineering, and neural networks [1–5]. Moreover, time delays can affect the stability of a neural network and create oscillatory or poor dynamic performance [3–5]. Hence, it is significant and necessary to take into account delay effects on the dynamics of neural networks, for example, on existence, uniqueness, and stability. To date, neural network models with two categories of time delays, namely, discrete and continuously distributed time delays, have been extensively investigated by many researchers using several effective approaches; see [6–22] and the references therein. For instance, in [6], Kharitonov and Zhabko studied the robust stability of time-delay systems via the Lyapunov-Krasovskii functional approach. Wu et al. [9] introduced the free-weighting matrix approach and investigated the robust stability problem for time-varying delay systems. Gu introduced the delay decomposition method in [10].

Recently, a special type of time delay, namely, leakage delay (or forgetting delay), has been identified and investigated owing to its presence in many real systems. In 2007, Gopalsamy [11] proposed bidirectional associative memory neural networks with a constant delay in the leakage term and derived sufficient conditions for the existence and stability of an equilibrium. Based on this work, Li and Huang [12] investigated the stability of general nonlinear systems with leakage delay by model transformation, the contraction mapping theorem, and a degenerate Lyapunov-Krasovskii functional. However, the dynamical analysis of neural networks with time delay in the leakage term has received little attention owing to theoretical and technical difficulties [23–30]. In fact, a time delay in the stabilizing negative feedback term tends to destabilize a system [11] and has a great impact on the dynamics of neural networks.

On the other hand, besides delays, impulses are also likely to occur in neural networks. In implementations of electronic networks, the state is subject to instantaneous perturbations and experiences abrupt changes at certain moments, which may be caused by switching phenomena, frequency changes, or other sudden noises; that is, the network exhibits impulsive effects; see [13–21, 31–35]. Therefore, impulsive perturbations should be taken into account when studying the dynamics of neural networks. Since delays and impulses are frequently a source of instability, bifurcation, and chaos for dynamical systems, it is significant to study both delay and impulsive effects on dynamical systems [13–21, 25, 26, 32]. In [25], Li et al. investigated the existence, uniqueness, and stability of recurrent neural networks with a discrete time delay and a time delay in the leakage term under impulsive perturbations, but without distributed delay. Since neural networks usually have a spatial extent, there is a distribution of propagation delays over a period of time. In these circumstances signal propagation is not instantaneous and cannot be modelled with discrete delays alone; a more appropriate approach is to incorporate continuously distributed delays in neural network models. To the best of our knowledge, there has so far been very little work on impulsive neural networks with a time delay in the leakage term together with discrete and time-varying distributed delays via the LMI approach [27].

Motivated by the aforementioned discussion, in this paper we consider a class of impulsive neural networks with mixed time delays and generalized activation functions which may differ from one another. The mixed time delays include a time-varying transmission delay, a bounded time-varying distributed delay, and a constant delay in the leakage term. Firstly, by using the contraction mapping theorem, we obtain a delay-independent sufficient condition guaranteeing the global existence and uniqueness of the solution of the addressed neural networks. Secondly, we present a delay-independent sufficient condition guaranteeing the existence of an equilibrium point by using topological degree theory. Thirdly, some sufficient conditions depending on the leakage delay, the time-varying transmission delay, and the distributed delay are derived to guarantee the global asymptotic stability of the equilibrium point by using a new Lyapunov-Krasovskii functional and some analysis techniques. The presented results require neither boundedness, monotonicity, nor differentiability of the activation functions, nor differentiability (or even boundedness of the derivative) of the time-varying delays, and are therefore more effective and less conservative than those in the existing literature [27]. In the absence of leakage delay, the obtained results are also new. Moreover, the proposed stability criteria are given in terms of linear matrix inequalities (LMIs) [36] and can be conveniently checked with the LMI toolbox in MATLAB. Finally, an example is given to show the effectiveness and reduced conservatism of the obtained results.

Notations. Let ℝ (ℝ₊) denote the set of (positive) real numbers, ℤ₊ the set of positive integers, and ℝⁿ the n-dimensional real space equipped with the Euclidean norm ‖·‖. A > 0 (A < 0) denotes that the matrix A is symmetric and positive definite (negative definite). The notations Aᵀ and A⁻¹ mean the transpose of A and the inverse of a square matrix A. λ_max(A) (λ_min(A)) denotes the maximum (minimum) eigenvalue of a matrix A. I denotes the identity matrix with appropriate dimensions, and [·] denotes the integer function. For any interval J ⊆ ℝ, PC(J, ℝⁿ) denotes the class of functions φ: J → ℝⁿ that are continuously differentiable everywhere except at a finite number of points t, at which the one-sided limits of φ and of its derivative φ′ exist and φ and φ′ are left continuous. For any φ ∈ PC, the norm of φ is defined as the supremum of ‖φ(s)‖ over the corresponding interval. In addition, the notation ⋆ always denotes the symmetric block in a symmetric matrix.

2. Preliminaries

Consider the following impulsive neural network model: where is the neuron state vector of the network; is a diagonal matrix with , , ; , , and are the connection weight matrix, the delayed connection weight matrix, and the distributively delayed connection weight matrix, respectively; is an external input; is a constant denoting the leakage delay; is the time-varying transmission delay of the network; is the time-varying distributed delay of the network; , , and represent the neuron activation functions; and is the delay kernel function.
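The display equation of the model did not survive extraction. With hypothetical symbol names (x for the state, D for the diagonal self-feedback matrix, A, B, C for the three weight matrices, δ, τ(t), σ(t) for the leakage, transmission, and distributed delays, K for the kernel, J for the input, and I_k for the impulsive functions), a model of the described type typically takes the form:

```latex
\dot{x}(t) = -D\,x(t-\delta) + A f\bigl(x(t)\bigr) + B g\bigl(x(t-\tau(t))\bigr)
           + C \int_{t-\sigma(t)}^{t} K(t-s)\, h\bigl(x(s)\bigr)\, \mathrm{d}s + J,
           \qquad t \ge 0,\; t \ne t_k,
\\[2pt]
\Delta x(t_k) = x(t_k) - x(t_k^-) = I_k\bigl(x(t_k^-)\bigr), \qquad k \in \mathbb{Z}_+ .
```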

Throughout this paper, we make the following assumptions.

(H1) represents the discrete transmission delay with ; represents the time-varying distributed delay with , where and are two positive constants.

(H2) The delay kernels , , are real-valued continuous functions defined on and satisfy where is a positive constant.

(H3) The neuron activation functions , , and , , are continuous on and satisfy for any , , , , where , , , , , and are real constants which may be positive, zero, or negative.

(H4) , , are continuous functions.

(H5) The impulse times satisfy and .
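The sector-type condition in (H3), whose formula was lost in extraction, is commonly stated as follows (the bound symbols below are placeholders): for each index j and all u ≠ v,

```latex
l_j^- \;\le\; \frac{f_j(u) - f_j(v)}{u - v} \;\le\; l_j^+ ,
```

where the constants l_j^-, l_j^+ may be positive, zero, or negative, so that neither boundedness nor monotonicity of f_j is required.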

We will consider model (1) with the initial condition where , , , whose norm is defined by

Definition 1 (see [37]). Assume that is a bounded and open set and is a continuous and differentiable function. If and for any , where denotes the boundary of and denotes the Jacobian determinant relative to , then the topological degree relative to and is defined by
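The defining formula of Definition 1 did not survive extraction. For a bounded open set Ω ⊂ ℝⁿ, a continuously differentiable map f, and a point p ∉ f(∂Ω) that is a regular value of f, the Brouwer degree is usually defined by

```latex
\deg(f, \Omega, p) \;=\; \sum_{x \in f^{-1}(p)\,\cap\,\Omega} \operatorname{sign} \det J_f(x),
```

where J_f denotes the Jacobian matrix of f.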

Remark 2. Generally speaking, the topological degree of relative to and can be regarded as the algebraic number of solutions of in if . For instance, implies that has at least one solution in .

Lemma 3 (see [38]). Given any real matrix of appropriate dimension and a vector function , such that the integrations concerned are well defined, then
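Lemma 3 is the Jensen-type integral inequality of Gu [38]; its displayed statement was lost in extraction. It is usually written as follows: for a symmetric matrix M > 0, a scalar γ > 0, and a vector function ω: [0, γ] → ℝⁿ such that the integrals below are well defined,

```latex
\gamma \int_0^{\gamma} \omega^{\mathsf T}(s)\, M\, \omega(s)\, \mathrm{d}s
\;\ge\;
\Bigl( \int_0^{\gamma} \omega(s)\, \mathrm{d}s \Bigr)^{\!\mathsf T} M
\Bigl( \int_0^{\gamma} \omega(s)\, \mathrm{d}s \Bigr).
```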

Lemma 4 (see [12]). Given any real matrices , , of appropriate dimensions and a scalar such that , then the following inequality holds:
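The inequality of Lemma 4 was also lost in extraction; whether the following matches [12] exactly cannot be verified from the surviving text, but a standard bounding inequality of this shape reads: for real matrices X, Y, a symmetric matrix Z > 0, and a scalar ε > 0,

```latex
X^{\mathsf T} Y + Y^{\mathsf T} X \;\le\; \varepsilon\, X^{\mathsf T} Z X + \varepsilon^{-1}\, Y^{\mathsf T} Z^{-1} Y .
```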

Lemma 5 (see [27]). Suppose that , are symmetric matrices of appropriate dimensions and ; then holds if the following four inequalities , , , and hold simultaneously.

For presentation convenience, in the following, we denote

3. Global Existence and Uniqueness of Solution

In this section, by using the contraction mapping theorem, we give a delay-independent sufficient condition to guarantee the global existence and uniqueness of the solution for models (1) and (4).

Theorem 6. Assume that the assumptions hold; then the solution of models (1) and (4) exists uniquely on .

Proof. We transform the global existence and uniqueness of the solution of models (1) and (4) into a fixed point problem. Let be the norm in defined by where . Then it is easy to see that is a Banach space endowed with the norm . Let and consider the operator defined by where , .

First we show that is a contraction on . Let ; we have In view of , we get Substituting the above inequality into (14), we obtain Since is increasing in , Then Noting the definition of , we have Thus is a contraction on , and it has a unique fixed point . Hence exists and is finite, which implies that also exists finitely, since assumption holds. Then we replace with and define for later use.

Next we show that is a contraction on . For , let where is defined in (11). Let and consider the operator defined by where and By virtue of the definition of , similar to the proof of (19), we get, for , Thus is a contraction on , and it has a unique fixed point . Moreover, we know that exists finitely, which implies that exists finitely in view of assumption . Then we replace with and define for later use.

Finally we show that is a contraction on . For , let where is defined in (11). Let ; then we can similarly consider the operator defined by where and Then repeating the argument with replacing , similar to the proof of (19), we see that, for , Thus is a contraction on , and it has a unique fixed point .

Continuing in this manner, we construct Then is the global solution of models (1) and (4). If is another solution of models (1) and (4), then it is easy to check from the above argument that . Hence, the solution of models (1) and (4) exists uniquely on . This completes the proof.
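The interval-by-interval contraction construction above is, in effect, a Picard iteration. The sketch below illustrates it numerically on a hypothetical scalar delay equation ẋ(t) = −a x(t) + b tanh(x(t − τ)) with constant initial history (a toy stand-in for model (1), not the paper's system): the sup-norm distance between consecutive iterates shrinks to zero, yielding the unique solution on the interval.

```python
import numpy as np

a, b, tau, T, dt = 1.0, 0.4, 0.5, 1.0, 0.01   # illustrative parameters only
n_hist = int(round(tau / dt))                  # grid points covered by the delay
n = int(round(T / dt)) + 1                     # grid points on [0, T]

def picard_step(x):
    """One Picard iterate: x_new(t) = x(0) + integral_0^t [-a x(s) + b tanh(x(s - tau))] ds."""
    # delayed values: constant history x(t) = 1 for t <= 0
    xd = np.concatenate([np.ones(n_hist), x])[:n]
    rhs = -a * x + b * np.tanh(xd)
    # cumulative trapezoidal integral of rhs from 0 to each grid point
    integral = np.concatenate([[0.0], np.cumsum(0.5 * (rhs[1:] + rhs[:-1]) * dt)])
    return 1.0 + integral                      # x(0) = 1 matches the history

x = np.ones(n)                                 # initial guess: the constant history
diffs = []
for _ in range(20):
    x_new = picard_step(x)
    diffs.append(np.max(np.abs(x_new - x)))    # sup-norm increment between iterates
    x = x_new
print("final sup-norm increment:", diffs[-1])
```

The geometric collapse of the increments mirrors the factorial bound (LT)^m/m! in the classical contraction estimate.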

4. Existence of an Equilibrium Point

In the previous section, we showed the global existence and uniqueness of the solution of models (1) and (4). In this section, without requiring boundedness, differentiability, or monotonicity of the activation functions, we establish a delay-independent sufficient condition for the existence of an equilibrium point of model (1). As usual, we denote an equilibrium point of model (1) by the constant vector , where satisfies where . In this paper, it is assumed that the impulsive function satisfies for . Hence, to prove the existence of a solution of (29), it suffices to show that the following has a solution: in view of .

Theorem 7. Assume that assumption holds. Then model (1) has at least one equilibrium point if, for any , is an M-matrix, where , , , , and , .

Proof. From (30), we note that it suffices to prove that the following has at least one solution: where , , , . In order to use topological degree theory, we consider the following homotopic mapping: Since is an M-matrix, it can be deduced that is also an M-matrix. This implies that and there exists a positive vector such that . It then follows that Let It is obvious that the set is not empty and, for any , we have which implies that for all and . By the homotopy invariance of topological degree, we obtain Therefore, from Remark 2, we know that has at least one solution in . This completes the proof.
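Since Theorem 7 hinges on an M-matrix property, it may help to note that this property is easy to verify numerically: a nonsingular M-matrix has nonpositive off-diagonal entries and eigenvalues with positive real parts. A minimal sketch (the matrices below are illustrative, not those built from the theorem's constants):

```python
import numpy as np

def is_nonsingular_m_matrix(M, tol=1e-10):
    """Nonsingular M-matrix test: off-diagonal entries are nonpositive and
    every eigenvalue has positive real part."""
    M = np.asarray(M, dtype=float)
    off_diag_ok = np.all(M - np.diag(np.diag(M)) <= tol)   # Z-matrix structure
    eig_ok = np.all(np.linalg.eigvals(M).real > tol)       # positive stability
    return bool(off_diag_ok and eig_ok)

M_good = np.array([[2.0, -0.5], [-0.5, 2.0]])   # strictly diagonally dominant Z-matrix
M_bad = np.array([[0.5, -1.0], [-1.0, 0.5]])    # Z-matrix, but eigenvalue -0.5 < 0
print(is_nonsingular_m_matrix(M_good), is_nonsingular_m_matrix(M_bad))
```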

5. Global Asymptotic Stability

It should be noted that Theorem 7 guarantees the existence of an equilibrium point but not its uniqueness. In this section, we derive some sufficient conditions that guarantee not only the global asymptotic stability but also the uniqueness of the equilibrium point. For this purpose, the impulsive function, viewed as a perturbation of the equilibrium point of models (1) and (4) without impulses, is defined by where are real matrices. It is clear that , . Impulses of this type describe the fact that the instantaneous perturbations depend not only on the state of the neurons at the impulse times but also on the state of the neurons in the recent history, which reflects more realistic dynamics. Similar nonlinear impulsive perturbations, which include linear impulsive perturbations and nonimpulsive perturbations as special cases, have also been investigated by several researchers recently [17–21, 39].

Let ; then we rewrite the models (1) and (4) as follows: where , and . For convenience in our discussion, in the following, we replace with , replace with , and replace with . Then using a simple transformation, model (38) has an equivalent form as follows:

Theorem 8. Under the conditions of Theorem 7, model (1) has a unique equilibrium point, which is globally asymptotically stable, if there exist a constant , an invertible matrix , six matrices , , , , , and , three diagonal matrices , , and , and real matrices , of appropriate dimensions such that where with
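The criterion (42) is an LMI feasibility problem, which the paper checks with the MATLAB LMI toolbox. As an illustration of the underlying computation, the sketch below uses a far simpler condition than the multi-block LMI of the theorem, namely the plain Lyapunov LMI AᵀP + PA < 0, P > 0, for a made-up Hurwitz matrix A: it solves the corresponding Lyapunov equation by Kronecker-product linearization and verifies positive definiteness of P.

```python
import numpy as np

A = np.array([[-2.0, 1.0], [0.0, -3.0]])   # hypothetical Hurwitz test matrix
n = A.shape[0]
Q = np.eye(n)

# vec(A^T P + P A) = (I kron A^T + A^T kron I) vec(P); solve for vec(P) with
# right-hand side -vec(Q), i.e. the Lyapunov equation A^T P + P A = -Q.
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(K, -Q.flatten()).reshape(n, n)
P = 0.5 * (P + P.T)                         # symmetrize numerical roundoff

feasible = bool(np.all(np.linalg.eigvalsh(P) > 0))   # P > 0 <=> LMI feasible
print("Lyapunov LMI feasible:", feasible)
```

In the same spirit, feasibility of the full multi-block condition (42) reduces to checking the definiteness of structured matrices, which is what the MATLAB LMI solvers automate.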

Proof. Consider the following Lyapunov-Krasovskii functional: where Calculating the upper right derivative of along the trajectories of model (39) on the continuity intervals , we obtain It follows from Lemma 3 that In addition, we note that For simplicity, we denote By applying Lemma 4, for any with , , we have where Moreover, for any diagonal matrices , , and , the following inequality holds by the methods in [40]: Combining (45)–(53), one may deduce that where From Lemma 5 we obtain that if the following inequalities hold simultaneously: By the well-known Schur complement [36], (56) is equivalent to (42). Therefore, , , .
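The Schur complement equivalence invoked in the last step can be spot-checked numerically. The sketch below builds a random symmetric block matrix (purely illustrative) and confirms that positive definiteness of the block matrix matches positive definiteness of the lower block R together with the complement Q − S R⁻¹ Sᵀ:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
S = rng.standard_normal((n, n))
R = 2.0 * np.eye(n)
Q = S @ np.linalg.inv(R) @ S.T + np.eye(n)   # chosen so the complement equals I > 0
big = np.block([[Q, S], [S.T, R]])           # symmetric block matrix [[Q, S], [S^T, R]]

def pos_def(M):
    """Positive definiteness via eigenvalues of the symmetrized matrix."""
    return bool(np.all(np.linalg.eigvalsh(0.5 * (M + M.T)) > 0))

schur = Q - S @ np.linalg.inv(R) @ S.T       # Schur complement of R in big
print(pos_def(big), pos_def(R) and pos_def(schur))
```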

For arbitrary , without loss of generality, we set , for some . Then, integrating inequality (54) over each of the intervals , , and , we derive