Journal of Applied Mathematics
Volume 2012 (2012), Article ID 689845, 12 pages
http://dx.doi.org/10.1155/2012/689845
Research Article

Fixed Point and Asymptotic Analysis of Cellular Neural Networks

1School of Economics & Management, Nanjing University of Information Science & Technology, Nanjing 210044, China
2School of Mathematics & Statistics, Nanjing University of Information Science and Technology, Nanjing 210044, China

Received 24 April 2012; Revised 16 July 2012; Accepted 2 August 2012

Academic Editor: Naseer Shahzad

Copyright © 2012 Xianghong Lai and Yutian Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We first employ fixed point theory to study the stability of cellular neural networks without delays and with time-varying delays. Some novel and concise sufficient conditions are given that ensure, at the same time, the existence and uniqueness of the solution and the asymptotic stability of the trivial equilibrium. Moreover, these conditions are easily checked and do not require the differentiability of the delays.

1. Introduction

Cellular neural networks (CNNs) were first proposed by Chua and Yang in 1988 [1, 2] and have become a research focus owing to their numerous successful applications in various fields such as optimization, linear and nonlinear programming, associative memory, pattern recognition, and computer vision. Owing to the finite switching speed of neurons and amplifiers in the implementation of neural networks, time delays are inevitable, and therefore the model of delayed cellular neural networks (DCNNs) is of greater practical significance. Research on the dynamic behaviors of CNNs and DCNNs has received much attention, and by now a large number of results have been reported [3–5].

In fact, besides delay effects, stochastic, impulsive, and diffusion effects are also likely to be present in neural networks. As a result, complex CNN models have been formed, including impulsive delayed reaction-diffusion CNNs, stochastic delayed reaction-diffusion CNNs, and so forth; see [6–11] for the relevant research. Surveying the existing publications on complex CNNs, we find that the Lyapunov method is the primary technique. However, we also notice that many difficulties arise when the corresponding results are applied to practical problems, so it does seem that new methods are needed to address these difficulties.

Encouragingly, Burton and other authors have recently applied fixed point theory to investigate the stability of deterministic systems and obtained more applicable results; see, for example, the monograph [12] and the papers [13–24]. Furthermore, it has been found that fixed point theory is also effective for the stability analysis of stochastic (delayed) differential equations; see [25–31]. In particular, in [26–28], Luo used fixed point theory to study the exponential stability of mild solutions of stochastic partial differential equations with bounded delays and with infinite delays. In [29, 30], Sakthivel and Luo used fixed point theory to investigate the asymptotic stability in pth moment of mild solutions to nonlinear impulsive stochastic partial differential equations with bounded delays and with infinite delays. In [31], Luo used fixed point theory to study the exponential stability of stochastic Volterra-Levin equations. Motivated by these works, we ask whether fixed point theory can be used to study the stability of complex neural networks, thus obtaining more applicable results.

In the present paper, we discuss the asymptotic stability of CNNs and DCNNs. Our method is based on the contraction mapping principle, which differs from the usual Lyapunov approach. Some new and easily checked algebraic criteria are presented that ensure the existence and uniqueness of the solution and the asymptotic stability of the trivial equilibrium at the same time. These sufficient conditions do not even require the differentiability of the delays, let alone their monotone decreasing behavior.

2. Preliminaries

Let $\mathbb{R}^n$ denote the $n$-dimensional Euclidean space and let $\|\cdot\|$ denote the Euclidean norm. Set $\mathcal{N} \triangleq \{1,2,\ldots,n\}$ and $\mathbb{R}_+ = [0,\infty)$. $C(X,Y)$ denotes the space of continuous mappings from the topological space $X$ to the topological space $Y$.

In this paper, we consider the cellular neural network described by
$$\frac{\mathrm{d}x_i(t)}{\mathrm{d}t} = -a_i x_i(t) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t)), \quad t \ge 0, \tag{2.1}$$
$$x_i(0) = x_i^0, \tag{2.2}$$
and the following cellular neural network with time-varying delays:
$$\frac{\mathrm{d}x_i(t)}{\mathrm{d}t} = -a_i x_i(t) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} c_{ij} g_j(x_j(t-\tau_j(t))), \quad t \ge 0, \tag{2.3}$$
$$x_i(s) = \varphi_i(s), \quad -\tau \le s \le 0, \tag{2.4}$$
where $i \in \mathcal{N}$ and $n$ is the number of neurons in the neural network. $x_i(t)$ corresponds to the state of the $i$th neuron at time $t$. $\mathbf{x}^0 = (x_1^0,\ldots,x_n^0)^{\mathrm{T}} \in \mathbb{R}^n$. $f_j(x_j(t))$ denotes the activation function of the $j$th neuron at time $t$, and $g_j(x_j(t-\tau_j(t)))$ is the activation function of the $j$th neuron at time $t-\tau_j(t)$. The constant $b_{ij}$ represents the connection weight of the $j$th neuron on the $i$th neuron at time $t$. The constant $a_i > 0$ represents the rate with which the $i$th neuron resets its potential to the resting state when disconnected from the network and external inputs. The constant $c_{ij}$ represents the connection strength of the $j$th neuron on the $i$th neuron at time $t-\tau_j(t)$, where $\tau_j(t)$ corresponds to the transmission delay along the axon of the $j$th neuron and satisfies $0 \le \tau_j(t) \le \tau$ ($\tau$ a constant). $f_j(\cdot), g_j(\cdot) \in C(\mathbb{R},\mathbb{R})$. $\varphi(s) = (\varphi_1(s),\ldots,\varphi_n(s))^{\mathrm{T}} \in \mathbb{R}^n$ with $\varphi_i(s) \in C([-\tau,0],\mathbb{R})$. Denote $\|\varphi\| = \sup_{-\tau \le s \le 0} \|\varphi(s)\|$.
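To make the model concrete, the dynamics (2.3)-(2.4) can be integrated numerically. The sketch below uses a forward-Euler scheme and assumes a constant delay $\tau_j(t) \equiv \tau$ for simplicity; the function name, step size, and arguments are our own choices, not part of the paper.

```python
import numpy as np

def simulate_dcnn(a, B, C, f, g, tau, phi, t_end=10.0, dt=0.001):
    """Forward-Euler integration of the delayed model (2.3)-(2.4).

    a: decay rates, shape (n,); B, C: weight matrices, shape (n, n);
    f, g: vectorized activation functions; tau: constant delay
    (here tau_j(t) = tau); phi: history function on [-tau, 0]
    returning a length-n array.
    """
    n = len(a)
    d = int(round(tau / dt))            # delay measured in grid steps
    steps = int(round(t_end / dt))
    x = np.empty((steps + d + 1, n))
    for k in range(d + 1):              # fill the history segment [-tau, 0]
        x[k] = phi(-tau + k * dt)
    for k in range(d, d + steps):
        dx = -a * x[k] + B @ f(x[k]) + C @ g(x[k - d])
        x[k + 1] = x[k] + dt * dx
    return x[d:]                        # states on [0, t_end]
```

With the coefficients of the example in Section 5, the simulated trajectory decays toward the trivial equilibrium, as Theorem 4.1 predicts.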

Throughout this paper, we always assume that $f_j(0) = g_j(0) = 0$ for $j \in \mathcal{N}$; therefore (2.1) and (2.3) admit a trivial equilibrium $\mathbf{x} = 0$.

Denote by $\mathbf{x}(t;0,\mathbf{x}^0) = (x_1(t;0,x_1^0),\ldots,x_n(t;0,x_n^0))^{\mathrm{T}} \in \mathbb{R}^n$ the solution of (2.1) with the initial condition (2.2), and by $\mathbf{x}(t;s,\varphi) = (x_1(t;s,\varphi_1),\ldots,x_n(t;s,\varphi_n))^{\mathrm{T}} \in \mathbb{R}^n$ the solution of (2.3) with the initial condition (2.4).

Definition 2.1 (see [32]). The trivial equilibrium $\mathbf{x} = 0$ of (2.1) is said to be stable if for any $\varepsilon > 0$ there exists $\delta > 0$ such that every initial condition $\mathbf{x}^0$ with $\|\mathbf{x}^0\| < \delta$ yields
$$\|\mathbf{x}(t;0,\mathbf{x}^0)\| < \varepsilon, \quad t \ge 0. \tag{2.5}$$

Definition 2.2 (see [32]). The trivial equilibrium $\mathbf{x} = 0$ of (2.1) is said to be asymptotically stable if it is stable and for any $\mathbf{x}^0 \in \mathbb{R}^n$,
$$\lim_{t\to\infty} \|\mathbf{x}(t;0,\mathbf{x}^0)\| = 0. \tag{2.6}$$

Definition 2.3 (see [32]). The trivial equilibrium $\mathbf{x} = 0$ of (2.3) is said to be stable if for any $\varepsilon > 0$ there exists $\delta > 0$ such that every initial condition $\varphi(s) \in C([-\tau,0],\mathbb{R}^n)$ with $\|\varphi\| < \delta$ yields
$$\|\mathbf{x}(t;s,\varphi)\| < \varepsilon, \quad t \ge 0. \tag{2.7}$$

Definition 2.4 (see [32]). The trivial equilibrium $\mathbf{x} = 0$ of (2.3) is said to be asymptotically stable if it is stable and for any initial condition $\varphi(s) \in C([-\tau,0],\mathbb{R}^n)$,
$$\lim_{t\to\infty} \|\mathbf{x}(t;s,\varphi)\| = 0. \tag{2.8}$$

Our analysis in this paper is based on the following fixed point theorem.

Lemma 2.5 (see [33]). Let $\Upsilon$ be a contraction operator on a complete metric space $\Theta$; then there exists a unique point $\zeta \in \Theta$ for which $\Upsilon(\zeta) = \zeta$.

3. Asymptotic Stability of Cellular Neural Networks

In this section, we simultaneously consider the existence and uniqueness of the solution to (2.1)-(2.2) and the asymptotic stability of the trivial equilibrium $\mathbf{x} = 0$ of (2.1) by means of the contraction mapping principle. Before proceeding, we first introduce the following assumption.

(A1) There exist nonnegative constants $l_j$ such that for all $\eta, \upsilon \in \mathbb{R}$,
$$|f_j(\eta) - f_j(\upsilon)| \le l_j |\eta - \upsilon|, \quad j \in \mathcal{N}. \tag{3.1}$$

Let $\mathcal{S} = \mathcal{S}_1 \times \mathcal{S}_2 \times \cdots \times \mathcal{S}_n$, where $\mathcal{S}_i$ ($i \in \mathcal{N}$) is the space of continuous functions $\phi_i : \mathbb{R}_+ \to \mathbb{R}$ such that $\phi_i(0) = x_i^0$ and $\phi_i(t) \to 0$ as $t \to \infty$; here $x_i^0$ is as defined in Section 2. Moreover, $\mathcal{S}$ is a complete metric space when equipped with the metric
$$d(\mathbf{q}(t), \mathbf{h}(t)) = \sup_{t \ge 0} \sum_{i=1}^{n} |q_i(t) - h_i(t)|, \tag{3.2}$$
where $\mathbf{q}(t) = (q_1(t),\ldots,q_n(t)) \in \mathcal{S}$ and $\mathbf{h}(t) = (h_1(t),\ldots,h_n(t)) \in \mathcal{S}$.

Theorem 3.1. Assume that condition (A1) holds. If the inequalities
$$\sum_{i=1}^{n} \frac{1}{a_i} \max_{j\in\mathcal{N}} |b_{ij} l_j| < 1, \qquad \max_{i\in\mathcal{N}} \lambda_i < \frac{1}{\sqrt{n}}, \tag{3.3}$$
hold, where $\lambda_i = (1/a_i) \sum_{j=1}^{n} |b_{ij} l_j|$, then the trivial equilibrium $\mathbf{x} = 0$ of (2.1) is asymptotically stable.
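The two inequalities in (3.3) are straightforward to check numerically for given coefficients. The following sketch is our own helper, not part of the paper; it evaluates both conditions for given decay rates, weights, and Lipschitz constants.

```python
import numpy as np

def theorem_3_1_holds(a, B, l):
    """Check the sufficient conditions (3.3) of Theorem 3.1.

    a: positive rates a_i, shape (n,); B: weights b_ij, shape (n, n);
    l: Lipschitz constants l_j of the activations f_j, shape (n,).
    """
    a, B, l = np.asarray(a, float), np.asarray(B, float), np.asarray(l, float)
    n = len(a)
    W = np.abs(B * l)                        # entries |b_ij * l_j|
    cond1 = np.sum(W.max(axis=1) / a) < 1.0
    lam = W.sum(axis=1) / a                  # lambda_i from the theorem
    cond2 = lam.max() < 1.0 / np.sqrt(n)
    return bool(cond1 and cond2)
```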

Proof. Multiplying both sides of (2.1) by $e^{a_i t}$ gives
$$\mathrm{d}\big(e^{a_i t} x_i(t)\big) = e^{a_i t}\big[\mathrm{d}x_i(t) + a_i x_i(t)\,\mathrm{d}t\big] = e^{a_i t} \sum_{j=1}^{n} b_{ij} f_j(x_j(t))\,\mathrm{d}t, \quad t \ge 0,\ i \in \mathcal{N}, \tag{3.4}$$
which, after integrating from $0$ to $t$, yields
$$x_i(t) = x_i^0 e^{-a_i t} + e^{-a_i t} \int_0^t e^{a_i s} \sum_{j=1}^{n} b_{ij} f_j(x_j(s))\,\mathrm{d}s, \quad t \ge 0,\ i \in \mathcal{N}. \tag{3.5}$$
Now, for any $\mathbf{y}(t) = (y_1(t),\ldots,y_n(t)) \in \mathcal{S}$, we define the operator $\Phi$ acting on $\mathcal{S}$ by
$$\Phi(\mathbf{y})(t) = \big(\Phi(y_1)(t),\ldots,\Phi(y_n)(t)\big), \quad t \ge 0, \tag{3.6}$$
where
$$\Phi(y_i)(t) = x_i^0 e^{-a_i t} + e^{-a_i t} \int_0^t e^{a_i s} \sum_{j=1}^{n} b_{ij} f_j(y_j(s))\,\mathrm{d}s, \quad i \in \mathcal{N}. \tag{3.7}$$
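The operator $\Phi$ of (3.7) can be applied numerically on a time grid, and iterating it illustrates the contraction argument: successive iterates converge to the solution of (2.1)-(2.2). This is an illustrative sketch under our own choices (helper name, grid, trapezoidal quadrature), not the paper's construction.

```python
import numpy as np

def apply_Phi(y, t, x0, a, B, f):
    """Apply the operator Phi of (3.7) once, on a time grid.

    y: current iterate, shape (len(t), n); t: increasing grid from 0;
    x0: initial values; a: rates; B: weights; f: vectorized activation.
    The integral is approximated by a cumulative trapezoidal rule.
    """
    out = np.empty_like(y)
    fy = f(y)                                       # columns f_j(y_j(s))
    for i in range(len(x0)):
        integrand = np.exp(a[i] * t) * (fy @ B[i])  # e^{a_i s} sum_j b_ij f_j
        I = np.concatenate(([0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
        out[:, i] = np.exp(-a[i] * t) * (x0[i] + I)
    return out
```

For instance, with $n=1$, $a_1=2$, $b_{11}=1$, and $f$ the identity (Lipschitz constant $1$), the contraction condition of Theorem 3.1 holds, and iterating `apply_Phi` converges to the exact solution $e^{-t}$ of $x' = -2x + x$, $x(0)=1$.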
The proof is based on the contraction mapping principle and is divided into two steps.
Step 1. We prove $\Phi(\mathcal{S}) \subseteq \mathcal{S}$. Recalling the construction of $\mathcal{S}$, it suffices to show the continuity of $\Phi(y_i)$ on $[0,\infty)$, $\Phi(y_i)(0) = x_i^0$, and $\lim_{t\to\infty}\Phi(y_i)(t) = 0$ for $i \in \mathcal{N}$.
From (3.7) it is easy to see that $\Phi(y_i)(0) = x_i^0$. Moreover, for a fixed time $t_1 \ge 0$, we have
$$\Phi(y_i)(t_1+r) - \Phi(y_i)(t_1) = x_i^0 e^{-a_i(t_1+r)} - x_i^0 e^{-a_i t_1} + e^{-a_i(t_1+r)} \int_0^{t_1+r} e^{a_i s} \sum_{j=1}^{n} b_{ij} f_j(y_j(s))\,\mathrm{d}s - e^{-a_i t_1} \int_0^{t_1} e^{a_i s} \sum_{j=1}^{n} b_{ij} f_j(y_j(s))\,\mathrm{d}s. \tag{3.8}$$
It is not difficult to see that $\Phi(y_i)(t_1+r) - \Phi(y_i)(t_1) \to 0$ as $r \to 0$, which implies that $\Phi(y_i)$ is continuous on $[0,\infty)$.
Next we shall prove $\lim_{t\to\infty}\Phi(y_i)(t) = 0$ for $y_i(t) \in \mathcal{S}_i$. Since $y_j(t) \in \mathcal{S}_j$, we have $\lim_{t\to\infty} y_j(t) = 0$. Then for any $\varepsilon > 0$ there exists $T_j > 0$ such that $s \ge T_j$ implies $|y_j(s)| < \varepsilon$. Choose $T = \max_{j\in\mathcal{N}}\{T_j\}$. It is then derived from (A1) that
$$\begin{aligned}
\left| e^{-a_i t} \int_0^t e^{a_i s} \sum_{j=1}^{n} b_{ij} f_j(y_j(s))\,\mathrm{d}s \right|
&\le e^{-a_i t} \int_0^t e^{a_i s} \sum_{j=1}^{n} |b_{ij} l_j|\,|y_j(s)|\,\mathrm{d}s \\
&= e^{-a_i t} \int_0^T e^{a_i s} \sum_{j=1}^{n} |b_{ij} l_j|\,|y_j(s)|\,\mathrm{d}s + e^{-a_i t} \int_T^t e^{a_i s} \sum_{j=1}^{n} |b_{ij} l_j|\,|y_j(s)|\,\mathrm{d}s \\
&\le e^{-a_i t} \sum_{j=1}^{n} |b_{ij} l_j| \sup_{s\in[0,T]} |y_j(s)| \int_0^T e^{a_i s}\,\mathrm{d}s + \varepsilon\, e^{-a_i t} \sum_{j=1}^{n} |b_{ij} l_j| \int_T^t e^{a_i s}\,\mathrm{d}s \\
&\le e^{-a_i t} \sum_{j=1}^{n} |b_{ij} l_j| \sup_{s\in[0,T]} |y_j(s)| \int_0^T e^{a_i s}\,\mathrm{d}s + \frac{\varepsilon}{a_i} \sum_{j=1}^{n} |b_{ij} l_j|.
\end{aligned} \tag{3.9}$$
As $a_i > 0$, we obtain $e^{-a_i t} \int_0^t e^{a_i s} \sum_{j=1}^{n} b_{ij} f_j(y_j(s))\,\mathrm{d}s \to 0$ as $t \to \infty$. So $\lim_{t\to\infty}\Phi(y_i)(t) = 0$ for $i \in \mathcal{N}$. We therefore conclude that $\Phi(\mathcal{S}) \subseteq \mathcal{S}$.
Step 2. We prove that $\Phi$ is contractive. For any $\mathbf{y} = (y_1(t),\ldots,y_n(t)) \in \mathcal{S}$ and $\mathbf{z} = (z_1(t),\ldots,z_n(t)) \in \mathcal{S}$, we compute, for any $T > 0$,
$$\begin{aligned}
\sup_{t\in[0,T]} \sum_{i=1}^{n} |\Phi(y_i)(t) - \Phi(z_i)(t)|
&\le \sup_{t\in[0,T]} \sum_{i=1}^{n} e^{-a_i t} \int_0^t e^{a_i s} \sum_{j=1}^{n} |b_{ij}|\,|f_j(y_j(s)) - f_j(z_j(s))|\,\mathrm{d}s \\
&\le \sup_{t\in[0,T]} \sum_{i=1}^{n} e^{-a_i t} \int_0^t e^{a_i s} \sum_{j=1}^{n} |b_{ij} l_j|\,|y_j(s) - z_j(s)|\,\mathrm{d}s \\
&\le \sup_{t\in[0,T]} \sum_{i=1}^{n} \max_{j\in\mathcal{N}} |b_{ij} l_j| \sup_{s\in[0,T]} \sum_{j=1}^{n} |y_j(s) - z_j(s)|\; e^{-a_i t} \int_0^t e^{a_i s}\,\mathrm{d}s \\
&\le \sum_{i=1}^{n} \frac{1}{a_i} \max_{j\in\mathcal{N}} |b_{ij} l_j| \sup_{s\in[0,T]} \sum_{j=1}^{n} |y_j(s) - z_j(s)|.
\end{aligned} \tag{3.10}$$
Since $\sum_{i=1}^{n} (1/a_i) \max_{j\in\mathcal{N}} |b_{ij} l_j| < 1$ and this bound holds for every $T > 0$, $\Phi$ is a contraction mapping on $(\mathcal{S}, d)$.
Therefore, by the contraction mapping principle, there exists a unique fixed point $\mathbf{u}(\cdot)$ of $\Phi$ in $\mathcal{S}$, which means that $\mathbf{u}(\cdot)$ is the solution of (2.1)-(2.2) and $\mathbf{u}(t) \to 0$ as $t \to \infty$.
To obtain the asymptotic stability, we still need to prove that the trivial equilibrium $\mathbf{x} = 0$ of (2.1) is stable. For any $\varepsilon > 0$, from the conditions of Theorem 3.1, we can find $\delta$ satisfying $0 < \delta < \varepsilon$ such that $\delta + \max_{i\in\mathcal{N}}\{\lambda_i\}\varepsilon \le \varepsilon/\sqrt{n}$.
Let $\|\mathbf{x}^0\| < \delta$. By the preceding discussion, there exists a unique solution $\mathbf{x}(t;0,\mathbf{x}^0) = (x_1(t;0,x_1^0),\ldots,x_n(t;0,x_n^0))^{\mathrm{T}} \in \mathbb{R}^n$ to (2.1)-(2.2), and
$$x_i(t) = \Phi(x_i)(t) = J_1(t) + J_2(t), \quad t \ge 0, \tag{3.11}$$
where $J_1(t) = x_i^0 e^{-a_i t}$ and $J_2(t) = e^{-a_i t} \int_0^t e^{a_i s} \sum_{j=1}^{n} b_{ij} f_j(x_j(s))\,\mathrm{d}s$.
Suppose there exists $t^* > 0$ such that $\|\mathbf{x}(t^*;0,\mathbf{x}^0)\| = \varepsilon$ and $\|\mathbf{x}(t;0,\mathbf{x}^0)\| < \varepsilon$ for $0 \le t < t^*$. It follows from (3.11) that $|x_i(t^*)| \le |J_1(t^*)| + |J_2(t^*)|$.
As $|J_1(t^*)| = |x_i^0 e^{-a_i t^*}| \le \delta$ and $|J_2(t^*)| \le e^{-a_i t^*} \int_0^{t^*} e^{a_i s} \sum_{j=1}^{n} |b_{ij} l_j|\,|x_j(s)|\,\mathrm{d}s < (\varepsilon/a_i) \sum_{j=1}^{n} |b_{ij} l_j|$, we obtain $|x_i(t^*)| < \delta + \lambda_i \varepsilon$. Hence
$$\|\mathbf{x}(t^*;0,\mathbf{x}^0)\|^2 = \sum_{i=1}^{n} |x_i(t^*)|^2 < \sum_{i=1}^{n} |\delta + \lambda_i \varepsilon|^2 \le n\Big|\delta + \max_{i\in\mathcal{N}}\{\lambda_i\}\varepsilon\Big|^2 \le \varepsilon^2. \tag{3.12}$$
This contradicts the assumption $\|\mathbf{x}(t^*;0,\mathbf{x}^0)\| = \varepsilon$. Therefore, $\|\mathbf{x}(t;0,\mathbf{x}^0)\| < \varepsilon$ holds for all $t \ge 0$. This completes the proof.

4. Asymptotic Stability of Delayed Cellular Neural Networks

In this section, we simultaneously consider the existence and uniqueness of the solution to (2.3)-(2.4) and the asymptotic stability of the trivial equilibrium $\mathbf{x} = 0$ of (2.3) by means of the contraction mapping principle. Before proceeding, we give the following assumption.

(A2) There exist nonnegative constants $k_j$ such that for all $\eta, \upsilon \in \mathbb{R}$,
$$|g_j(\eta) - g_j(\upsilon)| \le k_j |\eta - \upsilon|, \quad j \in \mathcal{N}. \tag{4.1}$$

Let $\mathcal{H} = \mathcal{H}_1 \times \cdots \times \mathcal{H}_n$, where $\mathcal{H}_i$ ($i \in \mathcal{N}$) is the space of continuous functions $\phi_i : [-\tau,\infty) \to \mathbb{R}$ such that $\phi_i(s) = \varphi_i(s)$ on $s \in [-\tau,0]$ and $\phi_i(t) \to 0$ as $t \to \infty$; here $\varphi_i(s)$ is as defined in Section 2. Moreover, $\mathcal{H}$ is a complete metric space when equipped with the metric
$$d(\mathbf{q}(t),\mathbf{h}(t)) = \sup_{t \ge -\tau} \sum_{i=1}^{n} |q_i(t) - h_i(t)|, \tag{4.2}$$
where $\mathbf{q}(t) = (q_1(t),\ldots,q_n(t)) \in \mathcal{H}$ and $\mathbf{h}(t) = (h_1(t),\ldots,h_n(t)) \in \mathcal{H}$.

Theorem 4.1. Assume that conditions (A1)-(A2) hold. If the inequalities
$$\sum_{i=1}^{n} \frac{1}{a_i}\Big( \max_{j\in\mathcal{N}} |b_{ij} l_j| + \max_{j\in\mathcal{N}} |c_{ij} k_j| \Big) < 1, \qquad \max_{i\in\mathcal{N}} \lambda_i < \frac{1}{\sqrt{n}}, \tag{4.3}$$
hold, where $\lambda_i = (1/a_i)\sum_{j=1}^{n} |b_{ij} l_j| + (1/a_i)\sum_{j=1}^{n} |c_{ij} k_j|$, then the trivial equilibrium $\mathbf{x} = 0$ of (2.3) is asymptotically stable.
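As with Theorem 3.1, the conditions (4.3) can be verified mechanically. The sketch below is our own helper, not part of the paper.

```python
import numpy as np

def theorem_4_1_holds(a, B, C, l, k):
    """Check the sufficient conditions (4.3) of Theorem 4.1.

    a: rates a_i; B, C: weights b_ij and c_ij; l, k: Lipschitz
    constants of the activations f_j and g_j, respectively.
    """
    a = np.asarray(a, float)
    WB = np.abs(np.asarray(B, float) * np.asarray(l, float))  # |b_ij l_j|
    WC = np.abs(np.asarray(C, float) * np.asarray(k, float))  # |c_ij k_j|
    n = len(a)
    cond1 = np.sum((WB.max(axis=1) + WC.max(axis=1)) / a) < 1.0
    lam = (WB.sum(axis=1) + WC.sum(axis=1)) / a   # lambda_i of Theorem 4.1
    cond2 = lam.max() < 1.0 / np.sqrt(n)
    return bool(cond1 and cond2)
```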

Proof. Multiplying both sides of (2.3) by $e^{a_i t}$ gives
$$\mathrm{d}\big(e^{a_i t} x_i(t)\big) = e^{a_i t}\big[\mathrm{d}x_i(t) + a_i x_i(t)\,\mathrm{d}t\big] = e^{a_i t}\Big[\sum_{j=1}^{n} b_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} c_{ij} g_j(x_j(t-\tau_j(t)))\Big]\mathrm{d}t, \tag{4.4}$$
which, after integrating from $0$ to $t$, yields
$$x_i(t) = \varphi_i(0) e^{-a_i t} + e^{-a_i t} \int_0^t e^{a_i s}\Big[\sum_{j=1}^{n} b_{ij} f_j(x_j(s)) + \sum_{j=1}^{n} c_{ij} g_j(x_j(s-\tau_j(s)))\Big]\mathrm{d}s. \tag{4.5}$$
Now, for any $\mathbf{y}(t) = (y_1(t),\ldots,y_n(t)) \in \mathcal{H}$, we define the operator $\pi$ acting on $\mathcal{H}$ by
$$\pi(\mathbf{y})(t) = \big(\pi(y_1)(t),\ldots,\pi(y_n)(t)\big), \tag{4.6}$$
where
$$\pi(y_i)(t) = \varphi_i(0)e^{-a_i t} + e^{-a_i t}\int_0^t e^{a_i s}\Big[\sum_{j=1}^{n} b_{ij} f_j(y_j(s)) + \sum_{j=1}^{n} c_{ij} g_j(y_j(s-\tau_j(s)))\Big]\mathrm{d}s, \quad t \ge 0, \tag{4.7}$$
and $\pi(y_i)(s) = \varphi_i(s)$ on $s \in [-\tau,0]$ for $i \in \mathcal{N}$.
Similar to the proof of Theorem 3.1, we shall apply the contraction mapping principle to prove Theorem 4.1. The subsequent proof can be divided into two steps.
Step 1. We prove $\pi(\mathcal{H}) \subseteq \mathcal{H}$. To this end, it is necessary to show the continuity of $\pi(y_i)$ on $[-\tau,\infty)$ and $\lim_{t\to\infty}\pi(y_i)(t) = 0$ for $y_i(t) \in \mathcal{H}_i$ and $i \in \mathcal{N}$. In light of (4.7), we have, for a fixed time $t_1 \ge 0$,
$$\pi(y_i)(t_1+r) - \pi(y_i)(t_1) = I_1 + I_2 + I_3, \tag{4.8}$$
where
$$\begin{aligned}
I_1 &= \varphi_i(0)e^{-a_i(t_1+r)} - \varphi_i(0)e^{-a_i t_1},\\
I_2 &= e^{-a_i(t_1+r)}\int_0^{t_1+r} e^{a_i s}\sum_{j=1}^{n} b_{ij} f_j(y_j(s))\,\mathrm{d}s - e^{-a_i t_1}\int_0^{t_1} e^{a_i s}\sum_{j=1}^{n} b_{ij} f_j(y_j(s))\,\mathrm{d}s,\\
I_3 &= e^{-a_i(t_1+r)}\int_0^{t_1+r} e^{a_i s}\sum_{j=1}^{n} c_{ij} g_j(y_j(s-\tau_j(s)))\,\mathrm{d}s - e^{-a_i t_1}\int_0^{t_1} e^{a_i s}\sum_{j=1}^{n} c_{ij} g_j(y_j(s-\tau_j(s)))\,\mathrm{d}s.
\end{aligned} \tag{4.9}$$
It is easy to see that $\lim_{r\to 0}\{\pi(y_i)(t_1+r) - \pi(y_i)(t_1)\} = 0$; thus $\pi(y_i)$ is continuous on $[0,\infty)$. Noting $\varphi_i(s) \in C([-\tau,0],\mathbb{R})$ and $\pi(y_i)(0) = \varphi_i(0)$, we conclude that $\pi(y_i)$ is indeed continuous on $[-\tau,\infty)$.
Next, we prove $\lim_{t\to\infty}\pi(y_i)(t) = 0$ for $y_i(t) \in \mathcal{H}_i$. As in Section 3, we know $\lim_{t\to\infty}e^{-a_i t} = 0$ and $e^{-a_i t}\int_0^t e^{a_i s}\sum_{j=1}^{n} b_{ij} f_j(y_j(s))\,\mathrm{d}s \to 0$ as $t \to \infty$. In what follows, we show that $e^{-a_i t}\int_0^t e^{a_i s}\sum_{j=1}^{n} c_{ij} g_j(y_j(s-\tau_j(s)))\,\mathrm{d}s \to 0$ as $t \to \infty$. In fact, since $y_j(t) \in \mathcal{H}_j$, we have $\lim_{t\to\infty} y_j(t) = 0$. Then for any $\varepsilon > 0$ there exists $T_j > 0$ such that $s \ge T_j - \tau$ implies $|y_j(s)| < \varepsilon$. Select $T = \max_{j\in\mathcal{N}}\{T_j\}$. It is then derived from (A2) that
$$\begin{aligned}
\left|e^{-a_i t}\int_0^t e^{a_i s}\sum_{j=1}^{n} c_{ij} g_j(y_j(s-\tau_j(s)))\,\mathrm{d}s\right|
&\le e^{-a_i t}\int_0^t e^{a_i s}\sum_{j=1}^{n}|c_{ij}k_j|\,|y_j(s-\tau_j(s))|\,\mathrm{d}s\\
&= e^{-a_i t}\int_0^T e^{a_i s}\sum_{j=1}^{n}|c_{ij}k_j|\,|y_j(s-\tau_j(s))|\,\mathrm{d}s + e^{-a_i t}\int_T^t e^{a_i s}\sum_{j=1}^{n}|c_{ij}k_j|\,|y_j(s-\tau_j(s))|\,\mathrm{d}s\\
&\le e^{-a_i t}\sum_{j=1}^{n}|c_{ij}k_j|\sup_{s\in[-\tau,T]}|y_j(s)|\int_0^T e^{a_i s}\,\mathrm{d}s + \varepsilon\, e^{-a_i t}\sum_{j=1}^{n}|c_{ij}k_j|\int_T^t e^{a_i s}\,\mathrm{d}s\\
&\le e^{-a_i t}\sum_{j=1}^{n}|c_{ij}k_j|\sup_{s\in[-\tau,T]}|y_j(s)|\int_0^T e^{a_i s}\,\mathrm{d}s + \frac{\varepsilon}{a_i}\sum_{j=1}^{n}|c_{ij}k_j|.
\end{aligned} \tag{4.10}$$
As $\lim_{t\to\infty}e^{-a_i t} = 0$, we obtain $e^{-a_i t}\int_0^t e^{a_i s}\sum_{j=1}^{n} c_{ij} g_j(y_j(s-\tau_j(s)))\,\mathrm{d}s \to 0$ as $t \to \infty$, which yields $\lim_{t\to\infty}\pi(y_i)(t) = 0$ for $y_i(t) \in \mathcal{H}_i$ and $i \in \mathcal{N}$. We therefore conclude $\pi(\mathcal{H}) \subseteq \mathcal{H}$.
Step 2. We prove that $\pi$ is contractive. For any $\mathbf{y} = (y_1(t),\ldots,y_n(t)) \in \mathcal{H}$ and $\mathbf{z} = (z_1(t),\ldots,z_n(t)) \in \mathcal{H}$, we estimate
$$\begin{aligned}
\sum_{i=1}^{n}|\pi(y_i)(t) - \pi(z_i)(t)|
&\le \sum_{i=1}^{n} e^{-a_i t}\int_0^t e^{a_i s}\sum_{j=1}^{n}|b_{ij}|\,|f_j(y_j(s)) - f_j(z_j(s))|\,\mathrm{d}s\\
&\quad + \sum_{i=1}^{n} e^{-a_i t}\int_0^t e^{a_i s}\sum_{j=1}^{n}|c_{ij}|\,|g_j(y_j(s-\tau_j(s))) - g_j(z_j(s-\tau_j(s)))|\,\mathrm{d}s\\
&\le \sum_{i=1}^{n} e^{-a_i t}\int_0^t e^{a_i s}\sum_{j=1}^{n}|b_{ij}l_j|\,|y_j(s) - z_j(s)|\,\mathrm{d}s\\
&\quad + \sum_{i=1}^{n} e^{-a_i t}\int_0^t e^{a_i s}\sum_{j=1}^{n}|c_{ij}k_j|\,|y_j(s-\tau_j(s)) - z_j(s-\tau_j(s))|\,\mathrm{d}s\\
&\le \sum_{i=1}^{n}\max_{j\in\mathcal{N}}|b_{ij}l_j|\sup_{s\in[0,t]}\sum_{j=1}^{n}|y_j(s) - z_j(s)|\; e^{-a_i t}\int_0^t e^{a_i s}\,\mathrm{d}s\\
&\quad + \sum_{i=1}^{n}\max_{j\in\mathcal{N}}|c_{ij}k_j|\sup_{s\in[-\tau,t]}\sum_{j=1}^{n}|y_j(s) - z_j(s)|\; e^{-a_i t}\int_0^t e^{a_i s}\,\mathrm{d}s\\
&\le \sum_{i=1}^{n}\frac{1}{a_i}\max_{j\in\mathcal{N}}|b_{ij}l_j|\sup_{s\in[0,t]}\sum_{j=1}^{n}|y_j(s) - z_j(s)| + \sum_{i=1}^{n}\frac{1}{a_i}\max_{j\in\mathcal{N}}|c_{ij}k_j|\sup_{s\in[-\tau,t]}\sum_{j=1}^{n}|y_j(s) - z_j(s)|.
\end{aligned} \tag{4.11}$$
Hence, for any $T > 0$,
$$\sup_{t\in[-\tau,T]}\sum_{i=1}^{n}|\pi(y_i)(t) - \pi(z_i)(t)| \le \sum_{i=1}^{n}\frac{1}{a_i}\Big(\max_{j\in\mathcal{N}}|b_{ij}l_j| + \max_{j\in\mathcal{N}}|c_{ij}k_j|\Big)\sup_{s\in[-\tau,T]}\sum_{j=1}^{n}|y_j(s) - z_j(s)|. \tag{4.12}$$
Since $\sum_{i=1}^{n}(1/a_i)\big(\max_{j\in\mathcal{N}}|b_{ij}l_j| + \max_{j\in\mathcal{N}}|c_{ij}k_j|\big) < 1$, $\pi$ is a contraction mapping, and hence there exists a unique fixed point $\mathbf{u}(\cdot)$ of $\pi$ in $\mathcal{H}$, which means that $\mathbf{u}(\cdot)$ is the solution of (2.3)-(2.4) and $\mathbf{u}(t) \to 0$ as $t \to \infty$.
To obtain the asymptotic stability, we still need to prove that the trivial equilibrium of (2.3) is stable. For any $\varepsilon > 0$, from the conditions of Theorem 4.1, we can find $\delta$ satisfying $0 < \delta < \varepsilon$ such that $\delta + \max_{i\in\mathcal{N}}\{\lambda_i\}\varepsilon \le \varepsilon/\sqrt{n}$.
Let $\|\varphi\| < \delta$. By the preceding discussion, there exists a unique solution $\mathbf{x}(t;s,\varphi) = (x_1(t;s,\varphi_1),\ldots,x_n(t;s,\varphi_n))^{\mathrm{T}} \in \mathbb{R}^n$ to (2.3)-(2.4), and
$$x_i(t) = \pi(x_i)(t) = J_1(t) + J_2(t) + J_3(t), \quad t \ge 0, \tag{4.13}$$
where
$$J_1(t) = \varphi_i(0)e^{-a_i t},\quad J_2(t) = e^{-a_i t}\int_0^t e^{a_i s}\sum_{j=1}^{n} b_{ij} f_j(x_j(s))\,\mathrm{d}s,\quad J_3(t) = e^{-a_i t}\int_0^t e^{a_i s}\sum_{j=1}^{n} c_{ij} g_j(x_j(s-\tau_j(s)))\,\mathrm{d}s. \tag{4.14}$$
Suppose there exists $t^* > 0$ such that $\|\mathbf{x}(t^*;s,\varphi)\| = \varepsilon$ and $\|\mathbf{x}(t;s,\varphi)\| < \varepsilon$ for $0 \le t < t^*$. It follows from (4.13) that $|x_i(t^*)| \le |J_1(t^*)| + |J_2(t^*)| + |J_3(t^*)|$.
As $|J_1(t^*)| = |\varphi_i(0)e^{-a_i t^*}| \le \delta$, $|J_2(t^*)| < (\varepsilon/a_i)\sum_{j=1}^{n}|b_{ij}l_j|$, and
$$|J_3(t^*)| \le e^{-a_i t^*}\int_0^{t^*} e^{a_i s}\sum_{j=1}^{n}|c_{ij}k_j|\,|x_j(s-\tau_j(s))|\,\mathrm{d}s < \frac{\varepsilon}{a_i}\sum_{j=1}^{n}|c_{ij}k_j|, \tag{4.15}$$
we obtain $|x_i(t^*)| < \delta + \lambda_i\varepsilon$. Hence
$$\|\mathbf{x}(t^*;s,\varphi)\|^2 = \sum_{i=1}^{n}|x_i(t^*)|^2 < \sum_{i=1}^{n}|\delta + \lambda_i\varepsilon|^2 \le n\Big|\delta + \max_{i\in\mathcal{N}}\{\lambda_i\}\varepsilon\Big|^2 \le \varepsilon^2. \tag{4.16}$$
This contradicts the assumption $\|\mathbf{x}(t^*;s,\varphi)\| = \varepsilon$. Therefore, $\|\mathbf{x}(t;s,\varphi)\| < \varepsilon$ holds for all $t \ge 0$. This completes the proof.

Remark 4.2. In Theorems 3.1 and 4.1, we use the contraction mapping principle to study the existence and uniqueness of the solution and the asymptotic stability of the trivial equilibrium at the same time, while the Lyapunov method fails to do this.

Remark 4.3. The sufficient conditions provided in Theorem 4.1 do not even require the differentiability of the delays, let alone the monotone decreasing behavior of the delays, which is required in some related works.

5. Example

Consider the following two-dimensional cellular neural network with time-varying delays:
$$\frac{\mathrm{d}x_i(t)}{\mathrm{d}t} = -a_i x_i(t) + \sum_{j=1}^{2} b_{ij} f_j(x_j(t)) + \sum_{j=1}^{2} c_{ij} g_j(x_j(t-\tau_j(t))), \tag{5.1}$$
with the initial conditions $x_1(s) = \cos(s)$, $x_2(s) = \sin(s)$ on $-\tau \le s \le 0$, where $a_1 = a_2 = 3$, $b_{11} = 0$, $b_{12} = 1/7$, $b_{21} = 1/7$, $b_{22} = 1/7$, $c_{11} = 3/7$, $c_{12} = 2/7$, $c_{21} = 0$, $c_{22} = 1/7$, $f_j(s) = g_j(s) = (|s+1| - |s-1|)/2$, and $\tau_j(t)$ is bounded by $\tau$.

It is easy to verify that $l_j = k_j = 1$ for $j = 1,2$. We compute
$$\sum_{i=1}^{2}\frac{1}{a_i}\Big(\max_{j=1,2}|b_{ij}l_j| + \max_{j=1,2}|c_{ij}k_j|\Big) = \frac{2}{7} < 1, \qquad \max_{i\in\mathcal{N}}\Big\{\frac{1}{a_i}\sum_{j=1}^{2}|b_{ij}l_j| + \frac{1}{a_i}\sum_{j=1}^{2}|c_{ij}k_j|\Big\} = \frac{2}{7} < \frac{1}{\sqrt{2}}. \tag{5.2}$$
From Theorem 4.1, we conclude that the trivial equilibrium $\mathbf{x} = 0$ of this two-dimensional cellular neural network is asymptotically stable.
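These computations can be checked with exact rational arithmetic. The script below is our own, using Python's fractions module; it reproduces both quantities in (5.2).

```python
from fractions import Fraction as F

a = [F(3), F(3)]
B = [[F(0), F(1, 7)], [F(1, 7), F(1, 7)]]   # b_ij
C = [[F(3, 7), F(2, 7)], [F(0), F(1, 7)]]   # c_ij
# f_j(s) = g_j(s) = (|s+1| - |s-1|)/2 has Lipschitz constant 1,
# so l_j = k_j = 1 and |b_ij l_j| = |b_ij|, |c_ij k_j| = |c_ij|.

cond1 = sum((max(abs(b) for b in B[i]) + max(abs(c) for c in C[i])) / a[i]
            for i in range(2))
lam = [(sum(abs(b) for b in B[i]) + sum(abs(c) for c in C[i])) / a[i]
       for i in range(2)]
print(cond1, max(lam))   # 2/7 and 2/7; 2/7 < 1 and 2/7 < 1/sqrt(2)
```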

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grants 60904028 and 71171116.

References

1. L. O. Chua and L. Yang, “Cellular neural networks: theory,” IEEE Transactions on Circuits and Systems, vol. 35, no. 10, pp. 1257–1272, 1988.
2. L. O. Chua and L. Yang, “Cellular neural networks: applications,” IEEE Transactions on Circuits and Systems, vol. 35, no. 10, pp. 1273–1290, 1988.
3. J. Cao, “New results concerning exponential stability and periodic solutions of delayed cellular neural networks,” Physics Letters A, vol. 307, no. 2-3, pp. 136–147, 2003.
4. P. P. Civalleri, M. Gilli, and L. Pandolfi, “On stability of cellular neural networks with delay,” IEEE Transactions on Circuits and Systems, vol. 40, no. 3, pp. 157–165, 1993.
5. J. Cao, “A set of stability criteria for delayed cellular neural networks,” IEEE Transactions on Circuits and Systems, vol. 48, no. 4, pp. 494–498, 2001.
6. G. T. Stamov and I. M. Stamova, “Almost periodic solutions for impulsive neural networks with delay,” Applied Mathematical Modelling, vol. 31, pp. 1263–1270, 2007.
7. S. Ahmad and I. M. Stamova, “Global exponential stability for impulsive cellular neural networks with time-varying delays,” Nonlinear Analysis, vol. 69, no. 3, pp. 786–795, 2008.
8. K. Li, X. Zhang, and Z. Li, “Global exponential stability of impulsive cellular neural networks with time-varying and distributed delay,” Chaos, Solitons and Fractals, vol. 41, no. 3, pp. 1427–1434, 2009.
9. J. Qiu, “Exponential stability of impulsive neural networks with time-varying delays and reaction-diffusion terms,” Neurocomputing, vol. 70, pp. 1102–1108, 2007.
10. X. Wang and D. Xu, “Global exponential stability of impulsive fuzzy cellular neural networks with mixed delays and reaction-diffusion terms,” Chaos, Solitons and Fractals, vol. 42, no. 5, pp. 2713–2721, 2009.
11. Y. Zhang and Q. Luo, “Global exponential stability of impulsive delayed reaction-diffusion neural networks via Hardy-Poincaré inequality,” Neurocomputing, vol. 83, pp. 198–204, 2012.
12. T. A. Burton, Stability by Fixed Point Theory for Functional Differential Equations, Dover, New York, NY, USA, 2006.
13. L. C. Becker and T. A. Burton, “Stability, fixed points and inverses of delays,” Proceedings of the Royal Society of Edinburgh A, vol. 136, no. 2, pp. 245–275, 2006.
14. T. A. Burton, “Fixed points, stability, and exact linearization,” Nonlinear Analysis, vol. 61, no. 5, pp. 857–870, 2005.
15. T. A. Burton, “Fixed points, Volterra equations, and Becker's resolvent,” Acta Mathematica Hungarica, vol. 108, no. 3, pp. 261–281, 2005.
16. T. A. Burton, “Fixed points and stability of a nonconvolution equation,” Proceedings of the American Mathematical Society, vol. 132, no. 12, pp. 3679–3687, 2004.
17. T. A. Burton, “Perron-type stability theorems for neutral equations,” Nonlinear Analysis, vol. 55, no. 3, pp. 285–297, 2003.
18. T. A. Burton, “Integral equations, implicit functions, and fixed points,” Proceedings of the American Mathematical Society, vol. 124, no. 8, pp. 2383–2390, 1996.
19. T. A. Burton and T. Furumochi, “Krasnoselskii's fixed point theorem and stability,” Nonlinear Analysis, vol. 49, no. 4, pp. 445–454, 2002.
20. T. A. Burton and B. Zhang, “Fixed points and stability of an integral equation: nonuniqueness,” Applied Mathematics Letters, vol. 17, no. 7, pp. 839–846, 2004.
21. T. Furumochi, “Stabilities in FDEs by Schauder's theorem,” Nonlinear Analysis, vol. 63, pp. 217–224, 2005.
22. C. Jin and J. Luo, “Fixed points and stability in neutral differential equations with variable delays,” Proceedings of the American Mathematical Society, vol. 136, no. 3, pp. 909–918, 2008.
23. Y. N. Raffoul, “Stability in neutral nonlinear differential equations with functional delays using fixed-point theory,” Mathematical and Computer Modelling, vol. 40, no. 7-8, pp. 691–700, 2004.
24. B. Zhang, “Fixed points and stability in differential equations with variable delays,” Nonlinear Analysis, vol. 63, pp. 233–242, 2005.
25. J. Luo, “Fixed points and stability of neutral stochastic delay differential equations,” Journal of Mathematical Analysis and Applications, vol. 334, no. 1, pp. 431–440, 2007.
26. J. Luo, “Fixed points and exponential stability of mild solutions of stochastic partial differential equations with delays,” Journal of Mathematical Analysis and Applications, vol. 342, no. 2, pp. 753–760, 2008.
27. J. Luo, “Stability of stochastic partial differential equations with infinite delays,” Journal of Computational and Applied Mathematics, vol. 222, no. 2, pp. 364–371, 2008.
28. J. Luo and T. Taniguchi, “Fixed points and stability of stochastic neutral partial differential equations with infinite delays,” Stochastic Analysis and Applications, vol. 27, no. 6, pp. 1163–1173, 2009.
29. R. Sakthivel and J. Luo, “Asymptotic stability of impulsive stochastic partial differential equations with infinite delays,” Journal of Mathematical Analysis and Applications, vol. 356, no. 1, pp. 1–6, 2009.
30. R. Sakthivel and J. Luo, “Asymptotic stability of nonlinear impulsive stochastic differential equations,” Statistics & Probability Letters, vol. 79, no. 9, pp. 1219–1223, 2009.
31. J. Luo, “Fixed points and exponential stability for stochastic Volterra-Levin equations,” Journal of Computational and Applied Mathematics, vol. 234, no. 3, pp. 934–940, 2010.
32. X. X. Liao, Theory and Application of Stability for Dynamical Systems, National Defence Industry Press, 2000.
33. D. R. Smart, Fixed Point Theorems, Cambridge University Press, Cambridge, UK, 1980.