Research Article | Open Access

Nasser-Eddine Tatar, "Exponential Decay for a System of Equations with Distributed Delays", *Journal of Applied Mathematics*, vol. 2015, Article ID 981383, 6 pages, 2015. https://doi.org/10.1155/2015/981383

# Exponential Decay for a System of Equations with Distributed Delays

**Academic Editor:** Qiankun Song

#### Abstract

We prove that solutions of a system of ordinary differential equations converge to zero at an exponential rate. The feature of this work is that it deals with nonlinear, unbounded distributed delay terms involving activation functions that are neither Lipschitz continuous nor bounded.

#### 1. Introduction

Of concern is the following system, posed for positive times and supplemented with continuous initial data on an initial interval. The coefficient functions, the delay kernels, and the input functions appearing in the system are continuous and are subject to further conditions that will be specified below.

Similar forms of this system arise, for instance, in Neural Network Theory [1–28] (see also the "Applications" section below). There, the functions involved are much simpler: the dissipation nonlinearities are usually equal to the identity, and the activation functions are assumed to be Lipschitz continuous. The integral terms represent the distributed delays; when the kernels are replaced by the delta distribution we recover the well-known discrete delays. Further terms account for the input functions, and the first terms on the right-hand side of (1) may be viewed as dissipative terms.
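For orientation, the Hopfield-type models with distributed delays studied in the cited literature are typically written in the following form (the symbols here are illustrative choices of ours; the precise system (1) fixes its own notation):

```latex
x_i'(t) = -a_i(t)\,h_i\bigl(x_i(t)\bigr)
          + \sum_{j=1}^{n} b_{ij}(t)\, f_j\bigl(x_j(t)\bigr)
          + \sum_{j=1}^{n} c_{ij}(t) \int_{0}^{t} K_{ij}(t-s)\, g_j\bigl(x_j(s)\bigr)\, ds
          + I_i(t), \qquad i = 1, \dots, n,
```

where the $h_i$ give the dissipative terms, $f_j$ and $g_j$ are the activation functions, the kernels $K_{ij}$ carry the distributed delays (a Dirac delta $K_{ij}=\delta_{\tau}$ recovers a discrete delay $x_j(t-\tau)$), and the $I_i$ are the inputs.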

Different methods have been used by many authors to study the well-posedness and the asymptotic behavior of solutions of these systems [1–3, 6, 9–17, 19–21, 25, 27]. In particular, much effort has been devoted to improving the conditions on the different coefficients involved in the system as well as on the class of activation functions. Regarding the latter issue, the early assumptions of boundedness, monotonicity, and differentiability have all been relaxed to merely a global Lipschitz condition. Since then, it seems that this assumption has not been weakened considerably. It has been pointed out that many activation functions arising in applications are continuous but not necessarily Lipschitz continuous [29]. A slightly weaker condition, a one-sided growth bound relative to the equilibrium, has been used in [4, 19, 26, 28] (see also [22–24]). Finally, we cite [5], where the authors consider non-Lipschitz continuous but bounded activation functions. There are also many works on discontinuous activation functions.

Here we assume that the activation functions are continuous monotone nondecreasing functions that are not necessarily Lipschitz continuous and may be unbounded (like power-type functions with powers bigger than one). We prove that, for sufficiently small initial data, solutions decay to zero exponentially.
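A concrete instance of this class, given here as our own illustrative example, is the power-type function

```latex
f(x) = \operatorname{sgn}(x)\,|x|^{p}, \qquad p > 1,
```

which is continuous, monotone nondecreasing, and unbounded, yet not globally Lipschitz continuous: $f(x)/x = |x|^{p-1} \to \infty$ as $|x| \to \infty$, so no single Lipschitz constant works on all of $\mathbb{R}$.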

We could not find similar interesting works on (continuous but) non-Lipschitz continuous activation functions to compare our results with. Our treatment is in fact concerned with a doubly non-Lipschitz continuous system.

Using standard techniques and the Gronwall-type lemma below, we may prove local existence of solutions. Global existence follows from the estimate in our theorem below. Uniqueness, however, is delicate and does not hold in general.

In the next section we present and prove our result and illustrate it by an example.

#### 2. Exponential Convergence

In this section we state and prove our exponential convergence result. Before that we need to present a lemma due to Bainov and Simeonov [30].

Let $J = [\alpha, \beta) \subset \mathbb{R}$, and let $w_1, w_2$ be positive continuous functions on $(0,\infty)$. We write $w_1 \propto w_2$ if $w_2/w_1$ is nondecreasing in $(0,\infty)$.

Lemma 1. *Let $a(t)$ be a positive continuous function in $J$. Assume that $k_i(t,s)$, $i=1,\dots,n$, are nonnegative continuous functions for $(t,s) \in J \times J$ which are nondecreasing in $t$ for any fixed $s$, $w_i$, $i=1,\dots,n$, are nondecreasing continuous functions in $(0,\infty)$, with $w_i(u) > 0$ for $u > 0$, and $u(t)$ is a nonnegative continuous function in $J$. If $w_1 \propto w_2 \propto \cdots \propto w_n$ in $(0,\infty)$, then the inequality*
$$u(t) \le a(t) + \sum_{i=1}^{n} \int_{\alpha}^{t} k_i(t,s)\, w_i(u(s))\, ds, \qquad t \in J,$$
*implies that*
$$u(t) \le \omega_n(t), \qquad \alpha \le t < \beta_0,$$
*where $\omega_0(t) = \max_{\alpha \le s \le t} a(s)$,*
$$\omega_i(t) = G_i^{-1}\!\left[ G_i\bigl(\omega_{i-1}(t)\bigr) + \int_{\alpha}^{t} k_i(t,s)\, ds \right], \qquad G_i(x) = \int_{x_i}^{x} \frac{du}{w_i(u)}, \quad x > 0, \ x_i > 0,$$
*and $\beta_0 \le \beta$ is chosen so that the functions $\omega_i$, $i=1,\dots,n$, are defined for $\alpha \le t < \beta_0$.*
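As a minimal numerical sanity check of the scalar case of such Gronwall–Bihari bounds (a sketch under the illustrative assumptions $n = 1$, $k_1 \equiv 1$, $w_1(u) = u^2$; the function names below are ours), the inequality $u(t) \le c + \int_0^t u(s)^2\,ds$ yields the bound $u(t) \le c/(1 - ct)$ for $t < 1/c$, which the extremal case $u' = u^2$, $u(0) = c$, attains:

```python
# Numerical check of the scalar Bihari-type bound u(t) <= c / (1 - c*t)
# against the extremal equation u' = u**2, u(0) = c, valid for t < 1/c.
# (Illustrative sketch; c, T, dt are our own choices, not from the paper.)

def euler_u_squared(c, T, dt):
    """Forward-Euler integration of u' = u**2 starting from u(0) = c."""
    u, t = c, 0.0
    while t < T - 1e-12:
        u += dt * u * u
        t += dt
    return u

c, T, dt = 0.5, 1.0, 1e-4      # T = 1.0 < 1/c = 2.0, so the bound is finite
u_num = euler_u_squared(c, T, dt)
bound = c / (1.0 - c * T)       # Bihari bound at t = T (equals 1.0 here)

assert u_num <= bound           # the numerical solution respects the bound
```

Forward Euler underestimates the convex, increasing exact solution, so the computed value sits just below the theoretical bound, as the assertion confirms.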

In order to shorten the statement of our result we define, for the relabelled nonlinearities below, auxiliary functions involving a constant to be determined later.

Theorem 2. *Assume that the nonlinearities in (1) are continuous monotone nondecreasing functions which, together with their corresponding coefficients, can be relabelled so that the ordering hypothesis of Lemma 1 is satisfied. Assume further that the coefficients and the inputs are continuous functions on their domains and that the kernels are continuously differentiable. Then, there exists a positive constant such that*(a)*for sufficiently small initial data, the estimate defined before the theorem holds;*(b)*if, in addition, the interconnection coefficients are summable and of arbitrary signs, then the conclusion in (a) holds with the correspondingly modified quantities.*

*Proof.* From (1) we entail that, for each component, and, by summation, we get, where $D^{+}$ denotes the right Dini derivative. Therefore, and clearly. It follows that (see [29]), where and are as defined before the theorem. Set, where. Define. Clearly, from (11) and (15) we have, and *(a)* *Nonnegative Coefficients.* By integration we see that, with. According to our hypotheses we can relabel the terms in (17) so that it may be written as, with.

Applying Lemma 1 we obtain, and hence, where, and is chosen so that the functions are defined there. *(b)* *Coefficients of Arbitrary Signs.* Define, for. That is. We have from (16) and (24) that, and by integration we find, for, with. Next, we proceed as in Case (a) with the new functional (24), the constant (27), and the correspondingly modified quantities. The proof is complete.

Corollary 3. *If , (in (a), and , in (b)), then solutions of (1) are global in time. Moreover, if and (resp., ) grow at most polynomially, then the decay is exponential.*

*Remark 4.* We have judged it useful to treat case (a) separately even though it is covered by case (b), for the simple reason that this case arises in real applications: it corresponds to the "fading memory" situation. Likewise, the case , for some , may look unnecessary to study separately as it is covered by the second case in the proof, but it is in fact also quite interesting. Indeed, in this case, from (16) we have. Therefore, and thus. At this point we must point out that, unlike in the proof of the theorem, we cannot pass to inside the arguments of and (in (30)). However, if the functions and belong to the class , that is, there exist and such that and , then for. At this stage we may apply Lemma 1 (with the new coefficients and ) to get a bound for the function and thereafter for the solution. If that bound does not grow too fast, we will get an exponential decay. This decay rate is to be compared with the general one obtained in the second case of our result.

#### 3. Applications

This system appears in Neural Network Theory. For a basic one the reader is referred to [7, 8].

A Neural Network is designed to mimic the human brain. It is formed by a number of "neurons" with interconnections between them. In general there is an input layer, one or more hidden layers, and an output layer. The input neurons feed the neurons in the hidden layers, which transform the signal and fire it to the output neurons (or to other neurons). Neural networks are widely used for solving optimization problems and for analysis, classification, and evaluation tasks. They have the advantage (over traditional computers) of being able to forecast, predict, and make decisions.

There are numerous applications, of which we cite the following: economic indicators, data compression, complex mappings, biological systems analysis, optimization, process control, time series analysis, stock market prediction, diagnosis of hepatitis, engineering design, soil permeability, speech processing, pattern recognition, and so on.

Most of the existing papers in this theory deal with the constant-coefficient case; the few papers on variable coefficients treat mainly the existence of periodic solutions. In the constant-coefficient case the system takes the form, where the coefficients are constants. Our theorem gives the estimate, for some , where in case (a), and a similar estimate in case (b). The corollary provides sufficient conditions ensuring global existence; in this case we have exponential decay provided that the bounding function does not grow too fast.
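To illustrate the constant-coefficient case numerically, the following sketch simulates a hypothetical two-neuron system with exponential kernels $K(u) = e^{-u}$ and cubic activations (power type with power greater than one, hence not globally Lipschitz); all coefficient values and names are our own choices, not taken from the paper. The exponential kernel lets the distributed delay term $y_j(t) = \int_0^t e^{-(t-s)} f(x_j(s))\,ds$ be computed via the auxiliary ODE $y_j' = -y_j + f(x_j)$, $y_j(0) = 0$.

```python
# Hypothetical two-neuron constant-coefficient system with distributed delay:
#   x_i' = -a_i x_i + sum_j b_ij f(x_j) + sum_j c_ij y_j,
#   y_j' = -y_j + f(x_j),   y_j(0) = 0,
# where y_j(t) = int_0^t exp(-(t-s)) f(x_j(s)) ds.  All numbers below are
# illustrative choices of ours, not values from the paper.

def f(x):
    return x ** 3                      # activation: continuous, nondecreasing,
                                       # unbounded, not globally Lipschitz

a = [1.0, 1.0]                         # dissipation coefficients
b = [[0.2, -0.1], [0.1, 0.2]]          # instantaneous interconnections
c = [[0.1, 0.1], [-0.1, 0.1]]          # distributed-delay interconnections

x = [0.5, -0.4]                        # small initial data
y = [0.0, 0.0]                         # delay integrals start at zero
dt, T = 0.01, 10.0

x0_norm = max(abs(v) for v in x)
for _ in range(int(T / dt)):           # forward Euler time stepping
    dx = [-a[i] * x[i]
          + sum(b[i][j] * f(x[j]) for j in range(2))
          + sum(c[i][j] * y[j] for j in range(2)) for i in range(2)]
    dy = [-y[j] + f(x[j]) for j in range(2)]
    x = [x[i] + dt * dx[i] for i in range(2)]
    y = [y[j] + dt * dy[j] for j in range(2)]

x_norm = max(abs(v) for v in x)
# For these small initial data the solution decays roughly like exp(-t).
```

With these small initial data the cubic coupling terms are dominated by the dissipation, and the computed state shrinks by several orders of magnitude over the time window, consistent with the exponential-decay conclusion.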

*Example 5.* Consider the functions , , , . The order means relabelling and in a nondecreasing manner. Therefore, , , and. The value will be the largest value of for which the required inequality holds for all. For the asymptotic behavior we need this to be infinity; in particular, we need a smallness condition on the initial data.

#### Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgment

The author is grateful for the financial support and the facilities provided by King Fahd University of Petroleum and Minerals through Grant no. IN121044.

#### References

1. A. Bouzerdoum and T. R. Pattison, "Neural network for quadratic optimization with bound constraints," *IEEE Transactions on Neural Networks*, vol. 4, no. 2, pp. 293–304, 1993.
2. J. Cao, K. Yuan, and H.-X. Li, "Global asymptotical stability of recurrent neural networks with multiple discrete delays and distributed delays," *IEEE Transactions on Neural Networks*, vol. 17, no. 6, pp. 1646–1651, 2006.
3. L. O. Chua and T. Roska, "Stability of a class of nonreciprocal cellular neural networks," *IEEE Transactions on Circuits and Systems*, vol. 37, no. 12, pp. 1520–1527, 1990.
4. C. Feng and R. Plamondon, "On the stability analysis of delayed neural networks systems," *Neural Networks*, vol. 14, no. 9, pp. 1181–1188, 2001.
5. M. Forti, M. Grazzini, P. Nistri, and L. Pancioni, "Generalized Lyapunov approach for convergence of neural networks with discontinuous or non-Lipschitz activations," *Physica D: Nonlinear Phenomena*, vol. 214, no. 1, pp. 88–99, 2006.
6. M. Forti and A. Tesi, "New conditions for global stability of neural networks with application to linear and quadratic programming problems," *IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications*, vol. 42, no. 7, pp. 354–366, 1995.
7. J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," *Proceedings of the National Academy of Sciences of the United States of America*, vol. 79, no. 8, pp. 2554–2558, 1982.
8. J. J. Hopfield and D. W. Tank, "Computing with neural circuits: a model," *Science*, vol. 233, no. 4764, pp. 625–633, 1986.
9. L. Huang, J. Wang, and X. Zhou, "Existence and global stability of periodic solutions for Hopfield neural networks with discontinuous activations," *Nonlinear Analysis: Real World Applications*, vol. 10, no. 3, pp. 1651–1661, 2009.
10. J.-I. Inoue, "Retrieval phase diagrams of non-monotonic Hopfield networks," *Journal of Physics A: Mathematical and General*, vol. 29, no. 16, pp. 4815–4826, 1996.
11. M. P. Kennedy and L. O. Chua, "Neural networks for non-linear programming," *IEEE Transactions on Circuits and Systems*, vol. 35, no. 5, pp. 554–562, 1988.
12. X. Liu and N. Jiang, "Robust stability analysis of generalized neural networks with multiple discrete delays and multiple distributed delays," *Neurocomputing*, vol. 72, no. 7–9, pp. 1789–1796, 2009.
13. S. Mohamad, "Exponential stability in Hopfield-type neural networks with impulses," *Chaos, Solitons & Fractals*, vol. 32, no. 2, pp. 456–467, 2007.
14. S. Mohamad, K. Gopalsamy, and H. Akça, "Exponential stability of artificial neural networks with distributed delays and large impulses," *Nonlinear Analysis: Real World Applications*, vol. 9, no. 3, pp. 872–888, 2008.
15. J. H. Park, "On global stability criterion for neural networks with discrete and distributed delays," *Chaos, Solitons & Fractals*, vol. 30, no. 4, pp. 897–902, 2006.
16. J. H. Park, "On global stability criterion of neural networks with continuously distributed delays," *Chaos, Solitons & Fractals*, vol. 37, no. 2, pp. 444–449, 2008.
17. Q. Zhang, M. A. Run-Nian, and X. Jin, "Global exponential convergence analysis of Hopfield neural networks with continuously distributed delays," *Communications in Theoretical Physics*, vol. 39, no. 3, pp. 381–384, 2003.
18. H. Qiao, J. Peng, and Z.-B. Xu, "Nonlinear measures: a new approach to exponential stability analysis for Hopfield-type neural networks," *IEEE Transactions on Neural Networks*, vol. 12, no. 2, pp. 360–370, 2001.
19. Q. Song, "Novel criteria for global exponential periodicity and stability of recurrent neural networks with time-varying delays," *Chaos, Solitons & Fractals*, vol. 36, no. 3, pp. 720–728, 2008.
20. S. I. Sudharsanan and M. K. Sundareshan, "Exponential stability and a systematic synthesis of a neural network for quadratic minimization," *Neural Networks*, vol. 4, no. 5, pp. 599–613, 1991.
21. P. van den Driessche and X. Zou, "Global attractivity in delayed Hopfield neural network models," *SIAM Journal on Applied Mathematics*, vol. 58, no. 6, pp. 1878–1890, 1998.
22. H. Wu, "Global exponential stability of Hopfield neural networks with delays and inverse Lipschitz neuron activations," *Nonlinear Analysis: Real World Applications*, vol. 10, no. 4, pp. 2297–2306, 2009.
23. H. Wu, F. Tao, L. Qin, R. Shi, and L. He, "Robust exponential stability for interval neural networks with delays and non-Lipschitz activation functions," *Nonlinear Dynamics*, vol. 66, no. 4, pp. 479–487, 2011.
24. H. Wu and X. Xue, "Stability analysis for neural networks with inverse Lipschitzian neuron activations and impulses," *Applied Mathematical Modelling*, vol. 32, no. 11, pp. 2347–2359, 2008.
25. Q. Zhang, X. P. Wei, and J. Xu, "Global exponential stability of Hopfield neural networks with continuously distributed delays," *Physics Letters A*, vol. 315, no. 6, pp. 431–436, 2003.
26. H. Y. Zhao, "Global stability of neural networks with distributed delays," *Physical Review E*, vol. 68, no. 5, Article ID 051909, 7 pages, 2003.
27. H. Zhao, "Global asymptotic stability of Hopfield neural network involving distributed delays," *Neural Networks*, vol. 17, no. 1, pp. 47–53, 2004.
28. J. Zhou, S. Y. Li, and Z. G. Yang, "Global exponential stability of Hopfield neural networks with distributed delays," *Applied Mathematical Modelling*, vol. 33, no. 3, pp. 1513–1520, 2009.
29. B. Kosko, *Neural Network and Fuzzy System—A Dynamical System Approach to Machine Intelligence*, Prentice-Hall of India, New Delhi, India, 1991.
30. D. Bainov and P. Simeonov, *Integral Inequalities and Applications*, vol. 57 of *Mathematics and Its Applications*, Kluwer Academic Publishers, London, UK, 1992.

#### Copyright

Copyright © 2015 Nasser-Eddine Tatar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.