Research Article | Open Access
Exponential Decay for a System of Equations with Distributed Delays
We prove that solutions of a system of ordinary differential equations converge to zero at an exponential rate. The feature of this work is that it deals with nonlinear, non-Lipschitz, and unbounded distributed delay terms involving non-Lipschitz and unbounded activation functions.
Of concern is the following system, posed for positive time with continuous initial data. Here the coefficients, the kernels, and the nonlinearities are continuous functions, subject to further conditions that will be specified below.
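The displayed equations did not survive extraction. As a point of reference only, a representative Hopfield-type system with distributed delays matching the description in this paper (all symbol names here, $a_i$, $b_{ij}$, $c_{ij}$, $k_{ij}$, $h_i$, $f_j$, $g_j$, $d_i$, are our assumptions, not the paper's notation) reads:

```latex
x_i'(t) = -a_i(t)\, h_i\!\bigl(x_i(t)\bigr)
  + \sum_{j=1}^{n} b_{ij}(t)\, f_j\!\bigl(x_j(t)\bigr)
  + \sum_{j=1}^{n} c_{ij}(t) \int_{0}^{t} k_{ij}(t-s)\, g_j\!\bigl(x_j(s)\bigr)\, ds
  + d_i(t), \qquad t>0,\; i=1,\dots,n,
```

with prescribed initial data $x_i(0)=x_{i0}$. In this reading, the $h_i$ play the role of the functions that are usually taken to be the identity, $f_j$ and $g_j$ are the activation functions, $k_{ij}$ are the delay kernels, and $d_i$ are the inputs.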
Similar forms of this system arise, for instance, in Neural Network Theory [1–28] (see also the “Applications” section below). There, the corresponding functions are much simpler: the functions acting on the states are usually equal to the identity, and the activation functions are assumed to be Lipschitz continuous. The integral terms represent the distributed delays; when the kernels are replaced by the delta distribution, we recover the well-known discrete delays. The remaining terms account for the external inputs. The first terms on the right-hand side of (1) may be viewed as dissipative terms.
Different methods have been used by many authors to study the well-posedness and the asymptotic behavior of solutions of these systems [1–3, 6, 9–17, 19–21, 25, 27]. In particular, a great deal of effort has been devoted to improving the conditions on the various coefficients involved in the system, as well as on the class of activation functions. Regarding the latter issue, the early assumptions of boundedness, monotonicity, and differentiability have all been relaxed to a mere global Lipschitz condition. Since then, this assumption does not seem to have been weakened considerably. It has been pointed out that many activation functions arising in applications are continuous but not necessarily Lipschitz continuous. A slightly weaker condition, formulated in terms of nondecreasing bounds around the equilibrium, has been used in [4, 19, 26, 28] (see also [22–24]). Finally, we cite the work in which the authors consider non-Lipschitz continuous but bounded activation functions. There are also many works on discontinuous activation functions.
Here we assume that the nonlinearities are continuous monotone nondecreasing functions that are not necessarily Lipschitz continuous and may be unbounded (like power-type functions with powers bigger than one). We prove that, for sufficiently small initial data, solutions decay to zero exponentially.
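As a concrete illustration of ours (not taken from the paper), a power-type nonlinearity of the admissible kind is

```latex
f(u) = |u|^{p-1}\,u, \qquad p > 1,
```

which is continuous, monotone nondecreasing, and unbounded, but not globally Lipschitz: the difference quotient $|f(u)-f(v)|/|u-v|$ grows like $p\,|u|^{p-1}$ and is therefore unbounded as $|u|\to\infty$.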
We could not find comparable works on (continuous but) non-Lipschitz continuous activation functions against which to compare our results. Our treatment is in fact concerned with a doubly non-Lipschitz continuous system.
Using standard techniques and the Gronwall-type lemma below, one may prove local existence of solutions. Global existence follows from the estimate in our theorem below. Uniqueness, however, is delicate and does not hold in general.
In the next section we present and prove our result and illustrate it by an example.
2. Exponential Convergence
In this section we state and prove our exponential convergence result. Before that, we need to present a lemma due to Bainov and Simeonov.
For positive continuous functions $w_1$ and $w_2$, we write $w_1 \propto w_2$ if the ratio $w_2/w_1$ is nondecreasing.
Lemma 1. Let the function to be estimated be nonnegative and continuous, let the majorizing function be positive and continuous, let the kernels be nonnegative continuous functions that are nondecreasing in the first variable for each fixed value of the second, and let the comparison functions be nondecreasing, continuous, and ordered by the relation $\propto$. Then the integral inequality in the lemma implies a pointwise bound expressed through iteratively defined auxiliary functions, valid up to a time chosen so that all the functions involved remain defined.
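A representative statement of such a Bihari–Pinto-type inequality, as found in the Bainov–Simeonov monograph, is the following (our reconstruction under assumed notation $u$, $a$, $k_i$, $w_i$; the paper's exact display was lost). If $a$ is positive, continuous, and nondecreasing, $w_1 \propto w_2 \propto \dots \propto w_n$, and

```latex
u(t) \le a(t) + \sum_{i=1}^{n} \int_{t_0}^{t} k_i(t,s)\, w_i\bigl(u(s)\bigr)\, ds,
\qquad t_0 \le t < T,
\quad\Longrightarrow\quad
u(t) \le c_n(t), \qquad t_0 \le t \le T_1,
```

where $c_0(t)=a(t)$, $c_i(t)=W_i^{-1}\bigl[W_i\bigl(c_{i-1}(t)\bigr)+\int_{t_0}^{t} k_i(t,s)\,ds\bigr]$, $W_i(u)=\int_{u_i}^{u} dz/w_i(z)$ for some $u_i>0$, and $T_1\le T$ is chosen so that every $W_i^{-1}$ is applied within its domain.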
In order to shorten the statement of our result, we introduce some auxiliary functions, depending on constants to be determined.
Theorem 2. Assume that the nonlinearities are continuous monotone nondecreasing functions vanishing at zero that can be relabelled, together with their corresponding coefficients, in a nondecreasing manner with respect to the relation $\propto$. Assume further that the coefficients are continuous functions on the relevant interval, the kernels are continuously differentiable, and the initial data are prescribed. Then there exists a positive constant such that (a) if the kernel derivatives are nonpositive, the stated estimate holds; (b) if, in addition, the kernel derivatives are summable and of arbitrary signs, then the conclusion in (a) holds with the modified quantities in place of the original ones.
Proof. From (1) we deduce, for each component, a differential inequality and, by summation, an inequality for the sum of the components, where $D^{+}$ denotes the right Dini derivative. Therefore, after standard estimations (see the cited reference), we arrive at an integral inequality whose ingredients are as defined before the theorem. We then set and define the auxiliary functionals accordingly. Clearly, from (11) and (15), the required monotonicity properties hold, and we distinguish two cases. (a) By integration we see that an integral inequality (17) holds, with a constant depending on the data. According to our hypotheses, we can relabel the terms in (17) so that it may be written in the form required by Lemma 1.
Applying Lemma 1, we obtain a bound on the functional, and hence the claimed exponential estimate, where the final time is chosen so that the functions involved in Lemma 1 remain defined.
(b) Coefficients of Arbitrary Signs. Define, for this case, a modified functional (24). From (16) and (24) we obtain a differential inequality, and by integration we find an integral inequality with a new constant (27). Next, we proceed as in Case (a) with the new functional (24) and the constant (27), the modified quantities replacing the original ones in the new estimate. The proof is complete.
Corollary 3. Under the smallness conditions of the theorem (in case (a), and with the corresponding quantities in case (b)), solutions of (1) are global in time. Moreover, if the relevant quantities grow at most polynomially, then the decay is exponential.
Remark 4. We have judged it useful to treat case (a) separately, even though it is covered by case (b), for the simple reason that this case arises in real applications: it corresponds to the “fading memory” situation. Similarly, the intermediate case may look unnecessary to study separately, as it is covered by the second case of the proof, but it is in fact also quite interesting. Indeed, in this case, from (16) we obtain a differential inequality and thus, by integration, an integral inequality (30). At this point we must point out that, unlike in the proof of the theorem, we cannot pass to the limit inside the arguments of the nonlinearities in (30). However, if these functions belong to a class allowing their arguments to be split multiplicatively, that is, if each can be bounded by a product of a function of one factor and a function of the other, then we may apply Lemma 1 (with the new coefficients) to get a bound for the relevant functional and thereafter for the solution. If that bound does not grow too fast, we obtain an exponential decay. This decay rate is to be compared with the general one obtained in the second case of our result.
3. Applications
A Neural Network is designed to mimic the human brain. It is formed by a number of “neurons” with interconnections between them. In general, there are an input layer, one or more hidden layers, and an output layer. The input neurons feed the neurons in the hidden layers, which transform the signal and fire it to the output neurons (or to other neurons). Neural Networks are widely used for solving optimization problems and for analyzing, classifying, and evaluating data. They have the advantage (over traditional computers) of being able to forecast, predict, and make decisions.
There are numerous applications, of which we cite the following: economic indicators, data compression, complex mappings, biological systems analysis, optimization, process control, time series analysis, stock market prediction, diagnosis of hepatitis, engineering design, soil permeability, speech processing, pattern recognition, and so on.
Most of the existing papers in this theory deal with the constant coefficients case; the few papers on variable coefficients treat mainly the existence of periodic solutions. In the constant coefficients case the system takes the same form with constant coefficients in place of the variable ones. Our theorem then gives an exponential estimate with some positive decay rate, determined as in case (a), and a similar estimate with the modified quantities in case (b). The corollary provides sufficient conditions ensuring global existence. In this case we have exponential decay provided that the relevant bound does not grow too fast.
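To make the constant-coefficient case concrete, here is a small numerical sketch. It is entirely ours: the two-neuron size, the coefficient values, the kernel $e^{-t}$, and the activation $f(u)=|u|u$ are illustrative assumptions, not the paper's data. With an exponential kernel, the distributed-delay integral can be carried by an auxiliary ODE, and a forward Euler scheme then exhibits the predicted exponential decay for small initial data.

```python
import math

# Hypothetical two-neuron instance of the constant-coefficient system
#   x_i' = -a_i x_i + sum_j b_ij f(x_j) + sum_j c_ij \int_0^t e^{-(t-s)} f(x_j(s)) ds
# with the non-Lipschitz power-type activation f(u) = |u| u  (power p = 2).
# The exponential kernel lets the delay term be carried by an auxiliary ODE:
#   z_j' = f(x_j) - z_j,  z_j(0) = 0.

def f(u):
    """Power-type activation: monotone nondecreasing, not globally Lipschitz."""
    return abs(u) * u

def simulate(x0, T=20.0, dt=1e-3):
    a = [3.0, 3.0]                       # dissipative coefficients
    b = [[0.2, -0.1], [0.1, 0.2]]        # instantaneous couplings
    c = [[0.1, 0.05], [0.05, 0.1]]       # distributed-delay couplings
    x = list(x0)
    z = [0.0, 0.0]                       # auxiliary delay states
    for _ in range(int(T / dt)):
        fx = [f(xj) for xj in x]
        dx = [-a[i] * x[i]
              + sum(b[i][j] * fx[j] for j in range(2))
              + sum(c[i][j] * z[j] for j in range(2))
              for i in range(2)]
        dz = [fx[j] - z[j] for j in range(2)]
        x = [x[i] + dt * dx[i] for i in range(2)]
        z = [z[j] + dt * dz[j] for j in range(2)]
    return x

x_final = simulate([0.3, -0.2])
print(max(abs(v) for v in x_final))      # small initial data: decays toward zero
```

For larger initial data the quadratic activation can overpower the linear damping, which is consistent with the smallness condition on the initial data required by the theorem.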
Example 5. Consider power-type nonlinearities. The ordering then amounts to arranging the corresponding functions and coefficients in a nondecreasing manner. The admissible time is the largest value for which the auxiliary functions of Lemma 1 remain defined; for the asymptotic behavior we need this value to be infinite. In particular, we need a smallness condition on the initial data.
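Under the standard definition $w_1 \propto w_2$ when $w_2/w_1$ is nondecreasing, the ordering of power functions reduces to sorting their exponents (our illustration, not the paper's computation):

```latex
w_p(u) = u^{p},\quad w_q(u) = u^{q},\quad q \ge p > 0
\;\Longrightarrow\;
\frac{w_q(u)}{w_p(u)} = u^{\,q-p}\ \text{is nondecreasing on } (0,\infty),
\ \text{i.e. } w_p \propto w_q .
```

Relabelling the nonlinearities “in a nondecreasing manner” therefore amounts to sorting their exponents.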
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The author is grateful for the financial support and the facilities provided by King Fahd University of Petroleum and Minerals through Grant no. IN121044.
References
- A. Bouzerdoum and T. R. Pattison, “Neural network for quadratic optimization with bound constraints,” IEEE Transactions on Neural Networks, vol. 4, no. 2, pp. 293–304, 1993.
- J. Cao, K. Yuan, and H.-X. Li, “Global asymptotical stability of recurrent neural networks with multiple discrete delays and distributed delays,” IEEE Transactions on Neural Networks, vol. 17, no. 6, pp. 1646–1651, 2006.
- L. O. Chua and T. Roska, “Stability of a class of nonreciprocal cellular neural networks,” IEEE Transactions on Circuits and Systems, vol. 37, no. 12, pp. 1520–1527, 1990.
- C. Feng and R. Plamondon, “On the stability analysis of delayed neural networks systems,” Neural Networks, vol. 14, no. 9, pp. 1181–1188, 2001.
- M. Forti, M. Grazzini, P. Nistri, and L. Pancioni, “Generalized Lyapunov approach for convergence of neural networks with discontinuous or non-Lipschitz activations,” Physica D: Nonlinear Phenomena, vol. 214, no. 1, pp. 88–99, 2006.
- M. Forti and A. Tesi, “New conditions for global stability of neural networks with application to linear and quadratic programming problems,” IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 42, no. 7, pp. 354–366, 1995.
- J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” Proceedings of the National Academy of Sciences of the United States of America, vol. 79, no. 8, pp. 2554–2558, 1982.
- J. J. Hopfield and D. W. Tank, “Computing with neural circuits: a model,” Science, vol. 233, no. 4764, pp. 625–633, 1986.
- L. Huang, J. Wang, and X. Zhou, “Existence and global stability of periodic solutions for Hopfield neural networks with discontinuous activations,” Nonlinear Analysis: Real World Applications, vol. 10, no. 3, pp. 1651–1661, 2009.
- J.-I. Inoue, “Retrieval phase diagrams of non-monotonic Hopfield networks,” Journal of Physics A: Mathematical and General, vol. 29, no. 16, pp. 4815–4826, 1996.
- M. P. Kennedy and L. O. Chua, “Neural networks for nonlinear programming,” IEEE Transactions on Circuits and Systems, vol. 35, no. 5, pp. 554–562, 1988.
- X. Liu and N. Jiang, “Robust stability analysis of generalized neural networks with multiple discrete delays and multiple distributed delays,” Neurocomputing, vol. 72, no. 7–9, pp. 1789–1796, 2009.
- S. Mohamad, “Exponential stability in Hopfield-type neural networks with impulses,” Chaos, Solitons & Fractals, vol. 32, no. 2, pp. 456–467, 2007.
- S. Mohamad, K. Gopalsamy, and H. Akça, “Exponential stability of artificial neural networks with distributed delays and large impulses,” Nonlinear Analysis: Real World Applications, vol. 9, no. 3, pp. 872–888, 2008.
- J. H. Park, “On global stability criterion for neural networks with discrete and distributed delays,” Chaos, Solitons & Fractals, vol. 30, no. 4, pp. 897–902, 2006.
- J. H. Park, “On global stability criterion of neural networks with continuously distributed delays,” Chaos, Solitons & Fractals, vol. 37, no. 2, pp. 444–449, 2008.
- Q. Zhang, R.-N. Ma, and J. Xu, “Global exponential convergence analysis of Hopfield neural networks with continuously distributed delays,” Communications in Theoretical Physics, vol. 39, no. 3, pp. 381–384, 2003.
- H. Qiao, J. Peng, and Z.-B. Xu, “Nonlinear measures: a new approach to exponential stability analysis for Hopfield-type neural networks,” IEEE Transactions on Neural Networks, vol. 12, no. 2, pp. 360–370, 2001.
- Q. Song, “Novel criteria for global exponential periodicity and stability of recurrent neural networks with time-varying delays,” Chaos, Solitons & Fractals, vol. 36, no. 3, pp. 720–728, 2008.
- S. I. Sudharsanan and M. K. Sundareshan, “Exponential stability and a systematic synthesis of a neural network for quadratic minimization,” Neural Networks, vol. 4, no. 5, pp. 599–613, 1991.
- P. van den Driessche and X. Zou, “Global attractivity in delayed Hopfield neural network models,” SIAM Journal on Applied Mathematics, vol. 58, no. 6, pp. 1878–1890, 1998.
- H. Wu, “Global exponential stability of Hopfield neural networks with delays and inverse Lipschitz neuron activations,” Nonlinear Analysis: Real World Applications, vol. 10, no. 4, pp. 2297–2306, 2009.
- H. Wu, F. Tao, L. Qin, R. Shi, and L. He, “Robust exponential stability for interval neural networks with delays and non-Lipschitz activation functions,” Nonlinear Dynamics, vol. 66, no. 4, pp. 479–487, 2011.
- H. Wu and X. Xue, “Stability analysis for neural networks with inverse Lipschitzian neuron activations and impulses,” Applied Mathematical Modelling, vol. 32, no. 11, pp. 2347–2359, 2008.
- Q. Zhang, X. P. Wei, and J. Xu, “Global exponential stability of Hopfield neural networks with continuously distributed delays,” Physics Letters A, vol. 315, no. 6, pp. 431–436, 2003.
- H. Y. Zhao, “Global stability of neural networks with distributed delays,” Physical Review E, vol. 68, no. 5, Article ID 051909, 7 pages, 2003.
- H. Zhao, “Global asymptotic stability of Hopfield neural network involving distributed delays,” Neural Networks, vol. 17, no. 1, pp. 47–53, 2004.
- J. Zhou, S. Y. Li, and Z. G. Yang, “Global exponential stability of Hopfield neural networks with distributed delays,” Applied Mathematical Modelling, vol. 33, no. 3, pp. 1513–1520, 2009.
- B. Kosko, Neural Networks and Fuzzy Systems: A Dynamical Systems Approach to Machine Intelligence, Prentice-Hall of India, New Delhi, India, 1991.
- D. Bainov and P. Simeonov, Integral Inequalities and Applications, vol. 57 of Mathematics and Its Applications, Kluwer Academic Publishers, London, UK, 1992.
Copyright © 2015 Nasser-Eddine Tatar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.