Advances in Artificial Neural Systems
Volume 2012 (2012), Article ID 571358, 5 pages
http://dx.doi.org/10.1155/2012/571358
Research Article

Hopfield Neural Networks with Unbounded Monotone Activation Functions

Nasser-eddine Tatar

Department of Mathematics and Statistics, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia

Received 24 August 2012; Revised 17 December 2012; Accepted 17 December 2012

Academic Editor: Chao-Ton Su

Copyright © 2012 Nasser-eddine Tatar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

For the Hopfield Neural Network problem, we consider unbounded, monotone nondecreasing activation functions. We prove exponential convergence to zero provided that the initial data are sufficiently small.

1. Introduction

Of concern is the following system:

\[
x_i'(t) = -a_i(t)\, x_i(t) + \sum_{j=1}^{n} b_{ij}(t)\, f_j(x_j(t)) + c_i(t), \qquad t \ge 0, \; i = 1, \dots, n, \tag{1}
\]

where a_i(t), b_{ij}(t), and c_i(t) are continuous functions and the f_j are the activation functions, which will be assumed continuous and bounded by some nondecreasing (and possibly unbounded) functions.
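
A minimal numerical sketch of a system of this form; the coefficients, activation function, and initial data below are illustrative choices, not values from the analysis:

import numpy as np

# Forward-Euler integration of the variable-coefficient system
#   x_i'(t) = -a_i(t) x_i(t) + sum_j b_ij(t) f_j(x_j(t)) + c_i(t).
n = 3
a = lambda t: np.array([2.0, 2.5, 3.0])                # damping coefficients a_i(t)
b = lambda t: 0.1 * np.array([[0.0, 1.0, -1.0],
                              [1.0, 0.0, 1.0],
                              [-1.0, 1.0, 0.0]])       # connection coefficients b_ij(t)
c = lambda t: 0.01 * np.exp(-t) * np.ones(n)           # external inputs c_i(t)
f = lambda x: np.sign(x) * np.abs(x) ** 1.5            # power-type activation, not globally Lipschitz

x = np.full(n, 0.05)                                   # sufficiently small initial data
dt, T = 1e-3, 10.0
for k in range(int(T / dt)):
    t = k * dt
    x = x + dt * (-a(t) * x + b(t) @ f(x) + c(t))

print(x)                                               # the state should have decayed towards zero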

This system appears in Neural Network theory [1, 2]. As is well known, Neural Networks are an important tool in business intelligence. Their architecture differs from that of standard computers in that it consists of a large number of processors (neurons) that are highly interconnected. In contrast to computers with a single processor, (Artificial) Neural Networks perform their computations in parallel.

Just as in the human brain, the neurons receive weighted signals from the neurons in the input layer, sum up these inputs, and test the result against a threshold value. They then decide whether or not to fire.
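
A small sketch of this firing rule; the weights, inputs, and threshold are illustrative values:

import numpy as np

# A neuron sums its weighted inputs and fires when the sum exceeds a threshold.
def fires(inputs, weights, threshold):
    return float(np.dot(weights, inputs)) > threshold

print(fires(inputs=np.array([0.2, 0.7, 0.1]),
            weights=np.array([0.5, 0.9, -0.3]),
            threshold=0.6))                            # True: weighted sum 0.70 exceeds 0.60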

The applications are numerous; we may cite a few: modelling soil behavior, design of tunnels, image processing, graph flow, data deconvolution, energy demand forecasting, ecosystem evaluation, scheduling optimization, targeted marketing, medical diagnosis, time series analysis, and the stock market.

Neural Networks are able to analyze and evaluate many phenomena in real-world business as well as in industry. Some of their advantages over conventional computers lie in forecasting, strategy planning, and the prediction of many phenomena.

Different methods have been used by many authors to study the well-posedness and the asymptotic behavior of solutions [3–20]. In particular, a great deal of effort has been devoted to improving the set of conditions on the different coefficients involved in the system, as well as the class of admissible activation functions. Regarding the latter issue, the early assumptions of boundedness, monotonicity, and differentiability have all been relaxed to a mere global Lipschitz condition. Since then, it seems that this assumption has not been weakened considerably, although there is a great need for that [21]. A slightly weaker condition, in which the activation functions are controlled through the equilibrium, has been used in [22–24] (see also [25–27]).

Here we assume that the activation functions f_j are bounded by continuous monotone nondecreasing functions ρ_j, that is, |f_j(x)| ≤ ρ_j(|x|). The functions ρ_j are not necessarily Lipschitz continuous and they may be unbounded (like power-type functions with powers bigger than one). We can also consider activation functions with discrete delays, as explained below. We prove that, for sufficiently small initial data, solutions decay to zero exponentially.
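
A typical instance, assuming the bound takes the form written above, is the power type:

\[
|f_j(x)| \le \rho_j(|x|), \qquad \rho_j(s) = k_j\, s^{p_j}, \quad k_j > 0, \ p_j > 1,
\]

which is continuous, nondecreasing, unbounded on [0, ∞), and not globally Lipschitz.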

The local existence of solutions and the existence of equilibria are standard (see the Gronwall-type Lemma 1 below), and global existence follows from the estimate in our theorem below. However, the uniqueness of the equilibrium is not trivial. Here, as we are concerned with convergence to zero rather than the stability of an equilibrium, the uniqueness of the equilibrium is put aside.

The next section contains the statement and proof of our result as well as a crucial lemma we will be using.

2. Exponential Convergence

In this section it is proved that solutions converge to zero exponentially when the activation functions are (or are bounded by) continuous, nondecreasing, and possibly unbounded functions. To this end we need a lemma due to Bainov and Simeonov [3].

Let I = (0, ∞) and let g_1, g_2 : I → I be continuous. We write g_1 ∝ g_2 if g_2/g_1 is nondecreasing in I.

Lemma 1. Let a(t) be a positive continuous function in J := [α, β), let k_i(t, s), i = 1, …, n, be nonnegative continuous functions for α ≤ s ≤ t < β which are nondecreasing in t for any fixed s, let g_i(u), i = 1, …, n, be nondecreasing continuous functions in [0, ∞), with g_i(u) > 0 for u > 0, and let u(t) be a nonnegative continuous function in J. If g_1 ∝ g_2 ∝ ⋯ ∝ g_n in (0, ∞), then the inequality

\[
u(t) \le a(t) + \sum_{i=1}^{n} \int_{\alpha}^{t} k_i(t,s)\, g_i(u(s))\, ds, \qquad t \in J,
\]

implies that

\[
u(t) \le \omega_n(t), \qquad \alpha \le t < \beta_0,
\]

where

\[
\omega_0(t) := \sup_{\alpha \le s \le t} a(s), \qquad
\omega_i(t) := G_i^{-1}\Big[ G_i(\omega_{i-1}(t)) + \int_{\alpha}^{t} k_i(t,s)\, ds \Big], \quad i = 1, \dots, n,
\]

\[
G_i(u) := \int_{u_i}^{u} \frac{ds}{g_i(s)}, \qquad u > 0 \ (u_i > 0),
\]

and β_0 is chosen so that the functions ω_i(t), i = 1, …, n, are defined for α ≤ t < β_0.

For the statement of our theorem we will need the following notation:

Theorem 2. Assume that the f_j satisfy |f_j(x)| ≤ ρ_j(|x|) for some continuous nondecreasing (and possibly unbounded) functions ρ_j in [0, ∞), with ρ_j(u) > 0 for u > 0. Assume further that a_i(t), b_{ij}(t), and c_i(t) are continuous functions. If ρ_1 ∝ ρ_2 ∝ ⋯ ∝ ρ_n in (0, ∞), then there exists β_0 > 0 such that the solutions satisfy an exponential estimate on [0, β_0), built from the quantities defined in (7)–(9).

Proof. From (1) and our assumption on the f_j we see that the states satisfy a differential inequality, where D^+ denotes the right Dini derivative. In virtue of (13) we derive an integral inequality of the type treated in Lemma 1 (see [28]). Applying Lemma 1 we obtain the existence of β_0 such that the claimed estimate holds, with the quantities involved as defined in (7)–(9).

Remark 3. To have global existence we need β_0 = +∞, and this is possible under an appropriate smallness condition on the data.

Remark 4. Assuming that the bound provided by Lemma 1 grows at most polynomially, we see that the rate is exponential.

Remark 5. Note here that our assumptions in the previous remarks involve a smallness condition on the initial data.

3. Applications

Using Kirchhoff's law, Hopfield demonstrated that electrical circuits could behave as a small Neural Network. His original system has the form

\[
C_i \frac{du_i}{dt} = -\frac{u_i}{R_i} + \sum_{j=1}^{n} T_{ij}\, g_j(u_j) + I_i, \qquad i = 1, \dots, n,
\]

where C_i: Capacity, R_i: Resistance, I_i: Bias (external action on the neuron), u_i: Input (voltage) of the i-th neuron, g_j(u_j): Output of the j-th neuron, T_{ij}: The coupling constants of the i-th neuron with the j-th neuron, and g_j: Activation functions.

The T_{ij} are called the elements of the weight matrix or connection matrix. This matrix describes the strength of the connections between neurons. The expression ∑_j T_{ij} g_j(u_j) is sometimes called the feedback factor.
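
Dividing the circuit equation by C_i (assuming the form written above) recasts it in the shape of system (1) with constant coefficients:

\[
\frac{du_i}{dt} = -\frac{1}{R_i C_i}\, u_i + \sum_{j=1}^{n} \frac{T_{ij}}{C_i}\, g_j(u_j) + \frac{I_i}{C_i},
\qquad a_i = \frac{1}{R_i C_i}, \quad b_{ij} = \frac{T_{ij}}{C_i}, \quad c_i = \frac{I_i}{C_i}.
\]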

The functions g_j are nonlinear functions characterizing the response of the j-th neuron to changes in its state. Typical activation functions are the “Step function”, the “Sign function”, the “Gaussian” function, the “Hyperbolic function”, and the “Exponential type function”. However, it has been established that many other activation functions arising in practice are not of these forms. Therefore there is a need to enlarge these classes of functions to more general ones.
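
Illustrative versions of these commonly cited activation functions; the slopes and thresholds are arbitrary demonstration choices:

import numpy as np

def step(x, threshold=0.0):          # "Step function"
    return np.where(x >= threshold, 1.0, 0.0)

def sign_fn(x):                      # "Sign function"
    return np.sign(x)

def gaussian(x, sigma=1.0):          # "Gaussian" function
    return np.exp(-(x / sigma) ** 2)

def hyperbolic(x, beta=1.0):         # "Hyperbolic function" (tanh)
    return np.tanh(beta * x)

def exponential_type(x, beta=1.0):   # "Exponential type" (logistic) function
    return 1.0 / (1.0 + np.exp(-beta * x))

x = np.linspace(-3.0, 3.0, 7)
print(step(x), hyperbolic(x), sep="\n")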

In Neural Network theory, researchers are rather interested in designing models which are globally asymptotically stable; that is, the models must have a unique equilibrium which attracts all the solutions. Of course the rate of convergence is extremely important, and it is preferable to have an exponential convergence rate. In the present work (for the case of variable coefficients) we prove that if solutions start close enough to zero, then they will be attracted by zero. Our theorem shows that solutions remain bounded by the expression built from (7)–(9) as long as t < β_0, where β_0 bounds the interval of existence of the ω_i's (see (8)). In Remark 3 we gave a sufficient condition ensuring the existence of the ω_i's for all time, that is, conditions for which β_0 = +∞. It follows then that, under these conditions, the states actually converge to zero as t goes to infinity, with an exponential rate in case the bound does not grow too fast and the inputs c_i(t) decay as t → ∞.

The example below represents a possible practical situation to which our argument applies. Again, we establish an explicit sufficient condition leading to exponential convergence to zero, provided that the initial data are small enough.

Example 6. Consider the special (but common) power-type functions ρ_j(s) = s^{p_j} with p_j ≥ 1. The order ρ_1 ∝ ⋯ ∝ ρ_n then means p_1 ≤ p_2 ≤ ⋯ ≤ p_n. Clearly, in this case the functions G_i and their inverses can be computed explicitly. The value β_0 will be the largest value of t for which these quantities remain defined for all i. As we are interested in the long-time behavior of solutions, it is necessary that these conditions hold for all t. Our theorem then implies that solutions are bounded by the resulting expression, which provides us with an exponential decay under some fairly reasonable assumptions.

3.1. Discrete Delays

The case where we have discrete delays in the activation functions, that is, terms of the form f_j(x_j(t − τ_j)), where the τ_j are different finite delays, can be treated similarly. We use a suitable functional to get rid of the delayed terms and replace them by terms without delays.
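
A minimal sketch of this delayed variant, assuming terms of the form f_j(x_j(t − τ_j)) with a constant history; the delays, coefficients, and initial data are illustrative:

import numpy as np

# Forward Euler with a history buffer for x_i'(t) = -a_i x_i(t) + sum_j b_ij f_j(x_j(t - tau_j)) + c_i.
n, dt, T = 2, 1e-3, 10.0
tau = np.array([0.5, 1.0])             # discrete delays tau_j
lag = (tau / dt).astype(int)           # delays measured in time steps
a = np.array([2.0, 3.0])
b = 0.1 * np.array([[0.0, 1.0], [1.0, 0.0]])
c = np.zeros(n)
f = lambda x: np.sign(x) * np.abs(x) ** 1.5

steps = int(T / dt)
hist = np.zeros((steps + 1, n))
hist[0] = [0.05, -0.05]                # small initial data; constant history for t <= 0
for k in range(steps):
    delayed = np.array([hist[max(k - lag[j], 0), j] for j in range(n)])
    hist[k + 1] = hist[k] + dt * (-a * hist[k] + b @ f(delayed) + c)

print(hist[-1])                        # the states should have decayed towards zero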

Acknowledgment

The author is grateful for the financial support and the facilities provided by King Fahd University of Petroleum and Minerals through Grant no. IN111052.

References

  1. J. J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities,” Proceedings of the National Academy of Sciences, vol. 79, pp. 2554–2558, 1982.
  2. J. J. Hopfield and D. W. Tank, “Computing with neural circuits: a model,” Science, vol. 233, pp. 625–633, 1986.
  3. D. Bainov and P. S. Simeonov, Integral Inequalities and Applications, Mathematics and Its Applications (East European Series), Kluwer Academic Publishers, London, UK, 1992.
  4. A. Bouzerdoum and T. R. Pattison, “Neural network for quadratic optimization with bound constraints,” IEEE Transactions on Neural Networks, vol. 4, no. 2, pp. 293–304, 1993.
  5. L. O. Chua and T. Roska, “Stability of a class of nonreciprocal cellular neural networks,” IEEE Transactions on Circuits and Systems, vol. 37, no. 12, pp. 1520–1527, 1990.
  6. B. Crespi, “Storage capacity of non-monotonic neurons,” Neural Networks, vol. 12, no. 10, pp. 1377–1389, 1999.
  7. M. Forti, P. Nistri, and D. Papini, “Global exponential stability and global convergence in finite time of delayed neural networks with infinite gain,” IEEE Transactions on Neural Networks, vol. 16, no. 6, pp. 1449–1463, 2005.
  8. M. Forti and A. Tesi, “New conditions for global stability of neural networks with application to linear and quadratic programming problems,” IEEE Transactions on Circuits and Systems I, vol. 42, no. 7, pp. 354–366, 1995.
  9. L. Huang, J. Wang, and X. Zhou, “Existence and global asymptotic stability of periodic solutions for Hopfield neural networks with discontinuous activations,” Nonlinear Analysis: Real World Applications, vol. 10, no. 3, pp. 1651–1661, 2009.
  10. J. I. Inoue, “Retrieval phase diagrams of non-monotonic Hopfield networks,” Journal of Physics A, vol. 29, pp. 4815–4826, 1996.
  11. M. P. Kennedy and L. O. Chua, “Neural networks for non-linear programming,” IEEE Transactions on Circuits and Systems, vol. 35, pp. 554–562, 1988.
  12. S. Mohamad, “Exponential stability in Hopfield-type neural networks with impulses,” Chaos, Solitons & Fractals, vol. 32, pp. 456–467, 2007.
  13. H. Qiao, J. G. Peng, and Z. Xu, “Nonlinear measures: a new approach to exponential stability analysis for Hopfield-type neural networks,” IEEE Transactions on Neural Networks, vol. 12, pp. 360–370, 2001.
  14. Q. Song, “Novel criteria for global exponential periodicity and stability of recurrent neural networks with time-varying delays,” Chaos, Solitons & Fractals, vol. 36, no. 3, pp. 720–728, 2008.
  15. S. I. Sudharsanan and M. K. Sundareshan, “Exponential stability and a systematic synthesis of a neural network for quadratic minimization,” Neural Networks, vol. 4, no. 5, pp. 599–613, 1991.
  16. P. Van Den Driessche and X. Zou, “Global attractivity in delayed Hopfield neural network models,” SIAM Journal on Applied Mathematics, vol. 58, no. 6, pp. 1878–1890, 1998.
  17. S. Y. Xu, J. Lam, D. W. C. Ho, and Y. Zou, “Global robust exponential stability analysis for interval recurrent neural networks,” Physics Letters A, vol. 325, pp. 124–133, 2004.
  18. H. F. Yanai and S. I. Amari, “Auto-associative memory with two-stage dynamics of nonmonotonic neurons,” IEEE Transactions on Neural Networks, vol. 7, no. 4, pp. 803–815, 1996.
  19. E. H. Yang, “Perturbations of nonlinear systems of ordinary differential equations,” Journal of Mathematical Analysis and Applications, vol. 103, pp. 1–15, 1984.
  20. H. Y. Zhao, “Global stability of neural networks with distributed delays,” Physical Review E, vol. 68, Article ID 051909, pp. 1–7, 2003.
  21. B. Kosko, Neural Network and Fuzzy System—A Dynamical System Approach to Machine Intelligence, Prentice-Hall of India, New Delhi, India, 1991.
  22. C. Feng and R. Plamondon, “On the stability analysis of delayed neural networks systems,” Neural Networks, vol. 14, no. 9, pp. 1181–1188, 2001.
  23. H. Zhao, “Global asymptotic stability of Hopfield neural networks involving distributed delays,” Neural Networks, vol. 17, pp. 47–53, 2004.
  24. J. Zhou, S. Y. Li, and Z. G. Yang, “Global exponential stability of Hopfield neural networks with distributed delays,” Applied Mathematical Modelling, vol. 33, pp. 1513–1520, 2009.
  25. H. Wu, “Global exponential stability of Hopfield neural networks with delays and inverse Lipschitz neuron activations,” Nonlinear Analysis: Real World Applications, vol. 10, pp. 2297–2306, 2009.
  26. H. Wu, F. Tao, L. Qin, R. Shi, and L. He, “Robust exponential stability for interval neural networks with delays and non-Lipschitz activation functions,” Nonlinear Dynamics, vol. 66, pp. 479–487, 2011.
  27. H. Wu and X. Xue, “Stability analysis for neural networks with inverse Lipschizian neuron activations and impulses,” Applied Mathematical Modelling, vol. 32, pp. 2347–2359, 2008.
  28. V. Lakshmikantham and S. Leela, Differential and Integral Inequalities: Theory and Applications, vol. 55 of Mathematics in Science and Engineering, R. Bellman, Ed., Academic Press, London, UK, 1969.