Mathematical Problems in Engineering

Research Article | Open Access

Kaihong Zhao, Yongkun Li, "Robust Stability Analysis of Fuzzy Neural Network with Delays", Mathematical Problems in Engineering, vol. 2009, Article ID 826908, 13 pages, 2009. https://doi.org/10.1155/2009/826908

Robust Stability Analysis of Fuzzy Neural Network with Delays

Academic Editor: Tamas Kalmar-Nagy
Received: 19 Apr 2009
Revised: 06 Jul 2009
Accepted: 29 Dec 2009
Published: 01 Feb 2010

Abstract

We investigate the local robust stability of fuzzy neural networks (FNNs) with time-varying and S-type distributed delays. We derive some sufficient conditions for the local robust stability of equilibrium points and estimate the attracting domains of the equilibrium points except the unstable ones. Our results not only show local robust stability of equilibrium points but also allow much broader applications to fuzzy neural networks with or without delays. An example is given to illustrate the effectiveness of our results.

1. Introduction

In the study of neural networks, two basic mathematical models are commonly adopted: local field neural network models and static neural network models. The basic local field neural network model is described by

$$\dot{x}_i(t) = -x_i(t) + \sum_{j=1}^{n} w_{ij}\, g_j(x_j(t)) + I_i, \qquad i = 1, 2, \ldots, n, \tag{1.1}$$

where $g_j$ denotes the activation function of the $j$th neuron; $x_i(t)$ is the state of the $i$th neuron; $I_i$ is the external input imposed on the $i$th neuron; $w_{ij}$ denotes the synaptic connectivity value between the $i$th neuron and the $j$th neuron; $n$ is the number of neurons in the network. With the same notations, static neural network models can be written as

$$\dot{x}_i(t) = -x_i(t) + g_i\Bigl(\sum_{j=1}^{n} w_{ij}\, x_j(t) + I_i\Bigr), \qquad i = 1, 2, \ldots, n. \tag{1.2}$$
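To make the two basic models concrete, here is a minimal numerical sketch that integrates both of them with a forward Euler step; the tanh activation, unit decay rates, the random weight matrix W, and the input vector I are illustrative assumptions rather than values taken from the paper.

    import numpy as np

    def local_field_step(x, W, I, f=np.tanh, h=0.01):
        # local field model: x_i' = -x_i + sum_j w_ij f_j(x_j) + I_i
        return x + h * (-x + W @ f(x) + I)

    def static_step(x, W, I, f=np.tanh, h=0.01):
        # static model: x_i' = -x_i + f_i(sum_j w_ij x_j + I_i)
        return x + h * (-x + f(W @ x + I))

    rng = np.random.default_rng(0)
    n = 3
    W = 0.4 * rng.standard_normal((n, n))   # synaptic connectivity values
    I = rng.standard_normal(n)              # external inputs
    x_lf = np.zeros(n)
    x_st = np.zeros(n)
    for _ in range(5000):                   # integrate toward steady state
        x_lf = local_field_step(x_lf, W, I)
        x_st = static_step(x_st, W, I)
    print("local field steady state:", x_lf)
    print("static model steady state:", x_st)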

It is well known that local field neural networks model not only Hopfield-type networks [1] but also bidirectional associative memory networks [2] and cellular neural networks [3]. Many deep theoretical results have been obtained for local field neural networks; we refer to [4–12] and the references cited therein. Meanwhile, static neural networks have a great potential for applications. They include not only the recurrent back-propagation networks [13–15] but also other extensively studied neural networks, such as the optimization-type networks introduced in [16–18] and the brain-state-in-a-box (BSB) type networks [19, 20]. In the past few years, there has been increasing interest in studying dynamical characteristics such as stability, persistence, periodicity, local robust stability of equilibrium points, and domains of attraction of local field neural networks (see [21–25]).

However, in the mathematical modeling of real-world problems, we encounter other inconveniences, for example, complexity and uncertainty or vagueness. Fuzzy theory is considered a more suitable setting for taking vagueness into consideration. Based on traditional cellular neural networks (CNNs), Yang and Yang proposed fuzzy CNNs (FCNNs) [26], which integrate fuzzy logic into the structure of traditional CNNs and maintain local connectedness among cells. Unlike previous CNN structures, FCNNs have fuzzy logic between their template input and/or output besides the sum-of-product operation. FCNNs are a very useful paradigm for image processing problems, a cornerstone of image processing and pattern recognition. Therefore, it is necessary to consider both the fuzzy logic and the delay effect on the dynamical behaviors of neural networks. Nevertheless, to the best of our knowledge, there are few published papers considering the local robust stability of equilibrium points and domains of attraction for fuzzy neural networks (FNNs).

Therefore, in this paper, we study the local robust stability of the fuzzy neural network with time-varying and S-type distributed delays

$$\dot{x}_i(t) = -a_i x_i(t) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t)) + \bigwedge_{j=1}^{n} \alpha_{ij} \int_{-\infty}^{0} f_j\bigl(x_j(t+s)\bigr)\, d\eta_{ij}(s) + \bigvee_{j=1}^{n} \beta_{ij} f_j\bigl(x_j(t-\tau_{ij}(t))\bigr) + I_i, \qquad i = 1, 2, \ldots, n, \tag{1.3}$$

where $\alpha_{ij}$ and $\beta_{ij}$ are elements of the fuzzy feedback MIN template and the fuzzy feedback MAX template, respectively; $b_{ij}$ are elements of the feedback template; $x_i(t)$ stands for the state of the $i$th neuron; $\tau_{ij}(t)$ is the transmission delay and $f_j$ is the activation function; $\bigwedge$ and $\bigvee$ denote the fuzzy AND and fuzzy OR operations, respectively; $a_i > 0$ is a parameter; $\eta_{ij}(\cdot)$, a nondecreasing bounded variation function, generates the S-type distributed delay; and $I_i$ denotes the external input. The main purpose of this paper is to investigate the local robust stability of equilibrium points of FNNs (1.3). Sufficient conditions are obtained for the local robust stability of equilibrium points. Meanwhile, the attracting domains of equilibrium points are also estimated.
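Since the displayed form of (1.3) did not survive extraction, the following sketch only illustrates how the fuzzy AND term (a componentwise minimum over the MIN template) and the fuzzy OR term (a maximum over the MAX template) enter the right-hand side of such a model; the finite kernel standing in for the S-type (Lebesgue-Stieltjes) distributed delay, the single common discrete delay d, and all parameter values are illustrative assumptions.

    import numpy as np

    def fcnn_rhs(x_now, x_hist, a, B, alpha, beta, I, d, kernel, f=np.tanh):
        """One evaluation of dx/dt for a delayed fuzzy CNN sketch.
        x_hist[k] is the state k steps in the past (x_hist[0] == x_now);
        d is a common discrete delay in steps; kernel approximates the
        S-type distributed delay by a finite weighted sum of past states."""
        x_dist = sum(w * x_hist[k] for k, w in enumerate(kernel))  # distributed-delay state
        x_del = x_hist[d]                                          # discretely delayed state
        n = len(x_now)
        dx = np.empty(n)
        for i in range(n):
            dx[i] = (-a[i] * x_now[i]
                     + B[i] @ f(x_now)               # feedback template
                     + np.min(alpha[i] * f(x_dist))  # fuzzy AND over the MIN template
                     + np.max(beta[i] * f(x_del))    # fuzzy OR over the MAX template
                     + I[i])
        return dx

    # toy evaluation with arbitrary two-neuron data
    x_hist = [np.zeros(2) for _ in range(6)]
    dx = fcnn_rhs(x_hist[0], x_hist, a=[1.0, 1.0],
                  B=np.eye(2) * 0.1, alpha=np.full((2, 2), 0.1),
                  beta=np.full((2, 2), 0.1), I=np.array([0.2, -0.1]),
                  d=5, kernel=[0.5, 0.3, 0.2])
    print(dx)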

Throughout this paper, we always assume the following:

(H1) , and are bounded in .

(H2) are nondecreasing bounded variation functions on with , and is a Lebesgue-Stieltjes integral; is a constant vector which denotes an external input.

(H3) are second-order differentiable, bounded, and Lipschitz continuous; there exist positive constants and such that and for any .

(H4) The activation functions are bounded and Lipschitz continuous; that is, there are some numbers and such that and for any .

(H5) The functions are nonnegative, bounded, and continuously differentiable on , and .

The rest of this paper is organized as follows. In Section 2, we will give some basic definitions and basic results about the attracting domains of FNNs (1.3). In Section 3, we discuss the local robust stability of equilibrium points of FNNs (1.3). In Section 4, an example is given to illustrate the effectiveness of our results. Finally, we make a conclusion in Section 5.

2. Preliminaries

As usual, we denote by the set of all real-valued continuous mappings from to equipped with the supremum norm defined by

where Denote by the solution of FNNs (1.3) with initial condition

Definition 2.1. A vector is said to be an equilibrium point of FNNs (1.3) if for each , one has where Denote by the set of all equilibrium points of FNNs (1.3).
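At an equilibrium point every delayed argument coincides with the constant state, so (assuming the reconstructed form of (1.3) and a normalized delay kernel) Definition 2.1 reduces to an algebraic system that can be solved numerically; the two-neuron parameters below are illustrative assumptions, not the paper's data.

    import numpy as np
    from scipy.optimize import fsolve

    def equilibrium_residual(x, a, B, alpha, beta, I, f=np.tanh):
        # residual of 0 = -a_i x_i + sum_j b_ij f_j(x_j)
        #                 + min_j(alpha_ij f_j(x_j)) + max_j(beta_ij f_j(x_j)) + I_i
        return np.array([-a[i] * x[i] + B[i] @ f(x)
                         + np.min(alpha[i] * f(x))
                         + np.max(beta[i] * f(x)) + I[i]
                         for i in range(len(x))])

    a = np.array([1.0, 1.0])
    B = np.array([[0.2, -0.1], [0.1, 0.3]])
    alpha = np.full((2, 2), 0.1)
    beta = np.full((2, 2), 0.1)
    I = np.array([0.5, -0.3])
    x_star = fsolve(equilibrium_residual, np.zeros(2), args=(a, B, alpha, beta, I))
    print("equilibrium estimate:", x_star)
    print("residual:", equilibrium_residual(x_star, a, B, alpha, beta, I))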

Definition 2.2. Let is said to be a locally robust attractive equilibrium point if for any given , there is a neighborhood such that implies that Otherwise, is said not to be a locally robust attractive equilibrium point. Denote by the set of all not locally robust attractive equilibrium points of FNNs (1.3).

Definition 2.3. Let be subsets of and let be a solution of FNNs (1.3) with (i) For any given if for some implies that for all then D is said to be an attracting domain of FNNs (1.3). (ii) For any given if for all implies that converges to then is said to be an attracting domain of Correspondingly, the union of all attracting domains of equilibrium points of is said to be an attracting domain of

For a class of differential equation with the term of fuzzy AND and fuzzy OR operation, there is the following useful inequality.

Lemma 2.4 ([26]). Let $x = (x_1, \ldots, x_n)$ and $y = (y_1, \ldots, y_n)$ be two states of (1.3); then one has

$$\Bigl|\bigwedge_{j=1}^{n} \alpha_{ij} f_j(x_j) - \bigwedge_{j=1}^{n} \alpha_{ij} f_j(y_j)\Bigr| \le \sum_{j=1}^{n} |\alpha_{ij}|\,\bigl|f_j(x_j) - f_j(y_j)\bigr|, \qquad \Bigl|\bigvee_{j=1}^{n} \beta_{ij} f_j(x_j) - \bigvee_{j=1}^{n} \beta_{ij} f_j(y_j)\Bigr| \le \sum_{j=1}^{n} |\beta_{ij}|\,\bigl|f_j(x_j) - f_j(y_j)\bigr|.$$
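The displayed inequality was lost in extraction; in the standard form of this lemma from [26], the change of the fuzzy MIN and MAX terms is bounded by a weighted sum of componentwise differences. The randomized check below is a sanity test of that assumed form, not a proof.

    import numpy as np

    # check |min_j(a_j u_j) - min_j(a_j v_j)| <= sum_j |a_j| |u_j - v_j|
    # and the analogous bound for max, on random data
    rng = np.random.default_rng(1)
    for _ in range(10000):
        n = rng.integers(1, 6)
        alpha = rng.standard_normal(n)
        u, v = rng.standard_normal(n), rng.standard_normal(n)
        bound = np.sum(np.abs(alpha) * np.abs(u - v))
        assert abs(np.min(alpha * u) - np.min(alpha * v)) <= bound + 1e-12
        assert abs(np.max(alpha * u) - np.max(alpha * v)) <= bound + 1e-12
    print("inequality held on all random samples")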

Lemma 2.5. Let be any solution of FNNs (1.3). Then is uniformly bounded. Moreover, is an attracting domain of FNNs (1.3), where

Proof. By (1.3) and Lemma 2.4, we have where By using a differential inequality, we have for , which leads to the uniform boundedness of Furthermore, given any we get for all . Hence is an attracting domain of FNNs (1.3). The proof is complete.
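The displayed estimates of this proof were lost in extraction; a plausible reconstruction of the standard differential-inequality step it invokes, with $M_i$ denoting an upper bound for all right-hand-side terms of (1.3) other than $-a_i x_i(t)$ under the boundedness assumptions above, is

$$D^{+}|x_i(t)| \le -a_i |x_i(t)| + M_i \quad\Longrightarrow\quad |x_i(t)| \le |x_i(0)|\,e^{-a_i t} + \frac{M_i}{a_i}\bigl(1 - e^{-a_i t}\bigr),$$

so that $\limsup_{t\to\infty}|x_i(t)| \le M_i/a_i$ for each $i$, which gives both the uniform boundedness of solutions and a candidate attracting set of the form $\{x \in \mathbb{R}^n : |x_i| \le M_i/a_i,\ i = 1, \ldots, n\}$.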

By Lemma 2.4, we have the following theorem.

Theorem 2.6. All equilibrium points of FNNs (1.3) lie in the attracting domain that is,

3. Local Robust Stability of Equilibrium Points

In this section, we investigate the local robust stability of equilibrium points of FNNs (1.3). We derive some sufficient conditions that guarantee the local robust stability of equilibrium points in and estimate the attracting domains of these equilibrium points.

Theorem 3.1. Let If there exist positive constants such that for each where then one has the following. (1) that is, is locally robust stable. (2) Let Then every solution of FNNs (1.3) with satisfies where (3) The open set is an attracting domain of is an attracting domain of

The proof of Theorem 3.1 relies on the following lemma.

Lemma 3.2. Let satisfy (3.1). Let be an arbitrary solution of FNNs (1.3) other than , where Let where is given by (3.1). Then one has the following. (1) If for some then (2) If and for some then (3) If then for all

Proof. Under transformation we get that due to where lies between and From (3.7), we can derive that As we have for each which imply that
Since and we have
Since from , we know that We assert that holds. Otherwise, there exist such that and for all This implies that is strictly monotonically decreasing on the interval It is obvious that By using we get that From This leads to a contradiction. Hence for all

Now we are in a position to complete the proof of Theorem 3.1.

Proof. Let be an arbitrary solution of FNNs (1.3) other than and satisfy It follows from that for all that is, for all Together with we get for all Take It is obvious that for each From (3.9) we have By integrating both sides of the above inequality from 0 to , we have It follows that Note that is bounded on by Lemma 2.4; it follows from FNNs (1.3) that is bounded on Hence is uniformly continuous on From Lemma 2.5, we get that So the assertions of (1) and (2) hold. Let us consider an arbitrary solution of FNNs (1.3) satisfying for all and some Then it is obvious that From (2), we get Hence is an attracting domain of Consequently, the open set is an attracting domain of The proof is complete.

Corollary 3.3. Let If there exist positive constants such that for each where then one has the following. (1) that is, is locally asymptotically stable. (2) Let Then every solution of FNNs (1.3) with satisfies where (3) The open set is an attracting domain of is an attracting domain of

4. Illustrative Example

For the purpose of illustration, we only consider a simple fuzzy neural network with time-varying and S-type distributed delays satisfying

Then the fuzzy neural network with two neurons can be modeled by

Take

It is easy to check that (H1)–(H5) hold and for We can check that

From simple calculations, we know that is an attracting domain of FNNs (4.2). All equilibrium points of FNNs (4.2) lie in From some calculations, we have two equilibrium points For equilibrium we have and Taking we get

Similarly, we can check that (3.1) holds for Therefore, from Theorem 3.1, the four equilibrium points are locally robust stable and their convergence radius is 0.04.
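The parameter values of this example were lost in extraction, so the simulation below uses a hypothetical two-neuron system of the same fuzzy form (with all delayed arguments evaluated at the current state for simplicity); it is only meant to visualize the kind of multistability asserted here: different initial conditions settle to different equilibrium points.

    import numpy as np

    def step(x, h=0.01):
        a = np.array([1.0, 1.0])
        B = np.array([[2.0, 0.1], [0.1, 2.0]])      # feedback template (strong self-feedback)
        alpha = np.full((2, 2), 0.05)               # fuzzy MIN template
        beta = np.full((2, 2), 0.05)                # fuzzy MAX template
        I = np.zeros(2)
        fx = np.tanh(x)
        dx = np.array([-a[i] * x[i] + B[i] @ fx
                       + np.min(alpha[i] * fx) + np.max(beta[i] * fx) + I[i]
                       for i in range(2)])
        return x + h * dx

    for x0 in ([1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]):
        x = np.array(x0)
        for _ in range(20000):                      # Euler integration up to t = 200
            x = step(x)
        print("initial state", x0, "-> approximate equilibrium", x)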

Remark 4.1. The above example shows that the system may possess multiple equilibrium points under the assumption of monotone nondecreasing activation functions, so its trajectories do not converge globally to a unique equilibrium point.

5. Conclusions

In this paper, we derive some sufficient conditions for the local robust stability of fuzzy neural networks with time-varying and S-type distributed delays and give an estimate of the attracting domains of the equilibrium points except the unstable ones. Our results not only show local robust stability of equilibrium points but also allow much broader applications to fuzzy neural networks with or without delays. An example is given to show the effectiveness of our results.

Acknowledgment

This work is supported by the National Natural Science Foundation of China under Grant 10971183.

References

1. J. J. Hopfield, “Neurons with graded response have collective computational properties like those of two-state neurons,” Proceedings of the National Academy of Sciences of the United States of America, vol. 81, no. 10, pp. 3088–3092, 1984.
2. B. Kosko, “Bidirectional associative memories,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 18, no. 1, pp. 49–60, 1988.
3. L. O. Chua and L. Yang, “Cellular neural networks: theory,” IEEE Transactions on Circuits and Systems, vol. 35, no. 10, pp. 1257–1272, 1988.
4. C.-Y. Cheng, K.-H. Lin, and C.-W. Shih, “Multistability in recurrent neural networks,” SIAM Journal on Applied Mathematics, vol. 66, no. 4, pp. 1301–1320, 2006.
5. X. F. Yang, X. F. Liao, Y. Tang, and D. J. Evans, “Guaranteed attractivity of equilibrium points in a class of delayed neural networks,” International Journal of Bifurcation and Chaos in Applied Sciences and Engineering, vol. 16, no. 9, pp. 2737–2743, 2006.
6. Y. M. Chen, “Global stability of neural networks with distributed delays,” Neural Networks, vol. 15, no. 7, pp. 867–871, 2002.
7. H. Zhao, “Global asymptotic stability of Hopfield neural network involving distributed delays,” Neural Networks, vol. 17, no. 1, pp. 47–53, 2004.
8. J. Cao and J. Wang, “Global asymptotic and robust stability of recurrent neural networks with time delays,” IEEE Transactions on Circuits and Systems I, vol. 52, no. 2, pp. 417–426, 2005.
9. S. Mohamad, “Global exponential stability in DCNNs with distributed delays and unbounded activations,” Journal of Computational and Applied Mathematics, vol. 205, no. 1, pp. 161–173, 2007.
10. S. Mohamad, K. Gopalsamy, and H. Akça, “Exponential stability of artificial neural networks with distributed delays and large impulses,” Nonlinear Analysis: Real World Applications, vol. 9, no. 3, pp. 872–888, 2008.
11. Z. K. Huang, Y. H. Xia, and X. H. Wang, “The existence and exponential attractivity of κ-almost periodic sequence solution of discrete time neural networks,” Nonlinear Dynamics, vol. 50, no. 1-2, pp. 13–26, 2007.
12. Z. K. Huang, X. H. Wang, and F. Gao, “The existence and global attractivity of almost periodic sequence solution of discrete-time neural networks,” Physics Letters A, vol. 350, no. 3-4, pp. 182–191, 2006.
13. L. B. Almeida, “Backpropagation in perceptrons with feedback,” in Neural Computers, pp. 199–208, Springer, New York, NY, USA, 1988.
14. F. J. Pineda, “Generalization of back-propagation to recurrent neural networks,” Physical Review Letters, vol. 59, no. 19, pp. 2229–2232, 1987.
15. R. Rohwer and B. Forrest, “Training time-dependence in neural networks,” in Proceedings of the 1st IEEE International Conference on Neural Networks, pp. 701–708, San Diego, Calif, USA, 1987.
16. M. Forti and A. Tesi, “New conditions for global stability of neural networks with application to linear and quadratic programming problems,” IEEE Transactions on Circuits and Systems I, vol. 42, no. 7, pp. 354–366, 1995.
17. Y. S. Xia and J. Wang, “A general methodology for designing globally convergent optimization neural networks,” IEEE Transactions on Neural Networks, vol. 9, no. 6, pp. 1331–1343, 1998.
18. Y. S. Xia and J. Wang, “On the stability of globally projected dynamical systems,” Journal of Optimization Theory and Applications, vol. 106, no. 1, pp. 129–150, 2000.
19. J.-H. Li, A. N. Michel, and W. Porod, “Analysis and synthesis of a class of neural networks: linear systems operating on a closed hypercube,” IEEE Transactions on Circuits and Systems, vol. 36, no. 11, pp. 1405–1422, 1989.
20. I. Varga, G. Elek, and S. H. Zak, “On the brain-state-in-a-convex-domain neural models,” Neural Networks, vol. 9, no. 7, pp. 1173–1184, 1996.
21. H. Qiao, J. Peng, Z.-B. Xu, and B. Zhang, “A reference model approach to stability analysis of neural networks,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 33, no. 6, pp. 925–936, 2003.
22. Z.-B. Xu, H. Qiao, J. Peng, and B. Zhang, “A comparative study of two modeling approaches in neural networks,” Neural Networks, vol. 17, no. 1, pp. 73–85, 2004.
23. M. Wang and L. Wang, “Global asymptotic robust stability of static neural network models with S-type distributed delays,” Mathematical and Computer Modelling, vol. 44, no. 1-2, pp. 218–222, 2006.
24. P. Li and J. D. Cao, “Stability in static delayed neural networks: a nonlinear measure approach,” Neurocomputing, vol. 69, no. 13–15, pp. 1776–1781, 2006.
25. Z. K. Huang and Y. H. Xia, “Exponential p-stability of second order Cohen-Grossberg neural networks with transmission delays and learning behavior,” Simulation Modelling Practice and Theory, vol. 15, no. 6, pp. 622–634, 2007.
26. T. Yang and L.-B. Yang, “The global stability of fuzzy cellular neural network,” IEEE Transactions on Circuits and Systems I, vol. 43, no. 10, pp. 880–883, 1996.

Copyright © 2009 Kaihong Zhao and Yongkun Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

