Abstract and Applied Analysis

Volume 2014, Article ID 901519, 20 pages

http://dx.doi.org/10.1155/2014/901519
Research Article

Multistability and Instability of Competitive Neural Networks with Mexican-Hat-Type Activation Functions

1Department of Mathematics and Research Center for Complex Systems and Network Sciences, Southeast University, Nanjing 210096, China

2School of Automation, Southeast University, Nanjing 210096, China

3Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia

Received 2 January 2014; Accepted 9 April 2014; Published 6 May 2014

Academic Editor: Weinian Zhang

Copyright © 2014 Xiaobing Nie et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We investigate the existence and dynamical behaviors of multiple equilibria for competitive neural networks with a class of general Mexican-hat-type activation functions. Mexican-hat-type activation functions are not monotonically increasing, so the structure of neural networks with such activation functions differs fundamentally from that of networks with sigmoidal activation functions or nondecreasing saturated activation functions, which have been employed extensively in previous multistability papers. By tracking the dynamics of each state component and applying a fixed point theorem together with analytical methods, sufficient conditions are presented for multistability and instability, covering the total number of equilibria, their locations, and their local stability or instability. The obtained results extend and improve very recent work. Two illustrative examples with simulations are given to verify the theoretical analysis.

1. Introduction

In the past decades, several well-known neural network models, including Hopfield neural networks, cellular neural networks, Cohen-Grossberg neural networks, and bidirectional associative memory neural networks, have been proposed in order to solve practical problems. It should be mentioned that in these models only the neuron activity is taken into consideration; that is, there is only one type of variable, the state variables of the neurons. However, in a dynamical network the synaptic weights also vary with time due to the learning process, and this variation of the connection weights may influence the dynamics of the network. Competitive neural networks (CNNs) constitute an important class of neural networks which model the dynamics of cortical cognitive maps with unsupervised synaptic modifications. In this model there are two types of state variables: those of the short-term memory (STM), describing the fast neural activity, and those of the long-term memory (LTM), describing the slow unsupervised synaptic modifications. The CNNs are written as a coupled STM/LTM system whose quantities are the neuron current activity levels, the neuron outputs, the synaptic efficiencies, the constant external stimulus, the connection weights between the ith neuron and the jth neuron, the strengths of the external stimulus, the constant inputs, and two disposable scaling constants.
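For concreteness, a standard STM/LTM formulation of such competitive networks, consistent with the verbal description above, is sketched below; the symbols x_i(t) (activity level), f_j (output), m_{ik}(t) (synaptic efficiency), y_k (external stimulus), a_{ij} (connection weights), B_i (stimulus strength), I_i (constant input), and \varepsilon_1, \varepsilon_2 (scaling constants) are an assumed notation and need not coincide with the paper's original symbols.

STM: \varepsilon_1 \dot{x}_i(t) = -x_i(t) + \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + B_i \sum_{k=1}^{p} m_{ik}(t)\, y_k + I_i,
LTM: \varepsilon_2 \dot{m}_{ik}(t) = -m_{ik}(t) + y_k f_i(x_i(t)), \qquad i = 1, \dots, n, \quad k = 1, \dots, p.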

After introducing, for each neuron, the summed synaptic variable obtained by projecting the LTM variables onto the stimulus vector, and assuming the input stimulus to be normalized with unit magnitude, the above networks are simplified to system (2).
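As a numerical illustration of the reduced system (2) under the assumed notation above, suppose the reduced equations take the form \varepsilon \dot{x}_i = -x_i + \sum_j a_{ij} f_j(x_j) + B_i s_i + I_i and \dot{s}_i = -s_i + f_i(x_i), where s_i(t) denotes the summed synaptic variable and the LTM scaling constant is taken as 1. The following minimal Python sketch integrates one trajectory; the network size, weights, stimulus strengths, inputs, scaling constant, and piecewise-linear activation are illustrative placeholders, not the parameters analyzed in this paper.

import numpy as np
from scipy.integrate import solve_ivp

def f(u):
    # Illustrative Mexican-hat-type activation: rises to 1 at u = 1, then falls,
    # saturating at -1 on both sides (assumed shape, not the paper's exact (3)).
    return np.clip(np.where(u <= 1.0, u, 2.0 - u), -1.0, 1.0)

n = 2                                    # number of neurons (illustrative)
A = np.array([[1.5, -0.3],
              [-0.3, 1.5]])              # assumed connection weights a_ij
B = np.array([0.5, 0.5])                 # assumed stimulus strengths B_i
I = np.array([0.1, -0.1])                # assumed constant inputs I_i
eps = 0.8                                # assumed STM scaling constant

def rhs(t, z):
    x, s = z[:n], z[n:]
    dx = (-x + A @ f(x) + B * s + I) / eps   # STM equation (assumed reduced form)
    ds = -s + f(x)                           # LTM equation (assumed reduced form)
    return np.concatenate([dx, ds])

z0 = np.concatenate([np.array([2.0, -2.0]), np.zeros(n)])   # arbitrary initial state
sol = solve_ivp(rhs, (0.0, 50.0), z0, max_step=0.05)
print("final state (x, s):", np.round(sol.y[:, -1], 4))

The printed final state indicates the intervals in which the components eventually settle, which is the kind of behavior made rigorous in Section 2.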

The qualitative analysis of neural dynamics plays an important role in the design of practical neural networks. To solve optimization and signal processing problems, neural networks have to be designed in such a way that, for a given external input, they exhibit only one globally stable state (i.e., monostability); this issue has been treated in [1–7]. On the other hand, if neural networks are used for associative memories, the coexistence of multiple locally stable equilibria or periodic orbits is required (i.e., multistability or multiperiodicity), since the addressable memories or patterns are stored as stable equilibria or stable periodic orbits. In monostability analysis, the objective is to derive conditions guaranteeing that the network has only one steady state and that all trajectories converge to it, whereas in multistability analysis the network is allowed to have multiple equilibria or periodic orbits (stable or unstable). In general, the usual global stability conditions are not adequately applicable to multistable networks.

Recently, the multistability and multiperiodicity of neural networks have attracted the attention of many researchers. In [8, 9], based on a decomposition of the state space, the authors investigated the multistability of delayed Hopfield neural networks and showed that n-neuron networks can have 2^n stable orbits located in 2^n subsets of R^n. Cao et al. [10] extended this method to Cohen-Grossberg neural networks with nondecreasing saturated activation functions with two corner points. In [11, 12], the multistability of almost-periodic solutions in delayed neural networks was studied. Kaslik and Sivasundaram [13, 14] first revealed the effect of impulses on the multistability of neural networks. In [15–17], high-order synaptic connectivity was introduced into neural networks, and multistability and multiperiodicity were considered for high-order neural networks based on decomposition of state space, the Cauchy convergence principle, and inequality techniques. In [18–22], the authors showed that, under some conditions, there exist 3^n equilibria for n-neuron neural networks, 2^n of which are locally exponentially stable. In [23], Hopfield neural networks with nondecreasing piecewise linear activation functions with 2r corner points were considered; it was proved that, under some conditions, the n-neuron networks can have and only have (2r+1)^n equilibria, (r+1)^n of which are locally exponentially stable while the others are unstable. In [24], the multistability of neural networks with step stair activation functions was discussed based on an appropriate partition of the n-dimensional state space, and it was shown that such n-neuron networks can exhibit multiple locally exponentially stable equilibria. A particular case was previously discussed in [25]. For more references, see [26–32] and the references therein.

It is well known that the type of activation function plays a very important role in the multistability analysis of neural networks. In the aforementioned and most existing works, the activation functions employed in multistability analysis were mainly sigmoidal activation functions and nondecreasing saturated activation functions, all of which are monotonically increasing. In this paper, we consider a class of continuous Mexican-hat-type activation functions, denoted by (3) and depicted in Figure 1: each such function is nonmonotonic, increasing up to a peak and then decreasing, and saturating at constant levels outside a bounded interval, with the breakpoints, levels, and slopes specified by constants subject to natural ordering conditions. In particular, for a specific choice of these constants, the activation functions (3) reduce to the special Mexican-hat-type activation functions employed in [33], denoted by (4).

Figure 1: Mexican-hat-type activation functions (3).
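To make the shape in Figure 1 concrete, the following Python sketch parametrizes a continuous piecewise-linear Mexican-hat-type function by two plateau levels, a peak value, and three breakpoints; the default constants (plateaus at -1, peak 1, breakpoints -1, 1, 3) reproduce the qualitative Mexican-hat shape but are illustrative and are not claimed to be the exact constants of (3) or (4).

import numpy as np

def mexican_hat(u, left_level=-1.0, peak=1.0, right_level=-1.0,
                p=-1.0, r=1.0, q=3.0):
    # Piecewise-linear Mexican-hat-type function (illustrative parametrization):
    # constant at left_level for u <= p, rising linearly to peak at u = r,
    # falling linearly to right_level at u = q, constant afterwards.
    u = np.asarray(u, dtype=float)
    rising = left_level + (peak - left_level) * (u - p) / (r - p)
    falling = peak + (right_level - peak) * (u - r) / (q - r)
    return np.where(u <= p, left_level,
           np.where(u <= r, rising,
           np.where(u <= q, falling, right_level)))

# Sample the function on a grid to inspect its Mexican-hat shape.
grid = np.linspace(-3.0, 5.0, 9)
print(np.round(mexican_hat(grid), 2))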

It is necessary to point out that Mexican-hat-type activation functions are not monotonically increasing and are therefore totally different from sigmoidal activation functions and nondecreasing saturated activation functions. Hence, the results and methods mentioned above cannot be applied to neural networks with activation functions (3). Very recently, the multistability and instability of Hopfield neural networks with activation functions (4) were studied in [33]. Inspired by [33], in this paper we investigate the multistability and instability of CNNs with activation functions (3). It should be noted that the structure of CNNs differs from and is more complex than that of the networks in [33]. Moreover, the activation functions (3) employed in this paper are more general than the activation functions (4). More precisely, the contributions of this paper are threefold.

Firstly, we define four index subsets and present a sufficient condition under which the CNNs with activation functions (3) have multiple equilibria, by tracking the dynamics of each state component and applying a fixed point theorem. The index subsets are defined in terms of maximum and minimum values, which makes them different from, and less restrictive than, those given in [33]. Furthermore, we discuss the exact number of equilibria for the CNNs.

Secondly, using analytical methods, we analyze the dynamical behavior of each equilibrium point of the CNNs, including local stability and instability. The dynamical behaviors of such systems are much more complex than those of the Hopfield neural networks considered in [33], due to the more complex network structure and the greater generality of the activation functions.

Thirdly, by specializing the model and the activation functions to those in [33], we show that the obtained results extend and improve the recent results of [33].

Finally, two examples with their simulations are given to verify and illustrate the validity of the obtained results.

2. Main Results

Firstly, we define four index subsets of {1, 2, …, n} in terms of the maximum and minimum values of the activation-related quantities (see Remark 1).

Remark 1. In this paper, the index subsets are defined in terms of maximum and minimum values, which differs from [33], where they are defined in terms of absolute values. In general, our conditions are less restrictive, as has been shown in [16].

Remark 2. The inequality holds for all .

Proof. By the definition of index subset , and , we obtain It follows from (6) and (7) that Noting that and substituting and into (8), we can derive that .

Remark 3. The inequality holds for all .

Proof. From Remark 2, we get . Thus, inequality holds for all , due to .

By the definition of index subset and equalities , , we get which implies that . By using equalities , and noting that , the inequality can be proved easily.

It follows from the second equation of system (2) that which leads to Therefore, always implies that . That is, if , then the solution will stay in for all .
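To spell out the argument, assume the second equation of system (2) has the reduced form \dot{s}_i(t) = -s_i(t) + f_i(x_i(t)) (an assumed notation, with s_i the second state component) and that f_i takes values in a bounded interval [m_i, M_i]. The variation-of-constants formula then gives

s_i(t) = e^{-(t - t_0)} s_i(t_0) + \int_{t_0}^{t} e^{-(t - \tau)} f_i(x_i(\tau)) \, d\tau,

and, since \int_{t_0}^{t} e^{-(t - \tau)} \, d\tau = 1 - e^{-(t - t_0)},

m_i \le e^{-(t - t_0)} s_i(t_0) + \bigl(1 - e^{-(t - t_0)}\bigr) m_i \le s_i(t) \le e^{-(t - t_0)} s_i(t_0) + \bigl(1 - e^{-(t - t_0)}\bigr) M_i \le M_i

whenever s_i(t_0) \in [m_i, M_i]. Hence the interval [m_i, M_i] is positively invariant for the second state component, which is the property used above; the endpoints would then be determined by the minimum and maximum values of the activation functions (3).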

Let a solution of system (2) with a given initial state be fixed. In the following, we will discuss the dynamics of the state components for indices in each of the index subsets, respectively.

Lemma 4. All the state components with indices in the corresponding index subset will flow into the associated interval as t tends to +∞.

Proof. According to the location of the initial value, there are two cases to discuss.

Case (i). Consider . In this case, if there exists some such that , for , then it follows from system (2) and the definition of that Hence, would never get out of . Similarly, we can also conclude that once for some , then would never escape from for all .

Case (ii). Consider . In this case, we claim that would monotonically decrease until it reaches the interval in some finite time.

In fact, when , noting that the definition of and , we obtain when , by virtue of equalities and , we get when , it follows from and system (2) that

In summary, wherever the initial state is located, the corresponding state component would flow to and enter the interval and stay in this interval forever.

Lemma 5. All the state components with indices in the corresponding index subset will flow into the associated interval as t tends to +∞.

Proof. We prove it by distinguishing the following three cases, according to the location of the initial value.

Case (i). Consider . In this case, if there exists some such that , for , then we have similarly, if there exists some such that , for , then we get

From the above two inequalities, we know that if , then would never get out of this interval. In the same way, we can also obtain that if there exists some such that , then would stay in it for all .

Case (ii). Consider . When , from the definition of index subset , we get when , it follows from the definition of index subset , equalities , that Thus, in this case, would monotonically increase until it reaches the interval .

Case (iii). Consider . When , it follows that Therefore, would monotonically decrease until it enters the interval .

In summary, wherever the initial state is located, the corresponding state component would flow to and enter the interval and eventually stay in it.

Lemma 6. All the state components with indices in the corresponding index subset will flow into the associated interval as t tends to +∞.

Proof. Similar to the proof of Lemmas 4 and 5, we will prove it in the following two cases.

Case (i). Consider . If there exists some such that , for , then Therefore, would never get out of . By the same method, we also get that once for some , then would stay in this interval for all .

Case (ii). Consider . In this case, we claim that would monotonically increase until it enters the interval .

In fact, when , we have when , we get when , then we obtain

In summary, wherever the initial state is located, the corresponding state component would flow into the interval when t is big enough and stay in it forever.

Denote ; ; . For any , let or or and define

It is easy to see that there exist finitely many such product regions, their number being determined by the numbers of elements in the index sets. Now, we will prove the following theorem on the existence of multiple equilibria for system (2).
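Since the candidate regions are Cartesian products of one admissible interval per neuron, their total number is a product of per-neuron counts. The sketch below enumerates such product regions, assuming for illustration that each index is assigned either two or three admissible intervals depending on which index subset it belongs to; the interval labels and the two-versus-three split are assumptions, not the paper's exact construction.

from itertools import product

# Illustrative per-neuron interval labels: indices in 'three_piece' get three
# admissible intervals, the remaining indices get two (an assumed split).
n = 3
three_piece = {0, 2}                      # assumed subset of {0, ..., n-1}
labels = [("left", "middle", "right") if i in three_piece else ("left", "right")
          for i in range(n)]

regions = list(product(*labels))          # Cartesian product of per-neuron choices
print(len(regions), "candidate product regions")   # 3 * 2 * 3 = 18 here
for region in regions[:4]:
    print(region)

The length of the list equals the product of the per-neuron counts, which mirrors how the total number of candidate regions is obtained.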

Theorem 7. Suppose that . Then, system (2) with activation functions (3) has equilibria.

Proof. Pick one of these regions arbitrarily; we will show that there exists an equilibrium point located in each such region.

Denote , , and . It is easy to see that .

Let be any solution of system (2) with initial state . Then, for , if there exists some such that , then we get from the definition of index subset that Similarly, for , if , note that ; then and if there exists some such that , then That is, the trajectory with , would enter and stay in the interval , which implies that there does not exist any equilibria with the corresponding th state component located in .

Combining with Lemmas 46, it can be concluded that , would never escape from the corresponding interval of . Furthermore, denote Then, from (26)–(28), we can derive that if has an equilibrium point, it must be located in .

Note that any equilibrium point of system (2) is a root of the following equations: Equivalently, the above equations can be rewritten as

In , any equilibrium point of system (2) satisfies the following equations:

In the subset region , define a map as follows: For , substituting into (6) and noting that (Remark 2), we get that Similarly, substituting into (7) results in For , note that , , and ; it follows from (6)-(7) and (Remark 3) that For , note that and ; we can get from the definition of index subset that That is, Then combining inequality and (38) together gives Therefore, the map maps a bounded and closed set into itself. Applying Brouwer’s fixed point theorem, there exists one such that where represent the elements of index subset .

Then for , define By virtue of the definition of index subsets and , we have and Therefore, there exists the unique such that .

Similarly, for , define and we can also derive that there exists the unique such that .

Denote , , . It is easy to see that is the equilibrium point located in subset , which is also the equilibrium point located in subset . It should be noted that, by the definition of , any state component of cannot touch the boundary of . That is, is located in the interior of (see Remark 8). Therefore, system (2) has equilibria.
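The existence argument above is constructive in spirit: inside a chosen product region, an equilibrium can be located numerically as a root of the equilibrium equations of system (2). The sketch below does this with scipy's root finder under the same assumed reduced form and illustrative parameters used earlier; it is a numerical illustration of Theorem 7, not a replacement for the fixed-point argument.

import numpy as np
from scipy.optimize import fsolve

def f(u):
    # Illustrative Mexican-hat-type activation (assumed shape, as before).
    return np.clip(np.where(u <= 1.0, u, 2.0 - u), -1.0, 1.0)

n = 2
A = np.array([[1.5, -0.3], [-0.3, 1.5]])   # assumed connection weights
B = np.array([0.5, 0.5])                   # assumed stimulus strengths
I = np.array([0.1, -0.1])                  # assumed constant inputs

def equilibrium_residual(z):
    x, s = z[:n], z[n:]
    return np.concatenate([-x + A @ f(x) + B * s + I,   # STM right-hand side = 0
                           -s + f(x)])                   # LTM right-hand side = 0

# Start the search from the (assumed) interior of a chosen product region.
x_guess = np.array([0.8, -0.8])
guess = np.concatenate([x_guess, f(x_guess)])
root = fsolve(equilibrium_residual, guess)
print("candidate equilibrium (x, s):", np.round(root, 4))
print("residual:", np.round(equilibrium_residual(root), 6))

Starting the search from different regions would, under the theorem's hypotheses, return different equilibria, one per admissible region.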

Remark 8. is located in the interior of .

Proof. Note that satisfies the following equations: In the following, we prove that , . Otherwise, from the definition of , we have that is a contradiction. Similarly, from the definition of , we can also get That is, is located in the interior of .

Remark 9. Suppose that . Furthermore, if the following conditions hold, then system (2) with activation functions (3) can have and only have equilibria.

Proof. From the proof of Theorem 7, we only need to prove that the fixed point of is unique. In fact, suppose that there exists another fixed point