Abstract

We investigate associative memories for memristive neural networks with deviating argument. First, the existence and uniqueness of the solution for memristive neural networks with deviating argument are discussed. Next, some sufficient conditions for this class of neural networks to possess invariant manifolds are obtained. In addition, a global exponential stability criterion is presented. Then, the analysis and design of autoassociative memories and heteroassociative memories for memristive neural networks with deviating argument are formulated. Finally, several numerical examples are given to demonstrate the effectiveness of the obtained results.

1. Introduction

In recent decades, analysis and design of neurodynamic systems have received much attention [1–16]. In particular, the neurodynamics of associative memories is an active research topic [9–16]. Associative memories are brain-inspired computing schemes designed to store a set of prototype patterns such that the stored patterns can be retrieved from recalling probes containing sufficient information about the contents of the patterns. In associative memories, for any given probe (e.g., a noisy or corrupted version of a prototype pattern), the retrieval dynamics converge to an ideal equilibrium point representing the prototype pattern. At present, two main paradigms are used to describe the dynamical behaviors of recalling processes: autoassociative memories and heteroassociative memories [12, 13]. Two types of methods are usually used for the analysis and synthesis of associative memories. In the first, the neurodynamic system is multistable: system states may converge to locally stable equilibrium points, and these equilibrium points are encoded as the memorized patterns. In the second, the neurodynamic system is globally monostable, and the memorized patterns are associated with the external input patterns.

As we know, the human brain is believed to be the most powerful information processor because of its structure of synapses. Accordingly, there is a high demand for a single device that can emulate artificial synaptic functions. Using a memristor, essential synaptic plasticity and learning behaviors can be mimicked. By integrating memristors into a large-scale neuromorphic circuit, neuromorphic circuits based on memristive neural networks demonstrate spike-timing-dependent plasticity (a form of the Hebbian learning rule) [17–21]. From the viewpoint of theory and experiment, a memristive neural network model is a state-dependent switching system. Moreover, this dynamical system shows many of the characteristic features of magnetic tunnel junctions [19, 20]. For these reasons, system analysis and integration for memristive neural networks are extremely challenging. At present, there are two major methods for the qualitative analysis of memristive neural networks. One is the differential inclusions approach [17–20], and the other is the fuzzy uncertainty approach [21]. The core idea of the differential inclusions approach is to integrate a switched network cluster; the central idea of the fuzzy uncertainty approach is to divide multijump network flows into a continuous subsystem and a bounded subsystem. These two analysis frameworks differ in type and level rather than in merit. From the perspective of cybernetics, however, many reported analytical techniques are breakthroughs in presentation more than in substance. Developing more effective analysis methods for memristive neural networks is therefore worth studying.

For the past few years, hybrid dynamic systems have remained one of the most active fields of research in the control community [22–30]. For instance, to describe the stationary distribution of temperature along the length of a wire that is bent, a nonlinear dynamic model with deviating argument is often used. The right-hand side of a nonlinear system with deviating argument combines continuous and discrete dynamics; thus, nonlinear systems with deviating argument unify advanced and retarded systems. Because of this property, related works on control strategies for nonlinear systems with deviating argument are relatively rare. Viewed as systems involving this interplay, nonlinear systems with deviating argument include both differential equations and difference equations [28–30]. However, many basic issues concerning nonlinear systems with deviating argument remain to be addressed, such as nonlinear dynamics, systems design, and analysis.

Inspired by the above discussions, the goal of this paper is to formulate the analysis and design of associative memories for a class of memristive neural networks with deviating argument. On the whole, the highlights of this paper can be outlined as follows:
(1) Sufficient conditions are derived to ascertain the existence, uniqueness, and global exponential stability of the solution for memristive neural networks with deviating argument.
(2) The synthetic mechanism of the analysis and design of associative memories for a general class of memristive neural networks with deviating argument is revealed.
(3) A unified associative memory model, which encompasses both autoassociative memories and heteroassociative memories, is proposed for memristive neural networks with deviating argument.

In addition, it should be noted that when special and strict conditions are imposed, the associative memory performance of neural networks is usually limited. Therefore, the methods applicable to conventional stability analysis of neural networks cannot be directly employed to investigate associative memories for memristive neural networks. Moreover, the study of the dynamics of memristive neural networks with deviating argument is not easy, since conditions for the dynamics of neural networks cannot simply be carried over to hybrid dynamic systems with deviating argument. In this paper, based on the fuzzy uncertainty approach, in combination with the theories of set-valued maps and differential inclusions, the analysis and design of associative memories for memristive neural networks with deviating argument are described in detail.

The rest of the paper is arranged as follows. Design problem and preliminaries are given in Section 2. Main results are stated in Section 3. In Section 4, several numerical examples are presented. Finally, concluding remarks are given in Section 5.

2. Design Problem and Preliminaries

2.1. Problem Description

Let be the set of -dimensional bipolar vectors, where is a positive constant and denotes the transpose of a vector or matrix. The problem considered is described as follows.

Given pairs of bipolar vectors , where , for , design an associative memory model such that the output of the model converges to the corresponding pattern when is fed to the model input as a memory pattern.

Definition 1. The neural network is said to be an autoassociative memory if and a heteroassociative memory if , for .

In this paper, let be the natural number set; the norm of vector is defined as . Denote as the -dimensional Euclidean space. Fix two real number sequences , , , satisfying , , and .

2.2. Model

Consider the following neural network model: where is the neuron state, , which is a constant diagonal matrix with , , indicates the self-inhibition matrix, and are connection weight matrices at time and , respectively, and are defined by for , and or , or , where represents the switching jump, and , and are all constants. is the activation function. The deviating function , when for any . stands for a transform matrix of order satisfying for . and indicate the external input pattern and the corresponding memorized pattern, respectively. is the output of (1) corresponding to .
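For orientation, a representative component form consistent with the surrounding description is given below; all notation here is assumed for illustration rather than taken from the original display:

```latex
\dot{x}_i(t) = -d_i x_i(t)
  + \sum_{j=1}^{n} a_{ij}\bigl(x_j(t)\bigr) f_j\bigl(x_j(t)\bigr)
  + \sum_{j=1}^{n} b_{ij}\bigl(x_j(\gamma(t))\bigr) f_j\bigl(x_j(\gamma(t))\bigr)
  + u_i , \qquad i = 1, \dots, n ,
```

where $x_i(t)$ denotes the neuron state, $d_i > 0$ the self-inhibition, $a_{ij}(\cdot)$ and $b_{ij}(\cdot)$ the state-dependent (memristive) connection weights at time $t$ and at the deviating argument $\gamma(t)$, $f_j$ the activation function, and $u_i$ the external input.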

Neural network model (1) is called an associative memory if the output converges to .

Obviously, neural network model (1) is of hybrid type: switching and mixed. When , , . When , , . From this perspective, (1) is a switching system. On the other hand, for any fixed interval , when , neural network model (1) is a retarded system. When , neural network model (1) is an advanced system. Then, from this point of view, (1) is a mixed system.
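To make the mixed retarded/advanced character concrete, here is a small Python sketch of a piecewise-constant deviating argument on a uniform grid; the grid $\theta_k = kh$ and the midpoint choice $\zeta_k = (k + 1/2)h$ are illustrative assumptions, not the paper's particular sequences:

```python
import numpy as np

# Piecewise-constant deviating argument: gamma(t) = zeta_k for t in
# [theta_k, theta_{k+1}). With theta_k = k*h and zeta_k the midpoint,
# gamma(t) > t (advanced) on the first half of each interval and
# gamma(t) < t (retarded) on the second half.

def gamma(t: float, h: float = 1.0) -> float:
    k = np.floor(t / h)
    return (k + 0.5) * h

for t in np.linspace(0.0, 3.0, 7):
    kind = "advanced" if gamma(t) > t else ("retarded" if gamma(t) < t else "neutral")
    print(f"t = {t:.2f}, gamma(t) = {gamma(t):.2f}  ({kind})")
```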

For neural network model (1), the conventional definition of solution for differential equations cannot be applied here. To tackle this problem, the solution concept for differential equations with deviating argument is introduced [24–30]. According to this theory, a solution of (1) is a continuous function such that (i) exists on , except at the points , , where a one-sided derivative exists, and (ii) satisfies (1) within each interval .

In this paper, the activation functions are selected as where and are used to adjust the slope and corner point of activation functions and , .

It is easy to see that satisfies and for any , where .
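For concreteness, here is a minimal sketch of a piecewise-linear saturating activation with adjustable slope and corner point, in the spirit of (3); the exact parametrization below is an assumption, since the display is not reproduced here:

```python
import numpy as np

def activation(v: np.ndarray, m: float = 1.0, c: float = 0.5) -> np.ndarray:
    """Linear with slope m/c on [-c, c], saturated at +/-m outside
    (assumed parametrization)."""
    return np.clip((m / c) * v, -m, m)

v = np.linspace(-2.0, 2.0, 9)
print(activation(v))  # saturates at +/-1 outside the corner points +/-0.5
```

Such a function is bounded by $m$ and globally Lipschitz with constant $m/c$, which is the kind of bound used in (4).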

For technical convenience, denote , , , , , , , , and , for . Let , , and . denotes the convex closure of a set constituted by real numbers and .

2.3. Autoassociative Memories

From Definition 1, matrix is selected as an identity matrix when designing autoassociative memories (i.e., for ). For the convenience of analysis and formulation in autoassociative memories, neural network model (1) can be rewritten in component form:

2.4. Properties

For neural network model (1), we consider the following set-valued maps: for .

Based on the theory of differential inclusions [17–20], from (5), we can get or, equivalently, there exist and such that for almost all .

Employing the fuzzy uncertainty approach [21], neural network model (5) can be expressed as follows: where and is the sign function.

Definition 2. A constant vector is said to be an equilibrium point of neural network model (5) if and only if or, equivalently, there exist and such that for .

Definition 3. The equilibrium point of neural network model (5) is globally exponentially stable if there exist constants and such that for any , where is the state vector of neural network model (5) with initial condition .
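The displayed estimate is not reproduced above; the standard form of a global exponential stability bound of this kind, with constants $M \ge 1$ and $\varepsilon > 0$ (notation assumed), reads:

```latex
\|x(t) - x^{*}\| \le M e^{-\varepsilon (t - t_0)} \|x(t_0) - x^{*}\| , \qquad t \ge t_0 .
```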

Lemma 4. For neural network model (5), one has for , , where and are defined as those in (6) and (7), respectively.

Lemma 4 is a basic conclusion. For details, see [Neural Networks, vol. 51, pp. 1–8, 2014], [Information Sciences, vol. 279, pp. 358–373, 2014], and related works.

In the following, we end this section with some basic assumptions:
(A1) There exists a positive constant satisfying
(A2) .
(A3) .

3. Main Results

In this section, we first present the conditions ensuring the existence and uniqueness of the solution for neural network model (5), then analyze its global exponential stability, and finally discuss the design procedures of autoassociative memories and heteroassociative memories.

3.1. Existence and Uniqueness of Solution

Theorem 5. Let (A1) and (A2) hold. Then, there exists a unique solution , of (5) for every .

Proof. The proof of the assertion is divided into two steps.
Firstly, we discuss the existence of solution.
For a fixed , assume that ; the other case can be discussed in a similar way.
Let , and consider the following equivalent equation: Construct the following sequence , , and .
Define a norm , and we can get where .
From (A2), we have , and hence Therefore, there exists a solution on the interval for (17). From (4), the solution can be continued to . Utilizing a similar method, can be continued to and then to . Applying mathematical induction, the proof of the existence of a solution for (5) is completed.
Next, we analyze the uniqueness of solution.
Fix and choose , , . Denote and as two solutions of (5) with different initial conditions and , respectively.
Then, we get Based on the Gronwall–Bellman inequality, and, particularly, Hence, Suppose that there exists some satisfying ; then, From (A2), we get that is, Substituting (24) into (25), from (27), we obtain This poses a contradiction. Therefore, the uniqueness of the solution holds. Taken together, the proof of the existence and uniqueness of the solution for (5) is completed.
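For reference, the classical Gronwall–Bellman inequality invoked in this proof states that, for continuous $u$, nonnegative continuous $v$, and a constant $c \ge 0$:

```latex
u(t) \le c + \int_{t_0}^{t} v(s)\, u(s)\, ds \quad (t \ge t_0)
\;\Longrightarrow\;
u(t) \le c \exp\!\left( \int_{t_0}^{t} v(s)\, ds \right).
```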

3.2. Invariant Manifolds

Lemma 6. The solution of neural network model (5) will fall into ultimately if, for , the following conditions hold:

Proof. We distinguish four cases to prove the lemma.
(1) If, for some , , then That is to say, , when .
(2) If, for some , , , then That is to say, , when .
(3) If, for some , , , then That is to say, , when .
(4) If, for some , , , then That is to say, , when .
From the above analysis, it follows that every component of the solution of neural network model (5) will ultimately fall into . Hence, the proof is completed.

Lemma 7. The solution of neural network model (5) will fall into ultimately if, for , the following conditions hold:

Proof. We distinguish four cases to prove the lemma.
(1) If, for some , , , then That is to say, , when .
(2) If, for some , , , then That is to say, , when .
(3) If, for some , , , then That is to say, , when .
(4) If, for some , , , then That is to say, , when .
From the above analysis, it follows that every component of the solution of neural network model (5) will ultimately fall into . Hence, the proof is completed.

3.3. Global Exponential Stability Analysis

Denote

In order to guarantee the existence of an equilibrium point for neural network model (5), the following assumption is needed:
(A4) For any , .

Theorem 8. Let (A4) hold. Then, neural network model (5) has at least one equilibrium point , where

Proof. Define a mapping where It is obvious that is equicontinuous.
When , there exists a small enough positive number satisfying and when , there exists a small enough positive number satisfying Take and denote which is a bounded and closed set.
When , , Similarly, we can obtain for , .
From the above discussion, we can get for . Based on the generalized Brouwer’s fixed point theorem, there exists at least one equilibrium . This completes the proof.

Recalling (10), for , then

Let , , where , , or , or there exists for , such that

According to Lemma 4, for .

Lemma 9. Let (A1)–(A3) hold and be a solution of (49) or (50). Then, for , where .

Proof. For any , there exists a unique , such that ; then, Taking the absolute value on both sides of (53), and then Applying Lemma 4, that is then According to the Gronwall–Bellman inequality, Exchanging the location of and in (53), we get and then holds for , so , where . This completes the proof.

Remark 10. Neural network model (49) is also of hybrid type. The difficulty in investigating this class of neural networks lies in the coexistence of the switching jump and the deviating argument. For the switching jump, we introduce the differential inclusions and fuzzy uncertainty approaches to compensate for state-jump uncertainty. For the deviating argument, with the aid of computational analysis, we estimate the norm of the deviating state in terms of the corresponding state .

In the following, the criterion to guarantee the global exponential stability of neural network model (5) based on the Lyapunov method is established.

Theorem 11. Let (A1)–(A4) hold. The origin of neural network model (49) is globally exponentially stable, which implies that the equilibrium point of neural network model (5) is globally exponentially stable, if the following condition is satisfied:

Proof. Denote where is a positive constant satisfying Then, for , , calculate the derivative of along the trajectory of (49) or (50): Applying Lemma 4, Let then, for , , Based on Lemma 9, From (64), then, It is easy to see that hence that is for . This completes the proof.

3.4. Design Procedure of Autoassociative Memories

Given vectors to be memorized, where , for , the design procedure of autoassociative memories is stated as follows:
(1) Select matrix as an identity matrix.
(2) According to (29) and (34), choose appropriate , , , , and .
(3) Based on , , and , select the values of , , , and .
(4) Calculate , , and . Adjust the values of , , , and to make sure that .
(5) Solving the inequalities , , we can get .
(6) Take a proper such that .

The output will converge to the related memory pattern when matrices , , , and and scalar are chosen as above.
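To illustrate how such a design can be exercised numerically, the following Python sketch simulates pattern recall for a network of the general form described in Section 2. Everything here is an illustrative assumption — the parameter values, the switching rule, the activation, the midpoint choice of $\zeta_k$, and the per-interval fixed-point iteration used to handle the advanced part of the argument; it is a sketch of the idea, not the paper's exact model or parameters.

```python
import numpy as np

# Assumed model form (notation illustrative):
#   dx_i/dt = -d_i*x_i(t) + sum_j a_ij(x_j)*f(x_j(t))
#             + sum_j b_ij(x_j)*f(x_j(gamma(t))) + u_i,
# with gamma(t) = zeta_k on [theta_k, theta_{k+1}).

def f(v, m=1.0, c=0.5):
    # piecewise-linear saturating activation (assumed form)
    return np.clip((m / c) * v, -m, m)

def switched(x, lo, hi, T=1.0):
    # memristive weights: entry (i, j) switches between lo and hi
    # depending on |x_j| relative to the jump T (assumed rule)
    return np.where(np.abs(x)[None, :] <= T, lo, hi)

def simulate_recall(q, h=0.5, dt=1e-3, t_end=10.0, seed=0):
    rng = np.random.default_rng(seed)
    n = q.size
    d = 2.0 * np.ones(n)                      # self-inhibition (assumed)
    u = 3.0 * q.astype(float)                 # probe fed as external input (assumed)
    A_lo = rng.uniform(-0.1, 0.1, (n, n))     # two memristive levels for a_ij
    A_hi = 0.5 * A_lo
    B_lo = rng.uniform(-0.05, 0.05, (n, n))   # two memristive levels for b_ij
    B_hi = 0.5 * B_lo
    x = rng.uniform(-1.0, 1.0, n)             # random initial state

    def rhs(x_now, x_dev):
        A = switched(x_now, A_lo, A_hi)
        B = switched(x_now, B_lo, B_hi)
        return -d * x_now + A @ f(x_now) + B @ f(x_dev) + u

    steps = int(round(h / dt))
    for _ in range(int(round(t_end / h))):
        x0 = x.copy()
        x_dev = x0.copy()                     # initial guess for x(zeta_k)
        for _ in range(3):                    # fixed-point pass over the interval
            x = x0.copy()
            x_mid = x0.copy()
            for s in range(steps):            # Euler steps across [theta_k, theta_k+h)
                x = x + dt * rhs(x, x_dev)
                if s == steps // 2:
                    x_mid = x.copy()          # value at zeta_k (midpoint, assumed)
            x_dev = x_mid
    return np.sign(x)                         # bipolar output pattern

q = np.array([1, -1, 1, -1])
print(simulate_recall(q))                     # ideally returns the stored pattern q
```

The fixed-point pass is needed because $\gamma(t)$ can exceed $t$, so the deviating state $x(\zeta_k)$ is not yet known when the interval begins; iterating the interval a few times with an updated guess is a simple way to resolve this in simulation.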

3.5. Design Procedure of Heteroassociative Memories

Given vectors to be memorized as , , which correspond to the external input patterns , , where , , , set , where , , , . It is clear that heteroassociative memories can be treated as autoassociative memories once the transform matrix is obtained. Matrices , , and and scalar can be selected by the method used in autoassociative memories. The transform matrix is obtained by the following steps (a numerical sketch is given below):
(1) When , that is, when and are square matrices, the inverse matrix of can be computed with the Matlab Toolbox, and can be obtained as .
(2) When , add proper column vectors to and to construct new matrices and , respectively, such that and have full rank. The transform matrix can then be obtained as .
(3) When , add proper row vectors to and to construct new matrices and such that and have full rank. The transform matrix can then be obtained as . Note that, in this circumstance, the dimensionality of each input and output pattern increases from to , and only the front values of each external input pattern take part in the associative memory.
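Assuming the input patterns are stacked as the columns of a matrix U and the target patterns as the columns of V (this layout is an assumption), the transform matrix can be computed in a few lines of Python. Here np.linalg.pinv reproduces the matrix inverse when U is square and nonsingular and gives a least-squares solution otherwise, so it serves as a convenient stand-in for the rank-padding construction of steps (2) and (3):

```python
import numpy as np

def transform_matrix(U: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Return S such that S @ U recovers V (exactly when U is invertible,
    in the least-squares sense otherwise)."""
    return V @ np.linalg.pinv(U)

# Toy bipolar example (values assumed): two 2-dimensional pattern pairs.
U = np.array([[1.0, -1.0],
              [-1.0, -1.0]])   # input patterns as columns
V = np.array([[1.0, 1.0],
              [-1.0, 1.0]])    # corresponding target patterns as columns
S = transform_matrix(U, V)
print(np.allclose(S @ U, V))   # True: each input column maps to its target
```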

The output will converge to the related memory pattern when matrices , , , and and scalar are chosen as above.

4. Experimental Verification

In this section, in order to verify the effectiveness of the proposed results, several numerical examples are provided.

Example 12. The first example shows the state response of an autoassociative memory.
Consider neural network model (5) with , where , , , , and with the activation functions for . Denote as the external input vector and the memorized vector, respectively.
It follows from Theorem 11 that neural network model (5) has a unique equilibrium point, which is globally exponentially stable. The simulation result is shown in Figure 1.
It can be calculated easily that the equilibrium point in Figure 1 is Clearly, the output pattern is which is equivalent to the memorized pattern . Hence, neural network model (5) can be implemented effectively as an autoassociative memory.

Remark 13. Because the equilibrium point is globally exponentially stable, the initial values can be selected randomly without influencing the performance of the associative memory.

Example 14. The second example examines the influences of , , and on the location of equilibrium points.

The influences are discussed in two cases.

Case 1. Let be a constant and denote as the external input vector and the memorized vector, respectively. Consider and as variables in neural network model (5), taking ; the rest of the parameters are the same as those in Example 12. The results are shown in Table 1, from which we can see that the smaller the value of , the larger the range of values can take while the pattern is still memorized correctly, and vice versa.

Case 2. Let be a constant and denote as the external input vector and the memorized vector, respectively, where . Consider and (or ) as variables in neural network model (5), taking ; the rest of the parameters are the same as those in Example 12. The results are shown in Table 2, from which we can see that (or ) should increase as increases in order for the pattern to be memorized correctly, and vice versa.

Remark 15. From Tables 1 and 2, it can be concluded that the associative memory performance improves as increases when is fixed, but not vice versa.

Example 16. The third example examines the influences of and of perturbations of on the recall success rate of memorized patterns.
Let , and consider () as variables in neural network model (5). The other parameters are the same as those in Example 12. In this example, the influences of and of the perturbed on the recall success rate of memorized vectors are analyzed. Experimental results are illustrated in Table 3, which shows that the robustness of neural network model (5) to input perturbations becomes stronger as decreases.

Remark 17. Table 3 illustrates that the robustness of the associative memory to perturbations of becomes stronger as decreases.

Example 18. The fourth example presents an application of autoassociative memory without input perturbation.
According to neural network model (5), design an autoassociative memory to memorize pattern “” which is represented by a -pixel image (black pixel ; white pixel ) and the corresponding input pattern is . The parameters are the same as those in Example 12.
Simulation results with three random initial values are depicted in Figure 2, where the dark grey block indicates that the value of the related component of output is in and the light grey block indicates that the value of the related component of output is in . Obviously, the output pattern converges to the memorized pattern correctly.

Example 19. The fifth example presents an application of autoassociative memory with input perturbation.
The aim of this experiment is to discuss the influence of input pattern perturbation on the output pattern. The parameters are the same as those in Example 12. Choose pattern “” to be memorized, and the perturbed vector is selected as , where is the corresponding memorized vector and is a perturbation coefficient.
In what follows, the perturbed vector is imposed on neural network model (5) as the input vector with and , respectively. Simulation results are shown in Figures 3 and 4. In Figure 3, the output pattern is exactly the same as the memorized pattern, while in Figure 4 the output pattern is different from the memorized pattern, although they can be distinguished from the outlines.

Remark 20. From the analysis above, we can see that the output pattern converges to the memorized pattern as long as the perturbation coefficient remains in a suitable range. In fact, neural network model (5) works well as an associative memory if the coefficient satisfies and the other parameters are the same as those in Example 19.

Example 21. The sixth example presents an application of heteroassociative memory.
Consider neural network model (1), and the parameters are the same as those in Example 12 except the transform matrix , which will be given later. The aim of this experiment is to design a heteroassociative memory to memorize pattern “” when the input pattern is “.” It is clear that the input vector is and the corresponding output vector is .

According to the heteroassociative memory design procedure, the matrices and are constructed as respectively.

Using the Matlab Toolbox, we can have

The simulation results are depicted in Figure 5, which shows that the output pattern “” is memorized when the input pattern is “” and the transform matrix is selected as above. This indicates that neural network model (1) can perform well as a heteroassociative memory under a proper transform matrix , which can be easily obtained by using the Matlab Toolbox.

Remark 22. Simulation results in Figure 5 show that the associative memory model presented in this paper can be implemented well both as an autoassociative memory and as a heteroassociative memory. This model possesses resistance against external input perturbations and can be regarded as an extension and complement of the relevant results [10, 11, 16].

5. Concluding Remarks

Analysis and design of associative memories for recurrent neural networks have drawn considerable attention, whereas analysis and design of associative memories for memristive neural networks with deviating argument have not been well investigated. This paper presents a general class of memristive neural networks with deviating argument and discusses the analysis and design of autoassociative memories and heteroassociative memories. Meanwhile, the robustness of this class of neural networks is demonstrated when external input patterns are seriously perturbed. To the best of the author's knowledge, this is the first theoretical result on associative memories for memristive neural networks with deviating argument. Future works may aim at (1) studying associative memories based on multistability of memristive neural networks with deviating argument; (2) analyzing associative memories based on discrete-time memristive neural networks with deviating argument; and (3) exploring high-capacity associative memories for memristive neural networks with deviating argument.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is supported by the Research Project of Hubei Provincial Department of Education of China under Grant T201412.