Abstract

A novel Chebyshev neural network combined with memristors is proposed to perform function approximation. The relationship between the memristive conductance change and the weight update is derived, and a model of a single-input memristive Chebyshev neural network is established. A corresponding BP algorithm and a deriving algorithm are introduced for the memristive Chebyshev neural networks. Their advantages include low model complexity, easy convergence of the algorithm, and easy circuit implementation. MATLAB simulation results verify the feasibility and effectiveness of the memristive Chebyshev neural networks.

1. Introduction

Many researchers have further studied the approximation capacity of neural networks. Research indicates that Back Propagation (BP) neural networks have some drawbacks, such as slow convergence, easily falling into local minima, and difficulty in determining the number of hidden neurons. Radial Basis Function (RBF) neural networks are superior to BP neural networks in approximation capacity because their learning rate is fast, but the lack of theoretical guidance in determining the centers of the basis functions forces us to set the centers by experience, so we cannot ensure that the network performance is optimal. In particular, there is a dilemma between approximation accuracy and network complexity when these two kinds of neural networks are used to approximate nonlinear functions. Chebyshev neural networks, whose hidden-layer activation functions are a group of Chebyshev orthogonal polynomials, can overcome this bottleneck. Nowadays, Chebyshev neural networks are widely used in complex nonlinear systems [1], chaotic systems [2], and discrete-time nonlinear systems [3]. Their learning rate and approximation accuracy are better than those of traditional neural networks, and the number of hidden neurons can be determined quickly. Chebyshev neural networks are also widely used in aerospace [4, 5] and chemistry [6].

The nanoscale memristor has the potential for information storage because of its nonvolatility over long periods of power-down, so it can be used as a synapse in neural networks. Many researchers in the field of memristors have suggested that this device has high potential for implementing artificial synapses. Afifi et al. studied the realization of STDP learning rules in pulsed neuromorphic networks with a memristor crossbar array [7]. Hu et al. built a novel chaotic neural network with memristors to implement associative memory [8]. Wang et al. proposed a PID controller based on a memristive CMAC network [9]. Pershin and di Ventra realized associative memory based on memristive neural networks [10]. Sharifi and Banadaki applied memristors as memory units in nonvolatile RAMs and as synapses in artificial neural networks [11]. Chabi and Klein put forward a neural network with high fault tolerance based on a crossbar switch structure [12]. Cantley et al. built neural networks with synapses made of amorphous silicon thin-film transistors and memristors, which realized the Hebb learning rules [13]. Gao et al. designed a cellular neural network applied to image denoising and edge detection [14]. Kim et al. changed the synapse weights of artificial neural networks with a pulse-based programmable memristor [15].

The memristive Chebyshev neural networks have the following advantages: (a) a single-input memristive Chebyshev neural network can realize the approximation of any nonlinear function with only three layers; (b) the number of hidden neurons in memristive Chebyshev neural networks is significantly smaller than that in traditional BP neural networks; (c) only the weights from the hidden layer to the output layer need to be adjusted, which greatly speeds up the convergence of the algorithm; (d) as the memristive synapse is small and passive, the hardware circuits of the memristive Chebyshev neural networks can easily be implemented in VLSI.

In this paper, we establish the correspondence between the memristor conductance and the synapse weight through theoretical analysis and MATLAB simulations. The nanoscale memristor serves as the synapse in the Chebyshev neural networks, and function approximation is realized by the memristive BP algorithm and the memristive deriving algorithm, respectively.

2. Memristive Synapse

The physical model of the memristor [16] is shown in Figure 1. $w$ and $D$ represent the thickness of the doped layer and of the total oxide film, respectively. $R_{\mathrm{ON}}$ is the value of the memristor when $w$ is equal to $D$, and $R_{\mathrm{OFF}}$ is the value of the memristor when $w$ is equal to zero. $R_0$ is the initial value of the memristor.

According to the mathematical model of the HP memristor and the flux-controlled model of the memristor, we can obtain the following equation [17]:

$$M(\varphi(t)) = \sqrt{R_0^2 - 2k\varphi(t)}, \tag{1}$$

where $k = \mu_v R_{\mathrm{ON}} \Delta R / D^2$ and $\Delta R = R_{\mathrm{OFF}} - R_{\mathrm{ON}}$. $\mu_v$ is the average drift rate of the oxygen vacancies. The memristive conductance then follows from (1) as

$$G(\varphi(t)) = \frac{1}{M(\varphi(t))} = \left(R_0^2 - 2k\varphi(t)\right)^{-1/2}. \tag{2}$$

The derivative of equation (2) with respect to time, using $d\varphi/dt = v(t)$, is

$$\frac{dG}{dt} = k\,v(t)\left(R_0^2 - 2k\varphi(t)\right)^{-3/2}. \tag{3}$$

Using calculus, $\Delta G$ is approximately equal to $dG$ when $\Delta t$ approaches zero, and then we can obtain the following equation:

$$\Delta G \approx c\,v\,\Delta t, \tag{4}$$

where $c = k\left(R_0^2 - 2k\varphi\right)^{-3/2}$. The relationship curve between the memristive conductance change $\Delta G$ and the voltage $v$ is shown in Figure 2. It can be seen that the memristive conductance change $\Delta G$ increases along with the increase of the applied voltage $v$. If the learning error of the neural network is regarded as the voltage $v$, the memristive conductance change $\Delta G$ can reasonably describe the synapse weight update, and the correspondence between the two can be built.
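To illustrate (4), the following minimal sketch (in Python/NumPy; the paper's simulations were carried out in MATLAB) evaluates the conductance change for a range of applied voltages. The parameter values below are illustrative assumptions, not the values used in the experiments.

```python
import numpy as np

# Illustrative HP-memristor parameters (assumed values, not the paper's).
R_on, R_off = 100.0, 16e3        # ohm
D = 10e-9                        # total oxide thickness (m)
mu_v = 1e-14                     # average drift rate of oxygen vacancies
R0 = 11e3                        # initial memristance (ohm)
k = mu_v * R_on * (R_off - R_on) / D**2

def delta_G(v, phi, dt):
    """Conductance change of eq. (4): dG ~ k * v * dt / (R0^2 - 2*k*phi)^(3/2)."""
    return k * v * dt / (R0**2 - 2.0 * k * phi) ** 1.5

v = np.linspace(-1.0, 1.0, 11)          # applied voltage (plays the role of the error)
dG = delta_G(v, phi=0.0, dt=1e-3)       # conductance change for each voltage
print(np.all(np.diff(dG) > 0))          # dG increases monotonically with v -> True
```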

3. Modeling of the Memristive Chebyshev Neural Networks

The polynomials of degree $n$ that are orthogonal with respect to the weight function $\rho(x) = 1/\sqrt{1-x^2}$ are called Chebyshev polynomials of the first kind [18]. By definition, if the polynomial system $\{T_n(x)\}$ (where $n = 0, 1, 2, \ldots$) and the weight function $\rho(x)$ satisfy the following relationship:

$$\int_{-1}^{1} \rho(x)\,T_i(x)\,T_j(x)\,dx = 0, \qquad i \neq j,$$

then the orthogonal polynomial system can be structured in the interval $[-1, 1]$.

In this paper, a model (see Figure 3) of the single-input memristive Chebyshev neural networks is proposed, which consists of the input layer, hidden layer, and output layer. The weights from the input layer to the hidden layer are set to 1, and the memristor conductances store the weights from the hidden layer to the output layer. The thresholds of all neurons are set to 0. The hidden layer has $m$ neurons, and their transfer functions are a group of Chebyshev orthogonal polynomial functions. Namely, the transfer function of the $j$th hidden neuron is $T_j(x)$, where

$$T_1(x) = 1, \quad T_2(x) = x, \quad T_{j+1}(x) = 2x\,T_j(x) - T_{j-1}(x), \quad j = 2, 3, \ldots, m-1.$$
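A minimal sketch of the hidden-layer transfer functions, generated with the three-term recurrence above (Python/NumPy; the 1-based indexing of the $m$ hidden neurons follows the convention used here):

```python
import numpy as np

def chebyshev_basis(x, m):
    """Outputs of the m hidden neurons: T_1(x)=1, T_2(x)=x,
    T_{j+1}(x) = 2*x*T_j(x) - T_{j-1}(x)."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    T = np.zeros((m, x.size))
    T[0] = 1.0
    if m > 1:
        T[1] = x
    for j in range(2, m):
        T[j] = 2.0 * x * T[j - 1] - T[j - 2]
    return T                               # shape (m, number of inputs)

print(chebyshev_basis(0.5, 4)[:, 0])       # expected: 1.0, 0.5, -0.5, -1.0
```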

4. Realization of Function Approximation Algorithm

Set the memristive Chebyshev neural network model as follows:

input layer: $x$;

input of the hidden layer neurons: $x$ (the input-to-hidden weights are all 1);

output of the $j$th hidden layer neuron: $c_j = T_j(x)$;

output layer: $y = \sum_{j=1}^{m} w_j T_j(x)$.

Samples $(x_i, d_i)$, $i = 1, 2, \ldots, P$ ($P$ is the number of the samples), are assumed as inputs, and the errors between the network outputs and the target values are $e_i = d_i - y_i$.

The network training index is

$$E = \frac{1}{2}\sum_{i=1}^{P} e_i^2.$$
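The forward pass and the training index above can be written compactly as follows (a sketch in Python/NumPy; the sample inputs and teacher signal are hypothetical):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def forward(x, w):
    """Output layer y = sum_{j=1}^{m} w_j * T_j(x); the columns of the
    Chebyshev Vandermonde matrix are T_1(x)=1, T_2(x)=x, ..., T_m(x)."""
    return C.chebvander(x, len(w) - 1) @ w

def training_index(x, d, w):
    """Errors e_i = d_i - y_i and the index E = 0.5 * sum_i e_i^2."""
    e = d - forward(x, w)
    return e, 0.5 * np.sum(e * e)

x = np.linspace(-1.0, 1.0, 21)     # sample inputs (assumed)
d = np.sin(np.pi * x)              # hypothetical teacher signal
e, E = training_index(x, d, np.zeros(8))
print(E)                           # training index before any learning
```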

4.1. Memristive BP Algorithm

In order to establish the relationship between the memristor and the synapse, we propose a learning rule for the memristive synaptic weight update. According to (4), the memristive BP algorithm updates each hidden-to-output weight in proportion to the memristive conductance change, with the learning error $e$ playing the role of the applied voltage; here $\eta$ denotes the learning rate, $n$ the index of the learning iteration, and $\varphi$ the integral of the error $e$.

The learning algorithm is described as follows.

Step 1. Take any number $m$ of hidden neurons, choose the initial weights as the initial memristor conductances, set the learning rate $\eta$, give a small positive number $\varepsilon$, and set the training sample set $(x_i, d_i)$, $i = 1, 2, \ldots, P$.

Step 2. Calculate the hidden layer outputs $T_j(x_i)$, the network output $y_i = \sum_{j=1}^{m} w_j T_j(x_i)$, and the error $e_i = d_i - y_i$.

Step 3. Adjust the weights $w_j$ according to the memristive update rule, treating the error $e_i$ as the applied voltage in (4).

Step 4. Let $i = i + 1$; if $i \leq P$, then jump back to Step 2; otherwise go to Step 5.

Step 5. If $E < \varepsilon$, then stop the learning; otherwise, let $n = n + 1$, $i = 1$, $E = 0$, and jump back to Step 2.
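A compact sketch of Steps 1-5 is given below (Python/NumPy). It assumes that the weight change combines the usual gradient term $e_i T_j(x_i)$ with the conductance-dependent factor $k(R_0^2 - 2k\varphi)^{-3/2}$ of (4), and that the flux $\varphi$ accumulates the error ("voltage") over the update pulses; the parameter values, the clipping of $\varphi$, and the stopping thresholds are illustrative choices, not the paper's settings.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def memristive_bp(x, d, m=8, eta=0.05, eps=1e-3, max_epochs=1000,
                  k=1.0, R0=2.0, dt=1e-3):
    """Sketch of the memristive BP algorithm of Section 4.1 (Steps 1-5).

    Assumed update form: gradient term e_i * T_j(x_i) scaled by the
    conductance-dependent factor k / (R0^2 - 2*k*phi)^(3/2) of eq. (4).
    k, R0, dt are illustrative, dimensionless values."""
    T = C.chebvander(x, m - 1)                   # hidden outputs T_j(x_i)
    w = np.full(m, 0.1)                          # initial weights = initial conductances
    phi = 0.0                                    # accumulated flux
    for epoch in range(1, max_epochs + 1):       # 'epoch' = number of learning n
        for i in range(len(x)):                  # Steps 2-4: loop over samples
            e = d[i] - T[i] @ w                  # output error e_i
            scale = k / (R0**2 - 2.0 * k * phi) ** 1.5
            w += eta * scale * e * T[i]          # Step 3: memristive weight update
            phi = np.clip(phi + e * dt, -1.0, 1.0)   # flux ~ integral of the error
        E = 0.5 * np.sum((d - T @ w) ** 2)       # Step 5: training index
        if E < eps:                              # stop when the target accuracy is met
            break
    return w, E, epoch
```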

4.2. Memristive Deriving Algorithm

The basic principles of the memristive deriving algorithm are as follows. Take a small number of hidden neurons as the initial cells. After a certain number of training iterations, the error no longer decreases; that is, $|E(n) - E(n-1)| < \delta$ ($\delta$ is a pregiven precision index). The network then derives automatically by adding a hidden neuron ($m = m + 1$). This process is repeated until the hidden neurons stop deriving and learning. Once the target accuracy is reached, the neural network topology has been generated automatically.

The specific algorithm can be described as follows.

Step 1. Take a small number $m$ of hidden neurons, choose the initial weights as the initial memristor conductances, set the learning rate $\eta$, give the small positive numbers $\varepsilon$ and $\delta$, set the training sample set $(x_i, d_i)$, $i = 1, 2, \ldots, P$, and set the number of learning $n = 1$.

Step 2. The input of the hidden layer neurons is $x_i$. Calculate the hidden layer outputs $T_j(x_i)$, the network output $y_i = \sum_{j=1}^{m} w_j T_j(x_i)$, and the error $e_i = d_i - y_i$.

Step 3. The weights are updated according to the memristive rule of Section 4.1, with the error $e_i$ acting as the applied voltage in (4).

Step 4. Calculate the training index $E$. If $E < \varepsilon$, stop deriving and learning, which indicates that the neural network topology has been generated automatically. Otherwise, go to Step 5.

Step 5. If $|E(n) - E(n-1)| < \delta$, the training error of the network is no longer decreasing, and the number of hidden neurons should be increased in order to improve the network performance. Namely, let $m = m + 1$ and $n = n + 1$, and jump back to Step 2 to continue learning.
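A minimal sketch of the deriving procedure is given below (Python/NumPy). It reuses the assumed memristive update form of the Section 4.1 sketch; the parameter values and thresholds are illustrative, not the paper's settings.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def memristive_deriving(x, d, m0=2, eta=0.02, eps=1e-3, delta=1e-6,
                        m_max=30, max_epochs=2000, k=1.0, R0=2.0, dt=1e-3):
    """Sketch of the memristive deriving algorithm of Section 4.2: start
    from m0 hidden neurons and add one neuron whenever the training index
    stops decreasing (|E_new - E_old| < delta), until E < eps."""
    m, E_old = m0, np.inf
    w = np.full(m, 0.1)                          # initial conductances/weights
    phi = 0.0
    for epoch in range(1, max_epochs + 1):
        T = C.chebvander(x, m - 1)               # Step 2: hidden outputs
        for i in range(len(x)):
            e = d[i] - T[i] @ w
            scale = k / (R0**2 - 2.0 * k * phi) ** 1.5
            w += eta * scale * e * T[i]          # Step 3: memristive update
            phi = np.clip(phi + e * dt, -1.0, 1.0)
        E = 0.5 * np.sum((d - T @ w) ** 2)       # Step 4: training index
        if E < eps:                              # target accuracy reached
            break
        if abs(E_old - E) < delta and m < m_max:
            m += 1                               # Step 5: derive a new neuron
            w = np.append(w, 0.1)                # its initial conductance
        E_old = E
    return w, E, epoch
```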

5. Experimental Results

We use the memristive Chebyshev neural networks to realize nonlinear function approximation through the memristive BP algorithm and the memristive deriving algorithm, respectively. The memristor model parameters ($R_{\mathrm{ON}}$, $R_{\mathrm{OFF}}$, $R_0$, $D$, $\mu_v$, etc.) are set to fixed values.

5.1. Memristive BP Algorithm Realizes Function Approximation

The 1 × 8 × 1 network structure is shown in Figure 3. The learning rate $\eta$, the number of samples $P$, and the target mean square error are set to fixed values, and the maximum number of learning iterations is 1000. The experimental result is shown in Figure 4. The relationship between the training error and the number of learning iterations is shown in Figure 4(a). The final number of learning iterations is 735, at which the minimum training error meets the requirement of the function approximation. The memristive synaptic weights are constantly updated during the learning process, as shown in Figure 4(b). Figures 4(c) and 4(d) show the final memristive weights and the approximation curve of the actual output signal relative to the teacher signal, respectively.
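As a usage illustration, the Section 4.1 sketch can be driven with the same 1 × 8 × 1 setting and a limit of 1000 learning iterations; the sample points, teacher signal, and tolerance below are hypothetical, since the concrete target function is not restated here.

```python
import numpy as np

# Reuses memristive_bp from the Section 4.1 sketch.
x = np.linspace(-1.0, 1.0, 50)     # assumed sample points
d = np.sin(np.pi * x)              # hypothetical teacher signal
w, E, n = memristive_bp(x, d, m=8, eps=1e-3, max_epochs=1000)
print(f"hidden neurons: 8, final index E = {E:.2e}, learning iterations = {n}")
```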

5.2. Memristive Deriving Algorithm Realizes Function Approximation

We adopt a 1 × 2 × 1 initial neural network with a learning rate of 0.02, and the experimental result is shown in Figure 5. The final network structure is 1 × 25 × 1, and the number of learning iterations is 420. Figure 5(a) shows the final memristive synaptic weights of the hidden layer. Figure 5(b) shows the approximation curve of the actual output signal relative to the teacher signal.
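Analogously, the Section 4.2 sketch can be started from the same 1 × 2 × 1 structure with learning rate 0.02 (again with a hypothetical teacher signal):

```python
import numpy as np

# Reuses memristive_deriving from the Section 4.2 sketch.
x = np.linspace(-1.0, 1.0, 50)
d = np.sin(np.pi * x)              # hypothetical teacher signal
w, E, n = memristive_deriving(x, d, m0=2, eta=0.02)
print(f"final hidden neurons: {w.size}, index E = {E:.2e}, iterations = {n}")
```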

Compared with the memristive BP algorithm, the memristive deriving algorithm requires fewer learning iterations (735 − 420 = 315 fewer), because its neural network structure is generated automatically.

6. Conclusions

The HP memristor model is analyzed, and the correspondence between the memristive conductance change and the synapse weight update in Chebyshev neural networks is established. Two learning algorithms for the memristive synapse weight update are proposed. Nonlinear function approximation is implemented by the memristive BP algorithm and the memristive deriving algorithm, respectively. On the one hand, the memristive BP networks have a fixed structure, which is easy to implement. On the other hand, the memristive deriving networks have a dynamic structure that grows toward the optimal solution. Memristive Chebyshev neural networks are expected to find applications in pattern recognition and data compression.

Acknowledgments

The work was supported by National Natural Science Foundation of China (Grant nos. 60972155 and 61101233), Fundamental Research Funds for the Central Universities (Grant nos. XDJK2012A007 and XDJK2013B011), University Excellent Talents Supporting Foundations of Chongqing (Grant no. 2011-65), University Key Teacher Supporting Foundations of Chongqing (Grant no. 2011-65), Technology Foundation for Selected Overseas Chinese Scholars, Ministry of Personnel in China (Grant no. 2012-186), National Science Foundation for Postdoctoral Scientists of China (Grant no. CPSF20100470116), and “Spring Sunshine Plan” Research Project of Ministry of Education of China (Grant no. z2011148).