Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 429402, 7 pages
http://dx.doi.org/10.1155/2013/429402
Research Article

Memristive Chebyshev Neural Network and Its Applications in Function Approximation

School of Electronic and Information Engineering, Southwest University, Chongqing 400715, China

Received 1 February 2013; Accepted 22 April 2013

Academic Editor: Chuandong Li

Copyright © 2013 Lidan Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A novel Chebyshev neural network combined with memristors is proposed for function approximation. The relationship between the memristive conductance change and the weight update is derived, and a model of a single-input memristive Chebyshev neural network is established. A corresponding BP algorithm and a deriving (self-growing) algorithm are introduced for the memristive Chebyshev neural networks. Their advantages include lower model complexity, easy convergence of the algorithms, and easy circuit implementation. MATLAB simulation results verify the feasibility and effectiveness of the memristive Chebyshev neural networks.

1. Introduction

Many researchers have studied the approximation capacity of neural networks. Research indicates that Back Propagation (BP) neural networks suffer from several drawbacks, such as slow convergence, easily getting trapped in local minima, and difficulty in determining the number of hidden neurons. Radial Basis Function (RBF) neural networks are superior to BP neural networks in approximation capacity because of their fast learning rate, but the lack of theoretical guidance in choosing the centers of the basis functions means the centers must be set from experience, so the best network performance cannot be guaranteed. In particular, there is a dilemma between approximation accuracy and network complexity when these two kinds of neural networks are used to approximate nonlinear functions. Chebyshev neural networks can overcome this bottleneck: their hidden-layer activation functions are a group of Chebyshev orthogonal polynomials. Nowadays, Chebyshev neural networks are widely used in complex nonlinear systems [1], chaotic systems [2], and discrete-time nonlinear systems [3]. Their learning rate and approximation accuracy are better than those of traditional neural networks, and the number of hidden neurons can be determined quickly. Meanwhile, Chebyshev neural networks are widely used in aerospace [4, 5] and chemistry [6].

The nanoscale memristor has potential for information storage because of its nonvolatility over long periods of power-down, so it can be used as a synapse in neural networks. Many researchers in the memristor field have suggested that this device has high potential for implementing artificial synapses. Afifi et al. studied the realization of STDP learning rules in pulsed neuromorphic networks based on a memristor crossbar array [7]. Hu et al. built a novel chaotic neural network with memristors to implement associative memory [8]. Wang et al. proposed a PID controller based on a memristive CMAC network [9]. Pershin and di Ventra realized associative memory with memristive neural networks [10]. Sharifi and Banadaki applied memristors as memory units in nonvolatile RAMs and as synapses in artificial neural networks [11]. Chabi and Klein put forward a neural network with high fault tolerance based on a crossbar switch structure [12]. Cantley et al. built neural networks whose synapses are made up of amorphous silicon thin-film transistors and memristors, realizing Hebbian learning rules [13]. Gao et al. designed a memristive cellular neural network applied to image denoising and edge detection [14]. Kim et al. changed the synapse weights of artificial neural networks with a pulse-based programmable memristor circuit [15].

The memristive Chebyshev neural networks have the following advantages: (a) a single-input memristive Chebyshev neural network can realize any nonlinear function approximation with only three layers; (b) the number of hidden neurons in memristive Chebyshev neural networks is significantly smaller than that in traditional BP neural networks; (c) only the weights from the hidden layer to the output layer need to be adjusted, which greatly speeds up the convergence of the algorithm; (d) since the memristive synapse is small and passive, the hardware circuits of the memristive Chebyshev neural networks can be easily implemented in VLSI.

In this paper, we establish the correspondence between memristor conductance and synapse weight through theoretical analysis and MATLAB simulations. The nanoscale memristor serves as the synapse in Chebyshev neural networks, and function approximation is realized by the memristive BP algorithm and the memristive deriving algorithm, respectively.

2. Memristive Synapse

The physical model of the memristor [16] is shown in Figure 1. $w$ and $D$ represent the thickness of the doped layer and of the total oxide film, respectively. $R_{\text{ON}}$ is the value of the memristor when $w$ is equal to $D$, and $R_{\text{OFF}}$ is the value of the memristor when $w$ is equal to zero. $M_0$ is the initial value of the memristor.

Figure 1: The physical model of the HP memristor [16].

According to the mathematical model of the HP memristor and its flux-controlled form, the memristance can be written as [17]
$$M(\varphi(t)) = \sqrt{M_0^2 - 2k\varphi(t)}, \qquad (1)$$
where $k = \mu_v R_{\text{ON}}(R_{\text{OFF}} - R_{\text{ON}})/D^2$ and $\mu_v$ is the average drift rate of the oxygen vacancies. The memristive conductance then follows from (1) as
$$G(\varphi(t)) = \frac{1}{M(\varphi(t))} = \left(M_0^2 - 2k\varphi(t)\right)^{-1/2}. \qquad (2)$$

The derivative of equation (2) with respect to time, using $d\varphi/dt = v(t)$, is
$$\frac{dG}{dt} = \frac{k\,v(t)}{\left(M_0^2 - 2k\varphi(t)\right)^{3/2}}. \qquad (3)$$

By elementary calculus, $\Delta G$ is approximately equal to $(dG/dt)\,\Delta t$ when $\Delta t$ approaches zero, and then we obtain
$$\Delta G \approx \frac{k\,v(t)\,\Delta t}{\left(M_0^2 - 2k\varphi(t)\right)^{3/2}} = c\,v(t), \qquad (4)$$
where $c = k\,\Delta t/\left(M_0^2 - 2k\varphi(t)\right)^{3/2}$. The relationship between the memristive conductance change $\Delta G$ and the voltage $v$ is shown in Figure 2. It can be seen that the conductance change $\Delta G$ increases with the applied voltage $v$. If the learning error of the neural network is regarded as the voltage $v$, the memristive conductance change $\Delta G$ can reasonably describe the synapse weight update, so a correspondence between the two can be built.

Figure 2: The relationship between the change of the memristive conductance and the applied voltage.
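To illustrate this relationship numerically, the following Python sketch evaluates the conductance change of (4) for a range of applied voltages. The parameter values ($R_{\text{ON}}$, $R_{\text{OFF}}$, $D$, $\mu_v$, $M_0$, $\Delta t$) are common illustrative HP-memristor figures assumed for the sketch, not the values used in this paper.

```python
import numpy as np

# Flux-controlled HP memristor model of Section 2 with illustrative (assumed) parameters.
R_on, R_off = 100.0, 16e3      # ohm
D = 10e-9                      # total film thickness (m)
mu_v = 1e-14                   # average drift rate of oxygen vacancies (m^2 s^-1 V^-1)
M0 = 16e3                      # initial memristance (ohm), assumed equal to R_off here
k = mu_v * R_on * (R_off - R_on) / D**2
dt = 1e-3                      # time step used in the linearization of Eq. (4)

def delta_G(v, phi=0.0):
    """Conductance change Delta G = c * v of Eq. (4) at accumulated flux phi."""
    return k * dt * v / (M0**2 - 2.0 * k * phi) ** 1.5

voltages = np.linspace(-1.0, 1.0, 5)
print(np.column_stack([voltages, delta_G(voltages)]))   # Delta G grows monotonically with v
```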

3. Modeling of the Memristive Chebyshev Neural Networks

The degree-$n$ orthogonal polynomials associated with the weight function $\rho(x) = 1/\sqrt{1-x^2}$ are called Chebyshev polynomials of the first kind [18]. By definition, if the polynomial system $\{T_n(x)\}$ ($n = 0, 1, 2, \ldots$) and the weight function $\rho(x)$ satisfy the relationship
$$\int_{-1}^{1} \rho(x)\,T_m(x)\,T_n(x)\,dx =
\begin{cases}
0, & m \neq n,\\
\pi/2, & m = n \neq 0,\\
\pi, & m = n = 0,
\end{cases}$$
then the orthogonal polynomial system can be constructed on the interval $[-1, 1]$.
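This orthogonality relation can be checked numerically. The sketch below is a small verification script (not from the paper) that evaluates the weighted inner products with Gauss-Chebyshev quadrature.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Verify the orthogonality of first-kind Chebyshev polynomials with weight 1/sqrt(1 - x^2)
# on [-1, 1] using Gauss-Chebyshev quadrature (exact for these polynomial integrands).
N = 200
nodes = np.cos((2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N))
wq = np.pi / N                                  # equal quadrature weights

def T(n, x):
    return C.chebval(x, [0] * n + [1])          # T_n(x) in numpy's Chebyshev basis

for m, n in [(2, 3), (4, 4), (0, 0)]:
    inner = wq * np.sum(T(m, nodes) * T(n, nodes))
    print(m, n, round(inner, 6))                # -> 0, pi/2, pi respectively
```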

In this paper, a model (see Figure 3) of the single-input memristive Chebyshev neural network is proposed, which consists of an input layer, a hidden layer, and an output layer. The weights from the input layer to the hidden layer are set to 1, and the memristor conductances store the weights from the hidden layer to the output layer. The thresholds of all neurons are set to 0. The hidden layer has $m$ neurons, and their transfer functions are a group of Chebyshev orthogonal polynomials $T_0(x), T_1(x), \ldots, T_{m-1}(x)$. Namely, the transfer function of the $j$th hidden neuron is $T_{j-1}(x)$, where
$$T_0(x) = 1, \quad T_1(x) = x, \quad T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x), \quad n \geq 1.$$

Figure 3: Model of single-input memristive Chebyshev neural networks.
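The hidden-layer outputs can be generated directly from the three-term recurrence above. The helper below is a minimal sketch; the function name and array layout are our own, not from the paper.

```python
import numpy as np

def chebyshev_basis(x, m):
    """Hidden-layer outputs T_0(x), ..., T_{m-1}(x) built from the recurrence
    T_0 = 1, T_1 = x, T_{n+1} = 2*x*T_n - T_{n-1}."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    H = np.ones((x.size, m))
    if m > 1:
        H[:, 1] = x
    for n in range(2, m):
        H[:, n] = 2.0 * x * H[:, n - 1] - H[:, n - 2]
    return H                                     # shape (number of samples, m)

print(chebyshev_basis(0.5, 4))                   # [[1.  0.5 -0.5 -1. ]]
```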

4. Realization of Function Approximation Algorithm

Set the memristive Chebyshev neural network model as follows:

input layer: $x$;

input of hidden layer neuron: $x$ (the weights from the input layer to the hidden layer are 1);

output of hidden layer neuron: $c_j = T_{j-1}(x)$, $j = 1, 2, \ldots, m$;

output layer: $y = \sum_{j=1}^{m} w_j\,T_{j-1}(x)$, where $w_j$ is the memristive synaptic weight from the $j$th hidden neuron to the output neuron.

Samples $(x_i, d_i)$, $i = 1, 2, \ldots, P$ ($P$ is the number of the samples), are assumed as inputs, and the errors between the network outputs and the target values are $e_i = d_i - y_i$.

The network training index is
$$E = \frac{1}{2}\sum_{i=1}^{P} e_i^2.$$
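As a quick illustration of this model, the sketch below computes the network output and the training index $E$ for a given weight vector. It uses numpy's built-in Chebyshev routines for the hidden layer; the function names and test values are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def forward(x, w):
    """Network output y = sum_{j=1}^{m} w_j * T_{j-1}(x) of the single-input network."""
    H = C.chebvander(x, w.size - 1)      # hidden-layer outputs T_0(x), ..., T_{m-1}(x)
    return H @ w

def training_index(x, d, w):
    """Training index E = 1/2 * sum_i e_i^2 with e_i = d_i - y_i."""
    e = np.asarray(d, float) - forward(np.asarray(x, float), w)
    return 0.5 * np.sum(e ** 2)

# With w = (0, 1, 0, 0) the network reproduces y = T_1(x) = x exactly, so E = 0.
x = np.linspace(-1.0, 1.0, 5)
print(training_index(x, x, np.array([0.0, 1.0, 0.0, 0.0])))
```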

4.1. Memristive BP Algorithm

In order to establish the relationship between the memristor and the synapse, we propose the following learning rule for the memristive synaptic weight update. According to (4), the memristive BP algorithm takes the conductance change driven by the scaled error as the weight update:
$$w_j(n+1) = w_j(n) + \Delta G_j(n), \qquad \Delta G_j(n) = \frac{k\,\Delta t\,\eta\,e_i\,T_{j-1}(x_i)}{\left(M_0^2 - 2k\varphi_j(n)\right)^{3/2}},$$
where $\eta$ is the learning rate, $n$ is the number of learning iterations, and $\varphi_j$ is the integral of the error $e$ applied to the $j$th memristive synapse.

The learning algorithm is described as follows.

Step 1. Take the number of hidden neurons $m$, choose the initial weights $w_j(0)$ as the initial memristor conductances, set the learning rate $\eta$, give a small positive number $\varepsilon$, and set the training sample set $(x_i, d_i)$, $i = 1, 2, \ldots, P$; let $n = 0$ and $i = 1$.

Step 2. Calculate the output, the error, and the accumulated training index for the current sample:
$$y_i = \sum_{j=1}^{m} w_j(n)\,T_{j-1}(x_i), \qquad e_i = d_i - y_i, \qquad E \leftarrow E + \frac{1}{2}e_i^2.$$

Step 3. Adjust the weights according to the memristive BP rule:
$$w_j(n+1) = w_j(n) + \Delta G_j(n), \quad j = 1, 2, \ldots, m.$$

Step 4. Let $i = i + 1$; if $i \leq P$, then jump back to Step 2; otherwise go to Step 5.

Step 5. If $E \leq \varepsilon$, then stop the learning; otherwise, let $n = n + 1$, $i = 1$, $E = 0$, and jump back to Step 2.
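A minimal Python sketch of Steps 1-5 is given below. The memristor constants are dimensionless illustrative values (not the paper's), the flux term is clipped to keep this toy run inside the model's valid range, and the teacher signal is built from a known Chebyshev expansion whose coefficients lie near the initial conductance so the run converges quickly; all of these choices are assumptions made for the demonstration.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

M0, k, dt = 1.0, 0.1, 1.0            # dimensionless illustrative memristor constants (assumed)

def conductance_gain(phi):
    """Sensitivity c = k*dt / (M0^2 - 2*k*phi)^(3/2) from Eq. (4), with the flux term clipped."""
    return k * dt / np.clip(M0**2 - 2.0 * k * phi, 1e-6, None) ** 1.5

def memristive_bp(x, d, m=4, eta=0.2, eps=1e-4, max_epochs=500):
    """Steps 1-5: per-sample errors drive conductance (weight) changes through Eq. (4)."""
    H = C.chebvander(x, m - 1)       # hidden outputs T_0 .. T_{m-1} for every sample
    w = np.full(m, 1.0 / M0)         # Step 1: initial weights = initial conductances
    phi = np.zeros(m)                # accumulated flux (integral of the error voltage) per synapse
    for epoch in range(1, max_epochs + 1):
        for Hi, di in zip(H, d):     # Steps 2-4: sweep the sample set
            e = di - Hi @ w
            v = eta * e * Hi         # error-derived voltage seen by each synapse
            w += conductance_gain(phi) * v   # Step 3: delta w_j = delta G_j = c * v_j
            phi += v * dt
        E = 0.5 * np.sum((d - H @ w) ** 2)
        if E <= eps:                 # Step 5: stop once the training index is small enough
            break
    return w, E, epoch

x = np.linspace(-1.0, 1.0, 41)
d = C.chebval(x, [0.8, 1.2, 0.9, 1.1])     # toy teacher signal (a known 4-term expansion)
w, E, n = memristive_bp(x, d)
print(E, n)                                # E falls below eps within a few dozen epochs
```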

4.2. Memristive Deriving Algorithm

The basic principles of the memristive deriving algorithm are as follows. Start from a small number of hidden neurons as the initial cells. If, after $N$ training iterations, the error no longer decreases, that is, $|E(n) - E(n-N)| \leq \delta$ ($\delta$ is a pre-given precision index), the network derives automatically by adding one hidden neuron ($m = m + 1$). This process is repeated until the hidden neurons stop deriving and learning; once the target accuracy is reached, the neural network topology has been generated automatically.

The specific algorithm can be described as follows.

Step 1. Take the initial number of hidden neurons $m$, choose the initial weights as the initial memristor conductances, set the learning rate $\eta$, give the small positive numbers $\varepsilon$ and $\delta$, set the training set $(x_i, d_i)$, $i = 1, 2, \ldots, P$, and set the number of learning iterations $n = 0$.

Step 2. The input of each hidden layer neuron is $x_i$. Calculate the hidden layer outputs, the network output, and the error:
$$c_j = T_{j-1}(x_i), \qquad y_i = \sum_{j=1}^{m} w_j(n)\,c_j, \qquad e_i = d_i - y_i.$$

Step 3. The weight update rule is the same memristive rule as in Section 4.1:
$$w_j(n+1) = w_j(n) + \Delta G_j(n), \quad j = 1, 2, \ldots, m.$$

Step 4. Calculate the training index
$$E = \frac{1}{2}\sum_{i=1}^{P} e_i^2.$$
If $E \leq \varepsilon$, stop deriving and learning, which indicates that the neural network topology has been generated automatically. Otherwise, go to Step 5.

Step 5. If $|E(n) - E(n-N)| \leq \delta$, the training error of the network no longer decreases, and the number of hidden neurons should be increased in order to improve the network performance, namely $m = m + 1$. Let $n = n + 1$ and jump back to Step 2 to continue learning.
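The deriving loop can be sketched in the same style as the BP loop above (the same assumed constants and conductance_gain helper are repeated so the snippet runs on its own). The stall test, window length, and toy teacher signal are illustrative choices, not the paper's settings.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

M0, k, dt = 1.0, 0.1, 1.0                          # same illustrative constants as above

def conductance_gain(phi):
    return k * dt / np.clip(M0**2 - 2.0 * k * phi, 1e-6, None) ** 1.5

def memristive_deriving(x, d, m=2, eta=0.2, eps=1e-4, delta=1e-6,
                        window=20, max_epochs=2000, m_max=10):
    """Self-growing loop: train as in the memristive BP sketch and add one hidden
    neuron whenever the training index E stops improving (Steps 1-5 of Section 4.2)."""
    w, phi = np.full(m, 1.0 / M0), np.zeros(m)
    history = []                                   # recent values of E for the stall test
    for epoch in range(1, max_epochs + 1):
        H = C.chebvander(x, m - 1)
        for Hi, di in zip(H, d):
            e = di - Hi @ w
            v = eta * e * Hi
            w += conductance_gain(phi) * v
            phi += v * dt
        E = 0.5 * np.sum((d - H @ w) ** 2)
        history.append(E)
        if E <= eps:                               # Step 4: target accuracy reached
            break
        stalled = len(history) > window and history[-window - 1] - E <= delta
        if stalled and m < m_max:                  # Step 5: grow the hidden layer
            m += 1
            w = np.append(w, 1.0 / M0)             # the new synapse starts at G_0
            phi = np.append(phi, 0.0)
            history.clear()
    return m, w, E, epoch

x = np.linspace(-1.0, 1.0, 41)
d = C.chebval(x, [1.0, 0.9, 0.6, 0.5])             # toy teacher: a known 4-term expansion
m_final, w, E, n = memristive_deriving(x, d)
print(m_final, E, n)                               # the hidden layer grows until E <= eps
```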

5. Experimental Results

We use the memristive Chebyshev neural networks to realize nonlinear function approximation through the memristive BP algorithm and the memristive deriving algorithm, respectively. The memristor model parameters (the values of $R_{\text{ON}}$, $R_{\text{OFF}}$, $M_0$, $D$, $\mu_v$, and $\Delta t$) are fixed throughout the simulations.

5.1. Memristive BP Algorithm Realizes Function Approximation

The 1 × 8 × 1 network structure is shown in Figure 3. The learning rate, the number of samples, and the target mean square error are fixed in advance, and the maximum number of learning iterations is 1000. The experimental result is shown in Figure 4. The relationship between the training error and the number of learning iterations is shown in Figure 4(a). Learning ends after 735 iterations, when the minimum training error meets the requirements of the function approximation. The memristive synaptic weights are constantly updated during the learning process, as shown in Figure 4(b). Figures 4(c) and 4(d) show the final memristive weights and the approximation of the actual output signal to the teacher signal, respectively.

Figure 4: Simulation results for memristive BP algorithm.
5.2. Memristive Deriving Algorithm Realizes Function Approximation

We adopt a 1 × 2 × 1 initial neural network with a learning rate of 0.02, and the experimental result is shown in Figure 5. The final network structure is 1 × 25 × 1, and learning ends after 420 iterations. Figure 5(a) shows the final memristive synaptic weights of the hidden layer. Figure 5(b) shows the approximation of the actual output signal to the teacher signal.

Figure 5: Simulation result for memristive deriving algorithm.

Compared with the memristive BP algorithm, the memristive deriving algorithm requires fewer learning iterations (735 − 420 = 315 fewer) because its neural network structure is generated automatically.

6. Conclusions

The HP memristor model is analyzed, and the correspondence between the memristive conductance change and the synapse weight update in Chebyshev neural networks is established. Two learning algorithms for memristive synapse weight update are proposed, and nonlinear function approximation is implemented by the memristive BP algorithm and the memristive deriving algorithm, respectively. On the one hand, the memristive BP network has a fixed structure and is easy to implement. On the other hand, the memristive deriving network has a dynamic structure that grows toward the optimum solution. Memristive Chebyshev neural networks are expected to be useful in pattern recognition and data compression.

Acknowledgments

The work was supported by National Natural Science Foundation of China (Grant nos. 60972155 and 61101233), Fundamental Research Funds for the Central Universities (Grant nos. XDJK2012A007 and XDJK2013B011), University Excellent Talents Supporting Foundations of Chongqing (Grant no. 2011-65), University Key Teacher Supporting Foundations of Chongqing (Grant no. 2011-65), Technology Foundation for Selected Overseas Chinese Scholars, Ministry of Personnel in China (Grant no. 2012-186), National Science Foundation for Postdoctoral Scientists of China (Grant no. CPSF20100470116), and “Spring Sunshine Plan” Research Project of Ministry of Education of China (Grant no. z2011148).

References

  1. S. Purwar, I. N. Kar, and A. N. Jha, “On-line system identification of complex systems using Chebyshev neural networks,” Applied Soft Computing Journal, vol. 7, no. 1, pp. 364–372, 2007.
  2. P. Akritas, I. Antoniou, and V. V. Ivanov, “Identification and prediction of discrete chaotic maps applying a Chebyshev neural network,” Chaos, Solitons and Fractals, vol. 11, no. 1, pp. 337–344, 2000.
  3. A. K. Shrivastava and S. Purwar, “State feedback and output feedback tracking control of discrete-time nonlinear system using Chebyshev neural networks,” in Proceedings of the International Conference on Power, Control and Embedded Systems (ICPCES '10), pp. 1–6, Allahabad, India, December 2010.
  4. A. M. Zou, K. D. Kumar, and Z. G. Hou, “Quaternion-based adaptive output feedback attitude control of spacecraft using Chebyshev neural networks,” IEEE Transactions on Neural Networks, vol. 21, no. 9, pp. 1457–1471, 2010.
  5. A. M. Zou, K. D. Kumar, Z. G. Hou, and X. Liu, “Finite-time attitude tracking control for spacecraft using terminal sliding mode and Chebyshev neural network,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 41, no. 4, pp. 950–963, 2011.
  6. S. P. Yan, M. F. Peng, J. F. Lei, et al., “CO2 concentration detection based on Chebyshev neural network and best approximation theory,” Instrument Technique and Sensor, vol. 6, pp. 107–110, 2011.
  7. A. Afifi, A. Ayatollahi, and F. Raissi, “Implementation of biologically plausible spiking neural network models on the memristor crossbar-based CMOS/nano circuits,” in Proceedings of the European Conference on Circuit Theory and Design (ECCTD '09), pp. 563–566, Antalya, Turkey, August 2009.
  8. X. F. Hu, S. K. Duan, and L. D. Wang, “A novel chaotic neural network using memristors with applications in associative memory,” Abstract and Applied Analysis, vol. 2012, Article ID 405739, 19 pages, 2012.
  9. L. D. Wang, X. Y. Fang, S. K. Duan, et al., “PID controller based on memristive CMAC network,” Abstract and Applied Analysis, vol. 2013, Article ID 510238, 2013.
  10. Y. V. Pershin and M. di Ventra, “Experimental demonstration of associative memory with memristive neural networks,” Neural Networks, vol. 23, no. 7, pp. 881–886, 2010.
  11. M. J. Sharifi and Y. M. Banadaki, “General SPICE models for memristor and application to circuit simulation of memristor-based synapses and memory cells,” Journal of Circuits, Systems and Computers, vol. 19, no. 2, pp. 407–424, 2010.
  12. D. Chabi and J. O. Klein, “Hight fault tolerance in neural crossbar,” in Proceedings of the 5th Conference on Design and Technology of Integrated Systems in Nanoscale Era (DTIS '10), pp. 1–6, Hammamet, Tunisia, March 2010.
  13. K. D. Cantley, A. Subramaniam, H. J. Stiegler, et al., “Hebbian learning in spiking neural networks with nanocrystalline silicon TFTs and memristive synapses,” IEEE Transactions on Nanotechnology, vol. 10, no. 5, pp. 1066–1073, 2011.
  14. S. Y. Gao, S. K. Duan, and L. D. Wang, “On memristive cellular neural network and its applications in noise removal and edge extraction,” Journal of Southwest University, vol. 33, no. 11, pp. 63–70, 2011.
  15. H. Kim, M. P. Sah, C. J. Yang, T. Roska, and L. O. Chua, “Neural synaptic weighting with a pulse-based memristor circuit,” IEEE Transactions on Circuits and Systems I, vol. 59, no. 1, pp. 148–158, 2012.
  16. D. B. Strukov, G. S. Snider, D. R. Stewart, et al., “The missing memristor found,” Nature, vol. 453, pp. 80–83, 2008.
  17. L. D. Wang, E. Drakakis, S. K. Duan, et al., “Memristor model and its application for chaos generation,” International Journal of Bifurcation and Chaos, vol. 22, no. 8, Article ID 1250205, 14 pages, 2012.
  18. G. D. Mo and K. D. Liu, Function Approximation Methods, Science Press, Beijing, China, 2003.