Computational Intelligence and Neuroscience
Volume 2018, Article ID 7361628, 10 pages
https://doi.org/10.1155/2018/7361628
Research Article

Fractional-Order Deep Backpropagation Neural Network

College of Computer Science, Sichuan University, Chengdu 610065, China

Correspondence should be addressed to Yi Zhang; yizhang.scu@outlook.com

Received 13 March 2018; Accepted 6 June 2018; Published 3 July 2018

Academic Editor: Friedhelm Schwenker

Copyright © 2018 Chunhui Bao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In recent years, research on artificial neural networks based on fractional calculus has attracted much attention. In this paper, we proposed a fractional-order deep backpropagation (BP) neural network model with L2 regularization. The proposed network was optimized by the fractional gradient descent method with the Caputo derivative. We also illustrated the necessary conditions for the convergence of the proposed network. The influence of L2 regularization on the convergence was analyzed with the fractional-order variational method. Experiments were performed on the MNIST dataset to demonstrate that the proposed network is deterministically convergent and can effectively avoid overfitting.

1. Introduction

It is well known that artificial neural networks (ANNs) are the abstraction, simplification, and simulation of the human brain and reflect its basic characteristics [1]. In recent years, great progress has been made in the research of deep neural networks. Due to their powerful ability of complex nonlinear mapping, ANNs have successfully solved many practical problems in pattern recognition, intelligent robotics, automatic control, prediction, biology, medicine, economics, and other fields [2, 3]. The BP neural network is one of the most basic and typical multilayer feedforward neural networks, which is trained by the backpropagation (BP) algorithm. BP, which is an efficient way to optimize ANNs, was first introduced by Werbos in 1974. Then, Rumelhart and McClelland et al. implemented the BP algorithm in detail in 1987 and applied it to the multilayer version of Minsky's network model [4–6].

Fractional calculus has a history as long as that of integer-order calculus, and in the past three hundred years its theory has made great progress [7–11]. Its basis is the differentiation and integration of arbitrary fractional order. Nowadays, fractional calculus is widely used in diffusion processes [12–14], viscoelasticity theory [15], automatic control [16–18], signal processing [19–21], image processing [22–25], medical imaging [26–28], neural networks [29–37], and many other fields. Due to its long-term memory, nonlocality, and weak singularity characteristics [29–37], fractional calculus has been successfully applied to ANNs. For instance, Boroomand constructed Hopfield neural networks based on fractional calculus [37]. Kaslik analyzed the stability of fractional-order Hopfield neural networks [30]. Pu proposed a fractional steepest descent approach and offered a detailed analysis of its learning conditions, stability, and convergence [38]. Wang applied the fractional steepest descent algorithm to train BP neural networks and proved the monotonicity and convergence of a three-layer example [33]. However, the fractional-order BP neural network model proposed in [33] has three limitations. First, the neural network in [33] had only 3 layers; it was actually a shallow network and was not suitable for demonstrating the potential of the method for deep learning. Second, the fractional order of this model was restricted to a limited range without reasonable analysis. Third, the loss function did not contain a regularization term, which is an efficient way to avoid overfitting, especially when the training set is of small scale. Overfitting means that the model has high prediction accuracy on the training set but low prediction accuracy on the testing set; this makes the generalization ability of the model poor and greatly reduces its application value.

In this paper, we proposed a deep fractional-order BP neural network with an L2 regularization term, in which the fractional order could be any positive real number. With the fractional-order variational method, the influence of the L2 regularization on the convergence of the proposed model was explored. The performance of the proposed model was evaluated on the MNIST dataset.

The structure of the paper is as follows: in Section 2, the definitions and simple properties of fractional calculus are introduced. In Section 3, the proposed fractional-order multilayer BP neural network is given in detail. In Section 4, the necessary conditions for the convergence of the proposed BP algorithm and the influence of L2 regularization are stated. In Section 5, experimental results are presented to illustrate the effectiveness of our model. Finally, the paper is concluded in Section 6.

2. Background Theory for Fractional Calculus

In this section, the basic knowledge of fractional calculus is introduced, including the definitions and several simple properties used in this paper.

Different from integer-order calculus, the fractional derivative does not yet have a unified time-domain expression. The commonly used definitions of the fractional derivative are the Grünwald-Letnikov (G-L), Riemann-Liouville (R-L), and Caputo derivatives [7–11].

The following is the G-L definition of the fractional derivative:

$${}_{a}^{GL}D_t^v f(t) = \lim_{h \to 0} h^{-v} \sum_{k=0}^{[(t-a)/h]} (-1)^k \frac{\Gamma(v+1)}{\Gamma(k+1)\Gamma(v-k+1)} f(t-kh),$$

where ${}_{a}^{GL}D_t^v$ denotes the fractional differential operator based on the G-L definition, $f(t)$ denotes a differintegrable function, $v$ is the fractional order, $[a,t]$ is the domain of $f(t)$, $\Gamma(\cdot)$ is the Gamma function, and $[\cdot]$ is the rounding function.
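As a concrete illustration of the G-L sum, the following minimal Python sketch approximates the $v$-order derivative by truncating the limit at a small step $h$ and compares the result with the known analytic value for $f(t) = t^2$ with lower terminal $a = 0$; the function, step size, and parameter values are illustrative choices, not taken from the paper.

```python
import math

def gl_fractional_derivative(f, t, v, a=0.0, h=1e-4):
    """Truncated Grunwald-Letnikov approximation of the v-order derivative of f at t,
    with lower terminal a and step h (smaller h gives better accuracy)."""
    n = int((t - a) / h)            # number of terms, [(t - a)/h]
    total, coeff = 0.0, 1.0         # coeff holds (-1)**k * binom(v, k), built recursively
    for k in range(n + 1):
        total += coeff * f(t - k * h)
        coeff *= (k - v) / (k + 1)  # recurrence for the generalized binomial coefficients
    return total / h**v

if __name__ == "__main__":
    v, t = 0.5, 1.0
    numeric = gl_fractional_derivative(lambda x: x**2, t, v)
    analytic = math.gamma(3) / math.gamma(3 - v) * t**(2 - v)
    print(numeric, analytic)        # the two values should be close
```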

The R-L definition of the fractional derivative is as follows:

$${}_{a}^{RL}D_t^v f(t) = \frac{1}{\Gamma(n-v)} \frac{d^n}{dt^n} \int_a^t \frac{f(\tau)}{(t-\tau)^{v-n+1}} \, d\tau, \quad n-1 \le v < n,$$

where ${}_{a}^{RL}D_t^v$ denotes the fractional differential operator based on the R-L definition and $n$ is a positive integer. Moreover, the G-L fractional derivative can be deduced from the definition of the R-L fractional derivative.

The Caputo definition of the fractional derivative is as follows:

$${}_{a}^{C}D_t^v f(t) = \frac{1}{\Gamma(n-v)} \int_a^t \frac{f^{(n)}(\tau)}{(t-\tau)^{v-n+1}} \, d\tau, \quad n-1 < v \le n,$$

where ${}_{a}^{C}D_t^v$ is the fractional differential operator based on the Caputo definition and $n$ is a positive integer.

Fractional calculus is more difficult to compute than integer calculus. Several mathematical properties used in this paper are given here. The fractional differential of a linear combination of differintegrable functions is as follows:

$$D^v \left( \lambda_1 f(t) + \lambda_2 g(t) \right) = \lambda_1 D^v f(t) + \lambda_2 D^v g(t),$$

where $f(t)$ and $g(t)$ are differintegrable functions and $\lambda_1$ and $\lambda_2$ are constants.

The fractional differential of the constant function $f(t) = C$ ($C$ is a constant) is different under different definitions:

For the G-L definition,

$${}_{a}^{GL}D_t^v C = \frac{C (t-a)^{-v}}{\Gamma(1-v)}.$$

For the R-L definition,

$${}_{a}^{RL}D_t^v C = \frac{C (t-a)^{-v}}{\Gamma(1-v)}.$$

And for the Caputo definition,

$${}_{a}^{C}D_t^v C = 0.$$

According to (6), (7), and (8), we can see that for the G-L and R-L definitions the fractional differential of a constant function is not equal to 0. Only with the Caputo definition does the fractional differential of a constant function equal 0, which is consistent with integer-order calculus. Therefore, the Caputo definition is widely used in solving engineering problems, and it was employed to calculate the fractional-order derivative in this paper. The fractional differential of the power function $f(t) = (t-a)^{\beta}$, $\beta > n-1$, is as follows:

$${}_{a}^{C}D_t^v (t-a)^{\beta} = \frac{\Gamma(\beta+1)}{\Gamma(\beta-v+1)} (t-a)^{\beta-v}.$$
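These properties can be checked numerically with a few lines of Python; the snippet below evaluates the analytic Caputo power rule and, for comparison, the nonzero G-L/R-L derivative of a constant. The specific numbers are illustrative only.

```python
from math import gamma

def caputo_power_rule(beta, v, t, a=0.0):
    """Analytic Caputo derivative of (t - a)**beta of order v, valid for beta > ceil(v) - 1."""
    return gamma(beta + 1) / gamma(beta - v + 1) * (t - a) ** (beta - v)

v, t = 0.9, 2.0
# Caputo derivative of f(t) = 5 + t**2: the constant 5 vanishes, only t**2 contributes.
print(caputo_power_rule(2, v, t))
# For comparison, the G-L/R-L derivative of the constant 5 alone is nonzero:
print(5 * t ** (-v) / gamma(1 - v))
```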

3. Algorithm Description

3.1. Fractional-Order Deep BP Neural Networks

In this section, we introduce the fractional-order deep BP neural network with $L$ layers. $n_l$, $1 \le l \le L$, is the number of neurons in the $l$-th layer. $W^{(l)}$ denotes the weight matrix connecting the $(l-1)$-th layer and the $l$-th layer. $f_l$ denotes the corresponding activation function for the $l$-th layer. $x_j$ and $y_j$ are the input and the corresponding ideal output of the $j$-th sample, and the training sample set is $\{x_j, y_j\}_{j=1}^{J}$. $u^{(l)}$ denotes the total input of the $l$-th layer. If neurons in the $l$-th layer are not connected to any neurons in the previous layer, these neurons are called external outputs of the $l$-th layer, denoted as $o_{ext}^{(l)}$. On the contrary, if neurons in the $l$-th layer are connected to every neuron in the previous layer, these neurons are called internal outputs of the $l$-th layer, denoted as $o_{int}^{(l)}$. $o^{(l)}$ denotes the total output of the $l$-th layer. The forward computing of the fractional-order deep BP neural network is as follows:

$$u^{(l)} = W^{(l)} o^{(l-1)}, \qquad o_{int}^{(l)} = f_l\left(u^{(l)}\right), \qquad o^{(l)} = \left[o_{ext}^{(l)}; \, o_{int}^{(l)}\right].$$
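The forward computation with external and internal outputs can be sketched in NumPy as follows; this is a minimal reading of the description above, assuming the external outputs are simply concatenated with the internal outputs to form the total output of a layer (the concatenation order and variable names are illustrative, not taken from the paper).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(weights, external_inputs, activation=sigmoid):
    """Forward pass for a network whose layers may receive extra 'external' inputs.
    weights[l] maps the total output of layer l to the internal neurons of layer l + 1;
    external_inputs[l] (or None) is concatenated to the internal outputs of layer l."""
    o = external_inputs[0]                     # layer 0: total output is its external input
    outputs = [o]
    for l, W in enumerate(weights):
        u = W @ o                              # total input of layer l + 1
        internal = activation(u)               # internal outputs of layer l + 1
        ext = external_inputs[l + 1] if l + 1 < len(external_inputs) else None
        o = internal if ext is None else np.concatenate([ext, internal])
        outputs.append(o)                      # total output of layer l + 1
    return outputs
```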

Particularly, external outputs can exist in any layer except the last one. With the square error function, the error corresponding to the $j$-th sample can be denoted as

$$E_j = \frac{1}{2} \sum_{i=1}^{n_L} \left( y_{j,i} - o_{j,i}^{(L)} \right)^2,$$

where $y_{j,i}$ denotes the $i$-th element of $y_j$ and $o_{j,i}^{(L)}$ denotes the $i$-th element of the network output $o^{(L)}$ for the $j$-th sample.

The total error of the neural network is defined as

$$E = \sum_{j=1}^{J} E_j.$$

In order to minimize the total error of the fractional-order deep BP neural network, the weights are updated by the fractional gradient descent method with the Caputo derivative. Let $\partial^{v} E / \partial W^{v}$ denote the $v$-order Caputo derivative of $E$ with respect to the weights $W$. The backpropagation of the fractional-order deep BP neural network can be derived with the following steps.

Firstly, we define the layer-wise error term in (13).

According to (13), the backward recursion can be derived.

The relationship between the error terms of adjacent layers is then given by (17).

Then, according to the chain rule and (17), we obtain the $v$-order gradient of the error with respect to the weights of each layer, given in (18).

The updating formula is

$$W^{(l)}_{k+1} = W^{(l)}_{k} - \eta \, \frac{\partial^{v} E}{\partial \left(W^{(l)}_{k}\right)^{v}},$$

where $k$ denotes the $k$-th iteration and $\eta$ is the learning rate.
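The following Python sketch illustrates one common way such a Caputo-based update is implemented in practice. It assumes the $v$-order gradient is approximated by modulating the ordinary backpropagated gradient with the Caputo power rule applied to the weights themselves, with a lower terminal $c$; this is a frequently used approximation in the fractional gradient descent literature and is not necessarily the exact expression in (18).

```python
import numpy as np
from scipy.special import gamma

def caputo_weight_update(W, grad_E, lr, v, c=0.0, eps=1e-8):
    """One fractional gradient-descent step (a sketch under the stated assumptions).
    The v-order Caputo 'gradient' is approximated as
        dE/dW * |W - c|**(1 - v) / Gamma(2 - v),
    which reduces to the ordinary gradient when v = 1."""
    frac_grad = grad_E * (np.abs(W - c) + eps) ** (1.0 - v) / gamma(2.0 - v)
    return W - lr * frac_grad
```

With $v = 1$ the modulation factor equals 1 and the step reduces to the standard BP update $W \leftarrow W - \eta \, \partial E / \partial W$.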

3.2. Fractional-Order Deep BP Neural Networks with L2 Regularization

A fractional-order BP neural network can easily be overfitted when the training set is small. L2 regularization is a useful way to keep models from overfitting without modifying the architecture of the network. Therefore, by introducing the L2 regularization term into the total error, the modified error function can be presented as

$$E_{new} = E + \lambda \sum_{l} \left\| W^{(l)} \right\|_2^2,$$

where $\sum_{l} \| W^{(l)} \|_2^2$ denotes the sum of squares of all weights and $\lambda$ denotes the regularization parameter.

By introducing (18), we obtain the $v$-order gradient of the modified error function $E_{new}$ with respect to the weights of each layer.

The updating formula is

$$W^{(l)}_{k+1} = W^{(l)}_{k} - \eta \, \frac{\partial^{v} E_{new}}{\partial \left(W^{(l)}_{k}\right)^{v}},$$

where $k$ denotes the $k$-th iteration and $\eta$ is the learning rate.
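Under the same assumptions as the previous sketch, the L2 term enters through the ordinary gradient before the fractional modulation; the placement of $\lambda$ below is our own reading, not the paper's exact formula.

```python
import numpy as np
from scipy.special import gamma

def caputo_weight_update_l2(W, grad_E, lr, v, lam, c=0.0, eps=1e-8):
    """One fractional step on the regularized error E_new = E + lam * sum(W**2) (sketch).
    Assumes the L2 term contributes 2*lam*W to the ordinary gradient, which is then
    modulated by the same Caputo power-rule factor as in the unregularized sketch."""
    grad_new = grad_E + 2.0 * lam * W        # gradient of the regularized error
    frac_grad = grad_new * (np.abs(W - c) + eps) ** (1.0 - v) / gamma(2.0 - v)
    return W - lr * frac_grad
```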

4. Convergence Analysis

In this section, the convergence of the proposed fractional-order BP neural network is analyzed. According to previous studies [39–42], there are four necessary conditions for the convergence of BP neural networks:

(1) The activation functions are bounded and infinitely differentiable on $\mathbb{R}$, and all of their corresponding derivatives are also continuous and bounded on $\mathbb{R}$. This condition can be easily satisfied because the most common sigmoid activation functions are uniformly bounded on $\mathbb{R}$ and infinitely differentiable.

(2) The weight sequence remains bounded during the training procedure; that is, all the weights stay within a domain with a certain boundary.

(3) The learning rate has an upper bound.

(4) Let the weight matrix consist of all the weights of the network and consider the $v$-order stationary point set of the error function. One necessary condition is that this stationary point set is a finite set.

Then, the influence of L2 regularization on the convergence is derived by using the fractional-order variational method.

According to (20), $E_{new}$ is defined as a fractional-order multivariable function of the weights. The proposed fractional-order BP algorithm aims to minimize $E_{new}$. Let $W^{*}$ denote the fractional-order extreme point of $E_{new}$ and let $W$ denote an admissible point. In addition, $W^{*}$ is composed of the matrices $W^{*(l)}$, where $W^{*(l)}$ denotes the weight matrix between the $(l-1)$-th and $l$-th layers when $E_{new}$ reaches the extreme value; $W$ is composed of the corresponding matrices $W^{(l)}$. The initial weights are random values, so the initial weights can be represented as $W = W^{*} + \varepsilon$, where $\varepsilon$ is a vector that consists of small parameters $\varepsilon^{(l)}$, and $\varepsilon^{(l)}$ corresponds to $W^{(l)}$ and $W^{*(l)}$. If $\varepsilon = 0$, then $W = W^{*}$, and $E_{new}$ reaches the extreme value. Thus, the process of training the BP neural network from a random initial weight $W$ to $W^{*}$ can be treated as the process of training $\varepsilon$ from a random initial value to 0.

The $v$-order derivative of $E_{new}$ with respect to $\varepsilon$ is given in (23), where $v$ is the fractional order, which is a positive real number.

From (23), we can see that, when $\varepsilon = 0$, if the $v$-order differential of $E_{new}$ with respect to $\varepsilon$ exists, then $E_{new}$ has a $v$-order extreme point and we have (24), i.e., the $v$-order differential vanishes.

In this case, the output of each layer in the neural networks is still given by (10) and (11) and the input of each layer is turned into the following:

When $\varepsilon = 0$, we have

Without loss of generality, according to (18), for the $l$-th layer of the network, the $v$-order differential of $E_{new}$ with respect to $\varepsilon^{(l)}$ can be calculated, where the derivative is arranged as a column vector corresponding to $\varepsilon^{(l)}$.

Since the value of $\varepsilon$ is stochastic, according to the variational principle [43], a necessary condition for (24) to hold is that (28) is satisfied for every layer of the network.

Secondly, without loss of generality, we also have (29).

For (29) to hold, a necessary condition is (30).

With (28) and (30), the Euler-Lagrange equation of $E_{new}$ can be written as (31).

Equation (31) is the necessary condition for the convergence of the proposed fractional-order BP neural network with L2 regularization. From (31), we can see that if $\lambda \ne 0$, then the first-order gradient of the error with respect to the weights does not vanish at the extreme point; this term is the first-order derivative of the error in terms of the weights and can be calculated from the weights and the input samples. It means that the extreme point of the proposed algorithm is not equal to the extreme point of the integer-order BP algorithm or the fractional-order BP algorithm without regularization, and it changes with the values of $\lambda$ and $v$. In addition, it is also clear that the regularization parameter $\lambda$ is bounded, since the values of the input samples and weights are bounded and $\lambda$ is a constant during the training process.

5. Experiments

In this section, the following simulations were carried out to evaluate the performance of the presented algorithm. The simulations have been performed on the MNIST handwritten digit dataset. Each digit in the dataset is a 28 × 28 image, and each image is associated with a label from 0 to 9. We divided each image into four parts, which were top-left, bottom-left, bottom-right, and top-right, and each part was a 14 × 14 matrix. We vectorized each part of the image as a 196 × 1 vector and each label as a 10 × 1 vector.
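This preprocessing can be written, for instance, as the following NumPy sketch; the function name and argument layout are illustrative placeholders rather than the paper's code.

```python
import numpy as np

def preprocess(image_28x28, label):
    """Split a 28x28 MNIST image into four 14x14 quadrants, vectorize each quadrant
    as a 196x1 vector, and one-hot encode the label as a 10x1 vector."""
    quadrants = [
        image_28x28[:14, :14],   # top-left
        image_28x28[14:, :14],   # bottom-left
        image_28x28[14:, 14:],   # bottom-right
        image_28x28[:14, 14:],   # top-right
    ]
    parts = [q.reshape(196, 1) for q in quadrants]
    one_hot = np.zeros((10, 1))
    one_hot[label] = 1.0
    return parts, one_hot
```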

In order to identify the handwritten digits in the MNIST dataset, a neural network with 8 layers was proposed. Figure 1 shows the topological structure of the network. For the first four layers of the network, each layer has 196 external neurons and 32 internal neurons. The outputs of the external neurons are, in turn, the four parts of an image, and the outputs of the internal neurons of the first layer are 1. The last four layers have no external neurons. The fifth, sixth, and seventh layers have 64 internal nodes, and the output layer has ten nodes. The activation functions of all neurons except the first layer are sigmoid functions, which can be given as follows:

$$f(x) = \frac{1}{1 + e^{-x}}.$$
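For reference, the layer dimensions described above imply the following weight shapes, assuming each weight matrix maps the total output (external plus internal) of one layer to the internal neurons of the next; this is our reading of the topology, sketched in Python.

```python
# (external_neurons, internal_neurons) for each of the 8 layers
LAYERS = [(196, 32), (196, 32), (196, 32), (196, 32), (0, 64), (0, 64), (0, 64), (0, 10)]

# Weight matrix l connects the total output of layer l (external + internal)
# to the internal neurons of layer l + 1.
weight_shapes = [
    (LAYERS[l + 1][1], LAYERS[l][0] + LAYERS[l][1])
    for l in range(len(LAYERS) - 1)
]
print(weight_shapes)  # [(32, 228), (32, 228), (32, 228), (64, 228), (64, 64), (64, 64), (10, 64)]
```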

Figure 1: The topological structure of the neural networks.

The MNIST dataset has a total of 60000 training samples and 10000 testing samples. The simulations demonstrate the performance of the proposed fractional-order BP neural network with L2 regularization, the fractional-order BP neural network, the traditional BP neural network, and the traditional BP neural network with L2 regularization. To evaluate the robustness of our proposed network for a small set of training samples, we set the number of training samples to be 10000, 20000, 30000, 40000, 50000, and 60000. Derivatives of different fractional orders $v$ were employed to compute the gradient of the error function, ranging from orders below 1 to orders above 2 ($v = 1$ corresponds to the standard integer-order derivative of the common BP; orders for which the change of the weights after each iteration is 0 were excluded, because the weights of the neural network cannot be updated in that case). The learning rate was set to 3 and the batch size was set to 100. The number of epochs was 300. Two main metrics, training accuracy and testing accuracy, were used to measure the performance of the results from the different networks. Each network was trained 5 times and the average values were calculated.
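The stated hyperparameters can be summarized as the following configuration sketch; the concrete list of tested fractional orders and per-size regularization parameters is not reproduced in this version of the text, so those entries are left as placeholders.

```python
CONFIG = {
    "training_set_sizes": [10000, 20000, 30000, 40000, 50000, 60000],
    "fractional_orders": None,   # placeholder: the tested orders are not listed in this text
    "learning_rate": 3,
    "batch_size": 100,
    "epochs": 300,
    "runs_per_network": 5,       # results averaged over 5 trainings
    "metrics": ["training_accuracy", "testing_accuracy"],
}
```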

In order to explore the relationship between the fractional orders and the neural network performance, fractional-order neural networks with different orders were trained. Figure 2 shows the results of the different networks with different sizes of the training set. We can find that when the fractional order exceeds 1.6, both the training and testing accuracies decline rapidly, and when the fractional order is greater than 2, the performance of the fractional BP neural networks is much poorer than that with orders below 2. The results for two orders greater than 2 are shown in Table 1 as examples. This result is consistent with the fact that fractional orders are usually limited to no more than 2 when describing physical problems, and this limitation is commonly adopted in fractional-order models.

Table 1: Performances of the algorithms when v>2.
Figure 2: The relationship between the fractional order of gradient descent method and the neural network performance.

From Figure 2, it can be observed that, with the increase of the size of the training set, the performance of the networks improved visibly. Furthermore, it is also obvious that the training and testing accuracies rose gradually with increasing fractional order and reached their peak when the order was slightly greater than 1; after that, the training and testing accuracies began to decline rapidly.

Table 2 shows the optimal orders on the training set and testing set separately for different sizes of the training set, and it can be noticed that the optimal orders are almost all concentrated slightly above 1. The only exception is that, when the number of training samples was 50000, the training accuracy of order 1 was slightly higher than that of the fractional-order cases. Generally, for the MNIST dataset, the performance of the fractional-order BP neural networks is better than that of the integer order.

Table 2: Optimal Orders and Highest Accuracies.

It also can be seen that, in each case, the training accuracy is much higher than the testing accuracy, which means that the BP neural networks exhibit an obvious overfitting phenomenon. To avoid overfitting, the integer-order and fractional-order BP neural networks with L2 regularization were trained. For each size of the training set, a corresponding value of the regularization parameter $\lambda$ was chosen. For the fractional-order neural networks, we chose the fractional order that had the highest testing accuracy in the previous simulations; when the numbers of training samples were 10000, 20000, 30000, 40000, 50000, and 60000, the fractional order was separately set to the corresponding optimal value.

The performance of the proposed fractional-order BP neural network with L2 regularization and the comparison with the integer-order BP neural network (IOBP), the integer-order BP neural network with L2 regularization, and the fractional-order BP neural network (FOBP), in terms of training and testing accuracy, are shown in Table 3, and the change of the testing accuracy with the iterations is given in Figure 3.

Table 3: Performance comparison of different type BP neural networks.
Figure 3: Performance comparison in terms of testing accuracy.

From Table 3 and Figure 3, it can be seen that, after the addition of L2 regularization to the BP neural networks, the training accuracy slightly decreased but the testing accuracy significantly increased, which indicates that adding L2 regularization can effectively suppress overfitting and improve the generalization of BP neural networks. Furthermore, it can be noticed that, after adding L2 regularization, the performance of the fractional-order BP neural network is still better than that of the integer order. One important merit of the L2 regularization is that it gains more benefit when the training set is small. The most probable reason is that the network trained with the smallest number of training samples was affected most by overfitting. With the increase of the training samples, the model gradually changed from overfitting toward underfitting, so the improvement from the regularization method became slight.

Then, the stability and convergence of the proposed fractional-order BP neural network with L2 regularization are demonstrated in Figures 4 and 5. We used the network with the optimal order, which means that the size of the training set was 60000, the fractional order was 11/9, and the corresponding L2 regularization parameter was used. Figure 4 shows the change of the total error during the training process. Without loss of generality, one weight increment was randomly selected, and Figure 5 shows its change during the training process. It is clear that the total error and the selected weight increment converged quickly and stably and were finally close to zero. These observations effectively verify that the proposed algorithm is deterministically convergent.

Figure 4: Changes of total error during the training process.
Figure 5: Changes of the randomly selected weight increment during the training process.

6. Conclusion

In this paper, we applied fractional calculus and the L2 regularization method to deep BP neural networks. Different from previous studies, the proposed model has no limitation on the number of layers, and the fractional order was extended to any real number greater than 0. L2 regularization was also imposed on the error function. Meanwhile, we analyzed the benefits introduced by the L2 regularization for the convergence of the proposed fractional-order BP network. The numerical results support that the fractional-order BP neural networks with L2 regularization are deterministically convergent and can effectively avoid the overfitting phenomenon. How to apply fractional calculus to other, more complex artificial neural networks is an attractive topic for our future work.

Data Availability

The code of this work can be downloaded at https://github.com/BaoChunhui/Deep-fractional-BP-neural-networks.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Key R&D Program of China under Grant 2017YFB0802300, the National Natural Science Foundation of China under Grant 61671312, the Science and Technology Project of Sichuan Province of China under Grant 2018HH0070, and the Strategic Cooperation Project of Sichuan University and Luzhou City under Grant 2015CDLZ-G22.

References

  1. M. Kubat, “Neural networks: a comprehensive foundation by Simon Haykin, Macmillan, 1994, ISBN 0-02-352781-7,” The Knowledge Engineering Review, vol. 13, no. 4, pp. 409–412.
  2. S. A. Kalogirou, “Applications of artificial neural networks in energy systems: a review,” Energy Conversion and Management, vol. 40, no. 10, pp. 1073–1087, 1999.
  3. H. B. Demuth, M. H. Beale, O. De Jess, and M. T. Hagan, Neural Network Design, Martin Hagan, 2014.
  4. D. E. Rumelhart, J. L. McClelland, and the PDP Research Group, Parallel Distributed Processing, vol. 1, MIT Press, Cambridge, MA, USA, 1987.
  5. J. Jia and H. Duan, “Automatic target recognition system for unmanned aerial vehicle via backpropagation artificial neural network,” Aircraft Engineering and Aerospace Technology, vol. 89, no. 1, pp. 145–154, 2017.
  6. Z. Wu and H. Wang, Super-Resolution Reconstruction of SAR Image Based on Non-Local Means Denoising Combined with BP Neural Network, 2016, arXiv preprint arXiv:1612.04755.
  7. E. R. Love, “Fractional derivatives of imaginary order,” Journal of the London Mathematical Society, Second Series, vol. 3, pp. 241–259, 1971.
  8. Y. Povstenko, Linear Fractional Diffusion-Wave Equation for Scientists and Engineers, Birkhäuser, New York, 2015.
  9. A. McBride and G. Roach, Fractional Calculus (Pitman Research Notes in Mathematics, No. 138), Longman Science & Technology, 1986.
  10. K. Nishimoto, Fractional Calculus: Integrations and Differentiations of Arbitrary Order, University of New Haven Press, New Haven, Conn, USA, 1989.
  11. I. Podlubny, Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of Their Solution and Some of Their Applications, vol. 198, Academic Press, 1998.
  12. N. Özdemir and D. Karadeniz, “Fractional diffusion-wave problem in cylindrical coordinates,” Physics Letters A, vol. 372, no. 38, pp. 5968–5972, 2008.
  13. N. Özdemir, O. P. Agrawal, D. Karadeniz, and B. B. İskender, “Analysis of an axis-symmetric fractional diffusion-wave problem,” Journal of Physics A: Mathematical and Theoretical, vol. 42, no. 35, p. 355208, 2009.
  14. Y. Povstenko, “Solutions to the fractional diffusion-wave equation in a wedge,” Fractional Calculus and Applied Analysis, vol. 17, no. 1, pp. 122–135, 2014.
  15. R. L. Bagley and P. J. Torvik, “A theoretical basis for the application of fractional calculus to viscoelasticity,” Journal of Rheology, vol. 27, no. 3, pp. 201–210, 1983.
  16. D. Baleanu, J. A. T. Machado, and A. C. J. Luo, Fractional Dynamics and Control, Springer, New York, NY, USA, 2012.
  17. C. Li and G. Chen, “Chaos in the fractional order Chen system and its control,” Chaos, Solitons & Fractals, vol. 22, pp. 549–554, 2004.
  18. C. A. Monje, B. M. Vinagre, V. Feliu, and Y. Chen, “Tuning and auto-tuning of fractional order controllers for industry applications,” Control Engineering Practice, vol. 16, no. 7, pp. 798–812, 2008.
  19. L. B. Almeida, “Fractional Fourier transform and time-frequency representations,” IEEE Transactions on Signal Processing, vol. 42, no. 11, pp. 3084–3091, 1994.
  20. M. S. Aslam and M. A. Z. Raja, “A new adaptive strategy to improve online secondary path modeling in active noise control systems using fractional signal processing approach,” Signal Processing, vol. 107, pp. 433–443, 2015.
  21. R. Panda and M. Dash, “Fractional generalized splines and signal processing,” Signal Processing, vol. 86, no. 9, pp. 2340–2350, 2006.
  22. M. Xu, J. Yang, D. Zhao, and H. Zhao, “An image-enhancement method based on variable-order fractional differential operators,” Bio-Medical Materials and Engineering, vol. 26, pp. S1325–S1333, 2015.
  23. Y.-F. Pu, N. Zhang, Y. Zhang, and J.-L. Zhou, “A texture image denoising approach based on fractional developmental mathematics,” Pattern Analysis and Applications, vol. 19, no. 2, pp. 427–445, 2016.
  24. Y.-F. Pu, J.-L. Zhou, and X. Yuan, “Fractional differential mask: a fractional differential-based approach for multiscale texture enhancement,” IEEE Transactions on Image Processing, vol. 19, no. 2, pp. 491–511, 2010.
  25. J. Bai and X.-C. Feng, “Fractional-order anisotropic diffusion for image denoising,” IEEE Transactions on Image Processing, vol. 16, no. 10, pp. 2492–2502, 2007.
  26. Y. Zhang, W. Zhang, Y. Lei, and J. Zhou, “Few-view image reconstruction with fractional-order total variation,” Journal of the Optical Society of America A: Optics, Image Science & Vision, vol. 31, no. 5, pp. 981–995, 2014.
  27. Y. Zhang, Y. Wang, W. Zhang, F. Lin, Y. Pu, and J. Zhou, “Statistical iterative reconstruction using adaptive fractional order regularization,” Biomedical Optics Express, vol. 7, no. 3, pp. 1015–1029, 2016.
  28. Y. Zhang, Y.-F. Pu, J.-R. Hu, Y. Liu, Q.-L. Chen, and J.-L. Zhou, “Efficient CT metal artifact reduction based on fractional-order curvature diffusion,” Computational and Mathematical Methods in Medicine, Article ID 173748, 9 pages, 2011.
  29. Y.-F. Pu, Z. Yi, and J.-L. Zhou, “Defense against chip cloning attacks based on fractional Hopfield neural networks,” International Journal of Neural Systems, vol. 27, no. 4, Article ID 1750003, 2017.
  30. E. Kaslik and S. Sivasundaram, “Dynamics of fractional-order neural networks,” in Proceedings of the 2011 International Joint Conference on Neural Networks (IJCNN 2011), pp. 611–618, USA, August 2011.
  31. Y.-F. Pu, Z. Yi, and J.-L. Zhou, “Fractional Hopfield neural networks: fractional dynamic associative recurrent neural networks,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 10, pp. 2319–2333, 2017.
  32. C. Song and J. Cao, “Dynamics in fractional-order neural networks,” Neurocomputing, vol. 142, pp. 494–498, 2014.
  33. J. Wang, Y. Wen, Y. Gou, Z. Ye, and H. Chen, “Fractional-order gradient descent learning of BP neural networks with Caputo derivative,” Neural Networks, vol. 89, pp. 19–30, 2017.
  34. R. Rakkiyappan, R. Sivaranjani, G. Velmurugan, and J. Cao, “Analysis of global O(t^{-α}) stability and global asymptotical periodicity for a class of fractional-order complex-valued neural networks with time varying delays,” Neural Networks, vol. 77, pp. 51–69, 2016.
  35. H. Wang, Y. Yu, and G. Wen, “Stability analysis of fractional-order Hopfield neural networks with time delays,” Neural Networks, vol. 55, pp. 98–109, 2014.
  36. H. Wang, Y. Yu, G. Wen, S. Zhang, and J. Yu, “Global stability analysis of fractional-order Hopfield neural networks with time delay,” Neurocomputing, vol. 154, pp. 15–23, 2015.
  37. A. Boroomand and M. B. Menhaj, “Fractional-order Hopfield neural networks,” in Advances in Neuro-Information Processing, vol. 5506 of Lecture Notes in Computer Science, pp. 883–890, Springer, Berlin, Heidelberg, 2009.
  38. Y.-F. Pu, J.-L. Zhou, Y. Zhang, N. Zhang, G. Huang, and P. Siarry, “Fractional extreme value adaptive training method: fractional steepest descent approach,” IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 4, pp. 653–662, 2015.
  39. H. Shao and G. Zheng, “Boundedness and convergence of online gradient method with penalty and momentum,” Neurocomputing, vol. 74, no. 5, pp. 765–770, 2011.
  40. W. Wu, G. Feng, Z. Li, and Y. Xu, “Deterministic convergence of an online gradient method for BP neural networks,” IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 533–540, 2005.
  41. W. Wu, J. Wang, M. Cheng, and Z. Li, “Convergence analysis of online gradient method for BP neural networks,” Neural Networks, vol. 24, no. 1, pp. 91–98, 2011.
  42. H. Zhang, W. Wu, F. Liu, and M. Yao, “Boundedness and convergence of online gradient method with penalty for feedforward neural networks,” IEEE Transactions on Neural Networks, vol. 20, no. 6, pp. 1050–1054, 2009.
  43. G. Leitmann, The Calculus of Variations and Optimal Control: An Introduction, vol. 24, Springer Science & Business Media, 2013.