Journal of Applied Mathematics

Volume 2014 (2014), Article ID 892653, 6 pages

http://dx.doi.org/10.1155/2014/892653
Research Article

The Construction and Approximation of the Neural Network with Two Weights

Zhiyong Quan and Zhengqiu Zhang

College of Mathematics, Hunan University, Changsha 410082, China

Received 16 June 2014; Revised 3 August 2014; Accepted 3 August 2014; Published 13 August 2014

Academic Editor: H. R. Karimi

Copyright © 2014 Zhiyong Quan and Zhengqiu Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The technique of approximate partition of unity, the method of Fourier series, and inequality techniques are used to construct a neural network with two weights and with sigmoidal activation functions. Furthermore, using inequality techniques, we prove that the neural network with two weights can approximate any nonlinear continuous function more precisely than the BP neural network constructed in (Chen et al., 2012).

1. Introduction

Many neural network models have been presented and applied in pattern recognition, automatic control, signal processing, artificial life, and decision support. Among these models, BP neural networks and RBF neural networks are two classes that are widely used in control because of their ability to approximate any continuous nonlinear function. Up to now, these two classes of neural networks have been successfully applied to approximate nonlinear continuous functions [1–11].

Wang and Zhao [12] and Cao and Zhao [13] presented a class of neural networks, called the neural network with two weights, by combining the advantages of BP neural networks with those of RBF neural networks. This model can simulate not only BP and RBF neural networks but also higher-order neural networks. It contains both a direction weight value, corresponding to BP networks, and a core weight value, corresponding to RBF networks. The function of the neurons of this network takes the form (1), whose ingredients are the neuron output, the activation function, the threshold value, the direction weight value, the core weight value, the input, and two parameters.
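For the reader's convenience, one commonly cited form of the double-weight neuron in this literature (cf. [12, 13]) is the following sketch; the symbol names are our own choice, and the exact form of (1) may differ:

\[
y = f\!\left[\left(\sum_{i=1}^{n} \bigl|\, w_i (x_i - w_i') \,\bigr|^{p}\right)^{s/p} - \theta\right],
\]

where y is the neuron output, f the activation function, θ the threshold value, w_i the direction weights, w_i' the core weights, x_i the inputs, and p and s the two parameters.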

In (1), when the core weight is zero and the two parameters take appropriate values, (1) reduces to the mathematical model of the neurons of BP networks; when the direction weight takes a fixed value and the two parameters take appropriate values, (1) becomes the mathematical model of the neurons of RBF networks.
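Under the sketch above, these two specializations can be written out explicitly (again with our assumed parameter mapping, not verified against the original display): taking core weights w_i' = 0 with p = s = 1 (and signs restored via the sign function defined in the notation below) gives the BP neuron, while fixing direction weights w_i ≡ 1 with p = 2 and s = 1 gives the RBF neuron:

\[
y_{\mathrm{BP}} = f\!\left(\sum_{i=1}^{n} w_i x_i - \theta\right),
\qquad
y_{\mathrm{RBF}} = f\!\left(\Bigl(\sum_{i=1}^{n} (x_i - w_i')^{2}\Bigr)^{1/2} - \theta\right).
\]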

Since Wang and Zhao [12] presented the neural network with two weights, the network has attracted wide attention both in China and abroad. So far, it has been successfully applied in many fields, such as face recognition, voice recognition, and protein structure research [13]. However, as a complete system, the theoretical study of the approximation ability of the neural network with two weights for nonlinear functions is still in its initial stage. Because the function of its neurons takes a very complicated form (see (1)), approximation results for nonlinear functions have rarely appeared in the literature. This motivates us to study the approximation ability of the neural network with two weights for arbitrary nonlinear continuous functions.

In [14], the approximation operators with logarithmic sigmoidal function built from the neural network with two weights (1), together with a class of quasi-interpolation operators, are investigated. Using these operators as approximation tools, upper bounds on the errors of approximating continuous functions are estimated.

In [3], Bochner-Riesz means of double Fourier series are used to construct network operators for approximating nonlinear functions, and the errors of approximation by these operators are estimated.

However, approximation results obtained via Fourier series for two-weight network operators with sigmoidal functions have not yet appeared. So, in this paper, our objective is to prove that, by adjusting the values of the parameters, the neural network with two weights and with sigmoidal functions can approximate any nonlinear continuous function arbitrarily well, and that it has better approximation ability than the BP neural networks constructed in [3].

To help readers follow the mathematical symbols used in the paper, we list the following notation.

Throughout, we use the uniform norm of a continuous function, the Euclidean norm on the underlying space, and the set of continuous real-valued functions on the domain under consideration. We also use the modulus of continuity of a function and the sign function, defined in the standard way.
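In standard notation these definitions read as follows, with K our placeholder for the (unspecified) compact domain and f a continuous function on it:

\[
\|f\| = \sup_{x \in K} |f(x)|, \qquad
\omega(f, \delta) = \sup_{\substack{x, y \in K \\ |x - y| \le \delta}} |f(x) - f(y)|, \qquad
\operatorname{sgn}(t) =
\begin{cases}
1, & t > 0, \\
0, & t = 0, \\
-1, & t < 0.
\end{cases}
\]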

2. Construction and Approximation of the Network Operators with Sigmoidal Function

A function is called a sigmoidal function if it has finite limits at minus and plus infinity, equal to two constants. Sigmoidal functions are an important class of functions, and they play a central role in the study of neural networks.
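In symbols, the two limit conditions take the standard form, with constants a and b (frequently a = 0 and b = 1):

\[
\lim_{x \to -\infty} \sigma(x) = a, \qquad \lim_{x \to +\infty} \sigma(x) = b.
\]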

One of the most familiar sigmoidal functions is the logarithmic-type (logistic) function. For the logarithmic-type function, if we define a suitable even combination of its translates, then better properties can be derived, such as evenness of the combination and the vanishing of its Fourier transform at certain points (see [6]).
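Concretely, the logistic function, and the even combination of its translates standard in this line of work (cf. [6]), are

\[
\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad
\phi(x) = \frac{1}{2}\bigl(\sigma(x+1) - \sigma(x-1)\bigr);
\]

the normalization and shifts are as we read them from [6] and may differ in detail from the original display. This φ is even and positive, and its Fourier transform vanishes at every nonzero integer multiple of π.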

In this paper, we assume that the sigmoidal function is centrally symmetric with respect to a point. Let a sigmoidal function be given and define the associated function as in (6). Then this associated function is even. From the Poisson summation formula (see [15]), we can obtain the following lemma.
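For reference, with the Fourier transform convention $\hat g(\xi) = \int_{\mathbb{R}} g(x) e^{-i\xi x}\,dx$, the Poisson summation formula (see [15]) states that, for suitable g and period T > 0,

\[
\sum_{k \in \mathbb{Z}} g(x + kT) = \frac{1}{T} \sum_{m \in \mathbb{Z}} \hat g\!\left(\frac{2\pi m}{T}\right) e^{2\pi i m x / T}.
\]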

Lemma 1 (see [3]). Assume that the function is sigmoidal and centrally symmetric with respect to the given point, and that the associated function is given by (6). If there exist positive constants such that the stated decay condition holds, then the stated identity follows, where the hat denotes the Fourier transform (see [15]).

If (9) holds, then one has a corresponding estimate for the relevant range of the variable. Introducing a change of variable and using the properties of the Fourier transform, it follows that the identity needed below holds.
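The properties of the Fourier transform invoked here are presumably the standard translation and dilation rules, which under the above convention read

\[
\widehat{g(\cdot - a)}(\xi) = e^{-i a \xi}\, \hat g(\xi), \qquad
\widehat{g(\lambda\,\cdot\,)}(\xi) = \frac{1}{\lambda}\, \hat g\!\left(\frac{\xi}{\lambda}\right) \quad (\lambda > 0).
\]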

Lemma 2. If , then, when , .

Proof. Let the auxiliary function be as indicated; then its derivative can be computed directly. Analyzing the sign of this derivative on each subinterval yields the monotonicity, and hence the claimed inequality follows.

For each function in the class considered, we construct the network operators as in (13), where the parameters are as indicated and the extension of the target function is defined by (14). Now we give the first main result.
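For orientation only, quasi-interpolation network operators in this line of work (e.g., [6, 14]) typically take a shape like

\[
F_n(f)(x) = \sum_{k=-n^2}^{n^2} f\!\left(\frac{k}{n}\right) \phi(n x - k),
\]

with φ as in (6) and f extended as in (14); the exact index ranges, exponents, and the placement of the two weights in the operators (13) of this paper may differ.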

Theorem 3. Assume that the kernel satisfies the conditions of Lemma 1 and that the parameter satisfies the indicated inequality. Then, under the further condition stated, there exists a positive constant such that the stated error estimate holds.

Proof. From (13) and (14), we obtain a decomposition of the error, displayed in (17). Applying Lemma 2 on the first index range, we bound the first group of terms; applying Lemma 2 again on the second index range, we bound the next group. The remaining terms are easy to bound. Next we estimate the final term: substituting (25) into (23) gives (26). Substituting (19)–(22) and (26) into (17) gives the claimed estimate. This finishes the proof of Theorem 3.

Theorem 4. Assume that the kernel satisfies the conditions of Lemma 1 and that the parameter p satisfies the two stated inequalities. Then the neural network with two weights can approximate any nonlinear continuous function more precisely than the BP neural networks constructed in [3].

Proof. From Theorem 3, the error of approximation of the neural network with two weights to any nonlinear continuous function admits the estimate displayed there. Choosing the two parameters so that the stated inequalities hold guarantees that the limit of this error is zero. From [3, Theorem 2.1], the error of approximation of the BP neural networks to the same nonlinear continuous function admits the corresponding estimate. Comparing the two bounds, we obtain (30) for sufficiently large values of the parameter, which tells us that the approximation error of the neural network with two weights is smaller than that of the BP neural networks constructed in [3]. Hence, from (30), the neural network with two weights can approximate any nonlinear continuous function more precisely than the BP neural networks constructed in [3].
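The comparison step rests on the monotonicity of the modulus of continuity: ω(f, ·) is nondecreasing, so a faster-decaying argument yields a smaller bound. Schematically, with placeholder rate exponents β₂ > β₁ > 0 standing in for the paper's exact exponents,

\[
\omega\!\left(f, n^{-\beta_2}\right) \le \omega\!\left(f, n^{-\beta_1}\right) \quad (n \ge 1),
\]

since $n^{-\beta_2} \le n^{-\beta_1}$.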

Remark 5. We can choose the two parameters so that they satisfy all inequalities in [3, Theorem 2.1] and in Theorems 3 and 4. We now give an example to illustrate the result. Starting from the displayed choice, a short computation gives the displayed chain of identities; hence, for some positive constant, the required bound holds. Thus the condition is met, and this choice satisfies all inequalities in [3, Theorem 2.1] and Theorems 3 and 4.
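If the example is the logistic kernel of Section 2 (an assumption on our part), the verification can be made explicit. With the convention $\hat g(\xi) = \int_{\mathbb{R}} g(x) e^{-i\xi x}\,dx$, the logistic function satisfies $\widehat{\sigma'}(\xi) = \pi\xi/\sinh(\pi\xi)$, and for $\phi(x) = \tfrac{1}{2}(\sigma(x+1) - \sigma(x-1))$ one computes

\[
\hat{\phi}(\xi) = \widehat{\sigma'}(\xi)\,\frac{\sin \xi}{\xi} = \frac{\pi \sin \xi}{\sinh(\pi \xi)},
\qquad
\bigl|\hat{\phi}(\xi)\bigr| \le \frac{2\pi}{1 - e^{-2\pi}}\, e^{-\pi |\xi|} \quad (|\xi| \ge 1),
\]

so the Fourier transform decays exponentially (and vanishes at every nonzero integer multiple of π), as required in Lemma 1.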

Remark 6. The method used to obtain approximation errors for the neural network with two weights in this paper differs from that used for BP neural networks in [3]. In particular, we establish and apply the inequality in Lemma 1, together with other inequality techniques different from those in [3], to obtain more precise approximation error estimates than those for BP neural networks.

Remark 7. Theorem 4 tells us that, when the parameters take suitable values and satisfy the two inequality conditions, the neural network with two weights has better approximation ability than the BP neural networks constructed in [3].

3. Conclusions

By adjusting the values of two parameters and one further parameter, we have shown that the neural network with two weights can approximate any nonlinear continuous function more precisely than the BP neural network constructed in [3]. Hence, the neural network with two weights has better approximation ability for nonlinear continuous functions than the BP neural network constructed in [3]. In our approximation result, one parameter is not subject to any restriction; hence, our future direction is to show, under suitable conditions on this parameter and the others, that the neural network with two weights can approximate any nonlinear continuous function more precisely than the BP neural network constructed in [3].

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This project is supported by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry of China (no. 20091341), and the National Natural Science Foundation of China (no. 61065008).

References

1. J. Wang and Z. Xu, "New study on neural networks: the essential order of approximation," Neural Networks, vol. 23, no. 5, pp. 618–624, 2010.
2. F. Cao, T. Xie, and Z. Xu, "The estimate for approximation error of neural networks: a constructive approach," Neurocomputing, vol. 71, no. 4–6, pp. 626–630, 2008.
3. Z. X. Chen, F. L. Cao, and J. W. Zhao, "The construction and approximation of some neural networks operators," Applied Mathematics A: Journal of Chinese Universities, vol. 27, no. 1, pp. 69–77, 2012.
4. H. L. Tolman, V. M. Krasnopolsky, and D. V. Chalikov, "Neural network approximations for nonlinear interactions in wind wave spectra: direct mapping for wind seas in deep water," Ocean Modelling, vol. 8, no. 3, pp. 253–278, 2005.
5. B. Llanas and F. J. Sainz, "Constructive approximate interpolation by neural networks," Journal of Computational and Applied Mathematics, vol. 188, no. 2, pp. 283–308, 2006.
6. Z. Chen and F. Cao, "The approximation operators with sigmoidal functions," Computers & Mathematics with Applications, vol. 58, no. 4, pp. 758–765, 2009.
7. Z. Xu and F. Cao, "The essential order of approximation for neural networks," Science in China Series F, vol. 47, no. 1, pp. 97–112, 2004.
8. G. W. Yang, S. J. Wang, and Q. X. Yan, "Research of fractional linear neural network and its ability for nonlinear approach," Chinese Journal of Computers, vol. 30, no. 2, pp. 189–199, 2007 (Chinese).
9. D. Costarelli and R. Spigler, "Approximation results for neural network operators activated by sigmoidal functions," Neural Networks, vol. 44, pp. 101–106, 2013.
10. D. Costarelli and R. Spigler, "Multivariate neural network operators with sigmoidal activation functions," Neural Networks, vol. 48, pp. 72–77, 2013.
11. P. C. Kainen and V. Kůrková, "An integral upper bound for neural network approximation," Neural Computation, vol. 21, no. 10, pp. 2970–2989, 2009.
12. S. Wang and X. Zhao, "Biomimetic pattern recognition theory and its applications," Chinese Journal of Electronics, vol. 13, no. 3, pp. 373–377, 2004.
13. Y. Cao and X. Zhao, "Data fitting based on a new double weights neural network," Chinese Journal of Electronics, vol. 32, no. 10, pp. 1671–1673, 2004.
14. Z. Zhang, K. Liu, L. Zhu, and Y. Chen, "The new approximation operators with sigmoidal functions," Journal of Applied Mathematics and Computing, vol. 42, no. 1-2, pp. 455–468, 2013.
15. E. M. Stein and G. Weiss, Fourier Analysis on Euclidean Spaces, Princeton University Press, Princeton, NJ, USA, 1971.