Abstract

The technique of approximate partition of unity, the way of Fourier series, and inequality technique are used to construct a neural network with two weights and with sigmoidal functions. Furthermore by using inequality technique, we prove that the neural network with two weights can more precisely approximate any nonlinear continuous function than BP neural network constructed in (Chen et al., 2012).

1. Introduction

Many neural network models have been presented and applied in pattern recognition, automatic control, signal processing, artificial life, and aided decision. Among these models, BP neural networks and RBF neural networks are two classes of neural networks which are widely used in control because of their approximation ability to any continuous nonlinear function. Up to now, these two classes of neural networks have successfully been applied to approach any nonlinear continuous function [111].

Wang and Zhao [12] and Cao and Zhao [13] presented a class of neural network called the neural network with two weights by combining the advantage of BP neural networks with that of RBF neural networks. This model can not only simulate BP neural network and RBF neural network, but also simulate neural networks with higher order. This neural network contains not only direction weight value with respect to BP network, but also core weight value with respect to RBF network. The function of neurons of this neural network is of the following form: where is the output of neurons, is the activation function, is threshold value, is direction weight value, is core weight value, is input, and are two parameters.

In (1), when , , and , then (1) reduces to mathematical model of neurons of BP networks; when takes a fixed value, , and , then (1) becomes mathematical model of neurons of RBF networks.

Since Wang and Zhao [12] presented the neural network with two weights, the network had caused already domestic and international wide attention. So far, the network has been successfully applied in many studying fields such as the research fields of face recognition, voice recognition, and protein structure [13]. However, as a complete system, the theoretical research of approximation ability of the neural network with two weights to nonlinear functions is still in its initial stage. Because the function of its neurons is of very complicated form (see (1)), so far, the approximation results have been rarely found to any nonlinear function in the present literature. This motivates us to study the approximation ability of the neural network with two weights to any nonlinear continuous function.

In [14], the approximation operators with logarithmic sigmoidal function of a neural network with two weights (1) and a class of quasi-interpolation operators are investigated. Using these operators as approximation tools, the upper bounds of estimate errors are estimated for approximating continuous functions.

In [3], Bochner-Riesz means operators of double Fourier series are used to construct network operators for approximating nonlinear functions, and the errors of approximation by the operators are estimated.

However, so far, the approximation results of the neural network with two-weight operators and with sigmoidal functions obtained by using the way of Fourier series have not been found. So, in this paper, our objective is to prove that by adjusting the values of parameters , , and , the neural network with two weights and with sigmoidal functions can approximate any nonlinear continuous function arbitrarily and prove that the neural network with two weights is of better approximation ability than BP neural networks constructed in [3].

To help readers know the mathematical symbols used in the paper, we cite the following notations.

denotes the uniform norm of in , denotes the Euclidean norm of in , and is set of the continuous functions defined on and takes values in .   is the modulus of continuity of defined by where is the sign function defined by

2. Construction and Approximation of the Network Operators with Sigmoidal Function

A function is called a sigmoidal function if the following conditions are satisfied: and , where and are constants. The sigmoidal functions are a class of important functions, which play an important role in the research of neural networks.

One of the most familiar sigmoidal functions is the logarithmic type function defined by For the logarithmic type function, if we define then some better properties such that , the Fourier transform of is equal to 0, and can be implied (see [6]).

In this paper, we assume that the sigmoidal function is central symmetrical with respect to the point . Let be a sigmoidal function and Then and is an even function. From Poisson summation formula (see [15]), we can obtain the following Lemma.

Lemma 1 (see [3]). Assume that is a sigmoidal function and central symmetrical with respect to the point , and is given by (6). If there exist positive constants and such that then where denotes the Fourier transform of (see [15]).

If (9) holds, then one has, for , Let . Then using the property of Fourier transform, it follows that Thus one has

Lemma 2. If , then, when , .

Proof. Let , and then . Since , thus, when , ; when . Hence, . Namely, when , .

For each , , we construct networks operators as follows: where   , , and is the extension of defined by Now we give the first main result as follows.

Theorem 3. Assume that satisfies the conditions of Lemma 1 and satisfies . If , then there exists a positive constant such that

Proof. From (13) and (14), we have Since , , then from Lemma 2 for , , we have Hence Since , , then from Lemma 2 for , , we have Hence It is easy to see that Next we estimate . Consider where Since substituting (25) into (23) gives Substituting (19)–(22) and (26) into (17) gives This finishes the proof of Theorem 3.

Theorem 4. Assume that satisfies the conditions of Lemma 1 and p satisfies , . If , then the neural network with two weights can more precisely approximate any nonlinear continuous function than BP neural networks constructed in [3].

Proof. From Theorem 3, the error of approximation of the neural network with two weights to any nonlinear continuous function is By choosing and such that , , , and , this guarantees that the limit of the above error is zero. From [3, Theorem ], the error of approximation of BP neural networks to the same nonlinear continuous function is Since , then we obtain as sufficiently large which tells us that the approximation error of the neural network with two weights is smaller than that of the BP neural networks constructed in [3]. Hence, from (30), the neural network with two weights can more precisely approximate any nonlinear continuous function than BP neural networks constructed in [3].

Remark 5. We can choose and such that the two parameters satisfy all inequalities in [3, Theorem 2.1], Theorems 3, and 4. Now we give an example to illustrate the result. Let Since we have Hence Obviously, for some positive constant , Thus, . Obviously it satisfies all inequalities in [3, Theorem 2.1] and Theorems 3 and 4.

Remark 6. The method used to obtain approximation errors for the neural network with two weights in our paper is different from that used for BP neural networks in [3]. In our paper, we show and apply the inequality in Lemma 1 and other inequalities techniques which are different from those used in [3] to obtain more precisely approximation errors than BP neural networks.

Remark 7. Theorem 4 tells us, when parameters , take some values and satisfies two inequalities conditions, the neural network with two weights is of better approximation ability than BP neural networks constructed in [3].

3. Conclusions

By adjusting the values of two parameters , and parameter , we show that the neural network with two weights can more precisely approximate any nonlinear continuous function than BP neural network constructed in [3]. Hence, the neural network with two weights is of better approximation ability to any nonlinear continuous function than BP neural network constructed in [3]. In our approximation result, the parameter without restriction condition, hence, our future direction is that under the conditions which the parameter and other parameters satisfy, we will show that the neural network with two weights can more precisely approximate any nonlinear continuous function than BP neural network constructed in [3].

Conflict of Interests

The authors declare no conflict of interests regarding the publication of this paper

Acknowledgments

The project is supported by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry of China (no. 20091341), and the Fund of National Natural Science of China (no. 61065008).