International Journal of Mathematics and Mathematical Sciences
Volume 2018, Article ID 9457578, 4 pages
https://doi.org/10.1155/2018/9457578
Research Article

Efficient Solving of Boundary Value Problems Using Radial Basis Function Networks Learned by Trust Region Method

Penza State University, Penza, Russia

Correspondence should be addressed to Vladimir Ivanovich Gorbachenko; gorvi@mail.ru

Received 7 March 2017; Revised 3 October 2017; Accepted 6 November 2017; Published 3 June 2018

Academic Editor: Irena Lasiecka

Copyright © 2018 Mohie Mortadha Alqezweeni et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A method using radial basis function networks (RBFNs) to solve boundary value problems of mathematical physics is presented in this paper. The main advantages of mesh-free methods based on RBFN are explained here. To learn RBFNs, the Trust Region Method (TRM) is proposed, which simplifies the process of network structure selection and reduces time expenses to adjust their parameters. Application of the proposed algorithm is illustrated by solving two-dimensional Poisson equation.

1. Introduction

Mesh-free methods for solving boundary value problems have been widely studied over the last decade [1]. They belong to the class of projection methods [2–6]. In this paper, we study one of the most promising mesh-free methods, based on radial basis functions (RBFs). In this approach, the approximate solution is represented as a weighted sum of RBFs whose weights are selected so that the approximation satisfies the boundary value problem at the selected sample points. The major difficulty of mesh-free methods based on RBFs is the nonformalizable selection of the basis function parameters. This issue can be overcome with radial basis function neural networks (RBFNs) [7–11]. Solving a boundary value problem with an RBFN then reduces to learning the network. The main difference between the RBFN method and other mesh-free methods is that it adjusts not only the weights of the basis functions but also their parameters [7–11]. In [12], the Trust Region Method (TRM) [13] was suggested for training RBFNs. This article presents new results that improve the performance of the TRM, in particular the use of a Hessian matrix approximation to reduce the computational cost. Two approaches to the initial placement of the RBF centers were considered, and better values of the TRM hyperparameters were found. Section 2 discusses RBFNs in the context of solving boundary value problems. Section 3 presents the proposed methodology for learning RBFNs using the TRM. Section 4 presents an experimental analysis of the proposed method. Finally, Section 5 concludes the paper.

2. Radial Basis Function Networks

RBFN is a network consisting of two layers [14]:
(i) The first layer realizes a nonlinear transformation of the input vector x; an RBF is used as the transformation function.
(ii) The second layer performs the linear summation.

The output of the network is described by the expression

u(x) = Σ_{i=1}^{m} w_i φ(x, p_i),

where m is the number of RBFs, w_i is the weight of the ith RBF, and p_i is its parameter vector. Gaussian functions, multiquadrics, and inverse multiquadrics, among others, are examples of RBFs [14].
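For concreteness, this weighted-sum output can be sketched in Python for Gaussian RBFs; the centers, widths, and weights below are illustrative values, not taken from the paper:

```python
import numpy as np

def rbf_output(x, centers, widths, weights):
    """Weighted sum of Gaussian RBFs: u(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 a_i^2))."""
    # squared distances from the input point to every RBF center
    d2 = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-d2 / (2.0 * widths ** 2))
    return weights @ phi

# toy network: 3 Gaussian RBFs on the plane
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
widths = np.array([0.5, 0.5, 0.5])
weights = np.array([1.0, -0.5, 0.25])

print(rbf_output(np.array([0.0, 0.0]), centers, widths, weights))
```

In an RBFN used as a PDE solver, both `weights` and the parameters in `centers` and `widths` are trainable, which is the distinguishing feature of the method discussed in the paper.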

Consider the process of solving boundary value problems using RBFNs on the example of a boundary value problem given in the following operator form:

Lu(x) = f(x), x ∈ Ω,
Bu(x) = g(x), x ∈ ∂Ω,

where u is the desired solution; L is a differential operator; B is an operator of the boundary conditions; Ω is the solution domain; ∂Ω is the border of the domain; and f and g are known functions. The solution is approximated by the output of the RBFN described above. The solution procedure is as follows.

Select interior and boundary sample points from the sets Ω and ∂Ω.

Determine the structure of the network: the network type, the number of RBFs m, and the type of the RBFs.

Specify initial values: w_i are the weights of the RBFs and p_i are the parameters of the RBFs (the structure of the vector p_i depends on the type of the functions; e.g., for a two-dimensional Gaussian function it can be defined as p_i = (a_i, c_i), where a_i is the shape parameter (width) and c_i is the position of its center).

Learn the network; that is, find values of the weights and parameters at which the error functional, representing the sum of squared residuals at the sample points, reaches its minimum:

J = Σ_{x_j ∈ Ω} (Lu(x_j) − f(x_j))² + λ Σ_{x_k ∈ ∂Ω} (Bu(x_k) − g(x_k))²,

where λ > 0 is the selectable penalty factor, which takes into account the residuals at the boundary sample points. The trained network provides the solution at any arbitrary point when its coordinates are fed to the input layer of the network.
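Once the PDE and boundary-condition residuals have been evaluated at the sample points, the error functional is a penalized sum of squares. A minimal sketch (the residual values and penalty factor are hypothetical):

```python
import numpy as np

def error_functional(residual_interior, residual_boundary, lam):
    """J = sum of squared PDE residuals at interior sample points
         + lam * sum of squared boundary-condition residuals."""
    return np.sum(residual_interior ** 2) + lam * np.sum(residual_boundary ** 2)

# hypothetical residuals at 4 interior and 2 boundary sample points
r_in = np.array([0.1, -0.2, 0.05, 0.0])
r_b = np.array([0.01, -0.03])
print(error_functional(r_in, r_b, lam=100.0))
```

The penalty factor trades off interior accuracy against satisfaction of the boundary conditions; a larger value forces the network to honor the boundary data more strictly.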

3. Learning RBFN Using TRM

The efficiency of the neural network method for solving boundary value problems depends on the efficiency of the method used to minimize the error functional (3). The TRM is one of the best approaches to this minimization problem [12, 15]. Its key features are as follows: it can optimize a large number of parameters simultaneously; it shows high efficiency and convergence even for ill-conditioned problems; it can escape local minima; it can minimize concave functions, that is, functions with a negative definite Hessian matrix; and, finally, when the second-order Taylor expansion is used as the approximating function, minimizing the objective function reduces to minimizing a quadratic functional [13]. All this makes the TRM well suited to minimizing the error functional, which often has a large number of parameters to optimize and many local minima and is an ill-conditioned problem.

The basic idea of the TRM is that, at each iteration k of the minimization of a function f(x), x ∈ D (where D ⊆ R^n is the domain of definition and R is the set of real numbers), the function f is replaced by an approximating function m_k in the trust region Ω_k, and the minimum of m_k is calculated in Ω_k, which becomes the new approximation to the minimum of f.

Depending on how well the decrease predicted by the model is confirmed by the objective function, a decision is taken to expand or contract the trust region. A formal description of the algorithm follows.

Algorithm 1 (TRM).
Step 1 (initialization). Specify an initial value x_0, the radius of the trust region Δ_0, the threshold accuracies η_1 and η_2 of the estimated model such that 0 < η_1 < η_2 < 1, the transformation coefficients γ_1 and γ_2 of the trust region (0 < γ_1 < 1 < γ_2), and the iteration counter k = 0.
Step 2 (approximation of f). Select the norm ‖·‖ and construct a function m_k approximating f in the region Ω_k = {x : ‖x − x_k‖ ≤ Δ_k}.
Step 3 (minimization of m_k). Select a method of constrained minimization of m_k that allows finding a step s_k such that the point x_k + s_k is a global minimum of m_k in Ω_k.
Step 4. This step consists in evaluating the model accuracy, calculated as ρ_k = (f(x_k) − f(x_k + s_k)) / (m_k(x_k) − m_k(x_k + s_k)). If ρ_k > η_1, then x_{k+1} = x_k + s_k; otherwise, x_{k+1} = x_k.
Step 5. Change the trust region radius: Δ_{k+1} = γ_2 Δ_k if ρ_k ≥ η_2; Δ_{k+1} = Δ_k if η_1 < ρ_k < η_2; Δ_{k+1} = γ_1 Δ_k if ρ_k ≤ η_1.
Step 6 (increasing the iteration counter k). If the required accuracy of the solution has been reached, or k is equal to the maximum number of iterations, or the radius of the trust region is too small, then complete the learning; otherwise, go back to Step 2.
Step 7 (stop).
The algorithm above is a generalized TRM, as it gives no instructions on how to construct the function m_k, how to select the norm, or what method to use for the minimization.
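As an illustrative sketch, the generalized trust-region loop can be implemented in one dimension with the second-order Taylor model; the test function, starting point, and parameter values below are hypothetical, not taken from the paper:

```python
import numpy as np

def trust_region_1d(f, grad, hess, x, delta=1.0, eta1=0.25, eta2=0.75,
                    gamma1=0.5, gamma2=2.0, max_iter=100):
    """One-dimensional sketch of the generalized TRM with a second-order Taylor model."""
    for _ in range(max_iter):
        g, h = grad(x), hess(x)
        # Step 3: minimize m(s) = f(x) + g*s + 0.5*h*s^2 subject to |s| <= delta
        if h > 0 and abs(g) <= delta * h:
            s = -g / h                  # unconstrained Newton step fits in the region
        else:
            s = -delta * np.sign(g)     # otherwise step to the region boundary
        if s == 0.0:
            break                       # zero gradient: stationary point reached
        # Step 4: compare actual decrease with the decrease predicted by the model
        pred = -(g * s + 0.5 * h * s * s)
        rho = (f(x) - f(x + s)) / pred if pred != 0.0 else 0.0
        if rho > eta1:
            x = x + s                   # accept the step
        # Step 5: expand or contract the trust region
        delta = gamma2 * delta if rho >= eta2 else (delta if rho > eta1 else gamma1 * delta)
        if delta < 1e-14:
            break
    return x

# minimize the hypothetical test function f(x) = (x - 2)^2 starting far from the minimum
xmin = trust_region_1d(lambda x: (x - 2.0) ** 2,
                       lambda x: 2.0 * (x - 2.0),
                       lambda x: 2.0,
                       x=10.0)
print(xmin)
```

In the paper's setting the variable x is the full vector of RBFN weights and parameters and Step 3 is carried out by Steihaug's method, as described in the next paragraph.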
Since the error functional is a twice differentiable function, the second-order Taylor series can be used as the model m_k; as the norm, we use the Euclidean norm here. Building the second-order Taylor series requires the Hessian matrix, which has a large computational cost. Instead of the exact Hessian, we use its approximation, the product JᵀJ of the Jacobi matrices, where J is the Jacobian of the residual vector of the error functional with respect to the network parameters (the weights, widths, and center coordinates of the RBFs). Using the second-order Taylor series leads to a constrained quadratic minimization problem. To solve it, Steihaug's method [16] is used here. This method is a modification of the preconditioned conjugate direction method that takes into account the restriction on the solution (the solution should lie in Ω_k) during the minimization and can work with a negative definite Hessian matrix.
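The Gauss-Newton approximation of the Hessian from the Jacobian of the residuals can be sketched as follows; the factor of 2 depends on whether the functional carries a 1/2 prefactor, and the example Jacobian is hypothetical:

```python
import numpy as np

# For a sum-of-squares functional J(p) = sum_k r_k(p)^2, the exact Hessian is
#   H = 2 * (Jac^T Jac + sum_k r_k * Hess(r_k)),
# and dropping the second-derivative term gives the cheap approximation
#   H ~= 2 * Jac^T Jac, built from first derivatives only.

def gauss_newton_hessian(jac):
    """Approximate Hessian of a sum-of-squares functional from the residual Jacobian."""
    return 2.0 * jac.T @ jac

# hypothetical Jacobian: 3 residuals, 2 parameters
jac = np.array([[1.0, 0.0],
                [0.0, 2.0],
                [1.0, 1.0]])
print(gauss_newton_hessian(jac))
```

The approximation is always symmetric positive semidefinite and avoids second derivatives of the network output, which is the source of the computational savings mentioned above.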

4. Experimental Study

As an example of solving boundary value problems using an RBFN learned by the TRM, consider the boundary value problem for the two-dimensional Poisson equation described in [8]. This problem often arises in thermodynamics, electrostatics, and image processing.
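Although the specific right-hand side from [8] is not reproduced in this text, the Poisson residuals of a Gaussian RBF expansion can be formed analytically, since the Laplacian of a Gaussian RBF has a closed form. A sketch, verified against a central finite difference at a hypothetical point:

```python
import numpy as np

def phi(x, c, a):
    """2-D Gaussian RBF with center c and width a."""
    return np.exp(-np.sum((x - c) ** 2) / (2.0 * a * a))

def phi_laplacian(x, c, a):
    """Closed-form Laplacian of the 2-D Gaussian RBF:
       (||x - c||^2 / a^4 - 2 / a^2) * phi(x, c, a)."""
    d2 = np.sum((x - c) ** 2)
    return (d2 / a ** 4 - 2.0 / a ** 2) * phi(x, c, a)

# verify against a 5-point central finite difference at a hypothetical point
c, a = np.array([0.3, 0.4]), 0.5
x = np.array([0.6, 0.1])
h = 1e-4
fd = (phi(x + [h, 0], c, a) + phi(x - [h, 0], c, a)
      + phi(x + [0, h], c, a) + phi(x - [0, h], c, a)
      - 4.0 * phi(x, c, a)) / h ** 2
print(phi_laplacian(x, c, a), fd)
```

By linearity, the Laplacian of the network output is the weighted sum of these per-neuron Laplacians, so the interior residuals of the Poisson equation can be evaluated without numerical differentiation.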

Let the solution domain Ω be a square. The problem has an analytical solution, which we use to evaluate the accuracy of the resulting numerical solution, and the error functional of the problem has the form (3). Two experiments were carried out with different initial values of the network parameters. In both experiments, 144 randomly selected sample points were used to learn the networks: 100 of them were located in the solution domain Ω, and 44 on the border ∂Ω. The Gaussian function was used as the RBF. In the first experiment, the RBFN consisted of 16 neurons, whose centers were randomly distributed in a square area; the widths of the RBFs and the initial values of the RBF weights were also chosen randomly. Note that the presented hyperparameters of both the RBFN and the TRM, including the penalty factor, were selected manually, although more advanced techniques, such as grid search or a genetic algorithm, can be used in the future.

Learning of the network was completed in 8 iterations. The solution error was calculated according to the formula of the relative standard error. The error graph is shown in Figure 1.
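A common form of the relative standard error, used here as an assumption since the formula itself is not reproduced in this text, is the ratio of the Euclidean norm of the error to the norm of the exact solution over the evaluation points:

```python
import numpy as np

def relative_standard_error(u_num, u_exact):
    """Relative error norm: ||u_num - u_exact||_2 / ||u_exact||_2."""
    return np.linalg.norm(u_num - u_exact) / np.linalg.norm(u_exact)

# hypothetical numerical and analytical solution values at 3 test points
u_exact = np.array([1.0, 2.0, 2.0])
u_num = np.array([1.0, 2.0, 2.3])
print(relative_standard_error(u_num, u_exact))
```

Because the measure is normalized by the exact solution, it allows comparing the two experiments despite their different network sizes.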

Figure 1: Experiment number 1, error solutions.

In the second experiment, the RBFN consisted of 64 neurons whose centers were located at the nodes of a uniform square grid; the neurons' width was constant and equal to 0.5, and the initial values of the RBF weights were equal to zero. Learning converged in 15 iterations. The error graph is shown in Figure 2.

Figure 2: Experiment number 2, error solutions.

As expected, the more neurons in the network, the more accurately it approximates the desired solution. However, learning such a network requires much more time: the network in the first experiment needed roughly half as many iterations (8 versus 15) as the network in the second experiment. The dependence of the error (10) on the iteration number k is shown in Figure 3.

Figure 3: The dependency of the error (10) on the iteration number .

5. Conclusion

In this paper, a method based on the TRM was proposed for learning RBFNs. Approximate values of the Hessian matrix are used to improve its performance. The method reduces the time needed to learn the network.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was funded by the Russian Foundation for Basic Research, Project no. 16-08-00906А.

References

  1. G. R. Liu, Mesh Free Methods: Moving Beyond the Finite Element Method, CRC Press, Boca Raton, Fla, USA, 2002.
  2. G. E. Fasshauer, Meshfree Approximation Methods with MATLAB, World Scientific, Singapore, 2007.
  3. M. D. Buhmann, Radial Basis Functions: Theory and Implementations, vol. 12 of Cambridge Monographs on Applied and Computational Mathematics, Cambridge University Press, Cambridge, UK, 2003.
  4. W. Chen and Z. J. Fu, Recent Advances in Radial Basis Function Collocation Methods, Springer, 2013.
  5. G. Pang, W. Chen, and Z. Fu, “Space-fractional advection-dispersion equations by the Kansa method,” Journal of Computational Physics, vol. 293, pp. 280–296, 2015.
  6. H. Sun, X. Liu, Y. Zhang, G. Pang, and R. Garrard, “A fast semi-discrete Kansa method to solve the two-dimensional spatiotemporal fractional diffusion equation,” Journal of Computational Physics, vol. 345, pp. 74–90, 2017.
  7. N. Mai-Duy and T. Tran-Cong, “Numerical solution of differential equations using multiquadric radial basis function networks,” Neural Networks, vol. 14, no. 2, pp. 185–199, 2001.
  8. L. Jianyu, L. Siwei, Q. Yingjian, and H. Yaping, “Numerical solution of elliptic partial differential equation using radial basis function neural networks,” Neural Networks, vol. 16, no. 5-6, pp. 729–734, 2003.
  9. A. N. Vasiliev and D. A. Tarhov, Principles and Techniques of Neural Network Modeling, Nestor History, St. Petersburg, Russia, 2014.
  10. N. Yadav, A. Yadav, and M. Kumar, An Introduction to Neural Network Methods for Differential Equations, SpringerBriefs in Applied Sciences and Technology, Springer, 2015.
  11. C. S. Dash, A. K. Behera, S. Dehuri, and S. Cho, “Radial basis function neural networks: a topical state-of-the-art survey,” Open Computer Science, vol. 6, no. 1, pp. 33–63, 2016.
  12. V. I. Gorbachenko and M. V. Zhukov, “Solving boundary value problems of mathematical physics using radial basis function networks,” Computational Mathematics and Mathematical Physics, vol. 57, no. 1, pp. 145–155, 2017.
  13. A. R. Conn, N. I. M. Gould, and P. L. Toint, Trust-Region Methods, Society for Industrial and Applied Mathematics, Philadelphia, Pa, USA, 2000.
  14. S. O. Haykin, Neural Networks and Learning Machines, Pearson, 2008.
  15. V. I. Gorbachenko and M. V. Zhukov, “Approaches and methods of learning networks of radial basis functions for solving problems of mathematical physics,” Neurocomputers: Development, Application, vol. 9, pp. 12–18, 2013.
  16. T. Steihaug, “The conjugate gradient method and trust regions in large scale optimization,” SIAM Journal on Numerical Analysis, vol. 20, no. 3, pp. 626–637, 1983.