Journal of Applied Mathematics, Volume 2014, Article ID 105469, 7 pages. http://dx.doi.org/10.1155/2014/105469
Research Article

## Fast Hankel Transforms Algorithm Based on Kernel Function Interpolation with Exponential Functions

The State Key Laboratory of Transmission Equipment and System Safety and Electrical New Technology, Chongqing 400044, China

Received 24 December 2013; Revised 24 March 2014; Accepted 25 March 2014; Published 5 May 2014

Copyright © 2014 Huaiqing Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The Pravin method for Hankel transforms is based on decomposing the kernel function into exponential functions. The defects of this method are the difficulty of determining its parameters and its lack of adaptability to the kernel function, especially when monotonically decreasing functions are used to approximate convex ones. This paper proposes an improved scheme that adds a new base function to the interpolation procedure. The improved method retains the merit of the Pravin method, namely converting the Hankel integral into an algebraic calculation. Simulation results reveal that the improved method has high precision, high efficiency, and good adaptability to the kernel function. It can be applied to zero-order and first-order Hankel transforms.

#### 1. Introduction

The Hankel transforms (HT) arise naturally, as a result of separation of variables, in problems posed in cylindrical coordinates, and hence involve Bessel functions [1]. The HT are frequently used as a tool for solving numerous scientific problems. For example, the stratified model and cylindrical coordinates are widely used in geophysical research, where the HT of zero or first order arise in forward and inverse calculations. The general Hankel transform pair with kernel $J_\nu(\xi r)$ is defined as
$$F_\nu(\xi) = \int_0^\infty f(r)\, J_\nu(\xi r)\, r\, \mathrm{d}r,$$
and, the HT being self-reciprocal, its inverse is given by
$$f(r) = \int_0^\infty F_\nu(\xi)\, J_\nu(\xi r)\, \xi\, \mathrm{d}\xi,$$
where $J_\nu$ is the $\nu$th-order Bessel function of the first kind. Analytical evaluations are rare, and numerical computation is difficult because of the oscillatory behavior of the Bessel function and the infinite length of the interval.
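The difficulty of direct numerical evaluation can be seen with a small sketch (the truncation limit and the test pair $e^{-r} \leftrightarrow 1/(1+\xi^2)^{3/2}$ are illustrative choices, not taken from the paper): brute-force quadrature of the oscillatory integrand is compared against the known closed form.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def hankel0_direct(f, xi, r_max=200.0):
    """Zero-order HT F(xi) = int_0^inf f(r) J0(xi*r) r dr by brute-force quadrature,
    truncating the infinite interval at r_max."""
    val, _ = quad(lambda r: f(r) * j0(xi * r) * r, 0.0, r_max, limit=2000)
    return val

xi = 1.5
numeric = hankel0_direct(lambda r: np.exp(-r), xi)
exact = 1.0 / (1.0 + xi**2) ** 1.5   # known transform pair for f(r) = e^{-r}
print(numeric, exact)
```

This works here only because $e^{-r}$ decays quickly; for slowly decaying inputs the truncated oscillatory integral converges poorly, which is the motivation for the specialized methods surveyed below.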

To overcome these difficulties, various techniques are available in the literature. The first is the fast Hankel transform proposed by Siegman [2]: via an exponential change of variables, the problem is transformed into logarithmic coordinates and evaluated with the fast Fourier transform in that space. The disadvantage is the required sampling over an exponential grid, which leads to significant errors for functions with an oscillating tail; moreover, the method is sensitive to the smoothness of the functions in that space [3]. The second comprises the back-projection and projection-slice methods [4], which carry out the HT as a double integral by means of one of the standard integral representations of the Bessel functions. These methods, however, generally require efficient implementations of the Chebyshev and Abel transforms, and their computational complexity is unfortunately high [5]. In [6], the authors used the Filon quadrature philosophy to evaluate zero-order Hankel transforms, separating the integrand into the product of a slowly varying component and a rapidly oscillating one. This method works quite well over part of the argument's range, but the error is appreciable elsewhere, and the calculation of the inverse HT is more difficult because the transformed function is no longer smooth but rapidly oscillating. In 1998, Yu et al. [7] gave another method, computing the zero-order quasi-discrete HT by approximating the input function with a Fourier-Bessel series over a finite integration interval; it leads to a symmetric transformation matrix. Later, in 2004, Guizar-Sicairos and Gutiérrez-Vega [8] obtained a powerful scheme for the HT of integer order n by extending Yu's zero-order algorithm to higher orders; their algorithm is based on the orthogonality properties of Bessel functions.

More recently, Postnikov [9] and Zykov and Postnikov [10] proposed, for the first time, a novel and powerful method for computing the zero- and first-order HT using a Haar basis and a piecewise-linear basis. After that, Singh et al. adopted linear Legendre multiwavelets [3], rationalized Haar (RH) wavelets [11], and a hybrid of block-pulse functions and Legendre polynomials [1], the latter also for data functions with small random perturbations [12]. The above methods can all be cast into a general class: expansion of the integrand in wavelets.

Similarly, Gupta et al. [13] proposed to expand the integral kernel function in an orthogonal exponential expansion. This reduces the integral to a simple algebraic sum, because the analytical HT of an exponential function is readily available. The essence of the Pravin method is thus the adoption of exponential functions as the fitting or interpolation basis [14]. However, only monotonic exponential base functions are selected, and, as a result, it is difficult to approximate convex functions. In this paper, we extend this idea further and obtain an improved method for the fast calculation of HT by adding a new base function. Numerical examples and comparisons are given to illustrate the proposed algorithm.

#### 2. Calculation Principles Based on Kernel Function Decomposition

##### 2.1. Kernel Function Interpolation Based on the Exponential Function

In Pravin’s innovative idea of exponential expansion, the kernel function, i.e., the part to be multiplied with the Bessel function, is decomposed into a series of exponential functions. In other words, the kernel function is represented as a linear combination of exponential functions, and the coefficient of every exponential function is determined by interpolation. Because the analytical HT of an exponential function is readily available in the literature, the integral is finally obtained through a simple algebraic sum. Since the order of the Bessel function in the majority of physical problems is the integer zero or one, this paper focuses on zero- and first-order Hankel transforms. The calculation principles based on the exponential expansion can be summarized as follows.

(1) Decomposition of the kernel function $g(r)$ as a linear combination
$$g(r) \approx \sum_{j=1}^{n} a_j e^{-\lambda_j r},$$
where $\lambda_j$ is the approximation parameter of the exponential function, determined by orthogonalization in the Pravin method, and $a_j$ is the coefficient, determined by the collocation method. The interpolation above can be expressed briefly in matrix form as $\mathbf{g} = \mathbf{E}\mathbf{a}$, so the coefficients are $\mathbf{a} = \mathbf{E}^{-1}\mathbf{g}$, where the matrix elements are $E_{ij} = e^{-\lambda_j r_i}$.

(2) Calculation of every component’s integral: the integral of the product of each base function with the Bessel function is given in explicit form by the standard Lipschitz integrals
$$\int_0^\infty e^{-\lambda r} J_0(\xi r)\, r\, \mathrm{d}r = \frac{\lambda}{(\lambda^2+\xi^2)^{3/2}}, \qquad \int_0^\infty e^{-\lambda r} J_1(\xi r)\, r\, \mathrm{d}r = \frac{\xi}{(\lambda^2+\xi^2)^{3/2}}.$$

(3) HT calculation: based on collocation and the explicit formulas, we obtain
$$F(\xi) \approx \sum_{j=1}^{n} a_j \int_0^\infty e^{-\lambda_j r} J_\nu(\xi r)\, r\, \mathrm{d}r,$$
which can be expressed as $\mathbf{F} = \mathbf{H}\mathbf{a}$, where the matrix element $H_{kj}$ is the explicit integral evaluated at $\lambda_j$ and $\xi_k$.
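The three steps above can be sketched as follows (an illustrative reconstruction, not the authors' code: the collocation nodes, the λ values, and the test kernel $e^{-r}$ are all assumed choices; the closed form used is the standard zero-order Lipschitz integral):

```python
import numpy as np

# Collocation nodes and exponential decay rates (illustrative choices).
r = np.array([0.2, 0.5, 1.0, 2.0, 4.0])
lam = np.array([0.5, 1.0, 2.0, 4.0, 8.0])

g = np.exp(-r)                     # kernel function sampled at the nodes
E = np.exp(-np.outer(r, lam))      # interpolation matrix E[i, j] = exp(-lam[j] * r[i])
a = np.linalg.solve(E, g)          # coefficients so that g(r) ~ sum_j a[j] exp(-lam[j] r)

def hankel0(xi):
    # Zero-order HT as an algebraic sum of closed-form component integrals:
    # int_0^inf exp(-lam r) J0(xi r) r dr = lam / (lam^2 + xi^2)^(3/2)
    return float(np.sum(a * lam / (lam**2 + xi**2) ** 1.5))

print(hankel0(1.0), 1.0 / 2.0 ** 1.5)  # exact pair for g(r) = e^{-r}
```

Because $e^{-r}$ lies exactly in the span of the chosen basis (λ = 1 is one of the decay rates), the algebraic sum reproduces the analytical transform up to the conditioning of the collocation matrix.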

It is obvious that the Pravin method is simple and concise, and the calculation error is caused only by the interpolation. Nevertheless, the base functions are monotonically decreasing exponentials, so it is difficult to approximate a convex function, such as a kernel function whose value is zero at the origin. Moreover, the parameter λ is hard to determine in a way that ensures the method’s accuracy and stability. This paper therefore proposes to add a new type of base function, which is convex and vanishes at the origin.

##### 2.2. Improved Kernel Function Interpolation Theory

Based on the idea of exponential expansion, the new base function should satisfy two conditions: it must be convex, and its HT must have an explicit form. After systematic analysis, we choose such a function for each order ν (zero or one) of the Bessel function. The improved interpolation formula augments the exponential expansion with the new base functions; it can again be written in matrix form, and the coefficients can be determined by collocation. Taking the HT of the improved interpolation, the integrals of the second (added) component are likewise available in explicit form, so for ν = 0 we obtain the zero-order HT and for ν = 1 the first-order HT as algebraic sums. The improved method thus retains the simple and concise character of the original: the HT calculation is obtained by an algebraic summation.
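Since this copy does not reproduce the paper's added base function explicitly, the sketch below merely illustrates the mechanism with an assumed convex candidate $r e^{-\lambda r}$, which vanishes at the origin and has the closed-form zero-order HT $\int_0^\infty r e^{-\lambda r} J_0(\xi r)\, r\, \mathrm{d}r = (2\lambda^2 - \xi^2)/(\lambda^2+\xi^2)^{5/2}$; the authors' actual choice may differ.

```python
import numpy as np

r = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 6.0])
lam = np.array([0.5, 1.0, 4.0])

g = r * np.exp(-r)   # a kernel that vanishes at the origin (hard for pure exponentials)

# Mixed interpolation matrix: columns exp(-lam*r) followed by r*exp(-lam*r).
E = np.hstack([np.exp(-np.outer(r, lam)), r[:, None] * np.exp(-np.outer(r, lam))])
c = np.linalg.solve(E, g)          # 6 coefficients: a (first 3) and b (last 3)
a, b = c[:3], c[3:]

def hankel0(xi):
    # Algebraic sum of the closed-form integrals for both basis families.
    s = lam**2 + xi**2
    return float(np.sum(a * lam / s**1.5) + np.sum(b * (2 * lam**2 - xi**2) / s**2.5))

exact = (2.0 - 1.0) / (1.0 + 1.0) ** 2.5   # HT of r*e^{-r} at xi = 1
print(hankel0(1.0), exact)
```

The test kernel lies exactly in the span of the augmented basis, so the mixed collocation recovers the analytical transform; with pure exponentials the same system cannot represent a function that is zero at the origin.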

#### 3. Numerical Examples

As illustrative numerical examples possessing exact solutions, we consider the following test functions. First, we carry out a kernel function interpolation simulation, which reveals the approximation ability of the improved method. Then, numerical examples of zero-order and first-order HT are implemented, and the method’s accuracy and computational efficiency are analyzed.

##### 3.1. Kernel Function Interpolation Simulation

In this section, we focus on the interpolation ability of the added base functions adopted, respectively, in the zero- and first-order HT. Two typical kernel functions are selected: the first is monotonic and the second convex. In the numerical calculations, the interpolation nodes are uniformly distributed, with the test nodes on a finer interval. In Pravin’s method, the approximation parameter λ forms a geometric sequence: the first value is 0.5, the common ratio is 1.2, and the last (51st) point is 4550.2. The parameter thus varies over a wide range in exponential form, which may result in a poor condition number for the interpolation matrix. In this paper, we instead choose a linearly varying parameter; the added constant avoids overflow in the zero- and first-order HT formulas. Comparisons between the Pravin and the improved methods are listed in Tables 1 and 2. Max_Abs denotes the maximum absolute error, and RMS the root mean square error, defined as
$$\mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\bigl(\tilde g(r_i) - g(r_i)\bigr)^2},$$
where $\tilde g$ is the interpolant and $N$ is the number of test nodes.
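The two error measures are straightforward to compute; a trivial sketch on synthetic arrays (the sample values are made up for illustration):

```python
import numpy as np

def max_abs(approx, exact):
    """Maximum absolute error (Max_Abs)."""
    return float(np.max(np.abs(approx - exact)))

def rms(approx, exact):
    """Root mean square error over the test nodes."""
    return float(np.sqrt(np.mean((approx - exact) ** 2)))

exact = np.array([1.0, 0.5, 0.25])
approx = np.array([1.0, 0.6, 0.25])
print(max_abs(approx, exact), rms(approx, exact))
```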

Table 1: Interpolation error comparison between the Pravin and improved methods (first kernel function).
Table 2: Interpolation error comparison between the Pravin and improved methods (second kernel function).

From Tables 1 and 2, we can draw the following conclusions. First, the Pravin method obtains good approximation precision for the first kernel function under both parameter settings, because that kernel and the interpolation basis are all monotonically decreasing functions. However, for the second, convex kernel function, the Pravin method is unstable when the parameter varies exponentially, though it remains stable for linear variation. Second, the improved method not only maintains stability for both types of functions but also acquires higher accuracy: its interpolation error is one to two orders of magnitude smaller than Pravin’s. The numerical results also show that the condition number of the interpolation matrix decreases dramatically in the improved method: it is reduced from 1E38 to 1E17 for the exponentially varying parameter and from 1E20 to 1E18 for linear variation. The only deficiency of the improved method is its longer run time: 0.1557 s versus 0.1032 s for the Pravin method.
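The conditioning effect can be probed with a small sketch (the matrix size and λ ranges are illustrative, not the paper's 51-point setup): a geometric λ spacing that reaches very large decay rates produces nearly vanishing matrix columns, while a linear spacing over a moderate range stays markedly better conditioned.

```python
import numpy as np

r = np.arange(0.5, 4.51, 0.5)            # 9 collocation nodes (illustrative)

lam_geo = 0.5 * 2.0 ** np.arange(9)      # geometric spacing: 0.5 ... 128
lam_lin = np.linspace(0.5, 4.5, 9)       # linear spacing over a moderate range

# Interpolation matrices E[i, j] = exp(-lam[j] * r[i]) and their condition numbers.
cond_geo = np.linalg.cond(np.exp(-np.outer(r, lam_geo)))
cond_lin = np.linalg.cond(np.exp(-np.outer(r, lam_lin)))
print(cond_geo, cond_lin)
```

The columns for the largest geometric decay rates are numerically negligible at every node, which is what drives the condition number up; this mirrors the 1E38-versus-1E20 contrast reported above.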

In addition, the extrapolation ability has been investigated, using an interpolation region followed by a separate extrapolation test region. Numerical results show that the maximum absolute errors of the improved method remain small, while the corresponding errors of the Pravin method are much larger (up to 0.5713). We can therefore conclude that the improved method is a feasible way to approximate the kernel function and hence to calculate the HT.

##### 3.2. Hankel Transforms Calculation

Here, we have chosen four analytical integrals given by Anderson [15]. They involve the zero- and first-order HT with both monotonic and convex kernel types.

To compare easily with the interpolation simulation above, all parameters are chosen as in Section 3.1, with only the collocation node interval changed (0.2 for some of the integrals). Using the improved interpolation, we obtain the coefficients and then substitute them into the zero- or first-order HT formula to calculate the unknown values. Relative error comparisons of the four HT pairs among the Anderson, Pravin, and improved methods are listed in Tables 3 and 4.

Table 3: Relative error comparison among Anderson, Pravin, and improved method for zero-order HT.
Table 4: Relative error comparison among Anderson, Pravin, and improved method for first-order HT.

From Tables 3 and 4, we can conclude that the proposed method improves the HT accuracy, especially for integrals 1 and 3. Among the four integrals, the worst accuracy is on the order of 10⁻⁷, so the algorithm obtains a good approximation to the theoretical value of the zero- or first-order HT. Figures 1, 2, 3, and 4 illustrate the absolute error of the improved method.

Figure 1: Absolute error of Integral 1.
Figure 2: Absolute error of Integral 2.
Figure 3: Absolute error of Integral 3.
Figure 4: Absolute error of Integral 4.

Numerical results give the maximum absolute errors and the root mean square errors of the four integrals; the computation time for the above examples was about 0.05 s.

##### 3.3. Application in Electromagnetic Sounding

Consider a four-layer model in DC sounding with given layer resistivities and thicknesses. The resistivity conversion function obeys a recurrence formula, and the apparent resistivity function is its Hankel transform.
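The layer parameters and the recurrence are not reproduced in this copy. The sketch below uses the standard layered-earth resistivity-transform recurrence $T_i(\lambda) = [T_{i+1}(\lambda) + \rho_i \tanh(\lambda h_i)] / [1 + T_{i+1}(\lambda)\tanh(\lambda h_i)/\rho_i]$ with $T_N = \rho_N$, and hypothetical layer values, to show the expected limiting behaviour ($T \to \rho_1$ as $\lambda \to \infty$ and $T \to \rho_N$ as $\lambda \to 0$):

```python
import numpy as np

rho = [100.0, 10.0, 500.0, 50.0]   # hypothetical layer resistivities (ohm*m)
h = [10.0, 20.0, 30.0]             # hypothetical layer thicknesses (m)

def resistivity_transform(lam):
    """Standard recurrence for the resistivity conversion function of a
    layered half-space, accumulated from the bottom layer upward."""
    T = rho[-1]
    for rho_i, h_i in zip(reversed(rho[:-1]), reversed(h)):
        th = np.tanh(lam * h_i)
        T = (T + rho_i * th) / (1.0 + T * th / rho_i)
    return float(T)

print(resistivity_transform(1e-8), resistivity_transform(10.0))
```

At large λ the hyperbolic tangents saturate and the recurrence collapses to the top-layer resistivity, while at small λ the deeper layers dominate; this is the function the exponential-basis interpolation must then approximate.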

Under the above parameters the resistivity conversion function is convex, so both the Pravin method and the improved method were used to approximate it and the corresponding kernel function. The interpolation nodes were set uniformly over a finite domain, with the test nodes on a finer interval, and the approximation parameter was chosen with linear variation. The approximation results are shown in Figures 5 and 6.

Figure 5: Comparison of approximation.
Figure 6: Comparison of approximation.

It is obvious that the resistivity conversion function and the kernel function are approximated much better by the improved method: both its maximum absolute error and its root mean square error were smaller than those of the Pravin method.

Finally, the Hankel transforms were performed and compared. The improved method again showed significant superiority, as revealed in Figure 7: the apparent resistivity function achieved a good approximation on the whole, except for an oscillation at the beginning, whereas the Pravin method was unstable. The root mean square error was 6.3126 for the Pravin method, while for the improved method the maximum absolute error was 4.1050.

Figure 7: Comparison of apparent resistivity function by the Hankel transforms.

#### 4. Conclusions

Starting from the decomposition of the kernel function into exponential base functions in the Pravin method, this paper identified the problem of using monotonically decreasing functions to approximate convex ones. We therefore proposed an improved scheme that adds to the interpolation a new base function which is convex and has an explicit HT. The improved method retains the simple and concise characteristic of converting the Hankel integral into an algebraic calculation without any additional numerical processing. The kernel function approximation simulation showed that the improved method acquires higher accuracy, with an interpolation error one to two orders of magnitude smaller than Pravin’s, and that the proposed linear variation scheme for the parameter λ is much more stable than the exponential variation scheme. Nevertheless, how to choose the constant and the interval in the linear variation scheme remains an open question. In the DC sounding example, the improved method also achieved good precision and high speed, even with fewer nodes. It can therefore be applied to zero-order and first-order HT computation.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgment

This work was supported by the National Science Foundation Project of CQ CSTC jjA00017.

#### References

1. V. K. Singh, R. K. Pandey, and S. Singh, “A stable algorithm for Hankel transforms using hybrid of Block-pulse and Legendre polynomials,” Computer Physics Communications, vol. 181, no. 1, pp. 1–10, 2010.
2. A. E. Siegman, “Quasi fast Hankel transforms,” Optics Letters, vol. 1, pp. 13–15, 1977.
3. V. K. Singh, O. P. Singh, and R. K. Pandey, “Numerical evaluation of the Hankel transform by using linear Legendre multi-wavelets,” Computer Physics Communications, vol. 179, no. 6, pp. 424–429, 2008.
4. A. V. Oppenheim, G. V. Frisk, and D. R. Martinez, “Computation of the Hankel transform using projections,” The Journal of the Acoustical Society of America, vol. 68, no. 2, pp. 523–529, 1980.
5. L. Knockaert, “Fast Hankel transform by fast sine and cosine transforms: the Mellin connection,” IEEE Transactions on Signal Processing, vol. 48, no. 6, pp. 1695–1701, 2000.
6. R. Barakat and E. Parshall, “Numerical evaluation of the zero-order Hankel transform using Filon quadrature philosophy,” Applied Mathematics Letters, vol. 9, no. 5, pp. 21–26, 1996.
7. L. Yu, M. Huang, M. Chen, W. Chen, W. Huang, and Z. Zhu, “Quasi-discrete Hankel transform,” Optics Letters, vol. 23, no. 6, pp. 409–411, 1998.
8. M. Guizar-Sicairos and J. C. Gutiérrez-Vega, “Computation of quasi-discrete Hankel transforms of integer order for propagating optical wave fields,” Journal of the Optical Society of America A: Optics and Image Science, and Vision, vol. 21, no. 1, pp. 53–58, 2004.
9. E. B. Postnikov, “About calculation of the Hankel transform using preliminary wavelet transform,” Journal of Applied Mathematics, vol. 2003, no. 6, pp. 319–325, 2003.
10. P. S. Zykov and E. B. Postnikov, “Application of the wavelet transform with a piecewise linear basis to the evaluation of the Hankel transform,” Computational Mathematics and Mathematical Physics, vol. 44, no. 3, pp. 396–400, 2004.
11. V. K. Singh, O. P. Singh, and R. K. Pandey, “Efficient algorithms to compute Hankel transforms using wavelets,” Computer Physics Communications, vol. 179, no. 11, pp. 812–818, 2008.
12. R. K. Pandey, V. K. Singh, and O. P. Singh, “An improved method for computing Hankel transform,” Journal of the Franklin Institute, vol. 346, no. 2, pp. 102–111, 2009.
13. P. K. Gupta, S. Niwas, and N. Chaudhary, “Fast computation of Hankel transform using orthonormal exponential approximation of complex kernel function,” Journal of Earth System Science, vol. 115, no. 3, pp. 267–276, 2006.
14. L. Gr. Ixaru and G. Vanden Berghe, Exponential Fitting, vol. 568 of Mathematics and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2004.
15. W. L. Anderson, “Numerical integration of related Hankel transforms of orders 0 and 1 by adaptive digital filtering,” Geophysics, vol. 44, no. 7, pp. 1287–1305, 1979.