Journal of Function Spaces

Volume 2016 (2016), Article ID 5780718, 7 pages

http://dx.doi.org/10.1155/2016/5780718

## Fast Analytic Sampling Approximation from Cauchy Kernel

^{1}College of Mathematics and Information Science, Guangxi University, China

^{2}Department of Mathematics, Shantou University, China

Received 4 November 2015; Accepted 22 February 2016

Academic Editor: Kehe Zhu

Copyright © 2016 Youfa Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The paper aims at establishing a fast numerical algorithm for the multiscale analytic sampling approximation of any function in the Hardy space at a prescribed scale level. The approximation, which we recently constructed by applying the multiscale transform to the Cauchy kernel, is shown to admit a matrix expression with the structure of a multilevel Hankel matrix. Based on this structure, a fast numerical algorithm for computing the approximation is established and its computational complexity is given. A numerical experiment is carried out to check the efficiency of our algorithm.

#### 1. Introduction

Approximation of a function (a time-continuous signal) from its samples lies at the heart of modern applied mathematics and engineering and has attracted much attention; readers are referred to [1–5] for just a few references. Any periodic square integrable function can be reconstructed from its Fourier coefficients as in (1), where the underlying space of periodic square integrable functions is equipped with the usual inner product (2). Truncating the series in (1) leads to the classical Fourier sampling approximation: for a sufficiently large truncation order, the function is approximated, in the Euclidean norm, by finitely many of its Fourier coefficients as in (3). From the point of view of feature characterization, (3) can be used to characterize features such as instantaneous phase and frequency by those of the linear Fourier atoms. Note that each linear Fourier atom has linear phase and constant frequency and is therefore time-stable. Hence, the characterization from (3) implies that, whether the signal is time-stable or not, its features are characterized by those of the linear Fourier atoms. When the signal is not time-stable, however, the effectiveness of characterization by the linear atoms in (3) is inferior to that by nonlinear Fourier atoms [6–8].
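As a concrete illustration of the truncated Fourier sampling approximation in (3), the sketch below estimates the Fourier coefficients of a 2π-periodic function from equispaced samples with the FFT and evaluates the resulting partial sum. The function name `fourier_partial_sum`, the sample count, and the test function are illustrative choices, not taken from the paper.

```python
import numpy as np

def fourier_partial_sum(f, N, t):
    """Approximate f(t) by an N-term Fourier partial sum built from N samples."""
    tk = 2 * np.pi * np.arange(N) / N        # equispaced sample points on [0, 2*pi)
    c = np.fft.fft(f(tk)) / N                # estimated Fourier coefficients
    k = np.fft.fftfreq(N, d=1.0 / N)         # signed integer frequencies
    return np.sum(c * np.exp(1j * k * t))    # evaluate the partial sum at t

# For a trigonometric polynomial of low degree, the partial sum recovers f
# exactly (up to rounding), since no aliasing occurs.
f = lambda t: np.cos(3 * t) + 0.5 * np.sin(t)
approx = fourier_partial_sum(f, 64, 0.7)
```

For a band-limited (trigonometric polynomial) signal this reproduces the function exactly; for a general signal the error decays with the truncation order, as in the classical theory.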

Recently, an interesting sampling approximation from the Takenaka-Malmquist (T-M) system, a special class of nonlinear Fourier atoms, has attracted much attention in the literature; for example, see [9, 10] and the references therein. Note that the T-M system is essentially generated by the Gram-Schmidt orthogonalization of the Cauchy kernel. In this sense, we say that the sampling approximation from the T-M system is essentially a sampling approximation from the Cauchy kernel. By implementing the multiscale transform on the tensor product of the Cauchy kernel, Li and Qian [11] constructed an analytic sampling approximation to any function in the Hardy space, defined via the set of nonnegative integer frequencies, and then, using the partial Hilbert transform, generalized the approximation further. A problem left unsolved in [11] is that no fast numerical algorithm for the approximation has been established. The aim of the present paper is to solve this problem by the fast computation theory of multilevel circulant matrices.

#### 2. Fast Algorithm for Multiscale Analytic Sampling Approximation

##### 2.1. Multiscale Analytic Sampling Approximation from Cauchy Kernel

Before introducing the approximation result of [11], we give the definition of the multiscale transform. For any admissible function, define the scale transform associated with the shift parameter as follows; by periodicity, we need only focus on one period of the parameter. Suppose that we are given a sequence of nonstationary refinement functions; namely, they satisfy (7), where the mask belongs to the space of periodic complex-valued sequences. In terms of Fourier coefficients, it is easy to check that (7) is equivalent to (8). Related to the refinement functions, the so-called scale projection operator is defined by (9). Defining the kernel to be the tensor product of the Cauchy kernel on the unit disc, Li and Qian [11] gave the expression of the projection using analytic samples and estimated the error concretely.

Lemma 1 (see [11]). *Let (10) hold, where , , and . Construct by (11), where belongs to such that (12) holds. Then, for any , the projection defined in (9) can be expressed by the -scale analytic samples as in (13), where denotes the inverse Fourier transform. Moreover, the error estimate (14) holds, where .*

##### 2.2. Mathematical Materials on Multilevel Circulant Matrix

Following [12–14], we will define the multilevel circulant matrix. We begin with some notation. For any and , let and , where denotes the Cartesian product. An matrix is referred to as a -level matrix of order . Recursively, a matrix is a -level matrix of order if it consists of -level matrices of order . The entries of are denoted by their multi-indices. A -level matrix of order is referred to as a -level circulant matrix if (15) holds, where and .
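The recursive definition can be made concrete with a small sketch: the helper `multilevel_circulant` below (an illustrative name, not from [12–14]) builds a k-level circulant matrix entrywise from a generating array, each entry determined by the componentwise difference (i - j) mod n, as in the rule (15).

```python
import numpy as np

def multilevel_circulant(c):
    """Build the k-level circulant matrix generated by the array c of shape n."""
    c = np.asarray(c)
    n = c.shape
    N = c.size
    C = np.empty((N, N), dtype=c.dtype)
    # Row a and column b correspond to multi-indices i and j in row-major
    # order; the entry depends only on the componentwise (i - j) mod n.
    for a, i in enumerate(np.ndindex(*n)):
        for b, j in enumerate(np.ndindex(*n)):
            C[a, b] = c[tuple((ii - jj) % nn for ii, jj, nn in zip(i, j, n))]
    return C

# A 1-level instance is an ordinary circulant matrix with first column c.
C = multilevel_circulant(np.array([0, 1, 2, 3]))
```

A 2-level instance of order (2, 3) is then a 2 x 2 block-circulant matrix whose blocks are themselves 3 x 3 circulant, matching the recursive definition.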

An column (or row) vector is referred to as a -level vector of order . Following the definition of a multilevel matrix, a vector is a -level vector of order if it consists of -level vectors of order . The entries of a multilevel vector are indexed by multi-indices in the same way.

A multilevel circulant matrix has a nice structure as shown in the following lemma.

Lemma 2 (see [12–14]). *Suppose that is a -level matrix of order and is the transpose of the first row of , where . Then, is a -level circulant matrix if and only if the factorization (16) holds, where , , and is the transpose conjugate of . Here, is the Fourier matrix of order and is the Kronecker product of matrices.*
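Lemma 2 is what makes multilevel circulant matrices computationally attractive: since the Kronecker product of Fourier matrices diagonalizes them, a matrix-vector product reduces to k-dimensional FFTs of the generating array and of the input. A minimal sketch, assuming row-major flattening of multi-indices (the function name and the NumPy FFT choice are mine, not the paper's):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the k-level circulant matrix generated by c with the vector x."""
    n = c.shape
    lam = np.fft.fftn(c)                          # eigenvalues: k-dim DFT of c
    X = np.fft.fftn(np.asarray(x).reshape(n))     # transform x into eigenbasis
    return np.fft.ifftn(lam * X).reshape(-1)      # diagonal action, then invert
```

This is exactly the multidimensional circular convolution theorem: the dense product would cost O(N^2) operations, while the FFT route costs O(N log N), which is the mechanism exploited in Section 2.3.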

Following (15), a -level matrix of order is referred to as a -level Hankel matrix if (17) holds. Since it can be converted to a -level circulant matrix by a transform to be given in Lemma 3, the Hankel matrix with the additional property (18) is crucial for establishing the fast algorithm in Section 2.3.

Lemma 3. *Define a -level matrix of order by (19), where (20) holds. If is a -level Hankel matrix of order satisfying (18), then the product (21) is a -level circulant matrix.*

*Proof. *From (21) and (18), we arrive at (22). By (22), if and satisfy (23), then (24) holds. Recall that (23) is equivalent to (25), which together with (24) leads to (26). The proof is concluded.
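The conversion behind Lemma 3 can be sketched in the scalar (1-level) case: for a Hankel matrix with entries h[(i + j) mod n], i.e., with property (18), reversing the order of the input vector turns the product into a circular convolution, that is, a circulant matrix-vector product that the FFT evaluates in O(n log n). The function `hankel_matvec` and the data are illustrative stand-ins, not the paper's transform matrix itself.

```python
import numpy as np

def hankel_matvec(h, x):
    """Multiply H, with H[i, j] = h[(i + j) % n] (cf. property (18)), by x."""
    n = len(h)
    y = np.roll(np.asarray(x)[::-1], 1)      # reversal: y[j] = x[(n - j) % n]
    # After this reversal, H @ x equals the circular convolution of h and y,
    # i.e., a circulant matrix-vector product, which the FFT diagonalizes.
    return np.fft.ifft(np.fft.fft(h) * np.fft.fft(y))
```

The reversal plays the role of the permutation-type matrix in Lemma 3: composing the Hankel matrix with it yields a circulant matrix, to which Lemma 2 applies.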

*Note 1. *It is clear that . Then, it follows from (21) that a -level Hankel matrix satisfying (18) can be written as (27), where is the transpose of the first row of the matrix given in (21).

##### 2.3. Fast Multiscale Analytic Sampling Approximation

This subsection aims at developing a fast algorithm to compute the numerical values of at , where is any fixed point on . For convenience of exposition, define a column vector by (28) and a row vector by (29).

Lemma 4. *Using matrix notation, the expression in (13) can be written as (30), where is a matrix defined by (31), is a column vector, and is a matrix of functions given by (32). Moreover, and are both -level Hankel matrices with property (18).*

*Proof. *For any , it follows from (7) that (33) holds. Then, the column vector can be expressed by (34). By (33) and the Cauchy integral formula, for any , we have (35). Therefore, the column vector can be rewritten as (36). Now, the proof of (30) is concluded by (9), (34), and (36).

For any and , it is straightforward to check (37), and then is a -level Hankel matrix satisfying (18). Similarly, is also a -level Hankel matrix with property (18).

The following lemma is crucial for investigating the matrix expression of .

Lemma 5. *For any , define a matrix of functions by (38), where . Then, for any and , the value of at satisfies (39), where the remaining notation is given in (40).*

*Proof. *We first prove by induction that (41) holds. Suppose that (41) holds at the previous index. Then, the computation (42) shows that it also holds at the current one. Therefore, (41) holds for any .

For any , we derive (43) from (41). The proof of (39) is concluded.

Using (39), we now give the matrix expression of the vector defined in (29).

Theorem 6. *Let and be as in Lemmas 4 and 5, respectively. Define a -level matrix by (44), where . Then, the vector in (29) can be expressed by (45).*

*Proof. *Using (41) to compute (44) directly gives us (46), from which we arrive at (47), where . Hence, is a -level Hankel matrix with the additional property (18).

It follows from (10), (32), (38), and (39) that the value defined in (28) can be computed by (48). It is deduced from (30) and (48) that (49) holds. By (49), the vector defined in (29) can be expressed by (50). Since both and are -level Hankel matrices, they are both symmetric. Hence, the proof of (45) is concluded.
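The symmetry fact invoked at the end of the proof is easy to check numerically in the 1-level case; the same argument applies levelwise for k-level Hankel matrices. The random data below are purely illustrative.

```python
import numpy as np

# A Hankel matrix with entries h[(i + j) % n] depends on the index pair
# (i, j) only through the sum i + j, so it equals its own transpose.
rng = np.random.default_rng(0)
n = 6
h = rng.standard_normal(n)
H = np.array([[h[(i + j) % n] for j in range(n)] for i in range(n)])
symmetric = np.array_equal(H, H.T)
```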

*Note 2. *Since , , and are all -level Hankel matrices with property (18), by (27) and (45), can be factorized as (51), where , , and are the transposes of the first rows of , , and , respectively.

Based on Note 2, a fast algorithm for will be established as follows.

*Algorithm 7. *Let , , and be as in Lemma 1. According to (51), the vector defined in (29) can be computed by the following six steps.

(1) By (11) and (12), compute the required data.

(2) By implementing the IFFT on the result of step (1), compute the needed coefficients.

(3) Since the factorization (51) involves the Fourier matrix, compute the transforms of the three generating vectors by IFFTs.

(4) Consider (52).

(5) Consider (53).

(6) Consider (54).

*Computational Complexity*. It is easy to check that directly computing through (13) costs operations. In Algorithm 7, however, the FFT and IFFT are used three and seven times, respectively, which cost operations. Meanwhile, the complexity of the other operations, such as elementwise multiplication, is lower. Therefore, the computational complexity of Algorithm 7 is dominated by that of the FFTs and IFFTs.
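Structurally, Algorithm 7 is a chain of fast Hankel matrix-vector products, each carried out through FFTs as in the factorization (51). The sketch below mimics that structure in the 1-level case; `fast_hankel_chain` and its generating vectors are illustrative stand-ins for the paper's k-level factors, not the actual algorithm data.

```python
import numpy as np

def fast_hankel_chain(generators, x):
    """Apply a product of Hankel matrices H[i, j] = h[(i + j) % n] to x."""
    x = np.asarray(x, dtype=complex)
    for h in generators:                    # factors are applied in list order
        y = np.roll(x[::-1], 1)             # reversal turns Hankel into circulant
        x = np.fft.ifft(np.fft.fft(h) * np.fft.fft(y))
    return x
```

With a fixed number of factors, each O(n log n) product keeps the total cost at O(n log n), versus O(n^2) for dense multiplication, which is the source of the complexity gain claimed above.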

*Note 3. *So far, we have not found any numerical algorithm for this computation better than Algorithm 7. On the other hand, we notice that Algorithm 7 holds for equally spaced points. On another occasion, we will establish a fast algorithm for the case of nonequally spaced points.

#### 3. Experiment

To check the efficiency of Algorithm 7, a numerical experiment on a test function is carried out. Table 1 shows the approximation error ratios and the running times corresponding to different choices of the parameters, where the two recorded times are those of the direct computation and of Algorithm 7, respectively. The data on time cost confirm that Algorithm 7 is faster than direct computation; the result of this experiment therefore matches the computational complexity analysis in Section 2. As for the approximation accuracy, the approximation becomes better as the scale level increases, which coincides with the approximation error estimate given in (14).