ISRN Signal Processing
Volume 2011 (2011), Article ID 421502, 6 pages
http://dx.doi.org/10.5402/2011/421502
Research Article

A Method for the Recovery of Gaps in General Analytic Signals

Benjamin G. Salomon1 and Hanoch Ur2

1School of Engineering, Ruppin Academic Center, Emek-Hefer 40250, Israel
2School of Electrical Engineering, Faculty of Engineering, Tel-Aviv University, Tel-Aviv 69978, Israel

Received 27 May 2011; Accepted 10 July 2011

Academic Editor: L. Senhadji

Copyright © 2011 Benjamin G. Salomon and Hanoch Ur. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

We propose a generalized Papoulis-Gerchberg algorithm for the recovery of gaps in general analytic functions. The continuous-time algorithm is based on signal expansions in terms of the Chebyshev polynomials of the first kind, and the discrete-time implementation is based on a suitable nonuniform sampling scheme and the discrete cosine transform.

1. Introduction

Gapped (or missing) data arise when it is difficult to obtain contiguous measurements of a function over a long period of time. The recovery of the missing function values is possible, at least in theory, given certain a priori knowledge about the function, for example, analyticity on a certain interval. A real-valued function $f$, defined on a closed interval $I$ on the real line, is called analytic on $I$ if there exists an analytic extension of $f$ onto some open set $G$ of the complex plane that contains $I$ [1]. That is, there is a unique single-valued analytic function, defined on $G$, that coincides with $f$ on $I$. Let $f$ be analytic on $I$. When the values of $f$ on a subinterval of $I$ are given, $f$ can be determined on $I$ by means of analytic continuation.

In many missing data problems, the functions are assumed to be bandlimited. Let $L^2(\mathbb{R})$ denote the Hilbert space of all square integrable functions defined on the real line $\mathbb{R}$. For $\tau>0$ and a function $f\in L^2(\mathbb{R})$, we define the operators $P_\tau$ and $Q_\tau$ as follows:
$$P_\tau f = \begin{cases} f(t), & |t|\le\tau,\\ 0, & |t|>\tau,\end{cases}\qquad Q_\tau f = f - P_\tau f. \quad (1)$$
Let $\hat{f} = \mathcal{F}f(\omega) = \int_{-\infty}^{\infty} f(t)\,e^{-j\omega t}\,dt$ and $\mathcal{F}^{-1}\hat{f}$ be the Fourier transform and the inverse Fourier transform of $f$, respectively. For $\sigma>0$ and a function $f\in L^2(\mathbb{R})$, we define the bandlimiting operator $P_\sigma$ as follows:
$$P_\sigma f = \mathcal{F}^{-1}\bigl(L_\sigma\hat{f}\bigr) = \int_{-\infty}^{\infty} f(t-\tau)\,\frac{\sin\sigma\tau}{\pi\tau}\,d\tau, \quad (2)$$
where
$$L_\sigma(\omega) = \begin{cases} 1, & |\omega|\le\sigma,\\ 0, & |\omega|>\sigma.\end{cases} \quad (3)$$
The bandlimited signal extrapolation problem is defined as follows: find $f$, given $g = P_\tau f$ and knowing that $f = P_\sigma f$. Various methods were proposed for solving the bandlimited extrapolation problem; [2] contains a detailed comparison of some of the methods. The iterative bandlimited signal extrapolation algorithm of Papoulis and Gerchberg [3, 4] is attractive due to its relative simplicity, requiring only a Fourier transform and an inverse Fourier transform in each iteration. Based on the identity
$$g + Q_\tau P_\sigma f = P_\tau f + Q_\tau f = f, \quad (4)$$
Papoulis [3] and Gerchberg [4] proposed the following algorithm:
$$\text{Initialization step: } f_0 = g; \qquad m = 1, 2, \ldots:\; f_m = g + Q_\tau P_\sigma f_{m-1}. \quad (5)$$
A proof of the convergence of the algorithm was given by Papoulis [3] using signal expansions in terms of the prolate spheroidal wave functions. Gerchberg [4] actually solved the dual problem of extrapolating the spectrum of a finite object beyond its diffraction limits, leading to superresolution and image enhancement.
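To make the classical iteration (5) concrete, the following is a minimal discrete sketch (not part of the original paper): $P_\sigma$ is approximated by zeroing DFT bins and $P_\tau$, $Q_\tau$ by a sample mask. The function and argument names (pg_extrapolate, known, keep_bins) and the iteration count are our own illustrative choices.

```python
import numpy as np

def pg_extrapolate(g, known, keep_bins, n_iter=200):
    """Discrete analogue of the Papoulis-Gerchberg iteration (5):
    f_m = g + Q_tau P_sigma f_{m-1}.

    g         : observed signal with the unknown samples set to zero
    known     : boolean mask marking the observed samples (support of P_tau)
    keep_bins : boolean mask of retained DFT bins (stands in for P_sigma)
    """
    f = g.copy().astype(float)
    for _ in range(n_iter):
        # P_sigma: project onto the bandlimited subspace via the DFT
        F = np.fft.fft(f)
        F[~keep_bins] = 0.0
        bl = np.real(np.fft.ifft(F))
        # g + Q_tau(.): re-impose the known samples, keep the extrapolated rest
        f = np.where(known, g, bl)
    return f
```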

A bandlimited function $f(t)$, defined on the real line $\mathbb{R}$, is an entire function of exponential type when $t$ is extended to the complex plane [5]. Hence, bandlimited signals are a very special case of analytic functions, and other analytic signals may be encountered in practice. In this paper, we propose a method for the recovery of gaps in general analytic signals using a generalization of the Papoulis-Gerchberg algorithm, based on signal expansions in terms of the Chebyshev polynomials of the first kind. In Section 2, we briefly review the properties of Chebyshev polynomials. In Section 3, we generalize the Papoulis-Gerchberg algorithm from continuous-time bandlimited signals to continuous-time signals in polynomial spaces and to general analytic signals. We then focus, in Section 4, on the discrete implementation of the continuous-time algorithm, based on a suitable nonuniform sampling scheme and the discrete cosine transform. Section 5 discusses the properties of the proposed method, and the performance of the recovery algorithm is demonstrated in Section 6 by a numerical example.

2. Chebyshev Polynomials

The Chebyshev polynomial $T_n(x)$ of the first kind is a polynomial in $x$ of degree $n$, defined by the relation
$$T_n(x) = \cos n\theta \quad \text{when } x = \cos\theta, \quad (6)$$
for $x\in[-1,1]$ [6]. The range of the corresponding variable $\theta$ is $[0,\pi]$ (these ranges are traversed in opposite directions). Intervals $[a,b]$ other than $[-1,1]$ are easily handled by the change of variable
$$s = \frac{2x - (a+b)}{b-a}. \quad (7)$$
Without loss of generality, we focus hereafter on the interval $[-1,1]$.
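As a quick numerical check (our own sketch, not part of the paper), the trigonometric definition (6) and the change of variable (7) can be exercised with NumPy; the helper names T and to_unit_interval are ours.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def T(n, x):
    """Chebyshev polynomial of the first kind via the definition (6)."""
    return np.cos(n * np.arccos(x))

def to_unit_interval(x, a, b):
    """Affine change of variable (7): map x in [a, b] to s in [-1, 1]."""
    return (2.0 * x - (a + b)) / (b - a)

x = np.linspace(-1.0, 1.0, 7)
# Cross-check the trigonometric definition against NumPy's Chebyshev basis (T_5)
assert np.allclose(T(5, x), C.chebval(x, [0, 0, 0, 0, 0, 1]))
# Map [2, 5] onto [-1, 1]
assert np.allclose(to_unit_interval(np.array([2.0, 3.5, 5.0]), 2, 5), [-1.0, 0.0, 1.0])
```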

Let $L^2_w([-1,1])$ be the Hilbert space of all real-valued square integrable functions on $[-1,1]$ with respect to the nonnegative weight function
$$w(x) = \frac{1}{\sqrt{1-x^2}}. \quad (8)$$
This is the space of functions $f$ such that the norm
$$\|f\| = \left( \int_{-1}^{1} w(x)\,|f(x)|^2\,dx \right)^{1/2} \quad (9)$$
is finite. The associated inner product is
$$\langle f, g\rangle = \int_{-1}^{1} w(x)\,f(x)\,g(x)\,dx. \quad (10)$$
The set $\{T_i(x),\ i=0,1,\ldots\}$ forms a complete orthogonal polynomial system in $L^2_w([-1,1])$. The orthogonality relation is given by [6]
$$\langle T_m(x), T_n(x)\rangle = \int_{-1}^{1} w(x)\,T_m(x)\,T_n(x)\,dx = \begin{cases} 0, & m\ne n,\\ \dfrac{d_m\pi}{2}, & m=n,\end{cases} \quad (11)$$
where
$$d_m = \begin{cases} 2, & m=0,\\ 1, & m\ge 1.\end{cases} \quad (12)$$
The $N$ zeros of the Chebyshev polynomial $T_N(x)$ are
$$x_k = \cos\frac{(2k+1)\pi}{2N}, \quad k=0,1,\ldots,N-1. \quad (13)$$
These zeros are listed in decreasing order in $x$. The $(N-1)$th degree polynomial $p_{N-1}(x)$, interpolating a function $f(x)$ at the zeros of $T_N$, can be written as a sum of Chebyshev polynomials in the form
$$p_{N-1}(x) = \sum_{n=0}^{N-1} c_n T_n(x), \quad (14)$$
where the coefficients $c_n$ in (14) are given by the explicit formula
$$c_n = \gamma_n\sum_{k=0}^{N-1} f(x_k)\,T_n(x_k), \quad (15)$$
where
$$\gamma_n = \begin{cases} \dfrac{1}{N}, & n=0,\\[4pt] \dfrac{2}{N}, & 0 < n \le N-1.\end{cases} \quad (16)$$
It follows [7-9] that
$$c_n = \gamma_n\sum_{k=0}^{N-1} f(x_k)\cos\frac{\pi(2k+1)n}{2N}, \quad (17)$$
which is the well-known discrete cosine transform (type II) [10] applied to the samples of the function $f(x)$ taken on a nonuniform sampling grid corresponding to the zeros of $T_N$. Efficient algorithms exist for computing the discrete cosine transform (see, e.g., [11] and references therein).
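The link between (15)-(17) and the DCT-II can be verified with a short sketch (ours, under the stated sampling at the zeros of $T_N$); it uses SciPy's unnormalized DCT-II, so the scaling by $\gamma_n$ appears explicitly, and the test function is an arbitrary smooth choice.

```python
import numpy as np
from scipy.fft import dct
from numpy.polynomial import chebyshev as C

N = 32
k = np.arange(N)
x = np.cos((2 * k + 1) * np.pi / (2 * N))      # zeros of T_N, eq. (13)
f = lambda t: np.exp(t) * np.sin(3 * t)        # arbitrary smooth test function

# Coefficients (17): the unnormalized DCT-II gives
# dct(y)[n] = 2 * sum_k y[k] * cos(pi * (2k+1) * n / (2N)),
# so c_n = gamma_n * dct(y)[n] / 2 with gamma_0 = 1/N and gamma_n = 2/N otherwise.
X = dct(f(x), type=2)
c = X / N
c[0] *= 0.5

# The sum (14) with these coefficients interpolates f at the Chebyshev nodes
assert np.allclose(C.chebval(x, c), f(x))
```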

3. Gap Recovery in General Analytic Signals Based on a Chebyshev Polynomial Series Expansion

Let $\mathcal{P}_{N-1}$ denote the subspace of all polynomials of degree $<N$ ($N>1$) in $L^2_w([-1,1])$. Let $P_{N-1}$ denote the orthogonal projection operator onto $\mathcal{P}_{N-1}$:
$$P_{N-1} f = \sum_{k=0}^{N-1}\tilde{f}_k T_k(x), \quad (18)$$
where
$$\tilde{f}_k = \frac{2}{\pi d_k}\langle f, T_k\rangle, \quad (19)$$
$d_k$ is defined in formula (12), and $f\in L^2_w([-1,1])$.

Let $C_S$ be the subspace of all functions in $L^2_w([-1,1])$ that vanish outside a collection of disjoint closed intervals in $[-1,1]$, and let $C_S^{+}$ be the orthogonal complement of $C_S$ in $L^2_w([-1,1])$. We denote by $P_S$ and $Q_S$ the orthogonal projection operators onto the subspaces $C_S$ and $C_S^{+}$, respectively. It follows that $Q_S = I_d - P_S$, where $I_d$ is the identity operator in $L^2_w([-1,1])$. If we assume that $f = P_{N-1} f$, then we have
$$f = P_S f + Q_S f = P_S f + Q_S P_{N-1} f. \quad (20)$$
The continuous-time recovery problem can be stated as follows: find $f$, given $g = P_S f$ and knowing that $f = P_{N-1} f$. We propose the following Papoulis-Gerchberg type algorithm for recovering gaps in polynomial signals in $L^2_w([-1,1])$ (the PGP algorithm).

The PGP Algorithm
Initialization step:
$$g = f_0 = P_S f, \quad (21)$$
$$m = 1, 2, \ldots:\quad f_m = g + Q_S P_{N-1} f_{m-1}. \quad (22)$$
The iteration process is stopped when $\|f_m - f_{m-1}\|\le\epsilon$, where $\epsilon>0$ is a user-specified threshold. $f_m$ or $P_{N-1} f_m$ is taken as the result of the recovery algorithm after $m$ iterations.
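The iteration (21)-(22) has the following skeleton; this is our own sketch with the projections passed as callables (P_S, P_Nm1), whose discrete realizations are given in Section 4. The function name and defaults are assumptions for illustration.

```python
import numpy as np

def pgp_iterate(g, P_S, P_Nm1, eps=1e-10, max_iter=500):
    """Skeleton of the PGP iteration (21)-(22).

    g     : the observed signal P_S f (with the missing part set to zero)
    P_S   : callable applying the gap projection P_S
    P_Nm1 : callable applying the polynomial projection P_{N-1}
    """
    f = g.copy()
    for _ in range(max_iter):
        p = P_Nm1(f)
        f_new = g + p - P_S(p)                   # g + Q_S P_{N-1} f,  Q_S = I_d - P_S
        if np.linalg.norm(f_new - f) <= eps:     # stopping rule ||f_m - f_{m-1}|| <= eps
            return f_new
        f = f_new
    return f
```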
If $f\in\mathcal{P}_{N-1}$, then $f = P_{N-1} f$, and it follows that
$$g = f - Q_S P_{N-1} f. \quad (23)$$
We know from (22) and (23) that
$$f_m - f = Q_S P_{N-1}\bigl(f_{m-1} - f\bigr). \quad (24)$$
Therefore,
$$\|f_m - f\| = \bigl\|Q_S P_{N-1}\bigl(f_{m-1} - f\bigr)\bigr\| \le \|f_{m-1} - f\|, \quad (25)$$
which proves that the PGP algorithm reduces the error energy during the iteration process.

Lemma 1. A signal $f\in\mathcal{P}_{N-1}$ can be uniquely determined from $P_S f$ by using the PGP algorithm (21) and (22).

Proof. Iterating (22), we obtain
$$f_m = g + Q_S P_{N-1} f_{m-1} = g + Q_S P_{N-1}\bigl(g + Q_S P_{N-1} f_{m-2}\bigr) = g + Q_S P_{N-1} g + Q_S P_{N-1} Q_S P_{N-1}\bigl(g + Q_S P_{N-1} f_{m-3}\bigr) = \cdots = \sum_{n=0}^{m}\bigl(Q_S P_{N-1}\bigr)^{n} g. \quad (26)$$
Substituting (23) into (26), we obtain
$$f_m = f - \bigl(Q_S P_{N-1}\bigr)^{m+1} f. \quad (27)$$
According to von Neumann's theorem on alternating projections [12], for every $f\in\mathcal{P}_{N-1}$,
$$\lim_{m\to\infty}\bigl(Q_S P_{N-1}\bigr)^{m} f = f_c, \quad (28)$$
where $f_c$ is the projection of $f$ onto the closed subspace $C_S^{+}\cap\mathcal{P}_{N-1}$. Any $f\in\mathcal{P}_{N-1}$ is analytic on $[-1,1]$ and can vanish on an interval if and only if it is the zero function. Hence, the subspace $C_S^{+}\cap\mathcal{P}_{N-1}$ contains only the zero function. We obtain that $f_c = 0$ and that the PGP algorithm converges in norm to the required function $f$.

$P_{N-1} P_S P_{N-1}$ is a bounded, positive, self-adjoint, compact operator. Hence, there exists an orthonormal basis of $\mathcal{P}_{N-1}$ consisting of eigenfunctions $\phi_n$, $n=0,1,\ldots,N-1$, of the operator $P_{N-1} P_S P_{N-1}$, with corresponding real positive eigenvalues $\lambda_n$, such that $1 > \lambda_0 > \lambda_1 > \cdots > \lambda_{N-1} > 0$ [13].

Lemma 2. Let $f=\phi_k$ be an eigenfunction of $P_{N-1} P_S P_{N-1}$ with corresponding eigenvalue $\lambda_k$, and let $g = P_S f$ be the given values of $f$. Then,
$$P_{N-1} f_m = A_m\phi_k, \quad\text{where } A_m = 1 - (1-\lambda_k)^{m+1}. \quad (29)$$

Proof. We will prove (29) by induction. It is true for $m=0$:
$$P_{N-1} f_0 = P_{N-1} g = P_{N-1} P_S\phi_k = P_{N-1} P_S P_{N-1}\phi_k = \lambda_k\phi_k = A_0\phi_k, \quad (30)$$
where we used the identity $P_{N-1}\phi_k = \phi_k$ (since $\phi_k\in\mathcal{P}_{N-1}$).
Suppose that it is true for some $m$. Using (22), we obtain
$$P_{N-1} f_{m+1} = P_{N-1} g + P_{N-1} Q_S P_{N-1} f_m = P_{N-1} P_S P_{N-1}\phi_k + P_{N-1}\bigl(I_d - P_S\bigr)P_{N-1} f_m = P_{N-1} P_S P_{N-1}\phi_k + P_{N-1} f_m - P_{N-1} P_S P_{N-1} f_m. \quad (31)$$
Substituting $P_{N-1} P_S P_{N-1}\phi_k = \lambda_k\phi_k$ and using the induction assumption, we obtain
$$P_{N-1} f_{m+1} = \lambda_k\phi_k + A_m\phi_k - \lambda_k A_m\phi_k = \bigl(A_m + (1-A_m)\lambda_k\bigr)\phi_k. \quad (32)$$
In (31) and (32), we used the idempotence of the orthogonal projection operator $P_{N-1}$ (i.e., $P_{N-1} = P_{N-1}P_{N-1}$). It is easy to see that
$$A_m + (1-A_m)\lambda_k = 1 - (1-\lambda_k)^{m+1} + (1-\lambda_k)^{m+1}\lambda_k = 1 - (1-\lambda_k)^{m+2} = A_{m+1}. \quad (33)$$
Hence,
$$P_{N-1} f_{m+1} = A_{m+1}\phi_k, \quad (34)$$
which completes the proof.

Any $f\in\mathcal{P}_{N-1}$ can be represented as
$$f = \sum_{k=0}^{N-1}\alpha_k\phi_k, \quad (35)$$
where
$$\alpha_k = \int_{-1}^{1} w(x)\,f(x)\,\phi_k(x)\,dx. \quad (36)$$
Using (29) and (35), we obtain that after $m$ iterations of the PGP algorithm
$$P_{N-1} f_m = \sum_{k=0}^{N-1}\alpha_k\Bigl(1 - (1-\lambda_k)^{m+1}\Bigr)\phi_k. \quad (37)$$
The error energy is given by
$$\|f - P_{N-1} f_m\|^2 = \sum_{k=0}^{N-1}\alpha_k^2\,(1-\lambda_k)^{2(m+1)}. \quad (38)$$
It follows that the rate of convergence of the PGP algorithm depends on the spectral representation of the signal in terms of the eigenfunctions of the operator $P_{N-1} P_S P_{N-1}$. If the significant components of the signal all correspond to relatively large eigenvalues, the convergence is fast; otherwise, the PGP algorithm converges slowly.
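To illustrate how the eigenvalues in (38) govern the convergence rate, the sketch below (ours, not from the paper) computes the spectrum of a discrete surrogate of $P_{N-1}P_SP_{N-1}$ built from the DCT-based operators of Section 4, for a gap in the middle of $[-1,1]$; the sizes N, M and the gap location are arbitrary illustrative choices.

```python
import numpy as np
from scipy.fft import dct

N, M = 128, 64                                   # nodes and retained DCT coefficients
x = np.cos((2 * np.arange(N) + 1) * np.pi / (2 * N))
known = np.abs(x) > 0.4                          # a gap over [-0.4, 0.4]

# Unitary DCT-II matrix C of (42); D = C^T maps DCT coefficients to samples
C_mat = dct(np.eye(N), type=2, norm='ortho', axis=0)
D = C_mat.T

# Discrete surrogate of P_{N-1} P_S P_{N-1}: Gram matrix of the known rows of D[:, :M]
G = D[known, :M].T @ D[known, :M]
lam = np.linalg.eigvalsh(G)                      # eigenvalues lambda_k in (0, 1)

# Per (38), the k-th modal error decays like (1 - lambda_k)^(2(m+1)),
# so the smallest eigenvalues set the worst-case convergence rate.
print(lam.min(), lam.max())
```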

The PGP algorithm can also be viewed as an example of signal restoration by the method of projections onto convex sets (POCS) [14, 15]. The two convex sets, or constraints, are the affine set $V_1$, which contains all functions $f\in L^2_w([-1,1])$ with $g = P_S f$, and the subspace $\mathcal{P}_{N-1}$. If $f\in V_1$ but $f\notin\mathcal{P}_{N-1}$, we have two inconsistent convex constraints. The convergence of a POCS algorithm with two inconsistent convex constraints is a well-studied problem [16, 17]. In this case, the PGP iterates $P_{N-1} f_m$ converge to the signal $q\in\mathcal{P}_{N-1}$ closest to $V_1$, that is, the one with the minimum distance $d$, where
$$d(q, V_1) = \inf_{y\in V_1}\|q - y\|. \quad (39)$$
Thus, the PGP algorithm is exact for all $f\in\mathcal{P}_{N-1}$, and a certain “best polynomial approximation” in the sense of (39) is obtained when $f\notin\mathcal{P}_{N-1}$.

The expansion of an analytic function in terms of the Chebyshev polynomials converges rapidly (exponentially). It can be proved that if the function $f(x)$ can be extended to a function $f(z)$ analytic within and on the ellipse $\{z : |z + \sqrt{z^2-1}| = r\}$, where $r>1$, then $|f - P_{N-1}f| = O(r^{-(N-1)})$ for all $x$ in $[-1,1]$ [6]. This property of rapid convergence, combined with the approximation property (39), makes the proposed algorithm a practical gap recovery tool for general analytic functions.
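A small experiment (ours, using the function of Section 6 as a convenient analytic example) shows the expected $O(r^{-(N-1)})$ decay; here $r$ is the ellipse parameter determined by the poles at $\pm 0.2j$, and the degrees tested are arbitrary.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: 1.0 / (1.0 + 25.0 * x ** 2)        # the function used in Section 6
xx = np.linspace(-1, 1, 2001)
r = abs(0.2j + np.sqrt(0.2j ** 2 - 1))           # ellipse parameter set by the poles at +/- 0.2j

for N in (16, 32, 64):
    nodes = np.cos((2 * np.arange(N) + 1) * np.pi / (2 * N))
    coeffs = C.chebfit(nodes, f(nodes), N - 1)   # degree N-1 Chebyshev interpolant
    err = np.max(np.abs(C.chebval(xx, coeffs) - f(xx)))
    print(N, err, r ** (-(N - 1)))               # the error tracks O(r^{-(N-1)})
```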

4. Discrete Implementation of the Continuous-Time Recovery Algorithm

A Chebyshev polynomial series expansion allows a convenient and accurate discrete implementation: the samples are taken at time locations that are the zeros of a Chebyshev polynomial, and a discrete cosine transform (DCT) is applied to the nonuniform samples [8, 9]. Equation (15) is a Gauss-Chebyshev quadrature for the original integral (19), which is known to be a very accurate numerical method [6]. In addition, if the maximum absolute error is used as the optimality criterion, then the interpolating polynomial given by (14) provides an optimal approximation among polynomials of degree $N-1$ to the function $f$ [18].

Let $\mathbb{R}^N$ denote the real $N$-dimensional space. The $\ell_2$ inner product is given by
$$\langle y, z\rangle = \sum_{m=0}^{N-1} y[m]\,z[m], \quad (40)$$
and the induced norm is given by
$$\|x\|_2 = \sqrt{\langle x, x\rangle}. \quad (41)$$
Let $C$ be the $N\times N$ unitary DCT matrix defined as
$$C_{m,n} = \begin{cases} \dfrac{1}{\sqrt{N}}, & m=0,\ 0\le n\le N-1,\\[6pt] \sqrt{\dfrac{2}{N}}\cos\dfrac{\pi(2n+1)m}{2N}, & 1\le m\le N-1,\ 0\le n\le N-1.\end{cases} \quad (42)$$
The DCT of the vector $y\in\mathbb{R}^N$ (arranged as a column vector) is given by $\hat{y} = Cy$. The inverse DCT of the vector $\hat{y}$ is given by $y = C^T\hat{y}$, where $C^T$ is the transpose of $C$. A discrete signal $y$ is bandlimited if its DCT $\hat{y}$ vanishes on some fixed set, that is, if
$$\hat{y}[m] = 0, \quad m\in S, \quad (43)$$
where $S$ is a fixed nonempty proper subset of $\{0,1,\ldots,N-1\}$. The set of signals bandlimited to a specific set $S$ is a linear subspace of $\mathbb{R}^N$, of dimension equal to the cardinality of the complement of $S$.
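The matrix in (42) can be constructed and checked directly; the following sketch (ours) verifies that it is orthogonal and that it agrees with SciPy's orthonormal DCT-II. The helper name dct_matrix and the size N are assumptions for illustration.

```python
import numpy as np
from scipy.fft import dct

def dct_matrix(N):
    """Unitary DCT-II matrix C of (42)."""
    m = np.arange(N)[:, None]                    # row index
    n = np.arange(N)[None, :]                    # column index
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * m / (2 * N))
    C[0, :] = 1.0 / np.sqrt(N)
    return C

N = 8
C = dct_matrix(N)
assert np.allclose(C @ C.T, np.eye(N))                     # C is orthogonal
y = np.random.randn(N)
assert np.allclose(C @ y, dct(y, type=2, norm='ortho'))    # C y matches the orthonormal DCT-II
```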

Let $y[k] = f(x_k)$, $k=0,\ldots,N-1$, be a vector in $\mathbb{R}^N$ whose elements are the samples of the analytic function $f(x)$ taken at the zeros of $T_N$ (13). Since only $P_S f$ is available, only a certain number $Q<N$ of samples are known. Assuming that $f\in\mathcal{P}_{M-1}$ and $M\le Q$, the discrete recovery problem is to determine the DCT coefficients $\hat{y}[m]$ for $m=0,1,\ldots,M-1$ from the $Q$ known samples. Using the DCT coefficients (with an appropriate simple scaling) in (14), we obtain the $(M-1)$th degree polynomial that approximates $f(x)$.

Let $S_1$ be a subset of size $Q$ of $\{0,1,2,\ldots,N-1\}$ containing the indices of the known samples, let $S_2$ denote the subset of indices $\{0,1,\ldots,M-1\}$, and denote $D = C^{-1} = C^T$. The required DCT coefficients can be determined by solving the least squares problem
$$\sum_{i\in S_2} D_{ki}\,\hat{y}[i] \approx y[k], \quad k\in S_1. \quad (44)$$
It can be proved [19] that any square ($M=Q$) submatrix of the matrix $D$ in (44) is the product of a Vandermonde matrix and an upper triangular matrix and is therefore invertible. It follows that the submatrix $D_{ki}$, $i\in S_2$, $k\in S_1$, is a full column rank matrix when $M\le Q$. The resulting least squares problem (44) can be solved by many methods (e.g., singular value decomposition, QR factorization, the conjugate-gradient method applied to the normal equations) [20, 21]. A simple popular option (not necessarily the best in terms of computational complexity) is a version of the discrete Papoulis-Gerchberg algorithm, described below.
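A direct (noniterative) solution of (44) takes only a few lines with a standard least squares solver; the sketch below (ours) uses the function of Section 6 and a gap over $[-0.4,0.4]$ purely as an illustration, and the variable names are our own.

```python
import numpy as np
from scipy.fft import dct

# Direct least squares solution of (44), illustrated with the function of Section 6.
N, M = 128, 64
C = dct(np.eye(N), type=2, norm='ortho', axis=0)   # unitary DCT-II matrix of (42)
D = C.T                                            # D = C^{-1} = C^T

x = np.cos((2 * np.arange(N) + 1) * np.pi / (2 * N))
y = 1.0 / (1.0 + 25.0 * x ** 2)                    # samples at the zeros of T_N
S1 = np.flatnonzero(np.abs(x) > 0.4)               # indices of the known samples
A = D[S1, :M]                                      # the Q-by-M submatrix D_{ki}, k in S1, i in S2

y_hat, *_ = np.linalg.lstsq(A, y[S1], rcond=None)  # recovered DCT coefficients
y_rec = D[:, :M] @ y_hat                           # reconstruction on the full sampling grid
print(np.linalg.norm(y_rec - y))
```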

Let the operator $P_A$ map a signal $y\in\mathbb{R}^N$ into another, $P_A y = Ay$, by multiplying $y$ with a diagonal matrix $A$ containing only zeros and ones: $A_{mm}=1$ if $m\in S_1$, and $A_{mm}=0$ otherwise. It is easy to see that this operation is idempotent and self-adjoint. Hence, $P_A$ is the orthogonal projector onto the subspace of signals in $\mathbb{R}^N$ with zero-valued elements at the indices not belonging to $S_1$.

Let the operator $P_B$ map a signal $y$ into another, $P_B y = C^T B C y$, where $B$ is a diagonal matrix containing only zeros and ones: $B_{mm}=1$ if $m\in S_2$, and $B_{mm}=0$ otherwise. Using the identities $B^2=B$ and $B^T=B$, it follows that $P_B$ is idempotent,
$$P_B P_B y = C^T B C C^T B C y = C^T B C y = P_B y, \quad (45)$$
and self-adjoint,
$$\langle P_B y, z\rangle = \langle C^T B C y, z\rangle = \langle B C y, C z\rangle = \langle C y, B C z\rangle = \langle y, C^T B C z\rangle = \langle y, P_B z\rangle, \quad (46)$$
for every $y, z\in\mathbb{R}^N$. Hence, $P_B$ is the orthogonal projector onto the subspace of bandlimited (in terms of the DCT and $S_2$) signals in $\mathbb{R}^N$.

A discrete version of the PGP algorithm is obtained by working in the finite-dimensional space $\mathbb{R}^N$ and using the operators $P_A$ and $P_B$ instead of $P_S$ and $P_{N-1}$, respectively. The discrete version of the PGP is also a projections onto convex sets algorithm, and it follows that the iterates $P_B y_m$ converge to the required least squares solution (see the discussion preceding (39)).
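A minimal sketch of the discrete PGP iteration (ours), with $P_A$ realized as a sample mask and $P_B$ as truncation of the orthonormal DCT-II coefficients; the function name discrete_pgp and its defaults are our own choices rather than part of the paper.

```python
import numpy as np
from scipy.fft import dct, idct

def discrete_pgp(y_obs, known, M, eps=1e-12, max_iter=5000):
    """Discrete PGP: y_m = g + (I - P_A) P_B y_{m-1}.

    y_obs : samples at the Chebyshev nodes (entries outside `known` are ignored)
    known : boolean mask of S_1 (indices of the known samples), realizing P_A
    M     : number of retained DCT coefficients, S_2 = {0, ..., M-1}, realizing P_B
    """
    g = np.where(known, y_obs, 0.0)
    y = g.copy()
    for _ in range(max_iter):
        c = dct(y, type=2, norm='ortho')
        c[M:] = 0.0                               # P_B: bandlimiting in the DCT domain
        bl = idct(c, type=2, norm='ortho')
        y_new = np.where(known, g, bl)            # re-impose the known samples
        if np.linalg.norm(y_new - y) <= eps:
            break
        y = y_new
    return y
```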

5. Discussion

Our model is based on two assumptions: first, that the original analytic signal can be accurately approximated by a linear combination of a few Chebyshev polynomials (exponential convergence), and, second, that the Gauss-Chebyshev quadrature (15) provides a good approximation to the original integral (19). When these assumptions are not (approximately) satisfied (e.g., due to noise), the algorithm still converges, but possibly only to a crude approximation of the original signal. In such cases, a higher-order polynomial approximation and a denser sampling grid would provide an improvement. If the model assumptions are (approximately) satisfied, the resulting approximation is very accurate and minimizes a certain objective function in the sense of (39).

A Chebyshev polynomial series expansion is a Fourier cosine series with the change of variable $x=\cos\theta$, applied to $f(\cos\theta)$ [6]. While $f(x)$ itself is not periodic in $x$, the function $f(\cos\theta)$ is periodic in $\theta$. The smoother the function, the more rapidly its Fourier coefficients decrease. Since $f(\cos\theta)$ is periodic and analytic, its Fourier series must converge exponentially. In a Fourier cosine series expansion of the nonperiodic function $f(x)$, the first derivative of the function is discontinuous at the interval's endpoints, and in a Fourier series expansion, the function itself is discontinuous at the interval's endpoints. It follows that a Chebyshev polynomial series expansion of an analytic function provides a better approximation than a Fourier series expansion (using a discrete Fourier transform in discrete implementations [22]) with the same number of terms, which makes it attractive for recovery problems. The improved approximation comes, however, at the price of a nonuniform sampling grid. The sampling points are clustered near the boundaries of the interval $[-1,1]$. It follows that the proposed algorithm is better suited to the recovery of gaps that occur in or near the middle of the signal. In such cases, we benefit both from the excellent approximation properties of the Chebyshev polynomials and from the fact that fewer missing samples have to be recovered.

6. Numerical Example

We demonstrate the performance of the discrete implementation of the recovery algorithm by a gap recovery example. The function $f(x)$, defined on the interval $[-1,1]$, is
$$f(x) = \frac{1}{1+25x^2}. \quad (47)$$
It is analytic on $[-1,1]$ but has poles at $x=\pm 0.2j$, where $j=\sqrt{-1}$. We assume that the piece $[-0.4, 0.4]$ is missing and that the signal is well approximated by 64 Chebyshev polynomials. The samples are taken at the 128 zeros of the polynomial $T_{128}$; it follows that 34 samples are missing. The proposed algorithm is used to recover the missing samples: the $\ell_2$ norm of the error is 0.16, while the norm of the missing samples is 3.5244. We see that the recovery algorithm succeeds in approximating the missing data. Finally, the function and the resulting polynomial approximation are shown on a uniform grid of 128 points in the interval $[-1,1]$ in Figure 1.

Figure 1: Original function (solid) and its approximation by a Chebyshev polynomial expansion (dash-dot).
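The setup of this example can be reproduced with the discrete PGP iteration as follows (our sketch, not the authors' code); the reported error depends on the number of iterations and the stopping rule, so it need not match the quoted value exactly.

```python
import numpy as np
from scipy.fft import dct, idct

# Samples of f(x) = 1/(1 + 25 x^2) at the 128 zeros of T_128, a gap over [-0.4, 0.4]
# (34 missing samples), and a 64-term Chebyshev model, as in the example above.
N, M = 128, 64
x = np.cos((2 * np.arange(N) + 1) * np.pi / (2 * N))
f = 1.0 / (1.0 + 25.0 * x ** 2)
known = np.abs(x) > 0.4                      # S_1: indices of the known samples

# Discrete PGP iteration: y_m = g + (I - P_A) P_B y_{m-1}
g = np.where(known, f, 0.0)
y = g.copy()
for _ in range(2000):
    c = dct(y, type=2, norm='ortho')
    c[M:] = 0.0                              # P_B: keep the first M DCT coefficients
    y = np.where(known, g, idct(c, type=2, norm='ortho'))

print((~known).sum())                        # 34 missing samples
print(np.linalg.norm((y - f)[~known]))       # l2 error over the missing samples
print(np.linalg.norm(f[~known]))             # norm of the missing samples
```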

7. Summary and Conclusions

The recovery of gaps in general analytic functions was obtained by using signal expansions in terms of the Chebyshev polynomials of the first kind. The recovery algorithm is exact for polynomial signals, and certain best polynomial approximations are obtained for general analytic signals. For the continuous-time case, we have extended the well-known Papoulis-Gerchberg algorithm for bandlimited signals to signals in polynomial subspaces. The discrete implementation is based on a specific nonuniform sampling grid (the zeros of a Chebyshev polynomial) and the discrete cosine transform, and it amounts to solving a linear least squares problem.

References

  1. G. G. Lorentz, Approximation of Functions, Chelsea, New York, NY, USA, 2nd edition, 1986.
  2. I. Sadka and H. Ur, “On Cadzow's non-iterative extrapolation of BL signals,” Signal Processing, vol. 59, no. 3, pp. 313–320, 1997.
  3. A. Papoulis, “A new algorithm in spectral analysis and band-limited extrapolation,” IEEE Transactions on Circuits and Systems, vol. 22, pp. 735–742, 1975.
  4. R. W. Gerchberg, “Super-resolution through error energy reduction,” Optica Acta, vol. 21, pp. 709–720, 1974.
  5. R. P. Boas, Entire Functions, Academic Press, New York, NY, USA, 1954.
  6. J. C. Mason and D. C. Handscomb, Chebyshev Polynomials, Chapman & Hall/CRC, Boca Raton, Fla, USA, 2003.
  7. P. Corr, D. Stewart, P. Hanna, J. Ming, and F. J. Smith, “Discrete Chebyshev transform—a natural modification of the DCT,” in Proceedings of the 15th International Conference on Pattern Recognition (ICPR '00), vol. 3, pp. 1142–1145, Barcelona, Spain, 2000.
  8. V. E. Neagoe, “Chebyshev nonuniform sampling cascaded with the discrete cosine transform for optimum interpolation,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 10, pp. 1812–1815, 1990.
  9. Z. D. Wang, G. A. Jullien, and W. C. Miller, “On computing Chebyshev optimal nonuniform interpolation,” Signal Processing, vol. 51, no. 3, pp. 223–228, 1996.
  10. N. Ahmed, T. Natarajan, and K. R. Rao, “Discrete cosine transform,” IEEE Transactions on Computers, vol. 23, pp. 90–93, 1974.
  11. V. Britanak, P. C. Yip, and K. R. Rao, Discrete Cosine and Sine Transforms: General Properties, Fast Algorithms and Integer Approximations, Academic Press, Amsterdam, The Netherlands, 2007.
  12. J. von Neumann, Functional Operators Vol. II: The Geometry of Orthogonal Spaces, Annals of Mathematics Studies no. 22, Princeton University Press, Princeton, NJ, USA, 1950. This is a reprint of mimeographed lecture notes first distributed in 1933.
  13. L. Debnath and P. Mikusinski, Introduction to Hilbert Spaces with Applications, Academic Press, New York, NY, USA, 2nd edition, 1998.
  14. H. Stark and Y. Yang, Vector Space Projections, John Wiley & Sons, New York, NY, USA, 1998.
  15. D. C. Youla, “Generalized image reconstruction by the method of alternating orthogonal projections,” IEEE Transactions on Circuits and Systems, vol. 25, pp. 694–702, 1978.
  16. M. Goldburg and R. J. Marks, “Signal synthesis in the presence of an inconsistent set of constraints,” IEEE Transactions on Circuits and Systems, vol. 32, no. 7, pp. 647–663, 1985.
  17. D. C. Youla and V. Velasco, “Extensions of a result on the synthesis of signals in the presence of inconsistent constraints,” IEEE Transactions on Circuits and Systems, vol. 33, no. 4, pp. 465–468, 1986.
  18. L. Fox and I. B. Parker, Chebyshev Polynomials in Numerical Analysis, Oxford University Press, London, UK, 1968.
  19. J. L. Wu and J. Shiu, “Discrete cosine transform in error control coding,” IEEE Transactions on Communications, vol. 43, pp. 1857–1861, 1995.
  20. P. J. S. G. Ferreira, “Iterative and noniterative recovery of missing samples for 1-D band-limited signals,” in Nonuniform Sampling: Theory and Practice, F. Marvasti, Ed., pp. 235–278, Kluwer Academic/Plenum, New York, NY, USA, 2001.
  21. G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
  22. X. G. Xia, “An extrapolation for general analytic signals,” IEEE Transactions on Signal Processing, vol. 40, no. 9, pp. 2243–2249, 1992.