International Scholarly Research Notices


Research Article | Open Access

Volume 2011 |Article ID 421502 | https://doi.org/10.5402/2011/421502

Benjamin G. Salomon, Hanoch Ur, "A Method for the Recovery of Gaps in General Analytic Signals", International Scholarly Research Notices, vol. 2011, Article ID 421502, 6 pages, 2011. https://doi.org/10.5402/2011/421502

A Method for the Recovery of Gaps in General Analytic Signals

Academic Editor: L. Senhadji
Received: 27 May 2011
Accepted: 10 Jul 2011
Published: 07 Sep 2011

Abstract

We propose a generalized Papoulis-Gerchberg algorithm for the recovery of gaps in general analytic functions. The continuous-time algorithm is based on signal expansions in terms of the Chebyshev polynomials of the first kind, and the discrete-time implementation is based on a suitable nonuniform sampling scheme and the discrete cosine transform.

1. Introduction

Gapped (missing) data arise when it is difficult to obtain contiguous measurements of a function over a long period. The recovery of the missing function values is possible, at least in theory, given certain a priori knowledge about the function, for example, analyticity on a certain interval. A real-valued function $f$, defined on a closed interval $I$ on the real line, is called analytic on $I$ if there exists an analytic extension of $f$ onto some open set $G$ of the complex plane $\mathbb{C}$ that contains $I$ [1]. That is, there is a unique single-valued analytic function, defined on $G$, that coincides with $f$ on $I$. Let $f$ be analytic on $I$. When the values of $f$ on a subinterval of $I$ are given, $f$ can be determined on all of $I$ by means of analytic continuation.

In many missing data problems, the functions are assumed to be bandlimited. Let $L^2(\mathbb{R})$ denote the Hilbert space of all square integrable functions defined on the real line $\mathbb{R}$. For $\tau > 0$ and a function $f \in L^2(\mathbb{R})$, we define the operators $P_\tau$ and $Q_\tau$ as follows:

$$P_\tau f = \begin{cases} f(t), & |t| \le \tau, \\ 0, & |t| > \tau, \end{cases} \qquad Q_\tau f = f - P_\tau f. \quad (1)$$

Let $\mathcal{F}f = \hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-j\omega t}\, dt$ and $\mathcal{F}^{-1}f$ be the Fourier transform and inverse Fourier transform of $f$, respectively. For $\sigma > 0$ and a function $f \in L^2(\mathbb{R})$, we define the bandlimiting operator $P_\sigma$ as follows:

$$P_\sigma f = \mathcal{F}^{-1} L_\sigma \mathcal{F} f = \int_{-\infty}^{\infty} f(t-\tau)\, \frac{\sin \sigma\tau}{\pi\tau}\, d\tau, \quad (2)$$

where

$$L_\sigma(\omega) = \begin{cases} 1, & |\omega| \le \sigma, \\ 0, & |\omega| > \sigma. \end{cases} \quad (3)$$

The bandlimited signal extrapolation problem is defined as follows: find $f$ given $g = P_\tau f$ and knowing that $f = P_\sigma f$. Various methods have been proposed for solving the bandlimited extrapolation problem; [2] contains a detailed comparison of some of them. The iterative bandlimited signal extrapolation algorithm of Papoulis and Gerchberg [3, 4] is attractive due to its relative simplicity, requiring only a Fourier transform and an inverse Fourier transform in each iteration. Based on the identity

$$g + Q_\tau P_\sigma f = P_\tau f + Q_\tau f = f, \quad (4)$$

Papoulis [3] and Gerchberg [4] proposed the following algorithm:

Initialization step: $f_0 = g$; for $m = 1, 2, \ldots$: $f_m = g + Q_\tau P_\sigma f_{m-1}$. (5)

A proof of the convergence of the algorithm was given by Papoulis [3] using signal expansions in terms of the prolate spheroidal wave functions. Gerchberg [4] actually solved the dual problem of extrapolating the spectrum of a finite object beyond its diffraction limits, leading to superresolution and image enhancement.
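To make iteration (5) concrete, the following is a minimal discrete-time sketch of the classical Papoulis-Gerchberg loop, in which the bandlimiting operator $P_\sigma$ is approximated by an FFT-based ideal low-pass filter and $P_\tau$, $Q_\tau$ by index masks. The grid size, bandwidth, gap placement, and test signal are illustrative assumptions, not values from the paper:

import numpy as np

def papoulis_gerchberg(g, known, keep_bins, n_iter=500):
    """Discrete Papoulis-Gerchberg iteration f_m = g + Q_tau P_sigma f_{m-1}.

    g         : observed signal, zero inside the gap (g = P_tau f)
    known     : boolean mask of the available samples
    keep_bins : boolean mask of the retained (low-pass) FFT bins, so that f = P_sigma f
    """
    f = g.copy()
    for _ in range(n_iter):
        spectrum = np.fft.fft(f)
        spectrum[~keep_bins] = 0.0          # P_sigma: ideal low-pass filter
        f_bl = np.real(np.fft.ifft(spectrum))
        f = np.where(known, g, f_bl)        # g + Q_tau P_sigma f_{m-1}
    return f

# Illustrative use: a bandlimited signal with a gap in the middle.
N = 256
t = np.arange(N)
f_true = np.cos(2 * np.pi * 3 * t / N) + 0.5 * np.sin(2 * np.pi * 5 * t / N)
known = np.ones(N, dtype=bool)
known[100:140] = False                      # the gap
keep = np.zeros(N, dtype=bool)
keep[:8] = True                             # positive-frequency low-pass band
keep[N - 7:] = True                         # matching negative-frequency bins
g = np.where(known, f_true, 0.0)
f_rec = papoulis_gerchberg(g, known, keep)
print("max gap error:", np.max(np.abs(f_rec[~known] - f_true[~known])))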

A bandlimited function $f(t)$, defined on the real line $\mathbb{R}$, is an entire function of exponential type when $t$ is extended to $\mathbb{C}$ [5]. Hence, bandlimited signals are a very special case of analytic functions, and other analytic signals may be encountered in practice. In this paper, we propose a method for the recovery of gaps in general analytic signals using a generalization of the Papoulis-Gerchberg algorithm, based on signal expansions in terms of the Chebyshev polynomials of the first kind. In Section 2, we briefly review the properties of Chebyshev polynomials. In Section 3, we generalize the Papoulis-Gerchberg algorithm from continuous-time bandlimited signals to continuous-time signals in polynomial spaces and general analytic signals. We then focus, in Section 4, on the discrete implementation of the continuous-time algorithm, based on a suitable nonuniform sampling scheme and the discrete cosine transform. Section 5 discusses the model assumptions, and the performance of the recovery algorithm is demonstrated in Section 6 by a numerical example.

2. Chebyshev Polynomials

The Chebyshev polynomial $T_n(x)$ of the first kind is a polynomial in $x$ of degree $n$, defined by the relation

$$T_n(x) = \cos n\theta \quad \text{when } x = \cos\theta, \quad (6)$$

for $x \in [-1, 1]$ [6]. The range of the corresponding variable $\theta$ is $[0, \pi]$ (these ranges are traversed in opposite directions). Intervals $[a, b]$ other than $[-1, 1]$ are easily handled by the change of variables

$$s = \frac{2x - (a + b)}{b - a}. \quad (7)$$

Without loss of generality, we focus hereafter on the interval $[-1, 1]$.

Let $L^2_w([-1,1])$ be the Hilbert space of all real-valued square integrable functions on $[-1,1]$ with respect to the nonnegative weight function

$$w(x) = \frac{1}{\sqrt{1 - x^2}}. \quad (8)$$

This is the space of functions $f$ such that the norm

$$\|f\| = \left( \int_{-1}^{1} w(x) |f(x)|^2 \, dx \right)^{1/2} \quad (9)$$

is finite. The associated inner product is

$$\langle f, g \rangle = \int_{-1}^{1} w(x) f(x) g(x) \, dx. \quad (10)$$

The set $\{T_i(x),\ i = 0, 1, \ldots\}$ forms a complete orthogonal polynomial system in $L^2_w([-1,1])$. The orthogonality relation is given by [6]

$$\langle T_m(x), T_n(x) \rangle = \int_{-1}^{1} w(x) T_m(x) T_n(x) \, dx = \begin{cases} 0, & m \ne n, \\ \dfrac{d_m \pi}{2}, & m = n, \end{cases} \quad (11)$$

where

$$d_m = \begin{cases} 2, & m = 0, \\ 1, & m \ge 1. \end{cases} \quad (12)$$

The $N$ zeroes of the Chebyshev polynomial $T_N(x)$ are

$$x_k = \cos\frac{(2k+1)\pi}{2N}, \quad k = 0, 1, \ldots, N-1. \quad (13)$$

These zeroes are listed in decreasing order in $x$. The $(N-1)$th degree polynomial $p_{N-1}(x)$, interpolating a function $f(x)$ in the zeroes of $T_N$, can be written as a sum of Chebyshev polynomials in the form

$$p_{N-1}(x) = \sum_{n=0}^{N-1} c_n T_n(x), \quad (14)$$

where the coefficients $c_n$ in (14) are given by the explicit formula

$$c_n = \gamma_n \sum_{k=0}^{N-1} f(x_k) T_n(x_k), \quad (15)$$

where

$$\gamma_n = \begin{cases} \dfrac{1}{N}, & n = 0, \\ \dfrac{2}{N}, & 0 < n \le N-1. \end{cases} \quad (16)$$

It follows [7–9] that

$$c_n = \gamma_n \sum_{k=0}^{N-1} f(x_k) \cos\frac{\pi(2k+1)n}{2N}, \quad (17)$$

which is the well-known discrete cosine transform (type II) [10] applied on the samples of the function $f(x)$ taken at a nonuniform sampling grid corresponding to the zeroes of $T_N$. Efficient algorithms exist for computing the discrete cosine transform (see, e.g., [11] and references therein).
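As a sanity check on relation (17), here is a short sketch (an illustration under our own assumptions, not code from the paper) that computes the coefficients $c_n$ both by the explicit sum (15)-(16) and by a scaled DCT-II using scipy.fft.dct; the test function and the value of $N$ are arbitrary choices:

import numpy as np
from scipy.fft import dct
from numpy.polynomial import chebyshev as C

N = 16
k = np.arange(N)
x = np.cos((2 * k + 1) * np.pi / (2 * N))      # zeroes of T_N, eq. (13)
f = np.exp(x) * np.sin(3 * x)                  # an arbitrary smooth test function

# Coefficients via the explicit sum (15)-(16).
c_direct = np.empty(N)
for n in range(N):
    gamma = 1.0 / N if n == 0 else 2.0 / N
    c_direct[n] = gamma * np.sum(f * np.cos(np.pi * (2 * k + 1) * n / (2 * N)))

# Same coefficients via a scaled DCT-II (scipy's unnormalized DCT-II carries an extra factor of 2).
c_dct = dct(f, type=2) / N
c_dct[0] /= 2.0

print(np.allclose(c_direct, c_dct))            # True
# The interpolant p_{N-1}(x) = sum c_n T_n(x) reproduces f at the Chebyshev nodes:
print(np.allclose(C.chebval(x, c_direct), f))  # True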

3. Gap Recovery in General Analytic Signals Based on a Chebyshev Polynomial Series Expansion

Let $\mathbb{P}_{N-1}$ denote the subspace of all polynomials of degree $< N$ ($N > 1$) in $L^2_w([-1,1])$. Let $P_{N-1}$ denote the orthogonal projection operator onto $\mathbb{P}_{N-1}$:

$$P_{N-1} f = \sum_{k=0}^{N-1} f_k T_k(x), \quad (18)$$

where

$$f_k = \frac{2}{\pi d_k} \langle f, T_k \rangle, \quad (19)$$

$d_k$ is defined in formula (12), and $f \in L^2_w([-1,1])$.

Let $C_S$ be the subspace of all the functions in $L^2_w([-1,1])$ which vanish outside a collection of disjoint closed intervals in $[-1,1]$, and let $C^+_S$ be the orthogonal complement of $C_S$ in $L^2_w([-1,1])$. We denote by $P_S$ and $Q_S$ the orthogonal projection operators onto the subspaces $C_S$ and $C^+_S$, respectively. It follows that $Q_S = I_d - P_S$, where $I_d$ is the identity operator in $L^2_w([-1,1])$. If we assume that $f = P_{N-1} f$, then we have

$$f = P_S f + Q_S f = P_S f + Q_S P_{N-1} f. \quad (20)$$

The continuous-time recovery problem can be stated as follows: find $f$ given $g = P_S f$ and knowing that $f = P_{N-1} f$. We propose the following Papoulis-Gerchberg type algorithm for recovering gaps in polynomial signals in $L^2_w([-1,1])$ (the PGP algorithm).

The PGP Algorithm
Initialization step:

$$g = f_0 = P_S f, \quad (21)$$

$m = 1, 2, \ldots$:

$$f_m = g + Q_S P_{N-1} f_{m-1}. \quad (22)$$

The iteration process is stopped when $\|f_m - f_{m-1}\| \le \epsilon$, where $\epsilon > 0$ is a user-specified threshold. $f_m$ or $P_{N-1} f_m$ is taken as the result of the recovery algorithm after $m$ iterations.
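The following is a minimal sketch of the PGP iteration (21)-(22) together with the stopping rule just described; the two projections are passed in as callables, so the same loop can serve a discretization of the continuous-time operators or the discrete operators of Section 4. The function name, argument names, and default tolerance are our own illustrative choices:

import numpy as np

def pgp(g, project_S, project_poly, eps=1e-10, max_iter=10_000):
    """Generic PGP loop: f_m = g + Q_S P_{N-1} f_{m-1}, stopped when
    ||f_m - f_{m-1}|| <= eps.  project_S and project_poly play the roles of
    the orthogonal projections P_S and P_{N-1}; Q_S = I - P_S."""
    f_prev = g.copy()
    for _ in range(max_iter):
        p = project_poly(f_prev)
        f = g + (p - project_S(p))          # g + Q_S P_{N-1} f_{m-1}
        if np.linalg.norm(f - f_prev) <= eps:
            break
        f_prev = f
    return project_poly(f)                  # P_{N-1} f_m as the recovered signal

In the discrete setting of Section 4, project_S becomes the sample-mask projector $P_A$ and project_poly the DCT-domain projector $P_B$.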
If $f \in \mathbb{P}_{N-1}$, then $f = P_{N-1} f$, and it follows that

$$g = f - Q_S P_{N-1} f. \quad (23)$$

We know from (22) and (23) that

$$f_m - f = Q_S P_{N-1}\left(f_{m-1} - f\right). \quad (24)$$

Therefore,

$$\|f_m - f\| \le \|Q_S\| \, \|P_{N-1}\| \, \|f_{m-1} - f\| \le \|f_{m-1} - f\|, \quad (25)$$

which proves that the PGP algorithm has the property that it reduces the error energy during the iteration process.

Lemma 1. A signal $f \in \mathbb{P}_{N-1}$ can be uniquely determined from $P_S f$ by using the PGP algorithm (21) and (22).

Proof. Iterating (22), we obtain

$$f_m = g + Q_S P_{N-1} f_{m-1} = g + Q_S P_{N-1}\left(g + Q_S P_{N-1} f_{m-2}\right) = g + Q_S P_{N-1} g + Q_S P_{N-1} Q_S P_{N-1}\left(g + Q_S P_{N-1} f_{m-3}\right) = \sum_{n=0}^{m} \left(Q_S P_{N-1}\right)^n g. \quad (26)$$

Substituting (23) in (26), we obtain

$$f_m = f - \left(Q_S P_{N-1}\right)^{m+1} f. \quad (27)$$

According to von Neumann's theorem on alternating projections [12], for every $f \in \mathbb{P}_{N-1}$,

$$\lim_{m \to \infty} \left(Q_S P_{N-1}\right)^m f = f_p, \quad (28)$$

where $f_p$ is the projection of $f$ onto the closed subspace $C^+_S \cap \mathbb{P}_{N-1}$. Any $f \in \mathbb{P}_{N-1}$ is analytic on $[-1,1]$ and can vanish on an interval if and only if it is the zero function. Hence, the subspace $C^+_S \cap \mathbb{P}_{N-1}$ contains only the zero function. We obtain that $f_p = 0$ and that the PGP converges in norm to the required function $f$.

๐‘ƒ๐‘โˆ’1๐‘ƒ๐‘†๐‘ƒ๐‘โˆ’1 is a bounded positive self-adjoint compact operator. Hence, there exists an orthonormal basis of โ„™๐‘โˆ’1 consisting of eigenfunctions ๐œ™๐‘›, ๐‘›=0,1,โ€ฆ,๐‘โˆ’1, of the operator ๐‘ƒ๐‘โˆ’1๐‘ƒ๐‘†๐‘ƒ๐‘โˆ’1, with corresponding real positive eigenvalues ๐œ†๐‘›, such that 1>๐œ†0>๐œ†1>โ‹ฏ>๐œ†๐‘โˆ’1>0 [13].

Lemma 2. Let $f = \phi_k$ be an eigenfunction of $P_{N-1} P_S P_{N-1}$ with corresponding eigenvalue $\lambda_k$, and let $g = P_S f$ be the given values of $f$. Then,

$$P_{N-1} f_m = A_m \phi_k, \quad \text{where } A_m = 1 - \left(1 - \lambda_k\right)^{m+1}. \quad (29)$$

Proof. We will prove (29) by induction. It is true for $m = 0$ that

$$P_{N-1} f_0 = P_{N-1} g = P_{N-1} P_S \phi_k = P_{N-1} P_S P_{N-1} \phi_k = \lambda_k \phi_k = A_0 \phi_k, \quad (30)$$

where we used the identity $P_{N-1} \phi_k = \phi_k$ (since $\phi_k \in \mathbb{P}_{N-1}$).

Suppose that it is true for some $m \ge 0$. By using (22), we obtain

$$P_{N-1} f_{m+1} = P_{N-1} g + P_{N-1} Q_S P_{N-1} f_m = P_{N-1} P_S P_{N-1} \phi_k + P_{N-1}\left(I_{L^2_w([-1,1])} - P_S\right) P_{N-1} f_m = P_{N-1} P_S P_{N-1} \phi_k + P_{N-1} f_m - P_{N-1} P_S P_{N-1} f_m. \quad (31)$$

Substituting $P_{N-1} P_S P_{N-1} \phi_k = \lambda_k \phi_k$ and using the induction assumption, we obtain

$$P_{N-1} f_{m+1} = \lambda_k \phi_k + A_m \phi_k - \lambda_k A_m \phi_k = \left[A_m + \left(1 - A_m\right)\lambda_k\right] \phi_k. \quad (32)$$

In (31) and (32), we used the idempotent property of the orthogonal projection operator $P_{N-1}$ (i.e., $P_{N-1} = P_{N-1} P_{N-1}$). It is easy to see that

$$A_m + \left(1 - A_m\right)\lambda_k = 1 - \left(1 - \lambda_k\right)^{m+1} + \left(1 - \lambda_k\right)^{m+1} \lambda_k = 1 - \left(1 - \lambda_k\right)^{m+2}. \quad (33)$$

Hence,

$$P_{N-1} f_{m+1} = A_{m+1} \phi_k, \quad (34)$$

which completes the proof.

Any $f \in \mathbb{P}_{N-1}$ can be represented as

$$f = \sum_{k=0}^{N-1} \alpha_k \phi_k, \quad (35)$$

where

$$\alpha_k = \int_{-1}^{1} w(x) f(x) \phi_k(x) \, dx. \quad (36)$$

Using (29) and (35), we obtain that after $m$ iterations of the PGP algorithm

$$P_{N-1} f_m = \sum_{k=0}^{N-1} \alpha_k \left[1 - \left(1 - \lambda_k\right)^{m+1}\right] \phi_k. \quad (37)$$

The energy of the error is given by

$$\left\|f - P_{N-1} f_m\right\|^2 = \sum_{k=0}^{N-1} \alpha_k^2 \left(1 - \lambda_k\right)^{2(m+1)}. \quad (38)$$

It follows that the rate of convergence of the PGP algorithm depends on the spectral representation of the signal in terms of the eigenfunctions of the operator $P_{N-1} P_S P_{N-1}$. If the significant components of the signal correspond to relatively large eigenvalues, the convergence is fast; otherwise, the PGP converges slowly.
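To illustrate the error decay (38) numerically, the sketch below anticipates the discrete operators of Section 4 (the DCT-domain projector $P_B$ and the sample mask $P_A$), builds the discrete analogue of $P_{N-1} P_S P_{N-1}$, computes its eigenvalues, and prints the slowest predicted per-iteration decay factor $(1-\lambda)^2$. The grid size, gap location, and polynomial order are illustrative assumptions:

import numpy as np
from scipy.fft import dct

N, M = 64, 16
x = np.cos((2 * np.arange(N) + 1) * np.pi / (2 * N))   # Chebyshev nodes (13)
known = np.abs(x) > 0.3                                 # a gap around the middle

C = dct(np.eye(N), type=2, norm="ortho", axis=0)        # unitary DCT-II matrix, cf. (42)
B = np.diag((np.arange(N) < M).astype(float))           # keep the first M DCT coefficients
A = np.diag(known.astype(float))                        # keep the known samples
PB = C.T @ B @ C
T = PB @ A @ PB                                         # discrete analogue of P_{N-1} P_S P_{N-1}

lam = np.sort(np.linalg.eigvalsh(T))[::-1][:M]          # its (at most M) nonzero eigenvalues
print("largest eigenvalue:", lam[0])
print("smallest eigenvalue:", lam[-1])
print("slowest per-iteration error decay factor (1 - lam)^2:", (1.0 - lam[-1]) ** 2)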

The PGP can also be viewed as an example of signal restoration by the method of projections onto convex sets (POCS) [14, 15]. The two convex sets, or constraints, are the affine set $V_1$, which contains all the functions $f \in L^2_w([-1,1])$ with $g = P_S f$, and the subspace $\mathbb{P}_{N-1}$. If $f \in V_1$ but $f \notin \mathbb{P}_{N-1}$, we have two inconsistent convex constraints. The convergence of a POCS algorithm with two inconsistent convex constraints is a well-studied problem [16, 17]. In this case, the iterates $P_{N-1} f_m$ of the PGP algorithm converge to the signal $q \in \mathbb{P}_{N-1}$ closest to $V_1$, that is, with the minimum distance $d$, where $d$ is given by

$$d\left(q, V_1\right) = \inf_{y \in V_1} \|q - y\|. \quad (39)$$

Thus, the PGP algorithm is exact for all $f \in \mathbb{P}_{N-1}$, and a certain “best polynomial approximation” in the sense of (39) is obtained when $f \notin \mathbb{P}_{N-1}$.

The expansion of an analytic function in terms of the Chebyshev polynomials converges rapidly (exponentially). It can be proved that if the function $f(x)$ can be extended to a function $f(z)$ analytic on the ellipse $\{z : |z + \sqrt{z^2 - 1}| = r\}$, where $r > 1$, then $|f - P_{N-1} f| = O(r^{-(N-1)})$ for all $x$ in $[-1, 1]$ [6]. This property of rapid convergence, combined with the approximation property (39), makes the proposed algorithm a practical gap recovery tool for general analytic functions.

4. Discrete Implementation of the Continuous-Time Recovery Algorithm

A Chebyshev polynomial-based series expansion allows a convenient and accurate discrete implementation by means of a nonuniform sampling scheme, in which the samples are taken at time locations that are the zeroes of a Chebyshev polynomial, followed by a discrete cosine transform (DCT) applied to the nonuniform samples [8, 9]. Equation (15) is a Gauss-Chebyshev quadrature for the original integral (19), which is known to be a very accurate numerical method [6]. In addition, if the maximum absolute error is used as the optimality criterion, then the interpolating polynomial given by (14) provides optimal approximation among polynomials of degree $\le N-1$ to the function $f$ [18].

Let $\mathbb{R}^N$ denote the real $N$-dimensional space. The $\ell^2$ inner product is given by

$$\langle y, h \rangle = \sum_{m=0}^{N-1} y[m]\, h[m], \quad (40)$$

and the induced norm is given by

$$\|x\|^2 = \langle x, x \rangle. \quad (41)$$

Let $C$ be the $N \times N$ unitary DCT matrix defined as

$$C_{m,n} = \begin{cases} \dfrac{1}{\sqrt{N}}, & m = 0, \; 0 \le n \le N-1, \\ \sqrt{\dfrac{2}{N}} \cos\dfrac{\pi(2n+1)m}{2N}, & 1 \le m \le N-1, \; 0 \le n \le N-1. \end{cases} \quad (42)$$

The DCT of the vector $y \in \mathbb{R}^N$ (arranged as a column vector) is given by $\hat{y} = C y$. The inverse DCT of the vector $\hat{y}$ is given by $y = C^T \hat{y}$, where $C^T$ is the transpose of $C$. A discrete signal $y$ is bandlimited if its DCT $\hat{y}$ vanishes on some fixed set, that is, if

$$\hat{y}[m] = 0, \quad m \in S, \quad (43)$$

where $S$ is a fixed nonempty proper subset of $\{0, 1, \ldots, N-1\}$. The set of signals bandlimited to a specific set $S$ is a linear subspace of $\mathbb{R}^N$, of dimension equal to the cardinality of the complement of $S$.
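A quick sketch (an illustration using scipy rather than the matrix formula by hand) that builds the unitary DCT matrix $C$ of (42), checks that it is orthogonal, and checks that $\hat{y} = Cy$ matches scipy.fft.dct with the orthonormal convention; the size $N$ and the random test vector are arbitrary choices:

import numpy as np
from scipy.fft import dct

N = 8
m, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * m / (2 * N))  # rows m >= 1 of (42)
C[0, :] = 1.0 / np.sqrt(N)                                        # row m = 0 of (42)

print(np.allclose(C @ C.T, np.eye(N)))                   # True: C is unitary
y = np.random.default_rng(0).standard_normal(N)
print(np.allclose(C @ y, dct(y, type=2, norm="ortho")))  # True: matches the orthonormal DCT-II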

Let $y[k] = f(x_k)$, $k = 0, \ldots, N-1$, be a vector in $\mathbb{R}^N$ whose elements are the samples of the analytic function $f(x)$ taken at the zeroes of $T_N$ (13). Since only $P_S f$ is available, only a certain number $Q < N$ of the samples are known. Assuming that $f \in \mathbb{P}_{M-1}$ and $M \le Q$, the discrete recovery problem is to determine the DCT coefficients $\hat{y}[m]$ for $m = 0, 1, \ldots, M-1$ from the $Q$ known samples. Using the DCT coefficients (with an appropriate simple scaling) in (14), we obtain the $(M-1)$th degree polynomial that approximates $f(x)$.

Let $S_1$ be a subset of size $Q$ of $\{0, 1, 2, \ldots, N-1\}$ containing the indices of the known samples, let $S_2$ denote the subset of indices $\{0, 1, \ldots, M-1\}$, and denote $D = C^{-1} = C^T$. The required DCT coefficients can be determined by solving the least squares problem

$$\sum_{i \in S_2} D_{ki}\, \hat{y}[i] \simeq y[k], \quad k \in S_1. \quad (44)$$

It can be proved [19] that any square submatrix ($M = Q$) of the matrix $D$ in (44) is the product of a Vandermonde matrix and an upper triangular matrix and is therefore invertible. It follows that the submatrix $D_{ki}$, $i \in S_2$, $k \in S_1$, is a full column rank matrix when $M \le Q$. The resulting least squares problem (44) can be solved by many methods (e.g., singular value decomposition, QR factorization, the conjugate-gradient method applied to the normal equations) [20, 21]. A simple, popular option (not necessarily the best in terms of computational complexity) is a version of the discrete Papoulis-Gerchberg algorithm described below.
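As an illustration of (44), the sketch below forms the submatrix of $D = C^T$ with rows indexed by the known-sample set $S_1$ and columns by $S_2 = \{0, \ldots, M-1\}$ and solves the least squares problem directly with numpy.linalg.lstsq; the sizes, the gap location, and the test function are assumptions chosen for the example:

import numpy as np
from scipy.fft import dct

N, M = 32, 12
k = np.arange(N)
x = np.cos((2 * k + 1) * np.pi / (2 * N))          # Chebyshev nodes (13)
y = np.exp(x)                                      # samples of an analytic test function
S1 = np.where(np.abs(x) > 0.25)[0]                 # indices of the known samples (gap near 0)
S2 = np.arange(M)                                  # retained DCT coefficients

C = dct(np.eye(N), type=2, norm="ortho", axis=0)   # unitary DCT matrix of (42)
D = C.T                                            # inverse DCT matrix
y_hat_S2, *_ = np.linalg.lstsq(D[np.ix_(S1, S2)], y[S1], rcond=None)

# Reconstruct all N samples from the recovered DCT coefficients.
y_rec = D[:, S2] @ y_hat_S2
print("max error on the missing samples:",
      np.max(np.abs(y_rec - y)[np.abs(x) <= 0.25]))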

Let the operator ๐‘ƒ๐ด map a signal ๐‘ฆโˆˆโ„๐‘ into another, ๐‘ƒ๐ด๐‘ฆ=๐ด๐‘ฆ, by multiplying ๐‘ฆ with a diagonal matrix ๐ด containing only zeroes or ones. ๐ด๐‘š๐‘š=1, if ๐‘šโˆˆ๐‘†1, and ๐ด๐‘š๐‘š=0, otherwise. It is easy to see that this operation is idempotent and self-adjoint. Hence, ๐‘ƒ๐ด is the orthogonal projector onto the subspace of โ„๐‘ signals with zero-valued elements at the indices not belonging to ๐‘†1.

Let the operator $P_B$ map a signal $y$ into $P_B y = C^T B C y$, where $B$ is a diagonal matrix containing only zeroes or ones: $B_{mm} = 1$ if $m \in S_2$, and $B_{mm} = 0$ otherwise. Using the identities $B^2 = B$ and $B^T = B$, it follows that $P_B$ is idempotent,

$$P_B P_B y = C^T B C C^T B C y = C^T B C y = P_B y, \quad (45)$$

and self-adjoint,

$$\langle P_B y, h \rangle = \langle C^T B C y, h \rangle = \langle B C y, C h \rangle = \langle C y, B C h \rangle = \langle y, C^T B C h \rangle = \langle y, P_B h \rangle, \quad (46)$$

for every $y, h \in \mathbb{R}^N$. Hence, $P_B$ is the orthogonal projector onto the subspace of bandlimited (in terms of the DCT and $S_2$) signals in $\mathbb{R}^N$.

A discrete version of the PGP algorithm is obtained by working in the finite-dimensional space โ„๐‘ and using the operators ๐‘ƒ๐ด instead of ๐‘ƒ๐‘† and ๐‘ƒ๐ต instead of ๐‘ƒ๐‘โˆ’1. The discrete version of the PGP is also a projection onto convex sets algorithm, and it follows that the iterates ๐‘ƒ๐ต๐‘ฆ๐‘š converge to the required least squares solution (see the discussion before (39)).
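The following is a minimal sketch of this discrete PGP iteration (again with illustrative sizes, gap placement, and test function, and with scipy's orthonormal DCT in place of explicit matrix products); it alternates the sample-consistency step of $P_A$ with the DCT-domain projection $P_B$, exactly as in (21)-(22):

import numpy as np
from scipy.fft import dct, idct

def discrete_pgp(y_obs, known, M, n_iter=2000):
    """Discrete PGP: y_m = g + (I - P_A) P_B y_{m-1}, with P_A the mask of
    known samples and P_B the projector onto the first M orthonormal DCT-II
    coefficients."""
    y = y_obs.copy()
    for _ in range(n_iter):
        c = dct(y, type=2, norm="ortho")
        c[M:] = 0.0                                   # P_B: keep S_2 = {0, ..., M-1}
        y_bl = idct(c, type=2, norm="ortho")
        y = np.where(known, y_obs, y_bl)              # reimpose the known samples
    return dct(y, type=2, norm="ortho")[:M]           # recovered DCT coefficients

# Illustrative use on samples of an analytic function at Chebyshev nodes.
N, M = 64, 20
x = np.cos((2 * np.arange(N) + 1) * np.pi / (2 * N))
y_true = 1.0 / (1.0 + x**2)
known = np.abs(x) > 0.3                               # gap in the middle
y_obs = np.where(known, y_true, 0.0)
coeffs = discrete_pgp(y_obs, known, M)
y_rec = idct(np.concatenate([coeffs, np.zeros(N - M)]), type=2, norm="ortho")
print("max error on missing samples:", np.max(np.abs(y_rec - y_true)[~known]))

In practice, the fixed iteration count above would be replaced by the $\epsilon$ stopping rule of Section 3.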

5. Discussion

Our model is based on two assumptions: first, that the original analytic signal can be accurately approximated by a linear combination of a few Chebyshev polynomials (exponential convergence), and, second, that the Gauss-Chebyshev quadrature (15) provides a good approximation to the original integral (19). When these assumptions are not (at least approximately) satisfied (e.g., due to noise), the algorithm still converges, but possibly only to a crude approximation of the original signal. In such cases, a higher-order polynomial approximation and a denser sampling grid would provide an improvement. If the model assumptions are (approximately) satisfied, the resulting approximation is very accurate and minimizes a certain objective function in the sense of (39).

A Chebyshev polynomial series expansion is a Fourier cosine series with the change of variable $x = \cos\theta$ applied to $f(\cos\theta)$ [6]. While $f(x)$ itself is not periodic in $x$, the function $f(\cos\theta)$ is periodic in $\theta$. The smoother a function is, the more rapidly its Fourier coefficients decrease. Since $f(\cos\theta)$ is periodic and analytic, its Fourier series must converge exponentially. For a Fourier cosine series expansion of the nonperiodic function $f(x)$, the first derivative of the function is discontinuous at the interval's borders, and for a Fourier series expansion, the function itself is discontinuous at the interval's borders. It follows that a Chebyshev polynomial series expansion of an analytic function provides a better approximation than a Fourier series expansion (using a discrete Fourier transform in discrete implementations [22]) with the same number of terms, which makes it attractive for recovery problems. The improved approximation comes, however, at the price of a nonuniform sampling grid. The sampling points are clustered near the boundaries of the interval $[-1,1]$. It follows that the proposed algorithm is better suited to the recovery of gaps that occur in or near the middle of the signal. In such cases, we benefit from both the excellent approximation properties of the Chebyshev polynomials and the fact that fewer missing samples have to be recovered.

6. Numerical Example

We demonstrate the performance of the discrete implementation of the recovery algorithm by a gap recovery example. The function $f(x)$, defined on the interval $[-1, 1]$, is

$$f(x) = \frac{1}{1 + 25x^2}. \quad (47)$$

It is analytic on $[-1, 1]$ but has poles at $x = \pm 0.2j$, where $j = \sqrt{-1}$. We assume that the piece $[-0.4, 0.4]$ is missing and that the signal is well approximated by 64 Chebyshev polynomials. The samples are taken at the 128 zeroes of the polynomial $T_{128}$, and it follows that 34 samples are missing. The proposed algorithm is used to recover the missing samples: the $\ell^2$ norm of the error is 0.16, while the norm of the missing samples is 3.5244. We see that the recovery algorithm succeeds in approximating the missing data. Finally, the function and the resulting polynomial approximation are shown on a uniform grid of 128 points in the interval $[-1, 1]$ in Figure 1.
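A sketch reproducing the setup of this example with the discrete PGP iteration of Section 4 is given below. The number of iterations is an arbitrary choice, and the printed error need not match the 0.16 figure reported above exactly, since that depends on the stopping point and on implementation details:

import numpy as np
from scipy.fft import dct, idct

N, M, n_iter = 128, 64, 5000
k = np.arange(N)
x = np.cos((2 * k + 1) * np.pi / (2 * N))        # the 128 zeroes of T_128
y_true = 1.0 / (1.0 + 25.0 * x**2)               # eq. (47)
missing = np.abs(x) <= 0.4                       # the gap [-0.4, 0.4]; 34 samples
y_obs = np.where(missing, 0.0, y_true)

y = y_obs.copy()
for _ in range(n_iter):                          # discrete PGP iteration
    c = dct(y, type=2, norm="ortho")
    c[M:] = 0.0                                  # keep 64 DCT coefficients
    y = np.where(missing, idct(c, type=2, norm="ortho"), y_obs)

print("missing samples:", int(missing.sum()))                        # 34
print("l2 norm of the missing samples:", np.linalg.norm(y_true[missing]))
print("l2 norm of the recovery error:", np.linalg.norm((y - y_true)[missing]))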

7. Summary and Conclusions

The recovery of gaps in general analytic functions was obtained by using signal expansions in terms of the Chebyshev polynomials of the first kind. The recovery algorithm is exact for polynomial signals, and certain best polynomial approximations are obtained for general analytic signals. For the continuous-time case, we have extended the well-known Papoulis-Gerchberg algorithm for bandlimited signals to signals in polynomial subspaces. The discrete implementation is based on a specific nonuniform sampling grid (at the zeroes of a Chebyshev polynomial) and the discrete cosine transform, and it amounts to solving a linear least squares problem.

References

  1. G. G. Lorentz, Approximation of Functions, Chelsea, New York, NY, USA, 2nd edition, 1986.
  2. I. Sadka and H. Ur, “On Cadzow's non-iterative extrapolation of BL signals,” Signal Processing, vol. 59, no. 3, pp. 313–320, 1997.
  3. A. Papoulis, “A new algorithm in spectral analysis and band-limited extrapolation,” IEEE Transactions on Circuits and Systems, vol. 22, pp. 735–742, 1975.
  4. R. W. Gerchberg, “Super-resolution through error energy reduction,” Optica Acta, vol. 21, pp. 709–720, 1974.
  5. R. P. Boas, Entire Functions, Academic Press, New York, NY, USA, 1954.
  6. J. C. Mason and D. C. Handscomb, Chebyshev Polynomials, Chapman & Hall/CRC, Boca Raton, Fla, USA, 2003.
  7. P. Corr, D. Stewart, P. Hanna, J. Ming, and F. J. Smith, “Discrete Chebyshev transform—a natural modification of the DCT,” in Proceedings of the 15th International Conference on Pattern Recognition (ICPR '00), vol. 3, pp. 1142–1145, Barcelona, Spain, 2000.
  8. V. E. Neagoe, “Chebyshev nonuniform sampling cascaded with the discrete cosine transform for optimum interpolation,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 38, no. 10, pp. 1812–1815, 1990.
  9. Z. D. Wang, G. A. Jullien, and W. C. Miller, “On computing Chebyshev optimal nonuniform interpolation,” Signal Processing, vol. 51, no. 3, pp. 223–228, 1996.
  10. N. Ahmed, T. Natarajan, and K. R. Rao, “Discrete cosine transform,” IEEE Transactions on Computers, vol. 23, pp. 90–93, 1974.
  11. V. Britanak, P. C. Yip, and K. R. Rao, Discrete Cosine and Sine Transforms: General Properties, Fast Algorithms and Integer Approximations, Academic Press, Amsterdam, The Netherlands, 2007.
  12. J. von Neumann, Functional Operators Vol. II: The Geometry of Orthogonal Spaces, Annals of Mathematics Studies no. 22, Princeton University Press, Princeton, NJ, USA, 1950. This is a reprint of mimeographed lecture notes first distributed in 1933.
  13. L. Debnath and P. Mikusinski, Introduction to Hilbert Spaces with Applications, Academic Press, New York, NY, USA, 2nd edition, 1998.
  14. H. Stark and Y. Yang, Vector Space Projections, John Wiley & Sons, New York, NY, USA, 1998.
  15. D. C. Youla, “Generalized image reconstruction by the method of alternating orthogonal projections,” IEEE Transactions on Circuits and Systems, vol. 25, pp. 694–702, 1978.
  16. M. Goldburg and R. J. Marks, “Signal synthesis in the presence of an inconsistent set of constraints,” IEEE Transactions on Circuits and Systems, vol. 32, no. 7, pp. 647–663, 1985.
  17. D. C. Youla and V. Velasco, “Extensions of a result on the synthesis of signals in the presence of inconsistent constraints,” IEEE Transactions on Circuits and Systems, vol. 33, no. 4, pp. 465–468, 1986.
  18. L. Fox and I. B. Parker, Chebyshev Polynomials in Numerical Analysis, Oxford University Press, London, UK, 1968.
  19. J. L. Wu and J. Shiu, “Discrete cosine transform in error control coding,” IEEE Transactions on Communications, vol. 43, pp. 1857–1861, 1995.
  20. P. J. S. G. Ferreira, “Iterative and noniterative recovery of missing samples for 1-D band-limited signals,” in Nonuniform Sampling: Theory and Practice, F. Marvasti, Ed., pp. 235–278, Kluwer Academic/Plenum, New York, NY, USA, 2001.
  21. G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, Md, USA, 3rd edition, 1996.
  22. X. G. Xia, “An extrapolation for general analytic signals,” IEEE Transactions on Signal Processing, vol. 40, no. 9, pp. 2243–2249, 1992.

Copyright © 2011 Benjamin G. Salomon and Hanoch Ur. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

