ISRN Applied Mathematics
Volume 2012 (2012), Article ID 630702, 7 pages
Research Article

Sign Data Derivative Recovery

1Louisiana Accelerator Center, The University of Louisiana at Lafayette, Lafayette, LA 70504-4210, USA
2Ion Beam Modification and Analysis Laboratory, Department of Physics, University of North Texas, Denton, TX 76203, USA

Received 2 November 2011; Accepted 29 November 2011

Academic Editors: J. Shen and F. Zirilli

Copyright © 2012 L. M. Houston et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Given only the signs of signal plus noise added repetitively (i.e., sign data), signal amplitudes can be recovered with minimal variance. However, discrete derivatives of the signal are recovered from sign data with a variance that approaches infinity as the step size decreases and the order increases. For industries such as the seismic industry, which exploits amplitude recovery from sign data, these results place constraints on processing that includes differentiation of the data. While methods for smoothing noisy data for finite difference calculations are known, sign data fundamentally requires the presence of noise. In this paper, we derive the expectation values of continuous and discrete sign data derivatives, and we explicitly characterize the variance of discrete sign data derivatives.

1. Introduction

Sign-bit recording systems discard all information on the detailed motion of the geophone and ask only whether its output is positive or negative, whether it is going up or coming down. In a sign-bit system, therefore, the signal waveform is converted into a square wave. All amplitude information is lost [1].

It is well known that, for a range of signal-to-noise ratios between about 0.1 and 1, the final result of sign-bit recording, after stacking, correlating, and other processing, looks no less good, to the eye, than the result from full-fidelity recording. This is considered to be as intriguing as it is surprising [1]. Alternatively, what we present in this paper is evidence that the processing of sign-bit data (i.e., sign data) can be limited for certain cases relative to the processing of the full-bandwidth data.

Model the signal as a one-dimensional function, \(f(v)\), and the noise as a random variable, \(X\). In industries like the seismic industry, measurements of signal plus noise, \(f(v)+X\), are recorded for multiple iterations of the noise. The average of the measurements (i.e., the expectation \(E\)) recovers the signal:
\[
E(f(v)+X)=f(v).\tag{1.1}
\]
If the noise is chosen to be uniform, with density function
\[
\rho(x)=\begin{cases}\dfrac{1}{2a},&-a\le x\le a,\\[4pt]0,&\text{else},\end{cases}\tag{1.2}
\]
then the variance, \(E\bigl((f(v)+X)^2\bigr)-\bigl(E(f(v)+X)\bigr)^2\), reduces to
\[
\operatorname{Var}(f(v)+X)=\tfrac{1}{3}a^2.\tag{1.3}
\]
As reported by O'Brien et al. [2], it was empirically discovered that the average of the signs of signal plus noise recovers the signal if the signal-to-noise ratio is less than or equal to one. This can be shown mathematically [3] using the signum function [4], \(\operatorname{sgn}(x)=+1\) for \(x>0\), \(\operatorname{sgn}(x)=-1\) for \(x<0\), and \(\operatorname{sgn}(0)=0\):
\[
E(\operatorname{sgn}(f(v)+X))=\int_{-\infty}^{\infty}\operatorname{sgn}(f(v)+x)\,\rho(x)\,dx=\int_{-f}^{\infty}\rho(x)\,dx-\int_{-\infty}^{-f}\rho(x)\,dx.\tag{1.4}
\]
Because \(\rho(x)\) is even, this difference equals
\[
\int_{-f}^{f}\rho(x)\,dx,\tag{1.5}
\]
so that
\[
E(\operatorname{sgn}(f(v)+X))=\frac{f(v)}{a},\quad f\in[-a,a].\tag{1.6}
\]
The variance, \(E\bigl((\operatorname{sgn}(f(v)+X))^2\bigr)-\bigl(E(\operatorname{sgn}(f(v)+X))\bigr)^2\), reduces to
\[
\operatorname{Var}(\operatorname{sgn}(f(v)+X))=1-\left(\frac{f(v)}{a}\right)^2.\tag{1.7}
\]
Consequently, the error is minimal when the signal-to-noise ratio is near unity.
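The amplitude recovery in (1.6) is easy to check numerically. The following is a minimal sketch (our own illustrative code, not the authors'), assuming NumPy, with \(f=\sin\) and \(a=1\):

```python
# Minimal sketch (not from the paper): recover f(v) by averaging sign data,
# per E(sgn(f(v) + X)) = f(v)/a in (1.6), with uniform noise X ~ U(-a, a).
import numpy as np

rng = np.random.default_rng(0)

def sign_data_recovery(f, v, a=1.0, n_iter=10000):
    """Average sgn(f(v) + X) over n_iter independent noise draws, then rescale by a."""
    x = rng.uniform(-a, a, size=(n_iter, len(v)))  # fresh noise each iteration
    s = np.sign(f(v) + x)                          # sign data: amplitudes discarded
    return a * s.mean(axis=0)                      # estimate of f(v)

v = np.linspace(0.0, 2.0 * np.pi, 50)
estimate = sign_data_recovery(np.sin, v)
print(np.max(np.abs(estimate - np.sin(v))))        # small recovery error
```

Note that the noise must satisfy \(|f|\le a\) for (1.6) to hold, which \(a=1\) and \(f=\sin\) do.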

The advantage of retaining only the signs of signal plus noise is the requirement of approximately 1 bit to record the information as opposed to requiring 16 to 20 bits to record full amplitude data [2].

The goal of this paper is to examine the recovery of derivatives from sign data in uniform noise. The point is that recovery of the signal from sign data can be extended, through finite differences, to recovery of derivatives of the signal, but that this recovery is constrained by the size of the variance. In this paper, we first examine sign data derivatives for both the discrete and continuous case. We follow with a derivation of the variance. We conclude our analysis with a computational test, which lists the true variance versus the statistically derived variance estimate for a test function at selected step sizes.

2. Sign Data Derivatives

Let the signal \(f(v)\) be an \(n\)th-order differentiable function. Based on signal recovery from sign data, it can be shown that derivatives of the signal are also recoverable. Using the linearity of the expectation value,
\[
E\left(\frac{\Delta_v^n}{(\Delta v)^n}\operatorname{sgn}(f(v)+X)\right)=\frac{\Delta_v^n}{(\Delta v)^n}E(\operatorname{sgn}(f(v)+X)),\tag{2.1}
\]
where \(\Delta_v^n\) is the \(n\)th-order finite difference operator with respect to the variable \(v\) [5]. In this case, a nonunit step size, \(\Delta v\), is used (e.g., [6]).

In detail, we can write
\[
\frac{\Delta_v^n}{(\Delta v)^n}\operatorname{sgn}(f(v)+X)=\frac{1}{(\Delta v)^n}\sum_{i=0}^{n}(-1)^i\binom{n}{i}\operatorname{sgn}\bigl(f(v+(n-i)\Delta v)+X_i\bigr),\tag{2.2}
\]
where \(\binom{n}{i}\) is the binomial coefficient \(n!/[i!\,(n-i)!]\) and \(X_0,X_1,\ldots,X_n\) are independent realizations of the random variable \(X\).
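One realization of the sum in (2.2) can be sketched directly in code (our own illustrative implementation, assuming NumPy; the function name is ours), with an independent noise draw for each term:

```python
# Sketch of one realization of S_n = (Delta_v^n / dv^n) sgn(f(v) + X) per (2.2),
# with an independent draw X_i for each term (illustrative, not the authors' code).
import numpy as np
from math import comb

rng = np.random.default_rng(1)

def sign_diff(f, v, n, dv, a=1.0):
    total = 0.0
    for i in range(n + 1):
        x_i = rng.uniform(-a, a)  # independent realization X_i
        total += (-1) ** i * comb(n, i) * np.sign(f(v + (n - i) * dv) + x_i)
    return total / dv ** n

# Averaging many realizations approximates (1/a) * Delta_v f / dv, per (2.3):
samples = [sign_diff(np.sin, 3.0, 1, 0.5) for _ in range(20000)]
print(np.mean(samples), (np.sin(3.5) - np.sin(3.0)) / 0.5)
```

Any single realization is wildly noisy; only the average over many noise iterations tracks the finite difference of \(f\).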

Substituting from (1.6) into (2.1) yields
\[
E\left(\frac{\Delta_v^n}{(\Delta v)^n}\operatorname{sgn}(f(v)+X)\right)=\frac{1}{a}\,\frac{\Delta_v^n f(v)}{(\Delta v)^n}.\tag{2.3}
\]
In the limit of infinitesimal step size, this becomes a continuous derivative:
\[
\lim_{\Delta v\to 0}E\left(\frac{\Delta_v^n}{(\Delta v)^n}\operatorname{sgn}(f(v)+X)\right)=\frac{1}{a}\,\frac{d^n f(v)}{dv^n}\tag{2.4}
\]
or
\[
E\left(\frac{d^n}{dv^n}\operatorname{sgn}(f(v)+X)\right)=\frac{1}{a}\,\frac{d^n f(v)}{dv^n}.\tag{2.5}
\]
Equation (2.4) presents an alternative to direct integration. For example, with \(u=f(v)+x\) and the rule \(\int f(x)\,\delta^{(n)}(x)\,dx=-\int(\partial f/\partial x)\,\delta^{(n-1)}(x)\,dx\) [7], the \(n=3\) integral
\[
E\left(\frac{d^3}{dv^3}\operatorname{sgn}(f(v)+X)\right)=\int\left[2\frac{d^2\delta}{du^2}\left(\frac{df}{dv}\right)^3+6\frac{d\delta}{du}\frac{df}{dv}\frac{d^2f}{dv^2}+2\delta\,\frac{d^3f}{dv^3}\right]\rho(x)\,dx\tag{2.6}
\]
loses all terms containing derivatives of the delta functional, reducing to
\[
E\left(\frac{d^3}{dv^3}\operatorname{sgn}(f(v)+X)\right)=2\rho(f)\,\frac{d^3f}{dv^3}\bigg|_{f=-x}.\tag{2.7}
\]
In general,
\[
E\left(\frac{d^n}{dv^n}\operatorname{sgn}(f(v)+X)\right)=2\rho(f)\,\frac{d^nf}{dv^n}\bigg|_{f=-x}=\frac{1}{a}\,\frac{d^nf}{dv^n}.\tag{2.8}
\]
It follows that the noise is restricted such that \(a\ge|f|\).

3. The Variance of Sign Data Derivatives

Letting \(S_n\equiv(\Delta_v^n/(\Delta v)^n)\operatorname{sgn}(f(v)+X)\), we compute the variance, \(E(S_n^2)-(E(S_n))^2\). From (2.3), it follows that \((E(S_n))^2=\bigl(\Delta_v^n f/(a(\Delta v)^n)\bigr)^2\). \(E(S_n^2)\) can be found by inductively generalizing from \(n=2\):
\[
E(S_2^2)=\frac{1}{(\Delta v)^4}E\Bigl[\bigl(b_0\operatorname{sgn}(f_0+X_0)+b_1\operatorname{sgn}(f_1+X_1)+b_2\operatorname{sgn}(f_2+X_2)\bigr)^2\Bigr]
=\frac{1}{(\Delta v)^4}\left[b_0^2+b_1^2+b_2^2+2b_0b_1\frac{f_0f_1}{a^2}+2b_0b_2\frac{f_0f_2}{a^2}+2b_1b_2\frac{f_1f_2}{a^2}\right],\tag{3.1}
\]
where \(f_i=f(v+(n-i)\Delta v)\), \(f_k=f(v+(n-k)\Delta v)\), and \(b_i=(-1)^i\binom{n}{i}\).

These results generalize to
\[
\operatorname{Var}(S_n)=\frac{1}{(\Delta v)^{2n}}\sum_{i=0}^{n}\binom{n}{i}^2+\frac{2}{(\Delta v)^{2n}}\sum_{i<k}(-1)^{i+k}\binom{n}{i}\binom{n}{k}\frac{f_if_k}{a^2}-\left(\frac{\Delta_v^n f}{a(\Delta v)^n}\right)^2.\tag{3.2}
\]
Since \(f\) is differentiable, \(\bigl|\Delta_v^n f/(\Delta v)^n-d^nf/dv^n\bigr|<\varepsilon\) and, thus, \(\Delta_v^n f/(\Delta v)^n\) is finite. By definition, \(\operatorname{Var}(S_n)>0\).

Consequently, because the combination of terms inside (3.2) remains bounded away from zero while the prefactor \(1/(\Delta v)^{2n}\) diverges, \(\lim_{\Delta v\to 0}\operatorname{Var}(S_n)=+\infty\). Similarly, \(\lim_{n\to\infty}\operatorname{Var}(S_n)=+\infty\) for \(0<\Delta v<1\). The variance of a discrete sign derivative approaches infinity with decreasing step size and increasing order. In addition, since \(\lim_{\Delta v\to 0}S_n=(d^n/dv^n)\operatorname{sgn}(f(v)+X)\), we have \(\operatorname{Var}\bigl((d^n/dv^n)\operatorname{sgn}(f(v)+X)\bigr)=+\infty\), so in the case of the continuous derivatives (2.5) the variance is infinite.

Using (3.2), the variance of the first discrete sign derivative (\(n=1\)) is
\[
\operatorname{Var}(S_1)=\frac{1}{(\Delta v)^2}\left[2-\frac{f_0^2+f_1^2}{a^2}\right].\tag{3.3}
\]
The variance of the second discrete sign derivative (\(n=2\)) is similarly computed as
\[
\operatorname{Var}(S_2)=\frac{1}{(\Delta v)^4}\left[6-\frac{f_0^2+4f_1^2+f_2^2}{a^2}\right].\tag{3.4}
\]
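The closed forms (3.3) and (3.4) make the blowup concrete. A short sketch (our own code and naming, assuming NumPy) for \(f=\sin\), \(a=1\), \(v=3\) shows the \(1/(\Delta v)^2\) and \(1/(\Delta v)^4\) growth as the step size shrinks:

```python
# Closed-form variances (3.3) and (3.4) for f = sin, a = 1, v = 3
# (variable names are ours). Note the divergence as dv -> 0.
import numpy as np

def var_S1(f, v, dv, a=1.0):
    f0, f1 = f(v + dv), f(v)                    # f_i = f(v + (n - i) dv), n = 1
    return (2.0 - (f0**2 + f1**2) / a**2) / dv**2

def var_S2(f, v, dv, a=1.0):
    f0, f1, f2 = f(v + 2 * dv), f(v + dv), f(v)  # n = 2
    return (6.0 - (f0**2 + 4 * f1**2 + f2**2) / a**2) / dv**4

for dv in (0.5, 0.1, 0.01):
    print(dv, var_S1(np.sin, 3.0, dv), var_S2(np.sin, 3.0, dv))
```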

4. Computational Tests

These results can be tested computationally. The variance can be estimated from \(N\) iterations with
\[
\operatorname{Var}_N(S_n)=\frac{1}{N}\sum_{m=1}^{N}\bigl(S_n(m)-E(S_n)\bigr)^2,\tag{4.1}
\]
where the index \(m\) designates the sample number.

Consider the test function 𝑓=sin(𝑣). Using the first-order sign data derivative (𝑛=1), compare Var(𝑆1) to Var𝑁(𝑆1), and using the second-order sign data derivative (𝑛=2), compare Var(𝑆2) to Var𝑁(𝑆2) for 𝑁=1000, 𝑎=1, and 𝑣=3. The results are shown in Tables 1 and 2.
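This comparison can be reproduced with a short script (a sketch under our own naming, not the authors' code), using (2.2) to draw samples, (2.3) for the known mean, (3.3) for the true variance, and (4.1) for the estimate:

```python
# Compare the closed-form Var(S_1) of (3.3) with the estimate Var_N(S_1) of (4.1)
# for f = sin, N = 1000, a = 1, v = 3 (illustrative sketch).
import numpy as np
from math import comb

rng = np.random.default_rng(2)

def S_n_samples(f, v, n, dv, a, size):
    """size independent realizations of S_n per (2.2)."""
    out = np.zeros(size)
    for i in range(n + 1):
        x_i = rng.uniform(-a, a, size)  # independent X_i per realization
        out += (-1) ** i * comb(n, i) * np.sign(f(v + (n - i) * dv) + x_i)
    return out / dv ** n

N, a, v, dv = 1000, 1.0, 3.0, 0.5
samples = S_n_samples(np.sin, v, 1, dv, a, N)
mean_true = (np.sin(v + dv) - np.sin(v)) / (a * dv)                   # E(S_1), (2.3)
var_true = (2.0 - (np.sin(v + dv)**2 + np.sin(v)**2) / a**2) / dv**2  # (3.3)
var_est = np.mean((samples - mean_true)**2)                           # Var_N, (4.1)
print(var_true, var_est)
```

With \(N=1000\) the estimate scatters around the true value, in line with Tables 1 and 2.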

Table 1: True variance, Var(𝑆1), versus the variance estimate, Var𝑁(𝑆1), for the function 𝑓=sin(𝑣), with the number of iterations 𝑁=1000, 𝑎=1, and 𝑣=3.
Table 2: True variance, Var(𝑆2), versus the variance estimate, Var𝑁(𝑆2), for the function 𝑓=sin(𝑣), with the number of iterations 𝑁=1000, 𝑎=1, and 𝑣=3.

We illustrate the change in variance in Figure 1, which shows three curves, each computed from \(N=1000\) iterations with \(a=1\) and \(\Delta v=0.5\). The first curve, in blue, shows the sign data recovery of the function \(f=\sin(v)\), that is, \(E(S_0)\). The second curve, in green, shows the sign data recovery \(E(S_1)\), which approximates \(f'\). The third curve, in red, shows the sign data recovery \(E(S_2)\), which approximates \(f''\).

Figure 1: The expectation value curves \(E(S_n)\), where \(S_n=(1/(\Delta v)^n)\sum_{i=0}^{n}(-1)^i\binom{n}{i}\operatorname{sgn}(f(v+(n-i)\Delta v)+X_i)\), for \(n=0,1,2\), \(f(v)=\sin(v)\), \(a=1\), and \(\Delta v=0.5\). The number of iterations in the expectation values is \(N=1000\). \(E(S_0)\) corresponds to the blue curve and approximates \(f\), \(E(S_1)\) corresponds to the green curve and approximates \(f'\), and \(E(S_2)\) corresponds to the red curve and approximates \(f''\).

5. Conclusions

Recovery of signal from the signs of signal plus noise incurs a variance that depends only on the signal-to-noise ratio, while recovery of discrete derivatives from the signs of signal plus noise (i.e., sign data) incurs a variance that grows without bound as the step size becomes infinitesimal and the order becomes infinite.

The practical problem is that sign data can be used in the seismic industry in processes which may differentiate the data. In such cases, if the step size or order of the finite difference is not constrained, the process will incur a large variance and will converge poorly. While methods for smoothing noisy data for finite difference calculations are known, sign data requires the presence of noise. In this paper, we have characterized the problem by explicitly evaluating the variance of discrete sign data derivatives.


Appendix: Clarification of \(E(S_2^2)\)

Expanding the square,
\[
E(S_2^2)=\frac{1}{(\Delta v)^4}E\Bigl[\bigl(b_0\operatorname{sgn}(f_0+X_0)+b_1\operatorname{sgn}(f_1+X_1)+b_2\operatorname{sgn}(f_2+X_2)\bigr)^2\Bigr]
\]
\[
=\frac{1}{(\Delta v)^4}E\Bigl[b_0^2\operatorname{sgn}^2(f_0+X_0)+2b_0b_1\operatorname{sgn}(f_0+X_0)\operatorname{sgn}(f_1+X_1)+b_1^2\operatorname{sgn}^2(f_1+X_1)
\]
\[
\qquad+2b_2b_0\operatorname{sgn}(f_2+X_2)\operatorname{sgn}(f_0+X_0)+2b_2b_1\operatorname{sgn}(f_2+X_2)\operatorname{sgn}(f_1+X_1)+b_2^2\operatorname{sgn}^2(f_2+X_2)\Bigr].\tag{A.1}
\]
Since \(\operatorname{sgn}^2(\cdot)=1\) almost surely, this simply reduces to
\[
E(S_2^2)=\frac{1}{(\Delta v)^4}\Bigl[b_0^2+b_1^2+b_2^2+2b_0b_1E\bigl(\operatorname{sgn}(f_0+X_0)\operatorname{sgn}(f_1+X_1)\bigr)+2b_2b_0E\bigl(\operatorname{sgn}(f_2+X_2)\operatorname{sgn}(f_0+X_0)\bigr)+2b_2b_1E\bigl(\operatorname{sgn}(f_2+X_2)\operatorname{sgn}(f_1+X_1)\bigr)\Bigr].\tag{A.2}
\]
In order to compute (A.2), we must compute an integral of the form
\[
E\bigl(\operatorname{sgn}(f_i+X_i)\operatorname{sgn}(f_k+X_k)\bigr)=\iint\operatorname{sgn}(f_i+x_i)\operatorname{sgn}(f_k+x_k)\,\rho(x_i)\,\rho(x_k)\,dx_i\,dx_k.\tag{A.3}
\]
The probability densities are both uniform:
\[
\rho(x_i)=\rho(x_k)=\begin{cases}\dfrac{1}{2a},&-a\le x\le a,\\[4pt]0,&\text{else},\end{cases}\tag{A.4}
\]
and using the result of (1.6), the double integral factors into
\[
E\bigl(\operatorname{sgn}(f_i+X_i)\operatorname{sgn}(f_k+X_k)\bigr)=\frac{f_if_k}{a^2}.\tag{A.5}
\]
Consequently, (A.2) reduces to
\[
E(S_2^2)=\frac{1}{(\Delta v)^4}\left[b_0^2+b_1^2+b_2^2+2b_0b_1\frac{f_0f_1}{a^2}+2b_0b_2\frac{f_0f_2}{a^2}+2b_1b_2\frac{f_1f_2}{a^2}\right].\tag{A.6}
\]
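The key factorization (A.5) is easy to check by Monte Carlo. A sketch (our own code, assuming NumPy; the values of \(f_i\) and \(f_k\) are arbitrary illustrative choices):

```python
# Numerical check of (A.5): for independent X_i, X_k ~ U(-a, a),
# E(sgn(f_i + X_i) sgn(f_k + X_k)) = f_i * f_k / a^2 (illustrative values).
import numpy as np

rng = np.random.default_rng(3)
a, f_i, f_k, n = 1.0, 0.3, -0.7, 200_000
x_i = rng.uniform(-a, a, n)
x_k = rng.uniform(-a, a, n)
mc = np.mean(np.sign(f_i + x_i) * np.sign(f_k + x_k))
print(mc, f_i * f_k / a**2)  # both close to -0.21
```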


Acknowledgment

Thanks are due to Gwendolyn Houston for advice and proofreading.


References

  1. N. A. Anstey, Seismic Prospecting Instruments, Gebruder Borntraeger, Berlin, Germany, 2nd edition, 1981.
  2. J. T. O'Brien, W. P. Kamp, and G. M. Hoover, “Sign-bit amplitude recovery with applications to seismic data,” Geophysics, vol. 47, no. 11, pp. 1527–1539, 1982.
  3. L. M. Houston and B. A. Richard, “The Helmholtz-Kirchoff 2.5D integral theorem for sign-bit data,” Journal of Geophysics and Engineering, vol. 1, no. 1, pp. 84–87, 2004.
  4. R. A. Gabel and R. A. Roberts, Signals and Linear Systems, Wiley, New York, NY, USA, 3rd edition, 1987.
  5. W. G. Kelley and A. C. Peterson, Difference Equations, Academic Press, Boston, Mass, USA, 1991.
  6. D. M. Dubois, “Computing anticipatory systems with incursion and hyperincursion,” in Proceedings of the 1st International Conference on Computing Anticipatory Systems (CASYS '98), vol. 437 of AIP Conference Proceedings, pp. 3–29, The American Institute of Physics, 1998.
  7. G. Arfken, Mathematical Methods for Physicists, Academic Press, New York, NY, USA, 1966.