ISRN Applied Mathematics
Volume 2012 (2012), Article ID 539359, 10 pages
http://dx.doi.org/10.5402/2012/539359
Research Article

Green's Theorem for Sign Data

Louis M. Houston

The University of Louisiana at Lafayette, Lafayette, LA 70504-4210, USA

Received 14 March 2012; Accepted 19 April 2012

Academic Editors: A. Bellouquid and M.-H. Hsu

Copyright © 2012 Louis M. Houston. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Sign data are the signs of signal added to noise. It is well known that a constant signal can be recovered from sign data. In this paper, we show that an integral over a varying signal can be recovered from an integral over sign data based on that signal. We refer to this as a generalized sign data average. We use this result to derive a Green's theorem for sign data. Green's theorem is important to various seismic processing methods, including seismic migration. The results in this paper generalize previously reported results for 2.5D data volumes, in which Green's theorem applies to sign data based only on traditional sign data recovery.

1. Introduction

In certain cases, data consisting of coherent signal and random noise are summed repeatedly in order to enhance the signal and reduce the noise. It has been found that when the signal-to-noise ratio is between 0.1 and 1, the signal can still be recovered even if only the signs of the data are retained prior to summation [1]. We refer to this type of data as sign data. Sign data significantly reduce the amount of space needed to record the data; the reduction can amount to a ratio of 20 to 1 in bits of data storage [1].

In the seismic industry, it has been discovered that most of the data processing methods used on regular data are also effective on sign data [2]. Many of these processing methods depend on Green's theorem. In particular, Kirchhoff migration is dependent on Green's theorem. Migration is a seismic process that corrects data coordinates which are distorted by attributes of the seismic experiment. In accordance with Green's theorem, we show that an integral of an operation on sign data over a volume is equal to an integral of an operation on the original data over a surface.

This result generalizes the work of Houston and Richard [3], in which, based only on traditional sign data recovery, it was shown that Green's theorem is satisfied when the sign data encompass a 2.5D volume. 2.5D data are three-dimensional data with variation along only two dimensions. In this paper, we find that Green's theorem is satisfied for sign data when the data volume is largely symmetric. For cases in which the data volume is of arbitrary shape, we derive a variant of Green's theorem.

2. A Generalized Sign Data Average

It was shown by O'Brien et al. [1] that amplitude can be recovered from sign data in the presence of uniform random noise. If the random noise \(X_j\) has amplitude \(a\), the signal is designated by the function \(f_k\), and the number of iterations is \(M = \sum_j 1\), then we can write
$$f_k = \frac{a}{M}\sum_j \operatorname{sgn}\left(f_k + X_j\right), \tag{2.1}$$
where
$$\operatorname{sgn}(y) = \begin{cases} +1, & y > 0, \\ 0, & y = 0, \\ -1, & y < 0. \end{cases} \tag{2.2}$$
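
The recovery in (2.1) is easy to reproduce numerically. The sketch below is an illustration, not code from the paper: it draws \(M\) uniform noise samples of amplitude \(a\) and averages the signs of signal plus noise. For a constant signal with \(|f| < a\) (the values of \(f\), \(a\), \(M\), and the random seed are chosen here only for the demonstration), the scaled average converges to \(f\).

```python
import numpy as np

rng = np.random.default_rng(0)
a = 1.0        # uniform noise amplitude
M = 200_000    # number of iterations (summands)
f = 0.4        # constant signal; recovery requires |f| < a

X = rng.uniform(-a, a, M)                     # uniform random noise X_j
f_recovered = (a / M) * np.sign(f + X).sum()  # right-hand side of (2.1)

print(f_recovered)  # close to 0.4
```

The statistical error of the average shrinks like \(1/\sqrt{M}\), which is why a large \(M\) is used.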

Figure 1 illustrates the data recovery process including sign data.

Figure 1: A synthetic signal (\(\operatorname{sinc}(x/3)\,e^{-(x/15)^2}\)) and the computer-generated uniform random noise used to examine sign-bit amplitude recovery. This test has noise of unit magnitude (\(a = 1\)) and a signal-to-noise ratio of one. Shown from bottom to top are the signal, the signal plus noise, the sign of the signal plus noise, the average over 200 iterations of signal plus noise, and the average over 200 iterations of the sign of signal plus noise.

Multiply both sides of (2.1) by the function \(g_k\):
$$g_k f_k = \frac{a}{M}\sum_j g_k \operatorname{sgn}\left(f_k + X_j\right). \tag{2.3}$$
Now sum both sides of (2.3) over the \(k\) index:
$$\sum_k g_k f_k = \frac{a}{M}\sum_k \sum_j g_k \operatorname{sgn}\left(f_k + X_j\right). \tag{2.4}$$
If we allow \(k \to j\), then (2.4) becomes
$$\sum_j g_j f_j = \frac{a}{M}\,M \sum_j g_j \operatorname{sgn}\left(f_j + X_j\right) \tag{2.5}$$
or
$$\sum_j g_j f_j = a \sum_j g_j \operatorname{sgn}\left(f_j + X_j\right). \tag{2.6}$$
It is clear that if \(f_j \to f\) and \(g_j \to g\), then (2.6) becomes
$$f = \frac{a}{M}\sum_j \operatorname{sgn}\left(f + X_j\right), \tag{2.7}$$
which is essentially (2.1). A continuous version of (2.6) might be written as
$$\int g(v) f(v)\,dv = a \int g(v)\operatorname{sgn}(f(v) + v)\,dv. \tag{2.8}$$
An argument for the consistency of (2.8) is as follows. Let \(f(v) \to f\). Then we have
$$f = a\,\frac{\int g(v)\operatorname{sgn}(f + v)\,dv}{\int g(v)\,dv}. \tag{2.9}$$
Now integrate over all values of \(v\):
$$f = a\,\frac{\int_{-\infty}^{\infty} g(v)\operatorname{sgn}(f + v)\,dv}{\int_{-\infty}^{\infty} g(v)\,dv}. \tag{2.10}$$
Let \(g(v)\) be a uniform probability density, \(\rho(v)\).

That implies
$$\int_{-\infty}^{\infty} \rho(v)\,dv = 1, \tag{2.11}$$
and (2.10) becomes
$$f = a \int \rho(v)\operatorname{sgn}(f + v)\,dv, \tag{2.12}$$
which was shown by Houston et al. [4].
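
For uniform noise on \([-a, a]\), the density is \(\rho(v) = 1/(2a)\), and the integral of \(\rho(v)\operatorname{sgn}(f + v)\) evaluates to \(f/a\) whenever \(|f| < a\), so (2.12) holds. This can be checked by direct quadrature; the sketch below uses illustrative values of \(a\) and \(f\) and a simple Riemann sum.

```python
import numpy as np

a, f = 1.0, 0.3                       # noise amplitude and constant signal, |f| < a
v = np.linspace(-a, a, 2_000_001)     # support of the uniform density
rho = np.full_like(v, 1.0 / (2 * a))  # uniform probability density rho(v)

# Right-hand side of (2.12): a * integral of rho(v) * sgn(f + v) dv
dv = v[1] - v[0]
recovered = a * np.sum(rho * np.sign(f + v)) * dv

print(recovered)  # approximately 0.3
```

The only quadrature error comes from the jump of the sign function at \(v = -f\), which is confined to a single grid cell.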

3. A Generalized Average for Sign Data Derivatives

Consider the following \(n\)th order forward finite difference with step size \(h\) [5]:
$$\Delta^n f(v) = \sum_{i=0}^{n} (-1)^i \binom{n}{i} f(v + (n - i)h), \tag{3.1}$$
with \(\binom{n}{i} = n!/\big(i!(n-i)!\big)\).

If we make the variable \(v\) discrete by choosing a small real interval \(q\) and writing
$$f_j = f(jq), \tag{3.2}$$
where \(j\) is an integer index, then (3.1) becomes
$$\Delta^n f_j = \sum_{i=0}^{n} (-1)^i \binom{n}{i} f(jq + (n - i)h). \tag{3.3}$$
Because the finite difference is a linear operator, we can use (2.6) to derive
$$\sum_j g_j \Delta^n f_j = a \sum_j g_j \Delta^n \operatorname{sgn}\left(f_j + X_j\right). \tag{3.4}$$
Equation (3.4) takes into account the fact that
$$\sum_j g(jq) f(jq + l) = a \sum_j g(jq)\operatorname{sgn}\left(f(jq + l) + X_j\right), \tag{3.5}$$
where \(l\) is an arbitrary shift.
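
The operator in (3.1) is straightforward to implement from binomial coefficients. The sketch below is an illustrative implementation, not from the paper; it checks the \(n = 2\) case against the known second derivative of an example function (\(\sin\), evaluation point and step chosen for illustration).

```python
import numpy as np
from math import comb

def forward_difference(f, v, n, h):
    """n-th order forward finite difference of f at v with step h, eq. (3.1)."""
    return sum((-1) ** i * comb(n, i) * f(v + (n - i) * h) for i in range(n + 1))

# Dividing by h**n approximates the n-th derivative as h -> 0 (cf. the classical limit).
h = 1e-3
approx = forward_difference(np.sin, 0.5, 2, h) / h**2  # ~ second derivative of sin
print(approx, -np.sin(0.5))  # both near -0.48
```

The forward difference carries an \(O(h)\) bias relative to the derivative at \(v\), which is why the two printed values agree only to about three decimal places at this step size.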

Since
$$\frac{d^n}{dv^n} = \lim_{h \to 0} \frac{\Delta^n}{h^n}, \tag{3.6}$$
(3.4) suggests the following equation:
$$\int g(v)\,\frac{d^n f(v)}{dv^n}\,dv = a \int g(v)\,\frac{d^n}{dv^n}\operatorname{sgn}(f(v) + v)\,dv. \tag{3.7}$$

Let \(d^n/dv^n \to d^n/du^n\) and \(f(v) \to f(u)\).

Then from (3.7) we have
$$\frac{d^n}{du^n} f(u) = a\,\frac{\int g(v)\,\big(d^n/du^n\big)\operatorname{sgn}(f(u) + v)\,dv}{\int g(v)\,dv}. \tag{3.8}$$
Once again, integrate over all \(v\), let \(g(v) = \rho(v)\), and (3.8) becomes
$$\frac{d^n}{du^n} f(u) = a \int \rho(v)\,\frac{d^n}{du^n}\operatorname{sgn}(f(u) + v)\,dv, \tag{3.9}$$
which was shown by Houston et al. [4].
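
Equation (3.9) can be illustrated by Monte Carlo: average the signs over many uniform noise samples to recover \(f(u)\), then differentiate the recovered curve. The sketch below uses an illustrative signal \(f(u) = 0.5\sin u\) (with \(|f| < a\)) and a central difference for the \(n = 1\) case; the signal, seed, sample count, and step are all assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
a, M, h = 1.0, 2_000_000, 0.1
X = rng.uniform(-a, a, M)             # uniform random noise samples

f = lambda u: 0.5 * np.sin(u)         # illustrative signal, |f(u)| < a
recover = lambda u: a * np.mean(np.sign(f(u) + X))  # sign data average, cf. (2.12)

u = 0.7
d_recovered = (recover(u + h) - recover(u - h)) / (2 * h)  # d/du of the average
print(d_recovered, 0.5 * np.cos(u))   # both near 0.38
```

Reusing the same noise samples at \(u + h\) and \(u - h\) (common random numbers) keeps the Monte Carlo error of the difference small.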

4. The Application to Green’s Theorem

Equation (3.7) implies the special case
$$\int g(v)\,\frac{d^2 f(v)}{dv^2}\,dv = a \int g(v)\,\frac{d^2}{dv^2}\operatorname{sgn}(f(v) + v)\,dv. \tag{4.1}$$
Employing three variables in (4.1) yields
$$\int g(v_1,v_2,v_3)\,\frac{\partial^2}{\partial v_i^2} f(v_1,v_2,v_3)\,dv_1\,dv_2\,dv_3 = a \int g(v_1,v_2,v_3)\,\frac{\partial^2}{\partial v_i^2}\operatorname{sgn}\left(f(v_1,v_2,v_3) + v_i\right) dv_1\,dv_2\,dv_3, \tag{4.2}$$
which can be simplified if \(g = g(v_1,v_2,v_3)\), \(f = f(v_1,v_2,v_3)\), and \(dV = dv_1\,dv_2\,dv_3\):
$$\int g\,\frac{\partial^2}{\partial v_i^2} f\,dV = a \int g\,\frac{\partial^2}{\partial v_i^2}\operatorname{sgn}\left(f + v_i\right) dV. \tag{4.3}$$
Summation over the variables yields
$$\sum_i \int g\,\frac{\partial^2}{\partial v_i^2} f\,dV = a \sum_i \int g\,\frac{\partial^2}{\partial v_i^2}\operatorname{sgn}\left(f + v_i\right) dV. \tag{4.4}$$
Consequently, (4.4) can be written as
$$\int g\,\nabla^2 f\,dV = a \sum_i \int g\,\frac{\partial^2}{\partial v_i^2}\operatorname{sgn}\left(f + v_i\right) dV. \tag{4.5}$$
If we make the mapping
$$g(v) \to \frac{\partial^2 g}{\partial v_i^2}, \tag{4.6}$$
then (2.8) becomes
$$\int f\,\frac{\partial^2}{\partial v_i^2} g\,dV = a \int \operatorname{sgn}\left(f + v_i\right)\frac{\partial^2}{\partial v_i^2} g\,dV. \tag{4.7}$$
Summation over the variables yields
$$\sum_i \int f\,\frac{\partial^2}{\partial v_i^2} g\,dV = a \sum_i \int \operatorname{sgn}\left(f + v_i\right)\frac{\partial^2}{\partial v_i^2} g\,dV, \tag{4.8}$$
which can be written as
$$\int f\,\nabla^2 g\,dV = a \sum_i \int \operatorname{sgn}\left(f + v_i\right)\frac{\partial^2}{\partial v_i^2} g\,dV. \tag{4.9}$$
Subtracting (4.9) from (4.5) yields
$$\int \left(g\,\nabla^2 f - f\,\nabla^2 g\right) dV = a \left[\sum_i \int g\,\frac{\partial^2}{\partial v_i^2}\operatorname{sgn}\left(f + v_i\right) dV - \sum_i \int \operatorname{sgn}\left(f + v_i\right)\frac{\partial^2}{\partial v_i^2} g\,dV\right]. \tag{4.10}$$
Green's theorem is
$$\oint_S \left(U_1 \frac{\partial U_2}{\partial n} - U_2 \frac{\partial U_1}{\partial n}\right) dS = \int_V \left(U_1 \nabla^2 U_2 - U_2 \nabla^2 U_1\right) dV. \tag{4.11}$$
Consequently, we can write a variant of Green's theorem for sign data as
$$\oint_S \left(g \frac{\partial f}{\partial n} - f \frac{\partial g}{\partial n}\right) dS = a \left[\sum_i \int g\,\frac{\partial^2}{\partial v_i^2}\operatorname{sgn}\left(f + v_i\right) dV - \sum_i \int \operatorname{sgn}\left(f + v_i\right)\frac{\partial^2}{\partial v_i^2} g\,dV\right]. \tag{4.12}$$
If the spatial intervals are uniform, that is, \(v_1 \in [a_1,b_1]\), \(v_2 \in [a_2,b_2]\), \(v_3 \in [a_3,b_3]\) with \([a_1,b_1] = [a_2,b_2] = [a_3,b_3]\), then (4.12) becomes
$$\oint_S \left(g \frac{\partial f}{\partial n} - f \frac{\partial g}{\partial n}\right) dS = a \int \left[g\,\nabla^2 \operatorname{sgn}\left(f + v_i\right) - \operatorname{sgn}\left(f + v_i\right)\nabla^2 g\right] dV. \tag{4.13}$$
In the case of (4.13), we see that Green's theorem is satisfied by replacing the function \(f\) with sign data and dividing the surface integral by the noise amplitude. Therefore, all processing of real data based on Green's theorem will be effective for sign data, with an associated variance. We also note that the effectiveness of Green's theorem on sign data is enhanced by operating over a symmetric volume. When the volume is not symmetric, we can apply the variant of Green's theorem given by (4.12).
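
The classical identity (4.11) on which the sign data variant rests is easy to verify numerically. The sketch below is an independent illustration, not from the paper: it uses a 2D square domain and simple polynomial test functions (my choices, with analytic gradients and Laplacians) and compares the boundary and volume integrals of Green's second identity by trapezoidal quadrature.

```python
import numpy as np

# Test functions on the unit square and their analytic derivatives
U1 = lambda x, y: x**2 + y**2        # Laplacian of U1 is 4
U2 = lambda x, y: x * y              # Laplacian of U2 is 0
dU1 = lambda x, y: (2 * x, 2 * y)    # gradient of U1
dU2 = lambda x, y: (y, x)            # gradient of U2

N = 1001
t = np.linspace(0.0, 1.0, N)
h = t[1] - t[0]
w = np.ones(N); w[0] = w[-1] = 0.5   # trapezoidal weights

def line(vals):
    return (w * vals).sum() * h

# Boundary integral of U1 * dU2/dn - U2 * dU1/dn over the four sides
sides = 0.0
for (x, y), (nx, ny) in [((t, 0 * t), (0, -1)), ((t, 0 * t + 1), (0, 1)),
                         ((0 * t, t), (-1, 0)), ((0 * t + 1, t), (1, 0))]:
    dn2 = nx * dU2(x, y)[0] + ny * dU2(x, y)[1]  # normal derivative of U2
    dn1 = nx * dU1(x, y)[0] + ny * dU1(x, y)[1]  # normal derivative of U1
    sides += line(U1(x, y) * dn2 - U2(x, y) * dn1)

# Volume integral of U1*lap(U2) - U2*lap(U1) = -4*x*y over the square
X, Y = np.meshgrid(t, t)
volume = (np.outer(w, w) * (-4.0 * X * Y)).sum() * h * h

print(sides, volume)  # both approximately -1.0
```

Both integrals agree, as (4.11) requires; for these test functions the common value is \(-1\).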

5. The 2.5D Case

Now let us consider the special case for which the functions \(f\) and \(g\) have only two-dimensional variation. That is,
$$f \to f(v_1, v_2), \qquad g \to g(v_1, v_2). \tag{5.1}$$
Without using a generalized sign data average, this implies
$$a \int \left[g\,\nabla^2 \operatorname{sgn}\left(f + v_i\right) - \operatorname{sgn}\left(f + v_i\right)\nabla^2 g\right] dV = \left(b_3 - a_3\right) \int \left(g\,\nabla^2 f - f\,\nabla^2 g\right) dv_1\,dv_2 \tag{5.2}$$
or
$$\oint_S \left(g \frac{\partial f}{\partial n} - f \frac{\partial g}{\partial n}\right) dS = \left(b_3 - a_3\right) \int \left(g\,\nabla^2 f - f\,\nabla^2 g\right) dv_1\,dv_2. \tag{5.3}$$
Equation (5.3) is Green's theorem for sign data when the data encompass a 2.5D volume and is consistent with the results reported in Houston and Richard [3].

6. Computational Tests

Equation (2.6) derives from a weighted average of the function \(y_j = \operatorname{sgn}(f_j + X_j)\). We can thus write the expectation value of \(y_j\) as
$$E(Y) = \frac{\sum_j g_j y_j}{\sum_j g_j}. \tag{6.1}$$
We find that
$$E(Y) = \frac{\sum_j g_j f_j}{a \sum_j g_j}. \tag{6.2}$$
The variance, \(\operatorname{Var}(Y)\), can be written as
$$\operatorname{Var}(Y) = E\big[(Y - E(Y))^2\big]. \tag{6.3}$$
This can be reduced to a simpler form by using the fact that
$$E\big(Y^2\big) = 1. \tag{6.4}$$
Thus, we have
$$\operatorname{Var}(Y) = 1 - (E(Y))^2. \tag{6.5}$$
We can demonstrate (2.6),
$$\sum_j g_j f_j = a \sum_j g_j \operatorname{sgn}\left(f_j + X_j\right), \tag{6.6}$$
computationally. Let
$$f_j = \sin(jq), \qquad g_j = \cos(jq). \tag{6.7}$$
Consequently, we want to demonstrate that
$$\sum_j \cos(jq)\sin(jq) = a \sum_j \cos(jq)\operatorname{sgn}\left(\sin(jq) + X_j\right). \tag{6.8}$$
We can compute the average percentage error \(\beta\) as
$$\beta = 100\,\sqrt{\frac{\Big(\sum_j \cos(jq)\sin(jq) - a \sum_j \cos(jq)\operatorname{sgn}\left(\sin(jq) + X_j\right)\Big)^2}{\Big(\sum_j \cos(jq)\sin(jq)\Big)^2}}. \tag{6.9}$$
We should see a correlation between \(\beta\) and \(\operatorname{Var}(Y)\). The results of this comparison are shown in Table 1.
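
A single trial of this test can be sketched as follows, using the parameters \(M = 10000\) and \(q = 0.0001\) stated for Table 1; the noise amplitude and random seed are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
M, q, a = 10_000, 0.0001, 1.0
j = np.arange(M)
f, g = np.sin(j * q), np.cos(j * q)   # signal and weight, eq. (6.7)
X = rng.uniform(-a, a, M)             # uniform noise of amplitude a

lhs = np.sum(g * f)                   # left-hand side of (6.8)
rhs = a * np.sum(g * np.sign(f + X))  # right-hand side of (6.8)
beta = 100.0 * np.sqrt((lhs - rhs) ** 2 / lhs ** 2)  # percentage error, eq. (6.9)

var_y = 1.0 - (lhs / (a * np.sum(g))) ** 2           # Var(Y) via (6.2) and (6.5)
print(beta)  # a few percent for these parameters
```

Repeating the trial with a larger noise amplitude \(a\) raises \(\operatorname{Var}(Y)\) and, correspondingly, the typical size of \(\beta\).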

Table 1: Twenty trials comparing the values of the indicated expressions for \(M\) iterations per trial. The trials are divided into ten trials for each of two noise amplitudes. Recall that \(M = \sum_j 1\). Here \(M = 10000\) and \(q = 0.0001\).

7. Conclusions

Using the results of sign data signal recovery leads to a derivation of a generalized sign data average. Extending these results to incorporate derivatives leads to a variant of Green’s theorem for sign data. We find that Green’s theorem directly applies to sign data when the data volume is symmetric and the surface integral is divided by the noise amplitude. A specific application of this result is that Green’s theorem applies to sign data when the data volume is 2.5D and a generalized sign data average is not required.

Acknowledgment

Discussions with Maxwell Lueckenhoff are appreciated.

References

  1. J. T. O'Brien, W. P. Kamp, and G. M. Hoover, “Sign-bit amplitude recovery with applications to seismic data,” Geophysics, vol. 47, no. 11, pp. 1527–1539, 1982.
  2. N. A. Anstey, Seismic Prospecting Instruments, Gebruder Borntraeger, Berlin, Germany, 2nd edition, 1981.
  3. L. M. Houston and B. A. Richard, “The Helmholtz-Kirchoff 2.5D integral theorem for sign-bit data,” Journal of Geophysics and Engineering, vol. 1, no. 1, pp. 84–87, 2004.
  4. L. M. Houston, G. A. Glass, and A. D. Dymnikov, “Sign data derivative recovery,” ISRN Applied Mathematics, vol. 2012, Article ID 630702, 7 pages, 2012.
  5. W. G. Kelley and A. C. Peterson, Difference Equations: An Introduction with Applications, Academic Press, San Diego, Calif, USA, 1st edition, 1991.