Abstract

Recently, a closed-form approximated expression was derived by the same author for the achievable residual intersymbol interference (ISI) that depends on the step-size parameter, the equalizer's tap length, the input signal statistics, the signal-to-noise ratio (SNR), and the channel power. However, this expression was obtained by assuming that the input noise is a white Gaussian process, for which the Hurst exponent (H) is equal to 0.5. In this paper, we derive a closed-form approximated expression (or an upper limit) for the residual ISI obtained by blind adaptive equalizers valid for a fractional Gaussian noise (fGn) input where the Hurst exponent is in the region of 0.5 ≤ H < 1. Up to now, the statistical behaviour of the residual ISI has not been investigated. Furthermore, the convolutional noise in the latter stages of the deconvolution process was assumed to be a white Gaussian process (H = 0.5). In this paper, we show that the Hurst exponent of the residual ISI is close to one, almost independent of the SNR or the equalizer's tap length, but dependent on the step-size parameter. In addition, the convolutional noise obtained in the steady state is a noise process having a Hurst exponent that depends on the step-size parameter.

1. Introduction

We consider a blind deconvolution problem in which we observe the output of an unknown, possibly nonminimum phase, linear system (a SISO-FIR system) and wish to recover its input (source) using an adjustable linear filter (equalizer). The problem of blind deconvolution arises in a wide range of applications such as digital communications, seismic signal processing, speech modeling and synthesis, ultrasonic nondestructive evaluation, and image restoration [1]. Blind deconvolution algorithms are essentially adaptive filtering algorithms designed so that they do not require an externally supplied desired response (training sequence) to generate the error signal at the output of the adaptive equalization filter [2, 3]. The algorithm itself generates an estimate of the desired response by applying a nonlinear transformation to sequences involved in the adaptation process [2, 3]. Let us consider for a moment the digital communication case. During transmission, a source signal undergoes a convolutive distortion between its symbols and the channel impulse response. This distortion is referred to as ISI. Thus, a blind adaptive equalizer is used to remove the convolutive effect of the system and recover the source signal [2, 4–6].

Up to recently [7, 8], the performance of a chosen equalizer (the achievable residual ISI) could be obtained only via simulation. The equalization performance depends on the nature of the chosen equalizer (on the memoryless nonlinearity situated at the output of the equalizer's filter), on the channel characteristics, on the added noise, on the step-size parameter used in the adaptation process, on the equalizer's tap length, and on the input signal statistics. Fast convergence speed and reaching a residual ISI for which the eye diagram is considered to be open (in the communication case) are the main requirements of a blind equalizer. Fast convergence speed may be obtained by increasing the step-size parameter. But increasing the step-size parameter may lead to a higher residual ISI which might no longer meet the system's requirements. Up to recently, the system designer had to carry out, for a given channel and type of equalizer, many simulations in order to obtain the optimal step-size parameter for a required residual ISI. Recently [7, 8], a closed-form approximated expression was derived by the same author for the achievable residual ISI that depends on the step-size parameter, the equalizer's tap length, the input signal statistics, the signal-to-noise ratio (SNR), and the channel power. However, this expression was obtained by assuming that the input noise is a white Gaussian process, for which the Hurst exponent (H) is equal to 0.5. A white Gaussian process is a special case (H = 0.5) of the fractional Gaussian noise (fGn) model [9]. FGn with H > 0.5 corresponds to the case of long-range dependency (LRD) [9]. LRD implies heavy-tailed probability density functions, which in general imply more randomness; see [10–13]. This point of view was recently detailed in [14, 15]. An fGn is a generalization of ordinary white Gaussian noise, and it is a versatile model for broadband noise dominated by no particular frequency band [16]. For , fGn is bandlimited to at level [17].

In this paper, we derive a closed-form approximated expression (or an upper limit) for the residual ISI obtained by blind adaptive equalizers valid for an fGn input where the Hurst exponent is in the region of 0.5 ≤ H < 1. Please note that H = 1 is a limit case which does not have much practical sense [18, 19]. Up to now, the statistical behaviour of the residual ISI has not been investigated. Furthermore, the convolutional noise in the latter stages of the deconvolution process was assumed to be a white Gaussian process (H = 0.5). In this paper, we show that the Hurst exponent of the residual ISI is close to one, almost independent of the SNR or the equalizer's tap length, but dependent on the step-size parameter. In addition, the convolutional noise obtained in the steady state is a noise process with a Hurst exponent that depends on the step-size parameter.

The paper is organized as follows. After describing the system under consideration in Section 2, we introduce the closed-form approximated expression (or upper limit) for the residual ISI in Section 3. In Section 4 the statistical behaviour of the residual ISI and the convolutional noise is presented, and the conclusion is given in Section 5.

2. System Description

The system under consideration is illustrated in Figure 1, where we make the following assumptions.
(1) The input sequence belongs to a two independent quadrature carrier case constellation input with variance , where and are the real and imaginary parts of , respectively, and is the variance of . In the following we denote as .
(2) The unknown channel is a possibly nonminimum phase linear time-invariant filter in which the transfer function has no "deep zeros"; namely, the zeros lie sufficiently far from the unit circle.
(3) The equalizer is a tap-delay line.
(4) The noise consists of , where and are the real and imaginary parts of , respectively, and and are independent. Both and are fractional Gaussian noises (fGn) with zero mean. Consider , , , and , where denotes the expectation operator on and is the Hurst exponent.
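
Assumption (4) involves fractional Gaussian noise with a prescribed Hurst exponent. For readers who wish to reproduce such an input, the following is a minimal sketch of one common way to generate fGn, using the standard fGn autocovariance and a Cholesky factor of the resulting covariance matrix; the function names and the complex-noise wrapper are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def fgn_autocovariance(n, hurst, sigma2=1.0):
    # gamma(k) = (sigma2 / 2) * (|k+1|^(2H) - 2|k|^(2H) + |k-1|^(2H)), k = 0..n-1
    k = np.arange(n, dtype=float)
    return 0.5 * sigma2 * (np.abs(k + 1) ** (2 * hurst)
                           - 2.0 * np.abs(k) ** (2 * hurst)
                           + np.abs(k - 1) ** (2 * hurst))

def generate_fgn(n, hurst, sigma2=1.0, rng=None):
    # Draw n zero-mean fGn samples via a Cholesky factor of the Toeplitz covariance.
    rng = np.random.default_rng() if rng is None else rng
    gamma = fgn_autocovariance(n, hurst, sigma2)
    idx = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    cov = gamma[idx]                                   # Toeplitz covariance matrix
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))    # small jitter for numerical safety
    return L @ rng.standard_normal(n)

def generate_complex_fgn(n, hurst, sigma2=1.0, rng=None):
    # Independent real and imaginary fGn parts, each carrying half the total variance,
    # mirroring assumption (4).
    return (generate_fgn(n, hurst, sigma2 / 2, rng)
            + 1j * generate_fgn(n, hurst, sigma2 / 2, rng))
```

For H = 0.5 this construction reduces to ordinary white Gaussian noise. The Cholesky approach is exact but costs O(n^3); for long simulated sequences a circulant-embedding (Davies-Harte) generator would be the usual, faster choice.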

The transmitted sequence is sent through the channel and is corrupted with noise . Therefore, the equalizer’s input sequence may be written as where “” denotes the convolution operation. The equalized output signal can be written as: where is the convolutional noise, namely, the residual intersymbol interference (ISI) arising from the difference between the ideal equalizer’s coefficients and those chosen in the system and . The ISI is often used as a measure of performance in equalizers’ applications, defined by where is the component of , given in (4), having the maximal absolute value where is the Kronecker delta function and stands for the difference (error) between the ideal and the actual value used for . Although the ISI is often used as a measure of performance in equalizers’ applications, its statistical properties were never investigated.
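
The displayed definitions (3)-(4) are not reproduced above, but the ISI measure commonly used in the equalization literature, namely the power of the combined channel-equalizer response outside its dominant tap normalized by the power of that tap, can be sketched as follows; the function and variable names are illustrative.

```python
import numpy as np

def residual_isi_db(channel, equalizer):
    # Combined channel-equalizer impulse response (ideally a single spike).
    s_tilde = np.convolve(channel, equalizer)
    power = np.abs(s_tilde) ** 2
    peak = power.max()                 # component with maximal absolute value
    isi = (power.sum() - peak) / peak  # residual energy relative to the main tap
    return 10.0 * np.log10(isi)
```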

Next we turn to the adaptation mechanism of the equalizer, which is based on a predefined cost function that characterizes the intersymbol interference; see [20–24]. Minimizing this cost function with respect to the equalizer parameters reduces the convolutional error. Minimization is performed with the gradient descent algorithm, which searches for an optimal filter tap setting by moving in the direction of the negative gradient over the surface of the cost function in the equalizer filter tap space [25]. Thus the update equation is given by [25] where is the step-size parameter and is the equalizer vector, where the input vector is and is the equalizer's tap length. The operator denotes the transpose operation.
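
The displayed update equation is omitted above; for reference, the standard stochastic-gradient form used for Bussgang-type blind equalizers is sketched below under one common convention for the complex gradient. The symbols c[n], x[n], z[n], F, and mu follow the notation of this section and are otherwise generic assumptions.

```latex
% Generic stochastic-gradient tap update for a blind equalizer:
% c[n] is the tap vector, x[n] the input (regressor) vector,
% z[n] = c^{T}[n] x[n] the equalizer output, F(z) the blind cost function,
% and \mu the step-size parameter.
c[n+1] \;=\; c[n] \;-\; \mu \, \nabla_{c^{*}} F\bigl(z[n]\bigr)
```

For the Godard cost used in Section 3.2, this generic form reduces to an error term multiplied by the conjugated regressor, as sketched there.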

Recently [8], a closed-form approximated expression was derived for the achievable residual ISI case (based on [7]) that depends on the step-size parameter, equalizer’s tap length, input signal statistics, SNR, and channel power and is given by where is the variance of the real part of the input sequence and is defined by or where The channel length is , is the variance of the real part of , , and , , are properties of the chosen equalizer and are found by where is the real part of and , are the real and imaginary parts of the equalized output , respectively.

It should be pointed out that the closed-form approximated expression for the residual ISI [8] was obtained by assuming that the noise is additive white Gaussian noise with zero mean and variance . Therefore, it is not applicable to the fGn case (H ≠ 0.5). Thus, a new expression for the achievable residual ISI is needed.

3. Residual ISI for Fractional Gaussian Noise Input

In this section, a closed-form approximated expression (or an upper limit) is derived for the residual ISI valid for the fGn input case, supported by simulation results.

3.1. Derivation of the Residual ISI

Theorem 1. Consider the following assumptions.
(1) The convolutional noise is a zero mean, white Gaussian process with variance . The real part of is denoted as and .
(2) The source signal is a rectangular QAM (Quadrature Amplitude Modulation) signal (where the real part of is independent of the imaginary part of ) with known variance and higher moments.
(3) The convolutional noise and the source signal are independent.
(4) can be expressed as a polynomial function of the equalized output, namely, as of order three.
(5) The gain between the source and equalized output signal is equal to one.
(6) The convolutional noise is independent of .
(7) The added noise is fGn with zero mean.
(8) The channel has real coefficients.
(9) The Hurst exponent is in the range of 0.5 ≤ H < 1.

The residual ISI expressed in dB units may be defined by (6), (8), and (9), where is the channel length, , and , , are properties of the chosen equalizer and are found by (11) and (12).

Comments. Please note that for H = 0.5 (the white Gaussian noise case), the expressions given in (12) and (10) are equivalent. When the steps in [8] for the calculation of the residual ISI expression are repeated, the only place where the assumption of and being white Gaussian noises rather than fractional Gaussian noises has a major effect on the resulting approximated expression for the residual ISI is in the calculation of . Thus, we bring here only the various steps that led to (12).

It should be pointed out that the assumptions above are precisely the same assumptions made in [8].

Proof. The real part of , namely, , may be expressed as Thus, the variance of is given by which can also be written as Next, by using the assumption that the gain between the source and the equalized output signal is equal to one, we may write the following expression: By substituting (16) into (15) we obtain The expression (17) can be upper bounded by With the help of the Hölder inequality [26], we may write Now, by substituting (16) into (19) we obtain Thus we have This expression (21) can therefore be written as According to [17], By substituting (23) into (22) and taking into account that for , we obtain Now, since we deal with the rectangular QAM case, where is independent of , we obtain that and where is the variance of the imaginary part of . Therefore, we may write By substituting (25) into (24) we obtain It turns out, according to simulation results, that (26) is too far from the averaged residual ISI. Thus, it cannot serve in practice as an upper limit for the expected residual ISI. In reality, only approximately around coefficients in the equalizer's tap length may have significant values. Therefore, we may write This completes our proof.

3.2. Simulation

In this section we test our new proposed expression for the residual ISI for the 16QAM case (a modulation using levels for the in-phase and quadrature components) with Godard's algorithm [20], for two different SNR and equalizer's tap length values and for two different channel types. The equalizer taps for Godard's algorithm [20] were updated according to where is the step-size parameter (an illustrative sketch of this update is given after the channel definitions below). The values for , , and corresponding to Godard's algorithm [20] are given by The following two channels were considered.

Channel 1 (initial ). The channel parameters were determined according to [24]:

Channel 2 (initial ). The channel parameters were determined according to

The equalizer was initialized by setting the center tap equal to one and all others to zero.
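
Since the displayed update and constants above are omitted, the following is a minimal sketch of the well-known Godard (p = 2) tap update applied to a received sequence, with the center-spike initialization just described. The function name, argument names, and the dispersion constant R2 = E{|s|^4}/E{|s|^2} are stated here as illustrative assumptions rather than taken verbatim from the paper.

```python
import numpy as np

def godard_p2_equalizer(x, num_taps, mu, R2):
    # x        : complex received samples (channel output plus noise)
    # num_taps : equalizer tap length
    # mu       : step-size parameter
    # R2       : Godard dispersion constant E{|s|^4} / E{|s|^2} of the source
    c = np.zeros(num_taps, dtype=complex)
    c[num_taps // 2] = 1.0                        # center-spike initialization
    z_out = np.zeros(len(x), dtype=complex)
    for n in range(num_taps - 1, len(x)):
        x_vec = x[n - num_taps + 1:n + 1][::-1]   # regressor, most recent sample first
        z = np.dot(c, x_vec)                      # equalizer output z[n]
        z_out[n] = z
        # Godard (p = 2) stochastic-gradient update of the tap vector
        c = c - mu * z * (np.abs(z) ** 2 - R2) * np.conj(x_vec)
    return c, z_out

# For 16QAM with per-dimension levels {+-1, +-3}: E{|s|^2} = 10, E{|s|^4} = 132, so R2 = 13.2.
```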

In the following we denote the residual ISI performance obtained according to (6) with (8), (9), and (12) as "Calculated ISI." Figures 2, 5, 8, 11, and 13 show the ISI performance of our proposed expression (6) (with (8), (9), and (12)) as a function of the iteration number, compared with the simulated results, for two different channels and SNR values, various values for , and three different step-size values and equalizer lengths. According to Figures 2, 5, 8, and 13 (for ), a high correlation is observed between the simulated and calculated results. According to Figures 11 and 13 (for ), the Calculated ISI may be considered as an upper limit for the simulated results. Figures 3, 4, 6, 7, 9, 10, and 12 show the equalizer's coefficients in the steady state for various values for , , equalizer's tap length, step-size parameters, and two different channels. According to the simulation results (Figures 3, 4, 6, 7, 9, 10, and 12), it was reasonable to take approximately only instead of coefficients in (12), since indeed not all the coefficients in the equalizer's tap length have significant values.

4. The Statistical Behaviour of the Residual ISI and Convolutional Noise

The statistical behaviour of the residual ISI has not previously been investigated in the literature. Furthermore, the convolutional noise is usually assumed (in the latter stages of the deconvolution process [27]) to be a white Gaussian process. In this section we investigate the statistical behaviour of the residual ISI and of the convolutional noise in the steady state.

4.1. The Statistical Behaviour of the Residual ISI

In this subsection we estimate the Hurst exponent of the residual ISI from the Rescaled Range (R/S) [28] with overlapping regions. For that purpose, let us denote as a vector of samples of the residual ISI (obtained from a single Monte Carlo trial) with length . The mean of consecutive samples in may be defined as where , denotes the th segment in vector and means the estimate of . Next we define The Rescaled Range [28] for the th segment with samples in the vector is defined as where The averaged Rescaled Range over multiple regions of the data with length is defined as [28] where is a constant. Now, by applying the operator on both sides of (36) we obtain [28] This expression (37) may be seen as a line , where , , and the slope of the line is [28]. In order to estimate the Hurst exponent, several values for are needed. Figure 14 presents the Hurst exponent value estimated for different step-size values. For each step-size value we used 100 Monte Carlo trials, where for each trial the Hurst exponent was estimated. Thus, for each step-size value we have 100 estimated values for the Hurst exponent. According to Figure 14, we observe two things: the averaged Hurst exponent (obtained from 100 results) is higher (closer to one) for a lower step-size value, and the averaged Hurst exponent obtained for the different step-size values is very high (above 0.9). This means that the residual ISI is trending. If there is an increase from one time index to the next, there will probably be an increase from that index to the following one. The same is true of decreases, where a decrease will tend to follow a decrease. The large Hurst exponent value indicates that the trend is strong. In our case, we see according to Figure 14 that the trend is stronger for lower step-size values.
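
The Hurst estimates in Figures 14-18 are obtained with the rescaled-range procedure described above. Since the displayed formulas (32)-(37) are omitted, a minimal sketch of an R/S estimator with overlapping regions is given below; the window sizes, overlap fraction, and names are illustrative choices, not values taken from the paper.

```python
import numpy as np

def rescaled_range(segment):
    # R/S statistic of one segment: range of the mean-adjusted cumulative sum
    # divided by the segment's standard deviation.
    dev = segment - segment.mean()
    cumdev = np.cumsum(dev)
    r = cumdev.max() - cumdev.min()
    s = segment.std(ddof=0)
    return r / s if s > 0 else np.nan

def estimate_hurst(series, window_sizes, overlap=0.5):
    # Estimate H as the slope of log(mean R/S) versus log(window size).
    series = np.asarray(series, dtype=float)
    log_n, log_rs = [], []
    for n in window_sizes:
        step = max(1, int(n * (1 - overlap)))   # overlapping regions, as in the text
        rs_vals = [rescaled_range(series[i:i + n])
                   for i in range(0, len(series) - n + 1, step)]
        rs_vals = [v for v in rs_vals if np.isfinite(v)]
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)     # the slope of the fitted line is H
    return slope

# Example usage on a residual-ISI trace from one Monte Carlo trial (hypothetical data):
# H_hat = estimate_hurst(isi_trace, window_sizes=[32, 64, 128, 256, 512])
```

The same estimator, applied to the real part of the convolutional noise, produces the results discussed in Section 4.2.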

Figures 15 and 16 present the Hurst exponent value estimated for different equalizer tap lengths and SNR values, respectively. For each tap length and SNR value we used 100 Monte Carlo trials, where for each trial the Hurst exponent was estimated. Thus, for each tap length and SNR value we have 100 estimated values for the Hurst exponent. According to Figures 15 and 16, the equalizer's tap length and the SNR have nearly no impact on the Hurst exponent estimated from the residual ISI series. Figure 17 describes the simulated histogram of the estimated Hurst exponent for three different step-size parameters. According to Figure 17, the histogram of the estimated Hurst exponent resembles a Gaussian shape.

4.2. The Statistical Behaviour of the Convolutional Noise

In this subsection we estimate the Hurst exponent of the real part of the convolutional noise from the Rescaled Range (R/S) [28] with overlapping regions. For that purpose, let us now denote as a vector of samples of the real part of the convolutional noise (obtained from a single Monte Carlo trial in the convergence region for the noiseless case) with length . Please note that the real and imaginary parts of the convolutional noise are independent. Thus, the statistical behaviour of the real and imaginary parts of the convolutional noise is approximately the same. In the following we use (37) for estimating the Hurst exponent. Figure 18 presents the Hurst exponent value estimated for different step-size values. For each step-size value we used 100 Monte Carlo trials, where for each trial the Hurst exponent was estimated. Thus, for each step-size value we have 100 estimated values for the Hurst exponent. According to Figure 18, we observe that as we enlarge the step-size parameter, the estimated Hurst exponent gets closer to 0.5, while for lower values of the step-size parameter, the Hurst exponent is smaller than 0.5. According to [29], a Hurst exponent value between 0 and 0.5 corresponds to a time series with "antipersistent behaviour." This means that an increase will tend to be followed by a decrease (or a decrease will be followed by an increase). This behaviour is sometimes called "mean reversion," which means that future values will have a tendency to return to a longer-term mean value. The strength of this mean reversion increases as H approaches zero. As already mentioned earlier in this paper, the convolutional noise is often considered to be a white Gaussian process. But a white Gaussian process has a Hurst exponent value of 0.5, which in our case is achieved only for high values of the step-size parameter.

5. Conclusion

In this paper, we proposed a closed-form approximated expression (or an upper limit) for the residual ISI obtained by blind adaptive equalizers valid for the fGn input case where the Hurst exponent is in the region of 0.5 ≤ H < 1. According to simulation results, a high correlation is obtained between the calculated and simulated results for the residual ISI in some cases, while in others the newly obtained expression is a relatively tight upper limit for the averaged residual ISI results. In this paper we also investigated the statistical behaviour of the residual ISI as well as of the convolutional noise. We have found that the Hurst exponent of the residual ISI is close to one, almost independent of the SNR or the equalizer's tap length, but dependent on the step-size parameter. Since the Hurst exponent of the residual ISI is above 0.5, the residual ISI is trending. This means that if there is an increase from one time index to the next, there will probably be an increase from that index to the following one. The same is true of decreases, where a decrease will tend to follow a decrease. Concerning the convolutional noise, we have found that the convolutional noise obtained in the steady state is a Gaussian noise process having a Hurst exponent that depends on the step-size parameter. Only for large values of the step-size parameter do we obtain approximately a white Gaussian process (H = 0.5), which is the statistical model assumed in the literature for the convolutional noise for the entire range of step-size values.