Abstract

Recently, the Edgeworth expansion up to order 4 was used to represent the convolutional noise probability density function (pdf) in the conditional expectation calculations, where the source pdf was modeled with the maximum entropy density approximation technique. However, the Lagrange multipliers applied there were not the appropriate ones for the chosen model of the convolutional noise pdf. In this paper we use the Edgeworth expansion up to order 4 and up to order 6 to model the convolutional noise pdf. We derive the appropriate Lagrange multipliers and, as a byproduct, obtain new closed-form approximated expressions for the conditional expectation and mean square error (MSE). Simulation results indicate hardly any equalization improvement with the Edgeworth expansion up to order 4 when the optimal Lagrange multipliers are used instead of a nonoptimal set. In addition, there is no justification for using the Edgeworth expansion up to order 6 over the expansion up to order 4 for the 16QAM input and easy channel case. However, the Edgeworth expansion up to order 6 leads to improved equalization performance compared to the expansion up to order 4 for the 16QAM input and hard channel case, as well as for the case where a 64QAM input is sent via an easy channel.

1. Introduction

In this work, we deal with the convolutional noise arising at the output of a blind deconvolution process. A blind deconvolution process arises in many applications such as seismology, underwater acoustics, image restoration, and digital communication [1]. Consider the digital communication case. During transmission, a source signal undergoes a convolutive distortion between its symbols and the channel impulse response. This distortion is referred to as intersymbol interference (ISI) [2]. Thus, a blind adaptive filter is used to remove the convolutive effect of the system and recover the source signal [2]. This process is called blind deconvolution. Since the updated coefficients used in the blind adaptive filter are not the ideal values, a noise named convolutional noise appears at the output of the deconvolution process in addition to the source signal. Blind deconvolution algorithms based on adaptive filtering techniques generate an estimate of the desired response by applying a nonlinear transformation to sequences involved in the adaptation process [1, 3]. The Bussgang algorithm is one of the three important families of blind equalization algorithms, where the nonlinearity is placed at the output of the adaptive equalization filter [1, 3]. According to [4, 5], the traditional Bussgang-type methods include Sato’s [6], Godard’s [7], Benveniste et al.’s [8], the Benveniste-Goursat [9], and the Stop-and-Go [10] algorithms. For Bussgang-type methods, the nonlinearity is either designed to minimize a cost function based on high-order statistics (HOS) [6, 7] or calculated directly according to Bayes’ rule [11–15]. According to [1, 3], the main difference between the Bussgang-type algorithms lies in the choice of the memoryless nonlinearity. Obviously, the performance of this kind of blind equalizer depends substantially on the memoryless nonlinearity [1]. Chapter eight in [5] deals with the question of whether the chosen equalizer leads to perfect equalization performance from the MSE point of view. From the work in [1] we know that if the MSE tends to zero, then the residual ISI also tends to zero (perfect equalization). Furthermore, chapter eight in [5] distinguishes between the MSE obtained from the cost function approach and the MSE obtained from the Bayesian approach. According to chapter eight in [5], the MSE related to the Bayesian approach, valid in the convergence state where the convolutional noise is very small, is approximately given by the convolutional noise power (variance of the convolutional noise) multiplied by a constant. To show this outcome, the approximated MSE obtained in [1] was recalled in that chapter [5]. The approximated MSE obtained in [16] could equally have been used to show this result. The only difference between the approximated expression for the MSE obtained in [16] and that in [1] is the value of the constant multiplying the convolutional noise power. In [1] this constant is approximately equal to one regardless of the constellation input, while in [16] this constant is constellation-input dependent and is approximately equal to two for the 16QAM input. This paper also adheres to the Bayesian approach, in which the approximated MSE is developed as a byproduct.
We will show in this paper that the approximated MSE valid in the convergence state, where the convolutional noise is very small, is approximately given by the convolutional noise power multiplied by a constant. This constant is constellation-input independent and is approximately equal to one plus terms involving higher order statistics of the convolutional noise, such as the kurtosis. For the case where the convolutional noise pdf is assumed to be Gaussian, our new approximated MSE reduces to the expression obtained in [1].

Next we turn to the approximated MSE related to the cost function approach, where the derivative of the cost function with respect to the equalized output signal is a polynomial function of the equalized output signal, as in Godard’s case [7], for example. For that case, the approximated MSE is approximately equal to a constant plus another constant multiplied by the convolutional noise power [5]. Since those constants are constellation-input dependent, the approximated MSE may not tend to zero when the convolutional noise power tends to zero, unlike in the Bayesian approach. For example, consider the 16QAM input and Godard’s [7] algorithm (cost function approach). The residual MSE in that case is not zero even when the convolutional noise power tends to zero, whereas this is not the case for the Bayesian approach [1].

In the literature, we find many blind adaptive algorithms based on the cost function approach (e.g., [7, 17–20]), while only a few are based on Bayes’ rule. The reason may lie in the fact that for the latter case (algorithms based on Bayes’ rule), the conditional expectation (the expectation of the source signal given the equalized output signal) has to be calculated. This task might be difficult because: (i) the source pdf might be non-Gaussian and might not be known; in addition, the expression for the conditional expectation should hold for a wide range of source pdfs and not only for a specific source; (ii) no model is available for the convolutional noise pdf that is valid for the whole deconvolution process. Thus, without having the source and convolutional noise pdfs, the derivation of the conditional expectation seems to be impossible. According to [1], the conditional expectation was derived for non-Gaussian sources by Bellini [11, 12], Fiori [13, 14], and Haykin [15]. However, all the mentioned expressions for the conditional expectation [11–15] are suitable only for uniformly distributed source signals. Thus they cannot cope with a source having a general pdf shape [1]. In addition, the works in [11–15] modeled the convolutional noise pdf as Gaussian. Recently ([1, 16]), two closed-form approximated expressions were obtained for the conditional expectation, where the source pdf was approximated with the maximum entropy density estimation technique and the Edgeworth expansion series, respectively. Those expressions for the conditional expectation ([1, 16]) do not impose any restrictions (except for even symmetry) on the pdf of the unobserved input sequence. Hence they are suitable for a wider range of source pdfs compared with Bellini’s [11, 12], Fiori’s [13, 14], or Haykin’s [15] expression. In [1, 16], the Laplace integral method was needed for approximating the integrals involved in the conditional expectation calculations. In addition, in both cases ([1, 16]), the convolutional noise pdf was assumed to be Gaussian for the whole deconvolution process, as was assumed in [11–15]. But, according to Haykin [15], in the early stages of the iterative deconvolution process, the ISI is typically large, with the result that the data sequence and the convolutional noise are strongly correlated. Furthermore, the convolutional noise sequence is more uniform than Gaussian [21]. The maximum entropy algorithm [1], where the conditional expectation was calculated with the maximum entropy density approximation technique and a Gaussian model for the source signal and convolutional noise pdf, respectively, has been shown to achieve improved equalization performance compared with the classical methods ([22] (RCA algorithm), [7, 9, 13, 17, 18, 23, 24]). However, a more appropriate model for the convolutional noise pdf than the Gaussian one might lead to improved equalization performance compared to the results presented in [1]. The first attempt to characterize the convolutional noise pdf by a model other than the Gaussian one was introduced in [5]. There, the conditional expectation was obtained approximately by approximating the source and convolutional noise pdf with the maximum entropy density approximation technique and the Edgeworth expansion series up to order four, respectively. An unknown pdf may be approximated with three types of orthogonal expansions using the Hermite polynomials, namely, the Gram-Charlier of type A, Gauss-Hermite, and Edgeworth expansions [4, 25].
According to [4, 25–27], the Edgeworth expansion is much more useful in many applications, since it is directly connected to the moments and cumulants of a pdf (a property which is lost in the Gauss-Hermite series). It is also a true asymptotic expansion, so that the error of the approximation is controlled (a property which is not found in the Gram-Charlier of type A) [4, 25]. Thus, the Edgeworth expansion up to order six should describe the unknown pdf better than the Edgeworth expansion up to order four. The idea of approximating a non-Gaussian signal is not new in the literature. As a matter of fact, we may find several works [28–32] dealing with pdfs applicable to the non-Gaussian case that also encompass the Gaussian model. However, the idea of modeling the time-varying convolutional noise pdf by other than a Gaussian pdf is quite new and was introduced for the first time in [5]. There, the convolutional noise pdf was modeled with the Edgeworth expansion series up to order four and the source signal pdf was approximated with the maximum entropy density approximation technique.

The maximum entropy density approximation technique involves Lagrange multipliers which have to be determined; otherwise, this approximation is not applicable. However, according to [1], finding the Lagrange multipliers is not an easy task. In some cases, an analytical solution for the Lagrange multipliers does not exist (please refer to [1] for more details). In order to overcome this problem, the approximated MSE was derived in [1] and the required Lagrange multipliers were those that minimize the approximated MSE. Note that those Lagrange multipliers [1] were obtained assuming a Gaussian model for the convolutional noise pdf, but they were also applied in [5], where the conditional expectation was calculated using the maximum entropy density approximation technique and the Edgeworth expansion series up to order four for the source signal and convolutional noise pdf, respectively, even though those Lagrange multipliers are not the optimal set for that case. Although those Lagrange multipliers were not the appropriate set for [5], improved equalization performance was obtained with the new approximated expression for the conditional expectation introduced in [5] compared to the conditional expectation obtained in [1]. Note that the only difference between those expressions ([1, 5]) was the model for the convolutional noise pdf used in the conditional expectation calculations. Thus, it is probable that even better equalization performance may be obtained if the appropriate Lagrange multipliers are applied in [5] and the Edgeworth expansion series up to order six is used instead of order four for the convolutional noise pdf. But to apply the appropriate Lagrange multipliers in [5], we must first derive the MSE related to the conditional expectation that uses the maximum entropy density approximation technique for the source signal pdf and the Edgeworth expansion series up to order four and six for the convolutional noise pdf. Hitherto, this derivation has not been done. Furthermore, the approximated conditional expectation has not yet been derived for the case where the source signal and the convolutional noise pdf are approximated with the maximum entropy density approximation technique and the Edgeworth expansion series up to order six, respectively.
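To make the role of the Lagrange multipliers concrete, the following is a minimal sketch of evaluating a maximum entropy density of the even-symmetric exponential-polynomial form usually associated with this technique. The polynomial orders, the number of multipliers, and the numerical values are illustrative assumptions only and are not the multipliers derived in [1], [5], or in this paper.

```python
import numpy as np

def max_entropy_pdf(x, lagrange):
    """Evaluate an even-symmetric maximum entropy density approximation.

    The density is modeled as exp(sum_k lambda_k * x**(2k)), k = 1..K,
    normalized numerically over the sampled grid.  The exact polynomial
    orders and Lagrange multipliers depend on the source constellation.
    """
    exponent = np.zeros_like(x, dtype=float)
    for k, lam in enumerate(lagrange, start=1):
        exponent += lam * x ** (2 * k)
    pdf = np.exp(exponent)
    return pdf / np.trapz(pdf, x)

# Example with two multipliers (hypothetical values, as in the 16QAM case
# where two Lagrange multipliers are used).
x = np.linspace(-5.0, 5.0, 2001)
pdf = max_entropy_pdf(x, lagrange=[0.8, -0.1])
print("integral over the grid:", np.trapz(pdf, x))
```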

In this paper we derive the appropriate set of Lagrange multipliers where the source signal and convolutional noise pdf are approximated with the maximum entropy density approximation technique and the Edgeworth expansion series up to order four and six, respectively. We derive new approximated expressions for the conditional expectation and MSE where the source signal and convolutional noise pdf are approximated with the maximum entropy density approximation technique and the Edgeworth expansion series up to order six, respectively. In this paper we test the effect of using the optimal set of Lagrange multipliers on the equalization performance. Furthermore, we examine whether increasing the series order in the Edgeworth expansion can provide further equalization performance improvement compared to the results already presented in [5], where we saw significant improvement compared to the maximum entropy algorithm [1]. We also assess whether the increase in the series order is justified given the increase in the complexity of the algorithm.

The paper is organized as follows. In Section 2 we describe the system under consideration, and in Section 3 we present the new model for the convolutional noise pdf, closed-form approximated expressions for the conditional expectation, MSE, and Lagrange multipliers. Simulation results are given in Section 4, and Section 5 is our conclusion.

2. System Description

The system is illustrated in Figure 1, where we make the same assumptions as in [1]:
(1) the input sequence has an even symmetric probability distribution function with zero mean and may be a real or a complex variable (where its real and imaginary parts are independent);
(2) the unknown channel is a possibly nonminimum phase linear time-invariant filter with no deep zeros (the zeros lie sufficiently far from the unit circle);
(3) the equalizer is a tap-delay filter;
(4) the channel noise is an additive Gaussian white noise;
(5) the function producing the estimate of the source symbol is a memoryless nonlinear function.

The input sequence is transmitted through the channel and is corrupted with noise. Therefore, the equalizer’s input sequence may be written as the convolution of the source sequence with the channel impulse response plus the additive noise, where “∗” denotes the convolution operation.

From [1], the equalized output signal may be defined as where is the convolutional noise arising from the difference between the ideal value of and the initial guess of , and represents the convolution between and .

The ISI is often used as a measure of performance in equalizer applications, as defined by Pinchas and Bobrovsky [1]: where is the maximum absolute value of the convolution between and .
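As a concrete illustration of this performance measure, the short sketch below computes the residual ISI of a combined channel-equalizer response using the common definition ISI = (Σ|s|² − max|s|²)/max|s|², with s the convolution of the channel and equalizer impulse responses; the notation and the example coefficients are illustrative and do not reproduce the paper's exact symbols or channels.

```python
import numpy as np

def residual_isi(channel, equalizer):
    """Residual ISI of the combined channel-equalizer response.

    Uses the common definition (sum|s|^2 - max|s|^2) / max|s|^2, where
    s = channel * equalizer (convolution).
    """
    s = np.convolve(channel, equalizer)
    p = np.abs(s) ** 2
    return (p.sum() - p.max()) / p.max()

# Hypothetical example: a mild channel and a center-spike equalizer.
channel = np.array([0.1, 1.0, 0.2])
equalizer = np.zeros(13)
equalizer[6] = 1.0
print("ISI before adaptation:", residual_isi(channel, equalizer))
```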

Next we consider the adaptation mechanism of the equalizer. According to Figure 1, we define as an estimator of , which is produced by the function .

Thus the error signal is The adaptive mechanism uses this error to update the equalizer’s taps [33]: where is the conjugate operation on , is the step-size parameter, and is the equalizer vector where the input vector is and is the equalizer’s tap length. The operator denotes the transpose of the function . The conditional expectation , where stands for the expectation operation, is considered a good estimate of [3]. In the following, is represented by the conditional expectation, as was done in [1].
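The sketch below illustrates one Bussgang-type tap update of the kind described above: the equalized output is passed through a memoryless nonlinearity approximating the conditional expectation, and the error between the estimate and the output drives an LMS-style correction. The hard-decision slicer used here is only a stand-in for the conditional expectation expressions derived later, and the variable names and sign conventions are assumptions rather than the paper's notation.

```python
import numpy as np

def bussgang_update(taps, y_vec, z, estimator, mu):
    """One Bussgang-type tap update (illustrative sketch).

    taps      : current equalizer tap vector
    y_vec     : most recent equalizer input samples (same length as taps)
    z         : current equalizer output, z = taps @ y_vec
    estimator : memoryless nonlinearity approximating E[a | z]
    mu        : step-size parameter
    """
    e = estimator(z) - z                   # error signal
    return taps + mu * e * np.conj(y_vec)  # LMS-style correction

# Hypothetical use with a hard-decision slicer as the nonlinearity.
slicer = lambda z: np.round(z.real) + 1j * np.round(z.imag)
taps = np.zeros(13, dtype=complex)
taps[6] = 1.0
y_vec = np.random.randn(13) + 1j * np.random.randn(13)
z = taps @ y_vec
taps = bussgang_update(taps, y_vec, z, slicer, mu=1e-3)
```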

3. New Model for the Convolutional Noise pdf

In this section we describe the proposed model for the convolutional noise pdf, which is based on the Edgeworth expansion. It differs from the Gaussian model but reduces back to it under certain conditions, as will be shown in this section. In Section 3.1 we derive the conditional expectation based on the new proposed model for the convolutional noise pdf. In Section 3.2 we present the new Lagrange multipliers via the MSE expression. It should be pointed out that we are not trying to find or propose a new expression for the conditional expectation but rather trying to show, via the conditional expectation, whether the new proposed model for the convolutional noise pdf (Edgeworth expansion of orders four and six) can lead to improved equalization performance with optimal Lagrange multipliers. The convolutional noise pdf for the real valued case may be defined as where represents factorial, the convolutional noise is defined as , and is the variance of the convolutional noise. are normalized Hermite polynomials of order described in [34] and presented in Appendix A as . are the one-dimensional central cumulants in terms of central moments described in [34, Table 3], and are also presented in Appendix A.

Thus, (6) up to order 6 becomes where according to Appendices A and B we have

So, with the help of (8), (7) becomes According to (7) and (8), the convolutional noise pdf based on the Edgeworth expansion of order 4 can be obtained (by resetting the last product in (7)) as According to Haykin [15], a Gaussian convolutional noise pdf model is only applicable in the latter stages of the deconvolution process, when the process is close to optimal. This is reflected in our model (6) when the expressions in the parentheses are close to one, thus reducing to the expression that describes a Gaussian distribution:
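Since the display equations above may be easier to follow with a numerical counterpart, the following sketch evaluates an Edgeworth-type approximation of a zero-mean, symmetric convolutional noise pdf: a Gaussian multiplied by a bracketed correction built from probabilists' Hermite polynomials and cumulants. Only the leading He4 and He6 terms are included; the paper's expressions (7)-(9) may contain additional cumulant products, so this is a structural illustration under those assumptions, not a reproduction of (7)-(9). Note that setting the fourth cumulant to zero recovers the Gaussian pdf, mirroring the remark above.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def edgeworth_pdf(n, sigma, kappa4, kappa6=None):
    """Edgeworth-style approximation of a zero-mean symmetric pdf.

    Order 4:  phi(xi)/sigma * [1 + kappa4/(4! sigma^4) He4(xi)]
    Order 6:  adds            kappa6/(6! sigma^6) He6(xi)
    with xi = n/sigma; odd-cumulant terms are assumed to vanish.
    """
    xi = n / sigma
    phi = np.exp(-0.5 * xi ** 2) / (np.sqrt(2.0 * np.pi) * sigma)
    correction = 1.0 + kappa4 / (24.0 * sigma ** 4) * hermeval(xi, [0, 0, 0, 0, 1])
    if kappa6 is not None:
        correction += kappa6 / (720.0 * sigma ** 6) * hermeval(xi, [0, 0, 0, 0, 0, 0, 1])
    return phi * correction

n = np.linspace(-4.0, 4.0, 801)
p_order4 = edgeworth_pdf(n, sigma=1.0, kappa4=0.3)               # "Edgeworth order 4"
p_order6 = edgeworth_pdf(n, sigma=1.0, kappa4=0.3, kappa6=0.1)   # "Edgeworth order 6"
p_gauss  = edgeworth_pdf(n, sigma=1.0, kappa4=0.0)               # reduces to the Gaussian
```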

3.1. The Approximated Expression for the Conditional Expectation

In this subsection we use our new model for the convolutional noise pdf to calculate the conditional expectation.

Theorem 1. Under the following assumptions:
(1) the source signal is an independent signal with known variance and higher moments (in the following we denote as );
(2) no noise is added;
(3) the convolutional noise and the source signal are independent; thus, where represents the variance of the equalizer’s output;
(4) the convolutional noise has zero mean and variance ;
the conditional expectation is given by where where and denote the second and fourth derivatives of , respectively, , , and are the real parts of and , respectively, and are the imaginary parts of and , respectively, , are the variances of the real and imaginary parts of the source signal, respectively, and , are the variances of the real and imaginary parts of the equalized output signal, respectively. The Lagrange multipliers are given in Section 3.2. For more details concerning , , , and , please refer to Appendix B.

Comments. Please note that (12) is similar to the expression for the conditional expectation derived in [1]. However, , , , , , and are very different from those obtained in [1].

Proof. We start our derivation with the real valued case and then extend it to the two independent quadrature carrier case. By using Bayes’ rule we obtain where in our case is the convolutional noise pdf and the unknown source pdf, , will be estimated with the maximum entropy density approximation technique, as was done in [1], and defined as where are the Lagrange multipliers given in Section 3.2.
Substituting (6) and (15) into (14) and using (2), we obtain Next we define so (16) becomes This integral (20) may be approximated with the Laplace integral method according to [1, 16, 35]. Thus, we may write where is defined as , and is a constant and By substituting (21) into (20) and dividing the numerator and denominator by , using (22), we obtain Note that (23) is very similar to the approximated expression for the conditional expectation derived in [1]; however, and its derivatives are very different. According to [11], the conditional mean estimate of the complex datum given the complex observation can be written as This completes the proof.
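As a sanity check on approximations of this kind, the conditional expectation can also be evaluated by brute-force numerical integration of Bayes' rule, E[a|z] = ∫ a p_n(z−a) p_a(a) da / ∫ p_n(z−a) p_a(a) da, instead of the Laplace integral method. The sketch below does exactly that for the real valued case; the Gaussian densities in the example are placeholders for the maximum entropy source pdf and the Edgeworth noise pdf.

```python
import numpy as np

def conditional_expectation(z, source_pdf, noise_pdf, a_grid):
    """Numerical reference for E[a | z] via Bayes' rule (real valued case).

    E[a | z] = int a * p_n(z - a) * p_a(a) da / int p_n(z - a) * p_a(a) da,
    evaluated by trapezoidal integration over a_grid.
    """
    w = noise_pdf(z - a_grid) * source_pdf(a_grid)
    return np.trapz(a_grid * w, a_grid) / np.trapz(w, a_grid)

# Placeholder Gaussian densities for illustration only.
def gaussian(sigma):
    return lambda x: np.exp(-x ** 2 / (2.0 * sigma ** 2)) / np.sqrt(2.0 * np.pi * sigma ** 2)

a_grid = np.linspace(-6.0, 6.0, 4001)
print(conditional_expectation(1.3, gaussian(1.0), gaussian(0.2), a_grid))
```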

3.2. MSE and Lagrange Multipliers

Since we deal with the real valued or two independent quadrature carrier case, it is enough to derive the MSE and the expression for the Lagrange multipliers for the real valued case only; thus, only the real valued case is considered in the following.

In this section we search for those Lagrange multipliers that bring the MSE to a minimum as was done in [1]: where is the conditional expectation given in (23).

The approximated MSE according to [1] is given by where is the conditional expectation given in [1].

Although , , , , , and are very different from those obtained in [1], (23) looks quite similar to the approximated expression for the conditional expectation derived in [1]. Thus, it makes sense to use in this paper the same technique for deriving the MSE as was carried out in [1]. The convolutional noise pdf (9) is the pdf used in [1] (for ) multiplied by a constant parameter , where and are given in Appendix B in and , respectively. Therefore, based on (94), (95), and (96) in [1], the MSE for our case is the same MSE given in (26) multiplied by with our expression for . Thus, we obtain According to [36], for small values of such that we obtain Next we minimize (28) with respect to the Lagrange multipliers, , In Appendix E we derive (29) and obtain where .

Please refer to Appendices C and D for explicit expressions for the Lagrange multipliers valid for the 16QAM and 64QAM input constellation case, respectively. It should be pointed out that (30) is the same expression obtained in [1] for and .
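Because setting the derivatives of the approximated MSE to zero yields a linear system in the Lagrange multipliers (2×2 for 16QAM, 3×3 for 64QAM, as shown in Appendices C and D), obtaining them in practice reduces to a small linear solve. The sketch below shows this step with purely hypothetical coefficients; the actual matrix entries are built from the source moments and the convolutional noise statistics.

```python
import numpy as np

# Setting the derivative of the approximated MSE with respect to each
# Lagrange multiplier to zero yields a linear system, e.g. 2x2 for 16QAM.
# The entries below are hypothetical placeholders for the moment-based
# coefficients of Appendix C.
A = np.array([[2.0,  6.0],
              [6.0, 30.0]])
b = np.array([-1.0, -3.0])
lagrange = np.linalg.solve(A, b)
print("Lagrange multipliers:", lagrange)
```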

4. Simulation

In this section we test and compare the equalization performance obtained for the 16QAM and 64QAM input case with the convolutional noise pdf (6) to the case where the convolutional noise pdf is modeled as Gaussian. In addition, we examine whether the optimal Lagrange multipliers have a significant effect on the quality of the equalization performance and determine whether the transition from order four to order six in the Edgeworth expansion series is worthwhile despite the increased complexity of the algorithm.

According to [1], where the convolutional noise pdf was modeled as a Gaussian pdf, the equalizer’s taps were updated as with (12) and (30) for , , and where stands for the estimated expectation operator, and are positive step-size parameters, , and stands for th tap of the equalizer and . In the following, this equalizer [1] will be denoted as “MaxEnt.” To illustrate the equalization performance with the convolutional noise pdf (6) we used the equalizer’s update mechanism given in (31) and (32) with the conditional expectation presented in (12) for the following settings.

For , the equalizer will be called “Edgeworth order 4.”

For , the equalizer will be called “Edgeworth order 6.”

For “Edgeworth order 4” we need and for “Edgeworth order 6” we also need . In this work we used for and the following settings: where and were chosen for fast convergence speed and low steady state ISI.

The parameters and are denoted in the following as for “Edgeworth order 4” ( for “Edgeworth order 4”) and for “Edgeworth order 6.”

The step-size parameter is denoted in the following as , , and for “MaxEnt,” “Edgeworth order 4,” and “Edgeworth order 6,” respectively. In the following, the parameter is denoted as , , and for “MaxEnt,” “Edgeworth order 4,” and “Edgeworth order 6,” respectively. According to [1], the denominator of (23) cannot be zero. Therefore, during our simulations, the equalizer’s taps were updated only if the denominator was greater than . For the equalization performance comparison we also used the algorithm defined by Godard [7]. The equalizer’s taps for Godard’s algorithm [7] were updated according to where is the step-size and is the absolute value operator.
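For reference, Godard's algorithm in its common p = 2 (constant modulus) form uses the error term z(|z|² − R2) with R2 = E[|a|⁴]/E[|a|²]. A minimal sketch of this tap update is given below; sign and normalization conventions may differ slightly from the equation used in the simulations.

```python
import numpy as np

def godard_update(taps, y_vec, z, R2, mu):
    """Godard (p = 2, constant modulus) tap update used as a baseline.

    Error term e = z * (|z|^2 - R2), with R2 = E[|a|^4] / E[|a|^2];
    the taps are corrected by -mu * e * conj(y_vec).
    """
    e = z * (np.abs(z) ** 2 - R2)
    return taps - mu * e * np.conj(y_vec)

# R2 for a given constellation of symbols a (e.g., 16QAM):
# R2 = np.mean(np.abs(a) ** 4) / np.mean(np.abs(a) ** 2)
```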

Two different sources were considered: 16QAM and 64QAM (modulations using and levels, respectively, for the in-phase and quadrature components).

Two different channels were considered:
(i) channel case 1 (initial ISI = 0.44), where the channel parameters were taken according to Shalvi and Weinstein [17]: ;
(ii) channel case 2 (initial ISI = 1.402), where the channel parameters were taken according to [18]: .

For channel case 1 and channel case 2 we used an equalizer with 13 and 21 taps, respectively. The equalizers were initialized by setting the center tap equal to one and all others to zero.

The step-size parameters, , , , , , , and were chosen for fast convergence speed and low steady state ISI.

Figures 2–4 show the simulated equalization performance (ISI as a function of iteration number) of four equalization methods: “Godard,” “MaxEnt,” “Edgeworth order 4,” and “Edgeworth order 6” for channel case 1, 16QAM, and SNR = 30 dB.

Table 1 lists the various equalization methods and the corresponding model for the convolutional noise pdf used in the algorithm.

Two cases were considered for “Edgeworth order 4”:

Case a. Nonoptimal Lagrange multipliers (Lagrange multipliers similar to those used in “MaxEnt” [1]).

Case b. Optimal Lagrange multipliers (according to Appendix C).

Based on the simulation results (Figures 2–4), “Edgeworth order 4” and “Edgeworth order 6” achieve better equalization performance in terms of convergence speed and lower residual ISI compared to the other methods.

According to Figure 2, “Edgeworth order 4” with optimal Lagrange multipliers has a slightly faster convergence speed and a slightly lower residual ISI compared to the nonoptimal Lagrange multipliers case. According to Figure 3, “Edgeworth order 6” with optimal Lagrange multipliers leads to a faster convergence speed compared with “Edgeworth order 4” with nonoptimal Lagrange multipliers but leads the system to a slightly higher residual ISI. Figure 4 shows that “Edgeworth order 4” and “Edgeworth order 6” with optimal Lagrange multipliers both have approximately the same convergence speed, while “Edgeworth order 4” leads the system to a lower residual ISI compared to “Edgeworth order 6.” Figures 2–4 indicate that there is no justification for using “Edgeworth order 6” over “Edgeworth order 4,” nor “Edgeworth order 4” with optimal Lagrange multipliers over “Edgeworth order 4” with nonoptimal Lagrange multipliers, for channel case 1.

Figures 5, 6, and 7 indicate that our conclusions based on Figures 2–4 also hold when the SNR is decreased.

Figures 8–10 show the simulated equalization performance (ISI as a function of iteration number) of four equalization methods: “Godard,” “MaxEnt,” “Edgeworth order 4,” and “Edgeworth order 6” for channel case 2, 16QAM, and SNR = 30 dB. Based on Figures 8–10, “Edgeworth order 4” and “Edgeworth order 6” have better equalization performance in terms of convergence speed compared to “MaxEnt” but lead the system to a slightly higher residual ISI.

Figure 8 indicates that “Edgeworth order 4” with optimal Lagrange multipliers leads to a slightly faster convergence speed compared to the nonoptimal Lagrange multipliers case while leaving the system with approximately the same residual ISI.

Figures 9 and 10 show that “Edgeworth order 6” with optimal Lagrange multipliers leads to a much faster convergence speed compared to both the optimal and nonoptimal Lagrange multiplier cases with “Edgeworth order 4,” while having approximately the same residual ISI in both cases.

Figure 11 shows the simulated equalization performance (ISI as a function of iteration number) of three equalization methods: “MaxEnt,” “Edgeworth order 4,” and “Edgeworth order 6” for channel case 1, 64QAM constellation input and for the noiseless case.

Three optimal Lagrange multipliers were used for “Edgeworth order 4” and for “Edgeworth order 6” (please refer to Appendix D).

Based on the simulation results (Figure 11), a much faster convergence speed is obtained by using “Edgeworth order 6” with optimal Lagrange multipliers compared to the case where “Edgeworth order 4” with optimal Lagrange multipliers or the “MaxEnt” method is used.

5. Conclusion

In this paper we used the Edgeworth expansion up to order 4 and up to order 6 to model the convolutional noise pdf in the conditional expectation calculations where the source pdf was approximated according to the maximum entropy density approximation technique. We derived new Lagrange multipliers and obtained new closed-form approximated expressions for the conditional expectation and MSE as byproducts. According to simulation results for the 16QAM constellation input, there is no justification for using “Edgeworth order 4” with optimal Lagrange multipliers over “Edgeworth order 4” with nonoptimal Lagrange multipliers when dealing with easy or hard channels. In addition, hardly any equalization performance improvement was observed with “Edgeworth order 6” (with optimal Lagrange multipliers) compared to “Edgeworth order 4” with optimal and nonoptimal Lagrange multipliers for the easy channel case (channel case 1). However, a much faster convergence speed was observed with “Edgeworth order 6” with optimal Lagrange multipliers for the hard channel case (channel case 2) compared to “Edgeworth order 4” with optimal and nonoptimal Lagrange multipliers. Furthermore, a much faster convergence speed was obtained for the 64QAM input and easy channel case with “Edgeworth order 6” compared to “Edgeworth order 4” both with optimal Lagrange multipliers.

Appendices

A. Hermite Polynomials and the One Dimensional Central Cumulants

A closed-form expression for the Hermite Polynomials can be defined as in [37]

Specifically, the first Hermite Polynomials up to order 6 are According to [34, Table 3], the one-dimensional central cumulants in terms of central moments are
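For readers who wish to reproduce these quantities numerically, the sketch below generates the probabilists' (unnormalized) Hermite polynomials in the power basis and lists the standard relations between central cumulants and central moments up to order six; note that Appendix A works with normalized Hermite polynomials, so a normalization factor would still have to be applied.

```python
import numpy as np
from numpy.polynomial.hermite_e import herme2poly

# Probabilists' Hermite polynomials He_k in the power basis, e.g.
# He_4(x) = x^4 - 6x^2 + 3 and He_6(x) = x^6 - 15x^4 + 45x^2 - 15.
for k in range(7):
    c = np.zeros(k + 1)
    c[k] = 1.0
    print(f"He_{k} power-basis coefficients:", herme2poly(c))

def central_cumulants(m2, m3, m4, m5, m6):
    """Central cumulants in terms of central moments (standard relations)."""
    return {
        2: m2,
        3: m3,
        4: m4 - 3.0 * m2 ** 2,
        5: m5 - 10.0 * m3 * m2,
        6: m6 - 15.0 * m4 * m2 - 10.0 * m3 ** 2 + 30.0 * m2 ** 3,
    }
```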

B. Expressions for , , , , , , , , , and

In order to facilitate the derivation process of (17) and (18) we divide (17) into two parts: where and

According to (A.3), the values for and are

Their derivatives may be expressed as

For and as described in (A.3) we define the following constants (results after normalization):

We set as follows:

The second and fourth derivatives of are as follows: The definitions for and its derivatives are

By substituting (B.9) and (B.10) into (23), the closed-form approximated expression for the conditional expectation is obtained.

C. Lagrange Multipliers Equations for 16QAM

For the 16QAM case we use two Lagrange multipliers ( ); thus, we have .

In order to get the required Lagrange multipliers we recall (30) where we use and from (B.6) and (B.7), respectively. For we have ; thus, when substituting and into (30) we get

For we have ; thus, when substituting and into (30) we get

From (C.2) and (C.3) we get a linear system with two equations:

which can be written as where the solution is

Next, we substitute the known moments , , and into (C.5) and obtain which can be written as:

By substituting and into (C.8), we get the Lagrange multipliers for the 16QAM input as they appeared in [1].

D. Lagrange Multipliers Equations for 64QAM

For the 64QAM case we use three Lagrange multipliers ( ); thus, we have .

In order to get the required Lagrange multipliers we recall (30) where we use and from (B.6) and (B.7), respectively.

For we have ; thus, when substituting and into (30) we get

For we have ; thus, when substituting and into (30) we get

For we have ; thus, when substituting and into (30) we get

From (D.2), (D.3), and (D.4) we get a linear system with three equations:

By substituting the known moments, into (D.5) we obtain

E. Derivation of

By using (B.8) and (B.9) we can write

Thus we have

Next we set (E.2) to zero and obtain or

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors would like to thank the reviewers for their helpful comments.