Abstract
Modulation with correlated signal waveforms is considered. Such correlation arises naturally in a number of modern communications systems and channels, for example, in code-division multiple-access (CDMA) and multiple-antenna systems. Data entering the channel in parallel streams either naturally or via inverse multiplexing is transmitted redundantly by adding additional signal waveforms populating the same original time-frequency space, thus not requiring additional bandwidth or power. The transmitted data is spread over a frame of N signaling intervals by random permutations. The receiver combines symbol likelihood values, calculates estimated signals and iteratively cancels mutual interference. For a random choice of the signal waveforms, it is shown that the capacity of the expanded waveform set is nondecreasing and achieves the capacity of the Gaussian multiple access channel as its upper limit when the number of waveforms becomes large. Furthermore, it is proven that the iterative demodulator proposed here can achieve a fraction of 0.995 or better of the channel capacity irrespective of the number of transmitted data streams. It is also shown that the complexity of this iterative demodulator grows only linearly with the number of data streams.
1. Introduction
Modulation is the process of injectively mapping elements of a discrete set, called the messages, onto functions of time, called the signals, for the purpose of information transmission. The signals form a (finite-dimensional) Hilbert space, called the signal space. Geometric representations of the signals are often called signal constellations. Basic modulation methods prefer to use orthogonal bases of the signal space as the signals themselves, since demodulation can then be accomplished by projection onto these bases. For example, equidistant M-ary pulse-amplitude modulation (PAM) uses M discrete amplitude levels on each basis function [1].
Signals experience distortion during transmission which is modeled probabilistically, mainly due to the addition of noise. The received signals are therefore no longer identical with the transmitted signals. The demodulation problem is that of mapping a received signal back to a message such that the probability of the demodulated message not equalling the original transmitted message is minimized. Under the assumption of additive white Gaussian noise, picking the message whose signal is closest to the received signal using the natural Euclidean distance metric is optimal (if noise is correlated, for example, a generalized metric needs to be used) [1]. This is referred to as maximum-likelihood (ML) decoding since it minimizes the message error probability.
However, ML decoding quickly becomes practically infeasible due to the "curse of combinatorics," and other methods need to be considered. Shannon [2] showed that every transmission channel has a maximum possible transmission rate which it can support, called the Shannon capacity, and that there exist coding and decoding methods which can operate arbitrarily close to this capacity at arbitrarily low error rates. Shannon's nonconstructive proofs did not require ML decoding, opening the door to possibly low-complexity capacity-achieving signaling methods. Unfortunately, achieving capacity requires continuous (Gaussian) input alphabets, which is highly impractical. Discrete modulations mapped onto orthogonal bases, such as PAM, cannot achieve the Shannon capacity on the Gaussian channel. Certain high-dimensional discrete constellations, such as lattices, have been reported to achieve capacity, but in many ways their regular (discrete) structure is lost in the process [3].
In this paper we pursue another approach, abandoning the use of orthogonal bases as signals. In many practical situations the signals utilized are correlated, either by design or by effects that occur during transmission. An example of the former is (random) code-division multiple-access (CDMA) [4], and an example of the latter is multiple-antenna transmission (MIMO) [5]. In both cases the signals are densely correlated, which makes efficient demodulation extremely difficult. If the correlation pattern is sparse, that is, if any given signal waveform interferes with only a few other (neighboring) signal waveforms, sequence detection algorithms like the Viterbi algorithm can be used efficiently. A number of modulation methods based on the superposition of individual data streams have been proposed (see [6-8]). When a number of independent signals add up in the channel, they can sometimes be decoded sequentially. Onion-peeling decoding starts from the signal with the largest power, decodes it treating the rest of the signals as noise, and subtracts the result from the composite received signal. Decoding then continues analogously from the second-strongest signal down to the weakest. A number of methods based on successive decoding have been proposed and studied for various types of signals (data streams), including binary ones [9]. Channel capacity can be approached when the powers and rates of the signals follow specific precise arrangements, which is, however, challenging to accomplish in practice.
In this paper we assume a random correlation among the signals by postulating that these signals correspond to random vectors in signal space. The CDMA and MIMO channels are practical examples of such random channels [5, 10, 11]. Transmission relies on repeating the symbols of a message with random delays. Each time, the symbol is modulated onto a new signal. While this increases the number of signals utilized, it allows a very efficient iterative demodulation method to be used. This iterative demodulator forms the first stage of a two-stage receiver, where the second stage is a conventional forward error control (FEC) decoder for the individual (binary) data streams. That is, the iterative first stage efficiently separates the correlated data streams. Specific adaptations of generalized modulation have recently been proposed for both CDMA [12] and MIMO channels [13].
Our contributions in this paper are two-fold. First, we show that using random signals incurs no capacity loss and, furthermore, that regularly spaced PAM-type modulation on these random signals can achieve the Shannon capacity. We then discuss transmission using redundant signaling and an iterative demodulation method which we show can operate close to the Shannon capacity over the entire range of operational interest. In showing this, we only assume that capacity-achieving binary error control codes are available, a very reasonable assumption given the current state of the art in error control coding [14].
2. Modulation
Generally, a discrete data stream is mapped onto signals from a finite set according to some mapping rule. In the ubiquitous pulse-amplitude modulation (PAM), a discrete amplitude is first selected for each data symbol and then used to multiply the corresponding signal. Most commonly, one of 2^m amplitude levels is selected for each m-bit data symbol. In the case of 8-PAM, for example, with m = 3, the discrete equispaced amplitudes shown in Figure 1 are used on each signal. This signal constellation can be interpreted as the superposition of three simple binary constellations, where Bit 1 has 4 times the power of Bit 0, and Bit 2 has 16 times its power.
In general, any properly labeled 2^m-ary PAM modulation can be written as the superposition of m binary antipodal amplitudes, that is, a = sum_{i=0}^{m-1} 2^i b_i, where b_i is in {-1, +1}. If an entire sequence of 2^m-ary PAM symbols is considered, it may be viewed as the superposition of m binary modulated data streams with powers proportional to 4^i, which together make up the PAM symbol sequence.
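As an illustration of this decomposition, the sketch below (the function name and the choice m = 3 are ours) generates the 8-PAM levels by superposing three antipodal bits with amplitudes 1, 2, and 4:

```python
from itertools import product

# Sketch: an 8-PAM symbol as the superposition of m = 3 binary antipodal
# streams with powers 1, 4, and 16 (amplitudes 1, 2, and 4).
def pam_from_bits(bits):
    """Map three antipodal bits b_i in {-1, +1} to one 8-PAM amplitude."""
    amplitudes = [1, 2, 4]  # square roots of the powers 1, 4, 16
    return sum(a * b for a, b in zip(amplitudes, bits))

# All 8 bit patterns generate the equispaced levels -7, -5, ..., 5, 7.
levels = sorted(pam_from_bits(b) for b in product([-1, 1], repeat=3))
print(levels)  # [-7, -5, -3, -1, 1, 3, 5, 7]
```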
This viewpoint is quite productive in that it suggests a capacity-achieving demodulation method for large constellations based on cancellation. Consider the case where the highest-power uncanceled data stream treats all lower-power data streams as noise. Its maximum rate is then given by the mutual information between its bits and the channel output. As long as the rate on the i-th binary data stream does not exceed this mutual information, the stream can be correctly decoded using a binary Shannon-capacity-achieving code. (While no class of binary codes with nonexponential decoding complexity is known to provably achieve the capacity of a binary-input channel, codes which achieve this capacity "practically" with "implementable" complexities have recently emerged from intense research. The most popular representatives are turbo codes and low-density parity-check codes. Both utilize iterative message-passing decoding algorithms [14].) By virtue of (1), knowledge of the decoded bits implies that these data streams can be canceled from the received signal, and the mutual information above is the capacity of the i-th binary data stream. This thought model leads to a successive decoding and canceling method which, by the chain rule of mutual information, can achieve the total mutual information rate of the uniform PAM input. This rate is of course not equal to the capacity of the additive white Gaussian noise channel, since the input distribution is uniform rather than Gaussian, as required to achieve the channel capacity. In fact, uniform signaling loses the so-called shaping gain of up to 1.53 dB with respect to the capacity of the Gaussian channel [14].
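The telescoping of the per-stream rates can be checked numerically. The sketch below uses a Gaussian-input approximation (our simplification, since the binary-input rates have no closed form) to verify that successive decoding, with each stream treating the weaker, not-yet-decoded streams as noise, attains exactly the total mutual information:

```python
import numpy as np

# Gaussian-approximation check of the chain rule: each stream is decoded
# treating all not-yet-decoded (weaker) streams as noise, then canceled.
powers = [16.0, 4.0, 1.0]   # geometric, PAM-style power profile
noise = 0.5                 # noise power
rates = []
residual = sum(powers)      # power not yet canceled
for P in powers:            # strongest first
    residual -= P           # remaining streams act as extra noise
    rates.append(0.5 * np.log2(1 + P / (noise + residual)))

# The per-stream rates telescope to the total mutual information.
total = 0.5 * np.log2(1 + sum(powers) / noise)
assert abs(sum(rates) - total) < 1e-12
```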
3. Main Result
In this paper, we propose a generalized PAM modulation method which operates with random signals rather than the orthogonal bases implied by the discussion in the previous section. We present a two-stage demodulator/decoder which remedies the difficulties of the onion-peeling method discussed above. Specifically, the first stage is an iterative demodulator which operates in parallel and achieves a signal-to-noise ratio (SNR) improvement on each of the binary data streams. These are then decoded using external binary error control codes. The latency is basically confined to that of the parallel demodulator and that of the follow-up error control decoder. The iterative demodulator is based on cancellation. This means that its complexity, and that of the entire decoder, scales linearly with the number of data streams.
Our main result is a proof that such an iterative demodulator/decoder can achieve a cumulative data rate per dimension of at least 0.995 C, where C is the Shannon capacity of the Gaussian multiple-access channel. That is, our system can achieve a fraction of 0.995 of the channel's information-theoretic capacity, irrespective of system size.
In order for the iterative demodulator to function, we require that the number of signals in the signal space be increased, but not the power or spectral resources.
4. System Model
4.1. Signaling
We are considering the communication of K data streams using random signals of dimension N. If N is sufficiently large, the number of useful signals is arbitrarily large. A set of K data symbols, one from each data stream, is transmitted at each time interval t. There are basically two ways to do this. Conventionally, each symbol is directly modulated onto an individual signal. In this paper, however, we propose an alternative where we duplicate each symbol L-fold. These duplicates are then modulated onto L separate signals at random time intervals within a certain signal block, where the l-th copy of the symbol transmitted at interval t is placed at a random location pi_l(t) within the block. The function pi_l is a permutation function with inverse pi_l^{-1}. Even though we have increased the number of signals by a factor L, scaling the power with 1/L and requiring that the signal set occupy the original N-dimensional signal space means that this does not affect total power or total spectrum utilization. (Another form of modulation based on randomly correlated signals, called "partitioned transmission," has recently been proposed in [15]. Partitioned signaling creates redundancy and sparseness in the transmitted data by partitioning existing N-dimensional signal waveforms and permuting the resulting partitions. Generalized modulation instead relies on populating the signal space with additional N-dimensional signal waveforms. The latter gives an opportunity to create the required level of redundancy independently of the signal dimensionality N. Further, near-capacity operation with generalized modulation does not require large N.) A diagram of this modulator is given in Figure 2.
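A minimal sketch of this duplication step follows (the symbol names K, L, and the frame length are our assumptions); the L copies of each interval's symbols are scattered across the frame via random permutations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the duplication step: K symbols per interval, duplicated L-fold,
# with copy l of the whole frame's symbols relocated by a random permutation
# pi_l (K, L, and the frame length N_frame are illustrative assumptions).
K, L, N_frame = 4, 3, 8
symbols = rng.choice([-1, 1], size=(K, N_frame)).astype(float)

perms = [rng.permutation(N_frame) for _ in range(L)]
copies = np.zeros((L, K, N_frame))
for l, pi in enumerate(perms):
    copies[l][:, pi] = symbols   # l-th copy of a_k(t) lands at interval pi[t]

# Every copy carries the same data, merely relocated in time.
for l, pi in enumerate(perms):
    assert np.array_equal(copies[l][:, pi], symbols)
```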
We make the convenient, but in no way necessary, assumption that the channel is block-synchronous, that is, that the signal waveforms at a given time interval interfere only within that time interval, and that there is no correlation of signal waveforms between time intervals. With this we can write the channel in the linear matrix form r = S a + n, where the matrix S contains the signal vectors as columns. The capacity per dimension of this channel is well known [10] and is given by (6), where the diagonal matrix in (6) contains the powers used for transmission of the different signal vectors. We now assume that the signals are chosen randomly from the signal space (the individual components of the signals can, for example, be selected randomly from {-1/sqrt(N), +1/sqrt(N)}, picking each entry with probability 1/2; however, other random selections satisfying (8) are also possible) such that the expected pairwise correlation between distinct signals is zero, as stated in (8). This model captures, among others, the random code-division multiple-access channel and the isotropic multiple-antenna channel model.
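One concrete choice satisfying the correlation model (our assumption, matching the binary selection mentioned in the parenthetical above) uses signature entries of +/- 1/sqrt(N); the pairwise correlations then have zero mean and variance 1/N:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random binary signatures: one assumed selection satisfying the correlation
# model -- entries +/- 1/sqrt(N) with probability 1/2, unit-norm columns.
N, K = 64, 16
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)

R = S.T @ S                                  # signature correlation matrix
assert np.allclose(np.diag(R), 1.0)          # unit energy per signal
off = R[~np.eye(K, dtype=bool)]              # pairwise correlations
assert abs(off.mean()) < 0.05                # zero mean ...
assert abs(off.var() - 1.0 / N) < 0.01       # ... and variance 1/N
```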
The capacity of this random vector channel is given by the expectation over the random signals in (6). Using Jensen's inequality, it follows that this capacity is maximized if and only if all signals are transmitted with equal power (see [16]). That is, an equal distribution of powers over the signals maximizes the capacity of the random vector channel (5).
We now investigate the information-theoretic impact of increasing the signal population by the proposed L-fold duplication. The following lemma addresses this issue.
Lemma 1. Keeping the transmit power constant, the capacity as a function of the duplication factor L approaches the capacity of the Gaussian multiple-access channel in the limit L -> infinity. It approaches this limit from below, that is, the capacity is nondecreasing in L.
Proof. See Appendix A.
Lemma 1 reveals useful information in several ways. Firstly, it guarantees that the signaling strategy presented above, that is, the addition of extra random signal waveforms, incurs no capacity loss, and secondly, in the limit, arbitrary power assignments become capacity achieving, not only the equal power assignment.
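Lemma 1 can be illustrated by a small Monte Carlo experiment (a sketch under simplified assumptions: M unit-norm random signals in N dimensions sharing a fixed total power, rather than the paper's exact duplication construction). The capacity per dimension increases with M and stays below the Gaussian multiple-access limit:

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo illustration of Lemma 1 under simplified assumptions: M random
# unit-norm signals in N dimensions share a fixed total power P_tot.  As M
# grows, the capacity per dimension approaches the Gaussian multiple-access
# limit log2(1 + P_tot / (N * sigma2)) from below.
N, P_tot, sigma2, trials = 16, 32.0, 1.0, 200
limit = np.log2(1 + P_tot / (N * sigma2))

def cap_per_dim(M):
    acc = 0.0
    for _ in range(trials):
        S = rng.choice([-1.0, 1.0], size=(N, M)) / np.sqrt(N)
        G = np.eye(N) + (P_tot / (M * sigma2)) * (S @ S.T)
        acc += np.linalg.slogdet(G)[1] / np.log(2) / N  # log2 det per dim
    return acc / trials

caps = [cap_per_dim(M) for M in (16, 64, 256)]
assert caps[0] < caps[1] < caps[2] < limit   # nondecreasing, below the limit
assert limit - caps[-1] < 0.05               # close for large M
```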
4.2. Demodulation
The first stage of the demodulation process starts with matched filtering of the received signal with respect to each transmitted signal waveform in each time interval. Given the received signal r = S a + n embedded in Gaussian noise, these matched-filter outputs are given by (12), where the noise samples have variance sigma^2 and each interfering signal enters weighted by its correlation with the target signal. The matched-filter outputs in (12) thus consist of the desired symbol and an interference-and-noise term, which is given by (13).
At this point the graphical illustration shown in Figure 3 may prove helpful; it shows how the different symbols and signals combine to generate the sequence of received signal vectors. Note that in the interference equation (13) above, self-interference is not included. Apart from unnecessarily complicating the notation, this self-interference is negligible, as shown below. Furthermore, in many cases it is not difficult to ensure that the signal vectors carrying the different copies of a given channel symbol are orthogonal and hence cause no self-interference. In [12], for example, different time intervals are used for the duplicate signals to accomplish this. The graphical representation reveals the similarity with graph-based error control codes, in particular with fountain codes [17]. Consequently, we will explore a demodulation method based on message passing.
Iterative demodulation follows the message-passing principle. At the channel nodes, updated matched-filter output signals are computed at each iteration by subtracting interference reconstructed from soft symbol estimates of the interfering symbol copies. Note that, following the extrinsic principle, the different estimates for the same symbol are not necessarily identical (see below). These soft symbol estimates, in turn, are computed at the symbol nodes from the matched-filter signals for each copy of the symbol. While the symbols can, in general, take values in a larger complex integer alphabet, we will concentrate on the basic binary case where the symbols are +/- 1. We will show later how to build larger modulation alphabets from this basic binary case using the binary decomposition of PAM signals.
In the binary case, the soft symbols are calculated as in (16), which is the optimal local minimum-variance estimate of a binary symbol given that interference and noise combined form a Gaussian random variable of known power. The variance of the symbol estimates (16) will be required in the analysis in Section 5. Defining this variance at iteration i as v_i, and assuming that the correlation between the interference experienced by different replicas of the same symbol is negligible due to sufficiently large interleaving, it can be calculated by adapting the development in [18] for CDMA, leading to (17). The resulting expectation has no closed form, but the following bounds are quite tight [19], where erfc denotes the complementary error function. The final output signal after the last iteration is passed to the binary error control decoder of the corresponding data stream. The final signal-to-interference-plus-noise ratio (SINR) of this signal is what primarily determines the error performance of these error control decoders.
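The cancellation loop can be sketched as follows. This is a simplified, non-extrinsic variant without symbol replication, over a single interval; the tanh soft estimate and the effective-variance update follow the standard soft-interference-cancellation form, and all parameter values are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simplified soft parallel interference cancellation (single interval, no
# symbol replication, non-extrinsic; all parameters are assumptions).
N, K, sigma = 64, 16, 0.3
S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)
b = rng.choice([-1.0, 1.0], size=K)                 # binary symbols
r = S @ b + sigma * rng.standard_normal(N)          # received signal

R = S.T @ S                                         # signature correlations
y = S.T @ r                                         # matched-filter outputs
soft = np.zeros(K)
for _ in range(20):
    v = 1.0 - soft**2                   # residual variance of soft symbols
    sigma_eff2 = sigma**2 + (v.sum() - v) / N       # interference + noise
    z = y - (R @ soft - soft)           # cancel estimated cross-interference
    soft = np.tanh(z / sigma_eff2)      # binary MMSE soft estimate

# After the iterations, hard decisions recover (nearly) all streams.
assert (np.sign(soft) == b).mean() >= 0.9
```

Note that the diagonal of R is exactly one for unit-norm signatures, so `R @ soft - soft` subtracts only the cross-interference, never the desired symbol's own estimate.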
After the detection iterations of the first stage, the data is passed to the second stage of the receiver. This second stage is the error control decoding, which is executed for each of the data streams individually. The SINR for each data stream follows from the residual interference power, and it can be argued that the residual noise and interference is Gaussian [15]. Ultimately, for error-free decoding at the second stage, the information rate (i.e., the rate of the error control code) of each stream should not exceed the capacity of the binary-input real-valued-output AWGN channel at that stream's SINR.
5. Generalized Modulation
The discussion above treated the case of binary modulation on the different signal waveforms. However, as illustrated in Section 2, we can create the regularly spaced PAM modulations by geometrically scaling the powers of binary modulations as in (21). We assume that there are K data streams at each of the m power levels. Thus, the total number of streams equals mK.
Assuming large enough interleavers, the evolution of the interference in this iterative demodulator can be captured with a standard density evolution analysis. Since the average squared correlation between signal waveforms is 1/N (see (8)), the interference-and-noise power on a stream is given by (22) and is common to all streams. The upper bound in (22) contains the self-interference term between copies of the same symbol, which, however, becomes negligible as N and L grow. Using (17) in (22) together with the PAM power distribution, we obtain the recursion (23). The next theorem proves that generalized PAM modulation used with the two-stage demodulation described above can closely approach the channel capacity.
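The variance recursion can be evaluated numerically. The sketch below (all parameter values are our assumptions) tracks the interference-plus-noise power x for a three-level geometric PAM power profile, using a Monte Carlo estimate of the residual variance of a binary MMSE soft symbol:

```python
import numpy as np

rng = np.random.default_rng(4)

# Density-evolution sketch for a three-level geometric PAM profile
# (powers 1, 4, 16; load and noise level are assumptions).  v(m) is the
# residual variance E[1 - tanh^2(m + sqrt(m) Z)] of a binary MMSE soft
# symbol at SNR m, estimated here by Monte Carlo.
Z = rng.standard_normal(20_000)
def v(m):
    return float(np.mean(1.0 - np.tanh(m + np.sqrt(m) * Z) ** 2))

sigma2 = 0.1                                 # channel noise power
powers = np.repeat([1.0, 4.0, 16.0], 8)      # m = 3 levels, 8 streams each
N = 64                                       # signal-space dimension

x = sigma2 + powers.sum() / N                # iteration 0: full interference
for _ in range(30):
    x = sigma2 + sum(P * v(P / x) for P in powers) / N

# The recursion drives the residual interference close to the noise floor,
# i.e., the interstream interference is canceled almost entirely.
assert sigma2 < x < 1.5 * sigma2
```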
Theorem 1. Consider generalized PAM modulation (21) with m levels and K data streams per level, giving a total number of streams mK. One assumes that each data stream is encoded with a binary error control code which is capacity achieving on the binary-input AWGN channel, with rates chosen according to (24). Then the resulting spectral efficiency per dimension satisfies (25).
Proof. See Appendix B.
We note that the corresponding capacity-approaching power profiles for constellations of different sizes coincide on their shared levels. The importance of these results is that new data streams can always be added without affecting the decodability of the existing streams.
The gap between the achieved spectral efficiency and the channel capacity can also be expressed in terms of the average Eb/N0, instead of the data stream power profile. The average Eb/N0 for the power profile used in Theorem 1 can be upper bounded using (B.14), and therefore the corresponding capacity of the AWGN channel at this Eb/N0 can be upper bounded as well. As a result we obtain (29).
In Figure 4 we plot the achievable spectral efficiencies of the proposed generalized PAM modulation (21) for several numbers of levels, assuming the power profile of Theorem 1 and ideal post-error-control decoding with rates satisfying (24). Such performance can be closely approached with appropriate standard error control codes, which are very well developed for the binary case [20, 21]. Each curve corresponds to a fixed number of levels and is plotted as a function of the average Eb/N0, which is in turn a function of the number of streams. We observe that the spectral efficiency of generalized PAM modulation exceeds the capacity of the same PAM modulation using orthogonal waveforms. This is because the number of allowable correlated signal waveforms exceeds the number of available orthogonal dimensions N. This advantage is most noticeable for 2-PAM, where the number of supportable data streams is more than twice the number of orthogonal dimensions. For higher PAM constellations, the capacity per level approaches its limit rapidly from above. Note that the gap between the performance curves and the capacity curve satisfies (29); the marked operating points in the figure give the corresponding Eb/N0 gaps for each constellation size.
6. Conclusions
We have presented and analyzed a two-stage iterative demodulation methodology for generalized PAM constellations using correlated random signals rather than the usual orthogonal bases. The method operates by introducing redundant duplicate copies of the data symbols modulated onto extra signals. The exponential power distribution inherently present in PAM modulations allows this two-stage iterative demodulator to achieve a fraction of 0.995 of the Shannon capacity using binary capacity-achieving error control codes for each data stream. This generalized PAM modulation format was shown to approach the channel capacity over a wide range of operating SNRs, and can exceed the capacity of traditional PAM constellations on orthogonal signals.
Appendices
A. Proof of Lemma 1
Decompose the argument of (6) into its diagonal part and a matrix of off-diagonal elements containing the pairwise signal correlations, whose signs are equiprobable. These off-diagonal entries have zero mean and a variance which vanishes as (i) N -> infinity or (ii) L -> infinity. Condition (ii), however, requires the Lindeberg condition to hold on the set of correlation terms.
Under (i) or (ii), the off-diagonal elements are sufficiently small to apply Jacobi's formula. Since their first moment is zero and their second moment vanishes, the limit value of the lemma is proven.
Using Hadamard's inequality it is straightforward to show that the capacity is bounded by its limit value, and hence the limit is approached from below. While the argument above establishes convergence in probability, the convergence in fact holds almost surely.
B. Proof of Theorem 1
Let us first introduce the required auxiliary quantities and consider, for simplicity, equal numbers of streams per level. Convergence of the recursion defined by (23) follows from its monotonicity. The demodulation stage succeeds if the residual variance is close to the noise variance; this means that the interstream interference is canceled almost entirely. Let us choose a somewhat arbitrary lowest power threshold and prove that the recursion converges below it.
Let us define the auxiliary functions used below. The relevant function monotonically increases up to its maximum and monotonically decreases beyond it. To find an upper bound, we consider the terms for very small and very large arguments separately. Using an elementary inequality valid for all arguments, we obtain a bound for the small-argument terms, and from (19) we obtain a bound for the large-argument terms. The last inequality in (B.6) is computed by upper bounding the sum by a geometric progression. Using (B.5) we obtain the bound (B.8), and analogously, (B.6) gives (B.9). Choosing the splitting points appropriately, we compute numerical values of the bounds (B.8) and (B.9) for the tails. It then follows from (B.4), (B.10), and (B.11) that the desired bound holds. We also notice that it is enough to consider a bounded interval; a numerical calculation shows that the relevant function has only a single root on this interval. Thus, the claim follows from (B.13).
We calculate the spectral efficiency (or sum rate per dimension) as in (B.14), using a bound from [15] to upper bound the capacity of the binary-input channel.
The capacity of the additive Gaussian channel corresponding to the power profile (21) with K streams per level can be calculated as in (B.18). Combining (B.18) and (B.14), we obtain (25).