Abstract

Due to the inevitable propagation delay involved in deep-space communication systems, a very high cost is associated with the retransmission of erroneous segments. The quantization with linear index coding (QLIC) scheme is known to provide compression along with robust transmission of deep-space images, thereby significantly reducing the likelihood of retransmissions. This paper aims to improve its spectral efficiency as well as its robustness. First, multiple quantization refinement levels per transmitted source block of QLIC are proposed to increase spectral efficiency. Then, iterative multipass decoding is introduced to jointly decode the subsource symbol-planes. It achieves a better PSNR of the reconstructed image than the baseline one-pass decoding approach of QLIC.

1. Introduction

Deep-space communication is more challenging than its near-Earth counterpart due to the huge intertransceiver propagation delay [1, 2]. The inevitable propagation delay increases the likelihood of larger fluctuations in channel SNR during transmission, e.g., due to antenna pointing errors, atmospheric conditions, etc. [3, 4]. The error-correcting codes defined in [5, 6] provide excellent performance for both near-Earth and deep-space communication systems. However, the postdecoding bit-error rate (BER) increases dramatically when the channel SNR degrades even slightly below the decoding threshold of the code selected for transmission. Even with a low-rate code, retransmission of erroneous frames may be required if the postdecoding BER is higher than the value acceptable to the target application. Therefore, visually acceptable reconstruction quality of images transmitted from deep space despite moderate-to-high BER is a desirable feature.

It is well known that the state-of-the-art image compression standards (JPEG2000 and ICER) are sensitive to postdecoding residual errors [7–9] due to their embedded fixed-to-variable length entropy coders. Even a single error after the channel decoder renders the rest of the bitstream undecodable. Although this catastrophic propagation of errors is spatially confined by partitioning the image into segments, the complete loss of certain segments may reduce the visual quality of the reconstructed image. Considering the strict integrity requirement of deep-space scientific images, retransmission of the lost segments is then necessary.

In order to provide error-resilient source transmission, a vast number of schemes have been proposed in the literature under the category of joint source-channel coding (JSCC) [10–18]. A comprehensive survey of such schemes aiming at robust transmission of images can be found in [19–24] and references therein. In particular, this paper is interested in the JSCC-based linear index coding scheme, which is known to reconstruct images of better visual quality over mismatched deep-space channels. The scheme, referred to as quantization with linear index coding (QLIC), was proposed in [24] for the transmission of deep-space images using Raptor codes [25]. It replaces the concatenation of entropy coding and channel coding with a single linear encoding map, so that channel codes provide both source compression and error protection.

Indeed, QLIC is able to withstand significant SNR fluctuations while still preserving the visual quality of the image, but its spectral efficiency is inferior to that of the currently used deep-space image transmission system [26, 27]. Therefore, this paper aims to enhance the spectral efficiency as well as the robustness of QLIC. First, the proposed encoder utilizes multiple refinement levels to efficiently remove redundant symbol-planes, thus increasing the overall spectral efficiency. Then, we show that multipass decoding of QLIC provides significant gain if the information which maximizes the virtual channel capacity is utilized in the subsequent decoding passes. Our iterative decoding provides a better PSNR of the reconstructed image than the baseline one-pass decoding.

On the encoder side, QLIC [24] determines the minimum number of symbol-planes from the partitioned subbands needed to achieve a target distortion. The baseline QLIC uses the same block length for the partitioned subbands as for the channel codeword. On one hand, a large block length is required for capacity-achieving performance of the channel codes. On the other hand, a small block length is necessary to efficiently identify the redundant transform coefficients. As a result, the spectral efficiency is compromised in both cases. This paper proposes to partition the subbands into small blocks while still using a large block length for channel coding. Consequently, the proposed arrangement decreases the entropy rate of the symbol-planes, so that a smaller rate budget is required for a given channel capacity. Simulation results show that the proposed approach reduces the transmission overhead while still preserving the robustness feature of QLIC. Besides, it is shown that the degree distribution of the Raptor codes can be optimized by considering only the virtual correlation channel between the symbol-planes.

On the decoder side, QLIC [24] utilizes a one-way virtual correlation channel among the symbol-planes. Consequently, higher significant symbol-planes provide their decoding decisions to the subsequent lower significant symbol-planes in a multistage manner. In this paper, we show that lower significant symbol-planes can also provide new extrinsic information to the higher significant symbol-planes by executing multiple decoding passes. In fact, different from [28, 29], a reliably recovered symbol of a lower level provides effective extrinsic information only if its corresponding symbol of the higher level was recovered with low reliability in the previous decoding pass. Therefore, we propose to utilize information from only those combinations of the symbol-planes which result in the maximum capacity of the observed virtual correlation channel. As a result, the decoding performance as well as the quality of the reconstructed image improves with the multipass decoder. It is shown in Section 4.2 that multipass decoding provides a gain of up to 1.5 dB in terms of PSNR of the reconstructed image as compared to the baseline one-pass decoding approach.

The main contributions of this paper are summarized below:
(i) An efficient quantization scheme is proposed for QLIC to assign quantization precision to the transformed image considering relatively small blocks. The proposed scheme achieves better overall transmission bandwidth efficiency.
(ii) A multipass decoding scheme is proposed to utilize the redundancy left in the lower level symbol-planes in order to improve the BER of the higher level symbol-planes. The improved BER of the higher levels eventually results in improved reconstruction quality.

The rest of this paper is organized as follows: Section 2 presents the background of QLIC and the notation used in the paper. The proposed idea of multiple refinement levels per subblock is discussed in Section 3, whereas the details of the multipass decoding approach are described in Section 4. Section 5 concludes the paper.

2. Quantization with Linear Index Coding

In this section, we briefly outline the QLIC scheme and the optimization problem whose solution is used in Section 3 for the assignment of multiple refinement levels to each source block. We also introduce the baseline encoding-decoding approach and the notation used throughout the paper, which is consistent with the notation in [24, 31].

In classical separated source-channel coding systems, the aim of source coding is to remove as much redundancy from the source as possible, and error protection is then provided by the channel codes. In this way, a controlled amount of redundancy is added to the transmitted data stream in order to protect it against channel-induced errors. In QLIC, by contrast, channel codes are used to provide both compression and error protection. The source data after quantization is arranged into bit-planes, and the bit-planes are directly mapped to the channel codewords. Consequently, the rate budget assigned to each encoded bit-plane is directly proportional to the conditional entropy rate of that bit-plane given the higher level bit-planes. This direct mapping of the bit-planes has been shown to be more robust against channel-induced errors [24].
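To make the rate-budget rule concrete, the following sketch estimates the empirical conditional entropy rate of each bit-plane given the higher-significance planes; the function name and the toy data layout are our own illustrative constructs, not part of QLIC itself.

```python
import numpy as np

def conditional_entropy_rates(bitplanes):
    """Empirical entropy rate of each bit-plane given all higher-significance
    planes, in bits per source symbol (row 0 = most significant plane)."""
    n_planes, n = bitplanes.shape
    rates = []
    for j in range(n_planes):
        # Context of a position = the bits of all higher planes at that position.
        contexts = np.zeros(n, dtype=np.int64)
        for k in range(j):
            contexts = contexts * 2 + bitplanes[k]
        h = 0.0
        for c in np.unique(contexts):
            mask = contexts == c
            p_c = mask.mean()                 # probability of this context
            p1 = bitplanes[j][mask].mean()    # P(bit = 1 | context)
            if 0.0 < p1 < 1.0:
                h += p_c * (-p1 * np.log2(p1) - (1 - p1) * np.log2(1 - p1))
        rates.append(h)
    return rates
```

A plane that is fully determined by the planes above it gets a zero conditional entropy rate, i.e., a zero rate budget, which is exactly why exploiting inter-plane correlation saves rate.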

Let us consider that an image, after a biorthogonal discrete wavelet transform (DWT), is divided into parallel source components with block length equal to the length of the LL0 subband, where LL0 denotes the low-pass subband after the 2D wavelet transform. In fact, LL0 is a subsampled version of the original image. Each source component is a sequence of transform coefficients. All the operations in the context of QLIC are essentially the same for every source component except the LL0 subband. Therefore, similar to [24], we do not consider the encoding and decoding of the LL0 subband and assume that it is available error-free at the receiver. However, in Sections 3.3 and 4.2, we include the transmission overhead of the LL0 subband when comparing the simulation results of the proposed enhancements with the baseline QLIC scheme.

Let an embedded dead-zone uniform scalar quantizer (DZUSQ) be applied to the transform coefficients, with refinement levels ranging from the lowest to the highest. The alphabets and decision regions of the DZUSQ are shown in Figure 1. We denote the quantization distortion of each source component at each quantization level, and arrange the block of ternary quantization indices in a two-dimensional array whose rows are referred to as "symbol-planes." For a given refinement level, all symbol-planes from the most significant down to that level are included. By the chain rule of entropy, the entropy rate of every subsource is the sum of the conditional entropy rates of its symbol-planes given the higher significant planes, each in bits/source symbol.
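A minimal sketch of such an embedded ternary quantizer is given below. The dead-zone threshold halving and the labeling of the refinement halves are our own assumptions for illustration; the exact cell layout of the DZUSQ is the one defined in Figure 1 and [31].

```python
def dzusq_ternary(x, T, levels):
    """Sketch of an embedded ternary dead-zone quantizer (assumes |x| < 2*T).
    'B' marks the (shrinking) dead zone; on first significance the sign is
    coded ('A' negative, 'C' positive); afterwards each level refines the
    magnitude cell, so by construction only 'A' or 'C' can follow — a 'B'
    after an 'A'/'C' never occurs."""
    syms, lo, hi = [], None, None
    m = abs(x)
    for j in range(levels):
        t = T / 2 ** j
        if lo is None:                        # still insignificant
            if m >= t:
                syms.append('A' if x < 0 else 'C')
                lo, hi = t, 2 * t             # magnitude cell after significance
            else:
                syms.append('B')
        else:                                 # refine the magnitude cell
            mid = (lo + hi) / 2
            if m >= mid:
                syms.append('C'); lo = mid    # upper half (assumed label)
            else:
                syms.append('A'); hi = mid    # lower half (assumed label)
    return syms
```

The key structural property used later in Section 3 is visible here: once a coefficient leaves the dead zone, 'B' can never reappear in its refinement sequence.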

Let the operational rate-distortion (R-D) function of each source component be defined with respect to the mean-square error (MSE) between the transform coefficients and their estimate after transmission using a suitable source-channel code. It is given by the lower convex envelope of the set of R-D points obtained by concatenating the various refinement levels of the quantizer with ideal entropy coding, where zero refinement corresponds to zero rate by definition. Since the resulting function is piecewise linear and convex, it can be represented by a family of straight lines obtained by joining the consecutive R-D points in (2).
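The lower convex envelope of a set of operational R-D points can be computed with a standard monotone-chain hull pass; this is a generic sketch, not the paper's code.

```python
def lower_convex_envelope(points):
    """Lower convex envelope of operational R-D points.
    points: iterable of (rate, distortion) pairs. Returns the subset on the
    envelope, sorted by rate, with nonincreasing distortion and
    nondecreasing slopes (the piecewise linear convex function)."""
    pts = sorted(set(points))
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            # Pop the middle point if it does not make a strict left turn,
            # i.e., it lies on or above the chord from hull[-2] to p.
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull
```

Points above the envelope (dominated refinement levels) are simply discarded, which is exactly how the family of straight lines in (2) is obtained.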

Let us consider that a successive refinement source-channel code exists which encodes all the subsources up to a particular refinement level such that the overall distortion is the sum of the per-subsource quantization distortions weighted by a set of nonnegative weights. The nonnegative weights are due to the fact that the MSE distortion in the pixel domain is not equal to the MSE distortion in the transform domain for a biorthogonal transform. Further, if the source-channel code transmits the data in a given number of channel uses, the resulting transmission bandwidth efficiency is the number of channel uses per source symbol. This notion is analogous to the "bit per pixel" (bpp) concept used in image coding, and we will use it in Section 3.3 to compare the spectral efficiency of our proposed enhancement with the baseline scheme.

Consequently, the refinement level necessary for every subsource to achieve an overall target distortion is the solution of the weighted MSE distortion linear program in (4). The program considers idealized channel codes; practical channel codes, however, require a rate overhead for successful decoding. The set of R-D points is therefore modified by inflating each rate with an overhead that can be determined experimentally for a particular family of channel codes. This modified set of R-D points is then used to determine the slope and intercept coefficients of the straight lines in (4).
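When the per-subsource R-D curves are convex, the linear program admits a standard greedy, slope-ordered solution: repeatedly take the refinement step with the steepest weighted distortion decrease per bit until the rate budget is exhausted. The sketch below illustrates this; the function name and data layout are our own.

```python
import heapq

def assign_refinement_levels(rd, weights, budget):
    """Greedy slope-ordered allocation for convex per-subsource R-D curves.
    rd[i]: list of (rate, distortion) points for subsource i, sorted by
    increasing rate with decreasing distortion. Returns the chosen
    refinement index per subsource under the total rate budget."""
    levels = [0] * len(rd)
    heap = []

    def push(i):
        l = levels[i]
        if l + 1 < len(rd[i]):
            r0, d0 = rd[i][l]
            r1, d1 = rd[i][l + 1]
            slope = weights[i] * (d0 - d1) / (r1 - r0)  # distortion gain per bit
            heapq.heappush(heap, (-slope, r1 - r0, i))

    for i in range(len(rd)):
        push(i)
    spent = 0.0
    while heap:
        _, dr, i = heapq.heappop(heap)
        if spent + dr > budget:
            continue                  # this step does not fit; cheaper ones might
        spent += dr
        levels[i] += 1
        push(i)                       # expose the next refinement step of i
    return levels
```

Because the curves are convex, the slopes of successive steps are nonincreasing, so the greedy order matches the Lagrangian solution of the program.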

The solution of the above program is then used to quantize each subsource up to its assigned refinement level. In order to use binary Raptor codes, the symbol-planes are decomposed into bit-planes by a ternary-to-binary alphabet mapping, the details of which are given in [31]. Therefore, in the rest of the paper we use the term "bit-planes" instead of "symbol-planes." The bit-planes of each block are directly encoded to the channel codeword with multilayer encoding, as shown in Figure 2(a). The systematic bits are punctured and only coded bits are transmitted. On the decoder side, decoding is performed in a multistage manner starting from the most significant bit-plane, as shown in Figure 2(b). Each stage of the decoder makes use of the source probability model, which is conveyed to the decoder as part of the header. Consequently, the coded nodes receive intrinsic information from the channel, whereas the systematic nodes receive the message in (7), i.e., the log-likelihood ratio of the current bit under the source model conditioned on the hard decisions obtained from the already decoded higher significant bit-planes.
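The systematic-node message of (7) can be sketched as a conditional LLR under the source probability model. The dictionary-based model and the function name below are our own illustrative constructs, not the notation of [24].

```python
import math

def systematic_llr(p_zero_given_context, hard_decisions):
    """Message fed to a systematic node of the current bit-plane: the LLR
    of the bit under the source probability model, conditioned on the hard
    decisions of the already-decoded higher-significance planes.
    p_zero_given_context: dict mapping a tuple of higher-plane bits to
    P(bit = 0 | context), taken from the header's source model."""
    p0 = p_zero_given_context[tuple(hard_decisions)]
    p0 = min(max(p0, 1e-12), 1.0 - 1e-12)   # clamp to avoid infinite LLRs
    return math.log(p0 / (1.0 - p0))
```

A context where the bit is equiprobable contributes a zero LLR, i.e., no prior information, while a strongly skewed context provides a large-magnitude prior to belief propagation.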

Soft information can also be used instead of hard decisions; however, no significant improvement in performance was observed, as mentioned in [11]. Errors, if any, in the higher significant bit-planes will eventually feed the lower levels with false log-likelihood ratios (LLRs), though the error propagation is not as catastrophic as for variable-to-fixed length decoding. Both hard and soft reconstruction are possible from the posterior probability of the quantization indices given the channel output. In the rest of the paper, we refer to this multistage decoder as the one-pass baseline multistage decoder.

3. Multiple Refinement Levels

Different from JPEG2000 and ICER, which use sophisticated context-based probability models to drive the entropy coder, the pure compression performance of QLIC is determined by the solution of the optimization problem in (4). The solution identifies the refinement level of every subsource block needed to achieve a target overall distortion. The bit-planes are then directly encoded to the channel codewords for transmission. The pure compression performance of QLIC, i.e., using ideal channel codes, is inferior to that of the state-of-the-art image compression systems. This difference may become even more significant in a practical transmission scenario due to the overhead associated with the nonideal performance of finite-length channel codes.

The high level and low level transform coefficients are usually not uniformly distributed within a subband due to various image features. The choice of the block length, i.e., the portion of the subband treated as one subsource, is thus critical for a good solution of the optimization problem. Since modern block codes tend to be capacity achieving asymptotically, a large block length favors increased spectral efficiency due to a relatively low coding overhead. Therefore, a relatively large block length is used in the numerical results of [24, 31].

However, a large block length comes with a serious drawback. The optimization problem that determines the refinement levels considers each subsource as a single unit. Consequently, the solution assigns the same refinement level to all the transform coefficients within a subsource. Due to the different spatial locations of high and low level coefficients within a subsource, this assignment of refinement levels is usually not optimal. For example, consider the quantized subsource of an image taken from a Mars exploration rover shown in Figure 3. Zero values are represented with black pixels, whereas the remaining values are represented with white pixels. Let us focus on quadrant II of the image. Very few pixels in quadrant II have nonzero values, yet they are quantized at the full refinement level of the subsource. It is likely that quantizing these values with a lower refinement level would not significantly affect the reconstruction quality of the image. In that case, the conditional entropy rate of the bit-planes also decreases, which increases the overall spectral efficiency.

3.1. Proposed Encoding Approach

A possible solution is to use small block lengths to find the quantization refinement levels. Figure 4 compares the pure compression performance of QLIC for a small and a large block length. It is clear from Figure 4 that the pure compression performance of the small block length is superior; e.g., a PSNR of 48 dB is achieved at 0.86 bpp with the small block length, whereas 0.92 bpp is required with the large one to achieve the same PSNR. Although the small block length compresses better, encoding the bit-planes with short block length channel codes requires significant overhead for convergence. The improved compression performance is thus easily overwhelmed by the accumulated overhead of the channel codes.

Another possible solution is to combine the promising features of both block lengths; i.e., use a small block length to solve the optimization problem and a large block length for the channel codes. A trivial approach is to further partition each subsource into a power-of-two number of equal-length segments. The optimization problem (4) is then solved by feeding the subsource segments as input. The obtained refinement levels are eventually used to quantize each segment, generating bit-planes per segment. The segments of the same subsource can then be concatenated to restore the overall block length, so that each bit-plane can be encoded and transmitted using a channel code of large block length.

The above scheme is straightforward but unable to improve the spectral efficiency in practice. Each segment of a subsource may cover a different range of transform coefficients. Therefore, the quantization step size, which depends on the dynamic range of the transform coefficients in each segment, is different. In other words, the quantization of each segment may result in a different codebook, so the quantization index assignment differs even for coefficients of the same value located in different segments of the same subsource. Consequently, the overall entropy rate after concatenating all the segments of a subsource is higher than when the same refinement level is applied to the subsource as a whole. Since the rate budget assigned to each bit-plane depends on its entropy rate, the overall spectral efficiency decreases. In fact, the resulting spectral efficiency is even worse than in the large block length case.

Therefore, we propose to use the overall dynamic range of the transform coefficients of a subsource to determine the quantization step size for every segment. In other words, the codebook remains the same for all the segments. Although this approach is suboptimal in the sense that a wider quantization step size is used, we observed that it works quite well as input to the optimization problem (4). The solution of the optimization problem may identify segments to be quantized with a lower refinement level than in the case when the optimization is performed by considering the whole subsource as a single segment. As shown in Figure 4, although the pure compression in this case is reduced relative to the previous arrangement, the conditional entropy rate of the bit-planes is also decreased. Hence, a smaller rate budget is required for encoding, which ultimately increases the overall spectral efficiency. The conditional entropy rate of the bit-planes decreases because certain quantization symbols are replaced with "0" ("B" in our assignment of quantization alphabets). This process is equivalent to the one shown in Figure 5(a), which depicts a 2nd segment originally to be quantized at a higher refinement level but later reduced to a lower one.
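The per-segment truncation with a shared subsource-wide codebook can be sketched as follows; names and the row-major plane layout are our own illustrative choices.

```python
def reduce_segment_levels(symbol_planes, seg_levels, seg_len):
    """Apply per-segment refinement levels while keeping one subsource-wide
    codebook: symbols of a segment below its assigned level are replaced
    with 'B' (the zero alphabet), which lowers the conditional entropy of
    the lower planes. Rows are ordered most significant first."""
    out = [row[:] for row in symbol_planes]
    for s, lvl in enumerate(seg_levels):
        a, b = s * seg_len, (s + 1) * seg_len
        for j in range(lvl, len(out)):   # zero out planes below the level
            out[j][a:b] = ['B'] * (b - a)
    return out
```

Only the symbols of low-activity segments are touched; the codebook, and hence the index assignment of the untouched segments, is unchanged.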

The above solution removes redundant symbol-planes quite effectively and thus increases the overall spectral efficiency. However, it also has some drawbacks. Let the refinement level of a subsource and the lower refinement level of one of its segments be obtained with the help of the procedure explained above. Figure 5(a) shows the arrangement of the quantization alphabets without partitioning the source block into segments as well as with partitioning. The red square in the figure marks the quantization alphabets which become insignificant due to the partitioning of the source block; i.e., they are represented with zeros. In other words, without partitioning it is necessary to quantize the 2nd segment at the higher refinement level, but after partitioning its quantization precision requirement is reduced. Consequently, in the overall arrangement of the quantization alphabets of the source block, the 2nd segment appears as if it were reduced in precision with reference to the other segments. Therefore, it is necessary to convey information to the receiver about its updated refinement level. This overhead becomes significant particularly for small segment lengths. Moreover, since lower bit-planes are more prone to errors in case of mismatched channel SNR, the quantization alphabets of the 2nd segment become more susceptible to channel noise in the partitioned case, although they originally correspond to a higher significance level. The quantization symbols with decreased significance are thus more vulnerable to errors, and the robustness of QLIC is compromised.

In order to overcome this issue, we propose to update the arrangement of the quantization alphabets as shown in Figure 5(b). This arrangement is robust in the sense that it preserves the significance of the higher significant bit-planes. Consequently, it alleviates the problem associated with multistage decoding in case of mismatched channel capacity. In addition, it is no longer required to convey the information about the updated refinement levels to the receiver. The reason can be explained by considering the ternary quantization alphabets of the DZUSQ, where "B" is the alphabet corresponding to zero. By design [31], the quantizer cells corresponding to alphabets "A" and "C" can only be subdivided into "A" and "C" for further refinement. Therefore, the assignment of "B" (0) at a higher refinement level where the lower refinement level is alphabet "A" or "C" is illegal and can easily be spotted by the dequantizer.
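The dequantizer-side detection of such illegal patterns can be sketched in a few lines; the function name is our own, and a symbol's refinement sequence is assumed to be read most significant first.

```python
def effective_level(column):
    """Decoder-side inference of a reduced refinement level: a cell coded
    'A'/'C' can only refine into 'A'/'C', so a 'B' appearing right after an
    'A' or 'C' in a symbol's refinement sequence is illegal by design and
    marks where precision was truncated — no side information is needed."""
    for j in range(1, len(column)):
        if column[j] == 'B' and column[j - 1] in ('A', 'C'):
            return j          # refinement was truncated at level j
    return len(column)        # full precision, no truncation detected
```

This is why the arrangement of Figure 5(b) needs no explicit signaling: the truncation point is self-describing.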

3.2. Robustness of Raptor Codes

The multilayer encoding approach of QLIC directly maps the symbol-planes to the channel codewords, as shown in Figure 2. The rate budget assigned to each bit-plane is proportional to the conditional entropy rate of the bit-plane given the higher significant bit-planes, inflated by a small rate margin associated with the particular family of channel codes used for encoding. The layered encoding uses systematic channel codes such that the systematic symbols are punctured before transmission and only parity bits are transmitted. Since the required code rate depends on the conditional entropy rate, a discrete set of coding rates may result in reduced bandwidth efficiency, as it is difficult to generate a matching code rate for every possible rate budget. This issue can, however, be alleviated with rateless codes, which have the added advantage of generating a virtually infinite number of coding rates.
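A rough back-of-the-envelope form of this budget rule is sketched below; the multiplicative margin model and the function name are our simplifying assumptions, not the paper's exact expression.

```python
def channel_uses_per_pixel(cond_entropies, capacity, eps):
    """Rough spectral-efficiency estimate: a bit-plane with conditional
    entropy rate H_j needs about H_j * (1 + eps) / C channel uses per
    source symbol over a channel of capacity C, where eps is the
    finite-length rate margin of the chosen code family."""
    return sum(h * (1.0 + eps) for h in cond_entropies) / capacity
```

Shrinking any conditional entropy rate (e.g., by the segment truncation of Section 3.1) directly shrinks the number of channel uses, which is the mechanism behind the spectral-efficiency gain.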

The nonuniversality of Raptor codes over general noisy channels is well known [32]. Due to multilayer encoding, every bit-plane observes a composite channel at the decoder: the parity symbols observe the physical channel, whereas the systematic symbols observe a virtual correlation channel. For capacity-achieving performance, the output degree distribution of the Raptor codes must be optimized for the observed composite channel. This is achieved by finding a matching distribution for the various pairs of physical and virtual channel capacities on a sufficiently fine grid, as proposed in [24]. However, it is quite impractical to optimize the output degree distribution for every capacity pair; even on a grid with uniform spacing for both capacities, the number of output degree distributions would run into the thousands. Therefore, in the following we show that the output degree distribution of the Raptor codes optimized for a particular capacity pair is robust to variations in the physical channel capacity. According to [33], this is particularly true when joint decoding of the constituent codes (LT and LDPC) is performed.

Inspired by the work of [33], we numerically compute the threshold of systematic binary Raptor codes for the composite channel, i.e., by considering both the virtual correlation channel and the physical channel. In order to observe the effect of the physical channel capacity on a degree distribution optimized for a particular capacity pair, we fix the virtual correlation channel capacity and numerically find the threshold for different values of the physical channel capacity. If the decoding is successful, we increase the code rate and repeat the process with a new ensemble of Raptor codes and a different noise realization. Similarly, if the decoding fails, we decrease the code rate and repeat the same process. The consistency of the threshold is then established by repeating the same process a large number of times and averaging the results. The decoding thresholds of Raptor code degree distributions optimized for three representative capacity pairs are shown in Figure 6. The numerical results show that there is no large difference in the decoding threshold of Raptor codes, optimized for a particular physical channel capacity, over the range of channel capacities relevant in deep-space communication. Therefore, it is not necessary to optimize the degree distribution of the Raptor codes for every value of the physical channel capacity; it suffices to perform the optimization for each virtual channel capacity, which leads to system simplification. The fixed physical channel capacity may correspond to the middle of the range of relevant channel SNRs; in fact, in all the numerical results, we used degree distributions of Raptor codes optimized for an SNR of 3 dB. A high-rate regular LDPC code with degree (2,100) is used as a precode in all the Raptor codes.
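The rate search described above can be sketched as a Monte-Carlo bisection; the helper below is a generic stand-in where `decode_ok(rate)` represents one randomized decoding attempt of a freshly drawn code ensemble (the function names and parameters are our own).

```python
def estimate_threshold_rate(decode_ok, r_lo, r_hi, trials=20, steps=12):
    """Monte-Carlo bisection for a decoding-threshold rate: on success
    raise the code rate, on failure lower it, then average the per-trial
    estimates. decode_ok(rate) -> bool runs one decoding attempt and is
    assumed monotone in rate on average."""
    total = 0.0
    for _ in range(trials):
        lo, hi = r_lo, r_hi
        for _ in range(steps):
            mid = (lo + hi) / 2.0
            if decode_ok(mid):
                lo = mid          # decoding succeeded: try a higher rate
            else:
                hi = mid          # decoding failed: back off
        total += (lo + hi) / 2.0
    return total / trials
```

Averaging over trials with fresh ensembles and noise realizations is what establishes the consistency of the measured threshold.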

3.3. Spectral Efficiency Comparison

In this subsection, we present simulation results depicting the spectral efficiency of QLIC with the multiple refinement levels approach. We also compare these results with the numerical simulations of [24] in order to highlight the improved spectral efficiency of our proposed approach. The test image used in all the simulation results comes from a Mars exploration rover and is referred to as MER1. The image, shown in Figure 7, is the same image used in [24]. It is an uncoded black-and-white image with a resolution of 1024×1024 pixels and 12-bit pixel values. The image is compressed using a 3-level Cohen–Daubechies–Feauveau (CDF) 9/7 wavelet transform, as defined in the JPEG2000 standard for lossy compression [8]. The transform coefficients are then quantized with the DZUSQ, producing ternary quantization indices, which are further decomposed into binary indices to facilitate the use of binary Raptor codes [31]. Considering the resolution of the image, it is divided into subsources of 128×128 pixels each, to make it compatible with the packet length defined in the CCSDS recommendations [26]. Every subsource is further subdivided into segments to apply the multiple refinement levels concept.

A target PSNR of 49 dB is defined for the reconstructed image. The LL0 subband [8] is not transmitted using the QLIC approach. Instead, it is transmitted along with the header using a low-rate channel code, which results in negligible overhead, as explained in [24]. We included this overhead in the simulation results although it accounts for only 0.004 in spectral efficiency. We reproduce the results of [24] using binary Raptor codes for comparison purposes. Figure 8 compares the spectral efficiency of the baseline as well as the proposed QLIC approach at various values of channel SNR. The following comments are in order:
(i) The baseline curve corresponds to the spectral efficiency achieved by the baseline QLIC scheme using binary Raptor codes. As in the numerical results of [24], the degree distribution of the Raptor codes is optimized by considering both the virtual correlation channel and the physical channel on a sufficiently fine scale. This curve therefore serves as a benchmark for the spectral efficiency of the proposed scheme.
(ii) The (+)-curve corresponds to the spectral efficiency when a small block length is used in the encoding process and the quantization step size is assigned according to the dynamic range of the transform coefficients within each segment. As explained earlier, due to the increased randomness of the data, the resulting spectral efficiency is even worse than that of the baseline scheme.
(iii) The (○)-curve corresponds to the spectral efficiency achieved by the proposed approach. We highlight that, in order to obtain the spectral efficiency at the various values of channel SNR, only Raptor codes optimized for 3 dB SNR are used. The gap between the proposed curve and the baseline curve is significant in terms of transmission efficiency. For example, 2.16 bpp is required by the baseline scheme at an SNR of 2 dB, whereas only 2.08 bpp is required by the proposed scheme to achieve the same performance. This corresponds to a reduced transmission overhead with reference to the performance of infinite-length channel codes at 2 dB SNR shown in Figure 8.
(iv) The remaining curve shows the bandwidth expansion for infinite-length channel codes, computed using EXIT charts.

4. Multipass Decoding

In this section, we first establish the importance of the observed virtual correlation channel in multipass decoding. Then, we present an analysis of multipass decoding. It reveals that only certain symbols recovered in the first decoding pass can provide effective extrinsic information in the subsequent decoding passes. In particular, a reliably recovered symbol of a lower level can provide effective extrinsic information only if its corresponding symbol of the higher level was recovered with low reliability in the previous decoding pass. Therefore, we propose to utilize information from only those combinations of the bit-planes which result in the highest extrinsic information. The block diagram of the proposed multipass decoder is shown in Figure 9.

4.1. Proposed Decoding Approach

During decoding, every level of the multistage decoder observes a composite channel. A fraction of the systematic output nodes (output nodes in the Raptor code sense) in the decoding bipartite graph is connected to the virtual correlation channel, while the remaining fraction of output nodes is connected to the physical channel. A higher ratio of systematic to parity nodes indicates more dependence on the decoding decisions of the higher significant bit-planes than on the physical transmission channel output.

The overall channel capacity observed at each decoding stage is a weighted combination of the virtual and physical channel capacities. Since the parity budget assigned to every bit-plane is generally different, the channel capacity each bit-plane must observe for convergence is also different. Therefore, in order to compare the overall observed channel capacity at the various decoding levels, we introduce the normalized average channel capacity below, which accounts for the BER of the higher levels.

Let the multistage decoder at every decoding level provide its bit estimates after running sufficient iterations of the belief-propagation algorithm, and let p denote the probability that the decoding decisions of the higher decoding levels are correct. The normalized bit-level average channel capacity observed at every level can then be defined in terms of the degradation of the channel capacity from its nominal value; the degradation vanishes, and the nominal capacity is recovered, when the higher-level decisions are error-free.
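A toy model of this composite capacity is sketched below. Modeling the imperfect higher-level decisions as a cascaded BSC(1 - p) is our own simplifying assumption, not the paper's exact expression; it only serves to show that the observed capacity degrades smoothly with the higher-level error rate and is nominal when p = 1.

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def observed_capacity(alpha, c_virtual, c_physical, p_correct):
    """Toy composite-capacity model for one decoding stage: a fraction
    alpha of output nodes sees the virtual correlation channel, degraded
    by higher-level decision errors modeled as a BSC(1 - p_correct); the
    remaining fraction sees the physical channel."""
    c_v_eff = c_virtual * (1.0 - h2(1.0 - p_correct))
    return alpha * c_v_eff + (1.0 - alpha) * c_physical
```

With p_correct = 1 the virtual channel contributes its full capacity; with p_correct = 0.5 the higher-level decisions carry no information and only the physical channel remains useful.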

Now let us consider transmission using QLIC for such that and are the bit-planes to be transmitted over a binary-input output-symmetric channel with nominal capacity . The rate budget assigned to is proportional to the marginal entropy rate of , whereas is assigned a rate budget in proportion to its conditional entropy rate given . and are received corresponding to the transmission of and , respectively, over a mismatched transmission channel with capacity . The baseline multistage decoder first decodes and outputs the estimate . The decoding of then follows, with available and LLRs according to (7) provided to the systematic nodes of .
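The proportional rate assignment can be illustrated with a small worked example. The helper `rate_budget` below is hypothetical; it simply computes, from a joint distribution of two correlated binary bit-planes, the marginal entropy of the first (the budget driver for the first plane) and the conditional entropy of the second given the first (the budget driver for the second plane):

```python
import math

def h2(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def rate_budget(joint):
    """joint[(x1, x2)] -> probability.  Returns (H(X1), H(X2 | X1)),
    the quantities the two rate budgets are proportional to."""
    p1 = joint[(1, 0)] + joint[(1, 1)]           # P(X1 = 1)
    h_x1 = h2(p1)
    h_joint = -sum(p * math.log2(p) for p in joint.values() if p > 0)
    return h_x1, h_joint - h_x1                  # chain rule: H(X2|X1) = H(X1,X2) - H(X1)
```

For strongly correlated planes the conditional term is small, so the second plane needs far less rate than its marginal entropy would suggest.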

The normalized average channel capacities and observed by and , respectively, during the first decoding pass (baseline multistage decoder) are given as follows, where due to the independent decoding of .

Let us define , where is an index set which labels the systematic nodes of the decoding bipartite graph. Now consider that every decoding stage of the multistage decoder is equipped with a genie-aided decoder capable of correctly identifying the reliably recovered source nodes after sufficient iterations of the BP algorithm. Consequently, it is possible to provide the true LLRs only to the corresponding systematic nodes of . The remaining LLRs are treated as erasures and thus set to zero. Obviously, such a decoder restricts the propagation of errors to the subsequent decoding stages. Further, this genie-aided decoder is somewhat similar to the case in which the soft information recovered from the decoding levels is used to scale . Indeed, for practical multipass decoding, we use this latter approach.
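The genie-aided side information described above can be sketched as follows, assuming fixed-magnitude true-sign LLRs for the reliably recovered nodes and zero (erasure) for all others; the function name and the magnitude `llr_mag` are illustrative choices, not taken from the paper:

```python
def genie_llrs(true_bits, reliable, llr_mag=8.0):
    """Genie-aided side information for one decoding stage.

    Reliably recovered nodes get a true-sign LLR of magnitude llr_mag
    (positive for bit 0, negative for bit 1); every other node is treated
    as an erasure, i.e., LLR = 0, so its error cannot propagate.
    """
    return [
        (llr_mag if b == 0 else -llr_mag) if ok else 0.0
        for b, ok in zip(true_bits, reliable)
    ]
```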

The decoded output for any arbitrary value can only be improved by integrating new extrinsic information at the systematic nodes of the decoding bipartite graph. In other words, can only be increased by enhancing the capacity of the virtual channel, i.e., by executing a second decoding pass and utilizing the extrinsic information available from the decoded output . Now let us consider the following two hypothetical scenarios for the second decoding pass of , under the assumption that , where is the cardinality of . This assumption holds for the layered QLIC encoding approach.
(i) Scenario 1: ; i.e., the indices of the reliably recovered nodes of are exactly the same as those of in the first decoding pass. If a second decoding pass is now executed for , new extrinsic information from is available only for those systematic nodes of which were already recovered reliably in the first pass, as shown in Figure 10. Effectively, negligible improvement in is observed, and hence no further improvement in the decoding performance of is possible in subsequent passes.
(ii) Scenario 2: now consider that all the nodes recovered in the first decoding pass of are different from the recovered nodes of ; i.e., . In this case, new extrinsic information from is available for those nodes of which were not recovered in the first pass. Consequently, a second decoding pass of is likely to increase compared to the first.
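The two scenarios can be quantified by the fraction of reliably recovered lower-level nodes that are new with respect to the higher level; only these contribute fresh extrinsic information in a second pass. A sketch with hypothetical names, treating the recovered node indices as Python sets:

```python
def new_extrinsic_fraction(recovered_lower, recovered_higher):
    """Fraction of nodes reliably recovered at the lower level that were
    NOT already recovered at the higher level in the first pass.

    0.0 reproduces Scenario 1 (identical index sets, no second-pass gain);
    1.0 reproduces Scenario 2 (disjoint sets, maximum fresh information).
    """
    if not recovered_lower:
        return 0.0
    fresh = recovered_lower - recovered_higher
    return len(fresh) / len(recovered_lower)
```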

Consequently, let us define as the probability that the indices of the reliably recovered nodes of differ from those of after the first decoding pass. We can estimate for the case of two correlated sources as follows. The updated normalized average capacity observed by in the second decoding pass is then given as follows, where is the conditional entropy rate of w.r.t. . Consequently, if , then and the decoding performance of is improved. Further, is given by

According to (13), the multipass decoding gain depends not only on the correlation among the bit-planes but also on the probability . However, contrary to the genie-aided case, it is not possible to identify the reliably recovered nodes in a practical decoder. The practical solution is therefore to use the reliability information which is implicit in the soft decisions; this information can then be used to scale the LLRs of the systematic nodes in the subsequent decoding passes. Hence, in contrast to [24], decoding with soft decisions is necessary for the multipass approach, and in the following we use the soft-decision estimates for the practical case.
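A common way to extract the reliability implicit in a soft decision is |tanh(L/2)|, the magnitude of the expected bipolar symbol, which is 0 for an uninformative LLR and approaches 1 for a sure one. The sketch below uses this proxy to scale LLRs before the next pass; the exact scaling used in the paper is not reproduced above, so this is an assumption on our part:

```python
import math

def reliability(llr):
    """Confidence of a soft decision: |tanh(L/2)| = |2*P(bit=0|y) - 1|."""
    return abs(math.tanh(llr / 2.0))

def scale_for_next_pass(llrs):
    """Weight each LLR by its own reliability before feeding it to the
    systematic nodes in the next decoding pass; unreliable decisions are
    driven toward 0 (erasure-like), mimicking the genie-aided decoder."""
    return [l * reliability(l) for l in llrs]
```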

The extrinsic LLR for the th bit-plane in each decoding pass can be represented as follows, where and can be expressed with reference to all the bit-planes except , with , and an index set which labels all the bit-planes. The multipass decoding thus improves the decoding performance by providing to each bit-plane according to (15). However, the performance improvement obtained using the soft decisions alone is not very significant, as shown in Figure 11 (-curve). We explain the reason below and then improve the multipass decoding.
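Assuming the per-node extrinsic contributions of the other bit-planes are independent and therefore add in the LLR domain (an assumption on our part, since the exact expressions (15)-(17) are not reproduced above), the combination can be sketched as:

```python
def extrinsic_llrs(plane, llrs_by_plane):
    """Per-node extrinsic LLRs for `plane`: at each node, the sum of the
    (already reliability-weighted) soft LLRs of every OTHER bit-plane.
    `llrs_by_plane` maps a plane index to its list of per-node LLRs."""
    n = len(next(iter(llrs_by_plane.values())))
    return [
        sum(llrs_by_plane[p][j] for p in llrs_by_plane if p != plane)
        for j in range(n)
    ]
```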

When there are more than two bit-planes, different amounts of correlation exist among the combinations of bit-planes. For example, in the case of 3 bit-planes, , , and are known at the decoder for the most significant bit-plane. Due to the multistage decoding, certain symbols are recovered with very low reliability in the case of a mismatched channel SNR; this is particularly true for bit-planes with high . Moreover, the layered encoding approach makes the less significant bit-planes more susceptible to channel noise, mainly because these bit-planes are transmitted over a combination of the virtual and physical channels and are thus affected by the noise on both.

Therefore, including symbols from every bit-plane without any selection criterion when calculating the extrinsic information is likely to decrease , and this issue grows more significant as the number of bit-planes increases. Excluding such low-reliability symbols from the calculation of the extrinsic information may therefore improve the multipass decoding performance; indeed, it results in a higher capacity of the virtual correlation channel observed by the th bit-plane than in the former case. However, it is difficult to define a threshold for identifying the unreliable symbols. Therefore, in order to obtain the maximum benefit of multipass decoding for every th bit-plane, we propose to evaluate (15) for all the combinations of the bit-planes belonging to the set . The maximum value of is then used as the extrinsic information. In the next subsection, we show that this approach outperforms the general approach.
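The proposed selection can be sketched as an exhaustive search over the non-empty subsets of the available bit-planes, keeping the subset that maximizes the extrinsic information. Here `extrinsic_fn` is a placeholder for the per-subset evaluation of (15); the names are ours:

```python
from itertools import combinations

def best_extrinsic(plane_ids, extrinsic_fn):
    """Evaluate extrinsic information over every non-empty subset of the
    available bit-planes and return (max value, maximizing subset).

    extrinsic_fn(subset) -> float stands in for computing (15) using only
    the bit-planes in `subset`; low-reliability planes that would hurt the
    combination are excluded automatically by the maximization.
    """
    best_val, best_subset = float("-inf"), None
    for k in range(1, len(plane_ids) + 1):
        for subset in combinations(plane_ids, k):
            val = extrinsic_fn(subset)
            if val > best_val:
                best_val, best_subset = val, subset
    return best_val, best_subset
```

The search is exponential in the number of bit-planes, which stays small here (a few planes per subsource), so the exhaustive enumeration remains cheap.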

4.2. Robustness Comparison

In this subsection, we compare the simulation results for the baseline one-pass and the multipass decoding approaches. A simulation setup similar to the one explained in Section 3.2 is used. Figure 11 shows the reconstructed PSNR of the MER1 image. The rate budget assigned to the image corresponds to a nominal channel SNR of 3 dB; as the SNR decreases from this nominal value, the reconstruction quality of the image also decreases in terms of PSNR. The following comments are in order with reference to Figure 11:
(i) The (○)-curve shows the performance of the one-pass decoding scheme as the channel degrades from its nominal value. The PSNR degrades gradually as the channel SNR decreases, similar to the results of [24]. The baseline QLIC encoder is used to generate these results.
(ii) The ()-curve corresponds to the reconstructed PSNR of the image at various values of channel SNR using the multipass decoding approach. Three iterations of the multipass decoder are executed to generate these results. Multipass decoding provides a gain over the one-pass approach, and the gain is particularly significant at low values of mismatched channel SNR. For example, the decoding gain is 1.5 dB when the channel degrades by 0.3 dB from its nominal value. As for the (○)-curve, the baseline encoder is used to generate these simulation results.
(iii) The ()-curve corresponds to typical multipass decoding, i.e., without the approach proposed in the previous subsection. It provides a maximum gain of 0.6 dB at an SNR of 2.8 dB, and the gain decreases sharply under further channel degradation. The reason, explained earlier, is that symbols recovered with low reliability are unable to provide significant extrinsic information in subsequent decoding passes.
(iv) The (+)-curve is similar to the (○)-curve, but the proposed multiple-refinement-level encoder is used. The simulation results confirm that the robustness of QLIC is retained when multiple refinement levels per subsource are used.
(v) The ()-curve corresponds to multipass decoding combined with the proposed multiple-refinement encoder. Its multipass decoding performance is similar to that of the ()-curve, showing that the multiple-refinement-level approach outperforms the typical multipass decoding approach.

5. Conclusion

In this paper, we propose to efficiently remove the redundant bit-planes for spectrally efficient linear index coding of images. Further, the bit-planes are arranged so as to preserve their significance; hence, similar robustness is achieved even at the higher spectral efficiency. Multipass decoding is then used to iteratively decode the bit-planes. We show that multipass decoding provides a larger gain when the extrinsic information is drawn from selected bit-planes only.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant 61471022 and by NSAF under Grant U1530117.