Research Article | Open Access

# Multiband CCD Image Compression for Space Camera with Large Field of View

**Academic Editor:** Shiping Lu

#### Abstract

A compression encoder for a space multiband CCD camera must offer low complexity, high robustness, and high performance, because the captured image data are precious and because the encoder usually operates on a satellite where resources such as power, memory, and processing capacity are limited. However, traditional compression approaches, such as JPEG2000, 3D transforms, and PCA, have high complexity, while the Consultative Committee for Space Data Systems Image Data Compression (CCSDS-IDC) algorithm loses about 2 dB of average PSNR compared with JPEG2000. In this paper, we propose a low-complexity compression algorithm based on the deep coupling of posttransforms in the wavelet domain, compressive sensing, and distributed source coding. Our algorithm integrates these three low-complexity, high-performance approaches in a deeply coupled manner to remove spatial redundancy, spectral redundancy, and bit-information redundancy. Experimental results on multiband CCD images show that the proposed algorithm significantly outperforms the traditional approaches.

#### 1. Introduction

Space multiband charge-coupled device (CCD) cameras are moving toward high spatial resolution, high spectral resolution, high radiometric resolution, a large field of view, and wide coverage. As a result, the number of CCDs in the mosaic, the read-out rate, the quantization depth of the A/D converter, and the average shooting time all grow, so the amount of digitized image data increases sharply. However, the maximum data rate of the on-board downlink channel is limited, as is the capacity of the flash-memory-based solid-state recorder (SSR) on the satellite. It is therefore necessary to compress the on-board CCD images.

A compression encoder for a space multiband CCD camera must offer low complexity, high robustness, and high performance, because the captured image data are precious and because the encoder usually operates on a satellite where resources such as power, memory, and processing capacity are limited. Reference [1] surveys the on-board image compression algorithms used in more than 40 space missions, classified by the underlying compression theory; the statistics are shown in Figure 1. More than half of the on-board image compression algorithms are based on a transform approach, and about 80% are based on a prediction method.

The transform-based approach usually has an image transform stage, such as the discrete cosine transform (DCT), the discrete wavelet transform (DWT) [2], or the Karhunen-Loeve transform (KLT) [3]. The transform coefficients are then coded by compression algorithms such as EZW [4], SPIHT, SPECK, EBCOT [5], or the bit plane encoder (BPE). The typical transform-based algorithms are JPEG2000 [6] and the Consultative Committee for Space Data Systems Image Data Compression standard (CCSDS-IDC) [7]. In JPEG2000, EBCOT removes the redundancy between wavelet coefficients very efficiently, which makes JPEG2000 the best-performing encoder among existing image compression algorithms; however, it is too complex to implement in space missions. The CCSDS-IDC algorithm is composed of a DWT and the BPE. The BPE is a zero-tree encoder that exploits the structure of spatial orientation trees in the bit planes: when parent coefficients are insignificant, their descendants tend to be insignificant as well, so the bit planes contain large zero regions, and taking full advantage of these regions improves coding efficiency. CCSDS-IDC offers progressive coding and fault tolerance, and the BPE has low complexity and a small memory footprint, which makes it well suited to on-board cameras. However, it loses about 2 dB of average PSNR compared with JPEG2000. In addition, CCSDS-IDC handles only 2D images, so it cannot exploit the spectral redundancy of 3D images.

Prediction-based approaches are widely used for 3D image (e.g., multispectral and hyperspectral) compression; hundreds of predictors covering 1D, 2D, and 3D data have been proposed. For on-board applications, the main prediction methods are DPCM, adaptive DPCM, CCSDS-LDC, CCSDS-MHD, JPEG-LS, and LUT [8]. Prediction-based approaches are simple and easy to realize in hardware, but their compression performance is limited and they are not robust against bit errors. On-board compression based on prediction alone is therefore unusual; prediction is generally combined with other compression approaches, as in CCSDS-IDC, which uses DPCM to encode the DC coefficients.
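To make the DPCM idea concrete, the following minimal sketch (illustrative only, not the paper's implementation) encodes a sequence of DC coefficients as residuals against the previous sample and decodes them losslessly:

```python
def dpcm_encode(samples, predictor=lambda prev: prev):
    """Minimal DPCM: transmit the difference between each sample and a
    prediction from the previously seen sample (lossless in this sketch)."""
    residuals, prev = [], 0
    for s in samples:
        residuals.append(s - predictor(prev))
        prev = s
    return residuals

def dpcm_decode(residuals, predictor=lambda prev: prev):
    # invert the encoder: add each residual back onto the running prediction
    out, prev = [], 0
    for r in residuals:
        prev = predictor(prev) + r
        out.append(prev)
    return out

dc = [100, 102, 101, 105, 104]      # hypothetical DC coefficients
res = dpcm_encode(dc)               # small residuals are cheaper to entropy-code
```

Because consecutive DC coefficients are highly correlated, the residuals concentrate near zero, which is what makes the subsequent entropy coding effective.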

Recently, two low-complexity compression approaches have appeared: distributed source coding (DSC) [9] and compressive sensing (CS) [10]. Both share the property of shifting algorithmic complexity from the encoder to the decoder. Unlike 3D transform-based multispectral image compressors, which exploit the dependency between sources at the encoder, DSC exploits that dependency at the decoder to improve compression performance. CS is a new sampling theory that breaks the Nyquist sampling limit: the original signal can be reconstructed from a small number of measurements. Many DSC-based compression schemes have been proposed. Pan et al. propose a low-complexity DCT-based DSC compression method for hyperspectral images, and Cheung et al. propose wavelet-based predictive Slepian-Wolf coding for hyperspectral images. A number of CS-based approaches have also been proposed: Deng et al. proposed a wavelet-based CS compression method, and Meriño et al. proposed robust compression using compressive sensing. These schemes can be implemented easily in both software and hardware, which makes them practical for on-board applications. To date, however, DSC and CS have only been combined with traditional approaches.

Peyré and Mallat [11] propose another low-complexity compression approach based on the posttransform (PT), a transform applied to blocks of wavelet coefficients. It removes the residual redundancy between wavelet coefficients and thereby improves compression performance. Moreover, because it processes 16-coefficient blocks using only dot products, it requires little memory and is simple to implement in hardware. To adapt it to on-board applications, Delaunay et al. [12] proposed a compression scheme that uses the BPE from the CCSDS recommendation to code the posttransformed coefficients. However, the posttransform destroys the zero-tree structure, so the basis vectors must be ordered and sorted by their energy contribution, which adds complexity and lowers compression performance.

In this paper, we propose a low-complexity compression algorithm based on the deep coupling of posttransforms in the wavelet domain, compressive sensing, and distributed source coding. Our algorithm integrates these three low-complexity, high-performance approaches in a deeply coupled manner to remove spatial redundancy, spectral redundancy, and bit-information redundancy.

The remainder of this paper is organized as follows. Section 2 presents the proposed low-complexity compression algorithm based on deep coupling. Section 3 reports the experimental results, and Section 4 concludes the paper.

#### 2. Proposed Scheme

To balance computational complexity against compression performance, we propose a low-complexity compression algorithm for multiband CCD images based on deep coupling. Posttransforms in the wavelet domain, compressive sensing, and distributed source coding are integrated in a deeply coupled manner to remove spatial redundancy, spectral redundancy, and bit-information redundancy.

##### 2.1. Proposed Algorithm Architecture

Figure 2 shows the proposed deep coupling architecture. The key band is processed by deep coupling between the posttransform and CS, and it is used to reconstruct the other bands at the decoder. The other bands are processed by deep coupling among the posttransform, CS, and DSC. First, each band is decomposed by the wavelet transform and the posttransform. The posttransformed coefficients are then sampled by the CS measurement matrix to obtain the CS measurements. Finally, the CS measurements are encoded by a Slepian-Wolf (SW) encoder. In our scheme, we choose a QC-LDPC code as the error-correcting code that realizes the DSC strategy: the QC-LDPC-based SW coder encodes the bits of the CS measurements to generate check bits, and only the check bits enter the compressed bit stream.
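The per-band transform path can be sketched as follows. This is an illustration only, not the authors' code: the one-level Haar filter and the Hadamard posttransform basis are placeholder choices standing in for the paper's DWT and dictionary.

```python
import numpy as np

def haar_2d(img):
    """One level of a 2-D Haar DWT: returns the LL subband and the
    (LH, HL, HH) high-frequency subbands (placeholder for the paper's DWT)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def hadamard(n):
    # Sylvester construction of an orthonormal Hadamard basis (n a power of 2)
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h / np.sqrt(n)

def posttransform_subband(subband, basis):
    """Apply the posttransform to each 4x4 block of a subband:
    only dot products with the 16 basis vectors are needed."""
    h, w = subband.shape
    out = np.empty_like(subband, dtype=float)
    for i in range(0, h, 4):
        for j in range(0, w, 4):
            block = subband[i:i+4, j:j+4].reshape(16)
            out[i:i+4, j:j+4] = (basis @ block).reshape(4, 4)
    return out

rng = np.random.default_rng(0)
band = rng.integers(0, 256, size=(16, 16)).astype(float)
ll, highs = haar_2d(band)
pt_highs = [posttransform_subband(s, hadamard(16)) for s in highs]
```

Because each posttransformed block depends only on its own 16 coefficients, this stage needs no large buffers, which is the property the paper relies on for on-board use.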

At the decoder, the check bits and the side information are combined into a new code word to correct errors, so the CS measurements of the current band can be recovered. The side information is obtained by prediction from the previously reconstructed bands. The recovered measurements are then passed to a recovery algorithm based on orthogonal matching pursuit (OMP) to obtain the reconstructed posttransformed coefficients, which are inverse posttransformed to obtain the reconstructed wavelet coefficients. Finally, the reconstructed wavelet coefficients are inverse wavelet transformed to obtain the reconstructed band.
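The OMP recovery step can be sketched as below. This is a textbook OMP implementation for illustration, not the authors' code; the dimensions, sparsity level, and Gaussian sensing matrix are arbitrary test values.

```python
import numpy as np

def omp(phi, y, k, tol=1e-10):
    """Orthogonal matching pursuit: greedily pick the column of phi most
    correlated with the residual, then re-fit all chosen atoms by least
    squares, until k atoms are chosen or the residual vanishes."""
    m, n = phi.shape
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        corr = np.abs(phi.T @ residual)
        corr[support] = 0.0                 # never re-pick a chosen atom
        support.append(int(np.argmax(corr)))
        sol, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
        residual = y - phi[:, support] @ sol
        if np.linalg.norm(residual) < tol:
            break
    x[support] = sol
    return x

rng = np.random.default_rng(1)
n, m, k = 64, 24, 3
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
phi = rng.normal(size=(m, n)) / np.sqrt(m)  # hypothetical sensing matrix
y = phi @ x_true                            # noiseless measurements
x_hat = omp(phi, y, k)
```

With noiseless measurements and a sufficiently sparse signal, OMP recovers the coefficients exactly; all of the iterative work sits at the decoder, consistent with the complexity-shifting idea of CS.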

##### 2.2. Deep Coupling between PT and CS

Figure 3 shows the proposed deep coupling between the posttransform and CS. An image is decomposed by a two-dimensional DWT into three levels of high-frequency subbands and one low-frequency subband. Each high-frequency subband is posttransformed. In our algorithm, the dictionary contains two bases, one suited to high bit rates and one to low bit rates, and the best posttransform is selected by a norm-based criterion. The three levels of posttransformed high-frequency subbands are sampled by three CS measurement matrices. The resulting CS measurements are quantized and entropy coded; the compressed bits are grouped into packets and sent to the decoder over the channel. The size of each measurement matrix is set by an information evaluation, and the basis used is selected by a bit evaluation.

To obtain a low-complexity yet efficient posttransform, we use a very simple dictionary composed of the Hadamard basis and the DCT basis. At low bit rates, the bit rate is governed by the number of nonzero posttransformed coefficients; at high bit rates, it is governed by the entropy of the image and the quantizer step. Based on these two rate models, we propose a norm-based criterion for selecting the best posttransform basis.
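The two-basis dictionary and the norm-based selection can be sketched as follows. This is our reading of the criterion, not the paper's code: we assume the low-rate cost counts significant coefficients (an l0-style proxy) and the high-rate cost is the l1 norm; the significance threshold is an arbitrary choice.

```python
import numpy as np

def hadamard16():
    # orthonormal 16x16 Hadamard basis (Sylvester construction)
    h = np.array([[1.0]])
    while h.shape[0] < 16:
        h = np.block([[h, h], [h, -h]])
    return h / 4.0

def dct16():
    # orthonormal 16-point DCT-II basis
    n = 16
    k = np.arange(n)
    c = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0] *= 1.0 / np.sqrt(2.0)
    return c * np.sqrt(2.0 / n)

def best_basis(block16, bases, low_rate, thresh=1e-6):
    """Norm-based selection: at low rate, minimize the number of significant
    posttransformed coefficients; at high rate, minimize the l1 norm."""
    costs = []
    for b in bases:
        c = b @ block16
        costs.append(float(np.sum(np.abs(c) > thresh)) if low_rate
                     else float(np.sum(np.abs(c))))
    return int(np.argmin(costs))

dico = [hadamard16(), dct16()]
# a block aligned with one Hadamard basis vector is cheapest under Hadamard
block = hadamard16()[5] * 8.0
```

Selection costs only two sets of 16 dot products per block, keeping the encoder's complexity low.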

At low bit rates, the best posttransform basis is the one that minimizes the number of significant posttransformed coefficients.

At high bit rates, the best posttransform basis is the one that minimizes the norm of the posttransformed coefficients.

Because different subbands carry different amounts of image information, the four CS sensing matrices have different sizes. The sensing process multiplies the posttransformed coefficients of the low-frequency subband and of each high-frequency subband by the corresponding sensing matrix to produce the CS measurements of that subband.

The sizes of the CS sensing matrices are determined by the bit rate. Since the best-basis selection reflects the image information, the bit rate allocated to each subband can be evaluated from it: each subband's share of the total bit rate is proportional to its amount of information, accumulated over its posttransform blocks.
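One way to realize this per-subband sizing is sketched below; as a stand-in for the paper's information evaluation, we assume (hypothetically) that a subband's information share is its share of total absolute coefficient mass, and we use Bernoulli plus/minus-1 sensing matrices.

```python
import numpy as np
rng = np.random.default_rng(2)

def allocate_measurements(subbands, total_m):
    """Split a total measurement budget across subbands in proportion to
    their share of |coefficient| mass (a stand-in information measure)."""
    info = np.array([np.abs(s).sum() for s in subbands], dtype=float)
    share = info / info.sum()
    return np.maximum(1, np.round(share * total_m).astype(int))

def sense(subbands, m_list):
    # one Bernoulli +/-1 sensing matrix per subband, sized by its budget
    out = []
    for s, m in zip(subbands, m_list):
        x = np.asarray(s, dtype=float).ravel()
        phi = rng.choice([-1.0, 1.0], size=(m, x.size))
        out.append(phi @ x)
    return out

# four synthetic subbands with decreasing energy (LL down to HH)
subbands = [rng.normal(scale=sc, size=(8, 8)) for sc in (10.0, 3.0, 1.0, 0.3)]
m_list = allocate_measurements(subbands, total_m=64)
measurements = sense(subbands, m_list)
```

Subbands carrying more information receive more measurement rows, so the measurement budget tracks the bit-rate allocation described above.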

The basis used from the dictionary is determined by the CS measurements: a prediction approach estimates the amount of information in the CS measurements from their norms, and this estimate determines which basis of the dictionary is selected.

##### 2.3. Deep Coupling among PT, CS, and DSC

Figure 4 shows the proposed deep coupling among the posttransform, CS, and DSC.

The subset of bands that have already been encoded is used to compute the linear predictor coefficients. The current band is decomposed by the DWT and the posttransform to obtain its posttransformed coefficients, and the predictor coefficients are applied to the reference bands' posttransformed coefficients to obtain the predicted posttransformed band. Both the current and the predicted posttransformed coefficients are sampled by the CS measurement matrix to obtain two sets of measurements, which are used to compute the crossover probability. The crossover probability determines the parity-check matrix and the posttransform basis used, and, together with the best-basis selection, it determines the size of the measurement matrix.

To make the current band and its prediction as similar as possible, we use a second-order filter to compute the prediction coefficients. The prediction is formed from the pixels colocated with the current pixel in the two reference bands, centered by the expectation values of the corresponding random variables. Under the minimum mean-square-error criterion, the statistically optimal prediction coefficients are obtained by solving a Wiener-Hopf equation. Defining the context windows of the current band and the previous bands as shown in Figure 5, the required statistics (means, autocorrelations, and cross-correlations) are approximated by sample averages over the context window; the remaining parameters are calculated likewise.
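A minimal sketch of this second-order Wiener-Hopf fit, assuming synthetic reference bands with a known linear relation (the 0.7 and 0.2 coefficients and the noise level are arbitrary test values, not the paper's data):

```python
import numpy as np
rng = np.random.default_rng(3)

# toy "previous bands" x1, x2 and a current band y that is nearly a
# linear combination of them, mimicking interband correlation
n = 5000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 0.7 * x1 + 0.2 * x2 + 0.01 * rng.normal(size=n)

def centered(v):
    return v - v.mean()

# second-order statistics estimated over the context window (here: all samples)
X = np.stack([centered(x1), centered(x2)])
R = X @ X.T / n                 # 2x2 autocorrelation matrix of the references
r = X @ centered(y) / n         # cross-correlation with the current band
a = np.linalg.solve(R, r)       # Wiener-Hopf normal equations: R a = r

# prediction with the mean offset restored
pred = a[0] * x1 + a[1] * x2 + (y.mean() - a[0] * x1.mean() - a[1] * x2.mean())
```

The solved coefficients approach the true generating weights as the context window grows, which is exactly why the predicted band is a usable side-information source for the SW decoder.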


The theoretical limit of the Slepian-Wolf bit rate is the conditional entropy H(X|Y), which determines the parity-check matrix of the QC-LDPC code. Modeling the correlation as a binary symmetric channel, the conditional entropy can be expressed as H(X|Y) = -p log2 p - (1 - p) log2(1 - p), where p is the crossover probability. To improve efficiency, we evaluate the crossover probability on a subsample. We implement the SW encoder with a QC-LDPC code. For fast and efficient coding, following [13], the parity-check matrix is assembled from circulant blocks: each block is either a permutation matrix, obtained by cyclically shifting the identity matrix to the right by a given number of positions, or a zero matrix. The bit rate is determined by the block dimensions of the parity-check matrix; once the bit rate fixes the parity-check matrix, encoding proceeds. Since the elements of the proposed parity-check matrix are only 0 or 1 and encoding requires only simple additions and shifts, the QC-LDPC encoder is well suited to hardware implementation.
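The two pieces described above, the binary-entropy bound on the SW rate and the assembly of a QC-LDPC parity-check matrix from shifted identity blocks, can be sketched as follows; the shift values are arbitrary examples, not the paper's design.

```python
import numpy as np

def binary_entropy(p):
    """H(X|Y) for a binary symmetric correlation channel with crossover p:
    the Slepian-Wolf lower bound on the check-bit rate."""
    if p in (0.0, 1.0):
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def circulant_perm(m, s):
    # identity matrix cyclically shifted s columns to the right
    return np.roll(np.eye(m, dtype=int), s, axis=1)

def qc_parity_check(shifts, m):
    """Assemble H from blocks: each entry of `shifts` is a shift value for a
    circulant permutation block, or None for an all-zero block."""
    rows = []
    for row in shifts:
        blocks = [circulant_perm(m, s) if s is not None
                  else np.zeros((m, m), dtype=int)
                  for s in row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# a 2x3 block layout with 4x4 circulants: an 8x12 parity-check matrix
H = qc_parity_check([[0, 1, None], [2, None, 3]], m=4)
```

Because each block is fully specified by a single shift value, only the shift table needs to be stored on board, and encoding reduces to cyclic shifts and XORs.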

#### 3. Experimental Results

As is usual in the literature, we use Quickbird multiband remote sensing images, with each group of images used to evaluate the proposed deep coupling compression scheme. The bit depth of each pixel is 8 bpp (bits per pixel), and the compression rate is set to 1 bpp. All algorithms are implemented in MATLAB and run on a PC with a 1.87 GHz E8400 CPU and 1 GB of memory. Figure 6 shows the four original and reconstructed bands. The original and reconstructed images are almost indistinguishable because the proposed compression algorithm achieves a high signal-to-noise ratio.

**(a) Original band 1**

**(b) Original band 2**

**(c) Original band 3**

**(d) Original band 4**

**(e) Reconstructed band 1**

**(f) Reconstructed band 2**

**(g) Reconstructed band 3**

**(h) Reconstructed band 4**

To objectively evaluate the performance of the proposed deep coupling-based compression scheme, extensive experiments were carried out on a number of multispectral data sets at various coding bit rates. In the first part of the experiments, we compare the proposed coder against AT-3DSPIHT [14], 3D-DWT [15], 3D-PCA [16], 3D-SPECK [17], and SA-DCT [18], each implemented independently. The quality of the decoded images is assessed by rate-distortion results measured with the overall SNR, SNR = 10 log10(P / MSE), where P and MSE denote the power of the original image and the mean squared error, respectively. The PSNR comparisons are shown in Figure 7. Thanks to its full use of the posttransform, CS, and DSC, the proposed scheme achieves the best compression performance, with an average PSNR gain of 0.3 to 1.3 dB over the other five codecs at bit rates from 0.25 to 2.0 bpp. Overall, the proposed scheme shows excellent lossy compression performance and delivers better results than commonly used coders.
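The overall SNR used for these rate-distortion curves can be computed directly from its definition, as in this short sketch:

```python
import numpy as np

def overall_snr(original, reconstructed):
    """SNR = 10*log10(signal power / MSE), with power taken as the mean
    squared value of the original image."""
    orig = np.asarray(original, dtype=float)
    rec = np.asarray(reconstructed, dtype=float)
    mse = np.mean((orig - rec) ** 2)
    power = np.mean(orig ** 2)
    return 10.0 * np.log10(power / mse)

# example: a constant 100-valued image reconstructed with error 1 everywhere
orig = np.full((4, 4), 100.0)
rec = orig + 1.0
snr = overall_snr(orig, rec)
```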


#### 4. Conclusion

In this paper, we proposed a low-complexity compression algorithm based on the deep coupling of posttransforms in the wavelet domain, compressive sensing, and distributed source coding. The algorithm integrates three low-complexity, high-performance approaches in a deeply coupled manner to remove spatial redundancy, spectral redundancy, and bit-information redundancy. Experimental results on multiband CCD images show that the proposed algorithm significantly outperforms the traditional approaches.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The project is sponsored by the China Postdoctoral Science Foundation (no. 2014M550720), the National High Technology Research and Development Program of China (863 Program) (no. 2012AA121503, no. 2012AA121603), and China NSF Projects (no. 61377012, no. 60807004).

#### References

- [1] G. Yu, T. Vladimirova, and M. N. Sweeting, “Image compression systems on board satellites,” *Acta Astronautica*, vol. 64, no. 9-10, pp. 988–1005, 2009.
- [2] A. Aggoun, “Compression of 3D integral images using 3D wavelet transform,” *Journal of Display Technology*, vol. 7, no. 11, pp. 586–592, 2011.
- [3] I. Blanes and J. Serra-Sagrista, “Cost and scalability improvements to the Karhunen-Loêve transform for remote-sensing image coding,” *IEEE Transactions on Geoscience and Remote Sensing*, vol. 48, no. 7, pp. 2854–2863, 2010.
- [4] J. M. Shapiro, “An embedded wavelet hierarchical image coder,” in *Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing*, vol. 4, pp. 657–660, 1992.
- [5] D. Taubman, “High performance scalable image compression with EBCOT,” *IEEE Transactions on Image Processing*, vol. 9, no. 7, pp. 1158–1170, 2000.
- [6] D. S. Taubman and M. W. Marcellin, *JPEG2000: Image Compression Fundamentals, Standards, and Practice*, Kluwer Academic Publishers, Norwell, Mass, USA, 2001.
- [7] The Consultative Committee for Space Data Systems, “Image data compression,” Blue Book, Recommended Standard CCSDS-122.0-B-1, Washington, DC, USA, 2005.
- [8] J. Mielikainen, “Lossless compression of hyperspectral images using lookup tables,” *IEEE Signal Processing Letters*, vol. 13, no. 3, pp. 157–160, 2006.
- [9] J. Zhang, H. Li, and W. C. Chang, “Distributed coding techniques for onboard lossless compression of multispectral images,” in *Proceedings of the IEEE International Conference on Multimedia and Expo (ICME '09)*, pp. 141–144, New York, NY, USA, July 2009.
- [10] B. Han, F. Wu, and D. P. Wu, “Image representation by compressive sensing for visual sensor networks,” *Journal of Visual Communication and Image Representation*, vol. 21, no. 4, pp. 325–333, 2010.
- [11] G. Peyré and S. Mallat, “Discrete bandelets with geometric orthogonal filters,” in *Proceedings of the IEEE International Conference on Image Processing (ICIP '05)*, pp. 65–68, September 2005.
- [12] X. Delaunay, M. Chabert, V. Charvillat, G. Morin, and R. Ruiloba, “Satellite image compression by directional decorrelation of wavelet coefficients,” in *Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '08)*, pp. 1193–1196, Las Vegas, Nev, USA, April 2008.
- [13] S. Myung, K. Yang, and J. Kim, “Quasi-cyclic LDPC codes for fast encoding,” *IEEE Transactions on Information Theory*, vol. 51, no. 8, pp. 2894–2901, 2005.
- [14] D. Ma, C. Ma, and C. Luo, “A compression algorithm of AT-3DSPIHT for LASIS's hyperspectral image,” *Acta Optica Sinica*, vol. 30, no. 2, pp. 378–381, 2010.
- [15] B. Das and S. Banerjee, “Data-folded architecture for running 3D DWT using 4-tap Daubechies filters,” *IEE Proceedings Circuits, Devices and Systems*, vol. 152, no. 1, pp. 17–24, 2005.
- [16] Q. Du and W. Zhu, “Integration of PCA and JPEG2000 for hyperspectral image compression,” in *Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XIII*, vol. 6565 of *Proceedings of SPIE*, pp. 1–8, 2007.
- [17] J. Wu, K. Jiang, Y. Fang, L. Jiao, and G. Shi, “Hyperspectral image compression using distributed source coding and 3D SPECK,” in *Multispectral Image Acquisition and Processing*, vol. 7494 of *Proceedings of SPIE*, 2009.
- [18] A. Kinane, V. Muresan, and N. O'Connor, “An optimal adder-based hardware architecture for the DCT/SA-DCT,” in *Visual Communications and Image Processing*, vol. 5960 of *Proceedings of SPIE*, July 2005.

#### Copyright

Copyright © 2014 Jin Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.