The Scientific World Journal
Volume 2014 (2014), Article ID 738735, 8 pages
Research Article

Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

Jin Li,1,2 Fei Xing,1,2 Ting Sun,1,2 and Zheng You1,2

1Department of Precision Instrument, The State Key Laboratory of Precision Measurement Technology and Instruments, Tsinghua University, Beijing 100084, China
2Collaborative Innovation Center for Micro/Nano Fabrication, Device and System, China

Received 17 April 2014; Revised 11 June 2014; Accepted 19 June 2014; Published 7 July 2014

Academic Editor: Brajesh K. Kaushik

Copyright © 2014 Jin Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually runs on a satellite, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (such as the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by the bit plane encoder (BPE). Finally, the BPE is merged, in a deeply coupled way, with a Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches.

1. Introduction

Remote sensing multispectral images are obtained by an optical multispectral camera carried on a satellite, which images multiple contiguous narrow bands of the same objects [1, 2].

In general, there are from a few to dozens of bands in the wavelength range from the visible to the near-infrared spectrum, and the spectral resolution of multispectral imaging is on the order of 0.1λ; examples include the multispectral images produced by the PLEIADES, IKONOS, and QuickBird satellites. Such images capture abundant spatial and spectral information about the measured objects simultaneously. Multispectral imaging has been widely applied in many fields, such as scientific research, airborne and spaceborne remote sensing, medical devices, environment monitoring, geological survey, agricultural monitoring, and military applications [3, 4]. Another efficient method for collecting images of an object in a series of spectral windows is hyperspectral imaging. Hyperspectral images have narrower but more numerous bands. Typical applications of hyperspectral imaging likewise include scientific research, airborne and spaceborne remote sensing, and military reconnaissance [5, 6]. In general, hyperspectral images consist of more finely divided spectral channels than multispectral images but have lower spatial resolution. Multispectral images can also refer to a set of images taken at widely separated parts of the electromagnetic spectrum. In this paper, we mainly investigate how to compress multispectral images with an algorithm of low complexity, high robustness, and high performance, designed around the characteristics of multispectral images.

With the development of remote sensing multispectral cameras, performance requirements such as field of view, resolution, and swath width are continuously increasing. At the same time, photodetector readout rates and ADC quantization bit depths are also growing, so the amount of captured digital image data is increasing rapidly. However, because of the limited solid-state storage capacity on board and the restricted satellite downlink channel bandwidth, it is difficult to accommodate the huge amount of multispectral image data. Therefore, multispectral image data must be compressed.

Most multispectral image compression algorithms are based on a 3D transform [7]. A transform-based approach usually has an image transform stage; candidate transforms include the block-based DCT [6], the multiresolution DWT [8], post-wavelet transforms, and the KLT or PCA [9]. The transform coefficients are then coded by compression algorithms such as EZW [10], SPIHT, SPECK, EBCOT [8], and the bit plane encoder (BPE) recommended by CCSDS-IDC. The representative transform-based algorithms today are JPEG2000 [11] and the CCSDS-IDC algorithm for remote sensing panchromatic images [12]. In JPEG2000, EBCOT removes the redundancy between wavelet transform coefficients very efficiently, which makes JPEG2000 the best-performing encoder among existing image compression algorithms. However, it is too complex to be implemented in space missions. The CCSDS-IDC algorithm is composed of a DWT and the BPE. The BPE, a zero-tree encoder, exploits the structure of spatial orientation trees in the bit planes: when the children coefficients are insignificant, the grandchildren coefficients are usually insignificant as well. This zero-tree property leaves large zero areas in the bit planes, and coding efficiency can be improved by taking full advantage of these zero areas. CCSDS-IDC offers progressive coding and fault tolerance; moreover, the BPE has low complexity and occupies little storage, which is very suitable for on-board camera applications. However, its average PSNR is about 2 dB lower than that of JPEG2000. In addition, CCSDS-IDC is only suitable for 2D images and cannot exploit the spectral redundancy of 3D image data.
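The zero-tree property described above can be sketched in a few lines. This is a toy illustration of the idea only, not the CCSDS BPE itself: when a coefficient and all of its descendants are insignificant at the current bit-plane threshold, the whole subtree can be coded with a single symbol instead of one symbol per coefficient.

```python
# Toy illustration of the zero-tree idea used by zero-tree encoders:
# a subtree whose coefficients are all below the threshold can be
# represented by one "zero tree" symbol.

def is_zerotree(tree, threshold):
    """tree: (coefficient, [child subtrees]); True if every coefficient
    in the subtree is insignificant (|value| < threshold)."""
    value, children = tree
    if abs(value) >= threshold:
        return False
    return all(is_zerotree(c, threshold) for c in children)

# A parent with two children, each with two grandchildren (made-up values).
subtree = (3, [(2, [(1, []), (0, [])]),
               (1, [(2, []), (1, [])])])

print(is_zerotree(subtree, 4))   # every magnitude < 4 -> True
print(is_zerotree(subtree, 2))   # the root magnitude 3 >= 2 -> False
```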

In this paper, we propose an effective multispectral image compression method based on DSC combined with CCSDS-IDC in a deeply coupled way. The proposed algorithm has low complexity, high robustness, and good performance, and is therefore well suited to the requirements of the new generation of wide-field, high-resolution multispectral cameras.

The remainder of this paper is organized as follows. In Section 2, we present the proposed CCSDS-DSC compression algorithm for multispectral images. In Section 3, we perform verification experiments on several groups of multispectral images and analyze the results. Section 4 concludes the paper and outlines further modifications that could be incorporated.

2. Proposed Scheme

To balance computational complexity against compression performance, we propose a multispectral image compression scheme that reduces computational complexity and is easy to implement in hardware without sacrificing spatial or spectral information. We integrate an improved CCSDS-IDC with distributed source coding in a deeply coupled way to remove spatial redundancy, spectral redundancy, and bit-plane redundancy. The proposed compression encoder is less complex than traditional ones, which makes it suitable for on-board camera applications.

2.1. Improved CCSDS-IDC Algorithm

In the CCSDS-IDC algorithm, a 3-level 9/7 lifting 2D DWT is performed to remove spatial redundancy. The 2D DWT decomposes the image into a lower-resolution approximation subband and detail subbands. The decomposition can also be viewed as multilevel low-pass and high-pass filtering: at each level, the low-resolution subband produced by the previous level is processed by the high-pass filter to produce wavelet (detail) coefficients and by the low-pass filter to produce the approximation, called the scaling coefficients. However, after each band of a remote sensing multispectral image is transformed by the 2D DWT, residual directional spatial correlation between wavelet coefficients within small adjacent areas still exists (see Figure 1). These residual directional correlations produce a large number of large-amplitude high-frequency coefficients, which are disadvantageous for the subsequent coding.

Figure 1: The wavelet transform of high resolution remote sensing image with a zoom on some coefficients.
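The multilevel low-pass/high-pass filtering described above can be sketched in one dimension. A Haar filter is used here purely for brevity; CCSDS-IDC actually specifies the 9/7 filter, and the 2D transform filters rows and columns.

```python
# One level of a 1-D wavelet split, illustrating the low-pass/high-pass
# decomposition. Haar averaging/differencing stands in for the 9/7 filter.

def haar_level(signal):
    """Split an even-length signal into scaling (low-pass) and
    wavelet (high-pass) coefficients."""
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high

def dwt_multilevel(signal, levels):
    """Re-filter the low-pass band at each level, as in a 3-level DWT."""
    subbands = []
    for _ in range(levels):
        signal, high = haar_level(signal)
        subbands.append(high)       # detail coefficients of this level
    subbands.append(signal)         # final approximation (scaling) band
    return subbands

bands = dwt_multilevel([8, 6, 4, 4, 2, 2, 6, 8], 3)
print(bands)   # three detail bands plus one scaling coefficient
```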

In this paper, we apply a Hadamard transform (HT) to the wavelet coefficients to remove the residual redundancy between adjacent wavelet coefficients. Following the CCSDS-IDC algorithm, three levels of two-dimensional wavelet decomposition produce 10 subbands, denoted LL3 and HLi, LHi, and HHi for levels i = 1, 2, 3. The BPE encodes the wavelet-transformed coefficients using blocks as the basic processing unit. Each block is composed of 64 coefficients, one DC coefficient and 63 AC coefficients, as shown in Figure 2. The DC coefficient is one of the scaling coefficients from the deepest decomposition level, and the 63 AC coefficients are wavelet coefficients from the three levels. Note that each block corresponds exactly to a small adjacent area in the original image.

Figure 2: A coefficient block of 64 coefficients drawn from different levels; the 64 coefficients are all of the decomposed coefficients of one small adjacent area in the original band of the multispectral image.

In this paper, we apply the HT to the subband portions of each single block. In each grandchildren subblock (16 coefficients), a 16-point HT is performed; in each children subblock (4 coefficients), a 4-point HT is performed. Each wavelet coefficient subblock contains n = 4 or 16 coefficients and can be regarded as a vector of n elements. The HT orthonormal basis for subband j of a block is composed of n basis vectors, where j is the sequence number of the subband within a single wavelet coefficient block; stacking the basis vectors as columns gives the orthonormal basis matrix.

The HT-transformed coefficient block is obtained by projecting the wavelet coefficient block onto the HT basis. After each block is projected, both the HT-transformed block and the original coefficient block are available, and the better of the two is selected by an evaluation function based on the minimum rate-distortion (R-D) Lagrangian cost J = D + λR. Here D is the squared error (SE) between the HT-transformed coefficients and their values quantized at the chosen quantization step, λ is a Lagrangian multiplier determined from the quantization step, and R is the bit rate needed to encode the quantized coefficients plus the overhead needed to encode the index identifying the selected basis. The HT-transform algorithm for one wavelet coefficient block is shown in Algorithm 1.

Algorithm 1: The HT-transform algorithm for one block.
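The per-block selection described above can be sketched as follows, assuming a 4-point HT on a "children" subblock of 4 coefficients. The rate term here is only a crude proxy (the count of nonzero quantized values); the paper estimates the rate differently, and the quantization step and λ below are illustrative.

```python
# Sketch of the per-subblock Hadamard transform and the R-D selection
# between the original and the HT-transformed representation.

def hadamard4(block):
    """Orthonormal 4-point Hadamard transform of a length-4 block."""
    a, b, c, d = block
    return [(a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2]

def rd_cost(block, step, lam):
    """Lagrangian cost J = D + lambda * R for a quantized block."""
    quantized = [round(x / step) for x in block]
    distortion = sum((x - q * step) ** 2 for x, q in zip(block, quantized))
    rate = sum(1 for q in quantized if q != 0)   # crude proxy for bits
    return distortion + lam * rate

block = [10.0, 9.0, 11.0, 10.0]          # strongly correlated coefficients
ht = hadamard4(block)                    # energy compacts into one value
# Keep whichever representation has the lower Lagrangian cost.
best = min((block, ht), key=lambda b: rd_cost(b, step=2.0, lam=4.0))
print(best is ht)
```

For correlated coefficients the HT compacts the energy into few values, so the transformed block wins the cost comparison.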

2.2. CCSDS-DSC Strategy

Figure 3 shows the proposed CCSDS-DSC entropy encoding algorithm. The key spectral band is sparsely represented by the wavelet transform and the HT, and its transformed coefficients are encoded and decoded independently by the CCSDS BPE. The first spectral band is considered the key band and serves as the prediction reference for encoding and reconstructing the second spectral band; each subsequent spectral band is predicted from its previous band. A non-key band is first HT-transformed in the wavelet domain, and the HT-transformed coefficients are then encoded by an SW-BPE encoder. The SW-BPE encoder comprises the CCSDS BPE, which performs bit plane encoding, and a Slepian-Wolf encoder, which encodes the coded bit planes. The Slepian-Wolf encoder is implemented with a QC-LDPC code to generate the corresponding check bits, and only these check bits are transported in the compressed bit stream.

Figure 3: Proposed compression algorithm based on CCSDS-IDC merged to DSC by deep coupling way for multispectral images.

In decoding, the check bits combined with the side information are grouped into new code words to correct errors. The side information is obtained by BPE-encoding the HT-transformed coefficients of the current band as predicted from the reconstructed previous spectral band. The corrected code words and the DC coefficients are then decoded by the BPE to obtain the staging coefficients, which, together with the HT-transformed coefficients of the side information, undergo auxiliary reconstruction, the inverse HT, and the inverse wavelet transform to produce the reconstructed band. Since the bit stream after Slepian-Wolf encoding contains only the check bits and not the bit plane bits themselves, high compression performance can be reached.

In the proposed algorithm, the side information for the current band is obtained by prediction from the previous band. This is the same prediction as in the traditional prediction-based approach, but it is performed only in the decoder; in the encoder, each band is encoded independently. This removes the traditional spectral prediction from the encoder for multispectral images and shifts the complexity from the encoder to the decoder. The strategy reduces encoder complexity and is therefore very suitable for on-board applications.

Figure 4 shows the encoding process of SW-BPE.

Figure 4: The encoding process of SW-BPE.

In Figure 4, B denotes one bit plane of the image, B_i denotes the i-th bit plane, and b_{i,j} denotes the j-th bit in the i-th bit plane. The encoding steps are shown in Algorithm 2.

Algorithm 2: SW-BPE encoding process.

In order to preserve the zero-tree relationships across bit planes, we use a first-order linear predictor to compute the predicted coefficients as x̂ = a·y + b, where y denotes the pixels of the previous band at the same spatial locations as the current pixels x. In the least mean square error (LMMSE) sense for the prediction error, the statistically optimal prediction parameters are a = Cov(x, y)/Var(y) and b = E[x] − a·E[y], where E[·] denotes expectation. To compute a and b efficiently, we define two subimage windows, one in the current band and one in the previous band; the pixels in the two windows occupy the same spatial locations, as shown in Figure 5. The required statistics are then approximated by sample averages over the windows, from which both parameters can be calculated.

Figure 5: Two subimage windows whose pixels occupy the same spatial locations; (a) is from the previous band and (b) from the current band.
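The window-based estimation above can be sketched directly; the window contents below are illustrative values, chosen so that the two bands are exactly linearly related.

```python
# Minimal sketch of estimating the first-order linear predictor
# x_hat = a*y + b between two co-located windows, with a and b chosen
# in the least-mean-square sense over the window samples.

def lmmse_coeffs(prev_window, cur_window):
    """Return (a, b) minimizing the mean squared prediction error of
    cur ~ a * prev + b over the two co-located windows."""
    n = len(prev_window)
    mean_y = sum(prev_window) / n
    mean_x = sum(cur_window) / n
    cov = sum((y - mean_y) * (x - mean_x)
              for y, x in zip(prev_window, cur_window)) / n
    var = sum((y - mean_y) ** 2 for y in prev_window) / n
    a = cov / var
    b = mean_x - a * mean_y
    return a, b

prev_band = [10, 20, 30, 40]           # pixels from the previous band
cur_band = [25, 45, 65, 85]            # same locations, current band
a, b = lmmse_coeffs(prev_band, cur_band)
print(a, b)    # recovers the exact relation x = 2*y + 5
```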

In addition, we adopt a QC-LDPC code as the Slepian-Wolf coding strategy because QC-LDPC is easy to implement in hardware. For fast and efficient coding, the circulant submatrices are permutation matrices: P^s denotes the m×m identity matrix cyclically shifted s positions to the right, where s is a nonnegative integer, and an all-zero m×m matrix is used where a block is absent. The parity-check matrix is constructed by tiling such circulant permutation blocks; for a j×k array of blocks the code rate is R = 1 − j/k. The desired bit rate determines j and k, which in turn determine the constructed parity-check matrix, and coding then proceeds accordingly. Since the elements of the proposed parity-check matrix are only 0 or 1 and encoding requires only simple additions and shifts, the QC-LDPC encoder is well suited to hardware implementation.
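The construction above can be sketched as follows. The shift exponents in the example are illustrative, not taken from the paper, and the syndrome function shows the Slepian-Wolf step in which only H·x (mod 2) is transmitted.

```python
# Sketch of a QC-LDPC parity-check matrix tiled from circulant
# permutation blocks P^s (identity cyclically shifted s columns),
# plus the syndrome computation used for Slepian-Wolf encoding.

def perm(m, s):
    """m x m identity matrix cyclically shifted s positions right."""
    return [[1 if (i + s) % m == j else 0 for j in range(m)]
            for i in range(m)]

def qc_ldpc_h(shifts, m):
    """Tile H from a grid of shift exponents (-1 denotes a zero block)."""
    rows = []
    for shift_row in shifts:
        blocks = [perm(m, s) if s >= 0 else [[0] * m for _ in range(m)]
                  for s in shift_row]
        for r in range(m):
            rows.append([blocks[k][r][c] for k in range(len(shift_row))
                         for c in range(m)])
    return rows

def syndrome(h, x):
    """Slepian-Wolf encoding sends only s = H * x (mod 2)."""
    return [sum(hij * xj for hij, xj in zip(row, x)) % 2 for row in h]

h = qc_ldpc_h([[0, 1, 2], [2, 0, 1]], m=3)   # 6 x 9 parity-check matrix
bits = [1, 0, 1, 1, 1, 0, 0, 1, 0]           # one coded bit plane (toy)
print(syndrome(h, bits))
```

Because every block is a shifted identity, a hardware encoder needs only cyclic shifts and XORs, which is the property the paper relies on.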

3. Experimental Results

In order to verify the feasibility and evaluate the performance of the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) compression algorithm for multispectral images, we use several groups of multispectral remote sensing images. Each group is composed of three bands, and the bit depth of each pixel is 8 bpp (bits per pixel). In our algorithm, each band undergoes a 3-level two-dimensional Daubechies 9/7 DWT and the HT, and each HT-transformed coefficient is quantized to 8 bits. The whole algorithm is implemented in MATLAB on a personal computer (PC) configured with (1) two dual-core Intel CPUs, (2) a 3.6 GHz clock frequency, and (3) 4 GB of RAM.

First, we verify the feasibility of our algorithm using a test multispectral image of the Akashi Kaikyō Bridge. The test image has three bands, denoted B1, B2, and B3. The spectrum of B1 is 0.76–0.90 μm, of B2 is 0.63–0.69 μm, and of B3 is 0.52–0.60 μm. Figure 6 shows the three reconstructed bands of the test multispectral image. B3 is taken as the key band, and the compression encoding rate is set to 2.0 bpp. The reconstructed images show good visual quality because the proposed algorithm achieves a high signal-to-noise ratio.

Figure 6: Encoding multiband images at 2.0 bpp.

Second, we evaluate the performance of our algorithm. To evaluate it thoroughly, a series of remote sensing multispectral images with different texture characteristics is compressed at code rates from 0.25 to 4 bpp. We also compress these images with the CCSDS-IDC algorithm at the same code rates and compare the results with those of the proposed CCSDS-DSC approach. We use the PSNR to evaluate the rate-distortion performance of the two approaches objectively:

PSNR = 10 log10( (2^b − 1)^2 · M · N / Σ_{i,j} (x(i, j) − x̂(i, j))^2 ),

where b is the number of quantization bits of the original images, x(i, j) and x̂(i, j) denote the pixel values of the original and reconstructed images at position (i, j), and M and N denote the number of rows and columns of the multispectral images, respectively.
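The PSNR metric can be computed directly; the "original" and "reconstructed" pixel values below are synthetic, chosen so that every pixel is off by one (MSE = 1).

```python
# PSNR = 10*log10(MAX^2 / MSE) with MAX = 2**bits - 1, over flat
# pixel lists, as used to compare the two codecs.

import math

def psnr(original, reconstructed, bits=8):
    n = len(original)
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / n
    peak = (2 ** bits - 1) ** 2
    return 10 * math.log10(peak / mse)

orig = [100, 120, 140, 160]
recon = [101, 119, 141, 159]      # every pixel off by one, MSE = 1
print(round(psnr(orig, recon), 2))   # 48.13 dB for 8-bit data
```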

Figure 7 shows the comparison between our algorithm and the CCSDS-IDC approach. Because we apply the HT in the DWT domain of each band and merge CCSDS-IDC with DSC in a deeply coupled way, the proposed algorithm achieves better compression performance than CCSDS-IDC, improving the PSNR by 0.15~0.56 dB over the CCSDS-IDC codec at bit rates of 0.25~2.0 bpp.

Figure 7: The test result of multiband images.

In [13], Zhang et al. proposed the ViDis-model remote sensing compression algorithm. In their tests, the PSNR is 28.84 dB, 29.67 dB, 33.84 dB, and 38.83 dB at 0.4 bpp, 0.5 bpp, 1.0 bpp, and 2.0 bpp, respectively. They also compared their algorithm with JP2, DCQ, and GEO, improving on them by 0.04~0.76 dB. In [14], Valsesia and Magli used CCSDS-123 + Rate Control B and CCSDS-122 + POT + Reverse Waterfill to compress SPOT5 multispectral images at 1.0~4 bpp; their PSNR reaches 23.80~44.30 dB and 24.26~41.29 dB, respectively. In [15], an adaptive transform + CSPIHT reaches 32.09~46.10 dB at 0.25~2.5 bpp. In [16], JPEG2000 MC, 3D SPECK, and LVQ-SPECK reach 22.44~34.54 dB, 24.41~41.53 dB, and 22.86~39.51 dB, respectively. Compared with the above algorithms, our approach has the best compression performance.

The following compression times are only estimates, since our FPGA implementation of CCSDS-DSC is not optimized. They are based on the lossy compression of an image of size 3072 × 128 at 1.0 bpp on an FPGA EVM board with a system clock frequency of 88 MHz. Table 1 compares the complexity of the proposed multispectral compression algorithm with that of others.

Table 1: The results of complexity comparison.

From Table 1, the processing time of our algorithm is 0.013 μs/sample, giving a data throughput of 76.9 MSPS, which is higher than that of JPEG2000 and CCSDS-IDC; hence our approach has low complexity. In our project, the space CCD camera works at an orbit altitude of 500 km, a roll angle of −40°~+40°, latitudes of −70°~+70°, and a line frequency of 3.4378~7.2376 kHz. At this line frequency, capturing a 128 × 3072 image takes 70.74 ms, while compressing it with our approach takes 5.12 ms. Therefore, our approach can process four band images simultaneously, which meets the requirements of the project.

In addition, we implement the proposed algorithm on an XC2V6000-6FF1152 FPGA. The design language is Verilog HDL, the development platform is ISE 8.2, and the synthesis tool is XST. Table 2 shows the resource occupancy of our approach.

Table 2: The occupancy of resources.

From Table 2, the LUTs occupy 67%, the slices 70%, and the BRAM 80%. All indicators are below 95%, which meets the requirements of our project.

4. Conclusion

In this paper, a DSC combined with CCSDS-IDC compression algorithm is proposed for multispectral images. In our algorithm, each band is first sparsely represented by the DWT to obtain wavelet coefficients. The wavelet coefficients are then encoded by the BPE. Finally, the BPE is merged, in a deeply coupled way, with a DSC strategy based on QC-LDPC codes to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test the algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS) algorithm has better compression performance than traditional compression approaches in each band. To further reduce on-board encoding complexity, we will investigate using compressed sensing (CS) in place of CCSDS-IDC in our algorithm, which we expect to yield even better compression performance for multispectral images.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

The project is sponsored by the China Postdoctoral Science Foundation (no. 2014M550720), the National High Technology Research and Development Program of China (863 Program) (no. 2012AA121503 and no. 2012AA121603), and China NSF Projects (no. 61377012 and no. 60807004).

References
  1. M. Imani and H. Ghassemian, “Band clustering-based feature extraction for classification of hyperspectral images using limited training samples,” IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 8, pp. 1325–1329, 2014. View at Publisher · View at Google Scholar
  2. M. Pantazis, B. van Gorp, O. Robert et al., “Portable remote imaging spectrometer coastal ocean sensor: design, characteristics, and first flight results,” Applied Optics, vol. 53, no. 7, pp. 1363–1380, 2014. View at Publisher · View at Google Scholar
  3. K. Mukherjee, A. Bhattacharya, J. K. Ghosh, and M. K. Aroraa, “Comparative performance of fractal based and conventional methods for dimensionality reduction of hyperspectral data,” Optics and Lasers in Engineering, vol. 55, pp. 267–274, 2014. View at Publisher · View at Google Scholar
  4. R. Ji, Y. Gao, R. Hong et al., “Spectral-spatial constraint hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 3, pp. 1811–1824, 2014. View at Google Scholar
  5. H. Zhao, P. Zhou, Y. Zhang et al., “Development of a dual-path system for band-to-band registration of an acousto-optic tunable filter-based imaging spectrometer,” Optics Letters, vol. 38, no. 22, pp. 4120–4123, 2013. View at Publisher · View at Google Scholar
  6. A. Aggoun, “Compression of 3D integral images using 3D wavelet transform,” IEEE/OSA Journal of Display Technology, vol. 7, no. 11, pp. 586–592, 2011. View at Publisher · View at Google Scholar · View at Scopus
  7. S. Nichols, H. Kim, A. A. Humos, and H. J. Cho, “A performance evaluation on DCT and wavelet-based compression methods for remote sensing images based on image content,” in Proceedings of the 17th International Conference on Geoinformatics (Geoinformatics '09), vol. 1, pp. 454–458, August 2009. View at Publisher · View at Google Scholar · View at Scopus
  8. J. M. Shapiro, “An embedded wavelet hierarchical image coder,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '92), vol. 4, pp. 657–660, San Francisco, Calif, USA, March 1992. View at Publisher · View at Google Scholar
  9. I. Blanes and J. Serra-Sagrista, “Cost and scalability improvements to the Karhunen-Loêve transform for remote-sensing image coding,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 7, pp. 2854–2863, 2010. View at Publisher · View at Google Scholar · View at Scopus
  10. D. Taubman, “High performance scalable image compression with EBCOT,” IEEE Transactions on Image Processing, vol. 9, no. 7, pp. 1158–1170, 2000. View at Publisher · View at Google Scholar · View at Scopus
  11. L. Lu, L. Zhang, and T. Zhang, “JPEG2000-based optimization algorithm for effective compression display of remote sensing images,” Applied Mechanics and Materials, vol. 327, pp. 1602–1609, 2013. View at Publisher · View at Google Scholar · View at Scopus
  12. F. Garcia-Vilchez and J. Serra-Sagrista, “Extending the CCSDS recommendation for image data compression for remote sensing scenarios,” IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 7, pp. 3431–3445, 2009. View at Publisher · View at Google Scholar
  13. Y. Zhang, H. Cao, H. Jiang, and B. Li, “Visual distortion sensitivity modeling for spatially adaptive quantization in remote sensing image compression,” IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 4, pp. 723–727, 2013. View at Publisher · View at Google Scholar · View at Scopus
  14. D. Valsesia and E. Magli, “A novel rate control algorithm for on board predictive coding of multispectral and hyperspectral images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 10, pp. 6341–6355, 2014. View at Google Scholar
  15. U. Bayazit, “Adaptive spectral transform for wavelet-based color image compression,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 7, pp. 983–992, 2011. View at Publisher · View at Google Scholar · View at Scopus
  16. A. J. S. Dutra, W. A. Pearlman, and E. A. B. da Silva, “Successive approximation wavelet coding of AVIRIS hyperspectral images,” IEEE Journal on Selected Topics in Signal Processing, vol. 5, no. 3, pp. 370–385, 2011. View at Publisher · View at Google Scholar · View at Scopus
  17. L. Leibo, N. Chen, H. Meng, and L. Zhang, “A VLSI architecture of JPEG2000 encoder,” IEEE Journal of Solid-State Circuits, vol. 39, no. 11, pp. 2032–2040, 2004. View at Publisher · View at Google Scholar
  18. K. Mathiang and O. Chitsobhuk, “Efficient pass-pipelined VLSI architecture for context modeling of JPEG2000,” in Proceedings of the Asia-Pacific Conference on Communications (APCC '07), pp. 63–66, 2007.
  19. W. Huaichao, C. Jun, G. Xiaodong et al., “High speed and bi-mode image compression core for onboard space application,” in Proceedings of the SPIE-The International Society for Optical Engineering, vol. 7651, pp. 765122–765127, 2010.
  20. A. Lin, C. F. Chang, C. M. Lin et al., “High-performance computing in remote sensing image compression,” in Proceedings of the Conference on High-Performance Computing in Remote Sensing, vol. 8183, 2011.
  21. Y.-H. Seo and D.-W. Kim, “VLSI architecture of line-based lifting wavelet transform for motion JPEG2000,” IEEE Journal of Solid-State Circuits, vol. 42, no. 2, pp. 431–440, 2007. View at Publisher · View at Google Scholar · View at Scopus