Research Article  Open Access
Jin Li, Fei Xing, Zheng You, "An Efficient Image Compressor for Charge Coupled Devices Camera", The Scientific World Journal, vol. 2014, Article ID 840762, 20 pages, 2014. https://doi.org/10.1155/2014/840762
An Efficient Image Compressor for Charge Coupled Devices Camera
Abstract
Recently, discrete wavelet transform (DWT) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely regarded as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, because CCD images contain a large amount of complex texture and contour information, projecting them onto the DWT basis produces many large-amplitude high-frequency coefficients, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity post-transform coupled with compressed sensing (PTCS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a post-transform with a pair of bases is applied to the DWT coefficients. The pair of bases consists of a DCT basis and a Hadamard basis, used at high and low bit rates, respectively. The best post-transform is selected by a norm-based approach. The post-transform is regarded as the sparse representation stage of CS, and the post-transformed coefficients are resampled by a sensing measurement matrix. Experimental results on onboard CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder; its performance is comparable to that of JPEG2000 at low bit rates, without the excessive implementation complexity of JPEG2000.
1. Introduction
The charge coupled device (CCD) camera is now heading toward high spatial resolution, high radiometric resolution, a large field of view, and wide coverage [1–3]. To meet these performance requirements, the number of CCD pixels is growing, the readout rate is increasing, the quantization bits of the analog-to-digital (A/D) converter are increasing, and the average shooting time is increasing. Therefore, the amount of digitized image data in a CCD camera is increasing sharply. Table 1 shows the input data rates of the compression systems of recent French Earth observation satellites.

From Table 1, onboard image data rates are continuously increasing, whereas the highest data transmission rates of the onboard downlink channel are limited. In addition, the amount of flash-based nonvolatile solid-state memory on a satellite is also limited. It is therefore necessary to compress onboard CCD images.
A space CCD camera compressor requires low complexity, high robustness, and high performance, both because the captured image information is very precious and because the compressor usually works on a satellite where resources such as power, memory, and processing capacity are limited [4, 5]. Yu et al. [6] surveyed the onboard image compression algorithms used in the compression systems of more than 40 space missions, classified by their underlying compression theory. The result is shown in Figure 1. As the figure shows, more than half of onboard image compression algorithms are transform based. At present, the most advanced onboard compression is based on the wavelet transform, which will also be the key technique in space camera compression applications.
In recent years, many discrete wavelet transform (DWT) based compression approaches have been proposed, such as EZW [7], SPIHT [8], and SPECK [9]. The typical DWT-based algorithms are JPEG2000 [10] and the Consultative Committee for Space Data Systems Image Data Compression standard (CCSDS-IDC) [11]. JPEG2000 consists primarily of the DWT and embedded block coding with optimal truncation (EBCOT) [12]. JPEG2000 achieves good compression results. However, it is too complex because three coding passes are required for each bit plane. In addition, the optimal rate control in JPEG2000 has high implementation complexity, whereas the suboptimal rate control is inaccurate. This makes the implementation of JPEG2000 on resource-limited space hardware particularly challenging. Therefore, the Consultative Committee for Space Data Systems (CCSDS) considers JPEG2000 unsuitable for onboard compression. The CCSDS-IDC algorithm consists of the DWT and a bit-plane encoder (BPE). The BPE, a zero-tree encoder, exploits the structure of spatial orientation trees in the bit planes: when the children coefficients are insignificant, the grandchildren coefficients also tend to be insignificant. This zero-tree property makes the bit planes exhibit large zero regions, and taking full advantage of these regions improves coding efficiency. CCSDS-IDC offers progressive coding and fault tolerance. Moreover, the BPE has low complexity and occupies little storage, which makes it very suitable for onboard camera applications. However, it decreases the average PSNR by about 2 dB compared with JPEG2000.
For remote sensing images with abundant texture and edge features, the DWT is not the optimal sparse representation [18–21]: projecting such images onto the DWT basis produces many large-amplitude high-frequency coefficients, which is a disadvantage for the subsequent coding. In JPEG2000, EBCOT [22] is very efficient in removing the redundancy between wavelet coefficients, which makes JPEG2000 the best-performing encoder among existing image compression algorithms. To overcome the DWT issue, several promising transforms such as bandelets [23, 24], curvelets [25], contourlets [26], wedgelets [27], edgelets [28], and complex wavelets [29] have been studied. However, these approaches usually require oversampling, have higher complexity than the wavelet transform, and require nonseparable processing and nonseparable filter design.
Attempts to remove the redundancy between wavelet coefficients can be classified into two categories: transforms in the spatial domain and transforms in the transform domain. In [30], a two-dimensional (2D) edge-adaptive lifting structure was presented, in which the 2D prediction filter predicts the value of the next polyphase component according to an edge orientation estimator of the image. In [31], Chang and Girod proposed an adaptive lifted discrete wavelet transform that locally adapts the filtering direction to the geometric flow in the image. In [32], a direction-adaptive DWT (DA-DWT) was proposed, which locally adapts the filtering directions to image content based on directional lifting. In [33, 34], an oriented 1D multiscale decomposition on a quincunx sampling grid was proposed, which adapts the lifting steps of a 1D wavelet transform along local orientations. In [35, 36], adaptive directional lifting (ADL) was proposed, which performs lifting-based prediction in local windows in the direction of high pixel correlation. In [37, 38], a weighted adaptive lifting (WAL) based wavelet transform was proposed, which uses a weighting function to keep the prediction and update stages consistent. In [39], a 2D oriented wavelet transform (OWT) was introduced, which can perform an integrated oriented transform in an arbitrary direction and achieves a significant transform coding gain. However, these approaches usually produce blocking artifacts because the transform is performed in the spatial domain.
To overcome this issue, Peyré et al. [40, 41] proposed a new low-complexity compression approach based on a post-transform (PT), a transform applied to blocks of wavelet coefficients. This approach removes the redundancy between wavelet coefficients and thus improves compression performance. In addition, because it processes 16-coefficient blocks and performs only dot products, it does not require large memories and can be implemented simply in hardware. The post-transform destroys the zero-tree structure, so the post-transformed coefficients can only be encoded by entropy coders such as arithmetic or Huffman coding, not by zero-tree coders such as the BPE or SPIHT. However, onboard compression requires embedded and progressive coding. To adapt the method to onboard applications, Delaunay et al. [42–44] proposed a compression scheme that uses the BPE from the CCSDS recommendation to code the post-transformed coefficients. However, they apply the post-transform only to the grandchildren coefficients, so the compression gain is limited.
In this paper, we propose a low-complexity post-transform coupled with compressed sensing (PTCS) compression approach for remote sensing images. In the DWT domain, a post-transform with a pair of bases, a DCT basis for high bit rates and a Hadamard basis for low bit rates, is applied, and the best post-transform is selected by a norm-based approach. The post-transform is regarded as the sparse representation stage of CS, and the post-transformed coefficients are resampled by a sensing measurement matrix.
The rest of this paper is organized as follows. Section 2 introduces the proposed algorithm. In Section 3, the experimental results are demonstrated. Section 4 concludes the paper.
2. Proposed Algorithm
2.1. The Imaging Principle of CCD
In order to explain the background of an efficient image compression system for CCD image data, we first briefly introduce the imaging principle of the CCD camera. The general structure of the system is plotted in Figure 2. The system contains several CCDs. The pixel number of the panchromatic CCD is large, such as 12000. Moreover, each CCD has four channels of parallel analog signal outputs and 96 integration stages. To avoid failure of the whole system due to a single point, the analog video signal of each CCD is processed independently, so the system needs mutually independent image compression systems that compress the image data of each CCD output. A 12-bit dedicated video processor is used for each channel of each CCD. The data rate of each CCD output image is
R = N · Q · f_L,  with  f_L = (v · f) / (H · a),
where N is the number of valid CCD elements, Q is the number of quantization bits, f_L is the push-broom line frequency, f is the focal length of the space camera, H is the average height of the satellite orbit, v is the sub-satellite point velocity, and a is the CCD pixel size.
When H is 500 km, f is 3.5 m, and v is 7063 m/s, the line frequency of the CCD is 7.06 kHz. The total image data rate of the four channels of one CCD is 1.01664 Gbps. However, the data-transfer rate of the onboard downlink channel is 300~600 Mbps, and the number of CCDs is 4. In order to meet the task, our compressor requires compression ratios of 4 : 1~32 : 1.
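As a minimal numerical illustration of the data-rate budget above, the line frequency and raw output rate can be recomputed; the pixel size a = 7 µm is an assumption chosen to be consistent with the quoted 7.06 kHz line frequency, not a value stated in the paper.

```python
# Sketch of the data-rate budget of Section 2.1 (symbol names follow the
# reconstructed formula R = N*Q*f_L with f_L = v*f/(H*a); pixel size assumed).
N = 12000          # number of valid CCD elements
Q = 12             # quantization bits per pixel
f_cam = 3.5        # focal length of the space camera [m]
H = 500e3          # average orbit height [m]
v = 7063.0         # sub-satellite point velocity [m/s]
a = 7e-6           # CCD pixel size [m] (assumed)

f_L = v * f_cam / (H * a)   # push-broom line frequency [Hz], ~7.06 kHz
R = N * Q * f_L             # raw output data rate [bit/s], ~1.017 Gbit/s
print(f_L, R / 1e9)
```

This matches the figures quoted in the text (7.06 kHz and roughly 1.01664 Gbps) to within rounding.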
The general structure of the imaging principle of the CCD is plotted in Figure 3. The long linear CCD consists of one linear array that measures the panchromatic spectrum in the 0.4–0.9 µm region. The light radiated or reflected by the hundreds of kilometers of linear arrays of ground pixels is concentrated on the optical thin film of the CCD detector through an optical system. The spatial and spectral distribution of the ground-target radiation acquired by the CCD detector can be expressed as
Φ(x, y) = ∫ E(λ, θ) ρ(λ) τ(λ) h(x, y; λ) dλ,
where θ is the solar elevation angle, λ is the wavelength, E is the ground illuminance, ρ is the spectral reflectance, τ is the atmospheric transmittance, h is the point spread function, and Φ is the radiant flux. The optical thin film of the linear CCD array passes the light of the corresponding wavelengths. Φ is captured by the linear CCD array to produce analog signals, which are then processed to produce one line of image data, so one spatial dimension is acquired. As the CCD camera scans the ground target, the other spatial dimension is acquired; the CCD image can therefore be considered a 2D image. Based on this imaging principle, CCD images have spatial redundancy between adjacent pixels. According to compressed sensing (CS) sampling theory [45–47], sampling redundancy also widely exists in images, as does visual redundancy. Therefore, the compression algorithm must remove spatial, sampling, and visual redundancy efficiently. In order to meet the task, our compressor requires a PSNR greater than or equal to 35 dB at 4 : 1~32 : 1.
2.2. Spatial Decorrelation
The 2D DWT decomposes an image into a lower-resolution approximation and detail subbands, which can be viewed as successive low-pass and high-pass filtering. At each level, the high-pass filter produces the detail information, called wavelet coefficients, while the low-pass filter associated with the scaling function produces the approximation, called scaling coefficients. The DWT has been widely employed to exploit spatial correlations in remote sensing images, for example in JPEG2000 and CCSDS-IDC. In this paper, we apply a 2D DWT coupled with a post-transform to the CCD image: the 2D DWT is performed on the CCD image to reduce spatial correlations, and the remaining intraband correlations are then reduced via a post-transform of the wavelet coefficients.
The 2D DWT leaves residual directional correlation between wavelet coefficients in a small neighborhood (see Figure 4). Statistical dependence between DWT coefficients has been studied for many years. In [48], correlations between nearby wavelet coefficients are reported in the range [0.01, 0.54] at distances of up to 3 pixels. We found an even wider range, and here we provide a more detailed discussion of this topic. We use the Pearson correlation coefficient [49] to analyze the statistical dependency of DWT coefficients:
ρ(X, Y) = cov(X, Y) / (σ_X · σ_Y),
where cov(X, Y) denotes the covariance between the variables X and Y, and σ_X and σ_Y denote the standard deviations of X and Y, respectively. According to project experience, a three-level 2D DWT is appropriate for an onboard compressor, and we use a three-level 2D DWT in this paper.
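As a small sketch of this statistic, the Pearson coefficient above can be computed directly; the two synthetic coefficient arrays below are illustrative assumptions (the real subband data of Figure 5 is not reproduced here).

```python
import numpy as np

# Pearson correlation rho(X, Y) = cov(X, Y) / (sigma_X * sigma_Y),
# applied to a strongly dependent pair and an independent pair.
def pearson(x, y):
    x = np.asarray(x, float).ravel() - np.mean(x)
    y = np.asarray(y, float).ravel() - np.mean(y)
    return float(np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2)))

rng = np.random.default_rng(0)
a = rng.normal(size=10000)
b = 0.8 * a + 0.2 * rng.normal(size=10000)   # "neighboring" coefficients
c = rng.normal(size=10000)                   # independent coefficients

print(pearson(a, b))   # close to 1
print(pearson(a, c))   # close to 0
```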
The three-level 2D DWT is performed on each image band to produce one low-frequency subband (denoted by LL) and nine high-frequency subbands (denoted by HL_i, LH_i, and HH_i, i = 1, 2, 3). The test was performed on ten CCD images. The residual directional correlations between wavelet coefficients in a small neighborhood are shown in Figure 5 for the 16-connected regions of each decomposition level. These imply a strong redundancy among neighboring coefficients. The results in Figure 5 also indicate that the residual correlations between nearby wavelet coefficients are much weaker at distances of 5 pixels or more.
For optimum compression performance on TDI-CCD images, algorithms must fully exploit the abovementioned statistical properties. Several promising transforms such as contourlets, curvelets, ridgelets, and bandelets have been studied, but their implementation complexity is too high. In [50], EBCOT is reported to be very efficient in capturing these residual correlations, but its implementation complexity is also too high. In [51], Delaunay proposed a post-transform to exploit the remaining redundancies between wavelet coefficients: after the wavelet transform of the image, post-transforms are applied to each block of wavelet coefficients. A 4 × 4 block size is the best for simple and effective compression: the residual correlations between nearby wavelet coefficients are very low at distances of 5 pixels or more; the bigger the blocks, the more complex the computation; and when the block size decreases, the number of blocks, and thus the side information, increases. Note that no blocking artifacts are visible in the reconstructed image because the blocks are processed in the wavelet domain.
2.3. Post-transform Theory
This section gives a short review of the post-transform, as introduced in [42, 44, 48]. The core idea behind post-transform compression is that blocks of wavelet coefficients are further transformed using one of a group of particular bases (such as bandelet, DWT, DCT, or PCA bases) from a dictionary D. First, a 2D DWT is applied to the image. Next, blocks of DWT coefficients are projected onto the orthonormal bases of the dictionary D. Then, a Lagrangian cost is computed and the post-transformed coefficients are encoded. Each DWT coefficient block is considered a vector x of the space R^M with M = 16. The vectors of basis b are denoted g_b^i, i = 1, ..., M. The post-transformed block can be expressed as
y_b = (⟨x, g_b^1⟩, ⟨x, g_b^2⟩, ..., ⟨x, g_b^M⟩).
Since the dictionary has several bases, one post-transformed candidate per basis (including the original block) can be obtained. Among all the post-transformed blocks y_b, the best one is selected by minimizing the Lagrangian rate-distortion cost
b* = arg min_b D_b(q) + λ · R_b,
where Q(y_b) denotes the quantized post-transformed coefficients, q is the quantization step, D_b(q) denotes the squared error due to quantization of the post-transformed block y_b, λ is a Lagrangian multiplier, and R_b denotes an estimate of the bit rate required for encoding Q(y_b) and the associated side information b.
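The selection above can be sketched as follows; the toy dictionary (identity plus random orthonormal bases) and the crude rate model (nonzero count plus one symbol of side information) are assumptions for illustration only, not the paper's exact coder.

```python
import numpy as np

# Sketch of post-transform selection: project a 16-coefficient block onto each
# basis of a small dictionary and keep the candidate minimizing D + lambda*R.
rng = np.random.default_rng(1)

def random_orthonormal(n):
    q_mat, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return q_mat

x = rng.normal(size=16)                      # one wavelet-coefficient block
dictionary = [np.eye(16), random_orthonormal(16), random_orthonormal(16)]
q, lam = 0.5, 0.1                            # quantization step, multiplier

best_cost, best_b = np.inf, None
for b, basis in enumerate(dictionary):
    y = basis.T @ x                          # post-transformed block y_b
    yq = q * np.round(y / q)                 # uniform quantization
    D = np.sum((y - yq) ** 2)                # squared quantization error
    R = np.count_nonzero(yq) + 1             # crude rate estimate + side info
    cost = D + lam * R
    if cost < best_cost:
        best_cost, best_b = cost, b
print(best_b, best_cost)
```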
2.4. Our Post-transform Method
In the post-transform, the dictionary has multiple bases: the larger the number of bases, the better the compression performance, and the higher the computational complexity. However, onboard compression requires low computational complexity, and the space CCD compressor allows only a one-basis post-transform. In [52], a low-complexity compression scheme using the post-transform with only a Hadamard basis was proposed, in which the post-transform is applied only at the first level of the wavelet transform; the reported PSNR increase is only between 0.4 dB and 0.6 dB compared to the DWT alone. In this paper, to obtain a low-complexity yet efficient post-transform, we consider a very simple dictionary containing only one dynamic basis: a Hadamard basis at low bit rates and a DCT basis at high bit rates.
In [53], Delaunay et al. showed that the Lagrangian approach to selecting the best post-transformed block has two main drawbacks: the bit rate estimate is computationally intensive and not always accurate, and the choice of the best post-transformed block depends on the quantization step q in the rate-distortion criterion, whereas the coder does not know the quantization step in advance. Therefore, Delaunay et al. proposed a norm minimization approach to select the best post-transformed block. In this paper, we propose a new rate-dependent norm minimization approach that replaces the single-norm minimization method. It is adapted to the low-complexity constraints of the space TDI-CCD compressor.
At low bit rates, the bit rate is approximately proportional to the number of nonzero post-transformed coefficients. We therefore propose an l0-norm minimization approach to select the best post-transformed block, that is, the one with the fewest significant coefficients:
b* = arg min_b Σ_i 1(y_b^i ≠ 0),
where y_b^i is a post-transformed coefficient. At high bit rates, the bit rate grows with the magnitudes of the coefficients, so we use an l1-norm minimization approach to select the best post-transformed block, that is, the one with the smallest total magnitude:
b* = arg min_b Σ_i |y_b^i|.
Figure 6 shows the proposed post-transform architecture. Each high-frequency subband is post-transformed. The bit-rate comparator decides the coding bit-rate type: a coding bit rate greater than or equal to 0.5 bpp is the high-bit-rate type (type 1), and less than 0.5 bpp is the low-bit-rate type (type 0). The post-transform uses the DCT basis with l1 minimization for type 1 and the Hadamard basis with l0 minimization for type 0.
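The rate-dependent switch can be sketched as below. The orthonormal DCT-II and Hadamard basis constructions are standard; the toy block and the rounding used to approximate the l0 count are assumptions.

```python
import numpy as np

# Sketch of Section 2.4: Hadamard basis + l0 score below 0.5 bpp (type 0),
# DCT basis + l1 score at or above 0.5 bpp (type 1).
def dct_basis(n=16):
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c                        # rows: orthonormal DCT-II vectors

def hadamard_basis(n=16):
    h = np.array([[1.0]])
    while h.shape[0] < n:           # Sylvester construction
        h = np.block([[h, h], [h, -h]])
    return h / np.sqrt(n)

def post_transform(x, bpp):
    if bpp >= 0.5:                  # type 1: DCT + l1
        y = dct_basis() @ x
        score = float(np.sum(np.abs(y)))
    else:                           # type 0: Hadamard + l0 (after rounding)
        y = hadamard_basis() @ x
        score = int(np.count_nonzero(np.round(y)))
    return y, score

x = np.linspace(-1.0, 1.0, 16)      # toy wavelet-coefficient block
y_hi, s_hi = post_transform(x, 1.0)
y_lo, s_lo = post_transform(x, 0.25)
print(s_hi, s_lo)
```

Because both bases are orthonormal, either choice preserves the block's energy; only the sparsity of the representation changes.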
2.5. Proposed Post-transform Compressed Sensing (PTCS)
In [54], an adaptive arithmetic coder is used to encode both the post-transformed block coefficients and the side information identifying the chosen post-transform basis of each block. In [53], the bit-plane encoder (BPE) is used to encode both the post-transformed block coefficients and the side information. However, the post-transform destroys the zero-tree structure: the PSNR using the BPE is 0.2 dB lower than using the DWT alone, and even worse at high bit rates. A basis-vector ordering approach has therefore been used, where the ordering is obtained by processing several thousand blocks of wavelet coefficients from a learning set of images. This ordering approach has two drawbacks for space CCD compression. First, computing the ordering of the basis vectors is computationally intensive and not always accurate. Second, the ordering depends heavily on the learning set of images, which is not available at coding time.
Indeed, after the 2D DWT and the post-transform, the TDI-CCD image is sparse in the post-transform domain. To achieve a higher compression ratio while keeping the complexity low, we regard the 2D DWT and post-transform as the sparse representation stage for the TDI-CCD image, so that the post-transformed coefficients can be resampled using sensing matrices to achieve compression. According to compressed sensing (CS), a sparse signal with a few significant samples in one basis can be reconstructed almost perfectly from a small number of random projections onto a second basis that is incoherent with the first.
First, we give a short review of CS, as introduced in [55]; a good overview can be found in [56]. A complete CS system involves three main stages: sparse representation, sensing measurement, and signal reconstruction. The sparse representation stage represents the original signal x of length N with coefficients θ in an orthonormal transform basis matrix Ψ:
x = Ψθ.
If the number of nonzero or significant coefficients in the vector θ is K, the original signal is called K-sparse in the basis Ψ. The sensing measurement stage projects the original signal x into a cluster of measurements y with significantly fewer elements than x:
y = Φx,
where Φ is an M × N measurement matrix with M << N. The above expression can then be written as
y = ΦΨθ.
Since M << N, compression is achieved. Indeed, the core idea of CS is to remove sampling redundancy by requiring only M samples of the signal. The resultant measurements y are used for the recovery of the original signal x. If the measurement matrix is properly designed, ΦΨ can satisfy the so-called restricted isometry property (RIP), and the original signal can be exactly or approximately recovered by solving the following standard convex optimization problem:
min ||θ||_1  subject to  y = ΦΨθ.
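The measurement stage can be sketched in a few lines; the dimensions N, M, K and the identity sparsifying basis are illustrative assumptions.

```python
import numpy as np

# Sketch of the CS sensing-measurement stage: a K-sparse signal of length N
# is reduced to M << N Gaussian random measurements, y = Phi @ x.
rng = np.random.default_rng(0)
N, M, K = 256, 64, 8

theta = np.zeros(N)
theta[rng.choice(N, K, replace=False)] = rng.normal(size=K)  # K-sparse vector
Psi = np.eye(N)                              # sparsifying basis (toy: identity)
x = Psi @ theta                              # original signal
Phi = rng.normal(size=(M, N)) / np.sqrt(M)   # Gaussian measurement matrix
y = Phi @ x                                  # M measurements: N -> M samples
print(x.shape, y.shape)
```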
In this paper, we apply CS to compress remote sensing images. Figure 7 shows the compressive sensing process using a DWT sparse representation for a remote sensing image. The DWT produces one low-frequency subband, LL, and three high-frequency subbands, HL, LH, and HH. Each high-frequency subband is measured using a Gaussian random measurement matrix. Since M << N, compression is achieved. For the three high-frequency subbands, the sparsity K is 113, 71, and 7, respectively, and the measurement numbers of all high-frequency subbands satisfy the recovery condition, so the high-frequency subband coefficients can be reconstructed. The inverse DWT is then performed to obtain the reconstructed image, whose PSNR reaches 37.02 dB. Therefore, CS offers good compression performance for remote sensing images.
(a) Original remote sensing image
(b) One-level DWT producing one low-frequency subband and three high-frequency subbands; each subband is 256×256
(c) The resultant measurements of the HL subband
(d) The resultant measurements of the LH subband
(e) The resultant measurements of the HH subband
(f) Reconstructed image; the PSNR is 37.02 dB
In this paper, we consider the post-transformed coefficients as θ. Let Ψ_w be the DWT orthonormal basis matrix and Ψ_p the post-transform orthonormal basis matrix; the product Ψ_w Ψ_p is considered as Ψ. The post-transformed coefficients then undergo sensing measurement with the measurement matrix Φ to achieve compression. In the CS sense, the post-transformed coefficients achieve compression by removing sampling redundancy, in place of the BPE coder.
In [57], wavelet-based CS was proposed, in which images are assumed to be sparse in a wavelet basis. In our approach, we use the DWT plus post-transform as the transform basis in the sparse representation stage. After sparse representation, most of the image information is concentrated in a few large transform coefficients in θ, while most of the transform coefficients in θ are not zero but very small. We use hard-threshold (HT) based image denoising to indirectly measure the sparsity of the transformed image: the better the sparsity, the more dominant the significant coefficients after HT, and the higher the peak signal-to-noise ratio (PSNR) of the denoised image. We use AVIRIS images in our test and choose the same threshold as [58]. Figure 8 shows the PSNR results for the various transforms used in our method. As the figure shows, the post-transform offers better sparsity, because it exploits the remaining redundancies between wavelet coefficients.
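The HT-based sparsity proxy can be sketched on a 1D toy signal; the signal, the FFT-based transform, and the threshold value are assumptions and do not reproduce the AVIRIS setup of Figure 8.

```python
import numpy as np

# Sketch of hard-threshold denoising as a sparsity proxy: zero small-magnitude
# transform coefficients, invert, and compare PSNR against the noisy input.
def hard_threshold(coeffs, t):
    out = coeffs.copy()
    out[np.abs(out) < t] = 0.0
    return out

def psnr(ref, rec, peak=1.0):
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
n = 512
t_axis = np.arange(n) / n
signal = np.sin(2 * np.pi * 3 * t_axis)          # smooth, sparse in frequency
noisy = signal + 0.05 * rng.normal(size=n)

coeffs = np.fft.rfft(noisy) / np.sqrt(n)          # normalized transform
den = np.fft.irfft(hard_threshold(coeffs, 0.2) * np.sqrt(n), n)
print(psnr(signal, noisy), psnr(signal, den))     # denoised PSNR is higher
```

The more the transform concentrates the signal into few large coefficients, the more noise the threshold removes, which is exactly the effect used to rank transforms by sparsity.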
In a CS system, the sparsity of the transformed image is one of the key factors affecting reconstructed image quality. Below, we analyze the relationship between the measurement number M and the sparsity K of the transformed image. We use a Gaussian random matrix as the sensing matrix. First, we study the relationship between M and K for a one-dimensional (1D) signal with 256 samples, using the orthogonal matching pursuit (OMP) method to recover the original signal. Let P denote the ratio of correctly recovered samples to all samples of the original signal. Figure 9 shows the relation between the sparsity K and the measurement number M: to accurately recover the original signal, the larger K is, the larger M must be, and when M exceeds a certain threshold the signal can be accurately recovered. That is, the sparser the signal, the fewer measurements are needed, and the better the compression performance and reconstructed signal quality.
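A compact OMP recovery matching the 1D experiment above can be sketched as follows; the particular K, M, and random seed are illustrative assumptions.

```python
import numpy as np

# Orthogonal matching pursuit (OMP): greedily pick the column of Phi most
# correlated with the residual, then refit by least squares, K times.
def omp(Phi, y, K):
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(K):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(3)
N, M, K = 256, 100, 5
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.normal(size=K)   # K-sparse signal
Phi = rng.normal(size=(M, N)) / np.sqrt(M)                # Gaussian sensing
x_hat = omp(Phi, Phi @ x, K)
print(np.linalg.norm(x - x_hat))                          # near-exact recovery
```

With M well above the K log(N/K) regime, the support is recovered and the least-squares refit is essentially exact, mirroring the threshold behavior of Figure 9.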
For 2D remote sensing images, we consider the DWT, the DCT, and our post-transform as the sparse basis. The PSNRs obtained with each sparse basis for the same M value are shown in Figures 10 and 11: the better the sparsity of the image, the better the reconstructed image quality. Since our post-transform offers better sparsity than the wavelet-based transform, it is very suitable as the CS sparse representation stage.
(a) Original image
(b) Our method,
(c) Our method,
(d) DWT,
(e) DWT,
2.6. Deep Coupling between CS and the Post-transform
The sensing measurement number M greatly influences the reconstructed signal quality when using the post-transform basis (see Figures 10 and 12): the larger M is, the better the reconstructed image quality. The measurement number M of the sensing matrix is determined by the compression ratio (CR); lower values of M give higher compression ratios.
In our approach, each subband is measured independently. The measurement numbers for the subbands after the post-transform, that is, HL_i, LH_i, and HH_i (i = 1, 2, 3), are denoted by m_j (j = 1, ..., 9). The resulting measurement of each subband can be expressed as
y_j = Φ_j x_j,
where Φ_j is the m_j × N_j measurement matrix of subband j. The CR can be considered as the ratio between the total number of bits in the original transformed coefficients and the number of bits that must be transmitted:
CR = (N_total × B) / (N_sent × B),
where N_total is the total number of transformed coefficients, N_sent is the number of transmitted samples (the LL coefficients plus all subband measurements), and B denotes the bit depth of each pixel.
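The CR bookkeeping can be sketched numerically; the subband sizes and measurement counts below are illustrative assumptions for a 1024 × 1024 image with a three-level decomposition.

```python
# Sketch of the compression-ratio formula CR = (N_total*B) / (N_sent*B):
# the LL subband is sent in full (DPCM), the nine tensors send m_j measurements.
B = 12                       # bit depth per coefficient (assumed)
n_ll = 128 * 128             # LL subband coefficients, transmitted in full
tensor_sizes = [128 * 128] * 3 + [256 * 256] * 3 + [512 * 512] * 3
m = [4000, 4000, 4000, 9000, 9000, 9000, 20000, 20000, 20000]  # measurements

n_total = n_ll + sum(tensor_sizes)   # all transformed coefficients
n_sent = n_ll + sum(m)               # what actually crosses the channel
CR = (n_total * B) / (n_sent * B)    # bit depth cancels here
print(CR)                            # roughly 9:1 for these counts
```

Note that with a uniform bit depth the CR reduces to a ratio of sample counts, which is why lowering the m_j directly raises the compression ratio.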
In order to efficiently determine the measurement numbers of all measurement matrices, we propose a deep coupling between the post-transform and CS, which not only determines the measurement numbers and reduces the side information of the post-transform, but also codes the measurement results and completes the bit-rate control. Figure 13 shows the proposed deep coupling between the post-transform and CS.
The bit-rate allocation module allocates the bit rates for each sensing matrix, and the information evaluation module evaluates the information content of each tensor. The target bit rates for the different sensing matrices are then allocated based on their information contents, and the measurement number of each sensing matrix is determined according to its allocated bit rate.
In the first place, the information content of the LL subband is calculated. Let c_i denote the coefficients in the LL subband; the information content of the LL subband can be calculated as
I_LL = Σ_i |c_i|.
In the second place, the information content of the 9 tensors is calculated. Let I_{HL,i}, I_{LH,i}, and I_{HH,i} (i = 1, 2, 3) denote the information contents of the 9 tensors. Since the selected representation reflects the image information, the information content of each tensor can be evaluated through its selected post-transform coefficients. Each tensor contains post-transformed blocks; let y^{(j)} denote the selected representation of the jth post-transformed block. The information content of a tensor can be calculated as
I = Σ_j Σ_i |y_i^{(j)}|,
and the other tensors are calculated likewise.
In the third place, the weight of the bit-rate allocation for each tensor is acquired through
w_k = I_k / Σ_j I_j,
so that the tensors carrying more information receive larger shares of the target bit rate.
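The allocation can be sketched as below; approximating each tensor's information content by the l1 norm of its coefficients is an assumption consistent with the reconstructed formulas above, and the tensor data is synthetic.

```python
import numpy as np

# Sketch of the bit-rate allocation of Section 2.6: per-tensor information
# content (sum of coefficient magnitudes), normalized into allocation weights.
rng = np.random.default_rng(0)
tensors = [rng.normal(scale=s, size=(64, 64)) for s in (4.0, 2.0, 1.0)]

info = np.array([np.sum(np.abs(t)) for t in tensors])  # per-tensor content
weights = info / info.sum()                            # bit-rate weights
print(weights)                                         # sums to 1
```

Tensors with larger coefficient energy (here, larger scale) automatically receive larger weights and hence more measurements.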
In this paper, a bit-plane coding procedure is used to code the quantized measurement results. We modify the SPIHT algorithm [59] to perform the bit-plane coding, which processes one quantized measurement result at a time; after one quantized measurement result is processed, the next is processed. Each pass of the bit-plane coding includes two stages: the significance pass (SP) and the refinement pass (RP). We define a significance map for a given threshold over the quantized measurement result elements c(i, j, k) at locations (i, j, k). Let s_n(i, j, k) be the significance state for the threshold 2^n in the nth bit plane:
s_n(i, j, k) = 1 if |c(i, j, k)| ≥ 2^n, and 0 otherwise.
For s_n(i, j, k) = 1, the element is considered significant; it is encoded and removed from the quantized measurement result, while the insignificant elements are preserved for the next bit plane. After that, the significance threshold is halved and the process is repeated for the next pass.
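The significance test above can be sketched for a toy coefficient array (an assumption; the real input would be the quantized measurement results):

```python
import numpy as np

# Sketch of the significance passes: s_n(c) = 1 iff |c| >= 2^n; once found
# significant, an element is removed; the threshold is halved each pass.
def significance_passes(coeffs):
    c = np.abs(coeffs).astype(np.int64)
    n = int(np.floor(np.log2(c.max()))) if c.max() > 0 else 0
    passes = []
    remaining = np.ones(c.shape, dtype=bool)     # not yet found significant
    while n >= 0:
        sig = remaining & (c >= 2 ** n)          # newly significant this pass
        passes.append(np.flatnonzero(sig).tolist())
        remaining &= ~sig                        # encode and remove
        n -= 1                                   # halve the threshold
    return passes

coeffs = np.array([37, 2, -21, 0, 5, -1, 64, 9])
for n, idx in enumerate(significance_passes(coeffs)):
    print(n, idx)   # pass 0 finds index 6 (|64| >= 2^6), pass 1 index 0, ...
```

Zero-valued elements (index 3 here) never become significant and are never emitted by the significance pass.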
2.7. Our Proposed Codec Architecture
In order to remove spatial, sampling, and visual redundancy efficiently, we propose a low-complexity post-transform coupled with compressed sensing (PTCS) compression approach for remote sensing images. Figure 14 shows the proposed architecture. The compression is performed in five steps. In the first step, a 3-level 2D DWT is applied to the TDI-CCD image to obtain one low-frequency subband and 9 high-frequency subbands. In the second step, blocks of 4 × 4 DWT coefficients of all the subbands except the LL subband are post-transformed. In the third step, CS is applied to the post-transformed subbands. In the fourth step, the coefficients of the low-frequency subband are encoded by DPCM, while the measurement results are quantized, bit-plane coded, and then entropy coded with an adaptive arithmetic coder. In the fifth step, the bit streams are packed and transmitted to the decoder over the channel.
Note that the CCD outputs four-channel images, which are processed simultaneously by the PTCS compressor; the synchronization of the four channels is therefore very important and affects the compression performance of the proposed algorithm. In fact, the linear CCD produces 12 k pixels per line (see Figure 15), which are output through four or eight channels; in our system we use four channels. The image zone produces charges, which are moved to the output zone, where they are read out through the four channels. In order to avoid pixel smear, the charges in the output zone must be read out through the four channels simultaneously, by driver clocks, before the charges in the image zone move to the output zone. Therefore, the four-channel images are always output simultaneously.
3. Experimental Results
3.1. Experimental Scheme
In order to test the performance of the proposed algorithm, we use independently developed ground test equipment. The experimental system is shown in Figure 16. It is composed of a TDI-CCD camera, an image simulation source, the TDI-CCD image compression system, ground test equipment, a DVI monitor, and a server. The TDI-CCD image compression system is shown in Figure 17. The server injects the remote sensing images into the image simulation source, which adjusts the images to simulate the output of the CCD and then transfers them to the TDI-CCD image compression system to verify the compression algorithm. The ground test equipment performs image decompression to obtain the reconstructed images and then transfers them to the server through the Camera Link bus. Finally, the compression performance is analyzed on the server.
3.2. TDICCD Image Compression Validation and Analysis
In order to verify the validity of the proposed algorithm, we perform the experiment in two steps. First, the CCD camera sends a test image to the compression system using a rotating cylinder as the target (see Figure 18): the camera captures the rotating cylinder target and outputs the image line by line. The working line frequency is set to 7.06 kHz, and each line contains 3072 pixels. During the camera's 15 minutes of shooting, the captured images are stored in the NAND flash array inside the camera. Twenty frames from the NAND flash array are then sent to the compression system. Each frame image is , with a bit depth of 10 bits per pixel. The test image is encoded, transferred by the high-speed serial Gbit transfer system, reconstructed by the ground test equipment, and then sent to a PC through Camera Link. Figure 19 shows the reconstructed images at different bit rates.
Figure 19: (a) original image; (b) 0.25 bpp; (c) 0.5 bpp; (d) 1 bpp; (e) 2 bpp.
Second, the server injects the AVIRIS multiband remote sensing images into the image simulation source. Six remote sensing images are tested. The size of the images is , at 8 bpp (bits per pixel). The compression rate CR is set to . Figure 20 shows the reconstructed remote sensing images at different bit rates. In the displayed images, the original and reconstructed images show almost no difference because the proposed compression algorithm has a high signal-to-noise ratio: the PSNR reaches 46.75 dB, 45.52 dB, 43.60 dB, and 40.40 dB, respectively, in Figures 20(b)~20(e). After each image was encoded by the proposed algorithm at 1 bpp, the dynamic range of the pixel values is 0~2 bits, and most pixel values are about 1 bit. Therefore, the proposed algorithm is valid for space CCD image compression.
Figure 20: (a) original image; (b) 2 bpp; (c) 1 bpp; (d) 0.5 bpp; (e) 0.25 bpp.
3.3. Remote Sensing Image Compression Algorithm Performance Analysis
To objectively evaluate the performance of the proposed deep-coupling-based compression scheme, extensive experiments were carried out on a number of multispectral data sets at various coding bit rates. In the first part, in order to test the compression performance of the proposed approach, we use three groups of SPOT-1 remote sensing images having different texture characteristics. The quality assessment of the decoded images is based on rate-distortion results measured by means of the overall SNR, SNR = 10 log10(P/MSE) dB, where P and MSE denote the power of the original image and the mean squared error, respectively. Table 2 lists the results of our approach at different bit rates.
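The SNR metric above is straightforward to compute; a minimal numpy sketch (the test arrays are purely illustrative):

```python
import numpy as np

def snr_db(original, reconstructed):
    """Overall SNR in dB: ratio of original-image power to mean squared error."""
    x = original.astype(np.float64)
    p = np.mean(x ** 2)                          # signal power P
    mse = np.mean((x - reconstructed) ** 2)      # mean squared error
    return 10.0 * np.log10(p / mse)

x = np.array([[100.0, 102.0], [98.0, 101.0]])
y = x + 1.0            # reconstruction off by 1 gray level everywhere (MSE = 1)
print(round(snr_db(x, y), 2))   # 40.02
```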

According to Table 2, the average PSNR reaches up to 40 dB, so the proposed algorithm has good compression performance and satisfies the requirements of the design index.
In the second experiment, in order to compare the compression performance of our approach with other algorithms, we compare the results obtained with the proposed coder against those achieved with CCSDS-IDC, JPEG2000, and the Hadamard post-transform implemented independently. We use AVIRIS remote sensing images from the JPL laboratory. Each group is , and the depth of every pixel is 8 bits. The images are Low Altitude, Lunar Lake, Jasper Ridge, and Cuprite; three images of each scene are tested. The compression rate is set to 2.0~0.25 bpp. The average comparisons are shown in Figure 21. Owing to the full use of the post-transform and CS, the proposed compression scheme achieves higher compression performance, with a PSNR gain of 0.3~1.3 dB on average over the CCSDS-IDC and Hadamard post-transform codecs across the 2.0~0.25 bpp bit-rate range, and it is only 0.1~0.9 dB below JPEG2000. Overall, the proposed scheme shows excellent lossy compression performance and delivers better results than the commonly used coders.
3.4. Proposed Algorithm Complexity Analysis and Compression Time
In the following, we analyze the complexity of the algorithm. In our method, a three-level 2-D DWT is applied to the spatial bands. We consider an -tap filter bank and denote by the number of wavelet decomposition levels in the spatial band. The complexity of applying the 2-D DWT to multispectral images of size is . In our method, we use the 9/7 DWT with three levels of decomposition, so the complexity of our algorithm is . After applying the 2-D DWT, at low bit rates we use the Hadamard post-transform; for each block of coefficients, the Hadamard transform needs operations. At high bit rates, we use the DCT post-transform. It should be pointed out that the computational load of the DCT is much less than that of the DWT: one of the most efficient algorithms for the 2-D DCT requires only 54 multiplications and 462 additions, that is, merely 0.84 multiplications and 7.22 additions per point, so the 2-D DCT here requires only 27 multiplications and 231 additions. The best post-transform selection then requires additions and one comparison. So, the complexity required is operations per block of coefficients.
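The reason the Hadamard post-transform is so cheap at low bit rates is that its matrix entries are all ±1, so the transform reduces to additions and subtractions. A minimal numpy sketch of a 4 × 4 block transform (the 1/4 normalization is our choice for illustration; the paper does not specify one):

```python
import numpy as np

# 4x4 Hadamard matrix built from the 2x2 kernel: all entries are +/-1,
# so applying it needs only additions/subtractions (no true multiplications).
H2 = np.array([[1, 1], [1, -1]])
H4 = np.kron(H2, H2)

def hadamard_post_transform(block):
    """Separable 2-D Hadamard transform of a 4x4 block of DWT coefficients."""
    return H4 @ block @ H4.T / 4.0   # H4 @ H4.T = 4*I, so /4 makes it invertible

block = np.arange(16.0).reshape(4, 4)
coeffs = hadamard_post_transform(block)
print(coeffs[0, 0])   # 30.0 (the block's DC term: sum of entries / 4)

# Inverse transform recovers the original block exactly:
recovered = H4.T @ coeffs @ H4 / 4.0
```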
After applying the post-transform, the multiplication complexity of the sensing measurement is of order , and the summation complexity of the sensing measurement is of order .
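The sensing measurement itself is one matrix-vector product per block, which is where the multiplication and summation counts come from: measuring a length-n coefficient vector with an m × n matrix costs m·n multiplications and m·(n−1) additions. A minimal sketch with illustrative (assumed) dimensions and a Gaussian measurement matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 16, 8                                      # block length, measurements (assumed)
phi = rng.standard_normal((m, n)) / np.sqrt(m)    # Gaussian sensing matrix

x = np.zeros(n)
x[[2, 7, 11]] = [3.0, -1.5, 2.2]  # sparse post-transform coefficients of one block

y = phi @ x                       # m*n multiplications, m*(n-1) additions
print(y.shape)                    # (8,)
```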
The following compression times are only estimates, since our FPGA implementation of the post-transform is not optimized. These estimates are based on the lossy compression of an image of size at 1.0 bpp on an FPGA EVM board with a system clock frequency of 88 MHz. Table 3 shows the comparison of complexity between the proposed multispectral compression algorithm and the others.
From Table 3, the processing time of our algorithm reaches 0.016 µs/sample, and the data throughput is 62.5 MSPS, which is higher than those of JPEG2000 and CCSDS-IDC, so our approach has low complexity. In our project, the space CCD camera works at an orbit altitude of 500 km, a scroll angle of ~, latitudes of −~, and a line working frequency of 7.2376 kHz~3.4378 kHz. At this line frequency, capturing an image of requires 70.74 ms, while compressing an image of with our approach requires 7.86 ms. Therefore, our approach can process the four-band images simultaneously, which meets the requirement of the project.
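These timing figures can be cross-checked with simple arithmetic. The sketch below assumes a 512-line frame purely for illustration (the text does not state the frame size); under that assumption the quoted capture time and per-sample time both fall out directly:

```python
# Sanity checks on the quoted timing figures.
line_freq_hz = 7237.6                    # slowest quoted line working frequency
lines = 512                              # assumed frame height (illustrative)
capture_ms = lines / line_freq_hz * 1e3
print(round(capture_ms, 2))              # 70.74, matching the capture time in the text

throughput_msps = 62.5                   # data throughput in megasamples/second
per_sample_us = 1.0 / throughput_msps
print(per_sample_us)                     # 0.016, the quoted us/sample figure
```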
In addition, we use a Xilinx XC2V6000-6FF1152 FPGA to implement the proposed algorithm. The design language is Verilog HDL, the development platform is ISE 8.2, and the synthesis tool is XST. Table 4 shows the resource occupancy of our approach.

From Table 4, the LUTs occupy 67%, the slices occupy 70%, and the BRAM occupies 80%. All indicators are below 95%, which meets the requirement of our project.
4. Conclusion
In this paper, we proposed a low-complexity post-transform coupled with compressive sensing (PTCS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, one of a pair of post-transform bases, the DCT basis and the Hadamard basis, is applied to the DWT coefficients; they are used at high and low bit rates, respectively. The best post-transform is selected by the norm-based approach, and the post-transform is regarded as the sparse representation stage of CS. The post-transform coefficients are then resampled by the sensing measurement matrix. Experimental results on onboard TDI-CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder, and its performance is comparable to that of JPEG2000 at low bit rates without the excessive implementation complexity of JPEG2000.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The project is sponsored by the China Postdoctoral Science Foundation (no. 2014M550720), the National High Technology Research and Development Program of China (863 Program) (nos. 2012AA121503 and 2012AA121603), and China NSF projects (nos. 61377012 and 60807004).
References
 Q. Liu, S. Wang, X. Zhang, and Y. Hou, “Improvement of the space resolution of the optical remote sensing image by the principle of CCD imaging,” in Proceedings of the 2nd International Conference on Image Processing Theory, Tools and Applications (IPTA '10), pp. 477–481, Paris, France, July 2010.
 J. Zang, Y. Li, X. Xue, and Y. Guo, “Multi-channel high-speed TDI-CCD image data acquisition and storage system,” in Proceedings of the International Conference on E-Product E-Service and E-Entertainment (ICEEE '10), pp. 1–4, Henan, China, 2010.
 C. Fan and B. Zhang, “Analysis on the dynamic image quality of the TDI-CCD camera,” in Proceedings of the International Conference on Optics, Photonics and Energy Engineering (OPEE '10), vol. 1, pp. 62–64, Wuhan, China, May 2010.
 C. Lambert-Nebout and G. Moury, “Survey of on-board image compression for CNES space missions,” in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '99), vol. 4, pp. 2032–2034, July 1999.
 A. S. Dawood, J. A. Williams, and S. J. Visser, “On-board satellite image compression using reconfigurable FPGAs,” in Proceedings of the IEEE International Conference on Field-Programmable Technology, pp. 306–310, 2002.
 G. Yu, T. Vladimirova, and M. N. Sweeting, “Image compression systems on board satellites,” Acta Astronautica, vol. 64, no. 9-10, pp. 988–1005, 2009.
 J. M. Shapiro, “Embedded image coding using zerotrees of wavelet coefficients,” IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3445–3462, 1993.
 A. Said and W. A. Pearlman, “A new, fast, and efficient image codec based on set partitioning in hierarchical trees,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, no. 3, pp. 243–250, 1996.
 A. Islam and W. A. Pearlman, “An embedded and efficient low-complexity hierarchical image coder,” in Proceedings of Visual Communications and Image Processing, vol. 3653, pp. 294–305, January 1999.
 T. Acharya and P.-S. Tsai, JPEG2000 Standard for Image Compression: Concepts, Algorithms and VLSI Architectures, John Wiley & Sons, 2005.
 CCSDS, “Image Data Compression Recommended Standard,” CCSDS 122.0-B-1 Blue Book, November 2005.
 D. S. Taubman and M. W. Marcellin, “JPEG2000: standard for interactive imaging,” Proceedings of the IEEE, vol. 90, no. 8, pp. 1336–1357, 2002.
 L. Liu, N. Chen, H. Meng, L. Zhang, Z. Wang, and H. Chen, “A VLSI architecture of JPEG2000 encoder,” IEEE Journal of Solid-State Circuits, vol. 39, no. 11, pp. 2032–2040, 2004.
 K. Mathiang and O. Chitsobhuk, “Efficient pass-pipelined VLSI architecture for context modeling of JPEG2000,” in Proceedings of the Asia-Pacific Conference on Communications (APCC '07), pp. 63–66, Bangkok, Thailand, October 2007.
 H. Wang, J. Chen, X. Gu, and X. Chen, “High speed and bi-mode image compression core for onboard space application,” in International Conference on Space Information Technology 2009, vol. 7651 of Proceedings of SPIE, Beijing, China, April 2010.
 A. Lin, C. F. Chang, M. C. Lin, and L. J. Jan, “High-performance computing in remote sensing image compression,” in High-Performance Computing in Remote Sensing, vol. 8183 of Proceedings of SPIE, Prague, Czech Republic, September 2011.
 Y. Seo and D. Kim, “VLSI architecture of line-based lifting wavelet transform for motion JPEG2000,” IEEE Journal of Solid-State Circuits, vol. 42, no. 2, pp. 431–440, 2007.
 V. Velisavljević, B. Beferull-Lozano, M. Vetterli, and P. L. Dragotti, “Directionlets: anisotropic multidirectional representation with separable filtering,” IEEE Transactions on Image Processing, vol. 15, no. 7, pp. 1916–1933, 2006.
 D. L. Donoho and I. M. Johnstone, “Ideal spatial adaptation by wavelet shrinkage,” Biometrika, vol. 81, no. 3, pp. 425–455, 1994.
 A. Chambolle, R. A. DeVore, and N. Lee, “Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage,” IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 319–335, 1998.
 L. Şendur and I. W. Selesnick, “Bivariate shrinkage functions for wavelet-based denoising exploiting interscale dependency,” IEEE Transactions on Signal Processing, vol. 50, no. 11, pp. 2744–2756, 2002.
 D. Taubman, “High performance scalable image compression with EBCOT,” IEEE Transactions on Image Processing, vol. 9, no. 7, pp. 1158–1170, 2000.
 F. Qi, Y. Li, and K. Zhang, “Infrared image denoising by fuzzy threshold based on bandelets transform,” Acta Photonica Sinica, vol. 37, no. 12, pp. 2564–2567, 2008.
 E. Le Pennec and S. Mallat, “Sparse geometric image representations with bandelets,” IEEE Transactions on Image Processing, vol. 14, no. 4, pp. 423–438, 2005.
 E. J. Candès and D. L. Donoho, “New tight frames of curvelets and optimal representations of objects with piecewise C^2 singularities,” Communications on Pure and Applied Mathematics, vol. 57, no. 2, pp. 219–266, 2004.
 M. N. Do and M. Vetterli, “The contourlet transform: an efficient directional multiresolution image representation,” IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, 2005.
 M. Wakin, J. Romberg, H. Choi, and R. Baraniuk, “Rate-distortion optimized image compression using wedgelets,” in Proceedings of the International Conference on Image Processing, vol. 3, pp. III-237–III-240, September 2002.
 M. B. Wakin, J. K. Romberg, H. Choi, and R. G. Baraniuk, “Wavelet-domain approximation and compression of piecewise smooth images,” IEEE Transactions on Image Processing, vol. 15, no. 5, pp. 1071–1087, 2006.
 J. J. Ranjani and S. J. Thiruvengadam, “Dual-tree complex wavelet transform based SAR despeckling using interscale dependence,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 6, pp. 2723–2731, 2010.
 Ö. N. Gerek and A. E. Çetin, “A 2-D orientation-adaptive prediction filter in lifting structures for image coding,” IEEE Transactions on Image Processing, vol. 15, no. 1, pp. 106–111, 2006.
 C. Chang and B. Girod, “Direction-adaptive discrete wavelet transform via directional lifting and bandeletization,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '06), pp. 1149–1152, Atlanta, Ga, USA, October 2006.
 C.-L. Chang and B. Girod, “Direction-adaptive discrete wavelet transform for image compression,” IEEE Transactions on Image Processing, vol. 16, no. 5, pp. 1289–1302, 2007.
 V. Chappelier and C. Guillemot, “Oriented wavelet transform on a quincunx pyramid for image compression,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '05), vol. 1, pp. 81–84, September 2005.
 V. Chappelier and C. Guillemot, “Oriented wavelet transform for image compression and denoising,” IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 2892–2903, 2006.
 W. Ding, F. Wu, and S. Li, “Lifting-based wavelet transform with directionally spatial prediction,” in Proceedings of the Picture Coding Symposium, pp. 483–488, San Francisco, Calif, USA, December 2004.
 W. Ding, F. Wu, X. Wu, and S. Li, “Adaptive directional lifting-based wavelet transform for image coding,” IEEE Transactions on Image Processing, vol. 16, no. 2, pp. 416–427, 2007.
 Y. Liu and K. N. Ngan, “Weighted adaptive lifting-based wavelet transform,” in Proceedings of the 14th IEEE International Conference on Image Processing (ICIP '07), pp. III-189–III-192, San Antonio, Tex, USA, September 2007.
 Y. Liu and K. N. Ngan, “Weighted adaptive lifting-based wavelet transform for image coding,” IEEE Transactions on Image Processing, vol. 17, no. 4, pp. 500–511, 2008.
 B. Li, R. Yang, and H. Jiang, “Remote-sensing image compression using two-dimensional oriented wavelet transform,” IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 1, pp. 236–250, 2011.
 G. Peyré, Géométrie multi-échelles pour les images et les textures [Ph.D. thesis], École Polytechnique, 2005.
 G. Peyré and S. Mallat, “Discrete bandelets with geometric orthogonal filters,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '05), vol. 1, pp. 65–68, September 2005.
 X. Delaunay, E. Christophe, C. Thiebaut, and V. Charvillat, “Best post-transforms selection in a rate-distortion sense,” in Proceedings of the 15th IEEE International Conference on Image Processing (ICIP '08), pp. 2896–2899, October 2008.
 X. Delaunay, M. Chabert, V. Charvillat, G. Morin, and R. Ruiloba, “Satellite image compression by directional decorrelation of wavelet coefficients,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '08), pp. 1193–1196, April 2008.
 X. Delaunay, M. Chabert, V. Charvillat, and G. Morin, “Satellite image compression by post-transforms in the wavelet domain,” Signal Processing, vol. 90, no. 2, pp. 599–610, 2010.
 E. J. Candès and M. B. Wakin, “An introduction to compressive sampling: a sensing/sampling paradigm that goes against the common knowledge in data acquisition,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, 2008.
 D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
 L. Zelnik-Manor, K. Rosenblum, and Y. C. Eldar, “Sensing matrix optimization for block-sparse decoding,” IEEE Transactions on Signal Processing, vol. 59, no. 9, pp. 4300–4312, 2011.
 P. F. Dunn, Measurement and Data Analysis for Engineering and Science, McGraw-Hill, 1st edition, 2004.
 X. Delaunay, M. Chabert, G. Morin, and V. Charvillat, “Bit-plane analysis and contexts combining of JPEG2000 contexts for onboard satellite image compression,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), vol. 1, pp. I-1057–I-1060, April 2007.
 X. Delaunay, C. Thiebaut, E. Christophe et al., “Lossy compression by post-transforms in the wavelet domain,” in Proceedings of the On-Board Payload Data Compression Workshop, June 2008.
 X. Delaunay, M. Chabert, V. Charvillat, and G. Morin, “Satellite image compression by concurrent representations of wavelet blocks,” Annals of Telecommunications, vol. 67, no. 1-2, pp. 71–80, 2012.
 J. M. Kim, O. K. Lee, and J. C. Ye, “Compressive MUSIC: revisiting the link between compressive sensing and array signal processing,” IEEE Transactions on Information Theory, vol. 58, no. 1, pp. 278–301, 2012.
 S. Engelberg, “Compressive sensing [Instrumentation Notes],” IEEE Instrumentation and Measurement Magazine, vol. 15, no. 1, pp. 42–46, 2012.
 C. Deng, W. Lin, B.-S. Lee, and C. T. Lau, “Robust image coding based upon compressive sensing,” IEEE Transactions on Multimedia, vol. 14, no. 2, pp. 278–290, 2012.
 A. L. da Cunha, J. Zhou, and M. N. Do, “The nonsubsampled contourlet transform: theory, design, and applications,” IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006.
 Y. Jin and H.-J. Lee, “A block-based pass-parallel SPIHT algorithm,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 7, pp. 1064–1075, 2012.
Copyright
Copyright © 2014 Jin Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.