The Scientific World Journal
Volume 2014 (2014), Article ID 840762, 20 pages
http://dx.doi.org/10.1155/2014/840762
Research Article

An Efficient Image Compressor for Charge Coupled Devices Camera

Collaborative Innovation Center for Micro/Nano Fabrication, State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, Beijing 100084, China

Received 3 April 2014; Accepted 30 May 2014; Published 7 July 2014

Academic Editor: Wen-Jyi Hwang

Copyright © 2014 Jin Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Recently, discrete wavelet transform (DWT) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely seen as the state-of-the-art compression schemes for charge coupled device (CCD) cameras. However, CCD images contain a great deal of complex texture and contour information, so projecting them onto the DWT basis produces a large number of large-amplitude high-frequency coefficients, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity posttransform coupled with compressive sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a posttransform with a pair of bases is applied to the DWT coefficients. The pair consists of a DCT basis and a Hadamard basis, used at high and low bit rates, respectively. The best posttransform is selected by a norm-based approach (ℓ0 at low bit rates and ℓ1 at high bit rates). The posttransform is considered as the sparse representation stage of CS, and the posttransform coefficients are resampled by a sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder; its performance is comparable to that of JPEG2000 at low bit rates, and it does not have the excessive implementation complexity of JPEG2000.

1. Introduction

Charge coupled device (CCD) cameras are now heading toward high spatial resolution, high radiometric resolution, large field of view, and wide coverage [1–3]. To meet these requirements, the number of CCD pixels, the read-out rate, the quantization bits of the analog-to-digital (AD) converter, and the average shooting time are all increasing. As a result, the amount of digital image data in a CCD camera is growing sharply. Table 1 shows the input data rates of the compression systems of French Earth observation satellites in recent years.

Table 1: Input data rates of the compression systems of French Earth observation satellites in recent years.

As Table 1 shows, on-board image data rates are continuously increasing, whereas the maximum transmission rate of the on-board downlink channel is limited. In addition, the amount of flash-based nonvolatile solid-state memory on the satellite is also limited. It is therefore necessary to compress the on-board CCD images.

A space CCD camera compressor requires low complexity, high robustness, and high performance because the captured image information is precious and because it runs on a satellite, where resources such as power, memory, and processing capacity are limited [4, 5]. Yu et al. [6] surveyed the compression algorithms used in the compression systems of more than 40 space missions, classified by the underlying compression theory. The result is shown in Figure 1: more than half of the on-board image compression algorithms are transform based. At present, the most advanced on-board compression is based on the wavelet transform, which will also be the key technique in space camera compression applications.

Figure 1: Statistics of on-board image compression algorithms.

In recent years, many DWT-based compression approaches have been proposed, such as EZW [7], SPIHT [8], and SPECK [9]. The typical DWT-based algorithms are JPEG2000 [10] and the Consultative Committee for Space Data Systems-Image Data Compression standard (CCSDS-IDC) [11]. The JPEG2000 algorithm is composed primarily of a DWT and embedded block coding with optimized truncation (EBCOT) [12]. JPEG2000 gives good compression results. However, it is complex because three coding passes are required for each bit plane; in addition, its optimal rate control has high implementation complexity, whereas the suboptimal rate control is inaccurate. This makes the implementation of JPEG2000 on resource-limited space hardware particularly challenging, and the Consultative Committee for Space Data Systems (CCSDS) therefore considers JPEG2000 unsuited to on-board compression. The CCSDS-IDC algorithm is composed of a DWT and a bit-plane encoder (BPE). The BPE, which is a zero-tree encoder, exploits the structure of spatial orientation trees in the bit planes: when a parent coefficient is insignificant, its children and grandchildren also tend to be insignificant. This zero-tree property produces large zero regions in each bit plane, and taking full advantage of these regions improves coding efficiency. CCSDS-IDC offers progressive coding and fault tolerance; moreover, the BPE has low complexity and a small storage requirement, which makes it very suitable for on-board camera applications. However, its average PSNR is about 2 dB lower than that of JPEG2000.

Because remote sensing images have abundant texture and edge features, the DWT is not their optimal sparse representation [18–21]; projecting such images onto the DWT basis produces a large number of large-amplitude high-frequency coefficients, which is a disadvantage for the subsequent coding. In the JPEG2000 algorithm, EBCOT [22] is very efficient in removing the redundancy between wavelet coefficients, which makes JPEG2000 the best-performing encoder among existing image compression algorithms. To overcome this DWT issue, several promising transforms such as bandelets [23, 24], curvelets [25], contourlets [26], wedgelets [27], edgelets [28], and complex wavelets [29] have been studied. However, these approaches usually require oversampling, have higher complexity than the wavelet transform, and need nonseparable processing and nonseparable filter design.

Attempts to remove the redundancy between wavelet coefficients can be classified into two categories: transforms performed in the spatial domain and transforms performed in the transform domain. In [30], a two-dimensional (2D) edge-adaptive lifting structure was presented, in which the 2D prediction filter predicts the value of the next polyphase component according to an edge orientation estimator of the image. In [31], Chang and Girod proposed an adaptive lifted discrete wavelet transform to locally adapt the filtering direction to the geometric flow in the image. In [32], a direction-adaptive DWT (DA-DWT) was proposed, which locally adapts the filtering directions to image content based on directional lifting. In [33, 34], an oriented 1D multiscale decomposition on a quincunx sampling grid was proposed, which obtains the transform by adapting the lifting steps of a 1D wavelet transform along the local orientation. In [35, 36], adaptive directional lifting (ADL) was proposed, which performs lifting-based prediction in local windows along the direction of high pixel correlation. In [37, 38], a weighted adaptive lifting- (WAL-) based wavelet transform was proposed, which uses a weighting function to keep the prediction and update stages consistent. In [39], a 2D oriented wavelet transform (OWT) was introduced, which can perform an integrated oriented transform in an arbitrary direction and achieve a significant transform coding gain. However, these approaches usually produce blocking artifacts because the transform is performed in the spatial domain.

To overcome this issue, Peyré et al. [40, 41] proposed a new low-complexity compression approach based on a posttransform (PT). The PT is a transform applied to blocks of wavelet coefficients; it removes the redundancy between wavelet coefficients and thereby improves compression performance. In addition, because it processes 16-coefficient blocks and only carries out dot-product operations, it does not require large memories and can be implemented simply in hardware. The posttransform destroys the zero-tree structure, so the posttransformed coefficients can only be encoded by entropy coding approaches, such as arithmetic coding or Huffman coding, rather than by zero-tree coding approaches such as the BPE and SPIHT. However, on-board compression approaches require embedded and progressive coding characteristics. To adapt the posttransform to on-board applications, Delaunay et al. [42–44] proposed a compression scheme using the BPE from the CCSDS recommendation to code the posttransform coefficients. However, they apply the posttransform only to the grandchildren coefficients, so the compression gain is limited.

In this paper, we propose a low-complexity posttransform coupled with compressive sensing (PT-CS) compression approach for remote sensing images. In the DWT domain, a posttransform with a pair of bases, a DCT basis for high bit rates and a Hadamard basis for low bit rates, is applied, and the best posttransform is selected by a norm-based approach (ℓ0 at low bit rates, ℓ1 at high bit rates). The posttransform is considered as the sparse representation stage of CS, and the posttransform coefficients are resampled by a sensing measurement matrix.

The rest of this paper is organized as follows. Section 2 introduces the proposed algorithm. Section 3 presents the experimental results, and Section 4 concludes the paper.

2. Proposed Algorithm

2.1. The Imaging Principle of CCD

In order to explain the background of an efficient image compression system for CCD image data, we first briefly introduce the imaging principle of the CCD camera. The general structure of the system is plotted in Figure 2. The system contains several CCDs. The number of pixels of the panchromatic CCD is N_c, for example, 12000. Each CCD has four parallel analog output channels and 96 integration stages. To avoid a single-point failure of the whole system, the analog video signal of each CCD is processed independently, so the system needs mutually independent compression units, each compressing the image data of one CCD output. A 12-bit dedicated video processor is used for each channel of each CCD. The data rate of each CCD output image is

R = N_c · Q · f_l, with f_l = (v · f) / (H · a),

where N_c is the number of valid CCD elements, Q is the number of quantization bits, f_l is the push-broom line frequency, f is the focal length of the space camera, H is the average height of the satellite orbit, v is the subsatellite point velocity, and a is the CCD pixel size.

Figure 2: CCD image compression system.

When H is 500 km, f is 3.5 m, and v is 7063 m/s, the line frequency of the CCD is 7.06 kHz, and the total image data rate of the four channels of one CCD is 1.01664 Gbps. However, the data-transfer rate of the on-board downlink channel is only 300~600 Mbps, and the number of CCDs is 4. To meet the mission, our compressor therefore requires a compression ratio of 4 : 1~32 : 1.
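As a quick sanity check of the numbers above, the following Python sketch evaluates the line frequency and per-CCD data rate from the formula in Section 2.1; the 7 µm pixel size is an assumption inferred from the quoted 7.06 kHz line frequency, not a value stated in the paper.

```python
# Hedged arithmetic sketch of the data-rate formula R = N_c * Q * f_l with
# f_l = v * f / (H * a). The pixel size a = 7 um is an assumed value.
H = 500e3        # orbit height [m]
f = 3.5          # focal length [m]
v = 7063.0       # subsatellite point velocity [m/s]
a = 7e-6         # assumed CCD pixel size [m]
N_c = 12000      # valid CCD elements per line
Q = 12           # quantization bits

f_l = v * f / (H * a)            # push-broom line frequency [Hz]
R = N_c * Q * f_l                # per-CCD output data rate [bit/s]
print(f"line frequency ~ {f_l / 1e3:.2f} kHz, data rate ~ {R / 1e9:.3f} Gbps")
```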

The general structure of the imaging principle of the CCD is plotted in Figure 3. The long linear CCD consists of one linear array that measures the panchromatic spectrum in the 0.4–0.9 µm region. The light radiated or reflected by the ground pixels along a swath hundreds of kilometers long is concentrated onto the optical thin film of the CCD detector through an optical system. The spatial and spectral distribution of the ground target radiation acquired by the CCD detector depends on the solar elevation angle, the wavelength, the ground illuminance, the spectral reflectance, the atmospheric transmittance, and the point spread function of the optical system, which together determine the radiant flux reaching the detector. The optical thin film of the linear CCD array passes only light of the corresponding wavelength. The radiant flux is captured by the linear CCD array to produce analog signals, and the analog signals are processed to produce one line of image data, so one spatial dimension is acquired. As the CCD camera scans the ground target, the other spatial dimension is acquired; therefore, the CCD image is a 2D image. Based on this imaging principle, CCD images have spatial redundancy between adjacent pixels. According to compressive sensing (CS) sampling theory [45–47], sampling redundancy also widely exists in images, and images contain visual redundancy as well. Therefore, the compression algorithm must remove spatial, sampling, and visual redundancy efficiently. To meet the mission, our compressor requires a PSNR greater than or equal to 35 dB at compression ratios of 4 : 1~32 : 1.

Figure 3: Imaging principle of CCD.
2.2. Spatial Decorrelation

The 2D DWT decomposes the image into a lower-resolution approximation and detail subbands, which can be viewed as successive low-pass and high-pass filtering. At each level, the high-pass filter produces the detail information, called wavelet coefficients, while the low-pass filter associated with the scaling function produces the approximation information, called scaling coefficients. The DWT has been widely employed to exploit spatial correlations in remote sensing images, for example, in JPEG2000 and CCSDS-IDC. In this paper, we apply a 2D DWT coupled with a posttransform to the CCD image: the 2D DWT is performed on the CCD image to reduce spatial correlations, and the remaining intraband correlations are then reduced via a posttransform of the wavelet coefficients.

The 2D DWT leaves residual directional correlation between wavelet coefficients in a small neighborhood (see Figure 4). Statistical dependence between DWT coefficients has been studied for many years. In [48], correlations between nearby wavelet coefficients are reported in the range [0.01–0.54] at distances less than or equal to 3 pixels. We found an even wider range, and here we provide a more detailed discussion of this topic. We use the Pearson correlation coefficient [49] to analyze the statistical dependency of DWT coefficients:

ρ_{X,Y} = cov(X, Y) / (σ_X · σ_Y),

where cov(X, Y) denotes the covariance between the variables X and Y, and σ_X and σ_Y denote the standard deviations of X and Y, respectively. According to project experience, a three-level 2D DWT is appropriate for an on-board compressor, and we use a three-level 2D DWT in this paper.
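To make the measurement concrete, the sketch below estimates the Pearson correlation between horizontally adjacent coefficients of one detail subband; PyWavelets ("bior4.4", i.e., the 9/7 filter bank) and the synthetic test image are assumptions chosen for illustration, not the paper's data.

```python
import numpy as np
import pywt

# A minimal sketch (not the authors' code): measure the Pearson correlation
# between horizontally adjacent wavelet coefficients of one high-frequency
# subband. A random smoothed test image stands in for a real CCD image.
rng = np.random.default_rng(0)
image = rng.standard_normal((256, 256))
image = np.cumsum(np.cumsum(image, axis=0), axis=1)   # crude "natural" image

# Three-level 2D DWT with the 9/7 filter bank ("bior4.4" in PyWavelets).
coeffs = pywt.wavedec2(image, "bior4.4", level=3)
cH1, cV1, cD1 = coeffs[-1]                            # level-1 detail subbands

def neighbor_correlation(subband, shift=1):
    """Pearson correlation between coefficients `shift` pixels apart horizontally."""
    x = subband[:, :-shift].ravel()
    y = subband[:, shift:].ravel()
    return np.corrcoef(x, y)[0, 1]

for d in (1, 3, 5):
    print(f"distance {d}px:", round(neighbor_correlation(cH1, d), 3))
```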

Figure 4: The wavelet transform of high resolution image with a zoom on some coefficients.

A three-level 2D DWT is performed on each image band to produce one low-frequency subband (denoted LL) and nine high-frequency subbands (denoted HL_i, LH_i, and HH_i, i = 1, 2, 3). The test is performed on ten CCD images. The residual directional correlation between wavelet coefficients in a small neighborhood is shown in Figure 5. At every decomposition level, the residual directional correlation within a 16-connected region is substantial, which implies strong redundancy among neighboring coefficients. The results in Figure 5 also indicate that the residual correlations between nearby wavelet coefficients are much weaker at a distance of 5 pixels.

Figure 5: The correlation between wavelet coefficients in a small neighborhood.

For optimum compression performance on TDICCD images, algorithms must fully exploit the above-mentioned statistical properties. Several promising transforms, such as contourlets, curvelets, ridgelets, and bandelets, have been studied, but their implementation complexity is too high. In [50], EBCOT has been reported to be very efficient in capturing these residual dependencies, but its implementation complexity is also too high. In [51], Delaunay proposed a posttransform to exploit the remaining redundancies between wavelet coefficients: after the wavelet transform of the image, posttransforms are applied to each block of wavelet coefficients. A 4 × 4 block size is the best compromise for simple and effective compression: the residual correlations between nearby wavelet coefficients are very low at distances greater than or equal to 5 pixels, and the bigger the blocks, the more complex the computation; conversely, when the block size decreases, the number of blocks and thus the side information increase. Note that no blocking artifacts are visible on the reconstructed image because the blocks are processed in the wavelet domain.

2.3. Posttransform Theory

This section gives a short review of the posttransform, as introduced in [42, 44, 48]. The core idea behind posttransform compression is that blocks of wavelet coefficients are further transformed using one group of particular direction bases (such as bandelet, DWT, DCT, and PCA bases) from a dictionary D. First, a 2D DWT is applied to the image. Next, blocks of DWT coefficients are projected onto the orthonormal bases of the dictionary D. Then, a Lagrangian cost is computed and the posttransformed coefficients are encoded. Each block of DWT coefficients is considered as a vector f_b of the space R^N with N = 16. The vectors of basis a are denoted g_i^a with 1 ≤ i ≤ N. The posttransformed block can be expressed as

f_b^a = (⟨f_b, g_1^a⟩, ⟨f_b, g_2^a⟩, …, ⟨f_b, g_N^a⟩).

Since the dictionary has several bases, several posttransformed blocks f_b^a (including the original block) can be obtained. Among all the posttransformed blocks, the best one, a*, is selected by minimizing the Lagrangian rate-distortion cost

a* = arg min_a D(Q_q(f_b^a)) + λ R(Q_q(f_b^a), a),

where Q_q(f_b^a) denotes the quantized posttransformed coefficients, q is the quantization step, D(Q_q(f_b^a)) denotes the square error due to the quantization of the posttransformed block f_b^a, λ is a Lagrangian multiplier, and R(Q_q(f_b^a), a) denotes an estimate of the bit rate required for encoding Q_q(f_b^a) and the associated side information a.
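The following minimal sketch illustrates only the projection step for a single block, under stated assumptions (a flattened 4 × 4 block and a Hadamard basis as the single dictionary entry); it is an illustration, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import hadamard

# Illustrative sketch of the posttransform projection: a 4x4 block of wavelet
# coefficients, flattened to a vector in R^16, is projected onto one
# orthonormal basis of the dictionary.
rng = np.random.default_rng(1)
f_b = rng.standard_normal((4, 4)).ravel()   # one block of DWT coefficients

G = hadamard(16) / 4.0                      # orthonormal Hadamard basis of R^16
coeffs = G.T @ f_b                          # posttransformed block: <f_b, g_i>
reconstructed = G @ coeffs                  # orthonormality => perfect inverse

assert np.allclose(reconstructed, f_b)
print("largest |coefficient|:", np.abs(coeffs).max())
```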

2.4. Our Posttransform Method

In the posttransform, the dictionary has multiple bases: the larger the number of bases, the better the compression performance, but also the higher the computational complexity. Because on-board compression requires low computational complexity, the space CCD compressor allows only a single-basis posttransform. In [52], a low-complexity compression scheme using the posttransform with only a Hadamard basis has been proposed, with the posttransform applied only at the first level of the wavelet transform; the PSNR increase is reported to be only 0.4 dB to 0.6 dB compared with the DWT alone. In this paper, to obtain a low-complexity yet efficient posttransform, we consider a very simple dictionary containing only one dynamic basis, namely, the Hadamard basis at low bit rates and the DCT basis at high bit rates.

In [53], Delaunay et al. showed that the Lagrangian approach to selecting the best posttransformed block has two main drawbacks: the bit-rate estimate is computationally intensive and not always accurate, and the choice of the best posttransformed block depends on the quantization step q in the rate-distortion criterion, whereas the coder does not know the quantization step when coding. Therefore, Delaunay et al. proposed an ℓ1-norm minimization approach to select the best posttransformed block. In this paper, we propose a new selection method based on ℓ0-norm minimization at low bit rates and ℓ1-norm minimization at high bit rates, replacing the single ℓ1-norm criterion. It is adapted to the low-complexity constraints of the space TDICCD compressor.

At low bit rates, the bit rate is roughly proportional to the number of nonzero posttransformed coefficients after quantization,

R ≈ γ · ||Q_q(f_b^a)||_0,

where ||·||_0 counts the nonzero posttransformed coefficients. We therefore propose an ℓ0-norm minimization approach to select the best posttransformed block; the selected block is the one with the fewest high-magnitude coefficients:

a* = arg min_a ||f_b^a||_0,

where the count is taken over the posttransformed coefficients y_i^a. At high bit rates, the bit rate grows with the magnitudes of the coefficients and can be approximated by their sum,

R ≈ γ · Σ_i |y_i^a|.

We then use the ℓ1-norm minimization approach to select the best posttransformed block; the selected block is again the one with the fewest high-magnitude coefficients:

a* = arg min_a Σ_{i=1}^{N} |y_i^a|.

Figure 6 shows the proposed posttransform architecture. Each high-frequency subband is posttransformed. The bit-rate comparator decides the type of coding bit rate: a coding bit rate greater than or equal to 0.5 bpp is defined as the high-bit-rate type (type 1), and a bit rate below 0.5 bpp as the low-bit-rate type (type 0). The posttransform is performed with the DCT basis and ℓ1 minimization when coding type 1, and with the Hadamard basis and ℓ0 minimization when coding type 0.
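A compact sketch of this dynamic selection is given below; the 0.5 bpp switching point comes from the text, while the significance threshold for the ℓ0 count and the fallback to the untransformed block are illustrative assumptions rather than details given in the paper.

```python
import numpy as np
from scipy.linalg import hadamard
from scipy.fft import dct

# Hedged sketch of the dynamic-basis selection: Hadamard basis with an l0
# criterion at low bit rates, DCT basis with an l1 criterion at high bit rates.
N = 16
BASES = {
    "hadamard": hadamard(N) / np.sqrt(N),          # orthonormal Hadamard basis
    "dct": dct(np.eye(N), norm="ortho", axis=0),   # orthonormal DCT-II basis
}

def posttransform_block(block, target_bpp, threshold=1.0):
    """Return (basis_name, coefficients) for one flattened 4x4 block."""
    f_b = block.ravel()
    if target_bpp < 0.5:                           # type 0: low bit rate
        name, cost = "hadamard", lambda y: np.count_nonzero(np.abs(y) > threshold)
    else:                                          # type 1: high bit rate
        name, cost = "dct", lambda y: np.abs(y).sum()
    y = BASES[name].T @ f_b
    # Keep the posttransform only if it lowers the cost of the original block.
    return (name, y) if cost(y) < cost(f_b) else ("identity", f_b)

rng = np.random.default_rng(2)
print(posttransform_block(rng.standard_normal((4, 4)), target_bpp=0.25)[0])
```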

Figure 6: The proposed posttransform architecture.
2.5. Proposed Posttransform Compressive Sensing (PT-CS)

In [54], an adaptive arithmetic coder is used to encode both the posttransformed block coefficients and the side information identifying the posttransform basis chosen for each block. In [53], the bit-plane encoder (BPE) is used to encode both the posttransformed block coefficients and the side information. However, because the posttransforms destroy the zero-tree structure, the PSNR obtained with the BPE is 0.2 dB lower than with the DWT alone, and even worse at high bit rates. A basis vector ordering approach has therefore been used, in which the ordering is defined by processing several thousand blocks of wavelet coefficients from a learning set of images. This ordering approach has two drawbacks for space CCD compression. First, the ordering of the basis vectors is computationally expensive and not always accurate. Second, the ordering depends strongly on the learning set of images, which is not available when coding on board.

Indeed, after the 2D DWT and posttransform, the TDICCD image is sparse in the posttransform domain. To achieve a higher compression ratio while keeping the complexity low, we consider the 2D DWT and posttransform as the sparse representation stage for the TDICCD image, so the posttransform coefficients can be resampled with sensing matrices to achieve compression. According to compressed sensing (CS), a sparse signal with a few significant samples in one basis can be reconstructed almost perfectly from a small number of random projections onto a second basis that is incoherent with the first.

First, we give a short review of CS, as introduced in [55]; a good overview can be found in [56]. A complete CS scheme involves three main stages: sparse representation, sensing measurements, and signal reconstruction. The sparse representation stage represents the original signal x of length N with a coefficient vector s in an orthonormal transform basis matrix Ψ:

x = Ψ s.

If the number of nonzero or significant coefficients in the vector s is K, the original signal is said to be K-sparse in the basis Ψ. The sensing measurement stage projects the original signal x into a cluster of measurements y with significantly fewer elements than x:

y = Φ x,

where Φ is an M × N measurement matrix with M ≪ N. The above expression can then be written as

y = Φ Ψ s = Θ s.

Since M ≪ N, compression is achieved; indeed, the core idea of CS is to remove sampling redundancy by requiring only M samples of the signal. The resulting measurements y are used for the recovery of the original signal x. If the measurement matrix Φ is properly designed, Θ = Φ Ψ can satisfy the so-called restricted isometry property (RIP), and the original signal can be exactly or approximately recovered by solving the following standard convex optimization problem:

min ||s||_1 subject to y = Θ s.
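The toy sketch below runs these three stages end to end under assumed dimensions (N = 256, K = 8, M = 64) with the canonical basis as Ψ; scikit-learn's orthogonal matching pursuit stands in for the ℓ1 program purely for brevity.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Minimal CS sketch under assumed toy dimensions: a K-sparse vector s in the
# canonical basis (Psi = I) is measured with a Gaussian matrix Phi and
# recovered with orthogonal matching pursuit instead of the l1 program.
rng = np.random.default_rng(3)
N, K, M = 256, 8, 64

s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)  # K-sparse signal
Phi = rng.standard_normal((M, N)) / np.sqrt(M)               # sensing matrix
y = Phi @ s                                                   # M measurements

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K, fit_intercept=False).fit(Phi, y)
print("max recovery error:", np.abs(omp.coef_ - s).max())
```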

In this paper, we apply CS to compress remote sensing images. Figure 7 shows the compressive sensing process using a DWT sparse representation for remote sensing. A one-level DWT of the remote sensing image produces one low-frequency subband, LL, and three high-frequency subbands, HL, LH, and HH. Each high-frequency subband undergoes sensing measurements with a Gaussian random measurement matrix; since M ≪ N, compression is achieved. For the three high-frequency subbands, the sparsity is 113, 71, and 7, respectively, and the measurement numbers of all high-frequency subbands are chosen large enough for reconstruction, so the high-frequency subband coefficients can be recovered. The inverse DWT is then performed to obtain the reconstructed image, whose PSNR reaches 37.02 dB. Therefore, CS offers good compression performance for remote sensing images.

Figure 7: The compressive sensing process using DWT sparse representation for remote sensing.

In this paper, we consider the posttransformed coefficients as the sparse representation s. Let Ψ_w be the DWT orthonormal basis matrix and Ψ_p the posttransform orthonormal basis matrix; their product Ψ = Ψ_w Ψ_p is considered as the sparse representation basis. The posttransformed coefficients then undergo sensing measurements with the measurement matrix Φ to achieve compression. In the CS sense, the posttransformed coefficients achieve compression by removing sampling redundancy, in place of the BPE coder.

In [57], wavelet-based CS has been proposed, in which images are considered sparse in a wavelet basis. In our approach, we use the DWT and posttransform as the transform basis at the sparse representation stage. After sparse representation, most of the image information is concentrated in a small number of transform coefficients, and most of the transform coefficients are not zero but very small. We use hard-threshold- (HT-) based image denoising to indirectly measure the sparsity of the transformed image: the better the sparsity, the more of the image information is captured by the significant coefficients retained after HT, and the higher the peak signal-to-noise ratio (PSNR) of the denoised image. We use AVIRIS images in our test and choose the same threshold as [58]. Figure 8 shows the PSNR results for the various transform approaches used in our method. As the figure shows, the posttransform offers better sparsity, because it exploits the remaining redundancies between the wavelet coefficients.

Figure 8: Denoising performance with both DWT and posttransform method.

In a CS system, the sparsity of the transformed image is one of the key factors affecting the reconstructed image quality. Below, we analyze the relationship between the measurement number M of the measurement matrix and the sparsity K of the transformed image, using a Gaussian random matrix as the sensing matrix. First, we study the relationship between K and M for a one-dimensional (1D) signal. We use a 1D signal with 256 samples and the orthogonal matching pursuit (OMP) method to recover the original signal. Let r denote the ratio of the number of correctly recovered samples to the total number of samples of the original signal. Figure 9 shows the variation trend between the sparsity K and the measurement number M. To recover the original signal accurately, the larger K is, the larger M must be, and once M exceeds a certain threshold the signal can be recovered accurately. That is, the better the sparsity of the signal, the smaller the required measurement number and the better the compression performance and reconstructed signal quality.
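A self-contained version of this experiment is sketched below; the grids of K and M, the number of trials, and the exact-support success criterion are assumptions chosen for illustration, and a plain greedy OMP loop replaces whatever solver the authors used.

```python
import numpy as np

# Hedged sketch of the K-versus-M recovery experiment: for each sparsity K and
# measurement number M, recover a 256-sample sparse signal with a basic OMP
# and record the fraction of trials recovered (almost) exactly.
rng = np.random.default_rng(4)
N = 256

def omp(Phi, y, K):
    """Basic orthogonal matching pursuit: pick K atoms greedily, then least squares."""
    residual, support = y.copy(), []
    for _ in range(K):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    s_hat = np.zeros(Phi.shape[1])
    s_hat[support] = coef
    return s_hat

for K in (4, 8, 16):
    for M in (32, 64, 128):
        ok = 0
        for _ in range(20):
            s = np.zeros(N)
            s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
            Phi = rng.standard_normal((M, N)) / np.sqrt(M)
            ok += np.allclose(omp(Phi, Phi @ s, K), s, atol=1e-6)
        print(f"K={K:2d} M={M:3d} success rate={ok / 20:.2f}")
```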

Figure 9: Variation trend between the sparsity K and the measurement number M.

For 2D remote sensing images, we consider the DWT, the DCT, and our posttransform as the sparse basis. The different sparse bases lead to different PSNRs for the same measurement number M (see Figures 10 and 11): the better the sparsity of the image representation, the better the reconstructed image quality. Since our posttransform offers better sparsity than the wavelet-based transform, it is well suited as the CS sparse representation stage.

Figure 10: PSNR variation trend with the measurement number M when using the three bases.
Figure 11: Reconstructed image using our method and the DWT as the sparse basis under the same measurement settings.
2.6. Deep Coupling between CS and Posttransform

The sensing measurement number M has a great influence on the reconstructed signal quality when the posttransform basis is used (see Figures 10 and 12): the larger M is, the better the reconstructed image quality. The measurement number of the sensing matrix is determined by the compression ratio (CR); lower values of M give a higher compression ratio.

Figure 12: Reconstructed images using our method at different measurement numbers M.

In our approach, each subband performs sensing measurements independently. The measurement numbers for the subbands after the posttransform, that is, HL_i, LH_i, and HH_i, are denoted by M_{HL_i}, M_{LH_i}, and M_{HH_i} (i = 1, 2, 3). The measurement result of each subband can be expressed as y = Φ x, where x collects the posttransformed coefficients of the subband and Φ is its Gaussian measurement matrix. The CR can be considered as the ratio between the total number of bits in the original transformed coefficients and the number of bits that must be transmitted,

CR = (N · B) / ((N_LL + Σ M) · B),

where B denotes the bit depth of each pixel, N is the total number of transformed coefficients, N_LL is the number of LL-subband coefficients, and the sum runs over the measurement numbers of all high-frequency subbands.
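The arithmetic sketch below illustrates how a target CR bounds the total measurement budget, under the simplifying assumptions that the LL subband is transmitted in full and that measurements use the same bit depth as the original coefficients.

```python
# Hedged arithmetic sketch relating the compression ratio to the measurement
# budget; the LL-kept-in-full and equal-bit-depth assumptions are illustrative.
def total_measurements(n_pixels, target_cr, levels=3):
    n_ll = n_pixels // (4 ** levels)        # LL coefficients kept as-is
    n_transmit = n_pixels / target_cr       # values allowed by the target CR
    return max(int(n_transmit - n_ll), 0)   # budget for all sensing matrices

for cr in (4, 8, 16, 32):
    print(f"CR {cr:2d}:1 ->", total_measurements(512 * 512, cr), "measurements")
```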

In order to determine the measurement numbers of all the measurement matrices efficiently, we propose a deep coupling between the posttransform and CS that not only determines the measurement numbers and reduces the posttransform side information but also codes the measurement results and performs the bit-rate control. Figure 13 shows the proposed deep coupling between the posttransform and CS.

Figure 13: Proposed deep coupling algorithm.

The bit-rate allocation module allocates the bit rate for each sensing matrix, and the information evaluation module evaluates the information content of each tensor. The target bit rates of the different sensing matrices are then allocated according to their information contents, and the measurement number of each sensing matrix is determined according to its allocated bit rate.

First, the information content of the LL subband is calculated directly from its coefficients.

Second, the information content of the nine tensors, that is, the posttransformed subbands HL_i, LH_i, and HH_i (i = 1, 2, 3), is calculated. Since the selected representation of a block reflects the image information, the information content of a tensor is evaluated from the selected representations of the posttransformed blocks it contains; the remaining tensors are calculated in the same way.

Third, the bit-rate allocation weight of each tensor is obtained by normalizing its information content by the total information content of all tensors.
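The sketch below shows one way such a weighting can drive the measurement allocation; the magnitude-sum proxy for information content and the proportional split are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

# Hedged sketch of information-driven allocation: each tensor's weight is its
# normalized (assumed magnitude-sum) information content, and the measurement
# budget is divided according to those weights.
def allocate_measurements(tensors, total_budget):
    info = {name: np.abs(coeffs).sum() for name, coeffs in tensors.items()}
    total_info = sum(info.values())
    weights = {name: v / total_info for name, v in info.items()}
    return {name: int(round(w * total_budget)) for name, w in weights.items()}

rng = np.random.default_rng(5)
tensors = {f"{b}{l}": rng.standard_normal((128 // (2 ** l), 128 // (2 ** l)))
           for l in (1, 2, 3) for b in ("HL", "LH", "HH")}
print(allocate_measurements(tensors, total_budget=6000))
```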

In this paper, a bit-plane coding procedure is used to code the quantized measurement results. We modify the SPIHT algorithm [59] to perform the bit-plane coding. The bit-plane coder processes one quantized measurement result at a time; after one quantized measurement result is processed, the next one is processed. Each pass of the bit-plane coding includes two stages: the significance pass (SP) and the refinement pass (RP). We define a significance map for a given threshold T_n and the quantized measurement element c(i, j, k) at location (i, j, k). Let s_n(i, j, k) be the significance state for the threshold T_n = 2^n in the nth bit plane, for example,

s_n(i, j, k) = 1 if |c(i, j, k)| ≥ T_n, and s_n(i, j, k) = 0 otherwise.

For s_n(i, j, k) = 1, the element c(i, j, k) is considered significant; it must be encoded and removed from the quantized measurement result, while the insignificant elements are preserved for the next bit plane. After that, the significance threshold is halved and the process is repeated for the next pass.
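A stripped-down version of this significance/refinement loop is sketched below; sign coding and the SPIHT set-partitioning lists are omitted, so it only illustrates how the threshold is halved plane by plane.

```python
import numpy as np

# Simplified sketch of the significance/refinement bit-plane loop described
# above (sign handling and SPIHT set partitioning are omitted for brevity).
def bitplane_encode(values, n_planes=8):
    """Emit (pass, plane, index, bit) tuples for the magnitude bit planes of `values`."""
    mags = np.abs(values).astype(np.int64)
    significant = np.zeros(len(values), dtype=bool)
    stream = []
    for n in range(n_planes - 1, -1, -1):
        threshold = 1 << n
        # Significance pass: elements that first exceed the current threshold.
        newly = (~significant) & (mags >= threshold)
        for idx in np.flatnonzero(newly):
            stream.append(("SP", n, int(idx), 1))
        significant |= newly
        # Refinement pass: one more magnitude bit of already-significant elements.
        for idx in np.flatnonzero(significant & ~newly):
            stream.append(("RP", n, int(idx), int((mags[idx] >> n) & 1)))
    return stream

print(bitplane_encode(np.array([37, -5, 0, 120]))[:6])
```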

2.7. Our Proposed Codec Architecture

In order to remove spatial, sampling, and visual redundancy efficiently, we propose a low-complexity posttransform coupled with compressive sensing (PT-CS) compression approach for remote sensing images. Figure 14 shows the proposed architecture. The compression is performed in five steps. In the first step, a 3-level 2D DWT is applied to the TDICCD image to obtain one low-frequency subband and nine high-frequency subbands. In the second step, blocks of 4 × 4 DWT coefficients of all the subbands except the LL subband are posttransformed. In the third step, CS measurement is applied to the posttransformed subbands. In the fourth step, the coefficients in the low-frequency subband are encoded by DPCM, while the measurement results are quantized, bit-plane coded, and then entropy coded using adaptive arithmetic coding. In the fifth step, the bit streams are packed and transmitted to the decoder via the signal channels.

Figure 14: Compression algorithm architecture for CCD image using the proposed PT-CS method.

Note that each CCD outputs four-channel images, and the four channels are processed simultaneously by the PT-CS compressor. The synchronicity of the four channels is therefore very important and affects the compression performance of the proposed algorithm. In fact, the linear CCD produces 12 k pixels (see Figure 15), which are output through four or eight channels; in our system, we use the four-channel mode. The image zone produces charges, and the charges are moved to the output zone, where they are read out by four channels. To avoid pixel smear, the charges in the output zone must be read out by the four channels simultaneously, under the driver clocks, before the charges in the image zone move to the output zone. Therefore, the four-channel image is always output simultaneously.

Figure 15: The outline structure of the linear CCD.

3. Experimental Results

3.1. Experimental Scheme

To test the performance of the proposed algorithm, we use independently developed ground test equipment. The experimental system is shown in Figure 16; it is composed of the TDICCD camera, the image simulation resource, the TDICCD image compression system, the ground test equipment, a DVI monitor, and a server. The TDICCD image compression system is shown in Figure 17. The server injects the remote sensing images into the image simulation resource, which adjusts the images to simulate the output of the CCD and then transfers them to the TDICCD image compression system to verify the compression algorithm. The ground test equipment performs the image decompression to obtain the reconstructed images and then transfers them to the server through the Camera Link bus. Finally, the compression performance is analyzed on the server.

Figure 16: The structure of experiment system.
Figure 17: The designed compression system for CCD camera using our PT-CS algorithm.
3.2. TDICCD Image Compression Validation and Analysis

To validate the proposed algorithm, we perform the experiments in two steps. First, the CCD camera sends test images to the compression system using a rotating cylinder as the target (see Figure 18). The CCD camera images the rotating cylinder line by line; the working line frequency is set to 7.06 kHz, and the number of pixels in each line is 3072. During 15 minutes of shooting, the captured images are stored in the NAND flash array of the camera. Twenty frames from the NAND flash array are then sent to the compression system; the bit depth per pixel is 10 bits. The test image is encoded, transferred by the high-speed serial gigabit transfer system, reconstructed by the ground test equipment, and then sent to a PC through Camera Link. Figure 19 shows the reconstructed images at different bit rates.

Figure 18: The rotating cylinder as a target.
Figure 19: Reconstructed image at different bit rates.

Second, the server injects AVIRIS multiband remote sensing images into the image simulation resource. Six remote sensing test images with a bit depth of 8 bpp (bits per pixel) are used, and the compression is run over the designed range of rates. Figure 20 shows the reconstructed remote sensing images at different bit rates. The original and reconstructed images are almost indistinguishable because the proposed compression algorithm has a high signal-to-noise ratio; the PSNR reaches 46.75 dB, 45.52 dB, 43.60 dB, and 40.40 dB in Figures 20(a)~20(e), respectively. After each image was encoded by the proposed algorithm at 1 bpp, the dynamic range of the reconstruction error is 0~2 bits, and most error values are about 1 bit. Hence, the proposed algorithm is validated for space CCD image compression.

Figure 20: Reconstructed remote sensing image at different bit rates.
3.3. Remote Sensing Image Compression Algorithm Performance Analysis

To objectively evaluate the performance of the proposed deep-coupling-based compression scheme, extensive experiments were carried out on a number of multispectral data sets at various coding bit rates. In the first part of the experiments, to test the compression performance of the proposed approach, we use three groups of SPOT-1 remote sensing images having different texture characteristics. The quality assessment of the decoded images is based on rate-distortion results measured by means of the overall SNR, given by

SNR = 10 · log10(P_s / MSE),

where P_s and MSE denote the power of the original image and the mean squared error, respectively. Table 2 shows the test results of our approach at different bit rates.
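For reference, the following sketch evaluates this SNR together with the PSNR quoted elsewhere in the paper; the random test arrays are placeholders for the real original/decoded image pairs.

```python
import numpy as np

# Small sketch of the quality metrics; `original` and `decoded` are assumed to
# be same-sized arrays of pixel values (8-bit range for the PSNR).
def snr_db(original, decoded):
    mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
    signal_power = np.mean(original.astype(float) ** 2)
    return 10.0 * np.log10(signal_power / mse)

def psnr_db(original, decoded, peak=255.0):
    mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(6)
img = rng.integers(0, 256, (64, 64))
noisy = np.clip(img + rng.normal(0, 2, img.shape), 0, 255)
print(round(snr_db(img, noisy), 2), round(psnr_db(img, noisy), 2))
```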

Table 2: Test results of compression.

According to Table 2, the average PSNR reaches up to 40 dB, so the proposed algorithm compresses well and satisfies the design requirements.

In the second experiment, to compare the compression performance of our approach with other algorithms, we compare the results obtained with the proposed coder against those achieved with CCSDS-IDC, JPEG2000, and the Hadamard posttransform implemented independently. We use AVIRIS remote sensing images from the JPL laboratory, with a depth of 8 bits per pixel. The images are Low Altitude, Lunar Lake, Jasper Ridge, and Cuprite, with three test images per group. The coding bit rate is set to 2.0~0.25 bpp. The average comparisons are shown in Figure 21. Owing to the full use of the posttransform and CS, the proposed compression scheme achieves higher compression performance, with a PSNR gain of 0.3~1.3 dB on average over the CCSDS-IDC and Hadamard posttransform codecs at the nominal bit rates of 2.0~0.25 bpp, and it is only 0.1~0.9 dB lower than JPEG2000. Overall, the proposed scheme shows excellent lossy compression performance and delivers better compression results than the commonly used coders.

Figure 21: Test and comparison results for the multiband images; the left plot shows the PSNR of the three compression codecs, and the right plot shows the PSNR difference between CCSDS-IDC and the other two codecs.
3.4. Proposed Algorithm Complexity Analysis and Compression Time

In the following, we analyze the complexity of the algorithm. In our method, a three-level 2D DWT is applied to the spatial bands. For an L-tap filter bank and J wavelet decomposition levels, the complexity of applying the 2D DWT to an N × N image is proportional to L · N^2 operations, because the subband sizes shrink geometrically with the level; in our method, we use the 9/7 DWT with three levels of decomposition. After the 2D DWT, at low bit rates we use the Hadamard posttransform: for each block of 16 coefficients, the fast Hadamard transform needs 16 · log2(16) = 64 additions. At high bit rates, we use the DCT posttransform; the computational load of the DCT is much lower than that of the DWT. One of the most efficient 2D DCT implementations requires only 54 multiplications and 462 additions, that is, merely 0.84 multiplications and 7.22 additions per point; the posttransform of a block therefore requires only 27 multiplications and 231 additions. The best posttransform selection then requires a few additions and one comparison per block, so only a small, fixed number of operations per block of coefficients is required.

After the posttransform, the sensing measurement of a subband with N_s posttransformed coefficients and M_s measurements is a matrix-vector product, so its multiplication complexity is of order M_s · N_s and its addition complexity is of order M_s · (N_s − 1).

The following compression times are only estimates, since our FPGA implementation of the posttransform is not optimized. The estimates are based on the lossy compression of a test image at 1.0 bpp on an FPGA evaluation board with a system clock frequency of 88 MHz. Table 3 compares the complexity of the proposed compression algorithm with that of the other algorithms.

Table 3: The results of complexity comparison.

From Table 3, the processing time of our algorithm reaches 0.016 µs/sample, and the data throughput is 62.5 MSPS, which is higher than that of JPEG2000 and CCSDS-IDC, so our approach has low complexity. In our project, the space CCD camera works at an orbit altitude of 500 km, over the operating roll angles and latitudes, with a line frequency of 7.2376 kHz~3.4378 kHz. At this line frequency, capturing one image frame requires 70.74 ms, while compressing it with our approach requires 7.86 ms. So, our approach can process the four channels of images simultaneously, which meets the requirement of the project.

In addition, we use an XC2V6000-6FF1152 FPGA to implement the proposed algorithm. The design language is Verilog HDL, the development platform is ISE 8.2, and the synthesis tool is XST. Table 4 shows the resource occupancy of our approach.

Table 4: The occupancy of resources.

From Table 4, the LUTs occupy 67%, the slices occupy 70%, and the BRAM occupies 80%. All indicators are below 95%, which meets the requirement of our project.

4. Conclusion

In this paper, we have proposed a low-complexity posttransform coupled with compressive sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a posttransform with a pair of bases is applied to the DWT coefficients; the pair consists of a DCT basis and a Hadamard basis, used at high and low bit rates, respectively. The best posttransform is selected by a norm-based approach (ℓ0 at low bit rates, ℓ1 at high bit rates). The posttransform is considered as the sparse representation stage of CS, and the posttransform coefficients are resampled by a sensing measurement matrix. Experimental results on on-board TDI-CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder; its performance is comparable to that of JPEG2000 at low bit rates, without the excessive implementation complexity of JPEG2000.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The project is sponsored by China Postdoctoral Science Foundation (no. 2014M550720), the National High Technology Research and Development Program of China (863 Program) (no. 2012AA121503, no. 2012AA121603), and China NSF Projects (no. 61377012, no. 60807004).

References

  1. Q. Liu, S. Wang, X. Zhang, and Y. Hou, “Improvement of the space resolution of the optical remote sensing image by the principle of CCD imaging,” in Proceedings of the 2nd International Conference on Image Processing Theory, Tools and Applications (IPTA '10), pp. 477–481, Paris, France, July 2010.
  2. J. Zang, Y. Li, X. Xue, and Y. Guo, “Multi-channel high-speed TDICCD image data acquisition and storage system,” in Proceedings of the International Conference on E-Product E-Service and E-Entertainment (ICEEE '10), pp. 1–4, Henan, China, 2010.
  3. C. Fan and B. Zhang, “Analysis on the dynamic image quality of the TDICCD camera,” in Proceedings of the International Conference on Optics, Photonics and Energy Engineering (OPEE '10), vol. 1, pp. 62–64, Wuhan, China, May 2010.
  4. C. Lambert-Nebout and G. Moury, “Survey of on-board image compression for CNES space missions,” in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS '99), vol. 4, pp. 2032–2034, July 1999.
  5. A. S. Dawood, J. A. Williams, and S. J. Visser, “On-board satellite image compression using reconfigurable FPGAs,” in Proceedings of the IEEE International Conference on Field-Programmable Technology, pp. 306–310, 2002.
  6. G. Yu, T. Vladimirova, and M. N. Sweeting, “Image compression systems on board satellites,” Acta Astronautica, vol. 64, no. 9-10, pp. 988–1005, 2009.
  7. J. M. Shapiro, “Embedded image coding using zerotrees of wavelet coefficients,” IEEE Transactions on Signal Processing, vol. 41, no. 12, pp. 3445–3462, 1993.
  8. A. Said and W. A. Pearlman, “A new, fast, and efficient image codec based on set partitioning in hierarchical trees,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, no. 3, pp. 243–250, 1996.
  9. A. Islam and W. A. Pearlman, “An embedded and efficient low-complexity hierarchical image coder,” in Proceedings of the Meeting on Visual Communications and Image Processing, vol. 3653, pp. 294–305, January 1999.
  10. T. Acharya and P.-S. Tsai, JPEG2000 Standard for Image Compression: Concepts, Algorithms and VLSI Architectures, John Wiley and Sons, 2005.
  11. CCSDS, “Image Data Compression Recommended Standard,” CCSDS 122.0-B-1 Blue Book, November 2005.
  12. D. S. Taubman and M. W. Marcellin, “JPEG2000: standard for interactive imaging,” Proceedings of the IEEE, vol. 90, no. 8, pp. 1336–1357, 2002.
  13. L. Liu, N. Chen, H. Meng, L. Zhang, Z. Wang, and H. Chen, “A VLSI architecture of JPEG2000 encoder,” IEEE Journal of Solid-State Circuits, vol. 39, no. 11, pp. 2032–2040, 2004.
  14. K. Mathiang and O. Chitsobhuk, “Efficient pass-pipelined VLSI architecture for context modeling of JPEG2000,” in Proceedings of the Asia-Pacific Conference on Communications (APCC '07), pp. 63–66, Bangkok, Thailand, October 2007.
  15. H. Wang, J. Chen, X. Gu, and X. Chen, “High speed and bi-mode image compression core for onboard space application,” in International Conference on Space Information Technology 2009, vol. 7651 of Proceedings of SPIE, Beijing, China, April 2010.
  16. A. Lin, C. F. Chang, M. C. Lin, and L. J. Jan, “High-performance computing in remote sensing image compression,” in High-Performance Computing in Remote Sensing, vol. 8183 of Proceedings of SPIE, Prague, Czech Republic, September 2011.
  17. Y. Seo and D. Kim, “VLSI architecture of line-based lifting wavelet transform for motion JPEG2000,” IEEE Journal of Solid-State Circuits, vol. 42, no. 2, pp. 431–440, 2007.
  18. V. Velisavljević, B. Beferull-Lozano, M. Vetterli, and P. L. Dragotti, “Directionlets: anisotropic multidirectional representation with separable filtering,” IEEE Transactions on Image Processing, vol. 15, no. 7, pp. 1916–1933, 2006.
  19. D. L. Donoho and I. M. Johnstone, “Ideal spatial adaptation by wavelet shrinkage,” Biometrika, vol. 81, no. 3, pp. 425–455, 1994.
  20. A. Chambolle, R. A. DeVore, and N. Lee, “Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage,” IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 319–335, 1998.
  21. L. Şendur and I. W. Selesnick, “Bivariate shrinkage functions for wavelet-based denoising exploiting interscale dependency,” IEEE Transactions on Signal Processing, vol. 50, no. 11, pp. 2744–2756, 2002.
  22. D. Taubman, “High performance scalable image compression with EBCOT,” IEEE Transactions on Image Processing, vol. 9, no. 7, pp. 1158–1170, 2000.
  23. F. Qi, Y. Li, and K. Zhang, “Infrared image denoising by fuzzy threshold based on Bandelets transform,” Acta Photonica Sinica, vol. 37, no. 12, pp. 2564–2567, 2008.
  24. E. Le Pennec and S. Mallat, “Sparse geometric image representations with bandelets,” IEEE Transactions on Image Processing, vol. 14, no. 4, pp. 423–438, 2005.
  25. E. J. Candès and D. L. Donoho, “New tight frames of curvelets and optimal representations of objects with piecewise C2 singularities,” Communications on Pure and Applied Mathematics, vol. 57, no. 2, pp. 219–266, 2004.
  26. M. N. Do and M. Vetterli, “The contourlet transform: an efficient directional multiresolution image representation,” IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091–2106, 2005.
  27. M. Wakin, J. Romberg, H. Choi, and R. Baraniuk, “Rate-distortion optimized image compression using wedgelets,” in Proceedings of the International Conference on Image Processing, vol. 3, pp. III-237–III-240, September 2002.
  28. M. B. Wakin, J. K. Romberg, H. Choi, and R. G. Baraniuk, “Wavelet-domain approximation and compression of piecewise smooth images,” IEEE Transactions on Image Processing, vol. 15, no. 5, pp. 1071–1087, 2006.
  29. J. J. Ranjani and S. J. Thiruvengadam, “Dual-tree complex wavelet transform based SAR despeckling using interscale dependence,” IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 6, pp. 2723–2731, 2010.
  30. Ö. N. Gerek and A. E. Çetin, “A 2-D orientation-adaptive prediction filter in lifting structures for image coding,” IEEE Transactions on Image Processing, vol. 15, no. 1, pp. 106–111, 2006.
  31. C. Chang and B. Girod, “Direction-adaptive discrete wavelet transform via directional lifting and bandeletization,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '06), pp. 1149–1152, Atlanta, Ga, USA, October 2006.
  32. C. L. Chang and B. Girod, “Direction-adaptive discrete wavelet transform for image compression,” IEEE Transactions on Image Processing, vol. 16, no. 5, pp. 1289–1302, 2007.
  33. V. Chappelier and C. Guillemot, “Oriented wavelet transform on a quincunx pyramid for image compression,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '05), vol. 1, pp. 81–84, September 2005.
  34. V. Chappelier and C. Guillemot, “Oriented wavelet transform for image compression and denoising,” IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 2892–2903, 2006.
  35. W. Ding, F. Wu, and S. Li, “Lifting-based wavelet transform with directionally spatial prediction,” in Proceedings of the Picture Coding Symposium, pp. 483–488, San Francisco, Calif, USA, December 2004.
  36. W. Ding, F. Wu, X. Wu, and S. Li, “Adaptive directional lifting-based wavelet transform for image coding,” IEEE Transactions on Image Processing, vol. 16, no. 2, pp. 416–427, 2007.
  37. Y. Liu and K. N. Ngan, “Weighted adaptive lifting-based wavelet transform,” in Proceedings of the 14th IEEE International Conference on Image Processing (ICIP '07), pp. III-189–III-192, San Antonio, Tex, USA, September 2007.
  38. Y. Liu and K. N. Ngan, “Weighted adaptive lifting-based wavelet transform for image coding,” IEEE Transactions on Image Processing, vol. 17, no. 4, pp. 500–511, 2008.
  39. B. Li, R. Yang, and H. Jiang, “Remote-sensing image compression using two-dimensional oriented wavelet transform,” IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 1, pp. 236–250, 2011.
  40. G. Peyré, Geometrie multi-échelles pour les images et les textures [Ph.D. thesis], Ecole Polytechnique, 2005.
  41. G. Peyré and S. Mallat, “Discrete bandelets with geometric orthogonal filters,” in Proceedings of the IEEE International Conference on Image Processing (ICIP '05), vol. 1, pp. 65–68, September 2005.
  42. X. Delaunay, E. Christophe, C. Thiebaut, and V. Charvillat, “Best post-transforms selection in a rate-distortion sense,” in Proceedings of the 15th IEEE International Conference on Image Processing (ICIP '08), pp. 2896–2899, October 2008.
  43. X. Delaunay, M. Chabert, V. Charvillat, G. Morin, and R. Ruiloba, “Satellite image compression by directional decorrelation of wavelet coefficients,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '08), pp. 1193–1196, April 2008.
  44. X. Delaunay, M. Chabert, V. Charvillat, and G. Morin, “Satellite image compression by post-transforms in the wavelet domain,” Signal Processing, vol. 90, no. 2, pp. 599–610, 2010.
  45. E. J. Candès and M. B. Wakin, “An introduction to compressive sampling: a sensing/sampling paradigm that goes against the common knowledge in data acquisition,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21–30, 2008.
  46. D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
  47. L. Zelnik-Manor, K. Rosenblum, and Y. C. Eldar, “Sensing matrix optimization for block-sparse decoding,” IEEE Transactions on Signal Processing, vol. 59, no. 9, pp. 4300–4312, 2011.
  48. X. Delaunay, M. Chabert, V. Charvillat, and G. Morin, “Satellite image compression by post-transforms in the wavelet domain,” Signal Processing, vol. 90, no. 2, pp. 599–610, 2010.
  49. P. F. Dunn, Measurement and Data Analysis for Engineering and Science, McGraw-Hill, 1st edition, 2004.
  50. X. Delaunay, M. Chabert, G. Morin, and V. Charvillat, “Bit-plane analysis and contexts combining of JPEG2000 contexts for on-board satellite image compression,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), vol. 1, pp. I-1057–I-1060, April 2007.
  51. X. Delaunay, M. Chabert, V. Charvillat, G. Morin, and R. Ruiloba, “Satellite image compression by directional decorrelation of wavelet coefficients,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '08), pp. 1193–1196, Las Vegas, Nev, USA, March-April 2008.
  52. X. Delaunay, C. Thiebaut, E. Christophe et al., “Lossy compression by post-transforms in the wavelet domain,” in Proceedings of the On-Board Payload Data Compression Workshop, June 2008.
  53. X. Delaunay, M. Chabert, V. Charvillat, and G. Morin, “Satellite image compression by concurrent representations of wavelet blocks,” Annals of Telecommunications, vol. 67, no. 1-2, pp. 71–80, 2012.
  54. X. Delaunay, E. Christophe, C. Thiebaut, and V. Charvillat, “Best post-transforms selection in a rate-distortion sense,” in Proceedings of the 15th IEEE International Conference on Image Processing (ICIP '08), pp. 2896–2899, San Diego, Calif, USA, October 2008.
  55. J. M. Kim, O. K. Lee, and J. C. Ye, “Compressive MUSIC: revisiting the link between compressive sensing and array signal processing,” IEEE Transactions on Information Theory, vol. 58, no. 1, pp. 278–301, 2012.
  56. S. Engelberg, “Compressive sensing [Instrumentation Notes],” IEEE Instrumentation and Measurement Magazine, vol. 15, no. 1, pp. 42–46, 2012.
  57. C. Deng, W. Lin, B.-S. Lee, and C. T. Lau, “Robust image coding based upon compressive sensing,” IEEE Transactions on Multimedia, vol. 14, no. 2, pp. 278–290, 2012.
  58. A. L. da Cunha, J. Zhou, and M. N. Do, “The nonsubsampled contourlet transform: theory, design, and applications,” IEEE Transactions on Image Processing, vol. 15, no. 10, pp. 3089–3101, 2006.
  59. Y. Jin and H.-J. Lee, “A block-based pass-parallel SPIHT algorithm,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 7, pp. 1064–1075, 2012.