Research Article  Open Access
Compression of Multispectral Images with Comparatively Few Bands Using Post-Transform Tucker Decomposition
Abstract
Up to now, data compression for multispectral charge-coupled device (CCD) images with comparatively few bands (MSCFBs) has been done independently on each multispectral channel. This compression codec is called a "monospectral compressor." The monospectral compressor has no stage for removing spectral redundancy. To fill this gap, we propose an efficient compression approach for MSCFBs. In our approach, a one-dimensional discrete cosine transform (1D-DCT) is performed in the spectral dimension to exploit the spectral information, and a post-transform (PT) in the 2D-DWT domain is performed on each spectral band to exploit the spatial information. A deep coupling approach between the PT and Tucker decomposition (TD) is proposed to remove the residual spectral redundancy between bands and the residual spatial redundancy within each band. Experimental results on a multispectral CCD camera data set show that the proposed compression algorithm obtains better compression performance and significantly outperforms traditional TD-based compression algorithms in the 2D-DWT and 3D-DCT domains.
1. Introduction
Multispectral CCD images can be used to identify targets using their geometric features and spectral information simultaneously. Multispectral CCD imaging technology has extensive applications in ground target detection, geology monitoring, climate monitoring, military reconnaissance, scientific research, and so on [1, 2]. With increasing application requirements, the performance indexes of multispectral CCD cameras, such as field of view, coverage width, spatial resolution, spectral resolution, time resolution, and radiation resolution, continue to improve. This leads to a rapid increase in the amount of multispectral image data. However, the memory and channel bandwidth available on a satellite are limited. Therefore, suitable compression approaches play an important role in the successful application of multispectral CCD cameras on satellites [3–5].
Multispectral CCD cameras come in two types of instruments: (1) those with comparatively many spectral channels and (2) those with comparatively few spectral channels [6, 7]. Accordingly, multispectral CCD images also come in two types: (1) multispectral CCD images with comparatively few bands (MSCFBs) and (2) multispectral CCD images with a comparatively large number of bands (MSCLBs). Both MSCFBs and MSCLBs have spectral redundancy and spatial redundancy. For MSCLBs, the correlation between different bands is usually stronger than the spatial correlation among pixels within the same band [8]. Several well-known MSCLB compression algorithms exist, such as principal component analysis (PCA) [9]. The PCA approach combines all bands into a single spectral matrix and computes its covariance matrix. The spectral matrix is multiplied by the inverse of the eigenvector matrix of the covariance matrix to obtain new bands called principal components. The number of principal components equals the number of bands, and there is no correlation between different components. Therefore, PCA is mainly used to remove spectral redundancy. In order to remove the spectral and spatial correlation simultaneously, MSCLB compression algorithms use three-dimensional (3D) transform approaches, such as the 3D-DCT [10], the 3D-DWT [11, 12], and the Karhunen-Loeve transform (KLT) + 2D transform [13]. It is generally accepted that the 3D-DWT-based approach is one of the best compression algorithms for MSCLBs. After a 1D-DWT is performed along each dimension of the MSCLBs, the MSCLBs are decomposed into several 3D subimages of different resolutions. The 3D subimages have pyramid and multiresolution characteristics, which are very suitable for subsequent entropy encoding and support progressive encoding.
Compression algorithms based on the 3D-DWT, such as 3D-EZW, 3D-SPIHT [14], 3D-SPECK [15, 16], and 3D-EBCOT, have been widely used to compress MSCLBs. In [17], Tang and Pearlman show that the 3D-SPECK approach outperforms 3D-SPIHT by 0.061~0.173 dB at 0.2~2.0 bpppb. It is generally accepted that the JPEG2000 algorithm [18] is the best algorithm for 2D still images. Recently, the JPEG2000 algorithm has been extended to MSCLBs. The 3D extension of JPEG2000 [19] uses the 3D-DWT combined with 3D-EBCOT to compress MSCLBs. JPEG2000 is usually considered a reference standard for evaluating other algorithms. In [20], Du and Fowler proposed a compression approach based on JPEG2000 combined with PCA. Their algorithm achieves better rate-distortion (RD) performance than JPEG2000 combined with the DWT for removing spectral correlation. Up to now, the only compression standard for MSCFBs and MSCLBs is CCSDS-MHDC. The CCSDS-MHDC [21] algorithm mainly targets space multispectral image data compression. It uses an FL-based approach to trade off computational complexity against compression performance, which makes it suitable for onboard multispectral cameras. However, CCSDS-MHDC supports only lossless compression, and the limited compression ratios it provides are not reasonable for high-resolution remote sensing multispectral cameras.
Recently, a tensor decomposition (TD) [22] method has been applied to hyperspectral image coding. The hyperspectral images are considered a third-order nonnegative tensor, which is then decomposed by the Tucker method to achieve compression. In [23], Karami et al. proposed a new hyperspectral image compression method based on the 3D-DCT combined with TD. Their core idea is that TD is used to decompose the 3D-DCT coefficient tensor to remove spectral and spatial redundancies. In [24], Karami et al. proposed a new hyperspectral image compression method based on the 2D-DWT combined with TD. A 2D-DWT is performed on each band to remove spatial correlation, and TD is used to decompose the wavelet subband tensor to remove spectral correlation. These methods are also suitable for MSCLBs. However, they are too complex to implement in an onboard multispectral camera.
For MSCFBs, comparatively few bands, such as three or four, are captured by the onboard CCD camera. The MSCLB compression algorithms above are not suitable for MSCFBs precisely because MSCFBs have so few bands. Up to now, compression methods for MSCFBs have been applied independently to each multispectral channel. This compression codec is called a "monospectral compressor" and has no stage for removing spectral redundancy. In order to achieve excellent compression performance, an MSCFB compression algorithm must remove both spatial and spectral redundancies. In this paper, we propose an efficient compression approach for MSCFBs. In our approach, a 1D-DCT is performed in the spectral dimension to exploit the spectral information, and a post-transform (PT) in the 2D-DWT domain is performed on each spectral band to remove the spatial redundancies. The MSCFBs are decomposed into several sub-tensors by the PT. Then, these sub-tensors are decomposed by multilevel Tucker decomposition (TD) to achieve compression.
2. Proposed Algorithm
2.1. The Imaging Principle of MSCFBs
Figure 1 shows the imaging principle of MSCFBs. The long linear multispectral CCD has four parallel CCD linear arrays. Each array images one spectral band. The four imaging spectral bands are red, blue, green, and near-infrared, respectively. The light radiated or reflected by ground pixels along a swath hundreds of kilometers long is focused onto the optical thin film of the CCD detector through an optical system. The spatial and spectral distribution of the ground target radiation acquired by the CCD detector depends on the solar height angle, the wavelength, the ground luminance, the spectral reflectance, the atmospheric transmittance, the point spread function, and the radiant flux. The optical thin film of each linear CCD array passes only the light of the corresponding wavelength. The radiation of each band is captured by a linear CCD array, yielding one-dimensional spatial and spectral information. When the CCD camera scans the ground target, the second spatial dimension is gained. Therefore, MSCFBs can be considered 3D images, as shown in Figure 2.
Based on the imaging principle of MSCFBs, MSCFBs have spectral redundancy between adjacent spectral bands and spatial redundancy between adjacent pixels within each spectral band. Therefore, the compression algorithm must remove the spectral and spatial redundancies simultaneously.
2.2. Spectral Decorrelation
MSCFBs have two types of correlations: spatial and spectral. The spatial correlation is defined as the correlation between adjacent pixels within each spectral band. The spatial line correlation of MSCFBs can be expressed as

$$\rho_{\text{line}} = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N-1}\bigl(f(i,j)-\bar{f}\bigr)\bigl(f(i,j+1)-\bar{f}\bigr)}{\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(f(i,j)-\bar{f}\bigr)^{2}}. \tag{2}$$

The spatial column correlation of MSCFBs can be expressed as

$$\rho_{\text{column}} = \frac{\sum_{i=1}^{M-1}\sum_{j=1}^{N}\bigl(f(i,j)-\bar{f}\bigr)\bigl(f(i+1,j)-\bar{f}\bigr)}{\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(f(i,j)-\bar{f}\bigr)^{2}}, \tag{3}$$

where $M$ is the line number of each band, $N$ is the pixel number of each line, $f(i,j)$ is the band gray function, and $\bar{f}$ is the average band gray value. The spectral correlation is defined as the correlation between pixels located at the same spatial position in different spectral bands, which can be expressed as

$$\rho_{s} = \frac{R_{f_k f_{k+1}}}{\sqrt{R_{f_k f_k}\,R_{f_{k+1} f_{k+1}}}}, \tag{4}$$

where $f_k(i,j)$ and $f_{k+1}(i,j)$ are the values of the pixel located at $(i,j)$ in two adjacent bands and $R_{f_k f_{k+1}}$ is the cross-correlation function of $f_k$ and $f_{k+1}$. Based on (2)~(4), Landsat-7 Thematic Mapper (TM) images are chosen for an experiment on the spatial and spectral correlation. Figure 3 shows the test results.
(a) Spatial correlation test result
(b) Spectral correlation test result
The test results show that the line and column correlations between adjacent pixels of an MSCFB are considerably higher than the correlations between adjacent bands. Therefore, MSCFBs have higher spatial correlation than spectral correlation. Because the spectral correlations are comparatively weak, traditional 3D-DWT-based compression algorithms are not suitable for MSCFBs: when 3D-DWT-based approaches compress Landsat-7 images, good compression results are not obtained. Simultaneously removing the spectral and spatial redundancies is therefore the primary requirement for the successful implementation of a high-performance compression method.
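The correlation measurements above can be sketched in a few lines of NumPy. This is an illustrative reimplementation under our own naming (using the sample Pearson form of the correlation), not the paper's code:

```python
import numpy as np

def spatial_correlation(band, axis=1):
    """Correlation between each pixel and its neighbor along `axis`
    (axis=1: line/horizontal neighbor, axis=0: column/vertical)."""
    a = band.astype(np.float64)
    if axis == 1:
        x, y = a[:, :-1].ravel(), a[:, 1:].ravel()
    else:
        x, y = a[:-1, :].ravel(), a[1:, :].ravel()
    return np.corrcoef(x, y)[0, 1]

def spectral_correlation(band_a, band_b):
    """Correlation between co-located pixels of two adjacent bands."""
    x = band_a.astype(np.float64).ravel()
    y = band_b.astype(np.float64).ravel()
    return np.corrcoef(x, y)[0, 1]
```

Applying these functions to adjacent band pairs and to each band of a Landsat-7 TM scene reproduces the kind of comparison plotted in Figure 3.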
Because MSCFBs have a small number of bands, a 1D-DCT is performed in the spectral dimension to decorrelate the spectral bands. The number of bands is denoted as $B$, and the size of each band is denoted as $M \times N$. An MSCFB is composed of $M \times N$ 1D spectral vectors of length $B$. The 1D-DCT of an input 1D spectral vector $x(n)$ with output $X(k)$ is defined as

$$X(k)=c(k)\sum_{n=0}^{B-1}x(n)\cos\frac{(2n+1)k\pi}{2B},\qquad c(0)=\sqrt{\tfrac{1}{B}},\; c(k)=\sqrt{\tfrac{2}{B}}\ (k\ge 1). \tag{5}$$

The values $X(k)$ are called the DCT coefficients of $x(n)$. Equation (5) is performed $M \times N$ times to complete the spectral transform. To speed up the calculation, we propose an iterative method in which the DCT coefficients of the current spectral vector are computed from the intermediate and output data obtained from the previous spectral vector.
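As an illustration, the spectral transform can be implemented with an orthonormal DCT-II matrix applied along the band axis of a (bands, rows, cols) cube. This minimal sketch does not reproduce the paper's iterative speed-up, and the function names are our own:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * m + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)  # DC row scaled for orthonormality
    return C

def spectral_dct(cube):
    """Apply the 1D DCT along the spectral (first) axis of the cube."""
    C = dct_matrix(cube.shape[0])
    return np.tensordot(C, cube, axes=(1, 0))

def spectral_idct(coeffs):
    """Inverse: the DCT matrix is orthonormal, so its transpose inverts it."""
    C = dct_matrix(coeffs.shape[0])
    return np.tensordot(C.T, coeffs, axes=(1, 0))
```

Because the matrix is orthonormal, the inverse transform is exact up to floating-point round-off, which matches the lossless-to-quantization structure of the codec.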
2.3. Spatial Decorrelation
The 2D-DWT decomposes an image over multiple levels to obtain a lower-resolution approximation subband and detail subbands. The multilevel decomposition can be regarded as a process of low-pass and high-pass filtering. At each level, the low subband produced by the previous level is high-pass filtered to obtain detail information, called wavelet coefficients, and low-pass filtered (linked with the scaling function) to obtain approximate information, called scaling coefficients. The DWT has been widely employed to exploit spatial correlations in remote sensing images, for example, in JPEG2000 and CCSDS-IDC. In this paper, we apply a 2D-DWT coupled with a post-transform to each band of the MSCFBs. In our approach, the 2D-DWT is performed on each image band to reduce spatial correlations, and the transformed wavelet coefficients are then processed by the post-transform to reduce the remaining correlations within the wavelet subbands.
After remote sensing MSCFBs are transformed by the 2D-DWT, residual correlation exists between coefficients in a small neighborhood (see Figure 4). Statistical dependencies between DWT coefficients have been studied for many years. In [25], correlations between nearby wavelet coefficients are reported at distances of up to 3 pixels. We found an even wider range, and here we provide a more detailed discussion of this topic. We use the Pearson correlation coefficient to analyze the statistical dependency of DWT coefficients, which can be expressed as

$$\rho_{X,Y}=\frac{\operatorname{cov}(X,Y)}{\sigma_X \sigma_Y},$$

where $\operatorname{cov}(X,Y)$ denotes the covariance between the variables $X$ and $Y$, and $\sigma_X$ and $\sigma_Y$ denote the standard deviations of $X$ and $Y$, respectively. According to project experience, a three-level 2D-DWT is appropriate for an onboard compressor, and we use a three-level 2D-DWT in this paper.
The three-level 2D-DWT is performed on each image band to produce one low-frequency subband (denoted as LL) and nine high-frequency subbands (denoted as HL_l, LH_l, and HH_l, l = 1, 2, 3). The test is performed on ten MSCFBs. The residual correlations between wavelet coefficients in a small neighborhood of the wavelet subbands are shown in Figure 5. In Level 1, the residual directional correlation within a 16-connected region of the HL1, HH1, and LH1 subbands exceeds 0.3. In Level 2, the residual directional correlation within a 16-connected region of the HL2, HH2, and LH2 subbands exceeds 0.4. In Level 3, the residual directional correlation within a 16-connected region of the HL3, HH3, and LH3 subbands exceeds 0.3. These results imply a strong redundancy among neighboring coefficients. The results in Figure 5 also indicate that the residual correlations between nearby wavelet coefficients are much weaker at distances of 5 pixels or more.
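The residual-correlation measurements of Figure 5 amount to evaluating a Pearson coefficient at a spatial offset inside one subband. A hypothetical helper (our own sketch, not the paper's code) might look like:

```python
import numpy as np

def neighbor_correlation(subband, dy, dx):
    """Pearson correlation between subband coefficients and their
    neighbors at offset (dy, dx), computed over the overlap region."""
    a = subband.astype(np.float64)
    h, w = a.shape
    x = a[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    y = a[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
    return np.corrcoef(x.ravel(), y.ravel())[0, 1]
```

Sweeping (dy, dx) over a 16-connected neighborhood and averaging over the subbands of many test images yields curves like those in Figure 5.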
To obtain the optimum compression performance for MSCFBs, the compression algorithm must fully consider the abovementioned statistical properties. In [26, 27], EBCOT has been reported to be a very efficient method for removing this residual redundancy; however, its implementation complexity is too high. In [28], Delaunay proposed a post-transform to remove the remaining redundancies between adjacent wavelet coefficients within a small region. After the wavelet transform of an image, the wavelet and scaling coefficient subbands are divided into several coefficient blocks, each with 4 × 4 wavelet coefficients. The post-transform is performed on each coefficient block to remove the residual redundancies. Tests on a large number of remote sensing images show that the selected 4 × 4 block size is the best choice for a low-complexity, efficient compression method, for two major reasons: (1) the correlation between adjacent wavelet coefficients is very weak at distances greater than or equal to 5 pixels; (2) a larger block size leads to higher computational complexity, while a smaller block size increases the number of coefficient blocks and the side information. In addition, since the post-transform is performed on coefficient blocks in the wavelet transform domain, not in the image domain, the compression algorithm incurs no additional overhead for a deblocking filter.
The core idea behind post-transform compression is that wavelet coefficient blocks are further transformed using a group of particular directional bases (such as bandelet, DWT, DCT, and PCA) in a dictionary. First, a 2D-DWT is applied to the image. Next, blocks of 4 × 4 DWT coefficients are projected onto the orthonormal bases of the dictionary. Then, the Lagrangian cost function is minimized to select the best post-transformed block. Finally, the post-transformed coefficients are encoded by an entropy coding method. Each 4 × 4 DWT coefficient block is considered an input vector $x \in \mathbb{R}^{16}$, and the vectors of basis $b$ are denoted $g_b^i$, $i = 1, \ldots, 16$. The post-transformed block can be expressed as

$$y_b = \bigl(\langle x, g_b^1\rangle, \langle x, g_b^2\rangle, \ldots, \langle x, g_b^{16}\rangle\bigr).$$

Since the dictionary has $N_B$ bases, $N_B + 1$ post-transformed blocks $y_b$ (including the original block) can be obtained. Among all the post-transformed blocks $y_b$, the best block is selected by minimizing the Lagrangian rate-distortion cost:

$$b^{*} = \arg\min_{b}\; \lVert y_b - \hat{y}_b \rVert^{2} + \lambda\, R(\hat{y}_b, a_b),$$

where $\hat{y}_b$ denotes the post-transformed coefficients quantized with quantization step $\Delta$, $\lVert y_b - \hat{y}_b\rVert^{2}$ denotes the squared error between the post-transformed coefficients and the quantized coefficients, $\lambda$ is a Lagrangian multiplier, and $R(\hat{y}_b, a_b)$ denotes the bit rate required for encoding $\hat{y}_b$ and the associated side information $a_b$.
In the post-transform, the dictionary has multiple bases. The more bases there are, the better the compression performance and the higher the computational complexity. However, onboard compression requires low computational complexity, so the space MSCFB compressor allows the post-transform dictionary to have only one basis. In [25], Delaunay et al. proposed a simple post-transform-based compression method that performs only a one-level wavelet transform and uses only a Hadamard basis, with the post-transform applied only to the coefficient subbands. Compared with a compression method using the DWT alone, the PSNR improves by only 0.4 dB~0.6 dB. In this paper, to obtain a post-transform with low computational complexity that still has high compression performance, we consider a very simple dictionary containing only one dynamic basis: the Hadamard basis at low bit rates and the DCT basis at high bit rates.
In [29], Delaunay has shown that the Lagrangian approach for selecting the best post-transformed block has two main drawbacks. First, the estimation of the required bit rate for $\hat{y}_b$ is computationally intensive and not always accurate. Second, calculating the Lagrangian function depends on the quantization step $\Delta$, which the coder does not define when coding. Therefore, Delaunay proposed a norm minimization approach to select the best post-transformed block. In this paper, we propose a new norm minimization method, whose norm is adapted to the coding bit rate, to select the best post-transform. It is very suitable for the space MSCFB compressor, which has low-complexity constraints.
At low compression bit rates, the bit rate is approximately proportional to the number of nonzero post-transformed coefficients. We propose a norm minimization approach that selects the best post-transformed block as the one with the minimal sum of coefficient magnitudes:

$$b^{*} = \arg\min_{b} \sum_{i} \lvert y_{b}(i) \rvert.$$

At high compression bit rates, the bit rate also grows with the coefficient magnitudes, and we again use a norm minimization approach, selecting the best post-transformed block as the one with the minimal sum of coefficient magnitudes under the corresponding norm.
Figure 6 shows the proposed post-transform architecture. The post-transform is performed on each high-frequency subband. A bit-rate comparator decides the type of coding bit rate: a coding bit rate greater than or equal to 0.5 bpp is defined as the high-bit-rate type (type 1), and one less than 0.5 bpp as the low-bit-rate type (type 0). The post-transform is performed using the DCT basis and its norm minimization when coding type 1, and using the Hadamard basis and its norm minimization when coding type 0.
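A minimal sketch of the type-0 (low-bit-rate) path follows: project each 4 × 4 block onto an orthonormal Hadamard basis and keep whichever representation has the smaller sum of coefficient magnitudes, returning the one-bit side information. The function names and the exact selection rule shown are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def hadamard_basis(n=16):
    """Orthonormal Hadamard matrix of order n (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))
    return H / np.sqrt(n)

def post_transform_select(block, basis):
    """Keep the original 4x4 block or its projection onto `basis`,
    whichever has the smaller l1 norm (the low-bit-rate rule).
    Returns (coefficients, side_info); side_info=1 means the
    transformed representation was kept."""
    x = block.astype(np.float64).ravel()
    y = basis.T @ x                      # post-transformed coefficients
    if np.abs(y).sum() < np.abs(x).sum():
        return y.reshape(block.shape), 1
    return block.astype(np.float64), 0
```

A block of near-constant coefficients compacts its energy into a few Hadamard coefficients, so the transformed representation wins and the side-information bit is set.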
3. Tucker Decomposition in the Post-Transform Domain
In [30], the post-transformed block coefficients are encoded by an adaptive arithmetic coding method. In [29], the post-transformed block coefficients are encoded by the bit-plane encoding (BPE) method. However, the post-transform destroys the zerotree structure: the PSNR gain of BPE over the DWT-only method is less than 0.2 dB and is even worse at high bit rates. A basis vector ordering approach has therefore been used, obtained by training with thousands of wavelet coefficient blocks from a large number of remote sensing images. The ordering-based strategy has two major shortcomings for space MSCFB compression. First, the basis vector ordering requires heavy computation and is not always accurate. Second, the ordering depends strongly on a training set of thousands of wavelet coefficient blocks from remote sensing images; however, such a training set is not available when coding.
Indeed, after the 1D-DCT, the 2D-DWT, and the post-transform, the MSCFBs still have residual spatial and spectral correlation. To obtain further compression gains, we consider the post-transformed coefficients of the MSCFBs as a third-order tensor and apply Tucker decomposition (TD) to this tensor to remove the residual spatial and spectral redundancies simultaneously.
A third-order tensor is denoted as $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$. Using TD, the third-order tensor is decomposed into a low-dimensional core tensor multiplied by three factor matrices (see Figure 7). The core tensor is denoted as $\mathcal{G} \in \mathbb{R}^{J_1 \times J_2 \times J_3}$, and the factor matrices are denoted as $A^{(n)} \in \mathbb{R}^{I_n \times J_n}$ ($n = 1, 2, 3$).
The third-order TD can be expressed as follows [31]:

$$\mathcal{X} = \mathcal{G} \times_1 A^{(1)} \times_2 A^{(2)} \times_3 A^{(3)} + \mathcal{E} = \hat{\mathcal{X}} + \mathcal{E},$$

where $J_1$, $J_2$, $J_3$ are the dimensions of the core tensor $\mathcal{G}$, tensor $\hat{\mathcal{X}}$ is an approximation of tensor $\mathcal{X}$, and tensor $\mathcal{E}$ is an error tensor. The approximation $\hat{\mathcal{X}}$ depends on the $(J_1, J_2, J_3)$ values. This decomposition can be solved through the following optimization problem:

$$\min_{\mathcal{G},\, A^{(1)},\, A^{(2)},\, A^{(3)}} \bigl\lVert \mathcal{X} - \mathcal{G} \times_1 A^{(1)} \times_2 A^{(2)} \times_3 A^{(3)} \bigr\rVert_F^2 .$$
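The paper solves this decomposition with HALS-NTD (Section 3, Algorithm 1). As a self-contained stand-in, the sketch below builds a Tucker model by truncated HOSVD, taking the leading left singular vectors of each mode unfolding; it illustrates the core-tensor/factor-matrix structure of the equations above but is not the paper's algorithm:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3rd-order tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for the given target tensor shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def mode_mult(T, M, mode):
    """Mode-n product T x_n M."""
    shape = list(T.shape)
    shape[mode] = M.shape[0]
    return fold(M @ unfold(T, mode), mode, shape)

def tucker_hosvd(T, ranks):
    """Truncated HOSVD: factor matrices from the leading left singular
    vectors of each unfolding; core G = T x1 U1^T x2 U2^T x3 U3^T."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    G = T.copy()
    for n in range(3):
        G = mode_mult(G, U[n].T, n)
    return G, U

def tucker_reconstruct(G, U):
    """Rebuild the approximation from the core and factor matrices."""
    T = G
    for n in range(3):
        T = mode_mult(T, U[n], n)
    return T
```

With full ranks the reconstruction is exact; shrinking $(J_1, J_2, J_3)$ trades approximation error for a smaller core, which is what drives the compression ratio.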
The size of each band is denoted as $M \times N$, and the number of bands is denoted as $B$. A three-level 2D-DWT is performed on each band to produce nine high-frequency wavelet subbands (HL_l, LH_l, and HH_l, l = 1, 2, 3) and one low-frequency wavelet subband (LL). The proposed post-transform is performed on all high-frequency subbands. The size of a subband at level $l$ is $(M/2^{l}) \times (N/2^{l})$, and the size of each post-transformed block is 4 × 4. For each subband type, the post-transformed subbands of all $B$ bands are stacked into one tensor; the 9 resulting tensors are denoted $\mathcal{X}^{HL_l}$, $\mathcal{X}^{LH_l}$, and $\mathcal{X}^{HH_l}$ ($l = 1, 2, 3$), respectively (see Figure 8). The size of a tensor at level $l$ is $(M/2^{l}) \times (N/2^{l}) \times B$. Note that the elements of each tensor are the post-transformed coefficients.
Since the spatial dimensions of each tensor are large, the cost of computing its tensor decomposition is very high. To reduce the computational complexity, we propose a multilevel TD approach. First, each tensor is divided into several sub-tensors (see Figures 8 and 9); each sub-tensor stacks one post-transformed block from every band, so the size of each sub-tensor is 4 × 4 × $B$. In the first level, TD is performed on all sub-tensors of each tensor to produce core tensors and factor matrices. The produced core tensors are organized into a new tensor, which is decomposed by TD at the next level. In the proposed codec we use one level of TD. Since the spatial dimensions of each sub-tensor are 4 × 4, the TD of each sub-tensor has low complexity.
To find the optimal component matrices and core tensor, we apply the HALS-NTD algorithm [31] to each sub-tensor in this paper. The HALS-NTD algorithm is shown as Algorithm 1.

3.1. Deep Coupling between the Post-Transform and Tucker Decomposition
The core tensor dimensions $(J_1, J_2, J_3)$ of each tensor or sub-tensor are determined by the compression ratio (CR). Lower values of $(J_1, J_2, J_3)$ give the compression algorithm a higher CR. The CR is defined as the ratio between the number of bits required to represent the original image data and the number of bits required to represent the compressed bitstream:

$$CR = \frac{B \cdot M \cdot N \cdot Q}{N_{\text{compressed}}},$$

where $Q$ is the bit depth of the original pixels and $N_{\text{compressed}}$ is the number of bits in the compressed bitstream.
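The CR definition above reduces to a one-line calculation; the helper below is a trivial illustration (how the compressed-bit count decomposes into core tensors, factor matrices, and side information is bookkeeping on the encoder side):

```python
def compression_ratio(bands, rows, cols, bit_depth, compressed_bits):
    """Bits of the raw image cube divided by bits of the compressed stream."""
    return (bands * rows * cols * bit_depth) / compressed_bits
```

For example, a 4-band 512 × 512 cube at 8 bpp compressed to 1 bpp gives a CR of 8.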
In order to efficiently determine the sizes of all core tensors when TD is performed on the sub-tensors of each tensor, we propose a deep coupling between the post-transform and TD, not only to determine the sizes of the core tensors and reduce the side information of the post-transform but also to code these core tensors and complete the bit-rate control. Figure 10 shows the proposed deep coupling between the post-transform and TD.
The bit-rate allocation module allocates the bit rates for each tensor, and the information evaluation module evaluates the information content of each tensor. The target bit rates for the different tensors are then allocated according to their information contents. Next, the allocated bit rate of each tensor is distributed among its sub-tensors, and the dimensions of each sub-tensor are determined according to its allocated bit rate.
In the first place, the information content of a subband is calculated through an $\ell_1$-norm approach. Let $c_i$ denote the coefficients in the subband. The information content of the subband can be calculated as

$$I = \sum_{i} \lvert c_i \rvert.$$
In the second place, the information content of the 9 tensors is calculated. Since the selected representation of each post-transformed block reflects the image information, the information content of each tensor can be evaluated from these representations. Let $y_{k}^{*}$ denote the selected representation of the $k$th post-transformed block of a tensor containing $K$ blocks. The information content of the tensor can be calculated as

$$I_{t} = \sum_{k=1}^{K} \sum_{i} \lvert y_{k}^{*}(i) \rvert.$$

The information contents of the other tensors are calculated likewise.
In the third place, the weight of the bit-rate allocation for each tensor is obtained through

$$w_{t} = \frac{I_{t}}{\sum_{t'} I_{t'}}.$$
Finally, the allocated bit rate of each tensor is distributed among its sub-tensors. The bit rate assigned to the $k$th sub-tensor can be calculated as

$$R_{k} = \frac{I_{k}}{\sum_{k'} I_{k'}}\, R_{t},$$

where $I_k$ is the information content of the $k$th sub-tensor and $R_t$ is the bit rate allocated to the tensor.
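The weighting and allocation steps above amount to a proportional split of the bit budget by $\ell_1$-norm information content, and the same operation applies at the tensor level and at the sub-tensor level. A sketch under our own naming:

```python
import numpy as np

def allocate_bits(parts, total_bits):
    """Split a bit budget across tensors (or sub-tensors) in proportion
    to their l1-norm information content."""
    info = np.array([np.abs(p).sum() for p in parts], dtype=np.float64)
    return total_bits * info / info.sum()
```

Calling it first on the 9 tensors and then, per tensor, on that tensor's sub-tensors reproduces the two-stage allocation described above.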
The side information denotes which post-transform was used on a block. Since the post-transform has only one basis at both low and high bit rates, the side information is one bit per block and is not entropy coded. Table 1 shows the side-information cost. The side information always has an overhead of 1 bit per block over the whole coding process: it occupies 12.5% of the total bit rate at 0.5 bpp, 4.16% at 1.5 bpp, and only 2.08% at 3 bpp. Therefore, the side information affects compression performance more at low bit rates than at high bit rates. Indeed, nonsignificant blocks do not require side information because their coefficients are all zeros, and there are more nonsignificant blocks at low bit rates than at high bit rates, so the bit rate allocated to side information can be reduced. As the number of nonsignificant blocks decreases, the required side-information bit rate increases.

In this paper, we use an improved BPE method to encode the core tensors, and the side information is then integrated into the bitstream. We modify the SPIHT algorithm [32] to complete the bit-plane coding. The bit-plane coding processes the core tensor of one sub-tensor at a time; after one core tensor is processed, the next core tensor is processed. For each coding pass, the improved BPE defines two encoding paths: (1) the significant path (SP) and (2) the refining path (RP). We define a significance threshold $T_n = 2^{n}$. The element of the core tensor at location $(i, j, k)$ is denoted $g(i, j, k)$, and $s_n(i, j, k)$ denotes its significance state when the threshold lies in the $n$th bit plane:

$$s_n(i, j, k) = \begin{cases} 1, & \lvert g(i, j, k) \rvert \ge 2^{n}, \\ 0, & \text{otherwise.} \end{cases}$$

In the $n$th bit plane, when $s_n(i, j, k) = 1$, the element $g(i, j, k)$ is significant and passes into the SP; when $s_n(i, j, k) = 0$, it is insignificant and passes into the RP. The elements in the RP are processed by BPE in the next bit plane, $n - 1$, with threshold $T_{n-1} = 2^{n-1}$; the coding method is the same as in the previous bit plane. After all bit planes are traversed, the elements in the SP are removed from the core tensor and encoded by an entropy coding approach. If the SP for a core tensor has any elements, the side information is embedded into the compressed bitstream; otherwise, it is not.
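The significance test at the heart of the improved BPE can be sketched as follows. This shows only the plane-by-plane significance scan, not the full SPIHT-style list management or the entropy coder; names are illustrative:

```python
import numpy as np

def significance_map(core, n):
    """1 where |coefficient| >= 2^n (significant in bit plane n), else 0."""
    return (np.abs(core) >= 2 ** n).astype(np.uint8)

def bit_plane_passes(core):
    """Yield (plane, coordinates of significant elements) from the top
    bit plane down, mimicking the significant-path scan."""
    peak = np.abs(core).max()
    top = int(np.floor(np.log2(peak))) if peak >= 1 else -1
    for n in range(top, -1, -1):
        yield n, list(zip(*np.nonzero(significance_map(core, n))))
```

An element first appears in the output at the highest plane where its magnitude reaches the threshold, which is exactly when it would move from the refining path to the significant path.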
3.2. Our Proposed Architecture
Figure 11 shows the whole architecture of the proposed post-transform Tucker decomposition (PTTD) based compression method. The whole process of the PTTD-based approach can be divided into six steps as follows.
In the first step, a 1D-DCT is performed on the MSCFBs in the spectral dimension to remove the spectral correlation.
In the second step, a 3-level 2D-DWT is performed on each spectral band of the MSCFBs to remove the spatial correlation, yielding 10 wavelet subbands (one low-frequency and nine high-frequency subbands) for each band.
In the third step, all high-frequency subbands are divided into several blocks of 4 × 4 DWT coefficients, and the proposed post-transform is performed on each block.
In the fourth step, TD is performed on each post-transformed sub-tensor to remove the residual spectral and spatial redundancies.
In the fifth step, the coefficients of the low-frequency subband are encoded by DPCM, the factor matrices are quantized, and the core tensors are encoded by bit-plane coding and entropy coding using adaptive arithmetic coding.
In the sixth step, the bit streams are packed and transmitted to the decoder via the signal channel.
4. Experimental Results
4.1. Experimental Scheme
In order to test the performance of the proposed algorithm, we use research and development ground test equipment to build an experimental system. Figure 12 shows the structure of the experimental system used to test the performance of the proposed algorithm. The experimental system is composed of an image simulation resource, the multispectral compression system, ground test equipment, a DVI monitor, and a server. The server injects remote sensing images into the image simulation resource. The image simulation resource adjusts the image size and output line frequency to simulate the output of the CCD. The multispectral compression system receives the test remote sensing images from the image simulation resource and compresses them to verify the proposed compression algorithm. The ground test equipment performs image decompression to obtain the reconstructed images, which are transferred to the server through a Camera Link bus. Finally, the compression performance is analyzed on the server.
The server is a high-performance computer with a 4.00 GHz eight-core AMD processor and 28.0 GB of RAM. The image simulation resource has 64 GB of NAND flash to store remote sensing images and outputs images line by line. The maximum number of pixels in each line is 4096, and the line frequency is 0~8 kHz. In the multispectral compression system, the bit rate can be set to 2.0~0.25 bpp. The working frequency of the system is 88 MHz.
4.2. Multispectral Image Compression Validation and Analysis
In order to verify the validity of the proposed algorithm, the server injects a series of AVIRIS multiband remote sensing images into the image simulation resource. The group of multispectral images has four bands; the size of each band is 512 × 512, and the bit depth of the pixels is 8 bpp (bits per pixel). The bit rate is set to 1 bpp. Figure 13 shows the four original and reconstructed bands. In the displayed images, the original and reconstructed images are almost indistinguishable because the proposed compression algorithm has a high signal-to-noise ratio: the PSNR reaches 48.61 dB, 47.19 dB, 47.07 dB, and 46.38 dB, respectively, in Figures 13(e)–13(h). After each band is encoded by the proposed algorithm, the coded rate per pixel ranges from 0 to 2 bits, and most pixels require about 1 bit. Thus, the proposed compression algorithm is feasible for multispectral CCD images.
(a) Original band 1
(b) Original band 2
(c) Original band 3
(d) Original band 4
(e) Reconstructed band 1
(f) Reconstructed band 2
(g) Reconstructed band 3
(h) Reconstructed band 4
4.3. Multispectral Image Compression Algorithm Performance Analysis
To objectively evaluate the performance of the proposed deep-coupling-based compression scheme, the server injects multiple groups of multispectral images with different texture characteristics into the image simulation resource. The compression bit rate is set to 2.0~0.25 bpp. We use the PSNR to evaluate the rate-distortion performance of the proposed deep-coupling-based compression approach objectively. The PSNR can be expressed as

$$\mathrm{PSNR} = 10 \log_{10} \frac{(2^{Q}-1)^{2}}{\dfrac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} \bigl(f(i,j) - \hat{f}(i,j)\bigr)^{2}},$$

where $2^{Q}-1$ is the peak pixel value, $Q$ is the number of quantization bits of the original images, $f(i,j)$ denotes the pixel value of the original image at $(i,j)$, $\hat{f}(i,j)$ denotes the pixel value of the reconstructed image at $(i,j)$, and $M$ and $N$ denote the numbers of rows and columns of the multispectral images, respectively.
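The PSNR above is straightforward to compute per band; a small helper matching the formula (with peak value $2^{Q}-1$) follows:

```python
import numpy as np

def psnr(original, reconstructed, bit_depth=8):
    """PSNR in dB between an original and a reconstructed band."""
    mse = np.mean((original.astype(np.float64) -
                   reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')          # identical images
    peak = 2 ** bit_depth - 1
    return 10.0 * np.log10(peak ** 2 / mse)
```

Averaging this value over the bands of a group gives the average PSNR figures reported in Table 2 and Figure 14.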
In the first experimental part, in order to test the compression performance of the proposed approach, we use three groups of SPOT-1 multispectral images with different texture characteristics. Each group is composed of red, green, blue, and near-infrared spectral bands. The size of each band is 512 × 512, and the bit depth of the pixels is 8 bpp (bits per pixel). Table 2 shows the tested PSNR results of our approach at different bit rates. According to Table 2, the average PSNR reaches upwards of 40 dB. Therefore, the proposed algorithm based on the deep coupling approach has good compression performance and can satisfy the requirements of the design index.

In the second experiment, in order to compare the compression performance of our approach, we use several previously proposed compression algorithms, AT-3D-SPIHT [33], 3D-DWT [34], 3D-PCA [35], 3D-SPECK [36], and SA-DCT [37], to compress remote sensing MSCFBs. We use the AVIRIS multiband remote sensing images from the JPL laboratory. Each group is ; the depth of every pixel is 8 bits. The images are Low Altitude, Lunar Lake, Jasper Ridge, and Cuprite. The compression bit rate is set to 2.0~0.25 bpp. Figure 14 shows the comparison of average PSNR for the different compression methods. As Figure 14 shows, owing to the full usage of the post-transform and TD, the proposed PT-TD-based compression algorithm has the best compression performance: it improves the average PSNR by 0.3~1.3 dB compared with the other compression methods at 2.0~0.25 bpp.
Overall, the PT-TD-based compression method has excellent compression performance and fully meets the requirements of the space multispectral CCD camera.
4.4. The Spectral Distortion Analysis
We use the max similarity of spectrum (MSS) to evaluate the spectral distortion of multispectral images for our algorithm. Let F be the original remote sensing MSCFBs and let F̂ be the reconstructed remote sensing MSCFBs. The MSS can be expressed as

$$\mathrm{MSS}=\max_{i,j}\left[1-\frac{\sum_{k=1}^{K}\bigl(f(i,j,k)-\mu(i,j)\bigr)\bigl(\hat{f}(i,j,k)-\hat{\mu}(i,j)\bigr)}{K\,\sigma(i,j)\,\hat{\sigma}(i,j)}\right],$$

where i, j, and k are the line, column, and band number of the multispectral image, respectively, K is the total number of bands, and μ (resp. μ̂) and σ (resp. σ̂) are the average value and the mean square deviation of the corresponding pixel of F (resp. F̂) in the spectral dimension.
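The symbols of the printed MSS formula did not survive reproduction; purely as an illustration, the sketch below assumes a correlation-based definition (one minus the per-pixel spectral correlation, maximized over all pixels) that matches the stated behavior of growing with spectral distortion. The paper's exact definition may differ, and `mss` is a hypothetical helper name.

```python
import numpy as np

def mss(orig, recon, eps=1e-12):
    """Hypothetical spectral-distortion score: 1 minus the correlation of
    the spectral vectors at each pixel, maximized over all pixels.
    orig, recon: arrays of shape (K, M, N) with K spectral bands."""
    f = orig.astype(np.float64)
    g = recon.astype(np.float64)
    fm = f - f.mean(axis=0)            # remove per-pixel spectral mean
    gm = g - g.mean(axis=0)
    num = (fm * gm).sum(axis=0)
    den = np.sqrt((fm ** 2).sum(axis=0) * (gm ** 2).sum(axis=0)) + eps
    corr = num / den                   # per-pixel correlation in [-1, 1]
    return float((1.0 - corr).max())

# Identical cubes give (numerically) zero spectral distortion.
rng = np.random.default_rng(0)
cube = rng.random((4, 8, 8))
print(mss(cube, cube) < 1e-6)  # True
```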
The greater the value of MSS is, the greater the spectral distortion is. Figure 15 shows the MSS between the original and reconstructed multispectral images for the image group using the different approaches. From Figure 15, our approach has the lowest MSS, so the proposed algorithm preserves spectral information well.
Figure 16 demonstrates the reconstructed third band of two remote sensing images; each band is 1024 × 1024 with a pixel depth of 8 bits, and the bit rate is set to 0.5 bpp.
In order to view the detailed information of the reconstructed images, an enlarged part of each image is displayed. As Figure 16 shows, at the low bit rate the 3D-SPIHT-based approach causes visible ringing effects and blurred edges, whereas our approach does not blur the edges. Therefore, the proposed algorithm preserves image detail well.
4.5. Proposed Algorithm Complexity Analysis and Compression Time
In our method, the 1D-DCT is performed on the spectral dimension to exploit the spectral information. It is reported that the DCT requires fewer computations than the DWT: when one of the most efficient fast algorithms is used, an 8 × 8 2D-DCT uses only 462 additions and 54 multiplications, that is, each point of the 2D-DCT needs only about 0.84 multiplications and 7.22 additions. Therefore, performing the 1D-DCT on the B bands of M × N MSCFBs needs on the order of 0.84MNB multiplications and 7.22MNB additions.
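As a concrete (if unoptimized) illustration of the spectral stage, the sketch below applies an orthonormal 1D DCT-II along the band axis of a B × M × N cube using a plain matrix product rather than the fast algorithm whose operation counts are quoted above; `dct2_matrix` is a hypothetical helper.

```python
import numpy as np

def dct2_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] *= 1.0 / np.sqrt(2.0)
    return c * np.sqrt(2.0 / n)

# Apply the DCT along the spectral (band) axis of a B x M x N cube.
rng = np.random.default_rng(0)
B, M, N = 4, 8, 8
cube = rng.random((B, M, N))
D = dct2_matrix(B)
spectral = np.tensordot(D, cube, axes=(1, 0))  # shape (B, M, N)

# The transform is orthonormal, so the inverse is simply the transpose.
back = np.tensordot(D.T, spectral, axes=(1, 0))
print(np.allclose(back, cube))  # True
```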
After applying the 1D-DCT, a three-level 2D-DWT is applied to each spatial band. Let L be the decomposition level of the 2D-DWT. Because each level operates only on the quarter-size lowpass subband of the previous level, the total cost of transforming an M × N band is O(MN(1 + 1/4 + ⋯ + 1/4^(L−1))) = O(MN), and for the B bands of the MSCFBs the calculation complexity is O(MNB). In the PT-TD-based method, the 9/7 2D-DWT is used to transform each band of the MSCFBs, and we perform three levels of decomposition (L = 3); the calculation complexity of this stage is therefore O(MNB).
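The per-level cost structure can be illustrated with the simplest wavelet; the sketch below uses the Haar filter instead of the 9/7 filter used in the paper, recursing on the LL subband as described above, so each level touches a quarter of the previous level's samples.

```python
import numpy as np

def haar2d_level(x):
    """One level of a 2-D Haar DWT (illustration only; the paper uses the
    9/7 filter). Returns the LL, LH, HL, HH quarter-size subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0  # vertical lowpass (averages)
    d = (x[0::2, :] - x[1::2, :]) / 2.0  # vertical highpass (differences)
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

# Three decomposition levels: each level recurses only on the LL subband,
# so total work is about MN * (1 + 1/4 + 1/16) -> O(MN).
band = np.ones((64, 64))
for _ in range(3):
    band, lh, hl, hh = haar2d_level(band)
print(band.shape)  # (8, 8)
```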
After the 2D-DWT of the MSCFBs, we use the Hadamard post-transform at low bit rates. Let each DWT coefficient block have n coefficients. The fast Hadamard post-transform needs only n log2(n) addition operations. At high bit rates, we use the DCT post-transform; a 2D 4 × 4 DCT requires only 27 multiplications and 231 additions. In addition, the best post-transform selection uses only n − 1 addition operations and one comparison operator per candidate. Therefore, the complexity of the post-transform stage is O(n log2 n) operations for each block of n coefficients.
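A minimal sketch of the fast Walsh-Hadamard butterfly, which realizes the n log2(n) addition count cited above: each of the log2(n) stages performs n additions or subtractions.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform of a length-n block (n a power of
    two), using n*log2(n) additions/subtractions in log2(n) stages."""
    x = np.asarray(x, dtype=np.float64).copy()
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b          # butterfly sums
            x[i + h:i + 2 * h] = a - b  # butterfly differences
        h *= 2
    return x

# A periodic block compacts its energy into a few coefficients.
block = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
print(fwht(block))  # [4. 4. 0. 0. 0. 0. 0. 0.]
```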
After the post-transform of the 2D-DWT coefficients, the dimension of each subtensor is I1 × I2 × I3. Let R be the average dimension of the core tensor G. The complexity of computing the tensor decomposition of one subtensor is O(R · I1 I2 I3). Therefore, the overall calculation complexity of the TD stage in the PT-TD-based method is O(R · MNB).
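A Tucker decomposition can be sketched with the truncated HOSVD, a simple non-iterative variant computed from SVDs of the mode unfoldings; the paper's exact TD algorithm is not specified here, so this is an illustration only, and `unfold`, `hosvd`, and `reconstruct` are hypothetical helper names.

```python
import numpy as np

def unfold(t, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def hosvd(t, ranks):
    """Truncated HOSVD: factor matrices from the mode unfoldings,
    then the core as the multilinear projection of t."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(t, mode), full_matrices=False)
        factors.append(u[:, :r])
    g = t
    for mode, u in enumerate(factors):  # core G = t x1 U1^T x2 U2^T x3 U3^T
        g = np.moveaxis(np.tensordot(u.T, np.moveaxis(g, mode, 0), axes=(1, 0)), 0, mode)
    return g, factors

def reconstruct(g, factors):
    t = g
    for mode, u in enumerate(factors):  # t ~ G x1 U1 x2 U2 x3 U3
        t = np.moveaxis(np.tensordot(u, np.moveaxis(t, mode, 0), axes=(1, 0)), 0, mode)
    return t

# A tensor of multilinear rank (2, 2, 2) is recovered exactly
# from a 2 x 2 x 2 core.
rng = np.random.default_rng(0)
a, b, c = rng.random((6, 2)), rng.random((6, 2)), rng.random((4, 2))
t = np.einsum('ir,jr,kr->ijk', a, b, c)
g, us = hosvd(t, (2, 2, 2))
print(np.allclose(reconstruct(g, us), t))  # True
```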
In the following, we test and analyze the computation time of the proposed PT-TD-based compression method. Note that, in this experiment, we only evaluate the computation time of the PT-TD-based compression because the FPGA-based compression system is not yet fully optimized. The evaluation performs lossy compression of an image of size 3072 × 128 at 1.0 bpp on an FPGA EVM board with a system clock frequency of 88 MHz. Table 3 demonstrates the comparison of complexity between the proposed multispectral compression algorithm and the others.
From Table 3, the processing time of our algorithm reaches 0.042 µs/sample, and the data throughput is 23.81 MSPS, which is higher than JPEG2000, KLT, and 3D-SPIHT, so our approach has the lowest complexity. In our project, the multispectral CCD camera works at an orbit altitude of 500 km, the roll angle is ~, the latitudes are ~, and the line working frequency is 1.8094 kHz~0.85946 kHz. At the highest line working frequency, capturing an image of 128 × 3072 requires 70.74 ms, while compressing the same image with our approach requires 16.51 ms. Therefore, our approach can process all four band images within one capture period (4 × 16.51 ms = 66.04 ms < 70.74 ms), which meets the requirement of the project.
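The quoted throughput and timing budget can be cross-checked with simple arithmetic (all values are taken from the text; the 1.8094 kHz figure is the fastest quoted line frequency):

```python
# Sanity check of the timing figures quoted above.
t_sample_us = 0.042                  # processing time per sample (us)
throughput_msps = 1.0 / t_sample_us  # samples per microsecond == MSPS
print(round(throughput_msps, 2))     # 23.81

lines, pixels = 128, 3072
line_freq_hz = 1809.4                # fastest quoted line frequency
capture_ms = lines / line_freq_hz * 1e3
compress_ms = 16.51                  # quoted compression time per band
print(round(capture_ms, 2))          # 70.74
print(4 * compress_ms < capture_ms)  # True: all four bands fit in one capture
```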
In addition, we use a Xilinx XC2V6000-6FF1152 FPGA to implement the proposed algorithm. The design language is Verilog HDL, the development platform is ISE 8.2, and the synthesis tool is XST. Table 4 demonstrates the resource occupancy of our approach.

From Table 4, the LUTs occupy 67%, the slices occupy 70%, and the BRAMs occupy 80%. All indicators are below 95%, which meets the requirement of our project.
5. Conclusion
In this paper, we propose an efficient compression approach for MSCFBs. In our approach, the 1D-DCT is performed on the spectral dimension to exploit the spectral information, and the PT in the 2D-DWT domain is performed on each spectral band to exploit the spatial information. A deep coupling approach between the PT and TD is proposed to remove the residual spectral redundancy between bands and the residual spatial redundancy within each band. Experimental results show that the average PSNR of the PT-TD-based method reaches upwards of 40 dB and that it improves the average PSNR by 0.3~1.3 dB compared with other compression methods at 2.0~0.25 bpp. The processing time of our algorithm reaches 0.042 µs/sample, and the data throughput is 23.81 MSPS, which is higher than JPEG2000, KLT, and 3D-SPIHT. Therefore, the proposed PT-TD-based compression algorithm has excellent compression performance and low calculation complexity, and it is very suitable for space MSCFB compression.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
This work is sponsored by the China Postdoctoral Science Foundation (no. 2014M550720).
References
[1] D. Chaudhuri and A. Samal, "An automatic bridge detection technique for multispectral images," IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 9, pp. 2720–2727, 2008.
[2] P. Lier, G. A. Moury, C. Latry, and F. Cabot, "Selection of the SPOT-5 image compression algorithm," in Earth Observing Systems III, vol. 3439 of Proceedings of SPIE, pp. 541–552, San Diego, Calif, USA, July 1998.
[3] F. Khelifi, A. Bouridane, and F. Kurugollu, "Joined spectral trees for scalable SPIHT-based multispectral image compression," IEEE Transactions on Multimedia, vol. 10, no. 3, pp. 316–329, 2008.
[4] D. Valsesia and E. Magli, "A novel rate control algorithm for onboard predictive coding of multispectral and hyperspectral images," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 10, pp. 6341–6355, 2014.
[5] N. R. Mat Noor and T. Vladimirova, "Investigation into lossless hyperspectral image compression for satellite remote sensing," International Journal of Remote Sensing, vol. 34, no. 14, pp. 5072–5104, 2013.
[6] Y. Liang, J. Li, and K. Guo, "Lossless compression of hyperspectral images using hybrid context prediction," Optics Express, vol. 20, no. 7, pp. 8199–8206, 2012.
[7] A. Ruedin and D. Acevedo, "A class-conditioned lossless wavelet-based predictive multispectral image compressor," IEEE Geoscience and Remote Sensing Letters, vol. 7, no. 1, pp. 166–170, 2010.
[8] C. Huo, R. Zhang, and T. Peng, "Lossless compression of hyperspectral images based on searching optimal multibands for prediction," IEEE Geoscience and Remote Sensing Letters, vol. 6, no. 2, pp. 339–343, 2009.
[9] J. Wang and C. Chang, "Independent component analysis-based dimensionality reduction with applications in hyperspectral image analysis," IEEE Transactions on Geoscience and Remote Sensing, vol. 44, no. 6, pp. 1586–1600, 2006.
[10] D. A. Adjeroh and S. D. Sawant, "Error-resilient transmission for 3D DCT coded video," IEEE Transactions on Broadcasting, vol. 55, no. 2, pp. 178–189, 2009.
[11] M. Weeks and M. A. Bayoumi, "Three-dimensional discrete wavelet transform architectures," IEEE Transactions on Signal Processing, vol. 50, no. 8, pp. 2050–2063, 2002.
[12] A. Aggoun, "Compression of 3D integral images using 3D wavelet transform," IEEE/OSA Journal of Display Technology, vol. 7, no. 11, pp. 586–592, 2011.
[13] P. L. Dragotti, G. Poggi, and A. R. P. Ragozini, "Compression of multispectral images by three-dimensional SPIHT algorithm," IEEE Transactions on Geoscience and Remote Sensing, vol. 38, no. 1, pp. 416–428, 2000.
[14] G. Liu and F. Zhao, "Efficient compression algorithm for hyperspectral images based on correlation coefficients adaptive 3D zerotree coding," IET Image Processing, vol. 2, no. 2, pp. 72–82, 2008.
[15] X. L. Tang, W. A. Pearlman, and J. W. Modestino, "Hyperspectral image compression using three-dimensional wavelet coding," in Proceedings of the Conference on Image and Video Communications and Processing, vol. 5022, pp. 1037–1047, 2003.
[16] X. Tang and W. A. Pearlman, "Lossy-to-lossless block-based compression of hyperspectral volumetric data," in Proceedings of the IEEE International Conference on Image Processing (ICIP '06), pp. 1133–1136, October 2006.
[17] X. Tang and W. A. Pearlman, "Three-dimensional wavelet-based compression of hyperspectral images," in Hyperspectral Data Compression, pp. 273–308, 2006.
[18] T.-J. Chen, S.-C. Lin, Y.-C. Lin, R.-G. Cheng, L.-H. Lin, and W. Wu, "JPEG2000 still image coding quality," Journal of Digital Imaging, vol. 26, no. 5, pp. 866–874, 2013.
[19] X. Tang and W. A. Pearlman, "Progressive resolution coding of hyperspectral imagery featuring region of interest access," in Proceedings of the International Society for Optical Engineering, vol. 5817 of Proceedings of SPIE, pp. 270–280, March 2005.
[20] Q. Du and J. E. Fowler, "Hyperspectral image compression using JPEG2000 and principal component analysis," IEEE Geoscience and Remote Sensing Letters, vol. 4, no. 2, pp. 201–205, 2007.
[21] J. E. Sanchez, E. Auge, J. Santalo et al., "Review and implementation of the emerging CCSDS recommended standard for multispectral and hyperspectral lossless image coding," in Proceedings of the 1st International Conference on Data Compression, Communications and Processing, pp. 222–228, 2011.
[22] A. T. Mahfoodh and H. Radha, "Compression of image ensembles using tensor decomposition," in Proceedings of the Picture Coding Symposium (PCS '13), pp. 21–24, 2013.
[23] A. Karami, M. Yazdi, and A. Z. Asli, "Hyperspectral image compression based on Tucker decomposition and discrete cosine transform," in Proceedings of the 2nd International Conference on Image Processing Theory Tools and Applications (IPTA '10), pp. 122–125, Paris, France, July 2010.
[24] A. Karami, M. Yazdi, and G. Mercier, "Compression of hyperspectral images using discrete wavelet transform and Tucker decomposition," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 5, no. 2, pp. 444–450, 2012.
[25] X. Delaunay, M. Chabert, V. Charvillat et al., "Satellite image compression by post-transforms in the wavelet domain," Signal Processing, vol. 90, no. 2, pp. 599–610, 2010.
[26] X. Delaunay, M. Chabert, G. Morin, and V. Charvillat, "Bitplane analysis and contexts combining of JPEG2000 contexts for onboard satellite image compression," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), vol. 1, pp. 1057–1060, Honolulu, Hawaii, USA, 2007.
[27] T. Bruylants, J. Barbarien, A. Munteanu, J. Cornelis, and P. Schelkens, "On the use of directional transforms for still image coding," in Applications of Digital Image Processing XXXIV, vol. 8135 of Proceedings of SPIE, San Diego, Calif, USA, August 2011.
[28] X. Delaunay, M. Chabert, V. Charvillat et al., "Satellite image compression by directional decorrelation of wavelet coefficients," in Proceedings of the 33rd IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1193–1196, Las Vegas, Nev, USA, 2008.
[29] X. Delaunay, M. Chabert, V. Charvillat et al., "Satellite image compression by concurrent representations of wavelet blocks," Annals of Telecommunications - Annales des Télécommunications, vol. 67, no. 1-2, pp. 71–80, 2012.
[30] X. Delaunay, E. Christophe, C. Thiebaut, and V. Charvillat, "Best post-transforms selection in a rate-distortion sense," in Proceedings of the 15th IEEE International Conference on Image Processing (ICIP '08), pp. 2896–2899, October 2008.
[31] G. Zhou, A. Cichocki, Q. Zhao et al., "Nonnegative matrix and tensor factorizations: an algorithmic perspective," IEEE Signal Processing Magazine, vol. 31, no. 3, pp. 54–65, 2014.
[32] Y. Jin and H.-J. Lee, "A block-based pass-parallel SPIHT algorithm," IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 7, pp. 1064–1075, 2012.
[33] D. Ma, C. Ma, and C. Luo, "A compression algorithm of AT-3D-SPIHT for LASIS's hyperspectral image," Acta Optica Sinica, vol. 30, no. 2, pp. 378–381, 2010.
[34] B. Das and S. Banerjee, "Data-folded architecture for running 3D DWT using 4-tap Daubechies filters," IEE Proceedings - Circuits, Devices and Systems, vol. 152, no. 1, pp. 17–24, 2005.
[35] Q. Du and W. Zhu, "Integration of PCA and JPEG2000 for hyperspectral image compression," in Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery, vol. 6565 of Proceedings of SPIE, May 2007.
[36] J. Wu, K. Jiang, Y. Fang, L. Jiao, and G. Shi, "Hyperspectral image compression using distributed source coding and 3D SPECK," in MIPPR 2009: Multispectral Image Acquisition and Processing, vol. 7494 of Proceedings of SPIE, 2009.
[37] A. Kinane, V. Muresan, and N. O'Connor, "An optimal adder-based hardware architecture for the DCT/SA-DCT," in Visual Communications and Image Processing 2005, vol. 5960 of Proceedings of SPIE, 2005.
[38] I. Blanes and J. Serra-Sagristà, "Cost and scalability improvements to the Karhunen-Loève transform for remote-sensing image coding," IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 7, pp. 2854–2863, 2010.
[39] J. González-Conejero, J. Bartrina-Rapesta, and J. Serra-Sagristà, "JPEG2000 encoding of remote sensing multispectral images with no-data regions," IEEE Geoscience and Remote Sensing Letters, vol. 7, no. 2, pp. 251–255, 2010.
Copyright
Copyright © 2014 Jin Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.