Abstract

This paper proposes a JPEG lifting algorithm based on adaptive block compressed sensing (ABCS), which bridges the gap between the ABCS algorithm, which processes 1-dimensional vector data, and the JPEG compression algorithm, which processes 2-dimensional image data, and improves the compression rate at the same image quality compared with existing JPEG-like image compression algorithms. Specifically, mean information entropy and multifeature saliency indexes provide the basis for adaptive blocking and observing, respectively; a joint model and curve fitting are adopted for bit rate control; and a noise analysis model is introduced to improve the antinoise capability of the current JPEG decoding algorithm. Experimental results show that the proposed method achieves good fidelity and antinoise performance, especially at medium compression ratios.

1. Introduction

Image processing technology has always been a research hotspot in the field of computer science. In recent years especially, with the emergence of high-definition, large-scale images and the impact of massive video information, image compression technology has become particularly important. Image compression can store a larger proportion of image data in limited storage space; at the same time, it reduces the data size of images of the same quality, which effectively improves the efficiency of network data transmission. Traditional image compression technology consists of two independent parts, image acquisition and image compression, which limits any improvement based on fusing these two correlated parts. The emergence of compressed sensing (CS) theory breaks this frame of image compression: it completes image acquisition and compression synchronously in a single sparse observation step. On the one hand, this simplifies the image processing pipeline; on the other hand, it also opens new research directions for fused image compression.

There are many types of images processed in image compression technology, and this article selects the still image as the research object. Common still image compression formats include JPEG, JPEG2000, JPEG-XR, TIFF, GIF, and PCX. This paper focuses on image compression algorithms with a JPEG-like structure and improves them by combining them with CS technology. Algorithms whose principle architecture is similar to JPEG are collectively referred to as JPEG-like algorithms, including traditional JPEG, JPEG-LS, JPEG2000, and JPEG-XR. Data redundancy is essential to the compression of a still image. JPEG-like algorithms use time-frequency transforms and entropy coding as the main methods to eliminate data redundancy [1–3]. Although these algorithms have achieved certain still image compression effects, they do not fully address the three types of data redundancy (coding redundancy, interpixel redundancy, and psychological visual redundancy) [4]. Firstly, simple, unguided image blocking in the existing JPEG-like algorithms cannot support efficient coding to eliminate coding redundancy. Secondly, a uniform time-frequency transform of the same dimension cannot reasonably use the a priori information between pixels of different subimage blocks to reduce interpixel redundancy. Finally, earlier JPEG-like algorithms fail to eliminate psychological visual redundancy because they do not consider overall and local saliency. CS technology breaks through the limitations of the Nyquist sampling theorem and provides innovative ideas for the sparse reconstruction of signals [5]. In particular, adaptive block compressed sensing (ABCS), which combines adaptive partitioning and sampling, provides a feasible solution for the optimization of JPEG-like algorithms [6, 7]. That is, the block compression measurement matrix can be used in place of the forward discrete cosine transform (FDCT) matrix in JPEG coding, and the inverse discrete cosine transform (IDCT) process can be replaced by sparse reconstruction. In addition, multifeature saliency and noise analysis are introduced to implement adaptive control of the observation matrix and minimal-error iterative reconstruction [8, 9].

In this article, we propose a JPEG lifting algorithm based on ABCS, named JPEG-ABCS. The proposed algorithm focuses on the following aspects: (1) guiding the best morphological blocking by minimizing mean information entropy (MIE); (2) generating an element vector of subimage pixels using the texture feature and the 2-dimensional directional DCT; (3) selecting the dimension of the measurement matrix by variance and local saliency factors; (4) controlling the bit rate by matching the overall sampling rate and the quantization matrix; (5) realizing minimum-error iterative reconstruction under noisy conditions by using a noise influence model.

The remainder of this paper is organized as follows. In Section 2, the basic theories of JPEG-like algorithms and the ABCS algorithm are illustrated. In Section 3, we focus on the introduction of the JPEG-ABCS algorithm. The implementation of the proposed JPEG-ABCS algorithm is then analyzed in Section 4. In Section 5, experiments and result analysis show the benefits of JPEG-ABCS. The paper concludes in Section 6.

2. Preliminary Knowledge

2.1. Background of the Existing JPEG-Like Algorithms

The existing JPEG-like algorithms are similar in structure, mainly including blocking, forward time-frequency transform, quantization, entropy coding, and the inverse operation of the above four processes. As the basic one of JPEG-like algorithms, the structure of the JPEG model is shown in Figure 1.

It can be seen from Figure 1 that in the entire JPEG model, the original image I is treated as two-dimensional data, and its key link is the 2-dimensional DCT. Generally, the block size is square, such as 8 × 8, and the recommended quantization matrix (light-table) is given in equation (1) [10]. Based on Huffman coding, the encoding part adopts differential pulse code modulation (DPCM) for DC coefficients and run-length coding (RLC) for AC coefficients:

$$Q = \begin{bmatrix} 16 & 11 & 10 & 16 & 24 & 40 & 51 & 61 \\ 12 & 12 & 14 & 19 & 26 & 58 & 60 & 55 \\ 14 & 13 & 16 & 24 & 40 & 57 & 69 & 56 \\ 14 & 17 & 22 & 29 & 51 & 87 & 80 & 62 \\ 18 & 22 & 37 & 56 & 68 & 109 & 103 & 77 \\ 24 & 35 & 55 & 64 & 81 & 104 & 113 & 92 \\ 49 & 64 & 78 & 87 & 103 & 121 & 120 & 101 \\ 72 & 92 & 95 & 98 & 112 & 100 & 103 & 99 \end{bmatrix} \tag{1}$$
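For concreteness, the following sketch shows the blocking, separable 2D DCT, and quantization path that Figure 1 describes. The helper name blockwise_dct_quantize and the use of SciPy's dct are illustrative choices, not part of the JPEG standard or of the paper's code.

```python
import numpy as np
from scipy.fft import dct

# Standard JPEG luminance quantization table from equation (1).
Q_LUM = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def blockwise_dct_quantize(img, q=Q_LUM):
    """Split an 8x8-aligned grayscale image into blocks, apply the 2D DCT
    (two separable 1D DCTs), and quantize each coefficient block."""
    r, c = img.shape
    out = np.empty((r, c), dtype=np.int32)
    shifted = img.astype(np.float64) - 128.0        # level shift, as in JPEG
    for i in range(0, r, 8):
        for j in range(0, c, 8):
            blk = shifted[i:i+8, j:j+8]
            coef = dct(dct(blk, axis=0, norm='ortho'), axis=1, norm='ortho')
            out[i:i+8, j:j+8] = np.round(coef / q)  # the only lossy step here
    return out
```

Dividing by the light-table and rounding is the lossy step in this path; the downstream DPCM/RLC entropy coding is lossless.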

Compared with the fixed bit rate of the JPEG algorithm, the JPEG-LS algorithm adds the function of rate control by using a quality factor. JPEG2000 adopts nonfixed square blocking (tile) and discrete wavelet transform (DWT) to improve the quality of the restored image. JPEG-XR introduces the lapped orthogonal transform (LOT) to reduce the blocking artifact at low bit rates.

2.2. Basic Theory of CS Algorithm

CS theory was originally proposed by Candès et al. in 2006, who proved that the original signal can be accurately reconstructed from partial Fourier transform coefficients. The advent of CS technology solves the problem that image sampling and compression cannot be performed simultaneously. In general, research on CS theory covers three main topics: sparse representation, compression observation, and optimization reconstruction [11]. Firstly, the main task of sparse representation is to find a set of bases under which the signal has a sparse representation; this is the premise and foundation of the entire CS theory. Secondly, the primary task of compression observation is to design a linear measurement matrix uncorrelated with the basis vectors to obtain dimensionality-reduced observation data; this is the key content of CS theory. Lastly, optimization reconstruction is the difficult problem in CS theory: its goal is to recover the original signal by solving the inverse optimization problem for the sparse vector, typically via constrained optimization.

The CS mathematical model is based on the assumption of signal sparsity. Let $x \in \mathbb{R}^{N}$ be the original signal of dimension $N$. Suppose that the sparse matrix $\Psi \in \mathbb{R}^{N \times N}$ yields the sparse representation coefficient of $x$ as $\theta = \Psi^{T} x$, where $\theta$ contains only $K$ ($K \ll N$) nonzero elements. The original signal $x$ is then called a $K$-sparse signal under the sparse basis $\Psi$. The number of nonzero elements in the coefficient vector can be calculated by $K = \|\theta\|_{0}$, where $\|\cdot\|_{0}$ denotes the $\ell_{0}$ norm.

CS theory states that the information content of a sparse signal can be effectively captured by a small number of observations. Let $\Phi \in \mathbb{R}^{M \times N}$ be the measurement matrix, where $M \ll N$. The linear dimension-reduction acquisition of the original signal is given as $y = \Phi x$, where $y \in \mathbb{R}^{M}$ represents the CS observation signal. In addition, CS theory points out that to accurately recover the original signal from the observation signal, the dimensions must obey the condition $M \ge cK \log(N/K)$, where $c$ is an adjustment constant.

Since M < N, the reconstruction of the sparse signal from the measurement vector is ill-posed and requires solving an underdetermined system of equations, which has many solutions. It is common practice to achieve effective signal reconstruction by using signal sparsity as an additional constraint. Accurate signal reconstruction is accomplished by solving the following optimization problem:

$$\hat{\theta} = \arg\min_{\theta} \|\theta\|_{p} \quad \text{s.t.} \quad y = \Phi \Psi \theta = A \theta, \tag{2}$$

where $A = \Phi\Psi$ is the sensing matrix, $\|\cdot\|_{p}$ denotes the $\ell_{p}$ norm, and the value of $p$ is usually 0, 1, or 2 according to the optimization goal. For $p = 0$ this is an NP-hard problem, and to ensure the stability and robustness of the reconstruction process, the measurement matrix must satisfy the restricted isometry property (RIP).
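As a minimal, self-contained illustration of the pipeline just described (with $\Psi = I$, so the signal is sparse in the canonical basis), the following numpy sketch observes a K-sparse signal and recovers it with a hand-rolled orthogonal matching pursuit; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 8                    # signal dim, measurements, sparsity

theta = np.zeros(N)                     # K-sparse coefficient vector
theta[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # Gaussian measurement matrix
y = Phi @ theta                         # compressed observation, M << N

def omp(A, y, K):
    """Orthogonal matching pursuit: K greedy steps with least-squares refit."""
    residual, support = y.copy(), []
    for _ in range(K):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
    theta_hat = np.zeros(A.shape[1])
    theta_hat[support] = sol
    return theta_hat

# Small if recovery succeeds (typical at this M/K ratio).
print(np.linalg.norm(omp(Phi, y, K) - theta))
```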

The above describes the three key problems of the CS algorithm, which solves the separation problem of traditional image acquisition and compression. However, when CS is applied to large-scale, high-definition image and video processing, the 2-dimensional image contains so much information that whole-image projection requires a large-scale measurement matrix, which inevitably leads to two major problems: excessive storage and excessive reconstruction complexity. These problems limit the application of CS in image processing. The emergence of block compressed sensing (BCS) theory solves them well: the whole image is cut into several small unit blocks, each block is observed and reconstructed independently, and the blocks are then stitched together to restore the original image.

Traditional block compressed sensing (BCS) introduces the idea of blocking into CS theory to overcome the dimensional disaster of data processing, thereby improving the processing speed of the algorithm [12]. Its basic model is shown in the following equation:

$$y_{i} = \Phi_{B} x_{i}, \quad i = 1, 2, \ldots, T, \tag{3}$$

where $x_{i}$ and $y_{i}$ are the $i$-th subblocks of the original signal and observation signal, $\Phi_{B}$ is the block measurement matrix, and $T$ is the number of blocks. In addition, a coefficient $\eta = M_{B}/n$, the ratio of the per-block measurement number $M_{B}$ to the block dimension $n$, is often defined in BCS and is called the mean sampling rate. Analyzing the above BCS model, although the blocking strategy solves the problems of dimensional disaster and computational complexity, the model uses a unified measurement matrix, which can neither reflect the inherent differences between subimages nor achieve differentiated blocking.

In order to overcome these shortcomings, nonuniform blocking and observing are introduced into BCS and combined with the idea of adaptive algorithms, yielding the ABCS algorithm. The ABCS algorithm in this article introduces an adaptive strategy into BCS, mainly reflected in adaptive blocking and observation [13, 14]. The ABCS model is as follows:

$$y_{i} = \Phi_{i} x_{i}, \quad i = 1, 2, \ldots, T, \tag{4}$$

where $x_{i} \in \mathbb{R}^{n_{i}}$, $y_{i} \in \mathbb{R}^{M_{i}}$, and $\Phi_{i} \in \mathbb{R}^{M_{i} \times n_{i}}$ are the $i$-th subblocks of the original signal, observation signal, and measurement matrix, respectively. The difference between ABCS and BCS is that ABCS frees the dimensions of the subblocks and measurement matrices, which provides the conditions for reasonably exploiting the correlation of the internal elements of the original signal.
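A short sketch of the difference equation (4) introduces: each block gets its own measurement matrix whose row count may vary, and the mean sampling rate becomes the ratio of total measurements to total samples. Block count and per-block budgets below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)
blocks = [rng.standard_normal(64) for _ in range(4)]  # T = 4 subblocks, n = 64
M_list = [32, 8, 16, 8]                               # adaptive per-block budgets

observations = []
for x_i, M_i in zip(blocks, M_list):
    Phi_i = rng.standard_normal((M_i, x_i.size)) / np.sqrt(M_i)
    observations.append(Phi_i @ x_i)                  # y_i = Phi_i x_i, eq. (4)

eta = sum(M_list) / sum(x.size for x in blocks)       # mean sampling rate
print(f"mean sampling rate = {eta:.3f}")
```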

3. Fusion of JPEG Model and ABCS Algorithm

3.1. Workflow of JPEG Lifting Algorithm

According to the above section, the JPEG image compression model mainly includes blocking, FDCT, quantization, coding, and the inverse processes of these four parts. The focus of this section is how to embed the advantages of the ABCS algorithm into the JPEG model. The basis for fusing the JPEG model and the ABCS algorithm is their consistent purpose: image compression. The former compresses the image mainly by reducing the number of bits occupied by each pixel, while the latter compresses the data mainly by reducing the amount of sampled data, so the ABCS algorithm is suitable for embedding into the data acquisition stage of the JPEG model; that is, the ABCS algorithm is fused into the blocking and FDCT processes to reduce the amount of input data for the quantization process. In addition, the difference in data processing between JPEG and ABCS must be noted. Image data in the JPEG algorithm are processed as two-dimensional data, which preserves the two-dimensional structural characteristics of the image, while the input signal of the ABCS algorithm is a simple one-dimensional vector without two-dimensional characteristics. The proposed JPEG-ABCS algorithm therefore solves two main problems: (a) the conversion between the two-dimensional time-frequency transform of JPEG and the one-dimensional measurement model of ABCS; (b) the specific method of applying ABCS to the JPEG compression algorithm.

In typical JPEG image compression, after the preprocessing stage, an R × C input image is divided into 8 × 8 subimages $I_{i}$, $i = 1, 2, \ldots, T$. Each subimage is passed to a 2D DCT, and by the separability of the DCT, the 2D DCT can be completed using two 1-dimensional DCTs. In addition, the blocking method designed in this paper adopts variable-shape blocking under a unified dimension ($n = n_{r} \times n_{c} = 64$), so the FDCT process can be described as follows:

$$\tilde{I}_{i} = D_{r} I_{i} D_{c}^{T}, \tag{5}$$

where $\tilde{I}_{i}$ is the subimage in the DCT domain, $D_{r} \in \mathbb{R}^{n_{r} \times n_{r}}$ and $D_{c} \in \mathbb{R}^{n_{c} \times n_{c}}$ are the 1D vertical and horizontal DCT orthogonal matrices, respectively, and $n_{r}$ and $n_{c}$ are the numbers of rows and columns of each subimage [15].

The block sparse representation and flexible uniform-dimension blocking are introduced into the ABCS algorithm, so equation (4) can be rewritten as follows:

$$y_{i} = \Phi_{i} x_{i} = \Phi_{i} \Psi_{i} \theta_{i} = A_{i} \theta_{i}, \quad \Phi_{i} \in \mathbb{R}^{M_{i} \times n}, \quad i = 1, 2, \ldots, T, \tag{6}$$

where $x_{i}$, $y_{i}$, and $\theta_{i}$ are the $i$-th subblocks of the original signal, observation signal, and sparse signal; $\Phi_{i}$, $\Psi_{i}$, and $A_{i}$ are the $i$-th subblocks of the measurement matrix, sparse matrix, and sensing matrix; and $\eta_{i} = M_{i}/n$ is the subsampling rate of the $i$-th subblock.

To retain the two-dimensional characteristics of the image signal in JPEG-ABCS, it is necessary to analyze the two-dimensional DCT in JPEG and the compression observation in ABCS. A 1-dimensional vector generated directly from a subimage by column/row scanning cannot itself carry two-dimensional structural characteristics. The inverse solution of the reconstructed signal in the ABCS algorithm is generally denoted as $\hat{x}_{i} = \Psi_{i} \hat{\theta}_{i}$; that is, the reconstruction of the original signal is related only to the sparse representation coefficient $\hat{\theta}_{i}$. If the sparse representation coefficient carries two-dimensional structure information, this is equivalent to the original signal carrying two-dimensional structure information. Therefore, equivalent two-dimensional block vector generation can be achieved by taking the sparse matrix of ABCS as the matrix corresponding to the two-dimensional DCT transform:

$$\theta_{i} = \mathrm{vec}(\tilde{I}_{i}) = \mathrm{vec}\left(D_{r} I_{i} D_{c}^{T}\right) = (D_{c} \otimes D_{r})\, \mathrm{vec}(I_{i}) = (D_{c} \otimes D_{r})\, x_{i}. \tag{7}$$

Analyzing equation (7), the relation between the sparse matrix and the DCT orthogonal matrices is established as follows:

$$\Psi_{i} = (D_{c} \otimes D_{r})^{T} = D_{c}^{T} \otimes D_{r}^{T}, \tag{8}$$

where $\otimes$ represents the Kronecker product. The original signal vector $x_{i}$ is obtained by scanning the pixel values of the subimage vertically (column by column). In addition, if the texture of the image is not in the vertical or horizontal direction, a directional DCT is used instead of the horizontal and vertical DCT orthogonal matrices.
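The Kronecker relation in equation (8) can be checked numerically. The sketch below builds the 1D DCT matrices with SciPy and verifies that column-major (vertical) scanning commutes with the separable 2D DCT, using an illustrative 16 × 4 block shape.

```python
import numpy as np
from scipy.fft import dct

n_r, n_c = 16, 4
D_r = dct(np.eye(n_r), axis=0, norm='ortho')   # 1D vertical DCT matrix
D_c = dct(np.eye(n_c), axis=0, norm='ortho')   # 1D horizontal DCT matrix

I_blk = np.random.default_rng(2).standard_normal((n_r, n_c))
lhs = (D_r @ I_blk @ D_c.T).flatten(order='F')      # vec of the 2D DCT
rhs = np.kron(D_c, D_r) @ I_blk.flatten(order='F')  # Kronecker form, eq. (7)
print(np.allclose(lhs, rhs))                        # True
```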

The workflow of the proposed JPEG-ABCS algorithm, shown in Figure 2, is obtained by replacing FDCT and IDCT in JPEG with adaptive sparse observing and sparse restoring, respectively, replacing blocking in JPEG with adaptive blocking and vectorization, and adding noise at the data storage or transmission stage. Comparing the two image compression models shown in Figures 1 and 2, the key points of the JPEG-ABCS model are (1) adaptive blocking, that is, replacing a fixed block with a variable block; (2) adaptive vectorization, that is, providing a matching vector generation method based on the image's orientation characteristics; (3) adaptive observing, that is, replacing uniform observing with nonuniform observing; (4) adding a controllable variable in the rate control process to improve the JPEG algorithm; (5) designing a denoising method in adaptive restoring to reduce the noise impact on restored data.

3.2. Innovation of JPEG Lifting Algorithm

The innovations of the above JPEG lifting algorithm are as follows:

(1) Adding the mean sampling rate to overcome the deficiency of traditional JPEG-like algorithms, which can use only the time-frequency transform and the quantization matrix to eliminate redundant information in image compression.

(2) By analyzing the correlation between sparsity and error, an optimal OMP iterative algorithm is established to enhance the JPEG-like algorithm's noise immunity.

(3) In adaptive block observation, the MIE-based adaptive blocking reduces the information entropy of the subimage set to lower the bpp, the ASM-based adaptive vectorization maximally avoids loss of image information, and the adaptive observation based on multifeature saliency ensures a reasonable distribution of the total measurement number.

4. Implementation of JPEG-ABCS

This section mainly describes the implementation of the JPEG-ABCS algorithm mentioned in the previous section. The specific implementation is discussed from four aspects: adaptive blocking, adaptive vectorization, adaptive observing, and denoising by optimizing the number of iterations.

4.1. Adaptive Blocking Method Based on MIE

The adaptive blocking method proposed in this paper is variable partitioning under the same dimension, that is, $n = n_{r} \times n_{c}$, where $n$ is a fixed value (typically 64) and $n_{r}$ and $n_{c}$ are the numbers of rows and columns of the variable block, with typical values $n_{r}, n_{c} \in \{1, 2, 4, 8, 16, 32, 64\}$. Specifically, the optimized block is based on minimizing the mean information entropy (MIE) of the block observation signal set. Since the blocking process must be completed before observation, an ungenerated observation set cannot be used to guide the blocking optimization. Therefore, an alternative method is introduced to guide reasonable blocking by minimizing the MIE of the original signal's block set:

$$(n_{r}, n_{c})^{*} = \arg\min_{j \in \{1, \ldots, T_{2}\}} \mathrm{MIE}_{j}, \quad \mathrm{MIE}_{j} = \frac{1}{T_{1}} \sum_{i=1}^{T_{1}} H_{i}^{(j)}, \quad H_{i}^{(j)} = -\sum_{v = g_{\min}}^{g_{\max}} p_{iv} \log_{2} p_{iv}, \tag{9}$$

where $\mathrm{MIE}_{j}$ represents the MIE of the pixel-domain block set under the $j$-th blocking way, $H_{i}^{(j)}$ represents the information entropy of the $i$-th subimage in the pixel domain, $p_{iv}$ represents the proportion of elements with pixel gray value $v$ in the $i$-th subimage, $T_{2}$ is the number of blocking ways, and $g_{\min}$ and $g_{\max}$ are the minimum and maximum pixel gray values in the original signal, respectively. However, the effectiveness of this method relies on the consistency between the observation signal's MIE and the original signal's MIE under the same partitioning. To verify this consistency, a test experiment was conducted using multiple standard images, and its results are shown in Figure 3. The experimental data show that, under the constraint of minimizing MIE, the optimal block of the original signal and that of the observed signal are consistent, which verifies the feasibility and rationality of the proposed block optimization method. Specifically, it can be seen from Figure 3 that under the minimum-MIE constraint, the optimal block shape is related only to the test image itself, not to the sampling rate. In addition, a large number of other standard test images verify that the search for the best block follows the same trend whether applied to the observation signal or the original signal, and an extreme point always exists.
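A possible implementation of this block-shape search, assuming an 8-bit grayscale image whose sides are divisible by the candidate shapes (true for the 256 × 256 test images used later), is sketched below; the function names are illustrative.

```python
import numpy as np

def block_entropy(block):
    """Shannon entropy of the gray-level histogram of one block (8-bit input)."""
    p = np.bincount(block.ravel(), minlength=256) / block.size
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def best_block_shape(image, n=64):
    """Evaluate MIE for every n_r x n_c shape with n_r * n_c = n; equation (9)."""
    shapes = [(nr, n // nr) for nr in (1, 2, 4, 8, 16, 32, 64)]
    mies = {}
    for nr, nc in shapes:
        ents = [block_entropy(image[i:i+nr, j:j+nc])
                for i in range(0, image.shape[0], nr)
                for j in range(0, image.shape[1], nc)]
        mies[(nr, nc)] = float(np.mean(ents))   # MIE of this partition
    return min(mies, key=mies.get), mies

img = np.random.default_rng(4).integers(0, 256, (256, 256), dtype=np.uint8)
shape, mies = best_block_shape(img)             # shape = argmin MIE
```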

4.2. Adaptive Vectorization Based on ASM

The basis of adaptive vectorization is the identification of the directional characteristics of the image. There are many methods for identifying directional features in the field of array signal processing, especially in DOA estimation research, such as the Capon algorithm, MUSIC algorithm, maximum likelihood algorithm, subspace fitting algorithm, and ESPRIT algorithm [16, 17]. In this article, the angular second moment (ASM) of the gray-level co-occurrence matrix (GLCM) is used to characterize the saliency of each direction [18]:

$$\mathrm{ASM}(d, \alpha) = \sum_{u} \sum_{v} \hat{P}(u, v \mid d, \alpha)^{2}, \quad \hat{P}(u, v \mid d, \alpha) = \frac{P(u, v \mid d, \alpha)}{N_{p}}, \tag{10}$$

where $P(\cdot)$ is the GLCM function, $P(u, v \mid d, \alpha)$ is the term of the GLCM counting adjacent pixel pairs in the image with distance $d$, direction $\alpha$, and gray values $(u, v)$, $\hat{P}(u, v \mid d, \alpha)$ is its normalized form, and $N_{p}$ is the ideal maximum number of pixel pairs under the selected conditions.

Combined with the rectangular shape of adaptive blocking, ASM values in four directions are defined for adaptive vectorization, namely, $\mathrm{ASM}_{0^{\circ}}$, $\mathrm{ASM}_{45^{\circ}}$, $\mathrm{ASM}_{90^{\circ}}$, and $\mathrm{ASM}_{135^{\circ}}$ [19]. In addition, the maximum of these four values is defined as $\mathrm{ASM}_{\max}$.

The specific method of adaptive vectorization is based on the relationship among the four ASM values. If $\mathrm{ASM}_{\max} \in \{\mathrm{ASM}_{0^{\circ}}, \mathrm{ASM}_{90^{\circ}}\}$ and $\mathrm{ASM}_{\max} = \mathrm{ASM}_{0^{\circ}}$, the vectorization set of the original subimage set is generated using horizontal scanning and vertical linking; if $\mathrm{ASM}_{\max} \in \{\mathrm{ASM}_{0^{\circ}}, \mathrm{ASM}_{90^{\circ}}\}$ and $\mathrm{ASM}_{\max} = \mathrm{ASM}_{90^{\circ}}$, it is generated using vertical scanning and horizontal linking; if $\mathrm{ASM}_{\max} \in \{\mathrm{ASM}_{45^{\circ}}, \mathrm{ASM}_{135^{\circ}}\}$ and $\mathrm{ASM}_{\max} = \mathrm{ASM}_{45^{\circ}}$, it uses zigzag generation along the main diagonal direction; and if $\mathrm{ASM}_{\max} \in \{\mathrm{ASM}_{45^{\circ}}, \mathrm{ASM}_{135^{\circ}}\}$ and $\mathrm{ASM}_{\max} = \mathrm{ASM}_{135^{\circ}}$, it uses zigzag generation along the counter-diagonal direction. These four cases correspond to Conditions 1 to 4 in Algorithm 1. It should be noted that the adaptive vectorization of each subimage must be matched with the design of the sparse matrix to jointly realize a one-dimensional vectorization that preserves the two-dimensional structural characteristics of the subimage data.
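The following sketch computes the four directional ASM values from a GLCM built in numpy. The offset convention (distance 1 at 0°, 45°, 90°, 135°) follows the common GLCM definition, and normalizing by the total pair count stands in for the ideal maximum pair count $N_p$; both are assumptions, not the paper's exact construction.

```python
import numpy as np

def glcm_asm(img, offset, levels=256):
    """ASM (energy) of the normalized co-occurrence matrix for one offset."""
    dr, dc = offset
    rows, cols = img.shape
    r0, r1 = max(0, -dr), rows - max(0, dr)
    c0, c1 = max(0, -dc), cols - max(0, dc)
    a = img[r0:r1, c0:c1]                    # reference pixels
    b = img[r0 + dr:r1 + dr, c0 + dc:c1 + dc]  # neighbors at the offset
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1)  # count co-occurring gray pairs
    P /= P.sum()                             # normalized \hat{P}(u, v)
    return float(np.sum(P ** 2))             # ASM = sum of squared entries

# Distance-1 offsets for the four GLCM directions (row, col convention).
offsets = {'0': (0, 1), '45': (-1, 1), '90': (-1, 0), '135': (-1, -1)}
image = np.random.default_rng(3).integers(0, 256, (64, 64), dtype=np.uint8)
asm = {k: glcm_asm(image, v) for k, v in offsets.items()}
dominant = max(asm, key=asm.get)             # selects Condition 1-4 above
```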

4.3. Adaptive Observing Based on Multifeature Saliency and Bit Rate Control

The key point of the nonuniform measurement matrix is the determination of $M_{i}$. Considering that different subimages contain different amounts of information and that the human eye's attention mechanism is differently sensitive to different image regions, this paper proposes an adaptive measurement matrix based on multifeature saliency and the orthogonal symmetric Toeplitz matrix (OSTM):

$$M_{i} = \mathrm{round}\!\left(\lambda M \cdot \frac{\mathrm{Var}(x_{i})^{\alpha}\, W(x_{i})^{\beta}}{\sum_{k=1}^{T_{1}} \mathrm{Var}(x_{k})^{\alpha}\, W(x_{k})^{\beta}}\right), \quad \Phi_{i} \in \mathbb{R}^{M_{i} \times n}, \tag{11}$$

where $\lambda$ is the adjustment factor, $\mathrm{Var}(\cdot)$ stands for the overall variance function, $W(\cdot)$ is the local saliency function according to Weber's theorem [20] evaluated over the $n_{s}$ elements of the salient domain determined by the optimal bounding box, $\Phi_{i}$ is formed by randomly taking $M_{i}$ rows of the $n$-dimensional OSTM [21], and $\alpha = 2$ and $\beta = 1$ are the recommended values. The purpose of designing the adaptive measurement matrix in this way is to rationalize the sampling process: more samples for detail blocks and fewer samples for smooth blocks.
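A hedged sketch of the measurement allocation this equation implies: each block's share of the total budget M grows with its variance (α = 2) and a Weber-style local contrast term (β = 1). The saliency proxy below is illustrative; the paper's exact saliency function and OSTM construction are not reproduced here.

```python
import numpy as np

def allocate_measurements(blocks, M_total, alpha=2.0, beta=1.0):
    """Distribute M_total measurements over blocks by variance and saliency."""
    var = np.array([b.var() for b in blocks])
    # Weber-like local contrast: mean deviation relative to the block mean
    # (an illustrative stand-in for the paper's saliency function W).
    sal = np.array([np.abs(b - b.mean()).mean() / (b.mean() + 1e-9)
                    for b in blocks])
    w = (var ** alpha) * (sal ** beta)
    w = w / w.sum() if w.sum() > 0 else np.full(len(blocks), 1 / len(blocks))
    M_i = np.maximum(1, np.round(w * M_total).astype(int))  # avoid undersampling
    return np.minimum(M_i, blocks[0].size)                  # avoid oversampling

rng = np.random.default_rng(5)
demo_blocks = [rng.integers(0, 256, 64).astype(float) for _ in range(8)]
print(allocate_measurements(demo_blocks, M_total=128))
```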

The traditional JPEG-like algorithms control the bit rate (bits per pixel, bpp) through the quantization matrix, encoding, and bit-stream organization [22]. In this paper, the mean sampling rate $\eta$ is additionally used to improve the compression performance of the JPEG-ABCS algorithm. The bit rate for an 8-bit, 256-level grayscale image is controlled as follows:

$$\mathrm{bpp} = 8 \cdot \eta \cdot \gamma_{q} \cdot \gamma_{e}, \tag{12}$$

where $\eta$ is the mean sampling rate, which also corresponds to the information decay ratio caused by sparse measurement, $\gamma_{q}$ ($0 < \gamma_{q} \le 1$) represents the information decay ratio caused by quantization, and $\gamma_{e}$ ($0 < \gamma_{e} \le 1$) is the bit compression ratio of the entropy encoding and bit-stream organization.

Analysis of the above three factors affecting rate control in the JPEG-ABCS model shows that once the encoding method is determined, the only factors that can be optimized are $\eta$ and $\gamma_{q}$, while $\gamma_{e}$ is fixed. In order to reduce the bit rate of restored images at the same quality, these two factors must be set reasonably. This article focuses on analyzing the impact of different $\eta$ on image performance under the same bpp and on the matching design of the quantization matrix, which determines $\gamma_{q}$. In analyzing the impact of $\eta$ on the performance of compressed images, a synthetic indicator composed of the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) is used as the evaluation criterion to find the best $\eta$ under different bpp. At the same time, in order to compare different $\eta$ under the same bpp, different quantization matrices must be set; this article uses the quality factor (QF) to design them [23]. Because the data quantized in the JPEG-ABCS algorithm ($y_{i}$) are one-dimensional vectors and are also normalized measurements of the frequency-domain sparse coefficients ($\theta_{i}$) of the original signal, the quantization matrix is weakened into a quantization vector whose elements no longer characterize frequency-domain properties and have the same importance. Therefore, the elements of the quantization matrix designed for each subimage in this paper have the same value (denoted $q$). The goal of the quantization matrix matching design is thus only to find a fitting function approximating the relationship between bpp and QF. Figure 4 shows the experimental data of the above test process using Lena. According to Figure 4(a), under the constraint of maximizing the synthetic indicator, the optimal mean sampling rate $\eta^{*}$ increases with bpp. An obtaining function $\eta^{*}(\mathrm{bpp})$ can be summarized as a piecewise function of bpp, as shown in equation (14), in which the two bpp thresholds take the typical values 0.15 and 0.3. It should be noted that equation (14) can be directly applied only to images whose MIE is similar to Lena's; for other images, the threshold conditions in the equation should be corrected according to the MIE of the image block set. Specifically, a correction coefficient, defined as the MIE ratio of the given image to the Lena image, is introduced to adjust the two thresholds. The fitting function $\mathrm{QF}(\mathrm{bpp})$ adopts cubic curve fitting, whose data are derived from measured values of QF and bpp:

$$\mathrm{QF} = \left\lfloor a \cdot \mathrm{bpp}^{3} + b \cdot \mathrm{bpp}^{2} + c \cdot \mathrm{bpp} \right\rfloor, \tag{15}$$

where $\lfloor \cdot \rfloor$ is the floor function and $a$, $b$, and $c$ are obtained by curve fitting. Figure 4(b) compares the actual light-table's QF with the design value obtained from equation (15); the QF obtained by equation (15) satisfies the actual requirements well.
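The QF fitting step can be sketched as follows. The (bpp, QF) sample pairs are placeholders rather than the paper's measured data, and np.polyfit with degree 3 also fits a constant term, unlike the three-coefficient form of equation (15).

```python
import numpy as np

# Placeholder calibration data: quality factors tried and the bit rates
# actually measured at those settings (illustrative values only).
qf_samples  = np.array([10, 25, 50, 75, 90])
bpp_samples = np.array([0.18, 0.30, 0.48, 0.80, 1.40])

coeffs = np.polyfit(bpp_samples, qf_samples, deg=3)  # cubic fit of QF(bpp)

def qf_for_bpp(bpp):
    """Design QF for a target bit rate, with the floor as in equation (15)."""
    return int(np.floor(np.polyval(coeffs, bpp)))

print(qf_for_bpp(0.4))
```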

4.4. Denoising by Optimizing the Number of Iterations

Consider the noise observation model as follows:

$$y_{e} = \Phi x + e = \Phi \tilde{x}, \tag{16}$$

where $e$ is additive white Gaussian noise with zero mean and standard deviation $\sigma$ and $\tilde{x} = x + \Phi^{+} e$ is the equivalent noisy original signal.

Since the reconstructed original signal $\hat{x}$ is recovered from the noisy observation signal $y_{e}$, the reconstruction error $\varepsilon$ is caused mainly by the noise and the reconstruction algorithm, and it can be defined by the $\ell_{2}$ norm as

$$\varepsilon = \|\hat{x} - x\|_{2}^{2}, \quad \hat{x} = \Psi \hat{\theta}, \tag{17}$$

where $\hat{\theta}$, the reconstructed sparse signal, is restored by the pseudoinverse operation, that is, $\hat{\theta} = A^{+} y_{e}$. Here $A^{+}$ is the pseudoinverse of the sensing matrix $A$, usually $A^{+} = (A^{T} A)^{-1} A^{T}$, and it also represents the reconstruction algorithm in CS. The reconstruction algorithm of CS is based on the sparse representation of the signal [24]; that is, the reconstruction sparsity $s$ of $\hat{\theta}$ satisfies $s \le M$, so the pseudoinverse operation for $\hat{\theta}$ can be rewritten as follows:

$$\hat{\theta}_{s} = A_{s}^{+} y_{e}, \tag{18}$$

where $\Psi_{s}$ is the matrix generated from the corresponding $s$ column vectors of $\Psi$ and $A_{s}$ is the matrix generated from the $s$ column vectors of $A$ that have the greatest correlation with $y_{e}$.

Because equation (17) cannot be calculated directly, we add and subtract the projected signal components to aid analysis and calculation:

$$\varepsilon = \|\hat{x} - x\|_{2}^{2} = \|P_{s}\tilde{x} - x\|_{2}^{2} = \|P_{s} x_{e} - (I - P_{s})\, x\|_{2}^{2}, \tag{19}$$

where $P_{s} = \Psi_{s} A_{s}^{+} \Phi$ is a projection matrix of rank $s$, $I - P_{s}$ is a projection matrix of rank $n - s$, and $x_{e} = \Phi^{+} e$ is the equivalent noise in the signal domain. Expanding the norm gives

$$\varepsilon = \|(I - P_{s})\, x\|_{2}^{2} + \|P_{s} x_{e}\|_{2}^{2} - 2\left\langle (I - P_{s})\, x,\; P_{s} x_{e} \right\rangle. \tag{20}$$

Since $P_{s}$ and $I - P_{s}$ satisfy orthogonality, the inner product of $(I - P_{s})\, x$ and $P_{s} x_{e}$ is equal to zero. Therefore, equation (19) can be transformed into the following form:

$$\varepsilon = \varepsilon_{A} + \varepsilon_{e}, \quad \varepsilon_{A} = \|(I - P_{s})\, x\|_{2}^{2}, \quad \varepsilon_{e} = \|P_{s} x_{e}\|_{2}^{2}. \tag{21}$$

Equation (21) reveals that the reconstruction error $\varepsilon$ is composed of the algorithm error $\varepsilon_{A}$ and the noise error $\varepsilon_{e}$. $\varepsilon_{A}$ decreases as the reconstruction sparsity $s$ increases, while $\varepsilon_{e}$ increases with $s$ [25, 26]. Therefore, reconstruction error versus reconstruction sparsity is a bias-variance trade-off, and there must be an optimal reconstruction sparsity $s^{*}$ that minimizes the reconstruction error:

$$s^{*} = \arg\min_{s} \varepsilon(s) = \arg\min_{s} \left(\varepsilon_{A}(s) + \varepsilon_{e}(s)\right). \tag{22}$$

Figure 5 shows the relationship between reconstruction sparsity and reconstruction error under different noise conditions using a modified Lena test image. The modified Lena image is generated by retaining 60 sparse coefficients under a discrete cosine basis; that is, its original sparsity $K$ is 60. The noise added in the test is zero-mean Gaussian white noise whose standard deviation $\sigma$ represents the noise intensity. The PSNR indicator is used to characterize the size of the reconstruction error. It can be seen from Figure 5 that the optimal reconstruction sparsity decreases as the noise intensity increases and is less than the original sparsity $K$.

From the verification experiment shown in Figure 5, we can see that there is indeed an optimal reconstruction sparsity in the reconstruction process under the noise background. However, equation (22) is not a feasible solution that can be directly used to optimize the reconstruction process. In the actual reconstruction process, only the observation data at the receiving end can be used for the optimization algorithm. Therefore, this paper designs a solution that uses observation data to optimize the reconstruction sparsity.

According to the definition of CS, the measurement matrix $\Phi$ obeys the RIP criterion, and therefore

$$(1 - \delta)\, \varepsilon \le \varepsilon_{y} \le (1 + \delta)\, \varepsilon, \tag{23}$$

where $\delta$ is a coefficient related to $\Phi$ and $s$, and $\varepsilon_{y} = \|A_{s} \hat{\theta}_{s} - y_{e}\|_{2}^{2}$ is the reconstruction error of the observation data. Transforming formula (23) gives the boundaries of the original-data reconstruction error as follows:

$$\frac{\varepsilon_{y}}{1 + \delta} \le \varepsilon \le \frac{\varepsilon_{y}}{1 - \delta}. \tag{24}$$

It can be seen from the above two equations that the reconstruction errors of the original data and of the observation data are consistent, so the reconstruction sparsity can be optimized by minimizing the error of the observation data:

$$s^{*} = \arg\min_{s} \varepsilon_{y}(s) = \arg\min_{s} \left\| A_{s} \hat{\theta}_{s} - y_{e} \right\|_{2}^{2}. \tag{25}$$

It is known from the above equation that the reconstruction error of the observation signal follows a chi-square distribution, so the upper and lower boundaries of $\varepsilon_{y}$ can be derived from chi-square distribution probabilities. In addition, when calculating the minimum value of $\varepsilon_{y}$, the worst case is considered; that is, the minimum of the upper bound of $\varepsilon_{y}$ is calculated.

In the $\ell_{0}$-norm reconstruction algorithms of CS, the reconstruction sparsity equals the number of iterations. Therefore, optimizing the number of iterations can reduce the noise impact on image quality when orthogonal matching pursuit (OMP) is used as the signal recovery algorithm [27, 28]. According to the Bayesian information criterion (with the effective-probability tuning parameter set to 0) [29], the optimal number of iterations can be achieved by minimizing the noise influence:

$$s^{*} = \arg\min_{s} \left| \varepsilon_{y}(s) - \varepsilon_{e,y}(s) \right|, \quad \varepsilon_{e,y}(s) = (M - s)\, \sigma^{2}, \tag{26}$$

where $\varepsilon_{e,y}$ is the noise error of the observation data.
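One way to realize this stopping rule in code: run OMP but terminate as soon as the residual energy falls to the expected noise energy $(M - s)\sigma^{2}$, which is a common instantiation of the chi-square argument above. The function below is a sketch under that assumption, not the paper's exact procedure.

```python
import numpy as np

def omp_noise_aware(A, y, sigma, s_max=None):
    """OMP with early stopping at the noise floor instead of a fixed sparsity."""
    M, N = A.shape
    s_max = s_max or M // 2
    residual, support, sol = y.copy(), [], np.zeros(0)
    for s in range(1, s_max + 1):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
        if residual @ residual <= (M - s) * sigma ** 2:  # noise floor reached
            break                                        # s* found: stop early
    theta_hat = np.zeros(N)
    theta_hat[support] = sol
    return theta_hat
```

Compared with running a fixed number of iterations, stopping at the noise floor avoids fitting the later iterations to noise, which is exactly the bias-variance trade-off of equation (21).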

4.5. Pseudocode of JPEG-ABCS

The JPEG lifting algorithm (JPEG-ABCS) described in this article is composed mainly of the four components described above (the entropy codec is standard), and its full pseudocode is shown in Algorithm 1.

(1) Input:
     original image I; target bit rate bpp; subimage dimension n = 64
(2) Initialization:
     [R, C] = size(I); N = R × C; T1 = N/n; // number of subimages
     {(n_r(j), n_c(j)) : n_r(j) × n_c(j) = n, j = 1, …, T2} // candidate block shapes
 // step 1: adaptive blocking and vectorization
(3) for j = 1, …, T2 do
(4)     partition I into T1 blocks of shape n_r(j) × n_c(j);
(5)     compute the entropy H_i of each block in the pixel domain;
(6)     MIE(j) = (1/T1) · Σ_i H_i;
(7) end for
(8) (n_r, n_c) = argmin_j MIE(j); // best block shape by equation (9)
(9) partition I into subimages I_1, …, I_T1 of shape n_r × n_c;
(10) compute the GLCM of I and ASM_0°, ASM_45°, ASM_90°, ASM_135°; // equation (10)
(11) ASM_max = max(ASM_0°, ASM_45°, ASM_90°, ASM_135°);
(12) if ASM_max ∈ {ASM_0°, ASM_90°}
(13)     if ASM_max = ASM_0° // Condition = 1
(14)         x_i = vec_h(I_i); // horizontal scanning and vertical linking
(15)     else // Condition = 2
(16)         x_i = vec_v(I_i); // vertical scanning and horizontal linking
(17)     end if
(18) else
(19)     if ASM_max = ASM_45° // Condition = 3
(20)         x_i = zigzag_m(I_i); // zigzag along the main diagonal direction
(21)     else // Condition = 4
(22)         x_i = zigzag_c(I_i); // zigzag along the counter-diagonal direction
(23)     end if
(24) end if
 // step 2: adaptive observing and bit rate control
(25) for i = 1, …, T1 do
(26)     compute Var(x_i) and W(x_i);
(27)     J_i = Var(x_i)^α · W(x_i)^β; // synthetic feature, equation (11)
(28) end for
(29) η = η*(bpp); // mean sampling rate from equation (14)
(30) M = round(η · N); // total measurement budget
(31) M_i = round(M · J_i / Σ_k J_k); // sampling ratio of subimages
(32) M_i = min(max(M_i, M_min), n); // prevent undersampling and oversampling
(33) Φ_i = M_i randomly selected rows of the n-dimensional OSTM;
(34) Ψ_i = sparse basis matched to the scan mode; // equation (8)
(35) y_i = Φ_i x_i; // noiseless observation
(36) y_i = y_i + e_i; // noisy case only
 // step 3: codec and antiquantization
(37) QF = ⌊a · bpp³ + b · bpp² + c · bpp⌋; // equation (15)
(38) quantize y_i with the QF-scaled quantization vector;
(39) entropy encode; store or transmit; entropy decode;
(40) dequantize to obtain ŷ_i;
 // step 4: reconstruction and denoising
(41) for i = 1, …, T1 do
(42)     A_i = Φ_i Ψ_i; // sensing matrix of subimage i
(43)     s_i* = optimal iteration number of subimage i; // equation (26)
(44)     for s = 1, …, s_i* do // OMP iterations
(45)         select the column of A_i most correlated with the residual;
(46)         solve the least-squares problem on the current support;
(47)         update the residual;
(48)     end for
(49)     θ̂_i = reconstructed sparse representation;
(50)     x̂_i = Ψ_i θ̂_i; // reconstructed original signal of subimage i
(51) end for
 // step 5: antivectorization and jointing
(52) Î_i = inverse scan of x̂_i according to Condition 1-4;
(53) Î = jointing of all Î_i; // recovered image with JPEG-ABCS

5. Experiment and Result Analysis

In order to verify the superiority of the JPEG-ABCS algorithm, experiments were conducted in two cases: noiseless and noisy. The standard JPEG and JPEG2000 algorithms were used as comparison algorithms, and multiple grayscale standard images with 256 × 256 resolution were used in the experiments, which were conducted in MATLAB R2016b. In order to evaluate the performance of the algorithms objectively, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) were introduced as image reconstruction evaluation indexes.

The PSNR index is the most widely used objective standard for characterizing the quality of reconstructed images:

$$\mathrm{PSNR} = 10 \log_{10} \frac{255^{2}}{\mathrm{MSE}}, \quad \mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} \left(x_{i} - \hat{x}_{i}\right)^{2}, \tag{27}$$

where $x_{i}$ and $\hat{x}_{i}$ are the $i$-th elements of the original image signal and the reconstructed image signal, respectively.

The SSIM is another common quality evaluation index used to describe the similarity between the original image signal and the reconstructed image signal:

$$\mathrm{SSIM}(x, \hat{x}) = \frac{\left(2 \mu_{x} \mu_{\hat{x}} + c_{1}\right)\left(2 \sigma_{x\hat{x}} + c_{2}\right)}{\left(\mu_{x}^{2} + \mu_{\hat{x}}^{2} + c_{1}\right)\left(\sigma_{x}^{2} + \sigma_{\hat{x}}^{2} + c_{2}\right)}, \quad c_{1} = (k_{1} L)^{2}, \quad c_{2} = (k_{2} L)^{2}, \tag{28}$$

where $\mu_{x}$ and $\mu_{\hat{x}}$ are the average gray values of all elements in $x$ and $\hat{x}$, $\sigma_{x}$ and $\sigma_{\hat{x}}$ are the standard deviations of all elements in $x$ and $\hat{x}$, $\sigma_{x\hat{x}}$ is the covariance of $x$ and $\hat{x}$, $c_{1}$ and $c_{2}$ are constants, and $L$ is the range of pixel gray values.
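Both indexes are straightforward to implement. The sketch below gives direct numpy implementations of equations (27) and (28) for 8-bit images (L = 255), using the single-window global form of SSIM shown above with the conventional constants k1 = 0.01 and k2 = 0.03.

```python
import numpy as np

def psnr(x, x_hat, L=255.0):
    """PSNR of equation (27) for 8-bit grayscale images."""
    mse = np.mean((x.astype(np.float64) - x_hat.astype(np.float64)) ** 2)
    return 10.0 * np.log10(L ** 2 / mse)

def ssim_global(x, x_hat, L=255.0, k1=0.01, k2=0.03):
    """Single-window (global) SSIM of equation (28)."""
    x, x_hat = x.astype(np.float64), x_hat.astype(np.float64)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), x_hat.mean()
    var_x, var_y = x.var(), x_hat.var()
    cov = ((x - mu_x) * (x_hat - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2) /
            ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```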

5.1. Experiments and Analysis without Noise

The experiments under the noiseless condition are divided into three parts.

5.1.1. Effectiveness Experiment on Reducing the bpp of JPEG Images via MIE-Minimization Adaptive Blocking

In order to verify the effectiveness of the proposed MIE-minimization adaptive blocking method, experiments were conducted on two standard images (Lena and Parrots), whose MIE under different block shapes was given in the previous section; the experimental data are recorded in Table 1. The basic JPEG algorithm is used in the experiment, and the quantization matrix (light-table) is generated using formula (15), where QF is 50 and 25, respectively.

From the data in Table 1, it can be seen that the Lena and Parrots restored images have the minimum bpp at block shapes of 16 × 4 and 8 × 8, respectively, which coincides exactly with the minimum-MIE blocks found in the previous section.

5.1.2. Verification Experiment of ASM-Based Adaptive Vectorization for Improving 2D Image Reconstruction Performance in the BCS Algorithm

The application of CS to image signal processing requires vectorization of two-dimensional images. Therefore, the proposed JPEG-ABCS algorithm also needs to vectorize the subimages. Different vectorization methods directly affect the reconstruction quality of the image.

In this article, three common vectorization methods (vertical scanning, 2D scanning, and zigzag scanning) are compared and analyzed. The verification experiment is carried out in the CS algorithm, using two kinds of images (standard test images and texture test images) under different block shapes and sampling rates. The experimental data are recorded in Tables 2 and 3, respectively.

It can be seen from Table 2 that for standard test images meeting ASM Conditions 1 or 2, vectorization using 2D scanning is optimal, and the PSNR and SSIM of its reconstructed images hold a relative advantage over the other two methods.

Table 3 shows the results of texture test image reconstruction using the three vectorization methods. Obviously, vector generation using zigzag scanning performs better in this case.

It can be seen from the above tables that for different types of test images, the use of a single mode of the vector generation method cannot always effectively improve the quality of image reconstruction, and the multimode method combined with image texture direction feature detection is recommended. Therefore, this paper proposes an adaptive vectorization method based on ASM to maximize the performance of 2D image reconstruction in the BCS algorithm.

5.1.3. Performance Comparison of Various JPEG-Like Algorithms

The image reconstruction quality of the three algorithms was compared under the noiseless condition to verify the benefit and universality of the proposed JPEG-ABCS. The experiment includes two parts: verification under different bpp and under different test images. Figure 6 shows the experimental results for the Lena test image under the three JPEG-like algorithms. Figures 6(a) and 6(b) show that, compared with the JPEG and JPEG2000 algorithms, the proposed algorithm has advantages in the PSNR and SSIM indicators under different bpp conditions. Furthermore, from the trend of the curves, the JPEG-ABCS algorithm performs well at medium and high bit rates, but as the bit rate decreases, its performance declines; the main reason is that as the dimension of the measurement matrix decreases with bpp, the observation process cannot cover all the information of the image. Figure 6(c) shows the restored grayscale images of Lena using the three algorithms at 0.25 bpp. Subjectively, JPEG-ABCS performs better than the other two algorithms.

In addition, Table 4 records the experimental results of the three algorithms for different images under bpp = 0.25, 0.3, and 0.4. The data in the table show that the improvement brought by the proposed algorithm is universal across images. For instance, at bpp = 0.3, the PSNR index of the four standard test images under the JPEG-ABCS algorithm is improved by 8.34%, 15.09%, 4.46%, and 8.13% compared to the JPEG algorithm and by 6.19%, 12.98%, 3.39%, and 6.39% compared to the JPEG2000 algorithm, respectively; the SSIM index is also improved: by 0.96%, 0.88%, 1.22%, and 0.59% over JPEG and by 0.62%, 0.64%, 0.84%, and 0.30% over JPEG2000.

As can be seen from Figure 6 and Table 4, compared to the JPEG and JPEG2000 algorithms, the proposed JPEG-ABCS algorithm has a large improvement on PSNR and SSIM, mainly due to the adaptive blocking and adaptive sampling reducing MIE of image blocks under the same conditions, while sparse restoration guarantees image restoration quality.

5.2. Experiments and Analysis under Gaussian Noisy Conditions

Under the Gaussian noise condition, it is verified that the JPEG-ABCS algorithm has better antinoise performance than the standard JPEG algorithm. Figure 7 shows the experimental results obtained using the monarch, peppers, and cameraman test images. It can be clearly seen from Figure 7 that the test images reconstructed using JPEG-ABCS are superior to those reconstructed by JPEG at all noise intensities (here, the noise standard deviation is used as the noise intensity); the higher the noise intensity, the more obvious the superiority.

Table 5 shows the PSNR and SSIM comparison records of the noisy monarch test image under the two algorithms. As can be seen from the data in Table 5, the noisy-image reconstruction performance of the JPEG-ABCS algorithm under different bpp conditions is better than that of the JPEG algorithm. For example, at bpp = 0.25, the PSNR index of the JPEG-ABCS algorithm under the four noise intensities is improved by 9.68%, 4.25%, 0.74%, and 3.06% compared to the JPEG algorithm; the SSIM index is also improved, by 4.54%, 3.69%, 1.53%, and 7.71%. In other words, JPEG-ABCS adds an antinoise capability that the JPEG algorithm does not have.

The main reason for the above improvement is that the proposed algorithm considers the noise model in the reconstruction stage and adds the idea of iterative optimization.

6. Conclusions

In this paper, a JPEG lifting algorithm based on ABCS was proposed, and its structure and implementation were introduced in detail. The improvements of the algorithm were described, and their feasibility and rationality were demonstrated by experiments. Finally, through comparison experiments with similar algorithms, the contribution of this lifting algorithm to JPEG-like algorithms, namely, improving the quality of image reconstruction, reducing the bit rate (bpp), and adding an antinoise function, was evaluated.

Data Availability

The simulation results used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was supported by the National Key Research and Development Program of the Ministry of Science and Technology of China (2018YFB2003304), the National Natural Science Foundation of China under Grant (61471191), and the Youth Program of the National Natural Science Foundation of China (31700478).