International Journal of Digital Multimedia Broadcasting

Volume 2017, Article ID 3902543, 10 pages

https://doi.org/10.1155/2017/3902543

## Adaptive Image Compressive Sensing Using Texture Contrast

^{1}School of Computer and Information Technology, Xinyang Normal University, Xinyang 464000, China
^{2}School of Electrical and Electronic Engineering, Nanyang Institute of Technology, Nanyang 473000, China
^{3}School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210003, China

Correspondence should be addressed to Ran Li; liran358@163.com

Received 20 October 2016; Revised 16 February 2017; Accepted 1 March 2017; Published 13 March 2017

Academic Editor: Jintao Wang

Copyright © 2017 Fang Sun et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

Traditional image Compressive Sensing (CS) samples every block at the same rate. However, because block sparsity varies across an image, blocking artifacts often occur, leading to poor rate-distortion performance. To suppress these artifacts, we propose in this paper to sample each block adaptively according to its texture features. Using the maximum gradient within the 8-connected neighborhood of each pixel, we measure the texture variation of each pixel and then compute the texture contrast of each block. According to the distribution of texture contrasts, we adaptively set the sampling rate of each block and finally build an image reconstruction model weighted by these block texture contrasts. Experimental results show that our adaptive sampling scheme improves the rate-distortion performance of image CS compared with existing adaptive schemes, and that the images reconstructed by our method achieve better visual quality.

#### 1. Introduction

The core of traditional image coding (e.g., JPEG) is image transformation based on the Nyquist sampling theorem. It can recover an image without distortion only when the number of transform coefficients is greater than or equal to the total number of pixels in the image. However, limited by its computation capability, a wireless sensor cannot tolerate so many transformations, so traditional image coding is ill-suited to lightweight wireless sensors [1, 2]. Besides, because the image information is concentrated in a few transform coefficients, the quality of the reconstructed image degrades severely once several important coefficients are lost. Recently, the rapid development of Compressive Sensing (CS) [3, 4] has introduced a new way to overcome these defects of traditional image coding. Breaking the limitation of the Nyquist sampling rate, CS accurately recovers signals from partial transformations. The superiority of CS lies in the fact that it compresses the image by dimensionality reduction while transforming it, which has attracted many researchers to develop CS-based low-complexity coding [5, 6].

Many scholars are devoted to improving the rate-distortion performance of image CS. A popular approach is to construct a sparse representation model that improves the convergence of minimum $\ell_1$-norm recovery; for example, Chen et al. [7] predict the sparse residual using multihypothesis prediction; Becker et al. [8] exploit the first-order Nesterov method to perform efficient sparse decomposition; Zhang et al. [9] use both local sparsity and nonlocal self-similarity to represent natural images; Yang et al. [10] use a Gaussian mixture model to generate a sparser representation. From different perspectives, these sparse representation schemes achieve some improvement in rate-distortion performance. However, their disadvantage is the rapid growth of computational complexity with spatial resolution; for example, the algorithm proposed by Zhang et al. [9] requires about an hour to recover an image of 512 × 512 pixels. To avoid this high computational complexity, some works instead improve quantization performance according to the statistics of the CS samples. An efficient quantizer can reduce the number of bits; for example, Wang et al. [11] exploit the hidden correlations between CS samples to design a progressive fixed-rate scalar quantizer; Mun and Fowler [12] and Zhang et al. [13] use Differential Pulse-Code Modulation (DPCM) to remove the redundancy between block CS samples. By reducing statistical redundancies, these quantizers obtain some performance improvement at low computational cost. Despite its lower computational burden, however, the quantization scheme yields only a limited improvement in rate-distortion performance because CS samples contain little redundancy [14]. From the above, we can see that there is a tradeoff between computational complexity and quality improvement for image CS. We seek a scheme that strikes a balance between the two.
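The DPCM idea referenced above (quantizing the difference between successive blocks' CS samples rather than the samples themselves) can be illustrated with a minimal sketch. This is not the exact design of [12, 13]; the uniform quantizer, the step size, and the zero initial predictor are assumptions made for illustration.

```python
import numpy as np

def dpcm_encode(samples, step=1.0):
    """DPCM across a sequence of block CS-sample vectors: quantize the
    residual between each block's samples and the previous block's
    dequantized samples (a sketch; quantizer design is assumed)."""
    prev = np.zeros_like(samples[0], dtype=float)
    codes = []
    for y in samples:
        q = np.round((y - prev) / step).astype(int)  # quantized residual
        codes.append(q)
        prev = prev + q * step  # track the decoder-side reference
    return codes

def dpcm_decode(codes, step=1.0):
    """Invert dpcm_encode by accumulating dequantized residuals."""
    prev = np.zeros(len(codes[0]))
    out = []
    for q in codes:
        prev = prev + q * step
        out.append(prev.copy())
    return out
```

Because each residual is quantized against the decoder's own reconstruction, the per-element error never exceeds half the quantization step, regardless of how many blocks are chained.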
Compared with sparse representation and quantization schemes, feature-based adaptive sampling achieves a satisfying improvement of rate-distortion performance without introducing excessive computation. Its idea is to increase the efficiency of CS sampling by suppressing useless CS samples: the sampling rate of each block is allocated according to some image feature. For example, Zhang et al. [15] determine the sampling rate of each block from its block variance, and Canh et al. [16] exploit edge information to adaptively assign each block's sampling rate. Block variance and edge information are, however, low-level visual features: they preserve low-frequency information but neglect the high-frequency texture details attractive to human eyes. Guided by these two feature measures, many CS samples are invested in blocks with simple patterns, which results in undesirable reconstruction quality. To overcome this defect of the traditional sampling scheme, features should be extracted that express high-level vision. Directed by such features, an efficient adaptive scheme can guarantee the recovery of high-frequency details.
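The variance-driven allocation attributed to Zhang et al. [15] can be sketched as follows. The paper does not give the exact allocation formula, so the proportional rule, the clipping range, and the `target_rate` parameter here are illustrative assumptions.

```python
import numpy as np

def allocate_rates_by_variance(image, block_size=16, target_rate=0.3):
    """Assign each block a sampling rate proportional to its variance,
    scaled so the mean per-block rate matches the overall target rate.
    A hypothetical sketch of variance-based adaptive allocation."""
    h, w = image.shape
    bh, bw = h // block_size, w // block_size
    variances = np.empty((bh, bw))
    for i in range(bh):
        for j in range(bw):
            block = image[i * block_size:(i + 1) * block_size,
                          j * block_size:(j + 1) * block_size]
            variances[i, j] = block.var()
    # Proportional allocation; epsilon guards against an all-flat image.
    rates = target_rate * variances / (variances.mean() + 1e-12)
    return np.clip(rates, 0.01, 1.0)  # keep every rate usable and valid
```

Blocks with rich detail (high variance) receive rates above the target, while flat blocks fall to the floor rate, which is exactly the behavior the text criticizes: variance tracks local intensity spread, not perceptual texture.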

Texture, as a visual feature, reveals similar patterns independent of color and brightness and is an inherent property of object surfaces; for example, trees, clouds, and fabrics each have their own texture details. Texture details carry important information about the structure of object surfaces, revealing relations between an object and its surroundings, and they represent the high-frequency components most attractive to human eyes. In this paper, we propose to set the sampling rate of each block based on texture details. We design a texture contrast to measure the varying texture features, assign a high sampling rate to blocks with striking texture contrast, and remove redundant CS samples from blocks with low texture contrast. When reconstructing the image, the distribution of texture contrasts is used to weight the global reconstruction model. Experimental results show that the proposed method improves the visual quality of the reconstructed image compared with the adaptive schemes based on block variance and edge features.
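The texture measure described above (maximum gradient over each pixel's 8-connected neighborhood, aggregated per block) can be sketched as follows. The exact per-block aggregation is not specified here in the text, so taking the block mean of the pixel-wise variation is an assumption for illustration.

```python
import numpy as np

def texture_contrast(image, block_size=16):
    """Per-block texture contrast: each pixel's texture variation is the
    maximum absolute difference to its 8-connected neighbors, and each
    block's contrast is the mean variation over its pixels (the mean
    aggregation is an assumption of this sketch)."""
    img = image.astype(float)
    padded = np.pad(img, 1, mode='edge')  # replicate borders
    h, w = img.shape
    variation = np.zeros((h, w))
    # Offsets of the 8-connected neighborhood.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for dy, dx in offsets:
        neighbor = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        variation = np.maximum(variation, np.abs(neighbor - img))
    bh, bw = h // block_size, w // block_size
    # Average the pixel-wise variation within each block.
    contrast = variation[:bh * block_size, :bw * block_size] \
        .reshape(bh, block_size, bw, block_size).mean(axis=(1, 3))
    return contrast
```

Unlike block variance, this measure responds directly to local gradients, so a block containing a thin high-frequency edge scores high even when its overall intensity spread is small.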

#### 2. Adaptive Block CS of Image

The framework of adaptive block CS is shown in Figure 1. At the CS encoder, the natural scene is first captured by CMOS sensors as a full-sampling image of size $I_r \times I_c$; that is, the total number of pixels is $N = I_r \times I_c$. Then, the image is divided into small blocks of size $B \times B$, and $x_i$ denotes the vectorized signal of the $i$th block obtained by raster scanning. Next, the number $M_i$ ($M_i \ll B^2$) of CS samples for each block is set according to image features. We construct a random measurement matrix $\Phi_i$ of size $M_i \times B^2$ for each block. Finally, the CS-samples vector $y_i$ of each block, of length $M_i$, is computed by the following formulation:
$$y_i = \Phi_i x_i, \tag{1}$$
in which the elements of $\Phi_i$ obey a Gaussian distribution. We define the sampling rate of the $i$th block as follows:
$$S_i = \frac{M_i}{B^2}. \tag{2}$$
The CS-samples vectors of all blocks are transmitted to the CS decoder. When receiving the CS samples of each block, we construct the minimum $\ell_1$-norm reconstruction model as follows:
$$\hat{x}_i = \arg\min_{x_i} \frac{1}{2}\left\lVert y_i - \Phi_i x_i \right\rVert_2^2 + \lambda \left\lVert \Psi x_i \right\rVert_1, \tag{3}$$
in which $\lVert \cdot \rVert_2$ and $\lVert \cdot \rVert_1$ are the $\ell_2$ and $\ell_1$ norms, respectively, $\Psi$ is the block transformation matrix (e.g., a DCT or wavelet matrix), and $\lambda$ is a fixed regularization factor. Because the objective function of model (3) is convex, it can be solved using the Gradient Projection for Sparse Reconstruction (GPSR) algorithm [17] or the Two-step Iterative Shrinkage/Thresholding (TwIST) algorithm [18]. CS theory states that the signal $x_i$ can be recovered precisely by model (3) if
$$M_i \geq c\, K_i \log\!\left(\frac{B^2}{K_i}\right), \tag{4}$$
in which $K_i$ is the sparse degree of the $i$th block and $c$ is some constant [19]. Due to the nonstationary statistics of natural images, the sparse degree of each block is distributed nonuniformly. From (4), we can see that blocks with a large sparse degree cannot be accurately reconstructed once the sampling rate is too low; that is, a fixed number of block CS samples is not enough to capture all the information of the original image. Therefore, the sampling rate of each block should be set adaptively according to its own sparse degree. A straightforward way to acquire the block sparse degree is to count its significant transform coefficients.
However, this obviously defeats the advantage of CS: once the encoder performs a full transformation, image CS has no edge over traditional coding. It is therefore impractical to obtain the block sparse degree directly by full transformation. To avoid a full transformation at the encoder, some image features, for example, block variance or the number of edge pixels, are exploited to indirectly reveal the block sparse degree. In this indirect way, some improvement of rate-distortion performance can be obtained; however, these features only reflect the variation of local pixel values, which improves the objective quality of the reconstructed image but results in poor visual quality, shown especially by the occurrence of many blocking artifacts. In view of the above, a proper feature is required to guide the adaptive sampling so as to improve rate-distortion performance while also guaranteeing better visual quality.
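The block-wise sampling step of Section 2 (raster-scan each block into a vector and project it with a random Gaussian matrix) can be sketched as follows. The column-normalized scaling of the Gaussian matrix and the per-image fixed rate are illustrative choices, not the paper's adaptive design.

```python
import numpy as np

def block_cs_sample(image, block_size=16, rate=0.3, seed=0):
    """Block-wise CS sampling: each B x B block is raster-scanned into a
    vector of length B^2 and projected by a random Gaussian matrix with
    M = round(rate * B^2) rows, giving one M-length sample vector per
    block (a minimal sketch with a fixed, nonadaptive rate)."""
    rng = np.random.default_rng(seed)
    B = block_size
    M = max(1, round(rate * B * B))  # number of CS samples per block
    h, w = image.shape
    samples = []
    for i in range(h // B):
        for j in range(w // B):
            x = image[i * B:(i + 1) * B, j * B:(j + 1) * B].reshape(-1)
            Phi = rng.normal(size=(M, B * B)) / np.sqrt(M)  # Gaussian matrix
            samples.append(Phi @ x)  # measurement of this block
    return samples
```

An adaptive encoder in the spirit of this paper would replace the fixed `rate` with a per-block rate driven by a feature such as the texture contrast, so that `M` varies from block to block.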