#### Abstract

Sparse signals, which have a small number of nonzero coefficients in a transformation domain, can be reconstructed from a reduced set of measurements. In practical applications, signals are only approximately sparse. Images are a representative example of such approximately sparse signals in the two-dimensional (2D) discrete cosine transform (DCT) domain. Although a significant amount of image energy is concentrated in a small number of transform coefficients, the remaining nonzero coefficients in the 2D-DCT domain make images only approximately sparse or nonsparse. Compressive sensing theory assumes strict sparsity, which means that reconstruction algorithms cannot recover the small-valued coefficients (beyond the assumed sparsity) of nonsparse signals. In the literature, this kind of reconstruction error is described by appropriate error bound relations. In this paper, an exact relation for the expected reconstruction error is derived and presented in the form of a theorem. In addition to the theoretical proof, the presented theory is validated through numerical simulations.

#### 1. Introduction

Signals that can be characterized by a small number of nonzero coefficients are referred to as sparse signals [1–11]. Such signals can be reconstructed from a reduced set of measurements [1–25]. The measurements represent linear combinations of the sparsity (transform) domain coefficients [1, 7, 24]. Signal samples themselves can be considered as measurements (observations) when a linear signal transform is the sparsity domain. Signal sparsity in a transformation domain can be observed in a number of important applications. For example, ISAR images are commonly sparse in the two-dimensional Fourier transform domain, whereas digital images are well known for their good concentration in the two-dimensional (2D) discrete cosine transform (DCT) domain [8, 21–24].

The idea of a reduced number of observations is studied within the compressed sensing (CS) theory and the sparse signal processing framework. The reduced number of measurements may arise for different reasons. In CS applications, it is a consequence of an intentional sampling strategy, aiming to reduce the signal acquisition time, the equipment load, and the subject's exposure to potentially dangerous radiation in biomedical acquisition, or it simply reflects a particular interest in reducing the number of acquired samples while preserving the complete information (compression) [1–15]. In certain cases, physical unavailability causes the reduced number of measurements. In many applications, strong disturbances (noise) can significantly corrupt some signal samples. Such signals are processed by detecting and intentionally discarding the corrupted measurements [7, 9, 23]. Regardless of the reason for their unavailability, under certain reasonable conditions, missing samples can be reconstructed using well-developed CS methods and algorithms [1, 2].

The DCT is an important and frequently used tool in signal processing [21–27]. Many signal classes can be represented more compactly in the DCT domain than in the Fourier domain. Owing to this superior compressibility, the 2D-DCT is one of the most widely exploited transforms in digital image compression [24]. Moreover, this transform domain has proven convenient for the sparsity-based reconstruction of digital images with missing and/or noise-corrupted pixels [21–23]. Measuring the concentration of the 2D-DCT coefficients (using an $\ell_1$-norm based measure) and varying the missing sample values to obtain the sparsest possible solution leads to prominent compressive sensing reconstruction results [20, 23]. In the orthogonal matching pursuit (OMP) framework, successful reconstruction is easily obtained if the coefficients corresponding to the signal component positions are successfully identified [12, 28–30]. In that case, the true coefficient values can be calculated using the identified component positions and the 2D-DCT measurement matrix [9, 24]. However, it is important to note that, in practice, digital images are usually only approximately sparse or nonsparse in the 2D-DCT domain [21–24]. This means that, besides the coefficients with significant values carrying most of the signal energy, small-valued coefficients may appear in place of zero-valued ones. As sparse recovery algorithms assume a certain sparsity level, these coefficients remain unreconstructed [8, 24], which leads to inevitable reconstruction errors. The reduction of the number of available samples manifests itself as noise in the transform domain [7, 9]. During the reconstruction, this noise is completely cancelled out if the sparsity assumption is strictly satisfied. However, if weak signal coefficients of a nonsparse signal remain unreconstructed, their contribution to the noise in the reconstructed coefficients remains.
If a nonsparse signal is reconstructed from a reduced set of available samples, the noise that the missing samples induce in the unreconstructed coefficients acts as an additive input noise in the reconstructed signal [8].

The existing compressive sensing literature provides only general bounds on the reconstruction error for nonsparse signals (reconstructed under the sparsity assumption) [1, 2, 4, 5, 28]. The error bounds for the DFT and DCT are considered in [6] within the reconstruction uniqueness framework. In this paper, we present an exact relation for the expected squared error of approximately sparse or nonsparse signals in the 2D-DCT domain. It is assumed that these signals are reconstructed from a reduced set of observations, under the sparsity constraint. The influence of the missing measurements on the transform domain is modelled by an additive noise [7]. The noise originating from the missing samples of each signal component is statistically modelled as a Gaussian stochastic process, and its mean value and variance are determined. These results are then used to derive the relation for the error in the reconstructed signal when the sparsity assumption is used for the reconstruction of nonsparse signals. The theory is illustrated and verified by numerical results.

The paper is organized as follows. Basic definitions regarding the 2D-DCT domain are provided in Section 2. In Section 3, the main result is presented in the form of a theorem, which is examined and proved in the subsequent sections. In Section 4, the 2D-DCT is put into the framework of a reduced number of observations, and the reconstruction error for nonsparse images is analyzed. In Section 5, the theory is validated with several numerical examples; concluding remarks are given at the end of the paper, along with an Appendix covering special cases.

#### 2. Basic Definitions

Consider a 2D discrete signal (digital image) of size $N \times N$, denoted by $x(m,n)$. The 2D-DCT of this signal is defined by [24]
$$C(k,l)=\sum_{m=0}^{N-1}\sum_{n=0}^{N-1}x(m,n)\,\varphi_k(m)\varphi_l(n),$$
where $k$ and $l$ are the transform coefficient indices, and
$$\varphi_k(m)=a(k)\cos\frac{\pi(2m+1)k}{2N}$$
are the normalized basis functions. For $k=0$ or $k\neq 0$ the normalization factors are of the form $a(0)=\sqrt{1/N}$ and $a(k)=\sqrt{2/N}$, respectively. The corresponding inverse transform is given by
$$x(m,n)=\sum_{k=0}^{N-1}\sum_{l=0}^{N-1}C(k,l)\,\varphi_k(m)\varphi_l(n),$$
with $m,n=0,1,\dots,N-1$. The 2D-DCT can be written in a matrix form as [24]
$$\mathbf{C}=\mathbf{\Phi}\mathbf{x}\mathbf{\Phi}^{T},$$
where $\mathbf{C}$ is the 2D-DCT coefficients matrix, $\mathbf{\Phi}$ is the 2D-DCT transformation matrix, and $\mathbf{x}$ is the matrix containing the pixel values of the digital image. For the inverse 2D-DCT the relation $\mathbf{x}=\mathbf{\Phi}^{T}\mathbf{C}\mathbf{\Phi}$ holds, with $\mathbf{\Phi}^{-1}=\mathbf{\Phi}^{T}$.
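As a quick numerical sanity check of these definitions, the orthonormal 2D-DCT pair can be evaluated with SciPy's `dctn`/`idctn` using `norm='ortho'`, which implements the same normalized basis functions; the block size and test data below are arbitrary assumptions of this sketch.

```python
import numpy as np
from scipy.fft import dctn, idctn

N = 8
rng = np.random.default_rng(0)
x = rng.standard_normal((N, N))     # test image block x(m, n)

C = dctn(x, norm='ortho')           # forward 2D-DCT coefficients C(k, l)
x_rec = idctn(C, norm='ortho')      # inverse 2D-DCT

# orthonormality: energy is preserved (Parseval) and the transform is invertible
energy_match = np.allclose(np.sum(x**2), np.sum(C**2))
invertible = np.allclose(x, x_rec)
```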

An image of the form
$$x(m,n)=\sum_{i=1}^{K}A_i\,\varphi_{k_i}(m)\varphi_{l_i}(n)$$
is sparse in the 2D-DCT domain if the number of nonzero 2D-DCT coefficients $K$ is much smaller than the number of image pixels, that is, $K\ll N^2$. The components are located at the DCT indices $(k_i,l_i)$ with amplitudes $A_i$, $i=1,2,\dots,K$.

Assume that only $M$ randomly positioned pixels at the positions $(m,n)\in\mathbf{M}=\{(m_1,n_1),(m_2,n_2),\dots,(m_M,n_M)\}$ are available. If we rearrange the available pixels into a vector $\mathbf{y}$ with elements $y_p=x(m_p,n_p)$, where $p=1,2,\dots,M$, we can write it in the matrix form as
$$\mathbf{y}=\mathbf{A}\mathbf{c},$$
representing the mathematical model of the compressive sampling procedure, where $\mathbf{c}$ is the vector of 2D-DCT coefficients and $\mathbf{A}$ is an $M\times N^2$ measurement matrix. It is defined as the partial inverse 2D-DCT matrix, containing the $M$ rows of the full inverse 2D-DCT matrix that correspond to the available pixel positions.
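This measurement model can be sketched by explicitly forming the partial inverse 2D-DCT matrix from a Kronecker product of 1-D DCT matrices. The vectorization and variable names here are illustrative assumptions, not the paper's exact notation:

```python
import numpy as np
from scipy.fft import dct

N = 8                                      # block is N x N, N*N pixels total
rng = np.random.default_rng(1)

D = dct(np.eye(N), norm='ortho', axis=0)   # 1-D orthonormal DCT matrix
W = np.kron(D, D)                          # vectorized forward 2D-DCT
W_inv = W.T                                # inverse transform (W is orthogonal)

x = rng.standard_normal((N, N))            # image block
c = W @ x.ravel()                          # vectorized 2D-DCT coefficients

M = 40                                     # number of available pixels, M < N*N
avail = rng.choice(N * N, size=M, replace=False)

A = W_inv[avail, :]                        # M x N^2 partial inverse 2D-DCT matrix
y = A @ c                                  # measurements

# the measurements are exactly the available pixel values
model_ok = np.allclose(y, x.ravel()[avail])
```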

Compressive sensing reconstruction inherently assumes signal sparsity. An image is $K$-sparse in the 2D-DCT domain if only $K$ of its 2D-DCT coefficients assume nonzero values. The nonzero coefficients at the positions $(k_i,l_i)$, $i=1,2,\dots,K$, will be denoted by the vector $\mathbf{C}_K$.

The 2D-DCT of an image reconstructed under the $K$-sparsity assumption will be denoted by $\mathbf{C}_R$. This is a vector with $K$ reconstructed nonzero coefficients at the positions $(k_i,l_i)$.

An image is approximately sparse or nonsparse if the coefficients outside the positions $(k_i,l_i)$, $i=1,2,\dots,K$, are small or of the same order as the coefficients $C(k_i,l_i)$, respectively. In that case, the vector $\mathbf{C}_K$ contains the $K$ largest values of $\mathbf{C}$. The vector $\mathbf{C}_K$ zero-padded up to the size of the original $\mathbf{C}$ and written in the same format as $\mathbf{C}$ will be denoted by $\mathbf{C}_{K0}$.
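A minimal sketch of forming the zero-padded K-term approximation of a nonsparse coefficient matrix, keeping the K largest-magnitude 2D-DCT coefficients; the block size and K are arbitrary assumptions:

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(2)
N, K = 8, 6
x = rng.standard_normal((N, N))            # nonsparse test block
C = dctn(x, norm='ortho')

keep = np.argsort(np.abs(C).ravel())[-K:]  # indices of the K largest coefficients
C_K = np.zeros_like(C)
C_K.ravel()[keep] = C.ravel()[keep]        # zero-padded K-sparse approximation

n_nonzero = np.count_nonzero(C_K)
E_rest = np.sum((C - C_K) ** 2)            # energy of the discarded coefficients
```

Because the kept and discarded coefficients occupy disjoint positions, the total energy splits exactly into the two parts.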

#### 3. Reconstruction Error Energy

The main result of the paper, providing the exact formulation of the expected squared reconstruction error in the case of nonsparse images, is given in the form of a theorem.

Theorem 1. *Assume an image of size $N\times N$, nonsparse in the 2D-DCT domain, with the $K$ largest amplitudes $A_i$, $i=1,2,\dots,K$. Assume that only $M$ out of the total $N^2$ pixels are available, where $M<N^2$. Also assume that the image is reconstructed under the assumption that it is $K$-sparse. The energy of the error in the reconstructed coefficients is related to the energy of the coefficients of the unreconstructed components as follows:*
$$E\{\|\mathbf{C}_K-\mathbf{C}_R\|_2^2\}=K\,\frac{N^2-M}{M(N^2-1)}\,\|\mathbf{C}-\mathbf{C}_{K0}\|_2^2,$$
*where*
$$\|\mathbf{C}-\mathbf{C}_{K0}\|_2^2=\sum_{i=K+1}^{T}A_i^2$$
*is the energy of the unreconstructed components, with $T$ being the total number of nonzero coefficients.*

The theorem will be proved in the next section.

#### 4. The Reconstruction Process and the Proof

The proof is presented through four subsections. In the first subsection, we put the 2D-DCT into the framework of a reduced number of observations. Then, we describe how the missing pixels affect the other coefficients in the monocomponent and multicomponent cases, respectively. Finally, the reconstruction under the assumption that the signal is $K$-sparse is considered.

##### 4.1. Initial Estimate

The initial (norm-two based) 2D-DCT estimate uses the available pixels only:
$$\hat{C}(k,l)=\sum_{(m,n)\in\mathbf{M}}x(m,n)\,\varphi_k(m)\varphi_l(n),$$
where $k,l=0,1,\dots,N-1$. The same result is obtained if the missing (unavailable) pixels assume zero values [7]. In a matrix form we can write
$$\hat{\mathbf{c}}=\mathbf{A}^{T}\mathbf{y}.$$
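The equivalence between the available-pixels-only estimate and the zero-filled 2D-DCT can be checked numerically. The sketch below computes both forms, one via `scipy.fft.dctn` on the zero-filled block and one via the explicit cosine basis sum over the available pixels only; all parameters are illustrative assumptions:

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(3)
N, M = 8, 40
x = rng.standard_normal((N, N))

mask = np.zeros(N * N, dtype=bool)
mask[rng.choice(N * N, size=M, replace=False)] = True
mask = mask.reshape(N, N)                  # available pixel positions

# form 1: missing pixels set to zero, full 2D-DCT
C0 = dctn(np.where(mask, x, 0.0), norm='ortho')

# form 2: explicit sum over the available pixels only
def a(k):
    return np.sqrt(1.0 / N) if k == 0 else np.sqrt(2.0 / N)

C_explicit = np.zeros((N, N))
m_idx, n_idx = np.nonzero(mask)
for k in range(N):
    for l in range(N):
        C_explicit[k, l] = a(k) * a(l) * np.sum(
            x[m_idx, n_idx]
            * np.cos(np.pi * (2 * m_idx + 1) * k / (2 * N))
            * np.cos(np.pi * (2 * n_idx + 1) * l / (2 * N)))

estimates_match = np.allclose(C0, C_explicit)
```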

The coefficients in (10) act as random variables, with different statistical properties at the positions of the image components, $(k,l)=(k_i,l_i)$, and at the positions not corresponding to image components, $(k,l)\neq(k_i,l_i)$.

##### 4.2. Noise-Only Coefficients in Monocomponent Signals

We first observe the monocomponent signal case, that is, $K=1$, and then generalize the result to multicomponent signals. Without loss of generality, we assume that the amplitude is $A_1=A$. From (5) and (10), the initial estimate is a sum over the available pixel positions; the resulting variable is random for random positions of the available pixels. Its statistical properties for the general case of indices are studied next, while the special cases are considered in the Appendix. The initial 2D-DCT estimate can be written in the form
$$\hat{C}(k,l)=A\sum_{(m,n)\in\mathbf{M}}\varphi_{k_1}(m)\varphi_{l_1}(n)\,\varphi_k(m)\varphi_l(n).$$

When $(k,l)\neq(k_1,l_1)$, the 2D-DCT coefficient corresponds to a nonsignal (noise-only) position and behaves as a random Gaussian variable [7]. Using the orthogonality of the basis functions,
$$\sum_{m=0}^{N-1}\varphi_k(m)\varphi_{k'}(m)=\delta(k-k'),$$
and the fact that the positions of the available pixels are equally probable, it can be concluded that the mean value of $\hat{C}(k,l)$ is equal to zero:
$$E\{\hat{C}(k,l)\}=0,\quad (k,l)\neq(k_1,l_1).$$

In the case of the coefficient corresponding to the image component, using the same orthogonality property and the assumption of equally probable available pixel positions, it follows that
$$E\{\hat{C}(k_1,l_1)\}=\frac{M}{N^2}A.$$
For a zero-mean random variable, the variance definition is
$$\sigma^2(k,l)=E\{|\hat{C}(k,l)|^2\}.$$

As in the mean-value analysis, it can be concluded that

Multiplying the left and right sides of (19) and taking the expectation of both sides, we get an expression containing the cross-terms expectation. The values of the available pixel positions are equally distributed; therefore, the cross-terms for different positions are the same and equal to a constant. The total number of these terms is $M(M-1)$. Further, based on (20), we get the value of this constant.

The initial variance definition can now be written in an expanded form, as there are exactly $M$ expectations with quadratic terms in the first summation, while the $M(M-1)$ cross-terms in the second summation are equal to the constant determined above. In order to determine the unknown quadratic term, several special cases of the 2D-DCT indices should be taken into account; they are considered in the Appendix.

Consider the general case when $k\neq k_1$, $l\neq l_1$, $k\neq 0$, and $l\neq 0$ are satisfied. Then
$$E\{\varphi_{k_1}^2(m)\varphi_{l_1}^2(n)\,\varphi_k^2(m)\varphi_l^2(n)\}=\frac{1}{N^4}$$
holds. Incorporating this result into (21), and further, based on (22), the variance can be written as
$$\sigma^2(k,l)=A^2\,\frac{M(N^2-M)}{N^4(N^2-1)}.$$
This result also holds when $(k,l)=(k_1,l_1)$. Note that in the special index cases considered in the Appendix, this result is multiplied by a corresponding constant factor.

It can be easily concluded that the average value of the variance (A.12), when all special cases from the Appendix are included, is constant and equal to
$$\sigma^2=A^2\,\frac{M(N^2-M)}{N^4(N^2-1)}.$$
As $N^2\gg 1$, an accurate approximation for the average variance of the noise-only coefficients follows:
$$\sigma^2\approx A^2\,\frac{M(N^2-M)}{N^6}.$$
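A Monte Carlo sketch of this result: for a unit-amplitude monocomponent image, the average variance of the noise-only coefficients of the zero-filled initial estimate is compared against the constant $M(N^2-M)/(N^4(N^2-1))$ stated above. The parameters, and the closed-form constant itself, are assumptions of this sketch:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(4)
N, M, trials = 16, 128, 2000
k0, l0 = 3, 5                               # assumed component position

C_true = np.zeros((N, N))
C_true[k0, l0] = 1.0                        # unit amplitude, A = 1
x = idctn(C_true, norm='ortho')             # monocomponent image

est = np.empty((trials, N, N))
for t in range(trials):
    mask = np.zeros(N * N, dtype=bool)
    mask[rng.choice(N * N, size=M, replace=False)] = True
    est[t] = dctn(np.where(mask.reshape(N, N), x, 0.0), norm='ortho')

var_map = est.var(axis=0)
# average empirical variance over the noise-only positions
noise_var = np.delete(var_map.ravel(), k0 * N + l0).mean()
# variance constant as reconstructed in the text (an assumption of this sketch)
theory = M * (N**2 - M) / (N**4 * (N**2 - 1))
rel_err = abs(noise_var - theory) / theory
```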

##### 4.3. Noise-Only Coefficients in Multicomponent Signals

In the multicomponent signal case, the observed random variable becomes a sum of the contributions of all components:
$$\hat{C}(k,l)=\sum_{i=1}^{K}A_i\sum_{(m,n)\in\mathbf{M}}\varphi_{k_i}(m)\varphi_{l_i}(n)\,\varphi_k(m)\varphi_l(n).$$

In this case, the coefficients at the noise-only positions are zero-mean Gaussian random variables, as they are formed as a summation of independent zero-mean Gaussian variables over $i=1,2,\dots,K$. Namely, the missing pixels of each image component now contribute to the noise, and the noise originating from each component is proportional to the squared amplitude of that component, following (27) with $A=A_i$. Therefore, the mean value of a 2D-DCT coefficient of a multicomponent signal (image) can be written as
$$E\{\hat{C}(k,l)\}=\frac{M}{N^2}\sum_{i=1}^{K}A_i\,\delta(k-k_i,l-l_i).$$

The average variance of the noise-only coefficients in this case easily follows:
$$\sigma^2=\frac{M(N^2-M)}{N^4(N^2-1)}\sum_{i=1}^{K}A_i^2.$$

However, it is important to mention that the image components multiplied by the basis functions may cause a coupling effect if they are placed at positions satisfying certain conditions. Consequently, this effect may increase the previously derived variance at those positions. However, if the variance is increased at one such position, it is decreased by the same amount at the paired position. Therefore, we can neglect this effect and assume that the variance expression in (30) holds in the mean, which is crucial for the following derivation of the error in the nonsparse image reconstruction.

##### 4.4. Nonsparse Signal Reconstruction

We consider an image reconstructed under the assumption that it is $K$-sparse and that it satisfies the conditions for unique reconstruction in the compressive sensing theory. The number of reconstructed components is $K$. According to (30), each unreconstructed component of the image behaves as a Gaussian input noise with variance
$$\sigma_i^2=A_i^2\,\frac{M(N^2-M)}{N^4(N^2-1)}.$$

Therefore, all unreconstructed components together behave as a noise with variance
$$\sigma_T^2=\frac{M(N^2-M)}{N^4(N^2-1)}\sum_{i=K+1}^{T}A_i^2.$$

After the reconstruction, the total noise energy from the unreconstructed components (appearing in the $K$ reconstructed components) will be
$$E\{\|\mathbf{C}_K-\mathbf{C}_R\|_2^2\}=K\left(\frac{N^2}{M}\right)^2\sigma_T^2.$$

The noise of the unreconstructed components can easily be related to the energy of the unreconstructed components, $\|\mathbf{C}-\mathbf{C}_{K0}\|_2^2=\sum_{i=K+1}^{T}A_i^2$. That is, the total error in the reconstructed components is
$$E\{\|\mathbf{C}_K-\mathbf{C}_R\|_2^2\}=K\,\frac{N^2-M}{M(N^2-1)}\,\|\mathbf{C}-\mathbf{C}_{K0}\|_2^2.$$

This completes the proof of the theorem.

#### 5. Numerical Results

In this section, the theoretical result (35) is numerically checked on a number of test images. The images are used for the numerical calculation of the expected squared error for various sparsity levels per block. The block size is assumed to be $16\times16$. The squared errors in one block are calculated as
$$\|\mathbf{C}_K-\mathbf{C}_R\|_2^2$$
to obtain the numerical result, whereas the theoretical curves are calculated using the right side of (35), that is,
$$K\,\frac{N^2-M}{M(N^2-1)}\,\|\mathbf{C}-\mathbf{C}_{K0}\|_2^2.$$

These errors are calculated for each block separately, and the results are then averaged over all blocks of an image. The statistical peak signal-to-noise ratio (PSNR) is defined as
$$\mathrm{PSNR}=10\log_{10}\frac{x_{\max}^2}{\frac{1}{N^2}\|\mathbf{C}_K-\mathbf{C}_R\|_2^2},$$
and the theoretical one is calculated in the same way, with the right side of (35) used in place of the statistical squared error, where $x_{\max}$ is the maximal pixel value of the image. These values are used to additionally validate the results. In all of the following examples, the reconstruction is performed using the OMP algorithm.
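The experimental pipeline of this section can be sketched end to end for a single block: reconstruct a nonsparse block as K-sparse with a basic OMP acting on the partial inverse 2D-DCT matrix, then compute a statistical PSNR from the resulting error. All parameters (block size, K, M) and the plain OMP implementation are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np
from scipy.fft import dct

def omp(A, y, K):
    """Basic orthogonal matching pursuit: K greedy atom selections."""
    residual, support = y.copy(), []
    for _ in range(K):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    c = np.zeros(A.shape[1])
    c[support] = coef
    return c

rng = np.random.default_rng(5)
B, K, M = 8, 8, 48                          # assumed block size, sparsity, measurements
D = dct(np.eye(B), norm='ortho', axis=0)
W = np.kron(D, D)                           # vectorized forward 2D-DCT

x = rng.standard_normal((B, B))             # nonsparse test block
avail = rng.choice(B * B, size=M, replace=False)
A = W.T[avail, :]                           # partial inverse 2D-DCT matrix
y = x.ravel()[avail]                        # available pixel values

c_rec = omp(A, y, K)                        # K-sparse reconstruction
x_rec = (W.T @ c_rec).reshape(B, B)

mse = np.mean((x - x_rec) ** 2)             # nonzero: the block is nonsparse
psnr = 10 * np.log10(np.max(x) ** 2 / mse)
```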

*Example 1. *The considered image is the grayscale image “Barbara.” The image is first split into blocks of size $16\times16$. It is assumed that only a subset of randomly positioned pixels is available. In the reconstruction, a fixed sparsity $K$ is assumed for each block. The original image is shown in Figure 1(a), the image with the available pixels is shown in Figure 1(b), and the image reconstructed from the reduced set of pixels, with the assumed sparsity, is shown in Figure 1(c).

The statistical error and the theoretical one are shown in Figure 2. Various sparsity levels per block are considered, varied over a range of values. The red asterisks represent the statistical values, and the theoretical result is presented with the black line.

**Figure 1.** (a) Original image; (b) image with the available pixels; (c) reconstructed image.

*Example 2. *Let us consider the RGB image “Lena.” We again split the image into blocks of size $16\times16$. It is assumed that only a subset of randomly positioned pixels is available, and a fixed sparsity $K$ is assumed for each block. The original image, the image with the available pixels, and the reconstructed image are shown in Figure 3.

The statistical error and the theoretical one are shown in Figure 4. The results are obtained by averaging the errors over all blocks and all channels. The sparsity per block was varied over a range of values. The red asterisks represent the statistical values, and the black line represents the theoretical result.

**Figure 3.** (a) Original image; (b) image with the available pixels; (c) reconstructed image.

*Example 3. *A test image set with standard MATLAB images, shown in Figure 5, is used in this example. Each image is split into $16\times16$ blocks. The reconstruction is performed under a fixed sparsity assumption, with a fixed percentage of randomly positioned available pixels. The statistical and theoretical errors are calculated according to (36) and (37), whereas the PSNR is calculated using (38) and (39). The results are presented in Table 1, confirming a high agreement between the theory and the statistics.

#### 6. Conclusions

In this paper, we considered the influence of nonsparsity on the reconstruction of images. Images are, in general, only approximately sparse or nonsparse in the two-dimensional discrete cosine transform domain. The reconstruction error relation is presented in the form of a theorem. The reconstruction results are checked on a number of images, both grayscale and color. It is confirmed that the statistical results closely follow the derived theoretical results.

#### Appendix

#### Special Cases of Indices

The variance of the 2D-DCT coefficients for the general case of indices is considered in Section 4.2. The other special cases of indices are considered in this Appendix.

*Case 1. *When the nonsignal (noise-only) positions satisfy the first special index condition, the quadratic term expectation changes accordingly. Using the identity $\cos^2\alpha=\frac{1}{2}(1+\cos2\alpha)$, we can further simplify the expression. In this derivation, we used the fact that the cosine function has a zero mean value over the random available positions. Using the cosine function periodicity, the expectation is evaluated in closed form. Incorporating this into (21) and (22) leads to the variance for this case. When an additional condition on the indices holds, the same result as (25) is obtained.

*Case 2. *Using the same derivation as in Case 1, it is easily shown that the result (A.4) is obtained under the symmetric index conditions. The additional condition again leads to (25).

*Case 3. *Observe now the case of coinciding indices in one coordinate. Also assume that the remaining indices satisfy the general-case conditions. The unknown quadratic term then takes a modified form.

Note that we used the identities for the squared cosine and the sine of a double angle that appear when the quadratic term is expanded. Using the trigonometric identity for the sine of the double angle and the corresponding expectation, analogous to the quadratic-cosine case, we obtain the value of the unknown term. Putting this into (22) leads to the variance for this case, which holds under the stated conditions. Under the additional condition, it is easily shown that (25) holds.

*Case 4. *In the equivalent case, with the roles of the two coordinates exchanged, the results are the same as in Case 3. Under the additional condition, result (25) holds.

*Case 5. *Observe the combined condition set. Combining the derivations for Cases 1 and 3, it is easily shown that the variance takes a correspondingly modified value. Otherwise, (25) holds, as shown in the previously analyzed cases. Under the additional condition, result (25) also holds.

*Case 6. *In the analogous case, with the symmetric conditions satisfied, result (A.9) holds, whereas under the additional conditions the variance reduces to (25).

*Case 7. *When both special conditions hold simultaneously, the unknown expectation takes a modified value, leading to the corresponding variance. Otherwise, the variance becomes (25). It can be shown that under partial coincidence of the indices this variance is equal to (A.8).

Note that the variance expressions obtained in all of the considered cases are multiplied by a corresponding factor when the special index conditions involve the zero index. The previous results can be unified into a single expression, with a constant that takes one value under the special index conditions and another value otherwise.

The variances are statistically checked in the next example.

*Example A.1. *Assume a monocomponent signal sparse in the 2D-DCT domain, with a single component at an arbitrary position. Only a subset of randomly positioned signal samples is available, and 20,000 independent random realizations of the signal are observed. Based on the initial estimates (10), the variance of the 2D-DCT coefficients is calculated numerically, by averaging the initial estimates over all realizations. The results are shown in Figure 6, scaled by the constant term (25). The special cases considered in this Appendix are indicated in Figure 6.
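A numerical counterpart of this example can be sketched with smaller, assumed parameters: the per-position variance of the initial 2D-DCT estimate is computed over random realizations of the available-pixel pattern. The variances cluster around a common mean value, with bounded deviations (both increases and decreases) at the special-case positions:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(6)
N, M, trials = 16, 100, 3000
k0, l0 = 4, 7                               # assumed component position

C_true = np.zeros((N, N))
C_true[k0, l0] = 1.0                        # unit-amplitude monocomponent signal
x = idctn(C_true, norm='ortho')

est = np.empty((trials, N, N))
for t in range(trials):
    mask = np.zeros(N * N, dtype=bool)
    mask[rng.choice(N * N, size=M, replace=False)] = True
    est[t] = dctn(np.where(mask.reshape(N, N), x, 0.0), norm='ortho')

var_map = est.var(axis=0)                   # empirical variance per coefficient
noise_var = np.delete(var_map.ravel(), k0 * N + l0)

mean_var = noise_var.mean()
# special index cases deviate from the mean by bounded constant factors
max_ratio = noise_var.max() / mean_var
min_ratio = noise_var.min() / mean_var
```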

#### Conflicts of Interest

The authors declare that they have no conflicts of interest.