Abstract
For signal reconstruction based on compressive sensing, reconstructing signals with higher accuracy at lower compression rates requires a smaller mutual coherence between the measurement matrix and the sparsifying matrix. This mutual coherence can be expressed indirectly through the properties of the Gram matrix. On the basis of the Gram matrix, a new optimization algorithm for acquiring a measurement matrix is proposed in this paper. Firstly, a new mathematical model is designed and a new method of initializing the measurement matrix is adopted to optimize the measurement matrix. Then, the loss function of the new algorithm model is solved by a gradient-projection-based method in which the Gram matrix approximates an identity matrix. Finally, the optimized measurement matrix is generated by minimizing the mutual coherence between the measurement matrix and the sparsifying matrix. Compared with the conventional measurement matrices and the traditional optimization methods, the proposed algorithm effectively improves the performance of optimized measurement matrices in reconstructing one-dimensional sparse signals and two-dimensional image signals that are not sparse. The superior performance of the proposed method has been verified by a large number of experiments.
1. Introduction
The theory of compressive sensing (compressed sensing, CS) was proposed by Donoho et al. in 2006. The main idea of compressed sensing is to combine signal sampling and signal compression under the premise that the original signal is sparse or can be sparsely represented [1]. The sampling of compressed sensing completes the compression by reducing signal dimensions without the intermediate stage of Nyquist sampling [2]. Then, the original signal is reconstructed directly from fewer samples than Nyquist sampling requires through the corresponding signal reconstruction algorithms, which saves transmission and storage costs and reduces computational complexity and energy consumption [1, 3–5]. Because compressed sensing can sample signals at a lower rate than the Shannon–Nyquist theorem requires, it improves the compressibility and recoverability of signals [3, 6, 7]. These properties give CS a wide range of applications in many fields of signal processing [2, 8]. Typical CS-based applications include single-pixel imaging [9–13], recovery of images and video [14], wireless image sensor networks [15], classification problems [16, 17], and several biomedical signal processing fields [4, 18–21], such as nuclear magnetic resonance imaging (MRI) [22–24] and electrocardiographic (ECG) signal processing [25, 26].
In the sampling process of CS, the original discrete signal is denoted as x ∈ ℝⁿ. The measured (sampled) signal y ∈ ℝᵐ is obtained by the measurement matrix Φ of size m × n, with m < n. So, we have the following:

y = Φx + e, (1)

where e is the Gaussian white noise with variance σ².
The original signal x is reconstructed from y. Because the dimension m of y is much smaller than the dimension n of x, an underdetermined equation must be solved. To solve the underdetermined equation, x must be sparse or sparsely representable [6, 27]. The sparsity level of the discrete signal x is usually expressed by the L0 norm ‖x‖₀, which denotes the number of nonzero elements in x.
According to equation (1), we can get the following constrained optimization expression by adding the sparsity measure ‖x‖₀:

min ‖x‖₀ s.t. ‖y − Φx‖₂ ≤ ε, (2)

where ε is a nonnegative real parameter.
We use the orthogonal sparsifying basis Ψ ∈ ℝⁿˣⁿ to represent x sparsely, x = Ψs. Thus,

y = ΦΨs + e, (3)

where (·)ᵀ is the transpose operator, ΨΨᵀ = I, and s is the sparse coefficient vector. In this paper, the discrete wavelet transform (DWT) is used as the sparsifying basis for its good sparsifying performance [28, 29].
So, in order to obtain x, a new expression is obtained according to equation (3):

ŝ = argmin ‖s‖₀ s.t. ‖y − ΦΨs‖₂ ≤ ε, (4)

where ŝ is the estimated solution of s, and the estimation of the original signal is acquired as x̂ = Ψŝ. D = ΦΨ is called the equivalent dictionary. When x is itself a sparse signal, Ψ can be taken as an identity matrix of size n × n, and s is almost equivalent to x.
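To make the sampling model concrete, the following minimal NumPy sketch (illustrative sizes and a noise-free setting, not the paper's experimental configuration) builds a k-sparse signal, measures it as y = Φx + e, and forms the equivalent dictionary D = ΦΨ:

```python
import numpy as np

# A minimal sketch of the CS sampling model y = Phi x + e (equation (1))
# and the equivalent dictionary D = Phi Psi; sizes are illustrative only.
rng = np.random.default_rng(0)
n, m, k = 64, 32, 5

x = np.zeros(n)                                   # k-sparse signal
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # measurement matrix (m < n)
e = np.zeros(m)                                   # noise-free case for clarity
y = Phi @ x + e                                   # compressed measurements

Psi = np.eye(n)       # x is already sparse, so the basis is the identity
D = Phi @ Psi         # equivalent dictionary
print(y.shape)        # (32,): m measurements instead of n samples
```

Since Ψ is the identity here, the equivalent dictionary coincides with Φ; with a wavelet basis, D = ΦΨ mixes the two.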
One can see that the sparsity level of the original signal, the design of the measurement matrix, and the signal reconstruction algorithm are the three main parts of CS theory [30]. Recent works have optimized CS theory with different approaches, technical principles, and application scenarios [31–33], such as adapted compressed sensing and optimized projections for compressed sensing, which further enrich and improve compressed sensing theory and methods. For CS-based signal reconstruction, greedy pursuit algorithms [34–38], minimum-norm optimization algorithms [39–41], and iterative threshold algorithms [42, 43] are commonly used. The orthogonal matching pursuit (OMP) algorithm, one of the greedy pursuit algorithms, has stable performance in reconstructing signals because of its simplicity and efficiency in obtaining an approximate solution [36–38]. Thus, the OMP algorithm is used to reconstruct signals in this paper.
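Since OMP is the reconstruction algorithm used throughout, a minimal sketch may help. This is an illustrative textbook-style implementation, not the authors' code; the sizes, seed, and support are arbitrary:

```python
import numpy as np

def omp(A, y, k):
    """A minimal OMP sketch (illustrative, not the authors' code): greedily
    pick the column of A most correlated with the residual, then re-fit all
    selected coefficients by least squares; runs for k iterations."""
    n = A.shape[1]
    idx, r = [], y.astype(float).copy()
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef
    x_hat = np.zeros(n)
    x_hat[idx] = coef
    return x_hat

# Recover a 3-sparse vector from 16 random measurements (seed is arbitrary).
rng = np.random.default_rng(7)
A = rng.standard_normal((16, 32))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(32)
x_true[[3, 10, 25]] = [1.0, -2.0, 0.5]
y = A @ x_true
x_hat = omp(A, y, 3)
print(np.linalg.norm(x_hat - x_true))   # small when the support is identified
```

The re-fitting step is what distinguishes OMP from plain matching pursuit: every selected coefficient is re-estimated jointly at each iteration.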
It has been proved that lower mutual coherence between the measurement matrix and the sparsifying matrix usually helps to ensure the successful reconstruction of the signal [44], and random measurement matrices are incoherent with most fixed orthogonal sparsifying matrices with large probability [3, 45]. Commonly used random measurement matrices include the 0–1 binary sparse random matrix, Gaussian random matrix, Bernoulli random matrix, and part Hadamard random matrix. In addition to these conventional random measurement matrices, well-optimized measurement matrices often lead to good signal reconstruction performance [46–50], and measurement matrices are also often optimized for specific application scenarios [51–53]. For example, in the literature [47], Elad proposed a classic algorithm that minimizes the average mutual coherence between the measurement matrix and the sparsifying basis by iteratively updating the measurement matrix while the sparsifying basis stays fixed. Although Elad's method effectively improved the performance of CS-based signal reconstruction, its reconstruction accuracy was limited and the method was time-consuming because it requires many iterations. In Wang's work [48], a weighted measurement matrix method improved the restricted isometry constant (RIC) of the measurement matrix based on singular value decomposition (SVD). Experimental results showed that signal reconstruction with the matrices optimized by Wang's method outperforms direct reconstruction with the original matrices [48].
Although these works have contributed to optimizing measurement matrices, they have limitations, usually requiring specific experimental conditions and objects. Thus, building on the original works, it is necessary to develop a better optimization algorithm that generates a measurement matrix adapted to a corresponding fixed sparsifying basis and improves reconstruction accuracy under different measurement numbers and different signal sparsity levels; generating such new optimized measurement matrices is the contribution of this paper. Different from the conventional measurement matrices and the works in the literature [47] and [48], the main contribution of this paper is a new measurement matrix optimization algorithm based on the proposed new mathematical model and the new measurement matrix initialization method. In one of our previous research works [54], the optimized measurement matrices were fixed as binary measurement matrices and were applied to our single-pixel camera only for two-dimensional signal imaging. Owing to those previous conclusions, we here seek optimal measurement matrices not only for binary measurement matrices and two-dimensional image signals, but also adaptive measurement matrices for one-dimensional sparse signals and medical images, and our work is conducted under different signal sparsity levels and different compression rates.
The remainder of this paper is organized as follows: Section 2 introduces the basic theory and related background of the measurement matrix. In Section 3, our new algorithm is proposed, its model is analyzed, and the detailed algorithm process is introduced. In Section 4, extensive experiments test the performance of our method in reconstructing one-dimensional sparse signals and two-dimensional image signals. In Section 5, analysis and discussion of our algorithm and the experimental results are presented. Finally, Section 6 summarizes the work of this paper.
2. Measurement Matrix Formulation
From the above theoretical introduction to CS, one can see that the performance of the measurement matrix directly influences the results of signal reconstruction, and the measurement matrix is the important intermediate link between signal sparsification and signal reconstruction. Candès and his collaborators contributed pioneering works to the theory of compressed sensing, publishing a series of important papers on the conditions that measurement matrices should meet and on the relationship between the number of measurements and the signal sparsity [33, 55, 56]. The literature [30] proves that if the measurement matrix and the orthogonal sparsifying basis are as incoherent (orthogonal) as possible, the measurement matrix will perform better with large probability.
Generally, the sparsifying basis Ψ is known and fixed, so the mutual coherence between the measurement matrix and the sparsifying basis can be minimized as much as possible by designing and optimizing the measurement matrix. Intuitively, if Φ and Ψ are incoherent or uncorrelated, the samples taken from the original signal will contain more new information that has not already been represented by the known basis Ψ. Thus, the performance of the measurement matrix depends largely on the mutual coherence between the measurement matrix and the sparsifying matrix.
According to the related mathematical theory [47, 57], the incoherence between the measurement matrix and the sparsifying matrix can be indirectly transformed into the problem of making the Gram matrix approximate an identity matrix as closely as possible. The Gram matrix is defined as follows:

G = DᴴD, D = ΦΨ, (5)

where (·)ᴴ is the conjugate transpose operator of D.
The incoherence denotes the orthogonality between the measurement matrix and the sparsifying matrix. According to the definition of the Gram matrix, minimizing the mutual coherence between Φ and Ψ is equivalent to minimizing the absolute off-diagonal elements of the corresponding Gram matrix, i.e., making the Gram matrix as close as possible to the identity matrix [57]. The closer the Gram matrix approximates the identity matrix, the smaller the mutual coherence between Φ and Ψ is. Thus, constructing a more optimized measurement matrix is equivalent to solving the following optimization problem:

min_Φ ‖G − I‖_F² = min_Φ ‖ΨᵀΦᵀΦΨ − I‖_F², (6)

where I is an identity matrix of size n × n. Thus, minimizing equation (6) effectively and indirectly minimizes the mutual coherence. In the literature [47], Elad's work is based on model (6), seeking an optimized measurement matrix via minimization of the average of the off-diagonal entries of the Gram matrix.
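The objective in (6) is easy to evaluate numerically; the sketch below (illustrative sizes, a random Φ, and a random orthogonal Ψ) forms the column-normalized Gram matrix and measures its Frobenius distance to the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 16, 32
Phi = rng.standard_normal((m, n))
Psi = np.linalg.qr(rng.standard_normal((n, n)))[0]   # a random orthogonal basis

D = Phi @ Psi
D = D / np.linalg.norm(D, axis=0)   # unit-norm dictionary columns
G = D.T @ D                         # Gram matrix, equation (5)

cost = np.linalg.norm(G - np.eye(n), 'fro') ** 2   # objective (6)
print(cost)
```

With unit-norm columns the diagonal of G is exactly 1, so only the off-diagonal entries contribute to the cost.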
3. Measurement Matrix Optimization
In this section, we give the mathematical definition of the mutual coherence between the measurement matrix and the sparsifying matrix, and we propose a new gradient-projection-based optimization algorithm to generate the new optimized measurement matrix.
3.1. Mathematical Definition of Mutual Coherence
On the basis of the Gram matrix, in [47], Elad gave a definition of the mutual coherence between Φ and Ψ by using the equivalent dictionary D = ΦΨ, D = [d₁, d₂, …, dₙ]. The mutual coherence is then defined as follows:

μ(D) = max_{1 ≤ i ≠ j ≤ n} |dᵢᵀdⱼ| / (‖dᵢ‖₂‖dⱼ‖₂). (7)
Furthermore, from the perspective of the coherence between the measurement matrix and the sparsifying matrix, for the measurement matrix Φ of size m × n with rows φ₁, …, φₘ and the orthogonal basis Ψ with columns ψ₁, …, ψₙ, another possible measure of the mutual coherence between Φ and Ψ is expressed as follows:

μ(Φ, Ψ) = √n · max_{1 ≤ i ≤ m, 1 ≤ j ≤ n} |⟨φᵢ, ψⱼ⟩|, (8)

where ⟨·,·⟩ is the inner (vector multiplication) product. The literature [58] gives the corresponding definition of mutual coherence, which plays an important role in reconstructing signals using the OMP algorithm; hence, the OMP algorithm is used as the signal reconstruction algorithm in our experiments of Section 4.
Consequently, equation (8) can denote the mutual coherence between Φ and Ψ. When this mutual coherence is kept as small as possible in the sampling process, more new information that has not been represented by the known sparsifying matrix can be sampled. That is to say, the incoherence property ensures that original signals are reconstructed successfully with high probability in CS. If the measurement matrix is designed or optimized by minimizing equation (6), most sparse signals can be successfully recovered with the optimized measurement matrix.
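As a quick numerical illustration of the coherence measure between Φ and Ψ, the sketch below assumes the common √n-scaled maximum-inner-product form of (8); with unit-norm rows of Φ and an orthonormal Ψ, the value always falls between 1 and √n:

```python
import numpy as np

# Coherence in the sense of equation (8) (as written above, assuming the
# sqrt(n)-scaled max inner product between rows of Phi and columns of Psi).
rng = np.random.default_rng(2)
m, n = 16, 32
Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)   # unit-norm rows
Psi = np.linalg.qr(rng.standard_normal((n, n)))[0]  # orthonormal columns

mu = np.sqrt(n) * np.max(np.abs(Phi @ Psi))
print(mu)   # always lies in [1, sqrt(n)] under these normalizations
```

The lower bound follows because each unit-norm row of Φ has its energy spread over the n orthonormal basis vectors, so at least one inner product must reach 1/√n in magnitude.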
3.2. New Function Model
One can know that, to solve and minimize equation (6), shrinking the absolute off-diagonal entries of the Gram matrix is usually time-consuming [47], and the accuracy of the measurement matrix obtained from equation (6) alone is not sufficient. Thus, based on the original model (6), we propose a new idea: optimize equation (6) by adding a constraint term on the approximate equivalent dictionary to ensure higher accuracy of the measurement matrix, which is expressed as follows:

min_Φ ‖ΨᵀΦᵀΦΨ − I‖_F² s.t. ‖ΦΨ − D̂‖_F ≤ ε₁, (9)

where ε₁ is the upper bound of the approximate-equivalent-dictionary term and D̂ is an estimated approximate solution of D. Equation (9) makes full use of the properties of the Gram matrix G and the equivalent dictionary D.
We can also update equation (9) into another form by exchanging the penalty term, transferring the focus of equation (9) to the approximate term of the equivalent dictionary D:

min_Φ ‖ΦΨ − D̂‖_F² s.t. ‖ΨᵀΦᵀΦΨ − I‖_F ≤ ε₂, (10)

where ε₂ is the upper bound of the error of the Gram matrix. Equation (10) is a constrained optimization problem. In order to solve it, we need to define a new objective (loss) function that converts the constrained optimization problem into an unconstrained one. Therefore, we design a new loss function model based on equation (10) by the Lagrangian method, and the original equation (9) is further transformed into the following:

min_Φ L(Φ) = ‖ΨᵀΦᵀΦΨ − I‖_F² + λ‖ΦΨ − D̂‖_F², (11)

so that the constraint becomes a penalty. For a proper choice of λ, the two problems (10) and (11) are equivalent, where λ is a weight parameter, chosen empirically, that balances the two terms of equation (11). L(Φ) is our proposed objective (loss) function to be minimized.
Our proposed new algorithm model (11) is an unconstrained optimization problem. The regularization penalty term λ‖ΦΨ − D̂‖_F² drives ΦΨ toward the approximate solution D̂ and makes the solved measurement matrix more accurate through balancing and adjusting the value of λ. The new algorithm model (11) not only guarantees smaller mutual coherence between the measurement matrix and the sparsifying matrix through the cost ‖ΨᵀΦᵀΦΨ − I‖_F², but also ensures higher accuracy of the measurement matrix through the regularization penalty term. This penalty term plays an important role in the new cost function (11): the optimized measurement matrix generated by simultaneously minimizing the two terms of model (11) can guarantee signal reconstruction with higher accuracy, which is the advantage of our method of optimizing the traditional cost function (6) by adding a new regularization penalty term. The new objective function is solved by a gradient-projection-based strategy.
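The loss in (11) translates directly into code; the helper below is a sketch under the assumption that the loss has exactly the form ‖ΨᵀΦᵀΦΨ − I‖_F² + λ‖ΦΨ − D̂‖_F², with λ the weight parameter:

```python
import numpy as np

def loss(Phi, Psi, D_hat, lam):
    """The two-term loss (11): ||Psi^T Phi^T Phi Psi - I||_F^2
    + lam * ||Phi Psi - D_hat||_F^2."""
    n = Psi.shape[0]
    D = Phi @ Psi
    return (np.linalg.norm(D.T @ D - np.eye(n), 'fro') ** 2
            + lam * np.linalg.norm(D - D_hat, 'fro') ** 2)

rng = np.random.default_rng(2)
Psi = np.linalg.qr(rng.standard_normal((8, 8)))[0]
Phi = rng.standard_normal((4, 8))
# When D_hat = Phi Psi, the penalty term vanishes and only the Gram cost remains.
print(loss(Phi, Psi, Phi @ Psi, 0.35))
```

Note that when D̂ equals ΦΨ exactly, the value no longer depends on λ, which is a handy sanity check on an implementation.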
3.3. Gradient Projection
For the two special terms in our new model (11), the descent direction of this loss function is not simply the gradient direction but the direction of the gradient projection, which better ensures that our algorithm approaches an optimal solution. According to the analysis of the gradient descent direction, an iterative gradient-projection-based method is used to seek the minimum value of the loss function L(Φ). Firstly, the gradient of L is expressed by the derivative of L with respect to Φ, as follows:

∇L(Φ) = 4ΦΨ(ΨᵀΦᵀΦΨ − I)Ψᵀ + 2λ(ΦΨ − D̂)Ψᵀ. (12)
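The gradient expression can be checked numerically; the sketch below implements the derivative of the two-term loss as reconstructed here and verifies one entry against a central finite difference (sizes and seed are arbitrary):

```python
import numpy as np

def grad_loss(Phi, Psi, D_hat, lam):
    """Gradient of L(Phi) = ||Psi^T Phi^T Phi Psi - I||_F^2
    + lam * ||Phi Psi - D_hat||_F^2 with respect to Phi."""
    n = Psi.shape[0]
    D = Phi @ Psi
    return (4 * D @ (D.T @ D - np.eye(n)) @ Psi.T
            + 2 * lam * (D - D_hat) @ Psi.T)

def loss(Phi, Psi, D_hat, lam):
    n = Psi.shape[0]
    D = Phi @ Psi
    return (np.linalg.norm(D.T @ D - np.eye(n), 'fro') ** 2
            + lam * np.linalg.norm(D - D_hat, 'fro') ** 2)

# Central-difference check of one gradient entry.
rng = np.random.default_rng(3)
m, n, lam, eps = 4, 6, 0.35, 1e-6
Phi = rng.standard_normal((m, n))
Psi = np.linalg.qr(rng.standard_normal((n, n)))[0]
D_hat = rng.standard_normal((m, n))

g_an = grad_loss(Phi, Psi, D_hat, lam)
E = np.zeros_like(Phi)
E[0, 0] = eps
fd = (loss(Phi + E, Psi, D_hat, lam) - loss(Phi - E, Psi, D_hat, lam)) / (2 * eps)
print(abs(fd - g_an[0, 0]))   # close to zero
```

Agreement between the analytic and finite-difference values to several decimal places confirms the derivative of the quartic Gram term.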
In the process of solving (11), Φ is updated continually at each iteration; that is, from the kth iteration to the (k+1)th iteration, Φᵏ is updated to Φᵏ⁺¹. To complete the update, we first choose a positive scalar parameter αᵏ, αᵏ > 0, and define the following equation:

wᵏ = (Φᵏ − αᵏ∇L(Φᵏ))₊, (13)

where ∇L(Φᵏ) is the gradient of L at the kth iteration, and Φᵏ is the estimated solution of Φ at the kth iteration. (·)₊ is the positive-part operator defined as (x)₊ = max{0, x}. At each update of Φ, our algorithm searches along the negative gradient direction −∇L(Φᵏ), projects onto the nonnegative orthant, and conducts a backtracking line search until a sufficient decrease is attained in L.
Then the second scalar parameter λᵏ is defined, λᵏ ∈ [0, 1]. The iteration from Φᵏ to Φᵏ⁺¹ is expressed as follows:

Φᵏ⁺¹ = Φᵏ + λᵏδᵏ, (14)

where

δᵏ = wᵏ − Φᵏ. (15)
Since the loss function (11) is approximately quadratic around the current iterate, the line search parameter λᵏ in equation (14) can be calculated simply using the following closed-form expression:

λᵏ = mid{0, −⟨δᵏ, ∇L(Φᵏ)⟩ / γᵏ, 1}, (16)

where ⟨·,·⟩ is the Frobenius inner product, mid{·} takes the middle value of its three arguments, γᵏ is the curvature term defined in equation (17), and when γᵏ = 0, we set λᵏ = 1.
To acquire the value of αᵏ⁺¹, the parameter γᵏ is defined as follows:

γᵏ = ⟨δᵏ, ∇²L(Φᵏ)δᵏ⟩; (17)

then αᵏ⁺¹ is obtained by the following:

αᵏ⁺¹ = mid{α_min, ‖δᵏ‖_F² / γᵏ, α_max}, (18)

where α_min and α_max are the lower and upper bounds of αᵏ⁺¹, respectively; if γᵏ = 0, then αᵏ⁺¹ = α_max.
The use of the parameters λᵏ and αᵏ reduces the possibility that L(Φ) increases at some point in the iteration process and improves the efficiency of our algorithm in searching for the global optimal solution of L(Φ).
3.4. New Initialization
Since local minimum solutions are likely to trap the iterations of the algorithm, it is usually time-consuming to minimize the new cost function (11) directly by the gradient projection strategy. Therefore, in order to reduce the time consumed and improve efficiency, a wise initialization of Φ is of great worth; initializing Φ before the algorithm is executed is very necessary, as it directly determines the search direction of the algorithm and influences its results. Therefore, based on the characteristics of the two terms of objective function (11), we put forward a new method of initializing Φ.
First, we initialize the equivalent dictionary D̂ under the ideal assumptions that ΨΨᵀ = I and that the Gram matrix is equivalent to an identity matrix I, so

D̂ᴴD̂ = I. (19)

Then, we suppose that the pseudo-inverse is equal to the true inverse in this ideal situation, so

D̂† = D̂⁻¹. (20)

According to ΨΨᵀ = I, we get Ψ⁻¹ = Ψᵀ. Based on equation (10) and D = ΦΨ, the initial value of D is expressed as follows:

D₀ = D̂. (21)

Finally, we can get the initialized measurement matrix Φ₀ from D₀ = Φ₀Ψ, so Φ₀ is expressed as follows:

Φ₀ = D̂Ψ⁻¹ = D̂Ψᵀ; (22)

the D̂ in equation (22) is unknown, and we use random initialization to set it. Because random measurement matrices are incoherent with most fixed orthogonal sparsifying matrices with large probability, as described in the Introduction, we use a Gaussian random matrix as the D̂ of equation (22) to obtain the initialized measurement matrix Φ₀.
The above method of initializing Φ is our proposed new initialization based on the ideal Gram matrix. It takes full advantage of the coupling of the two approximation terms in the loss function (11) by assuming the two ideal situations of an identity Gram matrix and an exact equivalent dictionary simultaneously, which conforms to the characteristics of the objective function and also enables our algorithm to require fewer iterations. This new initialization method makes the loss function approach the global optimal solution with higher efficiency and larger probability.
Consequently, the Φ₀ obtained by our initialization and the fixed Ψ are taken as the inputs of our algorithm to train a more optimized measurement matrix Φ. The risk of falling into local minima and instability is ameliorated because the initialization of Φ already lies in the neighborhood of a desirable attraction basin of the global optimum, and the global optimum is almost equivalent to the ideal solutions of the two terms. In the process of our algorithm's operation, the final Φ is the estimation of these ideal solutions based on the D̂ of equation (22), and driving the two approximation terms toward zero simultaneously is the goal of our algorithm's search.
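The initialization Φ₀ = D̂Ψᵀ with a Gaussian D̂ is straightforward to sketch; the helper name and the 1/√m scaling below are illustrative assumptions, not prescribed by the text:

```python
import numpy as np

def init_measurement_matrix(Psi, m, rng):
    """New initialization (Section 3.4): under the ideal assumptions that
    the Gram matrix is the identity and Psi^{-1} = Psi^T, equation (22)
    gives Phi_0 = D_hat @ Psi^T with a Gaussian random D_hat."""
    n = Psi.shape[0]
    D_hat = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian surrogate for D
    return D_hat @ Psi.T, D_hat

rng = np.random.default_rng(4)
Psi = np.linalg.qr(rng.standard_normal((32, 32)))[0]   # orthogonal basis
Phi0, D_hat = init_measurement_matrix(Psi, 16, rng)
print(np.allclose(Phi0 @ Psi, D_hat))   # True: Phi0 Psi recovers D_hat exactly
```

Because Ψ is orthogonal, the initialization makes the equivalent dictionary Φ₀Ψ coincide with D̂, so the penalty term of (11) starts at zero.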
3.5. Pseudocodes for Solving Measurement Matrix
The specific and detailed steps of our proposed algorithm by gradient projection for solving problem (11) to get an optimized measurement matrix are presented in Algorithm 1.

In our proposed algorithm, after the measurement matrix is initialized, it is trained and optimized by making full use of the information of the known sparsifying matrix. Through a large number of experiments (those in Section 4), we tested values for the parameters in Algorithm 1 and found empirically that the following settings perform relatively well: the weight parameter λ is set to 0.35, the initial step parameter α⁰ is set to 0.01, and the bounds α_min and α_max are likewise fixed empirically.
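An end-to-end sketch of the training loop may clarify the overall flow. This is a simplified stand-in for Algorithm 1, not the authors' code: the positive/negative split and the step-size rules (16)–(18) are replaced by plain backtracking gradient descent, and all sizes are illustrative:

```python
import numpy as np

def loss(Phi, Psi, D_hat, lam):
    n = Psi.shape[0]
    D = Phi @ Psi
    return (np.linalg.norm(D.T @ D - np.eye(n), 'fro') ** 2
            + lam * np.linalg.norm(D - D_hat, 'fro') ** 2)

def grad(Phi, Psi, D_hat, lam):
    n = Psi.shape[0]
    D = Phi @ Psi
    return 4 * D @ (D.T @ D - np.eye(n)) @ Psi.T + 2 * lam * (D - D_hat) @ Psi.T

def optimize_phi(Psi, m, lam=0.35, alpha=0.01, iters=200, seed=0):
    """Simplified sketch of Algorithm 1: Gaussian D_hat, initialization
    Phi_0 = D_hat Psi^T (eq. (22)), descent on loss (11) with a halving
    backtracking step, and final column normalization (eq. (24))."""
    rng = np.random.default_rng(seed)
    n = Psi.shape[0]
    D_hat = rng.standard_normal((m, n)) / np.sqrt(m)
    Phi = D_hat @ Psi.T
    f = loss(Phi, Psi, D_hat, lam)
    for _ in range(iters):
        g = grad(Phi, Psi, D_hat, lam)
        step = alpha
        while loss(Phi - step * g, Psi, D_hat, lam) > f and step > 1e-12:
            step *= 0.5                  # backtrack until the loss decreases
        Phi_new = Phi - step * g
        f_new = loss(Phi_new, Psi, D_hat, lam)
        if f - f_new < 1e-12:            # no further progress
            break
        Phi, f = Phi_new, f_new
    return Phi / np.linalg.norm(Phi, axis=0)   # unit-norm columns

Psi = np.linalg.qr(np.random.default_rng(1).standard_normal((32, 32)))[0]
Phi_opt = optimize_phi(Psi, 16)
print(Phi_opt.shape)   # (16, 32)
```

The backtracking loop guarantees the loss is non-increasing, which mirrors the role of the λᵏ and αᵏ safeguards in the full algorithm.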
It is worth noting that there may be both positive and negative elements in the measurement matrix Φ. In the iteration process of our algorithm, in order to facilitate the algorithm operation, we divide Φ into a positive part Φ⁺ and a negative part Φ⁻; then Φ can be expressed as follows:

Φ = Φ⁺ − Φ⁻, (23)

where Φ⁺ᵢⱼ = max{Φᵢⱼ, 0} and Φ⁻ᵢⱼ = max{−Φᵢⱼ, 0} for all i, j. Φᵢⱼ, Φ⁺ᵢⱼ, and Φ⁻ᵢⱼ are the elements of the matrices Φ, Φ⁺, and Φ⁻, respectively.
Finally, after acquiring the optimized solution of Φ from the above algorithm, the columns of Φ need to be normalized to get the final optimized measurement matrix and make the experimental results more balanced and more stable. The normalization is expressed as follows:

φ̂ᵢ = φᵢ / ‖φᵢ‖₂, i = 1, 2, …, n, (24)

where φᵢ is the ith column of Φ.
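Both post-processing steps, the split (23) and the column normalization (24), are one-liners in NumPy:

```python
import numpy as np

# Equation (23): split Phi into nonnegative parts; equation (24): give
# each column of the optimized matrix unit L2 norm.
rng = np.random.default_rng(5)
Phi = rng.standard_normal((4, 6))

Phi_plus = np.maximum(Phi, 0.0)     # positive part
Phi_minus = np.maximum(-Phi, 0.0)   # negative part
recombined = Phi_plus - Phi_minus   # equals Phi exactly

Phi_norm = Phi / np.linalg.norm(Phi, axis=0)   # unit-norm columns
print(np.allclose(recombined, Phi))            # True
```

The split keeps every iterate in the nonnegative orthant, which is what makes the positive-part projection (·)₊ applicable.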
According to the above analysis, the measurement matrix obtained by our optimization algorithm should, in theory, have lower coherence with the specific sparsifying basis. Next, we test the performance of the optimized measurement matrix by specific experiments, compared with the conventional random measurement matrices and the classic optimization methods in the literature [47] and [48].
4. Experiments
In this section, to test the performance of the optimized measurement matrices after adding the new regularization penalty term to the traditional cost function (6), we conducted numerical experiments that provide the distribution of the off-diagonal elements of the Gram matrix obtained using different measurement matrices and different methods. In addition, further experiments reconstruct one-dimensional sparse signals and two-dimensional nonsparse images using different measurement matrices and different methods. All the experiments were run in MATLAB 2017b on a standard PC with a 2.20 GHz Intel Core i5 CPU and 4 GB memory, running the Windows 10 operating system.
4.1. OffDiagonal Elements’ Distribution of Gram Matrix
According to equation (6), making the Gram matrix as close as possible to the identity matrix equivalently minimizes the mutual coherence between Φ and Ψ. To test the incoherence performance of the measurement matrix optimized by our proposed method, we conducted experiments at m = 256 and n = 512 that provide the distribution of the off-diagonal elements of the Gram matrix obtained using different measurement matrices and different methods, as shown in Figure 1. The measurement matrices include the four conventional random measurement matrices, namely the 0–1 binary sparse matrix, Gaussian matrix, Bernoulli matrix, and part Hadamard matrix; the compared methods are Elad's method from the literature [47] and our method before and after optimizing the traditional cost function (6) by adding the regularization penalty term. The method of directly solving equation (6) by the gradient projection strategy without the regularization penalty term is denoted as "No Regularization (NoReg)" in this paper.
In the experiments, in order to facilitate the statistics of the number of off-diagonal elements, the absolute values of the off-diagonal elements were rounded to two decimal places with a precision of 0.01, which means that there are 100 evenly distributed points from 0 to 1 and the number of off-diagonal elements is counted at every point. The known square DWT (discrete wavelet transform) matrix was given as the sparsifying dictionary Ψ. Because the measurement matrix in Wang's method is optimized indirectly based on singular value decomposition, we cannot directly obtain the optimized measurement matrix of Wang's method; thus, the off-diagonal elements' distribution of its Gram matrix is not provided in Figure 1. The method of No Regularization is included as a comparison to demonstrate the important role of the added regularization penalty term in the new cost function (11). Because the absolute values of the off-diagonal elements of the random matrices are very small, Figure 1 is divided into two subfigures (a) and (b) to display the curves of all the compared matrices. In Figure 1(a), the yellow, blue, green, and cyan curves have very low values and can hardly be displayed normally, while the black, red, and magenta curves are too narrow to display well in Figure 1(b). Therefore, in order to display the curves of all the colors, the two subfigures of Figure 1 are plotted with two different scales.
According to the results of Figure 1, the distributions of the off-diagonal elements of the Gram matrix are close and similar for the four random measurement matrices. Because the random matrices have not been optimized via mutual coherence minimization, their distributions are far worse than those of No Regularization, Elad's method, and our method. Furthermore, our method outperforms No Regularization and Elad's method by increasing the frequency of off-diagonal elements with small absolute values and reducing the frequency of large absolute values, better making the columns of the equivalent dictionary as close to orthogonal as possible. Thus, the experimental results of Figure 1 show that the added regularization penalty term in the new cost function (11) plays a crucial role in improving the performance of our proposed algorithm over the traditional cost function (6), and they demonstrate the effectiveness and superiority of our gradient-projection-based algorithm in minimizing the mutual coherence between Φ and Ψ.
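The off-diagonal statistic of Figure 1 can be reproduced in miniature; the sketch below (small illustrative sizes, a random Gaussian Φ) bins the absolute off-diagonal Gram entries with the same 0.01 precision:

```python
import numpy as np

def offdiag_hist(Phi, Psi, bins=100):
    """Count |off-diagonal| Gram entries in bins of width 0.01, as in Figure 1."""
    D = Phi @ Psi
    D = D / np.linalg.norm(D, axis=0)          # unit-norm dictionary columns
    G = D.T @ D
    off = np.abs(G[~np.eye(G.shape[0], dtype=bool)])
    counts, _ = np.histogram(np.clip(off, 0.0, 1.0), bins=bins, range=(0.0, 1.0))
    return counts

rng = np.random.default_rng(6)
n = 64
Psi = np.linalg.qr(rng.standard_normal((n, n)))[0]
counts = offdiag_hist(rng.standard_normal((32, n)), Psi)
print(counts.sum())   # 64 * 63 = 4032 off-diagonal entries in total
```

Running the same routine on an optimized Φ would shift the mass of the histogram toward the low-value bins, which is exactly what Figure 1 visualizes.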
The following experiments were conducted to test the performance of our proposed algorithm in practical signal reconstruction; the OMP algorithm was used as the CS-based signal reconstruction algorithm. The measurement vector y was generated by equation (1) with white Gaussian noise of variance σ². One-dimensional sparse signals and two-dimensional nonsparse images were reconstructed with the new optimized measurement matrices generated by our method, compared with the four conventional random measurement matrices, the method of No Regularization, Elad's method [47], and Wang's method [48].
4.2. OneDimensional Sparse Signals
For the reconstruction of one-dimensional sparse signals, the mean square error (MSE) is used to evaluate the accuracy of the reconstructed signals; the smaller the MSE, the higher the reconstruction accuracy. MSE is defined as follows:

MSE = (1/n)‖x − x̂‖₂², (25)

where x̂ is the estimation of x.
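Equation (25) in code, with a small worked check:

```python
import numpy as np

def mse(x, x_hat):
    """Equation (25): MSE = (1/n) * ||x - x_hat||_2^2."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return float(np.mean((x - x_hat) ** 2))

# One error of size 1 over n = 3 samples gives MSE = 1/3.
print(mse([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]))   # 0.3333...
```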
In the experiments, since the one-dimensional signals are already sparse, there is no need to sparsify them again, so the Ψ in the objective function (11) and equation (22) is replaced by an identity matrix of dimension n × n. We set n = 512, m = 256, and K = 100, meaning that the original one-dimensional sparse signal of length 512 contains 100 randomly placed +1 and −1 spikes, with all other elements zero. The measurement number is 256 and the compression rate is 0.5. For one-dimensional sparse signals, the iteration number in the OMP algorithm is set to the sparsity level K. The results of reconstructing one-dimensional sparse signals by different measurement matrices and different optimization methods at the compression rate 0.5 (m = 256, n = 512) are shown in Figure 2.
As shown in Figure 2, the measurement matrix optimized by our proposed method has the best performance in reconstructing one-dimensional sparse signals: the recovered signals have the lowest MSE and are closest to the original sparse signals, compared with the four conventional measurement matrices, the method of No Regularization, Elad's method, and Wang's method. Because the four conventional measurement matrices, the initialization matrix in our method, and the matrices of No Regularization, Elad's method, and Wang's method are all generated randomly, more experiments should be done to verify whether the measurement matrices have stable performance and to reduce the randomness of the results as much as possible. Thus, five experiments were run for each sparsity level in reconstructing one-dimensional sparse signals, and the average of the five MSEs was taken as the final result. According to the experimental results, the evolution of the MSE versus the sparsity level K under different measurement matrices is plotted for the compression rate 0.5 (m = 256, n = 512), as shown in Figure 3.
As shown in Figure 3, the MSEs of the signals reconstructed by our method are lower than those of the other four conventional measurement matrices, the method of No Regularization, and Elad's and Wang's methods at every sparsity level K, so our method reconstructs sparse signals with higher precision. In addition, to test our algorithm's performance on larger data, experiments were done to reconstruct larger sparse signals of length n = 1024 with the sparsity level K set to 128. In the same way, every reported MSE is an average over 5 experiments. The relationship curves between MSE and the measurement number m under different measurement matrices and different optimization methods are shown in Figure 4.
As shown in Figure 4, compared with the other measurement matrices, the method of No Regularization, and Elad's and Wang's methods, our method again achieves the best performance, with the lowest MSEs at every measurement number m. Thus, the above experimental results verify that the new measurement matrices generated by our optimization method perform better in reconstructing one-dimensional sparse signals of different lengths and different sparsity levels under different measurement numbers.
4.3. TwoDimensional Nonsparse Images
In this subsection, we conduct the following experiments to test the performance of our method in reconstructing two-dimensional images. We choose eight representative images of size 512 × 512 that are not sparse, including nature scenes, persons, animals, and detailed and texture images, as shown in Figure 5. In order to enhance the efficiency of image reconstruction and reduce the computational complexity, we measure and reconstruct each 512 × 1 column of the original images separately at the compression rate 0.25 (m = 128, n = 512), and the 512 reconstructed columns are then assembled back into a full reconstructed image of size 512 × 512. This column-by-column scanning approach was also used in Wang's method [48]; it not only reduces the problem size but also eliminates the obvious block mosaics that 2D block-based scanning produces in the reconstructed images, as described in the literature [59].
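Measuring an image column by column with a shared Φ amounts to a single matrix product; the sketch below uses the sizes from the text (m = 128, n = 512) with random stand-in data:

```python
import numpy as np

# Column-by-column sampling: one shared Phi measures every 512x1 column,
# so the whole image is compressed by a single matrix product.
rng = np.random.default_rng(8)
img = rng.random((512, 512))               # stand-in for a 512x512 image
Phi = rng.standard_normal((128, 512))      # m = 128, n = 512 (rate 0.25)

Y = Phi @ img                              # column j of Y measures column j of img
print(Y.shape)                             # (128, 512)
```

Reconstruction then runs the 1-D solver independently on each of the 512 columns of Y, which is why no block mosaics appear in the result.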
For two-dimensional nonsparse images, the iteration number in the OMP algorithm is set to the measurement number m. The peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) are used as the evaluation indexes of image quality, defined as follows:

PSNR = 10 log₁₀( L² / [ (1/(HW)) Σᵢ₌₁ᴴ Σⱼ₌₁ᵂ (Xᵢⱼ − X̂ᵢⱼ)² ] ), (26)

SSIM(X, X̂) = (2μ_X μ_X̂ + c₁)(2σ_XX̂ + c₂) / [ (μ_X² + μ_X̂² + c₁)(σ_X² + σ_X̂² + c₂) ], (27)

where X is the original image and X̂ is the reconstructed image. H and W are the height and the width of the image, respectively. μ_X and μ_X̂ are, respectively, the mean values of X and X̂. σ_X², σ_X̂², and σ_XX̂ are the variances and covariance of X and X̂, respectively. c₁ = (k₁L)² and c₂ = (k₂L)² are the constants that maintain stability, with k₁ = 0.01 and k₂ = 0.03. L is the dynamic range of the pixel greyscale, L = 255.
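Both metrics are simple to compute; the sketch below implements PSNR and a single-window (global) SSIM under the standard constants k₁ = 0.01, k₂ = 0.03, L = 255 (the per-window averaging of the usual SSIM implementation is omitted for brevity):

```python
import numpy as np

def psnr(x, y, peak=255.0):
    """Equation (26): PSNR = 10 log10(peak^2 / MSE), in dB."""
    m = np.mean((x - y) ** 2)
    return 10.0 * np.log10(peak ** 2 / m)

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Equation (27) evaluated over the whole image (single window)."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

x = np.tile(np.arange(256, dtype=float), (8, 1))   # a simple test ramp image
print(round(psnr(x, x + 1.0), 2))     # 48.13 dB (MSE = 1, peak = 255)
print(round(ssim_global(x, x), 6))    # 1.0 for identical images
```

A uniform error of 1 grey level gives MSE = 1 and hence PSNR = 20 log₁₀(255) ≈ 48.13 dB, a convenient reference point when reading Tables 1 and 2.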
It is worth noting that, in the reconstruction of one-dimensional sparse signals, the sparsifying basis is an identity matrix. For the reconstruction of two-dimensional non-sparse images, however, since the images themselves are not sparse, the square DWT matrix with a 5-level wavelet is used as the sparsifying basis in our algorithm. The performance of the optimized measurement matrices in reconstructing images is analyzed according to the following experimental results.
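For illustration only, a square multi-level DWT matrix of the kind used as the sparsifying basis can be constructed as follows. The wavelet family used by the paper is not specified in this excerpt, so this sketch uses the simple Haar wavelet, and the level count and sizes are toy values.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal single-level Haar analysis matrix of size n x n (n even)."""
    h = np.zeros((n, n))
    for k in range(n // 2):
        h[k, 2 * k] = h[k, 2 * k + 1] = 1 / np.sqrt(2)   # approximation rows
        h[n // 2 + k, 2 * k] = 1 / np.sqrt(2)            # detail rows
        h[n // 2 + k, 2 * k + 1] = -1 / np.sqrt(2)
    return h

def dwt_matrix(n, levels):
    """Multi-level DWT matrix: recurse on the approximation (top) band."""
    w = haar_matrix(n)
    for lev in range(1, levels):
        size = n >> lev
        step = np.eye(n)
        step[:size, :size] = haar_matrix(size)
        w = step @ w
    return w

w = dwt_matrix(8, 3)  # toy 3-level transform; the paper uses 5 levels on larger images
```

Because w is orthonormal, its transpose serves directly as the sparsifying basis, and a piecewise-constant signal becomes sparse under it.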
Table 1 shows the PSNRs and SSIMs of the eight reconstructed images for different measurement matrices and different optimization methods at the compression rate of 0.25 (m = 128). All the data in Table 1 are the averages of five experiment runs; Table 2 gives the detailed statistics of each run for the image Building, together with the means and variances of the PSNRs and SSIMs.
From Table 2, one can see that the variances of the PSNRs and SSIMs across the five runs are very small, so the results of every run are stable with little fluctuation. Moreover, in every individual run our method has clear advantages and effectively improves the PSNRs and SSIMs of the reconstructed images. Therefore, five runs objectively reflect the actual performance of our algorithm and demonstrate that it is both stable and superior.
To show the improvement visually, Figure 6 presents the reconstructed images of Building from one of the five runs under the different measurement matrices and methods, with the local areas marked by the red box amplified.
From the results of Tables 1 and 2 and Figure 6, and consistent with the quantitative results, our method generally performs better, and the image it reconstructs at the compression rate of 0.25 (m = 128) is more visually pleasing than those of the four conventional measurement matrices, the No Regularization method, and the Elad's and Wang's methods. The PSNRs increase by about 1 to 3 dB, and the SSIMs increase by about 0.01 to 0.2, as shown in bold in Tables 1 and 2. Moreover, the images reconstructed with the four conventional measurement matrices suffer from severe noise and blurred contours and edges. The images reconstructed by the No Regularization, Elad's, and Wang's methods also show some distortion and considerable noise, while those reconstructed by our method have clearly better accuracy and visual quality.
To further demonstrate the effectiveness of our method on non-sparse images at a higher sampling rate, the person image Cameraman is reconstructed at the compression rate of 0.5 (m = 256). Figure 7 shows the reconstruction results of Cameraman at m = 256 under the different measurement matrices and methods; the local detailed areas marked by the red box are again amplified.
As shown in Figure 7, compared with the four conventional measurement matrices, the image Cameraman reconstructed by our method is closer to the original image, with much higher reconstruction quality and accuracy, lower noise, and fewer visual artifacts. It also shows higher contrast and richer details, especially in the enlarged area. Compared with the No Regularization, Elad's, and Wang's methods, our proposed method likewise raises the PSNR and SSIM of the reconstructed image and reduces the reconstruction errors, yielding a result closer to the original image with better visibility of details. These experiments further show that our algorithm remains superior to the other conventional methods at the higher sampling rate of 0.5.
In addition, compressed sensing is widely applied in biomedical image processing, such as magnetic resonance imaging (MRI). To verify that our method generalizes to different types and sizes of images and also performs well on MRI images, we conducted experiments on smaller data: the MRI image Brain, of size 128 × 128, was reconstructed with our method at the compression rates of 0.25 and 0.5 (m = 32 and m = 64), using the DWT matrix with a 3-level wavelet. The experimental results are shown in Figures 8 and 9.
As shown in Figure 8, at the compression rate of 0.25 our method greatly improves the reconstruction quality of the MRI image, with the PSNRs rising by about 3 to 4 dB and the SSIMs by about 0.05 to 0.15 compared with the other seven methods. The other seven methods can hardly reconstruct the contour of the brain, whereas our method recovers its basic contour and shape. In Figure 9, image (h) is closest to the original image (a), while the other seven images show more visual artifacts and distortions. Figure 9(h), reconstructed with our optimized measurement matrix, also has the highest PSNR and SSIM, with more clarity and less noise, producing the most visually comfortable result. These experiments thus show that our method also outperforms the other seven conventional methods in reconstructing MRI images of smaller size.
5. Analysis and Discussion
The above experiments have verified the superior performance of our new algorithm in optimizing the measurement matrix, through the reconstruction of one-dimensional sparse signals and two-dimensional non-sparse images under different sparsity levels and measurement numbers. Our proposed method makes the element distribution of the Gram matrix closer to the identity matrix than the other methods do. In addition, comparing the results of the new loss function (11) with and without the regularization penalty term shows that, after augmenting the traditional cost function (6) with this term, our method reconstructs signals and images with higher precision than directly solving cost function (6), under different sparsity levels and measurement numbers. The new cost function model (11) with the added regularization penalty term thus plays a crucial role in our algorithm and gives the measurement matrix smaller coherence with the sparsifying basis, which demonstrates the effectiveness of adding the regularization penalty term to the traditional cost function.
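The coherence criterion discussed throughout can be evaluated directly from the Gram matrix of the equivalent dictionary D = ΦΨ. A minimal sketch (not the paper's code; function names are hypothetical):

```python
import numpy as np

def mutual_coherence(phi, psi):
    """Largest absolute off-diagonal entry of the normalized Gram matrix
    G = D^T D, where D = Phi @ Psi is the equivalent dictionary."""
    d = phi @ psi
    d = d / np.linalg.norm(d, axis=0, keepdims=True)  # unit-norm columns
    g = d.T @ d
    off = g - np.diag(np.diag(g))                     # zero out the diagonal
    return np.abs(off).max()

rng = np.random.default_rng(0)
mu = mutual_coherence(rng.standard_normal((3, 6)), np.eye(6))
```

When the Gram matrix is exactly the identity the coherence is zero, which is the ideal that the optimization approximates.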
Our optimized measurement matrices perform well in extracting information from original signals and restoring information from sampled signals, so our new method can reconstruct original signals with higher precision and higher probability. The main contributions of this paper are as follows:
(i) The new algorithm model (11) of the cost function is proposed to seek a measurement matrix that is least coherent with the sparsifying basis, with the regularization penalty term theoretically ensuring the accuracy of solving for the measurement matrix;
(ii) A new measurement matrix initialization method is designed around the characteristics of the new model (11). It makes the model easier to solve by reducing the risk of the solution falling into local minima or instability;
(iii) Because of the coupling between the measurement matrix and the sparsifying basis, solving model (6) is usually time-consuming. Our method provides a practical way to solve the new mathematical optimization problem and to construct a more optimized measurement matrix;
(iv) Our algorithm can find a solvable measurement matrix adapted to the corresponding fixed sparsifying basis, giving it good flexibility and adaptability in reconstructing CS-based signals.
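To make the solution strategy concrete, here is a hedged sketch of a plain gradient step on the classical Gram-approximates-identity objective of model (6), with row normalization of Φ standing in as a simple projection step. The paper's actual model (11) additionally contains a regularization penalty term not reproduced here, and the step size and iteration count below are arbitrary illustrative choices.

```python
import numpy as np

def gram_descent(phi, psi, lr=1e-3, iters=50):
    """Gradient descent on f(Phi) = || (Phi Psi)^T (Phi Psi) - I ||_F^2,
    projecting each row of Phi back to unit norm after every step."""
    n_atoms = psi.shape[1]
    for _ in range(iters):
        d = phi @ psi
        e = d.T @ d - np.eye(n_atoms)        # Gram-matrix error term
        grad = 4.0 * d @ e @ psi.T           # dF/dPhi for the Frobenius objective
        phi = phi - lr * grad
        phi = phi / np.linalg.norm(phi, axis=1, keepdims=True)  # projection
    return phi

rng = np.random.default_rng(1)
phi_opt = gram_descent(rng.standard_normal((4, 8)), np.eye(8))
```

Since m < n, the Gram matrix can never reach the identity exactly; the descent only drives it as close as the frame geometry allows, which is precisely the sense in which the optimized Φ is "least coherent" with Ψ.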
In addition, two important matters are worth discussing:
(i) First, the amount of information the measurement matrix extracts from the original signals increases with the number of measurements, which makes the original signals easier to recover because more valuable information is available. This is why the improvement in PSNR and SSIM achieved by our method on two-dimensional images is smaller at the compression rate of 0.5 than at 0.25. The trade-off between compression rate and recovery accuracy must therefore be considered carefully when measuring and recovering signals.
(ii) Second, small mutual coherence between the measurement matrix and the sparsifying basis ensures only that the original signals may be recovered with high probability. Incoherence is a necessary condition and just one of many factors (the reconstruction algorithm, the sparsifying transform, the original signal itself, and so on) that affect the final reconstruction accuracy, and the work of this paper optimizes only the measurement matrix. Incoherence therefore cannot guarantee that the original signals are reconstructed precisely with a probability of 100%, only with a large probability. Likewise, the higher frequency of off-diagonal elements with low absolute value in Figure 1 does not mean that a more accurate signal will be reconstructed with certainty, only with higher probability; the reconstruction results carry some randomness related to the original signal itself. Consequently, the experimental results of Figures 2–9 and Tables 1 and 2 may not exactly conform to the distribution of Figure 1. Nevertheless, our algorithm outperforms the traditional methods and reconstructs the signals successfully with much higher probability in these experiments.
6. Conclusions
In this paper, a gradient projection-based strategy is used to solve our proposed new algorithm model, together with a new method of initializing the measurement matrix, for reconstructing compressively sensed signals; this offers a new way of acquiring optimized measurement matrices by minimizing the mutual coherence between the measurement matrix and the sparsifying basis. According to the theoretical analysis and the experimental results, our measurement matrix optimization method performs well and outperforms the conventional random measurement matrices and several existing optimization methods. Both one-dimensional sparse signals and two-dimensional non-sparse images can be reconstructed by our optimized measurement matrices with less error, higher accuracy, and better quality, and the new algorithm also performs well in reconstructing the MRI image. Our work may therefore encourage wider exploration in this direction and has potential application value in the broader field of signal processing. Further research on the incoherence property and on optimization algorithms for the measurement matrix in compressive sensing is well worthwhile, and extending our research to broader applications is the focus of our future work.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Authors’ Contributions
Ziran Wei made significant contributions to this study regarding conception, methodology and analysis, and writing the manuscript. Jianlin Zhang contributed much to the research through his general guidance and advice and optimized the paper. Zhiyong Xu made research plans and managed the study project. Yong Liu gave lots of advice and helped in conducting experiments.
Acknowledgments
The authors are grateful to the Fraunhofer Institute for Computer Graphics Research IGD of Germany for full support. This research was funded by the National High Technology Research and Development Program of China (863 Program, grant no. G158207), the West Light Foundation for Innovative Talents (grant no. YA18K001), and the UCASFraunhofer Joint Training Ph.D. Scholarship.