Mathematical Problems in Engineering


Research Article | Open Access


Ziran Wei, Jianlin Zhang, Zhiyong Xu, Yong Liu, "Measurement Matrix Optimization via Mutual Coherence Minimization for Compressively Sensed Signals Reconstruction", Mathematical Problems in Engineering, vol. 2020, Article ID 7979606, 18 pages, 2020. https://doi.org/10.1155/2020/7979606

Measurement Matrix Optimization via Mutual Coherence Minimization for Compressively Sensed Signals Reconstruction

Academic Editor: Krzysztof Okarma
Received: 15 Jun 2020
Revised: 13 Aug 2020
Accepted: 24 Aug 2020
Published: 29 Sep 2020

Abstract

For signal reconstruction based on compressive sensing, reconstructing signals with higher accuracy at lower compression rates requires a smaller mutual coherence between the measurement matrix and the sparsifying matrix. This mutual coherence can be expressed indirectly through the properties of the Gram matrix. On the basis of the Gram matrix, a new optimization algorithm for acquiring a measurement matrix is proposed in this paper. First, a new mathematical model is designed and a new method of initializing the measurement matrix is adopted. Then, the loss function of the new algorithm model is minimized by a gradient projection method that drives the Gram matrix toward an identity matrix. Finally, the optimized measurement matrix is generated by minimizing the mutual coherence between the measurement matrix and the sparsifying matrix. Compared with conventional measurement matrices and traditional optimization methods, the proposed algorithm effectively improves the performance of the optimized measurement matrices in reconstructing one-dimensional sparse signals and two-dimensional image signals that are not sparse. The superior performance of the proposed method is verified by extensive experiments.

1. Introduction

The theory of compressive sensing (compressed sensing, CS) was proposed by Donoho et al. in 2006. The main idea of compressed sensing is to combine signal sampling and signal compression, under the premise that the original signal is sparse or can be sparsely represented [1]. The sampling of compressed sensing completes the compression by reducing signal dimensions, without the intermediate stage of Nyquist sampling [2]. The original signal is then reconstructed directly, from fewer samples than Nyquist sampling requires, through the corresponding signal reconstruction algorithms, which saves transmission and storage costs and reduces computational complexity and energy consumption [1, 3–5]. Because compressed sensing can sample signals at a lower rate than that required by the Shannon–Nyquist theorem, it improves the compressibility and recoverability of signals [3, 6, 7]. These properties give CS a wide range of applications in many fields of signal processing [2, 8]. Some typical CS-based applications include single-pixel imaging [9–13], recovery of images and video [14], wireless image sensor networks [15], classification problems [16, 17], and biomedical signal processing [4, 18–21], such as nuclear magnetic resonance imaging (MRI) [22–24] and electrocardiographic (ECG) signal processing [25, 26].

In the sampling process of CS, the original discrete signal is denoted as x ∈ ℝⁿ. The measured or sampled signal y ∈ ℝᵐ is obtained by the measurement matrix Φ of size m × n, with m ≪ n. So, we have the following:

y = Φx + e, (1)

where e is the Gaussian white noise with variance σ².

The original signal x is reconstructed from y. Because the dimension of y is much smaller than that of x, an underdetermined equation must be solved. To solve the underdetermined equation, it is required that x be sparse or sparsely representable [6, 27]. The sparsity level of the discrete signal x is usually expressed by the L0 norm ||x||_0 = K, where K denotes the number of nonzero elements in x.
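As a concrete illustration of this sampling model, the following sketch (in Python/NumPy; our own illustration, not code from the paper, with the noise level chosen arbitrarily) builds a K-sparse spike signal, a Gaussian measurement matrix, and a noisy measurement vector:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, K = 512, 256, 100          # signal length, measurement number, sparsity level

# K-sparse signal: K randomly placed +/-1 spikes (as in Section 4.2)
x = np.zeros(n)
support = rng.choice(n, size=K, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=K)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian random measurement matrix
sigma2 = 1e-4                                   # noise variance (assumed value)
y = Phi @ x + np.sqrt(sigma2) * rng.standard_normal(m)  # y = Phi x + e, eq. (1)
```

Reconstruction then amounts to recovering the 512-dimensional x from the 256 entries of y by exploiting the sparsity of x.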

According to equation (1), we can get the following constrained optimization expression by adding the sparsity constraint:

x̂ = argmin_x ||x||_0  subject to  ||y − Φx||_2 ≤ ε, (2)

where ε is a nonnegative real parameter.

We use the orthogonal sparsifying basis Ψ to represent x sparsely, x = Ψθ. Thus,

y = ΦΨθ + e, (3)

where θ = Ψ^T x, (·)^T is the transpose operator, and θ is the sparse coefficients vector. In this paper, the discrete wavelet transform (DWT) is used as the sparsifying basis for its good sparsifying performance [28, 29].

So, in order to obtain x, a new expression is obtained according to equation (3):

θ̂ = argmin_θ ||θ||_0  subject to  ||y − ΦΨθ||_2 ≤ ε, (4)

where θ̂ is the estimated solution of θ, and the estimation of the original signal is acquired as x̂ = Ψθ̂. D = ΦΨ is called the equivalent dictionary. When x is itself a sparse signal, Ψ can be taken as an identity matrix of size n × n, and D is then almost equivalent to Φ.

One can know that the sparsity level of the original signal, the design of the measurement matrix, and the signal reconstruction algorithm are the three main parts of CS theory [30]. Some recent works have been done to optimize CS theory, with different optimization approaches, technical principles, and application scenarios [31–33], such as adapted compressed sensing and optimized projections for compressed sensing, which further enrich and optimize compressed sensing theory and approaches. For CS-based signal reconstruction, greedy pursuit algorithms [34–38], minimum norm optimization algorithms [39–41], and iterative threshold algorithms [42, 43] are usually used. The orthogonal matching pursuit (OMP) algorithm is one of the greedy pursuit algorithms and has stable performance in reconstructing signals because of its simplicity and efficiency in obtaining an approximate solution [36–38]. Thus, the OMP algorithm is used to reconstruct signals in this paper.

It has been proved that lower mutual coherence between the measurement matrix and the sparsifying matrix usually helps to ensure successful reconstruction of the signal [44], and random measurement matrices are incoherent with most fixed orthogonal sparsifying matrices with large probability [3, 45]. Commonly used random measurement matrices include the 0-1 binary sparse random matrix, the Gaussian random matrix, the Bernoulli random matrix, and the partial Hadamard random matrix. In addition to these conventional random measurement matrices, well-optimized measurement matrices can often lead to good signal reconstruction performance [46–50], and measurement matrices are also often optimized for specific application scenarios [51–53]. For example, in the literature [47], Elad proposed a classic algorithm that tries to minimize the average mutual coherence between the measurement matrix and the sparsifying basis by iteratively updating the measurement matrix while the sparsifying basis is fixed. Although Elad's method effectively improved the performance of CS-based signal reconstruction, the accuracy of the reconstruction was insufficient, and the method is time-consuming because it requires many iterations. In Wang's work [48], a weighted measurement matrix method was proposed that improves the restricted isometry constant (RIC) of the measurement matrix based on singular value decomposition (SVD). Experimental results showed that signal reconstruction using the matrices optimized by Wang's method is better than direct reconstruction with the unoptimized matrices [48].

Although these works have contributed to optimizing measurement matrices, they have limitations: they usually require specific experimental conditions and experimental objects. Thus, building on these works, it is necessary to develop a better optimization algorithm that generates a measurement matrix adapted to a given fixed sparsifying basis and that achieves higher signal reconstruction accuracy under different measurement numbers and different signal sparsity levels; generating such new optimized measurement matrices is the contribution of this paper. Different from the conventional measurement matrices and the works in the literature [47, 48], the main contribution of this paper is a new measurement matrix optimization algorithm based on the proposed new mathematical model and the new measurement matrix initialization method. In one of our previous research works [54], the optimized measurement matrices were fixed as binary measurement matrices and were applied to our single-pixel camera only for two-dimensional imaging. Building on those conclusions, here we seek optimal measurement matrices not only for binary measurement matrices and two-dimensional image signals, but also adaptive measurement matrices for one-dimensional sparse signals and medical images, and our experiments are further conducted under different signal sparsity levels and different compression rates.

The remainder of this paper is organized as follows: Section 2 introduces the basic theory and related background of the measurement matrix. In Section 3, our new algorithm is proposed, the algorithm model is analyzed, and the detailed algorithm process is introduced. In Section 4, extensive experiments test the performance of our method in reconstructing one-dimensional sparse signals and two-dimensional image signals. In Section 5, analysis and discussion of our algorithm and the experimental results are presented. Finally, Section 6 summarizes the work of this paper.

2. Measurement Matrix Formulation

From the above theoretical introduction to CS, one can see that the performance of the measurement matrix directly influences the results of signal reconstruction, and the measurement matrix is the important intermediate link between signal sparsifying and signal reconstruction. Candès et al. have contributed pioneering works to the theory of compressed sensing, publishing a series of important papers about the conditions that measurement matrices should meet and the relationship between the number of measurements and the signal sparsity [33, 55, 56]. The literature [30] proves that if the measurement matrix and the orthogonal sparsifying basis are as incoherent (orthogonal) as possible, the measurement matrix will have better performance with large probability.

Generally, the sparsifying basis Ψ is known and fixed, so the mutual coherence between the measurement matrix and the sparsifying basis can be minimized by designing and optimizing the measurement matrix Φ. Intuitively, if Φ and Ψ are incoherent or uncorrelated, the samples taken from the original signal contain more new information that is not already represented by the known basis Ψ. Thus, the performance of the measurement matrix depends largely on the mutual coherence between the measurement matrix and the sparsifying matrix.

According to the related mathematical theory [47, 57], minimizing the coherence between the measurement matrix and the sparsifying matrix can be indirectly transformed into the problem of making the Gram matrix approximate an identity matrix as closely as possible. The Gram matrix is defined as follows:

G = D^H D = (ΦΨ)^H ΦΨ, (5)

where (·)^H is the conjugate transpose operator.

The incoherence denotes the orthogonal property between the measurement matrix and the sparsifying matrix. According to the definition of the Gram matrix, minimizing the mutual coherence between Φ and Ψ is equivalent to minimizing the absolute off-diagonal elements of the corresponding Gram matrix, i.e., making the Gram matrix as close as possible to the identity matrix [57]. The closer the Gram matrix approximates the identity matrix, the smaller the mutual coherence between Φ and Ψ. Thus, constructing a more optimized measurement matrix is equivalent to solving the following optimization problem:

min_Φ ||(ΦΨ)^H ΦΨ − I||_F², (6)

where I is an identity matrix of size n × n. Minimizing equation (6) therefore effectively and indirectly minimizes the mutual coherence. In the literature [47], Elad's work is based exactly on model (6), seeking an optimized measurement matrix via minimization of the average of the off-diagonal entries of the Gram matrix.
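To make the Gram-matrix criterion concrete, the following sketch (our illustration; a random Gaussian matrix and an identity stand in for Φ and Ψ) computes the column-normalized Gram matrix of the equivalent dictionary and its Frobenius distance from the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 64, 128
Phi = rng.standard_normal((m, n))     # stand-in measurement matrix
Psi = np.eye(n)                       # stand-in for an orthogonal sparsifying basis

D = Phi @ Psi                         # equivalent dictionary D = Phi Psi
D = D / np.linalg.norm(D, axis=0)     # normalize columns so that diag(G) = 1
G = D.T @ D                           # Gram matrix, eq. (5)

cost = np.linalg.norm(G - np.eye(n), 'fro')**2   # objective of eq. (6)
```

A smaller `cost` means smaller off-diagonal Gram entries and hence lower mutual coherence between Φ and Ψ.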

3. Measurement Matrix Optimization

In this section, we give the mathematical definition of mutual coherence between the measurement matrix and the sparsifying matrix, and a new gradient projection based optimization algorithm is proposed to generate the new optimized measurement matrix.

3.1. Mathematical Definition of Mutual Coherence

On the basis of the Gram matrix, in [47], Elad gave a definition of the mutual coherence between Φ and Ψ by using the equivalent dictionary D, D = ΦΨ, with columns d_i, i = 1, 2, …, n. The mutual coherence is then defined as follows:

μ(D) = max_{1 ≤ i < j ≤ n} |d_i^H d_j| / (||d_i||_2 ||d_j||_2). (7)

Furthermore, from the perspective of the coherence between the measurement matrix and the sparsifying matrix, for the measurement matrix Φ of size m × n with rows φ_i and the sparsifying matrix Ψ with columns ψ_j, according to the orthogonal property between Φ and Ψ, another possible measure of the mutual coherence between Φ and Ψ is expressed as follows:

μ(Φ, Ψ) = √n · max_{1 ≤ i ≤ m, 1 ≤ j ≤ n} |⟨φ_i, ψ_j⟩|, (8)

where ⟨·,·⟩ is the inner product. The literature [58] gives the corresponding definition of mutual coherence, which plays an important role in reconstructing signals using the OMP algorithm, so the OMP algorithm is used as the signal reconstruction algorithm in our experiments of Section 4.
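A minimal sketch of the column-wise mutual coherence of an equivalent dictionary, following the definition attributed to Elad above (our illustration; the test matrix is an arbitrary Gaussian stand-in):

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute normalized inner product between distinct columns of D."""
    Dn = D / np.linalg.norm(D, axis=0)   # unit-norm columns
    G = np.abs(Dn.T @ Dn)                # absolute Gram matrix
    np.fill_diagonal(G, 0.0)             # ignore self-coherence on the diagonal
    return float(G.max())

rng = np.random.default_rng(2)
mu = mutual_coherence(rng.standard_normal((64, 128)))
```

By the Cauchy–Schwarz inequality the returned value always lies in (0, 1] for a generic dictionary, with smaller values indicating columns closer to orthogonal.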

Consequently, equation (8) can denote the mutual coherence between Φ and Ψ. When the mutual coherence between Φ and Ψ is kept as small as possible, the sampling process of the measurement matrix can capture more new information that has not been represented by the known sparsifying matrix. That is to say, the incoherence property ensures, with high probability, that original signals are reconstructed successfully in CS. If the measurement matrix is designed or optimized by minimizing equation (6), most sparse signals can be successfully recovered with the optimized measurement matrix.

3.2. New Function Model

One can see that solving and minimizing equation (6) by shrinking the absolute off-diagonal entries of the Gram matrix is usually time-consuming [47], and the accuracy of the measurement matrix obtained from equation (6) alone is also insufficient. Thus, based on the original model (6), we propose to optimize equation (6) by adding a constraint term on the approximate equivalent dictionary to ensure higher accuracy of the measurement matrix, which is expressed as follows:

min_Φ ||(ΦΨ)^H ΦΨ − I||_F²  subject to  ||ΦΨ − D̂||_F² ≤ δ, (9)

where δ is the upper bound of the approximate-equivalent-dictionary term and D̂ is an estimated approximate solution of D. Equation (9) makes full use of the properties of the Gram matrix G and the equivalent dictionary D.

We can also rewrite equation (9) in another form by exchanging the penalty term and transferring the focus of equation (9) to the approximate term of the equivalent dictionary, so the following expression is obtained:

min_Φ ||ΦΨ − D̂||_F²  subject to  ||(ΦΨ)^H ΦΨ − I||_F² ≤ ε̃, (10)

where ε̃ is the upper bound of the error of the Gram matrix. Equation (10) is a constrained optimization problem. In order to solve equation (10), we need to define a new objective (loss) function that converts the constrained optimization problem into an unconstrained one. Therefore, we design a new loss function based on equation (10) by the Lagrangian method, and the original equation (9) is further transformed into the following:

F(Φ) = ||ΦΨ − D̂||_F² + λ ||(ΦΨ)^H ΦΨ − I||_F², (11)

so that the constraint becomes a penalty. For a proper choice of λ, the two problems (10) and (11) are equivalent, where λ is a weight parameter chosen empirically to balance the two terms of equation (11). F(Φ) is our proposed objective function (loss function), which is to be minimized.

Our proposed new algorithm model (11) is an unconstrained optimization problem. The regularization penalty term ||ΦΨ − D̂||_F² encodes the approximate solution of D and makes the solved measurement matrix more accurate through balancing and adjusting the value of λ. The new algorithm model (11) not only guarantees smaller mutual coherence between the measurement matrix and the sparsifying matrix through the cost ||(ΦΨ)^H ΦΨ − I||_F², but also ensures higher accuracy of the measurement matrix through the regularization penalty term. The regularization penalty term plays an important role in the new cost function (11), and the optimized measurement matrix generated by simultaneously minimizing the two terms of the new model (11) guarantees signal reconstruction with higher accuracy, which is the advantage of our proposed method of optimizing the traditional cost function (6) by adding a new regularization penalty term. The new objective function will be solved by the gradient-projection-based strategy.
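The two competing terms of the proposed loss can be evaluated directly. A minimal sketch (our illustration; λ, the matrix sizes, and the stand-in matrices are arbitrary placeholders):

```python
import numpy as np

def loss(Phi, Psi, D_hat, lam):
    """F(Phi) = ||Phi Psi - D_hat||_F^2 + lam * ||(Phi Psi)^T Phi Psi - I||_F^2."""
    D = Phi @ Psi
    n = Psi.shape[1]
    fidelity = np.linalg.norm(D - D_hat, 'fro')**2            # equivalent-dictionary term
    gram_pen = np.linalg.norm(D.T @ D - np.eye(n), 'fro')**2  # Gram-identity term
    return fidelity + lam * gram_pen

rng = np.random.default_rng(3)
Psi, _ = np.linalg.qr(rng.standard_normal((16, 16)))  # random orthogonal basis
D_hat = rng.standard_normal((8, 16))
f0 = loss(D_hat @ Psi.T, Psi, D_hat, lam=0.5)
```

At Φ = D̂Ψ^T the fidelity term vanishes (Ψ is orthogonal), so any remaining loss comes from the Gram penalty alone.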

3.3. Gradient Projection

For the two terms in our new model (11), the descent direction of this loss function is not simply the gradient direction, but the direction of the gradient projection. The new loss function of model (11) can better ensure that our algorithm approximates an optimal solution. According to the analysis of the gradient descent direction, an iterative gradient-projection-based method is used to seek the minimum value of the loss function F. Firstly, the gradient of F is expressed by the derivative of F with respect to Φ, as follows:

∇F(Φ) = 2(ΦΨ − D̂)Ψ^T + 4λ ΦΨ((ΦΨ)^T ΦΨ − I)Ψ^T. (12)

In the process of solving for Φ, Φ is updated continually at each iteration; that is, from the kth iteration to the (k + 1)th iteration, Φ_k is updated to Φ_{k+1}. To complete the update, we first choose a positive scalar parameter α_k, α_k > 0, and define the following equation:

w_k = (Φ_k − α_k ∇F(Φ_k))_+ − Φ_k, (13)

where ∇F(Φ_k) is the gradient of F at the kth iteration and Φ_k is the estimated solution of Φ at the kth iteration. (·)_+ is the positive-part operator, defined as (x)_+ = max{0, x}. At each update of Φ, our algorithm searches along the negative gradient direction −∇F(Φ_k), projects onto the nonnegative orthant, and conducts a backtracking line search until a sufficient decrease in F is attained.
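The gradient of the loss (our derivation via the chain rule, for real-valued matrices) and the projected search direction can be sketched as follows; the finite-difference check below is only a sanity test of the derivation:

```python
import numpy as np

def grad_loss(Phi, Psi, D_hat, lam):
    """Gradient of ||Phi Psi - D_hat||_F^2 + lam ||(Phi Psi)^T Phi Psi - I||_F^2 w.r.t. Phi."""
    D = Phi @ Psi
    n = Psi.shape[1]
    return 2.0 * (D - D_hat) @ Psi.T + 4.0 * lam * D @ (D.T @ D - np.eye(n)) @ Psi.T

def gp_direction(Phi, g, alpha):
    """w = (Phi - alpha * g)_+ - Phi: step along -g, then project onto the nonnegative orthant."""
    return np.maximum(Phi - alpha * g, 0.0) - Phi
```

Adding the returned direction to Φ always lands in the nonnegative orthant, which is why the algorithm splits Φ into positive and negative parts before iterating (Section 3.5).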

Then the second scalar parameter λ_k ∈ [0, 1] is defined. So, the iteration from Φ_k to Φ_{k+1} is expressed as follows:

Φ_{k+1} = Φ_k + λ_k δ_k, (14)

where

δ_k = w_k. (15)

Since the loss function (11) is approximately quadratic near the solution, the line search parameter λ_k in equation (14) can be calculated simply using the following closed-form expression:

λ_k = mid{0, −⟨δ_k, ∇F(Φ_k)⟩ / γ_k, 1}, (16)

where γ_k is the curvature term defined in equation (17), and when γ_k = 0, we set λ_k = 1.

To acquire the value of α_{k+1}, the parameter γ_k is defined as follows:

γ_k = ⟨δ_k, ∇F(Φ_k + δ_k) − ∇F(Φ_k)⟩; (17)

then α_{k+1} is obtained by the following:

α_{k+1} = mid{α_min, ||δ_k||_F² / γ_k, α_max}, (18)

where α_min and α_max are the lower bound and upper bound of α, respectively; if γ_k = 0, then α_{k+1} = α_max.

The use of the parameters λ_k and α_k reduces the possibility that F may increase at some point in the iteration process and improves the efficiency of our algorithm in searching for the global optimal solution of Φ.

3.4. New Initialization

Since local minimum solutions are likely to trap the iterations of the algorithm, solving the new cost function (11) directly by the gradient projection strategy is usually time-consuming. Therefore, in order to reduce computation time and improve efficiency, a wise initialization of Φ is of great worth: it directly determines the direction of the algorithm's search and influences its results. Therefore, based on the characteristics of the two terms of objective function (11), we put forward a new method of initializing Φ.

First, we initialize the equivalent dictionary D under the ideal assumptions that D = ΦΨ and that the Gram matrix G = D^H D equals an identity matrix I, so that the pseudo-inverse of D satisfies

D† = D^H. (19)

Then, we suppose that the pseudo-inverse is equal to the true inverse in this ideal situation, so

D^{-1} = D^H. (20)

According to x = Ψθ and the orthogonality of Ψ, we can get Ψ^{-1} = Ψ^T. Based on equation (10) and D = ΦΨ, the initial value of the equivalent dictionary is expressed as follows:

D̂ = Φ₀Ψ. (21)

Finally, we can get the initialized measurement matrix Φ₀ from D̂, so Φ₀ is expressed as follows:

Φ₀ = D̂Ψ^{-1} = D̂Ψ^T. (22)

The D̂ is unknown in equation (22), and we initialize it randomly. Because random measurement matrices are incoherent with most fixed orthogonal sparsifying matrices with large probability, as described in the Introduction, we use a Gaussian random matrix as the D̂ of equation (22) to obtain the initialized measurement matrix Φ₀.
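Under these assumptions, the initialization can be sketched as follows (our illustration; a random orthogonal matrix stands in for the DWT basis):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 64, 128
Psi, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthogonal stand-in for the DWT basis

D_hat = rng.standard_normal((m, n))   # Gaussian random initialization of D-hat
Phi0 = D_hat @ Psi.T                  # Phi_0 = D_hat Psi^{-1} = D_hat Psi^T
```

Because Ψ is orthogonal, Φ₀Ψ reproduces D̂ exactly, so the equivalent-dictionary term of the loss starts at zero and only the Gram penalty remains to be reduced.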

The above method of initializing Φ is our proposed new initialization based on the ideal Gram matrix. Our initialization takes full advantage of the coupling between the two approximate terms of the loss function (11) by assuming the two ideal situations G = I and D = ΦΨ simultaneously, which conforms to the characteristics of the objective function and enables our algorithm to converge in fewer iterations. This new initialization method allows the loss function to approach the global optimal solution with higher efficiency and larger probability.

Consequently, the Φ₀ obtained by our initialization and the fixed Ψ serve as the inputs of our algorithm to train a more optimized measurement matrix Φ. The risk of local minima, and of instability in the solution of Φ, is reduced because the initial solution already lies in the neighborhood of a desirable basin of attraction of the global optimum, and the global optimum is almost equivalent to the ideal solutions G = I and D = ΦΨ. During the operation of our algorithm, the final Φ is the estimate of these ideal solutions based on the Φ₀ of equation (22), and driving the two terms of the loss toward zero simultaneously is the goal of the algorithm's search.

3.5. Pseudocodes for Solving Measurement Matrix

The specific and detailed steps of our proposed gradient-projection algorithm for solving problem (11) to obtain an optimized measurement matrix are presented in Algorithm 1.

Algorithm 1: The algorithm process.
(1) Initialization: the number of iterations k = 0; set D̂ and Φ₀ by equations (21) and (22); choose parameters α_min and α_max, with α₀ ∈ [α_min, α_max].
(2) Calculate the search direction δ_k by equation (15).
(3) Find the line search parameter λ_k on the interval [0, 1] by equation (16).
(4) Backtracking line search to minimize F: Φ_{k+1} = Φ_k + λ_k δ_k.
(5) Update α_{k+1} by equations (17) and (18).
(6) When the algorithm converges and meets the termination condition (the decrease of the loss function falls below the threshold parameter), stop the iteration; Φ_{k+1} is the final approximate solution of Φ. Otherwise, set k = k + 1 and return to step (2) until the termination condition is met.

In our proposed algorithm, after the measurement matrix is initialized, the measurement matrix is trained and optimized by making full use of the information of the known sparsifying matrix. Through a large number of experiments (the experiments in Section 4), we tested values for the parameters in Algorithm 1 and found empirically that setting two of them to 0.35 and 0.01 performs relatively well, with α_min and α_max chosen as a very small and a very large positive constant, respectively.
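Putting the pieces together, one possible end-to-end reading of Algorithm 1 is sketched below (our reconstruction; the Barzilai–Borwein-style curvature estimate, the parameter values, and the stopping rule are placeholders, and the positive/negative splitting of equation (23) is omitted for brevity):

```python
import numpy as np

def optimize_phi(Psi, D_hat, lam=0.35, tol=1e-2, alpha0=1.0,
                 alpha_min=1e-8, alpha_max=1e4, max_iter=200):
    n = Psi.shape[1]
    I = np.eye(n)

    def f(P):                                   # loss of eq. (11)
        D = P @ Psi
        return (np.linalg.norm(D - D_hat, 'fro')**2
                + lam * np.linalg.norm(D.T @ D - I, 'fro')**2)

    def grad(P):                                # gradient of eq. (12)
        D = P @ Psi
        return 2*(D - D_hat) @ Psi.T + 4*lam * D @ (D.T @ D - I) @ Psi.T

    Phi = D_hat @ Psi.T                         # initialization, eq. (22)
    alpha = alpha0
    for _ in range(max_iter):
        g = grad(Phi)
        delta = np.maximum(Phi - alpha * g, 0.0) - Phi   # projected direction
        gamma = np.sum(delta * (grad(Phi + delta) - g))  # curvature along delta
        lam_k = 1.0 if gamma <= 0 else min(1.0, max(0.0, -np.sum(delta * g) / gamma))
        Phi_new = Phi + lam_k * delta                    # line-search update
        alpha = alpha_max if gamma <= 0 else float(np.clip(np.sum(delta**2) / gamma,
                                                           alpha_min, alpha_max))
        converged = abs(f(Phi_new) - f(Phi)) < tol       # termination condition
        Phi = Phi_new
        if converged:
            break
    norms = np.linalg.norm(Phi, axis=0)
    return Phi / np.where(norms > 0, norms, 1.0)         # column normalization
```

The returned matrix is trained against the fixed Ψ and can then be used in place of a random measurement matrix when sampling and reconstructing signals.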

It is worth noting that there may be both positive and negative elements in the measurement matrix Φ. In the iteration process of our algorithm, in order to facilitate the operation (in particular the positive-part projection), we divide Φ into a positive part U and a negative part V; then Φ can be expressed as follows:

Φ = U − V, (23)

where u_{ij} = (φ_{ij})_+ and v_{ij} = (−φ_{ij})_+ for all i, j. φ_{ij}, u_{ij}, and v_{ij} are the elements of the matrices Φ, U, and V, respectively.
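The splitting above can be sketched in two lines (our illustration with an arbitrary small matrix):

```python
import numpy as np

Phi = np.array([[1.5, -2.0],
                [0.0,  3.0]])
U = np.maximum(Phi, 0.0)    # positive part: u_ij = max(phi_ij, 0)
V = np.maximum(-Phi, 0.0)   # negative part: v_ij = max(-phi_ij, 0)
```

Both parts are elementwise nonnegative, so the projection onto the nonnegative orthant can be applied to each of them.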

Finally, after acquiring the optimized solution of Φ from the above algorithm, the columns of Φ need to be normalized to obtain the final optimized measurement matrix and to make the experimental results more balanced and stable. The normalization is expressed as follows:

φ̂_i = φ_i / ||φ_i||_2, i = 1, 2, …, n, (24)

where φ_i is the ith column of Φ.
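The column normalization is a one-liner (our illustration with an arbitrary random matrix):

```python
import numpy as np

rng = np.random.default_rng(7)
Phi = rng.standard_normal((4, 6))
Phi_hat = Phi / np.linalg.norm(Phi, axis=0)   # scale each column to unit l2 norm
```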

According to the above analysis, the measurement matrix obtained by our optimization algorithm should, in theory, have lower coherence with the specific sparsifying basis. Next, we test the performance of the optimized measurement matrix in specific experiments, compared with the conventional random measurement matrices and the classic optimization methods in the literature [47, 48].

4. Experiments

In this section, to test the performance of the optimized measurement matrices after adding the new regularization penalty term to the traditional cost function (6), we conducted numerical experiments showing the distribution of the off-diagonal elements of the Gram matrix obtained using different measurement matrices and different methods. In addition, further experiments reconstructed one-dimensional sparse signals and two-dimensional nonsparse images using the different measurement matrices and methods. All the experiments were run in MATLAB 2017b on a standard PC with a 2.20 GHz Intel Core i5 CPU and 4 GB of memory, running the Windows 10 operating system.

4.1. Off-Diagonal Elements’ Distribution of Gram Matrix

According to equation (6), making the Gram matrix as close as possible to the identity matrix equivalently minimizes the mutual coherence between Φ and Ψ. To test the incoherence performance of the measurement matrix optimized by our proposed method, we conducted experiments at m = 256 and n = 512 showing the distribution of the off-diagonal elements of the Gram matrix obtained using different measurement matrices and different methods, as shown in Figure 1. The different measurement matrices include the four conventional random measurement matrices, namely, the 0-1 binary sparse matrix, the Gaussian matrix, the Bernoulli matrix, and the partial Hadamard matrix; the different methods are Elad's method in the literature [47] and our method before and after optimizing the traditional cost function (6) by adding the regularization penalty term. The method of directly solving equation (6) by the gradient projection strategy without the regularization penalty term is denoted as "No Regularization (NoReg)" in this paper.

In the experiments, in order to facilitate counting the off-diagonal elements, the absolute values of the off-diagonal elements were rounded to two decimal places (an accuracy of 0.01), which means that there are 100 evenly distributed points from 0 to 1 and the number of off-diagonal elements is counted at each point. The known square DWT (discrete wavelet transform) matrix was used as the sparsifying dictionary Ψ. Because the measurement matrix is optimized indirectly, based on singular value decomposition, in Wang's method, we cannot directly obtain the optimized measurement matrix of Wang's method; thus, the off-diagonal element distribution of the Gram matrix for Wang's method is not provided in Figure 1. The method of No Regularization is given as a comparison to demonstrate the important role of the added regularization penalty term in the new cost function (11). Because the absolute values of the off-diagonal elements of the random matrices are very small, Figure 1 is divided into two subfigures, (a) and (b), to display the curves of all the compared matrices. In Figure 1(a), the yellow, blue, green, and cyan curves have very low values and can hardly be displayed normally, while the black, red, and magenta curves are too narrow to display well in Figure 1(b). Therefore, in order to display the curves of all colors, the two subfigures of Figure 1 are plotted at two different scales.

According to the results of Figure 1, the distributions of the off-diagonal elements of the Gram matrix for the four random measurement matrices are close and similar. Because the random matrices have not been optimized via mutual coherence minimization, their off-diagonal distributions are far worse than those of No Regularization, Elad's method, and our method. Furthermore, our method outperforms No Regularization and Elad's method in increasing the frequency of off-diagonal elements with low absolute value and reducing the frequency of large absolute values, better making the columns of D = ΦΨ as close to orthogonal as possible. Thus, the experimental results of Figure 1 show that the added regularization penalty term in the new cost function (11) plays a crucial role in improving the performance of our proposed algorithm over the traditional cost function (6), and they demonstrate the effectiveness and superiority of our gradient-projection-based algorithm in minimizing the mutual coherence between Φ and Ψ.

The following experiments test the performance of our proposed algorithm in practical signal reconstruction; the OMP algorithm was used as the CS-based signal reconstruction algorithm. The measurement value y was generated by equation (1) with white Gaussian noise of variance σ². One-dimensional sparse signals and two-dimensional nonsparse images were reconstructed with the new optimized measurement matrices generated by our method, compared with the four conventional random measurement matrices, the method of No Regularization, Elad's method [47], and Wang's method [48].

4.2. One-Dimensional Sparse Signals

For the reconstruction of one-dimensional sparse signals, the mean square error (MSE) is used to evaluate the accuracy of the reconstructed signals: the smaller the MSE, the higher the reconstruction accuracy. MSE is defined as follows:

MSE = (1/n) Σ_{i=1}^{n} (x_i − x̂_i)², (25)

where x̂ is the estimation of x.
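The MSE metric can be computed directly (our illustration):

```python
import numpy as np

def mse(x, x_hat):
    """Mean square error between the original signal and its reconstruction."""
    x = np.asarray(x, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    return float(np.mean((x - x_hat)**2))
```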

In the experiments, since the one-dimensional sparse signals are already sparse, there is no need to sparsify the original signals again, so the Ψ in the objective function (11) and equation (22) is replaced by an identity matrix of dimension n × n. With n = 512, m = 256, and K = 100, the original one-dimensional sparse signal of length 512 contains 100 randomly placed +1 and −1 spikes, and the other elements of x are zero. The measurement number is 256 and the compression rate is 0.5. For one-dimensional sparse signals, the iteration number in the OMP algorithm is set to K. The results of reconstructing one-dimensional sparse signals by the different measurement matrices and optimization methods at the compression rate of 0.5 (m = 256, n = 512) are shown in Figure 2.
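The paper uses OMP for reconstruction; a textbook OMP sketch (our illustration, not the authors' implementation) that recovers a spike signal of the kind described above in the noiseless case:

```python
import numpy as np

def omp(A, y, n_iter):
    """Orthogonal matching pursuit: greedily pick the column of A most
    correlated with the residual, then least-squares re-fit on the support."""
    support = []
    residual = y.copy()
    theta = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(n_iter):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # re-fit and update residual
    theta[support] = coef
    return theta
```

Here `A` plays the role of the equivalent dictionary ΦΨ (or Φ itself when Ψ is the identity), and `n_iter` corresponds to the iteration number discussed above.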

As shown in Figure 2, the measurement matrix optimized by our proposed method has the best performance in reconstructing one-dimensional sparse signals; the recovered signals have the lowest MSE and are closest to the original sparse signals, compared with the four conventional measurement matrices, the method of No Regularization, Elad's method, and Wang's method. Because the four conventional measurement matrices, the initialization matrix in our method, the No Regularization method, Elad's measurement matrix, and Wang's measurement matrix are all generated randomly, more experiments should be done to verify whether the measurement matrices have stable performance and to average out the randomness of the results. Thus, five experiments were run for each sparsity level in reconstructing one-dimensional sparse signals, and the average of the five MSEs was taken as the final result. According to the experimental results, the evolution of the MSE versus the sparsity level K under the different measurement matrices at a compression rate of 0.5 (m = 256, n = 512) is plotted in Figure 3.

As shown in Figure 3, the MSEs of the signals reconstructed by our method are lower than those of the other four conventional measurement matrices, the method of No Regularization, and Elad's and Wang's methods at different sparsity levels K, and our method reconstructs the sparse signals with higher precision. In addition, to test our algorithm's performance on larger data, experiments were done to reconstruct larger sparse signals of length n = 1024 with the sparsity level K set to 128. In the same way, every reported MSE is an average over 5 experiments. The relationship curves between the MSE and the measurement number m under the different measurement matrices and optimization methods are shown in Figure 4.

As shown in Figure 4, compared with the other measurement matrices, the method of No Regularization, and Elad's and Wang's methods, our method also achieves the best performance, with the lowest MSEs at different measurement numbers m. Thus, the above experimental results verify that the new measurement matrices generated by our optimization method perform better in reconstructing one-dimensional sparse signals of different data lengths and different sparsity levels under different measurement numbers.

4.3. Two-Dimensional Nonsparse Images

In this subsection, we conduct experiments to test the performance of our method in reconstructing two-dimensional images. We choose eight representative nonsparse images of size 512 × 512, covering natural scenes, persons, animals, and detailed and textured content, as shown in Figure 5. To improve the efficiency of image reconstruction and reduce the computational complexity, we measure and reconstruct each 512 × 1 column of the original images separately at the compression rate 0.25 (m = 128, N = 512), and the 512 reconstructed columns are then assembled back into a full 512 × 512 reconstructed image. This column-by-column scanning was also used in Wang’s method [48]; it not only reduces the problem size but also eliminates the obvious block mosaics that 2D block-based scanning produces in the reconstructed images, as described in the literature [59].
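The column-by-column pipeline described above can be sketched as follows. The `recover` callback is a hypothetical stand-in for whatever CS solver is actually used (e.g., OMP with a DWT basis); a plain least-squares fit is substituted below only to exercise the plumbing, not as a real CS recovery.

```python
import numpy as np

def reconstruct_image(Phi, img, recover):
    """Column-by-column CS pipeline: sample and recover each N x 1 column,
    then stack the recovered columns back into a full image."""
    cols = []
    for j in range(img.shape[1]):
        y = Phi @ img[:, j]           # m measurements of one column
        cols.append(recover(Phi, y))  # N x 1 reconstruction of that column
    return np.column_stack(cols)

# toy check with a least-squares "solver" (not a real CS recovery)
rng = np.random.default_rng(0)
m, N = 128, 512
Phi = rng.standard_normal((m, N)) / np.sqrt(m)
img = rng.standard_normal((N, 8))  # 8 columns stand in for a 512-column image
ls = lambda A, y: np.linalg.lstsq(A, y, rcond=None)[0]
out = reconstruct_image(Phi, img, ls)
print(out.shape)
```

Because each column is an independent m-measurement problem, the per-column solves can also be run in parallel.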

For two-dimensional nonsparse images, the iteration number in the OMP algorithm is set to the measurement number m. The peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) are used as the evaluation indexes of image quality, which are defined as follows:

PSNR = 10 · log10(MAX² / MSE),  MSE = (1/(HW)) Σᵢ Σⱼ (X(i, j) − X̂(i, j))²,

SSIM = ((2 μ_X μ_X̂ + C₁)(2 σ_XX̂ + C₂)) / ((μ_X² + μ_X̂² + C₁)(σ_X² + σ_X̂² + C₂)),

where X is the original image and X̂ is the reconstructed image. H and W are the height and the width of the image, respectively. μ_X and μ_X̂ are, respectively, the mean values of X and X̂. σ_X², σ_X̂², and σ_XX̂ are the variances and covariance of X and X̂, respectively. C₁ = (k₁L)² and C₂ = (k₂L)² are constants that maintain stability, with k₁ = 0.01 and k₂ = 0.03. MAX (equal to L) is the dynamic range of the pixel greyscale; for 8-bit images, MAX = 255.
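Under the definitions above, both indexes can be computed directly. The sketch below evaluates SSIM over a single global window (the usual implementation averages SSIM over local windows, so its numbers differ) and assumes 8-bit images with L = MAX = 255.

```python
import numpy as np

def psnr(x, x_hat, max_val=255.0):
    """PSNR = 10 log10(MAX^2 / MSE) over the whole image."""
    mse = np.mean((x.astype(float) - x_hat.astype(float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM, straight from the mean/variance/covariance form."""
    x, y = x.astype(float), y.astype(float)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# a uniform error of 1 grey level gives MSE = 1, so PSNR = 20 log10(255)
a = np.zeros((4, 4))
b = np.ones((4, 4))
print(psnr(a, b))          # about 48.13 dB
z = np.arange(16.0).reshape(4, 4)
print(ssim_global(z, z))   # identical images give SSIM = 1
```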

It is worth noting that, in the reconstruction of one-dimensional sparse signals, the sparsifying basis is an identity matrix. However, for the reconstruction of two-dimensional nonsparse images, since the images are not sparse, the square DWT matrix with 5-level wavelet is used as the sparsifying basis in our algorithm. The performance of the optimized measurement matrices in reconstructing images will be analyzed according to the following experimental results.
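As a minimal sketch of such a square sparsifying basis, the following builds an orthonormal multi-level Haar DWT matrix. Haar is our own simplifying assumption here, since the paper does not state which wavelet family its 5-level DWT uses; the construction pattern (one averaging/detail stage per level, applied to the approximation coefficients of the previous level) is the same for other orthogonal wavelets.

```python
import numpy as np

def haar_matrix(N, levels):
    """Orthonormal k-level Haar DWT matrix W, so W @ x gives the
    wavelet coefficients of x.  Requires N divisible by 2**levels."""
    W = np.eye(N)
    n = N
    for _ in range(levels):
        # single-level transform on the current n approximation coefficients
        T = np.zeros((n, n))
        for i in range(n // 2):
            T[i, 2 * i] = T[i, 2 * i + 1] = 1 / np.sqrt(2)      # averages
            T[n // 2 + i, 2 * i] = 1 / np.sqrt(2)               # details
            T[n // 2 + i, 2 * i + 1] = -1 / np.sqrt(2)
        B = np.eye(N)
        B[:n, :n] = T       # leave already-computed detail bands untouched
        W = B @ W
        n //= 2
    return W

W = haar_matrix(64, 3)
# W is orthonormal, so Psi = W.T serves as the sparsifying basis:
# a piecewise-constant signal has only coarse-scale nonzero coefficients
print(np.count_nonzero(np.abs(W @ np.ones(64)) > 1e-9))
```

A constant signal of length 64 keeps only the 8 coarsest approximation coefficients after 3 levels, which is exactly the sparsity the DWT basis is meant to expose.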

Table 1 shows the PSNRs and SSIMs of the eight reconstructed images under the different measurement matrices and optimization methods at the compression rate of 0.25 (m = 128). All the data in Table 1 are averages over five experiment runs. Table 2 gives the per-run statistics for reconstructing the image Building, together with the means and variances of the PSNRs and SSIMs.


Images | Sparse random | Gaussian | Bernoulli | Part Hadamard | NoReg | Elad’s | Wang’s | Optimized matrix
Peppers | 21.2336/0.33377 | 21.2234/0.3352 | 20.9420/0.32786 | 21.4817/0.34782 | 21.4106/0.33903 | 21.4965/0.34668 | 22.3604/0.4139 | 24.2672/0.48002
Boats | 20.1211/0.32113 | 20.0598/0.31796 | 20.1037/0.32355 | 20.4870/0.34079 | 21.1489/0.32162 | 20.3158/0.33196 | 21.1873/0.3976 | 23.3487/0.48302
Building | 24.0236/0.43797 | 24.4098/0.44937 | 24.0987/0.44321 | 24.7301/0.47496 | 24.2669/0.44778 | 24.8264/0.47467 | 26.1096/0.57190 | 27.2533/0.58487
Cameraman | 21.8334/0.43157 | 21.5152/0.41978 | 21.6693/0.41437 | 22.0669/0.46524 | 21.6392/0.42525 | 22.1898/0.45114 | 22.7035/0.52447 | 25.8632/0.62734
Barbara | 20.1602/0.42315 | 20.4094/0.43275 | 20.0468/0.41262 | 20.5244/0.44034 | 20.3292/0.43005 | 20.4563/0.4436 | 21.8251/0.5315 | 23.6868/0.6042
Mandrill | 16.4434/0.20460 | 16.4298/0.20306 | 16.4234/0.20594 | 16.6225/0.21432 | 16.6581/0.21474 | 16.4741/0.20784 | 16.9468/0.2518 | 18.1533/0.30182
Goldhill | 20.2144/0.29227 | 20.1329/0.28863 | 20.1679/0.29005 | 20.5636/0.31045 | 20.3922/0.30052 | 20.4659/0.30194 | 21.2499/0.37623 | 23.4258/0.46419
Fingerprint | 15.6476/0.25184 | 15.5595/0.24256 | 15.5394/0.24149 | 15.7223/0.25411 | 15.6447/0.24709 | 15.6734/0.25524 | 17.0946/0.38497 | 17.8569/0.43580


Times | Sparse random | Gaussian | Bernoulli | Part Hadamard | NoReg | Elad’s | Wang’s | Optimized matrix
1 | 24.2901/0.44622 | 24.3995/0.44873 | 24.0885/0.43630 | 24.6580/0.47171 | 24.5528/0.46479 | 24.7666/0.47067 | 26.3642/0.58124 | 26.9759/0.54479
2 | 23.5791/0.41832 | 24.4172/0.45150 | 24.1424/0.44034 | 24.3653/0.45435 | 24.5539/0.45525 | 24.8598/0.47935 | 25.9501/0.56853 | 27.0524/0.58232
3 | 24.0906/0.44499 | 24.5570/0.45856 | 24.0547/0.44028 | 24.8548/0.47995 | 24.1901/0.44146 | 24.9355/0.47619 | 25.9409/0.56286 | 27.2789/0.58696
4 | 24.0453/0.43432 | 24.0803/0.43536 | 24.3196/0.44327 | 24.6074/0.46906 | 23.8855/0.43135 | 24.9349/0.48395 | 26.2562/0.56896 | 27.2887/0.59844
5 | 24.1129/0.44600 | 24.5950/0.45270 | 23.8883/0.45586 | 25.1650/0.49973 | 24.1520/0.44606 | 24.6354/0.46319 | 26.0366/0.57790 | 27.6707/0.61185
Means | 24.0236/0.43797 | 24.4098/0.44937 | 24.0987/0.44321 | 24.7301/0.47496 | 24.2669/0.44778 | 24.8264/0.47467 | 26.1096/0.57190 | 27.2533/0.58487
Variances | 0.0704/1.4532e-4 | 0.0412/7.4193e-5 | 0.0243/5.6149e-5 | 0.0895/2.7715e-4 | 0.0822/1.6450e-4 | 0.0162/6.4530e-5 | 0.0364/5.6239e-5 | 0.0734/6.3234e-4

Table 2 shows that, across the five experiment runs, the variances of the PSNRs and SSIMs are very small, so the results of every run are stable with little fluctuation. Moreover, in each run our method has an obvious advantage and effectively improves the PSNRs and SSIMs of the reconstructed images. Five experiment runs can therefore objectively reflect the actual performance of our algorithm and confirm that our method performs stably and well.

To show the improvement achieved by our method visually, Figure 6 presents the images of Building actually reconstructed in one of the five experiment runs under the different measurement matrices and methods, with the local areas of Building marked by the red box magnified.

From the quantitative results in Tables 1 and 2 and the images in Figure 6, our method generally performs better, and the image it reconstructs is more visually pleasing at the compression rate of 0.25 (m = 128), than the four conventional measurement matrices, the method of No Regularization, and Elad’s and Wang’s methods. As Tables 1 and 2 show, the PSNRs increase by about 1 to 3 dB and the SSIMs by about 0.01 to 0.2. Moreover, in Figure 6 the images reconstructed by the four conventional measurement matrices suffer from severe noise and blurred contours and edges, and those reconstructed by the method of No Regularization and by Elad’s and Wang’s methods also show some distortion and considerable noise, while the images reconstructed by our method have clearly better reconstruction accuracy and visual quality.

To further demonstrate the effectiveness of our method in reconstructing nonsparse images at a higher sampling rate, the person image Cameraman is reconstructed at the compression rate of 0.5 (m = 256). Figure 7 shows the reconstruction results for Cameraman at m = 256 under the different measurement matrices and methods, with the local detailed areas marked by the red box again magnified.

As shown in Figure 7, compared with the four conventional measurement matrices, the image Cameraman reconstructed by our method is closer to the original and has much higher reconstruction quality and accuracy, with lower noise and fewer visual artifacts; it also shows higher contrast and richer detail, especially in the enlarged area. Compared with the method of No Regularization and Elad’s and Wang’s methods, our method likewise raises the PSNR and SSIM of the reconstructed image, reduces the reconstruction errors, and yields an image closer to the original with better visibility of the details. These experiments show that our algorithm remains superior to the other conventional methods at the higher sampling rate of 0.5.

In addition, compressive sensing is widely applied in biomedical image processing, for example to magnetic resonance imaging (MRI). To verify that our method generalizes to different types and sizes of images and also performs well on MRI data, we conducted experiments on smaller data, reconstructing the MRI image Brain of size 64 × 64 with our method at compression rates of 0.25 and 0.5 (m = 16 and 32, N = 64); the DWT matrix with a 3-level wavelet was used in these experiments. The experimental results are shown in Figures 8 and 9.

As shown in Figure 8, at the compression rate of 0.25, our method greatly improves the reconstruction quality of the MRI image, with PSNRs rising by about 3 to 4 dB and SSIMs by about 0.05 to 0.15 compared with the other seven methods. The other seven methods can hardly reconstruct the contour of the brain, while our method recovers its basic contour and shape. In Figure 9, image (h) is closest to the original image (a), while the other seven images show more visual artifacts and distortion; Figure 9(h), reconstructed by our optimized measurement matrix, also has the highest PSNR and SSIM, with more clarity and less noise, and gives the most visually comfortable result. These experiments show that our method also outperforms the other seven conventional methods in reconstructing smaller MRI images.

5. Analysis and Discussion

The above experiments have verified the superior performance of our new algorithm in optimizing the measurement matrix, through the reconstruction of one-dimensional sparse signals and two-dimensional nonsparse images under different sparsity levels and measurement numbers. Our proposed method makes the element distribution of the Gram matrix closer to the identity matrix than the other methods do. In addition, comparing the experimental results of the new loss function (11) with and without the regularization penalty term shows that, after augmenting the traditional cost function (6) with this term, our method reconstructs signals and images more precisely than directly solving cost function (6) under different sparsity levels and measurement numbers. The new cost-function model (11) with the added regularization penalty term therefore plays a crucial role in our algorithm and gives the measurement matrix smaller coherence with the sparsifying basis, which demonstrates the effectiveness of adding the regularization penalty term to the traditional cost function.
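The coherence quantity being minimized can be measured directly from the Gram matrix. The sketch below assumes the usual definition of mutual coherence as the largest absolute off-diagonal entry of the Gram matrix of the column-normalized equivalent dictionary D = ΦΨ; values near 0 mean the Gram matrix is close to the identity, values near 1 mean two dictionary columns are nearly parallel.

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """Largest absolute off-diagonal entry of the Gram matrix of the
    column-normalized equivalent dictionary D = Phi @ Psi."""
    D = Phi @ Psi
    D = D / np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm columns
    G = D.T @ D                                       # Gram matrix
    off = G - np.diag(np.diag(G))                     # zero the diagonal
    return float(np.max(np.abs(off)))

# example: random Gaussian Phi against an identity sparsifying basis
rng = np.random.default_rng(0)
Phi = rng.standard_normal((128, 512))
print(mutual_coherence(Phi, np.eye(512)))
```

An optimized measurement matrix should drive this number down toward the Welch lower bound for the given matrix dimensions, which is the behavior the Gram-matrix plots in the paper visualize.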

Our optimized measurement matrices perform well in extracting information from original signals and restoring it from the sampled signals, so our new method can reconstruct original signals more precisely and with higher probability. The main contributions of this paper are as follows:

(i) The new algorithm model (11) of the cost function is proposed for seeking a measurement matrix that is least coherent with the sparsifying basis, with the added regularization penalty term theoretically ensuring the accuracy of solving for the measurement matrix.

(ii) A new measurement matrix initialization method is designed based on the characteristics of the proposed model (11). The new initialization makes the model easier to solve by reducing the risk that the solution falls into local minima or becomes unstable.

(iii) Because the measurement matrix and the sparsifying basis are coupled, solving model (6) is usually time-consuming. Our method provides a practical way to solve the new mathematical optimization problem and to construct a better-optimized measurement matrix.

(iv) Owing to these characteristics, our algorithm can find a solvable measurement matrix adapted to the corresponding fixed sparsifying basis, giving it good flexibility and adaptability in reconstructing CS-based signals.

In addition, two important matters are worth discussing:

(i) First, the amount of information the measurement matrix extracts from the original signals increases with the number of measurements, which makes the original signals easier to recover because more valuable information is available. This is why the improvement in PSNR and SSIM for two-dimensional images achieved by our method is smaller at the compression rate 0.5 than at 0.25. The trade-off between compression rate and recovery accuracy must therefore be considered fully when measuring and recovering signals.

(ii) Second, small mutual coherence between the measurement matrix and the sparsifying basis ensures that the original signals can be recovered successfully with high probability. However, incoherence is only a necessary condition and one of many factors (the reconstruction algorithm, the sparsifying transform, the original signal itself, and so on) that affect the final reconstruction accuracy, and the main work of this paper is to optimize only the measurement matrix; incoherence therefore guarantees successful, precise reconstruction not with a probability of 100% but only with large probability. Likewise, the higher frequency of off-diagonal elements with low absolute value in Figure 1 implies a higher, not certain, probability of accurate reconstruction, and reconstruction results retain some randomness related to the original signal itself. Consequently, the experimental results of Figures 2–9 and Tables 1 and 2 need not exactly conform to the distribution in Figure 1; nevertheless, our algorithm outperforms the traditional methods and reconstructs the signals successfully with much higher probability in these experiments.

6. Conclusions

In this paper, a gradient projection based strategy is used to solve our proposed algorithm model, together with a new method of initializing the measurement matrix, for compressively sensed signal reconstruction; this offers a new way of acquiring optimized measurement matrices by minimizing the mutual coherence between the measurement matrix and the sparsifying basis. The theoretical analysis and the experimental results show that our measurement matrix optimization method performs well and outperforms the conventional random measurement matrices and several existing optimization methods. Both one-dimensional sparse signals and two-dimensional nonsparse images can be reconstructed by our optimized measurement matrices with less error, higher accuracy, and better quality, and the algorithm also performs well in reconstructing MRI images. We hope the new algorithm proposed here encourages wider exploration in this direction and proves useful in the broader field of signal processing. Further research on the incoherence property and on measurement matrix optimization in compressive sensing is worthwhile, and extending our research to broader applications is the focus of our future work.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Authors’ Contributions

Ziran Wei made significant contributions to this study regarding conception, methodology and analysis, and writing the manuscript. Jianlin Zhang contributed much to the research through his general guidance and advice and optimized the paper. Zhiyong Xu made research plans and managed the study project. Yong Liu gave lots of advice and helped in conducting experiments.

Acknowledgments

The authors are grateful to the Fraunhofer Institute for Computer Graphics Research IGD of Germany for full support. This research was funded by the National High Technology Research and Development Program of China (863 Program, grant no. G158207), the West Light Foundation for Innovative Talents (grant no. YA18K001), and the UCAS-Fraunhofer Joint Training Ph.D. Scholarship.

References

  1. D. L. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
  2. S. Qaisar, R. M. Bilal, W. Iqbal, M. Naureen, and S. Lee, “Compressive sensing: from theory to applications, a survey,” Journal of Communications and Networks, vol. 15, no. 5, pp. 443–456, 2013.
  3. E. Candès, “Compressive sampling,” in Proceedings of the International Congress of Mathematicians, vol. 3, Madrid, Spain, 2006.
  4. Y. Wang, X. Li, K. Xu, F. Ren, and H. Yu, “Data-driven sampling matrix boolean optimization for energy-efficient biomedical signal acquisition by compressive sensing,” IEEE Transactions on Biomedical Circuits and Systems, vol. 11, no. 2, pp. 255–266, 2017.
  5. B. Liu and Z. Zhang, “Quantized compressive sensing for low-power data compression and wireless telemonitoring,” IEEE Sensors Journal, vol. 16, no. 23, pp. 8206–8213, 2016.
  6. E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Communications on Pure and Applied Mathematics, vol. 59, no. 8, pp. 1207–1223, 2006.
  7. F.-Y. Wu, K. Yang, and Z. Yang, “Compressed acquisition and denoising recovery of EMGdi signal in WSNs and IoT,” IEEE Transactions on Industrial Informatics, vol. 14, no. 5, pp. 2210–2219, 2018.
  8. H. Boche, R. Calderbank, G. Kutyniok, and J. Vybiral, Compressed Sensing and its Applications, Springer, Berlin, Germany, 2015.
  9. M. F. Duarte, M. A. Davenport, D. Takhar et al., “Single-pixel imaging via compressive sampling,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 83–91, 2008.
  10. Z. Wei, J. Zhang, Z. Xu et al., “Improving the signal-to-noise ratio of super resolution imaging based on single pixel camera,” IEEE Photonics Journal, vol. 11, no. 1, pp. 1–16, 2019.
  11. A. C. Sankaranarayanan, C. Studer, and R. G. Baraniuk, “CS-MUVI: video compressive sensing for spatial multiplexing cameras,” in Proceedings of the IEEE International Conference on Computational Photography, pp. 1–10, Seattle, WA, USA, April 2012.
  12. N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica, vol. 1, no. 5, pp. 285–289, 2014.
  13. M. P. Edgar, G. M. Gibson, R. W. Bowman et al., “Simultaneous real-time visible and infrared video with single-pixel detectors,” Scientific Reports, vol. 5, p. 10669, 2015.
  14. C. Chen, E. W. Tramel, and J. E. Fowler, “Compressed-sensing recovery of images and video using multihypothesis predictions,” in Proceedings of the 45th Asilomar Conference on Signals, Systems and Computers (ASILOMAR), Pacific Grove, CA, USA, November 2011.
  15. J. Zhang, Y. Yin, and Y. Yin, “Adaptive compressed sensing for wireless image sensor networks,” Multimedia Tools and Applications, vol. 76, pp. 1–16, 2017.
  16. Y. Gan, Z. Tong, and H. Chu, “Image classification with a deep network model based on compressive sensing,” in Proceedings of the International Conference on Signal Processing, Hangzhou, China, October 2015.
  17. D. Milioris, “Classification in Twitter via compressive sensing,” in Proceedings of the Computer Communications Workshops, IEEE, Hong Kong, China, May 2015.
  18. G. J. J. Warmerdam, R. Vullings, L. Schmitt, J. O. E. H. Van Laar, and J. W. M. Bergmans, “Hierarchical probabilistic framework for fetal R-peak detection, using ECG waveform and heart rate information,” IEEE Transactions on Signal Processing, vol. 66, no. 16, pp. 4388–4397, 2018.
  19. H. Kim and S. Y. Chun, “Cancelable ECG biometrics using compressive sensing-generalized likelihood ratio test,” IEEE Access, vol. 7, pp. 9232–9242, 2019.
  20. S. K. Jain and B. Bhaumik, “An energy efficient ECG signal processor detecting cardiovascular diseases on smartphone,” IEEE Transactions on Biomedical Circuits and Systems, vol. 11, no. 2, pp. 314–323, 2017.
  21. J. Zhang, J. Teng, and Y. Bai, “Improving sparse compressed sensing medical CT image reconstruction,” Automatic Control and Computer Sciences, vol. 53, no. 3, pp. 281–289, 2019.
  22. K. Haris, E. Hedström, F. Kording et al., “Free-breathing fetal cardiac MRI with Doppler ultrasound gating, compressed sensing, and motion compensation,” Journal of Magnetic Resonance Imaging, vol. 51, no. 1, pp. 260–272, 2019.
  23. M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: the application of compressed sensing for rapid MR imaging,” Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182–1195, 2007.
  24. H. Jung, K. Sung, K. S. Nayak, E. Y. Kim, and J. C. Ye, “k-t FOCUSS: a general compressed sensing framework for high resolution dynamic MRI,” Magnetic Resonance in Medicine, vol. 61, no. 1, pp. 103–116, 2009.
  25. J. Zhang, Z. Gu, Z. L. Yu, and Y. Li, “Energy-efficient ECG compression on wireless biosensors via minimal coherence sensing and weighted ℓ1 minimization reconstruction,” IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 2, pp. 520–528, 2015.
  26. D. Craven, B. Mcginley, L. Kilmartin, M. Glavin, and E. Jones, “Energy-efficient compressed sensing for ambulatory ECG monitoring,” Computers in Biology and Medicine, vol. 71, pp. 1–13, 2016.
  27. E. J. Candès and T. Tao, “Near-optimal signal recovery from random projections: universal encoding strategies?” IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
  28. Q. Wang, D. Li, and Y. Shen, “Intelligent nonconvex compressive sensing using prior information for image reconstruction by sparse representation,” Neurocomputing, vol. 224, pp. 71–81, 2017.
  29. F.-Y. Wu, F. Tong, and Z. Yang, “EMGdi signal enhancement based on ICA decomposition and wavelet transform,” Applied Soft Computing, vol. 43, pp. 561–571, 2016.
  30. R. Baraniuk, “Compressive sensing lecture notes,” IEEE Signal Processing Magazine, vol. 24, no. 4, pp. 118–121, 2007.
  31. M. Mangia, R. F. Pareschi, and G. Setti, “Adapted compressed sensing: a game worth playing,” IEEE Circuits and Systems Magazine, vol. 20, no. 1, pp. 40–60, 2020.
  32. N. Cleju, “Optimized projections for compressed sensing via rank-constrained nearest correlation matrix,” Applied and Computational Harmonic Analysis, vol. 36, no. 3, pp. 495–507, 2014.
  33. J. Xu, Y. Pi, and Z. Cao, “Optimized projection matrix for compressive sensing,” EURASIP Journal on Advances in Signal Processing, vol. 2010, no. 1, p. 560349, 2010.
  34. D. Needell and R. S. Vershynin, “Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit,” IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 310–316, 2010.
  35. J. Wang, S. Kwon, and B. Shim, “Generalized orthogonal matching pursuit,” IEEE Transactions on Signal Processing, vol. 60, pp. 6202–6216, 2012.
  36. Z. Xu, “The performance of orthogonal multi-matching pursuit under the restricted isometry property,” Journal of Computational Mathematics, vol. 33, no. 5, pp. 495–516, 2015.
  37. J. Wang and B. Shim, “Exact recovery of sparse signals using orthogonal matching pursuit: how many iterations do we need?” IEEE Transactions on Signal Processing, vol. 64, no. 16, pp. 4194–4202, 2016.
  38. Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, “Orthogonal matching pursuit: recursive function approximation with applications to wavelet decomposition,” in Proceedings of the 27th Annual Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, November 1993.
  39. S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM Review, vol. 43, no. 1, pp. 129–159, 2001.
  40. S.-J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, “An interior-point method for large-scale ℓ1-regularized least squares,” IEEE Journal of Selected Topics in Signal Processing, vol. 1, no. 4, pp. 606–617, 2007.
  41. Z. Wei, J. Zhang, Z. Xu, Y. Huang, Y. Liu, and X. Fan, “Gradient projection with approximate L0 norm minimization for sparse reconstruction in compressed sensing,” Sensors, vol. 18, no. 10, p. 3373, 2018.
  42. T. Blumensath and M. E. Davies, “Iterative hard thresholding for compressed sensing,” Applied and Computational Harmonic Analysis, vol. 27, no. 3, pp. 265–274, 2009.
  43. K. Bredies and D. A. Lorenz, “Iterative soft-thresholding converges linearly,” Journal of Fourier Analysis and Applications, vol. 14, pp. 813–837, 2008.
  44. V. Abolghasemi, S. Ferdowsi, B. Makkiabadi et al., “On optimization of the measurement matrix for compressive sensing,” in Proceedings of the 18th European Signal Processing Conference, pp. 427–431, Aalborg, Denmark, August 2010.
  45. E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
  46. C. Lu, H. Li, and Z. Lin, “Optimized projections for compressed sensing via direct mutual coherence minimization,” Signal Processing, vol. 151, pp. 45–55, 2018.
  47. M. Elad, “Optimized projections for compressed sensing,” IEEE Transactions on Signal Processing, vol. 55, no. 12, pp. 5695–5702, 2007.
  48. Q. Wang and G. Qu, “Restricted isometry constant improvement based on a singular value decomposition-weighted measurement matrix for compressed sensing,” IET Communications, vol. 11, no. 11, pp. 1706–1718, 2017.
  49. V. Tiwari, P. P. Bansod, and A. Kumar, “Designing sparse sensing matrix for compressive sensing to reconstruct high resolution medical images,” Cogent Engineering, vol. 2, no. 1, p. 1017244, 2015.
  50. A. Aziz, K. Singh, W. Osamy, and A. M. Khedr, “Optimising compressive sensing matrix using Chicken Swarm Optimisation algorithm,” IET Wireless Sensor Systems, vol. 9, no. 5, pp. 306–312, 2019.
  51. V. Abolghasemi, S. S. Ferdowsi, and S. Sanei, “A gradient-based alternating minimization approach for optimization of the measurement matrix in compressive sensing,” Signal Processing, vol. 92, no. 4, pp. 999–1009, 2012.
  52. Y. J. Chen, Q. Zhang, Y. Luo et al., “Measurement matrix optimization for ISAR sparse imaging based on genetic algorithm,” IEEE Geoscience and Remote Sensing Letters, vol. 13, no. 12, pp. 1875–1879, 2016.
  53. E. Candès and J. Romberg, “Sparsity and incoherence in compressive sampling,” Inverse Problems, vol. 23, no. 3, pp. 969–985, 2007.
  54. Z. Wei, Z. J. Zhang, and Y. Liu, “Optimization methods of compressively sensed image reconstruction based on single-pixel imaging,” Applied Sciences, vol. 10, no. 9, p. 3288, 2020.
  55. S. Foucart and H. Rauhut, “A mathematical introduction to compressive sensing,” in Applied and Numerical Harmonic Analysis, J. J. Benedetto, Ed., Springer, New York, NY, USA, 2013.
  56. D. L. Donoho and M. Elad, “Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization,” Proceedings of the National Academy of Sciences, vol. 100, no. 5, pp. 2197–2202, 2003.
  57. J. M. Duarte-Carvajalino and G. Sapiro, “Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization,” IEEE Transactions on Image Processing, vol. 18, no. 7, pp. 1395–1408, 2009.
  58. D. Donoho and Y. Tsaig, “Extensions of compressed sensing,” Signal Processing, vol. 86, no. 3, pp. 533–548, 2006.
  59. J. E. Fowler, S. Mun, and E. W. Tramel, Block-Based Compressed Sensing of Images and Video, Now Publishers Inc., Boston, MA, USA, 2012.

Copyright © 2020 Ziran Wei et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

