Abstract

Distributed Compressed Sensing (DCS) is an important research area of compressed sensing (CS). This paper aims at solving the DCS problem based on the mixed support model. In solving this problem, the previously proposed greedy pursuit algorithms easily fall into suboptimal solutions. In this paper, an intelligent grey wolf optimizer (GWO) algorithm called DCS-GWO is proposed by combining GWO and the $p$-thresholding algorithm. In DCS-GWO, the grey wolves' positions are initialized by using the $p$-thresholding algorithm and updated by using the idea of GWO. Inheriting the global search ability of GWO, DCS-GWO is efficient in finding the global optimum solution. The simulation results illustrate that DCS-GWO achieves better recovery performance than previous greedy pursuit algorithms at the expense of higher computational complexity.

1. Introduction

Compressed sensing (CS) [1, 2] is a new signal sampling theory which breaks through the limit of the Nyquist sampling theorem. If there are no more than $K$ nonzero entries in the signal $\mathbf{x} \in \mathbb{R}^N$, $\mathbf{x}$ is called a $K$-sparse signal and the sparsity of $\mathbf{x}$ is $K$. If $\mathbf{x}$ is sparse, it can be recovered from far fewer samples than Nyquist sampling requires. We can get the measurement signal $\mathbf{y} \in \mathbb{R}^M$ by projecting $\mathbf{x}$ onto the measurement matrix $\mathbf{\Phi} \in \mathbb{R}^{M \times N}$, that is, $\mathbf{y} = \mathbf{\Phi}\mathbf{x}$, where $M < N$. Because $M < N$, it is an NP-hard problem to recover $\mathbf{x}$ from $\mathbf{y}$ directly. However, if $M$ is large enough and $\mathbf{\Phi}$ satisfies the Restricted Isometry Property (RIP) of order $2K$, $\mathbf{x}$ can be perfectly recovered. The Gaussian random matrix [1], partial Fourier matrix [3], Bernoulli random matrix [2], and so on can be used as the measurement matrix. Greedy pursuit algorithms [4–7], $\ell_1$-minimization algorithms [8–10], and intelligent optimization algorithms [11–13] have been proposed to recover $\mathbf{x}$ from $\mathbf{y}$.
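To make the sampling model concrete, the following minimal sketch (in Python with NumPy, used here purely for illustration; the dimensions $N$, $M$, $K$ are arbitrary choices, not values from this paper) generates a $K$-sparse signal and its compressed measurement:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 8          # signal length, measurement number, sparsity

# K-sparse signal: K nonzero entries at random positions
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# Gaussian measurement matrix with unit-norm columns
Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=0)

y = Phi @ x                   # measurement: M samples instead of N
print(y.shape)                # (64,)
```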

CS theory exploits only intrasignal correlation, which makes it inefficient in dealing with multiple signals. An expanded version of CS, Distributed Compressed Sensing (DCS) [14, 15], is proposed, which can exploit not only intrasignal correlation but also intersignal correlation. With proper joint recovery algorithms, the number of measurements needed in DCS can be further reduced. In this paper, we call the problem of jointly recovering the signals the DCS problem. Several joint sparse models (JSM) and corresponding joint recovery algorithms have been proposed to solve the DCS problem. The One-Step Greedy Algorithm (OSGA) [15] is proposed to solve the DCS problem based on JSM-1. Greedy pursuit algorithms, including Simultaneous Orthogonal Matching Pursuit (SOMP) [15], Simultaneous Iterative Hard Thresholding (SIHT) [16], and Simultaneous Hard Thresholding Pursuit (SHTP) [16], are proposed to solve the DCS problem based on JSM-2. In [17, 18], two intelligent optimization algorithms based on particle swarm optimization and simulated annealing are proposed to solve the DCS problem based on JSM-2. However, as analyzed in Section 2.2, JSM-1 and JSM-2 are stringent in their description of signal correlation, which makes them reflect fewer intersignal and intrasignal correlations. In [19], JSM-3 is proposed. As a generalization of JSM-1 and JSM-2, it can reflect more intersignal and intrasignal correlations. This paper focuses on solving the DCS problem based on JSM-3. We notice that Joint Subspace Pursuit (Joint-SP) [20], Joint Orthogonal Matching Pursuit (Joint-OMP) [20], Sparsity Adaptive Matching Pursuit for DCS (DCS-SAMP) [21], and Forward-Backward Pursuit for DCS (DCS-FBP) [22] have been proposed to solve the DCS problem based on JSM-3. However, as greedy pursuit algorithms, they easily fall into suboptimal solutions.

The GWO algorithm [23] was proposed by Mirjalili et al. in 2014; it simulates the hunting behavior and leadership hierarchy of grey wolves. As apex predators, grey wolves normally live and hunt in a pack of 5–12 wolves. As an optimization algorithm, GWO has a probability of accepting a less optimal solution, which helps it avoid getting stuck in local optima. Because of this, GWO has drawn much attention for solving optimization problems and NP-hard problems. Kumar et al. [24] apply GWO to system reliability optimization. Mirjalili [25], Hassanin et al. [26], and Sánchez [27] adopt GWO to train Artificial Neural Networks (ANN). Muangkote et al. [28] propose an improved GWO algorithm and apply it to training $q$-Gaussian Radial Basis Functional-link nets (qRBFLNs) neural networks. Li et al. [29] propose a modified discrete GWO algorithm to solve the image segmentation problem. Emary et al. [30] propose a binary GWO algorithm and use it to select optimal features for classification. In these areas, GWO performs better than or comparably to other prevailing nature-inspired optimization algorithms [31–35].

In solving the DCS problem based on JSM-3, greedy pursuit algorithms easily fall into suboptimal solutions. From the above analysis of GWO, we know that it is a recently proposed global search optimization algorithm whose performance is superior or comparable to that of other prevailing algorithms on many optimization problems. As illustrated in Section 3, the DCS problem based on JSM-3 can be modeled as an optimization problem. These reasons motivate us to exploit GWO to solve the DCS problem based on JSM-3.

In this paper, an intelligent grey wolf optimizer (GWO) [23] algorithm called DCS-GWO is proposed to solve the DCS problem based on JSM-3. DCS-GWO is essentially a GWO algorithm in which the grey wolves' positions are initialized by using the $p$-thresholding algorithm [36] and updated by using the strategy of GWO. Inheriting the global search ability of GWO, DCS-GWO has better recovery performance than previous greedy pursuit algorithms at the expense of higher computational complexity.

The remainder of this paper is organized as follows. In Section 2, we introduce the related background knowledge, including the DCS model, the joint sparse models (JSM), the grey wolf optimizer (GWO) algorithm, and the $p$-thresholding algorithm. In Section 3, we introduce the DCS-GWO algorithm. In Section 4, we provide the simulation results. Conclusions are stated in Section 5.

We use the following notations in this paper. Lowercase boldface letters denote vectors, and uppercase boldface letters denote matrices. For the vector $\mathbf{x}$, $\|\mathbf{x}\|_p$ denotes the $\ell_p$ norm of $\mathbf{x}$. If $\|\mathbf{x}\|_0 \le K$, $\mathbf{x}$ is called a $K$-sparse signal and the sparsity is $K$. $\operatorname{supp}(\mathbf{x})$ denotes the support set of $\mathbf{x}$. For the matrix $\mathbf{A}$, $\mathbf{a}_j$ denotes the $j$th column of $\mathbf{A}$ and $\mathbf{a}^{(j)}$ denotes the $j$th row of $\mathbf{A}$. For the set $\Omega$, $|\Omega|$ denotes the cardinality of $\Omega$. $\Omega \subseteq \Gamma$ denotes that $\Omega$ is a subset of $\Gamma$. $\mathbf{A}_{\Omega}$ denotes the matrix composed of the columns $\{\mathbf{a}_j : j \in \Omega\}$. $\mathbf{A}^{(\Omega)}$ denotes the matrix composed of the rows $\{\mathbf{a}^{(j)} : j \in \Omega\}$. $\mathbf{A}^{T}$ denotes the transpose of the matrix $\mathbf{A}$, and $\mathbf{A}^{\dagger}$ denotes the pseudo-inverse matrix of $\mathbf{A}$.

2. Background Knowledge

In this part, we introduce the related background knowledge of this paper. The DCS model is introduced in Section 2.1. Joint sparse models are introduced in Section 2.2. The grey wolf optimizer is introduced in Section 2.3. At last, the $p$-thresholding algorithm is introduced in Section 2.4.

2.1. DCS Model

Suppose that there are $J$ signals $\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_J \in \mathbb{R}^N$ which are sparse and have the sparsity $K$, where $K < N$. They are individually measured by the measurement matrix $\mathbf{\Phi} \in \mathbb{R}^{M \times N}$ as [14, 21]
$$\mathbf{y}_i = \mathbf{\Phi}\mathbf{x}_i, \quad i = 1, 2, \ldots, J. \tag{1}$$
Without the consideration of noise, the measurement process can be denoted as
$$\mathbf{Y} = \mathbf{\Phi}\mathbf{X}, \tag{2}$$
where $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_J] \in \mathbb{R}^{N \times J}$ denotes the joint signal and $\mathbf{Y} = [\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_J] \in \mathbb{R}^{M \times J}$ denotes the measurement signal. The DCS problem is to recover $\mathbf{X}$ from $\mathbf{Y}$ jointly by exploiting intersignal and intrasignal correlations. For the matrix $\mathbf{X}$, the set of indices corresponding to the nonzero rows of $\mathbf{X}$ is the joint support set of $\mathbf{X}$, which can be denoted as $\Omega = \operatorname{supp}(\mathbf{X})$. If there are no more than $K$ nonzero rows in $\mathbf{X}$, $\mathbf{X}$ is called jointly $K$-sparse and the joint sparsity is $K$.
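The joint measurement model can be illustrated with a short sketch; the dimensions and the jointly sparse structure below are illustrative assumptions, not the paper's experimental settings:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, J, K = 256, 64, 3, 8    # length, measurements, signal count, joint sparsity

Omega = rng.choice(N, size=K, replace=False)   # joint support set
X = np.zeros((N, J))
X[Omega, :] = rng.standard_normal((K, J))      # jointly K-sparse joint signal

Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=0)

Y = Phi @ X                                    # joint measurement, M x J
# the joint support set is the set of indices of nonzero rows of X
assert set(np.flatnonzero(np.linalg.norm(X, axis=1))) == set(Omega)
```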

2.2. Joint Sparse Models

The joint sparse models (JSMs) describe the intersignal correlations and intrasignal correlations. There are mainly three JSMs.

(1) JSM-1. JSM-1 is called the sparse common support and innovation model [14, 15]. JSM-1 can be written as
$$\mathbf{x}_i = \mathbf{z}_c + \mathbf{z}_i, \quad i = 1, 2, \ldots, J, \tag{3}$$
where $\mathbf{z}_c$ is the common component shared by all signals and $\mathbf{z}_i$ is the innovation component for each signal. The sparsity of $\mathbf{z}_c$ is $K_c$. The sparsity of $\mathbf{z}_i$ is $K_i$.

(2) JSM-2. JSM-2 is called the sparse common support model [14, 15]. JSM-2 can be written as
$$\mathbf{x}_i = \mathbf{z}_i, \quad i = 1, 2, \ldots, J, \tag{4}$$
where the support set of $\mathbf{z}_i$ is the same for each signal but the coefficients are individual. The sparsity of $\mathbf{z}_i$ is $K$.

(3) JSM-3. JSM-3 is called the mixed support model [19, 20]. JSM-3 has a common component and an innovation component. For each signal, the support set of the common component is the same, but the nonzero coefficients are individual. For each signal, the innovation component is completely independent, not only in the coefficients but also in the support set. JSM-3 can be written as
$$\mathbf{x}_i = \mathbf{z}_{c,i} + \mathbf{z}_i, \quad i = 1, 2, \ldots, J, \tag{5}$$
where $\mathbf{z}_{c,i}$ denotes the common component of the $i$th signal and $\mathbf{z}_i$ denotes its innovation component. The sparsity of $\mathbf{z}_{c,i}$ is $K_c$. The sparsity of $\mathbf{z}_i$ is $K_i$. Obviously, JSM-3 is a generalization of JSM-1 and JSM-2. If $K_i = 0$, JSM-3 reduces to JSM-2. If both the coefficients and the support set of $\mathbf{z}_{c,i}$ are the same for each signal, JSM-3 reduces to JSM-1. JSM-3 is less stringent in describing signal correlations, so it can reflect more signal correlations. Our algorithm is proposed to solve the DCS problem based on JSM-3.
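For concreteness, a possible generator for JSM-3 signals following (5) is sketched below; the sparsity levels and dimensions are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(2)
N, J, Kc, Ki = 256, 3, 10, 3   # illustrative sizes

common_support = rng.choice(N, size=Kc, replace=False)
X = np.zeros((N, J))
for i in range(J):
    z_common = np.zeros(N)     # shared support, individual coefficients
    z_common[common_support] = rng.standard_normal(Kc)
    z_innov = np.zeros(N)      # fully independent support and coefficients
    z_innov[rng.choice(N, size=Ki, replace=False)] = rng.standard_normal(Ki)
    X[:, i] = z_common + z_innov
```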

2.3. Grey Wolf Optimizer

The grey wolf optimizer (GWO) [23] algorithm is a recently proposed intelligent optimization algorithm. As apex predators, grey wolves have a special leadership hierarchy and hunting mechanism. Grey wolves are divided into four categories: alpha ($\alpha$), beta ($\beta$), delta ($\delta$), and omega ($\omega$). $\alpha$ is the leader which makes the decisions to hunt, rest, move forward, or stop. $\beta$ assists $\alpha$ in making decisions and reinforces $\alpha$'s commands. $\delta$ executes the decisions and manages the $\omega$ wolves, which are the lowest-ranking grey wolves.

Before hunting, the grey wolves firstly encircle the prey. The distance between a wolf and the prey is computed by using (6), and the wolf's position is updated by using (7):
$$\mathbf{D} = \left|\mathbf{C} \cdot \mathbf{X}_p(t) - \mathbf{X}(t)\right|, \tag{6}$$
$$\mathbf{X}(t+1) = \mathbf{X}_p(t) - \mathbf{A} \cdot \mathbf{D}, \tag{7}$$
where $t$ denotes the current iteration, $\mathbf{X}_p$ denotes the prey's position vector, $\mathbf{A}$ and $\mathbf{C}$ are two coefficient vectors, and $\mathbf{X}$ denotes a grey wolf's position vector. The coefficient vectors $\mathbf{A}$ and $\mathbf{C}$ are determined as follows:
$$\mathbf{A} = 2\mathbf{a} \cdot \mathbf{r}_1 - \mathbf{a}, \qquad \mathbf{C} = 2\mathbf{r}_2, \tag{8}$$
where $\mathbf{r}_1$, $\mathbf{r}_2$ are random vectors between $\mathbf{0}$ and $\mathbf{1}$ and the vector $\mathbf{a}$ decreases from $2$ to $0$ linearly over the course of the iterations.

After the process of encircling the prey, the hunting is led by $\alpha$, $\beta$, and $\delta$. All wolves' positions are updated according to the positions of $\alpha$, $\beta$, and $\delta$. Firstly, the distances between a wolf and the best three wolves are computed by using (9). Then, the position of the wolf is updated by using (10) and (11):
$$\mathbf{D}_\alpha = \left|\mathbf{C}_1 \cdot \mathbf{X}_\alpha - \mathbf{X}\right|, \quad \mathbf{D}_\beta = \left|\mathbf{C}_2 \cdot \mathbf{X}_\beta - \mathbf{X}\right|, \quad \mathbf{D}_\delta = \left|\mathbf{C}_3 \cdot \mathbf{X}_\delta - \mathbf{X}\right|, \tag{9}$$
$$\mathbf{X}_1 = \mathbf{X}_\alpha - \mathbf{A}_1 \cdot \mathbf{D}_\alpha, \quad \mathbf{X}_2 = \mathbf{X}_\beta - \mathbf{A}_2 \cdot \mathbf{D}_\beta, \quad \mathbf{X}_3 = \mathbf{X}_\delta - \mathbf{A}_3 \cdot \mathbf{D}_\delta, \tag{10}$$
$$\mathbf{X}(t+1) = \frac{\mathbf{X}_1 + \mathbf{X}_2 + \mathbf{X}_3}{3}. \tag{11}$$

After all wolves’ positions are updated, the process of hunting the prey goes to the next iteration in which the new best three solutions are generated. The iteration repeats until the stopping criterion is satisfied.
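The following is a minimal sketch of the standard continuous GWO iteration described by (6)–(11), applied to a toy objective; it is meant only to illustrate the update rules, and the population size, iteration count, and objective are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
dim, n_wolves, T = 10, 12, 200
cost = lambda x: float(np.sum(x ** 2))             # toy objective (sphere function)

X = rng.uniform(-5.0, 5.0, size=(n_wolves, dim))   # wolf positions

for t in range(T):
    a = 2.0 - 2.0 * t / T                          # 'a' decreases linearly from 2 to 0
    order = np.argsort([cost(x) for x in X])
    leaders = X[order[:3]].copy()                  # alpha, beta, delta
    for i in range(n_wolves):
        x_new = np.zeros(dim)
        for leader in leaders:
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2.0 * a * r1 - a, 2.0 * r2      # Eq. (8)
            D = np.abs(C * leader - X[i])          # Eq. (9)
            x_new += (leader - A * D) / 3.0        # Eqs. (10)-(11)
        X[i] = x_new

best = min(X, key=cost)
print(cost(best))                                  # close to 0
```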

2.4. $p$-Thresholding Algorithm

The $p$-thresholding algorithm is a joint recovery algorithm proposed in [36]. If the joint sparsity level $K$ is known, we can estimate the joint support set of the joint signal $\mathbf{X}$ by using the following:
$$\hat{\Omega} = \mathop{\arg\max}_{\Omega \subseteq \{1,2,\ldots,N\},\ |\Omega| = K}\ \sum_{j \in \Omega} \left\| \mathbf{Y}^{T}\boldsymbol{\phi}_{j} \right\|_{p}, \tag{12}$$
where $\boldsymbol{\phi}_j$ denotes the $j$th column of $\mathbf{\Phi}$.
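In other words, each column of $\mathbf{\Phi}$ is scored by the $\ell_p$ norm of its correlations with all measurement channels, and the $K$ highest-scoring indices are kept. A compact sketch of this rule (with $p = 2$ as an assumed default) is:

```python
import numpy as np

def threshold_support(Phi, Y, K, p=2):
    """Estimate the joint support: the K columns with the largest l_p scores."""
    scores = np.linalg.norm(Phi.T @ Y, ord=p, axis=1)  # one score per column
    return np.sort(np.argsort(scores)[-K:])
```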

3. DCS-GWO: Grey Wolf Optimizer Algorithm for Distributed Compressed Sensing

In this part, we firstly introduce the DCS-GWO in Section 3.1. Next, we analyze DCS-GWO’s computational complexity in Section 3.2.

3.1. DCS-GWO

DCS-GWO is essentially a GWO algorithm. It has four basic elements: the cost function, the initial positions, the update strategy, and the stopping criteria. In this part, we firstly introduce DCS-GWO's four basic elements and then summarize it in Algorithm 1.

Input: The joint sparsity $K$; the wolf number $W$; the limiting parameter $s$; the stopping criterion $\varepsilon$.
Initialization: Initialize each wolf's position $P_i^0$, $i = 1, 2, \ldots, W$, by using Eq. (16); initialize the best three positions
$P_\alpha^0$, $P_\beta^0$, $P_\delta^0$ as the three initial positions with the smallest cost $f(\cdot)$;
set the iteration number $t = 0$ and the allowed maximum iteration number $T_{\max}$.
Judgement: if $f(P_\alpha^0) < \varepsilon$, set $\hat{\Omega} = P_\alpha^0$, output the joint signal $\hat{\mathbf{X}}$ by using Eq. (13), and stop. Otherwise, go to the iteration.
Iteration:
Step 1. Update all wolves' positions $P_i^{t+1}$, $i = 1, 2, \ldots, W$:
Step 1.1. Define $U_i = P_\alpha^t \cup P_\beta^t \cup P_\delta^t \cup P_i^t$.
Step 1.2. If $|U_i| \ge s$, randomly choose $s$ elements from $U_i$ to form a set $T_i$ and define
$\tilde{P}_i = T_i$. If $|U_i| < s$, randomly choose $s - |U_i|$ elements from $\{1, 2, \ldots, N\} \setminus U_i$ to form a set $T_i$
and define $\tilde{P}_i = U_i \cup T_i$.
Step 1.3. Use the least-squares method to estimate a temporary solution $\tilde{\mathbf{X}}_i$ by using Eq. (17).
Step 1.4. Update the wolf's position $P_i^{t+1}$ by using Eq. (18).
Step 2. Update the best three wolves' positions: set $P_\alpha^{t+1}$, $P_\beta^{t+1}$,
$P_\delta^{t+1}$ as the three updated positions with the smallest cost $f(\cdot)$.
Step 3. Check the termination criterion: If $f(P_\alpha^{t+1}) < \varepsilon$ or $t + 1 \ge T_{\max}$, set the final joint support set $\hat{\Omega} = P_\alpha^{t+1}$ and terminate
the iteration. Otherwise, set $t = t + 1$ and go to the next iteration.
Output: Estimate the joint signal $\hat{\mathbf{X}}$ by using Eq. (13).

(1) Cost Function. Similar to DCS-SAMP and DCS-FBP, we can use a two-step strategy to solve the DCS problem. Firstly, the joint support set of $\mathbf{X}$ is estimated. Then, the joint signal can be estimated by using the least-squares method as follows:
$$\hat{\mathbf{X}}^{(\hat{\Omega})} = \mathbf{\Phi}_{\hat{\Omega}}^{\dagger}\mathbf{Y}, \qquad \hat{\mathbf{X}}^{(\{1,2,\ldots,N\}\setminus\hat{\Omega})} = \mathbf{0}, \tag{13}$$
where $\hat{\Omega}$ denotes the estimated joint support set and $\mathbf{\Phi}_{\hat{\Omega}}^{\dagger}$ denotes the pseudo-inverse of $\mathbf{\Phi}_{\hat{\Omega}}$.

We can find that if the joint support set $\hat{\Omega}$ is estimated accurately, it must satisfy $\mathbf{\Phi}_{\hat{\Omega}}\mathbf{\Phi}_{\hat{\Omega}}^{\dagger}\mathbf{Y} = \mathbf{Y}$. As $\mathbf{Y} = \mathbf{\Phi}\mathbf{X}$, we define the cost function as
$$f(\Omega) = \left\|\mathbf{Y} - \mathbf{\Phi}_{\Omega}\mathbf{\Phi}_{\Omega}^{\dagger}\mathbf{Y}\right\|_F. \tag{14}$$

We can estimate the joint support set by solving the following:
$$\hat{\Omega} = \mathop{\arg\min}_{\Omega \in S} f(\Omega), \tag{15}$$
where $S$ is the set consisting of all the $K$-element subsets of $\{1, 2, \ldots, N\}$.
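A direct sketch of the cost function (14) is given below; note that exhaustively searching $S$ in (15) is combinatorial, which is why DCS-GWO explores $S$ heuristically instead:

```python
import numpy as np

def cost(Phi, Y, Omega):
    """f(Omega): residual after projecting Y onto span(Phi_Omega), Eq. (14)."""
    Phi_O = Phi[:, sorted(Omega)]
    X_O = np.linalg.pinv(Phi_O) @ Y                # least-squares fit, cf. Eq. (13)
    return np.linalg.norm(Y - Phi_O @ X_O, 'fro')
```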

(2) Initial Positions. We assume that the wolf number is $W$. We use the $p$-thresholding algorithm to initialize the grey wolves' positions. $P_i^0$ denotes the initial position of the $i$th wolf, where $i = 1, 2, \ldots, W$. It is estimated by using
$$P_i^0 = \mathop{\arg\max}_{\Omega \subseteq \{1,2,\ldots,N\},\ |\Omega| = K}\ \sum_{j \in \Omega} \left\| \mathbf{Y}^{T}\boldsymbol{\phi}_{j} \right\|_{p_i}, \tag{16}$$
where $p_i$ is a random number in a closed interval, drawn independently for each wolf so that the initial positions are diverse.
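A possible implementation of this initialization is sketched below; the interval for $p$ is an assumption, since the exact range is not reproduced here:

```python
import numpy as np

def init_positions(Phi, Y, K, n_wolves, rng, p_interval=(1.0, 2.0)):
    """One thresholding support estimate per wolf, each with its own random p."""
    positions = []
    for _ in range(n_wolves):
        p = rng.uniform(*p_interval)                   # random p per wolf
        scores = np.linalg.norm(Phi.T @ Y, ord=p, axis=1)
        positions.append(set(np.argsort(scores)[-K:]))
    return positions
```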

(3) Update Strategy. The update strategy of DCS-GWO is inherited from GWO. In DCS-GWO, $t$ denotes the current iteration. For $i = 1, 2, \ldots, W$, the $i$th wolf's position $P_i^{t+1}$ is updated according to the previous best three wolves' positions $P_\alpha^t$, $P_\beta^t$, and $P_\delta^t$ and its own previous position $P_i^t$. A parameter $s$ is used to limit the size of the candidate index set, where $s \ge K$.

The position of the $i$th wolf is updated as follows. In Step 1.1, a set $U_i = P_\alpha^t \cup P_\beta^t \cup P_\delta^t \cup P_i^t$ is formed from the previous best three positions and the wolf's own previous position. In Step 1.2, if $|U_i| \ge s$, randomly choose $s$ elements from $U_i$ to form the set $T_i$ and define the temporary position $\tilde{P}_i = T_i$; if $|U_i| < s$, randomly choose $s - |U_i|$ elements from $\{1, 2, \ldots, N\} \setminus U_i$ to form a set $T_i$ and define the temporary position $\tilde{P}_i = U_i \cup T_i$. In Step 1.3, the temporary solution $\tilde{\mathbf{X}}_i$ is estimated by using the least-squares method as
$$\tilde{\mathbf{X}}_i^{(\tilde{P}_i)} = \mathbf{\Phi}_{\tilde{P}_i}^{\dagger}\mathbf{Y}, \qquad \tilde{\mathbf{X}}_i^{(\{1,2,\ldots,N\}\setminus\tilde{P}_i)} = \mathbf{0}. \tag{17}$$

Lastly, the position of the $i$th wolf is updated by
$$P_i^{t+1} = \left\{ \text{indices of the } K \text{ rows of } \tilde{\mathbf{X}}_i \text{ with the largest } \ell_2 \text{ norms} \right\}. \tag{18}$$
After all wolves' positions are updated, we can update the best three wolves' positions according to the cost function.
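One full position update (Steps 1.1–1.4) can then be sketched as follows; the set handling in Step 1.2 follows the reconstruction given above and should be read as an interpretation rather than a definitive implementation:

```python
import numpy as np

def update_position(Phi, Y, K, s, P_alpha, P_beta, P_delta, P_i, rng):
    """One DCS-GWO position update (Steps 1.1-1.4); positions are index sets."""
    N = Phi.shape[1]
    U = P_alpha | P_beta | P_delta | P_i                   # Step 1.1
    if len(U) >= s:                                        # Step 1.2: trim U ...
        P_tmp = set(rng.choice(sorted(U), size=s, replace=False))
    else:                                                  # ... or pad from its complement
        comp = sorted(set(range(N)) - U)
        P_tmp = U | set(rng.choice(comp, size=s - len(U), replace=False))
    cols = sorted(P_tmp)
    X_tmp = np.linalg.pinv(Phi[:, cols]) @ Y               # Step 1.3, Eq. (17)
    keep = np.argsort(np.linalg.norm(X_tmp, axis=1))[-K:]  # Step 1.4, Eq. (18):
    return {cols[j] for j in keep}                         # keep the K largest rows
```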

(4) Stopping Criterion. In order to avoid too many iterations, we set a maximum allowed iteration number $T_{\max}$ and a small positive number $\varepsilon$ as the stopping criteria. If $f(P_\alpha) < \varepsilon$ or the number of iterations reaches $T_{\max}$, the iteration process is terminated.

We summarize DCS-GWO in Algorithm 1.

3.2. Computational Complexity Analysis of DCS-GWO

According to Algorithm 1, the initialization and the iterations contribute the main computational complexity of DCS-GWO. The initialization requires running the $p$-thresholding estimate of Eq. (16) once per wolf. In each iteration, the main computational cost lies in Steps 1.3 and 1.4, which are dominated by the least-squares estimate of Eq. (17) and the row-norm sorting of Eq. (18), respectively, repeated for each of the $W$ wolves. Because the total number of iterations is not more than $T_{\max}$, the overall complexity grows linearly with $T_{\max}$ and $W$. Obviously, as an intelligent optimization algorithm, DCS-GWO has higher computational complexity than greedy pursuit algorithms. However, as a swarm intelligence algorithm, it can run in parallel to reduce the running time.
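As a sketch of this parallelism: within one iteration, every wolf's Step 1 update depends only on the fixed positions of $\alpha$, $\beta$, and $\delta$, so the per-wolf updates can be mapped concurrently. Here `update_position` refers to the illustrative helper from Section 3.1, not code from the original implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_iteration(positions, step):
    """Apply one DCS-GWO update to every wolf concurrently.

    `step` performs Steps 1.1-1.4 for one wolf with the current best three
    positions held fixed, e.g.
    lambda P: update_position(Phi, Y, K, s, P_alpha, P_beta, P_delta, P, rng).
    """
    with ThreadPoolExecutor() as pool:
        return list(pool.map(step, positions))
```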

4. Simulation Results and Analysis

4.1. Experiment Configuration

In this section, the performance of DCS-GWO is compared with that of other algorithms that can solve the DCS problem based on JSM-3, including Joint-OMP [20], Joint-SP [20], DCS-SAMP [21], and DCS-FBP [22]. The algorithms proposed in [15–18] are designed for JSM-1 or JSM-2, so they are not discussed in this part. The parameters of DCS-GWO, namely, the wolf number $W$, the limiting parameter $s$, the stopping criterion $\varepsilon$, and the maximum iteration number $T_{\max}$, are fixed in all experiments. We use the following setup in the simulations.

We use the Gaussian random matrix as the measurement matrix $\mathbf{\Phi}$, the elements of which are drawn i.i.d. from the standard Gaussian distribution, and every column of which is normalized to unit $\ell_2$ norm. All signals follow JSM-3 with common sparsity $K_c$ and innovation sparsity $K_i$. We assume that the innovation sparsity is the same for all signals. For each signal, the support sets of the common component and the innovation component are random subsets of the set $\{1, 2, \ldots, N\}$. The nonzero coefficients of the common component and the innovation component are drawn i.i.d. from the standard Gaussian distribution. In each experiment, 200 independent trials are conducted. In each trial, the signals and the measurement matrix are generated independently. Average Normalized Mean Squared Error (ANMSE), perfect recovery percentage, and average runtime are used to evaluate the algorithms. The ANMSE is defined as
$$\mathrm{ANMSE} = \frac{1}{200} \sum_{r=1}^{200} \frac{\left\|\mathbf{X}_r - \hat{\mathbf{X}}_r\right\|_F^2}{\left\|\mathbf{X}_r\right\|_F^2},$$
where $\mathbf{X}_r$ and $\hat{\mathbf{X}}_r$, respectively, denote the original joint signal and the recovered joint signal in the $r$th trial. A trial counts as a perfect recovery when its normalized recovery error falls below a small threshold; if $n$ of the 200 trials are successful, the perfect recovery percentage is $n/200$. All the experiments are implemented in MATLAB R2014a on a computer with a 2.5 GHz Intel Core i3 processor and 4.0 GB of memory running the Windows 7 system.
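For clarity, the two recovery metrics can be computed as in the following sketch; the tolerance `tol` is an assumed value, since the paper's exact perfect-recovery threshold is not reproduced here:

```python
import numpy as np

def anmse(originals, recovered):
    """Average Normalized Mean Squared Error over a list of trials."""
    errs = [np.linalg.norm(X - Xh, 'fro') ** 2 / np.linalg.norm(X, 'fro') ** 2
            for X, Xh in zip(originals, recovered)]
    return float(np.mean(errs))

def perfect_recovery_rate(originals, recovered, tol=1e-2):
    """Fraction of trials whose normalized error is below `tol` (assumed)."""
    hits = sum(np.linalg.norm(X - Xh, 'fro') / np.linalg.norm(X, 'fro') <= tol
               for X, Xh in zip(originals, recovered))
    return hits / len(originals)
```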

4.2. Experiment Results

(1) Requirement for Measurements. In the first simulation, we compare the performance of all algorithms as the measurement number $M$ changes from 50 to 90 with step size 10. The other parameters, namely, $N$, $J$, $K_c$, and $K_i$, are fixed.

As Figure 1(a) shows, the perfect recovery percentage of DCS-GWO is always higher than that of the other algorithms. Besides, DCS-GWO needs fewer measurements to recover the signals perfectly: it reaches perfect recovery at a smaller measurement number, while the other algorithms need a larger measurement number to do so. As Figure 1(b) shows, DCS-GWO has lower ANMSE than the other algorithms.

From Figure 1(c), as the price of its global search ability, DCS-GWO needs more running time than the other algorithms. However, as a swarm intelligence algorithm, it can run in parallel to reduce the running time.

(2) Robustness against Common Sparsity. In the second simulation, we evaluate the performance of the algorithms as the common sparsity $K_c$ increases over a range of values. The other parameters, namely, $M$, $N$, $J$, and $K_i$, are fixed.

As Figure 2(a) shows, DCS-GWO performs far better than the other algorithms in the perfect recovery percentage. For all algorithms, as the common sparsity $K_c$ increases, the perfect recovery percentage decreases. We are most interested in the sparsity at which the perfect recovery percentage drops below 1: the percentage of DCS-GWO starts to fall below 1 at a larger $K_c$ than those of the other algorithms, which have already fallen below 1 at smaller values of $K_c$. From Figure 2(b), DCS-GWO has lower ANMSE than the other algorithms. As for the average runtime, we can draw similar conclusions to those from Figure 1(c).

(3) Robustness against Innovation Sparsity. In the third simulation, we compare the performance of DCS-GWO with that of the other algorithms as the innovation sparsity $K_i$ changes from 0 to 7. The other parameters, namely, $M$, $N$, $J$, and $K_c$, are fixed.

As Figure 3(a) shows, DCS-GWO performs considerably better than the other algorithms in the perfect recovery percentage. As the innovation sparsity $K_i$ increases, the perfect recovery percentage declines; this is because the increase of the joint sparsity level degrades the performance of all algorithms. The perfect recovery percentages of the other algorithms start to fall below 1 at a smaller $K_i$ than that of our algorithm. As Figure 3(b) shows, DCS-GWO has lower ANMSE than the other algorithms. As for the average runtime, we can draw similar conclusions to those from Figure 1(c).

(4) Robustness against the Number of Signals. In the fourth simulation, we compare our algorithm with the other algorithms as the number of signals $J$ changes from 2 to 6. The other parameters, namely, $M$, $N$, $K_c$, and $K_i$, are fixed.

The perfect recovery percentages of Joint-SP and Joint-OMP are not obviously influenced by the increase of the signal number, because both of them recover the signals one by one rather than jointly. They perform better than our algorithm when the signal number is small; however, our algorithm performs better than them as the signal number grows.

As the number of signals increases, the perfect recovery percentages of DCS-FBP, DCS-GWO, and DCS-SAMP decrease. As Figure 4(a) shows, our algorithm has higher perfect recovery percentages than DCS-FBP and DCS-SAMP: their percentages fall below 1 at a smaller signal number than ours does. As Figure 4(b) shows, DCS-GWO has lower ANMSE than the other algorithms. As for the average runtime, we can draw similar conclusions to those from Figure 1(c).

From the above simulations, we can see that DCS-GWO has a higher perfect recovery percentage and lower ANMSE than the greedy pursuit algorithms. Next, we analyze the reasons for DCS-GWO's better performance according to its structure.

The main reason for DCS-GWO's better performance is its update mechanism. In Step 1.1, the common information of the best three grey wolves' positions is utilized to update all the grey wolves' positions; in this way, the best information obtained so far is preserved in the new positions. In Step 1.2, random perturbations are introduced into the grey wolves' positions; benefitting from this, DCS-GWO can escape local optima and guide the search in promising directions. In Steps 1.3 and 1.4, the indices corresponding to the rows of the temporary joint signal with the smallest row norms are removed from the temporary support set; in this way, previously mis-selected indices can be discarded.

In contrast to DCS-GWO, greedy pursuit algorithms search the support set according to the gradient of (15), which is a local search mechanism. Therefore, owing to the efficiency of its update mechanism, DCS-GWO can find the support set, and hence recover the signals, more accurately than greedy pursuit algorithms.

5. Conclusion

In this paper, an intelligent grey wolf optimizer algorithm is proposed to solve the DCS problem based on JSM-3. The positions of the grey wolves are initialized by using the $p$-thresholding algorithm and updated by using the idea of GWO. Inheriting the global search ability of GWO, DCS-GWO overcomes the greedy pursuit algorithms' shortcoming of easily falling into suboptimal solutions. The simulation results illustrate that DCS-GWO has a higher perfect recovery percentage and lower ANMSE than the other algorithms. DCS-GWO has higher computational complexity than the other algorithms; however, as a swarm intelligence algorithm, it can compute in parallel to reduce the running time. In future work, we will focus on developing effective update strategies to reduce the running time. Moreover, there are many more recent nature-inspired algorithms, such as the Ant Lion Optimizer (ALO) [37], the Moth-Flame Optimization (MFO) algorithm [38], the Whale Optimization Algorithm (WOA) [39], and the Multi-Verse Optimizer (MVO) [40]. We will exploit them to solve the DCS problem based on JSM-3.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is financially supported by the National Natural Science Foundation of China (no. 51574232).