Wireless Communications and Mobile Computing

Volume 2017 (2017), Article ID 9823684, 10 pages

https://doi.org/10.1155/2017/9823684

## An Adaptive Joint Sparsity Recovery for Compressive Sensing Based EEG System

College of Engineering, Qatar University, P.O. Box 2713, Doha, Qatar

Correspondence should be addressed to Hamza Djelouat

Received 28 July 2017; Revised 14 October 2017; Accepted 2 November 2017; Published 29 November 2017

Academic Editor: Gonzalo Vazquez-Vilar

Copyright © 2017 Hamza Djelouat et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The last decade has witnessed tremendous efforts to shape Internet of things (IoT) platforms to be well suited for healthcare applications. These platforms comprise a network of wireless sensors that monitor several physical and physiological quantities. For instance, long-term monitoring of brain activity using wearable electroencephalogram (EEG) sensors is widely exploited in the clinical diagnosis of epileptic seizures and sleeping disorders. However, the deployment of such platforms is challenged by high power consumption and system complexity. Energy efficiency can be achieved by exploring efficient compression techniques such as compressive sensing (CS). CS is an emerging theory that enables a compressed acquisition using well-designed sensing matrices. Moreover, system complexity can be optimized by using hardware-friendly structured sensing matrices. This paper quantifies the performance of a CS-based multichannel EEG monitoring system. In addition, the paper exploits the joint sparsity of multichannel EEG using the subspace pursuit (SP) algorithm, as well as a designed sparsifying basis, in order to improve the reconstruction quality. Furthermore, the paper proposes a modification of the SP algorithm based on an adaptive selection approach to further improve the performance in terms of reconstruction quality, execution time, and robustness of the recovery process.

#### 1. Introduction

Nowadays, huge interest is dedicated to the development of Internet of things (IoT) based connected health platforms. These platforms are empowered by several wearable, battery-driven sensors that collect and record different vital signs over long periods. The collected data is sent using low-power communication protocols to a nearby gateway, which then delivers the data to the host cloud. At the cloud level, various signal processing and data analysis techniques are performed to provide computer-aided medical assistance. However, the performance of these platforms is bottlenecked mainly by the limited lifespan of the wearable sensors. Exploring data compression techniques can therefore reduce the amount of data transmitted from the sensors to the gateway, hence prolonging the sensors' lifespan. Compressive sensing (CS) theory has proved to be a reliable compression technique which provides the best trade-off between reconstruction quality and low power consumption compared to conventional compression approaches such as transform coding or segmentation and labeling techniques [1].

CS is a novel data sampling paradigm that merges the acquisition and compression processes into one operation. CS relies on the signal's sparsity/compressibility in order to acquire a compressed form of the signal while maintaining its salient information. CS was introduced in [2, 3], where the authors proved that any sparse signal can be recovered exactly from a set of measurements smaller than its original dimension. Therefore, it is possible to acquire sparse signals using well-designed matrices while taking far fewer random measurements than the famous Shannon-Nyquist theorem dictates. Despite being a relatively new theory, CS has been incorporated in a wide range of emerging applications, including image processing, radar, wireless communication, and monitoring-based applications.

Furthermore, to cope with current monitoring systems, an extension to CS has been introduced in [4], namely, distributed compressive sensing (DCS). DCS aims to exploit both the signal intrastructure (the sparsity) and the interstructure of the acquisition system (correlation between the measurements of the different sensors) in order to acquire the information about the signals of interest using the minimum number of measurements.

Subsequently, by leveraging the sparsity of most biosignals such as electrocardiogram (ECG) and electromyogram (EMG) [5, 6], many efforts have been dedicated to exploit CS and DCS in wireless body area network (WBAN) applications to enable CS-based IoT platforms for connected health. In such platforms, the compressed data is transmitted to a fusion node (gateway) that possesses enough computing and communication abilities. Afterwards, the data is routed to the cloud via the Internet for reconstruction, processing, and analysis. It is worth mentioning that data reconstruction can be performed on the gateway to empower an “IoT-based edge computing platform.”

CS-based systems for EMG and ECG monitoring have been thoroughly investigated, where various aspects have been well analyzed, for instance, the comparison between CS and state-of-the-art compression techniques [7], system design considerations [5], the effect of the sparsifying dictionaries [8], and the best algorithms in terms of reconstruction quality [9]. In addition, the authors in [10] further leveraged the structure of biosignals: instead of exploring only the signal sparsity in one domain, they proposed using all the available structure, such as low rank, piecewise smoothness, and sparsity in more than one domain. Subsequently, the authors in [10] proposed a reconstruction framework that aims to exploit any a priori information about the signals in order to enhance the reconstruction quality.

Moreover, the application of CS to electroencephalogram (EEG) signals has been presented in the literature. The performance of such CS-based systems is controlled by two parameters: the sparsity of the signal, which depends mainly on the sparsifying basis, and the recovery algorithm adopted. The authors in [11] have shown the feasibility of using CS for EEG compression as long as the EEG signal is recorded via at least 22 channels. The major limitation facing the deployment of CS in EEG compression is that it is very hard to find a transform domain where the EEG exhibits a sparse behavior. Therefore, different classes of sparsifying bases and dictionaries have been investigated to determine the basis that provides the sparsest representation for EEG. Senay et al. have quantified the use of the Slepian basis as a sparsifying basis for EEG [12]; the obtained results show a low error rate for the reconstructed EEG signal. In addition, Aviyente has presented a CS-based EEG compression system exploiting a Gabor frame as the dictionary for EEG signals [13], whereas Gangopadhyay et al. [14] have found that adopting a wavelet transform for EEG is more efficient in terms of reconstruction quality. Zhang et al. presented in [15] a block sparse Bayesian learning (BSBL) approach to recover EEG raw data that enables both good reconstruction quality and low system complexity by using sparse sensing matrices and wavelet matrices as the sparsifying basis. More recently, the authors in [16] introduced an optimization model based on norm minimization to enhance the cosparsity and to enforce the low-rank structure of the EEG signal. The authors proposed using a second-order difference matrix as the sparsifying dictionary to enhance the sparsity of the EEG signal, as well as exploiting the collaboration between the cosparsity and the low-rank structure to recover a multichannel EEG signal simultaneously.

Besides selecting the optimum sparsifying matrix, adopting the appropriate reconstruction algorithm plays an important role in the recovery of the EEG data. Greedy algorithms have been widely explored in CS applications due to their low complexity and their superior performance compared to other recovery algorithms, such as convex relaxation approaches. The widely used greedy algorithms are orthogonal matching pursuits (OMP) [17], stage-wise OMP (StOMP) [18], compressive sampling matching pursuit (CoSaMP) [19], and subspace pursuit (SP) [20].

The main task in greedy algorithms is to identify the locations of the largest coefficients in the estimated signal. At each iteration, greedy algorithms adopt a signal proxy approach to identify these locations. If the sensing matrix satisfies the restricted isometry property (RIP) condition [21], then the signal proxy is very similar to the original signal and the locations of the nonzero elements can be easily identified. OMP and StOMP reconstruct the signal iteratively by locating the largest coefficient one at a time. On the other hand, SP and CoSaMP select more than one coefficient at each iteration, which allows them to converge to the solution in fewer iterations. However, SP and CoSaMP require knowledge of the signal sparsity, which is not available a priori in many applications, as the sparsity of the signal often changes over time. Moreover, the sparsity parameter K depends not only on the signal structure but also on the space where the data is sparse; hence, the same signal can exhibit different levels of sparsity depending on the sparsifying basis. The required knowledge of the sparsity parameter K presents a critical issue for SP and CoSaMP, where a poor choice of K can remarkably degrade the reconstruction quality. Adaptive sparsity algorithms have been proposed in the literature; the authors in [22, 23] performed various modifications to the OMP, CoSaMP, and SP algorithms in order to provide an adaptive framework that estimates the best value for the sparsity parameter K. Sparsity adaptive matching pursuit (SAMP), proposed in [22], can be considered a generalization of both OMP and SP: it updates K at each iteration until a certain condition is satisfied. SAMP increases the value of the sparsity parameter using a two-stage verification process until the difference between the norms of the residuals of every two successive iterations falls below a certain threshold.
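The expand-and-prune loop that distinguishes SP from the one-atom-at-a-time pursuits can be sketched as follows. This is a minimal NumPy illustration under assumed notation (it is not the authors' implementation): `A` denotes the effective dictionary (e.g., the product of the sensing matrix and the sparsifying basis) and `K` the assumed sparsity level.

```python
import numpy as np

def subspace_pursuit(A, y, K, max_iter=20):
    """Minimal SP sketch: A is the effective dictionary, K the sparsity level."""
    # Initialization: support = indices of the K largest entries of the proxy A^T y.
    S = np.argsort(np.abs(A.T @ y))[-K:]
    coef = np.linalg.lstsq(A[:, S], y, rcond=None)[0]
    r = y - A[:, S] @ coef
    for _ in range(max_iter):
        # Expansion: merge the current support with the K strongest residual-proxy entries.
        cand = np.union1d(S, np.argsort(np.abs(A.T @ r))[-K:])
        c = np.linalg.lstsq(A[:, cand], y, rcond=None)[0]
        # Pruning: keep only the K largest least-squares coefficients.
        S_new = cand[np.argsort(np.abs(c))[-K:]]
        coef_new = np.linalg.lstsq(A[:, S_new], y, rcond=None)[0]
        r_new = y - A[:, S_new] @ coef_new
        if np.linalg.norm(r_new) >= np.linalg.norm(r):  # residual stopped shrinking
            break
        S, coef, r = S_new, coef_new, r_new
    s_hat = np.zeros(A.shape[1])
    s_hat[S] = coef
    return s_hat
```

The sketch makes the sensitivity to K visible: the pruning step always forces exactly K survivors, so an underestimated or overestimated K directly distorts the recovered support.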

In this paper, a CS-based scheme for EEG signal compression and recovery is presented. The contributions of the paper are as follows:

(i) Joint channel reconstruction using the SP algorithm is presented. The proposed approach renders a better reconstruction quality than the conventional channel-per-channel recovery.

(ii) The concept of a concatenated basis as the sparsifying basis for EEG signals is explored to tackle the problem of nonsparsity. The concatenated basis consists of a random selection of elements from both the discrete cosine transform (DCT) matrix and the discrete wavelet transform (DWT) matrix.

(iii) A new adaptive approach is presented to reconstruct the EEG signal. The new algorithm is a modification of SP that does not require a priori knowledge of the signal sparsity. The proposed dynamic selection subspace pursuit (DSSP) algorithm performs, at each iteration, an adaptive selection of the coefficients that capture most of the signal energy. The proposed algorithm promotes two improvements over SP: first, an enhancement of the reconstruction quality and, second, an increased robustness, as SP provides poor reconstruction quality when the sparsity parameter is badly estimated.

The rest of the paper is organized as follows: CS fundamentals are briefly presented in Section 2. Section 3 addresses the main issue of the paper where the description of joint reconstruction approach and the proposed recovery algorithm is provided. Simulation results and discussion are presented in Section 4. Section 5 concludes the paper.

#### 2. Compressed Sensing

##### 2.1. Acquisition Model

The acquisition model of CS is given by

y = Φx, (1)

where x ∈ ℝ^N is the input sparse signal, Φ ∈ ℝ^{M×N} (with M < N) is the sensing matrix, and y ∈ ℝ^M is the compressed measured signal. Each entry of y is an inner product between x and one row of Φ.

In most cases, the input signal x is not sparse in the time domain, yet it can exhibit a sparse behavior under an appropriate transform. Thus, given a basis Ψ that spans ℝ^N, x can be expressed as a linear combination of the elements of Ψ with a coefficient vector s such that x = Ψs. The input signal is said to be K-sparse if s has only K nonzero entries. The set of indices corresponding to the positions of the nonzero entries of s is called the support of s and is denoted supp(s).

The sensing matrix Φ, which maps the N-length input signal to an M-length measurement vector, has to allow a small number of samples to capture the salient information in the input signal while maintaining an acceptable reconstruction quality. Therefore, Φ has to satisfy two conditions: the restricted isometry property (RIP), and incoherence with the sparsifying matrix Ψ [21].
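The acquisition model above can be illustrated with a short NumPy sketch. The dimensions, the Gaussian Φ, and the choice of an orthonormal DCT basis as Ψ are assumptions made for the example, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 8                      # signal length, measurements, sparsity level

# K-sparse coefficient vector s.
s = np.zeros(N)
s[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

# Orthonormal DCT-II basis as the sparsifying matrix Psi (columns are DCT atoms).
n = np.arange(N)
Psi = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[:, None] + 1) * n[None, :] / (2 * N))
Psi[:, 0] /= np.sqrt(2.0)                 # rescale the DC atom to unit norm
x = Psi @ s                               # time-domain signal, K-sparse under Psi

# Random Gaussian sensing matrix Phi; y holds M measurements instead of N samples.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x
```

A dense Gaussian Φ is used here only for simplicity; the paper's point about hardware-friendly structured sensing matrices would replace it with, e.g., a sparse binary matrix.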

##### 2.2. Reconstruction Algorithms

Data reconstruction is the crucial task in any CS-based system. Thus, several approaches to recover the original signal from the measured signal have been proposed in the literature. Two main classes of reconstruction algorithms have been widely explored, namely, convex optimization and greedy algorithms. Convex optimization approaches provide the exact solution if the input signal is truly sparse. Convex optimization algorithms are based on ℓ1-norm minimization; for instance, the basis pursuit (BP) algorithm [24] solves

min ‖s‖₁ subject to y = ΦΨs.

In the case where the acquisition process is contaminated with noise, two different techniques can be deployed. If the noise level is known a priori, basis pursuit denoising (BPDN) [25] can be applied; otherwise, the least absolute shrinkage and selection operator (LASSO) presents an efficient approach to recover the original signal.

Greedy algorithms provide a suboptimal recovery for sparse signals, yet they outperform convex optimization approaches in the case where the signal of interest is highly sparse [17]. Greedy algorithms solve (1) iteratively by taking locally optimal decisions. These algorithms aim to find the locations of the nonzero coefficients to enable a fast recovery. Greedy algorithms include several variants such as gradient pursuit, matching pursuit (MP) [26], and OMP [17]. OMP offers a fast recovery compared to convex optimization approaches; however, it suffers from bad recovery quality for signals with a low degree of sparsity. Thus, several improved versions of OMP have been proposed, such as CoSaMP [19], SP [20], and StOMP [18].
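The one-atom-per-iteration strategy of OMP mentioned above can be sketched in a few lines of NumPy. This is a textbook-style illustration, not code from the cited works; `A` and `K` denote the effective dictionary and the assumed sparsity, as in the SP sketch.

```python
import numpy as np

def omp(A, y, K):
    """Minimal OMP sketch: select the single best-matching atom per iteration."""
    S = []
    r = y.copy()
    for _ in range(K):
        S.append(int(np.argmax(np.abs(A.T @ r))))        # strongest correlation
        coef = np.linalg.lstsq(A[:, S], y, rcond=None)[0]
        r = y - A[:, S] @ coef                           # re-orthogonalized residual
    s_hat = np.zeros(A.shape[1])
    s_hat[S] = coef
    return s_hat
```

Because the residual is re-orthogonalized against all selected atoms after every pick, no atom is chosen twice; the cost is K least-squares solves, which is why SP/CoSaMP-style batch selection converges in fewer iterations.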

##### 2.3. Distributed Compressive Sensing

Conventional CS exploits only the sparsity of the data. However, if the same phenomenon is observed using different sensing nodes or different channels, their measurements are highly correlated. In such a scenario, the measurements exhibit the same behavior, such as being sparse in a particular domain.

In a multichannel CS-based data acquisition system, each sensing node collects and compresses its data individually, without any consideration of the other nodes. For the recovery, two approaches can be considered. In the first, the data of each sensing node is reconstructed individually; this approach ignores the dependency between the measurements of the different sensors, and hence the quality of the reconstruction depends only on the sparsity of each recording. The second approach exploits the collaboration between all measurements to obtain more information about the data; thus, a better reconstruction quality can be achieved. This process is called the joint measurement setting, and it has motivated the introduction of the DCS concept.

DCS presents a new distributed coding framework that exploits both the sparsity of the signal and the correlation between the different signals in multisensing architectures. In the DCS acquisition stage, each sensor collects its measurements by taking random projections of the signal without any consideration about the states of the other sensors in the network. However, the reconstruction phase exploits the intersignal correlation by using all of the obtained measurements to recover all the signals simultaneously.
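One common way to realize the joint reconstruction described above is a simultaneous greedy pursuit, where atom selection pools the proxies of all channels. The sketch below is a SOMP-style illustration of that idea under the assumption of a support shared by all channels; it is not the joint SP scheme proposed later in this paper.

```python
import numpy as np

def somp(A, Y, K):
    """Joint-recovery sketch: columns of Y are per-channel measurement vectors
    assumed to share one common support of size K."""
    S = []
    R = Y.copy()
    for _ in range(K):
        # Aggregate the residual proxy across channels before picking the next atom.
        S.append(int(np.argmax(np.sum(np.abs(A.T @ R), axis=1))))
        C = np.linalg.lstsq(A[:, S], Y, rcond=None)[0]   # one LS solve, all channels
        R = Y - A[:, S] @ C
    X = np.zeros((A.shape[1], Y.shape[1]))
    X[S, :] = C
    return X
```

Pooling the proxies is what exploits the intersignal correlation: an atom weakly expressed in one noisy channel can still be selected if the other channels support it.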

#### 3. CS-Based EEG Compression

EEG is a well-established technique for measuring the electrical activity of the brain. EEG signals are widely used to detect different types of neurological disorders, such as comas, epilepsy, and sleep disorders. Moreover, EEG can also be used for nonmedical applications such as brain-computer interfaces. EEG signals are recorded over long periods of time using a set of electrodes placed on the head of the subject. EEG is thus a multivariate signal acquired via multiple channels, which results in the generation of big EEG data that need to be stored and transmitted. Several studies have highlighted the limitation of such an approach in terms of high energy consumption due to massive raw data streaming; EEG monitoring platforms would therefore benefit from more power-efficient sampling and compression prior to wireless transmission. These limitations motivate the incorporation of CS and DCS into EEG acquisition and compression.

##### 3.1. Related Work

CS-based EEG monitoring has been investigated in the literature. First, the feasibility of applying CS to EEG acquisition was addressed in [11, 27]. The authors quantified CS-based EEG monitoring, where CS was used as a compression technique to reduce both the storage and the processing load. The obtained results revealed that CS does not provide a good reconstruction quality unless an appropriate acquisition scheme is deployed; in particular, CS can be applied only if at least 22 channels are deployed to collect the EEG data.

The low sparsity of EEG raw data in both the time and frequency domains presents the main challenge in the design of CS-based EEG monitoring systems. Thus, great attention has been dedicated to providing dictionaries and bases that render a highly sparse representation of EEG signals. Subsequently, several dictionaries have been investigated in the literature, such as Slepian bases, Gabor frames, and DWT matrices [12–15, 28]. In [12], Senay et al. quantified a CS framework for EEG compression using Slepian functions as a sparsifying dictionary. By projecting the EEG signal onto the Slepian basis, a sparse representation is achieved; hence, the EEG can be efficiently compressed with CS at a very low error rate. In addition, Aviyente analyzed a CS framework for EEG compression in terms of the mean square error (MSE) using a Gabor frame as the sparsifying basis [13]. The author argued that a chirped Gabor dictionary would be very efficient and can increase the sparsity of the signals, hence improving the performance of CS-based EEG monitoring. On the other hand, Gangopadhyay et al. claimed in [14] that wavelet-based dictionaries are more suitable for CS-based EEG compression than the previously mentioned approaches. The authors in [11] provided a detailed performance study of six different sparsifying dictionaries, namely, Gabor, Mexican Hat, cubic Spline, linear Spline, cubic B-Spline, and linear B-Spline. In that paper, intensive sets of simulations were carried out for different reconstruction algorithms under 18 different test conditions. The B-Spline dictionaries proved to be the most promising, yielding the best reconstruction quality and achieving the lowest error rates. Furthermore, Liu et al. proposed in [16] a new framework for EEG monitoring based on a sparse signal recovery method and a simultaneous cosparsity and low-rank (SCLR) optimization approach. The proposed approach utilizes a second-order difference matrix as the sparsifying basis and norm minimization for data reconstruction. Meanwhile, Zhang et al. explored BSBL, initially developed in [29] to empower ECG signal monitoring, for EEG reconstruction [15]. The idea of that paper is that, instead of finding the optimal sparsifying dictionary, the authors used general dictionary matrices (DWT and DCT) to represent the EEG signal, and explored BSBL to take advantage of the block structure of the EEG signal. The results revealed an acceptable reconstruction quality for particular sets of applications.

Besides evaluating the sparsity, the metrics for EEG data reconstruction have been investigated as well. For instance, the percentage root-mean-square difference (PRD) has been used to evaluate the quality of EEG reconstruction in [30], and different thresholds have been established based on the targeted application. In [31], based on an energy preservation criterion, the authors determined that the maximum PRD which provides an acceptable recovery is 7%; such a PRD value can guarantee that 99.5% of the signal energy is preserved. Higgins et al., on the other hand, demonstrated in [32] that up to 30% PRD is tolerable with EEG compression for automated seizure detection applications.
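The PRD metric used in these thresholds is the ℓ2 reconstruction error normalized by the signal norm, expressed as a percentage. A minimal sketch, which also checks the energy argument behind the 7% threshold of [31]:

```python
import numpy as np

def prd(x, x_hat):
    """Percentage root-mean-square difference between original and recovered signals."""
    return 100.0 * np.linalg.norm(x - x_hat) / np.linalg.norm(x)

# Energy argument: if the error accounts for a fraction (PRD/100)^2 of the energy,
# a PRD of 7% leaves roughly 1 - 0.07**2 = 0.9951, i.e., about 99.5% of the energy.
```

Note that PRD is scale-relative, which is why application-dependent thresholds (7% for clinical-grade recovery, up to 30% for automated seizure detection) are meaningful across recordings.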

Table 1 summarizes comparative results from several works in the literature on the integration of CS in the context of EEG monitoring.