The Scientific World Journal

Volume 2014, Article ID 973063, 13 pages

http://dx.doi.org/10.1155/2014/973063
Research Article

Feature Selection and Classifier Parameters Estimation for EEG Signals Peak Detection Using Particle Swarm Optimization

1Applied Control and Robotics (ACR) Laboratory, Department of Electrical Engineering, Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia

2Faculty of Electrical Engineering, Universiti Teknologi Malaysia, 81310 Johor Bahru, Malaysia

3Faculty of Electrical and Electronic Engineering, Universiti Malaysia Pahang, 26600 Pekan, Pahang, Malaysia

4Faculty of Computing, Universiti Teknologi Malaysia, 81310 Johor Bahru, Malaysia

Received 18 June 2014; Accepted 30 July 2014; Published 19 August 2014

Academic Editor: Shifei Ding

Copyright © 2014 Asrul Adam et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features drawn from several peak models. However, no study has assessed the importance of each peak feature in building a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection of EEG signals in time domain analysis. Two versions of PSO are used: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework searches for the combination of available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the peak detection accuracy can be improved up to 99.90% and 98.59% for training and testing, respectively, compared with the framework without feature selection. Additionally, the framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a lower-variance model.

1. Introduction

Peak detection algorithms have been used extensively on different types of biological signals, such as the electrooculogram (EOG), generated by the eyes; the electrocardiogram (ECG), generated by the heart; and the electroencephalogram (EEG), generated by the brain. Peak detection in the EOG signal has been used for detecting eye blinks [1, 2]. In EOG-based measurement, a number of electrodes are placed around the eyes; when the eyes move in the vertical direction, positive or negative peak points arise. For the ECG signal, peak detection is typically used to detect the combination of Q, R, and S waves, the so-called QRS complex [3]. The QRS complex is a peak model for the ECG signal comprising the Q-valley point, R-peak point, and S-valley point; other important peak points in the ECG signal are the P-peak and T-peak points. Detection of the QRS complex is a critical part of numerous ECG signal processing systems, as different QRS complex patterns indicate different cardiac conditions. Additionally, peak detection for the EEG signal has been widely used to detect the P300 response [4, 5] and epileptic activity [6]. P300 is a brain response measured by electrodes covering the parietal lobe in the presence of visual and auditory stimuli, while a brain with a chronic disorder may respond with epileptic discharges. Peak detection algorithms are therefore well suited to the biological signals considered in this study.

To date, a variety of peak detection algorithms have been proposed. These algorithms can be categorized into four main approaches: time domain [7–15], frequency domain [16], time-frequency domain [10, 17], and nonlinear [18]. In the time domain approach, the peaks are analyzed in time; in the frequency domain approach, in frequency; in the time-frequency domain approach, in both time and frequency; and in the nonlinear approach, some statistical parameters of the peaks are analyzed. The general framework of a peak detection algorithm usually involves several processes: signal preprocessing, peak candidate detection, feature extraction, and classification. Various signal preprocessing methods have been employed, such as data compression [19], wavelet transform [6], Kalman filtering [20], and the Hilbert transform [15]. Two methods for peak candidate detection have been used: the three-point sliding window method [8] and the k-point nonlinear energy operator (k-NEO) method [21]. Various feature extraction techniques have been proposed, including model-based features [21], wavelet analysis [22], template matching [23], and power spectral analysis [24]. Several classifiers have been used, including rule-based classifiers [8, 24], artificial neural networks (ANN) [10, 11, 25, 26], support vector machines (SVM) [7, 27], and expert systems [10]. The main goals in designing such frameworks are to achieve the highest performance and to reduce computational time. Almost all studies in the EEG peak detection literature focus on detecting peaks in epileptic EEG signals; a review of peak detection algorithms applied to epileptic EEG signals is presented in [28]. Peak detection is only a first step in epileptic event detection: the main goal is to determine the epileptic spikes, not all peaks. Therefore, for an epileptic event detection system, the epileptic spike detection performance, not the peak detection performance, is the performance of interest.

In the time domain approach, fourteen different peak features are recognized from different peak models [7–10]. A peak model is a set of peak features that represents a peak by its amplitude, width, and slope. Most algorithms in the time domain approach [7–13, 21] consider different peak models and different styles of framework, with the peak model chosen based on the experience of an EEG expert. To date, no peak detection framework automatically finds the finest peak model among those available. Using the finest peak model gives the algorithm a chance to achieve good performance; conversely, a chosen peak model is not necessarily suitable for different types of biological signal. Moreover, the finest peak model conveys meaningful information about the signal being evaluated. Therefore, adapting a feature selection technique is important in this study to automatically find the finest peak model. Feature selection in the peak detection algorithm also reduces the computational time.

In this study, a feature selection and classifier parameter estimation method based on the standard particle swarm optimization (PSO) and random asynchronous PSO (RA-PSO) algorithms is employed. The search for the finest peak model and the classifier parameter estimation are executed simultaneously. The peak features are evaluated by a rule-based classifier, whose role is to distinguish between peak and non-peak points. A rule-based classifier is employed because of its ability to provide an outstanding interpretation of the obtained decisions [24]. In addition, the parameter values are difficult to estimate manually. A PSO algorithm is considered appropriate for this problem because feature selection is a binary search problem while determining the classifier parameters is a continuous search problem [29].

1.1. Peak Model in Time Domain Analysis

A peak model is a set of peak features that represents a peak by its amplitude, width, and slope. In time domain analysis, fourteen different peak features are recognized from different peak models [8–10]. The earliest peak model was introduced by Dumpala et al. in 1982 [8]. It comprises four features: (1) the amplitude between the magnitude of the peak point and the magnitude of the valley point at the first half wave, (2) the width between the valley point at the first half wave and the valley point at the second half wave, and (3) and (4) the two slopes between the peak point and the valley points in the first and second half waves. A similar definition of the peak amplitude and slopes has also been used in [7, 11, 13].

An additional peak amplitude feature and two peak width features were introduced by Acir et al. [7, 11]. The additional peak amplitude is the amplitude between the magnitude of the peak point and the magnitude of the valley point at the second half wave. The peak widths are the widths between the peak point and the valley points at the first and second half waves. In total, Acir et al. introduced six features; they did not use the width feature introduced by Dumpala et al. A similar definition of the peak amplitudes, widths, and slopes has also been used in [21], where an additional peak feature, the area of the peak, is added to the feature set introduced in [7, 11]. However, the definition of the area integration is not presented in that paper.

In addition, Liu et al. [10] introduced eleven peak features. Their peak model consists of four amplitudes: (1) the amplitude between the magnitude of the peak point and the magnitude of the valley point at the first half wave; (2) the amplitude between the magnitude of the peak point and the magnitude of the valley point at the second half wave; (3) the amplitude between the magnitude of the peak and the magnitude of the turning point at the first half wave; and (4) the amplitude between the magnitude of the peak and the magnitude of the turning point at the second half wave. The turning point is defined as the point where the slope decreases by more than 50% compared with the slope at the preceding point. The model also consists of three widths: (1) the width between the valley points at the first and second half waves, (2) the width between the turning points at the first and second half waves, and (3) the width between the half points at the first and second half waves. Four slopes are also measured: (1) and (2) the two slopes between the peak point and the valley points in the first and second half waves, and (3) and (4) the two slopes between the peak point and the turning points at the first and second half waves.

Another peak model, consisting of four features, was proposed by Dingle et al. [9]. The peak amplitude is the difference between the peak point and the floating mean. The floating mean is the average EEG centered at the peak point, also called the moving average curve (MAC) [12]. The width is calculated as the difference between the valley point at the first half wave and the valley point at the second half wave. The two slopes are those between the peak point and the valley points in the first and second half waves. A summary of the different peak models and their styles of framework is given in Table 1, where their strengths and weaknesses are also highlighted. Generally, the authors claimed that their selected peak features offer good classification performance in their proposed frameworks. However, these previous works did not justify the selected features.

tab1
Table 1: Summary of different peak models on different style of framework.

2. Methodology

Figure 1 shows the framework of the proposed techniques for EEG signal peak detection. The process has two phases: training and testing. The training phase is run first to find the finest peak model and the optimal decision threshold values. The testing phase is then applied to unseen EEG signals.

973063.fig.001
Figure 1: Feature selection and parameters estimation framework for peak detection algorithm.

The framework can be divided into four stages: peak candidate detection, feature extraction of the peak candidates, feature selection and parameter estimation, and classification. In the first stage, peak candidate detection differentiates between peak candidates and non-peak candidates. The second stage extracts the features of each peak candidate. In the third stage, the PSO algorithm is adapted during the training phase for feature selection and classifier parameter estimation. Finally, the rule-based classifier labels each peak candidate location as a predicted peak or a predicted non-peak.

2.1. Peak Candidate Detection

The first step in detecting peaks is to find candidate peaks. Consider a discrete-time signal x(n) of N points. The candidate peak points, as shown in Figure 2, are identified using the three-point sliding window method [8]. The three points in the window are x(n − 1), x(n), and x(n + 1) for n = 2, …, N − 1. A candidate peak point is identified when x(n − 1) < x(n) > x(n + 1), with its two associated valley points lying on either side, as shown in Figure 2. A valley point exists where x(n − 1) > x(n) < x(n + 1).
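The three-point sliding window test above can be sketched in a few lines of Python; the function name and the return format are illustrative, not taken from the paper:

```python
def find_peak_candidates(x):
    """Three-point sliding window peak candidate detection (after Dumpala et al.).

    A sample x[n] is a candidate peak when x[n-1] < x[n] > x[n+1];
    a valley is detected when x[n-1] > x[n] < x[n+1].
    Returns (peak_indices, valley_indices).
    """
    peaks, valleys = [], []
    for n in range(1, len(x) - 1):
        if x[n - 1] < x[n] > x[n + 1]:
            peaks.append(n)
        elif x[n - 1] > x[n] < x[n + 1]:
            valleys.append(n)
    return peaks, valleys
```

Each candidate peak returned here is then passed to the feature extraction stage together with its neighbouring valleys.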

973063.fig.002
Figure 2: Model-based parameters.
2.2. Feature Extraction

Based on the existing peak models, there are fourteen peak features in total. The features of a peak candidate are calculated from the eight model-based parameters shown in Figure 2: the candidate peak point, the two associated valley points, the half points at the first and second half waves, the turning points at the first and second half waves, and the moving average curve. The peak features can be categorized into three groups: amplitude, width, and slope. Five different amplitudes, five different widths, and four different slopes can be calculated from the model-based parameters. The equations and descriptions of all peak features are tabulated in Table 2. Referring to Table 3, the peak models introduced by Dumpala et al. [8] and Dingle et al. [9] consist of four features each, the peak model specified by Acir et al. [7, 11] consists of six features, and the peak model specified by Liu et al. [10] consists of eleven features.

tab2
Table 2: Equations and descriptions of peak features.
tab3
Table 3: List of different peak models and sets of features.
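As a hedged illustration of the feature extraction step, three of the Table 2 features (first-half amplitude, valley-to-valley width, and first-half slope, following Dumpala et al.'s definitions) could be computed as below; the function name and the use of sample indices as the time unit are assumptions:

```python
def example_peak_features(x, vp1, pp, vp2):
    """Illustrative amplitude/width/slope features for one peak candidate.

    pp is the candidate peak index; vp1 and vp2 are the valley indices on
    the first and second half waves. Only three of the fourteen features
    are shown; the full set is defined in Table 2.
    """
    amp_first_half = x[pp] - x[vp1]                 # amplitude, first half wave
    width = vp2 - vp1                               # valley-to-valley width
    slope_first_half = amp_first_half / (pp - vp1)  # slope, first half wave
    return amp_first_half, width, slope_first_half
```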
2.3. Feature Selection and Parameters Estimation Using Particle Swarm Optimization

In this stage, the peak features and classifier parameters are found simultaneously using two PSO variants: the standard PSO and RA-PSO algorithms. At the end of this stage, the finest peak model and the optimal classifier parameters, that is, the optimal decision threshold values, are obtained.

The PSO algorithm was first introduced by Kennedy and Eberhart in 1995 [30]. It has since been extensively enhanced [31, 32] and applied in many fields [33–35]. Fundamentally, the PSO algorithm follows several steps, as described in Algorithm 1: (1) initialization, (2) calculation of the fitness function, (3) updating the personal best (pbest) for each particle and the global best (gbest), (4) updating each particle's velocity and position, and (5) terminating based on a stopping criterion.

alg1
Algorithm 1: Standard PSO Algorithm.

In PSO, particles search for the best solution and update their positions from iteration to iteration. Each particle in the population has a position vector and a velocity vector in a D-dimensional search space. The position of particle i at iteration t is denoted x_i(t) and its velocity v_i(t); the pbest of particle i is denoted p_i and the gbest is denoted g. To obtain the updated position of a particle, each particle first changes its velocity as follows: v_i(t + 1) = w(t)v_i(t) + c_1 r_1 (p_i − x_i(t)) + c_2 r_2 (g − x_i(t)) (1), where c_1 is a cognitive coefficient, c_2 is a social coefficient, r_1 and r_2 are random values uniformly distributed in [0, 1], and w(t) is a decreasing inertia weight [36, 37] calculated as w(t) = w_max − ((w_max − w_min)/t_max) t (2), where w_max and w_min denote the maximum and minimum values of the inertia weight, respectively, and t_max is the maximum number of iterations. Then, the particle's position is updated based on (3). Note that this equation is only valid for the continuous version of the PSO algorithm: x_i(t + 1) = x_i(t) + v_i(t + 1) (3). For the binary version of PSO [38], the particle position is updated through a transfer function, which is the main part of the binary version; several studies have shown that this transfer function significantly improves the performance of the standard binary PSO. One such v-shaped transfer function, introduced by Mirjalili and Lewis (2013) [39], is T(v_i(t + 1)) = |tanh(v_i(t + 1))| (4). The particle position is then updated according to the rule x_i(t + 1) = ¬x_i(t) if r < T(v_i(t + 1)), and x_i(t + 1) = x_i(t) otherwise (5), where x_i and v_i represent the position and velocity of the ith particle at iteration t, ¬x_i(t) is the complement of x_i(t), and r is a random value in [0, 1]. Thus, a particle keeps its current position when the transfer function value is lower than the random value and moves to the complement position when it is greater.
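Equations (1)–(5) can be sketched as a single per-particle update. This is an illustrative implementation, assuming the |tanh| v-shaped transfer as one member of the Mirjalili–Lewis family; the function name and default values are assumptions rather than the authors' code:

```python
import math
import random

def pso_update(x, v, pbest, gbest, w, c1=2.0, c2=2.0, vmax=6.0, binary=False):
    """One PSO update for a single particle (position/velocity as lists).

    Continuous dimensions: x <- x + v (Eqs. (1)-(3)). Binary dimensions:
    the v-shaped transfer T(v) = |tanh(v)| decides whether to flip each
    bit (Eqs. (4)-(5)).
    """
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        r1, r2 = random.random(), random.random()
        vi = w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
        vi = max(-vmax, min(vmax, vi))       # velocity clamping
        if binary:
            flip = random.random() < abs(math.tanh(vi))
            xi = 1 - xi if flip else xi      # complement the bit on a flip
        else:
            xi = xi + vi
        new_x.append(xi)
        new_v.append(vi)
    return new_x, new_v
```

In the mixed representation used here, the same velocity update serves both dimension types; only the position rule differs between the continuous and binary halves of the particle.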

Synchronous update in the standard PSO algorithm means that all particles move to their new positions only after all particles have been evaluated, as described in Algorithm 1. In RA-PSO [40], however, a particle updates its position immediately after it is evaluated, without waiting for the evaluation of all particles to complete. Moreover, the particles are drawn at random: in each iteration, a particle is selected uniformly at random from the population a total of N times, where N is the total number of particles. Some particles might therefore be chosen more than once while others might not be chosen at all. The RA-PSO algorithm is described in Algorithm 2.

alg2
Algorithm 2: Random Asynchronous PSO (RA-PSO).
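The asynchronous loop of Algorithm 2 might be sketched as follows; `evaluate` and `update` are hypothetical callbacks standing in for the fitness evaluation (with pbest/gbest bookkeeping) and the velocity/position update:

```python
import random

def ra_pso_iteration(swarm, evaluate, update):
    """One RA-PSO iteration sketch: random asynchronous particle updates.

    Each of the N draws picks a particle uniformly at random, evaluates
    it, and moves it immediately, so within one iteration some particles
    may be updated several times and others not at all.
    """
    n = len(swarm)
    for _ in range(n):
        i = random.randrange(n)   # uniform random choice with replacement
        evaluate(swarm[i])        # fitness + pbest/gbest bookkeeping
        update(swarm[i])          # move this particle right away
```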

To perform feature selection and parameter estimation simultaneously, both the standard PSO and RA-PSO algorithms operate on particles with a mixed representation. Table 4 illustrates the representation of a particle position. Each particle position comprises two types of dimensions, binary and continuous [29]: the binary dimensions encode which peak features are selected, and the continuous dimensions encode the corresponding decision threshold values. If M is the total number of peak features, the particle dimension is 2M, since the number of thresholds equals the number of features.

tab4
Table 4: Representation of particle position.
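A minimal decoder for this mixed representation, assuming the binary bits occupy the first M dimensions and the thresholds the next M (the exact ordering of Table 4 is not reproduced here):

```python
def split_particle(position, n_features):
    """Decode a mixed particle of dimension 2M: the first M entries are
    binary feature-selection bits, the next M are continuous decision
    thresholds. Returns (selected_feature_indices, their_thresholds)."""
    bits = position[:n_features]
    thresholds = position[n_features:2 * n_features]
    selected = [j for j, b in enumerate(bits) if b == 1]
    return selected, [thresholds[j] for j in selected]
```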

In the initialization stage of the PSO algorithm, two groups of values are set: (1) the initial PSO parameters and (2) the initial particle positions. The initial PSO parameters consist of the maximum inertia weight w_max, the minimum inertia weight w_min, the velocity clamping limit, the velocity vector for each particle, the pbest score for each particle, the gbest score, the cognitive coefficient c_1, and the social coefficient c_2. The random values r_1 and r_2 are uniformly distributed between 0 and 1. All particles are randomly placed within the search space.

For the fitness function, the geometric mean (Gmean) is employed, calculated as Gmean = sqrt(TPR × TNR), where TPR = TP/(TP + FN) and TNR = TN/(TN + FP). Here a true peak (TP) is a correctly detected peak point, a true non-peak (TN) is a correctly detected non-peak point, a false peak (FP) is a non-peak point wrongly detected as a peak, a false non-peak (FN) is a peak point wrongly detected as a non-peak, TPR is the true peak rate, and TNR is the true non-peak rate.
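The Gmean fitness can be computed directly from the four counts; the zero-denominator guards are an implementation convenience, not part of the paper's definition:

```python
import math

def gmean(tp, tn, fp, fn):
    """Geometric mean fitness: sqrt(TPR * TNR)."""
    tpr = tp / (tp + fn) if (tp + fn) else 0.0  # true peak rate
    tnr = tn / (tn + fp) if (tn + fp) else 0.0  # true non-peak rate
    return math.sqrt(tpr * tnr)
```

Because Gmean is a product of the two rates, a classifier that detects no peaks at all (TP = 0) scores zero regardless of how many non-peaks it classifies correctly, which is exactly the behaviour discussed for Liu et al.'s peak model in Section 4.1.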

2.4. Rule-Based Classifier

A rule-based classifier is employed to decide, from the extracted features, whether a candidate peak is a true peak or a true non-peak. Each feature has a corresponding threshold value in the classification process. Given a set of selected features, a candidate is identified as a true peak only if all of its feature values are greater than or equal to their decision threshold values; otherwise, the candidate is a true non-peak. The rule has the form: IF f_j ≥ θ_j for every selected feature j THEN true peak ELSE true non-peak, where f_j denotes one of the fourteen peak features, θ_j denotes the decision threshold value of that feature, and a true peak is a predicted peak at a particular peak point location.
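The AND-rule above reduces to a one-line check over the selected features and their thresholds; the function name is illustrative:

```python
def rule_based_classify(features, thresholds):
    """AND-rule classifier: predict a true peak only if every selected
    feature value meets or exceeds its decision threshold."""
    return all(f >= t for f, t in zip(features, thresholds))
```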

3. Experimental Setup

In this section, two experiments are conducted for peak detection of the EEG signal: the first executes the framework without feature selection, and the second with feature selection. The experimental protocols are discussed in the next subsection. Training and testing EEG signals are prepared to evaluate the performance of the proposed framework, and the results are then discussed and analyzed.

Each experiment is conducted over 10 independent runs. In each run, 30 particles perform the feature selection and parameter estimation. The total number of dimensions per particle depends on the number of features in the feature set. The maximum number of iterations is set to 1000. For the initial PSO parameter values, the maximum inertia weight w_max is 0.9 and the minimum inertia weight w_min is 0.4; the cognitive coefficient c_1 and the social coefficient c_2 are both set to 2, values proposed by Shi and Eberhart in 1999 [41]. The random values r_1 and r_2 are uniformly distributed between 0 and 1. The velocity clamping for the binary version is set to 6 [39]. The velocity vector for each particle, the pbest score for each particle, and the gbest score are initialized to 0. The parameter settings of the standard PSO and RA-PSO algorithms are tabulated in Table 5.

tab5
Table 5: Parameters setting of standard PSO and RA-PSO algorithms.
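For reference, the settings listed above can be collected into a single configuration mapping; the key names are illustrative, not taken from the paper:

```python
# PSO settings used in the experiments, collected from the text (cf. Table 5).
PSO_SETTINGS = {
    "runs": 10,
    "particles": 30,
    "max_iterations": 1000,
    "w_max": 0.9,          # maximum inertia weight
    "w_min": 0.4,          # minimum inertia weight
    "c1": 2.0,             # cognitive coefficient
    "c2": 2.0,             # social coefficient
    "vmax_binary": 6.0,    # velocity clamping for the binary dimensions
}
```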
3.1. Experimental Protocols

This study uses the eye movement EEG signal as a case study to evaluate the proposed framework. Observation of the eye movement EEG signal indicates that its most prominent pattern is the peak point, which signifies the brain's response to eye movements. The known peak point locations can be translated into an output, for example, a wheelchair movement command.

The experimental protocol for acquiring this EEG signal was reviewed and approved by the Medical Ethics Committee (MEC) of the University of Malaya Medical Centre (UMMC), and the subject gave written consent prior to the data collection session. The EEG signal was acquired in the Applied Control and Robotics (ACR) Laboratory, Department of Electrical Engineering, Faculty of Engineering, University of Malaya, Malaysia. A healthy subject, a postgraduate student in the Faculty of Engineering, volunteered for the data collection session.

The EEG recording was conducted using the g.MOBIlab portable signal acquisition system. The EEG signal was recorded from channel C4, with channel CZ used as the reference and the ground electrode located on the forehead. The electrodes were placed according to the international 10–20 electrode placement system, and the sampling frequency was set to 256 Hz.

Before the session began, the subject was advised to rest well, so that he could give full focus during the session, and to wash his hair. During the data collection session, the subject was required to be ready within 0 to 4 seconds while waiting for an external cue, which commands the subject to move his eyes to the right. During this standby time, the subject was required not to move his eyes from the frontal position.

At exactly 5 seconds, the external cue appeared on the monitor screen, instructing the subject to move his eyes back to the frontal position. The external cue appeared 40 times, and the total length of the EEG recording is 40 seconds. As a cleanliness procedure, the electrodes and head-cap used in the session were washed afterwards. The filtered EEG signal is shown in Figure 3, with the forty locations of definite peak points highlighted by red circles. The next step is to prepare the training and testing data.

973063.fig.003
Figure 3: Filtered EEG signal.

From the data collection, 40 definite peak point locations were identified by an EEG expert. The 40-second signal contains 10240 sampling points, of which only 40 are peak points; the remaining 10200 sampling points are non-peak points. For the training and testing signals, the training signal is taken from sampling points 1 to 5120, while the remainder of the EEG signal is used for testing. The signal specification is summarized in Table 6.

tab6
Table 6: Signal specifications.

4. Results and Discussions

To evaluate the proposed framework in the training and testing phases, four different measures are used: the average Gmean, the maximum Gmean, the minimum Gmean, and the standard deviation (STDEV).

4.1. Results of Peak Detection Algorithm without Feature Selection

Four peak models are employed to evaluate peak model performance within the proposed framework. The training and testing performance based on the four measures for each model is shown in Table 7. The standard PSO algorithm is used to find the optimal threshold values for each peak model. The results for each peak model are compared with those of the peak detection and feature selection framework based on standard PSO. Note that, in this section, only standard PSO is considered for the framework without feature selection.

tab7
Table 7: Training and testing performance of peak detection for each peak model (without feature selection).

Referring to Table 7, the training performance for average, maximum, minimum, and STDEV is 84.01%, 89.15%, 80.58%, and 4.43% for Dumpala et al.’s peak model; 74.4%, 80.59%, 67.08%, and 3.71% for Acir et al.’s peak model; and 90.98%, 94.76%, 83.66%, and 5.51% for Dingle et al.’s peak model, respectively. The testing performance for average, maximum, minimum, and STDEV is 81.22%, 91.83%, 74.15%, and 9.13% for Dumpala et al.’s peak model; 68.59%, 77.43%, 54.77%, and 6.97% for Acir et al.’s peak model; and 88.78%, 94.75%, 77.44%, and 7.98% for Dingle et al.’s peak model, respectively.

Overall, the average training performance for Dumpala et al.'s, Acir et al.'s, and Dingle et al.'s peak models is greater than the corresponding average testing performance. However, Liu et al.'s peak model gives zero percent performance in both the training and testing phases. This result indicates a limitation of the rule-based classifier when dealing with such feature sets. During training on these feature sets, the particles in the PSO algorithm do not reach the optimal decision threshold values and might also be trapped at local optima. Under the preceding rule, a true peak can only be identified if all the feature values are greater than or equal to their decision threshold values; if even one feature value does not satisfy its threshold, the classifier labels the peak candidate a non-peak point. When this happens to all peak candidates, TP equals zero, and Gmean gives zero percent performance even if TN is nonzero. The results therefore indicate that the presented rule is only workable for Dumpala et al.'s, Acir et al.'s, and Dingle et al.'s peak models.

Comparing the average test performance of the peak models, the highest is obtained by Dingle et al.'s peak model at 88.78%, followed by Dumpala et al.'s at 81.22%; the worst is obtained by Acir et al.'s at 68.59%. From these experimental results, it can be concluded that the finest peak model for the filtered EEG signal is Dingle et al.'s and the worst is Acir et al.'s. The true peak rate and true non-peak rate of the test performance are shown in Table 8. It can also be concluded that the chosen peak models limit the accuracy achievable by the designed framework. Therefore, the feature selection technique using standard PSO is introduced into the designed framework.

tab8
Table 8: TPR and TNR test results for EEG signal (without feature selection).
4.2. Results of Peak Detection Algorithm with Feature Selection

The results of the peak detection algorithm with feature selection are presented in two subsections: feature selection using standard PSO and feature selection using RA-PSO. The results of the two PSO algorithms within the proposed framework are then compared.

4.2.1. Feature Selection Using Standard PSO

The feature sets selected in each of the 10 runs using the standard PSO algorithm are shown in Table 9. The results show a variety of optimal feature combinations that give high classification performance, mostly above 99.69%; the maximum training accuracy is 99.98%. The most significant peak feature, selected in all 10 runs, is the amplitude calculated as the difference between the peak point and the moving average curve (MAC). Another highly significant feature is the amplitude between the peak point and the valley point at the second half wave. Two further features are each chosen 4 times, two features are each chosen 2 times, and one feature is selected only in the 9th run.

tab9
Table 9: Training results: the feature sets of 10 runs using standard PSO.

Based on the results in Table 9, the most frequent combination of three peak features appears 4 times, while two other combinations, one of three features and one of two features, each appear 2 times. There are therefore 3 optimal combinations of features that can be chosen.

Table 10 lists the optimal threshold values for the optimal feature combinations; the threshold values corresponding to the selected peak features are highlighted in the table.

tab10
Table 10: Training results: the optimal decision threshold values of 10 runs using standard PSO.

The averaged training and testing results of the 10 runs using the standard PSO algorithm are tabulated in Table 11. The average training accuracy is 99.91%, the maximum is 99.98%, the minimum is 99.69%, and the standard deviation is 8.07%. For testing, the average accuracy is 93.73%, the maximum is 99.92%, and the minimum is 77.41%.

tab11
Table 11: Average training and testing results of 10 runs with feature selection using standard PSO.

In terms of the peak and non-peak rates (TP and TN) in the training results, the classifier correctly predicted all 20 peak points and, on average, 5113 non-peak points, misclassifying 27 non-peak points. The maximum number of true non-peak points is 5118 and the minimum is 5109, with 20 true peak points in both cases.

In the testing results, the classifier correctly predicted, on average, 18 peak points and 5110 non-peak points. The maximum number of true peak points is 20 and of true non-peak points 5114; the minimum number of true peak points is 12 and of true non-peak points 5106. In general, the average testing result obtained with the selected peak features under the proposed feature selection framework (93.73%) is greater than the average testing result of Dingle et al.'s peak model (88.78%). Dingle et al.'s peak model uses four features, whereas the feature set that gives the higher training performance in this experiment contains only two.

However, the framework based on standard PSO produces a model with relatively high variance, as measured by the STDEV index. STDEV is evaluated to measure the consistency of the algorithm, where a lower STDEV value indicates better generalization. Based on the STDEV results in Table 13, the STDEV values of the standard PSO are 8.07% and 7.18% for training and testing, respectively. These results show a large spread in accuracy between the maximum and minimum classification rates, which is reasonable given the known limitations of the standard PSO algorithm.

4.2.2. Feature Selection Using RA-PSO

Table 12 shows the feature selection results of 10 runs based on the RA-PSO algorithm, with the selected feature set of each run highlighted. The threshold values for all selected features are given in Table 13. The highest Gmean value of the training phase is 99.91%. The most significant peak features are and , with corresponding threshold values of 9.20 and 4. Note that feature is the amplitude calculated from the difference between the peak point and the moving average curve (MAC). Another significant feature is feature , the width between the peak point and the valley point of the second half wave. The features , , and are chosen 3 times, while feature is selected only in the second run.
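As an illustration of the MAC-based amplitude feature described above, the following sketch computes a moving average curve and the peak-to-MAC difference. The window length and the synthetic test signal are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

def mac_amplitude(signal, peak_idx, window=51):
    """Amplitude feature: difference between a peak sample and the
    moving average curve (MAC) at that index. Window size is illustrative."""
    kernel = np.ones(window) / window
    mac = np.convolve(signal, kernel, mode="same")  # moving average curve
    return signal[peak_idx] - mac[peak_idx]

# Example: a synthetic 5 Hz sine with an injected spike at index 500.
t = np.linspace(0, 1, 1000)
sig = np.sin(2 * np.pi * 5 * t)
sig[500] += 3.0                                     # inject a peak
print(mac_amplitude(sig, peak_idx=500))
```

Because the MAC smooths out the sharp spike, the difference at the peak index recovers (approximately) the injected amplitude.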

Table 12: Training results: the feature sets of 10 runs using RA-PSO.
Table 13: Training results: the optimal decision threshold values of 10 runs using RA-PSO.

Three of the ten runs produced the same feature set. Other significant feature sets obtained in this result are the combinations of peak features ( and ) and ( and ); these feature sets also appear 3 times.

Table 14 shows the average training and testing results of 10 runs with feature selection using the RA-PSO algorithm. The average Gmean values of RA-PSO are 99.90% and 98.59% for training and testing, respectively; the maximum values are 99.91% and 99.86%, and the minimum values are 99.87% and 97.33%.

Table 14: Average training and testing results of 10 runs with feature selection using RA-PSO.

In terms of the peak and non-peak rates (TP and TN) for the training results, the classifier correctly predicted, on average, all 20 peak points and 5110 non-peak points, while misclassifying 30 non-peak points. The maximum numbers of true peak and true non-peak points are 20 and 5111, and the minimums are 20 and 5107, respectively.

For the testing results, the classifier correctly predicted 19 peak points and 5106 non-peak points on average. The maximum numbers of true peak and true non-peak points are 20 and 5107, and the minimums are 19 and 5103, respectively.
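The Gmean values reported above combine the peak rate (sensitivity) and the non-peak rate (specificity) into a single score, which is why correct detection of the rare peak class weighs as heavily as rejection of the abundant non-peak class. A minimal sketch computing Gmean from confusion counts; the totals in the example call are illustrative assumptions, not the exact totals used in the experiments:

```python
import math

def gmean(tp: int, fn: int, tn: int, fp: int) -> float:
    """Geometric mean of sensitivity (true peak rate) and
    specificity (true non-peak rate), in percent."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return 100.0 * math.sqrt(sensitivity * specificity)

# Illustrative counts only: 19 of 20 true peaks detected,
# 5106 of 5136 non-peak points correctly rejected (assumed totals).
print(f"Gmean = {gmean(tp=19, fn=1, tn=5106, fp=30):.2f}%")
```

Note that missing a single peak (1 of 20) lowers the Gmean far more than misclassifying 30 of several thousand non-peak points, which makes the metric suitable for this highly imbalanced detection task.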

Compared to the framework using standard PSO, the framework using RA-PSO offers a lower-variance model. The recorded STDEV values of RA-PSO are 1.15% and 1.33% for training and testing, respectively. Therefore, RA-PSO may offer a more reliable and reasonable model than standard PSO, with a consistent classification rate.

5. Conclusions

In this study, a framework of feature selection and parameter estimation is proposed for an EEG signal peak detection algorithm. The proposed framework involves peak candidate detection, feature extraction, feature selection, and classification, and is developed based on the PSO algorithm and a rule-based classifier. In general, a binary PSO based algorithm is utilized for selecting the peak features, while a continuous PSO based algorithm is utilized for optimizing the classifier parameters. Two PSO based algorithms are employed in the proposed framework: (1) standard PSO and (2) RA-PSO. Fourteen peak features, all taken from existing peak models in the time domain approach, were employed in this study. The available peak features are then automatically selected in combinatorial form using the proposed framework. Based on the experimental results of the peak detection algorithm without feature selection, the best peak model is Dingle et al.'s [9], with a highest performance of 88.78%. Meanwhile, the experimental results with feature selection show that the proposed framework with standard PSO can further improve on Dingle et al.'s model; however, the recorded results are inconsistent due to the high variance of the classification accuracy. This unreliability of standard PSO is addressed in the proposed framework using RA-PSO. In general, the proposed feature selection technique offers better performance than any of the peak models without feature selection. For future work, the proposed framework will be applied to more case studies and extended with additional classification methods.
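For readers implementing a similar framework, the feature selection step can be sketched as one update of binary PSO with a sigmoid transfer function, following Kennedy and Eberhart's discrete PSO [38]. All parameter values and names here are illustrative assumptions, not the paper's settings:

```python
import math
import random

def binary_pso_step(position, velocity, pbest, gbest,
                    w=0.9, c1=2.0, c2=2.0, v_max=4.0):
    """One velocity/position update of binary PSO: each bit marks whether
    the corresponding peak feature is selected (1) or not (0).
    Parameter values are illustrative, not the paper's settings."""
    new_pos, new_vel = [], []
    for x, v, pb, gb in zip(position, velocity, pbest, gbest):
        v = (w * v
             + c1 * random.random() * (pb - x)
             + c2 * random.random() * (gb - x))
        v = max(-v_max, min(v_max, v))      # clamp velocity
        s = 1.0 / (1.0 + math.exp(-v))      # sigmoid transfer function
        new_pos.append(1 if random.random() < s else 0)
        new_vel.append(v)
    return new_pos, new_vel

# Example: a 14-bit particle, one bit per candidate peak feature.
random.seed(0)
pos = [random.randint(0, 1) for _ in range(14)]
vel = [0.0] * 14
pos, vel = binary_pso_step(pos, vel, pbest=pos, gbest=[1] * 14)
```

In a full implementation, this binary update would run alongside a continuous PSO that tunes the decision thresholds of the rule-based classifier, with the Gmean of the resulting classifier serving as the fitness of each particle.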

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This project is funded by the Ministry of Education Malaysia for High Impact Research Grant (UM-D000016-16001), University of Malaya, Research Acculturation Grant Scheme (RDU121403), Universiti Malaysia Pahang, Fundamental Research Grant Scheme (VOT 4F331), Universiti Teknologi Malaysia, and MyPhD scholarship from Ministry of Education Malaysia. The authors would like to thank the Faculty of Engineering, the University of Malaya, for supporting this research. The authors also would like to acknowledge the Editor and anonymous reviewers for their valuable comments and suggestions.

References

  1. A. Bulling, J. A. Ward, H. Gellersen, and G. Tröster, “Eye movement analysis for activity recognition using electrooculography,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 4, pp. 741–753, 2011.
  2. H. Zeng and A. G. Song, “Removal of EOG artifacts from EEG recordings using stationary subspace analysis,” The Scientific World Journal, vol. 2014, Article ID 259121, 9 pages, 2014.
  3. R. Tafreshi, A. Jaleel, J. Lim, and L. Tafreshi, “Automated analysis of ECG waveforms with atypical QRS complex morphologies,” Biomedical Signal Processing and Control, vol. 10, pp. 41–49, 2014.
  4. N. Xu, X. Gao, B. Hong, X. Miao, S. Gao, and F. Yang, “BCI competition 2003—data set IIb: enhancing P300 wave detection using ICA-based subspace projections for BCI applications,” IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 1067–1072, 2004.
  5. Q. G. Ma and Q. Shang, “The influence of negative emotion on the Simon effect as reflected by P300,” The Scientific World Journal, vol. 2013, Article ID 516906, 6 pages, 2013.
  6. K. P. Indiradevi, E. Elias, P. S. Sathidevi, S. Dinesh Nayak, and K. Radhakrishnan, “A multi-level wavelet approach for automatic detection of epileptic spikes in the electroencephalogram,” Computers in Biology and Medicine, vol. 38, no. 7, pp. 805–816, 2008.
  7. N. Acir and C. Güzeliş, “Automatic spike detection in EEG by a two-stage procedure based on support vector machines,” Computers in Biology and Medicine, vol. 34, no. 7, pp. 561–575, 2004.
  8. S. R. Dumpala, S. Narasimha Reddy, and S. K. Sarna, “An algorithm for the detection of peaks in biological signals,” Computer Programs in Biomedicine, vol. 14, no. 3, pp. 249–256, 1982.
  9. A. A. Dingle, R. D. Jones, G. J. Carroll, and W. R. Fright, “A multistage system to detect epileptiform activity in the EEG,” IEEE Transactions on Biomedical Engineering, vol. 40, no. 12, pp. 1260–1268, 1993.
  10. H. S. Liu, T. Zhang, and F. S. Yang, “A multistage, multimethod approach for automatic detection and classification of epileptiform EEG,” IEEE Transactions on Biomedical Engineering, vol. 49, no. 12, pp. 1557–1566, 2002.
  11. N. Acir, I. Öztura, M. Kuntalp, B. Baklan, and C. Güzeliş, “Automatic detection of epileptiform events in EEG by a three-stage procedure based on artificial neural networks,” IEEE Transactions on Biomedical Engineering, vol. 52, no. 1, pp. 30–40, 2005.
  12. W. Lu, M. M. Nystrom, P. J. Parikh et al., “A semi-automatic method for peak and valley detection in free-breathing respiratory waveforms,” Medical Physics, vol. 33, no. 10, pp. 3634–3636, 2006.
  13. L. Xu, M. Q.-H. Meng, R. Liu, and K. Wang, “Robust peak detection of pulse waveform using height ratio,” in Proceedings of the 30th IEEE Annual International Conference of the Engineering in Medicine and Biology Society, pp. 2856–3859, British Columbia, Canada, 2008.
  14. R. Barea, L. Boquete, S. Ortega, E. López, and J. M. Rodríguez-Ascariz, “EOG-based eye movements codification for human computer interaction,” Expert Systems with Applications, vol. 39, no. 3, pp. 2677–2683, 2012.
  15. M. S. Manikandan and K. P. Soman, “A novel method for detecting R-peaks in electrocardiogram (ECG) signal,” Biomedical Signal Processing and Control, vol. 7, no. 2, pp. 118–128, 2012.
  16. A. Juozapavičius, G. Bacevičius, D. Bugelskis, and R. Samaitienė, “EEG analysis—automatic spike detection,” Journal of Nonlinear Analysis: Modelling and Control, vol. 16, no. 4, pp. 375–386, 2011.
  17. L. Senhadji and F. Wendling, “Epileptic transient detection: wavelets and time-frequency approaches,” Neurophysiologie Clinique, vol. 32, no. 3, pp. 175–192, 2002.
  18. M. Putignano, A. Intermite, and P. Welsch, “A non-linear algorithm for current signal filtering and peak detection in SiPM,” Journal of Instrumentation, vol. 7, pp. 1–19, 2012.
  19. R. E. Bonner, L. Crevasse, M. IrenéFerrer, and J. C. Greenfield Jr., “A new computer program for analysis of scalar electrocardiograms,” Computers and Biomedical Research, vol. 5, no. 6, pp. 629–653, 1972.
  20. V. P. Oikonomou, A. T. Tzallas, and D. I. Fotiadis, “A Kalman filter based methodology for EEG spike enhancement,” Computer Methods and Programs in Biomedicine, vol. 85, no. 2, pp. 101–108, 2007.
  21. Y.-C. Liu, C.-C. K. Lin, J.-J. Tsai, and Y.-N. Sun, “Model-based spike detection of epileptic EEG data,” Sensors, vol. 13, pp. 12536–12547, 2013.
  22. N. Sinno and K. Tout, “Analysis of epileptic events using wavelet packets,” The International Arab Journal of Information Technology, vol. 5, no. 4, pp. 165–169, 2008.
  23. Z. Ji, X. Wang, T. Sugi, S. Goto, and M. Nakamura, “Automatic spike detection based on real-time multi-channel template,” in Proceedings of the 4th International Conference on Biomedical Engineering and Informatics (BMEI '11), pp. 648–652, IEEE, October 2011.
  24. T. P. Exarchos, A. T. Tzallas, D. I. Fotiadis, S. Konitsiotis, and S. Giannopoulos, “EEG transient event detection and classification using association rules,” IEEE Transactions on Information Technology in Biomedicine, vol. 10, no. 3, pp. 451–457, 2006.
  25. C. J. James, R. D. Jones, P. J. Bones, and G. J. Carroll, “Detection of epileptiform discharges in the EEG by a hybrid system comprising mimetic, self-organized artificial neural network, and fuzzy logic stages,” Clinical Neurophysiology, vol. 110, no. 12, pp. 2049–2063, 1999.
  26. N. Acir, “Automated system for detection of epileptiform patterns in EEG by using a modified RBFN classifier,” Expert Systems with Applications, vol. 29, no. 2, pp. 455–462, 2005.
  27. J. F. Gao, Y. Yang, P. Lin, P. Wang, and C. X. Zheng, “Automatic removal of eye-movement and blink artifacts from EEG signals,” Brain Topography, vol. 23, no. 1, pp. 105–114, 2010.
  28. S. B. Wilson and R. Emerson, “Spike detection: a review and comparison of algorithms,” Clinical Neurophysiology, vol. 113, no. 12, pp. 1873–1881, 2002.
  29. C.-L. Huang and J.-F. Dun, “A distributed PSO-SVM hybrid system with feature selection and parameter optimization,” Applied Soft Computing, vol. 8, no. 4, pp. 1381–1391, 2008.
  30. J. Kennedy and R. C. Eberhart, “Particle swarm optimization,” in Proceedings of the IEEE International Conference on Neural Networks (ICNN '95), pp. 1942–1948, Perth, Australia, November-December 1995.
  31. K. S. Lim, Z. Ibrahim, S. Buyamin et al., “Improving vector evaluated particle swarm optimisation by incorporating nondominated solutions,” The Scientific World Journal, vol. 2013, Article ID 510763, 19 pages, 2013.
  32. M. S. Mohamad, S. Omatu, S. Deris, M. Yoshioka, A. Abdullah, and Z. Ibrahim, “An enhancement of binary particle swarm optimization for gene selection in classifying cancer classes,” Algorithms for Molecular Biology, vol. 8, article 15, 2013.
  33. Z. Ibrahim, N. K. Khalid, J. A. A. Mukred et al., “A DNA sequence design for DNA computation based on binary vector evaluated particle swarm optimization,” International Journal of Unconventional Computing, vol. 8, no. 2, pp. 119–137, 2012.
  34. A. Adam, A. F. Zainal Abidin, Z. Ibrahim, A. R. Husain, Z. Md Yusof, and I. Ibrahim, “A particle swarm optimization approach to Robotic Drill route optimization,” in Proceedings of the 4th International Conference on Mathematical Modelling and Computer Simulation (AMS '10), pp. 60–64, May 2010.
  35. M. N. Ayob, Z. M. Yusof, A. Adam et al., “A particle swarm optimization approach for routing in VLSI,” in Proceedings of the 2nd International Conference on Computational Intelligence, Communication Systems and Networks (CICSyN '10), pp. 49–53, July 2010.
  36. Y. Shi and R. Eberhart, “Modified particle swarm optimizer,” in Proceedings of the IEEE International Conference on Evolutionary Computation, pp. 69–73, Anchorage, Alaska, USA, May 1998.
  37. Y. Shi and R. C. Eberhart, “Parameter selection in particle swarm optimization,” in Proceedings of the 7th Annual Conference on Evolutionary Programming, pp. 591–601, San Diego, Calif, USA, 1998.
  38. J. Kennedy and R. C. Eberhart, “Discrete binary version of the particle swarm algorithm,” in Proceedings of the IEEE International Conference on Computational Cybernetics and Simulation, pp. 4104–4108, IEEE, Orlando, Fla, USA, October 1997.
  39. S. Mirjalili and A. Lewis, “S-shaped versus V-shaped transfer functions for binary Particle Swarm Optimization,” Swarm and Evolutionary Computation, vol. 9, pp. 1–14, 2013.
  40. J. Rada-Vilela, M. Zhang, and W. Seah, “A performance study on synchronicity and neighborhood size in particle swarm optimization,” Soft Computing, vol. 17, no. 6, pp. 1019–1030, 2013.
  41. Y. Shi and R. Eberhart, “Empirical study of particle swarm optimization,” in Proceedings of the IEEE Conference on Evolutionary Computation, pp. 1945–1950, Washington, DC, USA, 1999.