Mathematical Problems in Engineering
Volume 2014 | Article ID 813197 | 13 pages | https://doi.org/10.1155/2014/813197

Research Article | Open Access
Special Issue: Hybrid Intelligent Techniques for Benchmark Functions and Real-World Optimization Problems

Wavelets and Morphological Operators Based Classification of Epilepsy Risk Levels

Academic Editor: P. Karthigaikumar
Received: 04 Feb 2014
Accepted: 03 Mar 2014
Published: 07 Apr 2014

Abstract

The objective of this paper is to compare the performance of Singular Value Decomposition (SVD), Expectation Maximization (EM), and Modified Expectation Maximization (MEM) as postclassifiers for the classification of epilepsy risk levels obtained from features extracted from EEG signals through wavelet transforms and morphological filters. The code converter acts as a level-one classifier. Seven features, namely energy, variance, positive and negative peaks, spike and sharp waves, events, average duration, and covariance, are extracted from the EEG signals; four of these (positive and negative peaks, spike and sharp waves, events, and average duration) are extracted using Haar, dB2, dB4, and Sym8 wavelet transforms with hard and soft thresholding methods. The same four features are also extracted through morphological filters. The performance of the code converter and the classifiers is compared in terms of the Performance Index (PI) and the Quality Value (QV). The Performance Index and Quality Value of the code converters are low, at 33.26% and 12.74, respectively. The highest PI of 98.03% and QV of 23.82 are attained with the dB2 wavelet under hard thresholding for the SVD classifier. All the postclassifiers settle at PI values of more than 90% with QV of about 20.

1. Introduction

The Electroencephalogram (EEG) is a measure of the cumulative firing of neurons in various parts of the brain [1]. It contains information regarding changes in the electrical potential of the brain obtained from a given set of recording electrodes. These data include the characteristic waveforms with accompanying variations in amplitude, frequency, phase, and so forth, as well as brief occurrences of electrical patterns such as spindles, sharp waves, and spikes [2]. EEG patterns have been shown to be modified by a wide range of variables, including biochemical, metabolic, circulatory, hormonal, neuroelectric, and behavioral factors [3]. In the past, the encephalographer was able, by visual inspection, to qualitatively distinguish normal EEG activity from localized or generalized abnormalities contained within relatively long EEG records [4]. The most important abnormality detectable from the EEG is epilepsy [5]. Epilepsy is characterized by uncontrolled excessive activity or potential discharge by either a part or all of the central nervous system [5]. The different types of epileptic seizures are characterized by different EEG waveform patterns [6]. With real-time monitoring to detect epileptic seizures gaining widespread recognition, the advent of computers has made it possible to effectively apply a host of methods for quantifying the changes occurring in the EEG signals [4]. The EEG is an important clinical tool for diagnosing, monitoring, and managing neurological disorders related to epilepsy [7]. This disorder is characterized by sudden, recurrent, and transient disturbances of mental function and/or movements of the body that result from excessive discharges of groups of brain cells [8]. The presence of epileptiform activity in the EEG confirms the diagnosis of epilepsy, which is sometimes confused with other disorders producing similar seizure-like activity [9]. Between seizures, the EEG of a patient with epilepsy may be characterized by occasional epileptiform transients, that is, spikes and sharp waves [10]. Seizures themselves are characterized by short, episodic, synchronous neuronal discharges of considerably enlarged amplitude. This abnormal synchrony may occur locally in the brain (partial seizures), visible only in a few channels of the EEG signal, or may involve the whole brain (generalized seizures), visible in all channels of the EEG signal [11, 12].

1.1. Related Works

In the last three decades, the analysis and classification of epilepsy from EEG signals have become a fascinating research area. A large volume of work has been performed, covering spike detection, classification of epileptic seizures, ictal and interictal analysis, linear and nonlinear analysis, and soft computing methods. Gotman [9] discussed improvements in epileptic seizure detection and evaluation. Pang et al. [10] summarized the history and evaluation of various spike detection algorithms. Reference [13] discussed different neural networks as function approximators and universal approximators for epilepsy diagnosis. Sarang [14] encapsulated the performance of spike detection algorithms in terms of sensitivity, specificity, and average detection, and ranked them by good detection ratio (GDR). McSharry et al. [8] discussed and enumerated nonlinear methods and their relevance to predicting epilepsy by treating EEG samples as time series. Majumdar [15] reviewed various soft computing approaches to EEG signal processing with an emphasis on pattern recognition techniques, focusing mainly on dimensionality reduction, SNR problems, and linear and soft computing techniques; he concluded that neural network and Bayesian approaches are two popular choices, even though linear statistical discriminants are easier to implement. A number of Support Vector Machine (SVM) approaches are also discussed there for their classification accuracy. The EEG signal thus carries a great deal of information regarding the working of the brain, yet classification and estimation of the signals remain inadequate. As there is no explicit category suggested by the experts, visual examination of EEG signals in the time domain may be deficient, while routine clinical diagnosis necessitates the analysis of EEG signals [13]. Hence, automation and computer methods have been utilized for this purpose. A recent multicenter clinical analysis indicates confirmation of premonitory symptoms in 6.2% of 500 patients with epilepsy [16], and another interview-based study found that 50% of 562 patients felt “auras” before seizures. These clinical data provide motivation to search for premonitory alterations in EEG recordings of the brain and to employ a device that can act without human intervention to forewarn the patient [17]. On the other hand, despite decades of research, existing techniques do not yield satisfactory performance. This paper addresses the application and comparison of SVD, EM, and MEM classifiers for the optimization of code converter outputs in the classification of epilepsy risk levels.

Webber et al. [18] proposed a three-stage design for an EEG seizure detection system. The first stage of the seizure detector compresses the raw data stream and transforms the data into variables that represent the state of the subject’s EEG; these state measures are referred to as context parameters. The second stage is a neural network that transforms the state measures into a smaller number of parameters intended to represent measures of recognized phenomena, such as a small seizure in the EEG [9, 10]. The third stage consists of a few simple rules that confirm the existence of the phenomena under consideration. Similarly, this paper presents a three-stage design for epilepsy risk level classification. The first stage extracts the required seven distinct features from the raw EEG data stream of the patient in the time domain. The next stage transforms these features through a code converter into a code word of seven alphabets, which represents the patient’s state in five distinct risk levels for a two-second epoch of EEG signal per channel. The last stage is an SVD, EM, or MEM postclassifier, which optimizes the epilepsy risk level of the patient. The organization of the paper is as follows. Section 1 introduces the paper, and the materials and methods are discussed in Section 2. Section 3 describes SVD, EM, and MEM as postclassifiers for epilepsy risk level classification. Results are discussed in Section 4, and the paper is concluded in Section 5.

2. Materials and Methods

2.1. Data Acquisition of EEG Signals

For the comparative study and to analyze the performance of the pre- and postclassifiers, we obtained the raw EEG data of 20 epileptic patients, in European Data Format (EDF), who underwent treatment in the Neurology Department of Sri Ramakrishna Hospital, Coimbatore. Great attention was given to the preprocessing stage of the EEG signals, because it is important to use the best technique to extract the useful information embedded in these nonstationary biomedical signals. The obtained EEG records were continuous recordings of about 30 seconds, and each was divided into epochs of two-second duration. A two-second epoch is long enough to detect any significant changes in activity and the presence of artifacts, and also short enough to avoid redundancy in the signal [19]. For each patient we have 16 channels over three epochs. With the EEG signal having a maximum frequency of 50 Hz, each epoch was sampled at 200 Hz; each sample corresponds to an instantaneous amplitude value of the signal, totaling 400 values per epoch. Figure 1 shows the flow diagram of the epilepsy risk level classification system. Four types of artifacts were present in our data: eye blink, electromyography (EMG), chewing, and motion artifacts [20]. Approximately 1% of the data consisted of artifacts. We made no attempt to select a certain number of artifacts or artifacts of a specific nature; the objective of including artifacts was to have spike versus nonspike categories of waveforms, the latter being normal background EEG and/or artifacts [21]. In order to train and test the feature extractor and classifiers, a suitable segment of EEG data has to be selected. In our experiment, the training and testing data were selected through a short sampling window, and all EEG signals were visually examined by a qualified EEG technologist. A neurologist’s decision regarding EEG features (or a normal EEG segment) was used as the gold standard. We chose a sample window of 400 points corresponding to 2 seconds of EEG data. This width can cover almost all types of transient epileptic patterns in the EEG signal, even though a seizure often lasts longer [22].

In order to classify the risk level of the patients, the following parameters were chosen.

(1) For every epoch, the energy is calculated as [4]
    E = \sum_{i=1}^{n} x_i^2,
where x_i is the sample value of the signal and n is the number of such samples.

(2) One of the simplest linear statistics that may be used for investigating the dynamics underlying the EEG is the variance of the signal calculated in consecutive nonoverlapping windows. The variance (\sigma^2) is given by
    \sigma^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)^2,
where \mu is the average amplitude of the epoch.

(3) From the average duration D and the peak-to-peak durations t_i (defined in item (4) below), the covariance of duration is determined as
    CD = \frac{\sum_{i=1}^{p} (D - t_i)^2}{p\, D^2}.

The following are the four parameters which are extracted using morphological filters and wavelet transforms.

(1) The total number of positive and negative peaks found above the threshold.

(2) Spikes are detected when the zero-crossing duration lies between 20 and 70 milliseconds, and sharp waves are detected when it lies between 70 and 200 milliseconds.

(3) The total number of spikes and sharp waves is counted as the events.

(4) The average duration of these waves is given by
    D = \frac{\sum_{i=1}^{p} t_i}{p},
where t_i is the peak-to-peak duration and p is the number of such durations.
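To make the time-domain step concrete, the sketch below computes these parameters for a single epoch with NumPy. It is a minimal illustration under stated assumptions: the peak-threshold rule, the zero-crossing spike/sharp-wave test, and all helper names are ours, not the authors' code.

```python
import numpy as np

def epoch_features(x, fs=200, peak_threshold=None):
    """Illustrative time-domain features for one EEG epoch (assumed 2 s at 200 Hz)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    energy = np.sum(x ** 2)                       # E = sum of squared samples
    mu = x.mean()
    variance = np.sum((x - mu) ** 2) / n          # sigma^2

    if peak_threshold is None:
        peak_threshold = 2.0 * x.std()            # assumed rule of thumb, not from the paper

    # Positive and negative peaks beyond the threshold (slope sign changes).
    dx = np.diff(x)
    pos = np.sum((dx[:-1] > 0) & (dx[1:] <= 0) & (x[1:-1] > peak_threshold))
    neg = np.sum((dx[:-1] < 0) & (dx[1:] >= 0) & (x[1:-1] < -peak_threshold))
    peaks = int(pos + neg)

    # Intervals between zero crossings classify spikes (20-70 ms) and sharp waves (70-200 ms).
    zc = np.where(np.diff(np.sign(x - mu)) != 0)[0]
    t = np.diff(zc) / fs                          # peak-to-peak durations t_i in seconds
    spikes = int(np.sum((t >= 0.020) & (t < 0.070)))
    sharps = int(np.sum((t >= 0.070) & (t <= 0.200)))
    events = spikes + sharps

    if t.size:
        D = t.mean()                                              # average duration
        cov_duration = np.sum((D - t) ** 2) / (t.size * D ** 2)   # covariance of duration
    else:
        D = cov_duration = 0.0

    return {"energy": energy, "variance": variance, "peaks": peaks,
            "events": events, "sharp_waves": sharps,
            "average_duration": D, "covariance": cov_duration}
```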

2.2. Wavelet Transforms for Feature Extraction

Brain signals are nonstationary in nature. In order to capture the transients and events of the waveforms, we need to observe the signal in time and frequency simultaneously. Hence, wavelet transforms are a good choice for extracting transient features and events from the EEG signals. The wavelet-transform-based feature extraction is discussed below.

Let us consider a function x(t). The wavelet transform of this function is defined as [23]
    W(a, b) = \int_{-\infty}^{\infty} x(t)\, \psi_{a,b}^{*}(t)\, dt,
where \psi^{*} is the complex conjugate of the wavelet function \psi(t).

Within this set of analyzing functions, the wavelet family is deduced from the mother wavelet \psi(t) by [24]
    \psi_{a,b}(t) = \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{t - b}{a}\right),
where a is the dilation (scale) parameter and b is the translation parameter.

The feature extraction process is initialized by studying the effect of simple Haar thresholding. The Haar wavelet function can be represented as [25]
    \psi(t) = \begin{cases} 1, & 0 \le t < 1/2, \\ -1, & 1/2 \le t < 1, \\ 0, & \text{otherwise}. \end{cases}

Wavelet thresholding is a signal estimation technique that exploits the capabilities of the wavelet transform for signal denoising or smoothing. It depends on the choice of a threshold parameter \lambda (the threshold level), which determines to a great extent the efficacy of denoising.

Typical threshold operators for denoising include the hard threshold, the soft threshold, and the affine (firm) threshold. Hard thresholding is defined as [24]
    \hat{w} = \begin{cases} w, & |w| > \lambda, \\ 0, & |w| \le \lambda. \end{cases}
Soft thresholding (wavelet shrinkage) is given by
    \hat{w} = \begin{cases} \operatorname{sgn}(w)\,(|w| - \lambda), & |w| > \lambda, \\ 0, & |w| \le \lambda. \end{cases}
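As a concrete illustration of this step, the sketch below decomposes an epoch with PyWavelets and thresholds the detail coefficients before reconstruction. The decomposition level and the universal (sqtwolog-style) threshold estimate are assumptions made for the example, not the authors' exact settings.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db2", mode="hard", level=4):
    """Threshold the detail coefficients of an EEG epoch and reconstruct it.

    wavelet : 'haar', 'db2', 'db4', or 'sym8' (the wavelets used in the paper).
    mode    : 'hard' or 'soft' thresholding, as defined above.
    level   : decomposition depth (assumed value for this sketch).
    """
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Universal threshold estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    lam = sigma * np.sqrt(2.0 * np.log(len(x)))
    denoised = [coeffs[0]] + [pywt.threshold(c, lam, mode=mode) for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(x)]

# Example: clean a 2-second, 200 Hz epoch before running the feature
# extraction sketched in Section 2.1.
epoch = np.random.randn(400)                 # placeholder for a real EEG epoch
clean = wavelet_denoise(epoch, wavelet="db2", mode="hard")
```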

Haar, dB2, dB4, and Sym8 wavelets with hard thresholding and with four soft thresholding methods (Heursure, Minimaxi, Rigrsure, and Sqtwolog) are used to extract the parameters from the EEG signals. With the help of expert knowledge and our experience in [5, 20, 26], we identified the following parametric ranges for the five linguistic risk levels (normal, low, medium, high, and very high) in the clinical description of the patients, as shown in Table 1.


Table 1: Parametric ranges of the normalized parameters for the five risk levels.

Normalized parameters | Normal | Low | Medium | High | Very high
Energy | 0–1 | 0.7–3.6 | 2.9–8.2 | 7.6–11 | 9.2–30
Variance | 0–0.3 | 0.15–0.45 | 0.4–2.2 | 1.6–4.3 | 3.8–10
Peaks | 0–2 | 1–4 | 3–8 | 6–16 | 12–20
Events | 0–2 | 1–5 | 4–10 | 7–16 | 15–28
Sharp waves | 0–2 | 1–5 | 4–8 | 7–11 | 10–12
Average duration | 0–0.3 | 0.15–0.45 | 0.4–2.4 | 1.8–4.6 | 3.6–10
Covariance | 0–0.05 | 0.025–0.1 | 0.09–0.4 | 0.28–0.64 | 0.54–1

The output of the code converter is encoded as a string of seven codes, one for each EEG signal parameter, based on the epilepsy risk level threshold values set in Table 1. The expert-defined threshold values contain noise in the form of overlapping ranges. We therefore encode the patient risk level into the next higher risk level rather than the lower one. For example, if the input energy is 3.4, then the code converter output will be at the medium risk level instead of the low level [26].
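A minimal sketch of this encoding rule is shown below; the list layout and the clamping of out-of-range values are our illustrative assumptions, not the authors' implementation. Scanning the overlapping ranges from the highest risk downwards implements the rule of assigning the higher risk level.

```python
# Energy ranges from Table 1, one (lower, upper) bound per risk level,
# ordered from the highest risk downwards so overlaps resolve upwards.
ENERGY_RANGES = [
    ("very high", 9.2, 30),
    ("high",      7.6, 11),
    ("medium",    2.9, 8.2),
    ("low",       0.7, 3.6),
    ("normal",    0.0, 1.0),
]

def encode_energy(value, ranges=ENERGY_RANGES):
    """Map a normalized energy value to a risk level using Table 1."""
    for level, lo, hi in ranges:
        if lo <= value <= hi:
            return level
    return "very high" if value > 30 else "normal"   # clamp out-of-range values

print(encode_energy(3.4))   # -> 'medium', not 'low', because the ranges overlap
```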

2.3. Code Converter as a Preclassifier

The encoding method processes the sampled output values as individual codes. Since working with definite alphabets is easier than processing numbers with large decimal accuracy, we encode the outputs as a string of alphabets. The alphabetical representation of the five output classifications is shown in Table 2.


Table 2: Alphabetical (coded) representation of the risk levels.

Risk level | Code
Normal | U
Low | W
Medium | X
High | Y
Very high | Z

The ease of operating on a character representation is evident compared with performing cumbersome numerical operations. By encoding each risk level as one of the five states, a string of seven characters is obtained for each of the sixteen channels of each epoch. A sample output with actual patient readings is shown in Table 3 for eight channels over three epochs.


Table 3: Sample code converter output for eight channels over three epochs.

Channel | Epoch 1 | Epoch 2 | Epoch 3
1 | YYYYXXX | ZYYWYYY | YYYXYZZ
2 | YYYXYYY | ZZYZZZZ | YYYXYZZ
3 | YYYYYYY | ZZYZZZZ | ZYYYZZZ
4 | ZYYYZZZ | ZZYZYYY | YYYXXZZ
5 | YYYYYYY | YYYXYYY | YYYYYZZ
6 | YYYYYYY | YYYXYYY | YYYXYYX
7 | YYYYYYY | YYYYYYY | YYYYYYZ
8 | ZZYZYZZ | ZZYZZZZ | ZZYZZZZ

It can be seen that channel 1 shows low risk levels while channel 7 shows high risk levels. The risk level classification also varies between adjacent epochs. There are sixteen channels of input to the system over three epochs, giving a total of forty-eight input and output pairs. Since we deal with known cases of epileptic patients, it is necessary to find the exact level of epilepsy risk in each patient. This will also aid the development of automated systems that can precisely classify the risk level of the epileptic patient under observation; hence an optimization is necessary. This will improve the classification of the patient and provide the EEG with a clear picture [20]. The outputs of a channel are not identical across the three epochs and vary in condition from one epoch to the next; in such cases the energy factor is predominant and results in a high risk level for two epochs and a low risk level for the middle epoch. Channels five and six settle at a high risk level. Because of this type of mixed-state output, no proper conclusion can be drawn directly; therefore we group four adjacent channels and optimize the risk level. Frequently repeated patterns show the average risk level of the grouped channels, and identical individual patterns indicate the constant risk level associated with a particular epoch. Whether a group of channels is at the high risk level is identified by the occurrence of at least one such pattern in an epoch. The risk level also varies abruptly across epochs and eventually across channels, so we are left in a dilemma and cannot reach a final verdict directly. The five risk levels are encoded as binary strings of five bits using a weighted positional representation, as shown in Table 4. Encoding each output risk level gives a string of seven alphabets, the fitness of which is calculated as the sum of the probabilities of the individual alphabets. For example, if the output of an epoch is encoded with four Z codes and one each of Y, X, and W, its fitness would be 4 × 0.086021 + 0.043011 + 0.021505 + 0.010752 = 0.419352.


Table 4: Binary representation, weights, and probabilities of the risk levels.

Risk level | Code | Binary string | Weight | Probability
Very high | Z | 10000 | 16/31 = 0.51612 | 0.086021
High | Y | 01000 | 8/31 = 0.25806 | 0.043011
Medium | X | 00100 | 4/31 = 0.12903 | 0.021505
Low | W | 00010 | 2/31 = 0.06451 | 0.010752
Normal | U | 00001 | 1/31 = 0.03225 | 0.005376
Total | | 11111 = 31 | 1 |
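The fitness computation is simple enough to state as code. The sketch below uses the probabilities of Table 4; the particular string passed in is only an illustrative arrangement of the four Z, one Y, one X, and one W codes from the worked example above (the order does not affect the sum).

```python
# Probabilities of the five codes, from Table 4.
PROB = {"Z": 0.086021, "Y": 0.043011, "X": 0.021505, "W": 0.010752, "U": 0.005376}

def fitness(code_word):
    """Fitness of a seven-character epoch code word: sum of its code probabilities."""
    return sum(PROB[c] for c in code_word)

print(round(fitness("ZZZZYXW"), 6))   # 0.419352, matching the example above
```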

The Performance Index PI is defined as [19]
    PI = \frac{PC - MC - FA}{PC} \times 100,
where PC is the perfect classification, MC the missed classification, and FA the false alarm, each expressed as a percentage.

The performance of the code converter is 44.81%. A perfect classification occurs when the physician and the code converter agree on the same epilepsy risk level; a missed classification reports a high risk level as a low one; and a false alarm reports a low risk level as a high one with respect to the physician’s diagnosis. The other performance measures are defined below.

The sensitivity Se and specificity Sp are represented as [19]
    Se = \frac{PC}{PC + FA} \times 100, \qquad Sp = \frac{PC}{PC + MC} \times 100.

The relative risk factor, the ratio of sensitivity to specificity, indicates the stability and sensitivity of the classifier. For an ideal classifier the relative risk is unity; a more sensitive classifier has this factor slightly above unity, whereas a sluggish classifier pushes it below unity. For the code converter we obtained a low Performance Index of just 40% and values of 83.33%, 71.42%, 78.87%, and 1.166 for sensitivity, specificity, average detection, and relative risk, respectively. Because of these low performance measures, it is essential to optimize the output of the code converter. The Performance Index of the code converter outputs using the different wavelet transforms with hard thresholding is tabulated in Table 5.
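For reference, the sketch below collects these measures in one helper, assuming PC, MC, and FA are supplied as percentages; it is an illustration of the formulas above, not the authors' evaluation code.

```python
def classifier_metrics(pc, mc, fa):
    """Performance measures defined above; pc, mc, fa are percentages."""
    pi = (pc - mc - fa) / pc * 100.0     # Performance Index
    se = pc / (pc + fa) * 100.0          # sensitivity
    sp = pc / (pc + mc) * 100.0          # specificity
    return {"PI": pi, "Se": se, "Sp": sp, "relative_risk": se / sp}

# Example using the Haar hard-thresholding row of Table 5 as input.
print(classifier_metrics(61.45, 15.625, 22.91))
```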


Table 5: Performance of the code converter outputs for hard thresholding (values in %).

Wavelets | Perfect classification | Missed classification | False alarm | Performance Index
Haar | 61.45 | 15.625 | 22.91 | 37.58
Db2 | 61.18 | 16.14 | 22.65 | 36.44
Db4 | 64.57 | 12.49 | 22.91 | 44.72
Sym8 | 63.52 | 11.44 | 23.95 | 44.81

2.4. Rhythmicity of Code Converter

We now identify the rhythmicity of the code converter, which is associated with the nonlinearities of the epilepsy risk levels. Let the rhythmicity be defined as [10]
    R = \frac{C}{D},
where C is the number of categories of patterns and D is the total number of patterns, which is 960 in our case. For an ideal classifier C should be one and R = 1/960 = 0.00104. Table 6 shows the rhythmicity of the code converter for hard thresholding with each wavelet; the value of R deviates strongly from its ideal value. Hence, it is necessary to optimize the code converter outputs to arrive at a singleton risk level. The following section discusses morphological filtering of EEG signals.


Table 6: Rhythmicity of the code converter for hard thresholding.

Wavelets | Number of categories of patterns | Rhythmicity
Haar | 31 | 0.032292
Db2 | 41 | 0.042708
Db4 | 30 | 0.03125
Sym8 | 45 | 0.046875
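The rhythmicity measure itself reduces to a one-line count of distinct patterns, as the small sketch below illustrates for a list of seven-character code words.

```python
def rhythmicity(code_words):
    """R = C / D: distinct patterns C over total patterns D (960 in the paper)."""
    return len(set(code_words)) / len(code_words)

# For example, 31 distinct categories among 960 patterns (the Haar row of
# Table 6) gives R = 31 / 960 = 0.032292.
```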

2.5. Morphological Filtering for Feature Extraction of EEG Signals

Morphological filtering was chosen over other methods, such as the temporal approach to the EEG signal and the wavelet based approach, because it can determine the spikes precisely with a very high accuracy rate [14]. Let the EEG signal be a function f(x), and let B be a structuring element; together they are subsets of the Euclidean space E.

Accordingly, the Minkowski addition and subtraction [6] for the function f are given by the relations
    (f \oplus B)(x) = \max_{y \in B} \{ f(x - y) + B(y) \},
    (f \ominus B)(x) = \min_{y \in B} \{ f(x + y) - B(y) \}.

The opening and closing functions of the morphological filter are given as
    f \circ B = (f \ominus B) \oplus B, \qquad f \bullet B = (f \oplus B) \ominus B.

The abovementioned equations help us determine the peaks and valleys in the original recording [7]. The opening function (erosion followed by dilation) smooths the convex peaks of the original signal, and the closing function (dilation followed by erosion) smooths its concave peaks. Combinations of the opening and closing functions lead to a new filter which, when fed with the original signal, divides it into two parts: the first defined by the structuring elements and the second being the residue. This type of filtering is done in order to detect the spikes with high accuracy. For two structuring elements, say B_1 and B_2, the open-close (OC) and close-open (CO) functions are defined as
    OC(f) = (f \circ B_1) \bullet B_2, \qquad CO(f) = (f \bullet B_1) \circ B_2.

When considered separately, the OC and CO functions produce a bias in amplitude: OC yields a lower amplitude while CO yields a higher amplitude. For easier interpretation and calculation, we take the average of the two, defined as the open-close close-open (OCCO) function:
    OCCO(f) = \tfrac{1}{2}\,\big[\,OC(f) + CO(f)\,\big],
where f is the original signal, which can be written as f(x) = OCCO(f)(x) + s(x), with s(x) the spiky part of the signal.
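A compact sketch of the OCCO filter using grey-scale morphology from SciPy is given below; the flat structuring-element lengths are illustrative assumptions, since the paper does not state the element sizes.

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def occo_filter(x, size1=5, size2=9):
    """OCCO smoothing of a 1-D signal; the residue approximates the spiky part.

    size1, size2 : lengths of the flat structuring elements B1 and B2
                   (assumed values, not taken from the paper).
    """
    oc = grey_closing(grey_opening(x, size=size1), size=size2)   # open-close
    co = grey_opening(grey_closing(x, size=size1), size=size2)   # close-open
    occo = 0.5 * (oc + co)
    spiky_part = x - occo
    return occo, spiky_part

epoch = np.random.randn(400)            # placeholder for a real EEG epoch
background, spikes = occo_filter(epoch)
```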

The Performance Index, sensitivity, and specificity of the code converter outputs obtained through morphological filter based feature extraction reach only low values of 33.46%, 76.23%, and 77.42%, respectively. This motivates the optimization of the code converter outputs using a postclassifier to arrive at a singleton result. The following section describes the outcome of the SVD, EM, and MEM techniques as postclassifiers.

3. Singular Value Decomposition, Expectation Maximization, and Modified EM as Postclassifier for Classification of Epilepsy Risk Levels

In this section, we discuss the use of SVD, EM, and MEM as postclassifiers for the classification of epilepsy risk levels. The Singular Value Decomposition (SVD) was established in the 1870s by Beltrami and Jordan for real square matrices [27]. It is used mainly for dimensionality reduction and for determining the modes of a complex linear dynamical system [27]. Since then, SVD has been regarded as one of the most important tools of modern numerical analysis and numerical linear algebra.

3.1. SVD Theorem

Let A be an m × n matrix. The SVD theorem states that [28]
    A_{m \times n} = U_{m \times m}\, S_{m \times n}\, V_{n \times n}^{T},
where U and V are orthogonal matrices and S is a diagonal matrix of size m × n.

This decomposition can be further expanded as
    A = \sum_{i=1}^{r} \sigma_i\, u_i\, v_i^{T},
where r is the rank of A, u_i and v_i are the columns of U and V, and \sigma_i are the singular values.

The columns of U are called the left singular vectors of the matrix A, and the columns of V are called the right singular vectors of A. S is called the singular value matrix, with the singular values \sigma_1 \ge \sigma_2 \ge \cdots \ge 0 along its diagonal.

We have taken the EEG records of twenty patients for our study. Each patient’s sample is composed of a 16 × 3 matrix of code converter outputs, as depicted in Table 3. Taking this as the matrix A, the SVD is computed, and the dominant singular value so obtained is regarded as the patient’s epilepsy risk level. The same procedure is carried out to find the singular values for the remaining patients.
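A short sketch of this step with NumPy is given below. The conversion of each seven-character code word into a numeric score (here the sum of the Table 4 weights) is our assumption, since the paper does not state how the character matrix is made numeric.

```python
import numpy as np

# Numeric weights of the codes, from Table 4 (assumed scoring of the code words).
WEIGHT = {"Z": 0.51612, "Y": 0.25806, "X": 0.12903, "W": 0.06451, "U": 0.03225}

def risk_level_svd(code_matrix):
    """Dominant singular value of a 16 x 3 matrix of per-epoch code words."""
    scores = np.array([[sum(WEIGHT[c] for c in word) for word in row]
                       for row in code_matrix])          # 16 x 3 numeric matrix
    return np.linalg.svd(scores, compute_uv=False)[0]    # largest singular value
```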

3.2. Expectation Maximization as a Postclassifier

Expectation Maximization (EM) is often described as a statistical technique for maximizing complex likelihoods and handling incomplete data problems. The EM algorithm consists of the following two steps.

Expectation Step (E-Step): given the observed data y and the current estimate \theta^{(k)} of the parameter, the expected value of the complete-data log likelihood is computed [29]:
    Q(\theta \mid \theta^{(k)}) = E\big[\log p(x \mid \theta)\,\big|\, y, \theta^{(k)}\big].
This conditional expectation is then carried forward to the maximization step.

Maximization Step (M-Step): using the expectation computed in the E-Step together with the data that were actually measured, the maximum likelihood (ML) estimate of the parameter is determined,
    \theta^{(k+1)} = \arg\max_{\theta}\, Q(\theta \mid \theta^{(k)}).

Considering the code converter outputs, let us take a set of unit vectors X = \{x_1, x_2, \ldots, x_n\}. We have to find the parameters \mu and \kappa of the distribution M_d(\mu, \kappa). Accordingly, its density can be written as [30]
    f(x \mid \mu, \kappa) = c_d(\kappa)\, e^{\kappa\, \mu^{T} x},
where c_d(\kappa) is a normalizing constant.

Considering the set X, the likelihood of (\mu, \kappa) is
    P(X \mid \mu, \kappa) = \prod_{i=1}^{n} c_d(\kappa)\, e^{\kappa\, \mu^{T} x_i}.

The log likelihood of this expression can be written as
    \ln P(X \mid \mu, \kappa) = n \ln c_d(\kappa) + \kappa\, \mu^{T} r, \qquad r = \sum_{i=1}^{n} x_i.

In order to obtain the likelihood parameters \mu and \kappa, we maximize the log likelihood subject to the constraint \mu^{T}\mu = 1 with the help of a Lagrange multiplier \lambda. The objective can be written as
    L(\mu, \kappa, \lambda) = n \ln c_d(\kappa) + \kappa\, \mu^{T} r + \lambda\,(1 - \mu^{T}\mu).

Differentiating this objective with respect to \mu, \kappa, and \lambda and equating the derivatives to zero yields the parameter constraints
    \mu = \frac{\kappa}{2\lambda}\, r, \qquad \mu^{T}\mu = 1, \qquad \frac{n\, c_d'(\kappa)}{c_d(\kappa)} = -\,\mu^{T} r.

In the Expectation Step, the threshold data are estimated, given the observed data and the current estimate of the model parameters [31]. This is achieved using the conditional expectation, which explains the choice of terminology. In the M-Step, the likelihood function is maximized under the assumption that the threshold data are known, with the estimate of the missing data from the E-Step used in lieu of the actual threshold data.

3.3. Modified Expectation Maximization Algorithm

A Modified Expectation Maximization (MEM) algorithm that uses the maximum likelihood (ML) approach is discussed in this paper for pattern optimization. Like the conventional EM algorithm, this algorithm alternates between the estimation of the complete log-likelihood function (E-Step) and the maximization of this estimate over the values of the unknown parameters (M-Step) [32]. Because of the difficulties in evaluating the ML function [33], the EM algorithm is modified as follows.

The method of maximum likelihood corresponds to many well-known estimation methods in statistics. For example, one may be interested in the heights of adult female giraffes but be unable due to cost or time constraints to measure the height of every single giraffe in a population. Assuming that the heights are normally (Gaussian) distributed with some unknown mean and variance, the mean and variance can be estimated with MLE while only knowing the heights of some samples of the overall population.

Given a set of samples X = \{x_1, \ldots, x_N\}, the complete data set consists of the sample set and a set of latent variables indicating from which component of the mixture each sample came. The parameters of the Gaussian mixture are estimated with the maximization algorithm described below (and sketched in code at the end of this subsection). After optimization of the patterns, maximum likelihood is adopted to redesign the intracranial area into two clusters. Basically, the maximum likelihood algorithm is a statistical estimation algorithm used for finding log-likelihood estimates of parameters in probabilistic models [30].

(1) Find the initial values of the maximum likelihood parameters, namely the means, covariances, and mixing weights.

(2) Assign each sample x_i to its nearest cluster centre \mu_k by the Euclidean distance \lVert x_i - \mu_k \rVert.

(3) In the maximization step, re-estimate the parameters by maximizing the likelihood function
    L(\theta) = \prod_{i=1}^{N} \sum_{k} \pi_k\, \mathcal{N}(x_i \mid \mu_k, \Sigma_k).

(4) Repeat the iterations until the change in L(\theta) becomes small enough.

The algorithm terminates when the difference between the log likelihoods of the previous and current iterations falls within the tolerance. The likelihood function was applied to the 16 × 3 matrix of code converter outputs, truncated to the known endpoints.
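The sketch below illustrates the iteration described above for a one-dimensional, two-component Gaussian mixture with hard nearest-centre assignment. It is a minimal illustration under these assumptions, not the authors' implementation.

```python
import numpy as np

def modified_em(x, k=2, iters=50, tol=1e-6, seed=0):
    """Hard-assignment EM for a 1-D Gaussian mixture (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False).astype(float)   # initial means
    var = np.full(k, x.var() + 1e-6)                          # initial variances
    w = np.full(k, 1.0 / k)                                   # initial mixing weights
    prev_ll = -np.inf
    for _ in range(iters):
        # Step 2: assign each sample to its nearest cluster centre.
        labels = np.argmin(np.abs(x[:, None] - mu[None, :]), axis=1)
        # Step 3: ML re-estimation of means, variances, and mixing weights.
        for j in range(k):
            members = x[labels == j]
            if members.size:
                mu[j], var[j] = members.mean(), members.var() + 1e-6
                w[j] = members.size / x.size
        # Step 4: stop when the log likelihood no longer changes appreciably.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        ll = np.log(dens.sum(axis=1)).sum()
        if abs(ll - prev_ll) < tol:
            break
        prev_ll = ll
    return mu, var, w

# Example: cluster the flattened 16 x 3 matrix of per-epoch risk scores.
scores = np.random.rand(16, 3)              # placeholder for real scores
means, variances, weights = modified_em(scores.ravel())
```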

4. Results and Discussion

To study the relative performance of these code converters and SVD, EM, and MEM, we measure two parameters, the Performance Index and the Quality Value. These parameters are calculated for each set of twenty patients and are compared.

4.1. Performance Index

A sample of the Performance Index of morphological filter based feature extraction with the code converter, Singular Value Decomposition, EM, and MEM, averaged over the twenty known epilepsy data sets, is shown in Table 7. The morphological filter based feature extraction along with SVD optimization ranks first with a high PI of 89.48%, against 80.1% and 83.35% for the EM and MEM methods. However, the morphological filter incurs more missed classifications than false alarms, which is a dangerous trend. Therefore, this method is regarded as a lazy, high threshold classifier.


Table 7: Performance of morphological-operator-based feature extraction (values in %).

Classifiers | Perfect classification | Missed classification | False alarm | Performance Index
Code converter | 62.6 | 18.25 | 19.13 | 33.26
With SVD optimization | 91.22 | 7.31 | 1.42 | 89.48
With EM optimization | 82.68 | 12.93 | 4.38 | 80.1
With MEM optimization | 85.32 | 10.95 | 3.72 | 83.35

Table 8 depicts the performance analysis of the wavelet transforms with the hard thresholding method. With hard thresholding, the code converter has an average classification rate of 62.68% and a false alarm rate of 18.105%, whereas the EM optimizer achieves 87.39% perfect classification with a false alarm rate of 4.43%. With little deviation, MEM attains 89.36% average perfect classification and 4.46% false alarm. SVD optimization attains a perfect classification rate of 96.58% with zero false alarms for the Haar wavelet. Hence the SVD optimizer can be regarded as the best postclassifier. For all four wavelet transforms the SVD postclassifier is the best suited to achieve a high classification rate; the EM and MEM techniques fail to reach comparable classification accuracy.


Table 8: Performance of the wavelet transforms with hard thresholding (values in %).

Classifiers | Perfect classification | Missed classification | False alarm | Performance Index

Haar wavelet
Code converter | 61.45 | 15.625 | 22.91 | 37.58
With SVD optimization | 96.58 | 3.42 | 0 | 96.4
With EM optimization | 82.68 | 12.93 | 4.38 | 80.1
With MEM optimization | 85.32 | 10.95 | 3.72 | 83.35

DB2 wavelet
Code converter | 61.18 | 16.14 | 22.65 | 36.44
With SVD optimization | 98.13 | 0.9465 | 0.946 | 98.03
With EM optimization | 87.29 | 8.1 | 4.59 | 85.42
With MEM optimization | 89.58 | 5.81 | 4.61 | 87.81

DB4 wavelet
Code converter | 64.57 | 12.49 | 22.91 | 44.72
With SVD optimization | 97.54 | 0.378 | 2.08 | 97.45
With EM optimization | 92.33 | 4.31 | 3.29 | 91.35
With MEM optimization | 93.86 | 2.3 | 3.84 | 93.17

Sym8 wavelet
Code converter | 63.52 | 11.44 | 23.95 | 44.81
With SVD optimization | 97.35 | 1.512 | 1.135 | 97.23
With EM optimization | 87.27 | 7.78 | 5.48 | 85.03
With MEM optimization | 88.71 | 6.9 | 5.67 | 86.95

Table 9 presents the performance analysis of the wavelet transforms with soft thresholding for the code converter, SVD, EM, and MEM, respectively. With soft thresholding, the code converter has an average perfect classification of 65.6% and a false alarm rate of 11.94%. SVD attains a classification rate of over 85% but with comparatively high false alarm values. The MEM optimizer proves to be the best, with a classification rate of 93.97% and a false alarm rate of only 3.5%, obtained when the Haar wavelet is used with minimax soft thresholding.


Table 9: Performance of the soft thresholding methods (values in %).

Classifiers | Perfect classification | Missed classification | False alarm | Performance Index

Heursure soft thresholding
Code converter | 66.1 | 19.18 | 11.93 | 52.82
With SVD optimization | 87.21 | 2.84 | 9.94 | 82.64
With EM optimization | 89.03 | 6.79 | 4.16 | 87.88
With MEM optimization | 90.46 | 4.6 | 4.93 | 89.82

Minimax soft thresholding
Code converter | 64.63 | 20.59 | 15.1 | 44.52
With SVD optimization | 85.22 | 0 | 14.77 | 79.43
With EM optimization | 89.48 | 5.92 | 4.6 | 88.15
With MEM optimization | 93.97 | 2.74 | 3.5 | 93.4

Rigrsure soft thresholding
Code converter | 66.34 | 19.88 | 13.78 | 49.11
With SVD optimization | 88.49 | 0 | 11.5 | 84.18
With EM optimization | 90.9 | 3.84 | 5.48 | 89.93
With MEM optimization | 92.22 | 3.62 | 4.16 | 91.25

Sqtwolog soft thresholding
Code converter | 65.34 | 27.69 | 6.96 | 46.89
With SVD optimization | 77.69 | 20.88 | 1.42 | 66.58
With EM optimization | 84.65 | 10.85 | 4.49 | 82.12
With MEM optimization | 88.38 | 8.88 | 2.74 | 86.87

4.2. Quality Value

This parameter determines the overall quality of the classifiers used. The Quality Value is given by [19]
    QV = \frac{C}{\left(R_{fa} + 0.2\right)\left(T_{dly} \cdot P_{dct} + 6 \cdot P_{msd}\right)},
where C is a scaling constant, R_{fa} the false alarm per set, T_{dly} the average delay of onset classification, P_{dct} the percentage of perfect classification, and P_{msd} the percentage of perfect risk levels missed.

The scaling constant C is set to 10, and the classifier with the highest Quality Value is the better one. Table 10 depicts the Quality Values of the wavelet transforms with hard thresholding under the SVD, EM, and MEM optimization methods. SVD with the dB2 wavelet in hard thresholding attains the maximum QV of 23.82, and EM with the Haar wavelet has the lowest QV of 18.32.
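The sketch below evaluates this formula directly; treating the tabulated percentages and the false alarm rate per set as fractions is our assumption, since the paper does not spell out the unit conventions.

```python
def quality_value(r_fa, t_dly, p_dct, p_msd, c=10.0):
    """Quality Value from the formula above; p_dct and p_msd as fractions in [0, 1]."""
    return c / ((r_fa + 0.2) * (t_dly * p_dct + 6.0 * p_msd))

# Rough check with the dB2/SVD column of Table 11 (0.9463% false alarms,
# 2.017 s delay, 98.13% detected, ~0.92% missed): about 23.5, of the same
# order as the tabulated QV of 23.82.
print(round(quality_value(0.009463, 2.017, 0.9813, 0.0092), 2))
```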


Table 10: Quality Values for hard thresholding.

Wavelets | Without optimization | With SVD optimization | With EM optimization | With MEM optimization
Haar | 11.56 | 23.5 | 18.32 | 19.24
Db2 | 12.57 | 23.82 | 19.72 | 20.49
Db4 | 12.49 | 23.15 | 21.32 | 22.11
Sym8 | 12.84 | 23.37 | 19.52 | 20.3

Table 11 shows the performance analysis of the twenty patients using dB2 wavelet hard thresholding with SVD, EM, and MEM as postclassifiers. The evaluation parameters reach appreciable values for the SVD postclassifier compared with the other two classifiers; hence, SVD can be chosen as a good postclassifier for epilepsy risk level classification. All three postclassifiers attain good sensitivity and specificity measures, but the EM and MEM classifiers suffer from higher false alarm rates, which lowers the QV and PI of the system.


Table 11: Performance analysis of twenty patients using the dB2 wavelet with hard thresholding.

Parameters | Code converter (before optimization) | SVD optimization | EM optimization | MEM optimization
Risk level classification rate (%) | 61.45 | 98.13 | 87.29 | 89.58
Weighted delay (s) | 2.189 | 2.017 | 2.233 | 2.14
False alarm rate/set | 22.65 | 0.9463 | 4.59 | 4.6
Performance Index (%) | 36.45 | 98.03 | 85.42 | 87.81
Sensitivity | 75.43 | 99.05 | 95.4 | 95.4
Specificity | 81.94 | 99.1 | 91.89 | 94.19
Average detection | 78.875 | 99.075 | 93.645 | 94.795
Relative risk | 1.166 | 0.9999 | 1.038 | 1.0128
Quality Value | 12.57 | 23.82 | 19.72 | 20.49

Since the Haar wavelet is a predominant wavelet, we chose it for the four soft thresholding methods, as shown in Table 12. The highest QV of 22.54 is attained with minimax soft thresholding and MEM as the postclassifier.


Table 12: Quality Values for the Haar wavelet with soft thresholding.

Soft thresholding | Code converter | With SVD optimization | With EM optimization | With MEM optimization
Heursure | 13.54 | 20.16 | 20.12 | 20.85
Minimax | 12.11 | 19.38 | 20.09 | 22.54
Rigrsure | 12.91 | 20.44 | 20.32 | 21.42
Sqtwolog | 13.22 | 17.82 | 18.77 | 20.22

Table 13 shows the performance analysis of twenty patients using the Haar wavelet in soft thresholding with SVD, EM, and MEM postclassifiers. The MEM postclassifier with minimax soft thresholding reaches a better QV and PI than the SVD and EM classifiers; a slight incremental tradeoff in the weighted delay of MEM is responsible for this performance relative to SVD and EM. SVD fails to achieve good performance in this methodology because of its higher false alarm rate, while EM sits in the middle as far as the Performance Index is concerned.


Table 13: Performance analysis of twenty patients using the Haar wavelet with soft thresholding.

Parameters | Code converter (before optimization) | SVD optimization | EM optimization | MEM optimization

Heursure soft thresholding
Risk level classification rate (%) | 66.1 | 87.21 | 89.03 | 90.46
Weighted delay (s) | 2.47 | 1.91 | 2.19 | 2.08
False alarm rate/set | 11.93 | 9.94 | 4.16 | 4.93
Performance Index (%) | 52.82 | 82.64 | 87.88 | 89.82
Sensitivity | 85.39 | 90.05 | 96.27 | 95.07
Specificity | 78.28 | 97.16 | 92.76 | 95.4
Average detection | 78.875 | 93.61 | 94.51 | 95.23
Relative risk | 1.166 | 0.926 | 1.037 | 0.996
Quality Value | 13.54 | 20.16 | 20.12 | 20.85

Minimax soft thresholding
Risk level classification rate (%) | 64.63 | 85.22 | 89.48 | 93.97
Weighted delay (s) | 2.53 | 1.7 | 2.15 | 2.06
False alarm rate/set | 15.1 | 14.77 | 4.6 | 3.5
Performance Index (%) | 44.52 | 79.43 | 88.15 | 93.4
Sensitivity | 82.31 | 85.25 | 95.4 | 96.71
Specificity | 76.81 | 100 | 94.08 | 97.26
Average detection | 78.875 | 92.625 | 94.74 | 96.98
Relative risk | 1.166 | 0.85 | 1.014 | 0.994
Quality Value | 12.11 | 19.36 | 20.09 | 22.54

Rigrsure soft thresholding
Risk level classification rate (%) | 66.34 | 88.49 | 90.9 | 92.22
Weighted delay (s) | 2.52 | 1.77 | 2.01 | 2.1
False alarm rate/set | 13.78 | 11.5 | 5.48 | 4.16
Performance Index (%) | 49.11 | 84.18 | 89.93 | 91.25
Sensitivity | 84.35 | 88.49 | 94.74 | 95.83
Specificity | 78.23 | 100 | 96.16 | 96.83
Average detection | 78.875 | 94.45 | 95.45 | 96.33
Relative risk | 1.166 | 0.88 | 0.992 | 0.989
Quality Value | 12.91 | 20.44 | 20.32 | 21.42

Sqtwolog soft thresholding
Risk level classification rate (%) | 65.34 | 77.69 | 84.65 | 88.38
Weighted delay (s) | 2.96 | 2.806 | 2.34 | 2.3
False alarm rate/set | 6.96 | 1.42 | 4.49 | 2.74
Performance Index (%) | 46.89 | 66.58 | 82.12 | 86.87
Sensitivity | 91.34 | 98.57 | 95.5 | 97.26
Specificity | 70.82 | 79.11 | 89.15 | 91.12
Average detection | 78.875 | 88.84 | 92.325 | 94.19
Relative risk | 1.166 | 1.245 | 1.07 | 1.06
Quality Value | 13.22 | 17.82 | 18.77 | 20.22

Table 14 shows the performance analysis of twenty patients using morphological filters with SVD, EM, and MEM postclassifiers. In this method SVD outperforms the other classifiers in terms of QV and PI. Morphological filtering is inherently slow in response and is considered a high threshold classifier. The SVD classifier combines a low false alarm rate with a low weighted delay. On average, all of these methods attain Performance Index values approaching 90% and Quality Values of around 20. Since the obtained weighted delay exceeds 2 seconds for all of these classifiers, the system has a larger threshold and a slow response.


Table 14: Performance analysis of twenty patients using morphological filters.

Parameters | Code converter | SVD optimization | EM optimization | MEM optimization
Risk level classification rate (%) | 62.6 | 91.22 | 87.27 | 88.71
Weighted delay (s) | 2.34 | 2.26 | 2.2 | 2.18
False alarm rate/set | 19.13 | 1.42 | 5.47 | 5.67
Performance Index (%) | 33.26 | 89.48 | 85.03 | 86.95
Sensitivity | 77.84 | 98.57 | 95.59 | 98.97
Specificity | 78.91 | 92.65 | 98.11 | 97.67
Average detection | 78.875 | 95.61 | 96.85 | 98.32
Relative risk | 1.166 | 1.063 | 0.974 | 1.013
Quality Value | 12.74 | 20.62 | 19.52 | 20.3

We also wish to analyse the time complexity of the postclassifiers in terms of weighted delay and Quality Value; Table 15 shows this comparison. The four wavelet transforms with hard thresholding together with the SVD postclassifier attain low weighted delays and high QV values.


Table 15: Weighted delay and Quality Value of the postclassifiers.

Methods/wavelets | SVD delay (s) | SVD QV | EM delay (s) | EM QV | MEM delay (s) | MEM QV

Hard threshold
Haar | 2.14 | 23.5 | 2.431 | 18.32 | 2.36 | 19.24
dB2 | 2.017 | 23.82 | 2.23 | 19.72 | 2.14 | 20.49
dB4 | 1.974 | 23.15 | 2.11 | 21.32 | 2.01 | 22.11
Sym8 | 2.038 | 23.37 | 2.2 | 19.52 | 2.18 | 20.3

Soft threshold, heursure
Haar | 1.94 | 20.16 | 2.19 | 20.12 | 2.08 | 20.85
dB2 | 2.82 | 16.01 | 2.37 | 20.24 | 2.27 | 20.54
dB4 | 2.41 | 19.99 | 2.31 | 19.79 | 2.27 | 20.57
Sym8 | 2.16 | 18.95 | 2.26 | 20.44 | 2.13 | 22

Soft threshold, minimax
Haar | 1.7 | 19.36 | 2.15 | 20.09 | 2.06 | 22.54
dB2 | 2.23 | 19.39 | 2.3 | 18.87 | 2.16 | 20.41
dB4 | 1.97 | 20.13 | 2.17 | 19.97 | 2.14 | 20.66
Sym8 | 2.51 | 19.51 | 2.27 | 20.2 | 2.22 | 20.73

Soft threshold, rigrsure
Haar | 1.77 | 20.44 | 2.01 | 20.32 | 2.1 | 21.42
dB2 | 1.62 | 18.77 | 2.07 | 19.4 | 2.04 | 19.95
dB4 | 1.53 | 16.74 | 2.08 | 20.42 | 2.08 | 22.04
Sym8 | 1.65 | 19.51 | 2.18 | 20.02 | 2.09 | 21.06

Soft threshold, sqtwolog
Haar | 2.08 | 17.82 | 2.34 | 18.77 | 2.3 | 20.22
dB2 | 3.25 | 16.73 | 2.42 | 19.17 | 2.36 | 20.1
dB4 | 2.76 | 19.1 | 2.37 | 19.62 | 2.36 | 19.35
Sym8 | 2.97 | 17.52 | 2.41 | 18.74 | 2.39 | 19.97

Morphological filters | 2.26 | 20.62 | 2.2 | 19.52 | 2.18 | 20.3

As Table 15 shows, the EM and MEM classifiers are burdened with either more missed classifications or more false alarms, and consequently their QV falls below 20 for most of the wavelet transforms. Among the soft thresholding methods, the dB2 wavelet with rigrsure thresholding and the MEM postclassifier outperforms the other fifteen combinations in terms of weighted delay. The morphological filters sit at a higher delay with a QV near 20.

5. Conclusion

The objective of this paper was to classify the epilepsy risk level of patients from their EEG signals. The aim was to obtain a high classification rate, Performance Index, and Quality Value with low false alarm and missed classification rates. Because of the nonlinearity and the poor performance of the code converters, optimization was vital for effective classification of the signals, and SVD, EM, and MEM were applied as postclassifiers. Morphological filters were also used for feature extraction from the EEG signals. After computing the PI and QV values discussed in the results, we found that SVD performed best, with a high classification rate of 91.22% and a false alarm rate as low as 1.42. Therefore, SVD was chosen as the best postclassifier. The accuracy of the results can be further improved by using an extreme learning machine as a postclassifier, and further research will proceed in this direction.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors express their sincere thanks to the Management and the Principal of Bannari Amman Institute of Technology, Sathyamangalam, for providing the necessary facilities for the completion of this paper. This research is also funded by AICTE RPS: F. no. 8023/BOR/RID/RPS-41/2009-10, dated December 10, 2010.

References

  1. Y. Yuan, “Detection of epileptic seizure based on EEG signals,” in Proceedings of the 3rd International Congress on Image and Signal Processing (CISP '10), pp. 4209–4211, Yantai, China, October 2010.
  2. S. Herculano-Houzel, “The human brain in numbers: a linearly scaled-up primate brain,” Frontiers in Human Neuroscience, vol. 3, pp. 1–11, 2009.
  3. Z. Zainuddin, L. Kee Huong, and O. Pauline, “On the use of wavelet neural networks in the task of epileptic seizure detection from electroencephalography signals,” in Proceedings of the 3rd International Conference on Computational Systems—Biology and Bioinformatics, vol. 11, pp. 149–159, 2012.
  4. A. A. Dingle, R. D. Jones, G. J. Carroll, and W. R. Fright, “A multistage system to detect epileptiform activity in the EEG,” IEEE Transactions on Biomedical Engineering, vol. 40, no. 12, pp. 1260–1268, 1993.
  5. R. Harikumar, R. Sukanesh, and P. A. Bharathi, “Genetic algorithm optimization of fuzzy outputs for classification of epilepsy risk levels from EEG signals,” Journal of Interdisciplinary Panels I.E., vol. 86, no. 1, pp. 1–10, 2005.
  6. P. Xanthopoulos, S. Rebennack, C.-C. Liu et al., “A novel wavelet based algorithm for spike and wave detection in absence epilepsy,” in Proceedings of the 10th IEEE International Conference on Bioinformatics and Bioengineering (BIBE '10), pp. 14–19, Philadelphia, Pa, USA, June 2010.
  7. A. Mirzaei, A. Ayatollahi, P. Gifani, and L. Salehi, “EEG analysis based on wavelet-spectral entropy for epileptic seizures detection,” in Proceedings of the 3rd International Conference on BioMedical Engineering and Informatics (BMEI '10), pp. 878–882, Yantai, China, October 2010.
  8. P. E. McSharry, L. A. Smith, L. Tarassenko et al., “Prediction of epileptic seizures: are nonlinear methods relevant?” Nature Medicine, vol. 9, no. 3, pp. 241–242, 2003.
  9. J. Gotman, “Automatic seizure detection: improvements and evaluation,” Electroencephalography and Clinical Neurophysiology, vol. 76, no. 4, pp. 317–324, 1990.
  10. C. C. C. Pang, A. R. M. Upton, G. Shine, and M. V. Kamath, “A comparison of algorithms for detection of spikes in the electroencephalogram,” IEEE Transactions on Biomedical Engineering, vol. 50, no. 4, pp. 521–526, 2003.
  11. L. Tarassenko, Y. U. Khan, and M. R. G. Holt, “Identification of inter-ictal spikes in the EEG using neural network analysis,” IEE Proceedings—Science, Measurement and Technology, vol. 145, pp. 270–278, November 1998.
  12. M. van Gils, “Signal processing in prolonged EEG recordings during intensive care,” IEEE EMB Magazine, vol. 16, no. 6, pp. 56–63, 1997.
  13. E. Sezer, H. Işik, and E. Saracoǧlu, “Employment and comparison of different artificial neural networks for epilepsy diagnosis from EEG signals,” Journal of Medical Systems, vol. 36, no. 1, pp. 347–362, 2012.
  14. R. Sarang, “A strong adaptive and comprehensive evaluation of wavelet based epileptic EEG spike detection methods,” in Proceedings of the International Conference on Biomedical and Pharmaceutical Engineering (ICBPE '06), pp. 432–437, Singapore, December 2006.
  15. K. Majumdar, “Human scalp EEG processing: various soft computing approaches,” Applied Soft Computing Journal, vol. 11, no. 8, pp. 4433–4447, 2011.
  16. R. P. Costa, P. Oliveria, G. Rodrigues, B. Leitao, and D. Antonio, “Epileptic seizure classification using neural networks with 14 features,” in Proceedings of the 12th International Conference on Knowledge-Based Intelligent Information and Engineering Systems, Part II, Lecture Notes in Computer Science, pp. 281–288, Springer, Zagreb, Croatia, September 2008.
  17. H. Adeli, “Chaos-wavelet-neural network models for automated EEG-based diagnosis of the neurological disorders,” in Proceedings of the 17th International Conference on Systems, Signals and Image Processing (IWSSIP '10), pp. 45–48, Rio de Janeiro, Brazil, June 2010.
  18. W. R. S. Webber, R. P. Lesser, R. T. Richardson, and K. Wilson, “An approach to seizure detection using an artificial neural network (ANN),” Electroencephalography and Clinical Neurophysiology, vol. 98, no. 4, pp. 250–272, 1996.
  19. H. Qu and J. Gotman, “A patient-specific algorithm for the detection of seizure onset in long-term EEG monitoring: possible use as a warning device,” IEEE Transactions on Biomedical Engineering, vol. 44, no. 2, pp. 115–122, 1997.
  20. R. Harikumar, T. Vijayakumar, and M. G. Sreejith, “Performance analysis of SVD and support vector machines for optimization of fuzzy outputs in classification of epilepsy risk level from EEG signals,” in Proceedings of the IEEE Recent Advances in Intelligent Computational Systems (RAICS '11), pp. 718–723, Thiruvananthapuram, India, September 2011.
  21. R. M. Rangayyan, Biomedical Signal Analysis: A Case-Study Approach, IEEE Press-John Wiley & Sons, New York, NY, USA, 2002.
  22. L. M. Patnaik and O. K. Manyam, “Epileptic EEG detection using neural networks and post-classification,” Computer Methods and Programs in Biomedicine, vol. 91, no. 2, pp. 100–109, 2008.
  23. A. T. Tzallas, M. G. Tsipouras, and D. I. Fotiadis, “A time-frequency based method for the detection of epileptic seizures in EEG recordings,” in Proceedings of the 20th IEEE International Symposium on Computer-Based Medical Systems (CBMS '07), pp. 135–140, Maribor, Slovenia, June 2007.
  24. V. V. K. D. V. Prasad, P. Siddiah, and B. Prabhakara Rao, “A new wavelet based method for denoising biological signals,” International Journal of Computer Science and Network Security, vol. 8, no. 1, pp. 238–244, 2008.
  25. P. Kumar and D. Agnihotri, “Biosignal denoising via wavelet thresholds,” IETE Journal of Research, vol. 56, no. 3, pp. 132–138, 2010.
  26. R. Harikumar and B. S. Narayanan, “Fuzzy techniques for classification of epilepsy risk level from EEG signals,” in Proceedings of the IEEE Conference on Convergent Technologies for the Asia-Pacific Region (TENCON '03), pp. 209–213, Bangalore, India, October 2003.
  27. V. C. Klema and A. J. Laub, “Singular value decomposition: its computation and some applications,” IEEE Transactions on Automatic Control, vol. AC-25, no. 2, pp. 164–176, 1980.
  28. P. K. Sadasivan and D. Narayana Dutt, “SVD based technique for noise reduction in electroencephalographic signals,” Signal Processing, vol. 55, no. 2, pp. 179–189, 1996.
  29. T. K. Moon, “The expectation-maximization algorithm,” IEEE Signal Processing Magazine, vol. 13, no. 6, pp. 47–60, 1996.
  30. C. E. McCulloch, “Maximum likelihood algorithms for generalized linear mixed models,” Journal of the American Statistical Association, vol. 92, no. 437, pp. 162–170, 1997.
  31. G. Tian, Y. Xia, Y. Zhang, and D. Feng, “Hybrid genetic and variational expectation-maximization algorithm for Gaussian-mixture-model-based brain MR image segmentation,” IEEE Transactions on Information Technology in Biomedicine, vol. 15, no. 3, pp. 373–380, 2011.
  32. Y. Zhou and J. P. Y. Lee, “A modified expectation maximization algorithm for maximum likelihood direction-of-arrival estimation,” in Proceedings of the 34th Asilomar Conference, pp. 613–617, November 2000.
  33. J. Xian, J. Li, and Y. Yang, “A new EM acceleration algorithm for multi-user detection,” in Proceedings of the 3rd International Conference on Measuring Technology and Mechatronics Automation (ICMTMA '11), pp. 150–153, Shanghai, China, January 2011.

Copyright © 2014 Harikumar Rajaguru and Vijayakumar Thangavel. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

