Abstract

The objective of this paper is to compare the performance of Singular Value Decomposition (SVD), Expectation Maximization (EM), and Modified Expectation Maximization (MEM) as postclassifiers for classifying epilepsy risk levels from features extracted from EEG signals through wavelet transforms and morphological filters. The code converter acts as a level-one classifier. Seven features, namely energy, variance, positive and negative peaks, spike and sharp waves, events, average duration, and covariance, are extracted from the EEG signals; four of these, namely positive and negative peaks, spike and sharp waves, events, and average duration, are extracted using Haar, dB2, dB4, and Sym8 wavelet transforms with hard and soft thresholding methods. The same four features are also extracted through morphological filters. The performance of the code converter and the classifiers is compared using the Performance Index (PI) and the Quality Value (QV). The PI and QV of the code converter are low, at 33.26% and 12.74, respectively. The highest PI of 98.03% and QV of 23.82 are attained with the dB2 wavelet under hard thresholding for the SVD classifier. All the postclassifiers settle at a PI of more than 90% with a QV of 20.

1. Introduction

The Electroencephalogram (EEG) is a measure of the cumulative firing of neurons in various parts of the brain [1]. It contains information regarding changes in the electrical potential of the brain obtained from a given set of recording electrodes. These data include the characteristic waveforms with accompanying variations in amplitude, frequency, phase, and so forth, as well as brief occurrences of electrical patterns such as spindles, sharp waves, and spikes [2]. EEG patterns have been shown to be modified by a wide range of variables, including biochemical, metabolic, circulatory, hormonal, neuroelectric, and behavioral factors [3]. In the past, the encephalographer, by visual inspection, was able to qualitatively distinguish normal EEG activity from localized or generalized abnormalities contained within relatively long EEG records [4]. One of the most important conditions detectable from the EEG is epilepsy [5]. Epilepsy is characterized by uncontrolled excessive activity or potential discharge by either a part or all of the central nervous system [5]. The different types of epileptic seizures are characterized by different EEG waveform patterns [6]. With real-time monitoring to detect epileptic seizures gaining widespread recognition, the advent of computers has made it possible to effectively apply a host of methods to quantify the changes occurring in the EEG signals [4]. The EEG is an important clinical tool for diagnosing, monitoring, and managing neurological disorders related to epilepsy [7]. This disorder is characterized by sudden, recurrent, and transient disturbances of mental function and/or movements of the body that result from excessive discharge of a group of brain cells [8]. The presence of epileptiform activity in the EEG confirms the diagnosis of epilepsy, which is sometimes confused with other disorders producing similar seizure-like activity [9]. Between seizures, the EEG of a patient with epilepsy may be characterized by occasional epileptiform transients such as spikes and sharp waves [10]. Seizures are characterized by short episodic neuronal synchronous discharges with considerably enlarged amplitude. This uneven synchrony may happen locally in the brain, as partial seizures visible only in a few channels of the EEG signal, or as generalized seizures, which are seen in every channel of the EEG signal and involve the whole brain [11, 12].

1.1. Related Works

In the last three decades the analysis and classification of epilepsy from the EEG signal have become a fascinating area of research. A huge volume of work has been performed, covering spike detection, classification of epileptic seizures, ictal and interictal analysis, nonlinear and linear analysis, and soft computing methods. Gotman [9] discussed the improvement of epileptic seizure detection and evaluation. Pang et al. [10] summarized the history and evaluation of various spike detection algorithms. Reference [13] discussed different neural networks as function approximators and universal approximators for epilepsy diagnosis. Sarang [14] encapsulated the performance of spike detection algorithms in terms of sensitivity, specificity, and average detection, and ranked their performance in terms of good detection ratio (GDR). McSharry et al. [8] discussed and enumerated nonlinear methods and their relevance to predicting epilepsy by treating EEG samples as time series. Majumdar [15] reviewed various soft computing approaches to EEG signals with an emphasis on pattern recognition techniques, focusing mainly on dimensionality reduction, SNR problems, and linear and soft computing techniques for EEG signal processing. Kaushik concludes that neural network and Bayesian approaches are two popular choices, even though linear statistical discriminants are easier to implement. A large number of Support Vector Machine (SVM) approaches are also discussed in that paper for their classification accuracy. Hence, the EEG signal carries a great deal of information regarding the working of the brain, yet the classification and estimation of these signals remain inadequate. As there is no explicit category suggested by the experts, visual examination of EEG signals in the time domain may be deficient. Routine clinical diagnosis necessitates the analysis of EEG signals [13]; hence, automation and computer methods have been utilized for this purpose. A current multicenter clinical analysis reports confirmation of premonitory symptoms in 6.2% of 500 patients with epilepsy [16]. Another interview-based study found that 50% of 562 patients felt “auras” before seizures. These clinical data provide a motivation to search for premonitory alterations in EEG recordings from the brain and to employ a device that can act without human intervention to forewarn the patient [17]. On the other hand, despite decades of research, existing techniques do not yield better performance. This paper addresses the application and comparison of SVD, EM, and MEM classifiers for the optimization of code converter outputs in the classification of epilepsy risk levels.

Webber et al. [18] proposed a three-stage design for an EEG seizure detection system. The first stage compresses the raw data stream and transforms the data into variables which represent the state of the subject’s EEG; these state measures are referred to as context parameters. The second stage is a neural network that transforms the state measures into a smaller number of parameters intended to represent measures of recognized phenomena, such as a small seizure in the EEG [9, 10]. The third stage consists of a few simple rules that confirm the existence of the phenomena under consideration. Similarly, this paper presents a three-stage design for epilepsy risk level classification. The first stage extracts the required seven distinct features from the raw EEG data stream of the patient in the time domain. The next stage transforms these features, through a code converter, into a code word of seven alphabets which represents the patient’s state in five distinct risk levels for a two-second epoch of the EEG signal per channel. The last stage applies SVD, EM, or MEM to optimize the epilepsy risk level of the patient. The organization of the paper is as follows. Section 1 introduces the paper; materials and methods are discussed in Section 2. Section 3 describes SVD, EM, and MEM as postclassifiers for epilepsy risk level classification. Results are discussed in Section 4, and the paper is concluded in Section 5.

2. Materials and Methods

2.1. Data Acquisition of EEG Signals

For the comparative study and to analyze the performance of the pre- and postclassifiers, we obtained the raw EEG data of 20 epileptic patients in European Data Format (EDF) who underwent treatment in the Neurology Department of Sri Ramakrishna Hospital, Coimbatore. The preprocessing stage of the EEG signals was given great attention, because it is important to use the best technique to extract the useful information embedded in these nonstationary biomedical signals. The obtained EEG records were continuous recordings of about 30 seconds; each was divided into epochs of two-second duration. A two-second epoch is long enough to detect any significant changes in activity and the presence of artifacts, yet short enough to avoid redundancy in the signal [19]. For each patient we have 16 channels over three epochs. With a signal bandwidth of 50 Hz, each epoch was sampled at a frequency of 200 Hz; each sample corresponds to the instantaneous amplitude of the signal, giving 400 values per epoch. Figure 1 shows the flow diagram of the epilepsy risk level classification system. Four types of artifacts were present in our data: eye blink, electromyography (EMG) artifact, chewing, and motion artifacts [20]. Approximately 1% of the data was artifacts. We did not attempt to select a certain number of artifacts or artifacts of a specific nature; the objective of including artifacts was to have spike versus nonspike categories of waveforms, the latter being normal background EEG and/or artifacts [21]. In order to train and test the feature extractor and classifiers, we needed to select suitable segments of EEG data. In our experiment, the training and testing segments were selected through a short sampling window, and all EEG signals were visually examined by a qualified EEG technologist. A neurologist’s decision regarding EEG features (or a normal EEG segment) was used as the gold standard. We chose a sample window of 400 points corresponding to 2 seconds of EEG data. This width can cover almost all types of transient epileptic patterns in the EEG signal, even though seizures often last longer [22].
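As a minimal sketch of the epoching described above, the following Python fragment splits a multichannel record into nonoverlapping two-second epochs of 400 samples. It assumes the EDF record has already been loaded into a channels-by-samples array; the function and variable names are ours, not those of the original system.

```python
import numpy as np

FS = 200          # sampling frequency in Hz, as stated above
EPOCH_SEC = 2     # epoch length in seconds
N_CHANNELS = 16   # channels per patient

def segment_epochs(eeg, fs=FS, epoch_sec=EPOCH_SEC):
    """Split a (channels x samples) EEG record into nonoverlapping
    two-second epochs of fs * epoch_sec samples each."""
    samples_per_epoch = fs * epoch_sec               # 400 samples
    n_epochs = eeg.shape[1] // samples_per_epoch
    trimmed = eeg[:, :n_epochs * samples_per_epoch]  # drop incomplete tail
    return trimmed.reshape(eeg.shape[0], n_epochs, samples_per_epoch)

# Example: a simulated 16-channel, 6-second record yields 3 epochs of 400 samples.
record = np.random.randn(N_CHANNELS, 6 * FS)
epochs = segment_epochs(record)
print(epochs.shape)  # (16, 3, 400)
```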

In order to classify the risk level of the patients, the following parameters were chosen; a short computational sketch follows this list.

(1) For every epoch, the energy is calculated as [4]

E = \sum_{i=1}^{n} x_i^2,

where x_i is the sample value of the signal and n is the number of such samples.

(2) One of the simplest linear statistics that may be used for investigating the dynamics underlying the EEG is the variance of the signal calculated in consecutive nonoverlapping windows. The variance (\sigma^2) is given by

\sigma^2 = \frac{1}{n} \sum_{i=1}^{n} (x_i - \mu)^2,

where \mu is the average amplitude of the epoch.

(3) From the average duration D, the covariance of duration is determined by

CD = \frac{\sum_{i=1}^{p} (D - D_i)^2}{p D^2}.

The following are the four parameters which are extracted using morphological filters and wavelet transforms.

(1) The total number of positive and negative peaks found above the threshold.

(2) Spikes are detected when the zero-crossing duration lies between 20 and 70 milliseconds, and sharp waves are detected when it lies between 70 and 200 milliseconds.

(3) The total number of spikes and sharp waves, counted as events.

(4) The average duration of these waves, determined by the relation

D = \frac{\sum_{i=1}^{p} t_i}{p},

where t_i is the peak-to-peak duration and p is the number of such durations.
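The per-epoch statistics above translate directly into code. The sketch below computes the energy, variance, average duration, and covariance of duration under the reconstructed formulas; peak, spike, and sharp-wave detection is omitted, and the helper names are ours.

```python
import numpy as np

def energy_and_variance(x):
    """Energy E = sum(x_i^2) and variance sigma^2 = sum((x_i - mu)^2) / n
    for one 2-second epoch x (1-D array of 400 samples)."""
    energy = np.sum(x ** 2)
    mu = np.mean(x)                             # average amplitude of the epoch
    variance = np.sum((x - mu) ** 2) / len(x)
    return energy, variance

def duration_stats(durations):
    """Average duration D = sum(t_i) / p and covariance of duration
    CD = sum((D - D_i)^2) / (p * D^2) from peak-to-peak durations."""
    t = np.asarray(durations, dtype=float)
    p = len(t)
    d_avg = t.sum() / p
    cd = np.sum((d_avg - t) ** 2) / (p * d_avg ** 2)
    return d_avg, cd
```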

2.2. Wavelet Transforms for Feature Extraction

Brain signals are nonstationary in nature. In order to capture the transients and events of the waveforms, we need to visualize the signal in time and frequency simultaneously. Hence, wavelet transforms are a better choice for extracting the transient features and events from EEG signals. Wavelet-transform-based feature extraction is discussed as follows.

Let us consider a function x(t). The wavelet transform of this function is defined as [23]

W(a, b) = \int_{-\infty}^{\infty} x(t)\, \psi_{a,b}^{*}(t)\, dt,

where \psi^{*} is the complex conjugate of the wavelet function \psi(t).

As the set of analyzing functions, the wavelet family is deduced from the mother wavelet \psi(t) by [24]

\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{t - b}{a}\right),

where a is the dilation parameter and b is the translation parameter.

The feature extraction process is initialized by studying the effect of the simple Haar threshold. The Haar wavelet function can be represented as [25]

\psi(t) = \begin{cases} 1, & 0 \le t < 1/2, \\ -1, & 1/2 \le t < 1, \\ 0, & \text{otherwise.} \end{cases}

Wavelet thresholding is a signal estimation technique that exploits the capabilities of the wavelet transform for signal denoising or smoothing. It depends on the choice of a threshold parameter, which determines to a great extent the efficacy of denoising: each wavelet coefficient d is replaced by a thresholded estimate

\hat{d} = T(d, \lambda),

where \lambda is the threshold level.

Typical threshold operators for denoising include the hard threshold, the soft threshold, and the affine (firm) threshold. The hard threshold is defined as [24]

T_{hard}(d, \lambda) = \begin{cases} d, & |d| > \lambda, \\ 0, & |d| \le \lambda. \end{cases}

Soft thresholding (wavelet shrinkage) is given by

T_{soft}(d, \lambda) = \operatorname{sgn}(d)\, \max(|d| - \lambda,\, 0).

A sketch applying these operators follows.
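The following sketch shows one way to apply these operators with the PyWavelets library. The original work used MATLAB's threshold-selection rules (Heursure, Minimaxi, Rigrsure, Sqtwolog); here we assume the universal (Sqtwolog-style) threshold for illustration, so the threshold choice is ours rather than the paper's.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db2", mode="hard", level=4):
    """Decompose one epoch, threshold the detail coefficients, and
    reconstruct the denoised signal used for peak/spike counting."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Noise estimate from the finest detail band; universal threshold.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    lam = sigma * np.sqrt(2 * np.log(len(x)))
    thresholded = [coeffs[0]] + [pywt.threshold(c, lam, mode=mode)
                                 for c in coeffs[1:]]
    return pywt.waverec(thresholded, wavelet)

# Example: hard thresholding with dB2, the combination reported best above.
epoch = np.random.randn(400)
clean = wavelet_denoise(epoch, wavelet="db2", mode="hard")
```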

Haar, dB2, dB4, and Sym8 wavelets with hard thresholding and four types of soft thresholding methods, namely Heursure, Minimaxi, Rigrsure, and Sqtwolog, are used to extract the parameters from the EEG signals. With the help of expert knowledge and our experience from [5, 20, 26], we identified the parametric ranges for five linguistic risk levels (very low, low, medium, high, and very high) in the clinical description for the patients, as shown in Table 1.

The output of the code converter is encoded into a string of seven codes corresponding to each EEG signal parameter, based on the epilepsy risk level threshold values set in Table 1. The expert-defined threshold values contain noise in the form of overlapping ranges; therefore, we encode the patient risk level into the next higher level of risk instead of a lower one. For example, if the input energy is at 3.4, then the code converter output will be at the medium risk level instead of the low level [26].

2.3. Code Converter as a Preclassifier

The encoding method processes the sampled output values as individual codes. Since working with definite alphabets is easier than processing numbers with large decimal accuracy, we encode the outputs as a string of alphabets. The alphabetical representation of the five classifications of the outputs is shown in Table 2.

The ease of operation in using a character representation, rather than performing cumbersome numerical operations, is evident. By encoding each risk level as one of the five states, a string of seven characters is obtained for each of the sixteen channels of each epoch. A sample output with actual patient readings is shown in Table 3 for eight channels over three epochs.

It can be seen that channel 1 shows low risk levels while channel 7 shows high risk levels. Also, the risk level classification varies between adjacent epochs. There are sixteen different channels for input to the system over three epochs, giving a total of forty-eight input-output pairs. Since we deal with known cases of epileptic patients, it is necessary to find the exact level of epilepsy risk in the patient. This will also aid the development of automated systems that can precisely classify the risk level of the epileptic patient under observation. Hence an optimization is necessary; it will improve the classification of the patient and provide a clearer picture from the EEG [20]. The outputs from the three epochs of a channel are not identical and vary from epoch to epoch. In one such case the energy factor is predominant and results in a high risk level for two epochs and a low risk level for the middle epoch, while channels five and six settle at the high risk level. Because of this type of mixed-state output we cannot come to a proper conclusion; therefore we group four adjacent channels and optimize the risk level. Frequently repeated patterns show the average risk level of the grouped channels, while identical individual patterns depict the constant risk level associated with a particular epoch. Whether a group of channels is at the high risk level or not is identified by the occurrence of at least one such pattern in an epoch. It is also true that the variation of the risk level is abrupt across epochs and eventually across channels. Hence we are in a dilemma and cannot come up with a final verdict. The five risk levels are encoded as binary strings of length five bits using a weighted positional representation, as shown in Table 4. Encoding each output risk level gives us a string of seven alphabets, the fitness of which is calculated as the sum of the probabilities of the individual alphabets; for example, one sample epoch output yields a fitness of 0.419352. A sketch of this fitness computation follows.
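To make the fitness computation concrete, the sketch below sums per-alphabet probabilities over a seven-letter code word. The alphabet letters and probability values here are placeholders chosen for illustration only; the actual alphabets and probabilities are those of Tables 2 and 4, which are not reproduced in the text.

```python
# Hypothetical per-alphabet probabilities; NOT the values from Table 4.
ALPHA_PROB = {"U": 0.0086, "W": 0.0325, "X": 0.0621, "Y": 0.1349, "Z": 0.1812}

def fitness(code_word):
    """Fitness of a seven-letter epoch code word: the sum of the
    probabilities of its individual alphabets."""
    return sum(ALPHA_PROB[a] for a in code_word)

print(fitness("YYZXXWZ"))  # seven-letter example built from placeholder letters
```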

The Performance Index (PI) is defined as [19]

PI = \frac{PC - MC - FA}{PC} \times 100,

where PC is perfect classification, MC is missed classification, and FA is false alarm.

The performance of the code converter is 44.81%. Perfect classification occurs when both the physician and the code converter agree on the same epilepsy risk level; missed classification represents a high level identified as a low level, and false alarm represents a low level identified as a high level with respect to the physician’s diagnosis. The other performance measures are defined below.

The sensitivity Se and specificity Sp are represented as [19]

Se = \frac{PC}{PC + FA} \times 100, \qquad Sp = \frac{PC}{PC + MC} \times 100,

with the relative risk taken as the ratio Se/Sp.

The relative risk factor indicates the stability and sensitivity of the classifier. For an ideal classifier the relative risk will be unity; a more sensitive classifier has this factor slightly above unity, whereas a slow-response classifier pushes it below unity. We obtained a low value of just 40% for the Performance Index, and 83.33%, 71.42%, 78.87%, and 1.166 for sensitivity, specificity, average detection, and relative risk of the code converter, respectively. Due to these low performance measures it is essential to optimize the output of the code converter. The Performance Index of the code converter outputs using different wavelet transforms with hard thresholding methods is tabulated in Table 5. A small helper for these measures is sketched below.
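A minimal sketch of these measures, assuming PC, MC, and FA are all expressed on the same percentage scale:

```python
def performance_measures(pc, mc, fa):
    """PI, sensitivity, specificity, and relative risk from the
    perfect-classification, missed-classification, and false-alarm
    percentages, following the definitions above."""
    pi = (pc - mc - fa) / pc * 100
    se = pc / (pc + fa) * 100   # sensitivity
    sp = pc / (pc + mc) * 100   # specificity
    return pi, se, sp, se / sp  # relative risk = Se / Sp
```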

2.4. Rhythmicity of Code Converter

We now identify the rhythmicity of the code converter, which is associated with the nonlinearities of the epilepsy risk levels. Let the rhythmicity be defined as [10]

R = \frac{C}{D},

where C is the number of categories of patterns and D is the total number of patterns, which is 960 in our case. For an ideal classifier C is to be one and R = 1/960 = 0.001042. Table 6 shows the rhythmicity of the code converter classifier for hard thresholding of each wavelet; the value of R deviates greatly from its ideal value. Hence, it is necessary to optimize the code converter outputs to ensure a singleton risk level. A one-line helper for R is sketched below; in the following section we discuss the morphological filtering of EEG signals.
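A one-line helper for this ratio (the function name is ours):

```python
def rhythmicity(n_categories, n_patterns=960):
    """Rhythmicity R = C / D; an ideal classifier has C = 1,
    giving R = 1/960 = 0.001042 for our 960 patterns."""
    return n_categories / n_patterns
```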

2.5. Morphological Filtering for Feature Extraction of EEG Signals

Morphological filtering was chosen over other methods, such as the temporal approach to the EEG signal and the wavelet-based approach, because morphological filtering can precisely determine the spikes with a very high accuracy rate [14]. Let the EEG signal be a function f(n). Let us also take into account a structuring element k(n), which together with f(n) is a subset of the Euclidean space E.

Accordingly, the Minkowski addition and subtraction [6] for the function f(n) are given by the relations

(f \oplus k)(n) = \max_{m} \{ f(n - m) + k(m) \} \quad \text{(dilation)},

(f \ominus k)(n) = \min_{m} \{ f(n + m) - k(m) \} \quad \text{(erosion)}.

The opening and closing functions of the morphological filter are given as

f \circ k = (f \ominus k) \oplus k \quad \text{(opening)},

f \bullet k = (f \oplus k) \ominus k \quad \text{(closing)}.

The abovementioned equations help us determine the peaks and valleys in the original recording [7]. The opening function (erosion followed by dilation) is used to smooth the convex peaks of the original signal, and the closing function (dilation followed by erosion) is used to smooth the concave peaks. Combinations of the opening and closing functions lead to the formation of a new filter which, when fed with the original signal, divides it into two parts: the first signal is defined by the structuring elements, and the second is the residue of f(n). This type of filtering is done in order to detect the spikes with high accuracy. For two structuring elements, say k_1(n) and k_2(n), the open-close (OC) and close-open (CO) functions are defined as

OC(f(n)) = f(n) \circ k_1(n) \bullet k_2(n),

CO(f(n)) = f(n) \bullet k_1(n) \circ k_2(n).

When considered separately, the OC and CO functions result in a variation in amplitude: OC yields lower amplitude, while CO yields higher amplitude. For easier interpretation and calculation, we take the average of the two, defined as the opening-closing-closing-opening (OCCO) function:

OCCO(f(n)) = \frac{OC(f(n)) + CO(f(n))}{2},

where f(n) is derived from the original signal x(n), represented as x(n) = f(n) + t(n), with t(n) the spiky part of the signal. A sketch of this OCCO filtering follows.
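The OCCO filter translates into a few lines with SciPy's grayscale morphology. The sketch below is a rough illustration: the flat structuring-element sizes are arbitrary choices of ours, not the elements used in the original work.

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

def occo_filter(x, size1=5, size2=7):
    """Open-close / close-open smoothing with two flat structuring
    elements; the residue x - OCCO(x) carries the spiky part."""
    oc = grey_closing(grey_opening(x, size=size1), size=size2)  # OC function
    co = grey_opening(grey_closing(x, size=size1), size=size2)  # CO function
    smoothed = (oc + co) / 2.0      # OCCO: average of OC and CO
    spikes = x - smoothed           # residue used for spike detection
    return smoothed, spikes
```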

The Performance Index, sensitivity, and specificity of the code converter outputs through morphological-filter-based feature extraction arrive at the low values of 33.46%, 76.23%, and 77.42%, respectively. This motivates optimizing the code converter outputs with a postclassifier to accomplish a singleton result. The following section describes the outcome of the SVD, EM, and MEM techniques as postclassifiers.

3. Singular Value Decomposition, Expectation Maximization, and Modified EM as Postclassifier for Classification of Epilepsy Risk Levels

In this section, we discuss the usage of SVD, EM, and MEM as postclassifiers for the classification of epilepsy risk levels. Singular Value Decomposition (SVD) was established in the 1870s by Beltrami and Jordan for real square matrices [27]. It is used mainly for dimensionality reduction and for determining the modes of a complex linear dynamical system [27]. Since then, SVD has been regarded as one of the most important tools of modern numerical analysis and numerical linear algebra.

3.1. SVD Theorem

Let A be an m \times n matrix with m \ge n. The SVD theorem states that [28]

A = U \Sigma V^{T},

where U is an m \times m orthogonal matrix, V is an n \times n orthogonal matrix, and \Sigma is a diagonal matrix of size m \times n.

Equation (18) can be further realized as

A = \sum_{i=1}^{r} \sigma_i\, u_i v_i^{T},

where r is the rank of A.

The columns of U are called the left singular vectors of matrix A, and the columns of V are called the right singular vectors of A. The singular values satisfy \sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_r > 0; \Sigma is called the singular value matrix, with the \sigma_i along the diagonal.

We have taken the EEG records of twenty patients for our study. Each patient’s sample is composed of a 16 × 3 matrix of code converter outputs, as depicted in Table 3. Taking this matrix as A, the SVD is computed. The dominant singular value so obtained is regarded as the patient’s epilepsy risk level. The same procedure is carried out to find the corresponding values for the other patients. A minimal sketch follows.
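In NumPy this amounts to a single call; the matrix entries below are random stand-ins for the numeric code converter outputs of one patient.

```python
import numpy as np

# One patient's code converter output as a 16 x 3 numeric matrix
# (16 channels x 3 epochs); random values used here for illustration.
A = np.random.rand(16, 3)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
risk_level = s[0]  # dominant singular value, taken as the risk measure
print(s)           # three singular values, sigma_1 >= sigma_2 >= sigma_3
```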

3.2. Expectation Maximization as a Postclassifier

Expectation Maximization (EM) is often defined as a statistical technique for maximizing complex likelihoods and handling incomplete data problems. The EM algorithm consists of the following two steps.

Expectation Step (E-Step): given an estimate \theta^{(k)} of the parameter and the observed data y, the expected value of the complete data x is initially computed [29]:

\hat{x}^{(k)} = E[x \mid y, \theta^{(k)}].

This implies that the expected complete-data log-likelihood is

Q(\theta \mid \theta^{(k)}) = E[\ln p(x \mid \theta) \mid y, \theta^{(k)}].

Maximization Step (M-Step): using the result of the Expectation Step together with the data actually measured, we determine the ML estimate of the parameter:

\theta^{(k+1)} = \arg\max_{\theta} Q(\theta \mid \theta^{(k)}).

Considering the code converter outputs, let us take a set of unit vectors X = \{x_1, x_2, \ldots, x_n\}. We have to find the parameters \mu and \kappa of the distribution M_d. Accordingly, we can form the density as [30]

f(x_i \mid \mu, \kappa) = c_d(\kappa)\, e^{\kappa \mu^{T} x_i}.

Considering \Lambda = \{\mu, \kappa\}, the likelihood of X is

P(X \mid \Lambda) = \prod_{i=1}^{n} c_d(\kappa)\, e^{\kappa \mu^{T} x_i}.

The log likelihood of (19) can be written as

\ln P(X \mid \Lambda) = n \ln c_d(\kappa) + \kappa \mu^{T} r,

where r = \sum_{i=1}^{n} x_i.

In order to obtain the likelihood parameters \mu and \kappa, we have to maximize (22) subject to the constraint \mu^{T}\mu = 1 with the help of a Lagrange operator \lambda. The objective can be written as

L(\mu, \kappa, \lambda) = n \ln c_d(\kappa) + \kappa \mu^{T} r + \lambda (1 - \mu^{T} \mu).

Differentiating (23) with respect to \mu, \kappa, and \lambda and equating the results to zero yields the parameter constraints

\mu = \frac{r}{\lVert r \rVert}, \qquad n\, \frac{c_d'(\kappa)}{c_d(\kappa)} = -\mu^{T} r.

In the Expectation Step, the missing (threshold) data are estimated, given the observed data and the current estimate of the model parameters [31]. This is achieved using the conditional expectation, explaining the choice of terminology. In the M-Step, the likelihood function is maximized under the assumption that the missing data are known; the estimate of the missing data from the E-Step is used in lieu of the actual threshold data.

3.3. Modified Expectation Maximization Algorithm

A Modified Expectation Maximization (MEM) algorithm which uses the maximum likelihood (ML) approach is discussed in this paper for pattern optimization. Like the conventional EM algorithm, this algorithm alternates between the estimation of the complete log-likelihood function (E-Step) and the maximization of this estimate over values of the unknown parameters (M-Step) [32]. Because of the difficulties in the evaluation of the ML function [33], modifications are made to the EM algorithm as follows.

The method of maximum likelihood corresponds to many well-known estimation methods in statistics. For example, one may be interested in the heights of adult female giraffes but be unable due to cost or time constraints to measure the height of every single giraffe in a population. Assuming that the heights are normally (Gaussian) distributed with some unknown mean and variance, the mean and variance can be estimated with MLE while only knowing the heights of some samples of the overall population.

Given a set of samples X = \{x_1, x_2, \ldots, x_n\}, the complete data set consists of the sample set and a set of variables indicating from which component of the mixture each sample came. The following describes how to estimate the parameters of the Gaussian mixture with the maximization algorithm. After optimization of the patterns, maximum likelihood is adopted to redesign the intracranial area into two clusters. Basically, the maximum likelihood algorithm is a statistical estimation algorithm used for finding log-likelihood estimates of parameters in probabilistic models [30]. The steps are as follows; a sketch of the procedure is given after the next paragraph.

(1) Find the initial values of the maximum likelihood parameters: means \mu_j, covariances \Sigma_j, and mixing weights w_j.

(2) Assign each x_i to its nearest cluster centre \mu_j by the Euclidean distance d(x_i, \mu_j) = \lVert x_i - \mu_j \rVert.

(3) In the maximization step, use the ML estimate to update the parameters. The likelihood function is written as

L(\Theta) = \prod_{i=1}^{n} \sum_{j=1}^{k} w_j\, \mathcal{N}(x_i \mid \mu_j, \Sigma_j).

(4) Repeat the iterations until the change in L(\Theta) becomes small enough.

The algorithm terminates when the difference between the log likelihood of the previous iteration and that of the current iteration falls within the tolerance. With the chosen parameters, the likelihood function was applied to the 16 × 3 matrix of the code converter outputs, truncated to the known endpoints.
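As a rough illustration of the alternation described above, the sketch below runs a bare-bones one-dimensional Gaussian-mixture EM with a log-likelihood stopping rule. It is a generic EM loop under our own simplifications, not the exact modified algorithm of the paper.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100, tol=1e-6):
    """E-step responsibilities, M-step ML updates, and termination when
    the log-likelihood change falls within the tolerance."""
    rng = np.random.default_rng(0)
    mu = rng.choice(x, k)                  # initial means
    var = np.full(k, x.var())              # initial variances
    w = np.full(k, 1.0 / k)                # initial mixing weights
    prev_ll = -np.inf
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        pdf = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
              / np.sqrt(2 * np.pi * var)
        ll = np.log(pdf.sum(axis=1)).sum()
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: maximum likelihood updates of the parameters.
        n_k = r.sum(axis=0)
        w = n_k / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
        if abs(ll - prev_ll) < tol:        # stop on small likelihood change
            break
        prev_ll = ll
    return w, mu, var

# Example: cluster the 48 scalar outputs (16 channels x 3 epochs) of one patient.
values = np.random.rand(48)
weights, means, variances = em_gmm_1d(values)
```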

4. Results and Discussion

To study the relative performance of the code converters and of SVD, EM, and MEM, we measure two parameters: the Performance Index and the Quality Value. These parameters are calculated for each of the twenty patients and compared.

4.1. Performance Index

A sample of the Performance Index of morphological-filter-based feature extraction with the code converter, Singular Value Decomposition, EM, and MEM, averaged over the twenty known epilepsy data sets, is shown in Table 7. As shown in Table 7, morphological-filter-based feature extraction with SVD optimization ranks first, with a high PI of 89.48% against 80.1% and 83.35% for the EM and MEM methods. However, the morphological filter incurs more missed classifications than false alarms, which is a dangerous trend. Therefore, this method is considered a lazy, high-threshold classifier.

Table 8 depicts the performance analysis of the wavelet transforms with the hard thresholding method. In the case of hard thresholding, while the code converter has an average classification rate of 62.68% and a false alarm rate of 18.105%, the EM optimizer reaches 87.39% perfect classification with a false alarm rate of 4.43%. With little deviation from this, MEM attains 89.36% average perfect classification and a 4.46% false alarm rate. SVD optimization attains the highest perfect classification rate, 96.58%, with zero false alarms; hence the SVD optimizer can be regarded as the best postclassifier. In all four wavelet transforms the SVD postclassifier is the best suited to achieve a high classification rate, and the EM and MEM techniques fall clearly short of the classification accuracy of the SVD classifier.

Table 9 presents the performance analysis of the wavelet transforms with soft thresholding with the code converter, SVD, EM, and MEM, respectively. It can be found that, in soft thresholding, the code converter has an average perfect classification of 65.6% and a false alarm rate of 11.94%. SVD attains a classification rate of over 85%, but with comparatively higher false alarm values. The MEM optimizer proves to be the best here, with a classification rate of 93.97% and a false alarm rate of only 3.5%; this is obtained when the Haar wavelet is used with Minimaxi soft thresholding.

4.2. Quality Value

This parameter determines the overall quality of the classifiers used. The relation for the Quality Value is given by [19]

QV = \frac{C}{(R_{fa} + 0.2)\,(T_{dly} \cdot P_{dct} + 6 \cdot P_{msd})},

where C is a scaling constant, R_{fa} the number of false alarms per set, T_{dly} the average delay of on-set classification, P_{dct} the percentage of perfect classification, and P_{msd} the percentage of perfect risk level missed.

The value of C is set to a constant, taken here as 10. The classifier with the highest Quality Value is the better one. Table 10 depicts the Quality Value of the wavelet transforms with hard thresholding and the SVD, EM, and MEM optimization methods. It is observed that SVD with the dB2 wavelet in hard thresholding attains the maximum QV of 23.82, while EM with the Haar wavelet has the lowest QV of 18.32. A small helper for this computation is sketched below.
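A minimal sketch of the Quality Value under the reconstructed formula; we assume the percentage terms enter as fractions, since the exact scaling is not shown in the text.

```python
def quality_value(rfa, tdly, pdct, pmsd, c=10.0):
    """QV = C / ((Rfa + 0.2) * (Tdly * Pdct + 6 * Pmsd)) with C = 10;
    pdct and pmsd are given as percentages and converted to fractions
    here (an assumption on our part)."""
    return c / ((rfa + 0.2) * (tdly * pdct / 100 + 6 * pmsd / 100))
```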

Table 11 shows the performance analysis of the twenty patients using dB2 wavelet hard thresholding with SVD, EM, and MEM as postclassifiers. The evaluation parameters achieve appreciable values for the SVD postclassifier when compared with the other two classifiers; hence, we can choose SVD as a good postclassifier for epilepsy risk level classification. All three postclassifiers show good sensitivity and specificity measures, but the EM and MEM classifiers incur higher false alarm rates, which lead to lower QV and PI for the system.

Since the Haar wavelet is a predominant wavelet, we chose it for the four types of soft thresholding methods, and the results are depicted in Table 12. As seen in Table 12, the highest QV of 22.54 is attained with Minimaxi soft thresholding and MEM as the postclassifier.

Table 13 exhibits the performance analysis of the twenty patients using the Haar wavelet in soft thresholding with the SVD, EM, and MEM postclassifiers. The MEM postclassifier with Minimaxi soft thresholding reaches better QV and PI than the SVD and EM classifiers; a slight incremental tradeoff in the weighted delay for MEM is responsible for this performance. SVD fails to achieve good performance in this methodology due to its higher false alarm rate, and EM lands in the middle as far as the Performance Index is concerned.

Table 14 shows the performance analysis of the twenty patients using morphological filters with the SVD, EM, and MEM postclassifiers. In this method SVD outperforms the other classifiers in terms of QV and PI. Morphological filtering is inherently slow in response and is considered a high-threshold classifier; the SVD classifier, however, combines low false alarms with low weighted delays. On average, all these methods attain more than 90% Performance Index and a Quality Value of around 18. Since the obtained weighted delay exceeds 2 seconds for all these classifiers, the result is a larger threshold and a slow-response system.

We now analyse the time complexity of the postclassifiers in terms of weighted delay and Quality Value. Table 15 shows the performance analysis of the postclassifiers in these terms. It is observed that the four types of wavelet transforms in the hard thresholding method along with the SVD postclassifier attain low weighted delay and high QV.

As seen in Table 15, the EM and MEM classifiers incur either more missed classifications or more false alarms and consequently settle at QV values below 20 for most of the wavelet transforms. In the case of soft thresholding, the dB2 wavelet with Rigrsure thresholding and the MEM postclassifier outperforms the other fifteen methods. Morphological filters are stuck at higher delays, with QV near 20.

5. Conclusion

The objective of this paper is to classify the risk levels of epileptic patients from EEG signals. The aim is to obtain a high classification rate, Performance Index, and Quality Value with low false alarms and missed classifications. Due to the nonlinearity and poor performance found in the code converters, an optimization was vital for the effective classification of the signals, so we applied SVD, EM, and MEM as postclassifiers. Morphological filters were also used for the feature extraction of the EEG signals. After computing the values of PI and QV discussed in the results section, we found that SVD worked best, with a high classification rate of 91.22% and a false alarm rate as low as 1.42%. Therefore, SVD was chosen as the best postclassifier. The accuracy of the results can be improved further by using an extreme learning machine as a postclassifier, and further research will proceed in this direction.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors express their sincere thanks to the Management and the Principal of Bannari Amman Institute of Technology, Sathyamangalam, for providing the necessary facilities for the completion of this paper. This research is also funded by AICTE RPS: F. no. 8023/BOR/RID/RPS-41/2009-10, dated December 10, 2010.