Abstract

Brain-computer interfaces (BCIs) require online, real-time processing of EEG signals, so the accuracy of the recording system must be improved by suppressing the artifacts that arise during acquisition. The goal of this work is to develop a hybrid model for recognizing and minimizing ocular artifacts through an improved deep learning scheme. The discrete wavelet transform (DWT) and Pisarenko harmonic decomposition are used to decompose the signals. The features are then extracted by principal component analysis (PCA) and independent component analysis (ICA). After the features are collected, an optimized deformable convolutional network (ODCN) is used to recognize ocular artifacts in the EEG input signals. When artifacts are detected, the mitigation stage is executed by applying empirical mean curve decomposition (EMCD) followed by the ODCN for noise reduction in the EEG signals. Finally, the clean signal is reconstructed by applying the inverse EMCD. The proposed method achieves higher performance than conventional methods, demonstrating better ocular artifact reduction.

1. Introduction

Electroencephalogram (EEG) signals are affected by artifacts in the recorded electrical activity, which in turn hinders EEG analysis. To extract clean data from EEG signals and to improve detection efficiency during EEG recordings, an improved model is required. Although various methods have been proposed for artifact removal, research on this problem continues. Although EEG signals are heavily contaminated by several types of artifacts originating from both the subject and the equipment, the most common and important type of interference is the ocular artifact. EEG is a key tool for analyzing brain activity and behavior. Jaffino et al. [1] proposed a grey wolf optimization-based approach for detecting epileptic seizures with acceptable efficiency. Obukhov et al. [2] proposed a wavelet-based method for feature extraction from EEG; although it has some advantages, its performance was not satisfactory. Sawangjai et al. [3] experimented with a generative adversarial network approach for removing ocular artifacts from EEG signals with moderate sensitivity. Similarly, Peterson et al. [4] reviewed signal-to-noise ratio issues for ITER. A model combining wavelet-ICA and SVM was proposed in [5] to improve artifact elimination without losing EEG data and without depending on any thresholding function; however, its performance was limited to a certain range, and it requires a large number of training features when dealing with large, noisy datasets. Selvan et al. [6] described an efficient technique for artifact removal from EEG signals that combines two adaptive filtering techniques: adaptive noise cancellation (ANC) for removing noise from the primary and reference signals and an adaptive signal enhancement scheme for enhancing the ANC output. Performance analysis in real-time applications revealed that this model efficiently removed ocular artifacts from EEG signals. Peng et al. [7] presented a model for removing ocular artifacts from EEG signals based on DWT and ANC. Its accuracy was compared with existing models on simulated and measured data, and because it requires only a single-channel source, it is suitable for real-time applications and portable environments. DWT combined with ANC eliminates artifacts in the low-frequency band even when the artifact spectrum overlaps with the EEG signal, but it suffers from some processing overhead. Betta et al. [8] established a novel automated method for removing ocular artifacts when analyzing rapid eye movement (REM) signals. It combines a detection algorithm, based on correlating DWT and adaptive filtering techniques, with a removal system to improve artifact removal accuracy. Quazi et al. [9] implemented an artifact removal algorithm based on a hybrid Firefly-Levenberg-Marquardt (FLM) scheme, evaluated in terms of mean square error, signal-to-noise ratio (SNR), and computational time.
The estimated results showed that the FLM-based model delivered improved performance in mitigating artifacts from EEG signals and provides accurate artifact removal; however, it may fall into local minima. Jafarifarmand et al. [10] developed a model combining ICA and ANC: ICA was used to extract the artifact source signals as independent components, and the extracted results were fed to a neural-network-based ANC. The analysis showed that this model identified and reduced artifacts in EEG signals effectively. ICA-ANC performs well in artifact removal thanks to its parallel cleaning procedure, but it tracks changes poorly during online analysis. In the field of medical diagnosis, EEG signals are used to record the electrical activity of the brain. EEG signals are often contaminated by different types of artifacts, among which ocular artifacts are considered the major source of noise, and their identification and removal is a challenging task. EEG evaluates the electrical activity inside the brain using electrodes attached to the scalp and is a noninvasive brain imaging technique [11, 12]. The advantages of EEG in the medical field are its fast operation, safety, relatively low cost, simplicity, and portability. On the other hand, several artifacts of technical and biological origin heavily contaminate EEG signals [13–15]. The most common artifacts arise from muscle activity, heartbeat, eye blinks, or eye movements, and they are a major hindrance to the analysis of EEG signals. The human eyes produce a large electric potential during blinks, and the resulting signal is known as the electrooculogram (EOG). The EOG signal spreads over the scalp and contaminates the EEG signal, producing what are known as ocular artifacts [16, 17]. These ocular artifacts interfere with the measurement of brain signals and produce significant changes, which may induce negative waveforms with high amplitude. Therefore, the recognition and removal of ocular artifacts from EEG signals is an essential process, and various techniques are available for it [18].

In past research, singular value decomposition (SVD) and PCA have been used to remove ocular artifacts. Although both methods can recognize the artifacts, they do not remove them completely because of incorrect assumptions made while measuring the EEG signals. Adaptive filtering is another technique that has been used for ocular artifact removal, but its results are also limited because it ignores some information shared among electrodes [19, 20]. ICA has been used to analyze and then eliminate ocular artifacts from EEG signals. It applies a linear transformation that minimizes the statistical dependence among the independent components (ICs), although the ICs may lose some of the EEG data [21, 22]. However, ICA is not trained well enough to remove ocular artifacts completely. Blind source separation (BSS) algorithms have been used to statistically separate EOG and EEG into ICs, with the separation repeated on EEGs with inverted EOG channels; however, this approach has restrictions related to the reference EOG channels [23].

Adaptive noise cancellation (ANC) and DWT techniques are used to remove ocular artifacts from EEG signals [24–28]. This approach can operate on a single EEG channel without needing an EOG signal. Although it gives reasonable results with superior performance, it depends on the wavelet form and threshold function, which leads to loss of data in the EEG signals [29, 30]. The existing models thus have many challenges to overcome, and deep learning is used to solve the issues of the conventional methods. It also provides various techniques that efficiently remove ocular artifacts from EEG signals. The advantages of deep learning methods include (a) strong generalization ability, (b) time savings, and (c) no need for additional EOG reference signals. Most deep learning models provide high accuracy in recognizing and mitigating ocular artifacts from EEG signals [3, 31–34]. Therefore, it is necessary to develop new methodologies that address the abovementioned challenges and remove ocular artifacts efficiently. The main contributions of this paper are as follows:

(i) To develop a model for the detection and removal of ocular artifacts from EEG signals using 5-level DWT and the Pisarenko harmonic method for signal decomposition, PCA and ICA for feature extraction, EMCD for decomposition in the mitigation phase, and an optimized DCN with DS-EFO for detecting ocular artifacts with an enhanced accuracy rate.

(ii) To develop an efficient ocular artifact detection method using a DCN optimized by the proposed DS-EFO algorithm, tuning the DCN parameters with a multiobjective function that maximizes accuracy and precision, and to validate the efficiency of the detection and prevention phases on trained datasets from BCI applications using various performance metrics compared with existing algorithms.

(iii) To use the chirplet transform to evaluate the RMSE performance of the proposed scheme.

2. Deep Learning-Based Detection and Prevention of Ocular Artifacts from EEG Signals

2.1. Proposed Architecture

In recent years, the medical field has used EEG signals for several brain-related evaluations. In general, EEG signals suffer from drawbacks such as diverse types of noise, a low SNR, overlap between noise and artifacts, and nonlinear and nonstationary properties. Among these, artifacts are the most serious issue, as they can degrade the usefulness of the EEG signals. Artifacts in EEG signals may cause electronic saturation with high amplitude, which corrupts the signals and leads to improper results in BCI applications. Several types of artifacts affect EEG signals in different ways; one of the most common is the ocular artifact, which is caused by the overlap of EOG and EEG signals in both the time and frequency domains. Ocular artifacts are 10 to 100 times stronger than EEG signals, which makes them particularly harmful, and removing them from EEG signals is therefore a challenging task. Various techniques are used to identify and remove ocular artifacts from EEG signals, such as DWT, ICA, PCA, BSS, and FLM. Although these techniques give reasonable results, they also have limitations: they cannot remove the artifacts completely with high accuracy, they need additional EOG recordings, they require multichannel EEG signals, and so on. Therefore, deep learning techniques are used in this paper to achieve accurate results in the diagnosis and mitigation of ocular artifacts from EEG signals. The proposed detection and mitigation framework is depicted in Figure 1.

The proposed ocular artifact diagnosis model has two phases: (i) a detection phase and (ii) a mitigation phase. The detection phase consists of three processes: signal decomposition, feature extraction, and ocular artifact detection. Initially, the raw EEG input signals are decomposed by two techniques, 5-level DWT and Pisarenko harmonic decomposition. The input EEG signal is decomposed into a number of samples, which are examined one by one for efficient processing. The decomposed signals are then given as input to PCA and ICA, which extract features from them and help reduce redundant features. The features extracted by PCA and ICA are concatenated and forwarded to a deep learning model, the optimized DCN, in which the epoch and learning rate are tuned using distance-sorted EFO (DS-EFO). The optimized DCN is trained to classify the signals from the extracted features, so the DS-EFO-trained network outputs either a signal with artifacts or a signal without artifacts. An objective function defined in terms of precision and accuracy ensures efficient detection of ocular artifacts.

The mitigation phase is initiated once ocular artifacts are detected in the first phase. It involves three steps: signal decomposition, signal denoising, and signal recovery. Semisimulated data are generated from the signals with ocular artifacts and divided into a decomposed signal and a leftover signal using the EMCD technique. The decomposed signal is forwarded to the DS-EFO-optimized DCN to produce the denoised signal, which is then processed through the inverse EMCD to generate the restored denoised signal. The artifact-removed (retrieved) signal is obtained by summing the leftover signal and the restored denoised signal. Here, the objective driving the efficiency of ocular artifact mitigation is the minimization of the MAE between the clean signal and the retrieved signal.

2.2. Signal Decomposition Phase

The initial step of efficient signal processing is signal decomposition, in which the signal components are extracted and separated into a larger number of samples. The first stage of decomposition of the input EEG signals is performed by 5-level DWT. The collected input EEG signals are indexed from the first to the last recording, with the index running over the total number of input EEG signals.

Discrete wavelet transform (DWT) [28]: this wavelet transform decomposes the input EEG signals into a number of samples, where each sample is a time series of coefficients that describes the signal evolution in time for a given frequency band. DWT divides the signal into low- and high-frequency bands, and the low-frequency band is further divided into low- and high-frequency parts. The high-frequency band contains information about the edges and surface of the signal. In the 5-level DWT decomposition, the level 1 decomposition of the signal produces four subfrequency bands: LFLF1, LFHF1, HFLF1, and HFHF1. The LFLF1 subband of the top level is given as input to the next level of decomposition. The remaining levels proceed as follows:

(i) For level 2, the DWT is applied to the LFLF1 subband of the previous level, generating four subfrequency bands: LFLF2, LFHF2, HFLF2, and HFHF2.
(ii) Likewise, level 3 produces four subbands, LFLF3, LFHF3, HFLF3, and HFHF3, by applying the DWT to LFLF2, i.e., to level 2.
(iii) For level 4, the DWT is applied to the LFLF3 band of level 3, delivering four subfrequency bands: LFLF4, LFHF4, HFLF4, and HFHF4.
(iv) Finally, level 5 is obtained by applying the DWT to the LFLF4 band, generating four subfrequency bands: LFLF5, LFHF5, HFLF5, and HFHF5.

To compute the DWT of a signal, it is passed through a filter bank. Initially, the samples are passed through a low-pass filter with a given impulse response, and the result is generated as shown in Eq. (1).

Similarly, a high-pass filter is also used for signal decomposition. The output of the low-pass filter is downsampled by 2, and the signal is then passed to a new low-pass and high-pass filter pair, each with half the cut-off frequency of the previous one, for further processing. The process is defined in Eq. (2) and Eq. (3).

Hence, the decomposed signal is generated using the DWT technique.
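As a minimal sketch of this step, the snippet below performs a 5-level DWT of a single EEG channel using PyWavelets. The wavelet family ('db4') and the use of a standard 1-D multilevel DWT (one approximation plus one detail band per level) are assumptions for illustration; the four-subband splitting described above would correspond to a wavelet-packet-style scheme, and the paper's exact filter choices are not reproduced here.

```python
# Minimal sketch of 5-level DWT decomposition of one EEG channel using
# PyWavelets. The 'db4' wavelet and the plain multilevel DWT are
# assumptions; they stand in for the 5-level decomposition described above.
import numpy as np
import pywt

def decompose_eeg_dwt(eeg_signal, wavelet="db4", levels=5):
    """Return the level-5 approximation and the detail coefficient arrays."""
    coeffs = pywt.wavedec(eeg_signal, wavelet, level=levels)
    approximation, details = coeffs[0], coeffs[1:]
    return approximation, details

# Example on a synthetic 1 s segment sampled at 250 Hz
fs = 250
t = np.arange(fs) / fs
segment = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(fs)
cA5, (cD5, cD4, cD3, cD2, cD1) = decompose_eeg_dwt(segment)
print([len(c) for c in (cA5, cD5, cD4, cD3, cD2, cD1)])
```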

Pisarenko harmonic decomposition [17]: the next stage of decomposition is accomplished by the Pisarenko harmonic decomposition technique. This technique is generally known for frequency estimation, in which the eigenvector corresponding to the smallest eigenvalue of the input signal's autocorrelation matrix is used for the evaluation, and the result is generated as shown in Eq. (4).

Here, the noise eigenvector is the one associated with the smallest eigenvalue. Thus, a further decomposed signal is generated using the Pisarenko harmonic decomposition technique.
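The following is a minimal sketch of Pisarenko frequency estimation consistent with the description above: the eigenvector of the autocorrelation matrix associated with the smallest eigenvalue defines a polynomial whose roots give the sinusoidal frequencies. The model order (number of sinusoids) and the use of a biased autocorrelation estimate are assumptions.

```python
# Minimal sketch of Pisarenko harmonic decomposition for frequency
# estimation. The eigenvector of the smallest eigenvalue of the
# autocorrelation matrix defines the noise polynomial; its roots give the
# sinusoidal frequencies. The model order is an assumed parameter.
import numpy as np
from scipy.linalg import toeplitz, eigh

def pisarenko_frequencies(x, n_sinusoids=1, fs=1.0):
    p = 2 * n_sinusoids                               # order of the sinusoidal model
    x = x - np.mean(x)
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)
    R = toeplitz(r[:p + 1])                           # (p+1)x(p+1) autocorrelation matrix
    eigvals, eigvecs = eigh(R)                        # eigenvalues in ascending order
    noise_vec = eigvecs[:, 0]                         # eigenvector of the smallest eigenvalue
    roots = np.roots(noise_vec)                       # roots of the noise polynomial
    freqs = np.abs(np.angle(roots)) * fs / (2 * np.pi)
    return np.unique(np.round(freqs, 6))[:n_sinusoids]

# Example: recover a 10 Hz component from a noisy sample
fs = 250
t = np.arange(2 * fs) / fs
x = np.sin(2 * np.pi * 10 * t) + 0.05 * np.random.randn(t.size)
print(pisarenko_frequencies(x, n_sinusoids=1, fs=fs))
```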

2.3. Proposed DS-EFO

The detection and mitigation of ocular artifacts is effectively improved by the developed heuristic algorithm, DS-EFO, a variant of EFO. The DCN is used to detect and mitigate the artifacts in the signals, and its parameters, the epoch and learning rate, are optimized by the DS-EFO algorithm to improve detection efficiency. EFO is a swarm-intelligence algorithm that performs many optimization refinements and solves complex problems well; however, it needs many steps and therefore considerable computation time. DS-EFO is proposed to overcome these limitations of the existing EFO by simplifying the process, thereby reducing the computation time.

EFO [29]: EFO simulates the communication behavior of nocturnal electric fish. These fish live in muddy waters where visibility is poor, so they rely on a species-specific ability known as electrolocation, the sense that allows them to distinguish prey from obstacles, to perceive their environment. The electric organ, located in the tail and composed of disc-like cells called electrocytes, generates an electric field; the simultaneous excitation of these electrocytes produces an electric organ discharge (EOD), which is characterized by its amplitude and frequency. The amplitude of the electric field determines the effective range of the EOD in the local search and depends on the size of the fish. Fish closest to the optimal source generate a high-frequency electric field, and the time corresponding to the frequency is measured for each individual. Electrolocation is categorized as active or passive according to the fish's capability of searching for and locating prey. Active electrolocation has a limited range: the fish senses nearby areas to identify prey and generates an EOD through changes in the electric field. Passive electrolocation has a wider range than active electrolocation, allowing the fish to locate distant objects and to communicate with other fish. Thus, to find the best food source quality for each individual, with its time frequency, in a large-dimensional search space, the computational steps of the EFO algorithm are formulated in the following equations.

In the conventional EFO algorithm, the solutions are updated based on several different constraints, which leads to computational and time complexity. Therefore, the proposed DS-EFO algorithm is based on the distances among the solutions: it is driven by a single criterion, distance, which makes the algorithm simpler. The distance is computed between the best solution and each current solution, and the mean of these distances is then computed. If the distance of the current solution is less than the mean distance and there is at least one neighbor in the active sensing area, the solution is updated using active electrolocation; otherwise, it is updated using passive electrolocation.
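A minimal sketch of this distance-sorted switching rule is given below. Each individual is routed to the active-electrolocation update when its distance to the best solution is below the population mean distance and at least one neighbor lies inside its active range; otherwise it uses the passive update. The callables `active_update`, `passive_update`, and `active_range` are hypothetical placeholders standing in for Eqs. (8)-(14), which are not reproduced here.

```python
# Minimal sketch of the DS-EFO switching rule described above. The update
# and range functions are hypothetical placeholders for Eqs. (8)-(14).
import numpy as np

def ds_efo_step(population, fitness, active_update, passive_update, active_range):
    best = population[np.argmin(fitness)]                  # minimization assumed
    dist_to_best = np.linalg.norm(population - best, axis=1)
    mean_dist = dist_to_best.mean()
    new_population = np.empty_like(population)
    for i, individual in enumerate(population):
        # pairwise distances from this individual to all others
        d = np.linalg.norm(population - individual, axis=1)
        d[i] = np.inf
        has_neighbour = np.any(d <= active_range(individual))
        if dist_to_best[i] < mean_dist and has_neighbour:
            new_population[i] = active_update(individual, population, d)
        else:
            new_population[i] = passive_update(individual, population, best)
    return new_population
```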

Population initialization: the collection of individuals, i.e., the electric fish population, is spread randomly in the search space. The population initialization with the determination of boundaries is formulated in Eq. (5).

Here, each individual has a location in the multidimensional search space, the population has a fixed size, the values are drawn from a uniform distribution, and the search space is bounded below and above in each dimension.
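A minimal sketch of this uniform initialization within lower and upper bounds follows. The bound values used in the example (epoch in [10, 20], learning rate in [0.1, 0.9]) mirror the DCN parameter ranges stated later; the population size of 10 matches the experimental setup.

```python
# Minimal sketch of the uniform population initialization of Eq. (5):
# each individual is drawn uniformly between the lower and upper bounds
# of the search space. The example bounds correspond to the DCN epoch
# and learning-rate ranges used later in the paper.
import numpy as np

def initialise_population(n_individuals, lower, upper, rng=None):
    rng = np.random.default_rng(rng)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    return lower + rng.random((n_individuals, lower.size)) * (upper - lower)

population = initialise_population(10, lower=[10, 0.1], upper=[20, 0.9], rng=0)
print(population.shape)   # (10, 2)
```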

After population initialization, the frequency probability of each individual is determined from its fitness value using the minimum and maximum frequencies of the population. Individuals with a higher frequency use active electrolocation, and the others employ passive electrolocation. The frequency value of an individual, derived from its fitness value, is formulated in Eq. (6).

Here, the best and worst fitness values of the population at the current iteration are used, and the resulting frequency value lies in the range of 0 to 1. Next, the amplitude value of an individual is calculated as a weighted combination of its previous amplitudes, on which it depends. The amplitude value also matters to the passively electrolocating fish, since the electric field strength decreases with the inverse cube of the distance. The calculation of the amplitude value is formulated in Eq. (7).

Active electrolocation: the characteristics of active electrolocation determine the exploitation capability. The amplitude value determines the active range of each individual, as formulated in Eq. (8).

After calculating the active range, the distances between each individual and the rest of the population are measured. The Cartesian distance between an individual and its neighboring individuals is formulated in Eq. (9).

The EFO algorithm uses the update formula of Eq. (10) when at least one neighbor exists in the active region.

Passive electrolocation: the exploration capability is based on the characteristics of passive electrolocation. The probability that an individual in active mode is perceived by an individual in passive mode is calculated using Eq. (11).

Using Eq. (11), individuals are selected to determine a reference location in Eq. (12), and the new location is generated as in Eq. (13).

Finally, the probability of accepting the new location is increased by modifying one parameter of the individual, as formulated in Eq. (14).

In the EFO algorithm, the calculation of active and passive electrolocation takes several steps to find the distances between individuals and the location of the best food source in the given search space. In the proposed algorithm, Eqs. (10) and (14) are modified to reduce the time complexity and the computation time. The pseudocode of the proposed DS-EFO algorithm is given in Algorithm 1.

Initialize the population randomly within the search bounds (Eq. (5))
Calculate the fitness value of each individual
Calculate the frequency and amplitude of every individual using Eq. (6) and Eq. (7)
 for each individual do
  If its distance to the best solution is less than the mean distance and at least one neighbor lies in its active sensing area then
   Calculate the location of the best food source in active electrolocation mode
   Determine the active range of the individual (Eq. (8))
   Estimate the distances between the individual and the other individuals (Eq. (9)) and update its location (Eq. (10))
  Else
   Calculate the location of the best food source in passive electrolocation mode
   Select individuals using the probabilities of Eq. (11) and form the reference and new locations (Eqs. (12) and (13))
   Modify one parameter of the individual using Eq. (14)
  End
  Evaluate the quality of the new source
 Update the frequency and amplitude values of the population
End

The flowchart of the proposed DS-EFO algorithm is represented in Figure 2.

3. Ocular Artifacts Detection by Optimized Deformable Convolutional Networks

3.1. Feature Extraction by PCA and ICA

The feature extraction process refers to transforming the input signals into numerical features that preserve the information of the input data during processing. Better results are obtained when detection or classification tasks are performed on the extracted features than on the raw input data. The features of the decomposed signal are extracted by two component-analysis techniques, PCA and ICA.

PCA [12]: PCA is a data reduction technique that uses linear algebra for feature extraction, transforming the input data signal into a compressed form, i.e., a small number of relevant features. The features are arranged in a matrix, and the feature extraction process starts by evaluating the weighted means of the variables, as formulated in Eq. (15).

In Eq. (15), each weighted mean is formed from the weights and the corresponding variables of the feature matrix.

In Eq. (16), the covariance matrix is expressed in terms of the variances of the variables, and the covariance itself is formulated in Eq. (17).

In Eq. (17), the variables are paired for the covariance computation. The eigenvalues and eigenvectors of the covariance matrix can then be calculated; if the matrix is of full rank, all eigenvalues and their corresponding eigenvectors are obtained as in Eq. (18).

The features of the decomposed signals extracted by PCA are collected into a feature set; in this work, 83 PCA features are obtained.
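The following is a minimal PCA feature-extraction sketch along the lines of Eqs. (15)-(18): center the decomposed-signal matrix, form its covariance matrix, take the eigenvectors of the largest eigenvalues, and project the data onto them. Retaining 83 components mirrors the feature count reported above; the input shape is illustrative.

```python
# Minimal PCA feature-extraction sketch: covariance eigendecomposition
# followed by projection onto the leading eigenvectors. The input shape
# is illustrative; 83 components matches the reported feature count.
import numpy as np

def pca_features(X, n_components=83):
    """X: (n_samples, n_variables) matrix of decomposed-signal values."""
    X_centred = X - X.mean(axis=0)
    cov = np.cov(X_centred, rowvar=False)              # covariance matrix, cf. Eqs. (16)-(17)
    eigvals, eigvecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]   # largest eigenvalues first
    return X_centred @ eigvecs[:, order]               # projected features

X = np.random.randn(288, 150)        # e.g. 288 epochs, 150 decomposition coefficients
features_pca = pca_features(X)
print(features_pca.shape)            # (288, 83)
```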

ICA [20]: ICA extracts features from the input signal, a multivariate random signal, by transforming it into independent components, each carrying information that does not overlap with the others. Numerically, the probability of each component is obtained in the feature extraction process. The multivariate density function is measured by gathering the independent components into a vector, assumed to have zero mean, and the result is generated as shown in Eq. (19).

Eq. (20) formulates the dimensional data for each component. The main aim of ICA is to recover the source signals from the sensed signals, as formulated in Eq. (21).

Here, the source signals are estimated from the mixed observations. The features of the decomposed signals extracted by ICA are likewise collected, again yielding 83 features. Thus, the features extracted by PCA and ICA are concatenated to form the final feature set, whose size is the total number of concatenated features.
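A minimal ICA sketch follows. Here scikit-learn's FastICA is used as a stand-in for the ICA step described above; the component count of 83 matches the reported feature count, and the input matrix and concatenation with the PCA features are illustrative.

```python
# Minimal ICA sketch: FastICA stands in for the ICA step described above.
# The input matrix is illustrative; 83 components matches the text.
import numpy as np
from sklearn.decomposition import FastICA

X = np.random.randn(288, 150)                    # decomposed-signal matrix
ica = FastICA(n_components=83, random_state=0, max_iter=1000)
features_ica = ica.fit_transform(X)              # estimated independent components
print(features_ica.shape)                        # (288, 83)

# Concatenate the PCA and ICA features as described in the text
# (features_pca assumed from the previous sketch):
# features = np.hstack([features_pca, features_ica])
```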

Optimized DCN-based detection process: efficient ocular artifact detection is performed by the DCN, which is further improved by optimizing its epoch and learning rate using the proposed DS-EFO algorithm. The concatenated features from PCA and ICA are given as input to the optimized DCN, which classifies the signals as with or without ocular artifacts.

DCN [7, 26]: the deformable network is designed to overcome the performance limitations of the conventional CNN. The DCN has learnable deformable convolution and pooling layers. Deformable convolution adds offsets to the regular grid sampling locations of standard convolution to deform the fixed receptive field of the previous activation unit; likewise, deformable pooling adds an offset to each position of standard pooling. The offsets are extracted from the preceding feature map.

Deformable convolution: the convolution layer is the key component of a CNN and is used to extract feature maps from the input. Regular convolution consists of two steps, sampling and summation: sampling is performed on the input feature map, and the summation uses the weighted kernel values. In deformable convolution, offsets are added to the sampling locations of the regular grid, so feature extraction is enhanced by generating deformed sampling locations for the existing convolutions. This is implemented by adding two modules before the regular convolution: one produces the offset field, and the other generates the deformable feature maps. The offset field for each instantaneous value of the input signal is calculated through convolution, and the information of neighboring instantaneous values is fused to generate the deformable signal. The extracted features are given as input to the DCN. The sampling locations are shifted to neighboring locations by training the offset field, which is generated using the weights of the convolution layer. The output generated from the deformable features using regular convolution is formulated in Eq. (22).

Here, the kernel weights multiply the sampled values. Deformable convolution considers fractional data locations and interneuron positions, which are not considered in regular convolution; moreover, it has no fixed sampling shape.

Deformable pooling: in conventional pooling, downsampling is used to reduce the size of the input values and speed up the learning process. Fixed sampling locations and limited learnability are the drawbacks of the existing operations, and both are addressed by deformable region-of-interest (RoI) pooling. Before pooling, offsets are added to the spatial positions, and the kernel weights of the downsampling are trained using the deformable sampling locations. The functions of deformable convolution and deformable pooling are depicted in Figure 3.

The DCN consists of a deformable convolution layer followed by a deformable pooling layer. The deformable signal is generated by applying linear interpolation to the input signal and is then passed to the convolution. The trainable offsets are computed for both the convolution and pooling layers.

Deformable convolution layer: in the existing convolution methods, the output feature for each time instant is defined as shown in Eq. (23).

Here, the time instants of the sampling grid define the regular sampling pattern, and the regular grid is then augmented with offsets.

In Eq. (24), the offsets indicate the changeable sampling locations. These locations are typically fractional, so linear interpolation is used to find the new location, as given in Eq. (25).

Here, the fractional location is interpolated from the spatial locations in the feature map, and the linear interpolation kernel is represented in Eq. (26).

Therefore, the computation time is reduced by using deformable convolution compared with regular convolution. Additionally, both the kernels of the convolution layer and the offsets are learned efficiently during training. The regular sampling of the convolution layer is replaced with adaptive sampling to achieve enhanced learning.
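Below is a minimal numpy sketch of the 1-D deformable sampling described by Eqs. (23)-(26): each kernel tap samples the input at a fractionally offset location recovered by linear interpolation, and the taps are combined with the kernel weights. The offsets are treated as given here; in the full DCN they are produced by a learned offset branch.

```python
# Minimal sketch of 1-D deformable convolution sampling: each kernel tap k
# samples the input at position p + k + offset via linear interpolation of
# the two nearest integer locations, then the samples are combined with
# the kernel weights. Offsets are assumed given; in a DCN they are learned.
import numpy as np

def deformable_conv1d(x, weights, offsets):
    """x: (T,) signal; weights: (K,) kernel; offsets: (T_out, K) shifts."""
    K = len(weights)
    T_out = offsets.shape[0]
    y = np.zeros(T_out)
    for p in range(T_out):
        acc = 0.0
        for k in range(K):
            pos = np.clip(p + k + offsets[p, k], 0, len(x) - 1)  # fractional location
            lo, hi = int(np.floor(pos)), int(np.ceil(pos))
            frac = pos - lo
            sample = (1 - frac) * x[lo] + frac * x[hi]           # linear interpolation
            acc += weights[k] * sample
        y[p] = acc
    return y

x = np.sin(np.linspace(0, 2 * np.pi, 64))
weights = np.array([0.25, 0.5, 0.25])
offsets = 0.3 * np.random.randn(62, 3)       # T_out = T - K + 1 for a 'valid' output
print(deformable_conv1d(x, weights, offsets).shape)   # (62,)
```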

Deformable pooling layer: this layer uses spatial pooling, which aggregates neighboring locations and generates a summarized representation of the joint distribution of the features. Since the existing pooling models are not trainable and their sampling locations are fixed, RoI pooling is used in the deformable pooling layer. The generated output is shown in Eq. (27).

Fully connected layer: this layer is used to determine the class of each signal, as shown in Eq. (28).

Here, the weight vector and bias of the fully connected layer are applied, followed by the signum function. The overall architecture of the DCN is represented in Figure 4.

The detection of ocular artifacts by the optimized DCN yields, from the raw input EEG signals, an output indicating whether the signal contains ocular artifacts or not. In the detection process, the optimized DCN is trained with the extracted features as input and the presence or absence of ocular artifacts as the target. This trained optimized DCN efficiently detects ocular artifacts with respect to accuracy and precision.

4. Prevention of Ocular Artifacts by EMCD and Optimized Deformable Convolutional Networks

4.1. Semisimulation Data Generation

The prevention or mitigation phase of ocular artifacts from EEG signals uses the same DCN architecture as the detection phase; here, the optimized DCN is used to denoise the signals. The ocular artifacts identified in the detection phase are removed in the mitigation phase. However, there is no ground truth proving the complete removal of ocular artifacts from real recordings, so semisimulated data are generated to validate the removal process. Signals contaminated with ocular artifacts are combined with artifact-free signals: the signals without added artifacts are taken as the target signals, and the signals obtained after removing the ocular artifacts are taken as the denoised signals.

The 22 EEG and 3 EOG channels of the 25-channel recordings are taken from the BCI Competition IV dataset. The data were segmented into 288 epochs of 6 s each and reshaped into a data tensor [35]. Furthermore, the labels indicating whether each epoch is artifact-contaminated or clean are obtained from the BCI competition, and the contaminated epochs are reshaped into a matrix.
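A minimal sketch of semisimulated data generation is given below: an artifact-free EEG epoch is contaminated by adding a scaled EOG epoch so that a chosen SNR is obtained, in line with the 0.5-1.5 SNR range used in the results. The scaling convention SNR = rms(EEG) / (lambda * rms(EOG)) and the synthetic blink waveform are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch of semisimulated data generation: a clean EEG epoch (the
# target) is contaminated by adding a scaled EOG epoch at a chosen SNR.
# The scaling convention and the blink waveform are assumptions.
import numpy as np

def contaminate(eeg_clean, eog, snr):
    rms = lambda s: np.sqrt(np.mean(s ** 2))
    lam = rms(eeg_clean) / (snr * rms(eog))
    return eeg_clean + lam * eog

fs, dur = 250, 6                                    # 6 s epochs as in the dataset
t = np.arange(fs * dur) / fs
eeg_clean = np.random.randn(t.size)                 # stand-in clean epoch
eog = 50 * np.exp(-((t - 3.0) ** 2) / 0.02)         # stand-in blink waveform
semi_simulated = contaminate(eeg_clean, eog, snr=1.0)
```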

4.2. EMCD-Based Signal Decomposition

The signal decomposition process extracts samples from the signal components. In the mitigation phase, the EMCD technique is used to decompose the artifactual signals into decomposed signals and leftover signals.

EMCD [36]: the EMCD algorithm calculates the superior and inferior envelopes of the signal at each decomposition step. The mean curve is extracted by averaging the envelopes, which are optimized using a scale control algorithm. EMCD is a data-driven approach in which the time series is decomposed at multiple scales. Initially, the maxima and minima are extracted from the input time series; then, the inferior and superior envelopes are generated using a local scale control technique, and the mean curve is obtained by averaging both envelopes.

Consider a time series whose elements are indexed in time. Its minima series is defined by the time indices of the minima and their number, and its maxima series is defined by the time indices of the maxima and their number. A B-spline interpolation function is used to interpolate the input series at each time point.

Superior envelope: the superior envelope of a time series is the upper trend curve that passes through all of its maxima. The maxima are interpolated using B-spline interpolation, and the superior envelope is mathematically represented [37] in Eq. (29).

Inferior envelope: the inferior envelope of a time series is the lower trend curve that passes through all of its minima. The B-spline interpolation is used to interpolate the minima. The mathematical representation of the inferior envelope is represented in Eq. (30).

Mean curve: the mean curve of the time series is the average of its inferior and superior envelopes, and it represents the global trend, as shown in Eq. (31).

Mode: the mode of a time series is the average of the number of its maxima and the number of its minima, as formulated in Eq. (32).

Empirical waveform: in practice, the mean curve is determined by the extrema, which motivates a new way of modeling the time series. The new concept of the empirical waveform (EWF), a series of alternating maxima and minima, is introduced; it is mathematically represented in Eq. (33).

The EWF is used to represent the mean curve, and its mode characterizes it. In particular, one complete sine wave cycle has one maximum and one minimum, contributing exactly one to the mode; therefore, the EWF mode behaves like the count of complete cycles in classical Fourier analysis. Eq. (34) defines the empirical period of the EWF.

The equation for empirical frequency is represented in Eq. (35).

It should be noted that both the empirical period and the empirical frequency are temporal evaluations over the complete time series rather than a priori model parameters as in conventional Fourier analysis. These properties improve the descriptive capability for a broad class of signals from oscillatory sources, for example, brain regions and neurons, whose waveforms resemble, but are not exactly, sine waves. Fourier analysis decomposes such a time series into a collection of sine waves at various frequencies, while the wavelet transform decomposes it into a set of wavelets at several frequencies and distinct temporal locations.
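A minimal sketch of one EMCD step based on the description above follows: locate the maxima and minima, fit spline envelopes through them, average the envelopes to obtain the mean curve, and compute the mode and an empirical frequency. The cubic-spline choice, the simplified boundary handling, and the empirical-frequency expression (mode divided by the series duration) are assumptions; the local scale control step is not reproduced.

```python
# Minimal sketch of one EMCD step: maxima/minima detection, spline
# envelopes, mean curve, mode, and empirical frequency. Spline choice,
# boundary handling, and the empirical-frequency form are assumptions.
import numpy as np
from scipy.signal import find_peaks
from scipy.interpolate import CubicSpline

def emcd_mean_curve(x, fs=1.0):
    t = np.arange(len(x))
    max_idx, _ = find_peaks(x)
    min_idx, _ = find_peaks(-x)
    mode = 0.5 * (len(max_idx) + len(min_idx))           # average extrema count, cf. Eq. (32)
    empirical_freq = mode * fs / len(x)                  # complete cycles per unit time
    # include the endpoints so the envelopes span the whole series
    max_knots = np.unique(np.concatenate(([0], max_idx, [len(x) - 1])))
    min_knots = np.unique(np.concatenate(([0], min_idx, [len(x) - 1])))
    superior = CubicSpline(max_knots, x[max_knots])(t)   # superior envelope, cf. Eq. (29)
    inferior = CubicSpline(min_knots, x[min_knots])(t)   # inferior envelope, cf. Eq. (30)
    mean_curve = 0.5 * (superior + inferior)             # mean curve, cf. Eq. (31)
    return mean_curve, mode, empirical_freq

x = np.sin(2 * np.pi * 8 * np.arange(500) / 250) + 0.1 * np.random.randn(500)
mean_curve, mode, f_emp = emcd_mean_curve(x, fs=250)
```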

Therefore, the signals with detected ocular artifacts and the semisimulated data are processed through EMCD, which decomposes them into decomposed signals and leftover signals, as formulated in Eq. (36).

Furthermore, the decomposed signals are processed by the optimized DCN for denoising and then forwarded to the inverse EMCD to recover the restored source signals. Hence, the retrieved signals are obtained by adding the restored denoised signals to the leftover signals.

4.3. Prevention of Ocular Artifacts by Optimized DCN

The prevention or mitigation phase of ocular artifacts uses the optimized DCN to remove the noise from the given input signals. The decomposed signals from the EMCD are given as input to the optimized DCN for denoising; the network is trained with the decomposed signals as input and the denoised signals as target. This DS-EFO-trained optimized DCN performs signal denoising efficiently. The denoised signals are then passed through the inverse EMCD to obtain the restored denoised signals. Finally, the leftover signals and the restored denoised signals are summed to produce the artifact-removed, or retrieved, signal, which is the artifact-free output of the mitigation phase, as denoted in Eq. (37).
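The data flow of this mitigation pipeline (Eqs. (36)-(37)) can be outlined as below. The callables `emcd_decompose`, `optimized_dcn_denoise`, and `inverse_emcd` are hypothetical placeholders for the EMCD step, the trained DS-EFO-optimized DCN, and the inverse transform; only the ordering of operations is sketched.

```python
# Outline of the mitigation pipeline (Eqs. (36)-(37)). The three callables
# are hypothetical placeholders; only the data flow is shown here.
def mitigate_ocular_artifacts(artifact_signal, emcd_decompose,
                              optimized_dcn_denoise, inverse_emcd):
    decomposed, leftover = emcd_decompose(artifact_signal)      # Eq. (36)
    denoised = optimized_dcn_denoise(decomposed)                # DCN-based denoising
    restored = inverse_emcd(denoised)                           # restored denoised signal
    return restored + leftover                                  # retrieved signal, Eq. (37)
```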

4.4. Objective Model for Detection and Prevention

The proposed ocular artifact removal model consists of two phases, the detection and the prevention of ocular artifacts from EEG signals. The efficiency of the proposed model is verified by validating the multiobjective function.

Detection phase: although the DCN performs efficiently in artifact detection, its accuracy is limited when dealing with a large number of training samples. Therefore, in the proposed model, the epoch and learning rate of the DCN are optimized by DS-EFO within the ranges 10 to 20 and 0.1 to 0.9, respectively. The main objective of the optimized DCN is to improve the classification or detection process by maximizing accuracy and precision.

Accuracy is referred to as “the nearness of the measurements to a specific value” and is formulated in Eq. (39).

Here, TP denotes the true positives, FP the false positives, TN the true negatives, and FN the false negatives. Precision refers to “the points that are stated to be positive; in particular, it is used to declare what percentage of the points is truly positive,” as denoted in Eq. (40).

Therefore, the efficiency of the ocular artifacts detection is enhanced by the optimized DCN by DS-EFO.

Prevention phase: the ocular artifact removal model uses the DCN for denoising the decomposed signals, and the efficiency of the DCN is improved by optimizing its parameters with DS-EFO. The objective function of the optimized DCN for removing ocular artifacts from EEG signals is the minimization of the MAE between the clean signal and the artifact-removed (retrieved) signal. The MAE metric “compares the artifact reduction method's ability to represent artifact waveforms because it provides an intuitive interpretation of the reconstruction errors by retaining their original units.” The equation of the MAE is denoted in Eq. (41).

Here, the error is averaged over all time points of the signal, comparing the retrieved signal with the clean signal. Thus, the objective function for the optimized DCN is given in Eq. (42).
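A minimal sketch of the objective terms follows: accuracy and precision for the detection phase (Eqs. (39)-(40)) and the MAE between the clean and retrieved signals for the mitigation phase (Eq. (41)). The way the two detection terms are combined into a single fitness value (simple averaging) is an assumption, since the paper's exact multiobjective formulation is given only in Eq. (42).

```python
# Minimal sketch of the objective terms: accuracy, precision, and MAE.
# Averaging accuracy and precision into one fitness value is an assumption.
import numpy as np

def accuracy(tp, fp, tn, fn):
    return (tp + tn) / (tp + fp + tn + fn)

def precision(tp, fp):
    return tp / (tp + fp)

def mae(clean, retrieved):
    return np.mean(np.abs(np.asarray(clean) - np.asarray(retrieved)))

def detection_fitness(tp, fp, tn, fn):
    # maximized by DS-EFO when tuning the DCN epoch and learning rate
    return 0.5 * (accuracy(tp, fp, tn, fn) + precision(tp, fp))
```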

Therefore, the proposed model for the detection and removal of ocular artifacts from EEG signals provides enhanced performance by optimizing the epoch and learning rate of the DCN with the DS-EFO algorithm. In the detection phase, the optimized DCN efficiently classifies the signals as with or without artifacts; in the prevention phase, the input signals are denoised efficiently using the optimized DCN.

5. Results and Discussion

5.1. Experimental Setup

The proposed model for the detection and mitigation of ocular artifacts from EEG signals was implemented in MATLAB 2020a, and the performance evaluation was conducted using the measures described below. The dataset used for validating the proposed model was collected from http://www.bbci.de/competition/iv/#datasets (accessed 2021-06-22) (Table 1). The experimental analysis considered 9 subjects, a population size of 10, and 100 iterations. The detection phase of the proposed DS-EFO-DCN was compared with existing heuristic algorithms such as particle swarm optimization (PSO) [8], grey wolf optimization (GWO) [7], dual positioned elitism-based earth worm optimization algorithm-DCN (DPE-EWA-DCN) [38], and EFO [11], and with classifiers such as neural networks [15], SVM [12], EMCD+DPE-EWA-LWT [38], and DCN [7].

5.2. Performance Measures

Various performance metrics are considered for evaluating the performance of the ocular artifact detection and prevention model, as given below:

(a) Sensitivity: it measures “the number of true positives, which are recognized exactly.”
(b) Specificity: it measures “the number of true negatives, which are determined precisely.”
(c) FPR: it is computed as “the ratio of the count of false positive predictions to the entire count of negative predictions.”
(d) FNR: it is “the proportion of positives which yield negative test outcomes on the test.”
(e) NPV: it is the “probability that subjects with a negative screening test truly don’t have the disease.”
(f) FDR: it is “the number of false positives in all of the rejected hypotheses.”
(g) F1 score: it is defined as the “harmonic mean between precision and recall. It is used as a statistical measure to rate performance.”
(h) MCC: it is a “correlation coefficient computed by four values.”
(i) Correlation coefficient: it “considers the relative movements of the signals and then defines if there is any relationship between them.”
(j) RMSE: it “is a quadratic scoring rule that measures the average magnitude of the error. It’s the square root of the average of the squared differences between prediction and actual observations.”

5.3. Performance Analysis on MAE

The performance of the proposed ocular artifact detection and mitigation model in terms of MAE is evaluated between the retrieved signal and the clean signal of the semisimulated data. The proposed DS-EFO-DCN is compared with other heuristic algorithms in terms of MAE for 9 subjects, as depicted in Figure 5. The proposed DS-EFO-DCN achieves the minimum error rates as the SNR is gradually increased from 0.5 to 1.5. The EFO-DCN and DPE-EWA-DCN attain MAE rates close to those of the proposed DS-EFO-DCN while varying the SNR for all 9 subjects, whereas the PSO and GWO algorithms perform worse. In subject 2, the MAE of the proposed DS-EFO-DCN for an SNR of 1.5 is 46.15% better than PSO-DCN, 40% better than GWO-DCN, 16% better than DPE-EWA-DCN, and 27.59% better than EFO-DCN. Likewise, for all subjects, the proposed DS-EFO-DCN attains the minimum MAE values compared with the conventional algorithms for the prevention of ocular artifacts.

5.4. Performance Analysis on RMSE

The performance of the DS-EFO-DCN is compared with the other heuristic algorithms by evaluating the RMSE for all 9 subjects, as shown in Table 2. The proposed DS-EFO-DCN attains the minimum error as the SNR increases from 0.5 to 1.5 compared with the conventional algorithms. In all 9 subjects, the EFO-DCN and DPE-EWA-DCN also reach RMSE values close to those of the proposed DS-EFO-DCN while varying the SNR, whereas the PSO and GWO algorithms perform worse. In subject 9, the RMSE of the proposed DS-EFO-DCN for an SNR of 1 is 40% better than PSO-DCN, 50% better than GWO-DCN, 23.07% better than DPE-EWA-DCN, and 45.45% better than EFO-DCN. Similarly, the proposed DS-EFO-DCN attains the minimum RMSE values for the remaining subjects compared with the conventional algorithms for the prevention of ocular artifacts.

In addition, the RMSE-related performance was evaluated through chirplet transform-based time-frequency images of both 5 s and 8 s frames of the EEG signals; the results are presented in Table 3. The time-frequency images of the 5 s and 8 s EEG frames coupled with the proposed scheme obtained average accuracy values of 97.66% and 96.65%, respectively, which compare favorably with the other existing methods.

5.5. Performance Analysis on Correlation Coefficient

The correlation coefficient performance of the DS-EFO-DCN for all 9 subjects is compared with the other heuristic algorithms, as shown in Figure 6. The proposed DS-EFO-DCN attains a high correlation coefficient for the selected electrodes compared with the conventional algorithms. In all 9 subjects, the EFO-DCN and DPE-EWA-DCN attain correlation coefficients close to those of the DS-EFO-DCN across different electrodes, whereas the PSO and GWO algorithms attain much lower values. In subject 5, the correlation coefficient of the proposed DS-EFO-DCN for electrode no. 5 is 16.87% better than PSO-DCN, 58% better than GWO-DCN, 3.19% better than DPE-EWA-DCN, and 7.77% better than EFO-DCN. In the same way, the proposed DS-EFO-DCN attains higher correlation values for the remaining subjects in mitigating the artifacts compared with the existing algorithms.

5.6. Correlation Coefficient Analysis on Semisimulated Data

The semisimulated data are used to validate the mitigation of ocular artifacts, since there is no ground truth available to validate the measures directly. Therefore, the agreement between the denoised signal and the clean signal is validated through a correlation evaluation; during this evaluation there should be no data loss other than the artifacts, so the best performance corresponds to a high correlation coefficient. The proposed DS-EFO-DCN attains the maximum correlation coefficient compared with the conventional algorithms. It is compared with the other conventional algorithms over 22 electrodes for each subject, and the correlation coefficient results for all 9 subjects are presented in Table 4. From Table 4, for electrode 5 of subject 2, the performance of the proposed DS-EFO-DCN is 15.22% better than PSO-DCN, 44.92% better than GWO-DCN, 1.46% better than DPE-EWA-DCN, and 12.29% better than EFO-DCN. Therefore, the proposed approach attains a high correlation coefficient, which improves the prevention strategy compared with the other algorithms.

5.7. Overall Performance Analysis of Detection

The performance of the proposed DS-EFO-DCN detection and mitigation model is analyzed against the existing metaheuristic algorithms in Table 5. The accuracy of the proposed DS-EFO-DCN model is 4.42% better than PSO-DCN, 0.82% better than GWO-DCN, 2.67% better than DPE-EWA-DCN, and 3.02% better than EFO-DCN. The proposed model similarly attains better values for all performance metrics. Likewise, the performance analysis of the proposed DS-EFO-DCN model against the existing classifiers is presented in Table 6.

The precision of the proposed model is 24.44% better than NN, 5.97% better than SVM, 27% better than DPE-EWA-DCN, and 15.35% better than DCN. Therefore, the overall analysis reveals that the proposed DS-EFO-DCN detection and mitigation model provides better performance than the existing algorithms.

The computational complexity and simulation time of the proposed scheme were also compared with existing artifact elimination approaches; the comparison results are presented in Table 7. Here, the EEG signal length used for comparing the computational complexities of the different artifact removal approaches is 38000 samples. The proposed scheme was implemented in MATLAB 2019a on a 64-bit personal computer with 10 GB RAM and an Intel Core i3 processor at 2.476 GHz. The proposed method has a simulation time of 2.89 seconds, which is an acceptable interval compared with the other existing schemes, taking ICA as a baseline. Hence, the proposed scheme is computationally feasible and has better denoising performance for artifact removal from EEG signals.

6. Conclusion

A new approach for the detection and mitigation of ocular artifacts from EEG signals was introduced in this work. The proposed model has two phases, a detection phase and a mitigation phase. In the detection phase, the input EEG signals were decomposed using 5-level DWT and Pisarenko harmonic decomposition, the features of the decomposed signals were extracted by PCA and ICA, and the extracted features were given to the optimized DCN, whose parameters were tuned by the DS-EFO algorithm; the optimized DCN classifies the signals into signals with and without artifacts. In the mitigation phase, semisimulated data were generated to validate the artifact removal, and the mitigation of ocular artifacts from the EEG signals was performed by the same DS-EFO-optimized DCN. The performance analysis of the proposed DS-EFO-DCN algorithm shows enhanced results over the existing metaheuristic algorithms in terms of MAE, RMSE, and correlation coefficient. From the overall analysis, the specificity of the proposed DS-EFO-DCN model is 5.03% better than NN, 0.93% better than SVM, 2.84% better than EMCD+DPE-EWA-LWT, and 3.23% better than DCN. Thus, it is concluded that the developed DS-EFO-DCN model achieves better performance in the detection and mitigation of ocular artifacts from EEG signals.

Data Availability

The data used to support the findings of this study (http://www.bbci.de/competition/iv/#datasets) are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.