Identification of Food/Nonfood Visual Stimuli from Event-Related Brain Potentials

Selen Güney, Sema Arslan, Adil Deniz Duru, Dilek Göksel Duru

Research Article | Open Access
Applied Bionics and Biomechanics, vol. 2021, Article ID 6472586, 11 pages, 2021. https://doi.org/10.1155/2021/6472586

Academic Editor: Francesca Cordella
Received: 06 Jun 2021 | Accepted: 24 Aug 2021 | Published: 24 Sep 2021

Abstract

Although food consumption is one of the most basic human behaviors, the factors underlying nutritional preferences are not yet clear. Classification algorithms can help clarify the understanding of these factors. This study was aimed at measuring electrophysiological responses to food/nonfood stimuli and applying classification techniques to discriminate the responses using a single-sweep dataset. Twenty-one right-handed male athletes with body mass index (BMI) between 18.5 and 25 kg/m² (mean age: ) participated in this study voluntarily. The participants were asked to focus on food and nonfood images presented in random order on the monitor without performing any motor task, and EEG data were collected using a 16-channel amplifier with a sampling rate of 1024 Hz. The SensoMotoric Instruments (SMI) iView X RED eye tracking system was used simultaneously with the EEG to measure the participants' attention to the presented stimuli. Three datasets were generated from the amplitude, time-frequency decomposition, and time-frequency connectivity metrics of the P300 and LPP components to separate food and nonfood stimuli. We implemented k-nearest neighbor (kNN), support vector machine (SVM), Linear Discriminant Analysis (LDA), Logistic Regression (LR), Bayesian, decision tree (DT), and Multilayer Perceptron (MLP) classifiers on these datasets. The response to food-related stimuli in the hunger state was discriminated from the nonfood response with an accuracy close to 78% for each dataset. These results motivate the use of classifier algorithms on features obtained from single-trial measurements in amplitude and time-frequency space rather than more complex ones such as connectivity metrics.

1. Introduction

Although food consumption is one of the most basic human behaviors, the factors underlying nutritional preferences are not yet apparent. Many factors, such as taste, texture, appearance, smell, and food deprivation, play an essential role in the attention given to food [1–3]. Several studies point to increased attention to food-related stimuli, mainly due to food deprivation [4, 5]. To understand the neural foundations of a cognitive process such as attention to these types of stimuli, it is important to identify both the activated brain regions and the temporal microstructure of the information flow between these regions [6]. Even though imaging methods (Magnetic Resonance Imaging (MRI), functional MRI (fMRI), and Positron Emission Tomography (PET)) are very useful for showing changes in cerebral blood flow during cognitive processing, hemodynamic responses are insufficient to explain the temporal dynamics of fast electrophysiological activity in neural networks [6, 7]. The electroencephalogram (EEG) measures the brain's electrical activity with high temporal resolution [8–10] and varies with the presence of visual, somatosensory, and auditory stimuli [1, 11]. Event-Related Potential (ERP) recordings consist of voltage fluctuations elicited in response to a stimulus [12, 13]. Researchers have characterized several ERP components according to their latency after stimulus onset. For instance, the P300 component, a positive waveform measured approximately 300 ms after the stimulus, has been studied extensively due to its potential to reveal the dynamics of cognitive processes [14–19]. Late Positive Potentials (LPP), observed 550–700 ms after the stimulus, may reflect focused attention or detailed stimulus analysis, as well as the conscious stimulus recognition phase. Wavelet transform (WT) is one of the methods capable of estimating the ERP components. WT has a significant advantage over classical spectral analysis because it is suitable for analyzing nonstationary signals in the time-frequency domain, and it can be used to analyze various transient events in biological signals for representation and feature extraction [20]. Each ERP component derived by WT can be associated with different situations and tasks [21–24]. In several studies, ERP components have been elucidated in response to food stimuli. For instance, Hachl et al. [25] conducted a study with subjects who ate their last meal 3 or 6 hours before the ERP measurements, using food images as stimuli. In another study, the effects of attention to food-related word stimuli in the absence of food were investigated [26]. Similarly, Channon and Hayward [27] investigated P300 and LPP responses to food and flower images in the hunger state. Furthermore, many researchers have conducted Stroop studies in which naming the color of food words serves as the stimulus [28–31]. Moreover, Kitamura et al. [32] observed the effect of hypoglycemic glucose drink intake on the P300 response. Overall, the P300 component varies in response to food and nonfood stimuli in the hunger state. This variation motivated us to investigate the differences in ERP components extracted from single-epoch electrical recordings.

In recent decades, the detection of mental status from EEG measurements has been performed using machine learning algorithms [33, 34]. In most studies, researchers computed features from ongoing EEG time series and fed those features to classifiers to determine whether the subject is in a normal state [35, 36]. This procedure requires hand-crafted features, while the modern deep learning approach learns the filters that can be used to classify the labelled measurements. A broad review is given in [37], where brain signals were used as inputs in various problems, including seizure detection, emotion detection, motor imagery identification, and evoked potentials.

In addition, eye tracking technology is used in attention studies to verify that the participant pays attention to the presented stimulus. Eye tracking refers to a set of methods and techniques used to detect and record eye movement activity [38]. Studies have shown that eye tracking data provide reliable measures of attention to the stimulus in complex situations [39, 40].

A few studies in the literature classify food-related images [32, 41]; however, none of the previous studies have examined electrophysiological responses to food-related stimuli using classification techniques. This study is aimed at measuring electrophysiological responses to food/nonfood stimuli and applying classification techniques to discriminate the responses using single-sweep time series.

2. Materials and Methods

2.1. Participants

Twenty-one right-handed male athletes with BMI between 18.5 and 25 kg/m² (mean age: ) participated in this study voluntarily. All participants trained at least 10 hours per week and competed in karate or rowing. None of the participants had a history of deficient food intake, head injury, neurological or psychiatric disorders, or other illnesses.

2.2. Experimental Design

Participants were asked not to eat after 09:00 pm on the day before the test. We performed EEG measurements at 09:00–10:00 am, before breakfast. Before the start of the experiment, we asked participants to focus on the food and nonfood images while avoiding large motor movements that could degrade the signal. We presented the stimuli in random order using in-house developed software; an illustrative sketch of the paradigm is given below. Standardized, contrast- and color-adjusted images were selected from the study of Charbonnier et al. to minimize confounding effects of the food images on the ERP [42]. We separated the images according to their nutrient content [43] into five groups; however, since our aim was not to classify the responses by calorie content, we grouped the images simply as food and nonfood. In the experiment, each image was shown for 800 ms, with a negligible interval between two adjacent stimuli, as shown in Figure 1. The number of neutral images was , while it was for food images. The resolution of the images was adjusted to .
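The presentation software itself was developed in-house and is not publicly available; the following is a minimal illustrative sketch of the paradigm using PsychoPy. The file names, trial counts, and marker handling are hypothetical placeholders; only the 800 ms duration and the randomized order come from the text.

```python
# Minimal sketch of the presentation paradigm (not the authors' in-house
# software). Assumes PsychoPy; image file names below are hypothetical.
import random
from psychopy import core, visual

win = visual.Window(fullscr=True, color="black", units="pix")

# Hypothetical stimulus lists standing in for the standardized image sets.
food = [f"food_{i}.jpg" for i in range(10)]
nonfood = [f"nonfood_{i}.jpg" for i in range(10)]
trials = [(img, "food") for img in food] + [(img, "nonfood") for img in nonfood]
random.shuffle(trials)  # stimuli were presented in random order

for img_path, label in trials:
    stim = visual.ImageStim(win, image=img_path)
    stim.draw()
    win.flip()       # stimulus onset; an EEG marker would be sent here
    core.wait(0.8)   # each image was shown for 800 ms
    win.flip()       # clear the screen; negligible interstimulus interval

win.close()
core.quit()
```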

2.3. Data Collection

We used a 16-channel V-AMP amplifier (Brain Products, Germany) with a sampling rate of 1024 Hz. EEG was collected from the Fp1, Fp2, F3, Fz, F4, P3, P4, Pz, C3, C4, Cz, O1, O2, Oz, T7, and T8 channels, with two additional electrodes serving as reference and ground, as shown in Figure 2. Channel impedances were kept below 5 kΩ.

The SensoMotoric Instruments (SMI) iView X RED eye tracking system was used simultaneously with the EEG. The 22″ LCD screen and the eye-tracker system are shown in Figure 3. The SMI eye tracker samples at 60 Hz and records eye movements with an accuracy of 0.5 degrees.

2.4. Data Analysis

Eye movements were analyzed using SMI BeGaze (Behavioral and Gaze Analysis) software to check that the subjects focused on the visual stimuli. Next, noisy components were removed from the EEG signal, and the relevant properties of the data were extracted with signal processing techniques. If the extracted features are not appropriate, inaccurate findings can result; thus, it is necessary to find and extract suitable features from the raw signals to obtain accurate classification results [44, 45]. The last step is the application of machine learning techniques (such as a decision tree or a support vector machine) to classify the EEG signal using the characteristics obtained in the feature extraction step. Preprocessing is essential for improving the signal-to-noise ratio of the EEG. We applied a low-pass filter at 40 Hz and a high-pass filter at 0.1 Hz. Artifacts were marked on the EEG data and removed from further processing. After preprocessing, a total of 4754 single epochs remained. The EEG data were then epoched from 200 ms before to 800 ms after each stimulus marker. In the second step, features were extracted from the data collected from the 21 subjects for both food and nonfood images. The feature vector consists of both time and frequency domain features: the amplitude, time-frequency power, and time-frequency connectivity metrics. The datasets were formed as follows. DataSet1: row values are the P300 and LPP amplitudes. DataSet2: wavelet transform (WT) is used to compute row values in each frequency band (delta, theta, alpha, beta, and gamma) for the P300 and LPP. DataSet3: wavelet coherence is applied to form row values in each frequency band (delta, theta, alpha, beta, and gamma) for the P300 and LPP. A sketch of this feature extraction pipeline is given below.
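The exact implementation of this pipeline is not given in the paper; the following is a minimal sketch under stated assumptions: SciPy and PyWavelets, a 4th-order zero-phase Butterworth band-pass for the reported 0.1–40 Hz cutoffs, a complex Morlet wavelet, and component windows placed around the P300 and LPP latencies quoted in the text.

```python
# Illustrative single-epoch feature extraction (a sketch, not the authors'
# exact pipeline). Filter order, wavelet choice, and component windows are
# assumptions; sampling rate and cutoffs are from the text.
import numpy as np
import pywt
from scipy.signal import butter, filtfilt

FS = 1024  # sampling rate (Hz), as reported

def bandpass(eeg, lo=0.1, hi=40.0, fs=FS):
    """0.1-40 Hz zero-phase band-pass, matching the reported cutoffs."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def epoch(eeg, onsets, fs=FS, pre=0.2, post=0.8):
    """Cut epochs from -200 ms to +800 ms around each stimulus marker
    (onsets given in samples)."""
    pre_n, post_n = int(pre * fs), int(post * fs)
    return np.stack([eeg[..., s - pre_n : s + post_n] for s in onsets])

# Component windows relative to stimulus onset (ms); the P300 window is an
# assumption around the 300 ms latency, the LPP window follows the text.
P300_WIN = (250, 400)
LPP_WIN = (550, 700)

def window_amplitude(ep, win, fs=FS, pre=0.2):
    """DataSet1-style feature: mean amplitude within a component window."""
    i0 = int((pre + win[0] / 1000) * fs)
    i1 = int((pre + win[1] / 1000) * fs)
    return ep[..., i0:i1].mean(axis=-1)

def band_power(ep, fs=FS):
    """DataSet2-style features: wavelet power per classical EEG band,
    computed for one single-channel epoch (1-D array)."""
    bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 40)}
    freqs = np.arange(1, 41)
    scales = pywt.frequency2scale("cmor1.5-1.0", freqs / fs)
    coefs, _ = pywt.cwt(ep, scales, "cmor1.5-1.0", sampling_period=1 / fs)
    power = np.abs(coefs) ** 2  # shape: (n_freqs, n_samples)
    return {name: power[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}
```

The DataSet3 wavelet coherence features can be built analogously from smoothed CWT cross-spectra between electrode pairs; libraries such as pycwt provide ready-made wavelet coherence implementations.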

The k-nearest neighbor (kNN), support vector machine (SVM), Linear Discriminant Analysis (LDA), Logistic Regression (LR), Bayesian, decision tree (DT), and Multilayer Perceptron (MLP) classifiers were applied to each dataset. The first classifier, kNN, is a nonparametric supervised learning algorithm: a new sample, described by its extracted features, is assigned to the most appropriate class according to its proximity to the k nearest neighbors [46]. The second classifier, SVM, determines classes using a separating hyperplane, choosing the one that maximizes the margin to the nearest training points of each class. LDA (also known as Fisher's LDA) is a linear classifier that projects the data so as to maximize class separability. The Bayesian classifier is a supervised statistical method that assigns the most likely class to a given example described by its feature vector. MLP is a classifier based on artificial neural networks. Logistic regression, as used in this study, is a statistical technique for binary classification. In DT, a tree-like structure containing the classification rules is produced using the mutual information hidden in the dataset. All of these classifiers were implemented in Python using the scikit-learn package, as sketched below.
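The paper confirms Python and scikit-learn but not the hyperparameters, so the following sketch uses library defaults. The feature matrix `X`, labels `y`, and the 10-fold cross-validation scheme are placeholders for illustration only.

```python
# The seven classifiers, as available in scikit-learn (which the authors
# report using). X, y, and the CV scheme are hypothetical placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 16))  # placeholder: epochs x features
y = rng.integers(0, 2, 500)         # placeholder: food=1 / nonfood=0

classifiers = {
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "LDA": LinearDiscriminantAnalysis(),
    "LR": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
    "DT": DecisionTreeClassifier(),
    "MLP": MLPClassifier(max_iter=1000),
}

for name, clf in classifiers.items():
    # Scaling inside the pipeline avoids leaking test statistics into training.
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=10)
    print(f"{name}: {scores.mean():.2%} +/- {scores.std():.2%}")
```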

3. Results

The heat maps of the food/nonfood images obtained from the eye tracker indicate that the participants focused their attention on the presented images throughout the study, as shown in Figures 4 and 5.

The grand average ERP components obtained from the 21 subjects are summarized in terms of P300 and LPP amplitudes in Table 1 and Figure 6. We investigated the amplitude differences elicited by the food and nonfood stimuli using paired t-tests for each electrode.


Table 1: P300 and LPP amplitudes (mean/std) and paired t-test p values for food and nonfood stimuli.

Channel | P300 (Food) mean/std | P300 (Nonfood) mean/std | p | LPP (Food) mean/std | LPP (Nonfood) mean/std | p
Fp1 | 1.205/1.144 | 1.623/1.119 | 0.3695 | 2.315/1.142 | 2.114/1.091 | 0.7773
Fp2 | -0.027/1.172 | 0.054/1.135 | 0.8795 | 1.020/1.171 | 0.445/1.138 | 0.3008
F3 | -6.537/1.155 | -6.286/1.14 | 0.5663 | -5.781/1.173 | -5.822/1.142 | 0.9162
Fz | 7.298/1.008 | 7.462/1.001 | 0.7081 | 6.812/1.016 | 6.413/0.992 | 0.3246
F4 | 0.721/1.014 | 0.676/1.019 | 0.9343 | 1.368/1.026 | 1.438/1.006 | 0.879
P3 | 4.107/1.008 | 4.461/0.989 | 0.3675 | 4.955/1.030 | 5.054/1.015 | 0.8533
P4 | -15.839/1.282 | -15.967/1.3 | 0.8282 | -15.452/1.3 | -14.816/1.329 | 0.2634
Pz | -8.574/1.186 | -8.037/1.193 | 0.3823 | -8.047/1.2 | -8.468/1.197 | 0.4037
C3 | 2.556/0.960 | 3.079/0.954 | 0.2059 | 2.109/0.964 | 1.722/0.963 | 0.485
C4 | -1.077/0.946 | -1.092/0.932 | 0.9672 | -1.349/0.955 | -1.37/0.938 | 0.9524
Cz | 7.233/0.963 | 7.405/0.949 | 0.7175 | 6.215/0.964 | 6.177/0.940 | 0.9365
O1 | 1.193/0.999 | 0.899/0.996 | 0.3825 | 0.657/1.006 | 0.739/1.035 | 0.8176
O2 | 4.099/0.982 | 3.856/0.989 | 0.5896 | 3.194/0.990 | 3.286/0.997 | 0.8204
Oz | 5.218/0.952 | 4.275/0.943 | 0.0122 | 4.752/0.958 | 4.681/0.953 | 0.8566
T7 | -1.646/1.187 | -2.662/1.151 | 0.0394 | -2.08/1.19 | -1.683/1.192 | 0.5251
T8 | 0.069/1.171 | 0.254/1.123 | 0.7408 | -0.688/0.091 | 1.149/1.185 | 0.2419

The Oz and T7 electrodes differed significantly between food and nonfood stimuli in the absence of a multiple-comparison correction, while none of the electrodes' LPP components differed between stimuli. This result motivated us to probe the mechanism of the measured ERP through frequency decomposition. The increased occipital P300 activity observed for food stimuli agrees with our previous studies [47]. After the frequency decomposition of the EEG time series, we performed statistical tests to elucidate the differences between food and nonfood stimuli. For the P300 component, the Pz () and Oz () electrodes in the delta band, T7 () in the theta band, and Fp2 () in the alpha band differed between food and nonfood stimuli. For the LPP, differences were observed only in the alpha band, for Fp2 (), Fz (), T7 (), and T8 ().

Furthermore, we computed the coherence between the electrodes in each frequency band and performed t-tests to check the significance of the differences between food and nonfood stimuli. The theta-band P300 coherence between Fp1 and Fp2 () and the delta-band LPP coherence between Fp2 and Fz () were observed to differ between stimuli; a sketch of such per-electrode tests is given below. After this descriptive investigation of the features, we focused on the classification procedures.
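The per-electrode statistics above reduce to paired t-tests across subjects. A minimal sketch with SciPy follows; `food_amp` and `nonfood_amp` are hypothetical arrays of subject-level mean features of shape (n_subjects, n_channels), and the uncorrected 0.05 threshold mirrors the text.

```python
# Sketch of the per-electrode statistics: paired t-tests on subject-level
# features for food vs. nonfood. The data arrays below are placeholders.
import numpy as np
from scipy.stats import ttest_rel

channels = ["Fp1", "Fp2", "F3", "Fz", "F4", "P3", "P4", "Pz",
            "C3", "C4", "Cz", "O1", "O2", "Oz", "T7", "T8"]

rng = np.random.default_rng(1)
food_amp = rng.normal(size=(21, 16))     # placeholder: 21 subjects x 16 channels
nonfood_amp = rng.normal(size=(21, 16))

t, p = ttest_rel(food_amp, nonfood_amp, axis=0)
for ch, pv in zip(channels, p):
    flag = " *" if pv < 0.05 else ""     # uncorrected threshold, as in the text
    print(f"{ch}: p = {pv:.4f}{flag}")
```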

In this study, we achieved accuracy values close to 80% for the discrimination of the electrophysiological responses to food-related versus nonfood stimuli in a hunger state, using various classification algorithms on the datasets. The classification accuracy values are summarized in Tables 2–4 for the P300/LPP amplitudes (DataSet1), the time-frequency-derived P300/LPP components (DataSet2), and the time-frequency connectivity metrics between electrodes for the P300/LPP (DataSet3), respectively. A sample topography image is shown in Figure 7 for the P300 and LPP, while topographies of the different time-frequency components are visualized in Figure 8.


Table 2: Classification accuracies (%) for DataSet1 (P300 and LPP amplitudes).

Method/Feature | P300 (%) | LPP (%)
k-NN | 76 | 76
LR | 78 | 77
DT | 65 | 66
LDA | 78 | 77
NB | 68 | 68
SVM | 78 | 77
MLP | 77 | 76


Table 3: Classification accuracies (%) for DataSet2 (time-frequency P300/LPP features, per band: delta, theta, alpha, beta).

Method/Feature | P300 Delta | LPP Delta | P300 Theta | LPP Theta | P300 Alpha | LPP Alpha | P300 Beta | LPP Beta
k-NN | 77 | 76 | 76 | 77 | 77 | 76 | 75 | 77
LR | 77 | 76 | 76 | 77 | 78 | 76 | 76 | 78
DT | 62 | 62 | 63 | 63 | 66 | 64 | 63 | 68
LDA | 77 | 76 | 76 | 77 | 78 | 76 | 76 | 78
NB | 74 | 73 | 76 | 76 | 78 | 76 | 76 | 78
SVM | 77 | 76 | 76 | 77 | 78 | 76 | 76 | 78
MLP | 77 | 75 | 76 | 77 | 78 | 76 | 76 | 77


Table 4: Classification accuracies (%) for DataSet3 (wavelet coherence P300/LPP features, per band: delta, theta, alpha, beta).

Method/Feature | P300 Delta | LPP Delta | P300 Theta | LPP Theta | P300 Alpha | LPP Alpha | P300 Beta | LPP Beta
k-NN | 77 | 76 | 76 | 77 | 77 | 77 | 77 | 77
LR | 77 | 77 | 77 | 77 | 77 | 77 | 77 | 77
DT | 64 | 62 | 63 | 64 | 63 | 64 | 64 | 65
LDA | 77 | 77 | 77 | 77 | 77 | 77 | 77 | 77
NB | 65 | 67 | 66 | 67 | 69 | 71 | 74 | 74
SVM | 77 | 77 | 77 | 77 | 78 | 78 | 77 | 78
MLP | 69 | 74 | 73 | 74 | 62 | 73 | 70 | 73

We repeated the classification procedures based on individual subjects’ data and reported the results (mean and standard deviation) in Table 5. In Figure 9, classification accuracy values of all algorithms are visualized.


Table 5: Subject-level classification accuracies (%), mean and standard deviation across subjects.

Feature | | k-NN | LR | DT | LDA | NB | SVM | MLP
Dataset 1, P300 | Mean | 73.7 | 75.1 | 71 | 75.1 | 71.7 | 73.9 | 61.6
 | Std. Dev. | 1.2 | 1.3 | 2.9 | 1.6 | 2.1 | 5.7 | 6.4
Dataset 1, LPP | Mean | 74.1 | 75 | 70.6 | 75 | 71.8 | 76 | 61.5
 | Std. Dev. | 1.4 | 1.1 | 2.5 | 1.3 | 2 | 3.1 | 8.9
Dataset 2, P300 (Delta) | Mean | 73.6 | 75.3 | 69.9 | 74.9 | 71.8 | 77.4 | 58.6
 | Std. Dev. | 1.6 | 1.6 | 2.9 | 1.5 | 2 | 0 | 13.5
Dataset 2, P300 (Theta) | Mean | 73.6 | 74.9 | 70 | 74.9 | 71.3 | 77.4 | 54.1
 | Std. Dev. | 1.8 | 1.5 | 2.5 | 1.4 | 2.6 | 0 | 13.7
Dataset 2, P300 (Alpha) | Mean | 73.5 | 74.8 | 70.7 | 74.9 | 70.9 | 77.4 | 59.2
 | Std. Dev. | 1.8 | 1.4 | 2.6 | 1.3 | 2.2 | 0 | 9.6
Dataset 2, P300 (Beta) | Mean | 73.4 | 75 | 70.9 | 74.8 | 72.5 | 77.4 | 57.7
 | Std. Dev. | 1.7 | 1.7 | 2.6 | 1.5 | 2.3 | 0 | 9.6
Dataset 2, LPP (Delta) | Mean | 73.7 | 65.2 | 69.2 | 59.3 | 62.1 | 70 | 58.1
 | Std. Dev. | 2.2 | 2.6 | 3.3 | 3.2 | 3.3 | 3 | 10.5
Dataset 2, LPP (Theta) | Mean | 73.7 | 67.7 | 69.1 | 59.7 | 61.2 | 74.8 | 59.9
 | Std. Dev. | 1.5 | 2.5 | 3 | 3.4 | 5.1 | 1.5 | 9.7
Dataset 2, LPP (Alpha) | Mean | 73.2 | 66.7 | 68.6 | 59.7 | 60.7 | 74 | 56.1
 | Std. Dev. | 2 | 2.7 | 2.7 | 2.8 | 5 | 2.2 | 8.9
Dataset 2, LPP (Beta) | Mean | 73.6 | 67.3 | 67.6 | 59.8 | 61.8 | 75.6 | 61.7
 | Std. Dev. | 1.8 | 2.6 | 2.8 | 2.9 | 5.7 | 1.5 | 7.1
Dataset 3, P300 (Delta) Coh | Mean | 73.3 | 65.7 | 69.8 | 58.4 | 60.8 | 69.3 | 55
 | Std. Dev. | 2 | 2.6 | 2.8 | 3.3 | 4.2 | 2.5 | 9.6
Dataset 3, P300 (Theta) Coh | Mean | 73.6 | 67.6 | 68.5 | 58.1 | 61.8 | 74 | 58.1
 | Std. Dev. | 1.4 | 3.6 | 3.3 | 4.3 | 4.4 | 2.2 | 10.9
Dataset 3, P300 (Alpha) Coh | Mean | 73.4 | 66.4 | 68.4 | 59.6 | 58.9 | 73.4 | 58.8
 | Std. Dev. | 1.7 | 3 | 2.3 | 3.3 | 5.9 | 1.7 | 7.4
Dataset 3, P300 (Beta) Coh | Mean | 74.1 | 67.2 | 68.9 | 60.1 | 59.8 | 73.3 | 58.5
 | Std. Dev. | 2.1 | 2.4 | 2.5 | 3.9 | 5.4 | 2.3 | 9.6
Dataset 3, LPP (Delta) Coh | Mean | 73.7 | 65.2 | 69.2 | 59.3 | 62.1 | 70 | 58.1
 | Std. Dev. | 2.2 | 2.6 | 3.3 | 3.2 | 3.3 | 3 | 10.5
Dataset 3, LPP (Theta) Coh | Mean | 73.7 | 67.7 | 69.1 | 59.7 | 61.2 | 74.8 | 59.9
 | Std. Dev. | 1.5 | 2.5 | 3 | 3.4 | 5.1 | 1.5 | 9.7
Dataset 3, LPP (Alpha) Coh | Mean | 73.2 | 66.7 | 68.6 | 59.7 | 60.7 | 74 | 56.1
 | Std. Dev. | 2 | 2.7 | 2.7 | 2.8 | 5 | 2.2 | 8.9
Dataset 3, LPP (Beta) Coh | Mean | 73.6 | 67.3 | 67.6 | 59.8 | 61.8 | 75.6 | 61.7
 | Std. Dev. | 1.8 | 2.6 | 2.8 | 2.9 | 5.7 | 1.5 | 7.1

4. Discussion

To the best of our knowledge, the present study is the first to classify the electrophysiological responses to food and nonfood stimuli in a hunger state. For this purpose, the first dataset consists of the single-epoch amplitudes of the P300 and LPP components, formed by pooling the rows computed for each subject. As stated by Blankertz et al. [48], investigating ERP components from single-trial measurements is a complex problem because of trial-to-trial variability and background noise. Thus, each row was normalized to remove amplitude differences within subjects and single-trial epochs. In the hunger state, P300 and LPP amplitudes were found to differ between food and nonfood stimuli in posterior regions [49]. Similarly, Geisler and Polich reported P300 differences due to food deprivation [31]. In contrast to these findings, when participants ingested a hypoglycemic glucose drink, no P300 changes were observed [31]. In another study, the LPP increased when responses to food images were compared with responses to flower images, and the P300 amplitude increased over the occipital, temporal, and centroparietal areas [26]. In our study, the maximum classification accuracy was 78% when the amplitudes of the P300 and LPP derived from single-trial measurements were used as features, separately. The differences in P300 or LPP components in the presence of food/nonfood stimuli varied, as reported in previous studies. In ERP studies, averaging the responses increases the signal-to-noise ratio and enhances the contrast between conditions.

However, in the context of our study, a remarkable accuracy value (78%) was obtained from single-trial P300 and LPP amplitude components used separately. In a related ERP classification study, the average accuracy rose to 86% based on the N170 component; there, single-trial responses to pictures with positive and negative emotional content were the input to the classifier [50]. Single-trial EEG measurements can provide valuable information in the presence of adequate contrast mechanisms. For instance, when resting-state EEG is compared with brain dynamics measured under increased mental workload, high classification accuracies are achieved [51]. In our study, the consistent accuracy values obtained with several techniques indicate a ceiling on stimulus identification from these features. DT yields the lowest classification accuracy, which might be due to the low number of levels in the tree.

ERP data collection requires averaging over several responses to the same or similar stimuli, so conducting ERP experiments is a time-consuming process. In our study, by contrast, we concentrated on single sweeps lasting less than a second, so the data needed for the testing phase of classification is limited only by physiological mechanisms. For real-time implementation, the minimum detection time can therefore be taken as the time needed to compute the P300 and LPP features. The classification procedures also include a training phase in which several realizations of the labelled data are used. For estimating the computational complexity, the number of features and the number of samples play a crucial role: in k-NN, the test-phase complexity grows directly with the number of samples, while in DT it is affected mainly by the number of features. Since the training complexity is on the order of the square of the sample size, the training phase is time-consuming for DT, MLP, and SVM, whereas LR is much faster; a quick empirical check is sketched below. When we pool the data, our sample size grows to several thousand epochs.
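The training-time argument is easy to check empirically. The sketch below times the fitting of three of the classifiers on random data; the data dimensions and resulting timings are illustrative placeholders, not measurements from the study.

```python
# Quick empirical check of the training-time argument (illustrative only;
# the data below are random placeholders, not the study's epochs).
import time
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.standard_normal((5000, 32))  # pooled data: thousands of epochs
y = rng.integers(0, 2, 5000)

for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC()),
                  ("DT", DecisionTreeClassifier())]:
    t0 = time.perf_counter()
    clf.fit(X, y)
    print(f"{name} fit: {time.perf_counter() - t0:.3f} s")
```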

5. Conclusion

In the ERP literature, common practice is to analyze the electrical activity in different frequency bands. Thus, in this study, the time series were decomposed into time-frequency space using the wavelet transform. Moreover, a connectivity approach was applied to the multichannel ERP measurements in the P300 and LPP time windows to extract coherence information. Based on our findings, we propose that the use of complex features is not necessary, since they do not outperform the basic amplitude features.

There are still many gaps in our understanding of the brain responses to visual stimuli. Visual stimulus categories cannot yet be classified directly with high accuracy, whereas classification is more straightforward in mental illness detection or motor imagery studies. Thus, future studies should focus on feature engineering for EEG. In particular, deep learning with convolutional neural networks can be adopted to learn spatial filters on the topography images, as sketched below. This approach may allow researchers to extract valuable information from the measured ERP signals.
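As a pointer for this future direction only, the following is a minimal PyTorch sketch of a small CNN whose convolutional layers act as learned spatial filters over ERP topography images. The architecture, the 64x64 image size, and all layer sizes are our assumptions, not part of the study.

```python
# Minimal sketch of the suggested future direction: a small CNN over ERP
# topography images (illustrative architecture; all sizes are assumptions).
import torch
import torch.nn as nn

class TopoCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # learned spatial filters
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),                     # fixed 4x4 feature map
        )
        self.classifier = nn.Linear(16 * 4 * 4, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One hypothetical single-channel 64x64 topography image per epoch.
model = TopoCNN()
logits = model(torch.randn(8, 1, 64, 64))  # batch of 8 -> (8, 2) class scores
```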

Data Availability

The EEG and eye tracker data used to support the findings of this study are available from the corresponding author upon request.

Ethical Approval

The study was approved by the Ethical Review Board of the Medical Faculty, Marmara University (approval number 09.2018.380).

Informed consent was obtained from all individual participants included in the study prior to measurement.

Conflicts of Interest

The authors declare no conflict of interest directly related to the submitted work.

Acknowledgments

This work was supported by the Research Fund of the Marmara University (Project No. SAG-A-100713-0296). The article processing charge was funded by Hindawi.

References

  1. L. J. Karhunen, E. J. Vanninen, J. T. Kuikka, R. I. Lappalainen, J. Tiihonen, and M. I. Uusitupa, "Regional cerebral blood flow during exposure to food in obese binge eating women," Psychiatry Research: Neuroimaging, vol. 99, no. 1, pp. 29–42, 2000.
  2. W. D. Killgore, A. D. Young, L. A. Femia, P. Bogorodzki, J. Rogowska, and D. A. Yurgelun-Todd, "Cortical and limbic activation during viewing of high- versus low-calorie foods," NeuroImage, vol. 19, no. 4, pp. 1381–1394, 2003.
  3. N. Tashiro, H. Sugata, T. Ikeda et al., "Effect of individual food preferences on oscillatory brain activity," Brain and Behavior, vol. 9, no. 5, p. e01262, 2019.
  4. W. Plihal, C. Haenschel, P. Hachl, J. Born, and R. Pietrowsky, "The effect of food deprivation on ERP during identification of tachistoscopically presented food-related words," Journal of Psychophysiology, vol. 15, no. 3, pp. 163–172, 2001.
  5. J. Sänger, "Can't take my eyes off you – how task-irrelevant pictures of food influence attentional selection," Appetite, vol. 133, pp. 313–323, 2019.
  6. S. A. Hillyard and L. Anllo-Vento, "Event-related brain potentials in the study of visual selective attention," Proceedings of the National Academy of Sciences, vol. 95, no. 3, pp. 781–787, 1998.
  7. P. T. Fox and M. G. Woldorff, "Integrating human brain maps," Current Opinion in Neurobiology, vol. 4, no. 2, pp. 151–156, 1994.
  8. P. Ritter and A. Villringer, "Simultaneous EEG-fMRI," Neuroscience & Biobehavioral Reviews, vol. 30, no. 6, pp. 823–838, 2006.
  9. R. Srinivasan, W. R. Winter, and P. L. Nunez, "Source analysis of EEG oscillations using high-resolution EEG and MEG," Progress in Brain Research, vol. 159, pp. 29–42, 2006.
  10. B. M. Sayers, H. A. Beagley, and W. R. Henshall, "The mechanism of auditory evoked EEG responses," Nature, vol. 247, no. 5441, pp. 481–483, 1974.
  11. K. Elf, E. Ronne-Engström, R. Semnic, E. Rostami-Berglund, J. Sundblom, and M. Zetterling, "Continuous EEG monitoring after brain tumor surgery," Acta Neurochirurgica, vol. 161, no. 9, pp. 1835–1843, 2019.
  12. S. J. Luck, G. F. Woodman, and E. K. Vogel, "Event-related potential studies of attention," Trends in Cognitive Sciences, vol. 4, no. 11, pp. 432–440, 2000.
  13. H. T. Schupp, T. Flaisch, J. Stockburger, and M. Junghöfer, "Emotion and attention: event-related brain potential studies," Progress in Brain Research, vol. 156, pp. 31–51, 2006.
  14. T. W. Picton, "The P300 wave of the human event-related potential," Journal of Clinical Neurophysiology, vol. 9, no. 4, pp. 456–479, 1992.
  15. K. McDowell, S. E. Kerick, D. L. Santa Maria, and B. D. Hatfield, "Aging, physical activity, and cognitive processing: an examination of P300," Neurobiology of Aging, vol. 24, no. 4, pp. 597–606, 2003.
  16. V. Dodin and J. L. Nandrino, "Cognitive processing of anorexic patients in recognition tasks: an event-related potentials study," International Journal of Eating Disorders, vol. 33, no. 3, pp. 299–307, 2003.
  17. Bressler, The Handbook of Brain Theory and Neural Networks, MIT Press, London, 2003.
  18. S. J. Luck and E. S. Kappenman, The Oxford Handbook of Event-Related Potential Components, Oxford University Press, 2011.
  19. M. Giraldo, G. Buodo, and M. Sarlo, "Food processing and emotion regulation in vegetarians and omnivores: an event-related potential investigation," Appetite, vol. 141, p. 104334, 2019.
  20. L. J. Karhunen, E. J. Vanninen, J. T. Kuikka, R. I. Lappalainen, J. Tiihonen, and M. I. J. Uusitupa, "Regional cerebral blood flow during exposure to food in obese binge eating women," Neuroimaging, vol. 99, no. 1, pp. 120–124, 2006.
  21. B. Kopp, F. Rist, and U. Mattler, "N200 in the flanker task as a neurobehavioral tool for investigating executive control," Psychophysiology, vol. 33, no. 3, pp. 282–294, 1996.
  22. E. K. Vogel and S. J. Luck, "The visual N1 component as an index of a discrimination process," Psychophysiology, vol. 37, no. 2, pp. 190–203, 2000.
  23. S. Iceta, J. Benoit, P. Cristini et al., "Attentional bias and response inhibition in severe obesity with food disinhibition: a study of P300 and N200 event-related potential," International Journal of Obesity, vol. 44, 2020.
  24. M. W. Geisler and J. Polich, "P300 and time of day: circadian rhythms, food intake, and body temperature," Biological Psychology, vol. 31, no. 2, pp. 117–136, 1990.
  25. P. Hachl, C. Hempel, and R. Pietrowsky, "ERPs to stimulus identification in persons with restrained eating behavior," International Journal of Psychophysiology, vol. 49, no. 2, pp. 111–121, 2003.
  26. J. Stockburger, R. Schmälzle, T. Flaisch, F. Bublatzky, and H. T. Schupp, "The impact of hunger on food cue processing: an event-related brain potential study," NeuroImage, vol. 47, no. 4, pp. 1819–1829, 2009.
  27. S. Channon and A. Hayward, "The effect of short-term fasting on processing of food cues in normal subjects," International Journal of Eating Disorders, vol. 9, no. 4, pp. 447–452, 1990.
  28. E. H. Lavy and M. A. van den Hout, "Attentional bias for appetitive cues: effects of fasting in normal subjects," Behavioural and Cognitive Psychotherapy, vol. 21, no. 4, pp. 297–310, 1993.
  29. K. S. Dobson and D. J. Dozois, "Attentional biases in eating disorders: a meta-analytic review of Stroop performance," Clinical Psychology Review, vol. 23, no. 8, pp. 1001–1022, 2004.
  30. S. Hollitt, E. Kemps, M. Tiggemann, E. Smeets, and J. S. Mills, "Components of attentional bias for food cues among restrained eaters," Appetite, vol. 54, no. 2, pp. 309–313, 2010.
  31. M. W. Geisler and J. Polich, "P300 is unaffected by glucose increase," Biological Psychology, vol. 37, no. 3, pp. 235–245, 1994.
  32. K. Kitamura, T. Yamasaki, and K. Aizawa, "Food log by analyzing food images," in Proceedings of the 16th ACM International Conference on Multimedia, pp. 999–1000, 2008.
  33. Y. Zhang, G. Zhou, J. Jin, Q. Zhao, X. Wang, and A. Cichocki, "Sparse Bayesian classification of EEG for brain-computer interface," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 11, pp. 2256–2267, 2016.
  34. Y. Zhang, G. Zhou, Q. Zhao, J. Jin, X. Wang, and A. Cichocki, "Spatial-temporal discriminant analysis for ERP-based brain-computer interface," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 21, no. 2, pp. 233–243, 2013.
  35. D. G. Duru and A. D. Duru, "Classification of event related potential patterns using deep learning," in 2018 Medical Technologies National Congress (TIPTEKNO), pp. 1–4, 2018.
  36. S. A. Shaban, O. N. Ucan, and A. D. Duru, "Classification of lactate level using resting-state EEG measurements," Applied Bionics and Biomechanics, vol. 2021, Article ID 6662074, 2021.
  37. X. Zhang, L. Yao, X. Wang, J. J. M. Monaghan, D. Mcalpine, and Y. Zhang, "A survey on deep learning-based non-invasive brain signals: recent advances and new frontiers," Journal of Neural Engineering, vol. 18, 2020.
  38. M. L. Mele and S. Federici, "Gaze and eye-tracking solutions for psychological research," Cognitive Processing, vol. 13, no. S1, pp. 261–265, 2012.
  39. E. Koç, O. Bayat, D. G. Duru, and A. D. Duru, "Design of brain computer interface based on eye movements," International Journal of Engineering Research and Development, vol. 12, no. 1, pp. 176–188.
  40. G. Lohse and E. Johnson, "A comparison of two process tracing methods for choice tasks," Organizational Behavior and Human Decision Processes, vol. 68, no. 1, pp. 28–43, 1996.
  41. H. Kagaya and K. Aizawa, "Highly accurate food/non-food image classification based on a deep convolutional neural network," in International Conference on Image Analysis and Processing, pp. 350–357, 2015.
  42. L. Charbonnier, F. van Meer, L. N. van der Laan, M. A. Viergever, and P. A. Smeets, "Standardized food images: a photographing protocol and image database," Appetite, vol. 96, pp. 166–173, 2016.
  43. S. E. de Bruijn, Y. C. de Vries, C. de Graaf, S. Boesveldt, and G. Jager, "The reliability and validity of the Macronutrient and Taste Preference Ranking Task: a new method to measure food preferences," Food Quality and Preference, vol. 57, pp. 32–40, 2017.
  44. H. U. Amin, W. Mumtaz, A. R. Subhani, M. N. M. Saad, and A. S. Malik, "Classification of EEG signals based on pattern recognition approach," Frontiers in Computational Neuroscience, vol. 11, 2017.
  45. M. Ahmed, A. Mohamed, O. N. Uçan, O. Bayat, and A. D. Duru, "Classification of resting-state status based on sample entropy and power spectrum of electroencephalography (EEG)," Applied Bionics and Biomechanics, vol. 2020, Article ID 8853238, 2020.
  46. D. Torse, V. Desai, and R. Khanai, "A review on seizure detection systems with emphasis on multi-domain feature extraction and classification using machine learning," BRAIN. Broad Research in Artificial Intelligence and Neuroscience, vol. 8, no. 4, pp. 109–129, 2017.
  47. S. Arslan, S. Güney, K. Tan, S. B. Yücel, H. B. Çotuk, and A. D. Duru, "Event related potential responses to food pictures in hunger," in 2018 Electric Electronics, Computer Science, Biomedical Engineerings' Meeting (EBBT), 2018.
  48. B. Blankertz, S. Lemm, M. Treder, S. Haufe, and K. R. Müller, "Single-trial analysis and classification of ERP components – a tutorial," NeuroImage, vol. 56, no. 2, pp. 814–825, 2011.
  49. J. Polich and A. Kok, "Cognitive and biological determinants of P300: an integrative review," Biological Psychology, vol. 41, no. 2, pp. 103–146, 1995.
  50. T. Yin, Z. Huiling, P. Yu, and L. Jinzhao, "Classification for single-trial N170 during responding to facial picture with emotion," Frontiers in Computational Neuroscience, vol. 12, 2018.
  51. A. D. Duru, "Determination of increased mental workload condition from EEG by the use of classification techniques," International Journal of Advances in Engineering and Pure Sciences, vol. 1, pp. 47–52, 2019.

Copyright © 2021 Selen Güney et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

