A New Recognition Method for the Auditory Evoked Magnetic Fields

Yulong Feng, Wei Xiao, Teng Wu, Jianwei Zhang, Jing Xiang, Hong Guo

Computational Intelligence and Neuroscience, vol. 2021, Article ID 6645270, 2021. https://doi.org/10.1155/2021/6645270

Academic Editor: Mario Versaci
Received: 14 Dec 2020
Revised: 15 Jan 2021
Accepted: 21 Jan 2021
Published: 11 Feb 2021

Abstract

Magnetoencephalography (MEG) is a powerful tool for studying the human brain in physiology and psychology. It can be employed to infer the relation between the external environment and internal mental states, which requires recognizing single trial event-related magnetic fields (ERFs) originating from different functional areas of the brain. Current recognition methods for single trial data are mainly designed for event-related potentials (ERPs) in electroencephalography (EEG). Although MEG shares the same signal sources as EEG, it suffers much less interference from other brain tissues, which may give it an edge in recognizing ERFs. In this work, we propose a new recognition method for single trial auditory evoked magnetic fields (AEFs) based on signal enhancement. We find that the signal strength of the single trial AEFs is concentrated in the primary auditory cortex of the temporal lobe, which can be clearly displayed in 2D images. These 2D images are then recognized by an artificial neural network (ANN) with 100% accuracy, realizing automatic recognition of the single trial AEFs. The method may be combined with source estimation algorithms to improve their accuracy and also paves the way for implementing a brain-computer interface (BCI) with MEG.

1. Introduction

Magnetoencephalography (MEG) utilizes extremely sensitive magnetic sensors, such as optically pumped magnetometers (OPMs) and superconducting quantum interference devices (SQUIDs), to capture the feeble signal originating from the brain, enabling researchers to investigate neuronal activities [1–4]. MEG is a frontier tool in scientific research and clinical application. On the one hand, it can be employed to divide and study different brain functional areas [5, 6]. On the other hand, MEG is also an effective method for clinical diagnosis and treatment of some brain functional diseases, such as mild traumatic brain injury [7, 8] and autism spectrum disorder [9], and provides novel insights into the biological mechanisms underlying some brain disorders such as dementia [10], depression [11], and psychosis [12]. Especially for epilepsy, MEG allows us to locate the epileptic foci without risky invasive procedures [13].

The signal generated by the brain in response to different stimuli is a central topic in the study of brain function. These signals are called event-related potentials (ERPs) [14] in electroencephalography (EEG) or event-related magnetic fields (ERFs) in MEG, including sensory ERFs, motor ERFs, long-latency responses, and artifacts [15]. In the EEG, the amplitudes, phases, waveforms, occurrence times, and source locations of ERPs are the major characteristics to be studied [16, 17]. It is also important to explore how the ERPs depend on different subjects [18, 19] or on different stimuli [20–23]. Thus, MEG is a cutting-edge tool in clinical settings, covering brain functional diseases, biomedical engineering, and medical devices [24–27]. In the context of given stimuli, studying features of the ERPs is helpful for obtaining information from different brain functional areas [28, 29]. Conversely, with knowledge of the brain functional regions, the recognition of different ERPs can help us understand the stimuli the subjects are exposed to [30, 31] and the mental states the subjects are in [32]. It is the basis of converting the different brain responses of the subjects into different behaviours, which is also the goal the brain-computer interface (BCI) intends to achieve [33–35]. Meanwhile, ERFs are the magnetic counterpart, measured as magnetic induction intensity, of the brain responses reflected by ERPs. Recognition of the ERFs would therefore provide complementary information for MEG applications that is important for brain research.

Due to the weakness and fuzziness of the ERFs and the existence of various noises [36], even within a magnetically shielded room (MSR), the single trial ERFs cannot be precisely observed. We need to stimulate the subjects hundreds of times and obtain enough single trial ERFs to average them precisely. Averaging suppresses the random noise and strengthens the pattern of the ERFs, which is the conventional data processing method for the ERPs in the EEG. However, the average signal depends on the common patterns and components existing in the single trial ERFs, which may vary widely in both the time domain and the scalp distribution. These variations result from different strategies employed by the subject for processing stimuli. Physiological differences always appear in the subject’s performance during each stimulus, such as expectation, attention, arousal, alertness, and fatigue [31, 37, 38]. At the same time, signals generated by ongoing activities of the subject, which are unrelated to the event of interest, compete with the ERFs for the signal space and play the role of noise [31]. These signals appear randomly and irregularly and are seldom well defined, which makes the positive identification of the single trial signal very difficult. That does not mean these signals are useless; for example, they are usually required to calculate the noise matrix in the process of magnetic source imaging (MSI) [39–41]. Besides, if considerable noise is introduced in a single trial, averaging will also bring it into the average signal. Averaging disregards some information of the signal of interest contained in each single trial and can only produce a signal prototype that is not representative of any of the single trials included in the average, which makes the recognition and determination of the single trial ERFs an important problem in brain science [31, 42].

Compared to averaging, direct recognition of the single trial ERFs does not need to store the data, providing the possibility of on-the-fly identification. With the help of this advantage, a door opens for the study of cognitive brain function, which is a hot spot of experimental psychophysiology [43]. Cognitive variables like visual, auditory, sensory, and even emotional changes, as well as psychological changes, which vary from trial to trial, may be manipulated and sorted out in the study, showing the possibility that the ERFs could be used to implement an objective measure of the brain processes implicated in learning and problem solving. Furthermore, if the single trial ERFs can be read from the MEG and translated by the computer into a perceivable behaviour, a specific brain activity resulting from a specific consciousness can generate a specific action, which is exactly what BCI wants to achieve [44, 45].

Current single trial recognition methods are mainly designed for the ERPs in the EEG. Independent component analysis (ICA) was first employed to separate feature vectors from the single trial ERPs to represent their characteristics [42, 46]. The original detected EEG signal is the result of multiple factors, such as the diversity of the ERPs’ sources, the inconsistency in the electrical conductivity of brain tissues, and differences between sensors, so the separated characteristics are often not evident. With the improvement of instruments and detection methods, the signal waveforms in the EEG became a starting point for solving the single trial ERPs recognition problem. Various statistics have been constructed from amplitudes [15], phases [31], and frequencies [30, 47, 48] to detect different ERPs. However, the signals detected by the MEG and the EEG are different [49]. Data processing methods that are suitable for the EEG signal may not be appropriate for the MEG signal. The MEG signal suffers fewer interferences, and more sensors can be used to obtain location information. Besides, there are some alternative techniques, such as soft computing, capable of studying magnetic fields [50–52]. With the application of machine learning algorithms in medicine [53–55], in this work, we intend to enhance the position features indicating the spatial distribution of the MEG signal and utilize an artificial neural network (ANN) to recognize the single trial auditory evoked magnetic fields (AEFs), which are called auditory evoked potentials (AEPs) in the EEG. Two AEFs datasets and one noise dataset are used to verify the recognition method for the AEFs. After enhancing the position features with the signal enhancement method, these position features are highlighted in 2D images and automatically recognized with GoogLeNet [56].

In this article, we first describe the AEFs datasets, including the data collection and the data format, in Materials and Methods. Then the signal enhancement and recognition method for the single trial AEFs is described in detail. Results show the effect of the signal enhancement method and the recognition results for the single trial AEFs. In Discussion, we further discuss the advantages and disadvantages of this single trial AEFs recognition method compared with traditional methods and consider its future development and application. Finally, we summarize this article in Conclusions.

2. Materials and Methods

2.1. Data Description

The data used in this work come from the open-source database on the Brainstorm website [57]. Verification of the method involves three datasets: the first and second datasets are AEFs data and the third dataset is noise recorded in the same empty room. These data are all recorded with a SQUID-MEG device produced by the CTF Corporation, Canada, whose sensor distribution is shown in Figure 1. The sensors involved in the data recording are listed in Table 1.


Table 1: Sensors involved in the data recording.

Sensor type | Number | Function
Stimulus channel | 1 | Recording the electrical trigger signals that produce audio stimuli.
Audio recording channel | 1 | Recording the audio stimuli sent to the subject.
MEG axial gradiometers | 274 | Recording the MEG signal.
EEG electrodes | 2 | Recording the EEG signal.
Electrocardiograph (ECG), bipolar | 1 | Recording the subject’s heartbeat signals.
Electrooculogram (EOG), bipolar | 2 | Recording the subject’s eye movement signals.
Head tracking channels | 12 | Recording the position of the subject’s head.

The two AEFs datasets are each acquired over 360 s with a sampling rate of 600 Hz. The first (second) dataset contains 200 (199) auditory stimuli, which means that it should include 200 (199) single trial AEFs. We use the first dataset as the training data source and the second dataset as the testing data source. The third dataset is noise collected in the same empty room; it has a length of 120 s and is sampled at 600 Hz. These three datasets are shown in Figure 2. It should be noted that the spikes that appear in the signal are interferences caused by eye movement, and the slower fluctuations may be caused by various signals from different parts of the brain as well as some environmental noise.

2.2. Signal Enhancement

One cannot observe any trace of the single trial AEFs in Figure 2, owing to the fuzziness of the single trial AEFs and the existence of various noises. It is necessary to enhance the AEFs and suppress the noise as much as possible. We propose a signal enhancement method that can be used to enhance the position features of the single trial AEFs. The signal processing procedures are described below.

Event markers: in order to obtain the single trial AEFs segments, the onset time of each single trial AEF should be calibrated and marked using the signal recorded by the audio recording channel. There is a delay between the electrical trigger and the sound being heard by the subject, about 0.13 s in this experiment, while the appearance time of the stimulus recorded by the audio recording channel is almost the same as the time at which the subject hears the sound. In addition, the eye movement artifacts and the cardiac artifacts should be marked according to the signals recorded by the bipolar EOG and the bipolar ECG, as shown in Figure 3(b), so that we can eliminate them with the signal space projection (SSP) algorithm. After this step, the occurrence times of the artifacts and the AEFs are marked in the original detected data.

Preprocessing: the presence of various noises makes identification of the AEFs difficult, so noise needs to be eliminated as much as possible. Firstly, the SSP algorithm is adopted to remove the eye movement artifacts and the cardiac artifacts [58–61]. Secondly, a 2nd-order infinite impulse response (IIR) notch filter with a 3 dB bandwidth of 2 Hz is used to remove the α and β waves at 10 Hz, 11 Hz, 20 Hz, and 21 Hz. At last, an even-order linear-phase finite impulse response (FIR) 40 Hz low-pass filter with 60 dB stopband attenuation is applied to the signal to remove as much noise as possible while leaving the AEFs. Meanwhile, the line frequency noise and its harmonics (60 Hz, 120 Hz, and 180 Hz) are also filtered out. The preprocessed signal is shown in Figure 3(c). After this step, the artifacts, irrelevant MEG signals, and the line frequency noise have been removed.
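The filtering steps above can be sketched as follows. This is an illustrative Python/SciPy sketch rather than the authors' implementation; the array `meg` (channels × samples at 600 Hz) is an assumed input, and zero-phase filtering is an illustrative choice.

```python
# Illustrative preprocessing sketch (not the authors' code): notch filters at the
# alpha/beta frequencies and the line noise, followed by a 40 Hz FIR low-pass.
# `meg` is assumed to be an (n_channels, n_samples) NumPy array sampled at 600 Hz.
import numpy as np
from scipy import signal

fs = 600.0  # sampling rate (Hz)

def notch(data, f0, bw=2.0, fs=fs):
    """2nd-order IIR notch with a 3 dB bandwidth of `bw` Hz, applied zero-phase."""
    b, a = signal.iirnotch(f0, Q=f0 / bw, fs=fs)
    return signal.filtfilt(b, a, data, axis=-1)

def preprocess(meg):
    out = meg.copy()
    # Remove alpha/beta components and the line frequency plus its harmonics.
    for f0 in (10, 11, 20, 21, 60, 120, 180):
        out = notch(out, f0)
    # Linear-phase FIR low-pass at 40 Hz with roughly 60 dB stopband attenuation.
    numtaps, beta = signal.kaiserord(60.0, width=10.0 / (0.5 * fs))
    taps = signal.firwin(numtaps, 40.0, window=("kaiser", beta), fs=fs)
    return signal.filtfilt(taps, [1.0], out, axis=-1)
```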

Interception of the single trial AEFs: in order to obtain the major features of the AEFs, which are three peaks called P50, N100, and P200, respectively (see Appendix A for detailed information), we need to intercept the single trial AEFs. Each single trial AEF lasts about 0.3 s and can be intercepted according to the stimulus markers previously placed in the MEG signal. During the first (second) data recording, the subject is stimulated with 200 (199) audio stimuli; therefore, we can obtain 200 (199) single trial AEFs from the first (second) dataset.
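A minimal epoching sketch under the assumption that `stim_samples` holds the sample indices of the audio-onset markers; each single trial AEF segment is taken as the 0.3 s window following the marked onset, as described above.

```python
# Cut one 0.3 s segment per marked stimulus onset (illustrative sketch).
import numpy as np

def extract_trials(meg, stim_samples, fs=600.0, duration=0.3):
    """Return an array of shape (n_trials, n_channels, n_times)."""
    n_times = int(round(duration * fs))
    trials = [meg[:, s:s + n_times] for s in stim_samples
              if s + n_times <= meg.shape[1]]
    return np.stack(trials)
```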

Calculation of correlation coefficients of the MEG signal: among the 200 single trial AEFs from the first dataset, it can be found that the signals detected in the primary auditory area of the temporal lobe are correlated. Thus, the correlation coefficients are used to enhance the single trial AEFs. We set 0.022 × 1.7 m as the radius and define a neighbourhood for each sensor (see Appendix B for more details). For simplicity, the sensor at the centre of a neighbourhood is called the selected sensor and its detected signal is the selected signal $S_0$; the sensors in its neighbourhood are the neighbourhood sensors and their detected signals are the neighbourhood signals $S_i$. The number of neighbourhood sensors is 5 to 8. The correlation coefficients between the selected signal $S_0$ and its neighbourhood signals $S_i$ are calculated as [62, 63]

$$r_{0i} = \frac{\sum_{t}\bigl(S_0(t)-\bar{S}_0\bigr)\bigl(S_i(t)-\bar{S}_i\bigr)}{\sqrt{\sum_{t}\bigl(S_0(t)-\bar{S}_0\bigr)^2}\,\sqrt{\sum_{t}\bigl(S_i(t)-\bar{S}_i\bigr)^2}}.$$
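The neighbourhood construction and correlation computation could look like the following sketch; `positions` (sensor coordinates in metres) and `trial` (one channels × samples segment) are assumed inputs, and `np.corrcoef` evaluates the Pearson coefficient defined above.

```python
# Illustrative sketch: neighbourhoods from sensor geometry, then Pearson correlations.
import numpy as np

RADIUS = 0.022 * 1.7   # neighbourhood radius in metres (see Appendix B)

def neighbourhoods(positions, radius=RADIUS):
    """For each sensor, return the indices of the other sensors within `radius`."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    return [np.where((d[k] > 0) & (d[k] <= radius))[0] for k in range(len(positions))]

def correlations(trial, k, neighbours):
    """Pearson correlation between the selected signal S0 and each neighbour Si."""
    return np.array([np.corrcoef(trial[k], trial[i])[0, 1] for i in neighbours])
```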

Signal enhancement: it is a conventional approach in the EEG to stack the signals of different channels with weights to form a new signal [30]. In this work, when the correlation coefficient between a neighbourhood signal $S_i$ and the selected signal $S_0$ is greater than 0.8, we make a weighted linear superposition of the two signals to form the new selected signal. Since the correlation coefficient represents the size of the components in one variable that are similar to the other, the weight is chosen as the correlation coefficient between them. The new selected signal $S_0'$ is obtained as

$$S_0' = S_0 + \sum_{i=1}^{M} r_{0i} S_i,$$

where the sum runs over the $M$ neighbourhood signals whose absolute correlation coefficient with $S_0$ is greater than 0.8. Meanwhile, if there are no neighbourhood signals with a correlation coefficient greater than 0.8, we do nothing with the selected signal; that is, $S_0' = S_0$. After the enhancement, if the selected signal consists mostly of noise, the new selected signal is not strengthened, owing to its irrelevance to the neighbourhood signals $S_i$; if, instead, the AEFs from the brain are detected by the selected sensor, the surrounding sensors also detect the same component, which means that the correlation between them is strong and the new selected signal is enhanced.
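A minimal sketch of the enhancement rule, reusing the neighbourhood lists from the previous sketch; the weights are the correlation coefficients and the 0.8 threshold is applied to their absolute values, as described above.

```python
# Illustrative sketch of the correlation-weighted superposition (not the authors' code).
import numpy as np

THRESHOLD = 0.8

def enhance_trial(trial, neighbours_per_sensor, threshold=THRESHOLD):
    """Return an enhanced copy of one (n_channels, n_times) AEF segment."""
    enhanced = trial.copy()
    for k, neighbours in enumerate(neighbours_per_sensor):
        for i in neighbours:
            r = np.corrcoef(trial[k], trial[i])[0, 1]
            if abs(r) > threshold:
                # Add the correlated neighbour, weighted by its correlation coefficient.
                enhanced[k] = enhanced[k] + r * trial[i]
    return enhanced
```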

2D images: since the permeabilities of brain tissues are approximately the same, while the conductivity varies from tissue to tissue, one of the advantages of the MEG is that its detected signal is a better indicator of the source location. Drawing a 2D image is a convenient way to highlight the spatial distribution of the MEG signal. We calculate the energy of the enhanced signal to draw the 2D image and normalize it for convenience of comparison. In this way, a certain kind of ERFs, namely, the single trial AEFs, can be observed and identified. The method should also be applicable to other ERFs carrying location information, such as visual evoked magnetic fields (VEFs) and somatosensory evoked magnetic fields (SEFs). If the noise also contained spatial correlation, the enhancement method would strengthen the noise as well. However, as the noise does not have a regular spatial distribution, it can further be filtered out by the ANN.
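The 2D images could be generated along the following lines; this sketch assumes a flattened 2D sensor layout `layout_2d`, and the interpolation and colour map are illustrative choices rather than the authors' exact rendering.

```python
# Illustrative sketch: normalized channel energy interpolated onto a 2D sensor layout.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

def energy_image(enhanced, layout_2d, grid=128, filename="trial_energy.png"):
    """Save a normalized channel-energy map for one enhanced trial."""
    energy = np.sum(enhanced ** 2, axis=-1)
    energy = energy / energy.max()                 # normalize for comparison
    xi = np.linspace(layout_2d[:, 0].min(), layout_2d[:, 0].max(), grid)
    yi = np.linspace(layout_2d[:, 1].min(), layout_2d[:, 1].max(), grid)
    zi = griddata(layout_2d, energy, (xi[None, :], yi[:, None]), method="cubic")
    plt.imshow(zi, origin="lower", cmap="jet")
    plt.axis("off")
    plt.savefig(filename, bbox_inches="tight")
    plt.close()
```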

2.3. Signal Recognition

In the first AEFs dataset, there are a total of 200 single trial AEFs, 3 of which are seriously polluted by noise and are therefore screened out (see Appendix C). The remaining 197 single trial AEFs form the training data source. Each intercepted AEFs segment lasts 0.3 s, ensuring that it contains the P50, N100, and P200 peaks. At the same time, we randomly intercept 200 equal-length segments from the third (noise) dataset without overlapping. All 397 signal segments are processed with the signal enhancement method, yielding 397 2D images as training data. Similarly, 199 single trial AEFs can be obtained from the second AEFs dataset, and another 200 equal-length noise segments are randomly intercepted from the noise dataset. Therefore, a total of 399 2D images can be obtained as testing data.

A pretrained GoogLeNet is utilized to recognize auditory activation patterns in the single trial data. GoogLeNet is a 144-layer convolutional neural network (CNN). The input image is filtered by each layer of the network to extract its features. The initial layers primarily identify generic features of the image, such as blobs, edges, and colours, while the subsequent layers focus on more specific features that divide the images into different categories. For the single trial AEFs recognition problem, 3 layers of GoogLeNet need to be readjusted.

The first adjusted layer is the final dropout layer in the network, which aims to prevent overfitting. The original dropout layer randomly sets input elements to zero with a probability of 0.5; this probability is set to 0.6 in the new layer. The second is the last fully connected layer, which decides how to combine the features the network extracts into class probabilities, a loss value, and predicted labels. In order to retrain GoogLeNet to classify noise and AEFs 2D images, this layer is replaced with a new fully connected layer whose number of filters equals the number of classes (noise and AEFs). The third adjusted layer is the final classification layer, which specifies the output classes of the network; it is replaced with a new one without class labels, which are set automatically from the output classes during training. Then, GoogLeNet is retrained for the single trial AEFs recognition problem, that is, trained starting from the network parameters obtained in pretraining. We set the initial learning rate to 0.0001, which determines the variation range of the parameters in the ANN. The number of epochs is set to 10, which represents how many times the ANN is trained with the same set of training data. We use 80% of the images for training and the remainder for validation. The random seed is set to the default value in Matlab to generate random numbers.
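The authors perform this transfer learning with the pretrained GoogLeNet in Matlab; the sketch below shows an analogous adjustment in PyTorch (an assumption for illustration, not the authors' code), raising the final dropout probability to 0.6, replacing the classifier with a two-class fully connected layer, and training for 10 epochs at a learning rate of 0.0001.

```python
# Analogous GoogLeNet transfer-learning sketch in PyTorch (illustrative only).
import torch
import torch.nn as nn
from torchvision import models

model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.dropout = nn.Dropout(p=0.6)                 # final dropout probability 0.5 -> 0.6
model.fc = nn.Linear(model.fc.in_features, 2)     # two output classes: noise vs. AEFs

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # initial learning rate 0.0001
criterion = nn.CrossEntropyLoss()

def train(loader, epochs=10):
    """Retrain on the 2D images; `loader` yields (image batch, label batch) pairs."""
    model.train()
    for _ in range(epochs):                        # 10 passes over the training images
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```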

3. Results

The first three single trial AEFs illustrate the positive effect of the signal enhancement method, as shown in Figure 4. When an auditory stimulus is present, the auditory area sends out signals and the correlation coefficients there should exceed 0.8. The original detected single trial AEFs appear randomly in the time domain and do not have a regular spatial distribution across the scalp. After the signal enhancement process, the signals detected by the temporal lobe sensors are all enhanced in the time domain, so that they reach the maximum or minimum amplitude among all channel signals at the P50, N100, and P200 peaks. In the spatial distribution, the normalized energy is concentrated in the auditory area of the temporal lobe, while the detected signals in other areas tend to be random noise with little correlation, which makes the position characteristics of the AEFs evident. The second original detected AEFs are already well characterized, which indicates that the MEG signal suffers less interference during that measurement; however, the location of the AEFs deviates slightly from the auditory part of the temporal lobe, and this deviation is corrected after the enhancement. It should be noted that, in the experiment, the subject’s left ear has reduced hearing sensitivity, so in the 2D image the signal on the right side is more obvious than that on the left side.

The same operation is applied to the noise segments to obtain similar 2D images, as shown in Figure 5. The noise detected by sensors at various locations appears randomly in space. After the enhancement, most of it is suppressed, but some is also enhanced. For the noise 2D image in Figure 5, it can be speculated that a large magnetic signal fluctuation appears at the front of the brain, resulting in the enhancement of signals in that region, while the signal strength is generally suppressed elsewhere.

After retraining, GoogLeNet can be utilized to achieve automatic and on-the-fly recognition of auditory activation patterns in the single trial data. In this work, we use the training data (the testing data), which include 197 (199) single trial AEFs and 200 (200) noise segments, to test the efficiency of GoogLeNet in identifying the single trial AEFs. The recognition accuracies on both datasets are 100%.

4. Discussion

In this work, according to the correlation of the signals detected by different sensors, the signal strength is concentrated in the auditory area of the temporal lobe, so that the AEFs can be automatically and promptly recognized by the ANN. The new method makes full use of the information on the signal spatial distribution contained in the MEG. Although some noises with a specific spatial distribution could also be enhanced, the correlation of noises is generally not strong, and the noise source does not happen to be located in the auditory area. Besides, the method also applies to other ERFs that have a specific spatial distribution, whereas it may not be suitable for the EEG, which measures potential differences and is disturbed by the differing conductivities of tissues.

Compared with conventional averaging, the new method preserves the information in the single trial data as much as possible, including the intact ERFs obtained by stimulation and other signals. The single trial recognition can be carried out synchronously with the signal measurement, which means that real-time identification can be realized. It would provide a powerful tool for psychophysiological studies and MEG data processing. At the same time, it is also the basic algorithm for realizing BCI with the MEG. However, the identification of single trial AEFs requires the AEFs segments so that their position features can be enhanced and displayed, which means that we still need to know the approximate occurrence time of the stimulus. If the single trial ERFs could be recognized without any prior condition on the MEG, the information about the external stimulations the body of the subject is exposed to and the mental states the brain of the subject is in could be obtained through the MEG in real time, which would amount to true “mind-reading.”

The process of enhancement and recognition for the single trial AEFs is essentially the process of extracting and recognizing the position features, which requires that the single trial AEFs be highlighted by the signal enhancement method. In this work, the signal strength is concentrated at its source location, which is the primary auditory cortex of the temporal lobe. On the one hand, if the source of the single trial signal is directly calculated with source estimation algorithms such as minimum norm imaging [39], linearly constrained minimum variance (LCMV) beamformers [40], and dipole modelling [41], the results of source estimation may be greatly deteriorated by the weakness and fuzziness of the single trial AEFs and the existence of various noises. Nevertheless, if the signal enhancement method proposed in this work is used to process the signal first and the source is estimated afterwards, a more accurate result may be achieved. Further research is needed to figure out how to combine the enhancement method with the source estimation algorithms. On the other hand, the ANN is employed here to recognize auditory activation patterns and realize the automatic and on-the-fly recognition of the single trial AEFs. If the training dataset can be extended, the ANN should also have the potential to identify VEFs, SEFs, and so forth, which originate from different functional areas of the brain. It could even be employed to identify the signal amplitude, phase, frequency, and distribution features of some brain diseases such as epilepsy and migraine, realizing an initial diagnosis of these diseases.

5. Conclusions

In this work, we propose a new signal recognition method for the analysis of single trial AEFs. An ANN can be used to automatically and promptly identify the single trial AEFs, and this single trial identification retains the intact original data. Three datasets, two AEFs datasets and one noise dataset, are utilized to experimentally verify the signal enhancement method and the single trial recognition method. The recognition accuracies on the training data and the testing data are both 100%. Recognition of single trial ERFs can not only expand psychological research methods but also establish the algorithmic basis for using the MEG signal to achieve BCI. In addition, it may be combined with source estimation algorithms to improve their accuracy in the future.

Appendix

A. Average Process for the AEFs

Averaging of the first dataset is presented in order to observe some features of the AEFs. The first dataset is 360 s long with a sampling rate of 600 Hz and contains 200 single trial AEFs produced by sound stimuli at a frequency of 440 Hz. The averaging operation for the AEFs mainly includes the following processes.

Data preprocessing: notch filters at 60 Hz, 120 Hz, and 180 Hz are applied to the original detected data to remove the line frequency noise and its harmonics. Then the positions of the heartbeat artifacts and the eye movement artifacts are obtained and marked according to the heartbeat signals and eye movement signals measured by one bipolar ECG and two bipolar EOG, so that the SSP algorithm can be utilized to remove these artifacts.

Interception of the single trial AEFs: the single trial AEFs are marked in the MEG according to the occurrence time of the stimulus sounds recorded by the audio recording channel. MEG signal segments from 100 ms before to 500 ms after this time point are then intercepted.

Averaging of the signal segments: all AEF segments are averaged to obtain the average signal. AEFs with obvious characteristics can be observed in Figure 6.
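A minimal sketch of this averaging, assuming `epochs` is an array of the intercepted segments (trials × channels × samples).

```python
# Illustrative sketch: average across trials so random noise cancels while the
# common AEF pattern remains.
import numpy as np

def average_erf(epochs):
    """Return the (n_channels, n_times) average of all intercepted segments."""
    return epochs.mean(axis=0)
```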

B. Selection of Parameters in the Algorithm

In this work, the neighbourhood radius of 0.022 × 1.7 m and the signal enhancement correlation coefficient threshold of 0.8 are two parameters obtained through statistics. The average distance between sensors is 0.022 m. A neighbourhood radius of 1.7 × 0.022 m ensures that 5 to 8 sensors exist in each selected sensor’s neighbourhood, as Figure 7 shows. In other words, 5 to 8 neighbourhood signals are involved in the calculation of the correlation coefficients and in the competition of the signal enhancement.

In addition, the threshold value of the correlation coefficient is selected as 0.8. When the correlation coefficient between a neighbourhood signal and the selected signal is greater than 0.8, the neighbourhood signal is superimposed on the selected signal, so as to enhance the selected signal. The value of 0.8 is selected according to the average correlation coefficient for the selected signal $S_0$, which can be calculated as

$$\bar{r}_0 = \frac{1}{N}\sum_{i=1}^{N} r_{0i}.$$

For simplicity, we call it the neighbourhood correlation coefficient. Here $N$ represents the number of neighbourhood signals of the selected signal $S_0$, and $r_{0i}$ denotes the correlation coefficients between the selected signal $S_0$ and its neighbourhood signals $S_i$. The value 0.8 is obtained from the statistics of the 200 AEFs in the first dataset, as illustrated in Figure 8. It should be pointed out that, because we average the correlation coefficients between the selected signal and its neighbourhood signals, the averages are all positive, although negative correlation coefficients also exist between individual MEG signals. There are approximately 10 sensors located in the auditory part of the temporal lobe. For 200 single trial AEFs, 2,000 neighbourhood correlation coefficients can be obtained, 1,711 of which are greater than 0.8, and the other values are also close to 0.8. Therefore, the threshold value in the body of the paper is set at 0.8.

C. Selection of Training Data

In the first dataset, there are 200 single trial AEFs, 3 of which are screened out due to serious noise pollution. This can be observed from the energy of the signal in each channel, as shown in Figure 9(a). In the 93rd, 94th, and 112th trials, there is a channel with anomalously high power: the MRT31, MRT41, and MRT51 channels, respectively. All three channels are located closest to the eyes, so it can be reasonably speculated that such power fluctuations are due to incomplete elimination of the eye movement artifacts. These 3 AEFs are deleted from the training data; the power of the remaining signals is shown in Figure 9(b).
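A sketch of how such screening might be automated; the outlier criterion (`factor`) is an illustrative assumption, since the authors identify the polluted trials by inspecting the channel energies in Figure 9(a).

```python
# Illustrative sketch: flag trials whose worst channel energy is an extreme outlier.
import numpy as np

def flag_noisy_trials(epochs, factor=10.0):
    """Return indices of trials whose maximum channel energy exceeds `factor` x the median."""
    energy = np.sum(epochs ** 2, axis=-1)            # (n_trials, n_channels)
    peak = energy.max(axis=1)                        # worst channel per trial
    return np.where(peak > factor * np.median(peak))[0]
```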

Data Availability

Three MEG datasets used in this work are obtained from the open database of Brainstorm software, and its website is http://neuroimage.usc.edu/brainstorm.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This project was supported by the National Natural Science Foundation of China (61571018, 61531003, and 91436210) and the National Key Research and Development Program. The authors are grateful for the open data source in Brainstorm software, and its website is http://neuroimage.usc.edu/brainstorm.

References

1. D. Cohen, “Magnetoencephalography: evidence of magnetic fields produced by alpha-rhythm currents,” Science, vol. 161, no. 3843, pp. 784–786, 1968.
2. E. Boto, S. S. Meyer, V. Shah et al., “A new generation of magnetoencephalography: room temperature measurements using optically-pumped magnetometers,” NeuroImage, vol. 149, pp. 404–414, 2017.
3. E. Boto, N. Holmes, J. Leggett et al., “Moving magnetoencephalography towards real-world applications with a wearable system,” Nature, vol. 555, no. 7698, pp. 657–661, 2018.
4. D. Cohen, “Magnetoencephalography: detection of the brain’s electrical activity with a superconducting magnetometer,” Science, vol. 175, no. 4022, pp. 664–666, 1972.
5. A. P. Baker, M. J. Brookes, I. A. Rezek et al., “Fast transient networks in spontaneous human brain activity,” eLife, vol. 3, Article ID e01867, 2014.
6. G. C. O’Neill, P. K. Tewarie, G. L. Colclough et al., “Measurement of dynamic task related functional networks using MEG,” NeuroImage, vol. 146, pp. 667–678, 2017.
7. B. T. Dunkley, L. Da Costa, A. Bethune et al., “Low-frequency connectivity is associated with mild traumatic brain injury,” NeuroImage: Clinical, vol. 7, pp. 611–621, 2015.
8. M.-X. Huang, S. Nichols, D. G. Baker et al., “Single-subject-based whole-brain MEG slow-wave imaging approach for detecting abnormality in patients with mild traumatic brain injury,” NeuroImage: Clinical, vol. 5, pp. 109–119, 2014.
9. T. P. L. Roberts, J. Matsuzaki, L. Blaskey et al., “Delayed M50/M100 evoked response component latency in minimally verbal/nonverbal children who have autism spectrum disorder,” Molecular Autism, vol. 10, no. 1, p. 34, 2019.
10. D. López-Sanz, N. Serrano, and F. Maestú, “The role of magnetoencephalography in the early stages of Alzheimer’s disease,” Frontiers in Neuroscience, vol. 12, p. 572, 2018.
11. A. C. Nugent, S. E. Robinson, R. Coppola, M. L. Furey, and C. A. Zarate Jr, “Group differences in MEG-ICA derived resting state networks: application to major depressive disorder,” NeuroImage, vol. 118, pp. 1–12, 2015.
12. S. E. Robson, M. J. Brookes, E. L. Hall et al., “Abnormal visuomotor processing in schizophrenia,” NeuroImage: Clinical, vol. 12, pp. 869–878, 2016.
13. H. Stefan and E. Trinka, “Magnetoencephalography (MEG): past, current and future perspectives for improved differentiation and treatment of epilepsies,” Seizure, vol. 44, pp. 121–124, 2017.
14. K. H. Chiappa, Evoked Potentials in Clinical Medicine, Lippincott Williams & Wilkins, Philadelphia, PA, USA, 1997.
15. J. J. Vidal, “Real-time detection of brain events in EEG,” Proceedings of the IEEE, vol. 65, no. 5, pp. 633–641, 1977.
16. G. P. Novak, W. Ritter, H. G. Vaughan Jr, and M. L. Wiznitzer, “Differentiation of negative event-related potentials in an auditory discrimination task,” Electroencephalography and Clinical Neurophysiology, vol. 75, no. 4, pp. 255–275, 1990.
17. A. Mecklinger, A. F. Kramer, and D. L. Strayer, “Event related potentials and EEG components in a semantic memory search task,” Psychophysiology, vol. 29, no. 1, pp. 104–119, 1992.
18. A. Pfefferbaum, J. M. Ford, W. T. Roth, and B. S. Kopell, “Age-related changes in auditory event-related potentials,” Electroencephalography and Clinical Neurophysiology, vol. 49, no. 3-4, pp. 266–276, 1980.
19. M. Vander Ghinst, M. Bourguignon, M. Niesen et al., “Cortical tracking of speech-in-noise develops from childhood to adulthood,” The Journal of Neuroscience, vol. 39, no. 15, pp. 2938–2950, 2019.
20. C. M. G. Gonzalez, V. P. Clark, S. Fan, S. J. Luck, and S. A. Hillyard, “Sources of attention-sensitive visual event-related potentials,” Brain Topography, vol. 7, no. 1, pp. 41–51, 1994.
21. W. T. Roth, A. Pfefferbaum, A. F. Kelly, P. A. Berger, and B. S. Kopell, “Auditory event-related potentials in schizophrenia and depression,” Psychiatry Research, vol. 4, no. 2, pp. 199–212, 1981.
22. T. C. Handy, Event-Related Potentials: A Methods Handbook, MIT Press, Cambridge, MA, USA, 2005.
23. J. M. Ford, W. T. Roth, R. C. Mohs, W. F. Hopkins III, and B. S. Kopell, “Event-related potentials recorded from young and old adults during a memory retrieval task,” Electroencephalography and Clinical Neurophysiology, vol. 47, no. 4, pp. 450–459, 1979.
24. R. Hari and R. Salmelin, “Magnetoencephalography: from SQUIDs to neuroscience,” NeuroImage, vol. 61, no. 2, pp. 386–396, 2012.
25. S. A. Hosseini, “Decoding visual covert selective spatial attention based on magnetoencephalography signals,” Biomedical Engineering: Applications, Basis and Communications, vol. 31, no. 01, Article ID 1950003, 2019.
26. R. Hari, S. Baillet, G. Barnes et al., “IFCN-endorsed practical guidelines for clinical magnetoencephalography (MEG),” Clinical Neurophysiology, vol. 129, no. 8, pp. 1720–1747, 2018.
27. P. Di Barba, L. Fattorusso, and M. Versaci, “Electrostatic field in terms of geometric curvature in membrane MEMS devices,” Communications in Applied and Industrial Mathematics, vol. 8, no. 1, pp. 165–184, 2017.
28. H. J. Heinze, S. J. Luck, G. R. Mangun, and S. A. Hillyard, “Visual event-related potentials index focused attention within bilateral stimulus arrays. I. Evidence for early selection,” Electroencephalography and Clinical Neurophysiology, vol. 75, no. 6, pp. 511–527, 1990.
29. G. Hajcak, A. MacNamara, and D. M. Olvet, “Event-related potentials, emotion, and emotion regulation: an integrative review,” Developmental Neuropsychology, vol. 35, no. 2, pp. 129–155, 2010.
30. O. Friman, I. Volosyak, and A. Graser, “Multiple channel detection of steady-state visual evoked potentials for brain-computer interfaces,” IEEE Transactions on Biomedical Engineering, vol. 54, no. 4, pp. 742–750, 2007.
31. T. Kluge and M. Hartmann, “Phase coherent detection of steady-state evoked potentials: experimental results and application to brain-computer interfaces,” in Proceedings of the 2007 3rd International IEEE/EMBS Conference on Neural Engineering, pp. 425–429, Kohala Coast, HI, USA, June 2007.
32. A. Keil, M. M. Müller, T. Gruber, C. Wienbruch, M. Stolarova, and T. Elbert, “Effects of emotional arousal in the cerebral hemispheres: a study of oscillatory brain activity and event-related potentials,” Clinical Neurophysiology, vol. 112, no. 11, pp. 2057–2068, 2001.
33. E. W. Sellers and E. Donchin, “A P300-based brain-computer interface: initial tests by ALS patients,” Clinical Neurophysiology, vol. 117, no. 3, pp. 538–548, 2006.
34. V. Bostanov, “BCI competition 2003-data sets IB and IIB: feature extraction from event-related brain potentials with the continuous wavelet transform and the t-value scalogram,” IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 1057–1061, 2004.
35. L. M. McCane, S. M. Heckman, D. J. McFarland et al., “P300-based brain-computer interface (BCI) event-related potentials (ERPs): people with amyotrophic lateral sclerosis (ALS) vs. age-matched controls,” Clinical Neurophysiology, vol. 126, no. 11, pp. 2124–2131, 2015.
36. M. Hämäläinen, R. Hari, R. J. Ilmoniemi, J. Knuutila, and O. V. Lounasmaa, “Magnetoencephalography theory, instrumentation, and applications to non-invasive studies of the working human brain,” Reviews of Modern Physics, vol. 65, no. 2, p. 413, 1993.
37. H. Yabe, F. Saito, and Y. Fukushima, “Median method for detecting endogenous event-related brain potentials,” Electroencephalography and Clinical Neurophysiology, vol. 87, no. 6, pp. 403–407, 1993.
38. Y. Huang, D. Erdogmus, S. Mathan, and M. Pavel, “Large-scale image database triage via EEG evoked responses,” in Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 429–432, IEEE, Las Vegas, NV, USA, April 2008.
39. S. Baillet, J. C. Mosher, and R. M. Leahy, “Electromagnetic brain mapping,” IEEE Signal Processing Magazine, vol. 18, no. 6, pp. 14–30, 2001.
40. M. Popescu, E.-A. Popescu, T. Chan, S. D. Blunt, and J. D. Lewine, “Spatio-temporal reconstruction of bilateral auditory steady-state responses using MEG beamformers,” IEEE Transactions on Biomedical Engineering, vol. 55, no. 3, pp. 1092–1102, 2008.
41. J. C. Mosher, P. S. Lewis, and R. M. Leahy, “Multiple dipole modeling and localization from spatio-temporal MEG data,” IEEE Transactions on Biomedical Engineering, vol. 39, no. 6, pp. 541–557, 1992.
42. T.-P. Jung, S. Makeig, M. Westerfield, J. Townsend, E. Courchesne, and T. J. Sejnowski, “Analyzing and visualizing single-trial event-related potentials,” Advances in Neural Information Processing Systems, vol. 11, pp. 118–124, 1999.
43. L. Carretié, M. Martín-Loeches, J. A. Hinojosa, and F. Mercado, “Emotion and attention interaction studied through event-related potentials,” Journal of Cognitive Neuroscience, vol. 13, no. 8, pp. 1109–1128, 2001.
44. J. Wolpaw and E. W. Wolpaw, Brain-Computer Interfaces: Principles and Practice, OUP USA, Oxford, USA, 2012.
45. F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi, “A review of classification algorithms for EEG-based brain-computer interfaces,” Journal of Neural Engineering, vol. 4, no. 2, p. R1, 2007.
46. R. Sánchez, A. Riquenes, and M. Pérez-Abalo, “Automatic detection of auditory brainstem responses using feature vectors,” International Journal of Bio-Medical Computing, vol. 39, no. 3, pp. 287–297, 1995.
47. V. Bostanov and B. Kotchoubey, “The t-CWT: a new ERP detection and quantification method based on the continuous wavelet transform and Student’s t-statistics,” Clinical Neurophysiology, vol. 117, no. 12, pp. 2627–2644, 2006.
48. U. Hoppe, S. Weiss, R. W. Stewart, and U. Eysholdt, “An automatic sequential recognition method for cortical auditory evoked potentials,” IEEE Transactions on Biomedical Engineering, vol. 48, no. 2, pp. 154–164, 2001.
49. G. Pfurtscheller and F. H. Lopes da Silva, “Event-related EEG/MEG synchronization and desynchronization: basic principles,” Clinical Neurophysiology, vol. 110, no. 11, pp. 1842–1857, 1999.
50. F. C. Morabito, M. Versaci, G. Pautasso, C. Tichmann, and the ASDEX Upgrade Team, “Fuzzy-neural approaches to the prediction of disruptions in ASDEX Upgrade,” Nuclear Fusion, vol. 41, no. 11, p. 1715, 2001.
51. J. Velmurugan, S. Sinha, and P. Satishchandra, “Magnetoencephalography recording and analysis,” Annals of Indian Academy of Neurology, vol. 17, no. 1, p. S113, 2014.
52. S. Braeutigam, “Magnetoencephalography: fundamentals and established and emerging clinical applications in radiology,” ISRN Radiology, vol. 2013, Article ID 529463, 18 pages, 2013.
53. C. Ieracitano, A. Paviglianiti, N. Mammone, M. Versaci, E. Pasero, and F. C. Morabito, “SoCNNet: an optimized Sobel filter based convolutional neural network for SEM images classification of nanomaterials,” in Progresses in Artificial Intelligence and Neural Systems, pp. 103–113, Springer, Singapore, 2020.
54. M. Versaci, G. Angiulli, and F. La Foresta, “A modified heart dipole model for the generation of pathological ECG signals,” Computation, vol. 8, no. 4, p. 92, 2020.
55. F. C. Morabito, M. Campolo, N. Mammone et al., “Deep learning representation from electroencephalography of early-stage Creutzfeldt-Jakob disease and features for differentiation from rapidly progressive dementia,” International Journal of Neural Systems, vol. 27, no. 02, Article ID 1650039, 2017.
56. C. Szegedy, W. Liu, Y. Jia et al., “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, Boston, MA, USA, June 2015.
57. S. Baillet, K. J. Friston, and R. Oostenveld, “Academic software applications for electromagnetic brain mapping using MEG and EEG,” Computational Intelligence and Neuroscience, vol. 2011, Article ID 972050, 12 pages, 2011.
58. R. Ilmoniemi, S. J. Williamson, and W. Hostetler, New Method for the Study of Spontaneous Brain Activity, Tech. Rep., New York, NY, USA, 1988.
59. P. Berg and M. Scherg, “A multiple source approach to the correction of eye artifacts,” Electroencephalography and Clinical Neurophysiology, vol. 90, no. 3, pp. 229–241, 1994.
60. V. Jousmäki and R. Hari, “Cardiac artifacts in magnetoencephalogram,” Journal of Clinical Neurophysiology: Official Publication of the American Electroencephalographic Society, vol. 13, no. 2, pp. 172–176, 1996.
61. S. Taulu and R. Hari, “Removal of magnetoencephalographic artifacts with temporal signal-space separation: demonstration with single-trial auditory-evoked responses,” Human Brain Mapping, vol. 30, no. 5, pp. 1524–1534, 2009.
62. K. Pearson, “VII. Note on regression and inheritance in the case of two parents,” Proceedings of the Royal Society of London, vol. 58, pp. 240–242, 1895.
63. S. M. Stigler, “Francis Galton’s account of the invention of correlation,” Statistical Science, vol. 4, no. 2, pp. 73–79, 1989.

Copyright © 2021 Yulong Feng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

