Journal of Engineering
Volume 2018, Article ID 1350692, 10 pages
Research Article

Deep Learning Approach for Automatic Classification of Ocular and Cardiac Artifacts in MEG Data

1Information Technology Department, Palestine Ahliya University College, Bethlehem, West Bank, State of Palestine
2Institute of Neurosciences and Medicine, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany

Correspondence should be addressed to Jürgen Dammers; j.dammers@fz-juelich.de

Received 9 November 2017; Revised 8 March 2018; Accepted 29 March 2018; Published 2 May 2018

Academic Editor: Yudong Zhang

Copyright © 2018 Ahmad Hasasneh et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


We propose an artifact classification scheme based on a combined deep and convolutional neural network (DCNN) model to automatically identify cardiac and ocular artifacts in neuromagnetic data, without the need for additional electrocardiogram (ECG) and electrooculogram (EOG) recordings. The model uses both the spatial and the temporal information of the independent components obtained by decomposing the magnetoencephalography (MEG) data. Task-related and non-task-related MEG recordings from 48 subjects served as the database for this study; after data augmentation, a total of 7122 samples were used. Artifact rejection was applied using the combined model, which achieved a sensitivity and specificity of 91.8% and 97.4%, respectively. The overall accuracy of the model was validated using a cross-validation test and revealed a median accuracy of 94.4%, indicating high reliability of DCNN-based artifact removal in task-related and non-task-related MEG experiments. The major advantages of the proposed method are as follows: (1) it provides a fully automated, user-independent workflow for artifact classification in MEG data; (2) once the model is trained, no auxiliary signal recordings are needed; (3) the flexibility in model design and training allows it to be applied to various modalities (MEG/EEG) and sensor types.
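The classification pipeline described above can be sketched in broad strokes: each independent component contributes a spatial topography and a temporal signal, features from both branches are combined, and a classifier assigns each component to a cardiac, ocular, or non-artifact class. The sketch below is a minimal illustration only, not the authors' DCNN; the feature choices (normalized topography, standard deviation, zero-crossing rate) and the random linear read-out standing in for the trained network head are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated ICA output: each of n_ics components has a spatial map
# over n_sensors channels and a time course of n_times samples.
n_ics, n_sensors, n_times = 20, 248, 1000
spatial_maps = rng.standard_normal((n_ics, n_sensors))
time_courses = rng.standard_normal((n_ics, n_times))

def spatial_features(sm):
    # Hypothetical spatial branch: unit-norm topography per component.
    return sm / np.linalg.norm(sm, axis=1, keepdims=True)

def temporal_features(tc):
    # Hypothetical temporal branch: simple summary statistics per
    # component (amplitude spread and zero-crossing rate).
    zcr = ((tc[:, 1:] * tc[:, :-1]) < 0).mean(axis=1)
    return np.stack([tc.std(axis=1), zcr], axis=1)

# Combine spatial and temporal information into one feature vector
# per independent component.
X = np.concatenate([spatial_features(spatial_maps),
                    temporal_features(time_courses)], axis=1)

# Toy linear read-out (random weights) standing in for the trained
# classifier head; 3 classes: cardiac, ocular, non-artifact.
W = rng.standard_normal((X.shape[1], 3))
logits = X @ W
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = probs.argmax(axis=1)  # predicted class per component
```

Components labeled as cardiac or ocular would then be dropped before reconstructing the cleaned MEG signal from the remaining components.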