Behavioural Neurology
Volume 2015, Article ID 514361, 16 pages
Review Article

Brain Signals of Face Processing as Revealed by Event-Related Potentials

1Departamento de Psicología Biológica y de la Salud, Facultad de Psicología, Universidad Autónoma de Madrid, 28049 Madrid, Spain
2División de Psicología, Colegio Universitario Cardenal Cisneros, 28006 Madrid, Spain
3Institute of Brain, Behaviour and Mental Health, Centre for Clinical and Cognitive Neuroscience, University of Manchester, Manchester M13 9PL, UK
4Centro de Neurociencias de Cuba, 11600 Havana, Cuba

Received 11 March 2015; Revised 10 May 2015; Accepted 11 May 2015

Academic Editor: João Quevedo

Copyright © 2015 Ela I. Olivares et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


We analyze the functional significance of different event-related potentials (ERPs) as electrophysiological indices of face perception and face recognition, according to cognitive and neurofunctional models of face processing. Initially, the processing of faces seems to be supported by early extrastriate occipital cortices and revealed by modulations of the occipital P1. This early response is thought to reflect the detection of certain primary structural aspects indicating, grosso modo, the presence of a face within the visual field. The posterior-temporal N170 is more sensitive to the detection of faces as complex structured stimuli and, therefore, to the presence of their distinctive organizational characteristics prior to within-category identification. In turn, the relatively late and probably more rostrally generated N250r and N400-like responses might indicate, respectively, processes of access to and retrieval of face-related information stored in long-term memory (LTM). New methods of analysis of electrophysiological and neuroanatomical data, namely, dynamic causal modeling and single-trial and time-frequency analyses, are highly recommended to advance knowledge of the brain mechanisms underlying face processing.

1. Objective

The present work is intended to offer a comprehensive review of the literature on evoked brain responses related to face perception and face recognition. Moreover, we stress the pertinence of using new approaches to better understand the functional meaning of such responses and the underlying neural mechanisms. First, we analyze the theoretical framework (inspired by cognitive psychology, neuropsychology, and, more recently, neuroimaging studies) that has been most frequently used to interpret ERP studies of face processing. We then dedicate a section to each of the clusters of ERP components that have been related to different stages of face processing and examine their possible relationship with the posited nodes of the face-processing network derived from neuroimaging studies. In the next section, we consider recent findings derived from new methodological approaches such as dynamic causal modeling and single-trial and time-frequency analyses. Finally, the conclusions are set out.

This review is limited to studies of the structural processing of faces that eventually leads to face recognition (i.e., recognizing a person by seeing her/his face). Other aspects of face processing (recognition of emotional expressions, gaze direction, lip reading, and so on) merit special attention and are beyond the scope of this article.

2. Theoretical Framework on Face Processing

2.1. Cognitive and Neurofunctional Models Derived from Functional Magnetic Resonance (fMRI) Studies

Conceptualizations of the cognitive operations underlying face recognition have been largely influenced by the seminal model of Bruce and Young [1]. This model assumes that face recognition is achieved, after an initial stage of visual analysis, by sequential access to visual-structural and verbal-semantic codes in long-term memory (LTM). The structural codes concerning the physical appearance of each known individual face, with information about the shape of facial features (lips, eyes, etc.) as well as their spatial configuration, are contained in Face Recognition Units (“FRUs”). These memory units are assumed to be specific to the face domain. The verbal-semantic codes comprise personal biographical information (occupation, context where an individual is usually seen, etc.) contained in Person Identity Nodes (PINs), which are in turn connected to verbal codes for the corresponding name. Subsequent interactive activation implementations of Bruce and Young’s model have provided working models that demonstrate, through simulation, empirical phenomena such as semantic priming, repetition priming, cross-modal cueing, and distinctiveness in face recognition [2, 3].

A basic assumption of Bruce and Young’s model is that FRUs, PINs, and name codes are activated in a strictly sequential mode. However, the complete model includes several parallel pathways originating after the initial visual analysis, each dedicated to the processing of other types of facial information (not considered further in this paper). Reports on brain-damaged subjects document dissociations, and in some cases double dissociations, of symptoms that are consistent with the distinctions among cognitive operations posited in Bruce and Young’s original model. More recent psychological and neuropsychological evidence has prompted modifications [4] or rebuttals [5] of the model, including a substantial revision [6], but the original version has guided ERP research on face recognition over recent decades. It is important to note that all models assume that many different types of memory codes (pictorial, face structural, emotional, social, semantic, episodic, verbal, etc.) are associated with each familiar face, a fact to be remembered when considering the experiments reviewed below.

The notable increase in fMRI studies of both face perception and face recognition in recent years has prompted the formulation of neurofunctional models intended to explain how the distinct functional aspects involved in face processing are supported by brain architecture, with components or nodes that are stimulus- and task-dependent and specialized in the processing of different face-related inputs [6, 9, 10]. Some neural models try to explain how neural connectivity among certain brain regions (not necessarily close to each other) is required for efficient processing [11, 12]. Haxby et al. [10] proposed that facial information processing is mediated by a hierarchically organized and distributed neural system composed of a “core” and an “extended” system. The core system includes three bilateral regions in occipitotemporal visual extrastriate cortex: in the inferior occipital gyri (the region termed the Occipital Face Area or “OFA”), in the lateral fusiform gyrus (the Fusiform Face Area or “FFA”), and in the superior temporal sulcus (the posterior superior temporal sulcus or “pSTS”). The OFA-FFA link is thought to participate in the processing of invariant structural face information (i.e., the identity of the face), whereas the OFA-pSTS link processes dynamic aspects of faces (such as expression). The extended system comprises, among others, limbic areas (for emotion processing) and auditory regions (for prelexical speech perception). These regions, acting in cooperation with the “core” regions, provide pertinent information from other (nonvisual) cognitive domains to enable the processing of face-derived information.
In fact, Gobbini and Haxby [9] point out that successful recognition of familiar individuals may also require the participation of the so-called “theory of mind” areas (such as the anterior paracingulate cortex, the posterior superior temporal sulcus (pSTS)/temporoparietal junction (TPJ), and the precuneus), which have been implicated in social and cognitive functions.

According to Ishai [11], neural connectivity among face-sensitive regions depends on the nature of the stimulus and on task demands. Thus, seeing faces with affective connotations increases the “effective connectivity” between the fusiform gyrus (FFG) and the amygdala, whereas seeing faces of celebrities or famous persons increases the coupling between the FFG and the orbitofrontal cortex. Additionally, task influence is revealed by an increase in “bottom-up” connectivity between extrastriate visual regions and the prefrontal cortex during face perception, whereas the mental generation of face images increases “top-down” connectivity (see also [13]). In any event, the relationship between the processing stages posited in cognitive models and the cortical machinery identified with neuroimaging is not yet clearly understood, and a direct mapping may not exist.

2.2. ERPs Are Essential in the Search for Effective Connectivity between Brain Areas for Face Processing

As outlined above, an extensive catalogue of cortical and subcortical brain areas apparently involved in face processing has been provided by an increasing flow of neuroimaging studies [11]. For some of these areas, unequivocal evidence that they are essential nodes of the brain network involved in face recognition comes from neuropsychological case studies and/or reversible inactivation experiments. For other structures, the evidence is not as clear-cut. In any case, the strength of this approach (mainly involving fMRI and, to a lesser degree, positron emission tomography, PET) lies in its relatively high spatial resolution and precise anatomical localization. However, a present limitation of fMRI (and even more so of PET) is its poor temporal resolution (on the order of several seconds), which in most studies depends on signals derived from slow hemodynamic responses to the local neural activity of interest. Therefore, although fMRI can target possible nodes of the cortical network of face processing, it offers very limited information on the temporal dynamics of the activation (or inhibition) of those nodes as they participate in face processing.

A complete model of face processing has to specify not only who the cortical actors are but also in what sequence their roles are played out and what types of interaction occur among them. Since faces are usually identified in less than half a second, we are dealing with processes that are carried out in a range between tens and several hundred milliseconds. Interestingly, this time range corresponds to the latencies of face-related unit activity recorded in consciously behaving monkeys [14, 15]. Furthermore, despite the excitement generated by studies of functional and “effective connectivity” based on fMRI data (see, e.g., [11]), the time-scale of interactions identified by these studies is necessarily slow given the nature of fMRI signals.

This contrast between an increasingly detailed anatomical picture of the nodes comprising the face-processing network on one hand and such meagre knowledge of its temporal dynamics on the other makes it timely to review the ERP research on both face perception and face recognition. ERPs are voltage variations that index the synchronized postsynaptic activity of large neural masses. Although these potentials, measured at the scalp, are difficult to relate to their neural sources, and as a recording technique they have relatively low spatial resolution, they can be recorded with very high temporal resolution. A large body of studies allows us to identify ERP components that are reliably associated with different aspects of face processing. Recently, promising new methods have been developed to infer the neural sources (i.e., the distribution of current sources inside the brain) that generate the scalp-recorded ERPs (see, e.g., [16–18]). These methods have to deal with the difficulties of the “inverse problem” associated with such an inference task, namely, the nonuniqueness of the solution, the limited number of sensors available (which makes the problem highly underdetermined), and the instability of the solutions due to observation noise. However, in conjunction with a now substantial database of intracranial recordings of face-related potentials [19], they provide useful constraints on models of face processing. The ERP technique also has the potential to be integrated with other neuroimaging methods, offering solutions to previously unanswerable questions.
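As a concrete illustration of this temporal precision, the basic ERP computation can be sketched with purely synthetic data (all amplitudes, latencies, and trial counts below are illustrative assumptions, not values from any study cited here): averaging many time-locked single-trial epochs suppresses non-phase-locked noise, and the sampling rate sets the millisecond resolution at which a component's peak latency can be read out.

```python
import numpy as np

# Illustrative sketch (synthetic data): an ERP is the average of many
# time-locked single-trial epochs; at a 1000 Hz sampling rate each
# sample corresponds to 1 ms, so peak latencies are read out with
# millisecond precision.
rng = np.random.default_rng(0)
times = np.arange(-100, 500)       # ms relative to stimulus onset
n_trials = 400

# A hypothetical negative deflection peaking at 170 ms, buried in
# single-trial noise of comparable magnitude.
signal = -5e-6 * np.exp(-((times - 170) ** 2) / (2 * 25 ** 2))
epochs = signal + 5e-6 * rng.standard_normal((n_trials, times.size))

erp = epochs.mean(axis=0)          # averaging cancels non-phase-locked noise
peak_ms = times[np.argmin(erp)]    # negative-peak latency, in ms
```

The single trials here are too noisy to show the deflection individually; only the average reveals it, at a temporal resolution far beyond what hemodynamic signals allow.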

2.3. On the Neuronal Origin of ERPs and the Specificity of Neural Mechanisms for Face Processing

Further progress in experimental designs aimed at exploring the brain dynamics of face processing will also inevitably be coupled with advances in knowledge of the electrophysiological neuronal mechanisms giving rise to scalp-recorded potentials evoked by face stimuli. A neurobiological reductionist approach, based on the biophysical nature of the EEG, has attempted to explain both the positive and negative voltage deflections characterizing ERP waveforms as scalp-level mirrors of the underlying excitatory and inhibitory neuronal activity occurring in specific cortical layers [20]. Thus, negative ERP components, for example, might reflect a massive depolarization of apical dendrites in cortical layer I resulting from thalamocortical excitation as well as inhibition in deep cortical layers. This neural activity would underlie psychological feed-forward processes such as the formulation of perceptual “expectancies” and the preparatory activation of preexisting cognitive structures. While this approach merits the interest of neuroscientists as an effort to accommodate the proliferation of ERP components within a unifying framework, we consider that the unmatched temporal resolution of ERP data offers a unique opportunity to study the complex dynamic nature of cognitive functions such as those involved in face processing.

On the other hand, in line with the most traditional view on ERPs, the effort to characterize specific mechanisms underlying face processing has attracted the attention of research groups in the search for brain responses that, being larger for faces than for other stimuli, can be considered domain-specific [21–24]. Whereas most researchers, on the basis of neuropsychological, developmental, and neuroimaging data, favor the “face specificity” hypothesis, the alternative view holds that the face superiority effect is a consequence of “expertise,” developed through the greater and earlier experience that we gain with faces relative to other visual objects (see [25] for a discussion of this issue). To address the “specificity” question, some authors have carried out experiments using “objects of expertise,” and it has been proposed that the kind of processing (i.e., holistic) that characterizes faces may be the key to understanding the functional and neural overlap between face and object processing [26–28]. However, progress in this direction has been scant. It might be sensible to conduct more studies to unveil the functional architecture of the brain system involved in face processing and to show how its components can be investigated using ERPs and other experimental methods [29]. Accordingly, in this work we focus on studies of ERPs related to the visual-structural aspects that reveal that a face is different from other visual objects, as well as on studies concerning the differentiation of individual faces. From a functional point of view, we then refer to experiments on brain responses regarding mainly the structural encoding necessary to activate the “FRUs” and, eventually, the verbal-semantic information associated with each known face (i.e., related to the “core” and “extended” neural systems, resp.).

3. Event-Related Potentials as Electrophysiological Markers of Operations Related to Face Processing

3.1. Categorization and Initial Structural Processing of Faces Are Revealed in the Early P1 and N170 Waves

Much of the ERP research on faces has searched for face-sensitive responses by comparing (in both healthy individuals and neurological patients) the brain activity elicited by face presentations with that elicited by other categories of visual stimuli (the same comparison that has evidenced the “core” areas in fMRI experiments).

One of the most robust brain responses described in the face-processing literature is the N170 component [30] (Figure 1). N170 is reliably larger for faces than for other categories of visual objects. One notable exception is pictures of front views of cars, which elicit an N170 comparable to that elicited by upright faces, probably due to a relatively invariant face-like feature configuration (see [31, 32]). A second notable exception is pictures of human bodies and body parts, which also elicit a conspicuous N170 effect, but one generated in more anterior brain regions, probably involving body-sensitive cortices [33, 34]. This negative wave has its maximal amplitude over posterior temporal regions (greater on the right side), and neural sources in lateral, basal temporal, and extrastriate occipital cortices have been proposed [30, 35–38]. Many authors additionally suggest the involvement of the FFA in the lateral FFG, a region identified by neuroimaging studies as especially sensitive to faces [39–41]. However, other authors emphasize a more lateral source in the inferior temporal gyrus or generators in the pSTS [37, 42, 43]. The fact that a face-selective N170 could be elicited in a patient with extensive lesions covering the areas occupied by the FFA in normal subjects suggests that N170 has multiple sources [36, 44].

Figure 1: ERPs elicited by external (solid line) versus internal (dotted line) features of familiar faces in a recent experiment [7]. Note that N170 was larger for internal features and enhanced at the right posterior temporal site T6/P8. At the same latency, a positive peak (VPP) was present at the central midline position Cz.
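The face-versus-object comparisons reviewed here are typically quantified as a difference wave, then summarized as the mean amplitude in a component time window. A minimal sketch with simulated data follows (the window bounds and amplitudes are illustrative assumptions, not values from the studies cited):

```python
import numpy as np

# Illustrative sketch (simulated data): the face-sensitivity of N170 is
# quantified as a face-minus-object difference wave, summarized as the
# mean amplitude in a component time window.
rng = np.random.default_rng(1)
times = np.arange(-100, 400)                    # ms, 1 kHz sampling

def simulated_erp(peak_uv, rng):
    """A smooth deflection peaking at 170 ms plus residual noise (µV)."""
    wave = peak_uv * np.exp(-((times - 170) ** 2) / (2 * 20 ** 2))
    return wave + 0.2 * rng.standard_normal(times.size)

erp_faces = simulated_erp(-6.0, rng)            # larger N170 for faces
erp_objects = simulated_erp(-2.0, rng)          # smaller deflection for objects

difference = erp_faces - erp_objects            # face-minus-object wave
window = (times >= 150) & (times <= 190)        # illustrative N170 window
n170_effect = difference[window].mean()         # summary statistic, µV
```

A more negative `n170_effect` indicates a larger face-selective response; statistical testing would then be carried out across participants on this summary value.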

In electrophysiological recordings with electrodes placed subdurally on the cortical surface of neurological (epileptic) patients, a negative potential, N200, was evoked by faces but not by other categories of stimuli [41, 45, 46]. This N200 was located over the left and right fusiform and inferior temporal gyri and can be considered a cortical correlate of the scalp N170. More recently, Barbeau et al. [19], using intracerebral electrodes (placed deeper than subdural ones), also identified a deep neural correlate (although with polarity reversal) of N170. Thus, in old/new tasks involving face and object recognition, they found a face-sensitive P160 recorded in several posterior regions such as the lateral occipitotemporal cortex, although mostly in the posterior fusiform and lingual gyri.

It was initially suggested that N170 could reflect the activation of a mechanism specialized in the initial stages of face structural encoding [30, 47–51]. However, several studies have reported that this wave is sensitive to experimental manipulations linked to subsequent stages of face processing, which concern facial contents in LTM. Thus, several authors have found that N170 is modulated by face familiarity or by face repetition within a sequence of visual stimuli ([37, 52–55]; see [56] for a similar result concerning the M170 response described in magnetoencephalography (MEG), but see also [38, 49, 57]), and by perceptual and contextual experience denoting task-dependent “top-down” processing ([58]; but see also [59]).

Experimental results supporting both alternative explanations still make the interpretation of the functional significance of N170 controversial. However, data provided by deep recordings indicate that ERP patterns differentiating familiar from unfamiliar face processing emerge in temporal mesial structures only in components beyond 200 msec [19], supporting the notion that N170 reflects a “face detector” mechanism, which triggers the encoding process in the occipitotemporal cortex [21, 47]. Recent evidence for the “face detector” hypothesis has also come from neural adaptation experiments, in which amplitude reductions of N170 were found when faces were preceded either by the same face or by different faces, relative to when they were preceded by other perceptual categories such as objects, voices, or words, or when a facial social signal like gaze direction was manipulated [60–64]. Such findings could also explain to some extent certain initial discrepancies among research groups that obtained a larger N170 when faces were intermixed with other stimulus categories relative to when faces were presented as a unique category in recognition experiments (see, e.g., results from the Bentin and Rossion groups and those from the Schweinberger and Sommer groups, resp.). Interestingly, amplitude attenuations and latency delays of N170 are usually associated with the removal of internal features [7, 65], but they have also been reported when facial contours are deleted [50]. This suggests that N170 can be associated with a relatively late operation within structural encoding, likely concerning the generation of face gestalts that contribute further to individual identification.

Around a decade before the initial description of N170 by Bentin et al. [30], other researchers had described an ERP with similar functional characteristics but of inverse (i.e., positive) polarity and maximal amplitude at central sites on the scalp. Bötzel and Grüsser [48] and Seeck and Grüsser [66] observed that the electrophysiological responses to faces differed from those elicited by other serially displayed visual stimuli (a chair, a tree, the human body, different kinds of vases, shoes, tools, and flowers). The principal difference consisted of a positive peak elicited by the face images between 150–190 msec (P150) and a negative peak between 220–300 msec (N300) poststimulus. These face-sensitive responses were more conspicuous at midline electrodes (the standard scalp positions Cz, Pz, T5, and T6 from the International 10-20 system were used in those studies), and no lateralization effect was observed. The “vertex positive peak” or “VPP” was the term then proposed by other authors [51, 67, 68] for this brain response, also observed when participants perceived faces presented as drawings or pictures in different sizes and even as illusory figures resembling faces [69]. Jeffreys [51] and George et al. [69] pointed out that VPP reverses its polarity over the temporal regions, and they agreed that its neural generators could be located in areas of the temporal cortex functionally equivalent to the superior temporal sulcus of nonhuman primates, in the inferior temporal cortex, and possibly also (as suggested by [48]) in some limbic structures and basal temporal regions.

The critical difference causing researchers to observe either N170 or VPP was the position of the reference electrode: whereas those who described the VPP used lateral sites near temporal regions (e.g., mastoid bones or interconnected ear lobules), the posterior temporal N170 was conspicuous when the tip of the nose was used as the reference site in the recording montages (Bötzel et al. [35] and Jeffreys [51] first drew attention to this important methodological issue; see [70] for a study on the importance of reference placement in ERP experiments on face processing). New research using both high-density recordings and appropriate source analysis is necessary to unravel the extent to which the two components have overlapping neural generators.
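The dependence of the observed component on the reference site follows directly from the arithmetic of re-referencing: scalp voltages are potential differences, so the same recording can be re-expressed against any new reference by subtraction. A minimal sketch with random data (the channel names and toy montage are purely illustrative):

```python
import numpy as np

# Illustrative sketch (random data, hypothetical montage): re-referencing
# is a per-sample subtraction, which is why N170 (nose reference) and VPP
# (mastoid/earlobe reference) descriptions of the same underlying
# activity diverged.
rng = np.random.default_rng(2)
channels = ["Cz", "T6", "nose", "M1", "M2"]          # toy montage
data = rng.standard_normal((len(channels), 1000))    # channels x samples

def rereference(data, channels, ref_names):
    """Subtract the mean of the chosen reference channels from all rows."""
    ref_idx = [channels.index(name) for name in ref_names]
    ref = data[ref_idx].mean(axis=0)
    return data - ref

nose_ref = rereference(data, channels, ["nose"])         # N170-style montage
mastoid_ref = rereference(data, channels, ["M1", "M2"])  # VPP-style montage

# The two montages differ only by a common time-varying offset, yet the
# apparent amplitude at any given site (e.g., Cz) changes.
```

Because the offset is common to all channels, topographies shift globally under re-referencing: a deflection that appears maximal at the vertex under one montage can appear maximal at posterior temporal sites under another.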

The neural mechanism represented by N170/VPP might be activated subsequent to the perception of certain features suggesting the global form of the perceived object (a face), which triggers the process of categorization. In fact, at latencies earlier than 170–200 msec, several studies have also found modulations of both the amplitude and latency of positive deflections related to facial structural processing. Such responses might reflect the encoding of primary sensory cues necessary for subsequent perceptual integration into more global representations of the facial data. Thus, Linkenkaer-Hansen et al. [71], in a combined ERP-MEG study, proposed that some degree of face-selective processing seems to occur around 100–130 msec, since they observed both amplitude and latency increases of the P1 (P120) to inverted faces (an experimental manipulation that disrupts holistic processing) but not to upright faces. In the same study, visual inspection of magnetic field contours and neural source modeling suggested that P1 originated in posterior extrastriate cortex, whereas N170 was generated more rostrally, possibly in the fusiform and inferior temporal gyri. Similar neural sources for P1 and N170 have been reported very recently in another MEG study of face inversion [72]. Moreover, Halit et al. [73] found that P1 (in the 48–120 msec time window) is larger for atypical faces created artificially by varying the distance among features (Experiments 1 and 2), which denoted, for these authors, the influence of attentional or “top-down” mechanisms in the analysis of a facial prototype. In the same study, N170 was larger for atypical faces only in Experiment 2, in which interindividual face typicality processing was evaluated. This was interpreted as an indicator that N170 reflects the perceptual processing of particular faces in relation to a general facial prototype.

Also in relation to the functional role of these early ERPs, in an interesting experiment the spatial frequency of face images was varied in order to test the effect of both coarse and fine processing on ERPs [74]. In this study, P1 amplitude was augmented for low-spatial-frequency faces, while N170 amplitude was augmented for high-spatial-frequency faces. Additionally, P1 amplitude was unaffected for physically equiluminant faces compared with the response evoked by houses. The authors considered these results as evidence that P1 reflects an early face-sensitive visual mechanism, a holistic (e.g., gestalt-based) process that is triggered whenever a stimulus contains sufficient information to generate the concept of a face. Interestingly, Mitsudo et al. [75] found a larger P1 for upright than for inverted faces when stimuli were presented at a subthreshold duration, which was interpreted as reflecting the activity of a local contrast detector of face parts that can be useful for discriminating faces from objects.

In another study [76], inverted, but not upright or contrast-reversed, faces evoked a delay in P1. Furthermore, in a series of MEG studies, Liu et al. [77] reported that both M100 and M170 (the MEG analogues of P1 and N170, resp.) correlated positively with successful face categorization, whereas only M170 correlated with successful face recognition (see also [78, 79]). Also, M100 was larger for face parts, whereas M170 tended to be more sensitive to facial configuration.

Taking into account the results of all these studies, the brain responses P1 and N170 can be considered relatively early electrophysiological markers of neural mechanisms leading to the formation and activation of face representations. Data on the modulations of both components cited in the preceding paragraphs suggest that the earlier P1 might index subroutines responsible for the grosso modo detection of any candidate stimulus to be categorized as a face within our visual field. N170, in turn, might reflect a subsequent operation detecting the features that contribute to defining a face; this operation would be facilitated by the presence of a canonical configuration in potentially facial stimuli and would eventually lead to adequate identification of exemplars (individuals) at the subordinate level.

3.2. Access to Face Representations Is Associated with Activity Beyond 170 msec

Repetition paradigms have been frequently used to ascertain access to LTM representations [80]. Repeated presentation of the same faces (within relatively short time intervals) induces, compared to nonrepeated stimuli, ERP modulations between 180 and 290 msec poststimulus. Thus, the N250r or “ERE” (“early repetition effect”) has been described as a negative ERP peaking at around 250 msec at posterior temporal sites (larger on the right side), with polarity reversal at anterior sites at the same latency [32, 81]. The N250r effect is larger for familiar than for unfamiliar faces. It is also larger for nonmasked versus masked stimuli in an explicit perceptual matching task and relative to face semantic matching tasks. Thus, it does not depend solely on automatic preactivation by face repetition [80], although it can be elicited even in a facial expression detection task in which face identities are implicitly activated [82].

The N250r is found even with the presentation of different images of the same person, suggesting that it is related to the activation of relatively abstract representations of face structure that are invariant over transformations of low-level visual cues [37, 38, 83]. Although N250r does show a degree of image specificity (a larger repetition effect for repetitions of the same image), one study found equivalent priming by the same repeated face image and by the presentation of stretched and unstretched versions of the same face [84], confirming that N250r does not simply reflect low-level visual (pictorial) coding but is related to person recognition.

On the other hand, the N250r has a larger amplitude for upright famous faces than for nonhuman primate faces and is not significant for inverted faces, which links it to face-recognition mechanisms [32]. Moreover, this effect was not obtained with pictures of automobiles in the same experiment, nor with pictures of hands or houses in a more recent study [33]. In this latter study, the N250r was elicited by the second presentation of faces despite the high perceptual load at initial presentation (see also [85] for a similar result), supporting the notion that a putative face-selective attention module supports encoding under high load and that similar mechanisms are unavailable for other natural or artificial objects. Intriguingly, Henson et al. [86] reported repetition effects for certain everyday nameable objects in a combined ERP-fMRI experiment. However, contrary to Henson et al., in the studies of Schweinberger et al. faces were presented as task-irrelevant distractors, a crucial difference that might explain such apparently contradictory findings.

In the experiment of Henson et al. [86], a repetition-related positive shift over frontal sites and a transient negative deflection over occipitotemporal sites were produced from 200 to 300 msec only at short repetition lags, supporting the notion that N250r is short-lived [38, 86]. Another repetition effect, found between 400 and 600 msec by Henson et al. [86], was less affected by increasing lags and had a central maximum, suggesting that the two effects reflect the activity of at least partially distinct neural generators. A similar distinction between short- and long-latency repetition effects for faces was found by Itier and Taylor [76]. All this supports the proposal that N250r indicates the transitory activation of long-term memory representations [63, 64, 82]. Accordingly, Scott et al. [87] found that modulations occurring around 250 msec could be associated with subordinate-level versus basic-level training, corroborating that in face recognition tasks this ERP is related to the processing of representations of individuals.
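Lag-dependent repetition analyses of this kind can be sketched as a window-mean comparison on simulated condition averages (the time window and amplitudes below are illustrative assumptions only, not values from Henson et al. or any other cited study):

```python
import numpy as np

# Illustrative sketch (simulated condition averages): the repetition
# effect is the repeated-minus-first mean amplitude in a 200-300 ms
# window, computed separately for short- and long-lag repetitions.
rng = np.random.default_rng(3)
times = np.arange(-100, 600)                       # ms at 1 kHz

def erp(n250_uv, rng):
    """Deflection peaking at 250 ms plus residual noise (µV)."""
    wave = n250_uv * np.exp(-((times - 250) ** 2) / (2 * 30 ** 2))
    return wave + 0.2 * rng.standard_normal(times.size)

first = erp(-2.0, rng)                 # first presentation
repeated_short = erp(-5.0, rng)        # effect present at short lags
repeated_long = erp(-2.1, rng)         # effect has faded at long lags

window = (times >= 200) & (times <= 300)

def effect(repeated):
    """Repeated-minus-first mean amplitude in the N250r window (µV)."""
    return (repeated - first)[window].mean()

short_lag_effect = effect(repeated_short)   # clearly negative
long_lag_effect = effect(repeated_long)     # near zero
```

A short-lived effect shows up as a clearly negative short-lag difference that shrinks toward zero as lag increases, which is the pattern the window statistic makes explicit.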

Source modeling based on high-density recordings suggests that the possible neural generators of N250r are located in basal/inferior temporal regions (predominantly on the right side), specifically in the FFG, more rostrally than the estimated generators of N170 [38, 88]. In fact, its possible neuromagnetic correlate, the M250r, also especially sensitive to upright faces versus control stimuli, is predominantly associated with activity in the right FFG [38]. Accordingly, Henson et al. [86] reported in their fMRI data a repetition-related decrease in the hemodynamic response (the hemodynamic correlate of stimulus repetition) in several inferior occipitotemporal regions, the magnitude of which typically decreased as lag increased.

3.3. Modulations of Negativities around 400 msec Are Related to the Retrieval of Content from Face Representations and of Their Associated Verbal-Semantic Information

The search for ERP markers of face recognition has also motivated researchers to adopt the rationale underlying experimental tasks originally developed in language studies, which were designed to probe the principles of organization of LTM. The N400 component was originally described by Kutas and Hillyard [89], who compared ERPs elicited by the final word of a sentence when it was congruent with the preceding context (“I drink coffee with sugar and milk”) and when it was incongruent (“I drink coffee with sugar and socks”). The N400 was larger for the incongruent ending (which violated contextually generated expectancies), and this component has since been used as an index of the degree of contextual preactivation during memory retrieval or of the amount of postretrieval integration with context (see [90] for a review).

By creating different types of contextual expectancy, the retrieval of distinct kinds of memory codes related to faces can be probed with N400-like components [7, 8, 49, 57, 81, 83, 91–102]. Importantly, such responses have different latencies, durations, and topographic distributions depending on the degree of involvement of the verbal information in the task [8, 100].

The most obvious application of this approach has been to create a context with one face and then to present either the same face or a semantically related or unrelated face [91]. In general, the long-latency “incongruence negativities” observed in the abovementioned studies have been elicited by facial stimuli with strongly linked verbal-semantic codes and, in fact, such negativities have elsewhere been associated with domain-independent postperceptual processes [38]. Searching for a more “domain-selective” approach, several studies have analyzed face structural processing by presenting incomplete familiar faces (i.e., with eyes/eyebrows removed) as primes (i.e., contextual stimuli) and asking participants to detect a feature mismatch in subsequently displayed complete faces. “Incongruent” face-feature completions (eyes taken from another face), compared with congruent completions (the correct features), have elicited a negative component around 380 msec that resembles the classical N400 effect [95, 97, 102]. This component is thought to reflect the lack of associative priming among facial features within the structural representation of the face in LTM. The response has been elicited even by familiar faces whose names were not known [102], by faces for which participants possessed only visual-structural memories because they had been learned in the laboratory under a controlled procedure [97, 98, 103], and independently of occupation retrieval [104]. Moreover, a “pure” visual facial N360 has been elicited by structural processing of faces for which verbal-semantic information was not easily available [8] (Figure 2). This N360 was maximal over the right posterior temporal region of the scalp (see the compatible N350 result of Jemel et al. [95], where neural sources were estimated using current dipole localization).
Accordingly, the N360 might share some neural generators with N170, but it probably represents a later stage in the processing of a known face, tentatively associated with the retrieval from LTM of the visual information stored in the “FRUs” [1].

Figure 2: Long-latency ERPs related to face recognition. Top: examples of facial N400-like ERPs (waveforms resulting from subtracting matching trials from mismatching trials) elicited in different tasks in which the degree of verbal and structural visual information involved was varied: an N360 (black) elicited by face-feature mismatching in faces learned without associated verbal information; an N380 (red) elicited by face-feature mismatching in faces learned with occupations and names; a cross-domain N440 (green) elicited by face-occupation mismatching; and an N370 (blue) elicited by occupation-name mismatching. Bottom: topographic voltage maps showing the scalp distribution of these ERPs in each task when the amplitude value was maximal.

In summary, the results of these experiments with facial stimuli suggest that N400-like components can be generated in experimental frameworks based either on contextual preactivation through repetition (e.g., identity-matching tasks, serial presentation of repeated versus nonrepeated faces) or on association (e.g., face-feature, face-occupation, or face-pair matching tasks) involving face memories. However, we want to emphasize that before such brain responses are designated electrophysiological markers of the visual face domain, the activity elicited by faces should first be studied independently of the verbal-semantic information commonly associated with them. This verbal-semantic information is, nevertheless, relevant for the eventual identification of the individuals we know. New experimental studies using high-density ERP recordings to improve the spatial resolution of electrophysiological data will make it possible to delineate the possible neural generators of “facial” N400-like waves. Such studies are necessary to investigate whether face-sensitive neural mechanisms supporting structural processing can be triggered relatively independently of those underlying the verbal-semantic processing associated with faces (Table 1).

Table 1: Summary of the main characteristics of different event-related potentials (ERPs) related to face processing.

4. New Methods for Further Research

4.1. Dynamic Causal Modeling to Disentangle the Dynamics of the Face Processing Network

Most studies to date, including several described here, propose plausible neurofunctional models of different aspects of face processing based solely on estimates of “where” and “when” the underlying neural events occur in the brain. However, the ultimate goal of these models is to describe “how” brain activity is coordinated among different regions during the execution of a given task. For this purpose, several pieces of information critical for characterizing a network are still missing, including the directionality of information transfer, or “effective connectivity,” between connected regions [105, 106]. In this sense, current developments in both measurement and analysis techniques are providing tools that allow a move from “guessing” to actually “inferring” neurofunctional network models directly from the data.

In general, “effective connectivity” relies on metrics of interaction that are more or less related to the notion of temporal precedence (due to propagation and synaptic delays) of the activity in the driving structure with respect to that in the driven ones. Because of their high temporal resolution, EEG and MEG are particularly amenable to this type of analysis. In contrast, fMRI is sensitive to changes in local perfusion and oxygen uptake by neurons, characterized by the “hemodynamic response function” that delays hemodynamic responses relative to their hidden neuronal causes. fMRI therefore provides an indirect measure of neuronal activity, and the actual nature of this relationship is still a matter of debate [107]. In addition, the “hemodynamic response function” shows regional variations that make it impossible to estimate neuronal delays directly from fMRI measurements. This physiological limitation compromises not only the temporal resolution of the technique but also its capacity to estimate “effective connectivity” directly from the data [108]. Therefore, despite the exciting knowledge contributed by fMRI and other techniques, ERPs have an important role to play in understanding face processing, although refinement of the analysis techniques is mandatory.
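The temporal blurring introduced by the hemodynamic response function can be illustrated with a minimal simulation. This is only a sketch in Python with NumPy/SciPy; the double-gamma shape parameters are conventional SPM-like values chosen for illustration, not taken from the studies reviewed here:

```python
import numpy as np
from scipy.stats import gamma

# Canonical double-gamma HRF: a response peaking ~5 s after the event
# minus a smaller late undershoot (illustrative SPM-like parameters).
dt = 0.1                         # sampling step in seconds
t = np.arange(0, 30, dt)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.sum()

# A brief neural event at t = 1 s, modeled as a delta function.
neural = np.zeros(t.size)
neural[int(round(1.0 / dt))] = 1.0

# Predicted BOLD signal: neural activity convolved with the HRF.
bold = np.convolve(neural, hrf)[:t.size]

# The BOLD peak lags the neural event by several seconds, which is why
# millisecond-scale neuronal delays cannot be read off fMRI time courses.
lag = t[bold.argmax()] - 1.0
print(round(float(lag), 1))
```

The point of the sketch is that the convolution smears a millisecond-scale event over many seconds, so the temporal precedence needed for effective connectivity cannot be recovered from the BOLD time course alone.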

One direction for this development is the use of dynamic causal modeling (DCM) [109, 110]. DCM relies on a biophysical model that connects the neuronal states to measured responses. It regards an experiment as a designed perturbation of neuronal dynamics in which stimuli cause changes in neuronal activity that are propagated throughout a system of coupled anatomical nodes or sources, which in turn cause changes in the observed EEG/MEG signals. Experimental factors can also change the parameters or causal architecture of the network producing the observations. Inverting these models makes it possible to infer the “effective connectivity” among unobserved neuronal states and how it depends on stimulus attributes or experimental context. Additionally, Bayesian inference allows a set of models with different directed connections to be compared and the optimal model given the data to be identified.
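Under flat model priors, the Bayesian comparison of candidate network architectures reduces to converting the (log-)model evidences returned by model inversion into posterior model probabilities. A minimal sketch (the model names and evidence values below are invented purely for illustration):

```python
import numpy as np

# Hypothetical log-model evidences for three candidate architectures,
# e.g., as returned by inverting three DCMs on the same data
# (larger = better trade-off between accuracy and complexity).
log_evidence = {"forward": -3210.4, "backward": -3225.9, "recurrent": -3213.1}

models = list(log_evidence)
F = np.array([log_evidence[m] for m in models])

# Posterior model probabilities under flat model priors:
# p(m | data) ∝ exp(F_m); subtract the max for numerical stability.
p = np.exp(F - F.max())
p /= p.sum()
posterior = dict(zip(models, p))

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))
```

By the usual convention, a log-evidence difference greater than about 3 between two models is taken as strong evidence in favor of the better one.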

As a relevant example for the present work, David et al. [111] carried out a DCM analysis of ERPs recorded during the perception of faces and houses. Category selectivity, as indexed by the face-selective N170, could be explained by category-specific differences in forward connections from sensory to higher areas in the ventral stream. Specifically, there was an increase of forward connectivity in the medial ventral pathway, from retrosplenial cortex to the parahippocampal place area, when processing houses versus faces. Conversely, in agreement with Haxby et al.’s [10] model, there was an increase in coupling from the inferior occipital gyrus (IOG) to the FFA and from the IOG to the STS during face perception. The face selectivity of STS responses was smaller than in the FFA and was due to a gain in sensitivity to inputs from the IOG. The connections from V1 to IOG showed no selectivity, suggesting that category selectivity emerges downstream from the IOG, at a fairly high level, somewhat contrary to expectations [112]. In a related study, Fairhall and Ishai [113] used DCM on fMRI data while subjects processed emotional and famous faces. In accordance with David et al. [111], they predicted a ventral rather than dorsal connection between the “core” (visual areas) and the “extended” (limbic and prefrontal regions) systems during face viewing. They also found that the core system is hierarchically organized in a predominantly feed-forward fashion, with the IOG exerting influence on the FFG and on the STS. Furthermore, the FFG exerted a strong causal influence on the orbitofrontal cortex (OFC) when processing famous faces and on the amygdala and inferior frontal gyrus when processing emotional faces.

In a recent and pioneering study, Nguyen et al. [114] used DCM as a data fusion approach, integrating concurrently acquired EEG and fMRI data to examine the association between the N170 and activity within the face-selective fMRI network during the processing of both upright and inverted faces. Data features derived from the EEG were used as contextual modulators of fMRI-derived estimates of effective connectivity between key regions of the face perception network. Their main result was that the OFA acts as a central “gatekeeper,” directing visual information to the STS, the FFA, and a medial region of the fusiform gyrus (mFG). The connection from the OFA to the STS was strengthened on trials in which N170 amplitudes to upright faces were large. In contrast, the connection from the OFA to the mFG, an area known to be involved in object processing, was enhanced for inverted faces, particularly on trials in which N170 amplitudes were small. According to these authors, their approach can be considered asymmetric within the model-driven data fusion framework; that is, the forward model (from sources to observable data) is confined to only one modality (neurovascular coupling from neural states to the BOLD signal), whereas the second modality (EEG) is used to constrain that model. A symmetric approach, in turn, would rely on a joint forward model that generates both EEG and fMRI data from the same neuronal states, allowing an integrative model inversion that takes advantage of the exquisite temporal resolution of the EEG and the greater spatial resolution of the BOLD signal [115].

Data fusion approaches are a direct consequence of recent hardware and software developments, which have made it feasible to acquire EEG and fMRI data simultaneously. Nonetheless, this approach should be applied cautiously since the degree of overlap between underlying neuronal activity generating observations in each modality is variable and, for the most part, unknown [116]. Specifically, some studies related to face processing have shown that different ERP deflections correlate best with the BOLD (blood oxygen level dependent) response; for example, P3a is related to BOLD signal changes in the right fusiform and left superior temporal gyrus for a facial emotion recognition task [117] and N170s for face and house visual stimuli have been found to correlate well with hemodynamic responses in various brain areas in the temporal-occipital lobes [118]. These findings imply that certain EEG components may correlate better with the BOLD signal than others. Moreover, the relationship between these components and the BOLD response may vary according to the experimental paradigm used. Thus, although EEG-fMRI fusion has great potential to pursue new strategies in cognitive neuroimaging, including those with respect to face processing, further studies about the actual nature of the coupling between the underlying neuronal activity and these two types of measurements are necessary. This will allow the formulation of more realistic forward generative models as well as the development of appropriate multimodal inference methods.

4.2. Single-Trial Perspective in the Study of Face-Sensitive ERPs

Another direction for future development concerns single-trial analyses of evoked activity. In recent years, an increasing number of studies have aimed to explore EEG processes whose dynamic characteristics correlate with behavioral changes but cannot be seen in the averaged ERP [119, 120]. Comparing different procedures for single-trial data filtering (viz., raw sensor amplitudes, regression-based estimation, bandpass filtering, and independent component analysis or ICA), De Vos et al. [121] found that ICA yielded the best single-trial estimation of the N170. According to these findings, the face-sensitive N170 does not exclusively represent activity from a face-tuned neuronal population but rather the activity of a network involved in general visual processing. Moreover, the single-trial approach allowed Rousselet et al. [122] to show that the “N170 face effect” is essentially characterized by an event-related modulation of amplitude from trial to trial rather than by an increase in phase coherence in the N170 time window.
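The logic of ICA-based single-trial estimation can be sketched on simulated data. Everything below is hypothetical (the mixing matrix, trial amplitudes, and noise level are invented), and scikit-learn’s FastICA merely stands in for whichever ICA variant a given study uses:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Simulate 100 trials, 2 latent sources, 8 channels, 150 time points.
n_trials, n_times, n_chan = 100, 150, 8
t = np.linspace(0, 0.3, n_times)                     # 0-300 ms epoch

# Source 1: an N170-like deflection peaking ~170 ms, amplitude varying by trial.
n170 = -np.exp(-((t - 0.17) ** 2) / (2 * 0.02 ** 2))
amps = 1.0 + 0.3 * rng.standard_normal(n_trials)     # true single-trial amplitudes
src1 = amps[:, None] * n170[None, :]
# Source 2: ongoing 10 Hz oscillation with random phase (not face-related).
src2 = np.sin(2 * np.pi * 10 * t[None, :] + rng.uniform(0, 2 * np.pi, (n_trials, 1)))

# Mix the sources into channels and add sensor noise.
A = rng.standard_normal((2, n_chan))
X = np.stack([src1, src2], -1).reshape(-1, 2) @ A
X += 0.5 * rng.standard_normal(X.shape)

# Unmix with ICA applied to the concatenated single trials.
ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X).reshape(n_trials, n_times, 2)

# Pick the component whose trial-averaged time course best matches the
# N170 template (the sign and scale of ICA components are arbitrary).
scores = [abs(np.corrcoef(S[..., k].mean(0), n170)[0, 1]) for k in range(2)]
k = int(np.argmax(scores))
trial_amps = S[..., k] @ n170 / (n170 @ n170)   # project each trial on template
r = abs(np.corrcoef(trial_amps, amps)[0, 1])    # recovery of true amplitudes
print(round(float(r), 2))
```

With this kind of structure, the ICA-derived single-trial amplitudes correlate strongly with the simulated ground truth, which is the sense in which ICA gave the best single-trial estimates in the comparison cited above.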

The single-trial approach combined with parametrically manipulated stimuli aims to establish statistical links between image properties and brain activity [123–125]. Implementations of such analyses rest on the premise that the information content of brain states can be revealed only with reverse correlation techniques and statistical modeling approaches, by determining which global and local image properties modulate single-trial ERPs [125].
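A minimal version of such a statistical link is a trial-wise regression of ERP amplitude on an image property, with a permutation test for the slope. All data below are simulated, and the “phase coherence” predictor is only a stand-in for the image manipulations used in the cited studies:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-trial data: one global image property per trial
# (e.g., phase coherence of the face image) and the single-trial amplitude.
n_trials = 300
coherence = rng.uniform(0, 1, n_trials)
# Simulated ground truth: more coherent faces -> more negative amplitude.
amplitude = -2.0 - 4.0 * coherence + rng.standard_normal(n_trials)

# Trial-wise linear model: amplitude ~ b0 + b1 * coherence.
X = np.column_stack([np.ones(n_trials), coherence])
beta, *_ = np.linalg.lstsq(X, amplitude, rcond=None)

# Permutation test for the slope: shuffle the image property across
# trials to build a null distribution of regression slopes.
null = np.array([
    np.linalg.lstsq(
        np.column_stack([np.ones(n_trials), rng.permutation(coherence)]),
        amplitude, rcond=None)[0][1]
    for _ in range(1000)
])
p = (np.abs(null) >= abs(beta[1])).mean()
print(beta[1] < 0, p < 0.05)
```

Reverse correlation studies apply essentially this model at every electrode and time point, which is why single-trial (rather than averaged) ERPs are required.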

The single-trial perspective has also shed light on the nature of the neurocognitive deficit in prosopagnosic individuals. In a recent study, Nemeth et al. [126] found that the altered (reduced) face sensitivity of the N170 in congenital prosopagnosia was due to a larger-than-normal N170 to noise stimuli rather than to a smaller N170 elicited by faces. This effect was explained, on a single-trial basis, by larger oscillatory power and phase-locking in the theta frequency band around 130–190 ms, as well as by a lower intertrial jitter of the response latency for the noise stimuli.

4.3. Face Processing and Brain Oscillations

The development of computing and methodological tools for signal processing in electroencephalographic (EEG) research laboratories has notably increased interest in the study of brain oscillations over the last decades and has advanced the interpretation of their functional meaning [119]. A consequence of this development is that evoked responses are no longer considered mere increases in signal amplitude with fixed time course and polarity, overlaid on the “spontaneous EEG” and detected via trial averaging. Instead, they are thought to reflect, at least partially, a reset of ongoing oscillations and are mainly studied via time-frequency analyses [120, 127–132] (Figure 3).

Figure 3: Time-frequency plots derived from wavelet transforms of multiple EEG trials in one subject. Induced activity in the form of event-related spectral perturbation (ERSP, (a)) and intertrial phase coherence (ITC), a measure of phase consistency across trials (b), both represented for recording sites Cz and Pz of the International 10/20 System and elicited in a face-feature matching task (see, e.g., [8]). Note how induced activity (ERSP) is larger (red colour) in the middle of the epoch for low frequencies and around 200 msec for high ones. In turn, ITC is larger for very low frequencies throughout the epoch and for somewhat higher oscillations at the beginning of the epoch.
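The intertrial phase coherence of Figure 3 can be computed by convolving each trial with a complex Morlet wavelet and measuring the length of the mean unit phase vector across trials. A self-contained sketch on simulated data (the sampling rate, trial count, and 10 Hz phase-locked burst are illustrative assumptions, not values from the cited study):

```python
import numpy as np

rng = np.random.default_rng(2)

fs = 250.0                                   # sampling rate (Hz)
t = np.arange(-0.5, 0.8, 1 / fs)             # epoch from -500 to +800 ms
n_trials, f0 = 60, 10.0                      # 60 trials, 10 Hz target band

# Simulate a 10 Hz burst phase-locked to stimulus onset (0-300 ms) embedded
# in noise, so phase is consistent across trials only within the burst.
trials = 0.5 * rng.standard_normal((n_trials, t.size))
burst = (t > 0) & (t < 0.3)
trials[:, burst] += np.sin(2 * np.pi * f0 * t[burst])

# Complex Morlet wavelet at f0 with ~6 cycles.
sigma = 6.0 / (2 * np.pi * f0)               # Gaussian width in seconds
tw = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
wavelet = np.exp(2j * np.pi * f0 * tw) * np.exp(-tw**2 / (2 * sigma**2))

# Convolve each trial with the wavelet and keep the instantaneous phase.
analytic = np.array([np.convolve(tr, wavelet, mode="same") for tr in trials])
phase = np.angle(analytic)

# ITC: length of the mean unit phase vector across trials;
# 0 = random phase across trials, 1 = perfect phase locking.
itc = np.abs(np.exp(1j * phase).mean(axis=0))

print(round(float(itc[(t > 0.05) & (t < 0.25)].mean()), 2),
      round(float(itc[t < -0.3].mean()), 2))
```

ITC is high only where the oscillation is phase-locked to the stimulus, whereas ERSP-type measures track power changes regardless of phase, which is why the two panels of Figure 3 can dissociate.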

Although the choice between the (traditional) evoked model and the oscillatory model of event-related EEG activity remains controversial, integrative approaches that simultaneously analyze both types of scalp-recorded data are necessary to elucidate the brain mechanisms underlying the cognitive processes of interest (see, e.g., [130] and their proposal of the “event-related phase reorganization” model).

In relation to face processing, the face-sensitive scalp-recorded N170 has been related to amplitude modulations of low-frequency (5–15 Hz) oscillations [122]. In fact, Tang et al. [133] differentiated this low-frequency (4–10 Hz in their study) oscillatory activity from a lower-frequency (0–5 Hz) activity accounting for the vertex positive peak (VPP, usually considered the positive counterpart of N170), suggesting that the two ERPs have different sources.

On the other hand, Anaki et al. [134] studied the N170 conjointly with induced gamma-band activity (>20 Hz) while face orientation and familiarity were manipulated. They found that N170 was modulated by inversion but not by familiarity, whereas low (25–50 Hz) and high (50–70 Hz) gamma bands were modulated by orientation and familiarity, respectively. In a similar vein, Zion-Golumbic and Bentin [65] dissociated the functional roles of N170 and induced gamma oscillations by showing that, unlike the N170, gamma amplitude was sensitive to the configuration of internal facial features but insensitive to their presence within or outside a face contour. A relatively late gamma sensitivity and an increased P2 related to the own-race effect were both reported by Chen et al. [135], who in turn did not find any race modulation of the “structural” N170 component. These authors suggested that such modulations could be associated with more elaborate processing based on configural computation due to greater experience with own-race faces. Furthermore, using subdural recordings from the ventral occipitotemporal cortices, Engell and McCarthy [136] found that both N200 and induced gamma activity were face-specific; however, only N200 was evoked by impoverished face stimuli that did not induce gamma activity. This suggests that the face-induced gamma response reflects elaborative processing of faces, whereas the face-N200 may reflect a synchronizing event within the face network. All these results suggest that, even at the same latencies, ERPs and neural oscillations may reflect distinct neural subroutines and might arise from the activity of separate neural assemblies acting conjointly to make face recognition efficient.

5. Conclusions

The study of ERPs concerning face processing has allowed the identification of possible markers for distinct cognitive operations involved in face perception and face recognition. Both the latencies and the scalp distribution of these brain responses, as well as the experimental variables modulating their amplitudes, allow us to characterize these noninvasively recorded signals as electrophysiological correlates of distinct modules commonly described in theoretical models of face processing. Thus, the initial processing of faces as complex visual stimuli can be indexed by the early occipital P1, which might be linked to the detection of certain primary structural aspects (for instance, a contour) suggesting the presence of stimuli resembling faces. N170 seems more clearly sensitive to the detection of faces as complex organized visual stimuli and to the presence of their defining features, prior to within-category identification, whereas the later N250r and N400 could index processes of access to, and retrieval of, information corresponding to long-term face representations, respectively. All these responses may originate from the activity of neural populations situated mainly in cortical regions encompassing the so-called “ventral visual stream,” which is assumed to be hierarchically organized from the extrastriate early visual cortices to the temporal regions, in accordance with the latencies of such responses.

The high temporal resolution of ERPs offers an ideal framework for incorporating new methodological approaches, such as time-frequency and single-trial analyses, to determine, for example, how certain image properties are linked to brain activity. DCM can also benefit from this resolution to infer the flow of information through the face network and the effective connectivity among brain regions, depending on the nature of the faces and the task at hand.

The use of methodological tools and perspectives such as those mentioned above, together with the enormous and increasing volume of experimental data, can lead to a major breakthrough in the study of the neural dynamics of cognitive operations such as those involved in face processing.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.


Acknowledgments

The authors thank Stefan Schweinberger for his valuable comments on an earlier version of the present paper. This work was supported by “Ministerio de Economía y Competitividad” (Spain, I + D + I National Programme, PSI2013-46007-P).


  1. V. Bruce and A. W. Young, “Understanding face recognition,” The British Journal of Psychology, vol. 77, no. 3, pp. 305–327, 1986. View at Google Scholar · View at Scopus
  2. A. M. Burton, V. Bruce, and R. A. Johnston, “Understanding face recognition with an interactive activation model,” The British journal of psychology, vol. 81, no. 3, 1990. View at Google Scholar · View at Scopus
  3. A. M. Burton, V. Bruce, and P. J. B. Hancock, “From pixels to people: a model of familiar face recognition,” Cognitive Science, vol. 23, no. 1, pp. 1–31, 1999. View at Publisher · View at Google Scholar · View at Scopus
  4. M. B. Lewis and H. D. Ellis, “How we detect a face: a survey of psychological evidence,” International Journal of Imaging Systems and Technology, vol. 13, no. 1, pp. 3–7, 2003. View at Publisher · View at Google Scholar · View at Scopus
  5. M. J. Farah, R. C. O'Reilly, and S. P. Vecera, “Dissociated overt and covert recognition as an emergent property of a lesioned neural network,” Psychological Review, vol. 100, no. 4, pp. 571–588, 1993. View at Publisher · View at Google Scholar · View at Scopus
  6. A. J. Calder and A. W. Young, “Understanding the recognition of facial identity and facial expression,” Nature Reviews Neuroscience, vol. 6, no. 8, pp. 641–651, 2005. View at Publisher · View at Google Scholar · View at Scopus
  7. E. I. Olivares and J. Iglesias, “Brain potential correlates of the ‘internal features advantage’ in face recognition,” Biological Psychology, vol. 83, no. 2, pp. 133–142, 2010. View at Publisher · View at Google Scholar · View at Scopus
  8. E. I. Olivares, J. Iglesias, and S. Rodríguez-Holguín, “Long-latency ERPs and recognition of facial identity,” Journal of Cognitive Neuroscience, vol. 15, no. 1, pp. 136–151, 2003. View at Publisher · View at Google Scholar · View at Scopus
  9. M. I. Gobbini and J. V. Haxby, “Neural systems for recognition of familiar faces,” Neuropsychologia, vol. 45, no. 1, pp. 32–41, 2007. View at Publisher · View at Google Scholar · View at Scopus
  10. J. V. Haxby, E. A. Hoffman, and M. I. Gobbini, “The distributed human neural system for face perception,” Trends in Cognitive Sciences, vol. 4, no. 6, pp. 223–233, 2000. View at Publisher · View at Google Scholar · View at Scopus
  11. A. Ishai, “Let's face it: It's a cortical network,” NeuroImage, vol. 40, no. 2, pp. 415–419, 2008. View at Publisher · View at Google Scholar · View at Scopus
  12. A. J. Wiggett and P. E. Downing, “The face network: overextended? (Comment on: ‘Let's face it: it's a cortical network’ by Alumit Ishai),” NeuroImage, vol. 40, no. 2, pp. 420–422, 2008. View at Publisher · View at Google Scholar · View at Scopus
  13. A. Mechelli, C. J. Price, K. J. Friston, and A. Ishai, “Where bottom-up meets top-down: neuronal interactions during perception and imagery,” Cerebral Cortex, vol. 14, no. 11, pp. 1256–1265, 2004. View at Publisher · View at Google Scholar · View at Scopus
  14. M. E. Hasselmo, E. T. Rolls, and G. C. Baylis, “The role of expression and identity in the face-selective responses of neurons in the temporal visual cortex of the monkey,” Behavioural Brain Research, vol. 32, no. 3, pp. 203–218, 1989. View at Publisher · View at Google Scholar · View at Scopus
  15. D. I. Perrett, E. T. Rolls, and W. Caan, “Visual neurones responsive to faces in the monkey temporal cortex,” Experimental Brain Research, vol. 47, no. 3, pp. 329–342, 1982. View at Google Scholar · View at Scopus
  16. R. G. D. P. Menendez, S. G. Andino, G. Lantz, C. M. Michel, and T. Landis, “Noninvasive localization of electromagnetic epileptic activity. I. Method descriptions and simulations,” Brain Topography, vol. 14, no. 2, pp. 131–137, 2001. View at Publisher · View at Google Scholar · View at Scopus
  17. R. D. Pascual-Marqui, C. M. Michel, and D. Lehmann, “Low resolution electromagnetic tomography: a new method for localizing electrical activity in the brain,” International Journal of Psychophysiology, vol. 18, no. 1, pp. 49–65, 1994. View at Publisher · View at Google Scholar · View at Scopus
  18. N. J. Trujillo-Barreto, E. Aubert-Vázquez, and P. A. Valdés-Sosa, “Bayesian model averaging in EEG/MEG imaging,” NeuroImage, vol. 21, no. 4, pp. 1300–1319, 2004. View at Publisher · View at Google Scholar · View at Scopus
  19. E. J. Barbeau, M. J. Taylor, J. Regis, P. Marquis, P. Chauvel, and C. Liégeois-Chauvel, “Spatio temporal dynamics of face recognition,” Cerebral Cortex, vol. 18, no. 5, pp. 997–1009, 2008. View at Publisher · View at Google Scholar · View at Scopus
  20. B. Kotchoubey, “Event-related potentials, cognition, and behavior: a biological approach,” Neuroscience and Biobehavioral Reviews, vol. 30, no. 1, pp. 42–65, 2006. View at Publisher · View at Google Scholar · View at Scopus
  21. S. Bentin and D. Carmel, “Accounts for the N170 face-effect: a reply to Rossion, Curran & Gauthier,” Cognition, vol. 85, no. 2, pp. 197–202, 2002. View at Publisher · View at Google Scholar · View at Scopus
  22. I. Gauthier and M. J. Tarr, “Becoming a ‘Greeble’ expert: exploring mechanisms for face recognition,” Vision Research, vol. 37, no. 12, pp. 1673–1682, 1997. View at Publisher · View at Google Scholar · View at Scopus
  23. E. McKone and R. Robbins, “The evidence rejects the expertise hypothesis: reply to Gauthier & Bukach,” Cognition, vol. 103, no. 2, pp. 331–336, 2007. View at Publisher · View at Google Scholar · View at Scopus
  24. B. Rossion, C.-C. Kung, and M. J. Tarr, “Visual expertise with nonface objects leads to competition with the early perceptual processing of faces in the human occipitotemporal cortex,” Proceedings of the National Academy of Sciences of the United States of America, vol. 101, no. 40, pp. 14521–14526, 2004. View at Publisher · View at Google Scholar · View at Scopus
  25. I. Gauthier and C. Bukach, “Should we reject the expertise hypothesis?” Cognition, vol. 103, no. 2, pp. 322–330, 2007. View at Publisher · View at Google Scholar · View at Scopus
  26. T. A. Busey and J. R. Vanderkolk, “Behavioral and electrophysiological evidence for configural processing in fingerprint experts,” Vision Research, vol. 45, no. 4, pp. 431–448, 2005. View at Publisher · View at Google Scholar · View at Scopus
  27. M. J. Farah, K. D. Wilson, M. Drain, and J. N. Tanaka, “What is ‘Special’ about face perception?” Psychological Review, vol. 105, no. 3, pp. 482–498, 1998. View at Publisher · View at Google Scholar · View at Scopus
  28. T. J. McKeeff, R. W. McGugin, F. Tong, and I. Gauthier, “Expertise increases the functional overlap between face and object perception,” Cognition, vol. 117, no. 3, pp. 355–360, 2010. View at Publisher · View at Google Scholar · View at Scopus
  29. S. R. Schweinberger, “Neurophysiological correlates of face recognition,” in Handbook of Face Perception, A. Calder, G. Rhodes, M. H. Johnson, and J. V. Haxby, Eds., Oxford University Press, Oxford, UK, 2011. View at Google Scholar
  30. S. Bentin, T. Allison, A. Puce, E. Perez, and G. McCarthy, “Electrophysiological studies of face perception in humans,” Journal of Cognitive Neuroscience, vol. 8, no. 6, pp. 551–565, 1996. View at Publisher · View at Google Scholar · View at Scopus
  31. B. Rossion, I. Gauthier, M. J. Tarr et al., “The N170 occipito-temporal component is delayed and enhanced to inverted faces but not to inverted objects: an electrophysiological account of face-specific processes in the human brain,” NeuroReport, vol. 11, no. 1, pp. 69–74, 2000. View at Publisher · View at Google Scholar · View at Scopus
  32. S. R. Schweinberger, V. Huddy, and A. M. Burton, “N250r: a face-selective brain response to stimulus repetitions,” NeuroReport, vol. 15, no. 9, pp. 1501–1505, 2004. View at Publisher · View at Google Scholar · View at Scopus
  33. M. F. Neumann, T. N. Mohamed, and S. R. Schweinberger, “Face and object encoding under perceptual load: ERP evidence,” NeuroImage, vol. 54, no. 4, pp. 3021–3027, 2011. View at Publisher · View at Google Scholar · View at Scopus
  34. G. Thierry, A. J. Pegna, C. Dodds, M. Roberts, S. Basan, and P. Downing, “An event-related potential component sensitive to images of the human body,” NeuroImage, vol. 32, no. 2, pp. 871–879, 2006. View at Publisher · View at Google Scholar · View at Scopus
  35. K. Bötzel, S. Schulze, and S. R. G. Stodieck, “Scalp topography and analysis of intracranial sources of face-evoked potentials,” Experimental Brain Research, vol. 104, no. 1, pp. 135–143, 1995. View at Google Scholar · View at Scopus
  36. K. A. Dalrymple, I. Oruç, B. Duchaine et al., “The anatomic basis of the right face-selective N170 in acquired prosopagnosia: a combined ERP/fMRI study,” Neuropsychologia, vol. 49, no. 9, pp. 2553–2563, 2011.
  37. R. J. Itier and M. J. Taylor, “N170 or N1? Spatiotemporal differences between object and face processing using ERPs,” Cerebral Cortex, vol. 14, no. 2, pp. 132–142, 2004.
  38. S. R. Schweinberger, E. C. Pickering, I. Jentzsch, A. M. Burton, and J. M. Kaufmann, “Event-related brain potential evidence for a response of inferior temporal cortex to familiar face repetitions,” Cognitive Brain Research, vol. 14, no. 3, pp. 398–409, 2002.
  39. N. Kanwisher, J. McDermott, and M. M. Chun, “The fusiform face area: a module in human extrastriate cortex specialized for face perception,” The Journal of Neuroscience, vol. 17, no. 11, pp. 4302–4311, 1997.
  40. G. McCarthy, A. Puce, J. C. Gore, and T. Allison, “Face-specific processing in the human fusiform gyrus,” Journal of Cognitive Neuroscience, vol. 9, no. 5, pp. 605–610, 1997.
  41. G. McCarthy, A. Puce, A. Belger, and T. Allison, “Electrophysiological studies of human face perception. II: response properties of face-specific potentials generated in occipitotemporal cortex,” Cerebral Cortex, vol. 9, no. 5, pp. 431–444, 1999.
  42. R. N. Henson, Y. Goshen-Gottstein, T. Ganel, L. J. Otten, A. Quayle, and M. D. Rugg, “Electrophysiological and haemodynamic correlates of face perception, recognition and priming,” Cerebral Cortex, vol. 13, no. 7, pp. 793–805, 2003.
  43. V. T. Nguyen and R. Cunnington, “The superior temporal sulcus and the N170 during face processing: single trial analysis of concurrent EEG-fMRI,” NeuroImage, vol. 86, pp. 492–502, 2014.
  44. M. A. Bobes, F. Lopera, L. Díaz et al., “Brain potentials reflect residual face processing in a case of prosopagnosia,” Cognitive Neuropsychology, vol. 21, no. 7, pp. 691–718, 2004.
  45. T. Allison, H. Ginter, G. McCarthy et al., “Face recognition in human extrastriate cortex,” Journal of Neurophysiology, vol. 71, no. 2, pp. 821–825, 1994.
  46. T. Allison, A. Puce, D. D. Spencer, and G. McCarthy, “Electrophysiological studies of human face perception. I: potentials generated in occipitotemporal cortex by face and non-face stimuli,” Cerebral Cortex, vol. 9, no. 5, pp. 415–430, 1999.
  47. S. Bentin, Y. Golland, A. V. Flevaris, L. C. Robertson, and M. Moscovitch, “Processing the trees and the forest during initial stages of face perception: electrophysiological evidence,” Journal of Cognitive Neuroscience, vol. 18, no. 8, pp. 1406–1421, 2006.
  48. K. Bötzel and O.-J. Grüsser, “Electric brain potentials evoked by pictures of faces and non-faces: a search for ‘face-specific’ EEG-potentials,” Experimental Brain Research, vol. 77, no. 2, pp. 349–360, 1989.
  49. M. Eimer, “Event-related brain potentials distinguish processing stages involved in face perception and recognition,” Clinical Neurophysiology, vol. 111, no. 4, pp. 694–705, 2000.
  50. M. Eimer, “The face-specific N170 component reflects late stages in the structural encoding of faces,” NeuroReport, vol. 11, no. 10, pp. 2319–2324, 2000.
  51. D. A. Jeffreys, “A face-responsive potential recorded from the human scalp,” Experimental Brain Research, vol. 78, no. 1, pp. 193–202, 1989.
  52. S. Caharel, S. Poiroux, C. Bernard, F. Thibaut, R. Lalonde, and M. Rebai, “ERPs associated with familiarity and degree of familiarity during face recognition,” International Journal of Neuroscience, vol. 112, no. 12, pp. 1499–1512, 2002.
  53. S. Campanella, C. Hanoteau, D. Dépy et al., “Right N170 modulation in a face discrimination task: an account for categorical perception of familiar faces,” Psychophysiology, vol. 37, no. 6, pp. 796–806, 2000.
  54. F. Guillaume and G. Tiberghien, “Electrophysiological study of contextual variations in a short-term face recognition task,” Cognitive Brain Research, vol. 22, no. 3, pp. 471–487, 2005.
  55. J. J. Heisz, S. Watter, and J. M. Shedden, “Progressive N170 habituation to unattended repeated faces,” Vision Research, vol. 46, no. 1-2, pp. 47–56, 2006.
  56. N. Kloth, C. Dobel, S. R. Schweinberger, P. Zwitserlood, J. Bölte, and M. Junghöfer, “Effects of personal familiarity on early neuromagnetic correlates of face perception,” European Journal of Neuroscience, vol. 24, no. 11, pp. 3317–3321, 2006.
  57. S. Bentin and L. Y. Deouell, “Structural encoding and identification in face processing: ERP evidence for separate mechanisms,” Cognitive Neuropsychology, vol. 17, no. 1–3, pp. 35–54, 2000.
  58. G. Galli, M. Feurra, and M. P. Viggiano, “‘Did you see him in the newspaper?’ Electrophysiological correlates of context and valence in face processing,” Brain Research, vol. 1119, no. 1, pp. 190–202, 2006.
  59. Y. Xu, J. Liu, and N. Kanwisher, “The M170 is selective for faces, not for expertise,” Neuropsychologia, vol. 43, no. 4, pp. 588–597, 2005.
  60. I. Amihai, L. Y. Deouell, and S. Bentin, “Neural adaptation is related to face repetition irrespective of identity: a reappraisal of the N170 effect,” Experimental Brain Research, vol. 209, no. 2, pp. 193–204, 2011.
  61. N. Kloth, S. R. Schweinberger, and G. Kovács, “Neural correlates of generic versus gender-specific face adaptation,” Journal of Cognitive Neuroscience, vol. 22, no. 10, pp. 2345–2356, 2010.
  62. U. Maurer, B. Rossion, and B. D. McCandliss, “Category specificity in early perception: face and word N170 responses differ in both lateralization and habituation properties,” Frontiers in Human Neuroscience, vol. 2, article 18, 2008.
  63. S. R. Schweinberger, J. M. Kaufmann, S. Moratti, A. Keil, and A. M. Burton, “Brain responses to repetitions of human and animal faces, inverted faces, and objects—an MEG study,” Brain Research, vol. 1184, no. 1, pp. 226–233, 2007.
  64. S. R. Schweinberger, N. Kloth, and R. Jenkins, “Are you looking at me? Neural correlates of gaze adaptation,” NeuroReport, vol. 18, no. 7, pp. 693–696, 2007.
  65. E. Zion-Golumbic and S. Bentin, “Dissociated neural mechanisms for face detection and configural encoding: evidence from N170 and induced gamma-band oscillation effects,” Cerebral Cortex, vol. 17, no. 8, pp. 1741–1749, 2007.
  66. M. Seeck and O. J. Grüsser, “Category-related components in visual evoked potentials: photographs of faces, persons, flowers and tools as stimuli,” Experimental Brain Research, vol. 92, no. 2, pp. 338–349, 1992.
  67. D. A. Jeffreys and E. S. A. Tukmachi, “The vertex-positive scalp potential evoked by faces and by objects,” Experimental Brain Research, vol. 91, no. 2, pp. 340–350, 1992.
  68. D. A. Jeffreys, E. S. A. Tukmachi, and G. Rockley, “Evoked potential evidence for human brain mechanisms that respond to single, fixated faces,” Experimental Brain Research, vol. 91, no. 2, pp. 351–362, 1992.
  69. N. George, J. Evans, N. Fiori, J. Davidoff, and B. Renault, “Brain events related to normal and moderately scrambled faces,” Cognitive Brain Research, vol. 4, no. 2, pp. 65–76, 1996.
  70. C. Joyce and B. Rossion, “The face-sensitive N170 and VPP components manifest the same brain processes: the effect of reference electrode site,” Clinical Neurophysiology, vol. 116, no. 11, pp. 2613–2631, 2005.
  71. K. Linkenkaer-Hansen, J. M. Palva, M. Sams, J. K. Hietanen, H. J. Aronen, and R. J. Ilmoniemi, “Face-selective processing in human extrastriate cortex around 120 ms after stimulus onset revealed by magneto- and electroencephalography,” Neuroscience Letters, vol. 253, no. 3, pp. 147–150, 1998.
  72. M. J. Taylor, S. J. Bayless, T. Mills, and E. W. Pang, “Recognising upright and inverted faces: MEG source localisation,” Brain Research, vol. 1381, pp. 167–174, 2011.
  73. H. Halit, M. De Haan, and M. H. Johnson, “Modulation of event-related potentials by prototypical and atypical faces,” NeuroReport, vol. 11, no. 9, pp. 1871–1875, 2000.
  74. T. Nakashima, K. Kaneko, Y. Goto et al., “Early ERP components differentially extract facial features: evidence for spatial frequency-and-contrast detectors,” Neuroscience Research, vol. 62, no. 4, pp. 225–235, 2008.
  75. T. Mitsudo, Y. Kamio, Y. Goto, T. Nakashima, and S. Tobimatsu, “Neural responses in the occipital cortex to unrecognizable faces,” Clinical Neurophysiology, vol. 122, no. 4, pp. 708–718, 2011.
  76. R. J. Itier and M. J. Taylor, “Inversion and contrast polarity reversal affect both encoding and recognition processes of unfamiliar faces: a repetition study using ERPs,” NeuroImage, vol. 15, no. 2, pp. 353–372, 2002.
  77. J. Liu, A. Harris, and N. Kanwisher, “Stages of processing in face perception: an MEG study,” Nature Neuroscience, vol. 5, no. 9, pp. 910–916, 2002.
  78. T. Tanskanen, R. Näsänen, T. Montez, J. Päällysaho, and R. Hari, “Face recognition and cortical responses show similar sensitivity to noise spatial frequency,” Cerebral Cortex, vol. 15, no. 5, pp. 526–534, 2005.
  79. T. Tanskanen, R. Näsänen, H. Ojanpää, and R. Hari, “Face recognition and cortical responses: effect of stimulus duration,” NeuroImage, vol. 35, no. 4, pp. 1636–1644, 2007.
  80. U. Martens, S. R. Schweinberger, M. Kiefer, and A. M. Burton, “Masked and unmasked electrophysiological repetition effects of famous faces,” Brain Research, vol. 1109, no. 1, pp. 146–157, 2006.
  81. S. R. Schweinberger, E.-M. Pfütze, and W. Sommer, “Repetition priming and associative priming of face recognition: evidence from event-related potentials,” Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 21, no. 3, pp. 722–736, 1995.
  82. C. Saavedra, J. Iglesias, and E. I. Olivares, “Event-related potentials elicited by the explicit and implicit processing of familiarity in faces,” Clinical EEG and Neuroscience, vol. 41, no. 1, pp. 24–31, 2010.
  83. S. G. Boehm and W. Sommer, “Neural correlates of intentional and incidental recognition of famous faces,” Cognitive Brain Research, vol. 23, no. 2-3, pp. 153–163, 2005.
  84. M. Bindemann, A. M. Burton, H. Leuthold, and S. R. Schweinberger, “Brain potential correlates of face recognition: geometric distortions and the N250r brain response to stimulus repetitions,” Psychophysiology, vol. 45, no. 4, pp. 535–544, 2008.
  85. M. F. Neumann and S. R. Schweinberger, “N250r and N400 ERP correlates of immediate famous face repetition are independent of perceptual load,” Brain Research, vol. 1239, pp. 181–190, 2008.
  86. R. N. Henson, A. Rylands, E. Ross, P. Vuilleumier, and M. D. Rugg, “The effect of repetition lag on electrophysiological and haemodynamic correlates of visual object priming,” NeuroImage, vol. 21, no. 4, pp. 1674–1689, 2004.
  87. L. S. Scott, J. W. Tanaka, D. L. Sheinberg, and T. Curran, “The role of category learning in the acquisition and retention of perceptual expertise: a behavioral and neurophysiological study,” Brain Research, vol. 1210, pp. 204–215, 2008.
  88. J. M. Kaufmann, S. R. Schweinberger, and A. M. Burton, “N250 ERP correlates of the acquisition of face representations across different images,” Journal of Cognitive Neuroscience, vol. 21, no. 4, pp. 625–641, 2009.
  89. M. Kutas and S. A. Hillyard, “Reading senseless sentences: brain potentials reflect semantic incongruity,” Science, vol. 207, no. 4427, pp. 203–205, 1980.
  90. J. B. Debruille, “The N400 potential could index a semantic inhibition,” Brain Research Reviews, vol. 56, no. 2, pp. 472–477, 2007.
  91. S. E. Barrett and M. D. Rugg, “Event-related potentials and the semantic matching of faces,” Neuropsychologia, vol. 27, no. 7, pp. 913–922, 1989.
  92. S. E. Barrett, M. D. Rugg, and D. I. Perrett, “Event-related potentials and the matching of familiar and unfamiliar faces,” Neuropsychologia, vol. 26, no. 1, pp. 105–117, 1988.
  93. M. A. Bobes, M. Valdés-Sosa, and E. Olivares, “An ERP study of expectancy violation in face perception,” Brain and Cognition, vol. 26, no. 1, pp. 1–22, 1994.
  94. J. B. Debruille, J. Pineda, and B. Renault, “N400-like potentials elicited by faces and knowledge inhibition,” Cognitive Brain Research, vol. 4, no. 2, pp. 133–144, 1996.
  95. B. Jemel, N. George, E. Olivares, N. Fiori, and B. Renault, “Event-related potentials to structural familiar face incongruity processing,” Psychophysiology, vol. 36, no. 4, pp. 437–452, 1999.
  96. E. V. Mnatsakanian and I. M. Tarkka, “Matching of familiar faces and abstract patterns: behavioral and high-resolution ERP study,” International Journal of Psychophysiology, vol. 47, no. 3, pp. 217–227, 2003.
  97. E. Olivares, M. A. Bobes, E. Aubert, and M. Valdés-Sosa, “Associative ERP effects with memories of artificial faces,” Cognitive Brain Research, vol. 2, no. 1, pp. 39–48, 1994.
  98. E. I. Olivares, J. Iglesias, and M. A. Bobes, “Searching for face-specific long latency ERPs: a topographic study of effects associated with mismatching features,” Cognitive Brain Research, vol. 7, no. 3, pp. 343–356, 1999.
  99. E. I. Olivares, C. Saavedra, N. J. Trujillo-Barreto, and J. Iglesias, “Long-term information and distributed neural activation are relevant for the ‘internal features advantage’ in face processing: electrophysiological and source reconstruction evidence,” Cortex, vol. 49, no. 10, pp. 2735–2747, 2013.
  100. K. A. Paller, B. Gonsalves, M. Grabowecky, V. S. Bozic, and S. Yamada, “Electrophysiological correlates of recollecting faces of known and unknown individuals,” NeuroImage, vol. 11, no. 2, pp. 98–110, 2000.
  101. K. A. Paller, C. Ranganath, B. Gonsalves et al., “Neural correlates of person recognition,” Learning and Memory, vol. 10, no. 4, pp. 253–260, 2003.
  102. M. Valdés-Sosa and M. A. Bobes, “Making sense out of words and faces: ERPs evidence for multiple memory systems,” in Machinery of the Mind, E. R. John, Ed., pp. 252–288, Birkhauser, Boston, Mass, USA, 1990.
  103. E. I. Olivares, J. Iglesias, M. A. Bobes, and M. Valdés-Sosa, “Making features relevant: learning faces and event-related potentials recording using an analytic procedure,” Brain Research Protocols, vol. 5, no. 1, pp. 1–9, 2000.
  104. T. Curran and J. Hancock, “The FN400 indexes familiarity-based recognition of faces,” NeuroImage, vol. 36, no. 2, pp. 464–471, 2007.
  105. E. Salinas and T. J. Sejnowski, “Correlated neuronal activity and the flow of neural information,” Nature Reviews Neuroscience, vol. 2, no. 8, pp. 539–550, 2001.
  106. F. Varela, J.-P. Lachaux, E. Rodriguez, and J. Martinerie, “The brainweb: phase synchronization and large-scale integration,” Nature Reviews Neuroscience, vol. 2, no. 4, pp. 229–239, 2001.
  107. Y. B. Sirotin and A. Das, “Anticipatory haemodynamic signals in sensory cortex not predicted by local neuronal activity,” Nature, vol. 457, no. 7228, pp. 475–479, 2009.
  108. O. David, I. Guillemain, S. Saillet et al., “Identifying neural drivers with functional MRI: an electrophysiological validation,” PLoS Biology, vol. 6, no. 12, article e315, pp. 2683–2697, 2008.
  109. K. J. Friston, L. Harrison, and W. Penny, “Dynamic causal modelling,” NeuroImage, vol. 19, no. 4, pp. 1273–1302, 2003.
  110. M. I. Garrido, J. M. Kilner, S. J. Kiebel, K. E. Stephan, and K. J. Friston, “Dynamic causal modelling of evoked potentials: a reproducibility study,” NeuroImage, vol. 36, no. 3, pp. 571–580, 2007.
  111. O. David, S. J. Kiebel, L. M. Harrison, J. Mattout, J. M. Kilner, and K. J. Friston, “Dynamic causal modeling of evoked responses in EEG and MEG,” NeuroImage, vol. 30, no. 4, pp. 1255–1272, 2006.
  112. P. Vuilleumier, J. L. Armony, J. Driver, and R. J. Dolan, “Effects of attention and emotion on face processing in the human brain: an event-related fMRI study,” Neuron, vol. 30, no. 3, pp. 829–841, 2001.
  113. S. L. Fairhall and A. Ishai, “Effective connectivity within the distributed cortical network for face perception,” Cerebral Cortex, vol. 17, no. 10, pp. 2400–2406, 2007.
  114. V. T. Nguyen, M. Breakspear, and R. Cunnington, “Fusing concurrent EEG-fMRI with dynamic causal modeling: application to effective connectivity during face perception,” NeuroImage, vol. 102, pp. 60–70, 2014.
  115. S. Debener, M. Ullsperger, M. Siegel, and A. K. Engel, “Single-trial EEG-fMRI reveals the dynamics of cognitive function,” Trends in Cognitive Sciences, vol. 10, no. 12, pp. 558–563, 2006.
  116. R. C. Sotero and N. J. Trujillo-Barreto, “Biophysical model for integrating neuronal activity, EEG, fMRI and metabolism,” NeuroImage, vol. 39, no. 1, pp. 290–309, 2008.
  117. P. J. Johnston, W. Stojanov, H. Devir, and U. Schall, “Functional MRI of facial emotion recognition deficits in schizophrenia and their electrophysiological correlates,” European Journal of Neuroscience, vol. 22, no. 5, pp. 1221–1232, 2005.
  118. T. Iidaka, A. Matsumoto, K. Haneda, T. Okada, and N. Sadato, “Hemodynamic and electrophysiological relationship involved in human face processing: evidence from a combined fMRI-ERP study,” Brain and Cognition, vol. 60, no. 2, pp. 176–186, 2006.
  119. A. Delorme and S. Makeig, “EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis,” Journal of Neuroscience Methods, vol. 134, no. 1, pp. 9–21, 2004.
  120. S. Makeig, M. Westerfield, T.-P. Jung et al., “Dynamic brain sources of visual evoked responses,” Science, vol. 295, no. 5555, pp. 690–694, 2002.
  121. M. De Vos, J. D. Thorne, G. Yovel, and S. Debener, “Let's face it, from trial to trial: comparing procedures for N170 single-trial estimation,” NeuroImage, vol. 63, no. 3, pp. 1196–1202, 2012.
  122. G. A. Rousselet, J. S. Husk, P. J. Bennett, and A. B. Sekuler, “Single-trial EEG dynamics of object and face visual processing,” NeuroImage, vol. 36, no. 3, pp. 843–862, 2007.
  123. M. G. Philiastides and P. Sajda, “Temporal characterization of the neural correlates of perceptual decision making in the human brain,” Cerebral Cortex, vol. 16, no. 4, pp. 509–518, 2006.
  124. R. Ratcliff, M. G. Philiastides, and P. Sajda, “Quality of evidence for perceptual decision making is indexed by trial-to-trial variability of the EEG,” Proceedings of the National Academy of Sciences of the United States of America, vol. 106, no. 16, pp. 6539–6544, 2009.
  125. G. A. Rousselet and C. R. Pernet, “Quantifying the time course of visual object processing using ERPs: It's time to up the game,” Frontiers in Psychology, vol. 2, article 107, 2011.
  126. K. Nemeth, M. Zimmer, S. R. Schweinberger, P. Vakli, and G. Kovács, “The background of reduced face specificity of N170 in congenital prosopagnosia,” PLoS ONE, vol. 9, no. 7, Article ID e101393, 2014.
  127. E. Başar, Brain Function and Oscillations: II. Integrative Brain Function, Neurophysiology and Cognitive Processes, Springer, Heidelberg, Germany, 1999.
  128. G. Buzsáki and A. Draguhn, “Neuronal oscillations in cortical networks,” Science, vol. 304, no. 5679, pp. 1926–1929, 2004.
  129. S. Karakaş, Ö. U. Erzengin, and E. Başar, “A new strategy involving multiple cognitive paradigms demonstrates that ERP components are determined by the superposition of oscillatory responses,” Clinical Neurophysiology, vol. 111, no. 10, pp. 1719–1732, 2000.
  130. W. Klimesch, P. Sauseng, S. Hanslmayr, W. Gruber, and R. Freunberger, “Event-related phase reorganization may explain evoked neural dynamics,” Neuroscience and Biobehavioral Reviews, vol. 31, no. 7, pp. 1003–1016, 2007.
  131. G. Pfurtscheller and F. H. Lopes da Silva, “Event-related EEG/MEG synchronization and desynchronization: basic principles,” Clinical Neurophysiology, vol. 110, no. 11, pp. 1842–1857, 1999.
  132. L. M. Ward, “Synchronous neural oscillations and cognitive processes,” Trends in Cognitive Sciences, vol. 7, no. 12, pp. 553–559, 2003.
  133. Y. Tang, D. Liu, Y. Li, Y. Qiu, and Y. Zhu, “The time-frequency representation of the ERPs of face processing,” in Proceedings of the IEEE Engineering in Medicine and Biology Society Conference, pp. 4114–4117, 2008.
  134. D. Anaki, E. Zion-Golumbic, and S. Bentin, “Electrophysiological neural mechanisms for detection, configural analysis and recognition of faces,” NeuroImage, vol. 37, no. 4, pp. 1407–1416, 2007.
  135. Y. Chen, F. Pan, H. Wang, S. Xiao, and L. Zhao, “Electrophysiological correlates of processing own- and other-race faces,” Brain Topography, vol. 26, no. 4, pp. 606–615, 2013.
  136. A. D. Engell and G. McCarthy, “The relationship of gamma oscillations and face-specific ERPs recorded subdurally from occipitotemporal cortex,” Cerebral Cortex, vol. 21, no. 5, pp. 1213–1221, 2011.