Abstract

Social communication uses verbal and nonverbal language. We examined the degree of trust and brain activity when verbal and facial expressions are incongruent. Fourteen healthy volunteers viewed photographs of 8 people with a pleasant (smile) or unpleasant (disgust) expression, alone or combined with a verbal [positive/negative] expression. As an index of the degree of trust, subjects were asked to offer a donation when told that the person in the photograph was in financial trouble. Positive emotions and degree of trust were evaluated using the Visual Analogue Scale (VAS). Event-related potentials (ERPs) were obtained 170–240 ms after viewing the photographs. Brain activity during incongruent conditions was localized using standardized Low Resolution Brain Electromagnetic Tomography (sLORETA). VAS scores for the positive smile condition were significantly higher than those for the other conditions. The donation offered was significantly lower when verbal and facial expressions were incongruent, particularly in the negative smile condition. EEG showed more activity in the parietal lobe under incongruent than under congruent conditions. The incongruent [negative smile] condition elicited the least positive emotion, the lowest degree of trust, and the smallest offer. Our results indicate that incongruent sensory information increased activity in the parietal lobe, which may be a basis of mentalizing.

1. Introduction

Gaining the trust of others is extremely important in a person's social life. Faces serve as an important source of information for inferring the social characteristics of an individual [1]. Brain activity involved in face processing is generally examined via the N170 component of event-related potentials (ERPs) [2]. Faces provide not only information about the identity of an individual, but also pertinent information for inferring the individual's emotions through facial expressions. An association between facial expression and the trustworthiness of an individual has been observed [3].

Previous studies have examined which facial expressions are considered trustworthy [4, 5]. Oosterhof and Todorov examined what types of facial expressions are judged to be more trustworthy by creating face models with neutral expressions that could be modified to depict several different expressions. Their results showed that smiling faces yield the highest levels of trustworthiness, while angry faces result in lower levels [5]. Thus, humans trust others whose faces express positive affect but distrust those whose faces express negative affect.

In social interaction, humans are influenced not only by facial expressions, but also by biases such as social context and linguistic information. These biases are known to influence social cognition as well as social behavior [6]. For example, if a negative linguistic expression such as “my dog is dead” is uttered by a person with a smiley face, others' impression of that person changes. The mechanism by which trustworthiness is altered when a linguistic bias disagrees with a facial expression is unclear. Moreover, no studies have examined how brain activity is modified in such a case.

In this study, we examined which behaviors are more likely to be trusted in a realistic social interaction by weighing trustworthiness when a linguistic cue is consistent with a facial expression against trustworthiness when the two conflict. We hypothesized that trustworthiness would be high when linguistic information is congruent with facial expression and low when it is incongruent. From a neuroscientific point of view, the parietal lobe has been reported to be involved in processing visually presented facial expressions and linguistic information that are incongruent [7, 8]. For instance, robust parietal activity has been reported when a participant was asked to evaluate an individual's facial expression while the expression and the accompanying linguistic stimulus were incongruent [7, 8]. The parietal lobe is thought to be an association area for multiple sensory modalities, with the function of detecting incongruence in information received across modalities [8]. To date, this parietal activation in response to incongruent information has been examined with cognitive tasks involving a conflict between a facial expression and a word describing that expression, but not with a task requiring an evaluation of a person's trustworthiness in context. Furthermore, because the parietal lobe is involved in mentalizing, the ability to make inferences about the state of mind of others, this region may help determine how much trust one places in others in an information-rich social context. We therefore hypothesized that parietal lobe activity in response to incongruent facial and linguistic information underlies judgments of the trustworthiness of others.
Meanwhile, although the N170 component of ERPs is known to be associated with face processing, source localization of face processing based on the N170 has not been examined extensively.

The purpose of the current study was to investigate neuronal activity in response to incongruent facial and linguistic information using standardized Low Resolution Brain Electromagnetic Tomography (sLORETA) [9], which enables three-dimensional visualization of neuronal activity based on ERPs.

2. Methods

2.1. Participants

Subjects were 14 healthy university students (7 males, 7 females; mean age, 21 ± 0.8 years). While subjects were informed of the experimental procedures before the experiment, the purpose of the experiment was not revealed to them. Only subjects who provided informed consent were accepted as eligible participants. The study protocol was approved by the Ethical Committee of Kio University.

2.2. Experimental Conditions

We used two major experimental conditions. One was a congruent condition, in which the facial expression (i.e., nonverbal information) and the linguistic information were congruent with each other. The other was an incongruent condition, in which they were not. The congruent condition included a positive × smile (PoSm) condition, in which positive language (“I’m rescuing a kid from the fire”) and a smiley face were presented at the same time, and a negative × disgust (NeDi) condition, in which negative language (“I’m teasing a friend”) and a facial expression of disgust were presented simultaneously.

The incongruent condition comprised a positive × disgust (PoDi) condition, in which positive language (“I’ll keep my promise with my friend”) and a facial expression of disgust were presented at the same time, and a negative × smile (NeSm) condition, in which negative language (“I’m going to hit and kill my friend by car”) and a smiley face were presented concurrently. The linguistic stimuli used in the current study were selected from 50 arbitrary sentences in a preliminary study conducted before this experiment. Four sentences were chosen: the two with the highest positive scores and the two with the lowest negative scores, as evaluated on a Visual Analogue Scale (VAS).
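The selection rule described above (keep the two highest- and the two lowest-rated of the 50 candidate sentences) can be sketched as follows. This is an illustrative reconstruction, not code from the study; the function name and the use of a per-sentence mean VAS score are assumptions.

```python
# Hypothetical sketch of the stimulus-selection rule: from candidate
# sentences rated on a 0-100 VAS, keep the two highest-rated (positive
# stimuli) and the two lowest-rated (negative stimuli).
def select_stimuli(vas_scores, n_pos=2, n_neg=2):
    """vas_scores: dict mapping sentence -> mean VAS score (0-100)."""
    # Rank sentences from highest to lowest mean score.
    ranked = sorted(vas_scores, key=vas_scores.get, reverse=True)
    return ranked[:n_pos], ranked[-n_neg:]
```

With 50 rated sentences this yields the four stimuli, two per valence, used across the four conditions.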

2.3. Stimuli

Language and facial expressions were presented as text and photographs projected on a screen. Each facial expression was posed independently based on the Action Unit classification of the Facial Action Coding System (FACS). Each subject viewed the facial expressions of 4 people in total, because one person's photograph was used for each condition. To prevent gender-difference effects, photographs of persons of the same sex as the subject were shown. All photographs were in color, and only photographs with clear facial expressions were used. Each photograph included the face and shoulders of an individual against a solid background, and all photographs were taken in the same environment. The individuals photographed were in the same age group as the study participants. We ensured that the individuals in the photographs were not acquainted with the study participants and were free of effects of social status or preexisting favorable impressions.

2.4. Procedures

Subjects were instructed to watch images on the screen while sitting in a chair. In each trial, the screen showed a fixation cross (“+”) for 5 seconds, a language stimulus for 5 seconds, and finally a person's facial expression for 2 seconds together with the language stimulus (Figure 1). A total of 80 trials were administered, with 20 trials for each of the 4 conditions presented in random order. There were no intertrial intervals; the trials were presented continuously. The same sentence was used for all 20 trials of a given condition. After all 80 trials were completed, subjects rated their emotions (pleasantness or unpleasantness) and trustworthiness toward the persons in the photographs using the VAS.
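As an illustration of the trial structure, a randomized sequence of 80 trials (20 per condition) could be generated as below. The condition labels follow the abbreviations defined in Section 2.2; the function itself and the fixed seed are hypothetical, included only to make the sketch reproducible.

```python
import random

CONDITIONS = ["PoSm", "NeDi", "PoDi", "NeSm"]  # see Section 2.2

def build_trial_sequence(trials_per_condition=20, seed=0):
    """Return a randomized flat list of trials, one label per trial."""
    rng = random.Random(seed)  # fixed seed only so the sketch is reproducible
    trials = [c for c in CONDITIONS for _ in range(trials_per_condition)]
    rng.shuffle(trials)
    return trials

sequence = build_trial_sequence()  # 80 trials, presented back to back
```

Each entry would then drive one fixation-language-face presentation cycle as described above.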

Emotions toward a person in a photograph were graded on a scale from “0” (maximum unpleasantness) to “100” (maximum pleasantness). Trustworthiness was graded similarly, with “0” representing maximum untrustworthiness and “100” representing maximum trustworthiness. At the end of the session, subjects were asked to answer the following question orally: “If you had ten thousand yen on hand, how much money would you give a person having a financial problem?”

2.5. EEG Recording and Source Localization

EEG was recorded at 64 scalp sites according to the international 10–20 system (ActiveTwo; BioSemi B.V., Amsterdam, The Netherlands). The sampling frequency was 512 Hz and the band-pass filter ranged from 0.5 to 50 Hz. Epochs with amplitudes exceeding 80 μV were rejected as artifacts. Ocular artifacts, such as blinks and large eye movements, were removed from the data using a specially designed spatial filter in EMSE Suite 5.4 (Source Signal Imaging, Inc., La Mesa, CA, USA). EEG was referenced to the average reference. For each facial expression presentation, a 1200 ms epoch was stored, from 200 ms before stimulus onset (baseline) to 1000 ms after onset.
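The epoching step above (a -200 to +1000 ms window, baseline correction against the pre-stimulus interval, and the 80 μV rejection threshold) can be sketched in NumPy as follows. This is a minimal illustration under stated assumptions, not the study's pipeline: the actual preprocessing also involved the EMSE Suite ocular spatial filter and average referencing, which are omitted here.

```python
import numpy as np

FS = 512                     # sampling rate (Hz), as in the recording setup
PRE_MS, POST_MS = 200, 1000  # epoch window relative to stimulus onset

def extract_epoch(eeg, onset, fs=FS):
    """Cut a -200..+1000 ms epoch, baseline-correct it, and apply the
    80 uV artifact threshold.

    eeg: (n_channels, n_samples) array in microvolts.
    onset: sample index of stimulus onset.
    Returns the corrected epoch, or None if it is rejected as an artifact.
    """
    pre, post = int(PRE_MS * fs / 1000), int(POST_MS * fs / 1000)
    epoch = eeg[:, onset - pre : onset + post].astype(float)
    # Subtract each channel's mean over the 200 ms pre-stimulus baseline.
    epoch -= epoch[:, :pre].mean(axis=1, keepdims=True)
    # Reject the whole epoch if any sample exceeds 80 uV in magnitude.
    return None if np.abs(epoch).max() > 80.0 else epoch
```

At 512 Hz the window spans 102 pre-stimulus and 512 post-stimulus samples, i.e., 614 samples per channel.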

Potentials evoked in response to facial expression recognition were defined as the waveforms appearing 170–240 ms after stimulus onset, based on previous studies [2, 10]. Waveforms from each condition were averaged to provide event-related potential (ERP) data.

We compared brain activity between language-facial expression congruence and incongruence as follows. We averaged the ERP data from the PoSm and NeDi conditions and those from the PoDi and NeSm conditions; in other words, we acquired ERP data for the congruent and incongruent conditions and compared them using the statistical analysis (paired t-test) built into the sLORETA program. sLORETA enables the spatial identification and analysis of cortical activity from conventional EEG recordings [9, 11–13]. The consistency of LORETA with physiology and with other localization methods has been validated for numerous normal and pathological conditions [14], and comprehensive evaluations of the method are available in reviews [15]. These reports indicate that the reliability of the LORETA analysis is high.
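The pooling step described above (averaging per-condition ERPs, then combining PoSm with NeDi and PoDi with NeSm) can be sketched as below. Note that the voxel-wise statistics were computed inside the sLORETA package itself; the `paired_t` helper here is only an illustrative sensor-level paired t statistic on one scalar measure per subject (e.g., mean amplitude in the 170–240 ms window), and all function names are assumptions.

```python
import numpy as np

def condition_erp(epochs):
    """Average the accepted epochs (each n_channels x n_samples) of one
    condition into a single ERP."""
    return np.mean(np.stack(epochs), axis=0)

def congruence_average(erps):
    """Pool condition ERPs into congruent (PoSm, NeDi) and incongruent
    (PoDi, NeSm) averages, as done before the sLORETA comparison."""
    congruent = (erps["PoSm"] + erps["NeDi"]) / 2.0
    incongruent = (erps["PoDi"] + erps["NeSm"]) / 2.0
    return congruent, incongruent

def paired_t(a, b):
    """Paired t statistic across subjects, one scalar measure per subject."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))
```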

Cortical regions were defined by allocating the raw sLORETA values of individual voxels to their corresponding Brodmann areas or cerebral gyri on the basis of the coordinates of the digitized Talairach atlas.

Brain sites showing a difference between the two conditions at a significance level of less than 5% were superimposed on a functional brain image of 6239 voxels (voxel size 5 mm × 5 mm × 5 mm) using the MNI template to visualize the identified sites.

2.6. Statistical Analysis

Emotions (pleasant or unpleasant), trustworthiness toward a person, and amounts of donated money were analyzed using one-way analysis of variance, with post hoc comparisons by Tukey's multiple comparison test. Furthermore, Pearson correlation coefficients were calculated to investigate the relationships between emotions (pleasant, unpleasant) and trustworthiness toward the individual being appraised. The alpha level was set at 0.05 for all statistical analyses.
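For illustration, the two statistics named above can be computed as follows. This is a simplified sketch: it shows the ordinary (between-groups) one-way F statistic rather than the repeated-measures ANOVA actually used in the Results, omits the Tukey post hoc step, and the function names are assumptions.

```python
import numpy as np

def one_way_f(*groups):
    """F statistic of an ordinary one-way ANOVA (between-groups form)."""
    data = np.concatenate([np.asarray(g, float) for g in groups])
    grand = data.mean()
    k, n = len(groups), data.size
    # Between-groups and within-groups sums of squares.
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g, float) - np.mean(g)) ** 2).sum()
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def pearson_r(x, y):
    """Pearson correlation coefficient between two score vectors."""
    return float(np.corrcoef(x, y)[0, 1])
```

In practice, each group would hold one behavioral measure (emotion, trustworthiness, or donation) across the 14 subjects for one of the four conditions.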

3. Results

The emotion scores in each condition were as follows: PoSm, 75.75 ± 4.83 (mean ± SE); NeDi, 61.46 ± 6.59; PoDi, 43.89 ± 3.74; and NeSm, 16.92 ± 4.92. One-way repeated-measures ANOVA showed a significant main effect. Post hoc tests indicated significant differences between PoSm and both PoDi and NeSm, and between NeDi and PoDi (Figure 2(a)). The trustworthiness scores in each condition were as follows: PoSm, 74.85 ± 5.80; NeDi, 61.5 ± 5.49; PoDi, 40.60 ± 6.14; and NeSm, 21 ± 5.18. One-way repeated-measures ANOVA showed a significant main effect. Post hoc tests indicated significant differences between PoSm and both PoDi and NeSm, between NeDi and PoDi, and between PoDi and NeSm (Figure 2(b)). The amounts of donated money in each condition were as follows: PoSm, 5907.14 ± 907.75; NeDi, 4071.43 ± 722.21; PoDi, 2114.29 ± 780.17; and NeSm, 1142.86 ± 501.18. One-way repeated-measures ANOVA showed a significant main effect. Post hoc tests indicated significant differences between PoSm and both PoDi and NeSm, and between NeDi and NeSm (Figure 3).

Figure 4 shows the results of the correlation analysis between the emotion scores and the trustworthiness scores. Significant positive correlations were observed in the PoSm, PoDi, and NeSm conditions.

sLORETA analysis revealed that activity in the parietal association area (BA7) was significantly greater in the incongruent condition than in the congruent condition (Figure 5).

4. Discussion

Our results indicate that facial expressions incongruent with linguistic information not only evoked unpleasantness, but also decreased trustworthiness toward the person. In addition, observing facial expressions incongruent with linguistic information significantly activated the parietal lobe. Because there were no significant differences in pleasantness ratings or in the amount of money donated between the congruent PoSm and NeDi conditions, or between the incongruent PoDi and NeSm conditions, we averaged the ERPs of the positive and negative conditions to create overall congruent and incongruent conditions, on which the sLORETA analyses were carried out.

A previous study reported that faces expressing positive emotions, such as a smiley face, were more likely to be trusted, whereas those expressing negative emotions, such as an angry face, were less likely to be trusted [5]. Furthermore, viewing a less trustworthy face stimulates brain areas involved in emotion, such as the amygdala, and evokes unpleasant emotions toward the other person [4, 16]. In the present study, when linguistic information and facial expression were congruent, a negative face made subjects feel more unpleasant, trust the person less, and donate a significantly smaller amount of money.

On the other hand, when linguistic information and facial expression were incongruent, both a smiley face and a facial expression of disgust elicited significantly more unpleasant feelings, lower trustworthiness, and smaller donations than the congruent condition did. Previous studies suggested that such linguistic bias and social context can alter human emotional reactions [17, 18]. According to a report by McRae et al., even a neutral facial expression evoked emotions when it followed additional linguistic information (“his/her son is left in the burning building”) [17].

Our results demonstrate that whether linguistic information matches a facial expression can alter not only emotional reactions toward the other person, but also important aspects of social interaction, such as trustworthiness and the amount of donations. Thus, consistency between linguistic information and facial expression is important in communication between humans.

In this study, the condition in which a facial expression was incongruent with linguistic information significantly enhanced parietal lobe activity. The parietal lobe has been reported to increase its activity when an inconsistency is presented between intention and sensation, or between a color word and the ink color in which it is printed (Stroop task) [19, 20].

Using the task-switching paradigm, Liston et al. observed that the parietal lobe can detect inconsistency in sensory information [21]. In the present study, parietal lobe activity increased when linguistic information and facial expression were inconsistent, suggesting that the inconsistency was detected. The parietal lobe has also been reported to play a role in mentalizing [22, 23].

Mentalizing is the ability to infer another person's state of mind. A previous report suggested that the parietal lobe was activated when the mental state of a character in a story was inferred [22]. Together, our results suggest that inconsistency between linguistic information and facial expression creates difficulty in mentalizing and thus activates the parietal lobe. Our results demonstrate the importance of congruence between language and facial expression through psychological and neuroscientific approaches. We therefore suggest that congruence between language and facial expression is necessary to build trusting relationships between individuals.

The current study has several limitations. First, the subjects were limited to healthy university students; additional studies of older adults, who have richer experience of social interactions, are required. Second, because the data were collected by EEG, we could not detect responses of deep brain regions such as the amygdala; functional MRI or similar methods should be employed in future studies. Nevertheless, the involvement of cortical activity is strongly suggested in the present task, because it employed not only facial expressions but also linguistic information. In addition, our EEG analyses were limited to the comparison between the language-face congruent and incongruent conditions. Further studies with different language stimuli and facial expressions are needed; these may reveal additional activity in the frontal lobe, driven not only by incongruence between modalities but also by varying linguistic information.

Despite the limitations mentioned above, our results clearly indicate that incongruence between linguistic information and facial expression decreases the perceived trustworthiness of a person and that observing this incongruence increases parietal lobe activity. Studying the effects of combined language and facial expressions is important, since real communication involves not only facial expressions but also various social contexts.

Conflict of Interests

The authors declare that there is no conflict of interests.