Journal of Head Trauma Rehabilitation: July/August 2014, Volume 29, Issue 4
doi: 10.1097/HTR.0b013e31829dded6
Original Articles

Affect Recognition in Traumatic Brain Injury: Responses to Unimodal and Multimodal Media

Zupan, Barbra PhD; Neumann, Dawn PhD


Author Information

Department of Applied Linguistics, Brock University, St. Catharines, Ontario, Canada (Dr Zupan); and Department of Physical Medicine and Rehabilitation, Indiana University School of Medicine and Rehabilitation Hospital of Indiana, Indianapolis, Indiana (Dr Neumann).

Corresponding Author: Barbra Zupan, PhD, Department of Applied Linguistics, Brock University, 500 Glenridge Avenue, St. Catharines, Ontario, Canada, L2S 3A1 (bzupan@brocku.ca).

This work was funded through the Humanities Research Institute at Brock University in St. Catharines, Ontario, and by the Cannon Research Center at Carolinas Rehabilitation in Charlotte, North Carolina.

The authors declare no conflicts of interest.


Abstract

Objectives:

To compare affect recognition by people with and without traumatic brain injury (TBI) for (1) unimodal and context-enriched multimodal media; (2) positive (happy) and negative emotions; and (3) neutral multimodal stimuli.

Participants:

A total of 60 people with moderate to severe TBI and 60 matched controls.

Measures:

Three affect recognition tasks: (1) facial affect, (2) vocal affect, and (3) context-enriched multimodal affect (film clips).

Results:

Compared with controls, people with TBI scored significantly lower on both unimodal measures but not on the multimodal measure. Within-group comparisons revealed that people with TBI were better at recognizing affect from multimodal than from unimodal stimuli. Participants with TBI who were categorized as having impaired facial/vocal affect recognition were less accurate at recognizing all emotions, including happy, than unimpaired participants. Neutral stimuli were identified less accurately by participants with TBI than by controls.

Conclusion:

Context-enriched multimodal stimuli may enhance affect recognition for people with TBI. People with TBI who have impaired affect recognition may have problems identifying both positive (happy) and negative expressions. Furthermore, people with TBI may perceive affect when there is none.

IN TRAUMATIC BRAIN INJURY (TBI), areas of the brain commonly associated with emotion are particularly susceptible to damage1; as a result, people with TBI often exhibit impaired recognition of others' emotions, despite the heterogeneity of their injuries.2–12 Quality of interpersonal interactions and relationships partially depends on accurate interpretation of others' emotions via nonverbal, semantic, and contextual cues.10,13–15 Thus, it is important to consider how these cues are processed under various conditions so that interventions that target specific issues can be designed.

A recent meta-analysis has shown that 13% to 39% of people with moderate to severe TBI are significantly impaired at recognizing emotion from static facial cues.16 Recognition of vocal affect is also problematic.3,5,9,10,17,18 For instance, Dimoska et al3 found that people with moderate to severe TBI had more difficulty than controls at discriminating and labeling vocal expressions. Recently, studies have investigated whether adding situational context helps or hinders affect recognition after TBI. Situational context provides important information about circumstances, such as whether or not the situation is consistent with that person's wants and expectations, which would affect how that person feels.19 As such, situational context might be expected to foster accurate inferences about how a person is feeling. However, the findings from studies thus far have been mixed. Croker and McDonald2 reported that people with TBI were better able to match and label photographs depicting various facial emotion expressions when social context was added. In contrast, Milders and colleagues9,20 reported that people with TBI showed significant difficulty in their ability to identify and explain the intentions and feelings of others using contextual information. Thus, it remains unclear what effect contextual information has on emotion recognition in people with TBI.

Prior work has helped inform the development of treatment programs for people with TBI. However, because previous studies focused primarily on emotion recognition from isolated cues (rather than the multimodal cues more characteristic of everyday interactions), further work is needed using ecologically valid stimuli. To our knowledge, only 2 studies have attempted to compare how people with TBI process affective cues from unimodal (ie, face only and voice only) versus multimodal media. McDonald and Saunders8 presented 34 participants with severe TBI and 28 controls with unimodal and multimodal media extracted from the Emotion Evaluation Test,21 a video-based test that displays audiovisual emotion expressions within the context of an emotionally ambiguous dialogue. Participants with TBI had significantly more difficulty recognizing emotions from multimodal media and from unimodal vocal affect cues. Inconsistent with prior findings,16 their participants with TBI performed as well as controls at recognizing emotions from unimodal facial affect stimuli (static and dynamic).8 In a similar study by Williams and Wood,12 64 participants with TBI and 64 matched controls were presented with multimodal stimuli from the Emotion Evaluation Test and unimodal static facial stimuli (Ekman 60 Faces Test22). Participants with TBI were significantly less accurate at identifying emotions from both the unimodal and multimodal stimuli. However, within-group comparisons revealed that participants with TBI were more accurate at recognizing emotions from the multimodal media than from unimodal facial stimuli. This result contradicts McDonald and Saunders'8 findings that participants with TBI better recognized unimodal facial affect than affect from multimodal media. Finally, Williams and Wood12 found that participants with TBI had significantly more difficulty than controls at recognizing expressions of positive and negative affect as well as neutral displays.


CURRENT STUDY

Although both McDonald and Saunders8 and Williams and Wood12 investigated affect recognition in response to unimodal and multimodal media, neither study considered the influence of semantic or contextual information on the interpretation of emotion expressions. Thus, it remains unclear whether the presence of such information might improve affect recognition through increased intersensory redundancy or hinder recognition because of increased cognitive demands (eg, attention and inferencing). The aim of this study was to expand upon the work of these investigators by assessing affect recognition for multimodal media stimuli that also include contextual information. Combining contextual information with multimodal stimuli offers greater ecological validity for affect recognition assessments. This study also differs from former research in that our multimodal stimuli are derived from commercial television and film excerpts. Although still not genuine expressions of emotion, these commercial-grade stimuli, which use professional actors to depict real-world social experiences, are likely to be more realistic and believable than emotion stimuli designed for research purposes.

Experiencing and expressing appropriate responses to everyday emotional situations requires the ability to simultaneously interpret a combination of nonverbal emotion cues, including contextual ones. Determining how well people with TBI can do this, and what limits their performance, may influence the design of interventions for improving affect recognition in this population. If contextual information enhances affect recognition, people with TBI may benefit from interventions that match various affective cues (eg, vocal emotion expressions) to contextual situations. Ecologically valid stimuli, such as the film and television clips used here, could potentially serve as training tools, allowing clinicians to demonstrate subtle and obvious emotion expressions and point out contextual cues that might help patients infer the emotions of others.

Objectives

  1. To compare affect recognition by people with and without TBI for unimodal (face only and voice only) and context-enriched multimodal (film clips) media. Given the results of McDonald and Saunders8 and Williams and Wood,12 we predicted that participants with TBI would perform more poorly than controls in all media conditions.
  2. To compare affect recognition for positive (happy) and negative emotions by people with and without TBI. Although it has been well documented that people with TBI have more difficulty recognizing negatively valenced than positively valenced facial stimuli,2,6,11,12,23 we felt it was important to explore whether valence would continue to influence affect recognition in context-enriched multimodal stimuli. Although we used only 1 positive exemplar (happy), this exploratory objective provides useful preliminary data regarding valence effects with this novel stimulus set for future studies. We hypothesized that people with TBI would have more difficulty identifying emotions with a negative valence than those with a positive valence.
  3. To compare participants' recognition of neutral stimuli in the context-enriched multimodal condition. Although recognition of neutral was investigated by Williams and Wood,12 we were interested in the effect of contextual information, which may facilitate participants' interpretation of the neutral multimodal stimuli.


METHODS

Participants

Sixty adults (37 males and 23 females) with moderate to severe TBI and 60 age-matched controls (38 males and 22 females) were recruited from local rehabilitation centers in Charlotte, North Carolina, and St. Catharines, Ontario, Canada (30 from each group at each site). Controls were either staff members or volunteers at these centers or friends and family members of the participants with TBI. People with TBI had to be at least 6 months postinjury and have a Glasgow Coma Scale (GCS) score, posttraumatic amnesia (PTA) duration, or loss of consciousness (LOC) duration indicative of a moderate to severe TBI (GCS ≤ 12; PTA ≥ 24 hours; LOC ≥ 24 hours). Participants were excluded for the following reasons: presence of a developmental affective disorder, acquired neurological disorder, or major psychiatric disorder, and/or impaired vision or hearing that would prohibit full participation in the experimental tasks. In addition, control participants were excluded if they had a history of TBI of any severity, including concussion resulting in postconcussive syndrome.
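
To make the combined severity criterion concrete, a minimal screening sketch follows. The function and argument names are ours for illustration, and the rule that any one documented index suffices is our reading of the criterion above, not code from the study protocol.

```python
def meets_severity_criteria(gcs=None, pta_hours=None, loc_hours=None):
    """Return True if any documented index indicates moderate to severe TBI:
    GCS <= 12, PTA >= 24 hours, or LOC >= 24 hours."""
    checks = []
    if gcs is not None:
        checks.append(gcs <= 12)          # Glasgow Coma Scale score
    if pta_hours is not None:
        checks.append(pta_hours >= 24)    # posttraumatic amnesia duration
    if loc_hours is not None:
        checks.append(loc_hours >= 24)    # loss of consciousness duration
    return any(checks)

# Example: a record with GCS 7 qualifies even if PTA and LOC are undocumented.
print(meets_severity_criteria(gcs=7))  # True
```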

Participants with TBI were between the ages of 21.6 and 63 years (mean = 40.98; SD = 12.45), and the age range of the control participants was 18 to 63.2 years (mean = 40.64; SD = 13.04). There was no significant age difference between groups [F(1, 118) = 0.021; P = 0.884]. Table 1 lists demographics and injury-related variables for the participants with TBI. Given that this study aimed to expand on the work of McDonald and Saunders8 and Williams and Wood,12 Table 1 also provides demographic comparisons from these studies.


English was the primary language of all participants, and all were able to comprehend and respond to basic written and oral language, as indicated via self-report and interaction with the investigator during the consent process. On average, participants with TBI had completed 14.43 years of education (SD = 2.29): 17 had earned a college or university degree, 23 had attained some postsecondary education or training (eg, a skills certificate), 16 had completed high school, and 4 had completed no more than grade school. Control participants had completed an average of 15.72 years of education (SD = 1.96): 28 had obtained a college or university degree, 22 reported having some postsecondary education or training, and 10 had completed high school.

Measures and Procedures

This study includes a subset of measures administered to participants as part of a larger investigation.24 Only measures relevant to this study are discussed here. Characteristic features of these measures are provided in Table 2, as are features of the affect measures used by McDonald and Saunders8 and Williams and Wood.12 Participants in this study were seen either individually or in small groups (maximum = 3), and the order of tasks was randomized across testing sessions.

Unimodal Measure: Facial Affect Recognition

The Diagnostic Analysis of Nonverbal Accuracy 2-Adult Faces (DANVA-Faces)25 was selected for the facial affect task because of its prior use with people with TBI.10,14 The DANVA-Faces includes 24 colored photographs of young adults portraying happy, sad, angry, and fearful facial expressions. Each photograph was displayed for a total of 5 seconds, and participants were instructed to select the emotion portrayed from a set of 5 choices (happy, sad, angry, fearful, and I don't know). This standardized tool is reported to have good internal consistency and high test–retest reliability. In addition, it has been shown to correlate well with measures that assess related constructs such as personality and social competence (see Nowicki25 for a summary of validity evidence).

Unimodal Measure: Vocal Affect Recognition

The Diagnostic Analysis of Nonverbal Accuracy 2-Adult Paralanguage (DANVA-Voices)25 has also been used previously with people with TBI.10,14 The DANVA-Voices consists of 24 repetitions of a single, emotionally neutral sentence (“I'm going out of the room now, and I'll be back later”). Participants heard each sentence only once and were instructed to indicate the emotion portrayed through the tone of voice by selecting 1 of 5 possible responses: happy, sad, angry, fearful, and I don't know. As indicated by Nowicki,25 the DANVA-Voices has good internal consistency and test–retest reliability. Good criterion validity has also been demonstrated in studies comparing performance on the DANVA-Voices with performance on measures of related constructs (ie, social competence).25

Context-Enriched Multimodal Measure: Film Clips

Fifteen film clips were extracted from commercial movies and television programs because of their presumed ecological validity (see Appendix). Unlike the video vignettes used in previous studies, which were generated for research,8,12 cinematic film clips provide a natural way to include context and appear very real because of their approximation of everyday situations.26 Clips were chosen according to the selection criteria suggested by Gross and Levenson.27 In a previous study of 70 typical adults, the film clips included in this study successfully elicited the target emotion 84.73% of the time (range, 64%-96%). The clips ranged in length from 45 to 103 seconds (mean = 71.87 seconds). Three restricted randomized orders of the film clips were created so that the series of clips always started with a neutral portrayal and 2 clips that targeted the same emotion never occurred consecutively. Participants were assigned to 1 of the 3 randomized orders before recruitment using a computerized random number generator.
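
One simple way to generate such restricted orders is rejection sampling: shuffle the clips, then accept only an order that begins with a neutral clip and never repeats a target emotion back to back. The sketch below assumes 3 clips per emotion category (15 clips total with 3 neutral clips is described above; the even split across the remaining emotions is our assumption for illustration).

```python
import random

def restricted_order(clips, seed=None):
    """clips: list of (clip_id, target_emotion) pairs. Returns an order that
    starts with a neutral clip and has no two consecutive same-emotion clips."""
    rng = random.Random(seed)
    while True:  # rejection sampling; valid orders are plentiful for this set
        order = rng.sample(clips, len(clips))
        if order[0][1] == "neutral" and all(
                a[1] != b[1] for a, b in zip(order, order[1:])):
            return order

emotions = ["neutral", "happy", "sad", "angry", "fearful"]
clips = [(f"clip{i:02d}", e) for i, e in enumerate(emotions * 3)]
print([e for _, e in restricted_order(clips, seed=1)])
```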


RESULTS

Statistical Analysis

Descriptive statistics were calculated for all demographics, injury-related variables, and outcome measures. Average years of education differed significantly between groups [F(1, 103) = 9.39; P = 0.003; ηp2 = 0.05]. Education was significantly correlated with performance on the DANVA-Faces (r = 0.30; P = 0.002) and DANVA-Voices (r = 0.34; P = 0.001). Therefore, group comparisons involving the DANVA-Faces or DANVA-Voices included education as a covariate.

Analyses were conducted using mean percentage accuracy scores for the DANVA-Faces, DANVA-Voices, and film clips. Positive affect scores consisted of happy items; scores for negative affect were determined by collapsing responses to sad, angry, and fearful stimuli into a single category. Mixed-model analyses of covariance (ANCOVAs) were conducted to examine affect recognition performance by TBI and control participants within and across the 3 types of media, using education as a covariate. When examining differences across media, we excluded neutral from the film clip accuracy scores so that only emotions represented in all 3 tasks were analyzed (happy, sad, angry, and fearful). Neutral clips were included when analyses focused solely on film clips. Main effects found in the ANCOVA were explored further to compare performance across and within tasks. Finally, a 1-way analysis of variance was conducted to compare group performance on responses to neutral film clip stimuli.
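
For illustration, the sketch below reproduces two pieces of this pipeline: the valence collapsing and a univariate ANCOVA of group on accuracy with education as a covariate. The data frame holds placeholder values, not study data, and this is not the authors' analysis code; it is a minimal sketch of the kind of model described.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Placeholder data for illustration only (NOT study data): % accuracy per emotion.
df = pd.DataFrame({
    "group":     ["TBI"] * 4 + ["control"] * 4,
    "education": [12, 14, 16, 12, 16, 18, 14, 16],
    "happy":     [92, 88, 96, 83, 96, 100, 92, 96],
    "sad":       [58, 67, 75, 50, 83, 92, 75, 83],
    "angry":     [50, 58, 67, 42, 75, 83, 67, 75],
    "fearful":   [42, 50, 58, 33, 67, 75, 58, 67],
})

# Collapse valence: positive = happy; negative = mean of sad, angry, and fearful.
df["positive"] = df["happy"]
df["negative"] = df[["sad", "angry", "fearful"]].mean(axis=1)

# Univariate ANCOVA: group effect on negative-affect accuracy, education as covariate.
model = smf.ols("negative ~ C(group) + education", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # Type II F tests, as in an ANCOVA table
```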

Relations Between Affect Recognition and Injury-Related Variables (TBI Only)

Spearman correlations indicated no significant relations of age at injury, years postinjury, or injury severity (GCS, PTA, LOC) with affect recognition scores on the DANVA-Faces, DANVA-Voices, and film clips for people with TBI after applying an α of less than 0.003 to account for multiple comparisons (see Table 3).
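
The α threshold of less than 0.003 is consistent with a Bonferroni-style division of 0.05 by the number of correlations run (5 injury variables × 3 tasks = 15 tests is our reading; the paper does not state the count explicitly). A minimal sketch with placeholder data:

```python
from scipy.stats import spearmanr

n_tests = 5 * 3         # 5 injury variables x 3 tasks; the exact count is our assumption
alpha = 0.05 / n_tests  # ~0.0033, matching the reported "alpha less than 0.003"

# Placeholder per-participant values standing in for one cell of the grid.
years_postinjury = [0.5, 2.0, 3.5, 5.0, 7.5, 10.0, 12.5, 15.0]
film_clip_accuracy = [70, 85, 60, 90, 75, 80, 65, 88]

rho, p = spearmanr(years_postinjury, film_clip_accuracy)
print(f"rho = {rho:.2f}, p = {p:.3f}, significant = {p < alpha}")
```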

Relation of Affect Recognition With Testing Condition

Spearman rho analyses were conducted to investigate whether there was a relation between testing condition (ie, individual vs group testing and task order) and task performance. Thirty-five participants with TBI and 24 control participants were tested individually; 25 participants with TBI and 35 control participants were tested in small groups (maximum = 3). After applying an α of less than 0.008 to account for multiple comparisons, testing style (individual vs group) was not significantly correlated with performance on the DANVA-Faces, DANVA-Voices, or film clips for people with TBI (r = −0.214, P = 0.10; r = −0.123, P = 0.36; r = −0.264, P = 0.04, respectively) or controls (r = −0.047, P = 0.72; r = −0.027, P = 0.84; r = −0.176, P = 0.18, respectively). Similarly, Spearman correlations indicated no significant relation between the order in which the tasks were administered and performance on the DANVA-Faces, DANVA-Voices, and film clips for either group (r = −0.022, P = 0.87; r = −0.074, P = 0.58; r = 0.115, P = 0.38 and r = −0.248, P = 0.06; r = −0.008, P = 0.95; r = −0.018, P = 0.89, for TBI and controls, respectively).

Objective 1: To Compare Affect Recognition by People With and Without TBI in Response to Unimodal and Multimodal Emotion Stimuli

Figure 1 shows the mean performance accuracy for each task by participants with TBI and controls. An ANCOVA examining group differences across the 3 affect recognition tasks found significant main effects of group [F(1, 100) = 9.31; P = 0.003; ηp2 = 0.08] and task [F(2, 200) = 11.05; P < 0.001; ηp2 = 0.10], indicating that affect recognition differed by group (people with vs without TBI) and by media type (unimodal faces, unimodal voices, or film clips). Using a Bonferroni-adjusted α of 0.017 (0.05/3), follow-up univariate comparisons across groups indicated that people with TBI had significantly more errors than controls on the DANVA-Faces [F(1, 101) = 6.77; P = 0.005; ηp2 = 0.06] and the DANVA-Voices [F(1, 100) = 9.64; P = 0.002; ηp2 = 0.09]. No significant difference was found between groups on the context-enriched film clips [F(1, 118) = 2.12; P = 0.15; ηp2 = 0.02]. Within-group comparisons across media showed that both groups were better at recognizing emotions from multimodal than unimodal media (see Table 4 for paired sample t test results).


Because people with TBI had significantly more errors than controls for unimodal stimuli, we further explored whether people with TBI had more difficulty with faces than with voices or whether they were equally impaired for both types of stimuli. Because group averages can be skewed by the heterogeneity of the group, we compared the number of people with TBI who were classified as impaired (defined as scoring 2 SDs below the standardized norms) on each task with the number who were not impaired. The number of participants with TBI who were impaired at identifying facial emotion expressions (n = 29) was significantly greater than the number who were impaired at identifying vocal affect (n = 8) [χ2(1) = 17.81; P < 0.001].
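
A minimal reconstruction of this classification and count comparison follows. The normative values and scores below are placeholders (not the DANVA-2 norms), and whether the authors computed exactly this 2 × 2 chi-square, which yields χ2(1) ≈ 17.2 on these counts, close to the reported 17.81, is an assumption.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Classify impairment: 2 SDs below the standardized norm for a task.
norm_mean, norm_sd = 20.0, 2.5           # placeholder norms, NOT the DANVA-2 values
cutoff = norm_mean - 2 * norm_sd
scores = np.array([14, 21, 19, 12, 22])  # placeholder task scores (items correct)
impaired = scores < cutoff
print(impaired.sum(), "of", len(scores), "classified as impaired")

# Compare the reported impaired counts (faces: 29/60, voices: 8/60).
table = [[29, 60 - 29],   # DANVA-Faces: impaired, not impaired
         [8, 60 - 8]]     # DANVA-Voices: impaired, not impaired
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.5f}")
```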

Objective 2: To Compare Affect Recognition for Positive Versus Negative Emotions Across Unimodal Media

An ANCOVA was conducted to explore the impact of valence on response accuracy by participants with and without TBI across the unimodal media. Although our original objective was to compare across all media, the context-enriched multimodal media data were not included because previous analyses showed no group differences in overall responses for this task. Results revealed a significant group × valence interaction [F(1, 100) = 4.00; P = 0.04; ηp2 = 0.04], indicating that participants with TBI responded differently than controls to positive and negative stimuli. Follow-up ANCOVAs, using a Bonferroni-adjusted α level of 0.012 (0.05/4), revealed no significant group difference in recognizing positive affect on the DANVA-Faces [F(1, 101) = 0.322; P = 0.57; ηp2 = 0.003] or the DANVA-Voices [F(1, 100) = 0.026; P = 0.87; ηp2 = 0.00] task. However, participants with TBI had significantly more difficulty than controls at identifying negative emotion expressions in the DANVA-Faces task [F(1, 101) = 6.8; P = 0.01; ηp2 = 0.06] and the DANVA-Voices task [F(1, 100) = 10.85; P = 0.001; ηp2 = 0.10].

Responses to positive versus negative stimuli also differed by task (ie, face vs voice), as indicated by the significant valence × task interaction [F(1, 100) = 6.64; P = 0.01; ηp2 = 0.06]. Both groups identified positive expressions portrayed in the DANVA-Faces with significantly greater accuracy than negative ones [TBI: t(58) = 11.64, P < 0.001; controls: t(59) = 9.71, P < 0.001]. No significant difference was found for either group between responses to positive versus negative expressions portrayed in the DANVA-Voices task. However, people with TBI demonstrated a trend toward greater accuracy in identifying positive vocal affect expressions than negative ones. This trend led to the observed interaction shown in Figure 2.


We further explored whether participants who were classified as having significant affect recognition impairments had problems recognizing only negative emotional expressions or whether positive affect was affected as well. Table 5 details the mean accuracy of TBI participants with and without affect recognition impairment for positive versus negative stimuli on the DANVA-Faces and DANVA-Voices. Participants with TBI and impaired facial affect recognition made more errors on both positive and negative facial emotion expressions than participants with TBI who did not have impaired facial affect recognition [positive: F(1, 57) = 4.43, P = 0.04, ηp2 = 0.07; negative: F(1, 57) = 87.19, P < 0.001, ηp2 = 0.60]. Similarly, participants with TBI and impaired vocal affect recognition had significantly more difficulty identifying both happy vocal expressions [F(1, 56) = 7.00; P = 0.01; ηp2 = 0.11] and negative vocal emotion expressions [F(1, 56) = 33.93; P < 0.001; ηp2 = 0.38] than those who were unimpaired at vocal affect recognition.

Objective 3: To Compare Groups in Their Recognition of Neutral in Context-Enriched Multimodal Stimuli

The maximum number of errors possible for the neutral stimuli was 3. On average, people with TBI made 2.05 errors (SD = 1.02) and controls made 1.6 errors (SD = 0.89), a significant difference [F(1, 118) = 6.68; P = 0.01].
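
As a sketch of this final comparison, the 1-way analysis of variance on neutral-clip error counts can be run as below; the per-participant values are randomly generated placeholders, not study data.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Placeholder error counts out of 3 neutral clips, one value per participant.
tbi_errors = rng.integers(0, 4, size=60)
control_errors = rng.integers(0, 4, size=60)

F, p = f_oneway(tbi_errors, control_errors)
print(f"F(1, {len(tbi_errors) + len(control_errors) - 2}) = {F:.2f}, p = {p:.3f}")
```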


DISCUSSION

Responses to Unimodal Versus Context-Enriched Multimodal Stimuli

Participants with TBI were less accurate than controls at identifying unimodal (static) facial and vocal affect, but not at recognizing affect from our context-enriched multimodal media. In fact, people with TBI identified emotion expressions in the multimodal media (when neutral was excluded) almost as accurately as controls. We were surprised by this latter finding, which conflicts with other8,12 reports that people with TBI had more difficulty identifying emotion in multimodal media than controls. Our finding suggests that the people with TBI in this study may have used the contextual information contained in the film clips to facilitate their emotion recognition. However, because we did not also test multimodal stimuli that were not context enriched, we cannot definitively conclude that it was the addition of context that led to enhanced performance and not the use of other cues (eg, gestures and body posture). Presenting these film clips with the sound off may have provided further insight into what cues produced the improvement in emotion recognition in response to the multimodal versus unimodal media. Although we cannot be certain that the contextual information improved our participants' performance, the fact that it did not worsen performance suggests it would not be harmful to include contextual information in affect recognition interventions for people with TBI to achieve greater ecological validity.

It is important to note that these film clips included extraneous cues that were not controlled for and that could have contributed to our participants' recognition of emotion. For instance, 9 of the 15 film clips had accompanying affective background music, which is often used in film to induce specific emotions28,29 and has been shown to activate many of the same neural structures important to emotion perception.30 Our set of film clips was not large enough to permit direct comparison of emotion recognition for clips with and without background music; without this comparison, we cannot say for certain that participants were better at recognizing emotions from the film clip stimuli because of the added contextual information rather than the presence of affective music. If further study using film clips without affective music indicates that contextual cues facilitate affect recognition for people with TBI, therapeutic interventions could be designed to initially maximize these cues and then slowly diminish them while drawing attention to other relevant nonverbal cues. The inclusion of background music, a limitation of this study, could itself be a focal point for future investigations comparing clips with and without accompanying music; like context, music may have important implications for treatment approaches.

Responses to our context-enriched multimodal stimuli seem to suggest that affect recognition in everyday social interactions may be less challenging than isolated testing indicates. However, people with TBI and their loved ones continue to report affect recognition to be a considerable problem.14,24 It seems, then, that although the film clips may be more ecologically valid than unimodal stimuli, the information available still differs from what actually occurs during interpersonal interactions. People with TBI may have more difficulty when they are actually involved in an emotional interaction in a normal environment with all of its distractions, rather than observing an emotional interaction within a controlled environment where they have been instructed to focus on the available emotion cues.

In addition to comparing unimodal with multimodal recognition, this study compared affect recognition by people with TBI across facial and vocal modalities to determine whether one is more problematic than the other. A significantly greater number of people with TBI were impaired for facial (29 participants) versus vocal (8 participants) affect recognition. It seems that facial affect recognition poses a greater challenge for people with TBI, even though recognition of isolated vocal affect is reported to be more difficult for the general population.31–33

Affect Recognition for Positive and Negative Emotions

The second objective of this study was to investigate the impact of valence on the identification of emotion expressions across the different forms of media. Because no significant difference was found between groups for affect recognition on the film clips task, the multimodal media were not considered in this analysis. At first glance, it may seem that participants with TBI were able to competently recognize positive affect because they were better at recognizing happy facial and vocal expressions than negative ones. However, once we categorized our TBI participants as either impaired or unimpaired, we found that participants with TBI and impaired affect recognition actually had significantly more trouble than the unimpaired TBI group at recognizing both positive and negative affect for faces and voices.

These results contrast with the literature on valence that suggests recognition of positive expressions is not impaired for people with TBI.2,6,11,12,23 Although this study included only 1 positive exemplar (happy), this finding has important implications for the development of treatment programs, as it suggests that positively valenced emotions should be given as much credence in rehabilitation programs as negatively valenced ones. Future studies should include a wider range of positive and negative emotions to more thoroughly investigate the influence of valence on affect recognition for both unimodal and multimodal media.

Affect Recognition for Neutral Stimuli

Our final objective was to compare responses by people with and without TBI to context-enriched neutral multimodal stimuli. Williams and Wood12 found neutral to be particularly difficult for people with TBI to identify, even when given combined facial and vocal affect cues. Our results showed that although the intersensory redundancy offered by the context-enriched cues in the multimodal film clips led to high recognition accuracy for happy, sad, angry, and fearful, both groups of participants had difficulty identifying neutral. It seems that the recognition of neutral is more elusive than the recognition of positive and negative emotions, even in multimodal representations. This may be because neutral stimuli are more likely to convey affective information when presented alongside contextual information.34 Furthermore, participants with TBI had significantly more difficulty with the neutral film clips than controls. This has important implications for affect recognition during social interactions, because erroneously perceiving an emotion when there is none, and vice versa, may lead to communication breakdowns.34 As a result, these findings imply that treatment programs should also include training in the recognition of neutral stimuli.

Limitations and Future Directions

The use of cinematic film clips from commercial films and television to assess affect recognition in people with TBI was a novel approach that revealed interesting results. However, the lack of comparison with similar unimodal cues (eg, dynamic facial expressions) and various combinations of these cues (eg, contextually enriched film clips played with no sound) limits our ability to extend the findings to everyday situations. We compared performance on our multimodal media with performance on unimodal subtests from an existing standardized tool, the Diagnostic Analysis of Nonverbal Accuracy 2 (DANVA-2).25 Although these subtests have been shown to have good reliability and validity25 and have been previously used with people with TBI,14,24 they are not without limitations. One limitation of comparing performance on the DANVA-Faces and DANVA-Voices tasks with performance on the film clips was the lack of neutral exemplars in the DANVA-2. Because we have no direct comparison of how people with TBI responded to unimodal neutral expressions, we cannot discern whether neutral is simply an elusive state to identify or whether the addition of contextual cues led people with TBI to infer emotions that were not present. Given the implications this may have for social interaction, future studies should explore the perception of neutral in various forms of media.

The use of the DANVA-2 further limits interpretation of our results because we were unable to compare performance on unimodal visual stimuli that were equivalent, or at least similar, to the multimodal film clips (ie, dynamic facial expressions). Not only have dynamic facial stimuli been associated with enhanced neural activity,35–39 but they also contain important temporal cues within the facial movements themselves that facilitate overall performance.8,40 Because we did not test visual-only affect recognition of dynamic facial expressions, we do not know whether the dynamic nature of the facial stimuli contributed to the increased recognition of emotion for the film clips. Simply playing the film clips without sound would not have fully answered this question because their multimodal nature inherently includes visual contextual cues (eg, social environment) in addition to the nonverbal facial expressions. The contribution of each of these cues to multimodal affect recognition is an important question that awaits further investigation with larger sample sizes and carefully selected stimuli that limit the extraneous visual cues in dynamic visual-only displays.

Factors associated with the film clips themselves (eg, the previously discussed potential influence of affective music in some film clips) may also limit the conclusions we can make. Furthermore, familiarity with the film itself or with the actors may have led to improved performance. Familiarity has been shown to contribute to the discrimination of facial expressions.41,42 Although previous work with these same film clips did not show a relation between prior viewing and participant responses (B. Zupan and D. R. Babbage, unpublished data, 2010), we cannot be certain it was not a factor for our participants with TBI.

The use of only 1 positive exemplar among a limited set of emotion categories constrains our interpretation of valence effects because recognition of happy may have been influenced by the use of exclusion rules in response selection. Including additional positive emotions (eg, amusement and pleasant surprise) in future studies might provide a more accurate representation of the ability of people with TBI to recognize emotions that differ by valence. The use of a forced-choice format may also have led to increased recognition rates. However, we attempted to overcome the tendency for participants to guess or select an emotion they did not perceive by including an alternate response category (ie, I don't know), a practice shown to reduce artificial agreement.43

We did not collect data on depression or anxiety, both of which may have influenced affect recognition performance. Depression in people with TBI has been associated with changes in personality and mood;44 thus, the lack of these measures in this study is a limitation. Similarly, the generalization of results from this study to real-world functioning is restricted because we did not collect observer data from people who interact regularly with our participants with TBI. However, we are unaware of any standardized questionnaires for observers that focus on the ability of people with TBI to use affective cues within social contexts to interpret the feelings of others.


CONCLUSIONS

This study supports earlier reports of deficits in people with TBI for facial and vocal affect recognition. Of note was the finding that people with TBI identified as having impaired facial or vocal affect recognition were impaired for both positive (happy) and negative emotions. Recognition of neutral in context-enriched stimuli was also found to be problematic, a finding that has important implications for real-world interactions. Finally, our findings with context-enriched multimodal media stimuli suggest that people with TBI are better at recognizing emotions when they are given an array of affective cues that include contextual information. These results add to our knowledge in this area, but leave us with further questions about how people with TBI combine multiple affective cues in their interpretation of affect, and what cues most influence their interpretation of a complex social event. Some questions that have emerged include the following: (a) Do people with TBI use all available cues equally or do they compensate for deficits in 1 modality (eg, facial affect recognition) by relying more heavily on other available cues (eg, context)? (b) Is there a minimum number of affective cues necessary for success with affect recognition (eg, face and voice only)? (c) Does successful affect recognition for people with TBI depend on a specific combination of affective cues (eg, face, posture, and context)? (d) How are other emotions affected? (e) Does the presence of affective music impact recognition? Answering these questions will assist us in developing more suitable and rigorous affect recognition interventions for people with TBI.


REFERENCES

1. Radice-Neumann D, Zupan B, Babbage DR, Willer B. Overview of impaired facial affect recognition in persons with traumatic brain injury. Brain Inj. 2007;21(8):807–816.

2. Croker V, McDonald S. Recognition of emotion from facial expression following traumatic brain injury. Brain Inj. 2005;19(10):787–799.

3. Dimoska A, McDonald S, Pell MC, Tate RL, James CM. Recognizing vocal expressions of emotion in patients with social skills deficits following traumatic brain injury. J Int Neuropsychol Soc. 2010;16:369–382.

4. Green REA, Turner GR, Thompson WF. Deficits in facial emotion perception in adults with recent traumatic brain injury. Neuropsychologia. 2004;42:133–141.

5. Ietswaart M, Milders M, Crawford JR, Currie D, Scott CL. Longitudinal aspects of emotion recognition in patients with traumatic brain injury. Neuropsychologia. 2008;46:148–159.

6. Jackson HF, Moffat NJ. Impaired emotional recognition following severe head injury. Cortex. 1987;23(2):293–300.

7. McDonald S, Flanagan S. Social perception deficits after traumatic brain injury: interaction between emotion recognition, mentalizing ability, and social communication. Neuropsychology. 2004;18(3):572–579.

8. McDonald S, Saunders JC. Differential impairment in recognition of emotion across different media in people with severe traumatic brain injury. J Int Neuropsychol Soc. 2005;11:392–399.

9. Milders M, Fuchs S, Crawford JR. Neuropsychological impairments and changes in emotional and social behavior following severe traumatic brain injury. J Clin Exp Neuropsychol. 2003;25(2):157–172.

10. Spell LA, Frank E. Recognition of nonverbal communication of affect following traumatic brain injury. J Nonverbal Behav. 2000;24(4):285–300.

11. Hopkins MJ, Dywan J, Segalowitz SJ. Altered electrodermal response to facial expression after closed head injury. Brain Inj. 2002;16(3):245–257.

12. Williams C, Wood RLI. Impairment in the recognition of emotion across different media following traumatic brain injury. J Clin Exp Neuropsychol. 2010;32(2):113–122.

13. Marquardt TP, Rios-Brown M, Richburg T. Comprehension and expression of affective sentences in traumatic brain injury. Aphasiology. 2001;15(10/11):1091–1101.

14. Radice-Neumann D, Zupan B, Tomita MR, Willer B. Training emotion processing in persons with brain injury. J Head Trauma Rehabil. 2009;24(5):313–323.

15. Zupan B, Neumann D, Babbage DR, Willer B. The importance of vocal affect to bimodal processing of emotion: implications for individuals with traumatic brain injury. J Commun Disord. 2009;42:1–17.

16. Babbage DR, Yim J, Zupan B, Neumann D, Tomita MR, Willer B. Meta-analysis of facial affect recognition difficulties after traumatic brain injury. Neuropsychology. 2011;25(3):277–285.

17. Hornak J, Rolls ET, Wade D. Face and voice expression identification in patients with emotional and behavioural changes following ventral frontal lobe damage. Neuropsychologia. 1996;34(4):247–261.

18. Pell MD. Cerebral mechanisms for understanding emotional prosody in speech. Brain Lang. 2006;96:221–234.

19. Wierzbicka A. Defining emotion concepts. Cogn Sci. 1992;16:539–581.

20. Milders M, Ietswaart M, Crawford JR, Currie D. Impairments in theory of mind shortly after traumatic brain injury and at 1-year follow-up. Neuropsychology. 2006;20(4):400–408.

21. McDonald S, Flanagan S, Rollins J. The Awareness of Social Inference Test. San Antonio, TX: Harcourt Assessment; 2002.

22. Ekman P, Friesen WV. Pictures of Facial Affect. Palo Alto, CA: Consulting Psychologists Press; 1976.

23. Kucharska-Pietura K, Phillips ML, Gernand W, David AS. Perception of emotions from faces and voices following unilateral brain damage. Neuropsychologia. 2003;41(8):1082–1090.

24. Neumann D, Zupan B, Hammond F, Malec J. Relationships between affect recognition, empathy, and alexithymia after TBI. J Head Trauma Rehabil. 2013; [Epub ahead of print].

25. Nowicki S. The Manual for the Receptive Tests of the Diagnostic Analysis of Nonverbal Accuracy 2 (DANVA 2). Atlanta, GA: Department of Psychology, Emory University; 2008.

26. Rottenberg J, Ray RD, Gross JJ. Emotion elicitation using films. In: Coan JA, Allen JJB, eds. Handbook of Emotion Elicitation and Assessment. New York: Oxford University Press; 2007.

27. Gross JJ, Levenson RW. Emotion elicitation using films. Cogn Emot. 1995;9(1):87–108.

28. Busselle R, Bilandzic H. Fictionality and perceived realism in experiencing stories: a model of narrative comprehension and engagement. Commun Theory. 2008;18:255–280.

29. Zagalo N, Barker A, Branco V. Story reaction structures to emotion detection. Paper presented at: ACM Workshop on Story Representation, Mechanisms and Context; 2004; New York, NY.

30. Koelsch S, Fritz T, von Cramon DY, Müller K, Friederici AD. Investigating emotion with music: an fMRI study. Hum Brain Mapp. 2006;27:239–250.

31. Russell JA. Is there universal recognition of emotion from facial expression? A review of the cross-cultural studies. Psychol Bull. 1994;115(1):102–141.

32. Scherer KR. Vocal communication of emotion: a review of research paradigms. Speech Commun. 2003;40(1–2):227–256.

33. Wallbott HG, Scherer KR. Cues and channels in emotion recognition. J Pers Soc Psychol. 1986;51(4):690–699.

34. Carrera-Levillain P, Fernandez-Dols JM. Neutral faces in context: their emotional meaning and their function. J Nonverbal Behav. 1994;18(4):281–299.

35. Biele C, Grabowska A. Sex differences in perception of emotion intensity in dynamic and static facial expressions. Exp Brain Res. 2006;171(1):1–6.

36. Collignon O, Girard S, Gosselin F, et al. Audio-visual integration of emotion expression. Brain Res. 2008;1242:126–135.

37. LaBar KS, Crupain MJ, Voyvodic JT, McCarthy G. Dynamic perception of facial affect and identity in the human brain. Cereb Cortex. 2003;13(10):1023–1047.

38. Mayes AK, Pipingas A, Silberstein RB, Johnston P. Steady state visually evoked potential correlates of static and dynamic emotional face processing. Brain Topogr. 2009;22(3):145–157.

39. Schultz J, Pilz KS. Natural facial motion enhances cortical responses to faces. Exp Brain Res. 2009;194(3):465–475.

40. Cunningham DW, Wallraven C. Dynamic information for the recognition of conversational expressions. J Vis. 2009;9(13):1–17.

41. Dobel C, Geiger L, Bruchmann M, Putsche C, Schweinberger SR, Junghöfer M. On the interplay between familiarity and emotional expression in face perception. Psychol Res. 2008;72:580–586.

42. Wild-Wall N, Dimigen O, Sommer W. Interaction of facial expressions and familiarity: ERP evidence. Biol Psychol. 2008;77:138–149.

43. Frank MG, Stennett J. The forced-choice paradigm and the perception of facial expressions of emotion. J Pers Soc Psychol. 2001;80(1):75–85.

44. Ownsworth TL, Oei TPS. Depression after traumatic brain injury: conceptualization and treatment considerations. Brain Inj. 1998;12(9):735–751.

APPENDIX

Keywords:

affect recognition; media; multimodal; traumatic brain injury; unimodal

© 2014 Wolters Kluwer Health | Lippincott Williams & Wilkins
