Multifeature Modeling of Cortical Tuning to Natural Emotional Images

Examining the cortical representation of real-world emotional faces.

Our lab has a long-standing interest in face processing, and in particular in whether individual differences in processing emotional face features are distinct from individual differences in processing identity-related face features (see Achaibou et al., SCAN, 2015; Bishop et al., Frontiers in Human Neuroscience, 2015). Recently, we have shifted our focus to real-world, or ‘natural’, emotional faces. By modeling natural variation in the features of faces photographed in real-world settings, we believe we can advance our understanding of how we process complex, non-basic facial expressions and facial cues to social traits such as dominance or trustworthiness. Traditional sets of posed face stimuli do not manipulate the features needed to investigate these aspects of facial information.

In an initial study, we used the voxel-wise modeling framework pioneered by the Gallant lab at UC Berkeley to fit multi-feature encoding models to fMRI data collected while participants viewed over 900 faces photographed in natural settings. Our findings from this study contradict classic face models that posit separate brain pathways for processing facial identity and facial expression. Instead, they suggest that social-affective dimensions encoding Valence of expression and social traits (Valence-ET), Expression arousal (Arousal-E), and Dominance in expression and age (Dominance-EA) underlie tuning to faces across occipital temporal cortex (Cowen et al., In Prep).
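For readers unfamiliar with the voxel-wise encoding framework, the sketch below illustrates the core idea on synthetic data: each voxel's response is regressed onto a matrix of stimulus features using regularized (ridge) regression, and tuning is assessed by how well the fitted model predicts held-out responses. The feature names, data dimensions, and hyperparameters here are placeholder assumptions for illustration only, not those used in our study.

```python
# Minimal, illustrative sketch of voxel-wise encoding model fitting.
# Synthetic data stand in for real stimulus features and fMRI responses;
# all sizes and hyperparameters below are arbitrary assumptions.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_images, n_features, n_voxels = 900, 30, 500     # hypothetical sizes
X = rng.standard_normal((n_images, n_features))   # stimulus feature matrix
                                                  # (e.g., expression/trait features per face)
true_weights = rng.standard_normal((n_features, n_voxels))
Y = X @ true_weights + rng.standard_normal((n_images, n_voxels))  # simulated voxel responses

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

# Fit one regularized linear model per voxel (RidgeCV handles multi-output targets).
model = RidgeCV(alphas=np.logspace(-2, 4, 7))
model.fit(X_train, Y_train)

# Evaluate each voxel by correlating its predicted and held-out responses.
Y_pred = model.predict(X_test)
voxel_r = np.array([np.corrcoef(Y_test[:, v], Y_pred[:, v])[0, 1]
                    for v in range(Y_test.shape[1])])
print(f"median held-out prediction r across voxels: {np.median(voxel_r):.2f}")
```

In practice, the fitted weights for each well-predicted voxel are then inspected to characterize which feature dimensions that voxel is tuned to.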


This work aligns with recent behavioral findings from social psychologists such as Alex Todorov (see figure below).


Ongoing directions

With funding from the NIH and the Leverhulme Trust, we are extending this research to address (i) the cross-modal representation of cues to emotional state and (ii) how cortical tuning to structural and semantic face features differs in individuals with Autism and Developmental Prosopagnosia.

Is there co-tuning in occipital temporal cortex to the emotional and semantic content of natural images?

In a second recent study, we adopted a similar voxel-wise multi-feature modeling approach to examine whether voxels in occipital temporal cortex show tuning to both the semantic and the emotional content of natural images. Based on rodent auditory conditioning studies, LeDoux and colleagues proposed that a fast subcortical (thalamo-amygdala) route supports rapid responses to emotional stimuli. However, recent studies have argued that there is no parallel ‘fast’ route for processing emotional visual stimuli in primates: visual information reaches occipital temporal cortex (OTC) as rapidly as it reaches the amygdala, and OTC supports far more complex semantic processing. Hence, we investigated whether tuning in OTC might allow for an integrated representation of image semantic and emotional content. To address this question, we modeled fMRI data acquired while participants viewed 1620 emotional natural images. Our findings indicate that many OTC voxels show combined tuning to image semantic category and image emotional content (Abdel-Ghaffar et al., In Prep; please do not reproduce without permission).
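To make the logic of the co-tuning question concrete, the sketch below shows one common way such a question can be probed within the voxel-wise framework, again using simulated data and assumed feature spaces rather than the study's actual features: compare how well a semantic-only model, an emotion-only model, and a joint model predict each voxel's held-out responses.

```python
# Illustrative comparison of semantic-only, emotion-only, and joint encoding models.
# All data are simulated; feature spaces and sizes are assumptions for this sketch.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_images, n_sem, n_emo, n_voxels = 1620, 20, 5, 300   # hypothetical sizes

X_sem = rng.standard_normal((n_images, n_sem))   # e.g., semantic-category features
X_emo = rng.standard_normal((n_images, n_emo))   # e.g., valence/arousal features
# Simulate voxels whose responses depend on both feature spaces.
Y = (X_sem @ rng.standard_normal((n_sem, n_voxels))
     + X_emo @ rng.standard_normal((n_emo, n_voxels))
     + rng.standard_normal((n_images, n_voxels)))

def heldout_r(X, Y):
    """Fit per-voxel ridge models; return prediction correlation on held-out images."""
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
    model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, Y_tr)
    Y_pred = model.predict(X_te)
    return np.array([np.corrcoef(Y_te[:, v], Y_pred[:, v])[0, 1]
                     for v in range(Y_te.shape[1])])

r_sem = heldout_r(X_sem, Y)
r_emo = heldout_r(X_emo, Y)
r_joint = heldout_r(np.hstack([X_sem, X_emo]), Y)

# Voxels where the joint model outperforms both single-feature-space models
# would be candidates for combined semantic + emotional tuning.
co_tuned = (r_joint > r_sem) & (r_joint > r_emo)
print(f"{co_tuned.mean():.0%} of simulated voxels best predicted by the joint model")
```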