Abstract
We present an analytical framework for predicting local brain activity in uncontrolled experimental conditions from multimodal recordings of participants’ behavior, and we apply it to a corpus of participants conversing with either another human or a conversational humanoid robot. The framework consists of extracting high-level features from the raw behavioral recordings and using them to dynamically predict binarized fMRI-recorded local brain activity. The objective is to identify the behavioral features required for this prediction, and their relative weights, as a function of the brain area under investigation and the experimental condition. To validate the framework, we use a corpus of uncontrolled conversations between participants and a human or robotic agent, focusing on brain regions involved in speech processing and, more generally, in social interaction. The framework not only predicts local brain activity significantly better than chance, it also quantifies the weights of the behavioral features required for this prediction, depending on the brain area under investigation and on the nature of the conversational partner. In the left superior temporal sulcus, perceived speech is the most important behavioral feature for predicting brain activity, regardless of the agent, whereas several features, which differ between the human and robot interlocutors, contribute to the prediction in regions involved in social cognition, such as the temporoparietal junction. This framework therefore allows us to study how multiple behavioral signals from different modalities are integrated in individual brain regions during complex social interactions.
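To make the pipeline concrete, the sketch below illustrates one possible instantiation of the prediction step, under explicit assumptions: the behavioral feature names, the median-split binarization of the ROI signal, and the choice of a sparse logistic classifier are illustrative placeholders, not the authors' exact implementation.

```python
# Minimal sketch of the prediction framework, assuming hypothetical features and
# a median-split binarization; the real pipeline may differ in all of these choices.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-timepoint behavioral features extracted from multimodal recordings
# (e.g., perceived speech, produced speech, gaze on the partner's face, head motion).
feature_names = ["perceived_speech", "produced_speech", "gaze_on_face", "head_motion"]
n_timepoints = 400
X = rng.normal(size=(n_timepoints, len(feature_names)))      # behavioral feature matrix
roi_bold = 0.8 * X[:, 0] + rng.normal(size=n_timepoints)     # toy BOLD signal for one ROI

# Binarize the ROI time series (here: median split) to obtain target labels.
y = (roi_bold > np.median(roi_bold)).astype(int)

# Sparse logistic regression: fitted weights quantify each behavior's contribution.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
accuracy = cross_val_score(clf, X, y, cv=5).mean()           # compare against chance (0.5)
clf.fit(X, y)

print(f"cross-validated accuracy: {accuracy:.2f}")
for name, weight in zip(feature_names, clf.coef_[0]):
    print(f"{name:>16s}: weight = {weight:+.2f}")
```

Repeating such a fit per brain region and per experimental condition (human vs. robot partner) would yield the region- and condition-specific feature weights discussed in the abstract.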
Publisher
Public Library of Science (PLoS)