Abstract
The unique ability to identify one’s own body and experience it as one’s own is fundamental to goal-oriented behavior and survival. However, the mechanisms underlying so-called body ownership are not yet fully understood. The plasticity of body ownership has been studied using two experimental paradigms or variations thereof: the Rubber Hand Illusion (RHI), in which the tactile stimuli are externally generated, and the moving RHI, which involves self-initiated movements. Grounded in these paradigms, evidence has demonstrated that body ownership is a product of bottom-up reception of self- and externally generated multisensory information and top-down comparison between the predicted and the actual sensory stimuli. Crucially, given the design of the current paradigms, in which one of the manipulated cues always involves a proximal modality sensing the body or its surface (e.g., touch), the contribution of sensory signals that pertain to the environment remains elusive. Here we propose that, like any robust percept, body ownership depends on the integration and prediction of all sensory stimuli, and will therefore depend on the consistency of purely distal sensory signals pertaining to the environment. To test this hypothesis, we create an embodied goal-oriented task and manipulate the predictability of the surrounding environment by changing the congruency of purely distal multisensory cues while keeping bodily and action-driven signals fully predictable. Our results empirically reveal that the way we represent our body is contingent upon all sensory stimuli, including purely distal and action-independent signals that pertain to the environment.