Authors:
Zucchini Elena, Borzelli Daniele, Casile Antonino
Abstract
Observing the actions of others triggers, in our brain, an internal and automatic simulation of their unfolding in time. Here, we investigated whether the instantaneous internal representation of an observed action is modulated by the point of view under which the action is observed and by the stimulus type. To this end, we motion-captured the elliptical arm movement of a human actor and used these trajectories to animate a photorealistic avatar, a point-light stimulus, or a single dot, rendered either from an egocentric or an allocentric point of view. Crucially, the underlying physical characteristics of the movement were the same in all conditions. In a representational momentum paradigm, we then asked subjects to report the perceived last position of an observed movement at the moment at which the stimulus was randomly stopped. In all conditions, subjects tended to misremember the last configuration of the observed stimulus as being further forward than the last position actually shown. This misrepresentation was, however, significantly smaller for full-body stimuli than for point-light and single-dot displays, and it was not modulated by the point of view. It was also smaller when first-person full-body stimuli were compared with a stimulus consisting of a solid shape moving with the same physical motion. We interpret these findings as evidence that full-body stimuli elicit a simulation process that stays closer to the instantaneous veridical configuration of the observed movement, whereas impoverished displays (both point-light and single-dot) elicit a prediction that is further forward in time. This simulation process appears to be independent of the point of view under which the actions are observed.
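To make the displacement measure described above concrete, the sketch below shows one possible way to quantify a forward mislocalization along an elliptical trajectory: both the reported stop position and the veridical stop position are mapped to a phase angle on the ellipse, and the signed phase difference (positive meaning further along the movement direction) serves as a representational-momentum estimate. The parametrization, semi-axis values, function names, and example numbers are illustrative assumptions only, not the paper's actual stimuli or analysis pipeline.

```python
import numpy as np

def ellipse_point(phase, a=0.30, b=0.15):
    """Point on an ellipse (semi-axes a, b, in metres) at a given phase angle (rad).
    Illustrative parametrization; the study used motion-captured trajectories."""
    return np.array([a * np.cos(phase), b * np.sin(phase)])

def phase_of_point(p, a=0.30, b=0.15):
    """Recover the phase angle of a reported 2-D position by normalizing
    each coordinate with the corresponding semi-axis."""
    return np.arctan2(p[1] / b, p[0] / a)

def forward_displacement(reported_xy, veridical_phase, a=0.30, b=0.15):
    """Signed phase difference between the reported and the veridical stop position.
    Positive values mean the reported position lies further along the (assumed
    counter-clockwise) movement direction, i.e. a forward shift."""
    reported_phase = phase_of_point(np.asarray(reported_xy), a, b)
    diff = reported_phase - veridical_phase
    # Wrap to (-pi, pi] so a shift across the 0 / 2*pi boundary stays small.
    return (diff + np.pi) % (2 * np.pi) - np.pi

# Hypothetical example: the stimulus stopped at phase 1.00 rad and the subject
# reported a position slightly ahead of that point along the trajectory.
veridical_phase = 1.00
reported_xy = ellipse_point(1.08)  # simulated "further forward" response
print(forward_displacement(reported_xy, veridical_phase))  # ~ +0.08 rad
```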
Publisher
Springer Science and Business Media LLC
Cited by
1 article.