Visual Perception of 3D Space and Shape in Time - Part II: 3D Space Perception with Holographic Depth
Authors:
Bustanoby Isabella, Krupien Andrew, Afifa Umaima, Asdell Benjamin, Bacani Michaela, Boudreau James, Carmona Javier, Chandrashekar Pranav, Diamond Mark, Espino Diego, Gangal Arnav, Kittur Chandan, Li Yaochi, Mann Tanvir, Matamoros Christian, McCarthy Trevor, Mills Elizabeth, Nazareth Stephen, Nguyen Justin, Ochoa Kenya, Robbins Sophie, Sparakis Despoina, Ta Brian, Trengove Kian, Xu Tyler, Yamaguchi Natsuko, Yang Christine, Zafran Eden, Blaisdell Aaron P., Arisaka Katsushi
Abstract
Visual perception plays a critical role in navigating 3D space and extracting semantic information crucial to survival. Even though visual stimulation on the retina is fundamentally 2D, we effortlessly perceive the world around us in vivid 3D. This reconstructed 3D space is allocentric and faithfully represents the external 3D world. How can we recreate stable 3D visual space so promptly and reliably?

To solve this mystery, we have developed two new concepts: MePMoS (Memory-Prediction-Motion-Sensing) and NHT (Neural Holography Tomography). These models state that visual signal processing must be primarily top-down, starting from memory and prediction. Our brains predict and construct the expected 3D space holographically using traveling alpha brainwaves; 3D space is thus represented by three time signals in three directions.

To test this hypothesis, we designed reaction time (RT) experiments to observe the predicted space-to-time conversion, especially as a function of distance. We placed LED strips on a horizontal plane, arranged in either a 1D or a 2D lattice, to cover distances from close range up to 2.5 m or 5 m. Participants were instructed to report the observed LED patterns at various distances as promptly as possible. As expected, stimulation at the fixation cue location always gave the fastest RT, and additional RT delays were proportional to the distance from the cue. Furthermore, covert attention (without eye movements) and overt attention (with eye movements) produced the same RT delays, and binocular and monocular viewing resulted in the same RTs. These findings strongly support our predictions: the observed RT-depth dependence is indicative of the spatiotemporal conversion required for constructing allocentric 3D space. After all, we perceive and measure 3D space by time, as Einstein postulated a century ago.
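To make the abstract's central quantitative claim concrete, the sketch below simulates a linear RT-versus-distance relationship of the kind reported (fastest RT at the fixation cue, extra delay proportional to cue-to-stimulus distance) and recovers the slope with a least-squares fit. This is a minimal illustration only: the baseline RT, delay per meter, and noise level are assumed values, not parameters reported in the study.

```python
import numpy as np

# Minimal sketch (assumed parameters, not fitted values from the paper):
# RT = baseline + slope * distance-from-cue, plus trial-to-trial noise.
rng = np.random.default_rng(0)

baseline_rt_s = 0.250        # assumed RT at the fixation cue (seconds)
delay_per_meter_s = 0.020    # assumed extra RT per meter from the cue
noise_sd_s = 0.015           # assumed trial-to-trial variability (seconds)

# Simulated stimulus distances spanning the 0-5 m range covered by the LED strips
distances_m = rng.uniform(0.0, 5.0, size=200)
rts_s = (baseline_rt_s
         + delay_per_meter_s * distances_m
         + rng.normal(0.0, noise_sd_s, size=200))

# Recover the linear RT-vs-distance relation with an ordinary least-squares fit
slope, intercept = np.polyfit(distances_m, rts_s, deg=1)
print(f"estimated baseline RT: {intercept * 1000:.1f} ms")
print(f"estimated delay per meter: {slope * 1000:.1f} ms/m")
```

Under the space-to-time interpretation sketched in the abstract, the fitted slope would correspond to the time cost of converting one meter of depth, while the intercept captures processing common to all stimulus locations.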
Publisher
Cold Spring Harbor Laboratory
Cited by
2 articles.