Affiliation:
1. State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences, Chinese Academy of Sciences
3. Hefei Comprehensive National Science Center, Institute of Artificial Intelligence
Abstract
We can recognize the dynamic world quickly and accurately because we extract invariants from highly variable scenes, a process that can be continuously optimized through visual perceptual learning. It is widely accepted that more stable invariants are perceived preferentially in the visual system, but how the structural stability of invariants affects perceptual learning remains largely unknown. Following Klein's Erlangen program, we designed three geometrical invariants with varying levels of stability for perceptual learning: projective (e.g., collinearity), affine (e.g., parallelism), and Euclidean (e.g., orientation) invariants. We found that the learning effects of low-stability invariants could transfer to those with higher stability, but not vice versa. To uncover the mechanism of this asymmetric transfer, we used deep neural networks to simulate the learning procedure and further discovered that more stable invariants were learned faster. Additionally, analysis of the network's weight changes across layers revealed that training on less stable invariants induced greater changes in lower layers. These findings suggest that perceptual learning of different invariants is consistent with the Klein hierarchy of geometries and that the relative stability of the invariants plays a crucial role in the mode of learning and generalization.
Publisher
eLife Sciences Publications, Ltd