Affiliation:
1. Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, 450000, China
Abstract
Visual encoding models typically use deep neural networks to describe the visual cortex's response to external stimuli. Motivated by biological findings that large receptive fields, built with large convolutional kernels, improve the performance of convolutional encoding models, and by recent scaling laws, this article investigates how large-kernel encoding models behave at larger parameter scales. We propose a large-parameter, large-kernel convolutional framework for encoding visual functional magnetic resonance imaging (fMRI) activity. The framework consists of three parts. First, a stimulus-image feature extraction module is built from a large-kernel convolutional network, with the number of channels increased to expand the framework's parameter count. Second, a multi-subject fusion module enlarges the training data to match the increase in parameters. Third, a voxel mapping module maps the stimulus-image features to fMRI signals. Compared with large-kernel visual encoding networks at the base parameter scale, our framework improves encoding performance by approximately 7% on the Natural Scenes Dataset, the dedicated dataset of the Algonauts 2023 Challenge. Further analysis shows that the framework trades off encoding performance against trainability. This paper confirms that expanding the parameter count of visual encoding models can yield performance improvements.
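To make the three-module design concrete, the following is a minimal sketch assuming a PyTorch implementation; the abstract does not specify the authors' code, so every name and hyperparameter here (LargeKernelEncoder, kernel_size=31, the channel width, and the per-subject voxel counts) is an illustrative assumption, not the published architecture.

# Hypothetical sketch of the three modules named in the abstract:
# (1) large-kernel feature extraction, (2) multi-subject fusion via
# shared backbone + per-subject heads, (3) voxel mapping.
import torch
import torch.nn as nn

class LargeKernelEncoder(nn.Module):
    """Stimulus-image feature extraction with a large depthwise kernel;
    widening `channels` is what scales up the parameter count."""
    def __init__(self, channels=256, kernel_size=31):  # sizes are assumptions
        super().__init__()
        self.stem = nn.Conv2d(3, channels, kernel_size=4, stride=4)
        # Depthwise convolution with a large kernel -> large receptive field
        self.large_kernel = nn.Conv2d(channels, channels, kernel_size,
                                      padding=kernel_size // 2, groups=channels)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        h = self.large_kernel(self.stem(x))
        return self.pool(h).flatten(1)  # (batch, channels)

class VoxelMapper(nn.Module):
    """Voxel mapping module: linear read-out from image features to
    per-voxel fMRI responses."""
    def __init__(self, channels, n_voxels):
        super().__init__()
        self.readout = nn.Linear(channels, n_voxels)

    def forward(self, feats):
        return self.readout(feats)

# Multi-subject fusion, read here as pooling training pairs from several
# subjects through a shared backbone, with one read-out head per subject.
encoder = LargeKernelEncoder(channels=256)
heads = {subj: VoxelMapper(256, n_vox)
         for subj, n_vox in {"subj01": 10000, "subj02": 9500}.items()}  # voxel counts invented

images = torch.randn(8, 3, 224, 224)     # a batch of stimulus images
pred = heads["subj01"](encoder(images))  # predicted fMRI signals, (8, 10000)

Under this reading, scaling the framework means widening the shared encoder, while the fusion of subjects supplies the extra training data the abstract says is needed to accommodate the added parameters.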
Funder
National Natural Science Foundation of China
Major Projects of Technological Innovation 2030 of China
Publisher
Oxford University Press (OUP)