Disentangling object category representations driven by dynamic and static visual input

Authors:

Sophia Robert, Leslie G. Ungerleider, Maryam Vaziri-Pashkam

Abstract

Humans can label and categorize objects in a visual scene with high accuracy and speed—a capacity well-characterized with neuroimaging studies using static images. However, motion is another cue that could be used by the visual system to classify objects. To determine how motion-defined object category information is processed in the brain, we created a novel stimulus set to isolate motion-defined signals from other sources of information. We extracted movement information from videos of 6 object categories and applied the motion to random dot patterns. Using these stimuli, we investigated whether fMRI responses elicited by motion cues could be decoded at the object category level in functionally defined regions of occipitotemporal and parietal cortex. Participants performed a one-back repetition detection task as they viewed motion-defined stimuli or static images from the original videos. Linear classifiers could decode object category for both stimulus formats in all higher order regions of interest. More posterior occipitotemporal and ventral regions showed higher accuracy in the static condition, and more anterior occipitotemporal and dorsal regions showed higher accuracy in the dynamic condition. Classification accuracies significantly above chance were also observed in all regions when training and testing the SVM classifier across stimulus formats. These results demonstrate that motion-defined cues can elicit widespread, robust category responses on par with those elicited by luminance cues in regions of object-selective visual cortex. The informational content of these responses overlapped with, but also demonstrated interesting distinctions from, those elicited by static cues.

Significance Statement

Much research on visual object recognition has focused on recognizing objects in static images. However, motion cues are a rich source of information that humans might also use to categorize objects. Here, we present the first study to compare neural representations of several animate and inanimate objects when category information is presented in two formats: static cues or isolated dynamic cues. Our study shows that while higher order brain regions differentially process object categories depending on format, they also contain robust, abstract category representations that generalize across format. These results expand our previous understanding of motion-derived animate and inanimate object category processing and provide useful tools for future research on object category processing driven by multiple sources of visual information.
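The decoding analysis summarized above—training a linear classifier on response patterns within one stimulus format and testing it either within the same format or on the other—can be illustrated with a short sketch. The example below uses synthetic response patterns and scikit-learn's LinearSVC; the ROI size, run count, noise model, and all variable names are illustrative assumptions and do not reflect the authors' actual analysis pipeline.

    # Minimal sketch of within-format and cross-format category decoding with a
    # linear SVM, using synthetic data in place of real fMRI response patterns.
    # Array shapes, names, and the use of scikit-learn are assumptions for
    # illustration only, not the paper's analysis code.
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    n_categories = 6      # six object categories, as in the stimulus set
    n_runs = 8            # hypothetical number of fMRI runs
    n_voxels = 200        # hypothetical ROI size

    def simulate_patterns(category_signal):
        """Simulate one noisy response pattern per category per run."""
        X, y = [], []
        for run in range(n_runs):
            for cat in range(n_categories):
                X.append(category_signal[cat] + rng.normal(0, 1.0, n_voxels))
                y.append(cat)
        return np.array(X), np.array(y)

    # Shared category structure with format-specific perturbations, so that some
    # category information generalizes across formats and some does not.
    shared = rng.normal(0, 1.0, (n_categories, n_voxels))
    static_signal = shared + rng.normal(0, 0.5, (n_categories, n_voxels))
    dynamic_signal = shared + rng.normal(0, 0.5, (n_categories, n_voxels))

    X_static, y_static = simulate_patterns(static_signal)
    X_dynamic, y_dynamic = simulate_patterns(dynamic_signal)

    clf = LinearSVC(C=1.0, max_iter=10000)

    # Within-format decoding: cross-validated accuracy inside each format.
    acc_static = cross_val_score(clf, X_static, y_static, cv=n_runs).mean()
    acc_dynamic = cross_val_score(clf, X_dynamic, y_dynamic, cv=n_runs).mean()

    # Cross-format decoding: train on one format, test on the other,
    # averaging the two directions.
    acc_cross = np.mean([
        clf.fit(X_static, y_static).score(X_dynamic, y_dynamic),
        clf.fit(X_dynamic, y_dynamic).score(X_static, y_static),
    ])

    print(f"within static:  {acc_static:.2f}")
    print(f"within dynamic: {acc_dynamic:.2f}")
    print(f"cross-format:   {acc_cross:.2f} (chance = {1 / n_categories:.2f})")

In this sketch, cross-format accuracy above the one-in-six chance level plays the role of the paper's cross-format generalization result: it indicates category information in the response patterns that is shared between the static and dynamic formats rather than tied to either one.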

Publisher

Cold Spring Harbor Laboratory
