Social-affective features drive human representations of observed actions

Authors:

Diana C. Dima¹, Tyler M. Tomita², Christopher J. Honey², Leyla Isik¹

Affiliation:

1. Department of Cognitive Science, Johns Hopkins University

2. Department of Psychological and Brain Sciences, Johns Hopkins University

Abstract

Humans observe actions performed by others in many different visual and social settings. What features do we extract and attend to when we view such complex scenes, and how are they processed in the brain? To answer these questions, we curated two large-scale sets of naturalistic videos of everyday actions and estimated their perceived similarity in two behavioral experiments. We normed and quantified a large range of visual, action-related, and social-affective features across the stimulus sets. Using a cross-validated variance partitioning analysis, we found that social-affective features predicted similarity judgments better than, and independently of, visual and action features in both behavioral experiments. Next, we conducted an electroencephalography experiment, which revealed a sustained correlation between neural responses to videos and their behavioral similarity. Visual, action, and social-affective features predicted neural patterns at early, intermediate, and late stages, respectively, during this behaviorally relevant time window. Together, these findings show that social-affective features are important for perceiving naturalistic actions and are extracted at the final stage of a temporal gradient in the brain.
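To make the cross-validated variance partitioning concrete, the sketch below shows one common way to estimate the unique variance each feature set contributes, by comparing held-out R² between a full regression and regressions with one feature set left out. This is a minimal, hypothetical illustration, not the authors' code: the function `cv_unique_variance`, the three model vectors, and the random stand-in data are all assumptions, and the paper's full analysis also estimates shared variance across feature sets, which this sketch omits.

```python
# Minimal sketch of cross-validated variance partitioning over three
# feature sets (visual, action, social-affective). Illustrative only;
# not the authors' implementation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def cv_unique_variance(behavior, models, n_splits=5, seed=0):
    """Estimate each feature set's unique contribution to behavioral
    dissimilarities as the drop in held-out R^2 when that predictor is
    removed from the full regression model."""
    X = np.column_stack(list(models.values()))
    names = list(models.keys())
    unique = {name: [] for name in names}
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train, test in kf.split(X):
        # Full model: all feature-based dissimilarity predictors.
        full = LinearRegression().fit(X[train], behavior[train])
        r2_full = full.score(X[test], behavior[test])
        for i, name in enumerate(names):
            # Reduced model: leave out one feature set.
            X_red = np.delete(X, i, axis=1)
            reduced = LinearRegression().fit(X_red[train], behavior[train])
            r2_red = reduced.score(X_red[test], behavior[test])
            unique[name].append(r2_full - r2_red)
    return {name: float(np.mean(v)) for name, v in unique.items()}

# Hypothetical usage with random stand-ins for the pairwise
# dissimilarity vectors (one value per video pair):
rng = np.random.default_rng(0)
n_pairs = 1000
models = {"visual": rng.normal(size=n_pairs),
          "action": rng.normal(size=n_pairs),
          "social_affective": rng.normal(size=n_pairs)}
behavior = (0.2 * models["visual"] + 0.5 * models["social_affective"]
            + rng.normal(size=n_pairs))
print(cv_unique_variance(behavior, models))
```

In this toy setup, the social-affective predictor is built to carry the most independent signal, so its held-out unique variance comes out largest, mirroring the kind of result the abstract describes.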

Funder

National Science Foundation

Publisher

eLife Sciences Publications, Ltd

Subject

General Immunology and Microbiology, General Biochemistry, Genetics and Molecular Biology, General Medicine, General Neuroscience


Cited by 18 articles.
