Multimodal human discourse

Authors:

Quek Francis1, McNeill David2, Bryll Robert1, Duncan Susan3, Ma Xin-Feng4, Kirbas Cemil5, McCullough Karl E.2, Ansari Rashid4

Affiliations:

1. Wright State University, Dayton, OH

2. University of Chicago

3. Wright State University, University of Chicago

4. University of Illinois at Chicago

5. Wright State University

Abstract

Gesture and speech combine to form a rich basis for human conversational interaction. To exploit these modalities in HCI, we need to understand the interplay between them and the way in which they support communication. We propose a framework for the gesture research done to date, and present our work on the cross-modal cues for discourse segmentation in free-form gesticulation accompanying speech in natural conversation as a new paradigm for such multimodal interaction. The basis for this integration is the psycholinguistic concept of the coequal generation of gesture and speech from the same semantic intent. We present a detailed case study of a gesture and speech elicitation experiment in which a subject describes her living space to an interlocutor. We perform two independent sets of analyses on the video and audio data: video and audio analysis to extract segmentation cues, and expert transcription of the speech and gesture data by microanalyzing the videotape using a frame-accurate videoplayer to correlate the speech with the gestural entities. We compare the results of both analyses to identify the cues accessible in the gestural and audio data that correlate well with the expert psycholinguistic analysis. We show that "handedness" and the kind of symmetry in two-handed gestures provide effective supersegmental discourse cues.

Publisher

Association for Computing Machinery (ACM)

Subject

Human-Computer Interaction

References: 46 articles.

Cited by 171 articles.

1. Steering Towards Safety: Evaluating Signaling Gestures for an Embodied Driver Guide;Proceedings of the 16th International Conference on Automotive User Interfaces and Interactive Vehicular Applications;2024-09-11

2. Interactions for Socially Shared Regulation in Collaborative Learning: An Interdisciplinary Multimodal Dataset;ACM Transactions on Interactive Intelligent Systems;2024-08-02

3. Interactive Output Modalities Design for Enhancement of User Trust Experience in Highly Autonomous Driving;International Journal of Human–Computer Interaction;2024-07-10

4. Engaging Children in Storytelling Through Tabletop Play: Exploring Construction of Story Ideas through Enactive Actions and Vocalizations;Proceedings of the 23rd Annual ACM Interaction Design and Children Conference;2024-06-17

5. Multimodal perception-fusion-control and human–robot collaboration in manufacturing: a review;The International Journal of Advanced Manufacturing Technology;2024-03-23
