Abstract
This paper focuses on gaining new knowledge through observation, qualitative analysis, and cross-modal fusion of rich multi-layered conversational features expressed during multiparty discourse. The research stems from the theory that speech and co-speech gestures originate from the same representation; however, this representation is not limited solely to the speech production process. The nature of how information is conveyed by synchronously fusing speech and gestures must therefore be investigated in detail. To this end, this paper introduces an integrated annotation scheme and methodology that make it possible to study verbal (i.e., speech) and non-verbal (i.e., visual cues with communicative intent) components independently, yet still interconnected over a common timeline. To analyse the interaction between linguistic, paralinguistic, and non-verbal components in multiparty discourse, and to help improve natural language generation in embodied conversational agents, a high-quality multimodal corpus was built and is presented in detail; it comprises several annotation layers spanning syntax, part of speech, dialogue acts, discourse markers, sentiment, emotions, non-verbal behaviour, and gesture units, and it is the first of its kind for the Slovenian language. Moreover, detailed case studies show the tendency of metadiscourse to coincide with non-verbal behaviour of non-propositional origin. The case analysis further highlights how the newly created conversational model and the corresponding information-rich, consistently annotated corpus can be exploited to deepen the understanding of multiparty discourse.
Funder
Horizon 2020
Javna Agencija za Raziskovalno Dejavnost RS
Publisher
Springer Science and Business Media LLC
Subject
Library and Information Sciences, Linguistics and Language, Education, Language and Linguistics
Cited by 2 articles.