Authors:
Judee K. Burgoon, Rebecca Xinran Wang, Xunyu Chen, Tina Saiying Ge, Bradley Dorn
Abstract
Social relationships are constructed by and through the relational communication that people exchange. Relational messages are implicit nonverbal and verbal messages that signal how people regard one another and define their interpersonal relationships—equal or unequal, affectionate or hostile, inclusive or exclusive, similar or dissimilar, and so forth. Such signals can be measured automatically by the latest machine learning software tools and combined into meaningful factors that represent the socioemotional expressions constituting relational messages between people. Relational messages operate continuously on a parallel track with verbal communication, implicitly telling interactants the current state of their relationship and how to interpret the verbal messages being exchanged. We report an investigation that explored how group members signal these implicit messages through multimodal behaviors measured by sensor data and linked to the socioemotional cognitions interpreted as relational messages. Using a modified Brunswikian lens model, we predicted perceived relational messages of dominance, affection, involvement, composure, similarity and trust from automatically measured kinesic, vocalic and linguistic indicators. The relational messages in turn predicted the veracity of group members. The Brunswikian lens model offers a way to connect objective behaviors exhibited by social actors to the emotions and cognitions perceived by other interactants, and to link those perceptions to social outcomes. This method can be used to ascertain which behaviors and/or perceptions are associated with judgments of an actor’s veracity. Computerized measurements of behaviors and perceptions can replace manual measurements, significantly expediting analysis and enabling micro-level measurement that was previously unavailable.
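The two-stage logic of the lens model described above—first regressing perceived relational messages on measured behavioral cues (cue utilization), then regressing a social outcome on those perceptions—can be sketched as a pair of least-squares fits. The sketch below uses synthetic data; the cue names, weights, and noise levels are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical behavioral cues (e.g., kinesic, vocalic, linguistic measures).
cues = rng.normal(size=(n, 3))
true_w = np.array([0.8, -0.5, 0.3])  # assumed cue weights for illustration

# Perceived relational message (e.g., dominance) as a noisy function of cues.
perception = cues @ true_w + rng.normal(scale=0.1, size=n)

# Stage 1: regress perceptions on cues to estimate cue-utilization weights.
X1 = np.column_stack([np.ones(n), cues])
w1, *_ = np.linalg.lstsq(X1, perception, rcond=None)

# Stage 2: regress a social outcome (e.g., veracity judgment) on perceptions.
outcome = 1.2 * perception + rng.normal(scale=0.1, size=n)
X2 = np.column_stack([np.ones(n), perception])
w2, *_ = np.linalg.lstsq(X2, outcome, rcond=None)

print(np.round(w1[1:], 2))  # recovered cue weights, near true_w
print(round(float(w2[1]), 2))  # recovered perception-to-outcome weight
```

In practice each stage would use the study's measured indicators and rated perceptions rather than synthetic data, and the fitted weights would indicate which cues carry each relational message.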
Cited by: 2 articles.