Comprehending the Gossips: Meme Explanation in Time-Sync Video Comment via Multimodal Cues

Authors:

Xie Zheyong¹, He Weidong¹, Xu Tong¹, Wu Shiwei², Zhu Chen³, Yang Ping⁴, Chen Enhong¹

Affiliations:

1. School of Computer Science and Technology, University of Science and Technology of China, China

2. School of Data Science, University of Science and Technology of China, China

3. School of Management, University of Science and Technology of China, China

4. Alibaba Inc., China

Abstract

Recent years have witnessed the boom of online social media platforms embracing the popular “Time-Sync Comment” service, which allows viewers to share time-synchronized opinions alongside video content. In this setting, numerous semantically altered terms, or “memes”, are created by niche users to express their unique ideas and emotions, and these memes further attract large groups of viewers with greater activity and enthusiasm. Unfortunately, since memes are created from domain-specific knowledge and their semantics vary with the multimodal context of the video, newcomers may fail to comprehend their connotations, which can severely impair the user experience. To address this issue, in this article we propose a novel meme explanation framework, called ProMDE, to automatically capture and comprehend memes in time-sync comments, which can in turn provide viewers with a meme explanation service. Specifically, we first iteratively reconstruct the original time-sync comments against the visual embedding to detect semantically altered terms as meme candidates. Then, guided by a domain-specific corpus, visual and textual features are fused to represent context-aware multimodal cues. Moreover, to accurately describe the homophones commonly seen in memes, i.e., terms with the same pronunciation but different spellings, we integrate phonetic symbols as an additional modality to enhance the framework. Finally, we use a Transformer-based decoder to generate a natural-language explanation for each captured meme. Extensive experiments on a large real-world dataset show that our framework significantly outperforms several state-of-the-art baselines, demonstrating the efficacy of modeling multimodal context and pronunciation for meme detection and explanation.
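The phonetic modality described above targets homophone memes: terms that sound alike but are spelled differently. A minimal sketch of that idea (not the authors' ProMDE implementation) is to bucket comment terms by their phonetic transcription and keep only buckets where distinct spellings collide. The tiny lexicon below is a hypothetical stand-in for a real grapheme-to-phoneme tool such as a pinyin converter.

```python
# Illustrative sketch only: surfacing homophone meme candidates by
# grouping terms whose phonetic transcriptions collide. The lexicon
# is a hypothetical toy; a real system would use a G2P/pinyin tool.
from collections import defaultdict

PHONETIC_KEY = {
    "鸭梨": "ya-li",   # "pear", a meme homophone of...
    "压力": "ya-li",   # ..."pressure"
    "杯具": "bei-ju",  # "cups/tableware", a meme homophone of...
    "悲剧": "bei-ju",  # ..."tragedy"
    "视频": "shi-pin", # "video" (no homophone partner here)
}

def homophone_groups(terms):
    """Bucket terms by phonetic key; keep only genuine collisions,
    i.e. at least two distinct spellings sharing one pronunciation."""
    buckets = defaultdict(list)
    for term in terms:
        key = PHONETIC_KEY.get(term)
        if key is not None:
            buckets[key].append(term)
    return {k: v for k, v in buckets.items() if len(set(v)) > 1}

groups = homophone_groups(["鸭梨", "压力", "视频", "杯具", "悲剧"])
```

Here `groups` pairs “鸭梨/压力” and “杯具/悲剧” while discarding the unpaired “视频”; in the full framework, the phonetic signal is one modality fused with visual and textual cues rather than a standalone detector.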

Funder

National Natural Science Foundation of China

USTC Research Funds of the Double First-Class Initiative

Publisher

Association for Computing Machinery (ACM)

Subject

General Computer Science

References (46 articles)

