Abstract
Real-time detection of global events, particularly catastrophic ones, has benefited significantly from the ubiquitous adoption of social media platforms and from advances in image classification and natural language processing. During disasters, social media becomes a rich repository of multimedia content, encompassing reports on casualties, infrastructure damage, and missing individuals. While previous research has predominantly concentrated on textual or image analysis alone, this study presents a multimodal middle-fusion paradigm that combines cross-modal attention and self-attention to improve learning from both image and text modalities. Through rigorous experimentation, we validate the effectiveness of the proposed middle-fusion paradigm in leveraging complementary information from textual and visual sources. The proposed intermediate design outperforms existing late- and early-fusion architectures, achieving accuracies of 91.53% and 91.07% on the informativeness and disaster-type recognition tasks, respectively. This study is among the few to examine all three tasks in the CrisisMMD dataset by combining textual and image analysis, demonstrating an improvement of about 2% in prediction accuracy over comparable studies on the same dataset. Additionally, ablation studies show that the model outperforms the best-performing unimodal classifiers, with a 3-5% increase in prediction accuracy across tasks. The method thus aims to bolster emergency response capabilities by offering more precise insights into evolving events.
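The abstract's middle-fusion idea, cross-modal attention between the two token streams followed by self-attention over the joint sequence, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, the shared embedding dimension, and the mean-pooling step are illustrative assumptions, and the learned projection matrices of a real attention layer are omitted for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention (projection matrices omitted)
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ v

def middle_fusion(text_tokens, image_patches):
    """Hypothetical middle-fusion block.

    text_tokens:   (T, d) text token embeddings
    image_patches: (P, d) image patch embeddings (same d assumed)
    """
    # cross-modal attention: each modality attends over the other
    text_attended = attention(text_tokens, image_patches, image_patches)
    image_attended = attention(image_patches, text_tokens, text_tokens)
    # self-attention over the concatenated joint sequence
    joint = np.concatenate([text_attended, image_attended], axis=0)
    fused = attention(joint, joint, joint)
    # mean-pool to a single joint representation for a classifier head
    return fused.mean(axis=0)
```

In this sketch, fusion happens at the representation level (after unimodal encoders, before the classifier), which is what distinguishes middle fusion from early fusion (raw-input concatenation) and late fusion (combining per-modality predictions).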
Funder
Manipal Academy of Higher Education, Manipal
Publisher
Springer Science and Business Media LLC