Abstract
Natural language processing and other areas of artificial intelligence have seen staggering progress in recent years, yet much of it is reported with reference to somewhat limited benchmark datasets. We see the deployment of these techniques in realistic use cases as the next step in this development. In particular, much progress is still needed in educational settings, where such techniques can strongly improve users’ safety on social media. We present our efforts to develop multi-modal machine learning algorithms to be integrated into a social media companion aimed at supporting and educating users in dealing with fake news and other social media threats.
Inside the companion environment, these algorithms can automatically assess, and enable users to contextualize, different aspects of their social media experience. They can estimate and display characteristics of content in supported users’ feeds, such as ‘fakeness’ and ‘sentiment’, and suggest related alternatives to enrich users’ perspectives. In addition, they can evaluate the opinions, attitudes, and neighbourhoods both of the users themselves and of those appearing in their feeds. The aim of the latter process is to raise users’ awareness of, and resilience to, filter bubbles and echo chambers: phenomena that are almost unnoticeable, rarely understood, and unexpectedly widespread, and that may affect users’ information intake unconsciously.
The social media environment is rapidly changing and complex. While our algorithms show state-of-the-art performance, they rely on task-specific datasets, and their reliability may decrease over time and be limited against novel threats. The negative impact of these limits may be exacerbated by users’ over-reliance on algorithmic tools. Therefore, the companion algorithms and connected educational activities are meant to increase users’ awareness of social media threats while exposing the limits of such algorithms. This will also provide an educational example of the limits affecting the machine-learning components of social media platforms. We aim to devise, implement, and test the impact of the companion and connected educational activities in acquiring and supporting conscientious and autonomous social media usage.
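The per-post content scoring described above could be sketched as follows. This is a toy, lexicon-based illustration of the idea only: the word lists, scoring rules, and function names are hypothetical placeholders, not the paper's actual multi-modal models, which the abstract says are learned from task-specific datasets.

```python
# Toy sketch: assign each post in a feed 'sentiment' and 'fakeness' scores
# that a companion UI could display next to the content. The lexicons below
# are invented for illustration; a real system would use trained classifiers.

POSITIVE = {"great", "good", "love", "helpful"}
NEGATIVE = {"bad", "awful", "hate", "fake"}
CLICKBAIT_MARKERS = {"shocking", "unbelievable", "miracle"}


def score_post(text: str) -> dict:
    """Return crude sentiment (-1..1) and fakeness (0..1) scores for a post."""
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    total = pos + neg
    sentiment = 0.0 if total == 0 else (pos - neg) / total
    # Fraction of clickbait-style markers stands in for a 'fakeness' estimate.
    fakeness = sum(t in CLICKBAIT_MARKERS for t in tokens) / max(len(tokens), 1)
    return {"sentiment": sentiment, "fakeness": round(fakeness, 3)}


feed = [
    "Shocking miracle cure doctors hate",
    "Helpful thread, great explanation",
]
for post in feed:
    print(post, "->", score_post(post))
```

In the companion, such scores would be displayed alongside each post so that users can contextualize what they read, rather than having content filtered out on their behalf.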
Funder
Università degli Studi di Milano - Bicocca
Publisher
Springer Science and Business Media LLC
Subject
General Earth and Planetary Sciences, General Environmental Science
References: 138 articles.
Cited by 2 articles.