Affiliations:
1. Stanford Internet Observatory, Stanford University, USA
2. Center for Security and Emerging Technology, Georgetown University, USA
Abstract
Much of the research and discourse on risks from artificial intelligence (AI) image generators, such as DALL-E and Midjourney, has centered around whether they could be used to inject false information into political discourse. We show that spammers and scammers—seemingly motivated by profit or clout, not ideology—are already using AI-generated images to gain significant traction on Facebook. At times, the Facebook Feed is recommending unlabeled AI-generated images to users who neither follow the Pages posting the images nor realize that the images are AI-generated, highlighting the need for improved transparency and provenance standards as AI models proliferate.
Publisher
Shorenstein Center for Media, Politics, and Public Policy
References (29 articles)
1. Bickert, M. (2024, April 5). Our approach to labeling AI-generated content and manipulated media. Meta Newsroom. https://about.fb.com/news/2024/04/metas-approach-to-labeling-ai-generated-content-and-manipulated-media/
2. Caulfield, M. (2019, June 19). SIFT (the four moves). Hapgood. https://hapgood.us/2019/06/19/sift-the-four-moves/
3. Clegg, N. (2024, February 6). Labeling AI-generated images on Facebook, Instagram and Threads. Meta Newsroom. https://about.fb.com/news/2024/02/labeling-ai-generated-images-on-facebook-instagram-and-threads/
4. Dixon, R. B. L., & Frase, H. (2024, March). An argument for hybrid AI incident reporting: Lessons learned from other incident reporting systems. Center for Security and Emerging Technology. https://cset.georgetown.edu/publication/an-argument-for-hybrid-ai-incident-reporting/
5. Ferrara, E. (2024). GenAI against humanity: Nefarious applications of generative artificial intelligence and large language models. Journal of Computational Social Science, 7, 549–569. https://doi.org/10.1007/s42001-024-00250-1