Affiliation:
1. Associate Professor, Dr., Vytautas Magnus University, Lithuania
2. PhD Candidate, Vytautas Magnus University, Lithuania
3. Project Coordinator, Vytautas Magnus University, Lithuania
Abstract
Synthetic media – defined as text, audio, image, and video content, or entire 2D or 3D environments, generated by AI-enabled tools – are currently at the center of public attention. While benevolent applications of such technologies abound, their negative uses attract significantly more debate. Some of these uses tap into existing fears of disinformation and related threats, while others pertain to qualitatively new harms, such as non-consensual synthetic pornography. Of particular note is synthetic media’s capacity to democratize content creation, for better or worse. Ultimately, such concerns lead to calls for policing synthetic media through its automatic detection and removal. Nevertheless, such reliance on technological solutions has at least two undesirable effects: first, a further concentration of power in the hands of online platforms and other technology companies and, second, neglect of the underlying causes of nefarious uses of synthetic media. In this sense, the generation of harmful content is best seen not as a standalone problem but as a symptom of deeper underlying – cultural – trends. As part of seeking a solution, this article traces some of the roots of nefarious synthetic content, ranging from non-consensual pornography to disinformation to toxic masculinity cultures and the insecurities attendant to them.