Affiliation:
1. Department of Communication, University of Vienna, Vienna, Austria
Abstract
With the emergence of artificial intelligence, deepfakes have made it possible to manipulate anyone's audio-visual representation, fueling the debate about the believability of what we hear and see in the news. However, we do not yet know whether deepfakes can actually affect (1) the credibility attributed to audio-visual media in general, as well as (2) perceived self-efficacy in discerning real from fake media. Furthermore, it remains unclear whether different deepfake formats affect citizens to differing degrees. This study employs a 3 × 2 × 2 between-within-subjects experiment (N = 951) with the between-subjects factors format (audio vs. video vs. 360° video) and facticity (real vs. fake) and the within-subjects factor reveal (pre- vs. post-reveal). We explore what happens after revealing to a sample of German participants that they have been deceived by a deepfake. Our findings show that the credibility of media drops across all formats after the stimulus is revealed to be fake, whereas the control group is not affected. Self-efficacy, on the other hand, is impacted even among people who were exposed to authentic news media. This suggests that deepfakes may have far-reaching societal implications that go beyond deception, whereas modality seems to matter little for such effects.