Abstract
Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, the success of such technologies depends on institutional trust that is in short supply. Finally, outsourcing the discrimination between the real and the fake to automated, largely opaque systems runs the risk of undermining epistemic autonomy.
Funder
Ministerium für Innovation, Wissenschaft und Forschung des Landes Nordrhein-Westfalen
Ruhr-Universität Bochum
Publisher
Springer Science and Business Media LLC
Cited by
1 article.
1. Consumer Engagement. Advances in Information Security, Privacy, and Ethics, 2024-07-26.