A place for (socio)linguistics in audio deepfake detection and discernment: Opportunities for convergence and interdisciplinary collaboration
Published: 2024-07-09
Volume: 18, Issue: 5
ISSN: 1749-818X
Container-title: Language and Linguistics Compass
Short-container-title: Language and Linguist. Compass
Language: en
Authors:
Mallinson, Christine (1) ORCID;
Janeja, Vandana P. (1);
Evered, Chloe (2);
Khanjani, Zahra (1);
Davis, Lavon (1);
Bhalli, Noshaba Nasir (1);
Nwosu, Kifekachukwu (3)
Affiliations:
1. University of Maryland, Baltimore County, Baltimore, Maryland, USA
2. Georgetown University, Washington, District of Columbia, USA
3. Rochester Institute of Technology, Rochester, New York, USA
Abstract
Deepfakes, particularly audio deepfakes, have become pervasive and pose unique, ever‐changing threats to society. This paper reviews the current research landscape on audio deepfakes. We assert that limitations of existing approaches to deepfake detection and discernment are areas where (socio)linguists can directly contribute to helping address the societal challenge of audio deepfakes. In particular, incorporating expert knowledge and developing techniques that everyday listeners can use to avoid deception are promising pathways for (socio)linguistics. Further opportunities exist for developing benevolent applications of this technology through generative AI methods as well.
Funders:
Directorate for Computer and Information Science and Engineering
National Science Foundation
University of Maryland, Baltimore County