Affiliation:
1. RMIT University, Melbourne, Australia
Abstract
The rise of deepfakes and AI‐generated images has raised concerns about their potential misuse. This commentary, however, highlights the valuable opportunities these technologies offer for neuroscience research. Deepfakes deliver accessible, realistic and customisable dynamic face stimuli, while generative adversarial networks (GANs) can generate and modify diverse, high‐quality static content. These advancements can enhance the variability and ecological validity of research methods and enable the creation of previously unattainable stimuli. When AI‐generated images are informed by brain responses, they provide unique insights into the structure and function of visual systems. The authors argue that experimental psychologists and cognitive neuroscientists should stay informed about these emerging tools and embrace their potential to advance the field of visual neuroscience.
Cited by: 6 articles.