Authors:
Ali Momen, Kurt Hugenberg, Eva Wiese
Abstract
Roboticists often imbue robots with human-like physical features to increase the likelihood that they are afforded benefits known to be associated with anthropomorphism. Similarly, deepfakes often employ computer-generated human faces to attempt to create convincing simulacra of actual humans. In the present work, we investigate whether perceivers’ higher-order beliefs about faces (i.e., whether they represent actual people or android robots) modulate the extent to which perceivers deploy face-typical processing for social stimuli. Past work has shown that perceivers’ recognition performance is more impacted by the inversion of faces than of objects, highlighting that faces are processed holistically (i.e., as a Gestalt), whereas objects engage feature-based processing. Here, we use an inversion task to examine whether face-typical processing is attenuated when actual human faces are labeled as non-human (i.e., android robots). This allows us to employ a task shown to be differentially sensitive to social (i.e., faces) and non-social (i.e., objects) stimuli while also randomly assigning face stimuli to seem real or fake. The results show smaller inversion effects when face stimuli were believed to represent android robots than when they were believed to represent humans. This suggests that robots strongly resembling humans may still fail to be perceived as “social” due to pre-existing beliefs about their mechanistic nature. Theoretical and practical implications of this research are discussed.
Funder
United States Department of Defense | United States Air Force | AFMC | Air Force Office of Scientific Research
Technische Universität Berlin
Publisher
Springer Science and Business Media LLC