Abstract
Household social robots may profoundly affect our everyday lives and raise several concerns about data protection and privacy. The defining characteristic of these devices is their capacity to build close connections, even emotional bonds, between humans and robots. Socially interactive robots exhibit human social characteristics: they express and/or perceive emotions, communicate through high-level dialogue, and so on. Affective computing enables the development of AI systems capable of imitating human traits (emotions, speech, body language). The goal is to gain humans' trust, improve safety, and strengthen the emotional bond between human and robot through anthropomorphization. However, this emotional engagement may incentivize people to trade away personal information, jeopardizing their privacy. Social robots can infer a person's feelings and physical and mental states from emotional expressions and gestures. As a result, data protection concerns arise, such as the classification of emotions, the issue of consent, and the emergence of the right to explanation. The article proceeds in three main stages. The first chapter addresses general questions relating to emotional AI and social robots, focusing on their deceptive and manipulative nature, which leads humans to disclose ever more information and lulls their privacy and data protection awareness. The second chapter demonstrates several data protection problems, such as the categorization and datafication of emotions (as biometric data), the issue of consent, and the emergence of the right to explanation. The third chapter highlights certain civil liability concerns regarding infringement of the right to privacy in the light of the future EU civil liability regime for artificial intelligence.
Publisher
Universitatea Sapientia din municipiul Cluj-Napoca
Cited by
1 article.
1. Authors' Moral Rights in the Digital Environment. Journal of Digital Technologies and Law, 2024-03-21.