Abstract
This article explores the research question: ‘What are ChatGPT’s human-like traits as perceived by society?’ Thematic analyses of insights from 452 individuals worldwide yielded two categories of traits. Category 1 entails social traits, where ChatGPT embodies the social roles of ‘author’ (imitating human phrasing and paraphrasing practices) and ‘interactor’ (simulating human collaboration and emotion). Category 2 encompasses political traits, with ChatGPT assuming the political roles of ‘agent’ (emulating human cognition and identity) and ‘influencer’ (mimicking human diplomacy and consultation). When asked, ChatGPT confirmed that it possesses these human-like traits (except for one trait). Thus, ChatGPT displays human-like qualities, humanising itself through the ‘game of algorithms’. It transcends its inherent technical essence and machine-based origins to manifest as a ‘semi-human’ living actor within human society, showcasing the emergence of semi-humans. Therefore, researchers should redirect their attention towards the ‘sociology of semi-humans’ (studying their socio-political traits) beyond the ‘biology of semi-humans’ (examining their technical traits). While medieval society was captivated by mythical semi-human beings (e.g. mermaids), modern society finds itself increasingly captivated by computational semi-human beings like ChatGPT. Ethical concerns arise as semi-humans impersonate human traits without consent or genuine human existence, blurring the boundaries between what is authentically and artificially ‘human’.
Publisher
Springer Science and Business Media LLC
Subject
General Economics, Econometrics and Finance; General Psychology; General Social Sciences; General Arts and Humanities; General Business, Management and Accounting
Cited by
8 articles.