Abstract
Artificial intelligence (AI) chatbots like ChatGPT and Google Bard are computer programs that use AI and natural language processing to understand user questions and generate natural, fluid, dialogue-like responses. ChatGPT, an AI chatbot created by OpenAI, has rapidly become one of the most widely used tools on the internet. AI chatbots have the potential to improve patient care and public health. However, they are trained on massive amounts of data, which may include sensitive patient information and business data. The increased use of chatbots introduces data security issues that must be addressed yet remain understudied. This paper aims to identify the most important security problems of AI chatbots and propose guidelines for protecting sensitive health information. It explores the impact of using ChatGPT in health care, identifies the principal security risks of ChatGPT, and suggests key considerations for mitigating those risks. It concludes by discussing the policy implications of using AI chatbots in health care.
Cited by: 18 articles.