Abstract
This article evaluates the ChatGPT decision support system's utility for creating policies related to concussion and repetitive brain trauma associated with neurodegenerative disease risk. The system is generally stable and fast. Of the prompt/response pairs examined (n=259), six regenerated (2.31%), one returned an incorrect answer (0.38%), and one returned a fragment (0.38%). Its accuracy, validity, opacity, informational latency, and vulnerability to manipulation limit its utility. ChatGPT's data can be both out-of-date and incomplete, which restricts its use to subject matter experts analyzing expert statements. ChatGPT's performance is also affected by prompts involving stakeholder bias and litigation management, such as race. Nonetheless, ChatGPT demonstrated its ability to respond in both American and British/Australian English with ease. Overall, this study suggests that ChatGPT has limitations that need to be addressed before it can be widely used in decision-making related to concussion and repetitive brain trauma policies.
Publisher
Cold Spring Harbor Laboratory