Affiliation:
1. London School of Economics and Political Science, UK
Abstract
When an artificial agent can intelligently draw upon huge amounts of human-generated training data, the result can be gaming of our criteria for sentience. Gaming occurs when systems mimic human behaviours that are likely to persuade human users of their sentience without possessing the underlying capacity. The gaming problem leads initially to the thought that we should ‘box’ AI systems when assessing their sentience candidature, denying them access to a large corpus of human-generated training data. However, this would destroy the capabilities of any LLM. What we really need in the AI case are deep computational markers, not behavioural markers. If we find signs that an LLM has implicitly learned ways of recreating a global workspace or perceptual/evaluative reality monitoring system, this should lead us to regard it as a sentience candidate. Unfortunately, at the time of writing, we lack the sort of understanding of the inner workings of LLMs that is needed to ascertain which algorithms they have implicitly acquired during training.
Publisher
Oxford University Press, Oxford