Abstract
Recent developments in artificial intelligence based on neural nets (deep learning and large language models, which together I refer to as 'NEWAI') have resulted in startling improvements in language handling and the potential to keep up with changing human knowledge by learning from the internet. Nevertheless, large language models such as ChatGPT have proved to have no moral compass: they answer queries with fabrications with the same fluency as they provide facts. I try to explain why this is, basing the argument on the sociology of knowledge, particularly social studies of science, notably 'studies of expertise and experience' and the 'fractal model' of society. Learning from the internet is not the same as socialisation: NEWAI has no primary socialisation of the kind that provides the foundations of human moral understanding. Instead, large language models are retrospectively socialised by human intervention in an attempt to align them with societally accepted ethics. Perhaps, as technology advances, large language models could come to understand speech and recognise objects sufficiently well to acquire the equivalent of primary socialisation. In the meantime, we must be vigilant about who is socialising them and be aware of the danger of their socialising us to align with them rather than vice versa, an eventuality that would further erode the distinction between the true and the false, giving further support to populism and fascism.
Publisher
Springer Science and Business Media LLC