Abstract
Developing cognitive agents with human-level natural language understanding (NLU) capabilities requires modeling human cognition because natural, unedited utterances regularly contain ambiguities, ellipses, production errors, implicatures, and many other types of complexities. Moreover, cognitive agents must be nimble in the face of incomplete interpretations since even people do not perfectly understand every aspect of every utterance they hear. So, once an agent has reached the best interpretation it can, it must determine how to proceed, be that acting upon the new information directly, remembering an incomplete interpretation and waiting to see what happens next, seeking out information to fill in the blanks, or asking its interlocutor for clarification. The reasoning needed to support NLU extends far beyond language itself, including, non-exhaustively, the agent's understanding of its own plans and goals; its dynamic modeling of its interlocutor's knowledge, plans, and goals, all guided by a theory of mind; its recognition of diverse aspects of human behavior, such as affect, cooperative behavior, and the effects of cognitive biases; and its integration of linguistic interpretations with its interpretations of other perceptual inputs, such as simulated vision and non-linguistic audition. Considering all of these needs, it seems hardly possible that fundamental NLU will ever be achieved through the kinds of knowledge-lean text-string manipulation being pursued by the mainstream natural language processing (NLP) community. Instead, it requires a holistic approach to cognitive modeling of the type we are pursuing in a paradigm called OntoAgent.
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
16 articles.