Affiliation:
1. Microsoft Research NYC, Reinforcement Learning Station, 300 Lafayette, New York, NY 10012, USA
Abstract
Researchers across the cognitive, neuro- and computer sciences increasingly reference 'human-like' artificial intelligence and 'neuroAI'. However, the scope and use of these terms are often inconsistent. Contributed research ranges widely, from mimicking behaviour, to testing machine learning methods as neurally plausible hypotheses at the cellular or functional levels, to solving engineering problems. Yet progress on one of these three goals cannot be assumed or expected to translate automatically into progress on the others. Here, a simple rubric is proposed to clarify the scope of individual contributions, grounded in their commitments to human-like behaviour, neural plausibility or benchmark/engineering/computer science goals. This is illustrated with examples of weak and strong neuroAI and human-like agents, and by discussing the generative, corroborative and corrective ways in which the three dimensions interact with one another. The author maintains that future progress in artificial intelligence will require strong interactions across the disciplines, with iterative feedback loops and meticulous validity tests, leading to both known and yet-unknown advances that may span decades to come.
This article is part of a discussion meeting issue ‘New approaches to 3D vision’.
Subject
General Agricultural and Biological Sciences, General Biochemistry, Genetics and Molecular Biology
Cited by
8 articles.