Building on prior knowledge without building it in
Published online by Cambridge University Press: 10 November 2017
Abstract
Lake et al. propose that people rely on “start-up software,” “causal models,” and “intuitive theories” built using compositional representations to learn new tasks more efficiently than some deep neural network models. We highlight the many drawbacks of a commitment to compositional representations and describe our continuing effort to explore how the ability to build on prior knowledge and to learn new tasks efficiently could arise through learning in deep neural networks.
Type: Open Peer Commentary
Copyright: © Cambridge University Press 2017
Target article
Building machines that learn and think like people
Related commentaries (27)
Autonomous development and learning in artificial intelligence and robotics: Scaling up deep learning to human-like learning
Avoiding frostbite: It helps to learn from others
Back to the future: The return of cognitive functionalism
Benefits of embodiment
Building brains that communicate like machines
Building machines that adapt and compute like brains
Building machines that learn and think for themselves
Building on prior knowledge without building it in
Causal generative models are just a start
Children begin with the same start-up software, but their software updates are cultural
Crossmodal lifelong learning in hybrid neural embodied architectures
Deep-learning networks and the functional architecture of executive control
Digging deeper on “deep” learning: A computational ecology approach
Evidence from machines that learn and think like people
Human-like machines: Transparency and comprehensibility
Intelligent machines and human minds
Social-motor experience and perception-action learning bring efficiency to machines
The architecture challenge: Future artificial-intelligence systems will require sophisticated architectures, and knowledge of the brain might guide their construction
The argument for single-purpose robots
The fork in the road
The humanness of artificial non-normative personalities
The importance of motivation and emotion for explaining human cognition
Theories or fragments?
Thinking like animals or thinking like colleagues?
Understand the cogs to understand cognition
What can the brain teach us about building artificial intelligence?
Will human-like machines make human-like mistakes?
Author response
Ingredients of intelligence: From classic debates to an engineering roadmap