Quo vadis, planning?
Published online by Cambridge University Press: 23 September 2024
Abstract
Deep meta-learning is the driving force behind advances in contemporary AI research, and a promising theory of flexible cognition in natural intelligence. We agree with Binz et al. that many supposedly “model-based” behaviours may be better explained by meta-learning than by classical models. We argue that this invites us to revisit our neural theories of problem solving and goal-directed planning.
Type: Open Peer Commentary

Copyright: © The Author(s), 2024. Published by Cambridge University Press
Target article: Meta-learned models of cognition