Statistical prediction alone cannot identify good models of behavior
Published online by Cambridge University Press: 06 December 2023
Abstract
The dissociation between statistical prediction and scientific explanation that Bowers et al. identify in studies of vision using deep neural networks is also observed in several other domains of behavioral research. Indeed, it is unavoidable when large models with weak theoretical commitments, such as deep nets and other supervised learners, are fit to restricted samples of highly stochastic behavioral phenomena.
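The abstract's claim can be made concrete with a small simulation. The sketch below is illustrative only and is not drawn from the commentary; the data-generating rule, model choices, and parameters are all hypothetical. It fits two structurally different supervised learners to simulated stochastic binary choices and shows that both approach the same noise-imposed accuracy ceiling, so held-out prediction alone cannot say which model better explains the behavior.

```python
# Illustrative sketch (hypothetical setup, not from the commentary):
# two structurally different models reach the same predictive ceiling
# on stochastic binary choices, so held-out accuracy alone cannot
# identify which one is the "right" model of the behavior.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Simulated "ground truth": choices follow a noisy logistic rule over
# two stimulus features, so even the true model cannot exceed the
# accuracy ceiling set by the response stochasticity.
n = 5000
X = rng.normal(size=(n, 2))
p_choose = 1 / (1 + np.exp(-(1.5 * X[:, 0] - 0.8 * X[:, 1])))
y = rng.random(n) < p_choose  # stochastic behavioral responses

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic regression (true functional form)": LogisticRegression(),
    "random forest (very different functional form)": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    acc = accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(f"{name}: held-out accuracy = {acc:.3f}")

# Best achievable accuracy given the irreducible choice noise:
ceiling = np.maximum(p_choose, 1 - p_choose).mean()
print(f"noise ceiling: {ceiling:.3f}")
```

In this toy setting the near-identical scores reflect the irreducible stochasticity of the responses, not any agreement between the models' mechanisms, which is exactly the identification problem the commentary points to.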
Type: Open Peer Commentary
Copyright: © The Author(s), 2023. Published by Cambridge University Press
Target article
Deep problems with neural network models of human vision
Related commentaries (29)
Explananda and explanantia in deep neural network models of neurological network functions
A deep new look at color
Beyond the limitations of any imaginable mechanism: Large language models and psycholinguistics
Comprehensive assessment methods are key to progress in deep learning
Deep neural networks are not a single hypothesis but a language for expressing computational hypotheses
Even deeper problems with neural network models of language
Fixing the problems of deep neural networks will require better training data and learning algorithms
For deep networks, the whole equals the sum of the parts
For human-like models, train on human-like tasks
Going after the bigger picture: Using high-capacity models to understand mind and brain
Implications of capacity-limited, generative models for human vision
Let's move forward: Image-computable models and a common model evaluation scheme are prerequisites for a scientific understanding of human vision
Modelling human vision needs to account for subjective experience
Models of vision need some action
My pet pig won't fly and I want a refund
Neither hype nor gloom do DNNs justice
Neural networks need real-world behavior
Neural networks, AI, and the goals of modeling
Perceptual learning in humans: An active, top-down-guided process
Psychophysics may be the game-changer for deep neural networks (DNNs) to imitate the human vision
Statistical prediction alone cannot identify good models of behavior
The model-resistant richness of human visual experience
The scientific value of explanation and prediction
There is a fundamental, unbridgeable gap between DNNs and the visual cortex
Thinking beyond the ventral stream: Comment on Bowers et al.
Using DNNs to understand the primate vision: A shortcut or a distraction?
Where do the hypotheses come from? Data-driven learning in science and the brain
Why psychologists should embrace rather than abandon DNNs
You can't play 20 questions with nature and win redux
Author response
Clarifying status of DNNs as models of human vision