You can't play 20 questions with nature and win redux
Published online by Cambridge University Press: 06 December 2023
Abstract
An incomplete science begets imperfect models. Nevertheless, the target article advocates jettisoning deep-learning models that show some competency in object recognition in favor of toy models evaluated against a checklist of laboratory findings, an approach that evokes Allen Newell's 20 questions critique. We believe this approach risks incoherence and neglects the most basic test: can the model perform its intended task?
- Type: Open Peer Commentary
- Copyright: © The Author(s), 2023. Published by Cambridge University Press
References
Dagaev, N., Roads, B. D., Luo, X., Barry, D. N., Patil, K. R., & Love, B. C. (2023). A too-good-to-be-true prior to reduce shortcut reliance. Pattern Recognition Letters, 166, 164–171. https://doi.org/10.1016/j.patrec.2022.12.010
Newell, A. (1973). You can't play 20 questions with nature and win: Projective comments on the papers of this symposium. In W. G. Chase (Ed.), Visual information processing: Proceedings of the 8th annual Carnegie symposium on cognition, held at Carnegie-Mellon University, Pittsburgh, Pennsylvania, May 19, 1972. Academic Press.
Roads, B. D., & Love, B. C. (2021). Enriching ImageNet with human similarity judgments and psychological embeddings. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 3547–3557).
Sexton, N. J., & Love, B. C. (2022). Reassessing hierarchical correspondences between brain and deep networks through direct interface. Science Advances, 8, 1–9. https://doi.org/10.1126/sciadv.abm2219
Winograd, T. (1971). Procedures as a representation for data in a computer program for understanding natural language. AITR-235. http://hdl.handle.net/1721.1/7095
Target article
Deep problems with neural network models of human vision
Related commentaries (29)
Explananda and explanantia in deep neural network models of neurological network functions
A deep new look at color
Beyond the limitations of any imaginable mechanism: Large language models and psycholinguistics
Comprehensive assessment methods are key to progress in deep learning
Deep neural networks are not a single hypothesis but a language for expressing computational hypotheses
Even deeper problems with neural network models of language
Fixing the problems of deep neural networks will require better training data and learning algorithms
For deep networks, the whole equals the sum of the parts
For human-like models, train on human-like tasks
Going after the bigger picture: Using high-capacity models to understand mind and brain
Implications of capacity-limited, generative models for human vision
Let's move forward: Image-computable models and a common model evaluation scheme are prerequisites for a scientific understanding of human vision
Modelling human vision needs to account for subjective experience
Models of vision need some action
My pet pig won't fly and I want a refund
Neither hype nor gloom do DNNs justice
Neural networks need real-world behavior
Neural networks, AI, and the goals of modeling
Perceptual learning in humans: An active, top-down-guided process
Psychophysics may be the game-changer for deep neural networks (DNNs) to imitate the human vision
Statistical prediction alone cannot identify good models of behavior
The model-resistant richness of human visual experience
The scientific value of explanation and prediction
There is a fundamental, unbridgeable gap between DNNs and the visual cortex
Thinking beyond the ventral stream: Comment on Bowers et al.
Using DNNs to understand the primate vision: A shortcut or a distraction?
Where do the hypotheses come from? Data-driven learning in science and the brain
Why psychologists should embrace rather than abandon DNNs
You can't play 20 questions with nature and win redux
Author response
Clarifying status of DNNs as models of human vision