Where do the hypotheses come from? Data-driven learning in science and the brain
Published online by Cambridge University Press: 06 December 2023
Abstract
Everyone agrees that testing hypotheses is important, but Bowers et al. provide scant details about where hypotheses about perception and brain function should come from. We suggest that the answer lies in considering how information about the outside world could be acquired – that is, learned – over the course of evolution and development. Deep neural networks (DNNs) provide one tool to address this question.
Type: Open Peer Commentary
Copyright © The Author(s), 2023. Published by Cambridge University Press
Target article
Deep problems with neural network models of human vision
Related commentaries (29)
Explananda and explanantia in deep neural network models of neurological network functions
A deep new look at color
Beyond the limitations of any imaginable mechanism: Large language models and psycholinguistics
Comprehensive assessment methods are key to progress in deep learning
Deep neural networks are not a single hypothesis but a language for expressing computational hypotheses
Even deeper problems with neural network models of language
Fixing the problems of deep neural networks will require better training data and learning algorithms
For deep networks, the whole equals the sum of the parts
For human-like models, train on human-like tasks
Going after the bigger picture: Using high-capacity models to understand mind and brain
Implications of capacity-limited, generative models for human vision
Let's move forward: Image-computable models and a common model evaluation scheme are prerequisites for a scientific understanding of human vision
Modelling human vision needs to account for subjective experience
Models of vision need some action
My pet pig won't fly and I want a refund
Neither hype nor gloom do DNNs justice
Neural networks need real-world behavior
Neural networks, AI, and the goals of modeling
Perceptual learning in humans: An active, top-down-guided process
Psychophysics may be the game-changer for deep neural networks (DNNs) to imitate the human vision
Statistical prediction alone cannot identify good models of behavior
The model-resistant richness of human visual experience
The scientific value of explanation and prediction
There is a fundamental, unbridgeable gap between DNNs and the visual cortex
Thinking beyond the ventral stream: Comment on Bowers et al.
Using DNNs to understand the primate vision: A shortcut or a distraction?
Where do the hypotheses come from? Data-driven learning in science and the brain
Why psychologists should embrace rather than abandon DNNs
You can't play 20 questions with nature and win redux
Author response
Clarifying status of DNNs as models of human vision