
The Expertise of Perception

How Experience Changes the Way We See the World

Published online by Cambridge University Press: 24 February 2022

James W. Tanaka, University of Victoria, British Columbia
Victoria Philibert, University of Toronto

Summary

How does experience change the way we perceive the world? This Element explores the interaction between perception and experience by studying perceptual experts: people who specialize in recognizing objects such as birds, automobiles, and dogs. It proposes that perceptual expertise promotes a downward shift in object recognition, whereby experts recognize objects in their domain of expertise at a more specific level than novices do. To support this claim, it examines the recognition abilities and brain mechanisms of real-world experts. It then traces the acquisition of expertise, following the cognitive and neural changes that occur as a novice becomes an expert through training and experience. Next, it looks "under the hood" of expertise, examining the perceptual features that experts bring to bear to achieve fast, accurate, and specific recognition. The final section considers the future of human expertise as deep learning models and artificial intelligence compete with human experts in medical diagnosis.
Type: Element
Online ISBN: 9781108919616
Publisher: Cambridge University Press
Print publication: 24 March 2022


References

Aldridge, R. B., Maxwell, S. S., & Rees, J. L. (2012). Dermatology undergraduate skin cancer training: A disconnect between recommendations, clinical exposure and competence. BMC Medical Education, 12(1), 1–9. https://doi.org/10.1186/1472-6920-12-27CrossRefGoogle ScholarPubMed
Alexander, J. M., Johnson, K. E., Leibham, M. E., & Kelley, K. (2008). The development of conceptual interests in young children. Cognitive Development, 23(2), 324334. https://doi.org/10.1016/j.cogdev.2007.11.004CrossRefGoogle Scholar
Alvarez, G. A. & Cavanagh, P. (2004). The capacity of visual short-term memory is set both by visual information load and by number of objects. Psychological Science, 15(2), 106111. https://doi.org/10.1111/j.0963-7214.2004.01502006.xCrossRefGoogle ScholarPubMed
Anaki, D. & Bentin, S. (2009). Familiarity effects on categorization levels of faces and objects. Cognition, 111(1), 144149.Google Scholar
Anglin, J. M. (1977). Word, Object, and Conceptual Development. New York: W. W. Norton.Google Scholar
Angwin, J., Larson, J., Mattu, S., and Kirchner, L. (2016). Machine bias: There’s software across the country to predict future criminals and it’s biased against blacks. ProPublica.Google Scholar
Ashby, F. G. & Ell, S. W. (2001). The neurobiology of human category learning. Trends in Cognitive Sciences, 5(5), 204210. www.ncbi.nlm.nih.gov/pubmed/11323265CrossRefGoogle Scholar
Ashby, G. F. & O’Brien, J. B. (2005). Category learning and multiple memory systems. Trends in Cognitive Sciences, 9(2), 8389. https://doi.org/10.1016/j.tics.2004.12.003Google Scholar
Baron-Cohen, S., Ashwin, E., Ashwin, C., Tavassoli, T., & Chakrabarti, B. (2009). Talent in autism: Hyper-systemizing, hyper-attention to detail and sensory hypersensitivity. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1522), 13771383. https://doi.org/10.1098/rstb.2008.0337CrossRefGoogle ScholarPubMed
Baron-Cohen, S. & Wheelwright, S. (1999). “Obsessions” in children with autism or Asperger syndrome: Content analysis in terms of core domains of cognition. British Journal of Psychiatry, 175(5), 484490.CrossRefGoogle ScholarPubMed
Barragan-Jason, G., Lachat, F., & Barbeau, E. (2012). How fast is famous face recognition? Frontiers in Psychology, 3, 454.Google Scholar
Barton, J. J. S., Hantif, H., & Ashraf, S. (2009). Relating visual to verbal semantic knowledge: The evaluation of object recognition in prosopagnosia. Brain, 132(12), 34563466. https://doi.org/10.1093/brain/awp252Google Scholar
Belke, B., Leder, H., Harsanyi, G., & Carbon, C. C. (2010). When a Picasso is a “Picasso”: The entry point in the identification of visual art. Acta Psychologica, 133(2), 191202. https://doi.org/10.1016/j.actpsy.2009.11.007Google Scholar
Biederman, I. & Ju, G. (1988). Surface versus edge-based determinants of visual recognition. Cognitive Psychology, 20(1), 3864.Google Scholar
Biederman, I., Mezzanotte, R. J., & Rabinowitz, J. C. (1982). Scene perception: Detecting and judging objects undergoing relational violations. Cognitive Psychology, 14(2), 143177.Google Scholar
Biederman, I. & Shiffrar, M. (1987). Sexing day-old chicks: A case-study and expert systems-analysis of a difficult perceptual-learning task. Journal of Experimental Psychology: Learning Memory and Cognition, 13(4), 640645.Google Scholar
Bilalić, M., Grottenthaler, T., Nagele, T., & Lindig, T. (2016). The faces in radiological images: Fusiform face area supports radiological expertise. Cerebral Cortex, 26(3), 10041014. https://doi.org/10.1093/cercor/bhu272Google Scholar
Bilalić, M., Langner, R., Ulrich, R., & Grodd, W. (2011). Many faces of expertise: Fusiform face area in chess experts and novices. Journal of Neuroscience, 31(28), 1020610214. https://doi.org/10.1523/JNEUROSCI.5727-10.2011Google Scholar
Bornstein, M. H. & Arterberry, M. E. (2010). The development of object categorization in young children: Hierarchical inclusiveness, age, perceptual attribute, and group versus individual analyses. Developmental Psychology, 46(2), 350365. https://doi.org/10.1037/a0018411Google Scholar
Boster, J. S. & Johnson, J. C. (1989). Form or function: A comparison of expert and novice judgments of similarity among fish. American Anthropologist, 91(4), 866889.Google Scholar
Braje, W. L., Tjan, B. S., & Legge, G. E. (1995). Human efficiency for recognizing and detecting low-pass filtered objects. Vision Research, 35(21), 29552966. https://doi.org/10.1016/0042-6989(95)00071-7Google Scholar
Bramão, I., Inacio, F., Faísca, L., Reis, A., & Petersson, K. M. (2011). The influence of color information on the recognition of color diagnostic and noncolor diagnostic objects. Journal of General Psychology, 138(1), 4965. https://doi.org/10.1080/00221309.2010.533718Google Scholar
Brendel, W. & Bethge, M. (2019). Approximating CNNs with bag-of-local-features models works surprisingly well on ImageNet. arXiv:1904.00760, 115.Google Scholar
Brown, R. (1958). How shall a thing be called? Psychological Review, 65(1), 1421. https://doi.org/10.1037/h0041727Google Scholar
Bruyer, R. & Crispeels, G. (1992). Expertise in person recognition. Bulletin of the Psychonomic Society, 30(6), 501504. https://doi.org/10.3758/BF03334112Google Scholar
Bukach, C. M., Vickery, T. J., Kinka, D., & Gauthier, I. (2012). Training experts: Individuation without naming is worth it. Journal of Experimental Psychology: Human Perception and Performance, 38(1), 1417.Google ScholarPubMed
Buolamwini, J. & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency, 77–91. PMLR.Google Scholar
Busey, T. A. & Vanderkolk, J. R. (2005). Behavioral and electrophysiological evidence for configural processing in fingerprint experts. Vision Research, 45(4), 431448. https://doi.org/10.1016/j.visres.2004.08.021CrossRefGoogle ScholarPubMed
Callanan, M. A. (1985). How parents label objects for young children: The role of input in the acquisition of category hierarchies. Child Development, 508–523. https://doi.org/10.2307/1129738Google Scholar
Camilleri, A., Geoghegan, R., Meade, R., Osorio, S., & Zou., Q. (2019). Bias in machine learning: How facial recognition models show signs of racism, sexism and ageism. Towards Data Science, December 14. https://towardsdatascience.com/bias-in-machine-learning-how-facial-recognition-models-show-signs-of-racism-sexism-and-ageism-32549e2c972d (accessed February 18, 2021)Google Scholar
Campanella, G., Nehal, K. S., Lee, E. H., Rossi, A., Possum, B., Manuel, G., … & Busam, K. J. (2021). A deep learning algorithm with high sensitivity for the detection of basal cell carcinoma in Mohs micrographic surgery frozen sections. Journal of the American Academy of Dermatology, 85(5), 1285–1286.Google Scholar
Campbell, A. & Tanaka, J. W. (2018). Inversion impairs expert budgerigar identity recognition: A face-like effect for a nonface object of expertise. Perception, 47(6), 647659. https://doi.org/10.1177/0301006618771806Google Scholar
Carrigan, A. J., Wardle, S. G., & Rich, A. N. (2018). Finding cancer in mammograms: If you know it’s there, do you know where? Cognitive Research: Principles and Implications, 3(1), 1–14.Google Scholar
Chen, S. C., Bravata, D. M., Weil, E., & Olkin, I. (2001). A comparison of dermatologists’ and primary care physicians’ accuracy in diagnosing melanoma: A systematic review. Archives of Dermatology, 137(12), 1627–1634.Google Scholar
Chin, M. D., Evans, K. K., Wolfe, J. M., Bowen, J., & Tanaka, J. W. (2018). Inversion effects in the expert classification of mammograms and faces. Cognitive Research: Principles and Implications, 3(1), 1–9, 17(10). https://doi.org/10.1167/17.10.1226Google Scholar
Chiwome, L., Okojie, O. M., Jamiur, A. K., Javed, F., & Hamid, P. (2020). Artificial intelligence: Is it Armageddon for breast radiologists? Cureus, 12(6). https://doi.org/10.7759/cureus.8923Google Scholar
Collin, C. A. & McMullen, P. A. (2005). Subordinate-level categorization relies on high spatial frequencies to a greater degree than basic-level categorization. Perception and Psychophysics, 67(2), 354364. www.ncbi.nlm.nih.gov/pubmed/15971697Google Scholar
Costen, N. P., Parker, D. M., & Craw, I. (1994). Spatial content and spatial quantisation effects in face recognition. Perception, 23(2), 129146. https://doi.org/10.1068/p230129Google Scholar
Costen, N. P., Parker, D. M., & Craw, I. (1996). Effects of high-pass and low-pass spatial filtering on face identification. Perception and Psychophysics, 58(4), 602612. https://doi.org/10.3758/BF03213093CrossRefGoogle ScholarPubMed
Curby, K. M. & Gauthier, I. (2009). The temporal advantage for individuating objects of expertise: Perceptual expertise is an early riser. Journal of Vision, 9(6), 113. https://doi.org/10.1167/9.6.7.Google Scholar
Curby, K. M., Glazek, K., & Gauthier, I. (2009). A visual short-term memory advantage for objects of expertise. Journal of Experimental Psychology: Human Perception and Performance, 35(1), 94–107.Google Scholar
Curby, K. M. & Gauthier, I. (2010). To the trained eye: Perceptual expertise alters visual processing. Topics in Cognitive Science, 2(2), 189–201.Google Scholar
Curran, T., Tanaka, J. W., & Weiskopf, D. M. (2002). An electrophysiological comparison of visual categorization and recognition memory. Cognitive Affective and Behavioral Neuroscience, 2(1), 118. https://doi.org/10.3758/CABN.2.1.1Google Scholar
Davidoff, J. & Donnelly, N. (1990). Object superiority: A comparison of complete and part probes. Acta Psychologica, 73(3), 225–243.Google Scholar
DeLoache, J. S., Simcock, G., & Macari, S. (2007). Planes, trains, automobiles – and tea sets: Extremely intense interests in very young children. Developmental Psychology, 43(6), 15791586. https://doi.org/10.1037/0012-1649.43.6.1579Google Scholar
Deng, J., Krause, J., & Fei-Fei, L. (2013). Fine-grained crowdsourcing for fine-grained recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 580587.CrossRefGoogle Scholar
Dennett, H. W., McKone, E., Tavashmi, R. et al. (2012). The Cambridge Car Memory Test: A task matched in format to the Cambridge Face Memory Test, with norms, reliability, sex differences, dissociations from face memory, and expertise effects. Behavior Research Methods, 44(2), 587605. https://doi.org/10.3758/s13428-011-0160-2Google Scholar
Devillez, H., Mollison, M. V., Hagen, S., Tanaka, J. W., Scott, L. S., & Curran, T. (2019). Color and spatial frequency differentially impact early stages of perceptual expertise training. Neuropsychologia, 122, 62–75. https://doi.org/10.1016/j.neuropsychologia.2018.11.011Google Scholar
Diamond, R. & Carey, S. (1986). Why faces are not special: An effect of expertise. Journal of Experimental Psychology: General, 115(2), 107117.Google Scholar
Dosovitskiy, A., Beyer, L., Kolesnikov, A. et al. (2020). An image is worth 16×16 words: Transformers for image recognition at scale. arXiv. http://arxiv.org/abs/2010.11929Google Scholar
Doughterty, J. W. D. (1978). Salience and relativity in classification. American Ethnologist, 5(1), 6680. https://doi.org/10.1525/ae.1978.5.1.02a00060Google Scholar
Eckstein, M. P., Koehler, K., Welbourne, L. E., & Akbas, E. (2017). Humans, but not deep neural networks, often miss giant targets in scenes. Current Biology: CB, 27(18), 2827–2832.e3.Google Scholar
ESR (European Society of Radiology). (2019). What the radiologist should know about artificial intelligence: An ESR white paper. Insights Imaging, 10(1), 44. https://doi.org/10.1186/s13244-019-0738-2Google Scholar
Esteva, A., Kuprel, B., Novoa, R. A. et al. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118.Google Scholar
Evans, K. K., Georgian-Smith, D., Tambouret, R., Birdwell, R. L., & Wolfe, J. M. (2013). The gist of the abnormal: Above-chance medical decision making in the blink of an eye. Psychonomic Bulletin and Review, 20(6), 11701175. https://doi.org/10.3758/s13423-013-0459-3.Google Scholar
Evans, K. K., Haygood, T. M., Cooper, J., Culpan, A.-M., & Wolfe, J. M. (2016). A half-second glimpse often lets radiologists identify breast cancer cases even when viewing the mammogram of the opposite breast. Proceedings of the National Academy of Sciences of the United States of America, 113(37), 10292–10297.Google ScholarPubMed
Fan, C., He, W., He, H., Ren, G., Luo, Y., Li, H., & Luo, W. (2016). N170 changes show identifiable Chinese characters compete primarily with faces rather than houses. Frontiers in Psychology, 6, 1952. https://doi.org/10.3389/fpsyg.2015.01952Google Scholar
Fonseca, P., Mendoza, J., Wainer, J. et al. (2015). Automatic breast density classification using a convolutional neural network architecture search procedure. Medical Imaging 2015: Computer-Aided Diagnosis, 9414, 941428.Google Scholar
Foss-Feig, J. H., McGugin, R. W., Gauthier, I., Mash, L. E., Ventola, P., & Cascio, C. J. (2016). A functional neuroimaging study of fusiform response to restricted interests in children and adolescents with autism spectrum disorder. Journal of Neurodevelopmental Disorders, 8(1), 112. https://doi.org/10.1186/s11689-016-9149-6Google Scholar
Fu, J., Zheng, H., & Mei, T. (2017). Look closer to see better: Recurrent attention convolutional neural network for fine-grained image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 44384446.Google Scholar
Gauthier, I., Curran, T., Curby, K. M., & Collins, D. (2003). Perceptual interference supports a non-modular account of face processing. Nature Neuroscience, 6(4), 428432. https://doi.org/10.1038/nn1029nnCrossRefGoogle ScholarPubMed
Gauthier, I., Skudlarski, P., Gore, J. C., & Anderson, A. W. (2000). Expertise for cars and birds recruits brain areas involved in face recognition. Nature Neuroscience, 3, 191197.Google Scholar
Gauthier, I. & Tarr, M. J. (1997). Becoming a “Greeble” expert: Exploring the face recognition mechanism. Vision Research, 37(12), 1673–1682.Google Scholar
Gauthier, I., Tarr, M. J., Anderson, A. W., Skudlarski, P., & Gore, J. C. (1999). Activation of the middle fusiform “face area” increases with expertise in recognizing novel objects. Nature Neuroscience, 2(6), 568573.Google Scholar
Gauthier, I., Williams, P., Tarr, M. J., & Tanaka, J. (1998). Training “greeble” experts: A framework for studying expert object recognition processes. Vision Research, 38(1516). https://doi.org/10.1016/S0042-6989(97)00442-2Google Scholar
Gebru, T. (2019). Oxford handbook on AI ethics book chapter on race and gender. arXiv preprint arXiv:1908.06165.Google Scholar
Gegenfurtner, K. R. & Rieger, J. (2000). Sensory and cognitive contributions of color to the recognition of natural scenes. Current Biology: CB, 10(13), 805808. www.ncbi.nlm.nih.gov/pubmed/10898985Google Scholar
Geirhos, R., Michaelis, C., Wichmann, F. A., Rubisch, P., Bethge, M., & Brendel, W. (2019). Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. 7th International Conference on Learning Representations, ICLR 2019, c, 122.Google Scholar
Glover, G. H. (2011). Overview of functional magnetic resonance imaging. Neurosurgery Clinics of North America, 22(2), 133139. https://doi.org/10.1016/j.nec.2010.11.001Google Scholar
Gobbo, C. & Chi, M. (1986). How knowledge is structured and used by expert and novice children. Cognitive Development, 1(3), 221237.Google Scholar
Goffaux, V., Hault, B., Michel, C., Vuong, Q. C., & Rossion, B. (2005). The respective role of low and high spatial frequencies in supporting configural and featural processing of faces. Perception, 34(1), 7786. https://doi.org/10.1068/p5370Google Scholar
Gomez, J., Barnett, M., & Grill-Spector, K. (2019). Extensive childhood experience with Pokémon suggests eccentricity drives organization of visual cortex. Nature Human Behaviour 3(6), 611624. https://doi.org/10.1038/s41562-019-0592-8Google Scholar
Grelotti, D. J., Klin, A. J., Gauthier, I. et al. (2005). fMRI activation of the fusiform gyrus and amygdala to cartoon characters but not to faces in a boy with autism. Neuropsychologia, 43(3), 373385. https://doi.org/10.1016/j.neuropsychologia.2004.06.015Google Scholar
Gulshan, V., Peng, L., Coram, M. et al. (2016). Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA: The Journal of the American Medical Association, 316(22), 24022410.Google Scholar
Hagen, S. & Tanaka, J. W. (2019). Examining the neural correlates of within-category discrimination in face and non-face expert recognition. Neuropsychologia, 124, 4454.Google Scholar
Hagen, S., Vuong, Q. C., Scott, L. S., Curran, T., & Tanaka, J. W. (2014). The role of color in expert object recognition. Journal of Vision, 14(9). https://doi.org/10.1167/14.9.9Google Scholar
Hagen, S., Vuong, Q. C., Scott, L. S., Curran, T., & Tanaka, J. W. (2016). The role of spatial frequency in expert object recognition. Journal of Experimental Psychology: Human Perception and Performance, 42(3). https://doi.org/10.1037/xhp0000139Google Scholar
Hajibayova, L. & Jacob, E. (2015). Basic-level concepts and the assessment of subject relevance: Are they really relevant? NASKO, 5(1), 74–81.Google Scholar
Harel, A. & Bentin, S. (2009). Stimulus type, level of categorization, and spatial-frequencies utilization: Implications for perceptual categorization hierarchies. Journal of Experimental Psychology: Human Perception and Performance, 35(4), 12641273. https://doi.org/10.1037/a0013621Google ScholarPubMed
Harel, A., Gilaie-Dotan, S., Malach, R., & Bentin, S. (2010). Top-down engagement modulates the neural expressions of visual expertise. Cerebral Cortex, 20(10), 23042318. https://doi.org/10.1093/cercor/bhp316Google Scholar
Harley, E. M., Pope, W. B., Villablanca, J. P. et al. (2009). Engagement of fusiform cortex and disengagement of lateral occipital cortex in the acquisition of radiological expertise. Cerebral Cortex, 19(11), 27462754. https://doi.org/10.1093/cercor/bhp051Google Scholar
Hershler, O. & Hochstein, S. (2009). The importance of being expert: Top-down attentional control in visual search with photographs. Attention, Perception, and Psychophysics, 71(7), 14391459. https://doi.org/10.3758/APPGoogle Scholar
Hosseini, H., Xiao, B., Jaiswal, M., & Poovendran, R. (2017). On the limitation of convolutional neural networks in recognizing negative images. 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA), 352358.Google Scholar
Hsiao, J. H. & Cottrell, G. W. (2009). Not all visual expertise is holistic, but it may be leftist: The case of Chinese character recognition: Research Article. Psychological Science, 20(4), 455463. https://doi.org/10.1111/j.1467-9280.2009.02315.xGoogle Scholar
Huang, W., Wu, X., Hu, L., Wang, L., Ding, Y., & Qu, Z. (2017). Revisiting the earliest electrophysiological correlate of familiar face recognition. International Journal of Psychophysiology, 120(May), 4253. https://doi.org/10.1016/j.ijpsycho.2017.07.001Google Scholar
Hubel, D. H. & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. The Journal of Physiology, 160, 106–154.Google Scholar
James, T. W. & James, K. H. (2013). Expert individuation of objects increases activation in the fusiform face area of children. NeuroImage, 67, 182192. https://doi.org/10.1016/j.neuroimage.2012.11.007CrossRefGoogle ScholarPubMed
Johnson, K. E. & Eilers, A. T. (1998). Effects of knowledge and development on subordinate level categorization. Cognitive Development, 13(4), 515545. https://doi.org/10.1016/S0885-2014(98)90005-3Google Scholar
Johnson, K. E. & Mervis, C. B. (1994). Microgenetic analysis of first steps in children’s acquisition of expertise on shorebirds. Developmental Psychology, 30(3), 418435. https://doi.org/10.1037/0012-1649.30.3.418Google Scholar
Johnson, K. E. & Mervis, C. B. (1997). Effects of varying levels of expertise on the basic level of categorization. Journal of Experimental Psychology: General, 126(3), 248277. www.ncbi.nlm.nih.gov/pubmed/9281832Google Scholar
Johnson, K. E., Scott, P., & Mervis, C. B. (2004). What are theories for? Concept use throughout the continuum of dinosaur expertise. Journal of Experimental Child Psychology, 87(3), 171200. https://doi.org/10.1016/j.jecp.2003.12.001Google Scholar
Jolicoeur, P., Gluck, M. A., & Kosslyn, S. M. (1984). Pictures and names: Making the connection. Cognitive Psychology, 16(2), 243275.Google Scholar
Jones, T., Hadley, H., Cataldo, A. M. et al. (2018). Neural and behavioral effects of subordinate-level training of novel objects across manipulations of color and spatial frequency. European Journal of Neuroscience, 52(11), 44684479. https://doi.org/10.1111/ejn.13889Google Scholar
Kanwisher, N. (2000). Domain specificity in face perception. Nature Neuroscience, 3(8), 759763.Google Scholar
Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17(11), 43024311. https://doi.org/10.1523/JNEUROSCI.17-11-04302.1997Google Scholar
Kanwisher, N. & Yovel, G. (2006). The fusiform face area: A cortical region specialized for the perception of faces. Philosophical transactions of the Royal Society of London: Series B, Biological Sciences, 361(1476), 21092128. https://doi.org/10.1098/rstb.2006.1934Google Scholar
Karimi-Rouzbahani, H., Bagheri, N., & Ebrahimpour, R. (2017). Invariant object recognition is a personalized selection of invariant features in humans, not simply explained by hierarchical feedforward vision models. Scientific Reports, 7(1), 14402. https://doi.org/10.1038/s41598-017-13756-8Google Scholar
Klin, A., Danovitch, J. H., Merz, A. B., & Volkmar, F. R. (2007). Circumscribed interests in higher functioning individuals with autism spectrum disorders: An exploratory study. Research and Practice for Persons with Severe Disabilities, 32(2), 89100. https://doi.org/10.2511/rpsd.32.2.89Google Scholar
Krause, J., Jin, H., Yang, J., & Fei-Fei, L. (2015). Fine-grained recognition without part annotations. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 55465555.Google Scholar
Kriegeskorte, N. (2015). Deep neural networks: A new framework for modeling biological vision and brain information processing. Annual Review of Vision Science, 1, 417446.Google Scholar
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 10971105.Google Scholar
Kubilius, J., Bracci, S., & Op de Beeck, H. P. (2016). Deep neural networks as a computational model for human shape sensitivity. PLoS Computational Biology, 12(4), e1004896.Google Scholar
Kundel, H. L. & Nodine, C. F. (1975). Interpreting chest radiographs without visual search. Radiology, 116(3), 527532. https://doi.org/10.1148/116.3.527Google Scholar
Krupinski, E. A. & Jiang, Y. (2008). Anniversary paper: Evaluations of medical imaging systems. Medical Physics, 35(2), 645659. https://doi.org/10.1118/1.2830376.Google Scholar
Lebrecht, S., Pierce, L. J., Tarr, M. J., & Tanaka, J. W. (2009). Perceptual other-race training reduces implicit racial bias. PloS One, 4(1), e4215.Google Scholar
Liu-Shuang, J., Norcia, A. M., & Rossion, B. (2014). An objective index of individual face discrimination in the right occipito-temporal cortex by means of fast periodic oddball stimulation. Neuropsychologia, 52, 5772.Google Scholar
Luck, Steven J. (2014). An introduction to the event-related potential technique. Cambridge, MA: MIT Press.Google Scholar
Malt, B. C. (1995). Category coherence in cross-cultural perspective. Cognitive Psychology, 29(2), 85–148.Google Scholar
Margalit, E., Jamison, K. W., Weiner, K. S. et al. (2020). Ultra-high-resolution fMRI of human ventral temporal cortex reveals differential representation of categories and domains. Journal of Neuroscience, 40(15), 30083024. https://doi.org/10.1523/JNEUROSCI.2106-19.2020Google Scholar
Martinez-Cuitino, M. & Vivas, L. (2019). Category or diagnosticity effect? The influence of color in picture naming tasks. Psychology and Neuroscience, 12(3), 328341. https://doi.org/10.1037/pne0000172Google Scholar
Maurer, U., Zevin, J. D., & McCandliss, B. D. (2008). Left-lateralized N170 effects of visual expertise in reading: Evidence from Japanese syllabic and logographic scripts. Journal of Cognitive Neuroscience, 20(10), 18781891. https://doi.org/10.1162/jocn.2008.20125Google Scholar
McGugin, R. W., Gatenby, J. C., Gore, J. C., & Gauthier, I. (2012). High-resolution imaging of expertise reveals reliable object selectivity in the fusiform face area related to perceptual performance. Proceedings of the National Academy of Sciences of the United States of America, 109(42), 1706317068. https://doi.org/10.1073/pnas.1116333109Google Scholar
McGugin, R. W., McKeeff, T. J., Tong, F., & Gauthier, I. (2011). Irrelevant objects of expertise compete with faces during visual search. Attention, Perception, and Psychophysics, 73(2), 309317. https://doi.org/10.3758/s13414-010-0006-5CrossRefGoogle ScholarPubMed
Medin, D. L., Lynch, E. B., Coley, J. D., & Atran, S. (1997). Categorization and reasoning among tree experts: Do all roads lead to Rome? Cognitive Psychology, 32(1), 4996. https://doi.org/10.1006/cogp.1997.0645Google Scholar
Miyakoshi, M., Nomura, M., & Ohira, H. (2007). An ERP study on self-relevant object recognition. Brain and Cognition, 63(2), 182189.Google Scholar
Morrison, D. J. & Schyns, P. G. (2001). Usage of spatial scales for the categorization of faces, objects, and scenes. Psychonomic Bulletin and Review, 8(3), 454469. www.ncbi.nlm.nih.gov/pubmed/11700896Google Scholar
Murphy, G. L. (2016). Explaining the basic-level concept advantage in infants … or is it the superordinate-level advantage? Psychology of Learning and Motivation: Advances in Research and Theory, 64, 5792.Google Scholar
Murphy, G. L. & Brownell, H. H. (1985). Category differentiation in object recognition: Typicality constraints on the basic category advantage. Journal of Experimental Psychology: Learning, Memory, and Cognition, 11(1), 7084.Google Scholar
Murphy, G. L. & Smith, E. E. (1982). Basic level superiority in picture categorization. Journal of Verbal Learning and Verbal Behavior, 21(1), 120.CrossRefGoogle Scholar
Nagai, J. & Yokosawa, K. (2003). What regulates the surface color effect in object recognition: Color diagnosticity or category. Technical Report on Attention and Cognition, 28(28), 14. http://staff.aist.go.jp/jun.kawahara/AandC/3/nagai.pdfGoogle Scholar
Oliveria, S. A., Nehal, K. S., Christos, P. J., Sharma, N., Tromberg, J. S., & Halpern, A. C. (2001). Using nurse practitioners for skin cancer screening: A pilot study. American Journal of Preventive Medicine, 21(3), 214217.Google Scholar
Peterson, R. T. (1998). Peterson First Guide to Birds of North America. New York: Houghton Mifflin Harcourt.Google Scholar
Pierce, L. J., Scott, L., Boddington, S., Droucker, D., Curran, T., & Tanaka, J. (2011). The n250 brain potential to personally familiar and newly learned faces and objects. Frontiers in Human Neuroscience, 5, 111.Google ScholarGoogle Scholar
Rama Fiorini, S., Gärdenfors, P., & Abel, M. (2014). Representing part-whole relations in conceptual spaces. Cognitive Processing, 15(2), 127142. https://doi.org/10.1007/s10339-013-0585-xGoogle Scholar
Richler, J. J., Wilmer, J. B., & Gauthier, I. (2017). General object recognition is specific: Evidence from novel and familiar objects. Cognition, 166, 4255.Google Scholar
Rieger, L., Singh, C., Murdoch, W., & Yu, B. (2020). Interpretations are useful: Penalizing explanations to align neural networks with prior knowledge. In H. D. Iii & A. Singh (Eds.), Proceedings of the 37th International Conference on Machine Learning 119, 8116–8126. PMLR.Google Scholar
Roads, B. D., Xu, B., Robinson, J. K., & Tanaka, J. W. (2018). The easy-to-hard training advantage with real-world medical images. Cognitive Research, 3(38). http://doi.org/10.1186/s41235-018-0131-6Google Scholar
Robbins, R. & McKone, E. (2007). No face-like processing for objects-of-expertise in three behavioural tasks. Cognition, 103(1), 3479. https://doi.org/10.1016/j.cognition.2006.02.008Google Scholar
Rosch, E., Mervis, C. B., Gray, W., Johnson, D., & Boyes-Braem, P. (1976). Basic objects in natural categories. Cognitive Psychology, 8, 382439.Google Scholar
Rossion, B. (2008). Constraining the cortical face network by neuroimaging studies of acquired prosopagnosia. NeuroImage, 40, 423426.Google Scholar
Rossion, B. (2013). The composite face illusion: A whole window into our understanding of holistic face perception. Visual Cognition, 21(2), 139253. https://doi.org/10.1080/13506285.2013.772929Google Scholar
Rossion, B., Collins, D., & Curran, T. (2007). Long-term expertise with artificial objects increases visual competition with early face categorization processes. Journal of Cognitive Neurosciennce, 19(3), 543555. https://doi.org/10.1162/jocn.2007.19.3.543Google Scholar
Rossion, B. & Curran, T. (2010). Visual expertise with pictures of cars correlates with RT magnitude of the car inversion effect. Perception, 39(2). https://doi.org/10.1068/p6270Google Scholar
Rossion, B., Gauthier, I., Goffaux, V., Tarr, M. J., & Crommelinck, M. (2002). Expertise training with novel objects leads to left-lateralized facelike electrophysiological responses. Psychological Science, 13(3), 250257. https://doi.org/10.1111/1467-9280.00446Google Scholar
Rossion, Bruno; Jacques, Corentin. The N170: understanding the time-course of face perception in the human brain. In: Steven J. Luck, The Oxford Handbook of Event-Related Potential Components, Oxford University Press: (United Kingdom) Oxford 2011, p. 115–142 http://hdl.handle.net/2078.1/110943 – DOI:10.1093/oxfordhb/9780195374148.013.0064Google Scholar
Rossion, B. & Pourtois, G. (2004). Revisiting Snodgrass and Vanderwart’s object pictorial set: The role of surface detail in basic-level object recognition. Perception, 33(2), 217–236.
Schweinberger, S. R., Huddy, V., & Burton, A. M. (2004). N250r: A face-selective brain response to stimulus repetitions. NeuroReport, 15(9), 1501–1505.
Schyns, P. G. (1998). Diagnostic recognition: Task constraints, object information, and their interactions. Cognition, 67(1–2), 147–179.
Schyns, P. G. & Rodet, L. (1997). Categorization creates functional features. Journal of Experimental Psychology: Learning, Memory and Cognition, 23(3), 681–696.
Scolari, M., Vogel, E. K., & Awh, E. (2008). Perceptual expertise enhances the resolution but not the number of representations in working memory. Psychonomic Bulletin and Review, 15, 215–222.
Scott, L. S., Tanaka, J. W., Sheinberg, D. L., & Curran, T. (2006). A reevaluation of the electrophysiological correlates of expert object processing. Journal of Cognitive Neuroscience, 18(9), 1453–1465. https://doi.org/10.1162/jocn.2006.18.9.1453
Scott, L. S., Tanaka, J. W., Sheinberg, D. L., & Curran, T. (2008). The role of category learning in the acquisition and retention of perceptual expertise: A behavioral and neurophysiological study. Brain Research, 1210, 204–215. https://doi.org/10.1016/j.brainres.2008.02.054
Shen, J., Mack, M. L., Palmeri, T. J., & Connors, M. H. (2014). Studying real-world perceptual expertise. Frontiers in Psychology, 5, 857. https://doi.org/10.3389/fpsyg.2014.00857
Solomon, J. A. & Pelli, D. G. (1994). The visual filter mediating letter identification. Nature, 369, 395–397. https://doi.org/10.1038/369395a0
Spence, C., Levitan, C. A., Shankar, M. U., & Zampini, M. (2010). Does food color influence taste and flavor perception in humans? Chemosensory Perception, 3(1), 68–84. https://doi.org/10.1007/s12078-010-9067-z
Swensson, R. G. (1980). A 2-stage detection model applied to skilled visual-search by radiologists. Perception and Psychophysics, 27(1), 11–16. https://doi.org/10.3758/BF03199899
Szegedy, C., Liu, W., Jia, Y. et al. (2014). Going deeper with convolutions: Technical report. arXiv. https://arxiv.org/abs/1409.4842
Tanaka, J. W. (2001). The entry point of face recognition. Journal of Experimental Psychology: General, 130, 534–543.
Tanaka, J. W. & Curran, T. (2001). A neural basis for expert object recognition. Psychological Science, 12(1), 43–47. https://doi.org/10.1111/1467-9280.00308
Tanaka, J. W., Curran, T., Porterfield, A. L., & Collins, D. (2006). Activation of preexisting and acquired face representations: The N250 event-related potential as an index of face familiarity. Journal of Cognitive Neuroscience, 18(9). https://doi.org/10.1162/jocn.2006.18.9.1488
Tanaka, J. W., Curran, T., & Sheinberg, D. (2005). The training and transfer of real-world perceptual expertise. Psychological Science, 16(2), 145–151.
Tanaka, J. W. & Farah, M. J. (1993). Parts and wholes in face recognition. The Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology, 46(2), 225–245.
Tanaka, J. W. & Pierce, L. J. (2009). The neural plasticity of other-race face recognition. Cognitive, Affective and Behavioral Neuroscience, 9(1), 122–131.
Tanaka, J. W. & Presnell, L. M. (1999). Color diagnosticity in object recognition. Perception and Psychophysics, 61(6), 1140–1153.
Tanaka, J. W. & Taylor, M. (1991). Object categories and expertise: Is the basic level in the eye of the beholder? Cognitive Psychology, 23(3), 457–482. https://doi.org/10.1016/0010-0285(91)90016-H
Turner-Brown, L. M., Lam, K. S. L., Holtzclaw, T. N., Dichter, G. S., & Bodfish, J. W. (2011). Phenomenology and measurement of circumscribed interests in autism spectrum disorders. Autism, 15(4), 437–456. https://doi.org/10.1177/1362361310386507
Tversky, B. (1989). Parts, partonomies, and taxonomies. Developmental Psychology, 25(6), 983–995.
Tversky, B. & Hemenway, K. (1984). Objects, parts, and categories. Journal of Experimental Psychology: General, 113, 169–193.
Ullman, S., Assif, L., Fetaya, E., & Harari, D. (2016). Atoms of recognition in human and computer vision. Proceedings of the National Academy of Sciences of the United States of America, 113(10), 2744–2749.
Ventura, P., Fernandes, T., Leite, I., Almeida, V., Casqueiro, I., & Wong, A. (2017). The word-composite effect depends on abstract lexical representations but not surface features like case and font. Frontiers in Psychology, 8, 1036. https://doi.org/10.3389/fpsyg.2017.01036
Ventura, P., Fernandes, T., Pereira, A. et al. (2020). Holistic word processing is correlated with efficiency in visual word recognition. Attention, Perception, and Psychophysics, 82(5), 2739–2750. https://doi.org/10.3758/s13414-020-01988-2
Vogel, E. K., Woodman, G. F., & Luck, S. J. (2001). Storage of features, conjunctions, and objects in visual working memory. Journal of Experimental Psychology: Human Perception and Performance, 27(1), 92–114. https://doi.org/10.1037/0096-1523.27.1.92
Waldrop, M. M. (2019). News feature: What are the limits of deep learning? Proceedings of the National Academy of Sciences of the United States of America, 116(4), 1074–1077.
Wang, L., Mottron, L., Peng, D., Berthiaume, C., & Dawson, M. (2007). Local bias and local-to-global interference without global deficit: A robust finding in autism under various conditions of attention, exposure time, and visual angle. Cognitive Neuropsychology, 24(5), 550–574. https://doi.org/10.1080/13546800701417096
Wiese, H., Tüttenberg, S. C., Ingram, B. T., Chan, C. Y. X., Gurbuz, Z., Burton, A. M., & Young, A. W. (2019). A robust neural index of high face familiarity. Psychological Science, 30(2), 261–272.
Winkler-Rhoades, N., Medin, D., Waxman, S. R., Woodring, J., & Ross, N. O. (2010). Naming the animals that come to mind: Effects of culture and experience on category fluency. Journal of Cognition and Culture, 10(1–2), 205–220. https://doi.org/10.1163/156853710X497248
Wisniewski, E. J. & Murphy, G. L. (1989). Superordinate and basic category names in discourse: A textual analysis. Discourse Processes. https://doi.org/10.1080/01638538909544728
Wolfe, J. N. (1976). Risk for breast cancer development determined by mammographic parenchymal pattern. Cancer, 37(5), 2486–2492.
Wong, A. C.-N., Bukach, C. M., Yuen, C., Yang, L., Leung, S., & Greenspon, E. (2011). Holistic processing of words modulated by reading experience. PloS One, 6(6), e20753. https://doi.org/10.1371/journal.pone.0020753
Wong, A. C.-N., Palmeri, T. J., & Gauthier, I. (2009). Conditions for facelike expertise with objects: Becoming a ziggerin expert – but which type? Psychological Science, 20(9), 1108–1117. https://doi.org/10.1111/j.1467-9280.2009.02430.x
Wong, Y. K. & Gauthier, I. (2010). Holistic processing of musical notation: Dissociating failures of selective attention in experts and novices. Cognitive, Affective, & Behavioral Neuroscience, 10(4), 541–551. https://doi.org/10.3758/CABN.10.4.541
Wurm, L. H., Legge, G. E., Isenberg, L. M., & Luebker, A. (1993). Color improves object recognition in normal and low vision. Journal of Experimental Psychology: Human Perception and Performance, 19(4), 899.
Xu, B., Rourke, L., Robinson, J. K., & Tanaka, J. W. (2016). Training melanoma detection in photographs using the perceptual expertise training approach. Applied Cognitive Psychology, 30(5), 750–756.
Xu, Y. (2005). Revisiting the role of the fusiform face area in visual expertise. Cerebral Cortex, 15(8), 1234–1242. https://doi.org/10.1093/cercor/bhi006
Yamins, D. L. K., Hong, H., Cadieu, C. F., Solomon, E. A., Seibert, D., & DiCarlo, J. J. (2014). Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23), 8619–8624.
Yin, R. K. (1969). Looking at upside-down faces. Journal of Experimental Psychology, 81(1), 141–145.
Young, A. W., Hellawell, D., & Hay, D. C. (2013). Configurational information in face perception. Perception, 42(11), 1166–1178.