Book contents
- Frontmatter
- Contents
- Preface
- Contributors
- 1 The Evolution of Object Categorization and the Challenge of Image Abstraction
- 2 A Strategy for Understanding How the Brain Accomplishes Object Recognition
- 3 Visual Recognition Circa 2008
- 4 On What It Means to See, and What We Can Do About It
- 5 Generic Object Recognition by Inference of 3-D Volumetric Parts
- 6 What Has fMRI Taught Us About Object Recognition?
- 7 Object Recognition Through Reasoning About Functionality: A Survey of Related Work
- 8 The Interface Theory of Perception: Natural Selection Drives True Perception to Swift Extinction
- 9 Words and Pictures: Categories, Modifiers, Depiction, and Iconography
- 10 Structural Representation of Object Shape in the Brain
- 11 Learning Hierarchical Compositional Representations of Object Structure
- 12 Object Categorization in Man, Monkey, and Machine: Some Answers and Some Open Questions
- 13 Learning Compositional Models for Object Categories from Small Sample Sets
- 14 The Neurophysiology and Computational Mechanisms of Object Representation
- 15 From Classification to Full Object Interpretation
- 16 Visual Object Discovery
- 17 Towards Integration of Different Paradigms in Modeling, Representation, and Learning of Visual Categories
- 18 Acquisition and Disruption of Category Specificity in the Ventral Visual Stream: The Case of Late Developing and Vulnerable Face-Related Cortex
- 19 Using Simple Features and Relations
- 20 The Proactive Brain: Using Memory-Based Predictions in Visual Recognition
- 21 Spatial Pyramid Matching
- 22 Visual Learning for Optimal Decisions in the Human Brain
- 23 Shapes and Shock Graphs: From Segmented Shapes to Shapes Embedded in Images
- 24 Neural Encoding of Scene Statistics for Surface and Object Inference
- 25 Medial Models for Vision
- 26 Multimodal Categorization
- 27 Comparing 2-D Images of 3-D Objects
- Index
- Plate section
5 - Generic Object Recognition by Inference of 3-D Volumetric Parts
Published online by Cambridge University Press: 20 May 2010
Summary
Introduction
Recognizing 3-D objects from a single 2-D image is one of the most challenging problems in computer vision; it requires solving several distinct tasks. Humans perform this task effortlessly and have no trouble describing objects in a scene, even objects they have never seen before. This is illustrated in Figure 5.1. The first task is to extract a set of features from the image, producing descriptions of the image richer than an array of pixel values. A second task involves defining a model description and building a database of such models. One must then establish correspondences between descriptions of the image and those of the models. The last task consists of learning new objects and adding their descriptions to the database. If the database is large, an indexing scheme is required for efficiency.
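To make these four tasks concrete, here is a minimal Python sketch of one possible organization of such a pipeline. Everything in it is an assumption made for illustration: the names (`Model`, `ModelDatabase`, `extract_features`, `recognize`), the feature-overlap matcher, and the inverted-index lookup are placeholders, not the chapter's actual method of inferring 3-D volumetric parts.

```python
from dataclasses import dataclass, field


@dataclass
class Model:
    """A stored object description (e.g., a set of part labels)."""
    name: str
    description: frozenset


@dataclass
class ModelDatabase:
    """Task 2: a database of model descriptions, with a simple inverted index."""
    models: list = field(default_factory=list)
    index: dict = field(default_factory=dict)  # feature -> names of models using it

    def add(self, model):
        """Task 4: learn a new object by storing and indexing its description."""
        self.models.append(model)
        for feature in model.description:
            self.index.setdefault(feature, set()).add(model.name)

    def candidates(self, features):
        """Indexing: retrieve only models sharing at least one feature."""
        names = set()
        for f in features:
            names |= self.index.get(f, set())
        return names


def extract_features(image):
    """Task 1: describe the image with something richer than raw pixels.
    A real system would detect edges, regions, or 3-D parts; here the
    "image" is stood in for by a list of pre-extracted feature labels."""
    return frozenset(image)


def recognize(image, db):
    """Task 3: match the image description against the model database."""
    features = extract_features(image)
    names = db.candidates(features)  # prune via the index before scoring
    best, best_score = None, 0
    for model in db.models:
        if model.name not in names:
            continue
        score = len(features & model.description)  # naive overlap correspondence
        if score > best_score:
            best, best_score = model.name, score
    return best


db = ModelDatabase()
db.add(Model("mug", frozenset({"cylinder", "handle"})))
db.add(Model("table", frozenset({"slab", "leg"})))
print(recognize(["cylinder", "handle", "shadow"], db))  # -> mug
```

The inverted index stands in for the indexing scheme mentioned above: when the database is large, only models sharing at least one feature with the image are ever scored.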
Although these tasks seem clear and well defined, no consensus has emerged regarding the choice and level of features (2-D or 3-D), the matching strategy, the type of indexing used, or the order in which these tasks should be performed. Furthermore, it is still not established whether all of these tasks are necessary for recognizing objects in images.
The early days of computer vision research were dominated by the dogma of the 2 1/2-D sketch (Marr 1981). Consequently, it was “obvious” that the only way to process an image was to extract features such as edges and regions, infer from them a description of the visible surfaces, and from those surfaces derive 3-D descriptions.
- Type: Chapter
- Information: Object Categorization: Computer and Human Vision Perspectives, pp. 87-101. Publisher: Cambridge University Press. Print publication year: 2009