
Problem solving methods and knowledge systems: A personal journey to perceptual images as knowledge

Published online by Cambridge University Press:  14 October 2009

B. Chandrasekaran
Affiliation:
Laboratory for Artificial Intelligence Research, Department of Computer Science and Engineering, The Ohio State University, Columbus, Ohio, USA

Abstract

I was among those who proposed problem solving methods (PSMs) in the late 1970s and early 1980s as a knowledge-level description of strategies useful in building knowledge-based systems. This paper summarizes the evolution of my ideas over the last two decades. I start with a review of the original ideas. From an artificial intelligence (AI) point of view, what is interesting is not PSMs as such, which are essentially high-level design strategies for computation, but PSMs associated with tasks that bear on AI and cognition. They are also interesting with respect to cognitive architecture proposals such as Soar and ACT-R: PSMs are observed regularities in the use of knowledge that an exclusive focus on the architecture level might miss, because the architecture level provides no vocabulary for talking about these regularities. PSMs in the original conception are closely connected to a specific view of knowledge: symbolic expressions represented in a repository and retrieved as needed. I join critics of this view, and maintain with them that most often knowledge is not so much retrieved from a knowledge base as constructed as needed. This criticism, however, raises the question of what is in memory that is not knowledge as traditionally conceived in AI, but can nevertheless support the construction of knowledge in predicate–symbolic form. My recent proposal about cognition and multimodality offers a possible answer. In this view, much of memory consists of perceptual and kinesthetic images, which can be recalled during deliberation and from which internal perception can generate linguistic–symbolic knowledge. For example, from a mental image of a configuration of objects, numerous sentences can be constructed describing spatial relations between the objects. My work on diagrammatic reasoning is an implemented example of how this might work. These internal perceptions on imagistic representations are a new kind of PSM.

Type: Research Article

Copyright © Cambridge University Press 2009

