Accepted manuscript

What Do Large Language Models Know? Tacit Knowledge as a Potential Causal-Explanatory Structure

Published online by Cambridge University Press:  10 April 2025

Céline Budding*
Affiliation:
Philosophy & Ethics group and Eindhoven Artificial Intelligence Systems Institute, Eindhoven University of Technology, [email protected]

Abstract


It is sometimes assumed that Large Language Models (LLMs) know language, or for example that they know that Paris is the capital of France. But what—if anything—do LLMs actually know? In this paper, I argue that LLMs can acquire tacit knowledge as defined by Martin Davies (1990). Whereas Davies himself denies that neural networks can acquire tacit knowledge, I demonstrate that certain architectural features of LLMs satisfy the constraints of semantic description, syntactic structure, and causal systematicity. Thus, tacit knowledge may serve as a conceptual framework for describing, explaining, and intervening on LLMs and their behavior.

Type
Article
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Philosophy of Science Association