Shallow belief in LLMs

Abstract

Do large language models have beliefs? Interpretationist theories ground belief in predictive utility rather than in facts about representational format. This raises a problem: doesn’t the linguistic fluency of LLMs make them doxastic equivalents of humans? To address it, I distinguish two questions: (i) Do propositional-attitude models outperform alternative methods of predicting LLM output? (ii) How well do propositional attitudes predict LLM behavior, relative to how well they predict human behavior? I argue that LLMs pass the first test but score much lower on the second. Cross-context contradictions generate an indeterminacy absent from human belief. Humans possess reconciliation mechanisms (embodied action, memory, continual learning) that LLMs lack. Parallel arguments apply to desires. The resulting predictive profile offers a way of thinking about the doxastic properties of LLMs that avoids both eliminativism and anthropomorphism.

Date
Dec 4, 2025 1:00 PM
Event
Berlin philosophy of AI group
Location
Berlin
Charles Rathkopf

I am interested in how mental properties emerge from physical stuff.