Yetter-Chappell, H. (2026). What a Bing Really, Really Wants: Zigazig Ah. Journal of Consciousness Studies.
[Abstract] It’s natural to assume that if Large Language Models (LLMs) have intentional states, these states can be straightforwardly read off of their outputs. I argue that this is a mistake. Given plausible assumptions about consciousness and intentionality, the content of their intentional states will essentially be inaccessible to us. ‘Pain’ talk from an LLM may be meaningful, but there is no reason to think it expresses pain. ‘Desire’ talk from an LLM may be meaningful, but there is no reason to think it expresses desires. Our epistemic position regarding LLMs is radically and hopelessly impoverished.
Citing Place (1956) in context:
Section 1.1: LLMs and Radically Alien Meaning
Premise (3): LLMs may (in future) be phenomenally conscious.
Footnote 7: The view that consciousness is based in biology stems from Place (1956) and Smart (1959).