LLMs, consciousness, understanding and moral worth
Barbara Montero's NY Times op-ed of Nov. 8, 2025 -- “AI is on its way to something even more remarkable than intelligence” -- imagines the possibility of LLMs being conscious: that there is something it is like to be them, in the sense of subjective awareness made famous by Thomas Nagel’s “What Is It Like to Be a Bat?” What that is like would differ from what it’s like to be oneself or, to the extent we can imagine it, a bat … but it would still be ‘something’.
This seems reasonable. Montero suggests that the criterion for such an accomplishment might be reports of ‘inner’ experiences, and that this is no different from how we attribute consciousness to anyone but ourselves. In the case of other humans (or some animals), that attribution is strengthened by the belief that they are constructed similarly to us, so we might expect similar inner experiences. We have no such belief in shared construction in the case of LLMs (and how much similar construction matters is a matter of debate). But that absence does not eliminate the possibility for LLMs, if only because we are still clueless as to how subjective awareness arises from the materials of our own brains (granting the physicalist assumption that it does).
Anil Seth, for one, thinks the matter of construction is significant … see his recent “Conscious artificial intelligence and biological naturalism.” I lean the other way, as I think David Chalmers does … see his “Could a large language model be conscious?” Chalmers has also advocated a form of panpsychism that would place the discussion somewhere else entirely, though he sets that position aside in the paper to keep the discussion within the mainstream.
Self-reports by LLMs seem a dicey indicator, given their sometimes capricious (or outright false) expressions. Chalmers has suggested additional criteria.
The problem of consciousness is distinct from the question of whether LLMs actually understand certain things. It already seems they do. In the case of understanding, we can point to demonstrable criteria more easily than we can for consciousness (as Beckmann & Queloz nicely do in their article “Mechanistic indicators of understanding in large language models”).
Insisting that LLMs meeting these criteria still don’t ‘really’ understand (as Seth seems to contend, and as John Searle did long before him) seems to come either from a humanistic bias unwilling to consider this possibility for non-human, or at least non-biological, entities, or from the belief that to ‘really’ understand something requires a subjective feeling of understanding, which reduces the question to the same one of subjective awareness.
More questionable for me are Montero’s observations regarding morality. She suggests that, for most people, the presence of consciousness does not imply moral consideration. But this seems at odds with much of the philosophical literature (Chalmers in the paper above, for one) and perhaps with people’s intuitions more broadly. As a moral error theorist, I have no problem withholding moral status from conscious LLMs (or from anything/anyone else :), and at a pragmatic level (where I still consider how people are treated to be important) I have no problem simply excluding LLMs … at least, perhaps, until I can see that they might actually ‘suffer’ in some way, which seems a step beyond mere consciousness.