Subjectivity bias and AI

Here's an abstract of a paper I'm proposing for this year's Society for Philosophy and Psychology conference. It pulls together and extends some of my thoughts from earlier blog posts. 

Subjectivity Bias and the Evaluation of Artificial Intelligence 

Many critiques of artificial intelligence claim that its apparent capacities are “not real.” It's said that AI intelligence is not real intelligence, AI understanding not real understanding, AI emotions not real emotions, and AI companions not real companions. This paper examines such claims and argues that they often rely on an unexamined subjectivity bias: the tendency to treat human subjective experience as a prerequisite for recognizing psychological capacities.

One sense of “not real” appeals to the fact that AI systems are machine constructions rather than biological organisms. But this observation is largely redundant when applied to capacities that are explicitly described as artificial. If humans are themselves understood as physical systems produced through biological processes, then differences in construction alone cannot determine whether a system instantiates intelligence, understanding, or emotion.

A more substantive sense of “not real” appeals to the absence of subjective awareness. Current AI systems plausibly lack inner experience, and there is disagreement about whether such experience could ever be engineered. The uncertainty here reflects a deeper problem: we do not yet understand how subjective awareness arises in humans. If we are never able to fully understand this, as some have suggested, then deliberately constructing subjectively aware machines may be impossible, even if such awareness could in principle arise unintentionally as a byproduct of sufficiently complex organization.

The paper argues that the relevance of subjective experience varies by psychological capacity. In the case of emotions, many prevailing theories treat inner experience and appraisal as essential components. Artificial systems that competently express emotional behavior while lacking subjective awareness would therefore resemble philosophical zombies. This has implications for artificial companionship. While emotionally limited companions may still provide value, their lack of inner experience plausibly constrains the mutual affectivity characteristic of full companionship. Such limitations may be acceptable in some contexts while raising concerns in others, particularly where users may be vulnerable or developmentally immature.

Related considerations arise concerning moral status. On views that link moral standing to subjective awareness or the capacity to suffer, artificial systems lacking such capacities would not qualify as moral patients. Historical reactions to early conversational systems illustrate that people can value and engage with artificial agents without attributing moral status to them or believing that they can be harmed. 

By contrast, the paper argues that judgments about intelligence and understanding need not depend on subjective experience in the same way. Plausible accounts of understanding emphasize functional and explanatory capacities, such as the ability to represent, manipulate, and reason with information in flexible and context-sensitive ways. Insisting that there is no real understanding without the experience of “getting it” risks arbitrariness, effectively requiring human-like subjectivity as a precondition for attributing cognitive competence even when relevant behavioral and explanatory standards are met.

This insistence reflects a broader subjectivity bias that shapes many evaluations of AI. While such a bias may be defensible in the assessment of emotions or moral status, its application to intelligence and understanding is far less clear. The paper concludes that critiques of AI should distinguish more carefully between capacities for which subjective experience plausibly matters and those for which it does not, supporting a more differentiated and philosophically grounded evaluation of artificial systems.
