Posts

Subjectivity bias and AI

Here's an abstract of a paper I'm proposing for this year's Society for Philosophy and Psychology conference. It pulls together and extends some of my thoughts from earlier blog posts.

Subjectivity Bias and the Evaluation of Artificial Intelligence

Many critiques of artificial intelligence claim that its apparent capacities are "not real." It's said that AI intelligence is not real intelligence, AI understanding not real understanding, AI emotions not real emotions, and AI companions not real companions. This paper examines such claims and argues that they often rely on an unexamined subjectivity bias: the tendency to treat human subjective experience as a prerequisite for recognizing psychological capacities. One sense of "not real" appeals to the fact that AI systems are machine constructions rather than biological organisms. But the observation is largely redundant for explicitly artificial capabilities. If humans are understood as physical systems produc...

Trump on Rob Reiner

Hopefully Trump's comments about Rob Reiner will rile even more Republicans than they already have. Besides blaming 'Trump Derangement Syndrome' for Reiner's death, he makes his usual move of falsely labeling an opponent as once successful but now 'failing'. The extreme repulsion the supposed syndrome describes is accurate, but rather than being irrational it is of course a reasonable reaction to how beyond the pale Trump's behavior is. It's similar in a way to Trump taking 'fake news' -- a term first coined in response to some of his early fabricated facts -- and using it against those who report on him honestly, as he's done by co-opting the language of other criticisms (e.g., weaponization). An interesting interview in the NY Times with 3 departing senators -- Jeff Flake, Tina Smith and Joe Manchin. The failure of many senators to defy Trump on issues it's known they actually disagree with him about is explained as simply fear of his organizing a primary opp...

Better angels?

Trump, Putin, Xi Jinping, Kim Jong Un. How did the world get into a situation where these people have enough power and weapons to do so much harm or even destroy civilization? Historians no doubt have explanations, but the core problem seems to be aspects of human nature that can produce immense harm when people in positions of great power are aided and abetted by rewarded loyalists with common desires (or who are kept in check by fear). Banal thoughts, but it is still remarkable that the rest of the world has not been able to forestall this. Part of the problem rests with that 'rest of the world', or enough of it that fails to see what's going on and act accordingly, if only in its long-term self-interest. It gives further lie to Steven Pinker's contention that overall things are steadily getting better in the world thanks to the 'better angels of our nature'. Pinker makes a strong statistical case for overall historical world improvement. But the statistics are undone by the immense har...

Trump's worst

The worst things about Trump seem to be: 1. Dishonesty. Truth has no meaning for Trump. What he says is simply a tool for some goal -- usually self-gratification. Its relationship to the truth is unimportant, though he sometimes tries to connect his statements to some kernel of something real to minimize blowback. The connection is usually distorted or exaggerated, or entirely fabricated. It's often claimed that there's a liberal bias in universities, and most faculty overall and many others in universities probably lean that way. As regards Trump, this is not hard to explain. Universities and those involved with them seek the truth. That's the business of education, research and science. People may be limited in attaining it by their own biases but, as a rule, they strive for it. They care about it, and it's the very basis of scientific progress. (The psychologist Marvin Frankel has suggested we've progressed in science but not so much in human affairs because...

Stochastic parrots: so what ...

In "On the dangers of stochastic parrots: Can language models be too big?" Bender et al. state that people "mistake LM [i.e., language model]-driven performance gains for actual natural language understanding" (p. 616). This is because language models are built to accurately predict what word follows another but don't actually understand language. They are nothing more than stochastic parrots. The interesting question here is the 'nothing more than'. The objection again seems rooted in the humanistic bias that only biological beings can 'really' understand. But human understanding, and the language manipulation that manifests it, may itself be mostly a sophisticated stochastic process. Sam Altman has posted on X: "i am a stochastic parrot, and so r u." Our brains are sophisticated neural networks that don't seem functionally very different from the artificial networks that they were the original...

LLMs, consciousness, understanding and moral worth

Barbara Montero's NY Times op-ed on Nov. 8, 2025 -- "AI is on its way to something even more remarkable than intelligence" -- imagines the possibility of LLMs having consciousness -- something it is like to be them, using a definition of subjective awareness made famous by Thomas Nagel's "What is it like to be a bat?". This would be different from what it's like to be oneself or, to the extent we can imagine it, a bat ... but still 'something'. This seems reasonable. Montero suggests the criterion to be met for this accomplishment might be some reported 'inner' experiences. She suggests that this is no different from how we attribute consciousness to anyone but ourselves. That attribution in the case of other humans (or some animals) is strengthened by the belief that they are similarly constructed, so we might expect similar inner experiences. We don't have that belief in shared construction in the case of LLMs (and it's a matter of debate how important similar construction is). Bu...