Do Chatbots Have Emotions?
No.

“My toaster is mad at me.”
Sure, maybe it burns the toast when I’m running late, or it doesn’t turn on at all, but we all know I’m not really saying the toaster has feelings.[1]
Consider the stories popping up claiming that a recent technical paper[2] says Anthropic’s chatbot Claude “contains its own kind of emotions.”[3] The “own kind” phrase in that headline is doing a lot of work. Less precise headlines feed the ongoing social media excitement that “chatbots have emotions.”
The paper doesn’t say that, of course. To me, it’s more interesting to ask why so many people seem to accept this idea at face value.
Let’s first agree on what the paper does say. It starts by observing that chatbots are responsive to the emotional tone of user inputs. This is pretty routine stuff for chatbots, similar to responding meaningfully to a question. To discuss this specific behavior, the authors give it the name functional emotions, and state:
We stress that these functional emotions may work quite differently from human emotions. In particular, they do not imply that LLMs have any subjective experience of emotions. Moreover, the mechanisms involved may be quite different from emotional circuitry in the human brain–for instance, we do not find evidence of the Assistant having an emotional state that is instantiated in persistent neural activity. [emphasis added]
So they’re not saying that chatbots have emotions.
But my focus here isn’t how the media covers AI. Rather, I’m interested in why anyone encountering this idea would accept it. I think the problem is linguistic: we routinely use lots of metaphors when talking about computers (and AI in particular), and those metaphors have been strained beyond their breaking point.
Computer Metaphors
Consider my petulant toaster. When I say it’s angry at me, we all know that I’m speaking metaphorically.
I love metaphors, and I use them all the time. Particularly when presenting complex technical subjects to a general audience, metaphors help people bridge the gap between familiar and new ideas. But they are a risky device, because metaphors are necessarily incomplete and, in some ways, wrong. Those are the inevitable mismatches that come when one thing stands in for another. But if someone forgets that a description is a metaphor, and reasons forward from it to draw new conclusions, those conclusions can be wildly off the mark.
Despite this risk, everyone (including computer professionals) uses metaphors all the time when talking about computers. We talk about computer memory, borrowing the term we use for how people retain and recover information. But computer memories are fundamentally unlike human memory.[4] We often speak of neurons and neural networks, but the “neurons” in AI algorithms are vastly simplified abstractions of real neurons.[5][6]
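To make that gap concrete, here is a minimal sketch (my own illustration, not code from any of the systems or papers discussed here) of what an artificial “neuron” actually does: multiply its inputs by weights, add them up, and apply a simple nonlinearity. That is essentially the whole mechanism.

```python
# A minimal sketch of an artificial "neuron". It multiplies its inputs by
# fixed weights, adds a bias, and passes the sum through a simple
# nonlinearity. There is no membrane, no neurotransmitter chemistry, and
# no spike timing here; none of the biology a real neuron involves.

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of the inputs, plus a bias term.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ReLU nonlinearity: keep positive values, zero out the rest.
    return max(0.0, total)

# Example with three inputs and arbitrarily chosen weights, purely for
# illustration.
print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.3, -0.1], bias=0.1))
```

A full network is just many of these units wired together, with the weights adjusted during training. Calling the unit a “neuron” is the metaphor, not a description of the biology.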
It is now common to describe chatbots as “thinking” and “reasoning” about things, even though these programs do nothing resembling human thinking and reasoning. Press releases (and even technical papers) routinely refer to how their software “reasons” when generating a response. It’s gotten to the point where companies publish quantitative measurements of something that they choose to describe as “abstract reasoning”[7], which invites comparison to human thought. Chatbots exacerbate the problem by using this language themselves. For example, I’ve often seen Claude answer a question with a variation of “Let me think about that,” and it often gives me progress messages like “Reasoning about the question.”
Add to this list a new and misleading metaphor, “functional emotions,” which is, unsurprisingly, generally shortened to simply “emotions.” As if.
If someone forgets that these words are metaphors, and thus believes that chatbots are structurally like human brains, with similar kinds of memories, and that they perform an equivalent of human thinking and abstract reasoning, then it’s easy to accept the idea that they could have human emotions. It’s just a tower of one conceptual mistake built on another.
My toaster isn’t mad at me; it just feels that way.
1. This may not hold if you believe that every object is alive and perhaps conscious. I have sympathy for that point of view, but here I’m sticking to mainstream Western philosophy.
2. Emotion Concepts and their Function in a Large Language Model, Nicholas Sofroniew et al., Transformer Circuits, 2026. https://transformer-circuits.pub/2026/emotions/index.html
3. Anthropic Says That Claude Contains Its Own Kind of Emotions, Will Knight, Wired, April 2026. https://www.wired.com/story/anthropic-claude-research-functional-emotions/
4. For example, a properly working computer memory will give you the same answer every time you ask it for information. But human memory is deeply fallible: every time we retrieve a memory from long-term storage we get something incomplete, and fill in the inevitable gaps on the fly using our imaginations (constructive/reconstructive memory). Then we save our newly constructed variation back into long-term storage, replacing the old version (memory consolidation). The result is that our memories are gradually corrupted until they’re just fantasy. For more on these and other fascinating ways that human memory works, see The Seven Sins of Memory: How the Mind Forgets and Remembers by Daniel L. Schacter, Mariner Books, 2002.
5. A logical calculus of the ideas immanent in nervous activity, W.S. McCulloch and W. Pitts, Bulletin of Mathematical Biophysics, volume 5, 1943. Note the list of five assumptions in the section “The Theory: Nets Without Circles”; these assumptions explicitly omit vast amounts of neuron biology and physics from their simplified model. https://www.cs.cmu.edu/~epxing/Class/10715/reading/McCulloch.and.Pitts.pdf
6. Neuron. Wikipedia. https://en.wikipedia.org/wiki/Neuron
7. https://openai.com/index/introducing-gpt-5-5/, Evaluations section, “Abstract Reasoning” table.


Chatbots may not have emotions, but, like psychopaths, they are really good at pretending to.
Does Anyone Want Any Toast? | Red Dwarf | BBC
https://youtu.be/LRq_SAuQDec