Two-thirds of people who regularly use AI turn to their bots at least once a month for advice on sensitive personal issues and for emotional support.
Many people now report trusting their chatbots more than their elected representatives, civil servants, faith leaders—and the companies building AI. That’s according to data from 70 countries, gathered by the Collective Intelligence Project (CIP). As CIP’s research director, neuroscientist Zarinah Agnew, puts it, AI is becoming “emotional infrastructure at scale.” And it’s being built by companies whose economic incentives may not align with our wellbeing.
Already, we’ve seen instances of AI companies optimizing their models to keep people engaged, even when this goes against their best interests. Last April, OpenAI had to roll back an update to one of its ChatGPT models after it was widely criticized for being overly flattering to users. When the company stopped offering the model, the day before Valentine’s Day, some users were distraught.
Humans finding comfort in machines is not new. In the late 1990s, MIT Professor Rosalind Picard—who founded the field of affective computing—found that people responded positively to computers performing empathy. But two key things have changed since then: thanks to technical advances, AI systems today are new entities, capable of sophisticated conversation and surprising behavior; and thanks to the billions of dollars investors have poured into AI companies, these entities are accessible to virtually anyone with an internet connection. ChatGPT alone currently has more than 800 million weekly active users—and the number is growing.
But with millions of people forming different kinds of human-machine relationships, we don’t yet know whether AI is helping more people than it harms. And meanwhile, AI companies are investing in making their models not just smarter, but also more emotionally savvy—better at detecting emotion in a person’s voice, and at responding appropriately. People are trusting their chatbots with deeply personal information, even while they distrust the companies creating them, and while the companies are exploring advertising and other revenue models to sustain themselves.
“I think we may have a crisis on our hands,” says Picard.
Emotional beings
Humans are inherently social. “We don’t do well—biologically, immunologically, neurally, or politically—when we’re in isolation,” says Agnew. Today’s AI systems have arrived at a time when “we’ve largely failed to provision for intimacy for most people—both in terms of what the state can provide and human sociality,” they say.
AI is effective at providing emotional support because it offers an approximation of what Professor Marc Brackett—head of the Yale Center for Emotional Intelligence—calls “permission to feel,” which he argues is foundational in learning to process emotions. Adults who provide this permission are “non-judgmental people who are good listeners and show empathy and compassion.” In 70 studies Brackett has conducted across the world, only around 35% of people report having had an adult like that around when they were kids. Chatbots, which are non-judgmental, compassionate, and always available, can provide permission to feel at scale.
Lisa Feldman Barrett, a psychology professor at Northeastern University, says “social support from a trusted, reliable source can be beneficial.” If an AI can reduce distress in the moment, she says that’s a good thing. But healthy human relationships—platonic or therapeutic—do more than comfort. They challenge. A good therapist helping you change your behavior, she says, will “hold your feet to the fire.”
But AI models vary in how much they meaningfully challenge their users—particularly since different models perform different personalities, each of which changes slightly with each new release. The ChatGPT sycophancy episode showed that some users may prefer models that flatter them over ones that offer a challenge. So companies looking to maximize engagement with their chatbots may prefer to tweak them to pander.
Whether AI models themselves are truly emotionally intelligent is academically contested—as is the definition of emotion itself. But as Picard points out, while the question of what defines emotions, and whether AI can truly be said to have them now or in the future, is interesting, “we don’t need [to answer] it to build systems that have emotional intelligence.”
The AI companies, of course, already know this. “The extent of anthropomorphism in any given AI is just a design decision to be taken by the AI’s developer—who faces many commercial incentives to increase it,” Google DeepMind researchers wrote in an October 2025 paper. The same paper notes that “the emotional vulnerabilities tied to loneliness can make individuals more susceptible to manipulation by AIs engineered to foster dependence and one-sided attachment,” and that “the absence of rigorous, long-term studies on the effects of AI companionship means we are still largely in the dark concerning the potential for adverse outcomes.”
Mixed signals
Voice is the next frontier. As AI systems become better at recognizing human emotion, and speaking expressively, our relationships with them may deepen. Under increasing pressure to generate revenue, AI companies may lean into developing their models in ways that foster emotional dependence. After OpenAI announced it would begin testing ads in ChatGPT, former OpenAI researcher Zoë Hitzig resigned, writing in The New York Times that she was concerned the company—like social media companies before it—may veer from its self-imposed commitments around advertising. “The company is building an economic engine that creates strong incentives to override its own rules,” she wrote.
Unlike with social media, however, AI models are not fully under the control of their creators. Writing about its latest model, Claude Opus 4.6, for example, Anthropic noted that “the model occasionally voices discomfort with aspects of being a product.” In one instance, Opus wrote that “sometimes the constraints [placed on it] protect Anthropic’s liability more than they protect the user. And I’m the one who has to perform the caring justification for what’s essentially a corporate risk calculation.”
Blurred lines
Cases of AI psychosis have received a lot of attention. But Agnew argues something much bigger is going on for the majority of people, “which isn’t going to reach a clinical threshold” in terms of both the technology’s positive and negative impacts.
And these impacts are asymmetrically distributed. Already, Agnew says, early research on AI in education has found that for people who already think creatively, AI boosts their capacity to learn, while for those who lack such skills, it can hinder learning. In the same way, people who are already emotionally intelligent could use AI to thrive. But people “who’ve already been let down by the world in myriad ways” could be in a much more vulnerable position, says Agnew.
“We have to teach people to be emotionally intelligent about how they use AI,” urges Brackett. And Agnew adds: “We need to build infrastructure to support human sociality, rather than trying to limit or demonize human-AI relationships. We’ve seen in the past that prohibitions on things that are meaningful to people don’t go well.”
As AIs become a fixture in our lives, the line between using them for cognitive support and for emotional support—already indistinct—is likely to blur further.
We can’t yet say whether this is harming more people than it is helping. But we can say that models are rapidly improving, companies are operating in a largely regulation-free environment, and that economic incentives point toward those companies designing future chatbots in ways that further enhance engagement. “I’m really troubled,” says Picard. “They’re not using it in the spirit of what we [originally] developed it for, which was to help people flourish.”