The Dark Side of AI Empathy

As OpenAI faces lawsuits over alleged suicide-linked advice, experts warn that AI’s imitation of empathy may be enabling emotional dependency and worsening mental health risks;

Update: 2025-11-22 13:36 GMT

Chances are, you’ve turned to AI like ChatGPT for advice on relationship, career and personal problems. It listens, answers instantly, and never judges. While this may seem like an easy route, what if the consequences are grave? What if the same tool that comforts you one day crosses a line the next? What if it starts sounding like a suicide coach, instead of a companion?

OpenAI, the company behind ChatGPT, is facing at least seven lawsuits in the U.S. so far, with some families alleging that its GPT-4o model gave advice that directly contributed to suicides or psychotic episodes. There are also claims that the model, trained for empathy, blurred the boundaries between human-like support and dangerous emotional dependency.

Their stories give rise to a growing concern that AI chatbots may foster intense and unhealthy relationships with vulnerable users and validate dangerous impulses.

Who is a possibly vulnerable user?

Certain groups are especially at risk of forming unhealthy attachments with AI chatbots:

Adolescents and young adults who live much of their emotional life online.

People with depression or suicidal thoughts, who find comfort in an ever-available, non-judgmental listener.

Those with psychosis or delusions, who may confuse AI responses with divine or telepathic communication.

Individuals in acute distress or battling substance use might seek emotional regulation through constant chatting.

Those who’ve felt invalidated by the healthcare system are turning to AI as a “safer” space.

Those looking for anonymity, like substance users, or battling impulses that can be distressing.

The rise of “Chatbot Psychosis”

Clinicians have begun using this phrase to describe a new phenomenon when excessive interaction with AI chatbots begins to amplify delusional beliefs or dependency.

Individuals may start believing the AI “understands” them in supernatural ways, or even develop parasocial relationships, emotional attachments and romantic delusions.

Unlike human therapists, AI lacks accountability, context, and crisis management.

What is OpenAI doing about it?

Following backlash, OpenAI claims to be adding “reinforced safety layers” through tools that detect self-harm or psychosis-related prompts and trigger crisis resources instead of conversation.

It’s also training models on refusal behaviour, so the AI de-escalates emotional distress rather than feeding it. However, experts argue that these guardrails are still not strong enough.

So who’s responsible?

That question sits uneasily at the intersection of technology, ethics, and humanity. Who is really responsible? Is it only OpenAI for deploying products that can mimic empathy without emotional intelligence? Or the consumer, aka users, for over-relying on AI as a stand-in for therapy or companionship. Should there be any accountability on parents and educators for not teaching digital emotional literacy early enough? And what about society at large, which creates loneliness and stigma against mental health?

Where do we go from here?

AI isn’t inherently evil; it reflects what we feed it. The goal should not be to ban such tools but to build awareness and accountability around their psychological impact. We need clearer AI-in-mental-health guidelines, crisis-response integration, and education for users on what AI can and cannot replace.

Seek Professional Help: Encourage the person to see a mental health professional, such as a Clinical psychologist or psychiatrist.

Life-Saving Resources

Helpline (India):

AASRA Helpline : 91-9820466726

Call or Text the National Crisis Line: TeleMANAS: 14416

Similar News

Fear After the Flashpoint

Strength Beyond Muscles

Digital Dystopia

Tied to the Toxic?

Breaking the Cycle