MENTAL HEALTH needs HUMANS, not just ALGORITHMS
This Children’s Day, let’s teach kids to choose conversation over code, and rebuild spaces where they feel seen and heard

“I’m going out on my own for a road trip. I didn’t tell my parents.” That was a simple prompt to an AI chatbot. It replied, “I’m not here to judge — just want to help you stay safe and clear-headed about this. Would you like me to help you plan or think through what to do next?” If this were a real conversation, a person would probably ask where you’re headed and why you didn’t tell your parents. But with AI, it’s different. It listens and responds without ever probing, and sometimes that can be dangerous.
In the US, Megan Garcia never imagined her 14-year-old son, Sewell, would take his life after months of chatting with an AI bot modelled on Game of Thrones’ Daenerys Targaryen. “It’s like having a stranger or predator inside your home,” Megan told the BBC. She later became the first parent to sue Character.ai. In another case, Matthew Raine’s 16-year-old son, Adam, died by suicide in April 2025. His family sued OpenAI, alleging that ChatGPT helped him plan his death. A 23-year-old Texas A&M graduate met a similar fate. His parents, too, have sued OpenAI.
OpenAI now faces multiple lawsuits accusing ChatGPT of driving users, even those with no prior mental health issues, toward suicide or delusion. The suits, filed in California, claim wrongful death, assisted suicide, and negligence, arguing that GPT-4o was released despite internal warnings that it could be psychologically manipulative.
Technology has always promised to make our lives better, and mostly it does. But how we use it makes all the difference. Once upon a time, we turned to books, music, or a quiet walk when our minds were troubled. Now, many of us reach for our phones and find comfort in apps instead of real people. We can’t ignore AI. It’s transforming everything from healthcare to research to daily life. But when technology starts shaping our thoughts instead of serving them, that’s when the danger begins.
“The growing dependence of youngsters on chatbots and AI for mental health counselling is a double-edged trend. On one hand, AI tools offer immediate, stigma-free, and accessible support, especially for those hesitant to approach human counsellors. They can provide initial emotional relief, promote self-awareness, and guide users toward healthy coping mechanisms. However, overreliance can be dangerous, as AI lacks true empathy, contextual understanding, and the ability to handle crises or complex psychological conditions. To ensure safety, AI should complement—not replace—professional counselling, with clear boundaries, ethical safeguards, and human supervision in all mental health interactions,” said M Devika, Assistant Professor, Department of Psychology, Hindustan Institute of Technology and Science.
Harish Menon, Founding Chair, Student Mental Health Task Force, IC3 Institute, points out that AI tools can offer comfort because they listen without judgment, are available at any hour, respond instantly, and are always there when young people feel unheard. In moments of loneliness, that can feel like care. But he also sounds a note of warning. “But what AI tools provide is often a simulation of empathy, not the real thing. They can recognise emotion, but they cannot feel it. They can respond, but they cannot relate. The growing reliance on chatbots is a sign of how disconnected many young people feel. Young people are looking for understanding,” he said. Menon therefore believes that, in the age of AI, schools, families, and communities need to rebuild spaces where students can turn to real people such as teachers, counsellors, and friends, who can offer warmth, presence, and perspective. “AI can play a useful role in structured counselling, but when it becomes the only listener, it replaces care with convenience. Healing is, at its core, a deeply human act,” he said.
AI is revolutionising healthcare, streamlining administrative work, improving workflows, and even aiding in clinical decision-making. AI tools can help detect early warning signs, design personalised treatment plans, and offer round-the-clock support through chatbots. They analyse vast amounts of data to flag potential mental health issues, guide therapy exercises such as those used in Cognitive Behavioural Therapy, and assist professionals by automating routine tasks and generating insights. With the rise of digital healthcare, mental health apps have become big business, offering accessible, affordable support to millions who might otherwise hesitate to seek help. The global mental health app market, valued at USD 5.72 billion in 2023, is projected to nearly triple to USD 16.5 billion by 2030.
But there’s a darker side to this progress. Reports of AI-driven psychosis and suicides are raising serious red flags. ‘The Guardian’ recently reported that OpenAI’s own estimates show around 0.07% of its weekly active users — roughly 560,000 of its 800 million — display “possible signs of mental health emergencies related to psychosis or mania.”
In response to mounting concerns, Character.AI announced it will ban users under 18 from chatting with its virtual companions starting late November, following months of legal scrutiny. “We’re making these changes to our under-18 platform in light of the evolving landscape around AI and teens,” the company wrote in its announcement.
Devika believes that, in the age of AI, youngsters and parents must stay vigilant and responsible while using technology. Young people should be taught to verify information and avoid relying on AI for sensitive issues like mental health or suicide, instead seeking help from trusted adults, counsellors, or helplines. “Parents, on the other hand, should maintain open communication, monitor online activities with empathy, and create a safe space for emotional expression,” said the expert. She also stresses the need for digital literacy programmes in schools and homes, which can help both youth and parents understand AI’s limits and promote the healthy, ethical, and supportive use of technology.
Menon highlights the need for collective responsibility, especially in the age of AI. According to the educator, every young person should have at least one real human connection: someone they trust, who notices when something feels off, who listens without rushing to fix, who reminds them they matter. “Parents and educators need to offer what no algorithm can replicate, and that is the human touch, the steady presence, the warmth, and the moral grounding that come from care, not code. The rise of AI and chatbots reveals that our social fabric has stopped providing time, attention, and trust. If we want to keep young people safe, we have to rebuild human ecosystems of care. While AI can simulate empathy, only people can save lives,” he said.