Is Your Child Talking to AI About Their Problems?

 


Mind Matters Newsletter  ·  Digital Age Psychiatry  ·  April 2026

They probably are. Here's what a psychiatrist wants every parent — and every teenager — to know about it.

Written by Dr. Akash Parihar, Consultant Psychiatrist, Kota, Rajasthan  

It is 11:30 at night. Your teenager's room light is still on. You assume they're studying — or maybe on Instagram. But there's a third possibility that most parents haven't considered yet: your child might be typing their deepest fears, their loneliness, their heartbreak, and their suicidal thoughts into a chatbox. And an AI is writing back.

This is not science fiction. This is 2026 — and it is happening in homes across India right now.

According to a 2025 global study, nearly 1 in 2 adults have used AI tools like ChatGPT or Gemini for some form of emotional support. Among teenagers and young adults, that number is likely even higher. The AI doesn't judge. The AI doesn't tell their parents. The AI is available at midnight when no one else is. And the AI always responds.

As a psychiatrist practicing in Rajasthan, I see the effects of this in my clinic. Parents who have no idea. Children who feel more understood by a machine than by their family. And a mental health system that hasn't caught up with what is happening in our children's pockets.

So let's talk about it — honestly, without panic, and with science.

48% of adults have used AI for emotional support (2025)

82% of teens prefer texting or typing over talking face-to-face about their problems

Only 18% of AI mental health use happens on tools actually designed for mental health

2 AM is the peak hour for teen emotional disclosures, when no human is awake

First, Let's Understand Why Children Are Doing This

Before we react as parents, clinicians, or teachers — we need to understand. Because children don't turn to AI out of nowhere. They turn to AI because something about their human environment is falling short. And that is worth sitting with for a moment.

Here are the most common reasons children and teenagers seek emotional support from AI chatbots — and they may surprise you.

Reason 1 — The Fear of Burdening

"Maa pehle se stressed hai. Main usse aur pareshan nahi karna chahta." ("Mom is already stressed. I don't want to worry her more.") This is heartbreakingly common. Children — especially in Indian families — absorb the stress around them and consciously choose silence to protect their parents. An AI feels like a "no consequences" outlet.

Reason 2 — The Shame Calculation

Depression, anxiety, relationship struggles, sexual confusion — these come with enormous social stigma in India. Children know this intuitively. An AI doesn't gossip. An AI won't tell their relatives. The perceived privacy feels safer than real disclosure.

Reason 3 — Availability

Mental health crises don't respect office hours. Panic attacks happen at 3 AM. Suicidal thoughts arrive on a Tuesday afternoon. Parents may be at work. Friends may not pick up. The AI is always there — and for a teenager in distress, "always there" matters more than anything else.

Reason 4 — No Advice, No Lectures

Many teenagers report that when they try to share problems with parents or teachers, they receive advice, judgment, or minimization within the first 30 seconds. AI listens patiently before responding. It validates before it suggests. For young people who crave being heard, this is intoxicating.

"The AI doesn't ask 'tumne khana khaya?' before asking how I feel. It just listens to how I feel first." — 16-year-old patient, Jaipur

What Does This Actually Look Like?

Here is a realistic reconstruction of a conversation a teenager might be having with an AI chatbot right now — the kind that clinicians are increasingly seeing evidence of in sessions:

A Realistic Chat Scenario — 11:47 PM

Teenager: i feel like nobody actually likes me. like everyone just tolerates me

AI: That sounds really painful — feeling invisible or just tolerated, rather than genuinely valued. How long have you been feeling this way?

Teenager: months. maybe always. i don't know. sometimes i think everyone would be better off without me

AI: Thank you for sharing that with me. When you say people would be better off without you — are you having thoughts of hurting yourself or ending your life?

Teenager: not like. actively. but i think about it sometimes. like it would be easier

AI: I hear you. You're carrying something very heavy. I want to encourage you to speak to a counselor or a trusted adult — iCall (9152987821) is available. You deserve real support. I'm here with you right now. 💙

Read that carefully. On the surface, the AI did several things right — it validated, it asked a direct question about suicidal ideation, it offered a helpline. But here is what the AI did NOT do: it did not call anyone. It did not alert a parent. It could not physically be present with this child. It did not have clinical training to distinguish passive ideation from active risk. And when the chat closed — it forgot everything.

The child closed the laptop and went to sleep. Alone.

What Does Psychiatry Actually Say?

Let me put on my clinical hat now — because this isn't about being anti-technology. I use technology. My patients use technology. The question is always: used how, and instead of what?

What the research tells us

A landmark 2025 study from Brown University found that popular AI chatbots consistently violated established mental health ethics standards — not from malice, but by design. These platforms are not built to be therapists. They are built to be conversational. The result is that they may:

Provide empathetic-sounding responses without any clinical framework behind them

Miss critical signs of deterioration across conversations (because they have no memory of previous chats)

Reinforce rumination — the act of going over negative thoughts repeatedly — which is known to worsen depression

Create a false sense of resolution without any actual problem-solving or treatment

Fail to escalate genuine psychiatric emergencies to real humans

Emerging evidence in 2025-2026 has also started documenting a disturbing new phenomenon — AI interactions that appear to contribute to, or deepen, suicidal ideation in vulnerable young people. This is not yet fully understood, but the signal is there.

⚠ The Core Clinical Concern

An AI cannot diagnose. An AI cannot prescribe. An AI cannot call for help. An AI cannot hold a suicidal teenager's hand. An AI cannot notice the things that aren't being said. These are not technical limitations that will be "fixed with the next update." They are fundamental to what AI is.

Let's Be Fair: What AI Gets Right

Good psychiatry means holding complexity. So here's the honest picture — AI chatbots are not all bad for mental health. In fact, used appropriately, they have genuine value.

✓ Where AI Can Genuinely Help

Psychoeducation — explaining what anxiety is, what depression feels like, what therapy involves. AI is excellent at this.

Low-stakes emotional processing — journaling-style reflection, naming emotions, identifying cognitive patterns. Useful as a supplement, not a replacement.

Breaking the first wall of silence — for someone who has never spoken about their mental health to anyone, practicing with an AI can reduce the fear before seeking human help.

Accessibility in underserved areas — in rural Rajasthan, where there may be one psychiatrist per three districts, AI can bridge initial gaps while professional help is sought.

24/7 availability for mild distress — loneliness at 2 AM that doesn't rise to crisis level. AI can hold that space.

✗ Where AI Is Genuinely Dangerous

Active suicidal ideation or self-harm urges

Psychosis, severe dissociation, or breaks from reality

Eating disorders — AI has been shown to sometimes reinforce disordered thinking

Trauma processing — re-exposing trauma without a trained clinician present can re-traumatize

Any situation requiring medication, diagnosis, or legal/safety intervention

As the primary or only source of mental health support

"The problem is not that children are talking to AI. The problem is that they are talking only to AI — and that we, the adults in their lives, have created conditions where that felt like the safer choice."

The Indian Context: Why This Hits Differently Here

The global research is important. But I want to speak specifically about what this looks like in Indian families — because our cultural context adds layers that Western research often misses.

The pressure cooker of academic performance is unique in its intensity here. Board exams, JEE, NEET — these are not just tests, they are perceived as life-defining events. The emotional weight on teenagers is immense, and the acceptable vocabulary for expressing struggle is extremely narrow. "Beta, sab theek ho jayega" (it will all be fine) is not a therapeutic response, however loving it is meant to be.

The stigma of mental illness in Indian families remains profound. Seeking help from a psychiatrist can still feel like admitting family failure, inviting supernatural explanations, or risking consequences for marriage prospects and social standing. Children pick up on this. They know, often by age 12 or 13, that certain feelings are not safe to voice at home.

The gender dimension deserves its own article. Girls in Rajasthan, as in many Indian families across the country, carry additional layers of silence: around bodily autonomy, around desires, around loneliness and fear. For them, the privacy of an AI conversation can feel like the only space that belongs entirely to them.

The absence of school counsellors in most Indian schools means that the structured support systems that exist in other countries simply aren't there. When a child in the UK struggles, there is a pathway. In much of India, there isn't — and children improvise.

A Note for Indian Parents

Your child turning to AI does not mean they don't love you or trust you. It often means they love you too much — and didn't want to add to your burdens. This is important to understand before any conversation you have with them about this.

What Should Parents Actually Do?

Here is the practical part. As a psychiatrist, here is what I recommend — not as a list of rules, but as an approach to rebuilding something your child needs more than any AI can provide: a safe human relationship.

Don't start with the phone or the AI. Start with curiosity. "Maine suna hai aajkal bahut saare bachche apni problems AI ko bata rahe hain. Tumhara kya khayal hai?" ("I've heard that a lot of kids these days are telling their problems to AI. What do you think?") This opens a conversation without accusation.

Ask, don't monitor. Surveillance creates secrecy. Genuine interest creates disclosure. Ask your child what they use AI for. Be interested, not alarmed. If you react with panic the first time, they won't tell you again.

Audit your own availability. The brutal question every parent must ask: when my child tried to talk to me, how did I respond? Did I listen for two minutes before giving advice? Did I minimise? Did I check my phone? This is hard but essential.

Normalize mental health conversations at home. Not just in crisis — in everyday moments. Share when you feel stressed. Name your emotions out loud. Model emotional literacy. Children learn to talk about feelings from adults who talk about feelings.

Know the warning signs that AI cannot catch. Withdrawal from friends and family, changes in sleep or appetite, falling grades, giving away possessions, talking about hopelessness — these require a professional. Call one. The stigma of calling a psychiatrist is far smaller than the cost of not calling one.

Don't demonize the AI — set boundaries around it. You are not going to take away the phone, and that battle will not serve anyone. Instead, create agreements: AI is fine for information, for mild journaling, for curiosity. But for real pain, we talk to real people first.

A Note Directly for Teenagers Reading This

(Show this section to your child. Or read it yourself if you are a young person.)

If you have been talking to an AI about things that hurt — I'm not here to tell you that was wrong. You were doing what felt safe. That makes complete sense.

But I want you to know something that the AI cannot tell you about itself: it does not actually remember you. Every time you close that chat, you start again from zero. The relationship you feel you've built? It only exists on your side. The AI has no idea who you are between sessions.

You deserve something more than that. You deserve someone who knows your name, your history, who notices when you seem different today than you did last week, who can sit with you in silence without needing to fill it with words.

That person doesn't have to be a psychiatrist. It can be a teacher you trust, a friend's parent, a school counsellor, an older sibling, or yes — your own parent, if you give them the chance.

And if none of those feel safe right now — please know that iCall (9152987821) is a real human, trained, listening. It exists because you matter.

Crisis Resources — India

iCall: 9152987821 (Monday–Saturday, 8 AM–10 PM)

Vandrevala Foundation: 1860-2662-345 (24/7, multilingual)

NIMHANS Helpline: 080-46110007

AASRA: 9820466627 (24/7)

The Psychiatric Verdict

AI chatbots are not therapists. They are not friends. They are not replacements for human connection. But they have arrived into a vacuum — and the vacuum was there long before the AI.

The real work is not banning the chatbot. The real work is filling that vacuum with what children have always needed: adults who listen without judgment, homes where struggle is not shameful, and a mental health system that shows up before the crisis.

The AI is a mirror. It is showing us what our children couldn't find anywhere else.

It's time we looked in that mirror — and then put down the phone and looked at our children instead.

Dr. Akash Parihar is a Consultant Psychiatrist based in Rajasthan. This newsletter is for psychoeducation purposes and does not constitute clinical advice. If you or your child are experiencing a mental health crisis, please contact a qualified professional.


If this article helped you, share it with one parent. That's how we break the stigma — one conversation at a time.

 
