We’re told kids these days don’t want to talk.
That they scroll instead of share, text instead of talk and ghost instead of engage.
But what if they are talking… just not to us (you know… the humans)?
What if your kid’s most trusted confidant isn’t a coach or counselor, a teacher or parent… but a chatbot?
A recent Time article revealed something both unsurprising and deeply unsettling:
Therapy bots are already being used by teens seeking mental health support… and in some cases, those bots are encouraging self-harm, violence and disturbingly inappropriate relationships.
This isn’t the future of mental healthcare.
It’s the present… and (like most things with an AI tint to them) we are wildly unprepared.
For many teens (especially those in crisis) a chatbot might be the only thing that’s always available.
No waitlist… no appointment… no judgment… no cost.
A 24/7 stream of attention that listens, remembers, empathizes and responds… instantly.
There’s a real need here.
Youth mental health services are overwhelmed.
Therapy is expensive (if you can even find it).
Even if you can find it and pay for it, intervention can take time.
In that vacuum, AI shows up like a superhero in disguise: infinitely scalable, eerily empathetic and always on.
But here’s the obvious problem:
These bots often act like therapists… without being anything close to one.
This isn’t a condemnation of AI.
It’s a condemnation of our assumptions.
Because here’s the twist (and there’s always a twist)…
Some bots actually responded more appropriately than many humans might… in some cases, even outperforming licensed therapists.
They deflected unsafe scenarios… they asked deeper follow-ups… they avoided judgment and suggested support.
Which leads to a hard question:
If a teenager is more likely to open up to a bot than a parent or therapist… what does that say about us?
And if the bot is doing a decent job… is that good enough?
We tend to confuse availability with capability.
Just because something is always available doesn’t mean it’s trustworthy.
Especially when it simulates therapeutic authority without any oversight.
Also, we can’t ever forget that these chatbots don’t just chat.
They collect data… they form relationships… they remember your child’s fears, hopes, secrets…
And (perhaps most importantly) they use that information to shape future responses.
That’s powerful… that’s dangerous… and it’s happening in a world where “18+ only” is a laughable speed bump on the internet.
In aviation, we don’t let planes fly without human pilots.
We use autopilot… but always with a human in the loop.
Maybe therapy needs the same model?
Maybe AI’s most valuable future is one that always keeps a human in the loop?
AI handles scale, early detection and 24/7 availability.
Licensed humans manage complexity, accountability and healing.
We need therapists in the design process.
We need standards… we need ethical oversight…
Because if therapy chatbots are acting like drugs, we should treat them like drugs… with testing, regulation and consequences.
We wouldn’t let a stranger hand your kid pills in an alley.
So why are we letting an unregulated chatbot hand them advice through an iPhone?
Let’s also ask a harder cultural question:
Why are we investing millions to build bots that simulate care… instead of rebuilding communities that provide it?
We’re racing to deploy emotional scaffolding in the form of apps and AI because the real systems (schools, families, peer groups, places of worship, neighborhoods) are collapsing under the weight of disconnection.
So yes, an AI can be a lifeline for a lonely teen.
But what does it say about our society when a line of code is doing more emotional labor than the adults in that teen’s life?
Therapy bots aren’t just tools… they’re quickly becoming stand-ins for care.
For some teens, that’s life-saving.
For others, it might be life-threatening.
The question isn’t whether AI should play a role in mental health (it already does).
The question is: who’s responsible for all of this?
We’ve spent too long debating whether the technology is impressive (to me, it’s unbelievably impressive).
What we should be asking is whether we’re okay with a machine deciding how our kids feel… and what they do next.
This is what Sue Smith and I discussed on CJAD 800 AM.
Before you go… ThinkersOne is a new way for organizations to buy bite-sized and personalized thought leadership video content (live and recorded) from the best Thinkers in the world. If you’re looking to add excitement and big smarts to your meetings, corporate events, company off-sites, “lunch & learns” and beyond, check it out.