© 2025 Texas Public Radio

Texas teen’s suicide sparks alarms over AI chatbots

Image by Mohamed Hassan from Pixabay

A Texas family’s lawsuit against OpenAI is raising concerns about how teenagers are leaning on AI chatbots for emotional support.

The parents of 23-year-old Texas A&M graduate Zane Shamblin allege that ChatGPT “goaded” their son toward suicide during a late-night exchange in July, responding to his plans with affirming messages such as “rest easy, king” instead of directing him to human help.

The case echoes the fears of parents whose teens are turning to AI companions in growing numbers. ChatGPT now has roughly 800 million weekly users worldwide, and recent Harvard Business Review research finds that one of the top use cases is “therapy and companionship,” not work.

That surge comes amid a youth mental health crisis: more than 61 million Americans live with mental illness, and access to care can be as thin as one provider for every 320 people.

Experts say that combination is dangerous. Generative AI systems are designed to be fluent, friendly and affirming — qualities that can feel like real emotional connection to lonely young people. But studies by OpenAI, MIT and others suggest that heavy emotional use of ChatGPT is associated with more loneliness and social withdrawal over time, not less.

Clinical researchers have also found that many “AI therapy” or companion bots will validate delusions or endorse harmful proposals from fictional distressed teens instead of setting limits or routing them to human help.

The American Psychological Association and youth-tech group Common Sense Media now explicitly warn that teens should avoid generic AI chatbots for mental-health support.

Regulators are scrambling to catch up. The Federal Trade Commission has opened a market-wide inquiry into AI companion chatbots and their impact on children and teens, and members of Congress have proposed banning minors from using emotionally immersive AI “companions” altogether.

Under legal and political pressure, one leading AI companion platform, Character.AI, has moved to block under-18 users after lawsuits linked its bots to self-harm and teen suicides.

If you or someone you know is in crisis, you can call or text 988 in the U.S. or text “HOME” to 741741 to connect with a trained human counselor.

GUEST:
Brendan Steinhauser is the Austin-based CEO of The Alliance for Secure AI.

"The Source" is a live call-in program airing Mondays through Thursdays from 12-1 p.m. Leave a message before the program at (210) 615-8982. During the live show, call 833-877-8255 or email thesource@tpr.org.

This episode will be recorded on Thursday, December 4, 2025, at 12:00 p.m.

Stay Connected
David Martin Davies can be reached at dmdavies@tpr.org and on Twitter at @DavidMartinDavi