© 2026 Texas Public Radio

Why AI models hallucinate

Artificial intelligence chatbots will confidently give you an answer for just about anything you ask them. But those answers aren’t always right.

AI companies call these confident, incorrect responses “hallucinations.” Researchers at OpenAI have been digging into why large language models hallucinate, and say part of the problem is that rankings of AI models reward guesses while penalizing uncertainty.
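The incentive the researchers describe can be made concrete with a small sketch. This is illustrative only, not OpenAI's actual benchmark code: it compares a standard accuracy-only score, where a wrong guess and an "I don't know" both earn zero, against a hypothetical scheme that penalizes wrong guesses. Under the first metric, a model that always guesses outranks an equally knowledgeable model that admits uncertainty.

```python
def accuracy_score(answers):
    """Accuracy-only leaderboard metric: 1 point per correct answer;
    wrong answers and abstentions alike score 0."""
    return sum(1 for a in answers if a == "correct")

def penalized_score(answers):
    """Hypothetical alternative: a wrong guess costs a point,
    while abstaining ('I don't know') costs nothing."""
    return sum(1 if a == "correct" else (-1 if a == "wrong" else 0)
               for a in answers)

# Suppose both models genuinely know 6 of 10 answers. The guesser
# answers the remaining 4 anyway and gets lucky on 1; the cautious
# model abstains on all 4.
guesser  = ["correct"] * 7 + ["wrong"] * 3
cautious = ["correct"] * 6 + ["abstain"] * 4

print(accuracy_score(guesser), accuracy_score(cautious))    # 7 vs 6
print(penalized_score(guesser), penalized_score(cautious))  # 4 vs 6
```

Under accuracy alone, guessing wins (7 to 6); with even a mild penalty for confident errors, the cautious model comes out ahead (6 to 4), which is the ranking-incentive problem the researchers point to.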

Here & Now's Scott Tong speaks with Ina Fried, chief technology correspondent for Axios.

This article was originally published on WBUR.org.

Copyright 2025 WBUR
