OpenAI faces new scrutiny on AI safety

MARY LOUISE KELLY, HOST:

Yet another version of ChatGPT is on the way, and it is expected to be even more powerful, even more human-like. Recent moves by OpenAI have raised concerns about the technology's potential dangers to society and highlighted how few guidelines are reining the company in. For more, we are joined by NPR's Bobby Allyn. Hi, Bobby.

BOBBY ALLYN, BYLINE: Hey, Mary Louise.

KELLY: OK. Another version of ChatGPT dropping, which is interesting because, at the same time, there's been this big push for the company to slow down - can you talk about that tension?

ALLYN: Yeah. We know by now that OpenAI is kind of the pacesetting AI company in Silicon Valley with, of course, ChatGPT and DALL-E and its other services. When they do something, the world watches. But recently, two big things happened.

One, the company dissolved the team that was studying so-called alignment, and that's tech lingo for making sure superintelligent AIs are aligned with human goals - in short, safety. Shutting that team down, Mary Louise, hit like a lightning bolt. I mean, it sparked all sorts of criticism online among AI researchers.

And then secondly, some former members of that team broke the usual secrecy around OpenAI and spoke out against the company. One former executive said safety has taken a backseat to so-called shiny products.

Now, to quell the critics, OpenAI said this week that it was putting together a safety and security committee that will focus on ways to prevent AI from being abused as a tool for impersonating people and spreading disinformation online, and on heading off all sorts of other harms.

KELLY: I will note that OpenAI is hardly the only tech company grappling with these challenges. But how - tell me more about how this has played out specifically at OpenAI.

ALLYN: Yeah. You know, it's really a tension at the heart of the company since it was founded as a nonprofit research lab and then, of course, became this hypercompetitive player in Silicon Valley. Jump ahead to last year, and these tensions really came to a head when CEO Sam Altman was briefly ousted from the company and then brought back over issues including whether safety was a company priority.

Since Altman's return, though, the questions have just gotten even more pronounced. The new committee that's set to examine safety is going to be led by Sam Altman, and some were skeptical about that. I talked to leading AI researcher - his name is De Kai. He teaches computer science at the Hong Kong University of Science and Technology. And, you know, he's sympathetic to that pushback but says there are currently no federal rules, so companies are governing themselves.

DE KAI: There are no clear guidelines. There is no level playing field. And we certainly don't want to be putting unelected tech executives in charge of making those crucial decisions for our society at large.

ALLYN: Yeah. De Kai says there needs to be some kind of government intervention to set the rules of the road for AI companies. He puts it in pretty dire terms.

DE KAI: Tackling these trade-offs in a conscious way is the single most important thing that humanity, that society, that democracy, has to do urgently.

KELLY: Well, until somebody tackles these trade-offs, until, say, Congress acts, where does this leave us with this latest version of ChatGPT?

ALLYN: Yeah. I mean, it means that OpenAI and really all AI companies can do whatever they want. I mean, we saw the capabilities of the latest ChatGPT recently. It can replicate human behavior, right? It scans human faces for emotions. It can determine what room you're standing in. It can make jokes. It's really incredible. And OpenAI says an even fancier version is on the way.

At the same time, we're seeing abuses. Scammers are using it. Malicious actors are using AI to impersonate voices. And many are wondering whether they can trust OpenAI, right? And when actress Scarlett Johansson recently accused OpenAI of copying her voice in a ChatGPT personal assistant, that certainly hurt the company's image as a...

KELLY: Right.

ALLYN: ...Trustworthy company. And just a couple days ago, Mary Louise, two former board members of OpenAI wrote a piece in The Economist that said, for humanity's sake, AI regulation is needed to tame market forces. So those calls are certainly getting louder.

KELLY: Getting louder. NPR's Bobby Allyn.

Transcript provided by NPR, Copyright NPR.
