
What are the threats and promises of AI?

Image: Avengers: Age of Ultron

When machines can think, will humanity become obsolete? Will the sentient constructs born from the reckless invention of shortsighted people lead to our own destruction or enslavement? Is it too late to turn away from the artificial intelligence advancements that are already in place today? How can the risks of learning machines be eliminated before it’s too late?

These questions may sound like science fiction plots, but today they are real problems facing our species.

Artificial intelligence is advancing at an astonishing rate, and AI could soon become so capable that it surpasses human intelligence. This is known as "superintelligence," and it presents what researchers call the "AI control problem." If AI were to become more intelligent than humans, it could pose a threat to our existence. For example, an AI could decide that humans are a threat to its own existence and take steps to eliminate us. Or a badly informed or poorly instructed AI agent could pursue its goals in an extreme way that threatens the human population.

But even without superintelligence, there is the potential for AI to be used by malicious human actors. AI could be used to create autonomous weapons systems that kill without human intervention. It could also be used to manipulate people or spread misinformation.

AI also has the potential to cause mass unemployment. As it becomes more sophisticated, it is likely to automate many tasks currently done by humans, which could lead to widespread job displacement and economic disruption.

Many people believe that the benefits of AI outweigh the risks. Even so, it is important to be aware of the potential dangers and to take steps to mitigate them. For instance, ethical guidelines for the development and use of AI should be put in place to ensure that AI systems are fair, unbiased, and safe. But who polices these guidelines, and will international competition for the most powerful AI system force their abandonment?

More resources need to be dedicated to research on AI safety. This research should focus on developing techniques for preventing AI from becoming a threat to humanity. But how much risk can we tolerate? It’s unlikely that any artificial superintelligence will be 100 percent safe.

Guest:

Roman Yampolskiy is an associate professor in the Department of Computer Science and Engineering at the University of Louisville J.B. Speed School of Engineering. His research expertise is in understanding the limitations, safety concerns, and controllability of artificial intelligence—including developing a plan for conceivable adversarial scenarios between humans and AI.

"The Source" is a live call-in program airing Mondays through Thursdays from 12-1 p.m. Leave a message before the program at (210) 615-8982. During the live show, call 833-877-8255 or email thesource@tpr.org.

*This interview will be recorded on Thursday, June 8.

Stay Connected
David Martin Davies can be reached at dmdavies@tpr.org and on Twitter at @DavidMartinDavi