
Will artificial intelligence disrupt education and other areas of society? An expert weighs in

Image: This image was created with artificial intelligence. (Midjourney/Jerry Clayton)

A new artificial intelligence system called ChatGPT, made by the company OpenAI, is raising eyebrows and concerns about how the new generation of artificial intelligence will affect society. The technology is surprising users by generating responses that seem remarkably intelligent on a myriad of subjects.

TPR’s Jerry Clayton recently spoke with Dr. Anthony Rios, Assistant Professor of Information Systems & Cyber Security at the University of Texas at San Antonio, about how these advancing technologies could change the future.

Clayton: Give us a quick overview of what ChatGPT actually is.

Rios: GPT is basically a language model. A language model is where you're given some text and then you try to predict the next word that would follow in that text. And people use these language models for a lot of different purposes.

So you can use them for text summarization. You can use them potentially for generating a story or a joke. But really, it all comes down to this: these are just methods of predicting the next word, given a few words as input.
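To make that concrete, here is a minimal sketch of next-word prediction using the open-source Hugging Face transformers library and the small GPT-2 model. GPT-2 is an illustrative stand-in chosen for this example; the model behind ChatGPT is not publicly available.

```python
# A minimal sketch of next-word prediction with an open language model.
# GPT-2 stands in here for the (closed) model behind ChatGPT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Texas is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every word in the vocabulary

# The model's single most likely next word:
next_token_id = int(logits[0, -1].argmax())
print(tokenizer.decode([next_token_id]))

# Repeating that one step over and over is what generates whole passages:
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0]))
```

Everything a system like this produces, from summaries to jokes, is built by looping that single predict-the-next-word step.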

Clayton: In spite of it being just a tool, many have said that it's going to disrupt some areas of society. Education, as an example: students could use this technology to basically do their work for them, and that's just the tip of the iceberg. How do we keep this technology under control, or is it a genie that's been let out of the bottle?

Rios: Right now, especially now that it's free to use, this is a big concern among a lot of professors: students using it to answer exam questions around final exam time. And it's definitely possible that it can be used for that.

So like I said, it's a method of predicting the next word. So if you give it an input prompt, say an exam question, it can output a potential answer for that exam question. Now, a lot of times that answer might not be correct, but a lot of times it can be. And that's very difficult to detect, so in the scenario where students are using this to answer exam questions, it really comes back to the professors to rethink how they ask certain questions.

And this isn't necessarily new, because there have been other language models that, for example, can basically answer coding questions for computer science students. What happens is that we have to write these kinds of questions in such a way that, when we pass them to the model, it won't give the exact right answer, or we have to come up with new ways of asking questions.

Clayton: This type of AI, is it learning all the time as it gets more queries?

Rios: It is learning from user feedback. It has this reinforcement learning paradigm where it basically gets feedback on whether something is a correct or incorrect response to a particular input. And so OpenAI is really getting a lot of data by releasing this for free right now for people to play with.

They can get a lot of potential data that they can use to fine-tune the model and improve it. But the major piece that is helping this model is users actually playing with it and being able to say, okay, this is a good response or this is a bad response. And that is actually very useful for OpenAI to fine-tune the model in the future.
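As a rough illustration of that feedback loop, the sketch below shows how good/bad ratings on model responses could be collected and turned into a reward signal. It is a deliberate simplification with hypothetical data; OpenAI has not published the details of its training pipeline, which in practice involves training a separate reward model and fine-tuning against it.

```python
# A simplified illustration of learning from user feedback.
# Hypothetical data and a toy reward function; the real pipeline is not public.
from dataclasses import dataclass

@dataclass
class Feedback:
    prompt: str
    response: str
    rating: int  # +1 = user marked the response good, -1 = bad

# Every free interaction can yield a labeled example like these (made up here):
log = [
    Feedback("What is 2 + 2?", "4", +1),
    Feedback("What is 2 + 2?", "5", -1),
]

def reward(prompt: str, response: str) -> float:
    """Toy stand-in for a learned reward model: average rating of matching logs."""
    ratings = [f.rating for f in log if f.prompt == prompt and f.response == response]
    return sum(ratings) / len(ratings) if ratings else 0.0

# Fine-tuning would then push the model toward responses with higher reward:
print(reward("What is 2 + 2?", "4"))  # 1.0 -> reinforce
print(reward("What is 2 + 2?", "5"))  # -1.0 -> discourage
```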

Clayton: So do you feel some of these initial claims about the abilities of GPT are possibly overblown?

Rios: The main thing that I would like to point out is that the system really hasn't been properly evaluated yet for a lot of the different applications it's being applied to. So if you just apply this system ad hoc to every task, the model's not necessarily going to perform very well. We don't really know how the model will perform.

And so it can be dangerous to just apply the system in an ad hoc fashion to every application under the sun. Because, again, we don't know how it's working. We don't necessarily even know what data was used to train the model; that's all behind the curtain. All that information is known by OpenAI and only OpenAI at the moment. So again, it's a very fun piece of technology to potentially play with.

But actually applying it in real-world situations at this moment, I think, should not really be done, or should only be done if whoever wants to apply it spends a lot of time and effort evaluating it for their specific application.
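In the simplest case, that kind of task-specific evaluation means scoring the model's outputs against a labeled test set for the intended application. The sketch below assumes a hypothetical ask_model function and made-up test data, since the interview doesn't tie the advice to any particular API.

```python
# A minimal sketch of evaluating a language model for one specific application,
# here a hypothetical customer-support FAQ task with a small labeled test set.

def ask_model(question: str) -> str:
    """Hypothetical stand-in for a call to whatever model is being evaluated."""
    canned = {"How do I reset my password?": "Use the 'Forgot password' link."}
    return canned.get(question, "I don't know.")

# Labeled examples for the target task (hypothetical data).
test_set = [
    ("How do I reset my password?", "Use the 'Forgot password' link."),
    ("How do I close my account?", "Email support to request closure."),
]

correct = sum(ask_model(q) == expected for q, expected in test_set)
print(f"Task accuracy: {correct}/{len(test_set)}")
```

Until numbers like this look good on a test set for the specific task at hand, the model's real-world performance there is simply unknown.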


Jerry Clayton can be reached at jerry@tpr.org or on Twitter at @jerryclayton.