NOEL KING, HOST:
Federal indictments against three people in President Trump's orbit dominated the headlines this week. But on Capitol Hill, there was another Russia story. Lawyers from Facebook, Google and Twitter faced tough questions from Congress about how foreign actors used their platforms to influence the 2016 election. Facebook now says as many as 126 million Americans were exposed to content created by alleged Russian operatives. Now, all three companies say they plan to try to stop this from happening again. The question is, what can they really do?
For some answers, we're joined by Hany Farid. He chairs the computer science department at Dartmouth. Professor Farid created software that is used by tech companies to find and remove bad content like child pornography. And he says his technology could be modified for some of these newer problems. Professor Farid, thanks for coming on the show.
HANY FARID: Thanks for having me.
KING: So tell me about the software that you invented.
FARID: Yeah. So back in the mid-2000s, we developed a technology called PhotoDNA that is currently being used by the major platforms to find and remove child pornography. And so the approach we took is a bit of a hybrid approach. Humans flag the illegal or inappropriate content, and then the computer goes in and extracts from that content a distinct digital signature. And then we just sit at the pipe of a Facebook. And every single upload that comes in, you extract the same type of signature. You compare it to a database of known bad content, and you can very quickly and accurately remove that kind of content.
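What Farid describes is a perceptual-hash pipeline: flagged content is hashed once, and every new upload is hashed and compared against that database. PhotoDNA's actual algorithm is proprietary, so the Python sketch below stands in a simple 64-bit average hash; the function names, the distance threshold and the handler at the end are illustrative assumptions, not the real system.

from PIL import Image  # third-party: Pillow

def signature(path: str) -> int:
    """Extract a 64-bit perceptual signature from an image file."""
    img = Image.open(path).convert("L").resize((8, 8))  # tiny grayscale thumbnail
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:  # one bit per pixel: above or below the mean brightness
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def matches_known_bad(sig: int, known_bad: set, max_distance: int = 5) -> bool:
    """Compare an upload's signature to the signatures of human-flagged content.

    A small Hamming distance means the images are near-duplicates, so the
    match survives re-encoding, resizing and other minor alterations.
    """
    return any(bin(sig ^ bad).count("1") <= max_distance for bad in known_bad)

# Sitting "at the pipe": hash every upload and check it before it goes live.
# known_bad = {signature(p) for p in human_flagged_files}  # hypothetical inputs
# if matches_known_bad(signature("upload.jpg"), known_bad):
#     reject_upload()                                      # hypothetical handler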
KING: The companies that were being grilled on Capitol Hill this week - Google, Facebook, Twitter - are they currently using your software?
FARID: They are. Facebook has been using it since 2010, Twitter since 2011, and Google started using it fairly recently, in 2016 - but only in the child pornography space. So they have yet to deploy that same technology as aggressively and as widely on counter-extremism, fake news, election tampering - these issues that we're hearing about.
KING: Your software has prevented child pornography from being transmitted. Can it do the same with fake political ads coming from Russia?
FARID: To some extent, yes, and to some extent, no. So to the extent that these fake articles have images or videos that we have flagged as being part of a fake news story, you can take exactly the same technology and deploy it in the fight against fake news. We're also going to have to change the business model of how we promote news stories. It can't just be the things that are clicked on the most - the clickbait. We have to start getting some idea of reliability. We have to start thinking that this is not just about engagement - how many eyes can I get on a story? - but that it should also be about trust. And that requires a really fundamental difference in thinking on these platforms and how they want to engage with their customers.
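A minimal Python sketch of the ranking change Farid is suggesting: score stories by a blend of engagement and reliability rather than by clicks alone. The source_trust field, the trust_weight of 0.6 and the click-saturation constant are all illustrative assumptions, not any platform's actual model.

from dataclasses import dataclass

@dataclass
class Story:
    title: str
    clicks: int          # raw engagement signal
    source_trust: float  # 0.0 (unvetted) to 1.0 (highly reliable) - assumed input

def rank_score(story: Story, trust_weight: float = 0.6) -> float:
    """Blend engagement with source reliability instead of ranking by clicks alone."""
    # Crude saturation so sheer virality can't dominate the score.
    engagement = story.clicks / (story.clicks + 1000.0)
    return (1 - trust_weight) * engagement + trust_weight * story.source_trust

stories = [
    Story("Shocking clickbait!", clicks=90000, source_trust=0.1),
    Story("Sourced wire report", clicks=4000, source_trust=0.9),
]
for s in sorted(stories, key=rank_score, reverse=True):
    print(f"{rank_score(s):.2f}  {s.title}")  # the trusted story now outranks the viral one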
KING: Was there anything that Facebook and Twitter and Google could have done to stop Russia from buying and spreading fake ads?
FARID: Sure. There are absolutely things they could have done. They could have had more transparency in reporting on who's buying the ads. They could have changed the business model for how they promote articles - not just who's clicking on those articles, but which ones are more reliable. They could have had humans in the loop reviewing the purchases.
So they absolutely could have done more, but part of the business model is to fully automate all these things so that they can work at the scale of billions and billions of people. So it's in many ways baked into the system - the system that they created. They could have created a system, or modified the system, to have more checks and balances, but they didn't. And now the question is, what are they going to do next?
KING: Tech companies really do seem nervous about instituting rules to stop false information from spreading online. Why is that?
FARID: Yeah, that's a good question. It's so interesting because when we were working on the child pornography problem back in the mid-2000s, we had similar concerns from the tech companies. They said things like, well, how do you define child pornography? How do we know what the age is? What is the definition of sexually explicit? And, look, the easiest thing to do is to have a platform with no rules and regulations. It's easy because there's no inconsistency, right? Everything goes, and if something's illegal, that's a law enforcement problem.
And as soon as you get into the business of saying, this is appropriate, this is not appropriate, this is legal, this is not legal, you open yourself up to complex problems. I don't think we should shy away from those complex problems because I think the cost of not doing it is simply too high. That should not be an excuse for inaction.
KING: Hany Farid is chair of the department of computer science at Dartmouth College. Professor Farid, thanks so much for coming on.
FARID: Thanks for having me.

Transcript provided by NPR, Copyright NPR.