AILSA CHANG, HOST:
2016 is the year that fake news made its impact, and it did so via social media. Now the goliath of social media, Facebook, says it'll try to identify and limit fake news using a combination of algorithms and humans. The humans will come from a handful of traditional news organizations and fact-checking sites, and they're affiliated with the nonprofit Poynter Institute. Alexios Mantzarlis is director of Poynter's International Fact-Checking Network, and he joined us to tell us a little more about how Facebook's fact-checking effort is going to work. Thanks for being with us.
ALEXIOS MANTZARLIS: Thanks for having me.
CHANG: Facebook has more than a billion users around the world. Anyone can set up a website and post articles on Facebook. And those articles can look remarkably similar to real news. How are you going to filter the fake stuff out?
MANTZARLIS: I mean, the reality is that some of these fake news stories are quite self-evident once you do a minimum of research. URLs will imitate real news outlets, or headlines won't actually be reflected in the text of the article. The spread of fake news has been really turbocharged by how the newsfeed algorithm works, but I do think entirely fabricated news can be easily spotted by professional fact-checkers.
CHANG: But beyond sort of outrageously, obviously fake headlines, when you get into the content, how do you decide what's fake? Because, you know, a lot of reporters would tell you figuring out the facts is not always easy. You have to confirm numbers, figure out whether sources are lying, cross-check. I guess, how involved do you think the fact-checking is going to get?
MANTZARLIS: I do think that, especially in this pilot phase that Facebook is deploying, the fact-checkers will try to concentrate on the glaringly fake news. You know, the stories about the pope endorsing Donald Trump or Megyn Kelly being fired by Fox News for coming out as a Hillary Clinton-backing traitor. Those are the stories that did remarkably well, and that's where I think we should be concentrating: on the worst offenders, so to speak.
CHANG: And so tell us a little bit more about how the process would work. Say someone flags an article as being fake. What's the chain of events that follows?
MANTZARLIS: So since at least the beginning of 2015, users have been able to report something as false. But now they can, say, send it to a third-party fact-checker and/or tell their friend that they think it's a false story. When a sufficient number of users have flagged something, these stories will surface in a dashboard that fact-checkers can see. They can tag a story as false and attach a link to the evidence indicating that it's false. When more than one fact-checker has indicated that the story is false, it will appear as disputed in users' newsfeeds. Users can still share the content, but they will see that it has been disputed and will have a way to see the evidence provided by the fact-checkers.
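[Editor's note: for readers who want to picture the mechanics, here is a minimal sketch of the flow Mantzarlis describes, written in Python. The threshold values, class names, and URLs are hypothetical; the interview confirms only that "a sufficient number" of user flags surfaces a story and that more than one fact-checker must mark it false before the disputed label appears.]

```python
# Hypothetical sketch of the flagging workflow described in the interview.
# FLAG_THRESHOLD and CHECKERS_REQUIRED are assumed values, not published ones.

from dataclasses import dataclass, field

FLAG_THRESHOLD = 25    # hypothetical: flags needed to surface a story
CHECKERS_REQUIRED = 2  # per the interview: more than one fact-checker

@dataclass
class Story:
    url: str
    user_flags: int = 0
    # maps fact-checker name -> link to their published evidence
    fact_checks: dict = field(default_factory=dict)

    def flag(self) -> None:
        """A user reports the story as false."""
        self.user_flags += 1

    def surfaces_in_dashboard(self) -> bool:
        """Enough users have flagged it for fact-checkers to see it."""
        return self.user_flags >= FLAG_THRESHOLD

    def mark_false(self, checker: str, evidence_url: str) -> None:
        """A fact-checker tags the story as false, with supporting evidence."""
        self.fact_checks[checker] = evidence_url

    def is_disputed(self) -> bool:
        """Labeled as disputed once more than one checker marks it false."""
        return len(self.fact_checks) >= CHECKERS_REQUIRED

# Example run through the described chain of events.
story = Story("http://example-fake-outlet.com/pope-endorses-trump")
for _ in range(FLAG_THRESHOLD):
    story.flag()
assert story.surfaces_in_dashboard()
story.mark_false("Checker A", "https://example.org/debunk-a")
story.mark_false("Checker B", "https://example.org/debunk-b")
assert story.is_disputed()  # still shareable, but shown with evidence links
```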
CHANG: Well, as you might have read, conservatives and right-wing media are completely bashing this effort. They say Facebook is just working with the liberal media who will bring a liberal bias to the fact-checking. How do you respond to that concern?
MANTZARLIS: I think that Facebook is obviously on a tightrope. One concern is preventing entirely fabricated stories from getting enormous reach on the network, but the other concern, of course, is that they don't want to censor or reduce the reach of information that may still be accurate. And I do believe that critics will provide evidence when something isn't working, so that this process can be made better.
CHANG: Alexios Mantzarlis is director of Poynter's International Fact-Checking Network. Thanks for joining us.
MANTZARLIS: Thanks for having me.

Transcript provided by NPR, Copyright NPR.