IS THAT A FACT?

Here's what we know about Russia's disinformation campaigns

Season 1 Episode 5



Deen Freelon

About The Episode

Our guest this week is Deen Freelon, an associate professor at the Hussman School of Journalism and Media at the University of North Carolina at Chapel Hill. Our host spoke to Freelon about how foreign adversaries, particularly the Internet Research Agency in Russia, are using social media platforms against us. We explore how foreign governments wage disinformation campaigns against us, who they target and why. Are they succeeding? And what can we do as news and information consumers to avoid falling for this nefarious form of misinformation?

Freelon is known for his use of coding and computational methods to extract, preprocess and analyze large sets of data. He has researched how misinformation spreads and what people can do to prevent the spread of false information. Freelon has published more than 30 peer-reviewed journal articles and contributed extensive research to the Knight Foundation. In the past few years, he has conducted substantial analyses of the impact of state-sponsored disinformation campaigns on Twitter related to our elections and the Black Lives Matter movement.


Episode Transcript

Darragh Worland: Welcome back to Is that a fact?, the all-new podcast brought to you by the nonprofit News Literacy Project. I’m your host, Darragh Worland. Today, for the fifth episode of our series on the impact of misinformation on democracy, we talk to Deen Freelon, an associate professor at the Hussman School of Journalism and Media at UNC Chapel Hill. Freelon is known for his use of coding and computational methods to extract, preprocess and analyze large sets of data. He even writes his own software to do that work. In the past few years, Freelon has done substantial analyses about the impact of state-sponsored disinformation campaigns on Twitter related to our elections and the Black Lives Matter movement.

Yeah, I know, it sounds intense, but Deen has a way of making it accessible, and he’ll help you understand Russian disinformation and its impact on our civic discourse.

You’ve studied disinformation, particularly Russian-sponsored disinformation campaigns, but before we dive deeper into your findings, can you just distinguish misinformation from disinformation for our audience, because I do think there’s some confusion?

Deen Freelon: Sure. Happy to. So the distinction between dis- and misinformation is an important one because these two terms are often conflated or confused with one another. Misinformation is false content that is spread unwittingly, so a relative of yours sends you something that is incorrect, they believe it’s correct, and so that is the classic definition of misinformation. Theoretically, if you could convince this person that this content was not true, then they might change their mind. Now with disinformation, that’s where the person who is spreading it knows it’s false and is spreading this content for malicious purposes. So because the person knows it’s false, no amount of convincing is probably going to change their mind, because they’re already doing something underhanded from the start. The main factor that distinguishes these two things is not within the content itself, but is actually the relationship between the content and the person spreading it. So in other words, what is disinformation to one person counts as misinformation to another, based on what they know about its truth or falsehood.

Worland: So disinformation is essentially spread with nefarious intent, that’s ultimately what you’re saying?

Freelon: Yes.

Worland: OK, got it. Now that we’ve got some understanding of what disinformation is, can you tell us what the IRA is? For people with names like mine, the default assumption might be that we’re talking about the Irish Republican Army, and that’s not what we’re talking about here.

Freelon: Yeah, so the IRA, the letters stand for the Internet Research Agency. They are a company that is based in St. Petersburg, Russia, that has done at least some contract work for the Russian government. So what they do is they spread content on social media. Some of that content is true, some of it is false. The vast majority, if not all of it, is spread under assumed names. In other words, this content does not come labeled as Russian propaganda. Folks will come disguised as American conservatives, as Black protesters, as news outlets that are local in the United States, and all sorts of other identities that are assumed to better spread this content. And so the content can be anything from news stories, to content that is true but extremely slanted toward one side or another, to opinions that are very extreme, and in some cases, content that is not very extreme, but is meant to appeal to a particular audience.

An example of that would be some of the IRA agents who were disguised as Black protesters spread content like, for example, “Hey, look at the first female mayor of Compton who just got elected to her second term in office. Isn’t that great? Let’s look at this young Black teenager who just got admitted to eight Ivy League colleges,” or something like that. So not all of it is extreme or false. Some of it is just meant to foster group identity. Then they just sort of infiltrate, and they were able to do this until public knowledge of this campaign broke in the news in November 2017. There were a bunch of accounts shut down, IRA accounts. Then there were additional accounts after that in 2018 that were discovered, but you never really know who else is out there, because the social media companies have to discover the accounts and shut them down, but they don’t know what they haven’t shut down yet. So there may still be some folks out there disguised as some identity that is not the true one.

Worland: You’ve done extensive research and analysis particularly of activity by Twitter handles that are now known to be associated with the IRA. Can you tell us what you found?

Freelon: Yes, so I’ve done a number of studies on this. I’ll try to summarize a bunch of them at once. What we found there was that IRA agents who disguised themselves as Black protesters, or Black Lives Matter-style protesters, were able to attract more engagements, that is, more retweets, replies and likes on a per-tweet basis, than any of the other IRA-assumed identities that we looked at. Those included white conservatives and non-Black leftists, news organizations, typical commercial spam and a couple of others. So even though the total number of tweets from accounts that were presenting as Black activists was actually far less than from conservative-presenting accounts, they were able to pull in more likes, shares and replies on a per-tweet basis than the conservatives who were-

Worland: So more engagements from people—

Freelon: Yeah, from people. So the other thing we did in that study was we actually checked to see how many of those engagements actually came from the IRA itself. Those were very, very low numbers, less than 3% in terms of both replies and retweets. So this is not the IRA juicing its own stats. This is non-IRA accounts that are engaging with this content, and that is very consistent with the possibility that a large number of those engagements came from authentic people. So that’s one study. I was a coauthor on another study where we looked at the impact of the IRA on changing people’s minds and shifting them from Republican to Democrat, or changing their attitudes toward the other side. We found that at least by 2017, which is when our survey went out, the IRA weren’t changing people’s minds a whole lot. So the impact that they had, whatever it was, was not about swaying people’s opinions from one side to the other. They didn’t have any major impact on any of the metrics that we looked at. Now a number of people responded to us after the study came out and said, “Oh well, that’s not really how it works. A lot of it is about voter suppression, or a lot of it is about pushing people further in a direction that they were already going in, in other words, making them more extreme.” But as far as actually swaying their opinions to the opposite side, or intensifying partisan dislike of the other side, we did not find any evidence.

Worland: I’d like to talk to you a little bit more about that because I would say the broad perception is that Russian interference influenced the outcome of the 2016 election, but your research has ultimately shown that the disinformation didn’t actually change anyone’s mind, but it really further entrenched pre-existing beliefs. So what did they actually accomplish? What’s their end game? Is there an actual political outcome?

Freelon: Yeah, so this is something that the empirical methods of [inaudible 00:08:47] research are not really able to find out. Just generally speaking, I don’t want to understate the difficulty of trying to make these kinds of determinations. If you think about all of the many, many influences on our behaviors – everything from all of the media that we consume, not just disinformation, but all the media we consume on digital platforms, everything we hear on the radio, everything we see on television, read in the paper, to the people who we talk to, to the way we were raised in terms of our ideologies and all the rest of that – it’s very, very difficult to pull apart the actual independent influence of any one source. So this is why it’s very, very hard to say, “OK, well the IRA’s influence was X,” because there are so many other influences when it comes to media that can come into play here, and I think it’s also difficult to separate the IRA’s intentions from what actually happened, right?

Generally speaking, where I tend to land on this is consistent with what we understand about how media effects typically work. There’s been decades of research on this. The effect of any one piece of media, even with repeated exposure to it, on specific behavioral outcomes tends to be quite tenuous. We know this most clearly in research that looks at political ads. Every election season, candidates from both of the major parties pour tons and tons of money into political ads, but the research has shown that political ads don’t really have the kinds of behavioral effects a lot of people assume that they have. The effects that are typically shown are marginal at best. If professional political folks here in the States can’t do it with millions of dollars, and we know that the IRA spent a lot less than typical political campaigns do, it seems unlikely that any major effects resulted from their campaigns.

Worland: Let’s talk a little bit more about how you conduct your research, because I think it’s a mystery to a lot of people. When we hear “computational analysis,” we imagine a lot of data going through some sort of filtering system. For people who aren’t computer programmers, or creating software like you are, it really is a mystery. So I’m just curious if you could break it down. How do you do what you do?

Freelon: So fortunately for researchers in this area, Twitter has made available the full set of tweets by the IRA, and also by other disinformation campaigns. I just want to say that Russia is not the only country that’s doing this. So they also have caches of data from Iran, Venezuela, China and a couple of other countries that have also been pushing disinformation on their platform. But they’ve made this data public, and so anybody can download it and analyze it at their leisure. What’s nice about that is that Twitter does all of the work in terms of attribution, which is really important because Twitter has access to data, like where accounts’ IP addresses are coming from, that is key to making a final determination as to whether a given account is associated with a known disinformation actor or not. So they’re able to put that together and say, “OK, here is the full set of tweets from, for example, the Internet Research Agency.” So this is after they shut everything down. All of this information is gone from the Twitter platform, but it’s in what is essentially a very large database that can be downloaded and analyzed. You can use a programming language like Python or R to query this database and look for various patterns within it.
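[Editor’s note: For readers who want to try this themselves, here is a minimal Python sketch of the kind of first-pass query Freelon describes. The file name and column names follow Twitter’s public election-integrity CSV releases as we understand them; treat the exact schema as an assumption and adjust it to whatever archive you download.]

```python
# A rough, hedged sketch of querying Twitter's released IRA archive with
# pandas. File and column names follow Twitter's election-integrity CSV
# releases as we understand them; verify against the data you download.
import pandas as pd

cols = ["tweetid", "userid", "tweet_text", "is_retweet",
        "retweet_userid", "retweet_count", "reply_count", "like_count"]
tweets = pd.read_csv("ira_tweets_csv_hashed.csv", usecols=cols)

# How many tweets, from how many distinct accounts?
print(len(tweets), "tweets from", tweets["userid"].nunique(), "accounts")

# What share of retweets amplify other IRA accounts rather than outside
# users? (The field may load as a string; normalize before comparing.)
is_rt = tweets["is_retweet"].astype(str).str.lower() == "true"
ira_ids = set(tweets["userid"])
internal = tweets.loc[is_rt, "retweet_userid"].isin(ira_ids).mean()
print(f"{internal:.1%} of retweets point back at other IRA accounts")
```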

Worland: What are those patterns that you’re looking for?

Freelon: On a very basic level, you can look for things like who they are retweeting, so in other words, what messages the IRA agents were actually trying to boost. You can look at things like the extent to which the IRA actually retweeted or interacted with messages by other IRA members. This was really important for establishing that only a very small percentage of messages fit into that category. You can look at the hashtags that they used, so the most common hashtags in their tweets. You can look at which tweets actually got the most retweets, the most replies, the most likes, et cetera. So those are some very basic things you can look at, and then one thing that I did in one of my papers was I developed a statistical model that allowed me to predict which assumed-identity categories actually got the most retweets, likes and replies on a per-tweet basis. I was able to do that because we categorized the different IRA accounts manually, and then we were able to add some other predictors as well.
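[Editor’s note: As a hedged illustration of the patterns Freelon lists, the sketch below computes hashtag frequencies and per-category engagement, then fits a simple count model. The account-to-category mapping shown is hypothetical, and the negative binomial specification is our illustrative choice, not necessarily the model used in Freelon’s paper.]

```python
# Hypothetical illustration of the descriptive patterns described above.
import pandas as pd
import statsmodels.formula.api as smf

tweets = pd.read_csv("ira_tweets_csv_hashed.csv",
                     usecols=["userid", "hashtags", "retweet_count",
                              "reply_count", "like_count"])

# Hand-coded assumed-identity categories per account. These labels and
# account IDs are placeholders; the real mapping came from manual coding.
identity = {"account_a": "Black activist", "account_b": "conservative",
            "account_c": "news outlet"}
tweets["category"] = tweets["userid"].map(identity)

# Most common hashtags. The field resembles a stringified list, e.g.
# "[MAGA, news]"; the parsing here is approximate.
tags = (tweets["hashtags"].dropna().str.strip("[]")
        .str.split(", ").explode())
print(tags[tags != ""].value_counts().head(20))

# Mean per-tweet engagement by assumed-identity category.
print(tweets.groupby("category")[["retweet_count", "reply_count",
                                  "like_count"]].mean())

# Engagement counts are overdispersed, so a negative binomial regression
# is a natural choice for predicting retweets from identity category.
labeled = tweets.dropna(subset=["category"])
model = smf.negativebinomial("retweet_count ~ C(category)",
                             data=labeled).fit()
print(model.summary())
```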

Worland: If you were to guess, based on the groups that you could see that they were engaging, what would you say they were targeting, what kinds of groups?

Freelon: I think you can infer a little bit based on the identities that they took on. So we have another study where we looked at some of the people who replied to the IRA, and we found that most folks who replied to IRA agents in different ideological categories shared that ideological leaning themselves. So for example, authentic conservatives replied to IRA accounts that presented as conservative, and liberals replied to IRA accounts that were pretending to be liberal. So I think that one way that they appealed to folks – and I guess this is something I can say with some confidence, as I looked into it – is that they tried to make themselves look like the groups that they were trying to appeal to. It’s sort of birds of a feather: look like a conservative, look like a liberal, and try to insinuate yourself into the conversation on that basis. So I think that was the general strategy. They had varying degrees of success with that, but I think that’s probably the main thing I can say about it.

Worland: So there’s a level of impersonation that’s happening – infiltration by looking like your target, sounding like them, acting like them, engaging with them. It’s sort of like espionage on some level. Would you make that comparison, even though there’s sort of a different goal, because here you’re not trying to get information, you’re trying to sort of sow distrust and separation, whereas with espionage, you’re essentially trying to get information? How would you compare and contrast what we might be more familiar with in terms of Cold War espionage?

Freelon: Yeah, well it’s certainly a disguise. I think it has that in common with what we traditionally think of as espionage. As you note, the goal is clearly different. You’re not trying to get information, you’re trying to insert content into a conversation that provokes some sort of response, whether it’s suppressing the vote, or making people more extreme, or distracting folks, whatever it is. I mean, I think I would compare it, more than anything else, to trolling, which is often done without any political motive. It’s just inserting yourself into a conversation, raising the temperature of the conversation, and really just derailing things by posting extreme content, or content that steers the conversation in the direction that you want it to go. So this is something that we’ve known about on the internet and other digital platforms for decades, but in some ways, I think what we’re looking at is the weaponization of this trolling behavior for explicitly political purposes, and for purposes that satisfy the international ambitions of certain countries.

Worland: How damaging do you think these kinds of disinformation campaigns are ultimately to American democracy, and to our civic discourse? Maybe it’s not going to change how person A votes, or how person B votes, but is it actually harming our civic discourse? Is it causing damage?

Freelon: Yeah, I think so. I divide the kinds of effects that disinformation can have into first- and second-order effects. So the first-order effects are what happens when you come into direct contact with disinformation content. So in other words, you read something or you see something that is furtively spread by a disinformation agent, and that has whatever effect that it has, so that’s first order. Second-order effects are when the idea that disinformation is out there influences your behavior, irrespective of your actual contact with disinformation. So for example, that fact itself may simply make you paranoid about whether what you’re encountering is disinformation. That itself can be damaging. Or in some cases, if something problematic for your side of the political aisle comes down the pike, you can say, “Oh well, that’s just disinformation,” as a way of discrediting it without any evidence to that effect. You’re basically just using disinformation as an insult to say, “Oh, I don’t like that, so it must be false. It must be spread by some foreign country to undermine my side of the political sphere.”

So in many ways, I think the second-order effects are actually more problematic, because a little bit of disinformation puts that idea out there into the culture, causing paranoia on the one hand, while also being used to dismiss content that may be inconvenient for one side or the other.

Worland: Yeah so ultimately, it’s contributing to the pollution of our information ecosystem in such a way that it’s adding to truth decay.

Freelon: Yes, even when the actual content of disinformation may not be present at all. The fact that it could be causes you to waste mental energy on trying to figure out whether disinformation is present when it may not be present at all.

Worland: There are reports that Russian disinformation campaigns are taking place again this year. Have they changed their methods? What do we know about what might be happening in the lead-up to this year’s election?

Freelon: I don’t think we know a whole lot yet. I mean remember we didn’t really know a whole lot about who the Russian accounts were, or what they were doing until a year after the 2016 election. I think similarly, when you consider that disinformation is likely to ramp up right around now, we’re inside of a month before the election, I think it’s going to take a while for us to figure out first, who the disinformation accounts actually are, and then secondly, you’re going to need to do the research to figure out what they’re trying to do. So I don’t think you can really figure that out now, but I have no doubt that something is happening. In some respects, it’s actually kind of unfortunate that we can only do this after the fact, but that’s sort of the nature of the beast.

Worland: Well, The New York Times reported that a Russian agency linked to the Kremlin is hiring American journalists to help them spread disinformation. Are you aware of that report?

Freelon: Yes, I am, so that would actually indicate that the IRA is shifting its tactics. So in 2016, most people who were behind these accounts were actual Russians, and there’s pretty good reporting to that effect. So the recruiting of actual Americans, I think, is one way of making the content seem more authentic than when it was Russians who were actually doing the posting. Now what I haven’t seen is large-scale evidence of who these people are, what they were posting, or what the actual effects were. I think some folks, hopefully myself included, will need to do some rigorous research on this. But until all the accounts are exposed, I don’t think we can really say systematically what’s going on.

Worland: As a journalism professor at Chapel Hill, you’re teaching young future journalists on a regular basis. What do you teach them about how to spot and debunk disinformation, both as consumers and then as future journalism professionals?

Freelon: As a researcher, I teach my students to try to seek convergent lines of evidence. So you try to find where multiple people who are not necessarily associated with each other are reaching the same conclusions. You also try to watch for content that is attempting to appeal to your own personal confirmation bias. This is what I tell students, right? Disinformation runs on confirmation bias. So that means that people who are spreading disinformation are trying to appeal to your sense of what you already believe to be true, and what you already believe to be right. So if you’re on the left, they’re going to try to disparage the right. If you’re on the right, they’re going to try to disparage the left. I think any content that fits into that bucket really deserves extra scrutiny, because that is how disinformation typically works. Especially if you see content that is disparaging the other side and seems too good to be true, definitely look a little bit further into it, and try to make sure that it actually is true before you start spreading it, because that, at least in my mind, raises the probability that it may be disinformation. That’s exactly how the disinformation playbook typically works.

Worland: You’ve also studied state-sponsored disinformation relating to the Black Lives Matter movement and recent protests around racial justice in particular. Can you tell us a little bit about what you’ve found in doing that research?

Freelon: Well, I actually have not looked at Black Lives Matter in its current evolution in 2020, and even if I had, the research probably wouldn’t be done by this (inaudible). So the protests started in May. I don’t know if your audience knows, but academic research moves very slowly. So you have to do the work, and you have to put it up for peer review, and it takes forever. As an informal observer, though, Black Lives Matter has been mostly the target of disinformation. This was true back when I was studying it in 2014 to 2016, that period. So typically what you have is people who don’t like the movement and are trying to say problematic and incorrect things about it. I suspect that’s probably true today. I was having a conversation with a journalist earlier, in which she pointed out that adherents of the QAnon conspiracy theory were spreading disinformation about Black Lives Matter. That didn’t surprise me, but this has been true of social movements at least going back to the ’60s, probably back before then, when you had disinformation being spread about the civil rights movement. You know, I think that’s something that certainly deserves research, and I think that the jury’s still out on it just given the recency of the events, but I’ll certainly be interested to see exactly what those patterns are when the research comes out.

Worland: Is it fair to say that when disinformation campaigns are being launched, the strategy is ultimately to exploit your target’s vulnerabilities or political flashpoints, and to really exploit those and magnify them, or put Miracle-Gro on them?

Freelon: Yes, I mean, I think that disinformation certainly does exploit controversial flashpoints, and one of the issues that we run into is that we’re having more and more of those, right? Something that in prior eras would have been as innocuous as suggesting a commonsense public health measure to reduce the spread of a contagious virus, like putting a mask on, becomes quite controversial in our era of extreme polarization. So I really don’t think that disinformation agents, be they foreign or domestic, have to look very hard to find those controversial issues to exploit. You know, I think in one sense, folks should think about how to best reduce polarization, because when you reduce polarization, you’re reducing the number of entry points that disinformation agents have to cause problems. Then the other issue, I think, is understanding one’s own confirmation bias, and understanding that people are going to try to play to that as well. So it becomes, I think, very difficult for folks to reject things that seem to fit with their political beliefs. But if they care about what’s true or false, they really do have to reject disinformation, certainly when it’s revealed as such, and ideally by applying extra scrutiny to any content that tries a little bit too hard to appeal to your sense of what’s right for your side of the political aisle.

Worland: Other guests we’ve had on the show have been particularly critical of Facebook’s role in spreading misinformation, but why are you focused more on Twitter and not as much on Facebook?

Freelon: Well, what’s interesting about that is Twitter has really made its data a lot more available to researchers, especially in the area of state-sponsored disinformation, so that, I think, is why we know a little bit more about it. Facebook makes its data available in a bit of a different way, and specifically, a lot of the disinformation content pertaining to the IRA for 2016 was released to certain researchers but not others. It wasn’t released publicly the way that Twitter did it. So I certainly don’t want to let Facebook off the hook here at all, but really this is a function of data availability. In fact, the IRA actually spent more money on Facebook than they did on Twitter, but the lack of availability of that data on the Facebook side makes it a lot harder to figure out what’s going on there. Now there has been some research on this, but that data was provided to research groups on a non-public basis, which is something I don’t agree with at all.

Worland: Yeah, I was going to say, I think you can guess what my next question is going to be: should Facebook be more transparent with this data? What’s their explanation for why they’re not?

Freelon: Yeah, I think that’s something you have to ask Facebook about. I don’t know why they didn’t release the data the same way that Twitter did, I mean, especially if a credible designation has been made to the effect that this content was actually produced by the IRA, by state-sponsored disinformation agents. I can’t say why they didn’t do it. I think they should, and I hope that they will. But without that data, it becomes very difficult to figure out how disinformation was spreading on Facebook, and to compare it to other platforms in terms of who is consuming it, who is sharing it, and how the different identities worked to target different segments of the population.

Worland: I think it’d be really fascinating to have the same kind of analysis that you’ve done for Twitter from Facebook. I think we’d all like to see that. Both Facebook and Twitter’s current policies require that so-called coordinated inauthentic behavior be removed as soon as it’s detected, but from what I’ve read, you’re not so much in favor of this approach. What do you think social media companies should be doing about these campaigns instead?

Freelon: The real issue I have with immediate removal of the content is that it basically throws an obscuring cloak over what’s been going on. So if you’re not following up on the press releases the social media companies put out after they do this, the accounts just sort of disappear. What I’ve been in favor of – and I want to make clear that you’d actually need to test this before you did it – is that it might work if you actually labeled the content first. So in other words, you sort of draw a frame around it and say, “This is disinformation.” I think that may help people to actually detect similar types of disinformation if they can see what it looks like. Rather than just taking it down, you could do something like what Twitter has done, where, for example, they’ll say, “This tweet is really problematic” for one reason or another, or “This is sensitive material” – though that’s not for disinformation content, it’s for graphic violence and that sort of thing.

You could put up a screen that says, “This is disinformation content. Do you really want to view it? Yes or no,” and if you click yes, you get a boundary around it that indicates that that’s what it is. So I think putting it out there in that way would really help people to understand that it is present, and to understand what’s in it, and that may in turn help them to see it in other places.

Worland: So it sounds like you’re saying to make it a teachable moment. Instead of just taking it away, censoring it, let’s give people the opportunity to learn a little bit about how this stuff works.

Freelon: Yes that’s a possibility. Now one thing I would want to do is I want to do some extensive pretesting on this to make sure that it doesn’t make the problem worse, right? This is just an idea. I do want to say that a number of attempts to reduce or eliminate the effects of disinformation have backfired. So I don’t want to say, “OK, this is the end-all be-all solution.” But I think it’s a good idea, it sort of works in my head, but I wouldn’t want to say roll this out on a massive basis only to find out that there’s something I hadn’t thought of, and it’s actually making the disinformation problem worse. I think it’s a proposal worth exploring and worth testing, but absolutely that should be done before it’s rolled out on any sort of large scale.

Worland: Given that one of your suggestions, with the caveat that more research needs to be done, is education, you know, providing this brief opportunity for a teachable moment, what do you think the value is of news literacy, information literacy, in helping stop the spread of disinformation?

Freelon: Well I think it’s incredibly valuable, but people need to be honest about what media literacy really means because there are lots of conspiracy theorists who claim to be engaging in media literacy by saying, “Oh I’m just not blindly accepting whatever the news media puts out.” That’s not really media literacy because most of the conclusions that they draw fall in line with their preexisting political beliefs.

Nobody wants to be uncritical, right? Critical thinking is one of these things that is universally treasured by people of good faith and conspiracy theorists alike. The difference is in the details: actual critical thinking follows the facts wherever they lead, and sometimes they lead you to places that are uncomfortable with respect to your political beliefs, versus something that systematically rejects anything that comes from mainstream sources if it doesn’t flatter your own view of the world, where the chips always fall in the same place, and you play on this idea that you’re engaging in critical thinking because you blindly reject anything that comes from the mainstream media. That’s just not what it is. Just because you call it critical thinking doesn’t make it so. So understanding what it really is, versus recognizing when people are just saying that because they want to look smart, I think is really critical.

Worland: Right, yep, agreed. You know you’ve suggested that one of the solutions to state-sponsored disinformation is for the right and the left to work together to address the threat. This is a time when the country is just really deeply divided, perhaps more so than ever. What makes you so optimistic about our ability to do that, to come together? What do you suggest we do?

Freelon: So I’m not really optimistic about it. It’s a bit of a pipe dream, I will admit. I think it would be great, but I understand there are deep divisions in our country. I’m not at all Pollyanna-ish about that. I think that it’s really, really important for folks to come together and recognize areas where there may be agreement. We saw tiny inklings of this over the past few years, back in the Obama administration, around something like criminal justice reform. I think that when people on different sides of the aisle understand that there are areas of agreement, that creates a space within which the kinds of harms from disinformation that we’ve seen can be roundly rejected. That of course is difficult because politics is very tribal, right? You want to have the beliefs that other liberals have if you’re liberal, or that other conservatives have if you’re conservative, and so there’s a strong incentive not to reach across the aisle.

But without doing so, the disinformation problem is just going to get worse and worse. You can only deny a reality so long before it comes back to bite you. If folks are interested in having a political culture in which disinformation is not a part of it, I think that really does take both sides. You can’t have one side that sort of says that disinformation is a problem and the other side’s cool with it, because the entire system ends up feeling the negative effects of that. I would love it if both sides could at least agree that intentional disinformation is bad, but of course that starts with recognizing it on your own side, because recognizing it on the other side is extremely easy to do.

Worland: Right, so the solution kind of has to come from the top down.

Freelon: I think coming from the top down will certainly help. Again, in our political system, there’s a give and take. There are top-down aspects, but we also elect our leaders, so choose leaders who reject disinformation and embrace science and the rest of that. So I don’t think it’s all top down or all bottom up. I think there’s a back and forth that needs to happen that I’d like to see more of.

Worland: How important do you think truth is to democracy, just to bring it back to our topic?

Freelon: Yeah, I think truth is essential to democracy. I don’t think that’s any kind of overstatement. When people can’t trust the words of their leaders, or of one another, that’s a problem, because so much of our politics is about convincing people to take certain actions that will support certain positions or candidates. When that trust goes away, what typically enters that vacuum is confirmation bias, and believing whatever goes along with your political commitments. That’s not a great basis for making democratic decisions. So facts are very, very important: facts about who is affected by the latest tax bill, or how many people police kill in a year, or the number of people in American prisons, et cetera. Without the facts on these issues, it becomes extremely difficult to fashion sound public policy.

An environment within which disinformation can be created without consequence bodes ill, I think, for our politics, and for the policy that it produces.

Worland: Thanks for listening. We’ve been talking to Deen Freelon at UNC Chapel Hill about the impact of disinformation on democracy. Next week, we’ll talk to Rebecca Aguilar, a seven-time Emmy award-winning TV news reporter based in Texas who just recently became the first-ever Latina president-elect of the Society of Professional Journalists. We’ll talk to her about why newsroom diversity is essential for news organizations to accurately and fairly cover the communities they serve and how this connects back to trusting news organizations to keep us well informed.

“Is that a fact?” is a production of the News Literacy Project, a nonpartisan education nonprofit helping educators, students and the general public become news literate so they can be active consumers of news and information, and equal and engaged participants in a democracy. Alan Miller is our founder and CEO. I’m your host, Darragh Worland. Our executive producer is Mike Webb. Our editor is Timothy Cramer, and our theme music is by Eryn Busch. To learn more about the News Literacy Project, go to newslit.org.

