In short, while the outbreak has yet (as of Feb. 2) to be classified as a pandemic, the misinformation about the new strain of coronavirus has achieved that status many times over.
As worrisome as this is, coronavirus misinformation patterns can also be used as a case study with students. Just as epidemiologists can glean valuable insights from outbreaks of disease, students can analyze the plethora of coronavirus rumors to refine their understanding of why and how falsehoods spread.
The two phenomena share some factors. As New York University journalism professor Charles Seife points out in the opening chapter of his 2014 book Virtual Unreality, the three epidemiological factors that determine how a disease spreads — transmissibility, persistence and interconnectedness — can also be used to explain the ways misinformation spreads online. Digital information is highly transmissible (easy to replicate) and highly persistent (easy to store and search) — and circulates in the most interconnected information environment the world has ever known.
But coronavirus rumors and other types of alarming medical misinformation have a specific potential for virality because they tap into extremely strong emotions. Rumors about infectious diseases incite fear, causing people to react emotionally and share those rumors with loved ones out of a strong desire to protect them. Medical information is also highly specialized, which increases the likelihood that people — perhaps especially conspiracy theorists — will misunderstand information they’ve dug up themselves (such as the results of a mock pandemic exercise, or old patents for a different strain of the coronavirus).
Finally, while photos and video related to many topics are easy to present persuasively in false contexts, images and footage of people suffering from medical conditions are among the easiest to shift into new (and false) contexts.
Note: Two other recent events generated a surge of out-of-context photos and video for the same reason: the killing of Iranian general Qassem Soleimani sparked a wave of false military strike visuals, and the recent wildfires in Australia prompted years-old photos of other wildfires to circulate.
What does it mean for a rumor to “go viral”? How is the outbreak of a disease similar to the spread of viral misinformation? What can the steps taken by medical professionals to control the spread of disease teach us about controlling the spread of misinformation?
NO: LeBron James is not promoting or endorsing the commemorative Kobe Bryant T-shirt pictured here. YES: A Facebook page called “Tees & OuFit” used this doctored photo on posts that link to an online marketplace with this shirt. YES: The page purchased a Facebook ad to promote one such post on Jan. 31.
NO: The unnamed source who leaked news of Kobe Bryant’s death to TMZ was not charged with murder in connection with the crash. NO: The Federal Aviation Administration has not obtained “surveillance proof” that the helicopter had been tampered with the night before it crashed. YES: A hoax “satirical news” website — with an unreadably tiny satire disclaimer at the bottom of the page — published a piece making these claims. It has since deleted the piece.
A partial screenshot of the unreadably small disclaimer at the bottom of the lfrsolutions.com “satirical news” website. It reads, “The stories posted on lfrsolutions.com are for entertainment purposes only. The stories may mimic articles found in the headlines, but rest assured they are purely satirical.”
NO: Kobe Bryant’s helicopter was not “sabotaged” by a conspiracy of powerful interests including the NBA, the pharmaceutical industry or the fictitious “Illuminati” secret society. NO: Police did not say that when Bryant’s body was found he was hugging his daughter Gianna.
NO: The actress Sandra Bullock did not say that “Donald Trump is doing everything to improve our nation,” nor that people who don’t like Trump “can leave our nation and never returned [sic], particularly you Hillary.” YES: This false story has circulated online since at least November 2017, and has been debunked before. YES: Select phrases from the false story were taken from a 2015 People magazine interview with Bullock.
Note: Though no one has been able to verify the origins of this false story, its garbled grammar and nonsensical word choices suggest that it’s the product of an algorithm. This, for example, is the lead sentence:
Can celebrity endorsements of politicians influence voters? Was this tactic used during the 2016 U.S. presidential election? What about this post is suspicious?
NO: The actor and comedian Tim Allen did not write this social media post in support of Donald Trump. YES: Another man named Tim Allen appears to have written this post in August. YES: This post has circulated before, but went viral again last week.
A sample of Facebook posts from last week sharing this misleading post by “Tim Allen.”
Five to teach
Twitter users in the United States can now report tweets containing misleading information regarding an election, the social media platform said in a Jan. 29 tweet. Selecting “It’s misleading about a political election” in Twitter’s reporting menu leads to a screen asking the user to choose one of three types of election misinformation: false information about how or where to vote or register to vote; posts that seek to suppress or intimidate a voter; or posts that impersonate or misrepresent the affiliation of an elected official, candidate, political party or governmental body. Twitter has previously used the tool in other countries.
Twitter will review reported tweets according to its rules against misleading content or voter suppression; offenders could have their accounts restricted or permanently suspended. Pinterest also said it will take down misleading content about voting.
Do you think social media platforms are responsible for what their users post? Why or why not? If so, how would you rate major social media platforms’ efforts to prevent misinformation from spreading? What new ideas would you introduce if you had a meeting with these companies?
As a class, create a comparison of major social media platforms’ efforts to curb the spread of political misinformation, then add to it over time as new initiatives are announced in the run-up to the 2020 U.S. presidential election.
Should journalists vote in primary elections? It’s an ethics question that Poynter’s Kelly McBride explored initially on Twitter and then in a Jan. 28 column. If journalists vote in primary elections, their party affiliations could be made public, potentially exposing journalists to criticism and revealing indications of some newsrooms’ lack of political diversity. But McBride argued that newsroom leaders who discourage journalists from voting in primaries are guilty of accelerating “the oversimplification of journalism values” by “ignoring the difference between personal objectivity, which is impossible, and objectivity of the reporting process.” McBride suggests newsroom leaders instead work toward increased transparency by, for example, inviting audiences for an open discussion about how newsrooms make sure that their political coverage is fair. Embracing good citizenship is the best way to frame the discussion, McBride said. “Creating journalism is a political act. Whether you vote in a primary or not should be a political choice, not a choice made to appease your boss,” she said.
What did McBride mean when she wrote that “journalistic objectivity is about the process, not the person”? Do you agree? Do you think journalists should vote in primary elections, or do you think journalists’ political affiliations should remain private? Why or why not? Should journalists covering a primary vote in that election or any other political race they cover? Why or why not?
The Associated Press says it will expand diversity training for employees after a Ugandan climate activist criticized the news service for cropping her out of a photo taken alongside four white peers.
Vanessa Nakate, 23, of Kampala, was photographed at the World Economic Forum in Davos, Switzerland, on Jan. 24 alongside fellow activists Greta Thunberg, Isabelle Axelsson, Luisa Neubauer and Loukina Tille. The AP transmitted the photo to its customers, unaware that the photographer had cropped it.
When Nakate saw the photo, she asked the AP in a tweet why she had been cut out. The photographer replied that the photo had been cropped to remove a distracting building in the background and to focus more tightly on Thunberg.
That ignited wider criticism on social media. Thunberg lent her support in an Instagram post. "You didn't just erase a photo. You erased a continent," she quoted Nakate as saying.
Sally Buzbee, executive editor and senior vice president of the Associated Press, tweeted an apology to Nakate on Jan. 26. It read, "Vanessa, on behalf of the AP, I want to say how sorry I am that we cropped that photo and removed you from it. It was a mistake that we realize silenced your voice, and we apologize. We will all work hard to learn from this."
Discuss: How can a journalist ensure source diversity? Is it important for sources to be diverse? How can a lack of source diversity hamper news coverage? In what ways can quoting diverse sources benefit news coverage?
Idea: Have students read and review Casselman’s Twitter thread. Then have students track and analyze the sources quoted by one straight-news journalist during a one-year period, following the same guidelines Casselman did. (Note: Like Casselman, students may not always be able to determine sources’ racial identities.)
YouTube “users consistently migrate from milder to more extreme content,” according to an analysis (PDF) by researchers at the Swiss Federal Institute of Technology Lausanne of 330,925 videos posted on 349 YouTube channels, as well as tens of millions of comments. “Intellectual Dark Web” (or “IDW”) and “alt-lite” channels are “gateways to fringe far-right ideology,” and a large percentage of users who consumed “alt-right” content had previously consumed alt-lite and IDW videos, the study found. In addition, the researchers’ analysis of more than two million recommendations made by YouTube’s algorithm between May and July 2019 showed that alt-lite videos are easily accessible from IDW channels, but that alt-right content can be reached only through channel recommendations.
Discuss: Should YouTube ban “hateful” or extremist content? How should it decide what counts as “hateful” or “extremist”? Do you think YouTube should take steps to fight the radicalization pipeline described in the study? Why or why not? If you were in charge, what (if any) steps would you take in response to this study’s findings?
“Multi-platform information literacy” has been “a core, cross-subject component” of Finland’s national curriculum since 2016, according to a Jan. 29 report in the British publication The Guardian. For example, in math, students learn how statistics can be used to mislead. They study examples of propaganda in history, and language teachers emphasize “the many ways in which words can be used to confuse, mislead and deceive.” The school-based effort is part of a national awareness campaign that began in 2014, after Finns were first targeted by Russian disinformation campaigns online.
Should news and information literacy be embedded across subject areas? What skills do we, as a group of students, possess to help us critically evaluate information? Which of those skills are most needed in our communities? How can we share them?
Challenge groups of students to create a news and information literacy campaign targeting a specific set of skills, or a specific group of people in their community (for example, younger siblings or grandparents) who they think need help navigating information online. Have students share their results and vote on which campaign was most successful.