There are now 70 countries in which a political party or government agency is using “cyber troops” to engage in organized disinformation efforts online, according to a new report from researchers at the Oxford Internet Institute at the University of Oxford. This constitutes a 150% increase in state- and party-sponsored social media manipulation campaigns since 2017, when researchers found such activity in only 28 countries.
Among the study’s key findings:
Facebook (56 countries) and Twitter (47 countries) are by far the most popular platforms for these efforts.
A strong majority of countries use human-operated accounts (61 countries) and automated “bot” accounts (56 countries).
Attacking political opponents (63 countries) is a significantly more common use of computational propaganda than spreading messages that support a government or party (51 countries). (Computational propaganda is defined [PDF] as “the use of algorithms, automation and human curation to purposefully distribute misleading information over social media networks.”)
The report also found that most countries use a combination of tactics in these campaigns, including creating and circulating misinformation, such as memes and “fake news” websites (55 countries); amplifying content, including legitimate news, that aligns with government or party interests (52 countries); and targeted trolling of people with opposing opinions and of journalists (47 countries).
Note: The study also found several cases in which countries with more established disinformation programs — such as Russia, India and China — provided training and other assistance to countries with upstart disinformation programs.
Discuss: The report concludes by pointing out that a “strong democracy requires access to high-quality information and an ability for citizens to come together to debate, discuss, deliberate, empathize, and make concessions.” Why are misinformation and disinformation considered threats to democracy?
Also discuss: The final lines of the report pose two key questions that provide an excellent way to spark student discussion, reflection and inquiry: “Are social media platforms really creating a space for public deliberation and democracy? Or are they amplifying content that keeps citizens addicted, disinformed, and angry?”
Idea: Have students reimagine a historic propaganda campaign — like those created during World War II or the Cold War — with access to today’s information environment. How would it be different? How might history have been different?
Note: The Mediamass piece includes an “Update” dated Sept. 27: “This story seems to be false. (read more)” The “read more” link leads to the site’s About section, which says that the “People” section of the site (in which this article appears) is “a humorous parody of Gossip [sic] magazines” and that “all stories are obviously not true.”
Also note: Snopes pointed out that Mediamass has used this same fictional magazine cover template many times before:
Discuss: Why are there so many rumors about Greta Thunberg this week? Do parody websites have any responsibility for the ways that the fake pictures they create might be used by others? Why or why not? What steps could such websites take to try to stop satirical fakes from being used out of context? Are some sources of satire more easily mistaken as legitimate news than others? Why?
YES: President Donald Trump visited a newly replaced section of the Otay Mesa border wall near San Diego on Sept. 18 and praised its “anti-climb” features. NO: The wall in this tweet is not the Otay Mesa wall. YES: It is a section of border fencing in Imperial Beach’s Border Field State Park, about 10 miles to the west of Otay Mesa. NO: It does not show people crossing into the U.S. YES: It shows recently arrived members of a caravan of migrants who climbed and sat atop the fence before returning to the Mexican side of the border in November 2018.
Idea: Use this viral rumor to teach students how to do a reverse image search and use Google Street View. Using Google’s Chrome browser, right-click the image in this archived version of the tweet and select “Search Google for Image” from the menu. Use the image search results to find a credible source and debunk the tweet’s claim. Then challenge students to locate that section of fence using Google Street View.
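For classes working outside Chrome’s right-click menu, the same reverse image search can be reached by building the query URL directly. This is a minimal Python sketch, assuming Google’s public `searchbyimage` endpoint; the image address shown is a placeholder, not the actual image from the tweet:

```python
from urllib.parse import urlencode

def reverse_image_search_url(image_url: str) -> str:
    """Build a Google reverse image search link for a given image URL."""
    base = "https://www.google.com/searchbyimage"
    # urlencode percent-escapes the image address so it survives as a query parameter
    return base + "?" + urlencode({"image_url": image_url})

# Placeholder address -- substitute the image URL from the archived tweet.
link = reverse_image_search_url("https://example.com/border-fence.jpg")
print(link)
```

Students can paste the resulting link into any browser to see the same matches the right-click menu would surface.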
CORRECTION: An item in last week’s Viral Rumor Rundown about a meme with a fake quote attributed to President Trump incorrectly cited the source of a video still used in the meme. The still was taken from the April 25 broadcast of Hannity, another program on Fox News. The link to the correct program in the TV Archive is here.
Five to teach
Anti-vaccination activists are using social media to recruit grieving parents whose babies have died from such causes as accidental suffocation and sudden infant death syndrome, NBC News reported last week. Inaccurate or misleading stories about these parents and their children are then often used as the basis for fundraising campaigns to support anti-vaccination organizations.
Discuss: Should social media platforms allow posts containing medical misinformation to be published? Why or why not? Which social media platform has done the best job combating vaccine misinformation? What else should platforms do to stop the spread of misinformation about vaccines and other medical issues?
The largest page in the Kosovar network, “Police Lives Matter,” falsely claimed to be managed by police officers in the U.S. and had over 170,000 followers. The names of other pages in the network invoked and exploited the controversial divide over the role of racial bias in law enforcement — and the pages themselves used tactics such as putting the stories and images of actual police officers killed in the line of duty into false contexts for higher engagement.
The motivation for the network appears to have been commercial; several of the pages sought to drive web traffic to an Albanian website with programmatic ads. Facebook removed all of the pages in the network on Friday, when Popular Information notified the company of its findings.
Discuss: How many other exploitative Facebook pages could there be? Should Facebook take proactive steps to eliminate them? What is it currently doing to remove such pages or prevent them from being created?
Another idea: Have students conduct an audit of Facebook pages followed by friends and family members. Then, as a class, create an anonymized database of the pages that includes a classification for the type of content published, the size of the page’s following and the location(s) of the page’s manager(s).
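One simple way to structure the class database described above is a shared CSV file with one row per audited page. This is a hypothetical sketch only; the field names and sample entries are invented for illustration and should be adapted to whatever categories the class agrees on:

```python
import csv
import io

# Hypothetical column names for the anonymized class database.
FIELDS = ["content_type", "follower_count", "manager_location"]

# Invented sample entries -- real rows would come from the students' audits.
audit = [
    {"content_type": "meme page", "follower_count": 120000, "manager_location": "unknown"},
    {"content_type": "local news", "follower_count": 8500, "manager_location": "U.S."},
]

# Write the rows to an in-memory CSV; swap io.StringIO for open("audit.csv", "w")
# to save a file the whole class can share.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(audit)
print(buffer.getvalue())
```

Because no page names or follower identities are recorded, the database stays anonymized while still supporting class-wide comparisons by content type, audience size and manager location.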
Both The New York Times and The Des Moines Register published explanations last week about their decisions to include certain personal information in articles — in the Times, about the whistleblower whose written complaint has prompted the U.S. House of Representatives to begin an impeachment inquiry, and in the Register, about Carson King, a 24-year-old whose inadvertent appearance on ESPN holding a humorous request for beer money ended up raising more than $1 million for an Iowa children’s hospital.
Amid criticism, Dean Baquet, executive editor of the Times, said that his publication provided details about the whistleblower to allow readers to use their own judgment in deciding whether the person was credible. (The Times’ article noted that the whistleblower was male and described him as a Central Intelligence Agency officer who previously worked at the White House, had been trained as an analyst and showed “a sophisticated understanding of Ukrainian politics.”) Carol Hunter, executive editor of the Register, explained her publication's decision to include information about two racist jokes King had tweeted when he was 16 — and then announced two days later that the reporter who wrote the profile had also posted offensive tweets over the last decade and no longer worked there.
Discuss: Did the Times need to publish the details about the whistleblower’s background to establish the person's credibility? Did the Register have an obligation to readers to share King’s offensive tweets? What might have happened in both cases if the publications had withheld the details in question?
Idea: Have students read the explanations from the executive editors of the Times and the Register (linked above). Then ask them what is similar — and what is different — about these cases.
Another idea: Divide students into two groups. Have students in one group pretend they are New York Times editors meeting to decide whether to publish details about the whistleblower; have students in the other group pretend they are Des Moines Register editors meeting to discuss whether to publish information about offensive tweets posted years before by the man their reporter is profiling. Give each group a list of several factors they must consider — for example, the consequences of withholding the information in question, the duty to be transparent with readers, and the need to be fair to the people they are writing about.
Equitable inclusion in journalism relies on a framework of community service, demonstrated respect, mutual trust, active inclusion, meaningful participation and shared power, says Heather Bryant, founder and director of Project Facet, which helps newsrooms with collaborative projects. This framework, which references strategies that some outlets are already using, is intended to provide a “roadmap” for news organizations and journalists who wish to “develop and maintain inclusive, equitable relationships between newsrooms and those we serve” — and it will continue to “adapt and evolve,” Bryant wrote last week on Medium.
Idea: Have students examine two or more weeks’ worth of news reports from one local news outlet. Does it include equitable coverage of a variety of voices and communities, such as marginalized groups? Is the full range of life in the city or communities being covered fully represented? If not, what factors might contribute to the current coverage, and how might the news outlet address these issues?
Facebook will continue to exempt politicians from its third-party fact-checking program under a “newsworthiness exemption,” the company’s vice president of global affairs and communications, Nick Clegg, announced last Tuesday. This policy, in place since 2016, allows posting of content that may violate Facebook’s community standards “if we believe the public interest in seeing it outweighs the risk of harm.” (It does not cover speech that places people in danger. If a politician references something that has previously been debunked by Facebook’s fact-checking partners, the post will be demoted and fact-checking information will be included.)
Digital media researchers and other experts responded to Clegg’s announcement by calling for more specificity about who is considered a politician, and by pointing out that the policy “opens up a hole in information integrity on the platform” that could be exploited, especially during campaigns.
Discuss: Do you agree that Facebook should not intervene in speech by politicians, or do you think that these statements should be reviewed by the platform’s third-party fact-checkers and subject to the same content rules applied to other users? How should Facebook define a “politician”? If the platform blocked inaccurate statements made by public officials, would there be any drawbacks? Does the public have a right to know when a public official makes a false or outrageous statement?