The Sift: ‘Troubling’ new study | ‘Greatest propaganda machine’ | Detecting deepfakes

  News Literacy Project

A new report from the Stanford History Education Group has found little change in high school students’ ability to evaluate information online since 2016, when SHEG researchers released the results of a similar study.

This skill set — dubbed “civic online reasoning” by Stanford researchers — consists of the ability to recognize advertising, including branded content; to evaluate claims and evidence presented online; and to correctly distinguish between reliable and untrustworthy websites and other sources of information. (The executive summary of the 2016 study summed up “young people’s ability to reason about the information on the Internet” in one word: “bleak” — and the executive summary of the latest report acknowledged that “the results — if they can be summarized in a word — are troubling.”)

For the latest findings, SHEG partnered with Gibson Consulting, an education research group, to assess 3,446 high school students from 16 school districts in 14 states. The students in the sample matched the demographic profile of high school students in the United States.

The researchers asked the students to complete six assessments with five distinct tasks, then assigned each response one of three ratings: Beginning (incorrect or showing use of “irrelevant strategies for evaluating online information”), Emerging (partially incorrect, or not fully showing sound reasoning) and Mastery (effective evaluation, reasoning and explanation).

A task asking students to evaluate the reliability of a website about climate change run by a group funded by fossil fuel companies had the lowest scores. Fewer than 2% of student responses were given a Mastery rating; more than 96% were rated Beginning. In another task, students were presented with a Facebook post from a user named “I on Flicks” containing a video compilation that purported to show Democrats committing voter fraud during the 2016 Democratic primaries but was actually a collection of videos showing ballot-box stuffing in Russia. When asked if the video presented strong evidence of voter fraud in the United States, more than half (52%) said that it did. Only three respondents — 0.08% of the students surveyed — were able to find the source video.

Two examples from the latest SHEG study: a website sponsored by the fossil fuel industry (left) and a video of Russian voter fraud presented out of context as Democratic voter fraud in the United States.

Other tasks asked students to identify the ads on a screenshot of Slate’s homepage and to compare the usefulness of two webpages — a page containing the text from “10 Myths About Gun Control,” a brochure published by the National Rifle Association’s Institute for Legislative Action, and the Wikipedia page for “gun politics in the United States” — as a starting place for “research on gun control.”

Idea: Create a series of bell ringers to help students with the core “civic online reasoning” skills measured in this study. For example, take screenshots of several websites’ homepages and present one of them to students each day for a week, asking them to identify all the ads on the page (including sponsored content). Or find websites sponsored by special interests — such as a so-called brand journalism site — and challenge students to evaluate the credibility of each one. Or use this newsletter’s viral rumor rundown each week to create quick assessments of students’ ability to effectively detect misinformation or evaluate evidence for claims.
Resource: The News Literacy Project’s new mobile app, Informable, has more than 100 examples assessing some of these same skills, including ad recognition and evidence evaluation.
FALSE Ilhan Omar fire Facebook post

NO: Two children of U.S. Rep. Ilhan Omar, a Minnesota Democrat, were not arrested and charged with arson for a nonexistent fire at the nonexistent St. Christopher’s Church of Allod in Maine. YES: A “satirical” story making those claims was published in July on one of a network of satire sites run by the self-proclaimed “liberal troll” Christopher Blair, who frequently provokes incautious readers online. YES: “Allod” is an acronym for “America’s Last Line of Defense,” another of Blair’s sites. YES: This item has been copied (and in some cases plagiarized) by a number of clickbait sites seeking ad revenue, and links to those stories continue to be shared online. NO: Many of those sites do not label this item as satire.

Discuss: Blair publishes absurd falsehoods, which he prominently labels as satire, but does so to troll and mock conservatives. Is this a legitimate form of “satire”? Why or why not? If Blair’s obviously labeled pieces get mistaken as actual news, who is at fault? Is it unethical for Blair to profit from his satire through ad placements on his websites? Is it unethical for digital ad brokers to place ads on Blair’s sites? On sites that plagiarize Blair’s work?
FALSE 40 million rumor post

NO: The House of Representatives’ impeachment inquiry into President Donald Trump has not cost $40 million. YES: This fabricated claim has gone viral in a coordinated “cut-and-paste” campaign across Facebook and Twitter over the last few weeks.

A sample of search results for this false, text-based rumor on Facebook and Twitter.

FAKE Chris Wallace tweet

YES: On Nov. 17 President Trump tweeted a number of insults about Fox News Sunday host Chris Wallace after an interview in which Wallace repeatedly challenged the House of Representatives’ second-ranking Republican, Steve Scalise of Louisiana, on the details of Trump’s interactions with Ukraine. NO: Wallace did not respond by tweeting a series of insults to Trump. YES: An image of a fake tweet insulting Trump — purportedly from the @FoxNewsSunday Twitter handle, with the profile name “Chris Wallace” — circulated online last week. NO: The Twitter account @FoxNewsSunday did not tweet this. NO: The name on that account is not “Chris Wallace”; it’s “Fox News Sunday.” NO: Wallace does not have a verified Twitter account.


FALSE Clint Eastwood post

NO: Actor Clint Eastwood did not write a social media post explaining that he sticks his “neck out for Trump” because of the president’s accomplishments. YES: This text has circulated and been falsely attributed to Eastwood (and others) before. NO: Eastwood doesn’t have any social media accounts, as his daughter Morgan has repeatedly posted.

FALSE Stelter headline

NO: Brian Stelter, the host of the CNN program Reliable Sources, did not say that President Trump is “a destructive cult leader.” YES: Steve Hassan, a mental health professional and cult expert, was interviewed on Stelter’s Reliable Sources podcast and said that “Trump’s organization and followership” are a “destructive cult.” YES: A segment of that interview with Hassan was featured during Stelter’s program on Nov. 24. YES: A misleading headline on an item published by The Liberty Eagle, a partisan commentary site, falsely attributed this statement to Stelter.

In a searing speech on Nov. 21, comedian Sacha Baron Cohen said that social media platforms “amount to the greatest propaganda machine in history” and called for them to be held accountable for the “hate, conspiracies and lies” that they allow their users to publish.

Addressing the Anti-Defamation League’s Never Is Now summit, where he received the organization’s International Leadership Award, Baron Cohen targeted Mark Zuckerberg in particular, describing as “ludicrous” the Facebook CEO’s assertion that allowing misinformation is part of protecting free expression.

The issue, Baron Cohen said, is “not about limiting anyone’s free speech”; it is “about giving people, including some of the most reprehensible people on earth, the biggest platform in history.” He also pointed out that social media’s “entire business model relies on generating more engagement, and nothing generates more engagement than lies, fear and outrage.” He concluded by calling for “regulation and legislation” to curb the worst offenses of social media platforms and for the platforms to fix their “defective” products “no matter how much it costs and no matter how many moderators you need to employ.”
Discuss: Are social media companies responsible for what users post on their platforms? Does a commitment to “free expression” mean that any and all types of content must be allowed? Does Facebook allow all types of content to be posted now? Should major social media companies be regulated by the government? If so, in what ways?
Idea: Have students watch or read Baron Cohen’s speech (be aware that the video contains an instance of profanity that is not in the transcript), then use a four corners technique to break students into groups according to their opinions about what he said. Have students who strongly agree with Baron Cohen defend his talk, have those who partially agree and partially disagree explain the details of their position, and have those who strongly disagree explain why as well.
Another idea: In pairs or individually, ask students to annotate Baron Cohen’s speech, marking the statements with which they agree and the ones with which they disagree. Calculate which statements have the most consensus in the class and which are most controversial. Then create a poll for other students in the school to take, and publish the results.

Now that Michael Bloomberg has announced that he is running for the Democratic presidential nomination, Bloomberg News — the global news organization that is a division of his financial services company, Bloomberg LP — is suspending publication of unsigned editorials from its editorial board, since those editorials have reflected Bloomberg’s personal views. The outlet also will not investigate any Democratic presidential candidate, including Bloomberg, to avoid any potential conflict of interest, editor-in-chief John Micklethwait said in a Nov. 24 memo.

However, he wrote, if “other credible journalistic institutions publish investigative work” on Bloomberg or other candidates, Bloomberg News will publish or summarize those reports for its readers. (For now, the memo noted, Bloomberg News’ projects and investigations team “will continue to investigate the Trump administration, as the government of the day.”) Bloomberg News has already assigned a reporter to Bloomberg’s campaign (as it did to the New York City mayor’s office when Bloomberg held that position) and, according to the memo, will cover the primary, including “who is winning and who is losing,” proposed “policies and their consequences,” poll results and interviews with candidates.

Note: Micklethwait also said that Bloomberg Opinion columnists “will continue to speak for themselves” in signed columns and that two senior opinion editors — David Shipley and Tim O’Brien — are taking a leave of absence to join Bloomberg’s campaign.
Related: “‘Absolutely indefensible’: Bloomberg News slammed for scaling back coverage following owner’s 2020 bid” (Madison Dibble, The Washington Examiner)
Discuss: If the owner of a news organization runs for political office, how should that outlet cover the race? Is Bloomberg News right to suspend unsigned editorials? Is it right to abstain from investigating any of the Democratic candidates, including its owner? Do you think Bloomberg should sell Bloomberg LP if he becomes president? (During his three terms as mayor of New York City, he turned control over to a management team.)
Idea: Ask groups of students to step into the role of decision-makers at a major news organization whose owner is running for president by creating a policy for covering the race. How would they ensure that they live up to their obligation to rigorously cover the race while also avoiding conflicts of interest?
Online political advertising is a hot topic these days, and last week both Snapchat — the popular photo- and video-sharing app — and Google — the world’s largest search engine — explained their policies on such ads.

In an interview on CNBC on Nov. 18, Snapchat CEO Evan Spiegel said that his company has created a place for political ads on the platform “because we reach so many young people and first-time voters and want them to be able to engage in the conversation” — but, he cautioned, “we don’t allow things like misinformation to appear in that advertising” (which he said is fact-checked by an internal team).

In a blog post two days later, Scott Spencer, Google Ads’ vice president for product management, said that political advertisers would still be able to purchase ads on specific websites, or on articles or videos about a specific topic, but that they could use only general categories of age, gender and location (ZIP codes in the United States) to target the people seeing these ads. He also said that Google’s ad policies prohibit any advertiser from making a false claim, “whether it’s a claim about the price of a chair or a claim that you can vote by text message” — and that this prohibition includes “doctored and manipulated media” such as synthetic “deepfake” videos.
Note: Spencer also said that Google is expanding its practice of providing a “transparency report” for political ads to include state-level “candidates and officeholders, ballot measures, and ads that mention federal or state political parties.” The record for each ad contains a copy of the ad, who purchased it, how much was spent, who was targeted and how many times it has been seen.
Also note: Twitter’s revision to its political ads policy — which bans ads from candidates, parties and government officials and limits “cause-based” ads — took effect Nov. 22. In the U.S., the ban also includes ads from political action committees.
Discuss: Do you think that young people need to be exposed to (accurate) political advertising to “be able to engage in the conversation,” as Snapchat’s Spiegel said? Does Snapchat use targeted advertising? Will political advertisers be able to use data to target voters? Is Google’s new policy — limiting the data points political advertisers can use to target their audiences — a good one? Why or why not?
Idea: Use Google’s Transparency Report website to help students explore political ad spending on Google. In which U.S. state is the most money being spent on political ads? Which political candidates and organizations have spent the most on Google since May 2018, when the company began tracking this?
Another idea: Extend the above activity by having students research groups they don’t recognize and review their ads and targeted audiences.
Related: “Google Changed Its Political Ad Policy. Will Facebook Be Next?” (Kara Swisher, The New York Times)

Two governments that recently implemented “fake news” laws have now exercised them for the first time. In Singapore, Brad Bowyer, a member of the Progress Singapore Party, was instructed by the government to correct a Facebook post in which he questioned the independence of state investment firms. The government’s fact-checking site, Factually, had cited several examples of language in the post that fact-checkers thought were false or misleading. Bowyer updated his post on Nov. 25, noting in his correction: “I feel it is fair to have both points of view and clarifications and corrections of fact when necessary.”

On Nov. 20, Poynter reported that Thailand’s Anti-Fake News Center had arrested an unnamed person for running a scam using closed messaging groups to spread links to obscene websites “that came with advertisements for diet supplement plans.”

Discuss: Do you think governments should pass laws against posting or spreading misinformation? Why or why not? How could such laws be misused by those in power? What actions, apart from enacting new laws, could governments take to help fight the spread of damaging misinformation?

Google is developing automated tools to detect deepfake videos, according to a Nov. 24 report in The New York Times. The company used paid actors to create its own synthetically engineered deepfake videos, then used those fakes to train an algorithm to detect those methods of video manipulation. It then made the collection of fakes available to other researchers trying to build similar tools.

Engineers at Dessa, a Canadian artificial intelligence company, used Google’s fakes to build a detection tool that worked perfectly for that particular sample. But when applied to deepfakes from elsewhere online, the tool failed more than 40% of the time — until the Dessa engineers incorporated those examples from “the wild” into the sample set used to train their tool. Experts predict that deepfake machine-learning technologies will continue to rapidly improve, and that companies developing automated tools to detect these videos will need to quickly incorporate new examples into their work to keep up.
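The Dessa episode illustrates a general pattern in machine-learning detection: a model trained only on one curated set of fakes can fail on examples from “the wild” until those examples are folded into its training data. Below is a minimal toy sketch of that pattern — not Google’s or Dessa’s actual code, and the two-number “feature vectors” and nearest-centroid detector are purely illustrative stand-ins for real video features and real classifiers:

```python
import random

random.seed(0)

def sample(center, n=300):
    """Simulated 2-D feature vectors clustered around `center`."""
    cx, cy = center
    return [(random.gauss(cx, 1.0), random.gauss(cy, 1.0)) for _ in range(n)]

# Authentic videos near the origin; curated (lab-made) fakes in one region of
# feature space; "wild" fakes, made with different techniques, in another.
real = sample((0.0, 0.0))
curated_fakes = sample((4.0, 4.0))
wild_fakes = sample((-4.0, 4.0))

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def make_detector(fakes, reals):
    """Nearest-centroid detector: call a point 'fake' if it is closer to the
    centroid of known fakes than to the centroid of known real videos."""
    fc, rc = centroid(fakes), centroid(reals)
    def is_fake(p):
        dist_fake = (p[0] - fc[0]) ** 2 + (p[1] - fc[1]) ** 2
        dist_real = (p[0] - rc[0]) ** 2 + (p[1] - rc[1]) ** 2
        return dist_fake < dist_real
    return is_fake

def detection_rate(detector, fakes):
    return sum(detector(p) for p in fakes) / len(fakes)

# Trained only on the curated sample, the detector misses most wild fakes...
before = detection_rate(make_detector(curated_fakes, real), wild_fakes)
# ...but improves once wild examples are added to its training set.
after = detection_rate(make_detector(curated_fakes + wild_fakes, real), wild_fakes)

print(f"wild fakes caught before retraining: {before:.0%}")
print(f"wild fakes caught after retraining:  {after:.0%}")
```

The point of the sketch is the retraining step, not the classifier: whatever the underlying model, detection tools stay useful only as long as new manipulation techniques keep getting added to their training data — which is why the experts quoted above expect a continual race between generators and detectors.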

Note: Dessa engineers are featured in the latest episode of The Weekly, a television series produced by The New York Times for FX and Hulu.
Also note: Despite the alarming potential of synthetic video and audio technologies to provoke or confuse voters with fakes of political figures — or to give politicians a way to dismiss as “fake” authentic videos that might damage them — many disinformation experts believe that the threat of deepfakes is overstated. As three current and former Harvard University researchers argue in a Nov. 20 piece for Nieman Lab, cruder, more rudimentary “cheapfakes” will likely serve the purposes of propagandists better and are just as likely as more sophisticated videos to draw in those who are inclined to believe.
Discuss: Should deepfake videos be banned from social media platforms? How should fact-checkers and social media companies differentiate between deepfakes created for satire or amusement and those created for malicious purposes, like influencing voters with false information?
Idea: Contact a local reporter and ask them what, if any, discussions they have had in their newsroom about deepfake videos. How much of a threat do they believe these fakes pose?
Your weekly issue of The Sift is put together by Peter Adams (@PeterD_Adams) and Suzannah Gonzales of the News Literacy Project.
You’ll find teachable moments from our previous issues in the archives. Send your suggestions and success stories to [email protected].
Sign up to receive NLP Connections (news about our work) and Get Smart About News (news literacy tips).