The Sift: ‘Disinformation for hire’ | Teen Vogue mishandles ad | Can media literacy backfire?

  News Literacy Project

Coordinated efforts to disseminate propaganda online are supported by “a worldwide industry of PR and marketing firms ready to deploy fake accounts, false narratives, and pseudo news websites for the right price,” according to a Jan. 6 report by BuzzFeed News and The Reporter, an investigative news outlet in Taiwan.

Such businesses — which in the public relations industry are described as practicing “black PR” — operate all over the world, using a variety of increasingly sophisticated tactics to try to “change reality” in ways that benefit their clients, which include corporations, governments, politicians and political parties.

Among these tactics are publishing “fake news” stories, then using legions of fake social media accounts to amplify those and other messages — thus boosting their ranking in search results — and to spread them across social media platforms and in groups on private messaging apps. The fake accounts are also used to make comments designed to give false weight to specific sentiments, both positive and negative, in line with their clients’ interests. (This form of artificial grassroots expression is known as “astroturfing.”)

Some of these companies are developing advanced disinformation strategies and using emerging technologies, the BuzzFeed report found. The Archimedes Group, an Israeli “black PR” firm, created fake fact-checking groups to promote its clients’ interests. It also managed social media pages both against and for a Nigerian politician, Atiku Abubakar — ostensibly to damage him as well as to identify his supporters “in order to target them with anti-Abubakar content later.”

A “black PR” practitioner in Taiwan — Peng Kuan Chin, who is featured in the BuzzFeed article — has built an “end-to-end online manipulation system” that uses artificial intelligence to scrape organic articles and social media posts for key phrases. It then reassembles them into algorithmically generated pieces, publishes these articles to a group of websites Peng operates, and pushes the links out through thousands of automated social media accounts he controls.

Note: Social media platforms’ actions against “black PR” tactics include removing accounts, pages and groups that engage in such activity, yet the practice continues to grow.
Discuss: Can ordinary people avoid being influenced by professional disinformation efforts online? What steps are social media platforms taking to counteract these practices? What steps aren’t they taking that they should? What steps can you take to ensure the information you use as the basis for your decisions is credible? What kind of information environment might these kinds of practices — especially those that are automated at a large scale — produce if they are left unchecked? How might these kinds of practices affect elections in the coming years?
fake photo of Iranian missile strikes

NO: These photos are not of the Iranian missile strikes on Jan. 8 (local time) against two Iraqi air bases where U.S. troops are stationed. YES: These are photos of other attacks: an Iranian missile fired at Islamic State fighters in June 2017 (top); an explosion from an Israeli missile fired at a site in the Gaza Strip belonging to Palestinian Islamic Jihad in November 2019; and fires in the Gaza Strip following Israeli military strikes in July 2014.

fake video of missiles launching in Iran

NO: The missiles launched by Iran at the two Iraqi air bases were not all shot down. NO: This video does not show an artillery system with electronic vision that fires 50 shots per second. YES: It’s a recording of the video game Arma 3, released in 2013. YES: The same clip was tweeted by multiple accounts last week.

Note: Photos and videos of missile launches and strikes, military exercises, explosions and fires often include little context, making them easy to present in false contexts. A number of other such images and videos circulated out of context on social media last week.
fake satellite image of Australian wildfires

NO: This is not a satellite image of the ongoing wildfires in Australia. YES: It is a 3D digital visualization of wildfire hotspots, created by a photographer and graphic designer using data from NASA.

fake image of a koala burning in an Australian fire

NO: The animals shown in this tweet were not endangered or killed in the Australian wildfires. YES: The photos are taken from other contexts, including fires in California in 2018 and 2019 and a bonfire of taxidermy animals in Jakarta, Indonesia, in 2012.

Note: Dangerous, uncertain and provocative events tend to result in the production and circulation of more viral rumors — in part because people are understandably curious about such events, making them more vulnerable to being tricked. Rumors like this also seek to elicit strong emotional reactions, such as pity and shock.
fake photo of koalas hugging and a girl carrying a koala

NO: The photo of two koalas hugging (left) is not from the Australian wildfires. YES: It has been online since at least October 2019. NO: The image of the young girl (right) is not an authentic photo of the Australian wildfires. YES: It is a composite of several photos created as a piece of digital artwork. YES: The girl in the image is the artist’s daughter.

A flattering piece about Facebook posted by Teen Vogue was the source of much confusion last week. “How Facebook Is Helping Ensure the Integrity of the 2020 Election” appeared on Jan. 8 without any indication that it was a piece of sponsored content, paid for by the world’s largest social media company.

Soon after it was posted, an editor’s note — “This is sponsored editorial content” — was added at the top. Later, that note was removed. Finally, the entire piece disappeared. Lauren Rearick, a contributor to Teen Vogue, was at one point listed as the author, but she told Mashable that she didn’t write it. Asked on Twitter what the piece was, Teen Vogue replied: “literally idk.”

In a post that was later deleted, Facebook’s chief operating officer, Sheryl Sandberg, called it a “great Teen Vogue piece about five incredible women protecting elections on Facebook.” A company spokeswoman initially said that the piece was “purely editorial”; later, Facebook said that “there was a misunderstanding” and that it indeed “had a paid partnership with Teen Vogue related to their women’s summit, which included sponsored content.”

In its statement, Teen Vogue said: “We made a series of errors labeling this piece, and we apologize for any confusion this may have caused. We don’t take our audience’s trust for granted, and ultimately decided that the piece should be taken down entirely to avoid further confusion.”
Discuss: What is the difference between a piece of sponsored content (also known as “branded content” or “native advertising”) and a piece of journalism? Is it important for news outlets to clearly label such content? Why? If you were in charge of Teen Vogue, how would you have handled the piece about Facebook? Was Teen Vogue right to delete it? Did it sufficiently explain how the mistakes in handling the piece were made? Does this change the level of trust you have in Teen Vogue? Why or why not?
Idea: Have students find examples of sponsored content published by up to five different standards-based news organizations. Ask students to note the differences between the sponsored content and the straight news coverage from the same outlet. In what ways are they similar? In what ways are they different? Do they look the same, or are they labeled differently?
Related: “Branded Content,” a lesson in the Checkology® virtual classroom (Premium account required).
Could the critical-thinking skills taught as part of media literacy actually make some students more prone to conspiratorial thinking? That’s the question Will Partin raises in a Jan. 8 opinion piece in The Outline, an online publication “focused on the increasingly complex confluence of culture, power and technology.” Partin, a graduate research assistant in the Department of Communication at the University of North Carolina at Chapel Hill, argues that many of the ways that adherents of the QAnon conspiracy theory support their beliefs — by, for example, doing their own research, reading critically and questioning all sources of information — are also strategies that students often learn as part of “media literacy.”

In short, Partin contends that problems (such as belief in conspiracy theories) can arise when skepticism runs amok and turns into a kind of cynicism that leads people not just to question information but to distrust it outright — including information from “knowledge-making institutions” like the medical establishment and mainstream news organizations.
Note:  The term “media literacy” is often used as an umbrella term for several overlapping fields, including news literacy and information literacy.
Also note: What is taught as “media literacy” may vary significantly from one educator to another.
Discuss: Can some healthy information habits, such as doing your own research about issues, lead people astray? How? How can we know when to trust experts and respected institutions about a given subject? Does exercising skepticism online mean questioning all sources of information equally? How can the practice of asking critical questions lead to an uncertainty that facts exist, and that some things are demonstrably “true”?
In The New York Times’ “most transparent endorsement process to date,” all editorial board interviews of candidates for the Democratic presidential nomination are, for the first time, being conducted on the record, Kathleen Kingsbury, deputy editorial page editor, wrote in a Jan. 9 Twitter thread. They are also being filmed for the first time, and “full, annotated transcripts” will be published online this week. The editorial board plans to announce its preferred candidate on Jan. 19 — two weeks before the Iowa caucuses, the first opportunity for voters to make a selection in the 2020 presidential campaign.

The Times’ opinion section, which includes the editorial board, is separate from the news organization’s newsgathering operation, Kingsbury noted. “Voters have a lot to think about in this election cycle, and we want to help,” she tweeted.
Note: “On the record” means that everything discussed in the interviews can be published.
Discuss: Should newspapers’ editorial boards endorse political candidates? Do you agree with the Times’ decision to make the endorsement process more transparent? Do you think that watching these interviews or reading the transcripts will help voters make decisions? Why or why not? Do you think that this kind of added transparency increases the value of the Times’ opinion journalism? What is the difference between a news organization’s opinion section and its newsgathering operation?
A settlement has been reached in the defamation lawsuit filed by Nicholas Sandmann, a student at Covington Catholic High School in Kentucky, and his parents against CNN. The Sandmanns sued the network in March (PDF) over its coverage of an incident at the National Mall a year ago.

The coverage grew out of a viral video — later found to have been taken out of context — that showed Sandmann, wearing a “Make America Great Again” hat, appearing to confront Nathan Phillips, a Native American activist, near the Lincoln Memorial on Jan. 18, 2019, as a nearby group of Black Hebrew Israelites, another activist group, shouted at them. (The National Mall was the site of two rallies that day, the Indigenous Peoples March and the March for Life.)

The Sandmanns’ suit against CNN had asked for $275 million or more in damages; no information about any payment has been made public. The Sandmanns have also sued The Washington Post and NBCUniversal.

Discuss:  What is considered “defamation” in the United States? What are the two main types of defamation? What must be proven about a published or spoken statement for it to be defamatory? Are thresholds for defamation different for private citizens and public figures? Is this fair?

Idea:  Have students learn the basic conditions that must be met for a statement to be found defamatory. (The statement in question must be provably false, not a matter of opinion; it must have damaged the subject’s reputation; and the publisher of the statement must be shown to have acted “negligently.”) Have students review one or more examples of the CNN coverage listed in the Sandmanns’ complaint (see the PDF linked above) — such as this Jan. 19, 2019, broadcast segment, or this one, or this archived version of CNN’s first print report published on its website — and ask them if they think that the reports are defamatory. (Note: If you choose to use “Nicholas Sandmann: The Truth in 15 Minutes” — the video prepared by the Sandmanns’ legal team, linked in paragraph #80 of the complaint — be aware that it includes raw footage with foul language.)

Note: Since CNN has updated its original reporting on this incident but maintained the original URLs for the articles on its website, you might use this as an opportunity to teach your students how to search for captures of a specific URL at a particular time on a particular date, using web archivers like the Internet Archive’s Wayback Machine.
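For classrooms comfortable with a little code, the Internet Archive offers a public “availability” API that returns the capture of a URL closest to a given date. The minimal sketch below (Python, standard library only) builds such a query; the article path shown is a placeholder for illustration, not an actual CNN URL.

```python
from urllib.parse import urlencode

WAYBACK_API = "https://archive.org/wayback/available"

def wayback_query(url: str, timestamp: str) -> str:
    """Build a Wayback Machine 'availability' API query for the capture
    of `url` closest to `timestamp` (YYYYMMDD or YYYYMMDDhhmmss)."""
    return WAYBACK_API + "?" + urlencode({"url": url, "timestamp": timestamp})

# Placeholder article URL -- substitute the page you want students to check.
print(wayback_query("example.com/news/article", "20190119"))
```

Fetching the resulting query URL (for example, with `urllib.request.urlopen`) returns JSON; when a capture exists, its `archived_snapshots.closest.url` field points to the dated snapshot, which students can compare against the live page.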


Facebook has toughened its policy on the manipulated and misleading videos known as deepfakes, Monika Bickert, vice president for global policy management, wrote in a Jan. 6 post to the company’s Newsroom page. A deepfake will be removed from the platform if it has been altered in ways that are not obvious to “an average person” and could mislead users into thinking that someone said words that they actually didn’t; it will also be removed if it is produced by artificial intelligence or machine learning “that merges, replaces or superimposes content onto a video,” making it look authentic.

However, Bickert continued, the policy does not apply to parody, satire or video edited “to omit or change the order of words.”

Discuss: How should Facebook decide which deepfake videos are considered “satire”? How might bad actors attempt to exploit this exemption? Which pose a bigger threat for political misinformation: deepfakes or “cheapfakes” (video that has been edited or altered in deceptive ways using much less sophisticated methods)?
Your weekly issue of The Sift is put together by Peter Adams (@PeterD_Adams) and Suzannah Gonzales of the News Literacy Project.
You’ll find teachable moments from our previous issues in the archives. Send your suggestions and success stories to [email protected].
Sign up to receive NLP Connections (news about our work) and Get Smart About News (news literacy tips).