The Sift: Manipulating social media | TikTok demotes ‘special users’ | Infowars exposé

  News Literacy Project

Despite the steps that social media companies have taken in recent years to limit the spread of misinformation and combat coordinated inauthentic activity, two reports published last week serve as important reminders of how easy it still is for bad actors to manipulate online audiences in ways that can have a substantial impact on people’s political beliefs.

The Guardian uncovered a plot to use 21 far-right Facebook pages as conduits for pushing links to inflammatory Islamophobic articles hosted on 10 ad-heavy websites. People running the revenue-generating scheme convinced the administrators of the 21 pages — who live in the United States, Australia, the United Kingdom, Canada, Austria, Israel and Nigeria — to allow them to post content directly to the accounts, reaching a combined 1 million people around the world. (As part of their investigation, published Dec. 5, reporters Christopher Knaus, Michael McGowan, Nick Evershed and Oliver Holmes were able to confirm the identity of one person involved in the operation.)

The network repeatedly targeted candidates on the political left at critical points in national campaigns, the Guardian found. In addition, it singled out Muslim politicians, including Mehreen Faruqi, an Australian senator; Sadiq Khan, the mayor of London; and Ilhan Omar, a member of the U.S. House of Representatives, who was the subject of 1,400 posts to the 21-page network over a two-year period.

On Dec. 6, researchers from the NATO Strategic Communications Centre of Excellence released a study demonstrating how easy it is to purchase artificial engagement on Facebook, Instagram, Twitter and YouTube. The research was an attempt to test these companies’ effectiveness at stopping paid influence campaigns designed to game their algorithms and deceive their users about the popularity of posts.

For 300 euros (about $330 U.S.), the researchers bought 3,530 comments, 25,750 “likes,” 20,000 views and 5,100 followers on 105 posts on the four platforms. They were purchased from 16 companies (11 in Russia, two in Germany and one each in France, Poland and Italy) that sell artificial social media engagement. In all, 18,739 accounts were used to manipulate the four platforms. Though the researchers reported the results of these purchases to the social media companies, 95% of the for-hire accounts were still active three weeks later, and most of the engagement metrics remained intact.

Facebook “was best at blocking the creation of accounts under false pretenses” but “rarely took content down.” Providers of the fake likes, shares, comments and followers found that Instagram, which is owned by Facebook, “was the easiest platform to manipulate”; inauthentic engagement on the photo-sharing site also cost the least. YouTube was “the worst at removing inauthentic accounts and the most expensive to manipulate.” Twitter emerged as the best at responding to the fake engagement, removing half the likes and retweets that were purchased.

False inflammatory stories and coordinated artificial engagement online were major features of social media manipulation during the 2016 presidential election. These investigations suggest that we can expect more of the same during the 2020 campaign.

Related: “The Toxins We Carry” (Whitney Phillips, Columbia Journalism Review).
Note: Not all accounts involved in for-hire engagement arrangements are fake or automated. Many are run by people who agree, perhaps in exchange for payment, to like, share, watch and comment on content posted online.
Discuss: What effect might fabricated news stories have on our political conversations? Who is likely to be affected the most by the activities exposed by the Guardian’s investigation? Can fake engagement (likes, shares, views and comments) distort people’s understanding of political issues and priorities? Should companies that sell such services be banned? What could social media companies do to combat the “for-hire manipulation industry”?
Fake Ronald Reagan quote

NO: Ronald Reagan did not say that Nancy Pelosi is “extremely evil” and from “a well oiled corrupt democrat family.” YES: This false quote has been circulating online since late January 2019.

Discuss: How common are false quotes online? Do you think false memes like this are effective? Why? Why do you think the person who created this chose to feature Ronald Reagan? How would you classify this example of misinformation using this taxonomy (created by the journalism nonprofit First Draft)?
Idea: Make learning about false quotes into a game. Give teams of students a set amount of time (for example, 10 minutes) to collect as many false quotes online as they can. Then have the teams square off as they share their examples for points: +3 for a unique example that no other team found and +1 for examples that at least one other team found. The team with the most points wins.
Related: The Library of Congress offers a variety of resources for finding, investigating and verifying quotations.
False Pelosi Facebook post

NO: Holiday cards addressed to “a recovering American soldier, c/o Walter Reed Army Medical Center” do not get delivered to a hospitalized American soldier. YES: Walter Reed used to accept such mail, but the practice was discontinued for security reasons after the Sept. 11, 2001, terrorist attacks. YES: In 2011, Walter Reed Army Medical Center merged with the National Naval Medical Center to create the Walter Reed National Military Medical Center in Bethesda, Maryland. Like all military hospitals, it does not accept mail sent to unnamed hospitalized military personnel (the U.S. Postal Service either returns such mail to the sender if an address is available or opens it in a postal facility to determine if a return address can be found). YES: Well-intentioned people have shared this viral social media post for over a decade, and it tends to recirculate every year during the holidays.

Note: While many viral rumors target negative emotions — such as fear, anger, outrage and disgust — and are spread to advance an ideological agenda, some elicit positive emotions, such as hope, and are spread for altruistic reasons.
Fake antifa Facebook post

NO: The video in this Facebook post does not show a member of the antifa (anti-fascist) movement blocking a car and getting beaten up by the driver. YES: In the video, shot on Nov. 21 in Pueblo, Colorado, and spread widely on social media, a man wearing a ski mask forcefully attempts to prevent people from turning at an intersection if they fail to use a turn signal (WARNING: physical violence and profanity).

Related: “Viral video of Pueblo fight warps into right-wing ‘propaganda’” (Andrew McMillan and Dan Beedie, KRDO)
Fake hedgehog doll

NO: One of the bags in this image does not actually bear a “Refugees welcome” message. YES: It is a doctored version of a stock photo.

YES: This was shared by Paul Joseph Watson, a British YouTube personality and protégé of the right-wing conspiracy theorist Alex Jones.


NO: During the 2016 presidential campaign, George Soros and Hillary Clinton did not pay women to accuse Donald Trump of sexual assault. NO: The New York Times did not “affirm” this false claim. YES: On Dec. 31, 2017, the Times published a report about the practice of political operatives raising money to support accusers who come forward with allegations of sexual harassment against political figures. NO: The report did not mention Soros, a billionaire financier who has contributed to Democratic causes and candidates. YES: It mentioned that a lawyer, Lisa Bloom, had contacted a pro-Clinton political action committee for support in vetting a sexual harassment claim against then-candidate Trump. YES: The report included details of both Republican and Democratic activists engaging in these activities, and it outlined concerns among victims’ rights advocates about the dangers of politicizing such claims.

The video-sharing platform TikTok instructed its moderators to demote videos featuring people with disabilities and those deemed “susceptible to harassment or cyberbullying based on their physical or mental condition,” according to internal documents leaked to Netzpolitik, a German website that reports on digital culture. Its report, published Dec. 2, included a screenshot of one line of the policy, which listed “Autism,” “Down Syndrome” and “disabled people or people with some facial problems” as examples of susceptible subjects.

The company told moderators — who have about 30 seconds, on average, to review each post assigned to them — to give these videos a “Risk 4” ranking, making them visible only to other users in the same country. Videos featuring people considered particularly vulnerable were given another ranking that removed them from the recommendation algorithm altogether.

Netzpolitik also obtained a list, compiled by moderators, of “special users” whom the moderators singled out as the most vulnerable to cyberbullying — for example, individuals who are obviously overweight and those who identify as lesbian, gay or non-binary in their profiles. A TikTok spokesperson called the policy “blunt and temporary” and claimed that the company has “long since removed the policy,” but Netzpolitik found that it was being applied as recently as September.
Discuss: What steps should social media platforms take to protect users who are especially vulnerable to bullying and harassment? What might some unintended consequences of this policy be for users whose videos were limited? Do you think the policy was intended to protect the targeted users or TikTok’s reputation?
Idea: TikTok would not confirm that copies of this policy have been removed from circulation at the company. Have students contact the company to try to get a response to this question, then share the resulting communications — including any non-response — with one or more reporters who have covered this story.

Traditional media organizations play only a small role in the way the British public gets election news, according to a study of six volunteers’ smartphone use by Revealing Reality, a London-based research agency, and The Guardian, a British news organization. The volunteers’ phone screen use over three days at the end of November was recorded as a video file and analyzed to find content related to the Dec. 12 general election, in which voters will choose all 650 members of the House of Commons, the lower house of Parliament. In addition, each volunteer was interviewed for three hours. (The five women and one man — ranging in age from 22 to 59, with diverse backgrounds, hometowns and political views — should be seen as a snapshot, rather than a representative sample, of the British population, the study noted.)

In the past, the news that people read, watched or listened to was largely determined by newspaper editors, the heads of broadcast networks’ news divisions and similar gatekeepers. However, news consumption on phones “is shaped more by the hands-off approach of companies such as Facebook,” Guardian media editor Jim Waterson wrote on Dec. 5. As a result, he continued, people today — who increasingly scroll passively through headlines, instead of actively seeking out information — are left with the responsibility of managing their own news diet.

“It’s total anarchy,” Revealing Reality’s managing director, Damon De Ionno, told The Guardian. “The idea of fake news and fake ads, with Russians manipulating people, is a really easy bogeyman. The reality is there’s many more shades of gray, and it’s hard to unpick.”

Discuss: How do students encounter political news? Do they seek it out, or does it find them? How much news do students receive on their smartphones? Is receiving news on a mobile device necessarily inferior to getting it through other methods, such as print, computer or television and radio? Are there ways people can use mobile and social media technologies to improve their media diets?
Idea: Have students re-create the study by observing their own election news consumption for three days and writing up a short report of their findings.
Another idea: Ask students to create an infographic or guide to help their parents avoid the pitfalls of consuming news on mobile devices and through social media while taking advantage of the opportunities afforded by these technologies and platforms.

A former video editor and “reporter” for Infowars wrote an exposé about his experience working for Alex Jones, the well-known conspiracy theorist who runs a far-right media network that includes several websites and a talk radio show (distributed primarily online). In an article for The New York Times Magazine (published online on Dec. 5 and in the magazine’s Dec. 9 issue), Josh Owens detailed the operations of the “conspiracist media empire,” highlighting the ubiquity of Jones’ reckless and volatile behavior and describing the staff’s attempts to come up with “evidence” for his wild assertions. Owens recounted how he and other “reporters” who worked for Jones ignored, twisted or fabricated details to fit their boss’s conclusions — and how they sought to exploit public fears to create increases in both online traffic and merchandising revenue.
Related: “YouTube says viewers are spending less time watching conspiracy videos. But many still do.” (Greg Bensinger, The Washington Post).
Discuss: What makes conspiracy theories attractive? What conspiracy theories is Jones known for espousing? How might conspiracy theories negatively affect the lives of the people they involve? Do you think YouTube, Apple Podcasts, Facebook and Spotify were right to ban Jones in August 2018?
Idea: Divide the class into two groups. Assign Owens’ article to one group and Greg Bensinger’s article (linked above) to the other. Then pair students from each group for a 15-minute conversation to share what they read, and ask them to determine whether tech companies are doing enough to combat the problems online that result from dissemination of conspiracy theories and how large those problems are.

Anti-vaccination activists whose efforts have been stymied by crackdowns on some social media platforms are turning to Instagram, where it’s still easy to post misinformation about vaccines. In a report published Dec. 6, Isobel Cockerell of Coda described how the anti-vaccination movement is taking advantage of Instagram’s lax enforcement and using creative hashtag methods — hijacking pro-vaccination and even pro-choice hashtags, such as #righttochoose, and using characters like left parentheses and cedillas to spell “vaccine” (for example, “va((ine” or “vaççine”) — to game the platform’s algorithm and reach a wider audience.
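The character-swapping trick described above can be unmasked programmatically. This is a minimal illustrative sketch (the confusables mapping is a made-up sample, not any platform’s actual moderation logic) showing how Unicode normalization can recover the intended word behind spellings like “va((ine” or “vaççine”:

```python
import unicodedata

# Illustrative, non-exhaustive mapping of punctuation used as
# letter stand-ins (e.g. "(" standing in for "c").
CONFUSABLES = {"(": "c", ")": "c", "©": "c"}

def normalize(text: str) -> str:
    # NFKD splits accented characters like "ç" into a base letter
    # plus a combining mark, which we then discard.
    decomposed = unicodedata.normalize("NFKD", text.lower())
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    # Replace punctuation characters used as letter look-alikes.
    return "".join(CONFUSABLES.get(ch, ch) for ch in stripped)

def mentions_vaccine(text: str) -> bool:
    return "vaccine" in normalize(text)
```

A production system would use a far larger confusables table (Unicode publishes one), but even this sketch shows why simple exact-match hashtag bans are easy to evade.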

Anti-vaccination activists are also turning to in-person protests — physically confronting or following lawmakers, doctors and others who speak out about the importance of vaccinations, according to a Dec. 6 report by Brandy Zadrozny and Erika Edwards of NBC News. Joshua Coleman, who travels the country to help anti-vaccination groups plan and document protests, told NBC that because of crackdowns by Facebook and YouTube, he’s “not able to communicate online through videos anymore and actually have people see it. That’s kind of what started the whole concept of taking it to the streets and being out there with signs and with flyers.” Supporters of vaccination are concerned that this shift toward physical protests and harassment “could have a chilling effect among public health advocates and parents.”

Related: “Anti-Vaxxer Arrested in Samoa Boasted of American Support via Social Media” (Arturo Garcia,
Discuss: Should dangerous medical misinformation, such as anti-vaccination propaganda, be banned from social media? How can Instagram and other social media platforms prevent activists from using look-alike punctuation marks or characters in foreign alphabets to get around hashtag bans and blocked hashtag searches? How can they prevent anti-vaccination accounts from hijacking existing hashtags about other subjects?
Idea: Re-create Coda’s experiment by making a screencast recording of someone searching for “vaccines” on Instagram, then share the results publicly and with Coda’s Isobel Cockerell.
Another idea: In groups, have students search Instagram for hashtags that anti-vaccination activists are using to spread misinformation, then have them report those posts to the platform’s moderators.

A new Russian law gives the government the authority to label individuals — including independent journalists and bloggers — as “foreign agents.” The law, signed by President Vladimir Putin on Dec. 2, requires reports produced by a person paid by a foreign country to be labeled as having been distributed by a foreign agent; punishment for not complying with the law may include fines and imprisonment.

Before the law was enacted, 10 human rights and press freedom organizations, including Human Rights Watch, the Committee to Protect Journalists and Reporters Without Borders, signed a statement contending that it would further curtail independent journalism. Russia already had a law allowing the government to designate nongovernmental organizations and media outlets that receive funding from foreign sources as foreign agents.

Related: “Moscow: Reporter’s Notebook” (Jamie Dettmer, Voice of America).
Discuss: How would a report identified as having been distributed or created by a “foreign agent” affect your view of it? Would the label cause you to doubt the article’s credibility? Was the U.S. Justice Department right to pressure RT (the state-supported English-language news outlet formerly known as Russia Today) to register its U.S.-based channel, RT America, as a foreign agent two years ago? (Here is how RT reported that action.) Is there a difference between the Russian government forcing a government-funded news outlet (such as Voice of America, Radio Free Europe and the BBC) to register as a foreign agent and forcing independent journalists to do so?
Idea: Have students research Russia on the Reporters Without Borders website. Where does Russia stand in the 2019 World Press Freedom Index? How does Russia’s ranking compare with the ranking of the United States?
Another idea: Have students compare the way mainstream news outlets covered this event with RT’s coverage. What differences do they notice?
Your weekly issue of The Sift is put together by Peter Adams (@PeterD_Adams) and Suzannah Gonzales of the News Literacy Project.
You’ll find teachable moments from our previous issues in the archives. Send your suggestions and success stories to [email protected].
Sign up to receive NLP Connections (news about our work) and Get Smart About News (news literacy tips).