The Sift: Facebook considers satire | New laws target misinformation | Timeless on TikTok

  News Literacy Project

The Wall Street Journal reported last week that Facebook is planning to exempt satire and opinion content from its fact-checking program. This would mean that posts that contain demonstrably false claims, but are deemed by the platform to be either satire or opinion, would not be referred to its network of third-party fact-checkers — and thus would not be downgraded in its algorithm or trigger fact-checks to appear alongside them.

News of the expected policy change came just a week after Facebook, citing a "newsworthiness exemption," said it would continue to exempt politicians' posts from its fact-checking program (unless the politician shares "previously debunked content" in which case the post will be demoted and will display fact-checking information). It also follows several contentious incidents involving satirical and opinion content and the company’s third-party fact-checking partners, including a debate in July between Snopes and the Christian satire site The Babylon Bee and a disagreement in August between Health Feedback, which focuses on accuracy in health and medical coverage, and Live Action, an anti-abortion group.

At issue is an ongoing debate about whether satirical and opinion content should be fact-checked, and how Facebook plans to determine which publishers and posts to exempt. A study published in August by three Ohio State University researchers found that false satirical claims wind up being believed by a significant number of people. This is no doubt partially because people don’t always recognize the satirical sources they see on social media, but it’s also because satire is often insufficiently labeled or is copied and shared out of its satirical context.

The wholesale plagiarism of satirical pieces by “fake news” websites that use them as clickbait for ad revenue is a well-documented problem, but individual elements of these stories also go viral. For example, a fake tweet attributed to President Donald Trump and created as a graphic for a piece by a Canadian satire site, The Burrard Street Journal, has taken on a life of its own (though the comments on the piece itself seem to indicate that it, too, was mistaken as legitimate reporting).

Those who intentionally spread disinformation online often claim to have been joking, or merely sharing an opinion, once fact-checkers call them out — which makes Facebook’s enforcement of this anticipated new policy even more difficult.

Note: The first item in this week’s viral rumor rundown (below) is another example of a satirical video, created by an individual, that circulated outside its original context and was mistaken as authentic.

Also note: An activist from the far-right conspiracy group LaRouche PAC engaged in a piece of "live-trolling" — which the group later claimed was satirical — at a town hall hosted last Thursday by Rep. Alexandria Ocasio-Cortez (D-New York).
Discuss: Do false claims on Facebook based on a staged "satirical" stunt qualify as satire under its new policy? Would a clip or a meme of the satirical performance presented out of context still be protected as satire under the new policy?
Idea: Put students in the place of decision-makers at Facebook and ask them to issue a ruling for the following cases: Which would they exempt under the platform’s policies, either anticipated or existing, regarding satire, opinion and statements by public officials?
  • A genuine piece of satire is published by a well-known satire publication. It seems to be broadly mistaken as legitimate by people online despite being clearly labeled as satire in the URL preview on the platform.
  • A genuine piece of satire is published by an obscure satire publication. It is broadly mistaken as legitimate by people online and is not clearly labeled as satire.
  • A single element — such as a doctored photo or a fake tweet — of a legitimate satire piece is copied and goes viral outside of its original satirical context.
  • An individual celebrity, known for pulling online hoaxes, posts a satirical video that many people mistake as serious.
  • An individual politician repeats a false claim that originated in a satirical publication, but doesn’t attribute it.
  • An online troll posts a piece of misinformation but says it’s just a joke.
  • A partisan activist posts a piece of misinformation but says that it illustrates an opinion about a larger truth.
Related: "Opinion: Facebook just gave up the fight against fake news" (Brian A. Boyle, Los Angeles Times)
Fake President of Finland Image

YES: Sauli Niinistö, the president of Finland, met with President Donald Trump on Oct. 2 at the White House. NO: Niinistö did not later post a video in which he said he prefers "the company of reindeer and snow." YES: Actor Rob Paulsen appears in the video and shared it to his Instagram account.

Note: This is an example of a piece of satire that has circulated outside its original satirical context, causing confusion.
Discuss: Should Paulsen have labeled this video more clearly as a piece of satire? Is there anything he could have done to prevent it from being misunderstood if it was copied and used elsewhere online? Is it reasonable to expect the creators of satire to take steps to prevent such confusion, or is that the sole responsibility of the audience? Even if you always recognize satire, can other people mistaking a piece of satire for something real have an impact on you?
Fake Hong Kong anti-face recognition device

NO: This "anti-face recognition device" was not used by pro-democracy protesters in Hong Kong. YES: It’s a design project by art students in the Netherlands — a prototype of a “distorting overlay” that could help confound facial recognition technology. YES: The abbreviation for the Dutch art school — Hogeschool voor de Kunsten Utrecht — is HKU, which is also the abbreviation for the University of Hong Kong.

Fake Facebook Post - City of New York did not make it illegal to call someone illegal
NO: The City of New York did not make it illegal to call someone "illegal" or an "illegal alien." YES: In September, New York City’s Commission on Human Rights released new legal enforcement guidance [PDF] for the New York City Human Rights Law as it relates to public accommodations, employment and housing. The city’s press release [PDF] about this guidance noted that "the use of the term 'illegal alien,' among others, when used with intent to demean, humiliate or harass, is illegal under the law."
Fake Thomas Jefferson quote image
NO: This quote does not appear in any of Thomas Jefferson's writings — or in transcripts and other accounts of his speeches.
Fake Facebook Post - Israel does not train and arm teachers
NO: Israel does not train and arm teachers to prevent mass shootings. NO: No assailants in mass school shootings in Israel have been killed by armed teachers. YES: In 1995, the Israeli government required that armed guards be posted at every school.
It is now illegal in California to distribute political "deepfakes" (videos digitally manipulated to make a politician appear to say or do something that wasn’t said or done) within 60 days of an election. The new law, which Gov. Gavin Newsom, a Democrat, signed last Thursday, expires in 2023. Candidates can sue to stop the spread of videos and can seek financial damages, although the law imposes no criminal penalties for creating or distributing them.

The law also covers videos edited to portray a candidate in a false way, such as the clip of House Speaker Nancy Pelosi that was slowed down to make her speech sound slurred. It contains an exception for media organizations reporting on fakes and exempts images and audio that disclose that they were manipulated.

In Singapore, the Protection From Online Falsehoods and Manipulation Act — which makes it illegal to spread false statements — took effect last Wednesday, alarming supporters of free speech. Under the new law, the government can order content deemed false to be removed or can require that a correction be posted; it can also order Facebook, Google and other technology companies to block accounts or sites from their platforms and services. Those found guilty of violating the law can face hefty fines and prison terms ranging from 12 months to 10 years.
Discuss: Are laws like those enacted in California and Singapore positive moves in the fight against misinformation? Do you think they infringe on free speech rights? Why or why not? Are there other ways the laws might have a negative effect?

A U.S. Customs and Border Protection officer repeatedly asked a journalist to admit that he writes propaganda before returning his passport and allowing him into the U.S., according to the reporter in question, Ben Watson.

Watson, a news editor at Defense One, a Washington-based news outlet focused on defense and national security topics, said the incident happened last Thursday at Dulles International Airport as he was coming home from an assignment in Denmark. In his own report for Defense One, Watson lists several other instances in the last year in which journalists have reported being harassed by U.S. customs personnel. He has filed a civil rights complaint with the Department of Homeland Security.


Note: In April, the United States fell to 48th out of 180 countries on Reporters Without Borders’ annual World Press Freedom Index — largely due to increased harassment and physical attacks against journalists.

The BBC last week overturned its decision to partly uphold a complaint against Naga Munchetty, the co-host of BBC Breakfast, the network’s most popular morning program.

In July, Munchetty and her co-host, Dan Walker, were discussing the tweet in which President Trump called on four Democratic congresswomen of color — Reps. Alexandria Ocasio-Cortez, Ilhan Omar, Ayanna Pressley and Rashida Tlaib — to “go back and help fix the totally broken and crime infested places from which they came.” (All four are U.S. citizens.)

As Walker was talking about the tweet, Munchetty said: “Every time I have been told, as a woman of color, to 'go home,' to 'go back to where I came from,' that was embedded in racism.” When he then asked how Trump’s tweet made her feel, she replied: “Furious. Absolutely furious, and I can imagine that lots of people in this country will be feeling absolutely furious that a man in that position feels it’s OK to skirt the lines with using language like that.”

The BBC received a complaint about her comments, citing the company's guidelines about impartiality. It partially upheld the complaint and issued her a reprimand; a BBC spokeswoman said the complaints unit had determined that "while Ms. Munchetty was entitled to give a personal response to the phrase 'go back to your own country' as it was rooted in her own experience, overall her comments went beyond what the guidelines allow for."

But after dozens of journalists and actors of color called on the BBC to reverse its decision, Tony Hall, the BBC’s director general, said last Monday that the complaints unit was wrong and that he did not think that Munchetty’s "words were sufficient to merit a partial uphold of the complaint around the comments she made." In addition, The Guardian reported the same day that the original complaint had also cited Walker’s comments, though no action was taken against him. And today, Ofcom, the United Kingdom’s communications services regulator, said it has "serious concerns around the transparency of the BBC’s complaints process."

Discuss: Are some statements and actions objectively “racist”? When and how should news outlets use that word in straight news reporting? Do you think the BBC was right to overturn its decision? The BBC said in one statement that “[o]ur audiences should not be able to tell from BBC output the personal opinions of our journalists or news and current affairs presenters….” Do you agree with this standard for news outlets? Why or why not? 

A majority of American adults now go to social media for at least some of their news,  with 54% “often” or “sometimes” getting news from Facebook, YouTube, Twitter and other platforms in 2019, compared with 47% last year, according to a Pew Research Center study released last week. But the “noxious speech” prevalent on these platforms has led to real-life bloodshed, writes Andrew Marantz, a staff writer for The New Yorker, in an essay published last week in The New York Times.

“The question is where this leaves us,” Marantz says. “Noxious speech is causing tangible harm. Yet this fact implies a question so uncomfortable that many of us go to great lengths to avoid asking it. Namely, what should we — the government, private companies or individual citizens — be doing about it?”

Here’s one answer: “Nothing” — which Marantz says he hears from people across the political spectrum, citing, sometimes incorrectly, the First Amendment. Still, he notes in his essay, adapted from his forthcoming book, Antisocial: Online Extremists, Techno-Utopians, and the Hijacking of the American Conversation, there are steps that can be taken to reduce risks.

Discuss: How can people get news from social media platforms without being influenced by the “noxious speech” that has led to real-world harm? Why are some people vulnerable to extremism? Are there any implications for First Amendment rights when social media companies remove posts and accounts?

Idea: Review with students the steps that Marantz proposes in his essay for Congress and social media companies, such as lawmakers appropriating funds for a news literacy campaign and Facebook hiring thousands of content moderators (and paying them a fair wage) and replacing its chief operating officer, Sheryl Sandberg, with Susan Benesch, a human rights lawyer and the director of the Dangerous Speech Project. Do your students agree that these steps would help improve the climate on social media platforms and make the world safer? Should any steps be taken at all? Are there other measures that the government and tech companies can take? Formulate a response to Marantz based on this discussion and share it with him (@andrewmarantz) on Twitter from a class account.

Time is all but nonexistent on the explosively popular TikTok platform, explains Louise Matsakis in Wired. The short videos (most no more than 15 seconds) are not time-stamped when they are uploaded, and the user interface, unlike those on other social platforms, doesn’t include a native clock at the top of the screen. The only place on TikTok where a time stamp exists is on the comments — and those aren’t listed chronologically, so on a video with thousands of comments, it’s almost impossible to find the oldest one.


All of this might be good for business — making it extremely easy for users to lose track of time on the platform (and see more ads) — but it also means that verifying TikTok content can be arduous. For example, it can be difficult to determine who was the first to upload a particular video or type of video, and understanding the context of videos in search results — such as those returned by searching “protest” — can be challenging without knowing when the video was posted. The lack of a time stamp has also worked, whether intentionally or not, to keep news off the platform.

Discuss: Should TikTok add a time stamp to videos when they are uploaded? Could bad actors upload misleading videos to the platform after a major event that claim to be from before it? If TikTok were to include the location of uploaded content the way other platforms do, would that be a good or bad thing? How could old, out-of-context video clips of politicians be weaponized on the platform?
Related: "How TikTok Holds Our Attention" (Jia Tolentino, The New Yorker)
Your weekly issue of The Sift is put together by Peter Adams (@PeterD_Adams) and Suzannah Gonzales of the News Literacy Project.
You’ll find teachable moments from our previous issues in the archives. Send your suggestions and success stories to [email protected].
Sign up to receive NLP Connections (news about our work) and Get Smart About News (news literacy tips).