Partisan local news outlets are filling the gap left by the steep decline of local media in the United States, including the disappearance of more than 2,000 local newspapers since 2004. “Pink slime” publications churn out algorithmically generated stories, some of them fraudulent, alongside local news, and often lack transparency about who funds them. Meanwhile, openly partisan newsrooms may include some reporting from on-the-ground writers, but their work is nonetheless guided by political goals.
Idea: Share a local print or digital newspaper in class and have students examine and discuss the front page. Look at headlines, photos, bylines and the masthead. Which stories come from a wire service, such as the Associated Press? Which stories are written by local journalists? What topics do the local stories cover? How long has this newspaper been in your town or region? How is it funded? Do the articles appear to follow journalism standards and ethics, such as verification, fairness and transparency?
Dig Deeper: Use this think sheet to further explore “pink slime” journalism and what news should look like when it’s produced by an organization that aspires to put the public’s interest first.
There aren’t enough fact-checkers to keep up with social media content in English, and there are even fewer resources to vet Spanish-language posts. Waves of political and health misinformation reach Latinos across the United States on social media and through encrypted messaging apps like Telegram and WhatsApp. Researchers found that the spread of misinformation among Latinos increased in 2020, much of it focused on the COVID-19 pandemic and the contentious presidential election. Misinformation targeting Latinos is also found in political ads invoking the specter of “socialism” in connection with liberal policies.
Discuss: Why is the spread of misinformation across messaging apps challenging to moderate? Why is there less content moderation on Spanish-language social media posts online? What posts have you seen recently that raised credibility questions for you? Do your friends and family discuss politics online? Have you ever seen a political ad that contained provable falsehoods? What is the impact of misinformation on elections and democracy?
What makes some rumors go viral online? The Election Integrity Partnership compiled 10 factors that shape the potential for a falsehood to spread, including emotional appeal, novelty and repetition. Social networks, online algorithms and automated accounts also play a role in amplifying misinformation.
Discuss: What steps can people take to protect themselves against election misinformation as the November midterms approach?
NO: This is not a genuine video of a car escaping police by driving underneath a semitrailer. YES: This video features computer-generated imagery (CGI).
NewsLit takeaway: This video of a sports car evading police by sliding under a semitrailer appears to show a real-life version of a fantasy that has been playing out in blockbuster movies for decades, but this viral video is even less realistic than those movie stunts. The YouTube user who created this video explained in 2019 that computer animation software was used to add a sports car, truck and police vehicle to a genuine video of a highway.
While this video was debunked in 2019, it continues to circulate online and generate millions of views for social media accounts that share it without credit or disclaimers. This provides two important lessons for better social media behavior. First, take a moment to critically analyze content before you share it. Was it posted by a credible source? Are there any credible news stories about it? Second, if it seems too good to be true, it likely is. Debunk this on your own by performing a reverse image search.
NO: The crowd in this video was not chanting “Let’s go, Brandon” at Biden during his September 2022 trip to London. YES: The audio in this video was altered to add the chant.
NewsLit takeaway: Doctored photos often include indications that they have been altered. Manipulated audio, however, can be more difficult to detect. This video seems to capture a London crowd’s reaction to Biden’s presidential motorcade. How can you determine whether it’s real? First, check the source. Which accounts shared this video? It was shared by meme and partisan accounts that often amplify falsehoods that align with their interests. Then, find the original footage and compare it to the viral version. The video was initially captured by a British writer for the Telegraph and shows Biden in his motorcade as a few people in the crowd cheer; none of them are chanting “Let’s go, Brandon,” as in the doctored version.
Altering audio is one tactic purveyors of misinformation use to mislead their audience. While this content may seem genuine at first glance, social media users can determine its authenticity by searching for the original source.
NO: This video was not taken during Typhoon Hinnamnor, which hit Korea and Japan in early September 2022. YES: This is a genuine video of a rooftop shed being destroyed during Typhoon Jebi in September 2018.
NewsLit takeaway: Miscaptioning old but dramatic footage is a proven way to generate engagement on social media. These mislabeled and misleading videos are not harmless, however; they have two notable negative impacts on online discourse. First, they give people a false impression of reality. Yes, Typhoon Hinnamnor caused devastation in Japan and Korea, but this video does not accurately depict what happened. Second, when these videos go viral, they amplify untrustworthy accounts and create larger audiences for them. Social media algorithms pick up on the increased engagement and further raise their profile. With larger followings, the accounts can be repurposed to sell products or to spread more misinformation.
While it is tempting to jump into the fray and share viral content during fast-moving news events, it is important to do some quick verification work. A reverse image search revealed that this video was several years old. Additionally, a quick check showed that the account primarily shared videos of cheerleaders before scoring this viral hit. In other words, it likely was not the most trustworthy source for breaking news.
You can find this week's rumor examples to use with students in these slides.
Ahead of the 2022 midterm elections, TikTok is requiring American politicians and government agencies to verify their accounts with a blue check mark. The new policy comes as QAnon conspiracy theories continue to get millions of views on the platform.
The revolting NyQuil chicken recipe began as a 4chan joke — but after the FDA declared it unsafe, TikTok searches for the recipe increased.
How does a disinformation expert navigate the information environment? Former CIA analyst Cindy Otis shares her tips in this recent podcast interview.
Two Indian engineers started a nonpartisan fact-checking website that has drawn about 80 million page views since 2017, but the site’s efforts to call out misinformation and hate speech have “put it on a collision course with the government.”
Comics can be people’s first encounter with local newspapers, but in light of recent changes, some readers and cartoonists worry about the future of daily comics pages. (For more on the state of cartooning in newspapers, see NLP’s interactive lesson “Power in Art.”)
A new graphic novel for kids breaks down misinformation and news literacy with quirky illustrations and humor. The title? “Killer Underwear Invasion.”
You’ll find teachable moments from our previous issues in the archives. Send your suggestions and success stories to [email protected].
Sign up to receive NLP Connections (news about our work) or switch your subscription to the non-educator version of The Sift called Get Smart About News here.
Check out NLP's Checkology virtual classroom, where students learn how to navigate today’s information landscape by developing news literacy skills.