BBC News didn’t run a story about the alleged death of social media influencer Lil Tay because sources were unable to verify the information. Image credit: @liltay/Instagram.
When Canadian teen rapper Lil Tay’s alleged death was announced on her Instagram page, some news outlets reported her death as fact — but not BBC reporter Daniel Rosney. He spent 10 hours trying to verify Lil Tay’s death, contacting her managers and multiple police departments. Because no one could confirm her death, the BBC wouldn’t publish a story about it. A day later, it was revealed that Lil Tay was, in fact, alive; she claimed her Instagram account had been hacked.
The incident was a good “reminder that just because it’s online — and even on a verified Instagram account — it isn’t always true,” Rosney tweeted in a viral thread.
Discuss: What steps did Rosney go through to verify Lil Tay’s alleged death? What red flags did he come across? What guidelines does the BBC newsroom follow to verify a story? How can social media users verify if content is legit or not? Why do you think celebrity death hoaxes are so common on social media?
Dig Deeper: Use this think sheet to help students understand how journalists fact-check information and how anyone can use this information to navigate online claims (meets NLP Standard 3).
A Boston suburb that had become a local news desert now has an AI-generated local news site, launched by two residents: a software engineer and a veteran foreign correspondent. The site uses AI tools to scan local government websites and generate transcripts of public meetings, then has ChatGPT create summaries from those transcripts. The site owners check the generated text for “obvious errors,” according to the article, before publishing the summaries.
It’s a low-cost system (the owners call it “local news in a box”) to provide some civic information in towns that no longer have local news outlets. But journalism and digital experts say that generative AI technology has “no conception of truth” and its outputs lack context.
Idea: Have students compare and contrast any local news story written by journalists with one or more AI-generated reports published by Inside Arlington. How are they similar? How are they different? How are sources cited? How are stories and community issues interpreted? What are the pros and cons of having AI-generated news sites in towns that lack actual local news outlets?
A majority of Black American adults (63%) say news coverage of Black people is more negative than news about other racial and ethnic groups, according to a survey by the Pew Research Center. A large proportion of survey respondents (80%) said they come across racist or racially insensitive news coverage of Black people “sometimes” (41%) or “extremely/fairly often” (39%). Including more Black sources in news stories, educating more journalists about issues faced by Black people and hiring more Black journalists and newsroom leaders were some of the solutions many survey respondents said would be helpful.
Discuss: What do you think of these survey results? Can news coverage inadvertently strengthen harmful stereotypes? If you were in charge of a news outlet, how would you ensure news coverage of Black people is fair and accurate? How can ethics and standards of quality journalism work to prevent harmful coverage?
NO: President Joe Biden did not say that his Sept. 26 visit with striking members of the United Auto Workers in Belleville, Michigan, was the first time he ever picketed “in person.” YES: Biden, who became the first sitting president to join a picket line, said this was the first time he had joined “as president.” YES: Political propagandists often use edited and out-of-context footage to push the idea that Biden is mentally unfit for office.
NewsLit takeaway: These viral out-of-context photos and videos can be quite convincing at first glance because they often appeal to deeply held political beliefs. Critics of Biden may have accepted this false quote without even bothering to watch the video — possibly because it felt true. But anyone who actually watches the video can easily see that this is not an accurate quote. Unlike legitimate news outlets — which correct errors of fact when they happen — partisan media figures with political agendas often allow falsehoods like this one to continue circulating, even after they are debunked.
NO: A high-frequency signal cannot activate ingredients in a vaccine. YES: Federal law requires the Federal Emergency Management Agency to test emergency alert systems at least every three years. YES: FEMA said in a statement to the fact-checking organization AFP that claims about dangerous emissions from the testing are false. NO: The COVID-19 vaccine does not contain graphene oxide, the ingredient that numerous conspiratorial claims falsely assert is in the vaccine and could be activated by the alert.
NewsLit takeaway: Conspiracy theories thrive on fear and uncertainty. When FEMA announced in August that it would test its nationwide alert system in early October, purveyors of disinformation quickly moved to convert more conventional concerns about government overreach into conspiracy beliefs. Conspiracy theorists falsely claimed that the nationwide alert system was really an attempt to control the population. There is no factual basis whatsoever for these claims and experts agree there is no mechanism by which a high-frequency signal can somehow activate vaccine ingredients, which are short-lived inside the body.
Does Amazon founder and Washington Post owner Jeff Bezos have a hand in the paper’s news coverage? No, according to legendary journalist Marty Baron, who was the paper’s executive editor from 2013 to 2021. “I mean, if Bezos were telling me what to do as a journalist, I would have quit. I’m not gonna do that,” Baron said in this CBS Sunday Morning interview.
The impact of baseless conspiracy theories — claiming that the government used laser beams to intentionally start the Maui wildfires, or that the Federal Emergency Management Agency was seizing properties from people applying for assistance — wasn’t just felt online. The falsehoods also got in the way of recovery efforts on the ground.
A Canadian QAnon-inspired conspiracy theorist who believes she is “Queen of Canada” travels the country with followers in RVs spreading sovereign citizen beliefs — and calls for violence against those who vaccinate children.
Some teens get thousands of phone notifications a day, and about half of 11- to 17-year-olds receive at least 237 notifications a day, a Common Sense Media report found.
A NewsGuard review found that engagement with Russian, Chinese and Iranian propaganda on X increased by 70% three months after labels indicating state-run media were removed.
Facebook allowed a network of fake accounts run by the Indian military, which pushed propaganda and hate speech, to remain on the platform for a full year after it was discovered, putting Kashmiri journalists in danger.