The Sift: TikTok bans | AI stories | Redacted MLK speech


Teach news literacy this week

Dig deeper: Don’t miss this week’s classroom-ready resource.

Top picks

A graph from the Edelman Trust Barometer report shows political polarization in 28 countries surveyed. The U.S., Colombia, Argentina, South Africa, Spain and Sweden are the top six severely polarized countries.

The U.S. is one of six “severely polarized” countries listed in the 2023 Edelman Trust Barometer, among the 28 countries surveyed. The driving factors behind this polarization, according to the report, include distrust in media and government, lack of shared identity, systemic unfairness and economic pessimism. Notably, only 34% of people with a polarized mindset trust media. 

Discuss: How can disinformation lead to political polarization? What role do you think online “echo chambers” play in political polarization? How does the inclusion of multiple viewpoints strengthen the national conversation?

Idea: Use NLP’s Newsroom to Classroom program to connect students with a journalist in person or online to discuss standards in news reporting and trust in media. 

Resources: “Misinformation” and “Is it legit?” (NLP's Checkology® virtual classroom).

Several errors were recently found in stories generated by artificial intelligence on CNET, a popular consumer tech news website. Other news outlets have criticized CNET for its lack of transparency around this practice; the site used “CNET Money Staff” as a byline for AI-generated stories and failed to make a public announcement about it.


Following this criticism, CNET’s editor-in-chief wrote a post explaining that the news site began experimenting with AI in November and noting that 75 CNET articles authored by AI and edited by humans had been published since then. CNET is now reportedly pausing its AI usage. Meanwhile, a Futurism report found that CNET's AI articles included plagiarized work — a serious allegation that may diminish trust for CNET readers.

Discuss: If you were a news editor, would you consider using AI to generate stories? Why or why not? What might lead some media outlets to consider “automated journalism”? Is it ethical to publish stories written by AI without clearly disclosing it to readers? How is AI currently present in your life?

Idea: Ask students to share their favorite news websites. As a class or in small groups, visit the news sites and share observations about the bylines for each story. How are they represented? Are the reporters clearly credited and identified for each story? Is there contact information for them? Are some stories credited to staff or other, less transparent entities? Are any credited to AI? Why is it important for standards-based news outlets to be transparent about who (or what) is writing or generating their stories? 

Note: Reputable news organizations have used AI technology in stories for years. For example, The Associated Press began using AI in 2014 for different projects, including automated stories about corporate earnings and sports.


Dig Deeper: Use this think sheet to take notes on the implications of AI generating stories for CNET.

Anti-vaccination conspiracy theories surrounding the “sudden deaths” of public figures like singer-songwriter Lisa Marie Presley and radio DJ Tim Gough continue to spread on Twitter, which no longer enforces its COVID-19 misinformation policy and recently restored previously banned accounts. After renowned sports journalist Grant Wahl died of an aortic aneurysm in December, his widow said in an NPR interview that she still receives harassing messages from conspiracy theorists, including one who blamed her for killing her husband through vaccination. “Grant did not deserve that. My family does not deserve that,” she said.


Discuss: Why do you think misinformation about COVID vaccines continues to spread online? How does mis- and disinformation online affect people offline? How should social media companies deal with false claims about vaccines?




As the World Economic Forum’s annual meeting begins, conspiracy theorists escalate their claims

An image collage features various tweets and website screenshots pushing conspiratorial claims about the World Economic Forum, including one that reads, “The people going to Davos… for the WEF conference do NOT want vaccinated pilots. ‘We’re getting calls now from wealthy businessman who require unvaccinated pilots and crew’ to fly planes.” The News Literacy Project has added a label that says, “EVIDENCE-FREE CONSPIRATORIAL CLAIMS.”

YES: World leaders at the WEF conference typically discuss major global problems and consider how to address them.

NO: The WEF did not ban vaccinated pilots from transporting industry leaders to the conference.

NO: The WEF did not establish a worldwide “15-minute city” zone that would prohibit people from traveling outside of this zone.

NO: The WEF did not publish a statement declaring that pedophilia would save the world.

NO: WEF founder and Executive Chairman Klaus Schwab did not detail a plan to launch a global cyberattack to bring vital services to a halt.

NewsLit takeaway: Each year, world leaders gather in Davos for the WEF’s annual meeting — and each year, conspiracy theorists meet online to take their statements out of context, falsely interpret their videos and conjure up rumors out of whole cloth. 

Many of the conspiratorial claims paint the nongovernmental organization as an all-powerful entity like the Illuminati or the New World Order, wielding power in secret and supposedly enacting global policies to fit its own agenda. But that isn’t the case. The WEF cannot make declarations that the rest of the world must follow. Furthermore, the forum often involves planning exercises that allow leaders to theorize and practice strategies they would implement in case of a catastrophe. This makes it particularly easy for conspiracy theorists and other bad actors to take these practice sessions out of context and misrepresent them online.


No, Rep. Ayanna Pressley didn’t say ‘IQ is a measure of whiteness’

A tweet reads, “Hi @AyannaPressley – Can you explain what this means? Thanks.” Underneath that is an image of a tweet that appears to be from Massachusetts Democratic Rep. Ayanna Pressley that reads, “IQ is a measure of whiteness.” The News Literacy Project has added a label that says, “FABRICATED TWEET.”

NO: Pressley did not post a tweet that says, “IQ is a measure of whiteness.” YES: This is a fabricated tweet that never appeared in Pressley’s Twitter timeline. 

NewsLit takeaway: Fake tweets often go viral when they reinforce the preconceived beliefs and convictions of a significant number of people. In January, commentary about the supposed teaching of critical race theory found its way back into the news cycle when the Florida Department of Education blocked a new Advanced Placement course on African American studies from being offered in the state’s high schools. This may have helped this old fake Pressley tweet — which was originally published on the internet message board 4chan in June 2021 — go viral again.

Confirmation bias can narrow perspectives and even foster extreme political beliefs based on exaggerated caricatures of perceived political opponents. Avoid falling into these outrage-bait traps by recognizing such biases and taking care to base political opinions on verified information.

You can find this week's rumor examples to use with students in these slides.

A Maine newspaper faced backlash after running a heavily redacted version of Martin Luther King Jr.’s “I Have a Dream” speech on its editorial page. The editorial board published an apology and pledged to be “a voice for equality, freedom and justice.”

Following a fatal mass shooting at a Lunar New Year celebration in Monterey Park, California, the Asian American Journalists Association released guidelines for journalists covering violence — including centering “community experiences and victims’ and survivors’ stories.”

At least 40 journalists were targeted with threats and physical violence during or after the Jan. 8 riots at the Brazilian capital, according to the Committee to Protect Journalists.

Twitter is failing to enforce its own policies against climate misinformation. Tweets containing climate change-denying language saw a 300% increase last year, according to an Advance Democracy report.

TikTok bans on some college campuses are being criticized by students and internet freedom advocates as censorship. Others note that the bans are ineffective, since students can still access the app using cellular data on their personal devices.

TikTok will now label posts from state-controlled media outlets in 40 countries, including the U.S. The label was initially piloted last year after the Russia-Ukraine war began.

ICYMI: The most-clicked link in the last issue of The Sift was this story about a psychic on TikTok who falsely accused a professor of the murders of four Idaho students.
Love The Sift? Please take a moment to forward it to your colleagues. They can also subscribe here.

Thanks for reading!

Your weekly issue of The Sift is created by Susan Minichiello (@susanmini), Dan Evon (@danieljevon), Peter Adams (@PeterD_Adams), Hannah Covington (@HannahCov) and Pamela Brunskill (@PamelaBrunskill). It is edited by Mary Kane (@marykkane) and Lourdes Venard (@lourdesvenard).

You’ll find teachable moments from our previous issues in the archives. Send your suggestions and success stories to [email protected].

Sign up to receive NLP Connections (news about our work) or switch your subscription to the non-educator version of The Sift called Get Smart About News here.


Check out NLP's Checkology virtual classroom, where students learn how to navigate today’s information landscape by developing news literacy skills.