The Sift: The rise of influencers | AI regulations

 

Teach news literacy this week
The rise of influencers | AI regulations

 
Dig deeper: Don’t miss this week’s classroom-ready resource.
 

Top picks

An illustration of a person placing their hands over their eyes as news and social media icons swirl around them.
News avoidance is a growing trend. Illustration credit: Shutterstock.com.
 

The share of Americans who closely follow the news has declined in recent years. About 51% of American adults followed the news “all or most of the time” in 2016, but by 2022 only 38% said they did, according to a new Pew Research Center analysis. Although the decline in news attention occurred across most racial, gender and political demographics, it was particularly steep among Republicans.

 
 
Dig Deeper: Use this think sheet to take notes on the Pew Research Center report on how fewer Americans are following the news (meets NLP Standard 5).

More than 40 states are suing Meta Platforms Inc., the parent company of Facebook and Instagram, alleging that its addictive and harmful social media platforms have fueled the mental health crisis among children and teens. Nearly all teens aged 13 to 17 use social media, and kids under 13 can often easily get around social media companies’ age requirements and create accounts, according to the Associated Press.

While Meta said in a statement that it has already introduced “over 30 tools to support teens and their families,” the lawsuits accuse the company of misleading the public about the harms of social media and prioritizing profit to make products that are purposefully addictive to young people. New York Attorney General Letitia James said in a statement that “Meta has profited from children’s pain” and its “manipulative features” lower kids’ self-esteem.

 

Being an online influencer was once considered a frivolous idea, but it’s now one of the most popular career aspirations for young people. The online creator and influencer industry emerged 25 years ago and currently has a global value of $250 billion, with little government oversight. Although almost anyone can build a following on social media and become an influencer, the door is also open for bad actors to spread misinformation. Many Americans now turn to creators to learn about major events and often find a blend of reporting and opinion that can be misleading and optimized for engagement, with some creators “using challenges, lies and outrage to capture short attention spans, no matter the cost,” according to The Washington Post.

 
 
Love RumorGuard? Receive timely updates by signing up for RG alerts here.
You can find this week's rumor examples to use with students in these slides.
Editor’s note: A RumorGuard entry published in the Oct. 23 issue of The Sift wasn’t updated to reflect more recent developments in coverage of the Gaza hospital complex explosion. We updated the rumor on Oct. 25 in the newsletter archive to reflect the latest news of the Israel-Hamas war.
 
 

AI-generated content distorts events in Gaza

Screenshots show three posts from the social media network X that contain images of an Israeli tent city, a large Palestinian flag at a soccer game and a baby crying amid rubble, supposedly from the Israel-Hamas war in October 2023. The News Literacy Project has added a label that says, “AI-GENERATED CONTENT.”

NO: These are not actual photos from the Israel-Hamas war showing a tent city for displaced Israelis, a Spanish soccer crowd supporting Palestinians or a baby crying amid the rubble of a destroyed building. YES: These are all synthetic images created with widely accessible artificial intelligence tools.

NewsLit takeaway: In the aftermath of breaking news, bad actors now use AI image generators to quickly manufacture convincing visuals to provoke strong emotions from their audience, with the Israel-Hamas war providing the latest example of this approach. But no matter how sophisticated these artificial images appear, the steps to detect fabricated content remain the same:

  • Be patient. If an image evokes a strong emotion, practice click restraint and give yourself time to critically consider the content.
  • Double-check the source. Where is this information coming from? Is this a trustworthy account? Have they shared misinformation in the past?
  • Survey multiple sources. Have trustworthy accounts shared the same information?
  • Do a reverse image search. Tracing an image back to its original appearance is key to determining its authenticity.
  • Try lateral reading. Are any credible, standards-based news organizations including this image?

AI-generated content may muddy the misinformation landscape, but practicing basic fact-checking skills can prevent fabricated images from clouding our perspective.

Kickers
A Spanish-language news outlet in the San Francisco Bay Area trained over 100 Latino and Mayan immigrants to defend themselves and their communities against disinformation. The efforts were inspired by the “promotoras” model of health education, which relies on a trusted community member to help educate people in their social circle.
AI technology regulation is coming. President Joe Biden signed an executive order on Oct. 30 that requires industry safety standards and calls for new protections for consumers.
Mysterious bylines recently cropped up at Reviewed, a USA Today website that publishes shopping recommendations. Although Reviewed staffers were unable to find evidence that the bylines were real people and suspected the pieces were AI-generated, parent company Gannett denied the claim.
The possibility of AI-generated disinformation about the Israel-Hamas war is leading people to dismiss genuine images and video, researchers found.
Journalists covering the Israel-Hamas war say they’re grappling with online harassment in addition to dangerous conditions and disinformation while reporting on the conflict, some from inside the war zone.
In the chaotic year since Elon Musk took over X, formerly Twitter, the social media platform has remained popular, but his changes have allowed more misinformation and hate speech to flourish.
Meet the 16-year-old fact-checker who already has three years’ experience on the job writing about election misinformation, the COVID-19 pandemic, guns and even the moon landing.
Imitation front pages of Northwestern University’s student newspaper were found Oct. 25 scattered in newsstands and classrooms on campus. The printed leaflets parodied the university’s response to the Israel-Hamas war and featured a manipulated ad with anti-Zionist views.
 
How do you like this newsletter?
Click to fill out our feedback survey.
 
Love The Sift? Please take a moment to forward it to your colleagues, or they can subscribe here.
 

Thanks for reading!

Your weekly issue of The Sift is created by Susan Minichiello (@susanmini), Dan Evon (@danieljevon), Peter Adams (@PeterD_Adams), Hannah Covington (@HannahCov) and Pamela Brunskill (@PamelaBrunskill). It is edited by Mary Kane (@marykkane) and Lourdes Venard (@lourdesvenard).

You’ll find teachable moments from our previous issues in the archives. Send your suggestions and success stories to [email protected].

Sign up to receive NLP Connections (news about our work) or switch your subscription to the non-educator version of The Sift called Get Smart About News here.

 

Check out NLP's Checkology virtual classroom, where students learn how to navigate today’s information landscape by developing news literacy skills.