GSAN: The rise of influencers | AI regulations

 

Learn news literacy this week

 

Top picks

[Illustration: a person covers their eyes as news and social media icons swirl around them.]
News avoidance is a growing trend. Illustration credit: Shutterstock.com.
 

The number of Americans who closely follow the news has declined in recent years. About 51% of American adults followed the news “all or most of the time” in 2016, but by 2022 only 38% said they did, according to a new Pew Research Center analysis. Although attention to news declined across most racial, gender and political demographics, the drop was particularly steep among Republicans.

 

More than 40 states are suing Meta Platforms Inc., the parent company of Facebook and Instagram, alleging that its addictive and harmful social media platforms have fueled the mental health crisis among children and teens. Nearly all teens aged 13 to 17 use social media, and kids under 13 often find it easy to get around social media companies’ age requirements and create accounts, according to the Associated Press.

While Meta said in a statement that it has already introduced “over 30 tools to support teens and their families,” the lawsuits accuse the company of misleading the public about the harms of social media and prioritizing profit by making products that are purposely addictive to young people. New York Attorney General Letitia James said in a statement that “Meta has profited from children’s pain” and that its “manipulative features” lower kids’ self-esteem.

 

Being an online influencer was once dismissed as a frivolous pursuit, but it’s now one of the most popular career aspirations among young people. The online creator and influencer industry emerged 25 years ago and now has a global value of $250 billion, with little government oversight. Although almost anyone can build a following on social media and become an influencer, the door is also open for bad actors to spread misinformation. Many Americans now turn to creators to learn about major events, and they often find a blend of reporting and opinion that can be misleading and is optimized for engagement, with some creators “using challenges, lies and outrage to capture short attention spans, no matter the cost,” according to The Washington Post.

 
 
Love RumorGuard? Receive timely updates by signing up for RG alerts here.
 
 

AI-generated content distorts events in Gaza

Screenshots show three posts from the social media network X that contain images of an Israeli tent city, a large Palestinian flag at a soccer game and a baby crying amid rubble, supposedly from the Israel-Hamas war in October 2023. The News Literacy Project has added a label that says, “AI-GENERATED CONTENT.”

NO: These are not actual photos from the Israel-Hamas war showing a tent city for displaced Israelis, a Spanish soccer crowd supporting Palestinians or a baby crying amid the rubble of a destroyed building. YES: These are all synthetic images created with widely accessible artificial intelligence tools.

NewsLit takeaway: In the aftermath of breaking news events, bad actors now use AI image generators to quickly manufacture convincing visuals designed to provoke strong emotions, with the Israel-Hamas war providing the latest example of this approach. But no matter how sophisticated these artificial images appear, the steps for detecting fabricated content remain the same:

  • Be patient. If an image evokes a strong emotion, practice click restraint and give yourself time to critically consider the content.
  • Double-check the source. Where is this information coming from? Is this a trustworthy account? Have they shared misinformation in the past?
  • Survey multiple sources. Have trustworthy accounts shared the same information?
  • Do a reverse image search. Tracing an image back to its original appearance is key to determining its authenticity.
  • Try lateral reading. Are any credible, standards-based news organizations including this image?

AI-generated content may muddy the misinformation landscape, but practicing basic fact-checking skills can prevent fabricated images from clouding our perspective.

Kickers
A Spanish-language news outlet in the San Francisco Bay Area trained over 100 Latino and Mayan immigrants to defend themselves and their communities against disinformation. The efforts were inspired by the “promotoras” model of health education, which relies on a trusted community member to help educate people in their social circle.
AI technology regulation is coming. President Joe Biden signed an executive order on Oct. 30 that requires industry safety standards and calls for new protections for consumers.
Mysterious bylines recently cropped up at Reviewed, a USA Today website that publishes shopping recommendations. Although Reviewed staffers were unable to find evidence that the bylines were real people and suspected the pieces were AI-generated, parent company Gannett denied the claim.
The possibility of AI-generated disinformation about the Israel-Hamas war is leading people to dismiss genuine images and video, researchers found.
Journalists covering the Israel-Hamas war say they’re grappling with online harassment in addition to dangerous conditions and disinformation while reporting on the conflict, some from inside the war zone.
In the chaotic year since Elon Musk took over X, formerly Twitter, the social media platform remains popular, but his changes have allowed more misinformation and hate speech to flourish.
Meet the 16-year-old fact-checker who already has three years’ experience on the job writing about election misinformation, the COVID-19 pandemic, guns and even the moon landing.
Civics, misinformation and democracy lessons aren’t just for the classroom — some workplaces are offering these lessons to employees to ensure healthier work relationships.
 
How do you like this newsletter?
Dislike? Not sure? Like? Click to fill out our feedback survey.
 
Love this newsletter? Please take a moment to forward it to your friends, or they can subscribe here.
 

Thanks for reading!

Your weekly issue of Get Smart About News is created by Susan Minichiello (@susanmini), Dan Evon (@danieljevon), Peter Adams (@PeterD_Adams), Hannah Covington (@HannahCov) and Pamela Brunskill (@PamelaBrunskill). It is edited by Mary Kane (@marykkane) and Lourdes Venard (@lourdesvenard).

Sign up to receive NLP Connections (news about our work) or switch your subscription to the educator version of Get Smart About News called The Sift® here.

 

Check out NLP's Checkology virtual classroom, where you can learn to better navigate today’s information landscape by developing news literacy skills.