Learn news literacy this week: The rise of influencers | AI regulations
News avoidance is a growing trend. Illustration credit: Shutterstock.com.
The number of Americans who closely follow the news has declined in recent years. About 51% of American adults followed the news “all or most of the time” in 2016, but by 2022 only 38% said they did, according to a new Pew Research Center analysis. Although attention to news declined across most racial, gender and political demographics, the drop was particularly steep among Republicans.
More than 40 states are suing Meta Platforms Inc., the parent company of Facebook and Instagram, for fueling the mental health crisis among children and teens with its addictive and harmful social media platforms. Nearly all teens aged 13 to 17 use social media, and it’s often easy for kids under 13 to get around social media companies’ age requirements and create accounts, the Associated Press says.
While Meta said in a statement that it has already introduced “over 30 tools to support teens and their families,” the lawsuits accuse the company of misleading the public about the harms of social media and prioritizing profit by making products that are purposefully addictive to young people. New York Attorney General Letitia James said in a statement that “Meta has profited from children’s pain” and its “manipulative features” lower kids’ self-esteem.
Note: The American Psychological Association released a health advisory on social media use in adolescence earlier this year.
Being an online influencer was once considered a frivolous idea, but it’s now one of the most popular career aspirations for young people. The online creator and influencer industry emerged 25 years ago and currently has a global value of $250 billion, with little government oversight. Although almost anyone can build a following on social media and become an influencer, the door is also open for bad actors to spread misinformation. Many Americans now turn to creators to learn about major events and often find a blend of reporting and opinion that can be misleading and optimized for engagement, with some creators “using challenges, lies and outrage to capture short attention spans, no matter the cost,” according to The Washington Post.
NewsLit takeaway: In the aftermath of breaking news, bad actors now use AI image generators to quickly manufacture convincing visuals to provoke strong emotions from their audience, with the Israel-Hamas war providing the latest example of this approach. But no matter how sophisticated these artificial images appear, the steps to detect fabricated content remain the same:
Be patient. If an image evokes a strong emotion, practice click restraint and give yourself time to critically consider the content.
Double-check the source. Where is this information coming from? Is this a trustworthy account? Have they shared misinformation in the past?
Survey multiple sources. Have trustworthy accounts shared the same information?
Do a reverse image search. Tracing an image back to its original appearance is key to determining its authenticity.
Try lateral reading. Are any credible, standards-based news organizations including this image?
AI-generated content may muddy the misinformation landscape, but practicing basic fact-checking skills can prevent fabricated images from clouding our perspective.
A Spanish-language news outlet in the San Francisco Bay Area trained over 100 Latino and Mayan immigrants to defend themselves and their communities against disinformation. The efforts were inspired by the “promotoras” model of health education, which relies on a trusted community member to help educate people in their social circle.
AI technology regulation is coming. President Joe Biden signed an executive order on Oct. 30 that requires industry safety standards and calls for new protections for consumers.
Mysterious bylines recently cropped up at Reviewed, a USA Today website that publishes shopping recommendations. Although Reviewed staffers were unable to find evidence that the bylines belonged to real people and suspected the pieces were AI-generated, parent company Gannett denied the claim.
The possibility of AI-generated disinformation about the Israel-Hamas war is leading people to dismiss genuine images and video, researchers found.
Journalists covering the Israel-Hamas war say they’re grappling with online harassment in addition to dangerous conditions and disinformation while reporting on the conflict, some from inside the war zone.
In the chaotic year since Elon Musk took over X, formerly Twitter, the social media platform remains popular, but his changes have allowed more misinformation and hate speech to flourish.
Meet the 16-year-old fact-checker who already has three years’ experience on the job writing about election misinformation, the COVID-19 pandemic, guns and even the moon landing.