The Sift: Social media First Amendment cases | AI news

 

Teach news literacy this week

 

Coming up:

Our NLP colleague and RumorGuard writer Dan Evon is an expert at spotting and debunking online falsehoods. He's taking over next week's issue for a special edition of The Sift.

 
Dig deeper: Don’t miss this week’s classroom-ready resource.
 

Top picks

An illustration of a hand holding a smartphone. On the phone’s screen, a person has their hands up with the word “blocked” superimposed.
Constituents have sued public officials for blocking them on social media in two current U.S. Supreme Court cases. Illustration credit: Nadia Snopek/Shutterstock.com.
 

Should public officials be allowed to block constituents on social media? That’s the First Amendment question behind two Supreme Court cases, including one brought by California parents who sued their public school board members for blocking them on social media. A lawyer representing the school officials argued last week that their social media pages were personal — not government pages. However, a lawyer representing the parents told the justices that the board members’ individual accounts include school information they couldn’t obtain elsewhere, so being blocked from the pages violated their First Amendment rights as constituents.

 
 
Dig deeper: Use this think sheet to take notes on the First Amendment issues in this case (meets NLP Standard 2).

After Microsoft began relying on artificial intelligence technology to curate news for MSN.com, inaccurate and sometimes bizarre stories and headlines began appearing for its millions of readers. In one troubling example, a recent story from The Guardian newspaper about a woman who was killed appeared on MSN alongside an AI-generated reader poll that asked, “What do you think is the reason behind the woman’s death?” and listed three options: murder, suicide or accident. The poll drew swift criticism from readers and from The Guardian for potentially distressing the victim’s family and tarnishing the newspaper’s reputation.

A Microsoft spokesperson said in a statement that the company was “committed to addressing the recent issue of low quality news.” MSN is the default homepage for Microsoft’s web browser, Microsoft Edge.

 

Out-of-context photos and videos of other conflicts are being passed off as images of the Israel-Hamas war on social media. Examples include a 2013 photo of dead Syrian children that was falsely described in a post as showing recently killed Palestinians, and a gruesome 2015 video of a girl being attacked in Guatemala that was falsely presented as a Hamas attack. Photographer Hosam Katan, whose work is among the visuals taken out of context, said in a New York Times interview that while these kinds of posts may be intended to gain empathy, “such fake videos or photos will have the opposite impact, losing the credibility of the main story.”

 
 
Love RumorGuard? Receive timely updates by signing up for RG alerts here.
You can find this week's rumor examples to use with students in these slides.
 
 
 

False ‘crisis actor’ claims about Israel-Hamas war spread via out-of-context images

Screenshots of three social media posts claiming to present images of crisis actors during the Israel-Hamas war. The posts read “Have you even seen a dead person texting?” and “Palestinian blogger ‘miraculously’ healed in one day from ‘Israel bombing’” and “She has miraculously survived Israel ruthless attacks on 3 separate occasions times on the same day. Wow!” The News Literacy Project has added a label that says, “CONSPIRATORIAL NONSENSE.”

NO: The videos and photos in these posts do not show “crisis actors,” or people hired to play a role, staging incidents from the Israel-Hamas war. YES: The video supposedly showing a dead person texting from inside a body bag was taken in Thailand in 2022 and depicts a participant in a Halloween costume contest. YES: The video provided as “evidence” that a person was a “crisis actor” because he was “miraculously” healed in one day shows two different people at two different times. YES: The photos apparently showing the same child surviving three separate Israeli attacks in October 2023 are of a girl being helped after Aleppo, Syria, was bombed in 2016. YES: False crisis actor claims often circulate after mass shootings and armed conflicts.

NewsLit takeaway: False and conspiratorial claims about “crisis actors” being used to stage or exaggerate mass casualty events are frequently spread by bad actors seeking to sow doubt about the authenticity and severity of tragedies and to create distrust in institutions such as government and legacy news outlets. While these claims evoke sensational notions of nefarious plots and global conspiracies, they rely on the same old out-of-context tricks used to spread much other online disinformation. The images in the screenshots above are all authentic, for example, but none of them involve crisis actors.

Conspiratorial claims about crisis actors gain traction by exploiting highly emotional incidents and offering a temptingly simple explanation for otherwise incomprehensible events. While these claims may be emotionally appealing, checking them against credible sources and applying some basic fact-checking techniques quickly shows that they are based on falsehoods.

 
 

Fake celebrity political endorsements spread over Israel-Hamas war

Three social media posts feature images supposedly showing celebrities voicing support for Israel or Palestine, including a doctored image of soccer star Lionel Messi holding an Israel flag, a misleading video of actor Jason Statham with a Palestinian flag on his car and a deepfake video of model Bella Hadid giving a speech in support of Israel. The News Literacy Project has added a label that says, “FALSE ENDORSEMENTS.”

NO: This is not a genuine photograph of soccer star Lionel Messi holding an Israeli flag. NO: This is not an authentic video showing a Palestinian flag on actor Jason Statham’s car. NO: This is not an authentic video of model Bella Hadid giving a speech in support of Israel. YES: These are fake, digitally manipulated, or out-of-context videos and images.

NewsLit takeaway: Falsely claiming that a celebrity has endorsed a certain political opinion is a common tactic used by purveyors of disinformation attempting to disrupt or distort the cultural conversation. They may manipulate an image, as in the Messi rumor; present media out of context, so that Statham appears in a viral video he is not actually in; or use artificial intelligence to create deepfake videos, like the one featuring Hadid. These false rumors all rely on the popularity and appeal of celebrities to influence public opinion.

Getting in the habit of examining sources — both the account sharing a rumor as well as the origins of the post — is a good way to determine whether something spreading on social media is authentic. And remember, breaking news and current events are ripe for exploitation.

Kickers
More people are turning to online creators and influencers for updates on current events because their coverage is “more accessible, informal” and “feels more relevant,” according to a Reuters Institute report. While some of these influencers have training in journalism, others are activists and partisan commentators who spread misleading information.
Are news and social media just not meant for each other? The Atlantic’s Charlie Warzel has some thoughts.
Will AI contribute to election misinformation next year? Most American adults (58%) believe it will, and an even bigger majority (83%) say it would be a “somewhat or very bad thing” for presidential candidates to use AI to create false or misleading content for political ads, according to a new poll.
It’s been almost a year since ChatGPT debuted, and educators have adopted a wide range of outlooks on how students can or should use the generative AI text tool.
Women and teens are the most common targets of AI-generated nude images and videos, and there are hardly any regulations in place to protect them.
Black Americans over age 65 are twice as likely (46%) as Black Americans under age 30 (23%) to see local news coverage about their community “extremely or fairly often,” according to new Pew Research Center findings.
What kind of news stories are most useful for voters? Should election coverage focus on poll numbers and treat the subject as a competitive race, or should stories be more focused on the issues and the positions and policies of the candidates? This professor thinks the answer is clear.
An Alabama newspaper reporter and publisher were recently arrested on charges of revealing grand jury secrets after publishing an investigative report on local officials’ improper use of federal COVID-19 funds. The Committee to Protect Journalists has demanded that the charges be dropped because the reporter and publisher “should not be prosecuted for simply doing their jobs and covering a matter of local interest.”
 
How do you like this newsletter?
Dislike? Not sure? Like? Click to fill out our feedback survey.
 
Love The Sift? Please take a moment to forward it to your colleagues, or they can subscribe here.
 

Thanks for reading!

Your weekly issue of The Sift is created by Susan Minichiello (@susanmini), Dan Evon (@danieljevon), Peter Adams (@PeterD_Adams), Hannah Covington (@HannahCov) and Pamela Brunskill (@PamelaBrunskill). It is edited by Mary Kane (@marykkane) and Lourdes Venard (@lourdesvenard).

You’ll find teachable moments from our previous issues in the archives. Send your suggestions and success stories to [email protected].

Sign up to receive NLP Connections (news about our work) or switch your subscription to the non-educator version of The Sift called Get Smart About News here.

 

Check out NLP's Checkology virtual classroom, where students learn how to navigate today’s information landscape by developing news literacy skills.