This week: Social media First Amendment cases | AI news
Our NLP colleague and RumorGuard writer Dan Evon is an expert at spotting and debunking online falsehoods. He's taking over next week's issue for a special edition of Get Smart About News.
Constituents have sued public officials for blocking them on social media in two current U.S. Supreme Court cases. Illustration credit: Nadia Snopek/Shutterstock.com.
Should public officials be allowed to block constituents on social media? That’s the First Amendment question behind two Supreme Court cases, including one brought by California parents who sued their public school board members for blocking them on social media. A lawyer representing the school officials argued last week that their social media pages were personal — not government pages. However, a lawyer representing the parents told the justices that the board members’ individual accounts include school information they couldn’t obtain elsewhere, so being blocked from the pages violated their First Amendment rights as constituents.
After Microsoft began relying on artificial intelligence technology to curate news for MSN.com, inaccurate and sometimes bizarre stories and headlines began appearing for its millions of viewers. In one troubling example of algorithmic decision-making, a story from The Guardian newspaper about a woman who was killed was recently shown on MSN alongside an AI-generated reader poll that asked, “What do you think is the reason behind the woman’s death?” and listed three options: murder, suicide or accident. The poll drew swift criticism from readers and from The Guardian for being potentially distressing to the victim’s family and for tarnishing the paper’s reputation.
A Microsoft spokesperson said in a statement that the company was “committed to addressing the recent issue of low quality news.” MSN is the default homepage for Microsoft’s web browser, Microsoft Edge.
Note: Microsoft is one of the News Literacy Project's funders.
Out-of-context photos and videos of other conflicts are being passed off as images of the Israel-Hamas war on social media. Examples include a 2013 photo of dead Syrian children that was falsely described in a post as showing recently killed Palestinians, and a gruesome 2015 video of a girl being attacked in Guatemala that was falsely described as a Hamas attack. Photographer Hosam Katan, whose work is among the visuals taken out of context, said in a New York Times interview that while these kinds of posts may be intended to gain empathy, “such fake videos or photos will have the opposite impact, losing the credibility of the main story.”
NO: The videos and photos in these posts do not show “crisis actors,” or people hired to play a role, staging incidents from the Israel-Hamas war.
YES: The video supposedly showing a dead person texting from inside a body bag was taken in Thailand in 2022 and depicts a participant in a Halloween costume contest.
YES: The video provided as “evidence” that a person was a “crisis actor” because he was “miraculously” healed in one day shows two different people at two different times.
YES: The photos apparently showing the same child surviving three separate Israeli attacks in October 2023 are of a girl being helped after Aleppo, Syria, was bombed in 2016.
YES: False crisis actor claims often circulate after mass shootings and armed conflicts.
NewsLit takeaway: False and conspiratorial claims about “crisis actors” being used to stage or exaggerate mass casualty events are frequently spread by bad actors seeking to sow doubt about the authenticity and severity of tragedies, and to create distrust in institutions such as government and legacy news outlets. While these claims evoke sensational notions of nefarious plots and global conspiracies, they rely on the same old tricks of context that are used to spread a lot of online disinformation. The images included in the above screenshots are all authentic, for example, but none of them involve crisis actors.
Conspiratorial claims about crisis actors gain traction by exploiting highly emotional incidents and offering a temptingly simple explanation for otherwise incomprehensible events. While these claims may be emotionally appealing, checking them against credible sources and applying some basic fact-checking techniques quickly reveals that they are based on falsehoods.
NO: This is not a genuine photograph of soccer star Lionel Messi holding an Israeli flag.
NO: This is not an authentic video showing a Palestinian flag on actor Jason Statham’s car.
NO: This is not an authentic video of model Bella Hadid giving a speech in support of Israel.
YES: These are fake, digitally manipulated, or out-of-context videos and images.
NewsLit takeaway: Falsely claiming that a celebrity has endorsed a certain political opinion is a common tactic used by purveyors of disinformation attempting to disrupt or distort the cultural conversation. They manipulate an image, as in the Messi rumor; present media out of context, making Statham appear to be in a viral video he is not part of; or use artificial intelligence technologies to create deepfake videos, like the one featuring Hadid. These false rumors all rely on the popularity and appeal of celebrities to influence public opinion.
Getting in the habit of examining sources — both the account sharing a rumor as well as the origins of the post — is a good way to determine whether something spreading on social media is authentic. And remember, breaking news and current events are ripe for exploitation.
More people are turning to online creators and influencers for updates on current events because their coverage is “more accessible, informal” and “feels more relevant,” according to a Reuters Institute report. While some of these influencers have training in journalism, others are activists and partisan commentators who spread misleading information.
Are news and social media just not meant for each other? The Atlantic’s Charlie Warzel has some thoughts.
Will AI contribute to election misinformation next year? Most American adults (58%) believe it will, and an even bigger majority (83%) say it would be a “somewhat or very bad thing” for presidential candidates to use AI to create false or misleading content for political ads, according to a new poll.
Black Americans over age 65 are twice as likely (46%) as Black Americans under age 30 (23%) to see local news coverage about their community “extremely or fairly often,” according to new Pew Research Center findings.
What kind of news stories are most useful for voters? Should election coverage focus on poll numbers and treat the subject as a competitive race, or should stories be more focused on the issues and the positions and policies of the candidates? This professor thinks the answer is clear.
An Alabama newspaper reporter and publisher were recently arrested on charges of revealing grand jury secrets after publishing an investigative report on local officials’ improper use of federal COVID-19 funds. The Committee to Protect Journalists has demanded that the charges be dropped because the reporter and publisher “should not be prosecuted for simply doing their jobs and covering a matter of local interest.”
Love this newsletter? Please take a moment to forward it to your friends, or they can subscribe here.