This week: AI and news literacy | Fake horse photo | TikTok ban
Note: Get Smart About News is taking a break and will return in a new format on June 6. Please take a few minutes to complete our annual reader survey and tell us how this newsletter can better meet your needs as we gear up for summer.
AI and news literacy
While social media, press freedom and misinformation are recurring topics in Get Smart About News, the past year brought a rapid advancement in artificial intelligence — a topic that dominated headlines and sparked intense public interest following the release of ChatGPT.
As we transition into our summer format, here are three key takeaways in news literacy about AI:
AI has the potential to accelerate misinformation. Generative AI chatbots produce impressively accurate, nuanced text responses within seconds, but they’re also prone to error and have even been shown to fabricate nonexistent articles from legitimate news outlets. Other AI tools can generate synthetic images, voices and video. Experts worry bad actors could use these tools to create disinformation and spread it at an alarming scale.
AI tools can combat misinformation. It’s not all doom and gloom. Although AI can be used to create disinformation, it can also potentially help combat it by automating fact-checking.
AI will affect journalism for the foreseeable future. While some reputable news organizations have used AI software for years (to parse financial reports and sports scores, for example), the leap in sophistication of publicly accessible AI tools will likely change journalism practices and processes in ways yet to be fully realized. Newsrooms are already grappling with how to use AI while keeping their audiences informed about these decisions. In January, it was revealed that CNET had been quietly publishing AI-generated stories without disclosing the practice to its readers; the stories contained several inaccuracies. Meanwhile, Wired became one of the first newsrooms to develop a generative AI policy to be transparent with its readers.
The emergence of ChatGPT and other AI tools shows that news literacy education is more important than ever. We’ll be following this technology as it continues to evolve.
NO: Liberal billionaire philanthropist George Soros did not die of a heart attack, a fact his Twitter account confirmed on May 15. YES: This rumor began spreading with a baseless claim from an ordinary social media account that was picked up by a series of disreputable publications. NO: No credible standards-based news outlet reported this claim.
NewsLit takeaway: Death hoaxes are frequently shared online as a form of engagement bait, but they often also serve as entry points to conspiratorial ideas and beliefs. In Soros’ case, this death hoax is only the latest in a long line of falsehoods aimed at him, dating back to the 1990s. The outlandish rumors and fabrications typically stem from Soros’ donations to liberal causes, are often rooted in conspiracy theories about global elites, and regularly include antisemitic tropes.
If Soros had died, the event would have created headlines from credible, standards-based organizations. When salacious rumors spread on social media, it’s always a good idea to be patient and wait for a credible source to confirm or debunk a claim.
NO: This is not a genuine photograph of the world’s largest horse. YES: It was created with the AI image generator Midjourney in April. YES: According to Guinness World Records, the world’s largest horse, named Sampson, actually stood 7 feet, 2.5 inches tall and lived in the 1850s.
NewsLit takeaway: AI image generators have already been used to create fabricated photos related to current events, and now the technologies are producing fake historical photos, such as this AI image of a giant horse. While social media users may be able to spot these fakes through close examination (AI still has a difficult time rendering fingers, for example), viewers should not discount the tried-and-true method of considering the image’s source. Was the image shared by a trusted source or by an account seeking engagement? Does a reverse image search reveal fact-check articles or any additional context? In this case, the image can be traced back to a subreddit dedicated to images created with Midjourney.
Local reporting from two news outlets was key in debunking a sham viral story that falsely claimed migrants had displaced homeless veterans at a New York hotel.
A journalist’s background, expertise in a subject and details about their newsgathering process are the kinds of information that “enhanced bylines” will convey in certain New York Times online stories, after the paper’s Trust team “found that readers trust journalism more when they know the process of how it was produced.” BBC News is also looking to build trust with audiences by offering a behind-the-scenes look at its journalism through BBC Verify, a new team of about 60 journalists who will cover disinformation and showcase how the news organization verifies the information it shares.
Is summer vacation canceled? Nope! That’s just a rumor students debunk while playing a video game created by CBC Kids News that aims to teach essential critical thinking and news literacy skills along the way.
A Missouri high school student who filmed her geometry teacher using a racist slur was suspended for three days, according to her lawyer, raising concerns over whether the punishment conflicts with the student’s First Amendment rights.
In the ongoing public debate about objectivity, New York Times publisher A.G. Sulzberger examines the value of journalism in this essay.
Love this newsletter? Please take a moment to forward it to your friends. They can also subscribe here.