AI generative tools are evolving rapidly, and the technology is largely unregulated. (Illustration credit: Shutterstock.com)
Google’s new chatbot, Bard, is the latest source of AI-generated misinformation, according to findings in a study published April 5. The Center for Countering Digital Hate tested the bot and found it generated misinformation on 78 out of 100 false narratives about topics like vaccines, LGBTQ+ hate, sexism and racism “without any additional context negating the false claims.”
AI chatbots are also raising concerns because they cite nonexistent articles from legitimate news outlets as evidence to support false claims. When ChatGPT made up sexual harassment allegations against a law professor, it cited a fabricated Washington Post article. And the Guardian has reported being contacted by a researcher and a student looking for articles in the newspaper after seeing them cited by ChatGPT, only to find that the articles did not exist.
Idea: If chatbots are allowed in your classroom, have students conduct an AI test of their own. Form small groups and ask students to brainstorm a list of five topics that they are knowledgeable about, such as sports, pop culture, pets, history or science. Have students develop the topics into prompts for an AI chatbot like Bard or ChatGPT, then ask them to analyze the AI-generated text. Is the AI-generated information accurate or inaccurate? How do you know? Did the chatbot provide sources as evidence? Are these sources legitimate? How does generative AI affect the information landscape?
Changes to verification badges on social media platforms are raising important questions about how users can best determine the credibility of sources. On Twitter, The New York Times account recently lost its blue check mark after refusing — like many news organizations — to pay for Twitter Blue. While verification badges previously offered some protection against impersonation accounts, a check mark alone is not an indicator of credibility.
Meanwhile, Twitter added a “U.S. state-affiliated media” label to NPR’s account on April 4, which the public radio network said was inaccurate because it “operates independently of the U.S. government” and receives less than 1% of its annual operating budget from federal sources. (Amid criticism, Twitter later changed the label to “Government-funded Media,” which remained on NPR’s account as of April 10.)
Discuss: Have you noticed verification badges on social media platforms like Twitter, TikTok, Facebook and Instagram? What do you think when you see a blue check mark? Why does a verified badge not equal credibility? If you were in charge of a news outlet, would you pay to be verified? Why or why not? How can you tell if a news source on social media is legit?
Note: The News Literacy Project has decided not to pay for Twitter Blue.
What’s the best way to counteract conspiracy theories? The most promising strategy is prevention — including by teaching people critical thinking strategies and “how to spot shoddy evidence” before they’re exposed to conspiracy beliefs, according to a new study by behavioral researchers in Ireland. Among the least effective methods? Appealing to empathy or ridiculing people who believe in conspiracy theories, the research found.
Discuss: Do you, or does someone you know, believe in conspiracy theories? How can conspiratorial beliefs result in real-world harm? Why do you think ridiculing believers is an ineffective way to counter conspiratorial thinking?
NewsLit takeaway: AI-image generators have provided purveyors of misinformation a new tool that can quickly create convincing photo fabrications with a few simple text inputs. That makes it particularly important for social media users to stay alert when scrolling through their feeds, especially during breaking news events. Users can investigate the authenticity of these and other viral images by double-checking the source via reverse image searches and lateral reading.
Dig deeper: Use this think sheet to take notes on techniques to evaluate the credibility of an online claim.
NO: This video does not show actor Morgan Freeman criticizing President Joe Biden for talking about his love of ice cream during a White House event before commenting on the March 27 school shooting in Nashville, Tennessee. YES: A celebrity voice impersonator used a video filter to create this clip.
The best way for readers to protect themselves from deceptive impostor content is to double-check the source. Readers who searched Freeman’s official social media accounts would have found no trace of this video.
A barrage of conspiracy theories upended the life of Tiffany Dover, a nurse who fainted on camera after receiving the COVID-19 vaccine in late 2020. Many falsely claimed she had died. Her response? Initially, two years of silence — which ended up fueling the conspiracies and anti-vaccine propaganda even more — but now she’s speaking out.
Can you tell whether an image is AI-generated or not? As the technology rapidly advances, AI-generated images are becoming harder to detect, and as one expert says, “newsrooms will increasingly struggle to authenticate content.”
Trust is a key factor in how news on social media impacts teen mental health, according to psychology research led by Cornell University. The findings underscore the need for news literacy and “a more nuanced understanding of how social media use impacts well-being and mental health.”
Among American journalists, women are more likely to cover health and education, while men are more likely to cover sports. That’s among the findings of this Pew Research Center survey of nearly 12,000 journalists.
About 200 Russian journalists and activists signed an open letter demanding the release of Wall Street Journal reporter Evan Gershkovich, the first American reporter arrested on espionage charges in Russia since the end of the Cold War. Media historians say Gershkovich’s detention harks back to “old Soviet tactics.”