Top picks
AI generative tools are evolving rapidly, and the technology is largely unregulated. (Illustration credit: Shutterstock.com)
Google’s new chatbot, Bard, is a fresh source of AI-generated misinformation, according to findings in a study published April 5. The Center for Countering Digital Hate tested the bot and found it generated misinformation on 78 out of 100 false narratives about topics like vaccines, LGBTQ+ hate, sexism and racism “without any additional context negating the false claims.”
AI chatbots are also raising concerns because they cite nonexistent articles from legitimate news outlets as evidence to support false claims. When ChatGPT made up sexual harassment allegations against a law professor, it cited a fabricated Washington Post article. And the Guardian has reported being contacted by a researcher and a student looking for articles in the newspaper after seeing them cited by ChatGPT, only to find that the articles do not exist.
- Idea: If chatbots are allowed in your classroom, have students conduct an AI test of their own. Form small groups and ask students to brainstorm a list of five topics that they are knowledgeable about, such as sports, pop culture, pets, history or science. Have students develop the topics into prompts for an AI chatbot like Bard or ChatGPT, then ask them to analyze the AI-generated text. Is the AI-generated information accurate or inaccurate? How do you know? Did the chatbot provide sources as evidence? Are these sources legitimate? How does generative AI affect the information landscape?
Changes to verification badges on social media platforms are raising important questions about how users can best determine the credibility of sources. On Twitter, The New York Times account recently lost its blue check mark after refusing — like many news organizations — to pay for Twitter Blue. While verification badges previously offered some protection against impersonation accounts, check marks alone are not a mark of credibility.
Meanwhile, Twitter added a “U.S. state-affiliated media” label to NPR’s account on April 4, which the public radio network said was inaccurate because it “operates independently of the U.S. government” and receives less than 1% of its annual operating budget from federal sources. (Amid criticism, Twitter later changed the label to “Government-funded Media,” which remained on NPR’s account as of April 10.)
- Discuss: Have you noticed verification badges on social media platforms like Twitter, TikTok, Facebook and Instagram? What do you think when you see a blue check mark? Why does a verified badge not equal credibility? If you were in charge of a news outlet, would you pay to be verified? Why or why not? How can you tell if a news source on social media is legit?
- Note: The News Literacy Project has decided not to pay for Twitter Blue.
What’s the best way to counteract conspiracy theories? The most promising strategy is prevention — including by teaching people critical thinking strategies and “how to spot shoddy evidence” before they’re exposed to conspiracy beliefs, according to a new study by behavioral researchers in Ireland. Among the least effective methods? Appealing to empathy or ridiculing people who believe in conspiracy theories, the research found.