Several errors were recently found in stories generated by artificial intelligence on CNET, a popular consumer tech news website. Other news outlets have criticized CNET for its lack of transparency around this practice; the site used “CNET Money Staff” as a byline for AI-generated stories and failed to make a public announcement about it.
Following this criticism, CNET’s editor-in-chief published a post explaining that the site began experimenting with AI in November and noting that 75 CNET articles authored by AI and edited by humans had been published since then. CNET is now reportedly pausing its use of AI. Meanwhile, a Futurism report found that CNET’s AI-generated articles included plagiarized work — a serious allegation that could erode readers’ trust in CNET.
Discuss: If you were a news editor, would you consider using AI to generate stories? Why or why not? What might lead some media outlets to turn to “automated journalism” as a resource? Is it ethical to publish stories written by AI without clearly disclosing that to readers? How is AI currently present in your life?
Idea: Ask students to share their favorite news websites. As a class or in small groups, visit those sites and share observations about the bylines on each story. How are they represented? Are the reporters clearly credited and identified for each story? Is there contact information for them? Are some stories credited to “staff” or other, less transparent entities? Are any credited to AI? Why is it important for standards-based news outlets to be transparent about who (or what) is writing or generating their stories?
Note: Reputable news organizations have used AI technology in their coverage. For example, The Associated Press began using AI in 2014 for a variety of projects, including automated stories about corporate earnings and sports.
Related:
Dig Deeper: Use this think sheet to take notes on the implications of AI generating stories for CNET.