The late Rep. John Lewis gave a speech at the historic March on Washington for Jobs and Freedom in 1963. Photo credit: U.S. National Archives.
The Wall Street Journal issued a correction 60 years after the newspaper incorrectly quoted the late civil rights leader and congressman John Lewis in its coverage of the historic March on Washington. The original 1963 article apparently quoted Lewis from an early draft of his speech, which later changed significantly. Lewis, for example, voiced qualified support for the Kennedy administration’s civil rights bill in his actual speech, saying, “It is true that we support the administration’s civil-rights bill. We support it with great reservation, however.” The Journal originally reported that he didn’t support the bill and misquoted him as saying “it is too little and too late.”
The Journal’s U.S. Supreme Court correspondent filed the correction after a reader read an old anniversary piece on the march and alerted him to the inaccuracies in the 1963 coverage.
X, formerly known as Twitter, ranked lowest among five major social media companies in moderating climate misinformation, according to a new report by the Climate Action Against Disinformation Coalition. The report noted that X “lacks proper public transparency” and that climate denial posts have spiked since billionaire Elon Musk took over the platform last year, with researchers awarding X a score of 1 on a 21-point scale. Pinterest ranked highest (12 points) and was the only platform that defined climate misinformation in its community guidelines, according to the report.
How do professional fact-checkers find what they need online? One practice they follow is “click restraint,” or not clicking immediately on the first results that pop up, which can often include ads and sponsored posts — a media literacy strategy first developed by the Stanford History Education Group. Thinking like a fact-checker instead means taking time to scan the search results for trustworthy sources to make a more informed decision on which result to open first.
Love RumorGuard? Receive timely updates by signing up for RG alerts here.
NO: This is not authentic footage of a robot playing professional pickleball. YES: This video was created by manipulating a genuine clip from the Major League Pickleball women’s semifinal in Daytona Beach, Florida, on March 26. YES: A robot was digitally added to the video using AI software to mask Anna Leigh Waters, the No. 1-ranked professional pickleball player.
NewsLit takeaway: Sometimes all that is needed to detect a piece of misinformation is attention to detail. This video was shared with the hashtag #AI, and clicking on the account’s name, @AIsport88, revealed similar videos. These are all indications that the videos used some form of digital editing. Of course, viral content is often shared outside of its original context, which is why it is vitally important to trace videos and photos back to their original sources.
A careful viewing of this video also reveals several visual oddities suggesting digital editing. The robot’s paddle, for instance, disappears several times during the video and there is a persistent blur around the robot’s body.
NewsLit takeaway: Viral videos often lose context as they migrate from one platform to another. This piece of staged outrage bait is designed to provoke anger (and engagement) from its audience. It was originally shared on TikTok accompanied by a disclaimer labeling it as a piece of staged fiction — but this context was removed when the video was reshared on X (the platform formerly known as Twitter) by users who falsely claimed it was genuine.
This video had one major red flag: a complete lack of specifics. The names of the people involved are not mentioned, nor is the location of the school. This should give viewers pause and a reason to seek out additional information. In this case, tracing the clip back to its original source reveals that the originating account frequently posts staged videos.
A Las Vegas Review-Journal reporter became the target of a harassment campaign after social media users accused her of covering up a retired police chief’s alleged murder. The accusations arose because her initial coverage did not include details about the death that only came to light in the days and weeks after publication.
Who uses a typewriter anymore in the news industry? A 13-year-old homeschooled boy in Houston, Texas, who started his own newspaper — and even reported on the impeachment trial of the state’s attorney general.
Generative AI tools “can’t discern fact from fiction,” resulting in recent AI blunders by news outlets experimenting with the technology — including an obituary calling a former NBA player “useless at 42” and a travel article suggesting tourists visit a food bank with “an empty stomach.”
TikTok launched a tool to label AI-generated content on its platform in an effort to curb misinformation. Meanwhile, academics say the platform’s rules make it tough to study TikTok data.
To humanize reporters in an era of AI technology and misinformation, enhanced bios of New York Times reporters will now include how the paper’s ethics policy applies to their beats. For example, science and global health reporter Apoorva Mandavilli’s bio says, “I do not go on press junkets sponsored by companies or hospitals.”
AsianWeek, a historic San Francisco newspaper that ran from 1979 to 2009, recently launched an online database for its content. The paper was one of the first English-language news outlets to cover Asian American topics.
Families of QAnon conspiracy theorists share in this CNN report the emotional pain they experienced as loved ones became caught up in false narratives. One mother noted how her son “became more isolated” as he went down the conspiracy theory rabbit hole.
Love this newsletter? Please take a moment to forward it to your friends, or they can subscribe here. We also welcome your feedback here.