2023 The year in misinformation: News literacy takeaways

This past year, artificial intelligence generators became proficient misinformation tools. With a little human input and a few clicks, anyone can quickly fabricate convincing (and misleading) fake images, audio or text. Still, bad actors don’t need elaborate technologies to mislead people. Cheap and easy tactics, like presenting an image or video out of context, remain more prevalent than their AI counterparts. Even worse, recent changes to X (formerly known as Twitter), such as altering the meaning of its blue check symbol and removing headlines from link preview panels, have made this content easier to spread and more difficult to spot. As the year ends, we review misinformation trends and the news literacy lessons they offer, so we can resolve to be better news consumers in 2024.

1. Truth remains in the AI of the beholder

AI generators are complex tools that make it simple to quickly fabricate synthetic digital images of, well, almost anything: the pope in a puffy coat, explosions at the White House, former President Donald Trump being arrested and crying babies in the rubble of destroyed buildings. Voice-cloning AI tools can also create another kind of deception: videos of public figures saying things they never did. By altering a speaker’s mouth movements and adding an AI-generated replica of their voice to a genuine video, purveyors of disinformation falsely made it appear that former NBA player Shaquille O’Neal promoted a new Medicaid spending card, that President Joe Biden confirmed the existence of aliens or that environmental activist Greta Thunberg called for the use of “biodegradable” bombs in Israel.

Newslit takeaway

AI image generators aren’t flawless, and careful observation often reveals clues that can indicate whether a particular image is authentic. But that’s not the only way to know something is amiss. Look at the surrounding context: Who shared the image? Where was it originally posted? Have any credible news outlets published it?

2. Israel-Hamas war and fake conflict rumors

Not only were AI image generators used to spread false rumors about the Israel-Hamas war, but the mere existence of these tools lent conspiracy theorists another reason to push absurd falsehoods that the war itself was being staged. Along with AI misinformation, many of the false claims that spread on social media after the start of the war relied on the same old tactics that we’ve seen in years past: conspiratorial rumors about “crisis actors” and fabricated news reports spread to sow doubt about the conflict itself, mislabeled images of crowds and protests shared out of context to garner support (or opposition) for one side or another, and video game clips shuffled into the mix of war footage by engagement-seeking opportunists.

Newslit takeaway

The amount of misinformation about the Israel-Hamas war is a clear reminder of how important it is to get news from standards-based sources with a track record of getting the facts right and strong incentives to do so. Remember, one of the main goals of misinformation is to create doubt and distrust in mainstream news coverage.


3. Musk, X and platforming misinformation

Following Elon Musk’s purchase of Twitter in October 2022, the site underwent numerous changes that disinformation researchers and experts criticized as seemingly designed to help misinformation spread and make accurate information more difficult to find. The site’s verification symbol, which once signified the authenticity of an account, became a badge that anyone could buy for $8 a month. This, unsurprisingly, led to a rash of impostor accounts and lent an air of credibility to some of the biggest spreaders of false claims.

Several news outlets left the platform because of Musk’s changes, including NPR, which departed after the site falsely labeled it “state-affiliated media.” Many advertisers followed suit, not wanting their content to appear beside antisemitic or white nationalist posts. Musk himself has promoted debunked conspiracy theories and, at the end of the year, reinstated the account of Alex Jones, a far-right pundit who promoted the false idea that the Sandy Hook Elementary School shooting was a “false flag” event. X also removed headlines from link preview panels, which made it harder for people to recognize and read news — and easier for bad actors to misrepresent the contents of a link with a misleading claim.

Newslit takeaway

Information on X, as on all social media sites, moves quickly and relentlessly — and the changes to the platform require everyone to recalibrate how they use it. It’s more important than ever to be cautious and to apply basic verification practices on the platform.

4. Subtitles and subterfuge

In 2023, social media saw plenty of videos featuring people speaking languages other than English accompanied by inaccurate subtitle translations, such as one video that appeared to show the emir of Qatar threatening to cut off the world’s oil supply. As with voice clones, this method of disinformation literally puts words into a person’s mouth and makes it seem as if they expressed an opinion they never held.

Newslit takeaway

One reason these falsely captioned videos are effective is that people inherently trust subtitles; we are all accustomed to encountering good-faith translations of movies and TV shows. On social media, however, it is important to take a moment to consider who is providing the purported translation, as purveyors of disinformation often add these captions themselves.

5. Climate and meteorologist misinformation campaigns

Meteorologists and weather reporters covering climate change and extreme weather events received a slew of death threats, spurred on in part by persistent misinformation campaigns that seek to cast doubt on the overwhelming scientific consensus about climate change. Those who reject that consensus and are bent on sowing public doubt relied on a familiar playbook in 2023: doctoring weather maps, fabricating translations and misrepresenting data points to suggest that scientists, the media and meteorologists are lying about climate change.

Newslit takeaway

A key feature of online disinformation campaigns is the ongoing attempt to discredit reputable sources. By casting meteorologists, health groups, scientific organizations, the press and other credible sources about climate change as untrustworthy, purveyors of disinformation can steer audiences to false information and unreliable sources. Pushing back against misinformation isn’t just about debunking individual claims, but also understanding the overarching goal of these campaigns.

Bonus: Continuing COVID rumors just won’t quit

As 2023 began, vaccine skeptics were still busy spreading persistent falsehoods about the COVID-19 vaccines. Moments after NFL player Damar Hamlin collapsed during a football game at the end of 2022, they falsely blamed a COVID-19 vaccine. Then, over the following months, as Hamlin reappeared in public, conspiracy theorists claimed that he had died and been replaced by a clone. Although it has been nearly three years since the vaccines were released, and studies continue to show that they are both safe and effective, they remain a reliable topic for bad actors to exploit with fearmongering and conspiratorial claims.

Newslit takeaway

While examples of falsehoods about COVID-19 vary in their details, they frequently fall into the same general buckets: a celebrity “died suddenly” because of the vaccine; the government is planning new pandemics to enact lockdowns; and the vaccines are more dangerous than the disease itself. Being mindful of common misinformation tropes can help us better recognize the next one we encounter.
