2025 year in review: AI misinformation
Artificial intelligence tools evolved from an online oddity to a routine part of many people’s lives in 2025. AI assistants help automate tasks, AI chatbots answer questions, and AI image and video generators create nearly photorealistic visuals. These tools have many beneficial uses, but they also make it harder to separate fact from fiction online. Here are some of the ways that AI contributed to the creation and spread of falsehoods on social media this year.
Distorting current events
Did you see videos of animals getting startled by Halloween decorations this year? What about the viral clip of a drunk raccoon at a liquor store? Both were AI-generated fakes. Purveyors of misinformation always have used viral news stories for their source material, and AI content generators make it easier than ever to capitalize quickly on those trends. But it’s not all funny animal clips and seasonal spoofs. Important and often emotional news events also are exploited using convincing fabrications. AI-generated videos supposedly depicting conflicts between federal agents and protesters, the aftermath of Hurricane Melissa and the impact of policy changes for SNAP recipients spread widely online this year. Many of these deceptive videos are designed to look like raw, firsthand cellphone footage and can easily mislead viewers.
Newslit tip: While these videos are increasingly realistic, they often still contain visual flaws. Look for watermarks indicating that the content was AI-generated, and carefully examine objects in the background (especially text) for unusual distortions. Most importantly, pay attention to who is sharing the content and always look for a second source.
Supplying wrong answers
AI chatbots can quickly parse information to provide answers to queries, but their responses are frequently inaccurate or incomplete. And while these tools often are viewed as neutral arbiters of information, it is important to recognize the human influence over their responses. Billionaire Elon Musk’s AI chatbot Grok, for example, recently asserted that Musk was more physically fit than NBA superstar LeBron James. Chatbots also may generate responses based on information from unreliable sources. One study found that chatbots “lack skepticism” and often repeat information from low-quality sources, such as unverified social media posts. This was the case during a large political rally in October 2025, when chatbots amplified the false online assertion that genuine news coverage was old footage, misleading people about the crowd’s size.
Newslit tip: AI chatbots are not objective and don’t have all the answers. They need to be fact-checked just like other online content.
Impersonating audio
AI-generated content that looks realistic is only part of the problem. These tools also are increasingly capable of producing content that sounds realistic. Audio impersonations are becoming a problem both for music fans, who may mistakenly tune into an AI impostor song from a favorite band, and for anyone on social media who may encounter purported “leaks” of fabricated audio of a celebrity or politician “saying” something they never said. Fabricated audio clips have depicted Vice President JD Vance criticizing Elon Musk and former President Barack Obama expressing health concerns about President Donald Trump. These fakes can influence public opinion and public policy. In July, a bad actor used a voice clone of Secretary of State Marco Rubio to contact U.S. and foreign officials.
Newslit tip: Fake audio can be especially deceptive because there are no visual clues to examine. Instead, question the audio’s origin and evaluate the account sharing it. Remember, verifying social media content often requires patience, as the information needed isn’t always available right away. If you are skeptical of a viral clip, wait for experts to weigh in on whether the audio is genuine.
Prioritizing facts in 2026
AI-generated content is likely to get more realistic as the technology continues to improve. While it’s still possible to spot visual flaws through careful observation, we should anticipate that spotting these artifacts may no longer be a reliable way to detect AI content. To determine authenticity, we should practice good news literacy habits, such as checking the source of a post, looking for supporting evidence and seeking out reputable sources to investigate suspicious claims.