Wondering if content is reliable? Determine its origin
Know the techniques bad actors use to fool you online.
The most essential question to ask of viral online content is also frequently one of the most difficult to answer: Where did this come from?
While this question isn’t new — after all, purveyors of misinformation have always “adapted their workflows to the latest medium” — it has been elevated by an information ecosystem optimized for anonymity, alteration, fabrication and sharing.
It’s a perfect storm, the authors of a new report argue, for “source hacking”: organized information influence campaigns that exploit “technological and cultural vulnerabilities” to strategically amplify — and obscure the provenance of — false information. The paper provides new vocabulary for common tactics employed by bad actors online:
- Viral sloganeering: packaging and pushing claims and talking points.
- Leak forgery: fabricating and “leaking” false documentation.
- Evidence collages: selectively bundling information and examples from multiple sources.
- Keyword squatting: using sockpuppet impersonation accounts and flooding or “brigading” comments, keywords and hashtags to misrepresent issues, individuals or groups.
Used in combination, these tactics make it difficult to determine the origins of pieces of coordinated misinformation. Disguising the content’s creators and their intent helps false claims get amplified to large audiences by social media influencers and by news outlets debunking or otherwise covering the campaign. But this new vocabulary can help the public move beyond opaque and less useful terms — like “trolling” — to accurately describe and better understand coordinated disinformation efforts online.
Take, for example, the false rumor spread in the wake of a shooting spree in two West Texas towns on Aug. 31. When police identified the gunman the day after the shooting, a false claim quickly spread online (viral sloganeering) that he was a supporter of 2020 Democratic presidential candidate Beto O’Rourke. At one point, this claim comprised more than 13% of all tweets about the shooter (keyword squatting). Doctored images of the shooter’s truck with an O’Rourke campaign bumper sticker emerged, as did screenshots of fake online profiles (WARNING: linked page includes examples of hate speech) listing false information about the shooter’s political affiliations and ethnicity (leak forgery). These claims were then combined (evidence collage) and spread.
Here are some activities for the classroom:
Note: The false information about the Odessa, Texas, shooter is not the only recent example of Donovan and Friedberg’s source hacking techniques in action. Last week, NBC News traced a number of divisive social media accounts impersonating Jewish people back to 4chan’s /pol/ message board, a forum known as a cauldron of anti-Semitism, sexism and racism.
Also note: The U.S. Department of Defense announced last week that disinformation is a significant enough threat to U.S. security that it is launching an initiative to repel “large-scale, automated…attacks.”
Discuss: Can breaking down and naming the techniques that disinformation agents use online help people avoid being exploited by mis- and disinformation? Will knowing these four techniques help you?
- “Tracing Disinformation With Custom Tools, Burner Phones and Encrypted Apps” (The New York Times, featuring Matthew Rosenberg)
- “Disinformation and the 2020 Election: How the Social Media Industry Should Prepare” (Paul M. Barrett, Stern Center for Business and Human Rights, New York University)
- “Bots in Blackface — The Rise of Fake Black People on Social Media Promoting Political Agendas” (Samara Lynn, Black Enterprise)