While most people tend to think of internet trolls as obnoxious personas who provoke others into infuriating exchanges online, two disinformation experts from Clemson University argue that “professional trolls” are far more likely to use positive ideological messages that affirm people’s existing beliefs to accomplish their goals of sowing division and distrust.
“Effective disinformation is embedded in an account you agree with,” Darren Linvill and Patrick Warren write in an article posted Nov. 25 on Rolling Stone’s website. “The professionals don’t push you away, they pull you toward them.”
Professional disinformation practitioners also use accurate stories about valid issues — but selectively. For example, they might home in on stories that undermine trust in American institutions, or subtly manipulate our emotions by focusing on “cultural stress points,” such as religion or homophobia, that are known to provoke feelings of disgust toward people with different beliefs or ideologies. They also attack moderates, and share links and comments that support candidates and activists further from the political center. And because so much of our own social media activity works the same way — selectively sharing information that affirms our viewpoints and existing beliefs, often by demonizing or belittling “the other side” — it’s challenging for social media companies and others to catch those doing so with a deceitful motive.
But Linvill and Warren, both associate professors at Clemson, also offer some advice to help minimize our vulnerability to these campaigns, including the need to question our own biases, to stop believing and re-sharing posts from anonymous users online (for example, those using a hashtag we’re sympathetic with) and to engage in “digital civility” with those we disagree with by seeking common ground and resisting the urge to dismiss or demean them.
Russia’s goals, the authors warn, were “never just about elections”; they also were (and are) to encourage us “to vilify our neighbor and amplify our differences because, if we grow incapable of compromising, there can be no meaningful democracy.”
NO: House Speaker Nancy Pelosi was not asked to leave a children’s benefit after a “drunken outburst.” YES: This complete fiction is a piece of satire that was published in September on Bustatroll.org, part of Christopher Blair’s America’s Last Line of Defense (ALLOD) network of satirical sites. YES: This false item has been copied and posted on other websites — including one registered to a person in Lahore, Pakistan — that do not independently label it as satire.
Note: The image used in the copy of Blair’s piece shared in the Facebook post above still includes the “ALLOD” logo and an “S” satire “rating” — but the site where the piece was copied does not say itself that the story is satirical.
Also note: The comments on the Facebook post suggest that few people who engaged with it realized that the story was false.
NO: A bill that will be considered by legislators in Virginia would not outlaw martial arts and firearm instruction. YES: Under a 1987 Virginia law, “unlawful paramilitary activity” includes training a person how to use a firearm or any “technique capable of causing injury or death” when undertaken with the knowledge or intent that such training will be used to promote “a civil disorder.” YES: The bill to be formally introduced in the state Senate when it returns next month would amend the 1987 law to include assembling armed groups “with the intent of intimidating any person or group of persons” in the definition of “unlawful paramilitary activity.”
Note: This claim originated with a false article on NaturalNews.com — which frequently peddles misinformation, including anti-vaccination items — and has spread to Infowars.com and other conspiracy sites.
NO: This is not a real hedgehog. YES: It’s a toy made from wool felt by an artist in St. Petersburg, Russia. YES: The Amazing Animals Twitter account has misleadingly shared this photo 90 times since 2014.
Note: So-called engagement bait or click-farm accounts, like this one, often use posts like these to generate a large following. They then seek to monetize that following, either by selling posts or retweets or by selling the account, which the purchaser often renames and uses for a completely different purpose.
Five to teach
A measles outbreak in Samoa has killed 53 people (48 of them under 4 years of age) as of Dec. 2, and thousands more have been infected. Prior to the outbreak, only 31% of the population had been vaccinated against measles, according to the World Health Organization — a significant decline from previous years, largely due to widespread distrust of vaccine safety.
In July 2018, two infants died after two nurses incorrectly administered the measles-mumps-rubella vaccine by mixing it with an expired muscle relaxant instead of water. Following the incident, Children’s Health Defense, an anti-vaccination advocacy group founded by Robert F. Kennedy Jr., published several Facebook posts questioning the safety of the vaccine; however, it never provided information about the actual reason for the infants’ deaths. (The nurses pleaded guilty to manslaughter and were sentenced to five years in prison.)
Note: Kennedy visited Samoa in June and, apparently by chance, met up with Taylor Winterstein, an Australian anti-vaccination activist who was staying at the same resort. Winterstein was organizing an anti-vaccination workshop in Samoa as part of an international tour, but the event was canceled after Samoan health officials objected.
Discuss: Why is misinformation about vaccines so persistent? Why do false claims about vaccine safety continue to spread despite being disproven? Should people be allowed to make up their own minds about whether to vaccinate their children? Can one person’s decision not to vaccinate their children affect the health and safety of others?
Idea: Use the Samoan measles outbreak as an opportunity for student inquiry into vaccine misinformation, perhaps culminating in a problem-based learning assignment asking students to come up with an innovative solution to combat it.
A political reporter at Newsweek was fired and an editor demoted after an article about President Donald Trump’s activities on Thanksgiving Day did not initially include his surprise visit to Afghanistan, which was largely kept under wraps for security purposes and announced after Newsweek’s piece was published. The reporter, Jessica Kwong, said in a tweet that it was “an honest mistake” and that the article was eventually updated. Kwong, who had not been scheduled to work on Thanksgiving, said she filed the article on Wednesday based on Trump’s expected holiday plans. However, after the trip was announced, her piece wasn’t updated in a timely manner, the Washington Examiner reported, citing Kwong. A Newsweek representative told the Examiner that it had investigated the situation, that the story had been corrected, that the journalist responsible was terminated and that further action may be taken. Newsweek’s coverage prompted widespread criticism online, including from both Trump and his son, Donald Trump Jr.
Discuss: Was Newsweek correct to fire the reporter and demote an editor? Why or why not? Are the activities of the president on Thanksgiving Day newsworthy? How should news organizations handle articles that need to be updated with a new development? Should Newsweek’s failure to initially include Trump’s Afghanistan trip in coverage be considered an inaccuracy or omission that required a correction, or a new development in an ongoing story that simply warranted an update? Now that this incident has sparked criticism and allegations of partisan bias, what could Newsweek do to further explain what happened and address readers’ concerns?
The video-sharing platform TikTok blocked a New Jersey teen from accessing the service on her phone last week after a makeup tutorial she posted — in which she encouraged her viewers to “spread awareness” about the mass detention of minority Muslims in China — went viral. In addition, the video itself was removed from the platform for 50 minutes.
TikTok, which is owned by the Chinese company ByteDance, says that 17-year-old Feroza Aziz was blocked on Nov. 25 because her phone was associated with an earlier account that was permanently banned in mid-November for posting a video that contained an image of Osama bin Laden (TikTok’s policies prohibit posting “terrorist imagery”). Her access to the platform was restored two days later, following reports about the situation in news organizations around the world.
TikTok said the ban on her phone was part of “scheduled platform-wide enforcement” of policies preventing devices associated with banned accounts from accessing the service; Aziz, who describes herself on Twitter as a “human rights activist,” doesn’t believe this explanation and says the video that included the image of bin Laden was a satirical one about dating.
TikTok also said that the temporary removal of the viral video in which she discussed Chinese detention camps was the result of “human moderation error.” The company has come under scrutiny in recent months after repeated allegations that it censors videos that conflict with Chinese government interests.
Discuss: Do you believe TikTok’s explanations about the actions taken against Aziz?
Idea: Have groups of students research subjects that are censored in China, then search TikTok for videos on those subjects. For example, students might see how much content is on TikTok about the Tiananmen Square massacre, Tibet, the Taiwan independence movement or the pro-democracy protests in Hong Kong.
Another idea: Challenge students to research several topics that have been censored by the Chinese government, then create TikTok videos about those subjects and post them to a new account (for example, a class account using a school-owned device). Then monitor the status of that new account to see if any action is taken against it.
Amid protests and a probe into the murder of Maltese journalist Daphne Caruana Galizia, Malta’s prime minister, Joseph Muscat, said on Dec. 1 that he would step down in mid-January. Caruana Galizia was killed on Oct. 16, 2017, when a bomb planted in her car exploded as she drove away from her home. Described by Politico in late 2016 as “a one-woman WikiLeaks, crusading against untransparency and corruption in Malta,” she was known for investigating government and business wrongdoing (and had implicated some members of Muscat’s cabinet) and had been sued by some of the subjects of her reporting. On Nov. 30, a wealthy businessman, Yorgen Fenech, was arraigned on charges in connection with her death, including complicity in murder and membership in a criminal organization. Three known criminals accused of planting and detonating the bomb have been detained since December 2017; they were formally charged in July 2019 with voluntary homicide, membership in a criminal organization and possession and detonation of an explosive.
Discuss: What impact has the death of Caruana Galizia had on press freedoms in Malta? Why is a free press important? Why do powerful people sometimes threaten, jail or kill journalists?
Idea: Have students research Malta on the Reporters Without Borders website. Where does Malta stand in the 2019 World Press Freedom Index? How does Malta’s 2019 ranking compare with its ranking in 2018, and with the ranking of the United States?
In a new effort to combat false content online, the Chinese government is requiring the use of clear labels on any video or audio created or manipulated through the use of technologies or applications such as artificial intelligence and virtual reality. Publishing or hosting “deepfake” videos and other digitally manipulated or fabricated content without a prominent disclaimer could be a criminal offense, according to the Cyberspace Administration of China. The regulations, announced Nov. 29, take effect Jan. 1. (If you use the Chrome browser, Google will offer to translate the page.)
Also prohibited is using such technologies and applications to “produce, publish and disseminate false news information.” This new rule is similar to a California statute, enacted in October, that makes it a crime to distribute deepfakes of politicians within 60 days of an election.
Discuss: Should it be illegal to create deepfake videos of politicians that are designed to deceive and influence voters? How can such a law be enforced without also punishing fake videos created for fun or as satire? Should governments create mandatory labeling laws or regulations for doctored or fabricated videos? Should they apply only to videos of politicians, or should they include videos of other people?