Twitter and Facebook act to stem COVID-19 misinformation


In the last week, both Twitter and Facebook have announced additional measures to combat the spread of misinformation about COVID-19 and SARS-CoV-2, the coronavirus that causes the disease.

Twitter announced on March 18 that it would remove coronavirus-related content that goes “directly against guidance” from public health and government authorities, such as false and dangerous preventive measures or “cures” and claims that the virus is a hoax designed to harm the economy. Two days later, Twitter Support tweeted that it was working to “verify” the accounts of credible experts in public health. (One critic has urged the company to create a designation for this purpose other than its standard blue checkmark, which signifies only authenticity, not credibility.)

Facebook, WhatsApp

Also on March 18, Facebook, through its subsidiary WhatsApp, announced two initiatives to combat misinformation about COVID-19: a $1 million donation to the Poynter Institute’s International Fact-Checking Network (IFCN) and the launch of the WhatsApp Coronavirus Information Hub, where users can find credible health information. (The company also created a similar information center on Facebook.) In addition, WhatsApp is testing new features that let users search the internet for additional context about messages forwarded to them, and it is adding localized chatbots (including one run by the World Health Organization) to help guide people to credible information. The platform is also featuring the accounts of 18 IFCN members to which users can forward messages for verification.

Joint statement

On March 16, Facebook, Twitter, Google, LinkedIn, Microsoft, Reddit and YouTube issued a joint statement expressing their commitment to “working closely together on COVID-19 response efforts.”

Facebook also flagged a number of credible posts about the coronavirus as spam last week, but later explained that this was due to an error in an automated anti-spam system. The company noted that it was working with a reduced content moderation staff because of the pandemic.

Related coverage

“Facebook has a coronavirus problem. It’s WhatsApp.” (Hadas Gold and Donie O’Sullivan, CNN Business).

“YouTube Is Letting Millions Of People Watch Videos Promoting Misinformation About The Coronavirus” (Joey D’Urso and Alex Wickham, BuzzFeed News).
