The controversy stemmed from the Crimson’s coverage of a Sept. 12 on-campus protest calling for the agency’s abolition. The report included the sentence “ICE did not immediately respond to a request for comment Thursday night.”
That language, variations of which often appear in news reports, is a sign of transparency: it shows readers that the reporters did their due diligence by giving individuals and institutions mentioned prominently in an article, particularly those being criticized, a chance to respond.
Three days after the article was published, Act on a Dream — the student immigration advocacy group that organized the protest — criticized the Crimson on social media. On Oct. 11, several days after Crimson editors met with protest organizers to explain the journalism principles behind the request for comment, Act on a Dream started a Change.org petition. It condemns the Crimson’s decision to contact ICE, arguing that doing so showed “cultural insensitivity” and “blatantly endangers undocumented students on our campus.” In addition, it demands that the publication apologize, change its policies that require calling ICE for comment, and protect undocumented students.
Last Tuesday, in a “note to readers,” the Crimson’s president, Kristine Guillaume, and managing editor, Angela Fu, defended the decision to contact ICE for a statement. In keeping with the Crimson’s policies, they said, the request was made after the protest ended, but before the report on the event was published. They also said that no names or immigration statuses of those participating in the protest were given to ICE, nor was the agency told about the demonstration in advance. “We seek to follow a commonly accepted set of journalistic standards, similar to those followed by professional news organizations big and small,” Guillaume and Fu wrote.
Discuss: What factors make a piece of journalism “fair”? How is fairness connected to credibility? Do you think these student journalists were correct to give ICE a chance to comment? Why or why not? Should the Crimson make any of the policy changes the petitioners are demanding? What questions might arise about the Crimson if its reporters did not try to contact ICE? What about other reporting involving other authorities, such as local or campus police? What would it say about the Crimson’s credibility as a news source if it stopped this widely accepted practice?
Idea: Have students read and review the standards listed in the Society of Professional Journalists Code of Ethics, which was cited in the Crimson’s note to readers. Then ask students to identify which principles of the code were relevant in this case, and whether they would recommend any changes to the code.
Viral rumor rundown
NO: This image of Donald Trump with his parents, Mary and Fred, is not authentic. The Ku Klux Klan robes were added to a photo taken in 1994.
Discuss: What emotions does this claim elicit? How can emotions override rational responses to information?
NO: People are not allowed to cover their faces for a driver’s license photo in Ontario, Canada.
NO: The image in the Facebook post is not authentic. The image of a woman wearing a face veil was added to a sample image of an Ontario driver’s license.
NO: This rumor is not new; it was debunked by Snopes in 2015.
Note: Facebook said in September that it would not refer most politicians’ posts and ads to its fact-checking program under a “newsworthiness exemption.” Political ads placed by third parties, such as PACs, will be fact-checked.
“Regarding our al-Baghdadi obituary, the headline should never have read that way and we changed it quickly,” Post spokeswoman Kristine Coratti Kelly tweeted Sunday afternoon, referring to the “austere religious scholar” description. The current headline on the obituary identifies al-Baghdadi as “extremist leader of Islamic State.”
Discuss: Why was the description of al-Baghdadi as an “austere religious scholar” problematic? What headline would you have written for his obituary?
Idea: Have students curate other headlines about al-Baghdadi’s death, then compare the language and tone used.
Preserving freedom of expression without spreading harmful disinformation is “one of the central challenges of our time,” says misinformation expert Claire Wardle, the head of the nonprofit organization First Draft, in a newly released TED Talk. She outlines some of the reasons misinformation is such a significant and complex problem — noting, for example, that people’s relationship to information is more emotional than rational, and that social media platforms are designed with that connection in mind. Misinformation is also extremely difficult to combat, since much of it doesn’t violate the community standards of the platforms on which it appears — and even when it does, these posts, images, videos, graphics and other forms of personal expression are being created at a scale that no one is equipped to moderate.
Wardle ends her talk with ideas for an “infrastructure to support quality information,” such as strategies for utilizing the collective experiences of all users (especially underrepresented and historically targeted groups), establishing a repository of social media data for analysis, and creating opportunities for organizations and individuals fighting online disinformation and extremism to work together.
Also related: Wardle is the host of the Checkology® virtual classroom lesson “Misinformation.”
Discuss: Why do “zombie rumors” — pieces of widely debunked misinformation that are seemingly immune to fact-checking — persist online? How can the context of information be manipulated and weaponized? What steps should governments, social media companies and individual users take to fight the spread of harmful online content while also protecting freedom of expression? What skills and experiences do you have in this area that might benefit your family and friends? What skills and experiences do they have that might benefit you?
Idea: Have groups of students review examples of misinformation (using debunks from reputable fact-checking organizations or this newsletter’s viral rumor rundown) in search of insights about how misinformation works and why it circulates. Then have them compile their findings into an infographic to share with others.
The Facebook News tab is currently being piloted with a limited number of users in the U.S., and features stories curated partially by an editorial team at Facebook and partially by algorithms. Some news partners are paid directly for their participation; others benefit only from increased traffic to their websites.
Discuss: Should Facebook encourage its users to read standards-based news? How should it select which news outlets, and which stories, to present to people? Should Facebook personalize the tab for each user, or is there a benefit to some people (for example, throughout a city or region) seeing the same reporting? How much power does Facebook’s news-sharing program have to shape public opinion?
Note: Facebook’s ad policy also came under further scrutiny and debate last week. During a House Financial Services Committee hearing, Facebook's Mark Zuckerberg was questioned by Rep. Bill Posey, a Florida Republican, about Facebook’s decision to ban ads promoting anti-vaccination claims (Posey has posted such claims). Also, an investigation by The Daily Beast found that credible pro-vaccine ads are being inadvertently blocked by Facebook while some anti-vax ads are getting through. Finally, some of the company’s third-party fact-checkers didn’t know that they could vet advertisements.
Idea: Use Facebook’s ad library to search for vaccination-related ads that violate Facebook’s ban on anti-vaccination advertising.
Almost all (97%) of the public tweets about U.S. politics are posted by only 10% of the adult Twitter users in the U.S., according to a year-long study by the Pew Research Center. Pew also found that compared with non-political tweeters, these users were older, more politically active and more polarized (following accounts with similar views and having a more negative view of those with different opinions). The study was conducted using a random sample of 2,427 U.S. adult (18+) Twitter users who posted more than 1.1 million tweets — only 13% of which were about national politics — between June 10, 2018, and June 9, 2019. A Pew study earlier this year found that just under a quarter of U.S. adults (22%) say they use Twitter.
Discuss: Should people post about politics on social media? Should people who use social media to discuss politics intentionally follow people with whom they disagree? Does the political conversation on social media accurately reflect the true state of U.S. politics? Does it influence the true state of U.S. politics? Are social media platforms optimized for disagreement or for consensus-building?
Idea: As a class, have students re-create the Pew study by selecting a limited number of public Twitter accounts that appear to belong to adults in the U.S., analyzing their tweets over the same time period, and comparing their findings with the study’s results.
More than 200,000 videos documenting human rights violations in Syria have been taken down permanently from Google-owned YouTube, erasing history and removing evidence that could be used to prosecute war crimes, according to Hadi Al Khatib, a Syrian human rights activist and video archivist. “I’m pleading for YouTube and other companies to stop this censorship,” he says in a New York Times opinion video posted last Wednesday.
YouTube and other social platforms are under pressure to remove inappropriate videos — but while algorithms may be good at detecting and flagging violence, they are not as nuanced as humans, says Dia Kayyali, program manager at the human rights organization Witness, who also appears in the video: “They’re not as good at figuring out whether a video is ISIS propaganda or vital human rights documentation.”
To “avoid losing more evidence,” Al Khatib calls on platforms to hire and train moderators in each country who understand the context of the content, and to give researchers like him the ability to review takedowns and reverse moderators’ decisions.
Discuss: Should social media platforms preserve some videos that may document human rights violations, even if they contain graphic violence that would otherwise violate the platforms’ community standards? If so, how could the companies determine which videos show such violations? How can YouTube help preserve the work of those who are using its platform to document human rights abuses while also protecting its users — including children — from viewing disturbing footage?
Idea: Have students view the video with Al Khatib and Kayyali. Then have students pretend to be leaders of tech companies, meeting to discuss the issue of videos being removed on their platforms, as well as Al Khatib’s proposal to hire moderators and allow researchers to review videos that were taken down.