On Sept. 14, The New York Times published an essay by two of its reporters, Robin Pogrebin and Kate Kelly, based on their new book, The Education of Brett Kavanaugh: An Investigation. The Times’ opinion section — which is responsible for the Sunday Review section, where the essay appeared — also posted a tweet promoting the piece. Both the tweet and the essay sparked a firestorm of criticism across the political spectrum and exposed a series of flawed editorial decisions. The Times deleted the tweet, saying it had been poorly phrased.
The essay also included a new allegation of sexual misconduct against Kavanaugh when he was in college. Some readers were perplexed that the Times hadn’t made this new allegation the focus of the piece. Then it came to light that two important facts included in the book had been omitted from the essay: that the female student did not recall the incident, according to friends, and that she had declined to be interviewed. The Times later added those details to the essay, along with an Editors’ Note.
Kelly and Pogrebin told MSNBC that there was no intention to mislead anyone, and that the omission occurred while the essay was being edited. The book identifies the female student by name; the Times typically does not identify victims in such cases. “In removing her name, they removed the other reference to the fact that she didn’t remember,” Pogrebin said of her editors during an appearance on The View.
The new allegation did lead several Democrats running for their party’s presidential nomination to call for Kavanaugh’s impeachment — a development that the Times did cover.
Note: Mollie Hemingway, a senior editor at The Federalist, a Fox News contributor and a co-author of Justice on Trial: The Kavanaugh Confirmation and the Future of the Supreme Court, was credited by news outlets with pointing out the omitted material. (Hemingway is a member of the News Literacy Project’s board of directors.)
Discuss: Do you agree that The New York Times should have deleted the initial tweet? Why or why not? What do you think of the way the Times handled the essay and the new allegation it contained? What do you think of the way it handled the criticism? Do you think adding details to a story after initial publication counts as a correction? In its Editors’ Note, should the Times have explained how the omission occurred?
Note: Using photos of trash in public spaces in a false context is a common disinformation tactic, frequently used to create the appearance of inconsistency or hypocrisy among those who support “green” policies.
NO: President Donald Trump did not say this during a phone interview on Fox News’ morning show, Fox and Friends:
“The Democrats can subpoena me and my administration for the next 10, 15, 20 years and we will never capitulate. They need to face the fact that I am in charge, this is my country and I will do as I please, they have no control over me. The people support me and will always support me.”
NO: The chyron “President Trump goes ballistic on Fox and Friends” did not actually appear on the show. YES: A video still of a Fox and Friends phone interview with Trump on April 25 was manipulated to add this false quote and chyron.
NO: A “Somali mob” did not attack a man in Minneapolis last month. YES: A man was assaulted and robbed by a group of young men (WARNING: violent video footage embedded) outside Target Field in Minneapolis on Aug. 3. NO: None of the suspects (who were arrested) is Somali, according to the Minneapolis Police Department. YES: The Daily Caller posted a video on YouTube and on its Facebook page falsely claiming that the group was a “Somali mob.”
Discuss: What actions did The Daily Caller take to correct its erroneous report?
Idea: Have students attempt to map the spread of this false claim online by searching for it across social media platforms and documenting what they find.
Five to teach
Automated digital ad brokers are channeling hundreds of millions of dollars to websites that publish disinformation, according to a new report (PDF) from the Global Disinformation Index (GDI), a U.K.-based nonprofit that describes itself as operating “on the three principles of neutrality, independence and transparency.” The report estimated that programmatic advertising — automated ad auctions and placements on websites — generated at least $235 million (U.S.) for 20,000 “disinforming domains” such as twitchy.com, zerohedge.com and the Russian state-run “news” sites RT.com and sputniknews.com. Programmatic ads are frequently the primary source of revenue for such sites, and are placed by third-party ad exchanges such as Google, Taboola and Revcontent. Brands whose ads are placed by these exchanges are often unaware of the websites that their ads end up supporting.
Note: Programmatic ad exchanges already filter out websites that their clients are likely to find offensive or damaging to their brands. The report urges the companies running these exchanges to add disinformation websites to those filter lists, cutting off their ad revenue.
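To illustrate that mechanism, here is a minimal sketch of exchange-side domain filtering in Python. The domains, blocklist and function names are hypothetical illustrations, not drawn from the report or from any real exchange’s systems:

```python
# Minimal sketch of how an ad exchange could check publishers against a
# brand-safety blocklist -- the same mechanism the GDI report asks exchanges
# to extend to disinformation sites. All domains here are hypothetical.

BRAND_SAFETY_BLOCKLIST = {
    "example-disinfo.com",   # hypothetical disinforming domain
    "another-bad-site.net",  # hypothetical disinforming domain
}

def should_place_ad(publisher_domain: str) -> bool:
    """Return False for blocklisted publishers, so no ad (and no ad
    revenue) flows to that site."""
    return publisher_domain.lower() not in BRAND_SAFETY_BLOCKLIST

if __name__ == "__main__":
    for domain in ("example-disinfo.com", "legit-news.org"):
        status = "place ad" if should_place_ad(domain) else "blocked"
        print(f"{domain}: {status}")
```

In practice, exchanges apply these lists when handling real-time bid requests, so adding a domain to the list is what “cutting off ad revenue” means operationally.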
Discuss: Should brands monitor where their ads appear online? Is this possible? What conflicts of interest exist for online ad exchange companies — which make money every time they place an ad — in excluding disinformation websites from their networks?
Ideas:
Challenge students to screenshot ads that appear on websites that publish content they deem problematic, then share them with the brands that the ads promote. (Note: Make sure students understand that when they visit disinformation websites, their traffic generates a small amount of ad revenue for those sites.)
Use a tool — such as BuiltWith — to look at a variety of these websites and explore which ad exchanges are active on them. Then contact those ad tech companies and ask if they have a plan to stop providing such sites with ad revenue.
Use AdBeat’s free preview to determine the number of ads seen this month on a disinformation website, along with the percentage of those ads that are programmatic (placed by ad exchanges). Then use GDI’s method (see page 5 of the report) to estimate that website’s programmatic ad revenue for the month; a simplified version of such an estimate is sketched below.
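Here is a back-of-the-envelope version of that estimate in Python. It follows the general shape of traffic-times-CPM revenue models; GDI’s exact methodology is the one described on page 5 of its report, and every input value below is a hypothetical placeholder, not a measured figure:

```python
# Rough estimate of a site's monthly programmatic ad revenue:
#   visits x ads per visit = impressions
#   impressions / 1,000 x CPM = total ad revenue
#   total ad revenue x programmatic share = programmatic ad revenue
# All inputs are hypothetical placeholders, not measured values.

def estimate_monthly_programmatic_revenue(
    monthly_visits: int,        # e.g., from a traffic-estimation tool
    ads_per_visit: float,       # average ad impressions served per visit
    cpm_usd: float,             # revenue per 1,000 impressions, in dollars
    programmatic_share: float,  # fraction of ads placed by exchanges (0-1)
) -> float:
    impressions = monthly_visits * ads_per_visit
    total_revenue = impressions / 1000 * cpm_usd
    return total_revenue * programmatic_share

# Hypothetical site: 2M visits, 5 ads per visit, $1.50 CPM, 80% programmatic
estimate = estimate_monthly_programmatic_revenue(2_000_000, 5, 1.50, 0.8)
print(f"${estimate:,.0f}")  # -> $12,000
```

Students can plug in the figures they pull from AdBeat and a traffic estimator to see how quickly revenue accumulates for a high-traffic site.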
A network of Facebook pages that have built up large audiences by sharing memes celebrating U.S. patriotism — but that are managed primarily by people in Ukraine — has recently begun pushing propaganda supporting President Trump, according to a report in Popular Information, a newsletter written by Judd Legum, the founder and former editor of ThinkProgress. The network’s largest page, “I Love America,” has 1.1 million fans and more overall engagement on Facebook than USA Today, which has 8 million followers. It has repeatedly posted memes that had previously been posted to pages run by Russia’s Internet Research Agency (IRA), which worked to influence the 2016 U.S. presidential election. The Ukrainian network includes pages dedicated to other topics (“Cute or Not?” shares cute puppy memes, for example, and “I Love Jesus Forever” offers religious memes); these pages also cross-post partisan content from the other pages in the network. The network does not appear to be run or supported by any government.
Note: According to a spokesman, Facebook does not believe that these pages violate its policy against “coordinated inauthentic behavior,” which occurs “when groups of pages or people work together to mislead others about who they are or what they’re doing.”
Also note: Several of the pages in the Ukrainian network have even more followers than the largest IRA-run pages did before they were suspended. Also, the 29.6 million engagements (likes, shares and comments) on the Ukrainian network’s pages over the last 90 days are not that far from the combined number of engagements on The New York Times and The Washington Post pages (31.7 million) during that same period.
Discuss: Should Facebook filter or remove content that was shared on pages managed by the IRA to influence the 2016 U.S. presidential election? How should Facebook regulate pages that are managed by people in one country but share political content focused on another? Are there instances where such activity should be allowed?
AI-generated stock photography has arrived. One company, Icons8, is offering 100,000 headshot images generated with artificial intelligence at no cost, The Verge reported. The headshots are free to use as long as “generated.photos” is credited (and linked). The images offer several advantages — including consistent lighting and sizing; a range of ethnicities, ages and facial expressions; and freedom from copyright restrictions and royalties — but critics are concerned about possible misuses of the technology, such as fake social media profile pictures that are difficult to trace.
Discuss: This technology is intended to provide low- or no-cost stock photography options, but are there any drawbacks? To what other uses might fake face technology be put?
Idea: As a bell-ringer, display whichfaceisreal.com and ask students to guess which of the two faces is real, eliciting their reasons and observations. Then have them review the site’s “Learn” section and develop a tip sheet to help friends and family spot computer-generated faces.
A New York Times analysis of a large network of “sockpuppet” accounts run by the Chinese government found that some consistently advanced Chinese government interests, while others suddenly switched from tweeting about non-political topics — often in other languages — to tweeting about politics in Hong Kong and China. These tweets mostly originated from inside China, where Twitter and other non-Chinese social media platforms are blocked. Twitter banned over 200,000 accounts last month — followed by 4,300 more last Friday — for engaging in a coordinated campaign to “sow discord about the protest movements in Hong Kong.”
Discuss: How are governments using fake social media profiles to advance their interests? What steps could social media platforms take to combat this practice? Have you ever engaged with a social media account you suspected of being a sockpuppet?
Instagram has begun restricting who can see posts that promote weight loss products and cosmetic procedures. The Facebook-owned platform announced Wednesday that posts giving a price or offering incentives to buy such products — which are often promoted by celebrity influencers, such as the Kardashian sisters — will be hidden from users under 18, and that posts making “a miraculous claim” about a specific weight loss product and providing a discount code will be removed. Instagram is also introducing tools that will enable users to flag posts they believe violate the new policy.
Discuss: Are “miraculous” weight loss products commonly promoted on Instagram? Is the company right to restrict these posts? Do teens need to be protected from them? Did Instagram structure this new policy in an effective way? How big a problem is medical misinformation in general?
Idea: Have students check their Instagram feeds to see if the new restrictions have taken effect for their accounts.