Did You Know?
Deepfakes: When you can’t believe your own eyes
Remember when you could watch a video and feel fairly confident that the people in it were actually saying and doing what you were hearing and seeing? And if someone had manipulated the video to alter its meaning or context, the fingerprints of fabrication were probably pretty easy to spot.
Those days are long gone.
Digital technology allows just about anyone to create deepfakes — videos that have been digitally manipulated to make a person appear to say or do something that the person never said or did. You may have seen some of the more infamous ones, such as a face swap of Jennifer Lawrence and Steve Buscemi. While that video is largely harmless, it demonstrates how the technology can be used to deceive audiences (and potentially cause harm).
A less sophisticated manipulation, known as a “cheapfake,” targeted House Speaker Nancy Pelosi in May 2019. A video of her remarks at the Center for American Progress was slowed down, making her speech sound slurred (and giving rise to false claims that she was drunk).
Some media experts believe that in this presidential election year, cheapfakes will turn out to be a bigger problem than deepfakes. As three current and former Harvard University researchers argued in a piece for Nieman Lab, crude cheapfakes will likely serve the purposes of propagandists better — and are just as likely as more sophisticated videos to draw in those who are inclined to believe them.
Lawmakers in California reacted to such concerns by passing legislation that makes it illegal to distribute deepfakes of a candidate for public office within 60 days of an election. Gov. Gavin Newsom, a Democrat, signed the measure into law in October 2019. Candidates can sue to stop the spread of videos and can seek financial damages, although the law imposes no criminal penalties.
Technology companies are also taking action against the alarming potential of synthetic video and audio technologies to provoke or confuse voters — or to give politicians a way to dismiss an authentic but potentially damaging video as “fake.” The New York Times reported in November 2019 that Google is developing automated tools to detect deepfakes (an episode of The Weekly, the Times’ investigative journalism television series, featured similar work). Google hired actors to create its own deepfakes, then used those fakes to train an algorithm to recognize such methods of video manipulation. The company is making its collection of fakes available to other researchers trying to build similar detection tools.