July 15, 2019

Schiff Presses Facebook, Google and Twitter for Policies on Deepfakes Ahead of 2020 Election

Schiff: I am gravely concerned the experience of 2016 may have just been the prologue

Washington, D.C.—Today, Rep. Adam Schiff (D-CA), Chairman of the House Permanent Select Committee on Intelligence, wrote to the chief executive officers of Facebook, Google, and Twitter asking how they plan to address realistic fake videos and images generated using machine learning, often referred to as “deepfakes,” and related false content on their platforms ahead of the 2020 U.S. elections.

“As we look ahead to the 2020 election, I am gravely concerned the experience of 2016 may have just been the prologue,” Schiff wrote in the letters. “Social media platforms can catapult a compelling lie into the conversations of millions of users around the world before the truth has a chance to catch up. The consequences for our democracy could be devastating: a timely, convincing deepfake video of a candidate going viral on a platform could hijack a race—and even alter the course of history.”

Facebook, Google, and Twitter each operate platforms and subsidiary products like Instagram and YouTube that were used by foreign adversaries to mislead and deceive American voters to influence the outcome of the 2016 election. While each company has since taken steps to combat coordinated inauthentic behavior, online misinformation remains a persistent and evolving challenge.

“Social media companies and platforms have taken a variety of actions since 2016 to address disinformation campaigns, but I am concerned they remain unprepared and vulnerable to sophisticated and determined adversaries. With voting in the first 2020 primaries less than eight months away, I encourage you to use this time to prepare for what may come, so we are not left, in the weeks and months after the election, wishing we had done things differently or acted more quickly,” Schiff continued.

In the letters, Schiff asks about the companies’ formal policies on deepfakes and false impersonations on their platforms, their responses to a manually edited “cheapfake” video of Speaker Pelosi that circulated in May, and their research into technology for automatically detecting machine-manipulated images and videos.

In June, the House Permanent Select Committee on Intelligence held an open hearing on the national security challenges of artificial intelligence, manipulated media, and deepfake technology. Schiff also recently introduced the Damon Paul Nelson and Matthew Young Pollard Intelligence Authorization Act for Fiscal Year 2020, which includes a provision directing the Intelligence Advanced Research Projects Agency (IARPA) to carry out a public prize competition to spur innovative research into technologies that automatically detect machine-manipulated media like deepfakes.


The full text of the three letters is below:

Mark Zuckerberg
Chairman and Chief Executive Officer
Facebook Inc.
1 Hacker Way
Menlo Park, CA 94025

Dear Mr. Zuckerberg:

In an op-ed last year in the Washington Post, you recounted how Facebook did not notice foreign actors exploiting its platforms to run coordinated interference campaigns to influence America’s 2016 elections until after the votes were tallied.[1] I appreciate the ongoing dialogue we have had with Facebook in the time since on the topic of disinformation and coordinated inauthentic behavior on your platforms by foreign and domestic actors. As we look ahead to the 2020 election, though, I am gravely concerned the experience of 2016 may have just been the prologue.

In June, the House Permanent Select Committee on Intelligence held an open hearing on the national security challenges of “deepfake” images and videos generated using machine learning to depict events, actions, or speech that never occurred. As you are aware, manipulated or misleading media is already a constant feature online, but deepfake technology has the potential to make the problem far worse.

The tools to create deepfakes are widely accessible and quickly improving, which means developing sophisticated disinformation will no longer be the sole purview of well-resourced foreign adversaries, but of anyone with a computer. Furthermore, social media platforms like Facebook and Instagram can catapult a compelling lie into the conversations of millions of users around the world before the truth has a chance to catch up. The consequences for our democracy could be devastating: a timely, convincing deepfake video of a candidate going viral on a platform like Facebook could hijack a race—and even alter the course of history.

The nation saw a small preview of the havoc a well-timed deepfake could wreak in our current political environment when a crudely modified video of Speaker of the House Nancy Pelosi was posted on Facebook in May. In the short time before fact-checkers flagged the video as “false,” it received millions of views. This manually edited video was not a deepfake and was easily debunked, but for millions of Facebook users who viewed it, the damage was already done.

Facebook and other social media companies and platforms have taken a variety of actions since 2016 to address disinformation campaigns, but I am concerned they remain unprepared and vulnerable to sophisticated and determined adversaries. With voting in the first 2020 primaries less than eight months away, I encourage you to use this time to prepare for what may come, so we are not left, in the weeks and months after the election, wishing we had done things differently or acted more quickly.

Developing an effective response to the pernicious potential of deepfakes is particularly pressing on platforms where virality is a central feature. Accordingly, I ask that you address the following questions regarding deepfake and related content on Facebook, Instagram, and WhatsApp:

  1. How many views did the manually altered video of Speaker Pelosi receive on Facebook before it was marked as “false” by independent fact-checkers? How long did it take to initiate and then complete that independent review? How many views did the video receive after being marked false?

  2. In a recent interview, you stated that deepfakes may be “a completely different category of thing from normal false statements overall.”[2] Does Facebook have a written policy on deepfake content on Facebook, Instagram, and WhatsApp? If so, will you provide it in response to this letter? If not, are you developing such a policy and when will it be finalized?

  3. Facebook’s Terms of Service explicitly prohibit users from sharing “anything that is unlawful, misleading, discriminatory or fraudulent.” Are fake images or videos that realistically portray individuals saying or doing something they never did considered to be misleading? Are they allowed on Facebook’s platforms? Under what circumstances, if any, would Facebook remove such content and block its upload to your platforms?

  4. Is Facebook conducting research into techniques for automatically detecting deepfakes and other forms of machine-manipulated media on its platforms? To the extent machine-manipulated media is detected upon upload to a Facebook platform, will Facebook take specific steps to dampen the virality of such content, take it down completely, or require a human review for politically relevant content?

Thank you for your attention to these issues. Given the importance of these challenges and the short time we have remaining to harden our democracy against further foreign interference, I request that you respond no later than July 31, 2019.

Sincerely,

Adam B. Schiff
Member of Congress


Sundar Pichai
Chief Executive Officer
Google LLC
1600 Amphitheatre Parkway
Mountain View, CA 94043


Dear Mr. Pichai:

In written testimony submitted to the Senate Intelligence Committee last fall, a Google representative described how state-sponsored entities used Google products to disseminate information to interfere with America’s 2016 elections.[3] I appreciate the ongoing dialogue we have had with Google in the time since on the topic of disinformation and coordinated inauthentic behavior on your platforms by foreign and domestic actors. As we look ahead to the 2020 election, though, I am gravely concerned the experience of 2016 may have just been the prologue.

In June, the House Permanent Select Committee on Intelligence held an open hearing on the national security challenges of “deepfake” images and videos generated using machine learning to depict events, actions, or speech that never occurred. As you are aware, manipulated or misleading media is already a constant feature online, but deepfake technology has the potential to make the problem far worse.

The tools to create deepfakes are widely accessible and quickly improving, which means developing sophisticated disinformation will no longer be the sole purview of well-resourced foreign adversaries, but of anyone with a computer. Furthermore, global platforms like Google and YouTube can catapult a compelling lie into the conversations of millions of users around the world before the truth has a chance to catch up. The consequences for our democracy could be devastating: a timely, convincing deepfake video of a candidate going viral on a platform like YouTube could hijack a race—and even alter the course of history.

The nation saw a small preview of the havoc a well-timed deepfake could wreak in our current political environment when a crudely modified video of Speaker of the House Nancy Pelosi was posted on several online platforms, including YouTube, in May. This manually edited video was not a deepfake and was easily debunked, but for millions of people who viewed it before platforms removed it or flagged it as misleading, the damage was already done.

Google, YouTube, and other companies and platforms have taken a variety of actions since 2016 to address disinformation campaigns, but I am concerned they remain unprepared and vulnerable to sophisticated and determined adversaries. With voting in the first 2020 primaries less than eight months away, I encourage you to use this time to prepare for what may come, so we are not left, in the weeks and months after the election, wishing we had done things differently or acted more quickly.

Developing an effective response to the pernicious potential of deepfakes is particularly pressing on platforms where virality is a central feature. Accordingly, I ask that you address the following questions regarding deepfake and related content on Google platforms, including YouTube:

  1. How many YouTube users viewed the manually altered video of Speaker Pelosi before YouTube removed it? What triggered the review process before the video was taken down, and how long did the review take to complete?

  2. Does Google have a written policy on deepfake content on YouTube or its other platforms, including use in advertising? If so, will you provide it in response to this letter? If not, are you developing such a policy and when will it be finalized?

  3. Are fake images or videos that realistically portray individuals saying or doing something they never did allowed on YouTube, including use in advertising? Under what circumstances, if any, would Google remove such content and block its upload?

  4. Is Google conducting research into techniques for automatically detecting deepfakes and other forms of machine-manipulated media on its platforms? To the extent machine-manipulated media is detected upon upload to a Google platform, will Google take specific steps to dampen the virality of such content, take it down completely, or require a human review for politically relevant content?

Thank you for your attention to these issues. Given the importance of these challenges and the short time we have remaining to harden our democracy against further foreign interference, I request that you respond no later than July 31, 2019.

Sincerely,

Adam B. Schiff
Member of Congress


Jack Dorsey
Chief Executive Officer
Twitter, Inc.
1355 Market Street, Suite 900
San Francisco, CA 94103

Dear Mr. Dorsey:

In testimony to the House Energy and Commerce Committee last fall, you described how Twitter—the “global town square” whose intended purpose is to advance the public conversation—was misused by foreign actors to manipulate the political conversation in advance of America’s 2016 elections. I appreciate the ongoing dialogue we have had with Twitter in the time since on the topic of disinformation and coordinated inauthentic behavior on your platform by foreign and domestic actors. As we look ahead to the 2020 election, though, I am gravely concerned the experience of 2016 may have just been the prologue.

In June, the House Permanent Select Committee on Intelligence held an open hearing on the national security challenges of “deepfake” images and videos generated using machine learning to depict events, actions, or speech that never occurred. As you are aware, manipulated or misleading media is already a constant feature online, but deepfake technology has the potential to make the problem far worse.

The tools to create deepfakes are widely accessible and quickly improving, which means developing sophisticated disinformation will no longer be the sole purview of well-resourced foreign adversaries, but of anyone with a computer. Furthermore, social media platforms can catapult a compelling lie into the conversations of millions of users around the world before the truth has a chance to catch up. The consequences for our democracy could be devastating: a timely, convincing deepfake video of a candidate going viral on a platform like Twitter could hijack a race—and even alter the course of history.

The nation saw a small preview of the havoc a well-timed deepfake could wreak in our current political environment when a crudely modified video of Speaker of the House Nancy Pelosi was shared on several online platforms, including Twitter, in May. This manually edited video was not a deepfake and was easily debunked, but for millions of people who viewed or shared it before platforms removed it or flagged it as misleading, the damage was already done.

Twitter and other social media companies and platforms have taken a variety of actions since 2016 to address disinformation campaigns, but I am concerned they remain unprepared and vulnerable to sophisticated and determined adversaries. With voting in the first 2020 primaries less than eight months away, I encourage you to use this time to prepare for what may come, so we are not left, in the weeks and months after the election, wishing we had done things differently or acted more quickly.

Developing an effective response to the pernicious potential of deepfakes is particularly pressing on platforms where virality is a central feature. Accordingly, I ask that you address the following questions regarding deepfake and related content on Twitter:

  1. How many views did the manually altered video of Speaker Pelosi receive directly on Twitter? How many tweets linked to the altered video on other platforms, and how many impressions, retweets, and likes did these tweets receive?

  2. Does Twitter have a written policy on deepfake content? If so, will you provide it in response to this letter? If not, are you developing such a policy and when will it be finalized?

  3. Are fake images or videos that realistically portray individuals saying or doing something they never did allowed on Twitter? Under what circumstances, if any, would Twitter remove such content and block its upload to your platform?

  4. Is Twitter conducting research into techniques for automatically detecting deepfakes and other forms of machine-manipulated media on its platform? To the extent machine-manipulated media is detected upon upload to Twitter, will the company take specific steps to dampen the virality of such content, take it down completely, or require a human review for politically relevant content?

Thank you for your attention to these issues. Given the importance of these challenges and the short time we have remaining to harden our democracy against further foreign interference, I request that you respond no later than July 31, 2019.

Sincerely,

Adam B. Schiff
Member of Congress

###



[1] Mark Zuckerberg, “Protecting democracy is an arms race. Here’s how Facebook can help.” The Washington Post, September 4, 2018.

[2] Alexis Madrigal, “Mark Zuckerberg Is Rethinking Deepfakes.” The Atlantic, June 26, 2019.

[3] Kent Walker, “Written Congressional Testimony.” Submitted to the Senate Select Committee on Intelligence, September 5, 2018.