Tip Sheets

Cornell Tech expert on Meta controversy, deepfake detectors

Media Contact

Becka Bowyer


Meta’s oversight board is reviewing the company’s handling of two sexually explicit AI-generated images of female celebrities that circulated on Facebook and Instagram.


David Widder

Postdoctoral Fellow, Cornell Tech

David Gray Widder, a postdoctoral fellow at Cornell Tech, studies how the people creating artificial intelligence systems think about the downstream harms their systems may cause, and the wider cultural, political and economic logics that shape this thinking.

Widder says:

“Meta's focus on celebrities versus ordinary people, and its initially disparate treatment of deepfake porn depicting an American woman versus an Indian woman in this case, suggest that Meta and other social media companies are more concerned with protecting their corporate image than with making the necessary investments in diligent content moderation across the board. Reporting in recent years has shown that Meta's Facebook woefully underinvests in content moderation teams in non-English-speaking regions, and recent reporting demonstrates the wide proliferation of AI deepfake ‘influencers’ on Meta's platforms, in which AI-generated faces are photoshopped onto real women's bodies.

“Current technological mechanisms to detect and remove deepfakes will not endure. Because companies and others are actively investing in improving the realism of deepfake technology, and releasing it openly, it becomes technically infeasible to build ‘deepfake detectors’ that can reliably spot deepfakes from imagery artifacts or other technical signatures alone. Instead, content moderation is necessary: it can take political and social context into account and check the claims made in a video against trusted sources. But it is costly, and thus underinvested in.

“Importantly, non-consensually created and shared porn should be removed regardless of whether it is a deepfake or not, and a technological tool is not going to be able to check for consent. Deepfake porn is not harmful merely because it is deepfaked—it is harmful because it is porn depicting women that is created and spread without their consent. The harm does not lie primarily in fooling viewers, but in the impact on women.”

Cornell University has television, ISDN and dedicated Skype/Google+ Hangout studios available for media interviews.