X has paused searches for Taylor Swift after a proliferation of fake, sexualized, AI-generated photos surfaced on its platform.
Gili Vidan, assistant professor at Cornell University, is a historian of information technology. She says a focus on technical fixes to the problem of deepfakes, such as developing a federal digital watermarking standard to tag AI-generated content, is insufficient.
“What is so startling about the case of Swift’s deepfake images is not how convincing new synthetic media can be, but how quickly they garnered so many views and so much attention. The violation that nonconsensual intimate images constitute is not dispelled by the widespread knowledge that these were AI-generated. For that reason, a focus on technical fixes to the problem of deepfakes, such as the White House executive order last fall, which proposed developing a federal digital watermarking standard to tag AI-generated content as such, is insufficient.
“A more comprehensive regulatory response would center on protecting the groups that are most vulnerable to the representational harms of social media – often young women with far fewer resources than Swift – by demanding a proactive approach from platforms in screening such content.”