
AI slop can’t be fixed through opt-in settings alone

Media Contact

Becka Bowyer

TikTok is testing a new setting that lets users choose how much AI-generated content they want to see in their “For You” feed. The change is rolling out over the coming weeks.

Alexios Mantzarlis

Director of the Security, Trust, and Safety Initiative (SETS)

Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative (SETS) at Cornell Tech, has previously written about the pervasiveness of AI slop.

Mantzarlis says:

"We can't fix AI slop through opt-in settings alone. That said, increasing user agency is an essential element of platforms taking responsibility for the deluge of synthetic content in our feeds. I am delighted to see TikTok provide this filtering option – following a similar decision by Pinterest earlier this year – and I hope all social networks go down this path. For AI filters to work, however, we need AI detection and labeling technology and implementation to catch up. An audit I conducted earlier this year showed there's still plenty of work to be done on that front."

Brooke Erin Duffy

Associate Professor of Communication

Brooke Erin Duffy, associate professor of communication at Cornell University, is an expert on social media.

Duffy says:

“With the rise of ‘AI slop,’ rival platforms TikTok and Meta are pursuing markedly different approaches. While TikTok is ceding more control to users with its just-launched AI controls, Meta is pairing its Vibes AI feed with a newly announced commitment to protecting creators. The open question, however, is how either company will quell the AI-related fears of advertisers.”

Abe Davis

Assistant Professor, Cornell Ann S. Bowers College of Computing and Information Science

Abe Davis, assistant professor of computer science at Cornell University, developed a way to “watermark” light in videos, which can be used to detect whether video footage has been faked or manipulated.

Davis says:

“There are two fronts to think about here, and both are really important. The first deals with how to tell what is real anymore. This is what most people mean when they talk about detecting ‘deep fakes,’ which is only getting harder with time. In the long term, we may only be able to solve that problem when some specific context about a video is available, like knowledge about the people or places it’s supposed to show. But the second front, which is also very important, deals with how to keep meaningless AI slop from drowning out content created by real people. Here, it’s not about detecting whether the video represents truth, but whether someone had to put any thought or effort into making it. This is going to be crucial for companies like TikTok and others, where the real value of the company is in having a community that generates a constant stream of evolving original content.

“This isn’t just a problem for social media platforms. It’s a different version of the same threats facing hiring, education, and a lot of other areas. I think people are just now starting to realize that ‘AI slop’ is really a cute name for something with potentially terrifying consequences in a lot of important domains.”

John Thickstun

Assistant Professor of Computer Science

John Thickstun, assistant professor of computer science at Cornell University, studies machine learning and generative models.

Thickstun says:

“It’s encouraging to see more AI platforms adopting watermarking technologies, joining AI providers such as Google (with their SynthID watermarks). By watermarking AI Editor Pro content, TikTok will increase its ability to track its own AI-generated content, but this doesn’t help with the detection of AI-generated content on their platform that originated from other AI providers. If the AI community is able to establish industry standards whereby all major AI providers agree to implement watermarks, this would be a big step forward towards transparency over the dissemination of AI in information ecosystems.”

Cornell University has dedicated television and audio studios available for media interviews.