X will temporarily demonetize accounts that share AI-generated war footage without a label. The news comes after fake war footage flooded social media following U.S. and Israeli airstrikes in Iran.
Alexios Mantzarlis is director of the Security, Trust, and Safety Initiative at Cornell Tech. He previously served as director of the International Fact-Checking Network.
Mantzarlis says:
“X's policy is a reasonable countermeasure to viral disinformation about the war. In principle, it weakens the incentive structure for those spreading disinformation, which is a core strategy of any platform's Trust & Safety operation.
“The devil will be in the implementation details: metadata on AI content can be removed, and Community Notes are relatively rare. It is unlikely that X will be able to guarantee both high precision and high recall for this policy.
“The irony is that this decision comes from a company whose CEO has repeatedly attacked Trust & Safety in general (and disinformation interventions in particular) as a tool of censorship. Trust & Safety isn’t censorship. It’s the imperfect, tech-mediated price we pay to coexist online. It seems X is figuring that out, like every other platform before it.”