The Trump administration is discussing oversight of artificial intelligence models before they are made publicly available. The proposal is a departure from its longstanding noninterventionist approach to AI. The following Cornell University experts are available to comment.
Gregory Falco, assistant professor of mechanical and aerospace engineering, has published extensively on governance of AI and has led calls for an independent oversight body to assess, audit and monitor some AI systems.
Falco says:
“The only viable path is some form of independent audit. The federal government does not currently have the in-house technical expertise, infrastructure, or day-to-day insight needed to directly evaluate these systems on its own. At the same time, a purely voluntary model of self-governance is not enough. The more realistic approach is a market-compatible oversight system in which companies continue to innovate, but their claims, safeguards, and practices are subject to independent review through random or targeted audits. For example, with the IRS, the government does not monitor every transaction in real time, but the possibility of audit creates accountability and discipline across the system.
“In that sense, this reported shift by the Trump Administration strongly reaffirms the argument we made in our Nature paper. Government oversight of AI cannot simply mean political review of model outputs, nor should it become a mechanism for deciding whether a model says favorable or unfavorable things about a president or administration. That would be the wrong frame entirely. The point should be to create a technically credible audit structure that evaluates risk, safety, security and accountability in a way that preserves U.S. innovation while creating real consequences for reckless deployment.
“The danger is that government oversight becomes political, performative or captured by the companies it is supposed to evaluate. The opportunity is to build a practical audit framework that lets the U.S. remain the global leader in AI while creating credible accountability around the most consequential risks.”
Sarah Kreps, director of the Tech Policy Institute, focuses on the intersection of international politics, technology, and national security.
Kreps says:
“The question of how to oversee AI models is harder than it looks. Two things are simultaneously true. The first is that Mythos and models like it are real national security concerns. The second is that the obvious response, government vetting, carries risks of its own.
“Mythos and models like it are real national security concerns. The recent demonstrations of AI-enabled cyberattack capability have made that concrete in a way the abstract debate never did. Anthropic and the other labs are now part of the national security complex whether they want to be or not, and that requires a closer working relationship with the government than the current arrangement allows.
“But once you build a government vetting process for technology, you get the good with the bad. The definition of ‘safe’ is contested. The process can be politicized. Whoever holds power gets to shape how the vetting works. The Biden administration tried to advance regulation with the 2023 executive order, which the Trump administration revoked on its first day in office. Now new kinds of measures are being considered again, just under different political auspices. The challenge is doing the coordination without building an approach that is either quickly obsolete because of the fast-moving technology or that gets weaponized by the next administration, whoever that is. Neither administration has fully figured this out.”