As part of the Assessing and Imagining the Impact of Generative AI on Science Symposium, Yian Yin, Peter Loewen, danah boyd, Morgan Frank and Sukwoong Choi (r-l) field questions on AI innovation and policy during a March 5 panel discussion.
'Wild west' era of GenAI poses opportunities and challenges for science
By Patricia Waldron
Experts from across academia, industry and funding agencies gathered to discuss how generative artificial intelligence (GenAI) is transforming science – the good, the bad and the unknown – at the Assessing and Imagining the Impact of Generative AI on Science Symposium, held March 3-5 on the Cornell campus.
In a series of panel discussions, participants considered both the incredible boom in scientific productivity this technology has enabled, and associated issues related to AI governance, equity, access and public trust.
This was a forward-looking event where diverse scholars from Cornell and beyond – including computer scientists, biologists, philosophers and social scientists – met to grapple with the impacts of GenAI across scientific research as a whole, said co-organizer Yian Yin, assistant professor of information science in the Cornell Ann S. Bowers College of Computing and Information Science.
“Normally, we’re off in our own little slices of the universe," said co-organizer AJ Alvero, assistant research professor in Cornell's Center for Data Science for Enterprise and Society. "But for this week, we were all in the same room, able to talk about this important issue and how it’s changing perspectives in our fields.”
Thorsten Joachims, vice provost for artificial intelligence strategy and the Jacob Gould Schurman Professor in the departments of Computer Science and Information Science, gave the opening remarks for the symposium.
"We're witnessing a fundamental shift in how science is conducted and communicated," Joachims said. "While these changes are exciting, there are important implications we must explore to ensure that the use of generative AI is rigorous and respects disciplinary norms, and does not erode public confidence in research."
GenAI is accelerating the rate of scientific discovery and increasing publication output for scientists who have access to these tools. Large language models, like the one that powers ChatGPT, are a major asset when writing papers – especially for scientists who are not native English speakers. AI tools can also assist with troubleshooting, website creation and coding – and these tools are expected to only improve in the future.
But while AI tools have been beneficial for individual scientists, this glut of AI-assisted papers presents a new challenge: how to evaluate each publication and judge its contribution to the field, Yin said. With GenAI, a well-written paper or grant application can conceal substandard science, complicating decisions about whether to accept a paper for publication, which projects to fund and even who should receive tenure. At the symposium, university leadership and representatives from funding agencies discussed this ongoing challenge.
“I'm seeing a lot of nuance and new challenges in efficient and fair evaluation," Yin said. "That's something everyone agrees is urgently needed, and we need to do more work and think more carefully on that aspect.”
Another issue the panels discussed was scientific fraud. While plagiarism and falsified data have always been problems, GenAI tools have made committing fraud easier than ever. "The barrier to entry to doing bad science is much, much lower," Alvero said. "On the flip side, these same tools might be able to identify bad actors more easily."
A common refrain in the discussions was the need for greater regulation during this "wild west" era of AI. Currently, there is little regulation of GenAI or guidance for its appropriate use, and researchers and universities are often left to set their own policies. As AI becomes increasingly powerful, the need for regulation will only grow.
“Hopefully, this is our contribution to larger conversations as we continue to figure out this new terrain,” Alvero said.
The Cornell AI Initiative, Cornell Bowers, Research & Innovation, the College of Arts and Sciences and the Center for Data Science for Enterprise and Society co-sponsored the symposium.
Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.