Tianyi Chen is pushing the boundaries of artificial intelligence by asking a pressing question: What if AI could be engineered not just to optimize for a single outcome, but to make smarter, more balanced decisions — much like humans do?
The artificial intelligence models underlying popular chatbots and content moderation systems struggle to identify offensive, ableist social media posts in English – and perform even worse in Hindi, new Cornell research finds.
Researchers used advanced data analytics to create a state-by-state look at the environmental impact of the AI boom and how to make the computing infrastructure that supports it more sustainable.
The Trump administration is expected to announce an executive order that would direct the Justice Department to sue states that pass laws regulating artificial intelligence.
Students who plan to use ChatGPT to write their college admissions essays should think twice: Artificial intelligence tools write highly generic personal narratives, even when prompted to write from the perspective of someone with a certain race or gender.
The Cornell Institute for Digital Agriculture (CIDA) convened its annual workshop on Oct. 21, 2025, at the Statler Hotel on the Cornell University campus. The day-long gathering featured project updates, networking, and a keynote exploring how artificial intelligence is reshaping food systems.
Researchers at Cornell Tech and Cornell Bowers engaged directly with 15 content moderators on Reddit to see exactly how they try to preserve the news-sharing site's humanity in an increasingly AI-infused world.
Mako, co-founded by assistant professor Mohamed Abdelfattah, sets out to tackle one of artificial intelligence’s most pressing infrastructure challenges: optimizing the computing efficiency of graphics processing units.