Google is pausing Gemini's ability to generate images of people after the AI tool was found to be producing historically inaccurate images. The following Cornell University experts are available for comment.
Abe Davis, assistant professor of computer science, specializes in computer graphics and computer vision.
Davis says:
“In some ways, what worries me most is that people expect a generative AI to be accurate. If I ask Gemini to generate a picture of a cat, it can probably give me a pretty realistic-looking cat. But if I get more specific – for example, by asking for a specific breed of cat – it would need much more specific knowledge to avoid producing something objectively wrong. But AIs like Gemini are not trained to be cat experts, and they are not designed to be correct. They are designed to guess. We should always keep in mind that a lot of those guesses will be wrong.
“I think it is important to have discussions about the biases that are reflected in AI-generated data. We should be careful not to assume that there is always an intentional effort behind those biases, though.”
Allison Koenecke, assistant professor of information science, studies fairness in algorithmic systems.
Koenecke says:
“Much research has shown that presenting diverse options in recommendations aligns with user preferences. However, this needs to be considered alongside the user’s goals: for example, the best set of image recommendations may be different for someone seeking inspiration for creative endeavors, versus someone doing historical research. As such, it is important to consider how to reflect historical accuracies – and how to avoid historical erasures – among a diverse panel of generated images.”