Tip Sheets

New features of OpenAI’s GPT-4 are largely ‘unsurprising’

Media Contact

Becka Bowyer

OpenAI launched the newest version of its language software, GPT-4, an advanced tool for analyzing images and mimicking human language. The following Cornell University experts are available to discuss the latest model and its implications for the tech industry and beyond.

Yoav Artzi

Associate Professor

Yoav Artzi, associate professor of computer science, studies natural language processing.

Artzi says:

“Largely, OpenAI released little information of substance. They claim improvement on some public benchmarks. Their report provides no technical insight into GPT-4, even less than with previous releases (e.g., GPT-3). While they announced the possibility of using images as input, they did not release this feature, and provided no estimate of when it will be released. Largely, though, it's not a surprising feature. The technology is well known, and multi-modal models have been reported for a while now (e.g., internally at Google).”

Kenneth Rother

Visiting Lecturer

Kenneth Rother, visiting lecturer, studies innovation, entrepreneurship, and technology.

Rother says:

“Much has been written about ChatGPT and now GPT-4. Analysis ranges from ‘no big deal’ to ‘this changes everything.’ The ‘no big deal’ folks tend to be knowledgeable computer scientists who see this technology as a logical progression of current research but are quick to point out there is still a long way to go toward true artificial intelligence. They are not wrong. The other, more optimistic camp tends to look at the business potential and how these new technologies will influence customer-facing products. Perhaps airplane autopilot and monitoring systems provide a helpful analogy. These tools make pilots better and safer, but without human interaction and oversight they are just a pile of technology.”

Cornell University has television, ISDN and dedicated Skype/Google+ Hangout studios available for media interviews.