International team compares English, French in the brain

Jason Koski/University Photography
Wenming Luh, technical director of the Cornell MRI Facility, left; John Hale, associate professor of linguistics; and Jon Brennan of the University of Michigan look at images of the brain.

Researchers at Cornell and the University of Michigan, along with two teams in France, are working together to find out whether native speakers of American English and French use the same brain structures to understand a story when it is read to them in their own language.

John Hale, associate professor of linguistics, is principal investigator on the study and Wenming Luh, technical director of the Cornell MRI Facility, is co-principal investigator. Hale and his collaborators at Cornell and Michigan are supported by a new grant from the National Science Foundation. The French teams are supported by the French National Research Agency. In all, funding for the collaborative project totals $1 million.

“People have always wondered, ‘What is it about the brain that enables you to understand?’ That’s really the driving question in this study,” said Hale, who specializes in computer models of human language processing.

To answer this question, the team will examine which sorts of computer models fit the brain data best. Doing so paves the way for future work with individuals who have trouble using language, such as those with autism spectrum disorder. The results could also lead to better computer systems that use language in a brain-inspired way.

Hale and his collaborators at Cornell and Michigan will analyze study participants' brain activity patterns, recorded in an fMRI scanner, as the participants listen to segments of a novel read to them in American English for 90 minutes. The teams in France will do the same while their participants listen to the same novel in French.

“We’re looking for very subtle patterns of difference across an hour and a half that might differentiate leading ideas, or models, of how comprehension really works,” Hale said.

The fMRI scanner will capture a 3-D image of each participant's brain every two seconds, at a resolution of three-millimeter cubes. This technique will generate close to 100 gigabytes of data per person. The abundance of data will help the researchers distinguish the physical act of understanding from the brain's other basic functions.
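To make the scale of those numbers concrete, here is a rough back-of-envelope sketch. Only the scan length, sampling rate, images per time point and voxel size come from the article; the field of view and bytes per voxel are assumptions made purely for illustration, not the study's actual acquisition settings:

```python
# Back-of-envelope estimate of the fMRI image data volume.
# Figures marked "assumed" are illustrative, not from the study.

scan_minutes = 90          # length of the listening session (from the article)
tr_seconds = 2             # one 3-D image every two seconds (from the article)
images_per_timepoint = 3   # three images per time point (from the article)

voxel_mm = 3               # three-millimeter cubes (from the article)
field_of_view_mm = 192     # assumed head coverage along each axis
bytes_per_voxel = 4        # assumed 32-bit values per voxel

timepoints = scan_minutes * 60 // tr_seconds    # 2,700 time points
volumes = timepoints * images_per_timepoint     # 8,100 3-D images
voxels = (field_of_view_mm // voxel_mm) ** 3    # 64^3 = 262,144 voxels per image
raw_gb = volumes * voxels * bytes_per_voxel / 1e9

print(f"{volumes} images, roughly {raw_gb:.1f} GB of reconstructed image data")
```

Under these assumptions the reconstructed images alone come to under 10 gigabytes, so the roughly 100-gigabyte per-person figure presumably also reflects raw scanner output, anatomical scans and derived data produced during analysis.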

“The brain keeps changing very subtly over time even when you’re trying not to move. The motion of blood circulating, of breathing, generates a lot of ‘noise,’” Luh said. “Taking three images per time point allows us to better clean up those noises, and that makes the data quality much better in terms of trying to distinguish between different linguistic models.”

Hale added, “The typical assumption in linguistics is that all human beings pretty much share the same brain hardware and develop language in very similar ways. But, of course, not all people are the same and not all languages are the same.”

For example, a particular phrase may occur at the beginning of a line of text in the English version of the novel but in the middle or at the end of the line in the French version. The difference between the fMRI patterns each text produces may offer insight into how we comprehend language, Hale said.

The project is an outgrowth of similar work that Hale conducted using the first chapter of “Alice in Wonderland” as a stimulus. The results of that study were published last year in Brain and Language.

Media Contact

Rebecca Valli