New research on the structure of standardized tests offers an answer to a question that has long puzzled experts on the economics of education: Why do children show more progress on math exams than on English language arts (ELA) exams in response to education policies?
Researchers Evan Riehl, an assistant professor in the ILR School, and Meredith Welch, a doctoral student in the Cornell Jeb E. Brooks School of Public Policy, have determined that the confounding pattern stems, at least in part, from incentives embedded in the way standardized tests are designed.
The researchers collected data on exam structure for accountability tests in grades three through eight in six states – Florida, Illinois, Massachusetts, New York, North Carolina and Texas.
They found that math exams typically measure ability more precisely than ELA exams for students who are close to achieving proficiency. This matters because proficiency is a central metric for many state accountability systems, and prior research has found that teachers tend to target instruction to marginally proficient students.
An article summarizing the research will be published in the Summer 2022 issue of the Journal of Policy Analysis and Management and is available online now.
Riehl and Welch’s findings suggest that teachers may focus their test prep activities and time on math instead of ELA because the math exam is more likely to reward their effort. The resulting scores are more than a matter of a job well done for the children – they also influence school funding, renewal of school charters and teacher employment.
“To explain why schools often have an easier time boosting math scores than ELA scores, education researchers hypothesized that most math learning takes place at school, while students primarily learn English and reading at home,” Riehl said. “What we discovered is that there is more to it – that the structure of the exams gives educators an incentive to focus on math.”
Riehl and Welch found that math and ELA exams tend to differ in systematic ways. Math exams typically contain more questions than ELA exams, and more of those questions are at an appropriate difficulty level for marginally proficient students – those who can be coached to achieve proficiency. Math exams also tend to have lower proficiency rates than ELA exams, which means there are more students near the proficiency margin in the average classroom.
Riehl and Welch said their findings should be useful to educators and researchers as they seek to understand why education policies may have different impacts on math and ELA scores. They emphasized that both teachers and test designers are usually acting ethically and not trying to manipulate results.
“It is often said that math requires cumulative knowledge,” Welch said. “This, coupled with the fact that most math questions can be described briefly, may make it easier for test designers to write exams for math that precisely measure ability on the proficiency margin, and likewise make it easier for educators to prepare students for math exams.”
Jim Hanchett is assistant dean of communications in the Cornell Jeb E. Brooks School of Public Policy.