Postdoc recognized for research on teacher assessment

Leslie Hawley

Leslie Hawley, a postdoctoral trainee with the Nebraska Center for Research on Children, Youth, Families and Schools, will receive a UNL research award that recognizes her dissertation’s potential to improve the evaluation of K-12 teachers.

UNL’s Phi Delta Kappa Chapter recently announced Hawley as the recipient of its Dr. Ron Joekel Research Award, which honors outstanding education-focused research conducted by a UNL graduate student.

Hawley, who earned her doctorate from UNL’s Qualitative, Quantitative and Psychometric Methods program in December, will receive the $1,500 award at a fall meeting of the PDK Chapter. James Bovaird, director of the CYFS Statistics and Research Methodology Unit, was her advisor.

Hawley’s dissertation outlines a methodology that could more precisely measure the performance of teachers, whose job security increasingly rests on student assessments, she said.

According to Hawley, many states evaluate teachers using “value-added” performance measures that place great weight on students’ standardized test scores, which often account for up to 50 percent of a teacher’s evaluation.

Combining this information with other factors to construct a single evaluation score, however, requires making decisions and assumptions about data that can diminish the stability of the outcomes, Hawley said. Her dissertation found that different methods of merging information can lead to significant fluctuations in where teachers rank among their peers.
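As a rough illustration of that sensitivity, the short Python sketch below uses hypothetical, simulated numbers — not data or methods from Hawley’s dissertation — to apply two different weighting schemes to the same set of teacher component scores and report how far individual rankings shift.

```python
import numpy as np

rng = np.random.default_rng(0)

n_teachers = 300
# Hypothetical, simulated component scores for each teacher: a test-score
# measure, an observation rating and a survey rating, all standardized.
components = rng.standard_normal((n_teachers, 3))

# Two plausible weighting schemes: test scores at 50 percent vs. equal weights.
weights_a = np.array([0.50, 0.30, 0.20])
weights_b = np.array([1 / 3, 1 / 3, 1 / 3])

composite_a = components @ weights_a
composite_b = components @ weights_b

def rank(scores):
    # 1 = highest composite score.
    return (-scores).argsort().argsort() + 1

shift = np.abs(rank(composite_a) - rank(composite_b))
print(f"Median rank shift between the two schemes: {np.median(shift):.0f} places")
print(f"Largest rank shift for any one teacher: {shift.max()} places")
```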

Hawley explored the potential of “latent variable” methods, which incorporate multiple test scores, to reduce measurement error and produce more consistent estimates of teacher effectiveness.

“No method is perfect,” Hawley said. “But I would hope that if we had 300 teachers, it would be the difference between ranking a teacher 255th or 250th. But if a teacher is ranked 250th under one set of assumptions and ranked second under another set, that’s a problem, one that was sometimes present in the results.

“When you use latent variable methods … you are making fewer assumptions about how the information is combined. You’re essentially letting the data do a little bit more of the talking.”

Hawley said her results supported this expectation, with the latent variable methods yielding more stable teacher rankings across the conditions examined in her study. That stability suggests the approach could reduce the chances of undervaluing quality teachers and overrating substandard educators, she said.
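The sketch below illustrates the general idea in Python using scikit-learn’s one-factor FactorAnalysis model on simulated scores. It is a simplified stand-in, not the model from Hawley’s dissertation: the latent factor score is estimated from the data itself, so noisier test-score measures typically carry less weight than they would in a fixed equal-weight average.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)

n_teachers = 300
true_effect = rng.standard_normal(n_teachers)  # unobserved teacher effectiveness

# Four simulated test-score measures per teacher, each a noisy indicator of
# the same underlying effectiveness, with different amounts of error.
noise_sd = np.array([0.5, 0.8, 1.0, 1.5])
indicators = true_effect[:, None] + rng.standard_normal((n_teachers, 4)) * noise_sd

# One-factor model: the latent score is estimated from the data.
fa = FactorAnalysis(n_components=1, random_state=0)
factor_score = fa.fit_transform(indicators).ravel()
# The sign of a latent factor is arbitrary; align it so higher means better.
if np.corrcoef(factor_score, indicators.mean(axis=1))[0, 1] < 0:
    factor_score = -factor_score

# A fixed equal-weight average gives every measure the same say, noisy or not.
equal_weight = indicators.mean(axis=1)

def rank(scores):
    # 1 = highest score.
    return (-scores).argsort().argsort() + 1

true_rank = rank(true_effect)
print("Mean rank error, latent factor score:", np.abs(rank(factor_score) - true_rank).mean())
print("Mean rank error, equal-weight average:", np.abs(rank(equal_weight) - true_rank).mean())
```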

“We’re making very high-stakes decisions. I feel very strongly that if we’re going to make these decisions – and states are going to use these methods – that they need to be done very carefully and cautiously,” she said. “The methods need to be fleshed out and rigorously evaluated for all kinds of potential scenarios.”