Couch, Uminski look at higher- vs. lower-stakes assessments

Brian Couch and Crystal Uminski

by Scott Schrage | University Communication and Marketing

Ideally, assessments help instructors identify how well their students are grasping concepts without sapping too much of the time needed to teach those concepts in the first place. Choosing how and where to administer an assessment involves striking another balance: maximizing the effort students put into it while minimizing the incentive to rely on external resources that can misrepresent a student’s true level of understanding.

It’s little wonder, then, that educators and education researchers alike have varied both the types and settings of assessments in the hope of pinpointing the best combinations. The School of Biological Sciences’ Brian Couch and recent doctoral graduate Crystal Uminski have taken special interest in higher-stakes vs. lower-stakes assessments — the former scored according to correct answers, the latter on participation — and whether students take them inside vs. outside the classroom.

Couch and Uminski led a five-year study involving 1,578 undergraduate students taking an introductory course on molecular and cell biology. All students were administered some conceptual questions in the classroom and some outside it, with the stakes of the assessments alternating by year. The researchers discovered that students scored roughly the same on higher- and lower-stakes assessments when administered in the classroom.

But participants given lower-stakes assessments outside the classroom also scored about the same as the two in-classroom groups. Also promising? Other data indicated that students taking those participation-graded assessments outside the classroom tended not to rely on external resources that might otherwise skew scores and teacher perceptions of student learning. Collectively, the findings point to lower-stakes, out-of-class assessments as a viable alternative for teachers looking to optimize instruction time without sacrificing the validity of test results, the researchers said.

As for the students taking the higher-stakes, answer-graded assessments at home? They scored higher and spent more time answering questions than did their counterparts. Unfortunately, comparing those scores against traditional unit exams suggested that the students were spending that extra time consulting the internet or other shortcuts to arrive at their answers. Considering the unfair advantage such shortcuts might provide over some peers — and the possibility of inflated scores leading to overestimates of student learning — Couch and Uminski cautioned against raising the stakes when leaving students to their own devices.

Read study:
https://www.lifescied.org/doi/10.1187/cbe.22-09-0181