
Last month, Sydney Brown, assistant director of the Center for Transformative Teaching, shared her thoughts about Gallant and Rettinger's book "The Opposite of Cheating: Teaching for Integrity in the Age of AI." This month, she shares a few notes from chapter four, "Designing Assessments for Integrity."
According to Gallant and Rettinger, there are two primary reasons students cheat on a specific assessment or assignment: first, they don't value the work they're doing, and second, they're afraid to fail. The authors suggest making use of the following strategies.
Infuse ethics into lessons and assessments.
Leon Furze, a consultant focusing on the "practical and ethical implications of Generative Artificial Intelligence," has written a series of posts on Teaching AI Ethics, sorted into beginner, intermediate, and advanced topics. Beginner topics include bias and discrimination, the environment, and truth and academic integrity. Intermediate topics encompass copyright, privacy, and datafication. Advanced topics include affect recognition, human labor, and power. Each post gives several examples along with ideas for discussing the topic in different disciplines.
Make use of teachable moments.
Gallant and Rettinger propose that any time students present instructors with "a dilemma that pits one honorable value against another, such as honesty versus fairness, respect versus trustworthiness, privacy versus equity," there is an opportunity to discuss integrity. They give an example of an engaged student who failed to turn in an assignment on time and asked to submit it late. While the student's explanation "rang true," accepting the late work would be a favor to the student. The instructor asked the student to write a couple of paragraphs about the ethical dilemma, what led to it, and all the possible resolutions. The student completed the task, and the instructor accepted the paper. For a deeper dive into right-versus-right ethical dilemmas, the authors recommend "How Good People Make Tough Choices" by Rushworth M. Kidder.
Align expectations with assessment rubrics.
Tight alignment helps communicate the importance of the assessment and underscores why students are being asked to do it. Additionally, Gallant and Rettinger emphasize that rubrics must be tested against sample work to make sure they function as intended. If instructors find themselves changing scores based on intuition, the rubric is performative, and students will devalue the assignment, which helps them rationalize cheating.
Allow and co-opt collaboration.
Unauthorized collaboration on assignments intended, or assumed, to be completed solo is the most commonly reported form of cheating. However, collaboration is also an important active learning strategy and is difficult to prevent, regardless of whether the collaborator is human or machine. Consequently, it's worth considering allowing collaboration and instead asking students to provide a reflection on the collaboration and comments on their collaborator's contributions. If the collaboration was with a chatbot, have them provide a transcript of the exchange. Frame the assignment as one of learning and preparation for higher-stakes assessments.
Give opportunities for revision.
With opportunities to revise, students may worry less about failing and be able to focus more fully on learning.
Plan for cognitive offloading.
Instead of banning technology, design for it. Philip Dawson, a researcher focusing on threats to assessment validity, says students will cognitively offload, meaning they will use technology to lighten the cognitive load of a task, whether allowed to do so or not, so prohibition is not a realistic option. To redesign an assessment, Gallant and Rettinger say the first step is to reexamine course objectives and ask three questions:
- Should any learning objectives change given technological advances? For example, with grammar and spelling support built into word processors, why would grammar and spelling be included in a grading rubric?
- Which learning objectives must students reach before employing a tool? Are there fundamental skills that must be mastered without technological assistance?
- Is it possible to assess students' performance in alternate ways that take chatbots and contract cheating into account? For example, could students learn and practice a skill by working on chatbot output rather than creating the artifact themselves?
Include oral assessments.
Oral assessment is stressful for students but good for learning. It can range from a casual one-to-one conversation to something much more formal, and many decisions must be made before incorporating it into the course curriculum. These include how formal the assessment will be, how relevant speaking about the subject matter is to course objectives, whether the assessment happens one-to-one with the instructor or in a group setting, and how interactive the session will be.
Be bold and creative.
Redesigning assessments is difficult, but it is also an opportunity to try something completely new. In a meeting with the AI Skill Sharing learning community, Elizabeth Niehaus, professor of educational administration, shared her experience with a class AI policy that allowed all uses of AI, with the caveat that students were responsible for everything they submitted for the class. This meant that students who chose to use AI had to take output evaluation and verification seriously.
I've also been thinking about a couple of blog posts I've read recently. The first, "On working with wizards" by Ethan Mollick, author of Co-Intelligence, asserts that with the release of GPT-5 Pro we've begun to work with "wizards": agent-like systems that do things on their own, rendering perfect verification of process prohibitively difficult at best and frequently impossible. Consequently, we become "connoisseurs of output" rather than of process. The skills we need, then, are knowing when to summon a wizard, working with it enough to develop instincts for when it succeeds or fails, and being able to tell when "it's worth the risk of not knowing." The problem for education, he says, is how to train students to verify work in fields where they haven't yet attained mastery, especially when the AI itself prevents the development of mastery.
The other blog post, this one from Robert Talbert, a math professor and advocate for alternative grading, focuses on deliberate practice and is relevant to the motivational aspect of learning. Talbert discusses the feedback loops that yield learning and proposes what productive engagement with a feedback loop might look like:
- Students should have a clear idea of what specific elements of their work need improvement and focus.
- Students should have a means of targeting those elements through exercises or other focused tasks that address specific issues of performance.
- Those exercises should be easy to repeat, should be repeated, and should get immediate feedback (not necessarily from the instructor).
- They should also represent accumulating incremental steps that lead the student toward improvement on the main task.
- And while we don’t exactly desire this, we recognize that all of the above is hard work, sometimes exhausting, and not often fun. If it sounds or feels easy, we should get suspicious.
Reading through this list, it seems there might be possibilities for co-opting an AI as a collaborator to facilitate this kind of productive engagement. In his post, Talbert shares ideas for "helping students become experts at practicing our subjects." These ideas may be helpful as you modify your assignments and assessments.
For more details on the strategies from Gallant and Rettinger, access their book online through the Libraries. For help in adapting assessments in your classes, contact an instructional designer assigned to your college.