
Sydney Brown, assistant director in the Center for Transformative Teaching, offers reflections and guiding questions for adapting pedagogy in an AI-saturated academic environment.
I recently encountered questions about the value of a college degree in light of “a job market where everyone is using AI.” I replied that education is more important than it ever has been. This is because the enduring value of education lies less in the memorization of facts and acquisition of knowledge, and more in the development of habits of mind that enable individuals to think, judge, and adapt independently. Moreover, we often don’t recognize the full value of our education until well after we receive our diplomas. This is one reason it’s difficult to convince students concerned about course grades and pressed for time that it’s worth the effort to do their own work. However, because 57% of students use large language models like ChatGPT, Gemini, and Copilot daily or at least weekly in their coursework (Lumina-Gallup, 2026), it’s important to look at the risks they run in doing so and what we, as instructors, might do in response.
Last week, I read an opinion piece in Educause Review, “Creeping cognitive displacement syndrome” (March 26, 2026), in which the author, Marvin Starominski-Uehara, describes three increasingly GenAI-dependent stages his students go through.
In the first stage, students use GenAI like an assistant, crafting their prompts and substantially editing the output. Over time, however, a change takes place: students start to think in prompt-and-response loops and internalize the phrasing, cadence, and patterns the GenAI produces.
The second stage is notable because students begin to “ask the machine to generate arguments they have not thought through, synthesize research they haven’t read, to take positions they haven’t examined.” In stage two, students begin to use GenAI uncritically.
In stage three, students have abdicated their “cognitive sovereignty” — their capacity for independent judgment and their ability to generate and defend their own ideas without offloading to GenAI. In fact, when asked to explain their thinking, students will again turn to prompting. Starominski-Uehara asserts that students, and maybe all of us, are at risk of changing ourselves to fit GenAI instead of the other way around.
This resonated with me. In the early days of Twitter, I noticed myself editing my self-talk to capture a moment in 140 characters or less. This wasn’t even for a tweet. This was just noticing something and expressing it in my mind like a tweet. GenAI is no different. After I began using these tools regularly, I felt myself become a little less confident in my own expression and a little more inclined to go to the prompt. If this is happening to me, a seasoned professional fully cognizant of the dangers of offloading, how much more might it happen to students not yet confident in their disciplinary knowledge?
These days, I forbid myself from using AI for any first draft of work. While GenAI may assist me in some way as a project develops, the initial lift must be my own. Otherwise, I feel I risk deferring to GenAI in some way. Additionally, and perhaps most importantly, without doing the first effort on my own, I will not get an accurate assessment of what I know and what I need to learn more about. This is my version of taking the stairs, even when the elevator is right there.
The negative effects of too much prompting often arise in my conversations with UNL faculty, and they are not alone. Cognitive offloading was an oft-mentioned concern at the annual American Educational Research Association meeting last week — even when folks were discussing positive results of using GenAI in teaching. Additional takeaways from the many AI-focused sessions I attended were as follows:
- Most students both domestically and abroad are using GenAI in their coursework.
- The research shows that using GenAI may help or hinder learning depending on how it is used.
- Students must be taught how to use GenAI in structured, context-specific ways to ensure they develop the right critical analysis skills and are confident in questioning results.
- Instructors struggle to teach students critical analysis skills and find it difficult to model them when teaching.
- Assessments must change to be sure students have the critical thinking skills we believe they are gaining in our classes.
Here are a few things I’ll be reflecting on as I anticipate the end of the spring term and look towards summer and fall:
- How might instructors make the critical thinking skills needed in their classes and discipline more explicit for students?
- How might instructors better communicate the value of the learning students do in their classes?
- What are effective ways to assess critical thinking skills?
Finally, I’ll conclude with one quick test I use to decide whether to employ GenAI for a task. I ask myself: is this a gym situation or a construction situation? The question comes from Helen Toner, interim executive director of Georgetown University’s Center for Security and Emerging Technology (New York Times, February 2, 2026). She posits that on a construction site, machines are essential for magnifying what humans can do, but we go to a gym to increase our own capabilities.
Before you teach the next version of your courses:
- See if you can identify at least one “gym task” that prohibits the use of AI, and explicitly lay out for students the cognitive gains that justify doing the work themselves.
- Design an assessment where students explain their reasoning without access to GenAI.
- Design a learning experience in which students use AI in a construction-site way: they must apply discipline-specific critical thinking in a structured manner to accomplish something they could not do solely on their own.