How to communicate AI policies effectively


Justin Olmanson, associate professor; Minji Jeon, assistant professor; and Azadeh Hassani, graduate student, in the Department of Teaching, Learning, and Teacher Education, investigated how students use generative AI. Their work yielded a six-level taxonomy:

  1. Not for me: Don’t know how, too risky, don’t need it, I want to learn…
  2. Escape: Either (L1) Don’t want to, no time, low energy, or (L2) Too tedious, peripheral, meh…
  3. Get me going/started: help me: plan, brainstorm, find sources, write a first paragraph, get me unstuck…
  4. Feedback please: Critique my thinking / work and make suggestions, keep me company…
  5. Help me learn: Explain concepts / ideas, guide me, test my understanding…
  6. Magnify my work: Help me be more creative, productive, and ambitious…


Research shows that students at all levels, as well as faculty, rarely agree on what constitutes cheating (Gallant and Rettinger, 2025). The growing ubiquity of generative AI, and the many ways it can be used, makes this an even larger issue. For example, AI can serve as a study partner, much as other students do in a study group. In your class, is that cheating or an allowed use of AI? Examples of how ambiguous even seemingly obvious policies can be are presented in Chapter 2, “Communicating Integrity,” in The Opposite of Cheating.

Because of this confusion around how AI may be used, Olmanson and several College of Education and Human Sciences colleagues have used the taxonomy to develop course policies that help students better understand how AI may or may not be used in their classes. This approach also draws on research-supported techniques that discourage cheating and support academic integrity.

Research shows that the less students value the work, the more likely they are to cheat (Gallant and Rettinger, 2025). Consequently, Olmanson’s opening paragraph for the AI policy used with his TEAC 259 course underscores the value of exerting effort by explicitly stating the rationale.

"Struggle and effort are an important part of learning, developing professional skills, and producing work that is yours. AI should not replace your struggle and effort but rather, should be used to support your learning, and expand your analysis, critical thinking, problem-solving, and production."

Subsequently, the substance of the policy is laid out in two tables. The first focuses on allowability; the second gives specific example prompts and the reasons each is prohibited, allowed, or even encouraged.

The headers of the first table include the following:
  • AI Use Category: do all the work for me, do my busywork, get me going/give me feedback, help me learn, magnify my work.
  • Examples: These may be specific to the course but are intended to clarify what is meant in this course at the specified level of AI use.
  • Status and Directions: whether use is allowed and, if so, how to use it.


An example from Olmanson’s TEAC 259 course is as follows:
  • Use category: “Do my busywork.”
  • Example: “Using AI to complete course projects or parts of projects because you feel they are repetitive or will not help you learn anything.”
  • Status and Directions: Discouraged without permission. Contact instructor.


The corresponding row in the second table that includes example prompts gives this guidance:
  • Sample prompts: “Write me a short summary of the attached article that I can post on a canvas discussion board.” and “Clean up my citations and put them in APA format.”
  • Rationale: Discouraged without permission. Let your instructor know you do not see the learning potential in the task.


Some instructors may balk at providing explicit prompts. However, students, like all of us, increasingly have to decide whether to use AI to assist with their work. Embedding rationales and examples of this kind helps students learn to use AI more effectively for different types of tasks while supporting intentional use by prompting them to consider the why.

In a recent New York Times opinion piece on the future of AI, Helen Toner, interim executive director of Georgetown University’s Center for Security and Emerging Technology, said:

“For anything you might work on, ask yourself: Is this like a construction site, or like the gym? On a construction site, machines are amazing — you can lift heavier things and build better buildings with an excavator and a crane. But at the gym, the whole point is to increase your own capacity. With AI, the analogy is that we now all need to figure out where AI can help us do bigger, cooler things, like building personalized software, and where we need to build our own cognitive abilities first, like learning to write.”

When you consider your courses, how are you helping students think about AI use in the course as well as the discipline?

The Center for Transformative Teaching facilitates two learning communities focused on artificial intelligence: The AI Skill Share Learning Community and the new AI-Resistant Pedagogy Learning Community, which focuses on creating conditions for students to use their own brains. The CTT encourages instructors to share their success stories for communicating AI policies to students.

More details at: https://go.unl.edu/ai-taxonomy