Student Code of Conduct updates underscore need for clear course policies


“What exactly is the university’s position on student AI use?” The question is a frequent one for both instructors and students navigating the inclusion of artificial intelligence in many academic tools. Amy Ort, senior instructional designer, shares helpful guidance and resources on this topic.

The widespread introduction of ChatGPT and its function as a surrogate writer opened a wide range of possibilities for students to both expand their thinking and cheat on their coursework. Since then, products from companies like Microsoft and Google, along with common tools like Grammarly, have added AI features in a way that has made it hard for many students and instructors to know whether they are using AI at all, and harder still to know whether that use is acceptable or appropriate.

On Sept. 9, the CTT and the International Student and Scholar Office hosted a workshop on supporting international students through academic integrity challenges, and Andie Barefield, director of Student Conduct and Community Standards, discussed updates to the University of Nebraska Student Code of Conduct in light of artificial intelligence. One major change is that “or entity” has been added to every instance of academic misconduct that refers to “someone else.” This ensures that inappropriate use of programs such as artificial intelligence can clearly be considered academic misconduct.

The workshop covered additional code of conduct updates, along with best practices for talking to students about academic integrity and working with international students through academic conduct issues. Participants had a lively discussion, and the recording is available on YuJa.

Importantly, the code of conduct update doesn’t mean that all student use of AI is considered a problem. Instructors have complete freedom to determine whether and in what circumstances they want to allow AI use in their courses. Some may encourage students to explore the different ways that AI can be used to save time or push their thinking to new levels, while others may ban its use entirely. This wide range of approaches makes it very important for all instructors to add an AI policy to the academic integrity section of their syllabus. Without one, students may be confused or may assume that the policies of their other courses apply across the board.

When developing an AI policy for your course, the CTT recommends following these steps, which were highlighted during a recent workshop on developing AI syllabus policies:

  1. Reflect on your core values as an instructor. Why do you teach? What are you hoping that students get out of your course?
  2. List all the different types of assessments you use in your course. Try to be as comprehensive as possible and include both formal assignments and exams, as well as in-class work and formative assessments.
  3. For each assessment identified, list how AI could be used by students. Keep in mind that AI incorporates more than just ChatGPT. It is also built into some Microsoft and Adobe products, writing tools like Grammarly, and search engines like Google.
  4. Craft a comprehensive policy. You’ll want to take everything you came up with during the previous steps and turn it into concise and clear language.
  5. Include your “why.” Students are more likely to follow a policy if they understand your motivations for it, as well as how they’re going to benefit from following it.
  6. Talk to your students. Instead of just giving them a written policy, have a discussion that allows them to ask for clarification and also express their thoughts on the benefits and drawbacks of AI use.


See the CTT AI resource for examples and additional guidance on crafting an AI syllabus policy. If you have any questions or would like help developing a policy for your course, contact an instructional designer assigned to your college.

More details at: http://go.unl.edu/teachingandAI