A major concern when examining student writing today is the fear that much or all of the submitted text may have been generated by a widely available chatbot (such as ChatGPT or Perplexity), with minimal or no effort on the part of the submitting student to actually learn or understand the topic at hand (Bahroun et al., 2023). This concern is exacerbated by studies indicating that chatbots often produce text that stands a good chance of passing typical higher-education writing assignments, sometimes even earning top grades (Nikolic et al., 2024).
In addition, regulation is exceedingly difficult: there are no reliable methods for detecting AI-generated output, not least because the models powering the chatbots are constantly improving, and students may use a chatbot even where doing so is explicitly against the rules.
At the same time, there are legitimate questions about whether students should be prohibited from using chatbots at all. Given how quickly these tools have spread, including among researchers themselves (Shapira, 2024), and given that some use cases are arguably unproblematic (e.g. having a chatbot correct the grammar in an otherwise completed text of the student's own), it can be argued that what students need is not a prohibition on chatbot use so much as an opportunity to explore these tools in a way that helps them be more mindful in their reliance on them.
For all these reasons, I recently decided to adopt a "no-limitations-at-all" AI policy in a Master's-level course I run: IT805A -- Privacy A1N. The course is examined through three individually written reports, each worth 2.5 credits. Instead of attempting to restrict students' use of chatbots, I encouraged them to explore the tools and to try generating or revising their own material in non-examined forum discussions, where I participated and gave feedback on the student texts. I also designed the questions for the examined reports so that default chatbot responses tended to fail, for various reasons, and I clearly and repeatedly communicated this to the students.
In this presentation, I will briefly summarize my experiences from this experiment: what worked, what didn't, and how other educators can apply the lessons learned in their own teaching.
References
Bahroun, Z., Anane, C., Ahmed, V., & Zacca, A. (2023). Transforming education: A comprehensive review of generative artificial intelligence in educational settings through bibliometric and content analysis. Sustainability, 15(17), 12983.
Nikolic, S., Sandison, C., Haque, R., Daniel, S., Grundy, S., Belkina, M., Lyden, S., Hassan, G. M., & Neal, P. (2024). ChatGPT, Copilot, Gemini, SciSpace and Wolfram versus higher education assessments: An updated multi-institutional study of the academic integrity impacts of Generative Artificial Intelligence (GenAI) on assessment, teaching and learning in engineering. Australasian Journal of Engineering Education, 29(2), 126-153.
Shapira, P. (2024). Delving into 'delve'. https://pshapira.net/2024/03/31/delving-into-delve/
Skövde: Högskolan i Skövde, 2025.
DAL25, Det akademiska lärarskapet (The Academic Teaching Profession), Examination and Assessment, Högskolan i Skövde, 25 April 2025.