Generative AI tools have impressive capabilities that can be leveraged by academics to improve various aspects of assessment design. Creative ideas in this domain are beginning to emerge both online and across the University, and several of these are summarised below. In addition, links are provided to resources showcasing more detailed examples of some early thinking about how generative AI tools can be used to enhance assessment. Over time, this page will feature additional resources and advice for academic staff, as well as case studies of practice from University of Melbourne academics.
Strategies for using ChatGPT to enhance assessment tasks
- Designing assessment materials to help students understand the required standards for an upcoming task. For example, ChatGPT can create a clear and concise assessment task outline, or a marking rubric with examples of standards for each quality criterion, or exemplars of varying quality for an upcoming written assessment task, which students can be asked to critique based on the assessment rubric.
- Creating aspects of the assessment itself, such as creative essay prompts, multiple choice questions, essay questions, case study examples or real-world problems for students to solve.
- Helping students to focus on the process of writing rather than the product, encouraging them to leverage AI appropriately as they develop written work through a staged approach. For example, ChatGPT could be used to generate an essay outline or early draft, which students then build upon using tracked changes in Word.
- Enabling students to develop critical thinking skills. For example, ChatGPT could be used by students to generate solutions to real-world problems or to produce multiple responses to a particular prompt. Students can then be asked to critically evaluate the output, explaining in their work where it may be inaccurate and/or how it could be strengthened.
- Providing constructive feedback for students. For example, students can use ChatGPT to review outlines and/or drafts of their written work. Students can then write about the process of evaluating and reflecting upon the resulting feedback to demonstrate and build upon their evaluative judgement skills. ChatGPT could also be used by academics to generate automated feedback comments for multiple choice questions that can be integrated into practice quizzes.
Links to more detailed resources
This case study outlines how Dr Solange Glasser and Dr Julian Harris from the University of Melbourne integrated generative AI into an assessment task in the subject Music, Mind & Wellbeing. The task was also redesigned to better align with Advancing Students and Education strategic priorities around inclusive, student-centred, and innovative assessment design.
This case study shows how Dr Anna Lidfors Lindqvist (a Lecturer from the Faculty of Engineering and Information Technology at UTS) has been using ChatGPT to suggest assessment ideas for Mechanical and Mechatronic Engineering students. As Anna shows, ChatGPT can even write a clear and detailed task outline for students and create an assessment rubric for that task that aligns with the subject learning outcomes.
Assessment and feedback expert Professor David Carless (The University of Hong Kong) writes for Times Higher Education about how ChatGPT can be used to reduce the burden of assessment for students and help them strengthen their writing processes rather than focusing solely on the final product. For example, AI-generated output could be integrated at various points in a staged, process-oriented assessment design (e.g., by producing early drafts that students improve upon and/or generating constructive feedback on students' early work, which they then evaluate and implement).
This journal article, published in Social Science Research Network (SSRN), provides three examples of assessment tasks that incorporate generative AI to build students' higher order skills. One example is designed to help students with knowledge transfer (e.g., using AI to produce content on a particular topic, then making judgements about the veracity of that content and articulating how it could be improved), while another focuses on developing their evaluative judgement skills (e.g., using AI to produce a first draft of an essay and then having the student give the AI feedback on how to improve it).
The second page of this information sheet offers six examples of how academics can incorporate AI into assessment to build students’ higher order skills. These include using ChatGPT to generate a solution to a real-world problem and then asking students to critically evaluate this generated solution using content from the subject and peer-reviewed literature. As explained in the information sheet, marking for this assessment could focus more on the process of critique rather than on the product.
Associate Professor Danny Liu (Educational Innovation) and colleagues from The University of Sydney provide several practical examples showcasing how teachers can use generative AI tools to enhance assessment design for the benefit of both themselves and their students. For example, they explain that AI can create a mid-quality exemplar of written work that students can review and critique to help them understand the required standards of an upcoming assessment task.