More Resources

  • Blog Posts

    OMG it’s ChatGPT: how you could adapt your assessments

    Ann Wilson (Senior Curriculum Designer) and Associate Professor Caroline Havery from the University of Technology Sydney have written a blog post highlighting five tips academics can use when adapting assessments to reduce the risk of academic misconduct. These include using oral presentations instead of written assessments and having students begin their assessments in class. They also recommend that academics be transparent about their teaching: explaining to students what the task expects of them and how completing it themselves will benefit their learning.

    ChatGPT: why I don’t fear our new AI overlord

    On the Melbourne CSHE Scholarship of Technology Enhanced Learning (SOTEL) blog, Associate Professor Charles Sevigny from the Faculty of Medicine, Dentistry and Health Sciences writes about several of the limitations of ChatGPT for assessment.

    Critical AI: adapting college writing for the age of large language models such as ChatGPT: some next steps for educators

    Anna R. Mills (a California Community College educator) and Professor Lauren M. E. Goodlad (Rutgers University, USA) offer a list of practical strategies for updating assessments in the age of generative AI. These include modifying essay prompts so that ChatGPT cannot answer them, asking students to analyse images, audio or video, and setting topics that draw on in-class discussions.

    How can I update assessments to deal with ChatGPT and other generative AI?

    Academics from The University of Sydney, Danny Liu (Professor of Educational Innovation) and Adam James Bridgeman (Pro Vice-Chancellor, Educational Innovation), outline short- and longer-term strategies academic staff can use to modify their assessments. A useful short-term strategy is asking students to bring local contexts into their assessments, such as personal events or examples from their local area. In the longer term, they suggest implementing more multimodal forms of assessment.

    ChatGPT is old news: How do we assess in the age of AI writing co-pilots?

    Danny Liu (Professor of Educational Innovation) and Adam James Bridgeman (Pro Vice-Chancellor, Educational Innovation) consider how educators can rethink assessment design for a world where word processing applications have embedded generative AI tools (e.g., the forthcoming Microsoft Copilot for Word). These tools will make it difficult for educators to ignore or ban the use of generative AI, as they will automatically generate first drafts and offer suggestions as students write. Liu and Bridgeman argue that, as a result, we need to redesign learning outcomes so that they focus on developing graduate qualities and attributes that AI cannot replicate (e.g., resilience, critical thinking, cultural sensitivity), and design more authentic assessment tasks to evaluate these learning outcomes. They also suggest that students should be encouraged to collaborate with generative AI tools as part of assessment, using them for the kinds of tasks they are good at, such as ideation, drafting, analysing, and editing.

    Ten myths about generative AI in education that are holding us back

    In this blog post, Associate Professor Danny Liu outlines ten myths about generative AI tools and their uses in higher education, focusing particularly on how the initial limitations of ChatGPT are being rapidly overcome by newer tools and functionalities. Myths debunked here include the claims that generative AI tools are unaware of anything that has happened in the world since September 2021, that they are unable to write reflectively, and that they cannot include accurate referencing.

  • External Websites

    Monash University: generative AI and assessment

    Monash University have developed a useful site showcasing a range of ways that academics could redesign assessment to reduce the possibility that students would use generative AI tools to complete them. This includes testing the vulnerability of existing assessment designs by running them through generative AI tools, designing assessments that evaluate students’ higher order thinking skills, and using authentic, future-focused or programmatic approaches to assessment.

  • Media Articles

    Update your course syllabus for ChatGPT

    In this article for Medium, Ryan Watkins, Professor of Educational Technology Leadership and Human-Technology Collaboration at George Washington University (USA), provides ten ideas for how academics can get creative with assignments, including holding in-class debates, asking students to create videos or podcasts, or allowing students to use ChatGPT to start their assignment as long as they show tracked changes demonstrating how they improved upon the AI-generated output.

    Seven Recommendations for Assessment Reform

    In this Times Higher Education article, Dr Amir Ghapanchi from Victoria University provides seven recommendations for assessment reform that may thwart the use of generative AI by students. These recommendations present viable alternatives to written essays and exams which may be helpful for academics who wish to engage in longer-term assessment redesign.

    ChatGPT advice academics can use now

    This Inside Higher Ed article provides assessment redesign strategies from a range of academics working at universities in the USA. For example, Johann N. Neem from Western Washington University suggests using more in-class writing activities in which students get an opportunity to think on their own and practise their writing skills.

  • Scholarly Works

    ChatGPT versus engineering education assessment: a multidisciplinary and multi-institutional benchmarking and analysis of this generative artificial intelligence tool to investigate assessment integrity

    In this academic article, engineering academics from seven Australian universities test whether ChatGPT (using GPT-3) can successfully respond to existing assessment task prompts across ten different subjects. The findings show that ChatGPT can produce output that would excel at or pass quizzes and coding assignments, while it struggles with tasks involving the creation of mind maps or drawings, mathematical equations, oral presentations, lab work, and reflective and critical thinking-based written tasks (see also this video produced by the study authors). A comparison of GPT-3 and GPT-4 performance on a physics exam is also provided, with the results indicating that GPT-4 would score only 2% better than GPT-3 and shares several of the same limitations (i.e., not being able to produce images or complex mathematical equations).

  • Text Resources

    James Cook University: Assessment and artificial intelligence

    The first page of this two-page information sheet provides a handy list of assessment task types that are thought to be less vulnerable to academic misconduct, such as problem-solving tasks, hands-on activities, group projects and simulations.

    ChatGPT: Understanding the new landscape and short-term solutions

    Dr Cynthia Alby (Professor of Teaching Education at Georgia College, USA) has created a Google Doc which compiles a series of FAQs that academics might currently be asking about ChatGPT. Along with these FAQs she also provides several practical solutions. For example, her suggested solutions to the issue of academic misconduct include asking students to complete assignments in class, moving from written assessments to mind maps, or replacing exams with performance-based tasks.

    Unlocking the Power of Generative AI Models

    Academics from five European universities have developed a white paper titled Unlocking the Power of Generative AI Models and Systems such as GPT-4 and ChatGPT for Higher Education. Section 4 provides helpful guidance for academic staff, including eight recommendations relating to assessment and AI (see pp. 31–37).

    CRADLE suggests… Assessment and genAI

    Professor Margaret Bearman and colleagues from Deakin University’s Centre for Research in Assessment and Digital Learning (CRADLE) have developed a brief resource offering ideas for educators wanting to adapt their current assessment practices to account for generative AI tools. These include limiting the evaluation of lower-level learning outcomes (e.g., knowledge recall) wherever possible, and instead designing tasks that help develop higher-order skills, such as evaluative judgement.

  • Webinars

    Implications of generative artificial intelligence for higher education – how should educators respond?

    The Tertiary Education Quality and Standards Agency (TEQSA), in association with the Centre for Research on Assessment and Digital Learning (CRADLE) at Deakin University, presents this excellent webinar featuring Professor Margaret Bearman (Deakin), Professor Simon Buckingham-Shum (UTS), Dr Lucinda McKnight (Deakin) and A/Prof Sarah Howard (UoW). Professor Bearman provides some useful advice about redesigning assessment (starting from around the 12-minute mark), including asking ourselves: 'If a machine can do it, how much do we need to assess it?'

    Luminaries: AI and the future of higher education

    The University of Wollongong’s Luminaries series featured a panel discussion about the use of generative AI in assessment and how to help students develop digital literacies in this area. The panel, which was hosted by Senior Professor Sue Bennett, included Thomas King (Microsoft), Professor Rhona Sharpe (Oxford University), and Professor Michael Henderson (Monash University). While the whole webinar is worth watching, Professor Henderson’s provocations around AI and assessment design are particularly interesting (from around the 13-minute mark).