FAQs about Generative AI in teaching and learning at the University of Melbourne

Background and definitions

  • What is Generative AI?

    Generative AI (GenAI) refers to artificial intelligence technologies that can create new content (such as text, images, audio and video) based on a prompt or input data. These systems use machine learning algorithms to learn patterns and features from vast amounts of data, enabling them to generate new, coherent and contextually relevant outputs that mimic human-like creativity.

  • What is a Large Language Model?

    A large language model (LLM) is a specific type of GenAI technology that predicts the most likely response to a given prompt. It is the underlying technology ‘engine’ that powers tools like ChatGPT. Many of these tools have a conversational interface that allows a user to provide ‘prompts’ or ‘queries’, to which the tool responds with ‘outputs’. These outputs can involve the generation of anything from text to images, videos, sounds, PowerPoint presentations and almost every other form of media imaginable.

    The most well-known GenAI platform is ChatGPT, a text-based GenAI platform with a conversational interface, developed by the company OpenAI. OpenAI has been rapidly improving the capability of its LLMs, with several recent releases including GPT-3.5, GPT-4 and GPT-4o.

  • What do I need to be aware of in relation to GenAI?

    GenAI platforms generate outputs by statistically analysing the distributions of words, pixels or other elements in the data they have ingested and identifying and repeating common patterns (for example, which words typically follow which other words).
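
    As a rough illustration of this idea only, the short Python sketch below builds a toy ‘bigram’ model: it counts which word most often follows each word in a tiny corpus, then chains those counts together to generate text. Real LLMs use neural networks over sub-word tokens at vastly greater scale, but the underlying principle of producing a statistically likely continuation is the same. The corpus and code here are illustrative, not how any production platform is implemented.

        # Toy illustration of next-word prediction from observed word patterns.
        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat and the cat slept on the mat".split()

        # Count which words follow which (a 'bigram' model).
        following = defaultdict(Counter)
        for current, nxt in zip(corpus, corpus[1:]):
            following[current][nxt] += 1

        # Generate by repeatedly choosing the most common continuation.
        word, output = "the", ["the"]
        for _ in range(5):
            word = following[word].most_common(1)[0][0]
            output.append(word)

        print(" ".join(output))  # e.g. "the cat sat on the cat"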

    There is generally no accountability, traceability, attribution or transparency in how outputs are arrived at or what sources were used to generate them. The outputs generated may violate intellectual property, data privacy or copyright regulations. Any prompts you enter into third-party (non-secure) platforms are considered ‘fair game’ for the platform to use to further train its models.

    Because GenAI outputs are generated using probabilistic algorithms, they can be factually inaccurate (such outputs are colloquially referred to as ‘hallucinations’). And although these systems draw on vast amounts of data, those data contain many potential sources of bias, so the outputs generated are often also biased. Finally, companies such as OpenAI have now introduced tiered subscription models for their platforms. This means that access to the most capable LLMs is restricted to those able to pay, creating potential inequity.

    AI is rapidly becoming embedded in many of the productivity tools we use on a daily basis (e.g. Microsoft Office 365). Over time, it is likely to become more and more difficult to determine whether or not we are interacting with platforms that use some form of AI.

AI and assessment

  • What are the concerns around GenAI and assessment?

    Assessments play a critical role in evaluating and measuring students’ knowledge and competencies in a subject area and determining how well they have achieved the intended learning outcomes for the subject. Students typically demonstrate this achievement by producing an artefact of some kind (commonly an essay, report, examination etc.) which is assessed or graded.

    The widespread availability of generative AI platforms such as ChatGPT means that students now have access to powerful tools capable of producing artefacts that are equivalent to, or better than, those created by students. If such artefacts can be created automatically without any learning taking place, and we cannot distinguish between assessments created by AI and those created by students, how can we be sure that our graduates have learned what they need to be safe and competent professionals?

  • What are students being told about GenAI?

    The University has been providing advice and guidance to students on the appropriate use of GenAI in their study since 2023. From the start of 2024, all commencing undergraduate and graduate coursework students have been required to complete a module on the cornerstones of good scholarship, which includes material on the appropriate use of GenAI: how to reference and acknowledge the use of GenAI tools, information on the Turnitin detector tool active at the University, and the potential consequences of misuse. The LMS point-of-submission declaration has been changed to include an acknowledgement from students that they understand and have complied with these requirements. From Semester 2, 2024, the University will also offer students additional learning modules on the appropriate use of GenAI for learning.

  • Where can I learn more about GenAI in relation to assessment?

    The Centre for the Study of Higher Education has a website dedicated to Assessment, AI and Academic Integrity. It provides staff with practical advice relating to the use of GenAI tools in assessment and academic misconduct.

    Staff can also find updates, news, current guidance, principles and policies relating to GenAI via the GenAI Taskforce (GAIT) webpage.

  • Does the University offer professional development in relation to GenAI?

    Yes. Both the Centre for the Study of Higher Education (CSHE) and Learning Environments (LE) offer professional development (PD) in relation to Generative AI in teaching and learning. To see what upcoming PD sessions are available, consult the relevant websites:

    CSHE: https://melbourne-cshe.unimelb.edu.au/pd/teaching-learning-and-assessment/ai-in-higher-education

    LE: https://le.unimelb.edu.au/training-and-workshops#list-of-workshops

  • Can I detect whether a student has used GenAI to create their assessment?

    A number of prominent technology companies such as Turnitin have released detector tools that may help identify whether work has been partially or wholly generated by AI.

    Turnitin’s AI writing detection tool is currently accessible to staff within Canvas as part of its similarity report function, for work submitted in English. It is not visible to students in live view, though a report can be printed and shared with students. The report estimates the percentage of text suspected to be AI-generated and highlights that text in the student’s submission.

    While the Turnitin tool appears to be one of the most reliable available, staff should be aware that it has significant limitations and is known to generate false negatives (not flagging work as AI-generated when it is – at a rate of about 16%) and false positives (flagging work as AI-generated when it is not – at a rate of less than 1%). More information on this and the University’s ongoing evaluation of the tool is available via the academic integrity page.
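
    As a purely illustrative, back-of-the-envelope example of what these error rates can mean in practice (the cohort size and the assumed share of submissions containing AI-generated text below are hypothetical; the error rates are those quoted above):

        # Illustrative only: how the quoted error rates might play out across
        # a cohort. Cohort size and the share of submissions that actually
        # contain AI-generated text are hypothetical assumptions.
        cohort_size = 500            # hypothetical number of submissions
        ai_share = 0.10              # assumed fraction actually containing AI text
        false_negative_rate = 0.16   # AI-generated work the tool fails to flag
        false_positive_rate = 0.01   # human work wrongly flagged (upper bound)

        ai_subs = cohort_size * ai_share       # 50 submissions with AI text
        human_subs = cohort_size - ai_subs     # 450 without

        true_positives = ai_subs * (1 - false_negative_rate)   # 42.0 correctly flagged
        false_positives = human_subs * false_positive_rate     # 4.5 wrongly flagged

        flagged = true_positives + false_positives
        print(f"Flagged submissions: {flagged:.1f}")                            # 46.5
        print(f"Share of flags that are wrong: {false_positives/flagged:.1%}")  # 9.7%

    In this scenario, even with a false positive rate below 1%, roughly one in ten flagged submissions would come from a student who did not use AI. This is one reason why a flag alone is never treated as proof of misconduct (see below).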

  • How do I interpret Turnitin’s AI detection score?

    In interpreting Turnitin’s output, staff should be keenly aware of the limitations of the tool. It is least reliable when submissions are short (i.e. less than 300 words in length), when the score returned is 20% or lower, when read on a sentence-by-sentence basis (refer to whole-document scores only), or where submissions involve a lot of routine or formulaic expression. More advice on interpreting the score is available on the academic integrity page.

  • What do I do if I think a student has inappropriately used GenAI on an assessment?

    A high AI score in Turnitin’s writing detection report is not proof that academic misconduct has taken place (any more than a high score in the more familiar similarity report is proof of plagiarism). It does not, on its own, constitute grounds for making an allegation of academic misconduct.

    Instead, if a subject coordinator suspects inappropriate use of AI, they should look for additional indicators of AI writing, such as false references, fabricated facts and other signs of AI ‘hallucinations’, or language that is odd and inconsistent compared with the student’s other work or with what is typical of the subject, course or discipline. Staff may also choose to speak to students about the process they used in producing the work, though this should be done with care. More information, including key faculty contacts to speak with, can be found on the academic integrity page.

  • How can I check whether my assessment is potentially vulnerable to GenAI?

    One simple way to audit an assessment is to determine whether, in response to the assessment prompt, an artefact of passable standard can be generated by GenAI.

    Because assessments and other teaching materials constitute University IP, they should never be tested on external third-party platforms such as ChatGPT, which use prompts and inputs to further train their models. Any such testing or auditing must be done only within the University’s secure GenAI platform (SparkAI) or other University of Melbourne-sanctioned platforms.
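
    As a purely hypothetical example of such an audit, conducted within SparkAI: paste the assessment brief into SparkAI with a prompt along the lines of ‘You are a second-year undergraduate student. Complete the following assessment task in about 1,500 words: [assessment brief]’, then mark the output against the subject rubric. If the output would earn a passing grade, the assessment is likely vulnerable.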

  • How can I design assessments that are AI-proof?

    The rapid pace of innovation in GenAI platforms means that it is probably unhelpful to imagine that any form of assessment can be AI-proof. Instead, the threat of GenAI provides an opportunity to review whether a subject’s Intended Learning Outcomes (ILOs) are still appropriate, and what forms of assessment might constitute valid evidence of learning and achievement of ILOs in the age of GenAI.

  • Where can I find examples of assessments that have been redesigned in response to the threat of GenAI?

    The CSHE website on AI, Assessment and Academic Integrity includes suggestions on how assessment can be rethought, with examples of assessments that may be less vulnerable to GenAI and subjects that have redesigned their assessment. Staff may find the following guide helpful: Rethinking Assessment in Response to AI

  • What about incorporating GenAI into the assessment itself?

    The CSHE website includes a range of strategies for using GenAI tools to enhance assessment tasks.

    When implementing learning activities or assessments that require the use of GenAI tools, staff should be keenly aware of the potential for inequity arising from such use. Any tools that students are required to use should be freely available to them, and students should be able to opt out of using a GenAI tool if they have concerns about their own data privacy.

  • Can students use GenAI for language translation?

    GenAI has powered a large increase in the accuracy, speed and reliability of translation and editing tools, with many standard tools (e.g. Grammarly, Google Translate) now AI-powered. As a result, these tools have new capabilities, including paraphrasing, summarising and writing, alongside translation and editing. Student use of these tools may pose risks to academic integrity, and staff should consider and provide advice on the limits of acceptable use in their subject. The University’s advice to staff on these tools is available on the academic integrity page.

    Similar advice for students regarding translation and editing tools has been published.

    Given how rapidly these tools are changing, this advice directs students to be guided by a core principle of maintaining academic standards: you are expected to create and express your own ideas. Writing an entire assessment in a language other than English and using a translation tool to translate it into English before submission would therefore constitute a clear case of academic misconduct.

    Beyond this, it is recognised that the acceptable use of translation and editing tools will likely vary across assessment tasks, subjects, and entire disciplines. Students are directed to check their assessment guidelines.

University policies in relation to Generative AI

  • Are students allowed to use GenAI?

    Students are not banned from using GenAI tools at the University. However, this does not mean that any and all use of GenAI tools is allowed.

    It is the responsibility of each subject coordinator to set out the bounds of appropriate GenAI use within their subject. Coordinators are strongly encouraged to consider possible use case scenarios for their assessments, set clear boundaries, and have conversations with their students to enable clarity about what tools are appropriate and for what tasks.

    In 2023, the University issued a statement on students’ use of GenAI in assessment tasks, which makes it clear that any use of GenAI in the preparation of an assessment submission must be appropriately cited. It is important that students realise that if an assessment task does not permit the use of such tools, or if they use such tools in the preparation of a submission without acknowledgement, this constitutes academic misconduct.

    Students can find guidance about how to cite generative AI appropriately on the Academic Skills website and the Library’s Re:cite website.

  • Can staff use GenAI tools to help create teaching materials?

    Staff can use GenAI tools to help in the preparation of teaching materials; however, care must always be exercised to ensure proper use of University IP and ethical use of GenAI. University IP cannot be uploaded to external sites, so if you want AI to review your materials, we recommend using the SparkAI platform. You, as the academic, are solely responsible for the material in your subject and should vet and verify any output from GenAI. We should also model the transparency we are requesting from students and acknowledge the use of any GenAI tools in the materials we provide to them. The University AI principles (https://www.unimelb.edu.au/generative-ai-taskforce/university-of-melbourne-ai-principles) provide general guidance to reflect on when considering when, and how, to use AI tools.

  • Can staff use GenAI for marking/grading of student work?

    GenAI tools have the potential to make feedback and assessment processes more efficient, more timely, more scalable, and more interactive and dialogic. However, because GenAI systems are imperfect and make mistakes, there are considerable risks associated with hasty or ill-considered use of GenAI for such purposes. Among these are risks to the fundamental pedagogical relationship between students and academic staff. Use of GenAI tools should never be a substitute for staff exercising their own evaluative judgement.

    Staff wishing to explore the use of GenAI for marking or grading are encouraged to read the University’s advice to ensure they manage these risks appropriately. This includes communicating and agreeing the intended use with students, ensuring the tools used are secure, valid and reliable, and seeking the endorsement of their faculty’s Associate Dean (Teaching & Learning).

SparkAI

  • What is SparkAI?

    SparkAI is a secure platform internal to the University of Melbourne that can be accessed by staff (but not by students). It allows staff to explore Generative AI tools and capabilities using information that may be sensitive to the University, such as teaching/assignment materials, potentially copyrighted material, or other data that have Intellectual Property overlays.

    This means that you can input restricted, confidential and internal University data into SparkAI in order to query it using the available LLMs. Note, however, that before uploading data to SparkAI you must first establish that you have the appropriate authority and are adhering to University policy on data use and privacy.

    You should never input restricted or confidential University data into ChatGPT or other public, third-party tools directly.

    You do not need to use SparkAI if the data you are using is publicly available University information. Such data is suitable for direct input into ChatGPT or other public, third-party tools.

  • Who can use SparkAI?

    Any University staff member is welcome to use SparkAI. Even in this secure environment, however, it is important to ensure that you have the appropriate permissions for any data you upload or use.

  • How do I get access to SparkAI?

    SparkAI can be accessed at: https://spark.unimelb.edu.au

  • Is SparkAI as capable as ChatGPT? How is it different?

    SparkAI provides a conversational interface that allows the user to conduct Generative AI queries in an environment very similar to that of common AI platforms such as ChatGPT.

    Users can specify which Large Language Model engine they wish to use for their queries in SparkAI’s Secure Chat configuration. Currently these include OpenAI’s GPT-3.5 and GPT-4, and Anthropic’s Claude 3 Haiku (200k) and Claude 3 Sonnet (200k).

    Please see the SparkAI Software 1.0 release notes for more information on the latest version of SparkAI available at the University.

  • Can I experiment with GenAI tools in the University environment without using Spark AI?

    SparkAI is intended to allow staff to use GenAI tools without compromising information that is sensitive to the University. Where staff want to use GenAI tools with information or data that is not sensitive (i.e. that does not involve confidential information, University or student IP, etc.), they are free to experiment without using SparkAI. Publicly available tools such as ChatGPT are permitted for these uses.

  • What tools is the University making available to staff to experiment with GenAI in teaching and learning?

    From Semester 2, 2024, staff will be able to access an experimental tool within the LMS that has been built to act as a learning assistant for students. Staff can control which learning materials the tool has access to and will use to answer student enquiries. Staff will also be able to set the behaviours of the tool in answering student queries, determining what questions it will and won’t answer, and how it will answer them. As an experimental tool, this release is focused on finding out how useful its features are and what staff do or don’t like about it, so please provide feedback if you do choose to try it out.

Enquiries

Professor Raoul Mulder