At the end of 2022, OpenAI released a conversation-based artificial intelligence (AI) tool, or online response generator, called ChatGPT. It was described as the fastest-growing technology platform ever, reaching one million users within five days. The tool has seen significant use within the education community and has drawn both interest and concern, due to its ability to produce human-like responses to instruction, including homework tasks, assignments and coursework.
Since its release, some schools have been concerned that students may be using online response generators such as ChatGPT to help with written work, or even submitting work produced by these tools in place of their own. Because these tools can produce plausible writing at the level and in the style of school-age learners, we understand that teachers are concerned about how to tell when they have been used.
The use of online response generators cannot always be identified by detection tools, which can be unreliable, generating both false negatives and false positives. This can create doubt in teachers’ minds and introduce a sense of mistrust into their relationships with students.
To support teachers, we encourage schools to consider the following guidance:
- Review the school’s ‘academic honesty’ and ‘approaches to learning’ policies, to ensure teachers talk with students about how and when generative AI tools built on large language models (LLMs), such as ChatGPT, can be used. We are not recommending banning these tools, but students need to be clear about what they can be used for and when they must not be used. Students must also understand how these tools should be referenced.
- Discuss the strengths and weaknesses of LLMs, the technology that underpins AI tools such as OpenAI’s ChatGPT or Google’s Bard. On the one hand, they might be a good way for students to create an essay framework or to help them compile a written response from multiple AI outputs. On the other, these tools can suffer from ‘hallucination’, making factual errors that look highly plausible on the face of it, so it is a good idea to work with students on how to check for inaccuracies and inherent bias.
- Carefully consider the command words you might want to use with students as they engage with such tools. Examples could include: prompt, compile, modify, correct, improve upon, generate and challenge, all of which foster higher-order thinking and deeper levels of understanding.
- Encourage teachers to collaborate on the changing nature of the assignments and tasks they set for students, and to consider ways of asking for evidence of understanding that is not based solely on prose or short text: for example, labelling a diagram, delivering a presentation, employing the Socratic method, or creating a flow chart. Using images alongside thinking routines, such as See, Think, Wonder, can help both to uncover student thinking and to develop it. Requiring students to draw on specific perspectives or examples from class in their written work will also ensure a level of contextualisation that is harder for generative AI to reproduce.
- Ask students to reference their sources and create bibliographies, as LLMs currently struggle to provide accurate citations, although this seems likely to change soon.
We understand that AI tools to support teachers are also emerging. These tools can help to create handouts, lesson plans and schemes of work, based on instructions provided by the teacher. Tools like these would not only save time and increase productivity, but also support teachers in scoping and sequencing a learning progression, a critical element of scaffolding learning for students.
Ultimately, we recognise that AI is creating a whole new paradigm in teaching, learning and assessment. Cambridge is developing a strong understanding of how this will evolve, and we are committed to playing a key role in supporting schools as they look to embed these new models into education policy and practice.