(I didn’t ask ChatGPT to write this for me.)
Generative AI has educators worried. I am assuming you have heard enough to know what AI-based tools like ChatGPT can offer and, like me, are thinking about the impact on learning. After a disappointing webinar from Microsoft, where we were effectively told to get used to the idea, I’ve been considering what was unsatisfactory about that suggestion and why educators are concerned about generative AI, which brought Bloom’s Taxonomy to mind.
The following diagram represents Bloom’s Taxonomy (1956 version) with a few extras. I showed Bloom’s Taxonomy to a gathering of school IT officers and none recognised it, but I’m certain teachers would, as they are trained using this theoretical framework. There are multiple versions, but they all represent levels of understanding that can be measured through assessment.

ChatGPT, and tools like it, have been compared to past technologies that challenged educators at their inception. Calculators were, at first, resisted in schools because they could produce answers to mathematical questions that students were expected to work out themselves. The Internet, with its search engines and sites like Wikipedia, has also challenged the way students are assessed.
Generative AI has concerned educators to the same degree as these earlier technologies, if not more. Considering Bloom’s Taxonomy gives insight into what may be going on in the back of teachers’ minds.
- A quick web search can substitute for a student’s recall of knowledge.
- Sites like Wikipedia can provide information showing a level of comprehension.
- A calculator allows a student to solve a mathematical equation without demonstrating the application of their understanding.
Generative AI, in comparison, can:
- take a sophisticated prompt,
- gather information that demonstrates knowledge and comprehension,
- contrast views in a way that shows application and analysis, and
- synthesise these ideas into a cohesive written piece.
As suggested in the diagram above, artificial intelligence can now substitute for higher levels of understanding that were previously reserved for the human student. Combined with the relatively sudden release of generative AI, I believe this is what has teachers worried.
At this stage, the quality of output from generative AI is not perfect, containing factual errors and bias, but it is improving. It can already be compared to the responses expected of postgraduate students, such as those studying medicine or law. The output is also largely original, so existing plagiarism detection is less effective against it.
Going back to my original question: educators are concerned because ChatGPT undermines their methods of assessing student understanding in a new way. This threatens academic integrity and institutional reputation, but it also means they have work to do in revising their assessment approaches.
What can be done?
It seems that generative AI is not going to disappear any time soon; in fact, it will probably have deeper impacts on education than we can currently imagine. From what I can see, there are three possible responses for educators looking to adapt their assessment to this new paradigm: block, control or embrace.
Block
Blocking generative AI tools, such as ChatGPT and QuillBot, could discourage students from attempting to use them. This mirrors the initial response to earlier technologies, and it has been the response from ACT Education (where I work). However, the number of places where ChatGPT can be accessed beyond its main website (such as Bing and various chat platforms) is proliferating, so blocking may not be effective.
Control
If generative AI tools are not blocked, controls can be put in place to prevent cheating and, perhaps more importantly, ensure students are able to learn and demonstrate their understanding appropriately.
- Education
As with other tools, students need to be equipped to understand the ethical standards expected of them. Plagiarism and other forms of academic misconduct are frowned upon and do not lead to successful learning outcomes for students. Information literacy training around content from online sources is already needed to help students grow into citizens who can identify trustworthy content, and generative AI output is an extension of this.
- Task Controls
In mathematical tasks, we ask students to show their working; in essay tasks, we ask students to cite their sources. Similarly, with tasks that could make use of generative AI, students can be asked to give more than the answer, to demonstrate how they arrived at their understanding. Assessment design can be improved to deter (or at least complement) the use of generative AI by adding specificity to the task or by asking students to deliver artefacts in multiple modalities (other than written text). Ultimately, the best way to avoid cheating is to make tasks seem achievable by providing clear instruction and weighting tasks appropriately.
- Technological Controls
Plagiarism detection seems to have been diminished now that generative AI can synthesise novel text, complete with appropriate citations. So what can be done technologically to control the use of generative AI?
- OpenAI, the makers of ChatGPT, have released a tool that can help detect AI-generated text, which may be useful, but for now it is hard to tell.
- It’s possible to ask students to present their work using a shared Word document, Google Doc or OneNote file, which shows a history of how the document was constructed, allowing teachers to scrutinise where content may have been copied and pasted. This is not foolproof, but it is a useful check for teachers.
- Quizzes have been shown to allow demonstration of understanding that is as good as, or better than, a written essay. A quiz can elicit responses to various question types, which may be useful for redirecting students away from generative AI. Quizzes can also be partially or fully automatically marked, which is always an incentive for time-poor teachers. Adding time constraints and a lock-down browser to a quiz should give sufficient confidence for most assessments.
- Physical Controls
When authenticity of assessment really counts, it still means asking students to undertake tasks in person, away from computers. That could mean a paper exam or an in-person question-and-answer session. The utopia of online exams, temptingly close after COVID-19 remote learning, will be challenged by generative AI when institutional reputation is at stake.
Embrace
Educators have the opportunity to employ generative AI as a tool for learning and assessment. Like plagiarism detection (which began as a tool for educators to spot cheating, but became a tool shared with students to learn appropriate citation), generative AI in the hands of students can have learning benefits. The possibilities are already being enumerated and I anticipate we’ll see many pedagogical applications of generative AI over the coming years. Here are some, with a small illustrative sketch after the list:
- Providing summaries of written works to gain a better understanding
- Generating exercises to self-assess understanding
- Supporting text revision where non-native language skills are lacking
- Providing a starting point for in-depth research
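To make the second idea concrete, here is a minimal sketch of how a teacher or student might prompt a generative AI model to produce self-assessment questions from a passage of study notes. It uses the OpenAI Python library; the model name, prompt wording and helper function are illustrative assumptions of mine, not a recommended product or workflow.

```python
# A minimal sketch: asking a generative AI model for self-assessment questions
# about a passage of study notes. Assumes the openai Python package (v1.x) is
# installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def generate_quiz(passage: str, num_questions: int = 3) -> str:
    """Ask the model for short revision questions, each with a model answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat-capable model would do
        messages=[
            {
                "role": "system",
                "content": "You are a study assistant who writes short-answer "
                           "revision questions, each followed by a model answer.",
            },
            {
                "role": "user",
                "content": f"Write {num_questions} revision questions about the "
                           f"following passage:\n\n{passage}",
            },
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    notes = (
        "Bloom's Taxonomy (1956) describes levels of understanding: knowledge, "
        "comprehension, application, analysis, synthesis and evaluation."
    )
    print(generate_quiz(notes))
```

The same pattern, with a different prompt, would cover the other items in the list (summaries, revision of non-native writing, starting points for research).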
Evaluation is a level of Bloom’s Taxonomy that I don’t think AI has yet conquered, which leaves room for higher-order thinking to be assessed. A colleague pointed out an article from the Sydney Morning Herald describing tasks set for medical students, who were instructed to prompt ChatGPT for information and then critique it.
Conclusions
The benefits of generative AI could lead to better student outcomes, if educators allow learning to make use of them. Already, there is significant positive innovation to counter the reactions of those wishing to block generative AI.
I don’t expect efforts to block generative AI to last long, especially while they are less than fully effective. Ultimately, a point of balance between control and embrace needs to be established, where assessment of understanding can still happen and the learning benefits of AI-based tools can be realised.
Limitations
I need to say that these views are my own and not an official policy or stance of ACT Education.
I haven’t completed a comprehensive survey of opinions from “the coalface”; however, I have been communicating with people who have been gathering reactions to generative AI and writing briefs and presentations for Education decision-makers, and that is what led me to this understanding.