Rethinking Assessment for Generative AI: Orals and discussions

Despite the hype around GAI, chatbots, and image/audio/video generation tools, these technologies are not going to “revolutionise the education system”. If we persist with high-stakes, standardised testing and essay-based examination, we will see GAI which supports (or helps people to “game”) those systems. This post explores the many diverse and effective ways to assess and provide feedback, and calls for a commitment to update assessment practices in the age of AI.

In a previous post, I talked about the risks associated with trying to “catch” students, such as the ethical issues with detection software, the mistrust created by heavy-handed academic integrity policies, and the danger of false accusations. Unfortunately, we have a system that is heavily geared towards high-stakes summative assessments in written forms, such as essays and examinations. It’s a hard habit to break.

And, sorry to disappoint, but generative AI isn’t going to save us. Despite the hype around GAI, chatbots, and image/audio/video generation tools, these technologies are not going to “revolutionise the education system”. Nor will they lead to a renaissance of knowledge, a great levelling of educational inequity, or profound opportunities for personalised learning.

Maybe I’m a little cynical, but I’m yet to see the benefits of edtech that for years has promised the world and delivered little. During COVID, the increased move to platforms and educational apps was touted as a means of reforming education, but may have just further contributed to the digital divide between those who can and cannot access the technologies.

In a recent panel with the University of Melbourne’s Sandra Milligan, NAPLAN’s Stuart Mitchell, and Grattan Institute’s Nick Parkinson, I said that generative AI will reinforce whatever system we have. That means that if we persist with high-stakes, standardised testing and essay-based examination, we will see GAI which supports (or helps people to “game”) those systems.

For example, whilst we’ll see “personalised learning” chatbots, we’ll also see “personalised NAPLAN tutors” and “essay helpers” flooding the market. Developers will build the tools that the system requires, and not necessarily the ones that will “revolutionise” the system. The former will make money quickly, the latter, maybe not at all.

The real opportunity

The real opportunity doesn’t come from the technology, it comes from the discussions we’re having because of the technology. Like I wrote in the previous article, there is no way to detect generative AI. That means certain assessment types are now defunct. In fact, those assessment types have been inequitable and problematic for much longer than GAI – these technologies have just shone a light on the issue.

There appears to be a groundswell at the moment of people in both K-12 and tertiary calling for updated assessment practices. It may have been brought about by this latest wave of GAI, but it’s down to the people in the system, not the technology, to make the changes.

So this series of posts explores what’s already out there in terms of good assessment practice. I’m not relying on the technology to save us – just an understanding of the many diverse and effective ways to assess and provide feedback. Some will be supported by GAI, some will go without.

Orals and discussions

I’m basing some of this post on a great document from Eliana Elkhoury, PhD, which covers types of oral assessments, their characteristics, and examples in academic literature. Elkhoury’s document is much broader in scope than this post, and I’ve selected just a few of the possibilities that might apply in various contexts. The full list can be found here.

Oral assessments are nothing new. Having students deliver presentations or PowerPoints is fairly standard in courses in both K-12 and tertiary settings. Unfortunately, oral tasks often get relegated to being “tacked on” at the end of a unit as an additional assessment on top of the “real” written task. But oral assessments can and should occasionally replace, not simply add to, other modes of assessment in a unit of work. And it’s not all about solo speeches and slide decks.

The caveat over this entire series is that there are no “one size fits all” approaches to assessment. Oral assessments may cause anxiety for some students, or may be inaccessible due to language barriers, selective mutism, being non-verbal, or other factors.

Oral presentation

Let’s get the most obvious kind of presentation out of the way first. Of course, one way for students to demonstrate their knowledge is through a presentation, solo speech, PowerPoint, or similar. This has the advantage of allowing an individual, rather than a group, to demonstrate knowledge, and public speaking is also a genuinely useful skill for many knowledge-based jobs.

Of course, students could either perform entirely tech-free, or use a variety of tools to help with oral presentations. If you wanted to incorporate GAI into an oral presentation, students could:

  • Use GAI text generation like ChatGPT, Bing, Bard, or Claude to generate ideas, create scripts, edit, and so on
  • Use an app like Gamma to create the slides which accompany the presentation
  • Use image generation to create visuals, and add them to a standard format like PowerPoint (which currently has the AI-assisted Design feature, and will soon have Copilot)
  • Use an app like Canva which includes GAI features such as text and image generation and AI assisted design

As I wrote in an earlier post, there’s no way to guarantee students are not using GAI if they are completing part of the task out of class. This includes generating the scripts, but also using audio generation to create convincing versions of their voices for recorded orals. Like I said in that post, if you want it to be totally GAI free, it has to be a supervised task. Otherwise, accept that GAI might be used and move on. That logic applies for every assessment type in this series.

Debate and discussion

Debates and discussions have been an effective way of sharing, creating, and assessing knowledge since long before our current education system existed. As well as being useful for assessing knowledge, debates and discussions can create healthy competition, strengthen critical and creative thinking skills, build communication skills, and contribute to more effective ideas.

Again, you could stage a debate or discussion in class with no technology whatsoever. It could be an informal conversation, a deliberately reflective practice like a yarning circle, semi-structured like a fishbowl or Socratic seminar, or fully structured like a formal debate.

If you wanted to deliberately incorporate GAI into a debate or discussion, students could use a chatbot to research positions beforehand, generate counterarguments, or act as a sparring partner to test the strength of their case before the live discussion.

You obviously don’t have to assess every conversation that happens in a class, tutor group, or online discussion. However, these moments can provide useful insights into how individual students are contributing to the overall knowledge demonstrated through the unit.


Pitches

I’m on the board of Young Change Agents, a national not-for-profit that helps young people create meaningful social enterprises through programs like $20 Boss, Digital Boss, Academy for Enterprising Girls, and various Design Challenges. Throughout the YCA programs, pitching is a key aspect of getting an idea in front of an audience and persuading them to back it.

A pitch is also a great way for both individuals and groups to demonstrate their knowledge of a subject: if a student can’t successfully pitch an idea, they might need to work on their content knowledge. It’s also necessary in a pitch to get to the core of the idea, empathise with your audience, and develop strong arguments.

A pitch can be delivered off the cuff, but it’s better to plan and prepare. Again it could be tech free, or might incorporate GAI in various ways, including:

  • Using GAI as a mock audience member to test and refine ideas. Although a chatbot can’t replace a real person when developing an idea, it can be a useful starting point
  • Using tools like the ones listed earlier to create compelling pitch decks
  • Testing the logic and persuasiveness of an argument against a chatbot
  • Using GAI to write code for prototype apps and webpages, noting that a certain level of coding skill would be required to check for errors or issues

Pitches are great for persuading someone to back a project, product, or social enterprise, but can be useful for assessing knowledge too. Even something as simple as an elevator pitch or Gaddie pitch can allow a student to succinctly demonstrate what they know, without relying on a written response.

Learning Conference

Elkhoury’s list of oral assessments includes a reference to this paper from Sindija Franzetti about learning conferences. In the article, Franzetti writes something which I think most of us can identify with: “Like so many of my colleagues, I resent grading for the labor and energy it takes away from doing the meaningful work of teaching to learn.”

In response to this resentment towards grading assignments, Franzetti suggests learning conferences: individual conversations with students lasting 20-40 minutes which include reflection on the course, their participation, and the assignment. I’ve written myself about the conferencing I used in my Y12 English classes as an alternative to collecting piles of books. I have also suggested ways that GAI chatbots can be used as part of the conferencing process, including:

  1. Pre-Class Preparation & Research: Students can use a chatbot for researching and gathering information pre-lesson, with a reminder to validate accuracy.
  2. Writing Practice & Error Correction: A GAI offers immediate technical feedback on writing, including grammar and structure, allowing students to correct errors and improve without waiting for individualised teacher feedback.
  3. Group Discussions & Peer Feedback: A chatbot can support group discussions and peer feedback sessions, providing prompts, tracking contributions, and acting as a resource for small groups while teachers give 1:1 attention.
  4. Conferencing & Personalised Feedback: During conferencing, a GAI provides additional context, individualised feedback based on unique needs, and supports follow-up questions, aiding teachers and addressing diverse student requirements.
  5. Vocabulary Expansion, Reflection, & Progress Tracking: A chatbot suggests new vocabulary, guides students in reflection and goal-setting, and tracks progress, offering a comprehensive record and contextual examples for ongoing improvement.

Interviews and vivas

Interviews and vivas are traditional methods of oral assessment that allow students to demonstrate their knowledge, communication skills, and critical thinking in a structured conversation. These formats can encourage students to think on their feet and provide well-thought-out responses to questions or problems posed by an examiner or panel. Questions can be seen or unseen, and the accessibility needs of students should of course be taken into account.

If you choose to integrate GAI in interviews and vivas, several strategies can be employed:

  • Students could leverage GAI for preparing responses to potential questions, honing their articulation skills and refining their arguments
  • Chatbots can serve as practice interviewers, providing an opportunity for students to simulate the interview experience and receive immediate feedback
  • GAI tools could assist in organising and managing interview schedules, transcribing conversations, and highlighting key points for assessment

Incorporating GAI doesn’t have to undermine the value of interviews and vivas but could add to the preparation, execution, and assessment. It goes without saying by this point that any use of the technology before, during, or after an interview should be appropriately acknowledged by both the students and the teacher.

There are many ways to use orals to assess knowledge, and they don’t have to be seen as onerous or an addition to other forms of assessment. In future posts in this series, I’ll be exploring other ways to develop assessment tasks more suited to GAI.


Leon Furze is an international consultant, author, and speaker with over fifteen years of experience in secondary and tertiary education and leadership. Leon is studying for his PhD on the implications of Generative Artificial Intelligence for writing instruction and education.

This post is part of a series on rethinking assessment in light of generative AI and was originally published here

The posts draw on research and resources from K-12 and tertiary to suggest ways that educators can design engaging, compelling assessments which shift the narrative away from GAI and “cheating”.

You can access a free copy of the book “Rethinking Assessment for Generative Artificial Intelligence”. The sixty-page eBook contains advice on alternative assessment forms which are more “AI proof”, as well as research-based discussions of AI detection tools (and why they don’t work), and the impact of GenAI on every subject discipline. Click here to access
