Professor Rose Luckin on AI in assessment

AI is reshaping assessment, offering new ways to measure motivation, learning mastery, and critical thinking beyond traditional exams. At a Rethinking Assessment Keynote event, Professor Rose Luckin argued that we must weigh the opportunities—such as adaptive learning and automated feedback—against the threats of bias, data security risks, and overreliance on AI in education. Here are her thoughts…

AI's got a very long history. It didn't just appear in November 2022, when ChatGPT launched itself onto the world.

In the middle of the last century, we were starting to think about how we could automate processes, and in particular, trying to understand enough about human cognition to be able to start thinking about how you automate certain aspects of it.

Generative AI and the ChatGPT-type tools are just part of the story. When we think about education and assessment, we need to think about AI more broadly – beyond the generative.

It's really important to realise, firstly, that this is all about big tech companies making lots of money. We need to make sure that we recognize that and, as educators, try and get a louder voice about what actually happens with AI. It will, without question, transform society, the world, and education. What we don't know is exactly how.

So where are we as educators? AI brings us opportunities, and it brings us threats.

The opportunities are for an adaptive, personalized learning experience. Automated marking and feedback can be powerful in terms of time-saving but also in terms of collecting data that can be fed back into course design. We can think about AI to help us generate content. It's very useful for generating policies and standard documents. It can also do some lesson planning, but one needs to be more careful there, and we should certainly see it as an assistant for teachers and an assistant for learners. We can think of AI as being part of other technologies, like augmented and virtual reality.

So it's not just AI; it's how AI enhances other existing technologies.

However, there are considerable threats. There's inaccuracy and a lack of transparency in a lot of AI systems. We need to be very careful about personal data, data risks, and data security. Many AI systems are inherently biased. Speech-to-text applications, for example, will have been trained on millions of examples of the relationship between sounds and characters on a screen. Those millions of examples will not have been drawn from a population that's representative of the whole world and, therefore, there will be bias within that training towards certain cultures, certain accents. And that's true for many AI applications because we don't always have representative data for the populations that we want the AI to be able to serve. So we need to be aware of that.

Open source development means that those with an agenda or purpose that might be inappropriate or dangerous can easily develop powerful tools. There are lots of ethical and safeguarding challenges when it comes to AI, from protecting children and young people from inappropriate content, and protecting students' IP, to protecting them from overreliance on AI. There are lots of challenges there and, of course, the unintended consequences.

Nevertheless, I think we have to look at the opportunities, and the implications for assessment.

We could think about assessment in a more traditional way, developing digital assessment with computer-adaptive personalized testing, automated item generation and adaptive testing. That's perfectly possible. There's lots of work in that space already, including high-stakes certification using digital technologies and digital means.

But that's not where the really exciting assessment opportunities are.

We need to rethink what it is we need to assess, because we want to make sure that we are helping our students to thrive in a world that will be very different from the one we faced when we left school.

One thing I am certain about is that this is a moment in time when, as humans, our brains haven't finished evolving. We are not a finished product in terms of our human intelligence. We're more intelligent than we were 500 years ago or 100 years ago, and now we need to have a step change in the sophistication of our human intelligence because we now have these incredibly smart technologies.

We need to make sure that we help our students develop intelligent behaviours that are not necessarily accessible to AI, as much as those that AI can do. We need to think more in terms of AI literacy and data literacy, being able to use these tools powerfully, and focusing on learning mastery and knowledge mastery – understanding what knowledge is, where it comes from, and how to learn about the world in a flexible way.

Traditional subject disciplines might usefully become the medium through which these sophisticated thinking and learning skills are acquired, as much as what we actually assess. I'm not saying they are not still useful—they are—but perhaps they have a different role as we move forward. After all, you can't learn something like metacognition in a vacuum; you have to learn it through activities that are likely related to a particular subject area.

So what does that mean in terms of assessment and AI? The wonderful thing about AI right now is that it serves as both a catalyst for change—prompting us to rethink intelligence—and a tool to help us achieve it.

To explore this, let's take the example of motivation.

We know a great deal about what motivates people and how to identify when someone is or isn't motivated to learn. There is also a strong correlation between motivation and academic success. So how can we use data and AI to measure motivation in a meaningful way?

Traditionally, we might have gauged motivation through attendance records, academic grades, extracurricular activities, or library book checkouts. But now, by applying AI techniques, we can enrich this basic dataset. We can derive new forms of data that AI can help us analyze, allowing for a much more nuanced understanding of a student's motivation.

We can break motivation down into subcomponents and look for specific evidence that aligns with these elements. Mind maps and data proxy mapping can help connect motivation's subcomponents to the types of data we collect. Then, through feature engineering, AI can enrich this data, and machine learning can identify patterns related to a student’s motivation—where they are, where they've been, and what is or isn’t working for them.
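The mapping from motivation's subcomponents to data proxies can be sketched in code. This is a minimal, hypothetical illustration, not Professor Luckin's method: the subcomponent names (persistence, curiosity, engagement, progress) and the scoring thresholds are invented for the example, and the raw fields come from the traditional proxies mentioned above (attendance, grades, library checkouts, extracurricular activities).

```python
# Hypothetical sketch of data proxy mapping: scoring invented motivation
# subcomponents from traditional student-record proxies. All names and
# thresholds are illustrative assumptions, not a validated model.

from dataclasses import dataclass

@dataclass
class StudentRecord:
    attendance_rate: float   # fraction of sessions attended, 0-1
    grade_trend: float       # change in average grade over the term
    library_checkouts: int   # books borrowed this term
    activities: int          # extracurricular activities joined

def motivation_features(r: StudentRecord) -> dict:
    """Engineer simple features, one per assumed motivation subcomponent."""
    return {
        "persistence": r.attendance_rate,                 # showing up over time
        "curiosity": min(r.library_checkouts / 10, 1.0),  # capped at 1.0
        "engagement": min(r.activities / 5, 1.0),
        "progress": max(min(0.5 + r.grade_trend, 1.0), 0.0),
    }

record = StudentRecord(attendance_rate=0.9, grade_trend=0.2,
                       library_checkouts=6, activities=2)
features = motivation_features(record)
```

Feature vectors like this are what a machine-learning model would then look for patterns in; the point of the sketch is only the proxy-mapping step, which turns raw records into subcomponent scores.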

We can enrich these traditional components—such as attendance and academic grades—by combining them with the microcomponents derived from our academic analysis of motivation. As we refine this approach, we can conduct increasingly detailed analyses. We can identify different student profiles and assess how specific interventions impact their motivation. We can use process mining to detect micro-steps in student behaviours that correlate with increased or decreased motivation, then use these insights to design appropriate interventions.
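The process-mining idea above can be illustrated with a toy example. This is a sketch under invented assumptions: the micro-step names and the event log are made up, and real process mining involves far richer discovery algorithms, but counting transitions between consecutive micro-steps is the basic move.

```python
# Illustrative process-mining-style analysis: counting transitions
# between micro-steps in a student's activity log. The event names
# and the log itself are invented for this example.

from collections import Counter

def transition_counts(events: list[str]) -> Counter:
    """Count how often each consecutive pair of micro-steps occurs."""
    return Counter(zip(events, events[1:]))

log = ["open_task", "read_hint", "attempt", "fail",
       "read_hint", "attempt", "pass", "open_task"]

counts = transition_counts(log)
# Frequent transitions hint at behavioural patterns: here the repeated
# ("read_hint", "attempt") pair might be read as productive persistence.
```

In practice these transition frequencies would be correlated with the motivation measures over many students, so that micro-steps associated with rising or falling motivation can inform the design of interventions.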

This type of approach and use of AI opens up real possibilities for assessing new, essential aspects of learning. Understanding motivation is key to being good at learning, which is central to learning mastery. And this is just one example of how AI can help us rethink assessment in meaningful ways.

Ultimately, we need to support learners to develop very high-level metacognitive skills. We need to develop high-level self-directed learning. We need to help our students develop very sophisticated thinking and learning skills so they truly understand themselves as learners.

AI – yes, we need to include it. But it's not the whole story. We need to treasure much more of our human intelligence and build that more richly into our assessment systems.

You can visit Rose’s website here and sign up to the AI Skinny on AI for Education newsletter here.

Further blogs

How e-portfolios can support learning, reflection and engagement

Case study from Golftyn Primary School in Wales

In this blog, Gavin O’Loughlin from Golftyn Primary School explains how his school is implementing e-Portfolios and the impact that this is having on pupil learning and engagement…

10 big ideas from the Next Generation Assessment Conference

Highlights and Themes

The conference brought together 250 delegates from the UK plus international speakers. Here are some of the main takeaways from the day…

Rethinking Assessment for Generative AI: Beyond the Essay

This post explores alternatives to the traditional essay assessment across a range of curriculum subjects, searching for tasks that are "GAI-proof" and which still tick the boxes of what could be assessed via an essay…