The four myths of our exams system (and why we need to think again)

There is too big a price being paid in demoralisation, lost potential and reduced life chances.

Oli de Botton, Headteacher, School 21

It was a dispiriting scene. Talented and hard-working students on the phone, begging sixth forms to let them in. They had 4s; they needed the 6s they had achieved in their mocks. At 16, their options were already narrowing. As always there was a group of children, significant in size, with grades but no qualifications. They had a handful of 1s and 2s; the standard distribution was satisfied.

It was worse the week before. UCAS clearing was doing anything but. Students had missed their grades by fine margins; their pathways were blocked. University admissions proceeded happily – they had the data they needed to sift and sort. In schools and colleges across the country, apprenticeship placements were full and work opportunities were sparse. This was the summer of 2019, like the ten before it.

The truth is that the algorithm of 2020 worked. It accurately replicated what happens every year. A third consigned to failure; top grades rationed; no way of judging the full range of what students know and can do. A system in service of stable patterns of data. Perfect for an algorithm.

How did we end up here? Four mutually reinforcing myths.

Myth 1: Admission to University is a problem for the exams system to solve

Things are the way they are because of how we structure higher education. Very selective universities need ways of working out who to choose and who to reject. If more students get A*s at A-level there will be too many people at Oxford colleges (a solution in search of a problem, perhaps). So for them (and for all selective universities) it's best to have the same data each year. It's fine to have people swapping places between Cs and Bs, but you can't change the overall number (except at the margins).

It gets more distorting. Universities are guardians of subject domains. They police the contours of what makes it into English literature or Physics. So the content of A-level exams (and of the GCSEs that precede them) is shaped almost entirely by what you might learn at academic universities. The school curriculum is a set of building blocks to the higher calling of higher education. No space for other skills: oracy, work readiness, teamwork. Employers who want more taught and more assessed don't get a look-in.

This wouldn't be so bad except that at present only around 40% of 18-year-olds get a place in higher education, and the government has dropped the 50% participation target. So most young people will never end up in the institutions they have been preparing for since age 11. The future is cancelled.

If you tried to invent a system to entrench the status quo this would be a very decent first draft. Given the lifetime benefits accrued to graduates and the difficulty higher education institutions have had in widening participation, we end up increasing inequality in the very institutions set up to reduce it.

Myth 2: Exams are fair and reliable

Proponents of the current system say it is the fairest way to assess children. Exams are, after all, linked to a published curriculum, and that gives fair access. They are designed by experts with considerable skill. And they are right. Exams are a great way of sampling what students know about a particular area of study. In combination with other forms of assessment they might even give you some useful information about whether students can think like historians, for example.

The problem is that exams only assess a small part of what a young person knows and can do. Some students write brilliant poems, give fantastic speeches or can organise their peers to support a local charity. Fairness means recognising the potential and skills of every young person. Anything else is partial. In fact, given the relative weight we put on exam grades, it seems odd that we aren't trying harder to understand more about pupils' capabilities. This narrow thinking is everywhere. Drama is now a 70% written exam at GCSE; vocational qualifications look more like examined subjects than ever.

But aren't exams more reliable than other assessments, like teacher grades? The data scientists certainly think so. This year the algorithm changed around 40% of teacher grades (some up; most down). But in a normal year Ofqual itself admits it gets about a quarter of grades wrong anyway. And in 2019, for good measure, there were 71 errors in the wording of exam questions and instructions. Reliability, it seems, is a matter of degree. Surely with enough research and imagination we can offer other modes of assessment that do at least as well?

Myth 3: Perverse consequences are a price worth paying

The exams system is full of good intentions. We want to raise standards. We want reliable assessments. We want to compete with the best in the world. But intention does not outweigh impact. And the impact is real. Brilliant school leaders lose their reputations and confidence because they can't get enough young people over the arbitrary line each year. (It's arbitrary because even assessment researchers admit that getting grade boundaries right is only ever a best guess. In reality there isn't any significant difference between a bottom C and a middle D.)

Teachers leave the profession because their heads pressure them to get more young people over that arbitrary line. Year 11 is prioritised over everything else and intervention proceeds ad nauseam. This is not what we got into the profession for, and it is a large part of the reason 40% leave within their first five years. A system that can't hold on to its workforce has a ceiling on improvement.

But it is students who are disadvantaged the most. Wherever you look in a school there is the potential for brilliance. Our exam system convinces us, and them, that that potential can only be expressed by sitting 30 (at the last count) exams in one summer. The price is paid in demoralisation, lost potential and reduced life chances.

Myth 4: There is no other way

Of course this is not easy to untangle. The system is a coalition of interests built up over time. Oxbridge is not going anywhere. Academic education remains important. Schools need accountability.

However, there are countries and models that have done things differently, and the sky has not fallen in. Universities in Canada select differently (and new models like the London Interdisciplinary School are developing here), the International Baccalaureate assesses a wider range of skills, and go-ahead employers are finding different ways to assess potential whilst suppressing bias. This blog and the brilliant colleagues on it will no doubt find the best approaches.

From my own experience of the successes and failures at School 21, it's clear that where there is a vision of excellence, broader assessment can work. Schools that can rigorously deliver exam results can just as rigorously deliver moderated teacher assessments, portfolio assessments and oracy assessments. With enough imagination and will, things can and should be different.