
Top Marks Correlates With OCR Better Than Humans: GCSE Religious Education - 15-Mark Question

Richard Davis, CEO: 40-essay study reveals Top Marks AI achieving 0.90 correlation with OCR, outperforming human markers, February 9, 2025


Time and again, we're asked this crucial question: how accurate are Top Marks' GCSE Religious Education AI marking tools?

So we have been conducting a series of experiments to help provide answers.

In today's experiment, we will be looking at the OCR Religious Education course - specifically, the 15-mark question.

On their website, OCR have published 40 exemplar essays for the 15-mark question, spanning a broad range of answer quality. These essays are made available for standardisation purposes - so teachers can see what responses at various levels actually look like in the wild.

We downloaded all 40 of these essays – all handwritten – and put them through our 15-mark OCR tool. Then we measured the correlation between the official marks the board gave those essays and the marks Top Marks AI gave them.

We used a measurement called the Pearson correlation coefficient. In short:

  • A value of 1 would mean perfect correlation -- when one marker gives a high score, the other always does too, and when one gives a low score, the other always does too.
  • A value of 0 means no correlation whatsoever -- knowing one marker's score tells you nothing about what the other marker gave.
  • Negative values would mean the markers systematically disagree -- when one gives high scores, the other gives low scores.
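To make the metric concrete, here is a minimal sketch of how a Pearson coefficient can be computed. The marks below are invented for illustration only; they are not the study data.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length mark lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance numerator: do the two markers move up and down together?
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    # Normalise by each marker's spread so the result lands in [-1, 1]
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical marks out of 15 for six essays (illustration only)
board = [3, 6, 8, 10, 12, 14]
ai    = [4, 5, 8, 11, 12, 13]
print(round(pearson(board, ai), 3))
```

Note that a high correlation only tells you the two markers rank essays similarly; that is why the Mean Absolute Error discussed below is a useful companion metric.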

For context, how do humans perform?

What sort of correlation do experienced human markers achieve when marking essays already marked by a chief examiner?

Cambridge Assessment conducted a rigorous study to measure precisely this. 200 GCSE English scripts - which had already been marked by a chief examiner - were sent to a team of experienced human markers. These experienced markers were not told what the chief examiner had given these scripts. Nor were they shown any annotations.

The Pearson correlation coefficient between the scores these experienced examiners gave and those of the chief examiner was just below 0.7. This indicated a positive correlation, though far from perfect. If you are interested, you can find the study here.

It’s important to note that these figures relate to the marking of GCSE English scripts - not the marking of Religious Education scripts. Nevertheless, in the absence of RE-specific data, we believe this study gives a useful insight into the current state of human-led marking of open-ended GCSE essays.

How did Top Marks AI perform?

Top Marks, across the 40 essays, achieved a correlation of 0.90 -- an incredibly strong positive correlation that far outperforms the experienced human markers in the Cambridge study. (Top Marks AI was likewise not shown the official marks or any annotations.)

Moreover, 75% of the marks we gave were within a 2-mark tolerance of the official mark given by the board.

Another interesting metric is the Mean Absolute Error, on which Top Marks scored 1.32. On average, the AI's mark differed from the board's by 1.32 marks, comfortably within a 2-mark tolerance. As a percentage of the 15-mark scale, that is an average difference of 8.8%.

In contrast, in that same Cambridge study, experienced examiners marking a 40-mark question showed a Mean Absolute Error of 5.64 marks. Scaled proportionally to a 15-mark question, that corresponds to a Mean Absolute Error of roughly 2.1 marks. These results highlight the exceptional accuracy of Top Marks AI compared to traditional marking practices.
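For readers who want to check the arithmetic, this sketch shows how the figures above -- the MAE, its percentage of the scale, the within-tolerance share, and the rescaling from a 40-mark to a 15-mark question -- are computed. The mark pairs are invented for illustration, not the actual study data.

```python
# Hypothetical (board mark, AI mark) pairs out of 15 -- illustration only
pairs = [(3, 4), (6, 5), (8, 8), (10, 11), (12, 12), (14, 13)]

# Mean Absolute Error: the average size of the gap between the two markers
mae = sum(abs(b - a) for b, a in pairs) / len(pairs)

# Expressed as a percentage of the 15-mark scale
pct = 100 * mae / 15

# Share of essays where the AI landed within a 2-mark tolerance of the board
within_2 = sum(abs(b - a) <= 2 for b, a in pairs) / len(pairs)

print(f"MAE = {mae:.2f} marks ({pct:.1f}% of the scale)")
print(f"Within 2 marks: {within_2:.0%}")

# Rescaling an MAE reported on a 40-mark question to a 15-mark scale,
# as done for the Cambridge figure in the text
mae_40 = 5.64
mae_scaled = mae_40 * 15 / 40  # 2.115, reported as roughly 2.1
```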

For transparency, you can also access the 40 exemplars we used here, and see where we sourced them from.

Can I see a graph to help me visualise this?

Absolutely.

First, here’s a scatter graph to show you what a theoretical perfect correlation of 1 would look like:

Perfect Correlation Graph

Now, let’s look at the real-life graph, drawn from the data above:

Actual Correlation Graph

On the horizontal axis, we have the mark given by the exam board; on the vertical, the mark given by Top Marks AI. Each dot is an essay -- its position shows both the board's mark and Top Marks AI's mark. You can see how closely it resembles the theoretical graph depicting perfect correlation.

The Handwriting Factor

As mentioned, all the essays we downloaded were handwritten. That Top Marks was able to correlate so closely with the official board grades indicates not only its marking efficacy but also the strength of its transcription technology.

Discover how Top Marks AI can revolutionise assessment in education. Contact us at info@topmarks.ai.