Top Marks Correlates With Eduqas Better Than Humans: GCSE English Literature - 19th Century Prose Question

Richard Davis, CEO: 68-essay study reveals Top Marks AI achieving 0.91 correlation with Eduqas, outperforming human markers, February 27, 2025

Time and again, we're asked this crucial question: how accurate are Top Marks' GCSE English AI marking tools?

So we have been conducting a series of experiments to help provide answers.

In today's experiment, we will be looking at Eduqas GCSE English Literature - specifically, the 40-mark 19th Century Prose question.

On their website, Eduqas have published 68 exemplar essays for the 19th Century Prose question. These exemplars cover a broad range of answer quality. They are made available for standardisation purposes - so teachers can see what responses at various levels actually look like in the wild.

We downloaded all 68 of these essays – all handwritten – and put them through our Literary marking tool. We then measured the correlation between the official marks the board gave each essay and the marks Top Marks AI gave.

We used a measurement called the Pearson correlation coefficient. In short:

  • A value of 1 would mean perfect correlation -- when one marker gives a high score, the other always does too, and when one gives a low score, the other always does too.
  • A value of 0 would mean no correlation whatsoever -- knowing one marker's score tells you nothing about what the other marker gave.
  • Negative values would mean the markers systematically disagree -- when one gives high scores, the other gives low scores.
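For readers who like to see the arithmetic, here is a minimal Python sketch of how a Pearson coefficient can be computed for two sets of marks. The mark values below are illustrative placeholders, not our study data:

```python
# Minimal sketch: Pearson correlation between two sets of marks.
# The marks below are hypothetical placeholders, not the study data.
from statistics import correlation  # available in Python 3.10+

board_marks = [34, 12, 27, 8, 21, 38, 15, 30]  # hypothetical examiner marks
ai_marks = [33, 14, 25, 9, 22, 37, 17, 31]     # hypothetical AI marks

r = correlation(board_marks, ai_marks)  # Pearson's r by default
print(f"Pearson correlation coefficient: {r:.2f}")
```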

For context, how do humans perform?

What sort of correlation do experienced human markers achieve when marking essays already marked by a lead examiner?

Cambridge Assessment conducted a rigorous study to measure precisely this. 200 GCSE English scripts - which had already been marked by a chief examiner - were sent to a team of experienced human markers. These experienced markers were not told what the chief examiner had given these scripts. Nor were they shown any annotations.

The Pearson correlation coefficient between the scores these experienced examiners gave and those given by the chief examiner was just below 0.7. This indicated a positive correlation, though far from perfect. If you are interested, you can find the study here.

How did Top Marks AI perform?

Across the 68 essays, Top Marks achieved a correlation above 0.91 -- an incredibly strong positive correlation that far outperforms the experienced human markers in the Cambridge study. (Like those markers, Top Marks AI was not privy to the "correct marks" or any annotations.)

Moreover, 78% of the marks we gave were within a 10% tolerance of the board's mark. In other words, 78% of all the marks we gave were within 4 marks (10% of the 40 available) of the mark the board awarded.

Another interesting metric is the Mean Absolute Error, for which Top Marks AI scored 2.86. On average, the AI differed from the board by 2.86 marks, comfortably within a 4-mark tolerance. As a percentage of the 40 available marks, that is an average difference of roughly 7%.

In contrast, in that same Cambridge study, experienced examiners marking a 40-mark question showed a Mean Absolute Error of 5.64 marks. These results highlight the exceptional accuracy of Top Marks AI compared to traditional marking practices.
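If you want to reproduce these two figures for your own data, here is a minimal Python sketch of how the within-tolerance percentage and the Mean Absolute Error can be computed. Again, the marks shown are illustrative placeholders rather than the actual study data:

```python
# Minimal sketch: within-tolerance percentage and Mean Absolute Error.
# Placeholder marks for illustration only -- not the actual study data.
board_marks = [34, 12, 27, 8, 21, 38, 15, 30]
ai_marks = [33, 14, 25, 9, 22, 37, 17, 31]

MAX_MARK = 40
TOLERANCE = 0.10 * MAX_MARK  # 10% of a 40-mark question = 4 marks

errors = [abs(ai - board) for ai, board in zip(ai_marks, board_marks)]

mae = sum(errors) / len(errors)
within_tolerance = sum(e <= TOLERANCE for e in errors) / len(errors)

print(f"Mean Absolute Error: {mae:.2f} marks ({mae / MAX_MARK:.0%} of the paper)")
print(f"Marks within tolerance: {within_tolerance:.0%}")
```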

For transparency, you can also access the 68 exemplars we used here, and see where we sourced them.

Can I see a graph to help me visualise this?

Absolutely.

First, here's a scatter graph to show you what a theoretical perfect correlation of 1 would look like:

Perfect Correlation Graph

Now, let's look at the real-life graph, drawn from the data above:

Actual Correlation Graph

On the horizontal axis, we have the mark given by the exam board. On the vertical, the mark given by Top Marks AI. The individual dots are the essays -- their position tells us both the mark given by the exam board and the mark given by Top Marks AI. You can see how closely it resembles the theoretical graph depicting perfect correlation.
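If you would like to draw a graph like this for your own marks, here is a minimal Python sketch using matplotlib. The marks are illustrative placeholders, not the actual study data:

```python
# Minimal sketch: scatter graph of AI marks against exam board marks.
# Placeholder marks for illustration only -- not the actual study data.
import matplotlib.pyplot as plt

board_marks = [34, 12, 27, 8, 21, 38, 15, 30]
ai_marks = [33, 14, 25, 9, 22, 37, 17, 31]

fig, ax = plt.subplots()
ax.scatter(board_marks, ai_marks)          # one dot per essay
ax.plot([0, 40], [0, 40], linestyle="--")  # line of perfect agreement
ax.set_xlabel("Mark given by the exam board (out of 40)")
ax.set_ylabel("Mark given by Top Marks AI (out of 40)")
ax.set_title("AI marks against official Eduqas marks")
plt.show()
```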

The Handwriting Factor

As mentioned, all the essays we downloaded were handwritten. That Top Marks was able to correlate so closely with the official board grades indicates not only its marking efficacy but also the strength of its transcription technology.

Discover how Top Marks AI can revolutionise assessment in education. Contact us at info@topmarks.ai.