Judging GCSE English: Validity of grades

Chris Wheadon
The No More Marking Blog
3 min read · Apr 2, 2018


So far we have established that teachers produce a highly reliable scale of marks for the mock essays, and that our marks reflect those derived directly from a mark scheme. The next question we need to answer is whether the grades we awarded are meaningful.

The Grading Process

For each of the writing marks, reading marks, and the total score separately we followed this grading process:

We took the summer 2017 GCSE English results for the schools in our study that provided them. For each of these schools, we identified the percentage of pupils who achieved a grade 9, a grade 7, and so on.

We then used a weighted average to calculate the percentage of pupils we would expect to achieve each key grade across these schools, based on their summer 2017 outcomes. For example, 2.9 per cent of pupils in this cohort of schools achieved grade 9 last summer.

We then applied these percentages to our scale. That is, we awarded grade 9 to the top 2.9 per cent of pupils from these schools. The pupil at the bottom of our grade 9 achieved a total score of 50, so our boundary mark for grade 9 is 50.

Having derived our boundaries from the schools with summer 2017 results, we then applied them to all pupils in the study.
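To make the process concrete, here is a minimal sketch in Python. The school names, cohort sizes, percentages and scores are invented placeholders rather than data from the study; the sketch simply illustrates the weighted average and the boundary-setting steps described above.

```python
# Hypothetical illustration of the grading process described above.
# Schools, cohort sizes, percentages and scores are all made up.

def expected_percentages(summer_results):
    """Weight each school's summer 2017 grade percentages by its cohort size."""
    total = sum(size for size, _ in summer_results.values())
    expected = {}
    for size, grade_pct in summer_results.values():
        for grade, pct in grade_pct.items():
            expected[grade] = expected.get(grade, 0.0) + pct * size / total
    return expected

def grade_boundaries(calibration_scores, cumulative_pct):
    """Boundary for a grade = score of the lowest-ranked pupil falling within
    the expected percentage of pupils at or above that grade."""
    ranked = sorted(calibration_scores, reverse=True)
    boundaries = {}
    for grade, pct in sorted(cumulative_pct.items(), reverse=True):
        n = max(1, round(len(ranked) * pct / 100))
        boundaries[grade] = ranked[n - 1]
    return boundaries

def assign_grade(score, boundaries):
    """Apply the derived boundaries to any pupil in the study."""
    for grade in sorted(boundaries, reverse=True):
        if score >= boundaries[grade]:
            return grade
    return 0  # below the lowest key boundary

# (cohort size, % of pupils at or above each key grade in summer 2017)
summer_2017 = {
    "School A": (120, {9: 4.0, 7: 24.0, 4: 76.0}),
    "School B": (90,  {9: 1.5, 7: 15.0, 4: 62.0}),
}
expected = expected_percentages(summer_2017)   # grade 9 comes out near 2.9%

# Total scores on our scale for pupils at the schools that supplied 2017 results.
calibration_scores = [58, 55, 50, 47, 44, 41, 38, 35, 30, 22]
boundaries = grade_boundaries(calibration_scores, expected)

# The boundaries are then applied to every pupil in the study.
print(boundaries, assign_grade(50, boundaries))
```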

Grading Reading and Writing

Grading reading and writing separately is unusual, but it allows schools to see their relative strengths in each. The school shown below, for example, is stronger in writing than in reading.

Overall Grades

As these pupils are in Year 10, we won’t know for some time whether the grades have predictive power. We do have one thing in our favour, however: the teachers collaborated directly in the judging, creating a single shared measurement scale across schools. The validity of that scale depends on the teachers’ shared understanding of what good looks like matching the skills rewarded by the exam board’s markers, and we have already seen that they match.

One further source of information that we can use to check our grading process is the outcomes of schools from June 2017. We would expect the grades achieved by schools from year to year to be reasonably stable.

The figure below shows the relationship between the percentage of pupils achieving grade 4 and above in June 2017 and in our study. There is a clear relationship, with a correlation of 0.72, which suggests our grades are reasonable. Much of the remaining variation is probably explained by changes in the prior achievement of the two cohorts.

Relationship between grades awarded in our study and the grades achieved by centres in June 2017
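For completeness, this is roughly how such a stability check can be computed. The school-level percentages below are invented; the 0.72 reported above comes from the study’s actual data.

```python
# Hypothetical stability check: correlate each school's percentage of pupils
# at grade 4 and above in June 2017 with the same percentage in our study.
from statistics import correlation  # Pearson's r, Python 3.10+

june_2017 = [72.0, 65.0, 80.0, 58.0, 69.0]   # invented school-level percentages
our_study = [70.0, 60.0, 83.0, 55.0, 74.0]

print(f"Pearson correlation: {correlation(june_2017, our_study):.2f}")
```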

If you would like to take part in judging your GCSE English essays next year, take a look at our national project.
