Introduction

As we mentioned in a previous blog, The Grading System During Lockdown: GCSEs and A-Levels, the final grades that students receive every August are subject to a partly statistical moderation process. This year has been unique: students have been awarded grades without sitting any exams.

Many A-Level students have been disappointed with their results, as the moderation process left them with lower grades than they expected. Most of these students have been left completely at the mercy of Ofqual’s algorithm.

What was different this year?

As COVID-19 presented a significant challenge to the exam system, Ofqual had to rely on teachers providing both a projected grade for each of their students and a rank order of every student in their class.

These centre assessments were then statistically moderated and adjusted by Ofqual.
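
To make the general idea concrete, here is a deliberately simplified, hypothetical sketch in Python of how a teacher's rank order might be combined with a centre's historical grade distribution. It is not Ofqual's actual model; the function, names and proportions below are invented purely for illustration.

```python
# A hypothetical, heavily simplified illustration; NOT Ofqual's real model.
# It maps a teacher-supplied rank order onto a centre's historical grade
# distribution by quota.

def moderate(ranked_students, historical_distribution):
    """Assign grades to students (listed best first) using quotas derived
    from the proportion of each grade the centre achieved historically."""
    n = len(ranked_students)
    results = {}
    index = 0
    for grade, proportion in historical_distribution.items():
        quota = round(proportion * n)              # students allotted this grade
        for student in ranked_students[index:index + quota]:
            results[student] = grade
        index += quota
    # Anyone left over after rounding receives the lowest grade listed.
    lowest_grade = list(historical_distribution)[-1]
    for student in ranked_students[index:]:
        results[student] = lowest_grade
    return results


if __name__ == "__main__":
    # Invented names and proportions, for illustration only.
    ranks = ["Asha", "Ben", "Chloe", "Dev", "Ella",
             "Farid", "Grace", "Hana", "Imran", "Jo"]
    history = {"A*": 0.1, "A": 0.2, "B": 0.3, "C": 0.2, "D": 0.1, "E": 0.1}
    print(moderate(ranks, history))
```

Even in this toy version the core tension is visible: the quotas come from the centre's past results, so a genuinely stronger cohort, or an unusually strong student ranked mid-class, is pulled back towards the historical pattern.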

Why is Ofqual’s algorithm a problem?

Every year, Ofqual has had to combat the issue of ‘grade inflation’: the phenomenon whereby GCSE and A-Level grades seem to rise every year. Ofqual aims to limit this inflation as much as possible. The reasoning is that each year’s cohort should be more or less the same in terms of ability, which, with remarkable regularity, follows a normal distribution. Ofqual also wants to maintain consistency from year to year, as its role is to preserve the value of these qualifications.
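
As a rough illustration of that assumption, the short Python sketch below draws two “cohorts” of ability scores from the same normal distribution and awards grades at fixed cut-off points on that curve. The z-score boundaries and cohort sizes are invented for illustration; the point is simply that, under this assumption, the share of each grade barely moves from one year to the next.

```python
import random

random.seed(42)

# Hypothetical grade boundaries expressed as z-scores on the ability curve.
BOUNDARIES = [(1.28, "A*"), (0.84, "A"), (0.25, "B"),
              (-0.25, "C"), (-0.84, "D"), (-1.64, "E")]

def grade(z):
    """Award a grade at fixed points on the ability distribution."""
    for cutoff, label in BOUNDARIES:
        if z >= cutoff:
            return label
    return "U"

def cohort_summary(size):
    """Percentage of a simulated cohort receiving each grade."""
    counts = {label: 0 for _, label in BOUNDARIES}
    counts["U"] = 0
    for _ in range(size):
        counts[grade(random.gauss(0, 1))] += 1
    return {label: round(100 * c / size, 1) for label, c in counts.items()}

# Two successive "years" drawn from the same distribution give almost
# identical grade profiles, which is the logic behind holding results steady.
print("Year 1:", cohort_summary(50_000))
print("Year 2:", cohort_summary(50_000))
```

In practice, of course, this stable-cohort assumption is only ever an approximation.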

However, teachers get better at teaching to a set exam specification with every passing year. Students also gain access to more materials, past exam papers, and so on. This gives them a significant advantage over previous year groups, especially the first cohort to sit a new specification, who have no past papers at all. Furthermore, teacher predictions also tend to drift upwards over time. Exam boards and Ofqual therefore adjust these grades.

The main issue was that, even with no exams to moderate, Ofqual’s algorithm still attempted to combat grade inflation. Teachers’ predicted grades were extremely high: they represented a 14% increase on last year. As a result, many A-Level students had their grades pushed down by the algorithm. It is important to point out, however, that results still rose significantly from last year. For instance, in 2019 only 7.8% of entries received an A*, yet this year the figure rose to 9%.

Some exam statistics (source):
  • The proportion of candidates receiving top grades is the highest on record. A total of 27.9% of entrants scored either an A or A*, up from 25.5% in 2019.

  • Some 9.0% of entrants received an A*. This is another record high and is up from 7.8% last year.

  • The overall pass rate (grades A* to E) was 98.3% – again, another record high. It is up from 97.6% in 2019.

  • Some 78.4% received a C or above, up from 75.8% in 2019 and the highest since at least 2000.

  • Girls have extended their lead over boys in the top grades. The proportion of girls who got A or higher was 28.4%, 1.1 percentage points higher than boys (27.3%). Last year, girls led boys by just 0.1 percentage points (25.5% girls, 25.4% boys). Boys briefly took the lead in 2017 and 2018, following a long period in which girls had been ahead.

  • The gap between the best-performing boys and girls has fallen slightly. The proportion of boys who got A* was 9.3%, 0.5 percentage points higher than girls (8.8%). Last year, the gap was 0.7 points.

  • The most popular subject this year was maths. It was taken by 94,168 entrants, up 2.5% on 2019.

  • Psychology was the second most popular subject, overtaking biology. It was taken by 65,255 entrants, up 1.0% on 2019. Biology slipped to become the third most popular subject, taken by 65,057 entrants, a fall of 6.0%.

  • ICT (information and communications technology) saw the biggest drop in candidates for a single subject with more than 1,000 entrants, falling by 15.3% from 1,572 to 1,332.

  • Computing saw the biggest jump in candidates of any subject with more than 1,000 entrants, rising by 11.7% from 11,124 to 12,426.

  • There were 780,557 A-levels awarded, down 2.6% on last year’s total (801,002) and the lowest number since 2004.

The media has also focused particularly on the fact that selective independent schools were treated differently from state schools; however, it is not a clear-cut issue. Selective independent schools pose a problem for the algorithm because they more or less select students who sit at the right-hand side of the normal distribution. Furthermore, niche subject choices such as Classics and Latin have far fewer entrants than mathematics, and fewer entrants tend to mean higher results.

Smaller class sizes likewise tend to produce higher results. A comparable effect exists in GCSE English Language, where AQA students are typically less likely to get top grades than Edexcel and OCR students, because the pool of students who take the AQA exam is much larger. This is one of the reasons why I have argued that the exam board system is not fair and is certainly not objective.
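
The statistical intuition behind this small-cohort effect can be shown with a short simulation: the average ability of a small class swings far more from year to year than the average of a large one, so historical data tells the moderator much less about what to expect. The class sizes and ability model below are invented purely for illustration.

```python
import random
import statistics

random.seed(1)

def spread_of_class_means(class_size, trials=2000):
    """Standard deviation of the class-average ability across many
    simulated classes of the same size (ability ~ standard normal)."""
    means = [
        statistics.mean(random.gauss(0, 1) for _ in range(class_size))
        for _ in range(trials)
    ]
    return statistics.stdev(means)

for size in (5, 15, 50, 200):
    print(f"class size {size:>3}: spread of class averages = "
          f"{spread_of_class_means(size):.2f}")
```

The smaller the class, the larger the swing, which is precisely why statistical moderation is on shakier ground for small centres and niche subjects.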

Another factor that moderation would normally need to take into account is that certain schools can genuinely improve their performance over time. This year’s algorithm did not account for this.

In my opinion, the most important issue is the way students have perceived the situation. Because this year is so different, and students did not sit an exam, grade assignment now stands out as a moral issue; it feels as if students are being arbitrarily labelled successes and failures.

Moral Questions

Many A-Level students have, rightly, voiced their concerns about having their futures decided by an algorithm. Some were relying on top grades to secure their places at top universities. Many of these universities have already rejected candidates who did not make their grades, even though the algorithm is now being shelved and teachers’ predicted grades are being used as the final A-Level grades.

Rejection naturally comes every year, with every cohort, and it is natural too, that universities will have to filter out candidates. This year, however, it seems especially brutal.

Personally, even my own family has been affected by this: one of my cousins, who had offers from King’s College London, amongst others, received lower than her predicted grades. Her predicted grades were Mathematics A*, Further Mathematics A* and Economics A, yet her results after the algorithm were Mathematics A*, Further Mathematics B and Economics C. She has been anxious for days as a result; she feels she is in limbo, unsure what to do next. This makes the situation difficult both for the universities, which need to make their selections, and for the students. Many other A-Level students seem to be in a similar position.

For those not taking the university route, these A-Level grades could still matter for access to certain apprenticeship schemes, and employers will still take them into account. Furthermore, because of the COVID situation, there are probably going to be fewer opportunities around for young people.

The students I feel sorriest for are those who ended up with grades below a C, and even with Us. At GCSE level this is less of an issue, as some of these students would probably have had to retake certain core subjects anyway; at A-Level, however, students are desperate to move on to the next stage of their lives. The idea of failing a student who never had the opportunity to take the exam in the first place does not sit right with me.

Solutions

In terms of solutions, there were severe limitations on what Ofqual could have done differently:

1. A Lack of Coursework

Thanks to the examination reforms put in place during Michael Gove’s tenure at the Department for Education, coursework has been almost completely replaced by examinations. In the absence of examination results, there is a distinct lack of available evidence on which to base grades.

2. Non-standardised Mocks

Schools vary greatly in the way they administer mock exams. Some create their own internal assessments, some use past papers, and others use papers designed by exam boards. This means that mock grades do not provide a fair reflection of how students would perform in the real exam.

3. No AS Exams

Before the reforms of 2015, A-Levels were split into two distinct sets of exams. The first set, the AS exams, allowed three modules to be taken in Year 12, with the final set of modules taken in Year 13. This system would have given Ofqual a much better idea of how students might have performed had they taken the real exam.

4. Standardised Mocks

It would have been far easier to assign students grades if they had sat standardised mock exams set by the exam boards.

5. Treat Algorithms With Caution

Although I accept that a great deal of work goes into creating these algorithms, they are still imperfect tools and need continual refinement. I personally believe they still have an important part to play in grade assignment. This year has highlighted more issues than usual because of the exceptional circumstances, but it also reveals the significant danger of relying on them too heavily. These algorithms are, after all, created by humans, who bring their own sets of assumptions.

Conclusion

I am hopeful that GCSE students (especially our students) will be mostly pleased with their grades, and that the chaos that has accompanied the A-Level results will not be as pronounced.

There are flaws in the way education is assessed, and I believe that this blog post has highlighted some possible changes that would make the assessment regime more resistant to failure.

Education is an emotive topic, and it is important for students not to feel too disheartened. Learning never stops, and ultimately, if you keep upskilling, life should be fruitful.
