Hello everyone, and thanks if you have filled in my History GCSE 2018 survey. If you are interested in insights from this kind of survey, it would be great if you could also take a moment to fill in the Historical Association Annual Survey. This helps the HA act as a voice for the profession more effectively. You can fill it in HERE.

In my last blog I used overview data to compare the GCSE results this year to last year, as well as looking at the grade boundaries for different boards. The survey data collected allows me to look more specifically at the results of departments. So, for example, big data suggested that the proportion of grades was essentially similar to last year, but it was impossible to tell if this was because departments did the same, or because some did brilliantly and others appallingly. In this blog I am going to try to pull out some answers to key questions I am hearing a lot on Twitter and Facebook.

Please note that this data is only a sample of 363 schools, all of which engage actively with social media, so the data is mostly suggesting things rather than proving anything. However, I did manage to get a broadly representative sample in terms of the distribution of data around the 65% 9-4 average. For more on the data set, please see the end of the blog. If you'd like a fully anonymised copy for your very own, please do let me know.

As each post is quite long, I have indexed the questions here:
1. Were results more or less stable in 2018 compared to 2017? - Read On Further
2. Did teaching a 2 or 3 year curriculum have the biggest impact on 2018 results?
3. What impact did exam question practice have on results?
4. Was the A*-A / 9-7 rate affected?
5. What impact did non-specialist teaching have on grade stability?

Conclusions

In lots of ways I am glad I cannot draw many clear conclusions from this data. The slight differences between results from 2017 to 2018 suggest that departments have not been unduly hit by these changes and that there is a broad level of fairness in how they have occurred. While some departments clearly will not be pleased with their results (and others, by turn, delighted), this is not unusual in the picture of GCSE. The only slight impact here may be that departments which have been historically stable are now in a category of less stable schools. However, to assess this, I would need data going back further, and probably in more depth. What I do now have is a large bank of data on the 2018 exams, so if anyone has a question they'd like me to investigate using the dataset I have, please do get in touch.

NOTE: For almost all of these explorations I have compared results in terms of stability. So, for example, 2018 stability is calculated by comparing a school's results in 2018 with their results in 2017. If the results are in the same bracket, then I consider them broadly stable. If their results this year were a bracket or more above last year's (e.g. last year they got 50-59% A*-C and this year 60-69% 9-4), then they are considered as doing better, and the reverse if they are lower this year.

1. Were results more or less stable in 2018 compared to 2017?

This is a big one! The first chart compares centres on the stability of their A*-C/9-4 results for 2018 and 2017. As you can see, there are some minor differences here. Notably, the stability of results in 2018 seems to be better than in 2017.
This is quite positive, as it means departments were not necessarily suffering due to the changes in examination as we might have expected (and as I implied in my last blog). Of course, this could just be related to the fact that the data comes from schools which are quite engaged in the process, and which therefore were more likely to have stable results this year.

We can break this down a bit further and look at individual boards. For the main three (I did not get enough data for OCR A) the results are quite interesting, in that they are very similar. Around half of centres with each board did around the same in 2018 as in 2017. OCR B has a slight edge in centres doing better, but it really is small.

A more interesting comparison is to break this down by school type. In the chart below you can see higher (80+% A*-C in 2016), middle (50-79% A*-C in 2016) and lower (30-49% A*-C in 2016) achieving schools whose results were also stable between 2016 and 2017 (i.e. schools which got roughly the same results two years running). This picture is fascinating. It seems that the vast majority of higher achieving schools remained broadly stable, whilst lower achieving schools were actually more likely to improve their results. The middle band of schools showed the most diversity in terms of stability between 2017 and 2018. Breaking this down further and looking at board specifics for these "stable" schools reveals almost no deviation from the pattern given above.

A final comparison is to look at schools which were not stable between 2016 and 2017. There were no major differences in terms of school type here, and very little in terms of board. EDIT: I have updated the figures here after noticing a calculation error. For schools whose results were falling between 2016 and 2017, around 47% stayed in their new grade bracket, 15% got worse, but 38% improved, reverting towards the mean. This means that if you got poor results this year, it seems unlikely you will fall further. For those departments whose results were rising, the picture was essentially the inverse: around 55% remained in their new bracket, 13% improved further, and 32% dropped by at least one bracket. The good news again: if you are on the up, you are most likely to maintain or improve your position.

Takeaway: Not a lot to take from this really, other than the fact that lower attaining departments were potentially better off here and that schools in the mid range suffered the most variability. Differences between boards were negligible.
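If anyone wants to replay the stability measure described in the note above on their own figures, here is a minimal sketch in Python. It assumes the results are banded into 10-point brackets (e.g. 50-59%, 60-69%); the bracket width, function names and example values are my own illustrations and not taken from the original analysis.

```python
# Minimal sketch of the bracket-based stability measure (my own assumptions,
# not the original spreadsheet): a centre is "stable" if its 2018 bracket
# matches its 2017 bracket, "better" if it is at least one bracket higher,
# and "worse" if it is at least one bracket lower.

def bracket(percent_pass: float, width: int = 10) -> int:
    """Return the bracket index for an A*-C / 9-4 pass rate, e.g. 57% -> 5."""
    return int(percent_pass) // width

def stability(result_2017: float, result_2018: float) -> str:
    """Classify a centre's year-on-year movement as better, stable or worse."""
    diff = bracket(result_2018) - bracket(result_2017)
    if diff > 0:
        return "better"
    if diff < 0:
        return "worse"
    return "stable"

# Example: a centre moving from 58% A*-C in 2017 to 63% 9-4 in 2018
# crosses a bracket boundary, so it counts as doing better.
print(stability(58, 63))  # -> "better"
```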
Next Question

Data Info and Caveats: In total 363 schools responded to my survey. Of these, 26 were independent or private schools, 2 were special schools, 228 were state academies, 7 were free schools, and 100 were LA-controlled schools. It is perfectly possible that a single school may have answered the survey twice or more. I am aware that the boundaries for results have a lot of scope for error; however, they broadly fit with the national distributions, so I am happy with the approximation. My metrics for calculating percentages of the course taught, non-specialist teaching etc. are far from ideal, but again this was aiming to give a broad-brush picture.
4 Comments
Samantha Groom
9/14/2018 10:32:09 am
I rang AQA yesterday to ask about locations for depth studies and was told they now intend to change the historical site every year. They originally said they would work on a 4-5 year cycle, but apparently this has been changed and we have not been informed. I think this is disgusting for the historical sites, who have invested time, money and resources, and for us: we have spent a huge amount of time creating SOW, model answers and fun activities, bought DVDs and packs from the site, organised visits and created risk assessments, and all for what, a single year apparently? I have spoken to other local HODs and no one had heard this. I rang the history department at AQA and that was their direct response to me. DISGUSTED.
Alex Ford
9/14/2018 09:11:59 pm
I think the changing site was in the spec from the outset. Did the message about the site come via an AQA rep?