Hello everyone. This is an updated version of last year's post on GCSE history results. I have left the previous post intact but added the new results for 2019 in blue text for those interested, and commented where things have changed from 2018. The first version of this post was published on 24 August 2018.

-------

Evening all, I thought I'd take a few minutes to outline five key things we found out about the new History GCSEs. OK, it's really four and a question, but hey ho!

1) Pupils this year did pretty much the same as last year overall

No changes here, and no surprise given the details below. I have updated the chart to show the 2019 results side by side.

This is not really a surprise, as Ofqual demanded a statistical tie between 2017 and 2018. Almost the same proportion of pupils therefore got a G/1 or better as last year, and the same goes for C/4 and A/7. There were some minor differences, but at a school cohort level of, say, 100 pupils, the difference would have amounted to less than a single pupil missing out on a grade C/4.
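To put that in concrete terms, here is a minimal sketch of the arithmetic. The percentages are invented placeholders, not the published figures:

```python
# Illustrative only: these percentages are invented placeholders,
# not the published results statistics.
cohort_size = 100

pct_4_plus_2017 = 64.5  # hypothetical % achieving grade 4/C or above in 2017
pct_4_plus_2018 = 64.0  # hypothetical % achieving grade 4/C or above in 2018

# Express the year-on-year difference as pupils in a 100-pupil cohort.
diff_pupils = (pct_4_plus_2017 - pct_4_plus_2018) / 100 * cohort_size
print(f"Difference at 4+: {diff_pupils:.1f} pupils per {cohort_size}")
# A gap of ~0.5 percentage points means less than one pupil per school cohort.
```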
Of course, this does not mean everyone's results will have been stable. It is common with new specifications for some schools to do much better than normal and some to do much worse. This is usually because some schools manage to match what examiners were looking for more closely. It is almost impossible to second-guess this precisely in the first year of a specification, as examiners refine their expectations during first marking and grading discussions.

Takeaway lesson: NO CHANGES HERE. Read the examiners' reports closely and get some papers back if you were not happy with your overall results.

2) Your choice of board made almost no difference to overall grades (on average)

There is very little change here this year. The distribution of awards per board seems to be fairly static, and this reflects the fact that awards are still tied to pupils' prior attainment. From this we can infer that centres doing OCR A tend to have cohorts with higher prior attainment, and that a greater proportion of higher grades can therefore be awarded.

Discounting the statement at the end of the last point: because the boards all had to adhere to this basic rule when awarding grades, the differences between boards are virtually non-existent. If you look at the detail you will see that some boards did deviate from the 2017 figures; however, this is because they have to take prior attainment into account. So the fact that OCR A seems to have awarded more 4+ and 7+ grades would suggest that more high-attaining pupils took these exams. By contrast, OCR B probably awarded slightly fewer 4+ and 7+ grades due to a weaker cohort. This might imply that OCR centres chose their specification based on the ability range of their pupils (though this is pure speculation). AQA and Edexcel pretty much fit the Ofqual model, suggesting they had a broadly representative sample of pupils.
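For what it's worth, here is a toy illustration of how intake alone can move a board's headline figures. Every number in it is invented, and this is not Ofqual's actual prediction matrix, but it shows the mechanism:

```python
# Purely illustrative: invented transition rates from prior-attainment
# bands to GCSE outcomes, not Ofqual's actual prediction matrices.
p_4_plus = {"low": 0.30, "mid": 0.65, "high": 0.92}  # P(grade 4+) by band
p_7_plus = {"low": 0.02, "mid": 0.15, "high": 0.55}  # P(grade 7+) by band

# Hypothetical cohort mixes for two boards with different intakes.
cohorts = {
    "OCR A (stronger intake)": {"low": 0.15, "mid": 0.45, "high": 0.40},
    "OCR B (weaker intake)": {"low": 0.35, "mid": 0.45, "high": 0.20},
}

for board, mix in cohorts.items():
    at4 = sum(share * p_4_plus[band] for band, share in mix.items())
    at7 = sum(share * p_7_plus[band] for band, share in mix.items())
    print(f"{board}: {at4:.0%} at 4+, {at7:.0%} at 7+")
# The same grading standard produces different headline 4+/7+ rates,
# with neither board being more "generous" than the other.
```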
In my previous two blogs I looked at some of the serious problems which exist in the marking of subjects like History and English at GCSE and A Level, and at potential changes which could be made to improve the reliability of examinations. However, I also noted that such modifications might not resolve all of the problems identified.
In this final blog, I want to explore a more radical solution to the search for the "gold standard" of examinations: its abandonment.

Indeed, to take a gold rush analogy, it was seldom the gold hunters who profited much from the great gold rushes in America. In fact, the gold hunters gave way to huge corporate interests, and long-term destruction was the result (though the companies certainly did well). Instead, it was those who supplied the tools, cooked the food, cleaned the cabins and provided the clothes who really made the profits (most notably, of course, one Levi Strauss). In short, those people who recognised that the opportunities lie in the everyday, not the elusive.

So what would this look like? First, I want to suggest that we need to reconsider the purpose of summative assessment in schools. Up to now, examination has been seen only in terms of measuring the standardised "outcomes" (and thereby potential) of students and schools. However, I would suggest that well-designed assessment should in fact be supporting the development of rich curricula, improving teachers' engagement with their subjects, and promoting deep curricular engagement among students. This in turn would impact on students' knowledge and understanding, and thereby implicitly their outcomes.

Second, and in order to achieve the above, I think the creation of assessments needs to be devolved to the level of schools, or groups of schools working together. This is not the same as saying all work should be coursework, just that the assessments should be designed and set in smaller, local groupings. In such a system, students' learning might not be so easily comparable nationally (though this clearly isn't working well in some subjects anyway), but the improved quality of teaching might well mean better outcomes in real terms, regardless of the grading systems used.

Why are such changes needed? To understand the power a locally led examination system might have, one must first focus on the problems inherent in assessing a subject, like History or English, where there is no definitive agreement on content at a national level. I have outlined a selection of these below.

Note: there are two blogs which follow this one which offer some solutions to the problems outlined here.
A few days ago, Ofqual published an interesting blog looking at the state of the examinations system. This was based on an earlier report exploring the reliability of marking in reformed qualifications. Tucked away at the end of this blog was the startling claim that in History and English, the probability of markers agreeing with their principal examiner on a final grade was only just over 55%.

The research conducted by Ofqual investigated the accuracy of marking by looking at over 16 million "marking instances" at GCSE, AS and A Level. The researchers looked at the extent to which markers' marks deviated from seeded examples assessed by principal and senior examiners. The mark given by the senior examiner on an item was termed the "definitive mark." The accuracy of the other markers was established by comparing their marks to this "definitive mark." For instance, it was found that the probability that markers in maths would agree with the "definitive mark" of the senior examiners was around 94% on average. Pretty good.

They also went on to calculate the extent to which markers were likely to agree with the "definitive grade" awarded by the principal examiners (by calculation) based on a full question set. Again, this was discussed in terms of the probability of agreement. This was also high for Maths. However, as noted, in History and English the levels of agreement on grades fell below 60%.

When Michael Gove set about his reforms of the exam system in 2011, there was a drive to make both GCSE and A Level comparable with "the world's most rigorous". Much was made of the processes for making the system of GCSE and A Level examination more demanding, to inspire more confidence from the business and university sectors which seemed to have lost faith in them. Out went coursework and in came longer and more content-heavy exams. There was a sense of returning GCSE and A Level examinations to their status as the "gold standard" of assessment.

The research conducted by Ofqual seems to suggest that examinations are a long way from such a standard. Indeed, it raises the question of whether national examinations have ever really been the gold standard of assessment they have been purported to be. Have we been living in a gilded age of national examinations? The answer is complex. Before I launch into this, I should also note that I understand the process of examining is a difficult one, and I have no doubt that those involved in the examinations system have the best interests of students at heart. I also don't want to undermine the efforts of those students who have worked hard for such exams. That said, there were some fairly significant findings in the Ofqual research which need further thought.
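As a footnote to the Ofqual figures above, a toy simulation shows how quite modest item-level marking tolerance can collapse grade-level agreement. The grade boundaries and error sizes below are invented for illustration; this is emphatically not Ofqual's methodology:

```python
# A minimal sketch, assuming a marker's paper total deviates from the
# "definitive mark" by a normal error. All numbers are invented.
import random

def grade(mark, boundaries):
    # A mark's grade is the number of boundaries at or below it (0-9).
    return sum(mark >= b for b in boundaries)

def agreement_probability(marker_sd, boundaries, max_mark=80, trials=100_000):
    agree = 0
    for _ in range(trials):
        definitive = random.uniform(0, max_mark)      # senior examiner's total
        marker = random.gauss(definitive, marker_sd)  # ordinary marker's total
        agree += grade(definitive, boundaries) == grade(marker, boundaries)
    return agree / trials

# Hypothetical grade boundaries for grades 1-9 on an 80-mark paper.
boundaries = [10, 18, 26, 34, 42, 50, 58, 66, 74]

print(f"tight marking (SD 1 mark):  {agreement_probability(1, boundaries):.0%}")
print(f"loose marking (SD 5 marks): {agreement_probability(5, boundaries):.0%}")
```

In this toy setup, tight marking produces grade agreement around 90%, in the same territory as maths; widening the legitimate tolerance to a few marks per paper drags agreement down towards the sort of figures Ofqual reported for essay subjects.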
Over the last two weeks I have been reading and analysing Daisy Christodoulou's latest book, "Making Good Progress?" This is an area in which I have long been interested, as followers of my blog will know. You can find the chapter analyses below; however, I want to offer something by way of an overall summary of the book.

CHAPTER 1 - Why didn't AfL Transform Schools?
CHAPTER 2 - Curriculum Aims and Teaching Methods
CHAPTER 3 - Making Valid Inferences
CHAPTER 4 - Descriptor Based Assessment
CHAPTER 5 - Exam Based Assessment
CHAPTER 6 - Life After Levels
CHAPTER 7 - Improving Formative Assessment
CHAPTER 8 - Improving Summative Assessment
CHAPTER 9 - An Integrated Assessment System

In Conclusion...

I generally enjoyed reading "Making Good Progress?" and found the argument cogent and, on the surface, persuasive. I do, however, have some reservations about Christodoulou's complete rejection of carefully considered constructivism, as I detail in the chapter reviews. If nothing else, Christodoulou has provided a useful synthesis of many decades of thought on assessment, across many disciplines. Yet I also feel the book was something of a missed opportunity. If it is read widely, then this book may have a big impact (though I imagine it will circulate...).

I'm afraid today's blog, a bit like yesterday's, is a bit of a rant. More to the point, it is a response to yet another partially researched claim, namely that Grammar Schools (Schools for Everyone TM?) are more effective than their comprehensive counterparts. Indeed, the Telegraph ran an article a few days ago stating that this was a ringing endorsement for May's flagship education policy. Here I wanted to unpick some of the claims being made about Grammar Schools, and to ask that we take a moment to be cautious before endorsing a systemic change based on limited evidence. I am fairly sure I have my calculations right here; however, please let me know if you think I have made a mistake.
The Progress 8 Issue

The claim that Grammar Schools outperform state Comprehensives does have some basis in evidence. This can be seen in the latest GCSE statistics published by the DfE. The Telegraph explains that...
There's been a lot going around on Twitter recently about reducing the marking load of teachers. Much of this is to be applauded. I have seen some really nice ideas for dealing with feedback more effectively from Ben Newmark, Toby French, Tom Bennett, even the Michaela bods. However, I have a major worry: school marking policies won't actually change!
In the current educational climate, school approaches, and especially those relating to marking and feedback, are driven by a few key factors:
So here's the rub. If schools want to achieve the first aim, the following drivers are often counterproductive.*

* I am not going to discuss the reductive nature of the first educational goal, though that in itself plays a major part here too. Nor will I be dealing with the impact of a narrowly target-driven system which means that some schools are in the habit of changing their policies more frequently than I change my socks. Indeed, some schools I have worked in have been so malleable in their policy approaches to teaching that they have become almost invertebrate. In the course of five years in one school we shifted from a focus on Kagan groups and peer marking, to flipped classroom, to next-step marking, to triple marking, to digital marking, to purple pens of progress,** without ever stopping to think about the impact of any of these approaches.

** I could write a whole blog on the rise of the purple pen as a gateway pass to Deputy Head status, but I think I might leave that for another day.

Bad Advice and Poor Models

As people have been pointing out all week, good feedback does not mean detailed written marking on every child's work. Yet, if we look at some of the "Outstanding" schools and "Teaching Schools" which have been set up as beacons of excellence, we see such policies being advocated. This "Outstanding" Teaching School, for instance, says:
This school has not been formally inspected since 2007, so it seems somewhat remiss of the DfE to allow it to advise other schools to follow such policies (see http://www.harrogategrammar.co.uk/content/uploads/2015/04/Learning-21.01.15.pdf and http://www.harrogategrammar.co.uk/content/uploads/2014/02/Policy_AssessmentRecordingReporting23.01.13.pdf).

Another "Outstanding" school has a marking policy which demands extended written feedback in a rainbow of colours: http://www.rossettschool.co.uk/parents/policies/marking/ (last inspected in 2010).
These two examples are far from the only ones, nor are they the worst cases. Countless others come out of the woodwork in conversations with teachers up and down the country; sadly, not all schools put their marking policies online. The big worry is that these "Outstanding" schools (many of which have not been inspected in nearly a decade) shape the approaches taken by "Good", "RI" and "Inadequate" schools in significant ways as they strive to model the "excellent practice" of their "betters".
The Ofsted Factor

But the problem doesn't stop there. In every school I have been to, there has always been someone whose job it is to read Ofsted inspection reports and to pull out and apply the key approaches deemed necessary to attain the elusive "Outstanding" grade. Yesterday I suggested that Ofsted, through their reports, has been key in encouraging schools to implement poor marking practices. When I mentioned this, I was promptly slapped down by Ofsted's Sean Harford.

Just a short blog to thank all of you who attended my session at TFSI yesterday. As promised, I am uploading the session resources here for you to download. Please do have a browse around the rest of the site whilst you are here. There are lots of links to blogs on progress, progression, second-order concepts and substantive knowledge in the blog bar on the right.

Alex

Blog posts which may be of particular interest:
Creating flight paths to replace levels Year 7-11 - the impact of the new GCSE grade descriptors

7/17/2016

So with very little fanfare, the grade descriptors for the new GCSE grades 9-1 were released on Friday. Those schools who have opted to use these as a progression model, assessment system and general replacement for both NC Levels and GCSE grades must be delighted that they can finally start implementing their systems. In this blog I want to take a look at how these grade descriptors can be utilised from Years 7-11 to provide a clear and coherent flight path for pupils in history.

Using the grade descriptors to create a flight path

Many schools, for instance, have opted for the following basic flight path. As such, the grade descriptors will help to define what this path might look like for pupils in lessons. They could be used as part of the reporting process, discussed with parents, and form targets for written feedback...
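The flight path itself isn't reproduced above, but a minimal sketch of the idea, using invented grade bands purely for illustration, might look something like this:

```python
# A hypothetical flight path: each year group is mapped onto a band of
# the 9-1 grades. These bands are invented, not from any published scheme.
flight_path = {
    "Year 7": (1, 3),
    "Year 8": (2, 4),
    "Year 9": (3, 5),
    "Year 10": (4, 7),
    "Year 11": (5, 9),
}

def on_track(year_group: str, working_at_grade: int) -> bool:
    """Is a pupil's current working grade within the expected band?"""
    low, high = flight_path[year_group]
    return low <= working_at_grade <= high

print(on_track("Year 9", 4))  # True: within the hypothetical 3-5 band
print(on_track("Year 9", 2))  # False: below the expected band
```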
This is a primer post which goes with my upcoming blog on progression in history. You can find the main post HERE. You can also find my other primer post, in which I attempted to do a demolition job on the idea of linear progression models HERE.
Linear progression models are riddled with problems. In this post, I want to focus on one response to the confusions created by them: research-based progression models.

Let me be clear from the outset: research-based models of progression were not designed as a replacement for National Curriculum levels, but rather to address their myriad shortcomings and to help teachers really get to grips with what progression in history looks like. The best thing to do to understand research-based models of progression is to read Lee and Shemilt's seminal article in Teaching History 113, "A scaffold not a cage". Research-based models differ from linear models because they take students' work and understanding as a starting point to describe what improvement in history actually looks like. In the above article, Lee and Shemilt offer some descriptions of what pupils' thinking about historical evidence looks like. They then divide this into "stages" of development. The upper four stages of their model are given here.

This blog is a primer for my upcoming post on thinking about progression in history. You can find the main post HERE.
Linear progression models have dominated the way we talk about progression in history for many years now. Once upon a time, they were an odd perversion created by the misuse of National Curriculum levels, but now they are absolutely everywhere, infiltrating everything from A Level to GCSE. Linear progression models attempt to create a series of steps which pupils must climb in order to improve at a subject. In maths, for example, pupils might begin to understand the concept of angles and their relationships to straight lines by looking at how angles on a straight line add up to 180 degrees, then moving on to deal with angles on parallel lines, and so on. In maths this has the potential to work because there are clear steps for pupils to take to improve their knowledge of angles and lines.

In history the situation is quite different. Linear progression models, such as those used by many schools up until the abolition of the National Curriculum level descriptors, were generally based on the idea that students could improve through a focus on their second-order understanding, rather than knowledge; that is to say, the ways in which we understand history, for example through our understanding of significance, cause, or change. See the example below:

This is just a very short post. I am uploading (at the bottom of the post) an updated version of my thoughts on assessing historical thinking. This is more of an update to previous posts than anything radically new, and I am painfully aware that I need to incorporate something of Kate Hammond's latest research on substantive knowledge. However, it does expand upon some of the ideas I have put forward in the "Setting us free?" article in this month's Teaching History. As such, I thought it was worth putting up. All comments appreciated. For context read "Setting us free?" HERE. To be frank, read the whole issue - especially Hammond!

Mr F
So I have been reading "Make It Stick" by Brown, Roediger and McDaniel over the last few days, and I have to say that it has been quite enlightening. The book sets out to explore the cognitive science behind how we learn, setting out the processes involved in learning in an accessible way and making reference to a number of studies. The book does not claim complete knowledge of the subject and notes where more research is needed; however, I feel it has offered some very useful guidance on how I could move my students on. Most importantly, the book deals with a number of myths which are peddled in education especially, and debunks these once and for all (VAK, I am looking at you).
I was pointed in the direction of the book by Christine Counsell and Michael Fordham. Michael in particular has posted three excellent blogs on the specific application of the book to issues of planning in history. (Blog 1, Blog 2, Blog 3). However, I wanted to focus more specifically on helping students to grasp some of the key points from the book. As such, I attach a PowerPoint which could be looked at in one, two or three sessions. I would suggest it will work best with KS4 or 5 and, in the spirit of creating a sense of urgency, should be linked to the idea that mastering memory is key to exam success.
OK, so I am not even going to make the pretence of being brief here! The attached document (below) outlines my current thinking about how we might make assessment work in a whole-school context. As suggested by the title, this has been something of an epic struggle and I am pretty sure I haven't got it right; however, I hope that it might at least spark some discussion.
I would like to mention again, however, the excellent work being done by Michael Fordham at http://clioetcetera.wordpress.com/ on this issue, which has been crucial in forming many of my ideas. As ever, all thoughts and comments are appreciated!

So yesterday I delivered a session about progression in History in a post-levels world at the Leeds Learning Network History Conference. The most valuable part of the day for me, other than the excellent sessions, was the chance to speak to other heads of department about how their schools were approaching the issue of assessment and reporting in Key Stage 3. The major worry across the board was that we end up replacing the Levels system with something essentially identical. In the worst cases, schools seemed set on keeping Levels and not updating assessment and reporting arrangements at all. The main reason for this seems to come down to the age-old issue (or is that actually a recent issue?) of accountability.
This got me thinking. Is there any point in replacing our progression systems if we end up keeping a debunked system of assessment and reporting? Now, I completely accept that schools are (in our current culture) going to have to show evidence of pupil progress, as it forms a major part of the Ofsted framework. However, I think there may be some ways we can make something which both satisfies the need for data reporting and allows us to develop and use the meaningful models of progression which we have been crafting over the last few months. Once again, I would like to thank Helen Snelson at the Mount and Michael Fordham at Cambridge for their inspiration on these issues! What is crucial for me, as for many others, is that we don't let our progression revolution die a death at the hands of data systems wedded to an outmoded way of thinking.

So we have spent quite a long time over the last nine months considering how we might develop a system of progression and assessment suitable for a post-levels world. We have made many revisions along the way; however, I think we now have something which we are reasonably happy with. To give a brief outline, we:
The AndAllThat.co.uk teacher blog is moving from WordPress onto the main website. From now on you will find non-topic-related content here. You can still access the archives from the WordPress site by visiting http://andallthatweb.wordpress.com. I will endeavour to transfer the content over the next few months.

Mr F