So, the first tranche of Ofsted reports from the new framework has now been released. I thought this would be a good time to reflect on their content and consider what they reveal about the new, and much touted, curriculum focus. It should of course be noted that there are still only a handful of reports to look at, so these are very much initial reactions.
A quick review
Before I get into what we can glean from these reports, I think it is worth revisiting some of the hopes and fears I had about the new Education Inspection Framework when it was first announced. I have summarised these briefly below.
Hello everyone. This is an updated version of last year's post on GCSE history results. I have left the previous post intact but added the new results for 2019 in blue text for those interested, and commented where things have changed from 2018. The first version of this post was published on 24 August 2018.
Evening all, I thought I'd take a few minutes to outline five key things we found out about the new History GCSEs. OK, it's really four and a question, but hey ho!
1) Pupils this year did pretty much the same as last year overall
No changes here, and no surprise given the details below. I have updated the chart to show the 2019 results side by side.
This is not really a surprise as Ofqual demanded a statistical tie between 2017 and 2018. Therefore almost the same proportion of kids got a G/1 or greater as last year, and the same for C/4 and A/7.
There were some minor differences, but at a school cohort level of say 100 pupils, the difference would have been less than a single pupil missing out on a grade C/4.
Of course, this does not mean everyone’s results will have been stable. It is common with new specifications for some schools to do much better than normal and some to do much worse. This is usually because some schools manage to match what examiners were looking for more closely. It is almost impossible to second guess this precisely in the first year of a specification as examiners refine their expectations during first marking and grading discussions.
NO CHANGES HERE
Read the examiners’ reports closely and get some papers back if you were not happy with your overall results.
2) Your choice of board made almost no difference to overall grades (on average)
There is very little change here this year. The distribution of awards per board seems to be fairly static, and this reflects the fact that awards are still tied to pupils' prior attainment. From this we can therefore infer that centres doing OCR A tend to have cohorts with higher prior attainment, and that therefore a greater proportion of higher grades can be awarded.
Setting aside the caveat at the end of the last point: because the boards all had to adhere to this basic rule when awarding grades, the differences between boards are negligible. If you look at the detail you will see that some boards did deviate from the 2017 figures, but this is because they have to take prior attainment into account. So the fact that OCR A seems to have awarded more 4+ and 7+ grades would suggest that more high-attaining pupils took these exams. By contrast, OCR B probably awarded slightly fewer 4+ and 7+ grades due to a weaker cohort. This might imply that OCR centres chose their specification based on the ability range of their pupils (though this is pure speculation). AQA and Edexcel pretty much fit the Ofqual model, suggesting they had a broadly representative sample of pupils.
This is just a very quick post to provide links to my 2019 SHP Conference workshop: Making America Great to Teach Again.
You will find links to everything in the post here. The actual session can be found in the CPD section and the resources in the relevant linked sections.
Have fun and do let me know how you get on with it.
Yesterday I read an interesting blog by Rich McFahn, commenting on the problems he sees with Michael Young’s concept of ‘powerful knowledge’ in history. I have to say that I have been having similar musings and this led to a very interesting discussion on Twitter, which you can follow here. The following is a bit of a rambling muse about 'powerful knowledge' in history.
If you are new to the concept of ‘powerful knowledge’ here is a brief crash course (you might also like to read this). In Young and Lambert’s phrasing: “knowledge is ‘powerful’ if it predicts, if it explains, if it enables you to envisage alternatives” (Young and Lambert, 2014, p. 74). However, this is not the full picture. There are other criteria Young uses to define ‘powerful knowledge’:
Powerful knowledge and curriculum
Young and Lambert make the case in “Knowledge and the Future School” that the identification of ‘powerful knowledge’ is an important tool for considering curriculum construction. They argue that the concept of ‘powerful knowledge’ might help schools “reach a shared understanding about the knowledge they want their pupils to acquire” through the collective wisdom of the various disciplines (Young and Lambert, 2014, p. 69).
In my previous two blogs I looked at some of the serious problems which exist in the marking of subjects like History and English at GCSE and A Level, and at potential changes which could be made to improve the reliability of examinations. However, I also noted that such modifications might not resolve all of the problems identified.
In this final blog, I want to explore a more radical solution to the search for the "gold standard" of examinations: its abandonment. Indeed, to take a gold rush analogy, it was seldom the gold hunters who profited much from the great gold rushes in America. In fact the gold hunters gave way to huge corporate interests, and long-term destruction was the result (though the companies certainly did well). Instead it was those who supplied the tools, cooked the food, cleaned the cabins, and provided the clothes who really made the profits (most notably of course one Levi Strauss). In short, those people who recognised that the opportunities lie in the everyday, not the elusive. So what would this look like?
First, I want to suggest that we need to reconsider the purpose of summative assessment in schools. Up to now, examination has been seen only in terms of measuring the standardised “outcomes” (and thereby potential) of students and schools. However, I would suggest that well designed assessment should in fact be supporting the development of rich curricula, improving teachers’ engagement with their subjects, and promoting deep curricular engagement among students. This in turn would impact on students’ knowledge and understanding, and thereby implicitly their outcomes.
Second, and in order to achieve the above, I think the creation of assessments needs to be devolved to the level of schools, or groups of schools working together. This is not the same as saying all work should be coursework, just that the assessments should be designed and set in smaller, local groupings. In such a system, students' learning might not be so easily comparable nationally (though this clearly isn't working well in some subjects anyway), but the improved quality of teaching might well mean better outcomes in real terms, regardless of the grading systems used.
Why are such changes needed?
To understand the power a locally led examination system might have, one must first focus on the problems inherent in assessing a subject, like History or English, where there is no definitive agreement on content at a national level. I have outlined a selection of these below:
Note: my final blog offering suggestions for real examination reform is now live here.
In my previous blog I looked at the ways in which the marking of examinations in England is particularly problematic in subjects like History and English. For a full review of this, you may like to read Ofqual’s blog and report on the subject. Today I want to deal with the question of what action the educational establishment might take.
GCSE and A-Level examinations are still held up as vital to the education system. The impetus has been to seek to cement their place as the "gold standard" both nationally and internationally. If we want this to be true in reality, I wonder if we need a more fundamental rethink of what examinations (especially GCSE examinations) are for, and therefore what they might be in an educational landscape which is vastly different from that of the late 1980s when GCSEs were first introduced.
What are GCSE examinations for?
The seemingly simple question of the purpose of GCSE examinations is actually very complex indeed. But of course, as with all assessment, validity is heavily connected to the inferences one wants to draw from an exam (Wiliam, 2014). The range of inferences which are suggested as valid from a set of GCSE examinations are extremely diverse:
Note: there are two blogs which follow this one which offer some solutions to the problems outlined here.
A few days ago, Ofqual published an interesting blog looking at the state of the examinations system. This was based on an earlier report exploring the reliability of marking in reformed qualifications. Tucked away at the end of this blog was the startling claim that in History and English, the probability of markers agreeing with their principal examiner on a final grade was only just over 55%.
The research conducted by Ofqual investigated the accuracy of marking by looking at over 16 million "marking instances" at GCSE, AS and A Level. The researchers looked at the extent to which markers' marks deviated from seeded examples assessed by principal and senior examiners. The mark given by the senior examiner on an item was termed the "definitive mark." The accuracy of the other markers was established by comparing their marks to this "definitive mark." For instance, it was found that the probability that markers in maths would agree with the "definitive mark" of the senior examiners was around 94% on average. Pretty good. They also went on to calculate the extent to which markers were likely to agree with the "definitive grade" awarded by the principal examiners (by calculation) based on a full question set. Again, this was discussed in terms of the probability of agreement. This was also high for Maths. However, as noted, in History and English, the levels of agreement on grades fell below 60%.
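To make the idea of an "agreement probability" concrete, here is a minimal sketch of the comparison described above. The data and function name are invented for illustration; Ofqual's actual methodology is considerably more sophisticated (tolerance bands, grade-level aggregation across whole question sets, and so on).

```python
# Hypothetical sketch: compare each marker's mark on a seeded item with
# the senior examiner's "definitive mark" and report the proportion of
# marking instances where the two agree exactly.

def agreement_probability(marking_instances):
    """marking_instances: list of (marker_mark, definitive_mark) tuples."""
    if not marking_instances:
        return 0.0
    agreed = sum(1 for marker, definitive in marking_instances
                 if marker == definitive)
    return agreed / len(marking_instances)

# Invented example: three of the four marks match the definitive mark.
sample = [(18, 18), (17, 18), (20, 20), (5, 5)]
print(agreement_probability(sample))  # → 0.75
```

On this simple view, the maths figure of roughly 94% means markers matched the definitive judgement on about 94 instances in every 100, while in History and English a marker would match the definitive grade little more than half the time.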
When Michael Gove set about his reforms of the exam system in 2011, there was a drive to make both GCSE and A Level comparable with "the world's most rigorous". Much was made of the processes for making the system of GCSE and A Level examination more demanding, to inspire more confidence from the business and university sectors which seemed to have lost faith in them. Out went coursework and in came longer and more content-heavy exams. There was a sense of returning GCSE and A Level examinations to their status as the "gold standard" of assessment. The research conducted by Ofqual seems to suggest that examinations are a long way from such a standard. Indeed, it raises the question of whether national examinations have ever really been the gold standard of assessment they are purported to be. Have we been living in a gilded age of national examinations? The answer is complex.
Before I launch into this, I should also note that I understand the process of examining is a difficult one and that I have no doubt those involved in the examinations system have the best interests of students at heart. I also don’t want to undermine the efforts of those students who have worked hard for such exams. That said, there were some fairly significant findings in the Ofqual research which need further thought.
"Who shot JFK?" and other historical problems. Part 3: role playing power imbalances through slave auctions
In my previous two blogs I looked at the problems with teaching the assassination of JFK as a murder mystery, and with imagination type activities in learning about the Holocaust. Today I want to explore one of the most controversial lessons I have witnessed.
The “slave auction”
Reading the title of this, I hope most people would be baulking already. However, in the last five years, I have heard of this kind of lesson being used in multiple history departments and the image above is not invented but actually came from a grammar school in the South East. Just as with the Holocaust example I gave last time, this type of activity can end up being done in multiple topic areas, but effectively involves role-playing an extreme power imbalance.
The reasons departments persist with “lessons” like this one are usually vaguely couched in terms of empathy, and the need to clarify complex concepts like chattel slavery. However, more often than not they are promoted for their “interactive”, or “engaging” elements. Indeed, one non-historian described seeing such a lesson to me once as being “a good, fun way to get across a difficult idea.”
My thanks to Sally Thorne @MrsThorne for reading this blog and contributing her expertise to refining the very rough-round-the-edges original. For more of Sally's thoughts on teaching excellent history, do read her book: "Becoming an Outstanding History Teacher".
Today I want to briefly cover the issue of differentiation. In part this is responding to an anonymous blog HERE which suggests that differentiation is a well-intentioned but morally bankrupt educational approach.
“Differentiation was a mistake, it sounded great and we meant well but there are fundamental reasons why it always fails in comparison to whole-class teaching. We are teachers: we are here for our students and our subjects and we’re prepared to change our minds if it means better outcomes for all."
Now there are many things I actually agree and sympathise with in this blogpost, especially the problems of "personalised learning" which became so prevalent in the early 2000s, and the demands for teachers to make tasks easier for pupils to access, etc. However, I think the blog itself is based on a major logical fallacy: "because differentiation has been done badly, all differentiation must be bad." It is for this reason I simply cannot agree with the conclusions the author reaches.
Today the DfE have released their much vaunted teacher recruitment and retention strategy. The document covers four main areas for improvement and was compiled in consultation with some key partners including ASCL, the EEF, the CCoT, Ofsted, and the NAHT. I have to say it is welcome to see this kind of discussion happening, though I do think some quite partisan lines remain in the strategy.
Last January I published some key steps I thought the DfE might take to improve teacher recruitment and retention. Today I want to go back to these suggestions and consider them in light of the DfE's new strategy. First a quick reminder:
A marked improvement?
Let’s start with the positives. It is certainly evident that the DfE has gone well beyond the measures we have seen in the last few years when considering their recommendations. There are welcome
Last week I published a number of blogs exploring the proposed Ofsted Framework for 2019, as well as some of the individual elements of that framework. Today I want to explore the proposals from the point of view of an ITE provider, rather than that of a school.
As you may have noted, I was reasonably upbeat about the revisions and opportunities for schools in the new framework, although I am aware that there may be a lot of work for people to do to feel confident in meeting the criteria of the "Quality of Education" element. When it comes to ITE, however, I am less encouraged. I have written about some of the struggles in ITE in the past (HERE and HERE). I know that Ofsted was never going to do much to reverse the tide of generic teacher training. However, there was one major area where I hoped a new framework might be of some use: improving the status of mentoring.
You might have missed it in the flurry of dismay/excitement (delete as you see fit) around the government’s historic defeat on the Brexit plan yesterday, but today saw the publication of Ofsted’s new draft framework for school inspections. The cynical might argue this was a deliberate ploy, however a detailed read of the proposals does leave a lot of room to be optimistic.
Before I begin, I should also note that Leeds Trinity University will be launching a new network event called “Building a Powerful Curriculum” which aims to give curriculum leads input, time, and space to discuss the implications of potential framework changes for schools in Yorkshire. I have also created an in-depth analysis of the framework draft for anyone interested.
In some ways the new inspection framework seems to have just two main goals: to place the focus of school provision firmly on the base of curriculum design and provision; and to reduce the role of Ofsted in mandating curricular and pedagogical approaches from the centre.
Below is a brief summary of key similarities and differences between the proposed framework and the existing one. The rest of the post takes a more detailed look at some of the differences with potential implications for schools.
Over the next few days I will be publishing some in-depth analyses of some of the key differences, along with the potential implications for schools. Please note that I will be doing this within the bounds of my own expertise, which ranges from upper Key Stage 2 to university. I appreciate that the framework may have very different implications for schools teaching Foundation Stage, Key Stage 1 and lower Key Stage 2.
I should also note that many of the changes are actually connected to issues of curriculum. Because curriculum is a core point of discussion amongst subject specialists already, the answers to many of the challenges of the new framework are already out there and accessible. The best thing senior leaders might do over the next few months is establish where their in-school, subject expertise already lies and draw on existing research and knowledge from subject associations, experts, and subject-led university education departments.
In my previous blog I discussed the potentially ahistorical nature of studies of the assassination of John F Kennedy. In today's blog I am entering into more controversial territory and looking at some activity choices in relation to the teaching of the Holocaust.
The letter from Auschwitz
I have seen this type of lesson in all kinds of guises, but this is probably the version I have most issue with. The task itself is fairly straightforward: having learned about the Holocaust and concentration camps, students are asked to imagine they are in a concentration camp and to write about their experiences in some kind of letter or diary.
I completely understand where lessons like this come from. The letter/diary device is straightforward for students to access (they may even be familiar with Anne Frank's diary), and on the surface it appears to give them a chance to really empathise with people in the past. In reality, however, I fear it undermines this latter aim, and raises a host of other issues. For more on this, you may like to read Totten's "Holocaust Education: Issues and Approaches", especially chapter seven.
First, because of the placement of tasks such as these, they often end up being a stand-in for a factual recall, rather than a real...
Before Christmas, Ben Newmark posted a blog in which he outlined a range of things which he had found unsuccessful in teaching. This included things such as card sorts (#2), role play (#3), flipped learning (#17), group work (#20) and fifty others. At the time I replied to say that I felt some of the focus on methods was problematic, as things which don't work in one place may well work in another.
However, I could not quite leave the idea alone. Since Ben published the post, I have had a range of discussions with colleagues where we considered whether we actually had our own red lines in terms of history teaching. It turns out that we do. The main difference, I think, is that they are mostly linked not so much to methods, but to whether or not core educational values, and the nature of history as a discipline, are being appropriately, and indeed rigorously, served (I went into this to some extent in my post HERE). In the end we established five or six big problems we had come across in history lessons (not including broader issues of assessment). Of those we agreed pretty much unanimously on two, and partially on a third. I aim to outline these in three separate blogs:
Now I am very keen not to make this too negative, so before I begin I’d like to highlight two key points:
Hello everyone and thanks if you have filled in my History GCSE 2018 survey. If you are interested in insights from this kind of survey, it would be great if you could also take a moment to fill in the Historical Association Annual Survey. This helps the HA act as a voice for the profession more effectively. You can fill it in HERE.
In my last blog I used overview data to compare the GCSE results this year to last year, as well as looking at the grade boundaries for different boards. The survey data collected allows me to look more specifically at the results of departments. So for example, big data suggested that the proportion of grades was essentially similar to last year, but it was impossible to tell if this was because departments did the same, or because some did brilliantly and others appallingly.
In this blog I am going to try to pull out some answers to key questions I am hearing a lot on Twitter and Facebook. Please note that this data is only a sample of 363 schools, all of whom engage actively with social media, so the data is mostly suggesting things rather than proving anything. However, I did manage to get a broadly representative sample in terms of the distribution of data around the 65% 9-4 average. For more on the data set please see the end of the blog. If you'd like a fully anonymised copy for your very own, please do let me know.
As each post is quite long I have indexed the questions here:
1. Were results more or less stable in 2018 compared to 2017? - Read on below
2. Did teaching a 2 or 3 year curriculum have the biggest impact on 2018 results?
3. What impact did exam question practice have on results?
4. Was the A*-A / 9-7 rate affected?
5. What impact did non-specialist teaching have on grade stability?
In lots of ways I am glad I cannot draw many clear conclusions from this data. The slight differences between results from 2017 to 2018 suggest that departments have not been unduly hit by these changes and there is a broad level of fairness in how they have occurred. While some departments clearly will not be pleased with their results (and others in turn delighted), this is not unusual in the picture of GCSE. The only slight impact here may be that departments which have been historically stable are now in a category of less stable schools. However, to assess this, I would need data going back further, and probably in more depth.
What I do now have is a large bank of data on the 2018 exams, so if anyone has a question they’d like me to investigate using the dataset I have, please do get in touch.
NOTE: For almost all of these explorations I have compared results in terms of stability. So, for example, 2018 stability is calculated by comparing a school's results in 2018 with their results in 2017. If the results are in the same bracket, then I would consider them broadly stable. If their results this year were a bracket or more above last year's, e.g. last year they got 50-59% A*-C and this year 60-69% 9-4, then they are considered as doing better, and the reverse if they are lower this year.
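The bracket comparison above can be sketched in a few lines. This is a hedged illustration only: the 10-point bracket widths and the labels are my reading of the note, not a published methodology, and the function names are invented.

```python
# Sketch of the bracket-based stability comparison: group each year's
# headline percentage (A*-C in 2017, 9-4 in 2018) into 10-point-wide
# brackets (50-59%, 60-69%, ...) and compare the two bracket indices.

def bracket(pct):
    """Place a headline percentage into a 10-point bracket index (55 -> 5)."""
    return int(pct) // 10

def stability(pct_2017, pct_2018):
    """Classify a school's year-on-year movement by bracket."""
    diff = bracket(pct_2018) - bracket(pct_2017)
    if diff == 0:
        return "broadly stable"
    return "better" if diff > 0 else "worse"

# Invented example: a school moving from 55% A*-C to 64% 9-4 crosses
# from the 50-59% bracket into the 60-69% bracket.
print(stability(55, 64))  # → better
```

One consequence of this design, worth keeping in mind when reading the results, is that a school moving from 59% to 60% counts as "better" while one moving from 50% to 59% counts as "broadly stable".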
1. Were results more or less stable in 2018 compared to 2017?
Hi all. Apologies for the delay. I have finally got around to uploading my resources from this year's SHP Conference.
This year I decided to have a long look at curriculum and look at ways we can build powerful curricula as history teachers. I split the presentation into three elements:
* A very brief overview of curriculum theory
* Some considerations about a disciplinary curriculum
* Examples of a disciplinary curriculum in action
Access files - SHP 2018: Sowing New Seeds
In the folder below you will find the resources shown in the slideshow at the top of the page.
If you would like access to previous SHP sessions I have run, you will find them here
Teacher recruitment is in something of a crisis. For well over three years now, teacher recruitment and retention has been in the headlines, and never more so than this year. The Commons Public Accounts Committee has noted that there is a "growing sense of crisis", whilst policies to address shortages have been "sluggish and incoherent."
The figures around recruitment are pretty stark. In the last six years, the number of primary age students increased by over 14%, and the numbers in secondary are predicted to rise by 19% by 2025. The crisis is neatly illustrated by the following statistics:
So, it’s that time of the year once again. Time for Christmas decorations and carol concerts. Time for the RE department to show Christmas Simpsons episodes every lesson, claiming some tenuous curricular link. Time for kids to bounce into your classrooms demanding quizzes (or low-stakes cultural capital tests if you prefer) and Haribo with menaces. And of course, it is time for the annual Christmas blog.
In my previous Christmas blogs (HERE) and (HERE) I tried to put together a few thoughts (or tortured metaphors) to help us all reflect on the year that was. However, I got the sneaking suspicion that many people read these Christmas blogs with little thought for their long term curricular impact, or how this knowledge might be retained for further use. (And did you SLANT whilst reading and follow them with your ruler? Thought not!)
This year therefore, I have decided to summarise the year in the form of a knowledge organiser. This means you can test yourself on the professional cultural capital of 2017, until you know it by heart. This will also allow you to more effectively address the end of year essay question. Don’t forget, that in order to attain the full 18 marks, you will need to cover all sides of the argument, whether it makes sense or not, and refer to the provenance of the author (who is a raging prog, or despicable trad, depending who you ask). If you are lucky, I might even make you some structure strips to help. Essays will be due on 26th December. No excuses.
Note: A copy of the knowledge organiser can be purchased directly in pdf format for £2.99 via the author's website: knowledgeisall.com , or will probably be ripped off on TES resources next week for £23.99.
Pointless or Powerful? Should we bother creating teaching and learning policies in secondary, and what should they do?
This is just a short blog on the back of discussions I had yesterday evening about Teaching and Learning policies in secondary schools. I posted out some extracts from T&L policies from a couple of different schools, which (in my view): manage to place students at the heart of the teaching experience; set out broad principles for why teaching is important; promote wider aspects of good teaching; and are not overly prescriptive in terms of pedagogy.
The reaction to this post was very interesting. On the one hand, the examples got a handful of "likes" but there was quite a lot of criticism too: