A three-part series looking at the power of subject communities to enact change in education. This builds on the important notion of principled communities of practice as embodied by groups such as the Schools History Project www.schoolshistoryproject.co.uk
The first part looks at the power of history-specific subject communities in empowering teachers to build better curricula for their students.
The second part of the series explores the problems of communities.
The final part focuses more practically on how we might engage meaningfully with subject communities.
Every so often a crisis appears in education which causes us to stop and think. The A-Level crisis of August 2020 needs to be one of those moments. Although it has been portrayed as the catastrophic result of changes brought in haste due to Covid-19, the systems which underpin the current crisis have been in place for decades. The examinations system is the sick man of education. What we have been witnessing over the last week is the tragic outcome of a diseased system, the underlying issues of which have festered away unchecked and untreated for far too long. It’s time to look for a cure. Let me explain...
First, a very brief overview of the specific crisis this summer. During the coronavirus lockdown, formal examinations of pupils were cancelled by the DfE. A decision was taken to ensure students were still graded despite not sitting exams (we could discuss the problems in this too, but there is no space here). The statement from Gavin Williamson (below) really should have raised more questions and scrutiny at the time. The notions that grades for 2020 would be indistinguishable from those of other years despite students not sitting exams, and that “grading” in the usual way was the best outcome for students, were assumptions which should have been more robustly challenged. However, too many were unwilling to think through the potential consequences, or were blinded by their faith in what they believed to be a robust and functioning examinations system which achieved fairness in normal years.
This blog is trying to capture something I have been wrestling with for a while now. Should we be proud of our history? And no, I don’t mean our national story! Growing up with a Welsh father punctured any notion I might have developed that the history taught in schools was in any way a “national” or representative story of Britain. I can remember him quizzing me weekly on what Welsh history we had studied. The answer, always, was “none!”. What I want to talk about today is a different kind of history: the history of our profession.
I have turned the remainder of my blog here into a short video lecture series which you can access here: PLAYLIST
Being proud of “the community”
Over the years I have been teaching history (and latterly history teachers), I have developed something of a sense of pride in the way in which history, as a school subject, has engaged with complex issues in curriculum and pedagogy. I have even taken to referring to “the history community” in an almost reverential way. I am sure I am not alone. If you look at the discussions which happen, especially on Twitter, you will often see people expressing pride in “the history community” and its various achievements. Often the narrative we tell about “the community” is framed as a story of social justice in which pupils are liberated through carefully curated content and powerful pedagogical knowledge.
I originally posted a version of this blog back in January 2017. A number of debates about teaching methods - especially the value or otherwise of PowerPoint - have led me to revisit and update it. You can find the original post here.
This blog is a reflection upon a conversation with Fred, a friend we got to know through church when we lived in Idle. Fred is well into his eighties and his health is beginning to fail him. However he still has a mischievous twinkle in his eye and absolutely loves to have company. Thankfully he is seldom short of it. People from all over seem to come and visit Fred and he can talk on a stunningly wide range of subjects. This is probably because, in his time, he has worked as a craftsman, a joiner, a coffin maker, a press photographer and a middle school teacher.
Fred lives on the edge of Bradford in a house which looks like it has never left the 1960s. His hallway overflows with fishing paraphernalia, whilst his attic houses a fully functioning dark room: bottles, trays and developing tanks, all meticulously organised into a careful workflow.
The real joy however is Fred's garden. He told us once that his mother helped him fall in love with gardening at the age of five. His garden therefore is a testament to eighty years of gardening knowledge and understanding: a labyrinth of potatoes and carrots, towering sweetcorn, gnarled apple and pear trees, raspberry canes, ripe tomatoes, and juicy strawberries. My daughter loves exploring there in the summer, and so do I.
Just before our daughter was born, back in 2016, my wife and I were invited to join Fred for lunch. At this point, my wife already knew Fred quite well, however this was the first time I had really spent time getting to know him.
Before she became a vicar, my wife had been a primary school teacher, and of course I was in my first year of running a PGCE course. Our discussion that day therefore turned quite quickly to teaching. And so it was that we ended up talking about Fred's "remedial class" over a mid-week discount carvery at the local pub. This is a story about that meal.
So, the first tranche of Ofsted reports from the new framework have now been released. I thought this would be a good time to reflect on their content and consider what they reveal about the new, and much touted, curriculum focus. It should be noted of course that there are still only a handful of reports to look at, so these are very much initial reactions.
A quick review
Before I get into what we can glean from these reports, I think it is worth revisiting some of the hopes and fears I had about the new Education Inspection Framework when it was first announced. I have summarised these briefly below.
Hello everyone. This is an updated version of last year's post on GCSE history results. I have left the previous post intact but added the new results for 2019 in blue text for those interested and commented where things have changed from 2018. The first version of this post was published on 24 August 2018.
Evening all, I thought I'd take a few minutes to outline five key things we found out about the new History GCSEs. OK, it's really four and a question, but hey ho!
1) Pupils this year did pretty much the same as last year overall
No changes here, and no surprise given the details below. I have updated the chart to show the 2019 results side by side.
This is not really a surprise as Ofqual demanded a statistical tie between 2017 and 2018. Therefore almost the same proportion of kids got a G/1 or greater as last year, and the same for C/4 and A/7.
There were some minor differences, but at a school cohort level of say 100 pupils, the difference would have been less than a single pupil missing out on a grade C/4.
Of course, this does not mean everyone’s results will have been stable. It is common with new specifications for some schools to do much better than normal and some to do much worse. This is usually because some schools manage to match what examiners were looking for more closely. It is almost impossible to second guess this precisely in the first year of a specification as examiners refine their expectations during first marking and grading discussions.
NO CHANGES HERE
Read the examiners’ reports closely and get some papers back if you were not happy with your overall results.
2) Your choice of board made almost no difference to overall grades (on average)
There is very little change here this year. The distribution of awards per board seems to be fairly static and this reflects the fact that awards are still tied to pupil prior attainment. From this we can therefore infer that centres doing OCR A tend to have cohorts with higher prior attainment and that therefore a greater proportion of higher grades can be awarded.
Discounting the statement at the end of the last point: because the boards all had to adhere to this basic rule when awarding grades, the differences between boards are effectively non-existent. If you look at the detail you will see that some boards did deviate from the 2017 figures, however this is because they have to take prior attainment into account. So, the fact that OCR A seems to have awarded more 4+ and 7+ grades would suggest that more high-attaining pupils took these exams. By contrast, OCR B probably awarded slightly fewer 4+ and 7+ grades due to a weaker cohort. This might imply that OCR centres chose their specification based on the ability range of their pupils (though this is pure speculation). AQA and Edexcel pretty much fit the Ofqual model, suggesting they had a broadly representative sample of pupils.
This is just a very quick post to provide links to my 2019 SHP Conference workshop: Making America Great to Teach Again.
You will find links to everything in the post here. The actual session can be found in the CPD section and the resources in the relevant linked sections.
Have fun and do let me know how you get on with it.
Yesterday I read an interesting blog by Rich McFahn, commenting on the problems he sees with Michael Young’s concept of ‘powerful knowledge’ in history. I have to say that I have been having similar musings and this led to a very interesting discussion on Twitter, which you can follow here. The following is a bit of a rambling muse about 'powerful knowledge' in history.
If you are new to the concept of ‘powerful knowledge’ here is a brief crash course (you might also like to read this). In Young and Lambert’s phrasing: “knowledge is ‘powerful’ if it predicts, if it explains, if it enables you to envisage alternatives” (Young and Lambert, 2014, p. 74). However, this is not the full picture. There are other criteria Young uses to define ‘powerful knowledge’:
Powerful knowledge and curriculum
Young and Lambert make the case in “Knowledge and the Future School” that the identification of ‘powerful knowledge’ is an important tool for considering curriculum construction. They argue that the concept of ‘powerful knowledge’ might help schools “reach a shared understanding about the knowledge they want their pupils to acquire” through the collective wisdom of the various disciplines (Young and Lambert, 2014, p. 69).
In my previous two blogs I looked at some of the serious problems which exist in the marking of subjects like History and English at GCSE and A Level, and at potential changes which could be made to improve the reliability of examinations. However, I also noted that such modifications might not resolve all of the problems identified.
In this final blog, I want to explore a more radical solution to the search for the “gold standard” of examinations: its abandonment. Indeed, to take a gold rush analogy, it was seldom the gold hunters who profited much from the great gold rushes in America. In fact the gold hunters gave way to huge corporate interests, and long-term destruction was the result (though the companies certainly did well). Instead it was those who supplied the tools, cooked the food, cleaned the cabins, and provided the clothes who really made the profits (most notably, of course, one Levi Strauss). In short, those people who recognised that the opportunities lie in the everyday, not the elusive. So what would this look like?
First, I want to suggest that we need to reconsider the purpose of summative assessment in schools. Up to now, examination has been seen only in terms of measuring the standardised “outcomes” (and thereby potential) of students and schools. However, I would suggest that well designed assessment should in fact be supporting the development of rich curricula, improving teachers’ engagement with their subjects, and promoting deep curricular engagement among students. This in turn would impact on students’ knowledge and understanding, and thereby implicitly their outcomes.
Second, and in order to achieve the above, I think the creation of assessments needs to be devolved to the level of schools, or groups of schools working together. This is not the same as saying all work should be coursework, just that the assessments should be designed and set in smaller, local groupings. In such a system, students' learning might not be so easily comparable nationally (though this clearly isn’t working well in some subjects anyway), but the improved quality of teaching might well mean better outcomes in real terms, regardless of the grading systems used.
Why are such changes needed?
To understand the power a locally led examination system might have, one must first focus on the problems inherent in assessing a subject, like History or English, where there is no definitive agreement on content at a national level. I have outlined a selection of these below:
Note: my final blog offering suggestions for real examination reform is now live here.
In my previous blog I looked at the ways in which the marking of examinations in England is particularly problematic in subjects like History and English. For a full review of this, you may like to read Ofqual’s blog and report on the subject. Today I want to deal with the question of what action the educational establishment might take.
GCSE and A-Level examinations are still being held up as vital to the education system. The impetus has been to seek to cement their place as the “gold standard” both nationally and internationally. If we want this to be true in reality, I wonder if we need a more fundamental rethink of what examinations (especially GCSE examinations) are for and therefore what they might be in an educational landscape which is vastly different from that of the late 1980s when GCSEs were first introduced.
What are GCSE examinations for?
The seemingly simple question of the purpose of GCSE examinations is actually very complex indeed. But of course, as with all assessment, validity is heavily connected to the inferences one wants to draw from an exam (Wiliam, 2014). The range of inferences which are suggested as valid from a set of GCSE examinations are extremely diverse:
Note: there are two blogs which follow this one which offer some solutions to the problems outlined here.
A few days ago, Ofqual published an interesting blog looking at the state of the examinations system. This was based on an earlier report exploring the reliability of marking in reformed qualifications. Tucked away at the end of this blog was the startling claim that in History and English, the probability of markers agreeing with their principal examiner on a final grade was only just over 55%.
The research conducted by Ofqual investigated the accuracy of marking by looking at over 16 million "marking instances" at GCSE, AS and A Level. The researchers looked at the extent to which markers’ marks deviated from seeded examples assessed by principal and senior examiners. The mark given by the senior examiner on an item was termed the “definitive mark.” The accuracy of the other markers was established by comparing their marks to this “definitive mark.” For instance, it was found that the probability that markers in maths would agree with the “definitive mark” of the senior examiners was around 94% on average. Pretty good. They also went on to calculate the extent to which markers were likely to agree with the “definitive grade” calculated from the principal examiners’ marks across a full question set. Again, this was discussed in terms of the probability of agreement. This was also high for Maths. However, as noted, in History and English, the levels of agreement on grades fell below 60%.
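As a rough illustration of the kind of agreement measure described above, the calculation can be sketched in a few lines of Python. The marks, the seeded item, and the tolerance parameter here are all invented for illustration; Ofqual's actual methodology is considerably more involved.

```python
def agreement_rate(marker_marks, definitive_mark, tolerance=0):
    """Fraction of marking instances falling within `tolerance` marks
    of the senior examiner's definitive mark on a seeded item."""
    hits = sum(abs(mark - definitive_mark) <= tolerance for mark in marker_marks)
    return hits / len(marker_marks)

# Hypothetical seeded item: definitive mark of 12, five markers' marks
marks = [12, 11, 12, 14, 12]
print(agreement_rate(marks, 12))     # exact agreement: 3 of 5 markers -> 0.6
print(agreement_rate(marks, 12, 1))  # within one mark: 4 of 5 markers -> 0.8
```

The tolerance parameter matters: a subject like maths, with tightly defined mark schemes, will score highly even on exact agreement, while essay-based subjects only approach high agreement once a band of acceptable marks is allowed.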
When Michael Gove set about his reforms of the exam system in 2011, there was a drive to make both GCSE and A Level comparable with the “the world’s most rigorous”. Much was made of the processes for making the system of GCSE and A Level examination more demanding to inspire more confidence from the business and university sectors which seemed to have lost faith in them. Out went coursework and in came longer and more content-heavy exams. There was a sense of returning GCSE and A Level examinations to their status as the "gold standard" of assessment. The research conducted by Ofqual seems to suggest that examinations are a long way from such a standard. Indeed, it raises the question of whether national examinations have ever really been the gold standard of assessment they have been purported to be. Have we been living in a gilded age of national examinations? The answer is complex.
Before I launch into this, I should also note that I understand the process of examining is a difficult one and that I have no doubt those involved in the examinations system have the best interests of students at heart. I also don’t want to undermine the efforts of those students who have worked hard for such exams. That said, there were some fairly significant findings in the Ofqual research which need further thought.
"Who shot JFK?" and other historical problems. Part 3: role playing power imbalances through slave auctions
In my previous two blogs I looked at the problems with teaching the assassination of JFK as a murder mystery, and with imagination type activities in learning about the Holocaust. Today I want to explore one of the most controversial lessons I have witnessed.
The “slave auction”
Reading the title of this, I hope most people would be baulking already. However, in the last five years, I have heard of this kind of lesson being used in multiple history departments and the image above is not invented but actually came from a grammar school in the South East. Just as with the Holocaust example I gave last time, this type of activity can end up being done in multiple topic areas, but effectively involves role-playing an extreme power imbalance.
The reasons departments persist with “lessons” like this one are usually vaguely couched in terms of empathy, and the need to clarify complex concepts like chattel slavery. However, more often than not they are promoted for their “interactive”, or “engaging” elements. Indeed, one non-historian described seeing such a lesson to me once as being “a good, fun way to get across a difficult idea.”
My thanks to Sally Thorne @MrsThorne for reading this blog and contributing her expertise to refining the very rough-round-the-edges original. For more of Sally's thoughts on teaching excellent history, do read her book: "Becoming an Outstanding History Teacher"
Today I want to briefly cover the issue of differentiation. In part this is responding to an anonymous blog HERE which suggests that differentiation is a well-intentioned but morally bankrupt educational approach.
“Differentiation was a mistake, it sounded great and we meant well but there are fundamental reasons why it always fails in comparison to whole-class teaching. We are teachers: we are here for our students and our subjects and we’re prepared to change our minds if it means better outcomes for all."
Now there are many things I actually agree and sympathise with in this blogpost, especially the problems of “personalised learning” which became so prevalent in the early 2000s, and the demands for teachers to make tasks easier for pupils to access etc. However, I think the blog itself is based on a major logical fallacy: “because differentiation has been done badly, all differentiation must be bad.” It is for this reason I simply cannot agree with the conclusions the author reaches.
Today the DfE have released their much vaunted teacher recruitment and retention strategy. The document covers four main areas for improvement and was compiled in consultation with some key partners including ASCL, the EEF, the CCoT, Ofsted, and the NAHT. I have to say it is welcome to see this kind of discussion happening, though I do think some quite partisan lines remain in the strategy.
Last January I published some key steps I thought the DfE might take to improve teacher recruitment and retention. Today I want to go back to these suggestions and consider them in light of the DfE's new strategy. First a quick reminder:
A marked improvement?
Let’s start with the positives. It is certainly evident that the DfE has gone well beyond the measures we have seen in the last few years when considering their recommendations. There are welcome
Last week I published a number of blogs exploring the proposed Ofsted Framework for 2019, as well as some of the individual elements of that framework. Today I want to explore the proposals from the point of view of an ITE provider, rather than that of a school.
As you may have noted, I was reasonably upbeat about the revisions and opportunities for schools in the new framework, although I am aware that there may be a lot of work for people to do to feel confident in meeting the criteria of the “Quality of Education” element. When it comes to ITE however, I am less encouraged. I have written about some of the struggles in ITE in the past (HERE and HERE). I know that Ofsted was never going to do much to reverse the tide of generic teacher training. However, there was one major area where I hoped a new framework might be of some use: improving the status of mentoring.
You might have missed it in the flurry of dismay/excitement (delete as you see fit) around the government’s historic defeat on the Brexit plan yesterday, but today saw the publication of Ofsted’s new draft framework for school inspections. The cynical might argue this was a deliberate ploy, however a detailed read of the proposals does leave a lot of room to be optimistic.
Before I begin, I should also note that Leeds Trinity University will be launching a new network event called “Building a Powerful Curriculum” which aims to give curriculum leads input, time, and space to discuss the implications of potential framework changes for schools in Yorkshire. I have also created an in-depth analysis of the framework draft for anyone interested.
In some ways the new inspection framework seems to have just two main goals: to place the focus of school provision firmly on the base of curriculum design and provision; and to reduce the role of Ofsted in mandating curricular and pedagogical approaches from the centre.
Below is a brief summary of key similarities and differences between the proposed framework and the existing one. The rest of the post takes a more detailed look at some of the differences with potential implications for schools.
Over the next few days I will be publishing some in-depth analyses of some of the key differences, along with the potential implications for schools. Please note that I will be doing this within the bounds of my own expertise, which ranges from upper Key Stage 2 to university. I appreciate that the framework may have very different implications for schools teaching Foundation Stage, Key Stage 1 and lower Key Stage 2.
I should also note that many of the changes are actually connected to issues of curriculum. Because curriculum is a core point of discussion amongst subject specialists already, the answers to many of the challenges of the new framework are already out there and accessible. The best thing senior leaders might do over the next few months is establish where their in-school, subject expertise already lies and draw on existing research and knowledge from subject associations, experts, and subject-led university education departments.
In my previous blog I discussed the potentially ahistorical nature of studies of the assassination of John F Kennedy. In today’s blog I am entering into more controversial territory and looking at some activity choices with relation to the teaching of the Holocaust.
The letter from Auschwitz
I have seen this type of lesson in all kinds of guises, but this is probably the version I have most issue with. The task itself is fairly straightforward: having learned about the Holocaust and concentration camps, students are asked to imagine they are in a concentration camp and to write about their experiences in some kind of letter or diary.
I completely understand where lessons like this come from. The letter/diary device is straightforward for students to access (they may even be familiar with Anne Frank’s diary), and on the surface it appears to give them a chance to really empathise with people in the past. In reality however, I fear it undermines this latter aim, and raises a host of other issues. For more on this, you may like to read Totten’s “Holocaust Education: Issues and Approaches”, especially chapter seven.
First, because of the placement of tasks such as these, they often end up being a stand-in for a factual recall, rather than a real...
Before Christmas, Ben Newmark posted a blog in which he outlined a range of things which he had found unsuccessful in teaching. This included things such as card sorts (#2), role play (#3), flipped learning (#17), group work (#20) and fifty others. At the time I replied to say that I felt some of the focus on methods was problematic, as things which don’t work in one place may well work in another.
However, I could not quite leave the idea alone. Since Ben published the post in October, I have had a range of discussions with colleagues where we considered whether we actually had our own red lines in terms of history teaching. It turns out that we do. The main difference I think is that they are mostly linked not so much to methods, but to whether or not core educational values, and the nature of history as a discipline, are being appropriately, and indeed rigorously, served (I went into this to some extent in my post HERE). In the end we established five or six big problems we had come across in history lessons (not including broader issues of assessment). Of those we agreed pretty much unanimously on two, and partially on a third. I aim to outline these in three separate blogs:
Now I am very keen not to make this too negative, so before I begin I’d like to highlight two key points:
Hello everyone and thanks if you have filled in my History GCSE 2018 survey. If you are interested in insights from this kind of survey, it would be great if you could also take a moment to fill in the Historical Association Annual Survey. This helps the HA act as a voice for the profession more effectively. You can fill it in HERE.
In my last blog I used overview data to compare the GCSE results this year to last year, as well as looking at the grade boundaries for different boards. The survey data collected allows me to look more specifically at the results of departments. So for example, big data suggested that the proportion of grades was essentially similar to last year, but it was impossible to tell if this was because departments did the same, or because some did brilliantly and others appallingly.
In this blog I am going to try to pull out some answers to key questions I am hearing a lot on Twitter and Facebook. Please note that this data is only a sample of 363 schools, all of whom engage actively with social media, so the data is mostly suggesting things rather than proving anything. However, I did manage to get a broadly representative sample in terms of the distribution of data around the 65% 9-4 average. For more on the data set please see the end of the blog. If you'd like a fully anonymised copy for your very own, please do let me know.
As each post is quite long I have indexed the questions here:
1. Were results more or less stable in 2018 compared to 2017 - Read On Further
2. Did teaching a 2 or 3 year curriculum have the biggest impact on 2018 results?
3. What impact did exam question practice have on results?
4. Was the A*-A / 9-7 rate affected?
5. What impact did non-specialist teaching have on grade stability?
In lots of ways I am glad I cannot draw many clear conclusions from this data. The slight differences between results from 2017 to 2018 suggest that departments have not been unduly hit by these changes and there is a broad level of fairness in how they have occurred. While some departments clearly will not be pleased with their results (and others by turn delighted), this is not unusual in the picture of GCSE. The only slight impact here may be that departments who have been historically stable are now in a category of less stable schools. However, to assess this, I would need data going back further, and probably in more depth.
What I do now have is a large bank of data on the 2018 exams, so if anyone has a question they’d like me to investigate using the dataset I have, please do get in touch.
NOTE: For almost all of these explorations I have compared results in terms of stability. So, for example, 2018 stability is calculated by comparing a school’s results in 2018 with their results in 2017. If the results are in the same bracket, then I would consider them broadly stable. If their results this year were a bracket or more above last year’s (e.g. last year they got 50-59% A*-C and this year 60-69% 9-4), then they are considered as doing better, and the reverse if they are lower this year.
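The bracket comparison described in this note can be sketched in a few lines of Python. The 10-point brackets match the example given above, but the school figures in the usage example are hypothetical:

```python
def bracket(pct):
    """Place a 9-4 (or A*-C) percentage into a 10-point bracket, e.g. 50-59% -> 5."""
    return int(pct // 10)

def stability(result_prev, result_curr):
    """Classify a school's current-year result against the previous year
    by how many brackets it has moved."""
    diff = bracket(result_curr) - bracket(result_prev)
    if diff == 0:
        return "stable"
    return "better" if diff > 0 else "worse"

# Hypothetical school: 55% A*-C in 2017, 63% 9-4 in 2018
print(stability(55, 63))  # -> better (moved from the 50-59 to the 60-69 bracket)
```

Note that a school moving from 59% to 60% counts as "better" while one moving from 50% to 59% counts as "stable"; bracketing deliberately trades that edge-case sensitivity for robustness against small year-on-year wobble.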
1. Were results more or less stable in 2018 compared to 2017?