This is part 1 of my blog series on choosing good history textbooks. You can find the main page HERE.
One key aspect of a good history textbook or textbook series is the core narrative it develops. The best textbooks, in my opinion, have good extended prose, flow well, read aloud well, and have people's individual stories woven through them. A good textbook creates a narrative which children enjoy engaging with. Over the last couple of years, Robert Peal has written a number of pieces criticising history textbooks for lacking clear narrative and for being broken down into tiny chunks. As a proponent of the Annales approach to history, I am also a great lover of overarching narratives, though I suspect somewhat different ones to Peal. The call for clear narrative was also an important message which came out of the West London Free School History Conference, held on 25 February 2017. Counsell, for instance, noted that having a good grounding in historical knowledge is an issue of social justice. She also referred back to her long-standing plea to think about the fingertip and residue knowledge we develop as part of our individual curricula. To an extent, therefore, I agree with Peal's point (though not his critique) that a good set of textbooks needs to have solid coverage of British history as an absolute minimum (what else should be covered is, of course, a matter for debate).
How well is the core narrative developed?
The first question I asked of the 'Knowing History' series was how well it developed the core narrative of the book. To do this, I looked at the second book in the series, which covers the period 1509-1760. A sample chapter of this book is available online via Collins. In terms of the text itself, it is well written and flows naturally within each chapter. The language used is also very ambitious, which is to be applauded. There is no shying away from terms such as “intercession” or “Indulgences”, again something I think more textbook publishers should promote. Sadly, however, some terms are left for students to look up in the glossary rather than being given in-text or contextual explanations. I did experiment with reading the book aloud and found it to be passable. The story jumps around a bit in different sections, but there are certainly chapters which flow extremely well. There are also a range of stories woven into the text. Sadly, many of these seem to be from a very small and exclusive group at the top of society. Looking at the broader story, it is clear that Peal has adopted something of the “Grand Narrative” approach to the history of Britain, as you can see below from the chapter titles of the first two books:
Despite the good flow within chapters, the overall story feels somewhat stilted. In fact it feels like the book might fall into the trap of “stumbling from reign to reign [calling a halt] at each prince’s death” (Bloch, 1992, p. 146). The Reformation section provides a good example of this. We begin with Henry jousting and making peace with Francis I; diverge into the European Reformation; segue back into Henry VIII’s “great matter” of marriage and child-bearing; before some 300 words note that Henry broke with Rome, ostensibly to get power and remarry. None of this feels like a particularly detailed or cogent understanding of the major issues involved in the English Reformation.
On the plus side, there does appear to be good coverage of what might be termed the British (English?) 'historical canon'. Yet the first thing which really struck me is that this content list bears a striking similarity to the chapters of Sellar and Yeatman’s 1930 satire '1066 and All That', the classic history text containing “All the history you can remember”. To take Sellar and Yeatman's section on Henry VIII as a comparison, we find:
Of course, it could be said that 'social justice' sits at the heart of the curriculum design, as it is so heavily influenced by the Hirsch school of thinking; there is a focus on developing 'cultural capital' from the outset. In the sample chapter from the Early Modern book, pupils are quickly acquainted with the Holy Roman Empire, the Field of the Cloth of Gold, Wolsey and his role as Henry VIII’s adviser, Martin Luther, Alexander VI and so on (although curiously Clement VII only ever appears as “Clement” for some reason). Important for students to know? Certainly. But the best way for them to develop this knowledge? Debatable.

Are there any narrative oversights?

Another aspect I look for in a textbook series is what stories are not told (or, in other words, what I will need to fill in using other resources). I don't have time to do this for the whole series, so I will just look at the Year 7 book. Comparing the 'Knowing History' series to the Year 7 SHP textbook being sold by Hodder, there are many areas of similarity, but also some real differences.

Items missing from the SHP book but in Peal:
- Viking invasion and settlement
- Alfred the Great and the Danelaw
- The Anglo-Saxon Golden Age
- The names of the Crusader states (oddly one of the few non-English aspects included)
- Eleanor of Aquitaine and Isabella of France
- The Wars of the Roses

Items missing from Peal's book but in SHP:
- Life in Iron Age Britain
- Life in Roman Britain
- Building of the Roman Empire
- The Roman Army and its success
- Changes in Britain from the Iron Age to the Normans
- A study of soldiers who fought at Hastings
- The cultural impact of the Norman Conquest
- Life in medieval towns
- The development of armour
- The nature of medieval kingship
- A study of Edward II and Richard II
- An overview of all medieval rulers
- Why people liked Robin Hood stories
- A study of Shakespeare's interpretation of Agincourt
- A detailed look at life in Baghdad
- Late medieval exploration, the early Renaissance, the early medical Renaissance, the political Renaissance, Reformation ideas, the development of the printing press
- The Dissolution of the monasteries
- Henry VIII’s rule
- A range of enquiries looking at evidence, significance, historical interpretation, and change and continuity

There are also smaller narrative oversights which are interesting. In the sample section of the Year 8 book, for example, I found it strange that, in a book which is ostensibly about England, there is no real mention of English reformers whose actions arguably played a much bigger role in the Reformation in England than Calvin or Luther. There is no hint of Wycliffe, the Lollards, Ball, or the like. Why is this so important? Well, because it is key to understanding that the Henrician Reformation was not a sudden break but part of a much longer process of agitation for reform. Rather than Henry leading his people into change, it could be argued that he followed them into it. Equally, it is important to note that Luther and Calvin had a much greater impact on later dissenting groups, such as the Puritans. This raises the question: why is it necessary for students to look at Calvin and Luther at this point in the curriculum? Certainly students are not asked to engage with their ideas, nor are they asked to draw on their knowledge to explore whether the criticisms the reformers made were valid or otherwise. And of course, with the English focus, students are also not asked to explain why there was a Reformation in Europe.
Indeed, the Reformation is pinned to a single main event, the nailing of Luther’s theses to the church door at Wittenberg. I fear that the causation here remains implicit and might fall more comfortably under the heading of what Marc Bloch called “the obsession with origins” (Bloch, 1992, p. 24). However, as Bloch notes, this search for the roots of a phenomenon such as the English Reformation can often lead to ambiguity. In searching for origins, are we seeking beginnings, or causes? He goes on to say that “in popular usage, an origin is a beginning which explains. Worse still, a beginning which is a complete explanation. There lies the ambiguity, and there the danger” (Bloch, 1992, p. 25). Here, Calvin and Luther appear both as origins of the English Reformation and as an explanation for it. There is no real critical engagement with the inevitability or otherwise of the events which then transpired in England. We have an origin story which explains in totality.

Summary

Thus far we have looked at the importance of unpicking what narrative is provided in any textbook series and whether this contains any oversights. For me there are too many narrative oddities in Peal's book which would need plugging by other resources or teacher expertise. This is why I think it is so important that we ask these basic questions of any textbooks, from Aaron Wilkes to Ben Walsh. In my next blog I intend to focus further on choosing a textbook series by asking questions about the nature of the narratives developed in books. As ever, please feel free to leave comments below or via Twitter.
Chapter 7: Improving formative assessments
In chapter 7, Christodoulou sets out to suggest some practical approaches to formative assessment. She makes some useful points about the potential benefits of multiple-choice questions and offers some helpful strategies to increase the rigour of these as an assessment tool. Whilst Christodoulou does make a good case for how multiple-choice questions might be used productively in assessing history, I am not sure she does as much to balance this with their many limitations, especially when trying to build students towards tackling complex topics. Attempts to assess history entirely through multiple-choice questions have already been made in the USA and have met with limited success, especially in helping students to transition towards deeper historical thinking. This is something which the Stanford SHEG project was set up to tackle. Whilst this approach to multiple-choice questioning does have some advantages, it ironically downplays the role of knowledge in making some of the judgements and leads precisely back to that idea of over-testing which Christodoulou criticised earlier in the book. That said, I do wish more history teachers would make careful use of multiple choice.

A good case is then made for frequent and repeated low-stakes testing in the classroom. Again, some examples of this working would have been helpful. Even then, as Dennis pointed out in his article in Teaching History 164, the results of low-stakes testing are still tied very much to a Hirschean interpretation of curriculum and need to be assessed in the context of other curriculum aims too. All of this aside, this feels like a useful chapter for trainee teachers to read.

Chapter 8: Improving summative assessments

Christodoulou returns at the opening of this chapter to a criticism of marking rubrics and descriptors. She notes the perverse incentives a rubric can create and the major issues with their reliability. Again, this is based on an assumption that our ultimate goal is to compare children across the board (in itself debatable). However, she raises some valid concerns which have plagued GCSE and A Level examinations for years (as well as earlier assessments).

It was useful to see a clear explanation of the comparative judgement approach. This is something which may end up having real merit as we move more and more towards digital working. That said, comparative judgement relies upon a system of norm referencing in which there will always be winners and losers. There is no scenario in the use of comparative judgement where everyone can succeed, even if they meet what might be considered base criteria. In a comparative judgement assessment of how to make a cup of tea, half of the entries would still fall in the bottom half of the rank order, even if everyone was capable of the task (a small illustrative sketch follows below). I also feel the idea of where “tacit knowledge” comes from is somewhat under-explored. Presumably this is based on a kind of internal rubric which is not standardisable. To mitigate this, comparative judgements would need to be made by huge numbers of people. This might be possible at a national level, but possibly not in school. Equally, it implies that a piece of work may receive a B grade because 10 people think it is a B grade; however, it might also get a B grade if 5 people thought it was worth a C and 5 an A. This is simplified, but the essential point is a valid one, I think.

The final section on grading was quite a useful introduction to the conversion of pupil scores into grades and would, again, make a good read for trainees.
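To make the norm-referencing point concrete, here is a minimal sketch of how comparative judgement turns pairwise decisions into a rank order. Everything in it is invented for illustration: the script names, the "true quality" scores, the noisy stand-in for a judge's internal rubric, and the choice of five judgements per pair. Real comparative judgement systems fit a statistical model (such as Bradley-Terry) to the decisions rather than simply counting wins, but the outcome is the same in the respect that matters here: a relative ranking in which half the scripts sit in the bottom half, however good they all are.

```python
# Illustrative sketch only (not Christodoulou's own method): comparative
# judgement as pairwise decisions aggregated into a rank order.
import itertools
import random

random.seed(42)

# Ten imaginary scripts; every one comfortably "makes the cup of tea",
# i.e. all exceed a notional baseline quality of 50.
true_quality = {f"script_{i}": 50 + random.uniform(0, 10) for i in range(10)}

def judge(a, b):
    """One judge compares a pair and picks a winner, with a little noise
    standing in for the tacit, internal rubric mentioned in the text."""
    score_a = true_quality[a] + random.gauss(0, 2)
    score_b = true_quality[b] + random.gauss(0, 2)
    return a if score_a > score_b else b

wins = {script: 0 for script in true_quality}
for a, b in itertools.combinations(true_quality, 2):
    for _ in range(5):                 # five judgements per pair (arbitrary)
        wins[judge(a, b)] += 1

# Rank order from win counts: by construction, half the scripts sit in the
# bottom half of the ranking even though every script met the baseline.
ranking = sorted(wins, key=wins.get, reverse=True)
for position, script in enumerate(ranking, start=1):
    print(position, script, "wins:", wins[script])
```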
Chapter 9: An integrated assessment system

Chapter 9 contains some interesting reflections on how an integrated system of assessment and curriculum might be created. All of this relies on a set and standardised national curriculum, however (which has its own issues). In some ways, one might argue that such a system partially exists where schools have bought into GCSE textbook schemes which come with diagnostic tests and exam papers, and allow teachers to access a national bank of resources. Still, there are some interesting implications here for how curriculum might be controlled and homogenised; whether this is a good thing is another matter.

Chapter 6: Life after levels
Ah, now we get to the one I have been waiting for. Life after levels has obsessed me since 2011, when I was allowed to start tinkering with the assessment systems in my department. Since then, I have spoken to hundreds of people on this same subject, even more so since the demise of levels in 2014. Yet when I give these talks, I always seem to meet the same resistance: a lack of support from senior leaders; a lack of time; a lack of expertise; a belief that levels work; a slavish adherence at school level to GCSE grades. I am therefore fascinated to see what Christodoulou offers as suggestions, and whether these have any impact on the prevailing inertia amongst school leaders up and down the country. I was pleased to see Christodoulou put so much emphasis on the importance of a progression model and highlight those same problems of resistance to change. Where I am more sceptical is in her belief that the textbook can be a stand-in for a progression model. Whilst I am a big fan of textbooks, I am not sure that they can fully replace a department’s responsibility to consider progression, for three main reasons:
Beyond this point, Christodoulou makes some useful observations about the need for clarity of purpose in a curriculum (though once again it is implied that all should see the same value in education if we are to follow a core curriculum set by a textbook). I was left less convinced, however, that there were any concrete proposals to work on. Perhaps this is because defining progression needs to happen at a subject-specific level; however, I feel a few more links to useful models of progression (which are mentioned) would have been very helpful.

The one model of progression which is explored in more depth is that connected with phonics. The links to Christodoulou’s suggestions are clear, but there are again aspects of control and power dynamics which are conveniently side-lined. When teachers decide which words to teach they are automatically applying judgements to them. When they allow someone else to make those judgements (as in a textbook) they are absolving themselves of responsibility for those choices and potentially reinforcing particular power structures. We may feel we are a long way from this in the liberal idyll of the C21st, but this was very common practice in the 1930s and 40s, as well as in oppressive regimes today.

Above all, I think much of the work Christodoulou talks about here on progression already exists in the history teaching community. Indeed, there are even whole textbook series published by the SHP dedicated to making these progression models accessible to teachers and helping those same teachers to improve their expertise. Some further exploration of these texts would be particularly helpful for anyone interested in this area, I feel. For those wanting a quick summary of Christodoulou’s main thrust here (especially in relation to history), it might be quicker to read Burnham and Brown’s one-page “Do and Don’t” summary for life after levels (Teaching History 157, p. 17).

This is part 5 of my review of "Making good progress?" by Daisy Christodoulou. You can find the review index and my analysis of chapter 1 HERE.

Chapter 5: Exam-based assessment

In chapter 5, Christodoulou takes a closer look at exam-based assessment and makes a fair case for the use of question-level analysis to pinpoint student weaknesses. Of course, as she notes, this relies on checking a good portion of the domain and is therefore not immediately applicable to subjects like history, where extended answers are more common and where pupils might only answer 5 questions rather than 55. Christodoulou also does a good job of exploring some of the issues of question setting and domain sampling which are inherent in all exams. She makes the important point that it is difficult to draw really reliable formative data from summative tests due to the broader focus of many exam questions. This would certainly be a useful lesson for some practitioners to learn when declaring the success or otherwise of their methods based on exam results alone. (In brief defence of exam boards, there are extensive guidelines on making exam texts accessible; whether or not these are followed, however, is debatable.) Overall, this is a useful summary of the valid use of question-level analysis, but once again there is an underlying implication that question-level analysis is not happening in schools around the country. Once again it feels that a study of actual classroom practice would have yielded more useful insights into how teachers might move forwards.
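For readers unfamiliar with the term, here is a minimal sketch of what question-level analysis boils down to. The pupil names, question labels, marks and the 50% "facility" threshold are all invented for illustration and are not drawn from the book; the point is simply the arithmetic of turning a mark book into per-question diagnostics.

```python
# Illustrative sketch of question-level analysis with invented data.
# Facility = class average as a share of the marks available; low values
# flag questions (and the topics behind them) worth revisiting.
max_marks = {"Q1 chronology": 4, "Q2 source inference": 6,
             "Q3 causation essay": 16}

pupil_marks = {
    "Aisha": {"Q1 chronology": 4, "Q2 source inference": 5, "Q3 causation essay": 8},
    "Ben":   {"Q1 chronology": 3, "Q2 source inference": 2, "Q3 causation essay": 5},
    "Chloe": {"Q1 chronology": 4, "Q2 source inference": 3, "Q3 causation essay": 10},
}

for question, available in max_marks.items():
    scores = [marks[question] for marks in pupil_marks.values()]
    facility = sum(scores) / (len(scores) * available)
    flag = "  <- revisit" if facility < 0.5 else ""
    print(f"{question}: facility {facility:.0%}{flag}")
```

With only a handful of long questions, as in a history paper, each "facility" figure rests on very little evidence, which is precisely the limitation noted above.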
More interesting is the suggestion that, whilst authentic tasks may provide some summative benefits, they make poor formative assessments. Christodoulou makes the case that formative exams should focus much more on the building blocks of the authentic tasks. Here I would also tend to agree. Formative assessments in history lessons tend to be the timeline activities, dates quizzes, and sequencing and inference activities which become the building blocks for final summative assessments. The obsession with linear progression models, however, has encouraged the use of inappropriate tests during teaching. That said, I am less convinced that pupils’ historical writing would improve if they only focused on comprehension questions, for example. If memory is the product of thought, then pupils need to engage more critically with what they read (and that is before we get onto the important issues surrounding motivation).
Where I do think this falls down a little is the suggestion that the more complex a task, the less use it is formatively. Christodoulou gives many examples from English exams (and I have a whole other rant about English language), but these do not really reflect the kinds of complex tasks found in history. In fact, some historical misconceptions might only begin to appear when applied to a complex task. It is difficult to assess a pupil’s understanding of the significance of the Renaissance, for example, until they begin to place it into a wider context and develop their criteria for assessing it. Whilst I agree that many shorter, more specific formative tasks might aid in getting pupils to write this final piece, the final essay would still have a lot to reveal, I think.

In the final two sections of the chapter, Christodoulou explains why grades fail to provide useful formative information. This goes back to earlier worries about linear progression models and the incomparable nature of different exams. Here I find myself in complete agreement with Christodoulou on the limits of these grades and their pernicious effects on the curriculum. She also goes on to note the significant tensions between teachers and senior managers and begins to explore this power dynamic for the first time in any depth. Of course, her interpretation of the senior manager’s concern (“is the test valid?”) does not reflect my own experience of the same conversation (“does the number go up?”), but that’s context for you. She also muses on the potential benefits and limitations of modular exams, concluding that they are worse than final summative ones. Again, I think this sets up modular and final exams as polar opposites, when a mixed-methods approach might be a useful compromise. A system in which modular exams are conducted half-termly and a final summative exam covers the whole domain might allow for better triangulation of evidence. Some practical suggestions here, or some relevant research, would have been nice.

This is part 4 of my review of "Making good progress?" by Daisy Christodoulou. You can find the review index and my analysis of chapter 1 HERE.

Chapter 4: Descriptor-based assessment

Chapter 4 begins with an overview of descriptor-based assessment. This is by far and away the most common form of assessment used in schools today, in both its formative and summative uses. Christodoulou notes how the summative descriptors for Key Stage 3 rapidly expanded to become APP criteria, designed for formative use. She also notes how many schools’ post-levels solutions are also based on the generation of generic, linear descriptors. This is a topic I find quite interesting and have written on the subject HERE, HERE and HERE.

Christodoulou goes on to explore the uses and limitations of descriptors as a tool for formative assessment. She argues that descriptors do not allow teachers to analyse the performance of students, or to distinguish between “fleeting performance and genuine long-term learning” (p. 85). Whilst I do take her point here, certainly in terms of KS3 level descriptors or GCSE bands, there seems to be a conflation of descriptors in general and generic descriptors going on which is not fully acknowledged. I would argue that it is possible to create useful performance descriptors for individual assessments (an essay, for example), and to tailor these for the purpose of assessing both summatively and formatively in that particular task. Indeed, Burnham and Brown have written on this theme in Teaching History 157.
It is also possible, I believe, to use descriptors of gold-standard performance to tailor a descriptor-based mark scheme for formative purposes. In a sense, it is about whether the descriptors we use have been properly adapted for the purpose we want to use them for, as well as whether or not they were valid descriptors in the first instance. In the case of APP grids, I would say the answer to both of these questions is in the negative.

Christodoulou then uses an example of how summative descriptors cannot help teachers to analyse the performance of their students. Her example of the need for vocabulary to make inferences is a good one and shows the limits of her example mark scheme in diagnosing the formative needs of the child. However, this also assumes that the teacher is only using the summative mark scheme to formatively assess the child in question. I am sure that some teachers only use the mark schemes to form their views on pupils’ progress; however, I know that our teacher training course at Leeds Trinity tells trainees that this is a very poor method of assessing formatively. So once again, Christodoulou seems to confuse the existence of a particular form of assessment with evidence of how teachers practise assessment on a day-to-day basis. What is missing here, indeed what this book is crying out for, is a detailed study of what teachers do in the classroom. Again, I suspect that the use of descriptors is more connected with a practical concern (driven by policy) to get the greatest possible advantage in public examinations.

At this point I became quite interested in what happens in Ark schools, where Christodoulou serves as head of assessment. Trawling through the websites of all Ark secondaries, I managed to find two schools adopting Christodoulou’s suggestions and a large number still using KS3 levels. I did not get around to investigating Key Stage 4, but the use of summative, generic descriptors certainly seemed to still be in place in four schools, and APP grids in another. Most did not publish any information about assessment. This may not mean very much in the long run, but it is an interesting study in power dynamics and the ways in which schools respond to changing external demands.

In the next part of the chapter, Christodoulou suggests that it might be more valid to test spelling and vocabulary separately from the final tasks. She glosses over the inherent sampling issue of this, however, and does not go on to discuss the phenomenon where students can accurately recall information but struggle to apply it in context – the very aspect which APP grids were meant to solve. I am in no way defending APP as an approach here; I tend to agree with Christodoulou that it encouraged lazy assessment based on unhelpful criteria. However, I also think this misuse was driven by that same misuse of power and accountability which I have discussed before, rather than by genuine beliefs among teachers that the APP grid was anything more than hoop-jumping.

The remainder of the chapter deals with the issue of generic feedback, which I feel was covered suitably in chapters 1 and 2. Here it feels that we are labouring the point by using extreme examples of practice. In the history example, for instance, Christodoulou shows generic descriptors being used to offer feedback on an essay on the Battle of Hastings. However, I would contend that most teachers would check the knowledge in the essay as well, thereby overcoming much of the damage of the generic mark scheme.
Even this might be overcome by following Burnham and Brown’s suggestions referred to above. In the further example given of a history exam question on Stalin, Christodoulou suggests that such a question might quickly highlight misunderstandings. But this question actually opens up a continuum of answers for which multiple choice is probably not appropriate: a case might be made for options A and C, and a series of other potential options are missing, narrowing the scope of the possible answers. Would I use this as a short formative task? Yes. Would I want it to be the only assessment of this? No! The final section of the chapter goes back to the issue of the validity of descriptors for summative assessment and raises a number of useful and relevant points on their limitations. I found this uncontroversial and fairly measured. However, in subjects like history it is very difficult to depart from descriptors without affecting the validity of that assessment for determining the historical understanding of the student. All written subjects will need at least some qualitative aspect, so I am interested to see how Christodoulou deals with this in her next chapter.
OK, I'll admit that was blatant clickbait (or a social experiment if you prefer). Over the last few days I have been publishing my thoughts on Daisy Christodoulou's book "Making Good Progress?". In the spirit of AfL, I have actually done more of an analysis than a summative review. However, I noticed that engagement with these analyses was far lower than for other posts, despite the apparent popularity of the book.
So the test was this: which link would get the most clicks, and from whom (results soon)?
- the "book analysis" link with no grade?
- the one giving the book a D grade?
- the one giving the book 5 stars?
I imagine that the love/hate links will prove to be the most popular. I wonder if this links back to our natural desire for summative feedback (indeed, this is one area which is a constant battle with kids when giving formative feedback). You will note that I have kept the headings deliberately ambiguous: 5 stars out of how many? And D as in GCSE, or as in BTEC Distinction? This was very deliberate. Anyway, if you'd like my actual thoughts, please follow the link HERE.

This is part 3 of my review of "Making good progress?" by Daisy Christodoulou. You can find the review index and my analysis of chapter 1 HERE.
Chapter 3: Making valid inferences

In chapter 3, Christodoulou addresses the different purposes of assessment. This is a good introduction to the notion of how purpose affects what data we collect. The overview of validity and reliability is clear and cogent and serves as a useful guide to the teaching novice. In fact, I am seriously considering using this chapter with my PGCE students when we discuss assessment. The section on the trade-offs between reliability and validity in subjects like history is particularly helpful. Certainly it might help them question the merits of allowing pupils to take assessments home to finish, for example. The second part of the chapter deals with the issue of formative assessment. Of particular note, and very important, is the point that formative assessments cannot legitimately use summative grading criteria when the conditions are so different. This is a very direct challenge to all those schools whose post-levels solution has been to bring in GCSE grading for every piece of work from KS3 to KS4. It echoes much work done over the last two decades in history education, notably that of Lee, Shemilt, Counsell, Brown, Burnham, and many others. So far, I have found this the most useful chapter, possibly because the underlying clash of ideologies seems to be less evident here.

This is part 2 of my review of "Making good progress?" by Daisy Christodoulou. You can find the review index and my analysis of chapter 1 HERE.
Chapter 2: Curriculum aims and teaching methods

Chapter 2 opens with a focus on how the “generic skills” approach to teaching came to dominate. Christodoulou offers some good examples of the emptiness of approaches such as the RSA’s Opening Minds (although I wonder how many schools still teach this curriculum). She also uses Ofsted subject reports to show how generic approaches have been promoted. In many senses, this comes back to my own point about chapter 1, namely that I think the power of Ofsted, and their abuse of this power, has much to answer for in terms of promoting poor approaches to teaching. It is good to see this being acknowledged more fully here. However, it is worth noting that most of the examples Christodoulou cites come from several years ago now, especially the work from DfE level. I think there has already been a significant shift in the educational landscape, and the “generic skills” voices have certainly lost much of their prior power and influence.

A good deal of the chapter makes the case for deliberate practice as a means of targeted improvement. The example is given of a baseball player who might only get to run a particular play once in a game, but might ask for the pitch to be given over and over in practice. This argument does make logical sense, although I wonder if it runs counter to some of the research set out in Brown et al.’s “Make It Stick”. In that book, and using the same baseball analogy, the researchers suggest that deliberate practice is important, but that ultimately the moves need to be practised in a game-like situation for the learning to fully embed. This does not discount Christodoulou’s point, but I wonder whether, by placing so much emphasis on deliberate practice, we might in turn encourage teachers to ignore the final outcomes altogether. Pianists may play scales and football players practise drills, but at the end of the day they all play regular concerts or games too.

Christodoulou’s assertion, later in the chapter, that learning does not need to involve significant effort also seems to run counter to Willingham’s research suggesting that memory is the product of thought: the more thought happening, the better the memory. I think her point here may be about cognitive overload, but I would be interested to know more about this debate.

I am also interested in Christodoulou’s complete rejection of “authentic tasks”. Whilst the cognitive science (and, to be frank, common sense) supports the notion that students cannot be asked to think through problems for which they have limited knowledge, there is a point in every person’s academic life when they must bridge the gap between knowledge acquisition and knowledge generation. I would hold that it is also important to inculcate pupils in the methods of a discipline as well as its core content. This is especially true of history, where the “core content” is completely limitless. Although Christodoulou does not address this point directly, I think there is a disconnect between the view, shared by many in the “traditionalist” school, that subjects should not be “dumbed down”, and the practice of holding those same children back from advancing in their knowledge of the subject as an academic discipline.

Christodoulou’s observations on self- and peer-assessment were quite interesting. I was partly expecting a full rejection of these in favour of teacher-led assessment. However, Christodoulou seems to suggest that peer- and self-assessment are vital components of teaching and learning.
At this point I have found myself in general agreement with most of the rest of the chapter. Christodoulou makes a convincing case for deliberate practice. I am sure such practices are embedded already in many classrooms, even if they are not seen more widely during Ofsted inspections. I do wonder, however, if there is an implication that direct instruction and deliberate practice were once done more effectively (before the advent of “generic skills”, for example) and, if so, whether there is any reliable evidence to support this notion. Certainly, in terms of history, Cannadine’s investigation into 100 years of the history curriculum would suggest not.

Chapter 1: Why didn’t assessment for learning transform our schools?
This chapter gives a very useful summary of the key difference between assessment for learning and assessment of learning. Christodoulou suggests that the perversion of AfL is partially to do with Ofsted and the DfE confusing the two types of assessment. However, she goes on to assert that the main reason why AfL has failed is that many teachers subscribe to a generic skills model of education and therefore believe that assessment for learning and assessment of learning are essentially the same. She argues that teachers need to see improvement as the process of deliberate practice. Therefore, to get better at writing an essay about the First World War, pupils might engage in answering short questions from a textbook, mastering the chronology, and so on. I completely agree with Christodoulou's point here, and I certainly think that some aspects of ITT have contributed unhealthily to this over the years. The growing use and perversion of Key Stage 3 levels also contributed to this problem (as Wiliam notes in the preface).

However, I think that her implication that the majority of teachers are believers in the "skills-based" model of education misses the mark somewhat. Indeed, I could not see any concrete research supporting the notion that the majority of teachers have bought into the "skills" model. From my experience working in schools, most history teachers I know engage in deliberate practice when getting their pupils to improve at history (and I am yet to meet a maths teacher who doesn’t believe in deliberate practice). Where this breaks down, and where I do tend to agree more with Christodoulou’s assertion, is at GCSE. Here the temptation is to practise exam questions or “exam skills” over and over, to the detriment of other aspects of deliberate practice.

Unlike Christodoulou, I would argue that many teachers have been put in a situation where they have accepted (or feel they have to accept) the generic “skills model”, even when it runs contrary to how they might prefer to teach. This, in my opinion, has been driven much more significantly by Ofsted than Christodoulou suggests. Many senior leaders have responded to Ofsted pressures by reducing the professional freedoms of their staff and pushing “generic skills” models of teaching. I can list scores of whole-school initiatives which have shoved generic skills and assessment of learning to the fore in schools – notably those which have replaced KS3 levels with generic criteria from the GCSE mark schemes. In many ways I would argue that this is connected to the promotion of effective “leaders” over those whose educational pedagogies might have been more sound. These directives get the support of a small core of people who also subscribe to the “generic skills” model, and so a hegemony is created. Many teachers who are uneasy with this shift either do not have the confidence to challenge such directives from above, or lack the rigorous training and professional knowledge to offer a reasoned challenge (an issue of how teachers are trained, on which I have written before).

I also think exam boards have played a major role in a way that Christodoulou does not acknowledge here either. The simple fact is that deliberate practice and generic skills have often been synonymous at GCSE. Mark schemes in history have hitherto demanded that students provide two points on one side, two points on the other, and a conclusion, for example.
The increasing shift towards limited examinations training has meant that knowledge is rarely taken fully into account in such exams, provided the expected exam structures are followed. Therefore the pragmatic classroom practitioner teaches “exam skills” in the full knowledge that these are not synonymous with teaching history (or English, or science, or whatever). Again, there are some who miss this distinction, but it is notable that many history teachers see GCSE as an odd deviation from proper history teaching at Key Stages 3 and 5. In essence, I agree completely with Christodoulou’s concerns and her analysis of the problem; however, I think that she paints the reason for this problem as one of educational aims in stark black and white. In reality, I would suggest it is a multifaceted, three-dimensional sculpture, including significant aspects of power and control mixed with aspects of pragmatism, ideology, idealised leadership, ignorance, and wrong-headedness. Of course, at this juncture, I accept that she may well cover these other aspects in the following chapters.