This is part 5 of my review of "Making good progress?" by Daisy Christodoulou. You can find the review index and my analysis of chapter 1 HERE.

Chapter 5: Exam-based assessment

In chapter 5, Christodoulou takes a closer look at exam-based assessment and makes a fair case for the use of question-level analysis to pinpoint student weaknesses. Of course, as she notes, this relies on sampling a good portion of the domain and is therefore not immediately applicable to subjects like history, where extended answers are more common and where pupils might answer 5 questions rather than 55. Christodoulou also does a good job of exploring some of the issues of question setting and domain sampling which are inherent in all exams. She makes the important point that it is difficult to draw really reliable formative data from summative tests because of the broader focus of many exam questions. This would certainly be a useful lesson for some practitioners to learn before declaring the success or otherwise of their methods on the basis of exam results alone. (In brief defence of exam boards, there are extensive guidelines on making exam texts accessible; whether or not these are followed, however, is debatable.)

Overall, this is a useful summary of the valid use of question-level analysis, but once again there is an underlying implication that question-level analysis is not happening in schools around the country. Once again it feels as though a study of actual classroom practice would have yielded more useful insights into how teachers might move forwards.

More interesting is the suggestion that, whilst authentic tasks may provide some summative benefits, they make poor formative assessments. Christodoulou makes the case that formative exams should focus much more on the building blocks of the authentic tasks. Here I would also tend to agree. Formative assessments in history lessons tend to be the timeline activities, dates quizzes, sequencing and inference activities which become the building blocks for final summative assessments; the obsession with linear progression models, however, has encouraged the use of inappropriate tests during teaching. I am less convinced, though, that pupils' historical writing would improve if they focused only on comprehension questions, for example. If memory is the product of thought, then pupils need to engage more critically with what they read (and that is before we get onto the important issues surrounding motivation).
Where I do think this falls down a little is the suggestion that the more complex a task, the less use it is formatively. Christodoulou gives many examples from English exams (and I have a whole other rant about English language), but these do not really reflect the kind of complex task found in history. In fact, some historical misconceptions might only begin to appear when knowledge is applied to a complex task. It is difficult to assess a pupil's understanding of the significance of the Renaissance, for example, until they begin to place it into a wider context and develop their criteria for assessing it. Whilst I agree that many shorter, more specific formative tasks might help pupils to write this final piece, the final essay would still have a lot to reveal, I think.

In the final two sections of the chapter, Christodoulou explains why grades fail to provide useful formative information. This goes back to earlier worries about linear progression models and the incomparable nature of different exams. Here I find myself in complete agreement with Christodoulou on the limits of these grades and their pernicious effects on the curriculum. She also goes on to note the significant tensions between teachers and senior managers and begins to explore this power dynamic for the first time in any depth. Of course, her interpretation of the senior manager's concern ("is the test valid?") does not reflect my own experience of the same concern ("does the number go up?"), but that's context for you.

She also muses on the potential benefits and limitations of modular exams, concluding that they are worse than final summative ones. Again, I think this sets up modular and final exams as polar opposites, when a mixed-methods approach might be a useful compromise. A system in which modular exams are conducted half-termly and a final summative exam covers the whole domain might allow for better triangulation of evidence. Some practical suggestions here, or some relevant research, would have been nice.