Submitted by Pat Yarker (not verified) on Mon, 31/08/2020 - 16:25

Martin writes: 'But it would be bad, not good, for student gradings (for example) to be decided by personal judgements rather than by objective rules.' In relation to high-stakes summative exams I disagree, though I think two separate issues may have become conflated here.

There have been attempts to eliminate 'personal judgement' from public summative assessment in England. For example, in 2006 the mark-scheme for the Key Stage 2 Writing SAT boiled down to a tick-box list of technical features: a mark for using a semi-colon or for deploying the subjunctive. Deliberately, the mark-scheme had nothing to say about the degree of imagination your writing showed or the level of interest it evinced in the reader. These central features of a piece of creative writing were left unaddressed. Teachers were quite properly outraged, for a qualitative or interpretative dimension is necessarily part and parcel of summative assessment in many curriculum areas. Coming to a reasonable determination in such cases indeed depends importantly on experience, provided that experience is reflected on, informed by the views of fellow-practitioners (for example through processes of moderation) and weighed up in the light of advice from those who can justly claim authority (senior examiners, for example, who have engaged intensely over long experience and have reflected much). That is why departments in secondary schools will undertake quite extensive processes of trial-marking, internal moderation (a form of triangulation of judgement), assessor 'training' by senior examiners and so on. Practitioners will also reflect on the official reports written on exam papers and individual answers in order to refine their approach in the future. To present exam assessment as done on a whim or superficially is a caricature.
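
To make the contrast concrete, here is a minimal sketch (in Python, purely by way of illustration; the specific features and checks are my own invention, not the 2006 mark-scheme) of what a purely tick-box mark-scheme amounts to. It awards marks only for what can be detected mechanically, and by design has nothing to say about imagination or the interest a piece of writing evinces in a reader.

    # Illustrative sketch of a hypothetical tick-box mark-scheme.
    # Each criterion is a mechanical check on the surface of the text;
    # none of them can register imagination or a reader's interest.
    CRITERIA = {
        "uses a semi-colon": lambda text: ";" in text,
        "deploys the subjunctive": lambda text: "were i" in text.lower(),
        "varies sentence openings": lambda text: len(
            {s.split(" ")[0] for s in text.split(". ") if s}) > 3,
    }

    def tick_box_mark(text: str) -> int:
        """Award one mark per detected feature and return the total."""
        return sum(1 for check in CRITERIA.values() if check(text))

    # Example: a dull but 'feature-rich' sentence earns two marks.
    print(tick_box_mark("Were I braver; I would write more."))  # -> 2

However precisely such rules can be stated, they leave the qualitative heart of the task untouched.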

Martin's apparent rejection of the value of considered appraisal founded on experience in matters of assessment, and of the importance of informed individual judgement rather than marking-by-numbers, risks aligning him with those who clamour for more machine-marking of exam-scripts, and hence for a further narrowing-down of exam-questions (and with them pedagogic tactics) to suit 'algorithmic' approaches to marking. In reality, this means more of the reductive multiple-choice Q&A prevalent in the USA.

Martin is right to argue that transparency is vital in public assessment. Students and teachers need to know what the rules are, how the whole process works, the criteria on which judgement will be based and by which a grade or mark will be arrived at, and what the grounds for appeal are. That is, everyone needs to know the objective rules of the game, that these are as fair as may be, and that they can be appealed to and revised if required. But it is not possible to eliminate the human element in assessing exam material in subjects such as English Literature, Drama, Art or History while still retaining the subject's integrity and offering students an exam-course it is educationally valuable to undertake. A mark scheme is an algorithm in the sense that it is 'a precisely defined set of instructions'. But it is not possible to anticipate precisely, and hence to define, what a student may present by way of an answer to such questions as 'How successful was the 1945 Labour Government in introducing the Welfare State?' or 'Make an artwork entitled "Self Portrait"' or 'Who or what do you blame for the deaths of Romeo and Juliet, and why?' Nor is it possible definitively to state everything which should or should not be rewarded in a response to such questions. There must be room for judgement and evaluation which is rationally founded and which is open to challenge. Better, I think, to separate out the qualitative/interpretative aspects of assessment in those subjects where, because of their nature, they are evidently necessary, from the valuable role an algorithm may play in clarifying the steps in the overall process and the general rules of the game. Assessment criteria do not have to be algorithmic in order for evidently fair and reasonable (though of course not 'objectively certain') evaluation to be made.
