Assessment in Higher Education Conference, 26–27 June 2019.

On June 26th and 27th 2019, the seventeenth Assessment in Higher Education Conference was held in Manchester. Collated below are the thoughts of Dr. Richard Walker, Rob Shaw and James Youdale from the Programme Design and Learning Technology Team.

Richard Walker

Peter Hartley’s master class focused on the challenge of designing assessment strategies which ensure that learning outcomes are addressed at the programme level. Reflecting on findings from the National Teaching Fellowship Scheme funded project on Programme Assessment Strategies (PASS), the workshop discussed the risks of traditional ‘atomised’ assessment strategies which place too much emphasis on modular tasks, restricting opportunities for skills development and the delivery of meaningful feedback over an extended period of time. This may be due to assessment regulations, as well as the pressures of a modular curriculum with short timescales in which tasks must be completed.

The master class discussed how traditional approaches have tended to favour ‘one size fits all’ tasks such as essay assignments, with an overemphasis on summatively assessed activities. Programme focused assessment strategies, in contrast, offer a different way of supporting student learning, with a clearer priority on the achievement of key outcomes over the duration of a study programme through a mixture of formatively and summatively assessed tasks, often based on integrative designs.

The PASS website hosts a series of case studies on programme focused assessment, with attention to integrative level/year assessment tasks. One such example is Peninsula Medical School’s assessment modules (case study pdf), which run through their five-year programme. These assessment modules are not linked to specific areas of teaching but are designed around a spiral curriculum: this approach enables assessment to revisit topics longitudinally ‘with the aim of reinforcing learning and allowing for increasing complexity, with plenty of time for student-selected special study units and electives, as well as self-directed learning’.

Coventry Business School’s undergraduate programme has developed integrative assessment tasks in a different way by assessing the outcomes of a group of modules in a project task for each year of the undergraduate programme (case study pdf). The task is worth 50% of the assessment on offer in each year and reflects a unifying theme (e.g. informed by a specific sector or employer). The aim is to provide students with an opportunity to demonstrate their achievement of key course outcomes through the performance of a ‘larger, more complex, real-world assessment task’.

Rob Shaw

In my role as Educational Adviser in the Programme Design and Learning Technology Team, I was particularly keen to hear about ideas related to assessment as a driver for learning (assessment ‘for’ as well as ‘of’ learning), the relationships between assessment at the module and programme level, and the interplay between formative activity and summative assessment.

From this perspective, the conference started well with David Boud’s masterclass on ‘evaluative judgement’. This is described as “the capability to make decisions about the quality of work of self and others” (Tai et al. 2018, 471), and Boud positioned this as a skill that needs explicit attention not just in the immediate context of a particular assessment for a module and how this connects to the programme level, but also as a fundamental skill post-graduation. He began with the vignette of a high-flying graduate struggling to make his way in his first employed position, in a context where detailed criteria and feedback are no longer immediately available, arguing that the Higher Education system is letting students down if they graduate with a dependence on external judgement. We should, instead, be providing students with explicit and structured opportunities to develop self-reliance in their ability to understand quality in context and apply this understanding to their own work and the work of others around them.

This chimed with me as someone who has frequently been on the receiving end of laments from teaching staff that students do not seem to use or value the information they receive through criteria and feedback, yet expect ever greater levels of detail. Boud was clear in his belief that much of the ‘standard toolset’ commonly used in Higher Education to inform students of the requirements of assessment and provide feedback on performance may, if used uncritically and uni-directionally, actually work against the development of evaluative judgement. Detailed assessment criteria, exemplars, self- and peer-assessment activities, and detailed feedback on the quality of a submission can all position assessment as entirely external and inhibit the development of self-reliance. Of course, Boud was not arguing that these elements should be withdrawn. Rather, he was positioning them as elements that should involve student activity, not just tutor activity; learning, not just measurement; and doing, not just telling.

From a programme design perspective, Boud suggested that an integrated and staged approach is needed, involving both teaching and learning activities and assessment tasks, with sustained development beyond the module level. Underpinning this, he called for greater recognition of the complex and tacit knowledge involved in assessment and its criteria: we should acknowledge this, ‘let students in’, and develop more realistic and (if necessary) holistic criteria as a starting point for critical thinking, rather than attempting to represent quality and practice fully and explicitly, which can lead to atomisation and box-ticking approaches. Key suggestions included:

Learning activities

  • Identifying standards and criteria (students encouraged to develop their own ideas before being provided with definitive criteria)
  • Utilising criteria (regular opportunities to practise making judgements; tasks with increasing levels of sophistication; making space for discussion of nuances and complexity; referencing programme and stage outcomes)
  • Use of exemplars (incorporating dialogue about multiple contrasting examples; starting with more extreme examples and working towards finer degrees of discrimination)
  • Self assessment (over time and over multiple tasks)
  • Peer assessment (focused on giving rather than receiving feedback; formative and qualitative rather than focused on grading)

Assessment tasks

  • Incorporating prior self-assessments
  • Integrating feedback dialogue (feedback focused on calibration and the quality of students’ own assessment of their work; opportunities for students to communicate what they were aiming for beforehand and to discuss outcomes)
  • Including opportunities for post-feedback learning action planning

Whilst there may be nothing particularly new about these suggestions, bringing them together under an overarching goal of developing evaluative judgement across and beyond a programme may increase the coherence and purpose of the individual elements. This might help programme teams to communicate with students about the complexity of standards and help to increase self-reliance and engagement with assessment and feedback. It might do this through a supported process of discussion and negotiation with experts and peers, underpinned by multiple opportunities to engage with exemplars and other artefacts embodying standards in the discipline. This might then help students to utilise criteria, feedback and other forms of ‘input’ more effectively and realistically as they progress through a programme.

A key question for me is how transferable evaluative judgement might be from one context to another. It seems intuitive that by developing a complex and sophisticated appreciation of quality within a particular discipline or domain, an individual is able to develop capabilities that might transfer to another context. However, this might underplay the extent to which notions of quality and the practices they relate to need to be negotiated and appreciated ‘afresh’ each time an individual moves from one context to the next. Questions of agency and power also seem important here. From the perspective of programme design, however, this could be seen as a moot point given the potential for improving assessment and feedback practices within a programme.

Of course, implementing the approaches suggested by Boud adds pressure to the often already stretched opportunities to coordinate activities across a programme and its modules, and requires ‘space’ to be found within the curriculum. However, as a vehicle for bringing learning and assessment closer together and operationalising a more student-focused, programme-oriented approach to assessment, they might warrant some attention at moments such as programme development and review meetings, or criteria development and moderation meetings where approaches and standards are discussed.

Boud’s masterclass ‘seeded’ ideas that were revisited throughout the remainder of the conference, relating to the balance between the goals of measurement and learning, and the need to encourage engagement so that the time spent dealing with assessment matters is used most efficiently to drive learning. This was picked up in sessions on institution-wide approaches to grid rubrics, toolkits for universal assessment design and assessment for social justice, issues surrounding re-assessment and its ‘Cinderella’ role, ‘marginal gains’ approaches to improving the use and effectiveness of feedback, and case studies on undergraduate viva assessment in Economics and self-assessment in Mathematics and Pharmacology. Of course, these are not simply technical matters: as the educational ‘bottom line’, assessment and the decisions surrounding it are at the heart of debates on the purpose and value of HE in society. Ultimately, the notion of evaluative judgement might be of most use in providing a frame for uncovering tacit assumptions about assessment and standards, and for discussing these issues across programme teams and with students.

References

Boud, D., Ajjawi, R., Dawson, P. and Tai, J. (Eds.) (2018). Developing Evaluative Judgement in Higher Education: Assessment for Knowing and Producing Quality Work. London: Routledge.

Permalink to the University of York library record (University of York log in required)

Tai, J., Ajjawi, R., Boud, D., Dawson, P. and Panadero, E. (2018). Developing Evaluative Judgement: Enabling Students to Make Decisions about the Quality of Work. Higher Education, 76(3), 467–481.

James Youdale

Through both the sessions I attended and the conversations I had with delegates, I found myself returning to the all-encompassing notion of ‘learner engagement’, and just how deeply this underlying, under-defined, but wholly significant aspect is rooted in the successes and failures of every assessment or feedback strategy. On the morning of the Wednesday that I attended, I took part in the masterclass facilitated by Professor David Carless from the University of Hong Kong. I found the session to be a thought-provoking overview of his own school of thought regarding assessment processes, their deficiencies, and some of the assumptions and practices that underpin, and can also undermine, feedback processes.

Carless seemed settled on the idea that feedback in higher education has historically been regarded as a process of providing students with information designed to produce an action. Proposing a new paradigm that leaves behind any suggestion that the mere act of grading equates to feedback, Carless identified the present barriers of:

  • modularisation in assessment practice,
  • a lack of continuity of academic staff,
  • an overarching culture of students perceiving feedback as being a practice which is tied to a product or an award. 

Carless bemoaned the perception that feedback should be separated from pedagogy, arguing instead that all teaching encounters – whether face to face or blended – should be considered viable opportunities for feedback, explicitly recognised as such, to be constructed, delivered or discussed. With specific regard to module-based assessment, Carless proposed that a programme-level approach represents a great step forward as a means of developing continuity of tasks and the cumulative building of assessment sequences. To do this, however, Carless conceded that instructors also require feedback literacy, and programme leaders require additional resources and space to develop such assessment strategies.

The concept of feedback literacy was at the core of the conversation, both with regard to the student part of the contract and also the instructional element. Throughout the day in general, there was a recurring notion that modern educators need to scaffold students towards developing the feedback literacy skills that will empower them to reflect on and utilise the feedback they are provided with. Whilst aspirational in isolation, this seems to jar with the parallel dialogue around creating assessment strategies which stretch students to evidence the authentic skills that will propel their real-world pursuits beyond their programmes of study. Both Professor Carless and Bruce Macfarlane – the latter in his keynote address – provided fairly convincing challenges to the validity of assessment processes which reward students who learn to ‘play the game’.

This, it was argued, promotes assessment and feedback literacy only insofar as it encourages students to become nimble at traversing the mazes that we, as curriculum designers, ask them to traverse. Macfarlane in particular questioned the autonomy, and indeed the value, of a Higher Education paradigm which rewards those learners who embody behaviours that are ultimately designed to please us and warrant a reward. The notion of ‘performativity’ (bodily, emotional, participative) permeated his entire meta-analysis of HE – and whilst I certainly identified with his critique of many of the ‘inconvenient truths’ of our present system, I found myself wondering whether the complete emancipation of learning and a scaffolded, fair, reproducible and scalable assessment process could ever be compatible.

Having reflected on it for a few weeks since, I’m still not convinced!

I do, however, feel that this is certainly food for thought, and that it raises some interesting questions when the conversation moves towards the notion of partnerships in assessment and feedback. In Higher Education, we often talk about feedback as being a dialogue or a discussion, but what we are actually talking about is an opportunity for students to question, or for markers to explain or expand on, what has already been crystallised in a grade. In addition, this is an area which can be inherently problematic to sell to students who may not feel a desire to take on board feedback that lacks visible import for their future ventures. Sara Eastburn, from the University of Huddersfield’s School of Human and Health Sciences, made this exact point in relation to students receiving their final grade at the end of their programme.

Throughout the conference, various co-designed assessment strategies were presented, but uniformly, to my eye, all were primarily motivated by an intent to foster greater student engagement. It was at this point that I found my thoughts turning to Ruth Healey’s keynote at the University of York’s Learning and Teaching Conference in June. I feel that Dr. Healey’s exploration of the semantics of ‘partnership’ is quite apt when considering some of the models of ‘partnership’ that were presented during the conference. Although aspirational, when properly unpicked, many of these processes are situations where students are no more than heavily informed stakeholders. This isn’t to say that dialogic feedback can’t be a two-way street, but it’s undoubtedly challenging to create a framework where it is valid and uncontentious for markers to use these dialogues as a means of developing their practice and becoming more adept in how they construct feedback – especially when your institution mandates anonymous summative marking!

