ALT-C 2015: Engaging learners in computer-based summative exams

How do we prepare university students for high-stakes computer-based testing? What steps should instructors take to ensure that students can perform to the best of their abilities? These are some of the questions that my colleague Zoe Handley and I have been considering in a research study looking at the experiences of postgraduate education students taking high-stakes computer-based examinations. We reported on our findings at last week’s Association for Learning Technology conference in Manchester.

The institutional adoption of computer-based testing as an assessment method is now well advanced across the UK higher education sector. Last year’s UCISA Technology Enhanced Learning Survey revealed that high-stakes testing is now a mainstream activity, with nine higher education institutions reporting the use of computer-based testing in 50% or more of the courses that they deliver. The rapid uptake of computer-based assessment – typically in the form of defined-response multiple-choice question items – does not, however, appear to have been matched by extensive research into the impact of this assessment method on the student learning experience. Hillier (2014) has observed that only a limited number of studies have been conducted on computer-based assessment from the perspective of students, and that these have tended to take the form of post-intervention reviews, offering little sense of how student experiences and attitudes may evolve as they are exposed to this form of assessment.

The research that we have been conducting is focused on postgraduate taught students – a community that is often ignored in the literature on computer-based testing. Through a longitudinal study which has tracked student experiences with a computer-based examination on research methods across two cohorts of MA international students, we have invited students to reflect on their experiences and have drawn out from their responses a range of factors which appear critical to their acceptance of this assessment method.

Socialising students to the aims and drivers of computer-based testing emerges as a key responsibility for instructors, and appears to help reduce students’ anxiety levels. This is particularly important for postgraduate taught students, who are new to the university and have not had the time and space to become accustomed to institutional assessment practices, even if they have encountered e-assessment in other forms elsewhere. Although there is no hard evidence that digital literacy is a discriminating factor in performance in computer-based tests, our findings show that students perceive it to be an issue, and are concerned that mature students and those returning to education are at a disadvantage compared with digital natives, whom they believe to be naturally suited to this form of assessment. Reassuring students about the equity of computer-based testing for different types of student, and about the suitability of the method to the discipline (knowledge and skills) being assessed, is therefore a key responsibility for instructors when orienting students.

One of the key findings to emerge from our study relates to the preparation of students for online exams. On one level, our study confirms previous research in highlighting the need for students to be given plenty of opportunities to practise the type of questions that they will encounter in the real exam, with formative assessment activities aligned to the format of the summative assessment. However, looking at the student experience more closely, we observed that giving students practice opportunities is not in itself sufficient unless they are also encouraged to focus on their approach to a computer-based exam and to think about how tackling it differs from tackling a pen-and-paper test. The students who struggled most with computer-based testing in our study experienced frustration when attempting to transfer paper-based exam techniques to the computer environment (e.g. marking up and annotating questions on screen and underlining key parts of the question).

Our findings show that students need time and space to reflect on their approach and to adapt their organisational and cognitive strategies to suit the affordances of the computer-based environment in which they are working. This relates to their exam craft, rather than to keyboarding skills or general levels of digital literacy. It encompasses their test-taking techniques for computer-based assessment, such as question selection and the memory aids they employ when formulating responses to open-ended questions. It also touches on their time management skills and how they navigate the exam environment, allocating time to different question types – particularly when facing a combination of open (free-text) and closed (defined-response) question items.

Drawing out the lessons from this study, we have developed a Learner Engagement with e-Assessment Practices (LEe-AP) framework for instructors, which highlights some of the engagement issues that may arise with the introduction of computer-based assessment and recommends actions that can be taken to guide students and prepare them effectively for the exam. The framework was presented in the working paper accompanying our presentation at ALT-C, which is available for viewing here [pdf]. It has subsequently been presented in a research paper in Research in Learning Technology (Walker & Handley, 2016), also available for viewing here.

The framework covers socialisation measures and preparatory work with students on test-taking and revision strategies, and goes on to address the organisation and presentation of questions to students via the user interface. We will be using this framework to inform future preparations for testing, and would be interested to hear how transferable the principles are to other institutional contexts in helping students to make the transition from paper-based to computer-based assessment effectively.

References

Hillier, M. (2014). The very idea of e-Exams: Student (pre)conceptions. In B. Hegarty, J. McDonald, & S.-K. Loke (Eds.), Rhetoric and Reality: Critical perspectives on educational technology. Proceedings ascilite Dunedin 2014 (pp. 77-88). http://ascilite.org/conferences/dunedin2014/files/fullpapers/91-Hillier.pdf

Walker, R., & Handley, Z. (2015). Engaging learners in computer-based summative exams: Reflections on a participant-informed assessment design. Paper presented at ALT-C 2015: Shaping the future of learning together, Manchester, 8-10 September 2015.

Walker, R., & Handley, Z. (2016). Designing for learner engagement with computer-based testing. Research in Learning Technology, 24. http://www.researchinlearningtechnology.net/index.php/rlt/article/view/30083
