Overview of the webinar
How might we reflect on the last academic year, drawing on lessons learned from the course cycle to inform changes to teaching activities for the year to come? This was the theme of our July webinar, looking ahead to blended course design and delivery for the next academic year. [Web Link to webinar recording]
The first part of the webinar explored design issues and approaches to the evaluation of blended modules, touching on data collection methods – what’s available to us and how we can plan this into course design and delivery.
In the second part, we considered what the evaluation data may tell us and what actions we should take as a result, discussing some of the key themes for consideration for module planning for the next academic year. The use of evaluation techniques in this way supports a virtuous development cycle – feeding through lessons learned to future course design and delivery, so that we have an appreciation of what works and which aspects of the course need changing. The choice of a cycle is deliberate – to highlight the connections between the design, delivery and review phases of a course, addressing the effectiveness of our course design through a continual process of improvement, rather than a one-off design and delivery process.
Evaluating blended learning
Typically we have found that approaches to the evaluation of blended modules are retrospective in nature – looking back on course activities after they have been completed – as opposed to diagnostic or formative approaches (i.e. learning about students as they engage in course activities). Summative evaluation tends to focus on key themes such as:
- levels of student engagement in online activity, relating to discourse and interaction patterns;
- the suitability of the technology in supporting the targeted learning activity – outcomes, engagement levels and interaction patterns.
Summative evaluation may also review the interrelationship between online and face-to-face elements of the course, considering themes such as the degree to which the online and class-based study methods complement each other – i.e. the reception of the study methods and degree to which they are viewed by students as interrelated.
Whichever approach we take, the evaluation focus should be aligned with the targeted learning outcomes for the course – i.e. what you set out to achieve with the initial design of the course – and consequently linked to the overarching course objectives.
Once we are clear on the evaluation approach and the focus of the inquiry, we can then choose suitable evaluation methods. A wide variety of data collection methods have been employed by staff at York, ranging from quantitative measures involving survey data, course statistics and contribution statistics, to interpretive methods based on focus groups and reflective activities performed by course participants – for example, personal learning journals, reflective diaries (video or blog), or self-assessment pro formas that help students to assess their own progress with the learning tasks.
We must be aware, though, that the visibility of student learning online may be restricted to the formal learning space where students present their finalised work, unless they are required to work within this space during the formative phases of the task. The context of learning – how, when and where students undertake their learning activities – may also be hidden from us, and may well affect learning outcomes in terms of the performance of tasks and the reception of the learning methods. These are points to bear in mind when drawing conclusions on the effectiveness of the learning design.
Drawing lessons learned from our data
Once the data has been collected and analysed, what do we do with it? What sort of issues should we be thinking about when considering module planning for next year? Depending on what the data tells us about levels of student engagement with the targeted online learning tasks, we might wish to review:

- the structure and sequencing of the class-based and online tasks, and the relevance of the assessment plan;
- the design of the tasks and course materials – whether they supported different levels of learning;
- the timing of tasks and their sequencing with other modules that require students to perform online tasks;
- the scope and levels of instructional support afforded to learners in the preparation and performance of the course activities.

Attention to these design issues may help us in the long run to engage students more effectively with the targeted learning activities and to achieve the outcomes that we have identified for the module.
Want to know more?
If you would like to review any of the points from the webinar in more detail, please take a look at the York TEL Handbook – specifically section 7, which covers the rationale for evaluation, evaluation methods, planning (including a planning template for a blended module) and the interpretation of evaluation data. You will also find links to other useful resources, such as guidance on survey design and how to run focus groups.
Further questions on evaluation may be directed to the E-Learning Development Team at: email@example.com