Interpreting evaluation data
You will need to consider how the evidence you have collected may be combined to form a rich picture of student activity within your course. As previously mentioned, interviews or focus groups may serve as a way of probing patterns of activity recorded through activity logs and course statistics, providing a layer of interpretation for the trends that you have identified through statistical data sources. Both qualitative and quantitative data can therefore contribute to your evaluation.
You should be careful not to take data at face value. If the evaluation instruments (e.g. survey tools) have been pre-tested prior to their use within the course, you may have a higher level of confidence in their internal validity, i.e. that they measure what they are supposed to measure. However, the timing of the delivery of a survey can influence the nature of the feedback. For example, the proximity of survey completion to the distribution of final marks may lead to a halo or horns effect, with the reception of the study methods strongly influenced by assessment performance. Triangulating results with other data sources may help to confirm, or throw into doubt, the trends that you have identified.
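To make the idea of triangulation concrete, the sketch below cross-checks two hypothetical data sources for the same cohort: a per-student survey rating and a count of activity-log events. All student identifiers, figures and thresholds are invented for illustration; the point is simply that cases where the sources diverge are the ones worth probing further, for example through an interview.

```python
# Hypothetical survey results: satisfaction rating on a 1-5 scale.
survey = {"s01": 5, "s02": 2, "s03": 4, "s04": 1}

# Hypothetical activity-log summary: number of logged course-site events.
activity = {"s01": 120, "s02": 115, "s03": 8, "s04": 10}

def triangulate(survey, activity, low_rating=2, low_events=20):
    """Flag students whose two data sources diverge, and so merit
    follow-up rather than being read at face value."""
    flags = {}
    # Only compare students present in both data sources.
    for student in survey.keys() & activity.keys():
        rating, events = survey[student], activity[student]
        if rating <= low_rating and events >= low_events:
            flags[student] = "dissatisfied but highly active"
        elif rating > low_rating and events < low_events:
            flags[student] = "satisfied but little logged activity"
    return flags

print(triangulate(survey, activity))
```

In this invented data, s02 reports low satisfaction despite heavy logged activity, and s03 reports high satisfaction despite very little; both are candidates for qualitative follow-up, whereas s01 and s04 show consistent signals across the two sources.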
You should also treat qualitative data with caution. It is often hard to get representative samples of a cohort for focus groups, and volunteers may reflect the most motivated students within the class, presenting a one-sided view of the learning experience. For international cohorts, cultural factors may also come into play, with some groups of students reluctant to offer negative feedback on the course design or the level of instructional support for their learning.
Finally, you will need to question whether the range of data collected gives you a comprehensive picture of the learning that has taken place. Do you have the complete picture? Where are the gaps? The visibility of student learning for a group task may be restricted to the formal learning space where students present their finalised work, unless they are required to work within a designated space during the formative phases of the task. Students may well opt to use multiple communication tools outside the formal learning space (for example, outside Yorkshare). This complicates the tracking of student learning, with activity logs capturing only part of students' online activity.
The context of learning, in terms of how, when and where students undertake their learning activities, may also be hidden, and may well affect learning outcomes, both in the performance of tasks and in the reception of the learning methods. These are points to bear in mind when drawing conclusions on the effectiveness of the learning design.