Design and implementation choices > Lecturer guidance

Question 34
How can lecturers be informed about how students interact with e-assessment tasks, and how can they be helped to act upon these findings in an effective way?

What motivates this question?

In formative computer-aided assessment, feedback is mostly directed toward students: it gives them information about their performance on a task and is meant to help them improve their competencies. But how can lecturers know how their students use the assessment and how they are progressing in their learning? And even when such information is available to the lecturer, how can (s)he support students in their learning path, during and after the formative assessment? Similar questions hold for task designers: how can they know how students really interact with the formative assessment tools and which intermediate steps students take toward a solution? How can they use this knowledge to improve their task design? Gaining knowledge about what students really think and do while interacting with a digital learning environment is difficult. Learning analytics seems to focus on tracing students’ interactions with the learning tools. Even though learning analytics provides an impression of the progress of individual students and of the class as a whole, it is difficult to relate the traces students leave in their work to their thinking processes and to possible interactions with the lecturer or fellow students during the tasks. It is also not easy to act adequately, and in time, upon such findings.

What might an answer look like?

Here the focus is on identifying mechanisms for informing lecturers and designers about student behaviour. This could be based in part on investigating (e.g. through interviews) what types of data lecturers already use and how - for instance, STACK offers variant-level data on student performance and inputs, but these data are not very accessible. A full answer to the question would seem to require an iterative approach: testing different mechanisms for informing lecturers, and investigating their effectiveness in terms of impact on the lecturers’ practices.
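As a purely illustrative sketch of what a more accessible, lecturer-facing summary of such data might look like (this is not an actual STACK export or API; the column names question, variant, student and correct are assumptions), attempt-level records exported as a CSV could be aggregated per question variant:

```python
# Hypothetical sketch: summarise exported attempt-level e-assessment data
# into a per-variant overview a lecturer could scan quickly.
# The file name and column names are assumptions, not a real STACK format.
import pandas as pd

attempts = pd.read_csv("attempts.csv")  # one row per student attempt

summary = (
    attempts.groupby(["question", "variant"])
    .agg(
        students=("student", "nunique"),     # distinct students who attempted
        attempts=("student", "size"),        # total attempts on this variant
        success_rate=("correct", "mean"),    # proportion of correct attempts
    )
    .reset_index()
    .sort_values("success_rate")
)

# Flag variants where students struggle noticeably more than average,
# so the lecturer can review the task design or intervene in class.
threshold = summary["success_rate"].mean() - summary["success_rate"].std()
summary["flagged"] = summary["success_rate"] < threshold

print(summary.to_string(index=False))
```

Whether a summary of this kind actually changes lecturers’ practices is precisely the empirical question the iterative approach above would need to address.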

Of course, addressing this question relies on knowledge of student behaviour, addressed elsewhere in the agenda (see Q15: How do students engage with automated feedback? What differences (if any) can be identified with how they would respond to feedback from a teacher?), which could be enhanced by:

  • further development of learning analytics methods, and through observational studies (e.g. think-aloud studies with students while they are taking part in formative assessments). Comparison studies of HE practice regarding these aspects of formative computer-aided assessment would seem informative as well.

  • use of eye-tracking software, or asking students to “speak their thoughts” whilst working through an e-assessment exercise.

  • social network analysis methods (Alcock et al., 2020), as well as observation protocols and interviews (Dorko, 2020).

See also the related discussion in the section Errors and Feedback > Student errors.

References

Alcock, L., Hernandez-Martinez, P., Patel, A. G., & Sirl, D. (2020). Study habits and attainment in undergraduate mathematics: A social network analysis. Journal for Research in Mathematics Education, 51(1), 26–49.

Dorko, A. (2020). Red X’s and Green Checks: A Model of How Students Engage with Online Homework. International Journal of Research in Undergraduate Mathematics Education, 6(3), 446–474. https://doi.org/10.1007/s40753-020-00113-w