Design and implementation choices > Roles of e-assessment in course design
Question 27
How can formative e-assessments improve students’ performance in later assessments?
We believe that practice is essential to learning mathematics. We create formative e-assessments to give students that practice. Can we find evidence that this really helps students learn?
What motivates this question?
Answering this question may help teachers identify the most effective ways to help their students learn. It may also help justify institutional investment in e-assessment, whether in systems or in staff time to develop and maintain questions.
E-assessment can encourage students to practise, which is important because mathematics “needs to be done to be learned” (Greenhow, 2015). Opportunities to practise with instant feedback can draw the student into a positive interaction that helps them construct their learning, possibly motivated through gamification. Success at e-assessment can build student confidence, and indeed students may feel more secure attempting answers with a computer, due to “less embarrassment in giving a foolish answer when it is only the machine that ‘knows’” (Lawson, 2002).
Many studies find that performance in e-assessment correlates positively with performance in other forms of assessment. However, this may not reveal a causal relationship. Is it simply comfortable to go over things you already know, with the computer providing validation, so that the more able students are more likely to engage? The finding of Hannah et al. (2014) that further practice with e-assessment was a negative predictor of exam performance is troubling, and the resulting question of whether e-assessment is just ‘busy work’, taking up time that could be used more effectively, seems to merit investigation.
What might an answer look like?
It is extremely difficult to distinguish two scenarios:
- The formative assessments help the student learn.
- The students who already know the material are more likely to engage with the formative exercises.
A convincing answer would show that it is the former, not the latter, but this is not easy. One approach to untangling this might be statistical modelling (e.g. Lowe & Mestel, 2020). Another approach is to conduct randomised controlled trials, but these can be difficult to organise.
What should formative quizzes be compared with? Are we comparing 30 minutes attempting a formative quiz with doing nothing, or with 30 minutes spent on some other relevant activity?
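To illustrate the statistical-modelling route, here is a minimal sketch (not drawn from any of the studies cited) of a regression that asks whether engagement with formative quizzes predicts exam performance once prior attainment is controlled for. The data file `cohort.csv` and its column names (`prior_grade`, `quiz_attempts`, `exam_score`) are hypothetical.

```python
# A minimal sketch of the statistical-modelling approach, assuming a
# hypothetical dataset with one row per student.  The file name and
# column names are illustrative, not taken from any cited study.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cohort data: prior attainment, formative quiz engagement,
# and end-of-module exam performance.
df = pd.read_csv("cohort.csv")  # columns: prior_grade, quiz_attempts, exam_score

# Regress exam performance on quiz engagement, controlling for prior
# attainment.  A positive coefficient on quiz_attempts after the control
# is evidence against "more able students simply engage more", though it
# cannot rule out unobserved confounders such as motivation.
model = smf.ols("exam_score ~ quiz_attempts + prior_grade", data=df).fit()
print(model.summary())
```

Even with such a control, a positive coefficient on quiz engagement would only weaken, not rule out, the self-selection explanation, which is why a randomised controlled trial remains the stronger (if harder to organise) design.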
Related questions
- This question is closely related to Q36: To what extent do existing e-assessments provide reliable measures of mathematical understanding, as might otherwise be measured by traditional exams?
- The issue of how e-assessments can improve students’ performance in end-of-module assessments is connected to Q28: How can regular summative e-assessments support learning?
- There is discussion of whether repeated practice with instant feedback is positive for learning in Q14: To what extent does repeated practice on randomized e-assessment tasks encourage mathematics students to discover deep links between ideas?
- It would be worthwhile to know how students engage with the formative e-assessments in practice, as in Q13: How do students interact with an e-assessment system?
- Decisions about formative vs summative assessments depend on the capabilities of e-assessment: Q29: What are suitable roles for e-assessment in formative and summative assessment?
References
Greenhow, M. (2015). Effective computer-aided assessment of mathematics; principles, practice and results. Teaching Mathematics and its Applications, 34(3), 117-137. https://doi.org/10.1093/teamat/hrv012
Hannah, J., James, A. & Williams, P. (2014). Does computer-aided formative assessment improve learning outcomes? International Journal of Mathematical Education in Science and Technology, 45(2), 269-281. https://doi.org/10.1080/0020739X.2013.822583
Lawson, D. (2002). Computer-aided assessment in mathematics: Panacea or propaganda? International Journal of Innovation in Science and Mathematics Education, 9(1). Retrieved from https://openjournals.library.sydney.edu.au/index.php/CAL/article/view/6095
Lowe, T. W., & Mestel, B. D. (2020). Using STACK to support student learning at masters level: a case study. Teaching Mathematics and its Applications: An International Journal of the IMA, 39(2), 61-70. https://doi.org/10.1093/teamat/hrz001