Errors and feedback > Optimising feedback efforts

Question 10
Under what circumstances is diagnosing errors worth the extra effort, as compared with generally addressing errors known to be typical?

Diagnosing and responding in detail to students’ specific errors and misconceptions has come to be seen as highly valuable, and e-assessment systems offer features that can facilitate this. But might it be more efficient overall to forgo diagnosing errors after the fact, and instead “treat” everyone by addressing known typical errors in other ways?

What motivates this question?

Recent research suggests that “elaborated feedback” is more useful than simple correct/incorrect feedback (Attali & van der Kleij, 2017). Moreover, results from a recent small-scale study (Pinkernell et al., 2020) suggest that tailored feedback based on error analysis can be more effective than a generic model solution.

However, devising this feedback and implementing it in an e-assessment system requires up-front effort from the teacher. That effort could perhaps be better spent in other ways.

Moreover, such specific feedback may be most valuable where the difficulty is in some way unusual and requires a bespoke intervention – something perhaps better achieved through one-to-one interaction than mediated through an e-assessment system.

Although the personalised nature of e-assessment feedback may be advantageous, it may be that detailed feedback does not always help students to learn. Rønning (2017) found that 60% of student questionnaire respondents agreed that they learn a lot from doing e-assessment problems, but notes that this figure is low compared with other learning resources. Robinson et al. (2012) report lecturers’ concern that, while e-assessment confirms to “the most able” that they “have carried out the procedure correctly”, it may “struggle to provide the feedback necessary to facilitate understanding in weaker students”, because the feedback “isn’t much more than another worked example, as you find in the lecture notes, or as you find in the textbooks”.

What might an answer look like?

An answer would need to establish what constitutes “extra effort”, which may depend on the lifetime of the quiz and the expected number of attempts. It may also be the case that some diagnosis can happen with very little extra effort, e.g. the way that some STACK “answer tests” provide feedback on common errors. Similarly, e-assessment systems may have features to report on frequent wrong answers, which could help the teacher to diagnose common errors.
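As a concrete illustration of this kind of diagnosis-by-matching, here is a minimal sketch in Python using SymPy. It is not how STACK itself is implemented; the task, the catalogue of wrong answers, and the feedback messages are all invented for illustration. The idea is simply that tailored feedback reduces to matching a response against the correct answer and a small list of known typical errors, falling back to generic feedback otherwise.

```python
from sympy import symbols, sin, cos, simplify, sympify

x = symbols("x")

# Hypothetical task: differentiate sin(x**2). Correct answer by the chain rule:
CORRECT = 2 * x * cos(x**2)

# Known typical errors mapped to tailored feedback (invented examples).
KNOWN_ERRORS = {
    cos(x**2): "It looks like the chain rule was not applied: "
               "remember to multiply by the derivative of the inner function.",
    2 * x * sin(x**2): "The derivative of sin is cos, not sin.",
}

def feedback(student_input: str) -> str:
    """Return tailored feedback if the response matches a known typical error,
    otherwise fall back to generic correct/incorrect feedback."""
    try:
        ans = sympify(student_input)
    except Exception:
        return "Your answer could not be interpreted; please check the syntax."
    if simplify(ans - CORRECT) == 0:
        return "Correct!"
    for wrong, message in KNOWN_ERRORS.items():
        if simplify(ans - wrong) == 0:
            return message  # diagnosis succeeded: give error-specific feedback
    return "Incorrect; compare your answer with the model solution."

print(feedback("cos(x**2)"))      # tailored chain-rule feedback
print(feedback("2*x*cos(x**2)"))  # Correct!
```

The sketch makes the cost trade-off visible: the per-question effort lies almost entirely in curating the catalogue of known errors and writing the accompanying messages, not in the matching itself.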

The question asks “under what circumstances?” and these may include:

  • Mode of study. Diagnosing errors may matter more for students without easy access to a teacher, such as distance learners.
  • Topic. Some topics have a small number of well-defined errors/misconceptions; for others we may simply not know what the difficulties are, or we may know that they are very diverse.

Determining whether it is “worth” the extra effort could be based on quantitative measures (e.g. studying whether the presence of diagnostic feedback during practice leads to better performance on a subsequent test) or qualitative analysis (e.g. of students’ perceptions of the usefulness of the feedback).
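For the quantitative route, one minimal sketch of such a comparison, assuming hypothetical data, is a two-sample t-test on post-test scores between a group that practised with diagnostic feedback and one that received only a model solution. All numbers below are invented placeholders, and a real study would of course need an appropriate design and sample size.

```python
from scipy import stats

# Hypothetical post-test scores (out of 20) under two practice conditions.
diagnostic_group = [14, 17, 12, 18, 15, 16, 13, 17]  # tailored, error-specific feedback
generic_group = [12, 14, 11, 15, 13, 12, 14, 10]     # model solution only

# Two-sample t-test: does diagnostic feedback during practice
# lead to better performance on the subsequent test?
t_stat, p_value = stats.ttest_ind(diagnostic_group, generic_group)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```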

References

Attali, Y., & van der Kleij, F. (2017). Effects of feedback elaboration and feedback timing during computer-based practice in mathematics problem solving. Computers & Education, 110, 154–169. https://doi.org/10.1016/j.compedu.2017.03.012

Pinkernell, G., Gulden, L., & Kalz, M. (2020). Automated feedback at task level: Error analysis or worked out examples – which type is more effective? Proceedings of the 14th International Conference on Technology in Mathematics Teaching – ICTMT 14: Essen, Germany, 221. https://doi.org/10/ggw55s

Robinson, C. L., Hernandez-Martinez, P., & Broughton, S. (2012). Mathematics lecturers’ practice and perception of computer-aided assessment. In P. Iannone & A. Simpson (Eds.), Mapping university mathematics assessment practices (pp. 105–117). Norwich: University of East Anglia.

Rønning, F. (2017). Influence of computer-aided assessment on ways of working with mathematics. Teaching Mathematics and its Applications, 36(2), 94–107. https://doi.org/10.1093/teamat/hrx001