Errors and feedback > Optimising feedback efforts

Question 12
In what circumstances is instant feedback from automated marking preferable to marking by hand?

Instant feedback is often presented as a considerable advantage of e-assessment, but the level and quality of the feedback produced are sometimes in doubt, raising questions about how effectively such feedback promotes learning.

What motivates this question?

E-assessment enables instant feedback on student work. Feedback given at the moment a mistake is made is potentially effective because the mistake is corrected while the student is still thinking about the work, rather than weeks later, as can happen with human marking. Such feedback can also present a correct solution to the particular randomised problem the student encountered. However, some question whether automated feedback helps learning, especially for weaker students, because it may appear to students as simply another worked example, like those found in taught material, which students may struggle to interpret (Broughton et al., 2017; Robinson et al., 2012).
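As a concrete, purely hypothetical illustration of this mechanism, the sketch below shows how an e-assessment item might pair a randomised question with instant marking and a worked solution specific to the variant the student saw; the item, function names and marking rule are illustrative assumptions rather than features of any of the systems discussed here.

```python
import random


def generate_item(seed=None):
    """Create a randomised version of a simple differentiation question,
    together with the correct answer and a worked solution for that variant."""
    rng = random.Random(seed)
    a = rng.randint(2, 9)   # coefficient
    n = rng.randint(2, 5)   # power
    question = f"Differentiate f(x) = {a}x^{n} with respect to x."
    answer = f"{a * n}x^{n - 1}"
    solution = f"By the power rule, f'(x) = {a}*{n}*x^{n - 1} = {answer}."
    return question, answer, solution


def instant_feedback(response, answer, solution):
    """Mark the response immediately and return feedback that includes the
    worked solution for the specific randomised problem the student saw."""
    if response.replace(" ", "") == answer:
        return "Correct."
    return f"Incorrect. The correct answer is {answer}. {solution}"


if __name__ == "__main__":
    question, answer, solution = generate_item(seed=1)
    print(question)
    print(instant_feedback("10x^3", answer, solution))  # a sample student response
```

Feedback of this kind amounts to a worked example for the student's particular variant, which is exactly the form that Broughton et al. (2017) and Robinson et al. (2012) suggest weaker students may struggle to interpret.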

Additionally, it is uncertain whether instant feedback encourages students to take responsibility for their learning (Broughton, Hernandez-Martinez and Robinson, 2013), or whether a focus on submitting work so that the e-assessment system can immediately judge whether the answer is correct reduces “the need for the student to trust his/her own answer” (Rønning, 2017).

Rønning (2017) reports that students prefer the richer feedback available from human-marked work and take greater care when writing mathematics that will be examined by a person, learning more from presenting their argument to an assumed reader than they would from automated marking.

What might an answer look like?

A survey of students’ use of e-assessment could investigate their reported preferences.

An experiment could be designed in which students, at the end of an e-assessment, choose between receiving automated feedback immediately or sending their responses for human-marked feedback a few days later, to see whether practice matches reported preferences.

References

Broughton, S.J., Hernandez-Martinez, P. & Robinson, C.L. (2013). A definition for effective assessment and implications on computer-aided assessment practice. In: A.M. Lindmeier & A. Heinze (Eds.), 37th Conference of the International Group for the Psychology of Mathematics Education, Kiel, Germany, vol. 2 (pp. 113-120). Berlin: The International Group for the Psychology of Mathematics Education.

Broughton, S.J., Hernandez-Martinez, P. & Robinson, C.L. (2017). The effectiveness of computer-aided assessment for purposes of a mathematical sciences lecturer. In: M. Ramirez-Montoya (Ed.), Handbook of Research on Driving STEM Learning with Educational Technologies (pp. 427-443). Hershey, PA: IGI Global.

Robinson, C.L., Hernandez-Martinez, P. & Broughton, S. (2012). Mathematics lecturers’ practice and perception of computer-aided assessment. In: P. Iannone & A. Simpson (Eds.), Mapping University Mathematics Assessment Practices (pp. 105-117). Norwich: University of East Anglia.

Rønning, F. (2017). Influence of computer-aided assessment on ways of working with mathematics. Teaching Mathematics and its Applications, 36(2), 94-107. https://doi.org/10.1093/teamat/hrx001