Errors and feedback > Feedback design

Question 6
What difficulties appear when designing e-assessment tasks that give constructive feedback to students?

Making a computer give constructive feedback on answers, and perhaps on full solutions, is a complex process. How can this be handled, and what difficulties arise?

What motivates this question?

It is a widely shared practical concern, particularly for one of the contributors (in engineering education at the University of Agder's Grimstad campus, autumn 2020).

In the literature, some studies report great enthusiasm for instant, personalised feedback, while others report dissatisfaction with the feedback generated by e-assessment systems. For example, Broughton et al. (2017) report lecturers feeling the feedback given by their system was “not to the standard that [they] desired to give to their students”, including one lecturer who doubted the system was helping her weaker students due to “reservations towards the quality and helpfulness of the feedback” (see also e.g. Delius, 2004; Schofield & Ashton, 2005). Is this a matter of how the particular e-assessment system is designed, or is the problem inherent in automated feedback more generally?

One particular feature to consider when designing constructive feedback is the use of external/explicit feedback compared with inherent/intrinsic/implicit feedback. For example, consider the difference between a message popping up to say an answer is right or wrong, versus a GeoGebra construction in which a student ‘discovers’ an inherent flaw in their work. (For an illustration of this in practice, see this blog post by Dan Meyer.)
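
To make the contrast concrete, below is a minimal sketch of the same wrong answer handled both ways, for a hypothetical task “solve 3x + 2 = 11”. The task and function names are illustrative assumptions, not taken from any particular e-assessment system.

```python
# Hypothetical task: solve 3x + 2 = 11 (so x = 3). Illustrative only.

def explicit_feedback(answer):
    """External/explicit feedback: the system simply states a verdict."""
    return "Correct!" if answer == 3 else "Incorrect, try again."

def implicit_feedback(answer):
    """Inherent/implicit feedback: substitute the student's answer back
    into the equation so they can see the consequence for themselves,
    without being told outright that they are wrong."""
    lhs = 3 * answer + 2
    return (f"With x = {answer}, the left-hand side 3x + 2 evaluates to "
            f"{lhs}; the equation requires it to equal 11.")

print(explicit_feedback(4))  # Incorrect, try again.
print(implicit_feedback(4))  # With x = 4, ... evaluates to 14; ...
```

In the implicit case the flaw is something the student notices for themselves, much as in the GeoGebra example above, rather than a verdict delivered by the system.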

What might an answer look like?

A survey of e-assessment users could identify common difficulties that arise in producing effective feedback, and perhaps relate these to features of different e-assessment systems (especially features that are not shared by all systems).

Focusing on the “constructive” part of the question, it may help to draw on school-based research in which feedback is inherent (Jones & Pratt, 2012) and on game-based learning research (Jay et al., 2019). Contributor Ian Jones will be supervising a project looking at the primary arithmetic app Stick and Split which, while not HE or assessment (sorry!), will involve considering the role of implicit feedback.

References

Broughton, S.J., Hernandez-Martinez, P. & Robinson, C.L. (2017). The effectiveness of computer-aided assessment for purposes of a mathematical sciences lecturer. In M. Ramirez-Montoya (Ed.), Handbook of Research on Driving STEM Learning with Educational Technologies (pp. 427-443). Hershey, PA: IGI Global.

Delius, G.W. (2004). Conservative approach to computerised marking of mathematics assignments. MSOR Connections, 4(3), 42-47.

Jay, T., Habgood, J., Mees, M., & Howard-Jones, P. (2019). Game-based training to promote arithmetic fluency. Frontiers in Education, 4, 118. https://doi.org/10.3389/feduc.2019.00118

Jones, I., & Pratt, D. (2012). A substituting meaning for the equals sign in arithmetic notating tasks. Journal for Research in Mathematics Education, 43(1), 2–33. https://doi.org/10.5951/jresematheduc.43.1.0002

Schofield, D. & Ashton, H. (2005). Effective reporting for online assessment — shedding light on student behaviour. Maths-CAA Series, February 2005. Retrieved from http://icse.xyz/mathstore/node/61.html