Errors and feedback > Emulating teacher feedback
Question 8
How useful for students’ long-term learning is feedback that gives a series of follow-up questions, from a decision tree, versus a single terminal piece of feedback?
What motivates this question?
The motivation is a desire to replicate, as far as possible, some of the behaviours of expert teachers, who often seek to limit their interventions to the minimum necessary (Foster, 2014) and to engage students in conversation, rather than delivering all of the relevant feedback at once. Thus, the question is about the extent to which students’ learning can be better supported by giving feedback through a “process of scaffolding and fading, of moving from directed through indirect prompts to spontaneous use by the student” (Mason, 2000, p. 99).
This has some connection with previous work on adaptive assessment (e.g. the DIAGNOSYS system developed at Newcastle in the 1990s; see Appleby, 2000), in which the feedback and subsequent tasks presented to the student are selected by following a decision tree.
What might an answer look like?
It could be that designing and populating a decision tree for realistic scenarios involves a prohibitive amount of work. Indeed, reflecting on a decade of development work on an automated “intelligent tutoring system”, Anderson et al. (1995) acknowledged that they had “totally abandoned our original conception of [automatic] tutoring as human emulation”.
Alternatively, it might be that in areas where a few common misconceptions dominate, the design work is relatively straightforward.
Addressing the question may involve:
- a survey of what topics might be amenable;
- a “framework” for implementing questions on a given topic, e.g. guidance of the form “you should include feedback on this possible misconception” (see Wake et al., 2016, Table 2), or a template for decision trees (a minimal sketch of such a template is given after this list);
- experimental work to compare approaches;
- investigating whether particular students need less support over time (i.e. whether they are internalising more of the decision tree for themselves).
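To make the idea of a decision-tree template more concrete, the following is a minimal sketch in Python. It is not drawn from any existing e-assessment system: the node structure, response categories, and example prompts are all illustrative assumptions. Each node holds a follow-up prompt, and its branches are keyed by categories of student response, so that a sequence of increasingly direct prompts can be expressed before any terminal feedback is reached.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

# Illustrative sketch only: FeedbackNode, the response-category labels and the
# example prompts are assumptions, not features of any existing system.

@dataclass
class FeedbackNode:
    prompt: str  # follow-up question or hint shown to the student
    # Branches keyed by a classified response category, e.g. a named misconception.
    children: Dict[str, "FeedbackNode"] = field(default_factory=dict)

def next_prompt(node: FeedbackNode, response_category: str) -> Optional[FeedbackNode]:
    """Select the next follow-up prompt for a classified student response.

    Returns None when the tree has no further prompts, i.e. the point at which
    a single 'terminal' piece of feedback would otherwise have been given.
    """
    return node.children.get(response_category, node.children.get("other"))

# Example tree for a differentiation task, with one common misconception branch.
tree = FeedbackNode(
    prompt="What rule applies when differentiating a product of two functions?",
    children={
        "used-product-rule": FeedbackNode(
            prompt="Good. Now check the sign of each term in your answer."
        ),
        "differentiated-factors-separately": FeedbackNode(
            prompt=(
                "Try d/dx[x*x] both ways: does differentiating each factor "
                "separately give the same answer as differentiating x^2?"
            ),
            children={
                "other": FeedbackNode(
                    prompt=(
                        "The product rule states (fg)' = f'g + fg'. "
                        "Apply it to your original expression."
                    )
                ),
            },
        ),
    },
)

if __name__ == "__main__":
    # Walk one path through the tree, as an authoring-time check.
    node: Optional[FeedbackNode] = tree
    for category in ["differentiated-factors-separately", "other"]:
        node = next_prompt(node, category)
        if node is None:
            break
        print(node.prompt)
```

The “other” fallback branch illustrates one design decision such a template would need to settle: when a response does not match any anticipated misconception, the tree can fall back to a more direct prompt rather than ending the interaction immediately.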
Related questions
- This style of feedback is an example of an approach relevant to Q3: What are the approaches to detecting and feeding back on students’ errors?
- The content of feedback is considered by Q5: What are the linguistic features of feedback that help students engage with and use feedback in an online mathematical task at hand and in future mathematical activities?
- Taking account of prior knowledge of the student links with Q7: How can feedback that is dynamically tailored to the student’s level of mathematical expertise help a student use feedback on mathematical tasks effectively?
- This may be one of the “circumstances” referred to in Q12: In what circumstances is instant feedback from automated marking preferable to marking by hand?
- The student response is considered in Q15: How do students engage with automated feedback? What differences (if any) can be identified with how they would respond to feedback from a teacher?
References
Anderson, J. R., Corbett, A. T., Koedinger, K. R., & Pelletier, R. (1995). Cognitive tutors: Lessons learned. The Journal of the Learning Sciences, 4(2), 167–207.
Appleby, J. C. (2000). What can we learn from computer-based diagnostic testing? Proceedings of TIME 2000: An International Conference on Technology in Mathematics Education (Auckland, NZ, December 11–14, 2000), 103–110. Retrieved from https://files.eric.ed.gov/fulltext/ED474050.pdf#page=111
Foster, C. (2014). Minimal interventions in the teaching of mathematics. European Journal of Science and Mathematics Education, 2(3), 147–154. https://doi.org/10.30935/scimath/9407
Mason, J. (2000). Asking mathematical questions mathematically. International Journal of Mathematical Education in Science and Technology, 31(1), 97–111. https://doi.org/10.1080/002073900287426
Wake, G., Swan, M., & Foster, C. (2016). Professional learning through the collaborative design of problem-solving lessons. Journal of Mathematics Teacher Education, 19(2–3), 243–260.