Design and implementation choices > Randomisation

Question 26
When writing multiple choice questions, is student learning enhanced more by distractors based on common errors or by randomly generated distractors?

Standard advice says that distractors should be based on common errors (e.g. Gierl et al., 2017). But does this apply to numerical mathematics items?

What motivates this question?

Greenhow (2008) describes distractors, or mal-rules, as “consistent but incorrect methods” used by students when answering questions. Producing plausible distractors is difficult (Lawson, 2002), as is anticipating student errors (Walker et al., 2015). It could therefore simplify question authoring if randomly generated distractors turned out to be equally (or more) effective.

Making the distractors essentially random numbers could act as immediate feedback for students who make a common error: not seeing their answer listed as an option gives them the chance to fix their working. Students might therefore think again and correct errors during the test, perhaps leading to better learning overall. On the other hand, this could make the assessment less informative (whether used formatively or summatively) by obscuring the prevalence of misconceptions. In particular, it would preclude discussion with students about the distractors (“why is that a plausible answer?”, “what misconception would lead to that answer?”). It could also lead to confusion and frustration for students – if common errors are not listed, a student making one of those errors might assume there is a mistake in the question!
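To make the contrast concrete, here is a minimal sketch in Python of the two strategies for a numerical item (differentiate f(x) = xⁿ and evaluate at x = a). The particular mal-rules and the range used for the random distractors are illustrative assumptions, not drawn from any specific assessment system.

```python
import random

def deliberate_distractors(n, a):
    """Distractors derived from common errors (mal-rules)."""
    correct = n * a ** (n - 1)
    distractors = [
        a ** (n - 1),   # mal-rule: dropping the factor of n
        n * a ** n,     # mal-rule: failing to reduce the exponent
        a ** n,         # mal-rule: evaluating f rather than f'
    ]
    return distractors, correct

def random_distractors(n, a, k=3):
    """Distractors drawn at random from a plausible numerical range."""
    correct = n * a ** (n - 1)
    distractors = set()
    while len(distractors) < k:
        candidate = random.randint(1, 2 * correct)
        if candidate != correct:  # never offer the correct answer twice
            distractors.add(candidate)
    return sorted(distractors), correct

print(deliberate_distractors(3, 2))  # ([4, 24, 8], 12) -- each option reflects an error
print(random_distractors(3, 2))      # e.g. ([3, 17, 22], 12) -- varies per run
```

Under the random strategy, a student who makes one of the common errors will (with high probability) not find their answer among the options – the implicit feedback mechanism discussed above.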

What might an answer look like?

This could be investigated experimentally, comparing student performance on multiple-choice questions (MCQs) with different types of distractors: random (R) or deliberate (D). The experiment could compare the relative performance of students on the R and D items, with scores on the R items expected to be higher due to the immediate feedback mechanism. It may also be worthwhile to gather students’ written working, to help identify whether students exposed to R items made a common error that they were then able to correct. To get a longer-term view of any impact on students’ learning, it would also be worth investigating students’ subsequent performance in assessments on the same topics.
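As a sketch of the quantitative comparison, assuming a within-student design in which each student answers both R and D items (the column names and toy data below are purely illustrative):

```python
import pandas as pd
from scipy import stats

# Hypothetical item-level response data: one row per student-item attempt
responses = pd.DataFrame({
    "student_id": [1, 1, 2, 2, 3, 3],
    "item_type":  ["R", "D", "R", "D", "R", "D"],
    "score":      [1, 0, 1, 1, 0, 0],
})

# Mean score per student on each item type, one row per student
by_student = responses.pivot_table(
    index="student_id", columns="item_type", values="score", aggfunc="mean"
)

# Paired comparison of each student's R and D means
t, p = stats.ttest_rel(by_student["R"], by_student["D"])
print(f"t = {t:.2f}, p = {p:.3f}")  # a higher R mean would be consistent with the feedback mechanism
```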

Any such study should also include qualitative research into the experience of students and teachers, particularly around whether random distractors cause confusion or frustration for students who make an error and do not see their answer listed as an option.

References

Gierl, M. J., Bulut, O., Guo, Q., & Zhang, X. (2017). Developing, Analyzing, and Using Distractors for Multiple-Choice Tests in Education: A Comprehensive Review. Review of Educational Research, 87(6), 1082–1116. https://doi.org/10.3102/0034654317726529

Greenhow, M. (2008). Mathletics – a suite of computer-assisted assessments. MSOR Connections, 8(3), 7–10.

Lawson, D. (2002). Computer-aided assessment in mathematics: Panacea or propaganda? International Journal of Innovation in Science and Mathematics Education, 9(1). https://openjournals.library.sydney.edu.au/index.php/CAL/article/view/6095

Walker, P., Gwynllyw, D. R., & Henderson, K. L. (2015). Diagnosing student errors in e-Assessment questions. Teaching Mathematics and its Applications, 34(3), 160–170. https://doi.org/10.1093/teamat/hrv010