Design and implementation choices > Randomisation

Question 25
What effect does the use of random versions of a question (e.g. using parameterised values) have on the outcomes of e-assessment?

Many e-assessment systems permit the parameterisation of questions, and hence the ability to present several variants of each question. Are such variants of value to students?
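To make this concrete, a parameterised question can be viewed as a template together with a generator of random parameter values. The following minimal Python sketch (an illustration only, not the implementation of any particular e-assessment system; the template and parameter ranges are invented) produces a fresh variant of a differentiation question on each call:

```python
import random

def generate_variant(rng: random.Random) -> dict:
    """Generate one variant of a 'differentiate a polynomial' question.

    The template, parameter ranges and answer format are illustrative
    assumptions, not taken from any particular e-assessment system.
    """
    a = rng.randint(2, 9)   # leading coefficient
    n = rng.randint(2, 5)   # exponent
    question = f"Differentiate f(x) = {a}x^{n} with respect to x."
    answer = f"{a * n}x^{n - 1}"   # d/dx (a x^n) = a n x^(n-1)
    return {"question": question, "answer": answer}

# Each student (or each attempt) can be served an independent variant.
rng = random.Random()   # unseeded: a fresh variant every time
print(generate_variant(rng))
```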

What motivates this question?

The use of randomised parameters in questions has been suggested to have several advantages, including:

  • allowing students to practise the type of question as often as they feel necessary.
  • avoiding “bad learning behaviour in which students just learn the correct answers by syntax” (Schwinning et al., 2015).
  • allowing one question to be set in different scenarios (Greenhow, 2015), by randomising words rather than mathematical properties.
  • saving staff time in the longer term: the increased initial effort of producing randomised questions can pay off, particularly in institutions that only permit the reuse of e-assessment questions from year to year if they are parameterised with sufficient variants.

However, the case for randomising e-assessment items is not clear-cut:

  • Randomisation introduces additional complexity, with increased chances of errors, bugs and unhandled boundary cases (see the sketch after this list).
  • Care needs to be taken to ensure that the random variants are of a similar difficulty level, particularly for summative use.
  • While parameterisation could potentially address the issue of plagiarism, this has not yet been established.
  • It is unclear whether the increased complexity improves student learning/mastery of the content. For example, students may only ever see a single instance of a parameterised problem.
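The first two concerns can be illustrated with a small Python sketch (the question template and parameter ranges are made up for illustration): naively randomising coefficients produces variants of very different character, whereas generating the question from its intended answer keeps variants comparable.

```python
import random

def naive_variant(rng: random.Random) -> str:
    # Naive randomisation: draw coefficients directly.  Depending on the
    # draw, x^2 + bx + c = 0 may have two integer roots, irrational roots,
    # a repeated root, or no real roots at all -- variants of very
    # different difficulty, and boundary cases marking code may mishandle.
    b, c = rng.randint(-5, 5), rng.randint(-5, 5)
    return f"Solve x^2 + ({b})x + ({c}) = 0"

def constrained_variant(rng: random.Random) -> tuple[str, list[int]]:
    # Safer: construct the question from the intended answer, so every
    # variant has two distinct integer roots and comparable difficulty.
    r1 = rng.randint(-5, 5)
    r2 = rng.choice([r for r in range(-5, 6) if r != r1])
    b, c = -(r1 + r2), r1 * r2          # expand (x - r1)(x - r2)
    return f"Solve x^2 + ({b})x + ({c}) = 0", sorted([r1, r2])
```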

If the impact of randomising questions is found to be negligible, then discontinuing the creation of randomised variants could significantly reduce the work of designing e-assessment items and allow effort to be spent more productively elsewhere.

An alternative to parameterised questions (those with randomised parameters) is random selection, in which each student's question is drawn from a question bank. Random selection is often seen as less sophisticated, because fewer question variants are typically produced; however, it is possible that selecting one question from ten in a question bank offers sufficient advantages while also allowing greater quality control. Indeed, this is one of the motivations for the STACK e-assessment system's use of “deployed variants” of parameterised questions.
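The following Python sketch illustrates the pre-generate-and-review workflow; the seeded-generator mechanism and the question template are simplifying assumptions for illustration, not STACK's actual implementation.

```python
import random

def make_variant(rng: random.Random) -> dict:
    # Same illustrative 'differentiate a polynomial' template as above.
    a, n = rng.randint(2, 9), rng.randint(2, 5)
    return {"question": f"Differentiate f(x) = {a}x^{n}.",
            "answer": f"{a * n}x^{n - 1}"}

def deploy_variants(n_variants: int, master_seed: int = 2024) -> list[dict]:
    # Pre-generate a fixed, reviewable bank: seeding makes each variant
    # reproducible, so an author can inspect (and reject) every variant
    # before any student sees it.
    return [make_variant(random.Random(master_seed + i))
            for i in range(n_variants)]

bank = deploy_variants(10)      # ten vetted variants...
served = random.choice(bank)    # ...one selected at assessment time
```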

What might an answer look like?

The answer might include a comparison between students who attempt such repeated e-assessment questions and students who work through repetitive questions in printed exercises. It should include student survey questions to investigate whether students perceive the repetition to be an advantage. Quantitative work could examine banks of student responses, considering how many times different instances of the same question were answered by students and whether this has an effect on students' final module results.
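As one sketch of this quantitative strand, assume a hypothetical response bank with one row per attempt (the schema and values below are invented for illustration). One could then count how many distinct variants of a question each student encountered and relate that to their final module result:

```python
import pandas as pd

# Hypothetical response bank: one row per attempt at a single question.
# The column names and values are assumptions, not a real dataset.
attempts = pd.DataFrame({
    "student":       ["s1", "s1", "s1", "s2", "s2", "s3"],
    "variant":       [3, 7, 7, 1, 4, 2],
    "module_result": [72, 72, 72, 58, 58, 65],
})

# How many distinct variants did each student meet?  (With several
# questions in the bank, one would also group by a question column.)
exposure = (attempts.groupby("student")["variant"]
                    .nunique()
                    .rename("distinct_variants"))
result = attempts.groupby("student")["module_result"].first()

summary = pd.concat([exposure, result], axis=1)
print(summary.corr())  # crude association; a real study would need controls
```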

A module with multiple e-assessment homeworks could include both randomised and non-randomised assignments. We could then examine results across the assignments (to assess mastery) and perhaps survey students on whether they thought the items were randomised, and whether they considered the randomisation beneficial.

References

Greenhow, M. (2015). Effective computer-aided assessment of mathematics; principles, practice and results. Teaching Mathematics and its Applications, 34(3), 117-137. https://doi.org/10.1093/teamat/hrv012

Schwinning, N., Striewe, M., Savija, M. & Goedicke, M. (2015). On Flexible Multiple Choice Questions With Parameters. In: A. Jefferies & M. Cubric (Eds.), 14th European Conference on e-Learning (ECEL 2015), Hatfield, UK, 29-30 October 2015 (pp. 523-529). Sonning Common: Academic Conferences and Publishing International Ltd.