Design and implementation choices > Task design principles
Question 23
E-assessment task designers often convert questions that could be asked on a traditional pen-and-paper exam: what are the implications, technicalities, affordances and drawbacks of this approach?
What motivates this question?
When e-assessment is introduced to a course for the first time, it is often used as a replacement for existing paper-based assessments. For example, in one case study of a department introducing computer-aided assessment to a linear algebra module, “exercise sheets were replaced by online weekly quizzes” (Iannone & Simpson, 2012, p. 37). In this scenario, it is natural that lecturers or task designers would look to “translate” existing tasks into e-assessments.
This “translation” approach may have implications for the range of tasks that are set using e-assessment: some existing paper-based tasks may be “untranslatable”, while other tasks that would be well suited to e-assessment may not be considered at all. Moreover, the translation of tasks can add unintended extra demands (e.g., Lawson, 2002, p. 4), for instance by requiring students to enter answers using unfamiliar input syntax.
Thus, from the task designer’s point of view, the “translation” approach carries both affordances and drawbacks, but the range and impact of these are not well understood.
What might an answer look like?
One approach would be to survey or interview task designers about their experiences of translating tasks, to identify what they perceive as the affordances and drawbacks of the approach. This could be supplemented by close analysis of the translation process in practice, perhaps comparing the types of skills assessed before and after translation (e.g., using the MATH taxonomy; see Kinnear et al., 2020).
Related questions
- Comparing pen and paper with e-assessment is a feature of several questions:
  - Q18: How might dyslexic, dyscalculic and other groups of students be disadvantaged by online assessments rather than paper-based assessments?
  - Q43: How can we emulate human marking of students’ working such as follow-on marking and partially correct marking?
  - Q42: How can we automate the assessment of work traditionally done using paper and pen?
  - Q41: Are there differences in performance on mathematics problems presented on paper versus as e-assessments?
- The “translation” approach should be considered among principles for task design:
  - One choice to be made during translation is whether and how to randomise (see the sketch below). Q25: What effect does the use of random versions of a question (e.g. using parameterised values) have on the outcomes of e-assessment?
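To make “parameterised values” concrete, here is a minimal sketch in Python of how random versions of a question might be generated. It is purely illustrative: the function name is hypothetical, and real e-assessment systems such as STACK or Numbas provide their own authoring tools for this.

```python
import random

def make_question(seed=None):
    """Generate one randomised version of a simple question by
    choosing parameter values at random (illustrative only)."""
    rng = random.Random(seed)
    a = rng.randint(2, 9)
    b = rng.randint(2, 9)
    question = f"Expand (x + {a})(x + {b})."
    # The expected answer x^2 + (a+b)x + ab is stored alongside the
    # question text so that a response can be marked automatically.
    answer = f"x^2 + {a + b}*x + {a * b}"
    return question, answer

# Each student could receive a different version, e.g. seeded by their ID.
print(make_question(seed=42))
```

Even in this toy example, the task designer must choose the parameter ranges, and different choices can make some versions harder than others; this is part of what Q25 asks about.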
References
Iannone, P., & Simpson, A. (2012). Mapping University Mathematics Assessment Practices. University of East Anglia. Retrieved from https://mathshe.files.wordpress.com/2012/08/mu-map.pdf
Kinnear, G., Bennett, M., Binnie, R., Bolt, R., & Zheng, Y. (2020). Reliable application of the MATH taxonomy sheds light on assessment practices. Teaching Mathematics and Its Applications: International Journal of the IMA, 1–15. https://doi.org/10.1093/teamat/hrz017
Lawson, D. (2002). Computer-aided assessment in mathematics: Panacea or propaganda? International Journal of Innovation in Science and Mathematics Education, 9(1). Retrieved from https://openjournals.library.sydney.edu.au/index.php/CAL/article/view/6095