Design and implementation choices > Task design principles

Question 22
What principles should inform the design of e-assessment tasks?

It would be useful to develop a set of design principles for e-assessment tasks. Such principles could address the mathematical type of a question (e.g. example-generation), its format (e.g. multiple choice), and the type and timing of feedback, and could then be used to advise practitioners.

What motivates this question?

Much has been written about the design of mathematical tasks (see, for example, the 2015 ICMI Study 22 on Task Design in Mathematics Education; Mason & Johnston-Wilder, 2004; Swan, 2008), but most of this work focuses on primary and secondary school, with relatively little attention paid to tasks aimed at undergraduates. Even less is known about the design of e-assessment at university level.

Writing questions is technically challenging, because of the need to understand the minutiae of how a CAS will handle a response (Sangwin, 2007), and pedagogically demanding, because of the need to understand what skills a question assumes (Greenhow, 2015). Lawson (2002) says question authors must take care to avoid introducing alternative or additional learning outcomes while being “creative in finding ways round” the “limitations” of e-assessment. Randomisation can also generate questions that cannot be answered: Sangwin (2004) refers to “the bitter experience of setting mathematically impossible problems”. Robinson et al. (2012) are concerned that students’ perception of e-assessment as objective leads some not to challenge the marks they have been awarded, “even when a question is coded to mark the work incorrectly”.
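To make the randomisation pitfall concrete, here is a minimal sketch in Python, using SymPy to stand in for the CAS. The task, the parameter ranges and the rejection-sampling guard are all invented for illustration and do not describe any particular system’s mechanism.

```python
import random
import sympy as sp

x = sp.symbols("x")

def draw_variant(rng):
    """Draw coefficients for 'solve x^2 + b*x + c = 0 over the integers',
    redrawing whenever the randomised variant has no integer roots --
    i.e. whenever it would be a 'mathematically impossible problem'."""
    while True:
        b = rng.randint(-9, 9)  # hypothetical parameter range
        c = rng.randint(-9, 9)
        roots = sp.solve(x**2 + b * x + c, x)
        if roots and all(r.is_integer for r in roots):
            return b, c, roots

b, c, roots = draw_variant(random.Random(0))
print(f"Solve x^2 + ({b})x + ({c}) = 0; integer roots: {roots}")
```

An alternative design, often preferable, is to randomise the intended answer first (here, the two integer roots) and derive the displayed coefficients from it, so that every variant is answerable by construction.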

For example, Sangwin (2007) refers to “provided response questions”, in which “a student is provided with a list of potential answers and asked to make a selection, match-up, rearrange or perform various other kinds of interactions”, and notes that these are “almost always a constraint dictated by the software, and not the preferred choice of the user”. The listed answers can give a hint to students who do not know how to begin a question, or allow answering by a process of elimination or guessing. As a result, some provided-response tasks may not test the desired learning outcomes: an integration item, for instance, may be answered by differentiating the response options, so that differential calculus is exercised when integral calculus was supposed to be tested (Lawson, 2002; Sangwin & Jones, 2017).
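A minimal sketch of this shortcut, using SymPy and invented response options: differentiating each option and comparing the result with the integrand identifies the key without any integration being carried out.

```python
import sympy as sp

x = sp.symbols("x")
integrand = x * sp.cos(x)  # item: choose an antiderivative of x*cos(x)

# Hypothetical provided-response options (one key, two distractors).
options = [
    x * sp.sin(x),
    x * sp.sin(x) + sp.cos(x),
    sp.sin(x) + x * sp.cos(x),
]

# The options can be screened by differentiation alone, so the item
# exercises differential rather than integral calculus.
for label, option in zip("abc", options):
    if sp.simplify(sp.diff(option, x) - integrand) == 0:
        print(f"({label}) is the correct antiderivative: {option}")
```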

Parameterised questions backed by a CAS allow for questions with multiple correct answers, making it possible to write example-generation tasks. However, care is needed to ensure that the range of randomly generated questions really assesses what is intended, in terms both of mathematical complexity/difficulty and of learning objectives (Greenhow, 2015).
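To illustrate why CAS backing matters here, the following sketch (SymPy again; the task and its acceptance conditions are invented for the example) marks an example-generation task by checking the defining properties of the response rather than comparing it against a single model answer, so any of the infinitely many correct examples is accepted.

```python
import sympy as sp

x = sp.symbols("x")

def mark(answer: str) -> bool:
    """Invented task: 'Give an example of a function f with f(1) = 0
    and f'(1) = 2.' The checker tests the defining properties, so
    every correct example earns the marks."""
    f = sp.sympify(answer)
    has_root = sp.simplify(f.subs(x, 1)) == 0
    has_slope = sp.simplify(sp.diff(f, x).subs(x, 1)) == 2
    return has_root and has_slope

print(mark("2*(x - 1)"))       # True
print(mark("sin(2*(x - 1))"))  # True: a different correct example
print(mark("x**2"))            # False
```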

An advantage over paper-based assessment is the ability to use graphics, audio and video in questions. For example, Sangwin (2015) describes a question in which a student interacts with a GeoGebra diagram as part of their answer. How do such innovations affect task design?

What might an answer look like?

A group of task designers could work together to develop design principles, or to adapt previous design frameworks, taking into account the opportunities and constraints inherent in the use of e-assessment at undergraduate level.

References

Greenhow, M. (2015). Effective computer-aided assessment of mathematics; principles, practice and results. Teaching Mathematics and its Applications, 34(3), 117–137. https://doi.org/10.1093/teamat/hrv012

Lawson, D. (2002). Computer-aided assessment in mathematics: Panacea or propaganda? International Journal of Innovation in Science and Mathematics Education, 9(1). Retrieved from https://openjournals.library.sydney.edu.au/index.php/CAL/article/view/6095

Mason, J., & Johnston-Wilder, S. (2004). Designing and using mathematical tasks. St Albans, UK: Tarquin Press.

Robinson, C. L., Hernandez-Martinez, P., & Broughton, S. (2012). Mathematics lecturers’ practice and perception of computer-aided assessment. In P. Iannone & A. Simpson (Eds.), Mapping university mathematics assessment practices (pp. 105–117). Norwich: University of East Anglia.

Sangwin, C. J. (2004). Assessing mathematics automatically using computer algebra and the internet. Teaching Mathematics and its Applications, 23(1), 1–14. https://doi.org/10.1093/teamat/23.1.1

Sangwin, C. J. (2007). Assessing elementary algebra with STACK. International Journal of Mathematical Education in Science and Technology, 38(8), 987–1002. https://doi.org/10.1080/00207390601002906

Sangwin, C. J., & Jones, I. (2017). Asymmetry in student achievement on multiple-choice and constructed-response items in reversible mathematics processes. Educational Studies in Mathematics, 94(2), 205–222. https://doi.org/10.1007/s10649-016-9725-4

Swan, M. (2008). Designing a multiple representation learning experience in secondary algebra. Educational Designer, 1(1), 1–17.

Watson, A., & Ohtani, M. (Eds.). (2015). Task design in mathematics education: An ICMI Study 22. Cham: Springer.