Affordances offered by e-assessment tools > Free-form student input

Question 43
How can we emulate human marking of students’ working such as follow-on marking and partially correct marking?

What motivates this question?

Lawson (2002) says it is “generally accepted throughout the mathematics community that an incorrect answer can still demonstrate the achievement of some learning outcomes”. Because of this, it is normal to award students marks for their understanding of the method involved in solving a problem, even if they do not implement it fully correctly. Ashton and Youngson (2004) also describe “follow through error”, where the answer to one part of a question is taken as input to a second part. Students may use the correct method in the second part and obtain an incorrect answer only because their input from the first part was incorrect; such cases, they say, “occur quite often in paper based examination”. The final answer entered into a computer may not carry enough information to allocate partial credit, and this can be a cause of student complaint (Beevers et al., 1999). A human marker can act flexibly, whereas an e-assessment system might penalise small errors rigidly. Rønning (2017) says that the experience of having an automated system say your solution is incorrect but not being able to work out why is “frustrating and leads to a loss of confidence”; meanwhile, getting corrections on a written solution is “not perceived as bad for the self-efficacy” because it is “helpful”. This feeling that human markers are more valued is conceptualised by Rønning as arising because in e-assessment “the mediating artefact is a computer system, not a person. This affects the relation between the subject and the object”.

To attempt to award partial credit, a question may be broken into a series of smaller steps or sub-questions, so that marks can be assigned for correct parts (e.g. Beevers et al., 1999). A CAS can recalculate the answer to a later step as though the incorrect answer to an earlier step were the correct value, enabling enhanced follow-through marking. Some systems make intermediate steps optional, perhaps with a penalty for students who choose to use them (Pitcher et al., 2002). Alternatively, feedback may be offered at the end of each step, in order that errors do not “carry over” to “subsequent solution steps, and consequently the final solution” (Corbalan et al., 2010).
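As a concrete illustration, the following is a minimal sketch of follow-through marking, assuming a hypothetical two-part question ((a) differentiate f(x) = x^3 + 2x; (b) evaluate f'(2)) and using SymPy as the CAS. The question, function names and all-or-nothing marks per part are invented for illustration; they are not drawn from any of the systems cited above.

```python
# A minimal sketch of follow-through marking with SymPy as the CAS.
# Part (b) is marked against the value implied by the student's own
# answer to part (a), so a correct method in (b) earns credit even
# when (a) was wrong.
import sympy as sp

x = sp.Symbol("x")
f = x**3 + 2*x  # question: (a) find f'(x); (b) evaluate f'(2)

def mark(student_a, student_b):
    """Return (mark_a, mark_b), each 0 or 1."""
    correct_a = sp.diff(f, x)
    mark_a = int(sp.simplify(student_a - correct_a) == 0)
    # Follow-through: recompute the part (b) target from the
    # student's part (a) answer, not from the true derivative.
    follow_through_b = student_a.subs(x, 2)
    mark_b = int(sp.simplify(student_b - follow_through_b) == 0)
    return mark_a, mark_b

# A student differentiates incorrectly (3x**2, dropping the + 2) but
# then substitutes x = 2 into their own answer correctly:
print(mark(3*x**2, sp.Integer(12)))  # -> (0, 1): credit for part (b)
```

A real system would also need to handle equivalent algebraic forms, invalid input, and partial method marks within each step, but the core idea of re-marking later parts against the student's earlier answers is as above.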

Greenhow (2015) also points out that a question which assesses multiple skills must be broken down into steps for e-assessment, even if the solution is not a multi-part process, whereas a human marker with access to the intermediate working would be able to judge the use of the different skills.

While steps might reduce the information-processing load (Beevers & Paterson, 2003) and provide intermediate feedback for increased motivation (Beevers et al., 1999), breaking questions into steps is felt to reduce the authenticity of the assessment (Lawson, 2002). Multi-stage questions might force students to use a method they would not have chosen (Lawson, 2002), or signal how a question should be attempted when students would otherwise have had to decide for themselves, which may be a disadvantage in some circumstances. They may also cause students to focus on individual steps without seeing “the bigger picture”, leading to procedural knowledge and a lack of deep understanding (Quinney, 2010). All this means that breaking a question into steps can remove the challenge the question was originally intended to pose.

An alternative approach to providing partial credit is to allow multiple attempts at each question (perhaps following feedback), with a penalty mark incurred for each attempt used; a sketch of one such scheme follows. Naismith and Sangwin (2004) say this opportunity to reflect and retry a question provides an advantage over paper-based assessment.
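The following is a minimal sketch of an attempt-penalty scheme of this kind; the linear deduction and the 10% penalty per extra attempt are illustrative assumptions, not parameters taken from any of the systems cited here.

```python
# A minimal sketch of attempt-penalty scoring: a correct answer on
# attempt n scores max(0, 1 - p*(n - 1)) of the available credit.
# The linear deduction and p = 0.1 are illustrative assumptions.
def attempt_score(attempts_used: int, penalty: float = 0.1) -> float:
    """Fraction of credit for a correct answer on the given attempt."""
    return max(0.0, 1.0 - penalty * (attempts_used - 1))

for n in range(1, 5):
    print(n, attempt_score(n))  # 1 -> 1.0, 2 -> 0.9, 3 -> 0.8, 4 -> 0.7
```

A linear deduction is only the simplest choice; a system could equally apply a geometric decay or cap the number of attempts allowed.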

Students may be asked to submit written work alongside answering e-assessment questions (Pollock, 2002; Heck, 2017). Pacheco-Venegas et al. (2015) report some progress in marking steps by converting student handwritten input into CAS input.

What might an answer look like?

A systematic comparison between the marks awarded by human markers and those awarded by existing e-assessment tools is likely to provide one way to answer this question.

References

Ashton, H.S. & Youngson, M.A. (2004). Creating Questions for Automatic Assessment in Mathematics. Maths-CAA Series, February 2004. Retrieved from http://icse.xyz/mathstore/node/61.html

Beevers, C.E., Wild, D.G., McGuire, G.R., Fiddles, D.J. & Youngson, M.A. (1999). Issues of partial credit in mathematical assessment by computer. ALT-J, 7(1), 26-32. https://doi.org/10.1080/0968776990070105

Beevers, C.E. & Paterson, J.S. (2003). Automatic assessment of problem-solving skills in mathematics. Active Learning in Higher Education, 4(2), 127-144. https://doi.org/10.1177/1469787403004002002

Corbalan, G., Paas, F. & Cuypers, H. (2010). Computer-based feedback in linear algebra: Effects on transfer performance and motivation. Computers & Education, 55(2), 692-703. https://doi.org/10.1016/j.compedu.2010.03.002

Greenhow, M. (2015). Effective computer-aided assessment of mathematics; principles, practice and results. Teaching Mathematics and its Applications, 34(3), 117-137. https://doi.org/10.1093/teamat/hrv012

Heck, A. (2017). Using SOWISO to realize interactive mathematical documents for learning, practising, and assessing mathematics. MSOR Connections, 15(2), 6-16. https://doi.org/10.21100/msor.v15i2.412

Lawson, D. (2002). Computer-aided assessment in mathematics: Panacea or propaganda? International Journal of Innovation in Science and Mathematics Education, 9(1). Retrieved from https://openjournals.library.sydney.edu.au/index.php/CAL/article/view/6095

Naismith, L. & Sangwin, C. (2004). Implementation of a Computer Algebra Based Assessment System. Maths-CAA Series, October 2004. Retrieved from http://icse.xyz/mathstore/node/61.html

Pacheco-Venegas, N.B., López, G. & Andrade-Aréchiga, M. (2015). Conceptualization, development and implementation of a web-based system for automatic evaluation of mathematical expressions. Computers & Education, 88, 15-28. https://doi.org/10.1016/j.compedu.2015.03.021

Pitcher, N., Goldfinch, J. & Beevers, C. (2002). Aspects of Computer-Based Assessment in Mathematics. Active Learning in Higher Education, 3(2), 159-176. https://doi.org/10.1177/1469787402003002005

Pollock, M.J. (2002). Introduction of CAA into a mathematics course for technology students to address a change in curriculum requirements. International Journal of Technology and Design Education, 12(3), 249-270. https://doi.org/10.1023/A:1020229330655

Quinney, D. (2010). The Role of E-Assessment in Mathematics. In P. Bogacki (Ed.), Electronic Proceedings of the Twenty-second Annual International Conference on Technology in Collegiate Mathematics, Chicago, Illinois (pp. 279-288). Retrieved from http://archives.math.utk.edu/ICTCM/VOL22/S093/paper.pdf

Rønning, F. (2017). Influence of computer-aided assessment on ways of working with mathematics. Teaching Mathematics and its Applications, 36(2), 94-107. https://doi.org/10.1093/teamat/hrx001