Errors and feedback > Student errors

Question 2
Do the errors students make in e-assessments differ from those they make in paper-based assessments?

Does answering questions through the medium of a computer-based assessment (with the consequent need to input answers using some form of computer syntax or a mathematics editor) mean students make different errors from those made on paper? Are there more, or different types of, transcription errors? Can computer-based assessments detect errors and misconceptions that cannot be detected in paper-based written answers?

Perhaps the widespread lack of support for showing working leads to more errors, or different types of error, in e-assessment than in paper-based assessment.

Types of errors that might arise specifically in e-assessment contexts:

  • Lack of request or requirement for explicit working, in contrast to paper-based assessments
  • Students are used to quickly clicking buttons and filling in textboxes; do they adequately realise that an online maths test is not a social media survey? (relates to Q13: How do students interact with an e-assessment system?)
  • Mistakes due to the technology, such as clicking a button too quickly
  • Errors of input, such as using the wrong syntax

What motivates this question?

An answer to this question might help alleviate the fears of colleagues reluctant to engage with computer-based assessment. It might also help identify errors or misconceptions that are hard to detect on paper. (For example, one student thought the notation for the natural logarithm was “In” (capital I, followed by n), having always misread “ln”. This would be hard to detect on paper, but on a computer it causes answers to be marked as wrong.)
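To illustrate the point, the sketch below uses SymPy's expression parser (not the input handling of STACK or MapleTA, which is only assumed to behave similarly) to show how “In(x)” is read as an unknown function rather than the natural logarithm:

```python
# A minimal sketch, using SymPy's parser rather than an e-assessment system,
# of how the misreading "In" is treated as an unknown function, not ln.
from sympy.parsing.sympy_parser import parse_expr

intended = parse_expr("ln(x)")   # parsed as log(x), the natural logarithm
misread = parse_expr("In(x)")    # parsed as an undefined function In(x)

print(intended)             # log(x)
print(misread)              # In(x)
print(intended == misread)  # False: the response would not match the intended answer
```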

Sangwin (2015) highlights the difference between a typed response being “invalid” and “wrong”: floating-point numbers, rational coefficients not in lowest terms, or an expression entered in place of an equation might, in certain circumstances, be invalid rather than wrong.
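As a rough illustration of that distinction, the sketch below (not STACK's actual validation code; the rules and messages are assumptions for a question expecting an exact fraction) separates invalid input from wrong answers:

```python
# A minimal sketch of separating "invalid" input from "wrong" answers,
# following Sangwin's (2015) distinction. The validation rules are assumed.
from fractions import Fraction
from math import gcd

def classify_answer(raw: str, expected: Fraction) -> str:
    """Classify a typed numeric answer as 'invalid', 'wrong' or 'correct'."""
    raw = raw.strip()

    # Floating-point input: invalid when an exact value is required.
    if "." in raw:
        return "invalid: please enter an exact fraction, not a decimal"

    # Fractional input: fractions not in lowest terms are invalid, not wrong.
    if "/" in raw:
        num, den = (int(part) for part in raw.split("/"))
        if gcd(num, den) != 1:
            return "invalid: please cancel the fraction to lowest terms"
        value = Fraction(num, den)
    else:
        value = Fraction(int(raw))

    return "correct" if value == expected else "wrong"

# Example: the expected answer is 1/2.
print(classify_answer("0.5", Fraction(1, 2)))   # invalid (decimal, not exact)
print(classify_answer("2/4", Fraction(1, 2)))   # invalid (not in lowest terms)
print(classify_answer("1/3", Fraction(1, 2)))   # wrong
print(classify_answer("1/2", Fraction(1, 2)))   # correct
```

The idea is that invalid input prompts the student to correct their entry, rather than being penalised as a wrong answer.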

An issue is that when a system responds to an error, the student may be unaware whether the error is mathematical or typographical in nature (Jones, 2008).

What might an answer look like?

A study to answer this question might split a group of students, with half completing an assessment on paper and half on computer, and compare the errors made. A large sample would be needed, ideally of students likely to make a significant number of errors. The study might be informed by the answer to Q1 (What common errors do students make when answering online assessment questions?), which could first identify common errors in computer assessments to guide its design. A preliminary study to identify common errors made on paper might also be needed.
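If the errors in the two conditions were coded by type, one way the comparison might be analysed is a chi-square test on the resulting contingency table. The error categories and counts in the sketch below are purely illustrative, not data from any study:

```python
# A minimal sketch of comparing coded error counts between the two groups.
# The categories and numbers are illustrative placeholders only.
from scipy.stats import chi2_contingency

# Rows: error types; columns: (paper group, computer group) counts.
error_counts = [
    [34, 21],   # conceptual errors
    [18, 25],   # transcription / input errors
    [12, 30],   # syntax errors
]

chi2, p_value, dof, expected = chi2_contingency(error_counts)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```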

See Lemmo (2020) for a possible research tool.

References

Jones, I.S. (2008). Computer-aided assessment questions in engineering mathematics using MapleTA. International Journal of Mathematical Education in Science and Technology, 39(3), 341-356. https://doi.org/10.1080/00207390701734523

Lemmo, A. (2020). A Tool for Comparing Mathematics Tasks from Paper-Based and Digital Environments. International Journal of Science and Mathematics Education, 1-21. https://doi.org/10.1007/s10763-020-10119-0

Sangwin, C. (2015). Computer Aided Assessment of Mathematics Using STACK. In S.J. Cho (Ed.), Selected Regular Lectures from the 12th International Congress on Mathematical Education (pp. 698-713). Cham: Springer. https://doi.org/10.1007/978-3-319-17187-6_39