Peter Rowlett
Sheffield Hallam University
Peter is a Reader at Sheffield Hallam University, where he teaches mathematics and researches educational practice in higher education mathematics. He has been interested in e-assessment since around 2003, both as a user of e-assessment systems and as a researcher into how e-assessment can support student learning and effective assessment practice.
Questions
Peter is a contributor to these questions:
- Q1: What common errors do students make when answering online assessment questions?
- Q2: Do the errors students make in e-assessments differ from those they make in paper-based assessments?
- Q6: What difficulties appear when designing e-assessment tasks that give constructive feedback to students?
- Q10: Under what circumstances is diagnosing errors worth the extra effort, as compared with generally addressing errors known to be typical?
- Q12: In what circumstances is instant feedback from automated marking preferable to marking by hand?
- Q13: How do students interact with an e-assessment system?
- Q14: To what extent does repeated practice on randomised e-assessment tasks encourage mathematics students to discover deep links between ideas?
- Q15: How do students engage with automated feedback? What differences (if any) can be identified with how they would respond to feedback from a teacher?
- Q16: What should students be encouraged to do following success in e-assessment?
- Q17: What are students' views on e-assessment, and what are their expectations from automated feedback?
- Q20: How can e-assessment be used in group work, and what effect does the group element have on individuals' learning?
- Q22: What principles should inform the design of e-assessment tasks?
- Q24: To what extent does the randomisation of question parameters, which makes sharing answers between students difficult, adequately address plagiarism?
- Q25: What effect does the use of random versions of a question (e.g. using parameterised values) have on the outcomes of e-assessment?
- Q27: How can formative e-assessments improve students’ performance in later assessments?
- Q29: What are suitable roles for e-assessment in formative and summative assessment?
- Q33: What might a “hierarchy of needs” look like for lecturers who are transitioning to increased use of e-assessments?
- Q36: To what extent do existing e-assessments provide reliable measures of mathematical understanding, as might otherwise be measured by traditional exams?
- Q37: How can e-assessment support take-home open-book examinations?
- Q39: What methods are available for student input of mathematics?
- Q40: How can the suitability of e-assessment tools for summative assessment be improved by combining computer-marking and pen-marking?
- Q41: Are there differences in performance on mathematics problems presented on paper versus as e-assessments?
- Q43: How can we emulate human marking of students’ working such as follow-on marking and partially correct marking?
- Q46: How can we assess problem solving using e-assessment?
- Q47: How can we assess open-ended tasks using e-assessment?
- Q49: How can the assessment of proof be automated?
- Q54: To what extent can e-assessments meaningfully judge student responses to example generation tasks?