This page shows all 55 questions in the research agenda.

You can also see the questions arranged into Themes.

Q1 What common errors do students make when answering online assessment questions?
Q2 Do the errors students make in e-assessments differ from those they make in paper-based assessments?
Q3 What are the approaches to detecting and feeding back on students’ errors?
Q4 How can content-specific features of provided feedback, for instance explanations with examples versus generic explanations, support students' learning?
Q5 What are the linguistic features of feedback that help students engage with and use feedback, both in the online mathematical task at hand and in future mathematical activities?
Q6 What difficulties appear when designing e-assessment tasks that give constructive feedback to students?
Q7 How can feedback that is dynamically tailored to the student’s level of mathematical expertise help a student use feedback on mathematical tasks effectively?
Q8 How useful for students’ long-term learning is feedback that gives a series of follow-up questions, from a decision tree, versus a single terminal piece of feedback?
Q9 What are the relative benefits of e-assessment giving feedback on a student’s set of responses (e.g. “two of these answers are wrong – find which ones and correct them”), rather than individual responses separately?
Q10 Under what circumstances is diagnosing errors worth the extra effort, as compared with generally addressing errors known to be typical?
Q11 What are the relative merits of addressing student errors up-front in the teaching compared with using e-assessment to detect and give feedback on errors after they are made?
Q12 In what circumstances is instant feedback from automated marking preferable to marking by hand?
Q13 How do students interact with an e-assessment system?
Q14 To what extent does repeated practice on randomised e-assessment tasks encourage mathematics students to discover deep links between ideas?
Q15 How do students engage with automated feedback? What differences (if any) can be identified compared with how they would respond to feedback from a teacher?
Q16 What should students be encouraged to do following success in e-assessment?
Q17 What are students' views on e-assessment, and what are their expectations from automated feedback?
Q18 How might dyslexic, dyscalculic and other groups of students be disadvantaged by online assessments rather than paper-based assessments?
Q19 How can peer assessment be used as part of e-assessment?
Q20 How can e-assessment be used in group work, and what effect does the group element have on individuals' learning?
Q21 What design methodologies and principles are used by e-assessment task designers?
Q22 What principles should inform the design of e-assessment tasks?
Q23 E-assessment task designers often convert questions that could be asked on a traditional pen and paper exam: what are the implications, technicalities, affordances and drawbacks of this approach?
Q24 To what extent does the randomisation of question parameters, which makes sharing answers between students difficult, adequately address plagiarism?
Q25 What effect does the use of random versions of a question (e.g. using parameterised values) have on the outcomes of e-assessment?
Q26 When writing multiple choice questions, is student learning enhanced more by distractors based on common errors or by randomly generated distractors?
Q27 How can formative e-assessments improve students’ performance in later assessments?
Q28 How can regular summative e-assessments support learning?
Q29 What are suitable roles for e-assessment in formative and summative assessment?
Q30 To what extent do the timing and frequency of e-assessments during a course affect student learning?
Q31 What are the relations between the mode of course instruction and students' performance and activity in e-assessment?
Q32 What advice and guidance (both practical and pedagogical) is available to lecturers about using e-assessment in their courses, and to what extent do they engage with it?
Q33 What might a “hierarchy of needs” look like for lecturers who are transitioning to increased use of e-assessments?
Q34 How can lecturers be informed about how students interact with e-assessment tasks, in a way that helps them act on these findings effectively?
Q35 What types of reasoning are required to complete current e-assessments?
Q36 To what extent do existing e-assessments provide reliable measures of mathematical understanding, as might otherwise be measured by traditional exams?
Q37 How can e-assessment support take-home open-book examinations?
Q38 What developments at the forefront of e-assessment (such as artificial intelligence) can we apply to undergraduate mathematics?
Q39 What methods are available for student input of mathematics?
Q40 How can the suitability of e-assessment tools for summative assessment be improved by combining computer-marking and pen-marking?
Q41 Are there differences in performance on mathematics problems presented on paper versus as e-assessments?
Q42 How can we automate the assessment of work traditionally done using paper and pen?
Q43 How can we emulate human marking of students’ working, such as follow-on marking and awarding partial credit?
Q44 How can e-assessment using comparative judgement support learning?
Q45 How can comparative judgement be used for e-assessment?
Q46 How can we assess problem solving using e-assessment?
Q47 How can we assess open-ended tasks using e-assessment?
Q48 How can e-assessments provide scaffolding (cues, hints) during and after problem-solving tasks?
Q49 How can the assessment of proof be automated?
Q50 What can automated theorem provers (e.g. Lean) offer to the e-assessment of proof comprehension?
Q51 What types/forms of proof-comprehension-related questions can be meaningfully assessed using currently available e-assessment platforms?
Q52 How can students effectively type free-form proof for human marking online?
Q53 How can e-assessments be designed to expand and enrich students' example spaces?
Q54 To what extent can e-assessments meaningfully judge student responses to example generation tasks?
Q55 How does the use of e-assessment impact students’ example generation strategies and success, relative to the same tasks on paper or orally?