George Kinnear
University of Edinburgh
George has written hundreds of assessment questions in various e-assessment systems (most recently in STACK), both for his own teaching and on behalf of others. His research interests are broadly around assessment in undergraduate mathematics.
Questions
George is a contributor to these questions:
- Q4: How can content-specific features of provided feedback, for instance, explanations with examples versus generic explanations, support students' learning?
- Q8: How useful for students' long-term learning is feedback that gives a series of follow-up questions, from a decision tree, versus a single terminal piece of feedback?
- Q9: What are the relative benefits of e-assessment giving feedback on a student's set of responses (e.g. "two of these answers are wrong - find which ones and correct them"), rather than individual responses separately?
- Q10: Under what circumstances is diagnosing errors worth the extra effort, as compared with generally addressing errors known to be typical?
- Q11: What are the relative merits of addressing student errors up front in the teaching, compared with using e-assessment to detect and give feedback on errors after they are made?
- Q13: How do students interact with an e-assessment system?
- Q14: To what extent does repeated practice on randomized e-assessment tasks encourage mathematics students to discover deep links between ideas?
- Q15: How do students engage with automated feedback? What differences (if any) can be identified with how they would respond to feedback from a teacher?
- Q17: What are students' views on e-assessment, and what are their expectations from automated feedback?
- Q20: How can e-assessment be used in group work, and what effect does the group element have on individuals' learning?
- Q26: When writing multiple choice questions, is student learning enhanced more by distractors based on common errors, or by randomly generated distractors?
- Q28: How can regular summative e-assessments support learning?
- Q30: To what extent does the timing and frequency of e-assessments during a course affect student learning?
- Q35: What types of reasoning are required to complete current e-assessments?
- Q38: What developments at the forefront of e-assessment (such as artificial intelligence) can we apply to undergraduate mathematics?
- Q53: How can e-assessments be designed to expand and enrich students' example spaces?
- Q54: To what extent can e-assessments meaningfully judge student responses to example generation tasks?
- Q55: How does the use of e-assessment impact students' example generation strategies and success, relative to the same tasks on paper or orally?