Grant recipients: Daniel Smilek and Colin MacLeod, Department of Psychology
Project team members:
Daniel Smilek, Department of Psychology
Colin MacLeod, Department of Psychology
Brandon Ralph*, Department of Psychology
(Project timeline: September 2015 - August 2016)
Testing not only measures learning; it improves learning. Standard paper tests, however, suffer from numerous drawbacks: they must be prepared far in advance, they require costly printed materials, and they delay feedback to students. Our long-term goal is to create an electronic testing platform that leverages existing iClicker response technology to permit classroom implementation of paperless electronic exams. To initiate this project, we investigated two issues in two equivalent sections of the same course. First, we examined student preferences for, and perceived benefits from, in-class quizzes containing questions with fixed vs. variable response durations. Second, we examined whether students preferred receiving feedback following quiz questions, and whether students felt that such feedback (or its absence) helped or hindered their learning, by comparing three feedback conditions: (1) no feedback (baseline), (2) correct answer only, and (3) instructional feedback (correct answer plus lecture material re-presentation) following each quiz question.
We investigated two issues in the context of multiple-choice testing, common in large classes. The first issue was whether students prefer fixed vs. variable response durations for in-class quizzes, and relatedly, whether students feel that fixed vs. variable response durations influence their learning of course material. Quiz questions with fixed durations were allotted 45 seconds to answer, whereas questions with variable response durations were allotted 30, 45, or 60 seconds to answer based on anticipated difficulty of the question.
The second issue was whether students prefer receiving some form of feedback following quiz questions (as opposed to receiving no feedback, as is typical on many midterms and exams), and whether students feel that receiving feedback helps or hinders their learning of course material. To this end, we compared student preferences, and ratings of the effects on learning, for quiz questions with: (1) no feedback (baseline), (2) correct answer only, or (3) instructional feedback (i.e., correct answer plus lecture material re-presentation).
To evaluate preferences for a particular testing format, students responded to the statement “Please rate the extent to which you prefer end-of-class quizzes with each of the following testing styles:” Similarly, to evaluate the perceived effect on learning, students responded to the statement “Please rate the extent to which you feel end-of-class quizzes with each of the different testing styles helped you learn the course material.” In both cases, preferences and the perceived effects on learning were indicated using 6-point scales. For the preference items, response options ranged from “Highly Dislike” (1) to “Highly Prefer” (6). For the perceived-effects-on-learning items, response options ranged from “Hindered My Learning A Lot” (1) to “Helped My Learning a Lot” (6).
- Across both sections of the course sampled (N = 174), students preferred quizzes with fixed response durations over quizzes with variable response durations, and rated the fixed-duration format accordingly on the learning measure.
- Receiving no feedback was rated as significantly less preferred and less helpful than receiving either form of feedback (correct answer only or instructional), with students both disliking receiving no feedback following quizzes and feeling that the absence of feedback hindered their learning of course material. This finding is an important consideration given that many standard midterm and exam formats reflect this no-feedback style of testing.
- Students expressed a strong preference for receiving both forms of feedback (correct answers only and instructional) and felt that both forms helped them learn the course material. Instructional feedback (correct answers plus lecture material re-presentation) was rated as the most preferred and most helpful form of feedback, significantly above receiving correct answers only.
- At the Department/School and/or Faculty/Unit levels: Findings from this investigation have been shared and discussed with colleagues.
- At the national and/or international levels: We intend to present our findings at an upcoming conference and publish the results in a future manuscript.
Impact of the Project
- Teaching: Our plan moving forward is to integrate our findings into future course offerings (i.e., to include in-class quizzes wherein feedback is provided). Furthermore, as our long-term goal is to implement an electronic testing platform for midterms and final exams, we plan to incorporate feedback following test questions so that the tests become an opportunity for both teaching and learning.
- Involvement in other activities or projects: Partly based on our involvement in the current project, Dr. Smilek has been able to recruit a PhD student who focuses specifically on exploring different pedagogical methods and their effects on performance. This project has also provided us with data on how best to construct future software for electronic testing. In particular, presenting questions with fixed response durations and providing feedback are critical components that must be incorporated into our testing platform. Lastly, findings from the current investigation have helped us refine our design of electronic in-class quizzes.
- Connections with people from different departments, faculties, and/or disciplines about teaching and learning: Through our involvement in research concerning teaching and learning, we have developed connections with staff at the University of Toronto. We have also developed a stronger relationship with the developer who designed the web-based testing interface for our in-class quizzes.