PhD Defence Notice: From Understanding Learning Difficulties Among Students To Providing High-Quality Automated Feedback

Thursday, September 12, 2024 9:00 am - 11:00 am EDT (GMT -04:00)

Candidate: Huanyi Chen

Date: September 12, 2024

Time: 9:00 AM

Place: E5 4047

Supervisor(s): Ward, Paul

Abstract:

Students face various difficulties during their learning journeys. However, providing timely feedback is often a challenge for educators due to availability constraints. Automated feedback systems have been introduced to help provide such feedback at scale.

To give instructors a general understanding of their students, we analyzed students' learning analytics. In this study, we applied clustering techniques to behavior data naturally collected within an automated feedback system. We found that although students spent a significant amount of time using the system, their learning outcomes were often limited. Based on these observations, we derived a predictive model.
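
As an illustrative aside, the sketch below shows one way behavior data of this kind could be clustered. It is a minimal, hypothetical example assuming k-means over made-up per-student features (hours spent, submission counts, exercises attempted); the features, the choice of k-means, and the number of clusters are assumptions for illustration, not the pipeline used in the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-student behavior features collected by an automated
# feedback system: [hours spent in the system, number of submissions,
# number of distinct exercises attempted]. Values are made up.
features = np.array([
    [12.5, 40, 8],
    [30.0, 95, 9],
    [3.2, 10, 4],
    [25.0, 30, 5],
])

# Standardize features so no single scale dominates the distance metric.
X = StandardScaler().fit_transform(features)

# Group students into behavioral clusters (k chosen arbitrarily here).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster assignment per student
```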

To assist students in their learning, we explored whether offering trivial-penalty time extensions could be beneficial and why students use them. Implementing flexible late policies was straightforward and placed minimal burden on instructors. We analyzed a fourth-year course that used flexible late policies and found that time conflicts and underestimation of coursework were the top two reasons for using time extensions. In addition, our findings revealed a correlation between students' abilities and their usage of time extensions. We re-examined this latter result in a replication study and a reproduction study. Although the automated feedback system was not considered in the main study, in the reproduction study we found that even with time extensions and automated feedback, low- and middle-performing students still could not match the performance of high-performing students. This suggests a fundamental issue: feedback from automated feedback systems, which play an essential role in supporting students' learning at scale, may not be as effective as anticipated.

Consequently, a critical question arises: how can automated feedback systems provide effective feedback? We identified two main issues in current automated feedback systems: incorrect components marked as correct and correct components marked as incorrect. To address these issues, we argue that the unit testing philosophy, widely adopted in the software industry, should not be naively applied to automated feedback systems in an educational context. We completely redesigned the assessment procedure and proposed a novel guideline for composing automated assessments. Following this guideline, we developed an automated assessment for an entity-relationship question in a database course. Our evaluation showed that students significantly improved their understanding of the topic.
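
To make the two mislabelling issues concrete, here is a small, hypothetical Python sketch; it is not the thesis's guideline nor its entity-relationship assessment, and the functions and test inputs are invented. It shows how a weak end-to-end style check can pass despite a buggy component, while a check on a correct component can fail because of an upstream bug.

```python
# Hypothetical two-component student submission: a parser with a bug
# (it swaps the name and score fields) and a summing function that is
# correct with respect to the intended parse_record contract.
def parse_record(line):
    name, score = line.split(",")
    return {"name": score.strip(), "score": name.strip()}  # bug: fields swapped

def total_score(lines):
    # Correct logic: sum the "score" field of each parsed record.
    return sum(int(parse_record(l)["score"]) for l in lines)

# Issue 1: a check using a symmetric input passes even though
# parse_record is wrong, so the incorrect component looks correct.
assert parse_record("7, 7") == {"name": "7", "score": "7"}

# Issue 2: a check on total_score fails (ValueError on non-numeric text)
# even though total_score itself is correct; the upstream parsing bug
# gets blamed on the wrong component.
try:
    total_score(["alice, 3", "bob, 4"])
except ValueError:
    print("total_score check fails because of parse_record's bug")
```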