PhD Seminar • Software Engineering • Characterizing the Impact, Distribution, and Duration of Stale Reviewer Recommendations

Friday, November 24, 2023 10:00 am - 11:00 am EST (GMT -05:00)

Please note: This PhD seminar will take place in DC 2564 and online.

Farshad Kazemi, PhD candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Shane McIntosh

The appropriate assignment of reviewers is a key factor in determining the value that organizations can derive from code review. Prior work has demonstrated that, in addition to (likely) producing less valuable feedback, inappropriate reviewer assignments can overburden core development teams and increase the risk of turnover-induced knowledge loss. Reviewer recommendation approaches have been proposed to aid in the assignment of reviewers to tasks. These recommendation approaches are traditionally evaluated by comparing their recommendations to the list of reviewers who reviewed change sets in historical data. Recent work suggests that deeper evaluations of reviewer recommendation approaches are needed to better align with practical use cases.

In this seminar, we study stale recommendations, i.e., recommended reviewers who no longer contribute to the project. By applying five code reviewer recommendation approaches (LearnRec, RetentionRec, cHRev, Sofia, and WLRRec) to three thriving open-source systems with 5,806 contributors, we observe that recommendations are often stale. Specifically, our findings reveal that, on average, 12.59% of incorrect recommendations are stale due to developer turnover; however, fewer stale recommendations are made when the recency of contributions is considered by the recommendation objective function. We also investigate which reviewers appear in stale recommendations and observe that the top reviewers account for a considerable proportion of stale recommendations. For instance, in 15.31% of cases, the top-3 reviewers account for at least half of the stale recommendations. Finally, we study how long a stale recommendation lingers after the candidate leaves the project, observing that contributors who left the project as long as 7.7 years earlier are still suggested to review change sets. Based on our findings, we recommend that code reviewer recommendation approaches adapt to mitigate stale recommendations by, e.g., incorporating live roster heuristics or team status data from other sources.
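
To make the notion of staleness concrete, the sketch below shows one way a staleness check could be expressed. It is a minimal illustration under assumptions introduced here, not the instrument used in the study: the function names (is_stale, stale_fraction), the 365-day inactivity window, and the input data shapes are all hypothetical.

```python
from datetime import datetime, timedelta

# Assumption (not taken from the study): a candidate is treated as
# stale when their last recorded contribution predates the change set
# by more than a fixed inactivity window.
INACTIVITY_WINDOW = timedelta(days=365)

def is_stale(candidate: str,
             last_contribution: dict[str, datetime],
             change_date: datetime) -> bool:
    """Return True if the recommended candidate looks inactive."""
    last_seen = last_contribution.get(candidate)
    if last_seen is None:
        return True  # no contribution on record: treat as stale
    return change_date - last_seen > INACTIVITY_WINDOW

def stale_fraction(recommendations: list[str],
                   last_contribution: dict[str, datetime],
                   change_date: datetime) -> float:
    """Fraction of a recommendation list that is stale."""
    if not recommendations:
        return 0.0
    stale = sum(is_stale(r, last_contribution, change_date)
                for r in recommendations)
    return stale / len(recommendations)

# Hypothetical example: two of three recommended reviewers left long ago.
history = {
    "alice": datetime(2023, 10, 1),
    "bob": datetime(2019, 4, 2),
    "carol": datetime(2016, 1, 15),
}
print(stale_fraction(["alice", "bob", "carol"], history,
                     datetime(2023, 11, 24)))  # -> 0.666...
```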

To attend this PhD seminar in person, please go to DC 2564. You can also attend virtually using Zoom at https://uwaterloo.zoom.us/j/4130890098.