PhD Seminar • Empirical Software Engineering • Investigating Questions from Automatic Code Reviewers

Wednesday, June 19, 2024 — 9:00 AM to 10:00 AM EDT

Please note: This PhD seminar will take place online.

Farshad Kazemi, PhD candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Shane McIntosh

Automatic Code Reviewers (ACRs) are models trained to automate code review tasks, such as generating review comments. Indeed, prior work shows that state-of-the-art ACRs can generate review comments to initiate discussion threads; however, the capacity of ACRs to react to author responses is unclear. This is especially problematic when ACRs pose interrogative comments, i.e., comments that ask questions of other review participants.

In this paper, we study ACR-generated interrogative code review comments, analyzing their prevalence, their similarity to human-submitted interrogative comments, and the regularity of their generation. We empirically study three task-specific ACRs and three ACRs based on Large Language Models (LLMs) on mined data from the Gerrit project. We find that state-of-the-art ACRs: (1) generate interrogative comments at rates ranging from 15.6% to 65.26%; (2) differ from humans in generating such comments, which can stifle conversations, particularly in discussions where questions could spark productive dialogue; (3) produce interrogative comments with high irregularity, especially as the number of generated comments increases; and (4) suffer from limitations in their capacity to communicate; for instance, task-specific and GPT-4-based ACRs do not pose rhetorical questions, and the LLaMA2-based ACR rarely does (2.27%), even though rhetorical questions account for 8.74% of human-posed interrogative comments. Unlike task-specific ACRs, LLM-based ACRs can react to author responses. Hence, we further inspect 150 examples of their interrogative comments and reactions to author responses, observing that: (5) the interrogative comments that LLM-based ACRs pose can differ even more substantially from human behaviour than those of task-specific ACRs; and (6) LLM-based ACRs struggle to participate in code review discussions when compared to humans. While our results suggest that neither task-specific nor LLM-based ACRs can replace human reviewers yet, we observe opportunities for synergies. For example, ACRs raise pertinent questions about exception handling of common APIs more frequently than human reviewers.


To attend this PhD seminar on Zoom, please go to https://uwaterloo.zoom.us/j/4130890098?pwd=V1VydVpCdGxQcGVwNXNrRDZPcjc0QT09.

Location
Online PhD seminar
200 University Ave West
Waterloo, ON N2L 3G1
Canada