PhD Seminar • Software Engineering • Cross-Language and Cross-Project Bug Localization via Dynamic Chunking and Hard Example Learning

Tuesday, September 10, 2024 10:00 am - 11:00 am EDT (GMT -04:00)

Please note: This PhD seminar will take place in DC 2314.

Partha Chakraborty, PhD candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Mei Nagappan

Software bugs require developers to expend significant effort identifying and resolving them, often consuming about one-third of their time. Bug localization, the process of pinpointing the exact source code files that need modification, is crucial to reducing this effort. Existing bug localization tools, which typically rely on deep learning techniques, face limitations in both cross-project applicability and multi-language environments. Recent advances in Large Language Models (LLMs) offer detailed representations for bug localization that may help overcome these limitations. However, these models face two known challenges: 1) limited context windows and 2) mapping accuracy.

To address these challenges, we propose BLAZE, an approach that employs dynamic chunking and hard example learning. First, BLAZE dynamically segments source code to minimize continuity loss. Then, BLAZE fine-tunes a GPT-based model using complex bug reports in order to enhance cross-project and cross-language bug localization. To support the capability of BLAZE, we create the BeetleBox dataset, which comprises 26,321 bugs from 29 large and thriving open-source projects across five programming languages (Java, C++, Python, Go, and JavaScript). Our evaluation of BLAZE on three benchmark datasets — BeetleBox, SWE-Bench, and Ye et al. — demonstrates substantial improvements compared to six state-of-the-art baselines. Specifically, BLAZE achieves up to an increase of 120% in Top 1 accuracy, 144% in Mean Average Precision (MAP), and 100% in Mean Reciprocal Rank (MRR). Furthermore, an extensive ablation study confirms the contributions of our pipeline components to the overall performance enhancement.