Please note: This PhD seminar will take place online.
Xuye Liu, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Jian Zhao
Large Language Models (LLMs) are increasingly used to assist programmers with code evaluation and optimization. However, current LLM-based tools often provide opaque feedback, making it difficult for programmers to understand how code is evaluated and how different quality dimensions interact during optimization.
I will present a system that supports transparent, multi-dimensional code evaluation in real time, allowing programmers to track and reason about code quality across dimensions such as readability, performance, and maintainability. The system was developed through an iterative, user-centered design process informed by a formative study and a large-scale code analysis, and was evaluated in a controlled lab study with novice programmers. The results show that making evaluation explicit at the code-segment level helps users identify issues more effectively, apply targeted optimizations, and better understand the evolving state of their code. Finally, I will briefly discuss how building and studying this system led me to reflect on the importance of code understanding and reasoning, and introduce ongoing exploratory work that builds on this perspective.