Please note: This seminar will take place in DC 1304.
Lunjia Hu, PhD candidate
Computer Science Department, Stanford University
Machine learning holds significant potential for positive societal impact. However, in critical applications involving people, such as healthcare, employment, and lending, machine learning raises serious concerns about fairness, robustness, and interpretability. Addressing these concerns is crucial for making machine learning more trustworthy.
This talk will focus on three lines of my recent research establishing the mathematical foundations of trustworthy machine learning. First, I will introduce a theory that optimally characterizes the amount of data needed to achieve multicalibration, a recent fairness notion with many impactful applications. This result is an instance of a broader theory developed in my research that gives the first sample complexity characterizations for learning tasks involving multiple interacting function classes (ALT’22 Best Student Paper, ITCS’23 Best Student Paper). Next, I will discuss my research on omniprediction, a new approach to robust learning that allows simultaneous optimization of different loss functions and fairness constraints (ITCS’23, ICML’23). Finally, I will present a principled theory of calibration for neural networks (STOC’23). This theory provides an essential tool for understanding uncertainty quantification and interpretability in deep learning, enabling rigorous explanations of interesting empirical phenomena (NeurIPS’23 spotlight, ITCS’24).
Bio: Lunjia Hu is a final-year Computer Science PhD student at Stanford University, advised by Moses Charikar and Omer Reingold. He works on advancing the theoretical foundations of trustworthy machine learning, addressing fundamental questions about interpretability, fairness, robustness, and uncertainty quantification. His work on algorithmic fairness and machine learning theory has received Best Student Paper awards at ALT 2022 and ITCS 2023.