Please note: This PhD seminar will be given in person in DC 1302 and online.
Zeou Hu, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Yaoliang Yu
Federated learning (FL) has emerged as a promising, massively distributed way to train a joint deep model across a large number of edge devices while keeping private user data strictly on device. Conventionally, FL is formulated as a single-objective optimization problem in which the global objective is a weighted sum of each client's local objective, with pre-determined weighting coefficients. However, this formulation has drawbacks. We argue that federated learning is inherently a multi-objective problem, and that this perspective generalizes the conventional approach.
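To make the contrast concrete (the notation below is illustrative, not taken from the abstract): writing f_i for client i's local objective and λ_i for its weight, the two formulations can be sketched as:

```latex
% Conventional single-objective formulation: fixed, pre-determined weights \lambda_i
\min_{w} \; \sum_{i=1}^{N} \lambda_i f_i(w),
\qquad \lambda_i \ge 0, \quad \sum_{i=1}^{N} \lambda_i = 1.

% Multi-objective reformulation: minimize all local objectives simultaneously,
% seeking Pareto (stationary) solutions rather than fixing the trade-off a priori.
\min_{w} \; \bigl( f_1(w),\, f_2(w),\, \ldots,\, f_N(w) \bigr).
```

In the multi-objective view, no single client's loss is traded away by a fixed weighting chosen in advance.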
In this work, we formulate federated learning as multi-objective optimization and, motivated by ensuring fairness among users and robustness against malicious adversaries, propose a new algorithm, FedMGDA+, that is guaranteed to converge to Pareto stationary solutions. FedMGDA+ is simple to implement, has fewer hyperparameters to tune, and refrains from sacrificing the performance of any participating user. We establish the convergence properties of the algorithm and point out its connections to existing approaches. Extensive experiments on a variety of datasets confirm that FedMGDA+ compares favorably against the state of the art. Finally, we discuss several possible extensions of this work.
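As background for the talk, the multiple-gradient descent algorithm (MGDA) that FedMGDA+ builds on finds a common descent direction that decreases every client's loss at once, by taking the min-norm point in the convex hull of the client gradients. The sketch below is a minimal illustration of that idea using a plain Frank-Wolfe solver; the function names and solver choice are ours, not the authors' implementation.

```python
import numpy as np

def min_norm_weights(grads, iters=500):
    """Find simplex weights lam minimizing ||sum_i lam_i g_i||^2 via Frank-Wolfe."""
    n = len(grads)
    G = np.stack(grads)            # (n, d) matrix of client gradients
    M = G @ G.T                    # Gram matrix of pairwise inner products
    lam = np.full(n, 1.0 / n)      # start from uniform weights
    for t in range(iters):
        grad = 2.0 * M @ lam       # gradient of the quadratic objective in lam
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0   # linear minimization oracle over the simplex
        step = 2.0 / (t + 2.0)     # standard Frank-Wolfe step size
        lam = (1.0 - step) * lam + step * s
    return lam

def common_descent_direction(grads):
    """Direction d = -sum_i lam_i g_i; a descent direction for every objective
    whenever the min-norm point is nonzero (i.e. the point is not Pareto stationary)."""
    lam = min_norm_weights(grads)
    return -sum(l * g for l, g in zip(lam, grads))

# Two conflicting client gradients: the common direction has negative inner
# product with both, so a small step decreases both clients' losses.
g1, g2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
d = common_descent_direction([g1, g2])
```

FedMGDA+ adapts this step to the federated setting, where the server only sees client updates; the min-norm weights play the role that the fixed coefficients play in the conventional weighted-sum formulation.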