Please note: You must register to attend this online seminar.
Yang Cao, Associate Professor
Graduate School of Information Science and Technology, Hokkaido University
Federated learning has received increasing attention in academia and industry as a new privacy-preserving machine learning paradigm. Unlike traditional machine learning, which requires collecting data before training, in federated learning clients collaboratively train a model under the coordination of a central server: clients share only model updates with the server, while all raw data remain stored locally. However, recent studies have shown that these model updates can still reveal sensitive information to the server, and federated learning by itself does not provide formal privacy guarantees.
This talk will review recent advances in differentially private federated learning under untrusted servers, introduce our attempts toward this goal by leveraging local differential privacy (LDP), the shuffle model of differential privacy, and trusted execution environments (TEEs), and discuss some open problems.
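To make the setting concrete, the core idea of sharing only perturbed model updates can be illustrated with a minimal sketch of one federated averaging round with client-side noise. This is an illustrative assumption for exposition only, not the speaker's actual method; the clipping bound `c`, noise scale `sigma`, and all function names are hypothetical.

```python
import random

def clip(update, c):
    """Clip an update vector to L2 norm at most c (standard DP preprocessing)."""
    norm = sum(x * x for x in update) ** 0.5
    if norm > c:
        return [x * c / norm for x in update]
    return update

def privatize(update, c, sigma, rng):
    """Client side: clip the local update, then add Gaussian noise
    before sharing it with the server (raw data never leaves the client)."""
    clipped = clip(update, c)
    return [x + rng.gauss(0.0, sigma * c) for x in clipped]

def fedavg_round(client_updates, c, sigma, seed=0):
    """Server side: average the noisy client updates into one global update."""
    rng = random.Random(seed)
    noisy = [privatize(u, c, sigma, rng) for u in client_updates]
    n = len(noisy)
    return [sum(col) / n for col in zip(*noisy)]
```

With `sigma = 0` the round reduces to plain federated averaging; increasing `sigma` trades model accuracy for stronger privacy against an untrusted server, which is the tension the talk examines.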
Bio: Yang Cao is an Associate Professor in the Division of Computer Science and Information Technology at Hokkaido University. He earned his Ph.D. from the Graduate School of Informatics, Kyoto University, in 2017.
His research interests lie at the intersection of databases, security, and machine learning. He has published many papers in these areas, including in top venues such as VLDB, SIGMOD, ICDE, AAAI, TKDE, and USENIX Security. Two of his papers were selected as best paper finalists, at ICDE 2017 and ICME 2020. He is a recipient of the IEEE Computer Society Japan Chapter Young Author Award 2019 and the Database Society of Japan Kambayashi Young Researcher Award 2021.
To attend this seminar on Zoom, please register at https://uwaterloo.zoom.us/meeting/register/tJMsdeytrzMpG9D-U8caIIoBpwqxRhGM6DZj.
200 University Avenue West
Waterloo, ON N2L 3G1