Please note: This seminar will be given online.
Shalmali Joshi, Postdoctoral Fellow
Center for Research on Computation and Society, Harvard University
Advances in Machine Learning (ML) have revolutionized many domains such as machine translation, complex game playing, and scientific discovery. However, ML has enjoyed only modest success in human-centered applications. To improve the utility, reliability, and robustness of ML models in human-centered domains, we need to address several foundational challenges.
In this talk, I will demonstrate how an algorithmic-safety perspective can motivate specific technical challenges for learning in human-centered domains such as healthcare. Specifically, I will discuss the need to improve the utility of ML through robustness, explainability with an emphasis on decision-making, and post-hoc algorithmic safety to prevent harm. I will present my contributions on i) novel methods to improve the causal robustness of ML methods designed for practical generative settings, ii) aiding safe decision-making in non-IID settings using time-series explainability intended to address clinicians' requirements, and iii) novel learning algorithms that optimize for post-deployment safety in sequential decision-making settings. I will conclude with an overview of my future research vision: novel safety-based objectives for explainability in ML, expanding ML-based solutions to general and practical generative settings, and novel ways of validating ML models against safety-based objectives.
Bio: Shalmali Joshi is a Postdoctoral Fellow at the Center for Research on Computation and Society at Harvard University. Previously, she was a Postdoctoral Fellow at the Vector Institute. She received her Ph.D. from the University of Texas at Austin (UT Austin).
Her research focuses on the algorithmic safety of Machine Learning for human-centered domains. Shalmali has contributed to explainability, robustness, and novel algorithms for ML safety, with an emphasis on practical generative settings and impact on decision-making. She has published in ML and interdisciplinary healthcare venues such as NeurIPS, FAccT, CHIL, MLHC, and PMLR, as well as perspectives in JAMIA, LDH, and Nature Medicine. She co-founded the Fair ML for Health workshop at NeurIPS and served as communications chair for ACM CHIL 2020, in addition to reviewing and meta-reviewing for several ML venues.