Location
MC 5479
Candidate
Juliette Sinnott | Applied Mathematics, University of Waterloo
Title
Improving Explainability, Interpretability, and Privacy in Machine Learning for Medical Applications
Abstract
Machine learning (ML) has proven highly effective at tackling medical problems. However, many models suffer from the black-box problem, making their predictions difficult to explain. Moreover, approaches that aim to increase interpretability can inadvertently leak sensitive information about the training data. Using a previously published logistic regression model that predicts patient response to immunotherapy, I will discuss methods that address these challenges. First, I propose a Causally Informed Prediction Model, which outputs both response probabilities and algorithmic recourse suggestions. By explicitly modelling causal relationships between input features, this model computes the minimal user actions that could lead to a more desirable outcome. This approach can be integrated into various architectures to enhance explainability. Next, I will present my collaborative work on Private and Interpretable Tensor Train (TT) Models. I will first illustrate how attacks on publicly available trained models can reveal which datasets were used for training. Then, I will show how the TT method decreases the accuracy of such attacks without sacrificing model performance. Furthermore, the TT representation provides interpretability advantages, including insights into feature importance and conditional computations.
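To make the algorithmic recourse idea concrete, here is a minimal sketch for a plain (non-causal) logistic regression model: the smallest L2-norm change to the features that raises the predicted probability to a chosen target lies along the weight vector and has a closed form. This is only an illustration of recourse in general; the function name `minimal_recourse`, the toy weights, and the target probability are hypothetical, and the talk's Causally Informed Prediction Model additionally accounts for causal relationships between features, which this sketch does not.

```python
import numpy as np

def minimal_recourse(w, b, x, p_target=0.5):
    """Smallest L2 feature change delta so that the logistic
    regression score sigma(w @ (x + delta) + b) reaches p_target.
    The minimal-norm shift is parallel to w (closed form).
    Hypothetical illustration, ignoring causal structure."""
    t = np.log(p_target / (1.0 - p_target))  # target logit
    gap = t - (w @ x + b)                    # logit shortfall
    if gap <= 0:
        return np.zeros_like(x)              # already at/above target
    return gap * w / (w @ w)                 # minimal-norm recourse

# Toy example: two features, made-up weights.
w = np.array([1.5, -2.0])
b = -0.5
x = np.array([0.2, 0.4])
delta = minimal_recourse(w, b, x, p_target=0.6)
# After applying delta, the logit w @ (x + delta) + b equals log(0.6/0.4).
```

A causal version would instead search over interventions on actionable features and propagate their downstream effects, which generally breaks this closed form.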