PhD Comprehensive Exam | Juliette Sinnott, Private, Interpretable and Explainable Machine Learning Models for Medical Applications

Wednesday, May 6, 2026 11:00 am - 12:00 pm EDT (GMT -04:00)

Location

MC 6460

Candidate

Juliette Sinnott | Applied Mathematics, University of Waterloo

Title

Private, Interpretable and Explainable Machine Learning Models for Medical Applications

Abstract

Most recent advances in machine learning (ML) have focused on improving accuracy, generalization, and computational efficiency. In sensitive fields like medicine, however, other priorities such as privacy, interpretability, and explainability must also be addressed. We tackle each of these issues in separate projects, using an existing immunotherapy prediction model as a case study. First, we explore how information about training data can be unintentionally retained in published models and extracted through membership inference attacks. We compare unprotected models, models trained with differential privacy, and models transformed into quantum-inspired tensor trains (TTs). We show that TTs defend against these attacks while offering superior interpretability compared to other methods. Next, we explore how model outputs can be enriched by offering suggestions for minimal algorithmic recourse. By building a causal graph and approximating the structural equations, we can use gradient descent to suggest the best future action for patients who receive unfavorable predictions from the model. This is a more responsible way to guide user responses to ML predictions. Finally, we discuss how causality and TTs can be incorporated more directly into medical ML models to improve prediction quality alongside all the other priorities mentioned above.
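The gradient-descent recourse idea described in the abstract can be illustrated with a minimal sketch. This is not the candidate's actual method: the logistic model, its weights, the `target` threshold, and the proximity penalty `lam` below are all hypothetical stand-ins, chosen only to show how one might search for a small change to a patient's features that lifts an unfavorable prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for a trained prediction model (hypothetical weights).
w = np.array([1.5, -2.0, 0.5])
b = -0.2

def predict(x):
    return sigmoid(w @ x + b)

def recourse(x0, target=0.6, lam=0.1, lr=0.05, steps=500):
    """Gradient-descent search for a small change to x0 that pushes the
    model's prediction toward `target` while staying close to x0."""
    x = x0.copy()
    for _ in range(steps):
        p = predict(x)
        # Gradient of the hinge penalty max(0, target - p)^2 w.r.t. x.
        grad_pred = -2.0 * max(0.0, target - p) * p * (1.0 - p) * w
        # Gradient of the proximity term lam * ||x - x0||^2.
        grad_prox = 2.0 * lam * (x - x0)
        x -= lr * (grad_pred + grad_prox)
    return x

x0 = np.array([0.0, 1.0, 0.0])   # patient with an unfavorable prediction
x_new = recourse(x0)             # suggested feature change
```

In the full approach described in the abstract, the optimization would additionally respect a causal graph and approximate structural equations, so that suggested changes correspond to actions a patient can actually take rather than arbitrary feature perturbations.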