PhD Defence • Cryptography, Security, and Privacy (CrySP) • Deployment Concerns in Machine Learning Systems: Unintended Interactions and Accountability

Monday, June 15, 2026 1:00 pm - 4:00 pm EDT (GMT -04:00)

Please note: This PhD defence will take place online.

Vasisht Duddu, PhD candidate
David R. Cheriton School of Computer Science

Supervisor: Professor N. Asokan

Machine learning (ML) models are increasingly being deployed for client-facing services (e.g., chatbots, search engines, and browsers), high-stakes decision-making applications (e.g., healthcare and criminal justice), and as part of larger systems (e.g., autonomous vehicles and operating systems). However, to deploy ML models for a particular application, practitioners need to address various deployment concerns, including (i) infrastructure issues (e.g., latency, throughput, interoperability, and scalability), (ii) model design (e.g., high utility and generalization, low overfitting, hyperparameter tuning, and data processing), (iii) environmental impact (e.g., reducing the carbon emissions and the water and power consumption of data centers), (iv) adversarial and societal risks (e.g., threats to security, privacy, and the safety of clients, as well as unfairness, poor transparency, misalignment, misinformation, and cyberattacks), and (v) enabling governance (e.g., verifying practitioners' claims and demonstrating regulatory compliance). I focus on two of these deployment concerns: adversarial and societal risks, and enabling governance. Within them, I address unintended interactions and accountability, respectively, and present these as the two parts of the thesis.

(Part 1) Unintended Interactions in Machine Learning: Existing literature has identified various risks to security, privacy, fairness, transparency, and safety, and shown how they can be exploited. While substantial prior work explores the design of defenses against individual risks in isolation, this is not sufficient for real-world ML models, which must be protected against multiple risks simultaneously. Practitioners therefore need to address the unintended interactions that emerge when protecting against multiple risks. A systematic understanding of such interactions is lacking; I identify the following unintended interactions: (a) a defense against one risk may increase or decrease other, unrelated risks; (b) conflicts among defenses can decrease their effectiveness when combined; and (c) collusion among adversaries can allow one risk to be exploited to amplify others. I propose frameworks to identify the factors underlying such interactions and present guidelines for conjecturing about unexplored ones.

(Part 2) Accountability in the Machine Learning Pipeline: Practitioners must often demonstrate properties of an ML model, its training process, and its training data to a verifier (e.g., a regulator or customer). Such claims are typically communicated via ML property cards (e.g., model, data, and inference cards). I propose ML property attestations: technical mechanisms that allow provers (e.g., model trainers) to demonstrate these properties to verifiers while ensuring the confidentiality of the proprietary model and data. I show that existing software-based attestations are either inefficient (e.g., cryptographic mechanisms) or ineffective and easily evaded (e.g., ML-based mechanisms). I then identify hardware-assisted mechanisms using trusted execution environments as an efficient and effective alternative for providing ML property attestations. These attestations can serve as the basis for verifiable ML property cards, hold practitioners accountable for their claims, and demonstrate compliance with regulations.

Attend this PhD defence virtually on Zoom.