Please Note: This seminar will be given online.
Statistics & Biostatistics seminar series
University of California, San Francisco (UCSF)
Link to join seminar: Hosted on Webex
Safe approval policies for continual learning systems in healthcare
The number of machine learning (ML)-based medical devices approved by the US Food and Drug Administration (FDA) has been rapidly increasing. The current regulatory policy requires these algorithms to be locked post-approval; subsequent changes must undergo additional scrutiny.
Nevertheless, ML algorithms have the potential to improve over time by training on a growing body of data, to better reflect real-world settings, and to adapt to distributional shifts. To facilitate a move toward continual learning algorithms, the FDA is looking to streamline regulatory policies and design Algorithm Change Protocols (ACPs) that autonomously approve proposed modifications. However, the problem of designing ACPs cannot be taken lightly. We show that approval policies without error-rate guarantees are prone to "bio-creep" and may fail to protect against distributional shifts. Motivated by these risks, we investigate the problem of ACP design within the frameworks of online hypothesis testing and online learning, and take the first steps towards developing safe ACPs.
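To make the online-hypothesis-testing framing concrete, here is a minimal sketch of one standard error-rate-controlling rule, alpha-investing (Foster & Stine), applied to a stream of proposed modifications. This is an illustration of the general idea only, not the speakers' ACP design; the spending schedule `wealth / (2 * (1 + wealth))` is a hypothetical choice made for the example.

```python
def alpha_investing(p_values, wealth=0.05, payout=0.05):
    """Sequentially decide whether to approve each proposed modification.

    Each modification i comes with a p-value testing "no improvement".
    The rule spends alpha-wealth on each test and earns `payout` back
    on each rejection (approval), which bounds the rate of erroneous
    approvals instead of letting errors compound ("bio-creep").
    """
    decisions = []
    for p in p_values:
        # Spend only part of the remaining wealth on this test
        # (hypothetical schedule; guarantees the cost below < wealth).
        alpha = wealth / (2 * (1 + wealth))
        reject = p <= alpha
        if reject:
            wealth += payout            # discovery: wealth replenished
        else:
            wealth -= alpha / (1 - alpha)  # no discovery: pay the toll
        decisions.append(reject)
        if wealth <= 0:                 # safeguard: wealth exhausted
            decisions.extend([False] * (len(p_values) - len(decisions)))
            break
    return decisions
```

A locked-algorithm policy corresponds to rejecting every modification; an unguarded policy approves on raw p-values alone. The wealth mechanism sits in between: weak evidence drains the budget, so a long run of null modifications cannot keep slipping through.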