Department Seminar
Subha Maity
Room: M3 3127
Tackling Posterior Drift via Linear Adjustments and Exponential Tilts
I will speak about some of my recent work on transfer learning from a source to a target population in the presence of "posterior drift," i.e., when the regression function or the Bayes classifier in the target population differs from that in the source. In the setting where labeled samples from the target domain are available, we model the posterior drift as a linear adjustment (on an appropriately transformed scale); this allows us to learn the nature of the drift from relatively few target samples, compared with the abundant samples available from the source population. The other (semi-supervised) setting, where target labels are unavailable, is addressed by connecting the probability distribution in the target domain to that in the source domain via an exponential-family formulation and learning the corresponding parameters. Both approaches are motivated by ideas originating in classical statistics. I will present theoretical guarantees for these procedures as well as applications to real data from the UK Biobank study (mortality prediction) and the Waterbirds dataset (image classification).
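The two formulations mentioned above can be sketched schematically as follows. This is only an illustrative reading of the abstract: the particular transformed scale (here the logit) and the sufficient statistic $T(x)$ are assumptions for exposition, and the talk's precise models may differ.

```latex
% Supervised case: posterior drift as a linear adjustment on a
% transformed (e.g., logit) scale. Here \eta_P and \eta_Q denote the
% source and target regression functions, and (\alpha, \beta) are
% learned from the few labeled target samples.
\operatorname{logit} \eta_Q(x) = \alpha + \beta \,\operatorname{logit} \eta_P(x)

% Semi-supervised case: the target density q is an exponential tilt of
% the source density p, with tilt parameter \theta, sufficient
% statistic T(x), and log-normalizer A(\theta), all to be learned.
q(x) = \exp\bigl(\theta^{\top} T(x) - A(\theta)\bigr)\, p(x)
```

In the first display, estimating only the low-dimensional pair $(\alpha, \beta)$ is what makes a small labeled target sample sufficient; in the second, no target labels are needed because the tilt relates the covariate distributions directly.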