BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Drupal iCal API//EN
X-WR-CALNAME:Events items teaser
X-WR-TIMEZONE:America/Toronto
BEGIN:VTIMEZONE
TZID:America/Toronto
X-LIC-LOCATION:America/Toronto
BEGIN:DAYLIGHT
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
DTSTART:20190310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
DTSTART:20191103T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:69f3201dd0581
DTSTART;TZID=America/Toronto:20200116T100000
SEQUENCE:0
TRANSP:TRANSPARENT
DTEND;TZID=America/Toronto:20200116T100000
URL:https://uwaterloo.ca/statistics-and-actuarial-science/events/department
 -seminar-victor-veitch-columbia-university
LOCATION:M3 - Mathematics 3 200 University Avenue West Room 3127 Waterloo O
 N N2L 3G1 Canada
SUMMARY:Department seminar by Victor Veitch\, Columbia University
CLASS:PUBLIC
DESCRIPTION:ADAPTING BLACK-BOX MACHINE LEARNING METHODS FOR CAUSAL INFERENC
 E\n\nI'll cover two recent works on the use of deep learning for causal\ni
 nference with observational data. The setup for the problem is: we\nhave a
 n observational dataset where each observation includes a\ntreatment\, an 
 outcome\, and covariates (confounders) that may affect\nthe treatment and 
 outcome. We want to estimate the causal effect of\nthe treatment on the ou
 tcome\; that is\, what happens if we intervene? \nThis effect is estimate
 d by adjusting for the covariates. The talk\ncovers two aspects of using de
 ep learning for this adjustment.\n\nFirst\, neural network research ha
 s focused on \\emph{predictive}\nperformance\, but our goal is to produce 
 a quality \\emph{estimate} of\nthe effect. I'll describe two adaptations t
 o neural net design and\ntraining\, based on insights from the statistical
  literature on the\nestimation of treatment effects. The first is a new ar
 chitecture\, the\nDragonnet\, that exploits the sufficiency of the propens
 ity score for\nestimation adjustment. The second is a regularization proce
 dure\,\ntargeted regularization\, that induces a bias towards estimates th
 at\nhave non-parametrically optimal asymptotic properties. \n\nSecond\, I
 'll describe how to use deep language models (e.g.\, BERT) for\ncausal inf
 erence with text data. The challenge here is that text data\nis high dimen
 sional\, and naive dimension reduction may throw away\ninformation require
 d for causal identification. The main insight is\nthat the text representa
 tion produced by deep embedding methods\nsuffices for the causal adjustmen
 t. 
DTSTAMP:20260430T092549Z
END:VEVENT
END:VCALENDAR