BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Drupal iCal API//EN
X-WR-CALNAME:Events items teaser
X-WR-TIMEZONE:America/Toronto
BEGIN:VTIMEZONE
TZID:America/Toronto
X-LIC-LOCATION:America/Toronto
BEGIN:DAYLIGHT
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
DTSTART:20190310T070000
END:DAYLIGHT
BEGIN:STANDARD
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
DTSTART:20191103T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:69d99a27108fb
DTSTART;TZID=America/Toronto:20200122T100000
SEQUENCE:0
TRANSP:TRANSPARENT
DTEND;TZID=America/Toronto:20200122T100000
URL:https://uwaterloo.ca/statistics-and-actuarial-science/events/department
 -seminar-lin-liu-harvard-university
LOCATION:M3 - Mathematics 3 200 University Avenue West Room 3127 Waterloo O
 N N2L 3G1 Canada
SUMMARY:Department seminar by Lin Liu\, Harvard University
CLASS:PUBLIC
DESCRIPTION:THE POSSIBILITY OF NEARLY ASSUMPTION-FREE INFERENCE IN CAUSAL
 \nINFERENCE\n\nIn causal effect estimation\, the state of the art is th
 e class of so-called\ndouble machine learning (DML) estimators\, which c
 ombine the benefits of\ndoubly robust estimation\, sample splitting\, an
 d machine learning\nmethods for estimating nuisance paramet
 ers. The validity of the\nconfidence interval associated with a DML es
 timator relies\, in large\npart\, on the complexity of the nuisance pa
 rameters and on how close\nthe machine learning estimators are to the
 m. Until we have a complete\nunderstanding of the theory of many machi
 ne learning methods\,\nincluding deep neural networks\, even a DML est
 imator may have a bias\nso large that it prohibits valid inference. I
 n this talk\, we describe\na nearly assumption-free procedure that ca
 n either detect the\ninvalidity of the Wald confidence interval assoc
 iated with the DML\nestimator of a causal effect of interest or falsif
 y the certificates\n(i.e.\, the mathematical conditions) that\, if tru
 e\, would ensure valid\ninference. Essentially\, we are testing the nu
 ll hypothesis that the\nbias of an estimator is smaller than a fractio
 n $\\rho$ of its\nstandard error. Our test is valid under the null wit
 hout requiring any\ncomplexity (smoothness or sparsity) assumptions o
 n the nuisance\nparameters or on the properties of the machine learnin
 g estimators\,\nand it may have power to inform analysts that they nee
 d something\nother than DML estimators or Wald confidence intervals fo
 r inference\npurposes. This talk is based on joint work with Rajarsh
 i Mukherjee and\nJames M. Robins.
DTSTAMP:20260411T004735Z
END:VEVENT
END:VCALENDAR