BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Drupal iCal API//EN
X-WR-CALNAME:Events items teaser
X-WR-TIMEZONE:America/Toronto
BEGIN:VTIMEZONE
TZID:America/Toronto
X-LIC-LOCATION:America/Toronto
BEGIN:DAYLIGHT
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
DTSTART:20180311T070000
END:DAYLIGHT
BEGIN:STANDARD
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
DTSTART:20171105T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:69c0c9444adba
DTSTART;TZID=America/Toronto:20180504T140000
SEQUENCE:0
TRANSP:TRANSPARENT
DTEND;TZID=America/Toronto:20180504T140000
URL:https://uwaterloo.ca/artificial-intelligence-group/events/phd-seminar-r
 egularized-losses-weakly-supervised-cnn
LOCATION:DC - William G. Davis Computer Research Centre 200 University Aven
 ue West 2310 Waterloo ON N2L 3G1 Canada
SUMMARY:PhD Seminar: Regularized Losses for Weakly-supervised CNN Segmentat
 ion
CLASS:PUBLIC
DESCRIPTION:Speaker: Meng Tang\, PhD candidate\n\nMinimization of regulariz
 ed losses is a principled approach to weak supervision well established i
 n deep learning\, in general. However\, it is largely overlooked in seman
 tic segmentation currently dominated by methods mimicking full supervisio
 n via \"fake\" fully-labeled training masks (proposals) generated from av
 ailable partial input. To obtain such full masks the typical methods expl
 icitly use standard regularization techniques for \"shallow\" segmentatio
 n\, e.g.\, graph cuts or dense CRFs. In contrast\, we integrate such stan
 dard regularizers and clustering criteria directly into the loss function
 s over partial input. This approach simplifies weakly-supervised training
  by avoiding extra MRF/CRF inference steps or layers explicitly generatin
 g full masks\, while improving both the quality and efficiency of trainin
 g.
DTSTAMP:20260323T050156Z
END:VEVENT
END:VCALENDAR