BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Drupal iCal API//EN
X-WR-CALNAME:Events items teaser
X-WR-TIMEZONE:America/Toronto
BEGIN:VTIMEZONE
TZID:America/Toronto
X-LIC-LOCATION:America/Toronto
BEGIN:DAYLIGHT
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
DTSTART:20200308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
DTSTART:20191103T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:69d427aaa3d79
DTSTART;TZID=America/Toronto:20200917T160000
SEQUENCE:0
TRANSP:TRANSPARENT
DTEND;TZID=America/Toronto:20200917T160000
URL:https://uwaterloo.ca/statistics-and-actuarial-science/events/department
 -seminar-neil-spencer-carnegie-mellon-university
SUMMARY:Department seminar by Neil Spencer\, Carnegie Mellon University
CLASS:PUBLIC
DESCRIPTION:A NEW FRAMEWORK FOR MODELING SPARSE NETWORKS THAT MAKES SENSE (
 AND CAN\nACTUALLY BE FIT!)\n\nLatent position models are a versatile tool 
 when working with network\ndata. Applications include clustering entities\
 , network visualization\,\nand controlling for unobserved causal confoundi
 ng. In traditional\ntreatments of the latent position model\, the nodes’
  latent positions\nare viewed as independent and identically distributed r
 andom\nvariables. This assumption implies that the average node degree gro
 ws\nlinearly with the number of nodes in the network\, making it\ninapprop
 riate when the network is sparse. In the first part of this\ntalk\, I will
  propose an alternative assumption—that the latent\npositions are genera
 ted according to a Poisson point process—and\nshow that it is compatible
  with various levels of network sparsity. I\nwill also provide theory esta
 blishing that the nodes’ latent\npositions can be consistently estimated
 \, provided that the network\nisn't too sparse.  In the second part of th
 e talk\, I will consider\nthe computational challenge of fitting latent po
 sition models to large\ndatasets. I will describe a new Markov chain Monte
  Carlo\nstrategy—based on a combination of split Hamiltonian Monte Carlo
  and\nFirefly Monte Carlo—that is much more efficient than the standard\
 nMetropolis-within-Gibbs algorithm for inferring the latent positions.\nTh
 roughout the talk\, I will use an advice-sharing network of\nelementary sc
 hool teachers within a school district as a running\nexample.\n\nPlease no
 te: This talk will be hosted on Webex. To join please click\non the foll
 owing link: Department seminar by Neil Spencer\n[https://uwaterloo.webex.
 com/uwaterloo/onstage/g.php?MTID=e15ee553cf6b5ab678d359aa7d122a3ca].
DTSTAMP:20260406T213746Z
END:VEVENT
END:VCALENDAR