Department seminar by Neil Spencer, Carnegie Mellon University

Thursday, September 17, 2020, 4:00 pm EDT (GMT -04:00)

A new framework for modeling sparse networks that makes sense (and can actually be fit!)

Latent position models are a versatile tool when working with network data. Applications include clustering entities, network visualization, and controlling for unobserved causal confounding. In traditional treatments of the latent position model, the nodes’ latent positions are viewed as independent and identically distributed random variables. This assumption implies that the average node degree grows linearly with the number of nodes in the network, making it inappropriate when the network is sparse. In the first part of this talk, I will propose an alternative assumption—that the latent positions are generated according to a Poisson point process—and show that it is compatible with various levels of network sparsity. I will also provide theory establishing that the nodes’ latent positions can be consistently estimated, provided that the network isn't too sparse.

In the second part of the talk, I will consider the computational challenge of fitting latent position models to large datasets. I will describe a new Markov chain Monte Carlo strategy—based on a combination of split Hamiltonian Monte Carlo and Firefly Monte Carlo—that is much more efficient than the standard Metropolis-within-Gibbs algorithm for inferring the latent positions. Throughout the talk, I will use an advice-sharing network of elementary school teachers within a school district as a running example.
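Background for the density claim above (a brief sketch in generic latent position model notation, not taken from the talk): if the latent positions $Z_1, \dots, Z_n$ are i.i.d. draws from a fixed distribution and each edge forms independently with probability $K(Z_i, Z_j)$ for a fixed link function $K$, then every pair of nodes connects with the same marginal probability $p = \mathbb{E}[K(Z_i, Z_j)]$, which does not depend on $n$. The expected degree of a node is therefore $(n-1)p$, growing linearly in $n$, so the network remains dense. Modeling the positions instead as points of a Poisson process whose intensity spreads over a growing region allows the typical value of $K(Z_i, Z_j)$ to shrink as the network grows, which is what makes sparser degree sequences possible.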

Please note: This talk will be hosted on Webex. To join, please click on the following link: Department seminar by Neil Spencer.