BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Date iCal//NONSGML kigkonsult.se iCalcreator 2.20.4//
METHOD:PUBLISH
X-WR-CALNAME;VALUE=TEXT:Events
BEGIN:VTIMEZONE
TZID:America/Toronto
BEGIN:STANDARD
DTSTART:20181104T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20190310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:calendar.1316.field_event_date.0@uwaterloo.ca/statistics-and-actuarial-
science
DTSTAMP:20201125T121304Z
CREATED:20190103T143436Z
DESCRIPTION:Some Priors for Nonparametric Shrinkage and Bayesian Sparsity I
 nference \n\nIn this talk\, I introduce two novel classes of shrinkage pri
 ors for different purposes: the functional horseshoe (fHS) prior for nonpa
 rametric subspace shrinkage and neuronized priors for general sparsity inf
 erence. \n\nIn function estimation problems\, the fHS prior encourages shr
 inkage towards parametric classes of functions. Unlike other shrinkage pri
 ors for parametric models\, the fHS shrinkage acts on the shape of the fun
 ction rather than inducing sparsity on model parameters. I study some desi
 rable theoretical properties\, including an optimal posterior concentratio
 n property on the function and model selection consistency. I apply the fH
 S prior to nonparametric additive models for some simulated and real data s
 ets\, and the results show that the proposed procedure outperforms state-o
 f-the-art methods in terms of estimation and model selection. \n\nFor gene
 ral sparsity inference\, I propose neuronized priors to unify and extend e
 xisting shrinkage priors\, such as one-group continuous shrinkage priors\, c
 ontinuous spike-and-slab priors\, and discrete spike-and-slab priors with p
 oint-mass mixtures. The new priors are formulated as the product of a weig
 ht variable and a transformed scale variable via an activation function. B
 y altering the activation function\, practitioners can easily implement a l
 arge class of Bayesian variable selection procedures. Compared with classi
 c spike-and-slab priors\, the neuronized priors achieve the same explicit v
 ariable selection without employing any latent indicator variable\, which r
 esults in more efficient MCMC algorithms and more effective posterior moda
 l estimates. I also show that these new formulations can be applied to mor
 e general and complex sparsity inference problems that are computationall
 y challenging\, such as structured sparsity and spatially correlated spars
 ity problems.
DTSTART;TZID=America/Toronto:20190124T160000
DTEND;TZID=America/Toronto:20190124T160000
LAST-MODIFIED:20190103T143940Z
LOCATION:M3 - Mathematics 3\nRoom: 3127\n200 University Avenue West\nWaterl
 oo\, ON N2L 3G1\nCanada
SUMMARY:Department seminar by Minsuk Shin\, Harvard University
URL;VALUE=URI:https://uwaterloo.ca/statistics-and-actuarial-science/events/
 department-seminar-minsuk-shin-harvard-university
END:VEVENT
END:VCALENDAR