BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Drupal iCal API//EN
X-WR-CALNAME:Events items teaser
X-WR-TIMEZONE:America/Toronto
BEGIN:VTIMEZONE
TZID:America/Toronto
X-LIC-LOCATION:America/Toronto
BEGIN:DAYLIGHT
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
DTSTART:20241103T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:69f87ffbc6de1
DTSTART;TZID=America/Toronto:20251007T133000
SEQUENCE:0
TRANSP:TRANSPARENT
DTEND;TZID=America/Toronto:20251007T150000
URL:https://uwaterloo.ca/pure-mathematics/events/computability-learning-sem
 inar-155
SUMMARY:Computability Learning Seminar
CLASS:PUBLIC
DESCRIPTION:COLE WYETH\, UNIVERSITY OF WATERLOO\n\n_Introduction to Algori
 thmic Probability and the Coding Theorem_\n\nBuilding on the prefix-free Kol
 mogorov complexity discussed at our\nlast meeting\, I will introduce the b
 asic objects of algorithmic\nprobability. In particular\, with a theory of
  effective explanations in\nhand\, it is natural to ask which strings are 
 more probable a priori.\nAfter all\, it is harder to predict the data befo
 re you have seen it!\nThe distributions generated by probabilistic Turing 
 machines can be\nfully characterized as the (normalized) \"lower semicompu
 table\nsemimeasures\,\" which naturally leads to the so-called \"discrete\
 nuniversal distribution\" m by simply mixing them all together. I will\nsk
 etch a proof of Leonid A. Levin's coding theorem\, which tells us\nthat -l
 g m(x) = K(x) up to constants\, meaning that all of our work\nwas\, in the
  most satisfying possible sense\, for nothing: we can take\nonly the short
 est algorithmic explanation without losing anything.\nHowever\, this is al
 l just a warm-up: we will find that the situation\nis much more intricate 
 when we turn to the prediction of infinite\nsequences\, which I hope to ge
 sture at\, time permitting. \n\nMC 5403
DTSTAMP:20260504T111611Z
END:VEVENT
END:VCALENDAR