BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Drupal iCal API//EN
X-WR-CALNAME:Events items teaser
X-WR-TIMEZONE:America/Toronto
BEGIN:VTIMEZONE
TZID:America/Toronto
X-LIC-LOCATION:America/Toronto
BEGIN:DAYLIGHT
TZNAME:EDT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZNAME:EST
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
DTSTART:20251102T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
UID:69d15f4aa85b5
DTSTART;TZID=America/Toronto:20260310T153000
SEQUENCE:0
TRANSP:TRANSPARENT
DTEND;TZID=America/Toronto:20260310T170000
URL:https://uwaterloo.ca/centre-for-theoretical-neuroscience/events/ctn-sem
 inar-chris-sims-rensselaer-polytechnic
LOCATION:DC - William G. Davis Computer Research Centre 200 University Aven
 ue West Waterloo ON N2L 3G1 Canada
SUMMARY:CTN Seminar: Chris Sims Rensselaer Polytechnic
CLASS:PUBLIC
DESCRIPTION:Room: DC1304\n\nTitle: Why Simplicity Enables Intelligence: Ef
 ficient Coding in Human\nLearning and Generalization\n\nAbstract:\n\nHuman
  intelligence depends critically on the ability to learn\nrepresentations 
 that generalize beyond past experience. While\nreinforcement learning theo
 ry formalizes how agents should act to\nmaximize reward\, it provides litt
 le guidance on how internal\nrepresentations should be structured to suppo
 rt generalization. In\nthis talk\, I propose that efficient coding provide
 s a unifying\nrepresentational principle. When agents are constrained to u
 se the\nsimplest representations compatible with reward maximization\, the
 y are\nforced to discover abstract structure in the environment and to\nse
 lectively encode features that matter for behaviour. I present a\ncomputat
 ional framework in which efficient coding augments the\nclassical reinforc
 ement learning objective\, leading to compact\ninternal state spaces that 
 support robust generalization. Behavioural\nexperiments show that this fra
 mework accounts for human generalization\npatterns that standard models s
 truggle to explain. I further\ndemonstrate that the same principle explain
 s long-standing\nregularities in perceptual generalization\, including the
  universal law\nof generalization. These results suggest that abstraction\
 ,\ngeneralization\, and perceptual similarity arise from a common\nnormati
 ve pressure to efficiently encode information under resource\nconstraints.
DTSTAMP:20260404T185818Z
END:VEVENT
END:VCALENDAR