Artificial Intelligence Group

Welcome to the Artificial Intelligence Group

The Artificial Intelligence (AI) Group at the David R. Cheriton School of Computer Science conducts research in many areas of artificial intelligence. Faculty members of the group have active interests in: models of intelligent interaction, multi-agent systems, natural language understanding, constraint programming, computational vision, robotics, machine learning, and reasoning under uncertainty.

The AI Group also has a particular investment in Societal AI.

  1. Dec. 3, 2018: Professor Shai Ben-David and colleagues win best paper award at NeurIPS 2018

    Cheriton School of Computer Science Professor Shai Ben-David, his former PhD student Hassan Ashtiani, now an Assistant Professor at McMaster University, along with colleagues Christopher Liaw, Abbas Mehrabian and Yaniv Plan, have received a best paper award at NeurIPS 2018, the 32nd Annual Conference on Neural Information Processing Systems.

  2. Oct. 29, 2018: Meet Professors Olga Veksler and Yuri Boykov, two computer vision researchers who recently joined the Cheriton School of Computer Science

    Professors Olga Veksler and Yuri Boykov joined the David R. Cheriton School of Computer Science earlier this year. Previously, both were full professors in the Department of Computer Science at Western University, where they were faculty members for 14 years.

    Their research interests are in the area of computer vision. In particular, Olga’s interests are in visual correspondence and image segmentation, and Yuri’s also include 3D reconstruction and biomedical image analysis.

  3. Oct. 26, 2018: Mike Schaekermann, Joslin Goh, Kate Larson and Edith Law win best paper award at CSCW 2018

    Computer science doctoral student Mike Schaekermann, Dr. Joslin Goh and Schaekermann’s co-supervisors, Professors Kate Larson and Edith Law, have received a best paper award at CSCW 2018, the 21st ACM Conference on Computer Supported Cooperative Work and Social Computing.

Read all news
  1. Dec. 13, 2018: PhD Seminar — Beyond LIF: The Computational Power of Passive Dendritic Trees

    Andreas Stöckel, PhD candidate
    David R. Cheriton School of Computer Science

    The artificial neurons typically employed in machine learning and computational neuroscience bear little resemblance to biological neurons. They are often derived from the “leaky integrate and fire” (LIF) model, neglect spatial extent, and assume a linear combination of input variables. It is well known that these simplifications have a profound impact on the family of functions that can be computed in a single-layer neural network. 
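    For readers unfamiliar with the LIF model mentioned above, the following is a minimal Python sketch of a leaky integrate-and-fire neuron under standard textbook assumptions (Euler integration, instantaneous reset); the parameter names and values are illustrative and not taken from the seminar itself.

    ```python
    import numpy as np

    def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=0.0,
                     v_thresh=1.0, v_reset=0.0, resistance=1.0):
        """Euler integration of a leaky integrate-and-fire (LIF) neuron.

        Returns the membrane potential trace and a list of spike times (s).
        """
        v = v_rest
        trace, spikes = [], []
        for step, current in enumerate(input_current):
            # Membrane potential leaks toward v_rest and is driven by input.
            v += dt / tau * (-(v - v_rest) + resistance * current)
            if v >= v_thresh:          # threshold crossing: emit a spike
                spikes.append(step * dt)
                v = v_reset            # instantaneous reset
            trace.append(v)
        return np.array(trace), spikes

    # A constant suprathreshold current makes the neuron fire periodically;
    # a subthreshold current never reaches v_thresh.
    trace, spikes = simulate_lif(np.full(10_000, 1.5))  # 1 s of input
    ```

    Note that the input enters only as a linear drive on the membrane potential, which is exactly the simplification (no spatial extent, linear input combination) the seminar abstract contrasts with biological dendritic trees.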

  2. Dec. 14, 2018: PhD Seminar — Progressive Memory Banks for Incremental Domain Adaptation

    Nabiha Asghar, PhD candidate
    David R. Cheriton School of Computer Science

    We address the problem of incremental domain adaptation (IDA). We assume domains arrive one after another and that we can only access data in the current domain. The goal of IDA is to build a unified model that performs well on all the domains encountered so far. We propose to augment a recurrent neural network (RNN) with a directly parameterized memory bank, which is retrieved by an attention mechanism at each step of the RNN transition. The memory bank provides a natural way of performing IDA: when adapting the model to a new domain, we progressively add new slots to the memory bank, which increases the number of parameters and thus the model capacity.
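    The memory-read and slot-expansion mechanism described in the abstract can be sketched in a few lines of NumPy; this is an illustrative reconstruction of the general idea, not the authors' implementation, and all names and dimensions are assumptions.

    ```python
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    class MemoryBank:
        """Directly parameterized memory, read by attention (illustrative)."""

        def __init__(self, n_slots, dim, rng):
            self.memory = rng.standard_normal((n_slots, dim)) * 0.1

        def read(self, hidden):
            # Attention weights over slots, computed from the RNN hidden state;
            # the read vector would be fed into the next RNN transition.
            weights = softmax(self.memory @ hidden)
            return weights @ self.memory

        def add_slots(self, n_new, rng):
            # Progressive expansion for a new domain: existing slots keep
            # their learned values, only the new rows are fresh parameters.
            new = rng.standard_normal((n_new, self.memory.shape[1])) * 0.1
            self.memory = np.vstack([self.memory, new])

    rng = np.random.default_rng(0)
    bank = MemoryBank(n_slots=4, dim=8, rng=rng)
    h = rng.standard_normal(8)
    read_vec = bank.read(h)   # attention-weighted read for one RNN step
    bank.add_slots(2, rng)    # adapt to a newly arrived domain
    ```

    Growing the bank rather than fine-tuning all weights is what lets capacity increase with each new domain while leaving previously learned slots in place.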

All upcoming events