Listed below are the graduate-level AI courses. The tentative CS graduate course schedule for the next term is posted on the CS current course offerings page; 600-level courses are open to undergraduate students as 400-level courses. For a complete list of graduate-level CS course offerings, please visit the CS course list page.
Introduction to modeling and algorithmic techniques for machines to learn concepts from data. Generalization: underfitting, overfitting, cross-validation. Tasks: classification, regression, clustering. Optimization-based learning: loss minimization, regularization. Statistical learning: maximum likelihood, Bayesian learning. Algorithms: nearest neighbor, (generalized) linear regression, mixtures of Gaussians, Gaussian processes, kernel methods, support vector machines, deep learning, sequence learning, ensemble techniques. Large-scale learning: distributed learning and stream learning. Applications: natural language processing, computer vision, data mining, human-computer interaction, information retrieval.
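To give a flavour of the algorithms listed above, here is a minimal sketch of nearest-neighbor classification in plain Python. The dataset and function names are illustrative, not course material:

```python
# Minimal 1-nearest-neighbor classifier (illustrative sketch only).
from math import dist

def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query`.
    `train` is a list of (point, label) pairs."""
    point, label = min(train, key=lambda pl: dist(pl[0], query))
    return label

# Toy 2-D dataset: two clusters labelled "a" and "b".
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b")]

print(nearest_neighbor(train, (0.05, 0.1)))  # query near the "a" cluster
```

The course develops this idea much further, covering how choices such as the distance metric and the number of neighbors interact with underfitting and overfitting.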
Introduction to image and vision understanding by computer. Camera-system geometry, image formation and lighting, and image acquisition. Basic visual processes for recognition of edges, regions, lines, and surfaces. Processing of stereo images, and motion in image sequences. Object recognition. Applications of computer vision systems.
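Edge detection, one of the basic visual processes mentioned above, can be sketched as a finite-difference gradient over pixel intensities. This toy example (pure Python; real systems use convolution kernels such as Sobel) is illustrative only:

```python
# Illustrative edge detection: horizontal intensity gradient on a tiny
# grayscale image. Large gradient values mark vertical edges.
image = [
    [0, 0, 10, 10],
    [0, 0, 10, 10],
    [0, 0, 10, 10],
]

def horizontal_gradient(img):
    """Finite difference along each row: g[y][x] = img[y][x+1] - img[y][x]."""
    return [[row[x + 1] - row[x] for x in range(len(row) - 1)] for row in img]

grad = horizontal_gradient(image)
# The intensity jump between columns 1 and 2 appears as a large gradient value.
```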
Extracting meaningful patterns from random samples of large data sets. Statistical analysis of the resulting problems. Common algorithm paradigms for such tasks. Central concepts: VC-dimension, margins of classifiers, sparsity, and description length. Performance guarantees: generalization bounds, data-dependent error bounds, and computational complexity of learning algorithms. Common paradigms: neural networks, kernel methods, and support vector machines. Applications to data mining.
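As a small taste of the generalization bounds mentioned above, the Hoeffding inequality bounds the gap between empirical and true error for a single fixed classifier; the course extends this to whole hypothesis classes via VC-dimension. A numerical sketch (illustrative, not course code):

```python
# Hoeffding-style bound: with probability >= 1 - delta,
# |true_error - empirical_error| <= sqrt(ln(2/delta) / (2m))
# for one fixed hypothesis evaluated on m i.i.d. samples.
from math import log, sqrt

def hoeffding_gap(m, delta):
    """Width of the high-probability gap for sample size m and confidence delta."""
    return sqrt(log(2 / delta) / (2 * m))

# More data tightens the bound (the gap shrinks like 1/sqrt(m)):
print(hoeffding_gap(100, 0.05))    # roughly 0.136
print(hoeffding_gap(10000, 0.05))  # roughly 0.014
```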
Goals and methods of artificial intelligence. Methods of general problem solving. Introduction to mathematical logic. Mechanical theorem proving. Game playing. Natural language processing. Preference will be given to CS graduate students; all others require approval from the department, granted by the Undergraduate Advisor.
This number is used for courses being offered on a temporary basis. Such a course may be available only once, for example to take advantage of a visiting professor's expertise, or may be offered experimentally until it is determined whether or not the course should become part of the regular course offerings. It may also be used for an individual study course carried out under the supervision of a Computer Science faculty member with the approval of the Associate Chair, Graduate Studies. This is a graded course. Preference will be given to CS graduate students; all others require approval of the department.
- Computational Audio (R. Mann, Winter 2019)
- Neural Networks (J. Orchard, Winter 2019)
- Computational Audio (R. Mann, Winter 2018)
- Machine Learning (P. Poupart, Winter 2018)
- Neural Networks (J. Orchard, Winter 2018)
- Machine Learning (Y. Yu, Fall 2017)
- Computational Audio (R. Mann, Winter 2017)
- Machine Learning (P. Poupart, Winter 2017)
- Rhetoric, Argument and Machines (C. Di Marco, Winter 2017)
- Computational Audio (R. Mann, Winter 2016)
Intelligence in interfaces: natural language processing, plan recognition, dialogue, generation, user modeling. Interfaces to intelligent systems: intelligent agents and multi-agent systems, information processing and data mining, knowledge-based systems.
- AI: Law, Ethics & Policy (M. Grossman, Fall 2019)
- Fairness and Interpretability (S. Ben-David, Spring 2019)
- AI: Law, Ethics & Policy (M. Grossman, Fall 2018)
- AI: Law, Ethics & Policy (M. Grossman, Fall 2017)
- Machine Learning & Societal Impact (S. Ben-David, Fall 2017)
- Games for Health (C. Di Marco, Spring 2017)
- Optimization for Machine Learning (Y. Yu, Fall 2016)
- Games for Health (C. Di Marco, Fall 2015)
The course introduces students to the design of algorithms that enable machines to learn based on reinforcements. In contrast to supervised learning where machines learn from examples that include the correct decision and unsupervised learning where machines discover patterns in the data, reinforcement learning allows machines to learn from partial, implicit and delayed feedback. This is particularly useful in sequential decision making tasks where a machine repeatedly interacts with the environment or users. Applications of reinforcement learning include robotic control, autonomous vehicles, game playing, conversational agents, assistive technologies, computational finance, operations research, etc.
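The learning-from-delayed-feedback idea described above can be sketched with tabular Q-learning on a toy chain environment. All names and parameters here are illustrative assumptions, not material from the course:

```python
# Illustrative tabular Q-learning on a tiny 2-state chain with one terminal
# state. Action 1 moves right (reward 1 on reaching the terminal state);
# action 0 stays put with no reward.
import random

random.seed(0)
n_states, n_actions = 3, 2                    # states 0, 1, and terminal 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma = 0.5, 0.9                       # learning rate, discount factor

def step(s, a):
    if a == 1:
        return s + 1, (1.0 if s + 1 == 2 else 0.0)
    return s, 0.0

for _ in range(200):                          # episodes
    s = 0
    while s != 2:
        a = random.randrange(n_actions)       # explore uniformly
        s2, r = step(s, a)
        target = r + gamma * (max(Q[s2]) if s2 != 2 else 0.0)
        Q[s][a] += alpha * (target - Q[s][a]) # temporal-difference update
        s = s2

# Despite never being told the correct action, the agent learns from delayed
# reward that moving right is better in every state.
```

The update rule propagates the delayed terminal reward backwards through the chain, which is exactly the kind of partial, implicit feedback the course description refers to.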
- Theory of Deep Learning (Y. Yu, Winter 2019)
- Clustering Theory (S. Ben-David, Winter 2019)
- Trust and Online Social Networks (R. Cohen, Fall 2018)
- Clustering Theory (S. Ben-David, Winter 2018)
- Artificial Intelligence and Philosophy (R. Cohen, Fall 2017)
- Affective Computing (J. Hoey, Winter 2017)
- Multiagent Systems (K. Larson, Fall 2016)
- Theoretical Foundations of Clustering (S. Ben-David, Spring 2016)
- Trust and Online Social Networks (R. Cohen, Winter 2016)
- Affective Computing (J. Hoey, Winter 2016)
- Topics in Natural Language Processing (M. Li, Spring 2015)
- Theoretical Foundations of Clustering (S. Ben-David, Winter 2015)
- Multi-agent Systems (K. Larson, Winter 2015)
- Topics in Computer Vision (Y. Boykov, Spring 2019)
- Deep Learning and its applications (M. Li, Spring 2017)
- On the Synergy Between CS and Biology (G. Baranoski, Winter 2017)
- On the Synergy Between CS and Biology (G. Baranoski, Winter 2016)
For a complete list of approved courses, please visit the CS approved courses page.
- PSYCH 784 Human Neuroanatomy and Neuropathology
- STAT 831 Generalized Linear Models and Applications
- CO 759 Algorithmic Game Theory
- CO 769 Optimization for Big Data
- ECE 750 Tools of Intelligent Systems
- ECE 780 Motion Coordination & Planning
- COGSCI 600 Seminar in Cognitive Science
- MSCI 724 Design and Analysis of Information Procurement Mechanisms
- STAT 946 Kernels & Ensembles