Seminar • Artificial Intelligence • Learning Language Structures through Grounding

Thursday, March 2, 2023 10:30 am - 11:30 am EST (GMT -05:00)

Please note: This seminar will take place in DC 1304 and virtually over Zoom.

Freda Shi, PhD candidate
Toyota Technological Institute at Chicago

Syntactic and semantic structures model the language processing behavior of humans and can serve as the foundation for machine learning-based natural language processing (NLP) models. However, such structures can be expensive to annotate, rendering the conventional supervised training approach inefficient, or even infeasible, in many scenarios.
 
In this talk, I will present my work on learning both syntactic and semantic structures of language through various grounding signals that naturally exist in the world. I will start by introducing the task of visually grounded grammar induction and our proposed solution based on visual concreteness estimation. Next, I will describe our work on learning the semantic structures of sentences (i.e., their corresponding programs) by grounding program implementations to their execution outcomes. Toward language-universal NLP, I will then describe our work that efficiently transfers models from one language to another without any human annotations in the target language, using cross-lingual word correspondences as grounding signals. I will conclude with a discussion of future directions for this line of work.
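
To give a flavor of what grounding a program in its execution outcome means, here is a minimal, hypothetical sketch: given candidate programs for a natural-language query, keep only those whose execution on an example matches the observed result. The query, candidate programs, and data below are illustrative assumptions, not taken from the speaker's work.

```python
from typing import Callable, List, Tuple

# (program text, executable form) -- a toy representation of a candidate "semantic structure"
Candidate = Tuple[str, Callable[[List[int]], int]]

def filter_by_execution(candidates: List[Candidate],
                        example_input: List[int],
                        observed_outcome: int) -> List[Candidate]:
    """Keep candidates whose execution on the example matches the observed outcome."""
    consistent = []
    for text, program in candidates:
        try:
            if program(example_input) == observed_outcome:
                consistent.append((text, program))
        except Exception:
            # Programs that fail to execute provide no grounding signal.
            continue
    return consistent

# Query: "the largest number in the list"
candidates: List[Candidate] = [
    ("max(xs)", lambda xs: max(xs)),
    ("min(xs)", lambda xs: min(xs)),
    ("sum(xs)", lambda xs: sum(xs)),
]
print(filter_by_execution(candidates, [3, 7, 2], observed_outcome=7))
# Only the execution-consistent program, "max(xs)", survives.
```

In this toy setting, the execution outcome plays the role of the grounding signal: no program-level annotation is needed, only the observed result.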


Bio: Freda Shi (a.k.a. Haoyue Shi) is a Ph.D. candidate at the Toyota Technological Institute at Chicago (TTIC), advised by Professors Kevin Gimpel and Karen Livescu. Freda's research interests are in computational linguistics, natural language processing, and related aspects of machine learning. Her work has been recognized with best paper nominations at ACL 2019 and 2021, as well as a Google PhD Fellowship. Freda received a B.S. in Intelligence Science and Technology from the School of EECS at Peking University, with a minor in Sociology.