Please note: This PhD seminar will be given online.
Peng Shi, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Jimmy Lin
In-context learning has attracted a lot of attention in the NLP community due to its practical and scientific value. However, most prior work has focused on English datasets, where both the prompt and the input sequence are in English. In this work, we focus on cross-lingual semantic parsing tasks, targeting non-English datasets. We explore the performance of large pre-trained models (Codex) in a setting where the utterance is in a non-English language while the exemplars are in English, which has more annotated resources.
We propose a framework that better leverages the capability of large pre-trained language models by retrieving better exemplars. Inspired by chain-of-thought prompting, we also explore a translation-based inference process that treats the translation as a chain-of-thought prompt. Experiments show that our framework makes better use of large language models, outperforming existing baselines.