Please note: This master’s thesis presentation will take place online.
Runcheng (Irene) Liu, Master’s candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Pascal Poupart
Recent years have witnessed rapid progress in dialogue response generation since the advent of the Transformer. Fine-tuning pretrained language models for different downstream tasks has become the dominant paradigm in Natural Language Processing (NLP). However, fine-tuning requires storing a full copy of the parameter states for every task, which is memory-intensive and expensive to serve when working with large-scale models with billions of parameters, such as GPT-3.
Meanwhile, prompt-tuning has become an increasingly popular parameter-efficient method for steering large pretrained language models toward various tasks. Most prompting techniques, however, are applied to language understanding and assume a fixed prompt for all data samples within a task. There is therefore a pressing need to exploit the ability of prompt-tuning in open-domain dialogue generation, where data samples may vary greatly within a single task.
In this thesis, we present a novel, instance-specific prompt-tuning algorithm for dialogue generation. Specifically, we generate prompts based on instance-level control codes, rather than the conversation context, to explore their impact on controlled dialogue generation. Experiments on popular open-domain dialogue datasets, evaluated with both automated metrics and human evaluation, demonstrate that our method is superior to prompting baselines as well as other lightweight controlled-generation methods, and comparable to fine-tuning while using less than 10% of the total parameters.
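To give a flavour of the idea, the following is a minimal sketch of instance-specific prompt-tuning: a small trainable table maps each instance-level control code to a sequence of soft prompt vectors, which are prepended to the (frozen) token embeddings before they enter the language model. All names, dimensions, and the NumPy setting here are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

# Hypothetical sizes for illustration only; the thesis's real dimensions are not given here.
VOCAB, D, PROMPT_LEN, NUM_CODES = 100, 16, 5, 3

rng = np.random.default_rng(0)
# Frozen pretrained token embeddings (stand-in for the LM's embedding matrix).
token_emb = rng.normal(size=(VOCAB, D))
# Trainable soft prompts: one PROMPT_LEN x D block per control code.
# Only this table would be updated during training, keeping the LM frozen.
prompt_emb = rng.normal(size=(NUM_CODES, PROMPT_LEN, D))

def build_input(token_ids, control_code):
    """Prepend the soft prompt selected by this instance's control code
    to the context token embeddings."""
    prompt = prompt_emb[control_code]      # (PROMPT_LEN, D), instance-specific
    tokens = token_emb[token_ids]          # (len(token_ids), D), frozen
    return np.concatenate([prompt, tokens], axis=0)

# Two instances of the same task get different prompts via their control codes.
x = build_input([4, 7, 9], control_code=1)
print(x.shape)  # (8, 16): 5 prompt vectors + 3 token embeddings
```

Because the trainable state is just the prompt table (here NUM_CODES x PROMPT_LEN x D values), the per-task storage cost is a small fraction of the full model's parameters, which is the parameter-efficiency argument the abstract makes.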
To join this master’s thesis presentation on Zoom, please go to https://vectorinstitute.zoom.us/j/82076660042?pwd=U3V3R3pFN1RFNXJmREcvNVJBUmdDQT09.
200 University Avenue West
Waterloo, ON N2L 3G1