PhD Seminar • Artificial Intelligence • A Critical Look At Tokenwise Reward-Guided Text Generation

Monday, July 8, 2024 10:00 am - 11:00 am EDT (GMT -04:00)

Please note: This PhD seminar will take place online.

Ahmad Rashid, PhD candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Pascal Poupart

Large language models (LLMs) can be significantly improved by aligning them to human preferences, a process known as reinforcement learning from human feedback (RLHF). However, the cost of fine-tuning an LLM is prohibitive for many users. Tokenwise reward-guided text generation (RGTG) methods have recently been proposed as a way to bypass LLM fine-tuning: they use a reward model trained on full sequences to score partial sequences during tokenwise decoding, in a bid to steer the generation towards sequences with high reward. However, these methods have so far been only heuristically motivated and poorly analyzed.
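To make the scheme under discussion concrete, here is a minimal sketch of the generic RGTG decoding loop in Python. The lm_next_logprobs and reward callables are hypothetical stand-ins for a base LM and a sequence-level reward model; this is an illustration of the general recipe the talk critiques, not the speaker's implementation.

import math
import random

def rgtg_decode(lm_next_logprobs, reward, prompt, beta=1.0, top_k=5,
                max_new_tokens=8, eos="<eos>"):
    """Generic tokenwise reward-guided decoding (RGTG) sketch.

    lm_next_logprobs(seq) -> {token: logprob} for the next token.
    reward(seq) -> scalar score for a (possibly partial) sequence.
    Both callables are placeholders, not a real LM or reward model.
    """
    seq = list(prompt)
    for _ in range(max_new_tokens):
        logps = lm_next_logprobs(seq)
        # Restrict to the top-k tokens under the base LM for tractability:
        # the reward model must be queried once per candidate token.
        cands = sorted(logps, key=logps.get, reverse=True)[:top_k]
        # Adjusted score: base LM logprob plus the (scaled) reward of the
        # partial sequence extended by the candidate token. Note the reward
        # model is asked to score a *partial* sequence here, which is exactly
        # the mismatch this work analyzes when it was trained on full ones.
        scores = [logps[t] + reward(seq + [t]) / beta for t in cands]
        # Sample the next token from the softmax of the adjusted scores.
        m = max(scores)
        weights = [math.exp(s - m) for s in scores]
        nxt = random.choices(cands, weights=weights, k=1)[0]
        seq.append(nxt)
        if nxt == eos:
            break
    return seq

if __name__ == "__main__":
    vocab = ["good", "bad", "ok", "<eos>"]
    def toy_lm(seq):
        # Uniform next-token distribution over a toy vocabulary.
        return {t: math.log(1.0 / len(vocab)) for t in vocab}
    def toy_reward(seq):
        # Toy reward favouring "good" tokens over "bad" ones.
        return float(seq.count("good")) - float(seq.count("bad"))
    print(rgtg_decode(toy_lm, toy_reward, ["<bos>"]))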

In this work, we show that reward models trained on full sequences are not compatible with scoring partial sequences. To alleviate this issue, we propose to explicitly train a Bradley-Terry reward model on partial sequences and to sample autoregressively from the implied tokenwise policy at decoding time. We study the properties of this reward model and the implied policy. In particular, we show that this policy is proportional to the ratio of two distinct RLHF policies. We show that our simple approach outperforms previous RGTG methods and matches strong offline baselines without large-scale LLM fine-tuning.
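In standard RLHF notation, the ingredients can be sketched as follows (the symbols r_\phi, \pi_{\mathrm{ref}}, and \beta are the usual reward model, reference policy, and temperature; the exact formulation is in the paper, so treat this as an assumed, illustrative rendering). A Bradley-Terry loss applied to length-t prefixes of a preferred completion y_w and a dispreferred completion y_l would read

\mathcal{L}(\phi) = -\,\mathbb{E}_{(x,\, y_w,\, y_l,\, t)}\Big[\log \sigma\big(r_\phi(x, y_w^{\le t}) - r_\phi(x, y_l^{\le t})\big)\Big],

and the tokenwise policy such a partial-sequence reward model induces is

\pi(y_t \mid x, y_{<t}) \;\propto\; \pi_{\mathrm{ref}}(y_t \mid x, y_{<t})\, \exp\!\big(r_\phi(x, y_{\le t}) / \beta\big).

The result highlighted in the abstract is that this policy can be written as a ratio of two distinct RLHF policies, one over length-t prefixes and one over length-(t-1) prefixes.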


You can attend this seminar on Zoom.