PhD Seminar • Information Retrieval | Natural Language Processing • Nearest Neighbor Speculative Decoding for LLM Generation and Attribution

Thursday, June 20, 2024 2:00 pm - 3:00 pm EDT (GMT -04:00)

Please note: This PhD seminar will take place online.

Minghan Li, PhD candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Jimmy Lin

Large language models (LLMs) often hallucinate and lack the ability to provide attribution for their generations. Semi-parametric LMs, such as kNN-LM, address these limitations by refining the output of an LM for a given prompt using its nearest neighbor matches in a non-parametric data store. However, these models often exhibit slow inference speeds and produce non-fluent text. In this paper, we introduce Nearest Neighbor Speculative Decoding (Nest), a novel semi-parametric language modeling approach that can incorporate real-world text spans of arbitrary length into LM generations and provide attribution to their sources.
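For orientation, the kNN-LM refinement mentioned above interpolates the base LM's next-token distribution with a distribution built from retrieved neighbors, p(y | x) = λ p_kNN(y | x) + (1 − λ) p_LM(y | x). The sketch below is a minimal illustration of that interpolation, not the authors' code; the function name, the fixed λ, and the softmax-over-negative-distances weighting are all assumptions for the example.

```python
import torch

def mixture_distribution(lm_logits, knn_distances, knn_token_ids,
                         vocab_size, lam=0.3, temperature=1.0):
    """Illustrative kNN-LM-style interpolation (a sketch, not Nest itself):
        p(y | x) = lam * p_kNN(y | x) + (1 - lam) * p_LM(y | x)
    """
    # Base LM next-token probabilities over the vocabulary.
    p_lm = torch.softmax(lm_logits, dim=-1)

    # Turn neighbor distances into weights (closer neighbors weigh more),
    # then accumulate those weights onto the tokens the neighbors continue with.
    neighbor_weights = torch.softmax(-knn_distances / temperature, dim=-1)
    p_knn = torch.zeros(vocab_size)
    p_knn.scatter_add_(0, knn_token_ids, neighbor_weights)

    # Fixed-lambda interpolation for simplicity; Nest sets this mixing
    # coefficient adaptively from retrieval confidence.
    return lam * p_knn + (1.0 - lam) * p_lm
```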

Nest performs token-level retrieval at each inference step to compute a semi-parametric mixture distribution and identify promising span continuations in a corpus. It then uses an approximate speculative decoding procedure that accepts a prefix of the retrieved span or generates a new token. Nest significantly enhances the generation quality and attribution rate of the base LM across a variety of knowledge-intensive tasks, surpassing the conventional kNN-LM method and performing competitively with in-context retrieval augmentation. In addition, Nest substantially improves the generation speed, achieving a 1.8× speedup in inference time when applied to Llama-2-Chat 70B.