PodParley
Long-Context LLMs Meet RAG

EPISODE · Oct 18, 2024 · 15 MIN

from LlamaCast · host Shahriar Shariati

📈 Long-Context LLMs Meet RAG: Overcoming Challenges for Long Inputs in RAG

This paper explores the challenges and opportunities of using long-context large language models (LLMs) in retrieval-augmented generation (RAG) systems. While increasing the number of retrieved passages initially improves performance, the authors find that it eventually degrades it, because the extra passages introduce irrelevant information, or "hard negatives." To address this, the paper proposes three methods for making RAG with long-context LLMs more robust: retrieval reordering, RAG-specific implicit LLM fine-tuning, and RAG-oriented LLM fine-tuning with intermediate reasoning. The paper also investigates how data distribution, retriever selection, and training context length affect the effectiveness of RAG-specific tuning.

📎 Link to paper
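Of the three methods, retrieval reordering is the simplest: it exploits the well-known "lost in the middle" effect by placing the highest-scoring passages at the beginning and end of the prompt, pushing likely hard negatives toward the middle. A minimal sketch of that idea (the function name and best-first ranking assumption are ours, not the paper's exact implementation):

```python
def reorder_passages(passages):
    """Given passages ranked best-first by retrieval score, interleave
    them so the top-ranked ones land at the two ends of the context
    and lower-ranked ones (likely hard negatives) end up in the middle."""
    front, back = [], []
    for i, passage in enumerate(passages):
        if i % 2 == 0:
            front.append(passage)   # ranks 1, 3, 5, ... fill the front
        else:
            back.append(passage)    # ranks 2, 4, 6, ... fill the back
    # Reverse the back half so rank 2 sits at the very end of the prompt
    return front + back[::-1]

# Example: five passages ranked p1 (best) to p5 (worst)
print(reorder_passages(["p1", "p2", "p3", "p4", "p5"]))
# → ['p1', 'p3', 'p5', 'p4', 'p2']
```

The best passage stays first and the second-best moves last, so the positions the model attends to most strongly hold the most relevant evidence.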
