#78- RAFT: Why just to use RAG if you can also fine tune?
An episode of the Life with AI podcast, hosted by Filipe Lauar, titled "#78- RAFT: Why just to use RAG if you can also fine tune?" was published on March 21, 2024 and runs 9 minutes.
Episode Description
Hello, in this episode I talk about Retrieval Augmented Fine Tuning (RAFT), a paper that proposes a new technique combining domain-specific fine-tuning with RAG to improve the retrieval capabilities of LLMs.
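The core idea described above is to fine-tune the model on prompts that look like RAG prompts: the relevant ("oracle") document is mixed with irrelevant distractor documents, so the model learns to answer from the right context and ignore noise. Below is a minimal, hedged sketch of how such a training example could be constructed; the function name, prompt template, and parameters (`k`, `p_oracle`) are illustrative assumptions, not the paper's exact recipe.

```python
import random

def make_raft_example(question, oracle_doc, distractor_docs, answer,
                      k=3, p_oracle=0.8):
    """Build one RAFT-style training example (illustrative sketch).

    With probability p_oracle the oracle document is placed among the k
    retrieved documents; otherwise only distractors are shown, pushing
    the model to rely on knowledge learned during fine-tuning.
    """
    docs = random.sample(distractor_docs, k)
    if random.random() < p_oracle:
        docs[random.randrange(k)] = oracle_doc  # hide oracle among distractors
    context = "\n\n".join(f"[doc {i}] {d}" for i, d in enumerate(docs))
    prompt = f"{context}\n\nQuestion: {question}\nAnswer:"
    return {"prompt": prompt, "completion": " " + answer}

# Toy usage with made-up documents.
example = make_raft_example(
    "What does RAFT stand for?",
    "RAFT stands for Retrieval Augmented Fine Tuning.",
    ["Cats are mammals.", "Paris is in France.",
     "Water boils at 100 C.", "The sky is blue."],
    "Retrieval Augmented Fine Tuning",
)
```

Fine-tuning on a dataset of such examples is what ties the two ingredients together: the model is trained (as in domain-specific fine-tuning) on the exact prompt shape it will see at inference time (as in RAG).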
In the episode I also talk about another paper called RAFT, this time Reward rAnked FineTuning, which proposes a new technique to perform RLHF without the convergence problems of Reinforcement Learning.
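The alternative to RL sketched in this second paper can be summarized as: sample several candidate responses per prompt, score them with a reward model, keep the best-scoring ones, and fine-tune on that filtered set with plain supervised learning. A minimal sketch of that filtering loop, with illustrative stand-in `generate` and `reward` functions (not the paper's actual implementation):

```python
def reward_ranked_filter(prompts, generate, reward, n_samples=4):
    """For each prompt, generate n_samples candidates and keep the one
    with the highest reward; the result is a supervised fine-tuning set,
    so no RL optimization (e.g. PPO) is needed."""
    dataset = []
    for p in prompts:
        candidates = [generate(p) for _ in range(n_samples)]
        best = max(candidates, key=lambda c: reward(p, c))
        dataset.append({"prompt": p, "completion": best})
    return dataset

# Toy usage: a random generator and a reward that prefers longer outputs.
import random
demo = reward_ranked_filter(
    ["hello"],
    generate=lambda p: p + "!" * random.randint(1, 5),
    reward=lambda p, c: len(c),
)
```

Because the output is just a ranked-and-filtered supervised dataset, each fine-tuning round is an ordinary cross-entropy training run, which is what avoids the instability of RL-based RLHF.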
Retrieval Augmented Fine Tuning: https://arxiv.org/abs/2403.10131v1
Reward rAnked FineTuning: https://arxiv.org/pdf/2304.06767.pdf
Instagram of the podcast: https://www.instagram.com/podcast.lifewithai
LinkedIn of the podcast: https://www.linkedin.com/company/life-with-ai