EPISODE · Mar 21, 2024 · 9 MIN
#78 - RAFT: Why just use RAG if you can also fine-tune?
from Life with AI · host Filipe Lauar
Hello! In this episode I talk about Retrieval Aware Fine Tuning (RAFT), a paper that proposes a new technique combining domain-specific fine-tuning with RAG to improve the retrieval capabilities of LLMs. In the episode I also talk about another paper called RAFT, this time Reward rAnked Fine Tuning, which proposes a technique to perform RLHF without the convergence problems of Reinforcement Learning.

Retrieval Aware Fine Tuning: https://arxiv.org/abs/2403.10131v1
Reward rAnked Fine Tuning: https://arxiv.org/pdf/2304.06767.pdf
Instagram of the podcast: https://www.instagram.com/podcast.lifewithai
Linkedin of the podcast: https://www.linkedin.com/company/life-with-ai