EPISODE · Oct 6, 2023 · 50 MIN
All About Evaluating LLM Applications // Shahul Es // #179
from MLOps.community · host Demetrios
MLOps Coffee Sessions #179 with Shahul Es, All About Evaluating LLM Applications.

// Abstract
Shahul Es, renowned for his expertise in the evaluation space, is the creator of the Ragas project. Shahul dives deep into the world of evaluation for open-source models, sharing insights on debugging, troubleshooting, and the challenges posed by benchmarks. From the importance of custom data distributions to the role of fine-tuning in enhancing model performance, this episode is packed with valuable information for anyone interested in language models and AI.

// Bio
Shahul is a data science professional with 6+ years of expertise across data domains from structured data and NLP to audio processing. He is also a Kaggle Grandmaster and a code owner/ML engineer of the Open-Assistant initiative, which released some of the best open-source alternatives to ChatGPT.

// MLOps Jobs board
jobs.mlops.community

// MLOps Swag/Merch
https://mlops-community.myshopify.com

// Related Links
All About Evaluating Large Language Models blog: https://explodinggradients.com/all-about-evaluating-large-language-models
Ragas: https://github.com/explodinggradients/ragas

--------------- ✌️ Connect With Us ✌️ ---------------
Join our Slack community: https://go.mlops.community/slack
Follow us on Twitter: @mlopscommunity
Sign up for the next meetup: https://go.mlops.community/register
Catch all episodes, blogs, newsletters, and more: https://mlops.community/
Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/
Connect with Shahul on LinkedIn: https://www.linkedin.com/in/shahules/

Timestamps:
[00:00] Shahul's preferred coffee
[00:20] Takeaways
[01:46] Please like, share, and subscribe to our MLOps channels!
[02:07] Shahul's definition of evaluation
[03:27] Evaluation metrics and benchmarks
[05:46] Gamed leaderboards
[10:13] Best open-source models at summarizing long text
[11:12] Benchmarks
[14:20] Recommended evaluation process
[17:43] LLMs for evaluating other LLMs
[20:40] Debugging failed evaluation models
[24:25] Prompt injection
[27:32] Alignment
[32:45] Open Assistant
[35:51] Garbage in, garbage out
[37:00] Ragas
[42:52] Valuable use cases besides OpenAI
[45:11] Fine-tuning LLMs
[49:07] Connect with Shahul if you need help with Ragas: @Shahules786 on Twitter
[49:58] Wrap up