LoRA: Low-Rank Adaptation of Large Language Models
September 4, 2025 · 19m · Build Wiz AI Show
Summary
In this episode, we dive into LoRA, a groundbreaking technique that makes fine-tuning massive language models like GPT-3 more accessible and efficient. Discover how this method drastically reduces the number of trainable parameters and the GPU memory needed, without adding any extra latency at inference time. We'll explore how LoRA freezes the original model weights and injects small, trainable low-rank matrices, achieving results on par with, or even better than, full fine-tuning.
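To make the idea above concrete, here is a minimal numerical sketch of LoRA's core mechanism: the pretrained weight matrix stays frozen while a small low-rank update, factored into two trainable matrices, is added on top, and that update can later be merged back into the weight so inference costs nothing extra. All names and dimensions here are illustrative assumptions, not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 64, 64, 4          # layer dimensions and low rank r << min(d, k)
alpha = 8                    # LoRA scaling hyperparameter (assumed value)

W = rng.normal(size=(d, k))              # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01       # trainable, small random init
B = np.zeros((d, r))                     # trainable, zero init so the update starts at 0

x = rng.normal(size=(k,))                # an input activation

# During fine-tuning, only A and B receive gradients:
#   h = W x + (alpha / r) * B A x
h_adapted = W @ x + (alpha / r) * (B @ (A @ x))

# For deployment, the low-rank update merges into W,
# so the adapted model has no extra inference latency:
W_merged = W + (alpha / r) * (B @ A)
h_merged = W_merged @ x

assert np.allclose(h_adapted, h_merged)

# Trainable parameters: r*(k + d) for A and B, versus d*k for full fine-tuning.
print(A.size + B.size, "trainable vs", W.size, "full")
```

With these toy dimensions, the adapter trains 512 parameters instead of 4096, and the gap widens dramatically at GPT-3 scale, where r is tiny relative to the hidden dimension.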