674: Parameter-Efficient Fine-Tuning of LLMs using LoRA (Low-Rank Adaptation)
An episode of the Super Data Science: ML & AI Podcast with Jon Krohn podcast, hosted by Jon Krohn, titled "674: Parameter-Efficient Fine-Tuning of LLMs using LoRA (Low-Rank Adaptation)" was published on April 28, 2023 and runs 5 minutes.
Summary
Models like Alpaca, Vicuña, GPT4All-J and Dolly 2.0 have relatively small model architectures, but they're prohibitively expensive to train even on a small amount of your own data. The standard model-training protocol can also lead to catastrophic forgetting. In this week's episode, Jon explores a solution to these problems, introducing listeners to Parameter-Efficient Fine-Tuning (PEFT) and the leading approach: Low-Rank Adaptation (LoRA).

Additional materials: www.superdatascience.com/674

Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.
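The core idea behind LoRA is that instead of updating a full pretrained weight matrix W, you freeze it and train only a low-rank update, W_eff = W + (alpha / r) · B·A, which slashes the number of trainable parameters. A minimal numpy sketch of that idea (the dimensions, rank, and scaling below are illustrative assumptions, not values from the episode):

```python
import numpy as np

# Minimal LoRA sketch: W is the frozen pretrained weight; only the
# low-rank factors A (r x d_in) and B (d_out x r) would be trained.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 1024, 1024, 8, 16  # assumed example sizes

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))                   # trainable, zero init

def lora_forward(x):
    # Base path plus scaled low-rank update; since B starts at zero,
    # the adapted layer initially behaves exactly like the original.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)  # identical at initialization

full_params = W.size            # 1024 * 1024 = 1,048,576
lora_params = A.size + B.size   # 2 * 8 * 1024 = 16,384
print(f"trainable: {lora_params} vs full fine-tune: {full_params}")
```

In this toy example the low-rank factors hold about 1.6% of the full matrix's parameters, which is why LoRA fine-tuning fits in far less memory than standard fine-tuning; freezing W also means the pretrained weights cannot drift, mitigating catastrophic forgetting.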