670: LLaMA: GPT-3 performance, 10x smaller
An episode of the Super Data Science: ML & AI Podcast with Jon Krohn podcast, hosted by Jon Krohn, titled "670: LLaMA: GPT-3 performance, 10x smaller" was published on April 14, 2023 and runs 13 minutes.
Summary
How does Meta AI's natural language model, LLaMA, compare to the rest? Guided by the Chinchilla scaling laws, LLaMA is designed to be smaller yet more performant. How exactly does it achieve this feat? By training a smaller model for a longer period of time, on far more data. Discover how LLaMA compares to its competition, including GPT-3, in this week's episode. Additional materials: www.superdatascience.com/670. Interested in sponsoring a SuperDataScience Podcast episode? Visit JonKrohn.com/podcast for sponsorship information.
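The "smaller model, trained longer" idea can be made concrete with a back-of-envelope calculation. The sketch below compares training-tokens-per-parameter ratios using approximate public figures (GPT-3 at roughly 175B parameters and ~300B training tokens; LLaMA-13B at ~1T tokens) against the roughly 20-tokens-per-parameter heuristic from the Chinchilla paper; the exact numbers are assumptions for illustration, not from this episode.

```python
# Back-of-envelope comparison of tokens-per-parameter ratios.
# Figures are approximate public numbers; the ~20x heuristic is the
# compute-optimal ratio suggested by the Chinchilla scaling analysis.

CHINCHILLA_RATIO = 20  # approx. compute-optimal tokens per parameter

models = {
    "GPT-3":     {"params_b": 175, "tokens_b": 300},
    "LLaMA-13B": {"params_b": 13,  "tokens_b": 1000},
}

for name, m in models.items():
    ratio = m["tokens_b"] / m["params_b"]
    print(f"{name}: ~{ratio:.0f} tokens/parameter "
          f"({ratio / CHINCHILLA_RATIO:.1f}x Chinchilla-optimal)")
```

Under these assumed figures, GPT-3 saw under 2 tokens per parameter while LLaMA-13B saw roughly 77, which is the sense in which a much smaller model "trained longer" can match a far larger one.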