Compressing deep learning models: distillation (Ep.104)
Episode 104 of the Data Science at Home podcast, hosted by Francesco Gadaleta and titled "Compressing deep learning models: distillation (Ep.104)", was published on May 20, 2020 and runs 22 minutes.
May 20, 2020 · 22m · Data Science at Home
Episode Description
Running large deep learning models on limited hardware or edge devices is often prohibitive. There are methods that compress large models by orders of magnitude while maintaining similar accuracy at inference time.
In this episode I explain one of the first such methods: knowledge distillation.
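As a rough illustration of the idea discussed in the episode, the sketch below shows a typical distillation loss in the spirit of Hinton et al. (the first reference): a small student network is trained to match the temperature-softened outputs of a large teacher in addition to the ground-truth labels. This is a minimal PyTorch sketch, not code from the episode; the temperature T and mixing weight alpha are illustrative hyperparameters.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend a soft-target (teacher) term with a hard-label (ground truth) term.

    T     -- temperature used to soften both output distributions (assumed value)
    alpha -- weight of the distillation term vs. standard cross-entropy (assumed value)
    """
    # Soften teacher and student outputs with the temperature T
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)

    # KL divergence between the softened distributions; the T**2 factor keeps
    # gradients on a scale comparable to the hard-label term
    soft_loss = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T ** 2)

    # Standard cross-entropy against the true labels
    hard_loss = F.cross_entropy(student_logits, labels)

    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```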
Come join us on Slack
References
- Distilling the Knowledge in a Neural Network: https://arxiv.org/abs/1503.02531
- Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks: https://arxiv.org/abs/2004.05937