Compressing deep learning models: rewinding (Ep.105)
Episode 105 of the Data Science at Home podcast, hosted by Francesco Gadaleta, titled "Compressing deep learning models: rewinding (Ep.105)", was published on June 1, 2020 and runs 15 minutes.
Episode Description
Continuing from the previous episode, in this one I cover compressing deep learning models and explain another simple yet effective approach that can lead to much smaller models that still perform as well as the original.
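The approach discussed here, following the referenced paper, combines magnitude pruning with weight rewinding: after training, the smallest-magnitude weights are removed, and the surviving weights are reset ("rewound") to their values from an earlier training checkpoint before retraining. A minimal NumPy sketch of the prune-and-rewind step (the layer shapes, checkpoint, and `magnitude_prune` helper are illustrative assumptions, not the paper's code):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Return a binary mask that removes the smallest-magnitude weights."""
    flat = np.abs(weights).ravel()
    k = int(len(flat) * sparsity)  # number of weights to prune
    if k == 0:
        return np.ones_like(weights)
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return (np.abs(weights) > threshold).astype(weights.dtype)

# Hypothetical fully-trained weights and an early-training checkpoint
rng = np.random.default_rng(0)
trained = rng.normal(size=(4, 4))
checkpoint = rng.normal(size=(4, 4))  # weights saved early in training

# Prune based on the trained magnitudes...
mask = magnitude_prune(trained, sparsity=0.5)

# ...then rewind: surviving weights restart from the checkpoint values,
# and retraining proceeds with the mask held fixed (loop not shown).
rewound = checkpoint * mask
```

Fine-tuning would instead continue training `trained * mask` at a low learning rate; rewinding restarts the surviving weights from earlier in training, which the referenced paper compares against fine-tuning.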
Don't forget to join our Slack channel and discuss previous episodes or propose new ones.
This episode is supported by Pryml.io. Pryml is an enterprise-scale platform to synthesise data and deploy applications built on that data back to a production environment.
References
Comparing Rewinding and Fine-tuning in Neural Network Pruning https://arxiv.org/abs/2003.02389