EPISODE · May 31, 2018 · 46 MIN
Deep Gradient Compression for Distributed Training with Song Han - TWiML Talk #146
from The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) · host Sam Charrington
On today’s show I chat with Song Han, assistant professor in MIT’s EECS department, about his research on Deep Gradient Compression. In our conversation, we explore the challenge of distributed training for deep neural networks and the idea of compressing the gradient exchange so that it can be done more efficiently. Song details the evolution of distributed training systems based on this idea, and gives a few examples of centralized and decentralized distributed training architectures, such as Uber’s Horovod and the approaches native to PyTorch and TensorFlow. Song also addresses potential issues that arise with distributed training, such as loss of accuracy and generalizability, and much more. The notes for this show can be found at twimlai.com/talk/146.
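The core idea discussed in the episode is sending only the largest-magnitude gradient entries at each step and accumulating the rest locally so no update is lost, only delayed. Below is a minimal, illustrative sketch of that sparsification step in PyTorch; the function name, the compression ratio, and the omission of refinements such as momentum correction and warm-up training are assumptions for illustration, not code from the paper or the episode.

```python
# Illustrative sketch of top-k gradient sparsification with local residual
# accumulation, the basic mechanism behind deep gradient compression.
# Names and the compression ratio are illustrative assumptions.

import torch

def sparsify_gradient(grad: torch.Tensor, residual: torch.Tensor, ratio: float = 0.001):
    """Return (indices, values) of the largest entries of grad + residual.

    The unsent remainder is written back into `residual` so it is folded
    into the next step's gradient instead of being dropped.
    """
    accumulated = grad + residual                 # fold in previously withheld updates
    flat = accumulated.flatten()
    k = max(1, int(flat.numel() * ratio))         # keep only a small fraction of entries
    _, idx = torch.topk(flat.abs(), k)            # pick the largest-magnitude gradients
    values = flat[idx]

    mask = torch.zeros_like(flat, dtype=torch.bool)
    mask[idx] = True
    residual.copy_(accumulated * (~mask).view_as(accumulated))  # withhold the rest locally
    return idx, values

# Toy usage: one layer's gradient on one worker.
grad = torch.randn(1024, 1024)
residual = torch.zeros_like(grad)
idx, vals = sparsify_gradient(grad, residual)
print(f"sent {vals.numel()} of {grad.numel()} entries")
```

In a distributed setting, only the sparse (index, value) pairs would be exchanged between workers, which is what makes the gradient exchange far cheaper than sending dense gradients.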