PodParley
Scalable Distributed Deep Learning with Hillery Hunter - TWiML Talk #77

EPISODE · Dec 4, 2017 · 38 MIN


from The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) · host Sam Charrington

This week on the podcast we’re running a series of shows consisting of conversations with some of the impressive speakers from the AI Summit in New York City. The theme of the conference, and of the series, is AI in the Enterprise, and I think you’ll find it really interesting in that it includes a mix of both technical and case-study-oriented discussions. My guest for this first show in the series is Hillery Hunter, IBM Fellow and Director of the Accelerated Cognitive Infrastructure group at IBM’s T.J. Watson Research Center. Hillery and I met a few weeks back in New York, and I’m really glad that we were able to get her on the show. Hillery joins us to discuss her team’s research into distributed deep learning, which was recently released as the PowerAI Distributed Deep Learning Communication Library, or DDL. In my conversation with Hillery, we discuss the purpose and technical architecture of the DDL, its ability to offer fully synchronous distributed training of deep learning models, the advantages of its Multi-Ring Topology, and much more. This is for sure a nerd alert pod, especially for the performance and hardware geeks among us. Be sure to post any feedback or questions you may have to the show notes page, which you’ll find at twimlai.com/talk/77. For more info on this series, visit twimlai.com/aisummit
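For listeners curious what "fully synchronous distributed training" over a ring topology means in practice, here is a minimal sketch (this is an illustration of the general ring all-reduce idea, not IBM's DDL library or API): each worker computes gradients on its own data shard, then all workers average their gradients via a ring, exchanging one chunk per step so that each needs 2*(N-1) communication steps regardless of gradient size.

```python
# Illustrative simulation (an assumption for teaching purposes, not DDL itself):
# N workers average equal-length gradient vectors with a ring all-reduce.
def ring_allreduce(grads):
    """grads: list of N equal-length lists (one gradient vector per worker).
    Returns the averaged gradient as seen by every worker."""
    n = len(grads)
    size = len(grads[0])
    assert size % n == 0, "gradient length must divide evenly into n chunks"
    c = size // n                         # chunk length
    buf = [list(g) for g in grads]        # each worker's local buffer

    def chunk(w, k):
        return buf[w][k * c:(k + 1) * c]

    # Phase 1, reduce-scatter: at each step every worker i sends one chunk
    # to its ring neighbor (i+1) % n, which accumulates it. Sends are
    # snapshotted first so they all happen "simultaneously" (synchronous step).
    for step in range(n - 1):
        sends = [(i, (i - step) % n, list(chunk(i, (i - step) % n)))
                 for i in range(n)]
        for i, k, data in sends:
            dst = (i + 1) % n
            for j, v in enumerate(data):
                buf[dst][k * c + j] += v

    # After n-1 steps, worker i holds the fully summed chunk (i + 1) % n.
    # Phase 2, all-gather: circulate the completed chunks around the ring
    # so every worker ends up with every summed chunk.
    for step in range(n - 1):
        sends = [(i, (i + 1 - step) % n, list(chunk(i, (i + 1 - step) % n)))
                 for i in range(n)]
        for i, k, data in sends:
            dst = (i + 1) % n
            buf[dst][k * c:(k + 1) * c] = data

    # Average: every worker applies the same synchronized gradient.
    return [[v / n for v in b] for b in buf]
```

Because each worker only ever talks to its ring neighbor, link bandwidth is used evenly; a multi-ring scheme (as the episode's name suggests) runs several such rings over different links at once to use all available bandwidth between nodes.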
