PodParley

Episode 22: Parallelising and distributing Deep Learning

Episode 22 of the Data Science at Home podcast, hosted by Francesco Gadaleta and titled "Parallelising and distributing Deep Learning", was published on September 25, 2017 and runs 19 minutes.

September 25, 2017 · 19m · Data Science at Home


Continuing the discussion of the last two episodes, there is one more aspect of deep learning that I would love to consider, and that deserves a full episode of its own: parallelising and distributing deep learning on relatively large clusters.

As a matter of fact, computing architectures are changing in a way that encourages parallelism more than ever before, and deep learning is no exception. Despite the great improvements brought by commodity GPUs (graphical processing units), when it comes to speed there is still room for improvement.

Together with the last two episodes, this one completes the picture of deep learning at scale. Indeed, as I mentioned in the previous episode, How to master optimisation in deep learning, the function optimiser is the horsepower of deep learning and of neural networks in general. A slow and inaccurate optimisation method leads to networks that converge slowly to unreliable results.
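
To make that claim concrete: a plain stochastic gradient descent optimiser (used here as a representative example, not something the episode spells out) repeatedly applies the update

$$\theta_{t+1} = \theta_t - \eta \, \nabla_\theta L(\theta_t)$$

where $\theta$ are the network parameters, $\eta$ is the learning rate and $\nabla_\theta L$ is the gradient of the loss. Everything that follows about parallelising and distributing deep learning ultimately comes down to computing these gradient steps faster and at larger scale.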

In another episode, titled "Additional strategies for optimizing deep learning", I explained some ways to improve function minimisation and model tuning in order to get better parameters in less time. So feel free to listen to these episodes again, share them with your friends, even re-broadcast or download them for your commute.

While the methods that I have explained so far represent a good starting point for prototyping a network, when you need to switch to production environments or take advantage of the most recent and advanced hardware capabilities of your GPU, well... in all those cases, you would like to do something more.  
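
The episode itself stays framework-agnostic, but to make the idea of "something more" concrete, here is a minimal sketch of synchronous data-parallel training using PyTorch's DistributedDataParallel. The toy model, data, and hyperparameters are illustrative assumptions of mine, not the show's own code: each worker computes gradients on its own shard of the data, the gradients are averaged across replicas, and every replica applies the same update.

```python
# A minimal sketch of synchronous data-parallel training with PyTorch's
# DistributedDataParallel (DDP). Model, data and hyperparameters are
# placeholders, not anything prescribed by the episode.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank, world_size):
    # One process per GPU; NCCL is the usual backend on GPU clusters.
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = nn.Linear(100, 10).to(rank)        # placeholder model
    ddp_model = DDP(model, device_ids=[rank])  # wraps this worker's replica
    optimiser = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(100):                       # placeholder training loop
        # In practice each worker would read its own shard of the dataset;
        # random tensors stand in for that here.
        x = torch.randn(32, 100, device=rank)
        y = torch.randint(0, 10, (32,), device=rank)
        optimiser.zero_grad()
        loss = loss_fn(ddp_model(x), y)
        loss.backward()                        # DDP all-reduces (averages) gradients
        optimiser.step()                       # every replica applies the same update

    dist.destroy_process_group()

if __name__ == "__main__":
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    world_size = torch.cuda.device_count()     # one training process per GPU
    mp.spawn(train, args=(world_size,), nprocs=world_size)
```

The design choice worth noting: synchronous averaging keeps all replicas mathematically identical to a single larger-batch run, at the cost of every step waiting for the slowest worker, which is exactly the kind of trade-off this episode explores.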
