PodParley

Episode 21: Additional optimisation strategies for deep learning

This episode of the Data Science at Home podcast, hosted by Francesco Gadaleta and titled "Episode 21: Additional optimisation strategies for deep learning", was published on September 18, 2017 and runs 15 minutes.

September 18, 2017 · 15m · Data Science at Home



In the last episode, How to master optimisation in deep learning, I explained some of the most challenging tasks in deep learning, along with methodologies and algorithms that improve the speed of convergence of minimisation methods. I explored the family of gradient descent methods - though not exhaustively - giving a list of approaches that deep learning researchers consider for different scenarios. Every method has its own benefits and drawbacks, depending largely on the type of data and its sparsity. But there is one method that, at least empirically, seems to be the best approach so far.
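The family of gradient descent updates mentioned above can be sketched in a few lines. The toy quadratic objective and the hyperparameter values below are illustrative assumptions on my part, not taken from the episode; they simply show how plain gradient descent and its momentum variant differ.

```python
import numpy as np

# Toy objective: f(w) = 0.5 * ||w||^2, whose gradient is simply w.
# This stands in for a real loss surface purely for illustration.
def grad(w):
    return w

def sgd_step(w, lr=0.1):
    # Plain gradient descent: step against the gradient.
    return w - lr * grad(w)

def momentum_step(w, v, lr=0.1, beta=0.9):
    # Momentum accumulates past gradients, which speeds up
    # convergence along directions where gradients agree.
    v = beta * v + grad(w)
    return w - lr * v, v

w = np.array([5.0, -3.0])
v = np.zeros_like(w)
for _ in range(100):
    w, v = momentum_step(w, v)
print(np.linalg.norm(w))  # the iterate shrinks toward the minimum at 0
```

Each named method in the gradient descent family (momentum, Nesterov, Adagrad, RMSprop, and so on) is a variation on how this per-step update is scaled and accumulated.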

Feel free to listen to the previous episode, share it, re-broadcast it, or just download it for your commute.

In this episode I would like to continue that conversation with some additional strategies for optimising gradient descent in deep learning, and introduce a few tricks that might come in useful when your neural network stops learning from data, or when the learning process becomes so slow that it seems to have reached a plateau even when you feed in fresh data.
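One widely used trick for exactly this plateau situation is to shrink the learning rate when the monitored loss stops improving. The sketch below is my own minimal version of that idea; the function name and the patience/factor values are assumptions, not something specified in the episode.

```python
# A minimal reduce-on-plateau sketch. The patience and factor values
# are illustrative defaults, not taken from the episode.
def reduce_lr_on_plateau(losses, lr, patience=3, factor=0.5, min_lr=1e-6):
    """Shrink lr if the loss has not improved for `patience` epochs."""
    if len(losses) <= patience:
        return lr  # not enough history yet
    best_before = min(losses[:-patience])
    recent_best = min(losses[-patience:])
    if recent_best >= best_before:  # no improvement over the window
        lr = max(lr * factor, min_lr)
    return lr

lr = 0.1
history = [1.0, 0.8, 0.7, 0.7, 0.7, 0.7]  # the loss has plateaued
lr = reduce_lr_on_plateau(history, lr)
print(lr)  # 0.05
```

Deep learning frameworks ship ready-made versions of this schedule (for example PyTorch's ReduceLROnPlateau scheduler), but the underlying logic is just this comparison of recent losses against the best seen before the window.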
