Activate deep learning neurons faster with Dynamic RELU (ep. 101)
Episode 101 of the Data Science at Home podcast, hosted by Francesco Gadaleta and titled "Activate deep learning neurons faster with Dynamic RELU (ep. 101)", was published on April 1, 2020 and runs 22 minutes.
April 1, 2020 · 22m · Data Science at Home
Episode Description
In this episode I briefly explain the concept behind activation functions in deep learning. One of the most widely used activation functions is the rectified linear unit (ReLU). While there are several flavors of ReLU in the literature, in this episode I discuss a very interesting approach that keeps computational complexity low while improving performance quite consistently.
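To make the idea concrete, here is a minimal sketch, assuming a PyTorch setup, of a dynamic ReLU layer in the spirit of the referenced paper: instead of the fixed y = max(0, x), the K slopes and intercepts of y = max_k(a_k(x) * x + b_k(x)) are predicted from a global summary of the input by a small hyper-network. The class name DynamicReLU, the reduction factor, and the tanh squashing below are illustrative assumptions, not the authors' reference code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicReLU(nn.Module):
    """Illustrative dynamic ReLU: y = max_k(a_k(x) * x + b_k(x)),
    with the K slope/intercept pairs predicted per channel from the
    input itself (see https://arxiv.org/abs/2003.10027)."""

    def __init__(self, channels: int, k: int = 2, reduction: int = 4):
        super().__init__()
        self.k = k
        hidden = max(channels // reduction, 1)
        # Small hyper-network: global pooling -> two linear layers that
        # output 2*K coefficients (K slopes and K intercepts) per channel.
        self.hyper = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 2 * k * channels),
        )
        # Initialise so the layer starts out close to a standard ReLU:
        # slopes around (1, 0, ...), intercepts around 0.
        self.register_buffer("init_a", torch.tensor([1.0] + [0.0] * (k - 1)))
        self.register_buffer("init_b", torch.zeros(k))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        b, c, _, _ = x.shape
        # Global per-channel context drives the coefficients.
        context = F.adaptive_avg_pool2d(x, 1).view(b, c)
        theta = torch.tanh(self.hyper(context)).view(b, c, 2 * self.k)
        a = theta[..., : self.k] + self.init_a        # slopes, (b, c, k)
        bias = theta[..., self.k :] + self.init_b     # intercepts, (b, c, k)
        # Broadcast over spatial dims and take the max over the K pieces.
        x_exp = x.unsqueeze(-1)                        # (b, c, h, w, 1)
        out = x_exp * a.reshape(b, c, 1, 1, self.k) + bias.reshape(b, c, 1, 1, self.k)
        return out.max(dim=-1).values

Called on a feature map of shape (batch, channels, height, width), the layer returns a tensor of the same shape; while the hyper-network output is near zero, the slopes stay close to (1, 0) and the intercepts close to 0, so it behaves roughly like a standard ReLU before training adapts the coefficients.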
This episode is supported by pryml.io. At pryml we let companies share confidential data. Visit our website.
Don't forget to join us on the Discord channel to propose new episodes or discuss previous ones.
References
Dynamic ReLU: https://arxiv.org/abs/2003.10027