PodParley
Neural Network Quantization and Compression with Tijmen Blankevoort - TWIML Talk #292

EPISODE · Aug 19, 2019 · 50 MIN


from The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) · host Sam Charrington

Today we’re joined by Tijmen Blankevoort, a staff engineer at Qualcomm who leads their compression and quantization research teams. In our conversation with Tijmen we discuss:

• The ins and outs of compression and quantization of ML models, specifically neural networks
• How much models can actually be compressed, and the best ways to achieve compression
• A few recent papers, including “The Lottery Ticket Hypothesis”
