PODCAST · science
AI Podcast
by Kirill Solodskikh
Educational AI podcast from the CEO of TheStage AI. We'll explore the mathematics and engineering behind efficient model deployment.
AI Podcast: Quantization of Neural Networks, Part 1. Introduction, Definitions, Examples.
Quantization is a powerful technique for reducing memory usage and speeding up AI applications built with LLMs, diffusion models, CNNs, and other architectures. In fact, quantization is fundamental to all data compression, from JPEG and GIF to MP3 and MP4 (HEVC)! In this episode, we cover the basics of neural network quantization, laying the groundwork for future episodes where we'll dive into specific quantization algorithms. The AI Podcast is hosted by Kirill, CEO of TheStage AI. With his team's deep scientific and industrial expertise in neural network acceleration and deployment, they'll show you how to run AI anywhere and everywhere!

OUTLINE:
00:00 - Jingle!
01:24 - Structure of the podcast
01:46 - When and how to use quantization?
03:11 - Speed up, reduce memory, or both?
04:18 - Hardware with quantization support
05:28 - DNN compilers for running quantized networks
06:01 - What is quantization, mathematically?
07:22 - Fake quantized tensors
08:43 - Symmetric, asymmetric, per-tensor, per-channel, per-group
09:43 - Quantized matrix multiplication
11:31 - Quantization algorithms
13:23 - Examples of PTQ and QAT
16:11 - Quantized parameters don't live in a discrete space! Is it a manifold?
18:08 - Details of the next episode!
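As a small taste of the mathematics discussed in the episode, here is a minimal NumPy sketch of symmetric per-tensor quantization and the "fake quantization" trick from the outline. The function names and the int8 choice are illustrative assumptions, not code from the episode:

```python
import numpy as np

def quantize_symmetric(x, num_bits=8):
    """Symmetric per-tensor quantization: map floats onto a signed integer grid."""
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 127 for int8
    scale = np.max(np.abs(x)) / qmax          # one scale shared by the whole tensor
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map integers back to approximate floats."""
    return q.astype(np.float32) * scale

def fake_quantize(x, num_bits=8):
    """'Fake quantization': quantize then immediately dequantize, so the
    tensor stays in float but carries the rounding error of the integer grid.
    This is what simulates quantization during QAT."""
    q, scale = quantize_symmetric(x, num_bits)
    return dequantize(q, scale)

x = np.array([0.02, -1.3, 0.77, 0.5], dtype=np.float32)
xq = fake_quantize(x)
# Per-element rounding error is bounded by scale / 2
print(np.abs(x - xq).max())
```

Per-channel and per-group variants (also covered in the episode) simply compute a separate `scale` per output channel or per group of weights instead of one scale for the whole tensor.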