[1] Welcome back - not amlek.ai - we're ExplAInable

EPISODE · Sep 28, 2021 · 6 MIN

from ExplAInable · hosts Tamir Nave, Mike Erlihson, Uri Goren, Hila Paz Herszfang

Tamir Nave and Uri Goren introduce themselves and the new podcast format.
