PodParley
Evaluating Model Explainability Methods with Sara Hooker - TWiML Talk #189

EPISODE · Oct 10, 2018 · 1H 3M

from The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence) · host Sam Charrington

In this, the first episode of the Deep Learning Indaba series, we're joined by Sara Hooker, AI Resident at Google Brain. I spoke with Sara in the run-up to the Indaba about her work on interpretability in deep neural networks. We discuss what interpretability means, including nuances such as the distinction between interpreting a model's decisions and interpreting its function. We also talk about the relationship between Google Brain and the rest of the Google AI landscape, and the significance of the Google AI lab in Accra, Ghana.
