PodParley

Trustworthy AI series: Transparency and explainability

An episode of the Changing Conversations Podcast, hosted by SGS, titled "Trustworthy AI series: Transparency and explainability" was published on April 2, 2025 and runs 51 minutes.



AI is shaping the world around us, making decisions in healthcare, finance, hiring, law enforcement, and more. But how can we trust these systems when their reasoning is often hidden in a "black box"? In this episode, we unravel the crucial pillars of Explainability and Transparency—the keys to making AI trustworthy, ethical, and accountable.

Join our panel of industry leaders and top researchers as they explore:
🔹 Transparency vs. Explainability: What’s the difference, and why does it matter?
🔹 Real-World Challenges: Why do AI systems remain opaque, and what are the risks?
🔹 Cutting-Edge Solutions: The latest methods for making AI interpretable and responsible.
🔹 The Future of Trustworthy AI: How transparency and explainability are shaping AI regulations and adoption.

Featuring Willy Fabritius (Global Head of Strategy & Business Development, SGS), Tomislav Nad (Lead Innovation Technologist, SGS), Dr. Dominik Kowald (Research Manager, Know-Center & Graz University of Technology), and Ilija Šimić (AI Explainability Researcher, Know-Center). Together, they bridge industry expertise and academic insights to tackle one of AI’s biggest challenges.

🎧 Tune in now and discover what it takes to make AI fair, accountable, and worthy of our trust.

(00:00:19)- Introduction

(00:04:24)- What is Transparency in AI? And how does it relate to Explainability in AI?

(00:06:23)- Why are Transparency and Explainability in AI important?

(00:10:35)- How does the lack of transparency in AI systems pose a risk to business operations and reputation? 

(00:14:04)- How can we ensure that AI is transparent?

(00:21:36)- What are methods that can be used to explain AI?

(00:25:13)- How do regulations in Europe and worldwide address transparency?

(00:30:56)- Are there cases where transparency and explainability in AI are not needed, including under the EU AI Act?

(00:35:23)- How can transparency and explainability be measured? 

(00:38:20)- Are there tools available for explaining AI? And if so, are they freely available and/or open source?

(00:40:03)- Where do you see the biggest challenges in the field?

About our “Trustworthy AI: current areas of research and challenges” series:
The need for trustworthy Artificial Intelligence systems is recognized by many organizations, from governments to industry and academia. As AI systems become more widely used by both organizations and individuals, it is important to establish trust in them. To build this trust, numerous white papers, proposals, and standards have been published, with more still in development, to educate organizations on the need for and uses of AI systems. Join us for our series as our experts discuss a variety of topics related to building trust in and understanding of AI systems.
