Trustworthy AI series: Transparency and explainability
An episode of the Changing Conversations podcast, hosted by SGS. Published April 2, 2025 · 51m
Episode Description
AI is shaping the world around us, making decisions in healthcare, finance, hiring, law enforcement, and more. But how can we trust these systems when their reasoning is often hidden in a "black box"? In this episode, we unravel the crucial pillars of Explainability and Transparency—the keys to making AI trustworthy, ethical, and accountable.
Join our panel of industry leaders and top researchers as they explore:
🔹 Transparency vs. Explainability: What’s the difference, and why does it matter?
🔹 Real-World Challenges: Why do AI systems remain opaque, and what are the risks?
🔹 Cutting-Edge Solutions: The latest methods for making AI interpretable and responsible.
🔹 The Future of Trustworthy AI: How transparency and explainability are shaping AI regulations and adoption.
Featuring Willy Fabritius (Global Head of Strategy & Business Development, SGS), Tomislav Nad (Lead Innovation Technologist, SGS), Dr. Dominik Kowald (Research Manager, Know-Center & Graz University of Technology), and Ilija Šimić (AI Explainability Researcher, Know-Center). Together, they bridge industry expertise and academic insights to tackle one of AI’s biggest challenges.
🎧 Tune in now and discover what it takes to make AI fair, accountable, and worthy of our trust.
(00:00:19) - Introduction
(00:04:24) - What is Transparency in AI? And how does it relate to Explainability in AI?
(00:06:23) - Why are Transparency and Explainability in AI important?
(00:10:35) - How does a lack of transparency in AI systems pose a risk to business operations and reputation?
(00:14:04) - How can we ensure that AI is transparent?
(00:21:36) - What methods can be used to explain AI?
(00:25:13) - How do regulations in Europe and elsewhere worldwide address transparency?
(00:30:56) - Are there cases where transparency and explainability in AI are not needed, including under the EU AI Act?
(00:35:23) - How can transparency and explainability be measured?
(00:38:20) - Are there tools available for explaining AI? If so, are they freely available and/or open source?
(00:40:03) - Where do you see the biggest challenges in the field?
About our “Trustworthy AI: current areas of research and challenges” series:
The need for trustworthy Artificial Intelligence (AI) systems is recognized by many organizations, from governments to industry and academia. As AI systems become more widely used by both organizations and individuals, it is important to establish trust in them. To build this trust, numerous white papers, proposals, and standards have been published, with more still in development, to educate organizations on the need for and uses of AI systems. Join us for our series as our experts discuss a variety of topics related to building trust in and understanding of AI systems.