PodParley

Trustworthy AI series: Transparency and explainability

An episode of the Changing Conversations Podcast, hosted by SGS, titled "Trustworthy AI series: Transparency and explainability" was published on April 2, 2025 and runs 51 minutes.

April 2, 2025 · 51m · Changing Conversations Podcast



AI is shaping the world around us, making decisions in healthcare, finance, hiring, law enforcement, and more. But how can we trust these systems when their reasoning is often hidden in a "black box"? In this episode, we unravel the crucial pillars of Explainability and Transparency—the keys to making AI trustworthy, ethical, and accountable.

Join our panel of industry leaders and top researchers as they explore:
🔹 Transparency vs. Explainability: What’s the difference, and why does it matter?
🔹 Real-World Challenges: Why do AI systems remain opaque, and what are the risks?
🔹 Cutting-Edge Solutions: The latest methods for making AI interpretable and responsible.
🔹 The Future of Trustworthy AI: How transparency and explainability are shaping AI regulations and adoption.

Featuring Willy Fabritius (Global Head of Strategy & Business Development, SGS), Tomislav Nad (Lead Innovation Technologist, SGS), Dr. Dominik Kowald (Research Manager, Know-Center & Graz University of Technology), and Ilija Šimić (AI Explainability Researcher, Know-Center). Together, they bridge industry expertise and academic insights to tackle one of AI’s biggest challenges.

🎧 Tune in now and discover what it takes to make AI fair, accountable, and worthy of our trust.

(00:00:19)- Introduction

(00:04:24)- What is Transparency in AI? And how does it relate to Explainability in AI?

(00:06:23)- Why are Transparency and Explainability in AI important?

(00:10:35)- How does the lack of transparency in AI systems pose a risk to business operations and reputation? 

(00:14:04)- How can we ensure that AI is transparent?

(00:21:36)- What are methods that can be used to explain AI?

(00:25:13)- How do regulations in Europe and elsewhere in the world treat transparency?

(00:30:56)- Are there cases where transparency and explainability in AI are not needed, including under the EU AI Act?

(00:35:23)- How can transparency and explainability be measured? 

(00:38:20)- Are there tools available for explaining AI? And if so, are these tools freely available and/or open source?

(00:40:03)- Where do you see the biggest challenges in the field?

About our “Trustworthy AI: current areas of research and challenges” series:
The need for trustworthy Artificial Intelligence systems is recognized by many organizations, from governments to industry and academia. As AI systems become more widely used by both organizations and individuals, it is important to establish trust in them. To establish this trust, numerous white papers, proposals, and standards have been published, and more are still in development, to educate organizations on the need for and uses of AI systems. Join us for our series as our experts discuss a variety of topics related to building trust in and understanding of AI systems.
