The Vernon Richard Show

PODCAST · technology


Vernon Richards and Richard Bradshaw discuss all things software testing, quality engineering, and life in the world of software development. Plus our own personal journeys navigating our careers and lives.

  1. 35

    "Testing isn't a specialism"? You keep using that word…

    "Testing Is Not a Specialism" - you keep using that word… Vernon got triggered. A bold LinkedIn post declared "PSA: testing is not a specialism. Thank you for your time." Mic drop, walk off stage, no explanation. And it wasn't just one person. So Vernon did what any self-respecting tester would do: he asked why. And didn't get an answer. In this episode, Vernon and Richard dig into why some developers seem to find the idea of testing as a specialism genuinely laughable, what happens when you confuse a skill with a role, and why, in a world where everyone's building agentic workflows, nobody seems to notice that they're writing skills.md files full of testing knowledge. They also explore how AI is already reshaping what's expected of every role on a software team, why "knowing what good looks like" has never mattered more, and what skill stacking means for testers who want to stay ahead of the curve.

    Chapters:
    00:00 - Intro
    01:17 - Vern's welcome rant
    01:42 - The topic: Is testing a specialism?
    07:17 - Rich gets a chance to speak 😅
    07:29 - Good Testers vs Bad Testers
    11:15 - Aren't we all developers now anyway?
    13:06 - There's testing and there's Testing
    15:01 - Testers communicating their value
    18:34 - If testing isn't a specialism, where does that leave agents and skills?
    20:30 - The lads cook up a new way to reframe the situation
    23:53 - Who should do testing?
    32:27 - Vern believes these folks are saying one thing and doing another
    35:02 - Rich wants to know what happens to the 0.5x Testers?
    40:02 - Skill Stacking
    45:36 - The thing most people haven't done but need to
    50:04 - Are we all Domain Translators now?
    54:23 - Wrap up

    Links to stuff we mentioned during the pod:
    01:42 - Paul's interesting LinkedIn post that triggered Vernon
    03:34 - The question I asked on LinkedIn about why "people" get so triggered about testing as a role
    05:33 - Greg's interesting post about test management and levels of competence
    06:31 - Jade Rubick's post questioning whether QA should exist at all!
    10:46 - Angie Jones (the only thing Angie is terrible at is being terrible!): Angie's blog | Angie's LinkedIn
    14:11 - The episode "When Everything Sounds Like Testing… How Do You Explain What You Really Do?"
    14:46 - Funnily enough, Vernon is giving this talk at Agile Testing Days 2026! It's called "If Testers Had a Dragon's Den Pitch, Would Anyone Invest?" DM Vernon if you would like a discount code for the conference!
    15:07 - The very cool GreaTest Quality Conference
    27:25 - Jason Bourne, a fictional character from some of the best movies ever (especially the first two ^VR)
    28:29 - Anne-Marie Charrett: The excellent book in digital or physical versions | Anne-Marie's website | Be sure to check out the blog, which is no longer pay-walled 🥳 | Anne-Marie's LinkedIn
    28:29 - James Bach: The Test Jumper description we refer to | James' blog | James' LinkedIn
    51:02 - Nate B. Jones (shout out Martin for putting Vernon on to it 🙏🏾): The post about developer roles mentioned in the episode | Nate's newsletter | Nate's website | Nate's YouTube | Nate's LinkedIn
    56:52 - Vernon did write "Vernon Version 3 - Now with added AI!", all about the skills he thinks he needs to develop going forwards

    Please like, subscribe, and share 😊 ^VR

    Got thoughts on whether testing is a specialism? We genuinely want to hear from you. Vernon still doesn't have an answer to his question. Run it past a friendly developer and let us know what they say. Drop us a message on LinkedIn, and if Paul or Greg are listening, the invitation to come on the pod is very much open.

  2. 34

    6 AI Tool Ideas That Will Transform How You Test

    In this episode, Richard and Vernon explore the evolving concept of automation in quality, especially in the context of AI and Gen AI. They discuss how new technologies are blurring the lines between testing and quality, and what this means for the future of software development and testing practices.

    Chapters:
    00:00 - Intro
    00:52 - Welcome and weekly catch-up
    01:11 - Vern's deep dive into the AI rabbit hole
    02:39 - Rich's quiet(er) work week, new threads, and dentists
    04:15 - Richard buys a domain and we start the pod proper
    06:09 - Tool idea #1: Using an LLM to evaluate user stories and acceptance criteria automatically
    07:35 - Is analysing a story "testing" or "quality"? The ISTQB static analysis debate
    10:27 - Vernon's diabetes analogy: AI is forcing us to finally do what we always said we should
    12:19 - Better stories = better testing: how quality work amplifies everything downstream
    13:11 - Tool idea #2: "If we made this change, what areas of the system would be impacted?"
    14:23 - Distilling years of system knowledge into 5–10 questions an agent could ask
    18:37 - Tool idea #3: The PR Analyser — summarising code changes through a testing and quality lens
    21:45 - Vernon's "1 unit of effort, 5 units of testing" — the quality multiplier effect
    23:29 - Comparing story analysis to actual implementation: where did understanding diverge?
    24:43 - Tool idea #4: Dynamic test selection — cherry-picking the right tests to run first
    27:05 - Tool idea #5: An agent that analyses failed builds and attempts to fix them
    27:28 - Why Richard's first attempt always "fixed" the test instead of the code (and what was missing)
    29:21 - Dan's AI agents: one thinking partner, one employee monitoring production
    32:42 - The documentation goldmine: why AI-generated RCA notes might matter more than the fix
    33:39 - Tool idea #6: A holistic quality dashboard pulling insights across stories, code, tests, and process
    36:43 - John Cutler on context: it's not data you pass around — it's formed through interaction
    40:43 - More options than ever: whether it's testing, quality, or static analysis — you can do it differently now
    41:56 - The real skill: spotting the opportunity to make yourself more effective
    42:30 - GeePaw Hill's Lump of Code Fallacy and why task analysis matters
    43:34 - Why Richard got into automation: efficiency, not because he was told to
    45:03 - Vernon's big question: in a world where agents can do everything, what's your performance review about?
    46:52 - Context, craft, and product knowledge can't be delegated to tools yet
    48:29 - Call to action: What are you building? What tools couldn't you build before that you can now?
    49:29 - Upcoming: Test Automation Days and PeersCon Live in Nottingham

    Links to stuff we mentioned during the pod:
    04:15 - Automation in Quality: Richard bought the automationinquality.com domain! The concept explored throughout this episode.
    05:28 - Kalpesh Sodha aka Kalps: Shout out to Richard's colleague who played devil's advocate on the "is it testing or quality?" question
    07:31 - Static analysis
    29:44 - Dan "The Agile Guy" Elliott: His post about how he uses AI agents as a "thinking partner" and an "employee" with different missions and capabilities | Dan's website | Dan's LinkedIn
    36:52 - John Cutler: John's piece on how context isn't just data you move around — it's formed through interaction between people | John's newsletter | John's LinkedIn
    42:37 - Rob Sabourin: My quick Perplexity search for Rob's public material on Task Analysis | Rob's LinkedIn
    42:45 - Michael "GeePaw" Hill: His Lump of Code Fallacy, the idea that coding isn't just one activity — there are three flavours of work that occur when you code | Michael's website | Michael's Mastodon
    49:35 - Test Automation Days: Richard will be keynoting at Test Automation Days. Make sure you say hi if you're there!
    50:10 - PeersCon: Vernon and Richard will be recording a live episode at PeersCon! If you're there, come say hi and grab a mic 🎙️

  3. 33

    Six Principles of Automation in Testing: Still Relevant in 2026?

    In this episode, Richard Bradshaw and Vernon discuss the relevance and application of the six principles of Automation in Testing in the context of AI advancements. They explore how these principles hold up in 2026, the challenges faced in automation, and the future of testing strategies.

    Chapters:
    00:00 - Intro
    01:47 - Welcome (Richard is not at home 👀)
    02:07 - Ramadan, cooking without tasting, and plastic teeth 🦷
    04:01 - Today's topic: revisiting the AiT principles ahead of a keynote
    04:58 - What is Automation in Testing (AiT)?
    06:49 - Principle 1: Supporting Testing over Replicating Testing
    07:01 - Vernon's take: testing is a performance, not a click sequence
    08:22 - What the industry promised vs what automation actually does
    08:49 - The serendipity you lose when a human isn't testing
    09:59 - Agentic testing: observing more, but still not replicating humans
    10:56 - The danger of anthropomorphising AI output
    12:10 - LLMs always give an answer — and that's the problem
    13:03 - Principle 2: Testability over Automatability
    13:14 - Vernon's take: narrow vs broad — operate, control, observe
    14:38 - Making apps automatable for the robots but not the humans
    15:37 - The shiniest framework in a broken testing context
    16:40 - If it's testable, it's probably automatable — but not vice versa
    16:55 - Automation strategy vs testing strategy: when they compete, everyone loses
    17:46 - The problem has always been testing, not automation
    19:57 - Principle 3: Testing Expertise over Coding Expertise
    20:18 - Vernon's take: testing expertise lets you leverage the tools
    21:47 - The spoonfed tests problem: great at automating, lost without guidance
    22:36 - The "code school" era: everyone told to learn to code
    22:51 - Coding agents have changed the maths on this
    26:01 - The new nuance: test design and framework knowledge over writing the code
    28:44 - Evaluating code is a testing problem — and LLMs can help you do it
    30:43 - Are agents as good as a junior developer?
    31:42 - Outcome Engineering (O16G) and the race to write the AI principles
    32:13 - Simon Wardley: we're in the wild west again
    33:22 - Principle 4: Problems over Tools
    33:29 - Vernon's take: the hammer and the nail
    34:07 - Don't let your problems be shaped by the framework you have
    34:36 - New automation opportunities beyond testing: PRs, logs, story review
    35:30 - Principle 5: Risk over Coverage
    36:12 - Vernon's take: 100% coverage ≠ 100% risk coverage
    38:00 - The one test case, one automated test fallacy
    39:04 - Where in the system is the risk? Do you even know your layers?
    39:49 - Probabilistic vs non-deterministic: refining the language around AI
    40:53 - Coverage as intentional vs coverage as a number someone picked once
    43:15 - Principle 6: Observability over Understanding
    43:24 - Vernon's take: just-in-time understanding vs reading everything upfront
    44:12 - What the principle was actually about: making automation results observable
    47:00 - Does this principle belong in testing, or has it grown into quality?
    49:00 - So... what's missing?
    50:00 - The four pillars: Strategy, Creation, Usage, and Education
    57:05 - Automation in Quality: the bigger opportunity
    01:01:00 - Wrap up + Vern's Lead Dev panel

    Links to stuff we mentioned during the pod:
    04:00 - Automation in Testing (AiT): The principles live at automationintesting.com. AiT was co-created by Richard Bradshaw and Mark Winteringham
    04:00 - Test Automation Days: The conference where Richard is giving his keynote — testautomationdays.com
    24:48 - James Thomas: The "kid in a candy shop" himself — James's blog and LinkedIn
    31:42 - Outcome Engineering (O16G): The article Richard shared before recording — worth tracking down if you're interested in where agentic development practices are heading
    32:13 - Simon Wardley: If you're not following Simon Wardley, please follow Simon Wardley! His work on Wardley Maps and situational awareness in strategy is essential reading. Simon's LinkedIn
    43:30 - Abby Bangser: Vern's go-to person for all things observability. Abby's LinkedIn
    46:04 - Noah Sussman: As it turns out, the quote Vern was referencing (advanced monitoring as "indistinguishable from testing") was not by Noah! It was Ed Keyes at GTAC 2007. Noah's blog and LinkedIn
    59:30 - Angie Jones: Vern's been reading Angie's work on testing AI-enabled applications here and here. Angie's website and LinkedIn
    01:01:30 - The Lead Dev panel Vernon will be part of: "How to Measure the Business Impact of AI" — happening 25th February, free to sign up
    01:02:00 - Richard's Selenium Conf talk: "Redefining Test Automation" — the talk that the Test Automation Days keynote is shaping up to be a spiritual successor to.

  4. 32

    This Was Supposed to Be About Testing

    This was supposed to be about testing. Instead, it turned into a conversation about burnout, money, leadership, community, AI, and what it actually takes to build a sustainable life in tech. Richard and Vernon kick off 2026 reflecting on what they're changing, what they're rebuilding, and how testing and quality fit into a future shaped by intention rather than hustle.

    Links to stuff we mentioned during the pod:
    05:19 - The Malazan Book of the Fallen by Steven Erikson
    14:59 - The $1k Challenge by Ali Abdaal, which Vernon took part in last year
    17:23 - The video from Daniel Pink on how to have a successful year. Here's where Daniel talks about having a Challenger Network (but the whole video is 😙🤌🏾)
    18:46 - Toby Sinclair: Toby's website | Toby's LinkedIn
    19:24 - Keith Klain: Keith's blog | Keith's podcast | Keith's LinkedIn
    19:25 - Agile Testing Days conference
    35:45 - What is Model Drift?
    41:06 - Glue work: Tanya's Glue Work presentation, which you can read or watch | Vernon's talk about how glue work impacts Quality Engineers, Testers, etc.
    48:06 - Gary "GaryVee" Vaynerchuk: Gary's website | Gary's YouTube

    Chapters:
    00:00 - Intro
    00:54 - Greetings & where have we been?
    01:32 - The holidays
    02:34 - Rest & mood
    04:00 - Routines for success
    05:59 - Push-up challenge!
    08:35 - Dopamine detox
    10:28 - THE EPISODE BEGINS!
    10:29 - What are our personal 2026 themes (rather than resolutions)?
    10:59 - Rich's 2026 themes
    13:10 - Vern's themes
    17:58 - Friendship, loneliness, and being the initiator
    21:28 - Rich has two itches. One about writing...
    21:56 - ...and another about hats
    25:23 - Vern's leadership focus and testing foundations
    31:06 - AI work: data mindset, agents, and the vibe coding divide
    40:11 - Rant about AI testing being stuck in the past
    46:37 - Do "cool" shit and "talk" about it. How to stand out from AI Slop
    50:10 - Our podcast themes for 2026

  5. 31

    Shifting Left: Agile vs. Waterfall in QA

    In this episode of the Vernon Richard Show, the hosts engage in light-hearted banter about football before diving into a deep discussion on QA, QE, and testing. They explore the concept of 'shift left' in software development, comparing its application in agile versus waterfall methodologies. The conversation shifts to the evolving roles of QA and QE in the context of AI's impact on the industry, emphasising the importance of task analysis and building a quality culture within teams. The episode concludes with reflections on managing expectations in QA roles and the future of jobs in the field.

    Chapters:
    00:00 - Intro
    00:48 - Welcome and "Hey" (may contain traces of ⚽️)
    04:45 - Olly's first question: Does shift left lend itself more to waterfall (than other methodologies)?
    14:41 - Olly's second question: Does this limit how much agile can be used? Is there potentially a new methodology that can emerge from this?
    22:31 - Olly's third question (remixed by Rich a little): "...is it more now a case of making people aware that they can, should be considering things ahead of development?"
    34:24 - Olly's fourth question: How far can you shift-left before it becomes overstepping?
    51:53 - Olly's... which question is this now?! Next question! That works!: Where does the QA role end?

    Links to stuff we mentioned during the pod:
    04:26 - Olly Fairhall: Olly's LinkedIn | Here's a link to what Olly sent us
    04:45 - Waterfall (in software development): Wikipedia article about the history of the term | This article goes into a little more detail about the different phases and characteristics of the model
    07:29 - Dan Ashby's (yes, DAN'S!) famous diagram is part of his often-cited "Continuous Testing" post
    07:50 - For folks who don't understand that reference, it's a... taken (🥁) scene from the movie Taken
    08:10 - Rich's whiteboard used to get a lot more love 😞
    22:31 - Olly's questions and thoughts that are guiding our conversation. Thanks Olly!
    44:12 - The book "Who Not How" by Dan Sullivan and Dr. Benjamin Hardy
    46:33 - Elisabeth Hendrickson: Get Elisabeth's excellent book Explore It! | Elisabeth's LinkedIn
    46:49 - Alan Page: Alan's newsletter | Alan and Brent's podcast | Alan's LinkedIn
    51:53 - Kelsey Hightower: Kelsey did a Q&A at Cloud Native PDX and you can listen to the question and answer I was trying to describe here. I urge you to listen to the whole thing. Kelsey is an excellent orator, storyteller, and all-around human ❤️
    55:33 - Rob Sabourin: My quick Perplexity search for Rob's public material on Task Analysis | Rob's LinkedIn
    56:59 - Vernon's newsletter "Yeah But Does it Work?!": The issue mentioned is called "What Is The Vaughn Tan Rule and How Does It Impact Testing?" and talks about where we might start with unbundling

  6. 30

    Measuring Software Testing When The Labels Don’t Fit

    This episode is about the struggle to explain, measure, and name the work testers and quality advocates actually do — especially when traditional labels and metrics fall short.

    Links to stuff we mentioned during the pod:
    05:05 - Defect Detection Rate (DDR): The rate at which bugs are detected per test case (automated or manual): (No. of defects found by test team / No. of test cases executed) * 100
    15:06 - David Evans' LinkedIn
    24:57 - Janet Gregory: Janet's website | Janet's LinkedIn
    26:01 - Defect Prevention Rate: Perplexity search results here
    28:28 - Jerry Weinberg: Jerry's Wikipedia page (his books are highly recommended)
    49:33 - Shift-Left: The concept of moving testing activities earlier in the software development lifecycle. Some resources explaining the Shift-Left concept (Perplexity link)

    Chapters:
    00:00 - Intro
    01:11 - Welcome & "woke" testing 😳
    03:15 - QA, QE, Testing… whatever we call it, how do we measure if we're doing a good job?
    03:44 - Vernon's first experience with testing metrics: more = better?
    05:00 - Defect Detection Rate enters the chat
    06:41 - Rich reverse engineers quality skills needed in the AI era
    10:54 - How do we know if we're doing any of this well?
    12:40 - Trigger warning: the topic of coverage is incoming 😅
    16:54 - Bugs in production
    21:09 - Automation metrics: flakiness, pass rates, and execution time
    24:29 - Can you measure something that didn't happen? (Prevention metrics)
    27:43 - Do DORA metrics actually measure prevention?
    32:03 - Here comes Jerry!
    33:50 - The one metric the business cares about...
    36:23 - QA vs QE: whose "quality" are we "assuring"?
    39:25 - What's the story behind the numbers?
    48:29 - Rich brings in Shift Left Testing
    50:14 - Metrics that reach beyond engineering
    53:14 - Rich gets a new perspective on QE and the business
    56:50 - Who does this work? Testers? QEs? Or someone else?

  7. 29

    When Everything Sounds Like Testing… How Do You Explain What You Really Do?

    In this episode, Richard and Vernon delve into the complexities of Quality Assurance (QA), Quality Engineering (QE), and testing in software development. They explore the evolution of these concepts, their interrelations, and the importance of metrics in assessing quality. The conversation highlights the need for a holistic approach to quality, emphasising that both prevention and detection of bugs are essential. The hosts also discuss the challenges of defining these terms and the future of quality in the industry.

    Links to stuff we mentioned during the pod:
    08:50 - Dan Ashby: We're referring to Dan's excellent post called "Continuous Testing" (featuring his famous diagram!)
    17:13 - Jit Gosai: Jit's blog | Jit's Quality Engineering Newsletter | Jit's LinkedIn
    19:24 - Quality Talks Podcast: Stu's Quality Talks podcast that he co-hosts with Chris Henderson | Stu's LinkedIn | Chris's LinkedIn
    19:55 - The Testing Peers podcast
    22:00 - DORA Metrics: DORA metrics are a set of key performance indicators developed by Google's DevOps Research and Assessment team to measure the effectiveness of software delivery and DevOps processes, focusing on both throughput and stability
    26:13 - A link from Episode 10 where Vern discusses Glue Work (be sure to check out the show notes on that episode) | Quick overview of DORA metrics
    34:43 - The Credibility Playbook: A video course by Vernon as he experiments with building digital products. Check it out and let him know what you think of it! 😊
    46:24 - Ali Abdaal: Ali's website | Ali's YouTube

    Chapters:
    00:00 - Intro
    01:36 - Welcome
    02:40 - Today's topic: What the hell is QA? QE? Testing? And is it all changing?
    03:00 - Why is this bugging Rich?
    05:11 - Fruit fly tangent 🍌🍊🍎🪰🐝🦋
    06:27 - Rich's take on QA, QE, and Testing
    08:31 - Vern's take on QA, QE, and Testing
    11:15 - Is shift-left testing the same as QE?
    13:05 - When the team tests early... is that QE then?!
    16:18 - What's the big deal if we can't define QE clearly?
    19:27 - Why the Efficiency Era makes this even harder
    22:55 - Trying to draw the Testing, QA, QE Venn diagram
    27:24 - Getting the QA, QE, Testing blend just right. What's the right mix?
    29:52 - The kinds of work we take on as our careers grow
    34:08 - What Testers get rewarded for
    45:34 - How Ali Abdaal helped Vern think differently about quality
    48:18 - Rich talks measurement

  8. 28

    Embedding Quality Using AI

    In this conversation, Vernon and Richard explore the evolving role of AI in quality engineering and software development. They discuss how AI can enhance quality control processes, the importance of embedding quality early in the development cycle, and the potential challenges and opportunities that arise from integrating AI tools. The conversation also touches on the need for skill development and community engagement in adapting to these changes, as well as the implications for roles within the industry. Description and thumbnail made with AI to assess the quality — we had to!

    Chapters:
    00:00 - Intro
    01:02 - Welcome and footy ⚽️
    02:15 - Today's topic: The impact that AI may or may not have on Quality Engineering
    03:22 - Rich's wild idea about AI and software quality
    14:10 - Vern asks a clarifying question
    22:45 - Communities of excellence… for machines?!
    24:03 - Vern thinks there's an obvious risk that follows from this idea...
    31:31 - Rich addresses the risk (Oracles, prompts, and tester superpowers)
    36:13 - Reflection: the hidden skill AI forces on us
    41:40 - Shifting in all directions (not just left)
    43:04 - Feeding your past self into an AI: smart or scary?
    45:53 - Operation 400 subscribers (and bot listeners)
    47:13 - Tony Bruce calls us out on sloppy show notes and outro

    Links to stuff we mentioned during the pod:
    04:18 - Shift-Left: The concept of moving testing activities earlier in the software development lifecycle. Some resources explaining the Shift-Left concept (Perplexity link)
    25:35 - Rob Bowley: Rob's LinkedIn | The post Vernon referred to... | ...and a follow-up post not long after that one too!
    26:40 - Alan Page: Alan and Brent's podcast | Alan's LinkedIn
    34:43 - Saskia Coplans: Digital Interruption, Saskia's cybersecurity consultancy | REXscan, Saskia's automated mobile application vulnerability scanner | Saskia's LinkedIn (highly recommended follow)
    41:49 - Paul Coles: Paul Coles published 3 of his 4-part series "The Subtle Art of Herding Cats" over on Dev.to. Recommended reading! | Paul's LinkedIn
    43:09 - Maaret Pyhäjärvi: Maaret's website | Maaret's blog | Maaret's LinkedIn

  9. 27

    Six Hard Lessons From Building With AI Agents

    In this episode of the Vernon Richard Show, the hosts discuss their experiences with AI tools and agents, focusing on the challenges and lessons learned from using these technologies in coding and software engineering. They explore best practices for utilising AI effectively, the importance of context in interactions with AI, and the future of AI agents in the workplace. The conversation highlights the balance between leveraging AI for efficiency while maintaining control and understanding of the underlying processes.

    Links to stuff we mentioned during the pod:
    09:16 - The LinkedIn post talking about Replit messing with someone's production code 😳 | The link to the thread of the person who went through it | The tool in question, Replit
    13:01 - Rich's LinkedIn post with his tips
    14:21 - GitHub Copilot
    18:09 - VS Code
    29:01 - Folks at different ends of the "AI Enthusiasm Spectrum": On the enthusiastic end, Jason Arbon is always creating something interesting like... testers.ai. On the unenthusiastic end, Keith Klain has created a reading list to help get us up to speed (Keith's AI reading list — you can see his full resources list here), and Maaike Brinkhof has a bunch of thought-provoking posts on the topic... like this one and this one
    34:44 - Want to know what "conflabulation" means? Listen to Martin explain it on the Ghost in th code podcast (that's not a typo!)
    37:24 - What is Context Engineering? Perplexity has answers!
    46:38 - The legendary Lt. Geordi La Forge from Star Trek: The Next Generation
    51:48 - After recording, the very cool Paul Coles published his article "The Subtle Art of Herding Cats: Why AI Agents Ignore Your Rules" (Part 1 of 4), explaining the topic of Context Engineering. It's brilliant!
    59:04 - The promises of technology over the years...
    60:50 - The always insightful Meredith Whittaker, of Signal fame, who is its president and serves on its board of directors, explains the privacy and security concerns with agentic technology. Watch the clip, then go back and watch the whole thing!

    Chapters:
    00:00 - Intro
    01:17 - Welcome
    01:30 - TANGENT BEGINS... All kinds of egregious waffling follows. Skip to the actual content at 08:34
    01:31 - Rich VS Tree Stump
    01:57 - What on earth did Rich need the pulley for?
    02:26 - Vern's nerdy confession and pulley confusion
    02:52 - Does Rich live next door to Tony Stark?!
    03:22 - What to do when you need a steel RSJ
    03:35 - We admit defeat.
    03:36 - Welcome to Rich's Garden Adventures Podcast!
    07:25 - What has Vern been up to?
    08:34 - We attempt to segue into the episode at last!
    08:35 - TANGENT ENDS...
    08:51 - Rich's POC: using agents to help build AI tools
    09:45 - The Replit disaster: vibe coding meets deleted production data
    11:12 - Sociopathic assistants and the case for AI gaslighting
    11:55 - Vernon wants his team experimenting with AI tools
    12:50 - Rich explains the context for his latest AI adventures
    13:18 - Rich's bench project and "putting the engineering hat on"
    15:22 - Setting up the stack and staying in control
    16:53 - A familiar story: things were going fine until they weren't
    17:00 - Ask vs Edit vs Agent mode in Copilot explained
    19:06 - The innocent linting error that spiralled out of control
    21:16 - Stuck in a loop: "I didn't know what it was doing, but I let it keep going"
    22:11 - The fateful click: "I'm going to reset the DB"
    23:10 - The aftermath: no data, no damage… but very nearly
    23:33 - Security wake-up call: agents are acting as you
    24:39 - You can't fix what you don't know it broke
    25:52 - Can you interrupt an agent mid-task?
    27:14 - When agents get "are you sure?" moments
    28:15 - Tea breaks as a dev strategy: outsourcing work to agents
    29:24 - Jason Arbon vs Keith & Maaike: where Rich sits on the AI enthusiasm spectrum
    30:41 - Tip 1: The first of Rich's 6 agent tips: commit after every interaction
    32:12 - Why trusting the "keep all" button is risky
    34:01 - Writing your own commits vs letting the agent do it
    35:26 - When agents lose the plot: reset instead of fixing
    36:55 - "You're insane now, GPT. I'm giving you a break."
    37:54 - Tip 2: Make the task as small as possible
    39:59 - The middle ground between 'ask' and full agent delegation
    41:12 - Tip 3: Ask the agent to break the task down for you
    43:36 - The order matters: why you shouldn't start with the form UI
    44:33 - Vernon compares it to shell command pipelines
    45:09 - It can now open browsers and run Playwright tests (!)
    46:23 - Star Trek and the rise of the engineer-agent hybrid
    47:57 - Tips 4–6: Test often, review the code, use other models
    49:39 - Pattern drift and the importance of prompt templates
    50:51 - Vernon's nemesis: m dashes, emojis, and being ignored by GPT
    51:48 - Context engineering vs prompt engineering
    52:43 - When codebases get too big for agents to cope
    53:40 - Why agents sometimes act dumber than your IDE
    54:32 - The danger of outsourcing good practices to AI
    54:48 - Spoilers: Rich's upcoming keynote at TestIt
    55:01 - Agents don't ask why — they just keep going
    56:42 - Goals vs loops: when failure isn't part of the plan
    58:32 - The question of efficiency: is training agents worth it?
    59:47 - Rich's take: we'll buy agents like we buy SaaS
    61:08...

  10. 26

    Coaching Developers on QE

    In this episode of the Vernon Richard Show, Richard and Vernon discuss the challenges and opportunities in coaching software engineers on quality engineering. They explore personal updates, family dynamics, and the importance of perspective in quality and risk management. The conversation delves into the significance of code quality, effective communication, and the role of engineers in ensuring quality. They also touch on the need for hands-on learning and practical application in quality engineering training, concluding with a call to action for listeners to share their experiences and insights.

    Links to stuff we mentioned during the pod:
    01:16 - Llandegfan Exploratory Workshop in Testing aka LLEWT: You can read about the latest edition from James (I haven't written anything up yet - VR)
    19:57 - The "I just want to write code" LinkedIn post: FAILURE! I couldn't find the LinkedIn post I was referring to 😭
    22:29 - Linda Van De Vooren (massive brain freeze - I couldn't remember Linda's last name properly! Sorry Linda 🤦🏾‍♂️)
    25:04 - Cul-de-sac: a French term (meaning "bottom of the bag") that we use in English to describe a dead-end street, i.e. a street that only has one entry/exit point. We also use it in the context Vernon just did, to indicate a situation where we have no options.
    31:45 - The Deep Dive tracks at Agile Testing Days look incredible! Get your tickets ASAP, folks! They also have an Online Pass available if you're unable to visit Berlin (although if you can, we recommend visiting in person!)
    34:20 - Rich's Qt testing articles: Where Does AI Fit in the Future of Software Testing? | Applying the SACRED Model to Build Reliable Automated Tests | The Importance of Technical System Knowledge | 4 Essential Types of Automated API Testing | Exploring the Different Types of Automated UI Testing | The Manual Testing and Automated Testing Paradox
    42:11 - PeersCon tickets are available now. If you're in the UK and can easily get to Nottingham, I highly recommend visiting! Don't forget they also need VolunPeers (do you see what they did there?) before and during the event, so check that out too please 🙏🏾
    43:55 - Heather Reid: Heather's blog | Heather's LinkedIn
    46:15 - Liza, the awesome teammate in question
    46:41 - European Testing Conference, led by Maaret Pyhäjärvi: While the event has stopped, you can still take a peek at their website

    Chapters:
    00:00 - Intro
    00:48 - Welcome ramble
    05:25 - Rich's question: An Engineer colleague wants to be coached on Quality Engineering, what do I do?
    08:24 - Vern goes into coaching mode (shock!)
    09:35 - Vern goes into teaching mode (shock!)
    10:04 - Where could we start?
    12:25 - Risk enters the chat...
    14:50 - Quality enters the chat...
    15:48 - Help them speak up and become a QA (Question Asker)
    17:30 - Two powerful questions to get them thinking about quality
    19:15 - The dangers of acting like an order taker...
    19:57 - ...or are they?
    21:45 - Uno Reverse! Is it true that all Engineers "love" writing code?
    23:18 - Order Takers vs Experts
    24:30 - Another powerful question to ask
    25:47 - Rich's clarification sparks an idea about hats
    27:17 - Slalom Sponsorship Appeal
    27:41 - How do you decide when you have learned enough on a given topic?
    29:21 - Majors and minors
    30:55 - Learning modalities
    31:42 - Learning tools
    35:00 - A "syllabus" or roadmap starts to emerge
    36:50 - What can the Engineer do to help the QEs in their life?
    43:29 - Send us your ideas please
    45:54 - The 1-to-many approach
    47:53 - The classic mistake to avoid in this situation
    49:15 - The relationship between testing and quality
    52:10 - Vernon's people will contact Slalom's people

  11. 25

    Growth Plans for Technical Testers: Why Playwright Isn’t Enough

In this episode of the Vernon Richard Show, Richard and Vernon discuss growth plans for testers in test automation, focusing on the importance of coding skills, exploratory testing, and the balance between generalist and specialist roles. They explore the need for measurable targets in personal development plans and the significance of understanding the context of problems in software development. The conversation also touches on the impact of AI on software engineering and the necessity for collaboration between testers and developers.

Chapters
00:00 - Intro
01:42 - The Ramble begins
07:39 - QUESTION (Thanks Thierry!): "How do you see a growth plan for testers in test automation as a personal development plan?"
10:12 - How has Vern helped Testers create an automation development plan?
13:14 - What does it mean to go from novice to advanced?
15:15 - Rich wants to know what test automation means before answering the question!
15:57 - The nuance (and trap!) of the word "tool"
17:35 - Rich has come up with a new term for old testing
19:21 - What about code? Which languages should you learn?
20:34 - Vern's answer to a Redditor asking a similar question
23:34 - Don't forget the reason why we're trying to learn all of these tools and languages
24:24 - Who makes the "best" "automation" testers?
25:45 - What does it look like when an SDET hasn't learned how to identify the right test?
26:34 - Ok, if that's you and your team, how can you make it work?
28:33 - Lord of the Rings testing!
29:40 - How does Alan Richardson defeat "Testing Sauron"? (I'll stop the LotR references now I swears it 😇)
31:07 - Noah Sussman's excellent early ideas to solve this problem
32:42 - Generalist or Specialist, what is the core, foundational knowledge needed to call yourself an engineer?
34:18 - ...and what about AI? (only took half an hour!)
35:10 - Vern wants to get back to work asap and start creating growth plans... but for whom?
38:20 - What two things are often missed in growth plans?
40:41 - Rich talks about the tangible difference between being a novice and an advanced SDET/Automation Specialist/Toolsmith
41:39 - The cognitive load of your engineers
42:17 - Production code vs Automation code: Which is more important? Rich breaks it down.
44:27 - What are we optimising for?
47:45 - Do we have to choose between readability and efficiency though?
52:52 - Learning through pain
54:12 - Rich and Vern wonder what they should do next
54:32 - What makes this relevant in today's job market
55:22 - One last wild take about software development careers...

Links to stuff we mentioned during the pod:
03:23 - The Øredev conference in Malmö
Get your tickets here!
04:13 - The LLEWT peer testing workshop
Check out this summary from last year's event by James Thomas
Read about the origins of this flavour of workshop
06:40 - Cynefin, a sense-making framework devised by Dr Dave Snowden
Here's Dr Snowden explaining the framework
Enabling Constraints
The Paradox of Choice (which I didn't know was a book - readingList++)
07:39 - Here's the full question from Thierry as he asked it on LinkedIn
14:14 - "GUI Automation"
A term used to describe tools focused on driving browsers. Some examples of such tools would include Selenium, Cypress, Playwright, and Watir.
15:57 - I'll link to Rich's article once it's published 🙂
16:04 - Automation in Testing (AiT)
Automation in Testing references (via Perplexity)
16:18 - Some tools and frameworks Rich mentioned:
Selenium
Playwright
JUnit
18:50 - Rich's API Testing article on Qt QA blog
19:07 - Rich's article explaining the different kinds of GUI Automation
19:21 - What's a scripting language vs an object-oriented language?
According to Perplexity
Key takeaway: These are not mutually exclusive terms; they label two different aspects of programming languages
Scripting: how code is run
Object-oriented: how code is organised
19:36 - Programming languages:
TypeScript
Java
C# (pronounced "See-Sharp")
JavaScript
19:52 - Mark Winteringham
Mark's website
Mark's GenAI book
Mark's LinkedIn
20:34 - The article called "Career Advice For A 35+ Year Old Manual Tester"
26:54 - Erik "I love orange" Davis
Erik's LinkedIn
27:42 - Rich's S.A.C.R.E.D. model
29:40 - Alan Richardson
Alan's website
Alan's Patreon community
Alan's LinkedIn
31:02 - Noah Sussman
Noah's stupendous blog post "How to teach yourself to be a technical tester: some thoughts."
Noah's LinkedIn
Highly recommend that you watch Noah's talks on anyOfTheThings
32:24 - Then the effervescent Michael Larsen actually went through the thing! He documented his journey with it too.

  12. 24

    Should Testers Bother With Social Media?

In this episode of the Vernon Richard Show, the hosts discuss the significance of social media and community interaction in the software testing field. They explore how social media has evolved, the importance of content creation, and the balance between personal goals and professional networking. The conversation also touches on their favorite podcasts and books, as well as future directions for their own show.

Links to stuff we mentioned during the pod:
06:10 - Olly Fairhall
Olly's LinkedIn
Olly's Bluesky
11:27 - Chris Armstrong
Chris's blog
Chris's podcast The Testing Peers (of PeersCon fame!)
Chris's YouTube
Chris's LinkedIn
19:52 - Anne-Marie Charrett
Anne-Marie's book (why are you still here? Go and buy it right now!)
Anne-Marie's blog
Anne-Marie's LinkedIn
28:21 - Angie Jones
Angie's website (including links to her public GitHub repos, blog, and allTheSocialMedia profiles - I told you she was prolific!)
28:33 - Gops
To prove my point, I cannot find this man's LinkedIn profile 🤦🏾‍♂️😂 (this is a dude who I've known for multiple decades btw!)
34:05 - I work at Phrase, I'm having a great time, and we're hiring (and if you see something you like, apply! Just don't forget to tell them Vern sent you! 🙂)
34:36 - People who help me think about testing (and life in general):
Angie (scroll up for the links!)
Ash Coleman Hynie's LinkedIn (don't forget to check out the app she's building over at CountrPT!)
Martin Hynie's LinkedIn (check out his new podcast The Ghost In The Machine explaining AI!)
James Thomas's LinkedIn (he also has a wonderful blog)
37:35 - Dean Moon
Dean's website
Dean's LinkedIn
37:35 - Jit Gosai
Jit's blog
Jit's Quality Engineering Newsletter
Jit's LinkedIn
48:47 - Our influences:
Vernon
Diary Of A CEO (DOAC)
Modern Wisdom
Owner Nation
Big Deal
Same As Ever by Morgan Housel
Things I need to go back and binge:
AB Testing podcast
Quality Talks podcast
Testing Peers podcast
Richard
Engineering Quality podcast
Secrets Of Consulting by Jerry Weinberg
Explore It! by Elisabeth Hendrickson
Thinking, Fast and Slow by Daniel Kahneman
Agile Testing and More Agile Testing, both by Lisa Crispin and Janet Gregory
57:03 - Rachel Mlota
Rachel's YouTube
Rachel's LinkedIn

Chapters
00:00 - Intro
00:46 - Welcome (Tiredness, Sleeper walls, sweaty chilli, and strongman training)
05:59 - QUESTION (Observation? Comment? Ramble? Whatever it is, thanks Olly!): How important is social media and community interaction to testing?
06:46 - Vernon's experience of social media over the years
08:30 - The impact of the "Influencer" phenomenon on testing
09:59 - The golden age of Testing Twitter
10:57 - What LinkedIn can't replace
12:10 - The LinkedIn algorithm is weird!
13:15 - Algorithm anxiety and the overthinking spiral
15:45 - Community: planned vs organic
17:24 - Lurkers, reactors, and why it still counts
18:04 - To be or not to be... an influencer?
20:07 - Rich reflects on the impact of social media on him and testing
24:00 - Vern reflects on the impact of social media on him and testing
25:35 - The power of sharing with purpose (not just promoting yourself)
27:19 - QUESTION (From Olly): There seems to be a push to create content. What do you think about that?
30:12 - What is "content" anyway?
31:09 - Angie's 1-2-punch for content creation (that we should all copy!)
32:39 - Do you know why you're posting in the first place?!
33:23 - The impact of talking about testing (it's not just about "likes")
37:35 - Rabbiting on about writing
40:31 - Talking vs Doing
41:12 - Goal confusion & content fatigue
44:20 - Now that we mention it, what are OUR goals for content and social media?!
48:47 - QUESTION: What podcasts do you listen to, what books do you read, and do any of them influence what you'd like to do with the show?
57:20 - Where are we taking the podcast?
58:24 - Is Salesforce as dull as Rich thinks?
59:35 - Outro

  13. 23

    Gatekeeping Gotchas and Mentorship Mechanics

In this episode of the Vernon Richard Show, Vernon and Richard discuss various topics, including personal updates, mental health, the role of gatekeeping in quality assurance, mentoring experiences, and effective onboarding strategies. They emphasise the importance of community support, advocating for mental health awareness, and the nuances of being a gatekeeper in professional settings. The conversation also delves into the dynamics of mentoring, the significance of setting clear goals, and the art of making suggestions in new environments.

Chapters
00:00 - Intro
01:06 - Where have we been for a month?
06:16 - Men's Mental Health
09:46 - The Questions!
10:10 - QUESTION: To gate-keep, or not to gate-keep? That is the question from Deb Sherwood
22:05 - Olly impersonates Emily
22:33 - QUESTION: Mentorship advice
27:25 - Coaching vs Mentoring
28:10 - Vern's good and bad experiences with mentoring
28:57 - What role does accountability play in this?
29:38 - Informal mentoring
30:48 - Rich shares his experience mentoring a colleague
31:46 - Rich's good and bad experiences with mentoring
33:44 - Putting 💰 on the line
35:45 - Energy Vampires 🧛🏾
37:16 - The upside of being a mentor
39:17 - QUESTION: Onboarding into a new team or to a new product
40:40 - Rich's two-step process when he's in this situation
43:50 - Vern's kids teach him a valuable communication technique
45:24 - "Asking" & "Suggesting"
48:28 - The danger of suggesting things blindly
51:15 - Leadership inception
54:40 - Outro

Links to stuff we mentioned during the pod:
06:16 - Manchester Tech Festival
The MTF Mixer: Male Mental Health featuring:
Kofi Josephs (Kofi's website for Why Not I, Kofi's Instagram, Kofi's LinkedIn)
Jamie Lee Dennis (Jamie's LinkedIn, Jamie's website for Mandem Meetup)
Dmitry Leyko (Dmitry's LinkedIn)
James Davies (James' LinkedIn)
08:56 - Vern's travels:
Frankfurt for German Testing Days with Beren Van Daele
RiskStorming Online
Prague to meet my Phrase teammates
Manchester Tech Festival in... Manchester!
Zurich for GreaTest Conference
Liverpool to celebrate Liverpool FC winning the Premier League!
10:10 - Deb Sherwood
Deb's LinkedIn
15:37 - Jobs To Be Done (JTBD)
Courtesy of Perplexity, here are some references you might find useful that explain the JTBD concept
25:00 - S.M.A.R.T. goals
27:25 - The coaching vs mentoring rabbit hole on earlier episodes
29:13 - Ryan Cox
Ryan's website (if you check him out, tell him Vern sent you!)
Ryan's LinkedIn
35:45 - The Software Testers Journey book, written by Nicola Lindgren and me
40:30 - Automation in Testing references (via Perplexity)
41:18 - Resources about the 10 Ps of Testability
Article on testability featuring the 10 Ps
The book Team Guide to Software Testability by Ash Winter & Rob Meaney

  14. 22

    Discussing Test Automation, It Depends On...

Summary
In this engaging conversation, Vernon and Richard celebrate Liverpool's recent football victory while seamlessly transitioning into a discussion about automation in testing. They explore the definitions of automation, the importance of having a unified codebase for tests, and the challenges of choosing the right programming language for testing tools. The duo emphasizes the significance of collaboration between developers and testers, the need for regular review of tests, and the impact of context on decision-making in automation. The conversation is filled with insights and practical advice for anyone involved in software testing.

In the second half, Vernon and Richard explore the complexities of development and testing, focusing on the importance of language consistency, the role of developers in testing culture, and the challenges of tool standardization. They discuss the perception of automation skills in the industry, the distinction between coding and technical skills, and the need for context in automation. The conversation also touches on the future of AI in development and the balance between technical skills and automation. Finally, they share upcoming events and opportunities for community engagement.

* AI Generated show notes

Chapters
00:00 Football Triumphs and Rivalries
01:43 Diving into Automation
04:28 Understanding Automation in Testing
05:58 Frameworks and Code Repositories
08:29 The Role of Developers in Automation
11:08 Challenges in Automation Implementation
13:11 Best Practices for Test Code Management
19:13 Building Communication Between Components
21:13 Understanding Context and Testability
23:21 The Dilemma of Skipping Tests
25:29 The Importance of Test Review and Discipline
27:55 Navigating Commercial Pressures in Testing
29:32 The Complexity of Automation in Different Languages
42:16 The Misconception of Technical Skills in Automation
45:45 The Automation Misconception
48:44 Technical Skills vs. Coding Skills
52:32 Understanding the Role of Automation
56:01 The Future of Testing in an AI World
59:46 The Value of Critical Thinking in Testing
01:03:36 Navigating the Job Market as a Tester
01:08:48 Upcoming Events and Community Engagement

  15. 21

    The Messy Truth About Tech and Testing Careers in 2025

In this episode, Richard and Vernon discuss various aspects of job stability in the tech industry post-pandemic, the impact of innovation on job longevity, and the dynamics of accidental management. They explore the changing landscape of tech roles, the importance of ratios between developers and testers in projects, and the evolving nature of go/no-go decisions in software releases. The conversation emphasizes the need for clear metrics and standards to facilitate smoother decision-making processes in tech teams.

Chapters
00:00 - Intro
01:07 - Welcome to the Vernon Richard Emily Show
01:35 - Our viral moment
03:07 - Question: Do you think people are staying in tech jobs longer since the pandemic? The old time in role used to be about 2 years, right?
17:00 - Income worries and diversification
20:20 - Question: I see loads of developers become accidental managers, but I don't think that's as common for testers - what do you think?
23:40 - Can Testing specialists become Engineering Managers (and beyond)?
26:06 - How does being a Quality Engineer impact your chances?
28:25 - Lived experience vs coaching
31:08 - We're already doing management and leadership
32:19 - What career paths are open to people with a testing background?
33:59 - Question: Do you think there is a good target ratio of dev:test professionals in greenfield projects? Brownfield projects, etc.? If not, what sort of thing do you think that depends on?
34:40 - It depends! On the project demands
35:20 - It depends! On what kind of people are in the team
37:10 - It depends! On testability
37:33 - And Rich's answer is...
39:19 - Sometimes you can't coach your way out of a situation
40:50 - It depends! On what good looks like
41:25 - It depends! On where the company is in its life
44:35 - Question: My team release software to customers 1-2 times a week. Who do you think should be in that "go/no go" conversation?
46:46 - It depends 🤦🏾‍♂️
48:16 - go/no go meetings in 2002 versus 2025

Links to stuff we mentioned during the pod:
01:35 - Our most viral video to date
It's the short Sometimes it's best to quit your job
05:02 - Gary Stevenson, economist, trader, author
Gary's website
Gary's YouTube "Gary's Economics"
Gary's book "The Trading Game"
13:02 - Graham Freeburn
Happy retirement Graham!
17:00 - Here's the phrase Vernon butchered and what he was trying to convey
28:41 - The definition of coaching
The International Coaching Federation (ICF) define coaching as: "Partnering with clients in a thought-provoking and creative process that inspires them to maximize their personal and professional potential."
Here's Vernon explaining it during a presentation at Agile Testing Days
37:10 - Resources about the 10 Ps of Testability
Article on testability featuring the 10 Ps
The book Team Guide to Software Testability by Ash Winter & Rob Meaney

  16. 20

    Positioning and Selling Yourself in Your Teams

In this episode of the Vernon Richard Show, Richard and Vernon delve into the intricacies of career journeys in software testing, discussing the significance of job titles, the importance of positioning oneself within a role, and the challenges faced in consulting. They explore how to define one's role and impact, navigate client expectations, and the evolution of job descriptions in the tech industry. The conversation emphasizes the need for continuous learning and adapting to different contexts, ultimately highlighting the importance of effective communication and self-reflection in shaping a successful career.

Chapters
00:00 Introduction to the Podcast and Career Journeys
03:49 The Importance of Job Titles
10:08 Positioning and Expectations in Roles
20:37 Defining Roles and Responsibilities
27:47 Evolution of Testing Roles
29:09 Navigating Challenges in Testing
30:08 The Role of Leadership in Quality Assurance
31:09 Understanding Job Expectations in Consulting
32:28 The Importance of Experience in Consulting
34:38 Context Gathering for Effective Consulting
36:33 Identifying Root Causes in Testing Issues
37:25 Adapting Responses Based on Context
38:51 The Art of Job Crafting
41:38 Shifting Perspectives on Best Practices
44:16 Balancing Expectations and Reality in Consulting
47:17 Recognizing When to Walk Away
48:39 The Impact of Context on Job Titles
51:48 Reflecting on Skills Beyond Job Titles

  17. 19

    PeersCon 2025 Perspectives

Summary
In this episode, Vernon and Richard reflect on their experiences at the PeersCon conference, discussing the engaging keynote speakers, insightful workshops, and the overall atmosphere of the event. They share their thoughts on various talks, emphasizing the importance of communication, quality, and personal growth in the software testing industry. The conversation highlights the value of community and collaboration, as well as the significance of learning from both successes and failures in one's career. They also discuss memorable moments, workshops attended, and the importance of communication in the testing field, delving into specific workshops like Pipeline and Risk Storming, and the game 'Defend the Indefensible', which encourages critical thinking and perspective-taking. The conversation also touches on feedback for future events, emphasizing the need for a balance between communication and testing topics, and the importance of recognizing the efforts of event organizers.

Chapters
00:00 Introduction and Conference Overview
03:42 Reflections on PeersCon and Community Engagement
06:35 Keynote Highlights and Speaker Insights
09:30 Workshops and Learning Experiences
12:29 Communication and Quality in Software Development
15:27 Personal Growth and Career Development
18:22 Closing Thoughts and Future Events
25:06 Exploring Engaging Talks at the Conference
27:57 Tom's Journey: From Manufacturing to Software Testing
29:10 Linda's Comedy of Errors: A Unique Perspective on Automation
32:43 The Reality of Coding: Enjoyment vs. Competence
39:37 Networking and Unexpected Encounters at the Venue
42:57 Workshops: Enhancing Technical Understanding and Communication Skills
52:17 The Importance of Communication in IT and Life
54:39 Feedback and Constructive Criticism
57:36 Balancing Communication and Testing in Conferences
01:00:29 Enhancing Event Experience and Sponsor Engagement
01:04:34 Acknowledging Organizers and Their Contributions
01:08:34 Sustaining Affordability and Accessibility in Events

Links to come!

  18. 18

    How Testers Can Thrive in CI/CD Without Being Gatekeepers

In this episode of the Vernon Richard Show, the hosts delve into the nuances of Continuous Delivery and Continuous Deployment, exploring how testing practices evolve in these environments. They discuss the skills required for testers, the importance of risk management, and the cultural shifts necessary for effective quality assurance. The conversation highlights the need for collaboration within teams and the role of testers as facilitators rather than gatekeepers. The episode concludes with reflections on the importance of understanding quality and risk in software development.

Links to stuff we mentioned during the pod:
01:35 - Quality Talks
Check out their awesome new website
02:37 - The Agile Testing Days Conference
Oh! Their Call For Papers is open until the end of March. You should submit!
02:42 - Johnny J. Jones
Johnny's LinkedIn
03:23 - Continuous Delivery & Continuous Deployment
What is Continuous Delivery?
What is Continuous Deployment?
03:31 - Abby Bangser
Abby's LinkedIn
05:24 - Keith Klain
Keith's blog
Keith's podcast
Keith's LinkedIn
23:51 - Dan Ashby's famous post called "Continuous Testing" (featuring his famous diagram!)
28:45 - James Bach's Test Jumper concept
James' website
James' LinkedIn
31:20 - All of these people should probably change their last names to "Bourne"
40:18 - James Christie
James' blog
James' body of work regarding the Post Office Scandal
43:37 - Trunk-Based Thierry de Pauw
Thierry's website
Thierry's LinkedIn
48:30 - Jerry Weinberg's 2nd Law of Consulting from his book The Secrets of Consulting
"No matter how it looks at first, it's always a people problem."
Find more of Jerry's quotes on this page
Jerry's Wikipedia page (his books are highly recommended)

Bonus links to further study on the topic:
What's the difference between Continuous Delivery and Continuous Deployment?
The book Continuous Delivery, seminal work on the topic by Dave Farley & Jez Humble
Speaking of Dave Farley... You can visit his website to find links to allTheThings
Here's his excellent YouTube channel Modern Software Engineering
He's written a second book, Modern Software Engineering
Speaking of Jez Humble... He wrote the excellent book Accelerate with lead author Dr. Nicole Forsgren (<- all of Dr. Nicole's work is recommended reading at this point)
And here's his website
I asked ChatGPT for some resources and it gave me this list (proceed with caution just in case!).

Chapters
00:00 - Intro
01:06 - Merch tangent
02:53 - Today's topic: What skills and behaviours does a Tester need in order to be successful when they work in a CI/CD context? What does testing look like in a team using CI/CD?
04:08 - ⚽️ Footy
04:40 - Compare and contrast
07:00 - What conversation(s) needs to happen before "pressing the button" and who needs to be involved in it?
08:25 - Deployed Vs Released
13:37 - Monitoring and tooling to enable CI/CD practices
17:25 - Where/how do reviews fit into this?
20:38 - Back to Shift Left!
23:51 - Where does the testing happen?
24:36 - The link between chef Gordon Ramsay and software testing
25:45 - What are we reeeally talking about here?
27:09 - How to reframe things when someone makes the polarising claim "We don't need Testers in CI/CD/DevOps teams"
29:35 - Q: So how would I test differently if I were a Tester in a CI/CD team? A: Test like Jason Bourne.
32:00 - The value of having a tool belt and using it regularly
33:06 - How to catch a unicorn? How to unbundle testing skills
35:40 - This all loops back to risks & culture
37:03 - Where would it be a bad idea to use Continuous Deployment?
40:45 - Q: So how would I test differently if I were a Tester in a CI/CD team? A: Test like a Circus Ringmaster.
42:39 - Moar Shift Left: Real Devs build on Main
48:48 - Modern Vs Traditional mindsets
50:30 - Quality enters the chat...
51:00 - The relationship between risk and quality
52:47 - Testing Vs Quality Engineering
55:50 - ⚽️ Footy

  19. 17

    A Love Letter to Testing: What We Love About Our Work

In this episode, Vernon and Richard celebrate their love for the software testing community, discussing the importance of people, tools, teaching, and the thrill of conferences. They reflect on personal growth, the challenges of production issues, and the joy of mentoring others. The conversation emphasizes the connections made within the industry and the shared experiences that enrich their careers.

Links to stuff we mentioned during the pod:
10:28 - Selenium
The Selenium website
The BiDi spec
10:44 - Jason & Simon
Jason's LinkedIn
Simon's LinkedIn
13:17 - James Thomas
Here's an example of what I mean
And another!
Aaaaaand another!
27:02 - The PEBCAK error
27:28 - Conferences
27:58 - PeersCon 2025
28:17 - Agile Testing Days 2025
28:29 - Let's Test SA
Please DM if you can commit to going and/or know of companies that would be willing to sponsor!
BONUS - The legendary Chris Kenst maintains Software Testing Conferences, which has a list of allTheConferences
29:33 - Martin Hynie
Martin's newsletter
Martin's LinkedIn

Chapters
00:00 - Intro
01:11 - Is it our birthday yet?
01:52 - The Forced Socio-Economic Day episode
02:20 - What do we love about our careers?
03:07 - We love PEOPLE
04:39 - Different kinds of friendship
06:08 - Making a huge impact on people with tiny interactions
08:18 - The benefit of being tool aware
10:10 - We love TOOLS
10:20 - Rich's favourite tool
12:03 - Vern's favourite tool
15:40 - We love the VARIETY
18:21 - We love TEACHING
19:58 - What does the balance look like between teaching, mentoring, and coaching in Rich's current role?
22:33 - We love CHALLENGE
23:51 - Systems thinking and understanding how things work
27:28 - We love CONFERENCES
35:35 - Mutual appreciation ❤️
36:28 - The Friendly Tester is Dead. Long Live Richard Bradshaw!

  20. 16

    Exploring Agentic AI: A Fun and Eye-Opening First Look

In this conversation, Richard and Vernon delve into the evolving landscape of AI, particularly focusing on the concept of agentic AI. They discuss personal updates, including their health and fitness journeys, before transitioning into a detailed exploration of AI technologies. Richard shares his recent experiences with AI training and projects, emphasizing the differences between traditional generative AI and agentic AI. The discussion highlights the importance of goals, tasks, and tool awareness in AI, drawing parallels to software testing and the dynamics of generalists versus specialists in the tech industry.

They go on to explore the implications of agentic AI for testing and quality assurance: the importance of defining clear goals and expected outcomes for AI tasks, the need for quality characteristics in AI outputs, and the critical role of human oversight in AI decision-making. The conversation also touches on iterative learning, exploratory testing, and the future of AI in the testing domain, emphasizing the necessity for testers to adapt and enhance their skills in this rapidly changing environment.

Links to stuff we mentioned during the pod:
02:40 - Ben Kelly
Ben's LinkedIn
Ben's IMDb
06:38 - Martin Hynie's video explaining Agentic AI
Be sure to check out the resources he shared in the comments too. Goodness gracious 🎯
12:19 - CrewAI, the tool Rich was experimenting with
13:27 - And for funsies we asked ChatGPT the same question
17:52 - Jason Arbon
Jason's website
Jason's LinkedIn
23:23 - Persona-based testing
24:11 - Context-driven testing
01:08:43 - Other folks & materials you can learn from:
The deeplearning.ai website
Tariq King
Here's what Perplexity came up with!
His workshop: An Introduction to AI-Driven Test Automation
His presentation: Integrating GenAI for Testing into the Software Lifecycle
Tariq's LinkedIn
Melissa Eaden
Mel's newsletter
The issue called "A Fable about GenAI" is excellent
Mel's LinkedIn
Mark Winteringham
Mark's website
Mark's GenAI book
Mark's LinkedIn
Martin Hynie
Martin's newsletter
The series "So You Just Got Assigned Your First GenAI Project" is golden
Martin's LinkedIn
Go to his profile. Find the "Activity" section. Use the "Videos" link/button to filter his posts. Watch the videos. Thank me later.
Bill Matthews
Bill's LinkedIn

Chapters
00:00 - Intro
01:14 - Welcome
04:02 - Rich's adventures learning about AI
05:24 - Rich goes down the Agentic AI rabbit hole
07:00 - GenAI vs Agentic AI
12:45 - Understanding Agentic AI vs. Traditional AI
13:27 - What's the difference between the term "Agent" and "Agentic"?
15:20 - How would Rich describe or categorise a chatbot?
16:15 - What makes something agentic then?
17:52 - Jason helps Rich understand what to expect from his exploration
18:51 - What's the relationship between goals and tasks?
20:06 - Rich explains what makes this so interesting for him and got him excited
26:12 - Empowering Agents with the Right Tools
27:47 - Understanding Tasks vs. Goals
28:45 - Breaking Down Tasks for Efficiency
29:44 - How much agency do agents have?
31:38 - Task Descriptions and Expected Outcomes
33:03 - Teams of agents vs teams of people, and specialists vs generalists
35:48 - How does an agent decide what to do next, and how does it know it has completed the task?
36:40 - Defining Quality in Agent Outputs
38:15 - MOAR testing concepts that have parallels with Rich's exploration
40:28 - The consequences of not being accurate enough with your backstory, expected output, tasks, etc.
43:34 - What happens when agentic AI is asked to achieve the same goal without changing anything about the backstory, expected output, tasks, etc.?
45:40 - Challenges of Iteration and Learning
46:47 - What are max iterations and what does that remind Rich of?
47:40 - Vern wonders how important semantics is going to be and how Testers can contribute to this work
49:42 - Rich riffs on exploratory testing
51:02 - Exploratory Testing and Agentic Learning. What does the Tester's story look like in the context of an agentic system from the agent's perspective?
54:15 - Exploring Autonomy in AI Systems
56:57 - Evaluating AI Outputs and Task Design
58:21 - What happens if/when the context is left blank in these agentic systems?
01:00:49 - Soooo where do the humans fit in if agentic systems can do AllTheThings?
01:02:37 - Wrap up: Take 1 - Designing small targeted tests vs designing small targeted tasks
01:04:46 - Wrap up: Take 2 - Agents delegating tasks to other agents. Er... WTF?!
01:06:00 - Wrap up: Take 3 - How is Rich feeling about AI & AI tools?
01:09:45 - Wrap up: Take 4 - Testers ASSEMBLE! How we're going to contribute in a world of AI

  21. 15

    Goals, Growth, and Getting Things Done in 2025

In this episode of the Vernon Richard Show, the hosts discuss their goals for the new year, reflecting on the past year and sharing strategies for achieving personal and professional aspirations. They emphasize the importance of journaling, creating structured routines, and building accountability through community support. The conversation also touches on the significance of intentional content consumption and the benefits of sharing progress publicly. Overall, the episode serves as a motivational guide for listeners looking to set and achieve their own goals in 2025.

Links to stuff we mentioned during the pod:
05:35 - The Yearly Review by Dickie Bush & Nicolas Cole
This is an updated version of the one Vernon discussed in the episode. Win!
07:52 - Some helpful journalling prompts from Dickie
08:18 - The workshop Vernon mentioned is called The Productivity Spark 2025
It was a free-to-attend online workshop BUT... it looks like the link is down 😞. Vernon suspects the free workshop will be run quarterly, so keep your eyes and ears open for the next one. In the meantime, here are some other helpful resources from Ali:
Ali's website
Ali's free stuff
Ali's book
Ali's YouTube channel
Ali's Productivity course/community
09:49 - The CountrPT app by Ash Coleman Hynie
12:08 - Ilari Henrik Aegerter
Ilari's LinkedIn
17:03 - Elizabeth Zagroba
Sign up for the FroGS peer conference that Elizabeth organises (with help from her pals) because it's awesome!
Elizabeth's blog
Elizabeth's LinkedIn
26:00 - Ben Kelly
Ben's LinkedIn
28:48 - Toby Sinclair
Toby's website
Toby's LinkedIn
34:55 - Daniel Priestly
Daniel's YouTube
35:14 - John Cutler
John's Substack
John's LinkedIn
36:30 - Martin Hynie
Martin's LinkedIn
36:49 - Melissa Eaden
Mel's Substack
Melissa's LinkedIn
41:25 - The DOSE Effect
The book
A conversation on the topic between Ali Abdaal & TJ Power, the book's author
58:05 - Some references for the internal board of directors and invisible council concept from Perplexity
01:00:46 - The System Seeing Challenge by Ruth Malan
Ruth's website
The guide book for A Month of System Seeing

Chapters
00:00 - Intro: New Year, New Goals
01:09 - Welcome
01:33 - Today's theme: Achieving our goals
03:29 - How are we approaching goal setting and achieving goals?
05:35 - Vernon describes the Yearly Review process he used
07:52 - Journalling as a means to remember what happened throughout the year
09:33 - Where else would journalling be useful?
11:20 - Rich talks about notebooks and how he uses writing to achieve his goals and remember his wins
12:58 - Rich's goals for 2025
15:30 - Timeblocking FTW!
16:39 - Starting small
18:23 - Vernon's goals for 2025
19:26 - Richard's reading goals and his library
24:15 - How to stay accountable: Building in public & accountability buddies
31:34 - How to stay accountable: Bullet journal
34:21 - Vern shares the concept of "Look at me" vs "Look at this" content
39:10 - Rich realises why he hasn't done as much signal boosting as he used to
41:25 - How to stay accountable: Managing dopamine
44:55 - Managing focus by removing distraction
47:49 - Intention and finding a balance between resting vs procrastination and striving vs obsession
54:10 - How to stack the odds of success i

  22. 14

    2024 Reflections and A Look to 2025

00:00 Introduction02:00 Happy to be Employed05:00 Vernon Wrote a Book08:00 Talking at Developer Conferences16:20 PeersCon20:24 Our Podcast21:50 Agile Testing Days Experience22:20 Our Podcast again28:30 Bluesky32:30 Vernon’s New Newsletter41:00 Generalist Specialist and AI51:00 Discipline and Consistency54:00 Vernon’s Personal Reflection55:40 Richard’s Personal ReflectionLinks to stuff that we mentioned:05:00 - Mark’s and Nicola’s Book - The Software Tester’s Journey05:01 - Nicola Lindgren Bluesky Profile08:00 - Richard’s talk at Oredev08:00 - Øredev conference - https://oredev.org/11:30 - Abby Bangser - Bluesky Profile15:30 - Gitte Klitgaard - Bluesky Profile16:20 - PeersCon - https://testingpeerscon.com/18:24 - Beth Probert - https://www.linkedin.com/in/bethprobert/18:48 - Testing Peers Podcast21:20 - Our Agile Testing Days Podcast episode21:46 - Agile Testing Days conference29:00 - Tobias Geyer Bluesky32:30 - Jit Gosai Bluesky32:30 - Jit Gosai Quality Engineering Newsletter

  23. 13

    Agile Testing Days Experience with Special Guests

    This conversation captures the vibrant atmosphere of Agile Testing Days in Potsdam, highlighting the importance of community, targeted automated testing, and the exploration of tester identity. The hosts discuss their experiences at the conference, including workshops, networking opportunities, and a unique musical performance that showcased the talents of attendees. They delve into the significance of understanding tester identity and the symbols associated with it, emphasizing the need for connection and belonging within the testing community. This conversation at Agile Testing Days explores various themes including the experience of performing in a musical, insights from keynote speakers on technical coaching, the importance of documenting achievements for career advancement, and the evolving identity of professionals in agile environments. The discussion emphasizes the value of networking at conferences and the need for continuous learning and adaptation in one's career. [AI]Links to stuff we mentioned during the pod:01:00 - Agile Testing Days 2024 (ATD)ATD's website02:00 - 🤠 Señor Performo aka Leandro Melendez (who Vernon definitely did NOT call Leonardo 😅😇🤥🥷🐢🤦🏾‍♂️)Señor Performo's websiteSeñor Performo's English YouTube channel 🇬🇧Señor Performo's Español YouTube channel 🇪🇸Señor Performo's English podcast channel 🇬🇧Señor Performo's Español podcast channel 🇪🇸Señor Performo's bookSeñor Performo's LinkedInSeñor Performo's X03:00 - Richard's tutorial Targeted Automated Tests04:35 - Mark WinteringhamMark's websiteMark's books (probably why he's so slick with words!)10:07 - Lisa Crispin & João ProençaLisa's website and LinkedIn pagesLisa's BlueskyJoão's LinkedIn pageJoão's Bluesky10:17 - José "Pepe" Díaz & Uwe GelfertJosé's websiteJosé's BlueskyJosé's LinkedInUwe's LinkedIn11:25 - Alex(andra) SchladebeckAlex's websiteAlex's LinkedIn13:05 - The castSamuel "Robot Overlord" NitscheLena "Test Management Saviour" Pejgan NyströmRachel "Pitch Perfect" KiblerVeerle 
"Light Sabre Extraordinaire" VerhagenTamara "Putting the "Rock Star" in Rock Star Developer" JostenCallum "The stage is my happy place" Akehurst-RyanBastian "Best Bad Boss Ever" KnerrTobias "Chill Samuel I've got this" GeyerDid you order a light sabre duel with your musical? You did? Well here you go!19:32 - Leandro's workshop Ramping up modern performance21:53 - Jenna CharltonThe description of Jenna's ATD keynote Testing, Identity, and SymbolsJenna's LinkedInJenna's BlueskyMartin HynieMartin describing the wider impact of his talk, including a link to the talk itselfMartin's LinkedIn22:57 - Ashley HunsbergerAshley's LinkedIn39:05 - Bart KnaackBart's LinkedIn42:21 - Emily BacheThe description of Emily's ATD keynote Technical coaching development teams using the Samman methodEmily's YouTubeThe Samman Society websiteThe Hartman Proficiency Taxonomy by Marian HartmanBloom's taxonomy for cognitive thinkingPaul Holland, another ATD contributor of allTheThings and Rubik's Cube legendEmily's LinkedIn47:16 - Ash "Data is dope" HynieAsh's LinkedInCountrPT's websiteCountrPT's LinkedInPatrick Prill (who aside from being a generalist, also gives THE BEST hugs around, trust me on this folks!)Rita AvotaBen DowenJoão Proença01:00 - Intro LIVE from Agile Testing Days!03:00 - Why are we here?10:52 - The importance of networking and conferences12:17 - What kind of event is ATD?13:05 - The ATD Musical (you read that correctly)18:29 - Guest: Señor Performo discussing who he is and performance testing21:53 - Guest: Jenna Charlton explaining the relationship between testing, identity, and symbols36:15 - Guest: Basti aka Bastian "Best Bad Boss Ever" Knerr talking about being part of the musical39:05 - Guest: Bart Knaack talking about his escapades at ATD over the years and hi...

  24. 12

    Storytelling in Testing: Books, Fieldstones, and Keynotes

#contentcreation #softwaretesting In this conversation, Richard Bradshaw and Vernon discuss various themes including veganism, personal achievements, the writing process, and the importance of collecting ideas for content creation. They explore the significance of networking within the software testing community and reflect on the dynamics of social media, particularly Twitter and LinkedIn. The conversation also highlights the value of storytelling in professional settings and shares insights from a recent conference experience. In this conversation, Richard Bradshaw shares his experiences from the HUSTEF Conference, including the challenges and triumphs of being a closing keynote speaker. He discusses the unexpected power cut during his talk, the importance of engaging Q&A sessions, and the value of networking and building connections at conferences. The conversation emphasizes the significance of community, sharing insights, and the overall positive experience of attending events.00:00 Introduction and Veganism Discussion02:58 Celebrating Achievements and Book Launch06:00 Writing Process and Content Creation08:54 Collecting Ideas: The Fieldstone Method11:55 Building in Public and Sharing Experiences15:04 Networking and Community Engagement17:58 Reflections on Social Media Dynamics21:00 The Importance of Collecting Stories23:55 Conference Experience and Innovations26:53 Q&A Dynamics at Events32:25 Reflections on the HUSTEF Conference34:09 The Power Cut Incident39:43 The Importance of Q&A Sessions45:21 Closing Keynote Experience49:42 Traveling and Networking at Conferences52:22 Conversations and Connections at Events57:11 Final Thoughts and Takeaways

  25. 11

    Collabs, Careers, and Quirky Habits: Vernon & Richard's Ask Us Anything Session Part 2

In this episode of the Vernon Richard Show, Richard and Vernon engage in an AMA format, discussing various topics including their collaborative projects, future aspirations, the impact of their quality testing mindset on daily life, memorable swag from testing events, experiences in uncomfortable establishments, significant learning moments, and the importance of testing environments. They emphasize the need for continuous content creation and the desire to connect with their audience for future interactions.00:00 - Intro00:16 - Ben's question: If you could do any collab with anyone from the community, who would it be, and what might it look like?05:43 - Leigh's question: Where do you and Vern see yourselves in 5 years, or want to be in 5 years, doing what kind of role in what kind of company?15:46 - Andy's question: When I speak with people, I love hearing how their Quality/Testing mindset spills over into day to day life. One person used to test their children's toys by seeing if they could use them one-handed covered in olive oil. Another guy would occasionally test how far away his TV remote would work and see if it changes 😂Maybe you've covered this stuff already, but personally I always love hearing about these funny quirks and testing 'life'.26:05 - Emily's question: What's the best swag you've ever picked up from a testing event?31:29 - Mark's question: Have you ever walked into a pub that's so bad you want to instantly leave, but because of obligated politeness and fear of that awkward feeling of walking straight out again you stayed?35:08 - Ide's question: Looking back, best learning ever (at that moment perhaps biggest fail ever), with context, and what/when did it change from: Argh! 
to Ahhh!40:53 - Anonymous question: Do you think one of your test environments should match the spec of production?Links to stuff we mentioned during the pod:00:16 - Stuff from Ben's questionButch MayhewButch's websiteButch's LinkedInBen DowenBen's LinkedInLisi HockeLisi's blogLisi's LinkedInNicola LindgrenNicola's YouTube channelNicola's blogThe book we're writing called "The Tester's Journey"!Nicola's first book "Starting Your Software Testing Career"!Nicola's LinkedInKaren ToddKaren's YouTube channelKaren's LinkedInDean MoonDean's websiteDean's LinkedInAsh Coleman HynieAsh's new product CountrPT!Ash's LinkedInKelsey HightowerKelsey's X (formerly Twitter) accountManchester Tech FestivalTheir websiteTheir founder Amy NewtonEmily O'ConnorEmily's LinkedInDorothy "Dot" GrahamDorothy's LinkedInJanet GregoryJanet's websiteJanet's LinkedInAbby BangserAbby's LinkedInMelissa EadenMel's blogMel's LinkedInStuff from Leigh's questionLeigh RathboneLeigh's LinkedInAlan PageAlan's newsletterAlan and Brent's podcastAlan's LinkedInThe Black Tech Unplugged Podcast hosted by Deena McKayNB (from Vernon): I got the name of the podcast wrong 🤦🏾‍♂️! I mentioned the Tech Is The New Black podcast, which is awesome but crucially, NOT the one Deena creates. Forgive me Deena 🙏🏾.The Quality Bits podcast hosted by Lina ZubyteNB (from Vernon): I blundered AGAIN 🤦🏾‍♂️! The name of the pod is Quality Bits, BITS! Not Bytes. Good grief. 
Sorry Lina 🙏🏾Daniel KnottDaniel's YouTube channel (with 107,000 subscribers at the time of writing 🤯)Daniel's blogDaniel's booksHands-On Mobile App Testing on LeanpubSmartwatch App Testing on LeanpubDaniel's LinkedInJoe ColantonioJoe's YouTube (with 387,000 subscribers at the time of writing 🤯)Joe's websiteStuff from Andy's questionAndy JohnsonAndy's LinkedInDel DewarDel's blogDel's PS5 botDel's LinkedInStuff from Emily's questionEmily O'ConnorEmily's LinkedInStuff from Mark's questionMark GillottMark's LinkedInStuff from Ide's questionIde KoopsIde's LinkedInStuff from the Anonymous questionChristian LeggetChristian's LinkedInJonathan MarshallJonathan's LinkedIn

  26. 10

    Creating an Environment for Testers to Thrive

    The conversation revolves around the challenges faced by testers and the lack of understanding and support they receive from leadership. The hosts discuss the misconception of the value of testers and the need for leaders to create an environment where testers can thrive. They highlight the importance of addressing the frustrations and unhappiness of testers and the need for leaders to take responsibility for creating a supportive and nurturing culture.The conversation also touches on the changing expectations of developers compared to testers and the need for leaders to have a better understanding of the role and value of testers. The conversation explores the disconnect between the expectations and perspectives of testing and quality engineers. It highlights the need for leaders to take responsibility for creating a supportive environment and culture. The role of tools and marketing in shaping these expectations is also discussed.The conversation concludes with the importance of clear communication, understanding the needs of the team, and nurturing the growth of testers and quality engineers.#podcast #softwaretesting #software #softwaredevelopment Links to stuff we mentioned during the pod:11:38 - Challenge Networks. Listen to Adam Grant explain the concept of a Challenge Network on the DOAC podcast.26:30 - Jerry Weinberg's 2nd Law of Consulting from his book The Secrets of Consulting"No matter how it looks at first, it's always a people problem."Find more of Jerry's quotes on this page28:40 - Vern talking about his talk in episode 6 "How No-Code Test Tools, Technical Leadership & Glue Work Impact Software Quality"Tanya's Glue Work presentation which you can read or watch"What Is Quiet Quitting?" 
a BBC News article describing the phenomenonVernon's Agile Yorkshire presentation where he describes the link between those concepts29:19 - Martin HynieMartin describing the wider impact of his talkMartin's LinkedIn30:59 - Maaret Pyhäjärvi & Anna BaikAnna's quote that Maaret shared on LinkedInAnna's LinkedInMaaret's LinkedIn34:35 - Prompt EngineerWhat's a Prompt Engineer?35:30 - Adelina ChalmersAdelina's AMA session that I joined (please follow her and join her sessions, she's AWESOME!)Adelina's LinkedIn00:00 Introduction and Technical Discussion01:27 - What's going on with Rich's fingers?!01:36 Challenges and Misunderstandings Faced by Testers and Quality Engineers01:54 - Everything the Testers in your team want to tell you but are too afraid02:50 - Vern's theory07:40 - Why do other roles get "the nutrients" they need?08:09 The Value of Testers and the Need for Supportive Leadership10:26 - What do leaders misunderstand about the value of Testers and QEs?11:38 - Support networks Vs Challenger networks12:09 - The bugs we report that people REALLY don't like!12:43 - System problems disguised as testing problems14:37 Shifting Expectations for Developers and the Evolving Understanding of Testing14:53 - Groundhog day!16:10 - Rich wonders if our expectations are reasonable19:29 - How does the world perceive Developers, Designers, and Testers?21:04 - How expectations have changed for Developers23:07 Creating a Supportive and Nurturing Environment for Testers24:08 - How a lack of curiosity impacts the wellbeing of your team26:21 - Expectation vs Reality26:59 Bridging the Gap: Expectations and Perspectives27:27 - How to collaborate on expectations with the Tester in your life!28:37 - Martin's crazy experiments, Glue Work, Technical Leadership, and Quality Engineering30:13 - What does this tell us about the culture of the organisation?30:58 Creating a Supportive Environment for Testers and Quality Engineers31:52 - Rich asks if this is only a problem for people like us?32:41 
The Role of Tooling in Shaping Expectations35:30 - What can we learn from the CEO/CTO relationship?38:12 - What can we learn from relationships, period?40:19 - ⚽️ Footy42:16 - The impact of language and narrative on testing in the test tool market45:50 - The link between testing, manual labour, and knowledge work46:20 Advocating for Testers and Quality Engineers47:10 - Hiring to solve problems or to put bums in seats48:00 - Rich takes us back to the chicken and egg50:08 - A potential new focus and name for the show!51:22 - Outro

  27. 9

    Testing the Job Hunt: Red Flags, Networking, Personal Brand, and the Power of Storytelling

    In this episode, Richard and Vernon discuss the topic of hiring and share their thoughts on the annoying things that companies and hiring managers do. They emphasize the importance of seeking clarification and understanding the context behind red flags on a candidate's CV.They also discuss the power dynamic in the hiring process and provide advice for job seekers on how to mitigate potential problems. They highlight the value of storytelling and narrative in CVs and suggest cherry-picking relevant experiences to showcase in job applications.In this conversation, Richard and Vernon discuss job hunting strategies and offer advice for those looking for new roles. They emphasize the importance of networking, building a personal brand, and being intentional about what you share on platforms like LinkedIn.They also discuss the distinction between skills and tools in job specifications and CVs, encouraging a focus on transferable skills rather than specific tools. The conversation concludes with a call for feedback and suggestions from listeners.#softwaretesting #software #hiring #hiringtips Links to stuff we mentioned during the pod:01:43 - Vernon's LinkedIn post about how NOT to handle "red flags" spotted on CVs!09:41 - Wayne Bennett, FRSA, CertRPWayne's comment on my postWayne's recruitment firm Made4Tech GlobalWayne's LinkedIn17:47 - Recruiters we think are awesome!Wayne Bennet's LinkedIn (particularly for Manchester and NW England roles)Kelli Jackson's LinkedIn (for North American roles and runs a community for midlifers changing careers)Gabbi Trotter's LinkedIn (UK wide but particularly the North West and Midlands roles)Samir Mehta's LinkedIn (although he's internal now & works with Rich the poor guy)Jamie Doyle's LinkedIn (UK wide and a legend)Kristina Javůrková's LinkedIn (for Benelux roles)Matt Drinkwater's LinkedIn (UK wide and hosts QE Babble)James Duke's LinkedIn (UK wide and lover of fast cars)18:07 - Book: Never Split The Difference by Chris VossAn 
explanation of LabellingAn explanation of an Accusation AuditChris's websiteGrab the book from Amazon23:53 - Huib SchootsHuib's websiteAnd Huib is one of the folks I've heard talk about storytelling! You can find that presentation here.Huib's LinkedIn32:28 - Alan RichardsonAlan's websiteAlan's Patreon communityAlan's LinkedIn43:01 - The Quality Talks PodcastThe Quality Talks podcast hosted by Stu and ChrisStu's LinkedInChris's LinkedIn48:01 - Elizabeth ZagrobaElizabeth's article Doubt Builds TrustWhich contains an example of a Trustworthy CVElizabeth's websiteElizabeth's LinkedIn53:18 - The Never Search Alone movement00:00 - Intro00:49 - Let's talk about hiring01:00 - Hiring managers' annoying habits01:43 - Vern's rant about "red flags" on CVs03:59 - Rich explains why he thinks option C is reasonable (in the circumstances!)05:30 - Hiring is like software development!05:37 - Red flags == Bugs in production06:42 - Red flags == Feature flags / AB tests09:41 - A recruiter's perspective on the issue12:15 - How Rich approached his recent job search14:11 - Don't be passive during the interview: Asking questions, clarifications, and storytelling15:30 - How to handle objections during an interview16:36 - The importance of weaving the hidden gems of your experience into your interview18:07 - How Labelling and Accusation Audits can help you in interviewsHow to combine labelling and accusation audits to your advantage in interviews19:31 - Leverage your risk analysis skills to prep for your interview20:30 - How to sell yourself short in an interview23:20 - The meta skill of storytelling24:49 - Storytelling with your CV28:04 - Excellent advice about leveraging LinkedIn that Vern isn't following!29:49 - Rich's advice about what information to include and Vern's mysterious friends' experience trying to take that advice31:26 - Contradictory job hunting advice and how to swerve it32:05 - The ultimate hack(s) for job hunting36:10 - How do you decide or calculate what kind of 
material to include in your "personal brand"?41:09 - The balance of serious vs fun content43:01 - Rich questions who Vern chooses to hang out with43:39 - How to share other people's ideas and yours at the same time45:47 - Shout out to PastRich!46:06 - Rich wants to talk about how we talk about skills48:01 - Elizabeth Zagroba's interesting take on writing CVs51:21 - Vern has an idea for the next episode52:51 - What advice did we miss? Help!

  28. 8

    Dream Jobs and Emotion Based Testing: Using Feelings As Heuristics

    In this conversation, Richard and Vernon discuss their use of AI in their lives and then explore the topic of working at their dream companies. Richard expresses his fascination with SpaceX and the incredible engineering and technology involved in space exploration. Vernon shares his love for video games and the art and science behind their creation.They also touch on the emotions involved in software testing and how they can be clues to underlying problems. The conversation explores various emotions experienced during software testing, including frustration, joy, fear, suspicion, and familiarity. Frustration often arises when encountering bugs or issues, while joy can be felt when using a well-designed and user-friendly app. Fear is associated with the potential for irreversible actions or data loss. Suspicion arises when recognizing patterns or past experiences that may indicate potential problems. Familiarity helps in identifying missing features or inconsistencies.The conversation also touches on the concept of behavior-driven development (BDD) and the importance of having conversations and automating them rather than just documenting them.#exploratorytesting #softwaretesting #testing #software #softwaredevelopment #emotions 00:00 - Intro attempt no. 100:50 - Intro attempt no. 201:16 - ⚽️ Footy01:46 - ⚽️ Footy related preamble to the question03:01 - Dream job question03:36 - Space! 
The final frontier!03:54 Dream Job: Working at SpaceX and Developing Software for Rockets08:30 - Dream job: Nintendo, adventure games, and storytelling11:54 The Fascination with Rockets and Space17:04 - Emotions in software testing19:41 Beyond Functionality: The Importance of User Experience and Emotions20:10 The Role of Emotions in Software Testing20:35 Using Frustration and Anger as Indicators of Improvement Areas21:29 Learning and Coding: Frustration and Joy22:36 BDD and Sweary Outbursts23:56 The Importance of Clear User Scenarios25:34 The Value of Conversations in BDD26:50 - Joyful testing28:57 Fear and Suspicion in Testing31:14 The Anxiety of Sending Money33:27 - Suspicion and that feeling of déjà vu36:09 Applying Past Experiences and Patterns37:25 The Evolution of Suspicion and Familiarity39:27 The Role of Heuristics in Testing41:24 The Absence of Joy in Testing42:46 Emotions as a Guide for Testing Strategies and ApproachesLinks to stuff we mentioned during the pod:00:09 - KrispKrisp.ai - Noise cancelling software (not an affiliate link!)00:56 - Tristan LombardTristan's LinkedInTristan's TwitterVincent Kompany03:36 - SpaceX07:02 - Virgin GalacticTheir websiteHistoric mission which included the first women astronauts from the CaribbeanAntigua08:34 - Nintendo09:50 - Wonder Boy III: The Dragon's TrapWonder Boy III: The Dragon's Trap wiki page10:44 - The Zelda gamesThe Legend of Zelda: A Link to the Past wiki page12:49 - Daniel KnottDaniel's YouTube channelDaniel's blogDaniel's booksHands-On Mobile App Testing on LeanpubSmartwatch App Testing on LeanpubDaniel's LinkedIn22:45 - Behaviour Driven DevelopmentWhat is BDD? 
Doc page on cucumber.ioAnd again on the wiki page24:59 - Mark WinteringhamMark's website and blogMark's LinkedIn25:43 - Liz Keogh"Having the conversation > Documenting the conversation > Automating the conversation" Check out Slide 14 of Liz's excellent course Behaviour Driven Development38:40 - Beren van DaeleBeren's Testsphere cards (including links where you can buy your own deck - RECOMMEND!)The RiskStorming process based on the Testsphere deck for discovering risks (including using emotions!)Beren's websiteBeren's LinkedIn

  29. 7

    We Have to Talk About Crowdstrike! Hot Takes and Quality Debates

The conversation discusses the CrowdStrike outage caused by a faulty kernel-level content update to its Falcon software on Windows. The impact of the outage was widespread, affecting airports, medical professionals, banking, and even news channels.The hosts emphasize the need to understand the complexity of software testing and not jump to conclusions or blame testers. They highlight the importance of continuous improvement, learning from mistakes, and taking ownership of problems.The conversation also touches on the debate around releasing software on Fridays and the need for context-specific decision-making. The conversation explores the impact of software bugs and the importance of quality in software development. It discusses the ability to turn off software in critical situations, the challenges of working on low-level or embedded software, and the need for risk mitigation.The conversation also touches on the response of CrowdStrike to the recent software bug and the potential human impact of such incidents. The concept of quality in software is examined, and the conversation concludes with a discussion on the increasing prevalence of software in various industries.Links & Mentions01:01 - CrowdstrikeWho are they?This is their wikipedia pageThis is their About us page01:17 - What is a kernel?01:56 - What happened?BBC article - Crowdstrike release causes "Mass IT outage affects airlines, hospitals, media and banks"Preliminary Post Incident Review from Crowdstrike05:32 - Dave's Garage explanation of what happened 😙🤌🏾 (Ex-Microsoft Dev)12:46 - Rich's LinkedIn post about jumping to conclusions in the wake of the Crowdstrike issue15:38 - Mark WinteringhamMark's website and blogMark's excellent blog post about "Quality Engineering, Digital Employees and Job Security"Mark's LinkedIn28:00 - Article: Crowdstrike CEO called to Congress37:59 - Crowdstrike updatesTheir blogTheir Remediation and Guidance Hub: Falcon Content Update for Windows HostsTheir Preliminary Post Incident Review (PIR): Content 
Configuration Update Impacting the Falcon Sensor and the Windows Operating System (BSOD)44:47 - Dame Anita FrewWho is Dame Anita Frew?00:00 Introduction and Appreciation for Listeners00:33 - Did anything interesting happen in the last week?01:01 - Crowdstrike (what else?!)01:56 - Vernon & Richard describe what happened with the Crowdstrike shenanigans04:23 Realizing the Global Impact of the Outage06:16 Explaining the Kernel Bug and its Effects07:44 The Process of Getting a Kernel-Based Application08:40 The Kernel's Response to Errors and Risks09:29 The Significance of the Kernel in Software10:35 Updates and News from CrowdStrike11:11 The Importance of Software Testing and Quality12:12 The Fallacy of Blaming Testers and Testing12:46 - Vern reads out Rich's LinkedIn post in the immediate wake of the issue14:29 Recognizing Process Shortcomings and Risks15:38 - The danger of "hot takes"16:24 Taking Ownership and Learning from Mistakes19:15 - Common Crowdstrike Hot Takes: Thou shalt not release on Friday!19:46 Alternative Explanations and Hot Takes21:16 The Danger of Treating Hot Takes as Facts22:20 The Debate Around Releasing on Fridays23:17 Mitigating Risks and Context-Specific Decision-Making24:42 The Need for Continuous Improvement and Learning26:18 - Common Crowdstrike Hot Takes: Clearly this hasn't been tested!26:37 - Common Crowdstrike Hot Takes: Obvious risk mitigation steps they should have taken28:00 - Crowdstrike CEO called to Congress28:45 The Impact of Software Bugs and the Importance of Quality30:54 - What might have happened if Crowdstrike didn't release a critical update?36:22 Mitigating Risks and Turning Off Software in Critical Situations37:59 - Updates directly from Crowdstrike38:39 - Rich's Columbo question43:48 - The miracle of ubiquitous software45:42 The Response of CrowdStrike and the Potential Human Impact46:22 - One Final Hot Take from Rich

  30. 6

    How No-Code Test Tools, Technical Leadership & Glue Work Impact Software Quality

In this conversation, Richard and Vernon discuss the need for manual test cases and manual testing in the future, particularly in the context of the rise of no-code automation and AI.They explore the underlying skills and activities involved in testing, such as critical thinking, analysis, communication, and understanding oracles and heuristics. They also touch on the importance of context and problem-solving in determining the appropriate testing approach. The conversation highlights the value of automation as a means to offload effort and gather information, rather than as an end in itself. In this conversation, Richard and Vernon discuss the importance of automation in testing and how it helps confirm the tester's knowledge of the system.They also explore the concepts of glue work, quiet quitting, and quality engineering. Vernon shares his upcoming talk on setting quality engineers up for success and the challenges they face in organisations. They discuss the positioning of testers and the need for a cultural shift towards quality engineering. They invite listeners to share their thoughts and feedback on the topics discussed.00:00 - ⚽️ Footy (1 min)00:39 - Intro01:54 - How will low-code and no-code automation tools impact the need for manual testers and manual test cases?07:50 - How does Generative AI and/or Large Language Models (LLMs) change the answer?20:39 - Isaac Asimov tangent!21:18 - SPOILER ALERT! 
PLEASE SKIP IF YOU DON'T WANT TO HEAR ABOUT THE INCREDIBLE ISAAC ASIMOV STORY "PROFESSION" (I HOPE BECAUSE YOU'RE GOING TO READ IT YOURSELF!)24:27 - SPOILER END!!!33:07 - Vernon's talk: How We're Setting Up QEs To FailLinks to stuff we mentioned during the pod:07:50 - AI, Generative AI and Large Language Models (LLMs)Useful material:Microsoft course AI for Beginners07:50 - Dr Tariq KingTariq's LinkedIn07:50 - Melissa EadenMelissa's LinkedIn09:52 - The book AI-Assisted Testing by Mark WinteringhamGrab the book hereMark's websiteMark's LinkedIn10:53 - Knowledge workOur previous discussion on Knowledge Work in Episode 1A definition of knowledge work from Wikipedia13:59 - Vernon's Scripting Vs Exploring workshop he delivered at the European Testing Conference (ETC)14:03 - ETC (organised by Maaret Pyhäjärvi) is now sadly on hiatus27:26 - Doug Hoffman and High Volume Automated Testing (HiVAT)An explanation of HiVAT by Cem KanerDoug's LinkedIn33:25 - The Agile Yorkshire meetup organised by Royd BrayshayThe Agile Yorkshire websiteRoyd's LinkedIn33:59 - Cassandra H. LeungCassandra's blogCassandra's LinkedIn34:22 - Tanya ReillyTanya's websiteTanya's Glue Work presentation which you can read or watchTanya's LinkedIn37:22 - "What Is Quiet Quitting?" a BBC News article describing the phenomenon38:23 - Jenna CharltonJenna's LinkedInThe presentation I referred to is called: "Imperfect Agile: Lessons Learned From Embracing The Journey And Ditching The Rules". It was great and I'll see if Jenna is willing and able to share a link to it as soon as I can🙂39:17 - Stuart DayStu's Quality Talks podcast that he co-hosts with Chris HendersonStu's LinkedInChris's Linkedin43:30 - Anna BaikAnna's LinkedInAnna's quote that Maaret shared on LinkedIn

  31. 5

    Mentors, Mindsets, Missions, and Margherita: Vernon & Richard's First Ask Us Anything Session

    In this episode, Vernon and Richard answer questions from their audience. They discuss what they would do if they weren't in software testing, the primary mission of a tester, advice for their younger selves, their stance on pineapple on pizza, and their preferences as trainers, mentors, consultants, and coaches. They explore the meaning of quality and how it can vary depending on the context and individual perspectives, the importance of testing and whether there are situations where testing may not be necessary, and whether a testing mindset is something individuals are born with or can be developed. Finally, they reflect on the advice they would give their younger selves, focusing on confidence, self-kindness, and self-care.

    Links to stuff we mentioned during the pod:
    01:42 - Joëlle Burkhardt: Joëlle's LinkedIn
    06:41 - James Thomas: James' one-liner on his blog, James' LinkedIn
    09:15 - AJ Wilson: AJ's LinkedIn
    14:32 - Kelsey's story about relating the impact of software problems to real humans
    15:10 - Mark Gillott: Mark's LinkedIn
    15:34 - Template letter for banning pineapple on pizza
    17:10 - Olivier Banal: Olivier's LinkedIn
    18:25 - Leigh Rathbone: Leigh's LinkedIn
    22:30 - Deb Sherwood: Deb's LinkedIn
    22:42 - Jerry Weinberg: Jerry's Wikipedia page (his books are highly recommended)
    23:46 - Joep Schuurkes: Joep's blog, Joep's LinkedIn
    28:36 - Mark Tomlinson: Mark's website, Mark's podcast, Mark's LinkedIn
    32:45 - Anna Royzman: Anna's LinkedIn
    38:42 - Melissa Fisher: Melissa's LinkedIn
    38:42 - In case you need to know what the TARDIS is, please read this!
    40:59 - David Goggins (definitely watch/listen to his stuff with headphones on or with no kids around because 🤬): David's cookie jar philosophy, David's website
    43:47 - End of year reflection resources: Dickie Bush & Nicolas Cole's Yearly Review Process

    Chapters:
    00:00 - Intro
    01:42 - Joëlle Burkhardt: What would you do if you weren't in software testing?
    06:41 - James Thomas: You have to summarise what a tester's primary mission for a team is in a snappy one-liner that applies across contexts. What's your one-liner?
    09:15 - AJ Wilson: What advice would older, wiser Richard give to the version of himself that was two years into software testing?
    15:10 - Mark Gillott: Why is pineapple on pizza still not illegal?
    18:25 - Leigh Rathbone: What do you prefer, being a trainer (providing instruction and direction), a mentor, a consultant, or a coach?
    22:30 - Deb Sherwood: What does quality mean for you?
    28:36 - Mark Tomlinson: To test, or not to test.
    32:45 - Anna Royzman: Testing mindset - are you born with it?
    38:42 - Melissa Fisher: If you could jump in a TARDIS and go back in time, what would you tell your younger self?

  32. 4

    Playing in the Workplace and Killer Bugs

    In this episode of the Vernon Richard Show, Vernon and Richard discuss alternative names for the show and reflect on their recent activities. They talk about the Leeds Testing Atelier conference and highlight some of the workshops and talks they attended. They discuss the importance of play in the workplace and the impact of bugs in software development. They also mention the Post Office Horizon scandal and the need to consider the human impact of software failures. The conversation covered various topics including testing chatbots, the importance of accessibility, and user flow mapping. The speakers discussed their experiences with chatbots, highlighting both positive and negative interactions. They also talked about the significance of screen readers and the need for proper web app design to improve accessibility. User flow mapping was mentioned as a useful technique for building a joint team understanding of work tasks. The conversation also touched on the challenges of communication with anxiety and the benefits of being open about mental health in the workplace. Various other topics were also discussed, including reducing anxiety in the workplace, the concept of spoon theory, and the balance between speed and quality in software development. The speakers discussed the importance of building relationships and understanding how to communicate effectively to reduce anxiety. They also explored the idea of spoon theory, which relates to managing energy levels and prioritizing tasks. Lastly, they delved into the challenge of achieving both speed and quality in software development, emphasizing the need for a learning mindset and continuous improvement.

    Description generated by AI.

    Links to stuff we mentioned during the pod:
    00:00 - James Thomas: James' blog, James' LinkedIn
    02:42 - The Testing Atelier Conference: Their website, Their YouTube channel
    05:10 - Jit Gosai: Jit's Leeds Testing Atelier post, Jit's Leeds Testing Atelier talk, Jit's blog, Jit's Quality Engineering Newsletter, Jit's LinkedIn
    05:43 - Elly Gausden: Elly's LinkedIn
    08:03 - The Battleships board game
    09:00 - The Colt Express board game
    11:02 - Clare Norman: Clare's LinkedIn
    11:44 - Lego Serious Play Training
    11:56 - Rich's Lego Automation workshop
    13:33 - Elliot Thurland: Elliot's talk, Elliot's LinkedIn
    13:38 - The Post Office Horizon scandal: The public enquiry website, Article on Wikipedia
    13:53 - James Christie: James' blog, James' body of work regarding the Post Office scandal
    16:30 - BBC Radio 4 series about the Post Office Scandal
    16:35 - 4-part television drama for ITV
    17:48 - The Nightmare Headline game is described in Elisabeth Hendrickson's excellent book Explore It!
    18:43 - Bug Advocacy: The Association for Software Testing (AST) flavour of the course, The Black Box Software Testing training programs in collaboration with Altom and Cem Kaner
    19:20 - Kelsey Hightower: Kelsey's Twitter/X, Kelsey's GitHub
    21:08 - Leah King & Tracy Archibald: Their talk, Leah's LinkedIn, Tracy's LinkedIn
    23:44 - Emily O'Connor: Emily's LinkedIn
    28:20 - Steven Milne: Steven's talk, Steven's LinkedIn, Steven's Twitter/X
    34:54 - Paul Coles: Paul's LinkedIn
    34:54 - Rita Avota: Rita's LinkedIn
    35:20 - How assistive technology is REALLY used: Accessibility Testing with People with Disabilities - Samuel Proulx (we've linked to the specific part of the video that shows the demo, but we'd recommend watching the whole presentation!)
    38:47 - Colin Wren: Colin's talk, Colin's LinkedIn
    45:24 - Melissa Rocks: Melissa's talk, Melissa's LinkedIn, Melissa's Twitter/X
    51:14 - Spoon Theory
    56:03 - Ian Thomas: Ian's LinkedIn
    56:29 - Rich's "allergic reaction" presentation: "Pyramids Are Ancient - Test Automation Strategy"
    59:45 - The Adobo & Avocados show
    01:01:52 - Kiel Goodman: Kiel's LinkedIn
    01:04:45 - The Gartner Hype Cycle
    01:04:59 - Troy Magennis: Troy's website, Troy's LinkedIn

    Chapters:
    00:00 - Banter (new name suggestions from James Thomas)
    01:09 - The actual intro 😅
    01:55 - Footy content warning 🚨
    02:42 - What we intend to cover during this episode
    04:42 - ⚽️ Footy (1 min)
    05:10 - Jit's post about Leeds Testing Atelier

  33. 3

    Conference Rejected, Networking and Joining a New Team

    In this conversation, Richard and Vernon discuss the experience of being rejected for conference talks and the importance of actionable feedback. They emphasize the need for clear and compelling abstracts, as well as the value of networking and building relationships within the industry. They encourage individuals to continue sharing their stories and knowledge through alternative platforms such as YouTube, blogs, and meetups. The conversation also touches on the power of diversity in conference lineups and the importance of providing opportunities for underrepresented voices. On networking, they emphasize the value of nurturing connections and being present in the network, rather than only reaching out when you need something. They also discuss the challenges of onboarding onto a new team and share their experiences and strategies for doing it effectively, highlighting the importance of asking for help, sharing knowledge, and finding the right balance between asking for help and helping yourself. Overall, the conversation emphasises the power of relationships and continuous learning in the testing profession.

    Chapters:
    00:00 - Introduction and Positive Feedback
    09:01 - Creating Clear and Compelling Abstracts
    16:21 - The Importance of Actionable Feedback
    33:10 - The Power of Networking
    45:33 - Finding the Balance: Asking for Help vs. Helping Yourself
    53:06 - The Importance of Continuous Learning

    Links to stuff we mentioned during the pod:
    08:00 - Richard's legendary advice about how to structure your conference proposal
    10:07 - Sarah Deery on LinkedIn
    13:00 - Clear not clever explained by Nicolas Cole
    17:09 - Posts from Lena, Emna, and Jenna
    20:00 - The Mash Program
    20:20 - I couldn't find anything about Speak Easy but you can find the founder Anne-Marie Charrett
    25:21 - Lisa Crispin's website and LinkedIn page
    27:00 - Abby Bangser on LinkedIn
    27:30 - Ash Coleman Hynie on LinkedIn
    31:30 - Vernon Scott II on LinkedIn
    34:00 - Marie Cruz and Lewis Prescott's book
    45:47 - Lisi Hocke's website & Ben Dowen's website
    46:08 - My old pal
    49:55 - The First 90 Days book on Amazon
    56:37 - What's AWS?
    58:50 - What about Google Cloud?
    58:51 - What's Heroku?

  34. 2

    Highlights from PeersCon & Choosing the Right Medium for Meaningful Discussions

    The second episode of the Vernon Richard Show discusses the PeersCon conference and highlights some of the key talks. Topics covered include the concept of minimal shippable risk, the importance of psychological safety in creating a productive work environment, the challenges and learnings of stepping into a leadership role, and the role of DevOps in organisations. We express our appreciation for the speakers and their valuable insights. The conversation covered various themes, including the importance of embracing DevOps and the role of testers in the process. The concept of glue work, which involves technical leadership and ensuring collaboration and success, was discussed. The negative impact of debates on LinkedIn and the need for respectful and curious engagement were highlighted, as was the importance of framing conversations and choosing the right medium for discussions. The idea of thinking like a scientist and valuing getting it right over being right was also explored.

    * Generated by AI.

    Chapters:
    00:00 - Introduction
    00:47 - Our overall thoughts about the inaugural PeersCon event
    03:00 - Heather Reid's presentation "Wait! That's not tested"
    12:05 - Jit Gosai's presentation "Psychological safety – The link between speaking up, complexity and high performing teams"
    16:48 - Al Goodall's presentation "Things I Learned being a new(ish) Quality Manager"
    20:43 - Beth Clarke's presentation "Being the Glue: The Role of DevOps in Testing"
    26:38 - Leigh Rathbone's presentation "The history of testing and why it's important as it feeds our future"
    27:44 - Debates on LinkedIn
    32:05 - Choosing the right medium for sharing ideas and managing your energy
    35:04 - The challenges of online debate
    39:44 - Preachers, Prosecutors, Politicians and thinking like a Scientist

    Links to stuff we mentioned during the pod:
    00:32 - The PeersCon website
    02:28 - The Testing Peers podcast
    03:00 - Heather Reid: Heather's blog, Heather's LinkedIn
    07:55 - Vernon's Quality Coaching Kickstart Guide
    12:05 - Jit Gosai: Jit's blog, Jit's Quality Engineering Newsletter, Jit's LinkedIn
    16:48 - Al Goodall: Al's blog, Al's LinkedIn
    20:43 - Beth Clarke: Beth's blog, Beth's LinkedIn
    22:33 - "First time?" meme from Maaike Brinkhof
    23:39 - Tanya Reilly: Glue Work
    26:38 - Leigh Rathbone: Leigh's LinkedIn
    29:50 - Testing vs Checking
    30:02 - Context Driven Testing
    30:18 - Caleb Crandall: Caleb's great post about mindset, Caleb's LinkedIn
    33:05 - James Bach: James' blog
    39:06 - Vernon explaining Coaching and being 2% right on Deena McKay's Black Tech Unplugged podcast
    39:24 - Adam Grant explaining Preachers, Prosecutors, Politicians and thinking like a Scientist on the Diary of a CEO podcast

  35. 1

    Smoke Testing, Knowledge Work and Testing in Production

    In this episode, Vernon and Richard introduce their new podcast and discuss the concept of smoke testing and knowledge work. They explain that smoke testing is a quick test to determine if something is alive or valid, often used when deploying new builds or testing in production. They also discuss the challenges of testing in production and the importance of health checks. In regards to knowledge work, they define it as cognitive work that involves manipulating and processing information based on expertise. They reflect on the recent arguments and discussions on LinkedIn and emphasize the need for nuance and understanding in these conversations. The conversation explores the challenges and misconceptions surrounding knowledge work in the context of software testing and automation. The speakers discuss how the intellectual effort and expertise involved in testing are often overlooked or undervalued. They highlight the importance of specialised knowledge, innovation, problem-solving, and continuous learning in testing. The conversation also touches on the perception of automated tests and the need to strike a balance between explicit test cases and exploratory testing.

    Yes, we are trying to keep this lean, and the above was generated by AI.

    Chapters:
    00:00 - Introducing the Vernon Richard Show
    04:47 - Exploring the Concept of Smoke Testing
    14:32 - Understanding Knowledge Work
    16:33 - Introduction to Knowledge Work
    17:11 - Defining Knowledge Work
    18:03 - Characteristics of Knowledge Work
    19:22 - Perception of Testing as Knowledge Work
    24:29 - Perception of Programming as Knowledge Work
    27:48 - Challenges in Communicating Testing Work
    31:05 - Automated Tests as Test Case 2.0
    34:46 - Balancing Test Cases and Exploration
    35:56 - Conclusion and Call for Feedback

    Links to stuff we mentioned during the pod:
    16:17 - Same as Ever by Morgan Housel: https://www.amazon.co.uk/Same-Ever-Timeless-Lessons-Opportunity/dp/B0CMQRQS33/
    16:29 - Morgan Housel on DOAC: https://youtu.be/vOvLFT4v4LQ?si=GU2pW-d9thmmFV4E
    27:21 - Huib Schoots: Telling The Testing Story: https://www.huibschoots.nl/storytelling/


ABOUT THIS SHOW

Vernon Richards and Richard Bradshaw discuss all things software testing, quality engineering and life in the world of software development. Plus our own personal journeys navigating our careers and lives.

HOSTED BY

Vernon Richards and Richard Bradshaw

Produced by Richard Bradshaw
