Hubert's Podcast

PODCAST · technology


"Streaming Data Mesh" OReilly. Currently writing his second book "Streaming Databases - Supporting Monolithic Data Engineers hubertdulay.substack.com

  1. 22

    Interview with Kai Waehner

    In this podcast, Ralph and I interview a former colleague of mine, Kai Waehner, who has extensive experience in the data streaming and real-time events space. Kai highlights the top five trends for data streaming with Kafka and Flink: data sharing, data contracts for governance, serverless stream processing, multi-cloud adoption, and the use of generative AI in real-time contexts. We discuss the role of generative AI in providing accurate answers and the importance of real-time data integration for contextual recommendations, using the example of travel and flight cancellations. We also delve into the role of Flink as a stream processor in ensuring the accuracy and freshness of data for semantic searches and generative AI applications.

    We also explore the idea of streaming databases and whether the market is ready to embrace them. We discuss the need for data contracts and data governance to understand the flow of data through systems, as well as the data engineering team's responsibility for creating embeddings. We also discuss integrating large language models with other applications using technologies like Kafka and give examples of how generative AI can be integrated into existing business processes. The interview touches on the concept of a lakehouse and the separation of compute and storage for real-time analytics. Kai also highlights Confluent's approach to building Kafka in a cloud-native way and its focus on the streaming side, while emphasizing the need for stream processing solutions that are accessible to ordinary database users.
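    The flight-cancellation example can be made concrete with a small sketch. This is not from the interview: it assumes a hypothetical "flight-status" topic on a local broker and uses the confluent-kafka Python client to pull the latest event and fold it into a prompt for a language model (the model call itself is left as a stub).

    ```python
    # Hypothetical sketch: enrich an LLM prompt with the freshest flight status from Kafka.
    # Assumes a local broker and a "flight-status" topic with JSON events; not from the interview.
    import json
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",   # assumption: local Kafka-compatible broker
        "group.id": "genai-context",
        "auto.offset.reset": "latest",
    })
    consumer.subscribe(["flight-status"])

    def latest_flight_event(timeout_s: float = 5.0):
        """Poll once for the most recent flight-status event, if any."""
        msg = consumer.poll(timeout_s)
        if msg is None or msg.error():
            return None
        return json.loads(msg.value())

    event = latest_flight_event()
    context = f"Current flight status: {event}" if event else "No live flight data available."
    prompt = (
        "You are a travel assistant. Using only the context below, advise the traveler.\n"
        f"Context: {context}\n"
        "Question: My flight was cancelled - what should I do next?"
    )
    print(prompt)  # in a real pipeline this prompt would be sent to an LLM
    consumer.close()
    ```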

  2. 21

    Interview with Materialize - Consistency

    In this podcast, we interview Arjun Narayan, Frank McSherry, and Nikhil Benesch from Materialize. Ralph and I are writing a book on streaming databases and sought expert insights from Materialize on topics rarely discussed in the field. We begin by exploring the distinction between operational and analytical workloads, highlighting the importance of real-time or near-real-time results for operational tasks. We then delve into the significance of consistency in operational workloads and the challenges of using eventually consistent systems. The guests caution against relying on eventually consistent stores and databases, stressing the value of consistency in domains like payments.

    We also focus on the concept of time in differential dataflow, explaining how revisions provide a better understanding of time in this context. Consistency is highlighted as crucial in temporal joins, especially for mathematical operations and data enrichment. Overall, we emphasize the importance of real-time workloads, consistency, and integration in operational systems.
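    To illustrate why the guests caution against eventual consistency for enrichment, here is a toy sketch (not from the episode): two "tables" are updated together, and a join that reads one before the other has caught up produces an answer that was never true at any point in time.

    ```python
    # Toy illustration (not from the episode): a join over eventually consistent state
    # can observe a combination of facts that never coexisted.
    orders = {"order-1": {"amount": 100, "currency": "USD"}}
    fx_rates = {"USD": 1.00}

    def enriched_total(order_id: str) -> float:
        """Join an order with the FX table to compute a normalized amount."""
        order = orders[order_id]
        return order["amount"] * fx_rates[order["currency"]]

    # A single logical update touches both tables: the order is restated in EUR...
    orders["order-1"] = {"amount": 92, "currency": "EUR"}
    # ...but the FX table on this replica has not caught up yet (no EUR rate),
    # so the "enriched" read either fails or, with a stale default, is simply wrong.
    try:
        print(enriched_total("order-1"))
    except KeyError:
        print("Inconsistent read: order references a currency the replica has not seen yet")

    # Once both changes are applied atomically, the join is correct again.
    fx_rates["EUR"] = 1.09
    print(enriched_total("order-1"))  # a value that was actually true at some point
    ```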

  3. 20

    Filipinos in Tech - Marlo and Ron

    Continuing the Filipinos in Tech series, in this episode I interview Marlo Carrillo and Ron Guerrero, currently at Databricks and previously at Cloudera. We reflect on the significance of the balikbayan box, a symbol of resilience and of remembering one's roots. We share personal and emotional stories of our own families' journeys to America, the struggles they faced, and the sacrifices made for a better life. We also discuss the challenges of growing up Filipino in different communities, feeling different, and trying to find connections. We highlight how Filipinos assimilate into new cultures while holding onto their heritage, and how language can be a marker of identity and assimilation. The episode explores the immigrant experience and the complexities of belonging to multiple worlds.

    In addition to discussing our immigrant experiences, we focus on the impact of technology on the Filipino community. We speculate that more Filipinos will join the technology field in the future, including our own family members. We discuss the preference for social and personal interactions that Filipinos may have, which could partly explain the underrepresentation of Filipinos in the tech industry. We express gratitude toward America and its opportunities while acknowledging the unique charm of the Philippines. We also talk about retirement plans and the possibility of returning to the Philippines, with some of us wanting to visit rather than permanently relocate.

  4. 19

    Interview with Deephaven Founders

    The founders of Deephaven created the company to monetize technology from their previous company and to diversify their capabilities in the capital markets. They found a gap in the market for a data system that met their needs, so they built Deephaven to provide a live data stack that integrates with Kafka and other data sources. Deephaven Community Core is an open-source project: a real-time, time-series, column-oriented analytics engine with relational database features. Queries can operate seamlessly on both historical and real-time data. Deephaven includes an intuitive user experience and visualization tools. It can ingest data from a variety of sources, apply computation and analysis algorithms to that data, and build rich queries, dashboards, and representations with the results.

  5. 18

    Filipinos in Tech - Al Domingo

    Another episode of "Filipinos in Tech." This time I interview Al Domingo, Senior Director of Solutions Engineering, Americas Strategic, at Confluent (and a longtime friend). Al and I share a love of real-time data and music (guitars specifically), but we also share pride in our heritage as Filipinos.

    We reflect on our experiences as Filipinos in the tech industry. We discuss the cultural expectations placed on Filipinos to pursue careers in healthcare and the challenges of being one of the few Filipinos in the workplace. Al also shares his fascination with open-source technology and his time at companies like Confluent and Cloudera. The episode highlights the importance of pursuing one's passions and the impact of cultural influences on career choices. Reflecting on our time at Cloudera, we emphasize the supportive learning environment and the unique perspective of working for an open-source company, and we discuss the strong Filipino community we encountered there, showcasing the impact the company had on our careers and personal connections. The episode concludes with a plan to further explore the experiences of younger Filipinos in the tech industry and to encourage more recognition and representation in the field.

  6. 17

    Filipinos in Tech

    This is the first in a series of podcasts about Filipino Americans in the tech industry, with our guest Keith Oliver Rull sharing his immigration experience and career journey. We discuss the stress and struggles faced by Filipino immigrants, their hard work and sacrifices, and the lack of Filipino representation in the tech industry. The conversation also touches on the impact of colonial mentality on Filipino culture.

  7. 16

    Interview with Peter Corless

    In this podcast interview, I discuss federated systems with Peter Corless, Director of Product Marketing at StarTree. Peter will be presenting at a meetup next Tuesday. Peter explains how federated systems emerged from the evolution of web development and the need to define the backend workings of front-end websites. We also explore the definitions of terms like stack, platform, and cluster in today's environment. The conversation highlights the shift from traditional stacks to clusters of systems and discusses the distinction between federated systems and federated data. We also delve into the challenges and limitations of federated systems and databases, emphasizing the trade-offs between moving the data and moving the processing. We touch on the concept of federated learning in AI and ML and the importance of optimizing data for queries, and we conclude the first part by discussing the need for new language and grammar to describe these complex architectures and the importance of collaboration between data science and data engineering teams.

    In the second part of the podcast, the conversation focuses on the interoperability and limitations of cloud computing systems, specifically AWS, Google Cloud, and Azure. Peter notes that while efforts have been made to make these systems interoperable, users still have to choose between the different ecosystems offered by providers. We then shift to the importance of replication in data systems and the concept of a data divide, emphasizing the need to choose the best database or system for each specific aspect of an application architecture. We also discuss the potential for a stack to span cloud regions and continents, allowing for global consistency and the ability to query data from different locations. Finally, we discuss Apache Pinot, describing it as a complex system that can act as a cluster of clusters. We highlight its ability to assimilate more components and scale out, as well as its powerful tools for organizing and storing data, and we conclude by discussing the expectation of clusters of clusters in modern systems.

  8. 15

    Interview with Aklivity co-founders John and Leonid

    In this podcast interview, the founders of Aklivity, John and Leonid, discuss their journey from working on WebSocket technology at Kaazing to starting Aklivity. Aklivity aims to support event-driven architectures, particularly those based on Kafka. They also highlight the lessons they learned at Kaazing, emphasizing the importance of meeting clients where they are and using familiar tools and APIs.

    We delve into the features and capabilities of Zilla, an open-source project developed by Aklivity. Zilla acts as a proxy for Apache Kafka and supports both source and sink APIs. It allows data to be extracted from various sources, placed into an asynchronous system, and exposed as an external API. The integration with Kafka enables reliable event-driven architectures, while Zilla's Kafka cache provides advanced features such as indexing, filtering, and message sharding.
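    As a rough illustration of "meeting clients where they are," the sketch below shows a plain HTTP client reading and writing Kafka-backed events through a hypothetical REST endpoint that a proxy such as Zilla could expose. The host, path, and payload shape are assumptions, not Zilla's actual configuration.

    ```python
    # Hypothetical sketch: consuming and producing Kafka-backed events over plain HTTP.
    # The endpoint and payload shape are assumptions for illustration only.
    import requests

    BASE_URL = "http://localhost:8080"           # assumed address of the proxy

    resp = requests.get(f"{BASE_URL}/orders", timeout=10)
    resp.raise_for_status()
    for event in resp.json():                    # assume the proxy returns a JSON array of events
        print(event)

    # Producing works the same way: an ordinary POST, no Kafka client library required.
    requests.post(f"{BASE_URL}/orders", json={"order_id": "o-42", "status": "created"}, timeout=10)
    ```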

  9. 14

    Interview with Sai, CEO of PeerDB

    In this podcast interview, Sai, the CEO and co-founder of PeerDB, discusses his background and his motivation for creating the company. He noticed that customers using existing ETL tools for data movement with Postgres often ran into issues and ended up building in-house solutions. This inspired him to start PeerDB, a data movement tool optimized for Postgres. The initial use case for PeerDB is real-time streaming of data from Postgres to data warehouses, queues, and storage engines.

    Sai explains the benefits of this approach, including minimal lag and the ability to easily stream data across different namespaces, topics, and subscriptions. We also explore the differences between PeerDB's real-time CDC replication and streaming query replication features.
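    For context on what Postgres CDC involves under the hood, here is a minimal sketch (not PeerDB's implementation) of reading changes from a logical replication slot with psycopg2 and the built-in test_decoding plugin. The DSN and slot name are assumptions; this is the kind of plumbing a CDC tool automates and optimizes.

    ```python
    # Minimal sketch of Postgres logical decoding - the plumbing that CDC tools automate.
    # Not PeerDB's implementation; DSN and slot name are assumptions.
    import psycopg2
    import psycopg2.extras

    conn = psycopg2.connect(
        "dbname=app user=postgres host=localhost",      # assumed DSN
        connection_factory=psycopg2.extras.LogicalReplicationConnection,
    )
    cur = conn.cursor()

    # Create a slot using the built-in test_decoding output plugin (idempotence not handled here).
    cur.create_replication_slot("demo_slot", output_plugin="test_decoding")
    cur.start_replication(slot_name="demo_slot", decode=True)

    def handle_change(msg):
        """Print each decoded change and acknowledge it so WAL can be recycled."""
        print(msg.payload)
        msg.cursor.send_feedback(flush_lsn=msg.data_start)

    cur.consume_stream(handle_change)   # blocks, streaming INSERT/UPDATE/DELETE records
    ```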

  10. 13

    Interview with Micah, Founder/CEO of Arroyo

    In this podcast, Ralph and I interview Micah Wylde, the founder and creator of Arroyo, a stream processing platform. Micah talks about how current stream processing tools are too difficult for end users, a challenge that motivated him to create Arroyo and make stream processing accessible to everyone. Micah also delves into the importance of Google's Dataflow paper, explaining how Arroyo focuses on timely data processing by using watermarks to handle potentially delayed and out-of-order data, which is how Arroyo differentiates itself from other stream processing solutions.
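    To make the watermark idea concrete, here is a toy Python sketch (not Arroyo code): events arrive out of order, and a tumbling window is only emitted once the watermark, taken here as the maximum event time seen minus an allowed lateness, has passed the end of the window.

    ```python
    # Toy illustration of watermarks (not Arroyo code): close a tumbling window only
    # when the watermark has passed its end, so late/out-of-order events are still counted.
    from collections import defaultdict

    WINDOW = 10          # window size in seconds of event time
    LATENESS = 5         # how far the watermark trails the max event time seen

    events = [(1, "a"), (4, "b"), (12, "c"), (3, "d"), (17, "e"), (25, "f")]  # (event_time, value)

    windows = defaultdict(int)   # window start -> event count
    emitted = set()
    max_event_time = 0

    for event_time, _value in events:
        max_event_time = max(max_event_time, event_time)
        watermark = max_event_time - LATENESS
        windows[(event_time // WINDOW) * WINDOW] += 1

        # Emit every window whose end precedes the watermark and hasn't been emitted yet.
        for start in sorted(windows):
            if start + WINDOW <= watermark and start not in emitted:
                print(f"window [{start}, {start + WINDOW}) -> {windows[start]} events")
                emitted.add(start)
    ```

    Note that the late event at time 3 still lands in the first window because that window is not closed until the watermark reaches 10.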

  11. 12

    Interview with Ben, CEO Popsink

    In this podcast episode, Ralph and I interview Ben, the CEO and founder of Popsink. Ben shares his background, starting his data journey at Amazon and later becoming the head of data at SIXT, a well-known European car rental company. Ben explains that the motivation behind creating Popsink was frustration with current data tools, particularly when it comes to serving operations and the complexities of serving end users. He wanted to go beyond traditional table-based approaches and create a more optimal solution. Popsink aims to provide a familiar experience for users coming from the database world, building on technologies like Flink for transformation, Connect for CDC (change data capture), and Redpanda as the event bus.

    Overall, this episode gives insight into the origin story of Popsink and its goal of revolutionizing the data industry by offering a comprehensive, user-friendly solution for serving operations and delivering data to end users.

  12. 11

    Interview with Timeplus at Current23

    In this podcast, I'm joined by Ting and Jove at Current 23 at the StartupHub. Ting and Jove are two of the founders of Timeplus, a streaming database company. Timeplus recently announced its open-source version, called Proton, under the Apache 2.0 license. It provides a Docker image for easy setup on laptops. While the SaaS version focuses on usability, with a user-friendly interface and community-based deployment, the open-source version aims to collect feedback from developers and provides a secure primary interface with no web UI. Jove explains that the open-source version of Timeplus is compact and can be customized based on user preferences, including different Linux versions and binary downloads. Our discussion highlights the comprehensive nature of the streaming database and its various capabilities, which are accessible to both developers and non-technical users.

  13. 10

    Short Interview with Speedb at Current23

    Speedb (pronounced "speedy bee") is an embedded key-value storage engine that is a fork of RocksDB. At Current 23, I caught up with the founders, who have introduced many features on top of RocksDB and manage a large community of RocksDB and Speedb users. They emphasize the project's open-source nature and the community's acceptance of contributions. Their current work focuses on performance improvements, ease of use, and resource management. Streaming is a focus for Speedb, highlighting the growing market for event streaming databases. Speedb is a drop-in replacement for RocksDB that is faster and more efficient, thanks to new patented LSM-tree technology.

  14. 9

    Interview with Eric Broda

    In this podcast, Ralph (my co-host) and I interview Eric Broda, co-author of the book "Implementing Data Mesh." The focus is on the technical aspects of implementing a data mesh. The interview touches on the book's content, emphasizing its technical, hands-on nature: it provides tools, frameworks, and code to help organizations implement a data mesh successfully. Eric's perspective on data mesh is discussed, highlighting potential conflicts with other viewpoints in the field, particularly in defining domains and data products and how they are consumed. Despite these differences, the interview emphasizes the common goal of solving practical problems rather than getting caught up in pedantic debates. The role of sponsors and Conway's law in shaping data products is also briefly explored, along with the concept of chargeback as a financial consideration.

    Conway's Law

    In simple terms, Conway's law means that the way people and teams interact and collaborate will influence the design and organization of the software they create. For example, if different teams don't communicate well, it can lead to a fragmented and poorly integrated software system. Conversely, effective collaboration can result in a more cohesive and well-designed software product.

  15. 8

    Interview with Robert Zych - The Pinot Guy

    In this podcast episode, I interview Robert Zych, a software engineer at Raft with nearly 20 years of experience in software and data engineering. Robert talks about his engineering background and why he is so passionate about the Apache Pinot project. Robert explains that Apache Pinot is a game changer in real-time analytics, offering scalability, performance, and powerful features like upserts and the StarTree index, capabilities he finds lacking in other data warehouses he has worked with and which make Pinot a compelling choice. Robert also highlights the importance of data quality when using Pinot for real-time analytics, emphasizing the need for accurate and complete data to build solid metrics. As an expert in the Pinot community, Robert shares the top three features he believes are critical for a real-time analytical use case. He also explains how Pinot makes Kafka queryable, and he discusses the ability to ingest data on the fly and add indexes to support query patterns, which is particularly useful for platform teams like the one he worked with at DoorDash.
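    As a rough sketch of what "making Kafka queryable" looks like from the client side, here is a query against a Pinot broker using the pinotdb Python client. The host, port, table name, and columns are assumptions, not from the episode.

    ```python
    # Sketch of querying a real-time Pinot table (backed by a Kafka topic) with the
    # pinotdb DB-API client. Host, port, table, and columns are assumptions.
    from pinotdb import connect

    conn = connect(host="localhost", port=8099, path="/query/sql", scheme="http")
    cur = conn.cursor()

    # In a real-time table, rows ingested from Kafka become queryable within seconds
    # of being produced, so this aggregate reflects very fresh data.
    cur.execute(
        """
        SELECT status, COUNT(*) AS cnt
        FROM orders
        GROUP BY status
        ORDER BY cnt DESC
        LIMIT 10
        """
    )
    for row in cur:
        print(row)
    ```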

  16. 7

    Interview with Timothy Sehn - Founder and CEO at DoltHub

    30-Second Summary: Tim Sehn on Dolt and Data Management

    In this interview, Tim Sehn discusses Dolt, a revolutionary data management system. Dolt challenges traditional data-sharing barriers by proposing that "renting data" should be as easy as renting a computer. Dolt's novel approach involves Prolly trees, enabling lightning-fast data access. With use cases spanning version control for complex spreadsheets to replicating MySQL, Dolt bridges data and versioning seamlessly. It's gaining traction, with significant funding and a growing user base, and is set to focus on edge-case compatibility in the coming months.

    3-Minute Summary: Tim Sehn on Dolt: Transforming Data Management

    Tim Sehn's interview unveils the groundbreaking paradigm of Dolt, a data management platform that challenges conventional limitations. Drawing an intriguing analogy between renting computers and renting data, Sehn proposes a paradigm shift in data sharing. Addressing the historical difficulty of data sharing, he introduces the idea of branching and merging data, akin to version control systems.

    Central to Dolt's architecture are the innovative Noms storage engine and the Prolly tree structure, characterized by nodes with content addresses. Dolt initially focused on data sharing and introduced features like import/export mechanisms. Prolly trees, with their exceptional speed, became instrumental in Dolt's evolution.

    Sehn introduces three primary use cases, or modes, of Dolt. The "Git for data" mode, comparable to Artifactory, caters to intricate spreadsheets and CSV data, facilitating functions like diffing and merging. Another mode replaces MySQL, adding version control to the database and allowing SQL interactions with the data. The third mode replicates MySQL, synchronizing data between different instances. Dolt also tackles machine learning use cases and can potentially serve as a feature store.

    The interview delves into DoltHub's role as the GitHub of data and introduces DoltSQL as well. Sehn emphasizes Dolt's adaptability to Data Mesh concepts and hints at potential compatibility with Postgres in the future.

    As Dolt gains traction, Sehn envisions focusing on compatibility, especially in edge cases, over the next six months. Notably, Dolt's capacity to trace historical changes in data has implications for government, audits, and data governance.

    In essence, Tim Sehn's interview showcases Dolt as a dynamic disruptor in data management, pioneering novel ways of sharing, versioning, and evolving data, ultimately transforming the landscape of data handling and accessibility.
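    Since Dolt speaks the MySQL protocol, its version-control operations can be invoked from any MySQL client. The sketch below is a hypothetical illustration (not from the interview) using mysql-connector-python and Dolt's dolt_add/dolt_commit stored procedures; the connection details, database, and table are assumptions.

    ```python
    # Hypothetical sketch: using Dolt's Git-style versioning through the MySQL protocol.
    # Connection details, database, and table are assumptions for illustration.
    import mysql.connector

    conn = mysql.connector.connect(host="127.0.0.1", port=3306, user="root", database="inventory")
    cur = conn.cursor()

    # Ordinary SQL writes...
    cur.execute("INSERT INTO parts (id, name, qty) VALUES (1, 'widget', 100)")

    # ...followed by a Dolt commit, recording the change in the database's history.
    cur.execute("CALL DOLT_ADD('-A')")
    cur.execute("CALL DOLT_COMMIT('-m', 'Add initial widget inventory')")

    # The commit log is itself queryable, like `git log` for your tables.
    cur.execute("SELECT commit_hash, message FROM dolt_log LIMIT 5")
    for commit_hash, message in cur:
        print(commit_hash, message)

    conn.commit()
    conn.close()
    ```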

  17. 6

    Interview with Epsio CEO Gilad Kleinman

    Our guest today is Gilad Kleinman, CEO of Epsio. Epsio emerged from the world of academia, with over a decade of experience dedicated to one mission: optimizing database queries. With the explosion of data in OLTP (online transaction processing) databases and the increasing complexity of queries, Gilad emphasizes the need to keep things simple.

    But that's not all. Epsio is on a path to revolutionize the database world, embracing the concept of HTAP (hybrid transactional/analytical processing) and recognizing that queries aren't just ad hoc; they're recurring and essential.

    They've found a way to bridge the gap between OLTP and analytical queries without reinventing the wheel. Instead of building a new database from scratch (a task Gilad compares to "changing a car's engine while it's running"), they integrate analytical queries seamlessly into existing databases. How do they do it? By plugging into the OLTP database, leveraging change data capture (CDC), and using foreign and federated tables. They create views that connect OLTP data to the world of analytics, with streaming materialized views persisted in their streaming engine.

    It's all about externalized materialized views, a concept that promises to reshape how we approach data optimization and analytics. In today's episode, we explore their approach, learn about their process, and discover how their external engine, Epsio, operates as a separate process. We also uncover their method of writing results back to the database.
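    A toy sketch of the core idea, maintaining a materialized view incrementally from a change stream instead of re-running the query, is shown below. This is illustrative only and not Epsio's engine; the change-event shape is an assumption.

    ```python
    # Toy illustration of incremental (externalized) materialized-view maintenance.
    # Instead of re-running `SELECT region, SUM(amount) FROM orders GROUP BY region`
    # on every change, apply each CDC event to the previously computed result.
    from collections import defaultdict

    view = defaultdict(float)   # region -> running SUM(amount)

    def apply_change(event: dict) -> None:
        """Apply a single change event; 'op' is 'insert' or 'delete' (an update is both)."""
        sign = 1 if event["op"] == "insert" else -1
        view[event["region"]] += sign * event["amount"]

    changes = [
        {"op": "insert", "region": "EU", "amount": 120.0},
        {"op": "insert", "region": "US", "amount": 80.0},
        {"op": "delete", "region": "EU", "amount": 120.0},   # order cancelled
        {"op": "insert", "region": "EU", "amount": 95.0},
    ]

    for change in changes:
        apply_change(change)

    print(dict(view))   # {'EU': 95.0, 'US': 80.0} - same answer as re-running the full query
    ```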

  18. 5

    Interview with WarpStream Founders

    Our guests today are the co-founders of WarpStream: Richard Artoul and Ryan Worl. WarpStream is a company that's creating waves in the world of data streaming and storage, and its innovative approach to distributed systems has caught the attention of tech enthusiasts everywhere.

    Richie, one of the co-founders, is no stranger to the tech world, with an impressive background that includes work on M3 at Uber and at Chronosphere. A serendipitous meeting with his co-founder, Ryan, at a Percona Live conference laid the foundation for WarpStream. Ryan, the other half of this dynamic duo, had the spark for WarpStream even before their time together at Datadog. Together, they've set out to revolutionize the way we handle data, and today they give us a glimpse into their world.

    WarpStream, a Kafka-compatible system with an entirely new architecture implemented in a single Go binary, has been turning heads. It introduces the concept of "Bring Your Own Cloud," writing directly to S3 buckets and eliminating the need for local disks. Join us as we explore WarpStream's unique deployment model, where agents act as brokers and can be deployed across multiple availability zones. These stateless agents maintain a distributed cache, ensuring efficient data handling.

    In this episode, we also cover costs, features that are currently missing, and the innovative use cases that make WarpStream a game changer for high-volume data streams. And of course, we touch on the future, including the "Bring Your Own Cloud" strategy, their take on the concept of a lakehouse, and how WarpStream aims to bridge the gap between batch and streaming processing.
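    Because WarpStream exposes the Kafka protocol, standard Kafka clients should work unchanged. The sketch below uses the confluent-kafka Python client against a hypothetical local agent address; the address and topic name are assumptions, not from the episode.

    ```python
    # Sketch: producing to a Kafka-compatible endpoint (e.g. a locally running agent)
    # with an ordinary Kafka client. The agent address and topic name are assumptions.
    from confluent_kafka import Producer

    producer = Producer({"bootstrap.servers": "localhost:9092"})  # assumed agent address

    def on_delivery(err, msg):
        """Report whether the broker (agent) acknowledged the write."""
        if err is not None:
            print(f"delivery failed: {err}")
        else:
            print(f"delivered to {msg.topic()} [{msg.partition()}] @ offset {msg.offset()}")

    for i in range(3):
        producer.produce("clickstream", key=str(i), value=f"event-{i}", callback=on_delivery)

    producer.flush()   # wait for acknowledgements before exiting
    ```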

  19. 4

    Interview with Hojjat - CEO of DeltaStream

    Today, we have an exclusive interview with a visionary in the field: Hojjat, the CEO of DeltaStream and one of the original creators of ksqlDB.

    DeltaStream is at the forefront of serverless stream processing. One of the key challenges Hojjat highlights is the management of multiple clusters, which can become a cumbersome task for many organizations. With DeltaStream, the user doesn't have to worry about nodes and servers; it's all about simplifying the process and allowing businesses to focus on what matters most: their data.

    We delve into the distinction between stream processing and streaming databases, uncovering the importance of stateful processing and materialization and drawing parallels with the traditional Postgres model. We also get a sneak peek into Flink SQL, offering a comprehensive view of DeltaStream's capabilities in stream processing and data management.

  20. 3

    Interview with Seth Wiesman (Materialize)

    In this episode, we talk to Seth Wiesman, Director of Field Engineering at Materialize and an Apache Flink committer, about streaming databases. He is not a fan of the term streaming database: "Streaming is just an implementation of a database."

    Enter Materialize. The name itself carries the promise of a new era, and the market is growing hungry to understand the potential of materialized views. While the perception is that streaming is complex, it's worth noting that companies like Uber have invested significantly in technologies like Apache Flink, with a team of roughly 30 engineers dedicated to this endeavor. The market demand for such solutions has never been more apparent, yet curiously, the "streaming engineer" title remains relatively niche.

    As we delve deeper, we uncover the internal distinctions between Flink and Materialize, tracing the evolution of streaming databases. Flink plays the role of the compute engine, while Materialize aims to build out the database itself. The journey from Spark to Flink reveals their unique internals, showcasing the distinction between pipeline architectures and the streaming paradigm. Seth provides an exciting perspective on streaming and streaming databases in general, and on how Materialize differs from Flink and Spark.
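    Since Materialize presents itself as a database (it speaks the Postgres wire protocol), working with it looks like ordinary SQL. The sketch below is a rough illustration, not from the episode: the connection details, the upstream "orders" source, and the schema are assumptions.

    ```python
    # Rough sketch of the "it's just a database" experience: define a materialized view
    # over streaming data and read it with a plain Postgres driver. Connection details,
    # the source table, and the schema are assumptions for illustration.
    import psycopg2

    conn = psycopg2.connect(host="localhost", port=6875, user="materialize", dbname="materialize")
    conn.autocommit = True
    cur = conn.cursor()

    # A standing query: kept incrementally up to date as new order events arrive upstream.
    cur.execute(
        """
        CREATE MATERIALIZED VIEW IF NOT EXISTS revenue_by_region AS
        SELECT region, SUM(amount) AS revenue
        FROM orders
        GROUP BY region
        """
    )

    # Reading the view is an ordinary, fast SELECT - no stream-processing code in sight.
    cur.execute("SELECT region, revenue FROM revenue_by_region ORDER BY revenue DESC")
    for region, revenue in cur.fetchall():
        print(region, revenue)
    ```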

  21. 2

    Interview with Mike and Tun at Quix

    In this enlightening podcast episode, we delve into the world of stream processing and cloud-native solutions with two key figures from Quix: Michael Rosam, founder and CEO, and Tun Shwe, VP of Data and DevRel.

    Quix Streams is a cloud-native solution that harnesses the agility of Python for stream processing, all wrapped up in a lightweight library. The conversation with Michael and Tun sheds light on Quix's journey, its motivations, and the pivotal role it plays in the data processing landscape. Michael shares his motivation for founding Quix, stemming from his experience in F1 racing, which demanded true real-time capabilities. Quix's goal: take humans out of the loop and embrace automation, all while ensuring true real-time data processing. Tun, holding dual VP roles, provides insight into how those roles intersect: education is the common thread, and the overlap arises from content creation and the shared mission of selling the dream.

    But why Python? The founders' choice to go open source in February of this year is rooted in a belief in the community and a desire to empower users to explore the full potential of stream processing. Do data scientists need to be well versed in streaming? Michael and Tun agree that a basic understanding is crucial, particularly of state management and checkpointing.

    Looking ahead, the future of Quix promises exciting developments, including streaming data frames reminiscent of pandas and compatibility with scikit-learn. The podcast concludes with insights into where you can find Quix and what's on the horizon. Don't miss this captivating episode as Quix accelerates through the data processing landscape like an F1 race, with the major launch of Quix V2 on the horizon.
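    As a taste of the "streaming data frames" idea, here is a minimal sketch assuming the Quix Streams 2.x Python API (which was still on the horizon at the time of this episode). The broker address and topic names are assumptions, not from the conversation.

    ```python
    # Minimal sketch assuming the Quix Streams 2.x Python API; broker address and
    # topic names are assumptions. A streaming dataframe is transformed row by row.
    from quixstreams import Application

    app = Application(broker_address="localhost:9092", consumer_group="demo")

    input_topic = app.topic("sensor-readings", value_deserializer="json")
    output_topic = app.topic("sensor-alerts", value_serializer="json")

    sdf = app.dataframe(input_topic)                       # pandas-like streaming dataframe
    sdf = sdf.apply(lambda row: {**row, "hot": row["temperature"] > 90})
    sdf = sdf[sdf["hot"]]                                  # keep only the alert-worthy rows
    sdf = sdf.to_topic(output_topic)

    if __name__ == "__main__":
        app.run(sdf)                                       # consume, transform, and produce forever
    ```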

  22. 1

    Interview with Feldera Founders and DBSP

    Ralph M. Debusmann and Hubert Dulay interview Mihai Budiu and Leonid Ryzhyk from Feldera.

    The Feldera Continuous Analytics Platform, or Feldera Platform for short, is a fast computational engine and associated components for continuous analytics over data in motion. Feldera Platform allows users to configure data pipelines as standing SQL programs (DDL) that are continuously evaluated as new data arrives from various sources. What makes Feldera's engine unique is its ability to evaluate arbitrary SQL programs incrementally, making it more expressive and performant than existing alternatives such as streaming engines.

    This recording starts 10 minutes into the interview. Please enjoy.
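    DBSP, the theory behind Feldera's engine, represents each batch of changes as a weighted set (a Z-set: +1 for an insert, -1 for a delete) and pushes those deltas through the standing query. The toy sketch below is illustrative only, not Feldera code, and shows an incremental COUNT-per-key maintained from such deltas.

    ```python
    # Toy, DBSP-flavored illustration (not Feldera code): changes arrive as Z-sets -
    # records weighted +1 for inserts and -1 for deletes - and a standing
    # "SELECT color, COUNT(*) ... GROUP BY color" is updated from the deltas alone.
    from collections import Counter

    counts = Counter()          # the maintained output of the standing query

    def apply_delta(delta):
        """Fold one Z-set of (record, weight) pairs into the running counts."""
        for record, weight in delta:
            counts[record["color"]] += weight
            if counts[record["color"]] == 0:
                del counts[record["color"]]

    apply_delta([({"id": 1, "color": "red"}, +1), ({"id": 2, "color": "blue"}, +1)])
    apply_delta([({"id": 1, "color": "red"}, -1),                     # row deleted
                 ({"id": 3, "color": "blue"}, +1)])

    print(dict(counts))   # {'blue': 2} - same result as rerunning the query from scratch
    ```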


ABOUT THIS SHOW

"Streaming Data Mesh" OReilly. Currently writing his second book "Streaming Databases - Supporting Monolithic Data Engineers hubertdulay.substack.com

HOSTED BY

Hubert Dulay

CATEGORIES

Technology