PodParley
Voices of VR

PODCAST · technology


Since May 2014, Kent Bye has published over 1500 Voices of VR podcast interviews featuring the pioneering artists, storytellers, and technologists driving the resurgence of virtual & augmented reality. He's an oral historian, experiential journalist, & aspiring philosopher, helping to define the patterns of immersive storytelling, experiential design, ethical frameworks, & the ultimate potential of XR.

  1.

    #1713: Lincoln Center for Performing Arts Immersive Programming Overview with Jordana Leigh

    The Lincoln Center for the Performing Arts has been staging a variety of different types of immersive experiences as a part of their interdisciplinary programming, and I had a chance to catch up with lead immersive programmer Jordana Leigh at Venice Immersive in order to get an overview of what they've been showing, the XR experiences they've commissioned, how audiences connect with each other around the unique transportive affordances of the experiences presented there, and generally how they're using XR to bring new and diverse communities together in New York City. We also talked about their Lincoln Center Collider Fellowship for XR artists to advance their artistic practice through a range of either open-ended R&D or time and space for innovative experimentation. Leigh was scheduled to present at the IDFA DocLab R&D Summit, but had some travel delays. Hopefully this conversation helps to explain the many ways that the Lincoln Center for the Performing Arts is totally in alignment with some of the broader themes of providing opportunities to de-isolate and revitalize civic society that are covered extensively in this report. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  2.

    #1713: CIIIC’s €200 Million in Public Funding: The Creative Industries Immersive Impact Coalition

    The CIIIC is the Creative Industries Immersive Impact Coalition based out of the Netherlands, which will be spending about €200 million in public funding over the next five years. It is a really exciting development in Europe that is promoting the development of Immersive Experiences (which they abbreviate IX). They will be cultivating knowledge and methods of experiential design, developing immersive talent and human capital, cultivating the immersive ecosystem and facilities, catalyzing innovation via various projects, and creating an overall synergy across all of their efforts. For a comprehensive recap of CIIIC and what they're doing, also be sure to check out the CIIIC section starting on page 62 of the extensive 121-page IDFA DocLab Think Tank Report that I wrote, which was recently published on April 21, 2026. I provide a bit more context to this report in the intro and outro of this episode, which is an oral history interview with CIIIC Program Director Heleen Rouw at UnitedXR in December. This conversation forms the basis for that section, but also has some additional updates on their various efforts, including:
    Artistic & Design Research for Immersive Experiences (ADRIE) (5 projects)
    Phase I of the Innovation Impact Challenge: IX in Urban Development (17 projects)
    Phase II of the Innovation Impact Challenge: IX in Urban Development (10 projects)
    The "Shared Realities" consortium is part of the initial ADRIE cohort, which includes a collaboration between IDFA DocLab, Amsterdam University of Applied Sciences, MIT Open Documentary Lab, PHI, ARTIS Planetarium, and a number of XR studios based in the Netherlands, including POPKRAFT, Polymorf, Studio Biarritz, WeMakeVR, ALLLESSS (Ali Eslami), Ado Ato Pictures (Tamara Shogaolu), and Cassette (Nu:Reality).
    Be sure to check out episode #1697 to hear more about how the Shared Realities initiative will be facilitating experiential designers and artists collaborating with researchers to see if immersive art can help to revitalize civic society. This interview with Rouw provides an overview of the CIIIC, how they're defining "immersive" to be much broader than any single technology, and why they think immersive will be the next big wave of innovation that can help promote public interest values. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  3.

    #1712: Preview of SXSW XR Experience 2026 with Blake Kammerdiener

    I interviewed SXSW XR Experience 2026 curator Blake Kammerdiener about this year's selection, and how immersive artists are using Generative AI in a series of different projects. Below is the selection (ordered from longest to shortest). This year's program runs from 11am to 6pm CDT, Sunday, March 15 through Tuesday, March 17, 2026.
    XR Experience Competition
    Escape The Internet (Part 1) (50 min)
    Inter(mediate) Spaces (45 min)
    Winterover (45 min)
    Fabula Rasa: Dead Man Talking (30 min)
    Frustrain: Trainman (30 min)
    The Forgotten War (30 min)
    Watsonville (30 min)
    Fillos do Vento: A Rapa (28 min)
    Crafting Crimes: The Mona Lisa Heist (20 min)
    Love Bird (20 min)
    The Baby Factory is Closed (20 min)
    Lionia Is Leaving (18 min)
    Body Proxy (15 min)
    Cycle (15 min)
    The Great Dictator: A participatory AI installation about power, rhetoric, and memory (15 min)
    XR Experience Spotlight
    The Clouds Are Two Thousand Meters Up (62 min)
    The Great Orator (50 min)
    Lesbian Simulator (40 min)
    A Long Goodbye (35 min)
    Dark Rooms (35 min)
    Lacuna (34 min)
    The Dollhouse (24 min)
    Reality Looks Back (21 min)
    Insider Outsider (12 min)
    loss·y (10 min)
    Lost Love Hotline (10 min)
    Out of Nowhere (10 min)
    Spectacular: The Art of Jonathan Yeo in Augmented Reality (10 min)
    Ascended Intelligence (9 min)
    MIT Open Documentary Lab’s AR and Public Space Artist Collective
    Layers of Place: Austin [90 min total]
    ORYZA: Healing Ground (15 min)
    The Founders Pillars (15 min)
    Open Access Memorial (15 min)
    Paper Boat (15 min)
    Humble Monuments (15 min)
    Moving Memory (15 min)
    This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  4.

    #1711: Mission Responsible 3: Discussion on AI Ethics with 6 Winners of Polys Ombudsperson of the Year

    This is the panel discussion of Mission Responsible 3 featuring the winners of the Polys Ombudsperson of the Year, including Kent Bye (2020), Avi Bar-Zeev (2021), Brittan Heller (2022), Micaela Mantegna (2023), Ingrid Kopp (2024), and Nonny de la Peña (2025). Introduced by Renard T. Jenkins. The big topic this year was AI, but there was lots to say about XR as well. Here are some links that I mentioned in the introduction that were referenced within the show:
    "Freedom of Expression in Next-Generation Computing" by Brittan Heller
    XR Guild's Principles
    US sanctioning individual ICC judges for decisions they don't like
    The Polys 6th Annual Immersive Awards takes place next weekend on Sunday, March 22, 2026 at the SVA Theatre in New York City. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  5.

    #1710: When Integration Becomes Subordination: Big Tech Parallels in Carney’s Davos Speech & Untethering from the AI Big Brother

    Canada’s Prime Minister Mark Carney gave a rousing speech at the World Economic Forum on January 20, 2026 about the rupture of the rules-based order of the globalized economy, and he emphasized the need to build new coalitions to sustain the pressure coming from the United States' emerging authoritarianism. Carney said, “Great powers have begun using economic integration as weapons, tariffs as leverage, financial infrastructure as coercion, supply chains as vulnerabilities to be exploited. You cannot live within the lie of mutual benefit through integration, when integration becomes the source of your subordination.” Just as globalized economic integrations are being weaponized by the United States, Big Tech's integrations woven throughout our lives will continue to become the source of our own subordination, especially as surveillance capitalism heads towards its logical conclusion of an all-pervasive AI Big Brother, perhaps eventually explicitly tied into authoritarian governments. The AI Big Brother has already started within the context of private companies, but under the outdated third-party doctrine of the Fourth Amendment, any data given to a third party has "no legitimate 'expectation of privacy'." From UNITED STATES v. MILLER (1976): "The Fourth Amendment does not prohibit the obtaining of information revealed to a third party and conveyed by him to Government authorities." So the US government can request almost any data shared with a third party without a warrant, and given Big Tech's cozy relationship with a democratically-backsliding US government, who knows what kinds of backroom deals are being made to automate data sharing. We're already in an era where almost all data given to a third party is not considered to be private, and you can start to see some early indications of how this can go wrong in Taylor Lorenz's interview with 404 Media's Joseph Cox about ICE's surveillance technologies.
    It seems likely that we are entering into the very early phases of Orwell's worst nightmare of a 1984 surveillance state powered by Big Tech's AI. In this op-ed podcast episode, I connect some dots between Carney’s Davos speech about the hegemonic forces in the geopolitical sphere and the parallels with Big Tech's push towards "contextually-aware AI," which is just an always-on AI that is surveillance capitalism on steroids. Carney's speech provides a lot of insights for how Canada is navigating this new reality where the rules-based order on the international stage seems to be dissolving. One of his deepest insights is to simply name the truth, and to describe precisely what is happening. He refers to a powerful story from Vaclav Havel's The Power of the Powerless where shopkeepers eventually "took their [propaganda] signs down" during communist rule after they were no longer willing to live within a lie. Carney says: "The system's power comes not from its truth, but from everyone's willingness to perform as if it were true, and its fragility comes from the same source. When even one person stops performing, when the greengrocer removes his sign, the illusion begins to crack. Friends, it is time for companies and countries to take their signs down." Taking down metaphoric signs breaks the spell of the collective performative ritual that sustains the power of an authoritarian regime. Taking a sign down is also the embodiment of the first lesson of Timothy Snyder's On Tyranny, which is "Do Not Obey in Advance." This lesson is certainly easier said than done, and I've been surprised by how pervasive and powerful the chilling effects to remain silent can be. I find myself self-censoring, going dark on social media, and just generally not speaking the full truth as I see it.
    So this episode is a step in that direction of trying to name things as I see them, but also drawing the parallels between these broader political contexts and how they're collapsing into the technological contexts. As a society, one sign we've been holding up is that we've collectively been willing to mortgage our privacy by giving our data to Big Tech because it allows us to get access to software and services for free. But as the line between Big Tech and authoritarian governments continues to blur, I expect to see more people start "taking down their signs" of tolerating surveillance capitalism by tapering down or cutting off their relationship completely. I'm already seeing some signs of this resistance to Big Tech starting to happen with the resurgence of dumb phones to counter smartphone addiction, quitting social media to reduce the algorithmic filter bubbles that curate our realities, and implementing a digital detox to unplug from the Internet in favor of more embodied, immersive, and experiential entertainment. We're starving for authenticity as social media networks are flooded with AI slop because it makes numbers go up, yet it is a profoundly dehumanizing experience that feels like the logical extreme of novelty-optimized AI dopamine machines leading us to an Idiocracy dystopian future. With the democratic backsliding in the US, the Trump Administration has been following the "seven basic tactics in the pursuit of power" as detailed in The Authoritarian Playbook (2024) as they politicize independent institutions, spread disinformation, pursue the unitary executive theory at the expense of checks and balances, quash criticism and dissent, scapegoat vulnerable and marginalized communities, work to corrupt elections, and stoke violence with their Operation Metro Surge. I'm seeing the abandonment of due process, and I've lost all faith in the enforcement of the rule of law as the Department of Justice has been weaponized.
    This abandonment of the rules-based order of the rule of law has a profoundly destabilizing psychological impact, and other countries have also been reckoning with it. In response, Canada's Prime Minister Mark Carney has called for new coalitions of the middle powers, given that the United States has chosen to abandon the rules-based order in favor of coercive negotiating techniques. The US is leveraging its asymmetry of power to turn all relationships into a transaction that can be won or lost. Canada is unwilling to bend the knee to these authoritarian ways, and is making the call to arms for all middle powers to unite in order to resist the power of these hegemonic forces. There is a real strength in collective resistance, and so Canada is taking a hybrid approach towards coalition building. Their approach is primarily led by collaborating with countries that have shared values, but they also recognize the need for more pragmatic, ad-hoc, "variable geometry" coalitions based upon mutual benefit or interest. Just as countries are thinking about how to maintain their sovereignty, we are all entering into a new era that has moved beyond a rules-based order. So people around the world are also thinking about how they can maintain their own sovereignty in the context of Big Tech's push towards an all-pervasive AI surveillance machine. One recent example of Big Tech's surveillance aspirations comes from an internal Meta memo shared with the New York Times arguing that the political chaos in the world right now makes it the perfect time to push out controversial tech that would normally get a lot of blowback.
    They're considering launching facial recognition features for their Ray-Ban Meta AI glasses as they callously characterize this moment as a "dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.” This type of realpolitik moral reasoning follows the logic of surveillance capitalism, which completely ignores the broader potential cultural and legal impact of their technologies in favor of their short-term gain. I've previously written about how Meta's dream of contextually-aware AI is a dystopian privacy nightmare within the Proceedings of Stanford's Existing Law and Extended Reality Symposium. The always-on and persistent sensing from face-mounted cameras embedded into glasses is the next frontier for Meta, but this persistent capturing from wearable technology across all contextual domains will start to change the legal definition of our "reasonable expectation of privacy." This is because part of the legal test laid out by Harlan's concurring opinion in KATZ v. UNITED STATES (1967) is what "society is prepared to recognize as reasonable." In other words, whatever the culture accepts as the boundary between public and private contexts becomes a part of the legal test for what the government considers to be protected by the Fourth Amendment. So always-on AI surveillance from wearable face cameras will inevitably change these legal definitions and weaken everyone's Fourth Amendment protections. Even if all of the raw data remained on these devices, inferences made from them would not be protected if they're shared with a third party. Imagine that a noisy raw camera feed from Meta's AI glasses is processed and produces some incorrect inferences from computer vision algorithms or hallucinations from a large language model; these incorrect inferences could end up in the hands of a government and be used as evidence against you in a court of law.
    The film Coded Bias does a great job of elaborating how marginalized communities have been harmed by biased algorithms that have been integrated into automatic decision-making in the context of policing, housing, employment, etc. Carney's roadmap has many lessons that we can also apply to our own encounters with this new reality. He named the truth of this situation and is taking Canada's metaphoric sign down, signaling that they are no longer willing to live within a lie. Canada is untethering itself from its relationship with the United States as the US takes an authoritarian turn into dem…

  6.

    #1709: Ian Hamilton on Getting Fired from UploadVR & Concerns on AI Authorship in News

    On Wednesday, January 28, 2026, Ian Hamilton announced on Bluesky that "I've been fired from UploadVR." He was the editor-in-chief at UploadVR, and he wrote a Substack post titled "Ian is Typing" on January 30th detailing how his co-workers were pushing to do a test of a "clearly disclosed AI author for UploadVR," and that he had three specific concerns: that the test be brief, that readers have the ability to turn off and hide all AI-authored posts, and that human freelancers have the right of first refusal. Hamilton claims to have tried to raise these concerns in Slack, but the experiment was going to proceed regardless. He writes, "Unable to shift the direction of my colleagues and out of options to affect what was coming, I stepped out of Slack and sent a final email to them on Wednesday morning with a number of my contacts in the industry copied, raising some of these concerns. Not long after, I was called by my boss and fired." I spoke with Hamilton last Friday after his Substack post in order to get more context on what led to his departure. Hamilton claims that UploadVR Editor & Developer David Heaney and UploadVR Operations Manager Kyle Riesenbeck were behind the push to test this clearly disclosed AI author on UploadVR, and that ultimately the proposed test was a business decision made by Riesenbeck. It was a decision that Hamilton ultimately disagreed with, and he cites it as the primary factor behind the behavior that ultimately led to his firing. (UPDATE Feb 5, 2026: It is worth noting here that UploadVR has yet to run this AI bot author test, but it was the proposed test that was the catalyst for Hamilton’s behavior.) The specific reasons and circumstances around Hamilton's firing are publicly disputed by Heaney, who reacted on Twitter after Hamilton's Substack post went live by saying, "It is indeed only one side of the story.
    And an incomplete telling of it, with key omissions and wording choices that serve to paint a misleading picture." In another post Heaney says, "I can't get into it more at this point for obvious reasons, but don't believe everything you read, especially a single side of a complex story." During our interview, I asked Hamilton for his reaction to Heaney's claims that he's being misleading, and he did provide more context in our conversation about what led up to his firing. Ultimately, it does sound like the proposed AI bot author test was the primary catalyst for Hamilton, and this disagreement may have led to other behaviors and reactions that could also be reasonably cited for why he was fired. UploadVR may have differing opinions as to what happened, but no one from UploadVR has made public comments beyond what Heaney has said on Twitter. I have extended invitations to both Riesenbeck and Heaney to come onto the podcast for a broader discussion about AI, but nothing had been confirmed by the time of publication.
    My Personal Take on AI: Technically, Philosophically, Legally, and Culturally
    Public discourse around AI has split into a binary of Pro-AI vs Anti-AI, and while my personal views cannot be easily collapsed into one side or the other, I'd usually take the Anti-AI side of a debate if given the opportunity. I do think some form of AI is here to stay and will be around for a long time, but right now there is a lot of hype and deluded thinking on the topic. I see AI as a technology that consolidates wealth and power, and so a primary question worth asking is “Whose power and wealth is being consolidated?” Karen Hao's Empire of AI elaborates on how the past patterns of colonialism are playing out within the context of data and the field of AI, as well as how scaling with more compute power has been the primary mode of innovation in AI; Gary Marcus has been pushing against the "Scale is All You Need" theory for many years now.
    Technically speaking, I'm more of a skeptic in the short term around LLMs, along the lines of the Stochastic Parrots critique that is elaborated upon by Emily M. Bender and Alex Hanna in The AI Con, but also Yann LeCun's call for more sensory grounding, as well as Gary Marcus' calls for more neurosymbolic cognitive architectures. AI has always been a marketing term, as elaborated by Dr. Jonnie Penn’s Ph.D. thesis on "Inventing Intelligence: On the History of Complex Information Processing and Artificial Intelligence in the United States in the Mid-Twentieth Century." My perspective on AI has been informed by 122 unpublished interviews with AI researchers, many of whom also cite how the empirical results often outpace the theoretical results (i.e., there are often benchmark improvements without full knowledge of the theoretical foundations behind them, resulting in plateaus rather than monotonic progress). I've also spoken to over 100 XR artists, storytellers, and engineers about AI on the Voices of VR podcast over the past decade. When the context is bounded, and the data are gathered while being in right relationship, there can be some real utility. But there are also many gaps and ways that LLMs cause harm to marginalized communities. See the film Coded Bias for more details on that front.
    Philosophically speaking, Process Philosophy has had a big influence on me, so check out my conversation with Whitehead scholar Matt Segall on AI. Timnit Gebru and Émile P. Torres' paper on the TESCREAL bundle has also been a key influence that deconstructs the influence of philosophies like Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism on AI research. I don't think AI is conscious, but I lean towards Whitehead's panexperientialism, which sees experience as going all the way down.
    This perspective also helps to differentiate humans from machines by looking at things like emotions, meaning, value, intention, context, and relationships, all of which can easily get collapsed if only looking through the lens of “intelligence.” I'm curious about Data Science as Neoplatonism ideas, and Michael Levin’s work on ingressing minds (influenced by Platonic forms and Whitehead's eternal objects) and his general calls for SUTI: the Search for Unconventional Terrestrial Intelligence. I also love Timothy E. Eastman’s Logoi Framework as elaborated in his book Untying the Gordian Knot: Process, Reality, Context. He highlights the triadic nature of reality as input-output-context, with the logic of actualizations being Boolean logic and the logic of potentials being non-Boolean logic, which is something that Hans Primas elaborates on in Knowledge and Time. So AI needs to account for the pluralism of non-Boolean realities, but it often collapses them into a singular formal system that flattens situated knowledges. Also see James Bradley’s “Beyond Hermeneutics: Peirce’s Semiology as a Trinitarian Metaphysics of Communication,” which elaborates on Charles Sanders Peirce’s semiotics as a triadic system that includes a sign, object, and interpretant, whereas LLMs take a nominalist, dyadic approach that collapses the deeper meaning or interpretation (see computational linguist Bender’s elaboration of this argument in The AI Con). Also see Michèle Friend’s Pluralism in Mathematics: A New Position in Philosophy of Mathematics, as it applies Gödel's Incompleteness to the foundations of mathematics itself and points out the limits of Boolean logic and the need for an overall paraconsistent logic. AI researcher Ben Goertzel wrote a paper on "Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation." Here's a talk I gave with some of my preliminary thoughts on AI.
    I also have a lot more thoughts and resources in my write-up from when I argued against AI in a Socratic debate at AWE 2025. Also check out this recent philosophical talk that digs into some of the philosophical foundations of my experiential design framework and Whitehead's panexperientialism.
    Legally speaking, I generally advocate for a relational approach as well as open source, decentralized approaches, but I also see that there's a need for some legal checks and balances around privacy. I elaborate on these in a paper titled "Privacy Pitfalls of Contextually-Aware AI: Sensemaking Frameworks for Context and XR Data Qualities" that was written for Stanford's Cyber Policy Center's "Existing Law and Extended Reality" Symposium. But there is no sign of any new comprehensive federal privacy law in the US, which is where these major Big Tech companies are located. So the privacy implications of contextually-aware AI remain extremely fraught, especially with the trend of democratic backsliding in the US and beyond.
    Culturally speaking, I find the forced integrations of AI into many layers of UX / UI to be largely non-consensual; I'm left with the feeling that AI is being shoved down my throat when I didn't ask for it, and I usually avoid using it whenever I can. I don't want AI to write for me, because writing is the process of thinking for me, and I'd rather think for myself (see the “thinking as craft” argument from Hanna in The AI Con). I find the experience of AI slop videos, photos, and text to be profoundly dehumanizing, and it makes me want to retreat from any social media space where AI slop is flooding the feeds. I hate the experience of having to question the provenance and legitimacy of everything I see and hear, and the AI-driven misinformation campaigns are a blight on democracy.
    I really resonate with the view that AI is the Aesthetics of Fascism, considering the extent to which authoritarian leaders are using AI slop to push their democratic backsliding agendas. So my perspectives on AI don't fit neatly into a single category, but I do resonate with some of the Anti-AI, Neo-Luddite sentiment. I'd point to Emily M. Bender and Alex Hanna’s The AI Con, Karen Hao’s Empire of AI, Shoshana Zuboff’s Age of Surveillance Capitalism,...

  7.

    #1708: How Process Philosophy Centers Experience. A Prismatic Tour of “Whitehead’s Universe” by Andrew M. Davis

    I interviewed Andrew M. Davis about his forthcoming book titled Whitehead's Universe: A Prismatic Introduction on Thursday, December 4, 2025. It's absolutely the best introduction to Alfred North Whitehead's work in Process Philosophy, and I can't recommend it enough. The worst part is that it isn't set to release until sometime next year, but you can get an early look at some drafts if you sign up for some of Davis' upcoming Whitehead's Universe courses that are being offered in January and February 2026. Whitehead's Process Philosophy places human experience at the center of its philosophy, and therefore focuses on the dynamic flux and flow of experience as we inherit past memories, anticipate the future, decide what actions to take moment to moment, and synthesize it all through our feelings, which help to solidify our core memories through the peak emotional experiences of our lives. Davis helps us navigate through Whitehead's neologisms, which attempt to rewire our brains to think about the nature of reality in a completely new and different way. The subject-predicate and noun-emphasized, object-oriented structure of the English language isn't doing us any favors, but thankfully the immersive experiences offered through immersive art and entertainment are very much oriented towards the dynamic flux of our experience, through what is theorized as presence theory in virtual reality. I have my own elemental theory of presence, and in this conversation with Davis I discovered that there's a lot of resonance with how Whitehead reconceptualizes the nature of reality into a more verb-based event ontology.
    This is my fifth deep dive on Process Philosophy, so be sure to check out my other conversations here:
    #965: Primer on Whitehead’s Process Philosophy as a Paradigm Shift & Foundation for Experiential Design
    #1147: Thirteen Philosophers on the Problem of Opposites: Grant Maxwell’s Integration & Difference Book & Archetypal Approaches to Character
    #1183: From Kant to an Organic View of Reality: Scaffolding a Process-Relational Paradigm Shift with Whitehead Scholar Matt Segall
    #1568: A Process-Relational Philosophy View on AI, Intelligence, & Consciousness with Matt Segall
    #1708: How Process Philosophy Centers Experience. A Prismatic Tour of “Whitehead’s Universe” by Andrew M. Davis
    This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  8.

    #1707: War Journalist Turns to Immersive Art to Shatter Our Numbness Through Feeling. “In 36,000 Ways” is a Revelatory Embodied Poem by Karim Ben Khelifa

    I interviewed Karim Ben Khelifa about In 36,000 Ways on Sunday, November 16, 2025 at IDFA DocLab in Amsterdam, Netherlands. Here are the 26 episodes and more than 24 hours of my IDFA DocLab 2025 coverage:
    #1682: Preview of IDFA DocLab's Selection of "Perception Art" & Immersive Stories
    #1683: "Feedback VR Antifuturist Musical" Wins Immersive Non-Fiction Award at IDFA DocLab 2025
    #1684: Playable Essay “individualism in the dead-internet age” Recaps Enshittification Against Indie Devs
    #1685: Immersive Liner Notes of Hip-Hop Album "AÜTO/MÖTOR" Uses three.js & HTML 1.0 Aesthetics
    #1686: 15 Years of Hand-Written Letters about the Internet in "Life Needs Internet 2010–2025" Installation
    #1687: Text-Based Adventure Theatrical Performance "MILKMAN ZERO: The First Delivery"
    #1688: Hacking Gamer Hardware and Stereotypes in "Gamer Keyboard Wall Piece #2"
    #1689: Making Post-Human Babies in "IVF-X" to Catalyze Philosophical Reflections on Reproduction
    #1690: Asking Philosophical Questions on AI in "The Oracle: Ritual for the Future" with Poetic Immersive Performance
    #1691: A Call for Human Friction Over AI Slop in "Deep Soup" Participatory Film Based on "Designing Friction" Manifesto
    #1692: Playful Remixing of Scanned Animal Body Parts in "We Are Dead Animals"
    #1693: A Survey of the Indie Immersive Dome Community Trends with "The Rift" Directors & 4Pi Productions
    #1694: Reimagining Amsterdam's Red Light District in "Unimaginable Red" Open World Game
    #1695: "Another Place" Takes a Liminal Architectural Stroll into Memories of Another Time and Place
    #1696: Speculative Architecture Meets the Immersive Dome in Sergey Prokofyev's "Eternal Habitat"
    #1697: Can Immersive Art Revitalize Civic Engagement? Netherlands CIIIC Funds "Shared Reality" Initiative
    #1698: Immersive Exhibition Lessons Learned from Undershed's First Year with Amy Rose
    #1699: Announcing "The Institute of Immersive Preservation" with Avinash Changa & His XR Virtual Machine Wizardry
    #1700: Update on Co-Creating XR Distribution Field Initiative & Toolkits from MIT Open DocLab
    #1701: Public Art Installation "Nothing to See Here" Uses Perception Art to Challenge Our Notions of Reality
    #1702: "Coded Black" Creates Experiential Black History by Combining Horror Genres with Open World Exploration
    #1703: "Reality Looks Back" Uses Quantum Possibility Metaphors & Gaussian Splats to Challenge Notions of Reality
    #1704: "Lesbian Simulator" is an Interactive VR Narrative Masterclass Balancing Levity, Pride, & Naming of Homophobic Threats
    #1705: The Art of Designing Emergent Social Dynamics with Ontroerend Goed's "Handle with Care"
    #1706: Using Immersive Journalism to Document Genocide in Gaza with "Under the Same Sky"
    #1707: War Journalist Turns to Immersive Art to Shatter Our Numbness Through Feeling. "In 36,000 Ways" is a Revelatory Embodied Poem by Karim Ben Khelifa
    This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  9. 192

    #1706: Using Immersive Journalism to Document Genocide in Gaza with “Under the Same Sky”

    I interviewed Khalil Ashawi, Sami Sultan, & Hail Khalaf about Under the Same Sky on Saturday, November 15, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  10. 191

    #1705: The Art of Designing Emergent Social Dynamics with Ontroerend Goed’s “Handle with Care”

    I interviewed Alexander Devriendt about Handle with Care on Wednesday, December 3, 2025. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  11. 190

    #1704: “Lesbian Simulator” is an Interactive VR Narrative Masterclass Balancing Levity, Pride, & Naming of Homophobic Threats

    I interviewed Iris van der Meule about Lesbian Simulator on Tuesday, November 18, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  12. 189

    #1703: “Reality Looks Back” Uses Quantum Possibility Metaphors & Gaussian Splats to Challenge Notions of Reality

    I interviewed Anne Jeppesen & Omid Zarei about Reality Looks Back on Tuesday, November 18, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  13. 188

    #1702: “Coded Black” Creates Experiential Black History by Combining Horror Genres with Open World Exploration

    I interviewed Maisha Wester about Coded Black on Monday, November 17, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  14. 187

    #1701: Public Art Installation “Nothing to See Here” Uses Perception Art to Challenge Our Notions of Reality

    I interviewed Celine Daemen about Nothing to See Here on Monday, November 17, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  15. 186

    #1700: Update on Co-Creating XR Distribution Field Initiative & Toolkits from MIT Open DocLab

    I interviewed Sarah Wolozin, Scarlett Kim, and Julia Scott-Stevenson about MIT Open DocLab's Co-Creating XR Distribution Field Initiative on Wednesday, November 19, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  16. 185

    #1699: Announcing “The Institute of Immersive Preservation” with Avinash Changa & His XR Virtual Machine Wizardry

    I interviewed Avinash Changa about The Institute of Immersive Preservation on Tuesday, November 18, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  17. 184

    #1698: Immersive Exhibition Lessons Learned from Undershed’s First Year with Amy Rose

    I interviewed Amy Rose about the first year of the Undershed at the Watershed on Saturday, November 15, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  18. 183

    #1697: Can Immersive Art Revitalize Civic Engagement? Netherlands CIIIC Funds “Shared Reality” Initiative

    I interviewed Martijn de Waal about revitalizing civic engagement through immersive art on Sunday, November 16, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  19. 182

    #1696: Speculative Architecture Meets the Immersive Dome in Sergey Prokofyev’s “Eternal Habitat”

    I interviewed Sergey Prokofyev about Eternal Habitat on Monday, November 17, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  20. 181

    #1695: “Another Place” Takes a Liminal Architectural Stroll into Memories of Another Time and Place

    I interviewed Domenico Singha Pedroli about Another Place on Saturday, November 15, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  21. 180

    #1694: Reimagining Amsterdam’s Red Light District in “Unimaginable Red” Open World Game

    I interviewed Vitor Freire & Monique Grimord about Unimaginable Red on Wednesday, November 19, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  22. 179

    #1693: A Survey of the Indie Immersive Dome Community Trends with “The Rift” Directors & 4Pi Productions

    I interviewed Janire Najera & Matthew Wright from 4PI Productions and CULTVR Lab about The Rift on Monday, November 17, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  23. 178

    #1692: Playful Remixing of Scanned Animal Body Parts in “We Are Dead Animals”

    I interviewed Maarten Isaak de Heer about We Are Dead Animals on Saturday, November 15, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  24. 177

    #1691: A Call for Human Friction Over AI Slop in “Deep Soup” Participatory Film Based on “Designing Friction” Manifesto

    I interviewed Luna Maurer & Roel Wouters about Deep Soup on Tuesday, November 18, 2025 at IDFA DocLab in Amsterdam, Netherlands. You can also check out their Designing Friction Manifesto. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  25. 176

    #1690: Asking Philosophical Questions on AI in “The Oracle: Ritual for the Future” with Poetic Immersive Performance

    I interviewed Victorine van Alphen about The Oracle: Ritual for the Future on Wednesday, November 19, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  26. 175

    #1689: Making Post-Human Babies in “IVF-X” to Catalyze Philosophical Reflections on Reproduction

    I interviewed Victorine van Alphen about IVF-X on Saturday, April 8, 2023 at New Images in Paris, France. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  27. 174

    #1688: Hacking Gamer Hardware and Stereotypes in “Gamer Keyboard Wall Piece #2”

    I interviewed Sjef van Beers about Gamer Keyboard Wall Piece #2 on Saturday, November 15, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  28. 173

    #1687: Text-Based Adventure Theatrical Performance “MILKMAN ZERO: The First Delivery”

    I interviewed Matt Romein about MILKMAN ZERO: The First Delivery on Monday, November 17, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  29. 172

    #1686: 15 Years of Hand-Written Letters about the Internet in “Life Needs Internet 2010–2025” Installation

    I interviewed Jeroen van Loon about Life Needs Internet 2010–2025 on Wednesday, November 19, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  30. 171

    #1685: Immersive Liner Notes of Hip-Hop Album “AÜTO/MÖTOR” Uses three.js & HTML 1.0 Aesthetics

    I interviewed Albert Johnson about A Ü T O / M Ö T O R on Sunday, November 16, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  31. 170

    #1684: Playable Essay “individualism in the dead-internet age” Recaps Enshittification Against Indie Devs

    I interviewed Nathalie Lawhead about individualism in the dead-internet age: an anti-big tech asset flip shovelware r̶a̶n̶t̶ manifesto on Monday, November 17, 2025 at IDFA DocLab in Amsterdam, Netherlands. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  32. 169

    #1683: “Feedback VR: An Antifuturist Musical” Wins Immersive Non-Fiction Award at IDFA DocLab 2025

    I interviewed Claudix Vanesix, Cocompi & Aaron Medina about Feedback VR, un musical antifuturista on Sunday, November 16, 2025 at IDFA DocLab in Amsterdam, Netherlands. Here are the 26 episodes and more than 24 hours of my IDFA DocLab 2025 coverage:

    #1682: Preview of IDFA DocLab's Selection of "Perception Art" & Immersive Stories
    #1683: "Feedback VR: An Antifuturist Musical" Wins Immersive Non-Fiction Award at IDFA DocLab 2025
    #1684: Playable Essay "individualism in the dead-internet age" Recaps Enshittification Against Indie Devs
    #1685: Immersive Liner Notes of Hip-Hop Album "AÜTO/MÖTOR" Uses three.js & HTML 1.0 Aesthetics
    #1686: 15 Years of Hand-Written Letters about the Internet in "Life Needs Internet 2010–2025" Installation
    #1687: Text-Based Adventure Theatrical Performance "MILKMAN ZERO: The First Delivery"
    #1688: Hacking Gamer Hardware and Stereotypes in "Gamer Keyboard Wall Piece #2"
    #1689: Making Post-Human Babies in "IVF-X" to Catalyze Philosophical Reflections on Reproduction
    #1690: Asking Philosophical Questions on AI in "The Oracle: Ritual for the Future" with Poetic Immersive Performance
    #1691: A Call for Human Friction Over AI Slop in "Deep Soup" Participatory Film Based on "Designing Friction" Manifesto
    #1692: Playful Remixing of Scanned Animal Body Parts in "We Are Dead Animals"
    #1693: A Survey of the Indie Immersive Dome Community Trends with "The Rift" Directors & 4Pi Productions
    #1694: Reimagining Amsterdam's Red Light District in "Unimaginable Red" Open World Game
    #1695: "Another Place" Takes a Liminal Architectural Stroll into Memories of Another Time and Place
    #1696: Speculative Architecture Meets the Immersive Dome in Sergey Prokofyev's "Eternal Habitat"
    #1697: Can Immersive Art Revitalize Civic Engagement? Netherlands CIIIC Funds "Shared Reality" Initiative
    #1698: Immersive Exhibition Lessons Learned from Undershed's First Year with Amy Rose
    #1699: Announcing "The Institute of Immersive Preservation" with Avinash Changa & His XR Virtual Machine Wizardry
    #1700: Update on Co-Creating XR Distribution Field Initiative & Toolkits from MIT Open DocLab
    #1701: Public Art Installation "Nothing to See Here" Uses Perception Art to Challenge Our Notions of Reality
    #1702: "Coded Black" Creates Experiential Black History by Combining Horror Genres with Open World Exploration
    #1703: "Reality Looks Back" Uses Quantum Possibility Metaphors & Gaussian Splats to Challenge Notions of Reality
    #1704: "Lesbian Simulator" is an Interactive VR Narrative Masterclass Balancing Levity, Pride, & Naming of Homophobic Threats
    #1705: The Art of Designing Emergent Social Dynamics with Ontroerend Goed's "Handle with Care"
    #1706: Using Immersive Journalism to Document Genocide in Gaza with "Under the Same Sky"
    #1707: War Journalist Turns to Immersive Art to Shatter Our Numbness Through Feeling. "In 36,000 Ways" is a Revelatory Embodied Poem by Karim Ben Khelifa

    This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  33. 168

    #1682: Preview of IDFA DocLab’s 2025 Selection of “Perception Art” & Immersive Stories

    IDFA DocLab is the selection of non-fiction digital and immersive stories that is a part of the International Documentary Film Festival Amsterdam (IDFA), and this year marks their 19th edition. DocLab founder Caspar Sonnen has been doing an amazing job of tracking the frontiers of new forms of digital, interactive, and immersive storytelling since 2007, and he joined me along with his co-curator Nina van Doren to talk about the ten pieces within the DocLab Competition for Immersive Non-Fiction, the nine pieces within the DocLab Competition for Digital Storytelling, portions of their DocLab Spotlight, the DocLab at the Planetarium: Down to Earth program, the DocLab Playroom prototype sessions, and the DocLab R&D Summit. To describe the types of immersive art and storytelling works that DocLab curates, they have started to use the term "Perception Art." This year's theme is "Off the Internet," which speaks both to works that critique and analyze the impacts of online culture on our lives and to projects that were born on the Internet and are now given an IRL physical installation art context. I'll be on site seeing the selection of works and interviewing various artists who are on the frontiers of experimentation with these new forms of "perception art." UPDATE: December 6, 2025.
    Here's all of my coverage from IDFA DocLab 2025:

    #1682: Preview of IDFA DocLab's Selection of "Perception Art" & Immersive Stories
    #1683: "Feedback VR: An Antifuturist Musical" Wins Immersive Non-Fiction Award at IDFA DocLab 2025
    #1684: Playable Essay "individualism in the dead-internet age" Recaps Enshittification Against Indie Devs
    #1685: Immersive Liner Notes of Hip-Hop Album "AÜTO/MÖTOR" Uses three.js & HTML 1.0 Aesthetics
    #1686: 15 Years of Hand-Written Letters about the Internet in "Life Needs Internet 2010–2025" Installation
    #1687: Text-Based Adventure Theatrical Performance "MILKMAN ZERO: The First Delivery"
    #1688: Hacking Gamer Hardware and Stereotypes in "Gamer Keyboard Wall Piece #2"
    #1689: Making Post-Human Babies in "IVF-X" to Catalyze Philosophical Reflections on Reproduction
    #1690: Asking Philosophical Questions on AI in "The Oracle: Ritual for the Future" with Poetic Immersive Performance
    #1691: A Call for Human Friction Over AI Slop in "Deep Soup" Participatory Film Based on "Designing Friction" Manifesto
    #1692: Playful Remixing of Scanned Animal Body Parts in "We Are Dead Animals"
    #1693: A Survey of the Indie Immersive Dome Community Trends with "The Rift" Directors & 4Pi Productions
    #1694: Reimagining Amsterdam's Red Light District in "Unimaginable Red" Open World Game
    #1695: "Another Place" Takes a Liminal Architectural Stroll into Memories of Another Time and Place
    #1696: Speculative Architecture Meets the Immersive Dome in Sergey Prokofyev's "Eternal Habitat"
    #1697: Can Immersive Art Revitalize Civic Engagement? Netherlands CIIIC Funds "Shared Reality" Initiative
    #1698: Immersive Exhibition Lessons Learned from Undershed's First Year with Amy Rose
    #1699: Announcing "The Institute of Immersive Preservation" with Avinash Changa & His XR Virtual Machine Wizardry
    #1700: Update on Co-Creating XR Distribution Field Initiative & Toolkits from MIT Open DocLab
    #1701: Public Art Installation "Nothing to See Here" Uses Perception Art to Challenge Our Notions of Reality
    #1702: "Coded Black" Creates Experiential Black History by Combining Horror Genres with Open World Exploration
    #1703: "Reality Looks Back" Uses Quantum Possibility Metaphors & Gaussian Splats to Challenge Notions of Reality
    #1704: "Lesbian Simulator" is an Interactive VR Narrative Masterclass Balancing Levity, Pride, & Naming of Homophobic Threats
    #1705: The Art of Designing Emergent Social Dynamics with Ontroerend Goed's "Handle with Care"
    #1706: Using Immersive Journalism to Document Genocide in Gaza with "Under the Same Sky"
    #1707: War Journalist Turns to Immersive Art to Shatter Our Numbness Through Feeling. "In 36,000 Ways" is a Revelatory Embodied Poem by Karim Ben Khelifa

    This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  34. 167

    #1681: VRChat Worldbuilder DrMorro on His Epic & Dreamlike Masterpieces

    The VRChat worlds by DrMorro are truly incredible. They're vast landscapes made of surreal mash-ups of various architectural styles and symbols that feel like walking through a waking dream. His Organism Trilogy (Organism, Epilogue 1, and Epilogue 2) is a true masterpiece of VR worldbuilding. His latest, Ritual, is one of the biggest and most impressive single worlds on VRChat; it feels like walking through a fever dream, and is probably the closest thing to Meow Wolf's style of immersive art. His Raindance Immersive award-winning Olympia was truly his first vast world, and his worlds have been getting bigger and more impressive ever since. He's got a keen ear for sound design, and his soundtracks help set the eerie mood of his sometimes unsettling and liminal worlds. In short, spending 4-5 hours going through one of DrMorro's worlds is a completely singular experience, as he's in a class of his own when it comes to VRChat worldbuilding.

    https://www.youtube.com/watch?v=L4AfYsmHQB8

    I have long wanted to conduct an interview with DrMorro doing a comprehensive retrospective of his works, but he's an anonymous Russian artist who doesn't speak English. He's only done one other interview, with Russian Del'Arte Magazine, but otherwise he's a pretty mysterious and cryptic figure. I managed to get ahold of him through a mutual friend, and he suggested that we do a "19th-century-style written correspondence" where I would send questions over text chat over the course of a week. He would use an AI translator to translate what I said into Russian, and then translate his Russian response back into English. For this podcast, I used the open-source Boson AI Higgs Audio with Russian actor Yul Brynner's voice to bring DrMorro's personality to life, but the full transcript of our edited chat is down below if you prefer to read it as I had experienced it.
    You can support DrMorro's work through Boosty, and you can support the Voices of VR podcast through Patreon.

    Kent Bye: Alright! Can you go ahead and introduce yourself and what you do in the realm of VR?

    DrMorro: Hello! The name's DrMorro – or well, that's my alias, to be precise. That's the name I'm known by as the creator of all those strange worlds in VRChat. For now, that's my only real achievement in the VR sphere. Other than that, I'm a 2D and 3D artist, which is my main profession.

    Kent Bye: Awesome. Well, this is my first interview that I’ve done via text. Can you give a bit more context for why you prefer to do the interview in this way?

    DrMorro: Honestly, I'm a pretty closed-off person, and it's easier for me to write than to talk. It’s just a character trait. Especially since I can't even imagine communicating through a voice translator. When I write, I can at least somehow control the translation. I don't know spoken English, but I manage fine in writing. So, no conspiracy theories. It's just how I'm used to communicating. Though it's strange because by nature, I'm a staunch introvert and I make worlds about total solitude. In ORGANISM, how many entities did you even find there besides the hat-wearing figure? And then suddenly, this popularity falls on me, and constant communication becomes the norm. Aaaahhh!

    Kent Bye: Well, I very much appreciate you taking the time to do what you describe as a “19th-century-style written correspondence” with me over the next week or so. And it makes sense that you could have a little bit more control in how you can express yourself via written text through a translator. Alright. So I always like to hear what type of design disciplines folks are bringing into VR, and so can you provide a bit more context about your background and journey into working with VR?

    DrMorro: To put it briefly, my journey is that I essentially work in architectural visualization.
    But that's more of a day job to keep myself afloat and pay the bills. My main interest, of course, has always been computer games. Yeah, I'm from the era of cassette tapes for the ZX Spectrum and 3D Max running on DOS. For as long as I can remember, one of my biggest dreams has been to create my own games. However, a humanities-oriented mind has always been the main roadblock on that path. All those numbers and C++ would just stump me completely. So I gradually mastered 3D graphics, but purely as a tool. In parallel, I painted traditional art—graphics and paintings. Over time, my graphics tablet replaced the canvas. And when VR technology arrived, I realized that this was exactly the tool that was missing from 'static' paintings. After that, it was a technical matter as I had to choose the most accessible gaming platform in terms of its SDK, and VRChat turned out to be a perfect fit for such goals. I still view my worlds as paintings in which you can find yourself and wander. Yes, partly because of my style of storytelling and partly because of my technical illiteracy. All these programmable events, triggers, animations - this is definitely not for my mind.

    Kent Bye: Ah! That makes sense that you would have at least some experience working with architecture, since your worlds have such an emphasis on vast spaces, and mashing up different architectural styles and contexts. How did you first encounter VRChat as a place to further develop your skills as a world builder?

    DrMorro: VRChat was one of many free apps I instantly installed on my VR headset as soon as I got it. There were other social apps too, but VRChat just blew me away right from the start. There are so many people, minimal censorship, and complete freedom of action. I ended up settling in a Russian-themed world called SLAV WORLD PADIK, and after getting to know its creator, I started slowly adding content to it.
    You can still find my graffiti on the walls there, made in the VR app "Kingspray," along with a few avatars I made. After that, I started thinking about creating my own worlds from scratch.

    Kent Bye: I would love to hear a little bit more about your design process for some of these epic worlds that you've been creating. Where do you typically begin with your world building process? Do you draw out a complete blueprint? Or do some concept art and painting? Do you build out one scene at a time, and then figure out how it all fits together? Do you start with a story or memory? I'd love to know where you begin.

    DrMorro: Well, the process is different for every world. Sometimes there's a clear concept from the start, other times it's born along the way. But overall, it's always pure chaos. And honestly, that's what I love most about it. The narrative unfolds in real-time; I literally live through it for a year, or however long it takes to build the world. Could I even handle such a project if it were all meticulously planned out upfront, leaving just the monotony of execution? But here, everything is completely unpredictable. A tiny detail can spawn an entire new branch of the story. And that branch, connecting to previous locations in the most bizarre way, can change everything, forcing me to go back to the beginning to smooth out the narrative lines. It's genuine magic—to be present at the birth of something new. I'm like some kind of AI mashing up a cat and an orange, and it's fantastic. But I wouldn't say the final world is a surprise to me. Of course, there are main storylines, thousands of scribbled sketches, and tons of new information gathered during development. So this is really one of those cases where the process is just as important as the result.

    Kent Bye: A quick follow-up on the timing, I know that some festivals like Venice Immersive require World Premiere or International Premiere status in order to be in competition.
    And I know that the curators Liz and Michel would have happily had some of your prior work in the main competition at Venice. But publishing it on VRChat ahead of the festival does not meet either one of their premiere requirements. But it sounds like you were driven by your own creative process, and similar to Valve where "It's done when it's done" and not driven by deadlines or aspirations to compete. I’ve seen quite a lot of work at both Raindance Immersive and Venice Immersive, and I can say that your world building is on a level beyond anyone else I've seen so far. But it doesn't seem like you're motivated to prove that out beyond the accolades you've already received from Raindance Immersive, or subject yourself to strict deadlines or the "crunch" that most game developers typically face.

    DrMorro: Yes, you're absolutely right. This is that one zone where I don't submit to any external rules. Usually, obligations tie your hands. Here, I only do what I want to do. A commercial approach would probably have buried these projects in their infancy. As for festivals—to be honest, no one presented me with any demands; they just offered me a chance to participate. And that's wonderful. To be completely honest, I'll add something else. It's not even that important to me how viewers will interpret my worlds, whether I've provided enough clues, or if the path through them is straightforward. In this, I'm a total egoist, someone who was also raised on those old games where there was no hand-holding for the player whatsoever. And this, by the way, has a fantastic side effect which is that some versions of the players' experiences and interpretations are worthy of their own book. I know that entire communities have formed just to explore and research my worlds. That is what I was truly striving for.
Kent Bye: Because the process is so important to you, I'm curious to hear a little bit more about your 3D art technical pipeline, and process of iterating both inside and outside of VR. Do you prototype within VR art programs like Gravity Sketch or Tilt Brush / Open Brush? Or do you go straight to Blender or Maya,...

  35. 166

    #1680: Charlie Melcher’s “The Future of Storytelling” Book Surveys Over 50 Living Stories

    Charles Melcher's new book "The Future of Storytelling: How Immersive Experiences Are Transforming Our World" was released on November 4, 2025, and I had a chance to take an early look and interview Melcher. The book is broken up into six main chapters where Melcher argues that the future of storytelling is agentic, immersive, embodied, responsive, social, and transformative. Melcher covers over fifty different "living stories" across different genres including virtual reality stories, location-based entertainment, immersive stories, immersive theatre, immersive art, experiential brand activations, and interactive experiences. He told me that he's had a chance to experience around 80 to 85% of the experiences that he features in his book, most of which are site-specific and often time-limited immersive exhibitions that are not always easy to get into. He's been traveling to different locations around the world with his Future of Storytelling Explorer's Club to see many of these experiences, as well as to engage with the creators behind them. In his book, he shares brief trip reports on over 50 different experiences, along with some very high-quality, official photo documentation of these projects. The book provides documentation of many of these ephemeral projects, but it also ties together some of the common elements that help to define and elucidate what exactly is meant by "immersive." Melcher and I also talk about the founding of The Future of Storytelling Summit back in October 2012, as well as the start of his Future of Storytelling podcast in March 2020, which has published over 120 interviews since it started during the pandemic. Around 20% of the projects and creators that have appeared on his podcast are featured in his book as what he considers to be a canon of work that exemplifies these deeper trends of immersive storytelling and living stories.
    While the book does provide a lot of valuable documentation, one complaint that I have is that it is not always easy to tell where Melcher is sourcing his quotes from project creators. The majority of quotations come from either private interviews that he personally conducted or public conversations featured on his podcast. But sometimes he uses quotes from creators in other publications without full attribution. If there's a second edition, I hope to see a more detailed set of footnotes and perhaps an index to make it an even more useful piece of documentation. The way that Melcher breaks down the different foundational qualities of immersive experiences also closely mirrors my own elemental approach, but with some slight deviations or different categorizations. His agentic qualities are equivalent to what I call active presence, his embodied is the same as my embodied presence, and his social is the same as my social presence. I also have emotional presence and environmental presence, which he classifies as emotional and physical subsets of immersive qualities. Melcher also has a participatory subset under immersive qualities, which I consider to just be a part of active presence and what he is already classifying as agentic. For me, "immersive" is more of an umbrella term that includes all of the various qualities of presence, and Melcher proposes a sort of rating system judging the degree of immersiveness across the different physical, emotional, and participatory dimensions. But Melcher doesn't list social as its own vector of immersiveness, as he told me that he considers social to be a subsection of emotions, whereas I consider social qualities to be distinct from emotional ones.
    Melcher also highlights the "responsive" qualities of a piece of work, which I see as connected both to ways of amplifying agency and to something that contributes to Slater's Plausibility Illusion of an experience, or a suspension of disbelief, which I classify under mental presence. Melcher also sees responsiveness as a key quality for personalized stories, and I appreciate his highlighting of this trend. For me, personalization is less of a quality of presence and more of a reflection of identity across various contextual domains. My experiential design framework is broken into quality, context, character, and story. So I see identity as a set of character traits across contextual domains that could be used as input for responsive stories. Each experience we have will evoke various qualities of presence, during which we will be radiating different physical, emotional, behavioral, cognitive, and social biomarkers that may also be tracked. I detail this in an article titled "Privacy Pitfalls of Contextually-Aware AI: Sensemaking Frameworks for Context and XR Data Qualities" published as a part of Existing Law and Extended Reality: An Edited Volume of the 2023 Symposium Proceedings. While this biometric data could be used to create responsive stories, it can also be used by surveillance capitalism companies to extrapolate psychographic information about us, which is something that I would have liked to have seen a bit more critical discussion about in Melcher's book. The responsive chapter was also an opportunity for Melcher to explore how AI and GenAI might be used in the future to create experiences that are more reactive to whatever AI can discern about us, which also raises more privacy and ethical implications for me.
    The final dimension that Melcher covers is transformative, and he cites Pine and Gilmore's 1999 book The Experience Economy, where they describe the progression from extracting commodities to making goods to delivering services to staging experiences, and eventually to guiding transformations. Melcher says that if all of the other qualities are achieved, then an experience could pass a threshold of becoming transformative. I agree with Melcher, Pine, and Gilmore about the transformative potential of these experiences, but for me it is something that is very elusive, mysterious, and certainly not something that can be orchestrated on demand. There is also a part of me that doesn't see immersive stories as any more transformative than other forms of stories. The conditions for transformation may be more a matter of hearing the right story within the right context at the right time. But I've experienced enough awe-inspiring and transformative moments in various immersive stories that I do agree we may be headed into a future where these types of on-demand transformative experiences are much more likely. On the whole, I really enjoyed reading through Melcher's The Future of Storytelling book. There were a lot of experiences that were not on my radar, and it's a great accounting of different parts of the immersive industry that I haven't been tracking as closely. I appreciated it as a form of documentation for this phase of these types of living stories. There is also clearly a rising demand for these types of meaningful, immersive stories, and it's an area where I see some of the most interesting innovations and most compelling content being developed. Melcher also does a great job of summarizing many of the core affordances of this emerging fusion of various storytelling traditions, and there are bound to be many insights for folks working within the XR industry.
To hear more of my feedback and thoughts on Melcher's book, be sure to tune into my conversation with him or check out the transcript below. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  36. 165

    #1679: The Matrix at Cosm Expands Film Beyond the Frame with Cinematic Shared Reality

The Matrix at Cosm opened in LA on June 6, 2025, leveraging Cosm's 87-foot, 12K+ LED immersive dome to show this classic film within a 16:9 frame while the additional space beyond the frame is filled with over 50 different scenes, thereby expanding the worldbuilding beyond the frame. I finally had a chance to see it last month, and was really impressed with how much this additional space was able to increase the level of immersion, amplify key emotional beats within the film, and create some truly awe-inspiring moments. I had a chance to speak with Alexis Scalice, Cosm’s vice president of business development and entertainment, about Cosm's collaboration with Little Cinema, MakeMake, and Warner Brothers to launch their inaugural "Cinematic Shared Reality" immersive experience. The Matrix has a few more weeks of screenings before their second film, Willy Wonka and the Chocolate Factory (1971), opens on November 21, 2025. You can also hear more context in Noah Nelson's No Proscenium podcast interview with Little Cinema's Jay Rinsky, conducted ahead of the world premiere. I also share some impressions of the two enhanced cinema productions of The Black Phone and M3GAN within the Blumhouse Enhanced Cinema Quest app. These films have some similarities to what The Matrix at Cosm is doing, but at a much smaller scale, and they are not nearly as effective as the expanded immersive worldbuilding in one of the greatest science fiction films of all time. The Matrix at Cosm is setting a quality high bar for this type of format that is going to be difficult to match. You can see more context in the rough transcript below. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  37. 164

    #1678: Wevr on VR LBE as a “New Cinema,” a 10-Year Retrospective

I had a chance to catch up with Neville Spiteri, CEO and co-founder of Wevr, which has been making location-based VR experiences for the last decade in what he calls a "New Cinema." See more context in the rough transcript below. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  38. 163

    #1677: Snap’s AR Developer Relations Plan for 2026 Specs Consumer Launch with Joe Darko

I did an interview with Joe Darko, Global Head of Developer Relations at Snap, at Snap's Lensfest developer conference. See more context in the rough transcript below. You can also check out all 11 episodes in this Snap Lensfest series here: #1667: Kickoff of Snap Lensfest 2025 Coverage & SnapOS 2.0 Announcements #1668: Snap Co-Founders Community Q&A about Specs 2026 Launch Plan #1669: Snap's Resh Sidhu on the Future of AR Commerce & Developer-Centered Innovation #1670: Snapchat's Embodied Gaming Innovations with AR Developer Relations Head #1671: Reflecting on Snap's AR Platform & Developer Tools Past and Future with Terek Judi #1672: Niantic Spatial's Project Jade Demo Shows Latest Location-Aware, AI Tour Guide Innovations #1673: Snap Lensfest Announcement Reflections from AR Gaming Studio DB Creations #1674: 3rd Place Spectacles Lensathon Team: Fireside Tales Collaborative Storytelling with GenAI #1675: 2nd Place Spectacles Lensathon Team: CartDB Barcode-Scanning Nutrition App #1676: 1st Place Spectacles Lensathon Team: Decisionator Object-Detection AI Decision-Maker #1677: Snap's AR Developer Relations Plan for 2026 Specs Consumer Launch with Joe Darko Here are some concluding deep thoughts that I just posted in a LinkedIn post. Reflections on Snap Lensfest XR & AI Trends Covered in Latest Voices of VR Podcast Series Snap brought me down to LA to cover their Lensfest developer conference where they made a lot of AR developer platform announcements, had a hackathon featuring those new capabilities, and are gearing up for their 2026 consumer launch of Specs, their fully 6DoF, hand-tracking-enabled AR glasses. It’s been a full year since their Spectacles dev kit was announced and made available to developers, and I feel like Snap is on the bleeding edge of where the overall XR industry may be headed. 
These latest 11 Voices of VR podcast episodes, spanning nearly 7 hours, dig into deeper trends that go beyond the headline announcements from Snap Lensfest. I recorded five interviews with various Snap employees, and I had a chance to catch up with some of the leading AR developers in the space, including the team behind Niantic Spatial’s latest VPS guided tour experience on the Spectacles with an AI virtual being. I also served as a preliminary hackathon judge, where I got hands-on experience with all of the AR projects exploring what’s possible with the latest Snap Cloud announcements, and I’m featuring interviews with the top three Lensathon teams from the Spectacles track. Snap's Latest AR Developer Platform Announcements Snap is gearing up for a 2026 launch of Specs, by which point the Spectacles dev kits will have been available for nearly two full years. So this Lensfest marks a halfway point towards a consumer release, and the product team has been busy rapidly iterating on their bespoke AR app production pipeline. Dedicated AR glasses are very resource-constrained, so Snap has been continuing to evolve their Lens Studio developer tool and optimizing their SnapOS platform for Spectacles. Snap didn't share any news on the target specifications for the Specs, but they shipped eight significant releases of their development tools over the past year, with some of the biggest announcements being the primary focus at Lensfest. Snap is launching Snap Cloud, based upon a deployment of Supabase's open-source, PostgreSQL-hosted solution. This will allow developers to dynamically load assets, call edge functions, and more easily set up database backends. This will hopefully help Spectacles AR lenses go beyond bite-sized entertainment and rapidly prototyped experiments into more fully-featured applications that leverage cutting-edge AI models and computer vision. 
Spectacles developers have been limited by a 25MB lens size limit, but the Snap Cloud announcement means that larger assets can be dynamically loaded, and I expect to see more sophisticated experiences, more AI-driven applications using various cloud services, and lenses with data persistence that make it more likely for users to want to come back to them. Are AR Glasses as a Front-End to AI a Viable Path? There’s certainly a lot of experimentation happening with the various AI services that are cropping up, and Snap is very much embracing the exploratory potential for AR devs to see what’s possible by making it easier to integrate with these services. While many are excited about the possibilities and potential of the mashups between AI and AR, there are also a ton of open questions that have yet to be answered about what types of business models will prove to be sustainable. We very well could be in an AI bubble where many of these emerging AI services prove to be economically unsustainable, as the costs to run them may continue to outpace the revenue that they generate. But Snap seems content to go all-in on enabling AR developers to see what’s possible, while also trying to mitigate the risks through some more experimental flags and requests for consent from end users. See my conversation with Terek Judi for more context on how Snap is striking this balance between innovation and AI trust & safety. Snap doesn’t have their own preferred LLM or AI service, and so the Spectacles and Specs may be one of the only AR devices that allow developers to more freely explore all of the various AI options that are out there. But at the same time, the developers of these apps may also be on the hook to foot the bills for whatever AI-driven services they create. 
The business models for all of these AI-driven AR experiences have yet to be fully fleshed out, and the flywheel of innovation is at the point of pure experimentation to see what types of compelling AI-driven experiences may be enabled by the convenience of a face computer. An oft-repeated adage in a number of my conversations is that AR will likely serve as the experiential UI and frontend to an AI backend. Therefore, Snap is very much interested in empowering developers to experiment with these new AI capabilities. The prompt for the Spectacles Lensathon participants was to leverage the new Snap Cloud features from Supabase, including being able to call edge functions to various AI services, implementing database-driven apps, or having some sort of live multiplayer and social interaction facilitated by the Spectacles. Serving as a preliminary judge for the Lensathon gave me a chance to experience what the ten Spectacles track teams were able to pull off in a quick 25-hour hackathon. I share more about some of the trends that emerged in the introduction to my interview with the Lensathon winner, but also within my interviews with the 2nd place and 3rd place teams. Yes, technically many of these AR apps could also be phone-based apps, but the convenience of hands-free, gesture-based triggers with a head-mounted camera on your face may lower the friction enough to make new AR applications much more viable than a phone-based equivalent. Snap as a Dark Horse in the AR Glasses Race Overall, I see Snap as a bit of a dark horse in the race towards fully functional AR glasses, and the big differentiating factor may be what types of experiences developers will be able to make for the Snap Specs launching sometime next year (likely after Labor Day, in either Q3 or Q4). This dark horse status is mainly because Snap is going up against some of the biggest companies in the world. 
Snap's 14th anniversary on September 8, 2025 was marked by an email that Evan Spiegel sent out to all of his employees. In the letter, Spiegel says, “The cutoff for inclusion in the Fortune 500 was $7.4 billion in revenue in 2025, and with analyst estimates suggesting Snap could reach nearly $6 billion in revenue in 2025, we’re not far from achieving Fortune 500 status.” Snap is competing in the XR space with other companies that are near the top of the Fortune 500 list by revenue, with Apple at #4, Alphabet (Google) at #7, and Meta at #22. By profit, Alphabet is #1, Apple is #2, and Meta is #6. It takes a lot of money to do a proper consumer launch of XR hardware, and reporter Alex Heath published a report last week on his new “Access” Substack that Snap CEO Evan Spiegel will be in Saudi Arabia this week speaking at their Future Investment Initiative with the intent to raise a $1 billion round of funding for the Specs release. Heath reports that “sources say Snap plans to turn its Specs hardware unit into an independent subsidiary that can continue raising capital from investors. The idea under discussion is to structure it similarly to Waymo, which operates independently within Alphabet, rather than fully spin off Specs into a new company outside of Snap.” Heath’s report answers some of my own logistical questions, and could provide some additional puzzle pieces for how Snap would continue to punch above its weight in releasing consumer AR glasses in competition with some of the largest companies in the world. Snap Betting on Developer Relations as Differentiating Factor Given that Snap is an underdog in the race towards AR glasses, they have had to differentiate themselves in some fashion, and Snap is betting on their developer relations strategy as the key differentiating factor. 
Meta has de-emphasized collaborating with third-party developers for their AI Glasses and Meta Ray-Ban Display glasses; their smart glasses had been on the market for a couple of years before Meta finally announced a pathway for developers to have their own apps interface with them. In contrast, Snap has been taking a much more developer-centric approach with their AR glasses strategy, with the Spectacles dev kit being made available on a subscription basis. Despite the odds, the Spectacles dev kit feels like it is on par with what Micro

  39. 162

    #1676: 1st Place Spectacles Lensathon Team: Decisionator Object-Detection, AI Decision-Maker

At Snap's Lensfest developer conference, I did an interview with the 1st place team in the Snap Spectacles Lensathon, named Decisionator, including Candice Branchereau, Marcin Polakowski, Volodymyr Kurbatov, and Inna Horobchuk. I also summarize the other Spectacles Lensathon projects after serving as a preliminary judge for the competition. See more context in the rough transcript below. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  40. 161

    #1675: 2nd Place Spectacles Lensathon Team: CartDB Barcode-Scanning Nutrition App

At Snap's Lensfest developer conference, I did an interview with the 2nd place team in the Snap Spectacles Lensathon, named CartDB, including Guillaume Dagens, Nigel Hartman, and Uttam Grandhi (the other team member, Nicholas Ross, had some prior commitments). See more context in the rough transcript below. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  41. 160

    #1674: 3rd Place Spectacles Lensathon Team: Fireside Tales Collaborative Storytelling with GenAI

At Snap's Lensfest developer conference, I did an interview with the 3rd place team in the Snap Spectacles Lensathon, named Fireside Tales, including Stijn Spanhove, Pavlo Tkachenko, and Yegor Ryabtsov. See more context in the rough transcript below. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  42. 159

    #1673: Snap Lensfest Announcement Reflections from AR Gaming Studio DB Creations

I did an interview with DB Creations co-founders Dustin Kochensparger and Blake Gross at Snap's Lensfest developer conference. See more context in the rough transcript below. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  43. 158

    #1672: Niantic Spatial’s Project Jade Demo Shows Latest Location-Aware, AI Tour Guide Innovations

I did an interview with Alicia Berry, Executive Producer at Niantic Spatial, and Asim Ahmed, Head of Product Marketing at Niantic Spatial, at Snap's Lensfest developer conference about their latest Project Jade Spectacles demo. See more context in the rough transcript below. https://twitter.com/tweetsfromasim/status/1981830288771887606 This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  44. 157

    #1671: Reflecting on Snap’s AR Platform & Developer Tools Past and Future with Terek Judi

At Snap's Lensfest developer conference, I did an interview with Terek Judi, who works on the Spectacles product at Snap focusing on SnapOS, platform, and developer tools. See more context in the rough transcript below, and if you'd like to check out the two interviews with Matt Hargett that I reference in the intro, be sure to check out episode #1311 and episode #1660. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  45. 156

    #1670: Snapchat’s Embodied Gaming Innovations with AR Developer Relations Head

I did an interview with Raag Harshavat, AR Developer Relations at Snapchat, at Snap's Lensfest developer conference. See more context in the rough transcript below. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  46. 155

    #1669: Snap’s Resh Sidhu on the Future of AR Commerce & Developer-Centered Innovation

I did an interview with Resh Sidhu, Senior Director of Innovation of Specs and Developer Marketing at Snap, at Snap's Lensfest developer conference. See more context in the rough transcript below. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  47. 154

    #1668: Snap Co-Founders Community Q&A about Specs 2026 Launch Plan

Snap co-founders, CEO Evan Spiegel and CTO Bobby Murphy, typically hold a community-driven Q&A after their Lensfest keynote where they field over a dozen questions from Lensfest attendees. I'm including this in my coverage again this year as it's a really great set of questions about the consumer release of Specs AR glasses next year, some of their thinking about the role of AI at Snap, and reflections on their 10 years of working with AR lenses, going back to the rainbow-vomiting face filter released in 2015. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  48. 153

    #1667: Kickoff of Snap Lensfest 2025 Coverage & SnapOS 2.0 Announcements

This interview with Spectacles Community Manager Jesse McCulloch kicks off my coverage of Snap's developer conference, Lensfest. Snap is gearing up for a consumer release of their Snap Specs AR glasses sometime next year, and they've been busy frequently updating their underlying operating system and platform tools like Lens Studio. No new announcements or details about the Snap Specs have been shared yet, but I did cover the biggest announcements at Lensfest throughout this series and in this interview with McCulloch. I also had a chance to interview five different Snap employees exploring different aspects of their AR strategy, and I interviewed some AR developers from the Snap ecosystem. Snap brought me down to also cover the 25-hour Lensathon, where I had a chance to be a judge for the 10 different Spectacles-based hackathon projects, and so I'll be featuring the top 3 finalists in the series. I also interviewed the AR game developers from DB Creations, as well as the team behind the latest AI assistant guided tour demo from Niantic Spatial. 
Here is a list of the 11 episodes and nearly 7 hours of coverage from Snap's Lensfest: #1667: Kickoff of Snap Lensfest 2025 Coverage & SnapOS 2.0 Announcements #1668: Snap Co-Founders Community Q&A about Specs 2026 Launch Plan #1669: Snap's Resh Sidhu on the Future of AR Commerce & Developer-Centered Innovation #1670: Snapchat's Embodied Gaming Innovations with AR Developer Relations Head #1671: Reflecting on Snap's AR Platform & Developer Tools Past and Future with Terek Judi #1672: Niantic Spatial's Project Jade Demo Shows Latest Location-Aware, AI Tour Guide Innovations #1673: Snap Lensfest Announcement Reflections from AR Gaming Studio DB Creations #1674: 3rd Place Spectacles Lensathon Team: Fireside Tales Collaborative Storytelling with GenAI #1675: 2nd Place Spectacles Lensathon Team: CartDB Barcode-Scanning Nutrition App #1676: 1st Place Spectacles Lensathon Team: Decisionator Object-Detection AI Decision-Maker #1677: Snap's AR Developer Relations Plan for 2026 Specs Consumer Launch with Joe Darko This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  49. 152

    #1666: VRChat CEO Graham Gaylor on Exploring Various UGC Monetization Strategies

I did an interview with VRChat co-founder and CEO Graham Gaylor at Meta Connect 2025 where we talk about the various monetization strategies that VRChat has been exploring with their user-generated content platform. VRChat announced layoffs of 30% of their employees back on June 12, 2024, and so this is the first time I've had a chance to interview any of the VRChat executives since then. I used to have a pretty consistent streak of interviewing either VRChat leaders or employees at various VR conferences running from 2014 through 2019, but after the pandemic they were not giving as many public interviews. I did however recently cover the VRChat Avatar Marketplace, as well as a conversation with VRChat's new Trust and Safety lead Jun Young Ro about his plans to overhaul and modernize VRChat's Trust and Safety processes, especially as users like Harry X were pointing out some gaps in their moderation processes. I had a chance to chat with Gaylor about some of the early decisions in VRChat for making custom avatars easily uploadable since version 0.3.5 on March 16, 2014, when co-founder Jesse Joudrey made his first public contributions to the project. Joudrey elaborated on his vision of what he considered to be "one of the corner stones of virtual reality and any cyberpunk offshoot... Customization. I don't want any limit on who or what I can be in virtual reality." I had dug up these dates and posts in the write-up for episode #1408, where I went down a deep rabbit hole of tracing down some of the origin story of VRChat. Gaylor had actually passed along some early emails and documentation of the early days of VRChat for that write-up. The decision to make avatars completely customizable has been part of the magic and success of VRChat. But centralized and controlled identity has traditionally been one of the core pathways for monetization. 
In a conversation with VRChat community members after the June 2024 layoffs, qDot told me, "You cannot put the asset genie back in the bottle for VRChat. They can't just come up with an asset system that works this sort of centrally-regulated way now. Everyone is used to throwing these assets around, selling them on Gumroad, selling them on Booth." So I had a chance to talk with Gaylor about this paradox of customizable identity being both the secret sauce of VRChat and the clearest traditional path for monetization. You can see more context in the rough transcript below. This also happens to wrap up my coverage of Meta Connect 2025, and here's a recap of the different stories and coverage if you'd like to dig into more details of other things that were announced this year. #1652: Kick-off of Meta Connect Coverage with Meta Ray-Ban Display Glasses Insights from Norm Chan #1653: XR Analyst Anshel Sag on Meta's AI Glasses Strategy #1654: CNET's Scott Stein's Reflections on Meta Ray-Ban Display Glasses Implications #1655: Meta Horizon Studio News and Virtual Fashion with Paige Dansinger #1656: Kiira Benz Part 1: "Runnin'" Large-Scale Volumetric Music Video (2019) #1657: Kiira Benz Part 2: "Finding Pandora X" Bringing Immersive Theatre to VRChat (2020) #1658: Kiira Benz Part 3: Immersive Storytelling Career Retrospective (2025) #1659: VR Gaming Career Retrospective of Chicken Waffle's Finn Staber #1660: Enabling JavaScript-Based Native App XR Pipelines with NativeScript, React Native, and Node API with Matt Hargett #1661: State of VR Gaming with Jasmine Uniza's Impact Realities and Flat2VR Studios #1662: Meta Connect Highlights & Meta Horizon News with JDun and JoyReign #1663: ShapesXR Updates & Neural Band Design Implications of Transforming Your Hand into a Mouse #1664: Resolution Games CEO on Apple Vision Pro Launch + Gaze & Pinch HCI Mechanic in Game Room (2024) #1665: Resolution Games' "Battlemarked" Blends Mixed Reality Social Features with Demeo and D&D Gameplay #1666: VRChat CEO Graham Gaylor on Exploring Various UGC Monetization Strategies This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality

  50. 151

    #1665: Resolution Games’ “Battlemarked” Blends Mixed Reality Social Features with Demeo and D&D Gameplay

I did an interview with Gustav Stenmark at Meta Connect 2025 talking about their latest game Demeo x Dungeons & Dragons: Battlemarked, which enables some pretty interesting co-located mixed reality social features while also enabling individual players to have their own mixed reality or VR POV. You can see more context in the rough transcript below. This is a listener-supported podcast through the Voices of VR Patreon. Music: Fatality


ABOUT THIS SHOW

Since May 2014, Kent Bye has published over 1500 Voices of VR podcast interviews featuring the pioneering artists, storytellers, and technologists driving the resurgence of virtual & augmented reality. He's an oral historian, experiential journalist, & aspiring philosopher, helping to define the patterns of immersive storytelling, experiential design, ethical frameworks, & the ultimate potential of XR.

HOSTED BY

Kent Bye
