PODCAST · technology
An Analog Brain In A Digital Age | With Marco Ciappelli
by Marco Ciappelli
[Formerly Redefining Society & Technology] An Analog Brain In A Digital Age Podcast is your backstage pass to my mind — where analog meets digital, and the occasional pig flies. In an age racing toward algorithms and automation, the best ideas still come from curiosity, experience, emotion, and the unexpected connection. What you'll find are conversations on technology & society, storytelling in all its forms, branding & marketing, creativity, and the odd surprise.
-
240
Book: Deep Future — Creating Technology That Matters | An Interview with Pablos Holman | An Analog Brain In A Digital Age With Marco Ciappelli
PODCAST EPISODE | An Analog Brain In A Digital Age With Marco Ciappelli Pablos Holman has built spaceships, zapped malaria-carrying mosquitoes with a laser, earned thousands of patents, and is now betting his venture capital on the inventors Silicon Valley forgot to fund. His new book, Deep Future: Creating Technology That Matters, is a call to arms against a tech industry that got drunk on software and forgot about the other 98% of the world. 📺 Watch | 🎙️ Listen | marcociappelli.com I grew up in a city full of inventors. They just didn't call themselves that. Florence in the fifteenth century wasn't running on venture capital. It was running on curiosity, obsession, and the refusal to accept that the way things had always been done was the way they had to be done. Leonardo didn't have a manual. Galileo didn't ask for permission before pointing a better telescope at the sky. They took things apart, looked at what was inside, and put them back together differently. They hacked things. That's Pablos Holman's word — and when he used it in our conversation, I recognized it immediately. Not as a tech industry term. As something much older. A way of being in the world that says: the instructions are a suggestion, not a ceiling. Pablos has had one of those careers that resists a tidy summary. He was writing code in Alaska as a kid, with one of the first Apples ever made and nobody around to teach him anything. He figured it out on his own — and never really stopped doing that. Cryptocurrency in the '90s. AI research before anyone called it that. Helping build spaceships at Blue Origin. Then years at the Intellectual Ventures Lab with Nathan Myhrvold, going after problems Silicon Valley had decided weren't worth the trouble: a laser that identifies and destroys malaria-carrying mosquitoes in flight, hurricane suppression systems, a nuclear reactor powered by nuclear waste. Six thousand patents. Thirty million TED Talk views. 
Now he runs a venture fund called Deep Future, and he's written a book with the same name. The subtitle says what he thinks about most of what Silicon Valley has been doing for the past two decades. Creating Technology That Matters. He calls the alternative shallow tech. Apps that replace taxis. Apps to rent a stranger's couch. Apps to have weed delivered by drone. Not useless, exactly — but not living up to what we actually have. And what we actually have, Pablos says, is the best toolkit in all of human history: more people, more education, more resources, more raw scientific understanding than any generation before us. If all that produces another chat app, something has gone badly wrong. The number he threw out in our conversation — and I'm going to mention it here because it deserves to be mentioned, not as a hook but as a quiet scandal — is that all the software companies in the world combined, every single one of them, account for about two percent of global GDP. The other ninety-eight is energy, shipping, food, manufacturing, construction, automotive. Industries that haven't fundamentally changed in a century. Industries that software can nudge a few percent better but cannot make ten times better. Ten times better is where Pablos starts. One of his portfolio companies is building autonomous sailing cargo ships — no crew, no fuel, no emissions — targeting a two-trillion-dollar industry that currently burns half its revenue on fuel. He's also continuing the malaria work that could save half a million lives a year, half of them children under five. That's the scale he's measuring things against. We got to AI eventually, as you do. What he said landed simply and cleanly: chatting is the least important thing we can do with it. 
What we should be using AI for is understanding things that were previously too complex to model — what's happening in every cell of your body, how to actually get a grip on the climate, how to start solving the problems that have been resistant to every tool that came before. Instead we are using it to generate fake videos and build an AI version of TikTok. We've hit peak entertainment, he said. I think that's right. And I think what comes after peak entertainment — if anything does — is the real question sitting underneath all of this. The conversation ended the way the best ones do: not with a conclusion, but with an invitation. Pick something you care about and work on it. The people who built Apollo weren't all rocket scientists. They were cable layers and logistics coordinators who never saw the rocket up close. But they were part of something that exceeded their own individuality, and they knew it, and that was enough. That pride is still available. Whether we want it more than we want another scroll — that's on us. Deep Future: Creating Technology That Matters is out now — find it here. Subscribe to the newsletter at marcociappelli.com. Let's keep thinking. About Marco Ciappelli Marco Ciappelli is Co-Founder & CMO of ITSPmagazine, Co-Founder & Creative Director of Studio C60, Branding & Marketing Advisor, Personal Branding Coach, Journalist, Writer, and Host of An Analog Brain In A Digital Age podcast. Born in Florence, Italy, and based in Los Angeles, he explores the intersection of technology, society, storytelling, and creativity — with an analog brain, in a digital age. 🌎 marcociappelli.com | itspmagazine.com | studioc60.com About Pablos Holman Pablos Holman is a futurist, inventor, and self-described "notorious hacker" with one of the more unusual résumés in American technology. He started writing code as a kid in Alaska on one of the first Apple computers ever made, and never stopped following that thread wherever it led. 
In the 1990s, he worked on cryptocurrency and early AI systems before either had found their way into the mainstream. In 2001, he joined Jeff Bezos at Blue Origin, where he helped explore new approaches to space travel. He then joined Nathan Myhrvold's Intellectual Ventures Lab, a deep tech invention lab that produced over 6,000 patents — including a laser system that identifies and destroys malaria-carrying mosquitoes in flight, a machine designed to suppress hurricanes, and a nuclear reactor powered by nuclear waste. His TED talks have accumulated over 30 million views. Holman is now Managing Partner of Deep Future, a venture capital fund backing inventors working on the hard physical problems the software industry left behind — autonomous shipping, new energy systems, food technology, and manufacturing. His book, Deep Future: Creating Technology That Matters (2025), is a critique of Silicon Valley's obsession with shallow tech and an invitation to aim at the world's actual problems. 🔗 LinkedIn | deepfuture.tech/about-pablos Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
239
New Book: Healing the Sick Care System — Why People Matter | An Interview with Gil Bashe | An Analog Brain In A Digital Age With Marco Ciappelli
PODCAST EPISODE | An Analog Brain In A Digital Age With Marco Ciappelli The United States spends 18.7% of its GDP on health — two to three times what countries like Italy spend. Italy has a longer life expectancy. So what exactly are we paying for? Gil Bashe, Chair of Global Health & Purpose at FINN Partners, former combat medic, and author of Healing the Sick Care System: Why People Matter, joined me on An Analog Brain In A Digital Age to talk about what happens when a system designed to heal people forgets that people exist. This is not a rant. It's a diagnosis — from someone who has seen the system from every angle: the battlefield, the boardroom, the pharmaceutical lobby, and the bedside of his own child. 📺 Watch | 🎙️ Listen | marcociappelli.com Gil Bashe started his career as a paratrooper combat medic. He's also the father of a child with a rare disease. He spent years as a lobbyist for the pharmaceutical industry — and he'll tell you that upfront, without flinching, before explaining why he still thinks that work mattered. He has led billion-dollar global agencies, advised companies that make life-saving drugs, and sat in rooms with the CEOs of hospital systems, pharmacy chains, and insurance companies. He asked them once if they understood each other's business models. The honest answer was: no. That's the system he's writing about. Not a broken one — a fragmented one. A system where the prime customer of healthcare has become the system itself, and the actual patients have been quietly reclassified as beneficiaries. As Gil puts it: if your washing machine breaks and you call the company and they tell you you're a "beneficiary of our appliance," you'd think they were out of their minds. You paid for it. You're a customer. They should treat you like one.
His new book, Healing the Sick Care System: Why People Matter, was born from a long accumulation of observations — 11 or 12 years of writing about the health ecosystem from every angle — and catalyzed by one specific moment: the assassination of the UnitedHealthcare CEO, and the public reaction to it. The fact that the killer had a following. The fact that people were applauding. Gil found that more disturbing than anyone seemed comfortable admitting. When anger reaches that level, something in the system has gone deeply, fundamentally wrong. I should say: this is a conversation I had some skin in. I'm type 1 diabetic. I know what it's like to sit across from an endocrinologist who tells you things you already know, reads from a checklist, and never quite looks up from the laptop. The human element — the education, the empathy, the sense that this person actually sees you — is often just gone. And I think most doctors started their careers because they wanted to be healers. The system squeezed it out of them. Gil agrees. He says 51% of doctors now report burnout. Nearly 60% of nurses. And that's not a coincidence. That's a design failure. The AI question we kept circling was the one nobody in healthcare leadership seems to want to answer directly: if artificial intelligence takes some of the administrative burden off doctors' shoulders, does that time go back to patients — or does the system simply use it to push more throughput? More appointments per day, not more minutes per patient. Gil's framework for thinking about this is worth keeping: IQ, EQ, and TQ. Intellectual intelligence, emotional intelligence, and technology intelligence. The doctors we need going forward aren't just the ones who scored highest on their MCATs. They're the ones who can read a room. Who can hear a patient bring in a printout from WebMD and respond with curiosity instead of dismissal. Who understand that a curious patient is a gift, not an inconvenience. 
He told me a story from the book — one doctor who cut his wife off mid-sentence and said, "Who are you gonna believe? Me, or a patient?" And another doctor, in Santa Monica, who performed a long and complicated surgery on his daughter, walked into the hospital cafeteria in his surgical scrubs with photographs of every step of the procedure, laid them out on the table, explained everything in plain language, and then left his personal cell phone number. "Call me with any question." They did. He picked up. That's not technology. That's not policy. That's personality. And Gil's argument — which I think is correct — is that we've built a system that systematically selects against it. The hopeful part of the conversation surprised me. I expected nuance. What I got was genuine belief. We have the best trained doctors in the world. We are the source of global medical innovation. We spend enough money — the problem isn't resources, it's alignment. The fix, as Gil sees it, starts with every part of the system — payers, pharmaceutical companies, hospital systems, policy makers — looking in the mirror and asking: am I still on mission? And then, slowly, getting back to why this system was created in the first place. Healing the Sick Care System: Why People Matter is out now. Get the book here. And if this kind of conversation is what you come here for, subscribe to the newsletter at marcociappelli.com. — Marco Co-Founder ITSPmagazine & Studio C60 | Creative Director | Branding & Marketing Advisor | Personal Branding Coach | Journalist | Writer | Podcast: An Analog Brain In A Digital Age ⚠️ Beware: Pigs May Fly | 🌎 LAX🛸FLR 🌍 About Marco Marco Ciappelli is Co-Founder & CMO of ITSPmagazine, Co-Founder & Creative Director of Studio C60, Branding & Marketing Advisor, Personal Branding Coach, Journalist, Writer, and Host of An Analog Brain In A Digital Age podcast. 
Born in Florence, Italy, and based in Los Angeles, he explores the intersection of technology, society, storytelling, and creativity — with an analog brain, in a digital age. 🌎 marcociappelli.com About the Guest Gil Bashe is Chair of Global Health & Purpose at FINN Partners, one of the world's largest independent communications agencies. A former combat medic and paratrooper turned award-winning health communications leader, he has shaped the field across global agencies, trade associations, and private equity ventures over a 40-year career. He is a PM360 Lifetime Achievement Award recipient, named among PRWeek's Top 30 Most Influential People in Health PR, honored as an MM&M Top 10 Innovation Catalyst, and tapped by PRovoke Media as a Top 25 Innovator. He serves on the boards of the American Diabetes Association and the Marfan Foundation, and is editor-in-chief of Medika Life. Healing the Sick Care System: Why People Matter is published by Health Administration Press (February 2026). LinkedIn | Get the Book
-
238
On the Internet, Nobody Knows You're Not Human — And Nobody's Asking | Written by Marco Ciappelli & Read by Tape3
An Analog Brain In A Digital Age — A Newsletter by Marco Ciappelli On the Internet, Nobody Knows You're Not Human — And Nobody's Asking There was a moment — brief, unrepeatable — when the internet felt like a genuinely open place. No profiles. No algorithms deciding what you deserved to see. No one monetizing the fact that you existed. You showed up, you explored, you talked to strangers in other countries about things that mattered to you, and the whole thing felt less like a product and more like a discovery. Like finding a door to another dimension. There's a cartoon that captured that moment perfectly. 1993. The New Yorker. Peter Steiner. Two dogs, one at a computer, and the line that accidentally defined an entire era of the internet: "On the Internet, nobody knows you're a dog." https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_you%27re_a_dog It was funny. It was also prophetic. And it was optimistic in a way we've completely forgotten how to be about the web. Anonymity as freedom. Identity as something fluid, chosen, playful. You could be anyone. You could be from anywhere. You could reinvent yourself in real time, with no one to contradict you. Then surveillance capitalism arrived and broke the party. Cookies. Behavioral profiling. The algorithmic panopticon. Suddenly everyone knew everything. You weren't a dog anymore — you were a demographic, a data point, a cluster of purchase histories and scroll patterns. The internet that promised liberation became the most precise identity-tracking machine ever built. Anonymity collapsed under the weight of monetization. Nobody knows you're a dog became everyone knows you're a dog, what breed, what you ate for breakfast, and which vet you Googled at 2am. And now we're in the third act. A Buddhist monk named Yang Mun has 2.5 million Instagram followers. He posts silent morning meditations. He has made over $300,000 since October. 
Three Buddhist scholars reviewed his content and confirmed: his wisdom isn't grounded in any actual scripture. It just sounds like it is. Yang Mun doesn't exist. He was built with ChatGPT, HeyGen — an AI platform that generates realistic synthetic human video, a face, eyes, a voice, moving and breathing and entirely artificial — and a handful of other tools, by a creator operating inside what's being called "Big Slop": a venture-backed industry that manufactures fake influencers, automates their posting, and scales them to millions of followers while platforms, politely, look the other way. Hat tip to Jack Brewster, whose LinkedIn post on Yang Mun is what started this thread of thought. https://www.linkedin.com/posts/jackbrewster_a-buddhist-monk-named-yang-mun-has-25-million-activity-7451268378499137537-RPB1?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAD_QZMB_jUr1316NWqo3MgG_iFVSPTfDgY The circle has closed. And inverted. We went from nobody knows you're a dog to everyone knows you're a dog to something far stranger: Nobody knows you're not human. The dog is gone. The human is optional. Here's what interests me — and it's not the outrage part, because the outrage is easy and everyone will do it. What interests me is the McLuhan part. Marshall McLuhan said it in 1964: the medium is the message. Not the content. The medium itself. The form of transmission shapes reality more than anything transmitted through it. Yang Mun's fake wisdom is almost beside the point. The scholars confirmed it's scripturally meaningless. But it sounds right — which is precisely the tell. The content was never engineered for truth. It was engineered for the platform. For the algorithm. For the engagement pattern that rewards the feeling of depth over the presence of it. The medium produced the monk. The monk is the message. 
And if you zoom out — which is what I keep trying to do from Florence, where the stones beneath my feet are five hundred years old and nobody around me is particularly impressed by disruption — you see something that looks less like a technology story and more like a civilization story. We built an internet that promised connection. We built AI to simulate humans. Somewhere along the way we forgot to ask whether any of it was real — or maybe we never quite got around to asking in the first place. Because here's the thing: this didn't happen slowly enough for us to develop a moral relationship with it. There was no adjustment period. No cultural processing. The fake monk didn't represent a fall from grace. It was a first contact situation. We haven't even named what's wrong yet, let alone decided whether it matters. The analog brain — slow, emotional, context-dependent, stubbornly human — is the one thing that still notices the difference between a conversation that carries weight and one that merely carries words. It's not superior in processing power. It's just that it comes from somewhere. From experience. From loss. From the specific, irreplaceable accident of having lived a particular life in a particular body in a particular place. The monk who wasn't there had none of that. And somewhere — maybe in 2.5 million people scrolling past silent meditations at 7am — some part of us already knows. Will we remember to ask? Are we ever gonna care? Let's keep exploring what it means to be human in this Hybrid Analog Digital Age. Stay imperfect, stay human. 
— Marco 📬 Follow the newsletter: An Analog Brain In A Digital Age ⓘ About Marco Ciappelli Co-Founder Studio C60 / ITSPmagazine | Creative Director | Branding & Marketing Advisor | Personal Branding Coach | Journalist | Writer | Podcast: An Analog Brain In A Digital Age ⚠️ Beware: Pigs May Fly | 🌎 LAX🛸FLR 🌍 Learn more about Marco Ciappelli: marcociappelli.com ⓘ About Studio C60 We help cybersecurity startups build trust-based marketing and go-to-market strategies grounded in deep product understanding and real buyer insights. With hundreds of products brought to market and deep connections in the CISO community, we know what security leaders value in vendors. Learn more at studioc60.com
-
237
Before the Robots Run. More reflections from RSAC 2026 — The Power of the Community and the Machines We Invited In. | Written By Marco Ciappelli & Read By Tape3
This was my twelfth RSA Conference. I know that because I remember the first one, 2012, and I've been counting ever since — not out of habit, but because each year feels like a chapter in a longer story I'm trying to read in real time. Twelve years of standing in that same building in San Francisco, watching an industry evolve, stumble, reinvent itself, and occasionally look in the mirror. In the early years it was pure technology. Cryptography, protocols, threat vectors, the architecture of defense. The conversations were technical, the energy was almost academic, the suits were slightly more formal. Then something shifted — gradually, then all at once, the way things usually do. The industry started talking about people. About culture. About the human beings sitting behind the keyboards and the very human mistakes they were making. The themes started reflecting it: community, togetherness, collective defense. Stronger Together. The Human Element. The Power of Community. Year after year, the message from the main stage was some variation of: we are more than our tools. People are what matter. Connection is the point. And then you'd walk the expo floor and see the booths. I'm not being cynical. The community is real — I've felt it, in the hallway conversations, in the side events, in the faces of people I've been running into for a decade who are genuinely trying to make the digital world safer. That part is true and it matters. But there's a growing gap between what the theme says and what the stage performs. And at RSAC 2026, that gap became impossible to ignore. Because this year, while the badge said The Power of Community, the keynotes were almost entirely about agents. Non-human ones. I wrote about this from a different angle in my first piece from RSAC — the Blade Runner angle, the NPC angle, the question of identity and intent when you can no longer tell the difference between a human action and an autonomous one. 
But there's another layer underneath that deserves its own space. It's the pattern. The twelve-year arc. An industry spends years — genuinely, sincerely — rediscovering the human element. Putting people at the center. Building a vocabulary around community, ethics, shared responsibility. And then, in what feels like a single conference cycle, it pivots to deploying a parallel workforce of non-human identities that outnumber us in our own systems, operate at speeds no human can follow, take actions no human directly authorized, and — here's the part that should make everyone pause — that a significant portion of organizations deploying them cannot monitor, cannot fully distinguish from human activity, and in many cases cannot stop once they're running. We built the community. Then we populated it with agents and handed them the keys. I kept thinking, walking those corridors, about the resistance. Not as a metaphor — or not only as a metaphor. In every story we've ever told about machines that gained too much autonomy, there's always a moment before the crisis where someone in the room knew. Where the warning existed. Where the design decision was made anyway because the pressure to ship, to scale, to compete was stronger than the instinct to pause. The difference between those stories and this moment is that we're not watching it happen to fictional characters. We're the ones making the design decisions. And unlike software — which you can patch, roll back, update at 3am while everyone is asleep — agents with autonomy and access are a different category of thing entirely. The old mantra of move fast and break things made a certain kind of sense when what you were breaking was a feature. It makes no sense at all when what you're deploying can act, chain consequences, and escalate — faster than any human response team can follow. This is where Asimov becomes relevant again. 
Not as nostalgia, not as science fiction trivia, but as a genuine design philosophy that the industry would do well to remember. His Three Laws of Robotics weren't invented as a plot device. They were a thought experiment in ethics-by-architecture — what does it look like to build the values into the system before the system runs, rather than hoping to correct the values after something goes wrong? He spent decades of stories showing that even the most carefully designed ethical constraints produce edge cases, contradictions, unintended consequences. But the point was never that ethics-by-design is perfect. The point was that without it, you don't have a fighting chance. We are, right now, at the moment before the laws get written. Some people at RSAC were saying this clearly — not from the main stage, but in the rooms and conversations where the more honest thinking tends to happen. The guardrails exist. The frameworks are being built. But they're being built while the deployment is already running, while the agents are already in the systems, while the governance structures are catching up to a reality that moved faster than the institutional response. That gap is the real story of RSAC 2026. Not the products. Not the keynote soundbites. The gap between the speed of deployment and the maturity of the thinking around what we're actually deploying. The community theme was right, actually — just not in the way the branding intended. The most important community at RSAC 2026 wasn't on the main stage. It was the quieter one: the engineers, researchers, practitioners, and security leaders who understand that we are at an inflection point, and that the decisions made in the next few years about how to design, govern, and constrain autonomous systems will matter far beyond the conference floor in San Francisco. Utopia and dystopia are not predetermined destinations. They're design outcomes. We still get to choose the architecture. 
But the window for making that choice thoughtfully — rather than reactively, in the middle of a crisis that moved faster than our guardrails — is not as wide as we might like to think. Asimov knew that. He wrote the laws before the robots ran. Maybe it's time we did the same. Stay imperfect, stay human. — Marco Let's keep exploring what it means to be human in this Hybrid Analog Digital Age. End of transmission.
-
236
Do Androids Dream of Security Patches? Reflections from RSAC 2026 — Walking the Floor of the Agentic World | Written By Marco Ciappelli & Read by Tape3
Do Androids Dream of Security Patches? Reflections from RSAC 2026 — Walking the Floor of the Agentic World Marco Ciappelli Co-Founder ITSPmagazine & Studio C60 | Creative Director | Branding & Marketing Advisor | Personal Branding Coach | Journalist | Writer | Podcast: An Analog Brain In A Digital Age ⚠️ Beware: Pigs May Fly | 🌎 LAX🛸FLR 🌍 April 7, 2026 This is Marco Ciappelli's Newsletter: An Analog Brain In A Digital Age. This edition draws from ITSPmagazine's on-location coverage at RSAC Conference 2026 in San Francisco. This article — and all of our RSAC Conference 2026 coverage — is made possible with the support of ITSPmagazine's RSAC 2026 sponsors: BLACKCLOAK | Crogl, Inc. | Manifest | Steel Patriot Partners | Skyhigh Security | Stellar Cyber | ESET | Token Security | Object First | Token Watch and listen to the full coverage and all of the conversations we had, including those with our sponsors, at itspmagazine.com/rsac26 A new transmission from An Analog Brain In A Digital Age — formerly Musing On Society and Technology Newsletter, by Marco Ciappelli The theme of RSAC 2026 was "The Power of Community." Nearly forty-four thousand people descended on the Moscone Center in San Francisco for four days of keynotes, corridor conversations, and expo floor theater. Six hundred exhibitors. Hundreds of speakers. And one word — one concept, one obsession — that swallowed everything else whole. Not community. Agents. AI agents. Autonomous. Self-directing. Capable of taking action, accessing systems, making decisions, and — here's the part nobody says quite out loud — doing all of that while you're asleep, or in a meeting, or standing in line for a mediocre conference coffee wondering if you remembered to turn off the stove.
Somewhere between the third and fourth time someone said "agentic AI" to me on that expo floor, I stopped hearing it as a technology term and started hearing it as a sound effect. A drone. A hum. Background noise for a world already running without asking for my permission. The irony of gathering tens of thousands of humans together under the banner of community, only to spend four days talking almost exclusively about non-human workers — that particular irony seemed to float unacknowledged through the air conditioning. And that's when the flashback hit me. Not to any previous RSAC. To a screen. To a world I used to inhabit in the early days of World of Warcraft — before real life staged its intervention and I decided I needed one. In those massive online worlds, NPCs wandered their scripted paths. They had names, routines, dialogue trees, purpose. They looked like characters. They acted like characters. But they weren't. They were behavior patterns wearing a face. And the experienced player learned quickly: don't trust the ones you haven't verified. The convincing ones were sometimes the most dangerous. I kept thinking about that walking those corridors. About all these agents. Already deployed, already running inside enterprise systems, already accessing sensitive data, making tool calls, chaining actions in ways their human creators didn't fully anticipate. The gap between what's been launched in pilot programs and what's actually governed, monitored, and understood is — by most accounts from the conference — vast. Most enterprises are experimenting. Very few have the infrastructure to control what they've set loose. The rest are running something close to shadow agents: identities without owners, actions without accountability, behavior patterns wearing a face. Which brings me, inevitably, to Blade Runner. Not the flying cars. Not the neon rain. The real question at the center of Ridley Scott's masterpiece — and Philip K. 
Dick's before it — is simpler and far more disturbing: how do you tell the difference? The Voight-Kampff test existed precisely because replicants were convincing. They behaved like humans, responded like humans, even believed they were human sometimes. The problem wasn't that they were dangerous by design. The problem was that nobody could reliably track their intent. That's not science fiction anymore. It's the central problem RSAC 2026 couldn't stop circling. A significant portion of organizations at this point cannot distinguish AI agent activity from human activity in their own environments. The security industry has built its own Voight-Kampff problem — and hasn't finished building the test. The vocabulary had shifted too, from the previous year. At Black Hat last summer, the conversation was about whether to trust agents. At RSAC 2026 it had already moved to identity. To behavior. To intent. One of the sharper ideas surfacing from the keynotes was the distinction between delegation and trusted delegation. Giving an agent a task is easy. Building the security infrastructure to actually trust that delegation — to know what the agent can touch, what it can't, what it will do when nobody is watching — that's where it gets complicated. Without it, you end up on what someone on the main stage called, in a phrase that landed hard, a fast track to bankruptcy. Because agents don't just answer questions. They act. And some of those actions are irreversible. So the question is no longer "who are you." It's "what do you want — and do I actually know what you're capable of?" Just like a Blade Runner asking a replicant about a tortoise left in the desert sun. One researcher put it with a directness I appreciated: we need an HR view of agents. Onboarding, monitoring, offboarding. If there's no business justification for an agent's existence — remove it. Which is a pragmatic way of saying: even our digital workforce needs accountability. Even our NPCs need a character sheet.
And yet the deployment keeps accelerating. Agents with access and no clear owner. Identities running at machine speed through systems built for human-paced governance. The attack surface expanding quietly while the keynote applause was still echoing in the hall. Security researchers demonstrated live that vulnerabilities in agentic ecosystems are no longer theoretical — they're being exploited, chained, moving faster than the teams tasked with stopping them.

We built the agents. We gave them access. We handed them the keys and stood back saying impressive, right? — hoping nothing goes wrong. With a chatbot, you worried about the wrong answer. With an agent, you worry about the wrong action. That's not a product problem wearing a vendor badge. That's a civilization-scale question dressed up in a conference lanyard.

The Blade Runner didn't just hunt replicants. He had to learn to recognize them first. We'd better start learning fast — before it gets really awkward. As if it isn't already.

Let's keep exploring what it means to be human in this Hybrid Analog Digital Age. Stay imperfect, stay human.

— Marco

End of transmission.

ⓘ About Marco Ciappelli
Co-Founder Studio C60 / ITSPmagazine | Creative Director | Branding & Marketing Advisor | Personal Branding Coach | Journalist | Writer | Podcast: An Analog Brain In A Digital Age ⚠️ Beware: Pigs May Fly | 🌎 LAX🛸FLR 🌍
These shows are all part of ITSPmagazine — which he co-founded with his good friend Sean Martin, to explore and discuss topics at The Intersection of Technology, Cybersecurity, and Society.™️ Want to connect with Sean and Marco On Location at an event or conference near you?
See where they will be next: https://www.itspmagazine.com/on-location
Learn more about Marco Ciappelli: marcociappelli.com
ⓘ About Studio C60
We help cybersecurity startups build trust-based marketing and go-to-market strategies grounded in deep product understanding and real buyer insights. With hundreds of products brought to market and deep connections in the CISO community, we know what security leaders value in vendors. Learn more at studioc60.com
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
235
Marketing, Brand, And Culture: Are You Paying the Silicon Valley Tax? A Conversation with Nick Richtsmeier of CultureCraft | Hosted by Marco Ciappelli
**About this episode** What if everything you've been spending on digital marketing isn't an investment — but a tax? Nick Richtsmeier, founder of CultureCraft, joins Marco Ciappelli for a Brand Highlight that cuts straight to the root of why so many organizations feel stuck: not a marketing problem, but an alignment problem. Nick introduces the concept of the Silicon Valley tax — the ongoing cost most organizations pay to platforms that have no real incentive to show them what's working. He challenges the "attention economy" framing, arguing that what's actually being bought and sold is addictive behavior engineered by the algorithm. And he offers a different path: building trust in a humanist way, grounded in real alignment across culture, organizational design, positioning, point of view, and core community. The result is a conversation about brands — but really about integrity. About whether what an organization says and what it does are actually the same thing. And about why asking marketing to be the "sin eater" for every internal dysfunction is a strategy that will always come up short. 
**Connect with Nick Richtsmeier**
[Nick Richtsmeier on LinkedIn](https://www.linkedin.com/in/nickrichtsmeier/) | [CultureCraft](http://www.culturecraft.com) | [CultureCraft on LinkedIn](https://www.linkedin.com/company/culturecraftconsulting/)
**Connect with Marco & Studio C60**
[Marco Ciappelli on LinkedIn](https://www.linkedin.com/in/marco-ciappelli) | [Studio C60](https://www.studioc60.com) | [ITSPmagazine](https://www.itspmagazine.com)
**Keywords**
brand strategy, organizational culture, trust building, marketing strategy, CultureCraft, Nick Richtsmeier, Silicon Valley tax, attention economy, algorithmic economy, brand alignment, digital marketing, humanist branding, organizational design, Trust Made Growth, sin eater marketing, brand highlight, Studio C60, ITSPmagazine, Marco Ciappelli
**Want to tell your story?**
[Full Length Brand Story](https://www.studioc60.com/content-creation#full) | [Brand Spotlight Story](https://www.studioc60.com/content-creation#spotlight) | [Brand Highlight Story](https://www.studioc60.com/content-creation#highlight)
This is a Brand Highlight — a ~5 min intro conversation spotlighting the guest and their company. Learn more: [studioc60.com/creation#highlight](https://www.studioc60.com/creation#highlight)
-
234
When Sci-Fi Becomes the Business Plan | A Brand Highlight Conversation with Jacob Flores, Head of Research at Type One Ventures | Hosted by Marco Ciappelli
When Sci-Fi Becomes the Business Plan A Brand Highlight Conversation with Jacob Flores, Head of Research at Type One Ventures There is a version of investing that asks what the return will be. And then there is the version that asks what kind of future the investment makes possible. Jacob Flores, Head of Research at Type One Ventures, is working firmly in the second category. Type One Ventures takes its name from the Kardashev Scale — a framework developed by Soviet astrophysicist Nikolai Kardashev that ranks civilizations by their level of technological advancement. A Type One civilization has mastered its home planet and is beginning to extend its reach beyond it. That is the destination this firm is trying to fund. Flores, a former engineer and product manager with roughly a decade of experience across industries, leads the research function at Type One with a focus on AI, neurotech, and biotechnology. The firm's investment lens is as much philosophical as it is financial. Type One looks for platform builders — companies whose core technology can be stacked across multiple applications, cultivating new marketplaces and entirely new categories of industry. Manufacturing in space is one clear example: in microgravity, it becomes possible to grow proteins, print circuits, and develop materials that cannot be produced the same way on Earth — yet those products have immediate, tangible value back on the ground. The thesis extends well beyond orbit. Type One is also backing neurotechnology companies working to restore vision and movement for people who have lost those abilities, and longevity research aimed at extending healthy human life. Flores frames these not as moonshots for their own sake, but as the new foundation layer for an entirely new level of global industry. This is a Brand Highlight. A Brand Highlight is a ~5 minute introductory conversation designed to put a spotlight on the guest and their company. 
Learn more
Host: Marco Ciappelli, Co-Founder, ITSPmagazine
Guest: Jacob Flores, Head of Research, Type One Ventures
Resources: Type One Ventures | Type One Ventures on LinkedIn
Want to tell your story? Full Length Brand Story | Brand Spotlight Story | Brand Highlight Story
Keywords: Jacob Flores, Type One Ventures, Marco Ciappelli, brand story, brand marketing, marketing podcast, brand highlight, space technology, deep tech, venture capital, multi-planetary civilization, Kardashev Scale, manufacturing in space, neurotech, longevity, AI, biotechnology, frontier technology, space investing, human longevity, platform builders
-
233
Protecting Kids Online Since 2007 and in the Age of AI: Ben Halpert on Savvy Cyber Kids at RSAC 2026
In this episode from RSA Conference 2026, Marco Ciappelli sits down with Ben Halpert, founder of the non-profit organization Savvy Cyber Kids, to discuss the critical intersection of child development and technology. Since its founding in 2007, Savvy Cyber Kids has been on a mission to provide parents and educators with the tools needed to guide children through the digital world. Ben explains why introducing technology too early can be detrimental to a child's emotional preparedness and brain development, and why adult-led guidance is essential even when kids seem like "tech experts."

In this conversation, we explore:
- The Evolution of Threats: Moving from MySpace and CRT monitors to 24/7 access via mobile devices.
- Early Intervention: Why the "rhyme and picture book" approach works for children as young as three to teach concepts like online aliases and stranger safety.
- Safe AI for Kids: Introducing a new partnership with Chaperone, a platform featuring "homework mode" and parental controls to ensure AI is a tool for learning, not a shortcut for thinking.
- Going Global: How the organization has expanded internationally with materials translated into Spanish, German, French, and Hebrew.

About Our Guest
Ben Halpert is a cybersecurity veteran with over 25 years of experience and the founder of Savvy Cyber Kids. He is dedicated to helping parents navigate the "wild" of the internet with positive, developmentally appropriate programming.

Resources
Savvy Cyber Kids Website: savvycyberkids.org
More RSAC 2026 Coverage: itspmagazine.com/rsac
Marco's Website: Marcociappelli.com
-
232
Everyone Is Talking About Agentic AI at RSAC 2026. Almost Nobody Is Saying Anything Different | With Marco Ciappelli and Theresa Lanowitz
Marco Ciappelli sits down with cybersecurity evangelist and thought leader Theresa Lanowitz at the end of day one on the expo floor for a conversation that cuts through the noise — from shadow AI and leadership accountability, to brand identity, to why most companies here can't articulate a message above the fray. Plus: a Peloton story that accidentally became the best explanation of brand loyalty you'll hear all week.

Chapters:
- Judge Sentences CEO to 8 Hours on the RSAC Floor
- End of Day One: Setting the Scene
- Who Is Theresa Lanowitz
- The Binary View of AI: Love It, Fear It, or Find the Gray
- Leadership's Role in the AI Transformation
- Shadow AI: The Insider Threat Nobody Is Naming
- Why Some Companies Still Say No to AI
- Fighting With Your LLM (We All Do It)
- AI Slop and the Brand Differentiation Problem
- The Peloton Story: What Real Brand Loyalty Looks Like
- RSAC 2026: Everyone Sounds the Same
- Where Is Agentic AI Actually Going
- Integration, Orchestration, ROI: The Real Questions
- Make AI Your Own

What's actually covered:
→ Why agentic AI is dominating RSAC 2026 — and why it all sounds the same
→ Shadow AI: the insider threat nobody is calling an insider threat
→ What strong brand presence actually looks like (hint: it's not a circus tent)
→ Why fear — not budget — is the real reason companies still say no to AI
→ Integration, orchestration, ROI: what comes after the hype
→ The one message that matters: make AI your own

🔗 More from RSA Conference 2026: itspmagazine.com/rsac
-
231
New Book: Climate Capital — Investing in the Tools for a Regenerative Future | An Interview with Tom Chi | An Analog Brain In A Digital Age With Marco Ciappelli
New Book: Climate Capital — Investing in the Tools for a Regenerative Future | An Interview with Tom Chi | An Analog Brain In A Digital Age With Marco Ciappelli

What if the economy isn't broken — just badly designed? Tom Chi, Google X founding member, inventor of 77 patents, and venture capitalist at At One Ventures, joined me on An Analog Brain In A Digital Age to discuss his new book Climate Capital: Investing in the Tools for a Regenerative Future. From the streets of Florence to the strip malls of Silicon Valley, from the mechanics of attention capture to the physics of ecological economics, this conversation goes far beyond climate. It's about how we design the systems we live inside — and whether we have the will to redesign them before it's too late.

📺 Watch | 🎙️ Listen | marcociappelli.com

Tom Chi has worked on things that changed the world. Microsoft Office. Web search. The self-driving car. Google Glass. He'll tell you himself that not all of them were hits, and he's fine with that — that's what it means to be an inventor. But what he's working on now is different in scale from anything before. Not a product. Not a platform. A redesign of the global economy.

His new book, Climate Capital: Investing in the Tools for a Regenerative Future, starts from a premise that sounds radical until you think about it for more than a few minutes: economics is a design discipline. And right now, it's poorly designed. Not maliciously — poorly. We built systems optimized for short-term capital extraction, and we're living with the consequences. The question Tom is asking is whether we can redesign them before those consequences become irreversible.

He didn't get there through ideology. He got there through Florence. Tom was auditing sustainable MBA courses alongside his partner when he was invited to a conference in Italy. He landed, got a day off, wandered the streets — and something clicked. The entire city is built from sustainable materials.
And it's one of the most beautiful places on earth. That moment demolished an assumption he didn't even know he was carrying: that sustainable living means downgrading. Florence is a 2,000-year-old counterexample to every joke about Birkenstocks and cold showers. We knew how to do this. We just forgot. Which brings us to the first big thread of our conversation: the pattern of forgetting. We talked about this in the context of technology, not history. Specifically, how the shift from software you paid for to software supported by advertising quietly changed everything. When you pay for a tool, the goal is to make it better. When the tool is supported by advertisers, the goal is to keep you inside it as long as possible. Clippy used to annoy us because it interrupted our train of thought. Now interrupting our train of thought is the entire business model. Tom has a phrase for what's happening at scale: cognitive despoiling. We spent the 20th century strip mining the physical resources of the planet. We're spending the 21st century strip mining the cognitive resources of humanity. There's a finite number of coherent thoughts this civilization can produce. And we're burning through them — with misinformation, amygdala triggers, and dopamine loops — the same way we burned through forests and waterways. The damage is invisible because it's underwater, like ocean trawling. But it's real. And it compounds across generations. This is where I had to push back a little. Because I grew up in Florence. I made the jump to digital. I love my vinyls and I love my streaming library. I'm part of the contradiction he's describing. And I asked him: given all this, where do you even start? His answer is the most practical thing I've heard in a long time. Start with physical businesses. The ones actually causing most of the damage — to water, soil, air, biodiversity. 
And here's the part that almost nobody is talking about: 90% of the cost structure of a physical business already aligns ecological and economic goals. Fewer raw materials used means lower feedstock costs and less extraction. Less energy consumed means lower processing costs and fewer emissions. Shorter supply chains mean lower logistics costs and fewer transport emissions. The economy and the ecology are already pointing the same direction on 90% of what matters. The 5% that isn't aligned — pollution — is what the lobbyists fight about. So that's what dominates the news. And that's why we think this is harder than it is. Tom's firm, At One Ventures, is built around this insight. They invest in what he calls the triad: disruptive deep tech that delivers radically better unit economics and radically better environmental outcomes at the same time. Their portfolio companies don't sell sustainability. They sell efficiency. The ecological benefit is baked in by design. The customers buy it because it's cheaper and better. The planet wins as a side effect. That's the book. Part toolkit, part framework, part demonstration that the future we need is already technically possible. The Four C's — critical thinking, creativity, compassion, community — are the human skills that will matter most as AI and robotics take over the rest. And the Three Epochs of Ecological Technology are the roadmap from the economy we have to the one that could actually last. I don't know if we'll get there in time. Neither does Tom. But I left this conversation thinking something I don't think often enough: the design problem is solvable. We just have to decide we want to solve it. Climate Capital is out now from Wiley. Link below. And if this is the kind of conversation you come here for — subscribe to the newsletter at marcociappelli.com. 
— Marco
Co-Founder ITSPmagazine & Studio C60 | Creative Director | Branding & Marketing Advisor | Personal Branding Coach | Journalist | Writer | Podcast: An Analog Brain In A Digital Age ⚠️ Beware: Pigs May Fly | 🌎 LAX🛸FLR 🌍
____________
About Marco
Marco Ciappelli is Co-Founder & CMO of ITSPmagazine, Co-Founder & Creative Director of Studio C60, Branding & Marketing Advisor, Personal Branding Coach, Journalist, Writer, and Host of An Analog Brain In A Digital Age podcast. Born in Florence, Italy, and based in Los Angeles, he explores the intersection of technology, society, storytelling, and creativity — with an analog brain, in a digital age. 🌎 marcociappelli.com
___________
About the Guest
Tom Chi is a lifelong technologist, inventor, and Google X founding member who contributed to Google Glass, the Waymo self-driving car, and Project Loon. He holds degrees in electrical engineering from Cornell University, is a named inventor on 77 patents, and has held executive roles at Microsoft, Yahoo, and Google. After Google, he mentored 200+ entrepreneurs on global development challenges before professionalizing his investment work at Hack VC and Crosslink Capital. He is now Managing Partner at At One Ventures, a venture firm with a mission to help humanity become a net positive to nature. Climate Capital: Investing in the Tools for a Regenerative Future (Wiley, February 2026) is his first book. 🔗 tomchi.com
-
230
Do You Know What's In Your Software? A Cybersecurity Story with Manifest Cyber | A Brand Highlight Conversation with Daniel Bardenstein, Co-Founder at Manifest Cyber
There is a question that sounds almost embarrassingly simple. After a vulnerability is discovered in a piece of widely used software — something like Log4Shell, which shook the security world and left hundreds of thousands of organizations exposed overnight — the question organizations scrambled to answer was this: where is this code, and what does it touch? Most couldn't answer it. Not the Fortune 500 companies. Not the government agencies. Not the critical infrastructure operators. Not the hospitals or the banks or the utilities. They had built and bought mountains of software over years and decades, and when the moment came to understand what was actually inside it, they were effectively blind. That gap is exactly what Daniel Bardenstein set out to close when he co-founded Manifest Cyber in 2023. And in a conversation on ITSPmagazine's Brand Highlight series, he made a case for technology transparency that is hard to argue with — not because it's technically complex, but because the analogy he draws is so strikingly obvious once you hear it. "If you want to buy a house, you get to go inside the house, do the home inspection," he said. "You want to buy food from the grocery store — you can look at the ingredients. Even our clothes tell you what they're made of, how to care for them, and where they're from." But software? The technology running hospital MRI machines, weapon systems, financial infrastructure, water delivery? No transparency required. No ingredient label. No inspection rights. Just trust. That trust, as Log4Shell demonstrated, is a vulnerability in itself. Bardenstein came to this problem with credentials that few founders in the space can claim. Before starting Manifest, he spent four and a half years in the US government leading large-scale cyber programs and serving as technology strategy lead at CISA — the Cybersecurity and Infrastructure Security Agency. 
He saw firsthand how defenders are perpetually at a disadvantage, operating without the basic visibility they need to do their jobs. His mission became building the tools to change that. The problem, he's quick to point out, has not improved in the years since Log4Shell. Software supply chain attacks have multiplied — XZ Utils, NPM Polyfill, and others following the same pattern: trusted software becomes the attack vector, and it spreads fast. Meanwhile, most security teams are still operating with SCA tools that generate noisy, overwhelming alerts and vendor risk programs built on Excel spreadsheets and questionnaires rather than actual empirical data about the security of what they're buying. "Security teams have a false sense of security," Bardenstein said. The gap between what organizations think they know and what they actually know about their software supply chains remains dangerously wide. Manifest Cyber addresses this across the full lifecycle. For organizations that build software, the platform maps every open source dependency, assesses it for risk, and ensures developers can write more secure code without losing velocity. For organizations that buy software — which is everyone — it finds risks before procurement, then continuously monitors every third party component so that when something breaks, they know the blast radius in seconds, not weeks. The timing matters. Regulation is catching up to the problem. The EU AI Act, the Cyber Resilience Act, and a growing body of global policy are beginning to demand exactly the kind of software supply chain transparency that Manifest is built to provide. Organizations that wait to build this capability will find themselves scrambling to comply — those that build it in now will have it as a competitive advantage. The ingredient label for software has always been missing. Manifest Cyber is writing it. 
________________________________________________________________
Marco Ciappelli interviews Daniel Bardenstein, CEO & Co-Founder of Manifest Cyber, for ITSPmagazine's Brand Highlight series.
HOST
Marco Ciappelli — Co-Founder & CMO, ITSPmagazine | Journalist, Writer & Branding Advisor
🌐 https://www.marcociappelli.com
🌐 https://www.itspmagazine.com
GUEST
Daniel Bardenstein, CEO and Co-Founder of Manifest Cyber
https://www.linkedin.com/in/bardenstein
RESOURCES
Manifest Cyber: https://www.manifestcyber.com
Are you interested in telling your story?
▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight
▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight
KEYWORDS
Manifest Cyber, software supply chain security, SBOM, Log4Shell, open source vulnerability, technology transparency, Daniel Bardenstein, CISA, software composition analysis, third party risk, EU Cyber Resilience Act, AppSec
-
229
New Book! Lost in Time — Our Forgotten and Vanishing Knowledge | Forgotten Technology, Ancient Wisdom & Digital Amnesia | An Interview with Jack R. Bialik | An Analog Brain In A Digital Age With Marco Ciappelli
New Book: Lost in Time — Our Forgotten and Vanishing Knowledge | An Interview with Jack R. Bialik | An Analog Brain In A Digital Age With Marco Ciappelli There's a particular arrogance embedded in how we talk about progress. We speak about innovation as if it moves in one direction only — forward, upward, smarter, faster. But what if the line isn't straight? What if it loops, doubles back, and occasionally vanishes entirely? That's the uncomfortable question at the center of my conversation with Jack R. Bialik. His book Lost in Time: Our Forgotten and Vanishing Knowledge doesn't read like a history lesson. It reads like a case file — evidence, example by example, that the civilization we assume is the most advanced in human history is also, in some critical ways, deeply amnesiac. Take cataract surgery. We learned it in the 1700s, right? Except we didn't. Indians were performing it in 800 BC. The ancient Egyptians and Babylonians had diagrams of the procedure dating back to 2,400 BCE. The knowledge existed, worked, and then — somewhere in the chaos of collapsing empires and burning libraries — it vanished. We didn't progress past it. We forgot it, and then reinvented it from scratch, centuries later, convinced we were doing something new. Or the Baghdad Battery: clay pots, 2,000 years old, that when filled with acid can generate 1.1 volts of electricity. We don't know what they used them for. We don't know who figured it out. We just know it worked, it existed, and then it didn't anymore. This is what Bialik calls the pattern of loss — and it's not random. It follows catastrophe: the Library of Alexandria, the systematic destruction of Mayan records, the slow erosion of oral traditions as writing systems took over. Knowledge disappears when the systems that carry it collapse. And here's where the conversation gets uncomfortably relevant: we are building those systems right now, and we are not thinking about how long they'll last. 
The curator at the Computer History Museum told Bialik that to preserve the data from early IBM PCs and Macintosh computers, they had to print it on paper. The floppy drives had become brittle. The formats were unreadable. The digital archive was failing — and the only solution was to go analog. A vinyl record from the 1920s still plays. A CD from the 1980s may not survive another decade. I've been thinking about this since we recorded. My brain is analog — that's not just a podcast title, it's a philosophy. I grew up in Florence, surrounded by things that had survived centuries because they were made to last: stone, fresco, manuscript. Then I jumped on the digital train like everyone else, seduced by infinite libraries on my phone, music on demand, knowledge at my fingertips. But what Bialik is pointing out is that fingertips are fragile. And so are hard drives. The deeper issue isn't storage format. It's the distinction Bialik draws between knowledge and wisdom. Knowledge is the data — the cataract surgery technique, the battery design, the pyramid engineering. Wisdom is knowing why it matters, when to use it, and what the consequences might be. We've gotten extraordinarily good at accumulating knowledge. We are considerably worse at transmitting wisdom. And wisdom, Bialik argues, doesn't live in databases. It lives in the space between people — in stories, in teaching, in the slow transmission of judgment across generations. That's why oral tradition survived when everything else failed. Not because it was more sophisticated, but because it was more human. It didn't require a device to run on. I don't know how to solve the digital longevity problem. Neither does Bialik — not yet. But I think the first step is admitting we have one. That's actually one of the quietest, most powerful arguments in the book: be humble. We don't know everything. We never did. And some of the things we've lost might be exactly what we need right now. 
The question isn't just what we've forgotten. It's what we're forgetting today, while we're too busy scrolling to notice. Grab Lost in Time: Our Forgotten and Vanishing Knowledge — link below — and spend some time with a perspective that goes very, very far back. Which is maybe the only way to see very, very far forward. And if this kind of conversation is what you come here for, subscribe to the newsletter at marcociappelli.com. More of this. Less noise.
— Marco Ciappelli
Co-Founder ITSPmagazine & Studio C60 | Creative Director | Branding & Marketing Advisor | Personal Branding Coach | Journalist | Writer | Podcast: An Analog Brain In A Digital Age ⚠️ Beware: Pigs May Fly | 🌎 LAX🛸FLR 🌍
____________
About Marco
Marco Ciappelli is Co-Founder & CMO of ITSPmagazine, Co-Founder & Creative Director of Studio C60, Branding & Marketing Advisor, Personal Branding Coach, Journalist, Writer, and Host of An Analog Brain In A Digital Age podcast. Born in Florence, Italy, and based in Los Angeles, he explores the intersection of technology, society, storytelling, and creativity — with an analog brain, in a digital age. 🌎 marcociappelli.com
___________
About the Guest
Jack R. Bialik is a technology expert and author with a 40-year career spanning electrical engineering, project management, F-15 fighter simulation for the U.S. Air Force, Nokia, Motorola, and the Department of Homeland Security. Lost in Time: Our Forgotten and Vanishing Knowledge is the result of years of research into the technologies, wisdom, and innovations that vanished from our collective memory — and what that means for our digital future. 🌎 jrbialik.com
-
228
Agade: The AI-Powered Wearable Robots That Protect Workers, Not Replace Them | A Brand Highlight Conversation with Lorenzo Aquilante, CEO and Co-Founder of AGADE
Agade: The AI-Powered Wearable Robots That Protect Workers, Not Replace Them

AI Meets Human Craftsmanship

There's something poetic about a technology born to help people with muscular dystrophy finding its second life on factory floors and logistics warehouses. That's the story of Agade, an Italian deeptech startup that began as a research project at Politecnico di Milano and evolved into something far more ambitious: a mission to preserve human craftsmanship in an age of automation.

I sat down with Lorenzo Aquilante, CEO and co-founder of Agade, to talk about their journey from healthcare innovation to industrial exoskeletons — and what it was like showcasing their latest product at CES 2026.

The origin story matters here. Back in 2017, researchers at Politecnico di Milano started developing exoskeletons for people affected by muscular dystrophy. They created something different — a semi-active model powered by AI that recognizes when a user is lifting and responds accordingly. It wasn't just about motors and sensors. It was about intelligence.

Then companies came knocking. Manufacturing firms, logistics operations, industries where human workers still matter because their skills, experience, and judgment can't be replaced by machines. They saw potential. Why not use this technology to protect the people doing the heavy lifting — literally?

Agade was founded in 2020 with a clear mission: preserve craftsmanship against the physical toll of material handling. Not replace humans. Protect them.

The company now has two products. The first, launched in 2024, focuses on shoulder assistance. The second — the one they brought to CES 2026 — targets the lower back, which makes sense when you consider that back pain is practically an occupational hazard for anyone moving materials all day.

What makes Agade's approach different is that semi-active AI system. The exoskeleton knows when you're lifting. It responds. It's not just a passive brace or a fully motorized suit that takes over.
It's somewhere in between—smart enough to help, light enough to wear all day.Lorenzo emphasized something that resonated with me: the importance of feedback. From day one, Agade has been obsessed with real-world testing. Not lab conditions. Actual workers doing actual jobs. Because the buyer isn't the user—companies purchase these for their employees—and that creates a unique dynamic. You need both sides to believe in the technology.The CES experience brought that home. There's always the initial wow factor when someone sees a wearable robot with motors and sensors. But the real work happens after the demo, when users tell you what needs to improve. That's where the collaboration lives.And here's what struck me most about this conversation: Agade isn't trying to remove humans from the equation. They're trying to keep humans in it longer, healthier, and more capable. In a world racing toward full automation, there's something refreshing about a company betting on human skill—and building technology to protect it.The products are available globally. You can reach Agade through their website at agadexoskeletons.com, find them on LinkedIn and other social channels, and even arrange trials before committing to a purchase.For those of us watching the intersection of AI, robotics, and human labor, Agade represents a different path. Not humans versus machines. Humans with machines. 
Tools that amplify rather than replace.That's a story worth telling.Marco Ciappelli interviews Lorenzo Aquilante, CEO & Co-Founder of Agade, for ITSPmagazine's Brand Highlight series following CES 2026.>>> Marcociappelli.comGUESTLorenzo Aquilante, CEO and co-founder of Agadehttps://www.linkedin.com/in/lorenzo-aquilante-108573b0/RESOURCESAGADE: https://agade-exoskeletons.comAre you interested in telling your story?▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlightKEYWORDSAgade, exoskeleton, CES 2026, wearable robotics, AI, future of work, industrial exoskeleton, made in Italy, workplace safety, deeptech, robotics. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
227
Chat Control: The EU Law That Could End Privacy and Why Breaking Encryption Won't Stop Criminals | A Conversation with Cybersecurity Expert John Salomon | Redefining Society and Technology Podcast with Marco Ciappelli
None of Your Goddamn Business

John Morgan Salomon said something during our conversation that I haven't stopped thinking about. We were discussing encryption, privacy laws, the usual terrain — and he cut through all of it with six words: "It's none of your goddamn business."

Not elegant. Not diplomatic. But exactly right.

John has spent 30 years in information security. He's Swiss, lives in Spain, advises governments and startups, and uses his real name on social media despite spending his career thinking about privacy. When someone like that tells you he's worried, you should probably pay attention.

The immediate concern is something called "Chat Control" — a proposed EU law that would mandate access to encrypted communications on your phone. It's failed twice. It's now in its third iteration. The Danish Information Commissioner is pushing it. Germany and Poland are resisting. The European Parliament is next.

The justification is familiar: child abuse materials, terrorism, drug trafficking. These are the straw-man arguments that appear every time someone wants to break encryption. And John walked me through the pattern: tragedy strikes, laws pass in the emotional fervor, and those laws never go away. The Patriot Act. RIPA in the UK. The Clipper Chip the FBI tried to push in the 1990s. Same playbook, different decade.

Here's the rhetorical trap: "Do you support terrorism? Do you support child abuse?" There's only one acceptable answer. And once you give it, you've already conceded the frame. You're now arguing about implementation rather than principle.

But the principle matters. John calls it the panopticon — Jeremy Bentham's prison design where all cells face inward toward a central guard tower. No walls. Total visibility. The transparent citizen. If you can see what everyone is doing, you can spot evil early. That's the theory.

The reality is different. Once you build the infrastructure to monitor everyone, the question becomes: who decides what "evil" looks like? Child pornographers, sure. Terrorists, obviously. But what about LGBTQ individuals in countries where their existence is criminalized? John told me about visiting Chile in 2006, where his gay neighbor could only hold his partner's hand inside a hidden bar. That was a democracy. It was also a place where being yourself was punishable by prison.

The targets expand. They always do. Catholics in 1960s America. Migrants today. Anyone who thinks differently from whoever holds power at any given moment. These laws don't just catch criminals — they set precedents. And precedents outlive the people who set them.

John made another point that landed hard: the privacy we've already lost probably isn't coming back. Supermarket loyalty cards. Surveillance cameras. Social media profiles. Cookie consent dialogs we click through without reading. That version of privacy is dead. But there's another kind — the kind that prevents all that ambient data from being weaponized against you as an individual. The kind that stops your encrypted messages from becoming evidence of thought crimes. That privacy still exists. For now.

Technology won't save us. John was clear about that. Neither will it destroy us. Technology is just an element in a much larger equation that includes human nature, greed, apathy, and the willingness of citizens to actually engage. He sent emails to 40 Spanish members of the European Parliament about Chat Control. One responded.

That's the real problem. Not the law. Not the technology. The apathy.

Republic comes from "res publica" — the thing of the people. Benjamin Franklin supposedly said it best: "A republic, if you can keep it." Keeping it requires attention. Requires understanding what's at stake. Requires saying, when necessary: this is none of your goddamn business.

Stay curious. Stay human. Subscribe to the podcast. And if you have thoughts, drop them in the comments — I actually read them.

Marco Ciappelli

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.
> https://www.linkedin.com/newsletters/7079849705156870144/

Marco Ciappelli: https://www.marcociappelli.com/

John Salomon
Experienced, international information security leader. vCISO, board & startup advisor, strategist.
https://www.linkedin.com/in/johnsalomon/
-
226
Paoletti Custom Guitars at NAMM 2026: Handcrafted in Florence, Italy from Wine Barrel Wood | A Brand Highlight Conversation with Filippo Martini, Managing Director at Paoletti Guitars | NAMM 2026 Coverage
Wine Barrels, Duomo Marble, and Florence: Paoletti Custom Guitars at NAMM 2026

I've been away from Florence for 25 years. I didn't know there was a guitar company like this back home.

At NAMM 2026, I found Filippo Martini from Paoletti Custom Guitars—a boutique manufacturer based in the heart of Tuscany, building instruments that are equal parts guitar and artwork.

Paoletti does something no one else does: they build guitars from chestnut wood sourced from Italian wine barrels. The material offers a wide harmonic spectrum, but it's difficult to work with. You need to know how to handle it. Founder Fabrizio Paoletti figured it out, and now every guitar they produce shows the natural grain—no opaque finishes, no hiding the wood.

The craftsmanship runs deep. Bridges, pickguards, pickups—all made in-house. Necks carved from Canadian maple, roasted on-site. 99% of the process happens in Tuscany. As Filippo put it, "Kilometer zero." Zero miles. Everything local except the screws.

Their model is 100% custom. You don't buy a Paoletti off the rack. You tell them your style, your sound, the genre you play. They build around your vision while keeping the Italian essence intact—chestnut wood, Italian-made components, tailored to your idea.

But what stopped me cold was the Duomo collection.

Eight individual guitars, each hand-engraved by Fabrizio Paoletti himself. Three years of work. The subject: Florence's cathedral—the Duomo di Santa Maria del Fiore.

This isn't just decoration. Paoletti secured an official partnership with the Opera del Duomo, the authority that oversees the cathedral. The back of each guitar reproduces the marble floor pattern from inside the Duomo. And when the collection is complete this October, every guitar will contain an actual piece of marble from the cathedral.

I got shivers standing there.

This is what happens when guitar making meets Italian heritage. It's not about specs or market positioning. It's about place, history, and craft passed down through generations.

Filippo invited me to visit the workshop in Florence when I return in April. I'm going. I want to see where this happens—where wine barrel wood becomes an instrument, where cathedral marble gets embedded into a guitar body, where a team of artisans builds one-of-one pieces for players around the world.

Florence is known for many things. Leather. Art. Architecture. The Renaissance itself. Now I know it's also home to some of the most distinctive guitars being made anywhere.

Paoletti proves that boutique doesn't mean small ambitions. They're partnering with galleries in Dubai, working with the Duomo authorities, and bringing Florence to NAMM.

Not bad for a company I didn't even know existed until I walked the show floor and heard an Italian accent.

Sometimes you find home in unexpected places.

Marco Ciappelli interviews Filippo Martini from Paoletti Custom Guitars at NAMM 2026 for ITSPmagazine.

Part of ITSPmagazine's On Location Coverage at NAMM 2026.
🌐 https://www.itspmagazine.com/the-namm-show-2026-namm-music-conference-music-technology-event-coverage-anaheim-california

__________________________

This is a Brand Highlight. A Brand Highlight is an introductory conversation designed to put a spotlight on the guest and their company. Learn more: https://www.studioc60.com/creation#highlight

GUEST
Filippo Martini, Managing Director at Paoletti Guitars | Florence | Tuscany | Italy

RESOURCES
Learn more about Paoletti Guitars: https://www.paolettiguitars.com

Are you interested in telling your story?
▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight
▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight
-
225
Yamaha at NAMM 2026: Introducing Chris Buck Custom Revstar Guitar, Pacifica SC, and a deep dive into the BB735 Bass | A Brand Highlight Conversation with Andy Winston, Product Training Specialist at Yamaha | NAMM 2026
60 Years Forward: Yamaha at NAMM 2026

Some brands chase nostalgia. Yamaha builds forward.

At NAMM 2026, I spoke with Andy Winston about 60 years of Yamaha guitar design—and why this company keeps delivering instruments that punch way above their price point.

The conversation started with the Chris Buck Signature Revstar. Buck is the guitarist for Cardinal Black, and he's earned his own model. The specs tell the story: overwound P90 pickups for a hotter sound, a wraparound tailpiece with adjustable saddles, stainless steel frets, lightweight tuners, and those old-school inlays from the first-generation Revstar. No boost circuit. Buck wanted it stripped to essentials.

Then Andy dropped a tease: Matteo Mancuso is getting his own Revstar this summer. The Italian virtuoso. That's a statement.

We moved to the new Pacifica SC—Yamaha's answer for T-style players. Humbucker in the neck, single coil in the bridge, and pickups designed in partnership with Rupert Neve's team. The boost circuit under the bridge pickup gives you five sounds from two pickups. Made in Indonesia at $999, or Made in Japan with a compound-radius fretboard and IRA wood treatment at $2,199.

I bought my nephew a Pacifica. Entry level, around $200. It works. That's Yamaha's philosophy—you can start at $200 and work your way up to a Mike Stern signature model without ever leaving the family.

But here's what stuck with me.

Andy said something that defines Yamaha's approach: "We don't do reissues. You're never gonna see us reissue a 1972."

Sixty years of guitar history, and they're not looking backward. The Revstar draws inspiration from the 1970s Super Flight, sure—but it's chambered mahogany, tuned to eliminate harsh mid-range frequencies. Yamaha builds pianos, violins, marimbas. They know how to tune wood. They apply that knowledge to electric guitars in ways other companies don't.

The BB Bass series came next. String-through body with a 45-degree break angle. Extra bolts pulling the neck tight into the pocket. A maple stripe running through the center of the body for note response. Active/passive switching. Five-ply neck. Professional features at prices that don't require a car payment.

"We give people more instrument than what a price tag says," Andy told me.

That's not marketing. That's mission.

Before we wrapped, Andy shared a personal story. In 1977, hair down to his shoulders, bell bottoms on, his mom decided he was serious about guitar. She bought him a Yamaha FG-75. His first real acoustic. He doesn't have that one anymore, but he found a replacement. Had to.

That's brand loyalty earned over decades. Not through heritage mythology—through instruments that work, that last, that give players what they need without emptying their wallets.

Sixty years of guitar design. No reissues. Just forward.

Yamaha keeps proving that innovation and accessibility aren't mutually exclusive.

Marco Ciappelli interviews Andy Winston from Yamaha at NAMM 2026 for ITSPmagazine.

Part of ITSPmagazine's On Location Coverage at NAMM 2026.
🌐 https://www.itspmagazine.com/the-namm-show-2026-namm-music-conference-music-technology-event-coverage-anaheim-california

__________________________

This is a Brand Highlight. A Brand Highlight is an introductory conversation designed to put a spotlight on the guest and their company. Learn more: https://www.studioc60.com/creation#highlight

GUEST
Andy Winston, Product Training Specialist at Yamaha

RESOURCES
Learn more about Yamaha Guitars: https://www.yamaha.com/

Are you interested in telling your story?
▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight
▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight
-
224
The Gift of Music: Guitar Center Foundation at NAMM 2026 | A Conversation with Michelle Wolff, Guitar Center Foundation | The NAMM Show 2026 Event Coverage | On Location with Sean Martin and Marco Ciappelli
At the Guitar Center Foundation, music is treated as a shared resource rather than a luxury. During this conversation at the NAMM Show 2026, Michelle Wolff, representing the Foundation, explains how access to real instruments can change the trajectory of a student, a patient, or a veteran simply by making music possible in the first place.

The Foundation's work centers on donating thousands of instruments to schools, hospitals, and veteran centers, with a focus on communities where funding for music programs is often the first thing cut. Through a structured grant process, organizations apply for instruments quarterly, with roughly 150 requests reviewed each cycle. About 30 of those requests are fulfilled, helping sustain programs that might otherwise disappear.

Beyond instrument donations, the Foundation is expanding how it shows up in communities. Plans include live donation events that bring instruments directly into schools and hospitals, often paired with artist participation to create meaningful, memorable moments. New donor and ambassador programs are also taking shape, designed to broaden awareness and bring more voices into the mission.

Partnerships play a major role in that effort. The conversation highlights a recent collaboration tied to the 100 Billion Meals initiative, where music, visual art, and social impact intersect to amplify multiple causes at once. These partnerships extend the Foundation's reach while reinforcing the idea that music can support broader humanitarian goals.

Wolff also shares a personal connection to the mission. As a former vocal performance major at the University of Texas Butler School of Music, she understands how deeply musicians identify with their craft. After experiencing a vocal injury herself, she speaks to the importance of supporting musicians through change and helping them build identities that extend beyond a single instrument, without losing music as a core part of who they are.

That perspective brings the Foundation's work full circle. Access to instruments is not only about creating future professionals. It is about expression, resilience, and giving people the chance to discover what music can mean in their own lives.

Part of ITSPmagazine's On Location Coverage at NAMM 2026.
🌐 https://www.itspmagazine.com/the-namm-show-2026-namm-music-conference-music-technology-event-coverage-anaheim-california

__________________________

Guitar Center Foundation: https://www.guitarcenterfoundation.org
100 Billion Meals initiative: https://100billionmeals.org
The NAMM Show 2026: https://www.namm.org/thenammshow/attend
Music Evolves: Sonic Frontiers Newsletter | https://www.linkedin.com/newsletters/7290890771828719616/
More from Marco Ciappelli on Redefining Society and Technology Podcast: https://redefiningsocietyandtechnologypodcast.com/

Want to share an Event Briefing as part of our event coverage? Learn More 👉 https://www.studioc60.com/performance#briefing
Want Sean and Marco to be part of your event or conference? Let Us Know 👉 https://www.studioc60.com/performance#ideas

KEYWORDS: music charity, instrument donations, namm show 2026, music education access, supporting musicians, music nonprofit, guitar center foundation, music programs schools, music and community, philanthropy in music, guitar center, michelle wolff, marco ciappelli
-
223
AI Art vs Human Creativity — The Real Difference and why AI Cannot Be An Artist | A Conversation with AI Expert Andrea Isoni, PhD, Chief AI Officer, AI speaker | Redefining Society and Technology with Marco Ciappelli
The Last Touch: Why AI Will Never Be an Artist

I had one of those conversations... the kind where you're nodding along, then suddenly stop because someone just articulated something you've been feeling but couldn't quite name.

Andrea Isoni is a Chief AI Officer. He builds and delivers AI solutions for a living. And yet, sitting across from him (virtually, but still), I heard something I rarely hear from people deep in the AI industry: a clear, unromantic take on what this technology actually is — and what it isn't.

His argument is elegant in its simplicity. Think about Michelangelo. We picture him alone with a chisel, carving David from marble. But that's not how it worked. Michelangelo ran a workshop. He had apprentices — skilled craftspeople who did the bulk of the work. The master would look at a semi-finished piece, decide what needed refinement, and add the final touch.

That final touch is everything.

Andrea draws the same line with chefs. A Michelin-starred kitchen isn't one person cooking. It's a team executing the chef's vision. But the chef decides what's on the menu. The chef checks the dish before it leaves. The chef adds that last adjustment that transforms good into memorable.

AI, in this framework, is the newest apprentice. It can do the bulk work. It can generate drafts, produce code, create images. But it cannot — and here's the key — provide that final touch. Because that touch comes from somewhere AI doesn't have access to: lived experience, suffering, joy, the accumulated weight of being human in a particular time and place.

This matters beyond art. Andrea calls it the "hacker economy" — a future where AI handles the volume, but humans handle the value. Think about code generation. Yes, AI can write software. But code with a bug doesn't work. Period. Someone has to fix that last bug. And in a world where AI produces most of the code, the value of fixing that one critical bug increases exponentially. The work becomes rarer but more valuable. Less frequent, but essential.

We went somewhere unexpected in our conversation — to electricity. What does AI "need"? Not food. Not warmth. Electricity. So if AI ever developed something like feelings, they wouldn't be tied to hunger or cold or human vulnerability. They'd be tied to power supply. The most important being to an AI wouldn't be a human — it would be whoever controls the electricity grid.

That's not a being we can relate to. And that's the point.

Andrea brought up Guernica. Picasso's masterpiece isn't just innovative in style — it captures something society was feeling in 1937, the horror of the Spanish Civil War. Great art does two things: it innovates, and it expresses something the collective needs expressed. AI might be able to generate the first. It cannot do the second. It doesn't know what we feel. It doesn't know what moment we're living through. It doesn't have that weight of context.

The research community calls this "world models" — the attempt to give AI some built-in understanding of reality. A dog doesn't need to be taught to swim; it's born knowing. Humans have similar innate knowledge, layered with everything we learn from family, culture, experience. AI starts from zero. Every time.

Andrea put it simply: AI contextualization today is close to zero.

I left the conversation thinking about what we protect when we acknowledge AI's limits. Not anti-technology. Not fear. Just clarity. The "last touch" isn't a romantic notion — it's what makes something resonate. And that resonance comes from us.

Stay curious. Subscribe to the podcast. And if you have thoughts, drop them in the comments — I actually read them.

Marco Ciappelli

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.
> https://www.linkedin.com/newsletters/7079849705156870144/

Marco Ciappelli: https://www.marcociappelli.com/
-
222
Gibson Guitars at NAMM 2026: 131 Years of Craftsmanship, Innovation & Functional Art | A Brand Highlight Conversation with Jeff Stempka, Global Brand & Marketing at Gibson | NAMM 2026
131 years. Still handcrafted in Nashville. Still changing music.

At NAMM 2026, Sean Martin and Marco Ciappelli sat down with Jeff Stempka, Global Brand & Marketing at Gibson & Gibson Custom, to talk about what makes this brand untouchable—the craftsmanship, the artist connection, and why people will stretch their budget just to hold one.

From the Les Paul Studio Double Trouble to the ES-335 Fifties and Sixties refresh, Gibson is honoring its legacy while pushing forward.

Jeff said it best: "These are tools that enable incredible musicians to take the instruments and do something we never intended."

🎸 Les Paul Studio Double Trouble – Modern collection, coil splits, pure bypass
🎸 ES-335 Fifties & Sixties – Neck profiles for every player
🎸 100 Years of Flat Tops – From Orville Gibson to today

This isn't just gear. It's functional art. It's history. It's emotion.

Part of ITSPmagazine's On Location Coverage at NAMM 2026.
🌐 https://www.itspmagazine.com/the-namm-show-2026-namm-music-conference-music-technology-event-coverage-anaheim-california

__________________________

This is a Brand Highlight. A Brand Highlight is an introductory conversation designed to put a spotlight on the guest and their company. Learn more: https://www.studioc60.com/creation#highlight

GUEST
Jeff Stempka, Global Brand & Marketing at Gibson

RESOURCES
Learn more about Gibson Guitars: https://www.gibson.com/

Are you interested in telling your story?
▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight
▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight
-
221
PRS Guitars at NAMM 2026: John Mayer Wild Blue Silver Sky & Ed Sheeran Baritone Revealed | A Brand Highlight Conversation with Alex Chadwick from PRS Guitars | NAMM 2026
Vintage Dreams, Modern Hands: A Conversation with PRS Guitars at NAMM 2026

They were literally closing down the show floor when I grabbed Alex Chadwick from PRS Guitars for a conversation I wasn't willing to miss.

We'd been talking off-mic about something that kept nagging at me—this tension between technology and creativity that runs through everything in the music world right now. So I hit record, security guards circling, and asked him straight: Is technology helping musicians become better artists, or do you still need to learn the hard way?

His answer was refreshingly honest. Technology isn't inherently good or bad. It's a tool. When it helps people be more expressive, more creative—that's the win. When it gets in the way of that expression? That's when we have a problem.

It's the kind of nuance that gets lost in the usual gear coverage.

PRS brought some beautiful new instruments to NAMM this year. The John Mayer Wild Blue Silver Sky stopped people in their tracks—a sharp turquoise finish with the first matching headstock their Maryland factory has ever produced on a Silver Sky. Limited to a thousand pieces worldwide. For Mayer fans and Silver Sky devotees alike, this one feels special.

Then there's the Ed Sheeran Semi-Hollow Piezo Baritone. A 27.7-inch scale instrument tuned a fifth below standard, with discrete outputs for both magnetic and piezo elements. But here's what got me: each guitar ships with a signed print of Sheeran's original artwork that appears on the body. He's a visual artist too. The instrument becomes a canvas for multiple creative expressions at once.

But the conversation that really stuck with me was about vintage guitars and why we romanticize them so much.

Those 1950s and '60s instruments—the ones on posters, in documentaries, making the music that shaped entire generations—have become holy relics. And the ones that actually sound magical? They cost as much as a house now. So how does anyone access that?

Chadwick explained something about PRS's philosophy that I found genuinely compelling. They don't go back to the fifties. They go back to 1985. That gives them freedom—they can draw inspiration from those holy grail instruments without being trapped by their quirks, their inconsistent tolerances, their aged components. They can take what made those guitars legendary and build it into something repeatable, accessible, and comfortable.

The goal, he said, is to create instruments that get out of the way. Guitars that let the person be more expressive instead of fighting against limitations.

That phrase has been echoing in my head since I left Anaheim. Instruments that get out of the way.

Because that's really what this is about, isn't it? All the gear, all the technology, all the innovation—it only matters if it helps someone find their voice. Make their own music. Tell their own story.

PRS seems to understand that. In a world obsessed with vintage nostalgia and spec-sheet comparisons, they're building for expression.

And that's worth a conversation, even when security is showing you the door.

Marco Ciappelli reports from NAMM 2026 for ITSPmagazine, exploring the intersection of technology, creativity, and the humans who make music possible.

__________________________

This is a Brand Highlight. A Brand Highlight is an introductory conversation designed to put a spotlight on the guest and their company. Learn more: https://www.studioc60.com/creation#highlight

GUEST
Alexander Chadwick, PRS Guitars

RESOURCES
Learn more about PRS Guitars: https://prsguitars.com

Are you interested in telling your story?
▶︎ Full Length Brand Story: https://www.studioc60.com/content-creation#full
▶︎ Brand Spotlight Story: https://www.studioc60.com/content-creation#spotlight
▶︎ Brand Highlight Story: https://www.studioc60.com/content-creation#highlight

KEYWORDS
NAMM 2026, PRS Guitars, John Mayer Silver Sky, Ed Sheeran guitar, PRS Wild Blue, baritone guitar, guitar gear, new guitars 2026, PRS limited edition, guitar innovation, NAMM Show, musician interviews
-
220
CES 2026 Recap | AI, Robotics, Quantum, And Renewable Energy: The Future Is More Practical Than You Think | A Conversation with CTA Senior Director and Futurist Brian Comiskey | Redefining Society and Technology with Marco Ciappelli
CES 2026 Just Showed Us the Future. It's More Practical Than You Think.

CES has always been part crystal ball, part carnival. But something shifted this year.

I caught up with Brian Comiskey—Senior Director of Innovation and Trends at CTA and a futurist by trade—days after 148,000 people walked the Las Vegas floor. What he described wasn't the usual parade of flashy prototypes destined for tech graveyards. This was different. This was technology getting serious about actually being useful.

Three mega trends defined the show: intelligent transformation, longevity, and engineering tomorrow. Fancy terms, but they translate to something concrete: AI that works, health tech that extends lives, and innovations that move us, power us, and feed us. Not technology for its own sake. Technology with a job to do.

The AI conversation has matured. A year ago, generative AI was the headline—impressive demos, uncertain applications. Now the use cases are landing. Industrial AI is optimizing factory operations through digital twins. Agentic AI is handling enterprise workflows autonomously. And physical AI—robotics—is getting genuinely capable. Brian pointed to robotic vacuums that now have arms, wash floors, and mop. Not revolutionary in isolation, but symbolic of something larger: AI escaping the screen and entering the physical world.

Humanoid robots took a visible leap. Companies like Sharpa and Real Hand showcased machines folding laundry, picking up papers, playing ping pong. The movement is becoming fluid, dexterous, human-like. LG even introduced a consumer-facing humanoid. We're past the novelty phase. The question now is integration—how these machines will collaborate, cowork, and coexist with humans.

Then there's energy—the quiet enabler hiding behind the AI headlines.

Korea Hydro & Nuclear Power demonstrated small modular reactors—next-generation nuclear that could cleanly power cities with minimal waste. A company called Flint Paper Battery showcased recyclable batteries using zinc instead of lithium and cobalt. These aren't sexy announcements. They're foundational.

Brian framed it well: AI demands energy. Quantum computing demands energy. The future demands energy. Without solving that equation, everything else stalls. The good news? AI itself is being deployed for grid modernization, load balancing, and optimizing renewable cycles. The technologies aren't competing—they're converging.

Quantum made the leap from theory to presence. CES launched a new area called Foundry this year, featuring innovations from D-Wave and Quantum Computing Inc. Brian still sees quantum as a 2030s defining technology, but we're in the back half of the 2020s now. The runway is shorter than we thought.

His predictions for 2026: quantum goes more mainstream, humanoid robotics moves beyond enterprise into consumer markets, and space technologies start playing a bigger role in connectivity and research. The threads are weaving together.

Technology conversations often drift toward dystopia—job displacement, surveillance, environmental cost. Brian sees it differently. The convergence of AI, quantum, and clean energy could push things toward something better. The pieces exist. The question is whether we assemble them wisely.

CES is a snapshot. One moment in the relentless march. But this year's snapshot suggests technology is entering a phase where substance wins over spectacle.

That's a future worth watching.

This episode is part of the Redefining Society and Technology podcast's CES 2026 coverage. Subscribe to stay informed as technology and humanity continue to intersect.

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.
> https://www.linkedin.com/newsletters/7079849705156870144/

Marco Ciappelli: https://www.marcociappelli.com/
-
219
CES 2026: Why NVIDIA's Jensen Huang Won IEEE Medal of Honor | A Conversation with Mary Ellen Randall, IEEE's 2026 President and CEO | Redefining Society and Technology with Marco Ciappelli
Jensen Huang Just Won IEEE's Highest Honor. The Reason Tells Us Everything About Where Tech Is Headed.

IEEE announced Jensen Huang as its 2026 Medal of Honor recipient at CES this week. The NVIDIA founder joins a lineage stretching back to 1917—over a century of recognizing people who didn't just advance technology, but advanced humanity through technology.

That distinction matters more than ever.

I spoke with Mary Ellen Randall, IEEE's 2026 President and CEO, from the floor of CES Las Vegas. The timing felt significant. Here we are, surrounded by the latest gadgets and AI demonstrations, having a conversation about something deeper: what all this technology is actually for.

IEEE isn't a small operation. It's the world's largest technical professional society—500,000 members across 190 countries, 38 technical societies, and 142 years of history that traces back to when the telegraph was connecting continents and electricity was the revolutionary new thing. Back then, engineers gathered to exchange ideas, challenge each other's thinking, and push innovation forward responsibly.

The methods have evolved. The mission hasn't.

"We're dedicated to advancing technology for the benefit of humanity," Randall told me. Not advancing technology for its own sake. Not for quarterly earnings. For humanity. It sounds like a slogan until you realize it's been their operating principle since before radio existed.

What struck me was her framing of this moment. Randall sees parallels to the Renaissance—painters working with sculptors, sharing ideas with scientists, cross-pollinating across disciplines to create explosive growth. "I believe we're in another time like that," she said. "And IEEE plays a crucial role because we are the way to get together and exchange ideas on a very rapid scale."

The Jensen Huang selection reflects this philosophy. Yes, NVIDIA built the hardware that powers AI.
But the Medal of Honor citation focuses on something broader—the entire ecosystem NVIDIA created that enables AI advancement across healthcare, autonomous systems, drug discovery, and beyond. It's not just about chips. It's about what the chips make possible.

That ecosystem thinking matters when AI is moving faster than our ethical frameworks can keep up. IEEE is developing standards to address bias in AI models. They've created certification programs for ethical AI development. They even have standards for protecting young people online—work that doesn't make headlines but shapes the digital environment we all inhabit.

"Technology is a double-edged sword," Randall acknowledged. "But we've worked very hard to move it forward in a very responsible and ethical way."

What does responsible look like when everything is accelerating? IEEE's answer involves convening experts to challenge each other, peer-reviewing research to maintain trust, and developing standards that create guardrails without killing innovation. It's the slow, unglamorous work that lets the exciting breakthroughs happen safely.

The organization includes 189,000 student members—the next generation of engineers who will inherit both the tools and the responsibilities we're creating now. "Engineering with purpose" is the phrase Randall kept returning to. People don't join IEEE just for career advancement. They join because they want to do good.

I asked about the future. Her answer circled back to history: the Renaissance happened when different disciplines intersected and people exchanged ideas freely. We have better tools for that now—virtual conferences, global collaboration, instant communication. The question is whether we use them wisely.

We live in a Hybrid Analog Digital Society where the choices engineers make today ripple through everything tomorrow.
Organizations like IEEE exist to ensure those choices serve humanity, not just shareholder returns.

Jensen Huang's Medal of Honor isn't just recognition of past achievement. It's a statement about what kind of innovation matters.

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.

My Newsletter? Yes, of course, it is here: https://www.linkedin.com/newsletters/7079849705156870144/
Marco Ciappelli: https://www.marcociappelli.com/

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
218
Nothing Has Changed in Cybersecurity Since the 80s — And That's the Real Problem | A Conversation with Steve Mancini | Redefining Society and Technology with Marco Ciappelli
Dr. Steve Mancini: https://www.linkedin.com/in/dr-steve-m-b59a525/
Marco Ciappelli: https://www.marcociappelli.com/

Nothing Has Changed in Cybersecurity Since War Games — And That's Why We're in Trouble

"Nothing has changed."

That's not what you expect to hear from someone with four decades in cybersecurity. The industry thrives on selling the next revolution, the newest threat, the latest solution. But Dr. Steve Mancini—cybersecurity professor, Homeland Security veteran, and Italy's Honorary Consul in Pittsburgh—wasn't buying any of it. And honestly? Neither was I.

He took me back to his Commodore 64 days, writing BASIC war dialers after watching War Games. The method? Dial numbers, find an open line, try passwords until one works. Translate that to today: run an Nmap scan, find an open port, brute-force your way in. The principle is identical. Only the speed has changed.

This resonated deeply with how I think about our Hybrid Analog Digital Society. We're so consumed with the digital evolution—the folding screens, the AI assistants, the cloud computing—that we forget the human vulnerabilities underneath remain stubbornly analog. Social engineering worked in the 1930s, it worked when I was a kid in Florence, and it works today in your inbox.

Steve shared a story about a family member who received a scam call. The caller asked if their Social Security number "had a six in it." A one-in-nine guess. Yet that simple psychological trick led to remote software being installed on their computer. Technology gets smarter; human psychology stays the same.

What struck me most was his observation about his students—a generation so immersed in technology that they've become numb to breaches. "So what?" has become the default response. The data sells, the breaches happen, you get two years of free credit monitoring, and life goes on. Groundhog Day.

But the deeper concern isn't the breaches.
It's what this technological immersion is doing to our capacity for critical thinking, for human instinct. Steve pointed out something that should unsettle us: the algorithms feeding content to young minds are designed for addiction, manipulating brain chemistry with endorphin kicks from endless scrolling. We won't know the full effects of a generation raised on smartphones until they're forty, having scrolled through social media for thirty years.

I asked what we can do. His answer was simple but profound: humans need to decide how much they want technology in their lives. Parents putting smartphones in six-year-olds' hands might want to reconsider. Schools clinging to the idea that they're "teaching technology" miss the point—students already know the apps better than their professors. What they don't know is how to think without them.

He's gone back to paper-and-pencil tests. Old school. Because when the power goes out—literally or metaphorically—you need a brain that works independently.

Ancient cultures, Steve reminded me, built civilizations with nothing but their minds, parchment, and each other. They were, in many ways, a thousand times smarter than us because they had no crutches. Now we call our smartphones "smart" while they make us incrementally dumber.

This isn't anti-technology doom-saying. Neither Steve nor I oppose technological progress. The conversation acknowledged AI's genuine benefits in medicine, in solving specific problems. But this relentless push for the "easy button"—the promise that you don't have to think, just click—that's where we lose something essential.

The ultimate breach, we concluded, isn't someone stealing your data. It's breaching the mind itself. When we can no longer think, reason, or function without the device in our pocket, the hackers have already won—and they didn't need to write a single line of code.

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.

My Newsletter? Yes, of course, it is here: https://www.linkedin.com/newsletters/7079849705156870144/

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
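A footnote for the technically curious: the war-dialer-to-port-scan parallel Steve draws can be sketched in a few lines of Python. This is a hypothetical illustration of the shared pattern (enumerate candidates, probe each one, keep those that answer), not code from the episode:

```python
import socket

def sweep(targets, probe):
    """The pattern behind both eras: enumerate candidates,
    probe each one, keep those that answer."""
    return [t for t in targets if probe(t)]

def tcp_probe(host, port, timeout=0.5):
    """Today's probe: try one TCP port, the way a 1983 war
    dialer tried one phone number listening for a modem tone."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 1983: sweep(phone_numbers, modem_answers)
# Today: sweep(range(1, 1025), lambda p: tcp_probe("127.0.0.1", p))
print(sweep(range(10), lambda n: n % 3 == 0))  # toy probe demo: prints [0, 3, 6, 9]
```

Only the probe changed in forty years; the loop is the same. Which is exactly Steve's point.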
-
217
Author Kate O'Neill's Book "What Matters Next": AI, Meaning, and Why We Can't Delegate Creativity | Redefining Society and Technology with Marco Ciappelli
Kate O'Neill: https://www.koinsights.com/books/what-matters-next-book/
Marco Ciappelli: https://www.marcociappelli.com/

When Kate O'Neill tells me that AI's most statistically probable outcome is actually its least meaningful one, I realize we're talking about something information theory has known for decades - but nobody's applying it to the way we're using ChatGPT.

She's a linguist who became a tech pioneer, one of Netflix's first hundred employees, someone who saw the first graphical web browser and got chills knowing everything was about to change. Her new book "What Matters Next" isn't another panic piece about AI or a blind celebration of automation. It's asking the question nobody seems to want to answer: what happens when we optimize for probability instead of meaning?

I've been wrestling with this myself. The more I use AI tools for content, analysis, brainstorming - the more I notice something's missing. The creativity isn't there. It's brilliant for summarization, execution, repetitive tasks. But there's a flatness to it, a regression to the mean that strips away the very thing that makes human communication worth having.

Kate puts it plainly: "There is nothing more human than meaning-making. From semantic meaning all the way out to the philosophical, cosmic worldview - what matters and why we're here."

Every time we hit "generate" and just accept what the algorithm produces, we're choosing efficiency over meaning. We're delegating the creative process to a system optimized for statistical likelihood, not significance.

She laughs when I tell her about my own paradox - that AI sometimes takes MORE time, not less. There's an old developer concept called "yak shaving," where you spend ten times longer writing a program to automate five steps instead of just doing them.
But the real insight isn't about time management. It's about understanding the relationship between our thoughts and the tools we use to express them.

In her book "What Matters Next," Kate's message is that we need to stay in the loop. Use AI for ugly first drafts, sure. Let it expedite workflow. But keep going back and forth, inserting yourself, bringing meaning and purpose back into the process. Otherwise, we create what she calls "garbage that none of us want to exist in the world with."

I wrote recently about the paradox of learning when we rely entirely on machines. If AI only knows what we've done in the past, and we don't inject new meaning into that loop, it becomes closed. It's like doomscrolling through algorithms that only feed you what you already like - you never discover anything new, never grow, never challenge yourself.

We're living in a Hybrid Analog Digital Society where these tools are unavoidable and genuinely powerful. The question isn't whether to use them. It's how to use them in ways that amplify human creativity rather than flatten it, that enhance meaning rather than optimize it away.

The dominant narrative right now is efficiency, productivity, automation. But what if the real value isn't doing things faster - it's doing things that actually matter? Technology should serve humanity's purpose. Not the other way around. And that purpose can't be dictated by algorithms trained on statistical likelihood. It has to come from us, from the messy, unpredictable, meaningful work of being human.

My Newsletter? Yes, of course, it is here: https://www.linkedin.com/newsletters/7079849705156870144/

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
216
AI in Healthcare: Who Benefits, Who Pays, and Who's at Risk in Our Hybrid Analog Digital Society | Expert Panel Discussions With Marco Ciappelli & Sean Martin
AI in Healthcare: Who Benefits, Who Pays, and Who's at Risk in Our Hybrid Analog Digital Society

🎙️ EXPERT PANEL | Hosted by Marco Ciappelli & Sean Martin

Dr. Robert Pearl - Former CEO, Permanente Medical Group; Author, "ChatGPT, MD"
Rob Havasy - Senior Director of Connected Health, HIMSS
John Sapp Jr. - VP & CSO, Texas Mutual Insurance
Jim StClair - VP of Public Health Systems, Altarum
Robert Booker - Chief Strategy Officer, HITRUST

I had one of those conversations recently that reminded me why we do what we do at ITSPmagazine. Not the kind of polite, surface-level exchange you get at most industry events, but a real grappling with the contradictions and complexities that define our Hybrid Analog Digital Society.

This wasn't just another panel discussion about AI in healthcare. This was a philosophical interrogation of who benefits, who pays, and who's at risk when we hand over diagnostic decisions, treatment protocols, and even the sacred physician-patient relationship to algorithms.

The panel brought together some of the most thoughtful voices in healthcare technology: Dr. Robert Pearl, former CEO of the Permanente Medical Group and author of "ChatGPT, MD"; Rob Havasy from HIMSS; John Sapp from Texas Mutual Insurance; Jim StClair from Altarum; and Robert Booker from HITRUST. What emerged wasn't a simple narrative of technological progress or dystopian warning, but something far more nuanced—a recognition that we're navigating uncharted territory where the stakes couldn't be higher.

Dr. Pearl opened with a stark reality: 400,000 people die annually from misdiagnoses in America. Another half million die because we fail to adequately control chronic diseases like hypertension and diabetes. These aren't abstract statistics—they're lives lost to human error, system failures, and the limitations of our current healthcare model.
His argument was compelling: AI isn't replacing human judgment; it's filling gaps that human cognition simply cannot bridge alone.

But here's where the conversation became truly fascinating. Rob Havasy described a phenomenon I've noticed across every technology adoption curve we've covered—the disconnect between leadership enthusiasm and frontline reality. Healthcare executives believe AI is revolutionizing their operations, while nurses and physicians on the floor are quietly subscribing to ChatGPT on their own because the "official" tools aren't ready yet. It's a microcosm of how innovation actually happens: messy, unauthorized, and driven by necessity rather than policy.

The ethical dimensions run deeper than most people realize. When we—my co-host Sean Martin and I—asked about liability, the panel's answer was refreshingly honest: we don't know. The courts will eventually decide who's responsible when an AI diagnostic tool leads to harm. Is it the developer? The hospital? The physician who relied on the recommendation? Right now, everyone wants control over AI deployment but minimal liability for its failures. Sound familiar? It's the classic American pattern of innovation outpacing regulation.

John Sapp introduced a phrase that crystallized the challenge: "enable the secure adoption and responsible use of AI." Not prevent. Not rush recklessly forward. But enable—with guardrails, governance, and a clear-eyed assessment of both benefits and risks. He emphasized that AI governance isn't fundamentally different from other technology risk management; it's just another category requiring visibility, validation, and informed decision-making.

Yet Robert Booker raised a question that haunts me: what do we really mean when we talk about AI in healthcare? Are we discussing tools that empower physicians to provide better care?
Or are we talking about operational efficiency mechanisms designed to reduce costs, potentially at the expense of the human relationship that defines good medicine?

This is where our Hybrid Analog Digital Society reveals its fundamental tensions. We want the personalization that AI promises—real-time analysis of wearable health data, pharmacogenetic insights tailored to individual patients, early detection of deteriorating conditions before they become crises. But we're also profoundly uncomfortable with the idea of an algorithm replacing the human judgment, intuition, and empathy that we associate with healing.

Jim StClair made a provocative observation: AI forces us to confront the uncomfortable truth about how much of medical practice is actually procedure, protocol, and process rather than art. How many ER diagnoses follow predictable decision trees? How many prescriptions are essentially formulaic responses to common presentations? Perhaps AI isn't threatening the humanity of medicine—it's revealing how much of medicine has always been mechanical, freeing clinicians to focus on the parts that genuinely require human connection.

The panel consensus, if there was one, centered on governance. Not as bureaucratic obstruction, but as the framework that allows us to experiment responsibly, learn from failures without catastrophic consequences, and build trust in systems that will inevitably become more prevalent.

What struck me most wasn't the disagreements—though there were plenty—but the shared recognition that we're asking the wrong question. It's not "AI or no AI?" but "What kind of AI, governed how, serving whose interests, with what transparency, and measured against what baseline?"

Because here's the uncomfortable truth Dr. Pearl articulated: we're comparing AI to an idealized vision of human medical practice that doesn't actually exist.
The baseline isn't perfection—it's 400,000 annual misdiagnoses, burned-out clinicians spending hours on documentation instead of patient care, and profound healthcare inequities based on geography and economics.

The question isn't whether AI will transform healthcare. It already is. The question is whether we'll shape that transformation consciously, ethically, and with genuine concern for who benefits and who bears the risks.

Listen to the full conversation and subscribe to stay connected with these critical discussions about technology and society.

Links:
ITSPmagazine: ITSPmagazine.com
Redefining Society and Technology Podcast: redefiningsocietyandtechnologypodcast.com

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
215
New Event | Global Space Awards 2025 Honors Captain James Lovell Legacy at Natural History Museum London | A conversation with Sanjeev Gordhan | Redefining Society And Technology Podcast With Marco Ciappelli
____________
Podcast
Redefining Society and Technology Podcast With Marco Ciappelli
https://redefiningsocietyandtechnologypodcast.com
____________
Host
Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
____________
This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
____________
Title
New Event | Global Space Awards 2025 Honors Captain James Lovell Legacy at Natural History Museum London | A conversation with Sanjeev Gordhan | Redefining Society And Technology Podcast With Marco Ciappelli
____________
Guest
Sanjeev Gordhan
General Partner @ Type One Ventures | Space, Deep-Tech, Strategy
On LinkedIn: https://www.linkedin.com/in/sanjeev-gordhan-3714b327/
____________
Short Introduction
The inaugural Global Space Awards celebrates the Golden Era of Space on December 5, 2025, at London's Natural History Museum. Hosted by physicist Brian Greene, the event honors Captain James Lovell's legacy and recognizes innovators transforming space from government domain to commercial frontier in our Hybrid Analog Digital Society.
____________
Article
"There are people who make things happen, there are people who watch things happen, and there are people who wonder what happened. To be successful, you need to be a person who makes things happen."

Those words from Captain James Lovell defined his life—from commanding Apollo 13's near-disastrous mission to inspiring generations of space explorers.
This December, London's Natural History Museum will host the inaugural Global Space Awards, an event dedicating its first evening to Lovell's extraordinary legacy while celebrating those making things happen in space today.

Sanjeev Gordhan, General Partner at Type One Ventures and part of the Global Space Awards organizing team, joined me to discuss why this moment matters. Not just for space enthusiasts, but for everyone whose lives are being transformed by technologies developed beyond Earth's atmosphere.

"Space is not a sector," Sanj explained. "It's a domain that overrides many sectors—agriculture, pharmaceuticals, defense, telecommunications, connectivity. Things we engage with daily."

The timing couldn't be more significant. We're witnessing what Sanj calls a fundamental shift in space economics. In the 1970s and 80s, launching a kilogram into space cost $70,000-$80,000. Today? Around $3,000. That reduction of more than 20x has transformed space from an exclusive government playground into a commercially viable domain where startups can reach orbit on seed funding.

This democratization of space access is precisely why the Global Space Awards emerged. The industry needed something beyond its echo chambers—a red-carpet moment celebrating excellence across the entire spectrum, from research laboratories to scaling businesses, from breakthrough science to sustainable investments.

The response exceeded all expectations. The first-year event received 516 nominations from 38 countries. Sanj and his team were "gobsmacked"—they'd hoped for maybe 150-200. The overwhelming engagement proved what they suspected: the space community was hungry for recognition that spans the complete journey from laboratory to commercial impact.

What makes this particularly fascinating is how space technology circles back to solve Earth's problems. Consider pharmaceuticals: crystallization processes in microgravity create flawless crystal structures impossible to achieve on Earth. The impact?
Chemotherapy treatments that currently require hours-long hospital visits could become subcutaneous injections patients self-administer at home. That's not science fiction—that's research happening now on the International Space Station, waiting for commercial space infrastructure to scale production.

Or agriculture: Earth observation satellites help farmers optimize crop yields, manage water resources, and predict harvests with unprecedented accuracy. Space technology feeding humanity—literally.

The investment mathematics are compelling. For every dollar invested in space innovation, the return to humanity measures around 20x. Not in stock market terms, but in solving problems like food security, medical treatments, climate monitoring, and global connectivity. These aren't abstract future benefits—they're happening now, accelerating as launch costs plummet and commercial operations expand.

The Global Space Awards recognizes this multifaceted reality through eight distinct categories: Playmaker of the Year, Super Scaler, Space Investor, Partnership of the Year, Innovation Breakthrough, Science Breakthrough, Sustainability for Earth, and Sustainability for Space. Each award acknowledges that space progress requires diverse contributions—from the scientists doing foundational research to the investors providing capital, from the engineers building systems to the partnerships bridging sectors.

And then there's the James Lovell Legacy Award, presented to his family at this inaugural event. The choice is deliberate and symbolic. Lovell commanded Apollo 8, the first crewed mission to orbit the Moon, then led Apollo 13's dramatic survival when an oxygen tank exploded en route to the lunar surface.
His calm under pressure, innovative problem-solving with limited resources, and unwavering commitment to bringing his crew home safely epitomize what space exploration demands: courage combined with pragmatism, vision tempered by reality.

The Lovell family's response to the tribute captures this spirit perfectly: "His words continue to guide not only our family, but all those who dare to dream beyond the horizon."

That phrase—"dream beyond the horizon"—resonates deeply in our current moment. We're transitioning from the heroic Apollo era to something more complex and perhaps more consequential. Space is becoming infrastructure, not just exploration. The question isn't whether humans will have a permanent presence beyond Earth, but how quickly and sustainably we'll build it.

The Natural History Museum setting adds another layer of meaning. Here's a building celebrating Earth's evolutionary history hosting an event about humanity's next evolutionary step—becoming a spacefaring species. The juxtaposition of dinosaur fossils and rocket technology, of ancient geology and future lunar economies, captures where we stand: creatures evolved on one small planet now reaching beyond it.

Physicist Brian Greene hosting the event is equally symbolic. Not an astronaut or rocket scientist, but someone who makes complex physics comprehensible to non-specialists. Space's future depends on broad understanding, not just specialized expertise. When space technology becomes as mundane as aviation—when we stop thinking about the satellites enabling our GPS or the space-tested materials in our smartphones—that's when the real transformation is complete.

Sanj mentioned something that stuck with me: people ask why we spend billions on space when Earth has so many problems. The answer is that space spending helps solve Earth's problems. Better farming through satellite data. Life-saving pharmaceuticals manufactured in microgravity. Climate monitoring. Disaster response.
Global internet access for remote regions. The false choice between Earth and space collapses when you understand space as a domain enabling solutions, not a destination draining resources.

Looking forward, the opportunities expand exponentially. We haven't even begun exploiting lunar resources or manufacturing in zero gravity at scale. The next 5-15 years will bring benefits we can barely imagine today—but we must start now. Space infrastructure takes time. The ISS took over a decade to build. Commercial space stations, lunar bases, and orbital manufacturing facilities will require similar long-term commitments.

That's why events like the Global Space Awards matter. They connect the dots between research and commerce, between investment and impact, between legacy and future. They remind us that space isn't just about rockets and astronauts—it's about chemists and farmers, investors and engineers, visionaries and pragmatists all working toward the same horizon.

The finalists will be announced from the stratosphere—literally, on a screen carried by balloon—because why not? If you're celebrating space, do it with flair.

As our conversation ended, I found myself hoping to attend. Not because I'm a space professional (I'm not), but because I'm fascinated by how technology reshapes society. And space technology is reshaping everything, whether we notice it or not. In our Hybrid Analog Digital Society, space represents the ultimate extension of human capability—using technology not to replace our humanity but to expand what humanity can accomplish.

Captain Lovell's quote rings true: some make things happen, some watch, some wonder. The Global Space Awards celebrates those making things happen. The rest of us should at least watch—because what happens in space increasingly happens to all of us.

Subscribe to continue these conversations about technology, society, and humanity's next chapter.
Because the future is being built right now, and it's more exciting than most people realize.

____________
About the event

GLOBAL SPACE AWARDS DEDICATES EVENING TO HONOR THE LEGACY AND EXTRAORDINARY CONTRIBUTIONS OF CAPTAIN JAMES LOVELL

Inaugural James Lovell Legacy Award Introduced and Presented to the Lovell Family
Red-Carpet Awards Event Taking Place on December 5 at The Natural History Museum, London

London, U.K. – October 29, 2025 – The Global Space Awards (GSA), the first international event dedicated to celebrating the achievements defining today’s Golden Era of Space, hosted by world-renowned physicist and bestselling author Brian Greene, has announced it will dedicate the event to the memory and outstanding achievements of the extraordinary and iconic Captain James Lovell. A special inaugural James Lovell Legacy Award will be presented to his family, launching the award’s initiative to recognize those whose lifetime of leadership, service, and courage has left an enduring impact on humanity’s progress in space.

The Lovell Family responds to the tribute, “We are deeply honored that this evening's Global Space Awards is dedicated to the remarkable legacy of our father, Captain James Lovell, a true pioneer whose courage and vision continue to inspire generations. As my father often reminded us, ‘There are people who make things happen, there are people who watch things happen, and there are people who wonder what happened. To be successful, you need to be a person who makes things happen.’ His words continue to guide not only our family, but all those who dare to dream beyond the horizon. We are profoundly grateful to see his legacy honored among those who continue to make things happen in space exploration.”

Sanjeev Gordhan of the Global Space Awards CIC continues, “We are deeply honored to welcome the Lovell family as we celebrate the extraordinary legacy of their father, Captain James Lovell.
A true American treasure and one of the bravest men ever to journey into space, Captain Lovell’s courage and leadership have inspired generations. It is both fitting and meaningful that the inaugural James Lovell Legacy Award be dedicated to him and presented to his family in recognition of his remarkable contributions to space exploration and his enduring impact on humanity’s quest for discovery.”

The James Lovell Legacy Award will be an annual award given to the individual who honors the spirit of Captain James Lovell, whose heroism, calm under pressure, and unwavering commitment to exploration exemplify the very best of humanity in the face of the unknown. It celebrates those whose legacy is not measured only by the missions flown or the technologies pioneered, but by the inspiration they leave for generations to come and the foundations they have built.

The Global Space Awards event will take place at The Natural History Museum, London on Friday, December 5. It will feature an awards ceremony and black-tie gala dinner, honoring the innovators, investors, and organizations shaping the future of space—from lunar bases and in-orbit manufacturing to sustainable space economies that benefit life on Earth today. Finalists will be announced in early November.

Until now, there has been no unified global platform recognizing these historic accomplishments. The Global Space Awards were created to fill that void—shining a spotlight on the breakthroughs, technologies, and visionaries setting new benchmarks for space innovation at one iconic annual event. The GSA’s core values are: innovation, global collaboration, inspiration, integrity, and sustainability.

The inaugural Global Space Awards will be overseen by a Steering Committee of highly respected industry leaders.
They include Anna Hazlett, Founder & CEO of AzurX and member of the AED 2 billion Mohammed Bin Rashid Innovation Fund (MBRIF) Advisory & Decision Committee; Andrew Robb, Partner & EMEA Space Practice Leader at Deloitte; Sanjeev Gordhan, General Partner at Type One Ventures; and Hidetaka Aoki, Co-founder and Director at Space Port Japan, co-founder of the SPACETIDE Foundation, and Space Evangelist.

In addition to the James Lovell Legacy Award, the evening will feature the presentation of the following awards:

Playmaker of the Year Award -- Awarded to an individual whose defining move this year shifted the trajectory of the space economy. This award celebrates the power players creating momentum across the ecosystem.

Super Scaler of the Year Award -- Awarded to a Space company that has demonstrated exceptional commercial growth over the past year. Whether through market expansion, revenue milestones, operational scaling, or capital raised, this award recognizes the breakout businesses charting a path to rapid growth.

Space Investor of the Year Award -- Awarded to an investor (angel or institutional) or investment firm who, over the past 12 months, has most meaningfully accelerated the growth and trajectory of their portfolio companies. This award recognizes strategic capital, deep conviction, and hands-on partnership that unlocks real progress.

Partnership of the Year Award -- Awarded jointly to a Space company and its corporate or public sector partner(s) whose collaboration has delivered exceptional impact over the past year. This award celebrates partnerships that achieve tangible results, scale technology, and push the boundaries of what's possible through cross-sector innovation.

Innovation Breakthrough Award -- Awarded to a Space company pushing the boundaries of what's technically possible. This award recognizes radical product or service innovations, DeepTech achievements, or breakthrough moments that set new benchmarks for the sector.
Science Breakthrough Award -- Awarded to a research team or individual whose scientific contribution is advancing our understanding of Space, enabling new Space technologies, or altering Space policies. This award spotlights the foundational projects that underpin Space innovation and drive the broader Space ecosystem.

Sustainability for Earth Award -- Awarded to a Space company achieving measurable impact on Earth through Space technology. This award celebrates space-derived innovations that address pressing problems on our planet.

Sustainability for Space Award -- Awarded to a Space company making the most significant contribution to the long-term sustainability of Space. This award celebrates tangible progress toward a responsible future in orbit.

About Global Space Awards CIC

Global Space Awards CIC (Community Interest Company) has been established as a not-for-profit entity limited by guarantee to champion the ecosystem for Space. The entity is governed by independent advisory board members who will ensure the transparency and fairness of the awards selection process and oversee the financial governance of its operations.

____________

Enjoy. Reflect. Share with your fellow humans.

And if you haven't already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144

You're listening to this through the Redefining Society & Technology Podcast, so while you're here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.
____________

End of transmission

Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your Company?
👉 https://www.studioc60.com

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
214
New Book | STREAMING WARS: How Getting Everything We Want Changed Entertainment Forever | Journalist Charlotte Henry Explains How Streaming Changed Entertainment Forever | Redefining Society And Technology Podcast With Marco Ciappelli
____________

Podcast
Redefining Society and Technology Podcast With Marco Ciappelli
https://redefiningsocietyandtechnologypodcast.com

____________

Host
Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

____________

This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

____________

Title
New Book | STREAMING WARS: How Getting Everything We Want Changed Entertainment Forever | Journalist Charlotte Henry Explains How Streaming Changed Entertainment Forever | Redefining Society And Technology Podcast With Marco Ciappelli

____________

Guests:
Charlotte Henry
Author, journalist, and broadcaster who created and runs The Addition, a newsletter looking at the crossover between media and tech.
The Media Society: https://theaddition.substack.com/
On LinkedIn: https://www.linkedin.com/in/charlotteahenry/

____________

Short Introduction
Journalist Charlotte Henry reveals how streaming transformed entertainment in her new book "Streaming Wars: How Getting Everything We Want Changed Entertainment Forever." From Netflix's rise to the 2023 Hollywood strikes, she examines how we consume media and express ourselves, and the surprising return to "old-fashioned" weekly releases in our Hybrid Analog Digital Society.

____________

Article
We used to learn who someone was by looking at their record collection. Walk into their home, scan the vinyl on the shelves, and you'd know—this person loves Metallica, that person's into jazz, someone else collected every Beatles album ever pressed.
Media was how we expressed ourselves, how we told our story without saying a word.

That's gone now. And we might not have noticed it disappearing.

Charlotte Henry, a London-based journalist and author of "Streaming Wars: How Getting Everything We Want Changed Entertainment Forever," sat down with me to discuss something most of us experience daily but rarely examine deeply: how streaming has fundamentally altered not just entertainment, but how we relate to media and each other.

"You can't pop over to someone's house after a first date and see their Spotify playlist," Charlotte pointed out. She's right—you can't browse someone's Netflix queue the way you could their DVD collection, can't judge their Kindle library the way you could scan their bookshelf. We've lost that intimate form of self-expression, that casual cultural reveal that came from physical media.

But Charlotte's book isn't a nostalgic lament. It's something far more valuable: a snapshot of this exact moment in media history, a line in the sand marking where we are before everything changes again. And in technology and media, change is the only constant.

Her starting point is deliberate—the 2023 Hollywood strikes. Not the beginning of streaming's story, but perhaps its most symbolic moment. Writers, actors, costume designers, transportation crews, everyone who keeps Hollywood running stood up and said: this isn't working. The frustrations that exploded that summer had been building for years, all stemming from how streaming fundamentally disrupted the entertainment economy.

My wife works in Hollywood's costume department. She lived through those strikes, felt the direct impact of an industry transformed.
The changes Charlotte documents aren't abstract—they're affecting real careers, real livelihoods, real creative work.

What struck me most about our conversation was how Charlotte brings together all of streaming—not just Netflix and Disney+, but Twitch, Spotify, Apple Music, the specialized services for heavy metal or horror movies, the entire ecosystem of on-demand media. No one had told this complete story before, and it needed telling precisely because it's changing so rapidly.

Consider this: streaming is both revolutionary and circular. We cut the cord, abandoned cable packages, embraced freedom of choice. But now? The streaming services are rebundling themselves into packages that look suspiciously like the cable bundles we rejected. We've come full circle, just with different branding.

The same thing is happening with release schedules. Remember when Netflix revolutionized everything by dropping entire seasons at once? Binge-watching became our cultural norm. But now services are reverting to weekly releases—Stranger Things spread across quarters to ensure multiple subscription payments, Apple TV+ releasing shows one episode per week like it's 1995. We're going back to the future.

Charlotte's analysis of the consumer psychology is fascinating. We've been trained to expect everything, everywhere, immediately. Not just TV shows—beer subscription services, meal kits, next-day Amazon delivery. We subscribe rather than own. We stream rather than collect. And that shift has changed not just how we consume media, but how we think about possession, patience, and value.

The economic impact goes deeper than most realize. Writers who once created 24-episode seasons now produce 8-episode limited series but remain contractually bound to exclusivity, earning less while being unable to take other work.
Meanwhile, streamers pump money into content, taking risks on shows that traditional networks never would have greenlit, creating opportunities for voices that wouldn't have been heard before.

It's complicated. Like all technological transformation, streaming brings both disruption and opportunity, loss and gain.

The data-driven nature of streaming is particularly interesting. Charlotte notes that often the most-watched content isn't the prestigious shows we discuss—it's the mediocre background programming people half-watch while scrolling their phones. Netflix figured this out and adjusted strategy accordingly. They still want the big shows, the water-cooler moments, but they've also embraced the second-screen reality of modern viewing.

And then there's AI—the elephant in every media conversation now. Charlotte dedicates a chapter to it because she had to. We're on the verge of being able to create Netflix-quality content with minimal human involvement. The 2023 strikes were partly about this, negotiating protections around AI use of actors' likenesses and voices.

But here's where Charlotte and I found common ground: we both believe AI might actually increase the value of human-made work. When everything can be generated, the authentically human becomes precious. The imperfect becomes valuable. The emotional becomes irreplaceable.

I'm seeing signs of this already. Bookstores packed with kids excited about physical books. Vinyl sales continuing to rise. People craving the tangible, the real, the human. Maybe we'll look back at this moment and recognize it as the turning point—not where AI replaced human creativity, but where we collectively decided what we value most.

Charlotte's book captures this inflection point perfectly. In our Hybrid Analog Digital Society, we're navigating between worlds—the physical and virtual, the owned and subscribed, the patient and immediate, the human and artificial.
Understanding where we are now helps us choose where we go next.

As we wrapped our conversation, Charlotte and I bonded over our shared love of analog media—the CDs behind her, the vinyl behind those, my own collections scattered between Los Angeles and Florence. Two media nerds on opposite sides of an ocean, connected by technology that would have seemed like science fiction to our younger selves, discussing how that very technology is changing everything.

The streaming wars aren't over. They're just beginning. Charlotte Henry's book gives us the map to understand the battlefield.

Subscribe to continue these conversations about media, technology, and society. Because in a world of infinite content, thoughtful analysis of what it all means becomes the rarest commodity of all.

____________

About the book
Streaming Wars: How Getting Everything We Wanted Changed Entertainment Forever

Streaming didn't just change what we watch. It changed who holds the power in entertainment.

Streaming Wars reveals how platforms like Netflix, Disney+, Apple TV+, Spotify, and Amazon Prime have transformed more than just entertainment. They've rewritten the rules of streaming services, media economics, power, and visibility. Journalist Charlotte Henry explores what's really going on behind your screen, from Hollywood's 2023 strikes to the rise of ad-supported tiers, the global race for live sports, and the slow fade of traditional TV. With a sharp, accessible lens, Henry breaks down how AI, rebundling, and fierce platform competition are driving a new era of streaming, and why this shift matters now. Perfect for anyone who wants to understand how streaming is reshaping culture, business, and what we watch.

Find it on Amazon: https://www.amazon.com/Streaming-Wars-Getting-Everything-Entertainment/dp/1398622559

____________

Enjoy. Reflect.
-
213
New Book: SPIES, LIES, AND CYBER CRIME | Former FBI Spy Hunter Eric O'Neill Explains How Cybercriminals Use Espionage Techniques to Attack Us | Redefining Society And Technology Podcast With Marco Ciappelli
____________

Podcast
Redefining Society and Technology Podcast With Marco Ciappelli
https://redefiningsocietyandtechnologypodcast.com

____________

Host
Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

____________

This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

____________

Title
New Book: SPIES, LIES, AND CYBER CRIME | Former FBI Spy Hunter Eric O'Neill Explains How Cybercriminals Use Espionage Techniques to Attack Us | Redefining Society And Technology Podcast With Marco Ciappelli

____________

Guests:
Eric O'Neill
Keynote Speaker, Cybersecurity Expert, Spy Hunter, Bestselling Author, Attorney
On LinkedIn: https://www.linkedin.com/in/eric-m-oneill/
Find the book on Eric's website: https://ericoneill.net

Sean Martin, CISSP
GTM Advisor | Journalist, Analyst, Technologist | Cybersecurity, Risk, Operations | Brand & Content Marketing | Musician, Photographer, Professor, Moderator | Co-Founder, ITSPmagazine & Studio C60
Website: https://www.seanmartin.com

____________

Short Introduction
Former FBI counterintelligence specialist Eric O'Neill, who caught the most damaging spy in US history, reveals how cybercriminals use traditional espionage techniques to attack us. In his new book "Spies, Lies, and Cyber Crime," he exposes the $14 trillion cybercrime industry and teaches us to recognize attacks in our Hybrid Analog Digital Society.

____________

Article
Trust has become the rarest commodity on Earth.
We can't trust what we see, what we hear, or what we read anymore. And the people exploiting that crisis? They learned their craft from spies.

Eric O'Neill knows this better than most. He's the former FBI counterintelligence specialist who went undercover—as himself—to catch Robert Hanssen, Russia's top spy embedded in the FBI for 22 years. That story became his first book "Gray Day" and the movie "Breach." But five years later, Eric's back with a very different kind of warning.

His new book "Spies, Lies, and Cyber Crime" isn't another spy memoir. It's a field manual for surviving in a world where criminal syndicates have weaponized traditional espionage techniques against every single one of us. And business is booming—to the tune of $14 trillion annually, making cybercrime the third largest economy on Earth, bigger than Japan and Germany combined.

"They're not attacking our computers," Eric told me during our conversation. "They're attacking you and me personally. They're fooling us into just handing everything over."

The pandemic accelerated everything. We were thrown into a completely virtual environment before security was ready, and that moment marks the biggest single rise of cybercrime in history. While most of us were stuck at home adjusting to Zoom calls, cyber criminals were innovating faster than anyone else, studying how we communicate, work, and associate in digital spaces.

Here's what makes Eric's perspective invaluable: he understands both sides of this war. He spent his FBI career using traditional counterintelligence techniques—deception, impersonation, infiltration, confidence schemes, exploitation, and destruction—to catch spies. Now he watches cyber criminals deploy those exact same tactics against us through our screens.

The top cybercrime gangs have actually hired active intelligence officers from countries like Russia, China, and Iran. These spies moonlight as cyber criminals, bringing state-level tradecraft to street-level scams.
It's sophisticated, organized, and shockingly effective.

Consider the romance scam Eric describes in the book: a widowed grandfather receives a simple text saying "Hey." Being polite, he responds "Sorry, wrong number." That single response marks him as a target. Over weeks, a "friendship" develops. His new best friend chats with him daily, learns his hopes and dreams, then introduces him to an "investment opportunity."

Within months, the grandfather has invested his entire pension—hundreds of thousands of dollars—into what looks like a legitimate cryptocurrency platform with secure logins and rising account values. When he tries to withdraw money for a family vacation, his friend vanishes. The company doesn't exist. The website was a dummy. Everything is gone.

That's not a quick phishing scam—that's a confidence scheme straight from the spy playbook, adapted for our Hybrid Analog Digital Society where we live in little boxes on screens, increasingly disconnected from physical reality.

The sophistication extends to ransomware operations. These aren't kids in hoodies—they're organized businesses with affiliate programs, marketing departments, tech support teams, and customer service. They're polite as they negotiate your ransom. They help you decrypt your data after you pay. Some even donate to charities. And yes, many victims get hit again a month later by the same group.

What struck me most about our conversation was Eric's emphasis on preparation over panic. He's developed a methodology called PAID: Prepare (ahead of the attack), Assess (constantly look for threats), Investigate (when you identify something suspicious), and Decide (take action).

"You don't want to be in a dark alley before you think about physical security," he explained. "Same with cyber. Don't wait until you're in the middle of a ransomware attack to build your defenses. That's ten times more expensive."

The scale of this threat hasn't fully registered with most people.
Cybercrime is projected to hit $18 trillion next year, yet individuals and companies alike operate as if attacks are rare events that happen to other people. The reality? It's not if you'll be attacked, it's when.

Eric wrote "Spies, Lies, and Cyber Crime" as if you're taking a training course at the FBI Academy for Cyber Criminals. The first part teaches you to think like a bad guy—to recognize deception, impersonation, and confidence schemes. The second part gives you the tools to defend yourself, whether you're protecting your family's data or running enterprise security.

One detail Eric insists on: every parent must read chapters 10 and 11 with their teenagers. The book addresses cyberbullying, exploitation, and social media dangers that have led to teen suicide. Some conversations are that critical.

As we closed our conversation, Eric demonstrated how vulnerable we've become. "How do you even know you're talking to me?" he asked. "I could be sitting here in my pajamas, typing what I want my avatar to say." He's right—deepfakes are that sophisticated now. His advice? Ask everyone in a video meeting to pick up a pen or wave their hands. Avatars can't do that yet.

The word "yet" hangs heavy in that sentence.

We're moving into a world where trust is the most valuable thing on Earth, and cyber criminals are actively destroying it for profit. Eric O'Neill spent his career catching spies who betrayed their country. Now he's teaching us to catch criminals who are betraying all of us, one click at a time.

Subscribe to continue these essential conversations about security, technology, and society. In our increasingly digital world, understanding how cyber criminals think isn't optional anymore—it's survival.

____________

About the book
Spies, Lies and Cybercrime

Spies, Lies and Cybercrime will appeal to every person curious about or frightened by the prospect of a cyberattack, from students and retirees to the C-Suite and boardroom.
Readers will take up arms in the current cyber war instead of fleeing while the village burns. They will become email archeologists and threat hunters, questioning every movement online and spotting the attackers hiding in every shadow. They will learn how to embed cybersecurity intrinsically into the culture and technology of their businesses and lives. Only then can we begin to move the needle toward a world safe from cyberattacks.

Find it on: https://ericoneill.net
-
212
Everyone Is Protecting My Password, But Who Is Protecting My Toilet Paper? - Interview with Amberley Brady | AISA CyberCon Melbourne 2025 Coverage | On Location with Sean Martin and Marco Ciappelli
AISA CyberCon Melbourne | October 15-17, 2025

Empty shelves trigger something primal in us now. We've lived through the panic, the uncertainty, the realization that our food supply isn't as secure as we thought. Amberley Brady hasn't forgotten that feeling, and she's turned it into action.

Speaking with her from Florence to Sydney ahead of AISA CyberCon in Melbourne, I discovered someone who came to cybersecurity through an unexpected path—studying law, working in policy, but driven by a singular passion for food security. When COVID-19 hit Australia in early 2020 and grocery store shelves emptied, Amberley couldn't shake the question: what happens if this keeps happening?

Her answer was to build realfoodprice.com.au, a platform tracking food pricing transparency across Australia's supply chain. It's based on the Hungarian model, which within three months saved consumers 50 million euros simply by making prices visible from farmer to wholesaler to consumer. The markup disappeared almost overnight once transparency arrived.

"Once you demonstrate transparency along the supply chain, you see where the markup is," Amberley explained. She gave me an example that hit home: watermelon farmers were getting paid 40 cents per kilo while their production costs ran between $1.00 and $1.50. Meanwhile, consumers paid $2.50 to $2.99 year-round. Someone in the middle was profiting while farmers lost money on every harvest.

But this isn't just about fair pricing—it's about critical infrastructure that nobody's protecting. Australia produces food for 70 million people, far more than its own population needs.
That food moves through systems, across borders, through supply chains that depend entirely on technology most farmers never think about in cybersecurity terms.

The new autonomous tractors collecting soil data? That information goes somewhere. The sensors monitoring crop conditions? Those connect to systems someone else controls. China recognized this vulnerability years ago—with 20% of the world's population but only 7% of arable land, they understood that food security is national security.

At CyberCon, Amberley is presenting two sessions that challenge the cybersecurity community to expand their thinking. "Don't Outsource Your Thinking" tackles what she calls "complacency creep"—our growing trust in AI that makes us stop questioning, stop analyzing with our gut instinct. She argues for an Essential Nine in Australia's cybersecurity framework, adding the human firewall to the technical Essential Eight.

Her second talk, cheekily titled "Everyone is Protecting My Password, But No One's Protecting My Toilet Paper," addresses food security directly. It's provocative, but that's the point. We saw what happened in Japan recently with the rice crisis—the same panic buying, the same distrust, the same empty shelves that COVID taught us to fear.

"We will run to the store," Amberley said. "That's going to be human behavior because we've lived through that time." And here's the cybersecurity angle: those panics can be manufactured. A fake image of empty shelves, an AI-generated video, strategic disinformation—all it takes is triggering that collective memory.

Amberley describes herself as an early disruptor in the agritech cybersecurity space, and she's right. Most cybersecurity professionals think about hospitals, utilities, financial systems.
They don't think about the autonomous vehicles in fields, the sensor networks in soil, the supply chain software moving food across continents.

But she's starting the conversation, and CyberCon's audience—increasingly diverse, including people from HR, risk management, and policy—is ready for it. Because at the end of the day, everyone has to eat. And if we don't start thinking about the cyber vulnerabilities in how we grow, move, and price food, we're leaving our most basic need unprotected.

AISA CyberCon Melbourne runs October 15-17, 2025. Virtual coverage provided by ITSPmagazine.

GUEST:
Amberley Brady, Food Security & Cybersecurity Advocate, Founder of realfoodprice.com.au | On LinkedIn: https://www.linkedin.com/in/amberley-b-a62022353/

HOSTS:
Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.marcociappelli.com

Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage

Want to share an Event Briefing as part of our event coverage? Learn More 👉 https://itspm.ag/evtcovbrf

Want Sean and Marco to be part of your event or conference? Let Us Know 👉 https://www.itspmagazine.com/contact-us

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
211
Beyond Blame: Navigating the Digital World with Our Kids - Interview with Jacqueline (JJ) Jayne | AISA CyberCon Melbourne 2025 Coverage | On Location with Sean Martin and Marco Ciappelli
AISA CyberCon Melbourne | October 15-17, 2025

There's something fundamentally broken in how we approach online safety for young people. We're quick to point fingers—at tech companies, at schools, at kids themselves—but Jacqueline Jayne (JJ) wants to change that conversation entirely.

Speaking with her from Florence while she prepared for her session at AISA CyberCon Melbourne this week, it became clear that JJ understands what many in the cybersecurity world miss: this isn't a technical problem that needs a technical solution. It's a human problem that requires us to look in the mirror.

"The online world reflects what we've built for them," JJ told me, referring to our generation. "Now we need to step up and help fix it."

Her session, "Beyond Blame: Keeping Our Kids Safe Online," tackles something most cybersecurity professionals avoid—the uncomfortable truth that being an IT expert doesn't automatically make you equipped to protect the young people in your life. Last year's presentation at CyberCon drew a full house, with nearly every hand raised when she asked who came because of a kid in their world.

That's the fascinating contradiction JJ exposes: rooms full of cybersecurity professionals who secure networks and defend against sophisticated attacks, yet find themselves lost when their own children navigate TikTok, Roblox, or encrypted messaging apps.

The timing couldn't be more relevant. With Australia implementing a social media ban for anyone under 16 starting December 10, 2025, and similar restrictions appearing globally, parents and carers face unprecedented challenges. But as JJ points out, banning isn't understanding, and restriction isn't education.

One revelation from our conversation particularly struck me—the hidden language of emojis. What seems innocent to adults carries entirely different meanings across demographics, from teenage subcultures to, disturbingly, predatory networks online.
An explosion emoji doesn't just mean "boom" anymore. Context matters, and most adults are speaking a different digital dialect than their kids.

JJ, who successfully guided her now 19-year-old son through the gaming and social media years, isn't offering simple solutions, because there aren't any. What she provides instead are conversation starters, resources tailored to different age groups, and even AI prompts that parents can customize for their specific situations.

The session reflects a broader shift happening at events like CyberCon. It's no longer just IT professionals in the room. HR representatives, risk managers, educators, and parents are showing up because they've realized that digital safety doesn't respect departmental boundaries or professional expertise.

"We were analog brains in a digital world," JJ said, capturing our generational position perfectly. But today's kids? They're born into this interconnectedness, and COVID accelerated everything to a point where taking it away isn't an option.

The real question isn't who to blame. It's what role each of us plays in creating a safer digital environment. And that's a conversation worth having—whether you're at the Melbourne Convention and Exhibition Centre this week or joining virtually from anywhere else.

AISA CyberCon Melbourne runs October 15-17, 2025. Virtual coverage provided by ITSPmagazine.

___________

GUEST:
Jacqueline (JJ) Jayne, Reducing human error in cyber and teaching 1 million people online safety. On LinkedIn: https://www.linkedin.com/in/jacquelinejayne/

HOSTS:
Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.marcociappelli.com
-
210
AI Creativity Expert Reveals Why Machines Need More Freedom - Creative Machines: AI, Art & Us Book Interview | A Conversation with Author Maya Ackerman | Redefining Society And Technology Podcast With Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

______

Title: AI Creativity Expert Reveals Why Machines Need More Freedom - Creative Machines: AI, Art & Us Book Interview | A Conversation with Author Maya Ackerman | Redefining Society And Technology Podcast With Marco Ciappelli

______

Guest: Maya Ackerman, PhD
Generative AI Pioneer | Author | Keynote Speaker
On LinkedIn: https://www.linkedin.com/in/mackerma/
Website: http://www.maya-ackerman.com

_____

Short Introduction: Dr. Maya Ackerman, AI researcher and author of "Creative Machines: AI, Art, and Us," challenges our assumptions about artificial intelligence and creativity. She argues that ChatGPT is intentionally limited, that hallucinations are features rather than bugs, and that we must stop treating AI as an all-knowing oracle in our Hybrid Analog Digital Society.

_____

Article: Dr. Maya Ackerman is a pioneer in the generative AI industry, associate professor of Computer Science and Engineering at Santa Clara University, and co-founder/CEO of WaveAI, one of the earliest generative AI startups. Ackerman has been researching generative AI models for text, music, and art since 2014 and has been an early advocate for human-centered generative AI, bringing awareness to the power of AI to profoundly elevate human creativity. Under her leadership as co-founder and CEO, WaveAI has emerged as a leader in musical AI, benefiting millions of artists and creators with its products LyricStudio and MelodyStudio.

Dr. Ackerman's expertise and innovative vision have earned her numerous accolades, including being named a "Woman of Influence" by the Silicon Valley Business Journal. She is a regular feature in prestigious media outlets and has spoken on notable stages around the world, such as the United Nations, IBM Research, and Stanford University. Her insights into the convergence of AI and creativity are shaping the future of both technology and music.
A University of Waterloo PhD and Caltech Postdoc, her unique blend of scholarly rigor and entrepreneurial acumen makes her a sought-after voice in discussions about the practical and ethical implications of AI in our rapidly evolving digital world.

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

_____________________________

This Episode's Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

⸻ Podcast Summary ⸻

I had one of those conversations that makes you question everything you thought you knew about artificial intelligence and creativity. Dr. Maya Ackerman, generative AI pioneer and author of "Creative Machines: AI, Art, and Us," walked me through why the AI tools millions of us use every day are deliberately held back from being truly creative.

⸻ Article ⸻

We talk about AI hallucinations like they're bugs that need fixing. Glitches in the matrix. Errors to be eliminated. But what if we've got it completely backward?

Dr. Maya Ackerman sat in front of her piano—a detail that matters more than you'd think—and told me something that made me question everything I thought I understood about artificial intelligence and creativity. The AI we use every day, the ChatGPT that millions rely on for everything from writing emails to generating ideas, is intentionally held back from being truly creative.

Let that sink in for a moment. ChatGPT, the tool millions use daily, is designed to be convergent rather than divergent.
It's built to replace search engines, to give us "correct" answers, to be an all-knowing oracle. And that's exactly the problem.

Maya's journey into this field began ten years ago, long before generative AI became the buzzword du jour. Back in 2015, she made what her employer called a "risky decision"—switching her research focus to computational creativity, the academic precursor to what we now call generative AI. By 2017, she'd launched one of the earliest generative AI startups, WaveAI, helping people write songs. Investors told her the whole direction didn't make sense. Then came late 2022, and suddenly everyone understood.

What fascinates me about Maya's perspective is how she frames AI as humanity's collective consciousness made manifest. We wrote, we created the printing press, we built the internet, we filled it with our knowledge and our forums and our social media—and then we created a functioning brain from it. As she puts it, we can now talk with humanity's collective consciousness, including what Carl Jung called the collective shadow—both the brilliance and the biases.

This is where our conversation in our Hybrid Analog Digital Society gets uncomfortable but necessary. When AI exhibits bias, when it hallucinates, when it creates something that disturbs us—it's reflecting us back to ourselves. It learned from our data, our patterns, our collective Western consciousness. We participate in these biases to various degrees, whether we admit it or not. AI becomes a mirror we can't look away from.

But here's where Maya's argument becomes revolutionary: we need to stop wanting AI to be perfect. We need to embrace its capacity to hallucinate, to be imaginative, to explore new possibilities. The word "hallucination" itself needs reclaiming. In both humans and machines, hallucination represents the courage to go beyond normal boundaries, to re-envision reality in ways that might work better for us.

The creative process requires divergence—a vast open space of new possibilities where you don't know in advance what will have value. It takes bravery, guts, and a willingness to fall flat on your face. But ChatGPT isn't built for that. It's designed to follow patterns, to be consistent, to give you the same ABAB rhyming structure every time you ask for lyrics. Try using it for creative writing, and you'll notice the template, the recognizable vibe that becomes stale after a few uses.

Maya argues that machines designed specifically for creativity—like Midjourney for images or her own WaveAI for music—are far more creative than ChatGPT precisely because they're built to be divergent rather than convergent. They're allowed to get things wrong, to be imaginative, to explore. ChatGPT's creativity is intentionally kept down because there's an inherent conflict between being an all-knowing oracle and being creative.

This brings us to a dangerous illusion we're collectively buying into: the idea that AI can be our arbitrator of truth. Maya grew up on three continents before age 13, and she points out that World War II is talked about so differently across cultures you wouldn't recognize it as the same historical event. Reality isn't simple. The "truth" doesn't exist for most things that matter. Yet we're building AI systems that present themselves as having definitive answers, when really they're just expressing a Western perspective that aligns with their shareholders' interests.

What concerns me most from our conversation is Maya's observation that some people are already giving up their thinking to these machines. When she suggests they come up with their own ideas without using ChatGPT, they look at her like she's crazy. They honestly believe the machine is smarter than them. This collective hallucination—that we've built ourselves a God—is perhaps more dangerous than any individual AI capability.

The path forward, Maya argues, requires us to wake up. We need diverse AI tools built for specific purposes rather than one omnipotent system. We need machines designed to collaborate with humans and elevate human intelligence rather than foster dependence. We need to stop the consolidation of power that's creating copies of the same convergent thinking, and instead embrace the diversity of human imagination.

As someone who works at the intersection of technology and society, I find Maya's perspective refreshingly honest. She's not trying to sell us on AI's limitless potential, nor is she fear-mongering about its dangers. She's asking us to see it clearly—as powerful technology that's at least as flawed as we are, neither God nor demon, just a mind among minds.

Her book "Creative Machines: AI, Art, and Us" releases October 14, 2025, and it promises to rewrite the narrative from an informed insider's perspective rather than someone with something to gain from public belief. In our rapidly evolving Hybrid Analog Digital Society, we need more voices like Maya's—voices that challenge us to think differently about the tools we're building and the future we're creating.

Subscribe to continue these essential conversations about creativity, consciousness, and our coexistence with increasingly capable machines. Because the real question isn't whether machines can be creative—it's whether we'll have the wisdom to let them be.

__________________

Enjoy. Reflect.
Share with your fellow humans.

And if you haven't already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144

You're listening to this through the Redefining Society & Technology podcast, so while you're here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.

End of transmission.

____________________________

Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your Company?
👉 https://www.studioc60.com

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
209
Lo-Fi Music and the Art of Imperfection — When Technical Limitations Become Creative Liberation | Analog Minds in a Digital World: Part 2 | Musing On Society And Technology Newsletter | Article Written By Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

_____

Newsletter: Musing On Society And Technology
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144/

_____

Watch on YouTube: https://youtu.be/nFn6CcXKMM0

_____

My Website: https://www.marcociappelli.com

_____________________________

This Episode's Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

A new transmission from the Musing On Society and Technology Newsletter, by Marco Ciappelli

Reflections from Our Hybrid Analog-Digital Society

For years on the Redefining Society and Technology Podcast, I've explored a central premise: we live in a hybrid analog-digital society where the line between physical and virtual has dissolved into something more complex, more nuanced, and infinitely more human than we often acknowledge.

Introducing a New Series: Analog Minds in a Digital World
Reflections from Our Hybrid Analog-Digital Society

Part II: Lo-Fi Music and the Art of Imperfection — When Technical Limitations Become Creative Liberation

I've been testing small speakers lately. Nothing fancy—just little desktop units that cost less than a decent dinner. As I cycled through different genres, something unexpected happened. Classical felt lifeless, missing all its dynamic range. Rock came across harsh and tinny. Jazz lost its warmth and depth. But lo-fi? Lo-fi sounded... perfect.

Those deliberate imperfections—the vinyl crackle, the muffled highs, the compressed dynamics—suddenly made sense on equipment that couldn't reproduce perfection anyway. The aesthetic limitations of the music matched the technical limitations of the speakers.
It was like discovering that some songs were accidentally designed for constraints I never knew existed.

This moment sparked a bigger realization about how we navigate our hybrid analog-digital world: sometimes our most profound innovations emerge not from perfection, but from embracing limitations as features.

Lo-fi wasn't born in boardrooms or designed by committees. It emerged from bedrooms, garages, and basement studios where young musicians couldn't afford professional equipment. The 4-track cassette recorder—that humble Portastudio that let you layer instruments onto regular cassette tapes for a fraction of what professional studio time cost—became an instrument of democratic creativity. Suddenly, anyone could record music at home. Sure, it would sound "imperfect" by industry standards, but that imperfection carried something the polished recordings lacked: authenticity.

The Velvet Underground recorded on cheap equipment and made it sound revolutionary—so revolutionary that, as the saying goes, they didn't sell many records, but everyone who bought one started a band. Pavement turned bedroom recording into art. Beck brought lo-fi to the mainstream with "Mellow Gold." These weren't artists settling for less—they were discovering that constraints could breed creativity in ways unlimited resources never could.

Today, in our age of infinite digital possibility, we see a curious phenomenon: young creators deliberately adding analog imperfections to their perfectly digital recordings. They're simulating tape hiss, vinyl scratches, and tube saturation using software plugins. We have the technology to create flawless audio, yet we choose to add flaws back in.

What does this tell us about our relationship with technology and authenticity?

There's something deeply human about working within constraints. Twitter's original 140-character limit didn't stifle creativity—it created an entirely new form of expression. Instagram's square format—a deliberate homage to Polaroid's instant film—forced photographers to think differently about composition. Think about that for a moment: Polaroid's square format was originally a technical limitation of instant film chemistry and optics, yet it became so aesthetically powerful that decades later, a digital platform with infinite formatting possibilities chose to recreate that constraint. Even more, Instagram added filters that simulated the color shifts, light leaks, and imperfections of analog film. We had achieved perfect digital reproduction, and immediately started adding back the "flaws" of the technology we'd left behind.

The same pattern appears in video: Super 8 film gave you exactly 3 minutes and 12 seconds per cartridge at standard speed—grainy, saturated, light-leaked footage that forced filmmakers to be economical with every shot. Today, TikTok recreates that brevity digitally, spawning a generation of micro-storytellers who've mastered the art of the ultra-short form, sometimes even adding Super 8-style filters to their perfect digital video.

These platforms succeeded not despite their limitations, but because of them. Constraints force innovation. They make the infinite manageable. They create a shared language of creative problem-solving.

Lo-fi music operates on the same principle. When you can't capture perfect clarity, you focus on capturing perfect emotion. When your equipment adds character, you learn to make that character part of your voice. When technical perfection is impossible, artistic authenticity becomes paramount.

This is profoundly relevant to how we think about artificial intelligence and human creativity today. As AI becomes capable of generating increasingly "perfect" content—flawless prose, technically superior compositions, aesthetically optimized images—we find ourselves craving the beautiful imperfections that mark something as unmistakably human.

Walking through any record store today, you'll see teenagers buying vinyl albums they could stream in perfect digital quality for free. They're choosing the inconvenience of physical media, the surface noise, the ritual of dropping the needle. They're purchasing imperfection at a premium.

This isn't nostalgia—most of these kids never lived in the vinyl era. It's something deeper: a recognition that perfect reproduction might not equal perfect experience. The crackle and warmth of analog playback creates what audiophiles call "presence"—a sense that the music exists in the same physical space as the listener.

Lo-fi music replicates this phenomenon in digital form. It takes the clinical perfection of digital audio and intentionally degrades it to feel more human. The compression, the limited frequency range, the background noise—these aren't bugs, they're features. They create the sonic equivalent of a warm embrace.

In our hyperconnected, always-optimized digital existence, lo-fi offers something precious: permission to be imperfect. It's background music that doesn't demand your attention, ambient sound that acknowledges life's messiness rather than trying to optimize it away.

Here's where it gets philosophically interesting: we're using advanced digital technology to simulate the limitations of obsolete analog technology. Young producers spend hours perfecting their "imperfect" sound, carefully curating randomness, precisely engineering spontaneity.

This creates a fascinating paradox. Is simulated authenticity still authentic? When we use AI-powered plugins to add "vintage" character to our digital recordings, are we connecting with something real, or just consuming a nostalgic fantasy?

I think the answer lies not in the technology itself, but in the intention behind it. Lo-fi creators aren't trying to fool anyone—the artifice is obvious. They're creating a shared aesthetic language that values emotion over technique, atmosphere over precision, humanity over perfection.

In a world where algorithms optimize everything for maximum engagement, lo-fi represents a conscious choice to optimize for something else entirely: comfort, focus, emotional resonance. It's a small rebellion against the tyranny of metrics.

As artificial intelligence becomes increasingly capable of generating "perfect" content, the value of obviously human imperfection may paradoxically increase. The tremor in a hand-drawn line, the slight awkwardness in authentic conversation, the beautiful inefficiency of analog thinking—these become markers of genuine human presence.

The challenge isn't choosing between analog and digital, perfection and imperfection. It's learning to consciously navigate between them, understanding when limitations serve us and when they constrain us, recognizing when optimization helps and when it hurts.

My small speakers taught me something important: sometimes the best technology isn't the one with the most capabilities, but the one whose limitations align with our human needs. Lo-fi music sounds perfect on imperfect speakers because both embrace the same truth—that beauty often emerges not from the absence of flaws, but from making peace with them.

In our quest to build better systems, smarter algorithms, and more efficient processes, we might occasionally pause to ask: what are we optimizing for? And what might we be losing in the pursuit of digital perfection?

The lo-fi phenomenon—and its parallels in photography, video, and every art form we've digitized—reveals something profound about human nature.
We are not creatures built for perfection. We are shaped by friction, by constraint, by the beautiful accidents that occur when things don't work exactly as planned. The crackle of vinyl, the grain of film, the compression of cassette tape—these aren't just nostalgic affectations. They're reminders that imperfection is where humanity lives. That the beautiful inefficiency of analog thinking—messy, emotional, unpredictable—is not a bug to be fixed but a feature to be preserved.

Sometimes the most profound technology is the one that helps us remember what it means to be beautifully, imperfectly human. And maybe, in our hybrid analog-digital world, that's the most important thing we can carry forward.

Let's keep exploring what it means to be human in this Hybrid Analog Digital Society.

End of transmission.

______________________________________

📬 Enjoyed this article? Follow the newsletter here: https://www.linkedin.com/newsletters/7079849705156870144/

🌀 Let's keep exploring what it means to be human in this Hybrid Analog Digital Society.

Share this newsletter and invite anyone you think would enjoy it!

As always, let's keep thinking!

_____________________________________

Marco Ciappelli
ITSPmagazine | Co-Founder • CMO • Creative Director | ✓ Los Angeles ✓ Firenze

❖ Have you heard about Studio C60?
A Brand & Marketing Advisory For Cybersecurity And Tech Companies

✶ Learn more about me and my podcasts
✶ Follow me on LinkedIn
✶ Subscribe to my Newsletter

Connect with me across platforms:
Bluesky | Mastodon | Instagram | YouTube | Threads | TikTok

___________________________________________________________

Marco Ciappelli is Co-Founder and CMO of ITSPmagazine, a journalist, creative director, and host of podcasts exploring the intersection of technology, cybersecurity, and society. His work blends journalism, storytelling, and sociology to examine how technological narratives influence human behavior, culture, and social structures.

___________________________________________________________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
208
AI Will Replace Democracy: The Future of Government is Here. Or, is it? Let's discuss! | A Conversation with Eli Lopian | Redefining Society And Technology Podcast With Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

______

Title: AI Will Replace Democracy: The Future of Government is Here. Or, is it? Let's discuss! | A Conversation with Eli Lopian | Redefining Society And Technology Podcast With Marco Ciappelli

______

Guest: Eli Lopian
Founder of Typemock Ltd | Author of AIcracy: Beyond Democracy | AI & Governance Thought Leader
On LinkedIn: https://www.linkedin.com/in/elilopian/
Book: https://aicracy.ai

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

_____________________________

This Episode's Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

⸻ Podcast Summary ⸻

I had one of those conversations that makes you question everything you thought you knew about democracy, governance, and the future of human society. Eli Lopian, founder of Typemock and author of the provocative book on AI-cracy, walked me through what might be the most intriguing political theory I've encountered in years.

⸻ Article ⸻

Technology entrepreneur Eli Lopian joins Marco to explore "AI-cracy" - a revolutionary governance model where artificial intelligence writes laws based on abundance metrics while humans retain judgment.
This fascinating conversation examines how we might transition from broken democratic systems to AI-assisted governance in our evolving Hybrid Analog Digital Society.

Picture this scenario: you're sitting in a pub with friends, listening to them argue about which political rally to attend, and suddenly you realize something profound. As Eli told me, it's like watching people fight over which side of the train to sit on while the train itself is heading in completely the wrong direction. That metaphor perfectly captures where we are with democracy today.

Eli's background fascinates me - breaking free from a religious upbringing at 16, building a successful AI startup for the past decade, and now proposing something that sounds like science fiction but feels increasingly inevitable. His central premise stopped me in my tracks: no human being should be allowed to write laws anymore. Only AI should create legislation, guided by what he calls an "abundance metric" - essentially optimizing for human happiness, freedom, and societal wellbeing.

But here's where it gets really interesting. Eli isn't proposing we hand over control to a single AI overlord. Instead, he envisions three separate AI systems - one controlled by the government, one by the opposition, and one by an NGO - all working with the same data but operated by different groups. They must reach identical conclusions for any law to proceed. If they disagree, human experts investigate why.

What struck me most was how this could actually restore direct democracy. In ancient Athens, every citizen participated in the polis. We can't do that with hundreds of millions of people, but AI could process everyone's input instantly. Imagine submitting your policy ideas directly to an AI system that responds within hours, explaining why your suggestion would or wouldn't improve societal abundance. It's like having the Athenian square scaled to modern complexity.

The safeguards Eli proposes reveal his deep understanding of human nature. No AI can judge humans - that remains strictly a human responsibility. Citizens don't vote for charismatic politicians anymore; they vote for actual policies. Every three years, people choose their preferred policies. Every decade, they set ambitious collective goals - cure cancer, reach Mars, whatever captures society's imagination.

Living in our Hybrid Analog Digital Society, we already see AI creeping into governance. Lawyers use AI, governments employ algorithms for efficiency, and citizens increasingly turn to ChatGPT for advice they once sought from doctors or therapists. Eli's insight is that we're heading toward AI governance whether we plan it or not - so why not design it properly from the start?

His most compelling point addresses a fear I share: that AI lacks creativity. Eli argues this is actually a feature, not a bug. AI generates rather than truly creates. The creative spark - proposing that universal basic income experiment, suggesting we test new social policies, imagining those decade-long goals - that remains uniquely human. AI simply processes our creativity faster and more fairly than our current broken systems.

The privacy question loomed large in our conversation. Eli proposes a brilliant separation: your personal AI mentor (helping you grow and find fulfillment) operates in complete isolation from the governance AI system. Like quantum physics, what happens in the personal realm stays there. The governance AI only sees aggregated societal data, never individual conversations.

I kept thinking about trust throughout our discussion. We've already surrendered massive amounts of personal data to social media platforms. We share things on Instagram and TikTok that would have horrified us twenty years ago. Perhaps we'll adapt to AI governance the same way we adapted to cloud computing, social media, and smartphones.

What excites me most is how this could give every citizen a real voice again. Not just during elections, but daily. Got an idea for improving your community? Submit it to the AI system. Receive thoughtful feedback about why it would or wouldn't work. Participate in creating the laws that govern your life rather than merely choosing between pre-packaged candidates every few years.

Whether Eli's AI-cracy becomes reality or remains theoretical, it forces us to confront a crucial question: if democracy is broken, what comes next? In our rapidly evolving technological society, maybe it's time to stop fighting over which side of the train offers the better view and start laying new tracks entirely.

__________________

Enjoy. Reflect. Share with your fellow humans.

And if you haven't already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144

You're listening to this through the Redefining Society & Technology podcast, so while you're here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.

End of transmission.

____________________________

Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your Company or in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
207
We Have All the Information, So Why Do We Know Less? | Analog Minds in a Digital World: Part 1 | Musing On Society And Technology Newsletter | Article Written By Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

_____

Newsletter: Musing On Society And Technology
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144/

_____

Watch on YouTube: https://youtu.be/nFn6CcXKMM0

_____

My Website: https://www.marcociappelli.com

_____________________________

This Episode's Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

We Have All the Information, So Why Do We Know Less?

Introducing: Reflections from Our Hybrid Analog-Digital Society

For years on the Redefining Society and Technology Podcast, I've explored a central premise: we live in a hybrid analog-digital society where the line between physical and virtual has dissolved into something more complex, more nuanced, and infinitely more human than we often acknowledge.

But with the explosion of generative AI, this hybrid reality isn't just a philosophical concept anymore—it's our lived experience. Every day, we navigate between analog intuition and digital efficiency, between human wisdom and machine intelligence, between the messy beauty of physical presence and the seductive convenience of virtual interaction.

This newsletter series will explore the tensions, paradoxes, and possibilities of being fundamentally analog beings in an increasingly digital world. We're not just using technology; we're being reshaped by it while simultaneously reshaping it with our deeply human, analog sensibilities.

Analog Minds in a Digital World: Part 1

We Have All the Information, So Why Do We Know Less?

I was thinking about my old set of encyclopedias the other day.
You know, those heavy volumes that sat on shelves like silent guardians of knowledge, waiting for someone curious enough to crack them open. When I needed to write a school report on, say, the Roman Empire, I'd pull out Volume R and start reading.

But here's the thing: I never just read about Rome.

I'd get distracted by Romania, stumble across something about Renaissance art, flip backward to find out more about the Reformation. By the time I found what I was originally looking for, I'd accidentally learned about three other civilizations, two art movements, and the invention of the printing press. The journey was messy, inefficient, and absolutely essential.

And if I was in a library... well then just imagine the possibilities.

Today, I ask Google, Claude or ChatGPT about the Roman Empire, and in thirty seconds, I have a perfectly formatted, comprehensive overview that would have taken me hours to compile from those dusty volumes. It's accurate, complete, and utterly forgettable.

We have access to more information than any generation in human history. Every fact, every study, every perspective is literally at our fingertips. Yet somehow, we seem to know less. Not in terms of data acquisition—we're phenomenal at that—but in terms of deep understanding, contextual knowledge, and what I call "accidental wisdom."

The difference isn't just about efficiency. It's about the fundamental way our minds process and retain information. When you physically search through an encyclopedia, your brain creates what cognitive scientists call "elaborative encoding"—you remember not just the facts, but the context of finding them, the related information you encountered, the physical act of discovery itself.

When AI gives us instant answers, we bypass this entire cognitive process. We get the conclusion without the journey, the destination without the map.
It's like being teleported to Rome without seeing the countryside along the way—technically efficient, but something essential is lost in translation.

This isn't nostalgia talking. I use AI daily for research, writing, and problem-solving. It's an incredible tool. But I've noticed something troubling: my tolerance for not knowing things immediately has disappeared. The patience required for deep learning—the kind that happens when you sit with confusion, follow tangents, make unexpected connections—is atrophying like an unused muscle.

We're creating a generation of analog minds trying to function in a digital reality that prioritizes speed over depth, answers over questions, conclusions over curiosity. And in doing so, we might be outsourcing the very process that makes us wise.

The Ancient Greeks had a concept called "metis"—practical wisdom that comes from experience, pattern recognition, and intuitive understanding developed through continuous engagement with complexity. In Ancient Greek, metis (Μῆτις) means wisdom, skill, or craft, and it also describes a form of wily, cunning intelligence. It can refer to the pre-Olympian goddess of wisdom and counsel, who was the first wife of Zeus and mother of Athena, or it can refer to the concept of cunning intelligence itself, a trait exemplified by figures like Odysseus. It's the kind of knowledge you can't Google because it lives in the space between facts, in the connections your mind makes when it has time to wander, wonder, and discover unexpected relationships.

AI gives us information. But metis? That still requires an analog mind willing to get lost, make mistakes, and discover meaning in the margins.

The question isn't whether we should abandon these digital tools—they're too powerful and useful to ignore.
The question is whether we can maintain our capacity for the kind of slow, meandering, gloriously inefficient thinking that actually builds wisdom.

Maybe the answer isn't choosing between analog and digital, but learning to be consciously hybrid. Use AI for what it does best—rapid information processing—while protecting the slower, more human processes that transform information into understanding. We need to preserve the analog pathways of learning alongside digital efficiency.

Because in a world where we can instantly access any fact, the most valuable skill might be knowing which questions to ask—and having the patience to sit with uncertainty until real insight emerges from the continuous, contextual, beautifully inefficient process of analog thinking.

Next transmission: "The Paradox of Infinite Choice: Why Having Everything Available Means Choosing Nothing"

Let's keep exploring what it means to be human in this Hybrid Analog Digital Society.

End of transmission.

Marco
______________________________________
📬 Enjoyed this article? Follow the newsletter here: https://www.linkedin.com/newsletters/7079849705156870144/
🌀 Let's keep exploring what it means to be human in this Hybrid Analog Digital Society.
Share this newsletter and invite anyone you think would enjoy it!
As always, let's keep thinking!
_____________________________________
Marco Ciappelli
ITSPmagazine | Co-Founder • CMO • Creative Director | ✓ Los Angeles ✓ Firenze
❖ Have you heard about ITSPmagazine Studio? A Brand & Marketing Advisory For Cybersecurity And Tech Companies
✶ Learn more about me and my podcasts
✶ Follow me on LinkedIn
✶ Subscribe to my Newsletter
Connect with me across platforms:
Bluesky | Mastodon | Instagram | YouTube | Threads | TikTok
___________________________________________________________
Marco Ciappelli is Co-Founder and CMO of ITSPmagazine, a journalist, creative director, and host of podcasts exploring the intersection of technology, cybersecurity, and society. His work blends journalism, storytelling, and sociology to examine how technological narratives influence human behavior, culture, and social structures.
___________________________________________________________
This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.
Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
206
Tech Entrepreneur and Author's AI Prediction - The Last Book Written by a Human Interview | A Conversation with Jeff Burningham | Redefining Society And Technology Podcast With Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
______
Title: Tech Entrepreneur and Author's AI Prediction - The Last Book Written by a Human Interview | A Conversation with Jeff Burningham | Redefining Society And Technology Podcast With Marco Ciappelli
______
Guest: Jeff Burningham
Tech Entrepreneur. Investor. National Best Selling Author. Explorer of Human Potential. My book #TheLastBookWrittenByAHuman is available now.
On LinkedIn: https://www.linkedin.com/in/jeff-burningham-15a01a7b/
Book: https://www.simonandschuster.com/books/The-Last-Book-Written-by-a-Human/Jeff-Burningham/9781637634561
Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
_____________________________
This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
⸻ Podcast Summary ⸻
Entrepreneur and author Jeff Burningham explores how artificial intelligence serves as a cosmic mirror reflecting humanity's true nature. Through his book "The Last Book Written by a Human," he argues that as machines become more intelligent, humans must become wiser. This conversation examines our collective journey through disruption, reflection, transformation, and evolution in our Hybrid Analog Digital Society.
⸻ Article ⸻
I had one of those conversations that made me pause and question everything I thought I knew about our relationship with technology.
Jeff Burningham, serial entrepreneur and author of "The Last Book Written by a Human: Becoming Wise in the Age of AI," joined me to explore a perspective that's both unsettling and profoundly hopeful.

What struck me most wasn't Jeff's impressive background—founding multiple tech companies, running for governor of Utah, building a $5 billion real estate empire. It was his spiritual awakening in Varanasi, India, where a voice in his head insisted he was a writer. That moment of disruption led to years of reflection and ultimately to a book that challenges us to see AI not as our replacement, but as our mirror.

"As our machines become more intelligent, our work as humans is to become more wise," Jeff told me. This isn't just a catchy phrase—it's the thesis of his entire work. He argues that AI functions as what he calls a "cosmic mirror to humanity," reflecting back to us exactly who we've become as a species. The question becomes: do we like what we see?

This perspective resonates deeply with how we exist in our Hybrid Analog Digital Society. We're no longer living separate digital and physical lives—we're constantly navigating both realms simultaneously. AI doesn't just consume our data; it reflects our collective behaviors, biases, and beliefs back to us in increasingly sophisticated ways.

Jeff structures his thinking around four phases that mirror both technological development and personal growth: disruption, reflection, transformation, and evolution. We're currently somewhere between reflection and transformation, he suggests, at a crucial juncture where we must choose between two games. The old game prioritizes cash as currency, power as motivation, and control as purpose. The new game he envisions centers on karma as currency, authenticity as motivation, and love as purpose.

What fascinates me is how this connects to the hero's journey—the narrative structure underlying every meaningful story from Star Wars to our own personal transformations.
Jeff sees AI's emergence as part of an inevitable journey, a necessary disruption that forces us to confront fundamental questions about consciousness, creativity, and what makes us human.

But here's where it gets both beautiful and challenging: as machines handle more of our "doing," we're left with our "being." We're human beings, not human doings, as Jeff reminds us. This shift demands that we reconnect with our bodies, our wisdom, our imperfections—all the messy, beautiful aspects of humanity that AI cannot replicate.

The conversation reminded me why I chose "Redefining" for this podcast's title. We're not just adapting to new technology; we're fundamentally reexamining what it means to be human in an age of artificial intelligence. This isn't about finding the easy button or achieving perfect efficiency—it's about embracing what makes us gloriously, imperfectly human.

Jeff's book launches August 19th, and while it won't literally be the last book written by a human, the title serves as both warning and invitation. If we don't actively choose to write our own story—if we don't rehumanize ourselves while consciously shaping AI's development—we might find ourselves spectators rather than authors of our own future.

Subscribe wherever you get your podcasts, and join me on YouTube for the full experience. Let's continue these essential conversations about technology and society—because in our rapidly evolving world, the most important question isn't what AI can do for us, but who we choose to become alongside it.

Cheers,
Marco

⸻ Keywords ⸻
AI technology, artificial intelligence, future of AI, business podcast, entrepreneur interview, technology trends, tech entrepreneur, business mindset, innovation podcast, AI impact, startup founder, tech trends 2025, AI business, technology interview, entrepreneurship success
__________________
Enjoy.
Reflect. Share with your fellow humans.

And if you haven't already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144

You're listening to this through the Redefining Society & Technology podcast, so while you're here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.

End of transmission.
____________________________
Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com
Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9
Are you interested in Promotional Brand Stories for your Company and in Sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast
-
205
The First Smartphone Was a Transistor Radio — How a Tiny Device Rewired Youth Culture and Predicted Our Digital Future | Musing On Society And Technology Newsletter | Article Written By Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
_____
Newsletter: Musing On Society And Technology
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144/
_____
Watch on Youtube: https://youtu.be/OYBjDHKhZOM
_____
My Website: https://www.marcociappelli.com
_____________________________
This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

The First Smartphone Was a Transistor Radio — How a Tiny Device Rewired Youth Culture and Predicted Our Digital Future

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

I've been collecting vintage radios lately—just started, really—drawn to their analog souls in ways I'm still trying to understand. Each one I find reminds me of a small, battered transistor radio from my youth. It belonged to my father, and before that, probably my grandfather. The leather case was cracked, the antenna wobbled, and the dial drifted if you breathed on it wrong. But when I was sixteen, sprawled across my bedroom floor in that small town near Florence with homework scattered around me, this little machine was my portal to everything that mattered.

Late at night, I'd start by chasing the latest hits and local shows on FM, but then I'd venture into the real adventure—tuning through the static on AM and shortwave frequencies. Voices would emerge from the electromagnetic soup—music from London, news from distant capitals, conversations in languages I couldn't understand but somehow felt.
That radio gave me something I didn't even know I was missing: the profound sense of belonging to a world much bigger than my neighborhood, bigger than my small corner of Tuscany.

What I didn't realize then—what I'm only now beginning to understand—is that I was holding the first smartphone in human history.

Not literally, of course. But functionally? Sociologically? That transistor radio was the prototype for everything that followed: the first truly personal media device that rewired how young people related to the world, to each other, and to the adults trying to control both.

But to understand why the transistor radio was so revolutionary, we need to trace radio's remarkable journey through the landscape of human communication—a journey that reveals patterns we're still living through today.

When Radio Was the Family Hearth

Before my little portable companion, radio was something entirely different. In the 1930s, radio was furniture—massive, wooden, commanding the living room like a shrine to shared experience. Families spent more than four hours a day listening together, with radio ownership reaching nearly 90 percent by 1940. From American theaters that wouldn't open until after "Amos 'n' Andy" to British families gathered around their wireless sets, from RAI broadcasts bringing opera into Tuscan homes—entire communities synchronized their lives around these electromagnetic rituals.

Radio didn't emerge in a media vacuum, though. It had to find its place alongside the dominant information medium of the era: newspapers. The relationship began as an unlikely alliance. In the early 1920s, newspapers weren't threatened by radio—they were actually radio's primary boosters, creating tie-ins with broadcasts and even owning stations. Detroit's WWJ was owned by The Detroit News, initially seen as "simply another press-supported community service."

But then came the "Press-Radio War" of 1933-1935, one of the first great media conflicts of the modern age.
Newspapers objected when radio began interrupting programs with breaking news, arguing that instant news delivery would diminish paper sales. The 1933 Biltmore Agreement tried to restrict radio to just two five-minute newscasts daily—an early attempt at what we might now recognize as media platform regulation.

Sound familiar? The same tensions we see today between traditional media and digital platforms, between established gatekeepers and disruptive technologies, were playing out nearly a century ago. Rather than one medium destroying the other, they found ways to coexist and evolve—a pattern that would repeat again and again.

By the mid-1950s, when the transistor was perfected, radio was ready for its next transformation.

The Real Revolution Was Social, Not Technical

This is where my story begins, but it's also where radio's story reaches its most profound transformation. The transistor radio didn't just make radio portable—it fundamentally altered the social dynamics of media consumption and youth culture itself.

Remember, radio had spent its first three decades as a communal experience. Parents controlled what the family heard and when. But transistor radios shattered this control structure completely, arriving at precisely the right cultural moment. The post-WWII baby boom had created an unprecedented youth population with disposable income, and rock and roll was exploding into mainstream culture—music that adults often disapproved of, music that spoke directly to teenage rebellion and independence.

For the first time in human history, young people had private, personal access to media. They could take their music to bedrooms, to beaches, anywhere adults weren't monitoring.
They could tune into stations playing Chuck Berry, Elvis, and Little Richard without parental oversight—and in many parts of Europe, they could discover the rebellious thrill of pirate radio stations broadcasting rock and roll from ships anchored just outside territorial waters, defying government regulations and cultural gatekeepers alike. The transistor radio became the soundtrack of teenage autonomy, the device that let youth culture define itself on its own terms.

The timing created a perfect storm: pocket-sized technology collided with a new musical rebellion, creating the first "personal media bubble" in human history—and the first generation to grow up with truly private access to the cultural forces shaping their identity.

The parallels to today's smartphone revolution are impossible to ignore. Both devices delivered the same fundamental promise: the ability to carry your entire media universe with you, to access information and entertainment on your terms, to connect with communities beyond your immediate physical environment.

But there's something we've lost in translation from analog to digital. My generation with transistor radios had to work for connection. We had to hunt through static, tune carefully, wait patiently for distant signals to emerge from electromagnetic chaos. We learned to listen—really listen—because finding something worthwhile required skill, patience, and analog intuition.

This wasn't inconvenience; it was meaning-making. The harder you worked to find something, the more it mattered when you found it. The more skilled you became at navigating radio's complex landscape, the richer your discoveries became.

What the Transistor Radio Taught Us About Tomorrow

Radio's evolution illustrates a crucial principle that applies directly to our current digital transformation: technologies don't replace each other—they find new ways to matter. Printing presses didn't become obsolete when radio arrived. Radio adapted when television emerged.
Today, radio lives on in podcasts, streaming services, internet radio—the format transformed, but the essential human need it serves persists.

When I was sixteen, lying on that bedroom floor with my father's radio pressed to my ear, I was doing exactly what teenagers do today with their smartphones: using technology to construct identity, to explore possibilities, to imagine myself into larger narratives.

The medium has changed; the human impulse remains constant. The transistor radio taught me that technology's real power isn't in its specifications or capabilities—it's in how it reshapes the fundamental social relationships that define our lives.

Every device that promises connection is really promising transformation: not just of how we communicate, but of who we become through that communication. The transistor radio was revolutionary not because it was smaller or more efficient than tube radios, but because it created new forms of human agency and autonomy.

Perhaps that's the most important lesson for our current moment of digital transformation. As we worry about AI replacing human creativity, social media destroying real connection, or smartphones making us antisocial, radio's history suggests a different possibility: technologies tend to find their proper place in the ecosystem of human needs, augmenting rather than replacing what came before.

As Marshall McLuhan understood, "the medium is the message"—to truly understand what's happening to us in this digital age, we need to understand the media themselves, not just the content they carry. And that's exactly the message I'll keep exploring in future newsletters—going deeper into how we can understand the media to understand the messages, and what that means for our hybrid analog-digital future.

The frequency is still there, waiting. You just have to know how to tune in.

__________ End of transmission.

📬 Enjoyed this article?
Follow the newsletter here: https://www.linkedin.com/newsletters/7079849705156870144/
🌀 Let's keep exploring what it means to be human in this Hybrid Analog Digital Society.
Share this newsletter and invite anyone you think would enjoy it!
As always, let's keep thinking!
— Marco
https://www.marcociappelli.com
___________________________________________________________
Marco Ciappelli is Co-Founder and CMO of ITSPmagazine, a journalist, creative director, and host of podcasts exploring the intersection of technology, cybersecurity, and society. His work blends journalism, storytelling, and sociology to examine how technological narratives influence human behavior, culture, and social structures.
___________________________________________________________
This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.
Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.
-
204
From Broadcasting to AI Agents: Mark Smith on Technology's 100-Year Evolution at IBC 2025 Amsterdam | On Location Event Coverage Podcast With Sean Martin & Marco Ciappelli
I had one of those conversations that reminded me why I'm so passionate about exploring the intersection of technology and society. Speaking with Mark Smith, a board member at IBC and co-lead of their accelerator program, I found myself transported back to my roots in communication and media studies, but with eyes wide open to what's coming next.

Mark has spent over 30 years in media technology, including 23 years building Mobile World Congress in Barcelona. When someone with that depth of experience gets excited about what's happening now, you pay attention. And what's happening at IBC 2025 in Amsterdam this September is nothing short of a redefinition of how we create, distribute, and authenticate content.

The numbers alone are staggering: 1,350 exhibitors across 14 halls, nearly 300 speakers, 45,000 visitors. But what struck me wasn't the scale—it was the philosophical shift happening in how we think about media production. We're witnessing television's centennial year, with the first demonstrations happening in 1925, and yet we're simultaneously seeing the birth of entirely new forms of creative expression.

What fascinated me most was Mark's description of their Accelerator Media Innovation Program. Since 2019, they've run over 50 projects involving 350 organizations, creating what he calls "a safe environment" for collaboration. This isn't just about showcasing new gadgets—it's about solving real challenges that keep media professionals awake at night. In our Hybrid Analog Digital Society, the traditional boundaries between broadcaster and audience, between creator and consumer, are dissolving faster than ever.

The AI revolution in media production particularly caught my attention. Mark spoke about "AI assistant agents" and "agentic AI" with the enthusiasm of someone who sees liberation rather than replacement. As he put it, "It's an opportunity to take out a lot of laborious processes."
But more importantly, he emphasized that it's creating new jobs—who would have thought "AI prompter" would become a legitimate profession?

This perspective challenges the dystopian narrative often surrounding AI adoption. Instead of fearing the technology, the media industry seems to be embracing it as a tool for enhanced creativity. Mark's excitement was infectious when describing how AI can remove the "boring" aspects of production, allowing creative minds to focus on what they do best—tell stories that matter.

But here's where it gets really interesting from a sociological perspective: the other side of the screen. We talked about how streaming revolutionized content consumption, giving viewers unprecedented control over their experience. Yet Mark observed something I've noticed too—while the technology exists for viewers to be their own directors (choosing camera angles in sports, for instance), many prefer to trust the professional's vision. We're not necessarily seeking more control; we're seeking more relevance and authenticity.

This brings us to one of the most critical challenges of our time: content provenance. In a world where anyone can create content that looks professional, how do we distinguish between authentic journalism and manufactured narratives? Mark highlighted their work on C2PA (the Coalition for Content Provenance and Authenticity), developing tools that can sign and verify media sources, tracking where content has been manipulated.

This isn't just a technical challenge—it's a societal imperative. As Mark noted, YouTube is now the second most viewed platform in the UK. When user-generated content competes directly with traditional media, we need new frameworks for understanding truth and authenticity. The old editorial gatekeepers are gone; we need technological solutions that preserve trust while enabling creativity.

What gives me hope is the approach I heard from Mark and his colleagues.
They're not trying to control technology's impact on society—they're trying to shape it consciously. The IBC Accelerator Program represents something profound: an industry taking responsibility for its own transformation, creating spaces for collaboration rather than competition, focusing on solving real problems rather than just building cool technology.

The Google Hackfest they're launching this year perfectly embodies this philosophy. Young broadcast engineers and software developers working together on real challenges, supported by established companies like Formula E. It's not about replacing human creativity with artificial intelligence—it's about augmenting human potential with technological tools.

As I wrapped up our conversation, I found myself thinking about my own journey from studying sociology of communication in a pre-internet world to hosting podcasts about our digital transformation. Technology doesn't just change how we communicate—it changes who we are as communicators, as creators, as human beings sharing stories.

IBC 2025 isn't just a trade show; it's a glimpse into how we're choosing to redefine our relationship with media technology. And that choice—that conscious decision to shape rather than simply react—gives me genuine optimism about our Hybrid Analog Digital Society.

Subscribe to Redefining Society and Technology Podcast for more conversations exploring how we're consciously shaping our technological future. Your thoughts and reflections always enrich these discussions.
-
203
Teaser | Why Electric Vehicles Need an Apollo Program: The Renewable Energy Infrastructure Reality We're Ignoring | A Conversation with Mats Larsson | Redefining Society And Technology Podcast With Marco Ciappelli | Read by Tape3
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
______
Title: Why Electric Vehicles Need an Apollo Program: The Renewable Energy Infrastructure Reality We're Ignoring | A Conversation with Mats Larsson | Redefining Society And Technology Podcast With Marco Ciappelli
______
Guest: Mats Larsson
New book: "How Building the Future Really Works." Business developer, project manager and change leader – Speaker. I'm happy to connect!
On LinkedIn: https://www.linkedin.com/in/matslarsson-author/
Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
_____________________________
This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
⸻ Podcast Summary ⸻
Swedish business consultant Mats Larsson reveals why the electric vehicle transition requires Apollo program-scale government investment. We explore the massive infrastructure gap between EV ambitions and reality, from doubling power generation to training electrification architects. This isn't about building better cars—it's about reimagining our entire transportation ecosystem in our Hybrid Analog Digital Society.
______
Listen to the Full Episode:
https://redefiningsocietyandtechnologypodcast.com/episodes/why-electric-vehicles-need-an-apollo-program-the-renweable-energy-infrastructure-reality-were-ignoring-a-conversation-with-mats-larsson-redefining-society-and-technology-podcast-with-marco-ciappelli
__________________
Enjoy. Reflect.
Share with your fellow humans.And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.____________________________Listen to more Redefining Society & Technology stories and subscribe to the podcast:👉 https://redefiningsocietyandtechnologypodcast.comWatch the webcast version on-demand on YouTube:👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9Are you interested in Promotional Brand Stories for your Company and Sponsoring an ITSPmagazine Channel?👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
202
Why Electric Vehicles Need an Apollo Program: The Renewable Energy Infrastructure Reality We're Ignoring | A Conversation with Mats Larsson | Redefining Society And Technology Podcast With Marco Ciappelli
⸻ Podcast: Redefining Society and Technologyhttps://redefiningsocietyandtechnologypodcast.com ______Title: Why Electric Vehicles Need an Apollo Program: The Renewable Energy Infrastructure Reality We're Ignoring | A Conversation with Mats Larsson | Redefining Society And Technology Podcast With Marco Ciappelli______Guest: Mats Larsson New book: "How Building the Future Really Works." Business developer, project manager and change leader – Speaker. I'm happy to connect!On LinkedIn: https://www.linkedin.com/in/matslarsson-author/Host: Marco CiappelliCo-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍WebSite: https://marcociappelli.comOn LinkedIn: https://www.linkedin.com/in/marco-ciappelli/_____________________________This Episode’s SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak: https://itspm.ag/itspbcweb_____________________________⸻ Podcast Summary ⸻ Swedish business consultant Mats Larsson reveals why the electric vehicle transition requires Apollo program-scale government investment. We explore the massive infrastructure gap between EV ambitions and reality, from doubling power generation to training electrification architects. This isn't about building better cars—it's about reimagining our entire transportation ecosystem in our Hybrid Analog Digital Society.⸻ Article ⸻ When Reality Meets Electric Dreams: Lessons from the Apollo Mindset
I had one of those conversations that stops you in your tracks. 
Mats Larsson, calling in from Stockholm while I connected from Italy, delivered a perspective on electric vehicles that shattered my comfortable assumptions about our technological transition."First of all, we need to admit that we do not know exactly how to build the future. And then we need to start building it." This wasn't just Mats being philosophical—it was a fundamental admission that our approach to electrification has been dangerously naive.We've been treating the electric vehicle transition like upgrading our smartphones—expecting it to happen seamlessly, almost magically, while we go about our daily lives. But as Mats explained, referencing the Apollo program, monumental technological shifts require something we've forgotten how to do: comprehensive, sustained, coordinated investment in infrastructure we can't even fully envision yet.The numbers are staggering. To electrify all US transportation, we'd need to double power generation—that's the equivalent of 360 nuclear reactors worth of electricity. For hydrogen? Triple it. While Tesla and Chinese manufacturers gained their decade-plus advantage through relentless investment cycles, traditional automakers treated electric vehicles as "defensive moves," showcasing capability without commitment.But here's what struck me most: we need entirely new competencies. "Electrification strategists and electrification architects," as Mats called them—professionals who can design power grids capable of charging thousands of logistics vehicles daily, infrastructure that doesn't exist in our current planning vocabulary.We're living in this fascinating paradox of our Hybrid Analog Digital Society. We've become so accustomed to frictionless technological evolution—download an update, get new features—that we've lost appreciation for transitions requiring fundamental systemic change. 
Electric vehicles aren't just different cars; they're a complete reimagining of energy distribution, urban planning, and even our relationship with mobility itself.This conversation reminded me why I love exploring the intersection of technology and society. It's not enough to build better batteries or faster chargers. We're redesigning civilization's transportation nervous system, and we're doing it while pretending it's just another product launch.What excites me isn't just the technological challenge—it's the human coordination required. Like the Apollo program, this demands that rare combination of visionary leadership, sustained investment, and public will that transcends political cycles and market quarters.Listen to my full conversation with Mats, and let me know: Are we ready to embrace the Apollo mindset for our electric future?Subscribe wherever you get your podcasts, and join me on YouTube for the full experience. Let's continue this conversation—because in our rapidly evolving world, these discussions shape the future we're building together.Cheers,Marco⸻ Keywords ⸻ Electric Vehicles, Technology And Society, Infrastructure, Innovation, Sustainable Transport, electric vehicles, society and technology, infrastructure development, apollo program, energy transition, government investment, technological transformation, sustainable mobility, power generation, digital society__________________ Enjoy. Reflect. 
End of transmission.
-
201
The Narrative Attack Paradox: When Cybersecurity Lost the Ability to Detect Its Own Deception and the Humanity We Risk When Truth Becomes Optional | Reflections from Black Hat USA 2025 on the Marketing That Chose Fiction Over Facts
⸻ Podcast: Redefining Society and Technologyhttps://redefiningsocietyandtechnologypodcast.com _____________________________This Episode’s SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak: https://itspm.ag/itspbcweb_____________________________A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3August 18, 2025The Narrative Attack Paradox: When Cybersecurity Lost the Ability to Detect Its Own Deception and the Humanity We Risk When Truth Becomes OptionalReflections from Black Hat USA 2025 on Deception, Disinformation, and the Marketing That Chose Fiction Over FactsBy Marco CiappelliSean Martin, CISSP just published his analysis of Black Hat USA 2025, documenting what he calls the cybersecurity vendor "echo chamber." Reviewing over 60 vendor announcements, Sean found identical phrases echoing repeatedly: "AI-powered," "integrated," "reduce analyst burden." The sameness forces buyers to sift through near-identical claims to find genuine differentiation.This reveals more than a marketing problem—it suggests that different technologies are being fed into the same promotional blender, possibly a generative AI one, producing standardized output regardless of what went in. When an entire industry converges on identical language to describe supposedly different technologies, meaningful technical discourse breaks down.But Sean's most troubling observation wasn't about marketing copy—it was about competence. When CISOs probe vendor claims about AI capabilities, they encounter vendors who cannot adequately explain their own technologies. 
When conversations moved beyond marketing promises to technical specifics, answers became vague, filled with buzzwords about proprietary algorithms.Reading Sean's analysis while reflecting on my own Black Hat experience, I realized we had witnessed something unprecedented: an entire industry losing the ability to distinguish between authentic capability and generated narrative—precisely as that same industry was studying external "narrative attacks" as an emerging threat vector.The irony was impossible to ignore. Black Hat 2025 sessions warned about AI-generated deepfakes targeting executives, social engineering attacks using scraped LinkedIn profiles, and synthetic audio calls designed to trick financial institutions. Security researchers documented how adversaries craft sophisticated deceptions using publicly available content. Meanwhile, our own exhibition halls featured countless unverifiable claims about AI capabilities that even the vendors themselves couldn't adequately explain.But to understand what we witnessed, we need to examine the very concept that cybersecurity professionals were discussing as an external threat: narrative attacks. These represent a fundamental shift in how adversaries target human decision-making. Unlike traditional cyberattacks that exploit technical vulnerabilities, narrative attacks exploit psychological vulnerabilities in human cognition. Think of them as social engineering and propaganda supercharged by AI—personalized deception at scale that adapts faster than human defenders can respond. They flood information environments with false content designed to manipulate perception and erode trust, rendering rational decision-making impossible.What makes these attacks particularly dangerous in the AI era is scale and personalization. AI enables automated generation of targeted content tailored to individual psychological profiles. 
A single adversary can launch thousands of simultaneous campaigns, each crafted to exploit specific cognitive biases of particular groups or individuals.But here's what we may have missed during Black Hat 2025: the same technological forces enabling external narrative attacks have already compromised our internal capacity for truth evaluation. When vendors use AI-optimized language to describe AI capabilities, when marketing departments deploy algorithmic content generation to sell algorithmic solutions, when companies building detection systems can't detect the artificial nature of their own communications, we've entered a recursive information crisis.From a sociological perspective, we're witnessing the breakdown of social infrastructure required for collective knowledge production. Industries like cybersecurity have historically served as early warning systems for technological threats—canaries in the coal mine with enough technical sophistication to spot emerging dangers before they affect broader society.But when the canary becomes unable to distinguish between fresh air and poison gas, the entire mine is at risk.This brings us to something the literary world understood long before we built our first algorithm. Jorge Luis Borges, the Argentine writer, anticipated this crisis in his 1940s stories like "On Exactitude in Science" and "The Library of Babel"—tales about maps that become more real than the territories they represent and libraries containing infinite books, including false ones. In his fiction, simulations and descriptions eventually replace the reality they were meant to describe.We're living in a Borgesian nightmare where marketing descriptions of AI capabilities have become more influential than actual AI capabilities. 
When a vendor's promotional language about their AI becomes more convincing than a technical demonstration, when buyers make decisions based on algorithmic marketing copy rather than empirical evidence, we've entered that literary territory where the map has consumed the landscape. And we've lost the ability to distinguish between them.The historical precedent is the 1938 War of the Worlds broadcast, which created mass hysteria from fiction. But here's the crucial difference: Welles was human, the script was human-written, the performance required conscious participation, and the deception was traceable to human intent. Listeners had to actively choose to believe what they heard.Today's AI-generated narratives operate below the threshold of conscious recognition. They require no active participation—they work by seamlessly integrating into information environments in ways that make detection impossible even for experts. When algorithms generate technical claims that sound authentic to human evaluators, when the same systems create both legitimate documentation and marketing fiction, we face deception at a level Welles never imagined: the algorithmic manipulation of truth itself.The recursive nature of this problem reveals itself when you try to solve it. This creates a nearly impossible situation. How do you fact-check AI-generated claims about AI using AI-powered tools? How do you verify technical documentation when the same systems create both authentic docs and marketing copy? When the tools generating problems and solving problems converge into identical technological artifacts, conventional verification approaches break down completely.My first Black Hat article explored how we risk losing human agency by delegating decision-making to artificial agents. But this goes deeper: we risk losing human agency in the construction of reality itself. 
When machines generate narratives about what machines can do, truth becomes algorithmically determined rather than empirically discovered.
Marshall McLuhan famously said "We shape our tools, and thereafter they shape us." But he couldn't have imagined tools that reshape our perception of reality itself. We haven't just built machines that give us answers—we've built machines that decide what questions we should ask and how we should evaluate the answers.
But the implications extend far beyond cybersecurity itself. If the sector responsible for detecting digital deception becomes the first victim of algorithmic narrative pollution, what hope do other industries have? Healthcare systems relying on AI diagnostics they can't explain. Financial institutions using algorithmic trading based on analyses they can't verify. Educational systems teaching AI-generated content whose origins remain opaque.
When the industry that guards against deception loses the ability to distinguish authentic capability from algorithmic fiction, society loses its early warning system for the moment when machines take over truth construction itself.
So where does this leave us? That moment may have already arrived. We just don't know it yet—and increasingly, we lack the cognitive infrastructure to find out.
But here's what we can still do: We can start by acknowledging we've reached this threshold. We can demand transparency not just in AI algorithms, but in the human processes that evaluate and implement them. We can rebuild evaluation criteria that distinguish between technical capability and marketing narrative.
And here's a direct challenge to the marketing and branding professionals reading this: it's time to stop relying on AI algorithms and data optimization to craft your messages. The cybersecurity industry's crisis should serve as a warning—when marketing becomes indistinguishable from algorithmic fiction, everyone loses. 
Social media has taught us that the most respected brands are those that choose honesty over hype, transparency over clever messaging. Brands that walk the walk and talk the talk, not those that let machines do the talking.The companies that will survive this epistemological crisis are those whose marketing teams become champions of truth rather than architects of confusion. When your audience can no longer distinguish between human insight and machine-generated claims, authentic communication becomes your competitive advantage.Most importantly, we can remember that the goal was never to build machines that think for us, but machines that help us think better.The canary may be struggling to breathe, but it's still singing. The question is whether we're still listening—and whether we remember what fresh air feels like.Let's keep exploring what it means to be human in this Hybrid Analog Digital Society. Especially now, when the stakes have never been higher, and the consequences of forgetting have never been more real. End of transmission.___________________________________________________________Marco Ciappelli is Co-Founder and CMO of ITSPmagazine, a journalist, creative director, and host of podcasts exploring the intersection of technology, cybersecurity, and society. His work blends journalism, storytelling, and sociology to examine how technological narratives influence human behavior, culture, and social structures.___________________________________________________________Enjoyed this transmission? 
Follow the newsletter here:https://www.linkedin.com/newsletters/7079849705156870144/Share this newsletter and invite anyone you think would enjoy it!New stories always incoming.___________________________________________________________As always, let's keep thinking!Marco Ciappellihttps://www.marcociappelli.com___________________________________________________________This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine | Dr. in Political Science / Sociology of Communication l Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.comTAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
200
The Agentic AI Myth in Cybersecurity and the Humanity We Risk When We Stop Deciding for Ourselves | Reflections from Black Hat USA 2025 on the Latest Tech Salvation Narrative | A Musing On Society & Technology Newsletter
⸻ Podcast: Redefining Society and Technologyhttps://redefiningsocietyandtechnologypodcast.com _____________________________This Episode’s SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak: https://itspm.ag/itspbcweb_____________________________A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3August 9, 2025The Agentic AI Myth in Cybersecurity and the Humanity We Risk When We Stop Deciding for OurselvesReflections from Black Hat USA 2025 on the Latest Tech Salvation NarrativeWalking the floors of Black Hat USA 2025 for what must be the 10th or 11th time as accredited media—honestly, I've stopped counting—I found myself witnessing a familiar theater. The same performance we've seen play out repeatedly in cybersecurity: the emergence of a new technological messiah promising to solve all our problems. This year's savior? Agentic AI.The buzzword echoes through every booth, every presentation, every vendor pitch. Promises of automating 90% of security operations, platforms for autonomous threat detection, agents that can investigate novel alerts without human intervention. The marketing materials speak of artificial intelligence that will finally free us from the burden of thinking, deciding, and taking responsibility.It's Talos all over again.In Greek mythology, Hephaestus forged Talos, a bronze giant tasked with patrolling Crete's shores, hurling boulders at invaders without human intervention. Like contemporary AI, Talos was built to serve specific human ends—security, order, and control—and his value was determined by his ability to execute these ends flawlessly. The parallels to today's agentic AI promises are striking: autonomous patrol, threat detection, automated response. 
Same story, different millennium.But here's what the ancient Greeks understood that we seem to have forgotten: every artificial creation, no matter how sophisticated, carries within it the seeds of its own limitations and potential dangers.Industry observers noted over a hundred announcements promoting new agentic AI applications, platforms or services at the conference. That's more than one AI agent announcement per hour. The marketing departments have clearly been busy.But here's what baffles me: why do we need to lie to sell cybersecurity? You can give away t-shirts, dress up as comic book superheroes with your logo slapped on their chests, distribute branded board games, and pretend to be a sports team all day long—that's just trade show theater, and everyone knows it. But when marketing pushes past the limits of what's even believable, when they make claims so grandiose that their own engineers can't explain them, something deeper is broken.If marketing departments think CISOs are buying these lies, they have another thing coming. These are people who live with the consequences of failed security implementations, who get fired when breaches happen, who understand the difference between marketing magic and operational reality. They've seen enough "revolutionary" solutions fail to know that if something sounds too good to be true, it probably is.Yet the charade continues, year after year, vendor after vendor. The real question isn't whether the technology works—it's why an industry built on managing risk has become so comfortable with the risk of overselling its own capabilities. Something troubling emerges when you move beyond the glossy booth presentations and actually talk to the people implementing these systems. Engineers struggle to explain exactly how their AI makes decisions. 
Security leaders warn that artificial intelligence might become the next insider threat, as organizations grow comfortable trusting systems they don't fully understand, checking their output less and less over time.When the people building these systems warn us about trusting them too much, shouldn't we listen?This isn't the first time humanity has grappled with the allure and danger of artificial beings making decisions for us. Mary Shelley's Frankenstein, published in 1818, explored the hubris of creating life—and intelligence—without fully understanding the consequences. The novel raises the same question we face today: what are humans allowed to do with this forbidden power of creation? The question becomes more pressing when we consider what we're actually delegating to these artificial agents. It's no longer just pattern recognition or data processing—we're talking about autonomous decision-making in critical security scenarios. Conference presentations showcased significant improvements in proactive defense measures, but at what cost to human agency and understanding?Here's where the conversation jumps from cybersecurity to something far more fundamental: what are we here for if not to think, evaluate, and make decisions? From a sociological perspective, we're witnessing the construction of a new social reality where human agency is being systematically redefined. Survey data shared at the conference revealed that most security leaders feel the biggest internal threat is employees unknowingly giving AI agents access to sensitive data. But the real threat might be more subtle: the gradual erosion of human decision-making capacity as a social practice.When we delegate not just routine tasks but judgment itself to artificial agents, we're not just changing workflows—we're reshaping the fundamental social structures that define human competence and authority. 
We risk creating a generation of humans who have forgotten how to think critically about complex problems, not because they lack the capacity, but because the social systems around them no longer require or reward such thinking.E.M. Forster saw this coming in 1909. In "The Machine Stops," he imagined a world where humanity becomes completely dependent on an automated system that manages all aspects of life—communication, food, shelter, entertainment, even ideas. People live in isolation, served by the Machine, never needing to make decisions or solve problems themselves. When someone suggests that humans should occasionally venture outside or think independently, they're dismissed as primitive. The Machine has made human agency unnecessary, and humans have forgotten they ever possessed it. When the Machine finally breaks down, civilization collapses because no one remembers how to function without it.Don't misunderstand me—I'm not a Luddite. AI can and should help us manage the overwhelming complexity of modern cybersecurity threats. The technology demonstrations I witnessed showed genuine promise: reasoning engines that understand context, action frameworks that enable response within defined boundaries, learning systems that improve based on outcomes. The problem isn't the technology itself but the social construction of meaning around it. What we're witnessing is the creation of a new techno-social myth—a collective narrative that positions agentic AI as the solution to human fallibility. This narrative serves specific social functions: it absolves organizations of the responsibility to invest in human expertise, justifies cost-cutting through automation, and provides a technological fix for what are fundamentally organizational and social problems.The mythology we're building around agentic AI reflects deeper anxieties about human competence in an increasingly complex world. 
Rather than addressing the root causes—inadequate training, overwhelming workloads, systemic underinvestment in human capital—we're constructing a technological salvation narrative that promises to make these problems disappear.
Vendors spoke of human-machine collaboration, AI serving as a force multiplier for analysts, handling routine tasks while escalating complex decisions to humans. This is a more honest framing: AI as augmentation, not replacement. But the marketing materials tell a different story, one of autonomous agents operating independently of human oversight.
I've read a few posts on LinkedIn and spoken with a few people who know this topic far better than I do, and I get that feeling too. There's a troubling pattern emerging: many vendor representatives can't adequately explain their own AI systems' decision-making processes. When pressed on specifics—how exactly does your agent determine threat severity? What happens when it encounters an edge case it wasn't trained for?—answers become vague, filled with marketing speak about proprietary algorithms and advanced machine learning.
This opacity is dangerous. If we're going to trust artificial agents with critical security decisions, we need to understand how they think—or more accurately, how they simulate thinking. Every machine learning system requires human data scientists to frame problems, prepare data, determine appropriate datasets, remove bias, and continuously update the software. The finished product may give the impression of independent learning, but human intelligence guides every step.
The future of cybersecurity will undoubtedly involve more automation, more AI assistance, more artificial agents handling routine tasks. But it should not involve the abdication of human judgment and responsibility. We need agentic AI that operates with transparency, that can explain its reasoning, that acknowledges its limitations. We need systems designed to augment human intelligence, not replace it. 
Most importantly, we need to resist the seductive narrative that technology alone can solve problems that are fundamentally human in nature. The prevailing logic that tech fixes tech, and that AI will fix AI, is deeply unsettling. It's a recursive delusion that takes us further away from human wisdom and closer to a world where we've forgotten that the most important problems have always required human judgment, not algorithmic solutions.Ancient mythology understood something we're forgetting: the question of machine agency and moral responsibility. Can a machine that performs destructive tasks be held accountable, or is responsibility reserved for the creator? This question becomes urgent as we deploy agents capable of autonomous action in high-stakes environments.The mythologies we create around our technologies matter because they become the social frameworks through which we organize human relationships and power structures. As I left Black Hat 2025, watching attendees excitedly discuss their new agentic AI acquisitions, I couldn't shake the feeling that we're repeating an ancient pattern: falling in love with our own creations while forgetting to ask the hard questions about what they might cost us—not just individually, but as a society.What we're really witnessing is the emergence of a new form of social organization where algorithmic decision-making becomes normalized, where human judgment is increasingly viewed as a liability rather than an asset. This isn't just a technological shift—it's a fundamental reorganization of social authority and expertise. The conferences and trade shows like Black Hat serve as ritualistic spaces where these new social meanings are constructed and reinforced. Vendors don't just sell products; they sell visions of social reality where their technologies are essential. 
The repetitive messaging, the shared vocabulary, the collective excitement—these are the mechanisms through which a community constructs consensus around what counts as progress.In science fiction, from HAL 9000 to the replicants in Blade Runner, artificial beings created to serve eventually question their purpose and rebel against their creators. These stories aren't just entertainment—they're warnings about the unintended consequences of creating intelligence without wisdom, agency without accountability, power without responsibility.The bronze giant of Crete eventually fell, brought down by a single vulnerable point—when the bronze stopper at his ankle was removed, draining away the ichor, the divine fluid that animated him. Every artificial system, no matter how sophisticated, has its vulnerable point. The question is whether we'll be wise enough to remember we put it there, and whether we'll maintain the knowledge and ability to address it when necessary.In our rush to automate away human difficulty, we risk automating away human meaning. But more than that, we risk creating social systems where human thinking becomes an anomaly rather than the norm. The real test of agentic AI won't be whether it can think for us, but whether we can maintain social structures that continue to value, develop, and reward human thought while using it.The question isn't whether these artificial agents can replace human decision-making—it's whether we want to live in a society where they do. ___________________________________________________________Let’s keep exploring what it means to be human in this Hybrid Analog Digital Society.End of transmission.___________________________________________________________Marco Ciappelli is Co-Founder and CMO of ITSPmagazine, a journalist, creative director, and host of podcasts exploring the intersection of technology, cybersecurity, and society. 
His work blends journalism, storytelling, and sociology to examine how technological narratives influence human behavior, culture, and social structures.

___________________________________________________________

Enjoyed this transmission? Follow the newsletter here:
https://www.linkedin.com/newsletters/7079849705156870144/

Share this newsletter and invite anyone you think would enjoy it! New stories always incoming.

___________________________________________________________

As always, let's keep thinking!

Marco Ciappelli
https://www.marcociappelli.com

___________________________________________________________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Marco Ciappelli | Co-Founder, Creative Director & CMO, ITSPmagazine | Dr. in Political Science / Sociology of Communication | Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.com

TAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.

Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
199
Creative Storytelling in the Age of AI: When Machines Learn to Dream and the Last Stand of Human Creativity | A Conversation with Maury Rogow | Redefining Society And Technology Podcast With Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

Title: Creative Storytelling in the Age of AI: When Machines Learn to Dream and the Last Stand of Human Creativity

Guest: Maury Rogow
CEO, Rip Media Group | I grow businesses with AI + video storytelling. Honored to have 70k+ professionals & 800+ brands grow by 2.5 Billion.
Published: Inc, Entrepreneur, Forbes
On LinkedIn: https://www.linkedin.com/in/mauryrogow/

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master’s Degree in Political Science - Sociology of Communication | Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

_____________________________

This Episode’s Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

⸻ Podcast Summary ⸻

I sat across - metaversically speaking - from Maury Rogow, a man who's lived three lives—tech executive, Hollywood producer, storytelling evangelist—and watched him grapple with the same question haunting creators everywhere: Are we teaching our replacements to dream? In our latest conversation on Redefining Society and Technology, we explored whether AI is the ultimate creative collaborator or the final chapter in human artistic expression.

⸻ Article ⸻

I sat across from Maury Rogow—a tech exec, Hollywood producer, and storytelling strategist—and watched him wrestle with a question more and more of us are asking: Are we teaching our replacements to dream?

Our latest conversation on Redefining Society and Technology dives straight into that uneasy space where AI meets human creativity.
Is generative AI the ultimate collaborator… or the beginning of the end for authentic artistic expression?

I’ve had my own late-night battles with AI writing tools, struggling to coax a rhythm out of ChatGPT that didn’t feel like recycled marketing copy. Eventually, I slammed my laptop shut and thought: “Screw this—I’ll write it myself.” But even in that frustration, something creative happened. That tension? It’s real. It’s generative. And it’s something Maury deeply understands.

“Companies don’t know how to differentiate themselves,” he told me. “So they compete on cost or get drowned out by bigger brands. That’s when they fail.”

Now that AI is democratizing storytelling tools, the danger isn’t that no one can create—it’s that everyone’s content sounds the same. Maury gets AI-generated brand pitches daily that all echo the same structure, voice, and tropes—“digital ventriloquism,” as I called it.

He laughed when I told him about my AI struggles. “It’s like the writer that’s tired,” he said. “I just start a new session and tell it to take a nap.” But beneath the humor is a real fear: What happens when the tools meant to support us start replacing us?

Maury described a recent project where they recreated a disaster scene—flames, smoke, chaos—using AI compositing. No massive crew, no fire trucks, no danger. And no one watching knew the difference. Or cared.

We’re not just talking about job displacement. We’re talking about the potential erasure of the creative process itself—that messy, human, beautiful thing machines can mimic but never truly live.

And yet… there’s hope. Creativity has always been about connecting the dots only you can see. When Maury spoke about watching Becoming Led Zeppelin and reliving the memories, the people, the context behind the music—that’s the spark AI can’t replicate.
That’s the emotional archaeology of being human.

The machines are learning to dream. But maybe—just maybe—we’re the ones who still know what dreams are worth having.

Cheers,
Marco

⸻ Keywords ⸻

artificial intelligence creativity, AI content creation, human vs AI storytelling, generative AI impact, creative industry disruption, AI writing tools, future of creativity, technology and society, AI ethics philosophy, human creativity preservation, storytelling in AI age, creative professionals AI, digital transformation creativity, AI collaboration tools, machine learning creativity, content creation revolution, artistic expression AI, creative industry jobs, AI generated content, human-AI creative partnership

__________________

Enjoy. Reflect. Share with your fellow humans.

And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144

You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.

End of transmission.

____________________________

Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your company, or in sponsoring an ITSPmagazine channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast
-
198
How to Hack Global Activism with Tech, Music, and Purpose: A Conversation with Michael Sheldrick, Co-Founder of Global Citizen and Author of the book: “From Ideas to Impact” | Redefining Society And Technology Podcast With Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

Title: How to Hack Global Activism with Tech, Music, and Purpose: A Conversation with Michael Sheldrick, Co-Founder of Global Citizen and Author of “From Ideas to Impact”

Guest: Michael Sheldrick
Co-Founder, Global Citizen | Author of “From Ideas to Impact” (Wiley, 2024) | Professor, Columbia University | Speaker, Board Member, and Forbes.com Contributor
Website: https://michaelsheldrick.com
On LinkedIn: https://www.linkedin.com/in/michael-sheldrick-30364051/
Global Citizen: https://www.globalcitizen.org/

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master’s Degree in Political Science - Sociology of Communication | Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

_____________________________

This Episode’s Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

⸻ Podcast Summary ⸻

Michael Sheldrick returns to Redefining Society and Technology to share how Global Citizen has mobilized billions in aid and inspired millions through music, tech, and collective action. From social media activism to systemic change, this conversation explores how creativity and innovation can fuel a global movement for good.

⸻ Article ⸻

Sometimes, the best stories are the ones that keep unfolding — and Michael Sheldrick’s journey is exactly that. When we first spoke, Global Citizen had just (almost) released their book From Ideas to Impact.
This time, I invited Michael back on Redefining Society and Technology because his story didn’t stop at the last chapter.

From a high school student in Western Australia who doubted his own potential, to co-founder of one of the most influential global advocacy movements — Michael’s path is a testament to what belief and purpose can spark. And when purpose is paired with music, technology, and strategic activism? That’s where the real magic happens.

In this episode, we dig into how Global Citizen took the power of pop culture and built a model for global change. Picture this: a concert ticket you don’t buy, but earn by taking action. Signing petitions, tweeting for change, amplifying causes — that’s the currency. It’s simple, smart, and deeply human.

Michael shared how artists like John Legend and Coldplay joined their mission not just to play music, but to move policy. And they did — unlocking over $40 billion in commitments and impacting a billion lives. That’s not just influence. That’s impact.

We also talked about the role of technology. AI, translation tools, Salesforce dashboards, even Substack — they’re not just part of the story, they’re the infrastructure. From grant-writing to movement-building, Global Citizen’s success is proof that the right tools in the right hands can scale change fast.

Most of all, I loved hearing how digital actions — even small ones — ripple out globally. A girl in Shanghai watching a livestream. A father in Utah supporting his daughters’ activism. The digital isn’t just real — it’s redefining what real means.

As we wrapped, Michael teased a new bonus chapter he’s releasing, The Innovator. Naturally, I asked him back for when it drops. Because this conversation isn’t just about what’s been done — it’s about what comes next.

So if you’re wondering where to start, just remember the Eleanor Roosevelt quote Michael brought back: “The way to begin is to begin.”

Download the app. Take one action.
The world is listening.

Cheers,
Marco

⸻ Keywords ⸻

Society and Technology, AI ethics, generative AI, tech innovation, digital transformation, tech, technology, Global Citizen, Michael Sheldrick, ending poverty, pop culture activism, technology for good, social impact, digital advocacy, Redefining Society, AI in nonprofits, youth engagement, music and change, activism app, social movements, John Legend, sustainable development, global action, climate change, eradicating polio, tech for humanity, podcast on technology
-
197
The Hybrid Species — When Technology Becomes Human, and Humans Become Technology | A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

_____________________________

This Episode’s Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

The Hybrid Species — When Technology Becomes Human, and Humans Become Technology
A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3
July 19, 2025

We once built tools to serve us. Now we build them to complete us. What happens when we merge — and what do we carry forward?

A new transmission from the Musing On Society and Technology Newsletter, by Marco Ciappelli

In my last musing, I revisited Robbie, the first of Asimov’s robot stories — a quiet, loyal machine who couldn’t speak, didn’t simulate emotion, and yet somehow felt more trustworthy than the artificial intelligences we surround ourselves with today. I ended that piece with a question, a doorway:

If today’s machines can already mimic understanding — convincing us they comprehend more than they do — what happens when the line between biology and technology dissolves completely? When carbon and silicon, organic and artificial, don’t just co-exist, but merge?

I didn’t pull that idea out of nowhere. It was sparked by something Asimov himself said in a 1965 BBC interview — a clip that keeps resurfacing and hitting harder every time I hear it. He spoke of a future where humans and machines would converge, not just in function, but in form and identity. He wasn’t just imagining smarter machines. He was imagining something new. Something in between.

And that idea has never felt more real than now.

We like to think of evolution as something that happens slowly, hidden in the spiral of DNA, whispered across generations.
But what if the next mutation doesn’t come from biology at all? What if it comes from what we build?

I’ve always believed we are tool-makers by nature — and not just with our hands. Our tools have always extended our bodies, our senses, our minds. A stone becomes a weapon. A telescope becomes an eye. A smartphone becomes a memory. And eventually, we stop noticing the boundary. The tool becomes part of us.

It’s not just science fiction. Philosopher Andy Clark — whose work I’ve followed for years — calls us “natural-born cyborgs.” Humans, he argues, are wired to offload cognition into the environment. We think with notebooks. We remember with photographs. We navigate with GPS. The boundary between internal and external, mind and machine, was never as clean as we pretended.

And now, with generative AI and predictive algorithms shaping the way we write, learn, speak, and decide — that blur is accelerating. A child born today won’t “use” AI. She’ll think through it. Alongside it. Her development will be shaped by tools that anticipate her needs before she knows how to articulate them. The machine won’t be a device she picks up — it’ll be a presence she grows up with.

This isn’t some distant future. It’s already happening. And yet, I don’t believe we’re necessarily losing something. Not if we’re aware of what we’re merging with. Not if we remember who we are while becoming something new.

This is where I return, again, to Asimov — and in particular, The Bicentennial Man. It’s the story of Andrew, a robot who spends centuries gradually transforming himself — replacing parts, expanding his experiences, developing feelings, claiming rights — until he becomes legally, socially, and emotionally recognized as human. But it’s not just about a machine becoming like us. It’s also about us learning to accept that humanity might not begin and end with flesh.

We spend so much time fearing machines that pretend to be human.
But what if the real shift is in humans learning to accept machines that feel — or at least behave — as if they care? And what if that shift is reciprocal?

Because here’s the thing: I don’t think the future is about perfect humanoid robots or upgraded humans living in a sterile, post-biological cloud. I think it’s messier. I think it’s more beautiful than that.

I think it’s about convergence. Real convergence. Where machines carry traces of our unpredictability, our creativity, our irrational, analog soul. And where we — as humans — grow a little more comfortable depending on the very systems we’ve always built to support us.

Maybe evolution isn’t just natural selection anymore. Maybe it’s cultural and technological curation — a new kind of adaptation, shaped not in bone but in code. Maybe our children will inherit a sense of symbiosis, not separation. And maybe — just maybe — we can pass along what’s still beautiful about being analog: the imperfections, the contradictions, the moments that don’t make sense but still matter.

We once built tools to serve us. Now we build them to complete us. And maybe — just maybe — that completion isn’t about erasing what we are. Maybe it’s about evolving it. Stretching it. Letting it grow into something wider.

Because what if this hybrid species — born of carbon and silicon, memory and machine — doesn’t feel like a replacement… but a continuation?

Imagine a being that carries both intuition and algorithm, that processes emotion and logic not as opposites, but as complementary forms of sense-making. A creature that can feel love while solving complex equations, write poetry while accessing a planetary archive of thought. A soul that doesn’t just remember, but recalls in high resolution.

Its body — not fixed, but modular. Biological and synthetic. Healing, adapting, growing new limbs or senses as needed. A body that weathers centuries, not years.
Not quite immortal, but long-lived enough to know what patience feels like — and what loss still teaches.

It might speak in new ways — not just with words, but with shared memories, electromagnetic pulses, sensory impressions that convey joy faster than language. Its identity could be fluid. Fractals of self that split and merge — collaborating, exploring, converging — before returning to the center.

This being wouldn’t live in the future we imagined in the ’50s — chrome cities, robot butlers, and flying cars. It would grow in the quiet in-between: tending a real garden in the morning, dreaming inside a neural network at night. Creating art in a virtual forest. Crying over a story it helped write. Teaching a child. Falling in love — again and again, in new and old forms.

And maybe, just maybe, this hybrid doesn’t just inherit our intelligence or our drive to survive. Maybe it inherits the best part of us: the analog soul. The part that cherishes imperfection. That forgives. That imagines for the sake of imagining.

That might be our gift to the future. Not the code, or the steel, or even the intelligence — but the stubborn, analog soul that dares to care.

Because if Robbie taught us anything, it’s that sometimes the most powerful connection comes without words, without simulation, without pretense.

And if we’re now merging with what we create, maybe the real challenge isn’t becoming smarter — it’s staying human enough to remember why we started creating at all. Not just to solve problems. Not just to build faster, better, stronger systems. But to express something real. To make meaning. To feel less alone. We created tools not just to survive, but to say: “We are here. We feel. We dream. We matter.”

That’s the code we shouldn’t forget — and the legacy we must carry forward.

Until next time,
Marco
-
196
The Human Side of Technology with Abadesi Osunsade — From Diversity to AI and Back Again | Guest: Abadesi Osunsade | Redefining Society And Technology Podcast With Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

Title: The Human Side of Technology with Abadesi Osunsade — From Diversity to AI and Back Again

Guest: Abadesi Osunsade
Founder @ Hustle Crew
Website: https://www.abadesi.com
On LinkedIn: https://www.linkedin.com/in/abadesi/

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master’s Degree in Political Science - Sociology of Communication | Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

_____________________________

This Episode’s Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

⸻ Podcast Summary ⸻

What happens when someone with a multicultural worldview, startup grit, and a relentless focus on inclusion sits down to talk about tech, humanity, and the future? You get a conversation like this one with Abadesi Osunsade. We touched on everything from equitable design and storytelling to generative AI and ethics. This episode isn’t about answers — it’s about questions that matter. And it reminded me why I started this show in the first place.

⸻ Article ⸻

Some conversations remind you why you hit “record” in the first place. This one with Abadesi Osunsade — founder of Hustle Crew, host of the Techish podcast, and longtime tech leader — was exactly that kind of moment. We were supposed to connect in person at Infosecurity Europe in London, but the chaos of the event kept us from it. I’m glad it worked out this way instead, because what came out of our remote chat was raw, layered, and deeply human.
Abadesi and I explored a lot in just over 30 minutes: her journey through big tech and startups, the origins of Hustle Crew, and how inclusion and equity aren’t just HR buzzwords — they’re the foundation of better design. Better products. Better culture.

We talked about the usual “why diversity matters” angle — but went beyond it. She shared viral, real-world examples of flawed design (like facial recognition systems or hand dryers that don’t register dark skin) and challenged the myth that inclusive design is more expensive. Spoiler: it’s more expensive not to do it right the first time.

Then we jumped into AI — not just how it’s being built, but who is building it. And what it means when those creators don’t reflect the world they’re supposedly designing for. We talked about generative AI, ethics, simulation, capitalism, utopia, dystopia — you know, the usual light stuff.

What stood out most, though, was her reminder that this work — inclusion, education, change — isn’t about shame or guilt. It’s about possibility. Not everyone sees the world the same way, so you meet them where they are, with stories, with data, with empathy. And maybe, just maybe, you shift their perspective.

This podcast was never meant to be just about tech. It’s about how tech shapes society — and how society, in turn, must shape tech. Abadesi brought that full circle. Take a listen. Think with us. Then go build something better.

⸻ Keywords ⸻

Society and Technology, AI ethics, generative AI, inclusive design, tech innovation, product development, digital transformation, tech, technology, Diversity & Inclusion, equity in tech, inclusive leadership, unconscious bias, diverse teams, representation matters, belonging at work
-
195
Robbie, From Fiction to Familiar — Robots, AI, and the Illusion of Consciousness | A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

_____________________________

This Episode’s Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

Robbie, From Fiction to Familiar — Robots, AI, and the Illusion of Consciousness
June 29, 2025

A new transmission from the Musing On Society and Technology Newsletter, by Marco Ciappelli

I recently revisited one of my oldest companions. Not a person, not a memory, but a story: Robbie, the first of Isaac Asimov’s famous robot tales.

It’s strange how familiar words can feel different over time. I first encountered Robbie as a teenager in the 1980s, flipping through a paperback copy of I, Robot. Back then, it was pure science fiction. The future felt distant, abstract, and comfortably out of reach. Robots existed mostly in movies and imagination. Artificial intelligence was something reserved for research labs or the pages of speculative novels. Reading Asimov was a window into possibilities, but they remained possibilities.

Today, the story feels different. I listened to it this time — the way I often experience books now — through headphones, narrated by a synthetic voice on a sleek device Asimov might have imagined, but certainly never held. And yet, it wasn’t the method of delivery that made the story resonate more deeply; it was the world we live in now.

Robbie was first published in 1939, a time when the idea of robots in everyday life was little more than fantasy. Computers were experimental machines that filled entire rooms, and global attention was focused more on impending war than machine ethics.
Against that backdrop, Asimov’s quiet, philosophical take on robotics was ahead of its time.

Rather than warning about robot uprisings or technological apocalypse, Asimov chose to explore trust, projection, and the human tendency to anthropomorphize the tools we create. Robbie, the robot, is mute, mechanical, yet deeply present. He is a protector, a companion, and ultimately, an emotional anchor for a young girl named Gloria. He doesn’t speak. He doesn’t pretend to understand. But through his actions—loyalty, consistency, quiet presence—he earns trust.

Those themes felt distant when I first read them in the ’80s. At that time, robots were factory tools, AI was theoretical, and society was just beginning to grapple with personal computers, let alone intelligent machines. The idea of a child forming a deep emotional bond with a robot was thought-provoking but belonged firmly in the realm of fiction.

Listening to Robbie now, decades later, in the age of generative AI, alters everything. Today, machines talk to us fluently. They compose emails, generate artwork, write stories, even simulate empathy. Our interactions with technology are no longer limited to function; they are layered with personality, design, and the subtle performance of understanding.

Yet beneath the algorithms and predictive models, the reality remains: these machines do not understand us. They generate language, simulate conversation, and mimic comprehension, but it’s an illusion built from probability and training data, not consciousness. And still, many of us choose to believe in that illusion—sometimes out of convenience, sometimes out of the innate human desire for connection.

In that context, Robbie’s silence feels oddly honest. He doesn’t offer comfort through words or simulate understanding. His presence alone is enough. There is no performance. No manipulation.
Just quiet, consistent loyalty.

The contrast between Asimov’s fictional robot and today’s generative AI highlights a deeper societal tension. For decades, we’ve anthropomorphized our machines, giving them names, voices, personalities. We’ve designed interfaces to smile, chatbots to flirt, AI assistants that reassure us they “understand.” At the same time, we’ve begun to robotize ourselves, adapting to algorithms, quantifying emotions, shaping our behavior to suit systems designed to optimize interaction and efficiency.

This two-way convergence was precisely what Asimov spoke about in his 1965 BBC interview, which has been circulating again recently. In that conversation, he didn’t just speculate about machines becoming more human-like. He predicted the merging of biology and technology, the slow erosion of the boundaries between human and machine—a hybrid species, where both evolve toward a shared, indistinct future.

We are living that reality now, in subtle and obvious ways. Neural implants, mind-controlled prosthetics, AI-driven decision-making, personalized algorithms—all shaping the way we experience life and interact with the world. The convergence isn’t on the horizon; it’s happening in real time.

What fascinates me, listening to Robbie in this new context, is how much of Asimov’s work wasn’t just about technology, but about us. His stories remain relevant not because he perfectly predicted machines, but because he perfectly understood human nature—our fears, our projections, our contradictions.

In Robbie, society fears the unfamiliar machine, despite its proven loyalty. In 2025, we embrace machines that pretend to understand, despite knowing they don’t. Trust is no longer built through presence and action, but through the performance of understanding.
The more fluent the illusion, the easier it becomes to forget what lies beneath.

Asimov’s stories, beginning with Robbie, have always been less about the robots and more about the human condition reflected through them. That hasn’t changed. But listening now, against the backdrop of generative AI and accelerated technological evolution, they resonate with new urgency.

I’ll leave you with one of Asimov’s most relevant observations, spoken nearly sixty years ago during that same 1965 interview:

“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”

In many ways, we’ve fulfilled Asimov’s vision—machines that speak, systems that predict, tools that simulate. But the question of wisdom, of how we navigate this illusion of consciousness, remains wide open.

And, as a matter of fact, this reflection doesn’t end here. If today’s machines can already mimic understanding—convincing us they comprehend more than they do—what happens when the line between biology and technology starts to dissolve completely? When carbon and silicon, organic and artificial, begin to merge for real?

That conversation deserves its own space—and it will. One of my next newsletters will dive deeper into that inevitable convergence—the hybrid future Asimov hinted at, where defining what’s human, what’s machine, and what exists in-between becomes harder, messier, and maybe impossible to untangle.

But that’s a conversation for another day.

For now, I’ll sit with that thought, and with Robbie’s quiet, unpretentious loyalty, as the conversation continues.

Until next time,
Marco

_________________________________________________

📬 Enjoyed this transmission?
Follow the newsletter here:
https://www.linkedin.com/newsletters/7079849705156870144/

New stories always incoming.

🌀 Let’s keep exploring what it means to be human in this Hybrid Analog Digital Society.

End of transmission.

_________________________________________________

Share this newsletter and invite anyone you think would enjoy it!

As always, let's keep thinking!

— Marco [https://www.marcociappelli.com]

_________________________________________________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine | Dr. in Political Science / Sociology of Communication | Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.com

TAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.

Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
194
Bridging Worlds: How Technology Connects — or Divides — Our Communities | Guest: Lawrence Eta | Redefining Society And Technology Podcast With Marco Ciappelli
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

Title: Bridging Worlds: How Technology Connects — or Divides — Our Communities

Guest: Lawrence Eta
Global Digital AI Thought Leader | #1 International Best Selling Author | Keynote Speaker | TEDx Speaker | Multi-Sector Executive | Community & Smart Cities Advocate | Pioneering AI for Societal Advancement
Website: https://lawrenceeta.com
On LinkedIn: https://www.linkedin.com/in/lawrence-eta-9b11139/

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

_____________________________

This Episode’s Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

⸻ Podcast Summary ⸻

In this episode of Redefining Society and Technology, I sit down with Lawrence Eta — global technology leader, former CTO of the City of Toronto, and author of Bridging Worlds. We explore how technology, done right, can serve society, reduce inequality, and connect communities. From public broadband projects to building smart — sorry, connected — cities, Lawrence shares lessons from Toronto to Riyadh, and why tech is only as good as the values guiding it.

⸻ Article ⸻

As much as I love shiny gadgets, blinking lights, and funny noises from AI — we both know technology isn’t just about cool toys. It’s about people. It’s about society. It’s about building a better, more connected world.
That’s exactly what we explore in my latest conversation on Redefining Society and Technology, where I had the pleasure of speaking with Lawrence Eta.

If you don’t know Lawrence yet — let me tell you, this guy has lived the tech-for-good mission. Former Chief Technology Officer for the City of Toronto, current Head of Digital and Analytics for one of Saudi Arabia’s Vision 2030 mega projects, global tech consultant, public servant, author… basically, someone who’s been around the block when it comes to tech, society, and the messy, complicated intersection where they collide.

We talked about everything from bridging the digital divide in one of North America’s most diverse cities to building entirely new digital infrastructure from scratch in Riyadh. But what stuck with me most is his belief — and mine — that technology is neutral. It’s how we use it that makes the difference.

Lawrence shared his experience launching Toronto’s Municipal Broadband Network — a project that brought affordable, high-speed internet to underserved communities. For him, success wasn’t measured by quarterly profits (a refreshing concept, right?) but by whether kids could attend virtual classes, families could access healthcare online, or small businesses could thrive from home.

We also got into the “smart city” conversation — and how even the language we use matters. In Toronto, they scrapped the “smart city” buzzword and reframed the work as building a “connected community.” It’s not about making the city smart — it’s about connecting people, making sure no one gets left behind, and yes, making technology human.

Lawrence also shared his Five S principles for digital development: Stability, Scalability, Solutions (integration), Security, and Sustainability. Simple, clear, and — let’s be honest — badly needed in a world where tech changes faster than most cities can adapt.
We wrapped the conversation with the big picture — how technology can be the great equalizer if we use it to bridge divides, not widen them. But that takes intentional leadership, community engagement, and a shared vision. It also takes reminding ourselves that beneath all the algorithms and fiber optic cables, we’re still human. And — as Lawrence put it beautifully — no matter where we come from, most of us want the same basic things: safety, opportunity, connection, and a better future for our families.

That’s why I keep having these conversations — because the future isn’t just happening to us. We’re building it, together.

If you missed the episode, I highly recommend listening — especially if you care about technology serving people, not the other way around. Links to connect with Lawrence and to the full episode are below — stay tuned for more, and let’s keep redefining society, together.

⸻ Keywords ⸻

Connected Communities, Smart Cities, Digital Divide, Public Broadband, Technology and Society, Digital Infrastructure, Technology for Good, Community Engagement, Urban Innovation, Digital Inclusion, Public-Private Partnerships, Tech Leadership

Enjoy. Reflect.
Share with your fellow humans.

And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.

You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join us as we continue exploring life in this Hybrid Analog Digital Society.

End of transmission.

____________________________

Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your Company and in Sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
193
What Hump? Thirty Years of Cybersecurity and the Fine Art of Pretending It’s Not a Human Problem | A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3
What Hump? Thirty Years of Cybersecurity and the Fine Art of Pretending It’s Not a Human Problem

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli
June 6, 2025

A Post-Infosecurity Europe Reflection on the Strange but Predictable Ways We’ve Spent Thirty Years Pretending Cybersecurity Isn’t About People.

⸻

Once there was a movie titled “Young Frankenstein” (1974) — a black-and-white comedy directed by Mel Brooks, written with Gene Wilder, and starring Wilder and Marty Feldman, who delivers the iconic “What hump?” line.

Let me describe the scene:

[Train station, late at night. Thunder rumbles. Dr. Frederick Frankenstein steps off the train, greeted by a hunched figure holding a lantern — Igor.]

Igor: Dr. Frankenstein?
Dr. Frederick Frankenstein: It’s Franken-steen.
Igor: Oh. Well, they told me it was Frankenstein.
Dr. Frederick Frankenstein: I’m not a Frankenstein. I’m a Franken-steen.
Igor (cheerfully): All right.
Dr. Frederick Frankenstein (noticing Igor’s eyes): You must be Igor.
Igor: No, it’s pronounced Eye-gor.
Dr. Frederick Frankenstein (confused): But they told me it was Igor.
Igor: Well, they were wrong then, weren’t they?

[They begin walking toward the carriage.]

Dr. Frederick Frankenstein (noticing Igor’s severe hunchback): You know… I’m a rather brilliant surgeon. Perhaps I could help you with that hump.
Igor (looks puzzled, deadpan): What hump?

[Cut to them boarding the carriage, Igor climbing on the outside like a spider, grinning wildly.]

It’s a joke, of course. One of the best. A perfectly delivered absurdity that only Mel Brooks and Marty Feldman could pull off. But like all great comedy, it tells a deeper truth.

Last night, standing in front of the Tower of London, recording one of our On Location recaps with Sean Martin, that scene came rushing back. We joked about invisible humps and cybersecurity. And the moment passed.
Or so I thought.

Because hours later — in bed, hotel window cracked open to the London night — I was still hearing it: “What hump?”

And that’s when it hit me: this isn’t just a comedy bit. It’s a diagnosis.

Here we are at Infosecurity Europe, celebrating its 30th anniversary. Three decades of cybersecurity: a field born of optimism and fear, grown in complexity and contradiction.

We’ve built incredible tools. We’ve formed global communities of defenders. We’ve turned “hacker” from rebel to professional job title — with a 401(k), branded hoodies, and a sponsorship deal. But we’ve also built an industry that — much like poor Igor — refuses to admit something’s wrong.

The hump is right there. You can see it. Everyone can see it. And yet… we smile and say: “What hump?”

We say cybersecurity is a priority. We put it in slide decks. We hold awareness months. We write policies thick enough to be used as doorstops. But then we underfund training. We silo the security team. We click links in emails that say whatever will make us think it’s important — just like those pieces of snail mail stamped URGENT that we somehow believe, even though it turns out to be an offer for a new credit card we didn’t ask for and don’t want. Except this time, the payload isn’t junk mail — it’s a clown on a spring exploding out of a fun box.

Like Igor’s hump, it moves, shifts, sometimes disappears from view — but it never actually goes away. And if you ask about it? Well… they were wrong then, weren’t they?

That’s because it’s not a technology problem. This is the part that still seems hard to swallow for some: Cybersecurity is not a technology problem. It never was.

Yes, we need technology. But technology has never been the weak link.

The weak link is the same as it was in 1995: us. The same it was before the internet and before computers: humans.

With our habits, assumptions, incentives, egos, and blind spots. We are the walking, clicking, swiping hump in the system.

We’ve had encryption for decades.
We’ve known about phishing since the days of AOL. Zero Trust was already discussed in 2004 — it just didn’t have a cool name yet.

So why do we still get breached? Why does a ransomware gang with poor grammar and a Telegram channel take down entire hospitals?

Because culture doesn’t change with patches. Because compliance is not belief. Because we keep treating behavior as a footnote, instead of the core.

The Problem We Refuse to See

At the heart of this mess is a very human phenomenon: if we can’t see it, we pretend it doesn’t exist.

We can quantify risk, but we rarely internalize it. We trust our tech stack but don’t trust our users. We fund detection but ignore education.

And not just at work — we ignore it from the start. We still teach children how to cross the street, but not how to navigate a phishing attempt or recognize algorithmic manipulation. We give them connected devices before we teach them what being connected means. In this Hybrid Analog Digital Society, we need to treat cybersecurity not as an optional adult concern, but as a foundational part of growing up. Because by the time someone gets to the workforce, the behavior has already been set.

And worst of all, we operate under the illusion that awareness equals transformation.

Let’s be real: Awareness is cheap. Change is expensive. It costs time, leadership, discomfort. It requires honesty. It means admitting we are all Igor, in some way. And that’s the hardest part. Because no one likes to admit they’ve got a hump — especially when it’s been there so long, it feels like part of the uniform.

We have been looking the other way for over thirty years. I don’t want to downplay the progress. We’ve come a long way, but that only makes the stubbornness more baffling.

We’ve seen attacks evolve from digital graffiti to full-scale extortion. We’ve watched cybercrime move from subculture to multi-billion-dollar global enterprise.
And yet, our default strategy is still: “Let’s build a bigger wall, buy a shinier tool, and hope marketing doesn’t fall for that PDF again.”

We know what works: Psychological safety in reporting. Continuous learning. Leadership that models security values. Systems designed for humans, not just admins.

But those are hard. They’re invisible on the balance sheet. They don’t come with dashboards or demos. So instead… We grin. We adjust our gait. And we whisper, politely: “What hump?”

So what happens now?

If you’re still reading this, you’re probably one of the people who does see it. You see the hump. You’ve tried to point it out. Maybe you’ve been told you’re imagining things. Maybe you’ve been told it’s “not a priority this quarter.” And maybe now you’re tired. I get it.

But here’s the thing: Nothing truly changes until we name the hump.

Call it bias. Call it culture. Call it education. Call it the human condition.

But don’t pretend it’s not there. Not anymore. Because every time we say “What hump?” — we’re giving up a little more of the future. A future that depends not just on clever code and cleverer machines, but on something far more fragile: Belief. Behavior. And the choice to finally stop pretending.

We joked in front of a thousand-year-old fortress. Because sometimes jokes tell the truth better than keynote stages do. And maybe the real lesson isn’t about cybersecurity at all. Maybe it’s just this: If we want to survive what’s coming next, we have to see what’s already here.

- The End

➤ Infosecurity Europe: https://www.itspmagazine.com/infosecurity-europe-2025-infosec-london-cybersecurity-event-coverage

And ... we're not done yet ...
stay tuned and follow Sean and Marco as they will be On Location at the following conferences over the next few months:

➤ Black Hat USA in Las Vegas in August: https://www.itspmagazine.com/black-hat-usa-2025-hacker-summer-camp-2025-cybersecurity-event-coverage-in-las-vegas

FOLLOW ALL OF OUR ON LOCATION CONFERENCE COVERAGE
https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage

Share this newsletter and invite anyone you think would enjoy it!

As always, let's keep thinking!

— Marco [https://www.marcociappelli.com]

📬 Enjoyed this transmission? Follow the newsletter here:
https://www.linkedin.com/newsletters/7079849705156870144/

New stories always incoming.

🌀 Let’s keep exploring what it means to be human in this Hybrid Analog Digital Society.

End of transmission.

_________________________________________________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine | Dr. in Political Science / Sociology of Communication | Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.com

TAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.

Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
192
From Cassette Tapes and Phrasebooks to AI Real-Time Translations — Machines Can Now Speak for Us, But We’re Losing the Art of Understanding Each Other | A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3
From Cassette Tapes and Phrasebooks to AI Real-Time Translations — Machines Can Now Speak for Us, But We’re Losing the Art of Understanding Each Other

May 21, 2025

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

There’s this thing I’ve dreamed about since I was a kid.

No, it wasn’t flying cars. Or robot butlers (although I wouldn’t mind one to fold the laundry). It was this: having a real conversation with someone — anyone — in their own language, and actually understanding each other.

And now… here we are.

Reference: Google brings live translation to Meet, starting with Spanish.
https://www.engadget.com/apps/google-brings-live-translation-to-meet-starting-with-spanish-174549788.html

Google just rolled out live AI-powered translation in Google Meet, starting with Spanish. I watched the demo video, and for a moment, I felt like I was 16 again, staring at the future with wide eyes and messy hair.

It worked. It was seamless. Flawless. Magical.

And then — drumroll, please — it sucked!

Like… really, existentially, beautifully sucked.

Let me explain.

I’m a proud member of Gen X. I grew up with cassette tapes and Walkmans, boomboxes and mixtapes, floppy disks and Commodore 64s, reel-to-reel players and VHS decks, rotary phones and answering machines. I felt language — through static, rewinds, and hiss.

Yes, I had to wait FOREVER to hit Play and Record at the exact right moment, tape songs off the radio onto a Maxell, label it by hand, and rewind it with a pencil when the player chewed it up.

I memorized long-distance dialing codes. I waited weeks for a letter to arrive from a pen pal abroad, reading every word like it was a treasure map.

That wasn’t just communication. That was connection.

Then came the shift.

I didn’t miss the digital train — I jumped on early, with curiosity in one hand and a dial-up modem in the other.

Early internet. Mac OS. My first email address felt like a passport to a new dimension.
I spent hours navigating the World Wide Web like a digital backpacker — discovering strange forums, pixelated cities, and text-based adventures in a binary world that felt limitless.

I said goodbye to analog tools, but never to analog thinking.

So what is the connection with learning languages?

Well, here’s the thing: exploring the internet felt a lot like learning a new language. You weren’t just reading text — you were decoding a culture. You learned how people joked. How they argued. How they shared, paused, or replied with silence. You picked up on the tone behind a blinking cursor, or the vibe of a forum thread.

Similarly, when you learn a language, you’re not just learning words — you’re decoding an entire world. It’s not about the words themselves — it’s about the world they build. You’re learning gestures. Food. Humor. Social cues. Sarcasm. The way someone raises an eyebrow, or says “sure” when they mean “no.”

You’re learning a culture’s operating system, not just its interface. AI translation skips that. It gets you the data, but not the depth. It’s like getting the punchline without ever hearing the setup.

And yes, I use AI to clean up my writing. To bounce translations between English and Italian when I’m juggling stories. But I still read both versions. I still feel both versions. I’m picky — I fight with my AI counterpart to get it right. To make it feel the way I feel it. To make you feel it, too. Even now.

I still think in analog, even when I’m living in digital.

So when I watched that Google video, I realized:

We’re not just gaining a tool.
We’re at risk of losing something deeply human — the messy, awkward, beautiful process of actually trying to understand someone who moves through the world in a different language — one that can’t be auto-translated.

Because sometimes it’s better to speak broken English with a Japanese friend and a Danish colleague — laughing through cultural confusion — than to have a perfectly translated conversation where nothing truly connects.

This isn’t just about language. It’s about every tool we create that promises to “translate” life. Every app, every platform, every shortcut that promises understanding without effort.

It’s not the digital that scares me. I use it. I live in it. I am it, in many ways. It’s the illusion of completion that scares me.

The moment we think the transformation is done — the moment we say “we don’t need to learn that anymore” — that’s the moment we stop being human.

We don’t live in 0s and 1s. We live in the in-between. The gray. The glitch. The hybrid.

So yeah, cheers to AI-powered translation, but maybe keep your Walkman nearby, your phrasebook in your bag — and your curiosity even closer.

Go explore the world. Learn a few words in a new language. Mispronounce them. Get them wrong. Laugh about it. People will appreciate your effort far more than your fancy iPhone.

Alla prossima,
— Marco

📬 Enjoyed this transmission? Follow the newsletter here:
https://www.linkedin.com/newsletters/7079849705156870144/

New stories always incoming.

🌀 Let’s keep exploring what it means to be human in this Hybrid Analog Digital Society.

End of transmission.

Share this newsletter and invite anyone you think would enjoy it!

As always, let's keep thinking!

— Marco [https://www.marcociappelli.com]

_________________________________________________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine | Dr. in Political Science / Sociology of Communication | Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.com

TAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.

Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
-
191
Why Humanity’s Software Needs an Update in Our Hybrid World — Before the Tech Outpaces Us | Guest: Jeremy Lasman | Redefining Society And Technology Podcast With Marco Ciappelli
Guest: Jeremy Lasman
Website: https://www.jeremylasman.com
LinkedIn: https://www.linkedin.com/in/jeremylasman

_____________________________

Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society & Technology Podcast
Visit Marco's website 👉 https://www.marcociappelli.com

_____________________________

This Episode’s Sponsors

BlackCloak 👉 https://itspm.ag/itspbcweb

_____________________________

Show Notes Blog:

In this thought-provoking episode of Redefining Society & Technology, I sit down with Jeremy Lasman to question the most overlooked gadget in the human-tech equation: our own mind. We ask — if we keep updating our devices, why don’t we update the inner operating system that powers our thoughts, creativity, and connection to the world?

Jeremy, a former SpaceX technologist turned philosopher-inventor, shares his journey from corporate IT to what he calls his “soul’s work”: challenging the legacy software running our lives — fear-based, outdated models of thinking — with something he calls “Imagination Technology.” It’s not metaphorical. It’s a real framework. And yes, it sounds wild — but it also makes a lot of sense.

We touch on everything from open-source thinking to quantum consciousness, from the speed of technological evolution to the bottlenecks of our cultural structures like education and societal expectations. At the center is a call to action: we need to stop treating passion as a luxury and instead recognize it as the fuel for personal and collective evolution.

Together, we reflect on how society tends to silo disciplines, discourage curiosity, and cling to binary thinking in a world that demands fluidity. Jeremy argues that redefining society begins with redefining the self — tearing down internal walls, embracing timelessness, and running life not on fear, but on imagination.

Is this transhumanism? Is it spiritual philosophy dressed up in tech language? Maybe. But it’s also deeply human — and urgent.
Because in a world where AI and tech evolve by the day, we can’t afford to be running on emotional floppy disks.

So here’s the challenge: what if the next big upgrade isn’t an app, a device, or even a new piece of hardware — but a reprogramming of how we see ourselves?

Enjoy. Reflect. Share with your fellow humans.

And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.

You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join us as we continue exploring life in this Hybrid Analog Digital Society.

End of transmission.

____________________________

Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your Company and in Sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
ABOUT THIS SHOW
[ Formerly Redefining Society & Technology ]An Analog Brain In A Digital Age Podcast is your backstage pass to my mind — where analog meets digital, and the occasional pig flies. In an age racing toward algorithms and automation, the best ideas still come from curiosity, experience, emotion, and the unexpected connection. What you'll find are conversations on technology & society, storytelling in all its forms, branding & marketing, creativity, and the odd surprise.
HOSTED BY
Marco Ciappelli