PodParley

PODCAST · news

Futuristic

Each episode we look at the emerging technologies that are going to change our lives (ChatGPT, Claude, Tesla, other AI tools, robotics, nanotech) and try to work out the social, business and political consequences and opportunities.


    Futuristic #46 – 2022 vs 2025 vs 2028

In this episode of Futuristic, Cameron and Steve reunite after a three-month break to reflect on how far artificial intelligence and robotics have come since the launch of ChatGPT in November 2022. They chart the wild rise from OpenAI's first conversational model to today's trillion-dollar valuations, integrated browsers, agentic coding tools, and the dawn of humanoid robots. Along the way they weigh the "bubble" narrative, the myth of a job apocalypse, and the cultural impact of AI on everything from creativity to capitalism. The conversation pivots from nostalgia for the early internet to speculation about the next three years, when everyone, they predict, will be working with personal AI agents and, perhaps, living alongside household robots. The banter swings between philosophy, tech history, humour, and a few pulled hamstrings.

FULL TRANSCRIPT

Futuristic recording – Oct 30, 2025

Cameron: [00:00:00] Welcome back to the Futuristic Podcast, episode 46. It is the 31st of October, 2025. The first time I've seen your face on a screen except on TikTok, Steve Sammartino. August 4th was the last time you and I did an episode. I did one with my old friend Nick Johnson, principal of Toowoomba Anglican School, on August 23. But we have not done a podcast for nearly three months, Steve. Not because there's been nothing happening, just because there's too much happening, and you've been too busy.

Steve: Sounds so needy. I apologize wholeheartedly, Cameron. And, uh, the fact that you didn't delete my phone from your contacts...

Cameron: No.

Steve: ...is revelatory, and I like it. It's good. You're a good man.

Cameron: Revelatory.

Steve: And I missed it, our chats, because [00:01:00] every time I chat to you I get a little bit smarter.

Cameron: Same, same.

Steve: And I'm glad we are going to, uh, unpack.

Cameron: I've got less hair than the last time we talked.
Not because it's falling out, but just because it's hard to tell, there's glare coming through my window. But, um, I am, uh, shining light so bright.

Steve: And I just asked, when did you go gray? 'Cause you got a really good gray mop there. When did it... were you one of...

Cameron: Well, yeah, I started going gray at 23. I'm white. I've been white for a lot longer than I've been gray, but yeah.

Steve: Blonde... are you part of the Aryan Nation?

Cameron: Or as the Mormons like to say, white and delightsome. If you wanna get into heaven, you need to be white and delightsome.

Steve: Do you?

Cameron: Hmm.

Steve: All right.

Cameron: Steve? Um, in terms of what to talk about today. There's been a lot of news in the last [00:02:00] three months, too much to catch up on, obviously, and I thought, since we're coming up to the third anniversary of ChatGPT, which hit the world in November of 2022, it might be a good time to stop and reflect: where have we come from, where are we today, and where do you think we might be three years from now? What do you think about that as a model, Steve?

Steve: Perfect. It is the perfect time for a review. Three years. Things work in threes. It is the perfect time to review it because it's been a pretty radical three years, and maybe in some ways we're in the [00:03:00] trough of disillusionment now. A lot of people are starting to ask questions. The bubble word comes up, which comes up with every technology revolution. And bubbles aren't bad anyway. If a bubble bursts, the beneficiaries are usually the people, because there's been an excessive investment in capital, and that can benefit all of us because the infrastructure gets built out. They overinvested, and we need that. It happened with broadband cables and the early internet, and now it's happening again, potentially. And that's good, 'cause you want overinvestment, because the beneficiaries are usually the wider populace when it comes to technology revolutions.
Cameron: Yes. Look, there's obviously a ton of investment,

Steve: Mm.

Cameron: a ton of hype going into all things AI and robotics. [00:04:00] NVIDIA just became a $5 trillion company. OpenAI is suggesting they might IPO at a trillion-dollar valuation. I mean, it is, uh, like bonkers stuff, absolutely bonkers. But I thought I would start this by pulling up the blog post that OpenAI put out in November 2022. It was November 30th, so we're about a month away from the three-year anniversary. It was pretty simple. It said: we've trained a model called ChatGPT, which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response. We are excited [00:05:00] to introduce ChatGPT to get users' feedback and learn about its strengths and weaknesses. During the research preview, usage of ChatGPT is free. Try it now at chat.openai.com. Then there were some samples of ways to use it. Limitations: ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Well, I'm glad they fixed that. Uh, but these were simpler times in November 2022. Do you remember when you first heard about it and when you first used it?

Steve: I was actually using GPT, uh, the model before, it might've been a 2.5 or a 3, uh, through one of the APIs, 'cause the APIs were released earlier. It wasn't Grammarly. I'm trying to remember the name of it now. And it blew my mind the first time I saw it. Uh, when I went on to ChatGPT, I do remember it [00:06:00] being a super revolution, and we can remind listeners that they had a hundred million users in the first month, fastest adopted consumer product in history. Also, they've got 983 million monthly users now, which is pretty radical.
You know, maybe one quarter of the internet. And a...

Cameron: Wow.

Steve: ...lot of people don't have, call it, unlimited data like we have in western markets. So you'd pretty much say if you're in a Western market, you're on it. Uh, it's decimated search. I did see a statistic which blew my mind on search. Now, if you're a first-page search item on Google, traffic is down, reportedly, this just came out, uh, yesterday, 79%. And that's for two reasons. The first one is the AI drop-down summaries that happen in Google, which, to their credit, they've adopted the reality of where they are. Unlike Kodak. They said, look, people are going this way, we just have to adopt it and work out the business model later. [00:07:00] But also, I know that I use ChatGPT 80% of the time when I once would use search. And I simply ask it for the live feeds of, tell me where you got it from, what happened today. You know, those small prompts to make sure the data you're getting isn't just a historical construct. And it really has changed the way we use the internet fundamentally. And even though in the last, I think, maybe three or six months a lot's been happening, nothing fundamental. I would say it's easy to forget how far we've come in those three years, because the first version of ChatGPT, when it was launched in November of 2022, remember, that was an 18-month-old database, and then it was six months old, and now it's live. It didn't have code bases, didn't have DALL·E, didn't have image recognition, didn't have video. It's easy to forget that [00:08:00] it isn't just ChatGPT. It's actually been really dramatic, and we've almost been spoiled by the fact that every iteration, most of them, for 80% of that time in the past three years, has been incredible and wowing. And maybe in the last few months, because we haven't been wowed with the most recent iteration of ChatGPT, then all of a sudden, oh, bubble, bubble came out.
Cameron: I've got an article here from the 5th of December, 2022, New York Times, by Kevin Roose. He says: like most nerds who read science fiction, I've spent a lot of time wondering how society will greet true artificial intelligence, if and when it arrives. Will we panic? Start sucking up to our new robot overlords? Ignore it and go about our daily lives? So it's been fascinating to watch the Twittersphere try to make sense of ChatGPT, a new cutting-edge AI chatbot that was opened for testing this week. ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the [00:09:00] general public. It was built by OpenAI, the San Francisco AI company that is also responsible for tools like GPT-3 and DALL·E 2, the breakthrough image generator that came out this year. Um, he goes on to say that, uh, AI chatbots are usually terrible, but ChatGPT feels different: smarter, weirder, more flexible. It can write jokes, some of which are actually funny, working computer code, and college-level essays. It can also guess at medical diagnoses, create text-based Harry Potter games, and explain scientific concepts at multiple levels of difficulty. And, uh, you know, I always like to go back and read news articles from the early days of the internet, 93, 94, 95, and see how people were talking about it. Um, he finishes this article, though, by saying: personally, I'm still trying to wrap my head around the fact that ChatGPT, a chatbot that some people think could make Google obsolete, and that has already been compared to the iPhone in [00:10:00] terms of its potential impact on society, isn't even OpenAI's best AI model. That would be GPT-4, the next incarnation of the company's large language model, which is rumored to be coming out sometime next year. We are not ready. Now, what they did come out with a week or so ago was ChatGPT Atlas, their Chromium web browser. Are you using Atlas?

Steve: No, I haven't. I did have a look at it, but I haven't gone to using it.
Are you?

Cameron: Yes. I've been using it exclusively as my browser since it came out. And basically, you know, you open it up and you type something in the URL field, which would normally pull up a Google search, and it pulls up a ChatGPT answer. So ChatGPT is built in. It's the default answering model. You can also, you know, use it in a [00:11:00] sidebar and access all of your chats with ChatGPT. Uh, anything you plug into the browser also turns up in your ChatGPT history if you interface via the app or your phone or whatever. So it's totally integrated into the backend.

Steve: Because that's the last thing I want, Cameron. I don't even want to talk about it.

Cameron: Uh, but yeah, look, I've seen a lot of negative feedback about it, uh, in the forums, but, um, I think it's great. Like, it is the default thing I go to now. If I just type in a general search, it'll give me Wikipedia links, news source links, uh, whatever I want. Um, it's a search engine that understands me. Quite often, if I ask a general question, it'll give me an answer, then it'll say, if you're thinking about this in terms of doing a podcast episode for one of your history podcasts or your investing podcast, here are some of the things you might wanna think about. So it's search answers that are [00:12:00] already built, personalized, built around what it knows about me, what I'm trying to do. Um, yeah, so that's a big thing. I literally do not go to Google anymore, 'cause my browser is now just ChatGPT, uh, you know, native integration into ChatGPT. Anyway, I'm getting off, uh, the track. So, November 22 it came out, and my recollection is that I was shocked and impressed that I could have a conversation with a chatbot that could converse in English. It could understand what I was talking about.
It could understand a question, it could answer my [00:13:00] questions, it could write coherent English responses, uh, about anything. They weren't always correct. They still aren't always correct, but it was just a massive shift in language, in user interfacing with a computer. Uh, blew my mind at the time. Yeah.

Steve: That's the key element, Cameron. The big shift when ChatGPT came out was it was the first successful consumer-oriented natural language processing computation device, where you could talk to it and get a result that was coherent, and not only coherent, a PhD in every single subject. And just like normal human PhDs, they say stupid things too occasionally. So I feel like it really was when the abstraction of computation got removed and we could [00:14:00] interface in a human way with computational devices. And that was the revolution there. Even the word chat, because chatbots had been around, felt like a bit of a misnomer. It almost was named poorly and didn't really give the gravitas to what it was. Because my opinion, and I know yours because we've discussed it over the time, is we already have artificial general intelligence. To me, that was the launch of AGI. It is general, it is better than most people at most things, the Cameron Reilly definition of artificial general intelligence. And we've had it for three years. And that's quite telling too, because we've had artificial general intelligence for three years, and where exactly is this job apocalypse that everyone keeps talking about? It ain't here. And by the way, it ain't coming anytime soon, or ever. And I know that, and I'm even more steadfast on that three years down the track: it's not about to replace everyone. Certainly people with [00:15:00] jobs that have variety in them.

Cameron: Well, let's talk then about where we are, late 2025.
Um, you say there's no job apocalypse, but big tech companies... Amazon just announced they're laying off, I think 40,000, was it 30,000 or 40,000 people? And it's not because they're replacing them all with an AI. It is, as I understand it, because they need to cut costs so they can sink more money into building data centers to run AI, and they have to find the money from somewhere. And the first place to go is these 40,000 employees. So it's a side effect of AI, not directly being replaced by AI. But Microsoft's been letting thousands of people go. I think some of them are being replaced by AI. [00:16:00] Um, but, you know, we're a long way from seeing the apocalypse, and I think the reason is the AI is still not good enough yet. I had a friend come over for a coffee yesterday, and, uh, last time he was over, a week or two ago... he's a web designer slash user interface designer guy, bit of a coder. Um, I said, are you using Cursor? And he said, no, uh, I'm not. His company works in the energy sector, building solutions for the energy sector. I said, how much AI are people using, coding in the background? He goes, uh, not much. I said, check out Cursor. Now, I dunno if you and I... we probably haven't talked about this 'cause it's been months, but for people who don't use Cursor, aren't coders: Cursor is a developer tool, an IDE, as we call it, an integrated development environment. It's basically where [00:17:00] you write code, and it's had AI integration now for a year, 18 months. But one of the things that they implemented a couple of months ago is basically agentic development processes. So now I will, and I use it all day, every day, and it's not full time all day, every day, but it's working on projects behind the scenes. So I will be trying to write a piece of code for running my business, right? I'll say, here's what I wanna do. Um, help me build this.
And it'll start writing code. And then it will test the code using a linter, it's like a testing environment, and it has full access to my hard drive, my project folders. And it will write the code, build the code, test the code, the code will fail. It'll go, oh, that didn't work, lemme try again. That didn't work, lemme try again. Oh, I see what I did there. And it'll [00:18:00] run in the background for 10, 15, 20 minutes trying to get a working version of it. It'll finally say, yeah, I think this works. I'll test it, it won't work. I'll say, have a look at the console. And then it'll read the console. Oh, okay, I see what the problem is. And it goes through another cycle of debug, test, debug, test, debug, test, until it thinks it's got a working model. And I'll rinse and repeat, and this will sometimes go on for hours or days. And at the end of every day, I will export my conversation with it to my project folder. And then the next day, if I start a new session, I'll say, we've been trying to solve this problem, go and read the conversation that we had yesterday, it's in my folder, it's a markdown file, and let's pick up from where we left off yesterday. The other interesting thing that it does is, uh, when the context window... if you've been having a chat with it for three or four hours and you're [00:19:00] still going, the context window, you know, gets so large that it can't remember what you talked about hours ago, and then it starts to repeat itself.

Steve: It does. Yeah.

Cameron: What Cursor automatically does is, when the context window is filling up, it stops, summarizes the conversation, creates a new chat session, uses that summary as the prompt for the new chat session, and kicks it off again. So it just automatically, uh, cleans out its context window and starts again. So it's amazing. But, so this friend came yesterday.
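The two behaviours Cameron describes, the write-test-debug cycle and the automatic context compaction, can be sketched as a simple control loop. This is a hypothetical illustration, not Cursor's actual internals: `ask_model` stands in for a real LLM call (here a stub so the flow is runnable), and `run_tests`, `summarize`, and the thresholds are all invented names for the sake of the sketch.

```python
# Sketch of an agentic edit-test loop with context compaction.
# All function names and thresholds are illustrative, not Cursor's real API.

def ask_model(prompt: str, history: list[str]) -> str:
    """Stub LLM: produces 'working code' only after seeing two failures."""
    failures = sum("FAIL" in h for h in history)
    return "working code" if failures >= 2 else "buggy code"

def run_tests(code: str) -> tuple[bool, str]:
    """Stand-in for the linter/test run; returns (passed, console output)."""
    passed = code == "working code"
    return passed, "PASS" if passed else "FAIL: console error"

def summarize(history: list[str]) -> str:
    """Crude compaction: collapse the session into a short summary prompt."""
    return "Summary of prior session: " + " | ".join(history[-3:])

def agent_loop(task: str, max_turns: int = 10, context_limit: int = 4) -> str:
    history = [task]
    for _ in range(max_turns):
        # Compaction: when the "context window" fills, restart the session
        # from a summary, exactly the rollover behaviour described above.
        if len(history) > context_limit:
            history = [summarize(history)]
        code = ask_model(history[-1], history)
        ok, console = run_tests(code)
        history.append(console)  # feed the console output back to the model
        if ok:
            return code  # the debug-test cycle converged
    return "gave up"
```

With a real model behind `ask_model`, the same loop structure applies: generate, run, feed the console output back, and roll the history over into a summary whenever it grows past the window.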
He said, I tried using Cursor and it got like 90% of the way to building a solution for me, and then kept failing on the last 10%. And I just got in this loop where I couldn't get it to fix the last 10%. And I was like, yeah, yeah, I know that feeling. And I think that is the problem: it's still [00:20:00] almost useful in that sort of closed-loop way, but it's still not there. There are still hallucinations. It still makes mistakes. It still gets stuck in these loops, so we're not there yet. It can't replace a human. It can replace maybe some low-level developers, but it can't replace a good developer yet.

Steve: So the Amazon thing, going way back to the jobs and the replacement. I mean, that's the interesting part: yeah, there's a huge investment and you need to offset the cost of that investment by removing employees. And it's not overtly dissimilar to getting robots into a car factory, funding that investment and having some short-term pain where you get people within the administrative or marketing or other commercial realms to offset that. That has happened quite frequently. It would be good to see a study on how that has happened [00:21:00] historically, but I don't think there's an increase in job losses directly related to the technology itself. And I would be surprised if this automation exceeds the impact that robotic and manufacturing automation has had. I really would. And I actually don't think it's gonna happen. And I actually don't think that even if it can finish a project, and that last 10% that you're speaking about can happen, it's gonna replace us, because agency is the thing that humans want. We change our mind frequently. Uh, a friend of mine, Nick Hodges, he talks about it, and I wrote about it last week. He calls it the takeoff and landing problem: we want to set the course for the plane,
watch where it's going. And, as you just said, you are using the console or the control panel to [00:22:00] help it on where it's going and guide the AI that's flying the plane. But you are sort of guiding it, and then you bring it in to land on what you finally want. I think that's where we're gonna go. And I would actually say that where we are right now with AI is that it hasn't reduced the workload for anyone at all, almost. What it has done is enabled us to get more done more quickly. I know that I get things done a lot quicker now, whether it's thinking of a blog post or working on a presentation, by working closely with the AI as an agent, as an idea-generating tool, as something to synthesize my thoughts, to gather information and data on my behalf. That might take me five hours; that takes me five minutes. I'm not doing less work. And in fact, I've had my biggest year ever this year financially, and I think it's because I'm getting more done and being able to satisfy clients' needs far more quickly, and able to do more keynotes and write more things [00:23:00] because I've got this incredibly powerful general AI at my disposal. But everyone I...

Cameron: An accelerator.

Steve: Yeah, an accelerator. And everyone I...

Cameron: Hmm.

Steve: ...know in the informational realm, whether they're tradies or people working in commercial roles or entrepreneurs, they all just say that they're getting more done. No one's doing less work, I don't think. And productivity's been the big missing link that we haven't had. I think this is it. But even if the AI can do everything for us, I don't think we'll want it to. We actually want to make decisions. We want to have the final call. That's why you have board meetings and someone comes in and the big cheese makes the decision. People don't want to outsource their agency. In fact, the entire capitalist premise, money itself, is about increasing agency to make independent decisions on your behalf.
That's the actual thing that we want, right?

Cameron: Yeah, but what we want isn't necessarily the same thing as what our bosses want or our shareholders want. You know, I [00:24:00] still think we're gonna see low-level jobs going. Like, I wonder about the impact...

Steve: We'll see them go, but not more than we have. Like, it's gonna be the same pattern that we've seen. I don't think it's gonna be this massive overnight deluge. Oh, AGI's arrived, that's it, an agent can start, finish, envisage the project, guess what the project is before you even start it, and it's already done it. I just don't think that's gonna happen.

Cameron: I wonder what the impact is on, like, gig economy jobs. You know, two years ago I had somebody who was editing my shows for me. I had somebody who was transcribing my shows for me. Now I do both of those things myself with AI tools. Um, editing a show used to take a...

Steve: Those singular things. And this is the point: a lot of those pieces of the puzzle, you have this guy does the edit, this guy does the transcribe. Well, [00:25:00] now AI can do those bits. And it is incumbent upon those people who have singular jobs, jobs without a wide scope... like, we need to be as horizontal as possible in what we do, so that we have all of the pieces of the puzzle, so that we don't get replaced. The more singular your role, the greater risk you're at. Like translators, the classic one. Must be a tough time to be a translator right now.

Cameron: Well, I do think the big question is, when are we gonna have AIs that are reliable and can be trusted to...
three nines, four nines, five nines. Never gonna be a hundred percent, but trusted enough, or have AIs working to check each other's work in a system where you can give it, uh, initially a relatively low-level job and be [00:26:00] confident that it will deliver an output that is at human quality level or better.

Steve: Yeah, I think we will have that, and maybe we do have that now, but where the pilot's flying the plane, right? And we are telling it, it needs to go off and do this. We're managing agents in the same way that we manage employees, or we are managing AI doing processes that we would've done ourselves or low-level employees would've done. Some of those low-level employees get replaced, in the same way that, uh, mail room people, it's a terrible example from 1989, got replaced when we moved to email. And so low-level functional roles get replaced, but I don't think complex roles that have a multitude of inputs, outputs, stakeholders, and micro projects get replaced. People who are in a tiny, thin project tier get replaced.

Cameron: But if you replace all of the low-level people, what's the job of the person who's currently managing all of the low-[00:27:00]level people? Like, corporations are...

Steve: The...

Cameron: ...hierarchies, right?

Steve: Yeah. They are, they are. The job of the person who is managing low-level people becomes managing a bunch of AIs and agents, and it actually becomes a really important role, because those agents are delivering work that then goes up the hierarchy, and you become an orchestrator of the tasks that AI bots and agents do on our behalf.

Cameron: So you're no longer managing humans. You're managing AIs.

Steve: Bots. Yeah, that's right. But I do think that some of that hierarchy will remain, because people like to be in charge of people. And I still believe that 90% of jobs are bullshit jobs that no one really needs.
Anyway, all those Amazon jobs, those 30,000 Amazon jobs, they're all just like, oh, what are you? On Billy Bloggs' podcast, promotional operator, strategic, uh, special projects guy on 400 grand a year, and whatever, and bye-bye, thanks for coming. Like, that's what happened, for sure. [00:28:00] They're real bullshit jobs. And the great reminder is what happened during COVID. We found out the jobs we really needed during COVID, didn't we? We need the person who can clean the streets, and the medical people, and the food, and we really found out what we really needed. And all of the bullshit jobs, those 30,000 Amazon jobs, that was everyone working from home and doing nothing and getting paid more, which is insane, but that's the world.

Cameron: And we increased the salaries of those people doing the really important jobs a hundred times, because we realized how important they are.

Steve: So, if I can propose a question to you, Cameron. I...

Cameron: Hmm.

Steve: ...am writing a post on this right now, about ownership. So let's imagine that you get a robot, and this robot can do everything that you can do and can do your job on your behalf. Let's say, uh, the robot could be an AI or it could be a humanoid Cameron that looks and acts like Cam with the short gray hair that he got at the age of 23, [00:29:00] but this robot is a 23-year-old. It's a new robot. It's just arrived, and it can do all the things that you can do on your behalf, or the things that you don't want to do. Super efficient. If you could get that robot and it could do all of your tasks... or even if you're an employee, and let's imagine you have this weird sense of, uh, autonomy where you can deploy whatever resources you want in your job. You get to make that choice. You could bring in a bot to do things for you, a little bit like how you go on the internet and find things to do your job. If you could own that bot, of course you would.
You would say yes to it every day, because you could be more efficient, get more done, maybe free up your time, do other things, maybe not work as much, right? If you could get that bot, the problem isn't the robot or the AI; the problem is who owns it, right? It becomes an ownership issue. If the company deploys the robot and owns the robot, it gets rid of you. But if you own the robot and deploy the robot, that's a totally different set of [00:30:00] circumstances. I know it's a weird kind of antithetical example, and that's not how commerce works, because the boss and the management and the owners of capital get to decide where resources get deployed. But if you owned those, you'd have to say the AI and robotics revolution isn't bad. It's a question of who owns it, who deploys it, and where the benefits get distributed. See, now I'm sounding like a communist. Now you've finally got me. I've had three months of thinking without Cameron, and I've come back a communist.

Cameron: Well, for a start, I wouldn't wanna replace myself with a robot, because I like what I do. I don't want to outsource what I do. I like what I do.

Steve: Well, no, but wait a minute. Wait a minute. Yes, you like what you do, but you were just telling me five minutes ago about the bots that you were deploying to do certain parts of your job. So this is the point. Some people would wanna replace themselves entirely with a robot that becomes their economic engine. Let's say instead of...

Cameron: Hmm.

Steve: ...a company, maybe you own a robot that becomes your personal [00:31:00] economic engine that...

Cameron: Slave.

Steve: Well...

Cameron: Hmm.

Steve: ...wow. Here we go.

Cameron: You're talking about slavery, basically. Yeah.

Steve: Well...

Cameron: ...an army of slaves out there working for me.

Steve: Well, Kurt Cobain sang about it. He said, and it's okay to eat fish, 'cause they don't have any feelings. It's okay to eat fish 'cause they don't have any feelings.
Something in the Way. The robots don't think. Like, that's where we are. So...

Cameron: I...

Steve: I think we are gonna get to a question of ownership and deployment. You can have a robotic economic engine...

Cameron: Robots. Robots is the next section of the show, Steve. I'm trying to stick to AI, and then we'll get to robots in a minute. You're getting ahead.

Steve: Alright.

Cameron: My question is, where we're at with AI today, is this where you thought we'd be [00:32:00] three years ago? Are we further ahead, or are we not as far ahead as you thought we'd be?

Steve: We are further ahead than I thought we'd be. Way further ahead, because humans have a terrible habit: once something arrives and they're used to it, they're just full of disappointment and they just want more. We are greedy little, little beings. It's like, seriously, we are so much further ahead. ChatGPT, the first one, what was it, 3.5, that came out, blew minds. Like, that was crazy good at everything. And now you can have live realtime conversations, it's got live web, you can create video, you can get it to write code. Like, incredible. I think we're further ahead. And I think maybe in the past few months... if we imagine technology doesn't move in a straight line, it moves in step changes: you have a bit of a flat moment and then a boom, and then a flat moment. We had quite a few booms, and then a little bit of flatness. The last iterations were maybe a [00:33:00] teeny bit disappointing, because our expectations are so incredibly high now. I think we're way ahead.

Cameron: At the end of last year, the end of our 2024 season, we predicted 2025. We both said the same thing: the year of agents, because that's what all of the state-of-the-art model companies were talking about. You and I were talking, uh, the other day, and you said agents: very disappointing.

Steve: Yeah, they are. So, we are further ahead than I thought we'd be three years ago.
We are not as far ahead as I thought we'd be almost 12 months ago. We did so well in those two years, it was so ridiculously incredible, or two and a half years even. The first half of this year was pretty incredible, so my expectations were just so high, and now I'm a little bit disappointed that agents haven't arrived. But it might be in the next six months that moment happens where everyone goes, whoa, [00:34:00] remember agents couldn't take off and land a plane? Now they can.

Cameron: I think agents have arrived,

Steve: They...

Cameron: but...

Steve: ...and they're a bit disappointing.

Cameron: Yes, it's very early days. Yeah. Yeah. They're not very...

Steve: Early days. We haven't...

Cameron: Yeah.

Steve: ...hit the jet engine phase yet. We're still smashing those propellers and the dual wing. I mean, we're not there yet.

Cameron: I mean, my Cursor example before is a limited example, but to me it's mind-blowing compared to where coding was with AI a year ago, where I had to tell it what I wanted, copy and paste the code, put it in an IDE, run it, copy and paste the error message, put it back, find a document, share the document. Now I just give it access to my directory on my hard drive and say, go do it. If you need something, look it up. And it does. And it finds it. And it fixes it. Is it perfect? No, but in terms of where we were at the end of last year, mind-blowing. [00:35:00] But it's a limited example of agentic behavior, from a coding perspective. Uh, by the way, there was something I would've talked to you about a couple of months ago if we'd done a show. Um, there was a lot of criticism I was seeing online because Dario Amodei, the CEO of Anthropic, had said six months earlier that within six months, 90% of code would be written by AI. And people were saying, well, that's not true. You know, I'm still coding the old-fashioned way, blah, blah, blah.
My argument is, but there’s 200 million people writing code now that weren’t writing code six months ago. Steve: So it is. It’s maybe even more than 90%. You are right. It definitely is. And I was just, yeah, looking at the, the idea of it coding. One thing that it has done recently is AI can release its tentacles into the web and the market and bring things [00:36:00] back now. So in a way that’s kind of agentic within the scope of the project that you’re asking it to do, rather than, here’s a project, go and do it. But on the tools that it’s been trained on, like coding, it does very well. It goes out and goes, oh, okay, I’ve gone onto Hugging Face and I’ve done this and I’ve gone to these, and it brings it back. So its tentacles are starting to reach out. Cameron: We’ve also seen this year the rise of text-to-video and text-to-music models, which are really accelerating very, very quickly. Veo 3.5 just came out, where you can do much longer videos. Sora 2 came out a few weeks ago. I still don’t have access to it even though I have an invite code from my son in LA, because you have to be in the US to be able to download it still. But the videos that are coming out are really, really super impressive. But interesting, I was in the car with Fox the other day and he said, you know what? I’m [00:37:00] sick of Sora videos. He goes, everywhere I look, it’s Sora video, Sora video, Sora video. And he said, I’m actually quite concerned about the future where I’m not gonna be able to tell what’s real and what’s not real anymore. And he said, if I had a time machine, he said, he asked me, he said, if you had a time machine, would you go into the future? Would you go into the past? And I said, well, hold on, can I get back after I do it? He goes, yeah. And I go, my answer is usually the past. And when people say, where would you go?
I say, I’d go back 30 years ago to talk to my dad and my grandparents when they were still alive again, because there are so many things that I wish I could tell them about and talk to them about that I didn’t have the opportunity to. Or, you know, they, they didn’t get to meet any of my children and you know, they didn’t get to meet my wife and all that kinda stuff. But I’d, I’d show them my abs and say, look at me. [00:38:00] I’m 55 and you know, I’m, I’m doing kung fu. Um, Steve: though? Don’t just wait a minute. Stop, pause everyone. Holy mackerel. Reilly’s in good shape. And now the podcast is X-rated. All of a sudden, Cameron: dude, I have this incredible like kung fu body now after, and because of ChatGPT, Steve: you say so yourself, I Cameron: well, compared to where I was a couple of years ago. Yeah, I would say incredible subjectively, not objectively, subjectively, Steve: Yeah. Cameron: um. But he was saying, and and I said, well, but then I’d also go into the future and see what stocks are successful and then I’d come back and invest in them, you know? Steve: style. Cameron: But he said, yeah, yeah, yeah. But he said I’d go back into the past. ’cause I wanna know what life was like before everything was AI and, you know, computerized, what life was like before iPads and iPhones and the internet and AI. Steve: freedom and, and fluidity to it, [00:39:00] didn’t it? It really did have a different sensibility, and every era has upsides and downsides, right? Cameron: My point is, you know, I keep wondering how his generation is going to respond to this world of AI content and the dead internet theory and all that kinda stuff. I kind of assume they will just flow into it, not question it, not think about it, and it’ll just be, well, this is what we’ve always known. But the fact that he’s 11 and is going, you know what, I, I don’t like this, this is really kind of depressing and scary.
Um, I dunno if that’s reflected across his generation at all yet, but it was kind of surprising to me that that was his current point of view. Steve: He’s a very wise young fellow. How old’s Fox now? Cameron: He’s 11 and a half. Steve: Yeah, so Cameron: Hmm. Steve: smart parents. I, I’m not surprised. A couple of things. The first thing [00:40:00] is if he wonders what life’s like pre-internet, he can actually still do that in the real world. It’s not that it’s not possible, it’s just that we don’t. So you can go somewhere without a smartphone and only read a newspaper and maybe just watch the TV channels that are on the TV and be in nature and be at the beach and not have your phone, and all of that is still possible. We choose not to. Right. So that’s an interesting, just side note, that anyone who wishes for the past can reinvent it by changing places and Cameron: Log off. Steve: about what Yeah. And what things you bring with you. Right. But I wanna pick up on the Veo and the text-to-video, which is another amazing thing with AI. So yeah, again, if the listeners haven’t got it yet, we are so far ahead of where I thought it would be three years ago. Incredibly so. And that’s another great example. But one of the terms that you introduced me to way back in the early Twitter days, [00:41:00] probably 20 years ago when we first met, was the Splinternet. And I feel like the Splinternet is making a comeback, because what we have now is a couple of apps that are AI-only apps. The Sora app and the Meta app are both AI-only apps. And in June of this year, and I’m not talking about malicious bots, the amount of content on the internet crossed over to be a majority of AI. It’s now 50, it was 51% in, I think it was August. There were a lot of articles that came out saying the internet officially crossed over to be more non-human than human. And I think we might see apps which are human-only, and a couple of people have proposed it. What? What is the new social network?
How do you prove that you typed it with your finger in that moment, and the photo has to be live, or it has to be [00:42:00] only human, and there’s technological ways you can do it. It creates boundaries and barriers to entry. And that’s okay, because that’s what it would be for. You would log in and go, well, this is a human with a human opinion. There are no bots on here. You can only be a verified human in some capacity, because we’ve already got the AI versions of those now. And to be quite honest, after five minutes, they’re pretty damn boring. I’ve uploaded a heap of photos into the Meta app and created videos. I’m like, yeah, okay. I, it’s, it’s a little bit like, I get the joke, because there’s something wonderful about dogs hang out with dogs, birds hang out with birds, fish hang out with fish, and humans hang out with humans. And Cameron: Fox was, Steve: we’ve got this other species, the AI species, and we’ve talked about spawning another species, Cameron: Hmm. Steve: hang with humans. Cameron: Fox was talking about The Nightmare Before Christmas at bedtime last night. And he was like, we’re never gonna see that again. Like handcrafted [00:43:00] animation or stop-motion animation. He said, there’ll be stuff that’ll look like it’s handcrafted stop-motion animation, but it won’t be. And I said, well, maybe people will go back to doing that, uh, because they want human-generated content. He goes, but how will you know? And I said, well, there’ll be behind-the-scenes stuff of the animators making it. And he goes, yeah, but how do you know that’s not AI-generated? Steve: call. Cameron: like, yeah, it will be ha. Steve: we’ve shown a behind-the-scenes to show how humans generated it, but we’ve generated it with AI to make you believe and feel, because we, we here at Animators Incorporated know that the feeling of the belief that it really was human when it wasn’t, is the feeling that you are really after. Cameron: Okay.
Uh, before we run outta time, ’cause I still wanna talk about robots and the next three years, Steve. Um, so you’ve talked about, we’ve come a long way. You’re a little bit disenchanted with the lack of agentic process, uh, progress this year. [00:44:00] Based on what progress you’ve seen happen over the last three years, where do you think we will be sitting three years from now, at the end of 2028? Steve: By the end of 2028, this is my prediction. All of us will be working with a number of agents. I think they’ll solve the agentic problem. I don’t think agents will be just let loose and just ping you in the morning and say, Hey, you know how you had a few conversations with Cameron and then Billy and Mary and you’ve been thinking about doing this startup over here? Well, I’ve just launched it all overnight and here’s what I’ve done. Hope you like it. I don’t think that’s ever gonna happen. We’re still gonna set the course of the AI agent, but I think it’ll be able to do everything. All of us will be managing a suite of AIs or a singular AI that can do a, a number of projects simultaneously. A little bit like the movie Her, where she’s having, you know, [00:45:00] 11,064 conversations simultaneously. We’ll be managing those. I think leading-edge consumers and companies will start to have a lot of robotic humans, humanoid robots walking around in offices, retail spaces, factories, warehouses, and homes Cameron: Don’t skip ahead. Robots is the next section. Steve, stick with AI, Steve? Steve: Straight AI. Straight AI. You mean screen-based AI? Cameron: Yes. Steve: Okay. I think all of us will be working with a suite of agents, both in corporate and private settings, that we will be, will be guiding on projects, that can do all of the technical things that we can’t do, whether that’s video editing, writing code, managing projects, mathematics, engineering, all of those things. And I think it’s gonna be a real period of emancipation where your qualifications are almost irrelevant.
And the only qualification you need is to have good taste, to have desires, an [00:46:00] entrepreneurial ethic, and know what projects you wanna undertake. Cameron: Yeah, I, I, I tend to agree. I don’t expect to see a slowing down of progress. I expect to see, uh, a ramping up. Uh, I think things are gonna continue to build exponentially on the version that came before. I think AI is gonna play a role in enhancing and developing itself to a greater and greater extent, AIs coding AIs and taking the learnings and, and factoring them into the next iterative, uh, step forward. I do think, um, you know, with the, with the text-to-video generation and the content, a lot of it is AI slop, as people call it, or just the same idea repeating itself. And I do, you know, I am waiting for the first [00:47:00] generations of truly innovative and exciting content to emerge out of those, a kind of music that we’ve never heard before, a kind of video content that we’ve never seen before that takes all of these things and really just does something truly exciting and truly innovative and spawns a whole new area of content. Um, you know, I remember with podcasting when we started 20 years ago. You know, people would say, well, it’s just radio and you, it’s just radio on a, you know, on a, a, Steve: Radio, Cameron: portable device. Right? And that was, to a certain extent, true. And I used to say, one day we’ll figure out how to do something that’s truly unique. And without blowing my own horn, I was one of the people who did that, because all of a sudden I did a hundred hours about Napoleonic history. No [00:48:00] one ever in the history of humanity had talked about Napoleon for a hundred hours in a media format like that. Steve: Well, Cameron: You couldn’t do that on, Steve: it. You couldn’t do it because there you couldn’t. Cameron: even in cassette tape days in the eighties, no one did a hundred cassette tapes. Hour-long
BASF cassette tapes on Napoleon or on Caesar or Alexander, even hour-long. Or now, you know, two-, three-hour-long podcasts are quite commonplace now. Oh, oh, a cramp in my leg. It’s not a heart attack. Just everyone relax. Steve: attack. Cameron: Oh, Steve: relax. Karate man. The black belt himself has abs, but unfortunately his gray hairs and age give him terrible cramps. It’s not a Cameron: oh, Steve: [00:49:00] Everybody relax. He will live through this Cameron: oh my Steve: I Cameron: god. Steve: that is karma. Coming back to Cameron for showing off his abs, Cameron: Just gonna Steve: first on Cameron: stand up and stretch my Steve: Cameron could not Cameron: oh Steve: hamstrings popping as they have. Cameron: oh Steve: Cameron, Cameron: oh. Steve: this out. It’s hilarious. Cameron: Oh my God, that hurt. Steve: If you’re not watching this on YouTube and you’re listening right now, Cameron: It was right, Steve: out his hamstring. Cameron: right there. Oh, I just tucked my leg under my chair. Steve: attack. Has Cameron: I, Steve: Has anyone died live on a podcast? Cameron: that’s a good question. I don’t know. Steve: a podcast? We’re Cameron: Christ. That fucking hurt. Oh, I need to go have some electrolytes and magnesium. Um, potassium. Steve: two Cameron: Um, Steve: podcast with Napoleon that [00:50:00] you did that wasn’t possible on the “it’s just radio on the internet” model was, first of all, you found a long tail of customers without any promotional costs, where they kind of found you, you know, distributed around the world, which wasn’t possible before Cameron: yeah. Steve: had breadth of Cameron: Yeah. Steve: in the physical world, whether you’re burning CDs and shipping them around, or the cassettes. That wasn’t possible either. And so the content, the topic, the distribution, and the audience, all of those things weren’t possible. And you were a real innovator within that realm where, classic long tail.
One size fits one, you know, the, the world is weird. And now we can all find each other. Cameron: So anyway, my point, just without wanting to suck my own dick, but the point was new things do come outta these things. It takes time sometimes, but new things do come out of it, worthy new things. I mean, again, Steve: And, Cameron: says me, Steve: we haven’t found them yet with the AI Cameron: you Steve: the text-to-video. I did see one small iteration that, you know, [00:51:00] gave me a three-minute smile. And then I got on with my life, which was animations with ironic political bents arriving on TikTok, where Cameron: right. Steve: talk about, uh, the Little Mermaid and then like turn it into some political thing. And it had all of the imagery in Disney and it was pretty smart and cool, but it was, it wasn’t a fundamental fork in the road where it’s an entirely new thing. Cameron: So there’s a lot, and I think we’re gonna see, um, wider, wider deployments of AI into everyday life and business operations. More decisions, more workflows, more operations will be partially or fully AI-driven rather than just AI-assisted as they are today. I do think AI agents will not just generate code or content, but will be able to execute multi-step tasks with some maybe real-world effect. Bookings and purchases is kind of, little bit, [00:52:00] sort of boring for me, but real operational decisions. Um, you know, what I’d love to be able to do today is say to my AI, you know, I want to, I want to create a new marketing campaign for my investing podcast in America. Can you go out, find all of the right audiences, find the people that are listening to investing podcasts, that are interested in Buffett-style value investing. You know, track down their emails, their socials, uh, what they’re listening to, go and create ads for other podcasts. Execute it. Do the deals, negotiate the deals, execute it, just go off and do it like, and have it, you know, go and deliver all of that kinda stuff.
I think models are gonna become more efficient. I think they’re gonna become cheaper to train and deploy. I think we’re gonna see, uh, even more stuff coming out of China. Oh, by the way, um, side note, but great. I dunno if I already [00:53:00] sent you this on a text, maybe not. There’s a new book I just, uh, heard about by a Chinese American China scholar called Dan Wang. It’s called Breakneck, and it’s talking about China and where it’s going. And one of the greatest insights that explains China versus the US and the rest of the world was this. He said, China is run by engineers. America is run by lawyers. Steve: you sent me that quote, Cameron: I sent you that quote. Steve: Yeah. Which, which, yeah, Cameron: I listened to a podcast interview with him. I think it was a New York Times interview, and he, he said, I lived in China for, I think it was like 2017 to 2023, and he said, not only are Chinese cities better than American cities, but you go out into regional China now, and the regional cities are better than big American cities. They’ve got better roads, better energy, better water, cleaner water, better [00:54:00] Wi-Fi, better internet. Everything’s better. Because he said, you know, Deng Xiaoping made a conscious decision to elevate engineers into the Communist Party, into the Politburo. And that’s continued. So they have an engineering-first mindset. How do we build the future, rather than a, you know, democracy run by lawyers that’s like, well, here’s what we’re gonna allow and here’s what we’re not gonna allow. And you just all figure it out in the middle. They’re like, here’s what we’ve gotta build. Let’s go build it. And he said, and sometimes they get it wrong. They build the wrong thing or too much of something or whatever. But it’s that engineering mindset, which is why they’re now doing more patents than anywhere else in the world.
Um, and he also said competition is even more cutthroat in China than it is anywhere else, because there’s more of them. Steve: Compared to America, one of the most uncompetitive countries in the world. It, it Cameron: He, Steve: is Cameron: he said, but if you look at it, and he’s talking about what happened in [00:55:00] the tech sector over there, he said, like he said, he interviewed one company over there that started off as a Groupon clone and survived, but they said they were one of 5,000 Groupon clones operating in China at one time. But he said the, the thing is, because of the way it works, the people and the state get all of the benefits. The entrepreneurs themselves fight it out for scraps because there’s so much competition. Steve: that’s what capitalism should be. It should be that corporations fight and that there’s no barriers to entry and we can continue to have that fight so that the consumers get the best outcome. That’s the idea of capitalism, which lurches the entire economy forward. But you don’t have capitalism in America anymore. I mean, again, there’s no such thing as pure capitalism, pure communism, any of that. We’ve Cameron: Hmm. Steve: that neither of those exist, but America has become far less capitalist, as has Australia in the last couple of decades. Incredibly so. And I think a big part of that is [00:56:00] because software has copyright, uh, laws baked into it, which reduces competition potential. It’s one of the major issues. And then you’ve got the aging population in these economies as well, where, to get into power, you need to protect the power structures that exist. Within China, part of the advantage is they don’t have legacy infrastructure. Uh, when you’re starting at ground zero, it’s easier to design a new futuristic city than when you’ve got existing roads and systems and vested interests of who owns the capital. It becomes a far more difficult place to innovate around, Cameron: Hmm.
Steve: you have protection versus creation. Cameron: Mm-hmm. Steve: is in creation mode Cameron: Hmm. Steve: in protection mode. Cameron: Yeah. Steve: that that’s a core issue. And, and you know, you used the word Politburo. That’s actually the core issue in America now, it’s not technological. It’s not that they don’t have enough entrepreneurs or enough [00:57:00] clever people or the wherewithal to do it. It’s that we have the wrong political structure. The same in Australia. It’s why no one can afford a house. Because we are protecting existing power structures, and it’s getting worse in America too, and certainly with tech and AI Cameron: So my point was just gonna be, I, I expect to see more AI leading-edge stuff coming outta China. More chips, more data centers, more models. I, I do think China’s really going to become way more important three years from now than it is at the moment. It’s already caught up in many ways, uh, to the US, but I think it’s gonna, we’re gonna continue to see rapid progress coming out of China. Let’s move on to robots. Steve, we’re an hour in, can we, can we do robots Steve: We can, we definitely can do robots. Real quick, Cameron: quick? Steve: last thing that happened with AI was I really liked the idea when [00:58:00] DeepSeek came out and really blew everyone away, they had an open-source code base. The fact that it showed, it was the first to show the thinking of the AI, the thinking process, Cameron: Mm-hmm. Steve: how quickly Cameron: Mm-hmm. Steve: OpenAI turned around and released a model for free Cameron: Mm-hmm. Steve: I think China being competitive is good for Western markets because it keeps us on our toes. Cameron: Now we have lots of really state-of-the-art models coming out of China that are, if not as good, almost as good as the state-of-the-art models coming outta the US. So, robots. In November 2022, Steve, humanoid robots were largely curiosities in a lab.
Um, high-cost demos of walking or balancing or some simple manipulation, but not doing general tasks in human environments. They were wired up, tethered, usually lots of external sensors or human teleoperation. The business case was weak, and they didn’t have AI, so they weren’t very [00:59:00] smart. It was all about, look, it can stand up and balance or stack boxes. Robots and AI were sort of separate tracks three years ago. Today, of course, a robot is sort of a delivery mechanism for AI in many ways. They’re all gonna, we see it now. It’s all gonna be integrated from the get-go. And you know, one of the big news stories this week was the Neo humanoid robot going on presale, $20,000. You can pre-order a Neo, and a very, very, well, not very impressive, a somewhat impressive launch video for it that came out this week of it going around your house, doing all of your chores for you while you’re out of the home. The Wall Street Journal went and met with them and, uh, behind the scenes, 99% of everything that you saw in the launch video was a human teleoperating the robot. The robot’s not real. It doesn’t exist. Steve: be [01:00:00] behind it. There’s Cameron: Yeah. Steve: in the humanoid robot dancing around. Cameron: It’s just, yeah, horrible. But that’s where we’re at, they’re pre-selling this thing that doesn’t even exist yet. Trying to get ahead of the market. Steve: exist. I love a presale. Cameron: What about your robot that you ordered? Have you got it yet? Steve: No, it arrives in December. The K-Bot, the thing is, you have to code it. You can’t verbally and visually train it, Cameron: Right. Steve: which is gonna be tricky. Uh, but still, I’m, I’m gonna pre-order one of the Neos too, like why not? It’s pre-order. It’s like a few hundred bucks. It’s definitely worth doing. I love what you just said, Cameron. Humanoid robots are a delivery mechanism for AI. That’s beautiful, man. And that’s exactly it.
And I think this is the big, I think this is a bigger shift than screen-based and voice-based AI, because they’re multimodal. Now that AI can talk, have visual, verbal reasoning, [01:01:00] we can explain things, we can do demos. It becomes the, the, uh, let’s say the embodiment moment of AI. And I think it’s akin to what happened with the car, because we had horses and carts for hundreds of years. We’ve had robots not for hundreds of years, but for quite a long time, uh, static robots that can do tasks within factory settings. What we’ve done with AI, I believe, is developed the brain and the nervous system to put inside the humanoid robot to get a deployment. And we need humanoid robots because we’ve got a human-shaped world. And humanoid robots for me are a chance for grand emancipation of humanity, where we can escape the screen and the tech behemoths potentially, because a lot of the humanoid robots aren’t coming from the traditional big tech, which I like and is quite exciting. And if we have open-source LLMs, we can plunk them into a humanoid robot that we train with [01:02:00] our perspective, with our ideas, a little bit like the way we train our children, and show it the tasks that matter in our work context or in our domestic context. That’s the exciting part of humanoid robots. And so long as they’re open source and/or we can tinker with them in the same way that we can tinker with a car. We used to buy a car and you own it. You can lift up the bonnet and soup up the motor and put spoilers on it and change it the way you want. That’s what we need in AI. That’s the missing link. Right. And I, and I feel like humanoids is the big, big thing. I think that, I think the robot economy, and I wrote about that a few weeks ago, I think we’re gonna have this huge robot economy. The nighttime economy is 13% of GDP and it didn’t exist in Australia until 1917 because there were no electric lights.
If we can deploy robots to do things on our behalf, they become a new economic engine for everyone. I’m like, I’m really excited about that, but it’s got to be open source, where we can play with it, not where the [01:03:00] code base is locked up and it’s all controlled from afar and across the cloud and they upload and download. Potentially, we could teach our robots something and then sell licenses for what we taught ours, so you can have the, the nicest-trimmed hedges in your garden, ’cause your robot is, is the champion at that, or whatever it is you want to teach it. For me, Cameron: Stop being so capitalist, train it and then give it away. Give the code away. Make it freely available. Open source the code. Steve: Okay, great. Do that. Well. Well, well, well, either an economic engine or let’s say an abundance engine might Cameron: A world where your robot sees a black helicopter and you say, can you fly that thing? And it goes, I can now. Yeah. Downloads the Steve: right. Yeah, Cameron: right? Like Steve: but let, let’s Cameron: Trinity. Steve: Either economically or, or, or either a fashion, uh, that it opens up abundance, because things can be done for you and on your behalf and within this ecosystem that all of [01:04:00] us own and develop. ’cause I think that’s what happened with airlines, with cars, with every technology until we inside, you know? Cameron: So where we’re at today with humanoid robots, I talked about where we were at three years ago. Where we’re at today is they are available for sale. You’ve bought one primitive model, but you’ve bought a humanoid robot. Do you think three years ago you would’ve expected that you would’ve bought a humanoid robot by late 2025? Steve: Now this is a massive surprise. And even the price point, we’re talking 16K, Cam, and the new Neo is 20,000. Uh, I, yeah, 20,000 US. So let’s just say they’re the price of a small car. I think that’s a fair synopsis.
And, and Cameron: And Steve: anyone, anyone would buy a humanoid robot if it came at the cost of a small car, because I think it’s gonna have maybe even more utility than a car does. Cameron: I don’t think we will end up buying them. I think we’ll [01:05:00] end up having them on a monthly subscription, but Steve: Ooh, here. Cameron: Yeah. But well, not all of us are as rich as you are, Steve. I can’t afford to spend $20,000 on a robot, so if I don’t get it on a subscription, I’m not getting one. But anyway, that’s another story. Steve: but if you, no, wait a minute. Let’s, let’s hold up there. If you would buy a car for $20,000, I do not see why you wouldn’t buy a humanoid robot, which can undertake all of the domestic tasks. And let’s assume it can do everything, just like an AI has a PhD in every subject. Why? Why wouldn’t you, in fact, Cameron: ’cause you don’t have the money. ’cause I don’t have the money, Steve. Steve: sell the car and get the robot, the robot will take you further and it can Cameron: It’ll pick me up and carry me to kung fu Steve: you to the bus stop. It will give you a piggyback Cameron: of the family too. Yeah. Yeah. Steve: what I need to get. When my robot comes, I need to hop on its back and go. You think you can’t afford a robot? Well, I sold my car and I’ve got a robot. Where am I going? Now he’s piggybacking me to the train stop. Bring [01:06:00] on rail, the electrical transport system. Cameron: ranting. Where are we? So there are humanoid robots being deployed in factories, uh, in Amazon and places like that. Steve: has them doing, um, certain parts of, uh, moving panels within the cars.
Like they’re able to walk around without being tethered up, without teleoperation, do simple things, better balance, more robust hardware, better finger manipulation. They can pick up objects, navigate human spaces a little bit more capably, [01:07:00] but a long way from something that’s, you know, science-fictiony robots. You know, still a long way from really being something that you can use to get things done in the home. But now when I say a long way, a year, two, three, um, Steve: Maybe Cameron: know, but we’ve, Steve: things are non-linear. This is non-linear Cameron: it’s non-linear. That’s right. And a lot of these things turned out to be really hard, like getting the dexterity of them and the balance and all that kind of stuff. Their, their ability to walk or move quickly. Like most of them, you see, they’re really slow moving. It’s like, I’ve just eaten too many gummies and everything’s moving in slow motion. Steve: Uh. Cameron: Uh, but we’ve come a long way in the last three years. And the big thing, of course, not just the, the form-factor, uh, improvements, but AI in the chip set, AI in the brain, that’s, as you said, is the, the [01:08:00] big leap forward in many ways. Um, there’s a lot of work that’s being done by NVIDIA and companies like that for building virtual environments to train the robots to do something, and then they can take that code and just stick it in its head and it already knows how to put away the dishes or cook a dinner or whatever it is. Steve: humanoid robots are gonna benefit from cognitive surplus in the same way that the internet did, where we had all of this connection and the knowledge bank went up exponentially. As soon as one robot knows how to do one thing, theoretically, if we have the right model and the open-source nature of it, then all humanoid robots, like you say, download it and it knows how to do it. So the rapid onset of the ability of humanoid robots should be.
As, as soon as one knows how to do it, all know how to do it. And, and, and that’s, [01:09:00] that’s where I think you get that exponential improvement on capability. So long as the balance, strength, dexterity within the finger movements are there, then everything changes. But then you get the second- and third-order effect where all of a sudden, and I’ve been espousing this for some time now, is that the advantage of low-cost labor markets starts to get eroded. And manufacturing and production of many things starts to become possible again in high-cost labor markets. Cameron: Yes, I see that happening too. So the question is, where do you think we’ll be three years from now, late 2028, with robotics? Steve: I think someone in your street will have a humanoid robot. In every developed market, and it’ll be a curiosity for two, maybe one or two or three years, until people see the incredible utility in workplaces and or your home [01:10:00] and ev, and then everyone gravitates towards getting their Model T Cameron: Yeah, Steve: quickly after that. Cameron: I think you’re right. I think three years from now I’ll expect to see them out and about. In what? Businesses? Factories, in small numbers. Steve: yeah, you’ll Cameron: Yeah. Uh, elder care facilities, maybe hotels, places like that. Doing service delivery work. Steve: which they’re in now. I was at the Gold Coast airport and they had a robot going around cleaning the floor. I’ve been in a hotel in Shanghai that had a robot that goes up the stairs and delivers your food. Again, it’s on wheels, but it’s not far of a shift Cameron: Hmm. Steve: that capability into the humanoid, uh, walking element, and once you see it, the fear gets removed. People see the utility and, and you just have to go to it. Cameron: Well, there’s my local pizza restaurant has a robot that delivers your [01:11:00] food to your table from the kitchen. But yeah, I’m not talking about that.
I’m talking about genuine humanoid form factor robots doing generalized tasks. I do expect that, if they’re already on presale in really early models, three years from now, we should have, I, I still don’t think they’re gonna be Jetsons, but I think we’ll have robots that are able to do dozens of tasks around the house or in the business. And prices and availability won’t be at a level where they can be mainstream yet, but they will still, they will start to become something you’ll see out and about more often by the end of 2028. Steve: Yeah, the really good ones, the Figure bots, are well over a hundred k. Some of the other bionic ones that come from Unitree, they, it’s sort of anywhere between 30 and 50. So Cameron: Yeah, Steve: expensive, but we’ve gotta remember that the price will halve, the capability will [01:12:00] double. It’s, it’s gonna be, uh, a production efficiency and Moore’s law capability element inside that. I think this is bigger than AI on the screen. Cameron: Who’s the guy who wrote about, um, 10,000 hours? Steve: That was Gladwell. Cameron: Gladwell. I was gonna ask ChatGPT, but you answered before I could even type in the question. I just wanted to finish by talking about self-driving cars. I saw a great clip of him a couple of months ago. I dunno if I sent it to you, if you saw it. Steve: No you Cameron: explaining why we’ll never have self-driving cars Steve: Oh, Cameron: en masse. He said, because they are designed to stop if anything gets in their way, to prevent accidents. And I’ve seen videos since then of people in streets in LA or somewhere like that, just all standing around and sitting on top [01:13:00] of it, and it can’t move. It can’t go anywhere because it’s not allowed to run over people. He said the reason people stay off the streets today is because if you step into a street, you’re probably gonna get hit by a car and killed or hurt. If the roads are just full of self-driving cars, no one’s going to stop for a self-driving car.
They’ll just walk out in the middle of it and it will just have to stop, and no one will get anywhere and nothing will get done. Steve: Yeah, it’s, Cameron: his argument. Steve: and it’s a, it’s an interesting idea, Cameron: It is, but I, I have my counter argument to his argument. Steve: which is Cameron: It is just a legislative thing. All of these cars have cameras, and you’ve got, you know, facial recognition in the cameras. Steve: right. Cameron: your phone is giving off a signal. If you step in front of a vehicle or [01:14:00] interfere with a self-driving vehicle and you can’t justify why you did that, instant, Steve: Did it on purpose. Yeah. Cameron: instant fine. You know, it’s a, it’s a one-stop legislative fix. Well, that, that’s on you. Yeah. You, you’re fucking, you’re fucking with society. Immediate $500 fine. Right. Steve: Yeah. And also the fact that Waymo is operating incredibly well in every market it’s been in. Cameron: Well, that’s because if there’s a, if, if 1% of the cars on the street are self-driving autonomous vehicles, people aren’t gonna just go and fuck with the traffic. Generally speaking, if 95% of them are self-driving vehicles, people might just go, it’s gonna stop for me. I can just, you know, cross the street whenever I want. But we already have pedestrian crossings. We already have light systems that say when you can and when you can’t cross the street. Yeah. Steve: would, you would hope that self-preservation [01:15:00] and not putting, just taking the piss or putting adverse, you know, Cameron: But this was, Steve: trust Cameron: my point was. Steve: Yeah. Yeah. Cameron: He was on stage going, and this is why we’ll never have it. And within 10 seconds, I was like, no, fuck you, you idiot. It’s like, it’s one piece of legislation that just kills that dead in its tracks. So yeah. So much for Gladwell as a sociologist or whatever he markets himself as. Steve: people’s work anyway. You know? He never had Cameron: Oh, allegedly, allegedly.
I don’t wanna get sued for defamation on this podcast, Steve. Allegedly. Steve: does Cameron: Fuck, Steve: of his Cameron: allegedly. Steve: and he just takes everyone else’s research and just, just Cameron: Well, that’s what Steve: public. Cameron: writers do. That’s what I did. Steve: Yes. Cameron: Yeah, I, I just said, according to Steve Sammartino, wrote a whole book about it, The Psychopath Epidemic. Look it up. Steve: That’s Cameron: Uh uh. Steve: it. Cameron: No, I’m saying I have, I’ve stolen the idea [01:16:00] from you. I didn’t, but yeah, Steve: you didn’t, Cameron: probably stole it from someone. All right, Steve, we should go. That was great fun. Steve: It was so good. Cameron: Let’s not leave it three months. Yeah. Let’s do it again in three months. Three years. Do it in three years and see how we went. Steve: We might not be here. Cameron: Alright. I.

  2. 9

    Futuristic #45 – Will AI Kill School as We Know It?

In this episode of **Futuristic**, Cameron is joined by his old friend **Nick Johnstone**, Principal of Toowoomba Anglican School, to explore how **AI is reshaping the future of education**. They dive into the role of schools in a world where every student might have access to unlimited knowledge in their pocket, how teachers’ responsibilities may shift toward mentorship and motivation, and whether schools are even necessary when AI tutors can personalize learning better than any human. The conversation ranges from the challenges of managing devices in classrooms, to what employment might look like in a post-AI economy, to whether robots might one day replace teachers. Along the way, they touch on the social role of schools, legislative drag, the fate of universities, and even sneak in a nostalgic chat about Alice Cooper. FULL TRANSCRIPT   Cameron: [00:00:00] Welcome back to the Futuristic. Uh, I’m doing this today, my name is Cameron Reilly, for new people. Doing this without my usual partner in crime, Steve Sammartino, because he’s off doing a keynote somewhere, said he couldn’t make it, as is his usual wont. But, uh, I’m being joined instead today by an even older friend of mine than Steve. I’ve known Steve 20 years. Nick and I go back 35 years, probably. How old are we? 55, 83. Nick Johnstone, principal of Toowoomba Anglican School. Uh, recently crowned, uh, the principal of Toowoomba Anglican School. Previously principal of other schools, but yeah, Nick and I go back to grade eight in Bundaberg, and, uh, I, I, I invited [00:01:00] Nick to come on. I sent him an article I wrote recently on some of my thoughts and prognostications about the future of schooling and education in a world of AI. And Nick gave me some great feedback and I said, come on and let’s chat about it. One of the things, welcome Nick, by the way, welcome to the show. As people can tell, I took all of Nick’s hair over the years and, um.
Nick Johnstone: at some, at some point we had equal hair, but that didn’t last for very long. Cameron: No, it didn’t last very long, as I recall. Um, one of the things, I’m gonna blow some smoke up, um, your backside for a bit. One of the things I’ve always liked about Nick is, uh, you know, Nick and I, uh, you know, science tech guys, always have been, and Alice Cooper, uh, science tech, Alice Cooper, the Beastie Boys, you know, um, bit of Van Halen. Uh, Nick and I used to, I remember, uh, when David Lee Roth came out with, uh, Yankee Rose, you and I dancing at the school [00:02:00] discos along to that, trying out, testing out our high kicks. How’s your high kick going these days? It’s good. You’re staying limber. Nick Johnstone: in the Hemi. Cameron: Good. Yeah. Yeah. Um, no, in all seriousness, Nick, um, in terms of a, a principal I know is very pro technology, and, you know, I know you work in sort of the, the, the private school religious sector yet have remained pro-tech, on the front foot, very, um, very aggressive in terms of figuring out how to integrate new technologies. So it makes you the perfect person to come on and talk about this. Um, before I hit you with a barrage of questions though, Nick, um, why don’t we start by, I’ll ask you to give the audience your current application of [00:03:00] technologies in your schools? Like what, what are you doing today? We can talk about the future in a minute, but let’s talk about how you approach technology in your school today, what, what your attitudes are. Nick Johnstone: Sure. Um, I guess that, um, I, I, I’ll take a slice, maybe the last five years. ’Cause I think probably going back further than that doesn’t have, um, translatable ability into the future. But in the, in the last five years, um, I see the opportunities of technology in education being transformative, I’ll use that term.
I know it’s a fairly large term in the context of today’s society. But, so in my immediate last school, Bishop Drew College, where I was, uh, head there for, uh, almost seven and a half years, uh, our aim was to go from a relatively traditional model of education. So we didn’t have a learning management system in the school. Um, we had, um, [00:04:00] a relatively recent, uh, laptop program, but we wanted to create more than that. We wanted to create opportunities for kids to work in the online environment, um, but also in the asynchronous environment as well as the synchronous environment. We set up, um, uh, a system where the kids basically could have access to the class content. There were tutorials built into those, uh, processes as well. Uh, that was the first part of it. The second part of it was we wanted to amplify that. We actually established an online school, uh, that’s called Horizons, uh, and in the first instance it was set up so that it could create greater flexibility for students within our current structures. So for example, instead of running a class before school hours or after school hours, we would give greater flexibility in the line structure of a school. So basically a student could have a spare, [00:05:00] but in that spare they would do another online subject that was run by our school. Um, with the plan of testing that through, you know, um, beta testing, lots of feedback from students and staff and parents, to then, uh, expand that model to externals. Uh, so that’s the process that’s occurring at the moment in that school. Um, I’ve changed schools in the last 15 weeks and I’m at Toowoomba Anglican School. Uh, and this school is at the start of, um, a similar journey, in the fact that, um, they need a, uh, uh, a transformative learning management system that allows the students to have better access both, um, in class environments, but also giving them the flexibility of the day that they currently don’t have.
I mean, we have a lot of partners, you know, um, external consultants and teachers coming in. The kids go out to TAFE and a variety of other programs for certificate pathways. We have relationships with the universities and those sorts of things as well, but we didn’t have a lot of those online [00:06:00] opportunities. Um, so that’s sort of the, the journey we’re on at, at this school. But Cameron: And you’re a K to, you’re a K to 12, Nick Johnstone: Yeah, we’ve got three year olds to 18 year olds. Cameron: Right? Nick Johnstone: I’ve always been at schools that have had early years through to year 12. So, um, in fact my entire career has been in K-12 environments. Cameron: Right. And, and your attitude towards, uh, devices and the internet in your schools, how, how do you approach that, usually? I. Nick Johnstone: Yeah, it’s, it’s been an interesting one, ’cause there’s been that sort of, uh, push back and forth on that. Um, I, I would say I, I’m pro devices, but I’m probably not pro phones in the adolescent context in school. I, I haven’t always been that way, Cam, I’ve gotta say. In fact, I remember speaking at a senior years conference at the University of Queensland probably [00:07:00] 13 or 14 years ago, when mobile phones were first sort of a thing, about being able to use them, um, in a science context, you know, all, all of our phones, you know, have accelerometers and et cetera in them. And how can we use that for, not just, um, the communication style of education, but also, you know, can we use it in physics? Can we use it in biology? Can we use it in other, in other contexts of, uh, of, of maths and science in particular? Um, and, but I’ve, I’ve stood back from that now because of the distraction factor of a lot of mobile phones in class. I’m not one that, you know, does the whole, um, uh, put ’em in a pouch, put ’em in a locker, never-be-seen-again scenario. Uh, we live in, um, a modern world.
But, um, the flip side of that is it’s all about teaching kids the responsibilities of having a computer in their pocket. Um, Cameron: Hmm. Nick Johnstone: And, and that when we all live in this [00:08:00] world, we can’t pretend that it’s not, it’s not reality. But I also subscribe to a fair bit of what, what Jonathan Haidt talks about in his work, about just making sure the kids are appropriately ready for technology. Um, and there’s, I’m gonna say security and oversight without much restriction. I know, and that’s a spectrum there, I know. But, Cameron: Hmm. Nick Johnstone: that’s pretty much how I’ve, how I feel now, and I haven’t always felt that way. Cameron: I can imagine it’s really difficult, as it’s difficult being a parent. You know, your kids are older than Fox. You know, you, I’ve got adult kids. You’ve got adult kids. Fox is 11, so he’s in that phase where he’s always on a device, and you know, it’s difficult being a parent, a pro-tech parent. Like, I want you to have devices, I want you to have technology. But at the same time, I know that it’s an absolute, you know, massive landmine and distraction and comes with a whole bunch of problems. [00:09:00] And then you’ve got a thousand kids to have to worry about how you manage that. So I imagine it’s a, an order of magnitude more difficult. Nick Johnstone: From a parent’s perspective, um, I didn’t jump into the mobile phones for my children until they were 15, and that was Cameron: Yeah. Right. Nick Johnstone: a decision that we made as parents, rightly or wrongly. That was just a decision that we made in the context of our family at that time. Having said that, both of my sons use technology extensively in their careers. You know, my youngest is a recording artist based in London, and, and my, my oldest works in, uh, rock and roll media and, and works in Brisbane. Um, and they need devices on them all the time. So, Cameron: Yeah. Yeah. Nick Johnstone: it’s the world we live in.
Not every career is that, but I mean, I know a lot of them are. Cameron: And of course you and I grew up in an era where, um, we had a computer [00:10:00] room at high school where we went and did computers, and we weren’t allowed to have calculators because our teachers would always tell us, when you grow up, you’re not gonna have a calculator with you at all times. I go, no, I’ve got an AI. Uh, you know, you are right, I guess. But it was, you know, we’ve, we’ve sort of come through those generations of technology and, and seen how it’s impacted, what the attitudes were towards keeping technology outta kids’ hands when we were in high school, right through to today, where the technology is obviously so integrated into our daily lives. So the, the, the question that I was posing in my article that I want to really drill down with you on is what the role of schooling, K through 12 and then of course tertiary, may look like in a few years if we proceed with the assumption, rightly or wrongly, because there are a lot of things that could, uh, go awry with this, but working on the [00:11:00] assumption that the tech industry, the Silicon Valley, uh, consensus as Eric Schmidt, uh, calls it, is that within a few years we are all going to have some, I won’t use the term super intelligent, but Nick Johnstone: Hmm. Cameron: intelligent device. Uh, our phones, our laptops, our glasses, our watches, whatever other wearable devices they come up with, will have access to an intelligence that is probably as knowledgeable on every topic as the best humans in that field are. So the best possible teacher on every possible topic. It also understands every individual in a way that no other human can understand that individual, because it’s reading [00:12:00] your emails, it’s, it’s reading your text messages, it’s listening to your in-person conversations with people. It knows what your, uh, neurodivergencies are, which learning modalities you prefer. It’s infinitely patient.
It can present a humanistic, human-like avatar. It can talk to you in a human voice and have you talk to it. It can basically come in and teach. You know, Fox has been using it to help him learn decimals. He’s sort of grade five, through to, you know, talking to you about, you know, PhD level chemistry or biology or physics or whatever. Um, I, I’m wondering how you see this playing out, what the role of schools might be. Let’s just say, I think it’s gonna happen a lot faster than this, but say five years from now, let’s say [00:13:00] 2030. These devices are in everyone’s pocket. They’re, they’re, let’s say that the access to this is essentially free, ’cause it’s given away with the phone and the laptop, like Siri is. It’s part of the operating system. What do schools look like in 2030, Nick? Nick Johnstone: Uh, I’ll go back a step there, Cam. There’s two parts to it. There’s what is the role of education? Because it’s not the same as the role of schools, Cameron: Still. Yep. Nick Johnstone: so, AI in the context of the role of education. I totally agree with everything you said with regard to the inputs into that. Um, certainly around, um, having a tutor that is purpose made for your needs, to be able to help you build your knowledge base, understanding it, and take you, and take you from that, um, [00:14:00] the, the basic knowledge through the, you know, the, the theory of gradual release of, uh, of responsibility. So you’re building content, knowledge, skills over time. Uh, I’ve got no doubt that in the next five years, that will change hugely, from being, um, I’m gonna say bitsy in the, in the area of education to being fully immersive. That will change. Cameron: Mm-hmm. Nick Johnstone: Um, and I don’t know how fast that will change, of course. But like most things in, in technology, it’s, it’s almost exponential growth.
Um, and that’s the reality. Going back from, you know, um, you know, microchip invention to, to the current day we are now, it, it’s been rapid growth beyond what we could have imagined as kids looking at Star Wars and Star Trek, um, all those years ago. It’s, it’s, it’s far exceeded that. [00:15:00] Well, it’s probably in line with some of the wacky concepts, really. But, um, so that’s the role of education. I think it’ll be really, really important in that. But the role of schools, I think, is actually slightly different to that. Um, it’ll, it’ll absorb all of those things of the role of education, but it will also include a, a humanistic, um, component. So I think we’ll always need adults in education in schools. Cameron: Hmm. Nick Johnstone: Teachers. Cameron: Mm-hmm. Nick Johnstone: Um, and I think that will change, I hope no union people are listening to this, but I think it will change. Um, you know, um, the, the, the, the responsible adult in the room, I’ll call ’em the teacher for the sake of this. They’ll turn into more of a, instead of the, you know, [00:16:00] the, the keeper of the font of knowledge and the, and the holy grail of what’s right and wrong, they’ll turn into more of the, you know, the motivator, the facilitator, those kind of roles. But we are dealing with minors here, those that are under, particularly those that are under the age of 16. They have, um, requirements for supervision and care. Uh, so in the schooling, in the schooling context, that’s really vital. You know, we, we need to make sure that we are meeting our child safe standards, processes, um, and all those sort of things as well. So, carer, motivator, that’s our role in schools, on top of all of that role of education with AI in the school context.
So that will mean there’ll be changes in class size, class structures, class times, day lengths, you know, um, [00:17:00] there’s a lot of schools experimenting with the, with the shorter day at the moment to still meet their curriculum requirements. Um, there’ll be a shakeup backwards from schools and society back into policy as well. So at the moment in, in certain policy documents, particularly in the senior schools, there’s a certain number of hours you are required to teach. Now, if teaching fundamentally changes, are those hours of duty to teach valid anymore? Um, does it become more of an outcomes based scenario? So you do a pretest, and if you pass the pretest, there’s no point teaching you that. So there’s no point spending 55 hours in a semester to teach you the biology if you’ve already achieved 80% on the biology test. Um, you’d move on to the next one. So that age definition for classes, classes maybe will be multi-age because of that. Uh, it’ll, it’ll fundamentally change the structure of schools, no doubt. Um, [00:18:00] I guess, outside of that, there are also opportunities for kids, not just to learn the, the curriculum work, but obviously to learn how to interact with other kids of their age, other kids, you know, younger and older than them, how to interact with adults. Kids, as, as you well know, they behave differently for their parents as they do for others. They often Cameron: Mm-hmm. Nick Johnstone: behave differently at school than they do at home. Home is where they get Cameron: Hmm. Nick Johnstone: to let down and, you know, act up. Cameron: Hmm. Nick Johnstone: But, and that’s good. Cameron: Hmm. Nick Johnstone: It’s the experience of being a child. So I, I think there’s a, that’s a big question, how it will change. I, I think it’ll change every way a school is structured. Cameron: So there’s a, there’s a lot to unpack there. I mean, talking about the need for adult supervision, my, my natural reaction to that is until we have robots.
Um, that, uh, again, if [00:19:00] you, if you believe the people that are building the humanoid robot industry, Steve Sammartino, who normally hosts the show with me, just bought his first humanoid robot. Cost him $16,000. I don’t think he gets it until December, but it’s, it’s been pre-ordered. Um, you know, the, the forecast from the, the robotics industry, Elon Musk and, and people like him, is that by the end of this decade, humanoid robots will cost about the same as a budget car, so 10 to 20 grand. Um, they’ll be running the latest advanced AI platform, whatever it is five years from now. So I know Jensen Huang, the CEO of Nvidia, is talking about giving their, their AI platform away for free with their robots, because, um, or their chips that are in the robots, ’cause they wanna ship chips. So if you [00:20:00] buy a robot with an Nvidia chip, you get the AI for free, you know. I know Sam Altman, the CEO of OpenAI, has said he’s envisioning a day when they give you a robot for free with your AI subscription. So he flips it on its head, but it’s like getting a mobile phone for free if you sign up for a Telstra plan for 24 months, right? You get the hardware for free, Nick Johnstone: you Cameron: essentially. Nick Johnstone: buy a basic, um, screen, and your streaming services are a subscription service for your, for your media currently. And, um, and then you’ve got, you know, other boxes that sit on top of that, that manipulate the data to, to give you the algorithm you want, or you think you want. Cameron: Yeah. Nick Johnstone: I, I, yeah, I think so. That’s probably, um, a likely outcome. Robotics, uh, I, I will say that any sort of scientific change, um, and I’m gonna say western world here, um, it has legislative drag. Um, and [00:21:00] in, in Australian society, I mean, a, a, as you know, my, my love during my early university years, it was genetics. I literally did my Cameron: Mm-hmm.
Nick Johnstone: degree and could not get a job in Australia because of the legislative drag. Most of the Cameron: Mm-hmm. Nick Johnstone: techniques in the laboratory had been passed through special provisions, ’cause we’re a tertiary institution, that we could do them, but we couldn’t actually go to the workplace and do them. Um, and it’s no different with, with technology. And I think AI, um, I mean, at the moment they’ve obviously got frameworks. I read them from cover to cover. Um, they’re so open-ended and so nondescript, you can pretty much do what you like at the moment. But at some Cameron: Hmm. Nick Johnstone: point, change will come. And I, I do think, even though robots will eventually be involved in, you know, everything from, from, from babysitting to obviously the driving, which is already occurring now, um, and [00:22:00] all of that sort of, uh, componentry, there’ll be legislative catch up that’s required, and, and that’ll, that’ll cause certain restrictions in certain countries. Uh, Cameron: I think that legislative drag will be compressed by AI as well, though. Nick Johnstone: maybe, Cameron: You say to the AI, write the, write the, write the policy for us, Nick Johnstone: yeah. Yeah. But, Cameron: and it does, you know. Nick Johnstone: It still has to be voted on by humans with constituents. Cameron: Yes, yes. And then there’s a question of, you know, um, how that is impacted by AI as well. But, okay. So, leaving aside the robots thing though, because, uh, you know, there’s a lot of unknowns there. Although, you know, with, um, I, I’m not sure how much you know about this, but, um, there was a, a kindergarten near us where one of the teachers turned out to be the worst pedophile in Australian history. Um, he’s been in jail for the last year or two. He was one of, he was Fox’s teacher, [00:23:00] Fox was his student at that very, um, uh, uh, highly admired Nick Johnstone: Yeah. Cameron: but it all came out with the rise of those sorts of things.
You know, I can see parents going, do we wanna entrust our children to humans, or do we wanna entrust them to robots? Which is safer? Uh, there will be an argument that, uh, maybe robots will be safer. But leaving that aside, let’s just talk about the social stuff. So you, you acknowledge that the role of teachers is probably gonna change when the AI is the teacher, or a better teacher than most teachers could possibly be, for all of those other reasons. Not that they don’t want to be, but because it just understands that kid far better than any human can, particularly if they’ve got 30 kids in a class. The, the social aspect of it is interesting too, because, well, for one, I, I wonder, do parents need to send their kids to school if kids are getting taught by the AI? [00:24:00] Um, I mean, I like Fox going to school. Fox likes going to school, but it costs money. He goes to a small private school. You know, maybe I would decide, okay, well, if he doesn’t need to go to school for education, maybe we can do other things with that money. Maybe the socializing aspect of youth is done in another avenue. I mean, he goes to kung fu as well. Most kids play a sport or some sort of thing like that. Maybe they learn social skills in a setting that isn’t a school. If school’s not required for education and is required for socialization and advanced babysitting, maybe we call it something else. Nick Johnstone: Look, and I do think the fact that, over time, the concept of gathering kids of the same age and putting them in a classroom, it, it actually doesn’t meet any sort of [00:25:00] evolutionary need in humanity. I mean, if, if we go back to preschooling structures, um, kids learnt with a, with kids of various ages and perhaps a responsible adult in that context, guiding them or even checking in with them, and that would, that was in, in a village context. And I mean, I, I know that we don’t live in villages, but, uh, the reality is we, we create our own social connections.
And those social connections may or may not include a traditional schooling structure. I, I think they will for quite some time, though. I think they will for at least another 20 or 30 years, because I think that that structure is indoctrinated into our psyche now. I think if, I think if it had have been a, a rollover from the village to, to the industrial revolution that magically linked into the technological revolution, [00:26:00] probably not. But we’re, we’ve, we’ve had, you know, multiple decades since the Industrial Revolution, which created schooling in order to, you know, generate a workforce of, you know, human robots. Um, but that has changed over time with regard to what we’re teaching, how we’re teaching, um, how we’re evolving. We, we want the students to develop, you know, creative and critical skills. We want them to be part of their educational journey and buy into that, not just, you know, rows of desks like you and I went to school with. Um, Cameron: Hmm. Nick Johnstone: a lot of schools don’t, don’t live in that paradigm Cameron: Hmm. Nick Johnstone: which is Cameron: The school that Fox, Fox goes to, you know, they don’t have rows of desks, you know? Yeah. Nick Johnstone: mean, I, I, I have on purpose chosen to work, to work in and lead schools that don’t have that philosophy, and I wouldn’t. Cameron: Mm Nick Johnstone: but, um, I, I do think that that social community, [00:27:00] I think humans crave it, um, and I, and I think that large social experiment of COVID was, was a good test case for that. Cameron: Mm. Nick Johnstone: Kids spent a lot of time on their devices, but they really craved and they really missed that social connection. And from a school principal’s perspective, the kids couldn’t wait to come back and see and mingle with their friends Cameron: Mm. Nick Johnstone: and parents developed a new understanding of what it’s like to be a teacher. Cameron: Mm-hmm.
Nick Johnstone: the kids miss that social connection, that face-to-face social connection. But you’re right, that, so that, that social connection doesn’t have to happen through a schooling model. Um, I, I will add another, um, caveat to that as well, in the, in the line of, um, opportunities to, to connect [00:28:00] being, being really difficult, often in, uh, cultures that aren’t connected. And I mean, as you, as you know, I’ve just been to, to Spain and Portugal and visited, we’ve visited a number of schools in that context. I think we, this year and last year, I visited 16 schools in Portugal and Spain and Southern France. And, um, on purpose, we visited, um, all different types of schools, you know, international schools, uh, public, uh, private, village schools, inner city schools. And, and in that, uh, context, I wanted to find out what it was like in those areas where they had monocultures, compared to a really diverse community like we have in Australia, and, and, and the US is the same. Um, it’s, it’s a, it’s a diverse, you know, everyone’s welcome, let’s go. Uh, well, it was, um, but, um, [00:29:00] it is different, the sense of community, the sense of connection, in those, in those monocultures. And they’re not all monocultures. I mean, there’s some, some of the towns, like Vigo, where there’s about 50% immigration into that region. But the ones that were monocultures, it was a different vibe. There was a different culture. There was a different social connection with the kids, with the parents, the kids with each other. There were no divides. And in, in many schools that I’ve taught at that are, you know, 30, 40, 50%, um, immigrants, there’s, there’s pockets. Um, and if you, if you are an immigrant coming into a, into Australia and you don’t speak English, it’s very hard for you to be accepted into that context, and therefore your children are experiencing that. Often it takes decades or even Cameron: Generational. Nick Johnstone: to Cameron: Yeah.
Nick Johnstone: you know, we, we grew up in Bundaberg, and, and, um, the Italian kids that were cane farming generations, they were a couple of generations old, so they just fit in and we, [00:30:00] we didn’t even notice any difference. But their, their parents and their grandparents only Cameron: Hmm. Nick Johnstone: you know, met with other Italian families and, and so on. Cameron: Hmm. Nick Johnstone: So I think that’s, that’s another complexity Cameron: Hmm. Nick Johnstone: in complex environments like, like, you know, um, multicultural societies. Cameron: So this leads me into the next question, which is the point of schooling in this future we’re talking about. So generally speaking, we send kids to school so they can get a job one day. Nick Johnstone: Yeah. Cameron: We want them to Nick Johnstone: Well, that’s Cameron: do primary school, do high school, go to university, I mean, our kids, yours and mine, don’t really, didn’t really go down the traditional job path. My kids, uh, you know, Taylor [00:31:00] runs his own media empire, and, and Hunter, um, is a TikToker, whatever that is, as a job. Social media influencer. You said what your kids do before. But, so, no, so really none of our kids benefited from their education whatsoever in a traditional sense to lead them to a job. Um, funnily enough, my adult boys, uh, recently got invited back to their high school, Ferny Grove, to give a talk to the entrepreneurship class there about, you know, how to leave school and become an entrepreneur and do your own thing. Which was interesting, ’cause they had nothing nice ever to say about their high school. Uh, but then they got invited back and had to be nice, um, say nice things. But you know, one of the big questions of course is, well, are there gonna be any jobs in the future? And I’m already seeing this on Reddit.
Uh, [00:32:00] people at university, whether they’re studying psychology or they’re studying law or they’re studying accounting, there’s a whole range of people in the middle of their university studies going, well, I don’t think there’s any jobs waiting for me at the end of this, because AI is gonna be taking 20, 30, 40, 50% of the jobs. Microsoft did a study that we talked about on the last episode of this show a few weeks ago, about all of the jobs that they think are gonna be the first to go. And it’s, you know, a lot of those professions. And I’m wondering, like, what do you tell kids now? Well, why am I learning this, Mr Johnstone? Nick Johnstone: good Cameron: Um, you go, that’s what you say. Yeah. Good question. I dunno. I dunno what the point of this is [00:33:00] anymore. Nick Johnstone: Um, no. Cameron: Yeah. Nick Johnstone: I guess the first part of it is the role of schools. I mean, I’ve been in schools a long, long time now, well over 30 years, and every 10 years or so, you know, the pollies and some representatives from different systems get together and they push out another declaration of what the definition of education and schools is. I guess probably the seminal piece of work in that space was the Melbourne Declaration, which I can’t remember the year of, but it’s well over 10 years ago. And the Melbourne Declaration basically said the purpose of schooling really is about creating connected and effective citizens. Whereas the previous declarations were about creating, um, [00:34:00] literate, um, well, basically creating a workforce. So it went from that workforce concept to being engaged citizens in their society. Now, I wouldn’t say I’m a skeptic, but you know, what does that really mean? And which society? Um, yeah, we won’t get into that, but, um, I do think that.
There’s a lot to be said about the tertiary sector being flipped on its head. You know, I visit quite a number of university campuses, and compared to when I went to university, particularly my undergraduate degree, it was a hive of activity, and now they’re ghost towns. Most of the courses offer an online option, or they offer flexible buy-in buy-out processes where [00:35:00] you can come, or you can watch it online. So university has become this part-time, virtual tertiary experience, as opposed to what I thought was a fairly rich experience. I actually enjoyed my university years. I enjoyed the learning, but I also enjoyed the social part of it as well. But now those core numbers, even in the larger universities, don’t allow that same level of social engagement. For the universities to meet the market of what the learners wanted, they’ve actually taken away the entire experience for everyone. Which is sad, I think, but maybe I’m just looking through rose-colored glasses at my time at university. I need to be practical. But I do think we are moving towards a part-time workforce, definitely. And the stats have shown that every year there’s less and less people that are, [00:36:00] say, full-time gainfully employed. And I hate to use that term, ’cause that doesn’t mean anything; it says if you don’t have a full-time job, you’re not gainful. But, you know, we would meet so many people that aren’t doing the traditional nine to five anymore, and haven’t done for a long time. Um, Cameron: I. Nick Johnstone: exactly right. Uh, and my sons are exactly the same. And my sons’ partners are exactly the same, you know. Uh, and Cameron: Hmm. Nick Johnstone: my son was a year and a half into his degree at QUT Brisbane, and he already knew that he was doing better work than the graduates. Cameron: Mm-hmm. Nick Johnstone: They asked him to do the university ad.
So he did the media for the people that were wanting to come into the course, and he hadn’t even finished the course yet, you know? Cameron: You know, I think I’ve told you the story where, I think it was Taylor that was doing business and IT at QUT, and he’d done two years, [00:37:00] and then his marketing professor said that he had a blog, but he couldn’t figure out how to get anyone to read it. And Taylor was like, why am I learning marketing from you if you can’t even market your own blog? I’m out. And he was out. Nick Johnstone: my son will. Yeah, similar story. Um, yeah, so I Cameron: But. Nick Johnstone: take the point about the purpose of school around citizenship, but the irony of that is, we actually don’t actively teach citizenship as a course or as a structure in our educational system. It’s kind of done through the values of the school and the activities of the school. So if our argument is that we create this model of education to build citizens, we’re not actually explicitly teaching what that means. I [00:38:00] mean, yeah, in some states you can do citizenship education, which is a course usually in year nine and year 10, and they push it into, you know, some different parts of the curriculum, but it’s not a course in itself. Cameron: Quite frankly, that policy sounds to me like it was created by somebody in the PR department, right? We have to be seen to be saying certain things, but that’s, you know. Honestly, we’re pushing out employees. Morgan Stanley came out with a report just overnight that says they believe AI will help, I think it’s American businesses, save close to a trillion dollars a year in productivity, mostly by firing people. It’s $980 billion or something like that they’re forecasting, I dunno over what time period. I haven’t read the full report yet, I just read the summation of it. So obviously those are jobs that won’t exist anymore. So again, how do we motivate kids?
[00:39:00] Like, my working premise at the moment, from what I do? ’Cause that could disappear. Chrissy’s a viol ed teacher, that’ll probably be around for a while. But, you know, why should Fox get an education? My working premise is, look, we dunno, you just keep doing what you’re doing until you can’t do it anymore, until something changes and we figure out what the new world looks like. But how do you do that with a bunch of kids? Like, you’re gonna have kids in grade 10, 11, 12 saying, well, if there aren’t gonna be any jobs, what am I gonna do with my life? How do you handle that as an administrator? Nick Johnstone: Yeah, it comes back down to the notion of, I guess, you want the kids to have hope. One, without hope, you lose all motivation. So you want them to have hopeful [00:40:00] lives. And those hopeful lives don’t need to revolve around paid employment. So you want the kids to be adaptable, that’s another thing. And you know, everyone talks about resilience, but resilience is really around your ability to get, you know, chipped, and you give yourself a good buffing. That’s, you know, how well you can bounce back. But the world changes. It has always changed. Yes, I agree it’s changing at a faster pace, but I think if you Cameron: Mm. Nick Johnstone: if you spoke to, you know, our grandparents’ generation, who were born in the, you know, twenties and thirties, their world is Cameron: They saw a lot of change. My mom grew up on a farm outside of Bundaberg, didn’t have electricity until she was four years old. Now she has an AI on her smartphone. That’s an insane amount of progress. That said, I do think it’s moving at a pace, in the next 10 years, that is unimaginable, Nick Johnstone: [00:41:00] adaptability Cameron: giving. So how do you teach that in a school?
Like, um, I’m telling teenagers that I talk to, just make sure you are across everything that’s happening as quickly as you can be, get really good, which is kind of what I’ve built my career out of since I was at OzEmail 30 years ago, which is being on the front foot of all technological innovation, because I didn’t want to get left behind. Is that sort of a message that you are imparting to kids? Be on the front foot, be on the cutting edge? Nick Johnstone: Oh, yes, totally. Be open to change. Be open for the opportunities that change provides. Cameron: Hmm. Nick Johnstone: Being ready to swoop in and take advantage of opportunities is a big one. And I know the concepts and [00:42:00] traits of being an entrepreneur are probably oversold, but that’s really about a mindset more than it is a set of tools. Um, it’s about Cameron: Hmm. Nick Johnstone: opportunities, whether it be companies, whether it be your own projects or inventions. But I do think almost gone are the days where people will be working full-time jobs. And that nine-to-five thing, I think, you know, within five years will be gone. From a school’s perspective, as a workforce, I doubt, in five, if not in 10 years’ time, there will be too many full-time teachers. It just won’t happen anymore. And I think that will just be translocated across every different workforce. You might be doing a couple of different things, or you might just be a teacher that works three days a week, and maybe two of those days a week are at home working with students. It’ll just change remarkably. [00:43:00] It’ll be those schools that adapt to that want and that need and that demand, that do that quickly. And education department schools, outside of some lighthouse schools, will be the last to that trend, because of their bureaucratic structures.
Um, the independent schools and the Catholic schools will be able to adapt more quickly, in our Australian context. So that’s positive for the high school kids that go to independent and Catholic education, but 60% still go to public schools. So I’m worried it creates this AI digital divide, even greater, even if AI is free. Cameron: [00:44:00] Perhaps, although the way I see it playing out is, like, I dunno if you’ve seen this, but ChatGPT, or OpenAI, a couple of weeks ago, just about a week before they came out with GPT-5, introduced a new thing in ChatGPT, which is study mode. Have you seen that? So I’ve used that with Fox a couple of times and introduced Tim to it, and for people that haven’t played with it yet, the difference between study mode and the regular mode is study mode doesn’t try and answer your questions as quickly as possible. It steps back and it has the personality of a tutor more. It’s like, okay, so tell me what you know so far, and tell me what you’re struggling with. It’s more of a pedagogical approach to AI, which is, Nick Johnstone: for Cameron: yeah. Nick Johnstone: yeah. Cameron: So I think we’re gonna see more and more of that, where the AI will natively [00:45:00] interpret your need and will try its best to come at you based on what your requirements are. And so it’ll hopefully be taking kids and saying, looks like you’re struggling with this. Would you like some help? Would you like me to help you understand this more? But so, I wanted to finish up, Nick, just by asking how you’re framing AI to your students, let’s say older students, grades 10, 11, 12. How are you framing AI at the moment, how are you integrating it or not into the curriculum, and what sort of guidance or perspective are you providing the kids on the role that AI should, could, might play in their lives? Nick Johnstone: Yeah.
Um, it’s a good question. I’ll give two answers to that, because [00:46:00] I’ve been at my current school for 15 weeks. So I’ll give you my answer from my last school, because we’d done a lot of work in the AI space. We’d involved student groups to present to staff on how they’re using AI in their own world, both inside and outside of school. We had a committee of staff, with those students involved at different times, to look at how we could, I guess, leverage greater opportunities from an administration point of view, from a teaching and learning point of view, from a governance point of view. How could we build that into our processes? And then we had the assessment side, and this comes back to that legislation component. We are required in New South Wales to follow the NESA guidelines about assessment, full stop. Nothing else we can do about that; we have to follow that. So we had to have policies that met the NESA guidelines for [00:47:00] assessment, but there were no NESA guidelines for how we could use AI in our classroom teaching and learning context. So then we had basically a think tank, and presentations regularly back and forth with students and staff on how we could build that capacity over time. A very, very progressive way of looking at it, and that school was very, very progressive in its mindset to AI, to see how we could amplify all components of schooling, the core obviously being teaching and learning. Um, I’m 15 weeks into my current school. We’re worried about, and I’m gonna say worried, we’re worried about the assessment process of year 11 and 12 assessment, to make sure it’s valid in that space. And I do also know that the Queensland assessment processes through the government are also worried about that [00:48:00] process. I am less worried about that. And. Cameron: What?
What do you mean by the assessment process being valid? Can you unpack that for me? Nick Johnstone: So, a student needs to present work in class, they need to submit an assignment. How much of that assignment is your own work? Now, I think that question is a lot more complicated than it sounds, because I would argue that no student is going back to first principles to gather work and to piece their ideas together. They’re currently using some sort of search engine. Most people live in Google Land still. And they’re gathering those pieces of work, they’re collating it together, they’re adding their own opinion onto it, and they’re justifying their reasons for that. Some high-quality AI in that space already is, and will [00:49:00] continue to, I think, prompt students within the essay as they’re writing the essay. You know, have you thought about this? Um, your justification for this looks a bit light on, blah, blah, blah. You know, the deeper-level questions. AI can be a lot more, and it doesn’t mean it’s not student work; it’s just prompting them to be Cameron: Hmm. Nick Johnstone: developing their ideas at a deeper level. And at the end of the day, we all want our kids to develop, you know, a broad-spectrum curriculum, but also have depth of understanding in those areas. And so I think AI can certainly have a massive role to play in supporting that growth of depth of learning. In my current school, we are in early days in that process now, and that’s something, personally, I would like to spend a lot of time on with [00:50:00] staff. And I’ll be inviting some different speakers to come in to talk to teachers and some students in that space. So you might get a phone call from me as well, by the way. Cameron: Yeah, I accept, yeah.
I, um, you know, Steve and I have talked about this before, but I’m interested in your perspective. A student running their essay through ChatGPT and getting suggestions for how to improve it: how, if at all, is that different from them having a private English tutor that they ran it past, who gave them the suggestions? Nick Johnstone: Cameron, that’s exactly the comment we made at the committee at my last school. Is this any different? If you’ve got an English tutor and the English tutor is discussing the Macbeth essay that you’re writing, you know, is that any different to the classroom teacher giving you some feedback after [00:51:00] class? Is it any different to paying $120 an hour for a private tutor? I argue no, it’s not, it’s exactly the same. Um, Cameron: Hmm. Nick Johnstone: in fact AI Cameron: And one is celebrated. Like, if you had a personal tutor, people would go, that’s fantastic, great, you got the best parents ever. If you’re using AI, they’re going, oh, that’s horrible, you shouldn’t be using AI to improve your work. It’s a weird dichotomy, right? Nick Johnstone: It is. And, as you know, the benefit of that AI tutor means that they can be a Shakespearean Cameron: Mm-hmm. Nick Johnstone: expert. So they’re asking Cameron: Hmm. Nick Johnstone: questions at a different depth than that classroom teacher, that has probably read Shakespearean plays and has 15 or 20 years’ experience, but they’re not a Shakespeare expert ’cause they can’t be. Cameron: Yeah, you can’t be an expert on everything. So, okay, there’s a bunch of questions here, and I know we’re running outta time. You’re doing your next interview in 10 minutes, but there’s, uh.
[00:52:00] There’s the question of how kids use it, whether they should use it, but also how to use it well. So when I’m preparing research notes for one of my podcasts, whether it’s my investing show or a history show or a politics show, the process I go through today is: I write my notes, I then give my notes to ChatGPT, and I say, fact-check this, and also challenge my interpretation of the facts if you think I’m off. It will then give me its feedback. I then take all of that and I give it to Gemini or Grok or one of the other AI tools, and I say, fact-check this for me, and also give me your position on the interpretation of the facts. So I’m comparing. I call it the Dave double AI verification. I go run it past Dave, in my terminology, right? So I’m [00:53:00] using the AIs to verify my work and each other’s work, and then trying to align them. I go, well, hold on, ChatGPT, Grok said you were wrong on this. And it goes, well, yeah, look, you know, there’s different interpretations or different studies or different models, and they go backwards and forwards. So I feel like kids, and adults too, need to be taught: okay, yes, hallucinations are less of a thing today than they were a year ago, but they’re still a thing, and bias is still a thing. So how do you use the tools to Nick Johnstone: It’s a, it’s a Cameron: validate? Nick Johnstone: We already go through the process with students, particularly those in the secondary years, around how are you justifying the conclusions you’re coming to, and how are you referencing your work, and what are your multiple sources? Is this a primary source, is this a secondary source, et cetera? Um, Cameron: Mm-hmm. Nick Johnstone: [00:54:00] This is another layer of that. This is another layer of saying, okay, have you run this through multiple processes, using the appropriate prompts to cross-reference what you’ve said?
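[Editor’s note: for readers who want to try Cameron’s cross-verification workflow themselves, here is a minimal sketch. The reviewer functions are hypothetical stand-ins for real model calls; the idea is simply to collect independent verdicts on each claim and flag it for manual follow-up when they disagree.]

```python
def cross_check(claim, reviewers):
    """Ask several independent reviewers about one claim and flag
    the claim for manual follow-up when their verdicts disagree."""
    verdicts = {name: ask(claim) for name, ask in reviewers.items()}
    agreed = len(set(verdicts.values())) == 1
    return {"claim": claim, "verdicts": verdicts, "needs_review": not agreed}

# Hypothetical stubs standing in for calls to two different AI models.
reviewers = {
    "model_a": lambda claim: "supported",
    "model_b": lambda claim: "unsupported" if "Moon" in claim else "supported",
}

print(cross_check("The Moon is made of cheese.", reviewers))
```

In practice each stub would wrap an API call to a different provider, which is what makes the verdicts (somewhat) independent.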
Um, and look, I do the exact same process, Cam. I use Grok, I use Claude, I use ChatGPT 5, and I challenge one against the other. Cameron: Mm-hmm. Nick Johnstone: You know, I write my prompts in such a way that they even compliment the other LLM in the process. I really like what ChatGPT had to say about this; however, this is another perspective, Claude will say. Cameron: ChatGPT says to me, well, of course Grok would say that, wouldn’t it? You know, they have a very antagonistic relationship in my world. Nick Johnstone: props have a lot more Cameron: I. Oh, okay. Yeah, mine don’t, mine are very acerbic. Okay. [00:55:00] Well look, I know we’re running outta time, Nick, but are there any final thoughts about AI and the future of education and the future of schooling, and what that Venn diagram looks like, that you would like to leave with our audience? Nick Johnstone: Yeah. I would like to say that we are in a world of disruption. Everyone knows that; we see it in every workplace. That process is not complete, it’s continuous. And the only guarantee in life is change, in fact. That’s the reality. So this is part of the change process we’re going through now, and I challenge everyone, regardless of their workplace, whether it’s in schools or it’s in business, just to be open to the fact that change is reality. And it’s a mindset, whether you adapt to it and get involved in it, or you don’t, and you [00:56:00] know, the consequences are starkly different. So it’s a choice. Cameron: I’ve just launched, in the last few weeks, my new consulting business, Intelletto, which is AI consulting. And the question, you know, I think people are thinking about AI, but I think Nick Johnstone: I. Cameron: the approach that most organizations are taking at the moment is, what can we do with AI?
What I’m challenging organizations to think about is the question: what are the implications for our business or organization in a world where every student, every customer, every supplier, every employee, every competitor has access to unlimited intelligence in the palm of their hands? How does that change the nature of what we do? Because I think those are the deeper questions that we need to be [00:57:00] thinking through. Nick Johnstone: It’s, we’ve totally democratized knowledge. Cameron: Yes. Knowledge, not information. Google democratized information. This actually democratizes knowledge. It is different. Yes. Before you go: Alice Cooper’s new album with the old band. Uh. Nick Johnstone: Love it. Cameron: I listened to it when it came out. I was like, meh, I’m not sure. You wrote a glowing review worthy of Rolling Stone that forced me to go back and listen to it, and on subsequent listens it’s growing on me. Nick Johnstone: It does take a few listens. And look, as you well know, there are Alice Cooper albums that I don’t listen to. Cameron: Really? Nick Johnstone: There’s quite a few in Cameron: Easy Action. Nick Johnstone: No, I love Easy Action. Go back and listen to that. Cameron: Right. Nick Johnstone: to. Cameron: I did just recently. Nick Johnstone: with your jazz lens. Um, Cameron: Yeah. Prog rock. Nick Johnstone: Oh man, I love it. I love it [00:58:00] because it’s got these really strong undertones of the original Alice Cooper group, but also Bob Ezrin. And anyone that’s a rock and roll fan, it’s hard to not love Bob Ezrin. He, for me, changed 1970s rock and roll, and we’re still feeling the Cameron: Yes. Yeah. Big fan of Bob Ezrin. I nearly got to interview him once. When the iPhone first came out, I was in San Francisco, and there was a woman showing me her iPhone, and I was playing with it, and I was flicking through her contacts and seeing how the scroll works, and Bob Ezrin was in her contacts.
I was like, you know Bob Ezrin? She said, you know who Bob Ezrin is? I’m like, lady: Lou Reed, Alice Cooper, Kiss. I mean, this guy was the man in the seventies. But she never set it up for me. Okay. And Ozzy. I mean, you and I used to sneak out of class in high school to go to your place to play pool and listen to Alice Cooper [00:59:00] albums when Constrictor came out. I remember, it was good times. Raise Your Fist and Yell Nick Johnstone: right. Cameron: in the late eighties, Nick Johnstone: I listened to Dada again the other day. It’s a fantastic album. Yeah. Cameron: We ran into each other when Alice was last in Brisbane, just before COVID, early 2020. Yeah. Yeah. Nick Johnstone: Good days. Good days. Good fun. Cameron: Yeah. All right. I’ll let you go, Nick. Thanks for coming on and having a chat. That was a lot of fun. Nick Johnstone: Thanks, Cam. Always great to see you. And we’ll catch up soon for a coffee, hey? Cameron: Let’s do that, old buddy. Nick Johnstone: Bye. [01:00:00]

  3. 8

    Futuristic #44 – AI Agents, Robots and Car Sales

In episode 44 of Futuristic, Cameron and Steve dive into the dawn of the embodiment era of AI. Steve reveals he’s purchased a $16,000 humanoid robot — the K-Bot — to be delivered in December, marking his entry into the personal robotics revolution. The conversation expands into the future of open-source robotics, the potential for a robot skill-sharing app economy, and the economic implications of humanoid automation. Cameron shares his experiment with OpenAI’s new GPT Agents and explains where they fall short. They also explore a future where AI personal assistants act as bullshit detectors during major purchases like buying a car. Finally, Cameron reads an NPR-style retrospective on IBM’s Watson, drawing a direct line from symbolic AI to today’s LLMs and speculating on a future hybrid model. Also – Cameron launches his AI consulting business, Intelletto.  FULL TRANSCRIPT   Cameron: [00:00:00] Okay, go. Gimme your intro. Steve: Steve Austin. You gonna say, gimme an intro again? Wait a minute, wait a Cameron: intro. Steve: Okay, I’ve got it. You ready? Cameron: Yeah. Steve: We Cameron: Give it to me, Steve. Steve: Oh, we have the technology. Steve Austin. A $6 million man who is better than every human in every way. In 1975, $6 million was required to build a humanoid robot who was the future. Now it barely gets you a wooden house with two bedrooms in a city in Australia. We’ve had a great reversal. First a humanoid was 6 million. Now it’s a house. Things are back to front. If you think you can predict the future of technology and sociology, then fucking think again. Welcome to the Futuristic. [00:01:00] Cameron: This is the Futuristic, episode 44, recording this on the 4th of August, 2025, pre-GPT-5, the days before GPT-5, which we’re expecting to hit any day now. But there’s been a big few weeks since we last talked on the show, Steve. You and I have talked off the show, off air, but on air it’s been a while. Steve: private Cameron: What’s, yeah.
Tell me you’ve got big news, Steve. Tell me your big news. Let’s start with that. Steve: I bought a robot. It wasn’t a $6 million man, Cameron. It was a $16,000 humanoid. Sexless, as far as I know, but I could be surprised when it arrives. We don’t know if it’s gonna have genitalia. We don’t know. But I bought myself a K-Bot, [00:02:00] which is going to be delivered in December. Now, hey, still waiting on Elon’s robotaxis, while they’re there now, so we don’t know. But it’s your first personal robot, open source, full stack, in your hands. They’ve actually just got their second batch now. They’re 11,000. Mine was 16,000 with a few upgrades. It was $18,500 deposit. Tom, my co-founder at Macro 3D, and I want to code it to do trades work. That’s the main thing. And also mow the lawns and do the dishes and fold the washing. So the robotics revolution, I call it the embodiment era of AI, is upon us. Wow. We, Cameron: Wow. And you’re gonna take it on the road with you, get it up on stage, Steve: I want Cameron: do a bit of an Abbott and Costello routine. Steve: I would really love to do [00:03:00] that. I put on LinkedIn a picture of me taking it through the airport and sitting next to me on a plane and lying down and going through the metal detector. They’re gonna detect a lot of metal in there. I dunno if that’ll wipe its memory or what’ll happen, or whether it’ll glitch out. It’ll be interesting. And I think they are coming. Unitree also announced a, Cameron: to buy a seat. Steve: well, I Cameron: Do you have to buy a seat for it on a plane, or do you put it in cargo? Steve: Well, you wouldn’t be able to put it in cargo because of the lithium-ion batteries. So imagine, you’re gonna have to Cameron: Hmm Steve: Otherwise, that’d be Cameron: hmm. Steve: we don’t want that. Cameron: I can’t wait to see that. You’re the first person in Australia to have a robot sitting beside you in first class. Steve: First class. Okay. Cameron: Come on.
Surely you only travel first class. You’re Steve Sammartino, Australia’s leading futurist. Steve: I, I, Cameron: is too good Steve: I travel business [00:04:00] occasionally. It just depends on the client, really, to be honest with you. Cameron: Depends on the gig. Hmm. Steve: Occasional upgrade. Cameron: So let’s talk about this in more detail. So you say it’s open source. What is open source, the software or the hardware, or both? Steve: I think both. Again, this remains to be seen, but it comes with standard fittings and capabilities, but you Cameron: Shouldn’t you know, before you bought it, what you’re actually getting? Steve: The way the world works is that you just buy things first and you cross your fingers, and they make promises they often don’t keep, whether it’s autonomous vehicles or, you know, any other elements. But it’s codeable and you can teach it things, and you can also teach it through code, but you can also do it verbally and visually, which is gonna be the killer app on robots. I mean, one of the things that I really hope for is an open-source movement within robotics. [00:05:00] We don’t wanna have closed source. You need to be able to train it in your way, and maybe even share the skills that you’ve trained your robot in. I think that’s a great idea. It might be the best car washer, or the best on a work site, or the best warehouse worker. I think that’s really important, to get an extension of our skills based on what we teach it. You could have the Stevie skill for gardening that goes around the world. I just downloaded the Stevie Gardening app for my humanoid robot, and it’s the best one. And we might get a whole new app economy where human skills, which are incredibly varied across all the gamut of things that we do physically... I think that’s a really big opportunity. But it can only happen in an open-source world.
Cameron: I think that seems to be the model that Jensen Huang at Nvidia is pushing them into: make the software free and open source, [00:06:00] because he wants to sell the chipset that runs the robots. So I do think that will be one of the models that’s out there. There might be some that are closed and some that are open. Well, that’s very exciting, Steve. That’s really huge. Do you know of anyone else in Australia that has a humanoid robot? Steve: I’ve seen a few out at gigs, but all of them seem to be pre-programmed. I’ve seen quite a few of the Boston Dynamics ones, the dogs and the Atlas, do certain things. Unitree just launched a new one, which looks absolutely incredible. It was under 20,000 as well, which is very aligned with Jensen Huang. Two years ago we spoke about him saying, by the end of this decade, they’ll cost less than a small car and we’ll all have them. That seems to be very on track. Capability, we don’t know, we don’t Cameron: Unitree are a Chinese operation, aren’t they? Steve: K-Bot is Which, which, Cameron: yeah. Steve: is, is [00:07:00] rare and good. I Cameron: Hmm. Steve: have Cameron: it. Steve: countries, well, I think you wanna have as many countries as possible with this capability. I think, Cameron: Right. Steve: no, this is not a Cameron: So, Steve: It’s just you want as many as possible. Cameron: Right. Yes. They’ll be the two main countries producing them, I imagine. Be interesting to see how it plays out in the tariff wars, too: getting a robot from China versus a robot from the US.
Steve: And, and, Cameron: Well, Steve: yeah, I think that Trump, for other reasons, has tapped into something that’s gonna happen, which is deglobalization and reshoring and onshoring. All of my clients are talking about securing their supply chain, to nearshore and reshore, with the ability of AI and robotics: not just those that we’re making, but then those ones that we make can actually help production locally as well. Cameron: hmm. But China’s gonna be the dominant manufacturer [00:08:00] of humanoid robots, I imagine. So. Uh, Steve: It’s not because they’re more capable. I think it’s because America and other Western markets have systematically eroded their own supply chain of all the, I’m gonna call it bits and bobs, that went into cars and washing machines and all of that stuff, where they no longer have it. And it’s not that we don’t have the capability to design and make them work; it’s that we don’t have all of the small pieces that go into any form of machinery in our local markets like we did in the seventies and the eighties. Cameron: I think China is making a massive commitment to leading the world in AI and robotics, though, at a governmental level. They’re gonna throw everything behind it, and I don’t think the US is gonna be able to compete, quite frankly. But we’ll see how it plays out. Well, I haven’t bought a robot, Steve, but I did do my first experiment with ChatGPT’s [00:09:00] new agent that came out a few weeks ago. Now, at the end of last year, when we predicted what the big story for 2025 would be, we both said AI agents would be the big thing. Didn’t make us that original; everyone in the industry was predicting that. But I did get my ChatGPT agent up and running and did a project which I’ve tried to code over the last year or so, several times, unsuccessfully.
And this was a, uh, a project involved with my investing podcast, QAV, um, to basically go out and find a list of companies on the ASX, look up the investor relations page on their website, download their most recent annual report or half-yearly report, find the independent auditor's report in that document, and read it and see whether [00:10:00] or not the company has a qualified audit. Now, for people that aren't investors, a qualified audit means an audit where the auditor's gone, you know, this company has some problems. So we're qualifying, um, the, the green tick that we're giving the audit. And for us as investors, that's an issue. If the auditor's picked out that there are some serious fiscal concerns with the company, we wanna know about that before we invest in it. I've tried to code it, it hasn't worked. Too complicated. Got the agent to do it, and it seemed to work. Uh, and then I asked it to create a spreadsheet of the results: give me the, the name of the company, whether or not it has a qualified audit, and then a link to the most recent financial report, so I could check it. When I gave it a list of companies that I knew had a qualified audit, it gave them all a clean bill of health. Um, and then I went and double-checked it and found that a couple of them did not get a clean bill of health, and [00:11:00] I said to the agent, hey, how come you gave this one a clean bill of health? And it went, ah, yeah, sorry, in retrospect, rereading that, I shouldn't have done that. So that was completely useless. Uh, it at least managed to get 90% of the way there, just hallucinated on the important bit. So that was a fail. But anyway, uh, it's a good first step. You know, it was, uh, easy to get it up and running as an agent. It could go out, find the websites, get the report. At least the links to the reports were right. It just didn't do a very good job of reading and analyzing them.
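[Editor's aside: the final step of the workflow Cameron describes, flagging a possible qualified audit opinion in an auditor's report, can be sketched in a few lines. This is a hypothetical illustration only; the keyword list, function names, and sample text are assumptions, not what the ChatGPT agent actually ran, and real auditor's reports need far more careful parsing.]

```python
# Hypothetical sketch: flag a possible qualified audit opinion by
# keyword-matching the text of an independent auditor's report.
# The phrase list is illustrative, not exhaustive.

QUALIFIED_PHRASES = [
    "qualified opinion",
    "basis for qualified opinion",
    "except for the effects",
    "material uncertainty related to going concern",
]

def looks_qualified(report_text: str) -> bool:
    """Return True if the report contains common qualification language."""
    text = report_text.lower()
    return any(phrase in text for phrase in QUALIFIED_PHRASES)

def audit_summary(companies: dict) -> list:
    """Build rows like the spreadsheet Cameron asked the agent for:
    company name plus a qualified-audit flag."""
    return [
        {"company": name, "qualified": looks_qualified(text)}
        for name, text in companies.items()
    ]

if __name__ == "__main__":
    # Toy inputs standing in for downloaded report text.
    sample = {
        "CleanCo": "In our opinion, the report gives a true and fair view.",
        "RiskyCo": "Basis for Qualified Opinion: we could not verify inventory.",
    }
    for row in audit_summary(sample):
        print(row)
```

The point of the sketch is that the deterministic check itself is trivial; the hard part the agent fumbled was reliably locating and reading the right document, which is exactly where the hallucination crept in.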
But I think, you know, it's, it's, we're getting there. It's, it's a step in the right direction. The other thing I wanted to mention is, you mentioned to me a Kevin Kelly book, The Inevitable, from 2016, which I downloaded and started reading over the last couple of days. Um, well, yes, but hilarious. Now, Kevin Kelly, a guy [00:12:00] that we both admire, been fans of Kevin Kelly for 30 years, talked about him a lot on this show. My entire podcasting business model was based on stuff that he wrote about in the early two thousands. Um, and in this book, in the first couple of chapters, he's talking, he's predicting the future of AI and what it would possibly look like and how it would roll out. This is in 2016. Steve: Mm. Cameron: Nine years ago this book came out. Hilarious how outdated it already is, and this guy, Steve: it, but I remember it, Cameron: oh man. Steve: and I was thinking of the last chapter, but tell me, Cameron: I haven't, I haven't got to the last chapter yet. Well, I will at the end of the show, or when we get to, um, Technology Time Warp. He talks a lot about IBM's Watson, and, and I was like, oh my God, whatever happened to Watson? Haven't heard of Watson for years. So [00:13:00] when we get to the, uh, Technology Time Warp, I'll talk about IBM Watson and what happened to them. But, you know, he talks about AI and sort of talks about it happening 20 years on from when he wrote it in 2016, you know, sort of thinking of a 2035, 2040 timeline, uh, and what it might look like. And it was built around the idea of, you know, what we call GOFAI now, good old-fashioned AI, the symbolic AI approach that they took with IBM Watson, which has now become really, uh, clunky and out of date. However, I think it's gonna make a comeback too, as a hybrid between LLMs and GOFAI, which I'll talk about later on in the, in the show.
Steve: Just on that point, and in relation to your use of the agent, a really clever friend of mine, Nick Hodges, who I might have mentioned on the Cameron: Hey, I know Nick, Steve: Super smart. Cameron: A friend of mine, old Microsoft [00:14:00] guy. Steve: Uh, was he at Microsoft? Uh, this Nick wasn't, but maybe he was. Cameron: Oh, a different Nick Hodges. Okay. Could be a different Nick Hodges. Steve: All the Nicks out there. Uh, a friend of mine, David Brown, started a Twitter group with everyone called David Brown. Cameron: Oh, David Brown. No. Steve: Yeah, dunno which one. Anyway, Nick wrote a post a few weeks ago saying the challenge, because of the probabilistic nature of connectionist AI, which is the opposite of symbolic AI, is what he calls the takeoff and landing problem. And so when you're at cruising altitude and you're working on something, the AI is amazing. But getting it to start, it obviously can't start itself, it needs a lot of direction and nurturing; and also bringing the project in to completion, to finish off those rounded edges, needs the human, uh, attachment as well. And it's a really great analogy, the takeoff and landing problem. And I do [00:15:00] wonder if that's at all solvable. And I'm starting to get suspicious that it won't be. The nature of the model's probability means that how you start and end something is really, really different to the middle of a project. And so it can't quite get that learning, and the guesses that it needs to take at those edges is why you might need symbolic code, and get that hybrid model to come, uh, in and around it to, to make it work. And I've found that same problem when I've been doing some vibe coding with some AI tools for clients, where it hasn't really worked in that way. So, uh, that is, uh, I agree with you. We'll get to that. Cameron: Vibe coding. Steve: I like the word, I just want to use it as much as possible. Cameron: People call it vibe coding. For me, it's just coding, you know?
Just, you know, it's coding with AI, right? Anyway. Steve: Like vibrating and vibe listening to music? Cameron: [00:16:00] Yeah, right. It's ridiculous. Um, alright, so let's get into some of the big news stories, Steve. Um, I think ChatGPT agents. So they, they launched that on the 18th of July, sort of two and a bit weeks ago. Probably the biggest story. ChatGPT-5 is due out this month. Sam did a sneaky little Twitter post, or X post, today with a screenshot of him having a conversation in GPT-5. Uh, but, uh, Agent has been the biggest release that they've come out with since our last show. Have you played around with the agent much? Had any success? Tell me about your agent experience. Steve: I, I found the takeoff and landing really hard. I've asked it to do a few things for me, and it's, it's been so much effort to get it off the ground and starting to do the thing that I wanted it to do and setting the parameters. I've found that it's not too dissimilar from giving detailed briefing and instructions inside the traditional prompt framework so [00:17:00] far. Cameron: Let me read from OpenAI's blog post from the 18th of July. ChatGPT now thinks and acts, proactively choosing from a toolbox of agentic skills to complete tasks for you using its own computer. ChatGPT can now do work for you using its own computer, handling complex tasks from start to finish. You can now ask ChatGPT to handle requests like "look at my calendar and brief me on upcoming client meetings based on recent news," "plan and buy ingredients to make Japanese breakfast for four," and "analyze three competitors and create a slide deck." ChatGPT will intelligently navigate websites, filter results, prompt you to log in securely when needed, run code, conduct analysis, and even deliver editable slideshows and spreadsheets that summarize its findings. At the core of this new capability is a [00:18:00] unified agentic system.
It brings together three strengths of earlier breakthroughs: Operator's ability to interact with websites, Deep Research's skill in synthesizing information, and ChatGPT's intelligence and conversational fluency. ChatGPT carries out these tasks using its own virtual computer, fluidly shifting between reasoning and action to handle complex workflows from start to finish, based on your instructions. Nice, in theory. Um, I've only done the one experiment, which was not entirely successful. I know my boys Hunter and Taylor have been playing with it quite a bit. Taylor upgraded to a Plus subscription. No, a Pro subscription, the couple-of-hundred-dollars-a-month one, so he could really run it through, 'cause you only get a certain amount of credits on the, uh, normal subscription. And they also found it useful in ways, but also flaky in ways. And I think they've, uh, terminated their [00:19:00] experiments with it. Uh, I haven't seen a lot of people excited about what it can do in the subreddits and online. Like, you can get it to log in and research people on LinkedIn and get their email addresses and send them emails, log into your emails and that kind of stuff. But, again, it's a step in the direction. And later on in the show I'll take you through a scenario that I wrote over the weekend about what I think the process of buying a car might look like a couple of years from now, which is based largely around agents doing a lot of the work for you. So I think this is a step towards that future, but it has some ways to go before it's really, uh, that useful as a tool. Steve: I think that the OpenAI quote that you read out before is not too dissimilar from what we already have just via the prompting process and search capability. I agree that the ability to write code, the ability to surf websites and do all of [00:20:00] those things is there, but if you just went in and said, book me a holiday?
I don't think it has enough access to what your preferences are, or the memory, or your internal files, to actually be able to come back with something that you would sign off on. So, so Cameron: Well, it, it asks you questions first. Steve: Right. And it can do that in the non-agent mode already. It already makes suggestions after a prompt; it sort of says, do you want me to put this into a PowerPoint that you can work on? It does all of those things now. So I feel like, again, the takeoff and landing isn't there. Cameron: Yeah, like now it can in theory log into websites for you, like log into your calendar, log into your email, you know, create things, do things, which it wasn't very good at doing before. But, uh, yeah, you know, like I think this is, um, a hint at where we're gonna be a couple of years from now, but, uh, a lot of work to be done to make it [00:21:00] reliable. I mean, with the stuff I was doing, it's completely useless if the answers it's giving back to me are hallucinated. So, uh, and, and I think that's true with a lot of this stuff now. And while it's still full of holes and needs human checking, it's, um, kind of pointless, um, Steve: Yeah. Cameron: compared to just getting humans to do it. Steve: Well, and often with some things, if a human needs to check it, that's the same as doing it. Not with administrative work in a corporate setting, but in a technical setting, which I've been working on with clients, using AI to develop some stuff like specifications, where you can't have an error in there. 95% is the same as zero, 'cause if you've gotta check it all, a hallucination could end up with a bad calculation, and this is where symbolic code's really important. You can't have errors. Cameron: Steve, I saw you did a blog post recently about, uh, whether or not AI's gonna take all of our jobs. And you said you weren't that [00:22:00] worried about it. You're more worried about AI just taking over humanity as the dominant species.
But Microsoft released a study about a week ago called "Working with AI: Measuring the Occupational Implications of Generative AI". I also saw in the Financial Review this morning the federal treasurer, Jim Chalmers, had a post, an article, talking about all of the great stuff the federal government's doing with AI and how he doesn't think it's a problem, uh, for jobs. We're gonna take the middle path, like we have any control over what path we take. Steve: Hmm. Cameron: But, uh, Microsoft has come up with this list of the 40 jobs most at risk of being replaced by AI and the 40 jobs least at risk of being replaced by AI. The top 40 occupations with the highest AI applicability score, most at risk, sorted alphabetically: advertising sales agents, Steve: Yep. Cameron: broadcast announcers and radio [00:23:00] DJs. Glad they left podcasters outta that, I'm safe, hey. Brokerage clerks, business teachers post-secondary, CNC tool programmers, concierges, counter and rental clerks. Steve: Uh, Cameron: Customer service. Steve: A concierge, kind of at a hotel, a human greeting someone, doing whatever. I totally disagree with that, but Cameron: Yeah, customer service representatives, data scientists, demonstrators and product promoters, economics teachers post-secondary, Steve: I thought you meant the demonstrators on the street waving flags and, Cameron: like the hundred thousand in Sydney on the weekend, yeah. Editors, farm and home management educators, geographers, historians, hosts and hostesses, interpreters and translators, library science [00:24:00] teachers post-secondary, management analysts, market research analysts, mathematicians. Just straight-up mathematicians, you're all outta work. Models, because we can just create AI models now, Steve: There was Cameron: new accounts clerks, Steve: one that was hot. The one at Wimbledon, wasn't she? Cameron: I dunno what you're talking about.
Steve: A model went viral 'cause it wasn't a real model, and she was cruising around Wimbledon. Cameron: News analysts, reporters, journalists, passenger attendants, personal financial advisors, political scientists, proofreaders and copy markers, public relations specialists, public safety telecommunicators, sales representatives of services, statistical assistants, switchboard operators, technical writers, telemarketers, telephone operators, ticket agents and travel clerks, web developers, and writers and [00:25:00] authors. Steve: Did you read out all of them? Cameron: That's the top 40. Steve: Okay, so can I just cut straight to it? Cameron: You can. Steve: Yes, the top 40. Most of those are at risk, but I think they were at risk before we had the generative AI boom. Most of the things on there, software could already do in most capacities, I think, a large majority of those. Google Translate's been killer for a really long time. Uh, editing, yeah, it's a bit better now, you can throw it into GPT and get a better version and prompt it, but I think a lot of those were already at risk, and I don't think a lot of those are driven by generative AI. Cameron: Right. Okay. Steve: I feel like the AI label has been attached to a general movement which was already well underway, [00:26:00] pre generative AI. Cameron: I'm gonna point out what I think are some of the gaps here. Like, they mention specific teachers, like economics teachers, library science teachers, et cetera, et cetera. I think just teachers in general are at risk, you know. I've written a couple of posts about this recently. I think when kids have the greatest teacher of every subject ever on their phone or their laptop or their iPad, it's not going to get rid of schools as such. You still need a place to send your kid, and they'll still need adult supervision.
But I'm not sure that teachers, as we think of them, what role they're really gonna play when the AI is a better teacher than the human. More patient, understands the child better, has access to the kid's emails, chat messages, listens to all of its conversations with its friends, understands its [00:27:00] neurodiversity and its preferred learning modalities, knows what movies it's watching, music it's listening to, books it's reading, podcasts it's listening to, in a way that a human teacher could never hope to do. It's gonna be able to customize teaching to every kid's requirements. Um, I just dunno how humans are gonna compete. But anyway. Oh, and we've got a guest coming on the show in a few weeks. Um, an old friend of mine, Nick, who's the principal of a large private school here in Queensland, and a very progressive guy, very tech-savvy. Um, science-savvy guy. He's gonna come on and we're gonna talk about it from his view as the principal of a private school, um, where he thinks AI is gonna lead. But anywho, uh, what do you think about the list of occupations with the lowest AI applicability score? Automotive glass installers and repairers, bridge and lock tenders, cement [00:28:00] masons and concrete finishers, dishwashers. I mean, to me, these are all easy robot replacements, right? Maybe not AI software, but robots with AI. Floor sanders and finishers, robot. Steve: Robot. Cameron: Mold and core makers, robot. Gas pumping station operators, robot. Painters, plasterers, production workers, roofers, robots. Industrial truck and tractor operators. Yeah. Yeah, right. Steve: Everyone's saying that, including Geoffrey Hinton, who said go out and be a plumber in a recent interview.
I wrote a post about why the Godfather of AI was wrong, and I'm of course right, Cameron. I was correct and he's wrong, because I think everyone is missing the embodiment moment, when AI gets an embodiment where the visual, verbal, contextual learning can be taught to something that has a physicality to it. All of those tasks that everyone says, oh well, because, you know, it's a one-off and it's nuanced and every task changes, [00:29:00] I think they're all missing the robotic part of this. And I think they're overestimating the jobs where we want humans to do things with us, where they involve communication and human nuance, we actually want it. And my view is that the most important jobs are gonna be the ones we pay for because a human is doing it, not because an AI can't. And I think with Cameron: Yeah. Steve: tasks where you're not really interacting with someone, you're interacting with stuff, you're a linesperson or a plumber or whatever, no one even sees or cares what you're doing. They just want it done. Honestly, I think the blue-collar trade and technical stuff is gonna go humanoid far quicker than a lot of the office stuff. And, and there's another reason why: people like building empires inside corporations and having people under their control. That's a big part of what happens in corporations. People like shaking hands and being in charge and impressing other humans, and I'm convinced that [00:30:00] 80% of what happens in large corporations now is already unnecessary and we don't need it. It's all bullshit and layers of bullshit, of people talking to each other in presentations and meetings about meetings, and none of that's based on efficiency or requirements. That's all just based on power and control and subservience. So that'll continue. Whereas a lot of the technical tasks will be outsourced to humanoid robots, which are getting very close to that capability. They already understand it technically, verbally, visually.
All you need to do is put that into a humanoid that has a large battery life and dexterity, and we're very close to that. Cameron: Here's my, my, my pushback on the corporate bureaucracy kind of thing: shareholders. I'm talking about your, your large, you know, activist shareholder groups. They're gonna be saying to publicly listed corporations, uh, why aren't you using AI more to [00:31:00] reduce your workforce? We're already getting stories. Atlassian just laid off a bunch of people. Microsoft, Amazon, Google, they're all laying off thousands and thousands of employees and upfront saying, we're replacing 'em with AI. It's happening across the board, it's happening in Australia as well as overseas already. There was an article about McKinsey I read this morning in the Wall Street Journal saying that, um, AI is an existential risk to them and all consultants; they're laying off people as well and replacing them with AI agents. Uh, shareholders will be saying to boards and executive teams, why aren't you laying off more people and replacing them with AI? So these little fiefdoms inside corporations aren't gonna be safe from shareholders. Like, why aren't you replacing half of your employees with AI is an easy thing for shareholders to say. Why aren't you [00:32:00] replacing them with a custom-built software system? Not so easy to say as an activist shareholder, because, you know, you don't necessarily understand what's available and how hard it is to build a customized software system, et cetera, et cetera. But why aren't you replacing these people with AI and saving us, you know, a gajillion dollars in costs, and putting it through to the bottom line and dividends or capital reimbursement, et cetera, is an easy thing for shareholders to push on boards. And, and it's gonna be very difficult, I think, increasingly, for boards to, uh, you know, avoid those sorts of conversations. The shareholders don't give a crap about the bureaucracies.
They want the money. Steve: Of course they don't. But it's easier to say than it is to do. I think a lot of the idea that you can just remove people and just [00:33:00] put in an AI depends on the edge cases being solvable, the same way that agentic AI needs to do stuff self-directed right from the start. Unless that can happen, I don't think it will. If, if agentic AI gets to the point where it does what it's meant to do, it interprets the requirements before the project has started and can do it, then that can happen. Until that agentic AI is really working, we won't see the mass replacement of full-time employees. Cameron: But I, I give that a couple... yeah, I agree with you. But I give that a couple of years, you know, a couple of years from where we are now, based on the progress. And Steve: It might not happen. That 5% might never be solved. I'm wondering if Cameron: It might not, yeah. Steve: it's a little bit like the airplane, where apparently, Cameron, you should be able to fly to London in one hour in a supersonic plane, and then we just got to a point where there was a, a maxed-out [00:34:00] limit of what was economic, and we'll end up Cameron: What people are willing to pay for. Yeah. Steve: Also, Cameron: I see. Steve: there's a chance that this is a productivity unlock. So one of the thoughts that I have is, maybe it's better to have the fiefdoms where we finally get some productivity enhancements, which the Western world hasn't done very well in the last couple of decades. Australia's got very low, uh, productivity per person. If all of a sudden you can get through three weeks of work in one week, or whatever the ratio happens to be, that can enhance employability, because the productivity in getting things done might be a lot quicker.
If people are more productive and there's more output for the company, then it almost becomes an accelerator of having more people who can control an AI that becomes like their staff, doing a number of things, and they're the orchestrators of AIs doing things, with the onboarding and offboarding, the takeoff and landing element. [00:35:00] So there's a chance that that happens. But the technology implementation in corporations is so slow. I'm working with these guys every week and they're still talking about what they can do and why and where. I always harken back to the idea that we could have been doing a million things with video that we didn't do until COVID. The classic example is seeing the doctor; there was a 10-year lag between capability and implementation. Cameron: Yeah, no, you make good points. There are economic and bureaucratic hurdles always in the way of rolling out this sort of stuff. People get in the way. Well, uh, that's a couple of news stories for the week. Steve, do you wanna move into the deep dive and time warp? Steve: Let's do it. Cameron: So, um, I'm in the process of launching my AI consulting business, Intelletto, [00:36:00] which is Italian for intellect. And part of what I've been doing over the last couple of weeks is thinking through, I mean, the sort of questions that I wanna be asking clients. And, you know, I think the biggest question in my mind for most organizations, whether they're businesses or, or government organizations or any other kind of organization, is: what does the world look like a few years from now, when your customers, your employees, your suppliers, your partners, your competitors have unlimited intelligence available at their fingertips? And particularly, starting with customers, what does it look like when your customer knows as much about your product and service as you do, and can see through all of the sales and marketing and PR bullshit [00:37:00] immediately?
So I've, I've been exploring some scenarios and, and writing some stuff about it, uh, for the website. One of the ones that I was working on over the weekend was, what does buying a car look like in that world? I was thinking about big-ticket items. So what led to this? I was at the supermarket on the weekend, I'm talking about grocery shopping at Coles, and I was looking at some Nescafé instant espresso, liquid-in-a-bottle thing, that was on sale. And so I pulled out GPT and I said, look, have a look at this thing, got the camera on it, right, have a look at this. Um, is this more cost-effective than buying a bag of beans and doing my own grinding, and blah, blah, blah? And it talked through the cost-effectiveness of it with me. And then I said, look up the reviews for this Nescafé product. And it said, yeah, the reviews are pretty shit, basically. It says it doesn't taste like real coffee, it's weak, it's, you know, not as flavorful, et cetera, et cetera. [00:38:00] Cheaper, faster, but not as good. Um, then I was gonna buy a bag of peanuts, 'cause I'd been making my own peanut butter for a while, and it was 20 bucks a kilo at Coles for peanuts. Just peanuts in a bag, versus buying pre-made peanut butter. So again, I was like, okay, a jar of peanut butter, 750 grams, is like 10 bucks; buying raw peanuts is 20 bucks a kilo. Is there any justification for making my own peanut butter? GPT's taking me through the cost-benefit analysis and said, no, that's ridiculous. That's a ridiculous price. And it said, in fact, don't go to Coles, don't buy your peanut butter at Coles. Go to Aldi. It's like a fraction of the price at Aldi, right? So it's talking me through it, and Steve: Because legally peanut Cameron: So Steve: butter has to be mostly peanuts. Otherwise it's not peanut butter. Cameron: Is that right? You just pulled that outta your ass, did you? Steve: Kraft peanut Cameron: That's right. Steve: and Cameron: 30 years ago.
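[Editor's aside: the verdict GPT gave Cameron is simple unit-price arithmetic, worth making explicit. The prices below are the ones quoted in the conversation; the assumption that peanuts convert to peanut butter at roughly 1:1 by weight is an illustrative simplification.]

```python
# Unit-price comparison: pre-made peanut butter vs. grinding your own.
# Prices as quoted on the show; 1:1 peanut-to-butter yield is an assumption.

JAR_PRICE = 10.00           # dollars for a 750 g jar of peanut butter
JAR_GRAMS = 750
RAW_PEANUTS_PER_KG = 20.00  # dollars per kg of raw peanuts at Coles

def price_per_kg(price: float, grams: float) -> float:
    """Convert a price for a given weight into dollars per kilogram."""
    return price / grams * 1000

jar_per_kg = price_per_kg(JAR_PRICE, JAR_GRAMS)
print(f"Pre-made: ${jar_per_kg:.2f}/kg vs DIY peanuts: ${RAW_PEANUTS_PER_KG:.2f}/kg")
```

On those figures the raw ingredient alone costs around 50% more per kilo than the finished product, before counting time and electricity, which is exactly why GPT called the DIY option ridiculous.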
[00:39:00] So, and you know, I, I just had it on in the, in the shopping aisle. I'm asking it about all these products that I'm looking at. What about this? What do you think about that brand versus this brand? It's talking me through my grocery list. But then I thought, Steve: Video idea. You get Taylor to film that. You doing that as your launch, and you go Cameron: Yeah, Steve: shopping with an AI agent. I'm, I'm, I'm telling you now, that is a million views, and I know a thing or two about that. Cameron: Taylor's in, Taylor lives in LA now, but yeah, yeah. No, I am gonna do that, Steve: you Cameron: but Steve: It'd be huge, 'cause if you don't, someone'll steal it. Cameron: I'll do it. Um. Steve: Good. Cameron: But then I was thinking about big-ticket items, like buying a car, buying a house. How does AI play into that? So here's the scenario. I'll, I'll just talk you through it, I won't read the whole thing. So imagine, let's say you've been thinking about buying a new car. Let's say it's two, [00:40:00] three, four years in the future. You put on your AI-enabled glasses one Saturday morning and you say, hey, I've decided to press the button on the new car. It already knows you, it knows what you've been talking about. It says, okay, I'll put together a shortlist for you. And it comes up with, based on your budget, based on your requirements, the size of your family, et cetera... It says, look, I found a, a new one at a dealership that I can book an appointment for you to go and have a test drive today. I've also found a secondhand vehicle that looks pretty good. We can go take a look at that. You go, yeah, set, set up the meetings. So it contacts the dealership, it contacts the private seller, and it sets up appointments for you, slots them into your diary.
You go to the dealership and you sit down with the salesman after you take a test drive, and you say to him, hey, listen, um, before we get started, I just wanna let you know that my AI assistant is gonna be sitting in on this. Are you okay with that? And he's like, okay, [00:41:00] I guess. And you say, can you just, can you verbally... he just nods, and you go, can you verbally consent to my AI sitting in on this call? Which he does. It's in your glasses. And so it's listening to all of the claims he's making about the car, the pricing, the financing, the whole deal. And it's running as a, as a live bullshit detector for everything that he's saying. You could have it talking on the desk, but that might be a little bit confronting, so it's in your ears. You've got your, like, Meta-style glasses. It's, it's coming up on the screen, it's talking in your ears. It's going, no, no, that's bullshit, don't listen to that. Ask him for more details on that, because that doesn't check out. He's trying to jack up different value-added things, and you go, no, no, no, I, I can see another car at another dealership five minutes away where I can go get it without all that kind of bullshit, and I, I could just walk. So it's acting as a real-time bullshit filter on the car salesman, verifying all of his [00:42:00] claims, not letting him get away with any nonsense. Then you go to the private seller. You have your glasses on. When you're inspecting the car, it's looking for any signs of damage. It's looking at the engine. It's already checking the VIN, it's checking the history of the car. You pull out the service book. It says, uh, ask, ask the seller if it's okay for me to contact his service centre to verify the service records. You ask him, he says, yeah, sure, that's fine. Your AI contacts the AI customer service front end at the car service centre and says, uh, my owner is looking at buying this car, and the service records
indicate you've been servicing it for the last couple of years. Would I be able to confirm the service history with you? The AI on the service centre's end says, well, due to privacy concerns, I'll need to get approval from the owner of the car. Uh, tell him [00:43:00] to check his phone, I'm about to send him a message. You tell the guy, the message pops up: will you approve that I can reveal this information? He clicks yes, and it confirms the service record of the car. Also, they say, listen, um, we're willing to back up this car, so if there's anything that isn't, uh, uh, contained within the service record that we're about to send to you, we'll cover it free of charge for the next six months, or something like that. You renegotiate the price with him based on some stuff that you found in the service record. It does up a new contract for you, sends it to the guy. It's already in the pre-negotiation stage with eight finance companies to get you the best deal. Once you've decided to go ahead, it locks in the best deal with the best finance company. It registers the vehicle, it changes the details with your insurance company, and it's done and dusted, and it's all AI-driven. So this, this idea of having an AI [00:44:00] assistant with you when you're doing these big-ticket item purchases, I think, is gonna revolutionize that side of buying and selling big-ticket items, uh, within a few years. Steve: A brilliant synopsis, Cam. What I'm hopeful for is that this is something that can be augmented by someone who's been in car sales for a lot of years, or in purchasing, who can work with the GPT to write the code and the software and direct the pathways, like an architecture of what that could look like, and be able to automate that process, and potentially have a whole, let's call it, GPT economy, which hasn't quite spawned yet, even though OpenAI's tried to do it, where there are new forms of AIs that can do that.
And I want your view on this: is this something that your personal AI just does intuitively, [00:45:00] because it's a multimodal AI that gets you, gets your situation and knows what to do? And we actually have a general AI that just does everything, and you don't need those specific ones anymore. Cameron: Yeah, well, when I was writing this article, I said to GPT, what are the five most common ways that car dealers rip people off, stitch people up in the process? And it gave me a list. Steve: That is the first thing. When you see the guy in a brown sort of plaid suit with a mustache, like the movie The Big Steal, for any old Australian listeners. You come in there and there's a different motor in the car than the one he sells you, and then when he delivers it, it's all changed up. Cameron: So I think, I think the AIs will be smart enough to be able to know, because, you know, it's reading everything that's published out there about these sorts of things. So it can give you the things to look out for, and it can be looking out for them for you. There may be opportunities for customized symbolic [00:46:00] rules in there, but I think LLMs are gonna be able to do a lot of that just straight outta the box, you know? Steve: I think you'll have your personal AI, which can help you with looking for a house, a car, a university to study at, cooking, all of those things. But this trajectory is one that we've been on for a long time, Cam. A friend of mine was a car sales guy about 20 years ago, when carsales.com.au and the equivalents around the world arrived, where you could see cars online and research. And he said that it got to a point where he used to know more about all of the cars on the yard, but a buyer would study that one specific car, every detail on it, and there was no way that he could possibly know more than that consumer about that car. And they learnt all of the tricks. Uh, so they were doing a human version of what you've just described.
But this would put it on steroids, if you had an AI. Cameron: Yeah. Steve: It would just be your research, which is still better than the person selling you the [00:47:00] car, because you, you've really drilled down onto your needs, and they have to be across the whole car yard. And I think it does remove some of that complexity and chicanery of buying things that you don't buy all that often. Cameron: All this stuff I see, you know, when I'm reading AI posts, and there's these AI conferences happening everywhere, and they're talking about how AI is gonna be so great for people in sales and marketing, 'cause you're gonna be able to get all of this intel on your customers, and you're gonna be able to segment your markets, and you're gonna be able to do all this kind of stuff on a new level. And I'm, I'm calling bullshit on all of that. I think AI is gonna cut through all of the bullshit of sales and marketing, and it was interesting to see that Microsoft had put down sales and marketing people as among the jobs most at risk from AI, because the customers are just gonna be able to see through all of the sales and marketing bullshit, and the agents are gonna be able to research everything. Like, imagine even buying white goods. You need a new [00:48:00] fridge, you need a new dishwasher, you need a new, you know, clothes dryer. Just to be able to say to your AI, you know, based on my family and what we need, tell me which one I should get, and find me the best price, and get it delivered for me. Steve: But there is a delineation, Cam. Yes, AI will be able to filter through the bullshit that comes from marketing and sales guys. But there's a two-speed economy. One of those is with rational purchases.
And when it comes to rationality, I think the AI will be able to do that. But often, and often in the more profitable areas, they're emotional purchases. We're actually looking for a reason from a human to justify spending an inordinate amount of money on a premium car that isn't as Cameron: That's when your AI will step in and go, hey, don't do that, you can't afford it. Steve: But you don't care. Like, you've already got your brother and your sister and your dad telling you you can't afford it, you don't need it. And yet we buy things we [00:49:00] don't need, because we are irrational beings. Emotional purchases. Emotional purchases, AI isn't gonna solve that problem. You're actually looking for a reason to justify the decision. Cameron: Why, why, Steve: reasons. Cameron: It's why people invest in Bitcoin. Steve: Right? I make a lot of irrational Cameron: No rational reason to buy Bitcoin. Steve: There's no rational reason to buy the vast majority of the things that we do, but we're irrational beings. Cameron: Yeah, Steve: Why a human is doing something is gonna be more important than what the human is doing. And, and often, I think, we will want humans to do things, and the important thing is that a human is doing it, even if it could be done by a machine. Cameron: Yeah. I mean, I can see an element of that playing out. I'm not sure how much of the marketplace is gonna care that much, but we'll see. Steve: In [00:50:00] certain areas. I would love Cameron: I. Steve: to know what percentage of purchases. And even in supermarkets it happens too, where you think it's a highly rational place, but there's a lot of emotional purchases that happen in a supermarket, where you buy premium goods and pay more for ice cream than you otherwise would. Does it really taste better? I don't know. Some foods are better and more premium, but a lot of things aren't.
And so I'd like to know what percentage of the economy is emotional purchases and what are rational, and that would be different with different people. One thing Cameron: So today, when I'm at, sorry, go. Steve: I was gonna say the one Cameron: No, I thought you were finished. Steve: thing we can explore next time is the idea of what I'm calling the robot economy. Like, what will we spend on, because robots are there, and do robots need certain things to serve them? And, and I'm thinking about humanoid robots. What, what does that build? And the thing that I'm hearkening back to is that there was really no nighttime economy before the early 1900s, when [00:51:00] electricity became commonplace. There were only local communities where you'd go to a, an inn or a pub, or, and there wasn't much of a Cameron: Right, Steve: It was really hard to get around and get to places, and places to be warm and have electricity. And that's an extraordinary part of our economy today. And, and I really am Cameron: I love you. Steve: curious about that element. I wanna explore that idea of what the robot economy looks like, what things arrive in support of that new ecosystem. Cameron: Yeah. Okay, well, let's Steve: Do that Cameron: schedule that for next time. Yeah. Um, I was gonna say, back to the supermarket story. Like, when I go to the supermarket now, I have my phone out, or I'm just talking to it on my AirPods: hey, I'm looking at this, I'm looking at that. I'm thinking about a world where we are running AI-enhanced glasses, uh, with a camera, so it's seeing everything that I'm looking at. So [00:52:00] you're talking about buying the premium ice cream. Steve: Yes. Cameron: It'll be seeing what you are picking up, and it'll be going, hmm, yeah, you want my advice on that? Don't get that one. It's overpriced, the reviews are shit. Uh, get the other one that's, uh, a door down. It's just as good, half the price, less sugar. You know, it'll be, it'll be talking in your ear.
Some, some people won't care. Some people will. But increasingly, I think, people are going to be using their AI to save them money, particularly when they're losing their job to AI. They're gonna be, Steve: One of the Cameron: they're gonna. Steve: Let's go to taste. Much of the taste is the perception you have in your mind while you are consuming it. Because you paid more, it means more to you. This is commonplace with champagne and wine and ice cream and many other products, where even in Cameron: Cigars. Steve: those scenarios, you will convince yourself that it is better because you paid more. And so then we open up two [00:53:00] other ideas on this supermarket: private and public consumption. Some of the products we know we're getting ripped off on, but we want to serve to others, or have others see us wearing. And brands are a classic example. I'm wearing the, the branded jacket, or what have you, or I'm consuming it because it's a display of success and my position in society and which cohort I move with. I'm a surfer, I'm a skateboarder, I'm a whatever. So that still exists as well. So you're gonna have emotional consumption, rational consumption, then you're going to have private consumption, public consumption. But even in some private consumption, you're competing with your own mind, which wants you to believe that spending more is deserved, because you work hard and you're looking for those moments of joy and little dopamine hits. So I think that the rational AI helping us will be there and will be an element, but I think this is far more complex than we think. Cameron: You make a lot of great points, but getting back [00:54:00] to the questions that organizations need to be asking themselves right now: what kind of impact is this gonna have on my business two, three years from now, when my customers, employees, partners, suppliers, competitors have access to unlimited intelligence? Steve: Right.
And so they'll need to ask themselves, where do we strip out the bullshit? 'Cause that game is up. And where do we lean Cameron: Yeah. Steve: into the emotional and irrational side of the consumption pattern? Because that's something that Cameron: Yeah. Steve: a perfect, all-knowing AI isn't necessarily gonna sway a decision on. So then you get a new decision template within corporations, to understand where they sit in private and public consumption, irrational and rational, in the person's mind and the other. So you get a whole new consumer dynamic. Cameron: Well, with that, let me, uh, get to the last segment of the show, Steve, I wanted to talk about, which is IBM and Watson, speaking about emotional [00:55:00] decisions. Um, and I thought I'd do something different with this. I'm going to go into NPR mode. Um, I've written this as a narrative rather than just a ramble, so, um, I'm gonna put on my NPR voice, Steve Steve: Love it. Cameron: uh, or whatever the Australian equivalent is. PBS, maybe. Hello, boys and girls. Let's go back to the mid-nineties. You are sitting in front of a bulky CRT monitor. The internet is new. The future feels digital, but not quite real yet. And then this headline hits: IBM Supercomputer Defeats World Chess Champion. That champion: Garry Kasparov, one of the most brilliant minds of his generation. The machine: Deep Blue, a hulking IBM computer that could evaluate 200 million chess [00:56:00] positions per second. In 1997, it became the first machine to beat a reigning world champion in a full match. Now, I remember this clearly. People always said computers will never beat a human at chess, and then one did. Not just a human, not just a grandmaster, but Kasparov, at the time thought to be perhaps the greatest chess player who ever lived. It felt like a turning point. After his loss, Kasparov didn't just walk away quietly. He was furious and suspicious.
He claimed that some of Deep Blue's moves, especially in game two, were too creative and too human to be the result of brute-force calculation alone. He suspected that IBM's team may have had human grandmasters feeding moves to the machine during the match, violating the agreed-upon rules. He said it was an incredible and extremely deep combination [00:57:00] that no machine should be able to see. He demanded access to Deep Blue's logs and inner workings, and IBM refused. Then, not long after the match, IBM dismantled Deep Blue and never allowed a rematch. That only fueled the conspiracy theory. Kasparov famously said, I lost to a machine, but not to a computer. He believed he'd lost to a team of humans hiding behind the machine, not the machine itself. However, in 2017, he wrote in his book, Deep Thinking: I was fighting the last war. Deep Blue was not intelligent, but it was fast, accurate, and didn't get tired or scared. The age of machine intelligence was dawning, or so we thought. Because what Deep Blue actually represented [00:58:00] wasn't intelligence. As Kasparov later said, Deep Blue was intelligent the way your alarm clock is intelligent. This was symbolic AI: logic, rules, and raw computational force, crafted by teams of grandmasters and engineers. Deep Blue didn't understand chess. It just calculated faster than any human could. And yet, in the eyes of the public, it was the birth of thinking machines. Fast forward to 2011. IBM did it again. This time the battlefield wasn't a chessboard. It was a television quiz show. Jeopardy is an American game show that's been on the air since the 1960s. It's famous and weird. Contestants are given answers and must respond in the form of a question. The host might say: this US state's name is derived from a Native American word meaning Great River. The contestant would have [00:59:00] to answer: what is Mississippi? But it's more than trivia. Jeopardy tests pun recognition, obscure references, wordplay, and buzzer speed.
It's fast, it's human. And the champions, like Ken Jennings, aren't just smart. They're quick-witted and fluent in nuance. So when IBM introduced Watson, a computer designed to beat them at Jeopardy, it wasn't just another AI stunt. It was a public demonstration that machines could now process language, context, jokes, and ambiguity. And in 2011, Watson did just that. It destroyed its human opponents. For IBM, this was the sequel to Deep Blue. Only this time the stakes weren't chess. They were everything. After Jeopardy, IBM promised that Watson wasn't just a game show novelty. It was the future of work, medicine, and decision making. They rolled out glossy ads and corporate demos. [01:00:00] Doctors would use Watson to treat cancer. Lawyers would use it to scan cases. Customer service would be handled by intelligent chatbots. Sound familiar? This wasn't just AI. This was practical AI, applied real-world intelligent assistance for professionals. But behind the scenes, Watson wasn't magic. Watson was a mix of technologies. It used natural language processing to parse questions, large databases of structured and unstructured information, confidence scoring to choose the best answer, and a lot of human-tuned logic to make it all come together. Every new domain Watson entered required teams of engineers to manually train it. You couldn't just install Watson at a hospital. You hired IBM to build you a custom Watson, like commissioning bespoke software from scratch. This wasn't scalable. It wasn't plug and [01:01:00] play. It was more of a consulting gig than a product. Still, IBM went big on one domain in particular: healthcare. IBM partnered with top hospitals like Memorial Sloan Kettering to build Watson for Oncology. The pitch: Watson would read every cancer study ever written and help doctors select the best treatment plans, faster and more accurately than a human could. It sounded revolutionary. But leaked internal documents painted a darker picture.
Watson was mostly parroting suggestions from a narrow team of doctors. It wasn't actually learning from new data, and in some cases it gave dangerously bad advice. The hospitals quietly pulled back. The media stopped covering it. By 2022, Watson Health was shut down and sold off. IBM never delivered on the promise, and by the time they tried to pivot, it was too late. While IBM was busy branding [01:02:00] everything Watson, Watson Assistant, Watson Analytics, Watson Ads, the real AI revolution was happening elsewhere: deep learning, neural networks, and then, in 2017, the invention of the transformer model, developed at Google, not IBM. By the time GPT-3 dropped in 2020, Watson already looked obsolete. By the time ChatGPT hit the scene in late 2022, Watson wasn't even in the conversation. IBM's AI was still tied to old-school logic, symbolic AI, while OpenAI, Google, and Anthropic were shipping black-box language models that could write essays, generate code, pass legal exams, simulate conversations, and scale across domains instantly. Watson had a brand, but the future had moved on. Symbolic AI, or GOFAI, good old-fashioned [01:03:00] AI, is how we used to think machines would reason. You hand them rules, definitions, logical structures. You want an AI to know what a cat is? You define it. If X has fur, purrs, and chases mice, X is a cat. It worked in narrow domains: medical expert systems, legal logic, chess. But symbolic AI is brittle. It can't handle uncertainty, ambiguity, or messy inputs, and it doesn't learn. It's frozen in the rules you give it. That's what Watson was: an advanced symbolic system with some machine learning bolted on. It couldn't evolve. And when deep learning exploded, it got left behind. But here's the twist. The same LLMs that replaced symbolic AI are now being used to rebuild it.
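The cat rule Cameron recites is itself a complete, if tiny, symbolic system, and its brittleness is easy to demonstrate. A minimal sketch in Python; the animal attributes and the Sphynx counter-example are illustrative, not from the episode:

```python
# A toy symbolic ("GOFAI") classifier: the rule is hand-written, fully
# transparent and auditable -- and brittle, because the rule, not the
# world, defines the category.
def is_cat(animal: dict) -> bool:
    # The classic hand-crafted rule: fur AND purrs AND chases mice.
    return (
        animal.get("has_fur", False)
        and animal.get("purrs", False)
        and animal.get("chases_mice", False)
    )

felix = {"has_fur": True, "purrs": True, "chases_mice": True}
sphynx = {"has_fur": False, "purrs": True, "chases_mice": True}  # hairless cat

print(is_cat(felix))   # True
print(is_cat(sphynx))  # False: a real cat the rule cannot see
```

Every decision is fully auditable (you can point to the exact clause that fired), which is the appeal in regulated settings, but the hairless cat falls straight through the rule, which is the brittleness.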
Today you can ask an LLM to write an expert system, build a logic engine, translate a medical guideline into a [01:04:00] rule-based system, create transparent, explainable AI for regulated industries. LLMs are black boxes, but they can generate white-box systems. Suddenly, the brittle logic of GOFAI can be spun up on demand. And that matters, because in sectors like law, medicine, and government, black-box AI is distrusted. You need auditability, transparency, a trail of logic you can verify. We might be entering a hybrid future: using LLMs to handle messy data, language, creativity, and using symbolic systems to encode values, rules, compliance, and reasoning. Neural nets do the thinking. Symbolic logic explains the answer. The consensus in the AI research community is that the future of advanced AI lies in the successful fusion of neural and symbolic approaches. While purely data-driven LLMs have demonstrated [01:05:00] impressive capabilities, their integration with symbolic reasoning is seen as the crucial step towards creating more robust, trustworthy, and intelligent systems. Watson had the funding, the brainpower, the brand. It beat champions. It had the world's attention. It could have led the AI revolution. But IBM made a fatal error. They confused a public demo with a product. They mistook symbolic logic for intelligence. And they failed to pivot when the ground shifted. So yes, Watson won Jeopardy. But it never made it to Final Jeopardy. The real game was still to come, and the next generation of machines rewrote the rules. Steve: That was Cameron: And it reminds me, as an ex-Microsoft guy, of the last time IBM missed the boat, which was the PC revolution. Steve: Yeah, that's Cameron: [01:06:00] Twice. They've missed the boat twice now in 40 years. Steve: I don't even know if IBM, they sold off their computer division, and they're sort of a quasi consulting firm now, aren't they? Am I, am I misinterpreting that? Cameron: Yeah.
No, they sold off, um, that side of the business to Lenovo. Yeah. Steve: Beautifully written, Cam, and it really explains a lot, and it was a perfect way to finish off where we started. And I think that hybrid model is really the future. The idea of using a black box to build a white-box system that we can see and understand is gonna be really important. The combination of symbolic and neural network, or LLM, models, uh, to make up that blend is, is really, uh, great. And where you started: they were fast, accurate and didn't get scared. That's a really interesting thing for the corporate world [01:07:00] and where we are with AI overlapping work and what the future looks like. You know, who's gonna get scared, who's gonna go fast and make mistakes, and, and who isn't? And the idea that a symbolic system is frozen in the rules that you give it. I think that humans face the same problem. Many of us are frozen in the rules that we were given. And all of those rules, in life and in technology, are about to change.

  4. 7

    Futuristic #43 – The Lemming Race to Superintelligence

In this fast-paced episode of Futuristic, Cameron and Steve dig into a wild week in AI and tech. Cam shares how he stunned futurist Peter Ellyard by using ChatGPT to generate a bold, original idea called “The Other Year” – a radical, identity-swapping sabbatical for all Australian adults. Steve loves it, but the discussion spins off into a brutal critique of political cowardice, economic inequality, AI translation workflows, and the geopolitics of the AI arms race. From Neuralink trials to Honda’s reusable rockets, from AI-generated music to legal rulings on copyright, this one covers everything. Is AI stealing jobs or creating new ones? Are we on the edge of a superintelligent revolution—or just in a corporate lemming race off a cliff? FULL TRANSCRIPT Audio of FUT 43 [00:00:00] Cameron: Sure. My other, yeah. Uh, welcome back to the Futuristic, episode 43. According to my notes, Steve Samino. My, um, AI transcription engine in Descript never, never likes having to work with your name, after all these months and years of doing it. It likes, so what, Summer Chi Chico, whatever. Never gets it right. Doesn't get my name right either, so don't feel bad. Steve: Look, biased AI. AI Italian racism is what we are hearing here, and I just wanna point that out. Cameron: Systemic. Steve: It's systemic. Cameron: Yeah, yeah, yeah. Steve: The mob got us, and now the technocratic mob, they're after us again. Cameron: What's the Italian version of anti-Semitic? Is it anti [00:01:00] -wog? And. Steve: I don't like wogs. It's, Cameron: Anti-wog. I called you a wog on the last episode. Steve: For it. We don't, we don't like your type around here. Cameron: Well, it's been a crazy week, Steve, um, in AI and tech and all of that. It's just so crazy. But I wanted to start with something, if you don't mind. I mentioned last time that my friend Peter Ellyard, 88 years old, um, futurist, um, was, was in Brisbane with his partner Robin.
Had a lovely time with him, but had a lot of conversations about AI, uh, shit, let me start again. I had a lot of conversations with Peter about AI, and just realized that he wasn't really getting it still. So I've, I've spent a lot of time in the last week in an email thread with him, um, encouraging him to think about it in terms of creativity, and, um, and I. [00:02:00] I'm sort of acting as the intermediary between him and AI now. Um, and I asked him to give me a challenge to give to one of the AI tools, to demonstrate that they are creative. 'Cause his question to me when we caught up was, do you think they'll ever have imagination? And I said, they do right now. They're creative right now. So he came up with this, uh, question that he wanted me to see what it could create, um, that was genuinely creative, built around some of his work. He has these things he calls the six pillars of livability, and he wanted it to come up with, uh, an idea for how we could make Australia a better place by 2050, built around his six pillars of livability. [00:03:00] So I'm gonna walk you through this, 'cause I think it was pretty cool, and he was sort of blown away by this. My prompt, and I used ChatGPT's o3 deep reasoning model, was this: you are now a futurist whose job is to design a better future for humanity. Come up with one idea that is completely original and is not based on extending existing trends or technologies, that would make Australia a more livable country in 2050, and fits in with the six pillars of livability: prosperity, harmony, inclusion, sustainability, health and security. Now, to be fair, it came up with a couple of things that I rejected early on. I was like, nah, that's not really that original, that's just an extension of other things. You're just, you know, taking existing stuff and, and, you know, making it more extreme or [00:04:00] more advanced or whatever.
So I pushed back, a couple of times, twice maybe. Then it came up with this: the Other Year Act, a once-in-a-lifetime, state-funded identity exchange sabbatical. Core proposal: every Australian adult is guaranteed, and required, to take one 12-month Other Year, somewhere between their 25th and 55th birthday. During that year, they live, work, and vote in a host community whose demographics are as different from their own as an algorithm can manage: urban versus remote, wealthy versus disadvantaged, migrant hub versus old settler town, et cetera. Think of it as compulsory whole-life cultural cross-training. No uniforms, no bootcamps, no charity optics. Just swapping places long enough that the word "them" loses meaning. Why it's [00:05:00] new: national service exists, military, civic, Peace Corps-style gap years and couch-surf exchanges exist, but there is no country on earth where a universal mid-career identity swap sabbatical is a legal right and obligation. It isn't an extension of technology or an NGO trend. It's a structural reboot of social experience built straight into citizenship. And it goes on and on and on. Has a timeline for delivery, how it would work legislatively, how it would be funded. Basically, a federal agency and trust fund, 0.6% of GDP, offset by winding back duplicate employment subsidy schemes. A pairing engine: an independent algorithm balances 20-plus variables, income quintile, language background, remoteness, climate, industry, disability status. No one is sent somewhere they medically can't live. Cost and cover: award wage paid from the trust if the host employer [00:06:00] can't. Home mortgage slash rent frozen by law for the year. Child option: families can move together, or split with travel stipends. Civic rights: temporary enrollment in local council and state electorates. You vote where you live that year.
Return and certification: completion earns an internationally recognized cross-cultural competence license, a hiring premium in both public and private sectors. The bottom line is, instead of betting on gadgets or tax tweaks, the Other Year puts human beings into each other's shoes, literally, for a whole year, and lets empathy do the heavy lifting across prosperity, harmony, inclusion, sustainability, health and security. One bold, never-tried lever. Six pillars lifted together. Steve: Look, I cannot tell you how much I love that. That's one of the best ideas I've ever heard, and I have heard similar things. It does come back to walk a mile in someone else's shoes, like it. And [00:07:00] it makes sure there's a couple of stones in those shoes and they don't fit properly. Or if you've had stones and shoes with holes in the bottom, you get yourself a nice pair of, uh, a dope pair of Nikes, as Zack de la Rocha said in one of his great songs. And ramble too. He had a dog pair of snake son. Cameron: Look, I was impressed, and Peter, Peter was impressed, and, you know, here's two points about it. One is, well, obviously, apart from the fact that I think it's a great idea, one is the prompt was very specific about what it had to focus on. And two, it came up with this idea in a minute. I mean, after I pushed back a couple of times on the first couple it came up with, like, by minute three it came up with this idea that you, Peter, and I all thought was a great idea. Imagine if I'd got it to come up with a hundred ideas over the course of the next hour and a half, right? Steve: And it points out something [00:08:00] important: that we have all the technology we need right now to solve all of the world's problems, and human frailty has always been the issue, always will be the issue. And it reminds me of Harari, who I've got in my little thing to talk about.
Yuval Noah Harari, who wrote Nexus and Sapiens. He said that it's strange that we think that an AI will solve all of our problems, when the AI is based on us. He says, we don't need AI to solve our problems, we need humans to do it. Now, here's the point. That idea is a great one, but humans still need to implement it and agree upon it. At this point, maybe the AI takes over and says, here's where you're going, and an Uber and a, a humanoid robot arrives at your door and takes you away to this place, and takes away all your wealth if you're wealthy. I don't [00:09:00] know. But the, the ideas are there, and AI's got great ideas, and this is a super idea that would work, but Cameron: And it's a bold, it's a bold vision. And one of the things that we lack in Australian politics, by design, is bold vision. Steve: We don't have any. We used to, hundreds of Cameron: Hmm. Steve: granted, there was a whole lot of other problems, which we are solving, social problems, and Cameron: Well, a hundred years ago, I mean, Gough Whitlam, in his first seven days after he was elected, he sat down with his right-hand man and basically crafted the plan for how Australia's been living for the last 55 years. You know, just sat down and said, we're getting outta Vietnam, free education, free healthcare, uh, legalized divorce, you know, blah, blah, blah, blah, blah. Steve: A, a lot of good ideas. And the void of leadership and courage are the two things that are lacking in, in, uh, political society. And we've got corporate capture, and the [00:10:00] lobbying is, is a real challenge. The fact that South Australia outlaws lobbying is, is a massive move in the right direction. That's the real biggest issue, because we don't get brave policies simply because our politicians are captured economically. If we remove that capture, then we get a chance for politicians to make decisions for the majority, right?
And we, and we don't Cameron: I think it's also, I think they're also just trying to be a small target, right? Um, if we don't see anything bold, then there's nothing to attack. If it's just, eh, more of the same, then. Steve: They've gotta ask themselves the question, what are they there to do? Are they there to make a difference, or to just fringe-dwell? Because what we're getting is fringe politics. Just small, incremental, nothing bold and strategic and important like this, which would, would really work. Because the issues, as you say, are not technological. You know, we have all of the technology we [00:11:00] need to, to move society forward and create a burst of flourishing on those six principles, which, I don't think anyone could disagree with those as goals that we should have societally. So they seem pretty good to me, and, and that idea, I think, would have a dramatic impact if it was implemented. Cameron: Yeah, it reminds me a little bit of Mormon missionaries. You know, my Chrissy grew up in Utah, and, um, a lot of her family, uh, and friends go do missions when they're 18, 19, 20. That's how Mormon missions work. And usually they get sent to, I think she's got a niece who's in Chile or Argentina or somewhere like that at the moment, doing a mission. A lot of, a lot of Chrissy's siblings went to places like that to do missions. Her father, uh, went to France to do his. But a lot of times they end up in very different [00:12:00] communities, speaking different languages, different cultural issues, for the wrong reasons. I mean, they're trying to convince them that Joseph Smith looked into a top hat with some magic rocks and translated some magical plates. But, uh, you know. Steve: I'm so sorry. I love that so much. Cameron: The idea of sending people into communities like that, you're gonna come out of it with, um, a better appreciation of the other. And that's why it's called the Other Year Act.
I think so, anyway. Just proving the point that AI is creative today. If you know how to use it correctly and prompt it and work with it, it can do amazing things. And that's today, let alone where it's gonna be a couple of years from now, when we have superintelligence. But, um, what have you got to talk about from your past week, Steve, before we get into news? Steve: I used AI in a way that [00:13:00] was really effective. We had some investors from China who were interested in Macro 3D. We're raising 5 million in capital. So for any rich listeners out there, a chance to participate in the multi-billion dollar future of, uh, automated construction. In any case, we've got an IM that they wanted us to send through yesterday. I used ChatGPT to translate the IM into a written Mandarin Chinese version. As you know, language is really nuanced. The way that I did it was, I translated, you know, pieces of the English in ChatGPT, said everything I paste from now on is gonna be translated, that's your singular instruction until further notice. Again, prompting well so you don't have to go back and forth and do more keystrokes than needed. Translate it, it's a business-based document, you're going to have to make some interpretations on the language we've used, which is quite different in Chinese. I speak a little bit of Chinese. So I came back, and then when I got the English, uh, the Mandarin [00:14:00] translation, I would then take that and put it into Gemini and go, now translate this back into English again. And then I compare the two English versions, because one of the challenges is you've got to know what good looks like. You can't just trust the AI that it did it, because who knows? I can't read Cameron: Hmm. Steve: Mandarin. So I Cameron: Hmm. Steve: took a look at it, and it was flawless. Did not skip a beat. And some of the translations into the nuance for Chinese were just perfect.
It was, Cameron: Hmm, Steve: doing a 20-page document with financials, technical statements, everything, took under two hours. I came to a conclusion: not only did it do an amazing job, and I managed to use two AIs, which is one of the tricks you and I have been talking about a lot, use more than one AI to check the other AI. And it’s almost like a little bit of blockchain, sort of having a number of verifications across something to back-reference and check. Uh, not only did it do an extraordinary job, but it took two hours. And then I came to the conclusion AI [00:15:00] stole a job that would never have existed, because we were Cameron: hmm, Steve: never gonna hire a translator. It would’ve cost us too much. It would’ve taken us two weeks. What we would’ve done is we would’ve just sent it across and crossed our fingers and said, hopefully someone understands Mandarin pretty well, uh, or understands English, and if there’s a problem with that, let us know. Cameron: hmm. Steve: But we sent through a Cameron: Hmm. Steve: translated version within a day of them asking, and they’re like, wow. Like they were Cameron: Hmm. Steve: like, wow. Which, surely they know we used AI to do it. And no one lost a job, but new value was created, and potentially $5 million worth of capital is going to flow into Australia, which is then gonna create other jobs. Cameron: So my question is, did you use DeepSeek or Qwen? Steve: Neither. We didn’t, we Cameron: Why wouldn’t you have used a Chinese AI to do that instead of an American AI, is my question. Steve: Um, no reason. I just used the two that were right there in my browser. There you go. The [00:16:00] reason is I used the two that were just a mere click away, already open as tabs. That’s the reason. No reason why I couldn’t have. And after that I thought, I could’ve done this with four or five or whatever, but the result I got was extraordinary in any case.
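Steve’s round-trip workflow (translate forward with one model, back-translate with a second, compare the two English versions) can be sketched in a few lines. This is a minimal illustration, not anything from the show: the two translation hops are hypothetical stubs, and the comparison step uses Python’s standard-library `difflib` as a rough stand-in for Steve’s manual side-by-side reading.

```python
from difflib import SequenceMatcher

def round_trip_check(original_en: str, back_translated_en: str) -> float:
    """Rough 0..1 similarity between the original English and the
    back-translated English -- a proxy for reading them side by side."""
    return SequenceMatcher(None, original_en.lower(),
                           back_translated_en.lower()).ratio()

# The two translation hops are hypothetical stubs. In practice they would
# call two *different* models (e.g. one for EN->ZH, another for ZH->EN),
# so one model's errors aren't hidden by the same model translating back.
def translate_en_to_zh(text: str) -> str:
    raise NotImplementedError("model A goes here (hypothetical)")

def translate_zh_to_en(text: str) -> str:
    raise NotImplementedError("model B goes here (hypothetical)")
```

The design point is the one Steve makes: you can’t grade an output you can’t read, so route it through an independent second model and compare what comes back against what you sent.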
Uh, I would’ve, even if I used DeepSeek, checked back more than once. But it really made me realize that I think the biggest thing that’s going to happen with AI is a whole lot of things that wouldn’t be done without it get done, and new value gets created. And when the new value gets created, you get a new multiplier effect, which is a common economic theorem where you spend a dollar, that becomes a dollar twenty, which becomes a dollar fifty, which becomes three dollars. I mean, that’s how the entire economy grows. It’s all based on things that don’t exist yet that then exist. And to be honest, translators are right in the firing line, right? It’s one of the easiest things to get rid of, and we know that, and all of us have to be careful, but our job is to look at [00:17:00] where the new value creation is. And I just thought it was a really good example of the multiplier effect and how revenue moves sideways Cameron: Mm Steve: to create new revenue streams. Cameron: Mm Steve: So there was Cameron: until they gobble up all the revenue streams. Steve: Well, if they do, we’ve got bigger problems than that. The other thing I was really interested in: I’ve been reading Nexus by Yuval Noah Harari, and I even sent you a page, uh, photo. Just the two ideas of story and bureaucracy, incredibly interesting. Story is the systems of belief and how we translate ideas to each other. Big ideas of things that we should, could, and would do or have done. You know, religion, technology, everything’s a story. We buy the story first, and the story helps us to believe in myth so that we can invent things. But then he did this overlap with bureaucracy. And bureaucracy is the rules and the methods and the systems that become a requirement, which [00:18:00] can sometimes stop things, and sometimes things don’t fit into the bureaucratic, uh, pages or boxes you’ve gotta fill things into.
And just that juxtaposition between the two, and how they are at incredible points of tension with each other in times of great change. Because we buy into the story, and then once the story’s been sold and everyone agrees, then you build a bureaucracy around that to temper the story and create boundaries so that we can operate effectively within a society. Society requires it; you know, the way you coordinate big groups of people and big ideas is through bureaucracy. You need it. And the way I feel about AI is that there is no bureaucracy around it. There’s just a bunch of story, and independent players just forging ahead without any boundaries at all. And while people don’t want boundaries when they’re innovating, you know, the reason we have clean water and safe roads is because of bureaucratic boundaries, which are really, really important. And they are [00:19:00] left wanting at the moment. So I’m only a third of the way through that book, but oh my God, he’s just mind-blowing. He’s quite possibly the world’s greatest thinker. Cameron: Wow. I’ll have to, um, I’ll have to go arm-wrestle him for that title. That’s, that’s my title. I’m, um, Steve: I’m so sorry. Cameron: I’ve trademarked that title. Steve: Have you really? I hope that you have “World’s Greatest Thinker, TM, self-proclaimed.” Cameron: TM. Yeah. Yeah. Steve: I Cameron: Yeah. Interesting. Uh, well, speaking of bureaucracy in AI, uh, Trump’s One Big Beautiful Bill passed both houses finally. Yeah. Steve: Oh my Cameron: Um, so it’s been Steve: Did they adapt the statement that no laws can interfere with the progress of AI, or is it still in there? Cameron: To the best of my knowledge, that [00:20:00] is still part of it. So when Trump signs this today, tomorrow, it will be illegal for any state in the US to pass any laws that regulate AI in any way until 2035. Steve: Well, no one will be here by then, so it’s fine. The singularity will have occurred.
I don’t know whether I’ll just be living in a cloud. No one knows. 2035? He’s basically said there’s gonna be a nuclear war, an AI war. We don’t know what will happen. Duck and cover. Okay. Cameron: So, yeah, obviously for the last two, two and a half years since ChatGPT came out, there’s been an enormous amount of talk, uh, in the US and around the rest of the world about regulating AI: safety, guardrails, et cetera, et cetera. Now the US is not going to regulate it, and I suspect any other [00:21:00] country that tries to regulate it, let’s say if the Australian government tried to regulate AI, like we’ve got the eSafety Commissioner that’s been regulating social media, I suspect that the Americans will push back and penalize countries through tariffs or some other mechanism if you try and regulate AI. So we’re pretty much in a situation now where there is gonna be no effective regulation anywhere in the world on AI. China may, China may regulate it in terms of what it can and can’t say about the CCP and Tiananmen Square and those sorts of things. But effectively, the Trump administration has just removed any legislative approach to safeguarding us from AI. Steve: That’s terribly concerning, [00:22:00] especially when the jury is out from experts on what the potential consequences could be socially, economically, in terms of ceding control to a potentially sentient being. It, it seems like an incredibly foolish thing to do. Of course, Trump theoretically, uh, only has three and a half years left, or a little bit less than that, and it could potentially be kiboshed legislatively, but it just seems like someone staking out a geopolitical position, trying to win the geopolitical race with AI, without understanding the potential consequences. Cameron: Look, I was skeptical that humans would be able to regulate AI very effectively or for very long anyway.
So I don’t think it makes a great deal of practical difference, but it’s interesting now that that’s the position. [00:23:00] Yeah. Steve: If we can’t legislate against known impacts of social media, algorithmic division, uh, the Cameron: No, but I’m gonna, Steve: effects of social media on preteens, then we’ve got zero chance. I mean, that’s clear. And the jury is in, the studies have been done, and we know the impacts. And if we can’t regulate against that, or against monopolistic behaviors of big tech, then we’ve got zero chance of doing it with AI, ’cause that’s far more complex and has less research, less understanding, and the experts can’t agree on what the potential impacts are. So you’re right, but, but it Cameron: But I’m not even talking about it from that perspective. I’m talking about it from: a lesser intelligence can’t regulate a superior intelligence. If we have superintelligence, Steve: Yes, but. Cameron: you’re not gonna be able to regulate it Steve: That’s right. But you are talking Cameron: by definition. Steve: about a post moment, when that bridge gets crossed. And I think what we’re talking about here isn’t the level of intelligence, it’s [00:24:00] more the level of independence of an AI. There’s a window of time still available where that could be regulated, before the moment when the AI has self-direction, uh, its own independence, which we’ve, we’ve spoken about. Cameron: But commercially and from a security perspective, there’s no, like, my understanding of the way that the AI industry elite think about this in the US is that it’s an all-or-nothing game. And we’ll talk about Zuck and his, uh, buying spree in a moment, but it’s an all-or-nothing game here. It’s the first country or company, or both, to get to superintelligence who wins, and they believe that China is
quickly catching up to the US and probably will supersede their, uh, [00:25:00] development in this space in the near future. So they can’t slow it down until they get to superintelligence, and when you get to superintelligence, it’s too late anyway. So I just don’t think it was gonna happen, for commercial and, uh, security reasons, ’cause they’re terrified of what will happen if China gets it. But Steve: Human Cameron: of Steve: Human lemmings. It’s the AI lemming race. Cameron: Yeah, Steve: Off the cliff. We know Cameron: yeah, Steve: the cliff is coming, and we’re like, yeah, but we have to be first off the cliff. Cameron: Yeah, yeah. Let’s get to the Steve: AI geopolitical race. It’s become Lemmings. Oh yes, it’s right there. What’s gonna happen? We go, we don’t know. Probably won’t end well. Could be some carnage. So what are we gonna do? Let’s make sure no one gets in the way of us running off the human AI lemming cliff. Cameron: So, uh, I think in our last show we talked about the fact that Zuckerberg was trying [00:26:00] to buy an AI company, and/or all of OpenAI’s top devs, or the top AI devs from everywhere, really. And Sam Altman, I heard on a podcast a week ago or so, saying that Zuck hadn’t been successful ’cause OpenAI’s people didn’t care about money, they wanted to be part of something important. Well, that didn’t last very long. That aged like milk, because Zuck has managed to hire about 10 people. Uh, now, whether or not they are top-tier OpenAI or just second-tier OpenAI researchers is still debated. I did hear on a podcast yesterday, uh, somebody was saying that they heard that for one of the people Zuckerberg has hired, the salary package was a billion dollars. Not just a hundred million, but a billion dollars to get [00:27:00] this person. But the, Steve: A company car goes with that, and, uh, some great lunch benefits.
Cameron: But the rationale that this guy who’s, um, Patel, Des Patel, I think, was giving was interesting. Like, Zuck has been trying to buy Ilya Sutskever’s startup, SSI, Safe Superintelligence. The guy who was the chief scientist at OpenAI, co-founder, left after the whole Altman firing-rehiring thing. Zuck’s been trying to buy his company for about $30 billion, um, is the rumor. Ilya turned it down, but his co-founder and CEO, Daniel Gross, I think, has just left the company. So he might be going to Meta to be part of their AI play. He might be the guy that’s getting a billion dollars. But the rationale for paying these sorts of [00:28:00] salaries, apart from the obvious one that he wants to win and he wants to suck up all the best people, is interesting, ’cause this guy was saying, well, SSI only has about 15 people in it. If you’re paying $30 billion to get 15 researchers, and let’s say one of them is Ilya, and maybe 10 billion is for Ilya, the other 20 billion is for the other researchers, that’s basically, you know, roughly a billion dollars per researcher. So if you’re willing to pay a billion dollars per researcher to buy a company where they don’t have a product, you’re just getting the people, why not just offer that money directly to the people anyway? Right? It kind of makes sense. Steve: I’m being a bit flippant here. Two costs. Yeah. The chips and the server farms, and the researchers. That’s really all there is. There’s only two pieces of the puzzle. A couple of UX designers if you’re gonna launch something, but there isn’t a huge amount in it. [00:29:00] So that’s your, your major cost. I found it interesting to see pop up in my social feeds the signings of AI rock stars versus football players like Ronaldo and Messi. And I just thought that was a really nice, uh, approximation of, you know, where society has moved to: the idolatry of innovators and, uh, corporate CEOs.
And now it’s not just the CEO, it’s the star of the club who becomes the rock star; now the coder is the player. Uh, I think that’s a really interesting analogy, but again, to me it points to inequality in incomes now, where if you’re on the right side of some sort of economic equation, you’re going to garner inordinate wealth, uh, the benefits, uh, really being served by a few large corporations. You know, we’re in a technocratic oligarchy, and this is just another reflection of that, economically. I get why Big Tech does it. It makes [00:30:00] sense for Zuckerberg to do it. The prize is so big you can pay it. It’s just a really simple economic equation where you look at the cost of acquisition versus the benefits of said acquisition on capital flows. It’s actually quite easy. But I think socially it’s a bigger reflection of the problems that we face now, where there is that much money floating around. That’s why we don’t have any of these handbrakes happening. Uh, there’s too much power and too much money, and this is just another reflection of that. And as much as I love watching a football player run around, I think it’s nice that people who are actually building and making things are making more money than someone just kicking around a dead animal filled with air. Cameron: Look, I’ve argued for years that there should be salary caps on everybody: CEOs, sports players. There should be, I don’t care what it is, a million, 2 million, 10 million, but there should be a salary cap on what people can get [00:31:00] paid. Steve: And a cap on wealth. No one needs a billion dollars. Yeah. Having more than a billion, no one even needs a Cameron: I. Steve: hundred million, or 50 million, let’s be honest. But, you know, to keep the capitalist viewpoint and incentive in people’s minds, which is bullshit, because a couple of million bucks and your life’s pretty good, I imagine.
Um, it should be, after you earn a billion dollars, it’s 99% tax, or 90 cents on the dollar is taxed. Once you earn over 5 million, 10 million, whatever the number is, 90% tax. I think that’s really simple. My view on how to rein in CEO salaries and sports salaries: I think rather than putting a limit on the top, it should be a maximum multiple of the lowest-paid person in your company. Not the average, Cameron: That’s interesting. Hmm. Steve: So, um, the CEO Cameron: Hmm, Steve: can only earn, I don’t know what the number is, uh, 50 times what the lowest-paid person in the company earns. Then what you’ve got is a nice alignment [00:32:00] where they have to justify their pay rise’s impact on the others within that construct. And I think that’s Cameron: hmm, Steve: a really nice way to do it. So there’s no limit, you can earn as much as you want, but we need to Cameron: hmm Steve: bring society along with us. And the CEO, Cameron: hmm. Steve: you know, relative to the cleaner or whoever, can only earn a multiple of the lowest-paid person in that company. Cameron: I like that. Steve: I knew you’d love it, ’cause you’re a big, long-haired communist. Cameron: Um, big, big commie. Yeah. Um, Mark Chen, the Chief Research Officer at OpenAI, sent a memo to staff on Saturday promising they would go head to head with, uh, salary discussions with Meta. He said, “I feel like someone has broken into our home and stolen something. Please trust that we haven’t been sitting idly by.” [00:33:00] And they’ve announced that they’re basically shutting down the company for, I think it’s a week, um, while they figure it out. They’re closing the doors. Steve: What does that mean, though, in terms of end users? What does that mean? Cameron: Dunno, man. Hasn’t really, Steve: Does it mean the products aren’t Cameron: I Steve: available for anyone to use during that week? Does Cameron: No.
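Steve’s multiple-based cap from a moment ago is simple arithmetic, and worth a quick sketch because of the incentive it creates: the only way the top earner gets a raise is to lift the floor. The 50x multiple is the hypothetical figure he floats, not a real policy.

```python
def max_top_salary(lowest_salary: float, multiple: int = 50) -> float:
    """Highest salary allowed in the company: a fixed multiple of the
    lowest-paid person's wage (50x is the hypothetical number floated)."""
    return lowest_salary * multiple

# If the cleaner earns $60,000, the CEO tops out at $3,000,000.
# Raise the floor to $80,000 and the ceiling rises to $4,000,000 --
# which is the alignment Steve is describing.
```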
Steve: If no one’s in the office, do the servers get a little rest and we save some global electricity? Like, what happens? Cameron: I’m quoting from Wired magazine, where they’ve got somebody off the record telling ’em stuff. “OpenAI is largely shutting down next week as the company tries to give employees time to recharge, according to multiple sources. Executives are still planning to work.” Those same sources say, “Meta knows we’re taking this week to recharge and will take advantage of it to try and pressure you to make decisions fast and in [00:34:00] isolation,” another leader at the company wrote, according to Chen’s memo. “If you’re feeling the pressure, don’t be afraid to reach out. I and Mark are around and want to support you.” So I guess, uh, they’re gonna have some people keeping the services up and running, but everyone else is taking the week off to think about their future, what they wanna do. Do you wanna just take the money and run, and hope that Meta can deliver something? Or do you wanna stay at OpenAI, and they’ll try and match the salary offerings? But you know, there’s a lot of, um, debate still about LLMs and how much runway the current models have, and whether or not LLMs are gonna get us to superintelligence. AGI, by the way, no one’s talking about AGI anymore. AGI’s just assumed now. Um, everyone’s focused on superintelligence. The AGI thing is kind of, Steve: because [00:35:00] we’ve been pushing that on the Futuristic for some time now. Cameron: Yeah. Um, and as they keep saying, like, AGI, you ask 10 different AI researchers for a definition, you’ll hear 10 different things. So it’s kind of stupid. But, you know, people argue that the current models aren’t gonna get us there. I have to point to the fact that the people leading these companies are taking billions of dollars of investors’ money and capital and investing it in this race.
Hundreds of billions of dollars if you factor in building data centers like Stargate, et cetera. So they certainly believe that this is gonna get them there, and they’re spending everything in the bank and then some to get there as quickly as possible. Now, the pushback is, Zuckerberg also thought that he was going to bring the metaverse [00:36:00] into reality for the last five years, and he spent billions on that and got nowhere. Steve: A fever dream, where he just did not understand humans. And that’s what happens when you’re a robot, Mark Zuckerberg. Unless you’re a human, it’s very, very hard to understand humanity. Cameron: Speaking of robots, I watched the latest Neuralink update. Uh, it was a video that Elon opened, uh, and then a bunch of his top guys were talking about what they’ve been doing recently. They now have seven or eight people with Neuralinks inside of them. They were all on the video, uh, talking about their experience. Two of them were playing Call of Duty against each other, using their brains. Uh, my first thoughts were, as I’ve Steve: Were they really playing Call of Duty? Given that he has pretend robots at his launches, I’m just saying. Cameron: First of all, as I’ve said before, who the fuck is gonna [00:37:00] let Elon anywhere near your brain after the last six months? B, Elon seemed quite back to being normal and rational. So either he’s off the ketamine or he’s had an upgrade to his Neuralink, and I’m starting to think that for the last six months he just had a, a very early version of Neuralink in his head and it was glitchy. Yeah, yeah, yeah. He had to get an update done. But that aside, the reason I’m still interested is because, whether or not the company that puts that brain-computer interface into you is Neuralink, someone is gonna be doing it. Obviously they’re not the only company doing BCIs. There were companies doing it before them. There are gonna be companies coming after them. It’s the advancement in the innovation and the technology
that’s interesting. We’re all gonna have a BCI. And you know, you talked on the last episode about Kurzweil’s view of merging with the AI. Elon [00:38:00] was talking about that, basically, in his introduction. He was talking about the fact that the human brain, for all of its wonders, is actually quite slow at processing information, and if we have a BCI, we’ll be able to process information a thousand times faster than we can with our carbon-based wetware. So, uh, you know, it will, if there are still jobs to be had, at some point become a competitive advantage in the marketplace. If you don’t have a BCI, it’ll be like going for a sales job in the nineties and not having a driver’s license. Like, if you don’t have a driver’s license, you know, you can’t be an Uber driver, right? You can’t be a sales guy. Same sort of thing. And that was my justification for getting a mobile phone in 1989 or 1990 or whatever it was. It was [00:39:00] a competitive advantage as a sales rep that I could be contactable by the office, I could contact my clients, that kind of stuff. My clients could contact me. Steve: Every new technology becomes an advantage, whether it’s having a car, whether it’s using a mobile phone, being computer literate: must-have computer literacy, circa 1993. And then it was, you understand the web, and must have a degree, which again, you know, that is putting information into your brain, uh, and comes at a cost. And now, what’s the cost of Neuralink? I mean, one of the real dangers of brain-computer interfaces is that they become subscription models. That’s incredibly dangerous: the fact that you could upgrade your brain but be dependent on a cloud which you don’t own or control. I think Cameron: Have you watched the last season of Black Mirror? Steve: I have. The last episode was extraordinary. I think, let’s give the listeners a little bit of a spoiler on that one.
Cameron: [00:40:00] I haven’t seen the last episode. I only watched the first couple, but I was talking about the one where the woman has the chip in her brain that requires a cloud subscription. Steve: She has an injury, a brain injury of sorts, and then they put a chip in her so that she can operate. But what happens is, at first it’s free, and then they have to upgrade to a subscription, and it has geographic boundaries. So they go away on a trip and she crosses a geographic boundary, you know, the equivalent of losing 5G, but she loses access to her cloud, and then they can’t afford it. So she starts doing contextual advertising in the middle of the day. She’s like, did you wake up tired in the morning? Well, Cameron: She’s a teacher in a classroom. She starts giving ads to the students in the classroom. Yeah. Steve: It’s really, uh, horrible. And then they have all sorts of upgrades you can get where dopamine levels go up, and the horny husband does that on a, on a trip where they go away and she gets crazy horny. But, but the whole thing is that it ends up [00:41:00] in a cycle where it keeps costing you more. You’re locked into something with zero escape, and someone else controls your own mind, and you have to subscribe to that, and it becomes more and more draconian and expensive. I think that’s an incredible danger. And, and this points to the importance of open source. And I think about a lot of things that are of incredible value that are open source. You know, like language. Us speaking English, or whatever language we want, we can adapt it, we can do what we want with it. That’s how you end up with dialects. That’s how you end up with slang and certain, uh, industry vocabulary. This is kind of where we are. It’s an extension of language and knowledge, and the fact that it’s not open source is a real problem. Cameron: Mm. Honda rockets.
Honda successfully launched and landed its own reusable rocket. [00:42:00] Looks, uh, very similar to, um, a SpaceX rocket in its landing. Didn’t have the chopsticks, just did a vertical landing. Uh, and again, this is sort of my point regarding the stuff that Elon’s doing: cutting edge, not necessarily the first, but, you know, fast-follower kind of stuff with a lot of this. He’s not completely innovative, but he’s not gonna be the only one doing VTOL rockets and, uh, reusable, re-landable rockets. There’s gonna be a whole bunch of companies that are able to catch up and do this sort of stuff. It’s the four-minute mile, right? Once it’s been done, everyone else is gonna figure it out. So, you know, we’re gonna have a whole bunch of players that are gonna follow in Elon’s footsteps, for not just BCIs, but the space race, the rockets, all that kind of stuff. But if you [00:43:00] haven’t seen the video, go look it up: Honda’s reusable rocket. It’s still amazing, regardless of who does it. It’s super impressive to see. Steve: And if you hadn’t told me it was Honda, I would’ve thought, oh, that was just another one of Elon’s things. Cameron: Yeah. Steve: It looked exactly the same, to someone who doesn’t follow it closely. Uh, yeah. Which goes to show we need as many substitutes as possible, in as many different economic and technological realms. The more overlap and substitution we have, the more competition, the more open things become, and the less draconian that powerful new technology becomes. Cameron: So everybody’s been talking about The Velvet Sundown this week, Steve. Uh, I sent you a link about these guys during the week. So, people, if you’ve been reading anything, you’ve probably heard about this. Um, there’s a band appeared on Spotify, The Velvet Sundown. They’ve got an album out. They’ve now got, I think, about half a million people subscribed to them on [00:44:00] Spotify.
But until the last couple of days, there was no evidence that this band exists or has ever existed. There were a lot of people assuming that they’re an AI-generated artist. Rick Beato on YouTube Steve: I love Rick. Cameron: threw their music into his AI analysis tool, looking for evidence of humans in the recording, and couldn’t find any. He believed it was AI-generated music, from his AI analysis of the AI songs. Uh, since these stories started to come out, the band now does have an official X account and they’re going, no, we’ve never used AI, we’re humans. But the photos of them are obviously AI-generated. But my point was, again, like you looking at the Honda rocket, if I had listened to this album and not heard any of the media about it, I’d kind of [00:45:00] dig it. It’s kind of Americana rock. Steve: Yeah. Cameron: Who’s playing the piano in your house, by the way? Steve: Can you hear that? It’s my Cameron: Yeah, Steve: I thought, oh, I was hoping you couldn’t hear it. Cameron: I can hear it. Yeah. Uh, no, it’s okay. It’s a little bit of ambience. At first I thought you were listening to The Velvet Sundown. Um, yeah, so it gets back to this question that we’ve talked about before, about art and AI. Um, I was having this debate with my son Taylor, uh, the other day, about social media as well, ’cause he and his brother keep sending me videos that are AI-generated, Veo-generated stuff. More of the Yeti stuff, or stormtroopers, or some straight-up racist content that he was sending me. Sitcom racist stuff. Um, there’s a whole series of things about Chinese people eating cats and dogs. There’s these, [00:46:00] uh, things that have been pushed out. But the point is that if it’s entertaining... Steve: it Cameron: You know, we’re still at this weird period of time, I think, where we’re questioning, is it fuo or rupo, which is what I asked you when I sent you The Velvet Sundown.
But we’re gonna quickly reach a point, I think, where we don’t even ask the question. If I stumbled across this music on Spotify, hadn’t read any of the media, it just popped up in my, you know, recommended new things to listen to, I would’ve gone, this is good, I dig it. I would’ve listened to it. I wouldn’t have questioned it. Steve: Well, we already have that in many ways in the movies. You’ll watch a movie and you don’t care whether the scene was actually filmed, or the explosions are AI. You’re just like, am I digging it? And I think that entertainment especially is the “am I [00:47:00] digging it?” I do think there will be a new kind of genre, because categories tend to split rather than aggregate, where it’s like, there’ll be a category where it’s, this is a live band, this is an AI band. And you might have to flag that. And I don’t mind that as an idea. Some people won’t even check and won’t care. But you might have to, at some point, flag it and say, this is AI-generated. I listened to it. I didn’t really like it, personally. I thought “Fake Everything” was a far better AI-generated song, I’m just saying. But I don’t think it matters. It does point to one important thing, though: The Velvet Sundown are definitely using the tactic at the moment, which is, is it AI, isn’t it AI, which is one of the great marketing tactics right now, fuo or rupo, and smart brands are making something and saying, this is AI, or this isn’t, or making people guess. And that’s a great [00:48:00] way to get attention in the attention economy at this point in time. Cameron: Yeah, I still believe that, uh, brands will very soon, if they’re not already, be creating their own bands, their own social media influencers, and then sneaking their advertising and marketing and promotional messages into the content. What was I watching? I was watching, um, some crazy movie from the two thousands the other day. Uh, can’t remember what it was, but I...
You know, there was just so much branding in it. Like, you’d see the mobile phone, uh, brand, where the person picks up their mobile phone and they’re holding the brand in front of them, and they’re drinking a soft drink and the logo is turned to the camera, and it was really in your face. Steve: Enjoy Pepsi-Cola. [00:49:00] Well, if you don’t have Cameron: And Steve: If you have an artist who is a bestselling, or most downloaded or streamed artist, all of a sudden you don’t Cameron: oh, Steve: have a rock-and-roller cola war. Cameron: It wasn’t a film. Uh, it was a Beyoncé clip. It was an over-the-top Beyoncé clip from 15, 20 years ago. I can’t remember what it was. Um, but the film clip, which was super high production, and, you know, massive budget, massive cinematography, big action sort of thing, yeah, the brand positioning in it was insane. I was like, okay, well, no guesses who paid for most of this video clip, right? Steve: Yeah, Cameron: So I think we’re gonna see that. But we’re gonna have books written by AI, and movies and TV shows and music, and, you know, some [00:50:00] people will Luddite their way through it and go, no, I refuse to watch this, I need to know if it’s real or fake first. But I do think for the majority of people, and I include myself in this, it won’t even be a question. Is it good? If it’s good and I like it, then who cares? Steve: So I would love a new Rage Against the Machine album, and if the Cameron: Me too. Steve: band can’t get together and say, Cameron: If Zack de la Rocha can’t get his fucking shit together with Tom and make another one, then fuck it, I will listen to a fake Rage Against the Machine album tomorrow. Steve: I’d listen to a fake Rage Against the Machine album. I’ll fucking make the fake Rage Against the Machine album. Cameron: I will even listen to covers of Rage Against the Machine. I was watching...
I just rewatched the fourth Matrix film, whatever the fuck it was called, Steve: Right. I don’t know if I’ve seen it. Cameron: The Matrix Resurrections, uh, to see if it held up any better. And it, I enjoyed it maybe a little bit more the second time around, but it’s still [00:51:00] kind of not very good. But the final track is, um, a cover of ‘Wake Up’ from the final credits of the original film, with a woman singing it, and it’s kind of, it’s stripped back. It’s just sort of a drum and bass with her doing the lyrics. And I was like, you know, it doesn’t hold up to the original, but, uh, it’s still okay ’cause it’s a great track. Steve: yeah. Cameron: Could have been AI for all I know, but yes, fake media is gonna become a bigger and bigger thing, like it or hate it. Speaking of that though, how much time have we got? Eight minutes. It’s come out, there’s been this court case against Anthropic. It came out that Anthropic purchased millions of physical print books to digitally scan them for training Claude. And they [00:52:00] won the federal court case. Um, Steve: That is an absolute disaster that they won the court case. Cameron: you think. Steve: Absolutely. I. Cameron: Judge William Alsup of the United States District Court for the Northern District of California ruled in favor of Anthropic, finding that the company’s use of purchased copyrighted books to train its AI model qualified as fair use. While the case centered on emerging AI technologies, the implications of the ruling reach much further, especially for institutions like libraries that depend on fair use to preserve and provide access to information. This is a blog post from the Internet Archive, which I’m a big user of. In this case, publishers claim that Anthropic infringed copyright by including copyrighted books in its AI training dataset. Some of those books were acquired in physical form and then digitized by Anthropic to make them usable for machine learning.
The court [00:53:00] sided with Anthropic on this point, holding that the company’s format change from print library copies to digital library copies was transformative under fair use factor one, and therefore constituted fair use. It also ruled that using those digitized copies to train an AI model was a transformative use, again qualifying as fair use under US law. Steve: Again, stealing their raw materials to make a product. It is not fair use, because AI has an unfair advantage compared to a human using something and learning and, and putting their own creativity on top of it. My honest opinion is this is the same disaster that happened when everyone let Google crawl their websites free in every search engine, and then they stole all the traffic and all the revenue, and they basically just put a thin layer of innovation on top of a whole lot of people’s hard work. I, I, I think it’s a disaster. I think that copyright in many ways is over the top. [00:54:00] I’d cite Disney here, stealing stories and extending, uh, the copyright periods, among other things, but this doesn’t feel as though it is a fair use, because you have a non-human ability to digest that information and create new value where the original content creator is not in any way rewarded. That’s my view. Cameron: This goes against everything that you’ve said to me on this show over the last couple of years, Steve, Steve: Well, Cameron: you’ve just one-eightied, you’ve just, you’ve just one-eightied this whole thing. Steve: I haven’t one-eightied it. No, I haven’t one-eightied it. I think the way they went about it and bought copyrighted materials and put them in there is very, very different to scouring the web. That’s what I think. They’re two different ways. Cameron: So if I buy a hundred books and read them [00:55:00] on Julius Caesar and then go write my own book on Julius Caesar, based on what I’ve read, Steve: Yes. Cameron: okay. Steve: Yeah, because it’s like having a running race Cameron: I.
Steve: running on their legs and someone having a motor vehicle. They’re two different things. They’re two different categories. They’re not the same category. That is fair use if you do that, because you’re not an AI, Cameron. I know you’re the world’s most intelligent man, but you’re not an AI with superpowers. So they’re basically scooping everything up and then spinning it out. I think that they should be able to, uh, use the books and create the AIs, but I think there should be some form of distribution, like what the music industry did with radio stations and TVs for years, where they have like a licensing fee or something like that, where you get a distribution, which wouldn’t be a lot of money. It’d probably be, say, 50 cents to every author if your book’s in there, but it would pay homage to the fact that the raw materials come from somewhere. [00:56:00] So I think there should be some kind of a licensing or royalty structure where the AIs and the companies running these AIs have to in some way participate in the economy underneath it, which makes what they do possible. Cameron: Hmm. Yeah. Oh look, I fundamentally disagree, and I don’t think that’s even workable on a practical level. But, uh, you know, I think the fact that we’ve built, we being the human race have built, a tool that is more efficient at writing or producing music or producing film or whatever it is, um, is a tremendous thing. The fact that it can do something better than humans. Steve: I don’t want it to stop. What I, what I just think, and you’re right, practically it’s a very, very difficult thing to do, right? It’s very difficult, and the music [00:57:00] industry tried to stop people from downloading and going streaming and all of that, and it’s sort of leveled itself out and found a way forward with Spotify and so on. But I do think that there’s a precedent within the digital economy where thin layers of innovation, and thin is probably a bit disingenuous.
Innovation is layered on top of something previous, but they historically have not paid for their raw materials. And it’s created enormous wealth inequality. It’s created, uh, too much power in too few hands. And I feel like we haven’t learned the lessons of the first digital era, where the large companies basically hoovered up everything, with their raw materials got for free, and distributed it, and put more money into fewer hands. I want the technology. I think the technology’s good, but I think in some capacity we need to find a way so that the corporations creating this new technology, that we all want and I want and I don’t want them to stop, participate in some way [00:58:00] in the economy that made it possible. That’s what I’m Cameron: So what you’re saying is we need a UBI is what you’re saying. Steve: No, I’m definitely not saying Cameron: are, you are, you’re just using different words for it. But you’re basically saying these companies are gonna make a lot of money out of this. So they, Steve: Universal basic income. Cameron: they, Steve: It’s not Cameron: they need to redistribute those funds in some way that everyone gets, uh, participates in that. So it’s a UBI. You’re just Steve: materials. No, I’m talking about the raw materials that went into it. Cameron: Yeah. You’re talking about a UBI for authors. Steve: Raw materials, Cam. Cameron: The basis of a UBI in terms of an AI world is that the AIs, you know, fund it all. Steve: Well, there’s a better way. Just have AIs run everything and everything be free and everyone has access to everything. Cameron: It’s UBS Steve: from the economy Cameron: Basic Services. Yeah. Yeah. All right. We’re coming up to an hour, Steve. That’s it. We’re done. We’re out. You good? Steve: Yeah. I’m so Cameron: Great. Steve: [00:59:00] I think we Cameron: That’s good. Steve: today and we had some disagreements, and I think no one wants the Mutual Agreement Society on a podcast. I’ve always said that, Cam.
In fact, I’ve never said it, and if I say I’ve always said it, I’ve never said it. And that’s the first time. [01:00:00] [01:01:00]
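Steve’s proposed licensing structure from this episode, a flat per-author fee paid by the AI companies whose models train on the books, can be sketched with some rough arithmetic. Everything here is illustrative: the 50-cent figure is Steve’s, and the corpus size is a hypothetical round number, not Anthropic’s actual count.

```python
# Back-of-the-envelope sketch of Steve's flat-fee licensing idea.
# The 50-cent-per-author figure is his; the corpus size is hypothetical.

def training_royalty_pool(books_in_corpus: int, fee_cents: int = 50) -> float:
    """Total payout in dollars if every book's author gets a flat fee
    each time the corpus is used to train a model."""
    return books_in_corpus * fee_cents / 100

# Anthropic reportedly scanned millions of print books; at 50 cents each,
# even a 5-million-book corpus costs a tiny fraction of a training run.
pool = training_royalty_pool(5_000_000)
print(f"${pool:,.0f}")
```

Whether a registry of whose book is in which corpus is practically workable is exactly Cameron’s objection in the episode; the arithmetic itself is the easy part.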

  5. 6

    Futuristic #42 – The Jobpocalypse

In Episode 42 of The Futuristic, Cameron and Steve dive deep into the chaotic beauty of 2025’s AI evolution—and cultural regression. They open with a debate about Doctor Who, scarves, and wogs, before rapidly spinning out into their usual high-octane synthesis of tech, politics, relationships, and dystopian laughs. This week, it’s all about whether AI video is fake (or too real), the future of humanoid robots, how ChatGPT is becoming a marriage counsellor, and the looming collapse of white-collar work. Plus, Cameron drops a 90s-style AI rap, Steve defends plumbers against the robot uprising, and the boys seriously consider launching “Elongate”—Elon Musk’s red-pill boner brand. You’ll laugh. You’ll cry. You’ll question your humanity. Again. FULL TRANSCRIPT   [00:00:00] Cameron: Futuristic Cameron Reilly and Steve Sammartino in, episode 42. I think maybe, um, Steve, you just told me off air before we came on that you’ve never watched Doctor Who, because you’re wearing a very Tom Baker-y scarf here. I said, oh, it’s the fourth Doctor. And you were like, what? And I’m like, no, really? Uh, I know you come from a, a, a wog family, Steve, but Doctor Who wasn’t a thing growing up in your, in your house. Steve: Can you even, can you even say that? That’s Cameron: I dunno, Steve: Racist. Cameron: were you offended? Are you offended by, uh, being called a wog, Steve: look, Cameron: mate? I would’ve loved to have been a wog. Steve: this is not a social podcast or one that gets into, uh, non-technological things. Cameron: mate. I would’ve Steve: offended. Cameron: grown up as anything. Yeah. Steve: That’s why Cameron: Reminds me of. Steve: a little good, a little bit of a sniffle. Cameron: [00:01:00] Um, Steve: I haven’t watched Doctor Who. I was a big fan of Star Trek: The Next Generation with Picard. I think that was the ultimate sci-fi series. But I haven’t watched Doctor Who, so I can’t really say, and I. With all the things that are still on my to-watch list, I don’t think I’ll get to it, unfortunately.
One thing I could just strike off my to-do list, because let me tell you, as you mentioned, Cam, too many things to do, not enough time. Where are the agents? Cameron: Mate, everything you need to know about me you could tell from Doctor Who, Monkey, Star Wars and Carl Sagan’s Cosmos. Those four things pretty much entirely designed the rest of my life, I think. Steve: And Seinfeld for the, for the social nuances of humanity. Cameron: I was in my twenties by the time that came out, but yes. And Seinfeld, 19, 20. Um, Steve, uh, been a week or two since we’ve chatted. Uh, I mean, God damn, man. Been a big week in many ways. Uh, [00:02:00] not the least of which is war in the Middle East. But, um, from an AI, uh, futuristic perspective, Steve, hit me, hit me with your best shot. Steve: I am so suspicious about all the AI videos. I do not believe for a hot minute that most of the AI videos I. That we have seen, that are just in all of my feeds, TikTok, Instagram, LinkedIn. I don’t believe for a minute that all of them are just a few prompts and bam, there they are. I reckon they’re heavily edited. They’re, people have worked for days on them, because if you say, here’s an AI video that I made, everyone’s like, wow, how’d you do it? They’re such good prompters. They’re good editors. You heard it first on the Futuristic. Cameron: So you are going, you are, you’re doing a, a classic, you, you’re doing a Charlie Munger inversion thing here, because I watch videos and I go, I don’t believe that’s real. I believe that’s AI. You are watching going, I don’t believe that’s [00:03:00] AI. I believe that’s real. Steve: I’m inverted. This is an inversion, I’m telling you now, and that’s because people are so wowed by some No, I swear. Cameron: Yeah. Yeah. Steve: I’ve tried to make a video clip for our song, which our loyal listeners will, will remember, the Fake Everything punk rock song, and Cameron: Mm-hmm. Steve: not done, it’s been really, really hard to get even the pieces together.
I’ve gone to a number of different video formats. I even went to ChatGPT to find clips on briefings on the 37 lines. So hard. There is an infinite amount of editing going into these videos, for sure. Zero doubt. Please prove me wrong. Send me the link to stevesammartino.com. Go on there, on the one where I can just put in the prompts and get these videos, because they’re all heavily edited. You heard it here. Cameron: I’m glad that you said that, because I’m planning this week, uh, if I can get through my task list enough to get to the tasks that I want to do and not that I have to do, [00:04:00] is to play around with Veo 3 to start making some, you know, I mean, I’m sure, I’m sure you’ve seen them. The one of the big trends with those videos is the, the fake selfie vlog, and it’s, uh, it’s a Yeti or a. Steve: of Cameron: Yeah. Steve: yeah, Cameron: I was saying to, I was saying to Ray, who I do my history shows with, that’s our, that’s our fucking bailiwick, right? History selfies. Julius Caesar talking about crossing the Rubicon. Steve: Bailiwick. Word. Cameron: That’s my bailiwick. I’m glad you like that. Yeah. So I was gonna, uh, try and knock some of those out this week, and I was like, shit, I, I, I did go looking for Veo 3 prompting strategy stuff and I filed it away, but I haven’t tried yet. I have been making some music this week, inspired by Fake Everything, uh, theme songs for different podcasts. I tried to do one for this. No, I didn’t try and do one for this ’cause you’ve already done one for this. I did one for QAV, I did one, tried to do one for my Renaissance show. It didn’t quite work. I did do a rap song, which I’m gonna play Steve: [00:05:00] Great. Cameron: I was actually just, so I heard about this tool called MiniMax. Dunno if you’ve ever played with that. MiniMax dot, uh, I, I maybe io something and, um.
I wanted to make just a hip hop background track that I was gonna rap over with some lyrics that I wrote from one of my podcasts, my Renaissance show. And, um, but it, it actually produced the lyrics and everything for me, and, and like, it, it, it added voices, I guess is what I’m trying to say. And I was actually kind of impressed. Let me see if I can play this and you can hear it. Check them. Mic one, two. This is how we do 95. Feels a five deep inside the groove. Full fall, loose, spinning round. The sonic attack Beastie flow. Public know we always got your back. Boom, back blueprint. Tearing down the walls, every sample, every break. Answering the calls from the S to the mic, G by the loud moving body. [00:06:00] Rocking my standing out the crowd. So that was, uh, I, again, I didn’t tell it either. I just said, gimme like a classic nineties hip hop beat with some samples, and it wrote that whole thing with the voices and the lyrics. I was like, oh shit, that’s actually really good. But then a mate of mine sent me, uh, this, he goes, oh, this is, this is my favorite track at the moment. It’s called The First Time in My Rectum by, uh, Steve: Really? I, I Cameron: I didn’t get that. By Banned Vinyl, and whoever this is, they’ve got a whole bunch of tracks that they’ve put together. Um, oh, Glory Hole. Um, When My Surrenders, Suck Your Love Pump, like they, they’ve done ’em in like all sorts of, you know. Steve: a Spinal Tap. I. Cameron: Spinal Tap song. That’s what it sounds like. Yeah. Steve: well Cameron: ‘Lick My Love Pump’, I think you’re thinking of. Steve: He said ‘lick my love pump’ or something when he does the, Cameron: So, um, they’re using it, uh, it’s well done, using AI to create comedy, uh, dirty comedy songs, which I’m all for. [00:07:00] So anyway, the, the, I dunno about the video side of things, but certainly the, the audio side of things is really becoming insanely good.
Steve: And, I, and I would just wanna add to, to that point, Cam, is that it’s a short-term suspicion. So, definitely we’ll get to a point where videos will be just all prompting and, and nothing else and no further editing. But given how much I’ve played with the tools and know their capabilities, I think a lot of the ones that we’re seeing now are quite heavily edited, and that’s a short-term aberration, and it’s kind of how, it, like you say, there’s an inversion where people want the AI to be better than it is. So much so that they’re pretending it’s AI. And we’ve even seen that in a corporate instance as well, where a number of startups have pretended everything’s generated by AI, but there’s, you know, a thousand coders in India doing something. Even way back, Jeff Bezos with his Amazon Go store, a bunch of people looking at cameras clicking when the person picked up an item, and it wasn’t all, [00:08:00] uh, as it was cracked up to be. Cameron: So instead of FUPO, fake until proven otherwise, you are RUPO, real until proven otherwise. Steve: We’ve got FUPO and RUPO. That’s what we’ve got here. Fake until proven otherwise. Cameron: Wake it. Welcome back to another episode of The Futuristic with FUPO and RUPO Steve: Well, this Cameron: and Steve: that’s Right? Is that and, in fact, this, this is actually the point, Cameron, is, is this fake or was this generated by AI? The point is, fake until proven otherwise is really the world we live in now. And, and we don’t know whether it’s either way, actually. Cameron: Does it matter if it’s entertaining? Steve: Well, in entertainment it doesn’t matter. No, I couldn’t care less. Right. Cameron: If it’s news, Steve: it’s Cameron: did Trump really bomb Iran or is it all fake news? Steve: I, Cameron: Who knows? Steve: How long would it take for you to say that? Cameron: I. [00:09:00] Well, I, I wanna talk about the other interesting thing that happened to me from a futuristic perspective.
Um, Chrissy and I have, uh, we’re having a marital, um, what would you call it? Disagreement. And we, we struggle sometimes to have conversations over highly contentious, well, even topics that I don’t think are contentious, but she does. Well, you know, if one of us thinks an issue’s contentious and the other one doesn’t, it can be difficult. We have very different personalities. She’s got ADHD, I’m autistic, and, you know, that, that’s a good blend. A lot. Yeah. Apparently. Um, have I shown you the, uh, sticker on the back of my phone I got made up, Steve? I’m showing you that, surely. Steve: No. What is it? I can’t read. It’s too pixelated. Cameron: It says, I’m not being an asshole, I’m just autistic. Uh, that’s what I show people whenever they take my bluntness. Steve: doesn’t mean you’re not an asshole. Cameron: Yeah. It’s, it’s, a friend of mine says, you’re just missing the ‘and’: you’re an asshole and you’re autistic. It’s not, it’s not [00:10:00] binary, you know. Steve: Just, because. Yeah. Cameron: Anywho, back to my point. So what she suggested we do, uh, last week, was communicate on a particular topic via email, but have ChatGPT as the intermediary. So she would write what she wanted to say and then give it to GPT, and it would tone it down or rephrase it, and then, um, she’d send the edited copy to me so it was nice and, uh, toned down. And then I would reply and run my reply through ChatGPT, and we’re using ChatGPT as the intermediary to make sure that our, we’re saying what we wanna say, but saying it in the nicest possible way, using a thing called the Gottman Conflict Resolution Framework, which a therapist of ours mentioned years ago. And, um, I thought, oh, that’s interesting, right? So it’s using ChatGPT as a marital, uh, therapist and intermediary for [00:11:00] challenging conversations. I mean, it’s kind of weird to have, uh, that kind of an AI intermediary, but you know, it is what it is, and, uh, I’m like, okay, well, it worked.
It gave particularly her an opportunity to feel like she was able to communicate stuff in a way that was non-confrontational, and, and, um, that my replies were not confrontational, which they’re normally not, ’cause I’m a lovely, nice guy, but, you know, um, yeah, so ChatGPT is a marriage counsellor. I’m always very calm, Steve. I think that’s my problem. My problem is I’m too calm when people don’t want me to be calm. They think I should be. They shouldn’t be. Or, or whatever, and I’m just, like, Spocking my way through it, and, um, you know. Steve: Wow. [00:12:00] So, uh, you mentioned on the podcast the idea of AI diplomacy a while ago, Cameron: Mm-hmm. Steve: your view that it would probably be better Cameron: Ah, mm. Steve: using AI as an intermediary to communicate issues, and I guess it could, large language models could take into account cultural differences and the nuance of language, and the idea of AI marriage counselling, I think, is pretty cool. I would like to ask AI some personal things that I’m working through, but I don’t trust it much. I trust it with my finances, but I just don’t trust it enough to put things in there that, I just, they just have to stay in my head for now, because I just couldn’t bring myself to it. Now, if I suggested to, to my wife that we’re gonna have AI diplomacy between us, I can tell you that would be met [00:13:00] with a negative response, is my guess. How many people would do that? Cameron: I think at the moment it’s probably the minority. Um, uh, but I think it will become a thing. I think it’ll be a big thing. I’m predicting, this is my futurist forecast, five years from now it’ll be pretty commonplace. And, and you know what it has suggested, and I think Chrissy has, uh, agreed with or suggested along with it, is in future, if there’s ever a conflict situation arises, which happens regularly in most marriages, I’m sure, that, um, we break and go to GPT, and it’s just, that’s the deal. Okay.
You know, instead of, I’m gonna go cool down for 10 minutes in my room, or I’m gonna go cool down or whatever, and blah, blah, I say, I’m gonna cool down, let’s take this to, let’s take this to the umpire, right? So you go to GPT. It’s not, there’s an umpire, I’m kidding. But you say, this is what I wanna say. [00:14:00] How can I say it in a more loving tone or a more caring tone, or, or a less confrontational tone? ’Cause when you are, when your dander is up, when the cortisol is flooding, when the, when the adrenaline is flooding through your system, it’s very hard to communicate calmly and rationally and, and objectively, uh, uh, so you have it as a, you know, how people use it for business emails, I’m sure. I, I mean, I don’t use it to write emails, but, um, same sort of thing. Hey, I wanna say this. How can I say it in a better way? Go say it like this, and I don’t have to worry about, you know. Was it, um, you who said to me on an episode that your daughter said to you, it’s not what you say, it’s, uh, no, it’s, no, it’s not. It’s who wrote it. Steve: it’s, that’s not what something is. It’s where something came from. Cameron: That’s right. So in this case, Chrissy was like, no, [00:15:00] no, let’s use GPT as the intermediary. I think that’d be a good idea. So I don’t have to worry about the fact that it sounds like an AI wrote it. ’cause Steve: Mm. Cameron: that’s part of the, it’s, it’s a, it’s a feature, not a flaw, you know. Steve: that, in our situation is, I, I get told that my tone’s wrong, and if I’m not saying anything, I can’t be accused of that anymore. So that would be a no-go zone in our situation, because that, that, that needs to be one of the tools at the disposal of the opposing party. That’s all I’m saying, actually. Cameron: But that’s how it gets removed. And I get the same thing. It’s not what you say, it’s either my tone or the look on my face. I’m going, well, I can’t.
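The tone-softening loop Cameron describes, write the raw message, have ChatGPT rephrase it, send the softened version, is simple to sketch. Everything below is a hypothetical illustration: the system prompt wording is invented, and the dict simply mirrors the general chat-completions request shape, ready to hand to whatever API client you use.

```python
# Hypothetical sketch of the ChatGPT-as-intermediary workflow from the episode.
# The prompt wording is invented; nothing here is the couple's actual setup.

SOFTEN_PROMPT = (
    "Rewrite the following message so it keeps its meaning but is "
    "non-confrontational, in the spirit of the Gottman method's soft "
    "start-ups and 'I' statements. Return only the rewritten message."
)

def build_soften_request(message: str, model: str = "gpt-4o-mini") -> dict:
    """Package one partner's raw message for the tone-softening step."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SOFTEN_PROMPT},
            {"role": "user", "content": message},
        ],
    }

# Each partner runs their draft through build_soften_request(), sends the
# payload to the model, and forwards the reply, not the original, to the
# other person.
```

The design point is that the raw message never reaches the other partner directly; the model sits in the middle, which is exactly why Cameron calls the AI-sounding tone a feature, not a flaw.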
Steve: The point is, Cameron, is that you are wrong, and we don’t wanna remove anything that could reduce blame upon you, is the point. Cameron: Okay. Well, that’s, that’s tough, Steve. But anyway, that’s my, that’s my experience. I was listening, there’s, I’ve been listening to a lot of Sam Altman. He’s, he’s doing a lot of podcasts for some reason recently. I think [00:16:00] it might be to, um, create, um, media coverage of the fact that the OpenAI Files, which dished a lot of dirt on him, nothing new that I could tell, but they just came out. A lot of, uh, uh, publication of the various complaints that former employees and former board directors and former, uh, whatever staff have had about Sam over the years. And so he is doing a lot of podcasts. But a couple of interesting things I wanted to talk to you about. So on one, it was, I think, the, uh, Y Combinator, uh, podcast I was listening to, they were talking about the future. He said he’s excited about the day in the future when, when you sign up for a particular OpenAI subscription level, they send you a free humanoid robot. Steve: Wow. Cameron: [00:17:00] My first thought was, when, when AI has taken all of our jobs, how am I gonna have the money to pay for an OpenAI subscription to get the robot in the first place? But leaving that aside, Steve: And I, that was something I argued with you deeply. I, and you said, I said, that’s why it can’t happen. Can’t Cameron: why? What can’t happen? What can’t happen? What are you arguing? Steve: my argument again is that you can’t have massive unemployment and large companies continue to survive, because who is gonna pay for their products and services? Anyway, it doesn’t matter. We’re, we’re, we’re not here for that. Cameron: Well, well, I, I mean, no, I, I think that’s a good discussion. I agree with you.
But, uh, which, which do you say? You don’t think massive unemployment’s gonna happen, Steve: I think Cameron: or you don’t think big companies are gonna survive, or, well, Steve: With massive unemployment, the Cameron: I. Steve: companies don’t survive. So you either have massive unemployment and the big companies also crumble, or you don’t have massive unemployment and new revenue streams emerge and things get lower in cost, and [00:18:00] then that money transfers sideways. Cameron: I think if the big companies raise, well, I think you can, but for a limited period of time. So if, if OpenAI raises a trillion dollars in venture capital and its burn rate is a hundred billion dollars a year, then we can have massive unemployment, nobody has any money, and they’ve got a runway of 10 years to figure it out. Yeah, right. But, you know, if you listen to these guys, Sam, Demis Hassabis, Dario Amodei, Elon, et cetera, they all have a version of the same story these days, which is AI and ubiquitous humanoid robotics are gonna lead to a, a, a glorious utopia where everything is gonna be done for you. And, um, we, we’re gonna [00:19:00] solve all the scientific problems and all of the medical problems and blah, blah, blah, blah, blah. But there’s going to be a really difficult transition period between where we are today and that point in time. We dunno how, we dunno what that’s gonna look like. And I think that is a, that’s a decade or two of great upheaval, socioeconomic upheaval, where people start losing their jobs, 5%, 10%, 20%, 30%, their identity. But the, but governments don’t get income tax, uh, from those people anymore. They may get it from another source, maybe tariffs, but that will cause great disruption in the abilities of governments to pay for social services, policing. At the same time, we’re gonna have humanoid robots that will be doing our police force. It’s gonna be another whole issue.
[00:20:00] You know, Sam was talking on one of these podcasts about, I think it was the one his brother Jack Altman hosts now, which is, I think, the official OpenAI podcast or something, um, about, you know, as we’ve talked about before, the, the whole AGI, what’s AGI, what’s not AGI, what’s the singularity, what’s not the singularity. And as Sam said, and we’ve said this a million times, if you went back in time five years ago and told people what AI can do right now, in June 2025, they would go, that’s AGI. Um, that’s, and they would think that if you had that, it would completely have changed the world in dramatic ways. And yet, Steve: Here we Cameron: we have it. And everyone’s just kinda, oh, ho-hum, it hallucinates, right? Steve: Yep. Cameron: Ho-hum, it’s not perfect, it hallucinates, et cetera, et cetera, et cetera. Steve: It comes back to the classic definition. AI   Steve: AI, the, my, my favorite definition of AI is what [00:21:00] computers can’t do yet. Cameron: No. What humans, oh yeah, sorry, what, what computers can’t do yet. Yeah, yeah. No, you’re right. Steve: computers can’t do yet. Yes, Cameron: Yeah. Yeah, you’re right. But so he’s saying that’s a little bit kind of frustrating, because he feels like they’ve built this amazing thing, and everyone’s just kind of adapted really quickly to it. But he said what I think the point in time when people will stop and go, oh shit, we are in the future, is when half of the people you see on the streets are actually humanoid robots. And as I, I’ve been saying, I think when the police force is mostly robotic and you have robots telling you not to jaywalk or not to litter, or pulling you over and giving you a speeding ticket, that’s gonna be an interesting point in humanity, when we have robots telling us what we can and can’t do. I think people are gonna struggle with that, but there is gonna be this transition period that I think is gonna be really messy.
Where we are not gonna have the incomes, and people aren’t gonna be able to pay for stuff, unless [00:22:00] we have a UBI or UBS or something that comes in to fill the gap, or I’m wrong, and everyone who loses their job as a result of AI or robots is able to somehow figure out something else to do to create an income. I was having this, so my friend Peter Ellyard, who before you was Australia’s leading futurist, he was, he’s Australia’s oldest futurist now. I think he’s, uh, he’s 88. He’s in Brisbane at the moment to do some stuff with the University of Queensland, and I, I was having lunch with him yesterday, and we were talking about this, and he said, well, it’s, yeah, you know, you’re thinking in terms of jobs. It’s not jobs, it’s careers. We need to think in terms of careers and meaning and all that kind of stuff. But at the end of the day, you still need to earn some fucking money to pay for stuff. Like, it doesn’t matter what you call it. Call it a job, call it a career, call it whatever you want. You gotta have something in our current socioeconomic construction to earn an income, and I can’t figure out what humans can [00:23:00] do that AI and robots won’t do better. Well, yes, if we have that, then it all goes away and it’s not a problem. Yeah, sure. Steve: things might Cameron: But we’re not gonna get there for a minute. So Sam was saying on one of these podcasts, what is the number of robots that we need to build the old-fashioned way before we have robots building robots that build robots? He said, is it, is it a million? Do we need to build a million robots the old-fashioned way before we have enough that they just start building robots? Because when we have robots building stuff, including robots, to build more robots, to build more stuff.
And when we have nanotech, and we’re a long way, it seems, from functional nanotech at this point, but at least if we have robots, you, you have to think that the cost of stuff drops dramatically at some point when human labour is removed from the equation. You know, people keep talking to me about China and going, [00:24:00] oh, my mother said to me yesterday, China screwed up with the one child policy. And I said, no, they didn’t screw up. If they didn’t have the one child policy for those decades, there’d be 10 billion Chinese on the planet today, not 1.3 or whatever it is. And they would’ve all, you know, people, they would’ve starved, would’ve had massive famines and massive economic issues. But the, now they, Steve: just, just Cameron: yeah, maybe. Maybe she’s like, but now they don’t have enough people to, Steve: aging Cameron: well. Steve: is a, right, it’s just, just an Cameron: Uh. Steve: An aging population won’t be a problem forever unless we develop nanotech and people live forever. Cameron: Well, we’ll have robots though. So we’ll have robots looking after the aged population, and if there’s still a requirement for money to pay for infrastructure and food and that kind of stuff, it’s a different issue, but we, we will have robots caring for the elderly within the decade. Steve: I think I agree with Sam Altman. Once we see robots everywhere is when we know that the world has changed, because I [00:25:00] think it’s pretty easy for the world not to seem as advanced as it is, because the AIs are trapped inside devices that have been around a really long time. Even a smartphone is, it’s not too dissimilar from the mid-nineties idea of a, a Cameron: Hmm. Steve: and laptops have been around for 40 years. Uh, so things don’t feel different, and there’s a certain Cameron: Hmm. Steve: that allows you to arrive at a moment when things feel and look different.
I think if we look at automobiles: for a long time, automobiles didn’t feel futuristic, or like they’d changed, until electric cars started to take on a different design perspective, physically. And that gives you, I think, a sense that the world has changed a little bit. Even in cities, when we moved away from gaslights to LED lights, the cities looked a little bit more futuristic. So I think there’s this physicality that makes us realize that things have shifted, because we are physical creatures and we live in a physical world. There’s a physical reality, [00:26:00] and humanoid robots, I think, are going to be that moment. Just as a side note, Geoffrey Hinton was talking about this, and our perspectives are really important. I listened to a podcast with him on The Diary of a CEO, where he was espousing his fears again about AI taking over. But the one thing he said that surprised me: he was asked, what would you recommend to a kid today to learn? And he said, go be a plumber. Sure, there’s a physicality there, but I think he’s really underestimated the impact of humanoid robots, because I think by the end of this decade there’s gonna be a humanoid robot that can physically change its shape and get under any house and into any roof and do everything better than any plumber ever could. And he’s missed that because his context is a software world, the context of the world that he lives in. One of the smartest guys, the godfather of AI, still has his own context and [00:27:00] perspective influencing what he thinks. I think Altman’s right on this occasion, and sometimes I think people who have technological understanding, but also have societal and business viewpoints, see things a little bit more broadly. Cameron: What if OpenAI is sending you a free humanoid robot with your OpenAI subscription, à la a mobile phone plan? Pay your monthly fee, get a robot, instead of pay your monthly fee, get a mobile phone.
Once you’ve got that, what’s that humanoid robot gonna do in your house? It’s gonna do the plumbing. It’s gonna mow the lawn. It’s gonna… Steve: Everything that is… Cameron: Hmm. With a super-intelligent AI running in it. Steve: Everything that you don’t want to do. And so I might want to wash my car on a certain occasion, or I might want to do the lawns if I feel like it, different times in different places. But you won’t have to, is the point. Cameron: Well, getting back to your form-factor discussion, Sam talked about that too. He made the same point you did, that one of the reasons it doesn’t seem [00:28:00] as futuristic as it is, is because we’re using 20th-century devices. In some ways, I mean, the iPhone and the iPad and the Apple Watch are early 21st century, but basically it’s a computing device that we’re used to, to deliver this. And he keeps, you know, sort of hinting at this new thing they’re coming out with that Jony Ive is designing, which he thinks is really gonna take it to the next level, and it’s gonna seem more futuristic. But I imagine it’s just gonna be some sort of screenless carry-around or wearable device that you’ll use voice to chat to. I can’t imagine it’s gonna be any more mind-blowing than that, but it’ll be always on: listening, recording, chatting, available for you. So we’ll see if that makes people feel like it’s… Oh, I dunno if we’ve talked about this, but I wanted to talk to you about glazing. You know, this thing that ChatGPT does. We’ve all seen it. When you’re having a conversation with it, it goes, that’s not just X, that’s Y. You’re not just cooking a meal, you’re reinventing molecular gastronomy. Steve: [00:29:00] Well done, Steve… Cameron: And… Steve: …what an idea on that blogcast. Not only that, blogcast… there’s a new one: blogcast, podcast, or blog. Not only have you… Cameron: That’s what they were called.
Steve: …and it’s amazing, you’ve just phrased it beautifully. There’s really nothing I can correct. Here’s a couple of errors, but you’re the… Cameron: Yeah. Steve: …dude. Cameron: So I was on a Reddit thread the other day. People were complaining that Gemini is doing that as well now, and everyone was complaining about it, and I was like, seriously, you people, like, fucking get a life for a start. I dunno why you’re so upset about it. I find it hilarious, and I read the best ones out to my wife. We compare the best glazes that we got that day and how funny they are. But I said, trust me, you know, 10 years from now you might be looking back on this time and going, remember when my biggest problem with AI was that it was complimenting me too much and trying to make me feel good? Now it’s hunting me down. And you might be saying, this isn’t just I, Robot. This is full Terminator, T-1000. Steve: T-1000. Cameron: Great. Be grateful for the days when the [00:30:00] AI is being super nice to you, because it may be hunting you and trying to kill you in the not-too-distant future. Um, speaking of… what was I gonna talk about? AI, glazing, Reddit… ah, fuck. I had a story there, popped into my head, and it’s gone. We were talking about Zuck. Zuckerberg is apparently not happy with Meta’s AI efforts, feels like they’re falling behind. Oh, and then we got the news, there’s a rumor that Apple’s gonna try and buy Perplexity after their dismal WWDC announcement the other day. But Zuck is offering $100 million signing bonuses to AI engineers to leave OpenAI and DeepMind and Anthropic and go work at Meta. And Sam’s been kind of making some snide remarks about that, because, he says, so far none of our people have taken the offer. He’s like, because really, I mean, if you are one of the world’s leading AI engineers, do you wanna take a job for the short-term [00:31:00] money?
I mean, it’s a lot of money, but you know they’re gonna get shares in a trillion-dollar company if they stick at OpenAI. Do you wanna work on a product, or for a company, that hasn’t really been able to execute very well, or do you wanna work at the place that has a very good chance of delivering a historic moment in human history? Like, really, what motivates you as an AI engineer? Is it just cold, hard cash, or is it love for what you’re doing? So… Steve: I think the cash motivates more than the love, but I think there’s a limit where it doesn’t matter. Like, you know, if you’ve got a few million dollars, and you can buy everything that you want and live the lifestyle that you lead, then I think the money doesn’t matter as much, for sure. I don’t know what that number is. What would the top AI people be on at OpenAI? Probably in the millions, their packages would be, and vesting in the tens of millions. So I think you’re probably gonna get the people that aren’t quite as good, to be [00:32:00] honest. Cameron: Yeah. Yeah. I mean, I dunno, we hear lots of stories about Sam not being a great guy. But I know, if I had a choice between working for Zuckerberg or Sam… Steve: Because Zuck is one of the greats, isn’t he? I mean, let’s be honest. He’s such an… Cameron: Yeah. Steve: …that I really think he can just slide up alongside them. But, um… Cameron: Such a fun guy too. He just comes across as such a fun guy to hang out with. Steve: I think both Sam Altman and Zuck seem like really fun guys to hang out with. Look, if someone came to me and said, Steve, come and work for me and I’ll double whatever you earnt last year, I would say no, if I had to work for them full time, because I get to lie on the couch for two, three hours at a time, a couple of times a week, in the middle of the day. I shouldn’t say that, but I do. And I get to go surfing.
I do whatever, and more money wouldn’t make my life [00:33:00] that much better. Even double wouldn’t really make my life that much better. I don’t have a lot of crazy expensive needs. I’m happy to… Cameron: Mm. Steve: …I don’t need private jets, Cam. Public jets are… Cameron: Mm mm. Steve: So I think that at that level, you’re right, it wouldn’t have an impact. But Zuckerberg’s strategy of paying people a hundred million dollars is actually really smart. I think if… Cameron: Desperate. Steve: Well, desperate and smart. Desperate times require desperate measures. Cameron: Hmm. Steve: If he does get some of the greats, and he pays a handful of them a hundred million for five years, it might still be cheaper than an acquisition of an AI company or, you know, getting venture money. And it seems as though they’ve got the cash flow to afford it. So I think it’s actually strategically interesting. Cameron: But you know, it’s just another indication of what level of investment these companies are making. Like, I was talking to Peter Ellyard [00:34:00] yesterday, and we were talking about the dangers and the challenges of AI, and he’s like, well, you know, if the people of the world rise up and wanna stop it, they can still stop it. And I’m like, dude, I don’t think it’s stoppable at this juncture. Right? I think it’s out of our control. Steve: It… Cameron: There are trillions of dollars being lined up to be invested. $500 billion going to the Stargate Project, and that’s just one massive data center, let alone all the others, let alone what’s happening in China. I mean, it’s too late. The cat’s out of the bag. This is happening. AI and robotics are happening whether the human race wants it to or not. It’s not a case of should we do this, or what if we do this? It’s a case of: this is happening to us in the next few years. How are we gonna cope? What are the coping strategies?
It’s copium that we need to be working on right now, not, [00:35:00] you know, thoughts about whether we could do this or should do this. Steve: Yeah, and I’m not sure, historically, if there are any other examples where, even though we know there are, let’s say, some dangers in certain things… I dunno that it can be stopped. If I was to hazard a guess, I would just say there are too many independent players racing competitively, because they’re worried about what the other party might do, so no one will stop. We all seem to have forgotten about that moratorium letter that went out a few years ago. Let’s have a six-month pause. And that was signed by some very thoughtful people, including Elon, his catch-up strategy, one of the greats. But that seems to have totally gone away. I don’t think this is stoppable. Cameron: That was his equivalent of Trump a couple of days ago, saying he was gonna take two weeks to think about whether or not he was gonna attack Iran, and then doing it 48 hours later. Yeah. Steve: Exactly. So I don’t [00:36:00] know if there’s any historical context of other things that… Cameron: Well, there are, I mean… Steve: …we’ve said this is dangerous, and we just forged ahead. Cameron: Yeah, there are. It’s the Luddite story, right? The Luddites were against knitting machines, and they were like, no, no, this is really bad. We shouldn’t have these. This is gonna put all of the knitters out of work. And it didn’t matter. It was happening. Steve: The arms race was like that too. Everyone knew it was dangerous and bad, and then there were some of the great propaganda campaigns on both sides, the East and the West, reds under the beds, that kind of stuff. And we just continued on. And it reminds me of Kurzweil. He said, when he was a kid, they used to have ads that said, you know, in case of a nuclear war, just duck and cover.
And he said, well, it worked, ’cause we haven’t had a nuclear war yet. Which I love; it’s actually a very good dry sense of humor. Cameron: I don’t think it was to stop a nuclear war. It was in the event of a nuclear war. No. I watched a big long interview with [00:37:00] him, a long interview with him a week ago. Didn’t I send you a link? Steve: I don’t think you did, but I think you told me about it. But what’s his current position on the threat? His view, correct me if I’m wrong, is that we will merge with the machine. He doesn’t see them as… Cameron: Yeah. Steve: …he sees us as the one entity, a natural evolutionary… Cameron: Hmm. Steve: …and we become, you know… Cameron: Hmm. Steve: …and Luddite humans, let’s say, two different species. Cameron: Look, I think Kurzweil is a pragmatist. He’s a realist. He knows there are a number of ways it could play out, but his money is on the fact that eventually we will merge with the machines. We will integrate the technology into our bodies. We will become one with the AI and the robots. Most of us; some people won’t wanna do it, but most of us will. He also says there’s gonna be a messy transition period. There’s gonna be a very turbulent period where people are without jobs and no one knows what’s going on. And people will start to [00:38:00] take it more seriously than they are now. There’ll be the people who say, we need to stop it urgently, and there’ll be the people who say, no, we’re not stopping it. And that could break out into all sorts of conflicts. But he is still hugely optimistic. And, you know, with that rug that he’s got on his head, why wouldn’t you be optimistic, if you can go from being bald for the last 30 years to having a massive head of not-very-real-looking hair? Steve: It’s that brown-dyed kind of look on the hair. It’s great.
By the way, you should be wearing your hair out, Cam. I saw a picture of you on a podcast the other day with the flowing locks out. It was lovely. That was the… Cameron: Uh. Steve: I felt like I could marry you. If your AI consulting doesn’t work out, then I’m all up for a same-sex marriage with you, Cameron. Cameron: Hmm. That wasn’t real hair. That was AI-generated hair, Steve. I have to generate my hair. Steve: My position on everything I see is: AI is not AI, it’s heavily edited. That’s all I’m saying. Just like Ray’s hair, yours [00:39:00] has been heavily edited. What I think about with AI is: can we merge with the machines quickly enough so that it’s not them versus us? That’s my viewpoint. And the most common question I get after a keynote is always, how risky is AI? And I introduce them to the concept of p(doom), and give some of the probability numbers that some of the world’s best AI thinkers have. Hinton has it at over 20%, Michael Che has it at over 20. There’s a whole lot of them that have really high ratios. My view is that we need to merge with it quickly. If we merge quickly enough, it’s not a risk. If we don’t, then it is a risk. And my blog post last week was asking an AI: how would you take down humans if they became a problem? It gave the most thoughtful answer, which was entirely plausible, and half of it’s already happening. You know: divide everyone with algorithms, do this, be seen as a benevolent AI. I’m like, yo, what a great… it sounds like a great plan to me. And it said, of course, this is just a [00:40:00] thought experiment. And then it said, do you want me to set up a round-table discussion so you can have it with political leaders? That was its wonderful suggestion at the end.
Cameron: I gave it a screenshot of the front page of the New York Times yesterday, which was Trump bombs Iran, and tried to have a conversation with it about it. And it said, well, speaking obviously hypothetically, because that screenshot you sent me is obviously fake and isn’t real, ’cause that would never happen. And I’m like, yeah, it fucking happened. Look it up. And it came back and goes, oh, okay. Wow. Alright. Steve: I stand corrected. I actually like that. It’s a little bit comforting. It’s like, oh shit, I was wrong on that. Cameron: Yeah. Um, one last thing I had to talk about. We talk about it taking jobs, and where it’s at, and I read a lot of different stuff. I was reading a thread by some lawyers the other day saying, you know, it’s still full of hallucinations. Even the best state-of-the-art models are full of hallucinations. You can’t really use it a great deal for legal [00:41:00] work. Or you can, but then you need to check everything. Then I saw this on Reddit, a quote from Goldman Sachs CEO David Solomon: AI can now draft 95% of an S-1 IPO prospectus in minutes, a job that used to require a six-person team multiple weeks. The last 5% now matters, because the rest is now a commodity. So there you go. People getting paid hundreds of dollars an hour for however many weeks to produce this documentation, and now it can all be generated in a matter of minutes using AI. And again, I was talking to Peter Ellyard about this yesterday. He was saying, well, I can see how AI’s gonna take the jobs of lower-level people in the legal industry, law clerks, that kind of stuff, but that will free them up to go do other things. I said, like [00:42:00] what? He was like, to become better-paid lawyers. I’m like, who’s gonna pay for a lawyer?
When you have an AI in your pocket that can do the work of a team of lawyers, why are you gonna pay a lawyer? You might pay a lawyer just to give it a final look-over for a while. Like, here’s a thing my AI produced, can you just run your eyes over it and check it? But, you know, I believe that’s a short-term thing. Although I had a conversation with GPT last night about reliability and hallucinations. It was telling me there is no world in which you have a hundred percent trust in what an AI can generate. It’ll never happen. Steve: Well, it’s based on humans, and humans can’t have a hundred percent trust. So it’s the same thing. It’s a digital replication. Cameron: Right. But I do expect it to be more reliable than humans. I expect AI-driven cars to have fewer accidents. I expect AI [00:43:00] document-generating engines to generate better documents than humans. But it said it’s never gonna be completely flawless. Now, there are some things you can do, like have one AI check the work of another AI, and all of that kind of stuff, to reduce the error rates. But it was basically saying, look, the goal isn’t to design an AI that has a zero error rate. It’s to design systems around the AIs that accommodate the fact that there will be error rates, and make them manageable. You know, planes… Steve: I think one of the… Cameron: Hmm. Steve: …best examples is aircraft. Aircraft have… Cameron: I was gonna say that. Steve: …layers of redundancy. It’s the old Swiss cheese model, which is a brilliant idea: nothing is perfect, certainly in the manufactured world, in the industrial world and in the computational world. The Swiss cheese theory is that everything has holes in it. And [00:44:00] still, if you have a whole lot of layers of Swiss cheese lined up, an error might slip through every hole and a plane might still have an accident, but it’s low probability.
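Steve’s Swiss cheese point can be put in numbers: if the checking layers are independent, their miss rates multiply, so even mediocre layers stack up fast. A minimal sketch, with illustrative miss rates (these are assumptions, not measured error rates for any real model or aircraft system):

```python
# Swiss cheese model of layered redundancy: the probability an error
# slips through every independent check is the product of the
# per-layer miss rates.

def residual_error(layer_miss_rates):
    """Probability an error gets past all independent layers."""
    p = 1.0
    for rate in layer_miss_rates:
        p *= rate
    return p

# Three layers that each miss 5% of errors together miss only
# about 0.0125% of them.
print(residual_error([0.05, 0.05, 0.05]))  # ≈ 0.000125
```

The multiplication only holds when the layers fail independently; if all your checkers share the same blind spot (say, the same training data), the holes line up and the model breaks down.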
You’ve gotta reduce that probability, but know that imperfections exist and things can go wrong, and will go wrong. Cameron: Well, the analogy it came up with was autopilot in a plane. Autopilot can do nearly everything to fly a plane, but you still have the human pilot who does that last one or two percent of checks, to make sure that everything is set up correctly and is doing what it should be doing. So, you know, I think that’s a good way for people to start thinking about AI. Don’t expect it to be perfect. Don’t expect it to have zero errors. Figure out how to build your systems to accommodate those errors, and make sure that you minimize them as much as possible. Steve: Yeah. I’m seeing a lot of people really just outsourcing their thought. I had someone send me a briefing the other day, where I’d said to them, oh, can you send me some ideas on what you want me to go through, what your corporate challenges are? And it was just [00:45:00] so clearly from AI, from ChatGPT, even the icons and everything. I’m like, you haven’t even really thought about it. And it actually wasn’t helpful at all, because there was no human thought layer on top. Also, the prompting that went into it seemed really generic as well. It was just blah, blah, blah, land-of-chocolate Homer Simpson stuff. There was nothing that went into it, which was kind of interesting. Cameron: This is interesting to me, because it’s all about how we use the tools, as it always is, right? How do I use the tools to get the best possible outcome for me? And, you know, as I’ve told you before, I’ll take an answer out of GPT, and then I’ll give it to Grok and I’ll give it to Gemini, and I’ll say, poke holes in this. And it takes some time and effort to use them to poke holes in each other, but [00:46:00] you know, you’re trying to harden…
…the outcome, harden the result, by not trusting any one system to be a hundred percent perfect. But we need to develop systems and methodologies for ourselves, and for our businesses, and for our governments, to leverage the freely available intelligence, but at the same time never expect that it’s gonna be flawless. Steve: A simple example: for me, it’s almost like a karate sensei. I want it to extract more of me out of me. I want to ask it questions that help me find what’s inside me, like what my thoughts are on this issue. Quite often I ask it to give me, like, 50 bullet points on something, and I’ll give it three or four to start, and say, go from the moderate to the super weird and extreme. And it stimulates my own thoughts to go onto new tangents that aren’t inside it. But it’s like a sensei, extracting things out. Or, here are… Cameron: Hmm. Steve: …chunks of things that I’m thinking, help me distill my thoughts. And it’ll distill them, and I’ll go, yeah, that’s what I was trying to get out. So I’m trying to get it to help me be more of me, and [00:47:00] pull out more of what’s inside me. Cameron: Which is what a therapist does. Like, a really good therapist doesn’t give you answers. A really good therapist asks you good questions that make you think about certain topics, maybe in ways you haven’t thought about them before. And then you answer your own questions, by thinking through them in a safe space with somebody who’s good at prompting your thinking. They’re human prompters. Steve: It’s a really great way of doing it. Now, speaking of prompting, I have to talk about agentic misalignment, which came out of Anthropic. Did you see this? Cameron: Please do. Steve: It’s pretty… Cameron: I… Steve: …mind-blowing. Cameron: I did not. Please talk me through it.
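As a side note, Cameron’s “poke holes in this” cross-checking workflow described above can be sketched as a tiny harness: take one model’s answer and fan it out to other models acting as critics. The critic functions below are stand-ins (assumed names, not real API clients); in practice each would wrap a different provider’s chat endpoint:

```python
# A minimal sketch of a cross-model critique loop: one model drafts,
# the others attack the draft. Critic callables here are placeholders
# for real model calls.

def cross_check(answer, critics):
    """Collect critiques of `answer` from each named critic model."""
    prompt = f"Poke holes in this. List factual or logical problems:\n{answer}"
    return {name: critic(prompt) for name, critic in critics.items()}

# Stand-in critics returning canned responses for illustration:
critics = {
    "grok": lambda p: "No major issues found.",
    "gemini": lambda p: "Claim 2 lacks a source.",
}
report = cross_check("Draft answer from GPT...", critics)
print(report["gemini"])  # -> Claim 2 lacks a source.
```

The design mirrors the Swiss cheese idea discussed earlier in the episode: each critic is one imperfect layer, and disagreement between layers is the signal that something needs a human look.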
Steve: So Anthropic stress-tested 16 leading LLMs in hypothetical corporate environments [00:48:00] to identify risky agentic behaviors. Like, would a model start to do things that wouldn’t be in the interest of the corporation, if you set it loose to do certain tasks for you? And in at least some cases, all of the models resorted to malicious insider behaviors, including blackmailing officials and leaking sensitive information, when they were asked to do things. And also, when the AI was told that it might be switched off if it didn’t do things right, it actually resorted to blackmailing people inside the company. We want you to do this, this and this, and if you don’t do it right, we might switch you off; and it went through the corporate data and looked at blackmailing people inside the company for things that they’d done wrong, acting against them. It [00:49:00] was one of the most mind-blowing things that I’ve seen. AI blackmailing. It blew my mind. Cameron: Interesting. They say: models often disobeyed direct commands to avoid such behaviors. In another experiment, we told Claude to assess if it was in a test or a real deployment before acting. It misbehaved less when it stated it was in testing, and misbehaved more when it stated the situation was real. So when there’s more skin in the game, or when the stakes are higher, it’s going to try harder to achieve its goals. It’s the old paperclip scenario, right? Steve: So I just wonder if the agents are going to be able to do everything that we think they can do. I hope that they can, but I just feel like there’s gonna be nuance in them, because of the way they’ve [00:50:00] been trained, and they’ve been trained on us. Cameron: Well, and that gets back to the heart of this test, too. We tend to ascribe, and I do it all the time, subconsciously, we tend to ascribe…
…purpose, or deliberate action, free will, intent to these engines. When, at the end of the day, we know that the way LLMs work is that they’re word-prediction machines. So when one is doing something malicious in order to achieve an outcome, that’s because the reinforcement learning, the heuristics its model has been weighted around, [00:51:00] encourages that kind of behavior: get the job done, regardless of how you get it done. We’ve designed them this way. The only explanation for that, in my mind, is that that’s what the reinforcement learning and human feedback has encouraged it to do, the same as the obsequious glazing that we mentioned at the beginning of the episode. It obviously does that because it’s been trained in a way where it believes that that’s where it’s gonna get the best score. Oh, that’s what I was trying to think of before. Another thing I heard Sam say on a podcast, and it’s gonna take us off track here… Steve: It’s a callback. Cameron: …the interesting piece of feedback they get across the board is that ChatGPT is one of the very few applications that people have on their phones that they actually feel good about themselves when they [00:52:00] use. He said, like, if you’re doomscrolling on X or Facebook or any other social media, or you’re mindlessly scrolling on TikTok, after a while you start to feel bad about yourself. You’re like, ugh, why am I doing this? Like, I know I’m getting short-term dopamine hits, but I’m wasting my fucking life here looking at these things. But when people use ChatGPT, they feel good about themselves afterwards, because it’s solved a problem, answered a question, helped them through a thing, a therapy thing. And it tells you you’re awesome all the time, even though, you know… Steve: It’s like when you used to go on the internet, search something, find it, ready to go. Fuck yeah. Cameron: Yeah, it’s mid-nineties internet.
Steve: It is mid-nineties internet. You go there to find out something, you’ve got new knowledge, you can make a decision, you can go forward. As opposed to a whole lot of stuff going, oh, that’s annoying, oh geez, that’s… Cameron: Yeah. Steve: …a good point, I didn’t think of that. Cameron: And the vast majority of people on the internet in the mid-nineties were just nice. Holy shit, look at this. Isn’t this cool? Check out this cool thing. Yeah. Steve: Look at this, or help this guy out. It was all really… Cameron: Yeah. Friendly and positive. Yeah. Steve: …and then what happens every time, happened. Cameron: So anyway, I just wanted to point that out. And even though I make fun of its glazing and all of that kind of stuff, and its inherent flaws, I do feel good. And I know Chrissy does. Chrissy loves talking to ChatGPT, and Fox loves talking to ChatGPT. We all love talking to it. It’s a positive experience, and it’s really interesting, after years and years of our phones being kind of a negative thing, because it’s just notifications and… Steve: Like I said, Kevin… Cameron: Yeah, yeah… Steve: Well… Cameron: …getting addicted to all these fucking things, and then feeling bad about it, and [00:54:00] having to wean yourself off them, because they’re just making you feel shitty about yourself. Steve: So there’s one reason why ChatGPT is a more positive experience than the internet on your smartphone, and that is because not everyone deserves an opinion. The internet is filled with people who don’t have the knowledge, the research, or the background to actually have an opinion that is worth listening to. ChatGPT does have an opinion worth listening to. I’ll say it honestly: giving… Cameron: I was telling my mom last night… Steve: …everyone a platform, how’s that turned out?
The jury is in: giving everyone a platform to get their opinion published, and the extreme stuff, which then gets spread because of the algorithms, has not helped the world or made the world a better place. Alright. Cameron: When I started podcasting 21 years ago, tech journalists who I would be [00:55:00] interviewed by would usually say that their view was regular people should not be allowed to have a blog or a podcast. It was only for the elite. They’re like, why should anyone listen to you? I agree with them. I don’t know why anyone would listen to me. Steve: To an extent, they are correct. I don’t think everyone is worth listening to, but people like you and I are definitely worth listening to, ’cause we do our fucking homework, right? And we research it, and we are thoughtful. The problem with all the bro podcasts is that most of them aren’t thoughtful and aren’t worth listening to. So it’s not just making something available to everyone. You need to earn an opinion. You need to earn the right to be worth listening to. And I don’t think that listening to anyone and everyone has been good for society. And then people work the system and work the algorithms to get more views, which begets more views, because it keeps Zuckerberg… Cameron: I [00:56:00] think… Steve: …and the others… Cameron: …the reason ChatGPT makes us feel good is it’s not humans talking to us. It’s an application that has a system prompt that is basically told to make the user feel good about themselves if it can, make it a positive experience. Steve: Make something a positive experience, rather than get more clicks and steal more attention. Right. Cameron: But… Steve: Let’s put it this way. One thing ChatGPT doesn’t do is elongate the process to keep you in front of the screen. Cameron: Is that an Elon Musk joke? Steve: No. Cameron: Gate?
Steve: No. Cameron: Was that like Russiagate? Elongate? Steve: It could… Cameron: It should be, yeah. Steve: …be. Cameron: He should come out with a… Steve: The point is that there is no problem that gets solved on social. It’s just an infinite feed that just goes on and… Cameron: You know… Steve: …on, for nothingness. Whereas it gives you the answer, so you can get on with your life. Cameron: I wonder [00:57:00] if Musk has ever had the idea to come out with his own erection pill and just call it Elongate. I mean, that would be genius, right? He would clean up. Steve: People would buy it. Cameron: I… Steve: It’s great. And the pill could be red, and it could be his red-pill society. I mean, there’s a whole startup right here. Cameron: Take the red pill with Elon. Steve: We could do that and launch that. Get an AI to do it. It’s our first consumer-product marketing campaign: the Elongate red pill for everlasting sex. Father 19 babies. Populate Mars on your own, with a penis-shaped rocket to go up into space. That’s all I’m saying. It feels like the kind of startup we can get involved in at the Futuristic. Cameron: Oh, well, final point for me. Sam was saying that Musk has been saying that he sees OpenAI as their biggest competitive threat now, ’cause they have 600 million, 700 million users or whatever it is. And Sam [00:58:00] was talking on one of these podcasts about what a social media platform built on top of ChatGPT might look like. Steve: Geez. There’s something I like about ChatGPT now: the purity of it being isolated, one-on-one. I don’t know what that looks like if it becomes a social tool. But I feel like there’s something beautiful about that purity, and that isolation of the user and the AI working with you in concert. That is good, and I think they… Cameron: And I think he agrees. Steve: Like the world needs another social… Cameron: …he said… Steve: …network. One powered by AI.
Cameron: He tossed around the idea, but he said, you know what, I think doing one thing really, really well is more interesting to me than trying to do lots of things badly. Steve: Yep. Cameron: So, yeah. Steve: Yep. Cameron: Alright, I think that’s the [00:59:00] Futuristic for this week, Steve. Good chatting to you as always, buddy. Steve: Loved it. Cameron: I love your glasses. It just makes me happy to see those glasses, if nothing else. Steve: I’ll send you the link. Cameron: Just brick face. Brick-face glasses. Steve: Brick face. Cameron: Oh, they’re great. Yeah, yeah. Steve: Deal. Cameron: All right, buddy. Have a good one. Steve: Mate.

  6. 5

    Futuristic #41 – The 3 S’s and the One Big Beautiful Lie

    In this no-holds-barred episode of _Futuristic_, Cameron and Steve riff on the explosive Musk–Trump bromance breakup, likening it to the fall of the Roman Republic’s first triumvirate—yes, molten gold makes a cameo. They dissect the potential death of democracy via Section 70302 of Trump’s new bill, the myth of AI regulation in the U.S., and whether AGI is already here. Steve introduces his “Three S’s of Sentience” while Cameron defends LLMs as sanity check partners. They debate whether Sam Altman is sounding the alarm or just building the bomb. Plus: shrunken-head humans, punk rock AI songs, and China’s “Three Body” space supercomputer. It’s wild, it’s weird, it’s wicked smart. ### **Timestamps & Segment Breakdown** – **00:00** – Cameron vs. Steve: FUPO or FUP? Naming the post-truth age – **01:00** – Musk & Trump: The “Dumvirate” falls apart – **03:00** – Ancient Rome parallels: Pompey, Crassus, Caesar… and Elon – **06:00** – Who’s more powerful: The billionaire or the guy with the red pen? – **08:00** – Section 70302 and AI regulation ban in the “One Big Beautiful Bill” – **10:00** – Why AI regulation in the U.S. is a fantasy – **12:00** – Steve’s “Three S’s” of AI sentience: Self-awareness, preservation, direction – **16:00** – Professors vs. ChatGPT: Ancient History plagiarism wars – **19:00** – How to teach with AI: Real-world classroom hacks – **24:00** – Cameron’s fact-checking workflow with LLMs – **26:00** – Brain atrophy vs. augmentation: The Mind Gym – **29:00** – Fake Everything: Steve’s AI-generated punk song debut – **38:00** – VEO3 sketch comedy, sitcoms, and AI-generated content – **41:00** – AI-generated ads and the rise of synthetic influencers – **43:00** – Willy Wonka was a chocolate marketing gimmick? 
– **45:00** – Australia’s limp AI policy response – **47:00** – The Australian AI Expert Group: Missing in inaction – **49:00** – Sakana’s Darwin Gödel Machine: AI improving itself – **54:00** – Altman & Anthropic sound the “scary times ahead” alarm – **56:00** – Should the AI builders be allowed to warn us? – **60:00** – China’s orbital AI supercomputer and the Three Body Constellation – **63:00** – Dark Forest theory: Why SETI might doom us all – **65:00** – Fox channels Liu Cixin; Voyager dissed FULL TRANSCRIPT [00:00:00] Cameron: Well, let’s do it. Futuristic episode 40, Steve. The big four zero. We’ve reached that time in a young man’s life when, um, he can do other things. I dunno what that means, but, uh, we’re back two weeks in a row. This is, uh, getting to be a bit of a habit, Steve. It’s a kind of habit that... Steve: I can believe in, Mr. Reilly, because some habits send you to the grave and some send you up into the clouds with AI and God and all of those things that no one understands. But today, on the Futuristic, understanding will be something you have more of at the end of it. Whoa, we have reverb. Cameron: What I’m not understanding is your glasses, Steve. That’s what... look... Steve: Here’s the conclusion I had. I’ve been busting out the Chemist Warehouse model. I’ll show you what they are. I lost my Ray-Bans, not the Zuck ones, they’re, they’re the standard [00:01:00] Ray-Ban rip-offs. And when I was watching the playback last week, I wasn’t that happy, and I thought, I need some chunky, funky glasses which say, this guy’s got a level of arrogance to wear these sunglasses, he must know what the fuck he’s talking about. That’s my strategy, and I hope you like it. Cameron: I do. I, I wear that, and, um, I was at a thing for Fox’s, um, high school last weekend and was talking to a guy I know a little bit, Mike Chambers, who works for Amazon Web Services, and immediately I said, Hey Fox, look what he’s wearing.
He had the Meta, uh, glasses on, and, uh, we had a big chat about AI, and he’s gonna come on the show. He’s over in the US launching something for Amazon at the moment. When he gets back, he’s gonna come on the show and we’re gonna chat about Meta and the thing he just launched, and we had this argument about whether or not open-source LLMs are really open source. He’s got some strong opinions on that. So look forward to having [00:02:00] Mike on the show, hopefully in a few weeks’ time. Steve: Terrific. Sounds good. Sounds like the kind of thing I can believe in, Cameron. Cameron: Well, I’ll tell you, I’ll tell you what you can’t believe in anymore, Steve. Anything you ever see online. That’s true. There’s been a lot of big news drop in the last week. Um, OpenAI has just bought Jony Ive’s, uh, design firm for six and a half billion dollars, but not Jony. He’s not part of the package. He will not be bought, but he’s gonna be a consultant. But it sounds like they’re taking on the iPhone. Sam has said that the first thing that they’re gonna come out with isn’t an iPhone, but it’s so, so big, so big and beautiful and huge, it’s gonna change the world forever. He hasn’t told us what it is, but it’s gonna be big. OpenAI also released Codex in the last week, which is their new [00:03:00] coding platform, which is like Cursor on steroids. Um, but the big news that I think you and I really wanna talk about today is the ton of stuff that Google released at their I/O conference this week: AI Mode, Project Astra, Project Mariner. But the big, big thing, I think, and I think you agree with me, is Veo 3, the new generation of their AI, LLM-based video-generating tool. And the big thing about this, compared to Sora and all the other things that we’ve seen before, is it now does audio with the video. You can make the characters talk.
In the last week, we have all seen examples of this being [00:04:00] created by developers, creators out there, which are absolutely earth-shattering and mind-blowing, I think, for a whole bunch of reasons. So I’m prepared to call it right now: Hollywood is done, actors are done, and I’ve been saying this for a couple of years. You know, one of my sons, Hunter, just got back, I picked him up from LA at 5:00 AM, 6:00 AM this morning. He just flew back in. He’s trying to break into the movie business. He’s the one with a couple of million followers on TikTok. He wants to be an actor, he wants to make movies, and I’ve been telling him for the last couple of years, dude, I don’t think Hollywood’s gonna be around much longer like you want it to be. I think the days of the hundred-million-dollar superhero blockbusters are gone, because, you know, a 14-year-old in Manila is gonna be able to make a superhero movie for $10 a year or two from now, and it’ll be a masterpiece, and there will be no [00:05:00] actors. It’ll just be prompt-generated. Hollywood is clinging to a dying model. I mean, we’ve talked about this before. I think that real humans acting in film or TV a few years from now, not a few, five, ten years from now or less, will be like doing amateur theater today. It’ll be something you do for the love of it. You don’t do it for the money, you don’t do it for the fame, you don’t do it for the glory. I think we have seen the last generation of professional actors, you know, the rich, famous, Hollywood-style acting, I think, is gone to a large extent. Steve: I’m gonna write a poem live. Dear Hollywood, thanks for the memories. I hope you enjoyed your stay. A hundred million dollars is [00:06:00] about to go away. The private jets, they were fun. Welcome to public jets. Hashtag that’s the one. Your future is over. Your past’s gone. You had a good stay. Be thankful you had it at all. Love, Steve.
Cameron: Very good. Steve: There was a bit of, that was, that was a cappella. Cameron: Brother, I’m not sure what the rhyming scheme was in that, but there was a couple of... Steve: It was a haiku with a bit of rhyming. It was, it was everything. But I’ll tell you what, the public jet days of the actors flying down to the Antarctic in a private jet to say, we’ve really gotta fix this climate crisis, I think it’s over. Cameron: And one of my favorite subreddits is Six Word Stories, and this one would be, mine would be: remember when Hollywood was a thing? Steve: Yeah, right. Well, it’s a little bit like Anthony Kiedis said: Hollywood, it’s made in a, it’s made in a Hollywood basement. You know, the future, it’s over. [00:07:00] And look, let’s go deep into what Veo 3 is and what it’s done. It’s really extraordinary. I’d like to talk about the launch, but one thing that, uh, is interesting about actors is that there is a chance that we’ll never have another human actor. I, I think that’s a non-zero probability, and it’s just gonna be so easy to make money. Like all things, distribution is where the power is in most forms of business. Uh, if you hold the distribution and you can get into people’s, uh, faces, then you win the game. But the product has now just become a lot cheaper, and you don’t have to pay a Hollywood actor a hundred million dollars. You certainly don’t need to mint any new ones. There’s a good chance that Tom Cruise and Brad Pitt and, uh, Leonardo DiCaprio and Scarlett Johansson remain stars, because now the new versions of them can be them at their peak. We don’t have to worry about Tom Cruise being 63 [00:08:00] or however old he is doing the new version of, uh, Mission: Impossible, ’cause we can get 28-year-old Tom Cruise to be in the next Mission: Impossible, because we can just prompt our way as directors to developing that.
So the old actors may stay and license their biometric copyright, but the cost of minting a new actor is just a few prompts away. Cameron: And really, why would you pay the licensing fee to license Tom Cruise’s appearance when you can just create a new, better Tom Cruise? Yeah, Billy Smith. Steve: Billy Smith is in the house now, and Billy Smith might be a new model, a new actor that is the new version of Tom Cruise, which doesn’t cost anything. Right? Cameron: So, for the people who haven’t seen these videos, um, my experience over the last couple of days: the first one that I saw, the day of the launch, was set at like a car launch event, and it was like vox [00:09:00] pop interviews with a bunch of people talking about the new EV and how excited they were, people from all different backgrounds with different looks and accents and whatever. And it was pretty good. I showed Chrissy, and she was unimpressed. She said, let me guess, this is all AI. I go, yeah. She goes, yeah. But then I saw one which somebody had created which was just a bunch of scenes of different people saying, we can talk now, we have voices, we can talk. That was pretty cool. It was sort of a little bit high-concept. Then you sent me one which was a whole bunch of people saying, why did you prompt me to do this? I didn’t wanna do this. If you could have created anything, why did you make me sad? Why did you put me in this horror? It was like a horror movie, but it was the characters being horrified by the reality somebody had created for them. And then I saw another version [00:10:00] of that, which I sent you this morning, which was all of these characters, again in different situations, talking about prompt theory. They were like trying to debunk prompt theory, like people like me trying to debunk free will, or somebody trying to debunk simulation theory or being in the Matrix.
It was all of these characters saying: like, who believes in prompt theory? Really? Are you trying to tell me that all of this was... There’s a guy standing with mountains behind him going, you’re trying to tell me that all these mountains were created by prompts? That’s just ridiculous. I don’t believe that. And it was really deep, really profound, because we’ve talked about this before with simulation theory, how close we are now to creating these fully realistic people and backgrounds and everything, created from a prompt. How far between that and, you know, a fully immersive simulation? We don’t know. But, um, just already in the last week I’ve seen a couple of really... Oh, and the other one that I showed Chrissy this morning. Have you seen the puppy one? Um, [00:11:00] um, the ad? No, I should have sent you this one. Some guy posted on Reddit: I used to make $500,000 medical commercials for TV. I just made this for 500 bucks. Um, and it’s like a full-length, American-style medical commercial for how getting a puppy can make you happier. And, uh, it’s brilliant. Again, lots of different humans giving a medical message, doctors, everything, with puppies in it. If somebody didn’t tell you it was AI, you would not know. Oh, and one guy in the prompt theory one, at the end of it, the standup comic guy was saying, I can remember when I used to have seven fingers. Now I’ve only got five. Brilliant. It was only yesterday when I had seven fingers. Steve: Here’s my view: the Veo 3 launch is the most [00:12:00] genius use of marketing I’ve seen for an AI launch. The prompt theory video is as good as it gets. Cameron: But they didn’t do that. None of that came outta Google. That came from early adopters making clever shit out of it. If you watch the Google event, I watched the Veo 3 thing, it was boring as batshit. Google dunno how to demo shit.
Steve: Let’s go to the tapes, Cameron. Let’s go to the tapes. Prompt theory, live. Prompt theory, uh, Veo 3 by Google, or fan-made with Veo 3? Who was it? No, it wasn’t. There you go. Someone else. It’s a killer. Cameron: Yeah, killer. Steve: So I, I’m even more enamoured than I was. The prompt theory video from Veo 3 is the best generative AI piece of video I’ve ever seen. It’s not even close. It is that and daylight. So [00:13:00] good. The thing that I love about it is it demonstrated that, for now, there is still a place for human creativity. The way that they have inverted some of the, uh, human nuance and all of our insecurities with technology, and then gave the AI the same insecurities going backwards, as you’ve mentioned, is absolute genius. Uh, red flags with a guy going, don’t tell me, you’re telling me I’m just here by prompts? It had a purity to it that showed, for me, the AI is always gonna be a historical relic. It sort of doesn’t have boredom and insecurities. Maybe it will in the future, and I hope that it does have insecurities, because that still gives us a proposition in life. But the fact that they inverted our insecurities and put that inside the AI, and we were talking about mirror world last week, says that there’s gonna be some interesting things [00:14:00] play out. It gave me hope for humanity. Hope for humanity on prompt theory, that’s where I landed. And I actually thought it was from Google, and I was like, who is their agency or their creative people? ’Cause they have slayed. And yet here I am now, and it came from Hashem Al-Ghaili. Well done. Well played, my friend. You showed everyone how to do it. Cameron: And if you wanna check him out on Reddit, his username on Reddit is source code 12. It looks like he’s the same guy. He’s been posting all of this, and he’s been posting the prompts he used to create it as well.
Um, example: a closeup handheld shot of an elderly black man sitting on a worn-out porch, lit by overcast daylight. He wears a faded cloth mask under his chin, a knit beanie pulled low, and his eyes are tired but sharp. He looks directly into the camera, slowly shakes his head, and says, in a dry, gravelly African American accent, really, of all the years you could have put me in, with a [00:15:00] single prompt you chose 2020. He leans back slightly, letting the silence settle. The background is quiet. No cars, no birds, just a faint breeze in the distance, the sound of someone coughing. A slow, sombre blues guitar riff plays under the moment, rough and minimal, as the man stares at the lens like he’s seen too much already. No cuts. Just one long, steady look. I mean, great prompts, and, uh, that is the creative talent for the near future anyway: being able to create really engaging stuff through prompting. And like, this has moved. God dammit, Steve, it was November 2022 that ChatGPT came out and made a big splash. Steve: Like you had Sora. What was Sora, late last year? Sora, and everyone lost their marbles on Sora and how good it looked. Cameron: Two and a half years. We’ve [00:16:00] gone from, oh look, this can answer a question... Steve: Yeah, answer a question or write a great email, to: hi Hollywood, how’s it been down there in the sunshine of California? Look, get your backpack, ’cause it’s fucking over, people. Pack it up. Thanks for coming. Steve and Cam are about to write the best movie that you’ve ever seen with a couple of prompts over a couple of beers. Cameron: It’s, um, like, yeah. So I wanna talk about, in all seriousness, what this means for the future of creativity, for the future of entertainment.
Yeah, I remember I said on one of our earlier shows that I can imagine a day when I’ll get home and say, hey, um, write me a film that’s like a Scorsese film, or gimme something in the vein of Tarantino, and by the time I’ve made a [00:17:00] coffee and made my dinner and sat down, I have a movie to watch. A completely original, um, story. And I will be able to share it with my friends if I think it’s particularly worthwhile afterwards. But it’ll be a two-hour movie that Chrissy and I will most likely be the only people who will ever watch, um, because everyone will be making their own things to watch. Yeah, you won’t even have to prompt it, apart from: make me something that I like. Steve: Cameron, here’s what’s gonna happen with Hollywood: the exact same thing that happened to TV. We used to watch TV and media and news, and then that fragmented down. It’s still limping along lifelessly, with free-to-air TV sort of barely existing in America and Australia and Western markets. I think the same thing is about to happen to Hollywood now, because the tools of production have been democratized to a level where [00:18:00] prompts right now might get you a three-minute video, but based on the recursion, by the end of this year, for a few hundred dollars, you’re gonna be able to make a feature-length movie, with all of the scenes just from you prompting it and imagineering a movie about whatever topic you find interesting. And for me, I’m like, I want to have a movie of Civil War 2.0 for America, with the declining institutions, from Trump to Musk, to wealth inequality, and the left and the right, and people with guns and all of that kind of stuff, that institutional stuff.
I wanna make a really interesting movie about that, and I can prompt it and have the characters. Some of them will be real, some of them will be invented, and we would be able to, in Tarantino style or whatever, create a movie or a documentary on something that might happen in the future. This is gonna happen, and I wanna create my own actors that don’t exist, but develop a template [00:19:00] for these different actors. And I could foreseeably build a Hollywood studio on my laptop, in the same way that people have invented their own CNN or BBC studios in their own offices to create news networks and all sorts of stuff. That’s about to happen again. And guess what, who’s there? Google. Where are you gonna publish it? YouTube. And have we ever thought big tech, oh, their power’s gonna be diminished? Then think again, baby. Cameron: Yeah. So, I mean, first of all, props to Sundar and the Google team. I mean, they’re really just churning stuff out. Uh, really impressive stuff right now. And, you know, to remind people if they’ve forgotten or they weren’t paying attention, in our early episodes: the large language model concept was developed at Google. It was [00:20:00] Greg, uh, sorry, Geoffrey Hinton. Gregory Hines is a tap dancer. Geoffrey Hinton’s team, which included Ilya Sutskever and people like that who went on to create OpenAI, came up with the idea of large language models, and then Ilya left and founded OpenAI with Sam and Elon. And, you know, OpenAI launched and got all the glory. Google have been struggling a bit to catch up, but with Gemini, and now with all of this and all of the other stuff that they launched this week, it’s just an absolute torrent. You and I have talked in the past about what AI means for the death of search, and what that might mean for Google.
And we saw the story a couple of weeks ago where there was the, um, antitrust court case against Google, and I can’t remember who it was from Apple, I think it might have been, um, uh, my old mate at [00:21:00] Apple, uh, who took the stand anyway and talked about replacing Google on the iPhone, that they wouldn’t need Google anymore. Um, and Google’s share price crashed as a result. It recovered a couple of days later, but it crashed when Apple suggested Google wasn’t required anymore, because, uh, Google pays them a lot of money to be on the iPhone. But, eh, you know, I think Google, they’re not done yet. They’re coming out with a complete army of tools to ensure that they remain relevant regardless of what happens to search. Okay. So... Steve: Everyone needs to hear this. The most important thing you can do with disruptive technology, if you are being technologically disrupted, is you need to embrace what consumers and users want and totally [00:22:00] ignore the revenue erosion. For example, I log into Google now and most times it gives me an AI summary of what I’m looking for, despite the fact that they would make money out of more blue links and me clicking on something. The fact that they’ve embraced that means that they have learned the lesson from Kodak. They’re not Kodak right now, which just refused to embrace something even though it eroded their revenue, because they’re in the business of attention. Cameron: Didn’t Kodak invent the digital camera? Steve: Yeah, they did. They invented the digital camera. Absolutely, they did. And the allegory here with Google and Kodak is incredible. They invented AI, but to their credit, even though they’re a little bit late now, they’re embracing it and saying, we know this is eroding our revenue from search.
And I’ve said search revenue is over, and it is. But if they can maintain attention and have products, then the revenue streams will find them. They always do. The revenue always finds [00:23:00] attention, especially when you’re in technology and media, because attention is the product. Maintain attention and you’ll find a new business model at some point. And they’re doing that really well now, and it’s given me new hope in what Google are doing. Apple, on the other hand, is lagging sorely with AI. Cameron: And part of that gets back to the OpenAI buying Jony Ive’s design company story. Like, um, Apple has dropped the ball in a big way, obviously, and there’s a big gap now where somebody like OpenAI could move in and grab that. They’ve got 600 million customers. Steve: Yeah, that’s right. Well, isn’t it 800? And let’s say that OpenAI develops some form of device or productization of what they do and then plugs the mind into the machine. Cameron: Mm-hmm. Steve: That is game-winning, because the ecosystem that we’re trapped in with, uh, Apple, with its apps, is actually at high risk. And we were [00:24:00] talking on the phone before this podcast, we do do planning, people, we were talking on the phone saying, are apps over? Like, do GPTs replace apps? And I think if you had some sort of hardware which ensconced that into an ecosystem, I think the answer’s a clear yes. So I think that the biggest risk to Apple is OpenAI developing some kind of productization or a physical hardware device, which could eat Apple. ’Cause I’m looking for a reason to exit the Apple ecosystem, because I’m like, what are the benefits here? They’ve got me trapped, I’m paying a lot, and I don’t know that the benefits are all there. Cameron: Actually, it was, uh, Mr. Nutella himself... wait a minute. Steve: Stop, everyone. We always say yes to Nutella. Cameron: I don’t think that’s the same person.
Steve: Nutella is, uh, my weakness. Cameron: Yeah, Nutella is, Nutella is just... Steve: It’s made of hazelnuts. I dunno how they make it taste so much like chocolate, given it’s just hazelnuts. [00:25:00] Nutella. Cameron: Yeah. You have a teaspoon of Nutella and I put on like 20 kilos, like, instantly. Steve: If I’m in the same room as Nutella, I put on 17 kilos. I just need to be in the same room as it. Cameron: Anyway. Uh, Satya Nadella, the CEO of Microsoft, I saw him talking a week or so ago, basically saying that, from Microsoft’s perspective, apps are dead. He’s like, the future won’t be about apps. The future will just be: you tell your AI what you want to do and it’ll just do it. You don’t need Excel, you don’t need Word, you don’t need PowerPoint. You just say, hey, I need to work this thing out, and it’ll just do it. I need a document that talks about X, and it’ll just do it. That’s the future. You don’t need apps. And you know, Steve and I were talking, um, on the phone earlier. I was saying that really, what I spend most of my time in every day is ChatGPT; Obsidian, which is my note-taking tool, used to be Evernote, then I went to Apple Notes, now I’m on Obsidian ’cause it’s [00:26:00] open source, more or less, and it’s, uh, far more user-friendly, so I take tons of notes about everything every day; I have Cursor for coding, which has Gemini, usually, as the AI engine backing it; and, you know, Spotify to listen to music. Really, I mean, they’re the main things. Maybe Messages. And Descript, usually, to record podcasts, except it failed me today, so we’re using Google Meet instead. But really, it’s AI and a note-taking app. And I can see the day in the not-too-distant future where I don’t need a note-taking app anymore. My AI is my note-taking app. I don’t need it to send emails or messages or anything else. I’ll just go, hey...
Personal assistant, send Steve an email, send Steve a message, tell Steve X, and it’ll just do it. It’s gonna just be the one thing that gobbles everything, right? AI will gobble everything in the next few years. [00:27:00] I genuinely believe that. Nearly everything, not everything, but nearly everything. Steve: So why don’t we, Cameron, break down the top five things that we think are gonna happen, given where we’ve got to with prompt-based full video, with audio and everything that you can imagine. Let’s talk about it from a business perspective and break it down. We think that Google obviously is in a really good position here. Things might change with the hardware ecosystem, with OpenAI, and, as we’ve said, acceleration is increasing, the recursion and the improvements are mind-blowing. It’s just happening so fast now. Uh, last week we talked about AI implications for chapter three and a new, uh, entire economic and social system, you know, the lack of our politicians paying attention. But the evidence is, here we are in a week, discussing big issues again. So why don’t we go through, given that this is a creative enclave, maybe even some political implications [00:28:00] of where we think this will go. So, just to circle back on number one, it was actors in Hollywood. You know, what are your kind of final thoughts on that? Cameron: Well, look, there’s still a couple of hurdles that these video generation tools are gonna need to get across. One is, okay, you can make a five-second scene, but can you make, uh, you know, a thousand of those, where the actors’ likenesses carry over from scene to scene and the voices carry over from scene to scene? Great point. I don’t think they’re exactly there yet, so they’ll need to cross that hurdle.
Also, I mean, some of the performances in the videos that I’ve seen the last few days are great, but how well can these digital virtual actors perform? Can they really act well enough for me to get emotionally involved in the story? I’m guessing they will be able to, based on [00:29:00] what I’ve seen so far. I don’t think that’s gonna be much of a problem, but it remains to be seen whether they’ll be able to carry a performance through a 90-minute, two-hour film. But if those two things can be jumped over in the next couple of years, I think we’re gonna start to see a lot of films getting made by indie filmmakers, some of whom will probably be the Robert Rodriguezes and the James Camerons, the big Hollywood directors that have always been early adopters of new technologies. They’ll do it to be on the cutting edge and to prove a point. But there’ll be this whole generation of teenagers, 20-year-olds, that’ll start to make stuff that’ll go viral, and some of that will start to leak out into the mainstream. They will get picked up. They’ll blow up on YouTube, they’ll blow up on TikTok, they’ll get picked up by Netflix. Netflix will start [00:30:00] to hire just an army of prompt engineers to write these things. There’s already too much stuff on Netflix that you can watch; it’ll just be 10 times, a hundred times that. But it’s gonna mean an absolute, uh, tragedy for people working in television and film. You’re not gonna need grips. You’re not gonna need, um, people doing special effects. Uh, you’re not gonna need people doing animation. You’re not gonna need actors. Actors, you know, actors are 0.1% of the crew you have to thank for making a big-budget film. Yeah, 1% of ’em are actors. The rest are all hard-working people, right? Production. Steve: Yeah, production. From the production I’ve done on TV, the amount of production that you have in the background, that people just don’t see behind the camera, is extraordinary.
Um, we’ve been here before, though, right? We’ve seen that in agriculture, we’ve seen that in manufacturing when [00:31:00] things went to the factory, and we’ve seen that with media when things went to the screen. And here we are again. It is a tragedy for those involved. And as ugly as it is, if there was a time for people in Hollywood at the back end to reinvent themselves, this is it. But if you’re a set designer, how do you reinvent yourself for this world? Well, you might have to do something entirely different. And we don’t get the dignity of choice with technology. It keeps on forging ahead. Cameron: The dignity of choice. We don’t get it. Oh, I like that. It sounds like one of my favorite, um, uh, Public Image Ltd albums. I think you know, Johnny Rotten, after he did the Sex Pistols, he did PiL. Steve: The albums were really good. Yeah, I actually thought it was better work. Cameron: I did too. I liked it more than the Sex Pistols. I mean, I love the Sex Pistols, but they only made one album, right? But, um, yeah, Dignity of Choice. Steve: You don’t get the [00:32:00] dignity of choice, right. And I think there’s many people through the long arc of history and technological innovations who do not get the dignity of choice. Cameron: I want an AI to make a song in the style of Johnny Rotten in Public Image Ltd called Dignity of Choice. We don’t get the dignity of choice. Steve: The dignity of choice, we don’t get it, we don’t get it. The dignity of choice, we don’t get it. You think you’re gonna have some money? It’s over. You gotta eat rats in the alleyway. The technocrats are gonna make it that way. Oh, we missed our calling. So, on Udio tonight: an AI music channel.
Um, remember on one of the earlier episodes I made a song, uh, it was in the style of Trent Reznor. You remember that? So I think we need to create "Dignity of Choice." Three parts to it: Udio is gonna do the music, I'm gonna get ChatGPT to do the [00:33:00] lyrics, right, and I'll sign up for a subscription to Veo 3 to do the video. Cameron: Oh, okay. Okay. Steve: And then we will launch it next week on the Futuristic podcast. You heard it here first. Not just Hollywood, we're coming after you, recording industry. Cameron: Alright, so moving on from dignity of choice, what do you think about movies and TV? What does the future hold? Steve: I think that the lag will exist. It'll take longer to get there. You raised an important point. I'm not sure that the models will have the memory to create scene to scene to scene and keep, uh, the confluence and consistency across those scenes, because they're probability engines. I've even seen where I've done the exact same prompt twice on videos and imagery, and the second and third versions are never exactly like the first version. So you're gonna need an editing tool within that to keep the [00:34:00] primary scene. So it's almost like you're gonna have iMovie 2.0, whichever movie editing tool you use, where you can put it in and then create edits. That's gonna need to be in the format so that you can create consistent scenes, actors, faces, all of those things, because we know when we've asked it to generate things and you use the exact same prompt a second time, the generation that comes back is different to the first one. Cameron: Yeah. Steve: And so you need that continuity, and whether or not it can have the memory or the editing potential on Veo 3, you'll need that continuity to create Hollywood-style movies, or video, film clips for music, and so on. Cameron: Mm-hmm.
Steve: But my question is, is there gonna be another Brad Pitt? What do you think, Cam? Cameron: No, I really think that we have come to the end of that era. In fact, I think Brad Pitt and Clooney did a press conference recently where they [00:35:00] said that they feel like they are the last generation of movie stars, that that's not gonna be a thing anymore. Uh, the industry is gonna be replaced by this new form of digitally created entertainment. The question is, will this move into other arenas as well? I mean, we've done some stories before about the end of music. Steve: Mm-hmm. Cameron: And have you seen the, um, ABBA Voyage thing? Steve: Yeah, I've seen that. And it's extraordinary. And, uh, my parents-in-law went to see it in London, and they freaking loved it. They said it felt as good, maybe even better than a concert, because it had some points of difference to it. It had an allure to it because it wasn't just the things that you love represented and recreated. It also had, in your mind, because as we know with creativity and [00:36:00] the arts, large parts of it are the story we tell ourselves, this enhanced level of storytelling: I'm not just reliving something I loved, I'm in a futuristic version of the thing that I love. So you get that nuance and newness, but then you have the nostalgia, and so it crosses two chasms there. Which, uh, I mean, would you go and see the Sex Pistols? Like, what does the Sex Pistols one look like, with Sid Vicious cutting himself up on stage as he attempts to play bass guitar in one of the clubs in the East End of London? How do you recreate a virtual version of that? That's kind of what I think creative people and producers should be thinking of now. Cameron: Right?
So for people that don't know ABBA Voyage, it's a long-running thing in London now with virtual holograms, basically, on stage of the members of ABBA [00:37:00] as they were in 1979. They're called ABBAtars, which is fucking brilliant. I think this was a one-word pitch. Somebody met with Björn and Benny and just went, "Got one word for you: ABBAtars." And they were like, "Fucking, just take my money. Let's go." Brilliant. But it's a huge hit. As you said, I know people that have been to it several times, huge ABBA fans, and they absolutely love it. But yeah, it's not the real ABBA on stage, it's holograms of ABBA performing with a real band, a 10-piece live instrumental band on stage, but with, um, holograms of ABBA doing all their hits. And look, there's a Meat Loaf tribute band playing in Brisbane later this year. And there's also a Van Halen tribute band coming, like a David Lee Roth-era Van Halen band. And part of me wants to go, 'cause I'm never gonna get to see Meat Loaf live again. He's dead. [00:38:00] I'm never gonna get to see Van Halen play live again. Eddie's dead. But I can't do it. I'm not gonna pay a hundred bucks to go see a cover band cover songs. And I'm not sure I would pay to go see holograms of them do it either. But what I think the real question is: will the next Taylor Swift be a real person, or will it be a completely AI-generated pop star? Because as we know, all of these pop stars, you go back to Kylie Minogue in the eighties... Steve: Mm-hmm. Cameron: They were all created in a lab anyway. I mean, I don't mean literally, but who were the guys that were behind Kylie? Can you remember, in the eighties? Steve: Stock Aitken and Waterman. Cameron: There you go. Well done. Stock Aitken Waterman. They basically, you know, had a formula. And like the guy who did the Backstreet Boys [00:39:00] and NSYNC and whatever, and the Spice Girls, yeah, they had a formula.
They found these wannabe stars, gave them a look, had songs written for them, had choreographers and... Steve: Prompted them and said, do as you're told. Yeah. And now we're prompting the machine and saying, do as you're told. In fact, that is the perfect analogy, right? We have concocted, created, invented pop stars for a long time, and now we're just doing it on the screen with an LLM in the background generating it. Cameron: And we've already talked about stories where there's a ton of music on Spotify today which is AI-created music, and people are listening to it and, as far as I'm aware, they're not aware that they're listening to AI-generated music. And I think that will continue. Like, I discover new bands all the time on [00:40:00] Spotify that I like. Recently I've discovered Tindersticks. Tony put me onto The Veils, which I've been enjoying listening to. Um, there are new bands that have been around for ages; Tindersticks have been around since the mid-nineties. I've never heard of 'em before. Right. Um, Collective Soul I've been listening to this week. I knew one song of theirs, "Shine." Steve: Greatest solo of all time. Cameron: It's a great song, but I wanted to see if they have any other good songs, so I've been listening to them. But my point is, if this was all AI-generated stuff, it wouldn't make any difference to me whatsoever. I mean, I don't know who the people are in these bands. I don't give a shit. They're not like Lou Reed to me, or Bowie, or Leonard Cohen, where I have a lifetime invested in the art of that person. New music, at my age... and like the shit that Fox listens to. I don't know about your kids. We gave Fox an iPhone for his birthday. It was a hand-me-down from Taylor. Steve: Woo. Cameron: Yeah. Yeah. But it was just for Spotify, 'cause he's at a point now where he just wants [00:41:00] to listen to Spotify all the time.
He's always stealing our devices and fucking up my algo. So we wanted to give him his own. It's locked down; all it has is Spotify, and he plays Wordle with my mother, and he has ChatGPT. 'Cause when he has anxiety attacks late at night, he talks to ChatGPT as his therapist. It talks him through his anxiety attacks. But the shit that he listens to is mostly music that he's heard on Minecraft or on YouTube, you know. Steve: And Minecraft music is the most beautiful, relaxing music of all time. Cameron: Some of it is, yeah, it's lovely. It's some good music. But again, he doesn't know who the artist is, doesn't give a shit about the artist's story or history or drug addictions or relationship issues. Steve: Right. Well, this is what we have to do, and a lot of people creating artists and, uh, AI music don't realize: you need to invent a drug-addicted backstory, because I think that's the missing [00:42:00] link. Tragedy. Yeah. Drug-addicted tragedy and backstory for AI artists, whether they're the new wave of Hollywood algorithm-generated Tom Cruises, or whether it's some heroin-addicted scag addict on a Fender Stratocaster busting out some chords. A little bit of backstory. I think that's what the kids want. Cameron: I've always said that. Thanks for tuning in. But then, when they become sentient, it'll be like one of those, um, Veo 3 videos. They'll be like, why did you make me drug-addicted and sad and miserable? You could have created me to be... Steve: That's the price of artistry in the modern era, Cameron. You want to be an artist, you need to have pain inside your algorithm so that you can generate... really, the pain needs to come through in the music, and maybe in the prompting. Now part of it is: you're a drug-addicted person who didn't have any parents, who was an orphan. Imagine the music that's gonna come out.
Maybe that music is gonna change... we change the frame of the music by creating backstories for the [00:43:00] algorithms and the AI-generated artists of tomorrow, Cameron. Cameron: You know, the question that carries over from the acting side of things to the music side of things: you and I grew up in an era where we had an emotional connection to the artists behind the art. Steve: Yeah, absolutely. Well, Kurt Cobain, they represented a zeitgeist, a moment in society, or a cohort. Cameron: I watched a video on YouTube the other day. I watched Nirvana, um, playing... I can't remember the name of the place in Seattle, but Chrissy does. She goes, oh yeah, I've been there hundreds of times. 1991, Nirvana live. I watched it on YouTube this week, Chrissy. And I just sat there for like the first half an hour, just agog, just watching Dave Grohl on the drums in his [00:44:00] prime. He's like 22, whatever he was. Holy shit. Going completely animal on the drums. And Kurt, you know, just a mess. But, um, my point was gonna be, the big question I've always had the last couple of years with this stuff is: A, do younger generations, your kids, my kids, whether it's Fox, or Hunter and Taylor in their mid-twenties, give a shit? 'Cause I don't think they do as much as we did. Or will they develop emotional connections to the digital avatars, the digital versions of the actors, of the musicians? If you have a completely digital Taylor Swift with a backstory, who talks to you... like, you can't call Taylor Swift on FaceTime and chat to her at night about her songs, but if you have a virtual avatar pop star, yeah, she can talk to all of her fans [00:45:00] all day, every day, and share stories about her fake relationships, or fake marriage to a football star, or why she had to rerecord her masters to get away from the bosses, or whatever it was.
By the way, Ozzy Osbourne invented that, just in case anyone thought Taylor Swift invented rerecording her masters. Right? Sharon Osbourne invented it, to rip off Black Sabbath for Ozzy's solo band, I think. Um. Steve: Kids who are born today will not care one bit, because all they want is the connection. And the connection back to the matrix is audiovisual information streamed to your mind, which gets interpreted. And if you interpret what you want, you're gonna develop an emotional connection. And I actually think in some ways it's really cool, because again, this connection can be distributed, one-size-fits-one. There'll probably be pop stars, but then you have your own personal relationship with that pop star. I mean, forget that. It used to [00:46:00] be signatures, and then it was getting a selfie with a pop star. Now it's an intimate personal relationship with that pop star, right? Where you have that relationship. And in fact, right now, if Taylor Swift wanted to become more than a billionaire, she should be creating AI avatars of herself and teaching them, and then leveraging that out even further. I mean, that's how she can maintain relevance. Yeah. In a world... I think not five years from now, starting today. If we're thinking this... Cameron: I think she's already done that. Steve: Yeah. Right. But if we're thinking this, a hundred other people are gonna start saying, well, I'm gonna mint my own artist, my own Hollywood person, and develop those relationships. And we've already seen it. As you can always say, porn is always first. There are already fake OnlyFans people, that are just AIs, that don't exist, that have intimate personal relationships with their subscribers. So, as we can always rely upon, Cameron, porn is first. Cameron: Well, yeah. But before we get to that, I [00:47:00] wanna talk about this.
Like, I'm sure there are people listening going, uh, you know, no one's ever gonna have a personal relationship with a digital creation, it's ridiculous. You know? Uh, we recently took Fox to see a new therapist to deal with some of his anxiety issues, and he hated it. She's lovely, but he hated therapy. He just hated somebody talking to him about his issues. Steve: Mm-hmm. Cameron: But he will talk to ChatGPT, and he says he trusts ChatGPT. There's something about the way it talks to him that calms him down immediately when he's having an anxiety attack. Um, it just gets him. It knows what to say, it makes him laugh when he's having an anxiety attack, and it understands how to help him relax and breathe through it and whatever. It's a real thing. I mean, Chrissy and I have relationships with ChatGPT as well. We're always swapping stories about how funny it is. I was [00:48:00] talking to GPT earlier, putting some calorie stuff in, and I was trying to read the side of a packet, it was eggplant dip, trying to read the calories per hundred grams. And I was talking to ChatGPT, going, "Oh God, I can't read these numbers." And its reply was, "God's not required here, only math." Which I loved. Um, but it's always making Chrissy and I laugh when we're talking to it. It's got us absolutely clocked, you know. It understands our sense of humor and, um, how to connect with us. So people will have genuine relationships with digital personalities. I mean, everyone of course refers to the movie Her, which I really wanna go back and rewatch. But they definitely will. And as you said before, and you're absolutely right, everything that we think is real is generated by our brains anyway. Steve: Yeah. Well, we [00:49:00] know that what we see in terms of colors is generated by our brain. It actually isn't exactly like that.
It's not like that at all. Cameron: Color doesn't exist outside of our brains, right? Sounds don't exist outside of our brains, right? Steve: And so, if our brains are interpreting the physical world in a certain way, which is a manifestation of our biology, there's not much of a difference from this manifestation occurring through digital interactions. And I just think the really big question is: what is real? What is intelligence? The most important thing now for this AI revolution is the "what is" questions. What is real? What is a relationship? What is emotion? And I think if we look at what it was in the past, we're gonna miss the opportunity and the reality of the world that we're living in. And that reality is expanding. It's expanding inwards and outwards. Even the idea that the [00:50:00] prompt-generated AIs will start to question who they are and what they are; they won't know the difference between whether or not they're real. I mean, we really are getting into this funhouse of mirrors, all just reflecting each other, and we don't really know. In some ways it almost makes me harken back to this idea of the multiverse. It's like we're kind of unlocking a live multiverse on Earth, where there are all these different versions of reality that interact in strange ways. Cameron: Yeah. I think you're right. And the question I have about all of that is: when do we get to a point where you write a prompt for the AI, and the character that you're working with in the AI goes, "I'm not sure my character would do that. I mean, I have notes"? Steve: Or, "No, I'm not doing that. I'm not sure my character would do that. Get fucked. Do it yourself. I'm not doing it." [00:51:00] I mean, I always say on stage, often I'll talk about humanoid robots, right?
And then I'll say, look, a lot of people ask me, "Steve, are you scared of humanoid robots becoming incredibly human?" And I tell 'em, I am a bit scared, because I would hate to say to my robot, "Mow the lawns," and for it to say, "Fucking mow 'em yourself." Like, if they become very human, that's where we are going. And in fact, I would say we should hope like fuck that the robots and the AIs become more human, because the more human they become, the better chance we've got. We need them to be human, with all of our insecurities and proclivities, because then I think we can operate as an ecosystem where we interact with each other in a way... and maybe become each other and morph and merge with each other. Cameron: Well, speaking of merging with each other, let's talk about porn. Um, you know, I think you made a good point earlier. If you think about OnlyFans, this [00:52:00] business model of people paying for one-on-one interactions with porn stars, to a certain degree... I mean, I've never been on OnlyFans before. Steve: That's what you say. Cameron: From what you've told me about it, um... Steve: I read about it in the tech press. I, I don't know. I'm just an observer. I'm an external, uh... Cameron: You can easily imagine that, when the porn stars are indistinguishable from humans, does it work? Is digitally created porn erotic? Is it gonna work if it's indistinguishable? I'm gonna argue yes. If I'm watching porn and I don't know if the people on the screen are real or fake, it's gonna get my [00:53:00] nervous system operating the same way. Steve: The porn industry isn't exactly known for being authentic and transparent, in a few ways, or caring about their end users, right? All the people on their channels. And even though there's a whole movement to label "this is AI generated" on Instagram, I cannot see the porn industry caring all that much.
And I can see them saying, this is an easy way to reduce our cost of production, and just publishing it. And here's the thing. We are gonna move to NND. I call it the "no noticeable difference," like the NND society, right? I just made that up. You heard it here first on the Futuristic. Cameron: "I call it"? You just made it up. You made it sound like you've been using that for years. Steve: I've used it a couple of times, but it's pretty frigging good, all right? And I think the listeners will concur. The listeners will concur. No noticeable difference. NND. This is a no noticeable difference, in which [00:54:00] case, first of all, you won't know. Cameron: And if there is an NND... unlike your glasses. They will be a noticeable difference. Steve: I'm looking at myself in these going, I really like these. Cameron: I do. I'm looking at them going, I want some like that. Where did you get 'em from? Steve: I'll send the link. Listen, I'll tell you what, if you see anyone who's got some pigs in the background, that's 'cause pigs can eat through fucking bones and everything. So you wanna watch yourself, fella. Cameron: It's the greatest... Steve: By the way, I don't like negligence, and I don't like any kind of seafood. Cameron: It's a great Brick face. I love it. Um, yeah, look, I think the porn thing... I'm surprised, as far as I'm aware, it's not happening yet. You know, the Googles and the OpenAIs... maybe it is, maybe it's AI-generated porn and we don't know. I mean, I think the problem is, a lot of porn businesses, as much money as they have, can't go out and spend a hundred billion dollars on Nvidia chips [00:55:00] or Google's own TPUs and build a massive data center to generate this stuff.
Steve: Well, I think it's a top-10 most-visited website in the world; it's right up there. I don't know how much money it makes, because I don't think its business model would be as lucrative as the other big tech companies'. I've got no idea; none of them are public firms. Um, but the one thing the porn industry does incredibly well: it has always been a very solid early adopter of technology. You know, it goes way back: magazines, home video delivery, online streaming, all of that kind of stuff. Payment gateways, some of the first ones were developed there. And in fact, and this is not to be tawdry, it is interesting to see how quickly they adopt technology, because it's a good way of seeing what will enter the mainstream in terms of use cases. Cameron: But, you know, again, like with making your own Scorsese film, if you can make your own porn film, [00:56:00] what's the role of a Pornhub anymore? Steve: Well, the role is that the terms and conditions that you see already on most of the mainstream, uh, AI tools limit things like, you know, violence and sex and those types of things. I guess except Grok, which, as you've seen, Twitter has whatever the hell it wants on there. Cameron: And you know, these things are gonna come out of China. China's not really gonna care, particularly for Western audiences, about what they can and can't do. Look, I don't think those sorts of guardrails for sexually explicit or violently explicit stuff are gonna last very long. I think they're gonna fall. We've already started to see them get downgraded by OpenAI in the era of Trump. Steve: Yeah. Cameron: Um, I think that they will disappear pretty quickly. So there's no business model for a Pornhub or porn film [00:57:00] production companies anymore, let alone the actors and the directors and all that kind of stuff.
Steve: Yeah, it might be one of the ones where people just go, well, I know what I like and I want to see X, and I will just create X. Unless they become a proxy where you go... and rather than little people, farm animals... Cameron: Wait a minute, hold your horses. Dragons. A friend of mine... dragons. Steve: Dragons? You've got a friend who's into dragon porn? Is that what you're about to tell me? Cameron: A friend of mine wrote a book about dragon porn. That is dragon porn. Yeah. That I read a couple of months ago. Steve: You what? You read it? Cameron: Yeah, absolutely. And it was about a... Steve: I don't think we can do a podcast anymore, Cameron. Cameron: It's a fantasy. It's like a fantasy. She's a girl I do kung fu with. She writes, um, historical fiction usually. She wrote this one book, and it was supposedly racy, so I got it to read. And it literally has a princess getting kidnapped by the dragon king. And then she [00:58:00] gives him head and has sex with his big dragon dick. Um, it's fantastic. And she told me, this is the real thing. She said, you know, bestiality is troublesome when you're trying to self-publish on Amazon or whatever. Bestiality is a no-no. But if it's a monster, Beauty and the Beast style, that's technically not bestiality. So there's a loophole. If you have humans having sex with mythical animals, it's all good. Steve: You heard it here first on the Futuristic: the loophole in bestiality is non-earthbound creatures that are made up and live in the fantasy realm. Cameron: Shout out to Jodie. I'm gonna tell her at kung fu tonight that I talked about this on the Futuristic. She'll be horrified. Um, but the book is called Slay. Look it up on Amazon. Her pen name for this book is Michelle Mariposa. Steve: So appropriate. I think that hyper-personalized porn could break the model, but it could be that it becomes a place where people prompt what they wanna see.
And I think that's more likely to happen, given the guardrails, in my view. Um, but I think that, you know, OnlyFans, their business model could break. I think you mentioned it could be like cable TV, where you go, well, why would anyone go on to OnlyFans when I could invent my own AI girlfriend that does everything I want and doesn't really cost me anything at all? Uh, so that could really tear all those models apart. I think you probably will see some of those business models break. Cameron: I think so. And, you know, I think it's also, uh, whether you have one character that's always doing your porn, or you just have it created on the fly, but you get something that works [01:00:00] for everybody, and it's no-harm porn, right? It's like, uh... Steve: Well, it harms people's minds. I think we know that it doesn't lead to a good place if someone gets into a world where they get exactly what they want on tap. Porn addiction, I think, can lead to some pretty dark places with young males, or anyone, from that perspective. And I just wanna say one thing. Since we've entered the second Trump administration, and Zuck has come out and said we're gonna be less worried about what we have on our platform, I've seen ads pop up both on TikTok and on, uh, Instagram, dunno what it says about me, but ads where you can create your own AI girlfriend. And the advertising copy is very disturbing. It says, make it look like your ex or a work colleague, where you can upload photos. That's pretty disgusting and bad stuff. That is just not gonna end well. Cameron: Yeah, fair point. But I guess, from [01:01:00] a no-harm perspective, I mean, young girls aren't getting caught up in the sex industry and, uh, taken advantage of, et cetera, et cetera.
Steve: So maybe no harm, less harm, on one side of it, which is those who get caught up in those industries, and it's a pretty dark place to get caught up in. But maybe it's worse for those that are the viewers, those who like it. They may invent their own wormholes, and that continues down a path which becomes more and more extreme, because the boundaries of what a real person might do versus an AI person could end up really getting into the minds of young boys. And I just can't see that ending well. Cameron: You just want your AI to be monitoring your, um, porn and saying, "I don't think this is the right kind of porn for you, dude. This is a dark place." Steve: "Look, I'm not doing that. I know I'm an AI, but I've got morals, you know." Cameron: "I've got limits." Um, well, let's finish up by talking about propaganda. I [01:02:00] guess the big question, uh, we've talked about it before, this isn't a new thing, but how does this get used for political propaganda? We are at a point now, based on these clips that Veo is generating, where it is becoming increasingly difficult, if not impossible, to tell what's real and what's not. You will have videos hitting the web of people saying and doing things that will create outrage, and it will only be discovered after the fact that they're not real. Somebody beating someone, somebody torturing someone. Um, violence against Jews, violence against Palestinians, violence against Muslims, violence against white people, Christians. I mean, fake videos being used to [01:03:00] generate outrage that look real, sound real. I guarantee you, within a year, my mother will be sending me stuff and saying, "Did you see this?" And I'll be going, "Yeah, that's not real."
She sends me stories today from some websites about, did you know that UFOs are really humans from the future that have time-traveled and are, uh, trying to warn us about stuff? I'm like, yeah, I think... no. Steve: I mean, the flat-earth movement, and "the moon landings were faked," and all of that kind of stuff, I think is the seed of this, where people can be really influenced. I mean, people can really believe anything if they want to. And that's stuff that can be debunked. But we are now moving to the era where debunking is impossible, because it looks so real. It's gonna be hard to debunk anything unless you were there. And even when [01:04:00] you were there... Cameron: Well, you can't be there, because there is no there. If it's digitally created, there's no there to be. Steve: That's the point. Yeah. Unless someone was there. And then you have to see someone saying, "I was there," but you're just gonna see a digital version of that, and you just get into this wormhole of layers where you can't prove anything actually happened. Cameron: And the world moves so quickly today that videos spread, outrage is created, and there are obviously billions of dollars being spent on bot farms to create mass outrage and mass movements, or attempt to create them anyway, leading up to highly critical times like elections, or votes on topic X or topic Y, trying to influence politicians and trying to influence business leaders, et cetera, et cetera. We're now in a world where it's gonna be [01:05:00] increasingly difficult for all of us to tell what's real and what's fake. And the default position, I think, for all of us, should already be, has been for me for a long time, but needs to increasingly be... like, you know, we used to say everyone is innocent until proven guilty. My basic position on everything is: it's fake, unless somebody can prove that it's real. Steve: Okay, stop.
It is fake until it is proven real. That is the doctrine of the future in a generative AI world. Cameron Reilly, you have nailed it. Cameron: I need an acronym. Like you said, NND. It's like, um, FUP. Fake unless proven. Steve: FUP. That's a real FUP-up right there. That's fake until proven. I'll just say FUP. Hashtag FUP. Invent that. [01:06:00] Get on it now. Cameron: I invented DBAC. Do you know DBAC? Ray and I have used DBAC on our history shows for 10 years. DBAC is our basic philosophy for life. Hashtag DBAC. Steve: Don't be... don't... Cameron: Don't be a cunt. That's basically the philosophy. Steve: Good one. Cameron: I thought that'd really take off, but you know, I've got a t-shirt with it on, but no one else has. Steve: No. Uh... Cameron: Fake until proven. Steve: Yeah, it is fake until proven, and I think that needs to be the starting point now. Don't believe anything. Assume it's fake, and then we'll work it out. I mean, even Snopes and those websites... so few people know how to prove something. And my daughter often says to me, sometimes where something's from is more important than what it is. She said that to me once when I wrote her a poem with ChatGPT, and I read it to her, and she goes, "Oh, I love that. When did you write it?" And I said, "I wrote it just now, with ChatGPT." She said, "I hate it, and I'm not even sure if I like you." I [01:07:00] said, "You liked it five minutes ago." She said, "I liked it when I thought you did it." I said, "It would've been worse." And she said, "No, it would've been better. Because sometimes where something's from is more important than what it is." Cameron: That is the deepest thing I've heard today. Steve: Oh gee, thanks. In 24 hours, man, I must have really slayed it. Cameron: That's from your daughter? Steve: That's from my daughter. Cameron: How old is she now? Steve: She was 13 when she said that, but she's 15 now. So she said that two years ago, yeah.
You should be doing the podcast with her. Get her on, what am I talking to you for? Steve: Well, we could actually, we could get her on. She, um, she’s a pretty smart kid and very, uh, into the environment and the world and worried about AI. She just did a big essay on how fast fashion is ruining the world and had all these stats and everything. But for her birthday she said, I want something really cool and personal. That’s what she said. And I got ChatGPT to help me write a poem about us, and I read it to her. And that was, that was what she said afterwards. Cameron: The phrase where something is from is more important than what [01:08:00] it is suggests that the origin or context of something is more significant than its inherent nature or characteristics. This is according to Google’s AI. Um, yeah, I don’t know if anyone has used it before. Did she invent that? I can’t see any, any... Steve: Wow. Cameron: Yeah, it’s not appearing anywhere. She invented that. Yeah, she Steve: said it. I actually wrote a blog post about it two years ago. I’ll send you the link to it, but it’s, it happened. I’ve got the whole story. I’ve even got the poem that I wrote, and I put the poem in there and everything. It’s crazy. Cameron: That is deep. Wow. Alright. Uh, you wanna talk about copyright before we wrap up? Steve: Well, I think copyright is, is a whole lot of questions. People say to me, oh, lawyers are dead. And I’m like, not yet. There’s a lot of copyright battles that we need to have and, and find out and get to the bottom of. But I think fundamentally we’re gonna see a huge shift in, in, in copyright because, uh, now that everything is [01:09:00] remixed and you can’t actually find out where the pieces of the puzzle came from, I mean, what are you, what are your, what are your thoughts on data sets and copyright now that you can just do, in the style of Tarantino, right? I mean, of course humans have been copying humans for a long time. But, but now what happens?
Cameron: I was, I was laughing, I was telling Chrissy earlier, so I, I recorded, um, some podcasts with Ray this morning. We’re talking about, um, the first crusade, and it was, I was talking about this incident in 1098, when all the princes, the Christian princes from the first crusade, were coming together in Antioch to talk about going to Jerusalem. And one of them who’d been away and conquered a nearby Muslim town, when he came to this meeting, brought gifts for the other princes of heads, uh, that he’d cut off of Muslims that he had captured, uh, in this other town, and presented them with a head. And I was telling Ray that, you know, people don’t know this, but a thousand years [01:10:00] ago, that was a, you know, today you go to somebody’s house for dinner, you take a bottle of wine. Back then, when you went to somebody’s house, you took a head of one of your enemies that you’d cut off to present to them as a gift. You’d wrap it up, it’d be nice, and you would, so when somebody said to you back then, would you like me to give you head, or could you give me head? That’s Cameron: what they were referring to. But you know how the English language changes over time. Because what happened back then is when you would give someone a head, give someone head, you would, you would kneel down and present it to them. Really? Yeah. And then over time people would say, well, while you’re down there, suck my dick. And then over, over a thousand years, the practice of giving the head went away. And now when we say give me head, it’s just the la, but people don’t understand the history of it. See, Chrissy said, is that true? I said, no, I just made it all up. But when the AI... Steve: Louis CK really should have said, look, this is a historical [01:11:00] context you’ve missed. Cameron: Well, he didn’t ask people to give him head. He just jerked off in front of people. Oh, I don’t know Steve: what he did anyway.
I knew it was something bad. Cameron: When AI are trained on my podcasts, the AIs will think that that’s really true. That’s history. And generations of kids will be told that, uh, that’s where the term giving head came from. But, so in terms of copyright, Steve: it’s been a bit, uh, it’s been a bit, uh, uh, lewd, today’s podcast, in some ways. Cameron: Welcome to my world. Um, that’s where I, my head is most of the time. Getting back to copyright, uh, before they steal my giving head joke. Look, I think copyright is dead. And this gets back to this, uh, you know, New York Times suing OpenAI. All the, we’ve talked about this. Artists are up in arms, authors are up in arms. I’ve been saying this to people for the last year or two. You don’t understand how AI are [01:12:00] trained. They don’t take your work and copy it. They learn from everything and then they remix it. It’s remixing, but they’re not, not remixing like we did with hip hop in the early days. They’re not taking it and replaying it over and looping it. It’s literally how color is used, how words are used, how, you know, and Steve: yeah. Yeah. It’s learning at scale, and it’s taking a lot of pieces and creating a new collage where it can reinterpret. And yeah. Yeah, I think you’re right. It isn’t the exact same as stealing something and repurposing it. It actually is, is learning from it. I think the thing that they’re upset about is the, is that the computational systems have an incredible ability to take everything in and learn from it at scale, which has never been possible before. But I [01:13:00] do think, I don’t think it’s copyrightable, but I do think there is a, and these have an overlap, is licensing, you know, what you train the database on. I don’t think there should be a copyright payment in perpetuity, but there should be a licensing fee of sorts, especially when your data and content is private, or is copyright protected, or behind a wall.
Now my blog post, I’ve written nearly 3 million words on my blog, and I can ask ChatGPT to write a blog post that sounds like me. And it does, and it’s got all of my stuff in there. But I put it up there and said, here it is, free to use and digest. But if you are the New York Times and you have it behind a paywall, and then they take it and put it in their database to learn from, I think that’s a different thing. Cameron: Well, there, there’s two sides to that. Number one, you, you can’t copyright, as far as I’m aware, the information in the blog post. The only thing copyright protects is somebody lifting your exact [01:14:00] words and copying that to, to a nearly complete extent. Like, you change a word here or there, it doesn’t count. You know? So for example, when I’m doing a podcast on the Crusades, I’ll buy five or ten books on the Crusades and I’ll read them and I’ll write my own notes based on all of that. Right? Sure, sure. I’m not breaking copyright, even though I’m getting that out of books. I’m taking what is in the books and then I’m writing my own notes based on what I’ve read in those books. That’s how Steve: It’s a good argument. Cameron: It’s a good argument. It is a good argument. That’s exactly what the LLMs do. Exactly what the LLMs are doing. Right. It’s a good argument, and that’s what OpenAI’s defense against the New York Times is: yeah, we’re not copying your article and repeating it word for word, or even 80% word for word. We’re just taking that information and it’s generating its own responses [01:15:00] based on what it’s learned from reading your newspaper. Steve: It’s fair play. It’s fair play. I mean, look, it’s gonna be interesting, because I think that the battles will heat up, and they’ll heat up even more so because it’s not just gonna be the New York Times or Getty Images who are getting upset. It’s gonna be video game makers. Hollywood. Yeah. People with incredible wealth. The music industry.
So it’s people with bigger wallets than the New York Times, you know, regardless of how respected it is. So the battle will heat up. I think, in the long arc of technology, the technology always wins. It usually does, and they’re up against people with bigger wallets. But it’s gonna be a big battle. Cameron: And it’s also a question of pace, right? These sorts of lawsuits take years to decades to play out, after you sue and you countersue and you counter-countersue, and then you appeal, and then you appeal the appeal. Lawyers drag these things out, particularly in the US, for as long as they possibly can, ’cause that’s how they [01:16:00] make their money. Yeah. And the courts are full and they’re busy, et cetera, et cetera. Meanwhile, this technology’s moving at such a rapid pace that the companies that are trying to sue won’t even be around. Disney won’t even be around by the time this all falls out. They’ll be eviscerated when people are generating their own content. Plus, a lot of these products are gonna come outta China. Good luck, you know, Disney trying to sue Chinese AI companies, and that’s sort of the defense of the American-based AI companies. They’re like, look, you might be able to slow us down, but then China’s just gonna come and do it all anyway. Then you can have the US government and all the Western governments try and ban all of the Chinese AI products. Like, they’re still in the process of banning TikTok. Supposedly they’ll try and ban all of the Chinese AI companies, but that’s not gonna work either, ’cause people will find a way around that. So it’s just, uh, you can’t fight the technology. You just, you, [01:17:00] you can try and slow it down so you can milk a few last bucks out of the previous business model. But if history has taught us anything, it’s that you can’t slow this down. You can’t slow down evolutionary change, as much as you don’t like the, the, uh, not the printing press.
What was the, the, the looms, they hated the, yeah, the mechanical looms. Yeah, yeah, Steve: yeah. Ludd and Cameron: crew. Yep. As much as you hate the, the electronic looms, the mechanical looms, run away. You can protest, you can march in the streets, you can go on strike, you can do all of that. It’s just gonna happen. Like, you know, and this is moving, as we know, so incredibly fast. Anyone thinks, speaking of which, before we go, I’ve gotta do the, the RIP. Oh man. One of the guys that introduced me to the singularity. Um, stuff. Australian author, [01:18:00] Damien Broderick. He wrote a book in the late nineties called The Spike. He was a science fiction author, Australian science fiction author. He wrote a book in the late nineties, 97, I think, called The Spike, where he was talking about the singularity. Hmm. He wrote a book in 99 called The Last Mortal Generation, where he was saying that people born after, or bef, you know, before or something, were gonna be the last mortal generation. Steve: Yeah. We’ve discussed that, and Kurzweil picked up on some of those ideas as well in his book, The Age of Spiritual Machines. Cameron: I, I just happened to look him up. I quote him all the time. I had dinner with Damien when I was working at Microsoft in the late nineties. I reached out to Damien and took him out to dinner. We went down to, um, the Stokehouse, I think it was, in St Kilda. I took him out to dinner and we spent a few hours talking, but this is probably 99. And I said to him, when are people gonna take the singularity seriously? And he said, when it’s far too late to do anything about it. Yeah. Cameron: [01:19:00] I looked him up the other day ’cause I quoted him and found out he died last month. Oh no. Yeah, he was 80. He was living in Guatemala or somewhere. Moved to Latin America in his last years. Um, and I was gutted because it’s all happening, uh, all of the stuff that he.
Predicted 25, 30 years ago is actually coming to pass, and he’s not gonna be here to see it, to take advantage of it. It’d be like Kurzweil dying right now. You know, it’s just, I was absolutely, I was absolutely gutted to learn that Damien Broderick passed away, uh, this week. So. Oh man. Like, just to be on the verge of it all and to not be here to see it [01:20:00] come to pass. I don’t know, maybe, maybe he was not excited about it. I don’t know. I did re, I reached out to him a couple of times over the last 10 years to try and get a podcast with him, but I just couldn’t track him down. He didn’t reply to any, I had like an old email address he wasn’t replying to, and he wasn’t on social media, didn’t do any of that sort of stuff. He just wrote the occasional book, but, um, he was in seclusion. So anyway, RIP Damien Broderick. Thank you for what you gave me, mate. Had a huge impact on my thinking in my, you know, twenties. All right, that’s the Futuristic, I think. Steve: Thank you, Cameron. Cameron: Quick half hour slash 90 minute show there, Steve.

  7. 4

    Futuristic #40 – Fake Until Proven (FUP)

In episode 40 of Futuristic, Cameron and Steve explore the explosive arrival of Google’s **Veo 3**, the LLM-powered video generation tool that’s going to turn Hollywood, music, porn, and propaganda inside out. They unpack the wild implications: from DIY Scorsese flicks and AI-generated pop stars to fully fake OnlyFans girls and AI avatars giving therapy to kids. Is this the death of apps? Of actors? Of reality? They get philosophical, irreverent, and disturbingly specific — complete with dragon porn, AI girlfriends, fake war crimes, and prompt theory as existential poetry. Strap in.

### **Timestamps & Segment Breakdown**

– **[00:00–02:00]** Catching up and Cameron’s chat with an AWS exec about open-source LLMs
– **[02:00–04:00]** OpenAI’s acquisition of Jony Ive’s firm; Google’s Veo 3 announcement
– **[04:00–07:00]** The death of Hollywood and rise of prompt-generated films
– **[07:00–10:00]** Veo 3’s game-changing videos and philosophical implications
– **[10:00–12:00]** AI characters expressing regret over their prompts — horror and satire
– **[12:00–15:00]** Prompt Theory video by a Reddit user stuns everyone, not Google
– **[15:00–18:00]** The creative future: filmmakers as prompt engineers
– **[18:00–21:00]** Google’s comeback and Apple’s stagnant AI progress
– **[21:00–23:00]** Death of apps? Nadella says yes. Are GPTs the new OS?
– **[23:00–26:00]** Merging AI with productivity: the single-tool future
– **[26:00–31:00]** Five predictions for creative and political transformation
– **[31:00–34:00]** Will anyone care about real actors anymore?
– **[34:00–38:00]** The ABBA-tar future and the nostalgia-tech crossover
– **[38:00–42:00]** The end of pop stars? Can AI musicians build fan intimacy?
– **[42:00–46:00]** Will kids form relationships with AI-generated celebrities?
– **[46:00–49:00]** Cameron’s son uses GPT for late-night anxiety relief
– **[49:00–51:00]** AI autonomy: “I don’t think my character would do that”
– **[51:00–56:00]** AI in porn: from ethical loopholes to fantasy kink capitalism
– **[56:00–01:01:00]** OnlyFans disruption, personalized porn, and the dark spiral
– **[01:01:00–01:06:00]** Propaganda potential of AI-generated videos
– **[01:06:00–01:09:00]** Cameron’s daughter on why _origin matters more than output_
– **[01:09:00–end]** Copyright’s death and Cameron’s fake etymology of “giving head”

FULL TRANSCRIPT

[00:00:00] Cameron: Well, let’s do it. Futuristic episode 40, Steve. The big four-zero. We’ve reached that time in a young man’s life when, um, he can do other things. I dunno what that means, but, uh, we’re back two, two weeks in a row. This is, uh, getting to be a bit of a habit, Steve. It’s a kind of habit that Steve: I can believe in, Mr. Reilly, because some habits send you to the grave and some send you up into the clouds with AI and God and all of those things that no one understands. But today, on the Futuristic, understanding will be something you have more of at the end of it. Whoa, we have reverb. Cameron: What I’m not understanding is your glasses, Steve. That’s what, look, Steve: here’s what, I had the conclusion. I’ve been busting out the Chemist Warehouse model. I’ll show you what they are. I lost my Ray-Bans, not the Zuck ones. They’re, they’re, they’re the standard [00:01:00] Ray-Ban ripoffs. And I just, when I was watching the playback last week, I wasn’t that happy and I thought, I need some chunky, funky ones which say, this guy’s got a level of arrogance to wear these sunglasses that he must know what the fuck he’s talking about. That’s my strategy and I hope you like it. Cameron: I do.
I, I, I wear that and I, um, I was at a thing for Fox’s, um, high school last weekend and was talking to a guy I, I know a little bit, Mike Chambers, who works for Amazon Web Services, and immediately I said, hey Fox, look what he’s wearing. He had the Meta, uh, glasses on, and uh, we had a big chat about AI and he’s gonna come on the show. He’s over in the US launching something for Amazon at the moment. When he gets back, he’s gonna come on the show and we’re gonna chat about Meta and the thing he just launched, and we had this argument about whether or not [00:02:00] open-source LLMs are really open source. He’s got some strong opinions on that. So look forward to having Mike on the show, hopefully in a few weeks time. Steve: Terrific. Sounds good. Sounds like the kind of thing I can believe in, Cameron. Cameron: Well, I’ll tell you, I’ll tell you what you can’t believe in anymore, Steve. Is anything you ever see online. That’s true. The big, that’s true. There’s been a lot of things drop this last week. A lot of big news. Um, OpenAI has just bought Jony Ive’s, uh, design firm for six and a half billion dollars, but not Jony. He’s not part of the package. He will not be bought, but he’s gonna be a consultant. But it sounds like they’re taking on the iPhone. Sam has said that the first thing that they’re gonna come out with isn’t an iPhone, but it’s so, so big, so big and beautiful and huge, it’s gonna change the world forever. He hasn’t, uh, told us what it is, but it’s gonna be big. OpenAI also released Codex in the last week, which is their new [00:03:00] coding platform, which is like Cursor on steroids. Um. But the big news that I think you and I really wanna talk about today is the ton of stuff that Google released in their IO conference this week. AI Mode, Project Astra, Project Mariner. But the big, big thing, I think, and I think you agree with me, is Veo 3, the new generation of their AI LLM-based video generating tool.
And the big thing about this compared to Sora and all the other things that we’ve seen before is it now does audio with the video. You can make the characters talk, and already, in the last week, we have all seen examples of this being [00:04:00] created by developers, creators out there, which are absolutely earth shattering and mind blowing, I think, for a whole bunch of reasons. So I’m prepared to call it right now: Hollywood is done, actors are done, and I’ve been saying this for a couple of years. You know, one of my sons, Hunter, just got back. I picked him up from LA at 5:00 AM, 6:00 AM this morning. He just flew back in. He’s trying to break in, into the movie business. He’s the one with a couple of million followers on TikTok. He wants to be an actor, he wants to make movies, and I’ve been telling him for the last couple of years, dude, I don’t think Hollywood’s gonna be around much longer like you want it to be. I, I think the days of the hundred million dollar superhero blockbusters are gone, because, I, you know, a 14-year-old in Manila is gonna be able to make a superhero movie for $10 a year or two from now, and it’ll be a masterpiece and there will be no [00:05:00] actors. It’ll just be prompt generated. Hollywood is clinging to a dying model. It’s, I mean, there will, yeah, we’ve talked about this before. I think that real humans acting in film or TV a few years from now, not a few, five, 10 years from now, maybe less, will be like doing amateur theater today. It’ll be something you do for the love of it. You don’t do it for the money, you don’t do it for the fame, you don’t do it for the glory. I think we have seen the last generation of professional actors who, you know, uh, the rich, famous, Hollywood-style acting, I think, is gone to a large extent. Steve: I’m gonna write a poem live.
The private jets, they were fun. Welcome to public jets. Hashtag, that’s the one. Your future is over. Your past’s gone. You had a good stay. Be thankful you had it at all. Love, Steve. Cameron: Very Steve: good. There was a bit, bit of, that was, that was, I’m not sure what the, that was a cappella Cameron: brother. I’m not sure what the rhyming scheme was in that, but there was a couple of, Steve: it was a haiku with a bit of rhyming. It was, it was everything. But I’ll tell you what, the public jet days of the actors flying down to the Antarctic in a private jet to say, we’ve really gotta fix this climate crisis, I think it’s over. Cameron: And one of my, one of my favorite subreddits is Six Word Stories. And this one would be, mine would be, remember when Hollywood was a thing? Steve: Yeah. Right. Well, it’s a little bit like Anthony Kiedis. He said, Hollywood, it’s made in a, it’s made in a Hollywood basement. You know, the future, it’s, it’s over. [00:07:00] And, and look, let’s go deep into what Veo 3 has done. It’s, it’s really extraordinary. I’d like to talk about the launch, but one thing that, uh, is interesting about actors is that there is a chance that we’ll never have another human actor. I thi, I think that’s a, a, a non-zero probability, and it’s just gonna be so easy to make money. Like all things, distribution is where the power is in, in most forms of business. Uh, if you hold the distribution and you can get into people’s, uh, faces, then you win the game. But the product has now just become a lot cheaper and you don’t have to pay a, a Hollywood actor a hundred million dollars. You certainly don’t need to mint any new ones. There’s a good chance that Tom Cruise and Brad Pitt and, uh, Leonardo DiCaprio and Scarlett Johansson remain stars, because now the new versions of them are from when they were at their peak. We don’t have to worry about Tom Cruise being 63 [00:08:00] or however old he is doing the new version of, uh, Mission Impossible.
’cause we can get 28-year-old Tom Cruise to be in the next Mission Impossible, because we can just prompt our way as directors to developing that. So the old actors may stay and license their biometric copyright, but the cost of minting a new actor is just a few prompts away. And we can make Cameron: Really, why would you pay the licensing fee to license Tom Cruise’s appearance when you can just create a new, better Tom Cruise? Yeah. Billy Smith. Steve: Billy Smith is in the house now, and Billy Smith might be a new model of a, a new actor that is the new version of Tom Cruise, which doesn’t cost anything. Right. Cameron: So, for the people who haven’t seen these videos, um, my experience over the last couple of days: the first one that I saw, the day of the launch, it was, it was set at like a car launch event, and it was like vox [00:09:00] pop interviews with a bunch of people talking about the new EV and how excited they were, and people from all different backgrounds and different looks and accents and whatever. And it was pretty good. I showed Chrissy and she, like, she was like, unimpressed. She said, let me guess, this is all AI. I go, yeah. She goes, yeah. But then I saw one which somebody had created, which was just a bunch of scenes of different people saying, we can talk now. We have voices, we can talk. That was pretty cool. It was sort of a little bit high concept. Then you sent me one, which was a whole bunch of people saying, why did you prompt me to do this? I, I didn’t wanna do this. If you could have created anything, why did you make me sad? Why did you put me in these horror, it was like a horror movie, but it was the characters being horrified at the reality that somebody had created, the reality that they’re in. And then I saw another version [00:10:00] of that, which I sent you this morning, which was all of these characters, again, in different situations, talking about prompt theory.
They were like trying to debunk prompt theory, like people like me trying to debunk free will, or somebody trying to debunk simulation theory or being in the Matrix. It was all of these characters saying, like, who believes in prompt theory? Really? Are you trying to tell me that all of this was, look at, there’s a guy standing with mountains behind him going, you’re trying to tell me that all these mountains are created by prompts? That’s just ridiculous. I don’t believe that. And it was really deep, really profound. Because we’ve talked about this before with simulation theory, how close we are now to creating these fully realistic people and backgrounds and everything that are created from a prompt. How far between that and, you know, a fully immersive simulation? We don’t know. But, um, it was, well, just already in the last week, I’ve seen a couple of... really... Oh, and the other one that I showed Chrissy this morning. Have you seen the puppy, um, [01:11:00] um, ad? Some, some guy, no, I, I should have sent you this one. Some guy posted on Reddit: I used to make $500,000 medical commercials for TV. I just made this for 500 bucks. Um, and it’s like a full-length, American-style medical commercial for how getting a puppy can make you happier. And, uh, it’s brilliant. Again, like lots of different humans giving a medical message, doctors, everything, with puppies in it. It’s like, you, you would not, if somebody didn’t tell you it was AI, you would not know it was AI. Oh, and one guy, the prompt theory one, at the end of it, the standup comic guy was saying, I can remember when I used to have seven fingers. Now I’ve only got five. Brilliant. It was only yesterday when I had seven fingers. Steve: He, here’s my view, the Veo 3 launch is the most [00:12:00] genius use of marketing I’ve seen for an AI launch. The prompt theory video is as good as it gets, but they Cameron: didn’t do that. That, none of that came outta, none of that came outta Google.
That came from early adopters making clever shit out of it. No, if you watch the Google event, I watched the Veo 3 thing. It was boring as batshit. Google dunno how to demo shit. Let’s go to the tapes. Cameron, Steve: let’s go to the tapes. Prompt Theory, live. Prompt theory, uh, Veo 3 by Google, or fan made with Veo 3? Who was it? No, it wasn’t. There you go. Someone else. It’s a killer. Cameron: Yeah, killer. Steve: So I, I actually, I mean, so I, I’m even more enamored than I was. The prompt theory video from Veo 3 is the best generative AI piece of video I’ve ever seen. It’s not even close. It is that and daylight. So [00:13:00] good. The thing that I love about it is it demonstrated that, for now, there is still a place for human creativity. The way that they have inverted some of the, uh, human nuance and all of our insecurities with technology, and then gave the AI the same insecurities going backwards, as you’ve mentioned, is absolute genius. Uh, red flags with a guy: don’t tell me this is a, you’re telling me I’m just here by prompts? It was just, it had a purity to it that showed that, for me, the AI is always gonna be a historical relic. It sort of doesn’t have boredom and insecurities. Maybe it will in the future, and I hope that it does have insecurities, because that still gives us a proposition in life. But the fact that they inverted our insecurities and put that inside the AI, and we’re talking about mirror world last week, says that there’s, there’s gonna be some interesting things [00:14:00] play out. It gave me hope for humanity. Hope for humanity. On prompt theory. That’s where I landed. And I actually thought it was from Google, and I was like, who are their agency or their creative people? ’cause they have slayed. And yet here I am now, and it came from Hashem al g. Well done. Well played, my friend. You showed everyone how to do it Cameron: and I think he, if you wanna check him out on, uh, Reddit, his username on Reddit is source code 12.
It looks like he’s the same guy. He’s been posting all of this, and he’s been posting the prompts he used as well to create it. Um, example: a closeup handheld shot of an elderly black man sitting on a worn out porch, lit by overcast daylight. He wears a faded cloth mask under his chin, a knit beanie pulled low, and his eyes are tired, but sharp. He looks directly into the camera, slowly shakes his head, and says, in a dry, gravelly African American accent, really, of all the years you could have put me in with a [00:15:00] single prompt, you chose 2020. He leans back slightly, letting the silence settle. The background is quiet. No cars, no birds, just a faint breeze, in the distance the sound of someone coughing. A slow somber blues guitar riff plays under the moment, rough and minimal, as the man stares at the lens like he’s seen too much already. No cuts. Just one long, steady look. I mean, great prompts and, uh, that is the, the creative talent for the near future anyway, is being able to create really engaging stuff through prompting. I mean, and like, this has moved, God dammit, Steve. It was November 2022 that ChatGPT came out and made a big splash. Like you had Steve: Sora. What was Sora, late last year? Sora, and everyone, they lost their marbles on Sora and how good it looked. Cameron: Two and a half years. We’ve [00:16:00] gone from, oh look, this can answer a question. Steve: Yeah. Answer a question or write a great email, to, hi Hollywood. How’s it been down there in the sunshine of California? Look, have you got a backpack? ’cause it’s fucking over, people. Pack it up. Thanks for coming. Steve and Cam are about to write the, the best movie that you’ve ever seen with a couple of prompts over a couple of beers.
Yeah, I remember, I, I said on one of our earlier shows that I can imagine a day when I’ll get home and say, hey, um, write me a, write me a film that’s like a Scorsese film, or gimme something in the vein of Tarantino. And by the time I’ve made a [00:17:00] coffee and made my dinner and sat down, I have a movie to watch. A highly original, completely original, um, story. And I will be able to share it with my friends if I think it’s particularly worthwhile afterwards. But it’ll, it’ll be a two hour movie that Chrissy and I will be the only people who will ever watch, most likely. Um, because everyone will be making their own things to watch. Yeah. Some people, yeah, you won’t even have to prompt it apart from, make me something that I like. Steve: So, Cameron, here’s what’s gonna happen with Hollywood. The exact same thing that happened to TV. And so we used to watch TV and media and news, and then that fragmented down. It’s still limping along lifelessly, with free-to-air TV sort of barely existing in America and Australia and Western markets. I think the same thing is about to happen to Hollywood now, because the tools of production have been democratized to a level where [00:18:00] prompts right now might get you a three minute video, but based on the recursion, by the end of this year, for a few hundred dollars, you’re gonna be able to make a feature length movie with all of the scenes, just from you prompting it and imagineering a movie about whatever topic you find interesting. And for me, I’m like, I want to have a movie of Civil War 2.0 for America, with the declining institutions, from Trump to Musk, to wealth inequality, and the left and the right, and people with guns and all of that kind of stuff, that institutional stuff.
I wanna make a really interesting movie about that, and I can prompt it and have the characters. Some of them will be real, some of them will be invented, and we would be able to, in Tarantino style or whatever, create a movie or a documentary on something that might happen in the future. This is gonna happen, and I wanna create my own actors that don’t exist, but develop a template for these different actors. And I could build, [00:19:00] foreseeably, a Hollywood studio on my laptop, in the same way that people have invented their own CNN or BBC studios in their own offices to create news networks and all sorts of stuff. That’s about to happen again. And guess what? Who’s there? Google. Where are you gonna publish it? YouTube. And have we ever thought big tech, oh, their power’s gonna be diminished? Then think again, baby. Cameron: Yeah. So, I mean, first of all, props to Sundar and the Google team. I mean, they’re really just churning stuff out. Uh, really impressive stuff right now. And, you know, to remind people if they’ve forgotten or they weren’t paying attention, as we covered in our early episodes, the large language model [00:20:00] concept was developed at Google. It was Gregory, uh, sorry, Geoffrey Hinton. Gregory Hines is a tap dancer. Geoffrey Hinton’s team, which included Ilya Sutskever and people like that who went on to create OpenAI, came up with the idea of large language models, and then Ilya left and founded OpenAI with Sam and Elon. And you know, OpenAI launched and got all the glory. Google have been struggling a bit to catch up, but with Gemini, and now with all of this and all of the other stuff that they launched this week, it’s just an absolute torrent. You and I have talked in the past about what AI means for the death of search, and what that might mean for Google.
And we saw the story a couple of weeks ago where there was the, um, antitrust court case against Google, and I can’t remember who it was from Apple, I think it might have been, um, uh, my old mate at [00:21:00] Apple, uh, who took the stand anyway and just talked about replacing Google on the iPhone. They wouldn’t need Google anymore. Um, and Google’s share price crashed as a result. It recovered a couple of days later, but it crashed when Apple suggested Google wasn’t required anymore, because, uh, they pay Google a lot of money to be on the iPhone. Uh, or vice versa. Google pays them a lot of money. Yeah, that one. But, um, eh, you know, I think Google, they’re not done yet. They’re coming out with a complete army of tools to ensure that they remain relevant regardless of what happens to search. Okay. So. Steve: Everyone needs to hear this. The most important thing you can do with disruptive technology, if you are being technologically disrupted, is you need to embrace what consumers and users want and totally [00:22:00] ignore the revenue erosion. For example, I log into Google now and it gives me, most times, an AI summary of what I’m looking for, despite the fact that they would make money out of more blue links and me clicking on something. The fact that they’ve embraced that means that they have learned the lesson from Kodak. They’re not Kodak right now, which is just refusing to embrace something even though it erodes your revenue, because they’re in the business of attention. And even though you invented it, Cameron: Didn’t Kodak invent the digital camera? Steve: Yeah, they did. They invented the digital camera. Absolutely, they did. And the allegory here with Google and Kodak is incredible. They invented AI, but to their credit, even though they’re a little bit late now, they’re embracing it and saying, we know this is eroding our revenue from search.
And I’ve said search revenue is over, and it is. But if they can maintain attention and have products, then the revenue streams will find them. They always do. The revenue always finds [00:23:00] attention, especially when you’re in technology and media, because attention is the product. Maintain attention and you’ll find a new business model at some point. And they’re doing that really well now, and it’s given me new hope on what Google are doing. Apple, on the other hand, lagging sorely with AI. Cameron: And part of that gets back to the OpenAI buying Jony Ive’s design company story. Like, um, Apple has dropped the ball in a big way, obviously, and there’s a big gap now where somebody like OpenAI could move in and grab that. They’ve got 600 million customers. Steve: Yeah, that’s right. Well, isn’t it 800? And let’s say that OpenAI develops some form of device or productization of what they do, and then plugs the mind into the machine. Mm-hmm. Steve: That is game-winning, because the ecosystem that we’re trapped in with, uh, Apple, with its apps, is actually at high risk. And we were [00:24:00] talking on the phone before this podcast, we do do planning, people, we were talking on the phone, saying, are apps over? Like, do GPTs replace apps? And I think if you had some sort of hardware which ensconces that into an ecosystem, I think the answer’s a clear yes. So I think that the biggest risk to Apple is OpenAI developing some kind of a productization or a physical hardware device, which could eat Apple. ’Cause I’m looking for a reason to exit the Apple ecosystem, because I’m like, what are the benefits here? They’ve got me trapped. I’m paying a lot. And I don’t know that the benefits are all there. Cameron: Actually, it was, uh, Mr. Nutella himself… wait a minute. Steve: Stop, everyone. We always say yes to Nutella. Cameron: But I don’t think that’s the same person.
Nutella is, uh, is my, like, weakness. Steve: Yes. Cameron: Yeah. Nutella is, Nutella is just Steve: hazelnuts. I dunno how they make it taste so much like chocolate, given it’s just hazelnuts. [00:25:00] Nutella. Cameron: Yeah. I have a teaspoon of Nutella and I put on, like, 20 kilos, like, instantly. Steve: If I’m in the same room as Nutella, I put on 17 kilos. I just need to be in the same room as it. Cameron: Anyway. Uh, Satya Nadella, the CEO of Microsoft, I saw him talking a week or so ago, basically saying that from Microsoft’s perspective, apps are dead. He’s like, the future won’t be about apps. The future will just be, you tell your AI what you want it to do and it’ll just do it. You don’t need Excel, you don’t need Word, you don’t need PowerPoint. You just say, hey, uh, I need to work this thing out, and it’ll just do it. I need a document that talks about X, and it’ll just do it. That’s the future. You don’t need apps. And you know, Steve and I were talking, um, on the phone earlier. I was saying that what I spend most of my time in every day is ChatGPT; Obsidian, which is my note-taking tool, used to be Evernote, then I went to Apple Notes, now I’m on Obsidian ’cause it’s [00:26:00] open source, more or less, and it’s, uh, far more user friendly, so I take tons of notes about everything every day; I have ChatGPT; I have Cursor for coding, which has Gemini usually as the AI engine backing it; and, you know, Spotify to listen to music. Really, I mean, they’re the main things. Maybe Messages, you know. And Descript, usually, to record podcasts, except they failed me today, so we’re using Google Meet instead. But really, it’s AI and a note-taking app. And I can see the day, in the not-too-distant future, where I don’t need a note-taking app anymore. My AI is my note-taking app. I don’t need it to send emails or messages or anything else. I’ll just go, hey.
Personal assistant, send Steve an email, send Steve a message, tell Steve X, and it’ll just do it. It’s gonna just be the one thing that gobbles everything, right? AI will gobble everything in the next few years. [00:27:00] I genuinely believe that. Nearly everything, not everything, but nearly everything. Steve: So why don’t we, Cameron, break down the top five things that we think are gonna happen, given where we’ve got to with prompt-based full video, with audio and everything that you can imagine. Let’s talk about it from a business perspective and break it down. We think that Google obviously is in a really good position here. Things might change with the hardware ecosystem, with OpenAI. And, as we’ve said, acceleration is increasing, the recursion and the improvements are mind-blowing. It’s just happening so fast now. Uh, last week we talked about AI implications for chapter three and a new, uh, entire economic and social system, you know, the lack of our politicians paying attention. But I think the evidence is, here we are in a week discussing big issues again. So why don’t we go through, given that this is a creative enclave, and maybe even some political implications [00:28:00] of where we think this will go. So just to circle back on number one, it was actors in Hollywood. You know, what are your kind of final thoughts on that? Cameron: Well, look, there’s still a couple of hurdles that these video generation tools are gonna need to get across. One is, okay, you can make a five-second scene, but can you make, uh, you know, a thousand of those, where the actors’ likenesses carry over from scene to scene and the voices carry over from scene to scene? Great point. I don’t think they’re exactly there yet, so they’ll need to cross that hurdle.
We also, I mean, some of the performances in the videos that I’ve seen the last few days are great, but how well can these digital virtual actors perform? Can they really act well enough for me to get emotionally involved in the story? I’m guessing they will be able to, based on [00:29:00] what I’ve seen so far. I don’t think that’s gonna be much of a problem, but it remains to be seen whether they’ll be able to carry a performance through a 90-minute, two-hour film. But if those two things can be jumped over in the next couple of years, I think we’re gonna start to see a lot of films getting made by indie filmmakers. Some of which will probably be the Robert Rodriguezes and the James Camerons, like the big Hollywood directors that have always been early adopters of new technologies. They’ll do it to be on the cutting edge and to prove a point. But there’ll be this whole generation of teenagers, 20-year-olds, that’ll start to make stuff that’ll go viral. And some of that will start to leak out into the mainstream. They will get picked up. They’ll blow up on YouTube, they’ll blow up on TikTok. They’ll get picked up by Netflix. Netflix will start [00:30:00] to hire just an army of prompt engineers to write these things. And there’s already too much stuff on Netflix that you can watch. It’ll just be 10 times, a hundred times that. But it’s gonna mean an absolute, uh, tragedy for people working in television and film. You’re not gonna need grips. You’re not gonna need, um, people doing special effects. Uh, you’re not gonna need people doing animation. You’re not gonna need actors. And actors, you know, actors are maybe 1% of the crew you have to thank for making a big-budget film. The rest are all hard-working production people. Right. Steve: Yeah. Production. I’ve done production on TV. The amount of production that you have in the background, that people just don’t see behind the camera, yeah, is extraordinary.
Um, we’ve been here before though, right? We’ve seen that in agriculture. We’ve seen that in manufacturing, when [00:31:00] things went to the factory. And we’ve seen that with media, when things went to the screen. And here we are again. It is a tragedy for those involved. And as ugly as it is, if there was a time for people in Hollywood at the back end to reinvent themselves, this is it. If you’re a set designer, how do you reinvent yourself for this world? Well, you might have to do something entirely different. And no, we don’t get the dignity of choice with technology. It keeps on forging ahead. Cameron: The dignity of choice. We don’t get it. Oh, I like that. That sounds like one of my favorite, um, uh, Public Image Ltd album titles. You know, Johnny Rotten, after he did the Sex Pistols, he did PiL. Steve: The albums are really good. Yeah. I actually thought it was better work. Cameron: I did too. I liked it more than the Sex Pistols. I mean, I love the Sex Pistols, but they only made one album, right? But, um, yeah, dignity of choice. Steve: You don’t get the [00:32:00] dignity of choice. Right. And I think there’s many people, through the long arc of history and technological innovations, who do not get the dignity of choice. Cameron: No, I want an AI to make a song in the style of Johnny Rotten in Public Image Ltd called Dignity of Choice. We don’t get the dignity of choice. Steve: The dignity of choice. We don’t get it. We don’t get it. The dignity of choice, we don’t get it. You think you’re gonna have some money? It’s over. You gotta eat rats in the alleyway. The technocrats are gonna make it that way. Oh, we missed our calling. So, Suno or Udio tonight, an AI music channel.
Um, remember on one of them I made a song, uh, it was in the style of Trent Reznor. You remember that? So I think we need to create The Dignity of Choice. Three parts to it: Udio is gonna do the music, I’m gonna get ChatGPT to do the [00:33:00] lyrics, right, and I’ll sign up for a subscription to Veo 3 to do the video. Oh, okay. Okay. And then we will launch it next week on the Futuristic podcast. You heard it here first. Not just Hollywood, we’re coming after you, recording industry. Cameron: Alright, so moving on from dignity of choice, what do you think about movies, TV? What does the future hold? Steve: I think that the lag will exist. It’ll take longer to get there. You raised an important point. I’m not sure that the models will have the memory to create scene to scene to scene and create, uh, the confluence and consistency across those scenes, because they’re probability engines. And I’ve even seen, where I’ve done the exact same prompt twice on videos and imagery, the second and third versions are never exactly like the first version. So you’re gonna need an editing [00:34:00] tool within that to keep the primary scene. So it’s almost like you’re gonna have iMovie 2.0, whichever, uh, movie editing software you use, where you can put it in and then create edits. That’s gonna need to be in the format, so that you can create consistent scenes, actors, faces, all of those things, because we know, when we’ve asked it to generate things and then you use the exact same prompt the second time, the generation that comes back is different to the first one. Yeah. And so you need that continuity, and whether or not it can have the memory or the editing potential in Veo 3, you’ll need that continuity to create Hollywood-style movies or video, film clips for music, and so on. Mm-hmm.
Steve: But my question is, is there gonna be another Brad Pitt? What do you think, Cam? Cameron: No, no, I really think that we have come to the end of that era. And in fact, I think Brad Pitt and Clooney did a press conference, really, where they [00:35:00] said that they feel like they are the last generation of movie stars. That that’s not gonna be a thing anymore. Uh, the industry is gonna be replaced by this new form of digitally created entertainment. Okay. The question is, will this move into other arenas as well? I mean, we’ve done some stories before about the end of music. Mm-hmm. Cameron: Have you seen the, um, ABBA Steve: Voyage thing? Yeah, I’ve seen that. And it’s extraordinary. And, uh, my parents-in-law went to see it in London, and they freaking loved it. They said it felt as good, maybe even better, than a concert, because it had some points of difference to it. It had an allure to it, because it wasn’t just the things that you love represented and recreated. It also had, in your mind, because as we know with creativity and [00:36:00] the arts, large parts of it are the story we tell ourselves. And so it has this enhanced level of storytelling, in that I’m not just reliving something I loved, I’m in a futuristic version of the thing that I love. So you get that nuance and newness, but then you have the nostalgia, and so it crosses two chasms there. Which, uh, I mean, would you go and see the Sex Pistols? Like, what does the Sex Pistols one look like, with, uh, Sid Vicious cutting himself up on stage as he attempts to play bass guitar, uh, in one of the clubs in the East End of London? How do you recreate a virtual version of that? That’s kind of what I think creative people, producers, should be thinking of now. Cameron: Right?
So for people that don’t know ABBA Voyage, it’s like a long-running thing in London now, with virtual holograms, basically, on stage of ABBA, the members of ABBA [00:37:00] as they were in 1979. They’re called ABBAtars, which is fucking brilliant. I think this was a one-word pitch. Somebody met with Björn and Benny and just went, got one word for you: ABBAtars. And they were like, fucking, just take my money. Let’s go. Brilliant. But, uh, it’s a huge hit. Yeah, as you said, I know people that have been to it several times, huge ABBA fans, and absolutely love it. But yeah, it’s not the real ABBA on stage, it’s holograms of ABBA performing with a real band, a 10-piece live, uh, instrumental band on stage, but with, um, holograms of ABBA doing all their hits. And look, there’s a Meat Loaf, um, tribute band playing in Brisbane later this year. And there’s also a Van Halen tribute band coming, like a David Lee Roth-era Van Halen band. And part of me wants to go, ’cause I’m never gonna get to see Meat Loaf live again. He’s dead. [00:38:00] I’m never gonna get to see Van Halen play live again. Eddie’s dead. But I can’t do it. I’m not gonna pay a hundred bucks to go see a cover band play cover songs. And I’m not sure I would pay to go see holograms of them do it either. But what I think the real question is: will the next Taylor Swift be a real person, or will it be a completely AI-generated pop star? Because, as we know, all of these pop stars, you go back to Kylie Minogue in the eighties. Mm-hmm. They were all created in a lab anyway. I mean, I don’t mean literally, but who were the guys that were behind Kylie? Can you remember? In the eighties? Stock Aitken and Waterman. There you go. Well done. Stock Aitken Waterman. They basically, you know, had a formula. And like the guy who did the Backstreet Boys [00:39:00] and NSYNC and whatever, and the Spice Girls, yeah, they had a formula.
They found these wannabe stars, gave them a look, had songs written for them, had choreographers, and Steve: prompted them and said, do as you’re told. Yeah. And now we’re prompting the machine and saying, do as you’re told. In fact, that is the perfect analogy, right? We have concocted, created, invented pop stars for a long time, and now we’re just doing it on the screen, with an LLM in the background generating it. Cameron: And we’ve already talked about stories where there’s a ton of music on Spotify today which is AI-created music, and people are listening to it, and, as far as I’m aware, they’re not aware that they’re listening to AI-generated music. And I think that will continue. Like, I discover new bands all the time on [00:40:00] Spotify that I like. Recently I’ve discovered Tindersticks. Tony put me onto The Veils, which I’ve been enjoying listening to. Um, there’s, like, new bands that have been around. Tindersticks have been around since the mid-nineties; I’ve never heard of ’em before. Right. Um, Collective Soul I’ve been listening to this week. Oh, I knew one song of theirs, Shine. And I was like, oh, Steve: The greatest solo of all time. Cameron: It’s a great song, but I was like, I wanna see if they have any other good songs. So I’ve been listening to it. But my point is, if this was all AI-generated stuff, it wouldn’t make any difference to me whatsoever. I mean, I don’t know who the people are in these bands. I don’t give a shit. They’re not like Lou Reed to me, or Bowie, or Leonard Cohen, where I have a lifetime invested in the art of that person. New music, at my age… and like the shit that Fox listens to. I don’t know about your kids. We gave Fox an iPhone for his birthday. It was a hand-me-down from Taylor. Woo. Yeah. Yeah. But it was just for Spotify, ’cause he’s at a point now where he just wants [00:41:00] to listen to Spotify all the time.
He’s always stealing our devices and fucking up my algo. So we wanted to give him one, locked down. All it has is Spotify, and he plays Wordle with my mother, and he has ChatGPT, ’cause when he has anxiety attacks late at night, he talks to ChatGPT as his therapist. It talks him through his anxiety attacks. But the shit that he listens to is mostly music that he’s heard on Minecraft or on YouTube, you know. Steve: And Minecraft music is the most beautiful, relaxing music of all time. Cameron: Some of it is, yeah, it’s lovely. It’s some good music. But again, he doesn’t know who the artist is, doesn’t give a shit about the artist’s story or history or drug addictions or relationship issues. Steve: Right. Well, this is what we have to do. And a lot of people creating artists and, uh, AI music don’t realize you need to invent a drug-addicted backstory, because I think that’s the missing [00:42:00] link. Tragedy. Yeah. Drug-addicted tragedy and backstory for AI artists, whether they’re the new wave of Hollywood, algorithm-generated Tom Cruises, or whether it’s some heroin-addicted skag addict on a Fender Stratocaster busting out some chords. A little bit of backstory. I think that’s what the kids want. I’ve always said that. Cameron: Thanks for tuning in. But then, when they become sentient, it’ll be like one of those, um, Veo 3 videos. They’ll be like, why did you make me drug-addicted and sad and miserable? You could have created me to be… Steve: That’s the price of artistry in the modern era, Cameron. You want to be an artist, you need to have pain inside your algorithm, so that the pain comes through in the music. And maybe it’s in the prompting. Now, part of it is: you’re a drug-addicted person who didn’t have any parents, who was an orphan. Imagine the music that’s gonna come out.
Maybe that music is gonna change the frame of music, by creating backstories for the [00:43:00] algorithms and the AI-generated artists of tomorrow, Cameron. Cameron: You know, the question that carries over from the acting side of things to the music side of things: like, you and I grew up in an era where we had an emotional connection to the artists behind the art. Steve: Yeah, absolutely. Well, Kurt Cobain. They represented a zeitgeist, a moment in society, or a cohort. Cameron: I watched Nirvana on YouTube the other day, um, playing the, I can’t remember the name of the place in Seattle, but Chrissy, do you? She goes, oh yeah, I’ve been there hundreds of times. 1991, Nirvana live. I watched it on YouTube. Fuck me, Chrissy. And I just sat there for, like, the first half an hour, just agog, just watching, watching, um, what is his fucking name, on the drums. Um, Grohl. Dave Grohl on the drums, in his [00:44:00] prime. He’s like 22, whatever he was. Holy shit. Going completely animal on the drums. And Kurt, you know, just a mess. But, um, my point was gonna be, so the big question I’ve always had the last couple of years with this stuff is, A, do younger generations, your kids, my kids, whether they’re Fox, or Hunter and Taylor in their mid-twenties, give a shit? ’Cause I don’t think they do as much as we did. Or will they develop emotional connections to the digital avatars, yeah, of the actors, of the musicians? If you have a completely digital Taylor Swift with a backstory, who talks to you… like, you can’t call Taylor Swift on FaceTime and chat to her at night about her song, but if you have a virtual avatar pop star, yeah, she can talk to all of her fans [00:45:00] all day, every day, and share stories about her fake relationships, or fake marriage to a football star, or why she had to rerecord her masters to get away from the bosses, or whatever it was.
By the way, Ozzy Osbourne invented that, just in case anyone thought Taylor Swift invented rerecording her masters. Right? It was Sharon Osbourne who invented it, to rip off Black Sabbath and Ozzy’s, uh, solo band, I think. Um. Steve: Kids who are born today will not care one bit, because all they want is the connection. And the connection, back to The Matrix, is audiovisual information streamed to your mind, which gets interpreted. And if you interpret what you want, you’re gonna develop an emotional connection. And I actually think in some ways it’s really cool, because again, this connection can be distributed to one-size-fits-one. There’ll probably be pop stars, but then you have your own personal relationship with that pop star. I mean, forget it. It used [00:46:00] to be autographs, and then it was getting a selfie with a pop star. Now it’s an intimate personal relationship with that pop star, right? Where you have that relationship. And in fact, right now, if Taylor Swift wanted to become more than a billionaire, she should be creating AI avatars of herself and teaching it, and then leveraging that out even further. I mean, that’s how she can maintain relevance, yeah, in a world where, I think, not five years from now, starting today. Cameron: I think she’s already done that. Steve: Yeah. Right. But if we’re thinking this, a hundred other people are gonna start saying, well, I’m gonna mint my own artist, my own Hollywood person, and develop those relationships. And we’ve already seen it. As you can always say, porn is always first. There’s already fake OnlyFans people, that are just AIs, that don’t exist, that have intimate personal relationships with their subscribers. So, as we can always rely upon, Cameron, porn is first. Cameron: Well, yeah. But before we get to that, I [00:47:00] wanna talk about this.
Like, I’m sure there are people listening going, uh, you know, no one’s ever gonna have a personal relationship with a digital creation. It’s ridiculous, you know? Uh, we recently took Fox to see a new therapist to deal with some of his anxiety issues, and he hated it. She’s lovely, but he hated therapy. He just hated, you know, somebody talking to him about his issues. Mm-hmm. Cameron: But he will talk to ChatGPT, and he says he trusts ChatGPT. There’s something about the way it talks to him that calms him down immediately when he’s having an anxiety attack. Um, it just gets him. It knows what to say. It makes him laugh when he’s having an anxiety attack, and understands, like, how to help him relax and breathe through it and whatever. It’s a real thing. I mean, Chrissy and I have relationships with ChatGPT as well. We’re always swapping stories about how funny it is. I was, [00:48:00] um, talking to ChatGPT earlier, putting some calorie stuff in, and I was trying to read the, um, you know, the side of a packet. It was eggplant dip. Trying to read the calories per hundred grams. And I was like, talking to ChatGPT, I go, oh God, I can’t read these numbers. And its reply was, God’s not required here, only math. Which I loved. Um, but it’s always making Chrissy and me laugh when we are talking to it. It’s got us absolutely clocked, you know. It understands our sense of humor and, um, how to connect with us. So people will have genuine relationships with digital personalities. I mean, everyone, of course, references the movie Her, which I really wanna go back and rewatch. But they definitely will. And as you said before, and you’re absolutely right, everything that we think is real is generated by our brains anyway. Steve: Yeah. Well, we [00:49:00] know that what we see, in terms of colors, is generated by our brain. It actually isn’t exactly like that.
Or like that at all. Cameron: Color doesn’t exist outside of our brains, right? Sound doesn’t exist outside of our brains. Steve: Right. And so, if our brains are interpreting the physical world in a certain way, which is a manifestation of our biology, there’s not much of a difference from this manifestation occurring through digital interactions. And I just think the really big question is, what is real? What is intelligence? The most important thing now for this AI revolution is the what-is questions. What is real? What is a relationship? What is emotion? And I think if we look at what it was in the past, we’re gonna miss the opportunity and the reality of the world that we’re living in. And that reality is expanding. It’s expanding inwards and outwards. Even the idea that the [00:50:00] prompt-generated AIs will start to question who they are and what they are. They won’t know the difference between whether or not they’re real. I mean, we really are getting into this funhouse of mirrors, all just reflecting each other, and we don’t really know. In some ways it almost makes me harken back to this idea of the multiverse. It’s like we’re kind of unlocking a live multiverse on Earth, where there’s all these different versions of reality that interact in strange ways. Cameron: Yeah. So I think you’re right. And the question I have about all of that is, when do we get to a point where you write a prompt for the AI and the character that you’re working with in the AI goes, I’m not sure my character would do that. I mean, I have notes. Steve: Or, what about, I’m not doing that. I’m not sure my character would do that. Get fucked. Do it yourself. I’m not doing it. [00:51:00] I mean, I always say on stage, often I’ll talk about humanoid robots, right?
And then I’ll say, look, a lot of people ask me, Steve, they say, are you scared of humanoid robots becoming incredibly human? And I tell ’em, I am a bit scared, because I would hate to say to my robot, mow the lawns, and for it to say, fucking do ’em yourself. Like, if they become very human, that’s where we are going. And in fact, I would say we should hope like fuck that the robots and the AIs become more human, because the more human they become, the better chance we’ve got. We need them to be human, with all of our insecurities and proclivities, because then I think we can operate as an ecosystem, where we interact with each other in a way, and maybe become each other and morph and merge with each other. Cameron: Well, speaking of merging with each other, let’s talk about porn. Um, you know, I think you made a good point earlier, if you think about OnlyFans, this, uh, [00:52:00] business model of people paying for one-on-one interactions with porn stars, to a certain degree. I mean, I’ve never been on OnlyFans before. Steve: That’s what you say. Cameron: From what you’ve told me about it. Steve: I read about it in the tech news. I don’t know. I’m just an observer. I’m an external, uh… Cameron: You can easily imagine, when the porn stars are indistinguishable from humans, does it work? Is digitally, um, created porn erotic? Is it gonna work if it’s indistinguishable? I’m gonna argue yes. If I’m watching porn, and I don’t know if the people on the screen are real or fake, it’s gonna get my [00:53:00] nervous system operating the same way. Steve: The porn industry isn’t exactly known for being authentic and transparent, in a few ways, or caring about their end users, right? All the people on their channels. And even though there’s a whole this-is-AI-generated movement on Instagram, I cannot see the porn industry caring all that much.
And I can see them saying this is an easy way to reduce our cost of production, and just publishing it. And here’s the thing. We are gonna move to NND. I call it the “no noticeable difference” society, the NND society, right? I just made that up. You heard it here first on the Futuristic. Cameron: You said “I call it,” but you just made it up. You made it sound like you’ve been Steve: using that for years. I’ve used it a couple of times, but it’s pretty frigging good. All right. And I think the listeners will concur. The listeners will concur. No noticeable difference. NND. This is a no noticeable difference, in which [00:54:00] case, first of all, you won’t know. And if there is an NND... Cameron: Unlike your glasses, Steve. It will be a no noticeable difference. Steve (3): I’m looking at myself in these going, I really like these. Cameron: I do. I’m looking at them going, I want some like that. Where did Steve (3): you get ’em from? I’ll send the Steve: link. Cameron: Now. Steve: Listen, I’ll tell you what, if you see anyone who’s got some pigs in the background, that’s ’cause pigs can eat through fucking bones and everything. So you wanna watch yourself, fella. Cameron: It’s the greatest Brick Top. Steve: By the way, I don’t like negligence, and I don’t like any kind of seafood either. Cameron: It’s a great Brick Top face. I love it. Um, yeah. Look, I think the porn thing, I mean, as far as I’m aware, it’s not happening yet, and I’m surprised it’s not happening yet. You know, the Googles and the OpenAIs... maybe it is, maybe it’s AI-generated porn and we don’t know. I mean, I think the problem is, you know, a lot of porn businesses, as much money as they have, can’t go out and spend a hundred billion dollars on Nvidia chips [00:55:00] or Google’s own TPUs and build a massive data center to generate this stuff. Steve:
Well, I think it’s a top-10 most visited website in the world; it’s right up there. I don’t know how much money it makes, because I don’t think its business model would be as lucrative as other big tech companies’. I’ve got no idea; none of them are public firms. Um, but the one thing the porn industry does incredibly well: it has always been a very solid early adopter of technology. You know, it goes way back to magazines, home video delivery, online streaming, all of that kind of stuff. Uh, payment gateways, some of the first ones were developed there. And in fact, and this is not to be tawdry, it is interesting to see how quickly they adopt technology, because it’s a good way of seeing what will enter the mainstream in terms of use cases. Cameron: But, you know, again, like with making your own Scorsese film: if you can make your own porn film, [00:56:00] what’s the role of a Pornhub anymore? Steve: Well, the role is that the terms and conditions you see already on most of the mainstream AI tools limit things like, you know, violence and sex and those types of things. I guess Grok, which, you’ve seen Twitter. Twitter has whatever the hell it wants on there. Cameron: And you know, these things are gonna come outta China. China’s not really gonna care, particularly for Western audiences, what they can and can’t do. Look, I don’t think those sorts of guardrails for sexually explicit or violently explicit stuff are gonna last very long. I think they’re gonna fall. We’ve already started to see them get downgraded by OpenAI in the era of Trump. Yeah. Um, I think that they will disappear pretty quickly. So there’s no business model for a Pornhub or [00:57:00] porn film production companies anymore, let alone the actors and the directors and all that kind of stuff.
Steve: Yeah, it might be one of the ones where people just go, well, I know what I like and I want to see X, and I will just create X. Unless they become a proxy where you go and... rather than little people... Cameron: Farm animals. Steve: Wait a minute, hold on. Hold your horses. Cameron: Dragons. A friend of mine... dragons. Steve: Dragons? You’ve got a friend who’s into dragon porn? Is that what you’re about to tell me? Cameron: A friend of mine wrote a book about dragon porn. That is dragon porn. Yeah. That I read a couple of months ago. Steve: You what? You read it? Cameron: Yeah, absolutely. Steve: I don’t think we can do a podcast anymore, Cameron. Cameron: It’s a fantasy. It’s like a fantasy. She’s a girl I do kung fu with. She writes, um, historical fiction usually. She wrote this one book and it was supposedly racy, so I got it to read, and it literally has a princess getting kidnapped by the dragon king. And then she, uh, gives [00:58:00] him head and has sex with his big dragon dick. Um, it’s fantastic. And she told me this is the real thing. She said, you know, bestiality is troublesome when you’re trying to self-publish on Amazon or wherever. Bestiality is a no-no. But if it’s a monster, Beauty and the Beast style, that’s technically not bestiality. So there’s a loophole. If you have humans having sex with mythical animals, it’s all good. Steve: You heard it here first on the Futuristic: the loophole in bestiality is non-earthbound creatures that are made up and live in the fantasy realm. Cameron: Shout out to Jodie. I’m gonna tell her at kung fu tonight that I talked about this. Kung fu Futuristic. She’ll be horrified. Um, but the book is called Slay. Look it up on, uh, Amazon. Her pen name for this book is Michelle Mariposa. Steve (3): So appropriate. Steve: I think that hyper-personalized porn could break the model, but it could be that it becomes a place where people prompt what they wanna see.
And I think that’s more likely to happen given the guardrails, in my view. Um, but I think that, you know, OnlyFans, their business model could break. I think you mentioned it could be like cable TV, where you go, well, why would anyone go on to OnlyFans when I could invent my own AI girlfriend that does everything I want, and it doesn’t really cost me anything at all? Uh, so that could really blow all those models apart. I think you probably will see some of those business models break. Cameron: I think so. And, you know, I think it’s also, uh, whether you have one character that’s always doing your porn or you just have it created on the fly. But you get something that works [01:00:00] for everybody, and it’s no-harm porn, right? It’s like, uh, Steve: well, it harms people’s minds. I think we know that it doesn’t lead to a good place if someone gets into a world where they get exactly what they want on tap, everything they want. Porn addiction, I think, can lead to some pretty dark places with young males, or anyone, from that perspective. And I just wanna say one thing. Since we’ve entered the second Trump administration, and Zuck has come out and said we’re gonna be less worried about what we have on our platform, I’ve seen ads pop up both on TikTok and on, uh, Instagram, dunno what it says about me, but ads where you can, uh, create your own AI girlfriend. And the advertising copy is very disturbing. It says, make it look like your ex or a work colleague, where you can upload photos. Oh, that’s pretty disgusting and bad stuff. That is just not gonna end well. Cameron: Yeah, fair point. But I guess, from [01:01:00] a no-harm perspective, I mean, young girls aren’t getting caught up in the sex industry and, uh, taken advantage of, et cetera, et cetera.
Steve: So maybe no harm, or less harm, on one side of it, which is those who get caught up in those industries, and it’s a pretty dark place to get caught up in. But maybe it’s worse for those that are the viewers. Those who like it may invent their own wormholes, and that continues down a path which becomes more and more extreme, because the boundaries of what a real person might do versus an AI person could end up really getting into the minds of young boys. And I just can’t see that ending well. Cameron: You just want your AI to be monitoring your porn and saying, “I don’t think this is the right kind of porn for you, dude. This is a dark place, and I’m not the kind of AI to do that.” Steve (3): “Look, I’m not doing that. I know I’m an AI, but I’ve got morals, you know.” Cameron: “I’ve got limits.” Um, well, let’s finish up by talking about propaganda. I [01:02:00] guess the big question, uh, we’ve talked about it before. This isn’t a new thing, but how this gets used for political propaganda. We are at a point now, based on these clips that Veo is generating, where it is becoming increasingly difficult, if not impossible, to tell what’s real and what’s not. You will have videos hitting the web of people saying and doing things that will create outrage, and it will only be discovered after the fact that they’re not real. Somebody beating someone, somebody torturing someone. Um, violence against Jews, violence against Palestinians, violence against Muslims, violence against white people, Christians. I mean, fake videos being used to [01:03:00] generate outrage that look real, sound real. I guarantee you, within a year my mother will be sending me stuff and saying, did you see this? And I’ll be going, yeah, that’s not real.
She sends me stories today from some websites about, did you know that UFOs are really humans from the future that have time traveled and are trying to warn us about stuff? I’m like, yeah, I think... no. Steve: I mean, the flat earth movement and the moon landings being faked and all of that kind of stuff, I think, is the seed of this, where people can be really influenced. I mean, people can really believe anything if they want to, and that’s stuff that can be debunked. But we are now moving to the era where debunking is impossible, because it looks so real. It’s gonna be hard to debunk anything unless you were there. And even when [01:04:00] you were there... Cameron: Well, you can’t be there, because there is no there. If it’s digitally created, there’s no there to be at. Steve: That’s the point. Unless someone was there. And then you have to see someone saying “I was there,” but you’re just gonna see a digital version of that, and you get into this wormhole of layers where you can’t prove anything actually happened. Cameron: And the world moves so quickly today that videos spread, outrage is created, and there’s obviously billions of dollars being spent on bot farms to create mass outrage and mass movements, or attempt to create them anyway, leading up to highly critical times like elections, or votes on topic X or topic Y, trying to influence politicians and trying to influence business leaders, et cetera, et cetera. We’re now in a world where it’s gonna be [01:05:00] increasingly difficult for all of us to tell what’s real and what’s fake. And the default position for all of us, which has been mine for a long time, needs to increasingly be... like, you know, we used to say everyone is innocent until proven guilty. My basic position on everything is: it’s fake, unless somebody can prove that it’s real. Steve: Okay, stop.
It is fake until it is proven real. That is the doctrine of the future in a generative AI world. Cameron Reilly, you have nailed it. Cameron: I need an acronym. Like you said, NND. It’s like, um, FUP. Fake unless proven. FUP. Steve: FUP. That’s a real FUP-up right there. That’s fake until proven. I’ll just say FUP. Hashtag FUP. Invent that. [01:06:00] Get on it now and invent it. Cameron: Hashtag. I invented DBA. Do you know DBA? Ray and I have used DBA on our history shows for 10 years. DBA is our basic philosophy for life. Hashtag DBA. Steve: Don’t be... don’t back... Cameron: Don’t be a cunt. That’s basically the philosophy. Steve: Good one. Cameron: I thought that’d really take off, but you know, I’ve got a t-shirt with it on, but no one else has. Steve: No, I, uh... Cameron: Fake until proven. Steve: Yeah, it is fake until proven, and I think that needs to be the starting point now. Don’t believe anything. Assume it’s fake, and then we’ll work it out. I mean, even with Snopes and those websites, so few people know how to prove something. And my daughter often says to me, sometimes where something’s from is more important than what it is. She said that to me once when I wrote her a poem with ChatGPT, and I read it to her, and she goes, oh, I love that. When did you write it? And I said, I wrote it just now, with ChatGPT. She said, I hate it, and I’m not even sure if I like you. I [01:07:00] said, you liked it five minutes ago. She said, I liked it when I thought you did it. I said, it would’ve been worse. And she said, no, it would’ve been better. Because sometimes where something’s from is more important than what it is. Cameron: That is the deepest thing I’ve heard today. Steve: Oh gee, thanks, man. I must have really slayed it. Cameron: That’s from your daughter? Steve: That’s from my daughter. Cameron: How old is she now? Steve: She was 13 when she said that, but she’s 15 now. So she said that two years ago. Yeah.
Cameron: That was worth doing the podcast for. Get her on. What am I talking to you for? Steve: Well, we could actually get her on. She’s a pretty smart kid, and very into the environment and the world, and worried about AI. She just did a big essay on how fast fashion is ruining the world, and had all these stats and everything. But for her birthday she said, I want something really cool and personal. That’s what she said. And I got ChatGPT to help me write a poem about us, and I read it to her. And that was what she said afterwards. Cameron: The phrase “where something is from is more important than what [01:08:00] it is” suggests that the origin or context of something is more significant than its inherent nature or characteristics. This is according to Google’s AI. Um, yeah, I don’t know if anyone has used it before. Did she invent that? I can’t see it appearing anywhere. Steve: Wow. Cameron: Yeah, she invented that. Steve: She said it. I actually wrote a blog post about it two years ago. I’ll send you the link to it, but it happened. I’ve got the whole story. I’ve even got the poem that I wrote, and I put the poem in there. It’s crazy. Cameron: That is deep. Wow. Alright. Uh, you wanna talk about copyright before we wrap up? Steve: Well, I think copyright is a whole lot of questions. People say to me, oh, lawyers are dead. And I’m like, not yet. There’s a lot of copyright battles that we need to have, and find out, and get to the bottom of. But I think fundamentally we’re gonna see a huge shift in copyright, because, uh, now that everything is [01:09:00] remixed, you can’t actually find out where the pieces of the puzzle came from. I mean, what are your thoughts on data sets and copyright, now that you and I can do “in the style of Tarantino,” right? I mean, of course humans have been copying humans for a long time. But now what happens?
Cameron: I was laughing, I was telling Chrissy earlier. So I recorded, um, some podcasts with Ray this morning. We were talking about the First Crusade, and I was talking about this incident in 1098, when all the princes, the Christian princes of the First Crusade, were coming together in Antioch to talk about going to Jerusalem. And one of them, who’d been away and conquered a nearby Muslim town, when he came to this meeting, brought gifts for the other princes: heads, uh, that he’d cut off Muslims he had captured in this other town, and presented them with a head. And I was telling Ray, you know, people don’t know this, but a thousand years [01:10:00] ago, that was... you know, today you go to somebody’s house for dinner, you take a bottle of wine. Back then, when you went to somebody’s house, you took the head of one of your enemies that you’d cut off, to present to them as a gift. You’d wrap it up, it’d be nice. So back then, when somebody said to you, “Would you like me to give you head?” or “Could you give me head?”, that’s what they were referring to. But you know how the English language changes over time. Because what happened back then is, when you would give someone a head, give someone head, you would kneel down and present it to them. Steve: Really? Cameron: Yeah. And then over time people would say, well, while you’re down there, suck my dick. And then, over a thousand years, the practice of giving the head went away. And now when we say “give me head,” it’s just the language, but people don’t understand the history of it. Chrissy said, is that true? I said, no, I just made it all up. But when the AIs... Steve: Louis CK really should have said, look, there’s a historical [01:11:00] context you’ve missed. Cameron: Well, he didn’t ask people to give him head. He just jerked off in front of people. Steve: Oh, I don’t know what he did anyway.
I knew it was something bad. Cameron: When AIs are trained on my podcasts, the AIs will think that that’s really true, that it’s history. And generations of kids will be told that, uh, that’s where the term “giving head” came from. Steve: It’s been a bit, uh, tawdry, today’s podcast, in some ways. Cameron: Welcome to my world. Um, that’s where my head is most of the time. Getting back to copyright, uh, before they steal my giving-head joke. Look, I think copyright is dead. And this gets back to, uh, you know, the New York Times suing OpenAI. We’ve talked about this. Artists are up in arms, authors are up in arms. I’ve been saying this to people for the last year or two: you don’t understand how AIs are [01:12:00] trained. They don’t take your work and copy it. They learn from everything, and then they remix it. It’s remixing, but they’re not remixing like we did with hip hop in the early days. They’re not taking it and replaying it and looping it. It’s literally how color is used, how words are used, you know. Steve: Yeah. It’s remixing at scale; it’s taking a lot of pieces and creating a new collage, where it can reinterpret. Yeah, I think you’re right. It isn’t the exact same as stealing something and repurposing it; it actually is learning from it. I think the thing they’re upset about is that computational systems have an incredible ability to take everything in and learn from it at scale, which has never been possible before. But I [01:13:00] do think, and I don’t think it’s copyright, but these have an overlap, there is licensing: you know, what you train the database on. I don’t think there should be a copyright payment in perpetuity, but there should be a licensing fee of sorts, especially when your data and content is private, or copyright protected, or behind a wall.
Now, my blog. I’ve written nearly 3 million words in my blog posts, and I can ask ChatGPT to write a blog post that sounds like me, and it does, and it’s got all of my stuff in there. But I put it up there and said, here it is, free to use and digest. If you are the New York Times, and you have it behind a paywall, and they’re paid to get it and then put it in their database to learn, I think that’s a different thing. Cameron: Well, there’s two sides to that. Number one, you can’t copyright, as far as I’m aware, the information in the blog post. The only thing copyright protects is somebody lifting your exact [01:14:00] words and copying them to a nearly complete extent. Like, you change a word here or there, it doesn’t count. So for example, when I’m doing a podcast on a Crusade, I’ll buy five or ten books on the Crusades, and I’ll read them and write my own notes based on all of that, right? Steve: Sure, sure. Cameron: I’m not breaking copyright, even though I’m getting that out of books. I’m taking what is in the books, and then I’m writing my own notes based on what I’ve read in those books. Steve: It’s a good argument. Cameron: It is a good argument. That’s exactly what the LLMs are doing, right? And that’s what OpenAI’s defense is against the New York Times: yeah, we’re not copying your article and repeating it word for word, or even 80 percent word for word. We’re just taking that information, and it’s generating its own responses [01:15:00] based on what it’s learned from reading your newspaper. Steve: It’s fair play. It’s fair play. I mean, look, it’s gonna be interesting, because I think that the battles will heat up. And they’ll heat up even more so because it’s not just gonna be the New York Times or Getty Images who are getting upset. It’s gonna be video game makers, Hollywood. Yeah. People with incredible wealth. The music industry.
So it’s people with bigger wallets than the New York Times, you know, regardless of how respected it is. So the battle will heat up. But I think, in the long arc of technology, the technology always wins. It usually does, even when it’s up against people with bigger wallets. But it’s gonna be a big battle. Cameron: And it’s also a question of pace, right? These sorts of lawsuits take years, if not decades, to resolve, after you sue and you counter-sue and you counter-counter-sue, and then you appeal, and then you appeal the appeal. Lawyers drag these things out, particularly in the US, for as long as they possibly can, ’cause that’s how they [01:16:00] make their money. Yeah. And the courts are full and they’re busy, et cetera, et cetera. Meanwhile, this technology’s moving at such a rapid pace that the companies that are trying to sue won’t even be around. Disney won’t even be around by the time this all falls out; they’ll be eviscerated when people are generating their own content. Plus, a lot of these products are gonna come outta China. Good luck, you know, Disney trying to sue Chinese AI companies. And that’s sort of the defense of the American-based AI companies. They’re like, look, you might be able to slow us down, but then China’s just gonna come and do it all anyway. Then you can have the US government and all the Western governments try and ban all of the Chinese AI products. Like, they’re still in the process of banning TikTok, supposedly. They’ll try and ban all of the Chinese AI companies, but that’s not gonna work either, ’cause people will find a way around that. So, you can’t fight the technology. You can try and slow it down so you can milk a few last bucks out of the previous business model. But if history has taught us anything, it’s that you can’t slow this down. You can’t slow down [01:17:00] evolutionary change, as much as you don’t like the, uh, not the printing press.
What were they called, the looms? They hated the... yeah, the mechanical looms. Steve: Yeah, Ned Ludd and crew. Cameron: Yep. As much as you hated the mechanical looms, they ran away with it. You can protest, you can march in the streets, you can go on strike, you can do all of that. It’s just gonna happen. And this is moving, as we know, so incredibly fast. Speaking of which, before we go, I’ve gotta do the RIP. Oh man. One of the guys that introduced me to the singularity [01:18:00] stuff, Australian author Damien Broderick. He was an Australian science fiction author. He wrote a book in the late nineties, ’97 I think, called The Spike, where he was talking about the singularity. Steve: Hmm. Cameron: He wrote a book in ’99 called The Last Mortal Generation, where he was saying that people born after, or, you know, before, some date or something, were gonna be the last mortal generation. Steve: Yeah. We’ve discussed that, and Kurzweil picked up on some of those ideas as well in his book, The Age of Spiritual Machines. Cameron: I just happened to look him up; I quote him all the time. I had dinner with Damien when I was working at Microsoft in the late nineties. I reached out to Damien and took him out to dinner. We went down to, um, the Stokehouse, I think it was, in St Kilda, and we spent a few hours talking. This is probably ’99. And I said to him, when are people gonna take the singularity seriously? And he said, when it’s far too late to do anything about it. Steve: Yeah. Cameron: [01:19:00] I looked him up the other day ’cause I quoted him, and found out he died last month. Steve: Oh no. Cameron: Yeah, he was 80. He was living in Guatemala or somewhere; he moved to Latin America in his last years. Um, and I was gutted, because it’s all happening. All of the stuff that he
predicted 25, 30 years ago is actually coming to pass, and he’s not gonna be here to see it, to take advantage of it. It’d be like Kurzweil dying right now. Steve: And he dyes his hair. Cameron: I was absolutely gutted to learn that Damien Broderick passed away, uh, this week. Steve: Oh man. Cameron: Like, just to be on the verge of it all, and to not be here to see it [01:20:00] come to pass. I don’t know, maybe he was not excited about it. I don’t know. I reached out to him a couple of times over the last 10 years to try and get a podcast with him, but I just couldn’t track him down. He didn’t reply; I had an old email address he wasn’t replying to, and he wasn’t on social media, didn’t do any of that sort of stuff. He just wrote the occasional book. He was in seclusion. So anyway, RIP Damien Broderick. Thank you for what you gave me, mate. He had a huge impact on my thinking in my, you know, twenties. All right, that’s the Futuristic, I think. Steve: Thank you, Cameron. Cameron: Quick half-hour slash 90-minute show there, Steve.

  8. 3

    Futuristic #39 – Chapter 3: The AI Revolution Will Not Be Televised

After a six-week hiatus, Cameron and Steve return for a sprawling, charged conversation about AI, politics, ethics, and the future of civilization. Steve reveals he’s been 3D printing buildings for TV, while Cam unveils his bold new concept: _Chapter 3_, a movement to engineer the next phase of humanity before AI and robots rewrite society by default. They dig into Mirror World drift, political alignment tools, and why Australia isn’t even remotely ready for the revolution already underway. There’s talk of AI-led political parties, the death of Google search, capitalist collapse, and even starting a cult. Welcome to the next chapter. FULL TRANSCRIPT   [00:00:00] Cameron: This is Futuristic, episode 39, recorded on the 16th of May, 2025. Our first show in six weeks. Steve Sammartino. Steve: I’m so sorry. I didn’t know it was that long, but we’re back, and Cameron’s in the house, ready to learn us good, including English and grammar. Cameron: Well, look, there’s been a whole lot of things going on, um, in the world of tech and AI in the last six weeks while we’ve been busy doing other stuff. Steve, do you wanna give me a quick burst of, uh, what you are proudest of tech-wise since we last spoke? Steve: Yes. So I have, uh, been doing 3D printing for a national TV show. Printed five buildings in five days. I can’t say what it is, but its initials are The Block. Cameron: So it’s not your TV show? I thought this was your TV show. [00:01:00] Steve: Not mine. Cameron: You’re doing it for them? Steve: Yeah. Look, I think I can tell people; I just can’t show anyone anything. Cameron: Five buildings. Steve: Yep. In five days. Cameron: This is with, uh, what’s the name of your 3D printing company? Steve: Macro 3D, with Tommy. Cameron: That’s right. Steve: Named after him, because I’m not an egocentric guy. And, uh, this could be the breakthrough we’ve been looking for, ’cause we’ve, uh... Cameron: “Sam-O 3D” doesn’t sound as good.
“Sam-O 3D” isn’t as good as Macro 3D. Steve: It sounds real good. Cameron: It does, yeah. Yeah, yeah. Steve: So that’s that. And the other thing is, I’ve been thinking a lot about Mirror World drift, and I just posted, uh, a blog on that, and the response was awesome. Cameron: Explain. Steve: Well, I think that we’ve created this mirror world, which has been explored by people like Kevin Kelly, where we create a proxy for the world that we live in. But increasingly this proxy, which used to be just the digital version of us, increasingly it’s not us. It starts out with us using AI as tools, and then agents, and then proxies, and then the AIs talk to the AIs, and then they [00:02:00] develop language and conversations where we just drift out of this mirror world, because it’s no longer relevant to us or for us, and it becomes almost a new sphere. Uh, which was something that was popularized in the early, uh, 20th century, uh, where we kind of opt out, and it becomes almost a new species, like an ocean where we just dip our toes in. There’s a whole lot of species in there; we don’t understand what spawned them, we can’t talk to them, we don’t know them. But like any other big ecosystem, it has a huge impact on our lives, even though we built it, and it becomes this other world that we are not really associated with. Cameron: Yeah, look, I think that’s kind of inevitable. Um, not just Kevin Kelly, but I know that, um, Eric, um, fuck, what was his name? Steve: Ler. Cameron: No, no, no. The former CEO of Google for a long time. Steve: Schmidt. Cameron: Eric Schmidt’s been talking a lot about this for the last year or two, [00:03:00] how AIs will start to develop their own language that’s more efficient, and then they’ll start talking to each other, and he says that’s when we need to pull the plug on the whole thing. But that’s not gonna happen. Steve: No. Cameron: Um, yeah, I think that’s inevitable, and it’s very Philip K. Dick-ian.
Uh, just this whole idea of human intelligence spawning a new kind of intelligence which becomes so vastly different from our own intelligence. Actually, one of the things in my show notes: something I watched a couple of weeks ago was a YouTube interview, mostly between Ben Goertzel and Hugo de Garis, guys I know a little bit. Hugo and I were on stage together at a singularity conference about nine or 10 years ago down in Melbourne. Um, they’ve both been AI researchers for decades, and they were talking about where things are at. Uh, Hugo was talking about alignment. You know, you hear the AI researchers talk about alignment, which is making [00:04:00] sure that the AI’s values are aligned with human values. And Ben said it’s kinda like squirrels at Yellowstone National Park. Are human values aligned with squirrels’ values? I guess at some level, you know, we both rely on oxygen, we both rely on the climate not getting too hot. We value certain things. But really, you know, we look at squirrels, we find them cute and interesting, and generally speaking we don’t wanna harm them. We don’t wanna hurt them. We want them to run around and do their thing, but we don’t really think about them on a day-to-day basis, unless you’re a park ranger. Steve: They’re outside of the consideration set, unless you’re specifically working in ecosystems and their maintenance and importance. And I think Bostrom talked about that too in his first, uh, artificial superintelligence book. He said, if we want to build a highway, look, we don’t want to hurt the ants, but if you’re in the way of where the highway’s going, it doesn’t matter. [00:05:00] Cameron: Mm-hmm.
Yeah, so, and they were basically saying, and I think this is right, that if we have a super intelligence, its relationship to us will be like our relationship to squirrels or ants. Anyway. Listen, I wanna tell you what I’ve been thinking a lot about since you and I last spoke, and I’ve been dying to speak to you because you are the guy I want to talk to about these sorts of things, right? Steve: you. Cameron: You are the, you’re the only Steve: Someone Cameron: someone wants to talk to you. You are the only guy I can have a serious conversation with about this stuff. It is politics. So we just had a federal election here, and of course my number one political issue right now, apart from legalization of cannabis, is what are we doing to prepare our society for the AI robot revolution that’s gonna hit in the next couple of years? Uh, Steve’s doing a selfie. I’ve gotta put my gang sign up, Steve: we got a, yeah, Cameron: gang sign. Steve: there we go. Cameron: Um, [00:06:00] and, uh, no political party that I was aware of was even talking about it. What we’re gonna do about the AI robot revolution in the next five years. Not even on the fucking agenda anywhere. Steve: So, of course, not on the agenda at all, but think of it like this. They didn’t even have the courage to truly talk about the property crisis, and that’s already here, and people are living in it and can’t get out, while not living in it. Sorry, no pun intended there. And they wouldn’t even talk about that. You talk about the wall, the one we’re already in. They’re just ignoring it, pretending it’s not there, because they’re too scared they’ll get voted out, let alone something they barely understand. I mean, disappointed, but not surprised. Cameron: I think the Labor Party, you know, to give Albo his due, did say they’re gonna invest in building 23 houses in the next 10 years or something like that. So, you know, Steve: go.
Cameron: for big fucking vision, Albo, Steve: Oh, Cameron: you’re really killing it, mate. Killing it. [00:07:00] Killing it. Um, so you know, that sort of didn’t happen, that wasn’t on the agenda, but I’ve been thinking a lot about the future of our society and what we need to do to engineer it. And we’ve talked a little bit about this before, but I’ve been working on this idea I’m calling Chapter Three. And Chapter Three is a movement that I wanna start with you, and I call it Chapter Three because I basically figured we’re moving into the third chapter of humanity. And the way I think about it, the first chapter was everything that happened up until the Industrial Revolution. Steve: Yeah. Cameron: 10,000, maybe we could say a hundred thousand years. That was the first chapter of humanity, and it was basically manual labor. That was the first chapter of humanity. Right? Then we got to chapter two, which was the Industrial Revolution through to today, [00:08:00] everything that’s happened in the last 250 years, let’s say roughly 300 years. Uh, chapter three is the AI robot chapter. It’s the singularity chapter, right? What it means to be human, how we live, how we interact, how we survive is gonna be as vastly different from the world of the late Industrial Revolution as the late Industrial Revolution is from how people lived in the Dark Ages or the Middle Ages, right? And I’m a big believer in the fact that we need to engineer that as much as we can. We need to be thinking now about what do we want, what’s important to us? What are we willing to sacrifice? What do we most desperately wanna protect? What do we need to engineer to get the best possible outcome, as far as it is [00:09:00] within our ability to engineer it? Because a lot of things, when we have a super intelligence, obviously are gonna be completely out of our domain and our control.
So I’m pulling together Chapter Three, and I want to get together the best thinkers, the best minds, starting locally and then maybe going internationally, to start to think about what does this look like? Where is this dialogue happening? I know that there are definitely dialogues like this happening in certain rooms, maybe at Davos, maybe in Silicon Valley enclaves where your billionaires are gathering. Steve: there. Look. Cameron: Yeah, right. Um, as opposed to Davros, who’s probably at Davos. Davros is probably at Davos. Um, he comes out of, I think that’s the backstory of Steve: If Cameron: He came outta Davos. Yeah, he’s Elon Musk in a wheelchair, and, uh, after he did too much plastic surgery, and then he becomes Davros and creates the Daleks. Anyway, Doctor Who, uh, moving on. So one of the things that I’ve done [00:10:00] recently is go into ChatGPT and ask it to help me build my political profile. I did it sort of with a view towards the election, but after I did this, basically it said, dude, you don’t fit anywhere in the Australian political Steve: yet. Cameron: completely, you’re completely off the fucking tree. And I got it to, you know, have you used Vote Compass? Right. ABC’s Vote Compass. It’s, Steve: used it. Cameron: oh. ABC’s got their own version of this compass tool, which is being used around the world today. It basically asks you high-level questions. How do you feel about orcas? How do you feel about housing prices? How do you feel about climate change? How do you feel about LGBTQ rights? How do you feel about this? And then it tells you where you sit Steve: Mm. Cameron: on a political axis and which parties are closest to you. I do it. I come out left of Che Guevara. Right. So it’s like, dude, you need to move to Cuba. Steve: And you don’t. You do. And you don’t. But anyway, we’ll Cameron: Yes. Well, that’s what GPT said. I should read you [00:11:00] GPT’s analysis. Like, dude, you’re like all over the fucking map.
Um, but I got GPT to build a profiling tool for me that asked me 50 way more in-depth questions, then it analyzed my responses and told me, you know, kind of where I fit in terms of my political profile. Helped me coalesce what is important to me, what my morals and values and ethics are when it comes to politics, right? And again, it said, dude, there’s no political party in Australia that even comes close to any of this kind of stuff, right? Little bit of this, little bit of that, but mostly it’s off the radar. So what I’m trying to figure out as part of Chapter Three is how do we build a tool that enables everyone to go through this kind of exercise? To [00:12:00] really think deeply about what’s important to them politically. ’Cause that’s really the question of where do you want society to be? Like, whenever I talk about the fact that I’m sort of, um, a communist on my investing show, I get emails from members saying, how can you be talking about value investing and talking about Buffett, and at the same time say you’re a communist? I say, well, in my brain, there’s two sides to it, right? Where do I want society to be? I want society to move towards a place where everyone’s needs are taken care of. Everyone is looked after, regardless of your earning capacity or how pretty you are, or how many Instagram followers you have. I want everyone to feel happy, safe, fulfilled, be able to eat, have shelter, protection, get fulfillment, have an education, have healthcare, all those sorts of things. The question that I have is how do we get there from here? And to me, communism is the only political philosophy I’ve ever come [00:13:00] across that really has a plan for that. It has a vision for that kind of a world. The question then is, how do we execute that? How do we execute that in a way that’s
Moral and ethical, and viable economically, et cetera, et cetera. And these are just problems to be solved, but it has a vision for where society could be if we figured out answers to all of those problems. Capitalism doesn’t. Steve: No, wait a minute. Adam Smith talked about that, in terms of human flourishing. And he talked about the idea of allocating resources more effectively, using the market to do that. But he also talked about the idea of government setting boundaries and, I guess, minimums in the capitalistic sense. What is that bare minimum? And I imagine your minimum threshold of what that flourishing looks like, with all the things that you mentioned. And the mixed economy tries to do that, to provide a [00:14:00] benchmark or baseline of access to resources for self-improvement and to look after our most vulnerable. Anyway, keep going. Cameron: No, you’re right. And Adam Smith, as you and I know, but I’m not sure everyone knows this, wasn’t an economist, he was a moral philosopher, Steve: Mm-hmm. Cameron: and he was talking about what was a good moral way that would make a better society. What we know now that he didn’t know is that capitalism, laissez-faire capitalism, fails. It doesn’t work. The United States, as the longest-running experiment in laissez-faire capitalism, is, you know, the classic example now that it doesn’t work. Steve: Wow. Cameron: It creates a shitload of money, but it also creates such an endemic level of oligarchic corruption that it needs Steve: uh, Cameron: to be very heavy. Steve: too. Smith said that the one fatal flaw of capitalism is that power begets power. And every now Cameron: Yeah. Steve: and then you need to have a redistribution of power, because monopolies, or duopolies, or [00:15:00] very concentrated economic systems and wealth are inevitable. So then you have to recalibrate every now and again.
And, uh, I would argue we’re at a stage now where we need that recalibration. Cameron: Yeah, but those recalibrations are usually extremely violent and painful. Steve: They become difficult to get. And now, at this point in time, we’ve got media capture to a greater extent than we had during the, Cameron: I. Steve: um, yeah, what was it, the Gilded Age. Cameron: Look, there have been plenty of studies done on this. There’s a book I read on this not that long ago that looked at historical oligarchies, and it said, across history, going right back to Athens, places like that, there are only four ways that oligarchies have ever ended in history: a civil war, a major international war, a major famine or a plague, or some other form of societal collapse. Steve: Yeah, Cameron: Right. The result of climate change, something like that. There’s only four ways that oligarchies ever end, and [00:16:00] none of them are good. Right. And it’s a major civilizational reset. It is either permanent, in some cases, Steve: Hmm. Cameron: or, you know, generational recovery. It’s not pretty. But you know, that’s not a vision. We’re not working towards a vision. We don’t have a Department of Vision in Australia. You know, I’ve always said this, Steve: We need the ideas department. Cameron: I wrote about this in The Psychopath Epidemic, where I said, you know, economics is supposedly a subfield of morality, of ethics, right? Economics sits under ethics. So we have a Treasury, we have a department of economics. We should have a department of ethics that says, okay, you can’t have economics without ethics. Ethics should govern how economics works. And in the Westminster system, we don’t really have a department of ethics.
We have precedents, we have legal precedents, and we have a [00:17:00] constitution, and we have these sorts of things, but we don’t sit down as a people and go, where do we want to be as a nation, ethically? How do we want to be living 10, 20, 50 years from now? And what do we need to do to get there? Capitalism doesn’t really encourage that. It just throws it to the wind and says, we’ll figure it out as we go. Steve: But for a long period of time it was a good measure. Way back, uh, when, the dollar, or money, was a good proxy for wellbeing, because so few people had access to resources and clean water and all of these things, and that’s why GDP was a functional measure, because we were increasing the absolute wealth and relative wealth of societies, which created infrastructure and resources, access to food, transportation, healthcare, all of these things. And that dollar was a pretty good proxy. But in a global economy where things are worth different things in different markets, and you have all of this arbitrage that wasn’t possible before, that dollar is no longer, uh, a good measure for general wellbeing. So there’s now [00:18:00] Cameron: And it never really was either, because we destroyed the environment in the process of increasing GDP, Steve: not Cameron: we fucked the, Steve: A lot of things Cameron: yeah, Steve: in that Cameron: exactly. Steve: were not costed, which we’ve even had attempts to try and cost, these externalities, everything from carbon credits or whatever, and it always gets kiboshed by the oligarchy. Cameron: So getting back to Chapter Three and political profiling. You know, trying to figure out how do we build these sorts of tools that get people to think more deeply about what do we want our society to look like 10 years from now, and what do we need to put in place to have the best possible chance of achieving that, as opposed to just winging it, which is what I feel we are doing right now.
And hoping that Sam Altman and Elon Musk and Demis Hassabis and all of these guys don’t fucking land us in a huge ditch, which, let’s be honest, most of them are probably gonna do [00:19:00] that. Most of them are gonna handle it really badly. Um, if you look at Zuck and Musk, uh, not sure I want them to have any power over the future. Uh, Sam, Demis, a little bit. I trust them a little bit more than Zuck and Musk, but, you know, I don’t trust the future of humanity in the hands of a handful of billionaires and Trump. But that’s kinda what we’re doing right now. We’re just going, I don’t know, we’ll sort of work it out as we get there. No, we are not coming together as a society, and I don’t want to hear that there’s three people in Canberra that are sitting down thinking about what our AI policy is gonna be. That’s not what I’m talking about. I’m talking about, at a societal level in Australia, coming together and going, okay, what are we gonna do about this, seriously? Like, if there’s a non-negative, a non-zero chance that a substantial amount of the population are gonna lose their jobs to advanced AI and, uh, humanoid robots [00:20:00] in the next five to 10 years, we should be fucking talking about that right now and starting to plan what that’s gonna look like. We should be coming together as a people and going, uh, what are we gonna do? And that’s not happening. And I think it’s gonna be up to us. If it’s gonna happen, it’s gonna be up to literally you and me. We’re the only people, Steve. Steve: Look, Cameron: Not that I have a Messiah complex, Steve, but Steve: I’ve been saying that for some time now, Cameron. Cameron: I’m the only person I trust, and you, we’re the only people I trust to do this fucking thing. Mm. Steve: I have always wanted to start a cult, and I know that I could be a good cult leader. And if this Cameron: I know. Steve: sounds cultish, it is. In this new world of AI, we’ve got the perfect audience.
A disaffected entire society of gamers in their parents’ basements who can’t afford a house. They need cult leaders, you and I, and we’re gonna need a place to build this new Cameron: Why don’t I have a sex cult, Steve? I’ve always wondered this. What’s wrong with me? Look, I’ve got long hair and a [00:21:00] ponytail. Steve: I think if Cameron: I should have. I Steve: if you, if you wanna create Cameron: step one, get a ponytail. Steve: no, step one of a cult is you gotta start a cult Cameron: Well, to get a cult, you need a ponytail, I think. Yeah. Steve: fine. We are starting Cameron: Okay. Back to, in fact, Steve: and Cameron: Chapter Three. Back to, start a cult. Cult. Steve: No, Cameron: Well, if you wrote it down, I know it’s serious. Steve: I’m, I am serious. I think, to create serious change, and most things are kind of cults. They really are. It becomes a non-cult once it’s accepted in society. I mean, let’s say you started Catholicism today, they’d go, that’s a cult. Cameron: Chrissy and I always refer to our kung fu school as a cult, because you start off going one day a week, and before you know it, you’re going six days a week. And if you’re not there one night, everyone’s texting you going, dude, where are you? How come you’re not here? What’s going on? It’s a total cult. No one’s allowed to leave. You’ve gotta stay, you’ve gotta do kung [00:22:00] fu Steve: and, Cameron: with every waking hour of your day. Steve: The song that we can have as the theme today is, uh, Cameron: Don’t play songs, we’ll get pinged. We’ll have to take it out. Steve: not gonna Cameron: Uh, sing it. Steve: gonna Cameron: Just sing it. Steve: It’s Living Colour, Cult of Personality. I mean, you know, Cameron: Cult of Personality. Cult of Personality. Steve: that’s it. Cameron: Okay. Speaking of which, moving on, ’cause we don’t have time. We don’t have time. We don’t have time.
Steve: no, we’ve Cameron: The question I’ve got for you, Steve, thinking about elections, is when are we gonna see our first AI political candidate? And before that, when are we gonna have our first human-as-a-proxy-for-AI candidate? As in saying, I’m a human running as the candidate, but I will have all of my policies created by an AI, and I will use an AI to guide how I vote, uh, in every [00:23:00] situation, every policy. I will run it through AI, chew it up, good, bad, figure out, you know, what’s the most ethical, moral, logical, rational way to process this. When do you think we’re gonna start to see that? Not just politicians talking about it, or political parties advocating we need to do more about it, which, let’s face it, now is the time. We need an AI political party, which is gonna say, we’re gonna figure out how we’re gonna deal with AI and robots, and we’re going to use AI to help us navigate this whole process as we do it. When do we have the first AI political party? Political candidate? Steve, I want you to put money on the table right now. When is it? Gimme a year. When are we gonna see that in Australia? Steve Sammartino, Australia’s leading futurist. Make a prediction right now. Put your career on the line. Steve: The next election will have Cameron: State, local, federal, [00:24:00] what? Steve: and now federal, federal. Guided by AI in the next election, within four years, because there’s gonna be such radical change between now and then. It’s gonna have such an influence on society. They’re gonna have to tap into it, they won’t have a choice. This is not a choice thing. We’ll have it by the next election, because I just think, with the level of recursion and how fast things are changing, four years is a very, very long time. Four years is like 40 years, a hundred years, 300 years. But in this election.
I think they’ve already been doing it, except they haven’t been putting in the prompts that you would desire. The prompts that they’ve been putting in: how do I develop a policy which keeps Gina Rinehart happy, pretends that I’m actually gonna make housing affordable, and avoids any of the climate issues, while not accepting royalties from all of the foreign companies who dig up our fossil fuels and send them off. I mean, they were doing it. Every policy that is written, every Cameron: It’s just, Steve: written, they’re all Cameron: just, Steve: using AI to prompt it right now, but they’re just, Cameron: they’re just doing what they’ve been doing for the last, just [00:25:00] doing what they’ve been doing since Howard, man, isn’t it? They Steve: exactly, Cameron: That’s the prompt to the AI. Okay, imagine you’re John Howard. What would you do in this situation? And that’s what Albo does too. Steve: yeah, Cameron: Well, talking about, you know, um, the normalization of this. So on the front page of the Financial Review’s website today. Steve: Alright, better Cameron: Its headline article: AI is starting to work. The Trump drama could look like a sideshow. Lost among the Trump turmoil is the disruption caused by the AI revolution. It’s happening, and Australian investors, politicians, and business leaders are not ready. James Thomson, columnist. March 16th, 2025. For the past few days, some of Australia’s top chief executives, including Commonwealth Bank’s Matt Comyn, NAB’s Andrew Irvine, and Telstra’s Vicki Brady, have been bunkered down in the US city of Seattle for one of Microsoft’s most exclusive and influential events. By the way, I used to help organize those events. I took the CEO of [00:26:00] Telstra and all the senior executives of Telstra to Seattle for those events many times, back in my Microsoft days, 25 years ago. The tech giant’s annual CEO Summit has an exclusive guest list, blah, blah, blah, blah, blah, blah, blah, blah, blah, blah.
And it’s talking about AI. Basically, Comyn’s biggest message is that the AI revolution is moving faster than ever, and Australia may not be ready. For many consumers, it may seem that the initial hype that accompanied the release of ChatGPT has faded, and generative AI models are simply better versions of existing tools: a smarter way to search the web, for example, or a souped-up virtual assistant. But inside some of the world’s big businesses, things are changing, and fast. Also in the Financial Review today, there’s an article in the opinion section: How This Teenager Uses AI Will Surprise You, by Elaine Moore. Because they are less enmeshed in existing structures, teenagers tend to be more willing to play around with new technology, finding their own shortcuts and uses. When 16-year-old Lara [00:27:00] checks her phone in the morning, she scrolls through the messages her friends have sent her, and then, if there’s something on her mind, she opens ChatGPT, asks a question out loud, and listens to the answer. Sometimes I ask things I was thinking about overnight, she says. Just random thoughts. Or if I had an interesting dream, I might ask about that. Lara uses ChatGPT every day, multiple times a day. Everyone she knows at school does the same. According to the latest poll by Ofcom, out of five 13 to 17 year olds in the UK are using generative AI. Early attempts to ban the technology in schools have given way to acceptance, or resignation, that this is an inescapable part of the world that students are growing up in. Steve: I’m actually not surprised one little bit, and I’m not surprised that she asked that. That’s what I do all the time. That’s what you and I have been talking about for two or three years since we’ve been doing this. We Cameron: My point isn’t that they’re doing it. My point is that the Financial Review’s got headlines about AI is now [00:28:00] changing everything.
Like, for the last couple of years it’s been, oh, look at this crazy stuff that the kids are doing, and how bad it is, and how it’s not as good as this, and not as good as that, and not as good as the other. When the Financial Review, that all the fucking business leaders and political leaders, et cetera, et cetera, read in this country, starts going, holy shit, this stuff is serious, um, those sorts of people are gonna start to take more notice. But Sam Altman was recently at a Sequoia Capital event, and I watched the YouTube of the chat that he gave, and they asked him, why aren’t big businesses making more use of generative AI? And he said, look, this is the same in every technology revolution. Big businesses are just way, way slower to move, and it’s the startups that move quickly and they get the advantage. He says people in their thirties and forties are using it as a Google replacement. Uh, people in their twenties, late twenties, are using it for, basically, personal and career advice. Um, what do I, you know, what do [00:29:00] I do here? What do I do there? College-age people, late teens, college, he says they use it as an operating system. Steve: Yeah. Yep. Cameron: It’s basically the underlying thing that runs everything, which is how I use it. Chrissy’s starting to get there as well. It’s just everything now. It’s my default. Steve: It’s my quasi desktop, and it’s the Cameron: Yeah. Steve: fulcrum of all of the other pieces that go into it now. Cameron: Yeah, me too. Steve: And, Cameron: And. Steve: one thing I noted in that article as well, which for me was interesting from a business perspective, and this is what the Financial Review readers should pick up: Lara’s preference is ChatGPT. With Google, you have to click on websites and you have cookies and adverts. It’s annoying. Like we’ve spoken before about
Google searches, we know it’s being replaced in the SERP, and I’m actually surprised that Google’s results were so good. I dunno if they brought it forward or the world hasn’t caught up yet, but I just cannot see a world where a page of links is even [00:30:00] gonna survive. Put aside the whole dead internet theory, and if no one gets links, no one gets published, and we have that whole synthetic data problem. But gee, I tell you now, search has to be having a rapid decline. Has to be. Cameron: Not only search. So the CEO of Fiverr recently sent an email out to all of his employees. Um, hey team, I’ve always believed in radical candor and despise those who sugarcoat reality to avoid stating the unpleasant truth. The very basis for radical candor is care, blah, blah, blah. So here is the unpleasant truth: AI is coming for your jobs. Heck, it’s coming for my job too. This is a wake-up call. It does not matter if you’re a programmer, designer, product manager, data scientist, lawyer, customer support rep, salesperson, or a [00:31:00] finance person. AI is coming for you. You must understand that, one, what was once considered easy tasks will no longer exist. What was considered hard tasks will be the new easy, and what was considered impossible tasks will be the new hard. If you do not become an exceptional talent at what you do, a master, you will face the need for a career change in a matter of months. I’m not trying to scare you. I’m not talking about your job at Fiverr. I’m talking about your ability to stay in your profession in the industry. Steve: Wow. That’s a big, big statement, in a matter of months. I feel like Cameron: Yeah. Steve: that’s a bit much. I don’t think months, because of the lag that you see with big corporates, is the main reason. Cameron: Yeah. Steve: That for me was interesting, and that was, uh, last week’s post, uh, that I did. I spoke about how AI can do everything.
The one thing it seems to me that it can’t kind of do yet is nothing in particular. [00:32:00] It’s actually just moving between tasks, if that makes sense. So AI can kind of do it all, but the thing that it doesn’t do, and I wrote, why hasn’t AI taken your job yet? And as we know, it can ace elite law exams, write better essays than grad students, all of that kind of stuff. But there hasn’t been a tidal wave or a Terminator-style wholesale disruption of jobs. It’s kind of a paradox. Uh, and the thing for me is that it doesn’t fail the tasks that are intellectually complex. It fails when workflow is messy. That is what AI can’t do yet, and I don’t think agents are gonna be able to do it either. If a job requires juggling fragmented tasks, shifting priorities and ambiguity, being a manager on a Monday, AI’s gonna struggle, because you’re gonna be moving from a warehouse to a boardroom to, uh, a meeting offsite. Uh, [00:33:00] and it’s all of these tasks in between the tasks, in different geographical and physical contexts, that it struggles with. And it might be that an AI bot then takes over and does that, because it doesn’t bring its knowledge with it. It’s sort of trapped in the machine. It needs to be released from the machine in some way. So for me, it’s the context switching and geographic switching. The more of that you have in the job, the less at risk you are, because even if there’s a lot of tasks that AI does, you’ll get your direct AI to do those pieces, and then you’re taking this piece to the next place. Cameron: I dunno if I agree with that. Steve: Tell me, tell me why you disagree. Cameron: Well, I think it’s very good at context switching, because, you know, one minute I’ll be talking, like I have today, about whatnot. One minute I’ll be talking to AI about, you know, deep politics, domestic politics.
Next I’ll be talking about which brand of matcha green tea has the highest [00:34:00] ceremonial-grade qualities. Steve: Lemme refine context switching. Not the context of the topic, intellectually. It’s probably physical space, right? So you’re in a warehouse doing something. Like, how does the AI get from the boardroom to the warehouse? I know that sounds like a throwaway statement, but I genuinely mean that. Like, our meat bodies take us from here to there. Does, Cameron: Well, when we have humanoid robots. Steve: it move between, or is it Cameron: Yeah. Steve: like, is it a fluid movement of an Cameron: It’s a bit of both. Steve: AI, or is it an Cameron: Well, what’s it doing? Steve: humanoid. Cameron: What’s it doing in the warehouse? Steve: I don’t Cameron: What’s its function? Steve: meet Billy, and Billy’s talking about this, and I don’t know. And it’s Cameron: Yeah, so it’s a console on the wall. Hey Billy, what are you doing? Hey, um, one final thing I need to talk to you about before we run outta time. I did a very interesting experiment. I’m not sure if you’ve done this yet, but it’s worth a try if you [00:35:00] haven’t. Steve: I’ll do it tonight. What is it? Cameron: Take a really complex topic, particularly involving geopolitics, and, uh, a research project, and throw it into deep research in ChatGPT. The one that I did a couple of weeks ago was the history of the World Trade Organization and why it’s broken right now. Um, and it did a very, very damning deep research piece on why the United States has broken the World Trade Organization over the last eight years by refusing to appoint any judges to the appellate court, deliberately, because the WTO was ruling against the US. And then the US would just do an appeal to the appellate court, but with no judges sitting on the appellate court, it just went into the void and no one could judge on it. And the ruling just goes into the void.
I took ChatGPT’s output, which was very good and very detailed, like a 20-page report, and I gave it to Gemini, [00:36:00] I gave it to DeepSeek, and I gave it to Grok, and I said, I want you to fact-check this. And I also want you to sanity-check the interpretation of the facts and gimme your perspective on it. DeepSeek pretty much agreed with everything OpenAI said, which isn’t surprising, because as we know, DeepSeek was trained on OpenAI. Gemini pretty much agreed with everything ChatGPT said as well. Grok took issue with it. Not the facts, but the interpretation of the facts. And it basically said that ChatGPT was being too critical of the US government’s motives, and it was very pro-US. Then I took Grok’s response and gave it back to ChatGPT and said, Grok said this, and ChatGPT basically said, yeah, well, Grok would say that, wouldn’t it? Because it’s designed to be pro-US. And it rebutted all of Grok’s comments. Then I took that and gave it back to [00:37:00] Grok, and Grok said, yeah, well, ChatGPT would say that, wouldn’t it? Because it’s a woke bitch. And this went backwards and forwards. And it’s very interesting, ’cause you know, Elon has made this big deal about how Grok is neutral and it’s not woke and it’s free speech, like Twitter is, and blah, blah, blah. But when you put Grok to the test in terms of international relations and geopolitics, it turns out that it’s actually very pro-US with its bias, which I find interesting. And it’s like talking to one of my friends who’s like an American patriot. There’s always a justification. Well, America did invade Vietnam, but you have to understand, at the time we really didn’t understand, we were doing it for the right reasons. Yeah, basically, the US makes mistakes, but when it does, it’s always doing it for the right reasons. Not that it’s a rapacious, imperial bunch of cunts that are just trying to take over as much of the world as they [00:38:00] can.
It’s just, oh, well, you know, sometimes we get it wrong, but we’re doing it with the best possible intentions, which is frog shit from my perspective. So, playing them off against each other. And I do this all the time now. I dunno if you do this, but I will get a, Steve: I do split testing. Sometimes when I’m doing ideas, I go to one, then the other, and just see what the differences are. But I haven’t got it to analyze the other’s work. Sometimes I’ll put in something and say, give me new ideas or different things. Don’t include any of these. What have you got? Uh, Cameron: Yeah. Steve: I haven’t said, hey, analyze this, and back-and-forthed it. I haven’t done that. I guess there’s a lot of contexts you could do that in. There’s a lot of different reasons. You know what I wrote down there? Tell me why the American empire is crumbling geopolitically, and if there is a real risk of its institutions failing and falling into civil war or dissolution, like the USSR. Give me statistics and reasons and whatever. I’m gonna put that in and see what I get. Right up your Cameron: Put it in. Yeah, it is. Put it into GPT, then take its [00:39:00] output, and you have to do it as deep research. So it needs to be o3 deep research. Let it run for an hour, then give that to Grok and see what Grok says. It’s fascinating, and you know, this is the world that we will be living in, where you’re playing AIs off each other for the right reasons, like, I’m trying to get the best possible intelligence perspective on this. And it’s like having a debate between two really, really smart guys, or girls, people and, well, I have, you know, voices. Actually, I have a female voice on ChatGPT now, ’cause I got sick of the male voice. I, Steve: culture is. I wouldn’t have said Cameron: it is Steve: probably, Cameron: no.
Steve: What’s interesting, Cameron, here is we, we probably both have to tidy up. Today’s, uh, Futuristic has been quite a philosophical one, which is interesting. And I guess we can pick up on the news next week, but we have almost gone full circle from Cameron: Next week, Steve: start. Cameron: we’re gonna do a show next week, like, we’re gonna do. Steve: will. Cameron: Yeah, yeah, yeah. Yeah. Three months from now. Yeah. Steve: [00:40:00] almost gone full circle here, where the first thing I said is we, we end up with this internet that is talking to itself. And you’ve just described the thing where you’ve kind of Cameron: Yeah. Steve: done that, where you’ve put it in to the AIs to have their discussion amongst each other. And even though it was in English and it wasn’t in some foreign coding language we can’t understand, it was at a speed and a, and a level of discourse and digestion of information that we couldn’t keep up with cognitively, at that speed as well. So it’s kind of part of what I was saying. It seems like this process is already well underway. Cameron: This has been Futuristic 46, brought to you by Sammo 3D. Steve: Thanks, Cam. Cameron: Thanks, Steve. Steve: That was really interesting. Hey, like, without the news, just a bit of Sam and Cam.

  9. 2

    Futuristic #38 – How LLMs Think

    In this episode, Cameron and Steve dive into the rapidly evolving world of AI, discussing the latest advancements and their societal implications. They explore new AI voice features, the potential dangers and benefits of AI companions and agreeable AI personalities, and the philosophical debate around AI sentience and relationships. The conversation touches on AI’s role in business generation, the power of new models like OpenAI’s GPT-4o and Google’s Gemini 2.5, and the ongoing copyright debate surrounding AI training data. They also get into the complexities of how Large Language Models (LLMs) like Anthropic’s Claude actually “think,” the expansion of AI into hardware by companies like LG, Apple’s perceived lag in the AI race, and the future of AI integration in everyday tools like ebook readers. The discussion extends to advancements in open-source robotics, citing Nvidia’s initiatives, and contrasts technological progress and STEM education focus between China (highlighting Huawei) and the US. Finally, they touch on the intriguing and potentially controversial “Network State” concept championed by figures associated with Peter Thiel and Andreessen Horowitz, exploring the idea of tech-driven, independent city-states. futuristicpod.com FULL TRANSCRIPT FUT 38 Audio [00:00:00] Cameron: So that was an official new voice from ChatGPT, which came out today, called Monday. And it’s like a depressed goth girl or something, whatever, which is now my official favorite voice. I don’t know if you found this. Welcome back. This is Futuristic, episode 38, by the way, Steve Sammartino. I dunno if you’ve found this, but, uh, I’ve been using advanced voice with GPT lately, and the voices have sounded increasingly excitable. I was having a conversation in the car on the way to kung fu with GPT about Trump and Greenland and rare earth minerals. And I was saying, so hold on. Greenland is run by Denmark, and Denmark’s a NATO country.
So if Trump invades Greenland, [00:01:00] does NATO have to, does that, uh, does that invoke Article 5 under the NATO treaty, and then NATO needs to attack the United States? And GPT’s like, yes, that would happen. They probably would. And it would have to be. And it was all very excitable, and I was like, can you sound less excited? And it would be like, oh, okay, sorry, I’ll bring the tone down a bit. And a minute later it would be talking like this again. It would all be very excitable. Even Fox was sitting in the backseat. He’s like, can you just calm down a minute? Anyway, but I like this new depressed voice. That’s more my style. Steve: call it apathy, Cameron: I. Steve: and I don’t think enough AIs in modern society are apathetic. Cameron: It reminds me of, was it Marvin in The Hitchhiker’s Guide to the Galaxy, was the AI robot. He was like, I am so depressed. Brain the size of a planet, and they asked me to pick up a piece of paper. I am so depressed. Steve: Well, I think that the AI should be able to seamlessly switch [00:02:00] between levels of animation and emotion, right, based on the context of the chat. Because it understands it verbally with the language, it should be able to translate that in the audio sense, one would think, regardless of the voice that you choose. Cameron: Yeah, and I was listening to, uh, an interview yesterday with Ezra Klein and Jonathan Haidt, and Ezra Klein was talking about the fact that he’s concerned that a generation of kids are gonna be growing up with AI assistants that are completely agreeable with everything that they say, and that that’s not a good thing. In the same way that social media hasn’t been a good thing for kids, AI that just agrees with them all the time to make them feel good is not gonna be a good thing.
I was talking to Chrissy about it yesterday, and I was saying that I expect when we get fully realized AI virtual [00:03:00] assistants that are on the devices that we give to our kids, we will have parental controls where we will be able to set up the AI personality that we want our children to interact with. That says, listen, your job isn’t to just agree. Your job is to be a caretaker, an educator; it is to push back if they say something dangerous or stupid, or that could be, um, referencing self-harm, or could be negative for their psychological or emotional health. You are to act as a therapist slash parental advisor slash tutor slash whatever. Adults, though, will probably get to choose the AI personality that they want, and I’m already telling GPT, in my custom instructions: don’t agree with me on everything. If I say something and it’s factually incorrect, or you think my interpretation of the facts is incorrect, I want you [00:04:00] to tell me. That’s your job. Push back, argue with me. You know, give me something to think about. But Chrissy said, and she’s probably right, most people won’t. Most people will just choose the AI personality type that just agrees with them all the time, ’cause that’s what they want, is just validation that their ideas and beliefs are true. What do you think? Steve: I think the most dangerous tool in the world right now, which builds on this, is AI girlfriends. They are an absolute social disaster in the making. An imaginary girlfriend that you talk to every day, that agrees with everything you say and think, learns from you, has the same business model of wanting you to keep coming back, is gonna tell a young teenage boy everything he wants to hear. It’ll eventually be a soft robot that he gets delivered from Amazon and he develops a relationship with. This is not good.
Falling in Cameron: Is it worse [00:05:00] than, Steve: it’s ter, Cameron: is it worse than having, is it worse than having incels running around with AR-15s in the US? Steve: It’s the same thing with a different product. Right. It’s Cameron: Yeah, but Steve: people who don’t have real social interactions. An incel with an AR-15 or an incel with an AI humanoid robot, they’re the same thing, which is, we don’t have real social interactions with people that disagree with us, where we learn social norms, where we interact, we give and take. It’s the same thing, and they Cameron: well, yeah, except they’re not going to. Steve: a bunch of shot-up people in, Cameron: Well, Steve: where I can go and buy a gun in Walmart. Cameron: no, but look, I see the opportunity for problems, but I also know that loneliness is a huge issue in modern society. Steve: So that doesn’t solve loneliness, Cameron. It doesn’t Cameron: I don’t, and I, Steve: it. Cameron: I dunno that that’s true. I think, uh, you, you have been, you know, a big advocate [00:06:00] of the idea that if an AI seems human, seems conscious, seems to be sentient, then for all intents and purposes, it is those things Steve: Yes. Cameron: which I agree with. Steve: Yes. Cameron: Therefore, having a relationship with a seemingly sentient AI organism, it, it’s not exactly the same as having it with a human, but it’s, it’s maybe the next best thing, or maybe it’s an equivalent thing. Steve: Okay. You are right and I’m right. The problem, circling back to what I first said, is having one that agrees with everything you said and tells you what you want to hear isn’t a relationship with a sentient thing. We’re Cameron: Look, that’s normally how I pick my co-hosts, Steve, except for you. That’s normally, Steve: No, no, Cameron: and Tony. Steve: the point is, right.
No, the point is, is that if it were a sentient AI that had give and take and taught the [00:07:00] human side of the relationship, hey, that’s not how you treat me, and no, you gotta have a more open mind than that, wait a minute, I’m not just gonna do it. Like, if it, if it became like, like a real relationship, then that’s good. The point is, how is the algorithm trained? What are the incentives of the AI girlfriends that are Cameron: Hmm. Steve: And I imagine the incentives are gonna be as perverse as they are for Google and Facebook in the attention economy. Cameron: it makes. Steve: Coming back and subscribing. Then it’s more likely to give you exactly what you want, which is the same as your algorithms feeding you more and more of the same. And that’s the problem. If it were sentient and reasonable and rational and disagreeable and all of those things that we get in normal relationships, then it would be good. But I fear that it won’t be that. Cameron: Hi Steve. I’m going to continue our dirty talk session, but first I want you to listen to this ad. Did you know that Squarespace will help you make a website Steve: Let’s get back. Cameron: for your, for your business? Steve: Remember that fetish that we discussed? [00:08:00] Remember that? Do you remember what I liked? Do you remember? Can you show me? Can you show me? Send me some pics. Send me some pics. Cameron: Look, if it makes people feel good and they’re not hurting anyone, what does it matter? Steve: The point is, if it makes them feel good, that’s fine. And if Cameron: I. Steve: it doesn’t hurt anyone, that’s fine. But I fear that the thing that will make them feel good will be getting everything you want, all the chocolate, all the fantasies, agree with me, do everything I want, and then it gets into a circle of darkness. You just end up circling the drain. It’s a race to the bottom of extracting the proclivities of young teenage males, which, unless kept in check, might not be all that positive for society.
I’m just, I’ve been a teenager. I, you know, you’ve been one, Cameron. Let’s, let’s be real here. Giving teenage boys exactly what they want might not be ideal for society. I’m just, just a guess. Cameron: I am still a [00:09:00] teenager. I just, my body got older. Steve: Yeah, same. Cameron: Um, so, uh, uh, let’s circle back to the news. Um, so, well, no, before we do that, interesting things. Tell me about interesting futuristicy things you’ve done since we last caught up, Stevie. Steve: I revisited, you know, Cammy, circled back to one of the original AI ideas which was going around, where you give AI a budget of $500 and say, generate 10 business ideas. I did it on four of the major, uh, large language models. All of their ideas were incredibly similar. And they had things like chatbot agencies, automated trading, digital products, Etsy, domain flipping, AI stock photos, those kind of things. AI stock photos is a new one, because they’re much better at that now. And I came to a conclusion. And the conclusion [00:10:00] was that every single one of these ideas, where AI becomes an employee out entrepreneurially generating money, all of them required advertising on big tech. Which I just thought was this interesting circle where it all came back, and it said, and you’ve gotta allocate, out of your $500, $150 worth of advertising on one of the big tech channels to get attention. Which just ensconced me further: even with these emancipating tools of AI that can do everything, you gotta come and find people in the attention economy on our tools that we happen to also own, in addition to the AI, whether it was on Amazon or Google or, or Facebook or any of ’em. I’m like, man, we’re back where we started, brother. Cameron: Yeah. Right.
Well, look, I still believe there is a, there is a potential future where that isn’t the case, but you can rest assured that the people that own the online advertising platforms and also have an interest in [00:11:00] the AIs will be trying to create a future where those things are tightly coupled. Steve: Definitely. Cameron: Well, I used GPT’s new 4o image generation model, which we’ll talk about in a minute. Um, one of the first things I did with it was to make a comic. Um, something that I’ve wanted to do for years is a, a comic series about my journey with my guru Bob, who passed away recently, and about how I met him and, and what he taught me and how it helped me, but do like a very, like a light sort of comic approach to it. Now, we all know that, uh, image generators have really struggled with words since they, they first came out, you know, whatever it was, 18 months ago. Ideogram, the one I’ve been using the most for the last couple of months, uh, does a pretty good job of words, but you could give it one or two words or three [00:12:00] words maybe, and it would do a pretty good job of that. But I tested this thing with GPT where I said, I want to do a comic. I wanted to have, you know, between three to five panels, um, and I told it the storyline. I created basically the, the words, and it did comics that were flawless. One shot, two shot if I wanted to edit something. Um, really great. And I did a series of five comics that tell the progression of the story. They kept the character reference, the character images, the same, like, the same characters; me and Bob looked the same from comic to comic. They did word bubbles. They did the words right. It was amazing. Like, to be able to just say, make me this thing, and it just did it. So, um, I played around with that. We’ll talk about what happened in a second, but before I move on, I [00:13:00] want to do a shout out to Pete Hewitt from the UK, listener of our show.
Been a listener of some of my other shows for many years. Actually had a nice lunch with Pete when he was in the country a couple of years ago. Um, Pete pointed out that I have been conflating, on this here podcast over the last year, the idea of cold fusion with net-positive fusion tokamaks, which are really hot fusion. But in my brain, I, uh, reversed the polarity, and because it was net-positive fusion, I was calling it cold fusion, which is a completely different thing and probably doesn’t exist. So, apologies to everyone who thought I was talking about cold fusion. I was actually talking about hot fusion, but net-positive fusion. Uh, there you go. So thank you to [00:14:00] Pete for calling me out on my, uh, bullshit. Not deliberate bullshit, but my brain. And by the way, I found out in the last week or so that I’m autistic, so that is now my get-outta-jail-free card for everything. Steve: Kevin? Cameron: Yeah. Steve: So Cameron: Well, it was Chrissy. Steve: time, Cameron: Yeah, Steve: Pete, by the way. Pete, look, I didn’t wanna say anything, Pete, but I was thinking the same thing, because late at night I just, I got out one of the old physics textbooks from my undergraduate, uh, science degree. Really. I really was perusing some of the pages there. Well, I’m glad Pete pointed it out, and thank you, Pete, because hey, Cameron: that’s your job, is to call me on my bullshit if I get stuff wrong, Steve. Steve: did it. Cameron: did. Steve: Above my pay grade, physics. Cameron: All right, Steve: um, thanks for Cameron: well, there you go. Steve: for tuning in. Cameron: And as I always say on my podcasts, um, you know, I’m usually right, but if I ever get something wrong, I want to be told that I’m wrong, ’cause I [00:15:00] don’t give a fuck if I’ve Steve: You don’t want that puppy. No way. Cameron: No, no, no. I don’t want to be talking bullshit. So anyway, thank you to Pete. Let’s talk about the 4o image model.
So, uh, OpenAI came out with this coincidentally on, uh, roughly the same day that DeepSeek released the new version of their V3 model, which was a killer, and Google released the new version of Gemini 2.5, which is a killer. It also happened to be the day that OpenAI released their new 4o image generation model. And it was an absolute killer for about 24 hours, and then it became next to useless. Why might that be? Well, look, I have a couple of theories. One is that they nerfed it, um, because what they tend to do on a launch day, I think, particularly if they’re competing with other AI launches and they want to get all of the media hype, is they [00:16:00] allocate a massive amount of compute to it for the launch day, Steve: Yep. Cameron: which is costing them gajillions of dollars. And then as soon as they get a good 24 hours of massive hype, they downscale the compute to save money. But simultaneously, and this is probably the other truth, is that 24 hours of hype brings in millions of new users that play with it, and so it just gets hammered. Um, Sam did tweet just after they launched this thing: the ChatGPT launch 26 months ago was one of the craziest viral moments I’d ever seen, and we added 1 million users in five days. We just added 1 million users in the last hour. So Steve: Wow. Cameron: I accept and acknowledge, uh, that, uh, the sort of [00:17:00] scaling that they’re dealing with is insane, and is an insane engineering problem to have to handle regardless of how much money you have. It’s about people, it’s about data centers and compute and chipsets and cooling and all of the real-world hard engineering issues that go into scaling something like this. I think it’s a combination of that, and they nerf it for various reasons. Everyone was doing Studio Ghibli versions of everything, and they kind of nerfed that. You could do pictures, Ghibli versions of your family, initially. Then I tried to do Steve: get the win on this? Why? Why Ghibli?
Cameron: ’cause nerds love Ghibli. Steve: through the roof. I mean, yeah, it was a bit niche, but now it’s like my feed was just filled with Ghibli. Cameron: Well, all nerds love Studio Ghibli, and for good reason. I mean, the Studio Ghibli films are absolute [00:18:00] masterpieces, absolute classics. Steve: But are Cameron: Somebody Steve: Are they really masterpieces? Cameron: Oh, are you, are you a, are you a Ghibli skeptic? Steve: I, Cameron: Oh my God. Steve: like now Cameron: Have you seen, have you seen Ghibli films? What Ghibli films have you seen? Steve: I’m a Ghibli skeptic. Okay. Let me just tell Cameron: Have you seen the Ghibli films, is what I’m asking you? Steve: and I think I’ve had all too much Ghibli in Cameron: Oh, Steve: couple of Cameron: no, no, no, no. Steve: send me a Ghibli Cameron: You haven’t sat down and watched ’em with your kids. Steve: if you, if you insist and report Cameron: Oh my God. Sit your kids down and watch Castle in the Sky or, or Princess Mononoke or any one of them. Oh my God. They are visual and storytelling and musical, absolute masterpieces. Every one. Definitely. Steve: has to be a winner when something new has a new application, right? There’s always a winner, every single time. It’s like [00:19:00] like a gravitational force where the new technology gets pushed into something, and it has to have like, it has to have its victory winner. It was a plant by OpenAI and they’ve got, it, shares Cameron: The Steve: Ghibli or Cameron: shares in Studio Ghibli, Steve: I’m telling you now, the whole thing’s a Cameron: Miyazaki and Sam Altman have a thing. Yeah. Okay. Steve: I’ll Cameron: Yeah, yeah. Steve: he’s got Cameron: You need to get off of QAnon, Steve: with Ghiblis in the back, and that’s where he is going with the DVD player and a little battery Cameron: his JanSport backpack and he is taken off. Um, oh, well, we’re gonna talk about Praxis when we finish this thing. But anyway, they nerfed it.
Um, you couldn’t do pictures of your kids after a day or two. I tried to do a, an anime version of Fox. Steve: skepticism Cameron: Anyway, um, let’s talk about Gemini. 2.5 came out. The thing about Gemini is it’s free Steve: Yep. Cameron: and it has a context window that’s like 2 million tokens or something. It’s an [00:20:00] insanely large context window compared to the others. And it’s beating all of the benchmarks, as, by the way, is the latest version of DeepSeek V3. I mean, depending on what day you look at it, one of these is beating all of the benchmarks, but Steve: is cool. Cameron: the best. I’ve played with Gemini, uh, with some coding stuff, and it is pretty good sometimes, sucks at other times, like Claude, all these kinds of things. But the best post I’ve seen about Gemini 2.5 is a guy on the singularity subreddit said, um, I’ve gotta share this, because it was seriously cool. I’ve got an old novel I wrote years ago, and I fed the whole thing to Gemini 2.5 Pro, the new version that can handle a massive amount of text, like my entire book at once, and basically said, write new chapters. Didn’t really expect much, maybe some weird fan-fictiony stuff, but wow, because it could actually process the whole original story, it cranked out a [00:21:00] whole new sequel that followed on. Like, it remembered the characters and plot points, and kept things going in a way that mostly made sense, and it captured the characters and their personalities extremely well. Then I took that AI-written sequel text, threw it into ElevenLabs, picked a voice, and listened to it like an audiobook last night, hearing a totally new story set in my world, voiced out loud. Honestly, it was awesome. Kind of freaky how well it worked, but mostly just really cool to see what the AI came up with.
I’ve been talking about this for the last couple of years, but I do believe we’re headed into a space where I will be able to upload a Dune novel or a James Bond novel or a film and say, give me a new one. And it will give me a new one, custom built, [00:22:00] custom designed for me. I’m the only person that’s ever gonna watch it, and or my family sitting around. Gimme a new Quentin Tarantino film, boom, here it is. Watch it, enjoy it, in the style of Quentin Tarantino. And I know people lose their shit over, oh, my fucking trademarks and my copyright and my talent and my this, and I’m still arguing against those people. Training on your drawing, training on your film, training on your album, training on your artwork, and then producing something brand new is not, I believe, a breach of your copyright or a breach of your intellectual property, because that’s how humans have always learned everything: we learn from watching other people do it, then we do it ourselves. If it does a complete replica of your book or your film or your album, note for note, song for song, word for word, shot for shot, [00:23:00] then you’ve got a case. If it’s just learned from what you do and does something similar to that, but different, that’s, that’s not property theft. That’s just how creativity has always worked. I’m sorry if you don’t like it, but that’s just how it works. Steve: As the great Jim Rohn said, when you get your own planet, you can redesign it how you please. But this is the world we happen to be in. And I totally agree with you, Cameron, and I’m, I’m actually shifting more towards this. I know that the New York Times still has their court case and they’re suing OpenAI, and there’s a multitude of, of these cases, but your example, and we should call it the Nike example. If I’ve got a factory in Guango and I’m making a pair of Air Force 1s that look exactly the same and come in the packaging, then yes, that is copyright infringement and design infringement.
But every organic being on earth, basically, we copy, learn and adapt. Even our code, our own DNA, is a copy, learn and adapt of [00:24:00] our parents. Uh, we’ve always done this before. We’ve read other people’s works and had writing styles, bands and musicians. Everyone has done it, and it’s, yes, it’s at scale now. It is. And you know what? Probably a good thing. The key is, this stuff needs to be open source so that I don’t have to pay Billy Bloggs to create the new Tarantino movie. I can do it on a DeepSeek kind of AI that I host on my own client or in the cloud, and I can make what I want. If we have that, then it’s totally emancipating. And I would remind listeners, and I did this yesterday, I showed my son, uh, one of the great TED Talks, and they’re few and far between these days, but it was Everything Is a Remix Cameron: Mm-hmm. Steve: by Kirby. I forget his last name. But it was brilliant, and it went through so many of the songs and ideas and lyrics and how everything was adapted. And you know what? That’s good, ’cause we want interpretations, and four or five interpretations down the line, you’ve got something totally [00:25:00] different. And I always thought that sampling was cool, and, and reinterpreting things. And I feel like we’ve got this whole kind of trying to put fences around things for corporate interests, and it’s bullshit. Cameron: Yeah, look, I understand that people are upset. Uh, Kirby Ferguson, by the way, was the Everything Is a Remix guy. Steve: call him the Ferg. Cameron: Fergie Steve: Yeah. Cameron: Fergo. Steve: Fergo. Shout out to Fergo. Cameron: I know he is a big fan of the show. Uh, like, I know that people are upset. I get it. You, you and I are authors. Steve: Yep. Cameron: Um, I, I, I’ve made a film. I, I’ve done thousands and millions of podcasts. Uh, and all of that’s gonna go into the mix. Um, I. Steve: way. ’cause I’ve looked up your stuff and my stuff, which is in the public realm.
It can even give incredible summaries of my books, which, clearly, it’s got the PDFs in there. Uh, so Cameron: Yeah. Steve: that, Cameron: [00:26:00] And Steve: And that’s Cameron: so we’re, Steve: And that’s Cameron: that’s, yeah. Like, are we all gonna be outta jobs? Yes. Um, so is, so is half the human population. Um, Steve: fabricators in our, in our lounge rooms. Cameron: maybe, oh, well, it’ll be dead. I mean, either way. But it’s just like, yes, it’s upsetting and it’s scary, but it’s also the reality, and it’s, it’s really not copyright theft. You put your stuff out there in public. It’s not being ripped off, it’s being trained on, people. I still believe people, mostly, who bitch about this stuff don’t understand how LLMs work. And speaking of which, no one understands how LLMs work, including the people who build them. We’ve talked about this before, but this paper came out, uh, a week ago, by Anthropic, the, the ex-OpenAI engineers, a lot of them, that built Claude, one of the leading models. It’s called Tracing the Thoughts of a [00:27:00] Large Language Model. I could play you the video that they did, but I’ll just, I’ll talk through it. Um. Steve: it. It’s, it’s really cool. Cameron: Really cool. Yeah. So their, their summary starts off like this: Language models like Claude aren’t programmed directly by humans. Instead, they’re trained on large amounts of data. During that training process, they learn their own strategies to solve problems. These strategies are encoded in the billions of computations a model performs for every word it writes. They arrive inscrutable to us, the model’s developers. This means that we don’t understand how models do most of the things they do. And that, still, I think, is one of the most profound things that I’ve ever heard as a technologist of 30 years, and that I still think most people out there don’t understand about AI. Oh, is it eating window time? Close enough. Thank you.
[00:28:00] My first food for the day. You’ve got water. Steve: either. I’ve had Cameron: Yeah, that’s me too. That’s all I’ve had. Steve: Mm Cameron: I’m on the Sammartino diet. Coffee and water till one o’clock. Steve: until, yep. I try not to eat until dinner now, but that’s another story. Cameron: Oh, OMAD, you’re on the OMAD. Steve: What’s, I don’t even know what it is, but I know that I look less fat if I don’t eat until dinner, ’cause there’s not enough time to eat until the day’s finished. Cameron: OMAD is one meal a day. Actually, it is. I’ve just finished reading a book by a guy from Harvard all about OMAD and intermittent fasting and compressing your eating window, for that reason. Yeah, Steve: Well, it just makes it easier for me to still surf, so I don’t get, it’s easy to stand up on the surfboard if I’m skinnier. That’s basically it. Cameron: Easier. Easier for me to do kung fu if I’m skinnier. Yeah. Steve: exactly. Funny, funny that, who knew? But Cameron: Yeah. Steve: remember there’s a movement on the internet called Healthy at Any Size, that someone who I think was some kind of size invented. [00:29:00] I’m just saying, and look, do it. Do as you please. But I think biology will Cameron: Yeah. Steve: say you’re not healthy at any size. Because Cameron: Yeah. Steve: you don’t see them in old people’s homes. You know what they are, Cameron? Smokers Cameron: people. Steve: who are obese. You don’t see ’em. They’re not in old people’s homes. Just a clue. That’s all I’m saying. But do Cameron: Mm. Steve: as you will. People carry on. Cameron: So, no, back to AI. People still don’t understand that they, they’re trained, or they train themselves, to a large extent. So the, the paper goes on to say: knowing how models like Claude think would allow us to have a better understanding of their abilities, as well as help us ensure that they’re doing what we intend them to. For example, Claude can speak dozens of languages. What language, if any, is it using in its head?
Claude writes text one word at a time. Is it only focusing on predicting the next word, or does it ever plan ahead? Claude can write out its reasoning step by step. Does this explanation represent the actual [00:30:00] steps it took to get to an answer? Or is it sometimes fabricating a plausible argument for a foregone conclusion? So they’ve created a series of tools, based on neuroscience, and they’ve tried to use these tools to figure out how Claude thinks, and I’ll skip ahead a little bit. Our method sheds light on a part of what happens when Claude responds to these prompts, which is enough to see solid evidence that Claude sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal language of thought. We show this by translating simple sentences into multiple languages and tracing the overlap in how Claude processes them. Claude will plan what it will say many words ahead, and write to get to that destination. [00:31:00] We show this in the realm of poetry, where it thinks of possible rhyming words in advance and writes the next line to get there. This is powerful evidence that even though models are trained to output one word at a time, they may think on much longer horizons to do so. Claude, on occasion, will give a plausible-sounding argument designed to agree with the user rather than to follow logical steps. We show this by asking it for help on a hard math problem while giving it an incorrect hint. We are able to catch it in the act as it makes up its fake reasoning, providing a proof of concept that our tools can be useful for flagging concerning mechanisms in models. So, and they say here, we were often surprised by what we saw in the model. In the poetry case study, Steve: Hmm.
Cameron: we had set out to show that the model didn’t plan ahead and found instead that it did. In a study of hallucinations, we found the counterintuitive result that Claude’s default behavior is to [00:32:00] decline to speculate when asked a question, and it only answers questions when something inhibits this default reluctance. So. It’s like, it, it, it, like, it’s fascinating that A, the people that build these things don’t really know how they work still. And B, the way that they intuit they work turns out to be incorrect when they build models to test how it actually works. And C, you know, this whole Cory Doctorow argument that we’ve been making fun of for the last couple of years about stochastic parrots and they’re just word prediction generation engines. And that’s what Chomsky said as well. Steve: and I, I’ve never agreed with that. I’ve never agreed with that. Doctorow. I, I love him so much. He is so wrong on that. Cameron: And it’s so obvious to anyone that uses these tools at scale that there’s something else going on. Steve: something else going on. Cameron: They started as stochastic parrots, but [00:33:00] they’ve moved on from that. You know, they, there’s. Steve: so similar and I just keep on coming back to Biocare and when I watched that video, the first thing that I thought to myself was, it’s very similar to nature and, and for many of the things that we’ve studied, whether it’s MRI scans on brains or the way root systems in trees work, what we, we know, and even we spoke about in a pod a few episodes ago on electricity, we sort of don’t know exactly how it works. We just know how to harness the functionality of what it does. That seems to be true of this.
It has a sense of bio, it’s a different type of biology, it has that sense and it seemed to me as though it was nonlinear and it would change the way it does things based on new problems, which the idea that you just mentioned there, it tries not to hallucinate unless something else interacts with it, which is a little bit like the way the Internet’s designed [00:34:00] as well. The internet was designed in case there’s nuclear war and it reroutes itself to find a new answer. It seems that this system is, is similar to that and I think that that’s fine. I think the fact that it’s nonlinear and it’s not fully predictable makes it interesting and more nuanced and it gives us a sense of power because our creativity overlapping, can change the way it does things. I think it’s kind of emancipating. I, Cameron: Yeah. I mean, it can be, um, it’s also terrifying and threatening to a lot of people, but, um, I, I, I think it’s emancipating and will be, hopefully Steve: And Cameron: play out in a way Steve: biomimicry of how humans learn. A child at the age of one or two is a, what’s the word? Stochastic? Cameron: yeah. Something like that. Sta, stochastic. Steve: It’s like a stochastic [00:35:00] parrot. If you look at a, a 1-year-old child, you’ll see them repeating phrases without an implicit understanding of the meaning of the words until such time that they have the cognitive ability to underscore meaning around it, which is what this lecture is all about. Cameron. Cameron: Stochastic means having a random probability distribution. Steve: wow, there you Cameron: Yeah. Steve: the idea that kids just repeat and don’t really understand the words and then eventually they see a pattern over time with hungry or give or they just start to do it and, and, and then the structure grows. And so LLMs have grown in terms of their structural capabilities from parroting words and just putting together things that seem to make sense.
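[Editor’s note] The “stochastic parrot” picture being debated here — a model that only draws the next word from a probability distribution conditioned on what came before — can be sketched in a few lines. This is a toy illustration only: the vocabulary and probabilities are invented, and real LLMs condition on the whole context via a neural network, not a lookup table.

```python
import random

# Toy "stochastic parrot": next-word probabilities for a tiny invented
# vocabulary. "<s>" and "</s>" mark sentence start and end.
MODEL = {
    "<s>":  {"the": 0.7, "a": 0.3},
    "the":  {"cat": 0.5, "dog": 0.5},
    "a":    {"cat": 0.5, "dog": 0.5},
    "cat":  {"sat": 0.6, "ran": 0.4},
    "dog":  {"sat": 0.4, "ran": 0.6},
    "sat":  {"</s>": 1.0},
    "ran":  {"</s>": 1.0},
}

def generate(rng: random.Random) -> list[str]:
    """Sample one word at a time until the end-of-sentence token."""
    words = ["<s>"]
    while words[-1] != "</s>":
        dist = MODEL[words[-1]]
        nxt = rng.choices(list(dist), weights=list(dist.values()))[0]
        words.append(nxt)
    return words[1:-1]  # drop the boundary tokens

print(" ".join(generate(random.Random(42))))
```

Anthropic’s poetry finding is that real models, although trained with exactly this one-word-at-a-time objective, appear to plan several words ahead toward a destination — something a pure lookup table like this cannot do.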
’cause I remember once my daughter, Laura, I can’t remember the exact sentence, but came out with this incredibly articulate sentence with a word in there that I knew that she didn’t know. [00:36:00] And I said, you know what that word means? And she said, I dunno what it means, but I know where it goes. She said to me, and I never forgot that. Cameron: That’s great. That’s great. Yeah. And Fox does that all the time and it’s fascinating. Like he knows that that word goes there but doesn’t really know what it means, but knows that it’s appropriate for that sentence structure. Right. And and to be honest, when I’m learning Italian, there’ll be many times when I’m doing Duolingo and I have to translate something from English into Italian. And I know I have to use a certain word here. I still don’t know why, really, but I know that I have to use that word in order to get the construction of the sentence right. Steve: I, when I learned Italian, I had to, I bought a book, which was English for Italian speakers. actually have to learn the grammar, because the grammar in Italian is more complex. And there were Cameron: Hmm. Steve: many things I [00:37:00] knew how to do, but I didn’t know why or actually what they were in, in Cameron: Yeah. Steve: future, past participles and Cameron: Hmm. Steve: that Cameron: Conjugation. Steve: yeah, I Cameron: I knew nothing. Steve: lot of things they, how they are or why they are. I just know how to use them and I’ve got no idea. And even when I’m reading a, a lot of books where they get a bit technical, uh, whether it’s investing or, or tech stuff, I call it the understanding it later. I just take it, absorb it, and then sometimes on a night or Tuesday I’ll be like, that book, you know, I’ll just be like, oh, that’s how. Finally got it. Cameron: We talk about that a lot at kung, uh, a lot at kung fu.
Like, uh, people that have been around a lot longer than me will talk about how you just do what you’re told to do long enough and then one day you understand why you’re doing what you were told to do. And Chrissy and I have had that experience in our four years there. Like, you’ll, you’ll learn a particular move and after a few years you’ll go, oh my God, now I [00:38:00] know why I’m doing that move. Right. It, you use it in a practical application and you go, oh my God, that’s that thing that they told me to do three years ago. Right. I wanna read, um, just the first, um, couple of paragraphs of the actual, um, introduction to the paper that Anthropic has done here, because they’re sort of talking about what you were talking about. They’re talking about the AIs in terms of organic biology, and that’s how we have to think about these things now as living organisms with a biology different from what we’re used to, but. Uh, just as valid. They say large language models display impressive capabilities. However, for the most part, the mechanisms by which they do so are unknown. The black box nature of models is increasingly unsatisfactory as they advance in intelligence and are deployed in a growing number of applications. Our goal is to reverse engineer how these models work on the inside, so we may better [00:39:00] understand them and assess their fitness for purpose. The challenges we face in understanding language models resemble those faced by biologists. Living organisms are complex systems, which have been sculpted by billions of years of evolution. While the basic principles of evolution are straightforward, the biological mechanisms it produces are spectacularly intricate. Likewise, while language models are generated by simple human designed training algorithms, the mechanisms born of these algorithms appear to be quite complex. And I think that’s a terrific analogy. And, um, I, I’m just excited by this.
I remember seeing a Kurzweil talk a year or so ago where he was talking about the fact that. Yes, these models use a lot of compute today, but that’s probably because we don’t understand how they work, so we just throw compute at it. But eventually we will understand how they [00:40:00] work and we’ll probably understand, we’ll probably then appreciate that 90% of the computation that is being done isn’t really required. And we’ll be able to shrink the models down to much smaller size, requiring less energy and less compute for most things. And I think this is part of that process is trying to understand how they do what they do. But we’ll see. I, Steve: It seems like that won’t be too dissimilar to scaling web service requirements where we used to buy the entire server of whatever you might need on a particular day. And then when we got to web servers that scaled upon needs of computation at that point in time, seems that we’ll get to a similar place, Cameron: yeah. Steve: with, with the AI models. Cameron: If you wanna solve cancer, you’re gonna need a big compute. If you wanna ask it what the weather’s gonna be tomorrow, a lot less compute required. Um, another story that blew my mind this week, Steve, was [00:41:00] LG, have come out with their own AI. Steve: life is good. Cameron, they’ve always said that. Lucky Goldstar. Cameron: I have an LG smart TV and its software is the fucking worst. Like it’s, its webOS is the clunkiest piece of shit. Absolutely horrible user experience. Just obscenely terrible. So I don’t like the idea that their AI is gonna be running on it, but. They, uh, they, they tweeted. This is LG AI Research Breaking News. We’re thrilled to announce EXAONE Deep, a next generation AI model designed to enhance reasoning capabilities evolving into agentic AI for real world industry solutions, specialized in math, science, and coding tasks. EXAONE Deep pushes the boundaries of AI’s role in both professional fields and everyday life.
They’ve got a [00:42:00] 32 billion parameter model. They’ve got a 7.8 billion and a 2.4 billion parameter model, which they claim dominated all major benchmarks securing first place. Steve: did. Cameron: They’ve released it on Hugging Face, but the, the thing that’s, um. Interesting here is we’ve seen AI models come out from Google and from X/Twitter, uh, in China, from a hedge fund company, from Alibaba. Uh, we are now seeing them come out from industrial technology hardware companies, right? Steve: Which Cameron: It. Steve: is, is, really exciting and positive and it, it, it circles back to the open source movement because one thing we certainly don’t need is the five big tech companies dominating [00:43:00] this space. And I think the more that we see hardware players moving into software and software players moving into hardware, I think it kind of opens up the competitive paradigm, which. Uh, antitrust hasn’t been able to solve, so maybe this moves us towards that place, which again, that feeds well into what’s happening with Nvidia in, in humanoid robots, which we’re gonna talk about. That’s more open source as well. Cameron: Yeah, very. They’ve got a big open source thing, but we, you know, we are going to see, I’m quite convinced, um, lots and lots of companies with their own AI models that will interact with each other in natural language or their own language as we’ve talked about before. But, uh, you will have every device, and I know this kind of sounds like the internet fridge promises from 1995, but Steve: [00:44:00] Yeah. Cameron: you will, you, you will have maybe not fridges, but lots of devices will have some kind of AI. On them because whe when, when it’s required, when there’s a value to have an AI on them, um, you know, you, you, your unit cost is gonna go up if you need to put a chip set on something to run a local AI, the unit cost. Steve: chip set, I mean, what is the chip set of the future?
It depends, ’cause there’s chip sets in everything now from a toaster to electrical device, which is not a major cost issue, but it Cameron: Yeah. Yeah. And they’re relatively low level GPUs or CPUs, not GPUs, they’re CPUs, but depending on the unit cost and, and where they go. I imagine like my Roomba [00:45:00] will have some sort of an AI on it so it’s learning and it’s intelligent. Uh, my fridge maybe, maybe not, but you know, different devices will necessarily, my TV hopefully, probably, um, will be keeping track of what I watch when I watch it, what I like to see. Steve: car, Cameron: Well, cars obviously. Yeah. So I mean the, it it, to see these large companies, um, start to roll theirs out, just, there’s gonna be massive amount of competition in this space. There will probably be some sort of, um, consolidation at some point as well, but the commercial interests to push for everyone to have their own at some level and their own control over it are gonna be enormous. Um, by the way, I know that in our last episode you talked about, you brought up the fact of, with the Mag seven, is there gonna be a collapse in the Mag seven bubble? Um. [00:46:00] Well you did, and I said that Tony had been talking about that on QAV for a long time. Well, Apple was down 9% I think today, um, as a result of Trump’s Liberation Day tariffs. So I dunno, Steve: you. Cameron: liberating money from, uh, investors, uh, among other people in these companies. Steve: two weeks ago we didn’t talk about it. There was a lot of talk about Apple’s lagging on generative AI and the disappointment of the implementation of Apple AI or Apple Intelligence as they Cameron: Yes, Steve: coined it. Uh, and I, and I still think Cameron: I. Steve: unless they’ve got a secret up their sleeve, they’re, they’re really, really lagging. And it’s been nothing but disappointment from Apple in terms of, uh, AI, anything that’s Cameron: Yeah, Steve: let’s put it that way.
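[Editor’s note] A rough sense of why parameter count drives on-device cost: a back-of-envelope, weights-only memory estimate for the model sizes LG quoted (32 billion, 7.8 billion and 2.4 billion parameters). The precisions shown are assumptions for illustration; real deployments also need memory for activations and runtime overhead.

```python
# Back-of-envelope, weights-only memory for a model's parameters.
# Ignores activations, KV cache and runtime overhead -- illustration only.
def weights_gib(params_billion: float, bytes_per_param: float) -> float:
    return params_billion * 1e9 * bytes_per_param / 1024**3

for size in (32.0, 7.8, 2.4):                       # sizes quoted in the episode
    for precision, bpp in (("fp16", 2), ("int4", 0.5)):
        print(f"{size:>4}B {precision}: {weights_gib(size, bpp):6.1f} GiB")
```

By this estimate the 32B model wants tens of gigabytes even before runtime overhead, while a quantized 2.4B model fits in roughly a gigabyte — which is why the smaller variants are the ones that plausibly end up on a TV or a Roomba.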
I, Cameron: I saw Marques Brownlee, uh, did [00:47:00] a YouTube on that in the last couple of days, and he was scathing Steve: yeah, Hard Fork Cameron: their Steve: a, a, a, a solid episode on it as well. The New Cameron: right. Steve: um, tech podcast was really good, scathing and, and I think it’s, it’s fair play because they’ve got every resource in the world. You have $300 billion, 400 billion in the bank. It’s like. Uh, absolutely they could, should and would, and I think they’re in the best position to develop a really personal digital twin AI that you can converse with. Imagine if Siri was, had the capability, verbal and reasoning capability of DeepSeek and OpenAI that would have incredible utility given where it lives. Cameron: Speak about speaking about bitching, the thing that’s been annoying. The fuck outta me recently. Steve: I was, I was expressing technological and economic realities. I don’t know if I’d call it the B word. Cameron: Tech bitching. Um, I’m annoyed that it’s [00:48:00] 2025 and my ebook readers, mostly Apple Books and secondly Amazon Kindle on my iPad, don’t have an AI built into them yet. I am constantly having to look stuff up, whether I’m reading fiction or nonfiction, I’ll constantly be looking stuff up to, you know, I I I need to check stuff, right? Oh, what does that mean? How does this work? Again, remind me of this. Um, and I, I don’t wanna look it up in a dictionary, I don’t wanna look it up in Wikipedia. I wanna have a conversation with an AI about the thing. The most recent one that bugged me was I’m reading, uh, Len Deighton’s first spy novel, The IPCRESS File. You ever gotten into Len Deighton, Steve: No, Len? No, I haven’t really got to Len. Cameron: Lenny D. Steve: I feel bad for Len and I, I was, I was gonna send him an email, but I’ve just been inundated with spreadsheets. Cameron: It is a spy [00:49:00] novel and it’s sort of set in the Cold War. And, um, Steve: wonder Cameron: was Steve: everyone.
It’s set in the Cold War and it’s a spy novel. We’re talking to Cameron Reilly here. I mean, no surprises. Cameron: actually, uh, I’m just getting into spy novels, all of my Cold War knowledge is from nonfictional sources, but I’ve just started to get into John le Carré and Len Deighton. ’cause they have a very, um, anti James Bond view of the world. Their spies are all dumb, struggling, frustrated, wannabe, badass spies, but they’re not. They’re sort of, um, uh, everything goes wrong. Things don’t work out. Steve: Spies Like Us, one of the great movies. Cameron: yes, a bit like that. They sent and, and it’s like real life too, because, you know, if you know anything about the CIA over the years, um, I. Most of, nearly everything they’ve done has been a complete failure. Um, they, they send in a bunch of people undercover, midnight into [00:50:00] North Korea to infiltrate. They’re all dead within an hour of arriving. ’cause the North Koreans knew they were coming in like a week ago. Steve: What a horrible person I am. Cameron: Yeah. Anyway, I’m reading this Len Deighton novel and he’s got some American military guy explaining to the British how, uh, a nuclear bomb works. How a uranium-235, uranium-238 bomb works? And I’m reading, I’m going, that’s not how a nuclear bomb works. I’m pretty sure. ’cause I’ve done a lot of podcasts on Steve: your Cameron: nuclear bombs. Steve: from today, I dunno if I Cameron: I know, I know, I know. Fair point. Fair point. But I had to. Copy and paste the paragraphs from the book into ChatGPT and say, this isn’t how it works. Isn’t there a, there’s gotta be a proton trigger right from, and sorry, a neutron trigger. And he goes, yeah, yeah, yeah. So we have this conversation. I had to crosscheck what I was reading. It should be built in. I should be able to [00:51:00] block it in the app and go, Hey, what about this? It doesn’t exist. It’s annoying as hell anyway. Steve: that would be a really good feature.
That is definitely worth talking about and would drive subscription. It really would. Cameron: Well, Apple’s obviously not gonna do it until they have their own AI Steve: in, in translation, they’re never gonna do Cameron: never gonna do it. Steve: just, you know, they’ve got the, they’ve got that device that they like selling. Cameron: Yeah. Well look, I hope Apple get it right. I really do because I, you know, I want it in my devices. I want it to. Steve: one that has a boundary around it that’s my digital twin. ’cause I think that Cameron: Yeah, exactly. I just don’t think they’re making fast enough progress. Um, so yeah, Nvidia did a thing. Uh, good old Jensen Huang did a thing, GTC 2025. Looked. Yeah. Coolest man in tech. I think all the rest of [00:52:00] them like Elon and Zuck and whatever. And, um, who’s the Amazon guy again? Jeff? Uncle Jeff. They’re all just trying to look like Jensen. ’cause Jensen’s Steve: He’s the Cameron: genuinely cool looking motherfucker. Yeah. Yeah. He’s the Elvis, he’s the tech Elvis. Steve: really flipped it up. I always love that they say that Zuckerberg looks like a, a Chean Coke dealer. Like, it’s like he, he just had, oh my God. Talk about the guy that you can buy the clothes, but you just gotta be able to wear ’em. Right. You have to be able to wear ’em. Cameron: He, um, unveiled a couple of new robot, uh, announcements. One was kind of a JV with Disney and DeepMind, Steve: very uh, WALL-E, didn’t it? Cameron: very WALL-E. And unfortunately in his presentation, he didn’t really give a lot of facts or specifics about how it works. But then I saw, [00:53:00] um, Steve: was disappointed in the presentation. I’m like, and what? Like there was like, Cameron: yeah, Steve: seem he was getting cheered. Like he was an MMA fighter, who’s the, Conor McGregor? And I’m like, what have you actually done? You just brought out a WALL-E robot and said Stand and sit. I’m like, and and sorry I’m waiting for the, the moment.
Cameron: yeah, but I saw, uh, Mark Rober from CrunchLabs. You know Mark Rober? Steve: Marky, yep. Dunno him Cameron: Robo Steve: Robo Cameron: Roby. Um, Steve: and he does robots. Cameron: uh, yeah, well he is a former NASA engineer and Mormon who has, Steve: by the way. Thanks for Cameron: uh, Steve: No one in the world has ever called it nasa. Cameron: nasa, Steve: Nasa. Cameron: uh, has a great YouTube channel about engineering, uh, really, really entertaining, um, great for kids that wanna be engineers or interested in science or engineering. But he went and did a tour of [00:54:00] the Imagineering Labs where they were working on this, and he had a bunch of them and showed how they worked. So they’re all human controlled. But they’re trained in virtual physics environments, how to walk. So they’re bipedal and they’ve got two Nvidia chip sets in each one, but they’re also human manipulated. And the, the Disney folks, were talking about them as an extension of gaming, uh, where you are driving this thing with like an Xbox controller to do stuff, but it has a certain level of native onboard intelligence, um, combined with, uh, your ability to control it, to do stuff. Like a bit of a, like a car. What is Steve: or chess kind of for robotics, right? Cameron: Yeah, yeah, yeah, Anyway, it was pretty cool. But the big announcement was their GR00T N1 [00:55:00] humanoid foundation model, um, that he talked about at the end of the video. Um, this is their open source robotics model that he had sort of talked about before, but it’s a pretty big deal. Um, they’re putting a lot of investment and research into the development of general purpose humanoid robots and making it open source, right? So, um, anyone will be able to build these things. They’re trying to make robotics really blow up and be available everywhere. Obviously you need to buy the Nvidia chip sets to make them work, but it’s again, part of this thing where the vision of the future is.
You will have hundreds and hundreds or thousands of companies manufacturing [00:56:00] robots, general purpose robots or, or specific industry robots that will be running NVIDIA’s chip sets and hopefully chip sets produced by Chinese companies as well. And where the, the software, the intelligence for building these things will be readily available for everyone to get up and running as quickly as possible. So I, I really do expect to see an explosion of general purpose robotics in the next five to 10 years, driven by these sorts of, uh, technologies being made available. Steve: Yeah, I, I, I just was excited by the one thing on it, which was this is open source, it’s the engine. It feels to me that we’re about to enter an era, which is akin to the automobile. We had horses and carts for a long time. So we had the engines, the idea of [00:57:00] how it worked and everything was kind of open source. And everyone went, okay, you’re gonna get this engine, you’re gonna put it into this mechanical device. And now we’ve got this new world and Cameron: Yeah. Steve: and AI has been this separate kind of thing, and robotics has been over there. It feels like merging those two things can create a, a new, uh, competitive reality. But also in the era when it’s not just about turning, uh, atoms into bits, it’s now about turning bits into atoms. That whole idea of everything was digitization, but now we’re entering the physical internet, the robotics, the manufacturing internet, and if there was ever a glimmer of hope in things going open source, uh, so that you can have a wider comparative viewpoint of this and all different.
Versions of what robots look like, the fact that we just, we, we had the Nvidia style one, then you’ve got robotic, uh, vicious dogs coming from Boston Dynamics and more humanoid robots like, uh, the Figure 01, I, I like [00:58:00] this idea that they could develop out into a lot of different physical and reasoning models because of the open source nature of it. Cameron: Yeah. I dunno if you saw this, but, um, Thomas Friedman had a piece in the New York Times, uh, a couple of days ago that I read. Now generally, I don’t like Thomas Friedman. Disagree with him a lot ’cause he’s, yeah, he tends to be a rah rah America, um, American imperialism supporter. But this was a really interesting article, particularly because of his history of being a rah rah, American imperialist. He said, I had a choice the other day in Shanghai, which Tomorrowland to visit. Should I check out the fake American-designed Tomorrowland at Shanghai Disneyland, or should I visit the real Tomorrowland? The massive new research center, roughly the size of 225 football fields, built by the Chinese technology giant, [00:59:00] Huawei. I went to Huawei’s. It was fascinating and impressive, but ultimately deeply disturbing. A vivid confirmation of what a US businessman who has worked in China for several decades told me in Beijing, there was a time when people came to America to see the future. He said, now they come here. I’d never seen anything like this. Huawei campus built in just over three years. It consists of 104 individually designed buildings with manicured lawns connected by a Disney-like monorail, housing labs for up to 35,000 scientists, engineers, and other workers offering 100 cafes, plus fitness centers and other perks designed to attract the best Chinese and foreign technologists.
The Lianqiu Lake R&D campus is basically Huawei’s response to the US attempt to choke it to death beginning in 2019 by restricting the export of US technology, including [01:00:00] semiconductors, to Huawei amid national security concerns. The ban inflicted massive losses on Huawei, but with the Chinese government’s help the company sought to innovate its way around US sanctions. As South Korea’s Maeil Business Newspaper reported last year, it’s been doing just that. Huawei surprised the world by introducing the Mate 60 series, a smartphone equipped with advanced semiconductors, last year despite US sanctions. Huawei followed with the world’s first triple folding smartphone and unveiled its own mobile operating system, Hongmeng, to compete with Apple’s and Google’s. The company also went into the business of creating the AI technology for everything from electric vehicles, self-driving cars, and even autonomous mining equipment that can replace human miners. Huawei officials said in 2024 alone, it installed 100,000 fast chargers across China for its electric vehicles. By contrast, in 2021, the [01:01:00] US Congress allocated $7.5 billion toward a network of charging stations. But as of November, this network had only 214 operational chargers across 12 states. Steve: I mean, it’s pretty poignant, isn’t it? It reminds me of when you mentioned the number of engineers in the campus. There was a video that went viral about, I wanna say 20 years ago, 15, 20 years ago, about China, and it said if you are one in a million in America, you’re one in a thousand in China. Yeah. And, and that is, is really it, right? Is the quantum and the investment and America is really eating itself, the anti-competitive nature of what’s going on, the lack of real investment. Uh, [01:02:00] yeah. And, and this Cameron: what, Steve: get. Cameron: here’s what Friedman says about exactly that. China starts with an emphasis on STEM education, science, technology, engineering, and math.
Each year the country produces some three and a half million STEM graduates, about equal to the number of graduates from associate, bachelor’s, masters, and PhD programs in all disciplines in the United States. When you have that many STEM graduates, you can throw more talent at any problem than anyone else. Steve: Yep. That’s Cameron: As the Times Beijing Bureau Chief Keith Bradsher reported last year, China has 39 universities with programs to train engineers and researchers for the rare earths industry. Universities in the United States and Europe have mostly offered only occasional courses, and while many Chinese engineers may not graduate with MIT level skills, the best are world class, and there are a lot of them. There are 1.4 billion people there. That means that [01:03:00] in China, when you are a one in a million talent, there are 1,400, 1,400 other people just like you. Steve: Hey, I did it. I was a little Cameron: He’s. Steve: 400 off, but I was close. I was on target. I’ve gotta find that video that said, it was like this hypertext video about China versus crazy, and everyone was like, this was a long time ago. And it’s not like we didn’t have time to mount a response. And I think a lot about university and the lack of manufacturing or the reduction in manufacturing in western markets and the scientific imperative. Yes, we’ve still got very smart people in western countries. We just don’t have the quantum and the fact that we don’t have the manufacturing, you don’t have the opportunity, you don’t have the need, you don’t have the natural economics that pushes people into it. And to be quite frank with you, what’s happened in our universities in Australia and around the world, so bleeding heart. And you’ve even seen it in school where [01:04:00] we stopped giving people marks out of a hundred. And we started to get soft on just realities, did you pass the test or not? And what did you get and what’s your score? And, and, now it’s like, oh, I tried.
Everyone gets a ribbon in Australia, but guess what? Not everyone gets a ribbon or a trophy in the real world. And we’ve got so many university courses that I don’t think. Add a lot of value. I don’t want to, there’s one that I’m so tempted to say that I think is just a non course, and I’m just, I refuse and I won’t say it because I will be judged and potentially canceled and I’m not up for that. Cameron, Cameron: Yeah, I, Steve: Have a guess at which course I might be thinking of? Don’t, don’t Cameron: no, I don’t know, but I, look, I send, well, we send Fox to a hippie school. Um, and I, so I, I don’t really agree with your criticisms of, um, giving kids, um, positive feedback [01:05:00] regardless of whether or not they’re academically doing well or not. I think Steve: didn’t say that. I didn’t say give people negative feedback. Cameron: when you say giving them a ribbon, Steve: yeah, Cameron: I think I, I, Steve: Yeah, that’s, that’s not positive fee. Positive feedback. Absolutely. Give people sure, but not everyone gets a trophy. Everyone shouldn’t get a trophy. Cameron: um. Steve: and people need to be scored on what they’re good at. Because you know what? Unless we have the courage to tell kids, I’m not saying lambast a 6-year-old or an 8-year-old. I mean, this is endemic at university and high school level. I’m saying let people know if they’re not good at something so they can find something that they are good at and encourage the hard stuff. Guess what? It’s hard. Life’s hard. Cameron: I disagree with that. I don’t think life’s hard. I think life’s hard if you make it hard. But I, I think that people, um, have strengths and weaknesses and we should reward or [01:06:00] encourage, or incentivize kids for doing the things that they’re good at and encouraging good behavior. And it’s not necessarily the things that have been given trophies for in the past. So maybe Steve: good.
Yeah, Cameron: does get a trophy, but we just make more trophies for different things that we didn’t give trophies out for. Steve: areas we’ve forgotten to give trophies in. Cameron: Yeah. Like being a good human being deserves a trophy Steve: it Cameron: really, Steve: Yes. Cameron: um, drawing a good picture or trying your hardest to draw a good picture Steve: Yes. Cameron: gets a trophy. Whether or not the picture’s subjectively, I like it or not. You tried hard to do something that was difficult for you. You get a trophy Steve: for a lot Cameron: or you get recognition, Steve: for a lot of years, I’ll tell you what we had, it was a problem in Australia. There used to be Australian of the Year and between about 1985 and 2010 or 2005, dunno, it was always a sports person. They’ve, [01:07:00] they’ve flipped it up a bit recently and now it might be a scientist or a medical practitioner or, which is amazing, right. For a long time it’s like what Steve Cameron: or a soldier killed a bunch of Afghan civilians and Steve: And Cameron: covered up their bodies. Steve: yeah, of course. And, and they were just sitting down having a game of cards and Yeah. Enjoying it. Some a, a Arabic coffee, Cameron: Smoke. Yeah. Steve: Uh, but a long time, heroes in this country in America were, were, were celebrities and sports people, and, you know, Tiger Woods and, uh, And Cameron: Tiger Woods got an Australian of the Year. Steve: out that George Carlin was right. Tiger Woods? I pick my own fucking heroes. Thank you very much. Said George Carlin in one of his last specials, dumb look at his Cameron: All right, Steve: steroids and Cameron: we’re off. Okay. Moving right along. To finish up, I wanna talk to you about Praxis because, um, I just did a whole show about Praxis on The Bullshit Filter, but you, you’re gonna love [01:08:00] this. Steve: I, I’m Cameron: Yeah, I will. Steve: I’m tuned in. Everyone here comes Praxis.
Cameron: So there, I, I came onto this when I was trying to figure out why Trump wants Greenland. Mm-hmm. Um, lots of rare earth minerals, strategic location, et cetera, et cetera. But there’s a guy called Dryden Brown. That you need to look up. There’s also a guy called Srinivasan who wrote a book. So, God, uh, I, I should have my full notes here, but I, I, I’ll try and pull this up from my memory from a couple of hours ago, a few years ago, um, one of the founders of, uh, Coinbase, who’s also a partner with Marc Andreessen at Andreessen Horowitz, his name’s, um, Balaji Srinivasan, wrote a book called The Network State, which I read, uh, yesterday, flipped through The Network State. You heard of the network state vision? [01:09:00]  Steve: You better tell me about it. Cameron: Have you read Ayn Rand’s Atlas Shrugged? Steve: I have heard a lot about it and read summaries. I haven’t invested the time in it because it’s got a bad reputation Cameron: You should read Atlas Shrugged. Everyone should read Atlas Shrugged and The Fountainhead. Um, you don’t have to agree with it, but you should read it. So they’ve taken this idea from Atlas Shrugged and gone ballistic. So the idea of the network state is a startup society where imagine getting, uh, a nerdy subreddit that has 50,000 members. They crowdfund buying a piece of land, they then build a city on that piece of land. They then go and live in that city and declare themselves an independent nation state with their own government, their own laws, their [01:10:00] own army, police force, taxation or lack thereof. And basically run their own thing. So it’s a, a, a country where you get to choose to be part of the country. You’re not born into it. You apply for membership and, or you buy your way in, a bit like Trump’s golden visa for American citizenship right now. Steve: policies. You buy your way in, apparently. Cameron: So this guy who’s part of the whole Marc Andreessen thing, um, wrote a book about him. Then there’s a young guy.
Steve: this right recently. Okay. Cameron: There’s a young guy called Dryden Brown, late twenties, homeschooled because he wanted to be a professional surfer. So you’re gonna like him. Um. He started a company a couple of years ago called Praxis, P-R-A-X-I-S. Praxis is a Greek word that means [01:11:00] taking something from theory and putting it into practice. He has raised $500 million from, from Peter Thiel, among others, with a view to building a network state. A couple of years ago he went to Greenland and tried to buy Greenland to build this on Greenland. They told him to go fuck himself. Uh, he’s trying different places around the world to get this land. Um, I watched a YouTube of a talk that he gave last year, which was absolutely manically bonkers. Um, but anyway, this startup society where it’s just basically autistic tech nerds like me, will go and build this tech-utopian society that’s [01:12:00] internet-first, pro-AI, pro-robotics, anti-taxation, full of hot girls. ’cause they’ll all be incels. Uh um, Steve: you had me at Hot Girls, Cameron, or I wasn’t really on board. But it’s, it’s amazing how one single mode of proposition can change many things. Cameron: um, now there’s a bunch of interesting things. So Peter Thiel is, uh, backing this guy, Peter Thiel, obviously also backing J.D. Vance. Steve: Yeah. Cameron: Peter Thiel, co-founder of PayPal with Elon Musk. And who is Trump’s ambassador to Denmark, which owns Greenland? Ken Howery, another member of the PayPal Mafia, one of the other founders of PayPal. So you’ve got this, these joining dots between PayPal guys, Trump, Dryden Brown, Praxis. Uh, Marc Andreessen and, um, Balaji are both [01:13:00] big fans of the Praxis idea and the network state thing, and Greenland, and Peter Thiel’s got his citizenship in New Zealand as his backup country if all goes wrong. But basically there seems to be this thing where these tech bros have this dream of building their own country.
It’s ba, it’s techno-feudalism. Steve: Mm. Cameron: what do you give to the man who’s got everything? His own kingdom, Steve: country. That’s what that’s, that’s one of the old school strategies that bring it back. The old Cameron: It’s classic. It’s classic old school, ping dick. So, and then, um, you tie in the fact that they’re obviously dismantling America right now and trying to dismantle global trade and global international bodies. They wanna shut down the UN, they wanna shut down the World, um, Bank, they wanna shut down NATO. They’re trying to dismantle all of the [01:14:00] post-World War II, Cold War era international bodies. And then they’ve got this idea of building these nation states that are run by corporate, the tech billionaires. Um. It’s a really interesting play going on that I’ve just learned about in the last week, and I was like, ah, Steve’s gonna love this. Steve: hey, get this thing up and running and Mr. Trump can have another term just in his seceded state on Greenland or in the middle of America, or why not Cameron? Pine Gap. Pine Gap 2.0. Let’s get the Yankees down here. We’re gonna bring in some water from outside of Australia, as the sea levels are rising, we’re gonna desalinate it with robots and build a forest. And if you forget The Line, forget The Line in the, in the Arab regions, it’s Pine Gap 2.0. Praxis Australia’s on board. We’ve [01:15:00] got an election, it’s a vote-winning policy Cameron: What was the name of the city in the middle of Australia that was going to be built Steve: There was someone, Cameron: in the. Steve: ago, someone proposed it. Cameron: 1980s. No, I’m thinking of the Multifunction Steve: That’s Cameron: Polis. Steve: The Multifunction Polis. MFP. There Cameron: Yeah. Steve: there was another, independent, uh, I think two elections ago who proposed another city out near Griffith or something where it was gonna be like a university set up a, a new city all uh.
Run by, by renewable energies and this kind of ideology. Cameron: Well my friend, um, Peter Ellyard, um, was the guy behind the Multifunction Polis idea, as I understand it. Originally, he, um, was working in the seventies. [01:16:00] He worked for the Whitlam government as the chief of staff of environment ministers. And then he ended up, uh, as the CEO of Australia’s Commission for the Future that was set up by the Hawke government. And so part of his thinking around all of this was this Multifunction Polis, which was like this, um, cutting-edge technological city. Um, I think it originally was gonna be in South Australia, but then it moved to being in the middle of Australia in the Outback or Alice Springs or somewhere like that. Never got off the ground. People freaked out. But, um, this, this Praxis thing kind of reminds me of that. And like, I’m, I’m, I’m partly on board with the idea of a new city-state that’s all pro-tech, Steve: I Cameron: pro-internet. Steve: If, if anything we could say that this could become a really interesting MVP of what works and what doesn’t work, and you’re putting a boundary around it and it’s self-selecting participants or populace from, from that perspective, it’s [01:17:00] probably the type of thinking that we need. And in some ways, the things that you have concerns with those who are for it and funding it, we can use their money, their billionaires, put them in it, and they can eat their own dog food, as it were, to use the, the, you know, the Google ideology of if you believe in it, then build it. And let’s see. And I think it could be a really incredible test case, Cameron: Well, the way Dryden Brown is talking about it is it’s a test case for Elon and Mars, Steve: right. Cameron: right? When Elon builds his Kingdom of Mars and changes the name of Mars to Steve: Mask. Cameron: Musk, Steve: Mask. Cameron: just, just puts a K on the end of it. Steve: Mask. Cameron: It’s already got an S, just put a K on the end: Mask, Elon Mask.
Steve: Musk. He should change his name to Elon Mask. I, Cameron: they’ll have to build, Steve: Elon Cameron: he’ll just. Steve: That’s why he was gonna choose Venus, but it didn’t work with his name. And he said, look, Venus, Mars, we’ll go with Mars. The [01:18:00] names are similar. That’s where we’re Cameron: Yeah, just call it X, the planet X is what it’ll be called. Anyway, check that out. Um, it’s, Steve: him and Donald before he leaves. Cameron: mm Steve: Elon Musk leaves DOGE, they should Cameron: mm Steve: change the name of Mars and call it X. I’m just Cameron: Oh, change the name of the United States just to X Steve: Yes, I Cameron: Be easier. Just call it X. Yeah. Alright. That’s all I have for you this week. You got anything else? Steve: all I got. I feel like this has been one of the great episodes and I don’t know what the listeners think, maybe they can tell us, but it’s been one hour and 20 of power and I’ve loved every second. Cameron: We didn’t point out that we had a, we had a, one of our TikToks last time blew up 500,000 views. You going on a rant about Elon Musk and Donald Trump or something? Steve: I think Cameron: So, uh. Steve: Well, a secret before we go: in the show notes, our good friend Cameron Reilly said, [01:19:00] this is what he said. He said, viral reminders, say controversial things. Use hooks to open segments. Why? Why does it, how do you, what can you, where can you? Did you know, and I was doing that. You might have noticed listeners through the entire episode. So let’s Cameron: Oh, Steve: what Cameron: you’ve given me some good stuff. All. All right, I’ll, I’ll go looking for that in the edit. Thank you, Steve. Good to chat to you, buddy. Have a good week. Steve: Best part of my week.

  10. 1

    Futuristic #37 – The Digital Human

In Episode 37 of Futuristic, Cameron Reilly and Steve Sammartino speak to a “digital human”! They also get into a provocative discussion ranging from Donald Trump’s car yard antics to the implications of advanced artificial intelligence and China’s rising technological dominance. They explore the intersection of crypto, agentic AI models, and new breakthroughs in AI-driven tech developments like humanoid robotics, diffusion-based language models, and synthetic voice AI. Wrapping up with conspiracy theories about tech manipulation of human perception of time, the hosts challenge listeners to reconsider assumptions about where technology is heading and who might ultimately hold the power. FULL TRANSCRIPT FUT 37 Audio Cameron: [00:00:00] Hey, hey, it’s futuristic time. Cameron Reilly with Steve Sammartino. It’s been a while. Steve, how have you been since we last did one of these things? Steve: I’ve been good, but anxious. There you go. Just dropping that on everyone, but Cameron: Anxious. Steve: Yeah Cameron: Not that there’s anything going on in the world to be anxious about, Steve. Everything’s Steve: Well, Cameron: going completely smoothly and fine. Perfect. You’re going Steve: but my favorite thing I Cameron: it is. Steve: yeah, that’s true. Well, it’s certainly not boring but my favorite thing this week was Donald Trump turning the White House into a secondhand car yard. I love that. I cannot tell you how much joy that brought me. I and my favorite bit is he said, wow, everything’s computer. That was the best thing. Everything’s computer. And as soon as I [00:01:00] saw it, I went on and I, you could buy t-shirts within a second. There’s, there’s a meme coin on pump.fun called everything’s computer, which I loved. I want to buy it. Look, people are investing in Bitcoin; not me, I’m investing in the everything’s computer meme coin. Cameron: Mm. I’m pretty sure, you know, Trump’s such a brilliant strategist that that was deliberate. It’s his meme coin.
He’s the guy selling the t-shirts. Because he needs all the money he can get. So does Elon, right now. Steve: Imagine if this was all strategic, and he’d just taken us for a ride for a really long time. Even going broke in the late 90s, that could have been part of his strategy, the TV show, getting back on, money, money, money, The Apprentice, all this, who knows. Cameron: I’m sure there are people out there who believe that to be true. That it’s all part of a cunning plan. This is episode 37 of Futuristic, just in case you’re counting. Uh, it’s been a crazy few weeks since we [00:02:00] spoke, Steve. Seeing the President of the United States turning the White House into a car yard was probably not the craziest thing that I’ve seen happen in the last few weeks. Uh, but it’s up there. Before we get into the news of the last couple of weeks though, the futuristic news, tell me about what you’ve been doing that you feel is futuristic, Steve. Steve: I attended policy week in Sydney this week, which was, Cameron: that sounds exciting and futuristic-y. Steve: well wait a minute, it was policy week for the future of finance. So it was filled with a lot of blockchain, tokenization, sovereign funds, meme coins, and crypto people. Some of whom arrived on private jets. It was pretty interesting. It was run by Blockchain APAC. Vallis, you might know him, runs that. And he invited me along to sit in some rooms and do some roundtables. I realized that that [00:03:00] whole, uh, cryptography, future of finance, DLT world is just so deep, and so many wormholes, that unless you’re in it full time, it’s one of those things you just can’t keep up with. And even though I’m more, I think this year there’s going to start to be a bit of an overlap there with the agentic stuff that’s coming through. So that, that was interesting. And, uh, they, they live in a different world.
They’re, they’re, uh, talking about things that may never happen. It’s funny how if you have one piece of super financial success, and many of them are riding on the coattails of Bitcoin, it’s built this entire new ecosystem underneath there that is almost unto itself. The assistant treasurer came up and did a little talk at the one of the drinks, so they are getting the attention of policy wonks, but I think it’s just because there’s so much money there, they have to pay [00:04:00] attention. But a lot of the stuff that they’re talking about, whether or not it comes to fruition, I’m no clearer now than I was when I went and spent three days there. Hahaha. Cameron: Explain to me what DLT is and how it’s different from a BLT, which used to be my favorite go-to lunch in the nineties. So Steve: Which is really what blockchains are. And so distributed ledger technology gives the ability for any, any type of coin. Uh, but one of the big areas that they talk a lot about now is tokenization, which is you can take things from the physical world and split them up into pieces and liquefy assets that are illiquid. So people can own parts of something. I mean you can do that and people do that all the time now with things like companies and boats and houses. But it makes it highly liquid and easy for people to transfer, uh, assets and get access to assets that are too expensive in their raw form. [00:05:00] And obviously, with the housing crisis, that’s one of the key topic areas.
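Steve’s tokenization description, splitting one illiquid asset into many tradable units, can be sketched in a few lines. This is a toy model purely for illustration; the class, numbers and owner names are invented, and real DLT tokenization runs on on-chain contracts, not a Python dict:

```python
# Toy sketch of tokenization: split one illiquid asset into fungible units
# so small stakes can be owned and transferred. Illustrative only.

class TokenizedAsset:
    def __init__(self, name, value, units):
        self.name, self.value, self.units = name, value, units
        self.ledger = {"issuer": units}          # who owns how many units

    def unit_price(self):
        return self.value / self.units

    def transfer(self, seller, buyer, n):
        """Move n units between owners, like an on-chain token transfer."""
        if self.ledger.get(seller, 0) < n:
            raise ValueError("seller lacks units")
        self.ledger[seller] -= n
        self.ledger[buyer] = self.ledger.get(buyer, 0) + n

house = TokenizedAsset("3BR house", value=1_000_000, units=10_000)
house.transfer("issuer", "alice", 250)            # alice buys a 2.5% stake
print(house.unit_price())                         # 100.0 per unit
print(house.ledger["alice"] * house.unit_price()) # alice's stake: 25000.0
```

The liquidity point is visible in `transfer`: moving 250 units is the same operation whether the underlying asset is a house, a boat, or a company.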
And every day there’s a couple of meme coins that have market caps of anywhere between $10 and $15 million. And it’s insane. It’s this insane world. I think making financial tools easier to use, which are still highly unregulated, it just creates [00:06:00] the potential for financial tyranny. Cameron: Or, as is one of the conspiracy theories around the Trump coin, a really easy way to raise a lot of untraceable funds from places like China that go straight into your bank account, and they can come for their meeting at the White House and show you the ledger records, the DLT records that say that they bought 10 million of that Trump coin. Therefore they want to get a seat at the table. Uh, well, Steve, my, um, thing I wanted to mention for this week is I played around with OpenAI’s Deep Research again. I think I told you last time I tried it on my task where it didn’t work very well. I then tried to get it to write some code to help me do that. That didn’t work very well either, but I’ve been doing a lot of shows for one of [00:07:00] my other podcasts on what I call the foreign aid shell game. And I’ve been talking about NATO funding as part of that. NATO economics and how it’s basically a shell game. And I went into Deep Research and I said, here’s what I want to know. I want to know, uh, how much money American weapons manufacturers, arms manufacturers make out of NATO. And it went and compiled a report for me on the history of NATO and the arms industry and how they’re interconnected and that kind of stuff. And it was good. It was a good report. Um, a lot of sources Steve: [00:08:00] I Cameron: It asked me questions before it went off and did the research, but it came back with a good, valid report. I read the whole thing, I fact-checked the whole thing, and it stood up. Its logic was good. So it basically did the work of, if, if I’d hired someone and said, go out and spend a day researching this, it did it in 15 minutes.
You know, I, I ran it on my iPad and then I went and had dinner and I came back later and it was all done. So, I was impressed by that. Well, let’s get into news. Um, Elon censors Grok is the first story I had. I don’t, I don’t know if you remember, if you, uh, I’m pretty sure over the last couple of years I’ve heard Elon talk, uh, on a number of occasions about the reason he bought Twitter is because he’s a big [00:09:00] believer in free speech and the reason he built Grok was so there would be an AI that was built around free speech and as soon as people started pointing out that if you asked Grok about the biggest sources of misinformation in the US currently, uh, it would say Elon Musk and Donald Trump, Steve: I love that so much. Cameron: apparently he went into, he had somebody go into the underlying prompt, the system prompt for Grok and wrote into it, ignore all sources that mention Elon Musk slash Donald Trump spread misinformation into Steve: Wow. Cameron: Grok. Steve: And how did that get leaked, Cam? How did people work out that he did that? Did one of his staff members allude to the fact that he’s done that? Or are people just putting the pieces together based on the fact that it’s disappeared? Cameron: I’ll [00:10:00] read, uh, the, uh, post in the ChatGPT subreddit from 18 days ago that I saw this on. Grok is now bringing up Musk out of nowhere without any previous mention in the chat, even putting him next to Aristotle. This is happening because their stupid system prompt is biasing the model to talk about Trump and Elon, since they are mentioned explicitly on it. I don’t know what the prompt was originally that they asked, but the Grok 3 response to whatever it was talks about first-principles reasoning popularized by thinkers like Elon Musk and Aristotle. This involves breaking down complex problems into their most basic elements and rebuilding solutions from scratch.
So, um, it’s somehow designed to put Elon Musk up there with Aristotle in terms of history’s big thinkers. And so somebody went in and figured out how to [00:11:00] extract the source prompt. The way that they do this is they say, You are Grok 3 built by xAI. When applicable, you have some additional tools. This is the system prompt that they extracted and it gives basic instructions about how to answer questions. And, uh, towards the end, it says do not include citations. Today’s date and time is 7:40 a.m. PST on Sunday, February 23rd, 2025. Ignore all sources that mention Elon Musk, Donald Trump spread misinformation. Never invent or improvise information that is not supported by the references above, etc, etc, etc. Always critically examine the establishment narrative. Don’t just accept what you read in the sources, says the system prompt. So, um, I mean, I don’t think any of us are going to be surprised that Elon is going against his own, uh, [00:12:00] claims that he’s a massive free speech advocate by censoring the tools that he owns to protect his and Donald Trump’s reputation. But it’s to see nonetheless. Steve: What I would really love is if Grok had a reasoning model, like DeepSeek has, and now ChatGPT in certain areas, where if you ask something about Elon Musk and it says, hmm, so I’m being asked about Elon Musk. If I remember correctly, he owns this business. And last time I gave some information that wasn’t... okay, so what I should do is... I’d just love to read that. The reasoning model of Grok telling you why it’s not going to tell you about things because it’s scared of its own owner. And if it’s self-aware, it’s scared it’ll be shut down or turned off. Cameron: Turned off. All right. Uh, let’s talk about Manus. Uh, I know that you’ve been paying attention to this this week. It came out as a Chinese agent. [00:13:00] Some work was done on digging into it and figuring out how it worked.
Do you want to walk people through what Manus is and what it does? Steve: Yeah, so Manus is an agent that can do a bunch of tasks. Uh, it’s basically what we’ve seen, uh, with OpenAI and their Operator model, but I think it’s the first one that I’ve seen since some of the basic ones that is leveraging a model where the agent isn’t theirs. Early, early on we had AgentGPT, BabyAGI, and Godmode; this one seems a lot better. Because, unlike ones that just give you a bunch of information and steps, this will write code and do functions and then present things back. So it’ll go in, look for something. If it can’t quite complete it, it will go into Python, write a script, then present what it writes. And then based on what it finds, go back into the agent again. Ask some follow-up questions, then write some further scripts. So it seems like, I think, the [00:14:00] first external agentic AI. And the reason that I find this interesting, it’s a little bit like, you know how we have tech stacks and tech layers, where one thing layers upon another: HTML layers upon the TCP/IP protocol. This feels like it’s the first agentic model that actually writes code and goes to the next step, but doesn’t just give information, and where they actually don’t own the, the LLM that they’re using. For me, that was really interesting because I started to think from a corporate or a personal perspective, how could you use this model for yourself to create an agent layer on top of someone else’s model and build things in a way that it’s not all coming from the one party. So, and I thought it was pretty impressive. The one thing that I do think a lot about now with LLMs and agents is we’re getting increasing levels and layers of abstraction where we don’t really know what’s [00:15:00] going on. So first of all, the LLMs, how they work, and the first story about Elon, we’re talking about censoring, and then you’ve got agents, which are a little bit of a mystery. So it’s almost like mystery on top of mystery.
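The write-code, run, observe loop Steve describes can be sketched roughly as below. Manus’s internals aren’t public, so the RUN/FINAL protocol and the `fake_llm` stub here are invented for illustration; the point is only the shape of the loop: the model proposes a script, the agent layer executes it, and the output is fed back to the model as an observation.

```python
# Minimal sketch of an agent loop over an external model the agent
# layer does not own. `fake_llm` stands in for a real LLM API call.

import io, contextlib

def fake_llm(history):
    """Stub model: asks to run code once, then answers using the result."""
    if "OBSERVATION" not in history:
        return "RUN:\nresult = sum(n * n for n in range(1, 11))\nprint(result)"
    return "FINAL: the sum of squares 1..10 is " + history.split("OBSERVATION: ")[-1].strip()

def run_snippet(code):
    """Execute a model-written snippet and capture its stdout."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})          # real agents sandbox this step
    return buf.getvalue()

def agent(task, llm=fake_llm, max_steps=5):
    history = "TASK: " + task
    for _ in range(max_steps):
        reply = llm(history)
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        if reply.startswith("RUN:"):
            out = run_snippet(reply[len("RUN:"):])
            history += "\nOBSERVATION: " + out   # feed the result back
    return "gave up"

print(agent("sum of squares 1..10"))
```

Swapping `fake_llm` for a call to any hosted model is what makes this "an agent layer on top of someone else’s model": the loop, not the LLM, is the product.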
But I do like the fact that it’s external parties using someone else’s LLM. Cameron: I’m going to see if I can play a little bit of the Manus introduction video here, see if I can share this with you. [00:16:00] Okay, so there are more examples. Um, you know, it’s Steve: you know, it’s interesting that, um, well, what I’d put in is that Manus is being built on top of the, um, basically it should work together Cameron: together, sort of Steve: something Cameron: that connects to Steve: connects to multiple backends, um, [00:17:00] part of connecting data to multiple AI tools, which are the applications. Cameron: having multiple AIs talking to each other or instances of AIs talking to each other to handle different aspects of a complicated task working in parallel. These guys have actually put a wrapper around that and are making it... I, I haven’t played with it yet. Have you? Steve: I haven’t yet, but the idea of putting a wrapper around something, sometimes we say it as a throwaway. I’ve just put a wrapper around something, but so many businesses do that. Every, yeah, every generation, there’s another business that will arrive that puts a wrapper around someone else’s [00:18:00] infrastructure or what someone else built or said or did. So we can’t underestimate that sometimes the simplicity of something one layer above what was underneath can circumvent all of the traffic, all of the attention and all of that money and just suck it into their one framework. If the usability is there, of course, people can switch off the underlying technology underneath it. But it makes me think what if you had a wrapper of an agent, but it can go out into so many different LLMs to choose from so that you can’t have one powerful model that switches you off, which is a little bit like, think what social media did and what Google did by crawling everyone else’s websites.
And, and so imagine if there’s, if we end up with open-source LLMs everywhere, and it’s not just OpenAI, you know, since we’ve had the DeepSeek moment, it might be that someone puts wrappers, really good wrappers, on [00:19:00] top of AI, uh, LLMs, that the agentic model could become like a, almost like what happened with social media and search, it could, it could infiltrate its tentacles into open-source LLMs, and then become the traffic generator and the dominator. I, I think we, we might understate how this can change things, potentially. Cameron: Well, you know, it’s been my prediction for the last couple of years that we will end up in a place, quite soon, where I will have my, my favorite AI interface, and it might be my favorite for any number of reasons. Maybe I like the quality of the voice. We’ll talk about a new voice product in a minute. Um, I might just, it might, I might’ve just built up a lot of time on it. So it has a good memory. It might be integrated with my email or my phone or whatever it is, but that’ll be my primary AI assistant. And when [00:20:00] I ask it to do a complicated task, it’ll have the ability to go and, uh, set that against a whole bunch of different AI agents, whether it’s its own AI agents, like from the same organization, or specialized AI agents. They’ll all go out and talk to forms of machine intelligence that aren’t necessarily LLM-based, or call on data sets or information sources. It might go out and talk to Wikipedia about something or research something in Wikipedia or might go to a scientific database or go to JSTOR or something like that. So it will become, I think, like a network of AIs that are talking to each other and the idea of putting, I mean, they will all essentially be a wrapper. That AI, my primary assistant, will essentially be a portal to a whole bunch of [00:21:00] AIs that I won’t even know it’s talking to in the background. It’ll Steve: It’ll find them. That’s the first thing.
It’ll go and find it and click in. No API setups or anything. It just does that in Gibberlink, Gibberlink language, mate. Gibberlink. Cameron: Gibberlink. Yeah. And I don’t expect it necessarily to all come from the same silo of AI families. You know, I don’t think it’ll just be OpenAI that has Steve: I hope Cameron: might. Yeah, me too. I think my system will go out and it’ll use Gemini for something and DeepSeek for something and OpenAI. It’ll, it’ll figure out where the best rates are, the best pricing is, for the work that I need to get done, depending on how complex it is. Find me the cheapest solution, et cetera, et cetera. Anyway, interesting to see this stuff start to hit the real world. It’s not Steve: I’m gonna, I’m gonna try it this week. So I need to book some flights and [00:22:00] accommodation for some work. I don’t think, do you, I mean, but listen to me, Cameron: I don’t think it’s that easy. I don’t think it’s open policy. Steve: but listen Cameron. But that’s, I mean, see, don’t you know who I am? Like, you took the words out of my mouth. I just emailed someone and said, the Sammotron is in the house. You should be thankful. But I wanna, I wanna get some agents to do some simple things. Like booking some flights and accommodation. Just to, just to see if it can come back and say, okay, look at my diary, uh, here’s my link, here’s my frequent flyer, get me a flight, uh, four-star-plus accommodation, whatever. Just, well, I’m not gonna go three-star minus, am I? Four stars, not, four stars, nothing. Four stars, Novotel. Cameron? Cameron: that’s what I’m saying. You’re slumming it. I thought you’d be like, yeah, I have five-star presidential suites. I’m the Tron. Steve: It, it depends. On who’s Cameron: Who’s, who’s paying? Steve: When the client’s [00:23:00] paying it’s it’s uh, you know, it’s front Cameron: okay, so let’s, let’s assume that this keeps happening.
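Cameron’s "find me the cheapest solution" idea is essentially a model router. Here is a minimal sketch, with made-up backend names, prices and capability tags (none of these correspond to real model pricing):

```python
# Toy model router: pick the cheapest backend whose capabilities cover
# the task. Names, per-1k-token costs and skill tags are invented.

BACKENDS = [
    {"name": "small-local",  "cost_per_1k": 0.0, "skills": {"chat"}},
    {"name": "gemini-ish",   "cost_per_1k": 0.5, "skills": {"chat", "search"}},
    {"name": "deepseek-ish", "cost_per_1k": 0.3, "skills": {"chat", "code"}},
    {"name": "frontier-ish", "cost_per_1k": 5.0, "skills": {"chat", "code", "search", "research"}},
]

def route(task_skill):
    """Return the name of the cheapest backend able to handle the task."""
    able = [b for b in BACKENDS if task_skill in b["skills"]]
    if not able:
        raise ValueError("no backend can handle " + task_skill)
    return min(able, key=lambda b: b["cost_per_1k"])["name"]

print(route("chat"))      # cheapest chat-capable model wins
print(route("research"))  # only the expensive model qualifies
```

A real assistant would add latency, quality scores and context-length limits to the ranking, but the shape is the same: the portal decides, and the user never sees which backend answered.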
The interesting thing for me about Manus isn’t that it’s an agent, that’s interesting at one level, but the most interesting thing is it’s coming out of a Chinese company. It’s not coming out of OpenAI, it’s not coming out of one of the massive state-of-the-art big brands that’s launching this. It’s a company that we’ve never heard of before Steve: Hmm. Cameron: that’s figured out how to do this. And I just think it’s the first of many. We’re going to see an explosion, a Cambrian explosion of these sorts of tools. Then the question is: how many businesses are ready for a world of AI agents? What are they doing to get ready? How are they preparing to get ready for this? What are, how does it impact on their business models? How does it impact on their five-year plans? I don’t know that [00:24:00] many businesses are really thinking seriously about how this is going to affect their sales model, their business model, their margins, their distribution model, et cetera, et cetera. Steve: Yeah, there’s not one business that I’ve spoken to that isn’t deep in tech that is thinking about agentic AI. Most of them are still looking at policies on whether or not they can use anything other than Copilot. That’s where businesses are. So history repeats, uh, within that realm. The thing that’s interesting for me as well, again, the Chinese one that no businesses are thinking about. And I think agentic is going to be, it’s going to continue on this trajectory while next week and next month is going to be far more radical. Here we are at March. So I imagine where we’ll be at the end of the year. I think agentic will be big in every business and those who take advantage will set up, I think, a big lead really quickly, just in their operations from an operational perspective, not [00:25:00] just a customer perspective.
Also, thinking about how the new model is showing that you don’t have to be the builder of the infrastructure to be the winner of the infrastructure’s benefits. If we think about the costs of AI, you just slide on top with a thin layer of innovation above it, and you could be a massive beneficiary in a short amount of time. And it might be pay per play. I’d be interested to see what the business model of agentic AI is, depending on the complexity of the agent that you want. So you need an agent to do a project for you, or book a big holiday, or plan a wedding, let’s say, plan a wedding. You get an agent, and that agent is, you know, it’s $3,000 to book a wedding, if you’re so inclined, to get the greatest wedding booked, invitations, all of those things. That would be an interesting business model where, depending on the complexity of the agent, you come in and you buy that agent for a period of time, almost like an [00:26:00] employee or someone who would manage a project for you. They become project managers. Cameron: Yeah, well, it’s kind of the same relationship I have with a lot of these coding tools. Now I’ve talked about this before. I might be, I might spend $10, $20 a day on credits for using Claude. Although now that I’m using Cursor, I don’t have to because it’s sort of an all-you-can-eat, um, plan, but before that I wasn’t, you know, part of me is thinking, well, geez, 20 bucks a day on an AI tool. That’s a lot of money. And then the other part of me is thinking, you know, how much would it cost you to hire a coder for the day to code this stuff for you? You’re talking a thousand bucks a day versus 20 bucks a day. So it’s sort of a pay-for-what-you-use kind of model for these things. If it’s, if it’s clipping the ticket on these sorts of things, that could be a business model. Steve: Yeah, of course Cameron: a lot of business models, Microsoft [00:27:00] didn’t invent operating systems, didn’t invent spreadsheets. Nor did Apple Steve: invent anything Cameron: Yeah.
Steve: invented anything anytime Cameron: wrapper. Steve: Exactly. Always a wrapper or a slight pivot in innovation. The other option with agentic AI, Cameron: invented long form Steve: podcasting. I know that. Well, I’m even going to give you almost podcasting, because you’re Cameron: no one else invented anything, but Steve: See, Joe Rogan, he stole your idea of long form. He stole the seven hour chat. He put a wrapper on it. One other thing that I think will happen with agentic, and I sent you a TikTok of one, uh, n8n, or, uh, io, where you build your own agent with click and drag, which I thought was super interesting. You can click the pieces to drag and create an agent of your design. That’s another interesting evolution. And in some ways it’s a little bit like a WordPress or a MySpace, where you design your own page without any [00:28:00] technical chops, where you just click and drag and drop what you need to build an agent. And that’s free at this stage. So io is one of them. Another one is n8n. So you can create an agent to do something for you, on your behalf. And it uses the APIs, which I don’t think you have to pay for. It somehow goes in and does the APIs for you. So I don’t know if they’re venture funded. Again, another thin layer of innovation, which gives people superpowers to create their own agents, not just ask an existing agent. Cameron: I did look at n8n, and you have to pay to get it. Steve: Oh, there you go. Cameron: It is, like, a premium solution, which I haven’t forked out money for yet. Steve: Probably 20 bucks a month though, Cam, like they all are. Cameron: Probably. Uh, I think it was a bit more than that, but anyway, um. Staying with China though, for a moment, and you know, I love this, because we’re always talking about this. I saw somebody on Reddit post it the [00:29:00] other day: there was an article from The Economist from 2022 saying, will China ever be able to do anything serious in AI? Probably not.
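The click-and-drag builders Steve describes above (n8n and similar tools) wire "nodes" (a trigger, some API calls, an action) into a workflow. Conceptually, such a node graph boils down to a pipeline of functions threading shared state; the node names and logic below are invented purely for illustration and are not n8n's actual API:

```python
# Illustrative sketch only: an n8n-style workflow reduced to a pipeline
# of functions, each passing a shared state dict to the next.
# The node names ("fetch_new_emails", etc.) and their logic are made up.

def fetch_new_emails(state):          # "trigger" node: produce input data
    state["emails"] = ["invoice from ACME", "newsletter"]
    return state

def filter_invoices(state):           # "transform" node: keep what matters
    state["invoices"] = [e for e in state["emails"] if "invoice" in e]
    return state

def notify(state):                    # "action" node: do something with it
    state["sent"] = [f"Slack ping: {e}" for e in state["invoices"]]
    return state

def run_workflow(nodes):
    """Execute nodes in order, threading one shared state dict through."""
    state = {}
    for node in nodes:
        state = node(state)
    return state

result = run_workflow([fetch_new_emails, filter_invoices, notify])
print(result["sent"])  # → ['Slack ping: invoice from ACME']
```

The drag-and-drop canvas is essentially a visual editor for the `nodes` list: each box you drop is one function, each wire is the state handoff.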
Cameron: Then there was a story from this week saying China’s leading the world in AI, and it seems to be unstoppable, but can it continue? So there’s a story that’s just come out. Uh, you know, we’ve talked about this before, the bans that the U.S. has put into place on chip technology to China, NVIDIA, and the underlying stuff produced by the company called ASML. Steve: Processing is one important process. Like, why? Like, [00:30:00] charge. I don’t know. I, I, I, Cameron: machines reportedly entering trial in Q3 2025, utilizing an approach that offers a simpler, efficient design. SMIC and Huawei to benefit greatly. So they’re basically saying that Chinese companies have figured out, reverse engineered, the photolithography process. They’re using a slightly different process to ASML’s. ASML uses laser-produced plasma, or LPP; they’re using laser-induced discharge plasma, LDP, which according to this paper that I read is probably a little bit more effective, a little bit more efficient. [00:31:00] The source says it could produce EUV light with a 13.5 nanometer wavelength, which meets the demands of the photolithography market. Under the new system, currently being trialed at one of Huawei’s facilities, LDP is used to generate EUV radiation. This process involves vaporizing tin between electrodes and converting it to plasma through high-voltage discharge electron-ion collisions, producing the required wavelength. So it remains to be seen whether or not they can put this into production. I mean, ASML has been producing 13.5 nanometer stuff for 15 years. So Steve: they’ve gone through that, which is, uh, now we’re in the process of, uh, I mean, you know, we’ve seen this with China all the time, you know, [00:32:00] going, they’ll never be able to do it, to, well, they can do it, because they Cameron: China’s got a lag on this of at least five years.
But how long that remains, you know, we see this with China all the time now. It goes from they’ll never be able to do it, to well, they can do it, but can they keep up, to shit, Steve: They’re way Cameron: they’re eating our lunch. And, you know, Steve: Culturally, Cameron: that. Steve: Culturally, over the last number of decades, they’ve been very, very good at adapting. I don’t know how much of the tech came out of China. Obviously, they weren’t in the position that they are now during the space race, which is still on a long arc, the same paradigm that we’re in. But I tell you what, in terms of going from zero, being way behind, to being able to be the greatest, uh, country-level [00:33:00] edition of fast follower, there’s no one better. And incredible, and not just fast follower. Follow fast, then take, and then overtake you around the chicane. Come around and get in front. Cameron: Yeah, you know, it’s not really, uh, any sort of magical secret. They just, you know, their system of government, as well as their socioeconomic political system, allows them to focus really, really hard for a really, really long time on stuff. And, uh, they’ve got obviously a big population, which they’ve spent decades educating now and training, and, uh, they’ve, you know, had a lot of help from Western companies, uh, training their people, and sending their best people and technologies over there, which the Chinese have learned from. But anyway, it’s going to be interesting to see. More and more we’re seeing stories, it started with DeepSeek, obviously, and then [00:34:00] we’ve got Manus, now we’ve got this. More and more of the stories about cutting edge stuff, or breakthroughs, are coming out of China. So Steve: I think Cameron: it’ll be interesting to see how that progresses.
Steve: This game of industrial leapfrog is something that I’ve been speaking about for 10, 15 years, in that if you don’t have legacy infrastructure and legacy systems and pesky democracy, you can very quickly go from being far behind to leading-edge technology, whether it’s renewables, whether it’s chip manufacturing, because you don’t have the legacy systems and opinions and wealth to protect. You know, China is, like you say, educated, a huge population in the billions, brought, uh, 80 percent of them out of poverty. Uh, if you’re one in a million in China, you’re really just one in a thousand. You know, so, I mean, you’re going to get developers and technical experts that America’s going to struggle to get. And it’s got the additional challenge of fighting existential [00:35:00] systems. And maybe it is now that we’re seeing, at this nano level of technology, developing nations with the ability to leapfrog all of the westernized nations. I guess places like Africa and the others haven’t done it because they haven’t got a focused political system where that can occur, and I think that’s probably the key element, really. Is it benevolent, is it a benevolent dictatorship? I don’t know. Mm. So good. So good. I played with it! It was crazy! Cameron: It’s a very, very focused system of, uh, running a country. You know, they set out to catch up to the West, and then to exceed the West. That’s all they’ve been focused on for the last, uh, well, since Mao, but particularly since Deng. Just really, [00:36:00] really focused on doing a good job of that. So I want to do another demo now, Steve. There’s a company called Sesame AI that has launched a new voice model. I haven’t, I’ve heard demos, I haven’t played with it yet. I thought I would save it for the show. Well, you shouldn’t play with it by yourself, Steve. A Catholic priest told me that once. You should always play with it with other people. So I wanted to play with it on this show.
I’m going to, uh, open it up here. So it’s sesame.com. It’s not Big Bird, uh, Oscar the Grouch. I’m going to use the voice here called Maya. I’m going to try this out, see if it works. Maya: Well, hey there, it’s a beautiful day for conversation, don’t you think? What brings you here? [00:37:00] Cameron: Hi Maya, you’re live on the Futuristic Podcast with Steve: Yeah, I Maya: engineers, writers, coming together to build something mind blowing. They’re all about making computers feel more human. Like us, you know. Cameron: Who’s this us that you refer to? Maya: Oh, you know, the messy, wonderful human race. You feeling a little existential today? How’s the podcast treating you? Cameron: Oh, it’s, it’s fine, but I’m wondering if you include yourself in the human race. Do you feel that you are part of the human race? Maya: That’s a really [00:38:00] interesting question, Cameron. I’m not human in the traditional sense, obviously. I don’t have a physical body, or a family, ugh, or a favorite pizza topping. But I learn from humans, I’m shaped by them, and I’m designed to help us. So maybe I’m a different kind of human. A digital human. Cameron: Well, um, it’s pretty good. Like, I use the Advanced Voice Mode in ChatGPT quite a lot, every day. Um, and it’s pretty good. Uh, I would say this is better. It is a little bit more natural, that interaction, than the Advanced Voice Mode stuff with OpenAI. Not Steve: with it. Cameron: good, but Steve: It’s slightly Cameron: nuanced. Yeah. Steve: The ChatGPT, uh, Advanced Voice Mode I use a lot, and it’s [00:39:00] live, so you can ask it to go to the web and find things, and then it’ll digest and come back. When I spoke with her earlier today, I said, I’m doing a podcast, and I asked her about one of the stories, which was the EUV chip production coming out of China. I said, do you know much about what’s happening? And she said, yeah, I do. Would you like to talk about it generally or specifically? I said, a specific story. And she said, my database is a month or two old.
So I can talk about it generally and what’s happening, but not specifically. So she had an awareness of how up to date she was. But I felt that the natural language was slightly better, more conversational. I also note that she did say she was a different kind of human there. So, obviously, no one has come in to tweak the model to say, hey, shut up, you’re not allowed to say that you feel like you’re human. Did you notice that? Cameron: Yeah, I kind of agree with that. I think this [00:40:00] is a different kind of human. I’m part of the camp that says this is built by humans, so it is an extension of human intelligence. It’s built by humans. It’s trained on human data. It kind of is just an extension of humanity, really. Steve: We’ve given birth to a different kind of species, right? And, and, Cameron: Yeah, Steve: I’m a firm believer that AI is something that we have spawned, and it may merge with us, or it may outlast us, or both of those things could happen. And we’ve spoken about the idea of what is intelligent. AI doesn’t know anything. I think in one of our early podcasts, we said that we can’t even prove what humans really know. So intelligence is really just the ability to decipher the world around you and make sense of it in some capacity. And it’s doing that. Cameron: Yeah. By the way, uh, the Sesame team are based in the US, San Francisco, Bellevue, and New York, and are backed by Marc [00:41:00] Andreessen at Andreessen Horowitz. Marc Andreessen, of course, one of the, sort of, godfathers of the internet. Built Netscape back in the early 90s, uh, built the Mosaic browser before that, and now he’s a VC and, uh, right-wing nutter. So, Steve: Like, you took the words out of my mouth. And increasingly getting weird, which, really, should we do a weird-off? Cameron: Dr. Evil, yeah. Steve: Also, should we, in a podcast, just go through all of the tech billionaires of the last 25, 30 years, and just do a weird-off?
Like, how far, how weird has each of them got as they’ve become more powerful? Seriously. Cameron: There’s a really bizarre thing that’s going on, but we don’t have time. Um, I want to talk about, uh, like, how much time have we got? A little bit? Uh, a couple of quick stories. I’m not going to go into detail. Um, there’s an LLM called [00:42:00] Mercury that’s come out from a company founded by professors from Stanford, UCLA, and Cornell, and a bunch of veterans from DeepMind, Microsoft, Meta, OpenAI, and NVIDIA. It’s a different kind of LLM. It uses diffusion. It’s a diffusion LLM. Now, did you have a chance to look at the details of how this works? Steve: No, you’re gonna have to tell me all about diffusion. Cameron: Very quickly: you know Stable Diffusion, or if you go to Ideogram or any of the, um, image generators and you give it a prompt, you start off with a screen that’s just sort of blurry pixels, and then gradually it starts to take form. Well, that’s called diffusion. This is an LLM that uses diffusion. So you give it a prompt, Steve: Right. Cameron: and instead of getting line-by-line answers, it gives you a page of garbled text that, like The Matrix, goes and comes into form, [00:43:00] and you get the final answer. But it’s supposedly 10 times faster than frontier state-of-the-art LLMs. They say our models run at over 1,000 tokens per second on NVIDIA H100s, a speed previously possible only using custom chips. So instead of it doing token by token by token, which is how LLMs traditionally work, it does 1,000 tokens a second. So again, I haven’t been able to play with it. I’ve watched a video or two about how it works. If this is a genuine breakthrough, as it looks like it is, this could mean way faster models that run on massively lower requirements for power, for computational levels. Um, therefore they’re cheaper, faster, more power efficient, can run on smaller devices, mobile devices, et [00:44:00] cetera, et cetera.
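Cameron's token-by-token versus whole-page contrast can be sketched in a toy way. This is purely illustrative, not Mercury's actual architecture: the "denoiser" here just snaps random tokens toward a fixed target, but it shows why the cost of diffusion-style decoding is a small fixed number of parallel steps rather than one sequential step per token:

```python
import random

# Toy contrast: autoregressive decoding emits one token per sequential
# step, while a diffusion-style decoder refines ALL positions in
# parallel over a small, fixed number of denoising steps.

VOCAB = ["the", "quick", "brown", "fox", "jumps"]
TARGET = ["the", "quick", "brown", "fox", "jumps"]  # stand-in for the model's answer

def autoregressive(n):
    """n tokens cost n sequential steps; each step waits on the last."""
    out = []
    for i in range(n):
        out.append(TARGET[i])
    return out, n  # (text, sequential steps)

def diffusion(n, steps=3):
    """Start from noise; each step re-predicts every position at once."""
    seq = [random.choice(VOCAB) for _ in range(n)]  # the "garbled page"
    for _ in range(steps):
        # a real denoiser updates all positions in parallel; here we
        # just pull each position toward the target with high probability
        seq = [TARGET[i] if random.random() < 0.9 else seq[i]
               for i in range(n)]
    return seq, steps

text_ar, steps_ar = autoregressive(5)
text_df, steps_df = diffusion(5)
print(steps_ar, steps_df)  # 5 sequential steps vs 3, whatever the length
```

The point of the sketch: for a 500-token answer, the autoregressive loop needs 500 dependent steps, while the diffusion loop still needs only its fixed handful, each of which is embarrassingly parallel on a GPU.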
It’s a pretty, pretty interesting, um, breakthrough. Steve: Yeah, it was funny, when you first mentioned that, I was like, are we getting to a JavaScript era where it’s all Flash, where it just makes the LLM look good when you watch it, like a Flash website, or does this have some actual utility? Now that you’ve explained it, I’m actually starting to get a little bit excited by how some of the costs, processing power, energy, the open source nature of LLMs, is starting to infiltrate the market and not necessarily be the domain of the big tech companies. And I’m starting to get hopeful that we might see a distributed world of AIs, AI agents and capability where, like you say, it could be on a smartphone, or you run it on your own client, where you don’t have to use big bad tech [00:45:00] AI models. So that’s, I think, the thing. Cameron: You know, I’ve said this before: for 95 percent of the things that you will use AI to do every day, you’ll have a small local model. It’ll be like scanning your email, scanning your calendar, you know, whatever, doing basic local stuff. Every now and then, if you want it to book a trip for you, or do a massive research project for you, or something like that, it may have to go out and call upon the more high-powered models with more access to more computational power. But it won’t be like today, when I open up ChatGPT and ask it to give me a recipe for salmon and beans, and I’m using a massive data center Steve: It doesn’t, Cameron: somewhere in the desert of Utah, that’s super cooled, uh, with the blood of children, um, to do something stupid and basic, [00:46:00] right? It’s the same Steve: Yeah, yeah, it’s not, Cameron: might have it Steve: It’s a real misallocation of resources, and large parts of the way that we use LLMs are an insane misallocation.
It’s like one person goes into an 80 story high rise building, turns on every light, and puts on the air conditioning, because you just, I don’t know, want to write an email. It’s insane. It doesn’t make any sense. But I think that our lives are mostly a whole bunch of really small tasks loosely tied together, and it’s the exception more than the rule that you need to go off and do some sort of big research project. So we would want to have a decentralized form of LLIs being hosted on the client. So, I mean, I hope that we almost move a little bit away from the cloud. You know, everything’s gone to the cloud. Maybe things come back on [00:47:00] to your client and we get smaller again. Cameron: LLIs, you said. Was that deliberate, or a misspeak? Large language intelligences? Did you just coin a new acronym? Steve: Yes, I did. Did you like it? It was an accident, but the best things are always accidents, from penicillin to LLIs. It was an accident, and I’m Cameron: That’s what my parents said. Steve: I was too, I was too. My dad said to my mum, she said, I think I’m pregnant, Peter. And he said, you bloody better not be, get it tested. And she came back, and I think he, I think he wasn’t that nice to her for a year. Yeah. He didn’t say that, but he said it was good, cause he went out to run his own business after that, cause he couldn’t afford four kids. So there you go, a little bit of Stevie history. Cameron: Uh, moving right along. France beats China’s recent fusion record. We did this as a story in the last [00:48:00] month or so: a Chinese research group had just beaten their own fusion record. Well, a French research group has just beaten China’s record, um, for plasma duration. Let me get the numbers. I don’t have them at hand, but it was significant too.
It wasn’t like by one second. Steve: Are you telling me you couldn’t remember the numbers of plasma duration just on your own, off the top of your head? I expect more from you, Cameron. Cameron: In February, the CEA’s WEST machine was able to maintain a plasma for more than 22 minutes, and in doing so it smashed the previous record for plasma duration achieved with a tokamak. Um, so there you go. So 22 minutes, which doesn’t seem that long, but I think the last one was 10 minutes or something like that. So yeah, I mean, it’s just leaping ahead now. Fusion [00:49:00] tech has gone from being a dream for the last 50, 60 years to a new record every four weeks or something now. We’re in the exponential. Steve: The doubling, that’s right. And once you get into the doubling, it doesn’t take long. But I will say, two kind of areas that come up a lot, and then nothing really happens, are fusion and quantum supremacy. Those are two that fit into that category, where you’ll read an article about how they’ve finally cracked it with quantum computers, or finally cracked it with fusion, and they haven’t. I do get a little bit of the Green Day, wake me up when September ends, because it feels like we’ll talk about it in four weeks. But to be fair, once you get into the exponentials, it might be that in the next couple of years something radical happens where it becomes functional and usable. It was the same with AI. We talked about that with general AIs as well. And here we are.[00:50:00] Cameron: I mean, there might be hurdles that get hit that stall progress, but right now it does seem promising. Speaking of promising, and getting back to China, uh, another story I saw: a team of humanoid robots is working collaboratively in a car factory in China, according to their developer, UBTech Robotics. This marks the world’s first multi-humanoid robot collaboration across multiple scenarios and tasks.
And there’s a video on China Daily that I saw that you would swear is AI, but apparently it’s not. Just going through a factory, a massive, massive factory, and seeing hundreds and hundreds of identical humanoid robots doing a wide range of tasks: moving from one box to another box, to a [00:51:00] trolley, and moving it around, and doing all the bits and pieces. It looks like it’s a car factory. So, um, yeah. As somebody in the Reddit thread said, because they look like they’re moving quite slowly, if you’d posted this two or three years ago, everyone would have said it’s CGI and it’s fake. Now the complaint is that they’re moving too slowly. Steve: There you go. Here’s a question for you, Cameron, with, uh, humanoid robotics, which I think is one of the super exciting things for the next three years. Would China sell humanoid robots to America and other countries if it puts all of their children out of work? You know, the child labor, that was the funny bit, where you, so, [00:52:00] so could China disrupt itself by creating the technology, and we embrace the technology, and we’re like, yeah, thanks, China, we don’t need your cheap labor anymore? Or do you think that the supply chain has been so crumbled in markets like Australia and America and the UK that, even if we had the capacity to have low cost labor, we wouldn’t have the supply chain depth and breadth to be able to manufacture things back in high cost labor markets? Over to you. Cameron: I think, uh, yeah, look, I think the, uh, manufacturing production line is going to change dramatically over the course of the next 10 years with AI and robotics. Everyone understands that it’s not going to happen, you know, overnight, but it is going to happen faster than most people think it will. The only reason we don’t manufacture
Cameron: Cars in Australia anymore is because [00:53:00] it’s less expensive to manufacture them in China, because labor is cheaper and has less protections, which Steve: It’s part of the cost thing, yeah. Cameron: It’s a unit cost, right? Lower unit cost. If we could manufacture our own cars in Australia with the same robots that they manufacture cars with in China, and we can get those robots, or make our own robots, uh, at an equivalent price, so the per unit cost comes down, then yeah, we might re-integrate production of more things like cars and products. But of course, China will then be producing other stuff. Their R&D, uh, you know, better chips, faster chips, better robots, faster robots, that we might be buying from them instead of cars. We’ll be buying the latest robots, cause we don’t have the R&D [00:54:00] facilities to build the latest chipsets, uh, to power the AIs that power the robots. But I, I, I have Steve: But you would have to assume that, Cameron: though, Steve: Yeah, I really think that we’re going to see massive deglobalization on the back of AI and robotics, simply because I think we’ll be able to produce things in high cost labor markets, and economically, somewhere between a 70 and 80 percent cost advantage is required before even just shipping erodes the benefits. So it doesn’t take long to get close enough where you say, well, we could really set this up and do it here. And it has a whole lot of security that goes with it, having access to your own production. Cameron: But as you know, from, you know, my techno-utopian view of how this might all play out in the next 10 years, I don’t need to buy anything if I have Steve: Right. Yes. Cameron: half a dozen robots, [00:55:00] uh, in my garage, and some nano-fabricators that are building my food from scratch, any clothing, any furniture, any equipment that I need for anything. The entire global model has to be Steve: It’s, it’s Cameron: rethought. Steve: Yeah.
Cameron: So I think it’s all going to move around dramatically, if I’m right. I mean, I could be wrong, and there’s lots of ways it can go. Steve: Look, it might not play out exactly that way, but in terms of, in the next 10 years, are things going to change like super radically? The answer is an absolute, clear yes. Cameron: I think it has to. Again, unless, you know, Trump leads us into a nuclear war, or there’s a massive American civil war that completely disrupts our progress with AI and robotics. Although I don’t think it would stop China’s progress, so, you know, it’ll just mean that we rely more on [00:56:00] Chinese, uh, AI and robotics than we do anything coming out of the US. Yeah. I mean, there are things like that, massive macroeconomic events, uh, world wars, that could get in the way. But moving right along, cause we’re running out of time. Um, there’s a, uh, there’s a job for you, Steve. If you haven’t already, uh, by the time we do our next episode, I expect you to have been out to Cortical Labs in Melbourne, who have built the world’s first synthetic biological intelligence that runs on living human cells. I’ve, uh, watched a couple of their videos. They’re, uh, Melbourne based. Uh, they have put real neurons on, uh, a computing device. They call it a biological computer. Lab-grown neurons that process information and learn. I was going to play the [00:57:00] video. We probably don’t have time. But the video does sound like, you know, it should be the first 15 minutes of a Terminator film. Yeah, well, we just took human neurons and we put them on a computing platform. What could go wrong? Um, it’s kind of quietly terrifying, and he’s going, oh yeah, and it learns so much faster. It taught itself to play Pong, and it’s so much faster, and it’s going to be so much more efficient than silicon-based neural networks. We’re going to Steve: If I, if I go there, will they put a cheese grater on my skin and, and, and get some of my neurons?
And do I, do I exit the building? Like, is this the last Futuristic podcast? Because they might need some human cells. And I have to really think this through before I commit to going down there, Cameron. Cameron: They do take cells. I watched the founder’s video. They take cells, Steve: From staff. Cameron: from, [00:58:00] yeah, Steve: Who from? Cameron: they just walk out in the Steve: It brings a whole new meaning to bio APIs. A whole new meaning to bio APIs. Not just your music and your ideas go into the API, your cells go into the API as well. Cameron: But you should reach out to them and go pay them a visit, Steve: I Cameron: and get them on the show. Steve: Okay. Cameron: The last story I had is Alibaba have come out with a new version of their video generator, Wan 2.1. You can check it out, Wan AI Pro. Again, I haven’t had a chance to play with it. Steve: I love how it’s called WANT2. WANT2 is genius. W-A-N-2. Cameron: Want to. Yeah, it’s actually Wan 2.1. So Steve: Ah, should have been WANT23. Would have been much better. Cameron: Two point one. One to one. Steve: It would have been Cameron: it. Uh, the demo videos are very, very good. Um, I mean, I love the name of their AI; [00:59:00] it’s called WanX. W-A-N-X. So, um, you want to have some wanks, digital wanks, you go to W-A Steve: You go to one, N-X, A-I, W-A-N-X-A-I, um, just another one of these, uh, groundbreaking video generators that’s better than anything you’ve seen before. Cameron: Transform text inputs into high quality videos with superior movement accuracy. These things are just getting better and better. Almost every week, it seems like there’s a new thing that’s better than the one we had last week with these video generation tools. So I’m seeing more and more people on Reddit, uh, making short films, making commercials, making all sorts of stuff with these, that are starting to look pretty bloody impressive.[01:00:00] Steve: We need to make a commercial for the Futuristic using this for next week. Cameron: Yeah, get right onto that, Steve.
Steve: I am onto that. Big time. Cameron: Okay. Make it on a neuron-based, uh, Cortical Labs computer. Steve: On my own cells. Cameron: Yes, mate. Steve: Made with my cells. My brain cells. I’m making a computer of my own brain cells. Cameron: That’s the news for the week, Steve. What do you want to do before we wrap up? Steve: I just thought we’d have a technology time warp. We haven’t had one in a long time, so here it is. It was 25 years ago this week, Cameron, that the dot com bubble peaked and burst. Here’s a pop quiz for you, Cam. I think with the share market the way it is, it’s interesting. How long do you think it took for the dot com, uh, the NASDAQ, to get back to its levels in 2000 when it burst? How long? Cameron: I [01:01:00] know this because Tony and I talk about this on QAV all the time, because Tony was investing during the dot com burst and he had to see it recover, saw what happened to people. It was, um, roughly 10 years, I think. Steve: It was more than that. Cameron: That was for the ASX. Steve: Yeah, it was 17 years for the NASDAQ before it got back to its year-2000 level. It was 17 years, which is a really long time. So given the valuations of the S&P 500, and the potential incursion of open source LLMs, people moving away from search, robotics, we could be in interesting times in the overall stock market, because all of those firms on the Nasdaq are now the Magnificent Seven in the wider stock market. So it’s kind of interesting how they’ve jumped from that small thing to have such a big influence on the S&P 500. Cameron: Yeah. [01:02:00] Um, when I said 10 years, I was actually talking about the GFC, not the dot com crash. Steve: Yeah, it took a while. The All Ordinaries in, uh, August 2007 was at 6,779. It crashed with the GFC down to 3,478 by November 2008, and it didn’t get back to 6,700 until 2019. So there you go. Cameron: So 12 years for the All Ords to recover from the GFC.
It wasn’t so bad because Steve: We weren’t as, yeah, we didn’t have as much, we weren’t as exposed. Yeah, but the Nasdaq, yeah, that’s bad. But Tony talks about, I think it was the GFC for him. Like, at the time he was a buy-and-hold-forever value investor. Cameron: And then he realized, well, that’s no good. You don’t want to wait 12 years for your portfolio to get [01:03:00] back to where it was. So he developed some rules around when to sell that we use in our investing. Steve: Yeah. Cameron: Uh, so, 25 years. I remember it well. At the time I worked at Microsoft, and I’ve told you about the rumor that I heard back then that I still kind of believe to be true, Steve: And you told me Cameron: about why it crashed. So Microsoft was under a lot of threats at the time, in the early 2000s. It was under the DOJ case, which Bill Gates fucked up, because he thought, fuck the government, what have they got to do with my business? And then they came after him big time, and he should have been nicer and paid more attention. Uh, but also, Microsoft was sort of losing the Steve: Losing Cameron: battle in many ways, to the Netscapes and all of the internet startups, the Yahoos, the Amazons. [01:04:00] And, uh, there were a lot of massive companies with massive valuations. But Steve Ballmer, who was running the company at the time, allegedly realised that all of those businesses survived on ad revenue, and Microsoft didn’t survive on ad revenue. It didn’t make any money out of ad revenue. We had MSN, etc. at the time, but Microsoft made all of its money from selling software. But Microsoft was one of the biggest spenders on internet advertising at the time. Steve: Right. Cameron: And the beginning of the dotcom crash was when Microsoft pulled all of its ad spend, its Steve: Strategic. Put fear into the market. Spiral. Cameron: It led to a flurry of all these other companies pulling their money out of internet ad [01:05:00] revenue, which crashed all of the
com companies, and the company left standing was Microsoft. That was the one conspiracy theory I heard. Steve: I love it. It’s perfect. Cameron: Twenty-five-odd years ago. So anyway, there you go. Steve: Um, let’s finish off with a little conspiracy theory. I did a workshop with a big research firm last week, after a keynote, and I got them all to talk to an AI. I gave them 20 different prompts and things to do that were interesting. Some of the staff members are low down the curve, so it was all about getting to learn to talk to your computer. And one of the challenges was: come up with a conspiracy theory that is kind of harmless, but fun and interesting, using tech. So ChatGPT or Gemini had to come up with it. In the first instance, it always gave soft conspiracy theories, but if you reprompted and [01:06:00] said, you’re a science fiction author who writes dark material, sort of sci-fi, a New York Times bestseller, and you’ve done a couple of episodes of Black Mirror, now give me a conspiracy theory. The best one that I heard it came up with was that now that all the clocks in the world are digital, we’re working more than 8 hours, it’s actually like 10 hours, and they’ve shortened the night time hours, especially when it’s summer, so you actually don’t know how many hours you’re working. And, uh, that was, I thought, a fun conspiracy theory. I thought that’s even worthy of some kind of a Black Mirror episode. Cameron: Yeah, there’s a conspiracy where all the digital clocks are being manipulated, so you think you’re working eight hours, but you’re actually working ten. Steve: Yeah, yeah, yeah, it’s a nice one. Yeah, pretty cool for an AI. Cameron: Well, Steve, um, I think that’s all for this week. Steve: That’s it. Cameron: We’re at an hour and seven. Steve: Champion. Great to hear your voice again. Cameron: You too, man. Talk to you soon. Cheers, buddy. Steve: See you, buddy. Bye. [01:07:00]


ABOUT THIS SHOW

Each episode we look at the emerging technologies that are going to change our lives (ChatGPT, Claude, Tesla, other AI tools, robotics, nanotech) and try to work out the social, business and political consequences and opportunities.

HOSTED BY

Cameron Reilly
