Meta smart glasses privacy leak & Perplexity becomes Samsung AI layer - AI News (Mar 3, 2026)

EPISODE · Mar 3, 2026 · 7 MIN

from The Automated Daily - AI News Edition · host TrendTeller

Please support this podcast by checking out our sponsors:
- Consensus: AI for Research. Get a free month - https://get.consensus.app/automated_daily
- Discover the Future of AI Audio with ElevenLabs - https://try.elevenlabs.io/tad
- Prezi: Create AI presentations fast - https://try.prezi.com/automated_daily

Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily

Today's topics:
- Meta smart glasses privacy leak - Investigations say Meta Ray-Ban smart glasses data can reach human reviewers, including sensitive recordings. Keywords: GDPR, consent, Nairobi annotators, on-device claims, EU data transfer.
- Perplexity becomes Samsung AI layer - Perplexity claims deep OS-level integration on the Samsung Galaxy S26, powering both its assistant and Bixby with real-time search plus LLM reasoning. Keywords: Android ecosystem, default search, agentic browsing, core apps access.
- OpenAI mega-funding and compute - OpenAI announced massive new investment and expanded infrastructure partnerships to scale AI usage worldwide. Keywords: valuation, SoftBank, NVIDIA compute, Amazon enterprise partnership, scaling inference.
- AI labs pulled into defense - A clash over "lawful use" and surveillance red lines highlights how Pentagon budgets could turn AI labs into defense contractors. Keywords: procurement, classified networks, autonomous weapons, surveillance loopholes, contract enforceability.
- Claude outage disrupts developers - Anthropic's Claude services saw elevated error rates on March 3, 2026, affecting claude.ai and developer platforms before recovery. Keywords: reliability, incident response, API downtime, monitoring, platform risk.
- Google Gemini goal-based scheduling - Google accidentally exposed an unreleased Gemini mode hinting at adaptive, goal-oriented scheduled actions. Keywords: feature flag, persistent agent, LearnLM, education workflows, long-term goals.
- Agents: protocols, CLIs, hybrids - Debate is heating up on how agents should use tools: new protocols like MCP versus simple CLIs, plus a trend toward deterministic code scaffolding. Keywords: MCP adoption, CLI composability, guardrails, blueprint workflows, reliability.
- Verification crisis in expert data - A data-infrastructure veteran argues most "expert" training data can't be graded objectively, limiting RL with verifiable rewards. Keywords: subjective judgment, reward signals, rubric distortion, evaluation, frontier training.
- AI hallucinations hit courts, media - AI-generated fabrications are showing up in high-stakes settings, from Indian court citations to a newsroom retraction over fake quotes. Keywords: hallucinations, accountability, verification, editorial standards, judicial integrity.
- AI drug discovery meets trial reality - An essay pushes back on claims that AI-designed drugs will make clinical trials radically faster, because logistics and endpoints still dominate timelines. Keywords: recruitment, surrogate endpoints, Phase III, regulation, trial speed.
- Stablecoins for agent payments - A payments essay predicts AI agents will favor programmable, low-friction rails, potentially stablecoins, over card-style transactions. Keywords: B2B invoices, micropayments, reconciliation, cross-border, programmability.
Links:
- https://framer.link/TLDRAI
- https://www.perplexity.ai/hub/blog/perplexity-apis-deliver-powerful-ai-to-the-world%E2%80%99s-largest-android-device-maker
- https://openai.com/index/scaling-ai-for-everyone/
- https://www.astralcodexten.com/p/all-lawful-use-much-more-than-you
- https://ejholmes.github.io/2026/02/28/mcp-is-dead-long-live-the-cli.html
- https://status.claude.com/incidents/yf48hzysrvl5
- https://www.svd.se/a/K8nrV4/metas-ai-smart-glasses-and-data-privacy-concerns-workers-say-we-see-everything
- https://press.asimov.com/articles/ai-clinical-trials
- https://go.clerk.com/fEmCMF1
- https://www.testingcatalog.com/google-tests-new-learning-hub-powered-by-goal-based-actions/
- https://www.algolia.com/resources/asset/what-to-know-when-implementing-rag-with-your-search-solution
- https://philippdubach.com/posts/when-ai-labs-become-defense-contractors/
- https://x.com/phoebeyao/status/2027117627278254176
- https://gist.github.com/sshh12/e352c053627ccbe1636781f73d6d715b
- https://www.bbc.com/news/articles/c178zzw780xo
- https://a16zcrypto.substack.com/p/agents-arent-tourists
- https://x.com/ctatedev/status/2028128730132922760
- https://cursor.com/blog/third-era
- https://www.inc.com/fast-company-2/andrew-ng-agi-artificial-general-intelligence-ai-bubble-risk-training-layer/91310210
- https://getbruin.com/blog/go-is-the-best-language-for-agents/
- https://futurism.com/artificial-intelligence/ars-technica-fires-reporter-ai-quotes
- https://tomtunguz.com/hybrid-state-machine-agents/
- https://openai.com/index/our-agreement-with-the-department-of-war/

Episode Transcript

Meta smart glasses privacy leak

Let's start with privacy, because it's getting harder to see where "personal device" ends and "data pipeline" begins. Swedish outlets Svenska Dagbladet and Göteborgs-Posten report that Meta's AI-enabled Ray-Ban smart glasses can generate extremely sensitive recordings that may be viewed by human reviewers, reportedly including outsourced annotators in Nairobi working through a subcontractor. Workers described seeing everything from accidental nudity to bank cards in view. Meta's policies say AI interactions may be reviewed, but the investigation questions whether users truly understand when capture happens, how long data is kept, and who ultimately gets access, especially under GDPR and cross-border data transfer rules.

Perplexity becomes Samsung AI layer

On the flip side of consumer AI, Perplexity says it's now deeply embedded in Samsung's Galaxy S26 at the operating-system level, powering search and reasoning for both the Perplexity assistant and Samsung's Bixby. The big deal here isn't just "another assistant app." It's the claim of OS-level access, including reading from and writing to core apps like Notes and Calendar, plus plans to show up inside Samsung Browser with more agent-like browsing. If that holds, it's a meaningful shift in the Android AI stack: a non-Google player potentially becoming a default layer for how millions of people search and get tasks done.

OpenAI mega-funding and compute

Now to the heavyweight infrastructure story: OpenAI says demand is surging, and it's responding with a huge new financing round paired with deeper ties to major compute and cloud partners. The headline is scale: more GPUs, more distribution, more capital, and faster capacity for both training and inference. OpenAI is also positioning these partnerships as a way to ship systems that are not only more capable, but also more stable and safer under real-world load. Whether you buy that framing or not, it's another signal that frontier AI is settling into an "industrial era," where deployment logistics matter as much as model breakthroughs.
AI labs pulled into defense

That industrial era gets even more complicated when the customer is the military. A widely discussed essay and a separate longform critique both point to the same tension: AI labs want to draw hard lines on surveillance and autonomous weapons, but "lawful use" can be a slippery phrase. One account describes Anthropic being labeled a supply chain risk after refusing broad usage terms, followed quickly by an OpenAI agreement-in-principle to fill the gap. Critics argue that legal and policy loopholes can still allow mass-scale analysis via commercial data purchases, and that autonomy limits can shift if department policies change. The larger takeaway is bigger than any one contract: with Pentagon AI budgets rising, procurement incentives could pull leading labs toward becoming defense contractors in practice, locked in through classified network access, long contracts, and the difficulty of switching once a system is embedded.

Claude outage disrupts developers

Staying with reliability, Anthropic also had a very concrete problem today: an incident causing elevated error rates across claude.ai, its developer platform, and Claude Code. The company said it deployed a fix and recovered within hours, but it's a reminder that AI isn't just "a model," it's an always-on service. For developers building workflows on top of these APIs, uptime becomes product functionality, and outages quickly become business risk.
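In practice, "uptime as product functionality" usually shows up as defensive client code. Here's a minimal sketch of one common pattern, retry with exponential backoff and jitter around any flaky API call; nothing here is specific to Anthropic's SDK, and the `call_model` usage and `TransientAPIError` type are hypothetical stand-ins:

```python
import random
import time

class TransientAPIError(Exception):
    """Stand-in for a provider's retryable error (e.g. a 429 or 5xx response)."""

def with_retries(fn, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Call fn(), retrying transient failures with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except TransientAPIError:
            if attempt == max_attempts:
                raise  # surface the failure after the final attempt
            # Exponential backoff capped at max_delay, with full jitter so
            # many clients don't retry in lockstep right after an outage.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))

# Hypothetical usage: wrap whatever SDK call your workflow depends on.
# result = with_retries(lambda: call_model(prompt="summarize today's incidents"))
```

Backoff alone doesn't remove platform risk, which is why some teams pair it with a fallback provider or a degraded non-AI code path.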
Google Gemini goal-based scheduling

On the "agents are becoming persistent" front, Google briefly exposed an unreleased Gemini mode labeled something like goal-based scheduled actions. Unlike today's scheduled prompts that just rerun a request on a timer, this looks aimed at adapting over time toward a user-defined objective, possibly tied to education, study plans, and ongoing check-ins. It vanished quickly, which suggests a feature-flag slip rather than a launch, but it's another breadcrumb that the major platforms want assistants to feel less like chat and more like an ongoing manager of tasks and goals.

Agents: protocols, CLIs, hybrids

Meanwhile, the developer world is arguing about what the best plumbing for agent tool use should be. One critique says Anthropic's Model Context Protocol (MCP) may be fading, partly because it adds complexity without delivering clear wins over tools that already exist. The author's alternative is blunt: focus on solid APIs and especially good CLIs. The reasoning is practical: LLMs "speak terminal" surprisingly well, humans can debug by rerunning commands, and CLI composability is hard to beat.

In that same spirit of pragmatism, another builder described an arc many teams are quietly following: start with an LLM doing everything, then gradually replace large chunks with deterministic code. In their case, most workflow steps became non-AI nodes, while the model is reserved for the ambiguous parts like synthesis and extraction. The point isn't that agents are failing; it's that reliability often comes from scaffolding, constraints, and clear handoffs between code and the model, as the sketch below illustrates.
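To make that arc concrete, here's a minimal sketch of the hybrid shape described above, under the assumption that most steps are deterministic functions and the model is only invoked for the ambiguous synthesis step. The step names and the `llm_synthesize` stub are illustrative, not anyone's actual pipeline:

```python
def fetch_records(source: str) -> list[dict]:
    # Deterministic: I/O with known failure modes, no model involved.
    return [{"id": 1, "text": "raw item from " + source}]

def validate(records: list[dict]) -> list[dict]:
    # Deterministic: schema checks that either pass or fail loudly.
    return [r for r in records if r.get("text")]

def llm_synthesize(records: list[dict]) -> str:
    # The one ambiguous step reserved for the model (stubbed here).
    # In a real system this would be an API call with its own guardrails.
    return f"summary of {len(records)} record(s)"

def run_pipeline(source: str) -> str:
    # The scaffold is plain code: ordering, validation, and handoffs are
    # explicit, so failures are debuggable without re-rolling a prompt.
    records = validate(fetch_records(source))
    if not records:
        return "nothing to summarize"  # deterministic early exit
    return llm_synthesize(records)

print(run_pipeline("inbox"))
```

The design choice being made here is about blast radius: every step that can be ordinary code is ordinary code, so the model's nondeterminism is confined to one well-bounded node.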
Verification crisis in expert data

A deeper bottleneck may be hiding upstream in training data. Phoebe Yao argues that the "scale up experts" approach is running into a verification wall: most professional judgment can't be scored objectively. That matters because many training approaches need a clean reward signal, and in real-world domains the signal is fuzzy, subjective, or missing. The risk she flags is that we end up training models to follow rigid rubrics rather than learn true expert judgment, because only the rubric is gradeable.
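A toy contrast makes the verification wall easier to see: a math answer can be graded programmatically, while an "expert judgment" task can only be scored against a rubric of surface features, and optimizing that proxy is exactly the rubric distortion being warned about. Everything below is illustrative, not a real training setup:

```python
def verifiable_reward(model_answer: str, ground_truth: str) -> float:
    # Clean signal: exact, objective, cheap to compute at scale.
    return 1.0 if model_answer.strip() == ground_truth.strip() else 0.0

def rubric_reward(response: str) -> float:
    # Fuzzy proxy: checkable surface features stand in for judgment.
    # A model optimized against this learns the rubric, not the expertise.
    score = 0.0
    if len(response.split()) > 50:
        score += 0.5   # "thorough" becomes "long", which is not the same thing
    if "risk" in response.lower():
        score += 0.5   # mentions a keyword a grader looks for
    return score

print(verifiable_reward("42", "42"))          # 1.0, unambiguous
print(rubric_reward("Risk risk risk " * 20))  # 1.0, gaming the proxy
```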
AI hallucinations hit courts, media

Two separate stories show what happens when verification fails in the real world. In India, the Supreme Court warned of serious consequences after a judge relied on AI-generated, fictitious case citations in a property dispute, calling it an institutional integrity issue, not a harmless mistake. And in the media, Ars Technica terminated a senior AI reporter after a story was retracted for including AI-fabricated quotes. Different settings, same pattern: if AI is allowed anywhere near "authoritative text," the checks need to be explicit, enforced, and routine, not vibes-based.

AI drug discovery meets trial reality

Finally, a reality check from biotech: an Asimov Press essay argues that better AI-designed drugs won't automatically compress clinical trials into something like a single year. AI may raise success rates by producing better candidates, but trial speed is still constrained by patient recruitment, logistics, regulation, and the time it takes to observe meaningful outcomes, especially in chronic disease. If we want faster medicine, the essay argues, it's not just better models; it's better trial design, accepted surrogate endpoints, and less friction in early-stage regulatory steps.

Stablecoins for agent payments

One more forward-looking piece to close the loop on "agentic everything": a payments essay predicts that as AI agents transact on users' behalf, payments will look less like one-off card checkouts and more like ongoing, negotiated B2B relationships, with credit, net terms, and programmable flows. The author's bet is that stablecoins may fit early agent commerce better than traditional card rails, especially for cross-border or very small, high-volume transactions. The subtext: whoever sets the default payment plumbing for agents could quietly shape a lot of future commerce.
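The "programmable flows" idea is easiest to picture as policy checks that run before any money moves. Here's a minimal, purely illustrative sketch with made-up fields and limits and no real payment rail attached:

```python
from dataclasses import dataclass

@dataclass
class PaymentIntent:
    payee: str
    amount: float  # illustrative; real rails would use integer minor units
    memo: str

# Hypothetical policy an agent's wallet enforces in code rather than by human review.
ALLOWED_PAYEES = {"api-vendor.example", "data-broker.example"}
PER_TX_LIMIT = 5.00
DAILY_BUDGET = 50.00

def approve(intent: PaymentIntent, spent_today: float) -> bool:
    """Programmable guardrails: every rule is explicit and auditable."""
    if intent.payee not in ALLOWED_PAYEES:
        return False  # counterparty allowlist
    if intent.amount > PER_TX_LIMIT:
        return False  # cap any single transaction
    if spent_today + intent.amount > DAILY_BUDGET:
        return False  # enforce a rolling budget
    return True

print(approve(PaymentIntent("api-vendor.example", 0.02, "per-call fee"), spent_today=1.50))
```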

Visit our website at https://theautomateddaily.com/
Send feedback to [email protected]