EPISODE · Feb 21, 2026 · 11 MIN
Android malware uses Gemini live & India AI summit and investments - Tech News (Feb 21, 2026)
from The Automated Daily - Tech News Edition · host TrendTeller
Please support this podcast by checking out our sponsors:
- Fillout: build any form, without code. 50% extra signup credits - https://try.fillout.com/the_automated_daily
- Prezi: create AI presentations fast - https://try.prezi.com/automated_daily
- KrispCall: agentic cloud telephony - https://try.krispcall.com/tad

Support The Automated Daily directly: buy me a coffee - https://buymeacoffee.com/theautomateddaily

Today’s topics:

- Android malware uses Gemini live - ESET reports “PromptSpy,” the first known Android malware to use generative AI at runtime, querying Google Gemini with on-screen XML to automate persistence via Accessibility. Keywords: Android spyware, Gemini, PromptSpy, Accessibility Service, VNC remote access.
- India AI summit and investments - At the India AI Impact Summit in New Delhi, leaders pitched India as an AI bridge to the Global South while tech CEOs discussed multilingual and “inclusive” AI plus large data-center ambitions. Keywords: Modi, AI infrastructure, Global South, data centers, multilingual AI.
- US AI diplomacy volunteer corps - The U.S. is weighing a “Technology Prosperity Corps,” a Peace Corps-style AI diplomacy push that would send up to 5,000 tech volunteers abroad to promote U.S.-aligned tools and standards. Keywords: AI diplomacy, soft power, China competition, OSTP, Technology Prosperity Corps.
- Meta and TikTok lawsuits surge - Meta, TikTok, and other platforms face a widening wave of child-safety and mental-health lawsuits, with juries now hearing bellwether cases that could test Section 230 and product-design liability. Keywords: Meta, addictive design, youth harms, Section 230, bellwether trials.
- AI music and video copyright - Google’s Lyria 3 expands AI music generation while ByteDance’s Seedance 2.0 raises alarms with cinema-like video-plus-audio, fueling fresh fights over voice likeness and copyright. Keywords: Lyria 3, Seedance 2.0, AI-generated music, deepfakes, licensing.
- China’s humanoid robots and propaganda - China’s Lunar New Year gala showcased humanoid robots doing parkour and flips, highlighting rapid robotics progress, while experts caution staged demos can overstate real-world capability. Keywords: humanoid robots, Unitree, military use, supply chain, state propaganda.
- Artemis II launch and Luna 9 hunt - NASA readies Artemis II for early March, while researchers using crowdsourcing and machine learning say they are close to finding the Soviet Luna 9 landing site, possibly confirmable by Chandrayaan-2 imagery. Keywords: Artemis II, SLS, Orion heat shield, Luna 9, LRO.

Episode Transcript

Android malware uses Gemini live

We’ll start with security, because this one feels like a line being crossed. ESET says it has found what may be the first Android malware that uses generative AI at runtime to change how it behaves on different devices. The family is called PromptSpy, and the clever part isn’t just phishing or better scam text; it’s automation.

The malware reportedly sends Google’s Gemini model a prompt plus an XML-style dump of the current screen, including interface labels and coordinates. Gemini returns step-by-step tapping instructions in a structured format, and the malware executes them through Android’s Accessibility Service. The goal is persistence: it tries to “pin” or “lock” itself in the Recent Apps list so the system is less likely to kill it, and so “Clear all” doesn’t easily sweep it away. Because that pinning process differs across phone makers, the malware effectively uses Gemini as a universal UI translator.

If it gains Accessibility permissions, PromptSpy also carries classic spyware capabilities: screen recording, screenshots, interception of lock-screen credentials, and even a VNC module for full remote control. ESET notes it can also make removal harder by overlaying invisible touch-blocking rectangles on buttons such as Uninstall.
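Since the whole scheme hinges on Accessibility access, one practical defense is auditing which packages currently hold it. A minimal sketch, assuming `adb` is installed and a device is connected over USB; Android stores the grant list in the `enabled_accessibility_services` secure setting as a colon-separated list of `package/service` component names:

```python
import subprocess


def parse_services(raw: str) -> list[str]:
    """Parse the colon-separated component list that Android stores in the
    `enabled_accessibility_services` secure setting; return package names."""
    raw = raw.strip()
    if not raw or raw == "null":
        return []
    return [entry.split("/")[0] for entry in raw.split(":") if entry]


def audit_device() -> list[str]:
    """Ask a USB-connected device (via adb) which packages hold
    Accessibility access, so unexpected ones can be reviewed."""
    out = subprocess.run(
        ["adb", "shell", "settings", "get", "secure",
         "enabled_accessibility_services"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_services(out)


# Example of the stored format (no device needed to exercise the parser):
# "com.example.app/com.example.app.SpyService:com.ok.app/com.ok.app.Helper"
```

`audit_device()` needs `adb` on the PATH; `parse_services` alone can be exercised offline against a sample string. Package names you don’t recognize are the ones worth investigating.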
The practical takeaway is simple: treat Accessibility permission requests as a high-risk moment, especially if an app has no obvious reason to need them.

India AI summit and investments

Next up: the AI geopolitics swirling around the India AI Impact Summit in New Delhi; part ambition, part diplomacy, and, frankly, a bit of chaos. Prime Minister Narendra Modi used the stage to pitch India as a cost-effective hub for building AI systems that can scale, pointing to the country’s track record with digital public infrastructure such as identity rails and online payments. He also framed India as a bridge between advanced economies and the Global South, pushing the idea that AI should be “democratized” rather than concentrated among a handful of nations or billionaires.

The summit pulled in heavyweight voices, from France’s Emmanuel Macron to Google CEO Sundar Pichai and U.N. Secretary-General António Guterres. Guterres added a concrete proposal: a three-billion-dollar fund aimed at helping poorer countries build baseline AI capacity (skills, data access, and affordable compute), arguing that the playing field is tilting too sharply.

But the event itself reportedly had real operational stumbles: long lines, delays, theft complaints that organizers say were resolved, and an embarrassing incident in which a private university was expelled from the expo after presenting a commercially available Chinese robotic dog as its own innovation. And there was a notable late change: Bill Gates withdrew from a scheduled keynote, with the Gates Foundation saying the move was meant to keep focus on summit priorities amid renewed questions about his past ties to Jeffrey Epstein.

On the business side, the message from tech leaders was that India’s user base, approaching a billion internet users, remains a huge strategic prize, and discussions included “inclusive and multilingual” AI principles.
At the same time, even boosters acknowledge constraints: limited access to top-tier chips, the sheer cost of data centers, and the complexity of training models across hundreds of languages.

US AI diplomacy volunteer corps

Staying with policy and power projection: the U.S. is reportedly preparing a Peace Corps-style reboot for the AI era. The proposed initiative, dubbed the Technology Prosperity Corps, would deploy up to 5,000 technology-focused volunteers overseas, explicitly framed as a form of AI diplomacy and a counterweight to China’s influence on global tech adoption. The idea is straightforward: put people on the ground helping partners adopt tools, workflows, and standards that align with U.S. interests, similar in spirit to Cold War soft-power programs but aimed at today’s competition over AI platforms and governance. This is still a plan, not a finished program, but it signals where Washington thinks the contest is heading: not just who has the best models, but whose technology ecosystems become the default elsewhere.

Meta and TikTok lawsuits surge

Now to the courtroom, where social media companies are facing a wave of lawsuits that is starting to look less like a skirmish and more like a multi-year campaign. Across the U.S., plaintiffs, including families, school districts, and state governments, are accusing platforms like Meta and TikTok of deliberately using addictive design features that harm kids’ mental health, while also failing to protect minors from predators and dangerous content.

What’s changed is that some of these cases are finally being heard by juries, not just argued in motions. In Los Angeles, a bellwether trial is underway with Meta and YouTube still in the case, centered on a 20-year-old plaintiff identified as “KGM.” Meta CEO Mark Zuckerberg testified, emphasizing age restrictions and efforts to detect users who misstate their age, while rejecting the idea that addiction is the right frame for Meta’s products.
A separate suit in New Mexico, brought by the state’s attorney general, leans on investigators posing as children and documenting sexual solicitations and platform responses. That case also spotlights an ongoing friction point: encryption. Critics argue end-to-end encryption can limit safety monitoring; Meta counters that encryption is broadly supported for privacy and security.

The big legal question underneath all of this is where liability lands. Defendants lean on the First Amendment and Section 230. Plaintiffs are increasingly focusing on product design (algorithms, engagement loops, and the mechanics of recommendation), trying to argue it’s not about any single user post but about engineered behavior at scale. Either way, this is heading toward long timelines, big legal bills, and potentially operational changes if plaintiffs start winning consistently.

Related to that, social psychologist Jonathan Haidt, author of The Anxious Generation, has been arguing that Zuckerberg’s first jury trial appearance could become a meaningful accountability moment for Big Tech. Haidt’s position is that the harms aren’t limited to a small slice of vulnerable kids. He claims the broader damage shows up in deteriorating attention, weaker learning outcomes, and reduced social skills across much of the developed world’s post-1995 cohort. Haidt also points to internal research, particularly a set of Meta studies made public by advocates, as evidence the company understood engagement dynamics and their risks. Even if courts ultimately don’t accept every part of that argument, the strategy shift matters: it’s an attempt to reframe these cases from “bad content slipped through” to “the product was built this way on purpose.”

AI music and video copyright

Let’s move to generative media, where the technology is accelerating faster than the rulebook. Google has introduced Lyria 3, a new AI music generator built with Gemini teams and Google DeepMind.
Right now, the emphasis is creator-friendly: Lyria 3 powers features like “Dream Track” in YouTube Shorts, letting users generate royalty-free, customized soundtracks. Outputs are still short, around 30 seconds, yet the broader direction is clear: brands are increasingly interested in adaptive audio that can change tone and pacing on the fly as ads are served across AI-driven platforms.

Hanging over all of this is voice and rights management. The release comes amid ongoing concerns about voice replication, including a lawsuit from NPR veteran David Greene, who alleges Google’s NotebookLM used a voice based on his cadence and speech patterns. Google says the voice is unrelated and that Audio Overviews use a paid professional actor. Regardless of who’s right in that specific dispute, it underscores the wider issue: as of February 2026, copyright rules for AI-generated music, and the boundaries around voice likeness, are still unsettled, with licensing markets trying to fill gaps the law hasn’t fully addressed.

If AI music is one pressure point, AI video is the other, and it’s getting hotter. ByteDance, TikTok’s parent company, is drawing major attention in Hollywood for Seedance 2.0, a model that can generate cinema-quality video with sound effects and even dialogue from simple prompts. Observers say what’s notable isn’t just visual fidelity but the apparent unification of text, visuals, and audio in one system, producing clips that look like they came from a real production pipeline.

That’s already triggering backlash. Studios including Disney and Paramount have reportedly accused ByteDance of copyright infringement and sent cease-and-desist demands, and Japan is investigating after AI videos featuring popular anime characters spread online.
Beyond the legal fight, there’s a practical one: if systems like this are trained on creative work without clear compensation pathways, the industry will push for licensing, provenance, labeling, and mechanisms to contest misuse. Meanwhile, smaller production teams see upside: cheap tools that could make ambitious genres accessible on micro-budgets. It’s the same story from two vantage points: disruption for incumbents, leverage for newcomers.

China’s humanoid robots and propaganda

On robotics: China’s Lunar New Year’s Eve TV gala went viral after humanoid robots performed coordinated martial-arts routines and surprisingly athletic parkour, including vaults, flips, and wall-assisted backflips. Compared with last year’s wobblier performance, the machines looked noticeably steadier, and it’s a high-profile signal of how fast China is moving in AI-enabled robotics.

Still, defense experts caution that staged demonstrations can be misleading: rehearsed routines in controlled settings don’t necessarily translate to robust performance in messy, unpredictable environments. The more serious long-term question is how humanoid, or animal-shaped, robots might be used in security and military contexts, largely because they can navigate human-built spaces like stairs, doors, and vehicles. The guidance from experts in Europe: don’t panic, but don’t ignore it either. Track progress closely, learn from what works, and avoid reinventing everything from scratch.

Artemis II launch and Luna 9 hunt

Finally, space. NASA is preparing to launch Artemis II, the first crewed mission around the Moon since 1972, with an early-March target that could see liftoff around March 6 U.S. time, depending on final readiness. The four-person crew of Reid Wiseman, Victor Glover, Christina Koch, and Jeremy Hansen will fly a roughly 10-day mission testing Orion’s manual handling and life-support performance in deep space, then return for a high-energy re-entry in which the heat shield will again be closely watched after issues seen on Artemis I.
At the same time, researchers say they may be closing in on a different kind of lunar milestone: the long-lost landing site of the Soviet Union’s Luna 9, the first spacecraft to soft-land on the Moon, back in February 1966. Two independent efforts, one crowdsourced and one using machine learning trained on imagery of known Apollo hardware, have proposed candidate locations. Confirmation may come from sharper imaging during upcoming passes by India’s Chandrayaan-2 orbiter, which could potentially resolve small hardware pieces that have blended into shadows for decades. It’s a neat preview of where lunar exploration is going: not just getting to the Moon, but managing, cataloging, and monitoring a surface that’s filling up with human-made objects, using AI to find what human eyes miss.
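At its core, that machine-learning search is pattern recognition on orbital imagery. As a purely illustrative toy, and not the researchers’ actual pipeline, normalized cross-correlation can locate a small known “hardware” signature buried in a noisy synthetic scene:

```python
import numpy as np


def match_template(image, template):
    """Slide `template` over `image`; return the (row, col) offset with the
    highest normalized cross-correlation score, plus that score."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -1.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum()) * t_norm
            if denom == 0:
                continue  # flat patch: correlation undefined
            score = float((p * t).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    scene = rng.normal(0.2, 0.05, size=(64, 64))  # stand-in for orbiter imagery
    lander = np.zeros((5, 5))                     # toy "hardware" template
    lander[1:4, 1:4] = 1.0                        # bright 3x3 core
    scene[30:35, 40:45] += lander                 # bury it in the scene
    pos, score = match_template(scene, lander)
    print(pos, round(score, 3))                   # expect pos == (30, 40)
```

Real searches are far harder: the target’s appearance is unknown, lighting and shadows vary with each orbit, and the model must learn what crashed or landed hardware looks like from labeled Apollo examples rather than from a clean template.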