EPISODE · Feb 20, 2026 · 11 MIN
Android spyware uses Gemini live & Gemini makes music and art - Tech News (Feb 20, 2026)
from The Automated Daily - Tech News Edition · host TrendTeller
Please support this podcast by checking out our sponsors:
- Prezi: Create AI presentations fast - https://try.prezi.com/automated_daily
- KrispCall: Agentic Cloud Telephony - https://try.krispcall.com/tad
- Gamma: Effortless AI design for presentations, websites, and more - https://try.gamma.app/tad

Support The Automated Daily directly:
- Buy me a coffee: https://buymeacoffee.com/theautomateddaily

Today's topics:
- Android spyware uses Gemini live - ESET reports “PromptSpy,” an Android spyware family that queries Google Gemini at runtime to adapt taps and UI navigation via Accessibility—novel genAI-driven persistence.
- Gemini makes music and art - Google’s Gemini rolls out Lyria 3 to generate 30-second music from text, photos, or video prompts, plus AI cover art—mainstreaming consumer generative audio tools.
- Apple Music builds AI playlists - Apple’s iOS 26.4 beta adds “Playlist Playground,” using Apple Intelligence to turn prompts into 25-song playlists with cover art and descriptions—competing with Spotify’s AI playlist features.
- ByteDance boosts AI video realism - ByteDance’s Seedance 2.0 draws Hollywood scrutiny for near-production-quality AI video with dialogue and sound effects, intensifying copyright, labeling, and licensing debates.
- Gemini 3.1 Pro benchmark leap - Google previews Gemini 3.1 Pro, claiming stronger reasoning and big benchmark jumps (Humanity’s Last Exam, ARC-AGI-2) while competition remains tight versus Anthropic and OpenAI.
- OpenAI’s strategy under pressure - Analyst Benedict Evans argues OpenAI lacks durable moats: frontier models are converging, distribution favors incumbents, engagement is shallow, and “new AI experiences” may be hard to own alone.
- AI productivity: gains, bottlenecks - Evidence on AI at work is mixed: coding assistants show measurable throughput gains, but PR review and quality become bottlenecks; teams with strong fundamentals benefit far more than others.
- Alzheimer’s blood test clock model - WashU researchers published a plasma p-tau217 “clock” in Nature Medicine that estimates Alzheimer’s symptom onset within ~3–4 years—potentially accelerating prevention trials via cheap blood tests.
- Personalized mRNA vaccine for TNBC - A Nature paper reports individualized neoantigen mRNA (RNA–LPX) vaccination in early-stage triple-negative breast cancer is feasible, immunogenic, and shows durable T-cell responses over years.
- Social media addiction lawsuits grow - Meta, TikTok, and others face escalating U.S. courtroom fights over alleged addictive design harms to children, testing Section 230 and First Amendment defenses with major bellwether trials.
- Meta pivots Horizon Worlds mobile - Meta is splitting Horizon Worlds from Quest, pushing Worlds toward an almost mobile-first product while reaffirming third-party Quest developer support—reshaping its metaverse strategy.
- SaaS shakeout and durability tests - After a software-stock selloff, investors are sorting durable SaaS from vulnerable categories as AI lowers build costs and shifts budgets; switching costs and data compounding are key moats.
- AI scaling laws beyond language - A deep dive on scaling laws argues they’re most reliable in language and image generation; other domains like robotics and biology scale more slowly and need better data, evals, and post-training.

Episode Transcript

Android spyware uses Gemini live

First up, security—because this one is worth your attention. ESET says it’s found what may be the first Android malware family that uses generative AI during runtime to change how it behaves on a victim’s device. The malware, dubbed “PromptSpy,” appears to query Google’s Gemini with a description of the current screen—down to UI elements and coordinates—then gets back step-by-step instructions in a JSON-like format for what to tap.
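ESET hasn’t published the exact payload format, so the sketch below is purely hypothetical: it shows, from a defender’s point of view, what such a “JSON-like instructions” protocol could plausibly look like, an ordered list of UI actions that a client parses, validates, and replays. Every field name and value here is invented for illustration.

```python
import json

# Hypothetical illustration only: ESET has not published PromptSpy's schema.
# This models the *kind* of response described in the report: an ordered
# list of UI actions with screen coordinates.
EXAMPLE_RESPONSE = """
[
  {"step": 1, "action": "tap", "x": 540, "y": 1650, "target": "pin in Recents"},
  {"step": 2, "action": "swipe", "from": [540, 1200], "to": [540, 400]},
  {"step": 3, "action": "tap", "x": 980, "y": 180, "target": "confirm button"}
]
"""

ALLOWED_ACTIONS = {"tap", "swipe"}

def parse_instructions(raw: str) -> list[dict]:
    """Parse and minimally validate a model-produced action list."""
    steps = json.loads(raw)
    for step in steps:
        if step.get("action") not in ALLOWED_ACTIONS:
            raise ValueError(f"unknown action: {step.get('action')!r}")
    # Replay in the declared step order, regardless of list order.
    return sorted(steps, key=lambda s: s["step"])

if __name__ == "__main__":
    for step in parse_instructions(EXAMPLE_RESPONSE):
        print(step["step"], step["action"])
```

The interesting design point is that the heavy lifting (reading the screen, choosing what to tap) happens in the model, while the on-device code stays as dumb as this parser, which is exactly why ESET flags it as a new kind of adaptability.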
The goal is persistence: it tries to pin itself in Android’s Recent Apps so the system is less likely to kill it, and so “Clear all” doesn’t easily shake it loose. On top of that, it’s spyware, with remote-control capabilities via a built-in VNC module once Accessibility permissions are granted. ESET also describes a nasty removal obstacle: invisible overlays that block taps on uninstall or stop buttons, meaning some victims may need Safe Mode to remove it. Even if this turns out to be a proof-of-concept, it’s a clear signal—genAI isn’t just helping attackers write phishing emails; it’s starting to automate the fiddly, device-specific parts of actually operating malware.

Gemini makes music and art

Staying in the AI realm, Google and Apple are both pushing generative features into mainstream music workflows—and doing it in a way that could make “AI audio” feel as normal as filters in a camera app. Google says Gemini can now generate 30-second music clips using DeepMind’s Lyria 3 model, taking prompts not only from text but also from photos or even user-uploaded video. You can ask for instrumental tracks or add lyrics, and Google is also pairing the music feature with AI-generated cover art. There are usage caps—free users get a limited number of generations per day—and Google says users have rights to use what they create, while also claiming it’s training on music it has rights to use and deploying filters to prevent imitation of specific artists.

Apple Music builds AI playlists

Apple, meanwhile, is taking a less “make a song” approach and more of a “make the vibe” approach: Apple Music is adding Playlist Playground in iOS 26.4, turning a prompt into a playlist of 25 songs with cover art and a short description. It’s squarely aimed at the same space Spotify’s been exploring, and it’s a good reminder that AI features are increasingly shipping as product polish—not as standalone apps.

ByteDance boosts AI video realism

Now, if music generation feels like a gentle step forward, video generation is landing more like a jolt. ByteDance is drawing fresh heat in Hollywood over Seedance 2.0, a model that reportedly generates cinema-quality video—and notably, can produce dialogue and sound effects along with visuals from simple prompts. Viral clips have made the rounds, including content that resembles well-known characters, and studios are responding the way you’d expect: cease-and-desist letters and copyright accusations. Beyond the legal fight, the industry conversation is shifting toward practical safeguards: clearer labeling to prevent deception, and real licensing and redress mechanisms so creators can get paid—or at least contest misuse—when their styles or assets are effectively absorbed into training data.

Gemini 3.1 Pro benchmark leap

On the model race itself, Google is also shipping a more traditional upgrade: Gemini 3.1 Pro. The company is positioning it as better at reasoning and complex problem-solving, with a big jump on benchmarks that are designed to be harder to train around. Google highlighted improvements on Humanity’s Last Exam and a sharp rise on ARC-AGI-2, which focuses on novel logic problems. It’s also claiming better results for agentic workflows, the kind that matter when you’re trying to automate real multi-step tasks instead of just chatting. That said, the leaderboard story remains messy. Preference-based rankings can reward answers that look plausible, and other labs—especially Anthropic—are still very much in the mix depending on whether you care most about text quality, code, or tool use.

OpenAI’s strategy under pressure

This brings us neatly to a bigger strategic question: what happens when frontier models start to feel interchangeable? Analyst Benedict Evans has a blunt take on OpenAI’s situation: no unique tech moat, a massive user base that doesn’t necessarily translate into deep engagement, and incumbents like Google and Meta bundling AI into products people already use daily. Evans also argues that much of the real value may come from entirely new “AI experiences”—workflows and interfaces that go beyond a chatbot—and those are hard for any one lab to invent and own alone. In his framing, OpenAI’s recent scattershot of initiatives reads like a race to find the next stable platform position before the market fully commoditizes the chatbot form factor.

AI productivity: gains, bottlenecks

Meanwhile, the economics of AI in the workplace are looking a lot more incremental—and a lot more uneven—than the loudest narratives suggest. One synthesis making the rounds argues we’re not heading for an overnight white-collar wipeout, but we are seeing clear productivity impact in software development. Across large field experiments, coding assistants boosted developer task completion substantially, but once you account for partial adoption and the fact that coding isn’t all a developer does, the net project-level gain looks closer to a steady, meaningful bump—think around ten percent rather than magic doubling. There’s also a catch: quality and review. Data from engineering orgs suggests high-AI-adoption teams can ship more, but pull request review time balloons—human approval becomes the bottleneck. And the teams that benefit most aren’t always the ones you’d expect. Some benchmarks indicate senior engineers often get far more leverage than juniors, because they can spot subtle failures, steer architecture, and clean up the rough edges AI tends to produce.

If you’re building agent systems, a separate set of lessons is converging around the same theme: don’t confuse autonomy with value.
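A quick aside on the productivity math: the “around ten percent” estimate is roughly what Amdahl-style accounting predicts once you dilute a large task-level speedup by partial adoption and by the share of time developers actually spend coding. A minimal sketch, with purely illustrative numbers (none of these figures come from the studies):

```python
def net_project_gain(coding_share: float, coding_speedup: float, adoption: float) -> float:
    """Amdahl-style estimate of overall throughput gain when only the
    coding portion of the work is accelerated, and only for adopters.
    All inputs are illustrative assumptions, not figures from the studies."""
    # Effective speedup on the coding portion, diluted by partial adoption.
    effective = 1 + adoption * (coding_speedup - 1)
    # Time model: non-coding work is unchanged; coding time shrinks.
    new_time = (1 - coding_share) + coding_share / effective
    return 1 / new_time - 1

# Example: coding is 40% of the job, assistants make adopters 30% faster
# at it, and 80% of the team adopts.
gain = net_project_gain(coding_share=0.4, coding_speedup=1.3, adoption=0.8)
print(f"{gain:.1%}")  # prints 8.4%
```

With those assumed inputs the net project-level gain comes out just under ten percent, which is why headline task-level numbers and project-level reality can both be true at once.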
Practical writeups from teams in the trenches emphasize tight evaluation loops—prompt to output to eval to iteration—plus strong observability, and “micro-agent” decomposition where each agent does one narrow thing reliably. Another recurring recommendation: use strict tooling and constraints wherever possible, because compile-time checks and structured interfaces can function like guardrails against the kind of silent, high-confidence mistakes that long tool chains are famous for.

Alzheimer’s blood test clock model

Let’s pivot to health tech, where we’ve got a story with real-world stakes. Researchers at Washington University School of Medicine in St. Louis published results in Nature Medicine describing a way to estimate when someone is likely to develop Alzheimer’s symptoms using a single blood test marker—plasma p-tau217. Their “clock” models predicted symptom onset within roughly three to four years, and importantly, the approach held up across different test assays, including one that’s FDA-cleared.

A striking detail: the time between p-tau217 rising and symptom onset appears shorter for older people. In the study’s example, an elevation at age 60 corresponded to symptoms roughly two decades later, while elevation at 80 lined up with symptoms about eleven years later—hinting that older brains may show clinical signs with less pathology. The immediate promise here isn’t consumer screening tomorrow; it’s faster, cheaper recruitment for prevention trials, because a blood test scales far better than PET scans or spinal fluid tests. The team also released code and a web app for researchers, and expects future models could improve by combining additional biomarkers.

Personalized mRNA vaccine for TNBC

On the biotech front, there’s also a notable cancer-immunotherapy update: a Nature paper reported results from a phase 1 umbrella trial testing individualized neoantigen mRNA vaccination in early-stage triple-negative breast cancer after surgery and standard therapy.
The approach uses sequencing from each patient’s tumor to pick up to 20 mutations, then encodes them into mRNA delivered in lipid nanoparticles targeted to dendritic cells. In a small cohort—14 evaluable patients—every participant mounted vaccine-induced or vaccine-amplified T-cell responses to multiple neoantigens, and some immune responses persisted for years. Clinically, most patients were relapse-free at long follow-up, but the study also details how recurrence can still happen through mechanisms like loss of antigen presentation. It’s early and it’s small, but it’s another data point that personalization in cancer vaccines is moving from theory toward repeatable manufacturing and measurable immune effects.

Social media addiction lawsuits grow

A very different kind of accountability story is playing out in U.S. courts: social media companies including Meta and TikTok are facing a widening wave of lawsuits alleging addictive design harms to children and failures to protect minors from dangerous content and predators. What’s changing is procedural reality—some cases are reaching juries, with bellwether trials that could influence thousands of related claims. Outcomes could hinge on how courts interpret platforms’ defenses under the First Amendment and Section 230, and whether plaintiffs can convincingly frame design choices and recommendation systems as product harms rather than protected speech. Even if this takes years, it’s the kind of litigation pressure that can force product and policy shifts long before final judgments land.

Meta pivots Horizon Worlds mobile

In VR and virtual worlds, Meta is making a strategic separation that says a lot about where it thinks the audience is. The company is splitting Horizon Worlds from its Quest VR platform, effectively turning Worlds into its own product and making it “almost exclusively mobile” in focus.
Quest, for its part, remains positioned as a VR developer ecosystem with continued third-party support and monetization tooling. Read between the lines, and it looks like Meta is prioritizing distribution where the users already are—phones—while keeping VR gaming alive but no longer treating it as the single on-ramp to a unified metaverse vision.

SaaS shakeout and durability tests

Finally, the software business itself is in a moment of sorting. After a major selloff in public software stocks, more investors are asking which companies have real durability as AI makes it cheaper to build competitors and shifts spending from SaaS seats toward usage-based AI. One useful lens that’s emerging: can customers switch easily, and does the product’s value compound with scale—through proprietary data, risk-bearing operations, or network effects? If the answers are no, the concern is that many products start to look less like defendable “assets” and more like replicable inventory. This isn’t the end of software companies, but it is a forcing function for pricing, go-to-market, and what counts as a moat.

AI scaling laws beyond language

And as a closing thought for the builders: a long essay on scaling laws is making the rounds with a reality check—smooth, predictable scaling has been most convincingly demonstrated in language and image generation. Outside of that, in areas like robotics, biology, and world modeling, gains per 10x scale-up tend to be shallower, datasets can be messier, and metrics often correlate poorly with real performance. The takeaway isn’t “don’t scale.” It’s: do the hard work first—data pipelines, domain benchmarks, and downstream evaluations—before you commit to the kind of training runs that can burn through millions and still leave you with results that don’t transfer.
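To make the “gains per 10x scale-up” point concrete: under a power-law scaling curve of the form loss = a * C^(-b), the exponent b alone determines how much each 10x of compute buys you. A toy sketch with illustrative exponents (the specific values are assumptions for the example, not measurements from the essay):

```python
def loss(compute: float, a: float, b: float) -> float:
    """Toy power-law scaling curve: loss = a * compute**(-b)."""
    return a * compute ** (-b)

def improvement_per_10x(b: float) -> float:
    """Fractional loss reduction from a single 10x scale-up, which under a
    power law depends only on the exponent b: 1 - 10**(-b)."""
    return 1 - 10 ** (-b)

# Illustrative exponents (assumed for the example): a steeper
# language-like curve versus a shallower robotics-like curve.
for name, b in [("steep domain", 0.10), ("shallow domain", 0.02)]:
    print(name, f"{improvement_per_10x(b):.1%} loss reduction per 10x")
```

With these assumed exponents, the steep curve buys roughly a 20% loss reduction per 10x while the shallow one buys under 5%, which is the essay’s point in miniature: when b is small, the same training run purchases far less progress, so better data and evals matter more than raw scale.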
Subscribe to edition-specific feeds:
- Space news
  * Apple Podcast: English
  * Spotify: English
  * RSS: English, Spanish, French
- Top news
  * Apple Podcast: English, Spanish, French
  * Spotify: English, Spanish, French
  * RSS: English, Spanish, French
- Tech news
  * Apple Podcast: English, Spanish, French
  * Spotify: English, Spanish, French
  * RSS: English, Spanish, French
- Hacker news
  * Apple Podcast: English, Spanish, French
  * Spotify: English, Spanish, French
  * RSS: English, Spanish, French
- AI news
  * Apple Podcast: English, Spanish, French
  * Spotify: English, Spanish, French
  * RSS: English, Spanish, French

Visit our website at https://theautomateddaily.com/
Send feedback to [email protected]
Youtube | LinkedIn | X (Twitter)