EPISODE · Feb 21, 2026 · 14 MIN
Wikipedia bans Archive.today links & Android sideloading and F-Droid warning - Hacker News (Feb 21, 2026)
from The Automated Daily - Hacker News Edition · host TrendTeller
Please support this podcast by checking out our sponsors:
- Prezi: Create AI presentations fast - https://try.prezi.com/automated_daily
- Discover the Future of AI Audio with ElevenLabs - https://try.elevenlabs.io/tad
- Invest Like the Pros with StockMVP - https://www.stock-mvp.com/?via=ron

Support The Automated Daily directly: Buy me a coffee: https://buymeacoffee.com/theautomateddaily

Today's topics:
- Wikipedia bans Archive.today links - Wikipedia’s English edition is deprecating and blacklisting Archive.today after allegations of DDoS abuse and snapshot tampering, impacting ~695,000 links across ~400,000 pages; alternatives include Archive.org and Ghostarchive.
- Android sideloading and F-Droid warning - F-Droid warns Android app-install policy changes are still coming despite public confusion, adding banners in F-Droid clients and urging outreach to regulators; F-Droid Basic 2.0-alpha3 adds mirrors, install history, CSV export, and more.
- LinkedIn blue badge identity vendor - A LinkedIn verification attempt reveals the “real person” badge is handled by Persona, involving passport scans (including NFC), selfie biometrics, device/IP/location signals, and broad subprocessor sharing—raising GDPR and US CLOUD Act concerns.
- Dependabot noise vs real security - Filippo Valsorda argues GitHub Dependabot creates alert fatigue—especially for Go—citing a GO-2026-4503 / CVE-2026-26958 case; he recommends scheduled `govulncheck` plus CI tests against latest deps instead.
- Vulnerability disclosure gone legal - A platform engineer reports a diving insurer portal with sequential IDs and shared default passwords, enabling account takeovers; the response reportedly came via lawyers demanding confidentiality, highlighting coordinated disclosure and GDPR notification questions.
- Local AI stack: Claws and llama.cpp - Andrej Karpathy’s “Claws” idea frames a new orchestration layer for LLM agents on personal hardware, while the ggml/llama.cpp team joins Hugging Face to strengthen local inference, packaging, and tighter Transformers integration.
- Facebook feed flooded with AI slop - A returning Facebook user finds the News Feed dominated by suggested, often AI-generated thirst-trap posts and engagement bait, raising concerns about recommendation quality, bot-like comments, and harmful content.
- Rebuilding the first web browser - CERN’s WorldWideWeb Rebuild recreates the original 1990 NeXT-based browser inside a modern browser, letting people experience early web navigation and even authoring workflows like editing pages and creating links.

Links:
- https://f-droid.org/2026/02/20/twif.html
- https://thelocalstack.eu/posts/linkedin-identity-verification-privacy/
- https://words.filippo.io/dependabot/
- https://dixken.de/blog/i-found-a-vulnerability-they-found-a-lawyer
- https://simonwillison.net/2026/Feb/21/claws/
- https://pilk.website/3/facebook-is-absolutely-cooked
- https://github.com/ggml-org/llama.cpp/discussions/19759
- https://arstechnica.com/tech-policy/2026/02/wikipedia-bans-archive-today-after-site-executed-ddos-and-altered-web-captures/
- https://worldwideweb.cern.ch/

Episode Transcript

Wikipedia bans Archive.today links

Let’s start with that Wikipedia decision, because it’s huge in both scale and symbolism. Wikipedia’s English-language community has reached a consensus to deprecate and effectively blacklist Archive.today—also known across a bunch of domains like archive.is, archive.ph, and others. The headline detail: this change touches more than 695,000 links across roughly 400,000 articles. Why the sudden purge? Two core issues.
First, editors concluded the site was used to “hijack users’ computers” into participating in a distributed denial-of-service attack—specifically connected to malicious code on a CAPTCHA page that was reportedly used to direct a DDoS at the Gyrovague blog. Second, and arguably just as damaging for an archive: Wikipedia editors say they saw evidence that Archive.today may have altered archived snapshots—like inserting a targeted blogger’s name into captures. If an archive can’t be trusted to show what a page looked like at a point in time, it stops being an archive and starts being a publishing system with questionable incentives.

Operationally, Wikipedia is moving fast: adding Archive.today to spam blacklists or edit filters to block new links, and encouraging large-scale replacement of existing ones. Suggested substitutes include the Internet Archive, Ghostarchive, Megalodon, or simply citing sources that don’t need an archive link. There’s also a broader takeaway here: archiving is infrastructure, and the web’s “memory” gets fragile when critical services are run by opaque operators.

Android sideloading and F-Droid warning

Staying with platform governance—Android’s app ecosystem is facing another stress test, and F-Droid is sounding the alarm. In its latest “This Week in F-Droid,” the project says it’s “under threat” because Google’s previously announced plans to change how apps are installed on Android are still on track. The interesting part is the social layer: F-Droid says that at FOSDEM 2026, many users told them they believed Google had backed off—basically that Android’s openness was secured. F-Droid calls that a misconception reinforced by PR and repeated reporting. Their point is simple: Google talked about an “advanced flow” for installing apps, but F-Droid says it’s not clear what has actually shipped in recent Android versions—or when the final change lands.
In response, F-Droid is getting more visible and more political: warning banners are now on the website, plus updated banners inside the F-Droid and F-Droid Basic clients, urging users to raise concerns with local authorities. And it’s not just F-Droid—IzzyOnDroid is adding a banner too, and Obtainium already shows an in-app warning dialog.

On the product side, F-Droid Basic’s rewrite continues with a 2.0-alpha3 release. It adds practical features like CSV export of installed apps, an install history, mirror selection, a “prevent screenshots” privacy toggle, tooltips, a reworked overflow menu for “My Apps,” and persistent sort order—plus translations and bug fixes. The gotcha: users on the stable 1.23.x line won’t get that alpha automatically; you have to opt into beta updates. They also mention recovering from a Debian upgrade and some resulting delays, and they nudge developers still on Java 17 to consider moving to Java 21.

The rest of the weekly roundup reads like a pulse check on the FOSS Android ecosystem: Buses returns after two years, Conversations and Quicksy bring workflow and tablet improvements—and notably tweak their Play Store flavor to avoid directly using a Google library by talking to Play Services over IPC. There are also updates to Dolphin Emulator, Image Toolbox with new AI-adjacent tools, Luanti fixes, frequent Nextcloud client releases around “Nextcloud Hub 26 Winter,” and ProtonVPN shrinking significantly by dropping OpenVPN in favor of WireGuard and “Stealth.”

LinkedIn blue badge identity vendor

Now, from app distribution to identity verification—another place where “it’s just a badge” turns into a serious data pipeline. One writer walked through LinkedIn’s “real person” blue badge verification and found that the process is handled not by LinkedIn directly, but by Persona Identities, Inc., an identity verification vendor. The author’s complaint isn’t that ID checks exist—it’s the scope and downstream use.
They describe a flow that involved full passport images, including NFC chip data; a live selfie; and the creation of facial-geometry biometrics. On top of that, Persona’s policy mentions collecting national ID details, contact and address information, device identifiers, IP address and inferred geolocation, and even behavioral signals—things like hesitation patterns and copy/paste detection. Persona also says it cross-references users against a “global network” of third-party sources—government and ID registries, credit agencies, utilities, mobile providers, and postal databases.

Then there’s the part that will make privacy engineers wince: the policy language the author cites suggests Persona can use uploaded IDs and selfies to improve its systems, including AI training, under a “legitimate interests” justification rather than explicit consent—raising GDPR questions. LinkedIn, according to the write-up, receives a relatively constrained output—your name, birth year, ID type and issuer, the verification result, plus a blurred or redacted ID image. But Persona’s subprocessor list includes 17 companies, overwhelmingly US-based, with cloud infrastructure providers and also AI-related processors named. The author argues that even if data is stored in Germany, US jurisdiction—via the CLOUD Act—could still compel disclosure, potentially with gag orders.

The practical advice they end with is straightforward: if you did this verification, consider submitting data access and deletion requests, and object where applicable, before deciding whether a profile badge is worth the trade.

Dependabot noise vs real security

Let’s pivot to security engineering, where one of the recurring themes is “signal versus noise.” Cryptography engineer Filippo Valsorda argues that GitHub’s Dependabot has become, in his words, a noise machine—especially in Go projects. His example is telling.
He published a security fix for `filippo.io/edwards25519` involving `(*Point).MultiScalarMult` potentially returning invalid results if the receiver wasn’t the identity point. It’s tracked as GO-2026-4503 and CVE-2026-26958. But he says the vulnerable method is essentially unused in practice, even though the module is widely depended on indirectly. Dependabot, however, reportedly reacted by opening thousands of update pull requests across repositories that weren’t actually affected, attaching what he calls a nonsensical CVSS v4 score and even a “73% compatibility score”—despite the patch being a one-line change in a rarely used path. He also points out a more basic failure mode: Dependabot raised an alert for a repository that didn’t even import the vulnerable package—only a different, unaffected subpackage—prompting them to disable it entirely. The core critique is that Go has unusually rich vulnerability metadata and tooling. The Go Vulnerability Database can describe issues down to the symbol level, and `govulncheck` can do static analysis to see whether vulnerable code paths are actually reachable. That’s a much better filter than “the module exists somewhere in your dependency graph.” Valsorda’s recommendation is pragmatic: turn off Dependabot for Go security alerts, and replace it with two scheduled GitHub Actions—one that runs `govulncheck`, and another that runs your test suite against the latest dependency versions. That second part is key: it catches breakages early without forcing a constant stream of upgrade PRs, and it keeps new dependency code confined to CI until you intentionally ship it. His broader point lands well beyond Go: false positives burn attention. And when teams are exhausted by low-value alerts, they’re less likely to respond quickly when a vulnerability actually requires production changes—like patching, incident review, or secret rotation. 
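Valsorda’s two-scheduled-actions suggestion could be approximated with a GitHub Actions workflow like the following. This is a hedged sketch, not taken from his post: the file name, job names, schedule, and action versions are all assumptions; only the `govulncheck` and `go` commands themselves are standard tooling.

```yaml
# .github/workflows/security-checks.yml (hypothetical name)
name: scheduled-security-checks
on:
  schedule:
    - cron: "0 6 * * *"  # once a day; pick any cadence
jobs:
  govulncheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: stable
      - run: go install golang.org/x/vuln/cmd/govulncheck@latest
      # Symbol-level reachability analysis: only reports vulnerabilities
      # whose affected code paths your module can actually call.
      - run: govulncheck ./...
  latest-deps:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: stable
      # Upgrade everything in a throwaway CI checkout, then run tests.
      # Nothing is committed, so new dependency code stays confined to
      # CI until you intentionally ship it.
      - run: go get -u ./... && go mod tidy
      - run: go test ./...
```

The split mirrors his argument: the first job filters by reachability rather than by presence in the dependency graph, and the second catches upgrade breakage early without a stream of bot-authored PRs.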
Vulnerability disclosure gone legal

On the subject of real-world vulnerabilities, there’s a report from Yannick Dixken—a diving instructor and platform engineer—who says he discovered a critical flaw in a major diving insurer’s member portal while on a two-week trip around Cocos Island, Costa Rica. The vulnerability is painfully old-school: sequential numeric user IDs, a static default password shared across newly created student accounts, no enforced password change, and apparently no rate limiting, lockout, or MFA. In effect, account takeover becomes a guessing game: pick an ID, try the default password, and you’re in. He says the exposure included personal data—names, addresses, phone numbers, emails, dates of birth—and that minors’ data could be involved. He also says he validated the issue with minimal access and stopped.

The disclosure story is where things get messy. He reported it on April 28, 2025 using coordinated disclosure, notifying both the organization and CSIRT Malta under Malta’s national policy, and setting a 30-day remediation window. Two days later, he heard back—not from engineers, but from the organization’s Data Privacy Officers’ law firm. The letter acknowledged an investigation and mentioned steps like resetting default passwords and rolling out 2FA. But it also criticized him for involving authorities first, suggested public disclosure could expose him to liability, and asked him to sign a same-day declaration—reportedly including providing passport ID—asserting deletion and strict confidentiality. He viewed it as an NDA meant to control the narrative of the disclosure process. He refused blanket confidentiality, offered a narrower confirmation of deletion, and says the organization doubled down with references to Maltese criminal code. The good news: he reports the issue has been fixed and 2FA is being rolled out.
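The root flaw, a static password shared across accounts, has a well-known fix: provision every new account with a unique random temporary credential from a CSPRNG and force a change at first login. A minimal sketch in Python; the function name and length are illustrative, not the insurer's actual system:

```python
import secrets
import string

def provision_account(user_id: str) -> str:
    """Generate a unique, random temporary password for a new account.

    Unlike a shared static default, this cannot be guessed by walking
    sequential user IDs and trying the same credential on each one.
    """
    alphabet = string.ascii_letters + string.digits
    # 16 characters from a CSPRNG: roughly 95 bits of entropy.
    temp_password = "".join(secrets.choice(alphabet) for _ in range(16))
    # A real system would store only a salted hash and set a
    # "must change password on first login" flag here.
    return temp_password

# Two freshly provisioned accounts never share a credential.
a = provision_account("1001")
b = provision_account("1002")
assert a != b and len(a) == 16
```

Rate limiting, lockout, and MFA close the remaining enumeration angle, but per-account credentials alone would have turned the reported "guessing game" into a brute-force problem.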
The unresolved question he raises is whether affected users were notified, which matters under GDPR breach notification duties. This is a reminder that security isn’t only technical. The way organizations respond—especially when lawyers lead and engineers follow—can chill reporting and reduce overall safety for users.

Local AI stack: Claws and llama.cpp

Now for AI—specifically the continuing migration from cloud-only workflows to personal hardware. Two stories fit together neatly.

First, the ggml.ai team—the people behind `ggml` and `llama.cpp`—announced they’re joining Hugging Face to support the long-term progress of what they call “Local AI.” Maintainer Georgi Gerganov frames it as a sustainability move: the projects remain open-source and community-driven, but with more resources behind full-time maintenance. They credit Hugging Face engineers for real, hands-on contributions—things like an inference server and UI, multimodal support, new architecture implementations, GGUF compatibility improvements, and integration into Hugging Face Inference Endpoints. Looking forward, they want tighter “single-click” integration with `transformers`, plus better packaging and user experience so non-experts can run local models without turning setup into a weekend project.

Second, Andrej Karpathy posted a mini-essay about buying a Mac Mini to experiment with “Claws.” The term is deliberately playful, but the concept is serious: a layer on top of LLM agents that handles orchestration—scheduling, tool calling, persistence, and context management. Karpathy notes there are already multiple small implementations popping up, and he highlights one called NanoClaw because its engine is around 4,000 lines of code—small enough to feel auditable and adaptable. It also runs everything in containers by default, which is a strong signal that the ecosystem is thinking about isolation and repeatability.
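The orchestration idea described above (tool calling, persistence, context management) can be made concrete with a toy loop. To be clear, this is not NanoClaw's code; the "model" is a hard-coded stub and every name here is hypothetical, just to show the shape of such a layer:

```python
import json
from dataclasses import dataclass, field

# Toy tool registry; a real claw would sandbox these, e.g. in containers.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
}

@dataclass
class AgentState:
    """Persistent context the orchestrator carries between steps."""
    history: list = field(default_factory=list)

def fake_llm(prompt: str) -> str:
    """Stub for a local model (e.g. one served by llama.cpp).

    Emits a JSON 'tool call' for the arithmetic prompt, else a plain reply.
    """
    if "2 + 3" in prompt:
        return json.dumps({"tool": "add", "args": {"a": 2, "b": 3}})
    return json.dumps({"reply": prompt})

def step(state: AgentState, user_msg: str):
    """One orchestration step: record input, call model, dispatch tools."""
    state.history.append(("user", user_msg))
    action = json.loads(fake_llm(user_msg))
    if "tool" in action:
        result = TOOLS[action["tool"]](action["args"])
        state.history.append(("tool", result))
        return result
    state.history.append(("assistant", action["reply"]))
    return action["reply"]

state = AgentState()
answer = step(state, "what is 2 + 3?")
# The tool result came back, and both turns were persisted.
assert answer == 5 and len(state.history) == 2
```

Real implementations add scheduling, retries, and durable storage on top of a loop like this, but the core pattern is the same: the model proposes, the orchestrator dispatches and remembers.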
Simon Willison, linking to the thread, predicts “Claw” could become a term of art for personal-hardware agent systems that communicate over messaging protocols and can schedule tasks. Zooming out: this is the stack maturing. Local inference is getting easier to run, and agent orchestration is getting more structured—two trends that reinforce each other.

Facebook feed flooded with AI slop

Before we wrap, a quick look at the state of mainstream social feeds—because it’s affecting how people perceive the internet day to day. One writer logged into Facebook after about eight years, expecting to find a neighborhood group. Instead, they found the News Feed dominated by suggested content they didn’t ask for. After one post from a followed page, they report the next ten posts weren’t from friends or followed pages at all. The feed was packed with “thirst trap” images of young women, many appearing AI-generated, plus generic captions and mildly NSFW content. They also describe AI-made “heartwarming” videos, relationship memes, and sketch clips designed to farm engagement.

A particularly unsettling detail: Meta’s “suggested questions” feature surfaced prompts the author read as sexist or objectifying—less curiosity, more bait. And when they checked comments, they saw lots of low-effort praise that felt bot-like. The moment that made them stop entirely was encountering AI images of girls who looked around 14. Whether this is a cold-start problem for returning users, a recommender-system failure mode, or something more systemic, it’s another data point in a growing complaint: feeds are becoming synthetic, repetitive, and in some cases dangerous.

Rebuilding the first web browser

Finally, a calmer note—and a nice piece of web history. CERN has a project called the “2019 WorldWideWeb Rebuild,” which recreates the original WorldWideWeb application—the first web browser—built on a NeXT machine at CERN in December 1990.
The rebuild runs inside a modern browser, but it tries to preserve the original experience: you open a URL through menu flows like “Document,” then “Open from full document reference,” then type the URL. And yes, you double-click links—because that’s how the original interaction model worked. What’s especially cool is that it doesn’t just show pages; it demonstrates early authoring features too—editing documents and creating links. It’s a reminder that the web started as both a reading and writing system, and that simplicity was a feature, not a limitation.

Visit our website at https://theautomateddaily.com/
Send feedback to [email protected]