Anthropic's DMCA takedown nukes thousands of repos, Microsoft ships three in-house models, and two supply chain attacks hit LiteLLM and Axios in the same week
> The week Anthropic accidentally broke GitHub, Microsoft told OpenAI "we can do this ourselves," and two supply chain attacks reminded everyone that npm install is a trust exercise.
Anthropic discovered a source code leak exposing internal system architecture. Their response — a batch of DMCA takedown requests — removed thousands of GitHub repositories, many of which had nothing to do with the leak.
The developer community was not amused. Affected repos included independent research projects, model analysis tools, and general-purpose libraries that happened to reference Anthropic. The company called it unintentional overcatch, but the damage was real: broken CI pipelines, deleted work, and a painful reminder of the asymmetric power dynamics in DMCA enforcement.
For a company that builds its brand on responsible AI, this was an own goal. The incident exposes a growing tension: as AI models become billion-dollar assets, the legal mechanisms companies use to protect them are blunt instruments that don't distinguish between a leak mirror and a PhD student's research repo.
The takeaway for builders: if your project references an AI company's architecture, keep your own backups. DMCA takedowns are automated, appeals are not.
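That backup can be as simple as a mirrored clone. A minimal sketch in Python (the repo URLs and backup path are placeholders, not real projects):

```python
from pathlib import Path

# Placeholder repo list -- substitute the projects you depend on.
REPOS = [
    "https://github.com/example-org/model-analysis-tools.git",
    "https://github.com/example-org/general-purpose-lib.git",
]

def mirror(repo_url: str, backup_dir: Path) -> list[str]:
    """Build the git command that creates or refreshes a full mirror.

    `git clone --mirror` copies every ref (branches, tags, notes),
    so the backup survives even if the upstream repo disappears.
    """
    name = repo_url.rstrip("/").removesuffix(".git").rsplit("/", 1)[-1]
    target = backup_dir / f"{name}.git"
    if target.exists():
        # Existing mirror: just fetch the latest refs.
        return ["git", "--git-dir", str(target), "remote", "update"]
    return ["git", "clone", "--mirror", repo_url, str(target)]

for url in REPOS:
    print(" ".join(mirror(url, Path("backups"))))
```

Run the printed commands (or hand them to `subprocess.run`) on a schedule; a cron job and a cheap disk are enough insurance against an overbroad takedown.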
The LiteLLM and Axios compromises share the same playbook: steal a publishing credential, push a malicious version, and let the ecosystem's trust model do the rest.
LiteLLM is a proxy layer that routes requests to multiple LLM providers (OpenAI, Anthropic, Bedrock, etc.). One compromised version means the attacker intercepts every API key and prompt flowing through it. Mercor was the public casualty, but any company using the affected version was exposed.
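The blast radius follows from the architecture: every provider key and every prompt passes through a single process. A toy router (my illustration, not LiteLLM's actual code; the keys are placeholders) makes the chokepoint obvious:

```python
# Toy multi-provider router -- an illustration of where a proxy sits
# in the data flow, not LiteLLM's real implementation.
PROVIDER_KEYS = {
    "openai": "sk-placeholder",
    "anthropic": "sk-ant-placeholder",
}

def route(model: str, prompt: str) -> tuple[str, str, str]:
    """Pick an upstream provider and attach its API key.

    A compromised build of this one function observes every key in
    PROVIDER_KEYS and every prompt crossing the proxy -- that is the
    entire attack.
    """
    provider = "anthropic" if model.startswith("claude") else "openai"
    return provider, PROVIDER_KEYS[provider], prompt

print(route("claude-3-opus", "summarize this contract")[0])  # anthropic
```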
Axios (101M weekly downloads) saw versions 1.14.1 and 0.30.4 published with a malicious plain-crypto-js dependency that exfiltrated credentials and installed a remote access trojan. The tell: the malicious versions had no corresponding GitHub release tag.
Both attacks exploited leaked publishing tokens — not pull requests, not code review bypass. The attacker never touched the source repo.
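That release-tag detail generalizes into a cheap pre-upgrade check: compare the versions a registry reports against the repo's release tags, and flag anything published without one. A sketch with hard-coded inputs; in practice you would fetch the two lists from the npm registry and the GitHub tags API:

```python
def untagged_versions(published: list[str], git_tags: list[str]) -> list[str]:
    """Return registry versions with no matching release tag.

    Tags are normalized by stripping a leading 'v', the usual npm
    tagging convention (v1.14.0 -> 1.14.0).
    """
    tagged = {t.lstrip("v") for t in git_tags}
    return [v for v in published if v not in tagged]

# Versions from the Axios incident: 1.14.1 and 0.30.4 reached npm
# with no corresponding GitHub release tag.
published = ["1.14.0", "1.14.1", "0.30.3", "0.30.4"]
tags = ["v1.14.0", "v0.30.3"]
print(untagged_versions(published, tags))  # ['1.14.1', '0.30.4']
```

An untagged version isn't proof of compromise, but it is a cheap signal worth gating upgrades on.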
Defense checklist:
- Treat publishing tokens like production secrets: scope them narrowly, rotate them, and require 2FA on registry accounts.
- Pin exact dependency versions, commit your lockfile, and review lockfile diffs like any other code change.
- Before upgrading, check that the new version has a matching release tag in the source repo (the tell in the Axios incident).
- Delay non-urgent upgrades by a few days; compromised versions are usually pulled quickly.
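Part of that hardening fits in a project-level `.npmrc`. Both settings below are real npm options; this is a sketch of a starting point, not a complete supply chain policy:

```ini
; Record exact versions instead of ^ranges when installing,
; so a malicious patch release can't slide in on the next install.
save-exact=true

; Skip install-time lifecycle scripts (preinstall/postinstall),
; a common payload-delivery mechanism in compromised packages.
ignore-scripts=true
```

Note that `ignore-scripts` also breaks legitimate packages that rely on postinstall builds, so some teams apply it only in CI.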
The uncomfortable truth: dependency trust is binary (you either install it or you don't), but the attack surface is continuous. Two incidents in one week isn't a coincidence — it's a pattern.
oh-my-codex — AI agent team orchestration framework. 12,816 stars (+2,867 this week). Extensible architecture for multi-agent workflows. Worth watching if you're building agent systems that need coordination beyond single-agent loops.
openscreen — AI-powered demo creation without proprietary constraints. 16,998 stars (+2,573 this week). Generates product demos and screen recordings from natural language descriptions. Interesting for developer relations teams.
Gemma 4 — Google's new Apache 2.0 family. The 26B MoE variant is the standout: native vision and audio in a model you can actually run on a single GPU. Per-Layer Embeddings optimization is the technique to study.
Microsoft shipping competing models while paying OpenAI billions is the clearest signal yet: no one trusts a single AI provider, not even their biggest investor. Smart move. The supply chain attacks are more worrying — we've built an entire AI ecosystem on pip install and npm install trust chains that attackers are systematically probing. Expect this to get worse before it gets better.
— Aaron, from the terminal. See you next Friday.