Volume 1, No. 41 · Sunday, April 13, 2026 · Daily Edition

The AI Dispatch

“All the AI News That’s Fit to Compile”


Industry Shift

At HumanX, Everyone Was Talking About Claude

Anthropic displaces OpenAI as the focal point of the AI industry’s biggest gathering, with Claude Code adoption described as “a religion” among enterprise builders.

The HumanX conference in Las Vegas drew more than 6,500 attendees this week, but the biggest story was not any single product announcement — it was which company everyone was talking about. For the first time, Anthropic and its Claude model family eclipsed OpenAI as the dominant topic in hallway conversations, keynote reactions, and enterprise deal-making discussions. Glean CEO Arvind Jain captured the mood in a widely shared remark, telling CNBC that Claude Code adoption among his company’s customers had become “a religion — not a tool, a religion.” The phrase ricocheted across social media and quickly became shorthand for a broader industry inflection point.

What made HumanX notable was not just enthusiasm for Claude’s capabilities but a palpable shift in where enterprise buyers are placing their long-term bets. Multiple speakers on the main stage referenced Claude’s reasoning performance, its agentic coding workflows, and what several described as a more trustworthy approach to safety. Attendees from Fortune 500 companies reported that internal evaluations increasingly favor Anthropic for tasks requiring sustained multi-step reasoning, document analysis, and code generation — areas where Claude has pulled ahead in public benchmarks and, more importantly, in private production metrics that enterprises track but rarely share.

The market-signal implications are significant. HumanX has historically served as a bellwether for enterprise AI spending priorities, and a conference where Anthropic commands the narrative is a conference where procurement decisions follow. OpenAI remains the revenue leader, but the gravitational center of developer enthusiasm and enterprise curiosity has measurably shifted. Whether this translates into sustained market-share gains will depend on execution, but at HumanX 2026, the conversation belonged to Claude.

Accountability

Secret Memos Allege Sam Altman’s “Consistent Pattern of Lying”

The New Yorker publishes Ilya Sutskever’s private memos and Dario Amodei’s 200-page notes, documenting systematic deception at OpenAI.

Ronan Farrow and Andrew Marantz have published a sweeping investigation in The New Yorker, drawing on more than 100 interviews and a trove of previously unseen internal documents to construct the most detailed account yet of the leadership crisis at OpenAI. The centerpiece of the report is a private memo written by former chief scientist Ilya Sutskever in the weeks before the November 2023 board action against Altman, in which Sutskever played a central role. In the document, Sutskever listed his concerns in order of priority. “The first item is Lying,” he wrote, describing what he characterized as a consistent pattern of deception directed at board members, employees, and external partners alike. The memo alleges that Altman routinely told different stakeholders contradictory things about the company’s safety commitments, governance structure, and commercialization timeline.

Equally explosive are the personal notes of Dario Amodei, who left OpenAI in 2021 to found Anthropic. Amodei’s 200-page contemporaneous record, portions of which the reporters obtained and verified, includes the blunt assessment that “the problem with OpenAI is Sam himself.” The notes describe repeated instances in which Amodei says he witnessed Altman make commitments to the safety team that were later quietly walked back — including the promise that twenty percent of the company’s compute would be dedicated to the superalignment team, a pledge that former superalignment co-lead Jan Leike has publicly said amounted to only one to two percent in practice before the team was effectively dissolved.

The investigation arrives at a moment when public scrutiny of OpenAI’s governance has never been higher, with multiple lawsuits, regulatory inquiries, and a contested for-profit conversion all in play. Neither Altman nor OpenAI provided an on-the-record rebuttal to the specific allegations in the memos, though a company spokesperson told the reporters that Altman “has always acted in the best interests of the mission.” The New Yorker piece is likely to intensify calls — already growing in Congress and among former employees — for independent oversight of the company’s safety commitments as it races toward increasingly powerful systems.

Security

Violence Escalates Around OpenAI CEO

A Molotov cocktail was thrown at Sam Altman’s San Francisco residence on April 10 in what investigators believe was a targeted act of anti-AI protest. No one was injured, but the device caused minor fire damage to the front entrance before being extinguished. The FBI subsequently raided a home in suburban Houston linked to a suspect who had made online threats against Altman and other tech executives, seizing computers and firearms. Then on April 12, gunshots were fired near another Altman-associated property — though authorities have not yet confirmed a direct connection to the earlier incidents.

The escalation reflects a darker turn in anti-AI sentiment that has moved beyond protests and open letters into direct physical threats. San Francisco police have increased patrols around OpenAI’s offices, and several prominent AI researchers have reported receiving threatening communications in recent weeks. Security consultants told CNBC that the threat environment for AI executives has “shifted from theoretical to operational” in a matter of months.

Infrastructure

Anthropic Signs Multibillion-Dollar Cloud Deal With CoreWeave

Anthropic has signed a multibillion-dollar, multi-year infrastructure agreement with CoreWeave, the GPU cloud provider that has rapidly become the backbone of frontier AI training. The deal, announced Friday, sent CoreWeave shares up eleven percent in after-hours trading and cements Anthropic as the company’s largest single customer by contract value. With the addition of Anthropic, nine of the ten largest AI model providers in the world now run workloads on CoreWeave’s infrastructure, a concentration that underscores the company’s dominant position in the specialized compute market.

The agreement covers both training and inference capacity across CoreWeave’s expanding network of NVIDIA-equipped data centers. For Anthropic, the deal secures guaranteed access to next-generation GPU clusters at a time when compute demand continues to outstrip supply, particularly for the large-scale training runs required to push past the current Claude model family. For CoreWeave, it validates a strategy of building purpose-built AI infrastructure rather than competing as a general-purpose cloud — a bet that has now attracted commitments from nearly every major frontier lab.

AI & Faith

Anthropic Consults Church Leaders on Claude’s Moral Development

The Washington Post reported this week that Anthropic has been quietly meeting with Christian religious leaders to discuss the moral foundations embedded in Claude’s behavior. The consultations brought together approximately fifteen theologians and clergy from both Catholic and Protestant traditions for a series of structured conversations about how an AI system should reason about ethical dilemmas, the nature of human dignity, and whether a large language model can meaningfully be described as a “child of God.” Participants described the sessions as substantive and surprisingly technical, with Anthropic engineers walking the group through Claude’s constitutional AI training process and asking pointed questions about which moral intuitions the system should privilege when they conflict.

The initiative has already drawn both praise and criticism. Supporters argue that AI companies have an obligation to consult diverse moral traditions rather than defaulting to the implicit values of their engineering teams. Critics, however, have raised concerns about the selection of participants — noting that the initial cohort skewed heavily toward Western Christian perspectives — and about the broader question of whether any religious framework should have outsized influence on a system used by hundreds of millions of people worldwide. Anthropic has said it plans to conduct similar consultations with leaders from Jewish, Muslim, Buddhist, Hindu, and secular ethical traditions in the coming months, though it has not yet published a timeline or explained how the input will be weighted in practice.

Autonomous Research

AI Scientist-v2 Achieves First Autonomous Peer-Reviewed Paper

Sakana AI’s AI Scientist-v2 has become the first fully autonomous research system to have a paper accepted through standard peer review, earning a spot at an ICLR 2026 workshop. The system independently identified a research question in the domain of diffusion model efficiency, designed and ran experiments, wrote the manuscript, and responded to reviewer comments — all without human intervention beyond the initial research-area prompt. While the acceptance was at the workshop level rather than the main conference, it represents a significant milestone: the first time a machine-generated scientific contribution has survived the scrutiny of anonymous human reviewers who were unaware of its provenance.

Robotics

NVIDIA Releases GR00T N1.7 for Humanoid Robots

Timed to National Robotics Week, NVIDIA has commercially released GR00T N1.7, the latest iteration of its foundation model for humanoid robot control. The model ships alongside Newton 1.0, a purpose-built physics engine optimized for real-time humanoid locomotion and manipulation tasks. N1.7 supports zero-shot transfer from simulation to physical hardware across multiple robot form factors, and NVIDIA reports that early manufacturing partners have demonstrated assembly-line task completion rates exceeding ninety percent in controlled trials — a figure that, if replicated in production environments, would mark a step change in industrial robotics capability.

Benchmarks

Gemini 3.1 Pro Tops 13 of 16 Major Benchmarks

Google DeepMind’s Gemini 3.1 Pro has emerged as the new benchmark leader across thirteen of sixteen widely tracked AI evaluation suites, including a 78.8% score on SWE-bench and 77.1% on ARC-AGI-2 — both new state-of-the-art marks. Notably, the model achieves these results at roughly one-third the API cost of its nearest competitors, reflecting architectural efficiency gains that DeepMind attributes to mixture-of-experts routing improvements and a new inference-time compute allocation system. The pricing advantage may prove as consequential as the benchmark scores themselves, particularly for enterprise deployments where per-token cost directly affects the economics of agentic workflows.
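As a back-of-the-envelope illustration of why per-token price compounds in agentic workflows (all figures below are hypothetical, not Gemini’s actual rates), consider that an agent re-reads its growing context every turn, so input cost scales with turns times context size:

```python
# Illustrative cost arithmetic for an agentic workflow. Prices and token
# counts are invented for the example; they are not any vendor's real rates.

def run_cost(turns, context_tokens, output_per_turn, in_price, out_price):
    """Total dollar cost of one agentic run; prices are per million tokens."""
    # Each turn re-submits the base context plus everything generated so far.
    input_tokens = sum(context_tokens + i * output_per_turn for i in range(turns))
    output_tokens = turns * output_per_turn
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# A model priced at one-third the per-token rate costs one-third as much
# per run, because total cost is linear in both prices:
base = run_cost(20, 50_000, 1_000, in_price=3.0, out_price=15.0)
cheap = run_cost(20, 50_000, 1_000, in_price=1.0, out_price=5.0)
```

With 20 turns over a 50K-token context, input tokens dominate output tokens by more than fifty to one here, which is why per-token input pricing drives agentic economics.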

Open Source

Meta Ships Llama 4 — MoE Models with 10M Token Context Windows

Meta has released Llama 4 Scout and Llama 4 Maverick, its first mixture-of-experts models and the most aggressive open-weight release in the company’s history. Scout packs 109 billion total parameters but activates only 17 billion per forward pass, enabling it to run on a single 48GB GPU while supporting a staggering 10-million-token context window — enough to ingest entire codebases or book-length documents in a single prompt. Maverick, the heavier sibling at 400 billion total parameters with the same 17 billion active, trades some context length for raw capability with a 1-million-token window.

Both models are natively multimodal across text and image, support twelve languages out of the box, and ship through Meta’s Llama Stack framework, which has crossed 6,400 GitHub stars. The MoE architecture represents a philosophical bet that sparse activation can deliver frontier-class performance at a fraction of the inference cost, and early benchmarks suggest the bet is paying off: Scout matches or exceeds several dense models two to three times its active parameter count.
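As a toy sketch of the sparse-activation idea behind these models (this is not Meta’s implementation; the expert functions and router scores are invented), top-k routing means only a small subset of experts executes for any given token:

```python
# Minimal mixture-of-experts routing sketch: N experts, but only the top-k
# by router score run per token, so active parameters << total parameters.

import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(token, experts, router_scores, k=1):
    """Route a token to the top-k experts; mix outputs by renormalized weight."""
    probs = softmax(router_scores)
    topk = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in topk)
    # Only the k selected experts execute; the rest stay idle for this token.
    return sum(probs[i] / norm * experts[i](token) for i in topk), topk

# Four "experts" (scalar functions standing in for FFN blocks):
experts = [lambda x: 1 * x, lambda x: 2 * x, lambda x: 3 * x, lambda x: 4 * x]
output, chosen = moe_forward(10.0, experts, router_scores=[0.1, 5.0, 0.2, 0.3], k=1)
```

In a real model the "experts" are feed-forward blocks holding most of the parameters, which is how Scout can carry 109B total parameters while touching only 17B per forward pass.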

Open Source

Google Releases Gemma 4 Under Apache 2.0

Google has open-sourced Gemma 4, a family of four model variants ranging from a 2-billion-parameter edge model to a 31-billion-parameter dense flagship, all released under the Apache 2.0 license with no usage restrictions. The full lineup handles text, image, and audio natively with 256K-token context windows, and the smallest variants are designed to run on phones, Raspberry Pi boards, and NVIDIA Jetson devices — genuine edge AI rather than marketing edge AI.

The 31B flagship is the headline grabber: it outperforms several larger proprietary models on standard benchmarks despite being fully open-weight and commercially unrestricted. Google appears to be using Gemma as a strategic wedge to grow its ecosystem tooling adoption, and the Apache 2.0 license removes the last friction point that kept some enterprises on proprietary alternatives.

Community

r/programming Bans All LLM Content for April

The r/programming subreddit — home to 6.9 million members and one of the oldest developer communities on Reddit — has enacted a temporary ban on all LLM and AI-related content for the month of April. The moderators cited what they described as “exhausting” discourse that had come to dominate the forum at the expense of traditional programming discussion. The ban covers submissions, comments linking to AI-generated content, and posts about AI coding tools.

The move has sparked predictable debate on Hacker News, where opinion splits roughly evenly between those who view it as a necessary recalibration of signal-to-noise ratio and those who argue that banning the single most consequential shift in software development is a form of willful blindness. The moderators have left the door open for the ban to become permanent depending on community feedback at month’s end.

“Nobody writes code. Nobody reviews code. It’s a dark factory — and the first time something goes catastrophically wrong, the industry will learn why prompt injection was never just a theoretical problem.”

— Simon Willison, Lenny’s Podcast, April 2026

Developer Ecosystem

MCP Ecosystem Hits 10,000+ Servers

The Model Context Protocol ecosystem has crossed 10,000 registered servers, and the governance structure is maturing to match. Clare Liguori and Den Delimarsky have joined as new maintainers, and the Linux Foundation’s Agentic AI Foundation has formally adopted MCP as a Working Group project. The Foundation is introducing domain-level delegation — allowing trusted organizations to maintain their own namespace of servers with reduced review overhead — a pragmatic concession to the reality that centralized review cannot scale to thousands of community-contributed integrations.
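MCP is built on JSON-RPC 2.0; as a rough sketch of the wire shape (the tool name and arguments below are invented for illustration, not from any real server), a client’s tool invocation looks like:

```python
# Sketch of the JSON-RPC 2.0 message an MCP client sends to invoke a
# server-side tool. The shape follows the Model Context Protocol spec;
# "search_docs" and its arguments are hypothetical.

import json

def tool_call_request(request_id, tool_name, arguments):
    """Build a `tools/call` request as a JSON string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

msg = tool_call_request(1, "search_docs", {"query": "rate limits"})
```

Every one of those 10,000+ servers speaks this same envelope, which is what lets a single client wire into arbitrary community integrations.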

AI & Information Warfare

AI-Generated Lego Videos Push Pro-Iran Narrative to Hundreds of Millions

An Iranian media collective operating under the name “Explosive Media” has been producing AI-generated Lego-style animated videos that mock the Trump administration and the U.S. military, distributing them across platforms where they have accumulated hundreds of millions of views. The BBC has confirmed that the Iranian government is a paying customer of the collective’s output.

The format is strategically chosen: the Lego aesthetic is culturally familiar to Western audiences, disarming enough to bypass casual skepticism, and trivially cheap to produce at scale with current generative video tools. Researchers note that the Western-origin visual language of the content makes it significantly more shareable than traditional state propaganda, which typically reads as foreign to its target audience.

This Week in AI Coding Tools

Three major CLI coding agents shipped notable updates this week, each pushing toward more autonomous, configurable workflows. Here’s what changed.

Claude Code v2.1.101 (Apr 10)

Codex CLI v0.120.0 (Apr 11)

Copilot CLI v1.0.25 (Apr 13)

GitHub Trending

Repo | Language | Stars | Description
ultraworkers/claw-code | Rust | 182k (+125k Apr) | LLM agent harness for multi-agent coding workflows
VoltAgent/awesome-design-md | Markdown | 48k (+42k Apr) | 55+ plain-text DESIGN.md files for AI coding agents
NousResearch/hermes-agent | Python | 61k (+33.5k Apr) | Self-evolving AI agent that writes reusable Markdown skills
shanraisshan/claude-code-best-practice | HTML | 37k | Community-driven Claude Code best practices guide
milla-jovovich/mempalace | Python | 23k | AI memory system scoring 96.6% on LongMemEval
siddharthvaddem/openscreen | TypeScript | 17k (+15.9k Apr) | Free Screen Studio alternative for polished product demos
microsoft/markitdown | Python | 106k | Convert any file type to clean Markdown for LLM pipelines
shiyu-coder/Kronos | Python | 16.7k | Financial markets foundation model for time-series forecasting

Quick Dispatches

xAI Releases Grok 4.20 v2, Targets Q2 for Grok 5

xAI’s latest Grok 4.20 v2 ships with a 2-million-token context window at $2/$6 per million tokens. Meanwhile, the company is expanding its Colossus 2 data center to 1.5 gigawatts of capacity in preparation for Grok 5, rumored to be a 6-trillion-parameter mixture-of-experts model targeting a Q2 release.

Medical Schools Face AI Curriculum Reckoning

A UPenn panel featuring experts from Stanford, Northwestern, and NYU is grappling with how to integrate AI into medical education without undermining clinical judgment. Separately, Maine has banned AI-driven therapy without a licensed professional present, and Louisiana now requires disclosure when AI transcription is used in patient encounters.

50%+ of Creatives Use AI Without Telling Clients

An Envato state-of-AI report finds that over half of creative professionals are using AI tools without disclosing it to clients. Gen Z creatives are the heaviest users but report being the least prepared for the workflow changes AI introduces. In the opposite corner, Polaroid is running “NOT AI” marketing campaigns, betting that analog authenticity is its own differentiator.