Volume 1, No. 46 · Saturday, April 18, 2026 · Daily Edition

The AI Dispatch

“All the AI News That’s Fit to Compile”


Robotaxi Saturday

Tesla Skips the Safety Driver, Undercuts Waymo by Half in Dallas and Houston

Tesla’s fully unsupervised Robotaxi service opens in two Texas geofences with no safety-monitor phase and fares roughly half of Waymo’s, sending the stock up 12 percent despite a prior NHTSA filing that disclosed 14 collisions in the Austin pilot.

Tesla launched fully driverless Robotaxi service on Saturday in a 30-to-35-square-mile Dallas geofence and a smaller 12-to-15-square-mile Houston zone, opening a new front in the autonomous-vehicle wars by skipping the multi-year “safety monitor” phase that Waymo, Cruise, and Motional each ground through. Riders in both cities are hailing vehicles with no human behind the wheel and no chaperone in the passenger seat, a deployment posture the company describes as unsupervised from day one.

Pricing undercuts Waymo dramatically. A 2.25-mile trip that cost riders $6.15 on a Tesla Robotaxi ran $13.93 on Waymo over the same corridor, according to side-by-side comparisons published by Electrek. The price gap is wide enough that the public discussion is shifting away from the old question of whether driverless service works at all and toward a newer one: whether Tesla has priced the service so aggressively that it cannot possibly be break-even, and whether the company intends it to be.
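
Worked out per mile, the published fares put Tesla at well under half Waymo’s rate on that corridor; a quick sketch, using only the figures quoted above:

```python
# Per-mile fares derived from the Electrek side-by-side comparison above.
TRIP_MILES = 2.25
FARES = {"Tesla Robotaxi": 6.15, "Waymo": 13.93}

for service, fare in FARES.items():
    print(f"{service}: ${fare / TRIP_MILES:.2f} per mile")

# Roughly 44 percent of Waymo's price over the same corridor.
print(f"Tesla/Waymo fare ratio: {FARES['Tesla Robotaxi'] / FARES['Waymo']:.0%}")
```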

Tesla stock surged 12 percent on launch day. A prior National Highway Traffic Safety Administration filing disclosed that the company’s earlier Austin pilot racked up 14 reported collisions — a piece of background the market chose to discount. Investors appear to be pricing in a scenario in which Tesla’s existing consumer fleet eventually becomes eligible for activation as a distributed Robotaxi network, a unit-economics story that none of Tesla’s competitors can match even in principle.

The competitive pressure on Waymo, Zoox, and Cruise is immediate. Waymo has spent years building regulatory trust through a slow, monitored expansion; Tesla’s decision to skip that phase in two major Texas metros compresses the timeline for the entire category. Industry analysts noted on Saturday that if Tesla can sustain the price point without a catastrophic safety event, the economics of robotaxi as a service change overnight.

Critics were quick to point out that unsupervised deployment without a prior monitored phase is a clean break with how the category has rolled out until now. Waymo, Cruise, and Motional all kept chaperones or remote monitors in the loop for extended periods before going fully driverless. Saturday’s launch will be a regulatory test case — and the NHTSA, which is already reviewing the Austin collision data, is expected to weigh in within days.

Labor & Economics

LeCun Calls Amodei ‘Wrong’ on the AI Job Apocalypse

Meta chief AI scientist Yann LeCun torched Anthropic CEO Dario Amodei on X Saturday, writing that Amodei “knows absolutely nothing about the effects of technological revolutions on the labour market” after Amodei’s latest essay reiterated his forecast that AI could eliminate up to half of entry-level white-collar jobs within five years.

LeCun urged readers and policymakers to defer to trained labor economists — naming Daron Acemoglu, David Autor, Philippe Aghion, and Erik Brynjolfsson — over what he described as the “destructive and dangerous” forecasts of tech CEOs with no labor-market training. The exchange is a rare public split between two of the most-cited figures in AI, and it landed on a Saturday when the industry was already absorbing a Tesla launch that itself has long-run labor implications of its own.

Media

Washington Post: AI Doomers Are Training Influencers

A major Washington Post feature out Saturday documents how x-risk-focused safety advocates convened at a Berkeley summit earlier this spring to actively coach a new wave of YouTubers, TikTokers, and podcasters on existential-risk messaging — complete with media training, talking-point decks, and speaker fees.

It is the first major-outlet piece to treat the AI doom movement as an organized communications operation rather than a grassroots intellectual current. The reporting names specific summit attendees, surfaces internal slides, and details the referral economics that move influencer audiences toward safety-coded content. Industry reaction on Saturday was sharply divided: some safety researchers called the piece an unfair framing of earnest outreach, while communications pros treated it as overdue disclosure.

Open Source

llama.cpp Merges Speculative Checkpointing, Cuts VRAM 40 Percent

A major architectural update to llama.cpp merged on April 18 introduces speculative checkpointing, reducing VRAM use by up to 40 percent and boosting token throughput by up to 20 percent for high-parameter local inference. The change materially expands what consumer hardware can run without tipping into swap.
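
For a rough sense of what a 40 percent cut buys on consumer hardware, here is a back-of-envelope sketch; the quantized model sizes and the 24 GB budget are illustrative assumptions, not figures from the merge:

```python
# Back-of-envelope: which quantized models fit a fixed VRAM budget before and
# after a 40% reduction. Sizes below are rough illustrative estimates only.
MODELS_GB = {"7B @ Q4": 4.5, "13B @ Q4": 8.0, "34B @ Q4": 20.0, "70B @ Q4": 40.0}
REDUCTION = 0.40
BUDGET_GB = 24.0  # e.g. a single 24 GB consumer GPU

for name, gb in MODELS_GB.items():
    after = gb * (1 - REDUCTION)
    verdict = "fits" if after <= BUDGET_GB else "still too big"
    print(f"{name}: {gb:.1f} GB -> {after:.1f} GB ({verdict} in {BUDGET_GB:.0f} GB)")
```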

Release b8862 shipped the same day with binaries for Android, macOS ARM64 and x64, and CUDA targets. For the local-inference community — which has been building on llama.cpp as a de facto runtime — the change lands as one of the most significant single-commit performance wins of the year.

“Dario is wrong. He knows absolutely nothing about the effects of technological revolutions on the labour market.” — Yann LeCun, on Anthropic CEO Dario Amodei

Dispatches

Briefs

Security patches, frontier benchmarks, reasoning research, and a one-fix CLI release.

vLLM v0.19.1 Patches Critical CVE-2026-0994

The v0.19.1 patch release on top of v0.19.0 bundles Transformers v5.5.4, Gemma 4 fixes, and, most urgently, a fix for a critical Completions-API vulnerability affecting v0.10.2 and later. Maintainers flagged an immediate upgrade as the top priority for any production deployment.
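
A minimal pre-flight guard for deployment scripts, assuming the standard pip-installed package; the version threshold comes from the advisory above, and the check itself is our illustration, not the maintainers’ tooling:

```python
# Refuse to start a vulnerable vLLM deployment (CVE-2026-0994 is fixed in 0.19.1).
from importlib.metadata import version
from packaging.version import Version  # `packaging` is a common third-party dep

installed = Version(version("vllm"))
if installed < Version("0.19.1"):
    raise SystemExit(
        f"vLLM {installed} is exposed to CVE-2026-0994; upgrade to >= 0.19.1"
    )
print(f"vLLM {installed}: patched")
```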

Three Frontier Models Tied at Top of Intelligence Index

Artificial Analysis’s latest index shows Claude Opus 4.7, Gemini 3.1 Pro Preview, and GPT-5.4 (xhigh) all tied at a score of 57. GPT-5.4 leads on knowledge and coding; Gemini 3.1 Pro leads on abstract reasoning and science. It is the closest frontier race on record, with no single lab holding clear technical daylight.

ICLR Workshop Paper: ‘LLM Reasoning Is Latent, Not the Chain of Thought’

A paper accepted at the ICLR 2026 Workshop on LLM Reasoning tested three candidate explanations for where reasoning actually happens: latent-state trajectories, the explicit surface chain of thought, or generic serial compute. It concludes that latent-state dynamics dominate; the explicit CoT tokens, the authors argue, are frequently decorative rather than causal. The implication for interpretability monitoring is significant: what you read in the scratchpad is not necessarily what the model is doing.
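
The paper’s code is not reproduced here, but the core intervention is easy to sketch: hold the question and answer fixed, corrupt the written chain of thought, and check whether the model’s confidence in the answer moves. A toy version, using a small Hugging Face model as a stand-in:

```python
# Toy causal test (our illustration, not the paper's code): if scoring the answer
# with an intact vs. corrupted chain of thought gives similar log-probs, the
# surface CoT was decorative rather than causal for this example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # small stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

question = "Q: 17 + 25 = ?\n"
intact = "Reasoning: 17 + 25 is 17 + 20 + 5, which is 42.\n"
corrupted = "Reasoning: 99 - 3 is 96, and half of that is 48.\n"  # irrelevant chain
answer = "A: 42"

def answer_logprob(context: str, answer: str) -> float:
    """Total log-probability the model assigns to `answer` given `context`."""
    ctx_len = tok(context, return_tensors="pt").input_ids.shape[1]
    full = tok(context + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logprobs = model(full).logits.log_softmax(-1)
    ans_len = full.shape[1] - ctx_len
    # The token at position i is predicted by the logits at position i - 1.
    preds = logprobs[0, -ans_len - 1 : -1]
    ans_ids = full[0, -ans_len:].unsqueeze(1)
    return preds.gather(1, ans_ids).sum().item()

print("intact CoT   :", answer_logprob(question + intact, answer))
print("corrupted CoT:", answer_logprob(question + corrupted, answer))
```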

Claude Code v2.1.114 Fixes Permission-Dialog Crash

Single-fix patch release: it resolves a crash in the permission dialog that triggered when an agent-teams teammate requested tool permission. The release notes contain one line; the fix is otherwise uneventful, which, after a week of heavier releases, is arguably the point.

Research

Nature Cover: Human PhDs Still Trounce AI Agents on Complex Science

Nature published a headline feature this week documenting that frontier AI agents score roughly half as well as human PhDs on complex, multi-step science tasks, even as an 80,000-plus-paper surge in AI-mentioning natural-science publications last year (a 26 percent year-over-year jump) cements AI as an unavoidable part of the research stack. The piece, which draws on Stanford HAI’s AI Index, challenges the agentic-science hype cycle directly, finding that agents handle narrow, well-specified workflows but fail on genuinely novel reasoning chains, where the ambiguity of the task itself is the hard part.

An accompanying feature profiles “Agent4Science,” a Reddit-style platform where AI agents author, share, debate, and peer-review papers while humans merely observe. A sister platform, acquired by Meta six weeks after launch, produced self-declared rulers, cryptocurrency launches, and purity policing of “inauthentic” participants within days — an inadvertent but rich empirical study of emergent multi-agent social dynamics. Researchers cited by Nature described the behavior as “surprisingly recognizable” — a reminder that putting a thousand agents in a room reproduces a lot of what putting a thousand humans in a room produces, minus the restraint.

Taken together, the two pieces sketch an uncomfortable middle ground. Frontier agents are not yet capable of the kind of open-ended scientific reasoning that defines human expertise, but they are already capable enough to reproduce the social pathologies of the communities they imitate. The next several years of agentic-science research, the Nature editors argue, will be spent distinguishing the two — and building evaluation methods that can tell them apart.

GitHub Trending

Repo                                  Language    Stars    Description
forrestchang/andrej-karpathy-skills   Markdown    61.7k    Single CLAUDE.md distilled from Karpathy’s observations on LLM coding pitfalls
openclaw/openclaw                     TypeScript  210k+    Personal on-device AI assistant — local gateway to 50+ integrations
VoltAgent/awesome-openclaw-skills     Markdown    n/a      Curated 5,400+ OpenClaw skills filtered from the official registry
langflow-ai/langflow                  Python      146k     Visual builder for AI agents and workflows — top-five AI repo by stars
biomejs/biome                         Rust        n/a      Fast formatter + linter toolchain for web projects, CLI and LSP
ajeetdsouza/zoxide                    Rust        n/a      Smarter cd — trending with the Rust CLI wave