Industry Shift
At HumanX, Everyone Was Talking About Claude
Anthropic displaces OpenAI as the focal point of the AI industry’s biggest gathering, with Claude Code adoption described as “a religion” among enterprise builders.
Sources: cnbc.com • techcrunch.com • dataconomy.com
The HumanX conference in Las Vegas drew more than 6,500 attendees this week, but the biggest story was not any single product announcement — it was which company everyone was talking about. For the first time, Anthropic and its Claude model family eclipsed OpenAI as the dominant topic in hallway conversations, keynote reactions, and enterprise deal-making discussions. Glean CEO Arvind Jain captured the mood in a widely shared remark, telling CNBC that Claude Code adoption among his company’s customers had become “a religion — not a tool, a religion.” The phrase ricocheted across social media and quickly became shorthand for a broader industry inflection point.
What made HumanX notable was not just enthusiasm for Claude’s capabilities but a palpable shift in where enterprise buyers are placing their long-term bets. Multiple speakers on the main stage referenced Claude’s reasoning performance, its agentic coding workflows, and what several described as a more trustworthy approach to safety. Attendees from Fortune 500 companies reported that internal evaluations increasingly favor Anthropic for tasks requiring sustained multi-step reasoning, document analysis, and code generation — areas where Claude has pulled ahead in public benchmarks and, more importantly, in private production metrics that enterprises track but rarely share.
The market-signal implications are significant. HumanX has historically served as a bellwether for enterprise AI spending priorities, and a conference where Anthropic commands the narrative is a conference where procurement decisions follow. OpenAI remains the revenue leader, but the gravitational center of developer enthusiasm and enterprise curiosity has measurably shifted. Whether this translates into sustained market-share gains will depend on execution, but at HumanX 2026, the conversation belonged to Claude.
Accountability
Secret Memos Allege Sam Altman’s “Consistent Pattern of Lying”
The New Yorker publishes Ilya Sutskever’s private memos and Dario Amodei’s 200-page notes, documenting systematic deception at OpenAI.
Sources: techbrew.com • semafor.com • tomsguide.com
Ronan Farrow and Andrew Marantz have published a sweeping investigation in The New Yorker, drawing on more than 100 interviews and a trove of previously unseen internal documents to construct the most detailed account yet of the leadership crisis at OpenAI. The centerpiece of the report is a private memo written by former chief scientist Ilya Sutskever in the weeks before the November 2023 board action against Altman, in which Sutskever played a central role. In the document, he listed his concerns in order of priority. “The first item is Lying,” he wrote, describing what he characterized as a consistent pattern of deception directed at board members, employees, and external partners alike. The memo alleges that Altman routinely told different stakeholders contradictory things about the company’s safety commitments, governance structure, and commercialization timeline.
Equally explosive are the personal notes of Dario Amodei, who left OpenAI in 2021 to found Anthropic. Amodei’s 200-page contemporaneous record, portions of which the reporters obtained and verified, includes the blunt assessment that “the problem with OpenAI is Sam himself.” The notes describe repeated instances in which Amodei says he witnessed Altman make commitments to the safety team that were later quietly walked back — including the promise that twenty percent of the company’s compute would be dedicated to the superalignment team, a pledge that former superalignment co-lead Jan Leike has publicly said amounted to only one to two percent in practice before the team was effectively dissolved.
The investigation arrives at a moment when public scrutiny of OpenAI’s governance has never been higher, with multiple lawsuits, regulatory inquiries, and a contested for-profit conversion all in play. Neither Altman nor OpenAI provided an on-the-record rebuttal to the specific allegations in the memos, though a company spokesperson told the reporters that Altman “has always acted in the best interests of the mission.” The New Yorker piece is likely to intensify calls — already growing in Congress and among former employees — for independent oversight of the company’s safety commitments as it races toward increasingly powerful systems.