Controversy
Anthropic Faces Backlash Over “Nerfed” Claude Performance
Power users and developers report that Claude increasingly fails to follow complex instructions and takes shortcuts — traced to a quiet change reducing the model’s default effort level to conserve tokens.
Anthropic is facing a growing backlash from its most dedicated users after reports surfaced that the company quietly reduced Claude’s default “reasoning effort” level to medium — a change that conserves tokens and lowers compute costs but leaves many complex tasks half-finished. Developers who rely on Claude for multi-step coding workflows, document analysis, and agentic pipelines say the model now frequently takes shortcuts it never used to, skipping edge cases, truncating outputs, and failing to follow detailed instructions that it previously handled with ease. The complaints, which first gained traction on Reddit and Hacker News before being picked up by Fortune and VentureBeat, paint a picture of a model that feels noticeably “dumber” to the people who use it most intensively — even as Anthropic’s marketing continues to tout Claude’s benchmark-leading performance.
What has turned frustration into genuine anger is the lack of transparency. Anthropic never publicly announced the effort-level change, and users had to discover it through trial and error or by noticing that explicitly setting the effort parameter to “high” restored the behavior they had come to expect. The pattern — degrading default performance while leaving a hidden toggle for those who know where to look — strikes many as antithetical to the trust-first brand that Anthropic has carefully cultivated. Critics on social media have drawn comparisons to the “shrinkflation” tactic common in consumer goods: same packaging, less product inside. Several prominent developers have publicly questioned whether, ahead of a rumored IPO, the company is now prioritizing margin optimization over the user experience that built its reputation.
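For developers affected by the change, the workaround users describe is to set the effort level explicitly on every request rather than rely on the new default. A minimal sketch of what such a request body might look like, assuming a top-level "effort" field as described in user reports (the exact field name, accepted values, and placement are assumptions here, not confirmed API documentation, and should be checked against Anthropic's own docs):

```python
import json

def build_request(prompt: str, effort: str = "high") -> str:
    """Build a JSON request body that pins the effort level explicitly.

    The "effort" field and its values ("high", "medium", "low") are
    assumptions drawn from the user reports in this article, not a
    confirmed API schema.
    """
    body = {
        "model": "claude-example",  # placeholder model name for illustration
        "max_tokens": 1024,
        # Per user reports, "high" restores the pre-change behavior;
        # omitting the field now reportedly falls back to "medium".
        "effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)
```

The point of the sketch is simply that the fix is a one-line, per-request override: nothing in the reported change removed the capability, it only changed what happens when the field is left unset.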
The timing is particularly awkward. Anthropic is widely reported to be preparing for a public offering, and the controversy raises uncomfortable questions about whether the compute constraints behind the effort-level change reflect genuine infrastructure limitations or a deliberate choice to improve unit economics before going to market. In a company blog post that addressed the issue only obliquely, Anthropic said it “continuously tunes default parameters to balance quality and efficiency” and pointed users to the API documentation for adjusting effort levels. For a company that has built its identity on being the responsible, transparent alternative to OpenAI, the evasive response has done little to quell the growing sense that Anthropic’s actions are diverging from its stated values.