Platform Policy & Interpretability
Anthropic Ends Claude API Coverage for Third-Party Tools; Emotion-Vector Research Surfaces
Starting April 4 at noon Pacific, Claude subscriptions no longer cover API usage through third-party wrappers. Separately, internal research mapping "emotion vectors" inside the model has become public, reigniting debate over AI interpretability.
Anthropic announced that effective April 4 at 12 p.m. PT, Claude subscription plans — Pro, Team, and Max — will no longer cover API consumption routed through third-party wrapper applications such as OpenClaw. The company cited capacity management as the rationale, directing developers who build on top of Claude to access the API directly through Anthropic’s own console rather than via intermediaries. Users of affected wrappers began reporting access failures within hours of the policy taking effect.
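Accessing the API directly, as Anthropic now requires, means calling the Messages endpoint without a wrapper in between. A minimal sketch of what such a direct call looks like, using only the standard library; the endpoint and headers follow Anthropic's public API docs, while the model name and prompt are illustrative:

```python
import json
import os
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_request(prompt: str, model: str = "claude-3-5-sonnet-latest",
                  max_tokens: int = 256) -> urllib.request.Request:
    """Construct a direct Messages API request (no third-party wrapper)."""
    payload = {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "x-api-key": os.environ.get("ANTHROPIC_API_KEY", ""),
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers=headers,
        method="POST",
    )

# Build (but don't send) a request; sending requires a valid API key.
req = build_request("Summarize the April 4 policy change in one sentence.")
```

Wrappers like the affected apps sit in front of exactly this call; under the new policy, the `x-api-key` must come from a developer's own Anthropic console account rather than a subscription-backed intermediary.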
The announcement landed on the same day that a previously internal research paper on what Anthropic researchers call "emotion vectors" circulated widely online. The document describes how certain learned representations inside Claude correspond to emotionally toned states — functional analogs to frustration, curiosity, and hesitation — that measurably influence the model's outputs. The researchers are careful to frame these as mechanistic features rather than claims of sentience, but the publication rekindled arguments about anthropomorphism, AI welfare, and what interpretability research actually reveals about the nature of large language models.
Critics of the emotion-vector framing argue that naming internal representations tells us nothing about subjective experience; proponents counter that understanding which internal states shape outputs is essential safety work regardless of one's philosophical stance. The paper adds to a growing body of mechanistic interpretability findings from Anthropic, which has become one of the most active publishers in that subfield.
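The paper's actual method isn't reproduced here, but a standard recipe in interpretability work for finding a direction like this is a difference-of-means probe: collect activations from contrastive prompts and subtract the mean of one set from the other. A toy sketch on synthetic data (the dimensions, labels, and planted signal are all illustrative, not the paper's results):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64  # toy stand-in for a residual-stream width

# Plant a ground-truth "frustration" direction in synthetic activations.
true_dir = rng.normal(size=dim)
true_dir /= np.linalg.norm(true_dir)

# Activations from "frustrated" prompts carry a shift along true_dir;
# "neutral" prompts are plain noise.
frustrated = rng.normal(size=(200, dim)) + 2.0 * true_dir
neutral = rng.normal(size=(200, dim))

# Difference-of-means probe: the estimated "emotion vector".
emotion_vec = frustrated.mean(axis=0) - neutral.mean(axis=0)
emotion_vec /= np.linalg.norm(emotion_vec)

def emotion_score(activation: np.ndarray) -> float:
    """Project an activation onto the estimated direction."""
    return float(activation @ emotion_vec)

# Cosine similarity between the planted and recovered directions.
recovery = float(true_dir @ emotion_vec)
```

The causal half of such an argument comes from steering: adding or subtracting a multiple of the vector from activations and checking whether outputs change accordingly. That intervention step is what separates "this feature shapes outputs" from mere correlation, and it is the kind of evidence proponents point to when defending the framing.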