Anthropic acknowledged issues with its AI tool, Claude Code, after receiving user complaints about declining performance over several weeks. The company denied any intentional degradation, stating that it never “nerfed” the model. Following a detailed review, Anthropic identified three main issues that contributed to the negative feedback from users.
In a blog post, the company confirmed that the underlying model remained unaffected; the problems instead stemmed from adjustments made at the product level. On April 20, Anthropic announced that it had fixed the identified issues and put measures in place to prevent recurrence.
The complaints coincided with a run of accolades for Anthropic, which recently reached a $1 trillion valuation on secondary markets. Some users speculated that the performance decline was intentional, a claim Anthropic rejected outright: “We take reports about degradation very seriously. We never intentionally degrade our models,” the company stated.
Critical feedback included comments from GitHub user Stella Laurenzo, who described Claude Code as unreliable for complex engineering tasks. Similar complaints surfaced on Reddit, where users labeled the tool “lazy” and “ignorant.” Anthropic’s developers acknowledged the drop in quality, stating, “Over the past month, some of you reported Claude Code’s quality had slipped.” They conducted an investigation and released a post-mortem detailing their findings.
The identified problems included a change to Claude Code’s default thinking level, a bug introduced by a cache-optimization tweak, and an adjustment aimed at reducing verbosity. To address these concerns, Anthropic plans to increase internal usage of the public build, enhance its code review tools, and enforce stricter controls on system prompt changes.
The company has reset usage limits for all subscribers and expressed gratitude for the feedback from users.
The scrutiny followed a test Anthropic ran on Tuesday in which it considered removing Claude Code from the Pro plan, affecting about 2% of new users. The company stated that this was merely an experiment, not a sign of lasting changes.
