A frustrating situation recently came to light: an Anthropic Claude Max subscriber was hit with unexpected 'Extra Usage' charges and then stonewalled by an unresponsive support system. The experience, shared by Nick Vecchioni, highlights a critical challenge for AI companies: balancing advanced AI capabilities with reliable human customer service when things go wrong.
The Unexplained Charges
In early March, Nick Vecchioni, a Claude Max subscriber, noticed roughly $180 in 'Extra Usage' charges. These appeared as 16 separate invoices of $10 to $13 each, all within a few days (March 3-5). The perplexing part? Vecchioni states he wasn't using Claude during this period; he was away from his laptop. Yet his usage dashboard incorrectly showed his session at 100%, despite minimal recorded activity: just two tiny sessions totaling under 7KB on March 5, and no activity at all on March 3 or 4. That usage clearly couldn't account for charges of this size.
Vecchioni isn't alone. This issue appears to be widespread, with other Max plan users reporting similar experiences. GitHub issues like claude-code#29289 and claude-code#24727, along with posts on r/ClaudeCode, describe identical behavior: usage meters displaying incorrect values and erroneous 'Extra Usage' charges accumulating.
The AI-Only Support Wall
Facing these unexplained charges, Vecchioni reached out to Anthropic support on March 7 with a detailed email and evidence. Within minutes, he received an automated reply from "Fin AI Agent, Anthropic’s AI Agent." The AI agent directed him to an in-app refund request flow, which, unfortunately, is designed only for subscriptions and not applicable to 'Extra Usage' charges. Furthermore, Vecchioni sought clarity on why the error occurred, not just a refund.
His subsequent reply, requesting to speak to a human, was met with another automated message:
Thank you for reaching out to Anthropic Support. We’ve received your request for assistance.
While we review your request, you can visit our Help Center and API documentation for self-service troubleshooting. A member of our team will be with you as soon as we can.
That was on March 7. Vecchioni followed up on March 17, then again on March 25, and once more on April 8. Over a month later, there has been no human response or resolution.
The Stark Irony
This situation presents a powerful irony: Anthropic, a leader in AI development and creator of one of the world's most capable AI assistants, appears to rely on an AI-only support system that fails at its most basic task: addressing critical customer issues. While AI-assisted support can be highly efficient for common queries, an AI-only approach, especially one that acts as an impenetrable wall between customers and resolution for complex problems, is deeply problematic.
For developers and power users who rely on these services, the inability to connect with a human for billing discrepancies or technical issues can erode trust and significantly impact their work. This incident underscores the vital need for a robust human escalation path in any customer support system, no matter how advanced the AI at its core.
As AI continues to integrate into every facet of our digital lives, ensuring that support infrastructure keeps pace with product sophistication, complete with effective human oversight, remains paramount for customer satisfaction and trust.