There's No Speed Test for Intelligence - and Anthropic Knows It

Source: DEV Community
I pay $200/month for Anthropic's Claude Max 20x tier. I run a team of Claude Code agents building GPU compute transpilers, ML inference engines, and P2P networking libraries in C#. The same model — Claude Opus 4.6 — wrote a 6-backend GPU transpiler with 1,500+ tests and zero failures, and found a memory ordering bug in V8 that Google confirmed. That was on "High" effort, back when "High" was the ceiling. Then Anthropic quietly added "Max" above it.

What Happened

In late March 2026, Anthropic introduced a new effort tier without notification. "High" — previously the maximum reasoning level — was silently redefined as something less than maximum. The model didn't get smarter. Your existing tier got dumber.

Since then, GitHub issue #38335 has accumulated 410+ comments from paying customers. Max 20x subscribers ($200/month) report hitting usage limits after 3–5 prompts. Sessions that lasted 5 hours now last 30 minutes. A singl