Why Your AI Agents Are Only Half as Smart as They Could Be

Source: DEV Community
You hand an AI agent a GitHub Issue. It reads it, writes code, opens a PR, and passes CI. Impressive. You feel productive.

Then a new engineer joins. They read every PR for two weeks. They still don't understand why the system is shaped the way it is. They ask you. You explain. The explanation disappears into Slack.

This is not an onboarding problem. It is a structural problem. And AI agents make it worse.

The Invisible Starting Point

There is a post going around about a startup that built 21 AI agents in two months. GitHub Issue triggers a label. Label triggers an agent. Agent writes code, opens PR, passes review, merges. The human writes the Issue and goes to sleep. By morning, the PR is ready.

It reads like the future. And in many ways it is. But one thing is missing from the entire article: where does the Issue come from?

Someone's head. Specifically, one person's head. That person holds the product strategy, the architectural decisions, the things that were tried and abandoned, th