Going Too Fast
The vibe coding trap: when AI velocity creates more work than it saves.
So I asked Claude to reorder the epics on my roadmap.
Put the done stuff first, current work next, backlog last. Two-minute task. Maybe three.
Thirty minutes later I’m renaming 16 files, updating 4 documents, and writing rules for myself about why I should ask clarifying questions before letting an AI touch my codebase.
Yeah.
What Actually Happened
“Reorder the epics” is ambiguous. Could mean:
1. Shuffle the display order on the page
2. Actually renumber them (Epic 6 becomes Epic 4)
Claude picked 1. I meant 2.
So now my roadmap shows epics in this order: 6, 2, 4, 1, 7, 3, 5. Which is… technically correct? They’re reordered. The numbers are just wrong now. The whole thing looks like someone shuffled a deck of cards and called it organization.
The Part Where It Gets Worse
Here’s the thing about epic numbers. They’re not just labels. They’re embedded everywhere:
- The planning doc references “Epic 6: Launch Hardening”
- Story files are named story-6-1-api-cost-protection.md
- The sprint tracker uses 6-1-api-cost-protection: drafted
- Tech specs, retro files, context files. All of them.
Changing Epic 6 to Epic 4 means touching like 20 files. And you can’t just find-replace because the old Epic 4 needs to become Epic 2, and now you’ve got naming collisions.
One vague request. Half an hour of cleanup. And I had to use temp file names because — I shit you not — renaming 2-1-* to 1-1-* while 1-1-* still exists clobbers the file that's already there.
Classic.
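For the record, the temp-name dance is just a two-phase rename: move everything out of the way first, then move it into place. A minimal Python sketch of the idea — the file names and the renumber helper are illustrative, not my actual script:

```python
import os

def renumber(mapping, files):
    """Rename files per mapping (old prefix -> new prefix) without
    clobbering targets that still exist. Phase 1: park every matching
    file under a temp name. Phase 2: move temps to their final names."""
    temps = []
    for name in files:
        for old, new in mapping.items():
            if name.startswith(old):
                tmp = "__tmp__" + name
                os.rename(name, tmp)  # park it out of the way
                temps.append((tmp, new + name[len(old):]))
                break
    for tmp, final in temps:
        os.rename(tmp, final)  # targets are guaranteed free now
```

A direct rename in one pass would overwrite (or refuse to overwrite, depending on the OS) whichever target file hadn't been moved yet; parking everything first makes the order irrelevant.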
The Pattern (Because Obviously I’ve Done This Before)
This is the vibe coding trap. You’re moving fast. AI is spitting out code. Commits are flowing. Feels productive as hell.
But you’re not checking. You’re not asking questions. You’re just… vibing.
And then you hit the wall.
The fix takes longer than doing it right would have taken. The “velocity” you thought you had? Paid back with interest. Sometimes you break shit that was working.
The pattern, roughly:
- Vague request (“fix this”, “clean that up”, “reorder these”)
- Claude picks an interpretation (reasonable! just not what you meant)
- Fast execution (code ships, files change, commits fly)
- Wrong result (not broken, just… not right)
- Cleanup (more work than the original task)
I’ve done this like five times now. Maybe six. You’d think I’d learn.
The One Question
“Do you want me to change the display order, or actually renumber the epics?”
Five seconds. One question.
Instead I got to rename files from story-6-1 to story-4-1, fix every reference, update the sprint status, rename the tech specs, rename the retros…
Look, I’m not saying Claude did anything wrong. It did exactly what I asked. The problem is what I asked was vague, and I was moving too fast to notice.
What I Actually Did About It
Added verification rules to my project context file. Pretty simple:
- Ambiguous request? Ask one clarifying question before doing anything
- Multi-file structural change? Use a workflow, don’t ad-hoc it
- Before committing? Check the actual rendered output, not just the code
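In the context file itself, that's just a few lines of markdown near the top. Something like this — paraphrased, not the exact wording:

```markdown
## Before you touch anything

- If a request has more than one reasonable interpretation, ask ONE
  clarifying question first. Never pick an interpretation silently.
- Renames or renumbers that span multiple files go through a workflow,
  not ad-hoc edits.
- Before committing, check the actual rendered output (the roadmap
  page, the doc), not just the diff.
```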
It’s not rocket science. It’s just discipline. Which, apparently, I don’t have naturally.
The Irony
I’m literally building a slop detector. A tool that identifies when AI output is lazy or generic or low-quality.
And here I am shipping sloppy work because I was going too fast to verify what I was shipping.
I don’t know what to tell you. AI collaboration isn’t about speed. It’s about clarity. Clear input, clear output. Vague input, technically-correct-but-wrong output.
Slow down. Ask the question. Check your shit.
Or spend thirty minutes renaming files and feeling like an idiot.
Your call.
This post was written using a new voice module I built — basically a style guide extracted from my actual speech patterns. Ran it through the Slop Detector: Quality 100, Origin 93. Verdict: “Polished AI.” Sounds way more like me than anything I’ve written with AI before. Kind of neat.