Stop Calling It Vibe Coding

AI Engineering

Everyone keeps calling it vibe coding. I think that's the wrong term, and the wrong frame. What's actually happening is something far more interesting: agentic engineering.

Right now, I'm running multiple AI agents across multiple terminals, working on four-plus projects simultaneously, twelve or more hours a day. This isn't me casually prompting an LLM and hoping for the best. This is engineering with agents as collaborators.

And it's getting better at a pace that's hard to overstate. In December, I didn't think coding agents were very good. Two months later, they're amazing. That rate of improvement changes everything.

Why the Name Matters

"Vibe coding" implies you're just vibing—throwing prompts at a model and accepting whatever comes back. It makes it sound unserious. It invites the criticism that you don't know what you're doing.

That criticism isn't entirely wrong. If all you do is prompt and accept, you will build fragile software. The happy path will work. Everything else will break.

"Agentic engineering isn't about removing the engineer. It's about changing what the engineer focuses on."

When I work with AI agents, my job shifts. I spend less time writing individual lines of code and more time on the things that actually determine whether software succeeds: documentation, specification, architecture, and testing.

Where It Breaks

The biggest risk with letting agents do the heavy lifting is getting comfortable. Too comfortable. You start trusting the output without questioning it, and that's where security holes and edge cases slip through.

I've seen it in my own work. The agent produces something that looks clean, passes a quick review, and ships. Then a week later you find an edge case that should have been obvious—but you didn't catch it because you were moving fast and the code looked right.

The fix isn't to stop using agents. It's to invest more in the things that catch problems before they ship:

  • Documentation and spec. The clearer your spec, the better the agent's output. Garbage in, garbage out still applies.
  • End-to-end testing. Build a testing loop where the agent can validate and correct its own work. This is the single highest-leverage investment you can make.
  • Refactor often. Agents produce functional code, but it accumulates cruft fast. Regular refactoring isn't optional—it's maintenance.

What I'd Tell a Non-Technical Founder

If you're a non-technical founder thinking about building with AI coding tools: go for it. Seriously. The barrier to entry has never been lower, and the tools have never been better.

But start with one thing and ship it. Don't try to build your entire platform in a weekend just because the agent makes it feel possible. That's how you end up with a broad, shallow product that breaks everywhere.

Pick the smallest version of your idea that solves a real problem. Build that. Ship it. Learn from real users. Then expand.

The Real Shift

What's actually happening isn't that coding is getting easier. It's that the role of the builder is changing. The best agentic engineers I know aren't the best prompters—they're the best specifiers, the best testers, the best architects.

They write clear documentation. They build feedback loops. They review what the agent produces with the same rigor they'd apply to a junior developer's pull request. They refactor constantly.

That's not vibing. That's engineering—with better tools.