Jack Stephen · 6 min read

From Vibe Coding to Agentic Engineering

Vibe coding was supposed to democratise software development. Describe what you want, get working code, ship it. No CS degree required. And for a while, it looked like that promise was real. The category hit $4.7 billion with a 38% CAGR, and 92% of US developers reported using AI coding tools daily.

Then the bugs arrived. And the security vulnerabilities. And the mounting evidence that code nobody understands is code nobody can maintain.

In February 2026, Andrej Karpathy declared vibe coding 'passé' and introduced a new term: agentic engineering. The shift isn't cosmetic. It reflects a fundamental rethinking of how AI should fit into the software development process.

What Was Vibe Coding, Exactly?

The term came from Karpathy himself in early 2025. The idea: you describe what you want in natural language, an AI generates the code, and you accept or reject the output based on whether it feels right. You don't read every line. You don't necessarily understand the implementation. You trust the vibes.

For prototyping, this worked brilliantly. Need a landing page? A CRUD app? A quick data visualisation? Vibe coding could produce something functional in minutes. 63% of vibe coding users were non-developers, according to Colan Infotech's 2026 analysis. Product managers, designers, and founders were suddenly building working software.

The problem is that prototypes and production systems have different requirements. And the gap between them is where vibe coding fell apart.

What Went Wrong?

The data is stark. AI co-authored code produces 1.7 times more major issues and 2.74 times more security vulnerabilities than human-written code. That's not a marginal difference. That's a categorical failure in quality.

Three specific problems kept recurring:

Nobody reads the code. That's the point of vibe coding, and it's also the fatal flaw. When you accept code without understanding it, you accept bugs, security holes, and architectural decisions you can't reason about later. A function that works today might fail silently under edge cases nobody tested because nobody knew what the function actually did.

Context windows aren't architecture. AI coding tools generate code within a context window. They don't understand your system's architecture, your deployment constraints, your data model's evolution over three years, or why that legacy endpoint exists. They produce locally correct code that's globally incoherent. Stitch enough of it together and you get a codebase that works but can't be modified, debugged, or scaled.

Security is an afterthought. Language models optimise for functionality, not security. They'll generate SQL queries without parameterisation, API endpoints without authentication checks, and client-side validation without server-side enforcement. Not because they can't do better, but because the prompts rarely ask them to. And vibe coders, by definition, aren't reviewing the output for security issues they don't know to look for.
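The SQL parameterisation point is worth making concrete. A minimal, hypothetical sketch (using Python's standard sqlite3 module and an in-memory table invented for illustration) shows why string interpolation is the failure mode AI-generated queries often fall into:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# A classic injection payload arriving as "user input"
user_input = "alice' OR '1'='1"

# Vulnerable pattern: interpolation makes the input part of the SQL itself,
# so the OR clause is executed and every row matches
vulnerable = f"SELECT * FROM users WHERE name = '{user_input}'"
print(len(conn.execute(vulnerable).fetchall()))  # 1 (the whole table)

# Safe pattern: a placeholder keeps the input as data, never as SQL,
# so the literal string "alice' OR '1'='1" matches no row
safe = "SELECT * FROM users WHERE name = ?"
print(len(conn.execute(safe, (user_input,)).fetchall()))  # 0
```

The fix is one character of syntax, which is exactly why it's easy for a prompt to omit and for a non-reviewing user to miss.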

This is the pattern we see across AI projects that fail: the technology works, but the process around it doesn't account for what happens after the demo.

What Is Agentic Engineering?

Agentic engineering treats AI as an engineering partner, not a code generator. The human defines the architecture, the constraints, the quality standards, and the test criteria. The AI handles implementation within those boundaries. The human reviews, refines, and owns the result.

The distinction matters in three specific ways:

The human stays in the loop architecturally. An agentic engineering workflow starts with a human designing the system: data models, API contracts, component boundaries, error handling strategies. The AI implements within that design. It doesn't make architectural decisions. It executes them.
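What "the AI implements within that design" looks like in practice: the human pins down the contract, and the AI fills in implementations behind it. A hypothetical sketch in Python (the Invoice domain and names are invented for illustration):

```python
from dataclasses import dataclass
from typing import Protocol


# Human-authored contract: names, types, and error behaviour are fixed.
@dataclass(frozen=True)
class Invoice:
    id: str
    amount_cents: int


class InvoiceStore(Protocol):
    def get(self, invoice_id: str) -> Invoice:
        """Raises KeyError if the invoice does not exist."""
        ...

    def save(self, invoice: Invoice) -> None: ...


# The kind of implementation an AI might generate inside that boundary.
# It can be swapped, reviewed, or regenerated without touching the contract.
class InMemoryInvoiceStore:
    def __init__(self) -> None:
        self._rows: dict[str, Invoice] = {}

    def get(self, invoice_id: str) -> Invoice:
        return self._rows[invoice_id]

    def save(self, invoice: Invoice) -> None:
        self._rows[invoice.id] = invoice
```

The architectural decisions (frozen value object, KeyError on missing rows, a storage interface the rest of the system depends on) all live in the contract the human wrote, not in the generated class.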

Testing is built into the workflow. Rather than generating code and hoping it works, agentic engineering tools generate code and tests together. The AI proposes an implementation, the tests verify it, the human reviews both. If the tests fail, the AI iterates. This is closer to test-driven development than to autocomplete.
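A minimal sketch of that loop, with invented names: the human writes the acceptance test first, and the AI's implementation has to pass it before the diff is even worth reviewing.

```python
def slugify(title: str) -> str:
    """AI-proposed implementation, constrained by the test below."""
    return "-".join(title.lower().split())


# Human-authored contract: if this fails, the AI iterates, not the human.
def test_slugify():
    assert slugify("Agentic Engineering") == "agentic-engineering"
    assert slugify("  From Vibe   Coding ") == "from-vibe-coding"


test_slugify()
```

The test encodes the judgment call (collapse whitespace, lowercase everything); the implementation is disposable.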

The AI understands context beyond the current file. Modern agentic coding tools like Claude Code operate across entire codebases. They read your existing code, understand your patterns, respect your conventions, and produce implementations that fit within the larger system. That's fundamentally different from generating isolated snippets in a chat window.

How Does This Change the Way We Build Software?

At Valentis, we've been building with agentic engineering tools for months. The workflow looks nothing like vibe coding. It looks like pair programming with a very fast, very tireless partner who needs clear direction.

A typical session: define the feature requirements and acceptance criteria. Outline the architecture. Point the AI at the relevant code. Let it implement. Review the diff, not the vibes. Run the tests. Iterate on specifics. The AI handles the mechanical work. The human handles the judgment calls.

The productivity gain is real, but it's not where people expect it. The speed increase isn't in writing code. It's in exploring solutions, generating test cases, refactoring existing code, and handling the tedious-but-important work that engineers typically defer. Documentation. Error handling. Edge cases. The stuff that makes software reliable rather than merely functional.

The engineers who benefit most from agentic engineering are experienced ones. They know what good code looks like. They can evaluate AI output critically. They can spot when the AI has made a subtle architectural mistake that would compound over time. Junior developers using these tools without supervision reproduce the vibe coding problem with better tooling.

Where Does This Leave Non-Developers?

Vibe coding's most interesting contribution was bringing non-developers into software creation. That doesn't go away. It evolves.

The tools are getting better at maintaining quality without requiring the user to understand implementation details. Guardrails, automated testing, and opinionated frameworks catch more of the problems that vibe coding ignored. A product manager building an internal tool with Lovable or Bolt today gets a significantly better result than the same person got twelve months ago.

But the ceiling is still there. Non-developers can build prototypes, internal tools, and simple applications. Production software that handles real data, real users, and real edge cases still needs engineering discipline. The tools reduce the gap. They don't close it.

The practical implication for businesses: agentic engineering tools make your existing engineering team dramatically more productive. They don't replace the need for an engineering team. If you're building anything that matters, you still need people who understand what the AI is producing and can take responsibility for it.

That's a less exciting pitch than 'anyone can code now.' It's also true. And in software, true tends to win over exciting given enough time and enough production incidents.

If you're figuring out how AI-assisted development fits into your team's workflow, that's a conversation we have regularly.

Contributors

Jack Stephen
Founder, Valentis AI