
Vibe Coding and AI-Generated Infrastructure: The Promise and Peril

Vibe coding has achieved near-universal adoption: 92% of US developers use AI tools daily, and 41% of code is AI-generated. However, AI code contains 1.7x more major issues and 2.74x more security vulnerabilities, 63% of developers spend longer debugging AI code than writing it manually, 75% of enterprises face AI-driven technical debt, 28.65M secrets were leaked in public commits, and developer trust has collapsed to 33%. Karpathy has declared vibe coding passé and proposed agentic engineering. Organizations must audit their tools, implement AI-specific security scanning, and require human review.

DevOps & Platform Eng
Thought Leadership
10 min read

Vibe coding has achieved near-universal adoption yet introduced unprecedented risks to enterprise software quality and security. 92% of US developers now use AI coding tools daily, and 41% of all code globally is AI-generated. However, AI co-authored code contains 1.7 times more major issues than human-written code, and security vulnerabilities appear 2.74 times more frequently. Furthermore, 63% of developers report spending more time debugging AI-generated code than writing it manually. Even Andrej Karpathy, who coined the term in February 2025, declared vibe coding “passé” by February 2026 and proposed “agentic engineering” as its mature successor. In this guide, we break down the promise and peril of vibe coding, what the quality data shows, and how organizations should adopt AI-assisted development responsibly.

92%
of US Developers Use AI Coding Tools Daily
2.74x
More Security Vulnerabilities in AI Code
63%
Spend More Time Debugging AI Code Than Writing

The Rise of Vibe Coding in Enterprise Development

Vibe coding describes an approach where programmers describe what they want in natural language and AI generates the code. Karpathy originally framed it as coding where you “fully give in to the vibes and forget that the code even exists.” Collins Dictionary named it Word of the Year for 2025. By 2026, the concept has moved from experimentation to mainstream enterprise adoption across every industry.

Furthermore, the market for AI coding tools is projected to reach $8.5 billion in 2026. Over $5 billion in venture capital was invested in AI coding tools in 2024 alone. Cursor crossed $100 million in annual recurring revenue. Therefore, vibe coding is not a passing trend. It represents a fundamental shift in how software gets built. The developer’s role is evolving from writing syntax to orchestrating AI agents that handle implementation.

In addition, Y Combinator’s Winter 2025 batch included startups with 95% AI-generated codebases achieving significant revenue with tiny teams. Meanwhile, Google reports that a quarter of their code is already AI-assisted. However, developer trust in AI code accuracy has collapsed from 43% to 33% between 2024 and 2026. As a result, the industry has adopted a technology it does not trust, with usage climbing while confidence falls.

From Vibe Coding to Agentic Engineering

Karpathy proposed “agentic engineering” as the mature successor to vibe coding in February 2026. The distinction is critical. Vibe coding means accepting AI output without full review. Agentic engineering involves structured human-AI collaboration where AI agents handle implementation while humans provide architecture, review, and quality assurance. This evolution reflects the industry recognizing that “forgetting the code exists” works for weekend projects but fails for production systems that underpin business operations.

The Quality Crisis Behind AI-Assisted Code Adoption

The data on AI-generated code quality tells a troubling story that every engineering leader must understand before scaling AI-assisted development across their organization. Specifically, multiple independent studies converge on the same conclusion: AI-generated code is faster to produce but introduces measurably more defects. Security vulnerabilities and maintenance complexity are significantly higher than in human-written code. Engineering leaders who understand these trade-offs can capture the speed benefits while implementing the guardrails that prevent quality erosion.

1.7x More Major Issues
A CodeRabbit analysis of 470 open-source pull requests found AI co-authored code has 1.7 times more major issues. These include logic errors, incorrect dependencies, and flawed control flow. Consequently, AI-generated code requires significantly more review effort than human-written code.
2.74x Security Vulnerabilities
AI-generated code shows 2.74 times more security vulnerabilities than human-written code. Furthermore, AI-assisted commits leak secrets at 3.2% versus a 1.5% baseline. GitGuardian documented 28.65 million new hardcoded secrets in public commits during 2025.
75% of Enterprises Face Technical Debt
Forrester predicts that by 2026, 75% of enterprises will face moderate to high severity technical debt directly attributable to AI-driven rapid development. Therefore, the speed gains from vibe coding create compounding maintenance costs that erode initial productivity benefits.
Comprehension Debt Crisis
Beyond traditional technical debt, vibe coding creates “comprehension debt” where developers do not understand the code they are responsible for maintaining. Once initial prompts are lost, projects become nearly unmaintainable. As a result, organizational resilience degrades as human capability to modify systems atrophies.

“95% of developers feel productive while measurably producing lower-quality code.”

— Developer Productivity Analysis, 2026

The Security Threat From AI-Generated Code at Scale

AI-generated code creates security risks that traditional development practices were not designed to handle. The scale of credential exposure alone demands entirely new approaches to securing the software supply chain. Furthermore, the attack surface is bidirectional. AI coding tools themselves have become targets for supply chain compromise, with vulnerabilities disclosed against multiple major platforms.

| Security Risk | Scale | Impact |
| --- | --- | --- |
| Hardcoded Secrets | 28.65M new secrets in public commits (34% YoY increase) | ✗ Largest single-year jump ever recorded |
| AI-Specific CVEs | 35 in March 2026 alone (estimated 5-10x higher actual) | ✗ Formal attribution to AI tools accelerating |
| Package Hallucinations | 20% of AI code references non-existent packages | ✗ Attackers exploit through slopsquatting |
| Misconfigurations | 75% more common in AI-generated code | ◐ Traditional scanning catches most but not all |
| Tool Vulnerabilities | CVEs disclosed against Cursor, Amazon Q, Copilot | ◐ AI tools themselves are attack targets |

Notably, approximately 20% of AI-generated code samples reference packages that do not exist. Attackers exploit this through “slopsquatting,” registering hallucinated package names as malicious software before developers install them. Furthermore, existing SDLC frameworks, security training, and CI/CD tooling were designed for human-authored code and require deliberate extension to address AI-specific failure patterns. Consequently, organizations should immediately audit which AI coding tools are deployed, including unsanctioned developer-introduced tools that IT may not know about or have formally approved.
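The slopsquatting risk can be partially mitigated before anything is installed. Below is a minimal sketch, assuming an internally vetted allowlist of approved package names; the allowlist contents, the `audit_requirements` helper, and the example requirements lines are all illustrative assumptions, not any specific tool's API.

```python
# Sketch: guard against hallucinated or slopsquatted dependencies by
# checking AI-generated requirements against a vetted allowlist before
# installation. APPROVED_PACKAGES is a hypothetical, org-maintained set.

APPROVED_PACKAGES = {"requests", "flask", "sqlalchemy", "pydantic"}

def parse_requirement(line: str) -> str:
    """Extract the bare package name from a requirements-style line."""
    for sep in ("==", ">=", "<=", "~=", ">", "<"):
        if sep in line:
            return line.split(sep, 1)[0].strip().lower()
    return line.strip().lower()

def audit_requirements(lines: list[str]) -> list[str]:
    """Return package names that are not on the vetted allowlist."""
    flagged = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name = parse_requirement(line)
        if name not in APPROVED_PACKAGES:
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    # "reqeusts-toolbelt" stands in for a hallucinated/typosquat name.
    ai_generated = ["requests==2.31.0", "flask>=2.0", "reqeusts-toolbelt"]
    print(audit_requirements(ai_generated))  # flag for human review
```

An allowlist is deliberately stricter than checking the public registry, since slopsquatted packages do exist on the registry by the time a developer installs them.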

The Productivity Paradox

METR ran a randomized controlled trial with experienced open-source developers on real codebases. The result was counterintuitive. Developers using AI tools were measurably slower on complex tasks. However, they believed AI had helped them. Broader surveys confirm this pattern. 95% of developers report feeling productive while measurably producing lower-quality code. 74% report productivity increases. The subjective experience and objective measurement diverge. In other words, speed increases on simple tasks mask quality degradation on complex work.

How to Adopt AI-Assisted Development Without Catastrophic Risk

Smart organizations treat AI coding tools as accelerators, not replacements. They give developers more time for architecture, system design, and the genuinely complex problems that AI still struggles with. The key insight from successful enterprise adoption is that AI works best for boilerplate code, prototyping, and routine tasks where the speed gains are real and the quality risks are manageable. For security-critical paths and complex business logic, human engineering discipline remains essential, and systems requiring long-term maintainability need developers who understand every line of code they ship to production. The “Vibe and Verify” workflow has emerged as the recommended best practice: developers use AI to generate initial code rapidly, then review, test, and understand every line before committing to production.

Safe Adoption Practices
Implementing mandatory AI code review before any merge to production
Running automated security scanning calibrated for AI failure patterns
Maintaining developer comprehension through “Vibe and Verify” workflows
Using AI for boilerplate and prototyping while hand-coding critical paths
Dangerous Practices
Accepting AI output without review for production codebases
Reducing junior engineer hiring based on AI productivity assumptions
Allowing unsanctioned AI tools without governance or audit trails
Measuring developer productivity by code volume rather than quality

Five Priorities for Enterprise AI Code Governance

Based on the quality and security data, here are five priorities for engineering leaders governing AI-assisted development:

  1. Audit all AI coding tools across the development environment: Developers adopt tools without IT approval, so inventory every AI tool in use, including unsanctioned ones. This establishes the visibility needed for governance and security controls.
  2. Implement AI-specific security scanning in CI/CD pipelines: AI code carries 2.74x more vulnerabilities and 75% more misconfigurations, so extend existing scanning to cover AI failure patterns. Scan for hardcoded secrets, with particular attention to AI-assisted commits.
  3. Establish mandatory human review for all AI-generated production code: With 63% of developers spending more time debugging AI code than writing it manually, require review that ensures comprehension, not just functionality, so comprehension debt does not accumulate silently.
  4. Transition from vibe coding to agentic engineering workflows: With Karpathy himself declaring the original approach insufficient, adopt structured human-AI collaboration in which AI handles implementation while engineers own architecture and quality.
  5. Protect junior engineer development pipelines: Comprehension debt threatens organizational resilience, so maintain hiring and training of junior engineers who build foundational understanding. These engineers become the senior developers who can review AI output effectively in the future.
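The scanning priority above can be sketched as a minimal pre-merge secret scan. The two patterns below are illustrative assumptions only (the AWS access-key-ID shape plus a generic key-assignment heuristic); production pipelines should rely on a dedicated scanner such as the tooling behind GitGuardian's figures rather than hand-rolled regexes.

```python
import re

# Sketch: scan a diff for likely hardcoded secrets before merge.
# SECRET_PATTERNS is deliberately small and illustrative.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(api[_-]?key|secret|token)\b\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_diff(diff_text: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) for each suspected secret."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

if __name__ == "__main__":
    # Hypothetical AI-assisted diff with a hardcoded key on line 2.
    diff = 'url = "https://example.com"\napi_key = "sk_live_0123456789abcdef"\n'
    for lineno, name in scan_diff(diff):
        print(f"line {lineno}: possible {name}")
```

A check like this would typically run as a CI gate on AI-assisted commits, failing the build on any finding so a human must triage before merge.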
Key Takeaway

Vibe coding is near-universal, with 92% daily adoption and 41% of code AI-generated. However, AI code has 1.7x more issues and 2.74x more security vulnerabilities, 63% of developers debug AI code longer than they would spend writing it manually, 75% of enterprises face AI-driven technical debt, 28.65M secrets were leaked, and trust has collapsed to 33%. Karpathy has declared vibe coding passé and proposed agentic engineering. Organizations must audit AI tools, implement AI-specific security scanning, require human review, and protect junior developer pipelines.


Looking Ahead: From Vibe Coding to Agentic Engineering

Vibe coding will mature into agentic engineering as the default professional development workflow by 2027. AI agents will handle planning, writing, testing, and shipping code under structured human oversight. Moreover, the developer role will shift permanently from writing code to directing AI systems. Humans will own architecture, security, and the quality decisions that determine whether systems remain reliable under production conditions at enterprise scale.

However, organizations that adopted AI-assisted development without governance face years of technical and comprehension debt remediation. In contrast, those that implemented structured review, security scanning, and developer training from the start will transition smoothly to agentic engineering. Ultimately, the competitive advantage goes to organizations that captured AI speed without sacrificing the code quality and human understanding that sustainable software requires.

For engineering leaders, AI code governance is therefore not about restricting AI adoption. It is about ensuring that the unprecedented productivity gains from AI-assisted development translate into maintainable, secure, and comprehensible systems rather than fragile codebases that no human can modify when the AI tools are unavailable or insufficient for the task at hand.

Related Guide
Our DevOps and Platform Engineering Services


Frequently Asked Questions

What is vibe coding?
Vibe coding is a development approach where programmers describe what they want in natural language and AI generates the code. The term was coined by Andrej Karpathy in February 2025 and named Collins Dictionary's Word of the Year for 2025. 92% of US developers use AI coding tools daily, and the $8.5B market reflects widespread adoption.
How much riskier is AI-generated code?
AI code has 1.7x more major issues, 2.74x more security vulnerabilities, and 75% more misconfigurations. AI-assisted commits leak secrets at double the baseline rate. 20% of AI code references non-existent packages that attackers exploit. 35 CVEs were attributed to AI tools in one month alone.
What is comprehension debt?
Comprehension debt means the humans responsible for code do not understand what it does. Traditional technical debt means messy code. Comprehension debt is worse because developers cannot debug, maintain, or modify systems they did not write or review. If AI tools become unavailable, organizations face systems nobody can change.
What is agentic engineering?
Agentic engineering is the structured successor to vibe coding proposed by Karpathy in February 2026. AI agents handle implementation while humans provide architecture, review, and quality assurance. The key difference is greater oversight and review rather than blindly accepting AI output.
Should enterprises ban vibe coding?
No. Banning is impractical with 92% adoption. Instead, implement governance: mandatory review, AI-specific security scanning, approved tool lists, and structured workflows. The goal is capturing speed gains while maintaining quality. Use AI for boilerplate and prototyping while hand-coding security-critical paths.

References

  1. 1.7x Issues, 2.74x Vulnerabilities, 75% Misconfigurations, 92% Adoption, Trust Collapse: Prof. Hung-Yi Chen — The Dark Side of Vibe Coding: The AI Code Quality Crisis
  2. 28.65M Secrets, 3.2% Leak Rate, 35 CVEs, Slopsquatting, MAESTRO Framework: CSA — Vibe Coding Security Crisis: Credential Sprawl and SDLC Debt
  3. METR Trial, 95% Feel Productive, $8.5B Market, Agentic Engineering, Karpathy Evolution: Hashnode — The State of Vibe Coding in 2026: Adoption Won, Now What?