Before reading this commentary: Read the original paper first.
arXiv Abstract | Download PDF
"Vibe Coding Kills Open Source" by Miklós Koren, Gábor Békés, Julian Hinz, and Aaron Lohmann (January 2026)
The Maintainability Problem
The paper highlights a key concern about "vibe coding" — the term Andrej Karpathy coined in February 2025 for a style of development where you "fully give in to the vibes, embrace exponentials, and forget that the code even exists":
"Under vibe coding, an AI agent can select, compose, and modify packages end-to-end, and the human developer may not know which upstream components were used."
This particular flavor of vibe coding — where the developer genuinely doesn't know what's in their own codebase — is the problematic variant. It's not just about AI assistance; it's about abdication of understanding.
Code written this way is fundamentally unmaintainable. Garbage accumulates. Technical debt compounds. And eventually, someone needs to come in and untangle the mess.
Which raises an interesting question: is this a new income stream for open source consultants? Someone has to clean up the vibe-coded projects when they inevitably break down.
The Real Question About OSS Business Models
The paper discusses how vibe coding threatens open source maintainers' income. But let's examine what that "primary income source" often actually is.
Many OSS companies generate revenue because their software is hard to understand and use. Users pay for support, consulting, and managed services not because the software is so sophisticated it requires experts, but because the documentation is sparse, the APIs are confusing, and the configuration is byzantine.
This isn't new. It's exactly how many proprietary software companies have operated for decades. Create complexity, then charge to navigate it.
This is a flawed model. Users aren't getting real value — they're paying a tax on unnecessary complexity. It's job security through overcomplicating and overengineering. The software serves the maintainer's income needs more than the user's actual needs.
The Alternative: Software That Carries Real Value
What's the alternative to this complexity-as-revenue-stream model?
Software that is genuinely valuable — built with well-thought-out approaches, using established patterns, without the "secret sauce" and "magic" that exists primarily to obscure rather than to solve problems.
Consider: we can recreate most CRM, ERP, CMS, and LMS systems using simple building blocks and well-known patterns. These are solved problems. The complexity in many commercial implementations isn't intrinsic to the domain — it's artificial.
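To make "simple building blocks" concrete, here is a toy sketch of the core of a CRM contacts feature in plain SQLite. Table and field names are illustrative, not taken from any real product — the point is that nothing here requires secret sauce, just well-known relational patterns.

```python
import sqlite3

# Toy sketch: a CRM's core is plain relational modeling — a solved
# problem. Schema and names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE contacts (
        id      INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        email   TEXT UNIQUE,
        company TEXT
    )
""")
conn.execute("""
    CREATE TABLE interactions (
        id         INTEGER PRIMARY KEY,
        contact_id INTEGER REFERENCES contacts(id),
        note       TEXT,
        occurred   TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

conn.execute("INSERT INTO contacts (name, email, company) VALUES (?, ?, ?)",
             ("Ada Lovelace", "ada@example.com", "Analytical Engines Ltd"))
conn.execute("INSERT INTO interactions (contact_id, note) VALUES (?, ?)",
             (1, "Discussed renewal"))

# A typical CRM query: a contact joined with an interaction note.
row = conn.execute("""
    SELECT c.name, i.note FROM contacts c
    JOIN interactions i ON i.contact_id = c.id
""").fetchone()
print(row)
```

Everything a commercial CRM layers on top — permissions, workflows, reporting — is similarly composed of well-understood pieces.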
But we will genuinely struggle to write:
- A kernel driver
- A high-performance 3D engine
- A real-time control system
...without deep understanding of the domain. This is inherent complexity. It cannot be simplified away because the problem itself is complex. This is where real expertise creates real value.
But here's the catch: even in those "simple" CRM and ERP systems, can we trust vibe-coded financial logic? Can we trust a vibe-coded security model?
We already know the answer. Research shows that 29.8% of GitHub Copilot-generated code contains security weaknesses. A CodeRabbit study found AI-generated code is 2.74x more likely to introduce XSS vulnerabilities, 1.88x more likely to have improper password handling, and 1.91x more likely to create insecure object references compared to human-written code.
These aren't theoretical risks. Knight Capital lost $440 million in under an hour from a trading software bug — and that was human-written code where someone at least understood the logic. AXA Rosenberg paid $217 million plus a $25 million penalty for an error in their investment model code. Now imagine those same financial calculations written by someone who clicked "Accept All" without reading the diffs.
The building blocks may be simple, but the critical paths — authentication, authorization, financial calculations, audit trails — still require expert oversight. A vibe-coded login form might work, but would you trust it to protect customer data? A vibe-coded invoice generator might produce PDFs, but would you trust it to calculate taxes correctly across jurisdictions?
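The tax question above has a very concrete failure mode that code generators routinely reproduce: doing money math in binary floats. A minimal sketch — the prices and the 19% rate are made up, and real tax rules vary by jurisdiction; the point is representation and auditable rounding:

```python
from decimal import Decimal, ROUND_HALF_UP

# Three hypothetical invoice line items, summed two ways.
float_total = 0.10 + 0.20 + 0.30          # binary floats accumulate error
decimal_total = Decimal("0.10") + Decimal("0.20") + Decimal("0.30")

# An auditable tax step: exact cents with an explicit rounding rule,
# rather than whatever round() happens to do on a drifting float.
tax = (decimal_total * Decimal("0.19")).quantize(
    Decimal("0.01"), rounding=ROUND_HALF_UP)

print(float_total)    # not exactly 0.6
print(decimal_total)  # 0.60
print(tax)            # 0.11
```

A vibe coder who never reads the diff will ship the float version, and it will pass every casual test — until the cents stop adding up across a million invoices.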
The distinction isn't between "simple" and "complex" systems. It's between code where mistakes are inconvenient and code where mistakes have serious consequences.
What Vibe Coding Cannot Replace
There's another aspect the paper touches on: cooperation and modification of libraries.
Real open source development often requires modifying upstream libraries without carrying the full burden of long-term maintenance costs. This means:
- Understanding the library's design philosophy
- Writing patches that upstream maintainers will accept
- Participating in discussions about direction and priorities
- Building relationships and trust
Vibe coding cannot help here. An AI that assembles code without understanding cannot meaningfully participate in a community's technical discussions or navigate the social dynamics of getting patches merged.
The Library Trap
Here's a very real scenario playing out right now:
A company creates a product that has genuine value. They use AI to accelerate development. The AI "helps" them integrate various libraries, using its own approach — code that works, passes tests, ships to customers.
Then, months later, they discover the AI used:
- Undocumented internal APIs that were never meant for external use
- Deprecated functions scheduled for removal in the next major version
- Insecure methods that the library maintainers explicitly warn against
- Workarounds for bugs that have since been fixed differently
Now the company is trapped. Their product depends on these libraries, but no one understands how. The original AI conversation is lost or meaningless. The code works but for reasons nobody can explain. And when the library updates, everything breaks in ways nobody can debug.
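One cheap defense against the second item on that list is to make deprecated calls fail loudly in CI instead of silently working until the next major release. A minimal sketch — `legacy_fetch` is a hypothetical stand-in for an upstream function, but real libraries signal deprecation with `DeprecationWarning` in exactly this way:

```python
import warnings

# Hypothetical stand-in for a deprecated upstream library function.
def legacy_fetch(url):
    warnings.warn("legacy_fetch is deprecated; use fetch()",
                  DeprecationWarning, stacklevel=2)
    return f"GET {url}"

# In a test suite, promote DeprecationWarning to an error so any
# deprecated call an AI assistant slipped in surfaces immediately.
with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    try:
        legacy_fetch("https://example.com")
        deprecated_calls_found = False
    except DeprecationWarning:
        deprecated_calls_found = True

print(deprecated_calls_found)  # True
```

This catches only what the library authors bothered to flag — undocumented internal APIs and bug workarounds emit no warning at all, which is why they are the more dangerous half of the list.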
This isn't hypothetical. Consider the ongoing crisis in Google's python-genai ecosystem. When Google migrated from google-generativeai to google-genai, the API architecture changed completely — but AI code generators keep suggesting the old, deprecated GenerativeModel patterns. As one developer put it, the training data for AI models "remains starved of correct examples, cementing the AI's bias towards the old, broken API." The irony of AI tools breaking AI libraries is not lost on anyone.
This pattern will repeat across every major library evolution. A model's training data is frozen at its cutoff date, with no awareness of deprecations announced afterward. Every library that refactors its API creates a new trap for vibe coders.
And how does this get solved? The same way it always does: the company pays a consulting firm. The consultants — people who actually understand the libraries — cooperate with upstream maintainers to either validate the existing integration or rewrite it properly.
So the "cost savings" from vibe coding become deferred consulting fees, often larger than what proper development would have cost in the first place. The money doesn't disappear — it just flows to different people, later, under worse circumstances.
This is the new open source economy that vibe coding creates: not the death of paid expertise, but the relocation of it. From upfront development to emergency remediation. From building things right to fixing things that were built wrong.
The Home Assistant Example
Consider Home Assistant, the popular open source home automation platform. You can vibe-code a plugin for your IoT product. It works. You ship it to customers.
But if that plugin isn't upstreamed into the official project, you carry the full burden of maintaining it. And it will keep breaking — every time Home Assistant updates its internal APIs, changes its architecture, or deprecates the patterns your AI happened to use.
Worse: open source developers increasingly treat out-of-tree vibe-coded integrations as tainting. The Linux kernel community has formalized this approach. When you load an out-of-tree module, the kernel is marked with TAINT_OOT_MODULE, and the community's response to bug reports is explicit:
"If there's a bug and we see that taint we usually do say, 'Can you reproduce that bug without the module that caused the OOT taint?'. If they can't then we ignore it."
This same pattern is emerging across open source projects. File a bug report with a vibe-coded plugin installed? "Please reproduce without third-party integrations." The project maintainers have no obligation to debug code they didn't write and can't inspect.
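The taint mechanism quoted above is just a bitmask the kernel exposes at `/proc/sys/kernel/tainted`. A minimal decoder, with bit meanings taken from the kernel's tainted-kernels documentation (only three of the documented flags are shown here):

```python
# Decode the Linux kernel taint bitmask. Bit positions follow the
# kernel's "Tainted kernels" documentation; this is a partial table.
TAINT_FLAGS = {
    0:  ("P", "proprietary module was loaded"),
    12: ("O", "out-of-tree module was loaded"),
    13: ("E", "unsigned module was loaded"),
}

def decode_taint(value: int) -> list[str]:
    """Return human-readable descriptions for the set taint bits."""
    return [f"{letter}: {desc}"
            for bit, (letter, desc) in TAINT_FLAGS.items()
            if value & (1 << bit)]

# On a real system: value = int(open("/proc/sys/kernel/tainted").read())
# 4096 == 1 << 12, i.e. exactly the TAINT_OOT_MODULE flag from the quote.
flags = decode_taint(4096)
print(flags)
```

Any nonzero value here is what prompts the "can you reproduce without the out-of-tree module?" response — the maintainers can see at a glance that unknown code was in the picture.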
The only real solution, again: pay to cooperate with the project maintainers. Have someone who understands the codebase write a proper integration that follows the project's patterns and can be upstreamed.
Here's the irony: vibe coding makes it harder, not easier, to write code that will be accepted upstream. Upstream maintainers want contributions that follow their conventions, use their preferred patterns, and demonstrate understanding of the project's architecture. Vibe-coded patches are the opposite of this — they work by accident, follow no conventions, and their authors can't explain or defend the implementation choices.
So vibe coding doesn't reduce the need for expertise. It just ensures you'll need that expertise later, at higher cost, in a more desperate situation.
Erosion and Creation
The paper's central thesis is that vibe coding will damage open source. This is partially true — but only for certain models of open source.
What may erode:
- Revenue from support contracts for unnecessarily complex software
- Consulting income based on arcane knowledge rather than real expertise
- The "complexity moat" that protects some maintainers' positions
What will remain (and may grow):
- Software solving genuinely hard problems requiring deep domain expertise
- Clean, well-designed tools that people actually enjoy using
- Communities built around shared understanding, not shared confusion
- Consulting for cleaning up vibe-coded messes
Vibe coding, as described in the paper, can lead to erosion of particular models of open source. But at the same time, it will create new models — or rather, reward models that were always better but couldn't compete against the incumbents' complexity moats.
Conclusion
The paper raises legitimate concerns about vibe coding's impact on open source sustainability. But it inadvertently reveals something else: much of what passes for "open source business model" is really just a complexity tax.
If vibe coding threatens your income because users no longer need your help to understand your own software — perhaps the problem isn't vibe coding. Perhaps the problem is that your software was never as valuable as your support contract made it appear.
The open source projects that will thrive are those that solve genuinely hard problems, maintain genuine expertise, and build genuine communities. No amount of AI-generated code will replicate what kernel developers, database architects, or compiler engineers do.
The rest? Maybe it's time for that market correction.
References
- Koren, M., Békés, G., Hinz, J., & Lohmann, A. (2026). "Vibe Coding Kills Open Source." arXiv:2601.15494
- Karpathy, A. (2025). Original "vibe coding" post on X
- "Security Weaknesses of Copilot-Generated Code in GitHub Projects." ACM Transactions on Software Engineering and Methodology
- CodeRabbit: State of AI vs Human Code Generation Report (2025)
- GitHub Issue: AI Code Generation Breaking the google-genai Ecosystem
- Linux Kernel Documentation: Tainted Kernels
- CNN: Knight Capital's $440 Million Software Bug (2012)
- Raygun: 11 of the Most Costly Software Errors in History