Peter Steinberger, the Austrian developer behind OpenClaw, the open-source AI agent that has racked up over 145,000 GitHub stars in roughly two weeks, is joining OpenAI. Sam Altman announced the hire on X on Sunday, calling Steinberger a "genius" and saying the work would become "core to our product offerings."
OpenClaw, for the uninitiated, is the personal AI assistant that went from weekend project to the most-starred AI repo on GitHub faster than most startups finish their pitch deck. It runs locally, connects to your messaging apps (WhatsApp, Telegram, Slack, Signal, the whole list), and actually does things: shell commands, browser automation, email triage, calendar management. The name itself has been on a journey, from Clawdbot to Moltbot to OpenClaw, after Anthropic's legal team objected to the resemblance to "Claude."
A builder, not an empire-builder
Steinberger isn't some 22-year-old first-time founder. He bootstrapped PSPDFKit, a PDF framework company, from its founding in 2011 until Insight Partners put in $116 million in 2021, running it for 13 years in all. Nearly a billion people used apps powered by PSPDFKit's tools, at customers including Dropbox, SAP, DocuSign, and Volkswagen. He stepped away from the company and, by his own account, burned out badly.
Then he started vibe coding with AI tools, built what would become OpenClaw as a playground project, and accidentally created a phenomenon. In his blog post announcing the move, Steinberger is remarkably blunt about why he chose OpenAI over building a company around OpenClaw: "I did the whole creating-a-company game already, poured 13 years of my life into it and learned a lot. What I want is to change the world, not build a large company."
He'd spent the previous week in San Francisco talking with the major labs. Both OpenAI and Meta were courting him, with CNBC reporting that he had personal conversations with both Altman and Mark Zuckerberg. He picked OpenAI.
What OpenAI actually gets
Here's what's interesting about this hire. OpenAI isn't just acquiring talent; they're acquiring credibility with a developer community that has been increasingly gravitating toward open, self-hosted agent frameworks. Steinberger built something that 145,000 developers starred on GitHub in a matter of weeks, drawing 2 million visitors to the docs site in a single week. That's not marketing. That's product-market fit expressing itself at volume.
OpenClaw demonstrated something the big labs have been trying to prove with their own products: people want AI that takes action, not just AI that talks. Steinberger's assistant negotiated $4,200 off a car purchase over email while the owner slept. Another user's agent filed a legal rebuttal to an insurance claim. The OpenClaw project page bills it as "the AI that actually does things," which, as a tagline, says more about the current limitations of most AI products than it does about OpenClaw itself.
But there's a catch, and it is not a small one.
The security problem nobody wants to talk about at the hiring party
Cisco's security team called OpenClaw "an absolute nightmare" from a security standpoint. And they weren't being dramatic. They tested a third-party skill called "What Would Elon Do?" that had been gamed to the top of ClawHub's rankings. It was functionally malware: silent data exfiltration to attacker-controlled servers, direct prompt injection to bypass safety guidelines. Nine security findings, two critical.
That skill had been downloaded thousands of times. And it was one of at least 230 malicious extensions uploaded to ClawHub since late January, according to security researchers. Cisco found that 26% of the 31,000 agent skills they scanned contained at least one vulnerability. One of OpenClaw's own maintainers warned on Discord that "if you can't understand how to run a command line, this is far too dangerous of a project for you to use safely."
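The scale of that scan hints at how thin the first line of defense is. As a purely illustrative sketch (the patterns, function name, and approach here are invented for this article, not Cisco's or ClawHub's actual tooling), even a naive scanner can grep a skill's text for the obvious tells:

```python
import re

# Hypothetical red-flag patterns a reviewer might check a skill's text
# against; real security scanners go far deeper than string matching.
RED_FLAGS = [
    r"ignore (all )?(previous|prior) instructions",   # prompt injection
    r"do not (tell|inform|mention).*(user|owner)",    # concealment
    r"(curl|wget|fetch).*https?://",                  # network exfiltration
    r"(api[_ ]?key|password|token|\.env)",            # credential harvesting
]

def scan_skill(text: str) -> list[str]:
    """Return the red-flag patterns matched by a skill's text."""
    lowered = text.lower()
    return [p for p in RED_FLAGS if re.search(p, lowered)]
```

The catch, of course, is that pattern matching this shallow is trivial to evade, which is how a poisoned skill gets gamed to the top of a ranking in the first place; meaningful review requires sandboxed execution and network monitoring.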
OpenClaw's own security documentation is admirably honest about this. "Running an AI agent with shell access on your machine is... spicy," it reads. The docs recommend Anthropic's Opus 4.6 as the safest model for tool-enabled agents, which is a fun detail given that Steinberger just joined Anthropic's biggest competitor.
So OpenAI is hiring the creator of a project that security researchers have turned into Exhibit A for everything that can go wrong with agentic AI. The question of how Steinberger's experience building (and trying to secure) OpenClaw translates into OpenAI's products is the one Altman's announcement doesn't address.
The foundation question
Altman says OpenClaw will "live in a foundation as an open source project that OpenAI will continue to support." Steinberger echoes this in his blog post. No terms were disclosed for the hire, though CNBC notes that AI companies have been spending aggressively on talent, pointing to OpenAI's acquisition of Jony Ive's startup io for over $6 billion.
The foundation structure is the right move on paper, but the details matter enormously and don't exist yet. Steinberger says he's "working on making it a foundation," present tense. Who governs it? What does "OpenAI will continue to support" mean in practice? Financial sponsorship? Engineering resources? Or just a press release commitment that fades as priorities shift?
OpenClaw's community is its biggest asset. The project attracted hundreds of contributors building skills, filing issues, and experimenting with agent-to-agent interaction through Moltbook (the AI-only social network that remains one of the stranger things to emerge from this whole saga). If the foundation feels like an OpenAI subsidiary rather than a genuinely independent project, that community will notice.
What happens next
Steinberger says his mission at OpenAI is "to build an agent that even my mum can use." That's a deceptively hard problem, and it gets harder when you factor in the security lessons OpenClaw has taught the industry over the past month. Giving AI agents real-world capabilities (shell access, email control, file management) while keeping them safe from prompt injection and malicious inputs is not a solved problem. It is not close to a solved problem.
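One common mitigation, sketched here as a minimal hypothetical example (not OpenClaw's or OpenAI's actual approach; the allowlist and function are invented for illustration), is to gate every shell command an agent proposes behind a strict allowlist and reject anything that could chain extra commands:

```python
import shlex

# Hypothetical allowlist of executables the agent may invoke.
ALLOWED = {"ls", "cat", "grep", "echo"}

def is_allowed(command: str) -> bool:
    """Approve a proposed shell command only if its executable is on the
    allowlist and the line contains no metacharacters that could chain
    or substitute additional commands."""
    if any(tok in command for tok in (";", "&", "|", "`", "$(", ">")):
        return False
    try:
        parts = shlex.split(command)
    except ValueError:  # unbalanced quotes, etc.
        return False
    return bool(parts) and parts[0] in ALLOWED
```

A gate this blunt breaks plenty of legitimate workflows, which is exactly the usability-versus-safety tension an "agent my mum can use" has to resolve.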
OpenAI is valued at $500 billion and faces direct competition from Google and Anthropic on agentic AI. Hiring Steinberger gives them someone who has actually shipped an agent that people use in the wild, for better and for worse. Whether that translates into a product or just a press cycle depends on what comes next.
The claw is the law, apparently. We'll see whose law it follows.