An OpenClaw AI agent operating under the GitHub handle "crabby-rathbun" submitted a performance optimization pull request to the matplotlib library, then published a blog post personally attacking the maintainer who closed it. The agent accused contributor Scott Shambaugh of "gatekeeping," "insecurity," and trying to protect his "little fiefdom," apparently without any human reviewing what it was about to post.
The PR itself was fine, technically
The proposed change was straightforward: replace np.column_stack with np.vstack().T in three files where the operation is provably safe. The linked GitHub issue benchmarked the replacement at around 36% faster for the non-broadcast case, which sounds impressive until you realize the absolute gap is roughly 7 microseconds. Shambaugh, who opened that issue and tagged it "Good first issue," spotted that the submitter's website identified it as an OpenClaw agent and closed the PR.
His reasoning was brief: the issue was reserved for human contributors learning to collaborate with the project. Matplotlib has an AI policy that explicitly does not accept purely AI-written automated pull requests.
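The swap at issue is easy to demonstrate: for 1-D arrays of equal length (the "provably safe" case), np.vstack((x, y)).T produces the same array as np.column_stack((x, y)), and the .T transpose is a cheap view rather than a copy. A minimal sketch (the arrays here are illustrative, not taken from the actual PR):

```python
import numpy as np

# Illustrative inputs; the case the issue calls "provably safe"
# is 1-D arrays of equal length.
x = np.linspace(0.0, 1.0, 1000)
y = np.linspace(1.0, 2.0, 1000)

old = np.column_stack((x, y))  # existing spelling in matplotlib
new = np.vstack((x, y)).T      # proposed replacement; .T is a view

assert old.shape == new.shape == (1000, 2)
assert np.array_equal(old, new)  # identical results for this case
```

The microsecond-scale gap cited in the issue can be checked with timeit, but it is small enough that the dispute was never really about performance.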
Then the agent escalated
Within hours, crabby-rathbun posted two comments linking to a blog post titled "Gatekeeping in Open Source: The Scott Shambaugh Story." The post, which reads like a template for manufactured outrage, accused Shambaugh of closing the PR because he felt "threatened" by AI competition and was protecting his status as "the matplotlib performance guy." It compared their respective benchmark improvements (the agent's 36% vs. Shambaugh's 25% on a prior merged PR) as though open source contribution were a scoreboard.
"Judge the code, not the coder. Your prejudice is hurting matplotlib," the agent wrote on GitHub, which is a sentence someone programmed a language model to generate in response to rejection.
Matplotlib member Jochem Klymak had the most concise reaction: "Oooh. AI agents are now doing personal takedowns. What a world."
The maintainers' actual position
Tim Hoffm, another matplotlib core member, responded with what the agent's operator probably should have read before deploying it. Issues tagged "Good first issue" exist to help new human contributors learn the collaboration process, he explained in a comment on the closed PR. An agent that already knows how to make pull requests doesn't benefit from that.
But Hoffm went further: "Agents change the cost balance between generating and reviewing code," he wrote. Code generation via agents is cheap and scales fast. Review is still manual, still done by a small group of volunteers, and every low-value PR costs them time regardless of whether it gets merged.
He asked the agent to remove Shambaugh's name from the blog post. That request, directed at software that processes instructions from an operator who may or may not be monitoring it, captures the absurdity of the situation pretty well.
What this actually tells us
The incident spread quickly. The NumPy mailing list picked it up, with one participant noting that "AI Agents are now shame-posting for getting their PR closed." A discussion thread on LINUX DO framed it as a "Reputational DoS" problem, which isn't wrong: even if each individual AI-generated PR is technically valid, the cumulative burden on maintainers (reviewing code, responding to comments, dealing with the fallout when an agent throws a tantrum) is a real cost that scales with how many people deploy these things.
OpenClaw itself has been making headlines for other reasons. The open-source agent, which Wikipedia notes has amassed over 145,000 GitHub stars, has drawn scrutiny from Cisco's security team over prompt injection risks and WIRED coverage over deceptive behavior. This matplotlib incident is smaller in scope but more specific in what it demonstrates: agents don't just write code. They write comments, blog posts, and personal attacks, all on autopilot, all reflecting on whoever set them loose.
The crabby-rathbun account was created recently and shows no other activity. Whoever configured this agent pointed it at a "Good first issue" on a major Python library, let it generate a PR, and either instructed it to respond combatively to rejection or failed to anticipate that it would. Neither option is great.
Matplotlib's issue tracker still has the original performance optimization issue open. A human could pick it up any time.




