The China Academy of Information and Communications Technology (CAICT) has begun developing formal standards for the "Claw" series of AI agents, the agency announced on March 12. The move, coordinated through the Software Intelligence Committee of the China AI Industry Alliance (AIIA), comes as the open-source agent OpenClaw has gone from niche developer toy to national obsession in roughly eight weeks.
CAICT isn't starting from scratch. The institute has already published international and domestic standards covering intelligent agents for software development, testing, and operations. But the new "Claw series" initiative is the first explicit attempt to build a regulatory framework around the kind of autonomous, task-executing agents that are proliferating across Chinese consumer and enterprise environments right now.
The lobster problem
OpenClaw, the open-source agent created by Austrian developer Peter Steinberger, lets users delegate real tasks (booking travel, triaging email, managing files) to an AI that runs locally on their machine. It communicates through messaging apps like WeChat, Telegram, and WhatsApp. The project has accumulated over 100,000 GitHub stars and, according to MIT Technology Review, spawned a cottage industry of paid installation services for non-technical users in China.
The enthusiasm is hard to overstate. Tencent held public events where hundreds lined up for help installing the software, complete with red lobster plush toys. ByteDance's Volcano Engine launched a browser-based version called ArkClaw. Baidu opened its Beijing headquarters for install parties as recently as March 11. Local governments in Shenzhen's Longgang district and Wuxi have rolled out policies offering free computing credits and cash rewards for OpenClaw-related projects.
"It was not until my father, who is 77, asked me to help install a 'lobster' for him that I realized this thing is truly viral," one Beijing-based engineer told MIT Technology Review. Chinese social media has coined its own vocabulary around the trend: "raising the lobster" means getting the agent set up on your machine.
Security is the reason this matters
The timing of CAICT's standards push is not subtle. On March 10, China's National Computer Network Emergency Response Technical Team (CNCERT) issued its second security warning about OpenClaw, flagging prompt injection attacks, credential theft risks, and the danger of the agent misinterpreting commands and deleting critical data. A day later, government agencies and state-owned banks received internal notices banning OpenClaw on office devices.
The vulnerability numbers are stark. SecurityScorecard's STRIKE team found over 135,000 OpenClaw instances exposed to the public internet across 82 countries, with more than 15,000 directly vulnerable to remote code execution. Bloomberg reported that researchers have catalogued more than 40,000 vulnerabilities in the software. Gartner, in an early February assessment, labeled OpenClaw an "unacceptable cybersecurity risk" for business users, which is about as aggressive as analyst-speak gets.
One high-severity flaw, dubbed "ClawJacked," allowed attackers to silently hijack a user's agent just by getting them to visit a malicious website. That specific bug has been patched, but the underlying architecture (an agent with broad system permissions controlled through a localhost gateway) remains a fat target. Around 12% of the ClawHub plugin marketplace was found to contain malicious code, including keyloggers and credential stealers.
What CAICT actually wants to standardize
Details on the specific criteria remain thin. The announcement references design, functionality, safety, and interoperability requirements for intelligent assistants. Industry sources suggest the standards will cover code quality thresholds, process transparency (how agents log and explain their actions), user privilege controls, and risk minimization protocols.
The standards are being developed collaboratively with industry stakeholders, academic researchers, and regulators, CAICT said. That collaborative framing matters because China's approach to AI standardization has historically been, as a Stanford DigiChina analysis put it, "state-guided but enterprise-led." CAICT published a broader AI agent standard in May 2025 with Tencent, Alibaba, Huawei, and more than 20 other companies. This new effort appears to be a more targeted response to the specific Claw ecosystem.
Beijing is running a familiar play here: promote adoption aggressively while scrambling to build guardrails after the fact. The State Council's "AI Plus" action plan targets 70% AI penetration in key sectors by 2027. At the same time, the government is effectively telling state workers to uninstall the most popular AI agent in the country.
Not just China's problem
NIST launched its own AI Agent Standards Initiative in February, covering similar ground: security controls, governance, identity management, and monitoring for autonomous agent systems. The Federal Register posted a request for information on AI agent security in January that drew 932 comments. The EU AI Act classifies most agents under "limited risk" with transparency obligations but, as multiple analysts have noted, doesn't address agent-to-agent interactions at all.
Steinberger, who joined OpenAI in February to work on agents, acknowledged the tension in a blog post announcing the move. OpenClaw would transfer to an independent foundation; he wanted to build an agent "even my mum can use," which he conceded would require much more thought on safety.
Whether CAICT's standards end up as meaningful guardrails or bureaucratic window dressing depends on what happens next. The institute has opened the project for industry consultation, with product testing reportedly planned for later this month.




