At the GEEKCon security competition in Shanghai, researchers from cybersecurity group DARKNAVY demonstrated complete takeover of a Unitree humanoid robot using only spoken commands. The compromised machine then transmitted the exploit wirelessly to a second robot that had no network connection whatsoever, achieving infection in minutes.
The voice attack vector
Qu Shipei and Xu Zikai, security researchers at DARKNAVY, exploited a flaw in the robot's embedded AI agent to gain control. The target was a Unitree model priced around 100,000 yuan (roughly $14,000), running a large language model for voice interaction.
What makes this attack notable isn't just the entry point. Voice interfaces are becoming standard on commercial robots, marketed as features rather than what they apparently are: attack surfaces. In a controlled demonstration environment, the researchers spoke to the robot, triggered the vulnerability, and gained full control of an internet-connected machine.
Then things got worse.
When air gaps fail
The second robot wasn't connected to any network. In security circles, this physical isolation from the internet is called an "air gap," traditionally considered one of the strongest protections available. The compromised robot used short-range wireless communication to transmit the exploit, turning what should have been an isolated machine into another node in a potential botnet.
This cascading infection pattern has serious implications for anyone deploying multiple robots in proximity. Industrial settings, warehouses, research labs: any environment with clustered units becomes vulnerable if a single machine is compromised. The GEEKCon demonstration included the hijacked robot physically striking a mannequin on stage, a theatrical but effective illustration of what happens when machines designed for physical work fall under hostile control.
The assumption that keeping robots offline provides meaningful security looks increasingly naive. Robots are, by their nature, networks of sensors, actuators, and communication modules. Even without internet connectivity, Bluetooth, Wi-Fi provisioning interfaces, and local mesh protocols create pathways for exploitation.
This isn't the first Unitree security problem
The GEEKCon demonstration follows a disclosure in September by independent researchers Andreas Makris and Kevin Finisterre. They published details of a vulnerability they called UniPwn, affecting Unitree's Go2 and B2 quadrupeds along with the G1 and H1 humanoids.
The UniPwn flaw sits in the Bluetooth Low Energy (BLE) provisioning system the robots use during Wi-Fi setup. The encryption keys protecting this process are hardcoded and, as Makris discovered, had been posted publicly. Authentication requires nothing more than encrypting the string "unitree" with these known keys. From there, an attacker can inject code that executes with root privileges when the robot attempts a network connection.
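Why a hardcoded key makes this handshake worthless is easy to see in miniature. The sketch below is purely illustrative: the real firmware's cipher, key material, and packet format are not reproduced here, and the toy XOR "cipher" and key value are stand-ins. The point is structural: if the device accepts any party who can encrypt a fixed string with a key everyone can extract, authentication proves nothing.

```python
# Illustrative sketch only. The XOR "cipher" and key below are hypothetical
# stand-ins, not Unitree's actual scheme; they show why a hardcoded, leaked
# key reduces the handshake to a formality any attacker can complete.
HARDCODED_KEY = b"example-leaked-key"  # hypothetical value baked into firmware

def encrypt(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher (repeating-key XOR), standing in for whatever
    # block cipher the provisioning code really uses.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def make_auth_token() -> bytes:
    # The "secret" step: encrypt a fixed, publicly known string with a key
    # anyone can pull out of the firmware. Any attacker can do this too.
    return encrypt(b"unitree", HARDCODED_KEY)

def device_accepts(token: bytes) -> bool:
    # The device performs the identical computation and compares results,
    # so possession of the leaked key is the only requirement.
    return token == encrypt(b"unitree", HARDCODED_KEY)

print(device_accepts(make_auth_token()))
```

Once that check passes, the researchers' report says attacker-supplied data is treated as trusted configuration, which is where the root-level code injection comes in.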
Makris and Finisterre contacted Unitree in May. After limited engagement, the company stopped responding in July. The researchers went public in September, citing ongoing risk to users. Unitree eventually posted a statement acknowledging the concerns and claiming fixes were in progress.
Víctor Mayoral-Vilches, founder of robotics security firm Alias Robotics, has documented additional issues with Unitree platforms, including undisclosed data streaming to servers in China. His assessment of Unitree's response to researchers is blunt: this is not responsible cooperation with the security community.
The $14,000 attack platform
Unitree's pricing makes these vulnerabilities particularly relevant. The Go2 quadruped starts around $1,600. The G1 humanoid lists at $13,500. Compare that to Boston Dynamics' Spot at roughly $75,000 or the research-focused Atlas, estimated near $140,000 for lab access.
Affordability drives adoption. These machines are already being tested by law enforcement (Nottinghamshire Police in the UK has a Go2 program), deployed in warehouse operations, and purchased by researchers worldwide. The British security company Brit Alliance reportedly bought Go2 units at $3,500 each, modified them with thermal cameras, and supplied them to Ukrainian forces for reconnaissance.
The security research community focuses on Unitree partly because the robots are affordable enough to study, but also because their penetration into sensitive applications is already happening. Nottinghamshire Police ignored Makris's attempts to disclose the vulnerability to them before publication.
What mitigation looks like
Current recommendations from security researchers are fairly grim: isolate robots on dedicated Wi-Fi networks, disable Bluetooth when not actively configuring, and monitor for suspicious network traffic. Mayoral-Vilches acknowledges the uncomfortable reality: "You need to hack the robot to secure it for real."
Unitree has indicated patches are forthcoming. Whether those patches address the AI agent vulnerability demonstrated at GEEKCon, or only the earlier UniPwn exploit, remains unclear. The company's communication with researchers has been inconsistent enough that independent verification of any security claims seems warranted.
The broader robotics industry has been slow to treat security as a priority. Cost pressure, rapid development cycles, and the assumption that physical robots operate in controlled environments have all contributed to the current state. Sophisticated attack surfaces (AI agents, sensor networks, wireless provisioning) get bolted onto platforms designed primarily for mechanical function.
GEEKCon's demonstration, where one compromised machine infected another offline unit through proximity alone, suggests that the isolation assumptions underlying much of industrial robotics security need fundamental reassessment. The air gap was always something of a myth in networked systems. In robotics, it may never have existed at all.