
Niantic Turns 30 Billion Pokémon Go Player Photos Into Robot Navigation

Niantic Spatial's delivery robot deal with Coco Robotics runs on a decade of crowdsourced player scans.

Liza Chan, AI & Emerging Tech Correspondent
March 16, 2026 · 5 min read
[Image: Autonomous delivery robot navigating a city sidewalk past urban buildings and storefronts]

Niantic Spatial, the geospatial AI company spun out of the Pokémon Go developer last year, has partnered with Coco Robotics to provide visual positioning for autonomous delivery robots. The deal, announced March 10, marks the first commercial robotics use of a 30-billion-image dataset built from years of player scans.

The images came from Pokémon Go and Ingress players who scanned real-world landmarks through their phones, often in exchange for in-game rewards. Niantic Spatial now claims it can pinpoint a device's location to within a few centimeters using those scans, which is the kind of promise that sounds too clean until you consider what GPS actually does in downtown Manhattan.

The data nobody thought about

Pokémon Go launched in 2016, and for a while it was inescapable. But the feature that matters here came later. Niantic introduced AR Mapping in late 2020, letting players scan locations and objects by walking around them with their cameras. The game framed this as "Field Research" and rewarded players with in-game items for completing scans.

"Making Pikachu run realistically down the street and enabling Coco's robot to navigate a city safely and precisely are the same problem," Niantic Spatial CEO John Hanke said. It's a fun line, though it papers over the fact that players were doing one of those things on purpose and the other by accident.

Each scanned location accumulated thousands of images from different players, at different times of day, in different weather. A single PokéStop might have morning shots in sunshine, evening shots in rain, winter shots with snow. No fleet of camera cars replicates that kind of diversity on the same budget. MIT Technology Review reports Niantic Spatial has trained its model on images clustered around roughly one million in-game hotspots.

So what does Coco actually get?

Coco Robotics runs about 1,000 delivery robots across Los Angeles, Chicago, Jersey City, Miami, and Helsinki, with over 500,000 deliveries completed. The robots carry up to eight extra-large pizzas and trundle along sidewalks at around five miles per hour. GPS works fine in suburbs. In dense urban cores, the signal bounces off buildings and can drift 50 meters, which is the difference between the right restaurant entrance and the middle of an intersection.

Niantic Spatial's Visual Positioning System uses camera snapshots from the robot's four onboard cameras to match against its database and determine exact position. CTO Brian McClendon, who previously worked on Google Maps and Google Earth, called it solving the "urban canyon" problem. "You see that blue dot on your phone, and it often drifts 50 meters, placing you on another block, in another direction, or across the street," he told MIT Technology Review.
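At its core, that matching step is a nearest-neighbor search over an enormous geotagged image database. Here is a toy sketch of the idea in Python; everything in it is hypothetical (random vectors stand in for learned image embeddings, and a real system would run feature matching and pose estimation, not a simple average of matched locations):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical database: each reference image is a feature descriptor
# (random here; real systems use learned embeddings) tagged with the
# location where it was captured.
N_REFS, DIM = 10_000, 128
ref_descriptors = rng.standard_normal((N_REFS, DIM)).astype(np.float32)
ref_descriptors /= np.linalg.norm(ref_descriptors, axis=1, keepdims=True)
# (latitude, longitude) pairs scattered around one city block.
ref_locations = rng.uniform([40.74, -74.00], [40.75, -73.99], size=(N_REFS, 2))

def localize(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Estimate position by averaging the locations of the k most similar references."""
    q = query / np.linalg.norm(query)
    sims = ref_descriptors @ q                # cosine similarity vs. the database
    top_k = np.argsort(sims)[-k:]             # indices of the k best matches
    return ref_locations[top_k].mean(axis=0)  # crude position estimate

# A camera snapshot from the robot, embedded the same way, queries the database.
snapshot = rng.standard_normal(DIM).astype(np.float32)
lat, lon = localize(snapshot)
print(f"estimated position: {lat:.5f}, {lon:.5f}")
```

The intuition this captures: unlike GPS, the estimate comes from what the camera sees, so tall buildings can't degrade it, and accuracy scales with how densely the database covers each spot.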

Whether this actually outperforms existing solutions is harder to assess. Hackaday notes that competitors like Starship Technologies already use visual positioning with 3D maps built from their own robot sensors. Niantic Spatial's advantage is supposed to be the sheer volume and diversity of its pre-existing data, but the robots see the world from hip height while Pokémon Go players held their phones at eye level. Coco CEO Zach Rash says the data adaptation isn't complex. Maybe. The partnership is new enough that there's no independent performance data to check that against.

The consent question nobody wants to answer

Niantic's terms of service let the company use player-submitted data however it wants and extend that right to third parties. Players agreed to this when they installed the app. But as Popular Science pointed out, agreeing to terms and understanding what you're consenting to are different things. When 500 million people installed Pokémon Go in its first 60 days, precisely none of them were thinking about delivery robot navigation.

The comparison to Google's reCAPTCHA is unavoidable. Users solved CAPTCHAs thinking they were proving they were human; they were labeling training data for self-driving cars. Pokémon Go's scanning feature followed the same playbook: a useful-seeming interaction that doubled as a data pipeline.

Niantic has pushed back on the framing. An editor's note on the company's blog stresses that scanning is optional, requires visiting a specific public location and clicking to scan, and that "merely walking around playing our games does not train an AI model." That's technically true. But the game actively incentivized scanning by tying it to rewards, which makes "optional" do some heavy lifting.

The business math

Last March, Scopely acquired Niantic's games division for $3.5 billion, with Niantic contributing an additional $350 million in cash. Niantic Spatial spun off as a standalone company with $250 million in funding. Coco Robotics is its first robotics partner, which Hanke's team is framing as validation of the entire approach.

One partnership does not validate an approach. But the timing is notable: world models are the current obsession in AI, with Google DeepMind and World Labs building virtual environments for training AI agents. Niantic Spatial is betting that models grounded in real-world data have an edge over synthetic ones. The 30 billion images are the moat. Whether that moat holds depends on how many robotics companies actually need this level of street-level visual data, and how quickly competitors can build their own.

Coco Robotics launched its Coco 2 robot in February, designed to operate on streets and bike lanes in addition to sidewalks. The Niantic Spatial integration is expected to roll out across Coco's existing fleet, though neither company has announced specific deployment timelines.

Tags: Niantic Spatial, Pokémon Go, Coco Robotics, delivery robots, visual positioning, geospatial AI, autonomous navigation, crowdsourced data, AR mapping
Liza Chan

AI & Emerging Tech Correspondent

Liza covers the rapidly evolving world of artificial intelligence, from breakthroughs in research labs to real-world applications reshaping industries. With a background in computer science and journalism, she translates complex technical developments into accessible insights for curious readers.



Niantic Uses 30B Pokémon Go Images for Robot Navigation | aiHola