Anthropic published a help center page this week stating that certain Claude users must submit a government-issued photo ID and a live selfie before accessing parts of the platform. No other major AI chatbot currently asks for this. ChatGPT doesn't. Gemini doesn't.
The timing is something. Millions of users migrated to Claude earlier this year after OpenAI signed a deal to deploy AI on Pentagon classified networks. Anthropic turned that contract down, positioning itself as the privacy-conscious alternative. Free signups reportedly surged 60% in January and February alone. Those users may now need to hand over their passport to keep using the product they chose specifically to avoid surveillance.
What Anthropic is actually asking for
The verification system uses Persona Identities as a third-party processor. Users need a physical, undamaged government ID (passport, driver's license, or national identity card) and may be asked for a live selfie. Photocopies, mobile IDs, and student credentials are all rejected. The whole process takes about five minutes, according to Anthropic.
Where it gets murky: Anthropic won't say which features are gated behind verification, what user behavior triggers a check, or how "routine platform integrity checks" are defined. The help center page describes the rollout as covering "a few use cases" tied to "certain capabilities" and "safety and compliance measures." That's a lot of qualifying language for a system that collects biometric data.
The Persona problem
Anthropic says your ID and selfie go to Persona's servers, not its own. The company positions itself as the "data controller" setting the rules while Persona processes on its behalf. Data is encrypted, excluded from model training, and won't be shared for marketing. Standard reassurances.
But Persona's track record deserves scrutiny. Discord adopted Persona for age verification earlier this year, then dropped the vendor in under a month after researchers found nearly 2,500 of Persona's front-end files sitting on a U.S. government-authorized endpoint. The exposed code revealed that Persona conducts 269 distinct verification checks, including screening users against politically sensitive watchlists and adverse media categories like terrorism and espionage. Persona CEO Rick Song said the exposure was not "a major vulnerability." Discord disagreed.
"We didn't even have to write or perform a single exploit; the entire architecture was just on the doorstep," one of the researchers wrote. And this is separate from the October 2025 Discord breach that exposed roughly 70,000 government IDs through a compromised customer service vendor.
A Zurich-based privacy researcher who investigated Persona through LinkedIn's verification process found the platform routes data to 17 subprocessors, all based in North America. Among them: Anthropic, OpenAI, and Groqcloud, listed as handling "data extraction and analysis." That means your government ID data could flow through the same companies building large language models. Persona's CEO has disputed the characterization, saying clients select which products and subprocessors are used.
Why now?
Anthropic hasn't offered a detailed public explanation beyond abuse prevention and legal compliance. But the timing lines up with the company's release of Claude Mythos Preview, a model so capable at finding and exploiting software vulnerabilities that Anthropic refused to release it publicly. During testing, Mythos Preview autonomously discovered zero-day vulnerabilities in every major operating system and web browser, some of them decades old. The model is restricted to 40 organizations through Project Glasswing for defensive security work only.
Identity verification could be Anthropic's attempt to build a paper trail before more powerful models reach users. If a future Claude variant can write browser exploits or escalate kernel privileges (Mythos Preview can do both), knowing who's behind the keyboard becomes a liability question, not just a safety one.
There's also a pattern forming. In December, Anthropic deployed classifiers to detect conversations where users self-identify as minors; the system promptly flagged adult paying subscribers as children and locked them out. Some reported losing entire project histories during the appeal process. That system used Yoti, a different verification vendor, not Persona.
The reaction
"AI KYC is here," wrote crypto commentator Ryan Sean Adams on X, calling it a voluntary choice by Anthropic rather than a regulatory mandate. "Not even a regulatory requirement. Anthropic just doing it because they want to." That framing has stuck. Multiple posts pointed out that Anthropic chose this, which makes the privacy trade-off harder to swallow for users who specifically chose Claude to avoid it.
Accounts registered from regions Anthropic doesn't formally serve face the sharpest edge of this policy. A live selfie matched against a physical government document is, by design, difficult to circumvent. For Chinese users accessing Claude through intermediaries, or anyone in an unsupported region, verification is effectively a ban.
Anthropic did not respond to press inquiries about the rollout's scope or trigger conditions as of publication. The help center page says users who fail verification can retry or submit a form for manual review. Accounts can be banned for repeated policy violations, access from unsupported regions, terms of service violations, or being under 18.
The Electronic Frontier Foundation warned in February, in response to Discord's age verification push, that platforms making these commitments have "little independent visibility" into whether their safeguards work in practice. That assessment applies here too.