OpenAI has deployed an age prediction model across ChatGPT's consumer plans, one that analyzes how people use the chatbot to determine whether they're likely teenagers. The system, which went live globally on Tuesday, moves beyond trusting whatever birthdate users enter at signup.
The system guesses, then restricts
The age prediction model examines what OpenAI calls behavioral and account-level signals: how long an account has existed, typical times of day when someone is active, usage patterns over time, and yes, the stated age too. When the algorithm decides an account probably belongs to someone under 18, ChatGPT automatically tightens the experience.
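OpenAI hasn't published anything about how the model works internally, but the description reads like a classifier over account-level features. Here's a minimal sketch of what signal-based age prediction can look like; the feature names, weights, and logistic form are all assumptions for illustration, not OpenAI's actual model:

```python
import math
from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Hypothetical stand-ins for the signal types OpenAI describes.
    account_age_days: float    # how long the account has existed
    late_night_ratio: float    # share of activity late at night
    school_hours_ratio: float  # share of activity during school hours
    stated_age: int            # self-reported age at signup

def under_18_probability(s: AccountSignals) -> float:
    """Toy logistic model combining account signals into a probability.

    The real system's features and weights are unknown; these values
    exist only to show the shape of signal-based age prediction.
    """
    z = (
        -0.002 * s.account_age_days   # older accounts nudge toward "adult"
        + 2.0 * s.late_night_ratio
        + 1.5 * s.school_hours_ratio
        - 0.1 * (s.stated_age - 18)   # stated age is one input, not the answer
    )
    return 1.0 / (1.0 + math.exp(-z))
```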
Restricted topics include graphic violence, depictions of self-harm, sexual or romantic roleplay, and content promoting what OpenAI describes as extreme beauty standards or unhealthy dieting. The company says it will default to the under-18 experience whenever there's uncertainty about a user's age.
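Put in decision terms, that default means the system needs a confident "adult" signal before granting unrestricted access; anything ambiguous is treated as a minor. A sketch of that rule, with an invented confidence threshold (OpenAI hasn't published a cutoff):

```python
RESTRICTED_TOPICS = {
    "graphic_violence",
    "self_harm_depictions",
    "sexual_or_romantic_roleplay",
    "extreme_beauty_standards",
}

ADULT_CONFIDENCE_THRESHOLD = 0.85  # hypothetical; the real cutoff is unpublished

def experience_for(p_under_18: float, age_verified_adult: bool) -> str:
    """Return which content policy applies to an account.

    Uncertainty defaults to the teen experience: only a confident adult
    score, or explicit age verification, unlocks the full experience.
    """
    if age_verified_adult:
        return "full"
    p_adult = 1.0 - p_under_18
    if p_adult >= ADULT_CONFIDENCE_THRESHOLD:
        return "full"
    return "under_18"  # RESTRICTED_TOPICS are filtered in this mode
```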
Adults who get incorrectly flagged can regain full access by verifying their age through Persona, a third-party identity verification service that handles similar checks for Roblox. The process involves a live selfie scan where users turn their head left and right while a camera estimates their age. Government ID is an option if the selfie fails.
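Persona's actual integration isn't public, but the flow OpenAI describes is a standard tiered escalation: try the cheaper selfie-based age estimate first, fall back to a document check if it fails. A hypothetical sketch of that control flow (none of these names are Persona's real API):

```python
from enum import Enum
from typing import Callable, Optional

class VerificationResult(Enum):
    VERIFIED_ADULT = "verified_adult"
    FAILED = "failed"

def verify_adult(
    selfie_check: Callable[[], Optional[int]],
    id_check: Callable[[], Optional[int]],
) -> VerificationResult:
    """Tiered verification: selfie age estimate first, government ID fallback.

    `selfie_check` and `id_check` are hypothetical callables standing in
    for a third-party provider's checks; each returns an estimated age,
    or None if the check fails.
    """
    estimated_age = selfie_check()  # live selfie: user turns head left and right
    if estimated_age is not None and estimated_age >= 18:
        return VerificationResult.VERIFIED_ADULT

    id_age = id_check()             # fallback: government ID document check
    if id_age is not None and id_age >= 18:
        return VerificationResult.VERIFIED_ADULT

    return VerificationResult.FAILED
```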
Privacy advocates aren't convinced
The accuracy question looms large. Aliya Bhatia, a senior policy analyst at the Center for Democracy and Technology, points out that predicting age from behavioral signals is difficult for multiple reasons. Teenagers tend to be early adopters of new tech, she notes, meaning the earliest ChatGPT accounts might disproportionately represent young users. And distinguishing between a teacher using ChatGPT to help teach math and a student using it to study isn't straightforward.
The Electronic Frontier Foundation's Alexis Hancock raised a different concern: OpenAI has only offered ChatGPT to consumers for about four years, so account age isn't particularly telling. Meanwhile, users who get misclassified face pressure to hand over biometric data to a third party just to restore functionality they had before.
OpenAI acknowledges mistakes will happen. The company says it's using the rollout to learn which signals actually improve accuracy and will refine the model over time. EU users will get the feature in the coming weeks, a delay meant to account for regional regulatory requirements.
Why now?
The timing isn't subtle. OpenAI faces an FTC investigation into how AI chatbots affect children, along with multiple wrongful death lawsuits linking ChatGPT to teen suicides. The company introduced parental controls last fall and convened a panel of mental health experts in October, but regulatory and legal pressure keeps mounting.
There's also the matter of what comes next. OpenAI's CEO of Applications, Fidji Simo, said in December that an "adult mode" for ChatGPT would arrive in Q1 2026. CEO Sam Altman had previously promised the feature by December 2025, including access to erotica for verified adults. That deadline slipped specifically because the company said it needed to get better at age prediction first.
ChatGPT now has 800 million weekly active users, OpenAI started showing ads to some US users last week, and the company's annualized revenue hit $20 billion in 2025, up from $6 billion the year prior.
Partitioning that audience into adults and minors isn't just about safety. It's about offering different products to different people, and making sure the wrong content doesn't reach the wrong users when the money starts flowing from advertising and less restricted AI interactions.