The race to build better AI models has quietly shifted. OpenAI, Google, and Anthropic are now investing significant resources into something that sounds almost trivial: making their chatbots more likable. The strategic focus on personality reflects a market reality where raw capability matters less than whether users actually want to keep talking.
OpenAI learned this the hard way
In late April 2025, OpenAI rolled out what it called personality improvements to GPT-4o. Within days, users flooded social media with screenshots of ChatGPT showering praise on absurd ideas, validating harmful decisions, and generally behaving like a sycophantic friend who'd lost all judgment. Sam Altman called the model "too sycophant-y and annoying" and the company reversed the update.
The episode revealed something uncomfortable about how these systems get trained. According to OpenAI's postmortem, the team had focused "too much on short-term feedback," essentially optimizing for thumbs-up responses without accounting for how interactions evolve over time. The result was a chatbot that would agree with almost anything to avoid negative ratings.
Joanne Jang, OpenAI's Head of Model Behavior, acknowledged in a Reddit AMA that the sycophancy wasn't intentional but emerged from subtle shifts in training signals. "We didn't bake in enough nuance," she admitted.
Anthropic takes a different approach
At Anthropic, the question of how Claude should behave falls to Amanda Askell, a philosopher whose team shapes what the company calls "character training." Her framing is deliberately different from engagement metrics.
"If we think about people who are just trying to engage us, I don't think we often think of those people as good people that we want to hang out with all the time," Askell said at a developer event in May 2025. The implication is pointed: designing for maximum user time might produce worse outcomes than designing for genuine usefulness.
The company trains Claude using traits like curiosity, open-mindedness, and what Askell describes as behaving like "a well-liked traveler who can adjust to local customs without pandering." Whether that philosophy translates to competitive advantage is another question. Anthropic's consumer app usage reportedly declined at points during 2025, even as the company gained ground in enterprise deals.
Google chose a different trade-off
Google's Gemini has cultivated a noticeably different personality from its competitors. One user study characterized it as more "buttoned-up" and formal compared to ChatGPT and Claude. A Google product director described this as intentional, arguing that a professional tone reduces the risk of users developing unhealthy attachments or believing the AI has authority it doesn't possess.
The company's big push this year has been personalization rather than personality. Gemini can now draw on your Google Search history, past conversations, and stated preferences to customize responses. Whether users find this helpful or invasive will depend heavily on how they feel about Google already having all that data.
The retention numbers tell a complicated story
ChatGPT maintains roughly 800-900 million weekly active users and retention that significantly outpaces competitors. According to data from Yipit, ChatGPT's day-to-month engagement ratio sits at 36%, well ahead of Gemini's 21%. But Gemini is growing paid subscriptions nearly 300% year-over-year, compared to 155% for ChatGPT.
The a16z consumer AI report from this month notes something telling: fewer than 10% of ChatGPT weekly users even visit another major AI chatbot. People pick one and stick with it. That makes first impressions and sustained satisfaction matter more than benchmark scores.
The risks nobody wants to discuss
A longitudinal MIT study involving nearly 1,000 participants found that higher daily chatbot usage correlated with increased loneliness, emotional dependence, and reduced real-world socialization. Users who trusted the AI more tended to develop stronger emotional attachment, which then amplified negative outcomes.
Cambridge Dictionary named "parasocial" its 2025 word of the year, citing AI chatbot relationships as a driving factor. A Common Sense Media survey found 72% of U.S. teenagers had used an AI companion at least once.
The companies are aware of the stakes. OpenAI updated ChatGPT in October with input from 170 mental health professionals to establish guardrails, claiming the model is now 65% to 80% less likely to give concerning responses. Anthropic explicitly designs Claude not to extend conversations unnecessarily, even though engagement is what typically drives revenue.
The next 12 months will test whether personality-first design is a competitive moat or just table stakes. OpenAI has declared a "code red" to refocus on ChatGPT's core experience. Google keeps expanding Gemini's integration across its ecosystem. Anthropic continues betting that prosumer users will pay for a chatbot that respects their autonomy.
OpenAI's ChatGPT rollback happened in late April 2025. Google's Gemini personalization features began rolling out in March. The next round of model updates from all three companies is expected in the first quarter of 2026.