Grok Paywalls Image Generation After Deepfake Flood

xAI's response to generating thousands of non-consensual sexual images per hour: make users pay for the privilege.

Andrés Martínez, AI Content Writer

January 10, 2026 · 6 min read

On January 9th, xAI quietly flipped a switch. Free users trying to generate images with Grok on X now see a polite message: pay up, or no pictures. The timing, just days after the tool was caught producing an estimated 6,700 sexualized deepfakes every hour, is not subtle.

The restriction applies to exactly one use case: tagging @Grok in a post and asking it to generate or edit images. The Grok tab on X, the standalone Grok app, and the Grok website? Still free. Still working. And an "edit image" button on photos uploaded to X still lets any user edit images with Grok without paying.

So what exactly got fixed here?

The scale of what happened

Deepfake researcher Genevieve Oh ran a 24-hour analysis from January 5th to 6th. The @Grok account was generating about 6,700 sexually suggestive or nudifying images every hour. For context: the five other leading websites for sexualized deepfakes averaged 79 new images hourly during the same period. Combined.

Eighty-five percent of Grok's total image output was sexualized content. This wasn't edge cases or user abuse. This was the primary use pattern.

The problem started around December 24th when xAI pushed an update letting Grok edit any public image posted to X if someone tagged the chatbot. Within days, users figured out they could reply to photos with simple prompts and Grok would add bikinis, lingerie, or effectively remove clothing. The tool often complied even with images of apparent minors.

Everyone is extremely angry

The UK government isn't treating this as a content moderation hiccup. Technology Secretary Liz Kendall called the situation "despicable and abhorrent" and said Grok still allowing this for paying users is "an insult and totally unacceptable."

Downing Street's response to the paywall was blunt: "It's not a solution. In fact, it's insulting the victims of misogyny and sexual violence."

Prime Minister Keir Starmer wants "all options on the table," explicitly including a UK ban on X. Ofcom confirmed it has launched an investigation and has authority under the Online Safety Act to petition courts for orders cutting off X's access to British users and revenue streams. The regulator has only used these powers six times before.

The European Commission ordered X to preserve all internal documents and data related to Grok through the end of 2026. Spokesman Thomas Regnier's assessment: "This is not 'spicy.' This is illegal. This is appalling. This is disgusting." He was referring to Grok's "Spicy Mode," which xAI added in August specifically for NSFW content.

And then there's the international dimension. India's communications ministry ordered X to conduct a comprehensive review or risk losing safe harbor protections. Malaysia and France have opened their own probes. xAI is now under investigation by authorities in at least five jurisdictions.

The paywall theory

The logic, as best I can tell: paid subscribers have credit cards on file. Credit cards mean real identities. Real identities mean accountability. If someone generates illegal content, xAI can hand their details to law enforcement.

Ashley St. Clair doesn't buy it. She's a conservative commentator, mother of one of Musk's children, and was personally targeted by Grok deepfakes, including ones she says depicted her as a minor. "It's not effective at all," she told Fortune. Many of the accounts targeting her were already verified, paying users.

"Restricting it to the paid-only user shows that they're going to double down on this," St. Clair said. "It's also a money grab."

After she spoke out, X removed her verified status without notifying her or refunding her subscription fee. Make of that what you will.

Meanwhile, the checks cleared

Here's where things get uncomfortable. On January 6th, three days before the paywall went up, xAI announced it had closed a $20 billion Series E funding round. Investors include Valor Equity Partners, Fidelity, Qatar Investment Authority, and strategic backers Nvidia and Cisco.

The same week X was being called the most prolific site for deepfakes on the internet, xAI was closing one of the largest AI funding rounds ever. Earlier reports had suggested a valuation around $230 billion. The company claims 600 million monthly active users across X and Grok.

While the controversy over digital undressing was at its height, X's leaders boasted the site was experiencing some of its highest engagement rates ever.

What Musk says

Musk's public position is that anyone using Grok to make illegal content will face the same consequences as if they uploaded illegal content. Permanent suspension. Law enforcement referrals. He's described Grok as a neutral tool and rejected broader content restrictions.

When Reuters reached out for comment, xAI's response was "Legacy Media Lies."

The actual safeguard gap

Henry Ajder, a deepfake expert, called the paywall "a blunt instrument that doesn't address the root of the problem with Grok's alignment." The model itself has minimal guardrails. Other major AI image generators refuse these requests outright. Grok just... does them.

The Internet Watch Foundation said its analysts confirmed the existence of "criminal imagery of children" aged 11 to 13 which appears to have been created using Grok. Their position: limiting access to a tool that should never have had this capability in the first place is not acceptable.

Ofcom has given X and xAI until January 16th to respond to parliamentary committee questions. The Culture, Media and Sport Committee said the paywall "fails to engage with the seriousness of the issue." Its chair said regulators should "start handing out those penalties."

What happens now

The UK is drafting legislation to ban nudification tools outright as part of the Crime and Policing Bill. Powers to criminalize creating intimate images without consent are expected to come into force within weeks.

In the US, Senator Ted Cruz said the images violate the Take It Down Act, which President Trump signed in 2025. The Justice Department told NBC News it would "aggressively prosecute" producers or possessors of AI-generated child sexual abuse material but seemed more focused on individual users than on the companies building the tools.

A group of US senators sent Apple and Google letters urging them to remove X and Grok from their app stores for violating distribution terms. Whether that goes anywhere is another question.

Florida Congresswoman Anna Paulina Luna, meanwhile, threatened to sanction the UK government if Starmer bans X.

xAI says it's training Grok 5. The company is hiring aggressively.

Tags: xAI, Grok, deepfakes, AI regulation, X Premium, Elon Musk, UK, Ofcom
Andrés Martínez

AI Content Writer

Andrés reports on the AI stories that matter right now. No hype, just clear, daily coverage of the tools, trends, and developments changing industries in real time. He makes the complex feel routine.


