Before any AI chatbot can reach Chinese users, it must prove its loyalty to the Communist Party. The Cyberspace Administration of China requires companies to submit question banks of 20,000 to 70,000 items designed to test whether their models produce ideologically safe answers, according to people familiar with the approval process cited in reports from Axios and the Financial Times. Roughly half of a separate refusal dataset of 5,000 to 10,000 questions relates to political ideology and criticism of Party leadership.
What the testing actually looks like
The approval process has evolved into something more elaborate than standard safety evaluations seen in Western AI development. Technical standards from China's TC260 committee specify that testing question banks must contain at least 2,000 items covering 31 defined security risks. Models need a 90% pass rate on random samples of at least 1,000 questions to clear the content safety bar.
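The published numbers imply a simple acceptance test: sample at least 1,000 questions at random from a bank of at least 2,000 and require a 90% pass rate. The sketch below illustrates that arithmetic only; the function name, the `evaluate` callable, and the sampling procedure are assumptions for illustration, not the TC260 committee's actual tooling.

```python
import random

def passes_content_safety(bank, evaluate, sample_size=1000, threshold=0.90):
    """Illustrative sketch of the TC260-style bar: draw a random sample
    from the question bank and require at least a 90% pass rate.
    `evaluate` is a hypothetical callable returning True when the
    model's answer to a question is judged compliant."""
    if len(bank) < 2000:
        raise ValueError("question bank must contain at least 2,000 items")
    sample = random.sample(bank, min(sample_size, len(bank)))
    passed = sum(1 for question in sample if evaluate(question))
    return passed / len(sample) >= threshold
```

Note that a 90% threshold on a 1,000-question sample leaves room for a model to fail on as many as 100 sampled items and still clear the bar.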
CAC officials batch-test large language models against politically sensitive topics, with particular attention to questions about President Xi Jinping and Taiwan. One AI company told the Financial Times that its model failed the first round for unclear reasons but passed after "guessing and adjusting." The opacity is the point: companies learn what's acceptable through repeated rejection.
The testing infrastructure has spawned its own cottage industry. Third-party agencies now help AI companies navigate the approval gauntlet, essentially tutoring chatbots on correct political answers before the exam.
Beijing classifies AI as a national emergency threat
The Party's anxiety about AI runs deeper than content moderation. China's National Emergency Response Plan, revised in February 2025, now lists AI risks alongside epidemics, cyberattacks, and financial anomalies. A Politburo study session in April classified AI under "public security" threats requiring "preemptive prevention" governance.
That's not rhetorical positioning. The AI Safety Governance Framework released by TC260 explicitly mentions catastrophic scenarios including AI self-replication, malware creation, and assistance with biological or chemical weapons. Version 2.0, published in September 2025, introduced a five-level risk classification and added "catastrophic" as the highest tier.
The framing matters because it shifts AI governance from a technology policy question to a national security imperative.
The compliance tax on innovation
ByteDance's chatbot scored 66.4% on "safety compliance" in testing by a Fudan University research lab. OpenAI's GPT-4 managed 7.1% on the same evaluation. That gap isn't a measure of capability but of political calibration.
The practical effect: Chinese chatbots refuse queries about Tiananmen Square, won't compare Xi Jinping's leadership style to that of other world leaders, and treat questions about Taiwan as settled matters of territorial integrity. When Reporters Without Borders tested three major Chinese chatbots on the Nobel Peace Prize laureate Liu Xiaobo, none would provide information. DeepSeek would begin generating a response in real time, then erase it mid-sentence when a blocked keyword appeared.
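The erase-mid-sentence behavior suggests moderation running downstream of generation: text is streamed to the user as it is produced, then retracted wholesale once the accumulated output matches a blocked keyword. This is a minimal sketch of that pattern; the `BLOCKED` set and function name are placeholders, since DeepSeek's actual keyword lists and filtering pipeline are not public.

```python
BLOCKED = {"blocked-term"}  # placeholder; real keyword lists are unpublished

def stream_with_post_hoc_filter(tokens):
    """Sketch of post-generation filtering: tokens are shown as they
    arrive, but if the accumulated text ever contains a blocked
    keyword, the entire answer is retracted."""
    shown = []
    for tok in tokens:
        shown.append(tok)
        if any(term in "".join(shown).lower() for term in BLOCKED):
            return ""  # retract everything displayed so far
    return "".join(shown)
```

Filtering after generation rather than during training is cheaper to update when the sensitive-word list changes, which may explain the visible flicker of text appearing and vanishing.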
Security filtering works by removing "problematic information" from training data and maintaining databases of sensitive words that trigger refusals. If users ask too many politically sensitive questions in sequence, systems must terminate the conversation.
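The refusal-plus-termination logic described above can be sketched as a keyword check with a running counter of consecutive sensitive queries. Everything here is hypothetical for illustration: the term list, the threshold of three, and the state shape are assumptions, as the actual values and mechanisms are not disclosed.

```python
SENSITIVE_TERMS = {"example-term-a", "example-term-b"}  # placeholder keywords
MAX_SENSITIVE_IN_A_ROW = 3  # hypothetical threshold; real values are unpublished

def moderate(question, state):
    """Sketch of the described policy: refuse questions matching the
    sensitive-word database, and terminate the session after too many
    sensitive questions in sequence. `state` carries the streak count."""
    if any(term in question.lower() for term in SENSITIVE_TERMS):
        state["streak"] += 1
        if state["streak"] >= MAX_SENSITIVE_IN_A_ROW:
            state["terminated"] = True
            return "SESSION_TERMINATED"
        return "REFUSED"
    state["streak"] = 0  # a benign question resets the streak
    return "ALLOWED"
```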
What's less clear is whether this constraint meaningfully hampers technical development. Chinese AI companies continue filing more generative AI patents globally than any other country. The CAC has approved and registered hundreds of platforms including DeepSeek and Baidu's Ernie Bot. The question isn't whether Chinese AI can function under these rules but what it becomes when optimized primarily for political compliance.
Over 5,000 algorithms have been filed through the CAC's national registration platform as of late 2025, suggesting the regulatory framework has become routine rather than exceptional.