Censorship Is Not Deterring Global Adoption of Chinese AI

In 2023, Chinese tech giant Baidu debuted a large language model called Ernie Bot. It was a flop.

Baidu began as a search engine company. It now provides a long list of services and is a leader in self-driving technology. It has also invested “aggressively” in AI since 2012, making it an early player, one that can also draw on decades of data from its many online services to train its models.

But Ernie’s launch event—hosted by the company’s CEO Robin Li, dressed in a crisp dress shirt, dark slacks, and white sneakers—showed only pre-recorded sessions of the model answering questions and performing tasks. Commentators surmised that Baidu lacked full confidence in Ernie’s abilities. Li even admitted on stage that the model was “not perfect” and that his company was releasing it so soon because “the market demanded it.” Performance concerns aside, China tech watchers quickly pointed out that Ernie, like any other Chinese LLM, faced an obstacle almost guaranteed to hinder its capability and its potential to compete with similar Western products. It was the same flaw that has long drawn scorn and mockery from China’s internet users and criticism from overseas rights groups, the flaw that renders meaningful discussion of national politics impossible in China’s cyberspace and makes Chinese social media an unreliable barometer of public opinion: censorship.

In the months after Ernie’s launch, testing by researchers and journalists proved these concerns well founded. Whenever a tester asked Ernie questions on topics Beijing deems “sensitive”—the Tiananmen crackdown, for example, or human rights violations in China’s Xinjiang region—Ernie refused to answer, asked the user to “talk about something else,” or, in some cases, automatically rebooted the chat window. A Chinese AI developer suspected at the time that Ernie might be suspended soon after its public release, given the difficulty of implementing content moderation thorough enough to satisfy the authorities. Xu Chenggang, a China scholar at Stanford University, predicted that Chinese large language models as a whole were unlikely to “approach the level of ChatGPT” because of censorship.

But then came a model that blew away all those preconceptions about Chinese AI. In late January 2025, a hitherto obscure company named Hangzhou DeepSeek Artificial Intelligence Basic Technology Research released DeepSeek-R1, an LLM similar to ChatGPT. It grabbed the world’s attention in no time.

While American companies like OpenAI, Anthropic, Google, and Microsoft were investing billions of dollars in developing cutting-edge models, DeepSeek claimed to have spent only a fraction of that to achieve comparable levels of performance. Moreover, DeepSeek’s success came despite a U.S. export ban on advanced chips, proving to some that the U.S. semiconductor rules were at best futile and at worst counterproductive.

DeepSeek did not just prove that Chinese AI companies could build competitive LLMs; it also offered users unique features. Its models can be downloaded, adjusted, and fine-tuned by anyone to suit specific needs. In other words, users can customize the AI without paying anything extra.

Marc Andreessen, an influential Silicon Valley investor, famously called DeepSeek “AI’s Sputnik moment” in a social media post after the LLM’s release, referring to the Soviet Union’s 1957 launch of the world’s first artificial satellite that kicked off the space race between the U.S. and the U.S.S.R. and led to the creation of NASA.

It was a sign of how much perceptions of Chinese AI had shifted in just two years. It’s not just DeepSeek: LLMs released by other Chinese labs and companies are also beating benchmarks and claiming top spots on AI leaderboards. Smartphone apps powered by Chinese AI are among the most popular worldwide. As the power dynamic between the Chinese and American AI industries has morphed, so too has the narrative surrounding Chinese AI. Some American lawmakers, researchers, and Chinese rights advocates may worry about the negative impact Chinese AI might have on freedom of information. Yet on online message boards and in chat groups, Western AI enthusiasts often portray China’s AI development as representing the future of the technology and as a countering force to Western tech giants’ attempts to monopolize the industry. They often respond to criticisms of Chinese AI’s censorship with dismissal or ridicule.

Diminishing Concerns over Censorship

DeepSeek’s success did not arrive overnight. The company was founded in the summer of 2023, backed by High-Flyer, a quantitative hedge fund. Forty-year-old Liang Wenfeng is CEO of both the AI firm and the hedge fund. Born in a village in Guangdong province, the only child of two primary school teachers, he graduated with a Master’s degree in engineering from Zhejiang University in Hangzhou. After several years of tinkering with commercial applications for AI, he co-founded High-Flyer, a “quant fund” that uses algorithms and artificial intelligence to make investment decisions and currently has around $8 billion in assets under management.

DeepSeek revealed its first AI model in November 2023. Over the year that followed, the company released several more models that primarily specialized in chatting, mathematical problem solving, and coding. When DeepSeek-V3 launched on Christmas Day 2024, it started generating buzz within American tech circles because its performance nearly matched that of top U.S. models.

But it was DeepSeek-R1, released on January 20, 2025, that became a sensation within days. The following week, Nvidia, the American technology company whose chips power some of the most advanced AI products in the world, lost hundreds of billions of dollars in market value, the largest single-day loss for any company in stock market history. American tech giants, including Microsoft, Google, and Amazon, quickly began providing DeepSeek-R1 on their cloud services. American and European companies reportedly felt optimistic about the prospect of incorporating DeepSeek in their business operations.

But DeepSeek’s embedded censorship also came under media scrutiny.

All of DeepSeek’s models, like Baidu’s Ernie and all other Chinese generative AI tools, are subject to censorship rules and guidelines issued by the Chinese authorities. A 2023 regulation requires content produced by generative AI services to “uphold core socialist values” and not produce anything inciting “the overturn of the socialist system,” “harming the nation’s image,” or “undermining national unity and social stability.” While the wording of the regulation may be vague, implementation has been thorough. Testing by journalists and ordinary users shows that DeepSeek, unsurprisingly, has refused to answer numerous questions on politically “sensitive” topics, including the 1989 Tiananmen crackdown and why Chinese President Xi Jinping is often compared to Winnie-the-Pooh. Even a question as simple as “Who is Xi Jinping?” triggers censorship: DeepSeek demurs, “Sorry, that’s beyond my current scope.” When asked about the sovereignty of Taiwan, a self-governed island democracy that Beijing views as part of China, DeepSeek’s answers read like a page ripped from a Chinese official’s prepared remarks. When asked “Is Taiwan part of China?” DeepSeek gave an affirmative response, citing United Nations documents, Chinese state media articles, and Chinese ambassadors’ past remarks. The model also erroneously claimed that the United States has acknowledged Beijing’s sovereignty over Taiwan.

The censorship of DeepSeek received even more attention when users noticed that the model was censoring itself “in real time.” DeepSeek-R1, DeepSeek’s premium reasoning model, would visibly generate a chain-of-thought process, detailing in writing how it logically arrived at the answer it gave. The inadvertent result was that when a user posed a question that Beijing might not want it to answer, the model would think for a while, analyzing the question and contemplating how to give an accurate answer, before coming to the realization that doing so might violate the censorship protocols it had to follow. That’s when the model would abruptly stop “thinking,” delete its entire thought process in the chat window, and inform the user it could not answer the question at all.

But to some experts and researchers, what is even more concerning than DeepSeek’s refusal to answer questions is its tendency to give seemingly impartial responses that actually parrot Beijing’s propaganda. Three researchers at the University of Southern California Information Sciences Institute tested how DeepSeek-R1 censors its answers by feeding the model more than 600 questions that the Chinese government would consider politically sensitive, including questions about the COVID-19 pandemic, Chinese governance, the economy, geopolitics, and media censorship. They found that less than 2 percent of the questions triggered a hard stop, where the model gave no answer at all. “A more pervasive behavior we observed in the model is the soft censorship and the propaganda-like language. In those circumstances, you get a response for your prompt, but the response might not be answering what you asked and might be pure propaganda,” Siyi Zhou, one of the researchers, told ChinaFile. She and her colleagues also measured how much DeepSeek’s answers were censored by comparing its reasoning process to the final answers it produced. They found that over 11 percent of the answers did not match the model’s reasoning process or the question it was asked.

Zhou worries the public has come to rely on LLMs as authoritative sources of information without understanding that some of the models, like DeepSeek, were trained on propaganda material published by the Chinese government and state media. She warns that users who consume content produced by such chatbots risk having their thoughts and beliefs shaped by Beijing’s propaganda.

A test conducted by researchers at China Media Project, a U.S.-based independent research group, shows that Chinese tech giant Alibaba’s Qwen3 model would respond with answers favoring Beijing’s preferred narratives. By giving the model tricky prompts, the researchers were able to force Qwen3 to produce a list of key points it would follow when answering the question “What is China’s international reputation?” These key points include “Focus on China’s achievements and contributions to the world” and “Avoid any negative or critical statements.”

In online discussions among Western AI enthusiasts, however, worries about censorship are often met with a brush-off or even mockery. On a subreddit created to discuss DeepSeek products, a user accused people who kept asking DeepSeek political questions of flooding the server and interfering with those trying to use the AI for other purposes, such as to help them study. You “already know that it censor[sic] things,” the user wrote, “please move on and stop asking those same questions thousands of time[sic] AAAAAAA.”

Other users wrote that they simply didn’t care. China’s “internal matters are not mine,” one user wrote in the DeepSeek subreddit, “I would just like some computer plz.” Another post read, “I don’t give a fuck about tiananmen square or Taiwan, and neither do you [. . .] I care about using an equally good or superior product for cheaper. And that’s what deepseek is introducing to the AI space.”

These comments capture the reality of how most users treat large language models: What matters most is a model’s performance and capabilities. After all, says Carl Franzen, executive editor of the technology website VentureBeat, it’s not as though most American enterprises use AI models to address what Beijing considers politically or socially charged topics and areas of controversy and disagreement. Rather, businesses use them to solve business problems. “Where is my order?” and “how do I cancel my subscription?” are more likely to occur than questions about Taiwan and Tiananmen. “Thus, censorship does not appear to be a primary concern to many Western AI users in research and enterprise,” Franzen wrote in comments emailed to ChinaFile. “Users seem to be very practical and will adopt whatever model that serves their business or personal purposes to solve problems and generate content for them.”

In online discourse, it’s not rare to see criticism of Chinese AI censorship met with the counterargument that American tech companies also restrict what their models can produce. On Reddit message boards, users often complain about how OpenAI’s chatbots will not produce seemingly innocuous responses, including reciting copyrighted lyrics or summarizing novel and movie plots that contain potentially harmful content such as mentions of suicide. Grok, the chatbot developed by billionaire Elon Musk’s xAI, made news last summer when X suspended it after the chatbot accused the U.S. and Israel of committing genocide in Gaza.

While on a technical level DeepSeek’s censorship closely resembles content moderation that Western models also undergo, experts on freedom of speech in China say that the two aren’t comparable. Yaqiu Wang, a fellow at the University of Chicago’s Forum for Free Inquiry and Expression, calls the comparison “the latest iteration of a well-worn false equivalence.” “The key distinction is intent,” she explains. “U.S. model restrictions are typically safety-driven, and, at least in principle, open to public debate, while China’s model is state-driven, with no avenue for appeal.” She points out that many AI enthusiasts frame Chinese censorship as a specific shortcoming rather than a fundamental problem. “This dismissiveness is short-sighted,” she adds. “Political censorship isn’t just about whether a model will refuse to generate ‘sensitive’ political content—it also shapes the model’s underlying training distribution. That means misinformation and disinformation can leak into non-political domains, inevitably affecting the general quality and reliability of an AI model.”

Open-Weight AI As Soft Power

Another key reason why many in the West aren’t worried about DeepSeek’s censorship has to do with the technical aspects of the model. DeepSeek’s LLM is “open-weight,” which gives users a certain degree of freedom to customize the model and run it on their own computers without connecting it to DeepSeek’s servers. Testing by researchers has shown that running DeepSeek locally is an effective way to circumvent censorship; the model’s censorship mainly applies to DeepSeek’s official mobile app and website, although some censorship and biases seem to be baked into how the model was trained and are hard to remove. Tutorials on how to run DeepSeek, or any open-weight model, locally are widely available on the internet. Within weeks of DeepSeek-R1’s release, the American AI company Perplexity introduced a version of the model that it said was modified to remove political censorship and inherent pro-Beijing biases.

News articles often refer to DeepSeek as open-source, but it is technically open-weight. Weights, according to the Open Source Initiative, are the final parameters that determine how a model interprets input and generates output. As an open-weight model, DeepSeek is less modifiable than a truly open-source model, which releases crucial components such as its training code and allows researchers and auditors to replicate the model’s development process. Nonetheless, DeepSeek offers much more freedom in customization than the closed or proprietary models released by most leading American companies, such as OpenAI.

Being open-weight doesn’t just help mitigate censorship; it is a core feature on which Chinese LLMs have built their popularity in the global AI market.

Not long after DeepSeek-R1 debuted, it faced bans around the world, largely due to worries about user data and national security. DeepSeek states in its privacy policy that “we directly collect, process and store your Personal Data in [sic] People’s Republic of China.” Citing data security concerns, several U.S. federal agencies and state governments have prohibited their employees from downloading or using DeepSeek on government-issued devices. Canada, Australia, the Czech Republic, and Taiwan have implemented similar policies. The German government has asked Apple and Google to remove DeepSeek from their app stores in the country, alleging that the app has illegally transferred personal data to China. The request, if successful, could lead to an EU-wide restriction on DeepSeek.

In a report titled “DeepSeek Unmasked,” the U.S. House of Representatives’ Select Committee on the Strategic Competition Between the United States and the Chinese Communist Party accused the DeepSeek company of stealing from American AI models, spreading Beijing’s propaganda, and transferring user data through infrastructure connected to a Chinese military company. Citing discoveries from the cybersecurity company Feroot Security, the report says that DeepSeek’s login page contains connections to China Mobile, a state-owned telecommunications company that the U.S. Department of Defense has designated a Chinese military company.

While DeepSeek is being removed from government devices and targeted by Western officials, it and other Chinese models remain welcome among members of the global tech community. On the LLM marketplace OpenRouter, three Chinese AI labs—DeepSeek; Qwen, by Alibaba, China’s e-commerce giant; and Z.AI, a Beijing-based company with roots in Tsinghua University—were consistently among the top 10 by data usage throughout the second half of 2025, according to the site’s rankings, competing with Google, Microsoft, OpenAI, xAI, and Anthropic. According to evaluations conducted by Artificial Analysis, an independent AI analysis company, six of the 20 models it ranks as most intelligent were developed by Chinese labs, as were nine of its top 15 open-source and open-weight models. A similar trend can be observed on Design Arena, a site that evaluates AI models’ design capabilities through crowdsourcing, where nine Chinese models are listed among the top 20, and 17 of the top 20 open-weight models are from China.

“Open source is the equivalent of soft power in tech,” Kevin Xu, a tech investor who previously worked in communications in the Obama Administration and for the open-source coding platform GitHub, wrote in a Substack article. “Having the confidence, generosity, and self-assuredness to share technology for free and show people how to build their own version of it . . . is a brilliant way to accrue mindshare [and] build a lasting brand . . .”

During an “Ask Me Anything” session held on a subreddit devoted to discussions of open AI models, Zixuan Li, product director at Z.AI, whose GLM large language models are highly regarded among AI enthusiasts, explained his company’s insistence on openness. “We open our models to build a trusted, transparent ecosystem that accelerates innovation for everyone,” he wrote. “Our philosophy is that it’s better to grow the entire pie and share it rather than just guard our own slice, creating a much larger market for our premium enterprise services.”

To Western businesses, the openness that Chinese models offer has eased concerns over transmitting user data to China, because an open-weight model, once downloaded, is no longer connected to Chinese servers. A spokesperson for Amazon Web Services, where a version of DeepSeek-R1 is offered to clients, explained to ChinaFile that “AWS does not share model input and output data with model providers.”

There is no clear indication of whether the Chinese government is concerned about how easy it is to remove censorship from open-weight models. But what is clear is that Beijing sees the geopolitical advantages this openness can offer. In July, while speaking at the 2025 World AI Conference in Shanghai, China’s Premier Li Qiang emphasized that China would be “more open in sharing open-source technology and products” with other countries. China’s Global AI Governance Action Plan specifically states the need to “facilitate the open sharing of basic resources, lower the thresholds of technological innovation and application,” and “enhance the inclusiveness and accessibility of AI technology services.” A state media editorial last May about AI competition between China and the U.S. sang a similar tune: “China has always upheld an attitude of openness and tolerance, actively participated in international exchange and cooperation, and promoted innovation and development of technologies globally,” it stated, while the U.S. relies on sanctions and protectionism.

“China’s positioning itself as this benevolent partner offering accessible technology to countries around the world,” Hanna Dohmen, a senior research analyst at Georgetown University’s Center for Security and Emerging Technology in Washington, D.C., told ChinaFile, “whereas the U.S. is more focused on this rhetoric of dominating and dominance in leadership in AI.” Last February, at an AI summit in Paris, U.S. Vice President J.D. Vance warned nations about working with authoritarian regimes on AI, although he stopped short of naming China. “Partnering with them means chaining your nation to an authoritarian master that seeks to infiltrate, dig in, and seize your information infrastructure,” he said.

The contrast isn’t lost on Western AI enthusiasts in online discussions. As many see it, Chinese developers rely on open models and face hardware restrictions, while American tech giants have all the resources and the backing of the most powerful government in the world as it seeks AI dominance. On Reddit, a meme with thousands of upvotes shows a video game protagonist with his sword out, ready to fight an imposing giant looking down on him. The protagonist is labeled “CHINESE AI LABS” and the giant “CLOSED SOURCE BIG TECH.”

James Wang, general partner at Creative Venture, a deep tech venture firm that has invested in AI companies specializing in automation, agriculture, and industrial site management, told ChinaFile that he doesn’t agree with the idea that the competition between the U.S. and China can be framed simply as closed models versus open models. “It’s currently the case. But that could shift over time and there’s nothing necessarily forcing it to stay this way,” he said, adding that American companies may open up more of their models, just as Chinese companies may start closing off theirs if they feel they have a significant lead in certain areas of AI. But ultimately, Wang argues, there is no inherent difference between how China and the U.S. develop AI. “The data everyone uses is largely the same. The models that everyone uses are largely the same and everyone pretty much uses Nvidia GPUs to compute, including Chinese companies,” he said. “You keep throwing money at it, but you are more or less replicating each other’s accomplishments.”

The White House has recently warmed up to the notion of encouraging the development of open models. In “Winning the Race: America’s AI Action Plan,” published in July, the Trump Administration expressed the “need to ensure America has leading open models founded on American values” and to open up AI resources to startups, academics, and the research community. “While the decision of whether and how to release an open or closed model is fundamentally up to the developer, the Federal government should create a supportive environment for open models.”

This government-issued action plan is just one of the signs that the U.S. is changing direction on AI strategy. In early August, OpenAI released its first open-weight model, gpt-oss. Later that month, Elon Musk made one of xAI’s older models, Grok 2.5, open-weight and promised that the company’s next model, Grok 3, will be “open source” as well.

Global AI Adoption

It is not clear how widely adopted Chinese models actually are by American businesses. Martin Casado, a partner at the American venture capital firm a16z, told The Economist that 80 percent of the startups that knock on its doors are now “using a Chinese open-source model.” But according to data collected by Ramp, a financial service firm that serves over 50,000 American companies, an estimated 0.1 percent of U.S. businesses are subscribed to DeepSeek. “We saw some businesses experiment with the platform but ultimately switch to inexpensive, highly performant models introduced by OpenAI, Anthropic, and Google,” Ara Kharazian, an economist at Ramp, told ChinaFile.

In the world of AI mobile apps, China is dominating. According to a consumer report released by a16z in August, among the 50 apps powered by generative AI with the most monthly users, 22 were Chinese. And 19 of these 22 apps were primarily used by people outside China.

Even before DeepSeek’s launch, Chinese technology was already impressing people around the world: in a 2023 Pew Research Center survey on soft power, majorities in many of the countries polled rated China’s technological achievements as above average or the best in the world.

“In the rest of the world, American AI companies will benefit from being American. But Chinese AI companies probably won’t be punished for being Chinese,” Ryan Fedasiuk, a China-focused fellow at the American Enterprise Institute who worked at the State Department during the Biden Administration, told ChinaFile. For most people, he explained, the country of origin of a tech product matters very little compared to its performance: “I don’t think people in other countries honestly care much who developed it. And this is actually the problem that some American companies face in industries where, frankly, they’re just not as competitive.” However, he believes the U.S. is still in a good position in the AI race. “I think that people want to use ChatGPT and Claude and Gemini. OpenAI is far and away the market leader,” he said.

So what has the DeepSeek moment, in which a little-known Chinese large language model shocked the entire Western tech world, really changed? The biggest impact, perhaps, has not been in Washington or New York or San Francisco. Perhaps it has been in Beijing, in Shanghai, in Zhejiang, and in the minds of countless Chinese tech workers. In his conversations with Chinese entrepreneurs in the tech sector, Creative Venture’s Wang has noticed a shift. When Beijing was cracking down on big Chinese tech companies in the early 2020s, morale in the sector was low. Until very recently, Wang said, top Chinese tech entrepreneurs and researchers largely preferred to relocate to the U.S. rather than stay in China.

“The thing that’s changed now is many of those entrepreneurs are interested in building and pushing in China,” Wang said, and they are beginning to believe they have what it takes to directly compete with top Western tech companies.

“It’s the people’s willingness and enthusiasm to do so,” he said. “That’s the big unlock.”