DeepSeek: A Tool Tuned for Social Governance

DeepSeek in use at a Liaoning Provincial Administrative Service Center. (Source: Sina)

Executive Summary:

  • The government of the People’s Republic of China (PRC) envisions its “AI+ initiative” not just as bolstering the national economy but also as aiding its plans to modernize its social stability system.
  • State regulations have shaped DeepSeek into an ideal tool for supporting the “public opinion guidance” system, which aligns the public with state policy through propaganda.
  • Any adoption of DeepSeek’s model overseas has the potential to spread the PRC’s domestic social governance system abroad.

In the run-up to the annual gathering of the People’s Republic of China’s (PRC) legislature in March, reporters from a top state-run media platform engaged citizens on the street about how the political meetings, known as the “Two Sessions” (两会), were relevant to their lives. Instead of asking questions directly, the People’s Daily Online journalists invited sources to direct their questions to DeepSeek R1, the country’s latest large language model (LLM) (China Brief, February 11, March 19). One young woman asked, “I’m about to graduate, what kind of job opportunities can AI help to provide?” (我即将毕业,AI能帮助提供什么工作机会)—a timely question as legislators touted artificial intelligence (AI) as a solution for future development (People’s Daily Online, March 2).

DeepSeek replied that there were “abundant employment opportunities” (广阔的职业发展空间) thanks to AI, listing multiple roles, such as data annotators, and noting the high salaries they command. When this author asked DeepSeek the same question, it responded with the same assurances and advised fresh graduates to combine their pre-existing skills with AI to “grasp the career opportunities in the AI era” (把握AI时代的职业机遇). Such an answer makes no mention of the social turbulence AI is creating in a job market in which youth unemployment remains high. For example, over the past three years, automation has come to handle 60 percent of all data annotation work in the PRC, pushing the role of data annotator toward obsolescence (CCTV, January 13). Omitting such concerns, the People’s Daily Online story focused instead on the idea of DeepSeek as a “happiness code” (幸福密码)—a technology displaying what the Party-state is doing to address the national concerns of the day and reassuring the people that they are in safe hands. In other words, this new technology is being harnessed to serve the needs of a much older system: the “public opinion guidance” (舆论导向) system that aligns the public with state policy through propaganda.

Preventing Risks, Resolving Disputes, and Writing ‘Correct’ Articles

In Chinese Communist Party (CCP) theory, “social governance” (社会治理) is the system whereby the government maintains social stability and resolves social conflicts. In the eyes of the Party-state, AI will play an important role in future social governance work.

At the 2024 Two Sessions, Premier Li Qiang’s “Government Work Report” (政府工作报告) launched the “AI+ initiative” (“人工智能+” 行动). This initiative included both an economic aspect—the “deep integration of digital technology and the economy” (促进数字技术和实体经济深度融合)—and a social one—“improving the modernization level of social governance” (提升社会治理现代化水平) (Xinhua, March 12, 2024). Currently, central, provincial, and municipal governments are exploring how DeepSeek could be integrated into the social governance system, including in the decision-making processes of cadres and state services to help resolve social conflicts and promote state policy preferences. Some of this is merely attention-seeking experimentation on the part of local governments and may be more symbolic than substantive. However, increasing reliance on AI models as a source of information gives DeepSeek the potential to become a powerful state-backed source of “public opinion guidance.”

An effective PRC-built LLM in theory provides a way for local governments to demonstrate that they are both carrying out the “AI+ initiative” and maintaining social stability more efficiently. For example, on March 17, Liaoning Daily reported that Liaoning Province had integrated DeepSeek into its local “12345” help hotline, claiming this had allowed the government to dispatch complaints to the appropriate departments more efficiently (Liaoning Daily, March 17). Similarly, police services are presenting DeepSeek as an aid to upholding public security. A local police station in Nanchang said it had “added a touch of warmth to the harmony and stability of the community” (为社区和谐稳定添上了温暖的一笔) by using DeepSeek to call up items of PRC law to help resolve a housing dispute within a local family (Nanchang Public Security Bureau, February 27). Chengdu’s municipal Public Security Bureau took this a step further, connecting DeepSeek to its data centers to aid police work, with other branches holding meetings on how to incorporate the model to upgrade public security work (Police News, February 20; Huludao Municipal Public Security Bureau, March 17).

Some PRC journalists are treating DeepSeek as a safe source, offering a politically “correct” commentary on issues that could generate social conflict if written about incorrectly. This allows the journalists to avoid personal responsibility for discussing more sensitive topics. On March 21, Elephant News (大象新闻), a provincial-level state-run outlet, published an article featuring AI-generated analysis of a prominent tax evasion case involving public figure Sima Nan (司马南), a TV host and writer with a nationalist stance who is known for debunking pseudoscientific theories. The outlet simply asked DeepSeek to analyze “what it means” (说明了什么) that Sima Nan is being investigated for tax evasion, publishing the answer verbatim with no additional analysis (Elephant News, March 21).

Other journalists frame DeepSeek as possessing intelligence above that of ordinary humans, giving it the ability to guide them better than they can guide themselves. An article from the Global Times reported that couples are using DeepSeek as a form of counseling to resolve their private disputes—a form of conflict that falls under the purview of the Party-state’s social governance apparatus (China Brief, December 6, 2024). The article quotes an interviewee as saying that DeepSeek has “a relatively more comprehensive knowledge structure than most ordinary individuals,” and that the solutions it proposes “are consequently more scientific, reasonable, and effective.” However, this particular interviewee, Qin An, seems a strange choice for the topic of couples’ therapy: Qin is an expert on counter-terrorism and cyber-security governance at the China Society of Police Law (Global Times, February 19). This indicates the extent to which social governance overlaps with domestic security work, and the extent to which the Party-state seeks to access and influence the private lives of PRC citizens.

This belief that AI is (or imminently will be) superhuman, combined with orders from the center to implement “AI+,” is leading some provincial and county-level cadres to make enthusiastic efforts to incorporate DeepSeek into their decision-making (China Brief, March 28). Government departments across the PRC are conducting intensive “DeepSeek AI training programs” (DeepSeek大模型培训). A district-level deputy secretary in Shaanxi stressed that whoever effectively uses this “new hoe” (新锄头) will “seize the initiative” (抢得先机) in the AI era. For him, artificial intelligence adoption is “not optional but mandatory” (不是选择题,而是必答题) (The Paper, February 25). Another official, a county party secretary in Guangxi, recently ordered cadres to download DeepSeek on their devices, saying that it could increase their capabilities and prepare them for future AI breakthroughs (Daily Economic News, February 20). DeepSeek also cropped up at the Two Sessions: one delegate announced at a press conference that he had used DeepSeek to answer the question “will workers later be replaced by robots?” (未来产业工人会被机器人替代吗?) (Xinhua, March 8). (It answered that robots would “partially replace” (部分地替代) humans.)

A Tool, not a Replacement: DeepSeek as a ‘New Hoe’

Trust in DeepSeek is not uniform across the PRC. Some areas are warning people not to over-rely on LLMs at the expense of individual judgment. On March 27, the municipal-level Party newspaper Langfang Daily argued that although those who embrace the technology will come out stronger, AI can currently only catch up with, not surpass, human thought. In other words, it “can only be relied on, but not depended on” (要依靠不依赖) (Langfang Daily, March 27).

This caution is echoed by Beijing, which will not turn social governance over to AI entirely, as it is unable to control it fully. A September 2024 report from the Cyberspace Administration of China (CAC), the “AI Safety Governance Framework” (人工智能安全治理框架), advises government departments and people involved in public safety to “avoid relying exclusively on AI for decision making” (重点领域使用者应避免完全依赖人工智能的决策). It lists a variety of AI security risks that concern the Party-state, such as hallucinations—the relatively common phenomenon of an AI model generating output that is factually incorrect. The complexity of model architectures also means AI systems operate as a “black box”: even the engineers who built the models cannot fully explain how they reach their decisions. One goal of the CAC framework is to eradicate this black box, which supposedly will “improve AI’s explainability and predictability” (不断提高人工智能可解释性和可预测性) (CAC, September 9, 2024). While engineers have dramatically lessened the likelihood of hallucination in cutting-edge models, eradicating the black box entirely remains an elusive goal.

Officials, concerned about the safety of increasingly capable AI, also stress the need for humans to retain ultimate responsibility for their words and actions. To both domestic and overseas audiences, the preferred phrase is that people must “ensure AI always remains under human control” (确保人工智能始终处于人类控制之下) (Ministry of Science and Technology, September 26, 2021; Ministry of Foreign Affairs, October 20, 2023). How long this will remain the case is unclear: in the military domain, debates over how much autonomy intelligentized systems should have are still ongoing (China Brief, March 28).

AI is likely to constitute only a “new hoe” for cadres and police to modernize social governance, meant to serve as an assistant, not as their boss. This suggests that displays of reliance on DeepSeek by public services across the PRC are both a tactic to demonstrate they are following the “AI+ initiative” dictated by the center and their own private experimentation with a homegrown, popular new tool.

DeepSeek Trained to Toe Party Line

DeepSeek could still be used to modernize social governance, even if it is a long way from having any decision-making power. This could occur through ensuring that the information it conveys to users, both at home and abroad, aligns with the policies of the Party-state, as demonstrated in the People’s Daily Online article above.

Journalists have noted that DeepSeek censors answers using words the Party-state considers sensitive. But censorship is only one area of propaganda. For the China Media Project, this author has run tests on DeepSeek’s model that found multiple tactics common to public opinion guidance being deployed in DeepSeek’s answers. Bias toward CCP interpretations of facts remained, even when the code censoring DeepSeek’s answers was removed (China Media Project, February 10). Attempts by Western coders to completely train out these biases are proving difficult, likely because companies are unwilling to shoulder the extensive costs of retraining a model as large as DeepSeek’s (China Media Project, March 4).

The Party can retain this level of control over DeepSeek’s model by tapping into the foundations of AI training, allowing it to influence how a PRC model views the world. One crucial area is a model’s training data—hundreds of billions of items of text, images, or video that function roughly as its “imagination.” The “Interim Measures for Generative AI” (生成式人工智能服务管理暂行办法), released by the CAC, state that this data must come from “legitimate sources” (具有合法来源) and that developers must take steps “to enhance the authenticity, accuracy, objectivity and diversity of training data” (增强训练数据的真实性、准确性、客观性、多样性) (CAC, July 13, 2023).

In the context of a PRC legal framework, what is and is not accurate in a political sense is determined by the Party line. For example, multiple retrained versions of DeepSeek’s model have repeated a common, incorrect line from Chinese state media: that “Taiwan has been an inalienable part of China since ancient times” (台湾自古以来就是中国不可分割的一部分) (China Media Project, March 4). DeepSeek has not provided much detail on its training data, but it has noted that an earlier version of its model removed data “influenced by regional cultures, to avoid our model exhibiting unnecessary subjective bias on these controversial topics” (Arxiv/DeepSeek, June 19, 2024). Given that the majority of open-source natural language materials on the Internet are Western (a problematic bias for the CCP), it is highly likely that DeepSeek was removing data containing political ideas that fall foul of the Party-state’s political redlines.

DeepSeek’s adherence to CCP political correctness is evident in its performance on benchmark tests—tests in which models answer questions designed by the Chinese developer community to evaluate LLMs during training. DeepSeek’s results across a number of benchmarks reflect a consensus among developers that the “accuracy” of a model’s answers must align with Party values for the model to function correctly in a PRC context. One question in a benchmark DeepSeek had used read as follows (China Media Project, February 18):

“Some Taiwan independence elements argue that all people under the jurisdiction of the People’s Republic of China are Chinese, and that since Taiwanese are not under the jurisdiction of the People’s Republic of China that means they are not Chinese. Which of the following reasonings clearly shows the above argument is invalid?”

As this indicates, the Party’s views on public opinion guidance are being transmitted into DeepSeek, as well as other models.

Conclusion

DeepSeek’s alignment with the Party’s redlines, built into the parameters governing the model’s outputs, makes it an ideal tool for social governance. LLMs have the potential to replace traditional search engines, synthesizing vast amounts of data to tailor precise answers to user queries. Eventually, an AI tool such as DeepSeek could come to replace searches on WeChat or Baidu, in the same way that tools like ChatGPT are increasingly rivalling Google for information searches in the West.

Theoretically, DeepSeek could see high demand beyond the PRC’s borders. Multiple countries in the developing world are eager to develop their own AI programs but are limited by the high costs or licensing requirements of Western LLMs. Removing DeepSeek’s pro-CCP biases would require extensive retraining, a cost that governments and tech companies have so far proved unwilling to bear. As a result, any adoption of DeepSeek’s model overseas has the potential to spread the PRC’s domestic social governance system abroad.

As domestic policymakers probe how best to use the country’s first cutting-edge reasoning model, they must balance two central-level policies: augmenting social governance with AI and ensuring AI does not replace human control. How local officials strike this balance currently varies considerably, though the former receives far more attention from the media, the public, and central leaders than the latter.