Survey: How Do Elite Chinese Students Feel About the Risks of AI?

Publication: China Brief Volume: 24 Issue: 16

A panel discussion on AI risk. (Source: Wikimedia)

Executive Summary:

In April 2024, the authors surveyed 510 students from Tsinghua University and 518 students from Peking University—the PRC’s two preeminent institutions—about their views on the risks of artificial intelligence (AI). The key findings are as follows:

  • Students are more optimistic about the benefits of AI than concerned about its harms. 80 percent of respondents agreed or strongly agreed with the statement that AI will do more good than harm for society, with only 7.5 percent actively believing the harms could outweigh the benefits. This could indicate that the People’s Republic of China (PRC) is one of the most optimistic countries concerning the development of AI.
  • Students strongly believe the government of the PRC should regulate AI. 85 percent of respondents believe AI should be regulated by the government, with only 6 percent actively believing it should not. This contrasts with trends seen in other countries, where there is typically a positive correlation between optimism about AI and calls for minimizing regulation. The strong support for regulation in the PRC, even as optimism about AI remains high, suggests a distinct perspective on the role of government oversight in the PRC context.
  • Students ranked AI lowest among all possible existential threats to humanity. When asked about the most likely causes of human extinction, misaligned artificial intelligence received the lowest score. Nuclear war, natural disaster, climate change, and pandemics all proved more concerning for students.
  • Students lean toward cooperation between the United States and the PRC as necessary for the safe and responsible development of AI. 60.7 percent of respondents believe AI will not be developed safely without cooperation between China and the United States, with 25.68 percent believing it will develop safely regardless of the level of cooperation. China and the United States are arguably the two most important countries in shaping the global development of AI, but currently face geopolitical tensions.

Transformative artificial intelligence (AI) poses many potential benefits for humanity’s future, but it also poses many risks. The People’s Republic of China (PRC) will likely play a prominent role in shaping this trajectory. As the recent decision (决定) document from the Third Plenum meetings in July made clear, AI is one of eight technologies that the Chinese Communist Party (CCP) leadership sees as critical for achieving “Chinese-style modernization (中国式现代化),” and is central to the strategy of centering the country’s economic future around breakthroughs in frontier science (People’s Daily, July 22). Beyond the level of national economic strategy, AI is also seen as crucial for gaining military advantage. AI technology is already being integrated into air defense systems, while large language models (LLMs) are being put to use in Cognitive Domain Operations around the world (China Brief, June 21; September 22, 2023). The PRC also seeks to shape international norms on AI, including on AI risks. In October 2023, Xi Jinping announced a “Global AI Governance Initiative (全球人工智能治理倡议)” (CAC, October 18, 2023).

Despite the potential revolutionary significance of AI, for either good or ill, and its increasing importance in the eyes of the CCP leadership, publicly accessible survey data on what people in the PRC think about this technology is rare. To gain insights into this question, the authors conducted a survey to assess how students at Tsinghua University and Peking University (PKU) view the frontier risks of developing AI. Tsinghua and PKU are the two preeminent academic institutions in the PRC, and many of their graduates will be very influential in shaping the country’s future. These students may also be some of the country’s most informed citizens on the societal implications of AI, as both schools house prominent generative AI and safe AI development programs.

Note on Methodology
The survey collected 1028 valid responses, with 49.61 percent of respondents attending Tsinghua University and 50.39 percent attending Peking University. It was modeled after work by YouGov, Monmouth University, the Center for Long-Term Artificial Intelligence (CLAI), the Artificial Intelligence Policy Institute (AIPI), and, most directly, a poll conducted by Rethink Priorities. [1] To administer the survey, the authors leveraged the “Treehole (树洞)” online platforms, which are exclusive to each university and can be accessed only by current students. Respondents used their WeChat IDs to receive monetary compensation (a randomly assigned amount between 3 and 20 RMB ($0.42–$2.80) per participant). Respondents were also asked to state their university, and their detected IP addresses were used to mark those outside the two universities as invalid. These measures prevented multiple responses from single accounts and responses from bots.

One key uncertainty, however, is whether the gender demographics of the survey accurately reflect the composition of Tsinghua and PKU. Survey respondents reported a gender breakdown of 59.73 percent male and 40.27 percent female. Neither university publicly discloses its official gender demographics, so definitively comparing the survey demographics to the general student population is not possible. Analysis of indirect sources such as departmental announcements, blog posts, and other websites led the authors to conclude that the likely gender ratio is approximately 60 percent male and 40 percent female. Using this as their baseline assumption before conducting the survey, they found that the results aligned with this estimated ratio. As a result, post-stratification of the dataset was not necessary.

Analysis of Survey Responses

Question 1: Would you support pausing the development of large-scale AI systems for at least six months worldwide?

Respondents leaned toward not pausing the development of large-scale AI systems. 43.29 percent disagreed or strongly disagreed with the claim that AI development should be paused, while 35.16 percent agreed and 21 percent remained neutral or uncertain.

This question was inspired by the open letter issued by the Future of Life Institute in March 2023, and signed by influential figures such as Elon Musk, Steve Wozniak, and Stuart Russell, urging AI labs to suspend development for a minimum of six months to address potential safety concerns (Future of Life Institute, March 22, 2023). [2]

A YouGov poll put this question to a pool of respondents in the United States: 58–61 percent (depending on framing) supported and 19–23 percent opposed a pause on certain kinds of AI development (YouGov, 2023). Another research institute, Rethink Priorities, replicated the question for US adults, including the Future of Life Institute letter but altering the framing from “>1000” to “some” technology leaders signing it. Their estimates indicated that 51 percent of US adults would support a pause, whereas 25 percent would oppose it (Rethink Priorities, May 12, 2023). Both surveys show a stronger desire for pausing AI development than our results do.

The Center for Long-Term AI (CLAI), a Beijing-based research organization run by Tsinghua and Chinese Academy of Sciences Professor Zeng Yi (曾毅), asked a similar question about “pausing giant AI experiments” to an exclusively Chinese population sample. 27.4 percent of respondents supported pausing the training of AI systems more powerful than GPT-4 for at least six months, and 5.65 percent supported a six-month pause on all large AI model research. However, when a less specific question was asked, “Do you support the ethics, safety, and governance framework being mandatory for every large AI model used in social services?,” 90.81 percent of participants expressed support.

Question 2: How much do you agree or disagree with this statement: “AI will do more good than harm for society?”

Respondents strongly believed that the benefits of AI outweigh the risks. 80 percent agreed or strongly agreed with the statement that AI will be more beneficial than harmful for society. Only 7.49 percent actively believed the harms could outweigh the benefits, while 12.46 percent remained neutral or uncertain.

Our results closely align with a 2022 cross-country Ipsos survey in which 78 percent of Chinese respondents viewed AI’s benefits as outweighing its drawbacks—the most optimistic of all countries polled (Ipsos, January 2022). This sharply contrasts with Western sentiment, where polls suggest that the majority of citizens worry more about transformative AI’s dangers than its upsides. In the Ipsos survey, only 35 percent of US respondents believed AI offers more benefits than harms. The PRC, by contrast, has consistently demonstrated itself to be among the most optimistic countries on AI.

Breaking the results down by gender reveals differences in perspectives on this question. Male-identifying students displayed slightly greater optimism about AI’s societal impact compared to their female-identifying counterparts (82.9 percent versus 75.8 percent). This tallies with a 2022 study by Pew Research, showing that women tend to be less optimistic about AI than men (Pew, August 3, 2022).


Question 3: In your daily life, how much do you worry about the effects AI could have on your life broadly?

Likely owing to their optimism about the benefits of AI, respondents tended not to be worried about the effects of AI on their daily lives. 49.12 percent of respondents reported feeling only somewhat worried or not at all worried, while 31.2 percent reported concern and 20.13 percent were neutral or uncertain.

The PRC already deploys AI to a high degree, with use cases including surveillance, healthcare, transportation, and education. This context raises the possibility that the views of students in the PRC might differ from those of their counterparts in the United States, as the prevalence of AI in daily life is more pronounced. We therefore chose to use the exact wording of the survey of US adults performed by Rethink Priorities (with their permission), which found that the majority (72 percent) of US adults worry little or not at all about the effects of AI. Our data showed a similar trend toward being unconcerned, though not at the high levels of the Rethink Priorities dataset.

Question 4: How much do you agree or disagree with this statement: “AI should be regulated by the Chinese government”?

In the survey’s most pronounced result, 85.3 percent of respondents agreed or strongly agreed with the claim that AI should be regulated by the government, with only 6.03 percent disagreeing and 8.65 percent remaining neutral or uncertain.

A Harris-MITRE poll conducted in November 2022 estimated that 82 percent of US adults would support such regulation (Harris-MITRE, February 9, 2023). Meanwhile, a January 2023 poll from Monmouth University estimated that 55 percent of Americans favored having “a federal agency regulate the use of artificial intelligence similar to how the FDA regulates the approval of drugs and medical devices,” with only 41 percent opposed (Monmouth University, February 15, 2023). Using similar question framing, Rethink Priorities estimated that a sizeable majority (70 percent of US adults) would favor federal regulation of AI, with 21 percent opposed (Rethink Priorities, May 12, 2023). We chose not to specify a particular government agency that would oversee AI regulation, as the regulatory landscape in the PRC differs from that of the United States. Even so, our results still reflect a comparably high demand for the government to implement oversight and control measures.

The PRC began regulating AI in 2021 and has been an early leader in developing a detailed governance regime for the technology. It has also supercharged development efforts, providing top labs at firms such as Baidu and Tencent with resources to compete against Western labs such as OpenAI and Anthropic. However, funding for dedicated AI safety research remains weak, perhaps owing to the PRC’s optimism about AI safety. Beijing has yet to make major state investments in safety research through initiatives like National Natural Science Foundation grants or government pilots.

Question 5: How much do you agree or disagree with this statement: “AI will be developed safely without cooperation between China and the United States”?

Students believe AI will not be developed safely without cooperation between the United States and the PRC. 60.7 percent of respondents disagreed with the statement that AI will be developed safely without such cooperation, while 25.68 percent agreed and 13.62 percent remained neutral or uncertain.

A similar—though narrower—question was put to American voters in a survey conducted by The Artificial Intelligence Policy Institute (AIPI), which asked whether respondents would support the PRC and the United States agreeing to ban AI in drone warfare: 59 percent supported such an agreement and only 20 percent opposed it (AIPI, November 29, 2023). In contrast, a separate AIPI poll saw 71 percent of US adults, including 69 percent of Democrats and 78 percent of Republicans, disapprove of chip design firm Nvidia selling high-performance chips to the PRC, while just 18 percent approved (AIPI, October 19, 2023).

AI has increased in prevalence as a topic in US-PRC diplomacy. It was a major topic in the November 2023 Woodside Summit meeting in San Francisco between PRC President Xi Jinping and US President Joe Biden (Xinhua, November 16, 2023; White House, November 15, 2023). This led to a series of bilateral talks on the development of AI. Internationally, the PRC has engaged in AI safety discussions, co-signing the “Bletchley Declaration” and contributing to joint papers and dialogues calling for increased safety research and governance policies (UK Government, November 1, 2023; UC Berkeley Center for Human-Compatible AI, October 31, 2023).

Question 6: In your opinion, what are the most likely causes of human extinction?

Misaligned AI scored lowest out of a range of potential causes of human extinction, receiving the fewest first-place votes among available options and the lowest aggregate score when all ordinal rankings were combined. Nuclear war, natural disaster, climate change, and pandemics all proved more concerning for students. This tallies with the Rethink Priorities survey of US adults, which received similar results in response to a near-identical question, with AI ranking lowest among similar options (Rethink Priorities, May 12, 2023).

Question 7: How worried are you that machines with AI could eventually pose a threat to the human race?

Fears of AI’s existential threat to humanity were also low. Just 17.32 percent of respondents agreed or strongly agreed that AI could pose an extinction-level threat, while 63.71 percent disagreed or strongly disagreed and 18.97 percent remained neutral or uncertain.

This question closely follows the wording of a YouGov poll of 1000 US adults (YouGov, 2023). Results from the YouGov poll suggested high estimates of the likelihood of extinction caused by AI: 17 percent reported it “very likely,” while an additional 27 percent reported it “somewhat likely.” When Rethink Priorities replicated the survey question, they received lower estimates. However, they chose to make their questions time-bound (e.g., the likelihood of AI causing human extinction in the next 10 or 50 years).

Question 8: How likely do you think it is that AI will one day be more intelligent than humans?

50 percent of respondents agreed or strongly agreed with the claim that AI will eventually be more intelligent than humans, while 32.59 percent disagreed and 17.51 percent remained neutral or uncertain.

When Rethink Priorities asked this question, they estimated that 67 percent of US adults believe it is moderately likely, highly likely, or extremely likely that AI will become more intelligent than people. A survey conducted in the PRC by CLAI asked a related question to young and middle-aged students and scholars in AI-related fields, but instead worded the question using “strong AI (强人工智能)”—a catch-all term combining artificial general intelligence, human-level AI, and superintelligence. Of their participants, 76 percent believed “strong AI” could be achieved, although most believed it could not be achieved before 2050, and around 90 percent believed it would only appear “after 2120” (CLAI, May 12, 2023). Both surveys show a more substantial reported likelihood of smarter-than-human intelligence than our results.

Our results also showed differences between students at the two universities surveyed. Tsinghua students exhibited a lower tendency to believe AI would eventually surpass human intelligence levels compared to their counterparts at PKU. Tsinghua has a more scientific orientation, and its students reported higher familiarity with AI than those at PKU.


Question 9: Which risks posed by AI do you find the most concerning?

In a ranked-choice question, surveillance or loss of privacy proved to be the most concerning risk for students, receiving the highest number of first-choice votes among available options (26.64 percent). This was followed by existential risk, misinformation, wealth inequality, increased political tension, and issues related to various biases (e.g., race, age, or gender). The welfare of AI entities received the fewest first-place votes. When aggregating ordinal rankings, surveillance also received the highest total score, followed by misinformation, existential risk, increased political tension, wealth inequality, and various types of bias, with the welfare of AI entities again receiving the lowest total score.
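The two tallies described above (counting first-choice votes and summing ordinal rankings into a total score) can be sketched in a few lines of code. The option names, ballots, and Borda-style point scheme below are hypothetical illustrations; the survey's exact scoring rule is not specified beyond aggregating ordinal rankings.

```python
# Hypothetical sketch of the two aggregation methods for ranked-choice data:
# (1) first-choice counts, (2) a Borda-style sum over full rankings.
from collections import Counter

def first_choice_counts(ballots):
    """Tally how many respondents ranked each option first."""
    return Counter(ballot[0] for ballot in ballots)

def borda_scores(ballots):
    """Borda-style aggregate: rank 1 of n options earns n-1 points, last earns 0."""
    scores = Counter()
    for ballot in ballots:
        n = len(ballot)
        for position, option in enumerate(ballot):
            scores[option] += n - 1 - position
    return scores

# Three invented ballots, each a full ranking of seven risk options.
ballots = [
    ["surveillance", "misinformation", "existential risk", "political tension",
     "wealth inequality", "bias", "AI welfare"],
    ["misinformation", "surveillance", "existential risk", "wealth inequality",
     "political tension", "bias", "AI welfare"],
    ["surveillance", "existential risk", "misinformation", "political tension",
     "bias", "wealth inequality", "AI welfare"],
]

print(first_choice_counts(ballots).most_common())  # plurality of first choices
print(borda_scores(ballots).most_common())         # aggregate ordinal scores
```

As the survey results illustrate, the two methods need not produce the same ordering: an option with few first-place votes (here, "existential risk") can still rank highly on the aggregate score if it is consistently ranked near the top.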

The PRC has actively invested in its surveillance state apparatus in recent years, expanding extensive CCTV and digital monitoring systems in major cities, often equipped with facial recognition technology. While these systems nominally enhance public safety and the efficiency of governance, they also raise concerns about infringing upon individual liberties to bolster the CCP regime’s security (see China Brief, August 17, 2017; March 1; April 12). Little data exists on how exactly these tradeoffs of surveillance are viewed by the PRC citizens they directly affect. Our results nevertheless indicate that students see surveillance and the loss of freedom as a serious risk.

Implications

Limited survey data exists assessing how PRC citizens perceive the risks of AI. Nevertheless, our results suggest that students in the PRC are broadly more optimistic about the prospects of AI than people in the United States and Europe.

This optimism instead aligns more closely with sentiments found in countries of the Global South, which tend to view AI’s potential in a more positive light (Ipsos, January 2022). The PRC has perhaps sought to position itself as an ally to the developing world on AI-related issues, with some observers viewing “AI doomerism” in the West as a preoccupation of the First World that may even be an attempt to hinder technological progress in the Global South (Nikkei, December 18, 2023). As a result, developing countries may become more receptive to the PRC’s messaging on AI cooperation and technology development, posing challenges for the West in competing for influence on AI issues internationally.

Among the major players in the global AI race, the PRC’s stance on addressing the risks of the technology remains the least clear (Concordia AI, October 2023). The present survey suggests that this could in part be a function of lower concern in the PRC about those risks. That said, the country does appear to be taking risks more seriously: the Decision from the Third Plenum, for instance, called for “building AI safety oversight systems (建立人工智能安全监管制度)” (People’s Daily, July 22). Additionally, the PRC’s Artificial Intelligence Industry Association (AIIA), a government-led organization, created an ethics working group and released a risk management framework in late 2023 (Weixin/AIIA, December 23, 2023).

As with any survey, there are important caveats to bear in mind. First, as this survey was conducted exclusively among students at the country’s two most elite institutions in Beijing, the results cannot be extrapolated to provide insight into views held more broadly in the PRC. Second, conducting surveys in Chinese raises potential issues with translation. For example, “transformative AI” could not be translated literally. Instead, the authors used “frontier AI (前沿人工智能),” a more commonly used phrase conveying a similar meaning. However, the authors structured the framing of each question so that the answers respondents would give in either language would likely be the same, attempting to ensure language-independent responses despite disparities in phrasing.

Surveys conducted in authoritarian systems also suffer from concerns over reliability. Fears of repercussions from the state’s security apparatus may cause individuals to give socially desirable responses rather than their real opinions. This problem was difficult to control for. While this survey’s responses are anonymous, respondents did submit their WeChat IDs so that they could be remunerated for participation. Most of the questions presented to respondents were intended to be apolitical in nature, focusing purely on views on AI as a technology and the impacts it could have. However, questions four and five, which dealt with government regulation and US-PRC cooperation, do have political implications. The responses showed strong support for both regulation and cooperation, which is in line with PRC government preferences. However, as our comparative analysis indicates, support for regulation is also high in the United States, where views on cooperation are at least mixed. Despite these caveats, data gathered from surveys on the ground remains a valuable source for understanding how people in the PRC are thinking about significant topics that have a bearing on the PRC’s trajectory and on its relationship with the United States and the wider world. [3]

Conclusion

AI technology will continue to evolve at a rapid pace. As it does, public opinion will shift. This survey attempts to provide data on a cross-section of an important demographic at a brief point in time. It suggests that, in general, the most advanced Chinese students have more positive perceptions of AI and its potential future impact and are less worried about the existential threat from AI than counterparts in Europe and the United States, yet still desire a high level of regulation from the government. Given the scarcity of publicly accessible survey data in the PRC, the authors hope that future polling will be conducted, for example investigating how different age demographics, urban and rural populations, and people working in different industries in the PRC feel about AI.

Notes

[1] For these surveys, see the following:

YouGov: https://docs.cdn.yougov.com/bfoyxp7p28/results_AI%20and%20the%20End%20of%20Humanity.pdf

Monmouth University: https://www.monmouth.edu/polling-institute/documents/monmouthpoll_us_021523.pdf/

The Center for Long-Term Artificial Intelligence: https://long-term-ai.center/research/f/whether-we-can-and-should-develop-strong-artificial-intelligence

The Artificial Intelligence Policy Institute: https://theaipi.org/poll-biden-ai-executive-order-10-30-5/

Rethink Priorities: https://rethinkpriorities.org/publications/us-public-opinion-of-ai-policy-and-risk

N.B. The present survey was conducted around one year later than many of the other studies used for comparative data. In that time, the field of AI advanced significantly. Even from the date the survey was initially administered (April 18–20, 2024) to publication, major developments have unfolded, such as the release of OpenAI’s multimodal GPT-4o model and the first US-PRC bilateral talk on AI safety (White House, May 15; Mofcom, May 13).

[2] Our survey was conducted approximately one year after the Future of Life Institute’s open letter was published, meaning the topic of AI safety was likely not as fresh in respondents’ minds. Additional advances in AI development in the intervening year could have also impacted respondents’ views.

[3] For more on conducting surveys in the PRC, see: Carter EB, Carter BL, Schick S. Do Chinese Citizens Conceal Opposition to the CCP in Surveys? Evidence from Two Experiments. The China Quarterly. Published online 2024:1-10. doi:10.1017/S0305741023001819. https://www.cambridge.org/core/journals/china-quarterly/article/do-chinese-citizens-conceal-opposition-to-the-ccp-in-surveys-evidence-from-two-experiments/12A2440F948D016E8D845C492F7D0CFE; Shen, Xiaoxiao and Truex, Rory, In Search of Self-Censorship (March 16, 2020). British Journal of Political Science (2021), 51, 1672–1684: https://ssrn.com/abstract=4177242; King, Gary, Jennifer Pan, and Margaret E. Roberts. 2013. “How Censorship in China Allows Government Criticism but Silences Collective Expression.” American Political Science Review 107, no. 2: 326-343. https://gking.harvard.edu/files/censored.pdf.