Deepfakes with Chinese Characteristics: PRC Influence Operations in 2024

Publication: China Brief Volume: 24 Issue: 7

Tencent Cloud’s “Digital Intelligence Human” product, based on a new generation of multimodal human-computer interaction technology, serves industries such as news anchoring and customer service. (Source: HKSilicon)

Executive Summary:

  • The PRC’s potential to interfere in elections with deepfakes has been noted, with strategies including creating false narratives around candidates and spreading misleading information about electoral processes. Advanced AI tools could make these interference efforts more sophisticated, impacting democratic processes worldwide.
  • Beijing appears to have a dual stance on deepfakes—strict regulation domestically due to potential socio-economic and security threats, coupled with an ambition to leverage them for international influence operations.
  • Beijing is likely to integrate deepfakes with AI to conduct smear campaigns against critics, amplify PRC propaganda by creating fake personae, and interfere in elections, exacerbating the spread of disinformation.

 

On March 27, PRC social media platform Douyin announced a ban on the use of artificial intelligence generated content (AIGC) to create and post content that “goes against science, fabricates information, or spreads rumors” (Douyin, March 27). This latest development offers a glimpse into how Beijing perceives deepfakes. As one of the first countries in the world to implement thorough regulations on deepfakes, the PRC sees them as a threat, wary that they could be leveraged to disrupt socio-economic stability and threaten national security. However, Beijing appears to be torn between these concerns and its ambition to utilize deepfake technology for influence operations overseas.

The Cyberspace Administration of China (CAC; 国家互联网信息办公室) released regulations on network audio and video information services (网络音视频信息服务管理规定) in November 2019. At the time, officials noted that deepfakes magnify the risk of the dissemination and amplification of “illegal and harmful information” and may be exploited to endanger national security and disrupt social stability and order (China News Service, November 30, 2019). A paper published in the PRC’s Journal of International Security Studies in 2022, titled “Deepfakes and National Security: Perspectives Based on the Overall National Security Concept,” offered some insights into these fears (Liu, March 31, 2022). They ranged from impersonating government officials for cyber fraud to manipulating the stock market to fabricating false emergencies. Such potential uses of deepfakes drove the initiative to pre-emptively “draw the red lines in advance (提前划红线)” and implement “ex ante regulations (事前规制)” (Xinhua, March 19, 2021). In parallel with these concerns is the alacrity with which Beijing leverages deepfakes for influence operations. Most recently, altered video footage was used in attempts to reduce public support for the Democratic Progressive Party (DPP) during Taiwan’s election.

How the PRC could weaponize emerging technologies such as deepfakes to amplify its influence operations remains understudied. An analysis of current trends, disinformation patterns, and case studies suggests three specific ways Beijing is likely to integrate past strategies with artificial intelligence (AI) in attempts to reinforce transnational repression, influence public opinion, and conduct electoral interference in liberal democracies. First, the PRC could leverage deepfakes to mount smear campaigns. Second, AIGC could be used to strengthen PRC propaganda narratives, leveraging deepfake personas—from witnesses to news anchors—to support and disseminate Beijing’s preferred narratives and influence public perception. Third, Beijing is likely to deploy higher-quality deepfakes in greater quantities to conduct electoral interference worldwide.

Smear Campaigns: The ‘50 Cent Army’ and Spamouflage [1]

Creating accusations out of thin air has long been a strategy for PRC influence operations aiming to defame politicians, regime critics, and human rights activists. For instance, Taiwan President Tsai Ing-wen has been accused of “academic fraud,” suggesting that her Ph.D. in law from LSE is fake; former US House Speaker Nancy Pelosi has been called a “wicked witch (妖婆子)”; and Hong Kong democracy activists have been portrayed as rioters (Global Times, September 10, 2021; Huanqiu, April 22, 2020; WaPo, June 8, 2023).

PRC smear campaigns often involve sexual assault and misconduct allegations. Previously, posts were often accompanied by memes or poorly edited photos (The Guardian, February 9, 2023). Deepfakes, however, allow accusations to be supported by persuasive “evidence,” disseminated and amplified in large-scale influence operations. This can occur via nationalist trolls such as the “50 Cent Army (五毛党)” and the “little pink (小粉红)” cybernationalists, as well as state-controlled cross-platform political spam networks such as Spamouflage. Given the difficulty of debunking fake news in real time and the acute virality of negative content, reputational damage and psychological harm to the targeted individual are most likely done before any countermeasures can be deployed (MIT News, March 8, 2018). [2] [3]

Beijing has tended to harass women of Asian descent who have “public platforms, opinions, and expertise on China” (ASPI, June 3, 2022). Statistics indicate that women accounted for 99 percent of the targets of deepfake pornography in 2023 (Home Security Heroes, 2023). A recent example involved deepfakes of the artist Taylor Swift appearing on Twitter/X. However, she is almost uniquely positioned to mobilize actions such as hashjacking (flooding the original hashtag with positive messages to drown out the offending images in X’s search function) to protect herself from such defamation campaigns (NBC, January 27). The PRC could easily create and amplify harmful deepfakes to discredit and silence its opposition, particularly its female opposition, with little recourse for the victims.

The US Department of Justice’s latest indictment of seven hackers affiliated with the PRC Ministry of State Security revealed how their operations targeted numerous government and political officials worldwide by sending “thousands of malicious tracking email messages to personal and professional email accounts” in attempts to gain recipients’ personal information, such as location data and contacts, since at least 2015 (DOJ, March 25). While the indictment demonstrated Beijing’s evolving capabilities to target its adversaries, these cyberattack tactics will likely be intertwined with deepfake technologies to achieve more far-reaching dissemination during influence operations.

Under the PRC’s established legal framework, the dissemination and amplification of deepfakes is strictly restricted. However, those regulations do not appear to apply to operations conducted by PRC-affiliated entities. According to Article 3 of the Regulations on the Management of Deep Synthesis of Internet Information Services (互联网信息服务深度合成管理规定), enacted in January 2023, although the CAC is responsible for coordinating and managing the nation’s deep synthesis services, the Ministry of Public Security (MPS) also manages these services according to its “respective responsibilities” (Government of the People’s Republic of China, November 25, 2022).

Spamouflage campaigns, meanwhile, are directly linked to the MPS. In other words, the PRC could produce deepfakes and amplify them via this cross-platform spam network as it sees fit.

Spamouflage’s tactics are also constantly evolving. Methods include innovations in AIGC and manipulated media, links to fake news sites, hashjacking, and directly replying to targets’ posts. Despite efforts from social media platforms such as Meta (which removed thousands of fake Facebook accounts as part of its investigations from 2022 to 2023) and US authorities (which have pressed charges against PRC police officers operating a troll farm that attacked Chinese dissidents and disseminated propaganda) (Meta, February 14; DOJ, April 17, 2023), such tactics have enabled the network to slowly garner more real-life user engagement (Graphika, February 2023).

These campaigns have often achieved a sizeable impact. In many cases, such as the targeting of Hong Kong protesters during the 2019 pro-democracy movement, the enforcement of transnational repression within the Chinese diaspora, and the harassment of Canadian MPs, there has been little pushback (Graphika, September 2019; Bloomberg, December 15, 2023; Government of Canada, October 23, 2023). While the campaigns currently exhibit weaknesses in language use, imagery, and cultural sensitivity, these could be eliminated through better AIGC. Some improvements are already apparent: tweets are becoming more fluent in multiple languages, avatars posing as real-life users are more lifelike, and, with the right timing and dissemination methods, deepfakes could sow division in target societies. Interference in elections worldwide would undermine the core structures of liberal democracies.

Deepfakes for “Telling China’s Story Well”

In 2013, Xi Jinping began to emphasize that the PRC’s international communications must “tell China’s story well (讲好中国故事)” (Xinhua, August 21, 2013). The goal of this external propaganda strategy is to enhance China’s “international discourse power (国际话语权)” (Xinhua, June 1, 2021). In past influence operations, PRC state media, gray media,[4] and private actors have worked together to amplify Beijing’s preferred narrative. One example is the “happy Uyghurs” meme—videos featuring an airbrushed version of Uyghurs’ lives, focusing on traditional dances and innocuous cultural phenomena, giving the false impression that Uyghurs are living “happy lives,” and ignoring the hardship and persecution many face (X/@XinjiangChannel, May 13, 2021; X/@D[iscover]Xinjiang, February 1).

Specifically, Article 4 of the Regulations on the Management of Deep Synthesis of Internet Information Services notes that the provision of deep synthesis services shall “adhere to the correct political orientation, public opinion orientation, and value orientation” and “promote deep synthesis services to be positive and upward” (Government of the People’s Republic of China, November 25, 2022). The use of deepfakes to “tell China’s story well” conceivably falls into the category of “correct” political and public opinion orientation.

AIGC could easily help Beijing create large quantities of propaganda portraying how “peaceful” Xinjiang is while “debunking” Western countries’ allegations of extensive human rights violations. For instance, PRC state media outlet the Global Times published an “investigation of 30,000 Xinjiang-related stories exposing how certain Western media fabricate, hype up ‘forced labor’ smear” (Global Times, October 15, 2023). Beijing’s media, which do not observe Western journalistic standards, could fabricate testimonies to lend a veneer of authenticity to their narratives. This is not where the greatest potential lies, however.

The real threat of deepfakes for influence operations lies beyond the mainland, in Hong Kong. Deepfakes could fabricate scenes of peace and harmony, portraying the city in a positive light. Instead of foregrounding “riots” and the aftermath of the national security law and now Article 23, Hong Kong could be presented as “advancing to prosperity (由治及興),” as the city’s Chief Executive John Lee has claimed (WenWeiPo, October 25, 2023). Hong Kong authorities have less control of the information ecosystem than their counterparts in the PRC proper, so leveraging deepfakes to depict “happy Hongkongers” and “tell the Hong Kong story well” would be more cost-effective than conducting dozens of local interviews and filming footage equivalent to Uyghurs dancing in the streets.

Beijing’s influence operations frequently use first-person accounts to “tell the story.” [5] Cases include fake news about the Kansai Evacuation in 2018 [6] and a Chinese blogger who pretended to be in Israel during the early days of the Israel-Palestine conflict. Both emphasized the PRC’s effective protection of overseas citizens, something that did not accurately reflect realities on the ground (Taipei Times, September 9, 2023; CDT, October 17, 2023; 404 Beifen, October 17, 2023). Such disinformation could be amplified via deepfake technology to influence public perception of China.

In the past year, Beijing has unveiled deepfake news anchors, and websites posing as local news outlets have spread disinformation and ad hominem attacks worldwide (Graphika, February 2023; Citizen Lab, February 7). AI-generated witnesses, anchors, and news sites could conceivably merge into one formidable influence operation.

The aim of deepfake content is not to convince everyone exposed to it that the disinformation is true. Rather, the goal is simply to create an alternative narrative for the international audience, especially for those who have no prior contextual knowledge and whose receptivity could be increased by repeated exposure that saturates the information space. At the very least, malign state actors could sow confusion or erode trust in more accurate narratives.

Leveraging deepfakes to falsify the testimonies of political prisoners constitutes one logical future use case. Televised forced confessions could be reimagined by creating videos of prominent activists (since public figures would likely have enough visual data to train the algorithms) confessing to conspiracy theories or falsely depicting them in good health (as opposed to suffering from torture).

Deepfakes for Electoral Interference

2024 is an unprecedented year in terms of the number of national elections taking place. Beijing thus has numerous opportunities to experiment with and conduct electoral interference (ODNI, March 11). Previous operations have not made a measurable impact, but the potential ramifications of AIGC, especially given the enormous advances of the last eighteen months, indicate a new toolkit (Misinformation Review, October 18, 2023). Beijing’s suspected interference in Taiwan’s presidential and legislative elections exemplifies its strategies (see China Brief, February 16).

Deepfake content was rampant in the disinformation campaign targeting Democratic Progressive Party (DPP) presidential candidate Lai Ching-te (賴清德). Altered video footage with Lai’s synthesized voice portrayed him as supporting a coalition between the Kuomintang (KMT) and the Taiwan People’s Party (TPP) and falsely depicted him commenting on DPP scandals (Taiwan FactCheck Center, December 29, 2023; Taiwan FactCheck Center, November 2023). While cross-platform posting is a common phenomenon in influence operations, Beijing’s ability to manipulate the information environment, especially on PRC-based social media platforms, evolves as the data it collects increases—fueling its ability to curate disinformation that targets users with specific value orientations. Based on current trends, two key near-term manifestations of deepfake technology are content that misleads on electoral laws and regulations and content that incites chaos.

In early January, a widely circulated disinformation campaign on social media platforms falsely stated that “Taiwan’s Central Election Commission prohibits the circulation of political propaganda ten days prior to the election” (Taiwan FactCheck Center, January 4). [7] This narrative blends half-truths (real legal references) with half-lies (the laws prohibit the dissemination of poll data close to election day, not general political discussion, and prohibit electioneering only on election day itself) (MyGoPen, January 3). [8] It thus exploits the general public’s unfamiliarity with the nuances of election laws. It also leverages the psychological insight that people who recognize part of the information conveyed as true are more inclined to accept additional false claims—in this case, discouraging political discussion in the most decisive part of the election cycle (RAND, July 11, 2016). Integrating this tactic with deepfake videos of politicians or reputable experts would further spread confusion and suppress political discourse. The tactic has already been used in the United States, though by a US-based individual: a fake robocall featuring deepfake audio of President Joe Biden urged voters to skip the primary election in New Hampshire, and could be a sign of what to expect later this year (Taiwan FactCheck Center, January 5; Bloomberg, January 22; BBC, January 22; DOJ.NH, February 6).

Deepfake videos depicting chaotic scenes may amplify rumors. The PRC used such tactics during the Taiwan election. On election day, a disinformation campaign alleged the existence of “multiple stabbing incidents across various polling stations” in Tainan, in the southern part of the island (Taiwan FactCheck Center, January 13). The rumor was accompanied by an edited photo featuring a victim covered in blood to “prove” its authenticity. The photo originated from PRC-affiliated media outlet Haixia Net (海峡网), which is owned by Fujian Daily Newspaper Press Group (福建日报报业集团) and controlled by the Fujian Provincial Committee of the Communist Party of China (中国共产党福建省委员会). Beijing could have used the power of deepfakes to craft even more compelling videos to incite chaos and dissuade voters from participating in the democratic process. Text-to-video technology, which could conceivably turn manually typed prompts into powerful weapons of information warfare, already exists. Sora, a recent model developed by OpenAI (although not yet publicly available), is one example. In the hands of malign actors, such powerful tools could wreak havoc on electoral processes.

Conclusion

The PRC has already used an array of tactics to create and spread disinformation as part of influence operations to affect democratic elections and undermine liberal democracies. Some of the most advanced deepfake technology currently originates in the country. There is clear potential for deepfake technology to be deployed and intertwined with older techniques to maximize their impact. From weaponizing deepfakes for smear campaigns and electoral interference to fabricating testimonies that manipulate global narratives, it is only a matter of time until Beijing figures out the best way to integrate these new and evolving tools into its influence operations playbook.

Notes

[1] Spamouflage is a cross-platform political spam network that hijacked or faked accounts on social media platforms to amplify PRC narratives while disguising the spam messages as legitimate. The campaign has been attributed to the PRC Ministry of Public Security (MPS). It was first exposed by the social media analytics business Graphika, which documented the campaign’s use against Hong Kong protesters during the 2019 pro-democracy movement.

For more details, see Nimmo, B., Eib, C. S., and Tamora, L. (2019). “Cross-Platform Spam Network Targeted Hong Kong Protest.” Graphika. Available at: https://graphika.com/reports/spamouflage

Martin, A. (2023). “Chinese law enforcement linked to largest covert influence operation ever discovered.” The Record. Available at: https://therecord.media/spamouflage-china-accused-largest-covert-influence-operation-meta

[2] Schöne, J. P., Garcia, D., Parkinson, B., and Goldenberg, A. (2023). “Negative expressions are shared more on Twitter for public figures than for ordinary users.” PNAS Nexus 2(7): pgad219. doi: 10.1093/pnasnexus/pgad219.

[3] Greifeneder, R., et al., eds. (2020). The Psychology of Fake News: Accepting, Sharing, and Correcting Misinformation. Routledge.

[4] Gray media are pro-nationalist media outlets that have alleged ties with PRC state entities, such as being financed by China-linked corporations, being acknowledged by PRC officials for their work, and/or having members who hold positions in the CCP or other government bodies.

[5] 洪浩唐. 戰狼來了：關西機場事件的假新聞、資訊戰 [The Wolf Warriors Are Coming: Fake News and Information Warfare in the Kansai Airport Incident]. 新自然主義, 2021.

[6] In the 2018 Kansai Evacuation incident, fake news circulated online claiming that the PRC embassy had helped evacuate its citizens from Kansai Airport after a typhoon, while defaming the Taiwan administration for its “inaction.”

[7] Also data scraped by the author during the election period.

[8] According to the “Presidential and Vice-Presidential Election and Recall Act” and the “Public Officials Election and Recall Act,” it is prohibited to publish, report, distribute, comment on, or quote any polling data within the ten days leading up to an election. However, contrary to what the disinformation falsely claimed, these laws do not restrict individuals from discussing their political preferences or supporting a particular party or candidate. https://www.mygopen.com/2024/01/500k.html