Insights into Chinese Use of Generative AI and Social Bots from the Career of a PLA Researcher

Chinese social media manipulation has evolved from a feared disruption into a strategic weapon in Beijing’s influence arsenal. Initially wary of social media’s potential to challenge its authority, the Chinese Communist Party (CCP) has since embraced it as a tool for shaping domestic and international opinion. By leveraging both overt propaganda and covert cyber operations, China increasingly uses platforms like Twitter (X) and Facebook, despite banning them at home, to advance its geopolitical goals. With the rise of generative artificial intelligence (AI), figures like Dr. Li Bicheng are leading the charge, exploring how AI can transform these efforts into a far more powerful force.
China’s Evolution in Social Media Manipulation
Initially, the CCP viewed social media as a destabilizing force, capable of giving rise to movements that could challenge its control, as seen in the Arab Spring or the pro-democracy protests in Hong Kong. By the mid-2010s, however, Beijing’s approach had shifted. The CCP, and more specifically the People’s Liberation Army (PLA), began exploring ways to harness the influence power of social media, focusing on cyber-enabled operations designed to promote its strategic objectives, including defending the regime’s global image.
According to open-source research, the PLA’s interest in social media manipulation dates back to at least the mid-2010s, and by 2018 China’s cyber warfare units were actively engaging in these operations. What began as rudimentary campaigns, such as posting pro-Beijing messages or undermining foreign critics, has evolved into far more complex activity involving AI, large-scale bot networks, and other sophisticated techniques.
Dr. Li Bicheng: The PLA’s Key Researcher on Cyber Influence
One of the key figures shaping China’s military strategy in this domain is Dr. Li Bicheng, whose work provides critical insights into the CCP’s operational approach. Dr. Li has been involved in research focused on leveraging social media for influence operations, with a particular interest in generative AI and its applications in automating and enhancing manipulation efforts. His work highlights the PLA’s strategic focus on gaining a competitive advantage in the information battlefield.
Dr. Li’s research suggests that the PLA views the use of AI in social media manipulation as essential for future warfare. Social bots—automated accounts programmed to mimic human behavior online—are just one example of how AI can amplify these operations. By deploying vast networks of bots, China could spread propaganda and misinformation at a much larger scale, using AI to craft more personalized, persuasive messages.
The Power of Generative AI in Influence Operations
Generative AI—artificial intelligence that can create new content such as text, images, or videos—marks a significant leap forward in cyber influence operations. Unlike traditional propaganda, which requires substantial human effort, AI can create highly customized, responsive content tailored to specific audiences. This opens the door for more subtle and wide-reaching manipulation.
By using AI to run large-scale social bot networks, the PLA can conduct more efficient and effective influence campaigns. These bots can post content, engage with users, and amplify particular narratives, all while remaining difficult to distinguish from genuine human activity. Combined with data collected on social media users, this capability allows the Chinese military to refine its propaganda and target individuals or groups more precisely.
One of the justifications the Chinese military offers for these operations is a perceived threat from the United States. The CCP argues that Washington is seeking to undermine the Chinese regime, and so it has rationalized its efforts as a defensive measure. This narrative not only shapes domestic support for the CCP’s manipulation efforts but also sets the stage for broader global information warfare.
The Global Threat to Democracies
The use of AI in social media manipulation poses a significant challenge to democracies worldwide. Beijing’s ability to influence foreign public opinion has been limited so far, but the integration of generative AI could dramatically expand its capabilities. Democracies, which value free speech and open media, are particularly vulnerable to influence campaigns that blur the lines between authentic discourse and state-sponsored disinformation.
Chinese influence operations often exploit existing divisions within societies, exacerbating polarization and sowing confusion. With AI in the mix, these tactics can be scaled up, making it harder to counter false narratives. Moreover, the integration of generative AI allows for the production of deepfake videos, altered images, and fake news articles that are more convincing than ever before.
Recommendations for Democracies
To counter these evolving threats, global democracies need to adopt proactive measures. The following recommendations offer a starting point:
- Promote Media Literacy: Public awareness campaigns should educate citizens on how to spot disinformation and recognize manipulation tactics, particularly those involving AI-generated content.
- Increase Public Reporting: Social media platforms must work with governments to identify and report state-backed influence campaigns. Transparency is key to building resilience.
- Enhance Diplomatic Coordination: Democracies need to collaborate more closely on intelligence sharing and countermeasures. The problem is global, and responses must be coordinated across borders.
- Engage Beijing on Restrictions: The U.S. and other democracies may benefit from engaging China diplomatically in discussions on limiting AI-driven influence operations. While challenging, such discussions could help set norms around the ethical use of AI in information warfare.
The Road Ahead
China’s capabilities in social media manipulation are likely to grow as the PLA continues to develop its expertise in AI and cyber operations. Understanding the role of figures like Dr. Li Bicheng offers critical insights into how the CCP views and executes these campaigns. As generative AI continues to evolve, it is vital for democracies to stay ahead of the curve by anticipating the threats that lie ahead and preparing accordingly.
By focusing not just on what has already happened but on the underlying strategy, operational planning, and future technological capabilities, this report provides a valuable framework for understanding the potential impact of AI-driven social media manipulation on global stability. The stakes are high, and the time to act is now.