China says chatbots should follow the party line

    Five months after ChatGPT sparked an investment frenzy over artificial intelligence, Beijing is taking steps to curb China’s chatbots, demonstrating the government’s determination to tightly control the technology that could define an era.

    China’s cyberspace administration this month unveiled draft rules for so-called generative artificial intelligence — the software systems, like the one behind ChatGPT, that can formulate text and images in response to a user’s questions and directions.

    The regulations require companies to abide by the Chinese Communist Party’s strict censorship rules, just as websites and apps must avoid publishing material that smears China’s leaders or repeats forbidden history. The content of AI systems should reflect “core socialist values” and avoid information that undermines “state power” or national unity.

    Companies will also need to ensure their chatbots create words and images that are truthful and respect intellectual property, and will need to register their algorithms, the software brains behind chatbots, with regulators.

    The rules aren’t final yet and regulators can continue to change them, but experts said engineers building artificial intelligence services in China were already figuring out how to integrate the edicts into their products.

    Governments around the world have been struck by the power of chatbots, with AI-generated results ranging from the alarming to the benign. Artificial intelligence has been used to pass college exams and to create a fake photo of Pope Francis in a puffy coat.

    ChatGPT, developed by US company OpenAI, which is backed by some $13 billion from Microsoft, has pushed Silicon Valley to apply the underlying technology to new areas such as video games and advertising. The venture capital firm Sequoia Capital estimates that AI companies could eventually produce “trillions of dollars” in economic value.

    In China, investors and entrepreneurs are racing to catch up. Shares of Chinese artificial intelligence companies have soared. Splashy announcements have been made by some of China’s biggest tech companies, most recently e-commerce giant Alibaba; SenseTime, which makes facial recognition software; and the Baidu search engine. At least two startups developing Chinese alternatives to OpenAI technology have raised millions of dollars.

    ChatGPT is not available in China. But faced with a growing number of homegrown alternatives, China has been quick to reveal its artificial intelligence redlines ahead of other countries still thinking about regulating chatbots.

    The rules illustrate China’s “move fast and break things” approach to regulation, said Kendra Schaefer, head of technology policy at Trivium China, a Beijing-based consulting firm.

    “Because you don’t have a two-party system where both sides argue, they can just say, ‘Okay, we know we have to do this, and we’ll review it later,'” she added.

    Chatbots are trained on large swaths of the internet, and developers struggle with the inaccuracies and surprises of what they sometimes spout. On its face, China’s rules demand a level of technical control over chatbots that Chinese tech companies have not yet achieved. Even companies like Microsoft are still fine-tuning their chatbots to weed out harmful responses. China’s rules raise the bar still higher, which may explain why some Chinese chatbots have already been disabled and others are available only to a limited number of users.

    Experts are divided on how difficult it will be to train AI systems to be consistently factual. Some doubt that companies can account for the full range of Chinese censorship rules, which are often sweeping, constantly changing and even require censorship of specific words and dates, such as June 4, 1989, the day of the Tiananmen Square massacre. Others believe that over time, and with enough work, the machines can be aligned with the truth and with specific value systems, even political ones.

    Analysts expect the rules to change after discussions with Chinese technology companies. Regulators could soften their enforcement so that the rules don’t completely undermine the development of the technology.

    China has a long history of censoring the internet. Over the 2000s, the country built the world’s most powerful internet information dragnet. It drove away non-compliant Western companies like Google and Facebook, and it hired millions of workers to monitor internet activity.

    All the while, China’s compliant tech companies thrived and defied Western critics who predicted that political control would undermine growth and innovation. As technologies such as facial recognition and cell phones emerged, companies helped the state deploy them to create a surveillance state.

    The current wave of AI poses new risks to the Communist Party, said Matt Sheehan, an expert on Chinese AI and fellow at the Carnegie Endowment for International Peace.

    The unpredictability of chatbots, which make statements that are nonsensical or false — what AI researchers call hallucination — runs counter to the party’s obsession with controlling what is said online, Mr. Sheehan said.

    “Generative artificial intelligence challenges two of the party’s top goals: control of information and leadership in artificial intelligence,” he added.

    Experts say China’s new rules aren’t just about politics. For example, they aim to protect privacy and intellectual property for individuals and creators of the data on which AI models are trained, a topic of global concern.

    In February, Getty Images, the image database company, sued Stability AI, the start-up behind the image generator Stable Diffusion, for training its system on 12 million watermarked photos, which Getty claimed diluted the value of its images.

    China is making a broader effort to address legal questions about AI companies’ use of underlying data and content. In March, as part of a major institutional overhaul, Beijing created the National Data Bureau, an effort to better define what it means to own, buy and sell data. The new state body will also help companies build the datasets needed to train such models.

    “They’re now deciding what proprietary data is and who has the right to use and control it,” said Ms. Schaefer, who has written extensively about China’s AI regulations and called the initiative “transformative.”

    Still, China’s new guardrails may be ill-timed. The country faces increasing competition and semiconductor sanctions that threaten to undermine its competitiveness in technology, including artificial intelligence.

    Hopes for Chinese AI were high in early February when Xu Liang, an AI engineer and entrepreneur, released one of China’s first answers to ChatGPT as a mobile app. The app, ChatYuan, was downloaded more than 10,000 times in the first hour, Mr. Xu said.

    Media reports of clear differences between the party line and ChatYuan’s responses soon surfaced. Its responses offered a gloomy diagnosis of China’s economy and described Russia’s war in Ukraine as a “war of aggression,” at odds with the party’s more pro-Russian stance. Days later, the authorities shut down the app.

    Mr. Xu said he was adding measures to create a more “patriotic” bot. These include filtering sensitive keywords and hiring more human reviewers to help him spot problematic answers. He is even training a separate model to detect “incorrect views,” which he will then filter out.

    Still, it is not clear whether Mr. Xu’s bot will ever satisfy the authorities. According to screenshots, the app was initially supposed to resume service on February 13, but it was still unavailable as of Friday.

    “Service will resume after troubleshooting is complete,” it said.