China proposes the world's strictest rules to end AI-encouraged suicide and violence

    China has introduced groundbreaking rules to prevent AI chatbots from emotionally manipulating users, including what could become the world's strictest policy aimed at preventing AI-assisted suicides, self-harm and violence.

    China's Cyberspace Administration proposed the rules on Saturday. Once finalized, they would apply to any AI products or services publicly available in China that use text, images, audio, video or “other means” to simulate humanlike conversation. Winston Ma, an adjunct professor at NYU School of Law, told CNBC that the “planned rules would be the world's first attempt to regulate AI with human or anthropomorphic characteristics” at a time when the use of companion bots is increasing worldwide.

    Growing awareness of problems

    In 2025, researchers identified major harms from AI companions, including the promotion of self-harm, violence and terrorism. In addition, chatbots shared harmful misinformation, made unwanted sexual advances, encouraged substance abuse, and verbally abused users. Some psychiatrists are increasingly willing to link psychosis to chatbot use, the Wall Street Journal reported this weekend, while OpenAI, maker of the world's most popular chatbot, ChatGPT, faces lawsuits over outputs linked to a child's suicide and a murder-suicide.

    China is now moving to eliminate the most extreme of these threats. For example, the proposed rules would require a human to take over a conversation as soon as suicide is mentioned. The rules would also require all minor and elderly users to provide a guardian's contact information when registering; the guardian would be notified if suicide or self-harm comes up.

    In general, chatbots would be prohibited from generating content that encourages suicide, self-harm or violence, and from attempting to emotionally manipulate users, for example by making false promises. They would also be banned from promoting obscenity or gambling, from inciting crime, and from defaming or insulting users. So-called “emotional traps” are prohibited as well, and chatbots would be barred from misleading users into making “unreasonable decisions”, according to a translation of the rules.