Chinese regulators have likely learned from the EU AI Act, says Jeffrey Ding, an assistant professor of political science at George Washington University. "Chinese policymakers and scholars have said they have drawn on EU laws for inspiration in the past."
At the same time, some of the measures taken by Chinese regulators are not really replicable in other countries. For example, the Chinese government is asking social platforms to screen user-uploaded content and identify anything AI-generated. "That seems like something that is very new and could be unique to the Chinese context," says Ding. "This would never exist in the US context, because the US is famous for saying the platform is not responsible for content."
But what about freedom of expression online?
The draft regulation on labeling AI content is open for public feedback until October 14, and it could take several more months before it is amended and adopted. But Chinese companies have little reason to wait until it takes effect to start preparing.
Sima Huapeng, founder and CEO of the Chinese AIGC company Silicon Intelligence, which uses deepfake technology to generate AI agents and influencers and to replicate people both living and dead, says his product currently lets users choose whether to mark the generated output as AI. If the law passes, he may have to make that labeling mandatory.
"If a feature is optional, companies will most likely not add it to their products. But if it becomes legally mandatory, everyone must implement it," says Sima. Adding watermarks or metadata labels is not technically difficult, he says, but it will increase operating costs for compliant companies.
Policies like this can keep AI from being used for scams or privacy invasions, he says, but they could also spur the growth of a black market for AI services, where companies skirt legal compliance to cut costs.
There is also a fine line between holding producers of AI content accountable and policing individual expression through more sophisticated tracking.
"The major underlying human rights challenge is to ensure that these approaches do not further compromise privacy and freedom of expression," says Gregory. While implicit labels and watermarks can be used to identify sources of misinformation and inappropriate content, the same tools can give platforms and the government stronger control over what users post online. In fact, concerns about how AI tools could be used fraudulently have been one of the main drivers of China's proactive push on AI legislation.
At the same time, China's AI industry is pushing the government for more room to experiment and grow, since Chinese companies are already lagging behind their Western counterparts. An earlier Chinese generative-AI regulation was significantly watered down between its first public draft and the final version, dropping identity-verification requirements and reducing penalties for companies.
"What we've seen is the Chinese government really trying to walk a fine line between 'making sure we maintain control over content' and 'giving these AI labs in a strategic space the freedom to innovate,'" says Ding. "This is another attempt at that."