The global battle to regulate AI has only just begun | WIRED

    At the end of April, the European Parliament had drawn up a list of prohibited practices: social scoring, predictive policing, algorithms that indiscriminately scrape the internet for photos, and real-time biometric recognition in public spaces. By Thursday, however, MEPs from the conservative European People’s Party were still wavering over whether the ban on biometrics should be dropped. “It is a highly divisive political problem, because some political forces and groups see it as a crime-fighting tool and others, like the progressives, we see it as a system of social control,” said Brando Benifei, co-rapporteur and an Italian MEP from the Socialists and Democrats group.

    Next came discussions about the types of AI that should be flagged as high-risk, such as algorithms used to manage a company’s workforce or by a government to manage migration. These are not prohibited. “But because of their potential implications – and I underline the word potential – for our rights and interests, they have to go through some compliance requirements to make sure those risks are properly mitigated,” said Nechita’s boss, Romanian MEP and co-rapporteur Dragoș Tudorache, adding that most of these requirements relate mainly to transparency. Developers must show what data they used to train their AI, and they must show how they proactively tried to root out bias. A new AI body would also be established to serve as a central hub for enforcement.

    Companies using generative AI tools such as ChatGPT would have to disclose whether their models were trained on copyrighted material, making lawsuits more likely. And text or image generators, such as Midjourney, would also have to identify themselves as machines and mark their content in a way that shows it was artificially generated. They must also ensure that their tools do not produce child abuse material, terrorist content, hate speech, or any other form of content that violates EU law.

    One person, who asked to remain anonymous because they did not want to draw negative attention from lobby groups, said some rules for general-purpose AI systems were weakened in early May after lobbying by tech giants. Requirements that foundation models – the models underpinning tools like ChatGPT – be vetted by independent experts were removed.

    Parliament did agree, however, that foundation models should be registered in a database before they are put on the market, meaning companies would have to inform the EU about what they are starting to sell. “That’s a good start,” says Nicolas Moës, director of European AI governance at The Future Society, a think tank.

    The lobbying by big tech companies, including Alphabet and Microsoft, is something lawmakers around the world should be wary of, says Sarah Myers West, managing director of the AI Now Institute, another think tank. “I think we’re seeing a playbook emerge for how they try to tilt the policy environment in their favor,” she says.

    What the European Parliament finally came up with is an agreement that tries to please everyone. “It’s a real compromise,” said one parliament official, who asked not to be named because they are not authorized to speak in public. “Everyone is equally unhappy.”