
5 Notes From the Big Paris AI Summit

    World leaders, tech moguls and assorted pundits (including yours truly) have gathered in Paris this week for the Artificial Intelligence Action Summit, a conference co-hosted by Emmanuel Macron, the French president, and Narendra Modi, the prime minister of India, to discuss a wide range of AI-related issues.

    The leaders of three American AI companies – Sam Altman of OpenAI, Dario Amodei of Anthropic and Demis Hassabis of Google DeepMind – are here, along with a flock of prominent AI executives, academic researchers and civil society groups. (Vice President JD Vance, who is leading the American delegation, is expected to appear on Tuesday.)

    Between bites of pain au chocolat, here is some of what I've seen so far:

    The backdrop of the AI summit is that Europe – which passed tough laws on data privacy and social media over the past decade, and led the way on AI regulation with the European Union's AI Act – is having second thoughts.

    Mr. Macron, who this week announced $112.5 billion in private investment in the French AI ecosystem, is mainly worried about falling behind. He has become a cheerleader for Mistral, a French AI start-up, and has argued against “punitive” regulations that could make the country's tech sector less competitive.

    Technology companies (and their lobbyists) appreciate the assist. But it is probably too late to stop the AI Act, which will come into force in phases over the next year. And several American AI executives told me that they still considered Europe a harder place to do business than other large markets, such as India, where regulations are relatively lax.

    The Paris AI summit is actually the third in a series of global AI summits. The first two – held in Britain in 2023 and in South Korea last year – were much more focused on the potential risks and harms of advanced AI systems, up to and including human extinction.

    But in Paris, the doomers were sidelined in favor of a sunnier, more optimistic vision of the technology's potential. Panelists and speakers were invited to discuss how AI could accelerate progress in areas such as medicine and climate science, and gloomy conversations about the risks of AI takeover were mostly relegated to unofficial side events. And a leaked draft of the official summit statement, which some of the countries in attendance are expected to sign, was criticized by AI safety groups for paying too little attention to catastrophic risks.

    That partly reflects a deliberate decision by Mr. Macron and his lieutenants to play up the positive side of AI. (One of them, Anne Bouverot, a special envoy for the summit, took aim at the “exaggerated fears” of people focused on AI safety during her opening remarks on Monday.) But it also reflects a broader shift within the AI industry, which seems to have realized that it is easier to get policymakers excited about AI progress if they are not worried it will kill them.

    Like every AI event of the past month, the Paris summit buzzed with talk of DeepSeek, the Chinese AI start-up that stunned the world with its powerful reasoning model, reportedly built for a fraction of the cost of leading American models.

    In addition to lighting a fire under the American AI giants, DeepSeek has given new hope to smaller AI outfits in Europe and elsewhere that had been counted out of the race. By using more efficient training techniques and clever engineering hacks to build its models, DeepSeek showed that you may need only tens of millions of dollars – rather than hundreds of billions – to keep pace at the AI frontier.

    “DeepSeek has shown that all countries can be part of AI, which wasn't clear before,” said Clément Delangue, the French-born chief executive of Hugging Face, an AI development company.

    Now, Mr. Delangue said, “the whole world is catching up.”

    The most popular guessing game of the week is what the Trump administration's stance on AI will be.

    The new administration has so far made a few moves on AI, such as withdrawing the Biden White House's executive order that had established a testing program for powerful AI models. But it has not yet laid out a full agenda for the technology.

    Some people here are hopeful that Elon Musk – one of the president's top advisers and a man who both runs an AI company and has expressed fears of powerful AI run amok – will persuade Mr. Trump to take a more cautious approach.

    Others believe that the venture capitalists and so-called AI accelerationists in Mr. Trump's orbit, such as the investor Marc Andreessen, will convince him to leave the AI industry alone and tear up any regulations that could slow it down.

    Mr. Vance may tip the administration's hand on Tuesday during his summit address. But nobody here expects stability anytime soon. (One AI executive characterized the Trump administration to me as “high variance,” which is AI-speak for “chaotic.”)

    For me, the biggest surprise of the Paris summit has been that policymakers don't seem to grasp how soon powerful AI systems could arrive, or how disruptive they could be.

    Mr. Hassabis of Google DeepMind said during an event at the company's office on Sunday that AGI – artificial general intelligence, an AI system that matches or surpasses human abilities across many domains – could arrive within five years. (Mr. Amodei of Anthropic and Mr. Altman of OpenAI have predicted its arrival even sooner, possibly within the next year or two.)

    Even if you discount the predictions of tech CEOs, the discussions I've heard in Paris have lacked the urgency you would expect if powerful AI were really around the corner.

    The policy wonks here are fluent in fuzzy concepts such as “multi-stakeholder engagement” and “innovation-enabling frameworks.” But few are thinking seriously about what would happen if smarter-than-human AI systems arrived within a few months, or asking the right follow-up questions.

    What would it mean for workers if powerful AI agents capable of replacing millions of white-collar jobs were not a distant fantasy but an imminent reality? What kinds of regulations would be needed in a world where AI systems were capable of recursive self-improvement, or of carrying out autonomous cyberattacks? And if you are an AGI optimist, how should institutions prepare for rapid improvements in areas such as scientific research and drug discovery?

    I don't mean to pile on the policymakers, many of whom are doing their best to keep up with AI progress. Technology moves at one speed; institutions move at another. And it is possible that industry leaders are way off in their AGI predictions, or that new obstacles to AI improvement will emerge.

    But at times this week, listening to policymakers discuss how to govern AI systems that are already several years old – using regulations that will probably be outdated shortly after they are written – I was struck by how mismatched these timescales are. It sometimes felt like watching policymakers on horseback struggle to install seatbelts on a passing Lamborghini.

    I'm not sure what to do about this. It's not as if industry leaders are vague or unclear about their intention to build AGI, or their conviction that it will happen very soon. But if the Paris summit is any indication, something is getting lost in translation.