Commonly available AI can have deadly consequences

    The researchers cautioned that while AI is becoming more powerful and accessible to everyone, there is almost no regulation or oversight of the technology, and that researchers like themselves have limited awareness of its potential for malicious use.

    “It is extremely difficult to identify dual-use equipment, material and knowledge in the life sciences, and there have been decades of attempts to develop frameworks to do this. Few countries have specific legal provisions for it,” said Filippa Lentzos, associate professor of science and international security at King’s College London and co-author of the paper. “There has been some discussion about dual use in the AI field, but the main focus has been on other social and ethical issues, such as privacy. There has been very little discussion of dual use, and even less in the area of AI drug discovery,” she said.

    While a significant amount of work and expertise went into developing MegaSyn, hundreds of companies around the world are already using AI for drug discovery, according to Ekins, and most of the tools needed to reproduce his VX experiment are publicly available.

    “As we were doing this, we realized that anyone with a computer and just enough knowledge to find the datasets and this kind of software, all of which are publicly available, and put them together can do this,” Ekins said. “How do you keep track of possibly thousands of people, maybe millions, who could do this and have access to the information, the algorithms and also the know-how?”

    Since March, the paper has been viewed more than 100,000 times. Some scientists have criticized Ekins and his co-authors for crossing an ethical gray line in conducting their VX experiment. “It’s a really bad way to use the technology, and it didn’t feel right to do it,” Ekins acknowledged. “I had nightmares afterward.”

    Other researchers and bioethicists have praised the researchers for delivering a concrete, proof-of-concept demonstration of how AI can be abused.

    “I was quite alarmed when I first read this article, but not surprised. We know that AI technologies are becoming more powerful, and the fact that they can be used in this way doesn’t seem surprising,” said Bridget Williams, a public health physician and postdoctoral associate at the Center for Population-Level Bioethics at Rutgers University.

    “I initially wondered whether it was a mistake to publish this piece, as it could lead people with bad intentions to use this kind of information maliciously. But the benefit of a paper like this is that it can encourage more scientists, and the research community at large, including funders, journals and pre-print servers, to think about how their work could be misused and to take steps to guard against it, as the authors of this article did,” she said.

    In March, the US Office of Science and Technology Policy (OSTP) called Ekins and his colleagues to the White House for a meeting. The first thing OSTP representatives asked, according to Ekins, was whether he had shared any of the deadly molecules MegaSyn generated with anyone. (OSTP did not respond to repeated interview requests.) Their second question was whether they could have the file containing all the molecules. Ekins says he turned them down. “Someone else could do this anyway. There is certainly no supervision. There is no control. I mean, it’s just us, right?” he said. “There’s just a great reliance on our morals and our ethics.”