The renowned scientific journal Nature announced in an editorial on Wednesday that it will not publish images or videos created using generative AI tools. The ban comes amid the publication’s concerns about research integrity, consent, privacy and intellectual property protection as generative AI tools increasingly permeate the worlds of science and art.
Founded in November 1869, Nature publishes peer-reviewed research from a variety of academic disciplines, primarily in the fields of science and technology. It is one of the world’s most cited and most influential scientific journals.
Nature says its recent decision on AI artwork followed months of intense discussion and consultation, prompted by the rising popularity and advancing capabilities of generative AI tools like ChatGPT and Midjourney.
“Except in articles dealing specifically with AI, Nature will not publish content in which photography, videos, or illustrations have been created in whole or in part using generative AI, at least for the foreseeable future,” the publication wrote in an unsigned editorial attributed to the journal itself.
The publication considers the issue to fall within its ethical guidelines on integrity and transparency in published work, which include being able to cite the sources of the data behind images:
“Why don’t we allow the use of generative AI in visual content? Ultimately, it’s a matter of integrity. The process of publishing – for both science and art – is underpinned by a shared commitment to integrity. That includes transparency. As researchers, editors and publishers, we all need to know the sources of data and images so that they can be verified as accurate and true. Existing generative AI tools do not provide access to their sources so that such verification can take place.”
As a result, all artists, filmmakers, illustrators and photographers commissioned by Nature will be asked to confirm that none of the work they submit has been generated or augmented using generative AI.
Nature also notes that attribution of existing work, a core tenet of science, is another barrier to ethically using generative AI artwork in a scholarly journal. Tracing AI-generated artwork back to its sources is difficult because the images are typically synthesized from millions of images fed into an AI model.
That fact also raises issues of consent and permission, especially with regard to personal identification and intellectual property rights. Here too, according to Nature, generative AI falls short, routinely using copyrighted works for training without obtaining the necessary permissions. And then there is the issue of falsehoods: the publication cites deepfakes as accelerating the spread of false information.
However, Nature is not entirely against the use of AI tools. The journal still allows the inclusion of text produced with generative AI tools such as ChatGPT, provided it is accompanied by the necessary caveats: the use of these large language model (LLM) tools must be explicitly documented in a paper’s methods or acknowledgments section, and authors must provide sources for all data, including data generated with AI assistance. The journal has firmly stated, however, that no LLM tool will be credited as an author on a research paper.