
Stability AI plans to let artists opt out of Stable Diffusion 3 image training

    An AI-generated image of a person exiting a building, thus opting out of the vertical blind convention. (Credit: Ars Technica)

    On Wednesday, Stability AI announced it would allow artists to remove their work from the training dataset for an upcoming Stable Diffusion 3.0 release. The move comes as an artist advocacy group called Spawning tweeted that Stability AI would honor opt-out requests collected on its Have I Been Trained website. However, the details of how the plan will be implemented remain incomplete and unclear.

    As a quick recap, Stable Diffusion, an AI image synthesis model, gained its ability to generate images by “learning” from a large dataset of images scraped from the internet without asking rights holders for permission. Some artists are upset because Stable Diffusion generates images that could potentially rival those of human artists, in unlimited quantities. We’ve been following the ethical debate since Stable Diffusion’s public launch in August 2022.

    To understand how the Stable Diffusion 3 opt-out system is supposed to work, we created an account on Have I Been Trained and uploaded an image of the Atari Pong arcade flyer (which is not ours). After the site’s search engine found matches in the Large-scale Artificial Intelligence Open Network (LAION) image database, we right-clicked several thumbnails individually and selected “Opt-Out This Image” from a pop-up menu.

    Once marked, the images appeared in a list of works we had flagged as opted out. We encountered no attempt to verify our identity, nor any legal scrutiny of the images we had supposedly “opted out.”

    A screenshot of “opting out” images that don’t belong to us on the Have I Been Trained website. Images with flag icons have been “opted out.” (Credit: Ars Technica)
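
    Under the hood, Have I Been Trained builds on LAION’s open CLIP-retrieval tooling, so a similar reverse search can be sketched against the public LAION index with the open-source clip-retrieval client. This is a minimal sketch only; the service URL, index name, and the pong_flyer.jpg filename are assumptions for illustration, not the site’s actual backend configuration.

```python
# Reverse-search the public LAION index for near-matches of a local image,
# similar in spirit to the search Have I Been Trained performs.
from clip_retrieval.clip_client import ClipClient, Modality

client = ClipClient(
    url="https://knn.laion.ai/knn-service",  # public LAION knn backend (assumed; may have moved)
    indice_name="laion5B-L-14",              # index name (assumed)
    modality=Modality.IMAGE,
    num_images=20,
)

# Query with a local image file; each result carries the matched image URL,
# its caption, and a CLIP similarity score.
results = client.query(image="pong_flyer.jpg")
for r in results:
    print(f"{r['similarity']:.3f}  {r['url']}  {r.get('caption', '')}")
```

    Each hit corresponds to a row in the LAION metadata, which is consistent with what we saw on the site: opt-outs are registered per matched entry, not per artwork.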

    Other issues: To remove an image from training, it must already be in the LAION dataset and must be searchable by Have I Been Trained. And there is currently no way to exclude large groups of images or the many copies of the same image in the dataset.
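
    For a sense of why bulk opt-outs are hard: even finding exact duplicates of a single image means scanning LAION’s metadata, which is distributed as billions of rows across parquet shards. A rough sketch, assuming one downloaded metadata shard and LAION’s published URL/TEXT column names (the shard filename is a placeholder):

```python
# Count exact-URL duplicates in one LAION metadata shard. Exact URL matching
# still misses re-hosted copies of the same image, which would require
# content hashing or embedding similarity across the full dataset.
import pandas as pd

# Placeholder filename; LAION metadata is published as many parquet shards.
shard = pd.read_parquet("laion2B-en-metadata-part-00000.parquet",
                        columns=["URL", "TEXT"])

# How many rows point at the exact same URL?
dupes = shard["URL"].value_counts()
print(dupes[dupes > 1].head(10))
```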

    The system, as currently implemented, raises questions that echoed through the announcement threads on Twitter and YouTube. For example, if Stability AI, LAION, or Spawning undertook the massive effort of legally verifying ownership for every opt-out request, who would pay for that labor? Would people trust these organizations with the personal information necessary to verify their rights and identity? And why attempt to verify ownership at all when Stability’s CEO says that no legal permission is required to use the images in the first place?

    A video from Spawning announcing the opt-out option.

    Also, putting the burden on artists to register for a site with a non-binding connection to either Stability AI or LAION, and then hoping their request gets honored, doesn’t seem popular. In response to Spawning’s statements about consent in its announcement video, some people noted that the opt-out process does not fit the definition of consent in Europe’s General Data Protection Regulation, which states that consent must be actively given, not assumed by default (“Consent must be freely given, specific, informed and unambiguous. In order to obtain freely given consent, it must be given on a voluntary basis.”). In this sense, many argue that the process should be opt-in only and that all artwork should be excluded from AI training by default.

    Currently, it appears that Stability AI is working within US and European law to train Stable Diffusion using scraped images collected without permission (although this issue has not yet been tested in court). But the company is also taking steps to acknowledge the ethical debate that has sparked a major outcry against AI-generated art online.

    Is there a balance that can satisfy artists while allowing progress in AI image synthesis to continue? For now, Stability CEO Emad Mostaque appears open to suggestions, tweeting, “The team @laion_ai is super open to feedback and wants to build better datasets for everyone, and they are doing a great job. For our part, we believe this is transformative technology and we are happy to engage with all parties and try to be as transparent as possible. All moving and maturing, fast.”