
    Stable Diffusion made copying artists and generating porn harder, and users are angry

    Users of the AI image generator Stable Diffusion are angry about an update to the software that “nerfs” its ability to generate NSFW output and imagery in the style of specific artists.

    Stability AI, the company that funds and distributes the software, announced Stable Diffusion Version 2 early this morning, European time. The update overhauls key components of the model and improves certain features, such as upscaling (the ability to increase the resolution of images) and in-painting (context-aware editing). But the changes also make it harder for Stable Diffusion to generate certain types of images that have drawn both controversy and criticism. These include nude and pornographic output, photorealistic pictures of celebrities, and images that mimic the artwork of specific artists.
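    For readers who want to try the new features programmatically, below is a minimal sketch of the 4x upscaler using Hugging Face’s diffusers library (a tooling assumption on our part; the article names no particular library), with hypothetical file names and prompt:

        import torch
        from PIL import Image
        from diffusers import StableDiffusionUpscalePipeline

        # Load Stability AI's released 4x upscaler checkpoint.
        pipe = StableDiffusionUpscalePipeline.from_pretrained(
            "stabilityai/stable-diffusion-x4-upscaler",
            torch_dtype=torch.float16,
        ).to("cuda")

        # "low_res.png" is a hypothetical input; any small RGB image works.
        low_res = Image.open("low_res.png").convert("RGB")

        # The upscaler is text-guided: the prompt describes the image content.
        upscaled = pipe(prompt="a detailed photograph", image=low_res).images[0]
        upscaled.save("upscaled.png")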

    “They nerfed the model”

    “They nerfed the model,” noted one user on a Stable Diffusion subreddit. “It’s a bit of a bad surprise,” said another on the software’s official Discord server.

    Users note that asking Version 2 of Stable Diffusion to generate images in the style of Greg Rutkowski – a digital artist whose name has become literal shorthand for producing high-quality images – no longer creates artwork that closely resembles his own. (Compare these two images, for example.) “What have you done with Greg😔,” wrote one user on Discord.

    Changes to Stable Diffusion are noteworthy, as the software is hugely influential and helps set standards in the fast-moving generative AI scene. Unlike competing models such as OpenAI’s DALL-E, Stable Diffusion is open source. This allows the community to quickly improve the tool and lets developers integrate it into their products for free. But it also means Stable Diffusion has fewer limitations on how it’s used and has received significant criticism as a result. In particular, many artists, such as Rutkowski, are annoyed that Stable Diffusion and other image-generating models have been trained on their artwork without their permission and can now reproduce their styles. Whether this kind of AI-assisted copying is legal is an open question. Experts say training AI models on copyrighted data is likely legal, but certain use cases could be challenged in court.

    A comparison of Stable Diffusion’s ability to generate images similar to specific artists.
    Image: via Reddit

    Stable Diffusion’s users have speculated that the changes to the model were made by Stability AI to mitigate such potential legal challenges. However, when The Verge asked Stability AI founder Emad Mostaque in a private chat whether this was the case, Mostaque didn’t answer. Mostaque did confirm, though, that Stability AI did not remove artists’ images from the training data (as many users have speculated). Instead, the model’s reduced ability to copy artists is a result of changes made to the way the software encodes and retrieves data.

    “There’s been no specific filtering of artists here,” Mostaque told The Verge. (He also elaborated on the technical underpinnings of these changes in a message posted on Discord.)

    However, what has been removed from Stable Diffusion’s training data are nude and pornographic images. AI image generators are already used to generate NSFW output, including both photorealistic and anime-style imagery. But these models can also be used to generate NSFW images that resemble specific real people (known as non-consensual pornography) and images of child abuse.

    Discussing the changes to Stable Diffusion Version 2 on the software’s official Discord, Mostaque noted that this latter use case is the reason for filtering out NSFW content. “can’t have kids & nsfw in an open model,” said Mostaque (since the two types of images can be combined to create child sexual abuse material), “so get rid of the kids or get rid of the nsfw.”

    One user on the Stable Diffusion subreddit said the removal of NSFW content was “censorship” and “goes against the spirit of the open source community.” “Choosing to make NSFW content or not should be in the hands of the end user, no [sic] in a restricted/censored model,” the user said. Others, however, noted that the open-source nature of Stable Diffusion means NSFW training data can easily be added back in third-party releases and that the new software doesn’t affect earlier versions: “Don’t worry about the lack of artists/NSFW in V2.0, you will soon be able to generate your favorite celeb nude, and you can do that anyway already.”

    While the changes in Stable Diffusion Version 2 have annoyed some users, many others praised its potential for deeper functionality, such as the software’s new ability to produce content that matches the depth of an existing image. Others said the changes made it more difficult to produce high-quality images quickly but that the community would likely add this functionality back in future releases. As one user on Discord summarized the changes: “2.0 is better at interpreting prompts and creating coherent photographic images in my experience so far. however, it won’t make rutkowski tits.”
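    To make the “depth” feature concrete, here is a similar minimal sketch using the depth-conditioned checkpoint Stability AI published alongside Version 2, again via the diffusers library (the library choice, file names, and prompt are illustrative assumptions): the pipeline estimates a depth map from an existing image and generates new content with the same spatial layout.

        import torch
        from PIL import Image
        from diffusers import StableDiffusionDepth2ImgPipeline

        # Load the depth-conditioned Stable Diffusion 2 checkpoint.
        pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
            "stabilityai/stable-diffusion-2-depth",
            torch_dtype=torch.float16,
        ).to("cuda")

        # "room.png" is a hypothetical input; its inferred depth map guides
        # the spatial layout of the generated image.
        init = Image.open("room.png").convert("RGB")

        # strength controls how far the output may drift from the input image.
        out = pipe(prompt="a cozy cabin interior", image=init, strength=0.7).images[0]
        out.save("depth_guided.png")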

    Mostaque himself compared the new model to a pizza crust on which anyone can add ingredients (i.e., training data) of their choice. “A good model should be usable by everyone and if you want to add things, add things,” he said on Discord.

    Mostaque also said that future versions of Stable Diffusion would use training datasets that allow artists to opt in or out — a feature that many artists have been asking for and that could help mitigate some criticism. “We try to be super transparent as we improve the base models and incorporate feedback from the community,” Mostaque told The Verge.

    A public demo of Stable Diffusion Version 2 can be accessed here (although due to high demand from users, the model may be inaccessible or slow).

