Stop marking models with a warning: "Someone has submitted a claim that this uses their art in it's training data and this claim is pending review". #114
Replies: 10 comments 10 replies
-
Recently I noticed several notices stating "Someone has submitted a claim that this uses their art in it's training data and this claim is pending review". I urge Civitai either to A. not get involved in this emotion-based drama, or B. remove all models and shut your site down entirely. Every single model you host on your site was trained using images from a wide variety of artists, many of whom have copyrighted images. I would THINK that the creators of this site would be aware of how models are made and would know this. No case at all has established that training a model on a sample of an artist's style is a violation of anything; in fact, just the opposite is true. You can't copyright a style. How do you not know this? Please be professional: do not get involved in emotional dramas, and keep providing your service in a professional manner without slapping false and accusatory tags on people's models.
-
A site can't ignore takedown requests, and having too many could result in a number of things, including having their domain revoked, for instance if their domain holder were pressured the way Patreon and KickStarter were with another recent AI campaign. Perhaps uploaders can use different legalese to avoid such things, like saying their model is
-
Understand that there is no middle ground with the Anti-AI crowd: if you don't stop appeasing them you're going to get shut down, or lose so much utility that you go defunct. The choice is yours, and it has to be made right now.
-
I'll add the relevant part of the discussion on Reddit. So do you consider copying a style to be morally wrong, but models that copy more than one style are fine by you? Is that your official standard? Maybe the phrasing of the warning should be changed to explain that copying one style is wrong, but copying more than one style is fine. I am aware of at least one model marked with this warning that copies much more than one style, yet was flagged seemingly just for using the letters "S" "A" "M". How do you explain this? Do you have another special rule for models that use the letters "S" "A" "M"? Another point: on the site there are Textual Inversion embeddings for SD v1.5 that reproduce the same style that was flagged by the message above. This proves my point that all models are "guilty" of copying even this particular style, and you should apply the same rules to all models, including any model based on SD v1.4, 1.5, 2.0, or 2.1, i.e. all models on the site: mark them all or mark none.
-
A couple of people brought up the argument of overfitting; my response:
-
The issue of fine-tuning models on the works of a single artist raises complex questions about law, morality, and ethics. While we have tried to gather input from both sides of the debate, it has been difficult to find a clear answer that addresses the concerns of all parties involved. As a result, we have decided to approach this issue on a case-by-case basis until we can establish a precedent.

To facilitate this process, we recently added a feature that allows artists to report models that they believe were fine-tuned on their work. Our intention was to provide a way for artists to connect with us and work towards a resolution with the model creators. In the interest of transparency, we also chose to inform the community when such a report has been made. Please note that our goal is only to remove models that violate our Terms of Service.

We also want to be sensitive to the feelings of hurt expressed by creators who see their work being taken from them. This applies both to the artists whose styles are being imitated and to the creators of the fine-tuned models.

As for the models that are currently flagged, they appear to be a random assortment of models whose names include "Sam." We have reached out to Sam Yang and the individual who made the report, but have not yet received a response. If we do not hear back within 7 days, we will dismiss the claim. We would like to know your thoughts on the following questions:
-
Great... I just got my warning banner within 24 hours of my upload. I must admit, it is really "encouraging" for a trainer to know that a model you spent countless hours fine-tuning and tweaking hyperparameters for can be taken down at any moment. As I clearly stated in my description, I didn't even "target-train" on the artist's works. My dataset was scraped from public image sites with specific tags (literally the same training procedure as SD 1.4, 1.5, 2.0, 2.1). After the training, I found that the model performs exceptionally well on specific prompts drawn from this artist's image tags and had developed a heavy bias towards this artist's style, and that's why I gave it the name, not the other way around. I don't know why we even bother having a cloud-based community anymore. Must everyone now hoard as many models as possible to avoid losing access to a specific model one random day? If anyone is seeing this, check the link I posted above. I will leave this site and never come back if my model goes down, and you should too.
-
Thanks for your response. I wish only the best to CivitAI; I hope it becomes a very successful business, with a great user experience and clear rules.

1. You are using the term "fine-tuned models" as if there are two completely different kinds of models: one type is a general model that can produce many styles, and the other is a fine-tuned model that can produce only a single specific style. I would argue that this dichotomy is false for many reasons.

1.1. Dreambooth models, or regular continued-training models, do not erase most of the content of the base model; they add new concepts and styles. Most of the styles of the original model remain. So if the original model had X styles, the fine-tuned model has X+1 styles. Some styles may get distorted, but the additional training does not erase all base styles. There is no fine-tuned model with a single style, unless it was trained from scratch, and I don't think you would be able to find even a single model created that way.

1.2. Most Dreambooth models use regularization images during training to minimize the bleeding of newly added content into existing areas. The training process is deliberately designed to add a style/concept without erasing the existing ones. Therefore, by design, fine-tuning does not do what the original dichotomy assumes.

1.3. With most Dreambooth models you don't get the added style without using a trigger keyword, just like any other concept. Obviously, the randomness of the system can push it more or less toward certain areas.

1.4. Fine-tuning can add more than one style at once to the base model; there are many examples of such models working as well as any other fine-tuned model. Is adding more than one style somehow less of a fine-tune? When a single style is added it is bad, but when two or more styles are added it is acceptable?! SD 1.4 and 1.5 are themselves continued training of previous checkpoints.

2. I've already addressed the "overfitting" argument here, so I want to preemptively say that it is a bad argument. Not only that, overfitting can be a tool for mixing models, to easily steer the baseline of a model in a desired direction. Overfitting can be normalized and adjusted by mixing the model at a low ratio with another model. Also, any model can be overfitted terribly by repeatedly adding its difference from the base back to itself.

3. Transparency is important; it's better for everybody to know what's happening. You should definitely continue that line. If the site decided on a major policy change without properly communicating it in advance, it could ruin the site's reputation and devastate its potential future.

4. You added an option for artists to report models that don't violate your TOS. What is the use of that?

5. I argued my points at the beginning: artists don't have a legal case against anyone, neither the model creators nor the site that distributes the models. You would already be in legal proceedings with them if they had legal justification. Currently, artists are harassing and bullying individuals who produce models that resemble their style, and you are helping them by letting them flag any model merely because something resembles their style. The person who created the model has violated neither the law nor even your TOS, so what's the point of adding a warning to the model?

6. All models can imitate many artists. If you believe that is immoral, you need to add it to your TOS, and then remove all models.

7. I don't understand why you would allow anyone to mark certain models with strange warnings. If there is a TOS violation, you can add a small notice about the report inside the model's page, with an option to read about the report, and maybe a discussion section. But you definitely should not mark models where they are shown in previews beside others, like some sort of badge of shame.

8. It's nice that you sympathize with artists; we all do. It's a new technology, and the world needs to adjust to the change. But some artists want to go back to a time before this technology existed, and that is simply impossible. They will use anything to attack anyone in their way. If you cave in, by scapegoating some invented bad guys who created "bad" fine-tuned models, and by repeating legally baseless and immoral arguments about style ownership, you are setting yourself up for failure. If you give up on just a few models that did nothing worse than any other model, you will lose it all, because you will have lost all coherency and any option of applying your rules equally.
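As an aside on the overfitting/model-mixing point above: "mixing at a low ratio" in practice just means linearly interpolating the two checkpoints' weights. Here is a minimal sketch of that idea, assuming two state dicts with identical keys; the function name and toy values are hypothetical, not part of any real Civitai or Stable Diffusion tooling:

```python
def merge_state_dicts(base, finetuned, alpha=0.3):
    # Linear interpolation between two checkpoints with identical keys.
    # 'alpha' is the mixing ratio for the fine-tuned (possibly overfit)
    # model; a low alpha dilutes its overfit style back toward the base.
    return {k: (1 - alpha) * base[k] + alpha * finetuned[k] for k in base}

# Toy scalar "weights" standing in for real tensors (hypothetical values):
base = {"w": 1.0, "b": 0.0}
overfit = {"w": 3.0, "b": 2.0}
merged = merge_state_dicts(base, overfit, alpha=0.25)
print(merged)  # {'w': 1.5, 'b': 0.5}
```

With real models the same loop would run over tensor-valued state dicts; the arithmetic is identical, which is why an overfit checkpoint can be "normalized" simply by choosing a small alpha.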
-
There are a few models that now appear with a warning: "Someone has submitted a claim that this uses their art in it's training data and this claim is pending review".
All Stable Diffusion models were trained on someone's art, including all main versions: 1.4, 1.5, 2.0, 2.1. Training a model on someone's art is 100% legal, and copying a style is 100% legal. Not only is it legal, it is also moral; it's called freedom: freedom of expression, freedom of thought, freedom to learn, and freedom to create. If you start prohibiting the inclusion of someone's art in models without "consent", you need to delete all models on the site. Singling out any model is complete hypocrisy on your part.
If your standards prohibit including someone's art in training, please erase all the models and close your site.
Otherwise, remove this shameful warning.