class-preservation target loss for LoRA / LyCORIS #1031
I'm the author of this. I am not entirely convinced yet myself that this is a useful feature. It seems to somewhat limit the training of the concept you do want to change ("ohwx woman" in this example) by insisting that the concept "woman" remains exactly the same during training. This was an experiment I first ran yesterday, so I have limited test data myself. Training the TE or training additional embeddings might overcome the issue mentioned above by separating the concepts in TE space? I am currently trying embeddings. Happy to help with your implementation of this!
TIPO with random seeds and temperatures can be used to generate random prompts for related concepts. It can do tags -> natural language prompt, or short prompt -> long prompt.
There is no need to train the text encoder for flux models, as the model is partially a large text encoder aligned to image space.
source, more info? |
MM-DiT is this: in the joint transformer blocks, the text stream is transformed alongside the image stream, so part of the model effectively behaves as a large text encoder aligned to image space.
After running some more tests, I now think this is worth implementing, as a feature that does not require data, captions, or any other configuration. Since there is no prompt provided, it can potentially preserve multiple classes as well as whatever you train on.
branch here for anyone who wants to try: https://github.com/dxqbYD/OneTrainer/tree/prior_reg
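for anyone curious how such a data-free preservation term could be wired up, here is a rough sketch assuming a PEFT-style adapter that can be toggled with `disable_adapter()`; the loss structure and `reg_weight` are illustrative guesses, not the actual code in the prior_reg branch:

```python
# a rough sketch of a data-free preservation term, assuming a PEFT-style
# model whose adapter can be toggled with disable_adapter(). the loss
# structure and reg_weight are guesses for illustration, not the actual
# code in the prior_reg branch.
import torch
import torch.nn.functional as F

def preservation_loss(model, batch, noise, reg_weight=1.0):
    latents = batch["noisy_latents"]
    timesteps = batch["timesteps"]
    text_embeds = batch["encoder_hidden_states"]

    # normal training prediction, adapter active
    pred = model(latents, timesteps, text_embeds).sample
    train_loss = F.mse_loss(pred.float(), noise.float())

    # same inputs with the adapter disabled: no extra dataset, captions,
    # or prompts are needed, so every concept present in the batch is
    # anchored to the base model's behaviour at once.
    with torch.no_grad(), model.disable_adapter():
        base_pred = model(latents, timesteps, text_embeds).sample

    return train_loss + reg_weight * F.mse_loss(pred.float(), base_pred.float())
```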
the idea is based on this pastebin entry: https://pastebin.com/3eRwcAJD
snippet:
the idea is that we can set a flag inside multidatabackend.json for a dataset that contains our regularisation data. instead of training on this data as we currently do, we will use it to hold the base model's outputs in place.
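a minimal sketch of what that flag might look like in multidatabackend.json; the key name `is_regularisation_data`, the ids, and the paths are assumptions for illustration, not a finalised option:

```json
[
  {
    "id": "subject",
    "type": "local",
    "instance_data_dir": "/datasets/ohwx-woman"
  },
  {
    "id": "regularisation",
    "type": "local",
    "instance_data_dir": "/datasets/woman",
    "is_regularisation_data": true
  }
]
```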
instead of checking for `woman` in the first element's caption as the snippet does, the batch will come with a flag to enable this behaviour, from multidatabackend.json somehow. this will indeed run more slowly, as it runs two forward passes during training for the regularisation dataset, but it has the intended effect of maintaining the original model's outputs for the given inputs, which helps substantially to prevent subject bleed.
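a minimal sketch of that two-forward-pass step, in the spirit of the pastebin snippet; the model names, batch fields, and flag below are illustrative assumptions, not SimpleTuner's actual API:

```python
# a sketch of the regularisation step: two forward passes, with the frozen
# base model's prediction used as the training target. names below
# (base_model, lora_model, the batch fields, is_regularisation_data)
# are illustrative assumptions, not the project's actual API.
import torch
import torch.nn.functional as F

def training_step(lora_model, base_model, batch):
    latents = batch["noisy_latents"]
    timesteps = batch["timesteps"]
    text_embeds = batch["encoder_hidden_states"]

    # first forward pass: the model being trained (adapter active)
    pred = lora_model(latents, timesteps, text_embeds).sample

    if batch.get("is_regularisation_data", False):
        # second forward pass: the frozen base model on the same inputs.
        # its output becomes the target, so the adapted model is pulled
        # back toward the original behaviour on the regularisation data.
        with torch.no_grad():
            target = base_model(latents, timesteps, text_embeds).sample
    else:
        # the usual target, e.g. the sampled noise for epsilon-prediction
        target = batch["target"]

    return F.mse_loss(pred.float(), target.float())
```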
note: i'm not aware of who authored the code snippet, but i would love to give credit to whoever created it.
example that came with the snippet:
requested by a user on the terminus research discord.