Everything is Working Great, quick question.... #19
Comments
@nitrosocke Do you mind sharing the process you used to get these results? I'm trying to use only LoRA to fine-tune SD, but the results were not that good.
@pedrogengo Sure thing, here is my workflow:
@nitrosocke And thanks for sharing! What prompt did you use to generate the last two images from your grid?
@pedrogengo I did try training on photos of my wife, and those settings didn't work. I haven't done any more testing on that, but I assume it's something like 2k steps for 5 images. I would have to try, though. The prompt for Joseph was "modern disney style joseph gordon levitt"; it's using my custom model with the JGL training .pt on top.
Thanks for all the responses, everyone! I generally only overwrite known concepts (like actors) with photos of myself. I'll take a look at all the settings you posted and tweak mine accordingly :) I'm helping to test a locally installed UI implementation that now includes LoRA Dreambooth, so any information is very useful for guiding users!
Now you can also fine-tune the CLIP text encoder with LoRA, just like the Dreambooth option. Check out fine-tuning with the shell script, and check out using LoRAs in this notebook (with an example LoRA I've made): https://github.com/cloneofsimo/lora/blob/master/scripts/run_with_text_lora_also.ipynb
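To make that concrete, loading a trained UNet LoRA plus a text-encoder LoRA at inference time looks roughly like the sketch below. This is hedged: `monkeypatch_lora` and `tune_lora_scale` are the helpers the repo's README documented around this time, but check your installed `lora_diffusion` version for the exact names and signatures; the model ID and weight paths are placeholders.

```python
# Sketch: applying a UNet LoRA plus a text-encoder LoRA at inference time.
# Assumes the monkeypatch_lora / tune_lora_scale helpers from lora_diffusion;
# the model ID and the .pt paths below are placeholders.
import torch
from diffusers import StableDiffusionPipeline
from lora_diffusion import monkeypatch_lora, tune_lora_scale

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Patch the UNet with the trained LoRA weights.
monkeypatch_lora(pipe.unet, torch.load("lora_weight.pt"))

# Patch the text encoder; CLIPAttention is the module the examples target.
monkeypatch_lora(
    pipe.text_encoder,
    torch.load("lora_weight.text_encoder.pt"),
    target_replace_module=["CLIPAttention"],
)

# Dial the LoRA strength up or down (1.0 = full effect).
tune_lora_scale(pipe.unet, 0.8)
tune_lora_scale(pipe.text_encoder, 0.8)

image = pipe("modern disney style joseph gordon levitt", num_inference_steps=50).images[0]
image.save("output.png")
```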
@cloneofsimo Thanks for the work. Do you have any Colab/notebook for basic training? I currently see 4 scripts in the scripts folder, but I'm not sure if you put the example there or somewhere else. Basically, if I've got 4 768x768 images and I want to fine-tune SD 2.0, which script should I look at?
You can run train_lora_dreambooth.py, with the instance directory pointed at a folder containing your training images.
Thank you @cloneofsimo, and sorry for a dumb question: so the input images should be inside the folder that is assigned as the instance directory?
Yes @amrrs.
@amrrs I made this Colab notebook if you want to perform all the steps on Colab: https://colab.research.google.com/drive/1iSFDpRBKEWr2HLlz243rbym3J2X95kcy?usp=sharing @cloneofsimo If you like, you can update the README with it :)
@pedrogengo is it possible to run the Colab training with batch_size=5? It is the default setting, but I'm getting CUDA_OUT_OF_MEMORY.
Which GPU are you using? T4 15GB (standard) or A100 40GB (premium)? I'm trying to figure out ideal settings for the A100.
Sorry, I didn't test with batch_size=5; I was using 1 during my experiments. What you can do is use batch_size=1 with gradient_accumulation_steps=5, so you only update the weights after every 5 steps, which gives the same effective batch size as batch_size=5. I will add a field for this on the Colab too.
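For anyone unfamiliar with the trick, here is a minimal, generic PyTorch sketch of gradient accumulation. The tiny model and random data are stand-ins so it runs end to end; this is not the Colab's actual training loop.

```python
import torch
from torch import nn

# Stand-ins so the sketch runs; swap in the real model and dataloader.
model = nn.Linear(10, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()
loader = [(torch.randn(1, 10), torch.randn(1, 1)) for _ in range(20)]

accumulation_steps = 5  # batch_size=1 updated every 5 steps ~ batch_size=5

optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = loss_fn(model(x), y)
    # Scale the loss so the summed gradient matches one big-batch step.
    (loss / accumulation_steps).backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```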
@amerkay @Daniel-Kelvich I just updated the Colab with Gradient Accumulation Steps! Enjoy :)
Nice work @pedrogengo! I'll update the README.
Sure thing! I can do it by EOD.
@pedrogengo @cloneofsimo the script train_lora_dreambooth.py seems to be missing the call that lets accelerate manage the accumulation context, so I'm not sure passing the parameter will do anything.
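For reference, the idiom accelerate expects is the `Accelerator.accumulate` context manager. A minimal sketch with a stand-in model and data (not the actual Dreambooth loop) is below:

```python
import torch
from torch import nn
from accelerate import Accelerator

# Stand-ins so the sketch runs; in the real script these would be the
# UNet, its optimizer, and the Dreambooth dataloader.
model = nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
dataloader = torch.utils.data.DataLoader(torch.randn(16, 4), batch_size=1)

accelerator = Accelerator(gradient_accumulation_steps=5)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for batch in dataloader:
    # accumulate() is the piece the training script appears to skip:
    # the prepared optimizer defers its step until the accumulation
    # window is full, so the flag actually takes effect.
    with accelerator.accumulate(model):
        loss = model(batch).pow(2).mean()  # placeholder loss
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```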
Right, gradient accumulation doesn't work right now because it implicitly updates all the other params wrapped inside it, so I removed it.
Even if you set return_grad=None or filter the parameters?
Yes, I think so. But I'm not really used to the accelerate package, so that probably wasn't the right way to fix it. I'll try to make it work with gradient accumulation.
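On the filtering idea: a generic way to guarantee only the LoRA weights can move is to freeze everything else and give the optimizer just the injected parameters. A minimal sketch, with a stand-in module playing the role of the LoRA layers (not the repo's actual code):

```python
import torch
from torch import nn

# Stand-in for a model with LoRA layers injected; in the real script this
# would be the UNet after the LoRA patch.
model = nn.Sequential(nn.Linear(8, 8), nn.Linear(8, 8))
lora_params = list(model[1].parameters())  # pretend these are the LoRA weights

# Freeze everything, then re-enable gradients only for the LoRA weights.
for p in model.parameters():
    p.requires_grad_(False)
for p in lora_params:
    p.requires_grad_(True)

# The optimizer only ever sees the LoRA parameters, so even if an update
# fires mid-accumulation, the base weights cannot change.
optimizer = torch.optim.AdamW(lora_params, lr=1e-4)
```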
Just gonna drop a link to more training/tuning discussion here:
@cloneofsimo I just updated the Colab, and as a workaround to gradient accumulation I'm making
Is there a way to train new images with manually added captions?
You can try this Colab notebook, which allows using captions with @cloneofsimo's LoRA training.
As seen here (example outputs included):
https://twitter.com/MushroomFleet/status/1602341447952437249
and below (just the merge experiments).
What are your advised training settings? I used a bunch that I use with other Dreambooth methods, but it would be interesting to know if you have recommended settings for LoRA, since it is a little different.
Thanks for your hard work in bringing this to us :)