
Add support for tiny PEFT-based Flux LoRA based on TheLastBen's post on Reddit #912

Merged: 2 commits into bghira:main on Sep 1, 2024

Conversation

mhirki (Contributor) commented on Sep 1, 2024

As suggested on Reddit by TheLastBen:
https://www.reddit.com/r/StableDiffusion/comments/1f523bd/good_flux_loras_can_be_less_than_45mb_128_dim/

I'm currently testing --flux_lora_target=tinier with a rank 16 LoRA and it's having some effect:

[Validation images at 1024x1024: step 100 vs. step 1300]

The file size is only 576 kB for this ultra-tiny LoRA. My learning rate is 1e-3 with the adamw_bf16 optimizer.
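
For context, here is a minimal sketch of how a "tinier"-style target could be expressed with PEFT: rank 16 applied to only a couple of projection layers rather than every linear layer. The `target_modules` names and the 3072-dim hidden size are illustrative assumptions, not the exact choices made in helpers/training/adapter.py:

```python
# Minimal sketch (not SimpleTuner's actual code): a PEFT LoraConfig that
# adapts only a small subset of projections, which is what keeps the
# resulting LoRA file so small.
from peft import LoraConfig

nano_lora_config = LoraConfig(
    r=16,                        # rank used in the test above
    lora_alpha=16,
    init_lora_weights="gaussian",
    target_modules=[             # hypothetical targets, for illustration only
        "attn.to_k",
        "attn.to_v",
    ],
)

# Back-of-envelope size check, assuming a 3072-dim hidden size:
# each targeted 3072x3072 Linear gains two rank-16 factors.
hidden, rank = 3072, 16
params_per_module = 2 * hidden * rank          # 98,304 params
print(f"~{params_per_module * 2 / 1024:.0f} kB per module in bf16")  # ~192 kB
```

At two bytes per bf16 weight, that is roughly 192 kB per adapted matrix, so a 576 kB file is consistent with only a handful of targeted modules.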

helpers/training/adapter.py (review thread resolved)
Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
bghira merged commit 440f701 into bghira:main on Sep 1, 2024
1 check passed
mhirki (Contributor, Author) commented on Sep 1, 2024

Dammit, you merged it too fast; I still needed to change "tinier" to "nano" in other places.

bghira (Owner) commented on Sep 1, 2024

it wasn't marked as draft 😓 no worries. i was going to add a note to the flux quickstart anyway.
