Tried the "official" flux-RealismLora and met with missing transformer error #40
I'm experiencing a similar issue. I suspect the LoRA isn't in a format compatible with the implementation in mflux. I've tried the phlux and Amateur Photography LoRAs, and both failed with the same error message. I'd love to see support for these, though. mflux runs so much faster than any other torch-based implementation on Mac, such as Comfy. Such a great project. |
@meetbryce Thanks for posting this issue. As you said, @andyw-0612, there seems to be a format mismatch here. This adapter looks a bit different from the ones tried and tested during development, but it is great to get these examples in quickly so we can fix how the weights are loaded and applied. Together with the similar PR #34, this kind of issue is now top priority. |
Maybe this helps while debugging the issue: I can confirm that LoRAs trained on Flux schnell with LoRA rank/linear 32/32 using the Ostris schnell adapter work. If I should train some others with different settings to test on dev, please just reach out. One problem with some HF LoRAs is that the training settings are undocumented, which makes debugging more difficult. |
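For context, the rank mentioned above is the inner dimension of the LoRA factorization. A minimal sketch of how a rank-r adapter modifies a linear weight (shapes and names here are illustrative, not mflux's actual internals):

```python
import torch

# W' = W + (alpha / r) * (B @ A), with A of shape (r, in_features)
# and B of shape (out_features, r). alpha/r is the usual LoRA scaling.
out_features, in_features, rank, alpha = 8, 8, 4, 32.0
W = torch.zeros(out_features, in_features)   # base weight
A = torch.randn(rank, in_features)           # often named "lora_down" or "lora_A"
B = torch.randn(out_features, rank)          # often named "lora_up" or "lora_B"
scale = alpha / rank
W_merged = W + scale * (B @ A)
```

A rank/alpha of 32/32 as in the comment above gives a scale of 1.0, which is why that pairing is a common training default.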
Thanks for confirming that the Ostris ones work! Since LoRA adapters can come from different sources (e.g. Diffusers, the official Flux repo, etc.), I am thinking it might actually be good to have some kind of "compatibility table" in the README of what works and what does not. Quickly looking at some examples, if we inspect the safetensors file we can see the differences. In the best case, we simply need to remap the names and everything works. I don't know at this stage. It would be nice to have a few examples here to get a sense of the different formats that exist out there:
- flux-RealismLora (not working)
- amateurphoto-v3.5 (not working)
- ostris/yearbook-photo-flux-schnell (working) |
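Inspecting the tensor names is straightforward because a .safetensors file begins with a JSON header; a stdlib-only sketch for comparing a working adapter's key names against a failing one's (the file path is hypothetical):

```python
import json
import struct

def safetensors_keys(path):
    """List the tensor names stored in a .safetensors file.

    The format starts with an unsigned 64-bit little-endian header
    length, followed by a JSON header that maps each tensor name to
    its dtype, shape, and data offsets.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return sorted(k for k in header if k != "__metadata__")

# e.g. safetensors_keys("lora.safetensors")
```

This avoids loading any tensor data, so it is quick even for large adapter files.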
Think this could be helpful. I'm also hitting this problem! https://gist.github.com/Leommm-byte/6b331a1e9bd53271210b26543a7065d6 |
I was able to convert and use the amateurphoto-v3.5 LoRA using the conversion tool from this library: https://github.com/kohya-ss/sd-scripts/tree/sd3?tab=readme-ov-file#convert-flux-lora The XLabs-AI ones seem to be in a strange format. |
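Conceptually, such a conversion is largely a key-renaming pass over the state dict. A minimal sketch (the rename rules below are illustrative examples of common kohya-to-Diffusers renames, not the conversion tool's full mapping; real converters may also reshape or rescale some tensors):

```python
def remap_keys(state_dict, rules):
    """Return a copy of state_dict with each substring rename applied in order."""
    out = {}
    for name, tensor in state_dict.items():
        for old, new in rules:
            name = name.replace(old, new)
        out[name] = tensor
    return out

# illustrative rename rules only
RULES = [
    ("lora_down.weight", "lora_A.weight"),
    ("lora_up.weight", "lora_B.weight"),
]
```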
@fabiovac Very interesting, thanks for sharing!! |
I was able to run the script and do the conversion, but it seems to have trouble saving the file and gives me this runtime error: RuntimeError: |
That is related to the safetensors library: it has issues with tensors that share memory. It is a known issue/design decision (the rationale is not clear to me). Easy hack: edit the safetensors library inside the environment you created for the tool. Assuming a venv, open .venv/lib/python3.xx/site-packages/safetensors/torch.py and edit the _flatten method where it says "if failing:". You can remove the if statement containing the raise, or replace the raise with something like a print. The nicer way: fork the library and make the edit there. |
I got the same error as mentioned above. I tried cloning the weights:
so at the bottom of the script I have
and it seemed to solve the error. This script really does seem to work well for the examples I have tried, so I will try to integrate it into the weight-handling logic soon. |
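The clone workaround can be sketched like this (assuming a PyTorch state dict; names are illustrative): safetensors' save_file refuses state dicts whose tensors share storage, and cloning gives each tensor its own memory.

```python
import torch

w = torch.randn(4, 4)
state = {"a": w, "b": w}  # both entries point at the same storage;
                          # safetensors.torch.save_file would raise here

# clone every tensor so each owns its own memory before saving
state = {k: v.clone().contiguous() for k, v in state.items()}
# now safetensors.torch.save_file(state, "lora.safetensors") succeeds
```

This keeps the values identical while removing the aliasing that triggers the RuntimeError.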
That's what I tried as well, and it worked. It seems to be compatible with a lot of LoRAs out there. |
@filipstrand what about XLabs-AI's format? |
@fabiovac Will look into the XLabs format as well. I have now merged a fix for loading certain LoRA weights by incorporating the script suggested above. I also made a format compatibility table in the README and a new issue where people can post additional suggestions about other formats to consider. I will include these updates in a new 0.2.1 release soon. |
When will we see LoRA fine-tuning support with a local dataset?
Vijay |
I just used a custom LoRA trained on Civitai with the 0.2.1 release, and it works with steps = 20. Thank you for the release! |
@vponukumati Unfortunately, I cannot promise a timeline. It will require some changes under the hood, so it is a bit of a bigger task, but it is the most important new feature we want to support, so it has the highest priority. |
Trying https://huggingface.co/XLabs-AI/flux-RealismLora/tree/main resulted in the following error. I assume this is operator error?