
Tried the "official" flux-RealismLora and met with missing transformer error #40

Open
meetbryce opened this issue Sep 8, 2024 · 16 comments
Labels
invalid This doesn't seem right

Comments

@meetbryce

Trying https://huggingface.co/XLabs-AI/flux-RealismLora/tree/main resulted in the following error. I assume this is operator error?

```
mflux-generate --model dev --prompt "Luxury food photograph of steak and lobster" --lora-paths "lora.safetensors" --steps 25 -q 8
Traceback (most recent call last):
  File "/Users/york/miniforge3/bin/mflux-generate", line 8, in <module>
    sys.exit(main())
  File "/Users/york/miniforge3/lib/python3.10/site-packages/mflux/generate.py", line 35, in main
    flux = Flux1(
  File "/Users/york/miniforge3/lib/python3.10/site-packages/mflux/flux/flux.py", line 49, in __init__
    weights = WeightHandler(
  File "/Users/york/miniforge3/lib/python3.10/site-packages/mflux/weights/weight_handler.py", line 28, in __init__
    LoraUtil.apply_loras(self.transformer, lora_paths, lora_scales)
  File "/Users/york/miniforge3/lib/python3.10/site-packages/mflux/weights/lora_util.py", line 15, in apply_loras
    LoraUtil._apply_lora(transformer, lora_file, lora_scale)
  File "/Users/york/miniforge3/lib/python3.10/site-packages/mflux/weights/lora_util.py", line 35, in _apply_lora
    lora_transformer, _ = WeightHandler.load_transformer(lora_path=lora_file)
  File "/Users/york/miniforge3/lib/python3.10/site-packages/mflux/weights/weight_handler.py", line 68, in load_transformer
    raise Exception("The key `transformer` is missing in the LoRA safetensors file. Please ensure that the file is correctly formatted and contains the expected keys.")
Exception: The key `transformer` is missing in the LoRA safetensors file. Please ensure that the file is correctly formatted and contains the expected keys.
```
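For context on the error above: the loader is complaining that no tensor name in the file starts with the `transformer.` prefix it expects. A hedged, stdlib-only sketch of how to inspect the key names in a `.safetensors` file (the format is an 8-byte little-endian header length followed by a JSON header whose keys are the tensor names; the key used below is made up for illustration, not taken from the actual file):

```python
# Hedged sketch: list tensor names in a .safetensors blob using only the
# stdlib, by parsing the 8-byte length prefix and JSON header that start
# the format. Built here against an in-memory blob with one dummy key.
import json
import struct

def safetensors_keys(blob: bytes) -> list:
    header_len = struct.unpack("<Q", blob[:8])[0]
    header = json.loads(blob[8:8 + header_len].decode("utf-8"))
    return [k for k in header if k != "__metadata__"]

# Minimal in-memory blob with a single dummy tensor entry (illustrative name).
header = json.dumps({
    "double_blocks.0.processor.qkv_lora1.down.weight":
        {"dtype": "F32", "shape": [1], "data_offsets": [0, 4]},
}).encode("utf-8")
blob = struct.pack("<Q", len(header)) + header + b"\x00" * 4

keys = safetensors_keys(blob)
print(keys)
print(any(k.startswith("transformer.") for k in keys))  # → False, the condition that triggers the error
```

Running the same kind of check against a real LoRA file would show whether its keys carry the expected prefix.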
@andyw-0612

I'm experiencing a similar issue; I suspect the LoRA isn't in a format compatible with the mflux implementation. I tried the phlux and Amateur Photography LoRAs, and both failed with the same error message. I would love to see support for these, though: mflux runs so much faster than any torch-based implementation on Mac, such as Comfy. Such a great project.

@filipstrand
Owner

@meetbryce Thanks for posting this issue.

As you said, @andyw-0612, there seems to be a format mismatch here. This adapter looks a bit different from the ones tried and tested during development, but it is great to get these examples in quickly so we can fix how the weights are loaded and applied. Along with the similar PR #34, this kind of issue is now top priority.

@sandrop8

sandrop8 commented Sep 8, 2024

Maybe this helps while debugging the issue: I can confirm that LoRAs trained on Flux SCHNELL with LoRA rank / linear 32/32 and the Ostris SCHNELL adapter work.

If I should train some others with different settings to test on Dev, please just reach out. The problem with some HF LoRAs is that the training settings are undocumented, which can make debugging more difficult.

[attached screenshot]

@filipstrand
Owner

Thanks for confirming that the Ostris ones work! Since LoRA adapters can come from different sources (e.g. Diffusers, the official Flux repo, etc.), I am thinking it might actually be good to have some kind of "compatibility table" in the README of what works and what does not.

Quickly looking at some examples: if we inspect the safetensors files, we can see the differences. In the best case, we simply need to remap the names and everything works; I don't know at this stage. It would be nice to have a few examples here to get a sense of the different formats that exist out there:

flux-RealismLora (not working)
[screenshot of the file's key names]

amateurphoto-v3.5 (not working)
[screenshot of the file's key names]

ostris/yearbook-photo-flux-schnell (working):
[screenshot of the file's key names]
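If remapping the names really is enough, the idea could look something like the sketch below. The prefix mapping shown is made up for illustration; it is not the real correspondence between any of these formats:

```python
# Hypothetical key-remap sketch: rename LoRA state-dict keys from one naming
# convention to another via a prefix table. The source prefix here is invented.
PREFIX_MAP = {
    "lora_transformer_": "transformer.",  # made-up source prefix -> expected prefix
}

def remap_keys(state_dict: dict) -> dict:
    remapped = {}
    for key, value in state_dict.items():
        new_key = key
        for old, new in PREFIX_MAP.items():
            if new_key.startswith(old):
                new_key = new + new_key[len(old):]
        remapped[new_key] = value
    return remapped

print(remap_keys({"lora_transformer_blocks.0.weight": 1}))
# → {'transformer.blocks.0.weight': 1}
```

The hard part in practice would be discovering the true mapping for each format, not applying it.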

@fabiovac

I was able to convert and use the amateurphoto-v3.5 LoRAs using the conversion tool from this library: https://github.com/kohya-ss/sd-scripts/tree/sd3?tab=readme-ov-file#convert-flux-lora

The XLabs-AI ones seem to be in a strange format.

@filipstrand
Owner

@fabiovac Very interesting, thanks for sharing!!

@andyw-0612

> I was able to convert and use the amateurphoto-v3.5 LoRAs using the conversion tool from this library: https://github.com/kohya-ss/sd-scripts/tree/sd3?tab=readme-ov-file#convert-flux-lora
>
> The XLabs-AI ones seem to be in a strange format.

I was able to run the script and do the conversion, but it seems to have trouble saving the file and gives me this runtime error:

RuntimeError:
Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'transformer.transformer_blocks.0.attn.to_v.lora_A.weight', 'transformer.transformer_blocks.0.attn.to_k.lora_A.weight', 'transformer.transformer_blocks.0.attn.to_q.lora_A.weight'}, {'transformer.transformer_blocks.0.attn.add_k_proj.lora_A.weight', 'transformer.transformer_blocks.0.attn.add_v_proj.lora_A.weight', 'transformer.transformer_blocks.0.attn.add_q_proj.lora_A.weight'}, {'transformer.transformer_blocks.1.attn.to_v.lora_A.weight', 'transformer.transformer_blocks.1.attn.to_q.lora_A.weight', 'transformer.transformer_blocks.1.attn.to_k.lora_A.weight'}, {'transformer.transformer_blocks.1.attn.add_v_proj.l...

@fabiovac

> I was able to convert and use the amateurphoto-v3.5 LoRAs using the conversion tool from this library: https://github.com/kohya-ss/sd-scripts/tree/sd3?tab=readme-ov-file#convert-flux-lora
>
> The XLabs-AI ones seem to be in a strange format.

> I was able to run the script and do the conversion, but it seems to have trouble saving the file and gives me this runtime error:
>
> RuntimeError: Some tensors share memory, this will lead to duplicate memory on disk and potential differences when loading them again: [{'transformer.transformer_blocks.0.attn.to_v.lora_A.weight', 'transformer.transformer_blocks.0.attn.to_k.lora_A.weight', 'transformer.transformer_blocks.0.attn.to_q.lora_A.weight'}, {'transformer.transformer_blocks.0.attn.add_k_proj.lora_A.weight', 'transformer.transformer_blocks.0.attn.add_v_proj.lora_A.weight', 'transformer.transformer_blocks.0.attn.add_q_proj.lora_A.weight'}, {'transformer.transformer_blocks.1.attn.to_v.lora_A.weight', 'transformer.transformer_blocks.1.attn.to_q.lora_A.weight', 'transformer.transformer_blocks.1.attn.to_k.lora_A.weight'}, {'transformer.transformer_blocks.1.attn.add_v_proj.l...

That is related to the safetensors library: it has issues with tensors that share memory. It is a known issue/design decision (the rationale is not clear to me).

Easy hack: in the env you created to install the library, edit the safetensors library itself. Assuming you have a venv, open .venv/lib/python3.xx/site-packages/safetensors/torch.py and edit the _flatten method where it says "if failing:". You can remove the if statement containing the raise, or replace the raise with something like a print.

Nicer way: fork the library and make the edit there.

@filipstrand
Owner

I got the same error as mentioned above. I tried cloning the weights:

```python
state_dict = {k: v.detach().clone() for k, v in state_dict.items()}
```

so at the bottom of the script I have

```python
logger.info(f"Saving destination file {dst_path}")
state_dict = {k: v.detach().clone() for k, v in state_dict.items()}
save_file(state_dict, dst_path, metadata=metadata)
```

and it seemed to have solved the error.

This script really does seem to work well for the examples I have tried, so I will try to integrate it into the weight-handling logic soon.

@andyw-0612

That's what I tried as well, and it worked. It seems to be compatible with a lot of the LoRAs out there.

@fabiovac

@filipstrand what about XLabs-AI's format?

@filipstrand
Owner

@fabiovac Will look into the XLabs format as well.

I have now merged a fix for loading certain LoRA weights by incorporating the script suggested above. I also added a format compatibility table to the README and opened a new issue where people can post suggestions about other formats to consider.

I will include these updates in a new 0.2.1 release soon.

@vponukumati

vponukumati commented Sep 13, 2024 via email

@kasnol

kasnol commented Sep 15, 2024

I just used a custom LoRA trained via Civitai with 0.2.1, and it works with steps = 20. Thank you for the release!
(The previous releases < 0.2.0 threw several errors when I tried to generate an image with a custom LoRA.)

@filipstrand
Owner

@vponukumati Unfortunately, I cannot promise a timeline. It will require some changes under the hood, so it is a bit of a bigger task, but it is the most important new feature we want to support, so it has the highest priority.

@filipstrand filipstrand added the invalid This doesn't seem right label Dec 27, 2024
7 participants