
[enhancement]: More LORA For flux support. #7092

Closed
m4iccc opened this issue Oct 10, 2024 · 6 comments
Labels: enhancement (New feature or request)

m4iccc commented Oct 10, 2024

Is there an existing issue for this?

  • I have searched the existing issues

Contact Details

No response

What should this feature add?

I'm trying to add a LoRA safetensors file that I trained on Civitai using the Fast Flux LoRA training, but Invoke seems to have an issue loading it. Please expand LoRA compatibility for Flux, thank you!

The error log is in the comments below:

Alternatives

No response

Additional Content

No response

m4iccc added the enhancement label on Oct 10, 2024
mrudat commented Oct 12, 2024

Do you get the error "Failed: Unknown LoRA type: "? Is it triggered by, for example, SameFace fix [Flux Lora]? It's only 4.5MB, so it might be a reasonable test case.

m4iccc (Author) commented Oct 12, 2024

It says the following in red letters:
[EDIT: I've replaced the old error log with the actual error this LoRA produces]

```
[2024-10-18 09:27:23,105]::[InvokeAI]::ERROR --> Error while invoking session f92f86cd-4b7e-4d07-b9d8-b2d040cb8753, invocation abd1c755-c963-43af-86da-3325f0649d9d (flux_text_encoder):
[2024-10-18 09:27:23,106]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\services\session_processor\session_processor_default.py", line 129, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 290, in invoke_internal
    output = self.invoke(context)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\flux_text_encoder.py", line 51, in invoke
    clip_embeddings = self._clip_encode(context)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\flux_text_encoder.py", line 100, in _clip_encode
    exit_stack.enter_context(
  File "C:\Users\Cinem\AppData\Local\Programs\Python\Python310\lib\contextlib.py", line 492, in enter_context
    result = _cm_type.__enter__(cm)
  File "C:\Users\Cinem\AppData\Local\Programs\Python\Python310\lib\contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\backend\lora\lora_patcher.py", line 42, in apply_lora_patches
    for patch, patch_weight in patches:
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\flux_text_encoder.py", line 121, in _clip_lora_iterator
    lora_info = context.models.load(lora.lora)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\services\shared\invocation_context.py", line 370, in load
    return self._services.model_manager.load.load_model(model, _submodel_type)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\services\model_load\model_load_default.py", line 70, in load_model
    ).load_model(model_config, submodel_type)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\backend\model_manager\load\load_default.py", line 56, in load_model
    locker = self._load_and_cache(model_config, submodel_type)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\backend\model_manager\load\load_default.py", line 77, in _load_and_cache
    loaded_model = self._load_model(config, submodel_type)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\backend\model_manager\load\model_loaders\lora.py", line 76, in _load_model
    model = lora_model_from_flux_diffusers_state_dict(state_dict=state_dict, alpha=None)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\backend\lora\conversions\flux_diffusers_lora_conversion_utils.py", line 171, in lora_model_from_flux_diffusers_state_dict
    add_qkv_lora_layer_if_present(
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\backend\lora\conversions\flux_diffusers_lora_conversion_utils.py", line 71, in add_qkv_lora_layer_if_present
    assert all(keys_present) or not any(keys_present)
AssertionError
```
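
For context, the assertion fires inside `add_qkv_lora_layer_if_present`, which appears to expect a LoRA to patch an attention block's Q, K and V projections either all together or not at all. The sketch below is illustrative only, with made-up key names, not InvokeAI's actual code; it just shows the kind of all-or-none check involved:

```python
# Illustrative sketch (not InvokeAI's actual implementation). Key names are
# hypothetical. It demonstrates the "all present or none present" check that
# raises the AssertionError shown in the traceback above.
def qkv_keys_present(state_dict: dict, src_keys: list[str]) -> bool:
    keys_present = [key in state_dict for key in src_keys]
    # The converter expects the Q, K and V LoRA weights of an attention block
    # to be patched together; a LoRA that provides only some of them fails here.
    assert all(keys_present) or not any(keys_present)
    return all(keys_present)

# A LoRA state dict that carries only a to_q weight trips the assertion:
partial_lora = {"single_blocks.0.attn.to_q.lora_A.weight": "tensor placeholder"}
try:
    qkv_keys_present(
        partial_lora,
        [
            "single_blocks.0.attn.to_q.lora_A.weight",
            "single_blocks.0.attn.to_k.lora_A.weight",
            "single_blocks.0.attn.to_v.lora_A.weight",
        ],
    )
except AssertionError:
    print("Partial Q/K/V LoRA weights -> AssertionError, as in the log above.")
```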

psychedelicious (Collaborator) commented:

Can you please link to an affected LoRA so we have something to test with?

m4iccc (Author) commented Oct 18, 2024 via email

RyanJDick (Collaborator) commented:

This is a duplicate of #7129. It was fixed in #7313, and will be included in the next release.

Dinairune commented:

Bug? After training a Flux LoRA on my images and creating many pictures successfully, something happened, and now every new picture request no longer uses the training and the pictures no longer look like me. What can I do?
