Conversation

@Pfannkuchensack
Collaborator

Summary

Fix Z-Image LoRA/DoRA model detection failing during installation.

Z-Image LoRAs use different key patterns than SD/SDXL LoRAs. The base LoRA_LyCORIS_Config_Base class only checked for key suffixes like lora_A.weight and lora_B.weight, but Z-Image LoRAs (especially those in DoRA format) use:

  • lora_down.weight / lora_up.weight (standard LoRA format)
  • dora_scale (DoRA weight decomposition)

This PR overrides _validate_looks_like_lora in LoRA_LyCORIS_ZImage_Config to recognize Z-Image-specific patterns (a minimal sketch follows the list below):

  • Keys starting with diffusion_model.layers. (Z-Image S3-DiT architecture)
  • Keys ending with lora_down.weight, lora_up.weight, lora_A.weight, lora_B.weight, or dora_scale
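
A minimal sketch of such a check, written here as a standalone helper (the exact method signature and class internals of LoRA_LyCORIS_ZImage_Config are not reproduced; this only illustrates the key-pattern test described above):

```python
Z_IMAGE_LORA_KEY_PREFIX = "diffusion_model.layers."
Z_IMAGE_LORA_KEY_SUFFIXES = (
    "lora_down.weight",
    "lora_up.weight",
    "lora_A.weight",
    "lora_B.weight",
    "dora_scale",
)


def looks_like_z_image_lora(state_dict_keys: list[str]) -> bool:
    """Heuristic check: does any key match the Z-Image S3-DiT LoRA/DoRA pattern?"""
    return any(
        key.startswith(Z_IMAGE_LORA_KEY_PREFIX) and key.endswith(Z_IMAGE_LORA_KEY_SUFFIXES)
        for key in state_dict_keys
    )
```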

Related Issues / Discussions

Fixes installation of Z-Image LoRAs trained with DoRA (Weight-Decomposed Low-Rank Adaptation).

QA Instructions

  1. Download a Z-Image LoRA in DoRA format (e.g., from CivitAI with keys like diffusion_model.layers.X.attention.to_k.lora_down.weight)
  2. Try to install the LoRA via Model Manager
  3. Verify the model is recognized as a Z-Image LoRA and installs successfully
  4. Verify the LoRA can be applied when generating with Z-Image
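
For step 1, one quick way to confirm the key patterns before installing (a sketch assuming the safetensors package is available; the file path is a placeholder):

```python
from safetensors import safe_open

# Placeholder path: point this at the downloaded Z-Image LoRA/DoRA file.
lora_path = "path/to/z_image_dora.safetensors"

with safe_open(lora_path, framework="pt", device="cpu") as f:
    for key in f.keys():
        # Expect keys such as:
        #   diffusion_model.layers.X.attention.to_k.lora_down.weight
        #   diffusion_model.layers.X.attention.to_k.dora_scale
        print(key)
```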

Merge Plan

Standard merge, no special considerations.

Checklist

  • The PR has a short but descriptive title, suitable for a changelog
  • Tests added / updated (if applicable)
  • ❗Changes to a redux slice have a corresponding migration
  • Documentation added / updated (if applicable)
  • Updated What's New copy (if doing a release after this PR)

Override _validate_looks_like_lora in LoRA_LyCORIS_ZImage_Config to
recognize Z-Image specific LoRA formats that use different key patterns
than SD/SDXL LoRAs.

Z-Image LoRAs (including DoRA format) use keys like:
- diffusion_model.layers.X.attention.to_k.lora_down.weight
- diffusion_model.layers.X.attention.to_k.dora_scale

The base LyCORIS config only checked for lora_A.weight/lora_B.weight
suffixes, missing the lora_down.weight/lora_up.weight and dora_scale
patterns used by Z-Image LoRAs.
@github-actions github-actions bot added python PRs that change python files backend PRs that change backend files labels Dec 27, 2025
@Pfannkuchensack
Collaborator Author

@nphSi

nphSi commented Dec 27, 2025

My OneTrainer-trained LoRAs install now but give an error on generation.

[2025-12-27 09:00:29,105]::[InvokeAI]::ERROR --> Error while invoking session 5c876236-38b2-4e92-a340-121b4c29df3b, invocation e84246fe-7d29-43e9-90bf-603e2e641ccb (z_image_text_encoder): Unsupported lora format: dict_keys(['to_k.alpha', 'to_q.alpha', 'to_v.alpha'])
[2025-12-27 09:00:29,105]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "C:\Invoke.venv\Lib\site-packages\invokeai\app\services\session_processor\session_processor_default.py", line 130, in run_node
output = invocation.invoke_internal(context=context, services=self._services)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Invoke.venv\Lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 244, in invoke_internal
output = self.invoke(context)
^^^^^^^^^^^^^^^^^^^^
File "C:\Invoke.venv\Lib\site-packages\torch\utils_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Invoke.venv\Lib\site-packages\invokeai\app\invocations\z_image_text_encoder.py", line 59, in invoke
prompt_embeds = self._encode_prompt(context, max_seq_len=Z_IMAGE_MAX_SEQ_LEN)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Invoke.venv\Lib\site-packages\invokeai\app\invocations\z_image_text_encoder.py", line 83, in _encode_prompt
exit_stack.enter_context(
File "C:\Users\xxx\AppData\Roaming\uv\python\cpython-3.12.9-windows-x86_64-none\Lib\contextlib.py", line 526, in enter_context
result = _enter(cm)
^^^^^^^^^^
File "C:\Users\xxx\AppData\Roaming\uv\python\cpython-3.12.9-windows-x86_64-none\Lib\contextlib.py", line 137, in enter
return next(self.gen)
^^^^^^^^^^^^^^
File "C:\Invoke.venv\Lib\site-packages\invokeai\backend\patches\layer_patcher.py", line 39, in apply_smart_model_patches
for patch, patch_weight in patches:
^^^^^^^
File "C:\Invoke.venv\Lib\site-packages\invokeai\app\invocations\z_image_text_encoder.py", line 190, in _lora_iterator
lora_info = context.models.load(lora.lora)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Invoke.venv\Lib\site-packages\invokeai\app\services\shared\invocation_context.py", line 392, in load
return self._services.model_manager.load.load_model(model, submodel_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Invoke.venv\Lib\site-packages\invokeai\app\services\model_load\model_load_default.py", line 71, in load_model
).load_model(model_config, submodel_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Invoke.venv\Lib\site-packages\invokeai\backend\model_manager\load\load_default.py", line 57, in load_model
cache_record = self._load_and_cache(model_config, submodel_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Invoke.venv\Lib\site-packages\invokeai\backend\model_manager\load\load_default.py", line 78, in _load_and_cache
loaded_model = self._load_model(config, submodel_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Invoke.venv\Lib\site-packages\invokeai\backend\model_manager\load\model_loaders\lora.py", line 137, in _load_model
model = lora_model_from_z_image_state_dict(state_dict=state_dict, alpha=None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Invoke.venv\Lib\site-packages\invokeai\backend\patches\lora_conversions\z_image_lora_conversion_utils.py", line 117, in lora_model_from_z_image_state_dict
layer = any_lora_layer_from_state_dict(values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Invoke.venv\Lib\site-packages\invokeai\backend\patches\layers\utils.py", line 35, in any_lora_layer_from_state_dict
raise ValueError(f"Unsupported lora format: {state_dict.keys()}")
ValueError: Unsupported lora format: dict_keys(['to_k.alpha', 'to_q.alpha', 'to_v.alpha'])

Lora:
https://huggingface.co/nphSi/Z-Image-Lora/blob/main/ZIdT%20Ashley%20Williams%20(vrtlAshleyWilliams).safetensors

Two fixes for Z-Image LoRA support:

1. Override _validate_looks_like_lora in LoRA_LyCORIS_ZImage_Config to
   recognize Z-Image specific LoRA formats that use different key patterns
   than SD/SDXL LoRAs. Z-Image LoRAs use lora_down.weight/lora_up.weight
   and dora_scale suffixes instead of lora_A.weight/lora_B.weight.

2. Fix _group_by_layer in z_image_lora_conversion_utils.py to correctly
   group LoRA keys by layer name. The previous logic used rsplit with
   maxsplit=2, which split keys for the same layer inconsistently, e.g.:
   - "diffusion_model.layers.17.attention.to_k.alpha"
     -> layer "diffusion_model.layers.17.attention" (remainder "to_k.alpha")
   - "diffusion_model.layers.17.attention.to_k.lora_down.weight"
     -> layer "diffusion_model.layers.17.attention.to_k" (remainder "lora_down.weight")

   Now uses suffix matching so that all keys for a layer (alpha, dora_scale,
   lora_down.weight, lora_up.weight) are grouped together; a sketch of this
   grouping follows below.
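
A standalone sketch of the suffix-matching grouping described in fix 2 (illustrative only; the suffix list, function name, and return shape are assumptions, not the exact _group_by_layer code):

```python
from collections import defaultdict

# Suffixes that belong to a single LoRA/DoRA layer; everything before the
# matching suffix is treated as the layer name.
LAYER_KEY_SUFFIXES = (
    "alpha",
    "dora_scale",
    "lora_down.weight",
    "lora_up.weight",
    "lora_A.weight",
    "lora_B.weight",
)


def group_by_layer(state_dict: dict) -> dict[str, dict[str, object]]:
    grouped: dict[str, dict[str, object]] = defaultdict(dict)
    for key, value in state_dict.items():
        for suffix in LAYER_KEY_SUFFIXES:
            if key.endswith("." + suffix):
                layer_name = key[: -(len(suffix) + 1)]  # strip ".<suffix>"
                grouped[layer_name][suffix] = value
                break
    return dict(grouped)
```

With this grouping, "diffusion_model.layers.17.attention.to_k.alpha" and
"diffusion_model.layers.17.attention.to_k.lora_down.weight" both land under
the same layer name, "diffusion_model.layers.17.attention.to_k".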
@Pfannkuchensack
Collaborator Author

Can you try again with the changes I just made? @nphSi

@nphSi

nphSi commented Dec 27, 2025

Can you try again with the changes I just made? @nphSi

That was fast 👍 - Works now. Thanks!!

@Pfannkuchensack Pfannkuchensack marked this pull request as ready for review December 27, 2025 08:38
@blessedcoolant blessedcoolant merged commit d42bf9c into invoke-ai:main Dec 27, 2025
25 checks passed
@Pfannkuchensack Pfannkuchensack deleted the fix/z-image-lora-dora-detection branch December 27, 2025 23:42