
[bug]: Model installation doesn't proceed after download finishes #5807

Closed
psychedelicious opened this issue Feb 27, 2024 · 2 comments
Labels: 4.0.0, bug (Something isn't working)

@psychedelicious (Collaborator)

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Linux

GPU vendor

Nvidia (CUDA)

GPU model

No response

GPU VRAM

No response

Version number

next

Browser

Firefox

Python dependencies

No response

What happened

When I initiate a model install via the API using a HuggingFace URL or repo id, the model is downloaded but the installation never happens. I tried two safetensors checkpoints and one diffusers model. Here's my invokeai/models/ directory afterwards:

❯ tree tmp* -h
[128K]  tmpinstall_b0tgnn6y
└── [2.0G]  Reliberate_v3.safetensors
[128K]  tmpinstall_omd5n2xh
└── [2.0G]  Reliberate_v2.safetensors
[128K]  tmpinstall_wnoelcl9
└── [128K]  juggernaut-xl-v5
    ├── [ 577]  model_index.json
    ├── [128K]  scheduler
    │   └── [ 474]  scheduler_config.json
    ├── [128K]  text_encoder
    │   ├── [ 560]  config.json
    │   ├── [235M]  model.safetensors
    │   └── [235M]  pytorch_model.bin
    ├── [128K]  text_encoder_2
    │   ├── [ 570]  config.json
    │   ├── [1.3G]  model.safetensors
    │   └── [1.3G]  pytorch_model.bin
    ├── [128K]  tokenizer
    │   ├── [512K]  merges.txt
    │   ├── [ 472]  special_tokens_map.json
    │   ├── [ 737]  tokenizer_config.json
    │   └── [1.0M]  vocab.json
    ├── [128K]  tokenizer_2
    │   ├── [512K]  merges.txt
    │   ├── [ 460]  special_tokens_map.json
    │   ├── [ 725]  tokenizer_config.json
    │   └── [1.0M]  vocab.json
    ├── [128K]  unet
    │   ├── [1.7K]  config.json
    │   └── [4.8G]  diffusion_pytorch_model.safetensors
    └── [128K]  vae
        ├── [ 602]  config.json
        └── [160M]  diffusion_pytorch_model.safetensors

Installing from a local path works fine.
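
For context, the install was kicked off with a plain HTTP request to the model-install route, roughly like the sketch below. The endpoint path, query parameter, and repo id here are assumptions for illustration only; the actual route and signature on the `next` branch may differ (the running server's OpenAPI docs are the source of truth).

```python
import requests

# Hypothetical sketch: trigger a model install from a HuggingFace repo id.
# The route and the "source" query parameter are assumptions; check the
# server's OpenAPI docs (e.g. http://localhost:9090/docs) for the real API.
BASE_URL = "http://localhost:9090"

resp = requests.post(
    f"{BASE_URL}/api/v2/models/install",
    params={"source": "some-author/Reliberate"},  # hypothetical HF repo id or URL
)
resp.raise_for_status()
print(resp.json())  # the install job is queued and the download runs to completion
```

After a call like this, the download completes and the files land in a tmpinstall_* directory as shown above, but no installation/registration step ever follows.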

What you expected to happen

Models installed from a HuggingFace URL or repo id should finish installing after the download completes, instead of being left in the temporary install directory.

How to reproduce the problem

No response

Additional context

No response

Discord username

No response

@psychedelicious (Collaborator, Author)

I'm having trouble wrapping my head around the application flow and figuring out what is going wrong here.

@psychedelicious (Collaborator, Author)

Resolved by #5835
