[BUG] [iopaint] with local_files_only parm, sd model failed to load #423

Closed
huangzhike opened this issue Jan 16, 2024 · 6 comments

Comments

@huangzhike

Model
"Sanster/PowerPaint-V1-stable-diffusion-inpainting"
"kandinsky-community/kandinsky-2-2-decoder-inpaint"
"diffusers/stable-diffusion-xl-1.0-inpainting-0.1"

Describe the bug

Starting the iopaint app with local_files_only=True does not pass the parameter on to the Hugging Face model loading code, which causes the model to fail to load.
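A hypothetical invocation that triggers this (model name taken from the list above; the --local-files-only flag is discussed in the maintainer's reply below):

    iopaint start --model=diffusers/stable-diffusion-xl-1.0-inpainting-0.1 --local-files-only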

Screenshots

(two screenshots attached in the original issue)

System Info
Software version used

  • lama-cleaner:
  • pytorch:
  • CUDA:
@Sanster
Owner

Sanster commented Jan 16, 2024

--local-files-only should be used after the model has already been successfully downloaded; it takes effect through environment variables, so it does not need to be passed to the from_pretrained() function. https://github.com/Sanster/lama-cleaner/blob/316198a97aee5a1429ca7f4ff7ca26314ee089e8/iopaint/cli.py#L154
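A minimal sketch of that mechanism, assuming the standard Hugging Face offline variables (HF_HUB_OFFLINE for huggingface_hub, TRANSFORMERS_OFFLINE for transformers); apply_local_files_only is a hypothetical helper name, and the actual iopaint code is at the link above:

    import os

    def apply_local_files_only(local_files_only: bool) -> None:
        # Both libraries read these variables into module-level
        # constants, so they must be set before huggingface_hub /
        # transformers are first imported.
        if local_files_only:
            os.environ["HF_HUB_OFFLINE"] = "1"
            os.environ["TRANSFORMERS_OFFLINE"] = "1"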

Could you share the complete error log? I tested it locally and if the model has already been downloaded, adding --local-files-only should not cause an error.

@huangzhike
Author

huangzhike commented Jan 16, 2024

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ E:\work_space\lama-cleaner\iopaint\cli.py:169 in start                                           │
│                                                                                                  │
│   166 │   from iopaint.schema import ApiConfig                                                   │
│   167 │                                                                                          │
│   168 │   app = FastAPI()                                                                        │
│ ❱ 169 │   api = Api(                                                                             │
│   170 │   │   app,                                                                               │
│   171 │   │   ApiConfig(                                                                         │
│   172 │   │   │   host=host,                                                                     │
│                                                                                                  │
│ E:\work_space\lama-cleaner\iopaint\api.py:151 in __init__                                        │
│                                                                                                  │
│   148 │   │   api_middleware(self.app)                                                           │
│   149 │   │   self.file_manager = self._build_file_manager()                                     │
│   150 │   │   self.plugins = self._build_plugins()                                               │
│ ❱ 151 │   │   self.model_manager = self._build_model_manager()                                   │
│   152 │   │                                                                                      │
│   153 │   │   # fmt: off                                                                         │
│   154 │   │   self.add_api_route("/api/v1/gen-info", self.api_geninfo, methods=["POST"], respo   │
│                                                                                                  │
│ E:\work_space\lama-cleaner\iopaint\api.py:354 in _build_model_manager                            │
│                                                                                                  │
│   351 │   │   )                                                                                  │
│   352 │                                                                                          │
│   353 │   def _build_model_manager(self):                                                        │
│ ❱ 354 │   │   return ModelManager(                                                               │
│   355 │   │   │   name=self.config.model,                                                        │
│   356 │   │   │   device=torch.device(self.config.device),                                       │
│   357 │   │   │   no_half=self.config.no_half,                                                   │
│                                                                                                  │
│ E:\work_space\lama-cleaner\iopaint\model_manager.py:32 in __init__                               │
│                                                                                                  │
│    29 │   │   ):                                                                                 │
│    30 │   │   │   controlnet_method = self.available_models[name].controlnets[0]                 │
│    31 │   │   self.controlnet_method = controlnet_method                                         │
│ ❱  32 │   │   self.model = self.init_model(name, device, **kwargs)                               │
│    33 │                                                                                          │
│    34 │   @property                                                                              │
│    35 │   def current_model(self) -> ModelInfo:                                                  │
│                                                                                                  │
│ E:\work_space\lama-cleaner\iopaint\model_manager.py:56 in init_model                             │
│                                                                                                  │
│    53 │   │   if model_info.support_controlnet and self.enable_controlnet:                       │
│    54 │   │   │   return ControlNet(device, **kwargs)                                            │
│    55 │   │   elif model_info.name in models:                                                    │
│ ❱  56 │   │   │   return models[name](device, **kwargs)                                          │
│    57 │   │   else:                                                                              │
│    58 │   │   │   if model_info.model_type in [                                                  │
│    59 │   │   │   │   ModelType.DIFFUSERS_SD_INPAINT,                                            │
│                                                                                                  │
│ E:\work_space\lama-cleaner\iopaint\model\base.py:279 in __init__                                 │
│                                                                                                  │
│   276 │   def __init__(self, device, **kwargs):                                                  │
│   277 │   │   self.model_info = kwargs["model_info"]                                             │
│   278 │   │   self.model_id_or_path = self.model_info.path                                       │
│ ❱ 279 │   │   super().__init__(device, **kwargs)                                                 │
│   280 │                                                                                          │
│   281 │   @torch.no_grad()                                                                       │
│   282 │   def __call__(self, image, mask, config: InpaintRequest):                               │
│                                                                                                  │
│ E:\work_space\lama-cleaner\iopaint\model\base.py:35 in __init__                                  │
│                                                                                                  │
│    32 │   │   """                                                                                │
│    33 │   │   device = switch_mps_device(self.name, device)                                      │
│    34 │   │   self.device = device                                                               │
│ ❱  35 │   │   self.init_model(device, **kwargs)                                                  │
│    36 │                                                                                          │
│    37 │   @abc.abstractmethod                                                                    │
│    38 │   def init_model(self, device, **kwargs):                                                │
│                                                                                                  │
│ E:\work_space\lama-cleaner\iopaint\model\sdxl.py:48 in init_model                                │
│                                                                                                  │
│    45 │   │   │   │   │   "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch_dtype               │
│    46 │   │   │   │   )                                                                          │
│    47 │   │   │   │   model_kwargs["vae"] = vae                                                  │
│ ❱  48 │   │   │   self.model = handle_from_pretrained_exceptions(                                │
│    49 │   │   │   │   StableDiffusionXLInpaintPipeline.from_pretrained,                          │
│    50 │   │   │   │   pretrained_model_name_or_path=self.model_id_or_path,                       │
│    51 │   │   │   │   torch_dtype=torch_dtype,                                                   │
│                                                                                                  │
│ E:\work_space\lama-cleaner\iopaint\model\utils.py:994 in handle_from_pretrained_exceptions       │
│                                                                                                  │
│    991 │   │   │   │   "If the model has already been downloaded, you can add --local-files-onl  │
│    992 │   │   │   )                                                                             │
│    993 │   │   │   exit(-1)                                                                      │
│ ❱  994 │   │   raise e                                                                           │
│    995 │   except Exception as e:                                                                │
│    996 │   │   raise e                                                                           │
│    997                                                                                           │
│                                                                                                  │
│ E:\work_space\lama-cleaner\iopaint\model\utils.py:976 in handle_from_pretrained_exceptions       │
│                                                                                                  │
│    973                                                                                           │
│    974 def handle_from_pretrained_exceptions(func, **kwargs):                                    │
│    975 │   try:                                                                                  │
│ ❱  976 │   │   return func(**kwargs)                                                             │
│    977 │   except ValueError as e:                                                               │
│    978 │   │   if "You are trying to load the model files of the `variant=fp16`" in str(e):      │
│    979 │   │   │   logger.info("variant=fp16 not found, try revision=fp16")                      │
│                                                                                                  │
│ E:\work_space\lama-cleaner\venv\lib\site-packages\huggingface_hub\utils\_validators.py:118 in    │
│ _inner_fn                                                                                        │
│                                                                                                  │
│   115 │   │   if check_use_auth_token:                                                           │
│   116 │   │   │   kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=ha   │
│   117 │   │                                                                                      │
│ ❱ 118 │   │   return fn(*args, **kwargs)                                                         │
│   119 │                                                                                          │
│   120 │   return _inner_fn  # type: ignore                                                       │
│   121                                                                                            │
│                                                                                                  │
│ E:\work_space\lama-cleaner\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py:1096 in  │
│ from_pretrained                                                                                  │
│                                                                                                  │
│   1093 │   │   │   │   │   f'The provided pretrained_model_name_or_path "{pretrained_model_name  │
│   1094 │   │   │   │   │   " is neither a valid local path nor a valid repo id. Please check th  │
│   1095 │   │   │   │   )                                                                         │
│ ❱ 1096 │   │   │   cached_folder = cls.download(                                                 │
│   1097 │   │   │   │   pretrained_model_name_or_path,                                            │
│   1098 │   │   │   │   cache_dir=cache_dir,                                                      │
│   1099 │   │   │   │   resume_download=resume_download,                                          │
│                                                                                                  │
│ E:\work_space\lama-cleaner\venv\lib\site-packages\huggingface_hub\utils\_validators.py:118 in    │
│ _inner_fn                                                                                        │
│                                                                                                  │
│   115 │   │   if check_use_auth_token:                                                           │
│   116 │   │   │   kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=ha   │
│   117 │   │                                                                                      │
│ ❱ 118 │   │   return fn(*args, **kwargs)                                                         │
│   119 │                                                                                          │
│   120 │   return _inner_fn  # type: ignore                                                       │
│   121                                                                                            │
│                                                                                                  │
│ E:\work_space\lama-cleaner\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py:1656 in  │
│ download                                                                                         │
│                                                                                                  │
│   1653 │   │   model_info_call_error: Optional[Exception] = None                                 │
│   1654 │   │   if not local_files_only:                                                          │
│   1655 │   │   │   try:                                                                          │
│ ❱ 1656 │   │   │   │   info = model_info(pretrained_model_name, token=token, revision=revision)  │
│   1657 │   │   │   except HTTPError as e:                                                        │
│   1658 │   │   │   │   logger.warn(f"Couldn't connect to the Hub: {e}.\nWill try to load from l  │
│   1659 │   │   │   │   local_files_only = True                                                   │
│                                                                                                  │
│ E:\work_space\lama-cleaner\venv\lib\site-packages\huggingface_hub\utils\_validators.py:118 in    │
│ _inner_fn                                                                                        │
│                                                                                                  │
│   115 │   │   if check_use_auth_token:                                                           │
│   116 │   │   │   kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=ha   │
│   117 │   │                                                                                      │
│ ❱ 118 │   │   return fn(*args, **kwargs)                                                         │
│   119 │                                                                                          │
│   120 │   return _inner_fn  # type: ignore                                                       │
│   121                                                                                            │
│                                                                                                  │
│ E:\work_space\lama-cleaner\venv\lib\site-packages\huggingface_hub\hf_api.py:2084 in model_info   │
│                                                                                                  │
│   2081 │   │   │   params["securityStatus"] = True                                               │
│   2082 │   │   if files_metadata:                                                                │
│   2083 │   │   │   params["blobs"] = True                                                        │
│ ❱ 2084 │   │   r = get_session().get(path, headers=headers, timeout=timeout, params=params)      │
│   2085 │   │   hf_raise_for_status(r)                                                            │
│   2086 │   │   data = r.json()                                                                   │
│   2087 │   │   return ModelInfo(**data)                                                          │
│                                                                                                  │
│ C:\Users\RTX4090\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py: │
│ 600 in get                                                                                       │
│                                                                                                  │
│   597 │   │   """                                                                                │
│   598 │   │                                                                                      │
│   599 │   │   kwargs.setdefault("allow_redirects", True)                                         │
│ ❱ 600 │   │   return self.request("GET", url, **kwargs)                                          │
│   601 │                                                                                          │
│   602 │   def options(self, url, **kwargs):                                                      │
│   603 │   │   r"""Sends a OPTIONS request. Returns :class:`Response` object.                     │
│                                                                                                  │
│ C:\Users\RTX4090\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py: │
│ 587 in request                                                                                   │
│                                                                                                  │
│   584 │   │   │   "allow_redirects": allow_redirects,                                            │
│   585 │   │   }                                                                                  │
│   586 │   │   send_kwargs.update(settings)                                                       │
│ ❱ 587 │   │   resp = self.send(prep, **send_kwargs)                                              │
│   588 │   │                                                                                      │
│   589 │   │   return resp                                                                        │
│   590                                                                                            │
│                                                                                                  │
│ C:\Users\RTX4090\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py: │
│ 701 in send                                                                                      │
│                                                                                                  │
│   698 │   │   start = preferred_clock()                                                          │
│   699 │   │                                                                                      │
│   700 │   │   # Send the request                                                                 │
│ ❱ 701 │   │   r = adapter.send(request, **kwargs)                                                │
│   702 │   │                                                                                      │
│   703 │   │   # Total elapsed time of the request (approximately)                                │
│   704 │   │   elapsed = preferred_clock() - start                                                │
│                                                                                                  │
│ E:\work_space\lama-cleaner\venv\lib\site-packages\huggingface_hub\utils\_http.py:78 in send      │
│                                                                                                  │
│    75                                                                                            │
│    76 class OfflineAdapter(HTTPAdapter):                                                         │
│    77 │   def send(self, request: PreparedRequest, *args, **kwargs) -> Response:                 │
│ ❱  78 │   │   raise OfflineModeIsEnabled(                                                        │
│    79 │   │   │   f"Cannot reach {request.url}: offline mode is enabled. To disable it, please   │
│    80 │   │   )                                                                                  │
│    81                                                                                            │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
OfflineModeIsEnabled: Cannot reach https://huggingface.co/api/models/diffusers/stable-diffusion-xl-1.0-inpainting-0.1: offline mode is enabled. To disable it, please unset the `HF_HUB_OFFLINE` environment variable.

@huangzhike
Author

huangzhike commented Jan 16, 2024

I think it is a network issue: even with local_files_only, the app still sends requests to huggingface.co, which is a problem in China.

@huangzhike
Author

Also, when using stable-diffusion-xl-1.0-inpainting-0.1 with Extender & LCM Lora and local_files_only, an error occurred:

Traceback (most recent call last):
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\anyio\streams\memory.py", line 97, in receive
    return self.receive_nowait()
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\anyio\streams\memory.py", line 92, in receive_nowait
    raise WouldBlock
anyio.WouldBlock

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\starlette\middleware\base.py", line 159, in call_next
    message = await recv_stream.receive()
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\anyio\streams\memory.py", line 112, in receive
    raise EndOfStream
anyio.EndOfStream

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "E:\work_space\lama-cleaner\iopaint\api.py", line 108, in exception_handling
    return await call_next(request)
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\starlette\middleware\base.py", line 165, in call_next
    raise app_exc
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\starlette\middleware\base.py", line 151, in coro
    await self.app(scope, receive_or_disconnect, send_no_error)
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\starlette\middleware\exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\starlette\_exception_handler.py", line 55, in wrapped_app
    raise exc
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\starlette\_exception_handler.py", line 44, in wrapped_app
    await app(scope, receive, sender)
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\starlette\routing.py", line 746, in __call__
    await route.handle(scope, receive, send)
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\starlette\routing.py", line 288, in handle
    await self.app(scope, receive, send)
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\starlette\routing.py", line 75, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\starlette\_exception_handler.py", line 55, in wrapped_app
    raise exc
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\starlette\_exception_handler.py", line 44, in wrapped_app
    await app(scope, receive, sender)
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\starlette\routing.py", line 70, in app
    response = await func(request)
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\fastapi\routing.py", line 299, in app
    raise e
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\fastapi\routing.py", line 294, in app
    raw_response = await run_endpoint_function(
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\fastapi\routing.py", line 193, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\starlette\concurrency.py", line 35, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 2134, in run_sync_in_worker_thread
    return await future
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "E:\work_space\lama-cleaner\iopaint\api.py", line 239, in api_inpaint
    rgb_np_img = self.model_manager(image, mask, req)
  File "C:\Users\RTX4090\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\work_space\lama-cleaner\iopaint\model_manager.py", line 86, in __call__
    self.enable_disable_lcm_lora(config)
  File "E:\work_space\lama-cleaner\iopaint\model_manager.py", line 185, in enable_disable_lcm_lora
    self.model.model.load_lora_weights(self.model.lcm_lora_id)
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\diffusers\loaders\lora.py", line 1441, in load_lora_weights
    state_dict, network_alphas = self.lora_state_dict(
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\diffusers\loaders\lora.py", line 235, in lora_state_dict
    weight_name = cls._best_guess_weight_name(
  File "E:\work_space\lama-cleaner\venv\lib\site-packages\diffusers\loaders\lora.py", line 307, in _best_guess_weight_name
    raise ValueError("When using the offline mode, you must specify a `weight_name`.")
ValueError: When using the offline mode, you must specify a `weight_name`.
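A possible workaround for this second error, sketched under two assumptions (the LCM LoRA repo id and the weight filename below are the diffusers defaults, not taken from the iopaint source): in offline mode diffusers cannot list the repo's files to guess the LoRA weight file, so name it explicitly.

    from diffusers import StableDiffusionXLInpaintPipeline

    pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
        "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
        local_files_only=True,
    )
    # Passing weight_name skips the Hub file listing that raises the
    # ValueError above; verify the filename against the cached repo.
    pipe.load_lora_weights(
        "latent-consistency/lcm-lora-sdxl",
        weight_name="pytorch_lora_weights.safetensors",
    )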

@Sanster
Owner

Sanster commented Jan 16, 2024

> I think it is a network issue: even with local_files_only, the app still sends requests to huggingface.co, which is a problem in China.

Thanks for reporting this; I can reproduce it. It is not a network problem. It turns out to be an issue with the new version of diffusers: huggingface/diffusers#1767 (comment)

I will fix this by passing local_files_only to from_pretrained().
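A minimal sketch of that fix, assuming local_files_only is derived from the same environment variable the CLI already sets (local_files_only is an existing from_pretrained() parameter in diffusers):

    import os
    from diffusers import StableDiffusionXLInpaintPipeline

    # The traceback above shows download() only consults its own
    # local_files_only argument before calling model_info(), so forward
    # the offline setting explicitly instead of relying on HF_HUB_OFFLINE.
    local_files_only = os.environ.get("HF_HUB_OFFLINE", "0") == "1"

    pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
        "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
        local_files_only=local_files_only,
    )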

@Sanster
Owner

Sanster commented Jan 16, 2024

Fixed in 1.0.0b8.
