Told to send the following log:
shark_tank local cache is located at C:\Users\jonbr\.local/shark_tank/ . You may change this by setting the --local_tank_cache= flag
vulkan devices are available.
cuda devices are not available.
Running on local URL: http://0.0.0.0:8080
To create a public link, set share=True in launch().
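(Aside, not part of the log: a minimal Gradio sketch of what that hint refers to. The `echo` handler is a made-up stand-in for the actual SHARK txt2img pipeline; only `share=True` and the server arguments are the point.)

```python
# Minimal sketch only; `echo` stands in for the real txt2img handler.
import gradio as gr

def echo(prompt):
    return prompt

demo = gr.Interface(fn=echo, inputs="text", outputs="text")
# share=True tunnels the local server through a temporary public URL,
# which is what the "To create a public link" hint above is suggesting.
demo.launch(server_name="0.0.0.0", server_port=8080, share=True)
```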
Found device AMD Radeon RX 580 2048SP. Using target triple rdna2-unknown-windows.
Using tuned models for stabilityai/stable-diffusion-2-1-base/fp16/vulkan://00000000-0100-0000-0000-000000000000.
torch\jit\_check.py:172: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in __init__. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in torch.jit.Attribute.
warnings.warn("The TorchScript type system doesn't support "
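(Aside, not part of the log: a hedged sketch of the two remedies that warning names. The class and attribute names below are hypothetical; the pattern follows the torch.jit.Attribute usage documented by PyTorch.)

```python
# Illustration only; class and attribute names are made up.
from typing import List
import torch

class WithAnnotation(torch.nn.Module):
    # remedy 1: declare the empty attribute's type in the class body
    names: List[str]

    def __init__(self):
        super().__init__()
        self.names = []

class WithAttribute(torch.jit.ScriptModule):
    def __init__(self):
        super().__init__()
        # remedy 2: wrap the empty value in torch.jit.Attribute so it carries its type
        self.ids = torch.jit.Attribute([], List[int])
```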
loading existing vmfb from: C:\Users\jonbr\Downloads\euler_scale_model_input_1_512_512fp16.vmfb
WARNING: [Loader Message] Code 0 : windows_read_data_files_in_registry: Registry lookup failed to get layer manifest files.
loading existing vmfb from: C:\Users\jonbr\Downloads\euler_step_1_512_512fp16.vmfb
WARNING: [Loader Message] Code 0 : windows_read_data_files_in_registry: Registry lookup failed to get layer manifest files.
Inferring base model configuration.
safetensors\torch.py:99: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
torch_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
torch\storage.py:899: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
storage = cls(wrap_storage=untyped_storage)
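(Aside, not part of the log: a short sketch of the API change those TypedStorage deprecation warnings point at, using plain PyTorch rather than anything SHARK-specific.)

```python
# Illustration only: the deprecated storage access vs. the replacement the warning recommends.
import torch

t = torch.randn(2, 3)
legacy = t.storage()             # TypedStorage path; this is what emits the warning
preferred = t.untyped_storage()  # recommended replacement per the warning text
print(type(legacy).__name__, type(preferred).__name__)
```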
Loading Winograd config file from C:\Users\jonbr\.local/shark_tank/configs\unet_winograd_vulkan.json
Retrying with a different base model configuration
Retrying with a different base model configuration
Retrying with a different base model configuration
Retrying with a different base model configuration
Traceback (most recent call last):
File "gradio\routes.py", line 384, in run_predict
File "gradio\blocks.py", line 1032, in process_api
File "gradio\blocks.py", line 858, in call_function
File "anyio\to_thread.py", line 31, in run_sync
File "anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
File "anyio_backends_asyncio.py", line 867, in run
File "gradio\utils.py", line 448, in async_iteration
File "apps\stable_diffusion\scripts\txt2img.py", line 122, in txt2img_inf
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 355, in from_pretrained
File "apps\stable_diffusion\src\models\model_wrappers.py", line 541, in call
SystemExit: Cannot compile the model. Please create an issue with the detailed log at https://github.com/nod-ai/SHARK/issues