[Bug]: CUDA: out of memory before even loading any models #524

Open
4 of 6 tasks
XD1674 opened this issue Aug 14, 2024 · 2 comments

Labels
zluda About ZLUDA

Comments

XD1674 commented Aug 14, 2024

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

I wanted to enable ZLUDA, but for some reason it says CUDA: out of memory, even though I used --lowvram and I have 8 GB of VRAM. I searched all over the internet but found nothing. It may be because I tried to set up ZLUDA with ROCm 5.7.1 on an old AMD RX 570, but I'm not sure.

Steps to reproduce the problem

No idea; it apparently works for everyone else.

What should have happened?

It should work normally; I don't understand why I get this error. 24 GB of system RAM should also be plenty (I use Opera GX).

What browsers do you use to access the UI?

Other

Sysinfo

sysinfo-2024-08-14-20-07.json

Console logs

venv "D:\stable diffusion\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
ROCm Toolkit 5.7 was found.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1-amd-2-g395ce8dc
Commit hash: 395ce8dc2cb01282d48074a89a5e6cb3da4b59ab
Using ZLUDA in D:\stable diffusion\stable-diffusion-webui-directml\.zluda
WARNING:xformers:A matching Triton is not available, some optimizations will not be enabled
Traceback (most recent call last):
  File "D:\stable diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\xformers\__init__.py", line 57, in _is_triton_available
    import triton  # noqa
ModuleNotFoundError: No module named 'triton'
D:\stable diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --no-download-sd-model --lowvram --opt-sdp-attention --opt-sub-quad-attention --precision full --no-half
ONNX failed to initialize: Failed to import diffusers.pipelines.auto_pipeline because of the following error (look up to see its traceback):
Failed to import diffusers.pipelines.aura_flow.pipeline_aura_flow because of the following error (look up to see its traceback):
cannot import name 'UMT5EncoderModel' from 'transformers' (D:\stable diffusion\stable-diffusion-webui-directml\venv\lib\site-packages\transformers\__init__.py)
ZLUDA device failed to pass basic operation test: index=None, device_name=Radeon RX 570 Series [ZLUDA]
CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
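
The traceback's own suggestion can be applied before launching. A hedged example for Windows, assuming the webui is started from a cmd window via webui.bat (the fork's usual launcher): setting the variable in the same session makes CUDA errors surface synchronously at the call that actually failed, so the stack trace above would point at the real culprit:

set CUDA_LAUNCH_BLOCKING=1
webui.bat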

Additional information

I have messed around with ROCm a bit, but other than that, nothing else. I have done two clean reinstalls and I still get this error.

lshqqytiger (Owner) commented:

The HIP SDK has a bug with RX 500 series (pre-Navi) cards: it throws an out-of-memory error even when the memory is not full.
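
A minimal way to see this symptom outside the webui (a sketch of mine, assuming PyTorch runs on the ZLUDA-backed CUDA device; this is not the webui's actual startup test):

import torch

# On an affected RX 500 card under HIP SDK + ZLUDA, even this tiny
# allocation can raise "RuntimeError: CUDA error: out of memory",
# no matter how much VRAM is actually free.
x = torch.ones(2, device="cuda")
print(x * 2)  # never reached on an affected setup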

lshqqytiger added the zluda label Aug 15, 2024
CS1o commented Aug 16, 2024

Multiple users fixed this with the following steps:

Go into the stable-diffusion-webui-amdgpu folder and click in the explorer address bar (not the search bar). Type cmd there and hit Enter. Then run these three commands one by one:

venv\scripts\activate
pip uninstall torch torchvision torchaudio -y
pip install torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1 --index-url https://download.pytorch.org/whl/cu118
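
After the reinstall, a quick sanity check can confirm the new build sees the GPU (my hedged addition, not part of the original fix; run it in Python with the venv still activated):

import torch

# Expect "2.2.1+cu118" and True if the cu118 wheels installed correctly
# and ZLUDA is being picked up as the CUDA runtime.
print(torch.__version__)
print(torch.cuda.is_available())
if torch.cuda.is_available():
    # On this setup the name should read like "Radeon RX 570 Series [ZLUDA]".
    print(torch.cuda.get_device_name(0))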
