
[Bug]: WebUI not working on intel iGPU uhd graphics using directml #582

Open
akaGik-jit opened this issue Feb 3, 2025 · 2 comments


Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

Clicking on Generate produces an error.

Steps to reproduce the problem

Do a fresh install, launch with the DirectML argument, and click Generate.

What should have happened?

An image should have been generated.

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

sysinfo-2025-02-03-02-09.json

Console logs

InvokeAI cross-attention: https://pastebin.com/9uD2pB6p

V1 cross-attention: https://pastebin.com/qL9TxpCB

Additional information

Updated to recent Intel and Nvidia drivers.
Intel UHD Graphics 620, Nvidia MX130.
Normally I use the Nvidia GPU on my laptop, but since it only has 2 GB of VRAM, it is very slow with SD WebUI Forge because it constantly has to offload. I tried using DirectML on the Intel iGPU and it produces an error. I don't know much about programming, so I can't solve it on my own. I tried every cross-attention option and none of them work.
I am trying the iGPU because I have more RAM, so the model doesn't have to offload, and maybe it will be faster than my laptop's Nvidia GPU.
Thank you


fifskank commented Feb 3, 2025

I've done some research and talked to the dev.

First off, don't use InvokeAI cross-attention on an iGPU; use Doggettx instead. Then edit requirements.txt and change transformers to the correct version (transformers==4.25.1), edit webui-user.bat, and add --use-directml to the command-line arguments. Let me know if it works.

If it somehow loads, go to Settings → Optimizations and change cross-attention to Doggettx.
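
For reference, the two file edits described above would look roughly like this (a sketch, assuming a stock webui-user.bat where COMMANDLINE_ARGS is the variable the launcher reads; check your own files before editing):

```bat
rem In requirements.txt, change the transformers line to:
rem     transformers==4.25.1

rem In webui-user.bat, add the DirectML flag to the launch arguments:
set COMMANDLINE_ARGS=--use-directml

call webui.bat
```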


akaGik-jit commented Feb 6, 2025

@fifskank
Doggettx doesn't work either.

https://pastebin.com/ecAJ0PrG

Edit: I deleted everything, disabled the Nvidia GPU, and reinstalled everything. I launched with "--skip-torch-cuda-test --use-directml --device-id 0", changed the transformers version, switched to Doggettx, and it worked, but only for 3 steps; after that, out of memory. There is a huge memory leak where it loads the models again. This is especially visible when I press Generate and there is an error: it loads the model again.

https://pastebin.com/BZqs81JU
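
For anyone trying to reproduce the setup that eventually worked, the edit above corresponds roughly to this webui-user.bat (a sketch; --device-id 0 ends up selecting the iGPU here only because the Nvidia GPU was disabled in Device Manager first):

```bat
rem webui-user.bat — sketch of the launch configuration described above
set COMMANDLINE_ARGS=--skip-torch-cuda-test --use-directml --device-id 0
call webui.bat
```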

EDIT 2: So, I tried a few more times and it worked, but very slowly (4 times slower than the Nvidia GPU), using close to 11.4 GB of VRAM and 18.1 GB of RAM.

Doggettx works but is very slow. I will try to see if something makes it faster; any suggestions would help.

ONNX still doesn't work. ONNX failed to initialize: module optimum.onnxruntime has no attribute ORTStableDiffusionXLPipeline
