Resource issue #9
Comments
Thanks for the typo report — it's fixed. Not sure about the memory usage; I'll look into it!
Thanks. Other programs I use don't seem to have this issue, though I'm not sure why. I use MSTY, which has its own "keep alive" setting for Ollama, and it never seems to impact my Stable Diffusion performance. Perhaps a dedicated setting for this could help.
Alright, I'll do some more testing. It seems to recover after a bit; I'll see if I can figure out why.
Okay, so this appears to only happen when I use the context menu — the "AI commands" feature. If I use the regular LLM interface you made, the issue does not occur afterwards and my performance returns to normal. If I use the AI commands feature, performance is significantly impacted, and the slowdown persists until I either kill Witsy or send a query through the regular LLM interface. To confirm: using the main window does not trigger the issue, and sending a query there makes it recover; using an AI command triggers it. I'll add that the command I'm using is the "insert below" option. With an option that brings up a new popup instead, it doesn't occur.
First off, this seems like a very helpful app, and it has been so far. I was using it with Stable Diffusion to refine prompts, and the text-replacement quick prompt feature is very nice for this. However, I began to notice that after using this feature, my Stable Diffusion speed is halved. That said, I don't see any model loaded into my VRAM, so I don't know the cause.
As soon as I kill the Witsy process, my speed doubles and returns to normal. I've had this issue with several apps/programs I've tried, and I would love to be able to use this one without a hit to speed. I'm using Ollama, and the model doesn't seem to stay in my VRAM (which is what I want), so I'm not sure what causes the slowdown.
I also noticed what I hope is a typo on your website lol:
"Witsy itself does collect any of your information. Not even performance data. Everything Witsy needs is saved on your computer and nowhere else. The models you use may have their own privacy policies, so make sure to check those out."
Witsy itself does collect any of your information.