Resource issue #9

Open
andy8992 opened this issue Nov 1, 2024 · 4 comments
Assignees
Labels
question Further information is requested

Comments


andy8992 commented Nov 1, 2024

First, this seems like a very helpful app, and it has been so far. I was using it with Stable Diffusion to refine prompts; the text-replacement quick prompt feature is very nice for this. However, I began to notice that after using this feature my Stable Diffusion speed is halved. The speed takes a big hit. That said, I don't see any model loaded into my VRAM, so I don't know the cause.

But as soon as I kill the Witsy process, my speed doubles and returns to full. I often have this issue with apps/programs I've tried. I would love to be able to use this one without a hit to speed. I'm using Ollama, and the model doesn't seem to stay in my VRAM, which is what I want, so I'm not sure what causes this slowdown.

I also noticed what I hope is a typo on your website lol

"Witsy itself does collect any of your information. Not even performance data. Everything Witsy needs is saved on your computer and nowhere else. The models you use may have their own privacy policies, so make sure to check those out."

Witsy itself does collect any of your information.

nbonamy (Owner) commented Nov 1, 2024

Thanks for the typo. It is fixed. Not sure about memory usage: will look into it!

@nbonamy nbonamy self-assigned this Nov 1, 2024
@nbonamy nbonamy added the question (Further information is requested) label and removed the bug (Something isn't working) label Nov 1, 2024
andy8992 (Author) commented Nov 3, 2024

Thanks. Other programs I use do not seem to have this issue, though I'm unsure why. I use MSTY, which has its own "keep alive" setting for Ollama, and it never seems to impact my Stable Diffusion performance. Perhaps a dedicated setting for this could help.
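For context on what such a setting would control: Ollama's generate API accepts a `keep_alive` field that determines how long a model stays loaded in VRAM after a request (`0` asks Ollama to unload immediately, freeing VRAM for Stable Diffusion). A minimal sketch of building such a request body — the function name and defaults here are illustrative, not Witsy's actual code:

```python
import json

def build_generate_request(model: str, prompt: str, keep_alive=0) -> str:
    """Build a JSON body for POST http://localhost:11434/api/generate.

    keep_alive controls how long the model stays resident after the
    request: 0 unloads immediately, a duration string like "5m" keeps
    it warm. Keeping it at 0 trades latency for free VRAM.
    """
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,
        "keep_alive": keep_alive,
    })
```

A client that sends this body after each quick-prompt call would release the model between uses instead of holding VRAM while Stable Diffusion runs.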

andy8992 (Author) commented Nov 3, 2024

Alright, I'll do some more testing. It seems to recover after a bit; I'll see if I can figure out why.

andy8992 (Author) commented Nov 3, 2024

Okay, so this appears to happen only when I use the context menu, i.e., the "AI commands" feature.

If I use the regular LLM interface you made, the problem does not occur, and my performance returns to normal after using it.

If I use the AI commands feature, performance is significantly impacted, and it stays that way until I kill Witsy or use the regular LLM interface and send a query.

[image attachment]

Indeed, if I use the window above, the issue does not occur, and the issue recovers after I use it.

[image attachment]

If I use this, the issue occurs.

I will add that the command I'm using is the "insert below" option. With an option that brings up a new popup instead, the issue doesn't occur.

@nbonamy nbonamy changed the title Typo on website | Resource issue Resource issue Nov 5, 2024