Feature Request: Expose llama.cpp --no-mmap option #37
Comments
This should be fixed in the latest version, currently v1.3 at the time of writing. Edit: as of version 1.4, mmap is now the default; you can toggle it off with
Thanks for the update! I'm seeing much better memory utilization now, although not quite the same performance improvement I saw in llama.cpp (maybe a generation speedup on the order of 10 ms/token, running a 30B 4-bit LLaMA model on an M1 Max). This is a great option to have in koboldcpp, but I don't think it should be enabled by default: for the majority of users, I don't expect the memory tradeoff to provide a meaningful benefit.
Yeah, it got quite a divided response; some people hate it, others love it. In the end it is a toggle, so everyone can pick whichever option they prefer.
There was a performance regression in earlier versions of llama.cpp that I may be hitting with long-running interactions. This was recently fixed with the addition of a --no-mmap option, which forces the entire model to be loaded into RAM, and I would like to be able to use it with koboldcpp as well.
ggml-org#801
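For context, here is a minimal sketch of what --no-mmap corresponds to at the API level. It assumes a recent llama.cpp C API (llama_model_default_params, the use_mmap field, and llama_load_model_from_file); the exact names at the time of this issue may have differed, and this is an illustration, not koboldcpp's actual loading code.

```c
// Sketch: load a model with mmap disabled, i.e. what --no-mmap requests.
// With use_mmap = true (the default) the OS maps the weights file and pages
// it in lazily; with use_mmap = false the whole model is read into RAM up front.
#include <stdio.h>
#include "llama.h"

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s /path/to/model\n", argv[0]);
        return 1;
    }

    struct llama_model_params mparams = llama_model_default_params();
    mparams.use_mmap = false; // equivalent of passing --no-mmap on the CLI

    struct llama_model * model = llama_load_model_from_file(argv[1], mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // ... create a context and run inference as usual ...

    llama_free_model(model);
    return 0;
}
```

The tradeoff discussed in the thread follows from this: with mmap, pages can be evicted and re-read under memory pressure, which can hurt long-running sessions; with mmap disabled, the weights stay resident at the cost of committing the full model size in RAM.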