Cannot load a draft model for Mistral-Large. It seems like the draft model directory is not being recognised. This is different from #177. There is also an ambiguity in the documentation.
This follows the config_sample.yml format, where draft_model is its own top-level section. I've also tried the layout described in the docs, which say it is "a sub-block of models".
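For reference, the two layouts I tried look roughly like this. The key names are my reading of config_sample.yml and the docs, and the draft model name is a placeholder:

```yaml
# Layout 1: draft_model as its own top-level section, per config_sample.yml
model:
  model_dir: /srv/models
  model_name: Panchovix/Mistral-Large-Instruct-2407-4.0bpw-h6-exl2

draft_model:
  draft_model_dir: /srv/models
  draft_model_name: placeholder-draft-model  # placeholder name

---
# Layout 2: as a sub-block of the model section, per the docs
model:
  model_dir: /srv/models
  model_name: Panchovix/Mistral-Large-Instruct-2407-4.0bpw-h6-exl2
  draft_model:
    draft_model_dir: /srv/models
    draft_model_name: placeholder-draft-model  # placeholder name
```

Neither layout results in the draft model being loaded.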
Logs
INFO: ExllamaV2 version: 0.2.2
INFO: Your API key is: XXXXX
INFO: Your admin key is: XXXXX
INFO:
INFO: If these keys get compromised, make sure to delete api_tokens.yml and restart the server. Have fun!
INFO: Generation logging is disabled
WARNING: Draft model is disabled because a model name wasn't provided. Please check your config.yml!
WARNING: The given cache_size (65536) is less than 2 * max_seq_len and may be too small for requests using CFG.
WARNING: Ignore this warning if you do not plan on using CFG.
INFO: Attempting to load a prompt template if present.
INFO: Using template "from_tokenizer_config" for chat completions.
INFO: Loading model: /srv/models/Panchovix/Mistral-Large-Instruct-2407-4.0bpw-h6-exl2
INFO: Loading with tensor parallel
Loading model modules ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 179/179 0:00:00
INFO: Model successfully loaded.
INFO: Developer documentation: http://0.0.0.0:5001/redoc
INFO: Starting OAI API
INFO: Completions: http://0.0.0.0:5001/v1/completions
INFO: Chat completions: http://0.0.0.0:5001/v1/chat/completions
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:5001 (Press CTRL+C to quit)
Additional context
No response
Acknowledgements
I have looked for similar issues before submitting this one.
I have read the disclaimer, and this issue is related to a code bug. If I have a question, I will use the Discord server.
I understand that the developers have lives and my issue will be answered when possible.
I understand the developers of this program are human, and I will ask my questions politely.
OS
Linux
GPU Library
CUDA 12.x
Python version
3.11
Reproduction steps
My config.yml (comments removed):
Looking in the code, this was all changed in tabbyAPI/common/tabby_config.py (line 73 in fb903ec), which is about the time this broke. I also tried a "draft" subsection.
Expected behavior
I'd expect it to load the draft model (as it used to).
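To illustrate the kind of regression I suspect, here is a hypothetical sketch. This is NOT tabbyAPI's actual code; the function name and key names are made up for illustration. If the loader only consults a sub-block nested under the model section, a top-level draft_model section is silently dropped, which would match the "model name wasn't provided" warning in the logs:

```python
# Hypothetical sketch, NOT tabbyAPI's actual code: shows how a config
# loader that only reads draft settings from a sub-block nested under
# "model" silently ignores a top-level "draft_model" section.

def resolve_draft_config(config: dict) -> dict:
    """Return the draft-model settings such a loader would actually see."""
    model_block = config.get("model", {})
    # Only a nested "draft" sub-block is consulted; a top-level
    # "draft_model" key is never read, so its values are lost.
    return model_block.get("draft", {})

# Top-level section (the config_sample.yml layout): dropped entirely.
top_level = {
    "model": {"model_name": "Mistral-Large-Instruct-2407-4.0bpw-h6-exl2"},
    "draft_model": {"draft_model_name": "placeholder-draft-model"},
}
print(resolve_draft_config(top_level))  # -> {} (draft model "not provided")

# Nested sub-block: picked up as expected.
nested = {
    "model": {
        "model_name": "Mistral-Large-Instruct-2407-4.0bpw-h6-exl2",
        "draft": {"draft_model_name": "placeholder-draft-model"},
    }
}
print(resolve_draft_config(nested))  # -> {'draft_model_name': 'placeholder-draft-model'}
```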