Can't load BnB models #1513
Comments
Could you share more details about the error, please? Were you also the one who posted a bnb issue on Discord?
Any new updates on this error? I have a similar issue.
Could you let me know what else you are looking for?
Could someone post logs of the issue? Is it due to the quant_config check?
Alright, got it! I will post the logs later today.
@NanoCode012 So if my model is already BnB-quantized, I have no idea how I can fine-tune it with axolotl.
@Blaizzy what was your fix?
I used a full-precision model and set `load_in_4bit: true`. Example:

```yaml
base_model: meta/llama-7b-hf
load_in_4bit: true
```

Whereas what I actually wanted was to load a prequantized model:

```yaml
base_model: meta/llama-7b-hf-4bit
```
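For anyone landing here: a quick way to tell whether a checkpoint is already BnB-quantized (as opposed to full precision) is to look for a `quantization_config` entry in its config. A minimal sketch, using the hypothetical repo id from the comment above:

```python
# Minimal sketch: detect whether a checkpoint ships prequantized.
# "meta/llama-7b-hf-4bit" is the hypothetical repo id from the comment above.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("meta/llama-7b-hf-4bit")
quant = getattr(config, "quantization_config", None)
if quant is None:
    print("Full-precision checkpoint; `load_in_4bit: true` should work.")
else:
    # For BnB checkpoints this typically includes {"quant_method": "bitsandbytes", ...}
    print(f"Prequantized checkpoint: {quant}")
```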
Thanks! +1, I'd like to do the same (would be a nice addition).
Please check that this issue hasn't been reported before.
Expected Behavior
I want to load a BnB-quantized model.
Current behaviour
It throws a ValueError instead.
Steps to reproduce
Launch training with the config yaml below.
Config yaml
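The config yaml was not preserved in this thread; based on the comments above, a hypothetical minimal repro (the repo id is illustrative) would be:

```yaml
# Hypothetical repro, reconstructed from the comments above; repo id is illustrative.
base_model: meta/llama-7b-hf-4bit  # checkpoint that already carries a BnB quantization_config
load_in_4bit: true
```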
Possible solution
Extend or remove the hard-coded GPTQ check introduced in #913.
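A minimal sketch of what a relaxed check might look like (function and variable names here are illustrative, not axolotl's actual internals): instead of rejecting every checkpoint whose config carries a non-GPTQ `quantization_config`, also recognize bitsandbytes.

```python
# Illustrative sketch only; names do not match axolotl's real validation code.
from transformers import AutoConfig

def check_quantization(base_model: str, gptq_enabled: bool) -> None:
    config = AutoConfig.from_pretrained(base_model)
    quant = getattr(config, "quantization_config", None)
    if quant is None:
        return  # full-precision checkpoint: nothing to validate

    # quantization_config may be a plain dict (from config.json) or a config object
    method = quant.get("quant_method") if isinstance(quant, dict) else getattr(quant, "quant_method", None)

    if method == "gptq" and not gptq_enabled:
        raise ValueError("Checkpoint is GPTQ-quantized; set `gptq: true` in the config.")
    if method == "bitsandbytes":
        return  # previously this path raised; allow prequantized BnB checkpoints
    if method is not None and method != "gptq":
        raise ValueError(f"Unsupported quantization scheme: {method}")
```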
Which Operating Systems are you using?
Python Version
3.10
axolotl branch-commit
main
Acknowledgements