KeyError when loading Mistral 7b via Transformers #713
Comments
I can reproduce this. @slundberg is this another aspect of the unicode error you pushed a fix for earlier this week?
I believe that this is at least part of what is going wrong in #716.
Any update on this? I'm also getting the same error, on Google Colab.
I've not heard from @slundberg, who knows most about this. I've just started prodding it again.
As a workaround, @JosephGatto and @lachlancahill, it does appear that if you load …
Since our notebooks feature Mistral, add an example to our test matrix. The Mistral model itself is loaded via llama-cpp. However:
- Due to #713, have to skip loading Mistral via `transformers`
- To avoid running out of disk space on the GitHub runner machines, we have to narrow the testing in `test_transformers.py`
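For reference, here is a minimal sketch of the llama-cpp loading path mentioned above, assuming a GGUF build of Mistral downloaded from the Hugging Face Hub; the repository and filename below are illustrative assumptions, not taken from this thread:

```python
# Sketch: load Mistral through guidance's llama-cpp backend instead of the
# transformers backend that hits the KeyError.
# The GGUF repo and filename are assumptions for illustration only.
from huggingface_hub import hf_hub_download
from guidance import models

gguf_path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",  # assumed GGUF source
    filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",   # assumed quantization
)
lm = models.LlamaCpp(gguf_path)
```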
I get this issue no matter what model I'm using, even if it is not based on Mistral. Any fixes?
After a bit of testing: I switched to the commit where the version number was 0.1.12 and the same error happened; I then switched to the commit where the version number was 0.1.11, and it started working again.
Fix #713 for mistral models in transformers
Running into this issue as well, on a model finetuned from Mistral, loaded directly from PyTorch.
This issue has not been resolved, I'm afraid. I have force-reinstalled from git and still get the same error.
Yes, this should still be open. I'm running into this error too on the latest version.
Running into the same error with the latest version.
The bug
Attempting to load the model yields the following output:
To Reproduce
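The original reproduction snippet is not preserved in this thread; based on the title, a minimal sketch of the kind of call that triggers the error might look like the following (the exact model identifier is an assumption):

```python
# Hypothetical minimal reproduction: loading a Mistral model via the
# transformers backend raises a KeyError on guidance 0.1.13.
from guidance import models

lm = models.Transformers("mistralai/Mistral-7B-v0.1")  # assumed model id
```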
System info (please complete the following information):
- Guidance Version (`guidance.__version__`): 0.1.13