load Llama tokenizer #20
I can just use
Are you pointing to a locally downloaded model? I've seen this issue when that's the case.
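The local-vs-hub distinction being asked about can be sketched roughly like this (an illustrative approximation, not `transformers`' actual resolution code): a name that exists as a directory on disk is treated as a local checkout, while anything else is assumed to be a Hub repo id served from the cache.

```python
import os

def resolve_tokenizer_source(name_or_path: str) -> str:
    # Sketch only: a path that exists on disk is a local model directory;
    # anything else is assumed to be a Hugging Face Hub repo id.
    if os.path.isdir(name_or_path):
        return "local"
    return "hub"

print(resolve_tokenizer_source(os.getcwd()))          # an existing directory -> "local"
print(resolve_tokenizer_source("some-org/some-repo")) # not on disk -> "hub"
```

A locally downloaded model therefore takes a different loading path than a cached Hub repo, which is why the two cases can fail differently.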
No, I'm pointing to the HuggingFace repo, but I have it cached locally:

```shell
export HF_DATASETS_CACHE="/workspace/data/huggingface-cache/datasets"
export HUGGINGFACE_HUB_CACHE="/workspace/data/huggingface-cache/hub"
```

The thing is, it was working until I merged the latest pulls. Although there were over 31 commits in just a few days, none of them touched that line.
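For reference, the hub cache location set by `HUGGINGFACE_HUB_CACHE` is resolved roughly like this (a sketch of the environment-variable fallback behavior, not `huggingface_hub`'s exact code):

```python
import os

# Default hub cache when no environment variable overrides it.
default_hub_cache = os.path.join(
    os.path.expanduser("~"), ".cache", "huggingface", "hub"
)
# The environment variable, when exported as above, takes precedence.
hub_cache = os.environ.get("HUGGINGFACE_HUB_CACHE", default_hub_cache)
print(hub_cache)
```

If the variable is unset in the environment the training process actually runs in, downloads land in the default location instead, which can make a previously cached model appear to be missing.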
What are you using as your config for
I pointed In the past, this would work. However, for some reason, it no longer does despite that line not having changed. I have now changed both to point to the same path, which works.
Hello, I'm getting a weird issue loading the tokenizer. I've checked that the line of code hasn't changed, even on my latest pull. The only difference could be that the `transformers` source changed something.
https://github.com/winglian/axolotl/blob/7576d85c735e307fa1dbbcb8e0cba8b53bb1fa48/src/axolotl/utils/models.py#L138-L139