System Info

Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.

- transformers version: 4.43.3
- distributed_type: NO
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []

Who can help?

@ArthurZucker

Information

Tasks

- An officially supported task in the examples folder (such as GLUE/SQuAD, ...)

Reproduction

Convert a Llama 7B checkpoint with the conversion script:

    python ../venv/lib/python3.12/site-packages/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir llama-tokenizer/7B --output_dir llama7b --model_size 7B

The script fails with:

    Traceback (most recent call last):
      File "/mloscratch/homes/shcherba/landmark-attention/llama/../venv/lib/python3.12/site-packages/transformers/models/llama/convert_llama_weights_to_hf.py", line 478, in <module>
        main()
      File "/mloscratch/homes/shcherba/landmark-attention/llama/../venv/lib/python3.12/site-packages/transformers/models/llama/convert_llama_weights_to_hf.py", line 452, in main
        args.special_tokens = DEFAULT_LLAMA_SPECIAL_TOKENS[str(args.llama_version)]
                              ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^
    KeyError: '1'

Expected behavior

No errors and a completed conversion, as described in https://huggingface.co/docs/transformers/en/model_doc/llama
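For reference, the linked docs show the converted checkpoint being loaded along these lines (a sketch; "llama7b" here is the --output_dir passed in the command above):

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

# Load the tokenizer and model from the directory written by the
# conversion script (the --output_dir used in the reproduction command).
tokenizer = LlamaTokenizer.from_pretrained("llama7b")
model = LlamaForCausalLM.from_pretrained("llama7b")
```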
Hey! Indeed, we can set the default to [], since this is only used for Llama 3. Would you like to open a PR for the fix? 🤗
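For context, a minimal sketch of what that change could look like in convert_llama_weights_to_hf.py; the dictionary contents below are illustrative placeholders, not the library's actual table:

```python
# Hypothetical stand-in for the table in convert_llama_weights_to_hf.py,
# which only defines special tokens for Llama 3 variants.
DEFAULT_LLAMA_SPECIAL_TOKENS = {
    "3": ["<|begin_of_text|>", "<|end_of_text|>"],  # illustrative values only
}

llama_version = "1"  # what str(args.llama_version) evaluates to here

# Current code: a plain lookup raises KeyError: '1' for Llama 1 checkpoints.
#     special_tokens = DEFAULT_LLAMA_SPECIAL_TOKENS[llama_version]

# Suggested change: fall back to an empty list for versions without an entry.
special_tokens = DEFAULT_LLAMA_SPECIAL_TOKENS.get(llama_version, [])
print(special_tokens)  # -> []
```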
Sure, one moment