convert : fix vocab size when not defined in hparams #3421

Merged 1 commit into ggerganov:master on Oct 2, 2023

Conversation

cebtenzzre (Collaborator)

If vocab_size is somehow missing from config.json, or, as in the previous GPT-NeoX script, is ignored entirely, we can end up with vocab_size less than len(reverse_vocab), even though the purpose of vocab_size is to enlarge the vocabulary with padding tokens (a sketch of this padding follows below).

Use len(tokenizer.vocab) instead of attempting to interpret JSON directly, to account for added tokens. Also, add the missing hparams check to the GPT-NeoX script.

With this change, GPT-NeoX now attempts to use added tokens, though it fails for the reasons described in PR #3405. Before this change, it wasn't even trying.

cc @goerch (yes, I know this conflicts with your PR)
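
For context, here is a minimal sketch of the padding logic being discussed; build_token_list is a hypothetical helper, and the real convert scripts differ in the details:

def build_token_list(tokenizer, vocab_size):
    # Map token id -> token string from the HF tokenizer (includes added tokens).
    reverse_vocab = {token_id: token for token, token_id in tokenizer.vocab.items()}
    tokens = []
    for i in range(vocab_size):
        if i in reverse_vocab:
            tokens.append(reverse_vocab[i])
        else:
            # When vocab_size > len(reverse_vocab), the gap is filled with dummy
            # padding tokens. When vocab_size < len(reverse_vocab) (the situation
            # this PR fixes), real tokens past the limit would simply be dropped.
            tokens.append(f"[PAD{i}]")
    return tokens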

# ref: https://github.com/cmp-nct/ggllm.cpp/blob/master/falcon_convert.py
tokenizer = AutoTokenizer.from_pretrained(dir_model)

# The number of tokens in tokenizer.json can differ from the expected vocab size.
# This causes downstream issues with mismatched tensor sizes when running inference
vocab_size = hparams.get("vocab_size", len(tokenizer.vocab))
cebtenzzre (Collaborator, Author)

Using len(tokenizer.vocab) is not ideal; we really should use vocab_size from AutoConfig, which has an architecture-specific default. Although, this whole thing is a hack to work around downstream issues, right? I would much rather just store the vocab size in the GGUF and avoid this whole padding-tokens mess.
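
For reference, a rough sketch of the AutoConfig alternative mentioned above, assuming dir_model points at the HF model directory (this is not what the PR does):

from transformers import AutoConfig

# AutoConfig applies architecture-specific defaults, so vocab_size should be
# populated even when config.json omits it (true for most text-model configs).
config = AutoConfig.from_pretrained(dir_model)
vocab_size = config.vocab_size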

cebtenzzre marked this pull request as draft on October 1, 2023 04:02
ggerganov (Owner) left a comment

I guess the change is fine, although I don't fully understand all the intricacies of the vocab size. As long as this does not break conversion of the standard Falcon and Starcoder models, it should be OK to merge.

cebtenzzre marked this pull request as ready for review on October 2, 2023 22:00
cebtenzzre (Collaborator, Author)

I wasn't able to use the Falcon convert script as-is, because the architecture was renamed from RWForCausalLM to FalconForCausalLM. I'll make a PR for that.
This won't affect standard Falcon or Starcoder because they specify vocab_size in config.json, which takes priority (a toy illustration follows below).
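
A toy illustration of that priority, with made-up numbers; when config.json provides vocab_size, the tokenizer length is never consulted:

tokenizer_len = 65040  # stand-in for len(tokenizer.vocab)

hparams = {"vocab_size": 65024}                  # key present in config.json
print(hparams.get("vocab_size", tokenizer_len))  # -> 65024 (config wins)

hparams = {}                                     # key missing from config.json
print(hparams.get("vocab_size", tokenizer_len))  # -> 65040 (tokenizer fallback)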

cebtenzzre merged commit 1c84003 into ggerganov:master on Oct 2, 2023
9 checks passed
joelkuiper added a commit to vortext/llama.cpp that referenced this pull request Oct 5, 2023
…example

* 'master' of github.com:ggerganov/llama.cpp: (24 commits)
  convert : fix Baichuan2 models by using vocab size in config.json (ggerganov#3299)
  readme : add project status link
  ggml : fix build after ggerganov#3329
  llm : add Refact model (ggerganov#3329)
  sync : ggml (conv 1d + 2d updates, UB fixes) (ggerganov#3468)
  finetune : readme fix typo (ggerganov#3465)
  ggml : add RISC-V Vector Support for K-Quants and improved the existing intrinsics (ggerganov#3453)
  main : consistent prefix/suffix coloring (ggerganov#3425)
  llama : fix session saving/loading (ggerganov#3400)
  llama : expose model's rope_freq_scale in the API (ggerganov#3418)
  metal : alibi for arbitrary number of heads (ggerganov#3426)
  cmake : make LLAMA_NATIVE flag actually use the instructions supported by the processor (ggerganov#3273)
  Work on the BPE tokenizer (ggerganov#3252)
  convert : fix vocab size when not defined in hparams (ggerganov#3421)
  cmake : increase minimum version for add_link_options (ggerganov#3444)
  CLBlast: Add broadcast support for matrix multiplication (ggerganov#3402)
  gguf : add BERT, MPT, and GPT-J arch info (ggerganov#3408)
  gguf : general usability improvements (ggerganov#3409)
  cmake : make CUDA flags more similar to the Makefile (ggerganov#3420)
  finetune : fix ggerganov#3404 (ggerganov#3437)
  ...
yusiwen pushed a commit to yusiwen/llama.cpp that referenced this pull request Oct 7, 2023