
Which tokenizer.model is needed for GPT4ALL? #614


Closed
cannin opened this issue Mar 30, 2023 · 3 comments
Labels
question Further information is requested

Comments


cannin commented Mar 30, 2023

Which tokenizer.model is needed for GPT4ALL when using convert-gpt4all-to-ggml.py? Is it the one from LLaMA 7B? This is unclear from the current README, and gpt4all-lora-quantized.bin is typically distributed without a tokenizer.model file.

@BenjiKCF

I want to know the answer to this question too.

Contributor

KASR commented Mar 30, 2023

I've used the same tokenizer as the one for the LLaMA models.
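In practice that means placing the LLaMA tokenizer.model next to the quantized GPT4ALL weights before running the converter. A minimal sketch of the expected layout follows; the directory names and the converter's argument order are assumptions (check the script's usage string), and the `touch`/`cp` lines only create placeholders standing in for the real downloaded files:

```shell
set -eu

# Hypothetical layout: model and LLaMA tokenizer side by side.
MODEL_DIR=$(mktemp -d)/models/gpt4all-7B
mkdir -p "$MODEL_DIR"

# Placeholder for the downloaded gpt4all-lora-quantized.bin.
touch "$MODEL_DIR/gpt4all-lora-quantized.bin"

# Placeholder for the LLaMA 7B tokenizer.model copied in from
# a LLaMA checkout (it is not bundled with the GPT4ALL weights).
touch "$MODEL_DIR/tokenizer.model"

# Actual conversion (commented out: needs the real files, and the
# argument order here is an assumption to verify against the script):
# python3 convert-gpt4all-to-ggml.py \
#     "$MODEL_DIR/gpt4all-lora-quantized.bin" \
#     "$MODEL_DIR/tokenizer.model"

LAYOUT_OK=no
if [ -f "$MODEL_DIR/gpt4all-lora-quantized.bin" ] \
   && [ -f "$MODEL_DIR/tokenizer.model" ]; then
    LAYOUT_OK=yes
fi
echo "$LAYOUT_OK"
```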

@gjmulder gjmulder added the question Further information is requested label Mar 30, 2023
@cannin cannin closed this as completed Mar 31, 2023
@MeinDeutschkurs

Yeah, it's interesting. I want to use the freedomGPT model inside of it. :D But it does not work if I simply copy the file into GPT4ALL's path.


5 participants