
ValueError: Llama 3 tokenizer is not available. Make sure you have the necessary permissions. #356

Open
sjghh opened this issue Nov 30, 2024 · 2 comments

sjghh commented Nov 30, 2024

https://github.com/LLaVA-VL/LLaVA-NeXT/blob/main/docs/LLaVA-NeXT.md
I am using the inference code from this page and hit the error below. Is it because I need to set a path for Llama 3? I only set the path for "mm_vision_tower" in the model weights config file.

```
Model Class: LlavaLlamaForCausalLM
Traceback (most recent call last):
  File "/data/hongbo.xu/Datasets/MC-ERU/llava-next/LLaVA-NeXT/inference.py", line 30, in <module>
    prompt_question = conv.get_prompt()
  File "/data/hongbo.xu/Datasets/MC-ERU/llava-next/LLaVA-NeXT/llava/conversation.py", line 99, in get_prompt
    raise ValueError("Llama 3 tokenizer is not available. Make sure you have the necessary permissions.")
ValueError: Llama 3 tokenizer is not available. Make sure you have the necessary permissions.
```
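For context, the error comes from a None-check on the conversation object's tokenizer inside `get_prompt()`. A minimal sketch of that guard (class and method bodies here are illustrative stand-ins, not the actual LLaVA-NeXT source):

```python
class Conversation:
    """Minimal stand-in for llava.conversation.Conversation."""

    def __init__(self, tokenizer=None):
        # In LLaVA-NeXT this attribute is expected to be populated
        # with a loaded Llama 3 tokenizer before prompting.
        self.tokenizer = tokenizer

    def get_prompt(self):
        # The llama_v3 template needs the tokenizer's chat template,
        # so a missing tokenizer fails fast here.
        if self.tokenizer is None:
            raise ValueError(
                "Llama 3 tokenizer is not available. "
                "Make sure you have the necessary permissions."
            )
        return "<prompt built via the tokenizer's chat template>"


# Reproduces the error reported in this issue:
conv = Conversation()
try:
    conv.get_prompt()
except ValueError as e:
    print(e)
```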

smile-struggler commented Dec 2, 2024

I just solved this problem; you can try the following:

```python
conv = copy.deepcopy(conv_templates[conv_template])
conv.tokenizer = tokenizer
conv.append_message(conv.roles[0], question)
conv.append_message(conv.roles[1], None)
```

The key is the added line `conv.tokenizer = tokenizer`.
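To show the whole flow end to end, here is a self-contained sketch with stand-in classes (the real `Conversation` and `conv_templates` come from `llava.conversation`, and the real `tokenizer` from the model loader; the names below are illustrative only):

```python
import copy
from dataclasses import dataclass, field


@dataclass
class Conversation:
    """Stand-in for llava.conversation.Conversation."""
    roles: tuple = ("user", "assistant")
    messages: list = field(default_factory=list)
    tokenizer: object = None

    def append_message(self, role, message):
        self.messages.append((role, message))

    def get_prompt(self):
        if self.tokenizer is None:
            raise ValueError("Llama 3 tokenizer is not available. "
                             "Make sure you have the necessary permissions.")
        return "\n".join(f"{r}: {m}" for r, m in self.messages if m)


# Stand-in for llava.conversation.conv_templates.
conv_templates = {"llava_llama_3": Conversation()}
tokenizer = object()  # stands in for the tokenizer returned by the model loader

conv = copy.deepcopy(conv_templates["llava_llama_3"])
conv.tokenizer = tokenizer  # <-- the fix: attach the loaded tokenizer
conv.append_message(conv.roles[0], "What is in this image?")
conv.append_message(conv.roles[1], None)
print(conv.get_prompt())  # no longer raises ValueError
```

Without the marked line, `get_prompt()` raises the exact `ValueError` from this issue; with it, the prompt builds normally.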

ISPZ commented Dec 13, 2024

First, download Llama 3 from https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct, then point the tokenizer path in the LLaVA-NeXT/llava/conversation.py file at your local copy:

```python
conv_llava_llama_3 = Conversation(
    system="You are a helpful language and vision assistant. "
    "You are able to understand the visual content that the user provides, "
    "and assist the user with a variety of tasks using natural language.",
    roles=("user", "assistant"),
    version="llama_v3",
    messages=[],
    offset=0,
    sep="<|eot_id|>",
    sep_style=SeparatorStyle.LLAMA_3,
    # tokenizer_id="meta-llama/Meta-Llama-3-8B-Instruct",  # Change to your local path.
    # tokenizer=safe_load_tokenizer("meta-llama/Meta-Llama-3-8B-Instruct"),  # Change to your local path.
    stop_token_ids=[128009],
)
```
