Issues: NVlabs/VILA
I can run inference correctly with this command: `vila-infer --model-path /data/workspace/zhaoyong/model/weight_files/VILA1.5-3B --conv-mode vicuna_v1 --text "Please describe the video." --media /data/workspace/zhaoyong/data/安全帽.mp4`, but this command fails: `python -W ignore server.py --port 8000 --model_path /data/workspace/zhaoyong/model/weight_files/VILA1.5-3B --conv_mode vicuna_v1`. Why is that, and how can I fix it?
#163
opened Dec 18, 2024 by
HAOYON-666
ValueError: Cannot use chat template functions because tokenizer.chat_template is not set and no template argument was passed! For information about writing templates and setting the tokenizer.chat_template attribute, please see the documentation at https://huggingface.co/docs/transformers/main/en/chat_templating
#160
opened Dec 16, 2024 by
HAOYON-666
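The ValueError in #160 means the tokenizer shipped with the checkpoint has no `tokenizer.chat_template` set, so `apply_chat_template` has nothing to render. One common workaround (an assumption here, not VILA's documented fix) is to assign a Jinja template to the tokenizer before serving. The sketch below renders an illustrative vicuna-style template standalone with `jinja2` (the same engine `transformers` uses) to show what such a template produces; the template string is hypothetical, not the official VILA one.

```python
# Sketch: what a manually assigned chat template would render.
# In a real server you would do:
#   tokenizer.chat_template = VICUNA_TEMPLATE   # before apply_chat_template()
# VICUNA_TEMPLATE below is an illustrative example, not VILA's official template.
from jinja2 import Template

VICUNA_TEMPLATE = (
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}USER: {{ message['content'] }}\n"
    "{% elif message['role'] == 'assistant' %}ASSISTANT: {{ message['content'] }}\n"
    "{% endif %}"
    "{% endfor %}"
)

messages = [
    {"role": "user", "content": "Please describe the video."},
]

# Render the template the same way transformers' apply_chat_template would.
rendered = Template(VICUNA_TEMPLATE).render(messages=messages)
print(rendered)  # USER: Please describe the video.
```

Alternatively, newer `transformers` versions let you pass a template string via the `chat_template=` argument of `apply_chat_template`, as the error message notes.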
Issues with the effectiveness of W4A16 quantization using AWQ
#157
opened Dec 10, 2024 by
RanchiZhao
Issue: The size of tensor a (2) must match the size of tensor b (8) at non-singleton dimension 0
#151
opened Nov 20, 2024 by
apfsds3bm9
How to run longvila large context, sequence parallel inference?
#130
opened Aug 27, 2024 by
zadeismael
How to run vila with TinyChatEngine with multiple understanding enabled?
#129
opened Aug 27, 2024 by
yg1988