Issues: salesforce/LAVIS
RuntimeError: Error(s) in loading state_dict for Blip2OPT: size mismatch for opt_proj.weight: copying a param with shape torch.Size([2560, 768]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for opt_proj.bias: copying a param with shape torch.Size([2560]) from checkpoint, the shape in current model is torch.Size([768]).
#773 opened Dec 7, 2024 by chilljudaoren
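The shapes above are consistent with a checkpoint whose opt_proj maps the 768-dim Q-Former output into OPT-2.7B's 2560-dim embedding space, while the instantiated model built only a 768-wide projection. A minimal sketch, assuming the checkpoint really does target OPT-2.7B, is to let LAVIS build the matching architecture via model_type so the projection dimensions agree:

```python
# Sketch only: assumes the checkpoint was trained against OPT-2.7B (hidden size 2560).
# A 2560x768 opt_proj cannot be loaded into a model whose language head is 768-wide,
# so the model_type passed to LAVIS must match the checkpoint's language model.
import torch
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# "pretrain_opt2.7b" builds opt_proj as (2560, 768), matching the checkpoint shapes.
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_opt", model_type="pretrain_opt2.7b", is_eval=True, device=device
)
```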
text and image embedding from xgen-mm-phi3-mini-base-r-v1.5
#770 opened Dec 3, 2024 by sangeethkumar1997
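xgen-mm (BLIP-3) is distributed through Hugging Face rather than through this repository, and its processor/API may differ. As a hedged point of comparison only, the sketch below shows the feature-extraction path LAVIS itself documents for BLIP-2, which is the analogous way to obtain text and image embeddings here:

```python
# Sketch of LAVIS's own BLIP-2 feature extractor, not the xgen-mm API.
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model, vis_processors, txt_processors = load_model_and_preprocess(
    name="blip2_feature_extractor", model_type="pretrain", is_eval=True, device=device
)

raw_image = Image.new("RGB", (224, 224))          # placeholder image
caption = "a placeholder caption"                 # placeholder text

image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
text = txt_processors["eval"](caption)
sample = {"image": image, "text_input": [text]}

image_feats = model.extract_features(sample, mode="image").image_embeds
text_feats = model.extract_features(sample, mode="text").text_embeds
```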
Odd captions when inputting pure black/white pictures to the BLIP model
#768 opened Nov 27, 2024 by FaxinZ
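A uniform black or white image carries no visual content for the model to describe, so the generated caption can be arbitrary. A minimal sketch that reproduces the setup, assuming the standard LAVIS BLIP captioning weights ("blip_caption" / "base_coco") are the model in question:

```python
# Sketch: caption a pure black and a pure white image with LAVIS's BLIP captioner.
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model, vis_processors, _ = load_model_and_preprocess(
    name="blip_caption", model_type="base_coco", is_eval=True, device=device
)

for color in [(0, 0, 0), (255, 255, 255)]:  # pure black, pure white
    raw_image = Image.new("RGB", (384, 384), color)
    image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
    print(color, model.generate({"image": image}))
```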
ImportError: numpy.core.multiarray failed to import when trying to use salesforce-lavis in a Hugging Face app
#767 opened Nov 16, 2024 by jchwenger
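This ImportError typically appears when binary extensions pulled in by salesforce-lavis were compiled against NumPy 1.x but the environment resolved NumPy 2.x. A minimal diagnostic sketch (the pin is a common workaround, not an official fix):

```python
# Sketch: check which NumPy the environment actually resolved.
import numpy as np

print(np.__version__)  # if this reports 2.x, pinning "numpy<2" before
                       # installing salesforce-lavis is a common workaround
```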
hasattr(dataset[split], "coco_fmt_qust_file"), KeyError: "val"
#764 opened Nov 7, 2024 by ArkZero35
build dependencies did not run successfully because there is no compatible version of numpy
#754 opened Oct 13, 2024 by 7w01
How can I provide some examples to BLIP2 and InstructBLIP models?
#748 opened Sep 24, 2024 by xukefaker
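Assuming "examples" here means in-context (few-shot) text examples: in LAVIS, generate takes a single image plus a free-form prompt, so textual examples can be folded into that prompt string. A minimal sketch under that assumption ("query.jpg" is a hypothetical path):

```python
# Sketch: text-only few-shot examples prepended to the prompt of one image query.
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_opt", model_type="caption_coco_opt2.7b", is_eval=True, device=device
)

raw_image = Image.open("query.jpg").convert("RGB")  # hypothetical input image
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)

prompt = (
    "Question: what color is the sky? Answer: blue. "
    "Question: how many wheels does a bicycle have? Answer: two. "
    "Question: what is shown in the image? Answer:"
)
print(model.generate({"image": image, "prompt": prompt}))
```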