I want to understand the model architecture differences between the author releases under lmms-lab and the HF team releases under llava-hf. For the same set of models, does using one over the other make a performance difference?
Also, are there any plans to transfer weights trained in one format to the other? I ask because I want to run vLLM inference, but vLLM only supports the models released by llava-hf.
Hi, I posted an issue about this topic a month ago: #193
By following the script there and adapting it to your own checkpoints, you can convert your lmms-lab checkpoint into the llava-hf format and run inference with the Hugging Face library.
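At its core, a conversion like that is a key-renaming pass over the checkpoint's state dict. Here is a minimal sketch of that idea; the prefix mapping below is a hypothetical placeholder, not the actual lmms-lab-to-llava-hf mapping, so take the real key mapping from the script linked in #193:

```python
# Sketch of a state-dict conversion pass: rename parameter keys from
# one checkpoint naming scheme to another. The prefixes below are
# HYPOTHETICAL examples only; use the mapping from the real
# conversion script referenced in #193.
PREFIX_MAP = {
    "model.vision_tower.": "vision_tower.",           # hypothetical
    "model.mm_projector.": "multi_modal_projector.",  # hypothetical
}

def convert_state_dict(state_dict):
    """Return a copy of state_dict with renamed keys.

    Keys matching a prefix in PREFIX_MAP get the prefix swapped;
    all other keys pass through unchanged.
    """
    converted = {}
    for key, value in state_dict.items():
        new_key = key
        for old_prefix, new_prefix in PREFIX_MAP.items():
            if key.startswith(old_prefix):
                new_key = new_prefix + key[len(old_prefix):]
                break
        converted[new_key] = value
    return converted

# Toy example (ints stand in for weight tensors):
sd = {
    "model.vision_tower.layers.0.weight": 1,
    "lm_head.weight": 2,
}
print(convert_state_dict(sd))
# {'vision_tower.layers.0.weight': 1, 'lm_head.weight': 2}
```

After renaming, you would save the result alongside the matching llava-hf config so `from_pretrained` can load it; the real script in #193 handles those details.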