1 parent 3045da8 · commit 0a4feb7
VideoLLaVa/README.md
@@ -6,7 +6,7 @@ and the model can handle images and video in the same example!
 <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/videollava_example.png"
 alt="drawing" width="600"/>

-<small> VideoLLaVa example. Taken from the <a href="https://arxiv.org/abs/2311.10122">original paper.</a> </small>
+<small> Video-LLaVa example. Taken from the <a href="https://arxiv.org/abs/2311.10122">original paper.</a> </small>

 * Docs: https://huggingface.co/docs/transformers/main/en/model_doc/video_llava
 * Checkpoint: https://huggingface.co/LanguageBind/Video-LLaVA-7B-hf
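
For context on what the linked checkpoint refers to, here is a minimal sketch (not part of this commit) of loading `LanguageBind/Video-LLaVA-7B-hf` with the standard `transformers` Video-LLaVA classes; the random frames are placeholders for a real decoded clip, and the prompt/generation settings are illustrative assumptions.

```python
# Hypothetical usage sketch for the checkpoint linked in the README above.
import numpy as np
import torch
from transformers import VideoLlavaProcessor, VideoLlavaForConditionalGeneration

model_id = "LanguageBind/Video-LLaVA-7B-hf"
processor = VideoLlavaProcessor.from_pretrained(model_id)
model = VideoLlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Video-LLaVA consumes 8 sampled frames per clip; these are random placeholders
# standing in for frames decoded from an actual video file.
video = np.random.randint(0, 255, (8, 224, 224, 3), dtype=np.uint8)
prompt = "USER: <video>\nWhat is happening in this clip? ASSISTANT:"

inputs = processor(text=prompt, videos=video, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=60)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```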