DeepSeek-VL2 is recent work that deserves attention: it demonstrates exceptional grounding ability, surpassing all models presented in Table 2 of the original paper.
Our recent work, Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models (https://arxiv.org/abs/2501.05767), extends the visual grounding paradigm to the multi-image scenario, opening up much wider applications for the visual grounding task.
We have carefully crafted a large-scale training dataset and benchmarks, all fully open-sourced on Hugging Face: https://huggingface.co/Michael4933.
We would really appreciate it if you could include our work in your excellent survey~~~