Reproduce the pre-training tasks of Video-LLaMAv2, but the video dimensions are misaligned. #141
Comments
By the way, my training parameters in VSCode are: [screenshot]. Meanwhile, the parameters in my preprocessor_config.json for clip-vit-large-patch14-336 are as follows: [screenshot]
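For reference, the preprocessor settings shipped with that checkpoint can be inspected directly. This is a minimal sketch using Hugging Face `transformers`, assuming the standard `openai/clip-vit-large-patch14-336` hub name (the local path in the original setup may differ):

```python
from transformers import CLIPImageProcessor

# Load the image-processing config for CLIP ViT-L/14 at 336px resolution and
# print it; the output reflects preprocessor_config.json, including fields
# such as crop_size, size, image_mean, and image_std.
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14-336")
print(processor)
```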
@BenoitHanotte @hill2hill @lixin4ever @hangzhang-nlp Could you help me with this issue? It's very important to me, and I've already spent about two weeks on it. I would greatly appreciate your assistance in resolving it.
@BenoitHanotte @hill2hill @lixin4ever @hangzhang-nlp I hope you can take the time to help address my concerns. The purpose of this platform is to solve users' problems, not merely to serve as a showcase without proper follow-up; such a situation might raise doubts about the impact of the method.
Sorry for the late reply 🙏🙏. Could you please fork our codebase and commit your modifications? I tried the provided commands, but it seems I can't reproduce the bug.
I am attempting to reproduce the pre-training tasks of Video-LLaMAv2. I have already downloaded the Valley and LLaVA-image datasets and started experimenting with pre-training. However, I noticed that the video dimensions obtained in `LazySupervisedDataset` and `DataCollatorForSupervisedDataset` are (16, 3, 336, 336). Without making any modifications, I found that the video dimensions became (2, 3, 336, 336) in the forward method of `VideoLLaMA2MistralForCausalLM`. I couldn't find where the change occurred and couldn't understand the logic behind it. Could you help me resolve this issue?
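One way to locate where the shape changes is to log the input shapes entering every submodule during a single training step. Below is a minimal PyTorch 2.x sketch, assuming the model is a standard `nn.Module`; the `model` variable is a hypothetical placeholder for the instantiated `VideoLLaMA2MistralForCausalLM`:

```python
import torch
import torch.nn as nn

def log_input_shapes(model: nn.Module) -> None:
    # Register a forward pre-hook on every named submodule so the shape of
    # each tensor entering it is printed; running one batch then shows
    # exactly where (16, 3, 336, 336) turns into (2, 3, 336, 336).
    def make_hook(name):
        def hook(module, args, kwargs):
            for arg in list(args) + list(kwargs.values()):
                if isinstance(arg, torch.Tensor):
                    print(f"{name}: input shape {tuple(arg.shape)}")
        return hook

    for name, module in model.named_modules():
        if name:  # skip the root module, whose name is the empty string
            # with_kwargs=True also captures tensors passed as keyword
            # arguments (e.g. pixel_values in Hugging Face-style forwards)
            module.register_forward_pre_hook(make_hook(name), with_kwargs=True)

# Hypothetical usage: call log_input_shapes(model) right after the model is
# built, run a single batch, and search the log for the video tensor's shape.
```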