Thanks for sharing the paper and code. It's great work.
Uni3D has scaled the point encoder to 1B parameters, which is quite large. However, Objaverse 1.0 contains only 800K 3D objects, which I think is still too small to support pre-training a 1B-scale point encoder, and the resulting generalization still lags far behind its counterparts in the image and text domains.
Now that Objaverse-XL has been released, containing 10M+ 3D objects, does your team plan to pre-train on this larger dataset? I believe BAAI has the computing resources to carry out such a task. What do you think?
I also think scaling up pre-training with the Objaverse-XL dataset could greatly improve the generalization of the point encoder. I'm curious: has your team already processed the Objaverse-XL dataset for Uni3D adaptation? If not, perhaps there's an opportunity for collaboration.
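For reference, here is a minimal sketch of how one might convert Objaverse-XL meshes into the (N, 6) xyz+rgb point clouds that Uni3D-style pre-training consumes. This is not the authors' released pipeline: the use of `trimesh`, the 10,000-point budget, the unit-sphere normalization, and the placeholder colors are all assumptions for illustration.

```python
# Hypothetical preprocessing sketch (NOT Uni3D's official pipeline):
# sample a fixed-size, normalized point cloud from one Objaverse-XL mesh.
import numpy as np
import trimesh

def mesh_to_pointcloud(mesh_path: str, num_points: int = 10_000) -> np.ndarray:
    """Sample an (num_points, 6) xyz+rgb point cloud from a mesh file."""
    mesh = trimesh.load(mesh_path, force="mesh")
    # Uniform surface sampling; returns sampled points and their face indices.
    points, _ = trimesh.sample.sample_surface(mesh, num_points)
    points = np.asarray(points, dtype=np.float32)
    # Center and rescale into the unit sphere, as is common for point encoders.
    points -= points.mean(axis=0, keepdims=True)
    points /= np.linalg.norm(points, axis=1).max() + 1e-8
    # Placeholder mid-gray color; a real pipeline would bake texture or
    # vertex colors onto the sampled points instead.
    colors = np.full_like(points, 0.5)
    return np.concatenate([points, colors], axis=1)

if __name__ == "__main__":
    pc = mesh_to_pointcloud("example.glb")
    np.save("example_pc.npy", pc)
```

Even a simple pass like this over 10M+ assets would be a substantial compute job, which is part of why collaboration on the preprocessing alone might be worthwhile.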