Pursue higher resolution for mesh #36
Comments
Hi @zhu-yuefeng,
Re detailed mesh: what's your …
Re inaccurate SDF: it could be a problem if training and inference use different resolutions, but it shouldn't matter too much, since the network that predicts the SDF is a continuous network. In our model we train GET3D with a resolution of 90; you can train at a higher resolution and run inference at a higher resolution as well.
Re computation complexity: there can be several causes. 1. At the beginning of this script, it needs to compile some packages, which might take some time (e.g. …
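As an aside on the "continuous network" point above, here is a minimal sketch of why the same SDF weights can be queried on a coarse training grid or a denser inference grid. It assumes the SDF is predicted by an MLP over 3D coordinates; the names `sdf_mlp` and `query_sdf_on_tet_grid` are hypothetical and not GET3D's actual modules.

```python
import torch

# Hypothetical sketch: a continuous SDF predictor can be queried on any grid.
# `sdf_mlp` stands in for the network mapping xyz -> signed distance;
# it is not the actual GET3D module.
sdf_mlp = torch.nn.Sequential(
    torch.nn.Linear(3, 128), torch.nn.ReLU(), torch.nn.Linear(128, 1)
)

def query_sdf_on_tet_grid(tet_verts: torch.Tensor) -> torch.Tensor:
    """tet_verts: (N, 3) vertex positions of a tetrahedral grid.
    The SDF is a continuous function of position, so the same weights
    work whether the grid was built at tet_res=90 or something denser."""
    return sdf_mlp(tet_verts).squeeze(-1)

# Placeholder vertex sets standing in for a training-resolution grid and a
# denser inference grid (real grids would come from the precomputed .npz files).
coarse_verts = torch.rand(4096, 3)
fine_verts = torch.rand(16384, 3)
sdf_coarse = query_sdf_on_tet_grid(coarse_verts)
sdf_fine = query_sdf_on_tet_grid(fine_verts)
```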
Hi @zhu-yuefeng, any updates on this issue? Do you have further questions, or is it good to close?
Hi, I was on holiday this weekend~~ Now I am back to the code. In all previous experiments I used the default 'tet_res'=90. Changing it to 100 gives only a small improvement in resolution. I am going to do some training, which could take a few days before I can update. Your explanation and guidance about the SDF is quite clear and helpful, and the time-cost analysis also makes sense. Actually, I was just stating my case and wasn't eager to optimize that part, because its speed is acceptable to me. It seems I didn't make myself clear.
Update: I made a super-high-resolution *.npz file, but GPU memory is not large enough to support it. Details: I installed quartet, and then followed your instructions in …
BTW: data/generate_tets.py seems to use a different naming style on line 23 and line 44. Do these lines share the same relationship between res and frac? Thank you!
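As a rough illustration of the res/frac relationship asked about above, here is a hedged sketch of the usual quartet workflow. It assumes the tetrahedron edge length passed to quartet is derived as roughly 1.0 / res, so a file named by res and one named by frac would describe the same grid; the function name, paths, quartet argument order, and the exact constant are illustrative and not copied from data/generate_tets.py.

```python
import os

def generate_tet_grid(res: int, quartet_bin: str = "./quartet",
                      cube_obj: str = "cube.obj", out_dir: str = "data/tets") -> str:
    """Illustrative quartet call: fill a cube with tetrahedra whose edge
    length is a fraction of the cube size. The assumed relationship is
    frac = 1.0 / res; the constant used in generate_tets.py may differ."""
    frac = 1.0 / res
    os.makedirs(out_dir, exist_ok=True)
    out_tet = os.path.join(out_dir, f"{res}_tets.tet")
    # quartet is assumed here to take: <input mesh> <edge length> <output .tet>
    os.system(f"{quartet_bin} {cube_obj} {frac} {out_tet}")
    return out_tet

# Under this assumption, a grid named by res=100 and one named by frac=0.01
# refer to the same tetrahedralization, just labelled differently.
```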
Another question: why do we apply quartet here?
An experiment with --tet_res=64 also cost 11 GB of memory at peak. I am confused about the relationship between memory and resolution.
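To make the memory/resolution relationship a bit more concrete, a rough back-of-the-envelope sketch: the number of grid vertices grows roughly with tet_res cubed, but a resolution-independent baseline (network weights, activations, rendering buffers) can dominate the peak, which would explain why tet_res=64 still peaks near 11 GB. The vertex-count formula below is an approximation, not GET3D's exact grid size.

```python
# Rough scaling sketch: the grid-dependent part of memory grows ~res^3,
# while a resolution-independent baseline can dominate at small res.
def approx_grid_vertices(res: int) -> int:
    # vertices of a res^3 cube lattice; the real tet grid differs in detail
    return (res + 1) ** 3

for res in (64, 90, 100, 128):
    n = approx_grid_vertices(res)
    ratio = n / approx_grid_vertices(64)
    print(f"tet_res={res:4d}  ~{n:>10,} grid vertices  ({ratio:.1f}x the res-64 grid)")
```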
I am seeing interesting results training at tet_res 100 with an 8 GB card. I lowered some other options to do it. (I secretly believe that latent_dim 512 is not needed for my project.)
With an A4000 I got to almost 1,000,000 polys for the stuff I was generating. That's a lot :)
I am happy to see the pre-trained models released! Thank you!
Now I have run inference a few times with the checkpoint of the car category. As far as I can observe, the triangular meshes all have nearly the same resolution of about 5 cm, or 5% of the model's total length. Is there a way to make more detailed meshes?
Also, I guess your mesh reconstruction is based on the SDF-driven DMTet algorithm. Do you expect bad behavior due to inaccuracy of the SDF when applying a finer resolution? (A rough edge-length estimate is sketched after this post.)
P.S. My inference took 20 minutes to produce 100 textured meshes, so computation complexity does not really impact me.
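For the triangle-size observation above, a back-of-the-envelope estimate, assuming DMTet extracts the surface inside a tetrahedral grid so that the typical triangle edge is on the order of the grid extent divided by tet_res; how much of the grid the object fills is an assumption here, not a value from the GET3D config.

```python
# Rough estimate: typical DMTet triangle edge ~ grid extent / tet_res.
# object_fraction (how much of the grid the object spans) is an assumption.
def approx_edge_percent(tet_res: int, object_fraction: float = 0.5) -> float:
    """Triangle edge length as a percentage of the object's total length."""
    edge = 1.0 / tet_res          # edge as a fraction of the grid extent
    return 100.0 * edge / object_fraction

for res in (90, 100, 128, 200):
    print(f"tet_res={res:3d}: edge ~ {approx_edge_percent(res):.1f}% of object length")
```

With the default tet_res of 90 this lands in the low single-digit percent range, the same ballpark as the ~5% observed above, and raising tet_res (with a correspondingly denser pregenerated grid) is the natural lever for finer triangles.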