Leave-one-out rendering #8
Comments
You should be able to just use/adapt this method. It creates an …
In more detail, add an entry here with some name that calls a modified (such that it loads your models) …
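Since the linked method and the exact entry are cut off above, here is a minimal, self-contained sketch of the pattern being suggested, assuming a registry of named dataset loaders where a new entry points at your own scene. The names SceneSpec, DATASET_LOADERS, load_my_scene and all paths are illustrative placeholders, not the repository's actual API.

```python
# Sketch only: a registry of named dataset loaders, plus a new entry whose
# loader is "modified such that it loads your models". All names and paths
# here are placeholders, not StableViewSynthesis's actual API.
from dataclasses import dataclass
from pathlib import Path


@dataclass
class SceneSpec:
    image_dir: Path   # per-view images / precomputed features
    mesh_path: Path   # proxy geometry that points are unprojected onto
    train: bool       # False => evaluation-style rendering of every view


def load_reference_scene(root: str) -> SceneSpec:
    # Stand-in for an existing loader such as get_eval_set_tat.
    root = Path(root)
    return SceneSpec(root / "views", root / "mesh.ply", train=False)


def load_my_scene(root: str) -> SceneSpec:
    # Same structure, but pointed at your own reconstruction.
    root = Path(root)
    return SceneSpec(root / "my_images", root / "my_mesh.ply", train=False)


# The "entry with some name": the experiment script picks a loader by name.
DATASET_LOADERS = {
    "reference": load_reference_scene,
    "mydata": load_my_scene,
}

if __name__ == "__main__":
    print(DATASET_LOADERS["mydata"]("/path/to/my/scene"))
```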
Hi,
Thanks for the reply. I managed to use get_train_set_tat, but I am quickly running out of GPU memory in train mode on a 16GB GPU. With get_eval_set_tat I can render just fine. Is there any quick and dirty way to use less GPU memory with get_train_set_tat?
Cheers,
What do you use for …
I had that at the default, which seems to be 3. That already seems pretty small to me, right?
Yes, that should be more than feasible. Batch size is set to 1? |
Batch size is 1, both for eval and train. I am attaching my git diff in case you notice something. Unfortunately I don't have a GPU with more memory available.
svs_diff.txt
Thanks again,
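For intuition about where the memory goes in train mode, here is a rough, self-contained back-of-the-envelope estimate of how per-view feature maps scale with the number of source views and the image resolution; the channel count and resolutions below are made-up illustrative numbers, not values taken from the repository.

```python
# Rough estimate only: memory for source-view feature maps grows linearly with
# the number of neighbouring views and with image resolution, and train mode
# additionally stores activations and gradients on top of this.
def feature_memory_gib(n_views: int, height: int, width: int,
                       channels: int = 64, bytes_per_el: int = 4) -> float:
    """Approximate float32 feature-map size in GiB for n_views source views."""
    return n_views * channels * height * width * bytes_per_el / 1024**3


for n_views in (3, 5, 9):
    for h, w in ((540, 960), (1080, 1920)):
        gib = feature_memory_gib(n_views, h, w)
        print(f"{n_views} views @ {w}x{h}: ~{gib:.2f} GiB of features")
```

Under these assumptions, lowering the image resolution or dropping a source view shrinks this term by the corresponding factor, which is the usual quick-and-dirty lever when eval fits in memory but training does not.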
Hi @griegler,
Could you point to the code which removes the target image from the neighbors when the Dataset object is created with …? I was able to adapt my dataset and run evaluation; however, I was wondering whether leave-one-out rendering actually occurs. If the target image exists in the dataset, wouldn't the unprojected points on the mesh always take points from the very same target image? Or could you please clarify if I am misunderstanding something. Perhaps I need to define a …
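To make the leave-one-out question concrete, here is a small, self-contained sketch of the kind of filtering being asked about: when selecting source views for a target view, the target's own index is excluded so its pixels can never be unprojected back onto itself. The overlap-score ranking and all names here are illustrative, not the repository's actual code.

```python
import numpy as np


def leave_one_out_neighbors(overlap: np.ndarray, idx: int, n_nbs: int) -> np.ndarray:
    """Select the n_nbs best source views for target view idx, excluding idx itself.

    overlap: (n_views, n_views) array scoring how much view j covers view i;
             higher means a better source view for that target.
    """
    order = np.argsort(-overlap[idx])   # best-scoring candidate views first
    order = order[order != idx]         # leave-one-out: never use the target as a source
    return order[:n_nbs]


# Tiny usage example with a random symmetric overlap matrix.
rng = np.random.default_rng(0)
scores = rng.random((6, 6))
scores = (scores + scores.T) / 2
print(leave_one_out_neighbors(scores, idx=2, n_nbs=3))  # never contains 2
```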
Hi,
I am trying to do a leave-one-out rendering for the input cameras, I have been doing it this way:
In this line https://github.com/intel-isl/StableViewSynthesis/blob/main/experiments/dataset.py#L263
I am adding nbs.remove(idx) before passing it to the next function.
The results are extremely blurry; am I doing something wrong?
All the best,
George Kopanas
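As a sanity check on the one-line change described above, here is a hedged sketch of what it amounts to, paraphrased from this issue rather than copied from dataset.py; the guard and the helper name build_source_views are additions for illustration only.

```python
def build_source_views(nbs, idx, n_nbs):
    """Return the neighbour indices used as source views for target view idx."""
    nbs = list(nbs)
    # The modification described above: drop the target view itself so it is
    # never one of its own source views (guarded, since idx may be absent).
    if idx in nbs:
        nbs.remove(idx)
    return nbs[:n_nbs]


print(build_source_views([4, 2, 7, 5], idx=2, n_nbs=3))  # -> [4, 7, 5]
```

With the target excluded, the network can no longer borrow pixels from the target view itself, so some softening relative to the original neighbour set is expected; whether it should be as blurry as reported is exactly the question raised in this issue.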