I noticed that your code was written with the assumption that `batch_size = 1`, but when I increased `batch_size`, I got dimension errors. I want to know why `batch_size` is limited to 1.
If it cannot be increased, I cannot make efficient use of my device resources.
Apart from the GPU memory constraint, the batch size is set to one for two reasons: 1) point clouds have different lengths, and 2) the for-loop we use here to filter out unnecessary sampling locations (same as in BEVFormer) assumes a single sample.
For the point cloud lengths, you can simply sample a fixed number of points from each point cloud, which would fix the error posted above.
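A minimal sketch of that fixed-size sampling, assuming point clouds arrive as `(N, C)` arrays (the function name and the default point count are my own, not from the repo). Sampling with replacement when a cloud is shorter than the target keeps every cloud at the same length so they can be stacked into a batch:

```python
import numpy as np

def sample_fixed_points(points, num_points=4096):
    """Return exactly num_points rows from a (N, C) point cloud.

    If the cloud has fewer than num_points points, sample with
    replacement; otherwise sample without replacement.
    """
    n = points.shape[0]
    idx = np.random.choice(n, num_points, replace=(n < num_points))
    return points[idx]

# Clouds of different lengths now stack into one (B, num_points, C) batch.
clouds = [np.random.rand(120, 3), np.random.rand(30, 3)]
batch = np.stack([sample_fixed_points(c, 64) for c in clouds])
```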
For the for-loop, you can wrap it in another for-loop that iterates over the batch dimension.
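A rough sketch of that wrapping, assuming the existing loop filters per-sample sampling locations with a boolean mask (the function name, shapes, and mask semantics here are hypothetical, not the actual TPVFormer code). The only change needed is the outer loop over the batch index:

```python
import numpy as np

def filter_reference_points_batched(ref_points, valid_mask):
    """Wrap the original per-sample filtering in a batch loop.

    ref_points: (B, N, 2) hypothetical sampling locations
    valid_mask: (B, N) boolean mask of locations to keep
    Returns a list of (N_b, 2) arrays, one per batch element,
    since each sample may keep a different number of locations.
    """
    filtered = []
    for b in range(ref_points.shape[0]):               # new outer loop over batch
        filtered.append(ref_points[b][valid_mask[b]])  # original per-sample filter
    return filtered
```

Because each sample can retain a different number of locations, the results stay in a list (or would need padding) rather than a single stacked tensor.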
TPVFormer/dataloader/dataset_wrapper.py
Lines 116 to 127 in bbed188