Dear Author,

Thank you for providing this fascinating framework. I am currently adapting FUTR3D to a new dataset, but I have run into some challenges, particularly in the point-cloud voxelization step.
So far, we have aligned our data format with nuScenes and can train the model successfully. However, the training time for our LiDAR-only setup exceeds one month (about 35 days for 24 epochs), roughly 8x longer than training on the nuScenes dataset (about 4 days for 24 epochs). We are training on 4x A100 80GB GPUs.
Our single-frame point cloud contains more than 1M points, whereas a nuScenes frame has only about 260k points. After debugging, we traced the main bottleneck to the following function in futr3d/plugin/futr3d/models/detectors/futr3d.py:
def voxelize(self, points):
    voxels, coors, num_points = [], [], []
    for res in points:
        res_voxels, res_coors, res_num_points = self.pts_voxel_layer(res)  # <-- the bottleneck is here
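For context, the cost of this call can be reproduced in isolation with mmcv's Voxelization op. The snippet below is only a rough timing sketch: the voxel_size and max_num_points values are placeholders rather than our exact settings, and the points are synthetic.

import time

import torch
from mmcv.ops import Voxelization

# Standalone timing of hard voxelization on a synthetic ~1M-point frame.
# voxel_size / max_num_points are placeholder values, not our exact config.
voxel_layer = Voxelization(
    voxel_size=[0.075, 0.075, 0.2],
    point_cloud_range=[-54, -54, -5.0, 54, 54, 3.0],
    max_num_points=10,
    max_voxels=120000,  # matches the max_voxels cap discussed below
)

# Synthetic frame: 1M points with 5 features (x, y, z, intensity, ring index).
points = torch.rand(1_000_000, 5, device='cuda')
points[:, :2] = points[:, :2] * 108 - 54  # x, y in [-54, 54]
points[:, 2] = points[:, 2] * 8 - 5       # z in [-5, 3]

torch.cuda.synchronize()
start = time.perf_counter()
voxels, coors, num_points_per_voxel = voxel_layer(points)
torch.cuda.synchronize()
print(f'{time.perf_counter() - start:.4f} s for {voxels.shape[0]} voxels')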
Upon further investigation into mmcv-1.6.2/mmcv/ops/voxelize.py, we found that the primary delay occurs inside the voxelization op itself. Debugging the related variables showed that the number of voxels is capped at 120000, which is the max_voxels value defined in our configuration file.
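For reference, the voxel layer in our config looks roughly like the following; only max_voxels=120000 and point_cloud_range are our actual values, the other numbers are illustrative.

point_cloud_range = [-54, -54, -5.0, 54, 54, 3.0]

model = dict(
    pts_voxel_layer=dict(
        max_num_points=10,               # illustrative value
        voxel_size=[0.075, 0.075, 0.2],  # illustrative value
        point_cloud_range=point_cloud_range,
        max_voxels=120000,               # the cap we appear to be hitting
    ),
    # ... rest of the model config unchanged
)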
We suspect that some voxels are being discarded due to this limit. However, with my limited experience, I am unsure how to optimize this process further. As an initial attempt, I tried cropping the original point cloud for training, reducing the number of points and shrinking the effective detection area (while keeping point_cloud_range = [-54, -54, -5.0, 54, 54, 3.0]), roughly as sketched below.
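The sketch below is a simplified version of that workaround, applied per frame before the pts_voxel_layer call; the 500k point budget is just an example, and the same effect could probably be achieved with range-filter / point-sample style transforms in the data pipeline instead.

import torch

def crop_and_subsample(points, pc_range, max_points=500_000):
    """Range-crop an (N, C) point tensor, then randomly subsample it.

    pc_range is [x_min, y_min, z_min, x_max, y_max, z_max]; max_points is an
    arbitrary budget chosen only to keep voxelization tractable.
    """
    # Drop points outside the detection range.
    mask = ((points[:, 0] >= pc_range[0]) & (points[:, 0] <= pc_range[3]) &
            (points[:, 1] >= pc_range[1]) & (points[:, 1] <= pc_range[4]) &
            (points[:, 2] >= pc_range[2]) & (points[:, 2] <= pc_range[5]))
    points = points[mask]

    # Randomly keep at most max_points of the remaining points.
    if points.shape[0] > max_points:
        keep = torch.randperm(points.shape[0], device=points.device)[:max_points]
        points = points[keep]
    return points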
Nevertheless, I believe this is not an ideal solution. If you have any insights or suggestions on how to accelerate this task, I would greatly appreciate your advice.
Thank you very much for your time and consideration!