This is a PyTorch Lightning implementation of the paper "ATPPNet: Attention based Temporal Point Cloud Prediction Network" submitted to ICRA 2024.
ATPPNet Architecture. ATPPNet leverages Conv-LSTM along with channel-wise and spatial attention, dually complemented by a 3D-CNN branch, to extract an enhanced spatio-temporal context and recover high-fidelity predictions of future point clouds.
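For intuition only, the sketch below shows one way such a block could look in PyTorch: channel-wise and spatial attention applied to per-frame recurrent features, fused with a 3D-CNN branch that pools temporal context. All module names and sizes are hypothetical; this is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative only)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (B, C, H, W)
        w = self.mlp(x.mean(dim=(2, 3)))        # (B, C) channel weights
        return x * w[:, :, None, None]

class SpatialAttention(nn.Module):
    """Attention over spatial locations from pooled channel statistics."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                       # x: (B, C, H, W)
        stats = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(stats))

class DualAttentionWith3DBranch(nn.Module):
    """Fuses attended 2D features with a 3D-CNN temporal context (sketch)."""
    def __init__(self, channels, seq_len):
        super().__init__()
        self.channel_att = ChannelAttention(channels)
        self.spatial_att = SpatialAttention()
        self.temporal_cnn = nn.Conv3d(channels, channels,
                                      kernel_size=(seq_len, 3, 3), padding=(0, 1, 1))

    def forward(self, feats):                   # feats: (B, T, C, H, W), e.g. ConvLSTM outputs
        last = self.spatial_att(self.channel_att(feats[:, -1]))        # attend last step
        context = self.temporal_cnn(feats.transpose(1, 2)).squeeze(2)  # 3D-CNN over time
        return last + context                   # fused spatio-temporal feature

# usage: block = DualAttentionWith3DBranch(channels=64, seq_len=5)
#        out = block(torch.randn(2, 5, 64, 32, 256))
```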
Clone this repository and run
cd atppnet
git submodule update --init
to install the Chamfer distance submodule. The Chamfer distance submodule is originally taken from here, with some modifications to use it as a submodule. All parameters are stored in config/parameters.yaml.
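For reference, the symmetric Chamfer distance that the submodule accelerates can be written in a few lines of plain PyTorch. This is only one common formulation; the submodule's exact definition (e.g. squared vs. unsquared distances) may differ, and the CUDA submodule is what the code actually uses.

```python
# Illustrative pure-PyTorch reference for the symmetric Chamfer distance.
import torch

def chamfer_distance(p1: torch.Tensor, p2: torch.Tensor) -> torch.Tensor:
    """p1: (N, 3) and p2: (M, 3) point clouds."""
    d = torch.cdist(p1, p2)                    # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
```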
In this project, all dependencies are managed with miniconda. Use the following command to create the conda environment:
conda env create -f atppnet.yml
Then activate the environment using the command conda activate atppnet
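Optionally, you can confirm the environment is usable with a quick check; this assumes the environment provides torch and pytorch_lightning, which is not spelled out above.

```python
# Optional sanity check, run inside the activated conda environment.
import torch
import pytorch_lightning as pl

print("torch:", torch.__version__)
print("pytorch_lightning:", pl.__version__)
print("CUDA available:", torch.cuda.is_available())
```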
Download the KITTI odometry data from the official website.
We process the data in advance to speed up training. To prepare the dataset from the KITTI odometry data, set the value of GENERATE_FILES to true in config/parameters.yaml. The environment variable PCF_DATA_RAW points to the directory containing the train/val/test sequences specified in the config file. It can be set with
export PCF_DATA_RAW=/path/to/kitti-odometry/dataset/sequences
and the destination of the processed files PCF_DATA_PROCESSED is set with
export PCF_DATA_PROCESSED=/desired/path/to/processed/data/
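The helper below is hypothetical (it is not part of the repository) and only illustrates a quick way to verify the two variables before preprocessing:

```python
# Hypothetical helper: verify the KITTI data environment variables
# before running the preprocessing/training scripts.
import os
from pathlib import Path

raw = Path(os.environ["PCF_DATA_RAW"])        # .../kitti-odometry/dataset/sequences
processed = Path(os.environ["PCF_DATA_PROCESSED"])

assert raw.is_dir(), f"PCF_DATA_RAW does not exist: {raw}"
sequences = sorted(p.name for p in raw.iterdir() if p.is_dir())
print("Found sequences:", sequences)          # expect folders like 00, 01, ...

processed.mkdir(parents=True, exist_ok=True)  # destination for the processed files
print("Processed data will be written to:", processed)
```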
Download the lidar blobs for parts 1 and 2, the metadata of the full dataset, and the map expansion pack (v1.0) from the official website.
For preparing the nuScenes dataset, set up the folder structure in the following manner:
NuScenes
├── v1.0-test
│ ├── maps
│ ├── samples
│ │ └── LIDAR_TOP
│ ├── sweeps
│ │ └── LIDAR_TOP
│ └── v1.0-test
└── v1.0-trainval
├── maps
├── samples
│ └── LIDAR_TOP
├── sweeps
│ └── LIDAR_TOP
└── v1.0-trainval
v1.0-test/samples and v1.0-test/sweeps contain the lidar scans from part 2 of the dataset, v1.0-test/v1.0-test contains the metadata files, and v1.0-test/maps contains the data from the map expansion pack.
Similarly, v1.0-trainval/samples and v1.0-trainval/sweeps contain the lidar scans from part 1 of the dataset, v1.0-trainval/v1.0-trainval contains the metadata files, and v1.0-trainval/maps contains the data from the map expansion pack.
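As an optional sanity check, the layout can be verified with the nuScenes devkit (installing the devkit is an assumption, not a requirement of the steps above):

```python
# Optional check, assuming the nuscenes-devkit package is installed
# (pip install nuscenes-devkit). Loads the metadata to confirm the layout.
from nuscenes.nuscenes import NuScenes

root = "/path/to/NuScenes"                     # the folder described above
trainval = NuScenes(version="v1.0-trainval", dataroot=f"{root}/v1.0-trainval", verbose=True)
test = NuScenes(version="v1.0-test", dataroot=f"{root}/v1.0-test", verbose=True)
print(len(trainval.sample), "train/val samples,", len(test.sample), "test samples")
```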
In the parameters file config/nuscenes_parameters.yml, change the value of DATASET_PATH to the NuScenes folder described above, and the value of SAVE_PATH to the destination of the processed images.
Then run:
python -m atppnet.utils.process_nuscenes
After following the data preparation tutorial, the model can be trained in the following way:
The training script can be run by
python -m atppnet.train
using the parameters defined in config/parameters.yaml. Pass the flag --help if you want to see more options, such as resuming from a checkpoint or initializing the weights from a pre-trained model. A directory will be created in pcf/runs, which makes it easier to discriminate between different runs and to avoid overwriting existing logs. The script saves everything (the used config, logs and checkpoints) into a path pcf/runs/COMMIT/EXPERIMENT_DATE_TIME consisting of the current git commit ID (this allows you to check out the exact commit used for training), the specified experiment ID (pcf by default) and the date and time.
Example:
pcf/runs/7f1f6d4/pcf_20211106_140014
7f1f6d4: Git commit ID
pcf_20211106_140014: Experiment ID, date and time
The training script on the nuScenes dataset can be run by
python -m atppnet.train_nuscenes
Test your model by running
python -m atppnet.test -m COMMIT/EXPERIMENT_DATE_TIME
where COMMIT/EXPERIMENT_DATE_TIME is the relative path to your model in pcf/runs. Note: use the flag -s if you want to save the predicted point clouds for visualization, and -l if you want to test the model on a smaller amount of data.
Example:
python -m atppnet.test -m 7f1f6d4/pcf_20211106_140014
or
python -m atppnet.test -m 7f1f6d4/pcf_20211106_140014 -l 5 -s
if you want to test the model on 5 batches and save the resulting point clouds.
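For inspecting the saved predictions, a generic viewer such as Open3D can be used. The snippet below is only a sketch: the output location and file format (a .ply file is assumed here) depend on how the test script writes its results.

```python
# Illustrative only: path and format of the saved predictions are assumptions.
import open3d as o3d

pcd = o3d.io.read_point_cloud("/path/to/saved/prediction.ply")
print(pcd)                                     # prints the number of points
o3d.visualization.draw_geometries([pcd])       # opens an interactive viewer
```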
Test your model on the nuScenes dataset by running
python -m atppnet.test_nuscenes -m COMMIT/EXPERIMENT_DATE_TIME
where COMMIT/EXPERIMENT_DATE_TIME is the relative path to your model in pcf/runs. Note: use the flag -s if you want to save the predicted point clouds for visualization, and -l if you want to test the model on a smaller amount of data.
Example:
python -m atppnet.test_nuscenes -m 7f1f6d4/pcf_20211106_140014
or
python -m atppnet.test_nuscenes -m 7f1f6d4/pcf_20211106_140014 -l 5 -s
if you want to test the model on 5 batches and save the resulting point clouds.
Please download the model file for the KITTI dataset from [here](https://drive.google.com/file/d/1szIkdw917Fc7WKzZQxiXU1K-lY24CogC/view?usp=sharing).
Please download the model file for the nuScenes dataset from [here](https://drive.google.com/file/d/153DMNjYsGhdHllKRKnru0rc2q3ZtGuq5/view?usp=drive_link).
The codebase in this repo has been built on top of the amazing codebase of TCNet by Benedikt Mersch, Andres Milioto, Christian Diller et al.