
Initial public code release
kylevedder committed Jun 15, 2023
0 parents commit 803a87c
Showing 165 changed files with 14,369 additions and 0 deletions.
22 changes: 22 additions & 0 deletions .gitignore
@@ -0,0 +1,22 @@
.mypy_cache/
.vscode/
**/__pycache__/
docker_history.txt
work_dirs/
model_checkpoints/
model_checkpoints_old/
tb_logs/
tb_logs*/
nsfp_split_configs/
lightning_logs/
kylevedder_offline_sceneflow_latest.sqsh
chamfer_investigation/merged_dicts.pkl
validation_results/
validation_results*/
**.pkl
**.png
job_dir/
.pl_auto_save.ckpt
cluster_tb_logs/
*_tb_logs/
**.zip
75 changes: 75 additions & 0 deletions GETTING_STARTED.md
@@ -0,0 +1,75 @@
# How to use this code

## File system assumptions

### Argoverse 2

Somewhere on disk, have an `argoverse2/` folder so that the downloaded files live inside

```
argoverse2/train
argoverse2/val
argoverse2/test
```

and generate the train and val supervision labels into

```
argoverse2/train_sceneflow
argoverse2/val_sceneflow
```


Use the [Argoverse 2 Scene Flow generation script](https://github.com/nchodosh/argoverse2-sf) to compute ground truth flows for both `train/` and `val/`.
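
As a quick sanity check, a minimal shell sketch like the one below can confirm the expected layout (the `DATA_ROOT` variable is a placeholder for wherever your `argoverse2/` folder lives):

```
# Minimal sketch: verify the expected Argoverse 2 directory layout.
DATA_ROOT=${DATA_ROOT:-/path/to/datasets}   # placeholder -- point this at your own data folder
for split in train val test train_sceneflow val_sceneflow; do
    if [ -d "$DATA_ROOT/argoverse2/$split" ]; then
        echo "OK:      argoverse2/$split"
    else
        echo "MISSING: argoverse2/$split"
    fi
done
```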

### Waymo Open

Download the Scene Flow labels contributed by _Scalable Scene Flow from Point Clouds in the Real World_. We preprocess these files, both to convert them from an annoying proto file format to a standard Python format and to remove the ground points.

Do this using the `data_prep_scripts/waymo/extract_flow_and_remove_ground.py` script inside the Waymo Open docker container.
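
The overall flow looks something like the hedged sketch below; the extraction script's actual arguments aren't documented here, so inspect its argument parser (e.g. via `--help`, if it uses argparse) before running anything.

```
# Sketch only -- the real arguments are defined by the script itself.
./launch_waymo.sh    # enter the Waymo preprocessing container (see Docker Images below)
# Inside the container:
python data_prep_scripts/waymo/extract_flow_and_remove_ground.py --help
# The processed output should live under waymo_open_processed_flow/ so that the
# main image can later read it at /efs/waymo_open_processed_flow/.
```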

## Docker Images

This project provides three docker images, each serving a different purpose.

Each image has an associated convenience launch script. You must edit this script so that its mount flags (the `-v` arguments) point to your Argoverse 2 / Waymo Open data locations. As an example,

```
-v `pwd`:/project
```

runs `pwd` inside the script to get the current directory and mounts it as `/project` inside the container. You must edit the `/efs/` mount, i.e.

```
-v /efs:/efs
```

so that the source points to the folder containing your `argoverse2/` and `waymo_open_processed_flow/` directories. As an example, on our cluster I have `~/datasets/argoverse2` and `~/datasets/waymo_open_processed_flow`, so I would modify this mount to be

```
-v $HOME/datasets:/efs
```

It's important that, once inside the docker container, the path to the Argoverse 2 dataset is `/efs/argoverse2/...` and the path to Waymo Open is `/efs/waymo_open_processed_flow/...`.
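
For concreteness, a hedged excerpt of how those mount flags might look in a full `docker run` invocation is below; only the `-v` flags are the point here, and the remaining flags and image name should mirror whatever the actual launch script does.

```
# Illustrative excerpt only -- mirror the real flags in ./launch.sh.
docker run --gpus all -it --rm \
    -v "$(pwd)":/project \
    -v "$HOME/datasets":/efs \
    kylevedder/zeroflow
```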

### Main image:

Built with `docker/Dockerfile` [[dockerhub](https://hub.docker.com/repository/docker/kylevedder/zeroflow)]

Convenience launch script is `./launch.sh`. Make sure that the Argoverse 2 and preprocessed Waymo Open data are mounted inside the container at `/efs`.

### Waymo preprocessing image:

Built with `docker/Dockerfilewaymo` [[dockerhub](https://hub.docker.com/repository/docker/kylevedder/zeroflow_waymo)]. Includes Tensorflow and other dependencies to preprocess the raw Waymo Open format and convert it to a standard format readable in the main image.

Convenience launch script is `./launch_waymo.sh`.

### AV2 challenge submission image:

Built with `docker/Dockerfileav2` [[dockerhub](https://hub.docker.com/repository/docker/kylevedder/zeroflow_av2)]. Based on the main image, but includes the [AV2 API](https://github.com/argoverse/av2-api).

Convenience launch script is `./launch_av2.sh`.

## Setting up the base system

The base system must have CUDA 11.3+ and [NVidia Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker) installed.
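
A quick, hedged way to verify the base system before building any of the images (the CUDA image tag below is just an example):

```
# Host-side check of the driver and CUDA version.
nvidia-smi
# Check that the NVIDIA Container Toolkit exposes GPUs inside containers.
# (The image tag is illustrative; any nvidia/cuda image works for this test.)
docker run --rm --gpus all nvidia/cuda:11.3.1-base-ubuntu20.04 nvidia-smi
```
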
20 changes: 20 additions & 0 deletions LICENSE
@@ -0,0 +1,20 @@
Copyright (c) 2023 Kyle Vedder

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
52 changes: 52 additions & 0 deletions README.md
@@ -0,0 +1,52 @@
# ZeroFlow: Fast Zero Label Scene Flow via Distillation

[Kyle Vedder](http://vedder.io), [Neehar Peri](http://www.neeharperi.com/), [Nathaniel Chodosh](https://scholar.google.com/citations?user=b4qKr7gAAAAJ&hl=en), [Ishan Khatri](https://ishan.khatri.io/), [Eric Eaton](https://www.seas.upenn.edu/~eeaton/), [Dinesh Jayaraman](https://www.seas.upenn.edu/~dineshj/), [Yang Liu](https://youngleox.github.io/), [Deva Ramanan](https://www.cs.cmu.edu/~deva/), and [James Hays](https://faculty.cc.gatech.edu/~hays/)

Project webpage: [vedder.io/zeroflow](http://vedder.io/zeroflow)

arXiv link: [arxiv.org/abs/2305.10424](http://arxiv.org/abs/2305.10424)

**Citation:**

```
@article{Vedder2023zeroflow,
author = {Kyle Vedder and Neehar Peri and Nathaniel Chodosh and Ishan Khatri and Eric Eaton and Dinesh Jayaraman and Yang Liu and Deva Ramanan and James Hays},
title = {{ZeroFlow: Fast Zero Label Scene Flow via Distillation}},
journal = {arXiv},
year = {2023},
}
```

## Pre-requisites / Getting Started

Read the [Getting Started](./GETTING_STARTED.md) doc for detailed instructions on setting up the AV2 and Waymo Open datasets and using the prepared docker environments.

## Training a model

Inside the main container (`./launch.sh`), run `train_pl.py` with a path to a config (inside `configs/`) and optionally specify the number of GPUs (defaults to all GPUs on the system).

```
python train_pl.py <my config path> --gpus <num gpus>
```

The script will start by verifying the val dataloader works, and then launch the train job.
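
For example, a concrete invocation might look like the sketch below; the config path is purely illustrative, so substitute a file that actually exists under `configs/`.

```
# Illustrative only -- the config path is a placeholder, not a real file name.
python train_pl.py configs/fastflow3d/argo/my_config.py --gpus 4
```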

## Testing a model

Inside the main container (`./launch.sh`), run `test_pl.py` with a path to a config (inside `configs/`), a path to a checkpoint, and optionally the number of GPUs (defaults to a single GPU).

```
python test_pl.py <my config path> <my checkpoint path> --gpus <num gpus>
```
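
A concrete but hypothetical invocation is sketched below (both paths are placeholders); the resulting files land under `validation_results/`, which the plotting step below consumes.

```
# Placeholders only -- point these at a real config and checkpoint.
python test_pl.py configs/fastflow3d/argo/my_config.py model_checkpoints/my_checkpoint.ckpt --gpus 1
```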

## Generating paper plots

After all relevant checkpoints have been tested (generating result files in `validation_results/configs/...`), run `plot_performance.py` to generate the figures and tables used in the paper.

## Submitting to the AV2 Scene Flow competition

1. Dump the outputs of the model:
   a. `configs/fastflow3d/argo/nsfp_distilatation_dump_output.py` to dump the `val` set results
   b. `configs/fastflow3d/argo/nsfp_distilatation_dump_output_test.py` to dump the `test` set results
2. Convert the dumped outputs to the competition submission format (`av2_scene_flow_competition_submit.py`).
3. Build the official submission zip with `make_submission_archive.py` (`python /av2-api/src/av2/evaluation/scene_flow/make_submission_archive.py <path to step 2 results> /efs/argoverse2/test_official_masks.zip`), as sketched below.
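
Putting it together, a hedged sketch of the whole pipeline is below. The checkpoint path is a placeholder, the use of `test_pl.py` to consume the dump configs is an assumption, and `av2_scene_flow_competition_submit.py` is shown without arguments (check the script for what it actually takes).

```
# Hedged sketch of the three steps above; adapt paths and arguments to your setup.
# Step 1: dump model outputs (assumes test_pl.py consumes the dump configs).
python test_pl.py configs/fastflow3d/argo/nsfp_distilatation_dump_output.py <my checkpoint path> --gpus 1
python test_pl.py configs/fastflow3d/argo/nsfp_distilatation_dump_output_test.py <my checkpoint path> --gpus 1
# Step 2: convert the dumped outputs to the competition submission format.
python av2_scene_flow_competition_submit.py
# Step 3: build the official submission archive (run inside the AV2 container, ./launch_av2.sh).
python /av2-api/src/av2/evaluation/scene_flow/make_submission_archive.py \
    <path to step 2 results> /efs/argoverse2/test_official_masks.zip
```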