Project Page - https://gamma.umd.edu/spectralcows
Please cite our work if you found it useful.
```
@article{chandra2020forecasting,
  title={Forecasting trajectory and behavior of road-agents using spectral clustering in graph-lstms},
  author={Chandra, Rohan and Guan, Tianrui and Panuganti, Srujan and Mittal, Trisha and Bhattacharya, Uttaran and Bera, Aniket and Manocha, Dinesh},
  journal={IEEE Robotics and Automation Letters},
  year={2020},
  publisher={IEEE}
}
```
Important - This repo is no longer under active maintenance. Also, please note that the results currently produced by the code are normalized RMSE values, not values in meters. Furthermore, the trained models provided in this codebase may not reproduce the results in the main paper.
- Paper - Forecasting Trajectory and Behavior of Road-Agents Using Spectral Clustering in Graph-LSTMs
- Repo Details and Contents
- How to Run
- Our network
Python version: 3.7
Please cite the methods below if you use them.
- TraPHic: Trajectory Prediction in Dense and Heterogeneous Traffic Using Weighted Interactions, CVPR'19
  Rohan Chandra, Uttaran Bhattacharya, Aniket Bera, Dinesh Manocha.
- Convolutional Social Pooling for Vehicle Trajectory Prediction, CVPRW'18
  Nachiket Deo and Mohan M. Trivedi.
- Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks, CVPR'18
  Agrim Gupta, Justin Johnson, Fei-Fei Li, Silvio Savarese, Alexandre Alahi.
- GRIP: Graph-based Interaction-aware Trajectory Prediction, ITSC'19
  Xin Li, Xiaowen Ying, Mooi Choo Chuah.
As the official implementation of the GRIP method was not available when this repo was created, the GRIP code provided here is our own effort to replicate the method to the best of our ability and does not necessarily reflect the authors' original implementation.
The original GRIP implementation by the authors is provided here. Please cite their paper if you use their method.
- Argoverse (input length: 20 & output length: 30)
- Apolloscape (input length: 6 & output length: 10)
- Lyft Level 5 (input length: 20 & output length: 30)
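
For reference, these observation/prediction horizons (in frames) can be collected in a small lookup table. This is purely illustrative; the dictionary below is our own notation, not an identifier from the codebase:

```python
# Observation (input) and prediction (output) lengths in frames,
# taken from the list above. Illustrative only; not defined in the code.
SEQ_LENS = {
    "argoverse":   {"obs": 20, "pred": 30},
    "apolloscape": {"obs": 6,  "pred": 10},
    "lyft":        {"obs": 20, "pred": 30},
}
```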
- Create a conda environment: `conda env create -f env.yml`
- Activate the environment: `conda activate sc-glstm`
- Download resources: `python setup.py`
- To run our one & two stream model:
  ```
  cd ours/
  python main.py
  ```
- To switch between the one-stream and two-stream models, set the variable `s1` in main.py to True or False.
- To change the model, change the `DATA` and `SUFIX` variables in main.py (see the sketch below).
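
Since these switches are plain module-level variables, configuration is done by editing main.py directly. A hypothetical excerpt (the variable names come from this README; the values shown are placeholders, not the actual options defined in the code):

```python
# ours/main.py (hypothetical excerpt -- values are placeholders)
s1 = True       # True: one-stream model; False: two-stream model
DATA = "..."    # selects the dataset
SUFIX = "..."   # selects the model variant ('SUFIX' is the spelling
                # actually used in the code)
```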
- To run the EncDec comparison method:
  ```
  cd comparison_methods/EncDec/
  python main.py
  ```
- To change the model, change the `DATA` and `SUFIX` variables in main.py.
- To run the GRIP comparison method:
  ```
  cd comparison_methods/GRIP/
  python main.py
  ```
- To change the model, change the `DATA` and `SUFIX` variables in main.py.
- To run the TraPHic/SC-LSTM comparison methods:
  ```
  cd comparison_methods/traphic_sconv/
  python main.py
  ```
- To change the model and method, change the `DATASET` and `PREDALGO` variables in main.py.
Note: When evaluating the trained models, the best results may differ from the reported error due to the different batch normalization applied to the network. To obtain the same numbers, the network may have to be changed manually.
Resources folder structure:
- data -- input and output of stream 1 & 2 (directly available in the resources folder)
- raw_data -- location of the raw data (put the downloaded datasets in this folder for processing)
- trained_model -- some saved models
Important steps if you plan to prepare the Argoverse, Lyft, and Apolloscape datasets from the raw data available on their websites:

- Run `data_processing/format_apolloscape.py` to format the downloaded Apolloscape data into our desired representation.
- Run `data_processing/format_lyft.py` to format the downloaded Lyft data into our desired representation.
- Run `data_processing/generate_data.py` to format the downloaded Argoverse trajectory data into our desired representation.
- Use `data_processing/data_stream.py` to generate input data for stream1 and stream2.
- Use the `generate_adjacency()` function in `data_processing/behaviors.py` to generate adjacency matrices.
- You must use the `add_behaviors_stream2()` function in `data_processing/behaviors.py` to add behavior labels to the stream2 data before supplying the data to the network. (A sketch of the overall pipeline follows this list.)
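
Put together, the preparation pipeline for one dataset might look like the following driver script. This is a minimal sketch: the script and function names come from this README, but the call signatures and paths below are assumptions, so check `data_processing/behaviors.py` before running it.

```python
# Hypothetical end-to-end preparation driver for one dataset (Apolloscape).
# Script/function names are from this README; the no-argument call
# signatures below are assumptions -- verify them in behaviors.py.
import subprocess
import sys

sys.path.append("data_processing")  # so behaviors.py is importable

# 1. Convert the raw download into the repo's representation.
subprocess.run([sys.executable, "data_processing/format_apolloscape.py"], check=True)

# 2. Build the stream1/stream2 inputs.
subprocess.run([sys.executable, "data_processing/data_stream.py"], check=True)

# 3-4. Adjacency matrices, then behavior labels for the stream2 data.
from behaviors import generate_adjacency, add_behaviors_stream2

generate_adjacency()       # assumed no-arg call; actual signature may differ
add_behaviors_stream2()    # must run before feeding stream2 data to the network
```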
Our code supports any dataset that contains trajectory information. Follow the steps below to integrate your dataset with our code.

The first step is to prepare your dataset in our format: a text file where each row contains 'Frame_ID', 'Agent_ID', 'X coordinate', 'Y coordinate', 'Dataset_ID'.
Make sure:

- The Frame_IDs range from `1 to n` and the Agent_IDs range from `1 to N`, where `n` is the total number of frames and `N` is the total number of agents. If your dataset uses a different convention for Frame_IDs (for example, some datasets use timestamps as Frame_IDs), you need to map those IDs to `1 to n`. If your dataset uses a different convention for Agent_IDs (for example, some datasets represent Agent_IDs as strings of characters), you need to map those IDs to `1 to N`. (A conversion sketch follows this list.)
- If the Frame_IDs and Agent_IDs of your dataset are already in the ranges `1 to n` and `1 to N`, make sure they are sequential and that there are no missing IDs.
- Dataset_IDs are used to differentiate different scenes/sets of the same dataset.
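
As an illustration, here is a minimal conversion sketch. It assumes your raw data is already loaded as a pandas DataFrame with hypothetical columns `frame`, `agent`, `x`, and `y`; it remaps frames and agents to sequential 1-based IDs and writes the text format described above (the space separator is an assumption).

```python
import pandas as pd

def to_repo_format(df: pd.DataFrame, dataset_id: int, out_path: str) -> None:
    """Remap IDs to sequential 1..n / 1..N and write rows of
    'Frame_ID Agent_ID X Y Dataset_ID' as described above.

    Assumes hypothetical input columns: frame, agent, x, y.
    """
    # Map arbitrary frame keys (e.g., timestamps) to 1..n, preserving order.
    frame_map = {f: i + 1 for i, f in enumerate(sorted(df["frame"].unique()))}
    # Map arbitrary agent keys (e.g., strings) to 1..N.
    agent_map = {a: i + 1 for i, a in enumerate(sorted(df["agent"].unique()))}

    out = pd.DataFrame({
        "Frame_ID": df["frame"].map(frame_map),
        "Agent_ID": df["agent"].map(agent_map),
        "X": df["x"],
        "Y": df["y"],
        "Dataset_ID": dataset_id,  # same value for every row of this scene/set
    })
    # Space-separated, no header/index; adjust the separator if the
    # repo's loaders expect a different delimiter.
    out.sort_values(["Frame_ID", "Agent_ID"]).to_csv(
        out_path, sep=" ", header=False, index=False
    )
```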
Next, run the `data_stream.py` file in `data_processing/`. This will generate the pickle files needed to run the `main.py` files for any method.
Mandatory precautions to take before running `data_stream.py`:

- Make sure you have taken all the mandatory precautions mentioned above for preparing your data.
- You must know the frame rate at which the trajectories of the vehicles were recorded, i.e., how many frames one second corresponds to. For example, if the frame rate is 2 Hz, each second corresponds to 2 frames in the dataset.
- You must set `train_seq_len` and `pred_seq_len` in `data_stream.py` appropriately, based on the frame rate. For example, if the frame rate is 2 Hz and you want to use 3 seconds as observation data, then `train_seq_len` would be `3*2 = 6`; if you want to use the next 5 seconds as prediction data, then `pred_seq_len` would be `5*2 = 10`. Make sure `frame_lenth_cap >= (train_seq_len + pred_seq_len)`. We use `frame_lenth_cap` to enforce that an `Agent_ID` is present/visible/seen in at least `frame_lenth_cap` frames. (A short sketch of this arithmetic follows this list.)
- If your dataset is very large, you may want to use only a few scenes/sets from it. Use the Dataset ID (`D_id`) list to tweak the values and shorten the amount of data.
- Assign a short keyword `XXXX` for naming your dataset.
- Expect to see multiple files generated in `./resources/DATA/XXXX/`, with names starting with `stream1_obs_data_`, `stream1_pred_data_`, `stream2_obs_data_`, `stream2_pred_data_`, `stream2_obs_eigs_`, `stream2_pred_eigs_`.
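
To make the sequence-length arithmetic above concrete, here is a minimal sketch. The names `train_seq_len`, `pred_seq_len`, and `frame_lenth_cap` are the variables in `data_stream.py` per this README; the frame rate and horizon values are illustrative:

```python
# Illustrative computation of the data_stream.py sequence-length settings.
fps = 2            # frame rate of the dataset, in Hz (example value)
obs_seconds = 3    # observation horizon in seconds (example value)
pred_seconds = 5   # prediction horizon in seconds (example value)

train_seq_len = obs_seconds * fps    # 3 * 2 = 6 observed frames
pred_seq_len = pred_seconds * fps    # 5 * 2 = 10 predicted frames

# frame_lenth_cap requires an agent to be visible in at least this many
# frames, so it must cover the full observed + predicted window.
frame_lenth_cap = 16                 # example value
assert frame_lenth_cap >= train_seq_len + pred_seq_len
```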