settled some errors and add the environment.yaml file #46

Open · wants to merge 9 commits into master
23 changes: 16 additions & 7 deletions README.md
@@ -13,13 +13,20 @@ In this repo, we show an example of the model on the NTU-RGB+D dataset.
* pyyaml
* argparse
* numpy
* torch 1.7.1

# Environments
We use an input/output interface and system configuration similar to ST-GCN's, so the torchlight module must be set up first.
```
cd torchlight
cp torchlight/__init__.py torchlight/gpu.py torchlight/io.py ../
```
Then change every `from torchlight import ...` statement in the code to `from torchlight.io import ...`.
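To apply this rename across the whole checkout at once, a small stdlib-only script is enough. This is a sketch of our own; `rewrite_imports` and the root path are not part of this repo:

```python
# Bulk-rewrite "from torchlight import ..." to "from torchlight.io import ..."
# in every .py file under a directory tree. Hypothetical helper, not repo code.
from pathlib import Path

def rewrite_imports(root: str) -> int:
    """Rewrite torchlight imports in place; return the number of files changed."""
    changed = 0
    for py in Path(root).rglob("*.py"):
        text = py.read_text()
        new = text.replace("from torchlight import", "from torchlight.io import")
        if new != text:
            py.write_text(new)
            changed += 1
    return changed
```

Run it once from the repo root (e.g. `rewrite_imports(".")`); a second run changes nothing, so it is safe to repeat.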

Run
```
-cd torchlight, python setup.py, cd ..
+cd torchlight && python setup.py install && cd ..
```


@@ -30,22 +37,24 @@ For the NTU-RGB+D dataset, you can download it from [NTU-RGB+D](http://rose1.ntu.edu
```
Then run the preprocessing program to generate the input data; this step is required before training.
```
-python ./data_gen/ntu_gen_preprocess.py
+cd data_gen
+python ntu_gen_preprocess.py
```
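Before launching training it is worth confirming that preprocessing actually produced the tensors. The output paths depend on the config inside `ntu_gen_preprocess.py`, and the `(N, C, T, V, M)` layout below is the usual ST-GCN convention rather than something this PR states, so treat this checker as a sketch:

```python
import numpy as np

def check_split(data_path: str) -> tuple:
    """Memory-map a generated .npy tensor and return its shape without loading
    the whole array. ST-GCN-style data is (N, C, T, V, M): samples, channels,
    frames, joints, bodies."""
    return np.load(data_path, mmap_mode="r").shape
```

For NTU-RGB+D you would expect `V = 25` joints and up to `M = 2` bodies in the last two dimensions.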

# Training and Testing
With this repo, you first pretrain AIM and save the module; then you train the main pipeline of AS-GCN. For the recommended Cross-Subject benchmark of NTU-RGB+D,
```
-PretrainAIM: python main.py recognition -c config/as_gcn/ntu-xsub/train_aim.yaml
-TrainMainPipeline: python main.py recognition -c config/as_gcn/ntu-xsub/train.yaml
+PretrainAIM: python main.py recognition -c config/as_gcn/ntu-xsub/train_aim.yaml --device 0 1 2
+TrainMainPipeline: python main.py recognition -c config/as_gcn/ntu-xsub/train.yaml --device 0 --batch_size 4
+# the main pipeline can only use one GPU; otherwise it fails with "Caught RuntimeError in replica 0 on device 0"
 Test: python main.py recognition -c config/as_gcn/ntu-xsub/test.yaml
```
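The `--device` flag above accepts one or more GPU ids. A processor built on the ST-GCN/torchlight interface typically declares it with `nargs='+'`; this is a sketch of how such parsing behaves, not the repo's exact code:

```python
import argparse

parser = argparse.ArgumentParser(description="AS-GCN style launcher (sketch)")
parser.add_argument("--device", type=int, nargs="+", default=[0],
                    help="indexes of GPUs used for training or testing")
parser.add_argument("--batch_size", type=int, default=32)

# "--device 0 1 2" becomes the list [0, 1, 2]; per the note above, give the
# main pipeline a single id to avoid the replica RuntimeError.
args = parser.parse_args(["--device", "0", "1", "2", "--batch_size", "4"])
print(args.device, args.batch_size)  # prints: [0, 1, 2] 4
```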

For Cross-View,
```
-PretrainAIM: python main.py recognition -c config/as_gcn/ntu-xsub/train_aim.yaml
-TrainMainPipeline: python main.py recognition -c config/as_gcn/ntu-xsub/train.yaml
-Test: python main.py recognition -c config/as_gcn/ntu-xsub/test.yaml
+PretrainAIM: python main.py recognition -c config/as_gcn/ntu-xview/train_aim.yaml
+TrainMainPipeline: python main.py recognition -c config/as_gcn/ntu-xview/train.yaml
+Test: python main.py recognition -c config/as_gcn/ntu-xview/test.yaml
```
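The three stages must run in this order, since AIM pretraining produces the module that the main pipeline loads. A minimal driver that builds the command lines is sketched below; `stage_cmd` is our hypothetical helper, and actually running the commands assumes `main.py` sits in the repo root:

```python
import sys

def stage_cmd(config: str, *extra: str) -> list:
    """Build the command line for one training/testing stage."""
    return [sys.executable, "main.py", "recognition", "-c", config, *extra]

# Cross-View schedule, in order; pass each list to subprocess.run(cmd, check=True)
# so a failed stage stops the run before the next one starts.
stages = [
    stage_cmd("config/as_gcn/ntu-xview/train_aim.yaml"),
    stage_cmd("config/as_gcn/ntu-xview/train.yaml", "--device", "0"),
    stage_cmd("config/as_gcn/ntu-xview/test.yaml"),
]
```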

# Acknowledgement
85 changes: 85 additions & 0 deletions asgcn_3090_cuda11_1_environment.yml
@@ -0,0 +1,85 @@
name: asgcn
channels:
- pytorch
- https://mirrors.ustc.edu.cn/anaconda/pkgs/main
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- blas=1.0=mkl
- ca-certificates=2021.4.13=h06a4308_1
- certifi=2020.12.5=py36h06a4308_0
- cffi=1.14.5=py36h261ae71_0
- cuda90=1.0=h6433d27_0
- cudatoolkit=10.0.130=0
- cudnn=7.6.5=cuda10.0_0
- cycler=0.10.0=py36_0
- dbus=1.13.18=hb2f20db_0
- expat=2.3.0=h2531618_2
- fontconfig=2.13.1=h6c09931_0
- freetype=2.10.4=h5ab3b9f_0
- glib=2.68.1=h36276a3_0
- gst-plugins-base=1.14.0=h8213a91_2
- gstreamer=1.14.0=h28cd5cc_2
- icu=58.2=he6710b0_3
- intel-openmp=2019.4=243
- jpeg=9b=h024ee3a_2
- kiwisolver=1.3.1=py36h2531618_0
- lcms2=2.11=h396b838_0
- ld_impl_linux-64=2.33.1=h53a641e_7
- libffi=3.3=he6710b0_2
- libgcc-ng=9.1.0=hdf63c60_0
- libgfortran-ng=7.3.0=hdf63c60_0
- libpng=1.6.37=hbc83047_0
- libstdcxx-ng=9.1.0=hdf63c60_0
- libtiff=4.2.0=h3942068_0
- libuuid=1.0.3=h1bed415_2
- libwebp-base=1.2.0=h27cfd23_0
- libxcb=1.14=h7b6447c_0
- libxml2=2.9.10=hb55368b_3
- lz4-c=1.9.3=h2531618_0
- matplotlib=3.3.2=h06a4308_0
- matplotlib-base=3.3.2=py36h817c723_0
- mkl=2018.0.3=1
- mkl_fft=1.0.6=py36h7dd41cf_0
- mkl_random=1.0.1=py36h4414c95_1
- ncurses=6.2=he6710b0_1
- ninja=1.10.2=py36hff7bd54_0
- olefile=0.46=py36_0
- openssl=1.1.1k=h27cfd23_0
- pcre=8.44=he6710b0_0
- pillow=8.1.2=py36he98fc37_0
- pip=21.0.1=py36h06a4308_0
- pycparser=2.20=py_2
- pyparsing=2.4.7=pyhd3eb1b0_0
- pyqt=5.9.2=py36h05f1152_2
- python=3.6.13=hdb3f193_0
- python-dateutil=2.8.1=pyhd3eb1b0_0
- qt=5.9.7=h5867ecd_1
- readline=8.1=h27cfd23_0
- setuptools=52.0.0=py36h06a4308_0
- sip=4.19.8=py36hf484d3e_0
- six=1.15.0=py36h06a4308_0
- sqlite=3.35.1=hdfb4753_0
- tbb=2021.2.0=hff7bd54_0
- tbb4py=2021.2.0=py36hff7bd54_0
- tk=8.6.10=hbc83047_0
- tornado=6.1=py36h27cfd23_0
- wheel=0.36.2=pyhd3eb1b0_0
- xz=5.2.5=h7b6447c_0
- zlib=1.2.11=h7b6447c_3
- zstd=1.4.5=h9ceee32_0
- pip:
  - argparse==1.4.0
  - cached-property==1.5.2
  - dataclasses==0.8
  - h5py==3.1.0
  - imageio==2.9.0
  - numpy==1.19.5
  - opencv-python==4.5.1.48
  - pyyaml==6.0
  - scikit-video==1.1.11
  - scipy==1.5.4
  - torch==1.8.1+cu111
  - torchvision==0.9.1+cu111
  - tqdm==4.60.0
  - typing-extensions==3.7.4.3
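The file above is a standard conda export; `conda env create -f asgcn_3090_cuda11_1_environment.yml` followed by `conda activate asgcn` reproduces the environment. To double-check the pip pins afterwards (for example that `torch==1.8.1+cu111` matches a CUDA 11.1 driver), a stdlib-only reader is enough; `pip_pins` is our hypothetical helper, not part of the PR:

```python
def pip_pins(lines) -> dict:
    """Return {package: version} for the '- pip:' section of a conda env file."""
    pins, in_pip = {}, False
    for raw in lines:
        stripped = raw.strip()
        if stripped == "- pip:":
            in_pip = True
            continue
        if in_pip:
            if not stripped.startswith("- ") or "==" not in stripped:
                break  # left the pip block
            name, version = stripped[2:].split("==", 1)
            pins[name] = version
    return pins

# usage (assumes the file is in the current directory):
#   with open("asgcn_3090_cuda11_1_environment.yml") as f:
#       pins = pip_pins(f)
#   print(pins.get("torch"))
```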