- Install the [PSBody Mesh package](https://github.com/MPI-IS/mesh/releases/tag/v0.3). We currently recommend version 0.3.
- `pip install -r requirements.txt`
- Download the [SMPL body model](https://smpl.is.tue.mpg.de/) (note: use version 1.0.0 with 10 shape PCs) and place the `.pkl` files for both genders in `/body_models/smpl/`. Follow the [instructions](https://github.com/vchoutas/smplx/blob/master/tools/README.md) to remove the Chumpy objects from both model pkls.
- `pip install numpy==1.16.2` (do this last to ensure `numpy==1.16.2`; a quick sanity check for the finished setup is sketched below).
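To verify the steps above before moving on, a minimal check along the following lines can help. This is only a sketch, not part of the repository: the filename `SMPL_MALE.pkl` is an assumption, so adjust the path to match the files you actually placed in `/body_models/smpl/`.

```python
# Sanity check for the installation steps above (a sketch, not part of the repo).
# Assumes the processed male model was saved as body_models/smpl/SMPL_MALE.pkl;
# adjust the path to match your own files.
import pickle

import numpy as np
from psbody.mesh import Mesh  # fails here if the PSBody Mesh package is missing

assert np.__version__ == '1.16.2', 'expected numpy 1.16.2, got ' + np.__version__

with open('body_models/smpl/SMPL_MALE.pkl', 'rb') as f:
    smpl = pickle.load(f, encoding='latin1')

# If the Chumpy objects were removed correctly, the template vertices come back
# as a plain numpy array rather than a chumpy array.
v_template = np.asarray(smpl['v_template'])
print('template vertices:', v_template.shape)  # SMPL has 6890 vertices
Mesh(v=v_template, f=smpl['f']).write_obj('smpl_template.obj')
```

If the pickle loads without `chumpy` installed and the `.obj` file writes cleanly, the body model and dependencies are ready for the demo.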
## Quick demo
### Related projects
[SCALE (CVPR 2021)](https://qianlim.github.io/SCALE): We use a novel explicit representation, hundreds of local surface patches, to model pose-dependent deformation of humans in clothing, including those wearing jackets and skirts!
[SCANimate (CVPR 2021)](https://scanimate.is.tue.mpg.de): Trained on the CAPE dataset, SCANimate uses implicit functions to build avatars directly from *raw* scans, with pose-dependent clothing deformation and without the need for surface registration or a clothing/body template. Check it out!
[CoMA (ECCV 2018)](https://coma.is.tue.mpg.de/): Our (non-conditional) convolutional mesh autoencoder for modeling extreme facial expressions. The code in this repository is based on the [CoMA repository](https://github.com/anuragranj/coma). If you find the code here useful, please consider also citing CoMA.
[ClothCap (SIGGRAPH 2017)](http://clothcap.is.tue.mpg.de/): Our method of capturing and registering clothed humans from 4D scans. The *CAPE dataset* released with our paper incorporates the scans and registrations from ClothCap. Check out our [project website](https://cape.is.tue.mpg.de/dataset) for the data!