This is the official implementation of Towards Garment Sewing Pattern Reconstruction from a Single Image.
Lijuan Liu\*, Xiangyu Xu\*, Zhijie Lin\*, Jiabin Liang\*, Shuicheng Yan†

ACM Transactions on Graphics (SIGGRAPH Asia 2023)
- Clone this repository to `path_to_dev` and `cd path_to_dev/Sewformer`, then download the pre-trained checkpoint and put it into `assets/ckpts`.
- The environment can be initialized with `conda env create -f environment.yaml`. Then you can activate the environment with `conda activate garment`.
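  A minimal sketch of these setup steps, assuming the repository URL and the checkpoint filename provided on the project page (both are placeholders below):

  ```bash
  # Clone the code into path_to_dev and enter the Sewformer directory
  # (<repository_url> is a placeholder for the project's actual URL).
  git clone <repository_url> path_to_dev
  cd path_to_dev/Sewformer

  # Put the downloaded pre-trained checkpoint under assets/ckpts
  # (<checkpoint>.pth is a placeholder for the provided file).
  mkdir -p assets/ckpts
  mv <checkpoint>.pth assets/ckpts/

  # Create and activate the conda environment.
  conda env create -f environment.yaml
  conda activate garment
  ```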
- Download our provided dataset and put it into `path_to_sewfactory`, and update the local paths in `system.json` to make sure the dataset is set up correctly (see the sketch below).
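  A minimal sketch of updating those paths with the standard `json` module; the key name `datasets_path` is an assumption for illustration, while `output` is the key referenced in the training step below, so verify both against the shipped `system.json`:

  ```bash
  # Rewrite the path entries in system.json to point at local copies.
  # NOTE: "datasets_path" is an assumed key name; check the shipped system.json
  # and edit the keys it actually contains. "path_to_outputs" is a placeholder.
  python - <<'PY'
  import json

  with open("system.json") as f:
      cfg = json.load(f)

  cfg["datasets_path"] = "path_to_sewfactory"  # root of the downloaded dataset (assumed key)
  cfg["output"] = "path_to_outputs"            # where training/inference results are written

  with open("system.json", "w") as f:
      json.dump(cfg, f, indent=2)
  PY
  ```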
- Train the model with `torchrun --standalone --nnodes=1 --nproc_per_node=1 train.py -c configs/train.yaml`. The output will be located at the `output` path specified in `system.json`.
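  The same entry point can be launched on several GPUs of a single machine by raising torchrun's `--nproc_per_node`; the sketch below assumes a 4-GPU node and that the batch size in `configs/train.yaml` is adjusted accordingly:

  ```bash
  # Launch one training process per GPU on a single 4-GPU node.
  torchrun --standalone --nnodes=1 --nproc_per_node=4 train.py -c configs/train.yaml
  ```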
- Infer sewing patterns with the pretrained model:
  - Evaluate on the SewFactory dataset: `torchrun --standalone --nnodes=1 --nproc_per_node=1 train.py -c configs/train.yaml -t`
  - Run inference on real images (e.g. from DeepFashion): `python inference.py -c configs/test.yaml -d assets/data/deepfashion -t deepfashion -o outputs/deepfashion`
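    Assuming `-d` accepts any folder of RGB images in the same way as the DeepFashion example (the input and output directories below are placeholders, and reusing `-t deepfashion` for custom photos is an assumption), your own images can be processed with the same command:

    ```bash
    # Predict sewing patterns for a custom folder of photos.
    # path_to_your_images and outputs/your_images are placeholders.
    python inference.py -c configs/test.yaml -d path_to_your_images -t deepfashion -o outputs/your_images
    ```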
- Simulate the predicted results (Windows): `cd path_to_dev/SewFactory` and run `path_to_maya\bin\mayapy.exe .\data_generator\deepfashion_sim.py` to simulate the predicted sewing patterns. (Please prepare the SMPL prediction results with RSC-Net and update the predicted data root specified in `deepfashion_sim.py`.)

See more details about the SewFactory dataset and the simulation here.
Please cite this paper if you find the code/model helpful in your research:
@article{liu2023sewformer,
author = {Liu, Lijuan and Xu, Xiangyu and Lin, Zhijie and Liang, Jiabin and Yan, Shuicheng},
title = {Towards Garment Sewing Pattern Reconstruction from a Single Image},
journal = {ACM Transactions on Graphics (SIGGRAPH Asia)},
year = {2023}
}