- Windows 10 or Linux
- Python 3.7
- NVIDIA GPU + CUDA CuDNN
- PyTorch 1.7.1 (or higher)
- Clone this repo:

  ```
  git clone https://github.com/VCL3D/PanoDR.git
  cd PanoDR
  ```
- We recommend setting up a virtual environment (follow the virtualenv documentation). Once your environment is set up and activated, install the `vcl3datlantis` package (see the quick check below):

  ```
  cd src/utils
  pip install -e .
  ```
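As an optional sanity check (not part of the original instructions), you can verify that PyTorch sees the GPU and that the installed package resolves, assuming its import name matches the `vcl3datlantis` distribution name:

```bash
# Optional sanity check: prints the PyTorch version, whether CUDA is visible,
# and confirms the vcl3datlantis package installed above can be imported.
python -c "import torch, vcl3datlantis; print(torch.__version__, torch.cuda.is_available())"
```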
We use the Structured3D dataset. To train a model, please download the dataset from the official website. We follow the official training, validation, and testing splits as defined by the authors. After downloading the dataset, split the scenes into train, validation, and test folders (a helper sketch for this is given after the layout below). The folders should have the following structure:
```
Structured3D/
    train/
        scene_00000
        scene_00001
        ...
    validation/
        scene_03000
        scene_03001
        ...
    test/
        scene_03250
        scene_03251
        ...
```
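A minimal sketch of this split, assuming the scene ranges implied by the listing above (scenes 0–2999 for train, 3000–3249 for validation, 3250 onwards for test) and that each downloaded scene sits in a top-level `scene_*` folder; verify the boundaries against the official split definition before running:

```bash
# Hypothetical helper (not part of the repo): move downloaded scene folders
# into train/validation/test sub-folders. The split boundaries are assumptions
# inferred from the listing above.
cd /path/to/Structured3D
mkdir -p train validation test
for d in scene_*; do
  [ -d "$d" ] || continue
  id=$((10#${d#scene_}))            # scene index as a base-10 integer
  if   [ "$id" -lt 3000 ]; then mv "$d" train/
  elif [ "$id" -lt 3250 ]; then mv "$d" validation/
  else mv "$d" test/
  fi
done
```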
In order to estimate the dense layout maps, specify the paths to the train and test folders and run:

```
python src\utils\vcl3datlantis\dataset\precompute_structure_semantics.py
```
In order to train the model, first specify the required parameters:

- `--train_path`: /../Structured3D/train/
- `--test_path`: /../Structured3D/test/
- `--results_path`: the folder where metrics are saved
- `--gt_results_path`: the folder where ground-truth images are saved for testing
- `--pred_results_path`: the folder where predicted images are saved for testing
- `--segmentation_model_chkpnt`: the path to the pre-trained dense layout estimation model
- `--model_folder`: the folder where checkpoints are saved
After starting visdom on the server:

```
python -m visdom.server
```

run:

```
python src/train/train.py --visdom
```
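For reference, a complete invocation could look like the sketch below; every path and checkpoint filename is a placeholder to adapt to your setup, and only the flags listed above plus `--visdom` are used:

```bash
# Example training command (all paths and filenames below are placeholders).
python src/train/train.py --visdom \
  --train_path /path/to/Structured3D/train/ \
  --test_path /path/to/Structured3D/test/ \
  --results_path results/metrics \
  --gt_results_path results/gt \
  --pred_results_path results/pred \
  --segmentation_model_chkpnt checkpoints/layout_model.pth \
  --model_folder checkpoints/panodr
```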
You can download the pre-trained models from here and point the `--eval_chkpnt_folder` and `--segmentation_model_chkpnt` arguments to them, respectively.
Assuming the input image and mask are in the same format as those in the `input` folder, run:

```
python src/train/test.py --inference --eval_path input/
```
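Combined with the pre-trained checkpoints mentioned above, a full inference call might look like the following sketch; the checkpoint paths are placeholders, and it assumes these two flags are also accepted by `test.py`:

```bash
# Example inference command (checkpoint paths are placeholders).
python src/train/test.py --inference --eval_path input/ \
  --eval_chkpnt_folder checkpoints/panodr \
  --segmentation_model_chkpnt checkpoints/layout_model.pth
```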
The model is also available via TorchServe. First, install the required dependencies:

```
cd service
pip install -r requirements.txt
```
Next, download the `.mar` file from here and place it under `service/model_store`. In order to serve the model using REST calls, run:
```
torchserve --start --ncs --model-store ./model_store --models panodr=/model_store/panodr.mar
```
Once the model is served, the endpoint is reachable at `http://IP:8080/predictions/panodr`, with `IP` as selected when configuring TorchServe (typically `localhost`, but more advanced configurations can also expose the model externally or make it reachable from other machines via the `inference_address` setting).
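As an optional check (not part of the original instructions), TorchServe's management API, which listens on port 8081 by default, can confirm that the model is registered:

```bash
# Optional: list registered models via TorchServe's management API
# (default port 8081; adjust the host/port to your configuration).
curl http://localhost:8081/models
```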
A server is provided for hosting inputs and saving the output files. It can be started via:

```
cd Imageserver\
python .\imageserver.py
```
All images are hosted at `http://IP:PORT`. Further, an endpoint at `http://IP:PORT/save/inpainted` is provided for obtaining the output files from the service.
The following fields have to be specified in the `inputs/request.json` file to call the service:

- `DataInputs["rgb"]`
- `DataInputs["mask"]`
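A hypothetical shape for this file is sketched below; the field values (URLs served by the image server above) and the exact schema beyond the two listed fields are assumptions, so adapt them to what the service actually expects:

```bash
# Hypothetical example of inputs/request.json (values are placeholders; the
# exact schema beyond DataInputs["rgb"] and DataInputs["mask"] is assumed).
# Run from the service/ directory.
cat > inputs/request.json <<'EOF'
{
  "DataInputs": {
    "rgb": "http://IP:PORT/input/rgb.png",
    "mask": "http://IP:PORT/input/mask.png"
  }
}
EOF
```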
Finally, to obtain predictions from the model, a JSON payload with the callback URLs needs to be POSTed. Simply run:

```
curl.exe -X POST http://IP:8080/predictions/panodr -H "Content-Type: application/json" -d @/PATH_TO/PanoDR/service/inputs/request.json
```
If you use this code for your research, please cite the following:
```
@inproceedings{gkitsas2021panodr,
  title={PanoDR: Spherical Panorama Diminished Reality for Indoor Scenes},
  author={Gkitsas, Vasileios and Sterzentsenko, Vladimiros and Zioulis, Nikolaos and Albanis, Georgios and Zarpalas, Dimitrios},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={3716--3726},
  year={2021}
}
```
This project has received funding from the European Union's Horizon 2020 research and innovation programme ATLANTIS under grant agreement No 951900.
Our code borrows from SEAN and deepfillv2.