
A question about the inference #31

Open · youyou0805 opened this issue Jan 4, 2024 · 5 comments

@youyou0805

Hello,
Thanks for the excellent code. I am facing a problem with inference. When I run the command below for image inpainting with the provided transformer model:

python scripts/inference_inpainting.py --func inference_inpainting --name transformer_ffhq --image_dir data/image.png --mask_dir data/mask.png --save_dir sample --input_res 512,512

The output is two blank txt files, as shown in the figure below:
[screenshot: terminal output, ending with two blank txt files]
Could you help me identify where the problem might be occurring? Your help is greatly appreciated!

@liuqk3 (Owner) commented Jan 5, 2024

@youyou0805 Thanks for your interest in our project. I do not see any errors in the screenshot you provided, but two things look strange.

(1) You provided only one image and mask pair, so it is better to specify a single GPU to use (such as --gpu 0). It seems that the script found two GPUs on your machine.

(2) Currently, the publicly available code only supports a resolution of 256x256; 512x512 is not supported.

You can try again after fixing the above two things.
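
With both fixes applied (the --gpu flag suggested above, and the resolution changed to 256,256), the command would look something like:

python scripts/inference_inpainting.py --func inference_inpainting --name transformer_ffhq --image_dir data/image.png --mask_dir data/mask.png --save_dir sample --input_res 256,256 --gpu 0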

@youyou0805 (Author)

Thanks for your reply; my question has been resolved!

@boyu-chen-intern

Hello, how can I keep the size of the original image? The image output by calling the model with the Simpler Inference method is only 256x256, which is not very clear. Thank you.

@liuqk3 (Owner) commented Jan 9, 2024

Hi @boyu-chen-intern, the P-VQVAE is compatible with different image sizes, but the UQ-Transformer is dedicated to sequences of length 1024 = 32x32. Hence, the model cannot inpaint images at any size other than 256x256.
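
One workaround (not part of this repository) is to downscale the image and mask to 256x256, run the inpainting, and then upscale the result back to the original resolution, at the cost of some sharpness. A minimal sketch with PIL, where run_inpainting is a hypothetical callable wrapping the model's 256x256 inference:

from PIL import Image

def inpaint_keep_size(image_path, mask_path, run_inpainting):
    # run_inpainting is a hypothetical wrapper:
    # (256x256 RGB image, 256x256 mask) -> 256x256 inpainted result.
    image = Image.open(image_path).convert("RGB")
    mask = Image.open(mask_path).convert("L")
    orig_size = image.size  # (width, height)

    # The UQ-Transformer only accepts 256x256 inputs (a 32x32 token grid).
    small_image = image.resize((256, 256), Image.BICUBIC)
    small_mask = mask.resize((256, 256), Image.NEAREST)

    result = run_inpainting(small_image, small_mask)

    # Upscale the inpainted result back to the original resolution.
    return result.resize(orig_size, Image.BICUBIC)

A sharper variant pastes only the upscaled inpainted region back into the full-resolution original, so the unmasked pixels keep their original detail.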

@boyu-chen-intern

Thank you for your reply and the wonderful work!
