A question about the inference #31
@youyou0805 Thanks for your interest in our project. I do not see any errors in your screenshot, but two things look strange: (1) you only provided one pair of image and mask, and it is better to specify a single GPU to use; (2) currently, the publicly available code only supports a resolution of 256x256. You can have a try after fixing these two things.
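For reference, a minimal sketch of such a command, assuming the GPU is pinned via the standard CUDA_VISIBLE_DEVICES environment variable (one common convention; the script may also expose its own GPU flag) and reusing the flags from the command quoted later in this thread:

```
# Pin inference to a single GPU (GPU 0 here) and use the supported
# 256x256 resolution; the flags mirror the command quoted below.
CUDA_VISIBLE_DEVICES=0 python scripts/inference_inpainting.py \
    --func inference_inpainting \
    --name transformer_ffhq \
    --image_dir data/image.png \
    --mask_dir data/mask.png \
    --save_dir sample \
    --input_res 256,256
```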
Thanks for your reply, my question has been resolved!
Hello, how can I keep the size of the original image? Right now the image output by the Simpler Inference method is only 256x256, which is not very clear. Thank you.
Hi @boyu-chen-intern, the P-VQVAE is compatible with different image sizes, but the UQ-Transformer is dedicated to sequences of length 1024 = 32x32. Hence, the model cannot inpaint images of any size other than 256x256.
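A possible workaround for keeping the original size is sketched below. It is only a sketch under assumptions: `run_inpainting_256` is a hypothetical stand-in for the repo's actual inference call, and upscaling the result will be blurry for large originals, since the model only ever generates 256x256 pixels:

```python
from PIL import Image

def inpaint_keep_size(image_path, mask_path, run_inpainting_256):
    """Resize inputs to 256x256 for the model, then upscale the result back.

    `run_inpainting_256` is a hypothetical callable standing in for the
    repo's actual inference: it should map a 256x256 RGB image and a
    256x256 mask to a 256x256 inpainted PIL image.
    """
    img = Image.open(image_path).convert("RGB")
    mask = Image.open(mask_path).convert("L")
    orig_size = img.size  # (width, height)

    # The UQ-Transformer only handles 32x32 = 1024 tokens, i.e. 256x256 input.
    small_img = img.resize((256, 256), Image.BICUBIC)
    small_mask = mask.resize((256, 256), Image.NEAREST)

    result = run_inpainting_256(small_img, small_mask)

    # Upscale back to the original size; detail beyond 256x256 cannot be
    # recovered this way.
    return result.resize(orig_size, Image.BICUBIC)
```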
Thank you for your reply and wonderful work |
Hello,
Thanks for the excellent code. I am facing a problem with inference. When I run the command below for image inpainting with the provided transformer model:
```
python scripts/inference_inpainting.py --func inference_inpainting --name transformer_ffhq --image_dir data/image.png --mask_dir data/mask.png --save_dir sample --input_res 512,512
```
The output is two blank txt files, as shown in the screenshot below:
[screenshot: the two blank .txt output files]
Could you help me identify where the problem might be occurring? Your help is greatly appreciated!