Change the output channel #22
Comments
Hello, Best
Hi @giddyyupp, I am getting the same error. These are the parameters I am using: … It happens only when I add the flag … and this is the error I get: … Thanks
Hello @giddyyupp, I have a quick question regarding the input image size requirement. Could you kindly provide some information about the preferred dimensions or aspect ratio that the images should have? I have tried using an image with dimensions of 4000x2250, which was accepted without any issues. However, I encountered difficulties when using images with dimensions such as 4496x2776. It would be helpful to know the specific aspect ratio or guidelines for the image dimensions. Thank you in advance for your assistance.
Hello, I guess I need to update the transform part or the whole base_dataset.py :( |
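If it helps, here is a sketch of what such a change could look like. It assumes the usual torchvision pipeline that `get_transform()` in base_dataset.py builds, and the multiple of 32 is an assumption; use 2 to the power of the number of stride-2 stages in the generator you actually run.

```python
# Hedged sketch, not the repository's actual base_dataset.py: pad each image
# up to the nearest multiple of `base` so every stride-2 stage halves cleanly
# and the skip connections line up again.
import torchvision.transforms as transforms
from PIL import Image

def pad_to_multiple(img: Image.Image, base: int = 32) -> Image.Image:
    """Pad the right/bottom edges so width and height are multiples of `base`."""
    w, h = img.size
    new_w = ((w + base - 1) // base) * base
    new_h = ((h + base - 1) // base) * base
    if (new_w, new_h) == (w, h):
        return img
    padded = Image.new(img.mode, (new_w, new_h))  # new pixels default to black
    padded.paste(img, (0, 0))
    return padded

# e.g. appended to whatever transform list get_transform() already builds:
transform = transforms.Compose([
    transforms.Lambda(lambda img: pad_to_multiple(img, base=32)),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
```

Padding at the right/bottom keeps the original content untouched, and cropping the network output back to the original size afterwards undoes it.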
Hello. Thank you for the contribution!
I have a novice question about the output channel. I changed the output channel in base_options.py to 1 since my inputs are grayscale images, but this error is thrown: RuntimeError: The size of tensor a (32) must match the size of tensor b (31) at non-singleton dimension 3
I can't figure out why this happens. Could you please tell me what the reason might be and what I should change in networks.py?
Thank you very much!
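For anyone hitting the same error: the mismatch at dimension 3 refers to the spatial width, not the channel count, so changing output_nc is probably not the culprit. It usually means the input width (or the size the preprocessing resizes to) is not divisible by the generator's total downsampling factor, so one stride-2 stage rounds down and the upsampled feature no longer lines up with its skip/lateral connection. A minimal sketch with made-up layers (not the repository's networks.py) that reproduces exactly this message:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 1, 64, 62)  # width 62 is divisible by 2 but not by 4

down1 = nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1)
down2 = nn.Conv2d(8, 8, kernel_size=3, stride=2, padding=1)

c1 = down1(x)                           # width: (62 + 2 - 3) // 2 + 1 = 31
c2 = down2(c1)                          # width: (31 + 2 - 3) // 2 + 1 = 16
up = F.interpolate(c2, scale_factor=2)  # width: 16 * 2 = 32

print(up.shape[-1], c1.shape[-1])       # 32 31
try:
    _ = up + c1                         # skip connection: widths disagree
except RuntimeError as e:
    print(e)  # The size of tensor a (32) must match the size of tensor b (31)
              # at non-singleton dimension 3
```

Resizing or padding the input so that width and height are divisible by the required power of two (see the transform sketch earlier in this thread) makes the two branches agree again.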