Config file and documentation #1

Open
podelai opened this issue Feb 14, 2024 · 1 comment
Comments

@podelai

podelai commented Feb 14, 2024

Hello, I'm interested in using your proposed depth super-resolution for volume electron microscopy.
So far I've converted a stack of images to HDF5 (not forgetting to rename the dataset to "raw"). The dataset is recognized and imported correctly. I have the impression that my computer doesn't have enough RAM, as training does not always start. I'd like to reduce the size of the patches used for training, but when I modify the config file I get error messages about mismatched tensor sizes:

RuntimeError: The size of tensor a (16) must match the size of tensor b (64) at non-singleton dimension 1
Exception ignored in atexit callback: <function FileWriter.__init__.<locals>.cleanup at 0x0000017BEA34C680>

No doubt I need to modify the model accordingly. However, I have no idea how.
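For readers hitting the same traceback: errors of this shape typically come from an element-wise operation (such as a skip connection) whose two inputs no longer agree after a config change. The sketch below is a toy illustration, not the project's code; the `skip_add` function and the tensor shapes are made up to reproduce the reported message.

```python
import numpy as np

def skip_add(early, late):
    """Element-wise add, as a hypothetical skip connection would do.

    Mimics the shape check that produces the reported PyTorch error when
    one branch of the network was built for a different configuration.
    """
    if early.shape != late.shape:
        raise RuntimeError(
            f"The size of tensor a ({early.shape[1]}) must match the size "
            f"of tensor b ({late.shape[1]}) at non-singleton dimension 1"
        )
    return early + late

# Branch produced with the modified config (16 channels) vs. a branch
# still sized for the default config (64 channels):
early = np.zeros((1, 16, 32, 32))
late = np.zeros((1, 64, 32, 32))

try:
    skip_add(early, late)
except RuntimeError as e:
    print(e)
```

Changing a single parameter such as the patch size can ripple through layer shapes like this, which is why every affected layer would need to be resized consistently.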

More generally, do you plan to expand the documentation and comment the code, or do you intend to leave the GitHub repository as it is?
Thanks in advance for your feedback.

@ECHOEeyes
Collaborator

Hello, and thank you for your early feedback. Our code has since received a comprehensive update: it now supports both H5 and TIFF input formats and, more importantly, adds a GUI to make it easier to use. As for your issue: with limited RAM, we first suggest reducing the batch size to 1 rather than shrinking the model architecture. On the one hand, modifying the network layers requires careful divisibility checks and recalculation to avoid shape errors; on the other hand, our model is not very large, and changing key parameters such as the patch size may degrade model performance. The code currently provides 8x and 10x restoration, so you only need to set the scale-factor parameter and the code will automatically load the recommended model parameters for that factor. If you have any other questions, please feel free to contact us at any time.
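For anyone applying this advice, a batch-size change in a typical YAML training config might look like the fragment below. The key names (`train`, `batch_size`, `scale_factor`) are illustrative assumptions, not the repository's actual schema; check your own config file for the exact keys.

```yaml
# Hypothetical config fragment -- key names are assumptions,
# not this repository's actual schema.
train:
  batch_size: 1      # lowers peak RAM/VRAM without touching the architecture
  scale_factor: 8    # 8 or 10; the code then loads the matching model parameters
  # patch_size left at its default, as recommended above
```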
