Hello, I'm interested in using your proposed depth super-resolution for volume electron microscopy.
So far I've converted a stack of images to HDF5 (not forgetting to rename the dataset to "raw"). The dataset is recognized and imported correctly. However, I have the impression that my computer doesn't have enough RAM, as training doesn't always start. I'd like to reduce the size of the patches used for training, but when I modify the config file I get errors about mismatched tensor sizes:
RuntimeError: The size of tensor a (16) must match the size of tensor b (64) at non-singleton dimension 1
Exception ignored in atexit callback: <function FileWriter.__init__.<locals>.cleanup at 0x0000017BEA34C680>
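For context, the conversion step itself went smoothly; here is a minimal sketch of what I did, assuming h5py and tifffile are installed (the file names are placeholders):

```python
import h5py
import tifffile

# Read the TIFF stack as a (Z, Y, X) volume; the path is a placeholder.
volume = tifffile.imread("em_stack.tif")

# Write it to HDF5 under the dataset name "raw", as the training code expects.
with h5py.File("em_stack.h5", "w") as f:
    f.create_dataset("raw", data=volume, compression="gzip")
```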
No doubt I need to modify the model accordingly. However, I have no idea how.
In more general terms, do you plan to extend the documentation and comment the code, or do you intend to leave the GitHub repository as it is?
Thanks in advance for your feedback.
Hello, thank you for your early feedback. Our code has since undergone a comprehensive update: it now supports H5 or TIFF input formats and, more importantly, adds a GUI to make it easier to use.

As for your issue: with limited RAM, we first suggest setting the batch size to 1 rather than shrinking the patch size or the model architecture. On the one hand, modifying the network layers requires careful divisibility bookkeeping to avoid exactly the tensor-size mismatch you are seeing; on the other, our model is not very large, and changing key parameters such as the patch size may degrade performance.

The code currently provides 8x and 10x recovery, so you only need to set the scale factor parameter and the code will automatically load the recommended model parameters for that scale. If you have any other questions, please feel free to contact us at any time.
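As an illustration of the kind of change we mean, here is a minimal sketch that edits the config programmatically; the key names (`train.batch_size`, `model.scale_factor`) are placeholders, so check your own config file for the exact fields:

```python
import yaml

# Load the existing training configuration; the path is a placeholder.
with open("config.yaml") as f:
    cfg = yaml.safe_load(f)

# Reduce memory pressure by lowering the batch size instead of the patch size.
cfg["train"]["batch_size"] = 1

# Choose 8x or 10x recovery; the code then loads the matching model parameters.
cfg["model"]["scale_factor"] = 8

with open("config.yaml", "w") as f:
    yaml.safe_dump(cfg, f)
```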