submit failed #11

Open
orange-111222 opened this issue Jul 22, 2024 · 3 comments

Comments

@orange-111222

When I test directly with the Try-out Algorithm it succeeds, but when I submit on the submission page I get the following error:
```
numpy.core._exceptions._ArrayMemoryError: Unable to allocate 16.6 GiB for an array with shape (49, 271, 410, 410) and data type float64
```
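For reference, the reported size is exactly one float64 copy of the full multi-channel output volume, as this quick check shows:

```python
# Sanity check on the reported allocation (shape and dtype taken from the error message).
import numpy as np

shape = (49, 271, 410, 410)  # (channels, D, H, W)
n_bytes = np.prod(shape, dtype=np.int64) * np.dtype(np.float64).itemsize
print(n_bytes / 2**30)       # ≈ 16.63, i.e. the 16.6 GiB in the error
```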

@orange-111222
Author

(screenshot attached: Snipaste_2024-07-22_17-58-39)

@LucaLumetti
Collaborator

Hi @orange-111222,

Thanks for reaching out. I'm not familiar with the specific details of how GrandChallenge or the "Try-out Algorithm" environment runs the provided Docker container. However, here are a few suggestions that might help:

  • You can try increasing the required CPU memory when uploading the Docker container on your algorithm page. I am not aware of any constraints on the maximum memory you can set.

  • Your output currently consists of 49 channels, while we only use 42 labels. If the 7 additional channels are not essential to your algorithm (e.g., if they are only used for post-processing), consider removing them.

  • If you are processing the input volume patch-wise (which is likely, given that your algorithm appears to be based on nnU-Net), using float32 instead of float64 can significantly reduce memory usage without compromising performance in most cases; see the sketch after this list.
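A minimal sketch of the last two points, assuming the algorithm holds the full softmax output as a NumPy array and that the 42 evaluated labels occupy the first 42 channels (both assumptions; adapt as needed):

```python
import numpy as np

def shrink_probs(probs: np.ndarray, n_labels: int = 42) -> np.ndarray:
    """Drop auxiliary channels and downcast the softmax volume.

    probs: (C, D, H, W), e.g. (49, 271, 410, 410) float64 (~16.6 GiB).
    Assumes the evaluated labels are the first `n_labels` channels (illustrative).
    """
    probs = probs[:n_labels]                     # 49 -> 42 channels (a view, no copy yet)
    return probs.astype(np.float32, copy=False)  # float64 -> float32 halves the remainder

# If only the final segmentation is needed downstream, argmax is far cheaper still:
# labels = probs.argmax(axis=0).astype(np.uint8)  # (D, H, W), 1 byte per voxel
```

Slicing before casting keeps the temporary float32 copy at the reduced size (~7.1 GiB instead of ~16.6 GiB).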

Best regards,
Luca Lumetti

@orange-111222
Author

Dear Luca Lumetti,

  • I've tried with up to 32 GB of CPU memory so far.

  • Since I'm currently using a dataset I downloaded around May, I'll try downloading the latest dataset for training.

  • Yes, my algorithm was built on nnU-Net, and I've noticed that nnU-Net itself uses a large amount of RAM during inference, so it's possible that nnU-Net is not well suited for ToothFairy (see the sketch below).
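As a generic illustration of that point (this is not nnU-Net's actual code, just the sliding-window pattern it follows), the dtype of the accumulation buffers dominates peak RAM during patch-wise inference:

```python
import numpy as np

C, D, H, W = 42, 271, 410, 410

# Buffers for accumulating overlapping patch predictions. In float32 these
# take ~7.1 GiB; the same buffers in float64 would take ~14.3 GiB.
acc  = np.zeros((C, D, H, W), dtype=np.float32)
hits = np.zeros((D, H, W), dtype=np.float32)

def add_patch(logits: np.ndarray, z: slice, y: slice, x: slice) -> None:
    """Accumulate one patch prediction (hypothetical helper, illustrative shapes)."""
    acc[:, z, y, x] += logits.astype(np.float32, copy=False)
    hits[z, y, x] += 1.0

# After all patches: probs = acc / np.maximum(hits, 1.0)
```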

Thank you and all the best.

Best regards,
ChiZhang
