
export_prediction_from_logits takes a very long time to finish #2540

Closed
pooya-mohammadi opened this issue Oct 10, 2024 · 8 comments

@pooya-mohammadi

The prediction itself takes about 10-20 seconds on my system (RTX 4090 GPU):
100%|██████████| 120/120 [00:16<00:00, 7.30it/s]
But the function export_prediction_from_logits is extremely slow on my system, which has 32 CPU cores:
[INFO] Elapsed time for export_prediction_from_logits: 534.6257283687592
I added a simple time tracker. What could have caused this?

The number of classes equals the number of classes TotalSegmentator uses, for all items. I run TotalSegmentator on the same image and it finishes in less than 30 seconds!

Can anyone help me with this?

@pooya-mohammadi
Author

I also saved the data before passing it to export_prediction_from_logits and ran it in a separate process: it took the same amount of time, while only 10-20 percent of the CPU was occupied!

I noticed that TotalSegmentator uses the same function; however, it resizes/resamples the inputs before passing them to the nnUNet predictor.

The reason nnUNet takes so long is that it resizes each class channel one by one in this code snippet:

for c in range(data.shape[0]):
    reshaped_final[c] = resize_fn(data[c], new_shape, order, **kwargs)

@FabianIsensee Do you have any plans to change this, or to use an approach like TotalSegmentator's? Or is there any way to skip this part? For segmentations with a large number of classes this becomes very slow.
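The per-class cost of that loop can be illustrated with a standalone sketch (hypothetical shapes; `scipy.ndimage.zoom` stands in for nnUNet's `resize_fn`):

```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical sizes: 10 class channels, upsampled 2x along each axis.
num_classes = 10
data = np.random.rand(num_classes, 32, 32, 32).astype(np.float32)
new_shape = (64, 64, 64)
factors = [n / o for n, o in zip(new_shape, data.shape[1:])]

# Each channel is interpolated independently, so runtime grows
# linearly with the number of classes.
reshaped_final = np.empty((num_classes, *new_shape), dtype=np.float32)
for c in range(num_classes):
    reshaped_final[c] = zoom(data[c], factors, order=3)

print(reshaped_final.shape)
```

With TotalSegmentator's ~100+ classes and full-resolution CT volumes instead of these toy shapes, the same linear scaling explains the runtimes reported above.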

@ancestor-mithril
Contributor

The reason TS2 is fast is that it uses nearest-neighbor (NN) interpolation (producing less smooth masks). nnUNet uses tricubic resampling (or trilinear resampling if you configure your training to use torch resampling).
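The speed gap between the two interpolation orders can be measured with a quick standalone sketch (this uses `scipy.ndimage.zoom`, not nnUNet's actual resampling code; shapes are made up):

```python
import time
import numpy as np
from scipy.ndimage import zoom

vol = np.random.rand(64, 64, 64).astype(np.float32)
factors = (2.0, 2.0, 2.0)

t0 = time.perf_counter()
nn = zoom(vol, factors, order=0)      # nearest neighbor, as TS2 effectively does
dt_nn = time.perf_counter() - t0

t0 = time.perf_counter()
cubic = zoom(vol, factors, order=3)   # cubic spline, akin to nnUNet's default
dt_cubic = time.perf_counter() - t0

print(nn.shape, cubic.shape)
print(f"order=0: {dt_nn:.4f}s, order=3: {dt_cubic:.4f}s")
```

Order-0 interpolation skips the spline prefiltering and weighting entirely, which is why it is much cheaper, at the cost of blockier mask boundaries.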

@pooya-mohammadi
Author

pooya-mohammadi commented Oct 11, 2024

@ancestor-mithril Thanks for your response. Can you show me how to do that?
Even so, this code snippet is too slow:

for c in range(data.shape[0]):
    reshaped_final[c] = resize_fn(data[c], new_shape, order, **kwargs)

Do you have any comments on this?

@pooya-mohammadi
Author

One more thing: since TS2 does not save probabilities, there is no need to resize all the class channels separately. It does only a single zoom on the output segmentation.

@ancestor-mithril
Contributor

You can change the resampling by using a different experiment planner (see the documentation). For example, you can use https://github.com/MIC-DKFZ/nnUNet/blob/master/nnunetv2/experiment_planning/experiment_planners/resampling/resample_with_torch.py.

@ancestor-mithril
Contributor

> One more thing: since TS2 does not save probabilities, there is no need to resize all the class channels separately. It does only a single zoom on the output segmentation.

This is not supported by nnUNet, you would have to change the code to obtain the TS2 behavior.

@pooya-mohammadi
Author

@ancestor-mithril Exactly. I noticed that when return_probabilities is set to False, it is faster to take the segmentation first and then resample it with an order=0 resize. This drastically reduces the inference time.
I'll create a pull request.
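The idea described here can be sketched as follows (a minimal standalone illustration, not nnUNet's actual implementation; the function name and shapes are made up):

```python
import numpy as np
from scipy.ndimage import zoom

def resample_segmentation_only(logits, new_shape):
    """Argmax first, then a single order-0 resize of the label map.

    Much cheaper than resampling every class channel, at the cost of
    less smooth class boundaries (soft probabilities are discarded).
    """
    seg = logits.argmax(0).astype(np.uint8)           # (x, y, z) label map
    factors = [n / o for n, o in zip(new_shape, seg.shape)]
    return zoom(seg, factors, order=0)                # nearest neighbor

# Hypothetical example: 5 classes, low-res logits resampled to full resolution.
logits = np.random.rand(5, 32, 32, 32).astype(np.float32)
seg_full = resample_segmentation_only(logits, (97, 97, 97))
print(seg_full.shape)
```

One interpolation call over a single uint8 volume replaces num_classes interpolation calls over float volumes, which is where the speedup comes from when probabilities are not needed.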

@sten2lu
Contributor

sten2lu commented Nov 5, 2024

As ancestor-mithril has already answered this issue, I will close it.
Fabian will handle the pull request.

@sten2lu sten2lu closed this as completed Nov 5, 2024