broken nnunetv2 import #2
Comments
I'm getting the same error. Here's a more detailed traceback:
The culprit seems to be e49f45c; see lines 11 to 18 in b2493d4, where the affected import appears.
I assume the fix should be as simple as …
Even with `import blosc2`, the crop_to_bbox error persists unless you use acvl_utils==0.2.
I had the same error and found that the crop_to_bbox function from acvl_utils/cropping_and_padding/bounding_boxes.py was recently removed, which is causing this import issue with nnUNetPredictor. Adding this function back into the file resolves the problem. I've made a pull request to address this, and hopefully it will be merged soon.

Proposed solution: add the following function back to bounding_boxes.py:
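The function body was not captured in this thread, but given the name and how nnUNetPredictor uses it, a minimal sketch might look like the following (an assumption, not necessarily the exact code from the pull request; it uses NumPy tuple-of-slices indexing):

```python
import numpy as np

def crop_to_bbox(array: np.ndarray, bounding_box) -> np.ndarray:
    """Crop `array` to `bounding_box`, a list of [start, end) pairs,
    one pair per array dimension."""
    assert len(bounding_box) == array.ndim, \
        "bounding_box must have one [start, end] pair per dimension"
    slicer = tuple(slice(start, end) for start, end in bounding_box)
    return array[slicer]

# Example: crop a 4x4 image to its central 2x2 region
image = np.arange(16).reshape(4, 4)
cropped = crop_to_bbox(image, [[1, 3], [1, 3]])
print(cropped.shape)  # (2, 2)
```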
This issue has been reported by a user of TotalSegmentator using the 3D Slicer application. See https://discourse.slicer.org/t/totalsegmentator-failed-to-compute-results-returned-non-zero-exit-status-1/38142/6?u=jamesobutler
Error reports are flooding in from 3D Slicer users of TotalSegmentator. It would be great if this issue could be fixed soon! Thank you!
I am on a trip, will do this tomorrow when I am back. Sorry :-/
Hey all, I tried to patch it from where I am. There is now a v0.2.2 with the missing function added back in and the blosc2 dependency added. I hope this fixes it. Please let me know if it works now.
Thank you! Fixing the problem and yanking the broken version are sufficient corrective actions. As a preventive action, you could set up continuous testing via GitHub Actions (test all features that you would feel embarrassed about if they were broken). Please do not set the version with …
nnUNetv2 is working again on my system. Thanks for the patch!
This error came up for me when rebuilding a Docker image that installs nnunetv2==2.3.1. We need fixed versions because this is a deployed app, so the inference version has to match the training version. Can a hotfix be applied to nnunetv2 2.3.1 so it's not broken by the downstream changes?
@pwrightkcl nnunetv2 v2.3.1 allows acvl_utils 0.2.x, so no hotfix is necessary. Just don't use 0.2.1; you can use 0.2.2 or 0.2.0, and you can install this package directly or do a clean install to pull in the latest working 0.2.2.
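In requirements terms, the advice above amounts to excluding only the broken release. A hypothetical requirements.txt line (version numbers are the ones discussed in this thread):

```
# Allow any 0.2.x release of acvl_utils except the broken 0.2.1
acvl_utils>=0.2.0,<0.3,!=0.2.1
```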
In this issue we are talking about version requirements at the library level. Python packages should never require exact versions of the packages that they rely on (version requirements should be as broad as possible), because otherwise it would be practically impossible to pip-install more than a few Python packages in one environment. At the application level (e.g., in your application's installation script), it can make sense to specify the exact version of each and every Python package for reproducibility.
@lassoan this is a production app deployed in clinical environments based in Docker images, so we need to be able to build reproducibly. In this case, unit tests started failing for the component that uses nnunet despite no changes to the code and all requirements having fixed versions. My view is that a hotfix is still required to get nnunet 2.3.1 back to working "out of the box" as it used to. If I were in a dev / research environment, it wouldn't be a big deal to add a new line to my requirements file or even install from some custom git commit. But that is not my use case. I think it would also prevent anyone else having this issue from having to track down this issue thread. |
It sounds like you have only specified some top-level version requirements. You actually need to freeze the versions of all the packages that you use (recursively, for all dependencies of dependencies). Since only you know which combination of package versions works for you, only you can do this, at the application level. Fortunately, this is very easy to do, as you can run …
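The "freeze everything" workflow mentioned above can be sketched with the standard library alone, producing the same kind of exact pins a pip-freeze-style listing gives (`freeze_installed` is a hypothetical helper name, not part of any library):

```python
from importlib import metadata

def freeze_installed() -> list[str]:
    """Return exact `name==version` pins for every installed
    distribution, including dependencies of dependencies."""
    return sorted(
        f"{dist.metadata['Name']}=={dist.version}"
        for dist in metadata.distributions()
    )

pins = freeze_installed()
for line in pins[:5]:  # show a few example pins
    print(line)
```

Writing these pins to a requirements file at the application level gives a reproducible environment without any library having to hard-pin its own dependencies.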
Of course, any broken Python package version must be yanked from PyPI. In the acvl-utils release history it seems that the broken 0.2.1 version still has not been yanked. @FabianIsensee could you please yank acvl-utils 0.2.1 from PyPI? It would make pip automatically uninstall/skip this broken version in existing and future installs. It is really simple: just log in to PyPI, go to https://pypi.org/manage/project/acvl-utils/releases/, click on …
@lassoan I'll look into doing a complete …

For nnunetv2 2.3.1, it still seems to be installing acvl_utils 0.2.1, not 0.2.2.
This would only happen if you are not installing fresh (i.e., nothing in your pip cache). pip will reuse a downloaded wheel file if it still passes the version range checks. On their side, acvl_utils should yank the release so that it is no longer downloaded, and nnunetv2 could specify an exclusion for 0.2.1, but you can handle this from your application side without waiting for those safeguards. You can run a pip install upgrade command to make sure you always pull the latest. But as @lassoan suggested, you are looking for a pip-freeze-type action to install hardened versions of all packages, including their dependencies (and the dependencies of the direct dependencies). A library is not supposed to do that, but it can make sense at the application level. As you can review in the nnunetv2 v2.3.1 set of dependencies, there are no hard-pinned versions. When setting up a fresh Docker image you would get a different version of …
@jamesobutler yes, TIL about freezing deeper dependencies (thanks both). I am implementing this as I type. In case anyone in my position finds it useful, I also learned …

The bit about 0.2.1 being used, not 0.2.2, was just FYI, since the above solves it for me. I don't think it's the pip cache, as I built a fresh Docker image with no caches, but I just tried in a local venv instead and it installed acvl_utils==0.2.2, so I guess disregard.
pip maintains its own system-wide cache, which is independent from docker cache. |
Interesting. So even inside a Docker build, pip will search the local system cache? I would have expected it to only look inside the docker environment. Getting a bit off-topic, I suppose, since the issue is resolved, but interesting to me. |
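For Docker-based builds like the one described above, a minimal sketch of sidestepping stale wheels during install (the `--no-cache-dir` flag is standard pip; the base image tag and the version pins here are illustrative, taken from this thread):

```
FROM python:3.11-slim

# --no-cache-dir forces pip to re-resolve and re-download packages
# instead of reusing previously cached wheels
RUN pip install --no-cache-dir "nnunetv2==2.3.1" "acvl_utils!=0.2.1"
```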
@FabianIsensee it appears 0.2.1 is still not yanked on PyPI. Do you have plans to complete this action? |
This issue is still continuously burning users. Due to extensive caching in pip, having a 0.2.2 version is not sufficient; the broken 0.2.1 will still be installed by default on many users' computers until you yank this corrupted package. @FabianIsensee please yank acvl-utils 0.2.1. We provide user support for nnUNet for free and we are struggling to keep up with the demand because of this issue.
```
File , line 7
      5 import pyspark.sql.functions as f
      6 from nnunetv2.imageio.simpleitk_reader_writer import SimpleITKIO
----> 7 from nnunetv2.inference.predict_from_raw_data import nnUNetPredictor
      8 from nnunetv2.paths import nnUNet_results, nnUNet_raw
      9 import torch

ImportError: cannot import name 'crop_to_bbox' from 'acvl_utils.cropping_and_padding.bounding_boxes' (/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/acvl_utils/cropping_and_padding/bounding_boxes.py)
```
It seems that the update to acvl_utils broke nnunetv2 imports. Locking the version to 0.2 fixes this issue.