Plugging a bunch of test holes back to CI #160
Conversation
I merged #158 here to take the changes to fix tensorflow-nufft. This way we can have passing tests as we merge.
tests/conftest.py (outdated diff):
if metafunc.config.getoption(
    "backend"
) != [] and backend != metafunc.config.getoption("backend"):
    # Skip tests if the backend does not match what we want to test.
    callspec.marks.append(
-       pytest.mark.skip("Not testing ref backend with self.")
+       pytest.mark.skip(f"Not testing {backend} as not requested")
    )
This is slightly more sophisticated: we moved from skipping when we are testing against the reference backend to testing a backend only when it is explicitly requested; otherwise, we skip. @paquiteau to comment in case this causes new test holes.
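Below is a minimal, hypothetical sketch of the "only test requested backends" idea, written with pytest_collection_modifyitems rather than the metafunc/callspec hook the actual conftest.py uses; the --backend option, its registration, and the parameter name are assumptions for illustration.

import pytest


def pytest_addoption(parser):
    # Assumed CLI option; the real conftest.py registers something similar.
    parser.addoption("--backend", action="append", default=[],
                     help="restrict tests to these backends")


def pytest_collection_modifyitems(config, items):
    requested = config.getoption("backend", default=[])
    if not requested:
        return  # no restriction requested: keep every parametrization
    for item in items:
        callspec = getattr(item, "callspec", None)
        backend = callspec.params.get("backend") if callspec else None
        if backend is not None and backend not in requested:
            # Skip any parametrization whose backend was not explicitly requested.
            item.add_marker(
                pytest.mark.skip(reason=f"Not testing {backend} as not requested")
            )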
I think this change is redundant with what is in lines 20-30 above (we modify the availability of the backend dynamically in the tests config). I would rather advise removing these lines (84-90) and seeing what happens.
The whole idea of this was to not test the backend against itself (it does not make sense, and it saves time, especially for the test_interfaces.py tests).
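For reference, a hedged sketch of that alternative: shrink the set of "available" backends once at configure time and let availability-based skips do the rest. The names AVAILABLE_BACKENDS and require_backend are hypothetical, not mri-nufft's API.

import pytest

AVAILABLE_BACKENDS = {"finufft", "cufinufft", "gpunufft", "torchkbnufft-gpu", "tensorflow"}


def pytest_configure(config):
    requested = config.getoption("backend", default=[])
    if requested:
        # Only requested backends are treated as available; tests probing
        # availability then skip everything else on their own.
        AVAILABLE_BACKENDS.intersection_update(requested)


def require_backend(backend):
    # Helper a test (or fixture) could call before touching a backend.
    if backend not in AVAILABLE_BACKENDS:
        pytest.skip(f"{backend} not available or not requested")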
That's weird; in that case, for some reason we trigger autodiff tests for finufft when the requested backend is tensorflow :P. This is why I added it in the first place.
@@ -511,12 +511,14 @@ def samples(self, samples):
        samples: np.ndarray
            The samples for the Fourier Operator.
        """
+       self._samples = samples
This was a bug we missed earlier: we used to compute the density without updating the samples. Fixed now. However, I added a TODO because we still have a performance issue: this requires gpuNUFFT to sort the samples twice, once for pipe and once for the operator. While we do have a pipe method, it is currently messy to call it directly (we have osf and upsampfac as two different parameters). Also, we still create an operator to ensure we can call pipe without the user needing to create a new fourier_op. @paquiteau, what's the best way forward here? Let's discuss this sometime, as we have all the elements to get rid of the perf issues, but we need to agree on a standard to follow. This can greatly benefit trajectory learning.
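A minimal sketch (not the actual mri-nufft code) of the fix described above: the setter stores the new samples before recomputing the density, otherwise the density would be estimated from the stale trajectory. The class and helper names are made up for illustration.

import numpy as np


class FourierOperatorSketch:
    def __init__(self, samples, density=False):
        self._samples = np.asarray(samples)
        self._density = self._estimate_density() if density else None

    @property
    def samples(self):
        return self._samples

    @samples.setter
    def samples(self, samples):
        # Update the stored samples first: this was the missing step.
        self._samples = np.asarray(samples)
        if self._density is not None:
            # TODO (perf): in the gpuNUFFT backend this implies sorting the
            # samples twice, once for the Pipe density and once for the operator.
            self._density = self._estimate_density()

    def _estimate_density(self):
        # Placeholder for a Pipe-style density estimation on self._samples.
        return np.ones(len(self._samples))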
mri-nufft/src/mrinufft/operators/interfaces/gpunufft.py, lines 551 to 557 at 20a77aa:
volume_shape = (np.array(volume_shape) * osf).astype(int)
grid_op = MRIGpuNUFFT(
    samples=kspace_loc,
    shape=volume_shape,
    osf=1,
    **kwargs,
)
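Just to illustrate the arithmetic in the quoted lines: the oversampling factor is folded into the grid shape up front, and the inner operator is then built with osf=1. The shape and factor below are made-up example values.

import numpy as np

volume_shape = (128, 128, 96)
osf = 2
grid_shape = tuple((np.array(volume_shape) * osf).astype(int))
print(grid_shape)  # (256, 256, 192); the grid operator then receives this shape with osf=1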
76.96% to 77.89% coverage! Yayy xD
It seems tensorflow NUFFT is fixed now; let us also resolve #140 in this PR.
Force-pushed from 611cddb to 8b52a13.
LGTM!
"cufinufft", | ||
"gpunufft", | ||
"torchkbnufft-gpu", | ||
"tensorflow", |
Do we support torch arrays with the tensorflow backend? Seems weird to me.
Yup xD, 'cause why not... But yes, it's not efficient at all. Weirdly, there is no way to convert a torch tensor to a tensorflow tensor without copying to CPU first... However, it's just a way to prevent issues and bugs; it really isn't a use case.
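For reference, a hedged sketch of the CPU round-trip described above (not part of mri-nufft): the torch tensor is detached, copied to host memory as a numpy array, and re-wrapped as a tensorflow tensor, paying a device-to-host copy on the way.

import tensorflow as tf
import torch

x_torch = torch.randn(4, 4, device="cuda" if torch.cuda.is_available() else "cpu")
x_tf = tf.convert_to_tensor(x_torch.detach().cpu().numpy())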
well, okay, I'll let you merge :)