
Plugging a bunch of test holes back to CI #160

Merged
merged 35 commits into master on Jul 18, 2024

Conversation

chaithyagr
Member

It seems like tensorflow NUFFT tests were missing from CI. Let us also resolve #140 in this PR.

@chaithyagr chaithyagr marked this pull request as draft July 12, 2024 09:18
@chaithyagr chaithyagr changed the title Adding a bunch of test holes back to CI Plugging a bunch of test holes back to CI Jul 12, 2024
@chaithyagr
Member Author

I merged #158 here to pick up the changes that fix tensorflow-nufft. This way we can have passing tests as we merge.

Comment on lines 84 to 91
if metafunc.config.getoption(
    "backend"
) != [] and backend != metafunc.config.getoption("backend"):
    # Skip tests if the backend does not match what we want to test.
    callspec.marks.append(
        pytest.mark.skip(f"Not testing {backend} as not requested")
    )
Member Author


This is slightly more sophisticated: instead of skipping only when testing a backend against the reference, we now run a backend's tests only when that backend is explicitly requested; otherwise we skip. @paquiteau to comment in case this causes new test holes.
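For clarity, the skip rule discussed here can be sketched as a small predicate. This is a hypothetical sketch: `should_skip` and its argument names are illustrative, not the actual conftest code, and whether the `--backend` match is equality or list membership is an assumption.

```python
def should_skip(requested_backends, backend):
    """Return True if a parametrized test case for `backend` should be skipped.

    requested_backends: backends given on the command line; an empty list
    means no filter was passed, so nothing is skipped by this rule.
    backend: the backend this parametrized test case would exercise.
    """
    # Skip unless the backend was explicitly requested.
    return requested_backends != [] and backend not in requested_backends
```

For example, with `requested_backends=["tensorflow"]` a finufft test case is skipped rather than run, which is the behavior being discussed.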

Member


I think this change is redundant with what is on lines 20-30 above (we modify the availability of the backend dynamically in the test config). I would rather advise removing these lines (84-90) and seeing what happens.

The whole idea of this was to not test a backend against itself (it does not make sense, and it saves time, especially for the test_interfaces.py tests).

Member Author


That's weird; in that case, for some reason we trigger autodiff tests for finufft when the requested backend is tensorflow :P. This is why I added it in the first place.

@@ -511,12 +511,14 @@ def samples(self, samples):
        samples: np.ndarray
            The samples for the Fourier Operator.
        """
        self._samples = samples
Member Author


This was a bug we missed earlier: we used to compute the density without updating the samples. Fixed now. However, I added a TODO because we still have a performance issue: this requires gpuNUFFT to sort the samples twice, once for pipe and once for the operator. While we do have a pipe method, it is currently messy to call it directly (osf and upsampfac are two different parameters). Also, we still create an operator so that pipe can be called without the user needing to create a new fourier_op. @paquiteau, what is the best way forward here? Let's discuss this sometime: we have all the elements to get rid of the perf issues, but we need to agree on a standard to follow. This could greatly benefit trajectory learning.
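The bug and its fix can be illustrated with a minimal sketch: the setter must store the new samples before recomputing the density, otherwise the density is computed for the stale trajectory. This is a hypothetical, dependency-free sketch; `FourierOperatorSketch` and its uniform-weight `_compute_density` are illustrative stand-ins, not the actual mri-nufft implementation.

```python
class FourierOperatorSketch:
    """Toy operator holding k-space samples and their density compensation."""

    def __init__(self, samples):
        self._samples = list(samples)
        self._density = self._compute_density()

    def _compute_density(self):
        # Stand-in for the real density estimation (e.g. gpuNUFFT's pipe
        # method); here simply one uniform weight per sample.
        n = len(self._samples)
        return [1.0 / n] * n if n else []

    @property
    def samples(self):
        return self._samples

    @samples.setter
    def samples(self, new_samples):
        # The fix: update the stored samples *before* recomputing density.
        # The earlier bug was recomputing density against the old samples.
        self._samples = list(new_samples)
        self._density = self._compute_density()
```

Setting `op.samples` to a longer trajectory now yields a density array of the new length, whereas the buggy ordering left it at the old one.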

volume_shape = (np.array(volume_shape) * osf).astype(int)
grid_op = MRIGpuNUFFT(
    samples=kspace_loc,
    shape=volume_shape,
    osf=1,
    **kwargs,
)
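The snippet above builds the density-estimation operator on a manually oversampled grid, then passes `osf=1` so the operator does not oversample again. The shape arithmetic amounts to the following hypothetical helper (`oversampled_shape` is an illustrative name; the real code inlines `(np.array(volume_shape) * osf).astype(int)`):

```python
def oversampled_shape(volume_shape, osf):
    """Scale each grid dimension by the oversampling factor, truncating to
    int, mirroring `(np.array(volume_shape) * osf).astype(int)` above."""
    return tuple(int(s * osf) for s in volume_shape)
```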

@chaithyagr chaithyagr marked this pull request as ready for review July 15, 2024 09:52
@chaithyagr chaithyagr requested a review from paquiteau July 15, 2024 12:28
@chaithyagr
Member Author

Coverage went from 76.96% to 77.89%! Yay xD

@paquiteau paquiteau mentioned this pull request Jul 17, 2024
src/mrinufft/operators/autodiff.py (Outdated)
src/mrinufft/operators/base.py (Outdated)
src/mrinufft/operators/interfaces/tfnufft.py
tests/operators/test_update.py (Outdated)
@chaithyagr chaithyagr requested a review from paquiteau July 18, 2024 09:41
Member

@paquiteau paquiteau left a comment


LGTM!

"cufinufft",
"gpunufft",
"torchkbnufft-gpu",
"tensorflow",
Member


Do we support torch arrays with the tensorflow backend? Seems weird to me.

Member Author


Yup xD, 'cause why not... But yes, it's not efficient at all. Weirdly, there is no way to convert a torch tensor to a tensorflow tensor without copying to CPU first... However, it's just there to prevent issues and bugs; it really isn't a use case.

Member


well, okay, I'll let you merge :)

@chaithyagr chaithyagr merged commit 30e9f3a into master Jul 18, 2024
10 checks passed
@chaithyagr chaithyagr deleted the add_tfnufft_ci branch July 18, 2024 15:12
Successfully merging this pull request may close these issues.

Fix a test hole for ref_backend, i.e. finufft
2 participants