
update Gradient descent notebook #218

Merged
merged 3 commits into from
Mar 19, 2024
Conversation

KrisThielemans
Member

  • use the negative log-likelihood for PET and fix incorrect comments about ascent/descent for MR and PET
  • fix has_astra => have_astra
  • no longer use a relative step-size, as it fails for MR and CT
  • hide PET warnings
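For context on the first bullet: minimising the negative log-likelihood with gradient *descent* is equivalent to maximising the log-likelihood with ascent, so the sign of the update step must match the sign convention of the objective. A minimal NumPy sketch with a hypothetical quadratic objective (standing in for the notebook's SIRF objective, which is not shown here):

```python
import numpy as np

# Hypothetical stand-in for the negative log-likelihood:
# f(x) = 0.5 * ||A x - b||^2, with gradient A^T (A x - b).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])

def grad(x):
    return A.T @ (A @ x - b)

x = np.zeros(2)
tau = 0.3  # fixed step size
for _ in range(200):
    x = x - tau * grad(x)  # descent: minus sign, because we minimise f

# x converges to the least-squares solution of A x = b
print(x)
```

With the objective flipped to a (positive) log-likelihood, the same update would need a plus sign (ascent), which is the comment mix-up the bullet refers to.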

@KrisThielemans
Member Author

@DANAJK I'm currently still using the "simulated" data. This has the disadvantage that a single gradient descent step immediately gets the correct solution (but the iteration then runs into difficulties with the current default step-size).

With your new data, I also have to set tau smaller (0.1 as opposed to 0.3), and then get
[Figures 41 and 42: screenshots omitted]

I'm tempted to leave this as-is for now, as updating the data will be hard (including retagging SIRF).
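An illustrative sketch of why a relative step-size can fail (hypothetical objective and scaling rule, not the notebook's actual code): if the step is scaled by ||x|| / ||gradient||, then starting from a zero image gives a zero step and the iteration stalls, whereas a fixed tau moves immediately.

```python
import numpy as np

target = np.array([2.0, -1.0])

def grad(x):
    # gradient of the hypothetical objective 0.5 * ||x - target||^2
    return x - target

# Relative step-size: scale tau by ||x|| / ||gradient||.
# Starting from a zero image, ||x|| == 0, so the step is zero.
x_rel = np.zeros(2)
g = grad(x_rel)
step = 0.3 * np.linalg.norm(x_rel) / np.linalg.norm(g)
x_rel = x_rel - step * g  # no progress

# Fixed step-size makes progress from the same starting point.
x_fix = np.zeros(2)
x_fix = x_fix - 0.3 * grad(x_fix)

print(x_rel, x_fix)
```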

@DANAJK
Contributor

DANAJK commented Mar 18, 2024

My new data was a Shepp-Logan phantom, so I'm not sure how you get a real-looking image!

@KrisThielemans
Member Author

😄 ok. I must have run this with the "simulated" data (from brainweb), as in the notebook.

I just now tried to reproduce that on a GitHub Codespace, but I get garbage backprojection (and reconstruction...) Maybe you could give it a go?

import numpy
import matplotlib.pyplot as plt

image = bwd_mr.as_array()
centre_slice = image.shape[0] // 2
plt.figure()
plot_2d_image([1, 1, 1], numpy.abs(image[centre_slice, :, :]), 'bla', cmap="viridis")

[Figure 3: screenshot omitted]

I'm a bit lost in different versions that I have of everything at the moment.

@KrisThielemans
Member Author

Correction: the gradient_descent_mr_pet_ct notebook doesn't reconstruct the data in the .h5 file; it just uses it as a template.

@KrisThielemans
Member Author

KrisThielemans commented Mar 18, 2024

So... the image above was created in a GitHub Codespace, running the synerbi/jupyter docker image, which is apparently somehow 3.5 (see email).

On my VM with SIRF 3.6, I get the following backprojection in the notebook when using simulated_MR_2D_cartesian.h5
[Figure 6: screenshot omitted]
and this with your Rep1 data
[Figure 9: screenshot omitted]
These make sense I believe.

My takeaway therefore is that the MR version on the current docker image (latest, I guess) is broken.

@ckolbPTB
Contributor

> correction: the gradient_descent_mr_pet_ct doesn't reconstruct the data in the .h5 file, but just uses it as a template.

Yes, we get the parameters for the acquisition model from the .h5 file. We also use the .h5 file to calculate realistic coil maps. Unfortunately, they need to somehow match the brainweb data. This, I guess, is the reason for the strange image you had before, which looked like it had bits cut out. Those were the black ellipses in the centre of the Shepp-Logan phantom, which don't have any signal. This led to zeros in the csm due to some thresholding we had at some point. Probably, by solving issue #1221, we indirectly fixed this notebook in 3.6.
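A toy illustration of the csm effect described above (hypothetical numbers and a simple least-squares coil combination; SIRF's actual coil-map code differs): thresholding low-signal regions to zero in the sensitivity maps knocks those pixels out of the combined image entirely, producing the "bits cut out" appearance.

```python
import numpy as np

# Two hypothetical coil sensitivity maps over a 1D "image" of 5 pixels.
# The middle pixel has low signal in both coils (like a black ellipse).
csm = np.array([[0.8, 0.7, 0.1, 0.5, 0.4],
                [0.2, 0.3, 0.1, 0.5, 0.6]])

# Thresholding: zero out sensitivities below 0.35 (a low-signal cutoff).
csm_thresh = np.where(np.abs(csm) < 0.35, 0.0, csm)

true_image = np.ones(5)
coil_images = csm_thresh * true_image  # simulated coil images

# Least-squares coil combination; guard against division by zero.
numer = (csm_thresh * coil_images).sum(axis=0)
denom = (csm_thresh ** 2).sum(axis=0)
combined = np.where(denom > 0, numer / np.where(denom > 0, denom, 1.0), 0.0)

print(combined)  # the middle pixel is cut out of an otherwise uniform image
```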

@KrisThielemans
Member Author

interesting. Thanks for that clarification, @ckolbPTB.

So, this PR is good to merge? I might take the opportunity to move it to Introduction, as I suggested in #193 (comment).

@KrisThielemans KrisThielemans merged commit 617c166 into master Mar 19, 2024
@KrisThielemans KrisThielemans deleted the GD_update branch March 19, 2024 16:45