Fix trajectory scaling in KTrajectoryPulseq #551
Conversation
I guess for 2D trajectories it does not matter, because the k2 dimension would then be singleton and we simply ignore it in the FourierOp. The only one using 3D pulseq trajectories is probably @JoHa0811 - not sure if he encountered any problems.
Patrick noticed the bug because the reconstruction did not work. In theory, the reconstruction should still have worked, but we also have #553: the FourierOp does not work with a recon matrix smaller than 6.
For 2D acquisitions? Strange, because we also have pulseq reconstructions in our examples.
It was a 2D trajectory in my case, but the kz values after rescaling differed in the range [-0.3, 0.5] and thus much more than the default repeat_detection_tolerance. This actually led to errors in the reco.
Indeed. I think it depends on the sequence whether pypulseq returns 0 or something close to zero.
Ok, in the pulseq example, kz is constant (but not 0). This avoids the possible division-by-zero bug and results in a singleton dimension, as all kz values after scaling are -0.5 ...
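To illustrate this case, here is a minimal sketch (not the MRpro code; the tensor values are made up): a constant, non-zero kz is rescaled by `encoding_size / (2 * torch.max(torch.abs(kz)))`, so every sample maps to exactly -0.5 and the k2 axis can still collapse to a singleton:

```python
import torch

# Minimal sketch: a constant, non-zero residual kz as pypulseq might
# return it. (Values are illustrative, not from the example sequence.)
kz = torch.full((256,), -2.3e-10)

encoding_size = 1
scale = encoding_size / (2 * torch.max(torch.abs(kz)))

kz_scaled = kz * scale
print(kz_scaled.unique())  # tensor([-0.5000]) -> singleton k2 dimension
```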
I came across a bug in the `reshape_pulseq_traj` function in our `KTrajectoryPulseq` class.

In `KTrajectoryPulseq`, we use `seq.calculate_kspace()` to get our k-space trajectory from the seq-file. Due to rounding errors etc., it happens that we get different kz values even for single 2D acquisitions. These values are usually about 10 orders of magnitude smaller than typical kx and ky values.

However, in the `reshape_pulseq_traj` function we rescale the values with `encoding_size / (2 * torch.max(torch.abs(k_traj)))`, which can be on the order of 1e8 for very small kz values (caused by the mentioned rounding errors). For `encoding_size = 1`, the resulting max value of kz after rescaling was always 0.5 and thus 2 orders of magnitude larger than our `repeat_detection_tolerance`, which is meant to compensate for the small rounding errors.

I actually don't understand why this hasn't been a problem in the past, but @fzimmermann89 and I are sure that the scaling for `encoding_size = 1` is a bug, which should be fixed by this PR.