
Various bug fixes, features, and cleanups #159

Merged 14 commits into imr-framework:dev on Jan 2, 2024

Conversation

@FrankZijlstra (Collaborator) commented Dec 13, 2023

Bug fixes:

  • Sequence.adc_times gave an error when no ADC events were present in the sequence.
  • Sequence.read now clears the block cache (e.g. if a sequence was already loaded, read overwrites it and should therefore clear the block cache), and makes sure the block cache remains valid while filling in grad.first and grad.last.
  • Fixed a small issue in Sequence.read where grad.last would not be filled in correctly (a larger issue with restoring these values remains; added a TODO note).
  • Fixed split_gradient, which did not work before.
  • Fixed various pieces of code that assumed a linear ordering in the keys of block_events, which fixes pulseq/pulseq#40 (Interpretation of block ID and block order); see the sketch below. One small issue remains in set_block, where finding the previous block would be expensive given how often this function is called...
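
A minimal sketch of the removed assumption (hypothetical data, not the actual pypulseq code): block IDs are dictionary keys that need not be consecutive or start at 1, so blocks should be visited in the order the dictionary defines them rather than by counting IDs.

# Hypothetical illustration only; the real block_events values are event ID arrays.
block_events = {2: "block A", 10: "block B", 5: "block C"}  # arbitrary, non-consecutive IDs

# The old assumption breaks for these IDs:
#   [block_events[i] for i in range(1, len(block_events) + 1)]  -> KeyError

# Iterating the keys themselves preserves the stored block order:
for block_id, block in block_events.items():
    print(block_id, block)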

Features:

  • Support for loading the cardiac PNS model in asc_to_hw (only available for newer scanners)
  • Added time_range parameters for calculate_pns and get_gradients
  • Added the possibility to change the default values in Opts (using opts.set_as_default), similar to pulseq/pulseq@6138cc0. This avoids having to pass a system object when it is always the same (see the example below).

Cleanups:

  • Removed unused imports from sequence.py
  • Updated type signature of Sequence.adc_times
  • calculate_pns now uses piecewise-polynomial gradients from get_gradients
  • Removed trapezoid gradient check in set_block because it always passed (first and last amplitudes are always 0)
  • Removed the keys field from event_lib because it was always keys[k] = k, and updated all references to this field to use data or data.keys() instead.
  • event_lib.update now ignores the old_data argument because it was not needed in the first place (the old data was only used to update the keymap field and can simply be looked up in data)

Notes:

  • Please check whether the implementation of setting default values in Opts is as we would want it to be:
pp.Opts(max_grad=80).set_as_default()
g = pp.make_trapezoid('x', area=1000)

Gives the same result as:

system = pp.Opts(max_grad=80)
g = pp.make_trapezoid('x', area=1000, system=system)

Compared to the implementation in MATLAB:

system = mr.opts('MaxGrad', 80, 'setAsDefault', true);
  • @btasdelen regarding your TODO about update_data in read: any modification of any event library needs to clear the cache for all blocks that use the changed event. I think users should either not access and change these libraries themselves, or it should be documented that the block cache needs to be cleared when doing so, before using functions that iterate over all blocks afterwards (e.g. calculate_kspace); see the sketch after this list. For the event libraries to do this themselves any time an update happens (insert can in theory also overwrite data), they would need access to the Sequence object, and that might incur a performance hit.
  • In addition to the previous point, I think it is a bit wasteful that read accesses all blocks using get_block just to update the first/last values of arbitrary gradients.
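
A hedged sketch of that caveat (the file name is hypothetical, and block_cache is assumed to be a plain dict on the Sequence object): after modifying an event library directly, clear the block cache before calling anything that iterates over all blocks.

import pypulseq as pp

seq = pp.Sequence()
seq.read('my_sequence.seq')  # hypothetical file name

# ... suppose an entry in one of the event libraries is changed in place here ...

seq.block_cache.clear()  # invalidate cached blocks so get_block rebuilds them
seq.calculate_kspace()   # functions that iterate over all blocks now see the change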

- Also fixes a few functions that could modify the `system` object
- Removed unused imports from `sequence.py`
- Updated type signature of `Sequence.adc_times`
- Fixed bug in `Sequence.adc_times` that would give an error if no ADC events are in the sequence
- `Sequence.read` now clears the block cache
…get_gradients`

- Added `time_range` option to `get_gradients`
- Added `time_range` option to `calculate_pns`
- Removed `keys` field from `event_lib` (always ended up as a keys[k] = k mapping)
- Ignored `old_data` in `event_lib.update` (`old_data` can just be looked up in the `data` field)
- Moved `block_cache` resets so it always remains valid
- Added `block_cache` reset for `rf_library` updates.
- Fixed small issue where `grad.last` would not be restored correctly
- Updated `grad_library.update_data` call because `old_data` is no longer needed
@btasdelen (Collaborator) commented:

@FrankZijlstra I have some questions:

  • Is it safe to remove the version check in read? I guess yes; first and last are not part of the .seq file, so it actually has nothing to do with the version. Just wanted to double-check.
  • Would it make sense to add first and last when we first add the blocks while reading the file? That would avoid get_block.
  • Are you sure we need to update the cache every time first and last are added? I thought about that and first put it there, but then realized it is only needed when the same shape occurs twice in the same block. I thought we directly overwrite the cache anyway; maybe I missed something.
  • Minor comment: there is no harm in adding last if first is found, but I suppose that if first is there, last must also be there, as they are updated together.

@FrankZijlstra (Collaborator, Author) commented Dec 13, 2023

* Is it safe to remove the version check in `read`? I guess yes; `first` and `last` are not part of the .seq file, so it actually has nothing to do with the version. Just wanted to double-check.

The check is only there to calculate old_data, but that's already contained in the gradient library anyway. So with the fix in event_lib that ignores old_data, this check is unnecessary.

* Would it make sense to add `first` and `last` when we first add the blocks while reading the file? That would avoid `get_block`.

The problem is that the calculation can depend on the previous block, and it needs the gradient definitions and waveforms, which aren't necessarily loaded yet when you get to the [BLOCKS] section. I think the alternative is to look only for arbitrary waveforms/extended trapezoids in the gradients after everything is read, then look up which blocks they are used in, and then do the first/last calculations. But it's not a major issue; the current implementation just makes reading a bit slower.

The bigger issue I found is that setting first/last during reading is not always consistent with creating a sequence. For example, make_arbitrary_grad sets first and last irrespective of preceding blocks (as it obviously cannot know them at that point), and even when that gradient is added to the sequence, first/last remains unchanged; during reading, however, a different implementation (the odd_step1 part) sets first/last. I'm not sure this can actually be fixed in general, as the information on first/last is lost. For example, one could manually set first/last of a gradient while creating the sequence, write it to a file, and then it is impossible to restore the first/last values when reading it back in.

The first/last values of these gradients are only relevant for functions like calculate_kspace and plotting; running the sequence is not affected by them.

* Are you sure we need to update the cache every time first and last are added? I thought about that and first put it there, but then realized it is only needed when the same shape occurs twice in the same block. I thought we directly overwrite the cache anyway; maybe I missed something.

You might be right on this one; I hadn't realized the cached blocks could be edited in place (we should check that this is not happening anywhere else in the codebase). I'll go over it another time to see whether this change was necessary.

* Minor comment: there is no harm in adding `last` if `first` is found, but I suppose that if `first` is there, `last` must also be there, as they are updated together.

I think you're referring to this?

if hasattr(grad, "first"):
    grad_prev_last[j] = grad.last

The reason this is necessary is that grad_prev_last is used to set grad.first, but grad_prev_last was previously only updated when the library was updated. It is possible for a later block to use the same gradient (which now already has first/last set, and thus previously would not update grad_prev_last), and for the block after that to have a gradient without first/last set, whose first is then taken from grad_prev_last; see the sketch below. I have a little test case for this, which I will compile into some unit tests when I have the time...
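
A standalone sketch of that bookkeeping (made-up data, not the actual Sequence.read code): grad_prev_last must be refreshed for every block, even when a gradient's first/last are already known and no library update happens, so that a later gradient without first/last still picks up the correct value.

# Blocks 1 and 2 reuse the same gradient (first/last already known); block 3's
# gradient needs its 'first' taken from the previous block's 'last'.
blocks = {
    1: {"last": 1.0, "known": True},
    2: {"last": 1.0, "known": True},    # same gradient reused: no library update here
    3: {"first": None, "known": False},
}

grad_prev_last = None
for block_id in blocks:
    grad = blocks[block_id]
    if grad["known"]:
        # Refresh grad_prev_last even though nothing needs to be written back.
        grad_prev_last = grad["last"]
    else:
        # Without the refresh above, this would read a stale or missing value.
        grad["first"] = grad_prev_last

print(blocks[3]["first"])  # 1.0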

@btasdelen (Collaborator) commented:

Thanks for the detailed explanations! There is a conflict due to me merging your previous PR, so we need to resolve that, too.

I also checked the Opts defaults; I think the implementation is nice. I will try it with a couple of my own sequences and test whether it works as expected as well.

@FrankZijlstra (Collaborator, Author) commented Dec 14, 2023

I updated the block cache code; it now avoids the new get_block call altogether and just updates the cache in place.

Added a few more small things:

  • Support for pathlib paths in write_seq (useful for unit tests later, since pytest provides temporary paths as pathlib Path objects; see the sketch after this list)
  • Made sure the rf.use field is always set in get_block, to avoid an inconsistency between creating a sequence and reading it back in from a file
  • Added a reset_default function to Opts (again useful for unit tests, which might modify the Opts defaults)
  • And resolved the merge conflict :)
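
A minimal pytest sketch combining these two additions (the exact reset_default call signature and the behaviour of write with a Path are assumptions based on the description above; treat this as illustrative, not as the final API):

import pypulseq as pp


def test_write_with_pathlib(tmp_path):
    pp.Opts(max_grad=80).set_as_default()
    try:
        seq = pp.Sequence()
        seq.add_block(pp.make_trapezoid('x', area=1000))
        seq.write(tmp_path / 'test.seq')  # pytest's tmp_path is a pathlib.Path
        assert (tmp_path / 'test.seq').exists()
    finally:
        pp.Opts.reset_default()  # restore defaults so other tests are unaffected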

@btasdelen (Collaborator) commented:

@FrankZijlstra GitHub says there are still conflicts, so I could not merge.

@FrankZijlstra (Collaborator, Author) commented:

That's weird, for me it says "This branch has no conflicts with the base branch".

@btasdelen (Collaborator) commented Dec 22, 2023

Well, I guess you can go ahead and merge, because I can't.

Edit: Oh, I can't rebase, but I can merge-commit.

@btasdelen merged commit db19dda into imr-framework:dev on Jan 2, 2024 (4 checks passed)