[REF] Rename T1c to "minimum image regression" #609
Conversation
Codecov Report
@@ Coverage Diff @@
## master #609 +/- ##
=======================================
Coverage 93.53% 93.53%
=======================================
Files 26 26
Lines 1965 1965
=======================================
Hits 1838 1838
Misses 127 127
Continue to review full report at Codecov.
tedana/gscontrol.py (Outdated)
acc = comptable[comptable.classification == 'accepted'].index.values
ign = comptable[comptable.classification == 'ignored'].index.values
acc = comptable[comptable.classification == "accepted"].index.values
ign = comptable[comptable.classification == "ignored"].index.values
not_ign = sorted(np.setdiff1d(all_comps, ign))

optcom_masked = optcom_ts[mask, :]
optcom_mu = optcom_masked.mean(axis=-1)[:, np.newaxis]
If we're naming everything based on stat names, shouldn't we make this `optcom_mean` instead of `optcom_mu`? I figure since we have other style changes in this PR, this would be a good time to change it.
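For reference, here is a minimal, self-contained sketch of the computation under discussion, using the suggested `optcom_mean` name. The array shapes and the demeaning step are illustrative only, not taken from the PR:

```python
import numpy as np

# Hypothetical shapes: 100 voxels x 50 time points, with a boolean brain mask.
optcom_ts = np.random.rand(100, 50)
mask = np.zeros(100, dtype=bool)
mask[:80] = True

# Restrict to in-mask voxels, then take each voxel's temporal mean,
# keeping a trailing axis so it broadcasts against the time series.
optcom_masked = optcom_ts[mask, :]
optcom_mean = optcom_masked.mean(axis=-1)[:, np.newaxis]  # shape: (n_masked_voxels, 1)

# Illustrative use: demean the in-mask time series.
optcom_demeaned = optcom_masked - optcom_mean
```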
Only one slight change requested, otherwise looks good!
bold_ts = np.dot(cbetas[:, acc], mmix[:, acc].T)
t1_map = bold_ts.min(axis=-1)
# Build BOLD time series without amplitudes, and save T1-like effect
mehk_ts = np.dot(comp_pes[:, acc], mmix[:, acc].T)
It's a little counter-intuitive to me to change this from "bold_ts" to "mehk_ts" (mid-elbow, high-kappa?) but leaving the comment the same. I'd recommend either changing the comment or reverting the variable name change.
You're totally right. I will improve the comment. I switched to MEHK (multi-echo high kappa) because "BOLD" is so unclear when we're dealing with multi-echo data.
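To make the naming concrete, here is a small sketch of what `mehk_ts` represents; the shapes and accepted-component indices are made up for illustration:

```python
import numpy as np

# Hypothetical shapes: 100 masked voxels, 50 time points, 10 ICA components.
comp_pes = np.random.rand(100, 10)  # per-component parameter estimates (betas)
mmix = np.random.rand(50, 10)       # ICA mixing matrix (time x components)
acc = np.array([0, 2, 5, 7])        # indices of accepted (high-kappa) components

# Rebuild the multi-echo high-kappa (MEHK) time series from only the
# accepted components: voxel-wise betas times component time courses.
mehk_ts = np.dot(comp_pes[:, acc], mmix[:, acc].T)  # shape: (n_voxels, n_timepoints)
```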
Thanks, I think that helps!! On the same point: can we write out MEHK here?
Thanks, @tsalo! I think this looks great. I had a few small clarifications/questions.
from scipy import stats
from scipy.special import lpmv

from tedana import io, utils

LGR = logging.getLogger(__name__)
RepLGR = logging.getLogger('REPORT')
RefLGR = logging.getLogger('REFERENCES')
RepLGR = logging.getLogger("REPORT")
Was this just for style?
Yeah, I ran `black` and `isort` on the file and then only kept changes from the top of the file and the `minimum_image_regression` function.
Scratch that: I don't think I ran `isort`. But I did run `black`.
t1_map = bold_ts.min(axis=-1)
# Build time series of just BOLD-like components (i.e., MEHK) and save T1-like effect
mehk_ts = np.dot(comp_pes[:, acc], mmix[:, acc].T)
t1_map = mehk_ts.min(axis=-1)  # map of T1-like effect
I know that this is a bigger question, but could we maybe mention in the docstring that this is how we derive the T1-like effect? Or should we change this comment/variable to reference the "minimum image"?
I think adding to the docstring would be better, because it'll also show up in the API docs.
I've improved the docstring.
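For readers following along, here is a minimal sketch of the "minimum image" step being documented; the data are synthetic and the subsequent global signal regression is omitted:

```python
import numpy as np

# Synthetic stand-in for the MEHK time series built above,
# shape (n_voxels, n_timepoints).
mehk_ts = np.random.rand(100, 50)

# The "minimum image" that gives the function its new name: the voxel-wise
# minimum over time of the high-kappa time series, taken as a map of the
# T1-like effect (the global signal that is subsequently regressed out).
t1_map = mehk_ts.min(axis=-1)  # shape: (n_voxels,)
```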
io.filewrite(utils.unmask(cbetas_norm[:, 2:], mask),
             op.join(out_dir, 'betas_hik_OC_T1c'), ref_img)
np.savetxt(op.join(out_dir, 'meica_mix_T1c.1D'), mmixnogs)
# Find the global signal based on the T1-like effect
I'm not sure why the diff is splitting like this, but just to confirm: none of the functionality is changed here, no?
(From my look-through I'd think not, but wanted to double-check!)
Yeah, that's weird... You're correct, though, that the functionality shouldn't change.
It's splitting like that because of the change from `"""` to `#`. The changed lines are all so close together that the diff heuristic counts them as one big multi-line text blob.
This looks good, thanks @tsalo!
I think it's a good addition, thanks!
This looks great!! I had one small suggestion, but it's not worth holding up this PR over. Happy to see this in 👍
Co-authored-by: Elizabeth DuPre <emd222@cornell.edu>
Closes #605.
Changes proposed in this pull request: